] | null | null | null | #---------------------------------------------------------------------------------
# Montage Viewer Python Application
#---------------------------------------------------------------------------------
from threading import Thread
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.template
import sys
import socket
import webbrowser
import subprocess
import os
import errno
import tempfile
import shutil
import random
import hashlib
import shlex
import math
import json
from pkg_resources import resource_filename
# This file contains the main mViewer object and a few support objects:
#
# mvStruct -- Parser for Montage executable module return structures
# mvViewOverlay -- Data structure for image overlays info
# (grids, catalogs, etc.)
# mvViewFile -- Data structure for FITS file display info
# mvView -- Data structure for an mViewer view (includes instance
# of the previous two objects)
# mvMainHandler -- Event handler for Tornado web service toolkit
# (main index.html request)
# mvWSHandler -- Event handler for Tornado web service toolkit
#                        (support file requests; e.g., JS files)
# mvThread -- Extra thread for browser communications
# mViewer -- Main object
# Python does not support forward referencing, that is, the ability to
# refer to a class in source code before it has actually been defined.
# So the simplest way to accommodate this is to define classes in
# "reverse order", as above.
# mViewer is the main object and contains all the code that specifies
# how the picture we see of the sky is to be generated (and how interactive
# events like zooming are to be handled). It is the main thread the user
# talks to interactively.
# The actual specification of the "view" (how the picture is to look)
# is held in the mvView object, which contains all sorts of data values
# (like the grayscale color table name) but no processing methods beyond
# the basic "print me out" __str__ and __repr__ code. mvView in turn
# uses a couple of specialized containers: mvViewFile (for specifying
# an image file and its attributes) and mvViewOverlay (for specifying
# various layers drawn on top of the image).
# At startup, mViewer spawns off a second thread (mvThread), whose sole
# purpose is to handle browser (our display surface) interactions. It
# picks a port at random and fires up a tornado.Application object (web
# server listening on that port) and points it at mvMainHandler and
# mvWSHandler, two objects which deal with requests from the browser
# (plus a third tornado built-in for requesting simple file downloads).
# It then fires up a browser instance (which browser is configurable)
# and points it at "localhost" and the above port.
# mvMainHandler is the mechanism by which the basic "index.html" file is
# delivered to the browser.  This is the only "page" the browser is ever
# going to display and contains all the Javascript needed to be the
# other half of our user interface.
# The rest of the user interface interaction is handled by mvWSHandler.
# It and the Javascript in the browser send messages and data back and
# forth whenever anything needs to be done. The events driving this
# interaction can be initiated in the browser (user clicks the "zoom in"
# button) or by the user at the mViewer "prompt" (basic python
# user interaction).
# All requests for changes to the view end up in the mViewer code, which
# uses various Montage modules on the back end (mExamine, mSubimage,
# mShrink, mViewer) to do the data management. Currently, this is
# handled through the subprocess.Popen() infrastructure, though the Montage
# Project is currently working on making the Montage modules directly
# callable from Python.  Montage modules return a structured text response,
# and the final piece of code here, mvStruct, is used to parse this and turn
# it into a Python data object.
# into a Python data object.
#---------------------------------------------------------------------------------
# CLASS RELATIONSHIPS
#
#
# +-> mvView +-> mvViewFiles
# | | (Data structures)
# | +-> mvViewOverlays
# |
# |
# |
# +-> mvThread +-> webbrowser.open() (Browser interface) --+
# | | |
# | | V
# | |
# | | ------------
# | | [ Browser: ]
# mViewer + +-> tornado.Application +-> mvMainHandler <=> [ ]
# | | [ mViewer ]
# | +-> mvWSHandler <=> [ Javascript ]
# | ------------
# |
# |
# |
# | (Montage Applications)
# | ------------------
# +-> subprocess.Popen() <=> [ ]
# | [ mExamine ]
# | [ mSubimage ]
# | [ mShrink ]
# | [ mViewer ]
# +-> mvStruct [ ]
# ------------------
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# COMMUNICATIONS PATHWAY
#
# +---------- (external libraries) -----------+
# | |
# | |
# Browser Browser Python Web Server Python
# GUI code (built-in) (localhost:<port>/ws) server code
#
# JS WebClient JS WebSocket mViewer.mvWSHandler mViewer functions
# ------------ ------------ ------------------- -----------------
#
# Javascript send() -> send() ==> on_message() -> from_browser()
# mViewer <-> update_display()
# GUI receive() <- onmessage() <== write_message() <- to_browser()
#
#
# The Python on_message() and write_message() methods are part of the
# tornado.Application framework but the actual code is written by us.
# The associated Javascript send() and onmessage() functions are part
# of the browser built-in WebSocket functionality. Together they let
# us connect messages generated in our Javascript code with our
# back-end Python processing.
#
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVSTRUCT This simple class is used to parse return structures
# from the Montage C services.
class mvStruct(object):

    # Text structure parser
    def __init__(self, command, string):
        string = string[8:-1]
        strings = {}
        while True:
            try:
                p1 = string.index('"')
                p2 = string.index('"', p1 + 1)
                substring = string[p1 + 1:p2]
                key = hashlib.md5(substring.encode('ascii')).hexdigest()
                strings[key] = substring
                string = string[:p1] + key + string[p2 + 1:]
            except ValueError:
                break
        for pair in string.split(', '):
            key, value = pair.split('=')
            value = value.strip(' ')
            if value in strings:
                self.__dict__[key] = strings[value]
            else:
                self.__dict__[key] = self.simplify(value)

    def simplify(self, value):
        try:
            return int(value)
        except ValueError:
            return float(value)

    # Standard object "string"
    def __str__(self):
        string = ""
        for item in self.__dict__:
            substr = "%20s" % (item)
            string += substr + " : " + str(self.__dict__[item]) + "\n"
        return string[:-1]

    # Standard object "representation" string
    def __repr__(self):
        thisObj = dir(self)
        string = "{"
        count = 0
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if item == "simplify":
                continue
            if val == "":
                continue
            if count > 0:
                string += ", "
            if isinstance(val, str):
                try:
                    float(val)
                    string += '"' + item + '": ' + str(val)
                except ValueError:
                    string += '"' + item + '": "' + str(val) + '"'
            else:
                string += '"' + item + '": ' + repr(val)
            count += 1
        string += "}"
        return string
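# The Montage services parsed by mvStruct return a single line of the form
# [struct stat="OK", key=value, ...]: the constructor strips the "[struct "
# prefix and trailing "]", protects quoted substrings with digest
# placeholders, then splits on ", ".  A standalone Python 3 sketch of the
# same parse (the parse_struct helper is hypothetical, not part of this
# module):

```python
import hashlib


def parse_struct(text):
    # Strip the leading '[struct ' (8 characters) and trailing ']'.
    body = text[8:-1]
    strings = {}
    # Replace quoted substrings with digest placeholders so commas and
    # '=' characters inside strings do not confuse the split below.
    while True:
        try:
            p1 = body.index('"')
            p2 = body.index('"', p1 + 1)
            sub = body[p1 + 1:p2]
            key = hashlib.md5(sub.encode('ascii')).hexdigest()
            strings[key] = sub
            body = body[:p1] + key + body[p2 + 1:]
        except ValueError:
            break
    result = {}
    for pair in body.split(', '):
        key, value = pair.split('=')
        value = value.strip()
        if value in strings:
            result[key] = strings[value]      # restore the quoted string
        else:
            try:
                result[key] = int(value)
            except ValueError:
                result[key] = float(value)
    return result


print(parse_struct('[struct stat="OK", naxis1=512, crval1=10.5]'))
```

# The mvStruct class above does the same thing but stores the pairs as
# instance attributes instead of a dictionary.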
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVVIEWOVERLAY Data holder class for overlay information.
# The three classes here are pure data holders that provide
# structure for an mViewer display.  The main one (mvView)
# describes the whole display and contains an arbitrary number
# of mvViewOverlay objects for the overlays on the main image.
# It also contains four mvViewFile objects for the gray, red,
# green, and blue file info.
# We also need information on image/canvas sizes and cutout
# offsets in order to map zoom regions and picks to original
# image coordinates and to shrink/expand the displayed region
# to fit the window.
class mvViewOverlay:
    type = ""
    visible = True
    coord_sys = ""
    color = ""
    data_file = ""
    data_col = ""
    data_ref = ""
    data_type = ""
    sym_size = ""
    sym_type = ""
    sym_sides = ""
    sym_rotation = ""
    lon = ""
    lat = ""
    text = ""

    # Standard object "string" representation
    def __str__(self):
        thisObj = dir(self)
        string = ""
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if val == "":
                continue
            substr = "%40s" % (item)
            if isinstance(val, str):
                string += substr + ": '" + str(val) + "'\n"
            else:
                string += substr + ": " + str(val) + "\n"
        return string

    # Standard object "representation" string
    def __repr__(self):
        thisObj = dir(self)
        string = "{"
        count = 0
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if val == "":
                continue
            objType = val.__class__.__name__
            if count > 0:
                string += ", "
            if objType == "bool":
                if val == True:
                    string += '"' + item + '": true'
                else:
                    string += '"' + item + '": false'
            else:
                if isinstance(val, str):
                    try:
                        float(val)
                        string += '"' + item + '": ' + str(val)
                    except ValueError:
                        string += '"' + item + '": "' + str(val) + '"'
                else:
                    string += '"' + item + '": ' + repr(val)
            count += 1
        string += "}"
        return string
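# Note that __repr__ above is written to emit valid JSON (lowercase
# true/false for booleans, quotes only around non-numeric strings), so an
# overlay can be handed straight to the browser-side Javascript.  A
# standalone Python 3 sketch of the same attribute-filtering idea, using a
# toy Overlay class (assumed names, not this module's):

```python
import json


class Overlay(object):
    # Mirror of the data-holder pattern above: plain class attributes.
    type = ""
    visible = True
    color = ""


def to_json(obj):
    # Collect the non-dunder, non-empty attributes -- the same filtering
    # the __repr__ methods above apply -- and serialize them as JSON.
    items = {}
    for name in dir(obj):
        if name.startswith("__"):
            continue
        val = getattr(obj, name)
        if val == "":
            continue
        items[name] = val
    return json.dumps(items)


ov = Overlay()
ov.type = "grid"
ov.color = "white"
print(to_json(ov))
```

# json.dumps handles the quoting and boolean lowercasing that the hand-rolled
# __repr__ methods above implement manually (they predate routine use of the
# json module for this purpose).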
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVVIEWFILE Data holder class for image file information.
class mvViewFile:
    fits_file = ""
    color_table = ""
    stretch_min = ""
    stretch_max = ""
    stretch_mode = ""
    min = ""
    max = ""
    data_min = ""
    data_max = ""
    min_sigma = ""
    max_sigma = ""
    min_percent = ""
    max_percent = ""
    # bunit = ""

    # Standard object "string" representation
    def __str__(self):
        thisObj = dir(self)
        string = ""
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if val == "":
                continue
            substr = "%40s" % (item)
            string += substr + ": '" + str(val) + "'\n"
        return string

    # Standard object "representation" string
    def __repr__(self):
        thisObj = dir(self)
        string = "{"
        count = 0
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if val == "":
                continue
            if count > 0:
                string += ", "
            if isinstance(val, str):
                try:
                    float(val)
                    string += '"' + item + '": ' + str(val)
                except ValueError:
                    string += '"' + item + '": "' + str(val) + '"'
            else:
                string += '"' + item + '": ' + repr(val)
            count += 1
        string += "}"
        return string
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVVIEW Data holder class for the whole view.
#
# These are the quantities that we use to populate the update structure every
# time the image on the screen changes in any way.
class mvView:

    # Default values (to be overridden by actual image information)
    image_file = "viewer.png"
    image_type = "png"
    disp_width = "1000"
    disp_height = "1000"
    image_width = "1000"
    image_height = "1000"
    display_mode = ""
    cutout_x_offset = ""
    cutout_y_offset = ""
    canvas_height = 1000
    canvas_width = 1000
    xmin = ""
    ymin = ""
    xmax = ""
    ymax = ""
    factor = ""
    currentPickX = 0
    currentPickY = 0
    current_color = "black"
    current_symbol_type = "circle"
    current_symbol_size = 1.0
    current_symbol_sides = ""
    current_symbol_rotation = ""
    current_coord_sys = "Equ J2000"
    gray_file = mvViewFile()
    red_file = mvViewFile()
    green_file = mvViewFile()
    blue_file = mvViewFile()
    bunit = "DN"
    overlay = []

    # Standard object "string" representation
    def __str__(self):
        thisObj = dir(self)
        string = "\n"
        count = 0
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            if item == 'json_update':
                continue
            if item == 'update_view':
                continue
            if val == "":
                continue
            substr = "%25s" % (item)
            objType = val.__class__.__name__
            if objType == 'str':
                string += substr + ": '" + str(val) + "'\n"
            elif objType == 'list':
                string += substr + ":\n"
                count = 0
                for ovly in val:
                    label = "%38s %d:\n" % ("Overlay", count)
                    string += label
                    string += ovly.__str__()
                    count += 1
            elif objType == 'mvViewFile':
                string += substr + ":\n"
                string += val.__str__()
            else:
                string += substr + ": " + str(val) + "\n"
            count += 1
        string += "\n"
        return string

    # Standard object "representation" string
    def __repr__(self):
        thisObj = dir(self)
        string = "{"
        count = 0
        for item in thisObj:
            val = getattr(self, item)
            if item[0:2] == "__":
                continue
            objType = val.__class__.__name__
            if objType == "instancemethod":
                continue
            if objType == "builtin_function_or_method":
                continue
            if val == "":
                continue
            if count > 0:
                string += ", "
            if isinstance(val, str):
                try:
                    float(val)
                    string += '"' + item + '": ' + str(val)
                except ValueError:
                    string += '"' + item + '": "' + str(val) + '"'
            else:
                string += '"' + item + '": ' + repr(val)
            count += 1
        string += "}"
        return string
    # Updates coming from Javascript will be in the form of a JSON string.
    # This method loads the view with the contents of such a string.
    def json_update(self, json_str):
        json_dict = json.loads(json_str)
        self.update_view(json_dict, 0, None)

    # json_update() above turns the JSON string into a Python dictionary.
    # This routine recursively transfers the values in that dictionary
    # to this mvView object.  We don't yet deal with the possibility
    # of the JSON having a different structure from the current object;
    # that would complicate things considerably.
    #
    # If needed in future iterations of the application, we will
    # deal with updates (e.g. adding an overlay or switching between
    # grayscale and color) via separate commands.
    def update_view(self, parms, index, parents):
        if parents is None:
            parents = []
        for key, val in parms.iteritems():
            newlist = list(parents)
            if len(newlist) > 0:
                newlist.append(index)
            newlist.append(str(key))
            objType = val.__class__.__name__
            if objType == "float" or objType == "int" or objType == "str" or objType == "unicode" or objType == "bool":
                newlist.append(str(key))
                depth = len(newlist)
                depth = depth / 2
                element = self
                for i in range(0, depth - 1):
                    name = newlist[2 * i]
                    ind = newlist[2 * i + 1]
                    element = getattr(element, name)
                    elementType = element.__class__.__name__
                    if elementType == "list":
                        element = element[ind]
                if objType == "unicode":
                    setattr(element, str(key), str(val))
                else:
                    setattr(element, str(key), val)
            elif objType == "list":
                count = 0
                for list_item in val:
                    self.update_view(list_item, count, newlist)
                    count = count + 1
            elif objType == "dict":
                self.update_view(val, 0, newlist)
            else:
                print "ERROR> Object in our dictionary?  Should not be possible."
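# update_view() above keeps a parallel (name, index) path list so that
# values land on the right nested attribute even inside the list of
# overlays.  Stripped of that path bookkeeping, the core idea is a
# recursive attribute transfer; a simplified standalone Python 3 sketch
# (toy File/View classes, list handling omitted):

```python
class File(object):
    # Toy stand-in for mvViewFile: a single scalar attribute.
    fits_file = ""


class View(object):
    # Toy stand-in for mvView: one scalar and one nested object.
    factor = ""

    def __init__(self):
        self.gray = File()


def apply_update(obj, parms):
    # Transfer scalar values onto matching attributes; recurse into
    # nested dicts.  (The real update_view() also walks lists of
    # overlays via its parallel (name, index) path list.)
    for key, val in parms.items():
        if isinstance(val, dict):
            apply_update(getattr(obj, key), val)
        else:
            setattr(obj, key, val)


v = View()
apply_update(v, {"factor": 2, "gray": {"fits_file": "m31.fits"}})
print(v.factor, v.gray.fits_file)
```

# The same assumption holds here as in the module: the incoming JSON is
# expected to mirror the object's existing attribute structure.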
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVMAINHANDLER Tornado support class for the index.html download.
# This and the next object are used by Tornado as document type handlers
# for its web server functionality. This one is for retrieving "index.html";
# the next for processing commands sent from Javascript. Our tornado instance
# actually uses one more: a built-in method for simple file retrieval (which
# we use to get a bunch of static files like Javascript libraries and banner
# images to the browser).
class mvMainHandler(tornado.web.RequestHandler):

    # This object needs the workspace
    def initialize(self, data):
        self.data = data
        self.workspace = self.data['workspace']

    # The initialization GET returns the index.html
    # file we put in the workspace
    def get(self):
        loader = tornado.template.Loader(".")
        self.write(loader.load(self.workspace + "/index.html").generate())
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVWSHANDLER Tornado support class for browser "processing" requests.
class mvWSHandler(tornado.websocket.WebSocketHandler):

    # The methods of this tornado RequestHandler are
    # predefined; we just fill in the functionality we
    # need.  We make use of the initialization method
    # to give the mViewer object a handle to this
    # webserver (which it needs for sending messages
    # to the browser).
    def initialize(self, data):
        self.debug = False
        self.data = data
        self.viewer = self.data['viewer']
        self.view = self.viewer.view
        self.viewer.webserver = self

    # This is another "pre-defined" tornado method,
    # called when the browser first opens the websocket
    # connection.
    def open(self):
        if self.debug:
            print "mvWSHandler.open()"
        self.write_message("mViewer python server connection accepted.")
        self.write_message("")

    # This is where we process "commands" coming
    # from the browser.  These are things like
    # resize, zoom and pick events.  All we do
    # in the webserver code is to pass them along
    # to mViewer for processing.
    def on_message(self, message):
        if self.debug:
            print "mvWSHandler.on_message('" + message + "')"
        self.viewer.from_browser(message)

    # If the browser sends a message that it is shutting
    # down, we take the opportunity to delete the workspace.
    def on_close(self):
        if self.debug:
            print "mvWSHandler.on_close()"
        print '\nWeb connection closed by browser.  Deleting workspace.'
        self.viewer.close()
        print 'Exiting.\n'
        os._exit(0)
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVTHREAD Second thread, for running Tornado web server.
# Since we want to have the web window containing mViewer open
# while still allowing interactive Python processing, we need
# to put the Tornado connection in a separate thread.
class mvThread(Thread):

    def __init__(self, port, workspace, viewer, view):
        Thread.__init__(self)
        self.remote = None
        self.port = port
        self.workspace = workspace
        self.view = view
        self.viewer = viewer
        self.handler = mvWSHandler
        self.debug = False
        self.daemon = True    # If we make the thread a daemon,
                              # it closes when the process finishes.

    def run(self):
        if self.workspace is None:
            print "Please set a workspace location first."
            return
        data = dict(view=self.view, viewer=self.viewer, port=self.port, workspace=self.workspace)
        application = tornado.web.Application([
            (r'/ws', mvWSHandler, dict(data=data)),
            (r'/', mvMainHandler, dict(data=data)),
            (r"/(.*)", tornado.web.StaticFileHandler, {"path": self.workspace})
        ])
        # Here we would populate the workspace and configure
        # its specific index.html to use this specific port
        application.listen(self.port)
        if self.debug:
            print "DEBUG> mvThread: port = " + str(self.port)
        if self.viewer.serverMode:
            host = socket.getfqdn()
            print "Cut/paste the following into your browser: http://" + host + ":" + str(self.port)
        else:
            platform = sys.platform.lower()
            browser = "firefox"
            if platform == 'darwin':
                browser = "safari"
            elif platform.find("win") == 0:
                browser = "C:/Program\ Files\ (x86)/Mozilla\ Firefox/firefox.exe %s"
            if self.debug:
                print "DEBUG> mvThread: browser = [" + str(browser) + "]"
            webbrowser.get(browser).open("localhost:" + str(self.port))
        tornado.ioloop.IOLoop.instance().start()
#---------------------------------------------------------------------------------
#---------------------------------------------------------------------------------
# MVIEWER       Main entry point.  We name the three "main" parts of the
# system (this Python entry point, the Javascript main entry point,
# and the back-end C image generation utility) all "mViewer" to
# identify them as a set.
class mViewer():

    # Initialization (mostly setting up the workspace)
    def __init__(self, *arg):
        self.debug = False
        self.serverMode = False
        nargs = len(arg)
        if nargs == 0:
            workspace = tempfile.mkdtemp(prefix="mvWork_", dir=".")
        else:
            workspace = arg[0]
            workspace = workspace.replace("\\", "/")
            try:
                os.makedirs(workspace)
            except OSError as exception:
                if exception.errno != errno.EEXIST:
                    raise
        self.workspace = workspace
        self.view = mvView()
        self.pick_callback = self.pick_location
    # Use the webserver connection to write commands
    # to the Javascript in the browser.
    def to_browser(self, msg):
        if self.debug:
            print "DEBUG> mViewer.to_browser('" + msg + "')"
        self.webserver.write_message(msg)

    # Shutdown (remove workspace and delete temporary files - subimages, etc.)
    def close(self):
        try:
            files = os.listdir(self.workspace)
            for file in files:
                if self.debug:
                    print "Deleting " + self.workspace + "/" + file
                os.remove(self.workspace + "/" + file)
        except:
            print "Workspace cleanup failed."
        try:
            os.rmdir(self.workspace)
        except:
            print "Workspace directory ('" + self.workspace + "') delete failed"
        try:
            self.command("close")
            print "Browser connection closed."
        except:
            print "No active browser connection."
        # Failure or incomplete execution of a close may result in
        # extraneous temporary files in the work directory, which
        # must be removed manually.
    # Utility function: set the display mode (grayscale / color).
    # Any mode string starting with 'g' or 'b' ("grayscale", "b&w")
    # selects grayscale; 'r', 'c' or 'f' ("rgb", "color", "full color")
    # selects color.
    def set_display_mode(self, mode):
        mode = str(mode)
        if len(mode) == 0:
            mode = "grayscale"
        first = mode[0].lower()
        if first in ('g', 'b'):
            self.view.display_mode = "grayscale"
        elif first in ('r', 'c', 'f'):
            self.view.display_mode = "color"
    # Utility function: set the gray_file
    def set_gray_file(self, gray_file):
        self.view.gray_file.fits_file = gray_file
        if self.view.display_mode == "":
            self.view.display_mode = "grayscale"

    # Utility function: set the blue_file
    def set_blue_file(self, blue_file):
        self.view.blue_file.fits_file = blue_file
        if self.view.display_mode == "":
            if self.view.red_file.fits_file != "" and self.view.green_file.fits_file != "":
                self.view.display_mode = "color"

    # Utility function: set the green_file
    def set_green_file(self, green_file):
        self.view.green_file.fits_file = green_file
        if self.view.display_mode == "":
            if self.view.red_file.fits_file != "" and self.view.blue_file.fits_file != "":
                self.view.display_mode = "color"

    # Utility function: set the red_file
    def set_red_file(self, red_file):
        self.view.red_file.fits_file = red_file
        if self.view.display_mode == "":
            if self.view.green_file.fits_file != "" and self.view.blue_file.fits_file != "":
                self.view.display_mode = "color"

    # Utility function: set the current_color
    def set_current_color(self, current_color):
        self.view.current_color = current_color

    # Utility function: set the current symbol
    def set_current_symbol(self, *arg):
        nargs = len(arg)
        symbol_sides = ""
        symbol_rotation = ""
        symbol_size = arg[0]
        symbol_type = arg[1]
        if nargs > 2:
            symbol_sides = arg[2]
        if nargs > 3:
            symbol_rotation = arg[3]
        self.view.current_symbol_size = symbol_size
        self.view.current_symbol_type = symbol_type
        self.view.current_symbol_sides = symbol_sides
        self.view.current_symbol_rotation = symbol_rotation

    # Utility function: set the coord_sys
    def set_current_coord_sys(self, coord_sys):
        self.view.current_coord_sys = coord_sys

    # Utility function: set the color table (grayscale file)
    def set_color_table(self, color_table):
        self.view.gray_file.color_table = color_table

    # Utility function: set the grayscale color stretch
    def set_gray_stretch(self, stretch_min, stretch_max, stretch_mode):
        self.view.gray_file.stretch_min = stretch_min
        self.view.gray_file.stretch_max = stretch_max
        self.view.gray_file.stretch_mode = stretch_mode

    # Utility function: set the blue color stretch
    def set_blue_stretch(self, stretch_min, stretch_max, stretch_mode):
        self.view.blue_file.stretch_min = stretch_min
        self.view.blue_file.stretch_max = stretch_max
        self.view.blue_file.stretch_mode = stretch_mode

    # Utility function: set the green color stretch
    def set_green_stretch(self, stretch_min, stretch_max, stretch_mode):
        self.view.green_file.stretch_min = stretch_min
        self.view.green_file.stretch_max = stretch_max
        self.view.green_file.stretch_mode = stretch_mode

    # Utility function: set the red color stretch
    def set_red_stretch(self, stretch_min, stretch_max, stretch_mode):
        self.view.red_file.stretch_min = stretch_min
        self.view.red_file.stretch_max = stretch_max
        self.view.red_file.stretch_mode = stretch_mode
    # Utility function: add a grid overlay
    def add_grid(self, coord_sys):
        ovly = mvViewOverlay()
        ovly.type = "grid"
        ovly.visible = True
        ovly.color = self.view.current_color
        ovly.coord_sys = coord_sys
        self.view.overlay.append(ovly)
        return ovly

    # Utility function: add a catalog overlay
    def add_catalog(self, data_file, data_col, data_ref, data_type):
        ovly = mvViewOverlay()
        ovly.type = "catalog"
        ovly.visible = True
        ovly.sym_size = self.view.current_symbol_size
        ovly.sym_type = self.view.current_symbol_type
        ovly.sym_sides = self.view.current_symbol_sides
        ovly.sym_rotation = self.view.current_symbol_rotation
        ovly.coord_sys = self.view.current_coord_sys
        ovly.data_file = data_file
        ovly.data_col = data_col
        ovly.data_ref = data_ref
        ovly.data_type = data_type
        ovly.color = self.view.current_color
        self.view.overlay.append(ovly)
        return ovly

    # Utility function: add an imginfo overlay
    def add_img_info(self, data_file):
        ovly = mvViewOverlay()
        ovly.type = "imginfo"
        ovly.visible = True
        ovly.data_file = data_file
        ovly.color = self.view.current_color
        ovly.coord_sys = self.view.current_coord_sys
        self.view.overlay.append(ovly)
        return ovly

    # Utility function: add a marker overlay
    def add_marker(self, lon, lat):
        ovly = mvViewOverlay()
        ovly.type = "mark"
        ovly.visible = True
        ovly.sym_size = self.view.current_symbol_size
        ovly.sym_type = self.view.current_symbol_type
        ovly.sym_sides = self.view.current_symbol_sides
        ovly.sym_rotation = self.view.current_symbol_rotation
        ovly.lon = lon
        ovly.lat = lat
        ovly.coord_sys = self.view.current_coord_sys
        ovly.color = self.view.current_color
        self.view.overlay.append(ovly)
        return ovly

    # Utility function: add a label overlay
    def add_label(self, lon, lat, text):
        ovly = mvViewOverlay()
        ovly.type = "label"
        ovly.visible = True
        ovly.lon = lon
        ovly.lat = lat
        ovly.text = text
        ovly.coord_sys = self.view.current_coord_sys
        ovly.color = self.view.current_color
        self.view.overlay.append(ovly)
        return ovly
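    # All of the add_* methods above follow one pattern: snapshot the
    # current drawing state (color, symbol, coordinate system) into a
    # small overlay record and append it to the view's overlay list.
    # A standalone Python 3 sketch of that pattern using plain dicts
    # (toy names, not this module's API):

```python
# Toy stand-ins for the view's current drawing state (assumed values).
current_color = "green"
current_coord_sys = "Equ J2000"
overlays = []


def add_grid(coord_sys):
    # Snapshot the current color into a grid record and append it,
    # mirroring the add_grid() method above.
    ovly = {"type": "grid", "visible": True,
            "color": current_color, "coord_sys": coord_sys}
    overlays.append(ovly)
    return ovly


def add_label(lon, lat, text):
    # Labels additionally carry a position and text, and snapshot the
    # current coordinate system as well.
    ovly = {"type": "label", "visible": True, "lon": lon, "lat": lat,
            "text": text, "color": current_color,
            "coord_sys": current_coord_sys}
    overlays.append(ovly)
    return ovly


add_grid("Galactic")
add_label(10.68, 41.27, "M31")
print(len(overlays), overlays[0]["coord_sys"])
```

    # Returning the record (as the methods above return ovly) lets the
    # caller tweak an individual overlay after it has been added.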
    # This utility function is used by load_JSON() below
    # to convert any unicode to simple printable ASCII.
    def fix_unicode(self, input):
        if isinstance(input, unicode):
            return input.encode('ascii')
        else:
            return input

    # Load the view object with data described in a JSON file
    def load_JSON(self, json_file):

        # Open the JSON file and parse the contents
        try:
            fp = open(json_file, 'r')
            json_data = json.load(fp)
        except:
            print "JSON file open failed."
            os._exit(0)

        # Loop over the contents, loading them into the view object.
        # This is going to add to the existing view (replacing in the
        # case of image parameters), so the user should zero the view
        # out if they want a clean start.

        # First check to see if we want to modify the image parameters
if "image" in json_data:
image = json_data["image"]
if "type" in image:
image_type = image["type"]
# Grayscale image parameters
if image_type == "grayscale":
if "gray" in image:
gray = image["gray"]
if "file" in gray:
file_name = self.fix_unicode(gray["file"])
else:
print "Required grayscale filename missing."
color_table = 0
if "color_table" in gray:
color_table = self.fix_unicode(gray["color_table"])
min_value = "-2s"
if "min" in gray:
min_value = self.fix_unicode(gray["min"])
max_value = "max"
if "max" in gray:
max_value = self.fix_unicode(gray["max"])
mode = "gaussian-log"
if "mode" in gray:
mode = self.fix_unicode(gray["mode"])
self.set_gray_file(file_name)
self.set_color_table(color_table)
self.set_gray_stretch(min_value, max_value, mode)
else:
print "Grayscale image information missing."
return
# Color image parameters
elif image_type == "color":
# BLUE
if "blue" in image:
blue = image["blue"]
if "file" in blue:
file_name = self.fix_unicode(blue["file"])
else:
print "Required blue filename missing."
min_value = "-2s"
if "min" in blue:
min_value = self.fix_unicode(blue["min"])
max_value = "max"
if "max" in blue:
max_value = self.fix_unicode(blue["max"])
mode = "gaussian-log"
if "mode" in blue:
mode = self.fix_unicode(blue["mode"])
self.set_blue_file(file_name)
self.set_blue_stretch(min_value, max_value, mode)
else:
print "Blue image information missing."
return
# GREEN
if "green" in image:
green = image["green"]
if "file" in green:
file_name = self.fix_unicode(green["file"])
else:
print "Required green filename missing."
min_value = "-2s"
if "min" in green:
min_value = self.fix_unicode(green["min"])
max_value = "max"
if "max" in green:
max_value = self.fix_unicode(green["max"])
mode = "gaussian-log"
if "mode" in green:
mode = self.fix_unicode(green["mode"])
self.set_green_file(file_name)
self.set_green_stretch(min_value, max_value, mode)
else:
print "Green image information missing."
return
# RED
if "red" in image:
red = image["red"]
if "file" in red:
file_name = self.fix_unicode(red["file"])
else:
print "Required red filename missing."
min_value = "-2s"
if "min" in red:
min_value = self.fix_unicode(red["min"])
max_value = "max"
if "max" in red:
max_value = self.fix_unicode(red["max"])
mode = "gaussian-log"
if "mode" in red:
mode = self.fix_unicode(red["mode"])
self.set_red_file(file_name)
self.set_red_stretch(min_value, max_value, mode)
else:
print "Red image information missing."
return
else:
print "Invalid image type: " + image_type
return
        # Now loop over any overlay specifications, adding them
        if "overlays" in json_data:
            overlays = json_data["overlays"]
            noverlay = 0
            for overlay in overlays:
                noverlay = noverlay + 1
                if "type" in overlay:
                    overlay_type = self.fix_unicode(overlay["type"])
                else:
                    print "No type given for overlay " + str(noverlay)
                    return

                # Check for all the potential overlay parameters.
                # Some can have default values.
                file_name = ""
                column = ""
                ref_value = ""
                data_type = "flux"
                symbol_type = "circle"
                symbol_size = 1.0
                symbol_sides = ""
                symbol_rotation = ""
                color = "white"
                coord_sys = "Equatorial J2000"
                lon = ""
                lat = ""
                text = ""

                if "file" in overlay:
                    file_name = self.fix_unicode(overlay["file"])
                if "column" in overlay:
                    column = self.fix_unicode(overlay["column"])
                if "ref_value" in overlay:
                    ref_value = self.fix_unicode(overlay["ref_value"])
                if "data_type" in overlay:
                    data_type = self.fix_unicode(overlay["data_type"])
                if "symbol_size" in overlay:
                    symbol_size = self.fix_unicode(overlay["symbol_size"])
                if "symbol_type" in overlay:
                    symbol_type = self.fix_unicode(overlay["symbol_type"])
                if "symbol_sides" in overlay:
                    symbol_sides = self.fix_unicode(overlay["symbol_sides"])
                if "symbol_rotation" in overlay:
                    symbol_rotation = self.fix_unicode(overlay["symbol_rotation"])
                if "color" in overlay:
                    color = self.fix_unicode(overlay["color"])
                if "coord_sys" in overlay:
                    coord_sys = self.fix_unicode(overlay["coord_sys"])
                if "lon" in overlay:
                    lon = self.fix_unicode(overlay["lon"])
                if "lat" in overlay:
                    lat = self.fix_unicode(overlay["lat"])
                if "text" in overlay:
                    text = self.fix_unicode(overlay["text"])

                # For each overlay type, check for required parameters,
                # then call the appropriate overlay "add" function.

                # CATALOG
                if overlay_type == "catalog":
                    if file_name == "":
                        print "No catalog file name given."
                        return
                    self.set_current_coord_sys(coord_sys)
                    self.set_current_color(color)
                    if symbol_sides == "":
                        self.set_current_symbol(symbol_size, symbol_type)
                    elif symbol_rotation == "":
                        self.set_current_symbol(symbol_size, symbol_type, symbol_sides)
                    else:
                        self.set_current_symbol(symbol_size, symbol_type, symbol_sides, symbol_rotation)
                    self.add_catalog(file_name, column, ref_value, data_type)

                # IMAGE OUTLINES
                elif overlay_type == "imginfo":
                    if file_name == "":
                        print "No image metadata file name given."
                        return
                    self.set_current_coord_sys(coord_sys)
                    self.set_current_color(color)
                    self.add_img_info(file_name)

                # COORDINATE GRID
                elif overlay_type == "grid":
                    self.set_current_color(color)
                    self.add_grid(coord_sys)

                # LABEL
                elif overlay_type == "label":
                    if text == "":
                        print "No label text given."
                        return
                    if lon == "":
                        print "No longitude given."
                        return
                    if lat == "":
                        print "No latitude given."
                        return
                    self.set_current_coord_sys(coord_sys)
                    self.set_current_color(color)
                    self.add_label(lon, lat, text)

                # MARKER
                elif overlay_type == "marker":
                    if lon == "":
                        print "No longitude given."
                        return
                    if lat == "":
                        print "No latitude given."
                        return
                    self.set_current_coord_sys(coord_sys)
                    self.set_current_color(color)
                    if symbol_sides == "":
                        self.set_current_symbol(symbol_size, symbol_type)
                    elif symbol_rotation == "":
self.set_current_symbol(symbol_size, symbol_type, symbol_sides)
else:
self.set_current_symbol(symbol_size, symbol_type, symbol_sides, symbol_rotation)
self.add_marker(lon, lat)
else:
print "Invalid overlay type: " + overlay_type
return
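# The parsing above implies a JSON view specification with an "image"
# block (grayscale, or color with "blue"/"green"/"red" sub-blocks, each
# holding a required "file" plus optional "min"/"max"/"mode") and an
# "overlays" list. A minimal sketch of such a specification follows;
# the file names and coordinate values are made up for illustration.

```python
import json

# Hypothetical view specification exercising the keys parsed above.
spec = {
    "image": {
        "type": "color",
        "blue":  {"file": "band1.fits", "min": "-2s", "max": "max",
                  "mode": "gaussian-log"},
        "green": {"file": "band2.fits"},
        "red":   {"file": "band3.fits"}
    },
    "overlays": [
        {"type": "grid", "coord_sys": "Equatorial J2000", "color": "gray"},
        {"type": "marker", "lon": "265.1", "lat": "-29.5",
         "symbol_type": "circle", "symbol_size": 2.0},
        {"type": "label", "lon": "265.1", "lat": "-29.4", "text": "target"}
    ]
}

# Round-trip through JSON, as the viewer would receive it.
json_data = json.loads(json.dumps(spec))
```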
# Start a second thread to interact with the browser.
def init_browser_display(self):
self.port = random.randint(10000,60000)
template_file = resource_filename('agMontage', 'web/index.html')
index_file = self.workspace + "/index.html"
port_string = str(self.port)
host = "localhost"
if self.serverMode:
    host = socket.getfqdn()
with open(index_file,'w') as new_file:
with open(template_file) as old_file:
for line in old_file:
new_file.write(line.replace("\\SERVER\\", host).replace("\\PORT\\", port_string))
# Copy all required web files into work directory
# (includes Javascript, css, icons, etc.)
#---------------------------------------------------------------------------
web_files = [
    # JSON files (coordinate "pick" statistic information)
    'pick.json', 'pick0.json', 'pick1.json', 'pick2.json',
    # CSS files
    'stylesheet01.css', 'ColorStretch.css', 'LayerControl.css',
    'RegionStats.css', 'FITSHeaderViewer.css', 'InfoDisplay.css',
    'ZoomControl.css', 'spectrum.css',
    # Javascript files
    'WebClient.js', 'mViewer.js', 'iceGraphics.js', 'ColorStretch.js',
    'LayerControl.js', 'RegionStats.js', 'FITSHeaderViewer.js',
    'InfoDisplay.js', 'ZoomControl.js', 'spectrum.js', 'viewerUtils.js',
    # 30x30 icons
    'colors.gif', 'info.gif', 'layercontrol.gif', 'pan_up.gif',
    'pan_up_left.gif', 'pan_left.gif', 'pan_down_left.gif',
    'pan_down.gif', 'pan_down_right.gif', 'pan_right.gif',
    'pan_up_right.gif', 'center_30.gif', 'center_30.png',
    'pick_location.gif', 'zoom_in.gif', 'zoom_out.gif',
    'zoom_reset.gif', 'zoom_reset_box.gif', 'zoom_pan.gif',
    'header_icon.gif', 'eye02_30.gif',
    # Misc. icons
    'favicon.ico', 'eye02_22.gif',
    # Other image files
    'waitClock.gif', 'galplane_banner.jpg', 'banner02.gif'
]
for web_file in web_files:
    in_file = resource_filename('agMontage', 'web/' + web_file)
    out_file = self.workspace + '/' + web_file
    shutil.copy(in_file, out_file)
#---------------------------------------------------------------------------
self.thread = mvThread(self.port, self.workspace, self, self.view)
self.thread.start()
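# The \SERVER\/\PORT\ substitution applied to the index.html template
# above can be shown in isolation; the template line here is a made-up
# example, not the real index.html content.

```python
host = "localhost"
port_string = "12345"

# A hypothetical template line containing the two placeholders.
line = 'var conn = new WebClient("\\SERVER\\", \\PORT\\);'

# Same replacement chain used when writing the workspace index.html.
out = line.replace("\\SERVER\\", host).replace("\\PORT\\", port_string)
```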
# The web browser thread receives any messages the Browser sends
# These will be commands (like "zoomIn") or requests to process an updated
# view sent as JSON. That thread forwards the command here.
#
# This from_browser() method does any local processing needed to modify
# the view, then calls the update_display() method to have a new PNG
# generated and appropriate instructions sent back to the browser.
def from_browser(self, message):
if self.debug:
print "mViewer.from_browser('" + message + "')"
# Find the image size (check the display mode first, before
# dereferencing any of the image file attributes)
if self.view.display_mode == "":
    print "No images defined. Nothing to display."
    sys.stdout.write('\n>>> ')
    sys.stdout.flush()
    return
ref_file = self.view.gray_file.fits_file
if self.view.display_mode == "color":
    ref_file = self.view.red_file.fits_file  # R, G, B files should have identical size
command = "mExamine " + ref_file
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    print stderr
    raise Exception(stderr)
retval = mvStruct("mExamine", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
subimage_width = retval.naxis1
subimage_height = retval.naxis2
if self.view.xmin == "":
self.view.xmin = 0
if self.view.xmax == "":
self.view.xmax = retval.naxis1
if self.view.ymin == "":
self.view.ymin = 0
if self.view.ymax == "":
self.view.ymax = retval.naxis2
self.view.image_width = retval.naxis1
self.view.image_height = retval.naxis2
# ------------- Processing commands from the Browser -------------
# COMMAND: "update"
#
# Just do/redo the view as it stands.
#-----------------------------------------------------------------
args = shlex.split(message)
cmd = args[0]
if cmd == 'update':
self.update_display()
#-----------------------------------------------------------------
# COMMAND: submitUpdateRequest
#
# Received an updated JSON view from the browser. This is
# actually the most general request as all sorts of changes
# might be made to the view on the browser side. The rest
# of the commands below are special cases (e.g. "zoomIn")
# for when it makes more sense to have Python (in conjunction
# with the Montage executables) figure out what needs to be
# updated.
#-----------------------------------------------------------------
elif cmd == 'submitUpdateRequest':
jsonStr = args[1]
self.view.json_update(jsonStr)
self.update_display()
#-----------------------------------------------------------------
# COMMAND: "initialize" or "resize"
#
# Initialize or resize the canvas to fit the image area.
#-----------------------------------------------------------------
elif cmd == 'initialize' or cmd == 'resize':
self.view.canvas_width = args[1]
self.view.canvas_height = args[2]
if self.view.factor == 0:
self.view.xmin = 1
self.view.xmax = self.view.image_width
self.view.ymin = 1
self.view.ymax = self.view.image_height
if cmd == 'initialize':
self.update_display()
else:
self.to_browser("resized")
#-----------------------------------------------------------------
# COMMAND: "zoomReset"
#
# Reset the zoom (default zoom displays entire image on screen;
# may not fill canvas area).
#-----------------------------------------------------------------
elif cmd == 'zoomReset':
self.view.xmin = 1
self.view.xmax = self.view.image_width
self.view.ymin = 1
self.view.ymax = self.view.image_height
self.update_display()
#-----------------------------------------------------------------
# COMMAND: General Zoom/Pan Commands
# "zoom", "zoomIn", "zoomOut",
# "panUp", "panDown",
# "panLeft", "panRight"
# "panUpLeft", "panUpRight",
# "panDownLeft", "panDownRight"
#
# There is a lot of common code for zooming and panning,
# so we group these commands together.
#-----------------------------------------------------------------
elif (cmd == 'zoom' or
cmd == 'zoomIn' or cmd == 'zoomOut' or
cmd == 'panUp' or cmd == 'panDown' or
cmd == 'panLeft' or cmd == 'panRight' or
cmd == 'panUpLeft' or cmd == 'panUpRight' or
cmd == 'panDownLeft' or cmd == 'panDownRight'):
boxxmin = 0.
boxxmax = float(self.view.canvas_width)
boxymin = 0.
boxymax = float(self.view.canvas_height)
if self.debug:
print ""
print ""
print "ZOOMPAN> cmd: [" + cmd + "]"
print "ZOOMPAN> Size of currently-diplayed PNG"
print "ZOOMPAN> " + str(self.view.disp_width) + " X " + str(self.view.disp_height)
print ""
print "ZOOMPAN> Canvas:"
print "ZOOMPAN> " + str(self.view.canvas_width) + " X " + str(self.view.canvas_height)
# Obtain "current" subimage boundaries
oldxmin = float(self.view.xmin)
oldxmax = float(self.view.xmax)
oldymin = float(self.view.ymin)
oldymax = float(self.view.ymax)
if self.debug:
print ""
print "ZOOMPAN> Pixel ranges from the previous cutout"
print "ZOOMPAN> oldx: " + str(oldxmin) + " to " + str(oldxmax)
print "ZOOMPAN> oldy: " + str(oldymin) + " to " + str(oldymax)
# For box zooming, we are given the box boundaries
# but for zoom in/out and panning we need to calculate
# them based on the canvas
#-------------------------------------------------------------
# Zoom based on explicitly-set subimage boundaries
# (x and y limits).
#
# In the browser interface, these boundaries are set by
# drawing a box on the canvas using the mouse.
if cmd == 'zoom':
if self.debug:
print ""
print "ZOOMPAN> Box from command line"
boxxmin = float(args[1])
boxxmax = float(args[2])
boxymin = float(args[3])
boxymax = float(args[4])
# Zoom In (new subimage boundaries are calculated based on
# current subimage boundaries).
elif cmd == 'zoomIn':
if self.debug:
print ""
print "ZOOMPAN> Box by zooming in"
box_width = float(self.view.canvas_width)
box_center = box_width / 2.
boxxmin = box_center - box_width / 4.
boxxmax = box_center + box_width / 4.
box_height = float(self.view.canvas_height)
box_center = box_height / 2.
boxymin = box_center - box_height / 4.
boxymax = box_center + box_height / 4.
# Zoom Out (new subimage boundaries are calculated based on
# current subimage boundaries).
elif cmd == 'zoomOut':
if self.debug:
print ""
print "ZOOMPAN> Box by zooming out"
box_width = float(self.view.canvas_width)
box_center = box_width / 2.
boxxmin = box_center - box_width
boxxmax = box_center + box_width
box_height = float(self.view.canvas_height)
box_center = box_height / 2.
boxymin = box_center - box_height
boxymax = box_center + box_height
#-------------------------------------------------------------
# Pan the view region without altering the current zoom
# scale. Implemented panning directions are multiples
# of 45 degrees.
#-------------------------------------------------------------
elif cmd == 'panUp':
if self.debug:
print ""
print "ZOOMPAN> Box by panning up"
box_height = float(self.view.canvas_height)
boxymin = boxymin + box_height / 4.
boxymax = boxymax + box_height / 4.
elif cmd == 'panDown':
if self.debug:
print ""
print "ZOOMPAN> Box by panning down"
box_height = float(self.view.canvas_height)
boxymin = boxymin - box_height / 4.
boxymax = boxymax - box_height / 4.
elif cmd == 'panLeft':
if self.debug:
print ""
print "ZOOMPAN> Box by panning left"
box_width = float(self.view.canvas_width)
boxxmin = boxxmin - box_width / 4.
boxxmax = boxxmax - box_width / 4.
elif cmd == 'panRight':
if self.debug:
print ""
print "ZOOMPAN> Box by panning right"
box_width = float(self.view.canvas_width)
boxxmin = boxxmin + box_width / 4.
boxxmax = boxxmax + box_width / 4.
elif cmd == 'panUpLeft':
if self.debug:
print ""
print "ZOOMPAN> Box by panning up and left"
box_height = float(self.view.canvas_height)
box_width = float(self.view.canvas_width)
boxymin = boxymin + box_height / (4. * math.sqrt(2))
boxymax = boxymax + box_height / (4. * math.sqrt(2))
boxxmin = boxxmin - box_width / (4. * math.sqrt(2))
boxxmax = boxxmax - box_width / (4. * math.sqrt(2))
elif cmd == 'panUpRight':
if self.debug:
print ""
print "ZOOMPAN> Box by panning up and right"
box_height = float(self.view.canvas_height)
box_width = float(self.view.canvas_width)
boxymin = boxymin + box_height / (4. * math.sqrt(2))
boxymax = boxymax + box_height / (4. * math.sqrt(2))
boxxmin = boxxmin + box_width / (4. * math.sqrt(2))
boxxmax = boxxmax + box_width / (4. * math.sqrt(2))
elif cmd == 'panDownLeft':
if self.debug:
print ""
print "ZOOMPAN> Box by panning down and left"
box_height = float(self.view.canvas_height)
box_width = float(self.view.canvas_width)
boxymin = boxymin - box_height / (4. * math.sqrt(2))
boxymax = boxymax - box_height / (4. * math.sqrt(2))
boxxmin = boxxmin - box_width / (4. * math.sqrt(2))
boxxmax = boxxmax - box_width / (4. * math.sqrt(2))
elif cmd == 'panDownRight':
if self.debug:
print ""
print "ZOOMPAN> Box by panning down and right"
box_height = float(self.view.canvas_height)
box_width = float(self.view.canvas_width)
boxymin = boxymin - box_height / (4. * math.sqrt(2))
boxymax = boxymax - box_height / (4. * math.sqrt(2))
boxxmin = boxxmin + box_width / (4. * math.sqrt(2))
boxxmax = boxxmax + box_width / (4. * math.sqrt(2))
#-------------------------------------------------------------
# The box (especially for draw box zooming) will
# not necessarily have the proportions of the canvas.
# Correct the aspect ratio here.
box_width = boxxmax-boxxmin
box_height = boxymax-boxymin
if self.debug:
print ""
print "ZOOMPAN> Input:"
print "ZOOMPAN> boxx: " + str(boxxmin) + " to " + str(boxxmax) + " [" + str(box_width) + "]"
print "ZOOMPAN> boxy: " + str(boxymin) + " to " + str(boxymax) + " [" + str(box_height) + "]"
box_aspect = float(box_height) / float(box_width)
canvas_aspect = float(self.view.canvas_height) / float(self.view.canvas_width)
if self.debug:
print ""
print "ZOOMPAN> Aspect:"
print "ZOOMPAN> box: " + str(box_aspect)
print "ZOOMPAN> canvas: " + str(canvas_aspect)
# Based on the ratio of the box aspect ratio to the canvas
# aspect ratio: if the box is taller and skinnier than the
# canvas, make it wider.
ratio = box_aspect / canvas_aspect
if self.debug:
print ""
print "ZOOMPAN> Apect ratio adjusment factor: " + str(ratio)
if ratio > 1.:
box_width = int(box_width * ratio)
box_center = (boxxmax + boxxmin) / 2.
boxxmin = box_center - box_width / 2.
boxxmax = box_center + box_width / 2.
# The box is shorter and fatter than the canvas;
# make it taller.
else:
box_height = int(box_height / ratio)
box_center = (boxymax + boxymin) / 2.
boxymin = box_center - box_height / 2.
boxymax = box_center + box_height / 2.
box_width = boxxmax-boxxmin
box_height = boxymax-boxymin
box_aspect = float(box_height) / float(box_width)
if self.debug:
print ""
print "ZOOMPAN> Adjust to canvas aspect:"
print "ZOOMPAN> boxx: " + str(boxxmin) + " to " + str(boxxmax) + " [" + str(box_width) + "]"
print "ZOOMPAN> boxy: " + str(boxymin) + " to " + str(boxymax) + " [" + str(box_height) + "]"
# If we are zoomed out far enough that we see the whole image,
# part of the canvas may not be covered due to a difference in the
# image and canvas aspect ratios. In this case (and when zooming),
# part of the zoom box may be off the image. If we detect this,
# we shift the zoom box to be as much on the image as possible,
# with the same sizing.
if cmd == 'zoom' or cmd == 'zoomIn':
if boxxmax > self.view.disp_width:
diff = boxxmax - self.view.disp_width
boxxmax = boxxmax - diff
boxxmin = boxxmin - diff
if boxxmin < 0:
diff = boxxmin
boxxmin = boxxmin - diff
boxxmax = boxxmax - diff
if boxymax > self.view.disp_height:
diff = boxymax - self.view.disp_height
boxymax = boxymax - diff
boxymin = boxymin - diff
if boxymin < 0:
diff = boxymin
boxymin = boxymin - diff
boxymax = boxymax - diff
if self.debug:
print ""
print "ZOOMPAN> After shifting:"
print "ZOOMPAN> boxx: " + str(boxxmin) + " to " + str(boxxmax)
print "ZOOMPAN> boxy: " + str(boxymin) + " to " + str(boxymax)
# Convert the box back to image coordinates
factor = float(self.view.factor)
boxxmin = boxxmin * factor
boxxmax = boxxmax * factor
boxymin = boxymin * factor
boxymax = boxymax * factor
boxxmin = boxxmin + oldxmin
boxxmax = boxxmax + oldxmin
boxymin = boxymin + oldymin
boxymax = boxymax + oldymin
if self.debug:
print ""
print "ZOOMPAN> In image pixel coordinates:"
print "ZOOMPAN> boxx: " + str(boxxmin) + " to " + str(oldxmax)
print "ZOOMPAN> boxy: " + str(boxymin) + " to " + str(oldymax)
if boxxmin < 0:
boxxmax = boxxmax - boxxmin
boxxmin = 0
if boxxmax > self.view.image_width:
boxxmax = self.view.image_width
boxxmin = boxxmax - box_width*factor
if boxymin < 0:
boxymax = boxymax - boxymin
boxymin = 0
if boxymax > self.view.image_height:
boxymax = self.view.image_height
boxymin = boxymax - box_height*factor
if self.debug:
print ""
print "ZOOMPAN> Clipped by the image dimensions:"
print "ZOOMPAN> boxx: " + str(boxxmin) + " to " + str(oldxmax)
print "ZOOMPAN> boxy: " + str(boxymin) + " to " + str(oldymax)
# Save new subimage boundaries
self.view.xmin = int(boxxmin)
self.view.xmax = int(boxxmax)
self.view.ymin = int(boxymin)
self.view.ymax = int(boxymax)
# Update view
self.update_display()
#-----------------------------------------------------------------
# COMMAND: "center"
#
# Center the current view "box" (subimage) on a reference point
# defined by the most recent "pick" operation (defaults to
# image center if no such point has been chosen).
# Current zoom scale is retained.
#-----------------------------------------------------------------
elif cmd == "center":
# Obtain "current" subimage boundaries
oldxmin = float(self.view.xmin)
oldxmax = float(self.view.xmax)
oldymin = float(self.view.ymin)
oldymax = float(self.view.ymax)
box_width = float(self.view.canvas_width)
box_height = float(self.view.canvas_height)
# box_xcenter = box_width / 2.
# box_ycenter = box_height / 2.
factor = float(self.view.factor)
# Reference current "pick" point and redefine
# subimage boundaries
if cmd == 'center':
if (self.view.currentPickX != 0) and (self.view.currentPickY != 0):
boxymax = self.view.currentPickY + ((box_height * factor)/2)
boxymin = self.view.currentPickY - ((box_height * factor)/2)
boxxmax = self.view.currentPickX + ((box_width * factor)/2)
boxxmin = self.view.currentPickX - ((box_width * factor)/2)
else:
boxymax = (self.view.image_height + box_height * factor) / 2
boxymin = (self.view.image_height - box_height * factor) / 2
boxxmax = (self.view.image_width + box_width * factor) / 2
boxxmin = (self.view.image_width - box_width * factor) / 2
if (boxxmax > self.view.image_width):
boxxmax = self.view.image_width
boxxmin = self.view.image_width - (box_width * factor)
if (boxxmin <= 0):
boxxmin = 1
boxxmax = (box_width * factor)
if (boxymax > self.view.image_height):
boxymax = self.view.image_height
boxymin = self.view.image_height - (box_height * factor)
if (boxymin <= 0):
boxymin = 1
boxymax = (box_height * factor)
self.view.xmin = int(boxxmin)
self.view.xmax = int(boxxmax)
self.view.ymin = int(boxymin)
self.view.ymax = int(boxymax)
self.update_display()
#-----------------------------------------------------------------
# COMMAND: "pick"
#
# Examine a user-selected location
#-----------------------------------------------------------------
elif cmd == 'pick':
factor = float(self.view.factor)
if factor == 0.:
factor = 1.
boxx = float(args[1])
boxy = float(args[2])
self.view.currentPickX = self.view.xmin + boxx * factor
self.view.currentPickY = self.view.ymin + boxy * factor
if (self.view.currentPickX >= 0 and
    self.view.currentPickY >= 0 and
    self.view.currentPickX <= self.view.image_width and
    self.view.currentPickY <= self.view.image_height):
self.pick_location(self.view.currentPickX, self.view.currentPickY)
else:
print "Pick location not within image."
# self.update_display()
# Updating the current "pick" location does not update
# the view, so the "currentPickX" and "currentPickY"
# values stored in the mView representation (JSON)
# may be obsolete.
#
# This does not affect the pick calculations, as they
# use the internal, up-to-date x and y values.
#-----------------------------------------------------------------
# COMMAND: "header"
#
# Get FITS header
#-----------------------------------------------------------------
elif cmd == 'header':
self.get_header()
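# The aspect-ratio correction applied to the zoom box in the handler
# above can be factored out as a pure function. This is a sketch for
# illustration (the class performs the same steps inline); the function
# name is hypothetical.

```python
def match_box_to_canvas(boxxmin, boxxmax, boxymin, boxymax,
                        canvas_width, canvas_height):
    # Widen or heighten the zoom box about its center so its aspect
    # ratio (height/width) matches the canvas aspect ratio.
    box_width = boxxmax - boxxmin
    box_height = boxymax - boxymin
    box_aspect = float(box_height) / float(box_width)
    canvas_aspect = float(canvas_height) / float(canvas_width)
    ratio = box_aspect / canvas_aspect
    if ratio > 1.:
        # Box is taller/skinnier than the canvas: widen it.
        box_width = box_width * ratio
        center = (boxxmax + boxxmin) / 2.
        boxxmin = center - box_width / 2.
        boxxmax = center + box_width / 2.
    else:
        # Box is shorter/fatter than the canvas: heighten it.
        box_height = box_height / ratio
        center = (boxymax + boxymin) / 2.
        boxymin = center - box_height / 2.
        boxymax = center + box_height / 2.
    return boxxmin, boxxmax, boxymin, boxymax
```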
#-----------------------------------------------------------------
# Updating the display entails generating the subimage to be shown
# in the viewport, and packaging relevant information about the
# view in the data structure that is sent back to the browser.
def update_display(self):
if self.view.display_mode == "":
print "No images defined. Nothing to display."
sys.stdout.write('\n>>> ')
sys.stdout.flush()
return
if self.view.display_mode == "grayscale":
# First cut out the part of the original grayscale image we want
command = "mSubimage -p"
command += " " + self.view.gray_file.fits_file
command += " " + self.workspace + "/subimage.fits"
command += " " + str(self.view.xmin)
command += " " + str(self.view.ymin)
command += " " + str(int(self.view.xmax) - int(self.view.xmin))
command += " " + str(int(self.view.ymax) - int(self.view.ymin))
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    raise Exception(stderr)
retval = mvStruct("mSubimage", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
else:
# Or if in color mode, cut out the three images
# Blue
command = "mSubimage -p"
command += " " + self.view.blue_file.fits_file
command += " " + self.workspace + "/blue_subimage.fits"
command += " " + str(self.view.xmin)
command += " " + str(self.view.ymin)
command += " " + str(int(self.view.xmax) - int(self.view.xmin))
command += " " + str(int(self.view.ymax) - int(self.view.ymin))
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    raise Exception(stderr)
retval = mvStruct("mSubimage", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
# Green
command = "mSubimage -p"
command += " " + self.view.green_file.fits_file
command += " " + self.workspace + "/green_subimage.fits"
command += " " + str(self.view.xmin)
command += " " + str(self.view.ymin)
command += " " + str(int(self.view.xmax) - int(self.view.xmin))
command += " " + str(int(self.view.ymax) - int(self.view.ymin))
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    raise Exception(stderr)
retval = mvStruct("mSubimage", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
# Red
command = "mSubimage -p"
command += " " + self.view.red_file.fits_file
command += " " + self.workspace + "/red_subimage.fits"
command += " " + str(self.view.xmin)
command += " " + str(self.view.ymin)
command += " " + str(int(self.view.xmax) - int(self.view.xmin))
command += " " + str(int(self.view.ymax) - int(self.view.ymin))
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    raise Exception(stderr)
retval = mvStruct("mSubimage", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
# Get the size (all three are the same)
if self.view.display_mode == "grayscale":
command = "mExamine " + self.workspace + "/subimage.fits"
else:
command = "mExamine " + self.workspace + "/red_subimage.fits"
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
    raise Exception(stderr)
retval = mvStruct("mExamine", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
subimage_width = retval.naxis1
subimage_height = retval.naxis2
xfactor = float(subimage_width) / float(self.view.canvas_width)
yfactor = float(subimage_height) / float(self.view.canvas_height)
if float(yfactor) > float(xfactor):
xfactor = yfactor
self.view.factor = xfactor
if self.view.display_mode == "grayscale":
# Shrink/expand the grayscale cutout to the right size
command = "mShrink"
command += " " + self.workspace + "/subimage.fits"
command += " " + self.workspace + "/shrunken.fits"
command += " " + str(xfactor)
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mShrink", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
else:
# Shrink/expand the three color cutouts to the right size
# Blue
command = "mShrink"
command += " " + self.workspace + "/blue_subimage.fits"
command += " " + self.workspace + "/blue_shrunken.fits"
command += " " + str(xfactor)
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mShrink", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
# Green
command = "mShrink"
command += " " + self.workspace + "/green_subimage.fits"
command += " " + self.workspace + "/green_shrunken.fits"
command += " " + str(xfactor)
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mShrink", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
# Red
command = "mShrink"
command += " " + self.workspace + "/red_subimage.fits"
command += " " + self.workspace + "/red_shrunken.fits"
command += " " + str(xfactor)
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mShrink", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
# Finally, generate the PNG
command = "mViewer"
noverlay = len(self.view.overlay)
for i in range(0, noverlay):
type = self.view.overlay[i].type
if type == 'grid':
visible = self.view.overlay[i].visible
coord_sys = self.view.overlay[i].coord_sys
color = self.view.overlay[i].color
if visible == True:
if color != "":
command += " -color " + str(color)
command += " -grid " + str(coord_sys)
elif type == 'catalog':
visible = self.view.overlay[i].visible
data_file = self.view.overlay[i].data_file
data_col = self.view.overlay[i].data_col
data_ref = self.view.overlay[i].data_ref
data_type = self.view.overlay[i].data_type
sym_size = self.view.overlay[i].sym_size
sym_type = self.view.overlay[i].sym_type
sym_sides = self.view.overlay[i].sym_sides
sym_rotation = self.view.overlay[i].sym_rotation
color = self.view.overlay[i].color
if visible == True:
if color != "":
command += " -color " + str(color)
if sym_type != "" and sym_size != "":
command += " -symbol " + str(sym_size) + " " + str(sym_type) + " " + str(sym_sides) + " " + str(sym_rotation)
command += " -catalog " + str(data_file) + " " + str(data_col) + " " + str(data_ref) + " " + str(data_type)
elif type == 'imginfo':
visible = self.view.overlay[i].visible
data_file = self.view.overlay[i].data_file
color = self.view.overlay[i].color
if visible == True:
if color != "":
command += " -color " + str(color)
command += " -imginfo " + str(data_file)
elif type == 'mark':
visible = self.view.overlay[i].visible
lon = self.view.overlay[i].lon
lat = self.view.overlay[i].lat
sym_size = self.view.overlay[i].sym_size
sym_type = self.view.overlay[i].sym_type
sym_sides = self.view.overlay[i].sym_sides
sym_rotation = self.view.overlay[i].sym_rotation
color = self.view.overlay[i].color
if visible == True:
if color != "":
command += " -color " + str(color)
if sym_type != "" and sym_size != "":
command += " -symbol " + str(sym_size) + " " + str(sym_type) + " " + str(sym_sides) + " " + str(sym_rotation)
command += " -mark " + str(lon) + " " + str(lat)
elif type == 'label':
visible = self.view.overlay[i].visible
lon = self.view.overlay[i].lon
lat = self.view.overlay[i].lat
text = self.view.overlay[i].text
color = self.view.overlay[i].color
if visible == True:
if color != "":
command += " -color " + str(color)
command += " -label " + str(lon) + " " + str(lat) + ' "' + str(text) + '"'
else:
print "Invalid overlay type '" + str(type) + "' in view specification."
if self.view.display_mode == "grayscale":
fits_file = self.workspace + "/shrunken.fits"
color_table = self.view.gray_file.color_table
stretch_min = self.view.gray_file.stretch_min
stretch_max = self.view.gray_file.stretch_max
stretch_mode = self.view.gray_file.stretch_mode
if color_table == "":
color_table = 0
if stretch_min == "":
stretch_min = "-1s"
if stretch_max == "":
stretch_max = "max"
if stretch_mode == "":
stretch_mode = "gaussian-log"
command += " -ct " + str(color_table)
command += " -gray " + str(fits_file) + " " + str(stretch_min) + " " + str(stretch_max) + " " + str(stretch_mode)
else:
fits_file = self.workspace + "/red_shrunken.fits"
stretch_min = self.view.red_file.stretch_min
stretch_max = self.view.red_file.stretch_max
stretch_mode = self.view.red_file.stretch_mode
if stretch_min == "":
stretch_min = "-1s"
if stretch_max == "":
stretch_max = "max"
if stretch_mode == "":
stretch_mode = "gaussian-log"
command += " -red " + str(fits_file) + " " + str(stretch_min) + " " + str(stretch_max) + " " + str(stretch_mode)
fits_file = self.workspace + "/green_shrunken.fits"
stretch_min = self.view.green_file.stretch_min
stretch_max = self.view.green_file.stretch_max
stretch_mode = self.view.green_file.stretch_mode
if stretch_min == "":
stretch_min = "-1s"
if stretch_max == "":
stretch_max = "max"
if stretch_mode == "":
stretch_mode = "gaussian-log"
command += " -green " + str(fits_file) + " " + str(stretch_min) + " " + str(stretch_max) + " " + str(stretch_mode)
fits_file = self.workspace + "/blue_shrunken.fits"
stretch_min = self.view.blue_file.stretch_min
stretch_max = self.view.blue_file.stretch_max
stretch_mode = self.view.blue_file.stretch_mode
if stretch_min == "":
stretch_min = "-1s"
if stretch_max == "":
stretch_max = "max"
if stretch_mode == "":
stretch_mode = "gaussian-log"
command += " -blue " + str(fits_file) + " " + str(stretch_min) + " " + str(stretch_max) + " " + str(stretch_mode)
image_file = self.view.image_file
command += " -png " + self.workspace + "/" + str(image_file) # Set image format here
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mViewer", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
if retval.stat == "WARNING":
print "\nWARNING: " + retval.msg
sys.stdout.write('\n>>> ')
sys.stdout.flush()
return
if retval.stat == "ERROR":
print "\nERROR: " + retval.msg
sys.stdout.write('\n>>> ')
sys.stdout.flush()
return
self.view.disp_width = retval.width
self.view.disp_height = retval.height
if self.view.display_mode == "grayscale":
# self.view.gray_file.bunit = retval.bunit
self.view.gray_file.min = retval.min
self.view.gray_file.max = retval.max
self.view.gray_file.data_min = retval.datamin
self.view.gray_file.data_max = retval.datamax
self.view.gray_file.min_sigma = retval.minsigma
self.view.gray_file.max_sigma = retval.maxsigma
self.view.gray_file.min_percent = retval.minpercent
self.view.gray_file.max_percent = retval.maxpercent
else:
# self.view.blue_file.bunit = retval.bunit
self.view.blue_file.min = retval.bmin
self.view.blue_file.max = retval.bmax
self.view.blue_file.data_min = retval.bdatamin
self.view.blue_file.data_max = retval.bdatamax
self.view.blue_file.min_sigma = retval.bminsigma
self.view.blue_file.max_sigma = retval.bmaxsigma
self.view.blue_file.min_percent = retval.bminpercent
self.view.blue_file.max_percent = retval.bmaxpercent
# self.view.green_file.bunit = retval.bunit
self.view.green_file.min = retval.gmin
self.view.green_file.max = retval.gmax
self.view.green_file.data_min = retval.gdatamin
self.view.green_file.data_max = retval.gdatamax
self.view.green_file.min_sigma = retval.gminsigma
self.view.green_file.max_sigma = retval.gmaxsigma
self.view.green_file.min_percent = retval.gminpercent
self.view.green_file.max_percent = retval.gmaxpercent
# self.view.red_file.bunit = retval.bunit
self.view.red_file.min = retval.rmin
self.view.red_file.max = retval.rmax
self.view.red_file.data_min = retval.rdatamin
self.view.red_file.data_max = retval.rdatamax
self.view.red_file.min_sigma = retval.rminsigma
self.view.red_file.max_sigma = retval.rmaxsigma
self.view.red_file.min_percent = retval.rminpercent
self.view.red_file.max_percent = retval.rmaxpercent
self.view.bunit = retval.bunit
# Write the current mvView info to a JSON file in the workspace
json_file = self.workspace + "/view.json"
jfile = open(json_file, "w+")
jfile.write(repr(self.view))
jfile.close()
# Tell the browser to get the new images (and JSON)
self.to_browser("image " + image_file)
# Default function to be used when the user picks a location
# This can be overridden by the developer with a callback of
# their own.
def pick_location(self, boxx, boxy):
ref_file = []
if self.view.display_mode == "grayscale":
ref_file.append(self.view.gray_file.fits_file)
if self.view.display_mode == "color":
ref_file.append(self.view.blue_file.fits_file)
ref_file.append(self.view.green_file.fits_file)
ref_file.append(self.view.red_file.fits_file)
radius = 31
json_file = self.workspace + "/pick.json"
jfile = open(json_file, "w+")
jfile.write("[")
nfile = len(ref_file)
for i in range(0, nfile):
command = "mExamine -p " + repr(boxx) + "p " + repr(boxy) + "p " + repr(radius) + "p " + ref_file[i]
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mExamine", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
print ""
print " File " + ref_file[i] + ":"
print ""
print " Flux (sigma) (RA, Dec) Pix Coord"
print " ------------------ ------------------------- ----------"
print " Center: " + repr(retval.fluxref) + " (" + repr(retval.sigmaref) + ") at (" + repr(retval.raref) + ", " + repr(retval.decref) + ") [" + repr(retval.xref) + ", " + repr(retval.yref) + "]"
print " Min: " + repr(retval.fluxmin) + " (" + repr(retval.sigmamin) + ") at (" + repr(retval.ramin) + ", " + repr(retval.decmin) + ") [" + repr(retval.xmin) + ", " + repr(retval.ymin) + "]"
print " Max: " + repr(retval.fluxmax) + " (" + repr(retval.sigmamax) + ") at (" + repr(retval.ramax) + ", " + repr(retval.decmax) + ") [" + repr(retval.xmax) + ", " + repr(retval.ymax) + "]"
print ""
print " Average: " + repr(retval.aveflux) + " +/- " + repr(retval.rmsflux)
print ""
print " Radius: " + repr(retval.radius) + " degrees (" + repr(retval.radpix) + " pixels) / Total area: " + repr(retval.npixel) + " pixels (" + repr(retval.nnull) + " nulls)"
print ""
# Write the current mvView info to a JSON file in the workspace
# json_file = self.workspace + "/pick" + str(i) + ".json"
# jfile = open(json_file, "w+")
jfile.write(repr(retval))
if i < (nfile-1):
jfile.write(",")
jfile.write("]")
jfile.close()
self.to_browser("pick")
# Get the FITS header(s) for the image(s) being displayed.
def get_header(self):
ref_file = []
if self.view.display_mode == "grayscale":
ref_file.append(self.view.gray_file.fits_file)
if self.view.display_mode == "color":
ref_file.append(self.view.blue_file.fits_file)
ref_file.append(self.view.green_file.fits_file)
ref_file.append(self.view.red_file.fits_file)
nfile = len(ref_file)
for i in range(0, nfile):
command = "mGetHdr -H " + ref_file[i] + " " + self.workspace + "/header" + str(i) + ".html"
if self.debug:
print "\nMONTAGE Command:\n---------------\n" + command
p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stderr = p.stderr.read()
if stderr:
raise Exception(stderr)
retval = mvStruct("mGetHdr", p.stdout.read().strip())
if self.debug:
print "\nRETURN Struct:\n-------------\n"
print retval
sys.stdout.write('\n>>> ')
sys.stdout.flush()
if self.view.display_mode == "color":
self.to_browser("header color")
else:
self.to_browser("header gray")
# Send a display update notification to the browser.
def draw(self):
if self.debug:
print "DEBUG> mViewer.draw(): sending 'updateDisplay' to browser."
self.to_browser("updateDisplay")
#---------------------------------------------------------------------------------
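The methods above repeat the same Popen / stderr-check / mvStruct boilerplate around every Montage call. A minimal sketch of a shared helper (hypothetical — `run_command` is not part of the mViewer API) that also drains both pipes with `communicate()`, avoiding the deadlock that `p.stderr.read()` followed by `p.stdout.read()` can hit when a pipe buffer fills:

```python
import shlex
import subprocess

def run_command(command):
    """Run a command line and return its stripped stdout as text.

    Hypothetical refactoring helper for the repeated pattern above:
    communicate() reads stdout and stderr concurrently, then we raise
    on any stderr output, mirroring the original error handling.
    """
    p = subprocess.Popen(shlex.split(command),
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         universal_newlines=True)
    stdout, stderr = p.communicate()
    if stderr:
        raise Exception(stderr)
    return stdout.strip()
```

Each call site would then reduce to one line, e.g. `retval = mvStruct("mShrink", run_command(command))`.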
"""
# Definition for a Node.
"""
class Node(object):
def __init__(self, val, left=None, right=None):
self.val = val
self.left = left
self.right = right
class Solution(object):
def treeToDoublyList(self, root):
"""
:type root: Node
:rtype: Node
"""
def recurse(node):
if node:
# Pre order traversal
recurse(node.left)
if self.last:
self.last.right = node
node.left = self.last
else:
self.first = node
self.last = node
recurse(node.right)
if root == None:
return None
self.last = None
self.first = None
recurse(root)
self.first.left = self.last
self.last.right = self.first
return self.first
z = Solution()
a = Node(4)
b = Node(2)
c = Node(5)
a.left = b
a.right = c
d = Node(1)
e = Node(3)
b.left = d
b.right = e
k = Node(8)
i = Node(-6)
j = Node(-8)
k.left = i
i.left = j
res = z.treeToDoublyList(b)
res_orig = res
while res:
print(res.val)
res = res.right
if res == res_orig:
break
print(res_orig.left.val)
print(res.val)
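The circular invariants the demo prints can also be asserted directly. A self-contained restatement (it declares its own minimal `Node` and converter so it runs independently of the classes above):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def tree_to_dll(root):
    """In-order convert a BST into a circular doubly linked list.

    Same approach as the Solution class above: stitch nodes together
    during an in-order walk, then close the circle at the end.
    """
    state = {"first": None, "last": None}

    def visit(node):
        if not node:
            return
        visit(node.left)
        if state["last"]:
            state["last"].right = node
            node.left = state["last"]
        else:
            state["first"] = node
        state["last"] = node
        visit(node.right)

    if root is None:
        return None
    visit(root)
    state["first"].left = state["last"]
    state["last"].right = state["first"]
    return state["first"]

def dll_values(head, n):
    # Walk n nodes forward from head, collecting values.
    vals, cur = [], head
    for _ in range(n):
        vals.append(cur.val)
        cur = cur.right
    return vals
```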
#!/usr/bin/env python3
import os.path
import tensorflow as tf
import helper
import warnings
from distutils.version import LooseVersion
import project_tests as tests
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def load_vgg(sess, vgg_path):
"""
Load Pretrained VGG Model into TensorFlow.
:param sess: TensorFlow Session
:param vgg_path: Path to vgg folder, containing "variables/" and "saved_model.pb"
:return: Tuple of Tensors from VGG model (image_input, keep_prob, layer3_out, layer4_out, layer7_out)
"""
# Use tf.saved_model.loader.load to load the model and weights
vgg_tag = 'vgg16'
vgg_input_tensor_name = 'image_input:0'
vgg_keep_prob_tensor_name = 'keep_prob:0'
vgg_layer3_out_tensor_name = 'layer3_out:0'
vgg_layer4_out_tensor_name = 'layer4_out:0'
vgg_layer7_out_tensor_name = 'layer7_out:0'
# Load the VGG model from sess
tf.saved_model.loader.load(sess, [vgg_tag], vgg_path)
# Get graph
graph = tf.get_default_graph()
# Get input layer
input = graph.get_tensor_by_name(vgg_input_tensor_name)
# Get keep probability
keep = graph.get_tensor_by_name(vgg_keep_prob_tensor_name)
# Get the layers
layer3 = graph.get_tensor_by_name(vgg_layer3_out_tensor_name)
layer4 = graph.get_tensor_by_name(vgg_layer4_out_tensor_name)
layer7 = graph.get_tensor_by_name(vgg_layer7_out_tensor_name)
return input, keep, layer3, layer4, layer7
tests.test_load_vgg(load_vgg, tf)
def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
"""
Create the layers for a fully convolutional network. Build skip-layers using the vgg layers.
:param vgg_layer3_out: TF Tensor for VGG Layer 3 output
:param vgg_layer4_out: TF Tensor for VGG Layer 4 output
:param vgg_layer7_out: TF Tensor for VGG Layer 7 output
:param num_classes: Number of classes to classify
:return: The Tensor for the last layer of output
"""
# 1x1 convolutional layers instead of fully connected layer
layer3_1x1 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, strides=(1, 1), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
layer4_1x1 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, strides=(1, 1), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
layer7_1x1 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, strides=(1, 1), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
# Upsampling by 2 kernel_size = 4, strides = 2x2
upsampled_layer7 = tf.layers.conv2d_transpose(layer7_1x1, num_classes, 4, strides=(2, 2), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
# Skip connections
# Add layer4_1x1 and upsampled_layer7.
input = tf.add(upsampled_layer7, layer4_1x1)
# Upsampling layer4 again
upsampled_layer4 = tf.layers.conv2d_transpose(input, num_classes, 4, strides=(2, 2), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
# Add layer3_1x1 with upsampled_layer4
input = tf.add(upsampled_layer4, layer3_1x1)
output = tf.layers.conv2d_transpose(input, num_classes, 16, strides=(8, 8), padding='SAME',
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
return output
tests.test_layers(layers)
def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
"""
Build the TensorFlow loss and optimizer operations.
:param nn_last_layer: TF Tensor of the last layer in the neural network
:param correct_label: TF Placeholder for the correct label image
:param learning_rate: TF Placeholder for the learning rate
:param num_classes: Number of classes to classify
:return: Tuple of (logits, train_op, cross_entropy_loss)
"""
# The output tensor is 4D so we have to reshape it to 2D
# logits is now a 2D tensor where each row represents a pixel and each column a class
logits = tf.reshape(nn_last_layer, (-1, num_classes))
cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=correct_label))
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cross_entropy_loss)
return logits, training_op, cross_entropy_loss
tests.test_optimize(optimize)
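Per pixel, `tf.nn.softmax_cross_entropy_with_logits` computes the cross-entropy between `softmax(logits)` and the one-hot label for each row of the reshaped `(-1, num_classes)` tensor. A pure-Python sketch of that per-row quantity (illustrative only; numbers are made up, with the usual max-shift for numerical stability):

```python
import math

def softmax_cross_entropy(logits, onehot):
    """Cross-entropy of softmax(logits) against a one-hot label for one row."""
    m = max(logits)                               # shift by the max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    # -sum over classes of y * log(softmax probability)
    return -sum(y * math.log(e / total) for y, e in zip(onehot, exps))
```

With two classes and equal logits the loss is `log 2`; pushing the logit of the correct class up drives it toward zero.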
def train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image,
correct_label, keep_prob, learning_rate):
"""
Train neural network and print out the loss during training.
:param sess: TF Session
:param epochs: Number of epochs
:param batch_size: Batch size
:param get_batches_fn: Function to get batches of training data. Call using get_batches_fn(batch_size)
:param train_op: TF Operation to train the neural network
:param cross_entropy_loss: TF Tensor for the amount of loss
:param input_image: TF Placeholder for input images
:param correct_label: TF Placeholder for label images
:param keep_prob: TF Placeholder for dropout keep probability
:param learning_rate: TF Placeholder for learning rate
"""
rate = 0.0001
keep = 0.5
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
for image, label in get_batches_fn(batch_size):
_, loss = sess.run([train_op, cross_entropy_loss],
feed_dict={input_image: image, correct_label: label,
keep_prob: keep, learning_rate: rate})
#total_accuracy += (accuracy * len(image))
print("EPOCH {} ...".format(epoch+1))
print("Loss = {:.3f}".format(loss))
print()
pass
tests.test_train_nn(train_nn)
def run():
num_classes = 2
image_shape = (160, 576)
data_dir = './data'
runs_dir = './runs'
tests.test_for_kitti_dataset(data_dir)
# Download pretrained vgg model
helper.maybe_download_pretrained_vgg(data_dir)
# OPTIONAL: Train and Inference on the cityscapes dataset instead of the Kitti dataset.
# You'll need a GPU with at least 10 teraFLOPS to train on.
# https://www.cityscapes-dataset.com/
with tf.Session() as sess:
# Path to vgg model
vgg_path = os.path.join(data_dir, 'vgg')
# Create function to get batches
get_batches_fn = helper.gen_batch_function(os.path.join(data_dir, 'data_road/training'), image_shape)
# OPTIONAL: Augment Images for better results
# https://datascience.stackexchange.com/questions/5224/how-to-prepare-augment-images-for-neural-network
# Build NN using load_vgg, layers, and optimize function
learning_rate = tf.constant(0.0001)
epochs = 7
batch_size = 1
#correct_label = tf.placeholder(tf.int32, None)
label = tf.placeholder(tf.int32, None)
correct_label = tf.one_hot(label, num_classes)
input, keep, layer3, layer4, layer7 = load_vgg(sess, vgg_path)
output = layers(layer3, layer4, layer7, num_classes)
logits, training_op, cross_entropy_loss = optimize(output, correct_label, learning_rate, num_classes)
# Train NN using the train_nn function
train_nn(sess, epochs, batch_size, get_batches_fn,
training_op, cross_entropy_loss, input,
correct_label, keep, learning_rate)
# Save inference data using helper.save_inference_samples
helper.save_inference_samples(runs_dir, data_dir, sess, image_shape, logits, keep, input)
# OPTIONAL: Apply the trained model to a video
if __name__ == '__main__':
run()
from .. import base
def getversion(obj):
try:
return obj.version
except AttributeError:
return obj
class Version(base.Resource):
def __repr__(self):
return "<Version>"
class VersionManager(base.ManagerWithFind):
resource_class = Version
def list(self):
return self._list("../../", "versions")
def get(self, version):
return self._get("../../%s" % getversion(version), "version")
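The `getversion` duck-typing above accepts either a `Version` resource or a raw version string. Restated standalone (with a hypothetical `FakeVersion` stand-in for the real resource class) for a quick check:

```python
def getversion(obj):
    # Same helper as in the module above: prefer an object's .version
    # attribute, fall back to treating the argument as the version itself.
    try:
        return obj.version
    except AttributeError:
        return obj

class FakeVersion(object):
    # Hypothetical stand-in for rcaclient's Version resource.
    def __init__(self, version):
        self.version = version
```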
# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import os.path as osp
import paddle
from ..modeling.builder import build_model
from ..solver import build_lr, build_optimizer
from ..utils import do_preciseBN
from paddlevideo.utils import get_logger, coloring
from paddlevideo.utils import (AverageMeter, build_record, log_batch, log_epoch,
save, load, mkdir)
from paddlevideo.loader import TSN_Dali_loader, get_input_data
"""
We only supported DALI training for TSN model now.
"""
def train_dali(cfg, weights=None, parallel=True):
"""Train model entry
Args:
cfg (dict): configuration.
weights (str): weights path for finetuning.
parallel (bool): Whether to use multi-card training. Default: True.
"""
logger = get_logger("paddlevideo")
batch_size = cfg.DALI_LOADER.get('batch_size', 8)
places = paddle.set_device('gpu')
model_name = cfg.model_name
output_dir = cfg.get("output_dir", f"./output/{model_name}")
mkdir(output_dir)
# 1. Construct model
model = build_model(cfg.MODEL)
if parallel:
model = paddle.DataParallel(model)
# 2. Construct dali dataloader
train_loader = TSN_Dali_loader(cfg.DALI_LOADER).build_dali_reader()
# 3. Construct solver.
lr = build_lr(cfg.OPTIMIZER.learning_rate, None)
optimizer = build_optimizer(cfg.OPTIMIZER, lr, model=model)
# Resume
resume_epoch = cfg.get("resume_epoch", 0)
if resume_epoch:
filename = osp.join(output_dir,
model_name + f"_epoch_{resume_epoch:05d}")
resume_model_dict = load(filename + '.pdparams')
resume_opt_dict = load(filename + '.pdopt')
model.set_state_dict(resume_model_dict)
optimizer.set_state_dict(resume_opt_dict)
# Finetune:
if weights:
assert resume_epoch == 0, f"Conflict occurs when finetuning, please switch resume function off by setting resume_epoch to 0 or not indicating it."
model_dict = load(weights)
model.set_state_dict(model_dict)
# 4. Train Model
for epoch in range(0, cfg.epochs):
if epoch < resume_epoch:
logger.info(
f"| epoch: [{epoch+1}] <= resume_epoch: [{ resume_epoch}], continue... "
)
continue
model.train()
record_list = build_record(cfg.MODEL)
tic = time.time()
for i, data in enumerate(train_loader):
data = get_input_data(data)
record_list['reader_time'].update(time.time() - tic)
# 4.1 forward
outputs = model(data, mode='train')
# 4.2 backward
avg_loss = outputs['loss']
avg_loss.backward()
# 4.3 minimize
optimizer.step()
optimizer.clear_grad()
# log record
record_list['lr'].update(optimizer._global_learning_rate(),
batch_size)
for name, value in outputs.items():
record_list[name].update(value, batch_size)
record_list['batch_time'].update(time.time() - tic)
tic = time.time()
if i % cfg.get("log_interval", 10) == 0:
ips = "ips: {:.5f} instance/sec.".format(
batch_size / record_list["batch_time"].val)
log_batch(record_list, i, epoch + 1, cfg.epochs, "train", ips)
# learning rate iter step
if cfg.OPTIMIZER.learning_rate.get("iter_step"):
lr.step()
# learning rate epoch step
if not cfg.OPTIMIZER.learning_rate.get("iter_step"):
lr.step()
ips = "ips: {:.5f} instance/sec.".format(
batch_size * record_list["batch_time"].count /
record_list["batch_time"].sum)
log_epoch(record_list, epoch + 1, "train", ips)
# use precise bn to improve acc
if cfg.get("PRECISEBN") and (epoch % cfg.PRECISEBN.preciseBN_interval
== 0 or epoch == cfg.epochs - 1):
do_preciseBN(
model, train_loader, parallel,
min(cfg.PRECISEBN.num_iters_preciseBN, len(train_loader)))
# 5. Save model and optimizer
if epoch % cfg.get("save_interval", 1) == 0 or epoch == cfg.epochs - 1:
save(
optimizer.state_dict(),
osp.join(output_dir,
model_name + f"_epoch_{epoch+1:05d}.pdopt"))
save(
model.state_dict(),
osp.join(output_dir,
model_name + f"_epoch_{epoch+1:05d}.pdparams"))
logger.info(f'training {model_name} finished')
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'I:\python_base_Code\PyQt5学习\PyQt_Demo\resource\UI\QR_code.ui'
#
# Created by: PyQt5 UI code generator 5.13.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
# Implementation of the QR code dialog UI
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(520, 350)
Form.setMinimumSize(QtCore.QSize(520, 350))
Form.setMaximumSize(QtCore.QSize(520, 350))
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(":/login/QQ.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
Form.setWindowIcon(icon)
self.widget = QtWidgets.QWidget(Form)
self.widget.setGeometry(QtCore.QRect(1, 1, 520, 30))
self.widget.setMinimumSize(QtCore.QSize(520, 30))
self.widget.setMaximumSize(QtCore.QSize(520, 30))
self.widget.setStyleSheet("image: url(:/login/版图.png);")
self.widget.setObjectName("widget")
self.widget_2 = QtWidgets.QWidget(Form)
self.widget_2.setGeometry(QtCore.QRect(1, 31, 520, 319))
self.widget_2.setStyleSheet("background-color: rgb(235, 242, 249);")
self.widget_2.setObjectName("widget_2")
self.label = QtWidgets.QLabel(self.widget_2)
self.label.setGeometry(QtCore.QRect(140, 60, 221, 31))
self.label.setStyleSheet("font: 11pt \"黑体\";\n"
"")
self.label.setObjectName("label")
self.login_btn = QtWidgets.QPushButton(self.widget_2)
self.login_btn.setEnabled(True)
self.login_btn.setGeometry(QtCore.QRect(140, 260, 230, 35))
self.login_btn.setMinimumSize(QtCore.QSize(230, 35))
self.login_btn.setMaximumSize(QtCore.QSize(230, 35))
self.login_btn.setStyleSheet("QPushButton{\n"
" background-color:rgb(85, 170, 255);\n"
" font: 10pt \"微软雅黑\";\n"
" color: rgb(255, 255, 255);\n"
" border-radius:6px;\n"
"}\n"
"QPushButton:pressed{\n"
" background-color:rgb(0, 170, 255);\n"
"}\n"
"\n"
"QPushButton:hover{\n"
" background-color:rgb(25, 225, 255);\n"
"}\n"
"")
self.login_btn.setCheckable(False)
self.login_btn.setObjectName("login_btn")
self.pushButton = QtWidgets.QPushButton(self.widget_2)
self.pushButton.setEnabled(True)
self.pushButton.setGeometry(QtCore.QRect(110, 120, 121, 121))
self.pushButton.setStyleSheet("border-image: url(:/login/QR_code.jpg);")
self.pushButton.setText("")
self.pushButton.setCheckable(True)
self.pushButton.setChecked(True)
self.pushButton.setObjectName("pushButton")
self.pushButton_2 = QtWidgets.QPushButton(self.widget_2)
self.pushButton_2.setEnabled(False)
self.pushButton_2.setGeometry(QtCore.QRect(270, 120, 121, 121))
self.pushButton_2.setStyleSheet("border-image: url(:/login/scan.png);")
self.pushButton_2.setText("")
self.pushButton_2.setCheckable(False)
self.pushButton_2.setFlat(False)
self.pushButton_2.setObjectName("pushButton_2")
self.label.raise_()
self.login_btn.raise_()
self.pushButton_2.raise_()
self.pushButton.raise_()
self.retranslateUi(Form)
self.pushButton.clicked['bool'].connect(Form.show_hide_menu)
self.login_btn.clicked.connect(Form.show_menu)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "扫描二维码"))
self.label.setText(_translate("Form", "用微信扫描二维码添加好友"))
self.login_btn.setText(_translate("Form", "返回"))
import rc.login_rc
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
import pymel.core as pm
import pymel.core.uitypes as pui
from pymel.core import Callback
import maya.cmds as cmds
import logging
import os
import getpass
import subprocess
import mtm_mantraAttributes as mantraAttributes
import mtm_attributeManager
import shutil
import sys
#import AETemplates.AEdagNodeTemplate as dgTmpl
reload(mantraAttributes)
log = logging.getLogger("mantraLogger")
RENDERERNAME = "Mantra"
GLOBALSNAME = "mayaToMantraGlobals"
def getHoudiniPath():
return "C:/Program Files/Side Effects Software/Houdini 11.0.446.7"
def getInstallDir():
return "C:/Users/haggi/coding/mayaTo/VS/"
def prepareEnv():
username = getpass.getuser()
if not os.environ.has_key('H'):
os.environ['H'] = getHoudiniPath()
tmpdir = ""
if not os.environ.has_key('TMP'):
if not os.environ.has_key('tmp'):
# TODO: correct tmpdir
pass
else:
tmpdir = os.environ['tmp']
else:
tmpdir = os.environ['TMP']
basePath = os.environ['H']
currentDir = pm.workspace.path
sn = pm.sceneName().basename().split(".")[0]
os.environ['HB'] = basePath + "/bin"
os.environ['HD'] = basePath + "/demo"
os.environ['HFS'] = basePath
os.environ['HH'] = basePath + "/houdini"
os.environ['HHC'] = basePath + "/houdini/config"
os.environ['HIP'] = currentDir + "/mantra/" + sn
os.environ['HIPNAME'] = "untitled.hip"
#os.environ['HOUDINI_DESKTOP_DIR'] = "C:/Users/haggi/Desktop"
os.environ['HOUDINI_TEMP_DIR'] = tmpdir + "/houdini"
os.environ['HSITE'] = basePath + "/site"
os.environ['HT'] = basePath + "/toolkit"
os.environ['HTB'] = basePath + "/toolkit/bin"
#os.environ['HOUDINI_OTL_PATH'] = os.environ['HOUDINI_OTL_PATH'] + ";" + currentDir + "/mantra/" + sn + "/shaders"
os.environ['HOUDINI_VEX_PATH'] = currentDir + "/mantra/" + sn + "/shaders" + ";&;"
path = os.environ['PATH']
if not (basePath + "/bin") in path:
os.environ['PATH'] = basePath + "/bin;" + path
if not os.environ.has_key("MTM_HOME"):
os.environ["MTM_HOME"] = getInstallDir()
#print "VexPath", os.environ['HOUDINI_VEX_PATH']
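`prepareEnv()` resolves the temp directory by probing environment variables in order (`TMP`, then `tmp`). A minimal Python 3 sketch of that fallback chain — the helper name and the extra `TEMP` key are illustrative, not part of the original:

```python
import os

def resolve_tmp_dir(env=None):
    """Return the first temp-dir variable that is set, mirroring the
    TMP/tmp fallback chain used in prepareEnv() above (sketch only)."""
    env = os.environ if env is None else env
    for key in ("TMP", "tmp", "TEMP"):
        if key in env:
            return env[key]
    return ""

# Example with a fake environment mapping:
print(resolve_tmp_dir({"tmp": "/var/tmp"}))  # -> /var/tmp
```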
def startRenderProc(ifdFile):
log.info("Starting render procedure")
prepareEnv()
IDLE_PRIORITY_CLASS = 64
verbosity = 4
try:
renderGlobals = pm.ls(type = "mayaToMantraGlobals")[0]
verbosity = renderGlobals.verbosity.get()
except:
pass
cmd = "mantra -V " + str(verbosity) + " -f " + ifdFile
log.info("Starting cmd %s" % cmd)
process = subprocess.Popen( cmd, bufsize = 1, shell = True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT, creationflags=IDLE_PRIORITY_CLASS)
while 1:
line = process.stdout.readline()
if not line: break
# log.info(line.strip())
pm.mel.trace(line.strip())
if "Render Time:" in line:
break
log.info("Rendering done.")
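`startRenderProc()` relays mantra's output by reading the child process's stdout line by line. A Python 3 sketch of the same streaming pattern — the function name and the demo command are illustrative stand-ins for the mantra invocation:

```python
import subprocess
import sys

def stream_lines(cmd):
    """Run cmd and yield its stdout line by line, the same readline()
    loop startRenderProc() uses to relay renderer output (sketch)."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        yield line.rstrip("\n")
    proc.wait()

# Demo with a tiny child process instead of mantra:
lines = list(stream_lines([sys.executable, "-c", "print('a'); print('b')"]))
print(lines)  # -> ['a', 'b']
```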
def startShaderCompiler( shaderLib, shaderFile):
log.info("Starting vcc compiler procedure")
prepareEnv()
IDLE_PRIORITY_CLASS = 64
includePath = os.environ["MTM_HOME"] + "/shaderIncludes"
cwd = os.getcwd()
# go into shader directory because the -x option is needed to create .vex files and they are
# placed into the current directory
os.chdir(os.path.dirname(shaderFile))
cmd = "vcc -I " + includePath + " -x -m " + shaderLib + " " + shaderFile
log.info("Starting vcc cmd: %s" % cmd)
process = subprocess.Popen( cmd, bufsize = 1, shell = True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT, creationflags=IDLE_PRIORITY_CLASS)
while 1:
line = process.stdout.readline()
if not line: break
# log.info(line.strip())
pm.mel.trace(line.strip())
log.info("Compiling done.")
os.chdir(cwd)
def attributeManager():
reload(mtm_attributeManager)
#mtm_attributeManager.attributeManager()
mtm_attributeManager.GUI()
def menuCallback( arg = None):
log.debug("Mantra main menu callback: %s" % arg)
if arg == "AttributeManager":
attributeManager()
def mantraMainMenu():
log.debug("Creating mantra main menu")
menuName = "Mantra"
if pm.menu(menuName, query=True, exists=True):
pm.deleteUI(menuName)
gMainWindow = pm.mel.eval('$tmpVar=$gMainWindow')
mantraMenu = pm.menu(menuName, label = menuName, parent = gMainWindow, tearOff = True )
pm.menuItem( label = 'AttributeManager', command = Callback(menuCallback, "AttributeManager") )
pm.menuItem( label = 'DummyMenu' , command = Callback(menuCallback, "Dummy") )
pm.menuItem( label = '', divider = True )
pm.menuItem( label = 'DummyMenuA' , command = Callback(menuCallback, "DummyA") )
pm.setParent("..", menu = True)
def renderProcedure():
log.debug("RenderProcedure")
pm.mel.trace("================ Start MayaToMantra Rendering ======================")
# TODO: get directorys and filenames
currentTime = pm.currentTime(query = True)
outputPath = pm.workspace.path + "/mantra/" + pm.sceneName().basename().split(".")[0]
imagePath = pm.workspace.path + "/images"
imageName = pm.sceneName().basename().split(".")[0]
try:
mantraGlobals = pm.ls(type="mayaToMantraGlobals")[0]
except:
log.error("mayaToMantraGlobals not found")
return
mantraGlobals.basePath.set(outputPath)
mantraGlobals.imagePath.set(imagePath)
mantraGlobals.imageName.set(imageName)
if not os.path.exists(outputPath):
os.makedirs(outputPath)
if not os.path.exists(imagePath):
os.makedirs(imagePath)
shaderOutputPath = outputPath + "/shaders"
if not os.path.exists(shaderOutputPath):
os.makedirs(shaderOutputPath)
geoOutputPath = outputPath + "/geo"
if not os.path.exists(geoOutputPath):
os.makedirs(geoOutputPath)
pm.mayatomantra()
mantraGlobals.basePath.set("")
mantraGlobals.imagePath.set("")
mantraGlobals.imageName.set("")
pm.currentTime(currentTime)
def renderOptionsProcedure():
log.debug("RenderOptionsProcedure")
def commandRenderProcedure():
log.debug("commandRenderProcedure")
renderProcedure()
# reload(mantraRender)
# mar = mantraRender.mantraRenderer()
# mar.batchRendering = True
# mar.render()
def batchRenderProcedure():
log.debug("batchRenderProcedure")
renderProcedure()
def batchRenderOptionsProcedure():
log.debug("batchRenderOptionsProcedure")
def batchRenderOptionsStringProcedure():
log.debug("batchRenderOptionsStringProcedure")
def cancelBatchRenderProcedure():
log.debug("cancelBatchRenderProcedure")
def showBatchRenderProcedure():
log.debug("showBatchRenderProcedure")
def showRenderLogProcedure():
log.debug("showRenderLogProcedure")
def showBatchRenderLogProcedure():
log.debug("showBatchRenderLogProcedure")
def renderRegionProcedure():
log.debug("renderRegionProcedure")
def textureBakingProcedure():
log.debug("textureBakingProcedure")
def renderingEditorsSubMenuProcedure():
log.debug("renderingEditorsSubMenuProcedure")
def renderMenuProcedure():
log.debug("renderMenuProcedure")
def removeRenderer():
pm.renderer(RENDERERNAME, edit = True, unregisterRenderer = True)
menuName = RENDERERNAME
if pm.menu(menuName, query=True, exists=True):
pm.deleteUI(menuName)
def dimConnections(element):
print "DimConnections"
if element['name'] in ['motionblur']:
value = pm.checkBoxGrp( element['name'], query = True, v1 = True)
print element['uielement']
valueInv = value
#groups = mantraGlobalsAttributes.mantraGlobalsAttributes
#if el['name'] in ['xftimesamples', 'geotimesamples', 'motionfactor', 'mbtype']:
if value:
print "enabling ctrls"
else:
print "disabling ctrls"
pm.intFieldGrp('xftimesamples', edit = True, enable = valueInv)
pm.intFieldGrp('geotimesamples', edit = True, enable = valueInv)
pm.floatFieldGrp('motionfactor', edit = True, enable = valueInv)
pm.attrEnumOptionMenu('mbtype', edit = True, enable = valueInv)
pm.checkBoxGrp('imagemotionblur', edit = True, enable = valueInv)
def MantraTabCreateFromAtt():
log.debug("MantraTabCreate from Attribute List")
reload(mantraAttributes)
if len(pm.ls(type = GLOBALSNAME)) == 0:
mrg = pm.createNode(GLOBALSNAME, name = GLOBALSNAME)
else:
mrg = pm.ls(type = GLOBALSNAME)[0]
parentForm = pm.setParent(query = True)
pm.setUITemplate( "attributeEditorTemplate", pushTemplate = True)
pm.scrollLayout( "MantraScrollLayout", horizontalScrollBarThickness = 0)
pm.columnLayout("MantraColumnLayout", adjustableColumn = True)
mantraAttributes.mantraGlobalsATList.createUi(mrg)
pm.setUITemplate( "attributeEditorTemplate", popTemplate = True)
pm.formLayout(parentForm, edit = True, attachForm = [ ("MantraScrollLayout", "top", 0),
("MantraScrollLayout", "bottom", 0),
("MantraScrollLayout", "left", 0),
("MantraScrollLayout", "right", 0)
])
def MantraTabUpdate():
log.debug("MantraTabUpdate")
def MantraTranslatorTabCreate():
log.debug("MantraTabCreate from Attribute List")
reload(mantraAttributes)
if len(pm.ls(type = GLOBALSNAME)) == 0:
mrg = pm.createNode(GLOBALSNAME, name = GLOBALSNAME)
else:
mrg = pm.ls(type = GLOBALSNAME)[0]
parentForm = pm.setParent(query = True)
pm.setUITemplate( "attributeEditorTemplate", pushTemplate = True)
pm.scrollLayout( "MantraTrScrollLayout", horizontalScrollBarThickness = 0)
pm.columnLayout("MantraTrColumnLayout", adjustableColumn = True)
#mantraAttributes.mantraGlobalsATList.createUi(mrg)
pm.setUITemplate( "attributeEditorTemplate", popTemplate = True)
pm.formLayout(parentForm, edit = True, attachForm = [ ("MantraTrScrollLayout", "top", 0),
("MantraTrScrollLayout", "bottom", 0),
("MantraTrScrollLayout", "left", 0),
("MantraTrScrollLayout", "right", 0)
])
def MantraTranslatorTabUpdate():
log.debug("MantraTranslatorTabUpdate")
def melGlobals():
createMantraTab = "global proc createRendererTab()\
{\
print(\"createMelTab\");\n\
python(\"import mtm_initialize as tr;reload(tr);tr.MantraTabCreateFromAtt()\");\n\
};"
updateMantraTab = "global proc updateRendererTab()\
{\
print(\"updateMelTab\");\n\
python(\"import mtm_initialize as tr;reload(tr);tr.MantraTabUpdate()\");\n\
};"
pm.mel.eval(createMantraTab)
pm.mel.eval(updateMantraTab)
createMantraTab = "global proc createTranslatorTab()\
{\
print(\"createMelTab\");\n\
python(\"import mtm_initialize as tr;reload(tr);tr.MantraTranslatorTabCreate()\");\n\
};"
updateMantraTab = "global proc updateTranslatorTab()\
{\
print(\"updateMelTab\");\n\
python(\"import mtm_initialize as tr;reload(tr);tr.MantraTranslatorTabUpdate()\");\n\
};"
pm.mel.eval(createMantraTab)
pm.mel.eval(updateMantraTab)
def registerRenderer():
log.debug("registerRenderer")
baseCmd = "import mtm_initialize as mtmi;reload(mtmi);"
renderProc = baseCmd + "mtmi.renderProcedure()"
batchRenderProc = baseCmd + "mtmi.batchRenderProcedure()"
commandRenderProc = baseCmd + "mtmi.commandRenderProcedure()"
batchRenderOptionsProc = baseCmd + "mtmi.batchRenderOptionsProcedure()"
batchRenderOptionsStringProc = baseCmd + "mtmi.batchRenderOptionsStringProcedure()"
pm.renderer(RENDERERNAME, rendererUIName = RENDERERNAME)
pm.renderer(RENDERERNAME, edit = True, renderProcedure = "python(\"" + renderProc +"\")")
pm.renderer(RENDERERNAME, edit = True, batchRenderProcedure = "python(\"" + batchRenderProc +"\")")
pm.renderer(RENDERERNAME, edit = True, commandRenderProcedure = "python(\"" + commandRenderProc +"\")")
pm.renderer(RENDERERNAME, edit = True, batchRenderOptionsProcedure = "python(\"" + batchRenderOptionsProc +"\")")
pm.renderer(RENDERERNAME, edit = True, batchRenderOptionsStringProcedure = "python(\"" + batchRenderOptionsStringProc +"\")")
pm.renderer(RENDERERNAME, edit = True, addGlobalsNode = "defaultRenderGlobals")
pm.renderer(RENDERERNAME, edit = True, addGlobalsTab = ('Common', "createMayaSoftwareCommonGlobalsTab", "updateMayaSoftwareCommonGlobalsTab"))
pm.renderer(RENDERERNAME, edit = True, addGlobalsTab = (RENDERERNAME, "createRendererTab", "updateRendererTab"))
pm.renderer(RENDERERNAME, edit = True, addGlobalsTab = (RENDERERNAME + " Translator", "createTranslatorTab", "updateTranslatorTab"))
def unloadAndUnregister():
if pm.pluginInfo('mayatomantra', q = True, l=True):
pm.unloadPlugin('mayatomantra', force = True)
if pm.renderer(RENDERERNAME, q = True, exists = True):
pm.renderer(RENDERERNAME, unregisterRenderer = True)
pm.loadPlugin( 'mayatomantra' )
if len(pm.ls(type = GLOBALSNAME)) == 0:
pm.createNode(GLOBALSNAME, name = GLOBALSNAME)
else:
log.info("mayaToMantraGlobals node exists, using old one")
def unregister():
if pm.renderer(RENDERERNAME, q = True, exists = True):
pm.renderer(RENDERERNAME, unregisterRenderer = True)
def initRenderer():
"""Call this function after initialisation of the mantra render plugin,
from within the plugin.
"""
log.debug("Init Mantra Renderer script.")
#unloadAndUnregister()
melGlobals()
registerRenderer()
if not pm.about(batch=True):
mantraMainMenu()
#log.debug("Adding aetemplates to path.")
#sys.path.append(os.environ["MTM_HOME"]+"/scripts/AETemplates")
#192.168.64.133
| 37.765172 | 152 | 0.650178 |
20c291900c56c80e90f428e62fac1a4059ac57ce | 4,899 | py | Python | conanfile.py | icolwell-as/Lanelet2 | 0e2e222352936cd70a9fab5256684a97c3091996 | ["BSD-3-Clause"] | 19 | 2020-07-28T14:50:10.000Z | 2021-12-21T18:45:38.000Z | conanfile.py | icolwell-as/Lanelet2 | 0e2e222352936cd70a9fab5256684a97c3091996 | ["BSD-3-Clause"] | 1 | 2020-11-08T10:32:27.000Z | 2020-11-08T10:32:27.000Z | conanfile.py | icolwell-as/Lanelet2 | 0e2e222352936cd70a9fab5256684a97c3091996 | ["BSD-3-Clause"] | 11 | 2020-08-02T01:12:51.000Z | 2021-12-18T07:17:16.000Z |
import os
import sys
import em
import xml.etree.ElementTree as ET
from conans import ConanFile, CMake, tools
find_mrt_cmake="""
set(mrt_cmake_modules_FOUND True)
include(${CMAKE_CURRENT_LIST_DIR}/mrt_cmake_modules-extras.cmake)
"""
cmake_lists="""
cmake_minimum_required(VERSION 3.0)
project(lanelet2)
cmake_policy(SET CMP0079 NEW) # allows to do target_link_libraries on targets from subdirs
set(CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR})
set(BoostPython_FOUND Yes)
include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(SKIP_STD)
# declare dependencies
include_directories(lanelet2_core/include lanelet2_io/include lanelet2_projection/include lanelet2_traffic_rules/include
lanelet2_routing/include lanelet2_validation/include)
add_subdirectory(lanelet2_core)
add_subdirectory(lanelet2_io)
add_subdirectory(lanelet2_projection)
add_subdirectory(lanelet2_traffic_rules)
add_subdirectory(lanelet2_routing)
add_subdirectory(lanelet2_validation)
add_subdirectory(lanelet2_examples)
add_subdirectory(lanelet2_python)
add_subdirectory(lanelet2_maps)
# declare dependencies
target_link_libraries(lanelet2_io PUBLIC lanelet2_core)
target_link_libraries(lanelet2_projection PUBLIC lanelet2_core)
target_link_libraries(lanelet2_traffic_rules PUBLIC lanelet2_core)
target_link_libraries(lanelet2_routing PUBLIC lanelet2_core lanelet2_traffic_rules)
target_link_libraries(lanelet2_validation PUBLIC lanelet2_core lanelet2_io lanelet2_routing lanelet2_traffic_rules lanelet2_projection)
target_link_libraries(lanelet2_examples_compiler_flags INTERFACE lanelet2_core lanelet2_io lanelet2_routing lanelet2_traffic_rules lanelet2_projection)
target_link_libraries(lanelet2_python_compiler_flags INTERFACE lanelet2_core lanelet2_io lanelet2_routing lanelet2_traffic_rules lanelet2_projection)
"""
def read_version():
package = ET.parse('lanelet2_core/package.xml')
return package.find('version').text
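`read_version()` above pulls the version string out of a catkin-style `package.xml` with ElementTree. The same lookup on an in-memory document — the sample XML content is illustrative:

```python
import xml.etree.ElementTree as ET

PACKAGE_XML = """<?xml version="1.0"?>
<package>
  <name>lanelet2_core</name>
  <version>1.1.1</version>
</package>
"""

def read_version_from_string(text):
    """Same find('version').text lookup as read_version(), but on a string."""
    root = ET.fromstring(text)
    return root.find("version").text

print(read_version_from_string(PACKAGE_XML))  # -> 1.1.1
```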
def get_py_version():
return "{}.{}".format(sys.version_info.major, sys.version_info.minor)
class Lanelet2Conan(ConanFile):
name = "lanelet2"
version = read_version()
settings = "os", "compiler", "build_type", "arch"
generators = "cmake"
license = "BSD"
url = "https://github.com/fzi-forschungszentrum-informatik/lanelet2"
description = "Map handling framework for automated driving"
options = {"shared": [True, False], "fPIC": [True]}
default_options = {"shared": False, "fPIC": True, "boost:python_version": get_py_version(), "boost:without_python": False}
requires = ("python_dev_config/0.6@bincrafters/stable",
"boost/1.69.0@conan/stable",
"eigen/3.3.7@conan/stable",
"geographiclib/1.49@bincrafters/stable",
"pugixml/1.9@bincrafters/stable")
exports_sources = "*"
exports = "lanelet2_core/package.xml"
proj_list = [
'lanelet2_core',
'lanelet2_io',
'lanelet2_projection',
'lanelet2_traffic_rules',
'lanelet2_routing',
'lanelet2_validation'
]
def _configure_cmake(self):
cmake = CMake(self)
cmake.definitions["PYTHON_VERSION"] = get_py_version()
cmake.configure()
return cmake
def _pythonpath(self):
if self.settings.os == "Windows":
return os.path.join("lib", "python" + get_py_version(), "site-packages")
if os.path.exists("/etc/debian_version"):
print("On debian")
if sys.version_info.major == 3:
return os.path.join("lib", "python3", "dist-packages") # its ROS, idk...
else:
return os.path.join("lib", "python" + get_py_version(), "dist-packages")
return os.path.join("lib", "python" + get_py_version(), "site-packages")
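`_pythonpath()` chooses between `site-packages` and Debian's `dist-packages` when composing the install path. A simplified, hedged sketch of that decision (the helper name and the boolean flag are illustrative; the real method also special-cases Windows and Python 3 on Debian):

```python
import os

def python_package_dir(py_version, debian=False):
    """Mirror _pythonpath(): Debian-based systems use dist-packages,
    everything else site-packages (simplified sketch)."""
    subdir = "dist-packages" if debian else "site-packages"
    return os.path.join("lib", "python" + py_version, subdir)

print(python_package_dir("3.8"))               # lib/python3.8/site-packages (POSIX separators)
print(python_package_dir("3.8", debian=True))  # ends with dist-packages
```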
def source(self):
if not os.path.exists("mrt_cmake_modules"):
self.run("git clone https://github.com/KIT-MRT/mrt_cmake_modules.git")
mrt_cmake_dir = os.path.join(os.getcwd(), "mrt_cmake_modules")
with open("mrt_cmake_modules/cmake/mrt_cmake_modules-extras.cmake.em") as f:
extras = em.expand(f.read(), DEVELSPACE=True, PROJECT_SOURCE_DIR=mrt_cmake_dir,
CMAKE_CURRENT_SOURCE_DIR=mrt_cmake_dir)
with open("mrt_cmake_modules-extras.cmake", "w") as f:
f.write(extras)
with open("Findmrt_cmake_modules.cmake", "w") as f:
f.write(find_mrt_cmake)
with open("CMakeLists.txt", "w") as f:
f.write(cmake_lists)
def build(self):
cmake = self._configure_cmake()
cmake.build()
def package(self):
cmake = self._configure_cmake()
cmake.install()
def package_info(self):
self.cpp_info.libs = list(reversed(self.proj_list))
self.env_info.PYTHONPATH.append(os.path.join(self.package_folder, self._pythonpath()))
| 40.155738 | 151 | 0.719126 |
4b874874851d348e226ae2300b2fb53d178652e2 | 13,430 | py | Python | readimc/_imc_mcd_file.py | mezwick/readimc | 7f362157b468220ad07bb4d1cb48abb7d8199b73 | ["MIT"] | null | null | null | readimc/_imc_mcd_file.py | mezwick/readimc | 7f362157b468220ad07bb4d1cb48abb7d8199b73 | ["MIT"] | null | null | null | readimc/_imc_mcd_file.py | mezwick/readimc | 7f362157b468220ad07bb4d1cb48abb7d8199b73 | ["MIT"] | null | null | null |
import mmap
import numpy as np
import re
import xml.etree.ElementTree as ET
from imageio import imread
from os import PathLike
from typing import BinaryIO, List, Optional, Sequence, Union
from readimc._imc_mcd_xml_parser import IMCMcdXmlParser, IMCMcdXmlParserError
from readimc.data import Acquisition
from readimc.data import Panorama
from readimc.data import Slide
import readimc
class IMCMcdFile(readimc.IMCFileBase):
_XMLNS_REGEX = re.compile(r"{(?P<xmlns>.*)}")
def __init__(self, path: Union[str, PathLike]) -> None:
"""A class for reading Fluidigm(R) IMC(TM) MCD(TM) files
:param path: path to the Fluidigm(R) IMC(TM) MCD(TM) file
"""
super(IMCMcdFile, self).__init__(path)
self._fh: Optional[BinaryIO] = None
self._xml: Optional[ET.Element] = None
self._xmlns: Optional[str] = None
self._slides: Optional[List[Slide]] = None
@property
def xml(self) -> ET.Element:
"""Full metadata in proprietary Fluidigm(R) XML format"""
if self._xml is None:
raise IOError(f"MCD file '{self.path.name}' has not been opened")
return self._xml
@property
def xmlns(self) -> str:
"""XML namespace of full metadata in proprietary Fluidigm(R) XML
format"""
if self._xmlns is None:
raise IOError(f"MCD file '{self.path.name}' has not been opened")
return self._xmlns
@property
def slides(self) -> Sequence[Slide]:
"""Metadata on slides contained in this Fluidigm(R) IMC(TM) MCD(TM)
file"""
if self._slides is None:
raise IOError(f"MCD file '{self.path.name}' has not been opened")
return self._slides
def __enter__(self) -> "IMCMcdFile":
self.open()
return self
def __exit__(self, exc_type, exc_value, traceback) -> None:
self.close()
def open(self) -> None:
"""Opens the Fluidigm(R) IMC(TM) MCD(TM) file for reading.
It is good practice to use context managers whenever possible:
.. code-block:: python
with IMCMCDFile("/path/to/file.mcd") as f:
pass
"""
if self._fh is not None:
self._fh.close()
self._fh = open(self._path, mode="rb")
self._xml = self._read_xml()
self._xmlns = self._get_xmlns(self.xml)
xml_parser = IMCMcdXmlParser(self.xml, default_namespace=self.xmlns)
try:
self._slides = xml_parser.parse_slides()
except IMCMcdXmlParserError as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
"error parsing slide information from MCD-XML"
) from e
def close(self) -> None:
"""Closes the Fluidigm(R) IMC(TM) MCD(TM) file.
It is good practice to use context managers whenever possible:
.. code-block:: python
with IMCMCDFile("/path/to/file.mcd") as f:
pass
"""
if self._fh is not None:
self._fh.close()
self._fh = None
def read_acquisition(self, acquisition: Acquisition) -> np.ndarray:
"""Reads IMC(TM) acquisition data as numpy array.
:param acquisition: the acquisition to read
:return: the acquisition data as 32-bit floating point array,
shape: (c, y, x)
"""
if acquisition is None:
raise ValueError("acquisition")
try:
data_start_offset = int(acquisition.metadata["DataStartOffset"])
data_end_offset = int(acquisition.metadata["DataEndOffset"])
value_bytes = int(acquisition.metadata["ValueBytes"])
except (KeyError, ValueError) as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
"cannot locate acquisition image data"
) from e
if data_start_offset >= data_end_offset:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
"invalid acquisition image data offsets"
)
if value_bytes <= 0:
raise IOError("MCD file corrupted: invalid byte size")
num_channels = acquisition.num_channels
data_size = data_end_offset - data_start_offset
bytes_per_pixel = (num_channels + 3) * value_bytes
if data_size % bytes_per_pixel != 0:
data_size += 1
if data_size % bytes_per_pixel != 0:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
"invalid acquisition image data size"
)
num_pixels = data_size // bytes_per_pixel
self._fh.seek(0)
data = np.memmap(
self._fh,
dtype=np.float32,
mode="r",
offset=data_start_offset,
shape=(num_pixels, num_channels + 3),
)
width, height = np.amax(data[:, :2], axis=0).astype(int) + 1
if width * height != data.shape[0]:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
"inconsistent acquisition image data size"
)
img = np.zeros((height, width, num_channels), dtype=np.float32)
img[data[:, 1].astype(int), data[:, 0].astype(int), :] = data[:, 3:]
return np.moveaxis(img, -1, 0)
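The last step of `read_acquisition()` scatters per-pixel records `(x, y, z, c1..cn)` into an image grid with numpy fancy indexing. A pure-Python, toy-sized sketch of that scatter step — the function name and toy data are illustrative, not part of the original:

```python
def scatter_pixels(records, num_channels):
    """Pure-Python sketch of the scatter in read_acquisition():
    each record is (x, y, z, c1..cn); channel values land at img[y][x]."""
    width = int(max(r[0] for r in records)) + 1
    height = int(max(r[1] for r in records)) + 1
    img = [[[0.0] * num_channels for _ in range(width)] for _ in range(height)]
    for r in records:
        x, y = int(r[0]), int(r[1])
        img[y][x] = list(r[3:])
    return img

# Two pixels on one row, one channel each:
img = scatter_pixels([(0, 0, 0, 1.0), (1, 0, 0, 2.0)], num_channels=1)
print(img)  # -> [[[1.0], [2.0]]]
```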
def read_slide(self, slide: Slide) -> Optional[np.ndarray]:
"""Reads and decodes a slide image as numpy array using the ``imageio``
package.
.. note::
Slide images are stored as binary data within the Fluidigm(R)
IMC(TM) MCD(TM) file in an arbitrary encoding. The ``imageio``
package can decode most commonly used image file formats, but may
fail for more obscure formats, in which case an ``IOError`` is raised.
:param slide: the slide to read
:return: the slide image, or ``None`` if no image is available for the
specified slide
"""
try:
data_start_offset = int(slide.metadata["ImageStartOffset"])
data_end_offset = int(slide.metadata["ImageEndOffset"])
except (KeyError, ValueError) as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot locate image data for slide {slide.id}"
) from e
if data_start_offset == data_end_offset == 0:
return None
data_start_offset += 161
if data_start_offset >= data_end_offset:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"invalid image data offsets for slide {slide.id}"
)
try:
return self._read_image(
data_start_offset, data_end_offset - data_start_offset
)
except Exception as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot read image for slide {slide.id}"
) from e
def read_panorama(self, panorama: Panorama) -> np.ndarray:
"""Reads and decodes a panorama image as numpy array using the
``imageio`` package.
:param panorama: the panorama to read
:return: the panorama image as numpy array
"""
try:
data_start_offset = int(panorama.metadata["ImageStartOffset"])
data_end_offset = int(panorama.metadata["ImageEndOffset"])
except (KeyError, ValueError) as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot locate image data for panorama {panorama.id}"
) from e
data_start_offset += 161
if data_start_offset >= data_end_offset:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"invalid image data offsets for panorama {panorama.id}"
)
try:
return self._read_image(
data_start_offset, data_end_offset - data_start_offset
)
except Exception as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot read image for panorama {panorama.id}"
) from e
def read_before_ablation_image(
self, acquisition: Acquisition
) -> Optional[np.ndarray]:
"""Reads and decodes a before-ablation image as numpy array using the
``imageio`` package.
:param acquisition: the acquisition for which to read the
before-ablation image
:return: the before-ablation image as numpy array, or ``None`` if no
before-ablation image is available for the specified acquisition
"""
try:
data_start_offset = int(
acquisition.metadata["BeforeAblationImageStartOffset"]
)
data_end_offset = int(
acquisition.metadata["BeforeAblationImageEndOffset"]
)
except (KeyError, ValueError) as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot locate before-ablation image data "
f"for acquisition {acquisition.id}"
) from e
if data_start_offset == data_end_offset == 0:
return None
data_start_offset += 161
if data_start_offset >= data_end_offset:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"invalid before-ablation image data offsets "
f"for acquisition {acquisition.id}"
)
try:
return self._read_image(
data_start_offset, data_end_offset - data_start_offset
)
except Exception as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot read before-ablation image "
f"for acquisition {acquisition.id}"
) from e
def read_after_ablation_image(
self, acquisition: Acquisition
) -> Optional[np.ndarray]:
"""Reads and decodes an after-ablation image as numpy array using the
``imageio`` package.
:param acquisition: the acquisition for which to read the
after-ablation image
:return: the after-ablation image as numpy array, or ``None`` if no
after-ablation image is available for the specified acquisition
"""
try:
data_start_offset = int(
acquisition.metadata["AfterAblationImageStartOffset"]
)
data_end_offset = int(
acquisition.metadata["AfterAblationImageEndOffset"]
)
except (KeyError, ValueError) as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot locate after-ablation image data "
f"for acquisition {acquisition.id}"
) from e
if data_start_offset == data_end_offset == 0:
return None
data_start_offset += 161
if data_start_offset >= data_end_offset:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"invalid after-ablation image data offsets "
f"for acquisition {acquisition.id}"
)
try:
return self._read_image(
data_start_offset, data_end_offset - data_start_offset
)
except Exception as e:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"cannot read after-ablation image "
f"for acquisition {acquisition.id}"
) from e
def _read_xml(
self,
encoding: str = "utf-16-le",
start_sub: str = "<MCDSchema",
end_sub: str = "</MCDSchema>",
) -> ET.Element:
with mmap.mmap(self._fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
# V1 contains multiple MCDSchema entries
# As per imctools, the latest entry should be taken
start_sub_encoded = start_sub.encode(encoding=encoding)
start_index = mm.rfind(start_sub_encoded)
if start_index == -1:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"start of XML document '{start_sub}' not found"
)
end_sub_encoded = end_sub.encode(encoding=encoding)
end_index = mm.rfind(end_sub_encoded, start_index)
if end_index == -1:
raise IOError(
f"MCD file '{self.path.name}' corrupted: "
f"end of XML document '{end_sub}' not found"
)
mm.seek(start_index)
data = mm.read(end_index + len(end_sub_encoded) - start_index)
text = data.decode(encoding=encoding)
return ET.fromstring(text)
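`_read_xml()` locates the last embedded XML document by `rfind`-ing the byte-encoded start marker and slicing up to the matching end marker. A self-contained sketch of that marker-slicing idea — the helper name and demo blob are illustrative:

```python
def extract_last_document(data, start_sub, end_sub, encoding="utf-16-le"):
    """Sketch of the rfind-based slicing in _read_xml(): take the LAST
    start marker and the end marker that follows it."""
    start = start_sub.encode(encoding)
    end = end_sub.encode(encoding)
    i = data.rfind(start)
    if i == -1:
        raise ValueError("start marker not found")
    j = data.find(end, i)
    if j == -1:
        raise ValueError("end marker not found")
    return data[i : j + len(end)].decode(encoding)

blob = b"junk" + "<MCDSchema>old</MCDSchema><MCDSchema>new</MCDSchema>".encode("utf-16-le")
print(extract_last_document(blob, "<MCDSchema", "</MCDSchema>"))
# -> <MCDSchema>new</MCDSchema>
```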
def _read_image(self, data_offset: int, data_size: int) -> np.ndarray:
self._fh.seek(data_offset)
data = self._fh.read(data_size)
return imread(data)
@staticmethod
def _get_xmlns(elem: ET.Element) -> str:
m = re.match(IMCMcdFile._XMLNS_REGEX, elem.tag)
return m.group("xmlns") if m is not None else ""
def __repr__(self) -> str:
return str(self._path)
| 37.830986 | 79 | 0.574981 |
703cf18c35303307f1a9e9eee59a0b6df50ee29b | 346 | py | Python | src/main.py | syatsuzuka/Sample_TkinterApp | e2b25f2c8222bf128b71c0c66443890dbca61941 | ["Apache-2.0"] | null | null | null | src/main.py | syatsuzuka/Sample_TkinterApp | e2b25f2c8222bf128b71c0c66443890dbca61941 | ["Apache-2.0"] | null | null | null | src/main.py | syatsuzuka/Sample_TkinterApp | e2b25f2c8222bf128b71c0c66443890dbca61941 | ["Apache-2.0"] | null | null | null |
# coding: utf-8
'''
Sample Tkinter App
'''
import tkinter as tk
import json
from gui.MainWindow import MainWindow
if __name__ == '__main__':
'''
main function
'''
root = tk.Tk()
func_dict = dict()
settings = json.load(open("config/settings.json"))
app = MainWindow(root, func_dict, settings)
app.mainloop()
| 14.416667 | 54 | 0.635838 |
a19642f4888b1fa3cfad07801764104261e72912 | 4,300 | py | Python | viseron/camera/frame.py | dadaloop82/viseron | 1c6c446a4856e16c0e2ed6b9323d169fbdcae20f | ["MIT"] | 399 | 2020-08-31T21:13:07.000Z | 2022-03-31T18:54:26.000Z | viseron/camera/frame.py | dadaloop82/viseron | 1c6c446a4856e16c0e2ed6b9323d169fbdcae20f | ["MIT"] | 157 | 2020-09-01T18:59:56.000Z | 2022-03-25T07:14:19.000Z | viseron/camera/frame.py | dadaloop82/viseron | 1c6c446a4856e16c0e2ed6b9323d169fbdcae20f | ["MIT"] | 53 | 2020-09-01T07:35:59.000Z | 2022-03-28T23:21:16.000Z |
"""Frame read from FFmpeg."""
import logging
from typing import List
import cv2
import numpy as np
from viseron.detector.detected_object import DetectedObject
LOGGER = logging.getLogger(__name__)
class Frame:
"""Represents a frame read from FFMpeg."""
def __init__(
self,
cvt_color,
color_plane_width,
color_plane_height,
raw_frame,
frame_width,
frame_height,
):
self._cvt_color = cvt_color
self._color_plane_width = color_plane_width
self._color_plane_height = color_plane_height
self._raw_frame = raw_frame
self._frame_width = frame_width
self._frame_height = frame_height
self._decoded_frame = None
self._decoded_frame_umat = None
self._decoded_frame_umat_rgb = None
self._decoded_frame_mat_rgb = None
self._resized_frames = {}
self._preprocessed_frames = {}
self._objects: List[DetectedObject] = []
self._motion_contours = None
def decode_frame(self):
"""Decode the raw frame to a numpy array stored in self._decoded_frame; returns True on success, False on failure."""
try:
self._decoded_frame = np.frombuffer(self.raw_frame, np.uint8).reshape(
self._color_plane_height, self._color_plane_width
)
except ValueError:
LOGGER.warning("Failed to decode frame")
return False
return True
def resize(self, decoder_name, width, height):
"""Resize and store frame."""
self._resized_frames[decoder_name] = cv2.resize(
self.decoded_frame_umat_rgb,
(width, height),
interpolation=cv2.INTER_LINEAR,
)
def get_resized_frame(self, decoder_name: str):
"""Fetch a stored frame."""
return self._resized_frames.get(decoder_name, self.decoded_frame_umat_rgb)
def save_preprocessed_frame(self, decoder_name: str, frame):
"""Store a frame returned from a preprocessor."""
self._preprocessed_frames[decoder_name] = frame
def get_preprocessed_frame(self, decoder_name: str):
"""Return stored from from a preprocessor."""
return self._preprocessed_frames.get(
decoder_name, self.get_resized_frame(decoder_name)
)
@property
def raw_frame(self):
"""Return raw frame."""
return self._raw_frame
@property
def frame_width(self):
"""Return frame width."""
return self._frame_width
@property
def frame_height(self):
"""Return frame height."""
return self._frame_height
@property
def decoded_frame(self):
"""Return decoded frame. Decodes frame if not already done."""
if self._decoded_frame is None:
self.decode_frame()  # stores the array in self._decoded_frame; its bool return value must not overwrite it
return self._decoded_frame
@property
def decoded_frame_umat(self):
"""Return decoded frame in UMat format. Decodes frame if not already done."""
if self._decoded_frame_umat is None:
self._decoded_frame_umat = cv2.UMat(self.decoded_frame)
return self._decoded_frame_umat
@property
def decoded_frame_umat_rgb(self):
"""Return decoded frame in RGB UMat format.
Decodes frame if not already done.
"""
if self._decoded_frame_umat_rgb is None:
self._decoded_frame_umat_rgb = cv2.cvtColor(
self.decoded_frame_umat, self._cvt_color
)
return self._decoded_frame_umat_rgb
@property
def decoded_frame_mat_rgb(self):
"""Return decoded frame in RGB Mat format.
Decodes frame if not already done.
"""
if self._decoded_frame_mat_rgb is None:
self._decoded_frame_mat_rgb = self.decoded_frame_umat_rgb.get()
return self._decoded_frame_mat_rgb
@property
def objects(self) -> List[DetectedObject]:
"""Return all detected objects in frame."""
return self._objects
@objects.setter
def objects(self, objects):
self._objects = objects
@property
def motion_contours(self):
"""Return all motion contours in frame."""
return self._motion_contours
@motion_contours.setter
def motion_contours(self, motion_contours):
self._motion_contours = motion_contours
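The properties above follow a lazy decode-and-cache pattern: each representation is computed on first access and reused afterwards. A minimal, dependency-free sketch of the same pattern (the class and names here are illustrative, not from this codebase):

```python
class LazyFrame:
    """Cache an expensive derivation the first time it is requested."""

    def __init__(self, raw):
        self._raw = raw
        self._decoded = None  # computed on demand, then reused

    @property
    def decoded(self):
        # decode once; subsequent accesses return the cached object
        if self._decoded is None:
            # stand-in for the real decode (np.frombuffer + reshape)
            self._decoded = list(self._raw)
        return self._decoded


frame = LazyFrame(b"abc")
first = frame.decoded
assert frame.decoded is first  # second access reuses the cache
```

The same idea could also be expressed with `functools.cached_property`; the explicit `None` sentinel is used here because it matches the style of the class above.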
| 30.496454 | 85 | 0.647907 |
fe54167467bb6199791cf844b6a8fc4b32971a6f | 42 | py | Python | Chapter01/git_tag.py | PacktPublishing/Secret-Recipes-of-the-Python-Ninja | 805d00c7a54927ba94c9077e9a580508ee3c5e56 | ["MIT"] | stars: 13 | issues: null | forks: 6
git tag -a <tag_name> -m "<tag_message>"
| 14 | 40 | 0.642857 |
6b167b76185ec035b7492dfa81e1dc02daa7ebba | 1,261 | py | Python | quest/pycocoevalcap/bleu/bleu.py | shuokabe/deepQuest-mod | 7140a57c30deedb0570bc835c6ad3c848f0039f4 | ["BSD-3-Clause"] | stars: null | issues: null | forks: null
#!/usr/bin/env python
#
# File Name : bleu.py
#
# Description : Wrapper for BLEU scorer.
#
# Creation Date : 06-01-2015
# Last Modified : Thu 19 Mar 2015 09:13:28 PM PDT
# Authors : Hao Fang <hfang@uw.edu> and Tsung-Yi Lin <tl483@cornell.edu>
from bleu_scorer import BleuScorer
class Bleu:
def __init__(self, n=4):
        # default: compute BLEU score up to 4-grams
self._n = n
self._hypo_for_image = {}
self.ref_for_image = {}
def compute_score(self, gts, res):
assert (gts.keys() == res.keys())
imgIds = gts.keys()
bleu_scorer = BleuScorer(n=self._n)
for id in imgIds:
hypo = res[id]
ref = gts[id]
# Sanity check.
assert (type(hypo) is list)
assert (len(hypo) == 1)
assert (type(ref) is list)
assert (len(ref) >= 1)
bleu_scorer += (hypo[0], ref)
# score, scores = bleu_scorer.compute_score(option='shortest')
score, scores = bleu_scorer.compute_score(option='closest', verbose=1)
# score, scores = bleu_scorer.compute_score(option='average', verbose=1)
# return (bleu, bleu_info)
return score, scores
def method(self):
return "Bleu"
| 26.270833 | 80 | 0.57732 |
81ad7f3c40914b6ac3220f0a4595f40cb53153f5 | 299 | py | Python | Implementation-of-linear-algebra-operations/2.simple-numpy-implementation/lesson1.py | zelzhan/Linear-algebra-with-python | a58042c9f29f67aafcd2c1c4c1300a0e9223a650 | ["MIT"] | stars: null | issues: null | forks: null
# Using the distance formula and trigonometry functions in Python, calculate the
# magnitude and direction of a line with the two coordinates, (5,3) and (1,1).
import numpy as np
import math
a = np.array([5, 3])
b = np.array([1, 1])
length = np.linalg.norm(a-b)
angle = math.atan2((a-b)[1], (a-b)[0])  # atan2 avoids division by zero for vertical lines
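A quick numeric check of the result (this verification block is an addition, not part of the original lesson; `math.hypot` and `math.atan2` are used because they are robust for vertical lines):

```python
import math

dx, dy = 5 - 1, 3 - 1                      # vector from (1, 1) to (5, 3)
magnitude = math.hypot(dx, dy)             # distance formula: sqrt(dx**2 + dy**2)
direction = math.atan2(dy, dx)             # angle in radians

print(round(magnitude, 4))                 # 4.4721, i.e. 2*sqrt(5)
print(round(math.degrees(direction), 3))   # 26.565 degrees
```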
| 29.9 | 157 | 0.692308 |
e9d5359a52796193b3ef95c55a3422d64190fc1f | 13,773 | py | Python | backend/whip/serializers.py | tecty/SENG2011 | 81861a2d4452dee16433d5038d34beaaed9665c8 | ["MIT"] | stars: null | issues: null | forks: 2
from django.contrib.auth.models import User, Group
from rest_framework import serializers, validators
from .models import Location, Profile, Parameter, Post, Bid, Event, Message
from django.utils import timezone
# to support the recursive message
from rest_framework_recursive.fields import RecursiveField
class LocationSerializer(serializers.ModelSerializer):
class Meta:
model = Location
fields = ("address","lat","lng")
def create(self,validated_data):
# Only create a location if it's exist
return Location.objects.get_or_create(**validated_data)[0]
"""
Solution in
https://django-rest-auth.readthedocs.io/en/latest/faq.html
And this serialiser to simply the saving procedure, never have direct use in
viewset
"""
class ProfileSerializer(serializers.ModelSerializer):
# couldn't be changed by user
is_trusted = serializers.BooleanField(read_only = True)
class Meta:
model = Profile
fields = (
"location",
"tel",
"is_trusted",
)
class UserSerializer(serializers.HyperlinkedModelSerializer):
# password only can be written
password = serializers.CharField(write_only = True)
# also the password again
password_again = serializers.CharField(write_only = True)
"""
Extended Fields
"""
# location of this user
location = LocationSerializer(source = "profile.location")
# telephone of this user
tel = serializers.IntegerField(source = "profile.tel")
# whether it is is the trusted
is_trusted = serializers.BooleanField(
source = "profile.is_trusted", read_only = True)
"""
validation codes
"""
def validate(self, data):
if data["password"] != data["password_again"]:
raise serializers.ValidationError({
"password_again":[
"Two password must be same"
]
})
# we no longer need the password again
data.pop("password_again")
# ELSE: validate successfully
return data
class Meta:
model = User
fields = (
'id',
'username',
"password",
"password_again",
"location",
"tel",
"is_trusted",
)
def update(self, instance ,validated_data):
# do the same things as create
# pop the foreign to create instance
profile_data = validated_data.pop("profile",{})
        # `**` unpacks the location dict into keyword arguments.
        # This creates a Location instance only if one does not already exist;
        # [0] keeps the object from the (object, created) tuple.
location = Location.objects.get_or_create(
**(profile_data.pop("location")))[0]
# push back the location object
profile_data['location'] = location
# update this user
user = super(UserSerializer, self).update(instance,validated_data)
# set the password by a delicate function
# In this way, the password will encrypt and save correctly
user.set_password(validated_data["password"])
# save this model
user.save()
# user profile_data serializer to update
ProfileSerializer().update(user.profile,validated_data=profile_data)
return user
def create(self,validated_data):
# pop the foreign to create instance
profile_data = validated_data.pop("profile",{})
        # `**` unpacks the location dict into keyword arguments.
        # This creates a Location instance only if one does not already exist;
        # [0] keeps the object from the (object, created) tuple.
location = Location.objects.get_or_create(
**(profile_data.pop("location")))[0]
# push back the location object
profile_data['location'] = location
# create this user
user = super(UserSerializer, self).create(validated_data)
# TODO: need to modify if we divide our group data
user.groups.add(Group.objects.get(name='Poster'))
user.groups.add(Group.objects.get(name='Bidder'))
# set the password by a delicate function
# In this way, the password will encrypt and save correctly
user.set_password(validated_data["password"])
# save this model
user.save()
# user profile_data serializer to update
ProfileSerializer().update(user.profile,validated_data=profile_data)
return user
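The create/update methods above repeat one pattern: pop the nested dict, resolve it to a shared instance via `get_or_create`, and push the instance back before saving. A framework-free sketch of that pattern (all names below are illustrative stand-ins, not Django or DRF APIs):

```python
_registry = {}  # stand-in for Location.objects

def get_or_create(address, **attrs):
    """Return an existing record for `address` or create one
    (mimics Django's get_or_create, returning (obj, created))."""
    if address in _registry:
        return _registry[address], False
    obj = {"address": address, **attrs}
    _registry[address] = obj
    return obj, True

def create_user(validated_data):
    profile = validated_data.pop("profile", {})
    # resolve the nested location dict to a shared instance; [0] drops the flag
    profile["location"] = get_or_create(**profile.pop("location"))[0]
    return {"user": validated_data, "profile": profile}

u1 = create_user({"username": "a", "profile": {"location": {"address": "x"}}})
u2 = create_user({"username": "b", "profile": {"location": {"address": "x"}}})
assert u1["profile"]["location"] is u2["profile"]["location"]  # shared row
```

The payoff is deduplication: two users at the same address end up pointing at the same location record instead of creating duplicates.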
class MessageSerializer(serializers.ModelSerializer):
parentMsg = serializers.CharField(write_only = True,allow_blank = True)
sub_msg = serializers.ListField(child = RecursiveField(),source= "message_set.all",read_only = True)
owner = UserSerializer(read_only = True)
msg = serializers.CharField(allow_null = True)
class Meta:
model = Message
fields = (
"id",
"parentMsg",
"msg",
"owner",
"sub_msg"
)
def create(self, validated_data):
# push this user to validated data
validated_data['owner'] = self.context['request'].user
# pop the parent message first
parentMsg = validated_data.pop("parentMsg","")
if parentMsg:
            # re-push the fetched parentMsg instance to make the foreign-key link
validated_data["parentMsg"] = Message.objects.get(pk = parentMsg)
# call the super method to create this obj
return super(MessageSerializer,self).create(validated_data)
class EventSerializer(serializers.ModelSerializer):
owner = UserSerializer(read_only = True)
location = LocationSerializer()
    # clients cannot choose which posts are attached to the event
post_set = serializers.PrimaryKeyRelatedField(many = True, read_only = True)
"""
validation code of this serializer
"""
def validate_eventTime(self, eventTime):
""" Event time > now """
if eventTime <= timezone.now():
raise serializers.ValidationError(
"Event Time must be later than now;"
)
return eventTime
def validate_bidClosingTime(self, bidClosingTime):
""" Bid closing time > now """
if bidClosingTime <= timezone.now():
raise serializers.ValidationError(
"Bid Closing Time must be later than now;"
)
return bidClosingTime
def validate(self,data):
if data["bidClosingTime"] >= data["eventTime"]:
raise serializers.ValidationError(
{"bidClosingTime":[
"Bid Closing must happen before the Event time "
]}
)
# ELSE: validate successfully
return data
class Meta:
model = Event
fields = (
"id",
"title",
"owner",
"eventTime",
"bidClosingTime",
"location",
"post_set"
)
def create(self,validated_data):
validated_data['owner'] = self.context['request'].user
# location data may not exist
location = validated_data.pop("location",{})
# get or create a location
validated_data["location"] = \
LocationSerializer().create(location)
# use parents method to create this obj
post = super(EventSerializer,self).create(validated_data)
# return back this created obj
return post
def update(self, instance ,validated_data):
# these field won't update
validated_data.pop("owner", {})
validated_data.pop("post_set",{})
# get or create the new location instance
# location data may not exist
location = validated_data.pop("location",{})
# get or create a location
validated_data["location"] = \
LocationSerializer().create(location)
# use the serializer to update this instance
event = super(EventSerializer, self).update(instance,validated_data)
# return back the updated instance
return event
class ParameterSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Parameter
# fields = ('key', 'value', "isDelete")
fields = ('id','key', 'value')
class BidSerializer(serializers.HyperlinkedModelSerializer):
    # protect the state from being modified by the client
    # TODO: provide another endpoint to change it
state = serializers.CharField(read_only = True)
post = serializers.PrimaryKeyRelatedField(queryset = Post.objects.all())
owner = UserSerializer(
read_only = True,
)
    # inherit the serializer context from this class
msg = MessageSerializer(read_only = True)
message = serializers.CharField(write_only = True,required = False)
rateOfBidder = serializers.FloatField(read_only = True)
bidderReceivedPoints = serializers.IntegerField(read_only = True)
class Meta:
model = Bid
fields = (
'id',
'post',
'owner',
'msg',
'message',
'offer',
"state",
"bidderReceivedPoints",
"rateOfBidder"
)
    def update(self, instance, validated_data):
        # the offer can only be changed in these states
        if instance.state in ["BD", "CL"]:
            # and only the owner may change the offer
            instance.offer = validated_data["offer"]
            # reset the state to BD
            instance.state = "BD"
            # push the change to the database
            instance.save()
        return instance
def create(self, validated_data):
        # push the current user into the validated data
validated_data['owner'] = self.context['request'].user
        try:
            # try to fetch a bid under the (owner, post) unique constraint
            bid = Bid.objects.get(
                owner = validated_data["owner"],
                post = validated_data["post"]
            )
            # a bid for this post already exists, so
            # redirect to update it rather than create a new bid
            return self.update(bid, validated_data)
        except Bid.DoesNotExist:
            # no existing bid; fall through and create one
            pass
        # pop the raw message text to create the Message object
msg = validated_data.pop('message',"")
# pass this message as string to it
validated_data['msg'] = \
MessageSerializer(context = self.context ).create({"msg": msg})
# created and save the bid bid
bid = super(BidSerializer,self).create(validated_data)
return bid
class PostSerializer(serializers.ModelSerializer):
# eventId = serializers.IntegerField(write_only = True)
    # set the display style of the foreign relation
# extraParameter =serializers.StringRelatedField(many = True)
state = serializers.CharField(read_only = True)
bid_set = BidSerializer(many=True,read_only = True)
posterReceivedPoints =serializers.IntegerField(read_only = True)
    # inherit the serializer context from this class
msg = MessageSerializer(read_only = True)
message = serializers.CharField(write_only = True,
required = False ,allow_blank = True)
"""
validation code of this serializer
"""
    def validate_extraParameter(self, extraParameter):
        """
        Enforce mutual exclusion of keys in extraParameter.
        If there is a duplicate key in extraParameter, raise an error.
        """
        # create a dict to detect collisions
        ParameterDict = {}
        for Parameter in extraParameter:
            # perform a check for a duplicate key in the dictionary
            if Parameter.key in ParameterDict:
                # raise a validation error and break the loop
                raise serializers.ValidationError(
                    "extraParameter must have unique key"
                )
            # add this key/value pair into the dict
            # so a later duplicate key will raise an error
            ParameterDict[Parameter.key] = Parameter.value
        # ELSE: validation succeeded; return the data to validated_data
        return extraParameter
class Meta:
model = Post
fields = (
# "eventId",
"id",
'title',
"event",
"peopleCount",
"budget",
"posterReceivedPoints",
"state",
"extraParameter",
"msg",
"message",
"bid_set",
)
def validate(self,data):
if data["event"].owner != self.context['request'].user:
raise serializers.ValidationError(
{"owner":[
"You'r not the owner of this event."
]}
)
return data
def create(self, validated_data):
        # pop the raw message text to create the Message object
msg = validated_data.pop('message',"")
# pass this message as string to it
validated_data['msg'] = \
MessageSerializer(context = self.context ).create({"msg": msg})
# use parents method to create this obj
post = super(PostSerializer,self).create(validated_data)
# return back this created obj
return post
def update(self, instance ,validated_data):
        # these fields won't update
        validated_data.pop("event", "")
# use this serializer to update the data
instance = super(PostSerializer, self)\
.update(instance, validated_data)
        return instance
| 33.592683 | 104 | 0.603209 |
0d8fddd0ba24e39cfea36ac6f6d8e708da0a403a | 31,872 | py | Python | sklearn/metrics/_regression.py | ryuwd/scikit-learn | da562b4fa58bdce4a7f3470f733f33d728747a66 | ["BSD-3-Clause"] | stars: 13 | issues: 2 | forks: 8
"""Metrics to assess performance on regression task.
Functions named as ``*_score`` return a scalar value to maximize: the higher
the better.
Functions named as ``*_error`` or ``*_loss`` return a scalar value to minimize:
the lower the better.
"""
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Mathieu Blondel <mathieu@mblondel.org>
# Olivier Grisel <olivier.grisel@ensta.org>
# Arnaud Joly <a.joly@ulg.ac.be>
# Jochen Wersdorfer <jochen@wersdoerfer.de>
# Lars Buitinck
# Joel Nothman <joel.nothman@gmail.com>
# Karan Desai <karandesai281196@gmail.com>
# Noel Dawe <noel@dawe.me>
# Manoj Kumar <manojkumarsivaraj334@gmail.com>
# Michael Eickenberg <michael.eickenberg@gmail.com>
# Konstantin Shmelkov <konstantin.shmelkov@polytechnique.edu>
# Christian Lorentzen <lorentzen.ch@googlemail.com>
# Ashutosh Hathidara <ashutoshhathidara98@gmail.com>
# License: BSD 3 clause
import numpy as np
import warnings
from .._loss.glm_distribution import TweedieDistribution
from ..utils.validation import (check_array, check_consistent_length,
_num_samples)
from ..utils.validation import column_or_1d
from ..utils.validation import _deprecate_positional_args
from ..utils.validation import _check_sample_weight
from ..utils.stats import _weighted_percentile
from ..exceptions import UndefinedMetricWarning
__ALL__ = [
"max_error",
"mean_absolute_error",
"mean_squared_error",
"mean_squared_log_error",
"median_absolute_error",
"mean_absolute_percentage_error",
"r2_score",
"explained_variance_score",
"mean_tweedie_deviance",
"mean_poisson_deviance",
"mean_gamma_deviance",
]
def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
"""Check that y_true and y_pred belong to the same regression task.
Parameters
----------
y_true : array-like
y_pred : array-like
multioutput : array-like or string in ['raw_values', uniform_average',
'variance_weighted'] or None
None is accepted due to backward compatibility of r2_score().
Returns
-------
type_true : one of {'continuous', continuous-multioutput'}
The type of the true target data, as output by
'utils.multiclass.type_of_target'.
y_true : array-like of shape (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples, n_outputs)
Estimated target values.
multioutput : array-like of shape (n_outputs) or string in ['raw_values',
uniform_average', 'variance_weighted'] or None
Custom output weights if ``multioutput`` is array-like or
just the corresponding argument if ``multioutput`` is a
correct keyword.
dtype : str or list, default="numeric"
the dtype argument passed to check_array.
"""
check_consistent_length(y_true, y_pred)
y_true = check_array(y_true, ensure_2d=False, dtype=dtype)
y_pred = check_array(y_pred, ensure_2d=False, dtype=dtype)
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, str):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
@_deprecate_positional_args
def mean_absolute_error(y_true, y_pred, *,
sample_weight=None,
multioutput='uniform_average'):
"""Mean absolute error regression loss.
Read more in the :ref:`User Guide <mean_absolute_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average'} or array-like of shape \
(n_outputs,), default='uniform_average'
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
If multioutput is 'raw_values', then mean absolute error is returned
for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.
MAE output is non-negative floating point. The best value is 0.0.
Examples
--------
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
output_errors = np.average(np.abs(y_pred - y_true),
weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def mean_absolute_percentage_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean absolute percentage error regression loss.
Note here that we do not represent the output as a percentage in range
[0, 100]. Instead, we represent it in range [0, 1/eps]. Read more in the
:ref:`User Guide <mean_absolute_percentage_error>`.
.. versionadded:: 0.24
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average'} or array-like
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
If input is list then the shape must be (n_outputs,).
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats in the range [0, 1/eps]
If multioutput is 'raw_values', then mean absolute percentage error
is returned for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.
MAPE output is non-negative floating point. The best value is 0.0.
    But note the fact that bad predictions can lead to arbitrarily large
MAPE values, especially if some y_true values are very close to zero.
Note that we return a large value instead of `inf` when y_true is zero.
Examples
--------
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.3273...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.5515...
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.6198...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,
weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
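The epsilon clamp above replaces zero targets with machine epsilon so the per-sample ratio stays finite instead of dividing by zero. A dependency-free sketch of the same arithmetic (using `sys.float_info.epsilon`, which equals `np.finfo(np.float64).eps` for IEEE doubles):

```python
import sys

eps = sys.float_info.epsilon  # DBL_EPSILON, ~2.22e-16

y_true = [0.0, 2.0]
y_pred = [1.0, 2.0]

# per-sample absolute percentage error with the same clamping as above
ape = [abs(p - t) / max(abs(t), eps) for t, p in zip(y_true, y_pred)]

# a zero target yields the finite value 1/eps (~4.5e15) rather than inf
assert ape[0] == 1 / eps
assert ape[1] == 0.0
```

This is why the docstring warns that MAPE can become arbitrarily large when targets are near zero: the clamp prevents `inf`, but the score can still reach `1/eps`.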
@_deprecate_positional_args
def mean_squared_error(y_true, y_pred, *,
sample_weight=None,
multioutput='uniform_average', squared=True):
"""Mean squared error regression loss.
Read more in the :ref:`User Guide <mean_squared_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average'} or array-like of shape \
(n_outputs,), default='uniform_average'
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
squared : bool, default=True
If True returns MSE value, if False returns RMSE value.
Returns
-------
loss : float or ndarray of floats
A non-negative floating point value (the best value is 0.0), or an
array of floating point values, one for each individual target.
Examples
--------
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred, squared=False)
0.612...
>>> y_true = [[0.5, 1],[-1, 1],[7, -6]]
>>> y_pred = [[0, 2],[-1, 2],[8, -5]]
>>> mean_squared_error(y_true, y_pred)
0.708...
>>> mean_squared_error(y_true, y_pred, squared=False)
0.822...
>>> mean_squared_error(y_true, y_pred, multioutput='raw_values')
array([0.41666667, 1. ])
>>> mean_squared_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.825...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
output_errors = np.average((y_true - y_pred) ** 2, axis=0,
weights=sample_weight)
if not squared:
output_errors = np.sqrt(output_errors)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
@_deprecate_positional_args
def mean_squared_log_error(y_true, y_pred, *,
sample_weight=None,
multioutput='uniform_average'):
"""Mean squared logarithmic error regression loss.
Read more in the :ref:`User Guide <mean_squared_log_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average'} or array-like of shape \
(n_outputs,), default='uniform_average'
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors when the input is of multioutput
format.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
A non-negative floating point value (the best value is 0.0), or an
array of floating point values, one for each individual target.
Examples
--------
>>> from sklearn.metrics import mean_squared_log_error
>>> y_true = [3, 5, 2.5, 7]
>>> y_pred = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y_true, y_pred)
0.039...
>>> y_true = [[0.5, 1], [1, 2], [7, 6]]
>>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
>>> mean_squared_log_error(y_true, y_pred)
0.044...
>>> mean_squared_log_error(y_true, y_pred, multioutput='raw_values')
array([0.00462428, 0.08377444])
>>> mean_squared_log_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.060...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
if (y_true < 0).any() or (y_pred < 0).any():
raise ValueError("Mean Squared Logarithmic Error cannot be used when "
"targets contain negative values.")
return mean_squared_error(np.log1p(y_true), np.log1p(y_pred),
sample_weight=sample_weight,
multioutput=multioutput)
@_deprecate_positional_args
def median_absolute_error(y_true, y_pred, *, multioutput='uniform_average',
sample_weight=None):
"""Median absolute error regression loss.
Median absolute error output is non-negative floating point. The best value
is 0.0. Read more in the :ref:`User Guide <median_absolute_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
multioutput : {'raw_values', 'uniform_average'} or array-like of shape \
(n_outputs,), default='uniform_average'
Defines aggregating of multiple output values. Array-like value defines
weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
.. versionadded:: 0.24
Returns
-------
loss : float or ndarray of floats
If multioutput is 'raw_values', then mean absolute error is returned
for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.
Examples
--------
>>> from sklearn.metrics import median_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> median_absolute_error(y_true, y_pred)
0.75
>>> median_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> median_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
if sample_weight is None:
output_errors = np.median(np.abs(y_pred - y_true), axis=0)
else:
sample_weight = _check_sample_weight(sample_weight, y_pred)
output_errors = _weighted_percentile(np.abs(y_pred - y_true),
sample_weight=sample_weight)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
@_deprecate_positional_args
def explained_variance_score(y_true, y_pred, *,
sample_weight=None,
multioutput='uniform_average'):
"""Explained variance regression score function.
Best possible score is 1.0, lower values are worse.
Read more in the :ref:`User Guide <explained_variance_score>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average', 'variance_weighted'} or \
array-like of shape (n_outputs,), default='uniform_average'
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
Returns
-------
score : float or ndarray of floats
The explained variance or ndarray if 'multioutput' is 'raw_values'.
Notes
-----
This is not a symmetric function.
Examples
--------
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred)
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='uniform_average')
0.983...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
y_diff_avg = np.average(y_true - y_pred, weights=sample_weight, axis=0)
numerator = np.average((y_true - y_pred - y_diff_avg) ** 2,
weights=sample_weight, axis=0)
y_true_avg = np.average(y_true, weights=sample_weight, axis=0)
denominator = np.average((y_true - y_true_avg) ** 2,
weights=sample_weight, axis=0)
nonzero_numerator = numerator != 0
nonzero_denominator = denominator != 0
valid_score = nonzero_numerator & nonzero_denominator
output_scores = np.ones(y_true.shape[1])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if isinstance(multioutput, str):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
            # passing None as weights to np.average results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
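Unlike R^2, explained variance ignores a constant bias in the predictions, because the mean residual is subtracted before squaring. A pure-Python sketch of the two single-output formulas (the function names here are illustrative):

```python
from statistics import mean

def ev_score(y_true, y_pred):
    # explained variance: residual variance around the mean residual
    resid = [t - p for t, p in zip(y_true, y_pred)]
    bias = mean(resid)
    num = mean([(r - bias) ** 2 for r in resid])
    den = mean([(t - mean(y_true)) ** 2 for t in y_true])
    return 1.0 - num / den

def r2_plain(y_true, y_pred):
    # R^2: raw squared residuals, so a systematic offset is penalized
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean(y_true)) ** 2 for t in y_true)
    return 1.0 - num / den

y_true, y_pred = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]  # perfect shape, +1 bias
print(ev_score(y_true, y_pred))   # 1.0  (bias ignored)
print(r2_plain(y_true, y_pred))   # -0.5 (bias penalized)
```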
@_deprecate_positional_args
def r2_score(y_true, y_pred, *, sample_weight=None,
multioutput="uniform_average"):
"""R^2 (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always
predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
Read more in the :ref:`User Guide <r2_score>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
multioutput : {'raw_values', 'uniform_average', 'variance_weighted'}, \
array-like of shape (n_outputs,) or None, default='uniform_average'
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
Default is "uniform_average".
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
.. versionchanged:: 0.19
Default value of multioutput is 'uniform_average'.
Returns
-------
z : float or ndarray of floats
The R^2 score or ndarray of scores if 'multioutput' is
'raw_values'.
Notes
-----
This is not a symmetric function.
Unlike most other scores, R^2 score may be negative (it need not actually
be the square of a quantity R).
This metric is not well-defined for single samples and will return a NaN
value if n_samples is less than two.
References
----------
.. [1] `Wikipedia entry on the Coefficient of determination
<https://en.wikipedia.org/wiki/Coefficient_of_determination>`_
Examples
--------
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred,
... multioutput='variance_weighted')
0.938...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> r2_score(y_true, y_pred)
-3.0
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
if _num_samples(y_pred) < 2:
msg = "R^2 score is not well-defined with less than two samples."
warnings.warn(msg, UndefinedMetricWarning)
return float('nan')
if sample_weight is not None:
sample_weight = column_or_1d(sample_weight)
weight = sample_weight[:, np.newaxis]
else:
weight = 1.
numerator = (weight * (y_true - y_pred) ** 2).sum(axis=0,
dtype=np.float64)
denominator = (weight * (y_true - np.average(
y_true, axis=0, weights=sample_weight)) ** 2).sum(axis=0,
dtype=np.float64)
nonzero_denominator = denominator != 0
nonzero_numerator = numerator != 0
valid_score = nonzero_denominator & nonzero_numerator
output_scores = np.ones([y_true.shape[1]])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
    # arbitrarily set to zero to avoid -inf scores; a constant
    # y_true is not interesting for scoring a regression anyway
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if isinstance(multioutput, str):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
            # passing None as weights to np.average results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
# avoid fail on constant y or one-element arrays
if not np.any(nonzero_denominator):
if not np.any(nonzero_numerator):
return 1.0
else:
return 0.0
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
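The guard for a zero denominator above (a constant y_true) can be seen in isolation — a single-output sketch with assumed names, mirroring the same edge-case handling:

```python
from statistics import mean

def r2_single(y_true, y_pred):
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean(y_true)) ** 2 for t in y_true)
    if den == 0:  # constant y_true: 1.0 for a perfect fit, else 0.0 (not -inf)
        return 1.0 if num == 0 else 0.0
    return 1.0 - num / den

print(r2_single([2.0, 2.0, 2.0], [2.0, 2.0, 2.0]))  # 1.0
print(r2_single([2.0, 2.0, 2.0], [1.0, 2.0, 3.0]))  # 0.0
```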
def max_error(y_true, y_pred):
"""
max_error metric calculates the maximum residual error.
Read more in the :ref:`User Guide <max_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated target values.
Returns
-------
max_error : float
A positive floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import max_error
>>> y_true = [3, 2, 7, 1]
>>> y_pred = [4, 2, 7, 1]
>>> max_error(y_true, y_pred)
1
"""
y_type, y_true, y_pred, _ = _check_reg_targets(y_true, y_pred, None)
if y_type == 'continuous-multioutput':
raise ValueError("Multioutput not supported in max_error")
return np.max(np.abs(y_true - y_pred))
@_deprecate_positional_args
def mean_tweedie_deviance(y_true, y_pred, *, sample_weight=None, power=0):
"""Mean Tweedie deviance regression loss.
Read more in the :ref:`User Guide <mean_tweedie_deviance>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
power : float, default=0
Tweedie power parameter. Either power <= 0 or power >= 1.
        The higher `power`, the less weight is given to extreme
        deviations between true and predicted targets.
- power < 0: Extreme stable distribution. Requires: y_pred > 0.
- power = 0 : Normal distribution, output corresponds to
mean_squared_error. y_true and y_pred can be any real numbers.
- power = 1 : Poisson distribution. Requires: y_true >= 0 and
y_pred > 0.
        - 1 < power < 2 : Compound Poisson distribution. Requires: y_true >= 0
and y_pred > 0.
- power = 2 : Gamma distribution. Requires: y_true > 0 and y_pred > 0.
- power = 3 : Inverse Gaussian distribution. Requires: y_true > 0
and y_pred > 0.
- otherwise : Positive stable distribution. Requires: y_true > 0
and y_pred > 0.
Returns
-------
loss : float
A non-negative floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import mean_tweedie_deviance
>>> y_true = [2, 0, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_tweedie_deviance(y_true, y_pred, power=1)
1.4260...
"""
y_type, y_true, y_pred, _ = _check_reg_targets(
y_true, y_pred, None, dtype=[np.float64, np.float32])
if y_type == 'continuous-multioutput':
raise ValueError("Multioutput not supported in mean_tweedie_deviance")
check_consistent_length(y_true, y_pred, sample_weight)
if sample_weight is not None:
sample_weight = column_or_1d(sample_weight)
sample_weight = sample_weight[:, np.newaxis]
dist = TweedieDistribution(power=power)
dev = dist.unit_deviance(y_true, y_pred, check_input=True)
return np.average(dev, weights=sample_weight)
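For power=1, the Tweedie unit deviance is 2*(y*log(y/mu) - y + mu), with the y*log(y/mu) term taken as 0 when y = 0. A quick sketch (illustrative function name) that reproduces the docstring example:

```python
from math import log

def poisson_deviance(y_true, y_pred):
    # unit deviance per sample, then an unweighted mean
    dev = [2.0 * ((y * log(y / mu) if y > 0 else 0.0) - y + mu)
           for y, mu in zip(y_true, y_pred)]
    return sum(dev) / len(dev)

print(round(poisson_deviance([2, 0, 1, 4], [0.5, 0.5, 2.0, 2.0]), 4))  # 1.426
```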
@_deprecate_positional_args
def mean_poisson_deviance(y_true, y_pred, *, sample_weight=None):
"""Mean Poisson deviance regression loss.
Poisson deviance is equivalent to the Tweedie deviance with
the power parameter `power=1`.
Read more in the :ref:`User Guide <mean_tweedie_deviance>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values. Requires y_true >= 0.
y_pred : array-like of shape (n_samples,)
Estimated target values. Requires y_pred > 0.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
-------
loss : float
A non-negative floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import mean_poisson_deviance
>>> y_true = [2, 0, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_poisson_deviance(y_true, y_pred)
1.4260...
"""
return mean_tweedie_deviance(
y_true, y_pred, sample_weight=sample_weight, power=1
)
@_deprecate_positional_args
def mean_gamma_deviance(y_true, y_pred, *, sample_weight=None):
"""Mean Gamma deviance regression loss.
Gamma deviance is equivalent to the Tweedie deviance with
the power parameter `power=2`. It is invariant to scaling of
the target variable, and measures relative errors.
Read more in the :ref:`User Guide <mean_tweedie_deviance>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values. Requires y_true > 0.
y_pred : array-like of shape (n_samples,)
Estimated target values. Requires y_pred > 0.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
-------
loss : float
A non-negative floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import mean_gamma_deviance
>>> y_true = [2, 0.5, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_gamma_deviance(y_true, y_pred)
1.0568...
"""
return mean_tweedie_deviance(
y_true, y_pred, sample_weight=sample_weight, power=2
)
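The scale invariance claimed above can be checked directly: for power=2 the unit deviance is 2*(log(mu/y) + y/mu - 1), which depends only on the ratio y/mu. A quick sketch:

```python
from math import log

def gamma_unit_deviance(y, mu):
    # depends only on y / mu, hence invariant to scaling both by a constant
    return 2.0 * (log(mu / y) + y / mu - 1.0)

d = gamma_unit_deviance(3.0, 2.0)
d_scaled = gamma_unit_deviance(300.0, 200.0)  # same relative error, 100x scale
print(abs(d - d_scaled) < 1e-12)  # True
```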
#---------------------------------------------------------------------------------
# File: users/migrations/0016_auto_20201030_0142.py
# Repo: namanmalikk/UpCare  (license: Apache-2.0)
#---------------------------------------------------------------------------------
# Generated by Django 3.0.10 on 2020-10-29 20:12
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0015_auto_20201030_0139'),
]
operations = [
migrations.AlterField(
model_name='symptom',
name='name',
field=models.CharField(choices=[('itching', 'itching'), ('skin_rash', 'skin_rash'), ('nodal_skin_eruptions', 'nodal_skin_eruptions'), ('continuous_sneezing', 'continuous_sneezing'), ('shivering', 'shivering'), ('chills', 'chills'), ('joint_pain', 'joint_pain'), ('stomach_pain', 'stomach_pain'), ('acidity', 'acidity'), ('ulcers_on_tongue', 'ulcers_on_tongue'), ('muscle_wasting vomiting', 'muscle_wasting vomiting'), ('burning_micturition', 'burning_micturition'), ('spotting_ urination', 'spotting_ urination'), ('fatigue', 'fatigue'), ('weight_gain', 'weight_gain'), ('anxiety', 'anxiety'), ('cold_hands_and_feets', 'cold_hands_and_feets'), ('mood_swings', 'mood_swings'), ('weight_loss', 'weight_loss'), ('restlessness', 'restlessness'), ('lethargy', 'lethargy'), ('patches_in_throat', 'patches_in_throat'), ('irregular_sugar_level', 'irregular_sugar_level'), ('cough', 'cough'), ('high_fever', 'high_fever'), ('sunken_eyes', 'sunken_eyes'), ('breathlessness', 'breathlessness'), ('sweating', 'sweating'), ('dehydration', 'dehydration'), ('indigestion', 'indigestion'), ('headache', 'headache'), ('yellowish_skin', 'yellowish_skin'), ('dark_urine', 'dark_urine'), ('nausea', 'nausea'), ('loss_of_appetite', 'loss_of_appetite'), ('pain_behind_the_eyes', 'pain_behind_the_eyes'), ('back_pain', 'back_pain'), ('constipation', 'constipation'), ('abdominal_pain', 'abdominal_pain'), ('diarrhoea', 'diarrhoea'), ('mild_fever', 'mild_fever'), ('yellow_urine', 'yellow_urine'), ('yellowing_of_eyes', 'yellowing_of_eyes'), ('acute_liver_failure', 'acute_liver_failure'), ('fluid_overload', 'fluid_overload'), ('swelling_of_stomach', 'swelling_of_stomach'), ('swelled_lymph_nodes', 'swelled_lymph_nodes'), ('malaise', 'malaise'), ('blurred_and_distorted_vision', 'blurred_and_distorted_vision'), ('phlegm', 'phlegm'), ('throat_irritation', 'throat_irritation'), ('redness_of_eyes', 'redness_of_eyes'), ('sinus_pressure', 'sinus_pressure'), ('runny_nose', 'runny_nose'), ('congestion', 'congestion'), 
('chest_pain', 'chest_pain'), ('weakness_in_limbs', 'weakness_in_limbs'), ('fast_heart_rate', 'fast_heart_rate'), ('pain_during_bowel_movements', 'pain_during_bowel_movements'), ('pain_in_anal_region', 'pain_in_anal_region'), ('bloody_stool', 'bloody_stool'), ('irritation_in_anus', 'irritation_in_anus'), ('neck_pain', 'neck_pain'), ('dizziness', 'dizziness'), ('cramps', 'cramps'), ('bruising', 'bruising'), ('obesity', 'obesity'), ('swollen_legs', 'swollen_legs'), ('swollen_blood_vessels', 'swollen_blood_vessels'), ('puffy_face_and_eyes', 'puffy_face_and_eyes'), ('enlarged_thyroid', 'enlarged_thyroid'), ('brittle_nails', 'brittle_nails'), ('swollen_extremeties', 'swollen_extremeties'), ('excessive_hunger', 'excessive_hunger'), ('extra_marital_contacts', 'extra_marital_contacts'), ('drying_and_tingling_lips', 'drying_and_tingling_lips'), ('slurred_speech', 'slurred_speech'), ('knee_pain', 'knee_pain'), ('hip_joint_pain', 'hip_joint_pain'), ('muscle_weakness', 'muscle_weakness'), (' stiff_neck', ' stiff_neck'), ('swelling_joints', 'swelling_joints'), ('movement_stiffness', 'movement_stiffness'), ('spinning_movements', 'spinning_movements'), ('loss_of_balance', 'loss_of_balance'), ('unsteadiness', 'unsteadiness'), ('weakness_of_one_body_side', 'weakness_of_one_body_side'), ('loss_of_smell', 'loss_of_smell'), ('bladder_discomfort', 'bladder_discomfort'), ('foul_smell_of urine', 'foul_smell_of urine'), ('continuous_feel_of_urine', 'continuous_feel_of_urine'), ('passage_of_gases', 'passage_of_gases'), ('internal_itching', 'internal_itching'), ('toxic_look_(typhos)', 'toxic_look_(typhos)'), ('depression', 'depression'), ('irritability', 'irritability'), ('muscle_pain', 'muscle_pain'), ('altered_sensorium', 'altered_sensorium'), ('red_spots_over_body', 'red_spots_over_body'), ('belly_pain', 'belly_pain'), ('abnormal_menstruation', 'abnormal_menstruation'), ('dischromic _patches', 'dischromic _patches'), ('watering_from_eyes', 'watering_from_eyes'), ('increased_appetite', 
'increased_appetite'), ('polyuria', 'polyuria'), ('family_history', 'family_history'), ('mucoid_sputum', 'mucoid_sputum'), ('rusty_sputum', 'rusty_sputum'), ('lack_of_concentration', 'lack_of_concentration'), ('visual_disturbances', 'visual_disturbances'), ('receiving_blood_transfusion', 'receiving_blood_transfusion'), ('receiving_unsterile_injections', 'receiving_unsterile_injections'), ('coma', 'coma'), ('stomach_bleeding', 'stomach_bleeding'), ('distention_of_abdomen', 'distention_of_abdomen'), ('history_of_alcohol_consumption', 'history_of_alcohol_consumption'), ('fluid_overload', 'fluid_overload'), ('blood_in_sputum', 'blood_in_sputum'), ('prominent_veins_on_calf', 'prominent_veins_on_calf'), ('palpitations', 'palpitations'), ('painful_walking', 'painful_walking'), ('pus_filled_pimples', 'pus_filled_pimples'), ('blackheads', 'blackheads'), ('scurring', 'scurring'), ('skin_peeling', 'skin_peeling'), ('silver_like_dusting', 'silver_like_dusting'), ('small_dents_in_nails', 'small_dents_in_nails'), ('inflammatory_nails', 'inflammatory_nails'), ('blister', 'blister'), ('red_sore_around_nose', 'red_sore_around_nose'), ('yellow_crust_ooze', 'yellow_crust_ooze')], default='', max_length=50),
),
]
#---------------------------------------------------------------------------------
# File: src/waldur_mastermind/billing/managers.py
# Repo: opennode/nodeconductor-assembly-waldur  (license: MIT)
#---------------------------------------------------------------------------------
from django.contrib.contenttypes.models import ContentType
from django.db import models as django_models
from waldur_core.core import managers as core_managers
from waldur_core.structure.managers import filter_queryset_for_user
class UserFilterMixin:
def filtered_for_user(self, user, queryset=None):
if queryset is None:
queryset = self.get_queryset()
if user.is_staff or user.is_support:
return queryset
query = django_models.Q()
for model in self.get_available_models():
user_object_ids = filter_queryset_for_user(
model.objects.all(), user
).values_list('id', flat=True)
content_type_id = ContentType.objects.get_for_model(model).id
query |= django_models.Q(
object_id__in=list(user_object_ids), content_type_id=content_type_id
)
return queryset.filter(query)
def get_available_models(self):
"""Return list of models that are acceptable"""
raise NotImplementedError()
class PriceEstimateManager(core_managers.GenericKeyMixin, django_models.Manager):
def get_available_models(self):
return self.model.get_estimated_models()
#---------------------------------------------------------------------------------
# File: identity_data.py
# Repo: Identity-Stick/identity-stick-server  (license: MIT)
#---------------------------------------------------------------------------------
#!/usr/bin/env python3
"""
______________
MIT License
Copyright (c) 2020 Julian Rösner
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
____________________________
"""
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
import urllib.request
########################
# FUNCTIONS #
########################
# Create a new key pair
def new_keys():
private_key = RSA.generate(4096)
public_key = private_key.publickey()
return private_key, public_key
# Store a key pair in files
def store_keys(private_key, pr_key_file_name, public_key, pu_key_file_name):
    with open(pr_key_file_name, 'wb') as pr_key_file:
        pr_key_file.write(private_key.export_key())
    with open(pu_key_file_name, 'wb') as pu_key_file:
        pu_key_file.write(public_key.export_key())
# Load key from file
def load_key_from_file(file_name):
return RSA.import_key(open(file_name).read())
# Load key from url
def load_key_from_url(url):
webUrl = urllib.request.urlopen(url)
return RSA.import_key(webUrl.read())
# Load keys from files
def load_keys_from_file(private_file, public_file):
private_key = load_key_from_file(private_file)
public_key = load_key_from_file(public_file)
return private_key, public_key
# Function to hash data and sign it
def sign(data, private_key):
hash_value = SHA256.new(data)
signature = pkcs1_15.new(private_key).sign(hash_value)
return hash_value, signature
# Check signature on a hash value with public key
def check_sig(public_key, hash_value, signature):
try:
pkcs1_15.new(public_key).verify(hash_value, signature)
return True
except (ValueError, TypeError):
return False
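The sign-then-verify round trip above (RSA PKCS#1 v1.5 via pycryptodome) can be sketched with a standard-library stand-in — HMAC plays the role of the signature here, purely for illustration of the hash-sign-verify flow:

```python
import hashlib
import hmac

key = b"shared-secret"   # stands in for the RSA key pair in this sketch
data = b"identity data"

tag = hmac.new(key, data, hashlib.sha256).digest()   # "sign"
# "verify": recompute and compare in constant time
print(hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest()))        # True
print(hmac.compare_digest(tag, hmac.new(key, data + b"!", hashlib.sha256).digest())) # False
```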
# Save signature and hash value to files
def save_sig_hash(signature, sig_file_name, hash_value, hash_file_name):
with open(sig_file_name, 'w') as sig_file:
sig_file.write(signature.hex())
with open(hash_file_name, 'w') as hash_file:
hash_file.write(hash_value.hexdigest())
# Open signature from a file
def open_sig(sig_file_name):
with open(sig_file_name, 'r') as sig_file:
return bytes.fromhex(sig_file.read())
# Test if a signaturefile contains a valid signature
def test_sig_file(public_key, sig_file_name, data_file_name):
signature = open_sig(sig_file_name)
with open(data_file_name, 'rb') as data_file:
data = data_file.read()
hash_value = SHA256.new(data)
return check_sig(public_key, hash_value, signature)
########################
# MAIN #
########################
if __name__ == "__main__":
# Get the keys
private_key, public_key = load_keys_from_file('private_key.pem','public_key.pem')
# Get the identity data from the file
identity_data = b''
with open('identity_data.txt', 'rb') as data_file:
identity_data = data_file.read()
# Hash the data & sign it
hash_value, signature = sign(identity_data, private_key)
# Check the signature and then store hash and signature to files
if check_sig(public_key, hash_value, signature):
print("Everthing is fine and files are updated.")
save_sig_hash(signature, 'signature.pem', hash_value, 'hash.pem')
else:
print("The signature is not valid. Check your key pair")
if test_sig_file(public_key, 'signature.pem', 'identity_data.txt'):
print("Done")
else:
print("The stored signature seems to be broken")
#---------------------------------------------------------------------------------
# File: buycoins_sdk/tests/client/test_client.py
# Repo: GoZaddy/buycoins_sdk  (license: MIT)
#---------------------------------------------------------------------------------
from unittest import TestCase, main
from unittest.mock import Mock
from buycoins_sdk import BuycoinsGraphqlClient, enums, errors, client
from .fixtures import *
def _add_error_to_result(result: dict, message: str = 'random error message'):
res = result
res['errors'] = [{
'message': message
}]
return res
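The stubbing pattern used throughout these tests — assigning a canned response dict to `execute.return_value` so no network call is ever made — in isolation:

```python
from unittest.mock import Mock

gql = Mock()
gql.execute.return_value = {'data': {'getPrices': [{'id': 'p1'}]}}

resp = gql.execute(query='...')            # returns the canned dict, no I/O
print(resp['data']['getPrices'][0]['id'])  # p1
gql.execute.assert_called_once()
```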
class TestClient(TestCase):
"""This is the TestCase for the BuycoinsGraphqlClient Class and some utility functions
"""
def setUp(self) -> None:
BuycoinsGraphqlClient.__init__ = Mock(side_effect=lambda public_key, secret_key: None)
self.bc_client = BuycoinsGraphqlClient(secret_key="secret_key", public_key="public_key")
self.bc_client.client = Mock()
def test_prepare_graphql_args(self):
res = client._prepare_graphql_args(variables={}, first=2, last=2, after='ma', before='mb')
result = {
'connection_arg': '(first:$first,last:$last,after:$after,before:$before)',
'arg': ',$first: Int,$last: Int,$after: String,$before: String',
'variables': {'first': 2, 'last': 2, 'after': 'ma', 'before': 'mb'}
}
self.assertEqual(res, result, 'SHOULD BE EQUAL')
res = client._prepare_graphql_args(variables={})
result = {
'connection_arg': '',
'arg': '',
'variables': {}
}
self.assertEqual(res, result, 'SHOULD BE EQUAL')
def test_get_balances(self):
# testing success state
val = {
'data': {
'getBalances': [
{
'confirmedBalance': '0.0',
'cryptocurrency': 'bitcoin',
'id': 'QWNjb3VudC0='
}
]
}
}
self.bc_client.client.execute.return_value = val
balances = self.bc_client.get_balances(cryptocurrency=enums.Cryptocurrency.BITCOIN)
self.assertEqual(val['data']['getBalances'], balances['data'], "should be Equal")
# testing failure state
val = {
'data': {
'getBalances': None,
},
'errors': [
{
'message': 'Test error'
}
]
}
self.bc_client.client.execute.return_value = val
        with self.assertRaises(errors.BuycoinsException):
            self.bc_client.get_balances(cryptocurrency=enums.Cryptocurrency.BITCOIN)
def test_get_bank_accounts(self):
# testing success state
val = {
'data': {
'getBankAccounts': [
{
'accountName': 'Some random name',
'accountNumber': '2119851388',
'accountReference': None,
'accountType': 'withdrawal',
'bankName': 'GTB',
'id': 'some_random_node_id'
}
]
}
}
self.bc_client.client.execute.return_value = val
bank_accounts = self.bc_client.get_bank_accounts()
self.assertEqual(val['data']['getBankAccounts'], bank_accounts['data'], "should be Equal")
# testing failure state
val = {
'data': {
'getBankAccounts': None,
},
'errors': [
{
'message': 'Test error'
}
]
}
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.get_bank_accounts()
def test_get_estimated_network_fee(self):
val = {
'data': {'getEstimatedNetworkFee': {
'estimatedFee': '0.00036',
'total': '500.00036'
}}
}
self.bc_client.client.execute.return_value = val
estimated_fee = self.bc_client.get_estimated_network_fee(amount='random_amount')
self.assertEqual(val['data']['getEstimatedNetworkFee'], estimated_fee['data'])
val = {
'data': {
'getEstimatedNetworkFee': None
},
}
val = _add_error_to_result(val, 'a random error')
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.get_estimated_network_fee(amount='a random amount')
def test_get_market_book(self):
# test for success
val = get_market_book_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.get_market_book()
self.assertEqual(val['data']['getMarketBook'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.get_market_book()
def test_get_orders(self):
# test for success
val = get_orders_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.get_orders(status=enums.GetOrdersStatus.OPEN)
self.assertEqual(val['data']['getOrders'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.get_orders(status=enums.GetOrdersStatus.OPEN)
def test_get_payments(self):
# test for success
val = get_payments_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.get_payments()
self.assertEqual(val['data']['getPayments'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.get_payments()
def test_get_prices(self):
# test for success
val = get_prices_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.get_prices()
self.assertEqual(val['data']['getPrices'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.get_prices()
def test_node(self):
# test for success
val = node_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.node(
node_id='random id',
gql_type=enums.BuycoinsType.ADDRESS # random type
)
self.assertEqual(val['data']['node'], client_result['data'], "should be Equal")
# test for InvalidGraphQLNodeIDException - 1
val['data']['node'] = {}
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.InvalidGraphQLNodeIDException):
self.bc_client.node(
node_id='random id',
gql_type=enums.BuycoinsType.ADDRESS # random type
)
def test_nodes(self):
# test for success
val = nodes_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.nodes(
ids=['random id'],
gql_types=[enums.BuycoinsType.ADDRESS] # random type
)
self.assertEqual(val['data']['nodes'], client_result['data'], "should be Equal")
# test for InvalidGraphQLNodeIDException - 1
val['data']['nodes'][0] = {}
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.InvalidGraphQLNodeIDException):
self.bc_client.nodes(
ids=['random id'],
gql_types=[enums.BuycoinsType.ADDRESS] # random type
)
def test_buy(self):
# test for success
val = buy_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.buy(
price_id='random id',
coin_amount='10',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
self.assertEqual(val['data']['buy'], client_result['data'], "should be Equal")
# test for error - InsufficientBalanceToBuyException
val = _add_error_to_result(val, message='Your balance is insufficient for this purchase')
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.InsufficientBalanceToBuyException):
self.bc_client.buy(
price_id='random id',
coin_amount='10',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
# test for error - BuycoinsException
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.buy(
price_id='random id',
coin_amount='10',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
def test_cancel_withdrawal(self):
val = cancel_withdrawal_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.cancel_withdrawal(
payment_id='random_id'
)
self.assertEqual(val['data']['cancelWithdrawal'], client_result['data'], "should be Equal")
# test for error - WithdrawalCannotBeCanceledException
val = _add_error_to_result(val, message="This payment has been processed and can not be canceled")
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.WithdrawalCannotBeCanceledException):
self.bc_client.cancel_withdrawal(
payment_id='random_id'
)
# test for error - BuycoinsException
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.cancel_withdrawal(
payment_id='random_id'
)
def test_create_address(self):
val = create_address_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.create_address()
self.assertEqual(val['data']['createAddress'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.create_address()
def test_create_deposit_account(self):
val = create_deposit_account_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.create_deposit_account(account_name='test')
self.assertEqual(val['data']['createDepositAccount'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.create_deposit_account(account_name='test')
def test_create_withdrawal(self):
val = create_withdrawal_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.create_withdrawal(bank_account_id='id', amount='123')
self.assertEqual(val['data']['createWithdrawal'], client_result['data'], "should be Equal")
# test for error - InsufficientBalanceToWithdrawException
val = _add_error_to_result(val, message='Balance is insufficient for this withdrawal')
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.InsufficientBalanceToWithdrawException):
client_result = self.bc_client.create_withdrawal(bank_account_id='id', amount='123')
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
client_result = self.bc_client.create_withdrawal(bank_account_id='id', amount='123')
def test_post_limit_order(self):
val = post_limit_order_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.post_limit_order(coin_amount='10', order_side=enums.OrderSide.SELL,
static_price='1000', price_type=enums.PriceType.STATIC)
self.assertEqual(val['data']['postLimitOrder'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.post_limit_order(coin_amount='10', order_side=enums.OrderSide.SELL,
static_price='1000', price_type=enums.PriceType.STATIC)
def test_post_market_order(self):
val = post_market_order_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.post_market_order(
order_side=enums.OrderSide.SELL,
coin_amount='100'
)
self.assertEqual(val['data']['postMarketOrder'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.post_market_order(
order_side=enums.OrderSide.SELL,
coin_amount='100'
)
def test_sell(self):
val = sell_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.sell(
price_id='id',
coin_amount='1',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
self.assertEqual(val['data']['sell'], client_result['data'], "should be Equal")
# test for error - InsufficientAmountToSellException
val = _add_error_to_result(val, message='Your balance is insufficient for this sale')
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.InsufficientAmountToSellException):
self.bc_client.sell(
price_id='id',
coin_amount='1',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.sell(
price_id='id',
coin_amount='1',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
def test_send(self):
val = send_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.send(
address='address',
amount='1892.920',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
self.assertEqual(val['data']['send'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.send(
address='address',
amount='1892.920',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
def test_send_offchain(self):
val = send_offchain_success
self.bc_client.client.execute.return_value = val
client_result = self.bc_client.send_offchain(
recipient='random person',
amount='1892.920',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
self.assertEqual(val['data']['sendOffchain'], client_result['data'], "should be Equal")
# test for error
val = _add_error_to_result(val)
self.bc_client.client.execute.return_value = val
with self.assertRaises(errors.BuycoinsException):
self.bc_client.send_offchain(
recipient='random person',
amount='1892.920',
cryptocurrency=enums.Cryptocurrency.BITCOIN
)
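Every error-path test above routes through an `_add_error_to_result` helper defined elsewhere in this module. A minimal sketch of what such a helper plausibly does — the name suffix and exact shape here are assumptions, not the actual source — is to attach a GraphQL-style `errors` entry to the mocked success fixture:

```python
# Hypothetical helper sketch: the real _add_error_to_result lives outside
# this excerpt; this version only illustrates the expected result shape.
def _add_error_to_result_sketch(result, message='unexpected error'):
    out = dict(result)  # shallow copy so the success fixture is not mutated
    out['errors'] = [{'message': message}]
    return out

val = _add_error_to_result_sketch(
    {'data': {'buy': {'id': 'random id'}}},
    message='Your balance is insufficient for this purchase',
)
```

With a payload like this, the client can match the error message to decide between a specific exception (e.g. `InsufficientBalanceToBuyException`) and the generic `BuycoinsException`.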
if __name__ == '__main__':
main()
| 39.561772 | 111 | 0.617016 |
dc4a6fab0645e38bf849674252586411250b9346 | 308 | py | Python | sympy/mpmath/functions/__init__.py | dqnykamp/sympy | f341893eea26c7ee90be45df181eac1d21b7a541 | [
"BSD-3-Clause"
] | 625 | 2015-01-07T04:56:25.000Z | 2022-03-28T16:30:27.000Z | sympy_old/mpmath/functions/__init__.py | curzel-it/KiPyCalc | 909c783d5e6967ea58ca93f875106d8a8e3ca5db | [
"MIT"
] | 322 | 2015-01-01T15:19:37.000Z | 2022-03-27T05:07:51.000Z | sympy_old/mpmath/functions/__init__.py | curzel-it/KiPyCalc | 909c783d5e6967ea58ca93f875106d8a8e3ca5db | [
"MIT"
] | 160 | 2015-01-25T01:16:52.000Z | 2022-03-21T14:44:20.000Z | from . import functions
# Hack to update methods
from . import factorials
from . import hypergeometric
from . import expintegrals
from . import bessel
from . import orthogonal
from . import theta
from . import elliptic
from . import zeta
from . import rszeta
from . import zetazeros
from . import qfunctions
| 22 | 28 | 0.785714 |
18fdf6144cc1d80a7c689064ee7f08cc7f3a19c2 | 317 | py | Python | my_cv/03/03_03_canny.py | strawsyz/straw | db313c78c2e3c0355cd10c70ac25a15bb5632d41 | [
"MIT"
] | 2 | 2020-04-06T09:09:19.000Z | 2020-07-24T03:59:55.000Z | my_cv/03/03_03_canny.py | strawsyz/straw | db313c78c2e3c0355cd10c70ac25a15bb5632d41 | [
"MIT"
] | null | null | null | my_cv/03/03_03_canny.py | strawsyz/straw | db313c78c2e3c0355cd10c70ac25a15bb5632d41 | [
"MIT"
] | null | null | null | import cv2
"""
1. Denoise the image with a Gaussian filter
2. Compute the gradients
3. Apply non-maximum suppression (NMS) along the edges
4. Apply a double threshold to the detected edges to remove false positives
5. Finally, analyse all edges and the connections between them, keeping the
   real edges and discarding the weak ones"""
# Load the image as grayscale
img = cv2.imread('test.jpg', 0)
# Apply the Canny edge detection algorithm
cv2.imwrite('test_canny.jpg', cv2.Canny(img, 200, 300))
cv2.imshow("canny", cv2.imread('test_canny.jpg'))
cv2.waitKey()
cv2.destroyAllWindows()
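The five stages listed in the docstring are what `cv2.Canny` performs internally. As a rough, dependency-free illustration — pure Python, small kernels, with non-maximum suppression and hysteresis omitted for brevity, so an approximation rather than OpenCV's actual implementation — the smoothing, gradient, and thresholding stages can be sketched like this:

```python
import math

def convolve(img, kernel):
    """Correlate a small kernel over a 2-D list image, clamping at borders."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy = min(max(y + i - ph, 0), h - 1)
                    xx = min(max(x + j - pw, 0), w - 1)
                    acc += img[yy][xx] * kernel[i][j]
            out[y][x] = acc
    return out

def canny_sketch(img, high):
    # 1. Gaussian smoothing (3x3 approximation of the Gaussian kernel)
    gauss = [[v / 16.0 for v in row] for row in [[1, 2, 1], [2, 4, 2], [1, 2, 1]]]
    smooth = convolve(img, gauss)
    # 2. Gradient via Sobel operators
    gx = convolve(smooth, [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = convolve(smooth, [[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    # 3./4. Keep only pixels whose gradient magnitude clears the high threshold
    # (NMS and the low-threshold hysteresis step are skipped in this sketch).
    h, w = len(img), len(img[0])
    return {(y, x) for y in range(h) for x in range(w)
            if math.hypot(gx[y][x], gy[y][x]) >= high}

# A vertical step edge between columns 3 and 4 on an 8x8 "image":
step = [[0] * 4 + [255] * 4 for _ in range(8)]
edges = canny_sketch(step, high=300)
```

Only the pixels straddling the step survive the threshold, which is the behaviour the real detector refines further with NMS and edge linking.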
| 21.133333 | 55 | 0.750789 |
5dfbec245291e78ea8065d88cde320c53f0affec | 308 | py | Python | storage3/utils.py | supabase-community/storage-py | 7ff6a7a305b652974d64b500e55255345e3162d7 | [
"MIT"
] | 1 | 2022-03-31T14:44:15.000Z | 2022-03-31T14:44:15.000Z | storage3/utils.py | supabase-community/storage-py | 7ff6a7a305b652974d64b500e55255345e3162d7 | [
"MIT"
] | null | null | null | storage3/utils.py | supabase-community/storage-py | 7ff6a7a305b652974d64b500e55255345e3162d7 | [
"MIT"
] | null | null | null | from httpx import AsyncClient as AsyncClient # noqa: F401
from httpx import Client as BaseClient
__version__ = "0.3.2"
class SyncClient(BaseClient):
def aclose(self) -> None:
self.close()
class StorageException(Exception):
"""Error raised when an operation on the storage API fails."""
| 22 | 66 | 0.717532 |
3ec39f98f8dc932954f15dbfda3d01b6706db494 | 12,902 | py | Python | vunit/com/codec_vhdl_package.py | bjacobs1/vunit | a7f7717a172855ea7852296bb768370d50cfc992 | [
"Artistic-2.0"
] | 1 | 2020-08-30T08:30:02.000Z | 2020-08-30T08:30:02.000Z | vunit/com/codec_vhdl_package.py | smgl9/vunit | 9933d9a1ae600cc241894244361282dd7f7227d7 | [
"Artistic-2.0"
] | null | null | null | vunit/com/codec_vhdl_package.py | smgl9/vunit | 9933d9a1ae600cc241894244361282dd7f7227d7 | [
"Artistic-2.0"
] | null | null | null | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Copyright (c) 2014-2018, Lars Asplund lars.anders.asplund@gmail.com
"""
Module containing the CodecVHDLPackage class.
"""
from string import Template
from vunit.vhdl_parser import VHDLPackage
from vunit.vhdl_parser import remove_comments
from vunit.com.codec_vhdl_enumeration_type import CodecVHDLEnumerationType
from vunit.com.codec_vhdl_array_type import CodecVHDLArrayType
from vunit.com.codec_vhdl_record_type import CodecVHDLRecordType
class CodecVHDLPackage(VHDLPackage):
"""Class derived from VHDLPackage to provide codec generator functionality for the data types definied
in the package."""
def __init__(self, identifier,
enumeration_types, record_types, array_types):
super(CodecVHDLPackage, self).__init__(identifier,
enumeration_types,
record_types,
array_types)
self._template = None
@classmethod
def parse(cls, code):
"""
Return a new VHDLPackage instance for a single package found within the code
"""
code = remove_comments(code).lower()
# Extract identifier
identifier = cls._package_start_re.match(code).group('id')
        enumeration_types = list(CodecVHDLEnumerationType.find(code))
        record_types = list(CodecVHDLRecordType.find(code))
        array_types = list(CodecVHDLArrayType.find(code))
return cls(identifier, enumeration_types, record_types, array_types)
@classmethod
def find_named_package(cls, code, name):
"""Find and return the named package in the code (if it exists)"""
for package in cls.find(code):
if package.identifier == name:
return package
return None
def generate_codecs_and_support_functions(self):
"""Generate codecs and communication support functions for the data types defined in self."""
self._template = PackageCodecTemplate()
declarations = ''
definitions = ''
# Record
new_declarations, new_definitions = self._generate_record_codec_and_to_string_functions()
declarations += new_declarations
definitions += new_definitions
new_declarations, new_definitions = self._generate_msg_type_encoders()
declarations += new_declarations
definitions += new_definitions
new_declarations, new_definitions = self._generate_get_functions()
declarations += new_declarations
definitions += new_definitions
# Enumerations
all_msg_types_enumeration_type, msg_type_enumeration_types = self._create_enumeration_of_all_msg_types()
if all_msg_types_enumeration_type is not None:
declarations += self._template.all_msg_types_enumeration_type_declaration.substitute(
identifier=all_msg_types_enumeration_type.identifier,
literals=', '.join(all_msg_types_enumeration_type.literals))
if all_msg_types_enumeration_type is not None:
declarations += \
self._template.get_msg_type_declaration.substitute(type=all_msg_types_enumeration_type.identifier)
definitions += \
self._template.get_msg_type_definition.substitute(type=all_msg_types_enumeration_type.identifier)
new_declarations, new_definitions = \
self._generate_enumeration_codec_and_to_string_functions(all_msg_types_enumeration_type,
msg_type_enumeration_types)
declarations += new_declarations
definitions += new_definitions
# Arrays
new_declarations, new_definitions = self._generate_array_codec_and_to_string_functions()
declarations += new_declarations
definitions += new_definitions
return declarations, definitions
def _generate_record_codec_and_to_string_functions(self):
"""Generate codecs and to_string functions for all record data types."""
declarations = ''
definitions = ''
for record in self.record_types:
new_declarations, new_definitions = record.generate_codecs_and_support_functions()
declarations += new_declarations
definitions += new_definitions
return declarations, definitions
def _generate_array_codec_and_to_string_functions(self):
"""Generate codecs and to_string functions for all array data types."""
declarations = ''
definitions = ''
for array in self.array_types:
new_declarations, new_definitions = array.generate_codecs_and_support_functions()
declarations += new_declarations
definitions += new_definitions
return declarations, definitions
def _create_enumeration_of_all_msg_types(self):
"""Create an enumeration type containing all valid message types. These message types are collected from
records with a msg_type element which has an enumerated data type."""
msg_type_enumeration_types = []
for record in self.record_types:
if record.elements[0].identifier_list[0] == 'msg_type':
msg_type_enumeration_types.append(record.elements[0].subtype_indication.code)
msg_type_enumeration_literals = []
for enum in self.enumeration_types:
if enum.identifier in msg_type_enumeration_types:
for literal in enum.literals:
if literal in msg_type_enumeration_literals:
raise RuntimeError('Different msg_type enumerations may not have the same literals')
else:
msg_type_enumeration_literals.append(literal)
if msg_type_enumeration_literals:
all_msg_types_enumeration_type = CodecVHDLEnumerationType(self.identifier + '_msg_type_t',
msg_type_enumeration_literals)
else:
all_msg_types_enumeration_type = None
return all_msg_types_enumeration_type, msg_type_enumeration_types
def _generate_enumeration_codec_and_to_string_functions(self,
all_msg_types_enumeration_type,
msg_type_enumeration_types):
"""Generate codecs and to_string functions for all enumeration data types."""
declarations = ''
definitions = ''
enumeration_offset = 0
for enum in self.enumeration_types + ([all_msg_types_enumeration_type] if
all_msg_types_enumeration_type is not None else []):
if enum.identifier in msg_type_enumeration_types:
offset = enumeration_offset
enumeration_offset += len(enum.literals)
else:
offset = 0
new_declarations, new_definitions = enum.generate_codecs_and_support_functions(offset)
declarations += new_declarations
definitions += new_definitions
return declarations, definitions
def _generate_msg_type_encoders(self): # pylint: disable=too-many-locals
"""Generate message type encoders for records with the initial element = msg_type. An encoder is
generated for each value of the enumeration data type for msg_type. For example, if the record
has two message types, read and write, and two other fields, addr and data, then two encoders,
read(addr, data) and write(addr, data) will be generated. These are shorthands for
encode((<read or write>, addr, data))"""
declarations = ''
definitions = ''
enumeration_types = {}
for enum in self.enumeration_types:
enumeration_types[enum.identifier] = enum.literals
msg_type_record_types = self._get_records_with_an_initial_msg_type_element()
for record in msg_type_record_types:
msg_type_values = enumeration_types.get(record.elements[0].subtype_indication.type_mark)
if msg_type_values is None:
continue
for value in msg_type_values:
parameter_list = []
parameter_type_list = []
encoding_list = []
for element in record.elements:
for identifier in element.identifier_list:
if identifier != 'msg_type':
parameter_list.append(' constant %s : %s' % (identifier,
element.subtype_indication.code))
parameter_type_list.append(element.subtype_indication.type_mark)
encoding_list.append('encode(%s)' % identifier)
else:
encoding_list.append("encode(%s'(%s))" % (element.subtype_indication.code, value))
if parameter_list == []:
parameter_part = ''
alias_signature = value + '[return string];'
else:
parameter_part = ' (\n' + ';\n'.join(parameter_list) + ')'
alias_signature = value + '[' + ', '.join(parameter_type_list) + ' return string];'
encodings = ' & '.join(encoding_list)
declarations += \
self._template.msg_type_record_codec_declaration.substitute(name=value,
parameter_part=parameter_part,
alias_signature=alias_signature,
alias_name=value + '_msg')
definitions += \
self._template.msg_type_record_codec_definition.substitute(name=value,
parameter_part=parameter_part,
num_of_encodings=len(encoding_list),
encodings=encodings)
return declarations, definitions
def _generate_get_functions(self):
"""Generate a get function which will return the message type for records"""
declarations = ''
definitions = ''
msg_type_record_types = self._get_records_with_an_initial_msg_type_element()
msg_type_types = []
for record in msg_type_record_types:
msg_type_type = record.elements[0].subtype_indication.code
if msg_type_type not in msg_type_types:
msg_type_types.append(msg_type_type)
declarations += self._template.get_specific_msg_type_declaration.substitute(type=msg_type_type)
definitions += self._template.get_specific_msg_type_definition.substitute(type=msg_type_type)
return declarations, definitions
def _get_records_with_an_initial_msg_type_element(self):
"""Find all record types starting with a msg_type element"""
msg_type_record_types = []
for record in self.record_types:
if record.elements[0].identifier_list[0] == 'msg_type':
msg_type_record_types.append(record)
return msg_type_record_types
class PackageCodecTemplate(object):
"""This class contains package codec templates."""
msg_type_record_codec_declaration = Template("""\
function $name$parameter_part
return string;
alias $alias_name is $alias_signature
""")
get_specific_msg_type_declaration = Template("""\
function get_$type (
constant code : string)
return $type;
""")
all_msg_types_enumeration_type_declaration = Template("""\
type $identifier is ($literals);
""")
get_msg_type_declaration = Template("""\
function get_msg_type (
constant code : string)
return $type;
""")
msg_type_record_codec_definition = Template("""\
function $name$parameter_part
return string is
begin
return $encodings;
end function $name;
""")
get_specific_msg_type_definition = Template("""\
function get_$type (
constant code : string)
return $type is
begin
return decode(code);
end;
""")
get_msg_type_definition = Template("""\
function get_msg_type (
constant code : string)
return $type is
begin
return decode(code);
end;
""")
| 41.089172 | 115 | 0.625794 |
d652e638999b9f4dda4d097341fa897579f2bed6 | 6,735 | py | Python | test/ibm_test_case.py | mtreinish/qiskit-ibm-runtime | fd741a3f9d8bd11c702682e60c522797618a1ed7 | [
"Apache-2.0"
] | null | null | null | test/ibm_test_case.py | mtreinish/qiskit-ibm-runtime | fd741a3f9d8bd11c702682e60c522797618a1ed7 | [
"Apache-2.0"
] | null | null | null | test/ibm_test_case.py | mtreinish/qiskit-ibm-runtime | fd741a3f9d8bd11c702682e60c522797618a1ed7 | [
"Apache-2.0"
] | null | null | null | # This code is part of Qiskit.
#
# (C) Copyright IBM 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Custom TestCase for IBM Provider."""
import os
import copy
import logging
import inspect
import unittest
from contextlib import suppress
from collections import defaultdict
from qiskit_ibm_runtime import QISKIT_IBM_RUNTIME_LOGGER_NAME
from qiskit_ibm_runtime.exceptions import IBMNotAuthorizedError
from .utils import setup_test_logging
from .decorators import IntegrationTestDependencies, integration_test_setup
from .templates import RUNTIME_PROGRAM, RUNTIME_PROGRAM_METADATA, PROGRAM_PREFIX
class IBMTestCase(unittest.TestCase):
"""Custom TestCase for use with qiskit-ibm-runtime."""
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.log = logging.getLogger(cls.__name__)
filename = "%s.log" % os.path.splitext(inspect.getfile(cls))[0]
setup_test_logging(cls.log, filename)
cls._set_logging_level(logging.getLogger(QISKIT_IBM_RUNTIME_LOGGER_NAME))
@classmethod
def _set_logging_level(cls, logger: logging.Logger) -> None:
"""Set logging level for the input logger.
Args:
logger: Logger whose level is to be set.
"""
            if logger.level == logging.NOTSET:
try:
logger.setLevel(cls.log.level)
except Exception as ex: # pylint: disable=broad-except
logger.warning(
'Error while trying to set the level for the "%s" logger to %s. %s.',
logger,
os.getenv("LOG_LEVEL"),
str(ex),
)
if not any(
isinstance(handler, logging.StreamHandler) for handler in logger.handlers
):
logger.addHandler(logging.StreamHandler())
logger.propagate = False
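The `_set_logging_level` pattern above — inherit the parent's level only when the child logger is unset, attach a single `StreamHandler`, and stop propagation — can be exercised in isolation:

```python
import logging

def configure_child(child, parent_level):
    """Mirror of the test-case logger setup above, as a free function."""
    if child.level == logging.NOTSET:
        child.setLevel(parent_level)
    if not any(isinstance(h, logging.StreamHandler) for h in child.handlers):
        child.addHandler(logging.StreamHandler())
    child.propagate = False

log = logging.getLogger('ibm_test_case.sketch')
configure_child(log, logging.DEBUG)
configure_child(log, logging.INFO)  # second call is a no-op for level/handlers
```

Because the second call sees a non-`NOTSET` level and an existing handler, the logger keeps `DEBUG` and exactly one handler — the idempotence the class-level setup relies on.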
class IBMIntegrationTestCase(IBMTestCase):
"""Custom integration test case for use with qiskit-ibm-runtime."""
@classmethod
@integration_test_setup()
def setUpClass(cls, dependencies: IntegrationTestDependencies):
"""Initial class level setup."""
# pylint: disable=arguments-differ
super().setUpClass()
cls.dependencies = dependencies
cls.service = dependencies.service
def setUp(self) -> None:
"""Test level setup."""
super().setUp()
self.to_delete = defaultdict(list)
self.to_cancel = defaultdict(list)
def tearDown(self) -> None:
"""Test level teardown."""
super().tearDown()
# Delete programs
service = self.service
for prog in self.to_delete[service.auth]:
with suppress(Exception):
service.delete_program(prog)
# Cancel and delete jobs.
for job in self.to_cancel[service.auth]:
with suppress(Exception):
job.cancel()
with suppress(Exception):
service.delete_job(job.job_id)
def _upload_program(
self,
service,
name=None,
max_execution_time=300,
data=None,
is_public: bool = False,
):
"""Upload a new program."""
name = name or PROGRAM_PREFIX
data = data or RUNTIME_PROGRAM
metadata = copy.deepcopy(RUNTIME_PROGRAM_METADATA)
metadata["name"] = name
metadata["max_execution_time"] = max_execution_time
metadata["is_public"] = is_public
program_id = service.upload_program(data=data, metadata=metadata)
self.to_delete[service.auth].append(program_id)
return program_id
class IBMIntegrationJobTestCase(IBMIntegrationTestCase):
"""Custom integration test case for job-related tests."""
@classmethod
def setUpClass(cls):
"""Initial class level setup."""
# pylint: disable=arguments-differ
# pylint: disable=no-value-for-parameter
super().setUpClass()
cls._create_default_program()
cls._find_sim_backends()
@classmethod
def tearDownClass(cls) -> None:
"""Class level teardown."""
super().tearDownClass()
# Delete default program.
with suppress(Exception):
service = cls.service
service.delete_program(cls.program_ids[service.auth])
cls.log.debug(
"Deleted %s program %s", service.auth, cls.program_ids[service.auth]
)
@classmethod
def _create_default_program(cls):
"""Create a default program."""
metadata = copy.deepcopy(RUNTIME_PROGRAM_METADATA)
metadata["name"] = PROGRAM_PREFIX
cls.program_ids = {}
cls.sim_backends = {}
service = cls.service
try:
prog_id = service.upload_program(data=RUNTIME_PROGRAM, metadata=metadata)
cls.log.debug("Uploaded %s program %s", service.auth, prog_id)
cls.program_ids[service.auth] = prog_id
except IBMNotAuthorizedError:
raise unittest.SkipTest("No upload access.")
@classmethod
def _find_sim_backends(cls):
"""Find a simulator backend for each service."""
cls.sim_backends[cls.service.auth] = cls.service.backends(simulator=True)[
0
].name
def _run_program(
self,
service,
program_id=None,
iterations=1,
inputs=None,
interim_results=None,
final_result=None,
callback=None,
backend=None,
log_level=None,
):
"""Run a program."""
self.log.debug("Running program on %s", service.auth)
inputs = (
inputs
if inputs is not None
else {
"iterations": iterations,
"interim_results": interim_results or {},
"final_result": final_result or {},
}
)
pid = program_id or self.program_ids[service.auth]
backend_name = (
backend if backend is not None else self.sim_backends[service.auth]
)
options = {"backend_name": backend_name, "log_level": log_level}
job = service.run(
program_id=pid, inputs=inputs, options=options, callback=callback
)
self.log.info("Runtime job %s submitted.", job.job_id)
self.to_cancel[service.auth].append(job)
return job
| 33.675 | 89 | 0.627023 |
2ea198542b2479557e1fb8cf68064018cf58ba59 | 84 | py | Python | main.py | SuperSonicHub1/FreeEpicGamesCalendar | 82d444bf37d9cafa701be2b6354e8279be6b1a24 | [
"Unlicense"
] | 2 | 2022-02-13T04:16:03.000Z | 2022-03-09T10:06:08.000Z | main.py | SuperSonicHub1/FreeEpicGamesCalendar | 82d444bf37d9cafa701be2b6354e8279be6b1a24 | [
"Unlicense"
] | null | null | null | main.py | SuperSonicHub1/FreeEpicGamesCalendar | 82d444bf37d9cafa701be2b6354e8279be6b1a24 | [
"Unlicense"
] | null | null | null | from free_epic_games import create_app
app = create_app()
app.run("0.0.0.0", 8080)
| 16.8 | 38 | 0.738095 |
0a34bf4a7fa5f404371550413719b60a78d98661 | 586 | py | Python | run2-rdx/conds/cond-cutflow_mc-2016-md-sim09b-bare.py | umd-lhcb/lhcb-ntuples-gen | d306895a0dc6bad2def19ca3d7d1304a5a9be239 | [
"BSD-2-Clause"
] | null | null | null | run2-rdx/conds/cond-cutflow_mc-2016-md-sim09b-bare.py | umd-lhcb/lhcb-ntuples-gen | d306895a0dc6bad2def19ca3d7d1304a5a9be239 | [
"BSD-2-Clause"
] | 105 | 2018-12-20T19:09:19.000Z | 2022-03-19T09:53:06.000Z | run2-rdx/conds/cond-cutflow_mc-2016-md-sim09b-bare.py | umd-lhcb/lhcb-ntuples-gen | d306895a0dc6bad2def19ca3d7d1304a5a9be239 | [
"BSD-2-Clause"
] | null | null | null | from Configurables import DaVinci
DaVinci().DataType = '2016'
DaVinci().Simulation = True
DaVinci().TupleFile = 'cutflow_mc-bare.root'
# Additional global flags
DaVinci().MoniSequence += ['CUTFLOW', 'BARE']
from Configurables import LHCbApp
LHCbApp().CondDBtag = "sim-20161124-2-vc-md100"
LHCbApp().DDDBtag = "dddb-20150724"
from GaudiConf import IOHelper
IOHelper().inputFiles([
'./data/Bd2D0XMuNu_D0_cocktail-2016-md-py8-sim09b/00056169_00000005_3.AllStreams.dst',
'./data/Bd2D0XMuNu_D0_cocktail-2016-md-py8-sim09b/00056169_00000013_3.AllStreams.dst',
], clear=True)
| 24.416667 | 90 | 0.761092 |
129f28b6ad6db67420d990e4fedef10259761245 | 1,118 | py | Python | kubernetes/test/test_networking_v1beta1_http_ingress_rule_value.py | fooka03/python | 073cf4d89e532f92b57e8955b4efc3d5d5eb80cf | [
"Apache-2.0"
] | 2 | 2020-07-02T05:47:41.000Z | 2020-07-02T05:50:34.000Z | kubernetes/test/test_networking_v1beta1_http_ingress_rule_value.py | fooka03/python | 073cf4d89e532f92b57e8955b4efc3d5d5eb80cf | [
"Apache-2.0"
] | 1 | 2021-03-25T23:44:49.000Z | 2021-03-25T23:44:49.000Z | k8sdeployment/k8sstat/python/kubernetes/test/test_networking_v1beta1_http_ingress_rule_value.py | JeffYFHuang/gpuaccounting | afa934350ebbd0634beb60b9df4a147426ea0006 | [
"MIT"
] | 1 | 2021-10-13T17:45:37.000Z | 2021-10-13T17:45:37.000Z | # coding: utf-8
"""
Kubernetes
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
OpenAPI spec version: v1.15.6
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import kubernetes.client
from kubernetes.client.models.networking_v1beta1_http_ingress_rule_value import NetworkingV1beta1HTTPIngressRuleValue # noqa: E501
from kubernetes.client.rest import ApiException
class TestNetworkingV1beta1HTTPIngressRuleValue(unittest.TestCase):
"""NetworkingV1beta1HTTPIngressRuleValue unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testNetworkingV1beta1HTTPIngressRuleValue(self):
"""Test NetworkingV1beta1HTTPIngressRuleValue"""
# FIXME: construct object with mandatory attributes with example values
# model = kubernetes.client.models.networking_v1beta1_http_ingress_rule_value.NetworkingV1beta1HTTPIngressRuleValue() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 27.95 | 139 | 0.76297 |
54919d6a36cfb11b1e272db4a6a033c09e88e5e2 | 4,976 | py | Python | neighbourhood_project/settings.py | dukundejeanne/neighbourhood_django | fb20f2c1dc9acec08c03668ccebece6d28493f9d | [
"MIT"
] | null | null | null | neighbourhood_project/settings.py | dukundejeanne/neighbourhood_django | fb20f2c1dc9acec08c03668ccebece6d28493f9d | [
"MIT"
] | 4 | 2020-02-12T02:36:58.000Z | 2021-09-08T01:24:24.000Z | neighbourhood_project/settings.py | dukundejeanne/neighbourhood_django | fb20f2c1dc9acec08c03668ccebece6d28493f9d | [
"MIT"
] | null | null | null | """
Django settings for neighbourhood_project project.
Generated by 'django-admin startproject' using Django 1.11.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
import django_heroku
import dj_database_url
from decouple import config, Csv
MODE = config("MODE", default="dev")
SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
# development
if config('MODE')=="dev":
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': config('DB_NAME'),
'USER': config('DB_USER'),
'PASSWORD': config('DB_PASSWORD'),
'HOST': config('DB_HOST'),
'PORT': '',
}
}
# production
else:
DATABASES = {
'default': dj_database_url.config(
default=config('DATABASE_URL')
)
}
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
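For reference, these are the environment variables the dev/prod switch reads through `python-decouple` and `dj_database_url` — the names come from the `config()` calls above, while the values are placeholders:

```
MODE=dev
SECRET_KEY=replace-me
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
DB_NAME=neighbour
DB_USER=wecode
DB_PASSWORD=secret
DB_HOST=127.0.0.1
# Only needed when MODE is not "dev":
DATABASE_URL=postgres://user:pass@host:5432/dbname
```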
ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=Csv())
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
# SECRET_KEY is read from the environment via config() above.
# SECURITY WARNING: don't run with debug turned on in production!
# DEBUG and ALLOWED_HOSTS are also read from the environment above; do not
# hard-code DEBUG = True or ALLOWED_HOSTS = [] here, as that would silently
# override the environment-based settings.
# Application definition
INSTALLED_APPS = [
'neighbourapp',
'bootstrap4',
# 'tinymce',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
ROOT_URLCONF = 'neighbourhood_project.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'neighbourhood_project.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
# DATABASES = {
# 'default': {
# # 'ENGINE': 'django.db.backends.sqlite3',
# # 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
# 'ENGINE': 'django.db.backends.postgresql',
# 'NAME': 'neighbour',
# 'USER': 'wecode',
# 'PASSWORD':'12',
# }
# }
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
# Extra places for collectstatic to find static files.
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'static'),
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# configuring the location for media
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Configure Django App for Heroku.
django_heroku.settings(locals())

# -----------------------------------------------------------------------------
# ohapi/public.py (repo: dwivedikriti/open-humans-api, license: MIT)
# -----------------------------------------------------------------------------
#!/usr/bin/env python
import logging
import os
import re
import signal
from functools import partial
try:
from urllib import urlencode
except ImportError:
from urllib.parse import urlencode
import click
import concurrent.futures
import requests
import sys
from humanfriendly import format_size, parse_size
from .api import get_page
BASE_URL = 'https://www.openhumans.org'
BASE_URL_API = '{}/api/public-data/'.format(BASE_URL)
def signal_handler_cb(signal_name, frame):
"""
Exit on Ctrl-C.
"""
os._exit(1)
def download_url(result, directory, max_bytes):
"""
Download a file.
"""
response = requests.get(result['download_url'], stream=True)
# TODO: make this more robust by parsing the URL
filename = response.url.split('/')[-1]
filename = re.sub(r'\?.*$', '', filename)
filename = '{}-{}'.format(result['user']['id'], filename)
size = int(response.headers['Content-Length'])
if size > max_bytes:
logging.info('Skipping {}, {} > {}'.format(filename, format_size(size),
format_size(max_bytes)))
return
logging.info('Downloading {} ({})'.format(filename, format_size(size)))
output_path = os.path.join(directory, filename)
try:
stat = os.stat(output_path)
if stat.st_size == size:
logging.info('Skipping "{}"; exists and is the right size'.format(
filename))
return
else:
logging.info('Removing "{}"; exists and is the wrong size'.format(
filename))
os.remove(output_path)
except OSError:
# TODO: check errno here?
pass
with open(output_path, 'wb') as f:
total_length = response.headers.get('content-length')
total_length = int(total_length)
dl = 0
for chunk in response.iter_content(chunk_size=8192):
if chunk:
dl += len(chunk)
f.write(chunk)
d = int(50 * dl / total_length)
                sys.stdout.write("\r[%s%s]%d%s" % ('.' * d,
                                                   ' ' * (50 - d),
                                                   d * 2,
                                                   '%'))
                sys.stdout.flush()
print("\n")
logging.info('Downloaded {}'.format(filename))
@click.command()
@click.option('-s', '--source', help='the source to download files from')
@click.option('-u', '--username', help='the user to download files from')
@click.option('-d', '--directory', help='the directory for downloaded files',
default='.')
@click.option('-m', '--max-size', help='the maximum file size to download',
default='128m')
@click.option('-q', '--quiet', help='Report ERROR level logging to stdout',
is_flag=True)
@click.option('--debug', help='Report DEBUG level logging to stdout.',
is_flag=True)
def download_cli(source, username, directory, max_size, quiet, debug):
return download(source, username, directory, max_size, quiet, debug)
def download(source=None, username=None, directory='.', max_size='128m',
quiet=None, debug=None):
"""
Download public data from Open Humans.
"""
if debug:
logging.basicConfig(level=logging.DEBUG)
elif quiet:
logging.basicConfig(level=logging.ERROR)
else:
logging.basicConfig(level=logging.INFO)
logging.debug("Running with source: '{}'".format(source) +
" and username: '{}'".format(username) +
" and directory: '{}'".format(directory) +
" and max-size: '{}'".format(max_size))
signal.signal(signal.SIGINT, signal_handler_cb)
max_bytes = parse_size(max_size)
options = {}
if source:
options['source'] = source
if username:
options['username'] = username
page = '{}?{}'.format(BASE_URL_API, urlencode(options))
results = []
counter = 1
logging.info('Retrieving metadata')
while True:
logging.info('Retrieving page {}'.format(counter))
response = get_page(page)
results = results + response['results']
if response['next']:
page = response['next']
else:
break
counter += 1
logging.info('Downloading {} files'.format(len(results)))
download_url_partial = partial(download_url, directory=directory,
max_bytes=max_bytes)
with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
for value in executor.map(download_url_partial, results):
if value:
logging.info(value)
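The progress bar inside `download_url` maps bytes received onto a 50-character track with `d = int(50 * dl / total_length)`. A standalone sketch of that arithmetic (the function name and `width` parameter are illustrative, not part of the module):

```python
def progress_bar(downloaded, total, width=50):
    """Render a textual progress bar like download_url's inline one."""
    filled = int(width * downloaded / total)   # characters to fill
    percent = filled * 2                       # width=50 -> 2% per character
    return "[%s%s]%d%%" % ('.' * filled, ' ' * (width - filled), percent)
```

Note that `d * 2` only equals the true percentage when the bar is 50 characters wide, which is why the module hard-codes 50.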
# -----------------------------------------------------------------------------
# update_others.py (repo: XiaogangHe/cv, license: CC-BY-4.0)
# -----------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division, print_function
import re
import json
import requests
def get_number_of_citations(url):
r = requests.get(url)
try:
r.raise_for_status()
except Exception as e:
print(e)
return None
results = re.findall("([0-9]+) results", r.text)
if not len(results):
print("no results found")
return None
try:
return int(results[0])
except Exception as e:
print(e)
return None
def update_others():
with open("other_pubs.json", "r") as f:
pubs = json.load(f)
for i, pub in enumerate(pubs):
if not pub["url"].startswith("https://scholar.google.com"):
continue
n = get_number_of_citations(pub["url"])
if n is None or n < pub["citations"]:
continue
pubs[i]["citations"] = n
with open("other_pubs.json", "w") as f:
json.dump(pubs, f, sort_keys=True, indent=2, separators=(",", ": "))
if __name__ == "__main__":
update_others()
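`get_number_of_citations` relies on a single regex over the page body to recover a hit count. A self-contained sketch of just that extraction step (the helper name is illustrative):

```python
import re

def extract_result_count(html):
    """Pull the first '<N> results' figure out of a results page."""
    matches = re.findall(r"([0-9]+) results", html)
    return int(matches[0]) if matches else None
```

As in the module, the caller must treat `None` (no match) and a lower count than the stored one as "do not update".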
# -----------------------------------------------------------------------------
# cifar10_imagenet_eval.py (repo: power1997312/-2020, license: Apache-2.0)
# -----------------------------------------------------------------------------
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Evaluation for CIFAR-10.
Accuracy:
cifar10_train.py achieves 83.0% accuracy after 100K steps (256 epochs
of data) as judged by cifar10_eval.py.
Speed:
On a single Tesla K40, cifar10_train.py processes a single batch of 128 images
in 0.25-0.35 sec (i.e. 350 - 600 images /sec). The model reaches ~86%
accuracy after 100K steps in 8 hours of training time.
Usage:
Please see the tutorial and website for how to download the CIFAR-10
data set, compile the program and train the model.
http://tensorflow.org/tutorials/deep_cnn/
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import math
import time
import numpy as np
import tensorflow as tf
import cifar10
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('eval_dir', '/tmp/cifar10_eval',
"""Directory where to write event logs.""")
tf.app.flags.DEFINE_string('eval_data', 'test',
"""Either 'test' or 'train_eval'.""")
tf.app.flags.DEFINE_string('checkpoint_dir', '/tmp/cifar10_train',
"""Directory where to read model checkpoints.""")
tf.app.flags.DEFINE_integer('eval_interval_secs', 60 * 5,
"""How often to run the eval.""")
tf.app.flags.DEFINE_integer('num_examples', 4000,
"""Number of examples to run.""")
tf.app.flags.DEFINE_boolean('run_once', True,
"""Whether to run eval only once.""")
def eval_once(saver, summary_writer, top_k_op, summary_op):
"""Run Eval once.
Args:
saver: Saver.
summary_writer: Summary writer.
top_k_op: Top K op.
summary_op: Summary op.
"""
with tf.Session() as sess:
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
# Restores from checkpoint
saver.restore(sess, ckpt.model_checkpoint_path)
# Assuming model_checkpoint_path looks something like:
# /my-favorite-path/cifar10_train/model.ckpt-0,
# extract global_step from it.
global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
else:
print('No checkpoint file found')
return
# Start the queue runners.
coord = tf.train.Coordinator()
try:
threads = []
for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):
threads.extend(qr.create_threads(sess, coord=coord, daemon=True,
start=True))
num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
true_count = 0 # Counts the number of correct predictions.
total_sample_count = num_iter * FLAGS.batch_size
summary = tf.Summary()
step = 0
while step < num_iter and not coord.should_stop():
predictions = sess.run([top_k_op])
itemtrue=np.sum(predictions)
true_count += itemtrue
step += 1
a = sess.run(summary_op)
summary.ParseFromString(a)
summary.value.add(tag='itemtrue @ 1', simple_value=itemtrue)
summary.value.add(tag='predictions length', simple_value=len(predictions[0]))
summary_writer.add_summary(summary,step)
# Compute precision @ 1.
print("%s / %s" %(true_count,total_sample_count))
precision = true_count / total_sample_count
print('%s: precision @ 1 = %.3f' % (datetime.now(), precision))
#summary.value.add(tag='Precision @ 1', simple_value=precision)
except Exception as e: # pylint: disable=broad-except
coord.request_stop(e)
coord.request_stop()
coord.join(threads, stop_grace_period_secs=10)
def evaluate():
"""Eval CIFAR-10 for a number of steps."""
with tf.Graph().as_default() as g:
eval_data = FLAGS.eval_data == 'test'
images, labels = cifar10.inputs(eval_data=eval_data)
"""
all images in the training set have an range from 0-1
and not from 0-255 so we divide our flatten images
(a one dimensional vector with our 784 pixels)
to use the same 0-1 based range
"""
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
"""
we want to run the prediction and the accuracy function
using our generated arrays (images and correct_vals)
"""
# Build a Graph that computes the logits predictions from the
# inference model.
logits = cifar10.inference(images)
# Calculate predictions.
top_k_op = tf.nn.in_top_k(logits, labels, 1)
# Restore the moving average version of the learned variables for eval.
variable_averages = tf.train.ExponentialMovingAverage(
cifar10.MOVING_AVERAGE_DECAY)
variables_to_restore = variable_averages.variables_to_restore()
saver = tf.train.Saver(variables_to_restore)
# Build the summary operation based on the TF collection of Summaries.
summary_op = tf.summary.merge_all()
summary_writer = tf.summary.FileWriter(FLAGS.eval_dir, g)
while True:
eval_once(saver, summary_writer, top_k_op, summary_op)
if FLAGS.run_once:
break
time.sleep(FLAGS.eval_interval_secs)
def main(argv=None): # pylint: disable=unused-argument
tf.gfile.MakeDirs(FLAGS.eval_dir)
evaluate()
if __name__ == '__main__':
tf.app.run()
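`eval_once` derives its batch count and final precision from plain integer arithmetic. A dependency-free sketch of that bookkeeping (names are illustrative; TensorFlow is not required here):

```python
import math

def precision_at_1(per_batch_true, batch_size):
    """Aggregate per-batch correct-prediction counts into overall precision@1."""
    total_samples = len(per_batch_true) * batch_size
    return sum(per_batch_true) / total_samples

# Mirrors num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
# with the module's defaults: 4000 examples, batch size 128.
num_iter = math.ceil(4000 / 128)
```

Because `total_sample_count` is `num_iter * batch_size`, the last partial batch is counted at full size, slightly deflating the reported precision when `num_examples` is not a multiple of the batch size.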
# -----------------------------------------------------------------------------
# influxdb_service_sdk/model/container/ingress_pb2.py
# (repo: easyopsapis/easyops-api-python, license: Apache-2.0)
# -----------------------------------------------------------------------------
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: ingress.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from influxdb_service_sdk.model.container import ingress_backend_pb2 as influxdb__service__sdk_dot_model_dot_container_dot_ingress__backend__pb2
from influxdb_service_sdk.model.container import ingress_rule_pb2 as influxdb__service__sdk_dot_model_dot_container_dot_ingress__rule__pb2
from influxdb_service_sdk.model.container import ingress_tls_pb2 as influxdb__service__sdk_dot_model_dot_container_dot_ingress__tls__pb2
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='ingress.proto',
package='container',
syntax='proto3',
serialized_options=_b('ZCgo.easyops.local/contracts/protorepo-models/easyops/model/container'),
serialized_pb=_b('\n\ringress.proto\x12\tcontainer\x1a:influxdb_service_sdk/model/container/ingress_backend.proto\x1a\x37influxdb_service_sdk/model/container/ingress_rule.proto\x1a\x36influxdb_service_sdk/model/container/ingress_tls.proto\x1a\x1cgoogle/protobuf/struct.proto\"\xd4\x02\n\x07Ingress\x12\x12\n\ninstanceId\x18\x01 \x01(\t\x12\x0c\n\x04kind\x18\x02 \x01(\t\x12\x11\n\tnamespace\x18\x03 \x01(\t\x12\x14\n\x0cresourceName\x18\x04 \x01(\t\x12\x13\n\x0b\x64isplayName\x18\x05 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x06 \x01(\t\x12,\n\x0b\x61nnotations\x18\x07 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x0f\n\x07\x61\x64\x64ress\x18\x08 \x03(\t\x12*\n\x07\x62\x61\x63kend\x18\t \x01(\x0b\x32\x19.container.IngressBackend\x12%\n\x05rules\x18\n \x03(\x0b\x32\x16.container.IngressRule\x12\"\n\x03tls\x18\x0b \x03(\x0b\x32\x15.container.IngressTLS\x12\r\n\x05\x63time\x18\x0c \x01(\t\x12\x0f\n\x07\x63reator\x18\r \x01(\tBEZCgo.easyops.local/contracts/protorepo-models/easyops/model/containerb\x06proto3')
,
dependencies=[influxdb__service__sdk_dot_model_dot_container_dot_ingress__backend__pb2.DESCRIPTOR,influxdb__service__sdk_dot_model_dot_container_dot_ingress__rule__pb2.DESCRIPTOR,influxdb__service__sdk_dot_model_dot_container_dot_ingress__tls__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,])
_INGRESS = _descriptor.Descriptor(
name='Ingress',
full_name='container.Ingress',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='instanceId', full_name='container.Ingress.instanceId', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='kind', full_name='container.Ingress.kind', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='namespace', full_name='container.Ingress.namespace', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='resourceName', full_name='container.Ingress.resourceName', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='displayName', full_name='container.Ingress.displayName', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='description', full_name='container.Ingress.description', index=5,
number=6, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='annotations', full_name='container.Ingress.annotations', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='address', full_name='container.Ingress.address', index=7,
number=8, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='backend', full_name='container.Ingress.backend', index=8,
number=9, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='rules', full_name='container.Ingress.rules', index=9,
number=10, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tls', full_name='container.Ingress.tls', index=10,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='ctime', full_name='container.Ingress.ctime', index=11,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='creator', full_name='container.Ingress.creator', index=12,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=232,
serialized_end=572,
)
_INGRESS.fields_by_name['annotations'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT
_INGRESS.fields_by_name['backend'].message_type = influxdb__service__sdk_dot_model_dot_container_dot_ingress__backend__pb2._INGRESSBACKEND
_INGRESS.fields_by_name['rules'].message_type = influxdb__service__sdk_dot_model_dot_container_dot_ingress__rule__pb2._INGRESSRULE
_INGRESS.fields_by_name['tls'].message_type = influxdb__service__sdk_dot_model_dot_container_dot_ingress__tls__pb2._INGRESSTLS
DESCRIPTOR.message_types_by_name['Ingress'] = _INGRESS
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Ingress = _reflection.GeneratedProtocolMessageType('Ingress', (_message.Message,), {
'DESCRIPTOR' : _INGRESS,
'__module__' : 'ingress_pb2'
# @@protoc_insertion_point(class_scope:container.Ingress)
})
_sym_db.RegisterMessage(Ingress)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
# -----------------------------------------------------------------------------
# userbot/modules/system_stats.py
# (repo: gepeng001/WeebProject, licenses: Naumen, Condor-1.1, MS-PL)
# -----------------------------------------------------------------------------
# Copyright (C) 2019 The Raphielscape Company LLC.
#
# Licensed under the Raphielscape Public License, Version 1.c (the "License");
# you may not use this file except in compliance with the License.
#
""" Userbot module for getting information about the server. """
import platform
import shutil
import sys
import time
from asyncio import create_subprocess_exec as asyncrunapp
from asyncio.subprocess import PIPE as asyncPIPE
from os import remove
from platform import python_version, uname
from shutil import which
from git import Repo
from telethon import version
from telethon.errors.rpcerrorlist import MediaEmptyError
from userbot import ALIVE_LOGO, ALIVE_NAME, CMD_HELP, HEROKU_APP_NAME, StartTime, bot
from userbot.events import register
# ================= CONSTANT =================
DEFAULTUSER = str(ALIVE_NAME) if ALIVE_NAME else uname().node
repo = Repo()
modules = CMD_HELP
# ============================================
async def get_readable_time(seconds: int) -> str:
count = 0
up_time = ""
time_list = []
time_suffix_list = ["s", "m", "h", "days"]
while count < 4:
count += 1
if count < 3:
remainder, result = divmod(seconds, 60)
else:
remainder, result = divmod(seconds, 24)
if seconds == 0 and remainder == 0:
break
time_list.append(int(result))
seconds = int(remainder)
for x in range(len(time_list)):
time_list[x] = str(time_list[x]) + time_suffix_list[x]
if len(time_list) == 4:
up_time += time_list.pop() + ", "
time_list.reverse()
up_time += ":".join(time_list)
return up_time
@register(outgoing=True, pattern=r"^\.sysd$")
async def sysdetails(sysd):
""" For .sysd command, get system info using neofetch. """
if not sysd.text[0].isalpha() and sysd.text[0] not in ("/", "#", "@", "!"):
try:
fetch = await asyncrunapp(
"neofetch",
"--stdout",
stdout=asyncPIPE,
stderr=asyncPIPE,
)
stdout, stderr = await fetch.communicate()
result = str(stdout.decode().strip()) + str(stderr.decode().strip())
await sysd.edit("`" + result + "`")
except FileNotFoundError:
await sysd.edit("`Instal neofetch terlebih dahulu!`")
@register(outgoing=True, pattern=r"^\.botver$")
async def bot_ver(event):
""" For .botver command, get the bot version. """
if not event.text[0].isalpha() and event.text[0] not in ("/", "#", "@", "!"):
if which("git") is not None:
ver = await asyncrunapp(
"git",
"describe",
"--all",
"--long",
stdout=asyncPIPE,
stderr=asyncPIPE,
)
stdout, stderr = await ver.communicate()
verout = str(stdout.decode().strip()) + str(stderr.decode().strip())
rev = await asyncrunapp(
"git",
"rev-list",
"--all",
"--count",
stdout=asyncPIPE,
stderr=asyncPIPE,
)
stdout, stderr = await rev.communicate()
revout = str(stdout.decode().strip()) + str(stderr.decode().strip())
await event.edit(
"**Versi Userbot** : " f"`{verout}" "` \n" "**Revisi** : " f"`{revout}" "`"
)
else:
await event.edit(
"Sayang sekali Anda tidak memiliki git, Anda tetap menjalankan “v1.beta.4”!"
)
@register(outgoing=True, pattern=r"^\.pip(?: |$)(.*)")
async def pipcheck(pip):
""" For .pip command, do a pip search. """
if not pip.text[0].isalpha() and pip.text[0] not in ("/", "#", "@", "!"):
pipmodule = pip.pattern_match.group(1)
if pipmodule:
await pip.edit("`Sedang mencari...`")
pipc = await asyncrunapp(
"pip3",
"search",
pipmodule,
stdout=asyncPIPE,
stderr=asyncPIPE,
)
stdout, stderr = await pipc.communicate()
pipout = str(stdout.decode().strip()) + str(stderr.decode().strip())
if pipout:
if len(pipout) > 4096:
await pip.edit("`Output terlalu besar, dikirim sebagai file`")
file = open("output.txt", "w+")
file.write(pipout)
file.close()
await pip.client.send_file(
pip.chat_id,
"output.txt",
reply_to=pip.id,
)
remove("output.txt")
return
await pip.edit(
"**Pencarian** : \n`"
f"pip3 search {pipmodule}"
"`\n**Hasil** : \n`"
f"{pipout}"
"`"
)
else:
await pip.edit(
"**Pencarian** : \n`"
f"pip3 search {pipmodule}"
"`\n**Hasil** : \n`Tidak ada hasil yang dikembalikan/salah`"
)
else:
await pip.edit("`Gunakan “.help pip” untuk melihat contoh`")
@register(outgoing=True, pattern=r"^\.alive$")
async def amireallyalive(alive):
"""For .alive command, check if the bot is running."""
logo = ALIVE_LOGO
uptime = await get_readable_time((time.time() - StartTime))
output = (
f"❖ **WEEBPROJECT AKTIF - BERJALAN NORMAL** ❖\n\n"
f"**⌯ Pengguna** : {DEFAULTUSER}\n"
f"**⌯ Versi Python** : `{python_version()}`\n"
f"**⌯ Versi Telethon** : `{version.__version__}`\n"
f"**⌯ Berjalan di** : `{repo.active_branch.name}`\n"
f"**⌯ Modul dimuat** : `{len(CMD_HELP)} modul`\n"
f"**⌯ Bot aktif sejak** : `{uptime}`\n"
)
if ALIVE_LOGO:
try:
logo = ALIVE_LOGO
await bot.send_file(alive.chat_id, logo, caption=output)
await alive.delete()
except MediaEmptyError:
await alive.edit(
output + "\n\n`Logo yang diberikan tidak valid."
"\nPastikan tautan diarahkan ke gambar logo`."
)
else:
await alive.edit(output)
@register(outgoing=True, pattern=r"^\.aliveu")
async def amireallyaliveuser(username):
""" For .aliveu command, change the username in the .alive command. """
message = username.text
output = ".aliveu [pengguna baru tanpa tanda kurung] juga tidak bisa kosong"
if not (message == ".aliveu" or message[7:8] != " "):
newuser = message[8:]
global DEFAULTUSER
DEFAULTUSER = newuser
output = "`Berhasil mengubah pengguna menjadi " + newuser + ".`"
await username.edit(f"{output}")
@register(outgoing=True, pattern=r"^\.resetalive$")
async def amireallyalivereset(ureset):
""" For .resetalive command, reset the username in the .alive command. """
global DEFAULTUSER
DEFAULTUSER = str(ALIVE_NAME) if ALIVE_NAME else uname().node
await ureset.edit("`Berhasil menyetel ulang pengguna untuk “.alive”`")
CMD_HELP.update(
{
"sysd": "`.sysd`"
"\n➥ Menampilkan informasi sistem menggunakan neofetch.",
"botver": "`.botver`"
"\n➥ Menampilkan versi userbot.",
"pip": "`.pip [modul]`"
"\n➥ Melakukan pencarian modul pip.",
"alive": "`.alive`"
"\n➥ Melihat apakah bot Anda berfungsi atau tidak."
"\n\n`.aliveu [teks]`"
"\n➥ Mengubah “Pengguna” di `.alive` menjadi teks yang Anda inginkan."
"\n\n`.resetalive`"
"\n➥ Menyetel ulang “Pengguna” ke default.",
}
)
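`get_readable_time` at the top of this module peels seconds into s/m/h/days with successive `divmod` calls. A synchronous, dependency-free sketch of the same formatting (the function name is illustrative):

```python
def readable_time(seconds):
    """Format seconds as 'Ndays, H:M:S'-style text, like get_readable_time."""
    parts, suffixes = [], ["s", "m", "h", "days"]
    for i in range(4):
        if seconds == 0:
            break
        # The first two passes split off seconds/minutes (base 60),
        # the last two split off hours/days (base 24).
        seconds, value = divmod(seconds, 60 if i < 2 else 24)
        parts.append(str(value) + suffixes[i])
    prefix = parts.pop() + ", " if len(parts) == 4 else ""
    return prefix + ":".join(reversed(parts))
```

Like the original, the days component is split out with `", "` while the remaining units are colon-joined in descending order.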
# -----------------------------------------------------------------------------
# fabric_cf/actor/core/kernel/rpc_request_type.py
# (repo: fabric-testbed/ActorBase, license: MIT)
# -----------------------------------------------------------------------------
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2020 FABRIC Testbed
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
#
# Author: Komal Thareja (kthare10@renci.org)
from enum import Enum
class RPCRequestType(Enum):
Unknown = 1
Ticket = 4
ExtendTicket = 5
Relinquish = 6
Redeem = 7
ExtendLease = 8
ModifyLease = 9
UpdateTicket = 10
UpdateLease = 11
Close = 12
Query = 13
QueryResult = 14
FailedRPC = 15
UpdateDelegation = 16
ClaimDelegation = 17
ReclaimDelegation = 18
DeliveryAck = 19
def __str__(self):
return self.name
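Overriding `__str__` to return `self.name` makes members print as bare names ("Ticket") rather than the default "RPCRequestType.Ticket", which keeps log lines compact. A minimal demonstration with a hypothetical two-member enum:

```python
from enum import Enum

class DemoRequestType(Enum):
    """Tiny stand-in for RPCRequestType with the same __str__ override."""
    Ticket = 4
    Redeem = 7

    def __str__(self):
        return self.name
```

Members remain addressable by wire value, e.g. `DemoRequestType(7)` resolves to `DemoRequestType.Redeem`.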
# -----------------------------------------------------------------------------
# sql/instance.py (repo: hunkjun/Archery2, license: Apache-2.0)
# -----------------------------------------------------------------------------
# -*- coding: UTF-8 -*-
import shlex
import MySQLdb
import os
import time
import simplejson as json
from django.conf import settings
from django.contrib.auth.decorators import permission_required
from django.http import HttpResponse
from django.views.decorators.cache import cache_page
from common.config import SysConfig
from common.utils.extend_json_encoder import ExtendJSONEncoder
from sql.engines import get_engine
from sql.plugins.schemasync import SchemaSync
from .models import Instance, ParamTemplate, ParamHistory
@permission_required('sql.menu_instance_list', raise_exception=True)
def lists(request):
    """Fetch the instance list."""
limit = int(request.POST.get('limit'))
offset = int(request.POST.get('offset'))
type = request.POST.get('type')
db_type = request.POST.get('db_type')
tags = request.POST.getlist('tags[]')
limit = offset + limit
search = request.POST.get('search', '')
    # Assemble the optional filters
filter_dict = dict()
    # Filter by search keyword
if search:
filter_dict['instance_name__icontains'] = search
    # Filter by instance type
if type:
filter_dict['type'] = type
    # Filter by database type
if db_type:
filter_dict['db_type'] = db_type
instances = Instance.objects.filter(**filter_dict)
    # Filter by tags, returning instances that carry all of them. TODO: the loop generates multi-table JOINs, which may be inefficient on large datasets
if tags:
for tag in tags:
instances = instances.filter(instance_tag=tag, instance_tag__active=True)
count = instances.count()
instances = instances[offset:limit].values("id", "instance_name", "db_type", "type", "host", "port", "user")
    # Serialize the QuerySet
rows = [row for row in instances]
result = {"total": count, "rows": rows}
return HttpResponse(json.dumps(result, cls=ExtendJSONEncoder, bigint_as_string=True),
content_type='application/json')
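`lists` builds its ORM keyword arguments incrementally so that empty criteria are simply skipped. The same pattern in isolation (the helper name is hypothetical; the keys match the filters used above):

```python
def build_instance_filters(search="", type="", db_type=""):
    """Collect only the non-empty criteria into ORM filter kwargs."""
    filter_dict = {}
    if search:
        filter_dict["instance_name__icontains"] = search
    if type:
        filter_dict["type"] = type
    if db_type:
        filter_dict["db_type"] = db_type
    return filter_dict
```

The resulting dict can be splatted straight into `Instance.objects.filter(**filter_dict)`, which with an empty dict returns the unfiltered queryset.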
@permission_required('sql.param_view', raise_exception=True)
def param_list(request):
"""
    Fetch the parameter list for an instance.
:param request:
:return:
"""
instance_id = request.POST.get('instance_id')
editable = True if request.POST.get('editable') else False
search = request.POST.get('search', '')
try:
ins = Instance.objects.get(id=instance_id)
except Instance.DoesNotExist:
result = {'status': 1, 'msg': '实例不存在-3', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
    # Fetch the list of configured parameter templates
cnf_params = dict()
for param in ParamTemplate.objects.filter(db_type=ins.db_type, variable_name__contains=search).values(
'id', 'variable_name', 'default_value', 'valid_values', 'description', 'editable'):
param['variable_name'] = param['variable_name'].lower()
cnf_params[param['variable_name']] = param
    # Fetch the instance's runtime variables
engine = get_engine(instance=ins)
ins_variables = engine.get_variables()
    # Assemble the result rows
rows = list()
for variable in ins_variables.rows:
variable_name = variable[0].lower()
row = {
'variable_name': variable_name,
'runtime_value': variable[1],
'editable': False,
}
if variable_name in cnf_params.keys():
row = dict(row, **cnf_params[variable_name])
rows.append(row)
    # Filter rows by editability
if editable:
rows = [row for row in rows if row['editable']]
else:
rows = [row for row in rows if not row['editable']]
return HttpResponse(json.dumps(rows, cls=ExtendJSONEncoder, bigint_as_string=True),
content_type='application/json')
@permission_required('sql.param_view', raise_exception=True)
def param_history(request):
"""实例参数修改历史"""
limit = int(request.POST.get('limit'))
offset = int(request.POST.get('offset'))
limit = offset + limit
instance_id = request.POST.get('instance_id')
search = request.POST.get('search', '')
phs = ParamHistory.objects.filter(instance__id=instance_id)
    # Apply the search filter (chained on the existing instance filter;
    # starting a fresh queryset here would silently drop the instance scope)
    if search:
        phs = phs.filter(variable_name__contains=search)
count = phs.count()
phs = phs[offset:limit].values("instance__instance_name", "variable_name", "old_var", "new_var",
"user_display", "create_time")
    # Serialize the QuerySet
rows = [row for row in phs]
result = {"total": count, "rows": rows}
return HttpResponse(json.dumps(result, cls=ExtendJSONEncoder, bigint_as_string=True),
content_type='application/json')
@permission_required('sql.param_edit', raise_exception=True)
def param_edit(request):
user = request.user
instance_id = request.POST.get('instance_id')
variable_name = request.POST.get('variable_name')
variable_value = request.POST.get('runtime_value')
try:
ins = Instance.objects.get(id=instance_id)
except Instance.DoesNotExist:
result = {'status': 1, 'msg': '实例不存在-4', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
    # Apply the parameter change
engine = get_engine(instance=ins)
    # Check that a parameter template is configured
if not ParamTemplate.objects.filter(variable_name=variable_name).exists():
result = {'status': 1, 'msg': '请先在参数模板中配置该参数!', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
    # Fetch the current runtime value
runtime_value = engine.get_variables(variables=[variable_name]).rows[0][1]
if variable_value == runtime_value:
result = {'status': 1, 'msg': '参数值与实际运行值一致,未调整!', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
set_result = engine.set_variable(variable_name=variable_name, variable_value=variable_value)
if set_result.error:
result = {'status': 1, 'msg': f'设置错误,错误信息:{set_result.error}', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
    # On success, record the change
else:
ParamHistory.objects.create(
instance=ins,
variable_name=variable_name,
old_var=runtime_value,
new_var=variable_value,
set_sql=set_result.full_sql,
user_name=user.username,
user_display=user.display
)
result = {'status': 0, 'msg': '修改成功,请手动持久化到配置文件!', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
@permission_required('sql.menu_schemasync', raise_exception=True)
def schemasync(request):
"""对比实例schema信息"""
instance_name = request.POST.get('instance_name')
db_name = request.POST.get('db_name')
target_instance_name = request.POST.get('target_instance_name')
target_db_name = request.POST.get('target_db_name')
sync_auto_inc = True if request.POST.get('sync_auto_inc') == 'true' else False
sync_comments = True if request.POST.get('sync_comments') == 'true' else False
result = {'status': 0, 'msg': 'ok', 'data': {'diff_stdout': '', 'patch_stdout': '', 'revert_stdout': ''}}
    # Compare all databases
if db_name == 'all' or target_db_name == 'all':
db_name = '*'
target_db_name = '*'
    # Fetch connection info for the two instances
instance_info = Instance.objects.get(instance_name=instance_name)
target_instance_info = Instance.objects.get(instance_name=target_instance_name)
    # Hand off to SchemaSync to produce the comparison
schema_sync = SchemaSync()
    # Prepare arguments
tag = int(time.time())
output_directory = os.path.join(settings.BASE_DIR, 'downloads/schemasync/')
os.makedirs(output_directory, exist_ok=True)
db_name = shlex.quote(db_name)
target_db_name = shlex.quote(target_db_name)
args = {
"sync-auto-inc": sync_auto_inc,
"sync-comments": sync_comments,
"tag": tag,
"output-directory": output_directory,
"source": r"mysql://{user}:'{pwd}'@{host}:{port}/{database}".format(user=instance_info.user,
pwd=instance_info.password,
host=instance_info.host,
port=instance_info.port,
database=db_name),
"target": r"mysql://{user}:'{pwd}'@{host}:{port}/{database}".format(user=target_instance_info.user,
pwd=target_instance_info.password,
host=target_instance_info.host,
port=target_instance_info.port,
database=target_db_name)
}
    # Validate arguments
args_check_result = schema_sync.check_args(args)
if args_check_result['status'] == 1:
return HttpResponse(json.dumps(args_check_result), content_type='application/json')
    # Convert arguments into a command line
cmd_args = schema_sync.generate_args2cmd(args, shell=True)
    # Execute the command
try:
stdout, stderr = schema_sync.execute_cmd(cmd_args, shell=True).communicate()
diff_stdout = f'{stdout}{stderr}'
except RuntimeError as e:
diff_stdout = str(e)
    # For a single-database comparison, read the result files so the frontend can display them
if db_name != '*':
date = time.strftime("%Y%m%d", time.localtime())
patch_sql_file = '%s%s_%s.%s.patch.sql' % (output_directory, target_db_name, tag, date)
revert_sql_file = '%s%s_%s.%s.revert.sql' % (output_directory, target_db_name, tag, date)
try:
with open(patch_sql_file, 'r') as f:
patch_sql = f.read()
except FileNotFoundError as e:
patch_sql = str(e)
try:
with open(revert_sql_file, 'r') as f:
revert_sql = f.read()
except FileNotFoundError as e:
revert_sql = str(e)
result['data'] = {'diff_stdout': diff_stdout, 'patch_stdout': patch_sql, 'revert_stdout': revert_sql}
else:
result['data'] = {'diff_stdout': diff_stdout, 'patch_stdout': '', 'revert_stdout': ''}
return HttpResponse(json.dumps(result), content_type='application/json')
@cache_page(60 * 5, key_prefix="insRes")
def instance_resource(request):
"""
    Fetch resources inside an instance: database, schema, table, column
:param request:
:return:
"""
instance_id = request.GET.get('instance_id')
instance_name = request.GET.get('instance_name')
db_name = request.GET.get('db_name', '')
schema_name = request.GET.get('schema_name', '')
tb_name = request.GET.get('tb_name', '')
resource_type = request.GET.get('resource_type')
if instance_id:
instance = Instance.objects.get(id=instance_id)
else:
try:
instance = Instance.objects.get(instance_name=instance_name)
except Instance.DoesNotExist:
result = {'status': 1, 'msg': '实例不存在-1', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
result = {'status': 0, 'msg': 'ok', 'data': []}
try:
# escape
db_name = MySQLdb.escape_string(db_name).decode('utf-8')
schema_name = MySQLdb.escape_string(schema_name).decode('utf-8')
tb_name = MySQLdb.escape_string(tb_name).decode('utf-8')
query_engine = get_engine(instance=instance)
if resource_type == 'database':
resource = query_engine.get_all_databases()
elif resource_type == 'schema' and db_name:
resource = query_engine.get_all_schemas(db_name=db_name)
elif resource_type == 'table' and db_name:
resource = query_engine.get_all_tables(db_name=db_name, schema_name=schema_name)
elif resource_type == 'column' and db_name and tb_name:
resource = query_engine.get_all_columns_by_tb(db_name=db_name, tb_name=tb_name, schema_name=schema_name)
else:
raise TypeError('不支持的资源类型或者参数不完整!')
except Exception as msg:
result['status'] = 1
result['msg'] = str(msg)
else:
if resource.error:
result['status'] = 1
result['msg'] = resource.error
else:
result['data'] = resource.rows
return HttpResponse(json.dumps(result), content_type='application/json')
def describe(request):
"""获取表结构"""
instance_name = request.POST.get('instance_name')
try:
instance = Instance.objects.get(instance_name=instance_name)
except Instance.DoesNotExist:
result = {'status': 1, 'msg': '实例不存在-2', 'data': []}
return HttpResponse(json.dumps(result), content_type='application/json')
db_name = request.POST.get('db_name')
schema_name = request.POST.get('schema_name')
tb_name = request.POST.get('tb_name')
result = {'status': 0, 'msg': 'ok', 'data': []}
try:
query_engine = get_engine(instance=instance)
query_result = query_engine.describe_table(db_name, tb_name, schema_name=schema_name)
result['data'] = query_result.__dict__
except Exception as msg:
result['status'] = 1
result['msg'] = str(msg)
if result['data']['error']:
result['status'] = 1
result['msg'] = result['data']['error']
return HttpResponse(json.dumps(result), content_type='application/json')
| 40.481595 | 116 | 0.626885 |
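The `lists` view above converts the posted `limit` into an end index (`limit = offset + limit`) before slicing the QuerySet. A minimal standalone sketch of that slicing arithmetic, with `paginate` as a hypothetical stand-in for the view, not Archery code:

```python
def paginate(rows, offset, limit):
    # The view treats `limit` as a page size and turns it into an end index,
    # mirroring `limit = offset + limit` followed by `instances[offset:limit]`.
    end = offset + limit
    return {"total": len(rows), "rows": rows[offset:end]}

# A page of 3 rows starting at offset 2:
page = paginate(list(range(10)), offset=2, limit=3)
```

On a real QuerySet, Django evaluates `instances[offset:limit]` lazily as a SQL `LIMIT`/`OFFSET`, so only the requested page is fetched from the database.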
d5962d3e6fce27e4708f225ceb435293361581ce | 176 | py | Python | Day19/Solution.py | varadharajaan/30-Days-of-Code | b5e83c59aefd8122cdd0c7ea890042bc94b6e30d | [
"MIT"
] | 76 | 2016-12-17T09:00:51.000Z | 2021-08-18T18:46:49.000Z | Day19/Solution.py | Ankur-9598/30-Days-Of-Code | 42c682cd6ae0a1642e1548690615e424e881d1ae | [
"MIT"
] | 70 | 2019-05-22T05:53:44.000Z | 2020-11-16T14:26:14.000Z | Day19/Solution.py | Ankur-9598/30-Days-Of-Code | 42c682cd6ae0a1642e1548690615e424e881d1ae | [
"MIT"
] | 116 | 2017-01-01T20:22:27.000Z | 2021-07-25T08:05:51.000Z | class Calculator(AdvancedArithmetic):
def divisorSum(self, n):
s = 0
for i in range(1,n+1):
if (n%i == 0):
s+=i
return s | 25.142857 | 37 | 0.460227 |
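The `divisorSum` loop above is O(n). For larger inputs a common refinement walks only up to √n and adds each divisor together with its cofactor; `divisor_sum` below is an illustrative sketch, not part of the original solution:

```python
import math

def divisor_sum(n):
    # Each divisor i <= sqrt(n) pairs with the cofactor n // i;
    # add both, taking care not to double-count a perfect-square root.
    s = 0
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            s += i
            if n // i != i:
                s += n // i
    return s
```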
a799516aca1f05306eb0132ee6de1ba3528bb45a | 6,043 | py | Python | isi_sdk_8_2_2/isi_sdk_8_2_2/models/cluster_config_device.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 24 | 2018-06-22T14:13:23.000Z | 2022-03-23T01:21:26.000Z | isi_sdk_8_2_2/isi_sdk_8_2_2/models/cluster_config_device.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 46 | 2018-04-30T13:28:22.000Z | 2022-03-21T21:11:07.000Z | isi_sdk_8_2_2/isi_sdk_8_2_2/models/cluster_config_device.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 29 | 2018-06-19T00:14:04.000Z | 2022-02-08T17:51:19.000Z | # coding: utf-8
"""
Isilon SDK
Isilon SDK - Language bindings for the OneFS API # noqa: E501
OpenAPI spec version: 9
Contact: sdk@isilon.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class ClusterConfigDevice(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'devid': 'int',
'guid': 'str',
'is_up': 'bool',
'lnn': 'int'
}
attribute_map = {
'devid': 'devid',
'guid': 'guid',
'is_up': 'is_up',
'lnn': 'lnn'
}
def __init__(self, devid=None, guid=None, is_up=None, lnn=None): # noqa: E501
"""ClusterConfigDevice - a model defined in Swagger""" # noqa: E501
self._devid = None
self._guid = None
self._is_up = None
self._lnn = None
self.discriminator = None
self.devid = devid
self.guid = guid
self.is_up = is_up
self.lnn = lnn
@property
def devid(self):
"""Gets the devid of this ClusterConfigDevice. # noqa: E501
Device ID. # noqa: E501
:return: The devid of this ClusterConfigDevice. # noqa: E501
:rtype: int
"""
return self._devid
@devid.setter
def devid(self, devid):
"""Sets the devid of this ClusterConfigDevice.
Device ID. # noqa: E501
:param devid: The devid of this ClusterConfigDevice. # noqa: E501
:type: int
"""
if devid is None:
raise ValueError("Invalid value for `devid`, must not be `None`") # noqa: E501
self._devid = devid
@property
def guid(self):
"""Gets the guid of this ClusterConfigDevice. # noqa: E501
Device GUID. # noqa: E501
:return: The guid of this ClusterConfigDevice. # noqa: E501
:rtype: str
"""
return self._guid
@guid.setter
def guid(self, guid):
"""Sets the guid of this ClusterConfigDevice.
Device GUID. # noqa: E501
:param guid: The guid of this ClusterConfigDevice. # noqa: E501
:type: str
"""
if guid is None:
raise ValueError("Invalid value for `guid`, must not be `None`") # noqa: E501
self._guid = guid
@property
def is_up(self):
"""Gets the is_up of this ClusterConfigDevice. # noqa: E501
If true, this node is online and communicating with the local node and every other node with the is_up property normally. If false, this node is not currently communicating with the local node or other nodes with the is_up property. It may be shut down, rebooting, disconnected from the backend network, or connected only to other nodes. # noqa: E501
:return: The is_up of this ClusterConfigDevice. # noqa: E501
:rtype: bool
"""
return self._is_up
@is_up.setter
def is_up(self, is_up):
"""Sets the is_up of this ClusterConfigDevice.
If true, this node is online and communicating with the local node and every other node with the is_up property normally. If false, this node is not currently communicating with the local node or other nodes with the is_up property. It may be shut down, rebooting, disconnected from the backend network, or connected only to other nodes. # noqa: E501
:param is_up: The is_up of this ClusterConfigDevice. # noqa: E501
:type: bool
"""
if is_up is None:
raise ValueError("Invalid value for `is_up`, must not be `None`") # noqa: E501
self._is_up = is_up
@property
def lnn(self):
"""Gets the lnn of this ClusterConfigDevice. # noqa: E501
Device logical node number. # noqa: E501
:return: The lnn of this ClusterConfigDevice. # noqa: E501
:rtype: int
"""
return self._lnn
@lnn.setter
def lnn(self, lnn):
"""Sets the lnn of this ClusterConfigDevice.
Device logical node number. # noqa: E501
:param lnn: The lnn of this ClusterConfigDevice. # noqa: E501
:type: int
"""
if lnn is None:
raise ValueError("Invalid value for `lnn`, must not be `None`") # noqa: E501
self._lnn = lnn
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ClusterConfigDevice):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 29.768473 | 361 | 0.579348 |
c63d63306e3b07e73f49cdd2412c78e0657a2e08 | 851 | py | Python | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_deployment/batch/batch_deployment_settings.py | dubiety/azure-sdk-for-python | 62ffa839f5d753594cf0fe63668f454a9d87a346 | [
"MIT"
] | 1 | 2022-02-01T18:50:12.000Z | 2022-02-01T18:50:12.000Z | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_deployment/batch/batch_deployment_settings.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_deployment/batch/batch_deployment_settings.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | # ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import logging
from typing import Any
from azure.ai.ml._schema import PatchedSchemaMeta
from azure.ai.ml.entities._deployment.deployment_settings import BatchRetrySettings
from marshmallow import fields, post_load
module_logger = logging.getLogger(__name__)
class BatchRetrySettingsSchema(metaclass=PatchedSchemaMeta):
max_retries = fields.Int(
metadata={"description": "The number of maximum tries for a failed or timeout mini batch."},
)
timeout = fields.Int(metadata={"description": "The timeout for a mini batch."})
@post_load
def make(self, data: Any, **kwargs: Any) -> BatchRetrySettings:
return BatchRetrySettings(**data)
| 35.458333 | 100 | 0.652174 |
4719f0b27f3a3161c35d14dd03e0face72daa1fd | 4,893 | py | Python | tests/test_multi_body_errors.py | dmig/fastapi | 497e5e6257a282162a435b4d37f82d567fe73195 | [
"MIT"
] | 5 | 2020-06-17T10:02:51.000Z | 2021-03-19T12:55:50.000Z | tests/test_multi_body_errors.py | dmig/fastapi | 497e5e6257a282162a435b4d37f82d567fe73195 | [
"MIT"
] | 22 | 2020-06-27T19:24:59.000Z | 2020-10-18T19:35:50.000Z | tests/test_multi_body_errors.py | dmig/fastapi | 497e5e6257a282162a435b4d37f82d567fe73195 | [
"MIT"
] | 1 | 2021-01-30T14:29:30.000Z | 2021-01-30T14:29:30.000Z | from decimal import Decimal
from typing import List
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel, condecimal
app = FastAPI()
class Item(BaseModel):
name: str
age: condecimal(gt=Decimal(0.0))
@app.post("/items/")
def save_item_no_body(item: List[Item]):
return {"item": item}
client = TestClient(app)
openapi_schema = {
"openapi": "3.0.2",
"info": {"title": "FastAPI", "version": "0.1.0"},
"paths": {
"/items/": {
"post": {
"responses": {
"200": {
"description": "Successful Response",
"content": {"application/json": {"schema": {}}},
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HTTPValidationError"
}
}
},
},
},
"summary": "Save Item No Body",
"operationId": "save_item_no_body_items__post",
"requestBody": {
"content": {
"application/json": {
"schema": {
"title": "Item",
"type": "array",
"items": {"$ref": "#/components/schemas/Item"},
}
}
},
"required": True,
},
}
}
},
"components": {
"schemas": {
"Item": {
"title": "Item",
"required": ["name", "age"],
"type": "object",
"properties": {
"name": {"title": "Name", "type": "string"},
"age": {"title": "Age", "exclusiveMinimum": 0.0, "type": "number"},
},
},
"ValidationError": {
"title": "ValidationError",
"required": ["loc", "msg", "type"],
"type": "object",
"properties": {
"loc": {
"title": "Location",
"type": "array",
"items": {"type": "string"},
},
"msg": {"title": "Message", "type": "string"},
"type": {"title": "Error Type", "type": "string"},
},
},
"HTTPValidationError": {
"title": "HTTPValidationError",
"type": "object",
"properties": {
"detail": {
"title": "Detail",
"type": "array",
"items": {"$ref": "#/components/schemas/ValidationError"},
}
},
},
}
},
}
single_error = {
"detail": [
{
"ctx": {"limit_value": 0.0},
"loc": ["body", 0, "age"],
"msg": "ensure this value is greater than 0",
"type": "value_error.number.not_gt",
}
]
}
multiple_errors = {
"detail": [
{
"loc": ["body", 0, "name"],
"msg": "field required",
"type": "value_error.missing",
},
{
"loc": ["body", 0, "age"],
"msg": "value is not a valid decimal",
"type": "type_error.decimal",
},
{
"loc": ["body", 1, "name"],
"msg": "field required",
"type": "value_error.missing",
},
{
"loc": ["body", 1, "age"],
"msg": "value is not a valid decimal",
"type": "type_error.decimal",
},
]
}
def test_openapi_schema():
response = client.get("/openapi.json")
assert response.status_code == 200, response.text
assert response.json() == openapi_schema
def test_put_correct_body():
response = client.post("/items/", json=[{"name": "Foo", "age": 5}])
assert response.status_code == 200, response.text
assert response.json() == {"item": [{"name": "Foo", "age": 5}]}
def test_jsonable_encoder_requiring_error():
response = client.post("/items/", json=[{"name": "Foo", "age": -1.0}])
assert response.status_code == 422, response.text
assert response.json() == single_error
def test_put_incorrect_body_multiple():
response = client.post("/items/", json=[{"age": "five"}, {"age": "six"}])
assert response.status_code == 422, response.text
assert response.json() == multiple_errors
| 30.203704 | 87 | 0.405273 |
4a5c8bcf90589cace8b9b2b2283e688316b1a4ac | 6,433 | py | Python | frappe/website/doctype/website_settings/website_settings.py | kevingdc/frappe | 985bf3042e8277ef8ca93065b89f12a8c097f1a8 | [
"MIT"
] | null | null | null | frappe/website/doctype/website_settings/website_settings.py | kevingdc/frappe | 985bf3042e8277ef8ca93065b89f12a8c097f1a8 | [
"MIT"
] | null | null | null | frappe/website/doctype/website_settings/website_settings.py | kevingdc/frappe | 985bf3042e8277ef8ca93065b89f12a8c097f1a8 | [
"MIT"
] | null | null | null | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
import requests
import frappe
from frappe import _
from frappe.utils import get_request_site_address, encode
from frappe.model.document import Document
from six.moves.urllib.parse import quote
from frappe.website.router import resolve_route
from frappe.website.doctype.website_theme.website_theme import add_website_theme
from frappe.integrations.doctype.google_settings.google_settings import get_auth_url
INDEXING_SCOPES = "https://www.googleapis.com/auth/indexing"
class WebsiteSettings(Document):
def validate(self):
self.validate_top_bar_items()
self.validate_footer_items()
self.validate_home_page()
self.validate_google_settings()
def validate_home_page(self):
if frappe.flags.in_install:
return
if self.home_page and not resolve_route(self.home_page):
frappe.msgprint(_("Invalid Home Page") + " (Standard pages - index, login, products, blog, about, contact)")
self.home_page = ''
def validate_top_bar_items(self):
"""validate url in top bar items"""
for top_bar_item in self.get("top_bar_items"):
if top_bar_item.parent_label:
parent_label_item = self.get("top_bar_items", {"label": top_bar_item.parent_label})
if not parent_label_item:
# invalid item
frappe.throw(_("{0} does not exist in row {1}").format(top_bar_item.parent_label, top_bar_item.idx))
elif not parent_label_item[0] or parent_label_item[0].url:
# parent cannot have url
frappe.throw(_("{0} in row {1} cannot have both URL and child items").format(top_bar_item.parent_label,
top_bar_item.idx))
def validate_footer_items(self):
"""validate url in top bar items"""
for footer_item in self.get("footer_items"):
if footer_item.parent_label:
parent_label_item = self.get("footer_items", {"label": footer_item.parent_label})
if not parent_label_item:
# invalid item
frappe.throw(_("{0} does not exist in row {1}").format(footer_item.parent_label, footer_item.idx))
elif not parent_label_item[0] or parent_label_item[0].url:
# parent cannot have url
frappe.throw(_("{0} in row {1} cannot have both URL and child items").format(footer_item.parent_label,
footer_item.idx))
def validate_google_settings(self):
if self.enable_google_indexing and not frappe.db.get_single_value("Google Settings", "enable"):
frappe.throw(_("Enable Google API in Google Settings."))
def on_update(self):
self.clear_cache()
def clear_cache(self):
# make js and css
# clear web cache (for menus!)
frappe.clear_cache(user = 'Guest')
from frappe.website.render import clear_cache
clear_cache()
# clears role based home pages
frappe.clear_cache()
def get_access_token(self):
google_settings = frappe.get_doc("Google Settings")
if not google_settings.enable:
frappe.throw(_("Google Integration is disabled."))
if not self.indexing_refresh_token:
button_label = frappe.bold(_("Allow API Indexing Access"))
raise frappe.ValidationError(_("Click on {0} to generate Refresh Token.").format(button_label))
data = {
"client_id": google_settings.client_id,
"client_secret": google_settings.get_password(fieldname="client_secret", raise_exception=False),
"refresh_token": self.get_password(fieldname="indexing_refresh_token", raise_exception=False),
"grant_type": "refresh_token",
"scope": INDEXING_SCOPES
}
try:
res = requests.post(get_auth_url(), data=data).json()
except requests.exceptions.HTTPError:
button_label = frappe.bold(_("Allow Google Indexing Access"))
frappe.throw(_("Something went wrong during the token generation. Click on {0} to generate a new one.").format(button_label))
return res.get("access_token")
def get_website_settings(context=None):
hooks = frappe.get_hooks()
context = context or frappe._dict()
context = context.update({
'top_bar_items': get_items('top_bar_items'),
'footer_items': get_items('footer_items'),
"post_login": [
{"label": _("My Account"), "url": "/me"},
{"label": _("Logout"), "url": "/?cmd=web_logout"}
]
})
settings = frappe.get_single("Website Settings")
for k in ["banner_html", "banner_image", "brand_html", "copyright", "twitter_share_via",
"facebook_share", "google_plus_one", "twitter_share", "linked_in_share",
"disable_signup", "hide_footer_signup", "head_html", "title_prefix",
"navbar_template", "footer_template", "navbar_search", "enable_view_tracking",
"footer_logo", "call_to_action", "call_to_action_url"]:
if hasattr(settings, k):
context[k] = settings.get(k)
if settings.address:
context["footer_address"] = settings.address
for k in ["facebook_share", "google_plus_one", "twitter_share", "linked_in_share",
"disable_signup"]:
context[k] = int(context.get(k) or 0)
if frappe.request:
context.url = quote(str(get_request_site_address(full_address=True)), safe="/:")
context.encoded_title = quote(encode(context.title or ""), str(""))
for update_website_context in hooks.update_website_context or []:
frappe.get_attr(update_website_context)(context)
context.web_include_js = hooks.web_include_js or []
context.web_include_css = hooks.web_include_css or []
via_hooks = frappe.get_hooks("website_context")
for key in via_hooks:
context[key] = via_hooks[key]
if key not in ("top_bar_items", "footer_items", "post_login") \
and isinstance(context[key], (list, tuple)):
context[key] = context[key][-1]
add_website_theme(context)
if not context.get("favicon"):
context["favicon"] = "/assets/frappe/images/favicon.png"
if settings.favicon and settings.favicon != "attach_files:":
context["favicon"] = settings.favicon
context["hide_login"] = settings.hide_login
return context
def get_items(parentfield):
all_top_items = frappe.db.sql("""\
select * from `tabTop Bar Item`
where parent='Website Settings' and parentfield= %s
order by idx asc""", parentfield, as_dict=1)
top_items = all_top_items[:]
# attach child items to top bar
for d in all_top_items:
if d['parent_label']:
for t in top_items:
if t['label']==d['parent_label']:
if not 'child_items' in t:
t['child_items'] = []
t['child_items'].append(d)
break
return top_items
@frappe.whitelist(allow_guest=True)
def is_chat_enabled():
return bool(frappe.db.get_single_value('Website Settings', 'chat_enable'))
| 34.40107 | 128 | 0.736826 |
b70da614c13d7461383d9057bad8e4caa9c06033 | 8,557 | py | Python | 04_train_vae.py | anofox/aws_summit | 5c483cab69c10e77fa2524c799cd7114030d0f58 | [
"Apache-2.0"
] | null | null | null | 04_train_vae.py | anofox/aws_summit | 5c483cab69c10e77fa2524c799cd7114030d0f58 | [
"Apache-2.0"
] | null | null | null | 04_train_vae.py | anofox/aws_summit | 5c483cab69c10e77fa2524c799cd7114030d0f58 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from keras.layers import Dense, Input
from keras.layers import Conv2D, Flatten, Lambda
from keras.layers import Reshape, Conv2DTranspose
from keras.models import Model
from keras.datasets import mnist
from keras.losses import mse, binary_crossentropy
from keras.utils import plot_model
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
import argparse
import os
# reparameterization trick
# instead of sampling from Q(z|X), sample eps = N(0,I)
# then z = z_mean + sqrt(var)*eps
from showcase.image_sequence import create_rgbdseq_from_files
def sampling(args):
"""Reparameterization trick by sampling fr an isotropic unit Gaussian.
# Arguments:
args (tensor): mean and log of variance of Q(z|X)
# Returns:
z (tensor): sampled latent vector
"""
z_mean, z_log_var = args
batch = K.shape(z_mean)[0]
dim = K.int_shape(z_mean)[1]
# by default, random_normal has mean=0 and std=1.0
epsilon = K.random_normal(shape=(batch, dim))
return z_mean + K.exp(0.5 * z_log_var) * epsilon
def plot_results(models,
data,
batch_size=128,
model_name="vae_mnist"):
"""Plots labels and MNIST digits as function of 2-dim latent vector
# Arguments:
models (tuple): encoder and decoder models
data (tuple): test data and label
batch_size (int): prediction batch size
model_name (string): which model is using this function
"""
encoder, decoder = models
x_test, y_test = data
os.makedirs(model_name, exist_ok=True)
filename = os.path.join(model_name, "vae_mean.png")
# display a 2D plot of the digit classes in the latent space
z_mean, _, _ = encoder.predict(x_test,
batch_size=batch_size)
plt.figure(figsize=(12, 10))
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=y_test)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.savefig(filename)
plt.show()
    # Everything below this point is the MNIST-specific digit-manifold plot,
    # which is unreachable: the function returns early for the RGBD data.
    return
filename = os.path.join(model_name, "digits_over_latent.png")
# display a 30x30 2D manifold of digits
n = 30
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# linearly spaced coordinates corresponding to the 2D plot
# of digit classes in the latent space
grid_x = np.linspace(-4, 4, n)
grid_y = np.linspace(-4, 4, n)[::-1]
for i, yi in enumerate(grid_y):
for j, xi in enumerate(grid_x):
z_sample = np.array([[xi, yi]])
x_decoded = decoder.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
start_range = digit_size // 2
end_range = n * digit_size + start_range + 1
pixel_range = np.arange(start_range, end_range, digit_size)
sample_range_x = np.round(grid_x, 1)
sample_range_y = np.round(grid_y, 1)
plt.xticks(pixel_range, sample_range_x)
plt.yticks(pixel_range, sample_range_y)
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.imshow(figure, cmap='Greys_r')
plt.savefig(filename)
plt.show()
def plot_history(history):
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# MNIST dataset
#(x_train, y_train), (x_test, y_test) = mnist.load_data()
#image_size = x_train.shape[1]
#x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
#x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
#x_train = x_train.astype('float32') / 255
#x_test = x_test.astype('float32') / 255
# Extend Mnist to simulate RGBD
#x_train = np.repeat(x_train, 4, axis=3)
#x_test = np.repeat(x_test, 4, axis=3)
image_height = 96
image_width = 128
n_chans = 4
batch_size = 128
input_shape = (image_height, image_width, n_chans)
train_gen = create_rgbdseq_from_files(glob_pattern="data/*.npz", batch_size=batch_size, is_validation=False)
val_gen = create_rgbdseq_from_files(glob_pattern="data/*.npz", batch_size=batch_size, is_validation=True)
x_train = np.vstack([batch for batch in (train_gen[i] for i in range(800)) if not isinstance(batch, bool)])
x_test = np.vstack([batch for batch in (val_gen[i] for i in range(160)) if not isinstance(batch, bool)])
# network parameters
kernel_size = 6
filters = 16
latent_dim = 16
# VAE model = encoder + decoder
# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
# New
x = Conv2D(filters=input_shape[-1],
kernel_size=(2, 2),
activation='relu',
strides=1,
padding='same')(x)
# New
for i in range(5):
filters *= 2
x = Conv2D(filters=filters,
kernel_size=kernel_size,
activation='relu',
strides=2,
padding='same')(x)
# shape info needed to build decoder model
shape = K.int_shape(x)
# generate latent vector Q(z|X)
x = Flatten()(x)
x = Dense(16, activation='relu')(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)
# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
encoder.summary()
plot_model(encoder, to_file='vae_cnn_encoder.png', show_shapes=True)
# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(shape[1] * shape[2] * shape[3], activation='relu')(latent_inputs)
x = Reshape((shape[1], shape[2], shape[3]))(x)
for i in range(5):
x = Conv2DTranspose(filters=filters,
kernel_size=kernel_size,
activation='relu',
strides=2,
padding='same',
data_format='channels_last')(x)
filters //= 2
outputs = Conv2DTranspose(filters=input_shape[-1],
kernel_size=kernel_size,
activation='sigmoid',
padding='same',
name='decoder_output')(x)
# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='vae_cnn_decoder.png', show_shapes=True)
# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-w", "--weights", help="Load h5 model trained weights")
    parser.add_argument("-m", "--mse", help="Use mse loss instead of binary cross entropy (default)", action='store_true', default=False)
    parser.add_argument("-e", "--epochs", type=int, default=25)
args = parser.parse_args()
models = (encoder, decoder)
#data = (x_test, y_test)
# VAE loss = mse_loss or xent_loss + kl_loss
if args.mse:
reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))
else:
reconstruction_loss = binary_crossentropy(K.flatten(inputs), K.flatten(outputs))
reconstruction_loss *= image_width * image_height
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='rmsprop')
vae.summary()
plot_model(vae, to_file='vae_cnn.png', show_shapes=True)
if args.weights:
        vae.load_weights(args.weights)
else:
# train the autoencoder
hist = vae.fit(x_train,
epochs=args.epochs,
batch_size=batch_size,
validation_data=(x_test, None))
#hist = vae.fit_generator(generator=train_gen,
# validation_data=[(x, None) for x in val_gen],
# use_multiprocessing=True,
# workers=6,
# verbose=3)
vae.save_weights('vae_cnn_rgbd.h5')
plot_history(hist)
#plot_results(models, data, batch_size=batch_size, model_name="vae_cnn") | 33.425781 | 136 | 0.646839 |
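The KL term assembled above (`1 + z_log_var - z_mean**2 - exp(z_log_var)`, summed over the latent dimensions and scaled by -0.5) is the closed-form KL divergence between the encoder's Gaussian and the standard-normal prior. A minimal numeric sketch of the same formula, in plain Python and independent of the Keras graph:

```python
import math

def kl_term(z_mean, z_log_var):
    # Same expression as the Keras loss above:
    # -0.5 * sum(1 + log_var - mean^2 - exp(log_var)) per sample.
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(z_mean, z_log_var))

# A latent code that already matches the N(0, 1) prior costs nothing:
print(kl_term([0.0, 0.0], [0.0, 0.0]))  # -> -0.0
# Shifting the mean away from zero is penalized:
print(kl_term([1.0, 1.0], [0.0, 0.0]))  # -> 1.0
```

This is only a sketch of the scalar formula; the script itself computes it with Keras backend ops so it stays differentiable.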
757c50cd4aca42b352998d153813e11e734ccde4 | 7,280 | py | Python | digits/data/select.py | eegdigits/digits | 09d680ac0ecc16b79a9b096aa28baf8a378108f4 | [
"MIT"
] | null | null | null | digits/data/select.py | eegdigits/digits | 09d680ac0ecc16b79a9b096aa28baf8a378108f4 | [
"MIT"
] | null | null | null | digits/data/select.py | eegdigits/digits | 09d680ac0ecc16b79a9b096aa28baf8a378108f4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Methods to (sub)-select an imported dataset given some criteria.
"""
import numpy as np
# Selectors based on ranges
#
# workaround for this slicing issue
# https://bpaste.net/show/bb7601ca91a8
# as suggested by papna in #pydata is to use
# swaplevel().sort().loc().swaplevel().sort()
def fromtimerange(samples, tmin, tmax):
"""
Sub-select time window from samples data frame.
Expects 2nd layer sample label to be named 't_NNNN'.
"""
timepoints = np.unique(samples.columns.get_level_values(level='sample'))
if timepoints[0][0] != 't':
raise ValueError("Did not find time-values, are we still in time domain?")
if isinstance(tmin, int) and isinstance(tmax, int):
zlen = len(timepoints[0]) - 2
tmin = 't_' + str(tmin).zfill(zlen)
tmax = 't_' + str(tmax).zfill(zlen)
for param in [tmin, tmax]:
if param not in timepoints:
raise ValueError("Invalid value for parameter: {0}".format(param))
return samples.swaplevel(0,1,axis=1).sort_index(axis=1).loc[:,(slice(tmin,tmax),)].swaplevel(0,1,axis=1).sort_index(axis=1)
def fromfreqrange(samples, fmin, fmax):
"""
Sub-select frequency window from samples data frame.
Expects 2nd layer sample label to be named 'f_NNNN'.
"""
freqpoints = np.unique(samples.columns.get_level_values(level='sample'))
if freqpoints[0][0] != 'f':
raise ValueError("Did not find freq-values, did you transform to freq domain?")
freqfloats = np.array([ float(f.split("_")[1]) for f in freqpoints ])
if fmax=='max':
fmax = freqfloats[-1]
if fmin=='min':
fmin = freqfloats[0]
    try:
        fmax = freqpoints[ freqfloats >= float(fmax) ][0]
    except IndexError:
        raise ValueError("fmax({}) invalid, must be <= {}".format(fmax, freqfloats[-1]))
    try:
        fmin = freqpoints[ freqfloats <= float(fmin) ][-1]
    except IndexError:
        raise ValueError("fmin({}) invalid, must be >= {}".format(fmin, freqfloats[0]))
#print("fmin: {}, fmax: {}".format(fmin, fmax))
return samples.swaplevel(0,1,axis=1).sort_index(axis=1).loc[:,(slice(fmin,fmax),)].swaplevel(0,1,axis=1).sort_index(axis=1)
def fromchannelrange(samples, chanmin, chanmax):
"""
Sub-select from a range of channels. Usually fromchannellist or
fromchannelblacklist is more useful.
"""
channels = np.unique(samples.columns.get_level_values(level='channel'))
for param in [chanmin, chanmax]:
if param not in channels:
raise ValueError("Invalid value for parameter: {0}".format(param))
return samples.swaplevel(0,1,axis=1).sort_index(axis=1).loc[:, (slice(chanmin,chanmax),)].swaplevel(0,1,axis=1).sort_index(axis=1)
def fromchannellist(samples, channels):
"""
Returns a sample data frame that only contains channels according to the
channels input list.
"""
if not isinstance(channels, list):
raise ValueError("Channel argument must be a list.")
return samples.loc[:, (channels, slice(None))]
def fromchannelblacklist(samples, channels):
"""
Inverse function to fromchannellist.
"""
if not isinstance(channels, list):
raise ValueError("Channel argument must be a list.")
all_channels = np.unique(samples.columns.get_level_values(level='channel'))
return fromchannellist(samples, list(set(all_channels) - set(channels)))
# Dual selectors (need to reshape/filter targets as well!)
def fromsessionlist(samples, targets, sessions):
"""
Return a samples and targets data frame filtered by a list of session names.
"""
if not isinstance(sessions, list):
raise ValueError("Session argument must be a list.")
if isinstance(sessions[0], int):
sessions = [str(x).zfill(2) for x in sessions]
# this is very digits package specific
if len(sessions[0]) < 2:
sessions = [x.zfill(2) for x in sessions]
samples = samples.loc[(slice(None), (sessions)), :]
targets = targets.loc[(slice(None), (sessions)), :]
return samples, targets
def fromsessionblacklist(samples, targets, sessions):
"""
Inverse function to fromsessionlist.
"""
if not isinstance(sessions, list):
raise ValueError("Session argument must be a list.")
all_sess = np.unique(samples.index.get_level_values(level='session'))
    return fromsessionlist(samples, targets, list(set(all_sess) - set(sessions)))
def fromtriallist(samples, targets, trials):
"""
Return a samples and targets data frame filtered by a list of trial names.
"""
if not isinstance(trials, list):
raise ValueError("Trials argument must be a list.")
samples = samples.swaplevel(0,2,axis=0).sort_index(axis=0).loc[(trials,),:].swaplevel(0,2,axis=0).sort_index(axis=0)
targets = targets.swaplevel(0,2,axis=0).sort_index(axis=0).loc[(trials,),:].swaplevel(0,2,axis=0).sort_index(axis=0)
return samples, targets
def frompresentationlist(samples, targets, presentations):
"""
Return a samples and targets data frame filtered by a list of presentation names.
"""
if not isinstance(presentations, list):
raise ValueError("Presentation argument must be a list.")
if isinstance(presentations[0], int):
presentations = [str(x) for x in presentations]
samples = samples.swaplevel(0,3,axis=0).sort_index(axis=0).loc[(presentations,),:].swaplevel(0,3,axis=0).sort_index(axis=0)
targets = targets.swaplevel(0,3,axis=0).sort_index(axis=0).loc[(presentations,),:].swaplevel(0,3,axis=0).sort_index(axis=0)
return samples, targets
def fromtargetlist(samples, targets, targetlist):
"""
Return a samples and targets data frame filtered by target values
(needs at least 2 targets in the list).
"""
if len(targetlist)<2:
raise ValueError("Targetlist argument must be a list with size >1.")
mask = targets.label == targetlist.pop()
for target in targetlist:
mask = (mask) | (targets.label == target)
return samples[mask], targets[mask]
def jointargets(targets, groups):
"""
Creates two labels and maps all old labels into the two categories.
Combine with fromtargetlist() for arbitrary groups.
Group A,B will be assigned 0,1 respectively.
"""
(groupa, groupb) = groups
targets_new = targets.copy()
for ix,target in targets.iterrows():
if target.label in groupa:
targets_new.loc[ix] = 0
else:
targets_new.loc[ix] = 1
return targets_new
# Helpers
def getchannelnames(samples):
return __getlevelnames(samples, 'channel', 1)
def getsamplingnames(samples):
return __getlevelnames(samples, 'sample', 1)
def getsubjectnames(samples):
return __getlevelnames(samples, 'subject', 0)
def getsessionnames(samples):
return __getlevelnames(samples, 'session', 0)
def gettrialnames(samples):
return __getlevelnames(samples, 'trial', 0)
def getpresentationnames(samples):
return __getlevelnames(samples, 'presentation', 0)
def __getlevelnames(samples, level, axis):
if axis == 0:
names = samples.index.get_level_values(level=level)
else:
names = samples.columns.get_level_values(level=level)
return np.unique(names).tolist()
| 34.018692 | 134 | 0.674313 |
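A note on the `t_NNNN` label handling in `fromtimerange` above: integer bounds are converted back into the zero-padded column labels before slicing. A small standalone sketch of that conversion (the four-digit width is an assumption matching labels like `t_0042`; the real code infers it from an existing label):

```python
def to_time_label(t, example_label="t_0000"):
    # Rebuild a zero-padded label the way fromtimerange does:
    # the pad width is the length of an existing label minus the 't_' prefix.
    zlen = len(example_label) - 2
    return "t_" + str(t).zfill(zlen)

print(to_time_label(42))    # -> t_0042
print(to_time_label(1500))  # -> t_1500
```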
54763b49f67a96c10453e9e05a8bf7514f691d0f | 9,512 | py | Python | nova/conductor/tasks/live_migrate.py | bopopescu/nova-8 | 768d7cc0a632e1a880f00c5840c1ec8051e161be | [
"Apache-2.0"
] | null | null | null | nova/conductor/tasks/live_migrate.py | bopopescu/nova-8 | 768d7cc0a632e1a880f00c5840c1ec8051e161be | [
"Apache-2.0"
] | 2 | 2015-02-03T06:25:24.000Z | 2015-02-04T10:10:36.000Z | nova/conductor/tasks/live_migrate.py | bopopescu/nova-8 | 768d7cc0a632e1a880f00c5840c1ec8051e161be | [
"Apache-2.0"
] | 7 | 2015-01-20T10:30:08.000Z | 2020-02-05T10:29:05.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
import oslo_messaging as messaging
import six
from nova.compute import power_state
from nova.conductor.tasks import base
import nova.conf
from nova import exception
from nova.i18n import _
from nova import objects
from nova.scheduler import utils as scheduler_utils
from nova import utils
LOG = logging.getLogger(__name__)
CONF = nova.conf.CONF
class LiveMigrationTask(base.TaskBase):
def __init__(self, context, instance, destination,
block_migration, disk_over_commit, migration, compute_rpcapi,
servicegroup_api, scheduler_client, request_spec=None):
super(LiveMigrationTask, self).__init__(context, instance)
self.destination = destination
self.block_migration = block_migration
self.disk_over_commit = disk_over_commit
self.migration = migration
self.source = instance.host
self.migrate_data = None
self.compute_rpcapi = compute_rpcapi
self.servicegroup_api = servicegroup_api
self.scheduler_client = scheduler_client
self.request_spec = request_spec
def _execute(self):
self._check_instance_is_active()
self._check_host_is_up(self.source)
if not self.destination:
self.destination = self._find_destination()
self.migration.dest_compute = self.destination
self.migration.save()
else:
self._check_requested_destination()
# TODO(johngarbutt) need to move complexity out of compute manager
# TODO(johngarbutt) disk_over_commit?
return self.compute_rpcapi.live_migration(self.context,
host=self.source,
instance=self.instance,
dest=self.destination,
block_migration=self.block_migration,
migration=self.migration,
migrate_data=self.migrate_data)
def rollback(self):
# TODO(johngarbutt) need to implement the clean up operation
# but this will make sense only once we pull in the compute
# calls, since this class currently makes no state changes,
# except to call the compute method, that has no matching
# rollback call right now.
pass
def _check_instance_is_active(self):
if self.instance.power_state not in (power_state.RUNNING,
power_state.PAUSED):
raise exception.InstanceInvalidState(
instance_uuid=self.instance.uuid,
attr='power_state',
state=self.instance.power_state,
method='live migrate')
def _check_host_is_up(self, host):
service = objects.Service.get_by_compute_host(self.context, host)
if not self.servicegroup_api.service_is_up(service):
raise exception.ComputeServiceUnavailable(host=host)
def _check_requested_destination(self):
self._check_destination_is_not_source()
self._check_host_is_up(self.destination)
self._check_destination_has_enough_memory()
self._check_compatible_with_source_hypervisor(self.destination)
self._call_livem_checks_on_host(self.destination)
def _check_destination_is_not_source(self):
if self.destination == self.source:
raise exception.UnableToMigrateToSelf(
instance_id=self.instance.uuid, host=self.destination)
def _check_destination_has_enough_memory(self):
compute = self._get_compute_info(self.destination)
free_ram_mb = compute.free_ram_mb
total_ram_mb = compute.memory_mb
mem_inst = self.instance.memory_mb
# NOTE(sbauza): Now the ComputeNode object reports an allocation ratio
# that can be provided by the compute_node if new or by the controller
ram_ratio = compute.ram_allocation_ratio
# NOTE(sbauza): Mimic the RAMFilter logic in order to have the same
# ram validation
avail = total_ram_mb * ram_ratio - (total_ram_mb - free_ram_mb)
if not mem_inst or avail <= mem_inst:
instance_uuid = self.instance.uuid
dest = self.destination
reason = _("Unable to migrate %(instance_uuid)s to %(dest)s: "
"Lack of memory(host:%(avail)s <= "
"instance:%(mem_inst)s)")
raise exception.MigrationPreCheckError(reason=reason % dict(
instance_uuid=instance_uuid, dest=dest, avail=avail,
mem_inst=mem_inst))
def _get_compute_info(self, host):
return objects.ComputeNode.get_first_node_by_host_for_old_compat(
self.context, host)
def _check_compatible_with_source_hypervisor(self, destination):
source_info = self._get_compute_info(self.source)
destination_info = self._get_compute_info(destination)
source_type = source_info.hypervisor_type
destination_type = destination_info.hypervisor_type
if source_type != destination_type:
raise exception.InvalidHypervisorType()
source_version = source_info.hypervisor_version
destination_version = destination_info.hypervisor_version
if source_version > destination_version:
raise exception.DestinationHypervisorTooOld()
def _call_livem_checks_on_host(self, destination):
try:
self.migrate_data = self.compute_rpcapi.\
check_can_live_migrate_destination(self.context, self.instance,
destination, self.block_migration, self.disk_over_commit)
except messaging.MessagingTimeout:
msg = _("Timeout while checking if we can live migrate to host: "
"%s") % destination
raise exception.MigrationPreCheckError(msg)
def _find_destination(self):
# TODO(johngarbutt) this retry loop should be shared
attempted_hosts = [self.source]
image = utils.get_image_from_system_metadata(
self.instance.system_metadata)
filter_properties = {'ignore_hosts': attempted_hosts}
if not self.request_spec:
# NOTE(sbauza): We were unable to find an original RequestSpec
# object - probably because the instance is old.
# We need to mock that the old way
request_spec = objects.RequestSpec.from_components(
self.context, self.instance.uuid, image,
self.instance.flavor, self.instance.numa_topology,
self.instance.pci_requests,
filter_properties, None, self.instance.availability_zone
)
else:
request_spec = self.request_spec
# NOTE(sbauza): Force_hosts/nodes needs to be reset
# if we want to make sure that the next destination
# is not forced to be the original host
request_spec.reset_forced_destinations()
scheduler_utils.setup_instance_group(self.context, request_spec)
host = None
while host is None:
self._check_not_over_max_retries(attempted_hosts)
request_spec.ignore_hosts = attempted_hosts
try:
host = self.scheduler_client.select_destinations(self.context,
request_spec, [self.instance.uuid])[0]['host']
except messaging.RemoteError as ex:
# TODO(ShaoHe Feng) There maybe multi-scheduler, and the
# scheduling algorithm is R-R, we can let other scheduler try.
# Note(ShaoHe Feng) There are types of RemoteError, such as
# NoSuchMethod, UnsupportedVersion, we can distinguish it by
# ex.exc_type.
raise exception.MigrationSchedulerRPCError(
reason=six.text_type(ex))
try:
self._check_compatible_with_source_hypervisor(host)
self._call_livem_checks_on_host(host)
except (exception.Invalid, exception.MigrationPreCheckError) as e:
LOG.debug("Skipping host: %(host)s because: %(e)s",
{"host": host, "e": e})
attempted_hosts.append(host)
host = None
return host
def _check_not_over_max_retries(self, attempted_hosts):
if CONF.migrate_max_retries == -1:
return
retries = len(attempted_hosts) - 1
if retries > CONF.migrate_max_retries:
if self.migration:
self.migration.status = 'failed'
self.migration.save()
msg = (_('Exceeded max scheduling retries %(max_retries)d for '
'instance %(instance_uuid)s during live migration')
% {'max_retries': retries,
'instance_uuid': self.instance.uuid})
raise exception.MaxRetriesExceeded(reason=msg)
| 44.24186 | 79 | 0.65307 |
29734fbcb9dea8da4aba3662f7fb8d3f2c767e40 | 6,981 | py | Python | evaluation/dataset_indexing_test.py | LaudateCorpus1/dm_c19_modelling | 25c0a3c47df4f93991acaf4b5e4d3ab0a6d9b5ab | [
"Apache-2.0"
] | 3 | 2021-07-21T10:05:55.000Z | 2021-09-06T09:28:56.000Z | evaluation/dataset_indexing_test.py | deepmind/c19_modelling | 25c0a3c47df4f93991acaf4b5e4d3ab0a6d9b5ab | [
"Apache-2.0"
] | 1 | 2021-10-05T16:20:25.000Z | 2021-10-05T16:20:25.000Z | evaluation/dataset_indexing_test.py | LaudateCorpus1/dm_c19_modelling | 25c0a3c47df4f93991acaf4b5e4d3ab0a6d9b5ab | [
"Apache-2.0"
] | 2 | 2021-08-04T16:10:28.000Z | 2021-10-05T16:15:25.000Z | # pylint: disable=g-bad-file-header
# Copyright 2020 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tests for dm_c19_modelling.evaluation.dataset_indexing."""
import os
from absl.testing import absltest
from dm_c19_modelling.evaluation import dataset_indexing
import pandas as pd
_TEST_DATASET = "test_dataset"
_TEST_DATASET_FILE = "test_dataset.csv"
def _get_test_entry(directory):
return {
"file_location": os.path.join(directory, _TEST_DATASET_FILE),
"source_data_info": ["test_source_1", "test_source_2"],
"creation_timestamp": "2020-06-07_12:43:02",
"dataset_name": _TEST_DATASET,
"creation_date": "2020-06-07",
"extra_info": {}
}
def _create_dataset(file_location):
df = pd.DataFrame({"site_id": ["A"], "date": ["2020-05-07"],
"new_deceased": [0], "new_confirmed": [0]})
df.to_csv(file_location, index=False)
class DatasetIndexingTest(absltest.TestCase):
def setUp(self):
super().setUp()
self._test_dir = absltest.get_default_test_tmpdir()
os.makedirs(self._test_dir, exist_ok=True)
self._key = "12345"
self._entry = _get_test_entry(self._test_dir)
self._index_path = os.path.join(
self._test_dir, f"dataset_index-{_TEST_DATASET}.json")
_create_dataset(self._entry["file_location"])
self._remove_index_if_exists()
def _remove_index_if_exists(self):
if os.path.exists(self._index_path):
os.remove(self._index_path)
def test_write_operation_not_in_read_only(self):
"""Test that opening the index in read-only mode prevents writing."""
index = dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET)
with self.assertRaisesWithLiteralMatch(
IOError,
"Attempting to write to the index when it is in read-only mode."):
index.add_entry(self._key, {})
def test_write_operation_not_in_context(self):
"""Tests that the index can't be used in write mode outside of a context."""
index = dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False)
with self.assertRaisesWithLiteralMatch(
IOError, ("Index has not been loaded. The index should be used as a "
"context when not in read-only mode")):
index.add_entry(self._key, {})
def test_create_new_index_and_add_entry(self):
"""Tests that an index can get created and an entry added."""
with dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False) as index:
index.add_entry(self._key, self._entry)
assert os.path.exists(self._index_path)
def test_create_new_index_add_entry_with_missing_field(self):
"""Tests that adding an entry with missing fields fails."""
del self._entry["creation_timestamp"]
with dataset_indexing.DatasetIndex(
self._test_dir, _TEST_DATASET, read_only=False) as index:
with self.assertRaisesRegex(ValueError, "Entry must have fields *"):
index.add_entry(self._key, self._entry)
def test_add_duplicate_entry(self):
"""Tests that adding an entry with a duplicated key fails."""
with dataset_indexing.DatasetIndex(
self._test_dir, _TEST_DATASET, read_only=False) as index:
index.add_entry(self._key, self._entry)
with self.assertRaisesWithLiteralMatch(
ValueError,
("Found entry for given key. Index keys must be unique.")):
index.add_entry(self._key, self._entry)
def test_create_new_index_add_invalid_creation_timestamp(self):
"""Tests creation timestamp format validation."""
self._entry["creation_timestamp"] = "2020-06-07"
with dataset_indexing.DatasetIndex(
self._test_dir, _TEST_DATASET, read_only=False) as index:
with self.assertRaisesWithLiteralMatch(ValueError,
"Cannot parse creation_timestamp"):
index.add_entry(self._key, self._entry)
def test_create_new_index_add_non_existent_file(self):
"""Tests filepath validation."""
bad_file_location = os.path.join(self._test_dir, "bad_file")
self._entry["file_location"] = bad_file_location
with dataset_indexing.DatasetIndex(
self._test_dir, _TEST_DATASET, read_only=False) as index:
with self.assertRaisesRegex(IOError,
f"Path {bad_file_location} not found *"):
index.add_entry(self._key, self._entry)
def test_add_to_existing_index(self):
"""Tests that an entry can be added to an existing index."""
entry_2 = self._entry.copy()
entry_2["creation_date"] = "2020-06-08"
key_2 = "123456"
with dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False) as index:
index.add_entry(self._key, self._entry)
with dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False) as index:
index.add_entry(key_2, entry_2)
read_index = dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET)
self.assertIsNotNone(read_index.query_by_creation_date("2020-06-07"))
self.assertIsNotNone(read_index.query_by_creation_date("2020-06-08"))
def test_get_latest_creation_date(self):
"""Tests that querying 'latest' creation date returns the correct key."""
with dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False) as index:
index.add_entry(self._key, self._entry)
read_index = dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET)
self.assertEqual(read_index.query_by_creation_date("latest"), self._key)
def test_query_by_creation_date_duplicates(self):
"""Tests that querying a duplicated creation date gets the latest entry."""
entry_2 = self._entry.copy()
key_2 = "123456"
entry_2["creation_timestamp"] = "2020-06-07_16:43:02"
with dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET,
read_only=False) as index:
index.add_entry(self._key, self._entry)
index.add_entry(key_2, entry_2)
read_index = dataset_indexing.DatasetIndex(self._test_dir, _TEST_DATASET)
self.assertEqual(read_index.query_by_creation_date("latest"), key_2)
if __name__ == "__main__":
absltest.main()
| 43.092593 | 80 | 0.694027 |
4d30d652c7f5347f2af33b9f4def9743773ae7d3 | 482 | py | Python | tests/test_remove_item_from_cart.py | joshmgrant/Python-Pytest-Nerodia | 55e8d92cd21e3093e6eb434e4ab7b126c974c6f0 | [
"MIT"
] | 1 | 2019-03-19T08:29:02.000Z | 2019-03-19T08:29:02.000Z | tests/test_remove_item_from_cart.py | joshmgrant/Python-Pytest-Nerodia | 55e8d92cd21e3093e6eb434e4ab7b126c974c6f0 | [
"MIT"
] | null | null | null | tests/test_remove_item_from_cart.py | joshmgrant/Python-Pytest-Nerodia | 55e8d92cd21e3093e6eb434e4ab7b126c974c6f0 | [
"MIT"
] | null | null | null | import pytest
def test_removes_one(browser):
    browser.goto('https://www.saucedemo.com/inventory.html')
browser.button(class_name='add-to-cart-button').click()
browser.button(class_name='add-to-cart-button').click()
browser.button(class_name='remove-from-cart-button').click()
assert browser.span(class_name='shopping_cart_badge').text == '1'
browser.goto("https://www.saucedemo.com/cart.html")
assert len(browser.divs(class_name='inventory_item_name')) == 1
| 32.133333 | 69 | 0.728216 |
7c04e27953242f96e04e58488ad5962abe57d0ce | 2,830 | py | Python | modules/NotificationFimJornada.py | camargo2019/conecta | 3bf67a7aa5280d1d263e5ee40cbc48f0045fcb99 | [
"MIT"
] | 1 | 2021-08-01T05:40:31.000Z | 2021-08-01T05:40:31.000Z | modules/NotificationFimJornada.py | camargo2019/conecta | 3bf67a7aa5280d1d263e5ee40cbc48f0045fcb99 | [
"MIT"
] | null | null | null | modules/NotificationFimJornada.py | camargo2019/conecta | 3bf67a7aa5280d1d263e5ee40cbc48f0045fcb99 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import os
import sys
import json
import time
import glob
from tkinter import *
from PIL import ImageTk, Image
from threading import Thread
from datetime import datetime
from .database.datalocal import *
from .Notifications import *
import pygetwindow as gw
import subprocess
class NotificationFimJornada:
def __init__(self, path_exec):
self.path_exec = path_exec
dir_path = "C:\\ConectaIT\\modules"
for file in glob.glob(dir_path+"\\logs\\data\\*.json"):
start = Thread(target=self.iniciar_notification, args=[file])
start.start()
def iniciar_notification(self, arquivo):
dir_path = "C:\\ConectaIT\\modules"
#print(dir_path)
        # Load the saved workday configuration for this user.
        with open(arquivo, "r") as json_file:
            dados_value = json.load(json_file)
while True:
for i in dados_value['my_workday']["0"]['end']['workday_notifications']:
try:
data_start = dados_value['my_workday']["0"]['end']['end_time'] + ':00'
except:
data_start = "00:00:00"
horario_atual = datetime.now().strftime('%H:%M:%S')
iminutes, s = divmod(i["seconds"], 60)
#print(iminutes)
if(len(str(iminutes)) == 1):
time_10min = '00:0' + str(iminutes) + ':00'
else:
time_10min = '00:' + str(iminutes) + ':00'
formato = '%H:%M:%S'
time_10 = datetime.strptime(
data_start, formato) - datetime.strptime(time_10min, formato)
time_antesNotifica = datetime.strptime(
str(horario_atual), formato) - datetime.strptime(str(time_10), formato)
# print(time_antesNotifica)
# print(time_antesNotifica)
#print(str(horario_atual)+" "+str(time))
if(str(time_antesNotifica) == '00:00' or str(time_antesNotifica) == '00:00:00' or str(time_antesNotifica) == "0:00:00"):
inf = Notification(title="ConectaIT", subtitle="Atenção!",
descrition=i["notification"]["message"], icone="advertising", wait=i["notification"]["wait"], cancel='True')
if inf.returnValue == True:
if(i["notification"]["stay_open"] == True):
try:
janela = gw.getWindowsWithTitle('ConectaIT - Área do Usuário')[0]
janela.restore()
except:
subprocess.call('C:\\ConectaIT\\ConectaIT.exe')
else:
time.sleep(0.5)
| 44.21875 | 137 | 0.524028 |
755b51e57d6319e942b85397a2ac058c833ea313 | 1,056 | py | Python | setup.py | mszaf/forge | b401fb30dde6817f972009d6f3b5748bc3e5d374 | [
"MIT"
] | null | null | null | setup.py | mszaf/forge | b401fb30dde6817f972009d6f3b5748bc3e5d374 | [
"MIT"
] | null | null | null | setup.py | mszaf/forge | b401fb30dde6817f972009d6f3b5748bc3e5d374 | [
"MIT"
] | null | null | null | """ Setup """
from setuptools import setup
setup(name='tele-forge',
version='1.0.0',
description='convenient cli with extendable collection of useful plugins',
url='https://github.com/TeleTrackingTechnologies/forge',
author='Brandon Horn, Kenneth Poling, Paul Verardi, Cameron Tucker, Clint Wadley',
author_email='opensource@teletracking.com',
packages=[
'forge',
'forge.config',
'forge._internal_plugins',
'forge._internal_plugins.manage_plugins',
'forge._internal_plugins.manage_plugins.manage_plugins_logic'
],
install_requires=[
'colorama',
'GitPython',
'pluginbase',
'requests',
'tabulate'
],
classifiers=[
'Programming Language :: Python :: 3',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent'
],
python_requires='>=3.7',
scripts=[
'bin/forge',
'bin/forge.bat'],
zip_safe=False
)
| 31.058824 | 88 | 0.582386 |
69bdeb94284d8f639b5d693c06b534e1cfd41ea4 | 1,198 | py | Python | tests/unittests/models/layers/test_window_attention3d.py | krishnakatyal/towhee | c5e043aa1509cf46644ca6b53f691d6ed2647212 | [
"Apache-2.0"
] | null | null | null | tests/unittests/models/layers/test_window_attention3d.py | krishnakatyal/towhee | c5e043aa1509cf46644ca6b53f691d6ed2647212 | [
"Apache-2.0"
] | 1 | 2022-01-19T06:21:07.000Z | 2022-01-19T06:21:07.000Z | tests/unittests/models/layers/test_window_attention3d.py | jennyli-z/towhee | 55c55fd961229575b75eae269b55090c839f8dcd | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import torch
from towhee.models.layers.window_attention3d import WindowAttention3D
class WindowAttention3DTest(unittest.TestCase):
def test_window_attention3d(self):
dim = 144
window_size = [2, 2, 2]
num_heads = 8
wa3 = WindowAttention3D(dim=dim, window_size=window_size, num_heads=num_heads)
num_windows = 2 * 2 * 2
b = 3
n = 8
c = dim
input_tensor = torch.rand(num_windows * b, n, c)
out = wa3(input_tensor) # torch.Size([24, 8, 144])
self.assertTrue(out.shape == torch.Size([24, 8, 144]))
| 34.228571 | 86 | 0.696995 |
a9ba19405d38b9fe51da574754449c5a450da256 | 277 | py | Python | fondInformatica/python/Esercitazione2_max3n (lupia).py | peppelongo96/materiale-LT-ing-informatica-unical | dead349a0c50d04c72ecd8f673c5a60cd413c680 | [
"MIT"
] | null | null | null | fondInformatica/python/Esercitazione2_max3n (lupia).py | peppelongo96/materiale-LT-ing-informatica-unical | dead349a0c50d04c72ecd8f673c5a60cd413c680 | [
"MIT"
] | null | null | null | fondInformatica/python/Esercitazione2_max3n (lupia).py | peppelongo96/materiale-LT-ing-informatica-unical | dead349a0c50d04c72ecd8f673c5a60cd413c680 | [
"MIT"
] | null | null | null |
x = int(input("Inserisci il primo numero:> "))
y = int(input("Inserisci il secondo numero:> "))
z = int(input("Inserisci il terzo numero:> "))
if x >= y and x >= z:
massimo = x
elif y >= x and y >= z:
massimo = y
else:
massimo = z
print('Il massimo è', massimo)
| 19.785714 | 48 | 0.595668 |
e9bf0d222a3ce2c8e07fb755804999c0a91589a0 | 5,388 | py | Python | algolab_class_API/views/api/server_admin.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | 1 | 2019-01-10T05:46:09.000Z | 2019-01-10T05:46:09.000Z | algolab_class_API/views/api/server_admin.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | 7 | 2018-12-25T15:59:49.000Z | 2019-01-10T05:45:25.000Z | algolab_class_API/views/api/server_admin.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | null | null | null | from . import mixins
from . import jwt
from django.contrib.auth.models import User
from rest_framework import viewsets, status, mixins as mx
from rest_framework.response import Response
from rest_framework.decorators import action
from algolab_class_API import models, serializers
class ServerAdminProblemViewSet(mixins.VersionedSchemaMixin,
viewsets.ModelViewSet):
lookup_url_kwarg = 'id'
serializer_class = serializers.ProblemSerializer
http_method_names = ['get', 'post', 'put', 'delete']
def list(self, request, *args, **kwargs):
return self.get_response_list_for(models.Problem.objects.all(),
serializers.ProblemSerializer)
def create(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
data = serializer.validated_data
sq = models.Problem.objects.create(name=data['name'],
problem_file=data['problem_file'],
limit_time=data['limit_time'],
limit_memory=data['limit_memory'],
judge_type=data['judge_type'],
judge_code=data['judge_code'])
return self.get_response_list_for(models.Problem.objects.all(),
serializers.ProblemSerializer)
def retrieve(self, request, *args, **kwargs):
return self.get_response_for(models.Problem.objects.get(id=kwargs['id']), False,
serializers.ProblemSerializer)
def destroy(self, request, *args, **kwargs):
sq = models.Problem.objects.get(id=kwargs['id'])
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
        if not sq or user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
sq.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
def update(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
data = serializer.validated_data
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
sq = models.Problem.objects.get(id=kwargs['id'])
        if not sq or user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
if sq:
sq.name = data['name']
sq.problem_file = data['problem_file']
sq.limit_time = data['limit_time']
sq.limit_memory = data['limit_memory']
sq.judge_code = data['judge_code']
sq.judge_type = data['judge_type']
sq.save()
return self.get_response_for(sq, False, serializers.ProblemSerializer)
class ServerAdminUserViewSet(mixins.VersionedSchemaMixin,
viewsets.ModelViewSet):
lookup_url_kwarg = 'id'
serializer_class = serializers.UserSerializer
http_method_names = ['get', 'post', 'put', 'delete']
def list(self, request, *args, **kwargs):
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
if user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
return self.get_response_list_for(User.objects.all(), serializers.UserSerializer)
def create(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
data = serializer.validated_data
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
if user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
        instance = User.objects.create_user(username=data['username'],
                                            password=data['password'])
        instance.groups.set(data['group'])
        return self.get_response_for(instance, True, serializers.UserSerializer)
def retrieve(self, request, *args, **kwargs):
        return self.get_response_for(User.objects.get(id=kwargs['id']),
                                     False, serializers.UserSerializer)
def destroy(self, request, *args, **kwargs):
sq = User.objects.get(id=kwargs['id'])
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
        if not sq or user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
sq.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
def update(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
data = serializer.validated_data
user_info = jwt.decode_jwt(request.META['HTTP_JMT'])
sq = User.objects.get(id=kwargs['id'])
        if not sq or user_info['group'] != 'SERVER_MANAGER':
return Response(status=status.HTTP_400_BAD_REQUEST)
        if sq:
            sq.username = data['username']
            sq.set_password(data['password'])
            sq.groups.set(data['group'])
            sq.save()
        return self.get_response_for(sq, False, serializers.UserSerializer)
| 40.511278 | 109 | 0.623794 |
003b298da3f8a757acb9551c63bfe54fa983b146 | 913 | py | Python | nomadgram/users/migrations/0002_auto_20180802_1146.py | jnano/nomadgram | e12dd27fc6a8d6f77e4e56308a42ad08fef98a7d | [
"MIT"
] | null | null | null | nomadgram/users/migrations/0002_auto_20180802_1146.py | jnano/nomadgram | e12dd27fc6a8d6f77e4e56308a42ad08fef98a7d | [
"MIT"
] | 5 | 2020-06-05T18:47:49.000Z | 2021-09-08T00:06:20.000Z | nomadgram/users/migrations/0002_auto_20180802_1146.py | jnano/nomadgram | e12dd27fc6a8d6f77e4e56308a42ad08fef98a7d | [
"MIT"
] | null | null | null | # Generated by Django 2.0.7 on 2018-08-02 02:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('users', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='user',
name='bio',
field=models.TextField(null=True),
),
migrations.AddField(
model_name='user',
name='gender',
field=models.CharField(choices=[('male', 'Male'), ('female', 'Female'), ('not-specified', 'Not specified')], max_length=80, null=True),
),
migrations.AddField(
model_name='user',
name='phone',
field=models.CharField(max_length=140, null=True),
),
migrations.AddField(
model_name='user',
name='website',
field=models.URLField(null=True),
),
]
| 26.852941 | 147 | 0.538883 |
19adb2e743629f550e3fd22dbf38a4610a2408a6 | 2,607 | py | Python | vidgear/tests/benchmark_tests/test_benchmark_playback.py | winnerineast/vidgear | 2d2cbd48f55bf884dc38107106f0a17e1170810b | [
"MIT"
] | 1 | 2020-12-01T15:55:29.000Z | 2020-12-01T15:55:29.000Z | vidgear/tests/benchmark_tests/test_benchmark_playback.py | Brainiarc7/vidgear | d0989d95d22ee07a73819d332c5fac82cd5c4cf7 | [
"MIT"
] | null | null | null | vidgear/tests/benchmark_tests/test_benchmark_playback.py | Brainiarc7/vidgear | d0989d95d22ee07a73819d332c5fac82cd5c4cf7 | [
"MIT"
] | 1 | 2020-12-01T15:55:33.000Z | 2020-12-01T15:55:33.000Z | """
============================================
vidgear library code is placed under the MIT license
Copyright (c) 2019 Abhishek Thakur
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
===============================================
"""
import os
import pytest
from vidgear.gears import CamGear
from .fps import FPS
def return_testvideo(level=0):
"""
return Test Video Data path with different Video quality/resolution/bitrate for different levels(Level-0(Lowest ~HD 2Mbps) and Level-5(Highest ~4k UHD 120mpbs))
"""
Levels = ['BigBuckBunny.mp4','20_mbps_hd_hevc_10bit.mkv','50_mbps_hd_h264.mkv','90_mbps_hd_hevc_10bit.mkv','120_mbps_4k_uhd_h264.mkv']
path = '{}/Downloads/Test_videos/{}'.format(os.environ['USERPROFILE'] if os.name == 'nt' else os.environ['HOME'], Levels[level])
return os.path.abspath(path)
def playback(level):
"""
Function to test VidGear playback capabilities
"""
options = {'THREADED_QUEUE_MODE':False}
stream = CamGear(source=level, **options).start()
fps = FPS().start()
while True:
frame = stream.read()
if frame is None:
break
fps.update()
stream.stop()
fps.stop()
	print("[LOG] total elapsed time: {:.2f}".format(fps.total_time_elapsed()))
print("[LOG] approx. FPS: {:.2f}".format(fps.fps()))
@pytest.mark.parametrize('level', [return_testvideo(0), return_testvideo(1), return_testvideo(2),return_testvideo(3),return_testvideo(4)])
def test_benchmark(level):
"""
Benchmarking low to extreme 4k video playback capabilities of VidGear
"""
try:
playback(level)
except Exception as e:
print(e)
| 36.71831 | 162 | 0.71308 |
facc5bd379e838db34fcc7fc11a845d4c668c131 | 2,831 | py | Python | configs/lane_detection/bezierlanenet/resnet18_culane_aug1b.py | voldemortX/DeeplabV3_PyTorch1.3_Codebase | d22d23e74800fafb58eeb61d6649008745c1a287 | [
"BSD-3-Clause"
] | 1 | 2020-09-17T06:21:39.000Z | 2020-09-17T06:21:39.000Z | configs/lane_detection/bezierlanenet/resnet18_culane_aug1b.py | voldemortX/pytorch-segmentation | 9c62c0a721d11c8ea6bf312ecf1c7b238a54dcda | [
"BSD-3-Clause"
] | null | null | null | configs/lane_detection/bezierlanenet/resnet18_culane_aug1b.py | voldemortX/pytorch-segmentation | 9c62c0a721d11c8ea6bf312ecf1c7b238a54dcda | [
"BSD-3-Clause"
] | null | null | null | from importmagician import import_from
with import_from('./'):
# Data pipeline
from configs.lane_detection.common.datasets.culane_bezier import dataset
from configs.lane_detection.common.datasets.train_level1b_288 import train_augmentation
from configs.lane_detection.common.datasets.test_288 import test_augmentation
# Optimization pipeline
from configs.lane_detection.common.optims.matchingloss_bezier import loss
from configs.lane_detection.common.optims.adam00006_dcn import optimizer
from configs.lane_detection.common.optims.ep36_cosine import lr_scheduler
train = dict(
exp_name='resnet18_bezierlanenet_culane-aug2',
workers=10,
batch_size=20,
checkpoint=None,
# Device args
world_size=0,
dist_url='env://',
device='cuda',
val_num_steps=0, # Seg IoU validation (mostly useless)
save_dir='./checkpoints',
input_size=(288, 800),
original_size=(590, 1640),
num_classes=None,
num_epochs=36,
collate_fn='dict_collate_fn', # 'dict_collate_fn' for LSTR
seg=False, # Seg-based method or not
)
test = dict(
exp_name='resnet18_bezierlanenet_culane-aug2',
workers=0,
batch_size=1,
checkpoint='./checkpoints/resnet18_bezierlanenet_culane-aug2/model.pt',
# Device args
device='cuda',
save_dir='./checkpoints',
seg=False,
gap=20,
ppl=18,
thresh=None,
collate_fn='dict_collate_fn', # 'dict_collate_fn' for LSTR
input_size=(288, 800),
original_size=(590, 1640),
max_lane=4,
dataset_name='culane'
)
model = dict(
name='BezierLaneNet',
image_height=288,
num_regression_parameters=8, # 3 x 2 + 2 = 8 (Cubic Bezier Curve)
# Inference parameters
thresh=0.95,
local_maximum_window_size=9,
# Backbone (3-stage resnet (no dilation) + 2 extra dilated blocks)
backbone_cfg=dict(
name='predefined_resnet_backbone',
backbone_name='resnet18',
return_layer='layer3',
pretrained=True,
replace_stride_with_dilation=[False, False, False]
),
reducer_cfg=None, # No need here
dilated_blocks_cfg=dict(
name='predefined_dilated_blocks',
in_channels=256,
mid_channels=64,
dilations=[4, 8]
),
# Head, Fusion module
feature_fusion_cfg=dict(
name='FeatureFlipFusion',
channels=256
),
head_cfg=dict(
name='ConvProjection_1D',
num_layers=2,
in_channels=256,
bias=True,
k=3
), # Just some transforms of feature, similar to FCOS heads, but shared between cls & reg branches
# Auxiliary binary segmentation head (automatically discarded in eval() mode)
aux_seg_head_cfg=dict(
name='SimpleSegHead',
in_channels=256,
mid_channels=64,
num_classes=1
)
)
| 27.754902 | 103 | 0.680678 |
b1885a9e07f92e0a1107eddae33c2428acf3f080 | 718 | py | Python | Alphabets/Capital Alphabets/W.py | vijayakumarr345/pattern | d857812cea625098a18c9d45ca01b22a379d5fb0 | [
"MIT"
] | null | null | null | Alphabets/Capital Alphabets/W.py | vijayakumarr345/pattern | d857812cea625098a18c9d45ca01b22a379d5fb0 | [
"MIT"
] | 1 | 2021-03-18T12:33:06.000Z | 2021-03-18T12:33:48.000Z | Alphabets/Capital Alphabets/W.py | vijayakumarr345/pattern | d857812cea625098a18c9d45ca01b22a379d5fb0 | [
"MIT"
] | null | null | null | # Capital Alphabet W using Function
def for_W():
    """ *'s printed in the Shape of Capital W """
    for row in range(9):
        for col in range(9):
            if col % 8 == 0 or (row + col == 8 and row > 3) or (row - col == 0 and row > 3):
                print('*', end=' ')
            else:
                print(' ', end=' ')
        print()
def while_W():
    """ *'s printed in the Shape of Capital W """
    row = 0
    while row < 9:
        col = 0
        while col < 9:
            if col % 8 == 0 or (row + col == 8 and row > 3) or (row - col == 0 and row > 3):
                print('*', end=' ')
            else:
                print(' ', end=' ')
            col += 1
        print()
        row += 1
315189e6cdd539797c1a4ce8c7c1d1e9498e0026 | 52,111 | py | Python | cinder/volume/drivers/zfssa/zfssaiscsi.py | bswartz/cinder | 6cfecade9e2ee86bbb7d95c3c401c9e4c70f6a96 | [
"Apache-2.0"
] | null | null | null | cinder/volume/drivers/zfssa/zfssaiscsi.py | bswartz/cinder | 6cfecade9e2ee86bbb7d95c3c401c9e4c70f6a96 | [
"Apache-2.0"
] | null | null | null | cinder/volume/drivers/zfssa/zfssaiscsi.py | bswartz/cinder | 6cfecade9e2ee86bbb7d95c3c401c9e4c70f6a96 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
ZFS Storage Appliance Cinder Volume Driver
"""
import ast
import math
from oslo_config import cfg
from oslo_log import log
from oslo_serialization import base64
from oslo_utils import excutils
from oslo_utils import units
import six
from cinder import exception
from cinder import utils
from cinder.i18n import _, _LE, _LI, _LW
from cinder.image import image_utils
from cinder import interface
from cinder.volume import driver
from cinder.volume.drivers.san import san
from cinder.volume.drivers.zfssa import zfssarest
from cinder.volume import volume_types
import taskflow.engines
from taskflow.patterns import linear_flow as lf
from taskflow import task
CONF = cfg.CONF
LOG = log.getLogger(__name__)
ZFSSA_OPTS = [
cfg.StrOpt('zfssa_pool',
help='Storage pool name.'),
cfg.StrOpt('zfssa_project',
help='Project name.'),
cfg.StrOpt('zfssa_lun_volblocksize', default='8k',
choices=['512', '1k', '2k', '4k', '8k', '16k', '32k', '64k',
'128k'],
help='Block size.'),
cfg.BoolOpt('zfssa_lun_sparse', default=False,
help='Flag to enable sparse (thin-provisioned): True, False.'),
cfg.StrOpt('zfssa_lun_compression', default='off',
choices=['off', 'lzjb', 'gzip-2', 'gzip', 'gzip-9'],
help='Data compression.'),
cfg.StrOpt('zfssa_lun_logbias', default='latency',
choices=['latency', 'throughput'],
help='Synchronous write bias.'),
cfg.StrOpt('zfssa_initiator_group', default='',
help='iSCSI initiator group.'),
cfg.StrOpt('zfssa_initiator', default='',
help='iSCSI initiator IQNs. (comma separated)'),
cfg.StrOpt('zfssa_initiator_user', default='',
help='iSCSI initiator CHAP user (name).'),
cfg.StrOpt('zfssa_initiator_password', default='',
help='Secret of the iSCSI initiator CHAP user.', secret=True),
cfg.StrOpt('zfssa_initiator_config', default='',
help='iSCSI initiators configuration.'),
cfg.StrOpt('zfssa_target_group', default='tgt-grp',
help='iSCSI target group name.'),
cfg.StrOpt('zfssa_target_user', default='',
help='iSCSI target CHAP user (name).'),
cfg.StrOpt('zfssa_target_password', default='', secret=True,
help='Secret of the iSCSI target CHAP user.'),
cfg.StrOpt('zfssa_target_portal',
help='iSCSI target portal (Data-IP:Port, w.x.y.z:3260).'),
cfg.StrOpt('zfssa_target_interfaces',
help='Network interfaces of iSCSI targets. (comma separated)'),
cfg.IntOpt('zfssa_rest_timeout',
help='REST connection timeout. (seconds)'),
cfg.StrOpt('zfssa_replication_ip', default='',
help='IP address used for replication data. (maybe the same as '
'data ip)'),
cfg.BoolOpt('zfssa_enable_local_cache', default=True,
help='Flag to enable local caching: True, False.'),
cfg.StrOpt('zfssa_cache_project', default='os-cinder-cache',
help='Name of ZFSSA project where cache volumes are stored.'),
cfg.StrOpt('zfssa_manage_policy', default='loose',
choices=['loose', 'strict'],
help='Driver policy for volume manage.')
]
CONF.register_opts(ZFSSA_OPTS)
ZFSSA_LUN_SPECS = {
'zfssa:volblocksize',
'zfssa:sparse',
'zfssa:compression',
'zfssa:logbias',
}
def factory_zfssa():
return zfssarest.ZFSSAApi()
@interface.volumedriver
class ZFSSAISCSIDriver(driver.ISCSIDriver):
"""ZFSSA Cinder iSCSI volume driver.
Version history:
.. code-block:: none
1.0.1:
Backend enabled volume migration.
Local cache feature.
1.0.2:
Volume manage/unmanage support.
"""
VERSION = '1.0.2'
protocol = 'iSCSI'
def __init__(self, *args, **kwargs):
super(ZFSSAISCSIDriver, self).__init__(*args, **kwargs)
self.configuration.append_config_values(ZFSSA_OPTS)
self.configuration.append_config_values(san.san_opts)
self.zfssa = None
self.tgt_zfssa = None
self._stats = None
self.tgtiqn = None
def _get_target_alias(self):
"""return target alias."""
return self.configuration.zfssa_target_group
def do_setup(self, context):
"""Setup - create multiple elements.
Project, initiators, initiatorgroup, target and targetgroup.
"""
lcfg = self.configuration
LOG.info(_LI('Connecting to host: %s.'), lcfg.san_ip)
self.zfssa = factory_zfssa()
self.tgt_zfssa = factory_zfssa()
self.zfssa.set_host(lcfg.san_ip, timeout=lcfg.zfssa_rest_timeout)
auth_str = '%s:%s' % (lcfg.san_login, lcfg.san_password)
auth_str = base64.encode_as_text(auth_str)
self.zfssa.login(auth_str)
self.zfssa.create_project(lcfg.zfssa_pool, lcfg.zfssa_project,
compression=lcfg.zfssa_lun_compression,
logbias=lcfg.zfssa_lun_logbias)
schemas = [
{'property': 'cinder_managed',
'description': 'Managed by Cinder',
'type': 'Boolean'}]
if lcfg.zfssa_enable_local_cache:
self.zfssa.create_project(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
compression=lcfg.zfssa_lun_compression,
logbias=lcfg.zfssa_lun_logbias)
schemas.extend([
{'property': 'image_id',
'description': 'OpenStack image ID',
'type': 'String'},
{'property': 'updated_at',
'description': 'Most recent updated time of image',
'type': 'String'}])
self.zfssa.create_schemas(schemas)
if (lcfg.zfssa_initiator_config != ''):
initiator_config = ast.literal_eval(lcfg.zfssa_initiator_config)
for initiator_group in initiator_config:
zfssa_initiator_group = initiator_group
for zfssa_initiator in initiator_config[zfssa_initiator_group]:
self.zfssa.create_initiator(zfssa_initiator['iqn'],
zfssa_initiator_group + '-' +
zfssa_initiator['iqn'],
chapuser=
zfssa_initiator['user'],
chapsecret=
zfssa_initiator['password'])
if (zfssa_initiator_group != 'default'):
self.zfssa.add_to_initiatorgroup(
zfssa_initiator['iqn'],
zfssa_initiator_group)
else:
LOG.warning(_LW('zfssa_initiator_config not found. '
'Using deprecated configuration options.'))
if (not lcfg.zfssa_initiator and
(not lcfg.zfssa_initiator_group and
lcfg.zfssa_initiator_group != 'default')):
LOG.error(_LE('zfssa_initiator cannot be empty when '
'creating a zfssa_initiator_group.'))
raise exception.InvalidConfigurationValue(
value='',
option='zfssa_initiator')
if (lcfg.zfssa_initiator != '' and
(lcfg.zfssa_initiator_group == '' or
lcfg.zfssa_initiator_group == 'default')):
LOG.warning(_LW('zfssa_initiator: %(ini)s'
' wont be used on '
'zfssa_initiator_group= %(inigrp)s.'),
{'ini': lcfg.zfssa_initiator,
'inigrp': lcfg.zfssa_initiator_group})
# Setup initiator and initiator group
if (lcfg.zfssa_initiator != '' and
lcfg.zfssa_initiator_group != '' and
lcfg.zfssa_initiator_group != 'default'):
for initiator in lcfg.zfssa_initiator.split(','):
initiator = initiator.strip()
self.zfssa.create_initiator(
initiator, lcfg.zfssa_initiator_group + '-' +
initiator, chapuser=lcfg.zfssa_initiator_user,
chapsecret=lcfg.zfssa_initiator_password)
self.zfssa.add_to_initiatorgroup(
initiator, lcfg.zfssa_initiator_group)
# Parse interfaces
interfaces = []
for intrface in lcfg.zfssa_target_interfaces.split(','):
if intrface == '':
continue
interfaces.append(intrface)
# Setup target and target group
iqn = self.zfssa.create_target(
self._get_target_alias(),
interfaces,
tchapuser=lcfg.zfssa_target_user,
tchapsecret=lcfg.zfssa_target_password)
self.zfssa.add_to_targetgroup(iqn, lcfg.zfssa_target_group)
if lcfg.zfssa_manage_policy not in ("loose", "strict"):
err_msg = (_("zfssa_manage_policy property needs to be set to"
" 'strict' or 'loose'. Current value is: %s.") %
lcfg.zfssa_manage_policy)
LOG.error(err_msg)
raise exception.InvalidInput(reason=err_msg)
# Lookup the zfssa_target_portal DNS name to an IP address
host, port = lcfg.zfssa_target_portal.split(':')
host_ip_addr = utils.resolve_hostname(host)
self.zfssa_target_portal = host_ip_addr + ':' + port
def check_for_setup_error(self):
"""Check that driver can login.
Check also pool, project, initiators, initiatorgroup, target and
targetgroup.
"""
lcfg = self.configuration
self.zfssa.verify_pool(lcfg.zfssa_pool)
self.zfssa.verify_project(lcfg.zfssa_pool, lcfg.zfssa_project)
if (lcfg.zfssa_initiator_config != ''):
initiator_config = ast.literal_eval(lcfg.zfssa_initiator_config)
for initiator_group in initiator_config:
zfssa_initiator_group = initiator_group
for zfssa_initiator in initiator_config[zfssa_initiator_group]:
self.zfssa.verify_initiator(zfssa_initiator['iqn'])
else:
if (lcfg.zfssa_initiator != '' and
lcfg.zfssa_initiator_group != '' and
lcfg.zfssa_initiator_group != 'default'):
for initiator in lcfg.zfssa_initiator.split(','):
self.zfssa.verify_initiator(initiator)
self.zfssa.verify_target(self._get_target_alias())
def _get_provider_info(self, volume, lun=None):
"""Return provider information."""
lcfg = self.configuration
project = lcfg.zfssa_project
if ((lcfg.zfssa_enable_local_cache is True) and
(volume['name'].startswith('os-cache-vol-'))):
project = lcfg.zfssa_cache_project
if lun is None:
lun = self.zfssa.get_lun(lcfg.zfssa_pool,
project,
volume['name'])
if isinstance(lun['number'], list):
lun['number'] = lun['number'][0]
if self.tgtiqn is None:
self.tgtiqn = self.zfssa.get_target(self._get_target_alias())
loc = "%s %s %s" % (self.zfssa_target_portal, self.tgtiqn,
lun['number'])
LOG.debug('_get_provider_info: provider_location: %s', loc)
provider = {'provider_location': loc}
if lcfg.zfssa_target_user != '' and lcfg.zfssa_target_password != '':
provider['provider_auth'] = ('CHAP %s %s' %
(lcfg.zfssa_target_user,
lcfg.zfssa_target_password))
return provider
def create_volume(self, volume):
"""Create a volume on ZFSSA."""
LOG.debug('zfssa.create_volume: volume=' + volume['name'])
lcfg = self.configuration
volsize = str(volume['size']) + 'g'
specs = self._get_voltype_specs(volume)
specs.update({'custom:cinder_managed': True})
self.zfssa.create_lun(lcfg.zfssa_pool,
lcfg.zfssa_project,
volume['name'],
volsize,
lcfg.zfssa_target_group,
specs)
def delete_volume(self, volume):
"""Deletes a volume with the given volume['name']."""
LOG.debug('zfssa.delete_volume: name=%s', volume['name'])
lcfg = self.configuration
try:
lun2del = self.zfssa.get_lun(lcfg.zfssa_pool,
lcfg.zfssa_project,
volume['name'])
except exception.VolumeBackendAPIException as ex:
# NOTE(jdg): This will log an error and continue
# if for some reason the volume no longer exists
# on the backend
if 'Error Getting Volume' in ex.message:
LOG.error(_LE("Volume ID %s was not found on "
"the zfssa device while attempting "
"delete_volume operation."), volume['id'])
return
# Delete clone temp snapshot. see create_cloned_volume()
if 'origin' in lun2del and 'id' in volume:
if lun2del['nodestroy']:
self.zfssa.set_lun_props(lcfg.zfssa_pool,
lcfg.zfssa_project,
volume['name'],
nodestroy=False)
tmpsnap = 'tmp-snapshot-%s' % volume['id']
if lun2del['origin']['snapshot'] == tmpsnap:
self.zfssa.delete_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_project,
lun2del['origin']['share'],
lun2del['origin']['snapshot'])
return
self.zfssa.delete_lun(pool=lcfg.zfssa_pool,
project=lcfg.zfssa_project,
lun=volume['name'])
if ('origin' in lun2del and
lun2del['origin']['project'] == lcfg.zfssa_cache_project):
self._check_origin(lun2del, volume['name'])
def create_snapshot(self, snapshot):
"""Creates a snapshot of a volume.
Snapshot name: snapshot['name']
Volume name: snapshot['volume_name']
"""
LOG.debug('zfssa.create_snapshot: snapshot=%s', snapshot['name'])
lcfg = self.configuration
self.zfssa.create_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_project,
snapshot['volume_name'],
snapshot['name'])
def delete_snapshot(self, snapshot):
"""Deletes a snapshot."""
LOG.debug('zfssa.delete_snapshot: snapshot=%s', snapshot['name'])
lcfg = self.configuration
numclones = self.zfssa.num_clones(lcfg.zfssa_pool,
lcfg.zfssa_project,
snapshot['volume_name'],
snapshot['name'])
if numclones > 0:
LOG.error(_LE('Snapshot %s: has clones'), snapshot['name'])
raise exception.SnapshotIsBusy(snapshot_name=snapshot['name'])
self.zfssa.delete_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_project,
snapshot['volume_name'],
snapshot['name'])
def create_volume_from_snapshot(self, volume, snapshot):
"""Creates a volume from a snapshot - clone a snapshot."""
LOG.debug('zfssa.create_volume_from_snapshot: volume=%s',
volume['name'])
LOG.debug('zfssa.create_volume_from_snapshot: snapshot=%s',
snapshot['name'])
if not self._verify_clone_size(snapshot, volume['size'] * units.Gi):
exception_msg = (_('Error verifying clone size on '
'Volume clone: %(clone)s '
                               'Size: %(size)d on '
'Snapshot: %(snapshot)s')
% {'clone': volume['name'],
'size': volume['size'],
'snapshot': snapshot['name']})
LOG.error(exception_msg)
raise exception.InvalidInput(reason=exception_msg)
lcfg = self.configuration
self.zfssa.clone_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_project,
snapshot['volume_name'],
snapshot['name'],
lcfg.zfssa_project,
volume['name'])
def _update_volume_status(self):
"""Retrieve status info from volume group."""
LOG.debug("Updating volume status")
self._stats = None
data = {}
backend_name = self.configuration.safe_get('volume_backend_name')
data["volume_backend_name"] = backend_name or self.__class__.__name__
data["vendor_name"] = 'Oracle'
data["driver_version"] = self.VERSION
data["storage_protocol"] = self.protocol
lcfg = self.configuration
(avail, total) = self.zfssa.get_project_stats(lcfg.zfssa_pool,
lcfg.zfssa_project)
if avail is None or total is None:
return
host = lcfg.san_ip
pool = lcfg.zfssa_pool
project = lcfg.zfssa_project
auth_str = '%s:%s' % (lcfg.san_login, lcfg.san_password)
auth_str = base64.encode_as_text(auth_str)
zfssa_tgt_group = lcfg.zfssa_target_group
repl_ip = lcfg.zfssa_replication_ip
data['location_info'] = "%s:%s:%s:%s:%s:%s" % (host, auth_str, pool,
project,
zfssa_tgt_group,
repl_ip)
data['total_capacity_gb'] = int(total) / units.Gi
data['free_capacity_gb'] = int(avail) / units.Gi
data['reserved_percentage'] = 0
data['QoS_support'] = False
pool_details = self.zfssa.get_pool_details(lcfg.zfssa_pool)
data['zfssa_poolprofile'] = pool_details['profile']
data['zfssa_volblocksize'] = lcfg.zfssa_lun_volblocksize
data['zfssa_sparse'] = six.text_type(lcfg.zfssa_lun_sparse)
data['zfssa_compression'] = lcfg.zfssa_lun_compression
data['zfssa_logbias'] = lcfg.zfssa_lun_logbias
self._stats = data
def get_volume_stats(self, refresh=False):
"""Get volume status.
If 'refresh' is True, run update the stats first.
"""
if refresh:
self._update_volume_status()
return self._stats
def create_export(self, context, volume, connector):
pass
def remove_export(self, context, volume):
pass
def ensure_export(self, context, volume):
pass
def extend_volume(self, volume, new_size):
"""Driver entry point to extent volume size."""
LOG.debug('extend_volume: volume name: %s', volume['name'])
lcfg = self.configuration
self.zfssa.set_lun_props(lcfg.zfssa_pool,
lcfg.zfssa_project,
volume['name'],
volsize=new_size * units.Gi)
def create_cloned_volume(self, volume, src_vref):
"""Create a clone of the specified volume."""
zfssa_snapshot = {'volume_name': src_vref['name'],
'name': 'tmp-snapshot-%s' % volume['id']}
self.create_snapshot(zfssa_snapshot)
try:
self.create_volume_from_snapshot(volume, zfssa_snapshot)
        except exception.VolumeBackendAPIException:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE('Clone Volume: %(volume)s failed '
                              'from source volume: %(src_vref)s'),
                          {'volume': volume['name'],
                           'src_vref': src_vref['name']})
                # Cleanup snapshot
                self.delete_snapshot(zfssa_snapshot)
@utils.synchronized('zfssaiscsi', external=True)
def clone_image(self, context, volume,
image_location, image_meta,
image_service):
"""Create a volume efficiently from an existing image.
Verify the image ID being used:
(1) If there is no existing cache volume, create one and transfer
image data to it. Take a snapshot.
        (2) If a cache volume already exists, verify if it is either altered
        or updated. If so, try to remove it; raise an exception if removal
        fails. Then create a new cache volume as in (1).
Clone a volume from the cache volume and returns it to Cinder.
A file lock is placed on this method to prevent:
(a) a race condition when a cache volume has been verified, but then
gets deleted before it is cloned.
(b) failure of subsequent clone_image requests if the first request is
still pending.
"""
LOG.debug('Cloning image %(image)s to volume %(volume)s',
{'image': image_meta['id'], 'volume': volume['name']})
lcfg = self.configuration
cachevol_size = 0
if not lcfg.zfssa_enable_local_cache:
return None, False
with image_utils.TemporaryImages.fetch(image_service,
context,
image_meta['id']) as tmp_image:
info = image_utils.qemu_img_info(tmp_image)
cachevol_size = int(math.ceil(float(info.virtual_size) / units.Gi))
if cachevol_size > volume['size']:
                LOG.error(_LE('Image size %(img_size)dGB is larger '
                              'than volume size %(vol_size)dGB.'),
                          {'img_size': cachevol_size,
                           'vol_size': volume['size']})
                return None, False
specs = self._get_voltype_specs(volume)
cachevol_props = {'size': cachevol_size}
try:
cache_vol, cache_snap = self._verify_cache_volume(context,
image_meta,
image_service,
specs,
cachevol_props)
# A cache volume and a snapshot should be ready by now
# Create a clone from the cache volume
self.zfssa.clone_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache_vol,
cache_snap,
lcfg.zfssa_project,
volume['name'])
if cachevol_size < volume['size']:
self.extend_volume(volume, volume['size'])
        except exception.VolumeBackendAPIException as exc:
            LOG.error(_LE('Cannot clone image %(image)s to '
                          'volume %(volume)s. Error: %(error)s.'),
                      {'volume': volume['name'],
                       'image': image_meta['id'],
                       'error': exc.msg})
            return None, False
return None, True
def _verify_cache_volume(self, context, img_meta,
img_service, specs, cachevol_props):
"""Verify if we have a cache volume that we want.
If we don't, create one.
        If we do, check if it has been updated:
        * If so, delete it and create a new one.
        * If not, we are good.
After the function returns, there should be a cache volume available,
ready for cloning.
"""
lcfg = self.configuration
cachevol_name = 'os-cache-vol-%s' % img_meta['id']
cachesnap_name = 'image-%s' % img_meta['id']
cachevol_meta = {
'cache_name': cachevol_name,
'snap_name': cachesnap_name,
}
cachevol_props.update(cachevol_meta)
cache_vol, cache_snap = None, None
updated_at = six.text_type(img_meta['updated_at'].isoformat())
LOG.debug('Verifying cache volume %s:', cachevol_name)
try:
cache_vol = self.zfssa.get_lun(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cachevol_name)
if (not cache_vol.get('updated_at', None) or
not cache_vol.get('image_id', None)):
exc_msg = (_('Cache volume %s does not have required '
'properties') % cachevol_name)
LOG.error(exc_msg)
raise exception.VolumeBackendAPIException(data=exc_msg)
cache_snap = self.zfssa.get_lun_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cachevol_name,
cachesnap_name)
except exception.VolumeNotFound:
# There is no existing cache volume, create one:
return self._create_cache_volume(context,
img_meta,
img_service,
specs,
cachevol_props)
except exception.SnapshotNotFound:
            exception_msg = (_('Cache volume %(cache_vol)s '
                               'does not have snapshot %(cache_snap)s.')
                             % {'cache_vol': cachevol_name,
                                'cache_snap': cachesnap_name})
LOG.error(exception_msg)
raise exception.VolumeBackendAPIException(data=exception_msg)
# A cache volume does exist, check if it's updated:
if ((cache_vol['updated_at'] != updated_at) or
(cache_vol['image_id'] != img_meta['id'])):
# The cache volume is updated, but has clones:
if cache_snap['numclones'] > 0:
                exception_msg = (_('Cannot delete '
                                   'cache volume: %(cachevol_name)s. '
                                   'It was updated at %(updated_at)s '
                                   'and currently has %(numclones)s '
                                   'volume instances.')
                                 % {'cachevol_name': cachevol_name,
                                    'updated_at': updated_at,
                                    'numclones': cache_snap['numclones']})
LOG.error(exception_msg)
raise exception.VolumeBackendAPIException(data=exception_msg)
# The cache volume is updated, but has no clone, so we delete it
# and re-create a new one:
self.zfssa.delete_lun(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cachevol_name)
return self._create_cache_volume(context,
img_meta,
img_service,
specs,
cachevol_props)
return cachevol_name, cachesnap_name
def _create_cache_volume(self, context, img_meta,
img_service, specs, cachevol_props):
"""Create a cache volume from an image.
Returns names of the cache volume and its snapshot.
"""
lcfg = self.configuration
cachevol_size = int(cachevol_props['size'])
lunsize = "%sg" % six.text_type(cachevol_size)
lun_props = {
'custom:image_id': img_meta['id'],
'custom:updated_at': (
six.text_type(img_meta['updated_at'].isoformat())),
}
lun_props.update(specs)
cache_vol = {
'name': cachevol_props['cache_name'],
'id': img_meta['id'],
'size': cachevol_size,
}
LOG.debug('Creating cache volume %s.', cache_vol['name'])
try:
self.zfssa.create_lun(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache_vol['name'],
lunsize,
lcfg.zfssa_target_group,
lun_props)
super(ZFSSAISCSIDriver, self).copy_image_to_volume(context,
cache_vol,
img_service,
img_meta['id'])
self.zfssa.create_snapshot(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache_vol['name'],
cachevol_props['snap_name'])
except Exception as exc:
            exc_msg = (_('Failed to create cache volume %(volume)s. '
                         'Error: %(err)s')
                       % {'volume': cache_vol['name'],
                          'err': six.text_type(exc)})
LOG.error(exc_msg)
self.zfssa.delete_lun(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache_vol['name'])
raise exception.VolumeBackendAPIException(data=exc_msg)
return cachevol_props['cache_name'], cachevol_props['snap_name']
def local_path(self, volume):
"""Not implemented."""
pass
def _verify_clone_size(self, snapshot, size):
"""Check whether the clone size is the same as the parent volume."""
lcfg = self.configuration
lun = self.zfssa.get_lun(lcfg.zfssa_pool,
lcfg.zfssa_project,
snapshot['volume_name'])
return lun['size'] == size
def initialize_connection(self, volume, connector):
lcfg = self.configuration
init_groups = self.zfssa.get_initiator_initiatorgroup(
connector['initiator'])
if not init_groups:
if lcfg.zfssa_initiator_group == 'default':
init_groups.append('default')
else:
exception_msg = (_('Failed to find iSCSI initiator group '
'containing %(initiator)s.')
% {'initiator': connector['initiator']})
LOG.error(exception_msg)
raise exception.VolumeBackendAPIException(data=exception_msg)
if ((lcfg.zfssa_enable_local_cache is True) and
(volume['name'].startswith('os-cache-vol-'))):
project = lcfg.zfssa_cache_project
else:
project = lcfg.zfssa_project
for initiator_group in init_groups:
self.zfssa.set_lun_initiatorgroup(lcfg.zfssa_pool,
project,
volume['name'],
initiator_group)
iscsi_properties = {}
provider = self._get_provider_info(volume)
(target_portal, iqn, lun) = provider['provider_location'].split()
iscsi_properties['target_discovered'] = False
iscsi_properties['target_portal'] = target_portal
iscsi_properties['target_iqn'] = iqn
iscsi_properties['target_lun'] = int(lun)
iscsi_properties['volume_id'] = volume['id']
if 'provider_auth' in provider:
(auth_method, auth_username, auth_password) = provider[
'provider_auth'].split()
iscsi_properties['auth_method'] = auth_method
iscsi_properties['auth_username'] = auth_username
iscsi_properties['auth_password'] = auth_password
return {
'driver_volume_type': 'iscsi',
'data': iscsi_properties
}
def terminate_connection(self, volume, connector, **kwargs):
"""Driver entry point to terminate a connection for a volume."""
LOG.debug('terminate_connection: volume name: %s.', volume['name'])
lcfg = self.configuration
project = lcfg.zfssa_project
if ((lcfg.zfssa_enable_local_cache is True) and
(volume['name'].startswith('os-cache-vol-'))):
project = lcfg.zfssa_cache_project
self.zfssa.set_lun_initiatorgroup(lcfg.zfssa_pool,
project,
volume['name'],
'')
def _get_voltype_specs(self, volume):
"""Get specs suitable for volume creation."""
vtype = volume.get('volume_type_id', None)
extra_specs = None
if vtype:
extra_specs = volume_types.get_volume_type_extra_specs(vtype)
return self._get_specs(extra_specs)
def _get_specs(self, xspecs):
"""Return a dict with extra specs and/or config values."""
result = {}
for spc in ZFSSA_LUN_SPECS:
val = None
prop = spc.split(':')[1]
cfg = 'zfssa_lun_' + prop
if xspecs:
val = xspecs.pop(spc, None)
if val is None:
val = self.configuration.safe_get(cfg)
if val is not None and val != '':
result.update({prop: val})
return result
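The precedence in `_get_specs` — volume-type extra specs win, driver config values fill in whatever is left unset — can be illustrated with a standalone sketch. The spec names and config keys below are illustrative stand-ins, not the driver's actual `ZFSSA_LUN_SPECS`:

```python
# Illustrative spec names only; the real driver defines ZFSSA_LUN_SPECS.
LUN_SPECS = ['zfssa:volblocksize', 'zfssa:sparse']

def get_specs(xspecs, config):
    """Extra specs take precedence; config supplies defaults."""
    result = {}
    for spc in LUN_SPECS:
        prop = spc.split(':')[1]
        val = xspecs.pop(spc, None) if xspecs else None
        if val is None:
            val = config.get('zfssa_lun_' + prop)
        if val is not None and val != '':
            result[prop] = val
    return result

specs = get_specs({'zfssa:sparse': 'true'},
                  {'zfssa_lun_volblocksize': '8k'})
print(specs)  # {'volblocksize': '8k', 'sparse': 'true'}
```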
def migrate_volume(self, ctxt, volume, host):
LOG.debug('Attempting ZFSSA enabled volume migration. volume: %(id)s, '
'host: %(host)s, status=%(status)s.',
{'id': volume['id'],
'host': host,
'status': volume['status']})
lcfg = self.configuration
default_ret = (False, None)
if volume['status'] != "available":
LOG.debug('Only available volumes can be migrated using backend '
'assisted migration. Defaulting to generic migration.')
return default_ret
if (host['capabilities']['vendor_name'] != 'Oracle' or
host['capabilities']['storage_protocol'] != self.protocol):
LOG.debug('Source and destination drivers need to be Oracle iSCSI '
'to use backend assisted migration. Defaulting to '
'generic migration.')
return default_ret
if 'location_info' not in host['capabilities']:
LOG.debug('Could not find location_info in capabilities reported '
'by the destination driver. Defaulting to generic '
'migration.')
return default_ret
loc_info = host['capabilities']['location_info']
try:
(tgt_host, auth_str, tgt_pool, tgt_project, tgt_tgtgroup,
tgt_repl_ip) = loc_info.split(':')
except ValueError:
LOG.error(_LE("Location info needed for backend enabled volume "
"migration not in correct format: %s. Continuing "
"with generic volume migration."), loc_info)
return default_ret
if tgt_repl_ip == '':
msg = _LE("zfssa_replication_ip not set in cinder.conf. "
"zfssa_replication_ip is needed for backend enabled "
"volume migration. Continuing with generic volume "
"migration.")
LOG.error(msg)
return default_ret
src_pool = lcfg.zfssa_pool
src_project = lcfg.zfssa_project
try:
LOG.info(_LI('Connecting to target host: %s for backend enabled '
'migration.'), tgt_host)
self.tgt_zfssa.set_host(tgt_host)
self.tgt_zfssa.login(auth_str)
# Verify that the replication service is online
try:
self.zfssa.verify_service('replication')
self.tgt_zfssa.verify_service('replication')
except exception.VolumeBackendAPIException:
return default_ret
# ensure that a target group by the same name exists on the target
# system also, if not, use default migration.
lun = self.zfssa.get_lun(src_pool, src_project, volume['name'])
if lun['targetgroup'] != tgt_tgtgroup:
return default_ret
tgt_asn = self.tgt_zfssa.get_asn()
src_asn = self.zfssa.get_asn()
# verify on the source system that the destination has been
# registered as a replication target
tgts = self.zfssa.get_replication_targets()
targets = []
for target in tgts['targets']:
if target['asn'] == tgt_asn:
targets.append(target)
if targets == []:
LOG.debug('Target host: %(host)s for volume migration '
'not configured as a replication target '
'for volume: %(vol)s.',
{'host': tgt_repl_ip,
'vol': volume['name']})
return default_ret
# Multiple ips from the same appliance may be configured
# as different targets
for target in targets:
if target['address'] == tgt_repl_ip + ':216':
break
if target['address'] != tgt_repl_ip + ':216':
LOG.debug('Target with replication ip: %s not configured on '
'the source appliance for backend enabled volume '
'migration. Proceeding with default migration.',
tgt_repl_ip)
return default_ret
flow = lf.Flow('zfssa_volume_migration').add(
MigrateVolumeInit(),
MigrateVolumeCreateAction(provides='action_id'),
MigrateVolumeSendReplUpdate(),
MigrateVolumeSeverRepl(),
MigrateVolumeMoveVol(),
MigrateVolumeCleanUp()
)
taskflow.engines.run(flow,
store={'driver': self,
'tgt_zfssa': self.tgt_zfssa,
'tgt_pool': tgt_pool,
'tgt_project': tgt_project,
'volume': volume, 'tgt_asn': tgt_asn,
'src_zfssa': self.zfssa,
'src_asn': src_asn,
'src_pool': src_pool,
'src_project': src_project,
'target': target})
return(True, None)
except Exception:
LOG.error(_LE("Error migrating volume: %s"), volume['name'])
raise
def update_migrated_volume(self, ctxt, volume, new_volume,
original_volume_status):
"""Return model update for migrated volume.
:param volume: The original volume that was migrated to this backend
:param new_volume: The migration volume object that was created on
this backend as part of the migration process
:param original_volume_status: The status of the original volume
:returns: model_update to update DB with any needed changes
"""
lcfg = self.configuration
original_name = CONF.volume_name_template % volume['id']
current_name = CONF.volume_name_template % new_volume['id']
LOG.debug('Renaming migrated volume: %(cur)s to %(org)s',
{'cur': current_name,
'org': original_name})
self.zfssa.set_lun_props(lcfg.zfssa_pool, lcfg.zfssa_project,
current_name, name=original_name)
return {'_name_id': None}
@utils.synchronized('zfssaiscsi', external=True)
def _check_origin(self, lun, volname):
"""Verify the cache volume of a bootable volume.
        If the cache volume no longer has any clones, it will be deleted.
There is a small lag between the time a clone is deleted and the number
of clones being updated accordingly. There is also a race condition
when multiple volumes (clones of a cache volume) are deleted at once,
leading to the number of clones reported incorrectly. The file lock is
here to avoid such issues.
"""
lcfg = self.configuration
cache = lun['origin']
numclones = -1
if (cache['snapshot'].startswith('image-') and
cache['share'].startswith('os-cache-vol')):
try:
numclones = self.zfssa.num_clones(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache['share'],
cache['snapshot'])
except Exception:
LOG.debug('Cache volume is already deleted.')
return
LOG.debug('Checking cache volume %(name)s, numclones = %(clones)d',
{'name': cache['share'], 'clones': numclones})
# Sometimes numclones still hold old values even when all clones
# have been deleted. So we handle this situation separately here:
if numclones == 1:
try:
self.zfssa.get_lun(lcfg.zfssa_pool,
lcfg.zfssa_project,
volname)
# The volume does exist, so return
return
except exception.VolumeNotFound:
# The volume is already deleted
numclones = 0
if numclones == 0:
try:
self.zfssa.delete_lun(lcfg.zfssa_pool,
lcfg.zfssa_cache_project,
cache['share'])
except exception.VolumeBackendAPIException:
LOG.warning(_LW("Volume %s exists but can't be deleted"),
cache['share'])
def manage_existing(self, volume, existing_ref):
"""Manage an existing volume in the ZFSSA backend.
:param volume: Reference to the new volume.
:param existing_ref: Reference to the existing volume to be managed.
"""
lcfg = self.configuration
existing_vol = self._get_existing_vol(existing_ref)
self._verify_volume_to_manage(existing_vol)
new_vol_name = volume['name']
try:
self.zfssa.set_lun_props(lcfg.zfssa_pool,
lcfg.zfssa_project,
existing_vol['name'],
name=new_vol_name,
schema={"custom:cinder_managed": True})
except exception.VolumeBackendAPIException:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to rename volume %(existing)s to "
"%(new)s. Volume manage failed."),
{'existing': existing_vol['name'],
'new': new_vol_name})
return None
def manage_existing_get_size(self, volume, existing_ref):
"""Return size of the volume to be managed by manage_existing."""
existing_vol = self._get_existing_vol(existing_ref)
size = existing_vol['size']
return int(math.ceil(float(size) / units.Gi))
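`manage_existing_get_size` converts the backend's byte count to whole GiB, rounding up so the reported size never understates the LUN. The arithmetic in isolation, with `Gi` hard-coded to the value `oslo_utils.units.Gi` provides:

```python
import math

Gi = 1024 ** 3  # bytes per GiB, as in oslo_utils.units.Gi

def bytes_to_gib(size_bytes):
    # Round up: even a 1-byte LUN occupies one whole Cinder GiB.
    return int(math.ceil(float(size_bytes) / Gi))

print(bytes_to_gib(1), bytes_to_gib(Gi), bytes_to_gib(Gi + 1))  # 1 1 2
```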
def unmanage(self, volume):
"""Remove an existing volume from cinder management.
:param volume: Reference to the volume to be unmanaged.
"""
lcfg = self.configuration
new_name = 'unmanaged-' + volume['name']
try:
self.zfssa.set_lun_props(lcfg.zfssa_pool,
lcfg.zfssa_project,
volume['name'],
name=new_name,
schema={"custom:cinder_managed": False})
except exception.VolumeBackendAPIException:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to rename volume %(existing)s to"
" %(new)s. Volume unmanage failed."),
{'existing': volume['name'],
'new': new_name})
return None
def _verify_volume_to_manage(self, volume):
lcfg = self.configuration
if lcfg.zfssa_manage_policy == 'loose':
return
vol_name = volume['name']
if 'cinder_managed' not in volume:
err_msg = (_("Unknown if the volume: %s to be managed is "
"already being managed by Cinder. Aborting manage "
"volume. Please add 'cinder_managed' custom schema "
"property to the volume and set its value to False."
" Alternatively, set the value of cinder config "
"policy 'zfssa_manage_policy' to 'loose' to "
"remove this restriction.") % vol_name)
LOG.error(err_msg)
raise exception.InvalidInput(reason=err_msg)
if volume['cinder_managed'] is True:
msg = (_("Volume: %s is already being managed by Cinder.")
% vol_name)
LOG.error(msg)
raise exception.ManageExistingAlreadyManaged(volume_ref=vol_name)
def _get_existing_vol(self, existing_ref):
lcfg = self.configuration
if 'source-name' not in existing_ref:
msg = (_("Reference to volume: %s to be managed must contain "
"source-name.") % existing_ref)
raise exception.ManageExistingInvalidReference(
existing_ref=existing_ref, reason=msg)
try:
existing_vol = self.zfssa.get_lun(lcfg.zfssa_pool,
lcfg.zfssa_project,
existing_ref['source-name'])
except exception.VolumeNotFound:
err_msg = (_("Volume %s doesn't exist on the ZFSSA "
"backend.") % existing_vol['name'])
LOG.error(err_msg)
raise exception.InvalidInput(reason=err_msg)
return existing_vol
class MigrateVolumeInit(task.Task):
def execute(self, src_zfssa, volume, src_pool, src_project):
LOG.debug('Setting inherit flag on source backend to False.')
src_zfssa.edit_inherit_replication_flag(src_pool, src_project,
volume['name'], set=False)
def revert(self, src_zfssa, volume, src_pool, src_project, **kwargs):
LOG.debug('Rollback: Setting inherit flag on source appliance to '
'True.')
src_zfssa.edit_inherit_replication_flag(src_pool, src_project,
volume['name'], set=True)
class MigrateVolumeCreateAction(task.Task):
def execute(self, src_zfssa, volume, src_pool, src_project, target,
tgt_pool):
LOG.debug('Creating replication action on source appliance.')
action_id = src_zfssa.create_replication_action(src_pool,
src_project,
target['label'],
tgt_pool,
volume['name'])
self._action_id = action_id
return action_id
def revert(self, src_zfssa, **kwargs):
if hasattr(self, '_action_id'):
LOG.debug('Rollback: deleting replication action on source '
'appliance.')
src_zfssa.delete_replication_action(self._action_id)
class MigrateVolumeSendReplUpdate(task.Task):
def execute(self, src_zfssa, action_id):
LOG.debug('Sending replication update from source appliance.')
src_zfssa.send_repl_update(action_id)
LOG.debug('Deleting replication action on source appliance.')
src_zfssa.delete_replication_action(action_id)
self._action_deleted = True
class MigrateVolumeSeverRepl(task.Task):
def execute(self, tgt_zfssa, src_asn, action_id, driver):
source = tgt_zfssa.get_replication_source(src_asn)
if not source:
            err = (_('Source with host ip/name: %s not found on the '
                     'target appliance for backend enabled volume '
                     'migration, proceeding with default migration.')
                   % driver.configuration.san_ip)
LOG.error(err)
raise exception.VolumeBackendAPIException(data=err)
LOG.debug('Severing replication package on destination appliance.')
tgt_zfssa.sever_replication(action_id, source['name'],
project=action_id)
class MigrateVolumeMoveVol(task.Task):
def execute(self, tgt_zfssa, tgt_pool, tgt_project, action_id, volume):
LOG.debug('Moving LUN to destination project on destination '
'appliance.')
tgt_zfssa.move_volume(tgt_pool, action_id, volume['name'], tgt_project)
LOG.debug('Deleting temporary project on destination appliance.')
tgt_zfssa.delete_project(tgt_pool, action_id)
self._project_deleted = True
def revert(self, tgt_zfssa, tgt_pool, tgt_project, action_id, volume,
**kwargs):
if not hasattr(self, '_project_deleted'):
LOG.debug('Rollback: deleting temporary project on destination '
'appliance.')
tgt_zfssa.delete_project(tgt_pool, action_id)
class MigrateVolumeCleanUp(task.Task):
def execute(self, driver, volume, tgt_zfssa):
LOG.debug('Finally, delete source volume on source appliance.')
driver.delete_volume(volume)
tgt_zfssa.logout()
# --- robin_stocks/authentication.py (jacov/robin_stocks, MIT license) ---
"""Contains all functions for the purpose of logging in and out to Robinhood."""
import getpass
import os
import pickle
import random
import robin_stocks.helper as helper
import robin_stocks.urls as urls
def generate_device_token():
"""This function will generate a token used when loggin on.
:returns: A string representing the token.
"""
rands = []
for i in range(0, 16):
r = random.random()
rand = 4294967296.0 * r
rands.append((int(rand) >> ((3 & i) << 3)) & 255)
hexa = []
for i in range(0, 256):
hexa.append(str(hex(i+256)).lstrip("0x").rstrip("L")[1:])
id = ""
for i in range(0, 16):
id += hexa[rands[i]]
if (i == 3) or (i == 5) or (i == 7) or (i == 9):
id += "-"
return(id)
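A quick way to sanity-check `generate_device_token` is to verify the UUID-like shape of its output: 32 hex digits in 8-4-4-4-12 groups. The sketch below re-implements the generator compactly, using `format` instead of the `hex()` string-trimming trick, purely for illustration:

```python
import random

def device_token_sketch():
    # 16 pseudo-random bytes, rendered as two hex digits each, with
    # dashes after the 4th, 6th, 8th and 10th byte (8-4-4-4-12 layout).
    rands = [(int(4294967296.0 * random.random()) >> ((3 & i) << 3)) & 255
             for i in range(16)]
    token = ''
    for i, byte in enumerate(rands):
        token += '{:02x}'.format(byte)
        if i in (3, 5, 7, 9):
            token += '-'
    return token

t = device_token_sketch()
assert len(t) == 36
assert [i for i, c in enumerate(t) if c == '-'] == [8, 13, 18, 23]
```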
def respond_to_challenge(challenge_id, sms_code):
"""This functino will post to the challenge url.
:param challenge_id: The challenge id.
:type challenge_id: str
:param sms_code: The sms code.
:type sms_code: str
:returns: The response from requests.
"""
url = urls.challenge_url(challenge_id)
payload = {
'response': sms_code
}
return(helper.request_post(url, payload))
def login(username=None, password=None, expiresIn=86400, scope='internal', by_sms=True, store_session=True):
"""This function will effectivly log the user into robinhood by getting an
authentication token and saving it to the session header. By default, it
will store the authentication token in a pickle file and load that value
on subsequent logins.
:param username: The username for your robinhood account, usually your email.
Not required if credentials are already cached and valid.
:type username: Optional[str]
:param password: The password for your robinhood account. Not required if
credentials are already cached and valid.
:type password: Optional[str]
:param expiresIn: The time until your login session expires. This is in seconds.
:type expiresIn: Optional[int]
:param scope: Specifies the scope of the authentication.
:type scope: Optional[str]
:param by_sms: Specifies whether to send an email(False) or an sms(True)
:type by_sms: Optional[boolean]
:param store_session: Specifies whether to save the log in authorization
for future log ins.
:type store_session: Optional[boolean]
:returns: A dictionary with log in information. The 'access_token' keyword contains the access token, and the 'detail' keyword \
contains information on whether the access token was generated or loaded from pickle file.
"""
device_token = generate_device_token()
#--# home_dir = os.path.expanduser("~")
home_dir = os.path.expanduser("/tmp/")
data_dir = os.path.join(home_dir, ".tokens")
if not os.path.exists(data_dir):
os.makedirs(data_dir)
creds_file = "robinhood.pickle"
pickle_path = os.path.join(data_dir, creds_file)
# Challenge type is used if not logging in with two-factor authentication.
#if by_sms:
# challenge_type = "sms"
#else:
# challenge_type = "email"
#
url = urls.login_url()
payload = {
'client_id': 'c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS',
'expires_in': expiresIn,
'grant_type': 'password',
'password': password,
'scope': scope,
'username': username,
'mfa_required': 'false',
'device_token': device_token
}
# #'challenge_type': challenge_type,
# If authentication has been stored in pickle file then load it. Stops login server from being pinged so much.
if os.path.isfile(pickle_path):
# If store_session has been set to false then delete the pickle file, otherwise try to load it.
        # Loading the pickle file will fail if the access_token has expired.
if store_session:
try:
with open(pickle_path, 'rb') as f:
pickle_data = pickle.load(f)
access_token = pickle_data['access_token']
token_type = pickle_data['token_type']
refresh_token = pickle_data['refresh_token']
# Set device_token to be the original device token when first logged in.
pickle_device_token = pickle_data['device_token']
payload['device_token'] = pickle_device_token
# Set login status to True in order to try and get account info.
helper.set_login_state(True)
helper.update_session(
'Authorization', '{0} {1}'.format(token_type, access_token))
# Try to load account profile to check that authorization token is still valid.
res = helper.request_get(
urls.portfolio_profile(), 'regular', payload, jsonify_data=False)
# Raises exception is response code is not 200.
res.raise_for_status()
return({'access_token': access_token, 'token_type': token_type,
'expires_in': expiresIn, 'scope': scope, 'detail': 'logged in using authentication in {0}'.format(creds_file),
'backup_code': None, 'refresh_token': refresh_token})
            except Exception:
print(
"ERROR: There was an issue loading pickle file. Authentication may be expired - logging in normally.")
helper.set_login_state(False)
helper.update_session('Authorization', None)
else:
os.remove(pickle_path)
# Try to log in normally.
if not username:
username = input("Robinhood username: ")
payload['username'] = username
if not password:
password = getpass.getpass("Robinhood password: ")
payload['password'] = password
data = helper.request_post(url, payload)
# Handle case where mfa or challenge is required.
if data:
#if 'mfa_required' in data:
# mfa_token = input("Please type in the MFA code: ")
# payload['mfa_code'] = mfa_token
# res = helper.request_post(url, payload, jsonify_data=False)
# while (res.status_code != 200):
# mfa_token = input(
# "That MFA code was not correct. Please type in another MFA code: ")
# payload['mfa_code'] = mfa_token
# res = helper.request_post(url, payload, jsonify_data=False)
# data = res.json()
#elif 'challenge' in data:
# challenge_id = data['challenge']['id']
# sms_code = input('Enter Robinhood code for validation: ')
# res = respond_to_challenge(challenge_id, sms_code)
# while 'challenge' in res and res['challenge']['remaining_attempts'] > 0:
# sms_code = input('That code was not correct. {0} tries remaining. Please type in another code: '.format(
# res['challenge']['remaining_attempts']))
# res = respond_to_challenge(challenge_id, sms_code)
# helper.update_session(
# 'X-ROBINHOOD-CHALLENGE-RESPONSE-ID', challenge_id)
# data = helper.request_post(url, payload)
# Update Session data with authorization or raise exception with the information present in data.
if 'access_token' in data:
token = '{0} {1}'.format(data['token_type'], data['access_token'])
helper.update_session('Authorization', token)
helper.set_login_state(True)
data['detail'] = "logged in with brand new authentication code."
if store_session:
with open(pickle_path, 'wb') as f:
pickle.dump({'token_type': data['token_type'],
'access_token': data['access_token'],
'refresh_token': data['refresh_token'],
'device_token': device_token}, f)
else:
raise Exception(data['detail'])
else:
raise Exception('Error: Trouble connecting to robinhood API. Check internet connection.')
return(data)
@helper.login_required
def logout():
"""Removes authorization from the session header.
:returns: None
"""
helper.set_login_state(False)
helper.update_session('Authorization', None)
# --- redis/client.py (sam974/redis-py, MIT license) ---
from itertools import chain
import datetime
import warnings
import time
import threading
import time as mod_time
import re
import hashlib
from redis.connection import (ConnectionPool, UnixDomainSocketConnection,
SSLConnection)
from redis.lock import Lock
from redis.exceptions import (
ConnectionError,
DataError,
ExecAbortError,
NoScriptError,
PubSubError,
RedisError,
ResponseError,
TimeoutError,
WatchError,
ModuleError,
)
from redis.utils import safe_str, str_if_bytes
SYM_EMPTY = b''
EMPTY_RESPONSE = 'EMPTY_RESPONSE'
def list_or_args(keys, args):
# returns a single new list combining keys and args
try:
iter(keys)
# a string or bytes instance can be iterated, but indicates
# keys wasn't passed as a list
if isinstance(keys, (bytes, str)):
keys = [keys]
else:
keys = list(keys)
except TypeError:
keys = [keys]
if args:
keys.extend(args)
return keys
def timestamp_to_datetime(response):
"Converts a unix timestamp to a Python datetime object"
if not response:
return None
try:
response = int(response)
except ValueError:
return None
return datetime.datetime.fromtimestamp(response)
def string_keys_to_dict(key_string, callback):
return dict.fromkeys(key_string.split(), callback)
class CaseInsensitiveDict(dict):
"Case insensitive dict implementation. Assumes string keys only."
def __init__(self, data):
for k, v in data.items():
self[k.upper()] = v
def __contains__(self, k):
return super().__contains__(k.upper())
def __delitem__(self, k):
super().__delitem__(k.upper())
def __getitem__(self, k):
return super().__getitem__(k.upper())
def get(self, k, default=None):
return super().get(k.upper(), default)
def __setitem__(self, k, v):
super().__setitem__(k.upper(), v)
def update(self, data):
data = CaseInsensitiveDict(data)
super().update(data)
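The class above lets response-callback lookups ignore command casing by normalizing every key to upper case at the boundary. A trimmed re-declaration (so this snippet runs on its own) with a usage example:

```python
class CIDict(dict):
    """Case-insensitive dict sketch: keys are upper-cased on every access."""
    def __init__(self, data):
        for k, v in data.items():
            self[k.upper()] = v

    def __contains__(self, k):
        return super().__contains__(k.upper())

    def __getitem__(self, k):
        return super().__getitem__(k.upper())

    def __setitem__(self, k, v):
        super().__setitem__(k.upper(), v)

    def get(self, k, default=None):
        return super().get(k.upper(), default)

callbacks = CIDict({'get': str, 'ping': bool})
assert 'GET' in callbacks
assert callbacks['GeT'] is str          # any casing resolves
assert callbacks.get('missing') is None
```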
def parse_debug_object(response):
"Parse the results of Redis's DEBUG OBJECT command into a Python dict"
# The 'type' of the object is the first item in the response, but isn't
# prefixed with a name
response = str_if_bytes(response)
response = 'type:' + response
response = dict(kv.split(':') for kv in response.split())
# parse some expected int values from the string response
# note: this cmd isn't spec'd so these may not appear in all redis versions
int_fields = ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle')
for field in int_fields:
if field in response:
response[field] = int(response[field])
return response
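Fed a typical `DEBUG OBJECT` reply, the parser above yields a dict with the numeric fields coerced to `int`. The sample reply below is fabricated but follows the documented shape; the logic is inlined (without the bytes-decoding step) so the example is standalone:

```python
def parse_debug_object_lite(response):
    # Prefix 'type:' so the first unlabelled token gets a key too.
    response = 'type:' + response
    parsed = dict(kv.split(':') for kv in response.split())
    for field in ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle'):
        if field in parsed:
            parsed[field] = int(parsed[field])
    return parsed

raw = ('Value at:0x7f9e refcount:1 encoding:embstr '
       'serializedlength:6 lru:12 lru_seconds_idle:3')
obj = parse_debug_object_lite(raw)
assert obj['refcount'] == 1 and obj['serializedlength'] == 6
assert obj['type'] == 'Value'   # first unlabelled token becomes the type
```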
def parse_object(response, infotype):
"Parse the results of an OBJECT command"
if infotype in ('idletime', 'refcount'):
return int_or_none(response)
return response
def parse_info(response):
"Parse the result of Redis's INFO command into a Python dict"
info = {}
response = str_if_bytes(response)
def get_value(value):
if ',' not in value or '=' not in value:
try:
if '.' in value:
return float(value)
else:
return int(value)
except ValueError:
return value
else:
sub_dict = {}
for item in value.split(','):
k, v = item.rsplit('=', 1)
sub_dict[k] = get_value(v)
return sub_dict
for line in response.splitlines():
if line and not line.startswith('#'):
if line.find(':') != -1:
# Split, the info fields keys and values.
# Note that the value may contain ':'. but the 'host:'
# pseudo-command is the only case where the key contains ':'
key, value = line.split(':', 1)
if key == 'cmdstat_host':
key, value = line.rsplit(':', 1)
if key == 'module':
# Hardcode a list for key 'modules' since there could be
# multiple lines that started with 'module'
info.setdefault('modules', []).append(get_value(value))
else:
info[key] = get_value(value)
else:
# if the line isn't splittable, append it to the "__raw__" key
info.setdefault('__raw__', []).append(line)
return info
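A condensed version of the same line-splitting and numeric coercion, run against a fabricated INFO excerpt (the nested `k=v,k=v`, `cmdstat_host`, and `modules` handling above is omitted here for brevity):

```python
def parse_info_lite(response):
    info = {}
    for line in response.splitlines():
        # Skip section headers ('# Server') and non key:value lines.
        if line and not line.startswith('#') and ':' in line:
            key, value = line.split(':', 1)
            try:
                info[key] = float(value) if '.' in value else int(value)
            except ValueError:
                info[key] = value
    return info

sample = ('# Server\r\n'
          'redis_version:6.2.6\r\n'
          'uptime_in_seconds:1234\r\n'
          'mem_fragmentation_ratio:1.13\r\n')
info = parse_info_lite(sample)
assert info['uptime_in_seconds'] == 1234
assert info['redis_version'] == '6.2.6'  # has dots but is not a float
```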
def parse_memory_stats(response, **kwargs):
"Parse the results of MEMORY STATS"
stats = pairs_to_dict(response,
decode_keys=True,
decode_string_values=True)
for key, value in stats.items():
if key.startswith('db.'):
stats[key] = pairs_to_dict(value,
decode_keys=True,
decode_string_values=True)
return stats
SENTINEL_STATE_TYPES = {
'can-failover-its-master': int,
'config-epoch': int,
'down-after-milliseconds': int,
'failover-timeout': int,
'info-refresh': int,
'last-hello-message': int,
'last-ok-ping-reply': int,
'last-ping-reply': int,
'last-ping-sent': int,
'master-link-down-time': int,
'master-port': int,
'num-other-sentinels': int,
'num-slaves': int,
'o-down-time': int,
'pending-commands': int,
'parallel-syncs': int,
'port': int,
'quorum': int,
'role-reported-time': int,
's-down-time': int,
'slave-priority': int,
'slave-repl-offset': int,
'voted-leader-epoch': int
}
def parse_sentinel_state(item):
result = pairs_to_dict_typed(item, SENTINEL_STATE_TYPES)
flags = set(result['flags'].split(','))
for name, flag in (('is_master', 'master'), ('is_slave', 'slave'),
('is_sdown', 's_down'), ('is_odown', 'o_down'),
('is_sentinel', 'sentinel'),
('is_disconnected', 'disconnected'),
('is_master_down', 'master_down')):
result[name] = flag in flags
return result
def parse_sentinel_master(response):
return parse_sentinel_state(map(str_if_bytes, response))
def parse_sentinel_masters(response):
result = {}
for item in response:
state = parse_sentinel_state(map(str_if_bytes, item))
result[state['name']] = state
return result
def parse_sentinel_slaves_and_sentinels(response):
return [parse_sentinel_state(map(str_if_bytes, item)) for item in response]
def parse_sentinel_get_master(response):
return response and (response[0], int(response[1])) or None
def pairs_to_dict(response, decode_keys=False, decode_string_values=False):
"Create a dict given a list of key/value pairs"
if response is None:
return {}
if decode_keys or decode_string_values:
# the iter form is faster, but I don't know how to make that work
# with a str_if_bytes() map
keys = response[::2]
if decode_keys:
keys = map(str_if_bytes, keys)
values = response[1::2]
if decode_string_values:
values = map(str_if_bytes, values)
return dict(zip(keys, values))
else:
it = iter(response)
return dict(zip(it, it))
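The zip-over-one-iterator idiom pairs alternating elements without slicing: both arguments of `zip` share the same iterator, so each tuple consumes two items. Re-declared minimally so the snippet runs on its own:

```python
def pairs_to_dict_lite(response):
    # [k1, v1, k2, v2, ...] -> {k1: v1, k2: v2, ...}
    it = iter(response)
    return dict(zip(it, it))

reply = [b'name', b'redis', b'role', b'master']
assert pairs_to_dict_lite(reply) == {b'name': b'redis', b'role': b'master'}
```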
def pairs_to_dict_typed(response, type_info):
it = iter(response)
result = {}
for key, value in zip(it, it):
if key in type_info:
try:
value = type_info[key](value)
except Exception:
# if for some reason the value can't be coerced, just use
# the string value
pass
result[key] = value
return result
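`pairs_to_dict_typed` additionally coerces selected values, silently keeping the raw value when coercion fails. A standalone run with made-up field names:

```python
def pairs_to_dict_typed_lite(response, type_info):
    it = iter(response)
    result = {}
    for key, value in zip(it, it):
        if key in type_info:
            try:
                value = type_info[key](value)
            except Exception:
                pass        # keep the raw value on a failed cast
        result[key] = value
    return result

state = pairs_to_dict_typed_lite(
    ['port', '6379', 'quorum', 'not-a-number'],
    {'port': int, 'quorum': int})
assert state == {'port': 6379, 'quorum': 'not-a-number'}
```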
def zset_score_pairs(response, **options):
"""
If ``withscores`` is specified in the options, return the response as
a list of (value, score) pairs
"""
if not response or not options.get('withscores'):
return response
score_cast_func = options.get('score_cast_func', float)
it = iter(response)
return list(zip(it, map(score_cast_func, it)))
def sort_return_tuples(response, **options):
"""
If ``groups`` is specified, return the response as a list of
n-element tuples with n being the value found in options['groups']
"""
if not response or not options.get('groups'):
return response
n = options['groups']
return list(zip(*[response[i::n] for i in range(n)]))
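A standalone sketch of the strided-slice regrouping used above: for a flat reply and group size `n`, `response[i::n]` collects every n-th element starting at offset `i` (column `i`), and `zip(*)` transposes those columns back into n-element row tuples.

```python
def group_tuples(response, n):
    # response[i::n] gathers column i; zip(*) re-assembles the rows
    return list(zip(*[response[i::n] for i in range(n)]))

group_tuples(['u1', 'a@x.com', 'u2', 'b@x.com'], 2)
# -> [('u1', 'a@x.com'), ('u2', 'b@x.com')]
```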
def int_or_none(response):
if response is None:
return None
return int(response)
def parse_stream_list(response):
if response is None:
return None
data = []
for r in response:
if r is not None:
data.append((r[0], pairs_to_dict(r[1])))
else:
data.append((None, None))
return data
def pairs_to_dict_with_str_keys(response):
return pairs_to_dict(response, decode_keys=True)
def parse_list_of_dicts(response):
return list(map(pairs_to_dict_with_str_keys, response))
def parse_xclaim(response, **options):
if options.get('parse_justid', False):
return response
return parse_stream_list(response)
def parse_xinfo_stream(response):
data = pairs_to_dict(response, decode_keys=True)
first = data['first-entry']
if first is not None:
data['first-entry'] = (first[0], pairs_to_dict(first[1]))
last = data['last-entry']
if last is not None:
data['last-entry'] = (last[0], pairs_to_dict(last[1]))
return data
def parse_xread(response):
if response is None:
return []
return [[r[0], parse_stream_list(r[1])] for r in response]
def parse_xpending(response, **options):
if options.get('parse_detail', False):
return parse_xpending_range(response)
consumers = [{'name': n, 'pending': int(p)} for n, p in response[3] or []]
return {
'pending': response[0],
'min': response[1],
'max': response[2],
'consumers': consumers
}
def parse_xpending_range(response):
k = ('message_id', 'consumer', 'time_since_delivered', 'times_delivered')
return [dict(zip(k, r)) for r in response]
def float_or_none(response):
if response is None:
return None
return float(response)
def bool_ok(response):
return str_if_bytes(response) == 'OK'
def parse_zadd(response, **options):
if response is None:
return None
if options.get('as_score'):
return float(response)
return int(response)
def parse_client_list(response, **options):
clients = []
for c in str_if_bytes(response).splitlines():
# Values might contain '='
clients.append(dict(pair.split('=', 1) for pair in c.split(' ')))
return clients
def parse_config_get(response, **options):
response = [str_if_bytes(i) if i is not None else None for i in response]
return response and pairs_to_dict(response) or {}
def parse_scan(response, **options):
cursor, r = response
return int(cursor), r
def parse_hscan(response, **options):
cursor, r = response
return int(cursor), r and pairs_to_dict(r) or {}
def parse_zscan(response, **options):
score_cast_func = options.get('score_cast_func', float)
cursor, r = response
it = iter(r)
return int(cursor), list(zip(it, map(score_cast_func, it)))
def parse_slowlog_get(response, **options):
space = ' ' if options.get('decode_responses', False) else b' '
return [{
'id': item[0],
'start_time': int(item[1]),
'duration': int(item[2]),
'command': space.join(item[3])
} for item in response]
def parse_cluster_info(response, **options):
response = str_if_bytes(response)
return dict(line.split(':') for line in response.splitlines() if line)
def _parse_node_line(line):
    line_items = line.split(' ')
    node_id, addr, flags, master_id, ping, pong, epoch, \
        connected = line_items[:8]
slots = [sl.split('-') for sl in line_items[8:]]
node_dict = {
'node_id': node_id,
'flags': flags,
'master_id': master_id,
'last_ping_sent': ping,
'last_pong_rcvd': pong,
'epoch': epoch,
'slots': slots,
        'connected': connected == 'connected'
}
return addr, node_dict
def parse_cluster_nodes(response, **options):
raw_lines = str_if_bytes(response).splitlines()
return dict(_parse_node_line(line) for line in raw_lines)
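A hedged sketch of how `_parse_node_line` splits one CLUSTER NODES line: eight fixed fields followed by a variable-length slot list. The sample line below is hypothetical, shaped to match what the parser above expects rather than captured from a real server.

```python
# hypothetical CLUSTER NODES line (abbreviated ids, assumed layout)
line = ('07c37dfeb2 127.0.0.1:30004 slave e7d1eecce1 0 '
        '1426238317239 4 connected 0-5460')
fields = line.split(' ')
# the first eight fields are fixed-position metadata
node_id, addr, flags, master_id, ping, pong, epoch, connected = fields[:8]
# anything after that is a slot or slot range like '0-5460'
slots = [sl.split('-') for sl in fields[8:]]
# addr == '127.0.0.1:30004', slots == [['0', '5460']]
```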
def parse_georadius_generic(response, **options):
    if options['store'] or options['store_dist']:
        # `store` and `store_dist` can't be combined
        # with other command arguments.
        return response
    if not isinstance(response, list):
        response_list = [response]
    else:
        response_list = response
if not options['withdist'] and not options['withcoord']\
and not options['withhash']:
# just a bunch of places
return response_list
cast = {
'withdist': float,
'withcoord': lambda ll: (float(ll[0]), float(ll[1])),
'withhash': int
}
    # zip all output results with each casting function to get
    # the properly typed native Python value.
f = [lambda x: x]
f += [cast[o] for o in ['withdist', 'withhash', 'withcoord'] if options[o]]
return [
list(map(lambda fv: fv[0](fv[1]), zip(f, r))) for r in response_list
]
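A standalone sketch of the "zip a result row with per-column cast functions" pattern used above: each position in a row is paired with the function that converts it to a native Python value, with an identity function for the raw first column.

```python
# per-column converters: place name stays as-is, then distance, then geohash
casts = [lambda x: x, float, int]
row = ['Palermo', '190.4424', '3479099956230698']
converted = [f(v) for f, v in zip(casts, row)]
# -> ['Palermo', 190.4424, 3479099956230698]
```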
def parse_pubsub_numsub(response, **options):
return list(zip(response[0::2], response[1::2]))
def parse_client_kill(response, **options):
if isinstance(response, int):
return response
return str_if_bytes(response) == 'OK'
def parse_acl_getuser(response, **options):
if response is None:
return None
data = pairs_to_dict(response, decode_keys=True)
# convert everything but user-defined data in 'keys' to native strings
data['flags'] = list(map(str_if_bytes, data['flags']))
data['passwords'] = list(map(str_if_bytes, data['passwords']))
data['commands'] = str_if_bytes(data['commands'])
# split 'commands' into separate 'categories' and 'commands' lists
commands, categories = [], []
for command in data['commands'].split(' '):
if '@' in command:
categories.append(command)
else:
commands.append(command)
data['commands'] = commands
data['categories'] = categories
data['enabled'] = 'on' in data['flags']
return data
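A standalone sketch of the command/category split above: tokens from the ACL GETUSER `commands` string that contain `'@'` are category grants, everything else is an individual command rule.

```python
# hypothetical rule string in the shape parse_acl_getuser expects
raw = '-@all +@read +get -set'
commands, categories = [], []
for token in raw.split(' '):
    # '@' marks a category grant such as '+@read'
    (categories if '@' in token else commands).append(token)
# categories == ['-@all', '+@read'], commands == ['+get', '-set']
```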
def parse_acl_log(response, **options):
if response is None:
return None
if isinstance(response, list):
data = []
for log in response:
log_data = pairs_to_dict(log, True, True)
client_info = log_data.get('client-info', '')
log_data["client-info"] = parse_client_info(client_info)
            # float() is lossy compared to the C double used by the server
log_data["age-seconds"] = float(log_data["age-seconds"])
data.append(log_data)
else:
data = bool_ok(response)
return data
def parse_client_info(value):
"""
    Parse the client-info string found in ACL LOG entries, which has the
    format "key1=value1 key2=value2 key3=value3".
"""
client_info = {}
infos = value.split(" ")
for info in infos:
key, value = info.split("=")
client_info[key] = value
    # These fields are defined as int in networking.c
for int_key in {"id", "age", "idle", "db", "sub", "psub",
"multi", "qbuf", "qbuf-free", "obl",
"oll", "omem"}:
client_info[int_key] = int(client_info[int_key])
return client_info
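A standalone sketch of the client-info parsing above, run on a hypothetical input string (not captured from a real server). Using `maxsplit=1` keeps any `'='` that appears inside a value intact, which the plain `split("=")` above would mangle.

```python
# hypothetical client-info string; 'name=' shows an empty value surviving
sample = 'id=3 addr=127.0.0.1:57276 name= age=1 idle=0 db=0'
info = dict(part.split('=', 1) for part in sample.split(' '))
# coerce one of the integer-typed fields, as parse_client_info does
info['id'] = int(info['id'])
# info['id'] == 3, info['name'] == ''
```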
def parse_module_result(response):
if isinstance(response, ModuleError):
raise response
return True
class Redis:
"""
Implementation of the Redis protocol.
This abstract class provides a Python interface to all Redis commands
and an implementation of the Redis protocol.
Connection and Pipeline derive from this, implementing how
the commands are sent and received to the Redis server
"""
RESPONSE_CALLBACKS = {
**string_keys_to_dict(
'AUTH EXPIRE EXPIREAT HEXISTS HMSET MOVE MSETNX PERSIST '
'PSETEX RENAMENX SISMEMBER SMOVE SETEX SETNX',
bool
),
**string_keys_to_dict(
'BITCOUNT BITPOS DECRBY DEL EXISTS GEOADD GETBIT HDEL HLEN '
'HSTRLEN INCRBY LINSERT LLEN LPUSHX PFADD PFCOUNT RPUSHX SADD '
'SCARD SDIFFSTORE SETBIT SETRANGE SINTERSTORE SREM STRLEN '
'SUNIONSTORE UNLINK XACK XDEL XLEN XTRIM ZCARD ZLEXCOUNT ZREM '
'ZREMRANGEBYLEX ZREMRANGEBYRANK ZREMRANGEBYSCORE',
int
),
**string_keys_to_dict(
'INCRBYFLOAT HINCRBYFLOAT',
float
),
**string_keys_to_dict(
# these return OK, or int if redis-server is >=1.3.4
'LPUSH RPUSH',
lambda r: isinstance(r, int) and r or str_if_bytes(r) == 'OK'
),
**string_keys_to_dict('SORT', sort_return_tuples),
**string_keys_to_dict('ZSCORE ZINCRBY GEODIST', float_or_none),
**string_keys_to_dict(
'FLUSHALL FLUSHDB LSET LTRIM MSET PFMERGE READONLY READWRITE '
'RENAME SAVE SELECT SHUTDOWN SLAVEOF SWAPDB WATCH UNWATCH ',
bool_ok
),
**string_keys_to_dict('BLPOP BRPOP', lambda r: r and tuple(r) or None),
**string_keys_to_dict(
'SDIFF SINTER SMEMBERS SUNION',
lambda r: r and set(r) or set()
),
**string_keys_to_dict(
'ZPOPMAX ZPOPMIN ZRANGE ZRANGEBYSCORE ZREVRANGE ZREVRANGEBYSCORE',
zset_score_pairs
),
        **string_keys_to_dict(
            'BZPOPMIN BZPOPMAX',
            lambda r: r and (r[0], r[1], float(r[2])) or None
        ),
**string_keys_to_dict('ZRANK ZREVRANK', int_or_none),
**string_keys_to_dict('XREVRANGE XRANGE', parse_stream_list),
**string_keys_to_dict('XREAD XREADGROUP', parse_xread),
**string_keys_to_dict('BGREWRITEAOF BGSAVE', lambda r: True),
'ACL CAT': lambda r: list(map(str_if_bytes, r)),
'ACL DELUSER': int,
'ACL GENPASS': str_if_bytes,
'ACL GETUSER': parse_acl_getuser,
'ACL LIST': lambda r: list(map(str_if_bytes, r)),
'ACL LOAD': bool_ok,
'ACL LOG': parse_acl_log,
'ACL SAVE': bool_ok,
'ACL SETUSER': bool_ok,
'ACL USERS': lambda r: list(map(str_if_bytes, r)),
'ACL WHOAMI': str_if_bytes,
'CLIENT GETNAME': str_if_bytes,
'CLIENT ID': int,
'CLIENT KILL': parse_client_kill,
'CLIENT LIST': parse_client_list,
'CLIENT SETNAME': bool_ok,
'CLIENT UNBLOCK': lambda r: r and int(r) == 1 or False,
'CLIENT PAUSE': bool_ok,
'CLUSTER ADDSLOTS': bool_ok,
'CLUSTER COUNT-FAILURE-REPORTS': lambda x: int(x),
'CLUSTER COUNTKEYSINSLOT': lambda x: int(x),
'CLUSTER DELSLOTS': bool_ok,
'CLUSTER FAILOVER': bool_ok,
'CLUSTER FORGET': bool_ok,
'CLUSTER INFO': parse_cluster_info,
'CLUSTER KEYSLOT': lambda x: int(x),
'CLUSTER MEET': bool_ok,
'CLUSTER NODES': parse_cluster_nodes,
'CLUSTER REPLICATE': bool_ok,
'CLUSTER RESET': bool_ok,
'CLUSTER SAVECONFIG': bool_ok,
'CLUSTER SET-CONFIG-EPOCH': bool_ok,
'CLUSTER SETSLOT': bool_ok,
'CLUSTER SLAVES': parse_cluster_nodes,
'CONFIG GET': parse_config_get,
'CONFIG RESETSTAT': bool_ok,
'CONFIG SET': bool_ok,
'DEBUG OBJECT': parse_debug_object,
'GEOHASH': lambda r: list(map(str_if_bytes, r)),
'GEOPOS': lambda r: list(map(lambda ll: (float(ll[0]),
float(ll[1]))
if ll is not None else None, r)),
'GEORADIUS': parse_georadius_generic,
'GEORADIUSBYMEMBER': parse_georadius_generic,
'HGETALL': lambda r: r and pairs_to_dict(r) or {},
'HSCAN': parse_hscan,
'INFO': parse_info,
'LASTSAVE': timestamp_to_datetime,
'MEMORY PURGE': bool_ok,
'MEMORY STATS': parse_memory_stats,
'MEMORY USAGE': int_or_none,
'MODULE LOAD': parse_module_result,
'MODULE UNLOAD': parse_module_result,
'MODULE LIST': lambda r: [pairs_to_dict(m) for m in r],
'OBJECT': parse_object,
'PING': lambda r: str_if_bytes(r) == 'PONG',
'PUBSUB NUMSUB': parse_pubsub_numsub,
'RANDOMKEY': lambda r: r and r or None,
'SCAN': parse_scan,
'SCRIPT EXISTS': lambda r: list(map(bool, r)),
'SCRIPT FLUSH': bool_ok,
'SCRIPT KILL': bool_ok,
'SCRIPT LOAD': str_if_bytes,
'SENTINEL GET-MASTER-ADDR-BY-NAME': parse_sentinel_get_master,
'SENTINEL MASTER': parse_sentinel_master,
'SENTINEL MASTERS': parse_sentinel_masters,
'SENTINEL MONITOR': bool_ok,
'SENTINEL REMOVE': bool_ok,
'SENTINEL SENTINELS': parse_sentinel_slaves_and_sentinels,
'SENTINEL SET': bool_ok,
'SENTINEL SLAVES': parse_sentinel_slaves_and_sentinels,
'SET': lambda r: r and str_if_bytes(r) == 'OK',
'SLOWLOG GET': parse_slowlog_get,
'SLOWLOG LEN': int,
'SLOWLOG RESET': bool_ok,
'SSCAN': parse_scan,
'TIME': lambda x: (int(x[0]), int(x[1])),
'XCLAIM': parse_xclaim,
'XGROUP CREATE': bool_ok,
'XGROUP DELCONSUMER': int,
'XGROUP DESTROY': bool,
'XGROUP SETID': bool_ok,
'XINFO CONSUMERS': parse_list_of_dicts,
'XINFO GROUPS': parse_list_of_dicts,
'XINFO STREAM': parse_xinfo_stream,
'XPENDING': parse_xpending,
'ZADD': parse_zadd,
'ZSCAN': parse_zscan,
}
@classmethod
def from_url(cls, url, **kwargs):
"""
Return a Redis client object configured from the given URL
For example::
redis://[[username]:[password]]@localhost:6379/0
rediss://[[username]:[password]]@localhost:6379/0
unix://[[username]:[password]]@/path/to/socket.sock?db=0
Three URL schemes are supported:
- `redis://` creates a TCP socket connection. See more at:
<https://www.iana.org/assignments/uri-schemes/prov/redis>
- `rediss://` creates a SSL wrapped TCP socket connection. See more at:
<https://www.iana.org/assignments/uri-schemes/prov/rediss>
- ``unix://``: creates a Unix Domain Socket connection.
The username, password, hostname, path and all querystring values
are passed through urllib.parse.unquote in order to replace any
percent-encoded values with their corresponding characters.
There are several ways to specify a database number. The first value
found will be used:
1. A ``db`` querystring option, e.g. redis://localhost?db=0
2. If using the redis:// or rediss:// schemes, the path argument
of the url, e.g. redis://localhost/0
3. A ``db`` keyword argument to this function.
If none of these options are specified, the default db=0 is used.
All querystring options are cast to their appropriate Python types.
Boolean arguments can be specified with string values "True"/"False"
or "Yes"/"No". Values that cannot be properly cast cause a
``ValueError`` to be raised. Once parsed, the querystring arguments
and keyword arguments are passed to the ``ConnectionPool``'s
class initializer. In the case of conflicting arguments, querystring
arguments always win.
"""
connection_pool = ConnectionPool.from_url(url, **kwargs)
return cls(connection_pool=connection_pool)
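A hedged, standard-library-only sketch of the database-number precedence the docstring describes: a `db` querystring option wins over the URL path, which wins over a keyword default. The helper name `db_from_url` is hypothetical; the real parsing lives in `ConnectionPool.from_url`.

```python
from urllib.parse import urlparse, parse_qs, unquote

def db_from_url(url, db=0):
    # 1. a ?db= querystring option takes priority
    parts = urlparse(url)
    qs = parse_qs(parts.query)
    if 'db' in qs:
        return int(qs['db'][0])
    # 2. then the URL path, e.g. redis://localhost/2
    path = unquote(parts.path).lstrip('/')
    if path:
        return int(path)
    # 3. finally the keyword default
    return db

db_from_url('redis://localhost:6379/2')   # -> 2
db_from_url('redis://localhost/5?db=3')   # -> 3
```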
def __init__(self, host='localhost', port=6379,
db=0, password=None, socket_timeout=None,
socket_connect_timeout=None,
socket_keepalive=None, socket_keepalive_options=None,
connection_pool=None, unix_socket_path=None,
encoding='utf-8', encoding_errors='strict',
charset=None, errors=None,
decode_responses=False, retry_on_timeout=False,
ssl=False, ssl_keyfile=None, ssl_certfile=None,
ssl_cert_reqs='required', ssl_ca_certs=None,
ssl_check_hostname=False,
max_connections=None, single_connection_client=False,
health_check_interval=0, client_name=None, username=None):
if not connection_pool:
if charset is not None:
warnings.warn(DeprecationWarning(
'"charset" is deprecated. Use "encoding" instead'))
encoding = charset
if errors is not None:
warnings.warn(DeprecationWarning(
'"errors" is deprecated. Use "encoding_errors" instead'))
encoding_errors = errors
kwargs = {
'db': db,
'username': username,
'password': password,
'socket_timeout': socket_timeout,
'encoding': encoding,
'encoding_errors': encoding_errors,
'decode_responses': decode_responses,
'retry_on_timeout': retry_on_timeout,
'max_connections': max_connections,
'health_check_interval': health_check_interval,
'client_name': client_name
}
# based on input, setup appropriate connection args
if unix_socket_path is not None:
kwargs.update({
'path': unix_socket_path,
'connection_class': UnixDomainSocketConnection
})
else:
# TCP specific options
kwargs.update({
'host': host,
'port': port,
'socket_connect_timeout': socket_connect_timeout,
'socket_keepalive': socket_keepalive,
'socket_keepalive_options': socket_keepalive_options,
})
if ssl:
kwargs.update({
'connection_class': SSLConnection,
'ssl_keyfile': ssl_keyfile,
'ssl_certfile': ssl_certfile,
'ssl_cert_reqs': ssl_cert_reqs,
'ssl_ca_certs': ssl_ca_certs,
'ssl_check_hostname': ssl_check_hostname,
})
connection_pool = ConnectionPool(**kwargs)
self.connection_pool = connection_pool
self.connection = None
if single_connection_client:
self.connection = self.connection_pool.get_connection('_')
self.response_callbacks = CaseInsensitiveDict(
self.__class__.RESPONSE_CALLBACKS)
def __repr__(self):
return "%s<%s>" % (type(self).__name__, repr(self.connection_pool))
def set_response_callback(self, command, callback):
"Set a custom Response Callback"
self.response_callbacks[command] = callback
def pipeline(self, transaction=True, shard_hint=None):
"""
Return a new pipeline object that can queue multiple commands for
later execution. ``transaction`` indicates whether all commands
should be executed atomically. Apart from making a group of operations
atomic, pipelines are useful for reducing the back-and-forth overhead
between the client and server.
"""
return Pipeline(
self.connection_pool,
self.response_callbacks,
transaction,
shard_hint)
def transaction(self, func, *watches, **kwargs):
"""
Convenience method for executing the callable `func` as a transaction
while watching all keys specified in `watches`. The 'func' callable
should expect a single argument which is a Pipeline object.
"""
shard_hint = kwargs.pop('shard_hint', None)
value_from_callable = kwargs.pop('value_from_callable', False)
watch_delay = kwargs.pop('watch_delay', None)
with self.pipeline(True, shard_hint) as pipe:
while True:
try:
if watches:
pipe.watch(*watches)
func_value = func(pipe)
exec_value = pipe.execute()
return func_value if value_from_callable else exec_value
except WatchError:
if watch_delay is not None and watch_delay > 0:
time.sleep(watch_delay)
continue
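A standalone sketch of the retry loop in `transaction()` above, using a stub pipeline (no server required) to show how a `WatchError` simply restarts the attempt until `execute()` succeeds. The stub classes are illustrative stand-ins, not part of the library.

```python
class WatchError(Exception):
    pass

class StubPipeline:
    """Stand-in pipeline: the first execute() simulates a watched-key
    conflict, the second succeeds."""
    def __init__(self):
        self.attempts = 0
    def watch(self, *keys):
        pass
    def execute(self):
        self.attempts += 1
        if self.attempts == 1:
            raise WatchError()   # a watched key changed mid-transaction
        return ['OK']

def run_transaction(pipe, func, *watches):
    # mirrors the while/try/except WatchError loop in Redis.transaction()
    while True:
        try:
            pipe.watch(*watches)
            func(pipe)
            return pipe.execute()
        except WatchError:
            continue

pipe = StubPipeline()
result = run_transaction(pipe, lambda p: None, 'balance')
# result == ['OK'], pipe.attempts == 2
```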
def lock(self, name, timeout=None, sleep=0.1, blocking_timeout=None,
lock_class=None, thread_local=True):
"""
Return a new Lock object using key ``name`` that mimics
the behavior of threading.Lock.
If specified, ``timeout`` indicates a maximum life for the lock.
By default, it will remain locked until release() is called.
``sleep`` indicates the amount of time to sleep per loop iteration
when the lock is in blocking mode and another client is currently
holding the lock.
``blocking_timeout`` indicates the maximum amount of time in seconds to
spend trying to acquire the lock. A value of ``None`` indicates
continue trying forever. ``blocking_timeout`` can be specified as a
float or integer, both representing the number of seconds to wait.
``lock_class`` forces the specified lock implementation.
``thread_local`` indicates whether the lock token is placed in
thread-local storage. By default, the token is placed in thread local
storage so that a thread only sees its token, not a token set by
another thread. Consider the following timeline:
time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds.
thread-1 sets the token to "abc"
time: 1, thread-2 blocks trying to acquire `my-lock` using the
Lock instance.
time: 5, thread-1 has not yet completed. redis expires the lock
key.
time: 5, thread-2 acquired `my-lock` now that it's available.
thread-2 sets the token to "xyz"
        time: 6, thread-1 finishes its work and calls release(). If the
              token is *not* stored in thread-local storage, then
              thread-1 would see the token value as "xyz" and would be
              able to successfully release thread-2's lock.
In some use cases it's necessary to disable thread local storage. For
example, if you have code where one thread acquires a lock and passes
that lock instance to a worker thread to release later. If thread
local storage isn't disabled in this case, the worker thread won't see
the token set by the thread that acquired the lock. Our assumption
        is that these cases aren't common and as such default to using
        thread local storage.
        """
if lock_class is None:
lock_class = Lock
return lock_class(self, name, timeout=timeout, sleep=sleep,
blocking_timeout=blocking_timeout,
thread_local=thread_local)
def pubsub(self, **kwargs):
"""
Return a Publish/Subscribe object. With this object, you can
subscribe to channels and listen for messages that get published to
them.
"""
return PubSub(self.connection_pool, **kwargs)
def monitor(self):
return Monitor(self.connection_pool)
def client(self):
return self.__class__(connection_pool=self.connection_pool,
single_connection_client=True)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def __del__(self):
self.close()
def close(self):
conn = self.connection
if conn:
self.connection = None
self.connection_pool.release(conn)
# COMMAND EXECUTION AND PROTOCOL PARSING
def execute_command(self, *args, **options):
"Execute a command and return a parsed response"
pool = self.connection_pool
command_name = args[0]
conn = self.connection or pool.get_connection(command_name, **options)
try:
conn.send_command(*args)
return self.parse_response(conn, command_name, **options)
except (ConnectionError, TimeoutError) as e:
conn.disconnect()
if not (conn.retry_on_timeout and isinstance(e, TimeoutError)):
raise
conn.send_command(*args)
return self.parse_response(conn, command_name, **options)
finally:
if not self.connection:
pool.release(conn)
def parse_response(self, connection, command_name, **options):
"Parses a response from the Redis server"
try:
response = connection.read_response()
except ResponseError:
if EMPTY_RESPONSE in options:
return options[EMPTY_RESPONSE]
raise
if command_name in self.response_callbacks:
return self.response_callbacks[command_name](response, **options)
return response
# SERVER INFORMATION
# ACL methods
def acl_cat(self, category=None):
"""
Returns a list of categories or commands within a category.
If ``category`` is not supplied, returns a list of all categories.
If ``category`` is supplied, returns a list of all commands within
that category.
"""
pieces = [category] if category else []
return self.execute_command('ACL CAT', *pieces)
def acl_deluser(self, username):
"Delete the ACL for the specified ``username``"
return self.execute_command('ACL DELUSER', username)
def acl_genpass(self):
"Generate a random password value"
return self.execute_command('ACL GENPASS')
def acl_getuser(self, username):
"""
Get the ACL details for the specified ``username``.
If ``username`` does not exist, return None
"""
return self.execute_command('ACL GETUSER', username)
def acl_list(self):
"Return a list of all ACLs on the server"
return self.execute_command('ACL LIST')
def acl_log(self, count=None):
"""
Get ACL logs as a list.
:param int count: Get logs[0:count].
:rtype: List.
"""
args = []
if count is not None:
if not isinstance(count, int):
raise DataError('ACL LOG count must be an '
'integer')
args.append(count)
return self.execute_command('ACL LOG', *args)
def acl_log_reset(self):
"""
Reset ACL logs.
:rtype: Boolean.
"""
args = [b'RESET']
return self.execute_command('ACL LOG', *args)
def acl_load(self):
"""
Load ACL rules from the configured ``aclfile``.
Note that the server must be configured with the ``aclfile``
directive to be able to load ACL rules from an aclfile.
"""
return self.execute_command('ACL LOAD')
def acl_save(self):
"""
Save ACL rules to the configured ``aclfile``.
Note that the server must be configured with the ``aclfile``
directive to be able to save ACL rules to an aclfile.
"""
return self.execute_command('ACL SAVE')
def acl_setuser(self, username, enabled=False, nopass=False,
passwords=None, hashed_passwords=None, categories=None,
commands=None, keys=None, reset=False, reset_keys=False,
reset_passwords=False):
"""
Create or update an ACL user.
Create or update the ACL for ``username``. If the user already exists,
the existing ACL is completely overwritten and replaced with the
specified values.
``enabled`` is a boolean indicating whether the user should be allowed
to authenticate or not. Defaults to ``False``.
        ``nopass`` is a boolean indicating whether the user can authenticate
        without a password. This cannot be True if ``passwords`` are also
        specified.
``passwords`` if specified is a list of plain text passwords
to add to or remove from the user. Each password must be prefixed with
a '+' to add or a '-' to remove. For convenience, the value of
``passwords`` can be a simple prefixed string when adding or
removing a single password.
``hashed_passwords`` if specified is a list of SHA-256 hashed passwords
to add to or remove from the user. Each hashed password must be
prefixed with a '+' to add or a '-' to remove. For convenience,
the value of ``hashed_passwords`` can be a simple prefixed string when
adding or removing a single password.
``categories`` if specified is a list of strings representing category
permissions. Each string must be prefixed with either a '+' to add the
category permission or a '-' to remove the category permission.
``commands`` if specified is a list of strings representing command
permissions. Each string must be prefixed with either a '+' to add the
command permission or a '-' to remove the command permission.
``keys`` if specified is a list of key patterns to grant the user
        access to. Key patterns allow '*' to support wildcard matching. For
example, '*' grants access to all keys while 'cache:*' grants access
to all keys that are prefixed with 'cache:'. ``keys`` should not be
prefixed with a '~'.
``reset`` is a boolean indicating whether the user should be fully
reset prior to applying the new ACL. Setting this to True will
remove all existing passwords, flags and privileges from the user and
then apply the specified rules. If this is False, the user's existing
passwords, flags and privileges will be kept and any new specified
rules will be applied on top.
``reset_keys`` is a boolean indicating whether the user's key
permissions should be reset prior to applying any new key permissions
specified in ``keys``. If this is False, the user's existing
key permissions will be kept and any new specified key permissions
will be applied on top.
``reset_passwords`` is a boolean indicating whether to remove all
existing passwords and the 'nopass' flag from the user prior to
applying any new passwords specified in 'passwords' or
'hashed_passwords'. If this is False, the user's existing passwords
and 'nopass' status will be kept and any new specified passwords
or hashed_passwords will be applied on top.
"""
encoder = self.connection_pool.get_encoder()
pieces = [username]
if reset:
pieces.append(b'reset')
if reset_keys:
pieces.append(b'resetkeys')
if reset_passwords:
pieces.append(b'resetpass')
if enabled:
pieces.append(b'on')
else:
pieces.append(b'off')
if (passwords or hashed_passwords) and nopass:
raise DataError('Cannot set \'nopass\' and supply '
'\'passwords\' or \'hashed_passwords\'')
if passwords:
            # as most users will have only one password, allow ``passwords``
            # to be specified as a simple string or a list
passwords = list_or_args(passwords, [])
for i, password in enumerate(passwords):
password = encoder.encode(password)
if password.startswith(b'+'):
pieces.append(b'>%s' % password[1:])
elif password.startswith(b'-'):
pieces.append(b'<%s' % password[1:])
else:
                    raise DataError('Password %d must be prefixed with a '
                                    '"+" to add or a "-" to remove' % i)
if hashed_passwords:
            # as most users will have only one password, allow
            # ``hashed_passwords`` to be specified as a string or a list
hashed_passwords = list_or_args(hashed_passwords, [])
for i, hashed_password in enumerate(hashed_passwords):
hashed_password = encoder.encode(hashed_password)
if hashed_password.startswith(b'+'):
pieces.append(b'#%s' % hashed_password[1:])
elif hashed_password.startswith(b'-'):
pieces.append(b'!%s' % hashed_password[1:])
else:
                    raise DataError('Hashed password %d must be prefixed '
                                    'with a "+" to add or a "-" to remove' % i)
if nopass:
pieces.append(b'nopass')
if categories:
for category in categories:
category = encoder.encode(category)
# categories can be prefixed with one of (+@, +, -@, -)
if category.startswith(b'+@'):
pieces.append(category)
elif category.startswith(b'+'):
pieces.append(b'+@%s' % category[1:])
elif category.startswith(b'-@'):
pieces.append(category)
elif category.startswith(b'-'):
pieces.append(b'-@%s' % category[1:])
else:
raise DataError('Category "%s" must be prefixed with '
'"+" or "-"'
% encoder.decode(category, force=True))
if commands:
for cmd in commands:
cmd = encoder.encode(cmd)
if not cmd.startswith(b'+') and not cmd.startswith(b'-'):
raise DataError('Command "%s" must be prefixed with '
'"+" or "-"'
% encoder.decode(cmd, force=True))
pieces.append(cmd)
if keys:
for key in keys:
key = encoder.encode(key)
pieces.append(b'~%s' % key)
return self.execute_command('ACL SETUSER', *pieces)
def acl_users(self):
"Returns a list of all registered users on the server."
return self.execute_command('ACL USERS')
def acl_whoami(self):
"Get the username for the current connection"
return self.execute_command('ACL WHOAMI')
def bgrewriteaof(self):
"Tell the Redis server to rewrite the AOF file from data in memory."
return self.execute_command('BGREWRITEAOF')
def bgsave(self):
"""
Tell the Redis server to save its data to disk. Unlike save(),
this method is asynchronous and returns immediately.
"""
return self.execute_command('BGSAVE')
def client_kill(self, address):
"Disconnects the client at ``address`` (ip:port)"
return self.execute_command('CLIENT KILL', address)
def client_kill_filter(self, _id=None, _type=None, addr=None, skipme=None):
"""
Disconnects client(s) using a variety of filter options
        :param _id: Kills a client by its unique ID field
        :param _type: Kills a client by type where type is one of 'normal',
'master', 'slave' or 'pubsub'
:param addr: Kills a client by its 'address:port'
:param skipme: If True, then the client calling the command
will not get killed even if it is identified by one of the filter
options. If skipme is not provided, the server defaults to skipme=True
"""
args = []
if _type is not None:
client_types = ('normal', 'master', 'slave', 'pubsub')
if str(_type).lower() not in client_types:
raise DataError("CLIENT KILL type must be one of %r" % (
client_types,))
args.extend((b'TYPE', _type))
if skipme is not None:
if not isinstance(skipme, bool):
raise DataError("CLIENT KILL skipme must be a bool")
if skipme:
args.extend((b'SKIPME', b'YES'))
else:
args.extend((b'SKIPME', b'NO'))
if _id is not None:
args.extend((b'ID', _id))
if addr is not None:
args.extend((b'ADDR', addr))
if not args:
raise DataError("CLIENT KILL <filter> <value> ... ... <filter> "
"<value> must specify at least one filter")
return self.execute_command('CLIENT KILL', *args)
def client_list(self, _type=None):
"""
Returns a list of currently connected clients.
If type of client specified, only that type will be returned.
:param _type: optional. one of the client types (normal, master,
replica, pubsub)
"""
if _type is not None:
client_types = ('normal', 'master', 'replica', 'pubsub')
if str(_type).lower() not in client_types:
raise DataError("CLIENT LIST _type must be one of %r" % (
client_types,))
return self.execute_command('CLIENT LIST', b'TYPE', _type)
return self.execute_command('CLIENT LIST')
def client_getname(self):
"Returns the current connection name"
return self.execute_command('CLIENT GETNAME')
def client_id(self):
"Returns the current connection id"
return self.execute_command('CLIENT ID')
def client_setname(self, name):
"Sets the current connection name"
return self.execute_command('CLIENT SETNAME', name)
def client_unblock(self, client_id, error=False):
"""
Unblocks a connection by its client id.
If ``error`` is True, unblocks the client with a special error message.
If ``error`` is False (default), the client is unblocked using the
regular timeout mechanism.
"""
args = ['CLIENT UNBLOCK', int(client_id)]
if error:
args.append(b'ERROR')
return self.execute_command(*args)
def client_pause(self, timeout):
"""
Suspend all the Redis clients for the specified amount of time
:param timeout: milliseconds to pause clients
"""
if not isinstance(timeout, int):
raise DataError("CLIENT PAUSE timeout must be an integer")
return self.execute_command('CLIENT PAUSE', str(timeout))
def copy(self, src_key, dest_key):
"Copy the source key to the destination key"
return self.execute_command('COPY', src_key, dest_key)
def readwrite(self):
"Disables read queries for a connection to a Redis Cluster slave node"
return self.execute_command('READWRITE')
def readonly(self):
"Enables read queries for a connection to a Redis Cluster replica node"
return self.execute_command('READONLY')
def config_get(self, pattern="*"):
"Return a dictionary of configuration based on the ``pattern``"
return self.execute_command('CONFIG GET', pattern)
def config_set(self, name, value):
"Set config item ``name`` with ``value``"
return self.execute_command('CONFIG SET', name, value)
def config_resetstat(self):
"Reset runtime statistics"
return self.execute_command('CONFIG RESETSTAT')
def config_rewrite(self):
"Rewrite config file with the minimal change to reflect running config"
return self.execute_command('CONFIG REWRITE')
def dbsize(self):
"Returns the number of keys in the current database"
return self.execute_command('DBSIZE')
def debug_object(self, key):
"Returns version specific meta information about a given key"
return self.execute_command('DEBUG OBJECT', key)
def echo(self, value):
"Echo the string back from the server"
return self.execute_command('ECHO', value)
def flushall(self, asynchronous=False):
"""
Delete all keys in all databases on the current host.
``asynchronous`` indicates whether the operation is
executed asynchronously by the server.
"""
args = []
if asynchronous:
args.append(b'ASYNC')
return self.execute_command('FLUSHALL', *args)
def flushdb(self, asynchronous=False):
"""
Delete all keys in the current database.
``asynchronous`` indicates whether the operation is
executed asynchronously by the server.
"""
args = []
if asynchronous:
args.append(b'ASYNC')
return self.execute_command('FLUSHDB', *args)
def swapdb(self, first, second):
"Swap two databases"
return self.execute_command('SWAPDB', first, second)
def info(self, section=None):
"""
Returns a dictionary containing information about the Redis server
The ``section`` option can be used to select a specific section
of information
The section option is not supported by older versions of Redis Server,
and will generate a ResponseError
"""
if section is None:
return self.execute_command('INFO')
else:
return self.execute_command('INFO', section)
def lastsave(self):
"""
Return a Python datetime object representing the last time the
Redis database was saved to disk
"""
return self.execute_command('LASTSAVE')
def migrate(self, host, port, keys, destination_db, timeout,
copy=False, replace=False, auth=None):
"""
Migrate 1 or more keys from the current Redis server to a different
server specified by the ``host``, ``port`` and ``destination_db``.
The ``timeout``, specified in milliseconds, indicates the maximum
time the connection between the two servers can be idle before the
command is interrupted.
If ``copy`` is True, the specified ``keys`` are NOT deleted from
the source server.
If ``replace`` is True, this operation will overwrite the keys
on the destination server if they exist.
If ``auth`` is specified, authenticate to the destination server with
the password provided.
"""
keys = list_or_args(keys, [])
if not keys:
raise DataError('MIGRATE requires at least one key')
pieces = []
if copy:
pieces.append(b'COPY')
if replace:
pieces.append(b'REPLACE')
if auth:
pieces.append(b'AUTH')
pieces.append(auth)
pieces.append(b'KEYS')
pieces.extend(keys)
return self.execute_command('MIGRATE', host, port, '', destination_db,
timeout, *pieces)
def object(self, infotype, key):
"Return the encoding, idletime, or refcount about the key"
return self.execute_command('OBJECT', infotype, key, infotype=infotype)
def memory_stats(self):
"Return a dictionary of memory stats"
return self.execute_command('MEMORY STATS')
def memory_usage(self, key, samples=None):
"""
Return the total memory usage for key, its value and associated
administrative overheads.
For nested data structures, ``samples`` is the number of elements to
sample. If left unspecified, the server's default is 5. Use 0 to sample
all elements.
"""
args = []
if isinstance(samples, int):
args.extend([b'SAMPLES', samples])
return self.execute_command('MEMORY USAGE', key, *args)
def memory_purge(self):
"Attempts to purge dirty pages for reclamation by allocator"
return self.execute_command('MEMORY PURGE')
def ping(self):
"Ping the Redis server"
return self.execute_command('PING')
def save(self):
"""
Tell the Redis server to save its data to disk,
blocking until the save is complete
"""
return self.execute_command('SAVE')
def sentinel(self, *args):
"Redis Sentinel's SENTINEL command."
warnings.warn(
DeprecationWarning('Use the individual sentinel_* methods'))
def sentinel_get_master_addr_by_name(self, service_name):
"Returns a (host, port) pair for the given ``service_name``"
return self.execute_command('SENTINEL GET-MASTER-ADDR-BY-NAME',
service_name)
def sentinel_master(self, service_name):
"Returns a dictionary containing the specified masters state."
return self.execute_command('SENTINEL MASTER', service_name)
def sentinel_masters(self):
"Returns a list of dictionaries containing each master's state."
return self.execute_command('SENTINEL MASTERS')
def sentinel_monitor(self, name, ip, port, quorum):
"Add a new master to Sentinel to be monitored"
return self.execute_command('SENTINEL MONITOR', name, ip, port, quorum)
def sentinel_remove(self, name):
"Remove a master from Sentinel's monitoring"
return self.execute_command('SENTINEL REMOVE', name)
def sentinel_sentinels(self, service_name):
"Returns a list of sentinels for ``service_name``"
return self.execute_command('SENTINEL SENTINELS', service_name)
def sentinel_set(self, name, option, value):
"Set Sentinel monitoring parameters for a given master"
return self.execute_command('SENTINEL SET', name, option, value)
def sentinel_slaves(self, service_name):
"Returns a list of slaves for ``service_name``"
return self.execute_command('SENTINEL SLAVES', service_name)
def shutdown(self, save=False, nosave=False):
"""Shutdown the Redis server. If Redis has persistence configured,
data will be flushed before shutdown. If the "save" option is set,
a data flush will be attempted even if there is no persistence
configured. If the "nosave" option is set, no data flush will be
attempted. The "save" and "nosave" options cannot both be set.
"""
if save and nosave:
raise DataError('SHUTDOWN save and nosave cannot both be set')
args = ['SHUTDOWN']
if save:
args.append('SAVE')
if nosave:
args.append('NOSAVE')
try:
self.execute_command(*args)
except ConnectionError:
# a ConnectionError here is expected
return
raise RedisError("SHUTDOWN seems to have failed.")
def slaveof(self, host=None, port=None):
"""
Set the server to be a replicated slave of the instance identified
by the ``host`` and ``port``. If called without arguments, the
instance is promoted to a master instead.
"""
if host is None and port is None:
return self.execute_command('SLAVEOF', b'NO', b'ONE')
return self.execute_command('SLAVEOF', host, port)
def slowlog_get(self, num=None):
"""
Get the entries from the slowlog. If ``num`` is specified, get the
most recent ``num`` items.
"""
args = ['SLOWLOG GET']
if num is not None:
args.append(num)
decode_responses = self.connection_pool.connection_kwargs.get(
'decode_responses', False)
return self.execute_command(*args, decode_responses=decode_responses)
def slowlog_len(self):
"Get the number of items in the slowlog"
return self.execute_command('SLOWLOG LEN')
def slowlog_reset(self):
"Remove all items in the slowlog"
return self.execute_command('SLOWLOG RESET')
def time(self):
"""
Returns the server time as a 2-item tuple of ints:
(seconds since epoch, microseconds into this second).
"""
return self.execute_command('TIME')
def wait(self, num_replicas, timeout):
"""
Block the current client until all previous write commands have been
processed by at least ``num_replicas`` replicas, or until ``timeout``
milliseconds have elapsed. Returns the number of replicas that
processed the commands.
"""
return self.execute_command('WAIT', num_replicas, timeout)
# BASIC KEY COMMANDS
def append(self, key, value):
"""
Appends the string ``value`` to the value at ``key``. If ``key``
doesn't already exist, create it with a value of ``value``.
Returns the new length of the value at ``key``.
"""
return self.execute_command('APPEND', key, value)
def bitcount(self, key, start=None, end=None):
"""
Returns the count of set bits in the value of ``key``. Optional
``start`` and ``end`` parameters indicate which bytes to consider
"""
params = [key]
if start is not None and end is not None:
params.append(start)
params.append(end)
elif (start is not None and end is None) or \
(end is not None and start is None):
raise DataError("Both start and end must be specified")
return self.execute_command('BITCOUNT', *params)
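# The inclusive byte-range semantics described above can be modeled in
# plain Python. This is an illustrative sketch only: ``bitcount_model``
# is a hypothetical helper, not part of this client, and talks to no
# server.

```python
def bitcount_model(value: bytes, start=None, end=None) -> int:
    """Count set bits, mimicking BITCOUNT's inclusive byte range."""
    if (start is None) != (end is None):
        raise ValueError("Both start and end must be specified")
    if start is not None:
        stop = None if end == -1 else end + 1
        value = value[start:stop]
    return sum(bin(byte).count('1') for byte in value)

# Matches the examples in the Redis documentation for the value "foobar".
print(bitcount_model(b'foobar'))        # 26
print(bitcount_model(b'foobar', 0, 0))  # 4
print(bitcount_model(b'foobar', 1, 1))  # 6
```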
def bitfield(self, key, default_overflow=None):
"""
Return a BitFieldOperation instance to conveniently construct one or
more bitfield operations on ``key``.
"""
return BitFieldOperation(self, key, default_overflow=default_overflow)
def bitop(self, operation, dest, *keys):
"""
Perform a bitwise operation using ``operation`` between ``keys`` and
store the result in ``dest``.
"""
return self.execute_command('BITOP', operation, dest, *keys)
def bitpos(self, key, bit, start=None, end=None):
"""
Return the position of the first bit set to 1 or 0 in a string.
``start`` and ``end`` define the search range. The range is interpreted
as a range of bytes and not a range of bits, so start=0 and end=2
means to look at the first three bytes.
"""
if bit not in (0, 1):
raise DataError('bit must be 0 or 1')
params = [key, bit]
if start is not None:
    params.append(start)
if start is not None and end is not None:
params.append(end)
elif start is None and end is not None:
raise DataError("start argument is not set, "
"when end is specified")
return self.execute_command('BITPOS', *params)
def decr(self, name, amount=1):
"""
Decrements the value of ``key`` by ``amount``. If no key exists,
the value will be initialized as 0 - ``amount``
"""
# DECR is implemented on the server as DECRBY, so ``decr`` simply
# delegates to ``decrby``.
return self.decrby(name, amount)
def decrby(self, name, amount=1):
"""
Decrements the value of ``key`` by ``amount``. If no key exists,
the value will be initialized as 0 - ``amount``
"""
return self.execute_command('DECRBY', name, amount)
def delete(self, *names):
"Delete one or more keys specified by ``names``"
return self.execute_command('DEL', *names)
def __delitem__(self, name):
self.delete(name)
def dump(self, name):
"""
Return a serialized version of the value stored at the specified key.
If key does not exist a nil bulk reply is returned.
"""
return self.execute_command('DUMP', name)
def exists(self, *names):
"Returns the number of ``names`` that exist"
return self.execute_command('EXISTS', *names)
__contains__ = exists
def expire(self, name, time):
"""
Set an expire flag on key ``name`` for ``time`` seconds. ``time``
can be represented by an integer or a Python timedelta object.
"""
if isinstance(time, datetime.timedelta):
time = int(time.total_seconds())
return self.execute_command('EXPIRE', name, time)
def expireat(self, name, when):
"""
Set an expire flag on key ``name``. ``when`` can be represented
as an integer indicating unix time or a Python datetime object.
"""
if isinstance(when, datetime.datetime):
when = int(mod_time.mktime(when.timetuple()))
return self.execute_command('EXPIREAT', name, when)
def get(self, name):
"""
Return the value at key ``name``, or None if the key doesn't exist
"""
return self.execute_command('GET', name)
def __getitem__(self, name):
"""
Return the value at key ``name``, raises a KeyError if the key
doesn't exist.
"""
value = self.get(name)
if value is not None:
return value
raise KeyError(name)
def getbit(self, name, offset):
"Returns a boolean indicating the value of ``offset`` in ``name``"
return self.execute_command('GETBIT', name, offset)
def getrange(self, key, start, end):
"""
Returns the substring of the string value stored at ``key``,
determined by the offsets ``start`` and ``end`` (both are inclusive)
"""
return self.execute_command('GETRANGE', key, start, end)
def getset(self, name, value):
"""
Sets the value at key ``name`` to ``value``
and returns the old value at key ``name`` atomically.
"""
return self.execute_command('GETSET', name, value)
def incr(self, name, amount=1):
"""
Increments the value of ``key`` by ``amount``. If no key exists,
the value will be initialized as ``amount``
"""
return self.incrby(name, amount)
def incrby(self, name, amount=1):
"""
Increments the value of ``key`` by ``amount``. If no key exists,
the value will be initialized as ``amount``
"""
# INCR is implemented on the server as INCRBY, so ``incr`` simply
# delegates to this method.
return self.execute_command('INCRBY', name, amount)
def incrbyfloat(self, name, amount=1.0):
"""
Increments the value at key ``name`` by floating ``amount``.
If no key exists, the value will be initialized as ``amount``
"""
return self.execute_command('INCRBYFLOAT', name, amount)
def keys(self, pattern='*'):
"Returns a list of keys matching ``pattern``"
return self.execute_command('KEYS', pattern)
def mget(self, keys, *args):
"""
Returns a list of values ordered identically to ``keys``
"""
args = list_or_args(keys, args)
options = {}
if not args:
options[EMPTY_RESPONSE] = []
return self.execute_command('MGET', *args, **options)
def mset(self, mapping):
"""
Sets key/values based on a mapping. Mapping is a dictionary of
key/value pairs. Both keys and values should be strings or types that
can be cast to a string via str().
"""
items = []
for pair in mapping.items():
items.extend(pair)
return self.execute_command('MSET', *items)
def msetnx(self, mapping):
"""
Sets key/values based on a mapping if none of the keys are already set.
Mapping is a dictionary of key/value pairs. Both keys and values
should be strings or types that can be cast to a string via str().
Returns a boolean indicating if the operation was successful.
"""
items = []
for pair in mapping.items():
items.extend(pair)
return self.execute_command('MSETNX', *items)
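# Both mset() and msetnx() flatten their mapping into interleaved
# key/value arguments before sending the command. The flattening step,
# shown in isolation (dicts preserve insertion order in Python 3.7+):

```python
mapping = {'a': '1', 'b': '2'}
items = []
for pair in mapping.items():
    # Each (key, value) tuple contributes two consecutive arguments.
    items.extend(pair)
print(items)  # ['a', '1', 'b', '2']
```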
def move(self, name, db):
"Moves the key ``name`` to a different Redis database ``db``"
return self.execute_command('MOVE', name, db)
def persist(self, name):
"Removes an expiration on ``name``"
return self.execute_command('PERSIST', name)
def pexpire(self, name, time):
"""
Set an expire flag on key ``name`` for ``time`` milliseconds.
``time`` can be represented by an integer or a Python timedelta
object.
"""
if isinstance(time, datetime.timedelta):
time = int(time.total_seconds() * 1000)
return self.execute_command('PEXPIRE', name, time)
def pexpireat(self, name, when):
"""
Set an expire flag on key ``name``. ``when`` can be represented
as an integer representing unix time in milliseconds (unix time * 1000)
or a Python datetime object.
"""
if isinstance(when, datetime.datetime):
ms = int(when.microsecond / 1000)
when = int(mod_time.mktime(when.timetuple())) * 1000 + ms
return self.execute_command('PEXPIREAT', name, when)
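# The datetime-to-milliseconds conversion used by pexpireat() above can
# be seen standalone. Note that ``mktime`` interprets the datetime as
# local time, exactly as the method does; ``to_unix_ms`` is a
# hypothetical helper for illustration only.

```python
import datetime
import time as mod_time

def to_unix_ms(when: datetime.datetime) -> int:
    # Whole seconds via mktime (local time), plus the millisecond part
    # recovered from the microsecond field.
    ms = int(when.microsecond / 1000)
    return int(mod_time.mktime(when.timetuple())) * 1000 + ms

a = datetime.datetime(2021, 6, 1, 12, 0, 0, 500000)
b = datetime.datetime(2021, 6, 1, 12, 0, 1, 500000)
print(to_unix_ms(b) - to_unix_ms(a))  # 1000 (the two are one second apart)
```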
def psetex(self, name, time_ms, value):
"""
Set the value of key ``name`` to ``value`` that expires in ``time_ms``
milliseconds. ``time_ms`` can be represented by an integer or a Python
timedelta object
"""
if isinstance(time_ms, datetime.timedelta):
time_ms = int(time_ms.total_seconds() * 1000)
return self.execute_command('PSETEX', name, time_ms, value)
def pttl(self, name):
"Returns the number of milliseconds until the key ``name`` will expire"
return self.execute_command('PTTL', name)
def randomkey(self):
"Returns the name of a random key"
return self.execute_command('RANDOMKEY')
def rename(self, src, dst):
"""
Rename key ``src`` to ``dst``
"""
return self.execute_command('RENAME', src, dst)
def renamenx(self, src, dst):
"Rename key ``src`` to ``dst`` if ``dst`` doesn't already exist"
return self.execute_command('RENAMENX', src, dst)
def restore(self, name, ttl, value, replace=False, absttl=False):
"""
Create a key using the provided serialized value, previously obtained
using DUMP.
``replace`` allows an existing key on ``name`` to be overridden. If
it's not specified an error is raised on collision.
``absttl`` if True, specified ``ttl`` should represent an absolute Unix
timestamp in milliseconds in which the key will expire. (Redis 5.0 or
greater).
"""
params = [name, ttl, value]
if replace:
params.append('REPLACE')
if absttl:
params.append('ABSTTL')
return self.execute_command('RESTORE', *params)
def set(self, name, value,
ex=None, px=None, nx=False, xx=False, keepttl=False):
"""
Set the value at key ``name`` to ``value``
``ex`` sets an expire flag on key ``name`` for ``ex`` seconds.
``px`` sets an expire flag on key ``name`` for ``px`` milliseconds.
``nx`` if set to True, set the value at key ``name`` to ``value`` only
if it does not exist.
``xx`` if set to True, set the value at key ``name`` to ``value`` only
if it already exists.
``keepttl`` if True, retain the time to live associated with the key.
(Available since Redis 6.0)
"""
pieces = [name, value]
if ex is not None:
pieces.append('EX')
if isinstance(ex, datetime.timedelta):
ex = int(ex.total_seconds())
pieces.append(ex)
if px is not None:
pieces.append('PX')
if isinstance(px, datetime.timedelta):
px = int(px.total_seconds() * 1000)
pieces.append(px)
if nx:
pieces.append('NX')
if xx:
pieces.append('XX')
if keepttl:
pieces.append('KEEPTTL')
return self.execute_command('SET', *pieces)
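# How set() assembles the SET command arguments, including the
# timedelta-to-seconds/milliseconds conversions, can be sketched as a
# free function. ``build_set_args`` is illustrative only and not part of
# this client.

```python
import datetime

def build_set_args(name, value, ex=None, px=None, nx=False, xx=False,
                   keepttl=False):
    # Mirrors the piece-building logic of set().
    pieces = [name, value]
    if ex is not None:
        if isinstance(ex, datetime.timedelta):
            ex = int(ex.total_seconds())
        pieces += ['EX', ex]
    if px is not None:
        if isinstance(px, datetime.timedelta):
            px = int(px.total_seconds() * 1000)
        pieces += ['PX', px]
    if nx:
        pieces.append('NX')
    if xx:
        pieces.append('XX')
    if keepttl:
        pieces.append('KEEPTTL')
    return pieces

print(build_set_args('k', 'v', ex=datetime.timedelta(minutes=1), nx=True))
# ['k', 'v', 'EX', 60, 'NX']
```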
def __setitem__(self, name, value):
self.set(name, value)
def setbit(self, name, offset, value):
"""
Flag the ``offset`` in ``name`` as ``value``. Returns a boolean
indicating the previous value of ``offset``.
"""
value = 1 if value else 0
return self.execute_command('SETBIT', name, offset, value)
def setex(self, name, time, value):
"""
Set the value of key ``name`` to ``value`` that expires in ``time``
seconds. ``time`` can be represented by an integer or a Python
timedelta object.
"""
if isinstance(time, datetime.timedelta):
time = int(time.total_seconds())
return self.execute_command('SETEX', name, time, value)
def setnx(self, name, value):
"Set the value of key ``name`` to ``value`` if key doesn't exist"
return self.execute_command('SETNX', name, value)
def setrange(self, name, offset, value):
"""
Overwrite bytes in the value of ``name`` starting at ``offset`` with
``value``. If ``offset`` plus the length of ``value`` exceeds the
length of the original value, the new value will be larger than before.
If ``offset`` exceeds the length of the original value, null bytes
will be used to pad between the end of the previous value and the start
of what's being injected.
Returns the length of the new string.
"""
return self.execute_command('SETRANGE', name, offset, value)
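# The null-byte padding behavior documented above can be modeled with a
# bytearray. ``setrange_model`` is a hypothetical, server-free sketch of
# the semantics, not part of this client.

```python
def setrange_model(original: bytes, offset: int, value: bytes) -> bytes:
    # SETRANGE pads with null bytes when offset is past the end.
    buf = bytearray(original)
    if offset > len(buf):
        buf.extend(b'\x00' * (offset - len(buf)))
    buf[offset:offset + len(value)] = value
    return bytes(buf)

print(setrange_model(b'Hello', 10, b'World'))
# b'Hello\x00\x00\x00\x00\x00World' (length 15, the new string length)
print(setrange_model(b'Hello World', 6, b'Redis'))
# b'Hello Redis' (in-place overwrite, no padding needed)
```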
def strlen(self, name):
"Return the number of bytes stored in the value of ``name``"
return self.execute_command('STRLEN', name)
def substr(self, name, start, end=-1):
"""
Return a substring of the string at key ``name``. ``start`` and ``end``
are 0-based integers specifying the portion of the string to return.
"""
return self.execute_command('SUBSTR', name, start, end)
def touch(self, *args):
"""
Alters the last access time of a key(s) ``*args``. A key is ignored
if it does not exist.
"""
return self.execute_command('TOUCH', *args)
def ttl(self, name):
"Returns the number of seconds until the key ``name`` will expire"
return self.execute_command('TTL', name)
def type(self, name):
"Returns the type of key ``name``"
return self.execute_command('TYPE', name)
def watch(self, *names):
"""
Watches the values at keys ``names`` so that a subsequent MULTI/EXEC
transaction aborts if any of them change.
"""
warnings.warn(DeprecationWarning('Call WATCH from a Pipeline object'))
def unwatch(self):
"""
Flushes all previously watched keys.
"""
warnings.warn(
DeprecationWarning('Call UNWATCH from a Pipeline object'))
def unlink(self, *names):
"Unlink one or more keys specified by ``names``"
return self.execute_command('UNLINK', *names)
# LIST COMMANDS
def blpop(self, keys, timeout=0):
"""
LPOP a value off of the first non-empty list
named in the ``keys`` list.
If none of the lists in ``keys`` has a value to LPOP, then block
for ``timeout`` seconds, or until a value gets pushed on to one
of the lists.
If timeout is 0, then block indefinitely.
"""
if timeout is None:
timeout = 0
keys = list_or_args(keys, None)
keys.append(timeout)
return self.execute_command('BLPOP', *keys)
def brpop(self, keys, timeout=0):
"""
RPOP a value off of the first non-empty list
named in the ``keys`` list.
If none of the lists in ``keys`` has a value to RPOP, then block
for ``timeout`` seconds, or until a value gets pushed on to one
of the lists.
If timeout is 0, then block indefinitely.
"""
if timeout is None:
timeout = 0
keys = list_or_args(keys, None)
keys.append(timeout)
return self.execute_command('BRPOP', *keys)
def brpoplpush(self, src, dst, timeout=0):
"""
Pop a value off the tail of ``src``, push it on the head of ``dst``
and then return it.
This command blocks until a value is in ``src`` or until ``timeout``
seconds elapse, whichever is first. A ``timeout`` value of 0 blocks
forever.
"""
if timeout is None:
timeout = 0
return self.execute_command('BRPOPLPUSH', src, dst, timeout)
def lindex(self, name, index):
"""
Return the item from list ``name`` at position ``index``
Negative indexes are supported and will return an item at the
end of the list
"""
return self.execute_command('LINDEX', name, index)
def linsert(self, name, where, refvalue, value):
"""
Insert ``value`` in list ``name`` either immediately before or after
[``where``] ``refvalue``
Returns the new length of the list on success or -1 if ``refvalue``
is not in the list.
"""
return self.execute_command('LINSERT', name, where, refvalue, value)
def llen(self, name):
"Return the length of the list ``name``"
return self.execute_command('LLEN', name)
def lpop(self, name):
"Remove and return the first item of the list ``name``"
return self.execute_command('LPOP', name)
def lpush(self, name, *values):
"Push ``values`` onto the head of the list ``name``"
return self.execute_command('LPUSH', name, *values)
def lpushx(self, name, value):
"Push ``value`` onto the head of the list ``name`` if ``name`` exists"
return self.execute_command('LPUSHX', name, value)
def lrange(self, name, start, end):
"""
Return a slice of the list ``name`` between
position ``start`` and ``end``
``start`` and ``end`` can be negative numbers just like
Python slicing notation
"""
return self.execute_command('LRANGE', name, start, end)
def lrem(self, name, count, value):
"""
Remove the first ``count`` occurrences of elements equal to ``value``
from the list stored at ``name``.
The count argument influences the operation in the following ways:
count > 0: Remove elements equal to value moving from head to tail.
count < 0: Remove elements equal to value moving from tail to head.
count = 0: Remove all elements equal to value.
"""
return self.execute_command('LREM', name, count, value)
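# The three count behaviors listed in the docstring can be mimicked on a
# plain Python list. ``lrem_model`` is an illustrative sketch of the
# semantics only, not part of this client.

```python
def lrem_model(items, count, value):
    """Mimic LREM's count semantics on a plain Python list."""
    items = list(items)
    if count == 0:
        # count = 0: remove every occurrence.
        return [x for x in items if x != value]
    # count > 0 scans head-to-tail; count < 0 scans tail-to-head.
    step = items if count > 0 else items[::-1]
    removed, keep = 0, []
    for x in step:
        if x == value and removed < abs(count):
            removed += 1
        else:
            keep.append(x)
    return keep if count > 0 else keep[::-1]

print(lrem_model(['a', 'b', 'a', 'c', 'a'], 2, 'a'))   # ['b', 'c', 'a']
print(lrem_model(['a', 'b', 'a', 'c', 'a'], -1, 'a'))  # ['a', 'b', 'a', 'c']
print(lrem_model(['a', 'b', 'a', 'c', 'a'], 0, 'a'))   # ['b', 'c']
```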
def lset(self, name, index, value):
"Set ``position`` of list ``name`` to ``value``"
return self.execute_command('LSET', name, index, value)
def ltrim(self, name, start, end):
"""
Trim the list ``name``, removing all values not within the slice
between ``start`` and ``end``
``start`` and ``end`` can be negative numbers just like
Python slicing notation
"""
return self.execute_command('LTRIM', name, start, end)
def rpop(self, name):
"Remove and return the last item of the list ``name``"
return self.execute_command('RPOP', name)
def rpoplpush(self, src, dst):
"""
RPOP a value off of the ``src`` list and atomically LPUSH it
on to the ``dst`` list. Returns the value.
"""
return self.execute_command('RPOPLPUSH', src, dst)
def rpush(self, name, *values):
"Push ``values`` onto the tail of the list ``name``"
return self.execute_command('RPUSH', name, *values)
def rpushx(self, name, value):
"Push ``value`` onto the tail of the list ``name`` if ``name`` exists"
return self.execute_command('RPUSHX', name, value)
def lpos(self, name, value, rank=None, count=None, maxlen=None):
"""
Get position of ``value`` within the list ``name``
If specified, ``rank`` indicates the "rank" of the first element to
return in case there are multiple copies of ``value`` in the list.
By default, LPOS returns the position of the first occurrence of
``value`` in the list. When ``rank`` is 2, LPOS returns the position of
the second ``value`` in the list. If ``rank`` is negative, LPOS
searches the list in reverse. For example, -1 would return the
position of the last occurrence of ``value`` and -2 would return the
position of the next to last occurrence of ``value``.
If specified, ``count`` indicates that LPOS should return a list of
up to ``count`` positions. A ``count`` of 2 would return a list of
up to 2 positions. A ``count`` of 0 returns a list of all positions
matching ``value``. When ``count`` is specified but ``value``
does not exist in the list, an empty list is returned.
If specified, ``maxlen`` indicates the maximum number of list
elements to scan. A ``maxlen`` of 1000 will only return the
position(s) of items within the first 1000 entries in the list.
A ``maxlen`` of 0 (the default) will scan the entire list.
"""
pieces = [name, value]
if rank is not None:
pieces.extend(['RANK', rank])
if count is not None:
pieces.extend(['COUNT', count])
if maxlen is not None:
pieces.extend(['MAXLEN', maxlen])
return self.execute_command('LPOS', *pieces)
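# The rank/count semantics described in the docstring can be
# approximated on a Python list. ``lpos_model`` is a hypothetical,
# server-free sketch (the ``maxlen`` option is omitted for brevity).

```python
def lpos_model(items, value, rank=None, count=None):
    """Approximate LPOS's rank/count behavior on a Python list."""
    rank = 1 if rank is None else rank
    positions = [i for i, x in enumerate(items) if x == value]
    if rank < 0:
        # Negative rank searches from the tail of the list.
        positions = positions[::-1]
        rank = -rank
    positions = positions[rank - 1:]
    if count is None:
        return positions[0] if positions else None
    # count = 0 returns every matching position.
    return positions if count == 0 else positions[:count]

items = ['a', 'b', 'c', 'b', 'b']
print(lpos_model(items, 'b'))            # 1
print(lpos_model(items, 'b', rank=2))    # 3
print(lpos_model(items, 'b', rank=-1))   # 4
print(lpos_model(items, 'b', count=0))   # [1, 3, 4]
```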
def sort(self, name, start=None, num=None, by=None, get=None,
desc=False, alpha=False, store=None, groups=False):
"""
Sort and return the list, set or sorted set at ``name``.
``start`` and ``num`` allow for paging through the sorted data
``by`` allows using an external key to weight and sort the items.
Use an "*" to indicate where in the key the item value is located
``get`` allows for returning items from external keys rather than the
sorted data itself. Use an "*" to indicate where in the key
the item value is located
``desc`` allows for reversing the sort
``alpha`` allows for sorting lexicographically rather than numerically
``store`` allows for storing the result of the sort into
the key ``store``
``groups`` if set to True and if ``get`` contains at least two
elements, sort will return a list of tuples, each containing the
values fetched from the arguments to ``get``.
"""
if (start is not None and num is None) or \
(num is not None and start is None):
raise DataError("``start`` and ``num`` must both be specified")
pieces = [name]
if by is not None:
pieces.append(b'BY')
pieces.append(by)
if start is not None and num is not None:
pieces.append(b'LIMIT')
pieces.append(start)
pieces.append(num)
if get is not None:
# If get is a string assume we want to get a single value.
# Otherwise assume it's an iterable and we want to get multiple
# values. We can't just iterate blindly because strings are
# iterable.
if isinstance(get, (bytes, str)):
pieces.append(b'GET')
pieces.append(get)
else:
for g in get:
pieces.append(b'GET')
pieces.append(g)
if desc:
pieces.append(b'DESC')
if alpha:
pieces.append(b'ALPHA')
if store is not None:
pieces.append(b'STORE')
pieces.append(store)
if groups:
if not get or isinstance(get, (bytes, str)) or len(get) < 2:
raise DataError('when using "groups" the "get" argument '
'must be specified and contain at least '
'two keys')
options = {'groups': len(get) if groups else None}
return self.execute_command('SORT', *pieces, **options)
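# The argument ordering sort() produces (BY, then LIMIT, then GET pairs,
# then the flags) can be sketched as a free function. ``build_sort_args``
# is illustrative only and not part of this client.

```python
def build_sort_args(name, start=None, num=None, by=None, get=None,
                    desc=False, alpha=False, store=None):
    # Mirrors the piece-building logic of sort().
    if (start is None) != (num is None):
        raise ValueError("start and num must both be specified")
    pieces = [name]
    if by is not None:
        pieces += ['BY', by]
    if start is not None:
        pieces += ['LIMIT', start, num]
    if get is not None:
        # A bare string is a single GET pattern; otherwise iterate.
        gets = [get] if isinstance(get, (bytes, str)) else list(get)
        for g in gets:
            pieces += ['GET', g]
    if desc:
        pieces.append('DESC')
    if alpha:
        pieces.append('ALPHA')
    if store is not None:
        pieces += ['STORE', store]
    return pieces

print(build_sort_args('mylist', by='weight_*', get=['#', 'data_*'],
                      start=0, num=10))
# ['mylist', 'BY', 'weight_*', 'LIMIT', 0, 10, 'GET', '#', 'GET', 'data_*']
```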
# SCAN COMMANDS
def scan(self, cursor=0, match=None, count=None, _type=None):
"""
Incrementally return lists of key names. Also return a cursor
indicating the scan position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of keys to
return per batch.
``_type`` filters the returned values by a particular Redis type.
Stock Redis instances allow for the following types:
HASH, LIST, SET, STREAM, STRING, ZSET
Additionally, Redis modules can expose other types as well.
"""
pieces = [cursor]
if match is not None:
pieces.extend([b'MATCH', match])
if count is not None:
pieces.extend([b'COUNT', count])
if _type is not None:
pieces.extend([b'TYPE', _type])
return self.execute_command('SCAN', *pieces)
def scan_iter(self, match=None, count=None, _type=None):
"""
Make an iterator using the SCAN command so that the client doesn't
need to remember the cursor position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of keys to
return per batch.
``_type`` filters the returned values by a particular Redis type.
Stock Redis instances allow for the following types:
HASH, LIST, SET, STREAM, STRING, ZSET
Additionally, Redis modules can expose other types as well.
"""
cursor = '0'
while cursor != 0:
cursor, data = self.scan(cursor=cursor, match=match,
count=count, _type=_type)
yield from data
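# The cursor loop above starts with the string '0' (so the loop body
# runs at least once) and stops when the server replies with the integer
# cursor 0. A server-free simulation against canned reply pages
# (``scan_iter_model`` and ``pages`` are hypothetical, for illustration):

```python
def scan_iter_model(pages):
    # ``pages`` maps a cursor to the (next_cursor, data) reply for it.
    cursor = '0'
    while cursor != 0:
        cursor, data = pages[str(cursor)]
        yield from data

# Two pages; the final reply carries cursor 0, ending the iteration.
pages = {'0': (17, ['k1', 'k2']), '17': (0, ['k3'])}
print(list(scan_iter_model(pages)))  # ['k1', 'k2', 'k3']
```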
def sscan(self, name, cursor=0, match=None, count=None):
"""
Incrementally return lists of elements in a set. Also return a cursor
indicating the scan position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of elements to
return per batch
"""
pieces = [name, cursor]
if match is not None:
pieces.extend([b'MATCH', match])
if count is not None:
pieces.extend([b'COUNT', count])
return self.execute_command('SSCAN', *pieces)
def sscan_iter(self, name, match=None, count=None):
"""
Make an iterator using the SSCAN command so that the client doesn't
need to remember the cursor position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of elements to
return per batch
"""
cursor = '0'
while cursor != 0:
cursor, data = self.sscan(name, cursor=cursor,
match=match, count=count)
yield from data
def hscan(self, name, cursor=0, match=None, count=None):
"""
Incrementally return key/value slices in a hash. Also return a cursor
indicating the scan position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of fields to
return per batch
"""
pieces = [name, cursor]
if match is not None:
pieces.extend([b'MATCH', match])
if count is not None:
pieces.extend([b'COUNT', count])
return self.execute_command('HSCAN', *pieces)
def hscan_iter(self, name, match=None, count=None):
"""
Make an iterator using the HSCAN command so that the client doesn't
need to remember the cursor position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of fields to
return per batch
"""
cursor = '0'
while cursor != 0:
cursor, data = self.hscan(name, cursor=cursor,
match=match, count=count)
yield from data.items()
def zscan(self, name, cursor=0, match=None, count=None,
score_cast_func=float):
"""
Incrementally return lists of elements in a sorted set. Also return a
cursor indicating the scan position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of elements to
return per batch
``score_cast_func`` a callable used to cast the score return value
"""
pieces = [name, cursor]
if match is not None:
pieces.extend([b'MATCH', match])
if count is not None:
pieces.extend([b'COUNT', count])
options = {'score_cast_func': score_cast_func}
return self.execute_command('ZSCAN', *pieces, **options)
def zscan_iter(self, name, match=None, count=None,
score_cast_func=float):
"""
Make an iterator using the ZSCAN command so that the client doesn't
need to remember the cursor position.
``match`` allows for filtering the keys by pattern
``count`` provides a hint to Redis about the number of elements to
return per batch
``score_cast_func`` a callable used to cast the score return value
"""
cursor = '0'
while cursor != 0:
cursor, data = self.zscan(name, cursor=cursor, match=match,
count=count,
score_cast_func=score_cast_func)
yield from data
# SET COMMANDS
def sadd(self, name, *values):
"Add ``value(s)`` to set ``name``"
return self.execute_command('SADD', name, *values)
def scard(self, name):
"Return the number of elements in set ``name``"
return self.execute_command('SCARD', name)
def sdiff(self, keys, *args):
"Return the difference of sets specified by ``keys``"
args = list_or_args(keys, args)
return self.execute_command('SDIFF', *args)
def sdiffstore(self, dest, keys, *args):
"""
Store the difference of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
"""
args = list_or_args(keys, args)
return self.execute_command('SDIFFSTORE', dest, *args)
def sinter(self, keys, *args):
"Return the intersection of sets specified by ``keys``"
args = list_or_args(keys, args)
return self.execute_command('SINTER', *args)
def sinterstore(self, dest, keys, *args):
"""
Store the intersection of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
"""
args = list_or_args(keys, args)
return self.execute_command('SINTERSTORE', dest, *args)
def sismember(self, name, value):
"Return a boolean indicating if ``value`` is a member of set ``name``"
return self.execute_command('SISMEMBER', name, value)
def smembers(self, name):
"Return all members of the set ``name``"
return self.execute_command('SMEMBERS', name)
def smove(self, src, dst, value):
"Move ``value`` from set ``src`` to set ``dst`` atomically"
return self.execute_command('SMOVE', src, dst, value)
def spop(self, name, count=None):
"Remove and return a random member of set ``name``"
args = [count] if count is not None else []
return self.execute_command('SPOP', name, *args)
def srandmember(self, name, number=None):
"""
If ``number`` is None, returns a random member of set ``name``.
If ``number`` is supplied, returns a list of ``number`` random
members of set ``name``. Note this is only available when running
Redis 2.6+.
"""
args = [number] if number is not None else []
return self.execute_command('SRANDMEMBER', name, *args)
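The ``(count is not None) and [count] or []`` pattern used by ``spop`` and ``srandmember`` is a pre-ternary idiom for building an optional trailing argument list. A sketch of the equivalent modern spelling:

```python
def optional_arg(value):
    # Build the optional trailing argument list: empty when the caller
    # did not supply a value, a single-element list otherwise.
    return [value] if value is not None else []
```

Unlike the and/or idiom, the conditional expression stays correct even for falsy values such as ``0``.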
def srem(self, name, *values):
"Remove ``values`` from set ``name``"
return self.execute_command('SREM', name, *values)
def sunion(self, keys, *args):
"Return the union of sets specified by ``keys``"
args = list_or_args(keys, args)
return self.execute_command('SUNION', *args)
def sunionstore(self, dest, keys, *args):
"""
Store the union of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
"""
args = list_or_args(keys, args)
return self.execute_command('SUNIONSTORE', dest, *args)
# STREAMS COMMANDS
def xack(self, name, groupname, *ids):
"""
Acknowledges the successful processing of one or more messages.
name: name of the stream.
groupname: name of the consumer group.
*ids: message ids to acknowledge.
"""
return self.execute_command('XACK', name, groupname, *ids)
def xadd(self, name, fields, id='*', maxlen=None, approximate=True):
"""
Add to a stream.
name: name of the stream
fields: dict of field/value pairs to insert into the stream
id: Location to insert this record. By default it is appended.
maxlen: truncate old stream members beyond this size
approximate: actual stream length may be slightly more than maxlen
"""
pieces = []
if maxlen is not None:
if not isinstance(maxlen, int) or maxlen < 1:
raise DataError('XADD maxlen must be a positive integer')
pieces.append(b'MAXLEN')
if approximate:
pieces.append(b'~')
pieces.append(str(maxlen))
pieces.append(id)
if not isinstance(fields, dict) or len(fields) == 0:
raise DataError('XADD fields must be a non-empty dict')
for pair in fields.items():
pieces.extend(pair)
return self.execute_command('XADD', name, *pieces)
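The argument assembly in ``xadd`` can be sketched in isolation. ``build_xadd_pieces`` is a hypothetical helper reproducing the same wire ordering: the optional MAXLEN clause first, then the ID, then the flattened field/value pairs.

```python
def build_xadd_pieces(fields, id='*', maxlen=None, approximate=True):
    pieces = []
    if maxlen is not None:
        if not isinstance(maxlen, int) or maxlen < 1:
            raise ValueError('maxlen must be a positive integer')
        pieces.append(b'MAXLEN')
        if approximate:
            pieces.append(b'~')   # '~' lets Redis trim lazily
        pieces.append(str(maxlen))
    pieces.append(id)
    if not isinstance(fields, dict) or len(fields) == 0:
        raise ValueError('fields must be a non-empty dict')
    for pair in fields.items():
        pieces.extend(pair)       # flatten {'k': 'v'} into 'k', 'v'
    return pieces
```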
def xclaim(self, name, groupname, consumername, min_idle_time, message_ids,
idle=None, time=None, retrycount=None, force=False,
justid=False):
"""
Changes the ownership of a pending message.
name: name of the stream.
groupname: name of the consumer group.
consumername: name of a consumer that claims the message.
min_idle_time: only claim messages that have been idle for at least
this many milliseconds
message_ids: non-empty list or tuple of message IDs to claim
idle: optional. Set the idle time (last time it was delivered) of the
message in ms
time: optional integer. This is the same as idle but instead of a
relative amount of milliseconds, it sets the idle time to a specific
Unix time (in milliseconds).
retrycount: optional integer. set the retry counter to the specified
value. This counter is incremented every time a message is delivered
again.
force: optional boolean, false by default. Creates the pending message
entry in the PEL even if certain specified IDs are not already in the
PEL assigned to a different client.
justid: optional boolean, false by default. Return just an array of IDs
of messages successfully claimed, without returning the actual message
"""
if not isinstance(min_idle_time, int) or min_idle_time < 0:
raise DataError("XCLAIM min_idle_time must be a non-negative "
"integer")
if not isinstance(message_ids, (list, tuple)) or not message_ids:
raise DataError("XCLAIM message_ids must be a non-empty list or "
"tuple of message IDs to claim")
kwargs = {}
pieces = [name, groupname, consumername, str(min_idle_time)]
pieces.extend(list(message_ids))
if idle is not None:
if not isinstance(idle, int):
raise DataError("XCLAIM idle must be an integer")
pieces.extend((b'IDLE', str(idle)))
if time is not None:
if not isinstance(time, int):
raise DataError("XCLAIM time must be an integer")
pieces.extend((b'TIME', str(time)))
if retrycount is not None:
if not isinstance(retrycount, int):
raise DataError("XCLAIM retrycount must be an integer")
pieces.extend((b'RETRYCOUNT', str(retrycount)))
if force:
if not isinstance(force, bool):
raise DataError("XCLAIM force must be a boolean")
pieces.append(b'FORCE')
if justid:
if not isinstance(justid, bool):
raise DataError("XCLAIM justid must be a boolean")
pieces.append(b'JUSTID')
kwargs['parse_justid'] = True
return self.execute_command('XCLAIM', *pieces, **kwargs)
def xdel(self, name, *ids):
"""
Deletes one or more messages from a stream.
name: name of the stream.
*ids: message ids to delete.
"""
return self.execute_command('XDEL', name, *ids)
def xgroup_create(self, name, groupname, id='$', mkstream=False):
"""
Create a new consumer group associated with a stream.
name: name of the stream.
groupname: name of the consumer group.
id: ID of the last item in the stream to consider already delivered.
mkstream: a boolean indicating whether to create the stream if it
does not already exist.
"""
pieces = ['XGROUP CREATE', name, groupname, id]
if mkstream:
pieces.append(b'MKSTREAM')
return self.execute_command(*pieces)
def xgroup_delconsumer(self, name, groupname, consumername):
"""
Remove a specific consumer from a consumer group.
Returns the number of pending messages that the consumer had before it
was deleted.
name: name of the stream.
groupname: name of the consumer group.
consumername: name of consumer to delete
"""
return self.execute_command('XGROUP DELCONSUMER', name, groupname,
consumername)
def xgroup_destroy(self, name, groupname):
"""
Destroy a consumer group.
name: name of the stream.
groupname: name of the consumer group.
"""
return self.execute_command('XGROUP DESTROY', name, groupname)
def xgroup_setid(self, name, groupname, id):
"""
Set the consumer group last delivered ID to something else.
name: name of the stream.
groupname: name of the consumer group.
id: ID of the last item in the stream to consider already delivered.
"""
return self.execute_command('XGROUP SETID', name, groupname, id)
def xinfo_consumers(self, name, groupname):
"""
Returns general information about the consumers in the group.
name: name of the stream.
groupname: name of the consumer group.
"""
return self.execute_command('XINFO CONSUMERS', name, groupname)
def xinfo_groups(self, name):
"""
Returns general information about the consumer groups of the stream.
name: name of the stream.
"""
return self.execute_command('XINFO GROUPS', name)
def xinfo_stream(self, name):
"""
Returns general information about the stream.
name: name of the stream.
"""
return self.execute_command('XINFO STREAM', name)
def xlen(self, name):
"""
Returns the number of elements in a given stream.
"""
return self.execute_command('XLEN', name)
def xpending(self, name, groupname):
"""
Returns information about pending messages of a group.
name: name of the stream.
groupname: name of the consumer group.
"""
return self.execute_command('XPENDING', name, groupname)
def xpending_range(self, name, groupname, min, max, count,
consumername=None):
"""
Returns information about pending messages, in a range.
name: name of the stream.
groupname: name of the consumer group.
min: minimum stream ID.
max: maximum stream ID.
count: number of messages to return
consumername: name of a consumer to filter by (optional).
"""
pieces = [name, groupname]
if min is not None or max is not None or count is not None:
if min is None or max is None or count is None:
raise DataError("XPENDING must be provided with min, max "
"and count parameters, or none of them. ")
if not isinstance(count, int) or count < -1:
raise DataError("XPENDING count must be an integer >= -1")
pieces.extend((min, max, str(count)))
if consumername is not None:
if min is None or max is None or count is None:
raise DataError("if XPENDING is provided with consumername,"
" it must be provided with min, max and"
" count parameters")
pieces.append(consumername)
return self.execute_command('XPENDING', *pieces, parse_detail=True)
def xrange(self, name, min='-', max='+', count=None):
"""
Read stream values within an interval.
name: name of the stream.
min: first stream ID. defaults to '-',
meaning the earliest available.
max: last stream ID. defaults to '+',
meaning the latest available.
count: if set, only return this many items, beginning with the
earliest available.
"""
pieces = [min, max]
if count is not None:
if not isinstance(count, int) or count < 1:
raise DataError('XRANGE count must be a positive integer')
pieces.append(b'COUNT')
pieces.append(str(count))
return self.execute_command('XRANGE', name, *pieces)
def xread(self, streams, count=None, block=None):
"""
Block and monitor multiple streams for new data.
streams: a dict of stream names to stream IDs, where
IDs indicate the last ID already seen.
count: if set, only return this many items, beginning with the
earliest available.
block: number of milliseconds to wait, if nothing already present.
"""
pieces = []
if block is not None:
if not isinstance(block, int) or block < 0:
raise DataError('XREAD block must be a non-negative integer')
pieces.append(b'BLOCK')
pieces.append(str(block))
if count is not None:
if not isinstance(count, int) or count < 1:
raise DataError('XREAD count must be a positive integer')
pieces.append(b'COUNT')
pieces.append(str(count))
if not isinstance(streams, dict) or len(streams) == 0:
raise DataError('XREAD streams must be a non-empty dict')
pieces.append(b'STREAMS')
keys, values = zip(*streams.items())
pieces.extend(keys)
pieces.extend(values)
return self.execute_command('XREAD', *pieces)
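``xread`` flattens its ``streams`` dict into the wire order STREAMS key1 key2 ... id1 id2 ..., all stream names first and then the matching IDs. A hypothetical ``build_xread_pieces`` showing just that ordering:

```python
def build_xread_pieces(streams, count=None, block=None):
    pieces = []
    if block is not None:
        pieces.extend([b'BLOCK', str(block)])
    if count is not None:
        pieces.extend([b'COUNT', str(count)])
    if not isinstance(streams, dict) or len(streams) == 0:
        raise ValueError('streams must be a non-empty dict')
    pieces.append(b'STREAMS')
    keys, values = zip(*streams.items())
    pieces.extend(keys)    # all stream names first...
    pieces.extend(values)  # ...then the last-seen IDs, in the same order
    return pieces
```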
def xreadgroup(self, groupname, consumername, streams, count=None,
block=None, noack=False):
"""
Read from a stream via a consumer group.
groupname: name of the consumer group.
consumername: name of the requesting consumer.
streams: a dict of stream names to stream IDs, where
IDs indicate the last ID already seen.
count: if set, only return this many items, beginning with the
earliest available.
block: number of milliseconds to wait, if nothing already present.
noack: do not add messages to the PEL
"""
pieces = [b'GROUP', groupname, consumername]
if count is not None:
if not isinstance(count, int) or count < 1:
raise DataError("XREADGROUP count must be a positive integer")
pieces.append(b'COUNT')
pieces.append(str(count))
if block is not None:
if not isinstance(block, int) or block < 0:
raise DataError("XREADGROUP block must be a non-negative "
"integer")
pieces.append(b'BLOCK')
pieces.append(str(block))
if noack:
pieces.append(b'NOACK')
if not isinstance(streams, dict) or len(streams) == 0:
raise DataError('XREADGROUP streams must be a non-empty dict')
pieces.append(b'STREAMS')
pieces.extend(streams.keys())
pieces.extend(streams.values())
return self.execute_command('XREADGROUP', *pieces)
def xrevrange(self, name, max='+', min='-', count=None):
"""
Read stream values within an interval, in reverse order.
name: name of the stream
max: first stream ID. defaults to '+',
meaning the latest available.
min: last stream ID. defaults to '-',
meaning the earliest available.
count: if set, only return this many items, beginning with the
latest available.
"""
pieces = [max, min]
if count is not None:
if not isinstance(count, int) or count < 1:
raise DataError('XREVRANGE count must be a positive integer')
pieces.append(b'COUNT')
pieces.append(str(count))
return self.execute_command('XREVRANGE', name, *pieces)
def xtrim(self, name, maxlen, approximate=True):
"""
Trims old messages from a stream.
name: name of the stream.
maxlen: truncate old stream messages beyond this size
approximate: actual stream length may be slightly more than maxlen
"""
pieces = [b'MAXLEN']
if approximate:
pieces.append(b'~')
pieces.append(maxlen)
return self.execute_command('XTRIM', name, *pieces)
# SORTED SET COMMANDS
def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False):
"""
Set any number of element-name, score pairs to the key ``name``. Pairs
are specified as a dict of element-names keys to score values.
``nx`` forces ZADD to only create new elements and not to update
scores for elements that already exist.
``xx`` forces ZADD to only update scores of elements that already
exist. New elements will not be added.
``ch`` modifies the return value to be the numbers of elements changed.
Changed elements include new elements that were added and elements
whose scores changed.
``incr`` modifies ZADD to behave like ZINCRBY. In this mode only a
single element/score pair can be specified and the score is the amount
the existing score will be incremented by. When using this mode the
return value of ZADD will be the new score of the element.
The return value of ZADD varies based on the mode specified. With no
options, ZADD returns the number of new elements added to the sorted
set.
"""
if not mapping:
raise DataError("ZADD requires at least one element/score pair")
if nx and xx:
raise DataError("ZADD allows either 'nx' or 'xx', not both")
if incr and len(mapping) != 1:
raise DataError("ZADD option 'incr' only works when passing a "
"single element/score pair")
pieces = []
options = {}
if nx:
pieces.append(b'NX')
if xx:
pieces.append(b'XX')
if ch:
pieces.append(b'CH')
if incr:
pieces.append(b'INCR')
options['as_score'] = True
for pair in mapping.items():
pieces.append(pair[1])
pieces.append(pair[0])
return self.execute_command('ZADD', name, *pieces, **options)
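``zadd`` takes a mapping of member names to scores but emits them on the wire as score-then-member pairs. ``build_zadd_pieces`` is a hypothetical standalone sketch of that flattening together with the flag handling:

```python
def build_zadd_pieces(mapping, nx=False, xx=False, ch=False, incr=False):
    if not mapping:
        raise ValueError('at least one element/score pair required')
    if nx and xx:
        raise ValueError("either 'nx' or 'xx' may be set, not both")
    if incr and len(mapping) != 1:
        raise ValueError("'incr' needs exactly one element/score pair")
    pieces = []
    if nx:
        pieces.append(b'NX')
    if xx:
        pieces.append(b'XX')
    if ch:
        pieces.append(b'CH')
    if incr:
        pieces.append(b'INCR')
    for member, score in mapping.items():
        pieces.append(score)   # score comes first on the wire
        pieces.append(member)
    return pieces
```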
def zcard(self, name):
"Return the number of elements in the sorted set ``name``"
return self.execute_command('ZCARD', name)
def zcount(self, name, min, max):
"""
Returns the number of elements in the sorted set at key ``name`` with
a score between ``min`` and ``max``.
"""
return self.execute_command('ZCOUNT', name, min, max)
def zincrby(self, name, amount, value):
"Increment the score of ``value`` in sorted set ``name`` by ``amount``"
return self.execute_command('ZINCRBY', name, amount, value)
def zinterstore(self, dest, keys, aggregate=None):
"""
Intersect multiple sorted sets specified by ``keys`` into
a new sorted set, ``dest``. Scores in the destination will be
aggregated based on the ``aggregate``, or SUM if none is provided.
"""
return self._zaggregate('ZINTERSTORE', dest, keys, aggregate)
def zlexcount(self, name, min, max):
"""
Return the number of items in the sorted set ``name`` between the
lexicographical range ``min`` and ``max``.
"""
return self.execute_command('ZLEXCOUNT', name, min, max)
def zpopmax(self, name, count=None):
"""
Remove and return up to ``count`` members with the highest scores
from the sorted set ``name``.
"""
args = [count] if count is not None else []
options = {
'withscores': True
}
return self.execute_command('ZPOPMAX', name, *args, **options)
def zpopmin(self, name, count=None):
"""
Remove and return up to ``count`` members with the lowest scores
from the sorted set ``name``.
"""
args = [count] if count is not None else []
options = {
'withscores': True
}
return self.execute_command('ZPOPMIN', name, *args, **options)
def bzpopmax(self, keys, timeout=0):
"""
ZPOPMAX a value off of the first non-empty sorted set
named in the ``keys`` list.
If none of the sorted sets in ``keys`` has a value to ZPOPMAX,
then block for ``timeout`` seconds, or until a member gets added
to one of the sorted sets.
If timeout is 0, then block indefinitely.
"""
if timeout is None:
timeout = 0
keys = list_or_args(keys, None)
keys.append(timeout)
return self.execute_command('BZPOPMAX', *keys)
def bzpopmin(self, keys, timeout=0):
"""
ZPOPMIN a value off of the first non-empty sorted set
named in the ``keys`` list.
If none of the sorted sets in ``keys`` has a value to ZPOPMIN,
then block for ``timeout`` seconds, or until a member gets added
to one of the sorted sets.
If timeout is 0, then block indefinitely.
"""
if timeout is None:
timeout = 0
keys = list_or_args(keys, None)
keys.append(timeout)
return self.execute_command('BZPOPMIN', *keys)
def zrange(self, name, start, end, desc=False, withscores=False,
score_cast_func=float):
"""
Return a range of values from sorted set ``name`` between
``start`` and ``end`` sorted in ascending order.
``start`` and ``end`` can be negative, indicating the end of the range.
``desc`` a boolean indicating whether to sort the results in descending order
``withscores`` indicates to return the scores along with the values.
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value
"""
if desc:
return self.zrevrange(name, start, end, withscores,
score_cast_func)
pieces = ['ZRANGE', name, start, end]
if withscores:
pieces.append(b'WITHSCORES')
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return self.execute_command(*pieces, **options)
def zrangebylex(self, name, min, max, start=None, num=None):
"""
Return the lexicographical range of values from sorted set ``name``
between ``min`` and ``max``.
If ``start`` and ``num`` are specified, then return a slice of the
range.
"""
if (start is not None and num is None) or \
(num is not None and start is None):
raise DataError("``start`` and ``num`` must both be specified")
pieces = ['ZRANGEBYLEX', name, min, max]
if start is not None and num is not None:
pieces.extend([b'LIMIT', start, num])
return self.execute_command(*pieces)
def zrevrangebylex(self, name, max, min, start=None, num=None):
"""
Return the reversed lexicographical range of values from sorted set
``name`` between ``max`` and ``min``.
If ``start`` and ``num`` are specified, then return a slice of the
range.
"""
if (start is not None and num is None) or \
(num is not None and start is None):
raise DataError("``start`` and ``num`` must both be specified")
pieces = ['ZREVRANGEBYLEX', name, max, min]
if start is not None and num is not None:
pieces.extend([b'LIMIT', start, num])
return self.execute_command(*pieces)
def zrangebyscore(self, name, min, max, start=None, num=None,
withscores=False, score_cast_func=float):
"""
Return a range of values from the sorted set ``name`` with scores
between ``min`` and ``max``.
If ``start`` and ``num`` are specified, then return a slice
of the range.
``withscores`` indicates to return the scores along with the values.
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value
"""
if (start is not None and num is None) or \
(num is not None and start is None):
raise DataError("``start`` and ``num`` must both be specified")
pieces = ['ZRANGEBYSCORE', name, min, max]
if start is not None and num is not None:
pieces.extend([b'LIMIT', start, num])
if withscores:
pieces.append(b'WITHSCORES')
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return self.execute_command(*pieces, **options)
def zrank(self, name, value):
"""
Returns a 0-based value indicating the rank of ``value`` in sorted set
``name``
"""
return self.execute_command('ZRANK', name, value)
def zrem(self, name, *values):
"Remove member ``values`` from sorted set ``name``"
return self.execute_command('ZREM', name, *values)
def zremrangebylex(self, name, min, max):
"""
Remove all elements in the sorted set ``name`` between the
lexicographical range specified by ``min`` and ``max``.
Returns the number of elements removed.
"""
return self.execute_command('ZREMRANGEBYLEX', name, min, max)
def zremrangebyrank(self, name, min, max):
"""
Remove all elements in the sorted set ``name`` with ranks between
``min`` and ``max``. Values are 0-based, ordered from smallest score
to largest. Values can be negative indicating the highest scores.
Returns the number of elements removed
"""
return self.execute_command('ZREMRANGEBYRANK', name, min, max)
def zremrangebyscore(self, name, min, max):
"""
Remove all elements in the sorted set ``name`` with scores
between ``min`` and ``max``. Returns the number of elements removed.
"""
return self.execute_command('ZREMRANGEBYSCORE', name, min, max)
def zrevrange(self, name, start, end, withscores=False,
score_cast_func=float):
"""
Return a range of values from sorted set ``name`` between
``start`` and ``end`` sorted in descending order.
``start`` and ``end`` can be negative, indicating the end of the range.
``withscores`` indicates to return the scores along with the values
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value
"""
pieces = ['ZREVRANGE', name, start, end]
if withscores:
pieces.append(b'WITHSCORES')
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return self.execute_command(*pieces, **options)
def zrevrangebyscore(self, name, max, min, start=None, num=None,
withscores=False, score_cast_func=float):
"""
Return a range of values from the sorted set ``name`` with scores
between ``min`` and ``max`` in descending order.
If ``start`` and ``num`` are specified, then return a slice
of the range.
``withscores`` indicates to return the scores along with the values.
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value
"""
if (start is not None and num is None) or \
(num is not None and start is None):
raise DataError("``start`` and ``num`` must both be specified")
pieces = ['ZREVRANGEBYSCORE', name, max, min]
if start is not None and num is not None:
pieces.extend([b'LIMIT', start, num])
if withscores:
pieces.append(b'WITHSCORES')
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return self.execute_command(*pieces, **options)
def zrevrank(self, name, value):
"""
Returns a 0-based value indicating the descending rank of
``value`` in sorted set ``name``
"""
return self.execute_command('ZREVRANK', name, value)
def zscore(self, name, value):
"Return the score of element ``value`` in sorted set ``name``"
return self.execute_command('ZSCORE', name, value)
def zunionstore(self, dest, keys, aggregate=None):
"""
Union multiple sorted sets specified by ``keys`` into
a new sorted set, ``dest``. Scores in the destination will be
aggregated based on the ``aggregate``, or SUM if none is provided.
"""
return self._zaggregate('ZUNIONSTORE', dest, keys, aggregate)
def _zaggregate(self, command, dest, keys, aggregate=None):
pieces = [command, dest, len(keys)]
if isinstance(keys, dict):
keys, weights = keys.keys(), keys.values()
else:
weights = None
pieces.extend(keys)
if weights:
pieces.append(b'WEIGHTS')
pieces.extend(weights)
if aggregate:
pieces.append(b'AGGREGATE')
pieces.append(aggregate)
return self.execute_command(*pieces)
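``_zaggregate`` accepts ``keys`` either as a plain sequence or as a dict mapping key names to weights, in which case a WEIGHTS clause follows the key names. A hypothetical standalone sketch:

```python
def build_zaggregate_pieces(command, dest, keys, aggregate=None):
    pieces = [command, dest, len(keys)]
    if isinstance(keys, dict):
        # dict form: values carry per-set weights for the aggregation
        keys, weights = list(keys.keys()), list(keys.values())
    else:
        weights = None
    pieces.extend(keys)
    if weights:
        pieces.append(b'WEIGHTS')
        pieces.extend(weights)
    if aggregate:
        pieces.extend([b'AGGREGATE', aggregate])
    return pieces
```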
# HYPERLOGLOG COMMANDS
def pfadd(self, name, *values):
"Adds the specified elements to the specified HyperLogLog."
return self.execute_command('PFADD', name, *values)
def pfcount(self, *sources):
"""
Return the approximated cardinality of
the set observed by the HyperLogLog at key(s).
"""
return self.execute_command('PFCOUNT', *sources)
def pfmerge(self, dest, *sources):
"Merge N different HyperLogLogs into a single one."
return self.execute_command('PFMERGE', dest, *sources)
# HASH COMMANDS
def hdel(self, name, *keys):
"Delete ``keys`` from hash ``name``"
return self.execute_command('HDEL', name, *keys)
def hexists(self, name, key):
"Returns a boolean indicating if ``key`` exists within hash ``name``"
return self.execute_command('HEXISTS', name, key)
def hget(self, name, key):
"Return the value of ``key`` within the hash ``name``"
return self.execute_command('HGET', name, key)
def hgetall(self, name):
"Return a Python dict of the hash's name/value pairs"
return self.execute_command('HGETALL', name)
def hincrby(self, name, key, amount=1):
"Increment the value of ``key`` in hash ``name`` by ``amount``"
return self.execute_command('HINCRBY', name, key, amount)
def hincrbyfloat(self, name, key, amount=1.0):
"""
Increment the value of ``key`` in hash ``name`` by floating ``amount``
"""
return self.execute_command('HINCRBYFLOAT', name, key, amount)
def hkeys(self, name):
"Return the list of keys within hash ``name``"
return self.execute_command('HKEYS', name)
def hlen(self, name):
"Return the number of elements in hash ``name``"
return self.execute_command('HLEN', name)
def hset(self, name, key=None, value=None, mapping=None):
"""
Set ``key`` to ``value`` within hash ``name``,
``mapping`` accepts a dict of key/value pairs that will be
added to hash ``name``.
Returns the number of fields that were added.
"""
if key is None and not mapping:
raise DataError("'hset' with no key value pairs")
items = []
if key is not None:
items.extend((key, value))
if mapping:
for pair in mapping.items():
items.extend(pair)
return self.execute_command('HSET', name, *items)
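``hset`` merges the single ``key``/``value`` pair and the optional ``mapping`` into one flat argument list. A hypothetical ``build_hset_items`` showing the merge order:

```python
def build_hset_items(key=None, value=None, mapping=None):
    if key is None and not mapping:
        raise ValueError('no key/value pairs given')
    items = []
    if key is not None:
        items.extend((key, value))    # explicit pair goes first
    if mapping:
        for pair in mapping.items():
            items.extend(pair)        # then the flattened mapping
    return items
```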
def hsetnx(self, name, key, value):
"""
Set ``key`` to ``value`` within hash ``name`` if ``key`` does not
exist. Returns 1 if HSETNX created a field, otherwise 0.
"""
return self.execute_command('HSETNX', name, key, value)
def hmset(self, name, mapping):
"""
Set key to value within hash ``name`` for each corresponding
key and value from the ``mapping`` dict.
"""
warnings.warn(
'%s.hmset() is deprecated. Use %s.hset() instead.'
% (self.__class__.__name__, self.__class__.__name__),
DeprecationWarning,
stacklevel=2,
)
if not mapping:
raise DataError("'hmset' with 'mapping' of length 0")
items = []
for pair in mapping.items():
items.extend(pair)
return self.execute_command('HMSET', name, *items)
def hmget(self, name, keys, *args):
"Returns a list of values ordered identically to ``keys``"
args = list_or_args(keys, args)
return self.execute_command('HMGET', name, *args)
def hvals(self, name):
"Return the list of values within hash ``name``"
return self.execute_command('HVALS', name)
def hstrlen(self, name, key):
"""
Return the number of bytes stored in the value of ``key``
within hash ``name``
"""
return self.execute_command('HSTRLEN', name, key)
def publish(self, channel, message):
"""
Publish ``message`` on ``channel``.
Returns the number of subscribers the message was delivered to.
"""
return self.execute_command('PUBLISH', channel, message)
def pubsub_channels(self, pattern='*'):
"""
Return a list of channels that have at least one subscriber
"""
return self.execute_command('PUBSUB CHANNELS', pattern)
def pubsub_numpat(self):
"""
Returns the number of subscriptions to patterns
"""
return self.execute_command('PUBSUB NUMPAT')
def pubsub_numsub(self, *args):
"""
Return a list of (channel, number of subscribers) tuples
for each channel given in ``*args``
"""
return self.execute_command('PUBSUB NUMSUB', *args)
def cluster(self, cluster_arg, *args):
return self.execute_command('CLUSTER %s' % cluster_arg.upper(), *args)
def eval(self, script, numkeys, *keys_and_args):
"""
Execute the Lua ``script``, specifying the ``numkeys`` the script
will touch and the key names and argument values in ``keys_and_args``.
Returns the result of the script.
In practice, use the object returned by ``register_script``. This
function exists purely for Redis API completion.
"""
return self.execute_command('EVAL', script, numkeys, *keys_and_args)
def evalsha(self, sha, numkeys, *keys_and_args):
"""
Use the ``sha`` to execute a Lua script already registered via EVAL
or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the
key names and argument values in ``keys_and_args``. Returns the result
of the script.
In practice, use the object returned by ``register_script``. This
function exists purely for Redis API completion.
"""
return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
def script_exists(self, *args):
"""
Check if a script exists in the script cache by specifying the SHAs of
each script as ``args``. Returns a list of boolean values indicating
whether each script already exists in the cache.
"""
return self.execute_command('SCRIPT EXISTS', *args)
def script_flush(self):
"Flush all scripts from the script cache"
return self.execute_command('SCRIPT FLUSH')
def script_kill(self):
"Kill the currently executing Lua script"
return self.execute_command('SCRIPT KILL')
def script_load(self, script):
"Load a Lua ``script`` into the script cache. Returns the SHA."
return self.execute_command('SCRIPT LOAD', script)
def register_script(self, script):
"""
Register a Lua ``script`` specifying the ``keys`` it will touch.
Returns a Script object that is callable and hides the complexity of
dealing with scripts, keys, and shas. This is the preferred way to work
with Lua scripts.
"""
return Script(self, script)
# GEO COMMANDS
def geoadd(self, name, *values):
"""
Add the specified geospatial items to the specified key identified
by the ``name`` argument. The Geospatial items are given as ordered
members of the ``values`` argument, each item or place is formed by
the triad longitude, latitude and name.
"""
if len(values) % 3 != 0:
raise DataError("GEOADD requires places with lon, lat and name"
" values")
return self.execute_command('GEOADD', name, *values)
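``geoadd`` expects its ``values`` as a flat run of (longitude, latitude, name) triads, which is why the length check is modulo 3. A hypothetical helper that groups such a flat sequence back into triads:

```python
def group_geoadd_triads(values):
    # Re-group a flat (lon, lat, name, lon, lat, name, ...) sequence
    # into (lon, lat, name) tuples; reject incomplete triads.
    if len(values) % 3 != 0:
        raise ValueError('values must come in lon, lat, name triads')
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]
```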
def geodist(self, name, place1, place2, unit=None):
"""
Return the distance between ``place1`` and ``place2`` members of the
``name`` key.
The units must be one of the following: m, km, mi, ft. Meters are
used by default.
"""
pieces = [name, place1, place2]
if unit and unit not in ('m', 'km', 'mi', 'ft'):
raise DataError("GEODIST invalid unit")
elif unit:
pieces.append(unit)
return self.execute_command('GEODIST', *pieces)
def geohash(self, name, *values):
"""
Return the geo hash string for each item of ``values`` members of
the specified key identified by the ``name`` argument.
"""
return self.execute_command('GEOHASH', name, *values)
def geopos(self, name, *values):
"""
Return the positions of each item of ``values`` as members of
the specified key identified by the ``name`` argument. Each position
is represented by a pair of lon and lat values.
"""
return self.execute_command('GEOPOS', name, *values)
def georadius(self, name, longitude, latitude, radius, unit=None,
withdist=False, withcoord=False, withhash=False, count=None,
sort=None, store=None, store_dist=None):
"""
Return the members of the specified key identified by the
``name`` argument which are within the borders of the area specified
with the ``latitude`` and ``longitude`` location and the maximum
distance from the center specified by the ``radius`` value.
The units must be one of the following: m, km, mi, ft. Meters are
used by default.
``withdist`` indicates to return the distances of each place.
``withcoord`` indicates to return the latitude and longitude of
each place.
``withhash`` indicates to return the geohash string of each place.
``count`` indicates to return the number of elements up to N.
``sort`` indicates to return the places in a sorted way, ASC for
nearest to farthest and DESC for farthest to nearest.
``store`` indicates to save the places names in a sorted set named
with a specific key, each element of the destination sorted set is
populated with the score got from the original geo sorted set.
``store_dist`` indicates to save the places names in a sorted set
named with a specific key, instead of ``store`` the sorted set
destination score is set with the distance.
"""
return self._georadiusgeneric('GEORADIUS',
name, longitude, latitude, radius,
unit=unit, withdist=withdist,
withcoord=withcoord, withhash=withhash,
count=count, sort=sort, store=store,
store_dist=store_dist)
def georadiusbymember(self, name, member, radius, unit=None,
withdist=False, withcoord=False, withhash=False,
count=None, sort=None, store=None, store_dist=None):
"""
This command is exactly like ``georadius`` with the sole difference
that instead of taking, as the center of the area to query, a longitude
and latitude value, it takes the name of a member already existing
inside the geospatial index represented by the sorted set.
"""
return self._georadiusgeneric('GEORADIUSBYMEMBER',
name, member, radius, unit=unit,
withdist=withdist, withcoord=withcoord,
withhash=withhash, count=count,
sort=sort, store=store,
store_dist=store_dist)
def _georadiusgeneric(self, command, *args, **kwargs):
pieces = list(args)
if kwargs['unit'] and kwargs['unit'] not in ('m', 'km', 'mi', 'ft'):
raise DataError("GEORADIUS invalid unit")
elif kwargs['unit']:
pieces.append(kwargs['unit'])
else:
pieces.append('m')
for arg_name, byte_repr in (
('withdist', b'WITHDIST'),
('withcoord', b'WITHCOORD'),
('withhash', b'WITHHASH')):
if kwargs[arg_name]:
pieces.append(byte_repr)
if kwargs['count']:
pieces.extend([b'COUNT', kwargs['count']])
if kwargs['sort']:
if kwargs['sort'] == 'ASC':
pieces.append(b'ASC')
elif kwargs['sort'] == 'DESC':
pieces.append(b'DESC')
else:
raise DataError("GEORADIUS invalid sort")
if kwargs['store'] and kwargs['store_dist']:
raise DataError("GEORADIUS store and store_dist can't be set"
" together")
if kwargs['store']:
pieces.extend([b'STORE', kwargs['store']])
if kwargs['store_dist']:
pieces.extend([b'STOREDIST', kwargs['store_dist']])
return self.execute_command(command, *pieces, **kwargs)
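As a rough illustration of the argument assembly above, here is a standalone sketch (independent of the client classes; the key name, coordinates, and helper name are made up) of how a GEORADIUS argument list comes together:

```python
def build_georadius_pieces(name, longitude, latitude, radius,
                           unit=None, withdist=False, count=None):
    # Mirror the unit validation and flag handling in _georadiusgeneric.
    pieces = [name, longitude, latitude, radius]
    if unit and unit not in ('m', 'km', 'mi', 'ft'):
        raise ValueError("GEORADIUS invalid unit")
    pieces.append(unit or 'm')  # meters is the default unit
    if withdist:
        pieces.append(b'WITHDIST')
    if count:
        pieces.extend([b'COUNT', count])
    return pieces

pieces = build_georadius_pieces('places', 2.35, 48.85, 5,
                                unit='km', withdist=True, count=3)
# pieces -> ['places', 2.35, 48.85, 5, 'km', b'WITHDIST', b'COUNT', 3]
```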
# MODULE COMMANDS
def module_load(self, path):
"""
Loads the module from ``path``.
Raises ``ModuleError`` if a module is not found at ``path``.
"""
return self.execute_command('MODULE LOAD', path)
def module_unload(self, name):
"""
Unloads the module ``name``.
Raises ``ModuleError`` if ``name`` is not in loaded modules.
"""
return self.execute_command('MODULE UNLOAD', name)
def module_list(self):
"""
Returns a list of dictionaries containing the name and version of
all loaded modules.
"""
return self.execute_command('MODULE LIST')
StrictRedis = Redis
class Monitor:
"""
Monitor is useful for handling the MONITOR command to the redis server.
next_command() method returns one command from monitor.
listen() method yields commands from monitor.
"""
monitor_re = re.compile(r'\[(\d+) (.*)\] (.*)')
command_re = re.compile(r'"(.*?)(?<!\\)"')
def __init__(self, connection_pool):
self.connection_pool = connection_pool
self.connection = self.connection_pool.get_connection('MONITOR')
def __enter__(self):
self.connection.send_command('MONITOR')
# check that monitor returns 'OK', but don't return it to user
response = self.connection.read_response()
if not bool_ok(response):
raise RedisError('MONITOR failed: %s' % response)
return self
def __exit__(self, *args):
self.connection.disconnect()
self.connection_pool.release(self.connection)
def next_command(self):
"Parse the response from a monitor command"
response = self.connection.read_response()
if isinstance(response, bytes):
response = self.connection.encoder.decode(response, force=True)
command_time, command_data = response.split(' ', 1)
m = self.monitor_re.match(command_data)
db_id, client_info, command = m.groups()
command = ' '.join(self.command_re.findall(command))
# Redis escapes double quotes because each piece of the command
# string is surrounded by double quotes. We don't have that
# requirement so remove the escaping and leave the quote.
command = command.replace('\\"', '"')
if client_info == 'lua':
client_address = 'lua'
client_port = ''
client_type = 'lua'
elif client_info.startswith('unix'):
client_address = 'unix'
client_port = client_info[5:]
client_type = 'unix'
else:
# use rsplit as ipv6 addresses contain colons
client_address, client_port = client_info.rsplit(':', 1)
client_type = 'tcp'
return {
'time': float(command_time),
'db': int(db_id),
'client_address': client_address,
'client_port': client_port,
'client_type': client_type,
'command': command
}
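To make the parsing in next_command() concrete, here is a standalone sketch that runs the same two regexes over a hypothetical raw MONITOR line (the timestamp, address, and command are made up):

```python
import re

# Same patterns as Monitor.monitor_re / Monitor.command_re above.
monitor_re = re.compile(r'\[(\d+) (.*)\] (.*)')
command_re = re.compile(r'"(.*?)(?<!\\)"')

response = '1609459200.123456 [0 127.0.0.1:51234] "SET" "foo" "bar"'
command_time, command_data = response.split(' ', 1)
db_id, client_info, command = monitor_re.match(command_data).groups()
command = ' '.join(command_re.findall(command))
client_address, client_port = client_info.rsplit(':', 1)
# command -> 'SET foo bar', db_id -> '0', client_port -> '51234'
```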
def listen(self):
"Listen for commands coming to the server."
while True:
yield self.next_command()
class PubSub:
"""
PubSub provides publish, subscribe and listen support to Redis channels.
After subscribing to one or more channels, the listen() method will block
until a message arrives on one of the subscribed channels. That message
will be returned and it's safe to start listening again.
"""
PUBLISH_MESSAGE_TYPES = ('message', 'pmessage')
UNSUBSCRIBE_MESSAGE_TYPES = ('unsubscribe', 'punsubscribe')
HEALTH_CHECK_MESSAGE = 'redis-py-health-check'
def __init__(self, connection_pool, shard_hint=None,
ignore_subscribe_messages=False):
self.connection_pool = connection_pool
self.shard_hint = shard_hint
self.ignore_subscribe_messages = ignore_subscribe_messages
self.connection = None
# we need to know the encoding options for this connection in order
# to lookup channel and pattern names for callback handlers.
self.encoder = self.connection_pool.get_encoder()
if self.encoder.decode_responses:
self.health_check_response = ['pong', self.HEALTH_CHECK_MESSAGE]
else:
self.health_check_response = [
b'pong',
self.encoder.encode(self.HEALTH_CHECK_MESSAGE)
]
self.reset()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.reset()
def __del__(self):
try:
# if this object went out of scope prior to shutting down
# subscriptions, close the connection manually before
# returning it to the connection pool
self.reset()
except Exception:
pass
def reset(self):
if self.connection:
self.connection.disconnect()
self.connection.clear_connect_callbacks()
self.connection_pool.release(self.connection)
self.connection = None
self.channels = {}
self.pending_unsubscribe_channels = set()
self.patterns = {}
self.pending_unsubscribe_patterns = set()
def close(self):
self.reset()
def on_connect(self, connection):
"Re-subscribe to any channels and patterns previously subscribed to"
# NOTE: for python3, we can't pass bytestrings as keyword arguments
# so we need to decode channel/pattern names back to unicode strings
# before passing them to [p]subscribe.
self.pending_unsubscribe_channels.clear()
self.pending_unsubscribe_patterns.clear()
if self.channels:
channels = {}
for k, v in self.channels.items():
channels[self.encoder.decode(k, force=True)] = v
self.subscribe(**channels)
if self.patterns:
patterns = {}
for k, v in self.patterns.items():
patterns[self.encoder.decode(k, force=True)] = v
self.psubscribe(**patterns)
@property
def subscribed(self):
"Indicates if there are subscriptions to any channels or patterns"
return bool(self.channels or self.patterns)
def execute_command(self, *args):
"Execute a publish/subscribe command"
# NOTE: don't parse the response in this function -- it could pull a
# legitimate message off the stack if the connection is already
# subscribed to one or more channels
if self.connection is None:
self.connection = self.connection_pool.get_connection(
'pubsub',
self.shard_hint
)
# register a callback that re-subscribes to any channels we
# were listening to when we were disconnected
self.connection.register_connect_callback(self.on_connect)
connection = self.connection
kwargs = {'check_health': not self.subscribed}
self._execute(connection, connection.send_command, *args, **kwargs)
def _execute(self, connection, command, *args, **kwargs):
try:
return command(*args, **kwargs)
except (ConnectionError, TimeoutError) as e:
connection.disconnect()
if not (connection.retry_on_timeout and
isinstance(e, TimeoutError)):
raise
# Connect manually here. If the Redis server is down, this will
# fail and raise a ConnectionError as desired.
connection.connect()
# the ``on_connect`` callback should have been called by the
# connection to resubscribe us to any channels and patterns we were
# previously listening to
return command(*args, **kwargs)
def parse_response(self, block=True, timeout=0):
"Parse the response from a publish/subscribe command"
conn = self.connection
if conn is None:
raise RuntimeError(
'pubsub connection not set: '
'did you forget to call subscribe() or psubscribe()?')
self.check_health()
if not block and not conn.can_read(timeout=timeout):
return None
response = self._execute(conn, conn.read_response)
if conn.health_check_interval and \
response == self.health_check_response:
# ignore the health check message as user might not expect it
return None
return response
def check_health(self):
conn = self.connection
if conn is None:
raise RuntimeError(
'pubsub connection not set: '
'did you forget to call subscribe() or psubscribe()?')
if conn.health_check_interval and time.time() > conn.next_health_check:
conn.send_command('PING', self.HEALTH_CHECK_MESSAGE,
check_health=False)
def _normalize_keys(self, data):
"""
normalize channel/pattern names to be either bytes or strings
based on whether responses are automatically decoded. this saves us
from coercing the value for each message coming in.
"""
encode = self.encoder.encode
decode = self.encoder.decode
return {decode(encode(k)): v for k, v in data.items()}
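A sketch of the key normalization in _normalize_keys() above: channel names take an encode/decode round trip so their type matches what the connection will later hand back. The encoder here is a simplified stand-in for the client's Encoder with decode_responses disabled:

```python
def encode(key):
    # Stand-in for Encoder.encode: strings become UTF-8 bytes.
    return key.encode('utf-8') if isinstance(key, str) else key

def decode(key):
    # decode_responses disabled: bytes stay bytes.
    return key

data = {'news.*': None, b'sports.*': None}
normalized = {decode(encode(k)): v for k, v in data.items()}
# normalized -> {b'news.*': None, b'sports.*': None}
```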
def psubscribe(self, *args, **kwargs):
"""
Subscribe to channel patterns. Patterns supplied as keyword arguments
expect a pattern name as the key and a callable as the value. A
pattern's callable will be invoked automatically when a message is
received on that pattern rather than producing a message via
``listen()``.
"""
if args:
args = list_or_args(args[0], args[1:])
new_patterns = dict.fromkeys(args)
new_patterns.update(kwargs)
ret_val = self.execute_command('PSUBSCRIBE', *new_patterns.keys())
# update the patterns dict AFTER we send the command. we don't want to
# subscribe twice to these patterns, once for the command and again
# for the reconnection.
new_patterns = self._normalize_keys(new_patterns)
self.patterns.update(new_patterns)
self.pending_unsubscribe_patterns.difference_update(new_patterns)
return ret_val
def punsubscribe(self, *args):
"""
Unsubscribe from the supplied patterns. If empty, unsubscribe from
all patterns.
"""
if args:
args = list_or_args(args[0], args[1:])
patterns = self._normalize_keys(dict.fromkeys(args))
else:
patterns = self.patterns
self.pending_unsubscribe_patterns.update(patterns)
return self.execute_command('PUNSUBSCRIBE', *args)
def subscribe(self, *args, **kwargs):
"""
Subscribe to channels. Channels supplied as keyword arguments expect
a channel name as the key and a callable as the value. A channel's
callable will be invoked automatically when a message is received on
that channel rather than producing a message via ``listen()`` or
``get_message()``.
"""
if args:
args = list_or_args(args[0], args[1:])
new_channels = dict.fromkeys(args)
new_channels.update(kwargs)
ret_val = self.execute_command('SUBSCRIBE', *new_channels.keys())
# update the channels dict AFTER we send the command. we don't want to
# subscribe twice to these channels, once for the command and again
# for the reconnection.
new_channels = self._normalize_keys(new_channels)
self.channels.update(new_channels)
self.pending_unsubscribe_channels.difference_update(new_channels)
return ret_val
def unsubscribe(self, *args):
"""
Unsubscribe from the supplied channels. If empty, unsubscribe from
all channels.
"""
if args:
args = list_or_args(args[0], args[1:])
channels = self._normalize_keys(dict.fromkeys(args))
else:
channels = self.channels
self.pending_unsubscribe_channels.update(channels)
return self.execute_command('UNSUBSCRIBE', *args)
def listen(self):
"Listen for messages on channels this client has been subscribed to"
while self.subscribed:
response = self.handle_message(self.parse_response(block=True))
if response is not None:
yield response
def get_message(self, ignore_subscribe_messages=False, timeout=0):
"""
Get the next message if one is available, otherwise None.
If timeout is specified, the system will wait for `timeout` seconds
before returning. Timeout should be specified as a floating point
number.
"""
response = self.parse_response(block=False, timeout=timeout)
if response:
return self.handle_message(response, ignore_subscribe_messages)
return None
def ping(self, message=None):
"""
Ping the Redis server
"""
message = '' if message is None else message
return self.execute_command('PING', message)
def handle_message(self, response, ignore_subscribe_messages=False):
"""
Parses a pub/sub message. If the channel or pattern was subscribed to
with a message handler, the handler is invoked instead of a parsed
message being returned.
"""
message_type = str_if_bytes(response[0])
if message_type == 'pmessage':
message = {
'type': message_type,
'pattern': response[1],
'channel': response[2],
'data': response[3]
}
elif message_type == 'pong':
message = {
'type': message_type,
'pattern': None,
'channel': None,
'data': response[1]
}
else:
message = {
'type': message_type,
'pattern': None,
'channel': response[1],
'data': response[2]
}
# if this is an unsubscribe message, remove it from memory
if message_type in self.UNSUBSCRIBE_MESSAGE_TYPES:
if message_type == 'punsubscribe':
pattern = response[1]
if pattern in self.pending_unsubscribe_patterns:
self.pending_unsubscribe_patterns.remove(pattern)
self.patterns.pop(pattern, None)
else:
channel = response[1]
if channel in self.pending_unsubscribe_channels:
self.pending_unsubscribe_channels.remove(channel)
self.channels.pop(channel, None)
if message_type in self.PUBLISH_MESSAGE_TYPES:
# if there's a message handler, invoke it
if message_type == 'pmessage':
handler = self.patterns.get(message['pattern'], None)
else:
handler = self.channels.get(message['channel'], None)
if handler:
handler(message)
return None
elif message_type != 'pong':
# this is a subscribe/unsubscribe message. ignore if we don't
# want them
if ignore_subscribe_messages or self.ignore_subscribe_messages:
return None
return message
def run_in_thread(self, sleep_time=0, daemon=False,
exception_handler=None):
for channel, handler in self.channels.items():
if handler is None:
raise PubSubError("Channel: '%s' has no handler registered" %
channel)
for pattern, handler in self.patterns.items():
if handler is None:
raise PubSubError("Pattern: '%s' has no handler registered" %
pattern)
thread = PubSubWorkerThread(
self,
sleep_time,
daemon=daemon,
exception_handler=exception_handler
)
thread.start()
return thread
class PubSubWorkerThread(threading.Thread):
def __init__(self, pubsub, sleep_time, daemon=False,
exception_handler=None):
super().__init__()
self.daemon = daemon
self.pubsub = pubsub
self.sleep_time = sleep_time
self.exception_handler = exception_handler
self._running = threading.Event()
def run(self):
if self._running.is_set():
return
self._running.set()
pubsub = self.pubsub
sleep_time = self.sleep_time
while self._running.is_set():
try:
pubsub.get_message(ignore_subscribe_messages=True,
timeout=sleep_time)
except BaseException as e:
if self.exception_handler is None:
raise
self.exception_handler(e, pubsub, self)
pubsub.close()
def stop(self):
# trip the flag so the run loop exits. the run loop will
# close the pubsub connection, which disconnects the socket
# and returns the connection to the pool.
self._running.clear()
class Pipeline(Redis):
"""
Pipelines provide a way to transmit multiple commands to the Redis server
in one transmission. This is convenient for batch processing, such as
saving all the values in a list to Redis.
All commands executed within a pipeline are wrapped with MULTI and EXEC
calls. This guarantees all commands executed in the pipeline will be
executed atomically.
Any command raising an exception does *not* halt the execution of
subsequent commands in the pipeline. Instead, the exception is caught
and its instance is placed into the response list returned by execute().
Code iterating over the response list should be able to deal with an
instance of an exception as a potential value. In general, these will be
ResponseError exceptions, such as those raised when issuing a command
on a key of a different datatype.
"""
UNWATCH_COMMANDS = {'DISCARD', 'EXEC', 'UNWATCH'}
def __init__(self, connection_pool, response_callbacks, transaction,
shard_hint):
self.connection_pool = connection_pool
self.connection = None
self.response_callbacks = response_callbacks
self.transaction = transaction
self.shard_hint = shard_hint
self.watching = False
self.reset()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.reset()
def __del__(self):
try:
self.reset()
except Exception:
pass
def __len__(self):
return len(self.command_stack)
def __bool__(self):
"Pipeline instances should always evaluate to True"
return True
def reset(self):
self.command_stack = []
self.scripts = set()
# make sure to reset the connection state in the event that we were
# watching something
if self.watching and self.connection:
try:
# call this manually since our unwatch or
# immediate_execute_command methods can call reset()
self.connection.send_command('UNWATCH')
self.connection.read_response()
except ConnectionError:
# disconnect will also remove any previous WATCHes
self.connection.disconnect()
# clean up the other instance attributes
self.watching = False
self.explicit_transaction = False
# we can safely return the connection to the pool here since we're
# sure we're no longer WATCHing anything
if self.connection:
self.connection_pool.release(self.connection)
self.connection = None
def multi(self):
"""
Start a transactional block of the pipeline after WATCH commands
are issued. End the transactional block with `execute`.
"""
if self.explicit_transaction:
raise RedisError('Cannot issue nested calls to MULTI')
if self.command_stack:
raise RedisError('Commands without an initial WATCH have already '
'been issued')
self.explicit_transaction = True
def execute_command(self, *args, **kwargs):
if (self.watching or args[0] == 'WATCH') and \
not self.explicit_transaction:
return self.immediate_execute_command(*args, **kwargs)
return self.pipeline_execute_command(*args, **kwargs)
def immediate_execute_command(self, *args, **options):
"""
Execute a command immediately, but don't auto-retry on a
ConnectionError if we're already WATCHing a variable. Used when
issuing WATCH or subsequent commands retrieving their values but before
MULTI is called.
"""
command_name = args[0]
conn = self.connection
# if this is the first call, we need a connection
if not conn:
conn = self.connection_pool.get_connection(command_name,
self.shard_hint)
self.connection = conn
try:
conn.send_command(*args)
return self.parse_response(conn, command_name, **options)
except (ConnectionError, TimeoutError) as e:
conn.disconnect()
# if we were already watching a variable, the watch is no longer
# valid since this connection has died. raise a WatchError, which
# indicates the user should retry this transaction.
if self.watching:
self.reset()
raise WatchError("A ConnectionError occurred while "
"watching one or more keys")
# if retry_on_timeout is not set, or the error is not
# a TimeoutError, raise it
if not (conn.retry_on_timeout and isinstance(e, TimeoutError)):
self.reset()
raise
# retry_on_timeout is set, this is a TimeoutError and we are not
# already WATCHing any variables. retry the command.
try:
conn.send_command(*args)
return self.parse_response(conn, command_name, **options)
except (ConnectionError, TimeoutError):
# a subsequent failure should simply be raised
self.reset()
raise
def pipeline_execute_command(self, *args, **options):
"""
Stage a command to be executed when execute() is next called
Returns the current Pipeline object back so commands can be
chained together, such as:
pipe = pipe.set('foo', 'bar').incr('baz').decr('bang')
At some other point, you can then run: pipe.execute(),
which will execute all commands queued in the pipe.
"""
self.command_stack.append((args, options))
return self
def _execute_transaction(self, connection, commands, raise_on_error):
cmds = chain([(('MULTI', ), {})], commands, [(('EXEC', ), {})])
all_cmds = connection.pack_commands([args for args, options in cmds
if EMPTY_RESPONSE not in options])
connection.send_packed_command(all_cmds)
errors = []
# parse off the response for MULTI
# NOTE: we need to handle ResponseErrors here and continue
# so that we read all the additional command messages from
# the socket
try:
self.parse_response(connection, '_')
except ResponseError as e:
errors.append((0, e))
# and all the other commands
for i, command in enumerate(commands):
if EMPTY_RESPONSE in command[1]:
errors.append((i, command[1][EMPTY_RESPONSE]))
else:
try:
self.parse_response(connection, '_')
except ResponseError as e:
self.annotate_exception(e, i + 1, command[0])
errors.append((i, e))
# parse the EXEC.
try:
response = self.parse_response(connection, '_')
except ExecAbortError:
if errors:
raise errors[0][1]
raise
# EXEC clears any watched keys
self.watching = False
if response is None:
raise WatchError("Watched variable changed.")
# put any parse errors into the response
for i, e in errors:
response.insert(i, e)
if len(response) != len(commands):
self.connection.disconnect()
raise ResponseError("Wrong number of response items from "
"pipeline execution")
# find any errors in the response and raise if necessary
if raise_on_error:
self.raise_first_error(commands, response)
# We have to run response callbacks manually
data = []
for r, cmd in zip(response, commands):
if not isinstance(r, Exception):
args, options = cmd
command_name = args[0]
if command_name in self.response_callbacks:
r = self.response_callbacks[command_name](r, **options)
data.append(r)
return data
def _execute_pipeline(self, connection, commands, raise_on_error):
# build up all commands into a single request to increase network perf
all_cmds = connection.pack_commands([args for args, _ in commands])
connection.send_packed_command(all_cmds)
response = []
for args, options in commands:
try:
response.append(
self.parse_response(connection, args[0], **options))
except ResponseError as e:
response.append(e)
if raise_on_error:
self.raise_first_error(commands, response)
return response
def raise_first_error(self, commands, response):
for i, r in enumerate(response):
if isinstance(r, ResponseError):
self.annotate_exception(r, i + 1, commands[i][0])
raise r
def annotate_exception(self, exception, number, command):
cmd = ' '.join(map(safe_str, command))
msg = 'Command # %d (%s) of pipeline caused error: %s' % (
number, cmd, exception.args[0])
exception.args = (msg,) + exception.args[1:]
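The message rewriting in annotate_exception() can be sketched standalone; the error text and command are made up, and safe_str is a trivial stand-in for the client helper of the same name:

```python
def safe_str(value):
    # Stand-in for the client's safe_str helper.
    return str(value)

exception = Exception("wrong number of arguments")
number, command = 2, ('SET', 'foo')
cmd = ' '.join(map(safe_str, command))
msg = 'Command # %d (%s) of pipeline caused error: %s' % (
    number, cmd, exception.args[0])
exception.args = (msg,) + exception.args[1:]
# exception.args[0] ->
# 'Command # 2 (SET foo) of pipeline caused error: wrong number of arguments'
```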
def parse_response(self, connection, command_name, **options):
result = Redis.parse_response(
self, connection, command_name, **options)
if command_name in self.UNWATCH_COMMANDS:
self.watching = False
elif command_name == 'WATCH':
self.watching = True
return result
def load_scripts(self):
# make sure all scripts that are about to be run on this pipeline exist
scripts = list(self.scripts)
immediate = self.immediate_execute_command
shas = [s.sha for s in scripts]
# we can't use the normal script_* methods because they would just
# get buffered in the pipeline.
exists = immediate('SCRIPT EXISTS', *shas)
if not all(exists):
for s, exist in zip(scripts, exists):
if not exist:
s.sha = immediate('SCRIPT LOAD', s.script)
def execute(self, raise_on_error=True):
"Execute all the commands in the current pipeline"
stack = self.command_stack
if not stack and not self.watching:
return []
if self.scripts:
self.load_scripts()
if self.transaction or self.explicit_transaction:
execute = self._execute_transaction
else:
execute = self._execute_pipeline
conn = self.connection
if not conn:
conn = self.connection_pool.get_connection('MULTI',
self.shard_hint)
# assign to self.connection so reset() releases the connection
# back to the pool after we're done
self.connection = conn
try:
return execute(conn, stack, raise_on_error)
except (ConnectionError, TimeoutError) as e:
conn.disconnect()
# if we were watching a variable, the watch is no longer valid
# since this connection has died. raise a WatchError, which
# indicates the user should retry this transaction.
if self.watching:
raise WatchError("A ConnectionError occurred while "
"watching one or more keys")
# if retry_on_timeout is not set, or the error is not
# a TimeoutError, raise it
if not (conn.retry_on_timeout and isinstance(e, TimeoutError)):
raise
# retry a TimeoutError when retry_on_timeout is set
return execute(conn, stack, raise_on_error)
finally:
self.reset()
def watch(self, *names):
"Watches the values at keys ``names``"
if self.explicit_transaction:
raise RedisError('Cannot issue a WATCH after a MULTI')
return self.execute_command('WATCH', *names)
def unwatch(self):
"Unwatches all previously specified keys"
return self.watching and self.execute_command('UNWATCH') or True
class Script:
"An executable Lua script object returned by ``register_script``"
def __init__(self, registered_client, script):
self.registered_client = registered_client
self.script = script
# Precalculate and store the SHA1 hex digest of the script.
if isinstance(script, str):
# We need the encoding from the client in order to generate an
# accurate byte representation of the script
encoder = registered_client.connection_pool.get_encoder()
script = encoder.encode(script)
self.sha = hashlib.sha1(script).hexdigest()
def __call__(self, keys=[], args=[], client=None):
"Execute the script, passing any required ``args``"
if client is None:
client = self.registered_client
args = tuple(keys) + tuple(args)
# make sure the Redis server knows about the script
if isinstance(client, Pipeline):
# Make sure the pipeline can register the script before executing.
client.scripts.add(self)
try:
return client.evalsha(self.sha, len(keys), *args)
except NoScriptError:
# Maybe the client is pointed to a different server than the client
# that created this instance?
# Overwrite the sha just in case there was a discrepancy.
self.sha = client.script_load(self.script)
return client.evalsha(self.sha, len(keys), *args)
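The SHA1 precalculation in Script.__init__ can be sketched on its own: EVALSHA identifies a cached script by the 40-character hex digest of its encoded body. The Lua script here is a trivial, made-up example:

```python
import hashlib

script = b"return 1"  # hypothetical Lua script body, already encoded
sha = hashlib.sha1(script).hexdigest()
# sha is a 40-character lowercase hex string usable with EVALSHA
```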
class BitFieldOperation:
"""
Command builder for BITFIELD commands.
"""
def __init__(self, client, key, default_overflow=None):
self.client = client
self.key = key
self._default_overflow = default_overflow
self.reset()
def reset(self):
"""
Reset the state of the instance to when it was constructed
"""
self.operations = []
self._last_overflow = 'WRAP'
self.overflow(self._default_overflow or self._last_overflow)
def overflow(self, overflow):
"""
Update the overflow algorithm of successive INCRBY operations
:param overflow: Overflow algorithm, one of WRAP, SAT, FAIL. See the
Redis docs for descriptions of these algorithms.
:returns: a :py:class:`BitFieldOperation` instance.
"""
overflow = overflow.upper()
if overflow != self._last_overflow:
self._last_overflow = overflow
self.operations.append(('OVERFLOW', overflow))
return self
def incrby(self, fmt, offset, increment, overflow=None):
"""
Increment a bitfield by a given amount.
:param fmt: format-string for the bitfield being updated, e.g. 'u8'
for an unsigned 8-bit integer.
:param offset: offset (in number of bits). If prefixed with a
'#', this is an offset multiplier, e.g. given the arguments
fmt='u8', offset='#2', the offset will be 16.
:param int increment: value to increment the bitfield by.
:param str overflow: overflow algorithm. Defaults to WRAP, but other
acceptable values are SAT and FAIL. See the Redis docs for
descriptions of these algorithms.
:returns: a :py:class:`BitFieldOperation` instance.
"""
if overflow is not None:
self.overflow(overflow)
self.operations.append(('INCRBY', fmt, offset, increment))
return self
def get(self, fmt, offset):
"""
Get the value of a given bitfield.
:param fmt: format-string for the bitfield being read, e.g. 'u8' for
an unsigned 8-bit integer.
:param offset: offset (in number of bits). If prefixed with a
'#', this is an offset multiplier, e.g. given the arguments
fmt='u8', offset='#2', the offset will be 16.
:returns: a :py:class:`BitFieldOperation` instance.
"""
self.operations.append(('GET', fmt, offset))
return self
def set(self, fmt, offset, value):
"""
Set the value of a given bitfield.
:param fmt: format-string for the bitfield being read, e.g. 'u8' for
an unsigned 8-bit integer.
:param offset: offset (in number of bits). If prefixed with a
'#', this is an offset multiplier, e.g. given the arguments
fmt='u8', offset='#2', the offset will be 16.
:param int value: value to set at the given position.
:returns: a :py:class:`BitFieldOperation` instance.
"""
self.operations.append(('SET', fmt, offset, value))
return self
@property
def command(self):
cmd = ['BITFIELD', self.key]
for ops in self.operations:
cmd.extend(ops)
return cmd
def execute(self):
"""
Execute the operation(s) in a single BITFIELD command. The return value
is a list of values corresponding to each operation. If the client
used to create this instance was a pipeline, the list of values
will be present within the pipeline's execute.
"""
command = self.command
self.reset()
return self.client.execute_command(*command)
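The flat command list that the ``command`` property builds from queued operations can be sketched standalone; the key name and operations are made up:

```python
# Queued operations as (verb, *args) tuples, mirroring self.operations.
operations = [('OVERFLOW', 'WRAP'),
              ('INCRBY', 'u8', '#0', 1),
              ('GET', 'u8', '#0')]
cmd = ['BITFIELD', 'mykey']
for ops in operations:
    cmd.extend(ops)
# cmd -> ['BITFIELD', 'mykey', 'OVERFLOW', 'WRAP',
#         'INCRBY', 'u8', '#0', 1, 'GET', 'u8', '#0']
```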
| 38.094973 | 79 | 0.59875 |
179e6f66437758d13244042bb59f2c333c4782b4 | 11,260 | py | Python | sdk/python/pulumi_aws/sns/platform_application.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/sns/platform_application.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/sns/platform_application.py | johnktims/pulumi-aws | c838bc79043f5376c66fc66275a1e012edd3ab7d | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class PlatformApplication(pulumi.CustomResource):
arn: pulumi.Output[str]
"""
The ARN of the SNS platform application
"""
event_delivery_failure_topic_arn: pulumi.Output[str]
"""
SNS Topic triggered when a delivery to any of the platform endpoints associated with your platform application encounters a permanent failure.
"""
event_endpoint_created_topic_arn: pulumi.Output[str]
"""
SNS Topic triggered when a new platform endpoint is added to your platform application.
"""
event_endpoint_deleted_topic_arn: pulumi.Output[str]
"""
SNS Topic triggered when an existing platform endpoint is deleted from your platform application.
"""
event_endpoint_updated_topic_arn: pulumi.Output[str]
"""
SNS Topic triggered when an existing platform endpoint is changed from your platform application.
"""
failure_feedback_role_arn: pulumi.Output[str]
"""
The IAM role permitted to receive failure feedback for this application.
"""
name: pulumi.Output[str]
"""
The friendly name for the SNS platform application
"""
platform: pulumi.Output[str]
"""
The platform that the app is registered with. See [Platform][1] for supported platforms.
"""
platform_credential: pulumi.Output[str]
"""
Application Platform credential. See [Credential][1] for type of credential required for platform. The value of this attribute when stored into the state is only a hash of the real value, so it is not practical to use this as an attribute for other resources.
"""
platform_principal: pulumi.Output[str]
"""
Application Platform principal. See [Principal][2] for type of principal required for platform. The value of this attribute when stored into the state is only a hash of the real value, so it is not practical to use this as an attribute for other resources.
"""
success_feedback_role_arn: pulumi.Output[str]
"""
The IAM role permitted to receive success feedback for this application.
"""
success_feedback_sample_rate: pulumi.Output[str]
"""
The percentage of success to sample (0-100)
"""
def __init__(__self__, resource_name, opts=None, event_delivery_failure_topic_arn=None, event_endpoint_created_topic_arn=None, event_endpoint_deleted_topic_arn=None, event_endpoint_updated_topic_arn=None, failure_feedback_role_arn=None, name=None, platform=None, platform_credential=None, platform_principal=None, success_feedback_role_arn=None, success_feedback_sample_rate=None, __props__=None, __name__=None, __opts__=None):
"""
Provides an SNS platform application resource
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] event_delivery_failure_topic_arn: SNS Topic triggered when a delivery to any of the platform endpoints associated with your platform application encounters a permanent failure.
:param pulumi.Input[str] event_endpoint_created_topic_arn: SNS Topic triggered when a new platform endpoint is added to your platform application.
:param pulumi.Input[str] event_endpoint_deleted_topic_arn: SNS Topic triggered when an existing platform endpoint is deleted from your platform application.
:param pulumi.Input[str] event_endpoint_updated_topic_arn: SNS Topic triggered when an existing platform endpoint is changed from your platform application.
:param pulumi.Input[str] failure_feedback_role_arn: The IAM role permitted to receive failure feedback for this application.
:param pulumi.Input[str] name: The friendly name for the SNS platform application
:param pulumi.Input[str] platform: The platform that the app is registered with. See [Platform][1] for supported platforms.
:param pulumi.Input[str] platform_credential: Application Platform credential. See [Credential][1] for type of credential required for platform. The value of this attribute when stored into the state is only a hash of the real value, so therefore it is not practical to use this as an attribute for other resources.
:param pulumi.Input[str] platform_principal: Application Platform principal. See [Principal][2] for type of principal required for platform. The value of this attribute when stored into the state is only a hash of the real value, so therefore it is not practical to use this as an attribute for other resources.
:param pulumi.Input[str] success_feedback_role_arn: The IAM role permitted to receive success feedback for this application.
:param pulumi.Input[str] success_feedback_sample_rate: The percentage of success to sample (0-100)
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['event_delivery_failure_topic_arn'] = event_delivery_failure_topic_arn
__props__['event_endpoint_created_topic_arn'] = event_endpoint_created_topic_arn
__props__['event_endpoint_deleted_topic_arn'] = event_endpoint_deleted_topic_arn
__props__['event_endpoint_updated_topic_arn'] = event_endpoint_updated_topic_arn
__props__['failure_feedback_role_arn'] = failure_feedback_role_arn
__props__['name'] = name
if platform is None:
raise TypeError("Missing required property 'platform'")
__props__['platform'] = platform
if platform_credential is None:
raise TypeError("Missing required property 'platform_credential'")
__props__['platform_credential'] = platform_credential
__props__['platform_principal'] = platform_principal
__props__['success_feedback_role_arn'] = success_feedback_role_arn
__props__['success_feedback_sample_rate'] = success_feedback_sample_rate
__props__['arn'] = None
super(PlatformApplication, __self__).__init__(
'aws:sns/platformApplication:PlatformApplication',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, arn=None, event_delivery_failure_topic_arn=None, event_endpoint_created_topic_arn=None, event_endpoint_deleted_topic_arn=None, event_endpoint_updated_topic_arn=None, failure_feedback_role_arn=None, name=None, platform=None, platform_credential=None, platform_principal=None, success_feedback_role_arn=None, success_feedback_sample_rate=None):
"""
Get an existing PlatformApplication resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The ARN of the SNS platform application
:param pulumi.Input[str] event_delivery_failure_topic_arn: SNS Topic triggered when a delivery to any of the platform endpoints associated with your platform application encounters a permanent failure.
:param pulumi.Input[str] event_endpoint_created_topic_arn: SNS Topic triggered when a new platform endpoint is added to your platform application.
:param pulumi.Input[str] event_endpoint_deleted_topic_arn: SNS Topic triggered when an existing platform endpoint is deleted from your platform application.
:param pulumi.Input[str] event_endpoint_updated_topic_arn: SNS Topic triggered when an existing platform endpoint is changed from your platform application.
:param pulumi.Input[str] failure_feedback_role_arn: The IAM role permitted to receive failure feedback for this application.
:param pulumi.Input[str] name: The friendly name for the SNS platform application
:param pulumi.Input[str] platform: The platform that the app is registered with. See [Platform][1] for supported platforms.
:param pulumi.Input[str] platform_credential: Application Platform credential. See [Credential][1] for type of credential required for platform. The value of this attribute when stored into the state is only a hash of the real value, so therefore it is not practical to use this as an attribute for other resources.
:param pulumi.Input[str] platform_principal: Application Platform principal. See [Principal][2] for type of principal required for platform. The value of this attribute when stored into the state is only a hash of the real value, so therefore it is not practical to use this as an attribute for other resources.
:param pulumi.Input[str] success_feedback_role_arn: The IAM role permitted to receive success feedback for this application.
:param pulumi.Input[str] success_feedback_sample_rate: The percentage of success to sample (0-100)
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["arn"] = arn
__props__["event_delivery_failure_topic_arn"] = event_delivery_failure_topic_arn
__props__["event_endpoint_created_topic_arn"] = event_endpoint_created_topic_arn
__props__["event_endpoint_deleted_topic_arn"] = event_endpoint_deleted_topic_arn
__props__["event_endpoint_updated_topic_arn"] = event_endpoint_updated_topic_arn
__props__["failure_feedback_role_arn"] = failure_feedback_role_arn
__props__["name"] = name
__props__["platform"] = platform
__props__["platform_credential"] = platform_credential
__props__["platform_principal"] = platform_principal
__props__["success_feedback_role_arn"] = success_feedback_role_arn
__props__["success_feedback_sample_rate"] = success_feedback_sample_rate
return PlatformApplication(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
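The `translate_output_property`/`translate_input_property` hooks above map between the provider's camelCase property names and the Python snake_case attributes through lookup tables in the package's `tables` module. An illustrative sketch (not part of the original file) with hypothetical table contents:

```python
# Hypothetical stand-ins for tables._CAMEL_TO_SNAKE_CASE_TABLE and its inverse.
_CAMEL_TO_SNAKE = {"platformCredential": "platform_credential"}
_SNAKE_TO_CAMEL = {v: k for k, v in _CAMEL_TO_SNAKE.items()}

def translate_output_property(prop):
    # Fall back to the original name when no mapping exists.
    return _CAMEL_TO_SNAKE.get(prop) or prop

def translate_input_property(prop):
    return _SNAKE_TO_CAMEL.get(prop) or prop

assert translate_output_property("platformCredential") == "platform_credential"
assert translate_input_property("platform_credential") == "platformCredential"
assert translate_output_property("arn") == "arn"  # unmapped names pass through
```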
#---------------------------------------------------------------------------------
# File: IPython/zmq/zmqshell.py  (repo: btel/ipython, license: BSD-3-Clause-Clear)
#---------------------------------------------------------------------------------
"""A ZMQ-based subclass of InteractiveShell.
This code is meant to ease the refactoring of the base InteractiveShell into
something with a cleaner architecture for 2-process use, without actually
breaking InteractiveShell itself. So we're doing something a bit ugly, where
we subclass and override what we want to fix. Once this is working well, we
can go back to the base class and refactor the code for a cleaner inheritance
implementation that doesn't rely on so much monkeypatching.
But this lets us maintain a fully working IPython as we develop the new
machinery. This should thus be thought of as scaffolding.
"""
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from __future__ import print_function
# Stdlib
import inspect
import os
# Our own
from IPython.core.interactiveshell import (
InteractiveShell, InteractiveShellABC
)
from IPython.core import page
from IPython.core.autocall import ZMQExitAutocall
from IPython.core.displaypub import DisplayPublisher
from IPython.core.macro import Macro
from IPython.core.magic import MacroToEdit
from IPython.core.payloadpage import install_payload_page
from IPython.utils import io
from IPython.utils.path import get_py_filename
from IPython.utils.traitlets import Instance, Type, Dict, CBool
from IPython.utils.warn import warn
from IPython.zmq.displayhook import ZMQShellDisplayHook, _encode_binary
from IPython.zmq.session import extract_header
from session import Session
#-----------------------------------------------------------------------------
# Globals and side-effects
#-----------------------------------------------------------------------------
# Install the payload version of page.
install_payload_page()
#-----------------------------------------------------------------------------
# Functions and classes
#-----------------------------------------------------------------------------
class ZMQDisplayPublisher(DisplayPublisher):
"""A display publisher that publishes data using a ZeroMQ PUB socket."""
session = Instance(Session)
pub_socket = Instance('zmq.Socket')
parent_header = Dict({})
def set_parent(self, parent):
"""Set the parent for outbound messages."""
self.parent_header = extract_header(parent)
def publish(self, source, data, metadata=None):
if metadata is None:
metadata = {}
self._validate_data(source, data, metadata)
content = {}
content['source'] = source
_encode_binary(data)
content['data'] = data
content['metadata'] = metadata
self.session.send(
self.pub_socket, u'display_data', content,
parent=self.parent_header
)
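The `publish` method above wraps its arguments into the body of a `display_data` message before handing it to the ZMQ session. An illustrative sketch (not part of the original file) of the content dict it builds, with the transport omitted:

```python
def build_display_content(source, data, metadata=None):
    # Mirrors ZMQDisplayPublisher.publish: source tag, MIME-keyed data, metadata.
    return {"source": source, "data": data, "metadata": metadata or {}}

msg = build_display_content("IPython.core.display", {"text/plain": "42"})
assert msg["data"]["text/plain"] == "42"
assert msg["metadata"] == {}
```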
class ZMQInteractiveShell(InteractiveShell):
"""A subclass of InteractiveShell for ZMQ."""
displayhook_class = Type(ZMQShellDisplayHook)
display_pub_class = Type(ZMQDisplayPublisher)
# Override the traitlet in the parent class, because there's no point using
# readline for the kernel. Can be removed when the readline code is moved
# to the terminal frontend.
# FIXME. This is disabled for now, even though it may cause problems under
# Windows, because it breaks %run in the Qt console. See gh-617 for more
# details. Re-enable once we've fully tested that %run works in the Qt
# console with syntax highlighting in tracebacks.
# readline_use = CBool(False)
# /FIXME
exiter = Instance(ZMQExitAutocall)
def _exiter_default(self):
return ZMQExitAutocall(self)
keepkernel_on_exit = None
def init_environment(self):
"""Configure the user's environment.
"""
env = os.environ
# These two ensure 'ls' produces nice coloring on BSD-derived systems
env['TERM'] = 'xterm-color'
env['CLICOLOR'] = '1'
# Since normal pagers don't work at all (over pexpect we don't have
# single-key control of the subprocess), try to disable paging in
# subprocesses as much as possible.
env['PAGER'] = 'cat'
env['GIT_PAGER'] = 'cat'
def auto_rewrite_input(self, cmd):
"""Called to show the auto-rewritten input for autocall and friends.
FIXME: this payload is currently not correctly processed by the
frontend.
"""
new = self.displayhook.prompt1.auto_rewrite() + cmd
payload = dict(
source='IPython.zmq.zmqshell.ZMQInteractiveShell.auto_rewrite_input',
transformed_input=new,
)
self.payload_manager.write_payload(payload)
def ask_exit(self):
"""Engage the exit actions."""
payload = dict(
source='IPython.zmq.zmqshell.ZMQInteractiveShell.ask_exit',
exit=True,
keepkernel=self.keepkernel_on_exit,
)
self.payload_manager.write_payload(payload)
def _showtraceback(self, etype, evalue, stb):
exc_content = {
u'traceback' : stb,
u'ename' : unicode(etype.__name__),
u'evalue' : unicode(evalue)
}
dh = self.displayhook
# Send exception info over pub socket for other clients than the caller
# to pick up
exc_msg = dh.session.send(dh.pub_socket, u'pyerr', exc_content, dh.parent_header)
# FIXME - Hack: store exception info in shell object. Right now, the
# caller is reading this info after the fact, we need to fix this logic
# to remove this hack. Even uglier, we need to store the error status
# here, because in the main loop, the logic that sets it is being
# skipped because runlines swallows the exceptions.
exc_content[u'status'] = u'error'
self._reply_content = exc_content
# /FIXME
return exc_content
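`_showtraceback` packages the exception into a JSON-friendly dict before sending it on the pub socket. An illustrative, Python-3-style sketch of that packaging (the original uses `unicode`; the ZMQ session is omitted):

```python
def format_exception_content(etype, evalue, stb):
    # stb is the list of structured-traceback lines produced by the shell.
    return {
        "traceback": stb,
        "ename": str(etype.__name__),
        "evalue": str(evalue),
        "status": "error",
    }

try:
    1 / 0
except ZeroDivisionError as err:
    content = format_exception_content(type(err), err, ["<traceback lines>"])
assert content["ename"] == "ZeroDivisionError"
assert content["status"] == "error"
```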
#------------------------------------------------------------------------
# Magic overrides
#------------------------------------------------------------------------
# Once the base class stops inheriting from magic, this code needs to be
# moved into a separate machinery as well. For now, at least isolate here
# the magics which this class needs to implement differently from the base
# class, or that are unique to it.
def magic_doctest_mode(self,parameter_s=''):
"""Toggle doctest mode on and off.
This mode is intended to make IPython behave as much as possible like a
plain Python shell, from the perspective of how its prompts, exceptions
and output look. This makes it easy to copy and paste parts of a
session into doctests. It does so by:
- Changing the prompts to the classic ``>>>`` ones.
- Changing the exception reporting mode to 'Plain'.
- Disabling pretty-printing of output.
Note that IPython also supports the pasting of code snippets that have
leading '>>>' and '...' prompts in them. This means that you can paste
doctests from files or docstrings (even if they have leading
whitespace), and the code will execute correctly. You can then use
'%history -t' to see the translated history; this will give you the
input after removal of all the leading prompts and whitespace, which
can be pasted back into an editor.
With these features, you can switch into this mode easily whenever you
need to do testing and changes to doctests, without having to leave
your existing IPython session.
"""
from IPython.utils.ipstruct import Struct
# Shorthands
shell = self.shell
disp_formatter = self.shell.display_formatter
ptformatter = disp_formatter.formatters['text/plain']
# dstore is a data store kept in the instance metadata bag to track any
# changes we make, so we can undo them later.
dstore = shell.meta.setdefault('doctest_mode', Struct())
save_dstore = dstore.setdefault
# save a few values we'll need to recover later
mode = save_dstore('mode', False)
save_dstore('rc_pprint', ptformatter.pprint)
save_dstore('rc_plain_text_only',disp_formatter.plain_text_only)
save_dstore('xmode', shell.InteractiveTB.mode)
if mode == False:
# turn on
ptformatter.pprint = False
disp_formatter.plain_text_only = True
shell.magic_xmode('Plain')
else:
# turn off
ptformatter.pprint = dstore.rc_pprint
disp_formatter.plain_text_only = dstore.rc_plain_text_only
shell.magic_xmode(dstore.xmode)
# Store new mode and inform on console
dstore.mode = bool(1-int(mode))
mode_label = ['OFF','ON'][dstore.mode]
print('Doctest mode is:', mode_label)
# Send the payload back so that clients can modify their prompt display
payload = dict(
source='IPython.zmq.zmqshell.ZMQInteractiveShell.magic_doctest_mode',
mode=dstore.mode)
self.payload_manager.write_payload(payload)
def magic_edit(self,parameter_s='',last_call=['','']):
"""Bring up an editor and execute the resulting code.
Usage:
%edit [options] [args]
%edit runs IPython's editor hook. The default version of this hook is
set to call the __IPYTHON__.rc.editor command. This is read from your
environment variable $EDITOR. If this isn't found, it will default to
vi under Linux/Unix and to notepad under Windows. See the end of this
docstring for how to change the editor hook.
You can also set the value of this editor via the command line option
'-editor' or in your ipythonrc file. This is useful if you wish to use
specifically for IPython an editor different from your typical default
(and for Windows users who typically don't set environment variables).
This command allows you to conveniently edit multi-line code right in
your IPython session.
If called without arguments, %edit opens up an empty editor with a
temporary file and will execute the contents of this file when you
close it (don't forget to save it!).
Options:
-n <number>: open the editor at a specified line number. By default,
the IPython editor hook uses the unix syntax 'editor +N filename', but
you can configure this by providing your own modified hook if your
favorite editor supports line-number specifications with a different
syntax.
-p: this will call the editor with the same data as the previous time
it was used, regardless of how long ago (in your current session) it
was.
-r: use 'raw' input. This option only applies to input taken from the
user's history. By default, the 'processed' history is used, so that
magics are loaded in their transformed version to valid Python. If
this option is given, the raw input as typed as the command line is
used instead. When you exit the editor, it will be executed by
IPython's own processor.
-x: do not execute the edited code immediately upon exit. This is
mainly useful if you are editing programs which need to be called with
command line arguments, which you can then do using %run.
Arguments:
        If arguments are given, the following possibilities exist:
- The arguments are numbers or pairs of colon-separated numbers (like
1 4:8 9). These are interpreted as lines of previous input to be
loaded into the editor. The syntax is the same of the %macro command.
- If the argument doesn't start with a number, it is evaluated as a
variable and its contents loaded into the editor. You can thus edit
any string which contains python code (including the result of
previous edits).
- If the argument is the name of an object (other than a string),
IPython will try to locate the file where it was defined and open the
editor at the point where it is defined. You can use `%edit function`
to load an editor exactly at the point where 'function' is defined,
edit it and have the file be executed automatically.
If the object is a macro (see %macro for details), this opens up your
specified editor with a temporary file containing the macro's data.
Upon exit, the macro is reloaded with the contents of the file.
Note: opening at an exact line is only supported under Unix, and some
editors (like kedit and gedit up to Gnome 2.8) do not understand the
'+NUMBER' parameter necessary for this feature. Good editors like
(X)Emacs, vi, jed, pico and joe all do.
- If the argument is not found as a variable, IPython will look for a
file with that name (adding .py if necessary) and load it into the
editor. It will execute its contents with execfile() when you exit,
loading any code in the file into your interactive namespace.
After executing your code, %edit will return as output the code you
typed in the editor (except when it was an existing file). This way
you can reload the code in further invocations of %edit as a variable,
via _<NUMBER> or Out[<NUMBER>], where <NUMBER> is the prompt number of
the output.
Note that %edit is also available through the alias %ed.
This is an example of creating a simple function inside the editor and
then modifying it. First, start up the editor:
In [1]: ed
Editing... done. Executing edited code...
        Out[1]: 'def foo():\n    print "foo() was defined in an editing session"\n'
We can then call the function foo():
In [2]: foo()
foo() was defined in an editing session
Now we edit foo. IPython automatically loads the editor with the
(temporary) file where foo() was previously defined:
In [3]: ed foo
Editing... done. Executing edited code...
And if we call foo() again we get the modified version:
In [4]: foo()
foo() has now been changed!
Here is an example of how to edit a code snippet successive
times. First we call the editor:
In [5]: ed
Editing... done. Executing edited code...
hello
        Out[5]: "print 'hello'\n"
Now we call it again with the previous output (stored in _):
In [6]: ed _
Editing... done. Executing edited code...
hello world
        Out[6]: "print 'hello world'\n"
Now we call it with the output #8 (stored in _8, also as Out[8]):
In [7]: ed _8
Editing... done. Executing edited code...
hello again
        Out[7]: "print 'hello again'\n"
Changing the default editor hook:
If you wish to write your own editor hook, you can put it in a
configuration file which you load at startup time. The default hook
is defined in the IPython.core.hooks module, and you can use that as a
starting example for further modifications. That file also has
general instructions on how to set a new hook for use once you've
defined it."""
opts,args = self.parse_options(parameter_s,'prn:')
try:
filename, lineno, _ = self._find_edit_target(args, opts, last_call)
except MacroToEdit as e:
# TODO: Implement macro editing over 2 processes.
print("Macro editing not yet implemented in 2-process model.")
return
# Make sure we send to the client an absolute path, in case the working
# directory of client and kernel don't match
filename = os.path.abspath(filename)
payload = {
'source' : 'IPython.zmq.zmqshell.ZMQInteractiveShell.edit_magic',
'filename' : filename,
'line_number' : lineno
}
self.payload_manager.write_payload(payload)
def magic_gui(self, *args, **kwargs):
raise NotImplementedError(
'Kernel GUI support is not implemented yet, except for --pylab.')
def magic_pylab(self, *args, **kwargs):
raise NotImplementedError(
'pylab support must be enabled in command line options.')
# A few magics that are adapted to the specifics of using pexpect and a
# remote terminal
def magic_clear(self, arg_s):
"""Clear the terminal."""
if os.name == 'posix':
self.shell.system("clear")
else:
self.shell.system("cls")
if os.name == 'nt':
# This is the usual name in windows
magic_cls = magic_clear
# Terminal pagers won't work over pexpect, but we do have our own pager
def magic_less(self, arg_s):
"""Show a file through the pager.
Files ending in .py are syntax-highlighted."""
        with open(arg_s) as f:
            cont = f.read()
if arg_s.endswith('.py'):
cont = self.shell.pycolorize(cont)
page.page(cont)
magic_more = magic_less
# Man calls a pager, so we also need to redefine it
if os.name == 'posix':
def magic_man(self, arg_s):
"""Find the man page for the given command and display in pager."""
page.page(self.shell.getoutput('man %s | col -b' % arg_s,
split=False))
# FIXME: this is specific to the GUI, so we should let the gui app load
# magics at startup that are only for the gui. Once the gui app has proper
# profile and configuration management, we can have it initialize a kernel
# with a special config file that provides these.
def magic_guiref(self, arg_s):
"""Show a basic reference about the GUI console."""
from IPython.core.usage import gui_reference
page.page(gui_reference, auto_html=True)
def set_next_input(self, text):
"""Send the specified text to the frontend to be presented at the next
input cell."""
payload = dict(
source='IPython.zmq.zmqshell.ZMQInteractiveShell.set_next_input',
text=text
)
self.payload_manager.write_payload(payload)
InteractiveShellABC.register(ZMQInteractiveShell)
#---------------------------------------------------------------------------------
# File: Tests/Methods/Slot/test_SlotW61_meth.py  (repo: IrakozeFD/pyleecan, license: Apache-2.0)
#---------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
import pytest
from pyleecan.Classes.LamSlot import LamSlot
from pyleecan.Classes.SlotW61 import SlotW61
from pyleecan.Classes.Slot import Slot
from pyleecan.Methods.Slot.SlotW61 import (
S61_InnerCheckError,
S61_WindError,
S61_WindWError,
)
# For AlmostEqual
DELTA = 1e-5
slotW61_test = list()
# Internal Slot
lam = LamSlot(is_internal=True, Rext=0.1325)
lam.slot = SlotW61(
Zs=12,
W0=15e-3,
W1=40e-3,
W2=12.5e-3,
H0=15e-3,
H1=20e-3,
H2=25e-3,
H3=0,
H4=0,
W3=0,
)
slotW61_test.append(
{
"test_obj": lam,
"S_exp": 1.69308e-3,
"Aw": 0.29889,
"SW_exp": 6.875e-4,
"H_exp": 5.9942749e-2,
"Ao": 0.41033,
}
)
# Internal Slot small wind
lam = LamSlot(is_internal=True, Rext=0.1325)
lam.slot = SlotW61(
Zs=12,
W0=15e-3,
W1=40e-3,
W2=12.5e-3,
H0=15e-3,
H1=20e-3,
H2=25e-3,
H3=1e-3,
H4=2e-3,
W3=3e-3,
)
slotW61_test.append(
{
"test_obj": lam,
"S_exp": 1.69308e-3,
"Aw": 0.2419425,
"SW_exp": 4.73e-4,
"H_exp": 5.9942749e-2,
"Ao": 0.41033,
}
)
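Every test below compares a computed value against an expected one with the same relative-error bound, `abs((a - b) / a) < DELTA`. The assertion pattern, extracted as a self-contained helper (illustration only; the suite inlines it):

```python
def assert_rel_close(a, b, delta=1e-5):
    # Relative error |a - b| / |a| must stay below delta (DELTA in the suite).
    msg = "Return " + str(a) + " expected " + str(b)
    assert abs((a - b) / a) < delta, msg

assert_rel_close(1.69308e-3, 1.6930800001e-3)  # passes: tiny relative error
```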
@pytest.mark.METHODS
class Test_SlotW61_meth(object):
"""pytest for SlotW61 methods"""
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_comp_surface(self, test_dict):
"""Check that the computation of the surface is correct"""
test_obj = test_dict["test_obj"]
result = test_obj.slot.comp_surface()
a = result
b = test_dict["S_exp"]
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
b = Slot.comp_surface(test_obj.slot)
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_comp_surface_active(self, test_dict):
"""Check that the computation of the winding surface is correct"""
test_obj = test_dict["test_obj"]
result = test_obj.slot.comp_surface_active()
a = result
b = test_dict["SW_exp"]
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_comp_height(self, test_dict):
"""Check that the computation of the height is correct"""
test_obj = test_dict["test_obj"]
result = test_obj.slot.comp_height()
a = result
b = test_dict["H_exp"]
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
b = Slot.comp_height(test_obj.slot)
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_comp_angle_opening(self, test_dict):
"""Check that the computation of the average opening angle iscorrect"""
test_obj = test_dict["test_obj"]
a = test_obj.slot.comp_angle_opening()
b = test_dict["Ao"]
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
b = Slot.comp_angle_opening(test_obj.slot)
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_comp_angle_active_eq(self, test_dict):
"""Check that the computation of the average angle is correct"""
test_obj = test_dict["test_obj"]
result = test_obj.slot.comp_angle_active_eq()
a = result
b = test_dict["Aw"]
msg = "Return " + str(a) + " expected " + str(b)
assert abs((a - b) / a - 0) < DELTA, msg
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_build_geometry_active_is_stator_true(self, test_dict):
"""Check that the computation of the average angle is correct"""
test_obj = test_dict["test_obj"]
result = test_obj.slot.build_geometry_active(Nrad=1, Ntan=2)
a = result
assert "Wind_Stator_R0_T0_S0" == a[0].label
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_build_geometry_active_error(self, test_dict):
"""Check that the ERROR is raised"""
test_obj = test_dict["test_obj"]
with pytest.raises(S61_WindError) as context:
test_obj.slot.build_geometry_active(Nrad=0, Ntan=0)
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_check_Inner_error(self, test_dict):
"""Check that the ERROR is raised"""
test_obj = test_dict["test_obj"]
test_obj.is_internal = False
with pytest.raises(S61_InnerCheckError) as context:
test_obj.slot.check()
@pytest.mark.parametrize("test_dict", slotW61_test)
def test_check_Wind_error(self, test_dict):
"""Check that the ERROR is raised"""
test_obj = test_dict["test_obj"]
test_obj.slot.W3 = 50
test_obj.is_internal = True
with pytest.raises(S61_WindWError) as context:
test_obj.slot.check()
#---------------------------------------------------------------------------------
# File: 2_Computer_Vision/src/keras_yolo3/yolo.py  (repo: AntonMu/EQanalytics, license: MIT)
#---------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Class definition of YOLO_v3 style detection model on image and video
"""
import colorsys
import os
from timeit import default_timer as timer
import numpy as np
from keras import backend as K
from keras.models import load_model
from keras.layers import Input
from PIL import Image, ImageFont, ImageDraw
from .yolo3.model import yolo_eval, yolo_body, tiny_yolo_body
from .yolo3.utils import letterbox_image
import os
from keras.utils import multi_gpu_model
class YOLO(object):
_defaults = {
"model_path": "model_data/yolo.h5",
"anchors_path": "model_data/yolo_anchors.txt",
"classes_path": "model_data/coco_classes.txt",
"score": 0.3,
"iou": 0.45,
"model_image_size": (416, 416),
"gpu_num": 1,
}
@classmethod
def get_defaults(cls, n):
if n in cls._defaults:
return cls._defaults[n]
else:
return "Unrecognized attribute name '" + n + "'"
def __init__(self, **kwargs):
self.__dict__.update(self._defaults) # set up default values
self.__dict__.update(kwargs) # and update with user overrides
self.class_names = self._get_class()
self.anchors = self._get_anchors()
self.sess = K.get_session()
self.boxes, self.scores, self.classes = self.generate()
def _get_class(self):
classes_path = os.path.expanduser(self.classes_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def _get_anchors(self):
anchors_path = os.path.expanduser(self.anchors_path)
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(",")]
return np.array(anchors).reshape(-1, 2)
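`_get_anchors` parses a single comma-separated line from the anchors file into an N×2 array of (width, height) pairs via `reshape(-1, 2)`. The same parsing in plain Python, with a hypothetical file line (illustration only):

```python
line = "10,13, 16,30, 33,23"  # hypothetical anchors-file contents
values = [float(x) for x in line.split(",")]
# Pair consecutive values, the plain-Python analogue of reshape(-1, 2).
anchors = [tuple(values[i:i + 2]) for i in range(0, len(values), 2)]
assert anchors == [(10.0, 13.0), (16.0, 30.0), (33.0, 23.0)]
```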
def generate(self):
model_path = os.path.expanduser(self.model_path)
assert model_path.endswith(
".h5"), "Keras model or weights must be a .h5 file."
# Load model, or construct model and load weights.
start = timer()
num_anchors = len(self.anchors)
num_classes = len(self.class_names)
is_tiny_version = num_anchors == 6 # default setting
try:
self.yolo_model = load_model(model_path, compile=False)
except BaseException:
self.yolo_model = (
tiny_yolo_body(
Input(shape=(None, None, 3)), num_anchors // 2, num_classes
)
if is_tiny_version
else yolo_body(
Input(shape=(None, None, 3)), num_anchors // 3, num_classes
)
)
self.yolo_model.load_weights(
self.model_path
) # make sure model, anchors and classes match
else:
assert self.yolo_model.layers[-1].output_shape[-1] == num_anchors / len(
self.yolo_model.output
) * (
num_classes + 5
), "Mismatch between model and given anchor and class sizes"
end = timer()
print(
"{} model, anchors, and classes loaded in {:.2f}sec.".format(
model_path, end - start
)
)
# Generate colors for drawing bounding boxes.
if len(self.class_names) == 1:
self.colors = ["GreenYellow"]
else:
hsv_tuples = [
(x / len(self.class_names), 1.0, 1.0)
for x in range(len(self.class_names))
]
self.colors = list(
map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
self.colors = list(
map(
lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
self.colors,
)
)
# Fixed seed for consistent colors across runs.
np.random.seed(10101)
np.random.shuffle(
self.colors
) # Shuffle colors to decorrelate adjacent classes.
np.random.seed(None) # Reset seed to default.
# Generate output tensor targets for filtered bounding boxes.
self.input_image_shape = K.placeholder(shape=(2,))
if self.gpu_num >= 2:
self.yolo_model = multi_gpu_model(
self.yolo_model, gpus=self.gpu_num)
boxes, scores, classes = yolo_eval(
self.yolo_model.output,
self.anchors,
len(self.class_names),
self.input_image_shape,
score_threshold=self.score,
iou_threshold=self.iou,
)
return boxes, scores, classes
def detect_image(self, image):
start = timer()
if self.model_image_size != (None, None):
assert self.model_image_size[0] % 32 == 0, "Multiples of 32 required"
assert self.model_image_size[1] % 32 == 0, "Multiples of 32 required"
boxed_image = letterbox_image(
image, tuple(reversed(self.model_image_size)))
else:
new_image_size = (
image.width - (image.width % 32),
image.height - (image.height % 32),
)
boxed_image = letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype="float32")
print(image_data.shape)
image_data /= 255.0
image_data = np.expand_dims(image_data, 0) # Add batch dimension.
out_boxes, out_scores, out_classes = self.sess.run(
[self.boxes, self.scores, self.classes],
feed_dict={
self.yolo_model.input: image_data,
self.input_image_shape: [image.size[1], image.size[0]],
K.learning_phase(): 0,
},
)
print("Found {} boxes for {}".format(len(out_boxes), "img"))
out_prediction = []
font_path = os.path.join(
os.path.dirname(__file__),
"font/FiraMono-Medium.otf")
font = ImageFont.truetype(
font=font_path,
size=np.floor(
3e-2 *
image.size[1] +
0.5).astype("int32"))
thickness = (image.size[0] + image.size[1]) // 300
for i, c in reversed(list(enumerate(out_classes))):
predicted_class = self.class_names[c]
box = out_boxes[i]
score = out_scores[i]
label = "{} {:.2f}".format(predicted_class, score)
draw = ImageDraw.Draw(image)
label_size = draw.textsize(label, font)
top, left, bottom, right = box
top = max(0, np.floor(top + 0.5).astype("int32"))
left = max(0, np.floor(left + 0.5).astype("int32"))
bottom = min(image.size[1], np.floor(bottom + 0.5).astype("int32"))
right = min(image.size[0], np.floor(right + 0.5).astype("int32"))
# image was expanded to model_image_size: make sure it did not pick
# up any box outside of original image (run into this bug when
# lowering confidence threshold to 0.01)
if top > image.size[1] or right > image.size[0]:
continue
print(label, (left, top), (right, bottom))
# output as xmin, ymin, xmax, ymax, class_index, confidence
out_prediction.append([left, top, right, bottom, c, score])
if top - label_size[1] >= 0:
text_origin = np.array([left, top - label_size[1]])
else:
text_origin = np.array([left, bottom])
# My kingdom for a good redistributable image drawing library.
for i in range(thickness):
draw.rectangle([left + i, top + i, right - i,
bottom - i], outline=self.colors[c])
draw.rectangle(
[tuple(text_origin), tuple(text_origin + label_size)],
fill=self.colors[c],
)
draw.text(text_origin, label, fill=(0, 0, 0), font=font)
del draw
end = timer()
print("Time spent: {:.3f}sec".format(end - start))
return out_prediction, image
def close_session(self):
self.sess.close()
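# --- Hedged illustration (not part of the original class): the box clamping
# performed inside detect_image() above, extracted into a hypothetical
# standalone helper so the rounding/clipping behaviour can be checked without
# a loaded model. `clamp_box` mirrors the np.floor(... + 0.5) rounding and
# the min/max clipping against the original image bounds.
import numpy as _np

def clamp_box(box, image_size):
    """box: (top, left, bottom, right) floats; image_size: (width, height)."""
    top, left, bottom, right = box
    top = max(0, _np.floor(top + 0.5).astype("int32"))
    left = max(0, _np.floor(left + 0.5).astype("int32"))
    bottom = min(image_size[1], _np.floor(bottom + 0.5).astype("int32"))
    right = min(image_size[0], _np.floor(right + 0.5).astype("int32"))
    return top, left, bottom, right

# a box hanging off a 416x416 image is clipped back to its edges
assert clamp_box((-3.2, 10.6, 500.0, 420.9), (416, 416)) == (0, 11, 416, 416)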
def detect_video(yolo, video_path, output_path=""):
import cv2
vid = cv2.VideoCapture(video_path)
if not vid.isOpened():
raise IOError("Couldn't open webcam or video")
video_FourCC = cv2.VideoWriter_fourcc(
*"mp4v") # int(vid.get(cv2.CAP_PROP_FOURCC))
video_fps = vid.get(cv2.CAP_PROP_FPS)
video_size = (
int(vid.get(cv2.CAP_PROP_FRAME_WIDTH)),
int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT)),
)
    isOutput = output_path != ""
if isOutput:
print(output_path, video_FourCC, video_fps, video_size)
# print("!!! TYPE:", type(output_path), type(video_FourCC), type(video_fps), type(video_size))
out = cv2.VideoWriter(output_path, video_FourCC, video_fps, video_size)
accum_time = 0
curr_fps = 0
fps = "FPS: ??"
prev_time = timer()
while vid.isOpened():
return_value, frame = vid.read()
if not return_value:
break
# opencv images are BGR, translate to RGB
frame = frame[:, :, ::-1]
image = Image.fromarray(frame)
out_pred, image = yolo.detect_image(image)
result = np.asarray(image)
curr_time = timer()
exec_time = curr_time - prev_time
prev_time = curr_time
accum_time = accum_time + exec_time
curr_fps = curr_fps + 1
if accum_time > 1:
accum_time = accum_time - 1
fps = "FPS: " + str(curr_fps)
curr_fps = 0
cv2.putText(
result,
text=fps,
org=(3, 15),
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=0.50,
color=(255, 0, 0),
thickness=2,
)
# cv2.namedWindow("result", cv2.WINDOW_NORMAL)
# cv2.imshow("result", result)
if isOutput:
out.write(result[:, :, ::-1])
# if cv2.waitKey(1) & 0xFF == ord('q'):
# break
    vid.release()
    if isOutput:
        out.release()
yolo.close_session()
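# --- Hedged illustration (not part of the original file): the FPS
# bookkeeping inside detect_video() above, extracted into a hypothetical
# helper so it can be checked without a video stream. Frame durations are
# simulated rather than measured with timer().
def simulate_fps_counter(frame_times):
    accum_time, curr_fps, fps = 0.0, 0, "FPS: ??"
    for exec_time in frame_times:
        accum_time += exec_time
        curr_fps += 1
        if accum_time > 1:
            # a full second has elapsed: publish the frame count and reset
            accum_time -= 1
            fps = "FPS: " + str(curr_fps)
            curr_fps = 0
    return fps

# five simulated frames at 0.25s each roll the 1-second window exactly once
assert simulate_fps_counter([0.25] * 5) == "FPS: 5"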
| 35.443299 | 102 | 0.558561 |
b1dd51d660f095e0416bc23391a093a966472c15 | 3,583 | py | Python | django_grpc_framework/proto_serializers.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | ["Apache-2.0"] | 269 | 2020-05-06T03:22:43.000Z | 2022-03-26T21:05:24.000Z | django_grpc_framework/proto_serializers.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | ["Apache-2.0"] | 19 | 2020-06-03T03:46:39.000Z | 2022-03-30T20:24:55.000Z | django_grpc_framework/proto_serializers.py | amrhgh/django-grpc-framework | 158e1d9001bd426410ca962e2f72b14ee3e2f935 | ["Apache-2.0"] | 39 | 2020-05-27T07:23:12.000Z | 2022-03-27T13:10:24.000Z |
from rest_framework.serializers import (
BaseSerializer, Serializer, ListSerializer, ModelSerializer,
LIST_SERIALIZER_KWARGS,
)
from rest_framework.settings import api_settings
from rest_framework.exceptions import ValidationError
from django_grpc_framework.protobuf.json_format import (
message_to_dict, parse_dict
)
class BaseProtoSerializer(BaseSerializer):
def __init__(self, *args, **kwargs):
message = kwargs.pop('message', None)
if message is not None:
self.initial_message = message
kwargs['data'] = self.message_to_data(message)
super().__init__(*args, **kwargs)
def message_to_data(self, message):
"""Protobuf message -> Dict of python primitive datatypes."""
raise NotImplementedError('`message_to_data()` must be implemented.')
def data_to_message(self, data):
"""Protobuf message <- Dict of python primitive datatypes."""
raise NotImplementedError('`data_to_message()` must be implemented.')
@property
def message(self):
if not hasattr(self, '_message'):
self._message = self.data_to_message(self.data)
return self._message
@classmethod
def many_init(cls, *args, **kwargs):
allow_empty = kwargs.pop('allow_empty', None)
child_serializer = cls(*args, **kwargs)
list_kwargs = {
'child': child_serializer,
}
if allow_empty is not None:
list_kwargs['allow_empty'] = allow_empty
list_kwargs.update({
key: value for key, value in kwargs.items()
if key in LIST_SERIALIZER_KWARGS
})
meta = getattr(cls, 'Meta', None)
list_serializer_class = getattr(meta, 'list_serializer_class', ListProtoSerializer)
return list_serializer_class(*args, **list_kwargs)
class ProtoSerializer(BaseProtoSerializer, Serializer):
def message_to_data(self, message):
"""Protobuf message -> Dict of python primitive datatypes.
"""
return message_to_dict(message)
def data_to_message(self, data):
"""Protobuf message <- Dict of python primitive datatypes."""
assert hasattr(self, 'Meta'), (
'Class {serializer_class} missing "Meta" attribute'.format(
serializer_class=self.__class__.__name__
)
)
assert hasattr(self.Meta, 'proto_class'), (
'Class {serializer_class} missing "Meta.proto_class" attribute'.format(
serializer_class=self.__class__.__name__
)
)
return parse_dict(data, self.Meta.proto_class())
class ListProtoSerializer(BaseProtoSerializer, ListSerializer):
def message_to_data(self, message):
"""
List of protobuf messages -> List of dicts of python primitive datatypes.
"""
if not isinstance(message, list):
error_message = self.error_messages['not_a_list'].format(
input_type=type(message).__name__
)
raise ValidationError({
api_settings.NON_FIELD_ERRORS_KEY: [error_message]
}, code='not_a_list')
ret = []
for item in message:
ret.append(self.child.message_to_data(item))
return ret
def data_to_message(self, data):
"""
List of protobuf messages <- List of dicts of python primitive datatypes.
"""
return [
self.child.data_to_message(item) for item in data
]
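# --- Hedged illustration (not part of the original module): a minimal,
# dependency-free analogue of how ListProtoSerializer.message_to_data()
# above delegates per-item conversion to its child serializer. The _Fake*
# classes are illustrative stand-ins, not real DRF/gRPC serializers.
class _FakeChild:
    def message_to_data(self, message):
        return {"value": message}

class _FakeListSerializer:
    def __init__(self, child):
        self.child = child

    def message_to_data(self, message):
        if not isinstance(message, list):
            raise ValueError("Expected a list of messages")
        # each item is converted by the child, just like the method above
        return [self.child.message_to_data(item) for item in message]

assert _FakeListSerializer(_FakeChild()).message_to_data([1, 2]) == [
    {"value": 1}, {"value": 2}]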
class ModelProtoSerializer(ProtoSerializer, ModelSerializer):
    pass
| 35.83 | 91 | 0.643874 |
7f03c0dfeeb8ed15ff1c8f34e6c8087cc57c9470 | 2,508 | py | Python | tests/test_constraints.py | Andy-math/optimizer | a65f5ee54a0ae4e02aefb008d47c2d551d071ef0 | ["Apache-2.0"] | 1 | 2021-06-29T10:05:05.000Z | 2021-06-29T10:05:05.000Z | tests/test_constraints.py | Andy-math/optimizer | a65f5ee54a0ae4e02aefb008d47c2d551d071ef0 | ["Apache-2.0"] | null | null | null | tests/test_constraints.py | Andy-math/optimizer | a65f5ee54a0ae4e02aefb008d47c2d551d071ef0 | ["Apache-2.0"] | null | null | null |
# -*- coding: utf-8 -*-
import numpy
from optimizer import trust_region
from overloads.typedefs import ndarray
"""
f = x^2 + (y-2)^2 = x^2 + y^2 - 4y + 4
-x-1 <= y <= x+1
x-1 <= y <= -x+1
-x-y <= 1
-x+y <= 1
x-y <= 1
y-x <= 1
"""
class Test_constraints:
def test1(self) -> None:
def func(_x: ndarray) -> float:
x: float
y: float
x, y = _x
return x * x + (y - 2) * (y - 2)
def grad(_x: ndarray) -> ndarray:
x: float
y: float
x, y = _x
return numpy.array([2 * x, 2 * y - 4])
constr_A = numpy.array(
[
[-1, -1],
[1, -1],
[-1, 1],
[1, 1],
],
dtype=numpy.float64,
)
constr_b = numpy.ones((4,))
constr_lb = numpy.full((2,), -numpy.inf)
constr_ub = numpy.full((2,), numpy.inf)
opts = trust_region.Trust_Region_Options(max_iter=500)
opts.check_rel = 1
opts.check_abs = 1e-6
result = trust_region.trust_region(
func,
grad,
numpy.array([0.9, 0.0]),
(constr_A, constr_b, constr_lb, constr_ub),
opts,
)
# assert result.success
assert numpy.all(result.x.round(6) == numpy.array([0.0, 1.0]))
assert 3 < result.iter < 300
def test2(self) -> None:
def func(_x: ndarray) -> float:
x: float
y: float
x, y = _x
return (x + 2) * (x + 2) + (y - 2) * (y - 2)
def grad(_x: ndarray) -> ndarray:
x: float
y: float
x, y = _x
return numpy.array([2 * (x + 2), 2 * (y - 2)])
constr_A = numpy.empty((0, 2))
constr_b = numpy.ones((0,))
constr_lb = numpy.array([-1, -1], dtype=numpy.float64)
constr_ub = numpy.array([1, 1], dtype=numpy.float64)
opts = trust_region.Trust_Region_Options(max_iter=500)
opts.check_rel = 1
opts.check_abs = 1e-6
result = trust_region.trust_region(
func,
grad,
numpy.array([0.9, 0.9]),
(constr_A, constr_b, constr_lb, constr_ub),
opts,
)
assert result.success
assert numpy.all(result.x.round(6) == numpy.array([-1.0, 1.0]))
assert 3 < result.iter < 300
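# --- Hedged illustration (not part of the original test file): the analytic
# optimum asserted in test1 above, x = (0, 1), can be confirmed by a brute
# force scan of f(x, y) = x^2 + (y - 2)^2 over the feasible diamond
# |x| + |y| <= 1 implied by the four linear constraints.
import numpy as _np

def _brute_force_optimum():
    # integer-built grid so that 0.0 and +/-1.0 are represented exactly
    grid = _np.arange(-100, 101) / 100.0
    return min(
        (x * x + (y - 2.0) ** 2, (x, y))
        for x in grid
        for y in grid
        if abs(x) + abs(y) <= 1.0
    )

_f, (_x, _y) = _brute_force_optimum()
assert _f == 1.0 and (_x, _y) == (0.0, 1.0)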
if __name__ == "__main__":
Test_constraints().test1()
Test_constraints().test2()
| 26.125 | 71 | 0.469697 |
85d683714d531ae692f4b2fa142f7782b706f04d | 1,219 | py | Python | 6002x/findCombination.py | CarlosEduardoAS/MITx | 532695d69c77581b6df80c145283b349b75e4973 | ["MIT"] | null | null | null | 6002x/findCombination.py | CarlosEduardoAS/MITx | 532695d69c77581b6df80c145283b349b75e4973 | ["MIT"] | null | null | null | 6002x/findCombination.py | CarlosEduardoAS/MITx | 532695d69c77581b6df80c145283b349b75e4973 | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-
"""
Created on Tue May 25 16:10:53 2021
@author: caear
"""
import numpy
import itertools
def find_combination(choices, total):
"""
choices: a non-empty list of ints
total: a positive int
Returns result, a numpy.array of length len(choices)
such that
* each element of result is 0 or 1
* sum(result*choices) == total
* sum(result) is as small as possible
In case of ties, returns any result that works.
If there is no result that gives the exact total,
pick the one that gives sum(result*choices) closest
to total without going over.
"""
power_set = []
for i in itertools.product([1,0], repeat = len(choices)):
power_set.append(numpy.array(i))
filter_set_eq = []
filter_set_less = []
for j in power_set:
if sum(j*choices) == total:
filter_set_eq.append(j)
elif sum(j*choices) < total:
filter_set_less.append(j)
    if len(filter_set_eq) > 0:
        # exact matches exist: return the one that uses the fewest choices
        return min(filter_set_eq, key=lambda mask: sum(mask))
    else:
        # no exact match: return the selection whose value sum is closest to
        # total without going over, per the docstring (not the largest mask)
        return max(filter_set_less, key=lambda mask: sum(mask * choices))
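# --- Hedged usage sketch (not part of the original file): the 0/1 selection
# mask enumeration that find_combination() builds with itertools.product can
# be reproduced directly for a small instance.
import itertools as _it
import numpy as _np

_choices = _np.array([1, 2, 2, 3])
# every subset of choices as a 0/1 inclusion mask
_masks = [_np.array(m) for m in _it.product([1, 0], repeat=len(_choices))]
_exact = [m for m in _masks if (m * _choices).sum() == 4]
_best = min(_exact, key=lambda m: m.sum())
# an exact total of 4 is reachable with just two picks (1+3 or 2+2)
assert (_best * _choices).sum() == 4 and _best.sum() == 2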
| 29.02381 | 77 | 0.61854 |
a61712d357d7779fb92b00a8ec32fa05acd9f284 | 2,621 | py | Python | DataLoader.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | ["MIT"] | 13 | 2019-08-14T09:36:03.000Z | 2021-04-05T06:35:05.000Z | DataLoader.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | ["MIT"] | 6 | 2019-09-26T09:45:12.000Z | 2022-03-11T23:56:08.000Z | DataLoader.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | ["MIT"] | 6 | 2019-08-16T14:46:04.000Z | 2021-06-05T01:53:20.000Z |
#! /usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Author: _defined
@Time: 2019/8/9 16:55
@Description:
"""
import os
import tensorflow as tf
from settings import (config, DataMode)
class DataLoader(object):
"""
data loader for train, test and validation
"""
def __init__(self, mode):
"""
:param mode: DataMode.Train, DataMode.Test or DataMode.Val
"""
self.mode = mode
self._size = 0
@staticmethod
def _parse_example(serial_example):
features = tf.io.parse_single_example(
serial_example,
features={
'label': tf.io.FixedLenFeature([], tf.string),
'image': tf.io.FixedLenFeature([], tf.string),
}
)
shape = (config.resize[0], config.resize[1], config.channel)
image = tf.reshape(tf.io.decode_raw(features['image'], tf.float32), shape)
label = tf.reshape(tf.io.decode_raw(features['label'], tf.int32), [config.max_seq_len, ])
# label = tf.one_hot(tf.reshape(label, [config.max_seq_len, ]), self.n_class)
return image, label
def _load_dataset_from_tfrecords(self):
"""
:return:
"""
tfrecord_dir = './dataset/{}'.format(config.dataset)
path = os.path.join(tfrecord_dir, "{}_{}.tfrecords".format(config.dataset, self.mode))
if not os.path.exists(path):
raise FileNotFoundError("No tfrecords file found. Please execute 'make_dataset.py' before your training")
dataset = tf.data.TFRecordDataset(filenames=path).map(self._parse_example)
self._size = len([_ for _ in dataset])
return dataset
def load_batch_from_tfrecords(self):
"""
:return: tf.Dataset
"""
min_after_dequeue = 2000
dataset = self._load_dataset_from_tfrecords()
dataset = dataset.shuffle(min_after_dequeue).batch(config.batch_size)
return dataset
def load_all_from_tfreocrds(self):
"""
:return:
"""
return self._load_dataset_from_tfrecords()
@property
def size(self):
"""
:return: dataset size
"""
return self._size
if __name__ == '__main__':
import matplotlib.pyplot as plt
import numpy as np
dataloader = DataLoader(DataMode.Test)
la = list()
dataset = dataloader.load_all_from_tfreocrds()
print(dataloader.size)
# print(dataloader.size())
for i in range(2):
for data in dataset:
images, labels = data
print(labels.shape)
la.append(labels)
print(len(la))
| 28.802198 | 117 | 0.602823 |
bf2b03d05e0bc12787c098e6fbf6ef08a5c89301 | 1,685 | py | Python | Scripts/enhancer.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | ["MIT"] | null | null | null | Scripts/enhancer.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | ["MIT"] | null | null | null | Scripts/enhancer.py | shashank7991/eBuy | 2e65572967b33e7205b38c048b7be2d9943173b6 | ["MIT"] | null | null | null |
#!c:\users\krutarth\desktop\ecommerce-2\scripts\python.exe
#
# The Python Imaging Library
# $Id$
#
# this demo script creates four windows containing an image and a slider.
# drag the slider to modify the image.
#
try:
from tkinter import Tk, Toplevel, Frame, Label, Scale, HORIZONTAL
except ImportError:
from Tkinter import Tk, Toplevel, Frame, Label, Scale, HORIZONTAL
from PIL import Image, ImageTk, ImageEnhance
import sys
#
# enhancer widget
class Enhance(Frame):
def __init__(self, master, image, name, enhancer, lo, hi):
Frame.__init__(self, master)
# set up the image
self.tkim = ImageTk.PhotoImage(image.mode, image.size)
self.enhancer = enhancer(image)
self.update("1.0") # normalize
# image window
Label(self, image=self.tkim).pack()
# scale
s = Scale(self, label=name, orient=HORIZONTAL,
from_=lo, to=hi, resolution=0.01,
command=self.update)
s.set(self.value)
s.pack()
def update(self, value):
self.value = float(value)
self.tkim.paste(self.enhancer.enhance(self.value))
#
# main
if len(sys.argv) != 2:
print("Usage: enhancer file")
sys.exit(1)
root = Tk()
im = Image.open(sys.argv[1])
im.thumbnail((200, 200))
Enhance(root, im, "Color", ImageEnhance.Color, 0.0, 4.0).pack()
Enhance(Toplevel(), im, "Sharpness", ImageEnhance.Sharpness, -2.0, 2.0).pack()
Enhance(Toplevel(), im, "Brightness", ImageEnhance.Brightness, -1.0, 3.0).pack()
Enhance(Toplevel(), im, "Contrast", ImageEnhance.Contrast, -1.0, 3.0).pack()
root.mainloop()
| 26.328125 | 81 | 0.623739 |
6adf9eb911309b7eb19135e60f97456a99d11db1 | 2,404 | py | Python | sdk/python/pulumi_aws/wafregional/geo_match_set.py | Charliekenney23/pulumi-aws | 55bd0390160d27350b297834026fee52114a2d41 | ["ECL-2.0", "Apache-2.0"] | null | null | null | sdk/python/pulumi_aws/wafregional/geo_match_set.py | Charliekenney23/pulumi-aws | 55bd0390160d27350b297834026fee52114a2d41 | ["ECL-2.0", "Apache-2.0"] | null | null | null | sdk/python/pulumi_aws/wafregional/geo_match_set.py | Charliekenney23/pulumi-aws | 55bd0390160d27350b297834026fee52114a2d41 | ["ECL-2.0", "Apache-2.0"] | 1 | 2021-03-08T15:05:29.000Z | 2021-03-08T15:05:29.000Z |
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from .. import utilities, tables
class GeoMatchSet(pulumi.CustomResource):
geo_match_constraints: pulumi.Output[list]
"""
The Geo Match Constraint objects which contain the country that you want AWS WAF to search for.
"""
name: pulumi.Output[str]
"""
The name or description of the Geo Match Set.
"""
def __init__(__self__, resource_name, opts=None, geo_match_constraints=None, name=None, __name__=None, __opts__=None):
"""
Provides a WAF Regional Geo Match Set Resource
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[list] geo_match_constraints: The Geo Match Constraint objects which contain the country that you want AWS WAF to search for.
:param pulumi.Input[str] name: The name or description of the Geo Match Set.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if not resource_name:
raise TypeError('Missing resource name argument (for URN creation)')
if not isinstance(resource_name, str):
raise TypeError('Expected resource name to be a string')
if opts and not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
__props__ = dict()
__props__['geo_match_constraints'] = geo_match_constraints
__props__['name'] = name
super(GeoMatchSet, __self__).__init__(
'aws:wafregional/geoMatchSet:GeoMatchSet',
resource_name,
__props__,
opts)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 39.409836 | 152 | 0.68178 |
c319b98e0d2df60d12ced9900fd1f3c30c922a6b | 918 | py | Python | azure-mgmt-datalake-analytics/azure/mgmt/datalake/analytics/catalog/models/usql_table_statistics_paged.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | ["MIT"] | 2 | 2020-07-29T14:22:17.000Z | 2020-11-06T18:47:40.000Z | azure-mgmt-datalake-analytics/azure/mgmt/datalake/analytics/catalog/models/usql_table_statistics_paged.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | ["MIT"] | 1 | 2016-08-01T07:37:04.000Z | 2016-08-01T07:37:04.000Z | azure-mgmt-datalake-analytics/azure/mgmt/datalake/analytics/catalog/models/usql_table_statistics_paged.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | ["MIT"] | 1 | 2018-11-09T06:17:41.000Z | 2018-11-09T06:17:41.000Z |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.paging import Paged
class USqlTableStatisticsPaged(Paged):
"""
A paging container for iterating over a list of USqlTableStatistics object
"""
_attribute_map = {
'next_link': {'key': 'nextLink', 'type': 'str'},
'current_page': {'key': 'value', 'type': '[USqlTableStatistics]'}
}
def __init__(self, *args, **kwargs):
super(USqlTableStatisticsPaged, self).__init__(*args, **kwargs)
| 32.785714 | 78 | 0.578431 |
0f5687e4cca4b17903780e38735c112312594054 | 1,474 | py | Python | praw/util/cache.py | nickatnight/praw | 6ba5c92e5d5210338c0a2a2755a5e5e226a002fa | ["BSD-2-Clause"] | 1 | 2020-11-09T00:09:23.000Z | 2020-11-09T00:09:23.000Z | praw/util/cache.py | nickatnight/praw | 6ba5c92e5d5210338c0a2a2755a5e5e226a002fa | ["BSD-2-Clause"] | 1 | 2021-03-13T21:26:49.000Z | 2021-03-13T21:26:49.000Z | praw/util/cache.py | nickatnight/praw | 6ba5c92e5d5210338c0a2a2755a5e5e226a002fa | ["BSD-2-Clause"] | null | null | null |
"""Caching utilities."""
from typing import Any, Callable, Optional
class cachedproperty:
"""A decorator for caching a property's result.
Similar to `property`, but the wrapped method's result is cached
on the instance. This is achieved by setting an entry in the object's
instance dictionary with the same name as the property. When the name
is later accessed, the value in the instance dictionary takes precedence
over the (non-data descriptor) property.
This is useful for implementing lazy-loaded properties.
The cache can be invalidated via `delattr()`, or by modifying `__dict__`
directly. It will be repopulated on next access.
.. versionadded:: 6.3.0
"""
def __init__(self, func: Callable[[Any], Any], doc: Optional[str] = None):
"""Initialize the descriptor."""
self.func = self.__wrapped__ = func
if doc is None:
doc = func.__doc__
self.__doc__ = doc
def __get__(self, obj: Optional[Any], objtype: Optional[Any] = None) -> Any:
"""Implement descriptor getter.
Calculate the property's value and then store it in the
associated object's instance dictionary.
"""
if obj is None:
return self
value = obj.__dict__[self.func.__name__] = self.func(obj)
return value
def __repr__(self) -> str:
"""Return repr(self)."""
return "<%s %s>" % (self.__class__.__name__, self.func)
| 32.755556 | 80 | 0.651289 |
785b0c49026272e86d45ad297f32b139e88d4124 | 3,207 | py | Python | searx/engines/json_engine.py | xu1991/open | 5398dab4ba669b3ca87d9fe26eb24431c45f153e | ["CC0-1.0"] | 4 | 2018-09-07T15:35:24.000Z | 2019-03-27T09:48:12.000Z | master/searx-master/searx/engines/json_engine.py | AlexRogalskiy/DevArtifacts | 931aabb8cbf27656151c54856eb2ea7d1153203a | ["MIT"] | 371 | 2020-03-04T21:51:56.000Z | 2022-03-31T20:59:11.000Z | searx/engines/json_engine.py | xu1991/open | 5398dab4ba669b3ca87d9fe26eb24431c45f153e | ["CC0-1.0"] | 3 | 2019-06-18T19:57:17.000Z | 2020-11-06T03:55:08.000Z |
try:  # Iterable moved to collections.abc in Python 3.3
    from collections.abc import Iterable
except ImportError:  # Python 2 fallback
    from collections import Iterable
from json import loads
from sys import version_info
from searx.url_utils import urlencode
from searx.utils import to_string
if version_info[0] == 3:
unicode = str
search_url = None
url_query = None
content_query = None
title_query = None
paging = False
suggestion_query = ''
results_query = ''
# parameters for engines with paging support
#
# number of results on each page
# (only needed if the site requires not a page number, but an offset)
page_size = 1
# number of the first page (usually 0 or 1)
first_page_num = 1
def iterate(iterable):
if type(iterable) == dict:
it = iterable.items()
else:
it = enumerate(iterable)
for index, value in it:
yield str(index), value
def is_iterable(obj):
if type(obj) == str:
return False
if type(obj) == unicode:
return False
return isinstance(obj, Iterable)
def parse(query):
q = []
for part in query.split('/'):
if part == '':
continue
else:
q.append(part)
return q
def do_query(data, q):
ret = []
if not q:
return ret
qkey = q[0]
for key, value in iterate(data):
if len(q) == 1:
if key == qkey:
ret.append(value)
elif is_iterable(value):
ret.extend(do_query(value, q))
else:
if not is_iterable(value):
continue
if key == qkey:
ret.extend(do_query(value, q[1:]))
else:
ret.extend(do_query(value, q))
return ret
def query(data, query_string):
q = parse(query_string)
return do_query(data, q)
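# --- Hedged illustration (not part of the original engine): a condensed,
# standalone equivalent of parse()/do_query() above, showing how a
# slash-separated query such as "results/title" collects every match at any
# nesting depth. `_tiny_query` is illustrative only.
def _tiny_query(node, keys):
    out = []
    if not keys:
        return out
    first = keys[0]
    items = node.items() if isinstance(node, dict) else enumerate(node)
    for key, value in items:
        nested = isinstance(value, (dict, list))
        if str(key) == first:
            if len(keys) == 1:
                out.append(value)  # full path matched
            elif nested:
                out.extend(_tiny_query(value, keys[1:]))
        elif nested:
            # key did not match: keep searching deeper with the full path
            out.extend(_tiny_query(value, keys))
    return out

_data = {"results": [{"title": "t1", "sub": {"title": "t2"}}]}
assert _tiny_query(_data, ["title"]) == ["t1", "t2"]
assert _tiny_query(_data, ["results", "title"]) == ["t1", "t2"]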
def request(query, params):
query = urlencode({'q': query})[2:]
fp = {'query': query}
if paging and search_url.find('{pageno}') >= 0:
fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num
params['url'] = search_url.format(**fp)
params['query'] = query
return params
def response(resp):
results = []
json = loads(resp.text)
if results_query:
rs = query(json, results_query)
if not len(rs):
return results
for result in rs[0]:
try:
url = query(result, url_query)[0]
title = query(result, title_query)[0]
            except Exception:  # skip results missing url/title
continue
try:
content = query(result, content_query)[0]
            except Exception:
content = ""
results.append({
'url': to_string(url),
'title': to_string(title),
'content': to_string(content),
})
else:
for url, title, content in zip(
query(json, url_query),
query(json, title_query),
query(json, content_query)
):
results.append({
'url': to_string(url),
'title': to_string(title),
'content': to_string(content),
})
if not suggestion_query:
return results
for suggestion in query(json, suggestion_query):
results.append({'suggestion': suggestion})
return results
| 23.408759 | 74 | 0.55067 |
306aea119205928453f1400fecc1fd4f813d3152 | 6,600 | py | Python | test/completion/stdlib.py | haoqixu/jedi | ea93dbc08eac0a1b8c39e15c554c0b0c4ce65591 | ["MIT"] | 10 | 2020-07-21T21:59:54.000Z | 2021-07-19T11:01:47.000Z | test/completion/stdlib.py | haoqixu/jedi | ea93dbc08eac0a1b8c39e15c554c0b0c4ce65591 | ["MIT"] | null | null | null | test/completion/stdlib.py | haoqixu/jedi | ea93dbc08eac0a1b8c39e15c554c0b0c4ce65591 | ["MIT"] | 1 | 2021-01-30T18:17:01.000Z | 2021-01-30T18:17:01.000Z |
"""
std library stuff
"""
# -----------------
# builtins
# -----------------
arr = ['']
#? str()
sorted(arr)[0]
#? str()
next(reversed(arr))
next(reversed(arr))
# should not fail if there's no return value.
def yielder():
yield None
#? None
next(reversed(yielder()))
# empty reversed should not raise an error
#?
next(reversed())
#? str() bytes()
next(open(''))
#? int()
{'a':2}.setdefault('a', 3)
# Compiled classes should have the meta class attributes.
#? ['__itemsize__']
tuple.__itemsize__
#? []
tuple().__itemsize__
# -----------------
# type() calls with one parameter
# -----------------
#? int
type(1)
#? int
type(int())
#? type
type(int)
#? type
type(type)
#? list
type([])
def x():
yield 1
generator = type(x())
#? generator
type(x for x in [])
#? type(x)
type(lambda: x)
import math
import os
#? type(os)
type(math)
class X(): pass
#? type
type(X)
# -----------------
# type() calls with multiple parameters
# -----------------
X = type('X', (object,), dict(a=1))
# Doesn't work yet.
#?
X.a
#?
X
if os.path.isfile():
#? ['abspath']
fails = os.path.abspath
# The type vars and other underscored things from typeshed should not be
# findable.
#?
os._T
with open('foo') as f:
for line in f.readlines():
#? str() bytes()
line
# -----------------
# enumerate
# -----------------
for i, j in enumerate(["as", "ad"]):
#? int()
i
#? str()
j
# -----------------
# re
# -----------------
import re
c = re.compile(r'a')
# re.compile should not return str -> issue #68
#? []
c.startswith
#? int()
c.match().start()
#? int()
re.match(r'a', 'a').start()
for a in re.finditer('a', 'a'):
#? int()
a.start()
# -----------------
# ref
# -----------------
import weakref
#? int()
weakref.proxy(1)
#? weakref.ref()
weakref.ref(1)
#? int() None
weakref.ref(1)()
# -----------------
# functools
# -----------------
import functools
basetwo = functools.partial(int, base=2)
#? int()
basetwo()
def function(a, b):
return a, b
a = functools.partial(function, 0)
#? int()
a('')[0]
#? str()
a('')[1]
kw = functools.partial(function, b=1.0)
tup = kw(1)
#? int()
tup[0]
#? float()
tup[1]
def my_decorator(f):
@functools.wraps(f)
def wrapper(*args, **kwds):
return f(*args, **kwds)
return wrapper
@my_decorator
def example(a):
return a
#? str()
example('')
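# A hedged side note (not one of the completion cases above): functools.wraps
# is what keeps the decorated function's metadata intact, which can be
# checked directly. `_plain_decorator` and `_greet` are illustrative names.
def _plain_decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwds):
        return f(*args, **kwds)
    return wrapper

@_plain_decorator
def _greet(name):
    """Say hello."""
    return "hello " + name

assert _greet.__name__ == "_greet" and _greet.__doc__ == "Say hello."
assert _greet("jedi") == "hello jedi"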
# -----------------
# sqlite3 (#84)
# -----------------
import sqlite3
#? sqlite3.Connection()
con = sqlite3.connect()
#? sqlite3.Cursor()
c = con.cursor()
def huhu(db):
"""
:type db: sqlite3.Connection
:param db: the db connection
"""
#? sqlite3.Connection()
db
with sqlite3.connect() as c:
#? sqlite3.Connection()
c
# -----------------
# hashlib
# -----------------
import hashlib
#? ['md5']
hashlib.md5
# -----------------
# copy
# -----------------
import copy
#? int()
copy.deepcopy(1)
#?
copy.copy()
# -----------------
# json
# -----------------
# We don't want any results for json, because it depends on IO.
import json
#?
json.load('asdf')
#?
json.loads('[1]')
# -----------------
# random
# -----------------
import random
class A(object):
def say(self): pass
class B(object):
def shout(self): pass
cls = random.choice([A, B])
#? ['say', 'shout']
cls().s
# -----------------
# random
# -----------------
import zipfile
z = zipfile.ZipFile("foo")
#? ['upper']
z.read('name').upper
# -----------------
# contextlib
# -----------------
import contextlib
with contextlib.closing('asd') as string:
#? str()
string
# -----------------
# operator
# -----------------
import operator
f = operator.itemgetter(1)
#? float()
f([1.0])
#? str()
f([1, ''])
g = operator.itemgetter(1, 2)
x1, x2 = g([1, 1.0, ''])
#? float()
x1
#? str()
x2
x1, x2 = g([1, ''])
#? str()
x1
#? int() str()
x2
# -----------------
# shlex
# -----------------
# Github issue #929
import shlex
qsplit = shlex.split("foo, ferwerwerw werw werw e")
for part in qsplit:
#? str()
part
# -----------------
# staticmethod, classmethod params
# -----------------
class F():
def __init__(self):
self.my_variable = 3
@staticmethod
def my_func(param):
#? []
param.my_
#? ['upper']
param.uppe
#? str()
return param
@staticmethod
def my_func_without_call(param):
#? []
param.my_
#? []
param.uppe
#?
return param
@classmethod
def my_method_without_call(cls, param):
#?
cls.my_variable
#? ['my_method', 'my_method_without_call']
cls.my_meth
#?
return param
@classmethod
def my_method(cls, param):
#?
cls.my_variable
#? ['my_method', 'my_method_without_call']
cls.my_meth
#?
return param
#? str()
F.my_func('')
#? str()
F.my_method('')
# -----------------
# Unknown metaclass
# -----------------
# Github issue 1321
class Meta(object):
pass
class Test(metaclass=Meta):
def test_function(self):
result = super(Test, self).test_function()
#? []
result.
# -----------------
# Enum
# -----------------
# python > 2.7
import enum
class X(enum.Enum):
attr_x = 3
attr_y = 2.0
#? ['mro']
X.mro
#? ['attr_x', 'attr_y']
X.attr_
#? str()
X.attr_x.name
#? int()
X.attr_x.value
#? str()
X.attr_y.name
#? float()
X.attr_y.value
#? str()
X().name
#? float()
X().attr_x.attr_y.value
# -----------------
# functools Python 3.5+
# -----------------
class X:
def function(self, a, b):
return a, b
a = functools.partialmethod(function, 0)
kw = functools.partialmethod(function, b=1.0)
just_partial = functools.partial(function, 1, 2.0)
#? int()
X().a('')[0]
#? str()
X().a('')[1]
# The access of partialmethods on classes are not 100% correct. This doesn't
# really matter, because nobody uses it like that anyway and would take quite a
# bit of work to fix all of these cases.
#? str()
X.a('')[0]
#?
X.a('')[1]
#? X()
X.a(X(), '')[0]
#? str()
X.a(X(), '')[1]
tup = X().kw(1)
#? int()
tup[0]
#? float()
tup[1]
tup = X.kw(1)
#?
tup[0]
#? float()
tup[1]
tup = X.kw(X(), 1)
#? int()
tup[0]
#? float()
tup[1]
#? float()
X.just_partial('')[0]
#? str()
X.just_partial('')[1]
#? float()
X().just_partial('')[0]
#? str()
X().just_partial('')[1]
# -----------------
# functools Python 3.8
# -----------------
# python >= 3.8
@functools.lru_cache
def x() -> int: ...
@functools.lru_cache()
def y() -> float: ...
@functools.lru_cache(8)
def z() -> str: ...
#? int()
x()
#? float()
y()
#? str()
z()
| 14.316703 | 79 | 0.506515 |
3ef23880f0b55cc7ccb0bf49524d8582edfba662 | 6,538 | py | Python | python/ray/serve/tests/test_cluster.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | 1 | 2021-09-20T15:45:59.000Z | 2021-09-20T15:45:59.000Z | python/ray/serve/tests/test_cluster.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | 53 | 2021-10-06T20:08:04.000Z | 2022-03-21T20:17:25.000Z | python/ray/serve/tests/test_cluster.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | 1 | 2022-03-27T09:01:59.000Z | 2022-03-27T09:01:59.000Z | from collections import defaultdict
import os
import sys
import time
import pytest
import requests
import ray
from ray import serve
from ray.cluster_utils import Cluster
from ray.serve.deployment_state import ReplicaStartupStatus, ReplicaState
from ray._private.test_utils import SignalActor, wait_for_condition
@pytest.fixture
def ray_cluster():
cluster = Cluster()
    yield cluster
serve.shutdown()
ray.shutdown()
cluster.shutdown()
def test_scale_up(ray_cluster):
cluster = ray_cluster
cluster.add_node(num_cpus=1)
cluster.connect(namespace="serve")
# By default, Serve controller and proxy actors use 0 CPUs,
# so initially there should only be room for 1 replica.
@serve.deployment(version="1", num_replicas=1, _health_check_period_s=1)
def D(*args):
return os.getpid()
def get_pids(expected, timeout=30):
pids = set()
start = time.time()
while len(pids) < expected:
pids.add(requests.get("http://localhost:8000/D").text)
if time.time() - start >= timeout:
raise TimeoutError("Timed out waiting for pids.")
return pids
serve.start(detached=True)
client = serve.api._connect()
D.deploy()
pids1 = get_pids(1)
D.options(num_replicas=3).deploy(_blocking=False)
# Check that a new replica has not started in 1.0 seconds. This
# doesn't guarantee that a new replica won't ever be started, but
# 1.0 seconds is a reasonable upper bound on replica startup time.
with pytest.raises(TimeoutError):
client._wait_for_deployment_healthy(D.name, timeout_s=1)
assert get_pids(1) == pids1
# Add a node with another CPU, another replica should get placed.
cluster.add_node(num_cpus=1)
with pytest.raises(TimeoutError):
client._wait_for_deployment_healthy(D.name, timeout_s=1)
pids2 = get_pids(2)
assert pids1.issubset(pids2)
# Add a node with another CPU, the final replica should get placed
# and the deploy goal should be done.
cluster.add_node(num_cpus=1)
client._wait_for_deployment_healthy(D.name)
pids3 = get_pids(3)
assert pids2.issubset(pids3)
@pytest.mark.skipif(sys.platform == "win32", reason="Flaky on Windows.")
def test_node_failure(ray_cluster):
cluster = ray_cluster
cluster.add_node(num_cpus=3)
cluster.connect(namespace="serve")
worker_node = cluster.add_node(num_cpus=2)
@serve.deployment(version="1", num_replicas=5)
def D(*args):
return os.getpid()
def get_pids(expected, timeout=30):
pids = set()
start = time.time()
while len(pids) < expected:
pids.add(requests.get("http://localhost:8000/D").text)
if time.time() - start >= timeout:
raise TimeoutError("Timed out waiting for pids.")
return pids
serve.start(detached=True)
print("Initial deploy.")
D.deploy()
pids1 = get_pids(5)
# Remove the node. There should still be three replicas running.
print("Kill node.")
cluster.remove_node(worker_node)
pids2 = get_pids(3)
assert pids2.issubset(pids1)
# Add a worker node back. One replica should get placed.
print("Add back first node.")
cluster.add_node(num_cpus=1)
pids3 = get_pids(4)
assert pids2.issubset(pids3)
# Add another worker node. One more replica should get placed.
print("Add back second node.")
cluster.add_node(num_cpus=1)
pids4 = get_pids(5)
assert pids3.issubset(pids4)
@pytest.mark.skipif(sys.platform == "win32", reason="Flaky on Windows.")
def test_replica_startup_status_transitions(ray_cluster):
cluster = ray_cluster
cluster.add_node(num_cpus=1)
cluster.connect(namespace="serve")
serve_instance = serve.start()
signal = SignalActor.remote()
@serve.deployment(version="1", ray_actor_options={"num_cpus": 2})
class E:
def __init__(self):
ray.get(signal.wait.remote())
E.deploy(_blocking=False)
def get_replicas(replica_state):
controller = serve_instance._controller
replicas = ray.get(controller._dump_replica_states_for_testing.remote(E.name))
return replicas.get([replica_state])
# wait for serve to start the replica, and catch a reference to it.
wait_for_condition(lambda: len(get_replicas(ReplicaState.STARTING)) > 0)
replica = get_replicas(ReplicaState.STARTING)[0]
# FIXME: We switched our code formatter from YAPF to Black. Check whether we still
# need shorthands and update the comment below. See issue #21318.
# declare shorthands as yapf doesn't like long lambdas
PENDING_ALLOCATION = ReplicaStartupStatus.PENDING_ALLOCATION
PENDING_INITIALIZATION = ReplicaStartupStatus.PENDING_INITIALIZATION
SUCCEEDED = ReplicaStartupStatus.SUCCEEDED
# currently there are no resources to allocate the replica
assert replica.check_started() == PENDING_ALLOCATION
# add the necessary resources to allocate the replica
cluster.add_node(num_cpus=4)
wait_for_condition(lambda: (ray.cluster_resources().get("CPU", 0) >= 4))
wait_for_condition(lambda: (ray.available_resources().get("CPU", 0) >= 2))
def is_replica_pending_initialization():
status = replica.check_started()
print(status)
return status == PENDING_INITIALIZATION
wait_for_condition(is_replica_pending_initialization, timeout=25)
    # send signal to complete replica initialization
signal.send.remote()
wait_for_condition(lambda: replica.check_started() == SUCCEEDED)
def test_intelligent_scale_down(ray_cluster):
cluster = ray_cluster
cluster.add_node(num_cpus=2)
cluster.add_node(num_cpus=2)
cluster.connect(namespace="serve")
serve.start()
@serve.deployment(version="1")
def f():
pass
def get_actor_distributions():
actors = ray.state.actors()
node_to_actors = defaultdict(list)
for actor in actors.values():
if "RayServeWrappedReplica" not in actor["ActorClassName"]:
continue
if actor["State"] != "ALIVE":
continue
node_to_actors[actor["Address"]["NodeID"]].append(actor)
return set(map(len, node_to_actors.values()))
f.options(num_replicas=3).deploy()
assert get_actor_distributions() == {2, 1}
f.options(num_replicas=2).deploy()
assert get_actor_distributions() == {2}
if __name__ == "__main__":
sys.exit(pytest.main(["-v", "-s", __file__]))
| 31.737864 | 86 | 0.689355 |
af0ca5610daabc4196a0cf4ab3b5e5d75f915c26 | 409 | py | Python | examples/lecture09b.py | SeattleCentral/ITC110 | b444097b10225396f3f4465c5cfc37ba442df951 | [
"MIT"
] | 12 | 2017-01-05T03:43:10.000Z | 2019-01-18T03:39:09.000Z | examples/lecture09b.py | ogarmela/ITC110-1 | b444097b10225396f3f4465c5cfc37ba442df951 | [
"MIT"
] | null | null | null | examples/lecture09b.py | ogarmela/ITC110-1 | b444097b10225396f3f4465c5cfc37ba442df951 | [
"MIT"
] | 2 | 2019-02-28T05:13:40.000Z | 2019-11-09T05:06:35.000Z | print("Enter a date formated as: YYYY-MM-DD")
# Print it out like Jan 23, 2018
date_input = input("Enter the date to format: ")
year = date_input[0:4]
month = int(date_input[5:7])
day = date_input[-2:]
months_list = [
"Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug",
"Sep", "Oct", "Nov", "Dec",
]
print("The formatted date is: {0} {1}, {2}".format(
months_list[month - 1], day, year))
| 22.722222 | 59 | 0.596577 |
4bb0108b5ff27caa13065353fdba1eb3d185f81a | 26,840 | py | Python | backend/tournesol/tests/test_api_comparison.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | backend/tournesol/tests/test_api_comparison.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | backend/tournesol/tests/test_api_comparison.py | iamnkc/tournesol | 4a09985f494577917c357783a37dfae02c57fd82 | [
"CC0-1.0"
] | null | null | null | import datetime
from copy import deepcopy
from unittest.mock import patch
from django.db.models import ObjectDoesNotExist, Q
from django.test import TestCase
from django.utils import timezone
from rest_framework import status
from rest_framework.test import APIClient
from core.tests.factories.user import UserFactory
from core.utils.time import time_ago
from tournesol.tests.factories.comparison import ComparisonFactory
from tournesol.tests.factories.video import VideoFactory
from ..models import Comparison, Entity, Poll
class ComparisonApiTestCase(TestCase):
"""
TestCase of the comparison API.
For each endpoint, the test case checks:
- authorizations
- permissions
- behaviour
"""
_user = "username"
_other = "other_username"
# videos available in all tests
_uid_01 = "yt:video_id_01"
_uid_02 = "yt:video_id_02"
_uid_03 = "yt:video_id_03"
_uid_04 = "yt:video_id_04"
# non existing videos that can be created
_uid_05 = "yt:video_id_05"
_uid_06 = "yt:video_id_06"
_uid_07 = "yt:video_id_07"
non_existing_comparison = {
"entity_a": {"video_id": _uid_01.split(":")[1]},
"entity_b": {"video_id": _uid_03.split(":")[1]},
"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}],
"duration_ms": 103,
}
def setUp(self):
"""
Set-up a minimal set of data to test the ComparisonList API.
At least 4 videos and 2 users with 2 comparisons each are required.
"""
self.poll_videos = Poll.default_poll()
self.comparisons_base_url = "/users/me/comparisons/{}".format(
self.poll_videos.name
)
self.user = UserFactory(username=self._user)
self.other = UserFactory(username=self._other)
now = datetime.datetime.now()
self.videos = [
VideoFactory(video_id=self._uid_01.split(":")[1]),
VideoFactory(video_id=self._uid_02.split(":")[1]),
VideoFactory(video_id=self._uid_03.split(":")[1]),
VideoFactory(video_id=self._uid_04.split(":")[1]),
]
self.comparisons = [
# "user" will have the comparisons: 01 / 02 and 01 / 04
ComparisonFactory(
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[1],
duration_ms=102,
datetime_lastedit=now,
),
ComparisonFactory(
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[3],
duration_ms=104,
datetime_lastedit=now + datetime.timedelta(minutes=1),
),
# "other" will have the comparisons: 03 / 02 and 03 / 04
ComparisonFactory(
user=self.other,
entity_1=self.videos[2],
entity_2=self.videos[1],
duration_ms=302,
datetime_lastedit=now + datetime.timedelta(minutes=3),
),
ComparisonFactory(
user=self.other,
entity_1=self.videos[2],
entity_2=self.videos[3],
duration_ms=304,
datetime_lastedit=now + datetime.timedelta(minutes=2),
),
]
def _remove_optional_fields(self, comparison):
comparison.pop("duration_ms", None)
if "criteria_scores" in comparison:
for criteria_score in comparison["criteria_scores"]:
criteria_score.pop("weight", None)
return comparison
def test_anonymous_cant_create(self):
"""
An anonymous user can't create a comparison.
"""
client = APIClient()
initial_comparisons_nbr = Comparison.objects.all().count()
data = deepcopy(self.non_existing_comparison)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
comparisons = Comparison.objects.filter(
entity_1__uid=self._uid_01, entity_2__uid=self._uid_03
)
self.assertFalse(comparisons.exists())
comparisons = Comparison.objects.filter(
entity_1__uid=self._uid_03, entity_2__uid=self._uid_01
)
self.assertFalse(comparisons.exists())
self.assertEqual(Comparison.objects.all().count(), initial_comparisons_nbr)
def test_authenticated_can_create(self):
"""
An authenticated user can create a new comparison.
Ensure the database object contains the data sent.
Also ensure the object representation included in the API response
contains the data sent.
"""
client = APIClient()
initial_comparisons_nbr = Comparison.objects.filter(user=self.user).count()
data = deepcopy(self.non_existing_comparison)
client.force_authenticate(user=self.user)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
# check the authorization
self.assertEqual(response.status_code, status.HTTP_201_CREATED, response.json())
comparison = Comparison.objects.select_related(
"user", "entity_1", "entity_2"
).get(
user=self.user,
poll=self.poll_videos,
entity_1__video_id=data["entity_a"]["video_id"],
entity_2__video_id=data["entity_b"]["video_id"],
)
comparisons_nbr = Comparison.objects.filter(user=self.user).count()
# check the database integrity
self.assertEqual(comparisons_nbr, initial_comparisons_nbr + 1)
self.assertEqual(comparison.poll, self.poll_videos)
self.assertEqual(comparison.user, self.user)
self.assertEqual(comparison.entity_1.video_id, data["entity_a"]["video_id"])
self.assertEqual(comparison.entity_2.video_id, data["entity_b"]["video_id"])
self.assertEqual(comparison.duration_ms, data["duration_ms"])
comparison_criteria_scores = comparison.criteria_scores.all()
self.assertEqual(
comparison_criteria_scores.count(), len(data["criteria_scores"])
)
self.assertEqual(
comparison_criteria_scores[0].criteria,
data["criteria_scores"][0]["criteria"],
)
self.assertEqual(
comparison_criteria_scores[0].score, data["criteria_scores"][0]["score"]
)
self.assertEqual(
comparison_criteria_scores[0].weight, data["criteria_scores"][0]["weight"]
)
# check the representation integrity
self.assertEqual(
response.data["entity_a"]["video_id"], data["entity_a"]["video_id"]
)
self.assertEqual(
response.data["entity_b"]["video_id"], data["entity_b"]["video_id"]
)
self.assertEqual(response.data["duration_ms"], data["duration_ms"])
self.assertEqual(
len(response.data["criteria_scores"]), len(data["criteria_scores"])
)
self.assertEqual(
response.data["criteria_scores"][0]["criteria"],
data["criteria_scores"][0]["criteria"],
)
self.assertEqual(
response.data["criteria_scores"][0]["score"],
data["criteria_scores"][0]["score"],
)
self.assertEqual(
response.data["criteria_scores"][0]["weight"],
data["criteria_scores"][0]["weight"],
)
def test_authenticated_can_create_without_optional(self):
"""
An authenticated user can create a new comparison with only required
fields.
All optional fields of the comparison and its related criteria are
tested.
"""
client = APIClient()
initial_comparisons_nbr = Comparison.objects.filter(user=self.user).count()
data = self._remove_optional_fields(deepcopy(self.non_existing_comparison))
client.force_authenticate(user=self.user)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
comparison = Comparison.objects.select_related(
"user", "entity_1", "entity_2"
).get(
poll=self.poll_videos,
user=self.user,
entity_1__video_id=data["entity_a"]["video_id"],
entity_2__video_id=data["entity_b"]["video_id"],
)
comparisons_nbr = Comparison.objects.filter(user=self.user).count()
# check the authorization
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
# check the database integrity (only the criteria scores part)
self.assertEqual(comparisons_nbr, initial_comparisons_nbr + 1)
self.assertEqual(
comparison.duration_ms,
Comparison._meta.get_field("duration_ms").get_default(),
)
comparison_criteria_scores = comparison.criteria_scores.all()
self.assertEqual(
comparison_criteria_scores.count(), len(data["criteria_scores"])
)
self.assertEqual(comparison_criteria_scores[0].weight, 1)
def test_authenticated_cant_create_criteria_scores_without_mandatory(self):
"""
An authenticated user can't create a new comparison without explicitly
providing a `creteria` and a `score` field for each criterion.
Only required fields of the comparison's criteria are tested.
"""
client = APIClient()
initial_comparisons_nbr = Comparison.objects.filter(user=self.user).count()
data = deepcopy(self.non_existing_comparison)
data["criteria_scores"][0].pop("score")
client.force_authenticate(user=self.user)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
comparisons_nbr = Comparison.objects.filter(user=self.user).count()
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(comparisons_nbr, initial_comparisons_nbr)
data = deepcopy(self.non_existing_comparison)
data["criteria_scores"][0].pop("criteria")
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
comparisons_nbr = Comparison.objects.filter(user=self.user).count()
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(comparisons_nbr, initial_comparisons_nbr)
def test_authenticated_cant_create_twice(self):
"""
An authenticated user can't create two comparisons for the same couple
of videos.
"""
client = APIClient()
data = deepcopy(self.non_existing_comparison)
client.force_authenticate(user=self.user)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_authenticated_cant_create_reverse(self):
"""
An authenticated user can't create two comparisons for the same couple
of videos, even by providing the video id in the reverse order.
"""
client = APIClient()
initial_comparisons_nbr = Comparison.objects.all().count()
data = deepcopy(self.non_existing_comparison)
client.force_authenticate(user=self.user)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
# swap the video id
data["entity_a"]["video_id"], data["entity_b"]["video_id"] = (
data["entity_b"]["video_id"],
data["entity_a"]["video_id"],
)
response = client.post(
self.comparisons_base_url,
data,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
# the database must contain exactly one more comparison, not two
self.assertEqual(
Comparison.objects.all().count(),
initial_comparisons_nbr + 1,
)
def test_anonymous_cant_list(self):
"""
An anonymous user can't list its comparisons.
"""
client = APIClient()
response = client.get(
self.comparisons_base_url,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
def test_authenticated_can_list(self):
"""
An authenticated user can list its comparisons.
"""
client = APIClient()
comparisons_made = Comparison.objects.filter(user=self.user)
client.force_authenticate(user=self.user)
response = client.get(
self.comparisons_base_url,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["count"], comparisons_made.count())
self.assertEqual(len(response.data["results"]), comparisons_made.count())
# the comparisons must be ordered by datetime_lastedit
comparison1 = response.data["results"][0]
comparison2 = response.data["results"][1]
self.assertEqual(
comparison1["entity_a"]["video_id"], self.comparisons[1].entity_1.video_id
)
self.assertEqual(
comparison1["entity_b"]["video_id"], self.comparisons[1].entity_2.video_id
)
self.assertEqual(comparison1["duration_ms"], self.comparisons[1].duration_ms)
self.assertEqual(
comparison2["entity_a"]["video_id"], self.comparisons[0].entity_1.video_id
)
self.assertEqual(
comparison2["entity_b"]["video_id"], self.comparisons[0].entity_2.video_id
)
self.assertEqual(comparison2["duration_ms"], self.comparisons[0].duration_ms)
def test_authenticated_can_list_filtered(self):
"""
An authenticated user can list its comparisons filtered by a video id.
"""
client = APIClient()
comparisons_made = Comparison.objects.filter(
Q(entity_1__uid=self._uid_02) | Q(entity_2__uid=self._uid_02),
poll=self.poll_videos,
user=self.user,
)
client.force_authenticate(user=self.user)
response = client.get(
"{}/{}/".format(self.comparisons_base_url, self._uid_02),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["count"], comparisons_made.count())
self.assertEqual(len(response.data["results"]), comparisons_made.count())
# currently the GET API returns an unordered list, so the assertions
# are made unordered too
for comparison in response.data["results"]:
if comparison["entity_a"]["video_id"] != self._uid_02.split(":")[1]:
self.assertEqual(
comparison["entity_b"]["video_id"], self._uid_02.split(":")[1]
)
self.assertEqual(comparison["duration_ms"], 102)
def test_anonymous_cant_read(self):
"""
An anonymous user can't read one of its comparisons.
"""
client = APIClient()
response = client.get(
"{}/{}/{}/".format(self.comparisons_base_url, self._uid_01, self._uid_02),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
def test_authenticated_can_read(self):
"""
An authenticated user can read one of its comparisons.
The `entity_a` and `entity_b` fields in the response must respectively
match the positional arguments of the URL requested.
"""
client = APIClient()
client.force_authenticate(user=self.user)
response = client.get(
"{}/{}/{}/".format(self.comparisons_base_url, self._uid_01, self._uid_02),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data["entity_a"]["video_id"], self._uid_01.split(":")[1]
)
self.assertEqual(
response.data["entity_b"]["video_id"], self._uid_02.split(":")[1]
)
self.assertEqual(response.data["duration_ms"], 102)
def test_authenticated_can_read_reverse(self):
"""
An authenticated user can read one of its comparisons, even if the
video id are reversed in the URL requested.
The `entity_a` and `entity_b` fields in the response must respectively
match the positional arguments of the URL requested.
"""
client = APIClient()
client.force_authenticate(user=self.user)
# assert the comparison 02 / 01 does not exist in this order in the
# database, in order to test the GET view with reversed parameters
with self.assertRaises(ObjectDoesNotExist):
Comparison.objects.get(
poll=self.poll_videos,
user=self.user,
entity_1__uid=self._uid_02,
entity_2__uid=self._uid_01,
)
response = client.get(
"{}/{}/{}/".format(
self.comparisons_base_url,
self._uid_02,
self._uid_01,
),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data["entity_a"]["video_id"], self._uid_02.split(":")[1]
)
self.assertEqual(
response.data["entity_b"]["video_id"], self._uid_01.split(":")[1]
)
self.assertEqual(response.data["duration_ms"], 102)
def test_anonymous_cant_update(self):
"""
An anonymous user can't update a comparison.
"""
client = APIClient()
response = client.put(
"{}/{}/{}/".format(
self.comparisons_base_url,
self._uid_01,
self._uid_02,
),
{"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}]},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
def test_anonymous_cant_delete(self):
"""
An anonymous user can't delete a comparison.
"""
client = APIClient()
        # ensure ObjectDoesNotExist is not raised
Comparison.objects.get(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[1],
)
response = client.delete(
"{}/{}/{}/".format(
self.comparisons_base_url,
self._uid_01,
self._uid_02,
),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
        # ensure ObjectDoesNotExist is still not raised
Comparison.objects.get(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[1],
)
def test_authenticated_can_delete(self):
"""
An authenticated user can delete a comparison.
"""
client = APIClient()
client.force_authenticate(user=self.user)
        # ensure ObjectDoesNotExist is not raised
Comparison.objects.get(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[1],
)
response = client.delete(
"{}/{}/{}/".format(
self.comparisons_base_url,
self._uid_01,
self._uid_02,
),
format="json",
)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
with self.assertRaises(ObjectDoesNotExist):
Comparison.objects.get(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[0],
entity_2=self.videos[1],
)
def test_authenticated_integrated_comparison_list(self):
client = APIClient()
client.force_authenticate(user=self.user)
comparison1 = Comparison.objects.create(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[2],
entity_2=self.videos[3],
)
comparison2 = Comparison.objects.create(
poll=self.poll_videos,
user=self.user,
entity_1=self.videos[1],
entity_2=self.videos[2],
)
client.put(
"{}/{}/{}/".format(
self.comparisons_base_url,
self._uid_03,
self._uid_04,
),
{"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}]},
format="json",
)
response = client.get(
self.comparisons_base_url,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
result_comparison1 = response.data["results"][0]
result_comparison2 = response.data["results"][1]
self.assertEqual(
result_comparison1["entity_a"]["video_id"], comparison1.entity_1.video_id
)
self.assertEqual(
result_comparison1["entity_b"]["video_id"], comparison1.entity_2.video_id
)
self.assertEqual(
result_comparison2["entity_a"]["video_id"], comparison2.entity_1.video_id
)
self.assertEqual(
result_comparison2["entity_b"]["video_id"], comparison2.entity_2.video_id
)
def test_n_ratings_from_video(self):
client = APIClient()
client.force_authenticate(user=self.user)
VideoFactory(video_id=self._uid_05.split(":")[1])
VideoFactory(video_id=self._uid_06.split(":")[1])
VideoFactory(video_id=self._uid_07.split(":")[1])
data1 = {
"entity_a": {"video_id": self._uid_05.split(":")[1]},
"entity_b": {"video_id": self._uid_06.split(":")[1]},
"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}],
"duration_ms": 103,
}
response = client.post(
self.comparisons_base_url,
data1,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
data2 = {
"entity_a": {"video_id": self._uid_05.split(":")[1]},
"entity_b": {"video_id": self._uid_07.split(":")[1]},
"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}],
"duration_ms": 103,
}
response = client.post(
self.comparisons_base_url,
data2,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
client.force_authenticate(user=self.other)
data3 = {
"entity_a": {"video_id": self._uid_05.split(":")[1]},
"entity_b": {"video_id": self._uid_06.split(":")[1]},
"criteria_scores": [{"criteria": "pedagogy", "score": 10, "weight": 10}],
"duration_ms": 103,
}
response = client.post(
self.comparisons_base_url,
data3,
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
video5 = Entity.objects.get(uid=self._uid_05)
video6 = Entity.objects.get(uid=self._uid_06)
video7 = Entity.objects.get(uid=self._uid_07)
self.assertEqual(video5.rating_n_contributors, 2)
self.assertEqual(video5.rating_n_ratings, 3)
self.assertEqual(video6.rating_n_contributors, 2)
self.assertEqual(video6.rating_n_ratings, 2)
self.assertEqual(video7.rating_n_contributors, 1)
self.assertEqual(video7.rating_n_ratings, 1)
@patch("tournesol.utils.api_youtube.get_video_metadata")
def test_metadata_refresh_on_comparison_creation(self, mock_get_video_metadata):
client = APIClient()
mock_get_video_metadata.return_value = {}
user = UserFactory(username="non_existing_user")
client.force_authenticate(user=user)
video01, video02, video03 = self.videos[:3]
video01.last_metadata_request_at = None
video01.save()
video02.last_metadata_request_at = time_ago(days=7)
video02.save()
video03.last_metadata_request_at = timezone.now()
video03.save()
data = {
"entity_a": {"video_id": self._uid_01.split(":")[1]},
"entity_b": {"video_id": self._uid_02.split(":")[1]},
"criteria_scores": [
{"criteria": "largely_recommended", "score": 10, "weight": 10}
],
}
response = client.post(self.comparisons_base_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(len(mock_get_video_metadata.mock_calls), 2)
data = {
"entity_a": {"video_id": self._uid_01.split(":")[1]},
"entity_b": {"video_id": self._uid_03.split(":")[1]},
"criteria_scores": [
{"criteria": "largely_recommended", "score": 10, "weight": 10}
],
}
response = client.post(self.comparisons_base_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
# Video01 has been refreshed and video03 was already up-to-date.
# No additional call to YouTube API should be visible.
self.assertEqual(len(mock_get_video_metadata.mock_calls), 2)
def test_invalid_criteria_in_comparison(self):
client = APIClient()
client.force_authenticate(self.user)
data = deepcopy(self.non_existing_comparison)
data["criteria_scores"][0]["criteria"] = "invalid"
response = client.post(self.comparisons_base_url, data, format="json")
self.assertContains(
response, "not a valid criteria", status_code=status.HTTP_400_BAD_REQUEST
)
| 35.039164 | 88 | 0.60149 |
214e5bbf3df86491971ee073185fa30948af4cc3 | 15,143 | py | Python | scripts/coverage-scores/update-coverage.py | thomaskonrad/osm_austria_building_coverage | 3f678837b6800adfdd165f9b8424d1a258ca63de | [
"MIT"
] | 2 | 2015-06-21T19:39:05.000Z | 2015-06-22T10:54:17.000Z | scripts/coverage-scores/update-coverage.py | thomaskonrad/osm-austria-building-coverage | 3f678837b6800adfdd165f9b8424d1a258ca63de | [
"MIT"
] | 2 | 2018-02-17T16:51:06.000Z | 2018-02-23T07:11:13.000Z | scripts/coverage-scores/update-coverage.py | thomaskonrad/osm_austria_building_coverage | 3f678837b6800adfdd165f9b8424d1a258ca63de | [
"MIT"
] | 3 | 2019-10-16T08:46:13.000Z | 2021-04-14T23:49:07.000Z | #!/usr/bin/env python3
import psycopg2
import sys
from PIL import Image
import os
import struct
import argparse
import math
def hex2rgb(hex):
# Remove the hash in the beginning
hex = hex[1:]
return struct.unpack('BBB', bytes.fromhex(hex))
def calculate_coverage_full_tiles(basemap_tiles_path, osm_tiles_path, zoom, schema, tile_size, tile_indices):
covered_basemap_pixels = 0
uncovered_basemap_pixels = 0
for index in tile_indices:
x = index[0]
y = index[1]
basemap_tile_path = basemap_tiles_path + schema % (zoom, x, y)
osm_tile_path = osm_tiles_path + schema % (zoom, x, y)
basemap_tile = Image.open(basemap_tile_path).load()
osm_tile = Image.open(osm_tile_path).convert('RGBA').load()
for pixel_y in range(tile_size):
for pixel_x in range(tile_size):
(cbr, cbg, cbb, cba) = basemap_tile[pixel_x, pixel_y]
(cor, cog, cob, coa) = osm_tile[pixel_x, pixel_y]
if cba != 0: # basemap pixel
if coa != 0: # Also OSM pixel
covered_basemap_pixels += 1
else: # Only basemap pixel
uncovered_basemap_pixels += 1
return covered_basemap_pixels, uncovered_basemap_pixels
def calculate_coverage_partial_tiles(municipality_tiles_path, basemap_tiles_path, osm_tiles_path, color, zoom, schema, tile_size, tile_indices):
covered_basemap_pixels = 0
uncovered_basemap_pixels = 0
(r, g, b) = hex2rgb(color)
for index in tile_indices:
x = index[0]
y = index[1]
municipality_tile_path = municipality_tiles_path + schema % (zoom, x, y)
basemap_tile_path = basemap_tiles_path + schema % (zoom, x, y)
osm_tile_path = osm_tiles_path + schema % (zoom, x, y)
municipality_tile = Image.open(municipality_tile_path).convert('RGBA').load()
basemap_tile = Image.open(basemap_tile_path).load()
osm_tile = Image.open(osm_tile_path).convert('RGBA').load()
for pixel_y in range(tile_size):
for pixel_x in range(tile_size):
(cmr, cmg, cmb, cma) = municipality_tile[pixel_x, pixel_y]
(cbr, cbg, cbb, cba) = basemap_tile[pixel_x, pixel_y]
(cor, cog, cob, coa) = osm_tile[pixel_x, pixel_y]
if cmr == r and cmg == g and cmb == b:
# We're on this municipality
if cba != 0: # basemap pixel
if coa != 0: # Also OSM pixel
covered_basemap_pixels += 1
else: # Only basemap pixel
uncovered_basemap_pixels += 1
return covered_basemap_pixels, uncovered_basemap_pixels
def get_latest_timestamp(tile_indices, full_schemata, zoom):
latest_timestamp = 0
for full_schema in full_schemata:
for index in tile_indices:
x = index[0]
y = index[1]
tile_path = full_schema % (zoom, x, y)
if os.path.exists(tile_path):
timestamp = os.path.getmtime(tile_path)
if timestamp > latest_timestamp:
latest_timestamp = timestamp
# We only need seconds precision
return math.floor(latest_timestamp)
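The scan above is a plain newest-`mtime`-wins walk over candidate paths, with missing files skipped. A standalone sketch against throwaway files (the temporary paths here stand in for real tile files):

```python
# Standalone sketch of the "newest mtime wins" scan in get_latest_timestamp,
# exercised against throwaway files rather than real tiles.
import math
import os
import tempfile
import time

def newest_mtime(paths):
    latest = 0
    for p in paths:
        if os.path.exists(p):          # missing tiles are simply skipped
            ts = os.path.getmtime(p)
            if ts > latest:
                latest = ts
    return math.floor(latest)          # seconds precision is enough

with tempfile.TemporaryDirectory() as d:
    older = os.path.join(d, "a.png")
    newer = os.path.join(d, "b.png")
    for p in (older, newer):
        open(p, "w").close()
    past = time.time() - 3600
    os.utime(older, (past, past))      # backdate the first file one hour
    result = newest_mtime([older, newer, os.path.join(d, "missing.png")])
```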
def get_number_of_coverage_entries(cur, boundary_id):
cur.execute("select count(*) from austria_building_coverage where boundary_id = %s", (boundary_id,))
return cur.fetchone()[0]
def get_latest_coverage_entry(cur, boundary_id):
cur.execute("""
select c1.id as id, extract(epoch from c1.timestamp) as latest_timestamp,
c1.covered_basemap_pixels, c1.total_basemap_pixels, c1.coverage
from austria_building_coverage c1
where boundary_id = %s
and timestamp =
(select max(timestamp) from austria_building_coverage c2
where c2.boundary_id = c1.boundary_id)
""",
(boundary_id,))
return cur.fetchone()
def update_coverage_entry_timestamp(cur, conn, entry_id, timestamp):
cur.execute("update austria_building_coverage set timestamp = to_timestamp(%s) where id = %s",
("%.0f" % timestamp, entry_id,))
conn.commit()
def update_coverage_high_level(cur, conn, boundaries_updated):
# Record boundaries that have been updated
boundaries_coverage_updated = []
if len(boundaries_updated) > 0:
cur.execute("""
select b.id
from austria_admin_boundaries b
left join austria_admin_boundaries m on (m.parent = b.id)
where m.id = ANY(%s)
group by b.id
""",
(boundaries_updated,))
boundaries_to_update = cur.fetchall()
for boundary in boundaries_to_update:
boundary_id = boundary[0]
number_of_entries = get_number_of_coverage_entries(cur, boundary_id)
latest_entry = get_latest_coverage_entry(cur, boundary_id)
cur.execute("""select extract(epoch from max(c.timestamp)), sum(c.covered_basemap_pixels), sum(c.total_basemap_pixels)
from austria_admin_boundaries parent
left join austria_admin_boundaries child on (child.parent = parent.id)
left join austria_building_coverage c on (c.boundary_id = child.id)
where c.timestamp = (select max(timestamp) from austria_building_coverage c2 where c.boundary_id = c2.boundary_id)
and parent.id = %s
""",
(boundary_id,))
result = cur.fetchone()
if result is not None and result[0] is not None:
# Calculate district coverage and avoid division by zero
if result[1] > 0:
boundary_coverage = result[1] / result[2] * 100.0
else:
boundary_coverage = 0.0
if latest_entry is None or result[1] != latest_entry[2] or result[2] != latest_entry[3]:
boundaries_coverage_updated.append(boundary_id)
print("Calculated coverage of boundary #%s (coverage: %.2f percent)." % (boundary_id, boundary_coverage))
cur.execute("insert into austria_building_coverage "
"(boundary_id, timestamp, covered_basemap_pixels, total_basemap_pixels, coverage) "
"values ("
"%s, to_timestamp(%s), %s, %s, %s"
")",
(
boundary_id,
"%.0f" % result[0],
result[1],
result[2],
boundary_coverage,
)
)
conn.commit()
elif number_of_entries > 1:
boundaries_coverage_updated.append(boundary_id)
update_coverage_entry_timestamp(cur, conn, latest_entry[0], result[0])
else:
print("Boundary %d marked for update but not affected." % boundary_id)
else:
print("Error: No coverage results of boundary %d could be calculated." % boundary_id)
return boundaries_coverage_updated
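Stripped of the SQL, the parent roll-up above sums each child's latest `(covered, total)` pixel counts and recomputes the percentage with the same division-by-zero guard. A database-free sketch with invented numbers:

```python
# Database-free sketch of the parent roll-up performed by
# update_coverage_high_level (numbers are invented for illustration).

def roll_up(children):
    """children: list of (covered_basemap_pixels, total_basemap_pixels)."""
    covered = sum(c for c, _ in children)
    total = sum(t for _, t in children)
    # Same guard as the real code: avoid dividing by zero.
    coverage = covered / total * 100.0 if total > 0 else 0.0
    return covered, total, coverage

municipalities = [(50, 100), (30, 40), (0, 0)]   # third one has no basemap yet
covered, total, pct = roll_up(municipalities)
```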
def main():
parser = argparse.ArgumentParser(description="Update the coverage scores of each outdated municipality.")
parser.add_argument("-m", "--municipality-tiles-path", dest="municipality_tiles_path", required=True,
help="The path to the municipality tiles (with a trailing slash)")
parser.add_argument("-b", "--basemap-tiles-path", dest="basemap_tiles_path", required=True,
help="The path to the basemap tiles (with a trailing slash)")
parser.add_argument("-o", "--osm-tiles-path", dest="osm_tiles_path", required=True,
help="The path to the OSM tiles (with a trailing slash)")
parser.add_argument("-H", "--hostname", dest="hostname", required=False, help="The database hostname")
parser.add_argument("-d", "--database", dest="database", nargs='?', default="gis", help="The name of the database")
parser.add_argument("-u", "--user", dest="user", required=False, help="The database user")
parser.add_argument("-p", "--password", dest="password", required=False, help="The database password")
parser.add_argument("-O", "--onlyhighlevel", action="store_true",
help="Set this if you want to update only the high-level boundaries (districts, federal states,"
"the whole country) from the current municipality coverage scores.")
args = parser.parse_args()
municipality_tiles_path = os.path.expanduser(args.municipality_tiles_path)
basemap_tiles_path = os.path.expanduser(args.basemap_tiles_path)
osm_tiles_path = os.path.expanduser(args.osm_tiles_path)
tile_size = 256
zoom = 16
schema = "%d/%d/%d.png"
for path in [municipality_tiles_path, basemap_tiles_path, osm_tiles_path]:
if not os.path.isdir(path):
print("Path %s does not exist. Please specify a valid path." % (path))
sys.exit(1)
# Try to connect
try:
conn = psycopg2.connect(
host=args.hostname,
database=args.database,
user=args.user,
password=args.password
)
except Exception as e:
print("I am unable to connect to the database (%s)." % str(e))
sys.exit(1)
cur = conn.cursor()
try:
cur.execute("SELECT id, name, full_tiles, partial_tiles, color "
"from austria_admin_boundaries "
"where admin_level=3")
except Exception as e:
print("I can't SELECT! (%s)" % str(e))
sys.exit(1)
all_municipalities = cur.fetchall()
print("%d municipalities found." % len(all_municipalities))
if not args.onlyhighlevel:
municipalities_coverage_updated = []
for municipality in all_municipalities:
id = municipality[0]
name = municipality[1]
full_tiles = municipality[2]
partial_tiles = municipality[3]
color = municipality[4]
entry_count = get_number_of_coverage_entries(cur, id)
latest_coverage_row = get_latest_coverage_entry(cur, id)
latest_tile_timestamp = get_latest_timestamp(
full_tiles + partial_tiles,
[
basemap_tiles_path + schema,
osm_tiles_path + schema,
],
zoom)
if latest_coverage_row is None or latest_coverage_row[1] < latest_tile_timestamp:
print("Municipality %s (ID %d) is out of date. Updating..." % (name, id))
(covered_basemap_pixels_full, uncovered_basemap_pixels_full) = \
calculate_coverage_full_tiles(basemap_tiles_path, osm_tiles_path, zoom, schema, tile_size, full_tiles)
(covered_basemap_pixels_partial, uncovered_basemap_pixels_partial) =\
calculate_coverage_partial_tiles(municipality_tiles_path, basemap_tiles_path, osm_tiles_path, color, zoom, schema, tile_size, partial_tiles)
covered_basemap_pixels = covered_basemap_pixels_full + covered_basemap_pixels_partial
uncovered_basemap_pixels = uncovered_basemap_pixels_full + uncovered_basemap_pixels_partial
total_basemap_pixels = covered_basemap_pixels + uncovered_basemap_pixels
# Calculate coverage and avoid a division by zero.
if total_basemap_pixels > 0:
coverage = covered_basemap_pixels / total_basemap_pixels * 100.0
else:
coverage = 0.0
# Only insert the values if no entry exists yet or if the values have actually changed.
if latest_coverage_row is None or latest_coverage_row[2] != covered_basemap_pixels or \
latest_coverage_row[3] != total_basemap_pixels:
municipalities_coverage_updated.append(id)
cur.execute("insert into austria_building_coverage "
"(boundary_id, timestamp, covered_basemap_pixels, total_basemap_pixels, coverage) "
"values ("
"%s, to_timestamp(%s), %s, %s, %s"
")",
(
id,
"%.0f" % latest_tile_timestamp,
covered_basemap_pixels,
total_basemap_pixels,
coverage,
)
)
conn.commit()
# We update the timestamp only if the entry count is higher than one. The problem is that if a tile is
# updated that is part of the municipality's tile set but does not affect the municipality, the
# timestamp of the last coverage is simply updated. That may lead to the case where some municipalities
# do not have an austria_building_coverage entry on the first day.
elif entry_count > 1:
print("The latest timestamp of the tiles of municipality %s has changed but these changes did not "
                      "affect this municipality. Only updating the timestamp of entry %d." % (name, latest_coverage_row[0]))
municipalities_coverage_updated.append(id)
update_coverage_entry_timestamp(cur, conn, latest_coverage_row[0], latest_tile_timestamp)
else:
print("The latest timestamp of the tiles of municipality %s has changed but these changes did not "
"affect this municipality. Not updating the timestamp anyway because the municipality has "
"only one coverage score entry. Updating the timestamp would cause the municipality not to "
"have a score entry on the first day." % name)
# Alright, all municipalities updated. Now let's update the total coverage scores of districts, federal states and
    # the whole country where necessary.
if args.onlyhighlevel:
municipalities_coverage_updated = []
for municipality in all_municipalities:
municipalities_coverage_updated.append(municipality[0])
# Update districts.
districts_updated = update_coverage_high_level(cur, conn, municipalities_coverage_updated)
# Update federal states.
states_updated = update_coverage_high_level(cur, conn, districts_updated)
# Update the whole country.
update_coverage_high_level(cur, conn, states_updated)
if __name__ == "__main__":
main()

#---------------------------------------------------------------------------------
# tests/test_numbers.py (repo: NickRuiz/power-asr, license: MIT)
#---------------------------------------------------------------------------------
import unittest
import traceback
from normalize import TextToNumEng, NumToTextEng
class TextToNumEng_Test(unittest.TestCase):
def test_not_number(self):
try:
actual = TextToNumEng.convert('hi there')
self.assertIsNone(actual)
except Exception as e:
self.assertTrue(isinstance(e, Exception))
def test_convert_ones(self):
expected = 4
actual = TextToNumEng.convert('four')
self.assertEqual(actual, expected)
def test_convert_teens(self):
expected = 19
actual = TextToNumEng.convert('nineteen')
self.assertEqual(actual, expected)
def test_convert_tens(self):
expected = 20
actual = TextToNumEng.convert('twenty')
self.assertEqual(actual, expected)
def test_convert_scales(self):
expected = 500
actual = TextToNumEng.convert("five hundred")
self.assertEqual(actual, expected)
def test_convert_tens_ones(self):
expected = 92
actual = TextToNumEng.convert('ninety two')
self.assertEqual(actual, expected)
def test_convert_hundreds_tens_ones(self):
expected = 149
actual = TextToNumEng.convert('one hundred forty nine')
self.assertEqual(actual, expected)
def test_convert_94(self):
expected = 94
actual = TextToNumEng.convert('ninety four')
self.assertEqual(actual, expected)
def test_convert_999(self):
try:
expected = 999
actual = TextToNumEng.convert('nine hundred ninety nine')
self.assertEqual(actual, expected)
except Exception:
self.assertTrue(False, traceback.format_exc())
def test_convert_millions_thousands_hundreds_tens_ones(self):
expected = 2050149
actual = TextToNumEng.convert('two million fifty thousand one hundred forty nine')
self.assertEqual(actual, expected)
# These 'and' cases are special and should only be considered in this conversion direction.
def test_convert_hundreds_and_tens_ones(self):
expected = 249
actual = TextToNumEng.convert('two hundred and forty nine')
self.assertEqual(actual, expected)
def test_convert_millions_thousands_hundreds_and_tens_ones(self):
expected = 2050149
actual = TextToNumEng.convert('two million fifty thousand one hundred and forty nine')
self.assertEqual(actual, expected)
def test_number(self):
expected = 128
actual = ""
try:
actual = TextToNumEng.convert(expected)
self.assertEqual(expected, actual)
except TypeError:
raise
def test_hyphen(self):
expected = 41
actual = TextToNumEng.convert('forty-one')
self.assertEqual(actual, expected)
def test_list_string(self):
words = 'two million fifty thousand one hundred and forty nine'.split()
expected = 2050149
actual = TextToNumEng.convert(words)
self.assertEqual(actual, expected)
def test_string_empty(self):
try:
actual = TextToNumEng.convert('')
            self.fail()
except TypeError:
pass
def test_invalid_scale(self):
try:
actual = TextToNumEng.convert('hundred four')
self.assertIsNone(actual)
except Exception as e:
#print type(e), str(e)
self.assertTrue(isinstance(e, Exception))
def test_year_1996(self):
expected = 1996
actual = TextToNumEng.convertTryYear("nineteen ninety six")
self.assertEqual(actual, expected)
def test_year_911(self):
expected = 911
actual = TextToNumEng.convertTryYear("nine 11")
self.assertEqual(actual, expected)
def test_year_984(self):
expected = 984
actual = TextToNumEng.convertTryYear("nine eighty four")
self.assertEqual(actual, expected)
def test_year_1992(self):
expected = 1992
actual = TextToNumEng.convertTryYear('nineteen ninety two')
self.assertEqual(actual, expected)
def test_year_2015(self):
expected = 2015
actual = TextToNumEng.convertTryYear('twenty fifteen')
self.assertEqual(actual, expected)
def test_year_invalid(self):
try:
actual = TextToNumEng.convertTryYear('nineteen hundred four')
self.assertIsNone(actual)
except Exception as e:
#print type(e), str(e)
self.assertTrue(isinstance(e, Exception))
def test_year_notyear_number(self):
try:
expected = 999
actual = TextToNumEng.convertTryYear('nine hundred ninety nine')
self.assertEqual(actual, expected)
except Exception:
self.assertTrue(False, traceback.format_exc())
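These tests exercise `TextToNumEng` from the `normalize` package, which is not shown here. A minimal stand-in illustrating the usual units/tens/scales accumulation the tests imply (it covers only the plain-number cases, not the year heuristics):

```python
# Minimal stand-in for the words-to-number conversion exercised above.
# Only handles plain cardinals up to the millions; no year handling.
UNITS = {w: i for i, w in enumerate(
    "zero one two three four five six seven eight nine ten eleven twelve "
    "thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split())}
TENS = {w: 10 * (i + 2) for i, w in enumerate(
    "twenty thirty forty fifty sixty seventy eighty ninety".split())}
SCALES = {"thousand": 1000, "million": 10 ** 6}

def words_to_number(text):
    current = total = 0
    for word in text.replace("-", " ").replace(" and ", " ").split():
        if word in UNITS:
            current += UNITS[word]
        elif word in TENS:
            current += TENS[word]
        elif word == "hundred":
            current *= 100                       # "two hundred" -> 200
        elif word in SCALES:
            total += current * SCALES[word]      # flush at each big scale
            current = 0
        else:
            raise ValueError("not a number word: " + word)
    return total + current
```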
class NumToTextEng_Test(unittest.TestCase):
def test_convert_hundreds_tens_ones(self):
x = 149
expected = 'one hundred forty nine'
actual = NumToTextEng.convert(x)
self.assertEqual(actual, expected)
def test_convert_millions_thousands_hundreds_tens_ones(self):
expected = 'two million fifty thousand one hundred forty nine'
actual = NumToTextEng.convert(2050149)
self.assertEqual(actual, expected)
def test_notnumber(self):
try:
actual = NumToTextEng.convert("134sx39")
self.assertIsNone(actual)
except TypeError:
self.assertTrue(True)
def test_convert_2000000(self):
value = 2000000
expected = 'two million'
actual = NumToTextEng.convert(value)
self.assertEqual(actual, expected)
def test_convertTryYear_2012(self):
value = 2012
expected = 'twenty twelve'
actual = NumToTextEng.convertTryYear(value)
self.assertEqual(actual, expected)
def test_convertTryYear_611(self):
value = 611
expected = 'six eleven'
actual = NumToTextEng.convertTryYear(value)
self.assertEqual(actual, expected)
def test_convertTryYear_837(self):
value = 837
expected = 'eight thirty seven'
actual = NumToTextEng.convertTryYear(value)
self.assertEqual(actual, expected)
def test_convertTryYear_50(self):
value = 50
expected = 'fifty'
try:
actual = NumToTextEng.convertTryYear(value)
self.assertEqual(actual, expected)
except (ValueError, TypeError):
self.fail()
def test_convert_50(self):
value = 50
expected = 'fifty'
actual = NumToTextEng.convert(value)
self.assertEqual(actual, expected)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
    unittest.main()

#---------------------------------------------------------------------------------
# xojbackend/app/app/settings.py (repo: mazharkafi004/XOJ, license: MIT)
#---------------------------------------------------------------------------------
"""
Django settings for app project.
Generated by 'django-admin startproject' using Django 3.1.5.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.1/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'fwhwu!e#m46g(9gtvugczrje=9pr^s4u4194na_2u_w(+rdk^r'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'rest_framework.authtoken',
'corsheaders',
'core',
'user',
'ojworkings',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    # CorsMiddleware must come before any middleware that can generate
    # responses, in particular CommonMiddleware (per django-cors-headers docs).
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'app.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'app.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
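For production use, the credentials in this block would typically come from the environment rather than being hard-coded. A common pattern looks like the sketch below (the `DJANGO_DB_*` variable names are an example, not something this project defines):

```python
# Example only: read database settings from the environment, falling back to
# the development defaults. The DJANGO_DB_* names are illustrative.
import os

DATABASES = {
    'default': {
        'ENGINE': os.environ.get('DJANGO_DB_ENGINE', 'django.db.backends.sqlite3'),
        'NAME': os.environ.get('DJANGO_DB_NAME', 'db.sqlite3'),
    }
}
```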
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
AUTH_USER_MODEL = 'core.User'
# cors headers variables
CORS_ORIGIN_ALLOW_ALL = True
# CORS_ORIGIN_WHITELIST = (
# 'http://localhost:8000',
# )

#---------------------------------------------------------------------------------
# lncrawl/core/sources.py (repo: mesmerlord/lncrawler, license: Apache-2.0)
#---------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
import hashlib
import importlib.util
import json
import logging
import os
import re
import time
from concurrent.futures import Future, ThreadPoolExecutor
from importlib.abc import FileLoader
from pathlib import Path
from typing import Dict, List, Type
import requests
from packaging import version
from tqdm.std import tqdm
from ..assets.icons import Icons
from ..assets.version import get_version
from .arguments import get_args
from .crawler import Crawler
from .display import new_version_news
logger = logging.getLogger(__name__)
# --------------------------------------------------------------------------- #
__all__ = [
'load_sources',
'crawler_list',
'rejected_sources',
]
rejected_sources = {}
crawler_list: Dict[str, Type[Crawler]] = {}
# --------------------------------------------------------------------------- #
# Utilities
# --------------------------------------------------------------------------- #
__executor = ThreadPoolExecutor(10)
def __download_data(url: str):
logger.debug('Downloading %s', url)
if Icons.isWindows:
referer = 'http://updater.checker/windows/' + get_version()
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
elif Icons.isLinux:
referer = 'http://updater.checker/linux/' + get_version()
user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
elif Icons.isMac:
referer = 'http://updater.checker/mac/' + get_version()
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
else:
referer = 'http://updater.checker/others/' + get_version()
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0'
# end if
res = requests.get(
url,
stream=True,
allow_redirects=True,
headers={
'referer': referer,
'user-agent': user_agent,
}
)
res.raise_for_status()
return res.content
# --------------------------------------------------------------------------- #
# Checking Updates
# --------------------------------------------------------------------------- #
__index_fetch_internval_in_hours = 3
__master_index_file_url = 'https://raw.githubusercontent.com/dipu-bd/lightnovel-crawler/master/sources/_index.json'
__user_data_path = Path(os.path.expanduser('~')) / '.lncrawl'
__local_data_path = Path(__file__).parent.parent.absolute()
if not (__local_data_path / 'sources').is_dir():
__local_data_path = __local_data_path.parent
__current_index = {}
__latest_index = {}
def __load_current_index():
try:
index_file = __user_data_path / 'sources' / '_index.json'
if not index_file.is_file():
index_file = __local_data_path / 'sources' / '_index.json'
assert index_file.is_file()
logger.debug('Loading current index data from %s', index_file)
with open(index_file, 'r', encoding='utf8') as fp:
global __current_index
__current_index = json.load(fp)
except Exception as e:
logger.debug('Could not load sources index. Error: %s', e)
def __save_current_index():
index_file = __user_data_path / 'sources' / '_index.json'
os.makedirs(index_file.parent, exist_ok=True)
logger.debug('Saving current index data to %s', index_file)
with open(index_file, 'w', encoding='utf8') as fp:
json.dump(__current_index, fp, ensure_ascii=False)
def __load_latest_index():
global __latest_index
global __current_index
last_download = __current_index.get('v', 0)
if time.time() - last_download < __index_fetch_internval_in_hours * 3600:
logger.debug('Current index was already downloaded once in last %d hours.',
__index_fetch_internval_in_hours)
__latest_index = __current_index
return
try:
data = __download_data(__master_index_file_url)
__latest_index = json.loads(data.decode('utf8'))
if 'crawlers' not in __current_index:
__current_index = __latest_index
__current_index['v'] = int(time.time())
__save_current_index()
except Exception as e:
if 'crawlers' not in __current_index:
raise Exception('Could not fetch sources index')
        logger.warning('Could not download latest index. Error: %s', e)
__latest_index = __current_index
def __check_updates():
__load_current_index()
__load_latest_index()
latest_app_version = __latest_index['app']['version']
if version.parse(latest_app_version) > version.parse(get_version()):
new_version_news(latest_app_version)
global __current_index
__current_index['app'] = __latest_index['app']
__current_index['supported'] = __latest_index['supported']
__current_index['rejected'] = __latest_index['rejected']
__save_current_index()
global rejected_sources
rejected_sources = __current_index['rejected']
# --------------------------------------------------------------------------- #
# Downloading sources
# --------------------------------------------------------------------------- #
def __save_source_data(source_id, data):
latest = __latest_index['crawlers'][source_id]
dst_file = __user_data_path / str(latest['file_path'])
dst_dir = dst_file.parent
temp_file = dst_dir / ('.' + dst_file.name)
os.makedirs(dst_dir, exist_ok=True)
with open(temp_file, 'wb') as fp:
fp.write(data)
if dst_file.exists():
os.remove(dst_file)
temp_file.rename(dst_file)
global __current_index
__current_index['crawlers'][source_id] = latest
__save_current_index()
logger.debug('Source update downloaded: %s', dst_file.name)
def __get_file_md5(file: Path):
if not file.is_file():
return None
with open(file, 'rb') as f:
return hashlib.md5(f.read()).hexdigest()
def __download_sources():
tbd_sids = []
for sid in __current_index['crawlers'].keys():
if sid not in __latest_index['crawlers']:
tbd_sids.append(sid)
for sid in tbd_sids:
del __current_index['crawlers'][sid]
futures: Dict[str, Future] = {}
for sid, latest in __latest_index['crawlers'].items():
current = __current_index['crawlers'].get(sid)
has_new_version = not current or current['version'] < latest['version']
__current_index['crawlers'][sid] = latest
user_file = (__user_data_path / str(latest['file_path'])).is_file()
local_file = (__local_data_path / str(latest['file_path'])).is_file()
if has_new_version or not (user_file or local_file):
future = __executor.submit(__download_data, latest['url'])
futures[sid] = future
if not futures:
return
bar = tqdm(desc='Updating sources', total=len(futures), unit='file')
if os.getenv('debug_mode') == 'yes':
bar.update = lambda n=1: None # Hide in debug mode
bar.clear()
for sid, future in futures.items():
try:
data = future.result()
__save_source_data(sid, data)
except Exception as e:
            logger.warning('Failed to download source file. Error: %s', e)
finally:
bar.update()
bar.clear()
bar.close()
# --------------------------------------------------------------------------- #
# Loading sources
# --------------------------------------------------------------------------- #
__cache_crawlers = {}
__url_regex = re.compile(r'^(https?|ftp)://[^\s/$.?#].[^\s]*$', re.I)
def __import_crawlers(file_path: Path) -> List[Type[Crawler]]:
global __cache_crawlers
if file_path in __cache_crawlers:
return __cache_crawlers[file_path]
# logger.debug('+ %s', file_path)
assert file_path.is_file()
module_name = hashlib.md5(file_path.name.encode()).hexdigest()
spec = importlib.util.spec_from_file_location(module_name, file_path)
assert spec and isinstance(spec.loader, FileLoader)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
crawlers = []
for key in dir(module):
crawler = getattr(module, key)
if type(crawler) != type(Crawler) or crawler.__base__ != Crawler:
continue
assert 'base_url' in crawler.__dict__
assert 'read_novel_info' in crawler.__dict__
assert 'download_chapter_body' in crawler.__dict__
urls = getattr(crawler, 'base_url')
if isinstance(urls, str):
urls = [urls]
assert isinstance(urls, list)
urls = [
str(url).lower().strip('/') + '/'
for url in set(urls)
]
assert len(urls) > 0
for url in urls:
assert __url_regex.match(url)
setattr(crawler, 'base_url', urls)
crawlers.append(crawler)
__cache_crawlers[file_path] = crawlers
return crawlers
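The `spec_from_file_location` / `module_from_spec` / `exec_module` sequence above is the standard importlib recipe for loading a module from an arbitrary path. Here it is in isolation, against a throwaway module file:

```python
# The dynamic-import dance used by __import_crawlers, in isolation:
# build a spec from a path, materialize a module, then execute it.
import importlib.util
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "plugin.py")
    with open(path, "w") as fp:
        fp.write("ANSWER = 42\n")
    spec = importlib.util.spec_from_file_location("plugin_demo", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)          # actually runs plugin.py
    answer = module.ANSWER
```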
def __add_crawlers_from_path(path: Path):
if not path.exists():
        logger.warning('Path does not exist: %s', path)
return
if path.is_dir():
for py_file in path.glob('**/*.py'):
__add_crawlers_from_path(py_file)
return
global crawler_list
try:
crawlers = __import_crawlers(path)
for crawler in crawlers:
setattr(crawler, 'file_path', str(path.absolute()))
for url in getattr(crawler, 'base_url'):
crawler_list[url] = crawler
except Exception as e:
        logger.warning('Could not load crawlers from %s. Error: %s', path, e)
def load_sources():
__is_dev_mode = (__local_data_path / '.git' / 'HEAD').exists()
if not __is_dev_mode:
__check_updates()
__download_sources()
__save_current_index()
__add_crawlers_from_path(__local_data_path / 'sources')
if not __is_dev_mode:
for _, current in __current_index['crawlers'].items():
source_file = __user_data_path / str(current['file_path'])
if source_file.is_file():
__add_crawlers_from_path(source_file)
args = get_args()
for crawler_file in args.crawler:
__add_crawlers_from_path(Path(crawler_file))
    return __cache_crawlers

#---------------------------------------------------------------------------------
# models.py (repo: Anmepod44/first-django-site, license: MIT)
#---------------------------------------------------------------------------------
from django.db import models
import uuid
from django.urls import reverse  # used by the get_absolute_url methods below
# Create your models here.
class Genre(models.Model):
#The Genre Entity
name=models.CharField(max_length=200,help_text="Enter a Book Genre i.e Science Fiction ,Horror etc")
def __str__(self):
return self.name
class Book(models.Model):
#The Book Entity
title=models.CharField(max_length=200,help_text="The Title Of The Book")
author=models.ForeignKey('Author',on_delete=models.SET_NULL,null=True)
summary=models.TextField(max_length=1000,help_text="enter a brief description of the book")
isbn=models.CharField('ISBN',max_length=13,unique=True,help_text='13 character <h><a href="https://www.isbn-international.org/content/what-isbn">ISBN number</a></h>')
genre=models.ManyToManyField('Genre',help_text="select a genre for this book")
def __str__(self):
#string for representing the object model
return self.title
def get_absolute_url(self):
return reverse("Book_detail", args=[str(self.id)])
class BookInstance(models.Model):
#the book instance represents a specific copy of a book that someone might borrow and includes information about whether the book is available or not
id=models.UUIDField(primary_key=True,default=uuid.uuid4,help_text="unique id for this book across the whole library")
#ON_DELETE SET TO RESTRICT TO ENSURE A BOOK CANNOT BE DELETED WHEN REFERENCED BY A BOOK INSTANCE
book=models.ForeignKey('Book',on_delete=models.RESTRICT)
imprint=models.CharField(max_length=200)
due_back=models.DateField(null=True,blank=True)
language=models.ForeignKey('Language',on_delete=models.SET_NULL,null=True)
LOAN_STATUS=(
('m','Maintainance'),
('o','On Loan'),
('a','Available'),
('r','Reserved')
)
status=models.CharField(max_length=1,choices=LOAN_STATUS,blank=True,default='m',help_text="Book Availability")
    class Meta:
ordering=['due_back']
def __str__(self):
return f'{self.id} ({self.book.title})'
class Author(models.Model):
first_name=models.CharField(max_length=100)
last_name=models.CharField(max_length=100)
date_of_birth=models.DateField(null=True,blank=True)
date_of_death=models.DateField('Died',null=True,blank=True)
def get_absolute_url(self):
return reverse('Author_detail',args=[str(self.id)])
def __str__(self):
        return f'{self.first_name}, {self.last_name}'
class Meta:
ordering=['last_name','first_name']
class Language(models.Model):
name=models.CharField(max_length=100,help_text="Enter Language The Book is Written In")
def __str__(self):
        return f'{self.name}'  # the model's field is 'name'; it has no 'language' attribute

#---------------------------------------------------------------------------------
# Python/Simple_Chat_Bot_TFIDS/ChatBot.py (repo: not4YU5H/hacktoberfest2021-2,
# license: CC0-1.0)
#---------------------------------------------------------------------------------
import nltk
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import random
import string
f=open('train_txt.txt','r',errors = 'ignore')
raw=f.read()
raw=raw.lower()
sent_tokens = nltk.sent_tokenize(raw)# converts to list of sentences
word_tokens = nltk.word_tokenize(raw)
lemmer = nltk.stem.WordNetLemmatizer()
def LemTokens(tokens):
return [lemmer.lemmatize(token) for token in tokens]
remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)
def LemNormalize(text):
return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))
GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey",)
GREETING_RESPONSES = ["hi", "hello", "I am glad! You are talking to me"]
def greeting(sentence):
for word in sentence.split():
if word.lower() in GREETING_INPUTS:
return random.choice(GREETING_RESPONSES)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
def response(user_response):
robo_response=''
sent_tokens.append(user_response)
TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english')
tfidf = TfidfVec.fit_transform(sent_tokens)
vals = cosine_similarity(tfidf[-1], tfidf)
idx=vals.argsort()[0][-2]
flat = vals.flatten()
flat.sort()
req_tfidf = flat[-2]
if(req_tfidf==0):
robo_response=robo_response+"I don't understand you"
return robo_response
else:
robo_response = robo_response+sent_tokens[idx]
return robo_response
flag=True
print("BOT: My name is Chatty. I will answer your queries about Chatbots. If you want to exit, type Bye!")
while(flag==True):
user_response = input()
user_response=user_response.lower()
if(user_response!='bye'):
if(user_response=='thanks' or user_response=='thank you' ):
flag=False
print("BOT: You are welcome..")
else:
if(greeting(user_response)!=None):
print("BOT: "+greeting(user_response))
else:
print("BOT: ",end="")
print(response(user_response))
sent_tokens.remove(user_response)
else:
flag=False
print("BOT: Bye! take care..")
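The retrieval step in `response()` above ranks every corpus sentence by cosine similarity against the user's TF-IDF vector and returns the runner-up (`vals.argsort()[0][-2]`, since the query itself is always its own best match). The same ranking idea can be sketched without scikit-learn, using raw term counts in place of TF-IDF weights (an intentional simplification):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = ["a chatbot answers questions", "bananas are yellow"]
query = "what is a chatbot"
vecs = [Counter(s.split()) for s in corpus]
qv = Counter(query.split())
# Pick the corpus sentence most similar to the query, as response() does.
best = max(range(len(corpus)), key=lambda i: cosine(qv, vecs[i]))
print(corpus[best])  # a chatbot answers questions
```

TF-IDF additionally down-weights terms that appear in many sentences, which is what makes the real bot ignore common filler words.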
| 28.071429 | 106 | 0.673876 |
d6f36c304e0086baf8bdefd8d5f89bc540cc6e96 | 956 | py | Python | airflow/airflow/dags/spark_submit_pi.py | just4jc/pipeline | 3c7a4fa59c6363833766d2b55fa55ace6b6af351 | [
"Apache-2.0"
] | null | null | null | airflow/airflow/dags/spark_submit_pi.py | just4jc/pipeline | 3c7a4fa59c6363833766d2b55fa55ace6b6af351 | [
"Apache-2.0"
] | null | null | null | airflow/airflow/dags/spark_submit_pi.py | just4jc/pipeline | 3c7a4fa59c6363833766d2b55fa55ace6b6af351 | [
"Apache-2.0"
] | 2 | 2018-08-19T15:05:18.000Z | 2020-08-13T16:31:48.000Z | """
Code that goes along with the Airflow located at:
http://airflow.readthedocs.org/en/latest/tutorial.html
"""
from airflow import DAG
from airflow.operators import BashOperator
from datetime import datetime, timedelta
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime.now(),
'email': ['chris@fregly.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
# 'queue': 'bash_queue',
# 'pool': 'backfill',
# 'priority_weight': 10,
# 'end_date': datetime(2016, 4, 24),
}
dag = DAG('spark_submit_pi', default_args=default_args)
# t1 is an example of tasks created by instatiating operators
t1 = BashOperator(
task_id='spark_submit',
bash_command='spark-submit --master spark://127.0.0.1:47077 --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.1.jar 10',
dag=dag)
| 29.875 | 166 | 0.695607 |
a0a1b6bf4dbafcd353855ffd2270ee3a3c1730e1 | 7,550 | py | Python | sscls/modeling/classification/effnet.py | poodarchu/sscls | 8b1bd94b1ef4f0cef3ec6ecbb48be9dab129687b | [
"MIT"
] | 2 | 2020-04-26T13:41:24.000Z | 2020-05-06T10:15:06.000Z | sscls/modeling/classification/effnet.py | poodarchu/sscls | 8b1bd94b1ef4f0cef3ec6ecbb48be9dab129687b | [
"MIT"
] | null | null | null | sscls/modeling/classification/effnet.py | poodarchu/sscls | 8b1bd94b1ef4f0cef3ec6ecbb48be9dab129687b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""EfficientNet models."""
import torch
import torch.nn as nn
from sscls.core.config import cfg
import sscls.utils.logging as logging
import sscls.utils.net as nu
logger = logging.get_logger(__name__)
class EffHead(nn.Module):
"""EfficientNet head."""
def __init__(self, w_in, w_out, nc):
super(EffHead, self).__init__()
self._construct(w_in, w_out, nc)
def _construct(self, w_in, w_out, nc):
# 1x1, BN, Swish
self.conv = nn.Conv2d(
w_in, w_out,
kernel_size=1, stride=1, padding=0, bias=False
)
self.conv_bn = nn.BatchNorm2d(
w_out, eps=cfg.BN.EPS, momentum=cfg.BN.MOM
)
self.conv_swish = Swish()
# AvgPool
self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
# Dropout
if cfg.EN.DROPOUT_RATIO > 0.0:
self.dropout = nn.Dropout(p=cfg.EN.DROPOUT_RATIO)
# FC
self.fc = nn.Linear(w_out, nc, bias=True)
def forward(self, x):
x = self.conv_swish(self.conv_bn(self.conv(x)))
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.dropout(x) if hasattr(self, 'dropout') else x
x = self.fc(x)
return x
class Swish(nn.Module):
"""Swish activation function: x * sigmoid(x)"""
def __init__(self):
super(Swish, self).__init__()
def forward(self, x):
return x * torch.sigmoid(x)
class SE(nn.Module):
"""Squeeze-and-Excitation (SE) block w/ Swish."""
def __init__(self, w_in, w_se):
super(SE, self).__init__()
self._construct(w_in, w_se)
def _construct(self, w_in, w_se):
# AvgPool
self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
# FC, Swish, FC, Sigmoid
self.f_ex = nn.Sequential(
nn.Conv2d(w_in, w_se, kernel_size=1, bias=True),
Swish(),
nn.Conv2d(w_se, w_in, kernel_size=1, bias=True),
nn.Sigmoid()
)
def forward(self, x):
return x * self.f_ex(self.avg_pool(x))
class MBConv(nn.Module):
"""Mobile inverted bottleneck block w/ SE (MBConv)."""
def __init__(self, w_in, exp_r, kernel, stride, se_r, w_out):
super(MBConv, self).__init__()
self._construct(w_in, exp_r, kernel, stride, se_r, w_out)
def _construct(self, w_in, exp_r, kernel, stride, se_r, w_out):
# Expansion ratio is wrt the input width
self.exp = None
w_exp = int(w_in * exp_r)
# Include exp ops only if the exp ratio is different from 1
if w_exp != w_in:
# 1x1, BN, Swish
self.exp = nn.Conv2d(
w_in, w_exp,
kernel_size=1, stride=1, padding=0, bias=False
)
self.exp_bn = nn.BatchNorm2d(
w_exp, eps=cfg.BN.EPS, momentum=cfg.BN.MOM
)
self.exp_swish = Swish()
# 3x3 dwise, BN, Swish
self.dwise = nn.Conv2d(
w_exp, w_exp,
kernel_size=kernel, stride=stride, groups=w_exp, bias=False,
# Hacky padding to preserve res (supports only 3x3 and 5x5)
padding=(1 if kernel == 3 else 2)
)
self.dwise_bn = nn.BatchNorm2d(
w_exp, eps=cfg.BN.EPS, momentum=cfg.BN.MOM
)
self.dwise_swish = Swish()
# SE
w_se = int(w_in * se_r)
self.se = SE(w_exp, w_se)
# 1x1, BN
self.lin_proj = nn.Conv2d(
w_exp, w_out,
kernel_size=1, stride=1, padding=0, bias=False
)
self.lin_proj_bn = nn.BatchNorm2d(
w_out, eps=cfg.BN.EPS, momentum=cfg.BN.MOM
)
# Skip connection if in and out shapes are the same (MN-V2 style)
self.has_skip = (stride == 1) and (w_in == w_out)
def forward(self, x):
f_x = x
# Expansion
if self.exp:
f_x = self.exp_swish(self.exp_bn(self.exp(f_x)))
# Depthwise
f_x = self.dwise_swish(self.dwise_bn(self.dwise(f_x)))
# SE
if cfg.EN.SE_ENABLED:
f_x = self.se(f_x)
# Linear projection
f_x = self.lin_proj_bn(self.lin_proj(f_x))
# Skip connection
if self.has_skip:
# Drop connect
if self.training and cfg.EN.DC_RATIO > 0.0:
f_x = nu.drop_connect(f_x, cfg.EN.DC_RATIO)
f_x = x + f_x
return f_x
class EffStage(nn.Module):
"""EfficientNet stage."""
def __init__(self, w_in, exp_r, kernel, stride, se_r, w_out, d):
super(EffStage, self).__init__()
self._construct(w_in, exp_r, kernel, stride, se_r, w_out, d)
def _construct(self, w_in, exp_r, kernel, stride, se_r, w_out, d):
# Construct the blocks
for i in range(d):
# Stride and input width apply to the first block of the stage
b_stride = stride if i == 0 else 1
b_w_in = w_in if i == 0 else w_out
# Construct the block
self.add_module(
'b{}'.format(i + 1),
MBConv(b_w_in, exp_r, kernel, b_stride, se_r, w_out)
)
def forward(self, x):
for block in self.children():
x = block(x)
return x
class StemIN(nn.Module):
"""EfficientNet stem for ImageNet."""
def __init__(self, w_in, w_out):
super(StemIN, self).__init__()
self._construct(w_in, w_out)
def _construct(self, w_in, w_out):
# 3x3, BN, Swish
self.conv = nn.Conv2d(
w_in, w_out,
kernel_size=3, stride=2, padding=1, bias=False
)
self.bn = nn.BatchNorm2d(w_out, eps=cfg.BN.EPS, momentum=cfg.BN.MOM)
self.swish = Swish()
def forward(self, x):
for layer in self.children():
x = layer(x)
return x
class EffNet(nn.Module):
"""EfficientNet model."""
def __init__(self):
assert cfg.TRAIN.DATASET in ['imagenet'], \
'Training on {} is not supported'.format(cfg.TRAIN.DATASET)
assert cfg.TEST.DATASET in ['imagenet'], \
'Testing on {} is not supported'.format(cfg.TEST.DATASET)
super(EffNet, self).__init__()
self._construct(
stem_w=cfg.EN.STEM_W,
ds=cfg.EN.DEPTHS,
ws=cfg.EN.WIDTHS,
exp_rs=cfg.EN.EXP_RATIOS,
se_r=cfg.EN.SE_RATIO,
ss=cfg.EN.STRIDES,
ks=cfg.EN.KERNELS,
head_w=cfg.EN.HEAD_W,
nc=cfg.MODEL.NUM_CLASSES
)
self.apply(nu.init_weights)
def _construct(self, stem_w, ds, ws, exp_rs, se_r, ss, ks, head_w, nc):
# Group params by stage
stage_params = list(zip(ds, ws, exp_rs, ss, ks))
logger.info('Constructing: EfficientNet-{}'.format(stage_params))
# Construct the stem
self.stem = StemIN(3, stem_w)
prev_w = stem_w
# Construct the stages
for i, (d, w, exp_r, stride, kernel) in enumerate(stage_params):
self.add_module(
's{}'.format(i + 1),
EffStage(prev_w, exp_r, kernel, stride, se_r, w, d)
)
prev_w = w
# Construct the head
self.head = EffHead(prev_w, head_w, nc)
def forward(self, x):
for module in self.children():
x = module(x)
return x
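The `Swish` module used throughout the network computes `x * sigmoid(x)`. A minimal scalar sketch (pure Python, no PyTorch) shows its key behavior: it is exactly zero at the origin, slightly negative for small negative inputs, and approaches the identity for large positive inputs:

```python
import math

def swish(x):
    """Swish activation: x * sigmoid(x), as in the Swish module above."""
    return x * (1.0 / (1.0 + math.exp(-x)))

print(swish(0.0))               # 0.0
print(round(swish(1.0), 4))     # ~0.7311
print(swish(-1.0) < 0.0)        # True: Swish dips below zero, unlike ReLU
```

That small negative lobe is the main difference from ReLU and is one reason EfficientNet adopted Swish.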
| 30.816327 | 76 | 0.560662 |
193a20f2547a50fa84e4e7b677aa55a9646a38e5 | 948 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIpBgpTemplatePeerSession/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowIpBgpTemplatePeerSession/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowIpBgpTemplatePeerSession/cli/equal/golden_output2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"peer_session": {
"PEER-SESSION": {
"local_policies": "0x5025FD",
"inherited_polices": "0x0",
"fall_over_bfd": True,
"suppress_four_byte_as_capability": True,
"description": "desc1!",
"disable_connected_check": True,
"ebgp_multihop_enable": True,
"ebgp_multihop_max_hop": 254,
"local_as_as_no": 255,
"password_text": "is configured",
"remote_as": 321,
"shutdown": True,
"transport_connection_mode": "passive",
"update_source": "Loopback0",
"index": 1,
"inherited_session_commands": {"keepalive_interval": 10, "holdtime": 30},
},
"PEER-SESSION2": {
"local_policies": "0x100000",
"inherited_polices": "0x0",
"fall_over_bfd": True,
"index": 2,
},
}
}
| 32.689655 | 85 | 0.509494 |
d557794c4578290ea7fdb33a4d4ef7d30c28bb27 | 1,018 | py | Python | app.py | emumba-com/aws-bootcloud | ac0c26c57405999f00fb985dc665dad89a985053 | [
"MIT"
] | null | null | null | app.py | emumba-com/aws-bootcloud | ac0c26c57405999f00fb985dc665dad89a985053 | [
"MIT"
] | 1 | 2020-11-20T10:06:36.000Z | 2020-11-20T10:26:57.000Z | app.py | emumba-com/aws-bootcloud | ac0c26c57405999f00fb985dc665dad89a985053 | [
"MIT"
] | 2 | 2020-11-15T15:09:15.000Z | 2021-03-09T14:02:39.000Z | import os
import sys
import time
from multiprocessing import Process
from datetime import datetime
from flask import render_template
sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', '.'))
from auth import auth_bp
from admin import admin_bp, fetch_instances_cost_from_aws
from user_blueprint import user_bp
from settings import app, db
# Registering blueprints
app.register_blueprint(auth_bp)
app.register_blueprint(admin_bp)
app.register_blueprint(user_bp)
@app.route('/')
def index():
return render_template('login.html')
def fetch_bill_from_aws(duration=86400):
while True:
fetch_instances_cost_from_aws()
delay = duration + int(time.time() / duration) * duration - time.time()
print("Going to sleep for %s seconds" % delay)
time.sleep(delay)
def bill_scheduler():
process = Process(target=fetch_bill_from_aws, args=(86400, ))
process.start()
bill_scheduler()
if __name__ == '__main__':
db.init_app(app)
app.run()
| 23.136364 | 88 | 0.735756 |
f0991054082e140d60270cefb3e4385f668e77d4 | 85 | py | Python | error.py | RedstonekPL/Jejabot | cd2685fba422bacd8b27a45c43fd67beb454db62 | [
"MIT"
] | null | null | null | error.py | RedstonekPL/Jejabot | cd2685fba422bacd8b27a45c43fd67beb454db62 | [
"MIT"
] | null | null | null | error.py | RedstonekPL/Jejabot | cd2685fba422bacd8b27a45c43fd67beb454db62 | [
"MIT"
] | null | null | null | class BadUserError(Exception):
def __init__(self, message):
self.message = message | 28.333333 | 30 | 0.776471 |
97265417eee434e031a7b58330e40542a88043f7 | 2,307 | py | Python | bin/fuzzer-libs/mipsel/usr/share/gdb/auto-load/usr/lib/mipsel-linux-gnu/libstdc++.so.6.0.20-gdb.py | CAFA1/shellphish-afl | 6a3f015e3e557210ae39a2b69d4a1cca52405d3f | [
"BSD-2-Clause"
] | null | null | null | bin/fuzzer-libs/mipsel/usr/share/gdb/auto-load/usr/lib/mipsel-linux-gnu/libstdc++.so.6.0.20-gdb.py | CAFA1/shellphish-afl | 6a3f015e3e557210ae39a2b69d4a1cca52405d3f | [
"BSD-2-Clause"
] | null | null | null | bin/fuzzer-libs/mipsel/usr/share/gdb/auto-load/usr/lib/mipsel-linux-gnu/libstdc++.so.6.0.20-gdb.py | CAFA1/shellphish-afl | 6a3f015e3e557210ae39a2b69d4a1cca52405d3f | [
"BSD-2-Clause"
] | null | null | null | # -*- python -*-
# Copyright (C) 2009-2014 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
import gdb
import os
import os.path
pythondir = '/usr/share/gcc-4.9/python'
libdir = '/usr/lib/mipsel-linux-gnu'
# This file might be loaded when there is no current objfile. This
# can happen if the user loads it manually. In this case we don't
# update sys.path; instead we just hope the user managed to do that
# beforehand.
if gdb.current_objfile () is not None:
# Update module path. We want to find the relative path from libdir
# to pythondir, and then we want to apply that relative path to the
# directory holding the objfile with which this file is associated.
# This preserves relocatability of the gcc tree.
# Do a simple normalization that removes duplicate separators.
pythondir = os.path.normpath (pythondir)
libdir = os.path.normpath (libdir)
prefix = os.path.commonprefix ([libdir, pythondir])
# In some bizarre configuration we might have found a match in the
# middle of a directory name.
if prefix[-1] != '/':
prefix = os.path.dirname (prefix) + '/'
# Strip off the prefix.
pythondir = pythondir[len (prefix):]
libdir = libdir[len (prefix):]
# Compute the ".."s needed to get from libdir to the prefix.
dotdots = ('..' + os.sep) * len (libdir.split (os.sep))
objfile = gdb.current_objfile ().filename
dir_ = os.path.join (os.path.dirname (objfile), dotdots, pythondir)
if not dir_ in sys.path:
sys.path.insert(0, dir_)
# Load the pretty-printers.
from libstdcxx.v6.printers import register_libstdcxx_printers
register_libstdcxx_printers (gdb.current_objfile ())
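The path juggling above computes the relative path from `libdir` to `pythondir` by stripping their common prefix and prepending one `..` per remaining `libdir` component, so the pretty-printer package is found relative to the objfile. A sketch of that logic using `posixpath` for determinism (the example paths are hypothetical, chosen only to exercise the prefix handling):

```python
import posixpath

def relative_module_path(libdir, pythondir):
    """Mimic the prefix-stripping and '..'-counting done above."""
    prefix = posixpath.commonprefix([libdir, pythondir])
    # commonprefix is character-wise, so back off to a directory boundary.
    if prefix and prefix[-1] != '/':
        prefix = posixpath.dirname(prefix) + '/'
    rel_python = pythondir[len(prefix):]
    rel_lib = libdir[len(prefix):]
    dotdots = '../' * len(rel_lib.split('/'))
    return dotdots + rel_python

print(relative_module_path('/usr/lib/x', '/usr/share/py'))  # ../../share/py
```

The dirname back-off matters: for `/usr/libfoo` vs `/usr/libbar` the raw common prefix is `/usr/lib`, which is not a real directory boundary.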
| 37.819672 | 72 | 0.717815 |
f2c2bce3a5b20c2350cb340f04c3be9bda970496 | 5,648 | py | Python | examples/image_captioning/train.py | Evanc123/chainer | 929af7189b1271683200aa9b0ba6da2dd3dee110 | [
"MIT"
] | 90 | 2017-02-23T04:04:47.000Z | 2020-04-09T12:06:50.000Z | examples/image_captioning/train.py | nolfwin/chainer | 8d776fcc1e848cb9d3800a6aab356eb91ae9d088 | [
"MIT"
] | 7 | 2017-07-23T13:38:06.000Z | 2018-07-10T07:09:03.000Z | examples/image_captioning/train.py | nolfwin/chainer | 8d776fcc1e848cb9d3800a6aab356eb91ae9d088 | [
"MIT"
] | 32 | 2017-02-28T07:40:38.000Z | 2021-02-17T11:33:09.000Z | #!/usr/bin/env python
import argparse
import chainer
from chainer.datasets import TransformDataset
from chainer import iterators
from chainer import optimizers
from chainer import training
from chainer.training import extensions
import datasets
from model import ImageCaptionModel
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--out', type=str, default='result',
help='Output directory')
parser.add_argument('--mscoco-root', type=str, default='data',
help='MSCOCO dataset root directory')
parser.add_argument('--max-iters', type=int, default=50000,
help='Maximum number of iterations to train')
parser.add_argument('--batch-size', type=int, default=128,
help='Minibatch size')
parser.add_argument('--dropout-ratio', type=float, default=0.5,
help='Language model dropout ratio')
parser.add_argument('--val-keep-quantity', type=int, default=100,
help='Keep every N-th validation image')
parser.add_argument('--val-iter', type=int, default=100,
help='Run validation every N-th iteration')
parser.add_argument('--log-iter', type=int, default=1,
help='Log every N-th iteration')
parser.add_argument('--snapshot-iter', type=int, default=1000,
help='Model snapshot every N-th iteration')
parser.add_argument('--rnn', type=str, default='nsteplstm',
choices=['nsteplstm', 'lstm'],
help='Language model layer type')
parser.add_argument('--gpu', type=int, default=0,
help='GPU ID (negative value indicates CPU)')
parser.add_argument('--max-caption-length', type=int, default=30,
help='Maximum caption length when using LSTM layer')
args = parser.parse_args()
# Load the MSCOCO dataset. Assumes that the dataset has been downloaded
# already using e.g. the `download.py` script
train, val = datasets.get_mscoco(args.mscoco_root)
# Validation samples are used to detect overfitting and to see how well the
# model generalizes to yet unseen data. However, since the number of these
# samples in MSCOCO is quite large (~200k) and thus takes time to
# evaluate, you may choose to use only a fraction of the available samples
val = val[::args.val_keep_quantity]
# Number of unique words that are found in the dataset
vocab_size = len(train.vocab)
# Instantiate the model to be trained either with LSTM layers or with
# NStepLSTM layers
model = ImageCaptionModel(
vocab_size, dropout_ratio=args.dropout_ratio, rnn=args.rnn)
if args.gpu >= 0:
chainer.backends.cuda.get_device_from_id(args.gpu).use()
model.to_gpu()
def transform(in_data):
# Called for each sample and applies necessary preprocessing to the
# image such as resizing and normalizing
img, caption = in_data
img = model.prepare(img)
return img, caption
# We need to preprocess the images since their sizes may vary (and the
# model requires that they have the exact same fixed size)
train = TransformDataset(train, transform)
val = TransformDataset(val, transform)
train_iter = iterators.MultiprocessIterator(
train, args.batch_size, shared_mem=700000)
val_iter = chainer.iterators.MultiprocessIterator(
val, args.batch_size, repeat=False, shuffle=False, shared_mem=700000)
optimizer = optimizers.Adam()
optimizer.setup(model)
def converter(batch, device):
# The converter receives a batch of input samples and may modify it if
# necessary. In our case, we need to align the captions depending on
# whether we are using LSTM layers or NStepLSTM layers in the model.
if args.rnn == 'lstm':
max_caption_length = args.max_caption_length
elif args.rnn == 'nsteplstm':
max_caption_length = None
else:
raise ValueError('Invalid RNN type.')
return datasets.converter(
batch, device, max_caption_length=max_caption_length)
updater = training.updater.StandardUpdater(
train_iter, optimizer=optimizer, device=args.gpu, converter=converter)
trainer = training.Trainer(
updater, out=args.out, stop_trigger=(args.max_iters, 'iteration'))
trainer.extend(
extensions.Evaluator(
val_iter,
target=model,
converter=converter,
device=args.gpu
),
trigger=(args.val_iter, 'iteration')
)
trainer.extend(
extensions.LogReport(
['main/loss', 'validation/main/loss'],
trigger=(args.log_iter, 'iteration')
)
)
trainer.extend(
extensions.PlotReport(
['main/loss', 'validation/main/loss'],
trigger=(args.log_iter, 'iteration')
)
)
trainer.extend(
extensions.PrintReport(
['elapsed_time', 'epoch', 'iteration', 'main/loss',
'validation/main/loss']
),
trigger=(args.log_iter, 'iteration')
)
# Save model snapshots so that later on, we can load them and generate new
# captions for any image. This can be done in the `predict.py` script
trainer.extend(
extensions.snapshot_object(model, 'model_{.updater.iteration}'),
trigger=(args.snapshot_iter, 'iteration')
)
trainer.extend(extensions.ProgressBar())
trainer.run()
if __name__ == '__main__':
main()
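`TransformDataset` above wraps the raw dataset so that `transform` runs lazily on each `(image, caption)` pair at access time, rather than preprocessing everything up front. The wrapping idea can be sketched without Chainer — this stand-in mimics only the indexing behavior, not Chainer's full API:

```python
class TransformDataset:
    """Minimal stand-in for chainer.datasets.TransformDataset: applies
    `transform` to each sample on access (a sketch, not Chainer's API)."""
    def __init__(self, dataset, transform):
        self.dataset = dataset
        self.transform = transform

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        # Transform is applied lazily, only when a sample is requested.
        return self.transform(self.dataset[i])

raw = [(2, 'cat'), (3, 'dog')]
# Hypothetical preprocessing: scale the "image" value, keep the caption.
ds = TransformDataset(raw, lambda s: (s[0] * 2, s[1]))
print(ds[0])   # (4, 'cat')
print(len(ds)) # 2
```

Lazy transformation is what lets the training script resize and normalize MSCOCO images inside iterator worker processes instead of holding a preprocessed copy of the whole dataset in memory.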
| 38.951724 | 79 | 0.643059 |
b25ae506ea862df4816759fbb68846db5db51d8d | 1,591 | py | Python | data/p3BR/R2/benchmark/startCirq32.py | UCLA-SEAL/QDiff | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | [
"BSD-3-Clause"
] | null | null | null | data/p3BR/R2/benchmark/startCirq32.py | UCLA-SEAL/QDiff | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | [
"BSD-3-Clause"
] | null | null | null | data/p3BR/R2/benchmark/startCirq32.py | UCLA-SEAL/QDiff | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 5/15/20 4:49 PM
# @File : grover.py
# qubit number=3
# total number=7
import cirq
import cirq.google as cg
from typing import Optional
import sys
from math import log2
import numpy as np
#thatsNoCode
from cirq.contrib.svg import SVGCircuit
# Symbols for the rotation angles in the QAOA circuit.
def make_circuit(n: int, input_qubit):
c = cirq.Circuit() # circuit begin
c.append(cirq.H.on(input_qubit[0])) # number=1
c.append(cirq.X.on(input_qubit[2])) # number=2
c.append(cirq.CNOT.on(input_qubit[2],input_qubit[0])) # number=4
c.append(cirq.Z.on(input_qubit[2])) # number=5
c.append(cirq.CNOT.on(input_qubit[2],input_qubit[0])) # number=6
# circuit end
c.append(cirq.measure(*input_qubit, key='result'))
return c
def bitstring(bits):
return ''.join(str(int(b)) for b in bits)
if __name__ == '__main__':
qubit_count = 4
input_qubits = [cirq.GridQubit(i, 0) for i in range(qubit_count)]
circuit = make_circuit(qubit_count,input_qubits)
circuit = cg.optimized_for_sycamore(circuit, optimizer_type='sqrt_iswap')
circuit_sample_count =2000
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=circuit_sample_count)
frequencies = result.histogram(key='result', fold_func=bitstring)
writefile = open("../data/startCirq32.csv","w+")
print(format(frequencies),file=writefile)
print("results end", file=writefile)
print(circuit.__len__(), file=writefile)
print(circuit,file=writefile)
writefile.close() | 26.516667 | 77 | 0.697674 |
e7aac5073d58d07c8854965f3cb1b284ab82065d | 16,583 | py | Python | PyInstaller/archive/writers.py | ravindrajeet27/pyinstaller | e2d61ecb4bf1fa4708b6db036929b6971fc641e8 | [
"Apache-2.0"
] | 12 | 2020-12-15T15:12:06.000Z | 2022-03-18T16:17:42.000Z | PyInstaller/archive/writers.py | jeremysanders/pyinstaller | 321b24f9a9a5978337735816b36ca6b4a90a2fb4 | [
"Apache-2.0"
] | 10 | 2020-09-30T12:49:45.000Z | 2020-10-04T10:26:33.000Z | PyInstaller/archive/writers.py | jeremysanders/pyinstaller | 321b24f9a9a5978337735816b36ca6b4a90a2fb4 | [
"Apache-2.0"
] | 10 | 2020-12-15T15:12:14.000Z | 2022-02-09T21:02:17.000Z | #-----------------------------------------------------------------------------
# Copyright (c) 2005-2020, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License (version 2
# or later) with exception for distributing the bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
#
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
#-----------------------------------------------------------------------------
"""
Utilities to create data structures for embedding Python modules and additional
files into the executable.
"""
# While an Archive is really an abstraction for any "filesystem
# within a file", it is tuned for use with imputil.FuncImporter.
# This assumes it contains python code objects, indexed by the
# the internal name (ie, no '.py').
#
# See pyi_carchive.py for a more general archive (contains anything)
# that can be understood by a C program.
import os
import sys
import struct
from types import CodeType
import marshal
import zlib
from PyInstaller.building.utils import get_code_object, strip_paths_in_code,\
fake_pyc_timestamp
from PyInstaller.loader.pyimod02_archive import PYZ_TYPE_MODULE, PYZ_TYPE_PKG, \
PYZ_TYPE_DATA
from ..compat import BYTECODE_MAGIC
class ArchiveWriter(object):
"""
A base class for a repository of python code objects.
The extract method is used by imputil.ArchiveImporter
to get code objects by name (fully qualified name), so
an enduser "import a.b" would become
extract('a.__init__')
extract('a.b')
"""
MAGIC = b'PYL\0'
HDRLEN = 12 # default is MAGIC followed by python's magic, int pos of toc
TOCPOS = 8
def __init__(self, archive_path, logical_toc):
"""
Create an archive file of name 'archive_path'.
logical_toc is a 'logical TOC' - a list of (name, path, ...)
where name is the internal name, eg 'a'
and path is a file to get the object from, eg './a.pyc'.
"""
self.start = 0
self._start_add_entries(archive_path)
self._add_from_table_of_contents(logical_toc)
self._finalize()
def _start_add_entries(self, archive_path):
"""
Open an empty archive for addition of entries.
"""
self.lib = open(archive_path, 'wb')
# Reserve space for the header.
if self.HDRLEN:
self.lib.write(b'\0' * self.HDRLEN)
# Create an empty table of contents.
# Use a list to support reproducible builds
self.toc = []
def _add_from_table_of_contents(self, toc):
"""
Add entries from a logical TOC (without absolute positioning info).
An entry is an entry in a logical TOC is a tuple,
entry[0] is name (under which it will be saved).
entry[1] is fullpathname of the file.
entry[2] is a flag for its storage format (True or 1 if compressed)
entry[3] is the entry's type code.
"""
for toc_entry in toc:
self.add(toc_entry) # The guts of the archive.
def _finalize(self):
"""
Finalize an archive which has been opened using _start_add_entries(),
writing any needed padding and the table of contents.
"""
toc_pos = self.lib.tell()
self.save_trailer(toc_pos)
if self.HDRLEN:
self.update_headers(toc_pos)
self.lib.close()
####### manages keeping the internal TOC and the guts in sync #######
def add(self, entry):
"""
Override this to influence the mechanics of the Archive.
Assumes entry is a seq beginning with (nm, pth, ...) where
nm is the key by which we'll be asked for the object.
pth is the name of where we find the object. Overrides of
get_obj_from can make use of further elements in entry.
"""
nm = entry[0]
pth = entry[1]
pynm, ext = os.path.splitext(os.path.basename(pth))
ispkg = pynm == '__init__'
assert ext in ('.pyc', '.pyo')
self.toc.append((nm, (ispkg, self.lib.tell())))
with open(entry[1], 'rb') as f:
f.seek(8) # skip magic and timestamp
self.lib.write(f.read())
def save_trailer(self, tocpos):
"""
Default - toc is a list of tuples
Gets marshaled to self.lib
"""
try:
self.lib.write(marshal.dumps(self.toc))
# If the TOC to be marshalled contains an unmarshallable object, Python
# raises a cryptic exception providing no details on why such object is
# unmarshallable. Correct this by iteratively inspecting the TOC for
# unmarshallable objects.
except ValueError as exception:
if str(exception) == 'unmarshallable object':
# List of all marshallable types.
MARSHALLABLE_TYPES = {
bool, int, float, complex, str, bytes, bytearray, tuple,
list, set, frozenset, dict, CodeType
}
for module_name, module_tuple in self.toc:
if type(module_name) not in MARSHALLABLE_TYPES:
print('Module name "%s" (%s) unmarshallable.' % (module_name, type(module_name)))
if type(module_tuple) not in MARSHALLABLE_TYPES:
print('Module "%s" tuple "%s" (%s) unmarshallable.' % (module_name, module_tuple, type(module_tuple)))
elif type(module_tuple) == tuple:
for i in range(len(module_tuple)):
if type(module_tuple[i]) not in MARSHALLABLE_TYPES:
print('Module "%s" tuple index %s item "%s" (%s) unmarshallable.' % (module_name, i, module_tuple[i], type(module_tuple[i])))
raise
def update_headers(self, tocpos):
"""
Default - MAGIC + Python's magic + tocpos
"""
self.lib.seek(self.start)
self.lib.write(self.MAGIC)
self.lib.write(BYTECODE_MAGIC)
self.lib.write(struct.pack('!i', tocpos))
class ZlibArchiveWriter(ArchiveWriter):
"""
ZlibArchive - an archive with compressed entries. Archive is read
from the executable created by PyInstaller.
This archive is used for bundling python modules inside the executable.
NOTE: The whole ZlibArchive (PYZ) is compressed so it is not necessary
to compress single modules with zlib.
"""
MAGIC = b'PYZ\0'
TOCPOS = 8
HDRLEN = ArchiveWriter.HDRLEN + 5
COMPRESSION_LEVEL = 6 # Default level of the 'zlib' module from Python.
def __init__(self, archive_path, logical_toc, code_dict=None, cipher=None):
"""
code_dict dict containing module code objects from ModuleGraph.
"""
# Keep references to module code objects constructed by ModuleGraph
# to avoid writing .pyc/.pyo files to disk.
self.code_dict = code_dict or {}
self.cipher = cipher or None
super(ZlibArchiveWriter, self).__init__(archive_path, logical_toc)
def add(self, entry):
name, path, typ = entry
if typ == 'PYMODULE':
typ = PYZ_TYPE_MODULE
if path in ('-', None):
# This is a NamespacePackage, modulegraph marks them
# by using the filename '-'. (But wants to use None,
# so check for None, too, to be forward-compatible.)
typ = PYZ_TYPE_PKG
else:
base, ext = os.path.splitext(os.path.basename(path))
if base == '__init__':
typ = PYZ_TYPE_PKG
data = marshal.dumps(self.code_dict[name])
else:
# Any data files, that might be required by pkg_resources.
typ = PYZ_TYPE_DATA
with open(path, 'rb') as fh:
data = fh.read()
# No need to use a forward slash as path separator here, since
# pkg_resources on Windows uses a back slash as path separator.
obj = zlib.compress(data, self.COMPRESSION_LEVEL)
# First compress then encrypt.
if self.cipher:
obj = self.cipher.encrypt(obj)
self.toc.append((name, (typ, self.lib.tell(), len(obj))))
self.lib.write(obj)
def update_headers(self, tocpos):
"""
add level
"""
ArchiveWriter.update_headers(self, tocpos)
self.lib.write(struct.pack('!B', self.cipher is not None))
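`ZlibArchiveWriter.add` deliberately compresses first and encrypts second — ciphertext is high-entropy, so encrypting first would leave zlib nothing to compress. A roundtrip sketch with a toy XOR "cipher" standing in for PyInstaller's real cipher (XOR is not real encryption, it only illustrates the ordering):

```python
import zlib

def xor_cipher(data, key=0x5A):
    """Toy stand-in for the archive cipher; XOR is its own inverse."""
    return bytes(b ^ key for b in data)

payload = b'import this\n' * 50
obj = zlib.compress(payload, 6)   # first compress ...
obj = xor_cipher(obj)             # ... then "encrypt", as in add()
restored = zlib.decompress(xor_cipher(obj))
print(restored == payload)        # True
print(len(obj) < len(payload))    # True: compression happened before ciphering
```

Reversing the order would still roundtrip correctly, but the stored `obj` would be roughly the size of the raw payload, defeating the point of the PYZ compression.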
class CTOC(object):
"""
A class encapsulating the table of contents of a CArchive.
When written to disk, it is easily read from C.
"""
ENTRYSTRUCT = '!iiiiBB' # (structlen, dpos, dlen, ulen, flag, typcd) followed by name
ENTRYLEN = struct.calcsize(ENTRYSTRUCT)
def __init__(self):
self.data = []
def tobinary(self):
"""
Return self as a binary string.
"""
rslt = []
for (dpos, dlen, ulen, flag, typcd, nm) in self.data:
# Encode all names using UTF-8. This should be safe as
# standard python modules only contain ascii-characters
# (and standard shared libraries should have the same) and
# thus the C-code still can handle this correctly.
nm = nm.encode('utf-8')
nmlen = len(nm) + 1 # add 1 for a '\0'
# align to 16 byte boundary so xplatform C can read
toclen = nmlen + self.ENTRYLEN
if toclen % 16 == 0:
pad = b'\0'
else:
padlen = 16 - (toclen % 16)
pad = b'\0' * padlen
nmlen = nmlen + padlen
rslt.append(struct.pack(self.ENTRYSTRUCT + '%is' % nmlen,
nmlen + self.ENTRYLEN, dpos, dlen, ulen,
flag, ord(typcd), nm + pad))
return b''.join(rslt)
def add(self, dpos, dlen, ulen, flag, typcd, nm):
"""
Add an entry to the table of contents.
DPOS is data position.
DLEN is data length.
ULEN is the uncompressed data len.
FLAG says if the data is compressed.
TYPCD is the "type" of the entry (used by the C code)
NM is the entry's name.
This function is used only while creating an executable.
"""
# Ensure forward slashes in paths are on Windows converted to back
# slashes '\\' since on Windows the bootloader works only with back
# slashes.
nm = os.path.normpath(nm)
self.data.append((dpos, dlen, ulen, flag, typcd, nm))
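The 16-byte alignment arithmetic in ``tobinary`` can be checked on its own. This sketch repeats the same padding logic for a single entry (the sample field values are illustrative):

```python
import struct

ENTRYSTRUCT = '!iiiiBB'          # (structlen, dpos, dlen, ulen, flag, typcd)
ENTRYLEN = struct.calcsize(ENTRYSTRUCT)

def pack_entry(dpos, dlen, ulen, flag, typcd, nm):
    """Pack one TOC entry, padded to a 16-byte boundary as in CTOC.tobinary."""
    nm = nm.encode('utf-8')
    nmlen = len(nm) + 1                      # +1 for the terminating '\0'
    toclen = nmlen + ENTRYLEN
    if toclen % 16 == 0:
        pad = b'\0'
    else:
        padlen = 16 - (toclen % 16)
        pad = b'\0' * padlen
        nmlen += padlen
    return struct.pack(ENTRYSTRUCT + '%is' % nmlen,
                       nmlen + ENTRYLEN, dpos, dlen, ulen,
                       flag, ord(typcd), nm + pad)

entry = pack_entry(0, 100, 200, 1, 'm', 'some.module')
print(len(entry), len(entry) % 16)
```

With an 18-byte fixed header and an 11-character name, the packed entry comes out at 32 bytes, so every entry stays a multiple of 16 and the C reader can walk the table with aligned reads.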
class CArchiveWriter(ArchiveWriter):
"""
An Archive subclass that can hold arbitrary data.
This class encapsulates all files that are bundled within an executable.
It can contain ZlibArchive (Python .pyc files), dlls, Python C extensions
and all other data files that are bundled in --onefile mode.
Easily handled from C or from Python.
"""
    # MAGIC is useful to verify that conversion of Python data types
# to C structure and back works properly.
MAGIC = b'MEI\014\013\012\013\016'
HDRLEN = 0
LEVEL = 9
# Cookie - holds some information for the bootloader. C struct format
# definition. '!' at the beginning means network byte order.
# C struct looks like:
#
# typedef struct _cookie {
# char magic[8]; /* 'MEI\014\013\012\013\016' */
# int len; /* len of entire package */
# int TOC; /* pos (rel to start) of TableOfContents */
# int TOClen; /* length of TableOfContents */
# int pyvers; /* new in v4 */
# char pylibname[64]; /* Filename of Python dynamic library. */
# } COOKIE;
#
_cookie_format = '!8siiii64s'
_cookie_size = struct.calcsize(_cookie_format)
def __init__(self, archive_path, logical_toc, pylib_name):
"""
Constructor.
archive_path path name of file (create empty CArchive if path is None).
start is the seekposition within PATH.
len is the length of the CArchive (if 0, then read till EOF).
pylib_name name of Python DLL which bootloader will use.
"""
self._pylib_name = pylib_name
# A CArchive created from scratch starts at 0, no leading bootloader.
super(CArchiveWriter, self).__init__(archive_path, logical_toc)
def _start_add_entries(self, path):
"""
Open an empty archive for addition of entries.
"""
super(CArchiveWriter, self)._start_add_entries(path)
        # Override the parent's toc ({}) with a CTOC instance.
self.toc = CTOC()
def add(self, entry):
"""
Add an ENTRY to the CArchive.
ENTRY must have:
entry[0] is name (under which it will be saved).
entry[1] is fullpathname of the file.
        entry[2] is a flag for its storage format (0==uncompressed,
1==compressed)
entry[3] is the entry's type code.
Version 5:
If the type code is 'o':
entry[0] is the runtime option
eg: v (meaning verbose imports)
u (meaning unbuffered)
W arg (warning option arg)
              s (meaning do site.py processing).
"""
(nm, pathnm, flag, typcd) = entry[:4]
# FIXME Could we make the version 5 the default one?
# Version 5 - allow type 'o' = runtime option.
code_data = None
fh = None
try:
if typcd in ('o', 'd'):
ulen = 0
flag = 0
elif typcd == 's':
                # If it's a source code file, compile it to a code object and marshal
                # the object so it can be unmarshalled by the bootloader.
code = get_code_object(nm, pathnm)
code = strip_paths_in_code(code)
code_data = marshal.dumps(code)
ulen = len(code_data)
else:
fh = open(pathnm, 'rb')
ulen = os.fstat(fh.fileno()).st_size
except IOError:
print("Cannot find ('%s', '%s', %s, '%s')" % (nm, pathnm, flag, typcd))
raise
where = self.lib.tell()
assert flag in range(3)
if not fh and not code_data:
# no need to write anything
pass
elif flag == 1:
comprobj = zlib.compressobj(self.LEVEL)
if code_data is not None:
self.lib.write(comprobj.compress(code_data))
else:
assert fh
# We only want to change it for pyc files
modify_header = typcd in ('M', 'm', 's')
while 1:
buf = fh.read(16*1024)
if not buf:
break
if modify_header:
modify_header = False
buf = fake_pyc_timestamp(buf)
self.lib.write(comprobj.compress(buf))
self.lib.write(comprobj.flush())
else:
if code_data is not None:
self.lib.write(code_data)
else:
assert fh
while 1:
buf = fh.read(16*1024)
if not buf:
break
self.lib.write(buf)
dlen = self.lib.tell() - where
if typcd == 'm':
if pathnm.find('.__init__.py') > -1:
typcd = 'M'
if fh:
fh.close()
# Record the entry in the CTOC
self.toc.add(where, dlen, ulen, flag, typcd, nm)
def save_trailer(self, tocpos):
"""
        Save the table of contents and the cookie for the bootloader to
        disk.
CArchives can be opened from the end - the cookie points
back to the start.
"""
tocstr = self.toc.tobinary()
self.lib.write(tocstr)
toclen = len(tocstr)
        # Now save the cookie.
total_len = tocpos + toclen + self._cookie_size
pyvers = sys.version_info[0] * 10 + sys.version_info[1]
# Before saving cookie we need to convert it to corresponding
# C representation.
cookie = struct.pack(self._cookie_format, self.MAGIC, total_len,
tocpos, toclen, pyvers,
self._pylib_name.encode('ascii'))
self.lib.write(cookie)
| 36.688053 | 157 | 0.571549 |
76d40edf2503a64425aa968e01180bdedd3fbc11 | 8,367 | py | Python | src/zenmake/waf/waflib/extras/javatest.py | pustotnik/raven | adb75d04a1ce719266eb34c29b35151dfaf91a8a | ["BSD-3-Clause"] | 2 | 2019-10-14T05:05:34.000Z | 2022-03-28T04:55:00.000Z | extras/javatest.py | drobilla/autowaf | b600c928b221a001faeab7bd92786d0b25714bc8 | ["BSD-3-Clause"] | 42 | 2020-08-25T07:59:32.000Z | 2021-11-15T03:12:29.000Z | extras/javatest.py | drobilla/autowaf | b600c928b221a001faeab7bd92786d0b25714bc8 | ["BSD-3-Clause"] | 1 | 2021-08-13T13:59:51.000Z | 2021-08-13T13:59:51.000Z |
#! /usr/bin/env python
# encoding: utf-8
# Federico Pellegrin, 2019 (fedepell)
"""
Provides Java Unit test support using :py:class:`waflib.Tools.waf_unit_test.utest`
task via the **javatest** feature.
This makes it possible to run unit tests and have them integrated into the
standard waf unit test environment. It has been tested with TestNG and JUnit
but should be easily expandable to other frameworks given the flexibility of
ut_str provided by the standard waf unit test environment.
The extra takes care also of managing non-java dependencies (ie. C/C++ libraries
using JNI or Python modules via JEP) and setting up the environment needed to run
them.
Example usage:
def options(opt):
opt.load('java waf_unit_test javatest')
def configure(conf):
conf.load('java javatest')
def build(bld):
[ ... mainprog is built here ... ]
bld(features = 'javac javatest',
srcdir = 'test/',
outdir = 'test',
sourcepath = ['test'],
classpath = [ 'src' ],
basedir = 'test',
use = ['JAVATEST', 'mainprog'], # mainprog is the program being tested in src/
ut_str = 'java -cp ${CLASSPATH} ${JTRUNNER} ${SRC}',
jtest_source = bld.path.ant_glob('test/*.xml'),
)
At command line the CLASSPATH where to find the testing environment and the
test runner (default TestNG) that will then be seen in the environment as
CLASSPATH_JAVATEST (then used for use) and JTRUNNER and can be used for
dependencies and ut_str generation.
Example configure for TestNG:
waf configure --jtpath=/tmp/testng-6.12.jar:/tmp/jcommander-1.71.jar --jtrunner=org.testng.TestNG
or as default runner is TestNG:
waf configure --jtpath=/tmp/testng-6.12.jar:/tmp/jcommander-1.71.jar
Example configure for JUnit:
waf configure --jtpath=/tmp/junit.jar --jtrunner=org.junit.runner.JUnitCore
The runner class presence on the system is checked for at configuration stage.
"""
import os
from waflib import Task, TaskGen, Options, Errors, Utils, Logs
from waflib.Tools import ccroot
JAR_RE = '**/*'
def _process_use_rec(self, name):
"""
    Recursively process ``use`` for the task generator with name ``name``.
Used by javatest_process_use.
"""
if name in self.javatest_use_not or name in self.javatest_use_seen:
return
try:
tg = self.bld.get_tgen_by_name(name)
except Errors.WafError:
self.javatest_use_not.add(name)
return
self.javatest_use_seen.append(name)
tg.post()
for n in self.to_list(getattr(tg, 'use', [])):
_process_use_rec(self, n)
@TaskGen.feature('javatest')
@TaskGen.after_method('process_source', 'apply_link', 'use_javac_files')
def javatest_process_use(self):
"""
    Process the ``use`` attribute, which contains a list of task generator names,
    and store paths that are later used to populate the unit test runtime environment.
"""
self.javatest_use_not = set()
self.javatest_use_seen = []
self.javatest_libpaths = [] # strings or Nodes
self.javatest_pypaths = [] # strings or Nodes
self.javatest_dep_nodes = []
names = self.to_list(getattr(self, 'use', []))
for name in names:
_process_use_rec(self, name)
def extend_unique(lst, varlst):
ext = []
for x in varlst:
if x not in lst:
ext.append(x)
lst.extend(ext)
# Collect type specific info needed to construct a valid runtime environment
# for the test.
for name in self.javatest_use_seen:
tg = self.bld.get_tgen_by_name(name)
# Python-Java embedding crosstools such as JEP
if 'py' in tg.features:
# Python dependencies are added to PYTHONPATH
pypath = getattr(tg, 'install_from', tg.path)
if 'buildcopy' in tg.features:
# Since buildcopy is used we assume that PYTHONPATH in build should be used,
# not source
extend_unique(self.javatest_pypaths, [pypath.get_bld().abspath()])
# Add buildcopy output nodes to dependencies
extend_unique(self.javatest_dep_nodes, [o for task in getattr(tg, 'tasks', []) for o in getattr(task, 'outputs', [])])
else:
# If buildcopy is not used, depend on sources instead
extend_unique(self.javatest_dep_nodes, tg.source)
extend_unique(self.javatest_pypaths, [pypath.abspath()])
if getattr(tg, 'link_task', None):
            # For tasks with a link_task (C, C++, D etc.) include their library paths:
if not isinstance(tg.link_task, ccroot.stlink_task):
extend_unique(self.javatest_dep_nodes, tg.link_task.outputs)
extend_unique(self.javatest_libpaths, tg.link_task.env.LIBPATH)
if 'pyext' in tg.features:
# If the taskgen is extending Python we also want to add the interpreter libpath.
extend_unique(self.javatest_libpaths, tg.link_task.env.LIBPATH_PYEXT)
else:
# Only add to libpath if the link task is not a Python extension
extend_unique(self.javatest_libpaths, [tg.link_task.outputs[0].parent.abspath()])
if 'javac' in tg.features or 'jar' in tg.features:
if hasattr(tg, 'jar_task'):
# For Java JAR tasks depend on generated JAR
extend_unique(self.javatest_dep_nodes, tg.jar_task.outputs)
else:
# For Java non-JAR ones we need to glob generated files (Java output files are not predictable)
if hasattr(tg, 'outdir'):
base_node = tg.outdir
else:
base_node = tg.path.get_bld()
self.javatest_dep_nodes.extend([dx for dx in base_node.ant_glob(JAR_RE, remove=False, quiet=True)])
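The nested ``extend_unique`` helper above is just an order-preserving, de-duplicating extend; a standalone sketch of the same behavior:

```python
def extend_unique(lst, varlst):
    """Append items from varlst that are not already in lst, keeping order."""
    ext = [x for x in varlst if x not in lst]
    lst.extend(ext)

paths = ["/usr/lib", "/opt/lib"]
extend_unique(paths, ["/opt/lib", "/usr/local/lib", "/usr/lib"])
print(paths)
```

Keeping insertion order matters here because these lists end up as search paths (``PATH``, ``LD_LIBRARY_PATH``, ``PYTHONPATH``), where earlier entries win.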
@TaskGen.feature('javatest')
@TaskGen.after_method('apply_java', 'use_javac_files', 'set_classpath', 'javatest_process_use')
def make_javatest(self):
"""
Creates a ``utest`` task with a populated environment for Java Unit test execution
"""
tsk = self.create_task('utest')
tsk.set_run_after(self.javac_task)
# Dependencies from recursive use analysis
tsk.dep_nodes.extend(self.javatest_dep_nodes)
# Put test input files as waf_unit_test relies on that for some prints and log generation
# If jtest_source is there, this is specially useful for passing XML for TestNG
# that contain test specification, use that as inputs, otherwise test sources
if getattr(self, 'jtest_source', None):
tsk.inputs = self.to_nodes(self.jtest_source)
else:
if self.javac_task.srcdir[0].exists():
tsk.inputs = self.javac_task.srcdir[0].ant_glob('**/*.java', remove=False)
if getattr(self, 'ut_str', None):
self.ut_run, lst = Task.compile_fun(self.ut_str, shell=getattr(self, 'ut_shell', False))
tsk.vars = lst + tsk.vars
if getattr(self, 'ut_cwd', None):
if isinstance(self.ut_cwd, str):
# we want a Node instance
if os.path.isabs(self.ut_cwd):
self.ut_cwd = self.bld.root.make_node(self.ut_cwd)
else:
self.ut_cwd = self.path.make_node(self.ut_cwd)
else:
self.ut_cwd = self.bld.bldnode
# Get parent CLASSPATH and add output dir of test, we run from wscript dir
# We have to change it from list to the standard java -cp format (: separated)
tsk.env.CLASSPATH = ':'.join(self.env.CLASSPATH) + ':' + self.outdir.abspath()
if not self.ut_cwd.exists():
self.ut_cwd.mkdir()
if not hasattr(self, 'ut_env'):
self.ut_env = dict(os.environ)
def add_paths(var, lst):
# Add list of paths to a variable, lst can contain strings or nodes
lst = [ str(n) for n in lst ]
Logs.debug("ut: %s: Adding paths %s=%s", self, var, lst)
self.ut_env[var] = os.pathsep.join(lst) + os.pathsep + self.ut_env.get(var, '')
add_paths('PYTHONPATH', self.javatest_pypaths)
if Utils.is_win32:
add_paths('PATH', self.javatest_libpaths)
elif Utils.unversioned_sys_platform() == 'darwin':
add_paths('DYLD_LIBRARY_PATH', self.javatest_libpaths)
add_paths('LD_LIBRARY_PATH', self.javatest_libpaths)
else:
add_paths('LD_LIBRARY_PATH', self.javatest_libpaths)
def configure(ctx):
cp = ctx.env.CLASSPATH or '.'
if getattr(Options.options, 'jtpath', None):
ctx.env.CLASSPATH_JAVATEST = getattr(Options.options, 'jtpath').split(':')
cp += ':' + getattr(Options.options, 'jtpath')
if getattr(Options.options, 'jtrunner', None):
ctx.env.JTRUNNER = getattr(Options.options, 'jtrunner')
if ctx.check_java_class(ctx.env.JTRUNNER, with_classpath=cp):
ctx.fatal('Could not run test class %r' % ctx.env.JTRUNNER)
def options(opt):
opt.add_option('--jtpath', action='store', default='', dest='jtpath',
help='Path to jar(s) needed for javatest execution, colon separated, if not in the system CLASSPATH')
opt.add_option('--jtrunner', action='store', default='org.testng.TestNG', dest='jtrunner',
help='Class to run javatest test [default: org.testng.TestNG]')
| 35.155462 | 122 | 0.726067 |
ac9d5d32e9278e80cd11c2bbb29b245995f702cf | 5,290 | py | Python | homeassistant/components/syncthing/__init__.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | ["Apache-2.0"] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | homeassistant/components/syncthing/__init__.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | ["Apache-2.0"] | 31,101 | 2020-03-02T13:00:16.000Z | 2022-03-31T23:57:36.000Z | homeassistant/components/syncthing/__init__.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | ["Apache-2.0"] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z |
"""The syncthing integration."""
import asyncio
import logging
import aiosyncthing
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import (
CONF_TOKEN,
CONF_URL,
CONF_VERIFY_SSL,
EVENT_HOMEASSISTANT_STOP,
Platform,
)
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import ConfigEntryNotReady
from homeassistant.helpers.dispatcher import async_dispatcher_send
from .const import (
DOMAIN,
EVENTS,
RECONNECT_INTERVAL,
SERVER_AVAILABLE,
SERVER_UNAVAILABLE,
)
PLATFORMS = [Platform.SENSOR]
_LOGGER = logging.getLogger(__name__)
async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Set up syncthing from a config entry."""
data = entry.data
if DOMAIN not in hass.data:
hass.data[DOMAIN] = {}
client = aiosyncthing.Syncthing(
data[CONF_TOKEN],
url=data[CONF_URL],
verify_ssl=data[CONF_VERIFY_SSL],
)
try:
status = await client.system.status()
except aiosyncthing.exceptions.SyncthingError as exception:
await client.close()
raise ConfigEntryNotReady from exception
server_id = status["myID"]
syncthing = SyncthingClient(hass, client, server_id)
syncthing.subscribe()
hass.data[DOMAIN][entry.entry_id] = syncthing
hass.config_entries.async_setup_platforms(entry, PLATFORMS)
async def cancel_listen_task(_):
await syncthing.unsubscribe()
entry.async_on_unload(
hass.bus.async_listen_once(EVENT_HOMEASSISTANT_STOP, cancel_listen_task)
)
return True
async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Unload a config entry."""
unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
if unload_ok:
syncthing = hass.data[DOMAIN].pop(entry.entry_id)
await syncthing.unsubscribe()
return unload_ok
class SyncthingClient:
"""A Syncthing client."""
def __init__(self, hass, client, server_id):
"""Initialize the client."""
self._hass = hass
self._client = client
self._server_id = server_id
self._listen_task = None
@property
def server_id(self):
"""Get server id."""
return self._server_id
@property
def url(self):
"""Get server URL."""
return self._client.url
@property
def database(self):
"""Get database namespace client."""
return self._client.database
@property
def system(self):
"""Get system namespace client."""
return self._client.system
def subscribe(self):
"""Start event listener coroutine."""
self._listen_task = asyncio.create_task(self._listen())
async def unsubscribe(self):
"""Stop event listener coroutine."""
if self._listen_task:
self._listen_task.cancel()
await self._client.close()
async def _listen(self):
"""Listen to Syncthing events."""
events = self._client.events
server_was_unavailable = False
while True:
if await self._server_available():
if server_was_unavailable:
_LOGGER.info(
"The syncthing server '%s' is back online", self._client.url
)
async_dispatcher_send(
self._hass, f"{SERVER_AVAILABLE}-{self._server_id}"
)
server_was_unavailable = False
else:
await asyncio.sleep(RECONNECT_INTERVAL.total_seconds())
continue
try:
async for event in events.listen():
if events.last_seen_id == 0:
continue # skipping historical events from the first batch
if event["type"] not in EVENTS:
continue
signal_name = EVENTS[event["type"]]
folder = None
if "folder" in event["data"]:
folder = event["data"]["folder"]
                    else:  # Workaround: some events store the folder id under the "id" key
folder = event["data"]["id"]
async_dispatcher_send(
self._hass,
f"{signal_name}-{self._server_id}-{folder}",
event,
)
except aiosyncthing.exceptions.SyncthingError:
_LOGGER.info(
"The syncthing server '%s' is not available. Sleeping %i seconds and retrying",
self._client.url,
RECONNECT_INTERVAL.total_seconds(),
)
async_dispatcher_send(
self._hass, f"{SERVER_UNAVAILABLE}-{self._server_id}"
)
await asyncio.sleep(RECONNECT_INTERVAL.total_seconds())
server_was_unavailable = True
continue
async def _server_available(self):
try:
await self._client.system.ping()
except aiosyncthing.exceptions.SyncthingError:
return False
else:
return True
| 30.402299 | 99 | 0.59414 |
468692fa4620ef1acd50a0a3f2463eb4927b6c7b | 7,081 | py | Python | src/backend/api/handlers/tests/decorators_test.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | ["MIT"] | null | null | null | src/backend/api/handlers/tests/decorators_test.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | ["MIT"] | null | null | null | src/backend/api/handlers/tests/decorators_test.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | ["MIT"] | null | null | null |
import logging
from unittest.mock import patch
import pytest
from flask import g
from google.cloud import ndb
from werkzeug.test import Client
from backend.common import auth
from backend.common.consts.auth_type import AuthType
from backend.common.consts.event_type import EventType
from backend.common.models.account import Account
from backend.common.models.api_auth_access import ApiAuthAccess
from backend.common.models.event import Event
from backend.common.models.team import Team
def test_not_authenticated(ndb_client: ndb.Client, api_client: Client, caplog) -> None:
with ndb_client.context():
ApiAuthAccess(
id="test_auth_key",
auth_types_enum=[AuthType.READ_API],
).put()
Team(id="frc254", team_number=254).put()
with api_client.application.test_request_context():
assert g.get("auth_owner_id", None) is None
with caplog.at_level(logging.INFO):
resp = api_client.get("/api/v3/team/frc254")
assert len(caplog.records) == 0
assert g.get("auth_owner_id", None) is None
assert resp.status_code == 401
assert "required" in resp.json["Error"]
def test_bad_auth(ndb_client: ndb.Client, api_client: Client, caplog) -> None:
with ndb_client.context():
account = Account()
ApiAuthAccess(
id="test_auth_key", auth_types_enum=[AuthType.READ_API], owner=account.key
).put()
Team(id="frc254", team_number=254).put()
with api_client.application.test_request_context():
assert g.get("auth_owner_id", None) is None
with caplog.at_level(logging.INFO):
resp = api_client.get(
"/api/v3/team/frc254", headers={"X-TBA-Auth-Key": "bad_auth_key"}
)
assert len(caplog.records) == 0
assert g.get("auth_owner_id", None) is None
assert resp.status_code == 401
assert "invalid" in resp.json["Error"]
@pytest.mark.parametrize("account", [None, Account()])
def test_authenticated_header(
ndb_client: ndb.Client, api_client: Client, account: Account, caplog
) -> None:
with ndb_client.context():
if account:
account.put()
ApiAuthAccess(
id="test_auth_key",
auth_types_enum=[AuthType.READ_API],
owner=account.key if account else None,
).put()
Team(id="frc254", team_number=254).put()
with api_client.application.test_request_context():
assert g.get("auth_owner_id", None) is None
with caplog.at_level(logging.INFO):
resp = api_client.get(
"/api/v3/team/frc254", headers={"X-TBA-Auth-Key": "test_auth_key"}
)
assert len(caplog.records) == 1
record = caplog.records[0]
assert record.levelno == logging.INFO
if account:
assert record.message == "Auth owner: 1, X-TBA-Auth-Key: test_auth_key"
else:
assert record.message == "Auth owner: None, X-TBA-Auth-Key: test_auth_key"
if account:
assert g.auth_owner_id == account.key.id()
assert resp.status_code == 200
@pytest.mark.parametrize("account", [None, Account()])
def test_authenticated_urlparam(
ndb_client: ndb.Client, api_client: Client, account: Account, caplog
) -> None:
with ndb_client.context():
if account:
account.put()
ApiAuthAccess(
id="test_auth_key",
auth_types_enum=[AuthType.READ_API],
owner=account.key if account else None,
).put()
Team(id="frc254", team_number=254).put()
with api_client.application.test_request_context():
assert g.get("auth_owner_id", None) is None
with caplog.at_level(logging.INFO):
resp = api_client.get("/api/v3/team/frc254?X-TBA-Auth-Key=test_auth_key")
assert len(caplog.records) == 1
record = caplog.records[0]
assert record.levelno == logging.INFO
if account:
assert record.message == "Auth owner: 1, X-TBA-Auth-Key: test_auth_key"
else:
assert record.message == "Auth owner: None, X-TBA-Auth-Key: test_auth_key"
if account:
assert g.auth_owner_id == account.key.id()
assert resp.status_code == 200
def test_authenticated_user(ndb_client: ndb.Client, api_client: Client, caplog) -> None:
email = "zach@thebluealliance.com"
account = Account(email=email)
with ndb_client.context():
Team(id="frc254", team_number=254).put()
account.put()
with api_client.application.test_request_context():
assert g.get("auth_owner_id", None) is None
with patch.object(
auth, "_decoded_claims", return_value={"email": email}
), caplog.at_level(logging.INFO):
resp = api_client.get("/api/v3/team/frc254")
assert len(caplog.records) == 1
record = caplog.records[0]
assert record.levelno == logging.INFO
assert record.message == "Auth owner: 1, LOGGED IN"
assert g.auth_owner_id == account.key.id()
assert resp.status_code == 200
def test_team_key_invalid(ndb_client: ndb.Client, api_client: Client) -> None:
with ndb_client.context():
Team(id="frc254", team_number=254).put()
resp = api_client.get(
"/api/v3/team/254", headers={"X-TBA-Auth-Key": "test_auth_key"}
)
assert resp.status_code == 404
assert resp.json["Error"] == "254 is not a valid team key"
def test_team_key_does_not_exist(ndb_client: ndb.Client, api_client: Client) -> None:
with ndb_client.context():
ApiAuthAccess(
id="test_auth_key",
auth_types_enum=[AuthType.READ_API],
).put()
Team(id="frc254", team_number=254).put()
resp = api_client.get(
"/api/v3/team/frc604", headers={"X-TBA-Auth-Key": "test_auth_key"}
)
assert resp.status_code == 404
assert resp.json["Error"] == "team key: frc604 does not exist"
def test_event_key_invalid(ndb_client: ndb.Client, api_client: Client) -> None:
with ndb_client.context():
Event(
id="2019casj",
year=2019,
event_short="casj",
event_type_enum=EventType.REGIONAL,
).put()
resp = api_client.get(
"/api/v3/event/casj", headers={"X-TBA-Auth-Key": "test_auth_key"}
)
assert resp.status_code == 404
assert resp.json["Error"] == "casj is not a valid event key"
def test_event_key_does_not_exist(ndb_client: ndb.Client, api_client: Client) -> None:
with ndb_client.context():
ApiAuthAccess(
id="test_auth_key",
auth_types_enum=[AuthType.READ_API],
).put()
Event(
id="2019casj",
year=2019,
event_short="casj",
event_type_enum=EventType.REGIONAL,
).put()
resp = api_client.get(
"/api/v3/event/2019casf", headers={"X-TBA-Auth-Key": "test_auth_key"}
)
assert resp.status_code == 404
assert resp.json["Error"] == "event key: 2019casf does not exist"
| 33.244131 | 88 | 0.635221 |
3c83188e3f4e1015f705b6a97a6c20254c7ed04b | 8,528 | py | Python | snapraid-runner.py | Maleesh/snapraid-runner | 209c2054a7e0d21bb96de119d951c3fb770c9155 | ["MIT"] | null | null | null | snapraid-runner.py | Maleesh/snapraid-runner | 209c2054a7e0d21bb96de119d951c3fb770c9155 | ["MIT"] | null | null | null | snapraid-runner.py | Maleesh/snapraid-runner | 209c2054a7e0d21bb96de119d951c3fb770c9155 | ["MIT"] | null | null | null |
# -*- coding: utf8 -*-
from __future__ import division
import argparse
import configparser
import logging
import logging.handlers
import os.path
import subprocess
import sys
import threading
import time
import traceback
from collections import Counter, defaultdict
from io import StringIO
# Global variables
config = None
email_log = None
def tee_log(infile, out_lines, log_level):
"""
    Create a thread that saves all the output from infile to out_lines and
    logs every line with log_level.
"""
def tee_thread():
for line in iter(infile.readline, ""):
line = line.strip()
# Do not log the progress display
if "\r" in line:
line = line.split("\r")[-1]
logging.log(log_level, line.strip())
out_lines.append(line)
infile.close()
t = threading.Thread(target=tee_thread)
t.daemon = True
t.start()
return t
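The same tee pattern can be demonstrated with an in-memory stream instead of a subprocess pipe. This sketch uses the stock ``INFO`` level in place of the runner's custom ``OUTPUT``/``OUTERR`` levels:

```python
import io
import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def tee(infile, out_lines, level):
    def run():
        for line in iter(infile.readline, ""):
            line = line.strip()
            if "\r" in line:                  # keep only the final progress update
                line = line.split("\r")[-1]
            logging.log(level, line)
            out_lines.append(line)
        infile.close()
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

captured = []
stream = io.StringIO("scanning...\rprogress 50%\rprogress 100%\ndone\n")
tee(stream, captured, logging.INFO).join()
print(captured)
```

Splitting on ``\r`` and keeping the last segment is what drops intermediate progress-bar redraws from the captured log, so only the final state of each line is recorded.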
def snapraid_command(command, args={}, ignore_errors=False):
"""
Run snapraid command
Raises subprocess.CalledProcessError if errorlevel != 0
"""
arguments = ["--conf", config["snapraid"]["config"]]
for (k, v) in args.items():
arguments.extend(["--" + k, str(v)])
p = subprocess.Popen(
[config["snapraid"]["executable"], command] + arguments,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True)
out = []
threads = [
tee_log(p.stdout, out, logging.OUTPUT),
tee_log(p.stderr, [], logging.OUTERR)]
for t in threads:
t.join()
ret = p.wait()
    # Sleep for a while to prevent output mixup
time.sleep(0.3)
if ret == 0 or ignore_errors:
return out
else:
raise subprocess.CalledProcessError(ret, "snapraid " + command)
def send_email(success):
import smtplib
from email.mime.text import MIMEText
from email import charset
if len(config["smtp"]["host"]) == 0:
return
# use quoted-printable instead of the default base64
charset.add_charset("utf-8", charset.SHORTEST, charset.QP)
if success:
body = "SnapRAID job completed successfully:\n\n\n"
else:
body = "Error during SnapRAID job:\n\n\n"
log = email_log.getvalue()
maxsize = config['email'].get('maxsize', 500) * 1024
if maxsize and len(log) > maxsize:
cut_lines = log.count("\n", maxsize//2, -maxsize//2)
log = (
"NOTE: Log was too big for email and was shortened\n\n" +
log[:maxsize//2] +
"[...]\n\n\n --- LOG WAS TOO BIG - {} LINES REMOVED --\n\n\n[...]".format(
cut_lines) +
log[-maxsize//2:])
body += log
msg = MIMEText(body, "plain", "utf-8")
msg["Subject"] = config["email"]["subject"] + \
(" SUCCESS" if success else " ERROR")
msg["From"] = config["email"]["from"]
msg["To"] = config["email"]["to"]
smtp = {"host": config["smtp"]["host"]}
if config["smtp"]["port"]:
smtp["port"] = config["smtp"]["port"]
if config["smtp"]["ssl"]:
server = smtplib.SMTP_SSL(**smtp)
else:
server = smtplib.SMTP(**smtp)
if config["smtp"]["tls"]:
server.starttls()
if config["smtp"]["user"]:
server.login(config["smtp"]["user"], config["smtp"]["password"])
server.sendmail(
config["email"]["from"],
[config["email"]["to"]],
msg.as_string())
server.quit()
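The middle-truncation used for oversized email bodies in ``send_email`` can be factored out and tested on its own. This is a sketch with a simplified marker line standing in for the runner's wording:

```python
def shorten_log(log, maxsize):
    """Keep the head and tail of a log, dropping the middle, as send_email does."""
    if not maxsize or len(log) <= maxsize:
        return log
    cut_lines = log.count("\n", maxsize // 2, -maxsize // 2)
    return (log[:maxsize // 2]
            + "\n[... log shortened, {} lines removed ...]\n".format(cut_lines)
            + log[-maxsize // 2:])

log = "\n".join("line {}".format(i) for i in range(1000))
short = shorten_log(log, 200)
print(len(log), len(short))
```

Keeping both ends is a deliberate choice: the start of the log shows what the run attempted, and the end shows how it failed, which are usually the two parts worth emailing.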
def finish(is_success):
if ("error", "success")[is_success] in config["email"]["sendon"]:
try:
send_email(is_success)
except:
logging.exception("Failed to send email")
if is_success:
logging.info("Run finished successfully")
else:
logging.error("Run failed")
sys.exit(0 if is_success else 1)
def load_config(args):
global config
parser = configparser.RawConfigParser()
parser.read(args.conf)
sections = ["snapraid", "logging", "email", "smtp", "scrub"]
config = dict((x, defaultdict(lambda: "")) for x in sections)
for section in parser.sections():
for (k, v) in parser.items(section):
config[section][k] = v.strip()
int_options = [
("snapraid", "deletethreshold"), ("logging", "maxsize"),
("scrub", "percentage"), ("scrub", "older-than"), ("email", "maxsize"),
]
for section, option in int_options:
try:
config[section][option] = int(config[section][option])
except ValueError:
config[section][option] = 0
config["smtp"]["ssl"] = (config["smtp"]["ssl"].lower() == "true")
config["smtp"]["tls"] = (config["smtp"]["tls"].lower() == "true")
config["scrub"]["enabled"] = (config["scrub"]["enabled"].lower() == "true")
config["email"]["short"] = (config["email"]["short"].lower() == "true")
config["snapraid"]["touch"] = (config["snapraid"]["touch"].lower() == "true")
if args.scrub is not None:
config["scrub"]["enabled"] = args.scrub
def setup_logger():
log_format = logging.Formatter(
"%(asctime)s [%(levelname)-6.6s] %(message)s")
root_logger = logging.getLogger()
logging.OUTPUT = 15
logging.addLevelName(logging.OUTPUT, "OUTPUT")
logging.OUTERR = 25
logging.addLevelName(logging.OUTERR, "OUTERR")
root_logger.setLevel(logging.OUTPUT)
console_logger = logging.StreamHandler(sys.stdout)
console_logger.setFormatter(log_format)
root_logger.addHandler(console_logger)
if config["logging"]["file"]:
        max_log_size = max(config["logging"]["maxsize"], 0) * 1024  # 0 disables rotation
file_logger = logging.handlers.RotatingFileHandler(
config["logging"]["file"],
maxBytes=max_log_size,
backupCount=9)
file_logger.setFormatter(log_format)
root_logger.addHandler(file_logger)
if config["email"]["sendon"]:
global email_log
email_log = StringIO()
email_logger = logging.StreamHandler(email_log)
email_logger.setFormatter(log_format)
if config["email"]["short"]:
# Don't send programm stdout in email
email_logger.setLevel(logging.INFO)
root_logger.addHandler(email_logger)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--conf",
default="snapraid-runner.conf",
metavar="CONFIG",
help="Configuration file (default: %(default)s)")
parser.add_argument("--no-scrub", action='store_false',
dest='scrub', default=None,
help="Do not scrub (overrides config)")
parser.add_argument("-f", "--force-sync", action='store_true',
dest='force_sync', default=False,
help="Force sync despite delete count (overrides config)")
args = parser.parse_args()
if not os.path.exists(args.conf):
print("snapraid-runner configuration file not found")
parser.print_help()
sys.exit(2)
try:
load_config(args)
except:
print("unexpected exception while loading config")
print(traceback.format_exc())
sys.exit(2)
try:
setup_logger()
except:
print("unexpected exception while setting up logging")
print(traceback.format_exc())
sys.exit(2)
try:
run(args)
except Exception:
logging.exception("Run failed due to unexpected exception:")
finish(False)
def run(args):
logging.info("=" * 60)
logging.info("Run started")
logging.info("=" * 60)
if not os.path.isfile(config["snapraid"]["executable"]):
logging.error("The configured snapraid executable \"{}\" does not "
"exist or is not a file".format(
config["snapraid"]["executable"]))
finish(False)
if not os.path.isfile(config["snapraid"]["config"]):
logging.error("Snapraid config does not exist at " +
config["snapraid"]["config"])
finish(False)
logging.info("Running diff...")
diff_out = snapraid_command("diff", ignore_errors=True)
logging.info("*" * 60)
diff_results = Counter(line.split(" ")[0] for line in diff_out)
diff_results = dict((x, diff_results[x]) for x in
["add", "remove", "move", "update"])
logging.info(("Diff results: {add} added, {remove} removed, "
+ "{move} moved, {update} modified").format(**diff_results))
if (config["snapraid"]["deletethreshold"] >= 0 and
diff_results["remove"] > config["snapraid"]["deletethreshold"]):
        if args.force_sync:
logging.error(
"Deleted files exceed delete threshold of {}".format(
config["snapraid"]["deletethreshold"]))
logging.info("Force syncing")
else:
logging.error(
"Deleted files exceed delete threshold of {}, aborting".format(
config["snapraid"]["deletethreshold"]))
finish(False)
logging.info("Running sync...")
try:
snapraid_command("sync")
except subprocess.CalledProcessError as e:
logging.error(e)
finish(False)
logging.info("*" * 60)
if config["scrub"]["enabled"]:
logging.info("Running scrub...")
try:
snapraid_command("scrub", {
"percentage": config["scrub"]["percentage"],
"older-than": config["scrub"]["older-than"],
})
except subprocess.CalledProcessError as e:
logging.error(e)
finish(False)
logging.info("*" * 60)
logging.info("All done")
finish(True)
main()
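The diff-counting step in `run()` above (tallying the first word of each `snapraid diff` output line with a `Counter`, then keeping the four reported categories) can be sketched in isolation. The sample lines below are hypothetical, not real snapraid output:

```python
from collections import Counter

def summarize_diff(diff_lines):
    # Tally the first word of each line; Counter returns 0 for missing
    # keys, so absent categories come out as zero.
    counts = Counter(line.split(" ")[0] for line in diff_lines)
    return {key: counts[key] for key in ("add", "remove", "move", "update")}

# Hypothetical sample of snapraid diff output lines.
sample = [
    "add photos/2021/img001.jpg",
    "add photos/2021/img002.jpg",
    "remove old/backup.tar",
    "update notes.txt",
]
print(summarize_diff(sample))  # → {'add': 2, 'remove': 1, 'move': 0, 'update': 1}
```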
| 28.332226 | 78 | 0.678354 |
b138ebe473b39f30d0c7912dd6f82dbdd6b77718 | 4,432 | py | Python | doc/source/conf.py | mail2nsrajesh/mistral | b19d87141563e00f18cd74c685392d0b9b70e351 | [
"Apache-2.0"
] | null | null | null | doc/source/conf.py | mail2nsrajesh/mistral | b19d87141563e00f18cd74c685392d0b9b70e351 | [
"Apache-2.0"
] | null | null | null | doc/source/conf.py | mail2nsrajesh/mistral | b19d87141563e00f18cd74c685392d0b9b70e351 | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import warnings
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinxcontrib.pecanwsme.rest',
'wsmeext.sphinxext',
]
if not on_rtd:
extensions.append('oslosphinx')
wsme_protocols = ['restjson']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Mistral'
copyright = u'2014, Mistral Contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
from mistral.version import version_info
release = version_info.release_string()
version = version_info.version_string()
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_static_path = ['_static']
if on_rtd:
html_theme_path = ['.']
html_theme = 'sphinx_rtd_theme'
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['mistral.']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
try:
html_last_updated_fmt = subprocess.check_output(
git_cmd).decode('utf-8')
except Exception:
warnings.warn('Cannot get last updated time from git repository. '
'Not setting "html_last_updated_fmt".')
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Mistral'
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': [
'sidebarlinks.html', 'localtoc.html', 'searchbox.html',
'sourcelink.html'
],
'**': [
'localtoc.html', 'relations.html',
'searchbox.html', 'sourcelink.html'
]
}
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'mistral', u'Mistral',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
man_show_urls = True
| 32.82963 | 79 | 0.696751 |
31bb90c352c04fcba9f456a4a29c62fbc97c6130 | 8,291 | py | Python | src/pytezos/block/header.py | bitnovo/pytezos | 3932fffc812a0fd3ebb615e5f7320b20ce2fc629 | [
"MIT"
] | null | null | null | src/pytezos/block/header.py | bitnovo/pytezos | 3932fffc812a0fd3ebb615e5f7320b20ce2fc629 | [
"MIT"
] | null | null | null | src/pytezos/block/header.py | bitnovo/pytezos | 3932fffc812a0fd3ebb615e5f7320b20ce2fc629 | [
"MIT"
] | null | null | null | from pprint import pformat
from typing import Any, Dict, List, Optional
import bson # type: ignore
from pytezos.block.forge import bump_fitness, forge_block_header, forge_protocol_data
from pytezos.context.impl import ExecutionContext
from pytezos.context.mixin import ContextMixin
from pytezos.crypto.encoding import base58_encode
from pytezos.crypto.key import blake2b_32
from pytezos.jupyter import get_class_docstring
from pytezos.michelson.forge import forge_array, forge_base58, optimize_timestamp
from pytezos.operation.kind import validation_passes
from pytezos.sandbox.parameters import sandbox_params
class BlockHeader(ContextMixin):
def __init__(
self,
context: ExecutionContext,
protocol_data: Optional[Dict[str, Any]] = None,
operations: Optional[List[List[Dict[str, Any]]]] = None,
shell_header: Optional[Dict[str, Any]] = None,
signature: Optional[str] = None,
):
super().__init__(context=context)
self.protocol_data = protocol_data or {}
self.operations = operations or []
self.shell_header = shell_header or {}
self.signature = signature
def __repr__(self):
res = [
super().__repr__(),
'\nHeader',
pformat(
{
**self.shell_header,
'protocol_data': {
**self.protocol_data,
'signature': self.signature,
},
}
),
'\nOperations',
pformat(self.operations),
'\nHelpers',
get_class_docstring(self.__class__),
]
return '\n'.join(res)
@classmethod
def activate_protocol(
cls,
protocol_hash: str,
parameters: Dict[str, Any],
context: ExecutionContext
) -> 'BlockHeader':
prev_fitness = context.shell.head.header()['fitness'] # type: ignore
protocol_data = {
"content": {
"command": "activate",
"hash": protocol_hash,
"fitness": bump_fitness(prev_fitness),
"protocol_parameters": forge_array(bson.dumps(parameters)).hex(),
}
}
return BlockHeader(
context=context,
protocol_data=protocol_data,
)
@classmethod
def bake_block(cls, context: ExecutionContext, min_fee: int = 0) -> 'BlockHeader':
pending_operations = context.shell.mempool.pending_operations() # type: ignore
operations: List[List[Dict[str, Any]]] = [[], [], [], []]
for opg in pending_operations['applied']:
validation_pass = validation_passes[opg['contents'][0]['kind']]
if validation_pass == 3 \
and sum(map(lambda x: int(x['fee']), opg['contents'])) < min_fee:
continue
operations[validation_pass].append(opg)
# NOTE: Real values will be set during fill
protocol_data = {
"priority": 0,
"proof_of_work_nonce": "0000000000000000",
}
return BlockHeader(
context=context,
operations=operations,
protocol_data=protocol_data,
)
def _spawn(self, **kwargs):
return BlockHeader(
context=self.context,
protocol_data=kwargs.get('protocol_data', self.protocol_data.copy()),
operations=kwargs.get('operations', self.operations.copy()),
shell_header=kwargs.get('shell_header', self.shell_header.copy()),
signature=kwargs.get('signature', self.signature),
)
def fill(self, block_id='head') -> 'BlockHeader':
"""Fill missing fields essential for preapply
:param block_id: head or genesis
:rtype: BlockHeader
"""
pred_shell_header = self.shell.blocks[block_id].header.shell()
timestamp = optimize_timestamp(pred_shell_header['timestamp']) + 1
protocol = self.shell.blocks[block_id].protocols()['next_protocol']
level = int(pred_shell_header['level']) + 1
dummy_signature = base58_encode(b'\x00' * 64, b'sig').decode()
protocol_data = {
'protocol': protocol,
**self.protocol_data,
}
if level % int(sandbox_params['blocks_per_commitment']) == 0: # type: ignore
protocol_data['seed_nonce_hash'] = base58_encode(b'\x00' * 32, b'nce').decode()
if 'priority' in protocol_data:
baker = self.key.public_key_hash()
baking_rights = self.shell.blocks[block_id].helpers.baking_rights(delegate=baker)
protocol_data['priority'] = next(item['priority']
for item in baking_rights
if item['delegate'] == baker) # Fail if no rights
operations = [
[
{
'protocol': protocol,
'branch': operation['branch'],
'contents': operation['contents'],
'signature': operation['signature'],
}
for operation in operation_list
]
for operation_list in self.operations
]
payload = {
'protocol_data': {
**protocol_data,
'signature': dummy_signature,
},
'operations': operations,
}
res = self.shell.blocks[block_id].helpers.preapply.block.post(
block=payload,
sort=True,
timestamp=timestamp,
)
forged_operations = [
[
{
'branch': operation['branch'],
'data': operation['data']
}
for operation in operation_list['applied']
]
for operation_list in res['operations']
]
return self._spawn(
shell_header=res['shell_header'],
operations=forged_operations,
protocol_data=protocol_data,
signature=dummy_signature,
)
def work(self):
header = self
threshold = int(sandbox_params['proof_of_work_threshold'])
nonce = 1
while header.pow_stamp() > threshold:
header = self._spawn(
protocol_data={
**self.protocol_data,
'proof_of_work_nonce': nonce.to_bytes(8, 'big').hex(),
}
)
nonce += 1
return header
def binary_payload(self) -> bytes:
"""Get binary payload used for injection/hash calculation."""
if self.signature is None:
raise ValueError('Not signed')
return self.forge() + forge_base58(self.signature)
def forge(self) -> bytes:
"""Convert json representation of the block header into bytes.
:returns: Binary payload (unsigned)
"""
return forge_block_header(
{
**self.shell_header,
'protocol_data': forge_protocol_data(self.protocol_data).hex(),
}
)
def pow_stamp(self) -> int:
data = self.forge() + b'\x00' * 64
hash_digest = blake2b_32(data).digest()
return int.from_bytes(hash_digest[:8], 'big')
def hash(self) -> str:
hash_digest = blake2b_32(self.binary_payload()).digest()
return base58_encode(hash_digest, b'B').decode()
def sign(self):
"""Sign the block header with the key specified by `using`.
:rtype: BlockHeader
"""
chain_watermark = bytes.fromhex(self.shell.chains.main.watermark())
watermark = b'\x01' + chain_watermark
signature = self.key.sign(message=watermark + self.forge())
return self._spawn(signature=signature)
def inject(self, force=False) -> str:
"""Inject the signed block header.
:returns: block hash
"""
payload = {
"data": self.binary_payload().hex(),
"operations": self.operations,
}
return self.shell.injection.block.post(
block=payload,
_async=False,
force=force,
)
| 33.979508 | 95 | 0.559161 |
389c0fce6224fae9bfafdfd0a02c639b204a5620 | 3,591 | py | Python | arpa/models/base.py | svandiekendialpad/python-arpa | 2284b815866aeb08f65f786da416e78d7937ee1d | [
"MIT"
] | 39 | 2015-07-04T12:29:56.000Z | 2022-02-02T20:49:31.000Z | arpa/models/base.py | svandiekendialpad/python-arpa | 2284b815866aeb08f65f786da416e78d7937ee1d | [
"MIT"
] | 17 | 2015-10-31T13:36:25.000Z | 2021-04-10T15:16:01.000Z | arpa/models/base.py | svandiekendialpad/python-arpa | 2284b815866aeb08f65f786da416e78d7937ee1d | [
"MIT"
] | 17 | 2015-08-25T14:27:34.000Z | 2021-10-18T01:13:35.000Z | from abc import ABCMeta, abstractmethod
UNK = '<unk>'
SOS = '<s>'
EOS = '</s>'
class ARPAModel(metaclass=ABCMeta):
def __init__(self, unk=UNK):
self._base = 10
self._unk = unk
def __contains__(self, word):
self._check_word(word)
return word in self.vocabulary()
def __len__(self):
return len(self.vocabulary())
@abstractmethod
def add_count(self, order, count): # pragma: no cover
pass
@abstractmethod
def add_entry(self, ngram, p, bo=None, order=None): # pragma: no cover
pass
def log_p(self, ngram):
words = self._check_input(ngram)
if self._unk:
words = self._replace_unks(words)
return self.log_p_raw(words)
def log_p_raw(self, ngram):
try:
return self._log_p(ngram)
except KeyError:
if len(ngram) == 1:
raise KeyError
else:
try:
log_bo = self._log_bo(ngram[:-1])
except KeyError:
log_bo = 0
return log_bo + self.log_p_raw(ngram[1:])
def log_s(self, sentence, sos=SOS, eos=EOS):
words = self._check_input(sentence)
if self._unk:
words = self._replace_unks(words)
if sos:
words = (sos, ) + words
if eos:
words = words + (eos, )
result = sum(self.log_p_raw(words[:i]) for i in range(1, len(words) + 1))
if sos:
result = result - self.log_p_raw(words[:1])
return result
def p(self, ngram):
return self._base**self.log_p(ngram)
def s(self, sentence):
return self._base**self.log_s(sentence)
@abstractmethod
def counts(self): # pragma: no cover
pass
@abstractmethod
def order(self): # pragma: no cover
pass
@abstractmethod
def vocabulary(self, sort=True): # pragma: no cover
pass
def write(self, fp):
fp.write('\n\\data\\\n')
for order, count in self.counts():
fp.write('ngram {}={}\n'.format(order, count))
fp.write('\n')
for order, _ in self.counts():
fp.write('\\{}-grams:\n'.format(order))
for e in self._entries(order):
prob = e[0]
ngram = ' '.join(e[1])
if len(e) == 2:
fp.write('{}\t{}\n'.format(prob, ngram))
elif len(e) == 3:
backoff = e[2]
fp.write('{}\t{}\t{}\n'.format(prob, ngram, backoff))
else:
raise ValueError
fp.write('\n')
fp.write('\\end\\\n')
@abstractmethod
def _entries(self, order): # pragma: no cover
pass
@abstractmethod
def _log_bo(self, ngram): # pragma: no cover
pass
@abstractmethod
def _log_p(self, ngram): # pragma: no cover
pass
@staticmethod
def _check_input(input):
if not input:
raise ValueError
elif isinstance(input, tuple):
return input
elif isinstance(input, list):
return tuple(input)
elif isinstance(input, str):
return tuple(input.strip().split(' '))
else:
raise ValueError
@staticmethod
def _check_word(input):
if not isinstance(input, str):
raise ValueError
if ' ' in input:
raise ValueError
def _replace_unks(self, words):
return tuple((w if w in self else self._unk) for w in words)
| 27.204545 | 81 | 0.52548 |
ea0a6fbeb3b87f4841ad76b541ea7c793e17fca6 | 293 | py | Python | bot/utils/xbox.py | FLaPProductions/monument-common | 110c325e477b1ef319fc149da620eada73cc7868 | [
"Apache-2.0"
] | null | null | null | bot/utils/xbox.py | FLaPProductions/monument-common | 110c325e477b1ef319fc149da620eada73cc7868 | [
"Apache-2.0"
] | null | null | null | bot/utils/xbox.py | FLaPProductions/monument-common | 110c325e477b1ef319fc149da620eada73cc7868 | [
"Apache-2.0"
] | null | null | null | import aiohttp
async def getxuid(username):
async with aiohttp.ClientSession() as session:
async with session.get(f'https://xbl-api.prouser123.me/xuid/{username}/raw') as r:
if r.status == 200:
xuid = await r.text()
return xuid | 32.555556 | 91 | 0.583618 |
bdecc10b3765d79dab3afb04ae237e20f497b3bf | 11,896 | py | Python | Power-Source/Python38/Lib/site-packages/fairseq_cli/anotherTest.py | dreamsavior/sugoi-translator-server-builder | 4bc6142841942d35f8256829b1161aeafae34c62 | [
"MIT"
] | null | null | null | Power-Source/Python38/Lib/site-packages/fairseq_cli/anotherTest.py | dreamsavior/sugoi-translator-server-builder | 4bc6142841942d35f8256829b1161aeafae34c62 | [
"MIT"
] | null | null | null | Power-Source/Python38/Lib/site-packages/fairseq_cli/anotherTest.py | dreamsavior/sugoi-translator-server-builder | 4bc6142841942d35f8256829b1161aeafae34c62 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Translate raw text with a trained model. Batches data on-the-fly.
"""
import ast
import fileinput
import logging
import math
import os
import sys
import time
from argparse import Namespace
from collections import namedtuple
import numpy as np
import torch
from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
from fairseq.data import encoders
from fairseq.dataclass.configs import FairseqConfig
from fairseq.dataclass.utils import convert_namespace_to_omegaconf
from fairseq.token_generation_constraints import pack_constraints, unpack_constraints
from fairseq_cli.generate import get_symbols_to_strip_from_output
logging.basicConfig(
format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=os.environ.get("LOGLEVEL", "INFO").upper(),
stream=sys.stdout,
)
logger = logging.getLogger("fairseq_cli.interactive")
Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints")
Translation = namedtuple("Translation", "src_str hypos pos_scores alignments")
def buffered_read(input, buffer_size):
buffer = []
with fileinput.input(files=[input], openhook=fileinput.hook_encoded("utf-8")) as h:
for src_str in h:
buffer.append(src_str.strip())
if len(buffer) >= buffer_size:
yield buffer
buffer = []
if len(buffer) > 0:
yield buffer
def make_batches(lines, cfg, task, max_positions, encode_fn):
def encode_fn_target(x):
return encode_fn(x)
if cfg.generation.constraints:
        # Strip (tab-delimited) constraints, if present, from input lines,
# store them in batch_constraints
batch_constraints = [list() for _ in lines]
for i, line in enumerate(lines):
if "\t" in line:
lines[i], *batch_constraints[i] = line.split("\t")
# Convert each List[str] to List[Tensor]
for i, constraint_list in enumerate(batch_constraints):
batch_constraints[i] = [
task.target_dictionary.encode_line(
encode_fn_target(constraint),
append_eos=False,
add_if_not_exist=False,
)
for constraint in constraint_list
]
tokens = [
task.source_dictionary.encode_line(
encode_fn(src_str), add_if_not_exist=False
).long()
for src_str in lines
]
if cfg.generation.constraints:
constraints_tensor = pack_constraints(batch_constraints)
else:
constraints_tensor = None
lengths = [t.numel() for t in tokens]
itr = task.get_batch_iterator(
dataset=task.build_dataset_for_inference(
tokens, lengths, constraints=constraints_tensor
),
max_tokens=cfg.dataset.max_tokens,
max_sentences=cfg.dataset.batch_size,
max_positions=max_positions,
ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test,
).next_epoch_itr(shuffle=False)
for batch in itr:
ids = batch["id"]
src_tokens = batch["net_input"]["src_tokens"]
src_lengths = batch["net_input"]["src_lengths"]
constraints = batch.get("constraints", None)
yield Batch(
ids=ids,
src_tokens=src_tokens,
src_lengths=src_lengths,
constraints=constraints,
)
def main(cfg: FairseqConfig):
if isinstance(cfg, Namespace):
cfg = convert_namespace_to_omegaconf(cfg)
start_time = time.time()
total_translate_time = 0
utils.import_user_module(cfg.common)
if cfg.interactive.buffer_size < 1:
cfg.interactive.buffer_size = 1
if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None:
cfg.dataset.batch_size = 1
assert (
not cfg.generation.sampling or cfg.generation.nbest == cfg.generation.beam
), "--sampling requires --nbest to be equal to --beam"
assert (
not cfg.dataset.batch_size
or cfg.dataset.batch_size <= cfg.interactive.buffer_size
), "--batch-size cannot be larger than --buffer-size"
logger.info(cfg)
# Fix seed for stochastic decoding
if cfg.common.seed is not None and not cfg.generation.no_seed_provided:
np.random.seed(cfg.common.seed)
utils.set_torch_seed(cfg.common.seed)
use_cuda = torch.cuda.is_available() and not cfg.common.cpu
# Setup task, e.g., translation
task = tasks.setup_task(cfg.task)
# Load ensemble
overrides = ast.literal_eval(cfg.common_eval.model_overrides)
logger.info("loading model(s) from {}".format(cfg.common_eval.path))
models, _model_args = checkpoint_utils.load_model_ensemble(
utils.split_paths(cfg.common_eval.path),
arg_overrides=overrides,
task=task,
suffix=cfg.checkpoint.checkpoint_suffix,
strict=(cfg.checkpoint.checkpoint_shard_count == 1),
num_shards=cfg.checkpoint.checkpoint_shard_count,
)
# Set dictionaries
src_dict = task.source_dictionary
tgt_dict = task.target_dictionary
# Optimize ensemble for generation
for model in models:
if model is None:
continue
if cfg.common.fp16:
model.half()
if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
model.cuda()
model.prepare_for_inference_(cfg)
# Initialize generator
generator = task.build_generator(models, cfg.generation)
# Handle tokenization and BPE
tokenizer = encoders.build_tokenizer(cfg.tokenizer)
bpe = encoders.build_bpe(cfg.bpe)
def encode_fn(x):
if tokenizer is not None:
x = tokenizer.encode(x)
if bpe is not None:
x = bpe.encode(x)
return x
def decode_fn(x):
if bpe is not None:
x = bpe.decode(x)
if tokenizer is not None:
x = tokenizer.decode(x)
return x
# Load alignment dictionary for unknown word replacement
# (None if no unknown word replacement, empty if no path to align dictionary)
align_dict = utils.load_align_dict(cfg.generation.replace_unk)
max_positions = utils.resolve_max_positions(
task.max_positions(), *[model.max_positions() for model in models]
)
if cfg.generation.constraints:
logger.warning(
"NOTE: Constrained decoding currently assumes a shared subword vocabulary."
)
if cfg.interactive.buffer_size > 1:
logger.info("Sentence buffer size: %s", cfg.interactive.buffer_size)
logger.info("NOTE: hypothesis and token scores are output in base 2")
logger.info("Type the input sentence and press return:")
start_id = 0
for inputs in buffered_read(cfg.interactive.input, cfg.interactive.buffer_size):
results = []
for batch in make_batches(inputs, cfg, task, max_positions, encode_fn):
bsz = batch.src_tokens.size(0)
src_tokens = batch.src_tokens
src_lengths = batch.src_lengths
constraints = batch.constraints
if use_cuda:
src_tokens = src_tokens.cuda()
src_lengths = src_lengths.cuda()
if constraints is not None:
constraints = constraints.cuda()
sample = {
"net_input": {
"src_tokens": src_tokens,
"src_lengths": src_lengths,
},
}
translate_start_time = time.time()
translations = task.inference_step(
generator, models, sample, constraints=constraints
)
translate_time = time.time() - translate_start_time
total_translate_time += translate_time
list_constraints = [[] for _ in range(bsz)]
if cfg.generation.constraints:
list_constraints = [unpack_constraints(c) for c in constraints]
for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)):
src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad())
constraints = list_constraints[i]
results.append(
(
start_id + id,
src_tokens_i,
hypos,
{
"constraints": constraints,
"time": translate_time / len(translations),
},
)
)
# sort output to match input order
for id_, src_tokens, hypos, info in sorted(results, key=lambda x: x[0]):
if src_dict is not None:
src_str = src_dict.string(src_tokens, cfg.common_eval.post_process)
print("S-{}\t{}".format(id_, src_str))
print("W-{}\t{:.3f}\tseconds".format(id_, info["time"]))
for constraint in info["constraints"]:
print(
"C-{}\t{}".format(
id_, tgt_dict.string(constraint, cfg.common_eval.post_process)
)
)
# Process top predictions
for hypo in hypos[: min(len(hypos), cfg.generation.nbest)]:
hypo_tokens, hypo_str, alignment = utils.post_process_prediction(
hypo_tokens=hypo["tokens"].int().cpu(),
src_str=src_str,
alignment=hypo["alignment"],
align_dict=align_dict,
tgt_dict=tgt_dict,
remove_bpe=cfg.common_eval.post_process,
extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator),
)
detok_hypo_str = decode_fn(hypo_str)
score = hypo["score"] / math.log(2) # convert to base 2
# original hypothesis (after tokenization and BPE)
print("H-{}\t{}\t{}".format(id_, score, hypo_str))
# detokenized hypothesis
print("D-{}\t{}\t{}".format(id_, score, detok_hypo_str))
print(
"P-{}\t{}".format(
id_,
" ".join(
map(
lambda x: "{:.4f}".format(x),
# convert from base e to base 2
hypo["positional_scores"].div_(math.log(2)).tolist(),
)
),
)
)
if cfg.generation.print_alignment:
alignment_str = " ".join(
["{}-{}".format(src, tgt) for src, tgt in alignment]
)
print("A-{}\t{}".format(id_, alignment_str))
# update running id_ counter
start_id += len(inputs)
logger.info(
"Total time: {:.3f} seconds; translation time: {:.3f}".format(
time.time() - start_time, total_translate_time
)
)
def cli_main():
parser = options.get_interactive_generation_parser()
print("###################################################")
print("parser", parser)
print("###################################################")
args = options.parse_args_and_arch(parser)
print("###################################################")
print("args", args)
print("###################################################")
distributed_utils.call_main(convert_namespace_to_omegaconf(args), main)
if __name__ == "__main__":
cli_main()
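The `buffered_read` generator at the top of this file batches input lines so translation can run on groups rather than one sentence at a time. The same pattern works over any iterable of strings, as this small sketch shows:

```python
def buffered_read(lines, buffer_size):
    # Yield lists of up to buffer_size stripped lines, flushing any
    # partial buffer at the end (same shape as the file-based version).
    buffer = []
    for src_str in lines:
        buffer.append(src_str.strip())
        if len(buffer) >= buffer_size:
            yield buffer
            buffer = []
    if buffer:
        yield buffer

batches = list(buffered_read(["a\n", "b\n", "c\n", "d\n", "e\n"], 2))
print(batches)  # → [['a', 'b'], ['c', 'd'], ['e']]
```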
| 35.831325 | 90 | 0.584062 |
f124b7ff58669033dd18308bae6f4bbd0022b547 | 21,264 | py | Python | chives/server/ws_connection.py | HiveProject2021/chives-light-wallet | 0c7c36bfc703b26ce3c938027de643dc90e4191f | [
"Apache-2.0"
] | 7 | 2021-12-26T11:05:19.000Z | 2022-02-24T10:42:45.000Z | chives/server/ws_connection.py | HiveProject2021/chives-light-wallet | 0c7c36bfc703b26ce3c938027de643dc90e4191f | [
"Apache-2.0"
] | 8 | 2021-12-14T17:27:29.000Z | 2022-03-29T18:18:22.000Z | chives/server/ws_connection.py | HiveProject2021/chives-light-wallet | 0c7c36bfc703b26ce3c938027de643dc90e4191f | [
"Apache-2.0"
] | 1 | 2021-12-09T23:51:12.000Z | 2021-12-09T23:51:12.000Z | import asyncio
import logging
import time
import traceback
from typing import Any, Callable, Dict, List, Optional
from aiohttp import WSCloseCode, WSMessage, WSMsgType
from chives.cmds.init_funcs import chives_full_version_str
from chives.protocols.protocol_message_types import ProtocolMessageTypes
from chives.protocols.protocol_state_machine import message_response_ok
from chives.protocols.protocol_timing import INTERNAL_PROTOCOL_ERROR_BAN_SECONDS
from chives.protocols.shared_protocol import Capability, Handshake
from chives.server.outbound_message import Message, NodeType, make_msg
from chives.server.rate_limits import RateLimiter
from chives.types.blockchain_format.sized_bytes import bytes32
from chives.types.peer_info import PeerInfo
from chives.util.errors import Err, ProtocolError
from chives.util.ints import uint8, uint16
# Each message is prepended with LENGTH_BYTES bytes specifying the length
from chives.util.network import class_for_type, is_localhost
# Max size 2^(8*4) which is around 4GiB
LENGTH_BYTES: int = 4
class WSChivesConnection:
"""
Represents a connection to another node. Local host and port are ours, while peer host and
port are the host and port of the peer that we are connected to. Node_id and connection_type are
set after the handshake is performed in this connection.
"""
def __init__(
self,
local_type: NodeType,
ws: Any, # Websocket
server_port: int,
log: logging.Logger,
is_outbound: bool,
is_feeler: bool, # Special type of connection, that disconnects after the handshake.
peer_host,
incoming_queue,
close_callback: Callable,
peer_id,
inbound_rate_limit_percent: int,
outbound_rate_limit_percent: int,
close_event=None,
session=None,
):
# Local properties
self.ws: Any = ws
self.local_type = local_type
self.local_port = server_port
# Remote properties
self.peer_host = peer_host
peername = self.ws._writer.transport.get_extra_info("peername")
if peername is None:
raise ValueError(f"Was not able to get peername from {self.peer_host}")
connection_port = peername[1]
self.peer_port = connection_port
self.peer_server_port: Optional[uint16] = None
self.peer_node_id = peer_id
self.log = log
# connection properties
self.is_outbound = is_outbound
self.is_feeler = is_feeler
# ChivesConnection metrics
self.creation_time = time.time()
self.bytes_read = 0
self.bytes_written = 0
self.last_message_time: float = 0
# Messaging
self.incoming_queue: asyncio.Queue = incoming_queue
self.outgoing_queue: asyncio.Queue = asyncio.Queue()
self.inbound_task: Optional[asyncio.Task] = None
self.outbound_task: Optional[asyncio.Task] = None
self.active: bool = False # once handshake is successful this will be changed to True
self.close_event: asyncio.Event = close_event
self.session = session
self.close_callback = close_callback
self.pending_requests: Dict[bytes32, asyncio.Event] = {}
self.pending_timeouts: Dict[bytes32, asyncio.Task] = {}
self.request_results: Dict[bytes32, Message] = {}
self.closed = False
self.connection_type: Optional[NodeType] = None
if is_outbound:
self.request_nonce: uint16 = uint16(0)
else:
            # Different nonce to reduce chances of overlap. Each peer will increment the nonce by one for each
            # request. The receiving peer (not is_outbound) will use 2^15 to 2^16 - 1.
self.request_nonce = uint16(2 ** 15)
# This means that even if the other peer's boundaries for each minute are not aligned, we will not
# disconnect. Also it allows a little flexibility.
self.outbound_rate_limiter = RateLimiter(incoming=False, percentage_of_limit=outbound_rate_limit_percent)
self.inbound_rate_limiter = RateLimiter(incoming=True, percentage_of_limit=inbound_rate_limit_percent)
# Used by crawler/dns introducer
self.version = None
self.protocol_version = ""
async def perform_handshake(self, network_id: str, protocol_version: str, server_port: int, local_type: NodeType):
if self.is_outbound:
outbound_handshake = make_msg(
ProtocolMessageTypes.handshake,
Handshake(
network_id,
protocol_version,
chives_full_version_str(),
uint16(server_port),
uint8(local_type.value),
[(uint16(Capability.BASE.value), "1")],
),
)
assert outbound_handshake is not None
await self._send_message(outbound_handshake)
inbound_handshake_msg = await self._read_one_message()
if inbound_handshake_msg is None:
raise ProtocolError(Err.INVALID_HANDSHAKE)
inbound_handshake = Handshake.from_bytes(inbound_handshake_msg.data)
# Handle case of invalid ProtocolMessageType
try:
message_type: ProtocolMessageTypes = ProtocolMessageTypes(inbound_handshake_msg.type)
except Exception:
raise ProtocolError(Err.INVALID_HANDSHAKE)
if message_type != ProtocolMessageTypes.handshake:
raise ProtocolError(Err.INVALID_HANDSHAKE)
if inbound_handshake.network_id != network_id:
raise ProtocolError(Err.INCOMPATIBLE_NETWORK_ID)
self.version = inbound_handshake.software_version
self.protocol_version = inbound_handshake.protocol_version
self.peer_server_port = inbound_handshake.server_port
self.connection_type = NodeType(inbound_handshake.node_type)
else:
try:
message = await self._read_one_message()
except Exception:
raise ProtocolError(Err.INVALID_HANDSHAKE)
if message is None:
raise ProtocolError(Err.INVALID_HANDSHAKE)
# Handle case of invalid ProtocolMessageType
try:
message_type = ProtocolMessageTypes(message.type)
except Exception:
raise ProtocolError(Err.INVALID_HANDSHAKE)
if message_type != ProtocolMessageTypes.handshake:
raise ProtocolError(Err.INVALID_HANDSHAKE)
inbound_handshake = Handshake.from_bytes(message.data)
if inbound_handshake.network_id != network_id:
raise ProtocolError(Err.INCOMPATIBLE_NETWORK_ID)
outbound_handshake = make_msg(
ProtocolMessageTypes.handshake,
Handshake(
network_id,
protocol_version,
chives_full_version_str(),
uint16(server_port),
uint8(local_type.value),
[(uint16(Capability.BASE.value), "1")],
),
)
await self._send_message(outbound_handshake)
self.peer_server_port = inbound_handshake.server_port
self.connection_type = NodeType(inbound_handshake.node_type)
self.outbound_task = asyncio.create_task(self.outbound_handler())
self.inbound_task = asyncio.create_task(self.inbound_handler())
return True
async def close(self, ban_time: int = 0, ws_close_code: WSCloseCode = WSCloseCode.OK, error: Optional[Err] = None):
"""
Closes the connection, and finally calls the close_callback on the server, so the connection
gets removed from the global list.
"""
if self.closed:
return None
self.closed = True
if error is None:
message = b""
else:
message = str(int(error.value)).encode("utf-8")
try:
if self.inbound_task is not None:
self.inbound_task.cancel()
if self.outbound_task is not None:
self.outbound_task.cancel()
if self.ws is not None and self.ws._closed is False:
await self.ws.close(code=ws_close_code, message=message)
if self.session is not None:
await self.session.close()
if self.close_event is not None:
self.close_event.set()
self.cancel_pending_timeouts()
except Exception:
error_stack = traceback.format_exc()
self.log.warning(f"Exception closing socket: {error_stack}")
self.close_callback(self, ban_time)
raise
self.close_callback(self, ban_time)
async def ban_peer_bad_protocol(self, log_err_msg: str):
"""Ban peer for protocol violation"""
ban_seconds = INTERNAL_PROTOCOL_ERROR_BAN_SECONDS
self.log.error(f"Banning peer for {ban_seconds} seconds: {self.peer_host} {log_err_msg}")
await self.close(ban_seconds, WSCloseCode.PROTOCOL_ERROR, Err.INVALID_PROTOCOL_MESSAGE)
def cancel_pending_timeouts(self):
for task in self.pending_timeouts.values():
task.cancel()
async def outbound_handler(self):
try:
while not self.closed:
msg = await self.outgoing_queue.get()
if msg is not None:
await self._send_message(msg)
except asyncio.CancelledError:
pass
except BrokenPipeError as e:
self.log.warning(f"{e} {self.peer_host}")
except ConnectionResetError as e:
self.log.warning(f"{e} {self.peer_host}")
except Exception as e:
error_stack = traceback.format_exc()
self.log.error(f"Exception: {e} with {self.peer_host}")
self.log.error(f"Exception Stack: {error_stack}")
async def inbound_handler(self):
try:
while not self.closed:
message: Optional[Message] = await self._read_one_message()
if message is not None:
if message.id in self.pending_requests:
self.request_results[message.id] = message
event = self.pending_requests[message.id]
event.set()
else:
await self.incoming_queue.put((message, self))
else:
continue
except asyncio.CancelledError:
self.log.debug("Inbound_handler task cancelled")
except Exception as e:
error_stack = traceback.format_exc()
self.log.error(f"Exception: {e}")
self.log.error(f"Exception Stack: {error_stack}")
async def send_message(self, message: Message):
"""Sends a message with no tracking / callback."""
if self.closed:
return None
await self.outgoing_queue.put(message)
def __getattr__(self, attr_name: str):
# TODO KWARGS
async def invoke(*args, **kwargs):
timeout = 60
if "timeout" in kwargs:
timeout = kwargs["timeout"]
attribute = getattr(class_for_type(self.connection_type), attr_name, None)
if attribute is None:
raise AttributeError(f"Node type {self.connection_type} does not have method {attr_name}")
msg: Message = Message(uint8(getattr(ProtocolMessageTypes, attr_name).value), None, args[0])
request_start_t = time.time()
result = await self.send_request(msg, timeout)
self.log.debug(
f"Time for request {attr_name}: {self.get_peer_logging()} = {time.time() - request_start_t}, "
f"None? {result is None}"
)
if result is not None:
sent_message_type = ProtocolMessageTypes(msg.type)
recv_message_type = ProtocolMessageTypes(result.type)
if not message_response_ok(sent_message_type, recv_message_type):
# peer protocol violation
error_message = (
    f"WSConnection.invoke sent message {sent_message_type.name} "
    f"but received {recv_message_type.name}"
)
await self.ban_peer_bad_protocol(error_message)
raise ProtocolError(Err.INVALID_PROTOCOL_MESSAGE, [error_message])
ret_attr = getattr(class_for_type(self.local_type), ProtocolMessageTypes(result.type).name, None)
req_annotations = ret_attr.__annotations__
req = None
for key in req_annotations:
if key == "return" or key == "peer":
continue
else:
req = req_annotations[key]
assert req is not None
result = req.from_bytes(result.data)
return result
return invoke
async def send_request(self, message_no_id: Message, timeout: int) -> Optional[Message]:
"""Sends a message and waits for a response."""
if self.closed:
return None
# We will wait for this event, it will be set either by the response, or the timeout
event = asyncio.Event()
# The request nonce is an integer between 0 and 2**16 - 1, which is used to match requests to responses
# If is_outbound, 0 <= nonce < 2^15, else 2^15 <= nonce < 2^16
request_id = self.request_nonce
if self.is_outbound:
self.request_nonce = uint16(self.request_nonce + 1) if self.request_nonce != (2 ** 15 - 1) else uint16(0)
else:
self.request_nonce = (
uint16(self.request_nonce + 1) if self.request_nonce != (2 ** 16 - 1) else uint16(2 ** 15)
)
message = Message(message_no_id.type, request_id, message_no_id.data)
self.pending_requests[message.id] = event
await self.outgoing_queue.put(message)
# If the timeout passes, we set the event
async def time_out(req_id, req_timeout):
try:
await asyncio.sleep(req_timeout)
if req_id in self.pending_requests:
self.pending_requests[req_id].set()
except asyncio.CancelledError:
if req_id in self.pending_requests:
self.pending_requests[req_id].set()
raise
timeout_task = asyncio.create_task(time_out(message.id, timeout))
self.pending_timeouts[message.id] = timeout_task
await event.wait()
self.pending_requests.pop(message.id)
result: Optional[Message] = None
if message.id in self.request_results:
result = self.request_results[message.id]
assert result is not None
self.log.debug(f"<- {ProtocolMessageTypes(result.type).name} from: {self.peer_host}:{self.peer_port}")
self.request_results.pop(result.id)
return result
async def reply_to_request(self, response: Message):
if self.closed:
return None
await self.outgoing_queue.put(response)
async def send_messages(self, messages: List[Message]):
if self.closed:
return None
for message in messages:
await self.outgoing_queue.put(message)
async def _wait_and_retry(self, msg: Message, queue: asyncio.Queue):
try:
await asyncio.sleep(1)
await queue.put(msg)
except Exception as e:
self.log.debug(f"Exception {e} while waiting to retry sending rate limited message")
return None
async def _send_message(self, message: Message):
encoded: bytes = bytes(message)
size = len(encoded)
assert len(encoded) < (2 ** (LENGTH_BYTES * 8))
if not self.outbound_rate_limiter.process_msg_and_check(message):
if not is_localhost(self.peer_host):
self.log.debug(
f"Rate limiting ourselves. message type: {ProtocolMessageTypes(message.type).name}, "
f"peer: {self.peer_host}"
)
# TODO: fix this special case. This function has rate limits which are too low.
if ProtocolMessageTypes(message.type) != ProtocolMessageTypes.respond_peers:
asyncio.create_task(self._wait_and_retry(message, self.outgoing_queue))
return None
else:
self.log.debug(
f"Not rate limiting ourselves. message type: {ProtocolMessageTypes(message.type).name}, "
f"peer: {self.peer_host}"
)
await self.ws.send_bytes(encoded)
self.log.debug(f"-> {ProtocolMessageTypes(message.type).name} to peer {self.peer_host} {self.peer_node_id}")
self.bytes_written += size
async def _read_one_message(self) -> Optional[Message]:
try:
message: WSMessage = await self.ws.receive(30)
except asyncio.TimeoutError:
# self.ws._closed is set if we didn't receive a ping / pong in time
if self.ws._closed:
asyncio.create_task(self.close())
await asyncio.sleep(3)
return None
return None
if self.connection_type is not None:
connection_type_str = NodeType(self.connection_type).name.lower()
else:
connection_type_str = ""
if message.type == WSMsgType.CLOSING:
self.log.debug(
f"Closing connection to {connection_type_str} {self.peer_host}:"
f"{self.peer_server_port}/"
f"{self.peer_port}"
)
asyncio.create_task(self.close())
await asyncio.sleep(3)
elif message.type == WSMsgType.CLOSE:
self.log.debug(
f"Peer closed connection {connection_type_str} {self.peer_host}:"
f"{self.peer_server_port}/"
f"{self.peer_port}"
)
asyncio.create_task(self.close())
await asyncio.sleep(3)
elif message.type == WSMsgType.CLOSED:
if not self.closed:
asyncio.create_task(self.close())
await asyncio.sleep(3)
return None
elif message.type == WSMsgType.BINARY:
data = message.data
full_message_loaded: Message = Message.from_bytes(data)
self.bytes_read += len(data)
self.last_message_time = time.time()
try:
message_type = ProtocolMessageTypes(full_message_loaded.type).name
except Exception:
message_type = "Unknown"
if not self.inbound_rate_limiter.process_msg_and_check(full_message_loaded):
if self.local_type == NodeType.FULL_NODE and not is_localhost(self.peer_host):
self.log.error(
f"Peer has been rate limited and will be disconnected: {self.peer_host}, "
f"message: {message_type}"
)
# Only full node disconnects peers, to prevent abuse and crashing timelords, farmers, etc
asyncio.create_task(self.close(300))
await asyncio.sleep(3)
return None
else:
self.log.warning(
f"Peer surpassed rate limit {self.peer_host}, message: {message_type}, "
f"port {self.peer_port} but not disconnecting"
)
return full_message_loaded
return full_message_loaded
elif message.type == WSMsgType.ERROR:
self.log.error(f"WebSocket Error: {message}")
if message.data.code == WSCloseCode.MESSAGE_TOO_BIG:
asyncio.create_task(self.close(300))
else:
asyncio.create_task(self.close())
await asyncio.sleep(3)
else:
self.log.error(f"Unexpected WebSocket message type: {message}")
asyncio.create_task(self.close())
await asyncio.sleep(3)
return None
# Used by crawler/dns introducer
def get_version(self):
return self.version
def get_peer_info(self) -> Optional[PeerInfo]:
result = self.ws._writer.transport.get_extra_info("peername")
if result is None:
return None
connection_host = result[0]
port = self.peer_server_port if self.peer_server_port is not None else self.peer_port
return PeerInfo(connection_host, port)
def get_peer_logging(self) -> PeerInfo:
info: Optional[PeerInfo] = self.get_peer_info()
if info is None:
# in this case, we will use self.peer_host which is friendlier for logging
port = self.peer_server_port if self.peer_server_port is not None else self.peer_port
return PeerInfo(self.peer_host, port)
else:
return info
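# The request-nonce bookkeeping in __init__ and send_request can be sketched as a
# standalone helper (hypothetical, not part of this class) that makes the 16-bit
# wraparound explicit:

```python
def next_nonce(current: int, is_outbound: bool) -> int:
    """Advance a 16-bit request nonce, wrapping within the peer's half-range.

    The initiating side cycles through [0, 2**15); the receiving side cycles
    through [2**15, 2**16), so request IDs from the two sides never collide.
    """
    if is_outbound:
        return current + 1 if current != 2**15 - 1 else 0
    return current + 1 if current != 2**16 - 1 else 2**15
```

# Both halves wrap back to the start of their own range, never crossing into
# the other peer's range.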
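# The response/timeout race in send_request (one asyncio.Event set by whichever
# of "reply arrived" or "timeout fired" happens first) can be sketched in
# isolation. This is a simplified, hypothetical illustration, not the class's
# actual API:

```python
import asyncio
from typing import Dict, Optional


async def request_with_timeout(
    pending: Dict[int, asyncio.Event],
    results: Dict[int, str],
    req_id: int,
    timeout: float,
) -> Optional[str]:
    # The event is set either by a responder (a reply arrived) or by the
    # timeout task (no reply in time), mirroring WSConnection.send_request.
    event = asyncio.Event()
    pending[req_id] = event

    async def time_out() -> None:
        await asyncio.sleep(timeout)
        if req_id in pending:
            pending[req_id].set()

    timeout_task = asyncio.create_task(time_out())
    await event.wait()
    pending.pop(req_id)
    timeout_task.cancel()
    # None means the request timed out without a response.
    return results.pop(req_id, None)
```

# Whichever side sets the event first wins; the caller then distinguishes the
# two outcomes by checking whether a result was recorded under the request id.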