"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
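For example, a minimal line-echo service (a sketch; the handler name and
port are illustrative, not part of this module) could look like:

        class EchoHandler(StreamRequestHandler):
            def handle(self):
                # rfile/wfile are set up by StreamRequestHandler.setup()
                for line in self.rfile:
                    self.wfile.write(line)

        server = ThreadingTCPServer(('localhost', 9999), EchoHandler)
        server.serve_forever()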
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to prevent two requests that arrive nearly simultaneously from applying
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
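A sketch of that pattern (illustrative only; handle_slowly() is a
hypothetical helper, not part of this module):

        class ForkOnDemandHandler(StreamRequestHandler):
            def handle(self):
                header = self.rfile.readline()      # examined synchronously
                if header.strip() == b'SLOW':
                    if os.fork() == 0:              # child finishes the request
                        handle_slowly(self)         # hypothetical helper
                        os._exit(0)
                else:
                    self.wfile.write(b'OK\n')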
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 Luke Kenneth Casson Leighton <lkcl@samba.org>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
Each entry is processed by a RequestHandlerClass.
"""
# This file copyright (c) 2001-2015 Python Software Foundation; All Rights Reserved
# Author of the BaseServer patch: Luke Kenneth Casson Leighton
# XXX Warning!
# There is a test suite for this module, but it cannot be run by the
# standard regression test.
# To run it manually, run Lib/test/test_socketserver.py.
__version__ = "0.4"
import socket
import select
import sys
import os
try:
import threading
except ImportError:
import dummy_threading as threading
__all__ = ["TCPServer", "UDPServer", "ForkingUDPServer", "ForkingTCPServer",
"ThreadingUDPServer", "ThreadingTCPServer", "BaseRequestHandler",
"StreamRequestHandler", "DatagramRequestHandler",
"ThreadingMixIn", "ForkingMixIn"]
if hasattr(socket, "AF_UNIX"):
__all__.extend(["UnixStreamServer", "UnixDatagramServer",
"ThreadingUnixStreamServer",
"ThreadingUnixDatagramServer"])
class BaseServer:
"""Base class for server classes.
Methods for the caller:
- __init__(server_address, RequestHandlerClass)
- serve_forever(poll_interval=0.5)
- shutdown()
- handle_request() # if you do not use serve_forever()
- fileno() -> int # for select()
Methods that may be overridden:
- server_bind()
- server_activate()
- get_request() -> request, client_address
- handle_timeout()
- verify_request(request, client_address)
- server_close()
- process_request(request, client_address)
- close_request(request)
- handle_error()
Methods for derived classes:
- finish_request(request, client_address)
Class variables that may be overridden by derived classes or
instances:
- timeout
- address_family
- socket_type
- allow_reuse_address
Instance variables:
- RequestHandlerClass
- socket
"""
timeout = None
def __init__(self, server_address, RequestHandlerClass):
"""Constructor. May be extended, do not override."""
self.server_address = server_address
self.RequestHandlerClass = RequestHandlerClass
self.__is_shut_down = threading.Event()
self.__serving = False
def server_activate(self):
"""Called by constructor to activate the server.
May be overridden.
"""
pass
def serve_forever(self, poll_interval=0.5):
"""Handle one request at a time until shutdown.
Polls for shutdown every poll_interval seconds. Ignores
self.timeout. If you need to do periodic tasks, do them in
another thread.
"""
self.__serving = True
self.__is_shut_down.clear()
while self.__serving:
# XXX: Consider using another file descriptor or
# connecting to the socket to wake this up instead of
# polling. Polling reduces our responsiveness to a
# shutdown request and wastes cpu at all other times.
r, w, e = select.select([self], [], [], poll_interval)
if r:
self._handle_request_noblock()
self.__is_shut_down.set()
def shutdown(self):
"""Stops the serve_forever loop.
Blocks until the loop has finished. This must be called while
serve_forever() is running in another thread, or it will
deadlock.
"""
self.__serving = False
self.__is_shut_down.wait()
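    # Typical usage (a sketch): run serve_forever() in a worker thread and
    # call shutdown() from another thread when done, e.g.
    #
    #     threading.Thread(target=server.serve_forever).start()
    #     server.shutdown()   # blocks until the loop exits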
# The distinction between handling, getting, processing and
# finishing a request is fairly arbitrary. Remember:
#
# - handle_request() is the top-level call. It calls
# select, get_request(), verify_request() and process_request()
# - get_request() is different for stream or datagram sockets
# - process_request() is the place that may fork a new process
# or create a new thread to finish the request
# - finish_request() instantiates the request handler class;
# this constructor will handle the request all by itself
def handle_request(self):
"""Handle one request, possibly blocking.
Respects self.timeout.
"""
# Support people who used socket.settimeout() to escape
# handle_request before self.timeout was available.
timeout = self.socket.gettimeout()
if timeout is None:
timeout = self.timeout
elif self.timeout is not None:
timeout = min(timeout, self.timeout)
fd_sets = select.select([self], [], [], timeout)
if not fd_sets[0]:
self.handle_timeout()
return
self._handle_request_noblock()
def _handle_request_noblock(self):
"""Handle one request, without blocking.
I assume that select.select has returned that the socket is
readable before this function was called, so there should be
no risk of blocking in get_request().
"""
try:
request, client_address = self.get_request()
except socket.error:
return
if self.verify_request(request, client_address):
try:
self.process_request(request, client_address)
except:
self.handle_error(request, client_address)
self.close_request(request)
def handle_timeout(self):
"""Called if no new request arrives within self.timeout.
Overridden by ForkingMixIn.
"""
pass
def verify_request(self, request, client_address):
"""Verify the request. May be overridden.
Return True if we should proceed with this request.
"""
return True
def process_request(self, request, client_address):
"""Call finish_request.
Overridden by ForkingMixIn and ThreadingMixIn.
"""
self.finish_request(request, client_address)
self.close_request(request)
def server_close(self):
"""Called to clean-up the server.
May be overridden.
"""
pass
def finish_request(self, request, client_address):
"""Finish one request by instantiating RequestHandlerClass."""
self.RequestHandlerClass(request, client_address, self)
def close_request(self, request):
"""Called to clean up an individual request."""
pass
def handle_error(self, request, client_address):
"""Handle an error gracefully. May be overridden.
The default is to print a traceback and continue.
"""
        print('-' * 40)
        print('Exception happened during processing of request from %s'
              % (client_address,))
        import traceback
        traceback.print_exc()  # XXX But this goes to stderr!
        print('-' * 40)
class TCPServer(BaseServer):
"""Base class for various socket-based server classes.
Defaults to synchronous IP stream (i.e., TCP).
Methods for the caller:
- __init__(server_address, RequestHandlerClass, bind_and_activate=True)
- serve_forever(poll_interval=0.5)
- shutdown()
- handle_request() # if you don't use serve_forever()
- fileno() -> int # for select()
Methods that may be overridden:
- server_bind()
- server_activate()
- get_request() -> request, client_address
- handle_timeout()
- verify_request(request, client_address)
- process_request(request, client_address)
- close_request(request)
- handle_error()
Methods for derived classes:
- finish_request(request, client_address)
Class variables that may be overridden by derived classes or
instances:
- timeout
- address_family
- socket_type
- request_queue_size (only for stream sockets)
- allow_reuse_address
Instance variables:
- server_address
- RequestHandlerClass
- socket
"""
address_family = socket.AF_INET
socket_type = socket.SOCK_STREAM
request_queue_size = 5
allow_reuse_address = False
def __init__(self, server_address, RequestHandlerClass,
bind_and_activate=True):
"""Constructor. May be extended, do not override."""
BaseServer.__init__(self, server_address, RequestHandlerClass)
self.socket = socket.socket(self.address_family,
self.socket_type)
if bind_and_activate:
self.server_bind()
self.server_activate()
def server_bind(self):
"""Called by constructor to bind the socket.
May be overridden.
"""
if self.allow_reuse_address:
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.socket.bind(self.server_address)
self.server_address = self.socket.getsockname()
def server_activate(self):
"""Called by constructor to activate the server.
May be overridden.
"""
self.socket.listen(self.request_queue_size)
def server_close(self):
"""Called to clean-up the server.
May be overridden.
"""
self.socket.close()
def fileno(self):
"""Return socket file number.
Interface required by select().
"""
return self.socket.fileno()
def get_request(self):
"""Get the request and client address from the socket.
May be overridden.
"""
return self.socket.accept()
def close_request(self, request):
"""Called to clean up an individual request."""
request.close()
class UDPServer(TCPServer):
"""UDP server class."""
allow_reuse_address = False
socket_type = socket.SOCK_DGRAM
max_packet_size = 8192
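    # Note: for datagram sockets the "request" handed to the handler is the
    # (data, socket) pair returned by get_request() below, not a connected
    # socket object.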
def get_request(self):
data, client_addr = self.socket.recvfrom(self.max_packet_size)
return (data, self.socket), client_addr
def server_activate(self):
# No need to call listen() for UDP.
pass
def close_request(self, request):
# No need to close anything.
pass
class ForkingMixIn:
"""Mix-in class to handle each request in a new process."""
timeout = 300
active_children = None
max_children = 40
def collect_children(self):
"""Internal routine to wait for children that have exited."""
if self.active_children is None:
return
while len(self.active_children) >= self.max_children:
# XXX: This will wait for any child process, not just ones
# spawned by this library. This could confuse other
# libraries that expect to be able to wait for their own
# children.
try:
pid, status = os.waitpid(0, 0)
except os.error:
pid = None
if pid not in self.active_children:
continue
self.active_children.remove(pid)
# XXX: This loop runs more system calls than it ought
# to. There should be a way to put the active_children into a
# process group and then use os.waitpid(-pgid) to wait for any
# of that set, but I couldn't find a way to allocate pgids
# that couldn't collide.
for child in self.active_children:
try:
pid, status = os.waitpid(child, os.WNOHANG)
except os.error:
pid = None
if not pid:
continue
try:
self.active_children.remove(pid)
except ValueError as e:
                raise ValueError('%s. x=%d and list=%r' %
                                 (e, pid, self.active_children))
def handle_timeout(self):
"""Wait for zombies after self.timeout seconds of inactivity.
May be extended, do not override.
"""
self.collect_children()
def process_request(self, request, client_address):
"""Fork a new subprocess to process the request."""
self.collect_children()
pid = os.fork()
if pid:
# Parent process
if self.active_children is None:
self.active_children = []
self.active_children.append(pid)
self.close_request(request)
return
else:
# Child process.
# This must never return, hence os._exit()!
try:
self.finish_request(request, client_address)
os._exit(0)
except:
try:
self.handle_error(request, client_address)
finally:
os._exit(1)
class ThreadingMixIn:
"""Mix-in class to handle each request in a new thread."""
# Decides how threads will act upon termination of the
# main process
daemon_threads = False
def process_request_thread(self, request, client_address):
"""Same as in BaseServer but as a thread.
In addition, exception handling is done here.
"""
try:
self.finish_request(request, client_address)
self.close_request(request)
except:
self.handle_error(request, client_address)
self.close_request(request)
def process_request(self, request, client_address):
"""Start a new thread to process the request."""
t = threading.Thread(target=self.process_request_thread,
args=(request, client_address))
if self.daemon_threads:
t.setDaemon(1)
t.start()
class ForkingUDPServer(ForkingMixIn, UDPServer):
pass
class ForkingTCPServer(ForkingMixIn, TCPServer):
pass
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
pass
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
pass
if hasattr(socket, 'AF_UNIX'):
class UnixStreamServer(TCPServer):
address_family = socket.AF_UNIX
class UnixDatagramServer(UDPServer):
address_family = socket.AF_UNIX
class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer):
pass
class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer):
pass
class BaseRequestHandler:
"""Base class for request handler classes.
This class is instantiated for each request to be handled. The
constructor sets the instance variables request, client_address
and server, and then calls the handle() method. To implement a
specific service, all you need to do is to derive a class which
defines a handle() method.
The handle() method can find the request as self.request, the
client address as self.client_address, and the server (in case it
needs access to per-server information) as self.server. Since a
separate instance is created for each request, the handle() method
can define arbitrary other instance variables.
"""
def __init__(self, request, client_address, server):
self.request = request
self.client_address = client_address
self.server = server
        self.setup()
        try:
            self.handle()
        finally:
            self.finish()
def setup(self):
pass
def handle(self):
pass
def finish(self):
pass
# The following two classes make it possible to use the same service
# class for stream or datagram servers.
# Each class sets up these instance variables:
# - rfile: a file object from which the request is read
# - wfile: a file object to which the reply is written
# When the handle() method returns, wfile is flushed properly
class StreamRequestHandler(BaseRequestHandler):
"""Define self.rfile and self.wfile for stream sockets."""
# Default buffer sizes for rfile, wfile.
# We default rfile to buffered because otherwise it could be
# really slow for large data (a getc() call per byte); we make
# wfile unbuffered because (a) often after a write() we want to
# read and we need to flush the line; (b) big writes to unbuffered
# files are typically optimized by stdio even when big reads
# aren't.
rbufsize = -1
wbufsize = 0
def setup(self):
self.connection = self.request
self.rfile = self.connection.makefile('rb', self.rbufsize)
self.wfile = self.connection.makefile('wb', self.wbufsize)
def finish(self):
if not self.wfile.closed:
self.wfile.flush()
self.wfile.close()
self.rfile.close()
class DatagramRequestHandler(BaseRequestHandler):
# XXX Regrettably, I cannot get this working on Linux;
# s.recvfrom() doesn't return a meaningful client address.
"""Define self.rfile and self.wfile for datagram sockets."""
    def setup(self):
        from io import BytesIO
        self.packet, self.socket = self.request
        self.rfile = BytesIO(self.packet)
        self.wfile = BytesIO()
def finish(self):
self.socket.sendto(self.wfile.getvalue(), self.client_address)
# All fields except for BlobField written by Jonas Haag <jonas@lophus.org>
from django.db import models
from django.core.exceptions import ValidationError
from django.utils.importlib import import_module
__all__ = ('RawField', 'ListField', 'DictField', 'SetField',
'BlobField', 'EmbeddedModelField')
class _HandleAssignment(object):
    """
    A descriptor that pipes attribute assignment on the model through the
    field's to_python() conversion (emulating models.SubfieldBase behavior).
    """
def __init__(self, field):
self.field = field
def __get__(self, obj, type=None):
if obj is None:
raise AttributeError('Can only be accessed via an instance.')
return obj.__dict__[self.field.name]
def __set__(self, obj, value):
obj.__dict__[self.field.name] = self.field.to_python(value)
class RawField(models.Field):
""" Generic field to store anything your database backend allows you to. """
def get_internal_type(self):
return 'RawField'
class AbstractIterableField(models.Field):
"""
Abstract field for fields for storing iterable data type like ``list``,
``set`` and ``dict``.
You can pass an instance of a field as the first argument.
If you do, the iterable items will be piped through the passed field's
validation and conversion routines, converting the items to the
appropriate data type.
"""
def __init__(self, item_field=None, *args, **kwargs):
if item_field is None:
item_field = RawField()
self.item_field = item_field
default = kwargs.get('default', None if kwargs.get('null') else ())
if default is not None and not callable(default):
# ensure a new object is created every time the default is accessed
kwargs['default'] = lambda: self._type(default)
super(AbstractIterableField, self).__init__(*args, **kwargs)
def contribute_to_class(self, cls, name):
self.item_field.model = cls
self.item_field.name = name
super(AbstractIterableField, self).contribute_to_class(cls, name)
metaclass = getattr(self.item_field, '__metaclass__', None)
        if metaclass is not None and issubclass(metaclass, models.SubfieldBase):
setattr(cls, self.name, _HandleAssignment(self))
def db_type(self, connection):
item_db_type = self.item_field.db_type(connection=connection)
return '%s:%s' % (self.__class__.__name__, item_db_type)
def _convert(self, func, values, *args, **kwargs):
if isinstance(values, (list, tuple, set)):
return self._type(func(value, *args, **kwargs) for value in values)
return values
def to_python(self, value):
return self._convert(self.item_field.to_python, value)
def pre_save(self, model_instance, add):
class fake_instance(object):
pass
fake_instance = fake_instance()
def wrapper(value):
assert not hasattr(self.item_field, 'attname')
fake_instance.value = value
self.item_field.attname = 'value'
try:
return self.item_field.pre_save(fake_instance, add)
finally:
del self.item_field.attname
return self._convert(wrapper, getattr(model_instance, self.attname))
def get_db_prep_value(self, value, connection, prepared=False):
return self._convert(self.item_field.get_db_prep_value, value,
connection=connection, prepared=prepared)
def get_db_prep_save(self, value, connection):
return self._convert(self.item_field.get_db_prep_save,
value, connection=connection)
def get_db_prep_lookup(self, lookup_type, value, connection, prepared=False):
# TODO/XXX: Remove as_lookup_value() once we have a cleaner solution
# for dot-notation queries
if hasattr(value, 'as_lookup_value'):
value = value.as_lookup_value(self, lookup_type, connection)
return self.item_field.get_db_prep_lookup(lookup_type, value,
connection=connection, prepared=prepared)
def validate(self, values, model_instance):
try:
iter(values)
except TypeError:
raise ValidationError('Value of type %r is not iterable' % type(values))
def formfield(self, **kwargs):
raise NotImplementedError('No form field implemented for %r' % type(self))
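# Usage sketch (hypothetical models, not part of this module): items of the
# iterable are piped through the wrapped field's conversion routines.
#
#     class Post(models.Model):
#         tags = ListField(models.CharField(max_length=500))
#         counters = DictField(models.IntegerField())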
class ListField(AbstractIterableField):
"""
Field representing a Python ``list``.
If the optional keyword argument `ordering` is given, it must be a callable
    that is passed to :meth:`list.sort` as the `key` argument. If `ordering` is
given, the items in the list will be sorted before sending them to the
database.
"""
_type = list
def __init__(self, *args, **kwargs):
self.ordering = kwargs.pop('ordering', None)
if self.ordering is not None and not callable(self.ordering):
raise TypeError("'ordering' has to be a callable or None, "
"not of type %r" % type(self.ordering))
super(ListField, self).__init__(*args, **kwargs)
def pre_save(self, model_instance, add):
values = getattr(model_instance, self.attname)
if values is None:
return None
if values and self.ordering:
values.sort(key=self.ordering)
return super(ListField, self).pre_save(model_instance, add)
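# Sketch of the `ordering` option (hypothetical field): items are sorted
# before saving, e.g.
#
#     scores = ListField(models.IntegerField(), ordering=lambda x: -x)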
class SetField(AbstractIterableField):
"""
Field representing a Python ``set``.
"""
_type = set
class DictField(AbstractIterableField):
"""
Field representing a Python ``dict``.
The field type conversions described in :class:`AbstractIterableField`
only affect values of the dictionary, not keys.
Depending on the backend, keys that aren't strings might not be allowed.
"""
_type = dict
def _convert(self, func, values, *args, **kwargs):
if values is None:
return None
return dict((key, func(value, *args, **kwargs))
for key, value in values.iteritems())
def validate(self, values, model_instance):
if not isinstance(values, dict):
raise ValidationError('Value is of type %r. Should be a dict.' % type(values))
class BlobField(models.Field):
"""
A field for storing blobs of binary data.
The value might either be a string (or something that can be converted to
a string), or a file-like object.
In the latter case, the object has to provide a ``read`` method from which
the blob is read.
"""
def get_internal_type(self):
return 'BlobField'
def formfield(self, **kwargs):
# A file widget is provided, but use model FileField or ImageField
# for storing specific files most of the time
from .widgets import BlobWidget
from django.forms import FileField
defaults = {'form_class': FileField, 'widget': BlobWidget}
defaults.update(kwargs)
return super(BlobField, self).formfield(**defaults)
def get_db_prep_value(self, value, connection, prepared=False):
if hasattr(value, 'read'):
return value.read()
else:
return str(value)
def get_db_prep_lookup(self, lookup_type, value, connection, prepared=False):
raise TypeError("BlobFields do not support lookups")
def value_to_string(self, obj):
return str(self._get_val_from_obj(obj))
class EmbeddedModelField(models.Field):
"""
Field that allows you to embed a model instance.
:param model: (optional) The model class that shall be embedded
(may also be passed as string similar to relation fields)
"""
__metaclass__ = models.SubfieldBase
def __init__(self, model=None, *args, **kwargs):
self.embedded_model = model
kwargs.setdefault('default', None)
super(EmbeddedModelField, self).__init__(*args, **kwargs)
def db_type(self, connection):
return 'DictField:RawField'
def _set_model(self, model):
# EmbeddedModelFields are not contribute[d]_to_class if using within
# ListFields (and friends), so we can only know the model field is
# used in when the IterableField sets our 'model' attribute in its
# contribute_to_class method.
# We need to know the model to generate a valid key for the lookup.
if model is not None and isinstance(self.embedded_model, basestring):
# The model argument passed to __init__ was a string, so we need
# to make sure to resolve that string to the corresponding model
# class, similar to relation fields. We abuse some of the
# relation fields' code to do the lookup here:
def _resolve_lookup(self_, resolved_model, model):
self.embedded_model = resolved_model
from django.db.models.fields.related import add_lazy_relation
add_lazy_relation(model, self, self.embedded_model, _resolve_lookup)
self._model = model
model = property(lambda self:self._model, _set_model)
def pre_save(self, model_instance, add):
embedded_instance = super(EmbeddedModelField, self).pre_save(model_instance, add)
if embedded_instance is None:
return None, None
model = self.embedded_model or models.Model
if not isinstance(embedded_instance, model):
raise TypeError("Expected instance of type %r, not %r" % (
type(model), type(embedded_instance)))
data = dict((field.name, field.pre_save(embedded_instance, add))
for field in embedded_instance._meta.fields)
return embedded_instance, data
def get_db_prep_value(self, (embedded_instance, embedded_dict), **kwargs):
if embedded_dict is None:
return None
values = {}
for name, value in embedded_dict.iteritems():
field = embedded_instance._meta.get_field(name)
values[field.column] = field.get_db_prep_value(value, **kwargs)
if self.embedded_model is None:
values.update({'_module' : embedded_instance.__class__.__module__,
'_model' : embedded_instance.__class__.__name__})
return values
# TODO/XXX: Remove this once we have a cleaner solution
def get_db_prep_lookup(self, lookup_type, value, connection, prepared=False):
if hasattr(value, 'as_lookup_value'):
value = value.as_lookup_value(self, lookup_type, connection)
return value
def to_python(self, values):
if not isinstance(values, dict):
return values
module, model = values.pop('_module', None), values.pop('_model', None)
# TODO/XXX: Workaround for old Python releases. Remove this someday.
# Let's make sure keys are instances of str
values = dict([(str(k), v) for k,v in values.items()])
if module is not None:
return getattr(import_module(module), model)(**values)
return self.embedded_model(**values)
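# Usage sketch (hypothetical models): the embedded instance is serialized to
# a dict of its fields' prepared values; with model=None the module and class
# name are stored alongside so to_python() can reconstruct the instance.
#
#     class Address(models.Model):
#         city = models.CharField(max_length=100)
#
#     class Person(models.Model):
#         address = EmbeddedModelField(Address)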
#!/usr/bin/env python
'''
gpsys.py -- print system properties
Usage: gpsys.py [-p] [-t]
-p : print system properties in Python pickled format
-t : print system properties in Text (default).
'''
import os, platform, sys, getopt, pickle
from datetime import datetime
opt = {}
opt['-p'] = False
GPHOME=os.getenv('__GPHOME')
if not GPHOME:
GPHOME = os.getenv('GPHOME')
def makeCommand(cmd):
return ('__GPHOME=%s && GPHOME=$__GPHOME && export GPHOME '
'&& PATH=$GPHOME/bin:$PATH && export PATH '
'&& LD_LIBRARY_PATH=$GPHOME/lib:$LD_LIBRARY_PATH && export LD_LIBRARY_PATH '
'&& %s'
% (GPHOME, cmd))
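# E.g. makeCommand('postgres --version') yields a single shell string that
# exports __GPHOME/GPHOME, PATH and LD_LIBRARY_PATH before running the command.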
################
def usage(exitarg):
print __doc__
sys.exit(exitarg)
def parseCommandLine():
global opt
try:
(options, args) = getopt.getopt(sys.argv[1:], 'pt')
except Exception, e:
usage('Error: ' + str(e))
for (switch, val) in options:
if switch == '-p': opt['-p'] = True
elif switch == '-t': opt['-p'] = False
def run(cmd):
f = None
ok = False
out = []
try:
f = os.popen(cmd)
for line in f:
out.append(line)
ok = not f.close()
finally:
if f: f.close()
return (ok, out)
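# add() folds lines of "key: value" or "key = value" pairs into res; e.g. the
# line "MemTotal: 16384 kB" becomes res[prefix + 'memtotal'] = '16384 kB'.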
def add(res, prefix, lines):
for line in lines:
x = line.split(' ', 1)
if len(x) == 2:
x[0] = x[0].strip()
x[1] = x[1].strip()
if (x[0] and x[1]):
if x[0][-1] == ':':
x[0] = x[0][:-1]
elif x[1][0] == ':':
x[1] = x[1][1:]
elif x[0][-1] == '=':
x[0] = x[0][:-1]
elif x[1][0] == '=':
x[1] = x[1][1:]
if x[0]:
res[prefix + x[0].lower().strip()] = x[1].strip()
return res
def do_gppath(res):
out = os.getenv("GPHOME")
if out is None:
out = ''
res['env.GPHOME'] = out.strip()
out = os.getenv('__GPHOME')
if out is None:
out = ''
res['env.__GPHOME'] = out.strip()
return True
def do_postgres_md5(res):
cmd = makeCommand("cat $__GPHOME/bin/postgres | "
"python -c 'import md5, sys; m = md5.new(); m.update(sys.stdin.read()); print m.hexdigest()'")
(ok, out) = run(cmd)
if ok:
for line in out:
if len(line) == 33:
res['postgres.md5'] = line.lower().strip()
return True
return False
def do_postgres_version(res):
cmd = makeCommand("$__GPHOME/bin/postgres --version")
(ok, out) = run(cmd)
if ok and len(out) == 1:
res['postgres.version'] = out[0].strip()
return True
return False
def do_sysctl(res):
(ok, out) = run('export PATH="/sbin:/usr/sbin:$PATH" && sysctl -a 2> /dev/null')
if ok:
add(res, 'sysctl.', out)
return ok
def do_ulimit(res):
(ok, out) = run('ulimit -u && ulimit -n')
if ok:
res['ulimit.nproc'] = out[0].strip()
res['ulimit.nofile'] = out[1].strip()
return ok
def do_sync(res):
res['sync.time'] = datetime.today()
def do_platform(res):
res['platform.platform'] = platform.platform()
uname = platform.uname()
res['platform.system'] = uname[0].lower()
res['platform.node'] = uname[1]
res['platform.release'] = uname[2]
res['platform.version'] = uname[3]
res['platform.machine'] = uname[4]
res['platform.processor'] = uname[5]
s = res['platform.system']
mem = 0
if (s.find('sunos') >= 0):
(ok, out) = run('''sh -c "/usr/sbin/prtconf | awk '/^Memory/{print}'"''')
if ok:
list = out[0].strip().split(' ')
val = int(list[2])
factor = list[3]
if factor == 'Megabytes':
mem = val * 1024 * 1024
elif (s.find('linux') >= 0):
ok, out = run("sh -c 'cat /proc/meminfo | grep MemTotal'")
if ok:
list = out[0].strip().split(' ')
val = int(list[len(list) - 2])
factor = list[len(list) - 1]
if factor == 'kB':
mem = val * 1024
elif (s.find('darwin') >= 0):
(ok, out) = run("/usr/sbin/sysctl hw.physmem")
if ok:
list = out[0].strip().split(' ')
mem = int(list[1])
res['platform.memory'] = mem
return True
def do_python(res):
version = sys.version_info
res['python.version'] = '%s.%s.%s' % version[0:3]
def do_system(res):
SYSTEM_KEYS = ('rlim_fd_max', 'rlim_fd_cur', 'shmsys:shminfo_shmmax', 'semsys:seminfo_semmni')
f = open('/etc/system', 'r')
content = f.read()
f.close()
lines = content.splitlines()
for line in lines:
line = line.strip()
if line.startswith('set'):
for key in SYSTEM_KEYS:
if line.find(key) != -1:
res['system.%s' % key] = line[3:].split('=')[-1].strip()
break
return True
def do_meminfo(res):
if not os.path.exists('/proc/meminfo'):
return False
f = None
try:
f = open('/proc/meminfo', 'r')
list = []
for line in f:
list.append(line)
add(res, '/proc/meminfo', list)
return True
finally:
if f: f.close()
def do_ndd(res):
if not os.path.exists('/usr/sbin/ndd'):
return False
    list = ('tcp_conn_req_max_q', 'tcp_conn_req_max_q0', 'tcp_largest_anon_port',
            'tcp_smallest_anon_port', 'tcp_time_wait_interval')
for key in list:
(ok, out) = run('/usr/sbin/ndd /dev/tcp ' + key)
if ok and len(out) == 1:
res['ndd.' + key] = out[0].strip()
def do_solaris(res):
if not os.path.exists('/etc/release'):
return False
f = open('/etc/release')
    lines = f.readlines()
    f.close()
for i in lines:
i = i.strip()
if i.startswith('Solaris 10'):
res['solaris.release'] = i
f = os.popen("ls -1 /var/sadm/patch ", "r")
files = f.readlines()
f.close()
res['solaris.patch_file'] = ' '.join(files)
f = os.popen("/bin/showrev -p", "r")
    lines = f.readlines()
f.close()
patch = []
for i in lines:
i = i.strip()
i = i.split()
if len(i) > 2:
            patch.append(i[1])
res['solaris.patch'] = ' '.join(patch)
def do_zfs(res):
(ok, out) = run('/sbin/zpool list -H')
if ok and len(out) == 1:
r = out[0].split()
if len(r) == 7:
res['zfs.health'] = r[5].strip()
(ok, out) = run('/sbin/zfs get -H checksum %s' % r[0])
if ok and len(out) == 1:
r = out[0].split()
if len(r) == 4:
res['zfs.checksum'] = r[2].strip()
parseCommandLine()
res = {}
do_gppath(res)
do_postgres_md5(res)
do_postgres_version(res)
do_python(res)
do_platform(res)
do_sync(res)
system = res['platform.system']
if system == 'sunos':
do_zfs(res)
do_system(res)
do_ndd(res)
do_solaris(res)
elif system == 'linux':
do_sysctl(res)
do_ulimit(res)
do_meminfo(res)
elif system == 'darwin':
do_sysctl(res)
do_ulimit(res)
if opt['-p']:
print "BEGINDUMP"
print pickle.dumps(res)
print "ENDDUMP"
else:
for i in res:
print i, ' | ', res[i]
"""
The Plaid API
The Plaid REST API. Please see https://plaid.com/docs/api for more details. # noqa: E501
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from plaid.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
def lazy_import():
from plaid.model.transfer_sweep import TransferSweep
globals()['TransferSweep'] = TransferSweep
class SimulatedTransferSweep(ModelComposed):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
        and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
        and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'id': (str,), # noqa: E501
'created': (datetime,), # noqa: E501
'amount': (str,), # noqa: E501
'iso_currency_code': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'id': 'id', # noqa: E501
'created': 'created', # noqa: E501
'amount': 'amount', # noqa: E501
'iso_currency_code': 'iso_currency_code', # noqa: E501
}
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
'_composed_instances',
'_var_name_to_model_instances',
'_additional_properties_model_instances',
])
@convert_js_args_to_python_args
def __init__(self, id, created, amount, iso_currency_code, *args, **kwargs): # noqa: E501
"""SimulatedTransferSweep - a model defined in OpenAPI
Args:
id (str): Identifier of the sweep.
created (datetime): The datetime when the sweep occurred, in RFC 3339 format.
            amount (str): Signed decimal amount of the sweep as it appears on your sweep account ledger (e.g. \"-10.00\"). If amount is not present, the sweep was net-settled to zero and outstanding debits and credits between the sweep account and Plaid are balanced.
iso_currency_code (str): The currency of the sweep, e.g. \"USD\".
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
constant_args = {
'_check_type': _check_type,
'_path_to_item': _path_to_item,
'_spec_property_naming': _spec_property_naming,
'_configuration': _configuration,
'_visited_composed_classes': self._visited_composed_classes,
}
required_args = {
'id': id,
'created': created,
'amount': amount,
'iso_currency_code': iso_currency_code,
}
model_args = {}
model_args.update(required_args)
model_args.update(kwargs)
composed_info = validate_get_composed_info(
constant_args, model_args, self)
self._composed_instances = composed_info[0]
self._var_name_to_model_instances = composed_info[1]
self._additional_properties_model_instances = composed_info[2]
unused_args = composed_info[3]
for var_name, var_value in required_args.items():
setattr(self, var_name, var_value)
for var_name, var_value in kwargs.items():
if var_name in unused_args and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
not self._additional_properties_model_instances:
# discard variable.
continue
setattr(self, var_name, var_value)
@cached_property
def _composed_schemas():
# we need this here to make our import statements work
# we must store _composed_schemas in here so the code is only run
# when we invoke this method. If we kept this at the class
        # level we would get an error because the class level
# code would be run when this module is imported, and these composed
# classes don't exist yet because their module has not finished
# loading
lazy_import()
return {
'anyOf': [
],
'allOf': [
TransferSweep,
],
'oneOf': [
],
}
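# Usage sketch (hypothetical values; the fields mirror the required args of
# __init__ above):
#
#     sweep = SimulatedTransferSweep(
#         id='sweep-1',
#         created=datetime(2024, 1, 1),
#         amount='-10.00',
#         iso_currency_code='USD',
#     )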
#!/usr/bin/env python
# -*- coding: utf8 -*-
# pylint: disable=W0212
from collections import OrderedDict
try:
from cdecimal import Decimal
except ImportError: # pragma: no cover
from decimal import Decimal
import sys
from babel.numbers import format_decimal
import six
from agate.aggregations import Min, Max
from agate.data_types import Number
from agate.exceptions import DataTypeError
from agate import utils
def print_bars(self, label_column_name='group', value_column_name='Count', domain=None, width=120, output=sys.stdout, printable=False):
"""
Print a text-based bar chart based on this table.
:param label_column_name:
The column containing the label values. Defaults to :code:`group`, which
is the default output of :meth:`.Table.pivot` or :meth:`.Table.bins`.
:param value_column_name:
The column containing the bar values. Defaults to :code:`Count`, which
is the default output of :meth:`.Table.pivot` or :meth:`.Table.bins`.
:param domain:
A 2-tuple containing the minimum and maximum values for the chart's
x-axis. The domain must be large enough to contain all values in
the column.
:param width:
The width, in characters, to use for the bar chart. Defaults to
:code:`120`.
:param output:
A file-like object to print to. Defaults to :code:`sys.stdout`.
:param printable:
        If true, only printable characters will be output.
"""
y_label = label_column_name
label_column = self._columns[label_column_name]
# if not isinstance(label_column.data_type, Text):
# raise ValueError('Only Text data is supported for bar chart labels.')
x_label = value_column_name
value_column = self._columns[value_column_name]
if not isinstance(value_column.data_type, Number):
raise DataTypeError('Only Number data is supported for bar chart values.')
# Format numbers
decimal_places = utils.max_precision(value_column)
value_formatter = utils.make_number_formatter(decimal_places)
formatted_labels = []
for label in label_column:
formatted_labels.append(six.text_type(label))
formatted_values = []
for value in value_column:
if value is None:
formatted_values.append("-")
else:
formatted_values.append(format_decimal(
value,
format=value_formatter,
locale=utils.LC_NUMERIC
))
max_label_width = max(max([len(l) for l in formatted_labels]), len(y_label))
max_value_width = max(max([len(v) for v in formatted_values]), len(x_label))
plot_width = width - (max_label_width + max_value_width + 2)
min_value = Min(value_column_name).run(self)
max_value = Max(value_column_name).run(self)
# Calculate dimensions
if domain:
x_min = Decimal(domain[0])
x_max = Decimal(domain[1])
if min_value < x_min or max_value > x_max:
raise ValueError('Column contains values outside specified domain')
else:
x_min, x_max = utils.round_limits(min_value, max_value)
# All positive
if x_min >= 0:
x_min = Decimal('0')
plot_negative_width = 0
zero_line = 0
plot_positive_width = plot_width - 1
# All negative
elif x_max <= 0:
x_max = Decimal('0')
plot_negative_width = plot_width - 1
zero_line = plot_width - 1
plot_positive_width = 0
# Mixed signs
else:
spread = x_max - x_min
negative_portion = (x_min.copy_abs() / spread)
# Subtract one for zero line
plot_negative_width = int(((plot_width - 1) * negative_portion).to_integral_value())
zero_line = plot_negative_width
plot_positive_width = plot_width - (plot_negative_width + 1)
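    # project() maps a data value onto a character column: positive values
    # extend right from the zero line, negative values extend left of it.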
def project(value):
if value >= 0:
return plot_negative_width + int((plot_positive_width * (value / x_max)).to_integral_value())
else:
return plot_negative_width - int((plot_negative_width * (value / x_min)).to_integral_value())
# Calculate ticks
ticks = OrderedDict()
# First tick
ticks[0] = x_min
ticks[plot_width - 1] = x_max
tick_fractions = [Decimal('0.25'), Decimal('0.5'), Decimal('0.75')]
# All positive
if x_min >= 0:
for fraction in tick_fractions:
value = x_max * fraction
ticks[project(value)] = value
# All negative
elif x_max <= 0:
for fraction in tick_fractions:
value = x_min * fraction
ticks[project(value)] = value
# Mixed signs
else:
# Zero tick
ticks[zero_line] = Decimal('0')
# Halfway between min and 0
value = x_min * Decimal('0.5')
ticks[project(value)] = value
# Halfway between 0 and max
value = x_max * Decimal('0.5')
ticks[project(value)] = value
decimal_places = utils.max_precision(ticks.values())
tick_formatter = utils.make_number_formatter(decimal_places)
ticks_formatted = OrderedDict()
for k, v in ticks.items():
ticks_formatted[k] = format_decimal(
v,
format=tick_formatter,
locale=utils.LC_NUMERIC
)
def write(line):
output.write(line + '\n')
# Chart top
top_line = u'%s %s' % (y_label.ljust(max_label_width), x_label.rjust(max_value_width))
write(top_line)
if printable:
bar_mark = utils.PRINTABLE_BAR_MARK
zero_mark = utils.PRINTABLE_ZERO_MARK
else:
bar_mark = utils.BAR_MARK
zero_mark = utils.ZERO_MARK
# Bars
for i, label in enumerate(formatted_labels):
value = value_column[i]
if value == 0 or value is None:
bar_width = 0
elif value > 0:
bar_width = project(value) - plot_negative_width
elif value < 0:
bar_width = plot_negative_width - project(value)
label_text = label.ljust(max_label_width)
value_text = formatted_values[i].rjust(max_value_width)
bar = bar_mark * bar_width
if value is not None and value >= 0:
gap = (u' ' * plot_negative_width)
# All positive
if x_min <= 0:
bar = gap + zero_mark + bar
else:
bar = bar + gap + zero_mark
else:
bar = u' ' * (plot_negative_width - bar_width) + bar
# All negative or mixed signs
if value is None or x_max > value:
bar = bar + zero_mark
bar = bar.ljust(plot_width)
write('%s %s %s' % (label_text, value_text, bar))
# Axis & ticks
axis = utils.HORIZONTAL_LINE * plot_width
tick_text = u' ' * width
for i, (tick, label) in enumerate(ticks_formatted.items()):
# First tick
if tick == 0:
offset = 0
# Last tick
elif tick == plot_width - 1:
offset = -(len(label) - 1)
else:
offset = int(-(len(label) / 2))
pos = (width - plot_width) + tick + offset
# Don't print intermediate ticks that would overlap
if tick != 0 and tick != plot_width - 1:
if tick_text[pos - 1:pos + len(label) + 1] != ' ' * (len(label) + 2):
continue
tick_text = tick_text[:pos] + label + tick_text[pos + len(label):]
axis = axis[:tick] + utils.TICK_MARK + axis[tick + 1:]
write(axis.rjust(width))
write(tick_text)
"""
Web Colors
Copyright (c) 2008-2016, James Bennett <@ubernostrum>
All rights reserved.
https://github.com/ubernostrum/webcolors
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the author nor the names of other
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Utility functions for working with the color names and color value
formats defined by the HTML and CSS specifications for use in
documents on the Web.
See documentation (in docs/ directory of source distribution) for
details of the supported formats, conventions and conversions.
"""
# Stylistic notes:
#
# The HTML5 algorithms are implemented as direct translations into
# Python of the descriptions in the spec. This produces somewhat
# un-Pythonic code, but correctness of the implementations is more
# important.
#
# Several other style choices in this module enforce usage or Python
# version support (see documentation in docs/ directory for rationale
# behind Python version support and usage requirements):
#
# * The only Python 2.x version supported is 2.7. Use of a dict
# comprehension in the _reversedict() helper function enforces this
# by causing an import-time syntax error on Python 2.6 and earlier.
#
# * The oldest Python 3.x version supported is 3.3. Use of u-prefixed
# string literals enforces this by causing an import-time syntax
# error on Python 3.0, 3.1 and 3.2.
#
# * Use of this module on Python 3 requires str rather than bytes for
# all string arguments to functions. Use of the format() method for
# string formatting enforces this; as of Python 3.5, formatting with
# % is implemented on bytes objects, but the format() method will
# never be implemented on bytes.
import re
import string
import struct
# Python 2's unichr() is Python 3's chr().
try: # pragma: no cover
unichr # pragma: no cover
except NameError: # pragma: no cover
unichr = chr # pragma: no cover
# Python 2's unicode is Python 3's str.
try: # pragma: no cover
unicode # pragma: no cover
except NameError: # pragma: no cover
unicode = str # pragma: no cover
def _reversedict(d):
"""
Internal helper for generating reverse mappings; given a
dictionary, returns a new dictionary with keys and values swapped.
"""
return {value: key for key, value in d.items()}
HEX_COLOR_RE = re.compile(r'^#([a-fA-F0-9]{3}|[a-fA-F0-9]{6})$')
SUPPORTED_SPECIFICATIONS = (u'html4', u'css2', u'css21', u'css3')
SPECIFICATION_ERROR_TEMPLATE = u'{{spec}} is not a supported specification for color name lookups; \
supported specifications are: {supported}.'.format(
supported=','.join(SUPPORTED_SPECIFICATIONS)
)
# Mappings of color names to normalized hexadecimal color values.
#################################################################
# The HTML 4 named colors.
#
# The canonical source for these color definitions is the HTML 4
# specification:
#
# http://www.w3.org/TR/html401/types.html#h-6.5
#
# The file tests/definitions.py in the source distribution of this
# module downloads a copy of the HTML 4 standard and parses out the
# color names to ensure the values below are correct.
HTML4_NAMES_TO_HEX = {
u'aqua': u'#00ffff',
u'black': u'#000000',
u'blue': u'#0000ff',
u'fuchsia': u'#ff00ff',
u'green': u'#008000',
u'gray': u'#808080',
u'lime': u'#00ff00',
u'maroon': u'#800000',
u'navy': u'#000080',
u'olive': u'#808000',
u'purple': u'#800080',
u'red': u'#ff0000',
u'silver': u'#c0c0c0',
u'teal': u'#008080',
u'white': u'#ffffff',
u'yellow': u'#ffff00',
}
# CSS 2 used the same list as HTML 4.
CSS2_NAMES_TO_HEX = HTML4_NAMES_TO_HEX
# CSS 2.1 added orange.
CSS21_NAMES_TO_HEX = dict(HTML4_NAMES_TO_HEX, orange=u'#ffa500')
# The CSS 3/SVG named colors.
#
# The canonical source for these color definitions is the SVG
# specification's color list (which was adopted as CSS 3's color
# definition):
#
# http://www.w3.org/TR/SVG11/types.html#ColorKeywords
#
# CSS 3 also provides definitions of these colors:
#
# http://www.w3.org/TR/css3-color/#svg-color
#
# SVG provides the definitions as RGB triplets. CSS 3 provides them
# both as RGB triplets and as hexadecimal. Since hex values are more
# common in real-world HTML and CSS, the mapping below is to hex
# values instead. The file tests/definitions.py in the source
# distribution of this module downloads a copy of the CSS 3 color
# module and parses out the color names to ensure the values below are
# correct.
CSS3_NAMES_TO_HEX = {
u'aliceblue': u'#f0f8ff',
u'antiquewhite': u'#faebd7',
u'aqua': u'#00ffff',
u'aquamarine': u'#7fffd4',
u'azure': u'#f0ffff',
u'beige': u'#f5f5dc',
u'bisque': u'#ffe4c4',
u'black': u'#000000',
u'blanchedalmond': u'#ffebcd',
u'blue': u'#0000ff',
u'blueviolet': u'#8a2be2',
u'brown': u'#a52a2a',
u'burlywood': u'#deb887',
u'cadetblue': u'#5f9ea0',
u'chartreuse': u'#7fff00',
u'chocolate': u'#d2691e',
u'coral': u'#ff7f50',
u'cornflowerblue': u'#6495ed',
u'cornsilk': u'#fff8dc',
u'crimson': u'#dc143c',
u'cyan': u'#00ffff',
u'darkblue': u'#00008b',
u'darkcyan': u'#008b8b',
u'darkgoldenrod': u'#b8860b',
u'darkgray': u'#a9a9a9',
u'darkgrey': u'#a9a9a9',
u'darkgreen': u'#006400',
u'darkkhaki': u'#bdb76b',
u'darkmagenta': u'#8b008b',
u'darkolivegreen': u'#556b2f',
u'darkorange': u'#ff8c00',
u'darkorchid': u'#9932cc',
u'darkred': u'#8b0000',
u'darksalmon': u'#e9967a',
u'darkseagreen': u'#8fbc8f',
u'darkslateblue': u'#483d8b',
u'darkslategray': u'#2f4f4f',
u'darkslategrey': u'#2f4f4f',
u'darkturquoise': u'#00ced1',
u'darkviolet': u'#9400d3',
u'deeppink': u'#ff1493',
u'deepskyblue': u'#00bfff',
u'dimgray': u'#696969',
u'dimgrey': u'#696969',
u'dodgerblue': u'#1e90ff',
u'firebrick': u'#b22222',
u'floralwhite': u'#fffaf0',
u'forestgreen': u'#228b22',
u'fuchsia': u'#ff00ff',
u'gainsboro': u'#dcdcdc',
u'ghostwhite': u'#f8f8ff',
u'gold': u'#ffd700',
u'goldenrod': u'#daa520',
u'gray': u'#808080',
u'grey': u'#808080',
u'green': u'#008000',
u'greenyellow': u'#adff2f',
u'honeydew': u'#f0fff0',
u'hotpink': u'#ff69b4',
u'indianred': u'#cd5c5c',
u'indigo': u'#4b0082',
u'ivory': u'#fffff0',
u'khaki': u'#f0e68c',
u'lavender': u'#e6e6fa',
u'lavenderblush': u'#fff0f5',
u'lawngreen': u'#7cfc00',
u'lemonchiffon': u'#fffacd',
u'lightblue': u'#add8e6',
u'lightcoral': u'#f08080',
u'lightcyan': u'#e0ffff',
u'lightgoldenrodyellow': u'#fafad2',
u'lightgray': u'#d3d3d3',
u'lightgrey': u'#d3d3d3',
u'lightgreen': u'#90ee90',
u'lightpink': u'#ffb6c1',
u'lightsalmon': u'#ffa07a',
u'lightseagreen': u'#20b2aa',
u'lightskyblue': u'#87cefa',
u'lightslategray': u'#778899',
u'lightslategrey': u'#778899',
u'lightsteelblue': u'#b0c4de',
u'lightyellow': u'#ffffe0',
u'lime': u'#00ff00',
u'limegreen': u'#32cd32',
u'linen': u'#faf0e6',
u'magenta': u'#ff00ff',
u'maroon': u'#800000',
u'mediumaquamarine': u'#66cdaa',
u'mediumblue': u'#0000cd',
u'mediumorchid': u'#ba55d3',
u'mediumpurple': u'#9370db',
u'mediumseagreen': u'#3cb371',
u'mediumslateblue': u'#7b68ee',
u'mediumspringgreen': u'#00fa9a',
u'mediumturquoise': u'#48d1cc',
u'mediumvioletred': u'#c71585',
u'midnightblue': u'#191970',
u'mintcream': u'#f5fffa',
u'mistyrose': u'#ffe4e1',
u'moccasin': u'#ffe4b5',
u'navajowhite': u'#ffdead',
u'navy': u'#000080',
u'oldlace': u'#fdf5e6',
u'olive': u'#808000',
u'olivedrab': u'#6b8e23',
u'orange': u'#ffa500',
u'orangered': u'#ff4500',
u'orchid': u'#da70d6',
u'palegoldenrod': u'#eee8aa',
u'palegreen': u'#98fb98',
u'paleturquoise': u'#afeeee',
u'palevioletred': u'#db7093',
u'papayawhip': u'#ffefd5',
u'peachpuff': u'#ffdab9',
u'peru': u'#cd853f',
u'pink': u'#ffc0cb',
u'plum': u'#dda0dd',
u'powderblue': u'#b0e0e6',
u'purple': u'#800080',
u'red': u'#ff0000',
u'rosybrown': u'#bc8f8f',
u'royalblue': u'#4169e1',
u'saddlebrown': u'#8b4513',
u'salmon': u'#fa8072',
u'sandybrown': u'#f4a460',
u'seagreen': u'#2e8b57',
u'seashell': u'#fff5ee',
u'sienna': u'#a0522d',
u'silver': u'#c0c0c0',
u'skyblue': u'#87ceeb',
u'slateblue': u'#6a5acd',
u'slategray': u'#708090',
u'slategrey': u'#708090',
u'snow': u'#fffafa',
u'springgreen': u'#00ff7f',
u'steelblue': u'#4682b4',
u'tan': u'#d2b48c',
u'teal': u'#008080',
u'thistle': u'#d8bfd8',
u'tomato': u'#ff6347',
u'turquoise': u'#40e0d0',
u'violet': u'#ee82ee',
u'wheat': u'#f5deb3',
u'white': u'#ffffff',
u'whitesmoke': u'#f5f5f5',
u'yellow': u'#ffff00',
u'yellowgreen': u'#9acd32',
}
# Mappings of normalized hexadecimal color values to color names.
#################################################################
HTML4_HEX_TO_NAMES = _reversedict(HTML4_NAMES_TO_HEX)
CSS2_HEX_TO_NAMES = HTML4_HEX_TO_NAMES
CSS21_HEX_TO_NAMES = _reversedict(CSS21_NAMES_TO_HEX)
CSS3_HEX_TO_NAMES = _reversedict(CSS3_NAMES_TO_HEX)
# Aliases of the above mappings, for backwards compatibility.
#################################################################
(html4_names_to_hex,
css2_names_to_hex,
css21_names_to_hex,
css3_names_to_hex) = (HTML4_NAMES_TO_HEX,
CSS2_NAMES_TO_HEX,
CSS21_NAMES_TO_HEX,
CSS3_NAMES_TO_HEX)
(html4_hex_to_names,
css2_hex_to_names,
css21_hex_to_names,
css3_hex_to_names) = (HTML4_HEX_TO_NAMES,
CSS2_HEX_TO_NAMES,
CSS21_HEX_TO_NAMES,
CSS3_HEX_TO_NAMES)
# Normalization functions.
#################################################################
def normalize_hex(hex_value):
"""
Normalize a hexadecimal color value to 6 digits, lowercase.
"""
match = HEX_COLOR_RE.match(hex_value)
if match is None:
raise ValueError(
u"'{}' is not a valid hexadecimal color value.".format(hex_value)
)
hex_digits = match.group(1)
if len(hex_digits) == 3:
hex_digits = u''.join(2 * s for s in hex_digits)
return u'#{}'.format(hex_digits.lower())
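# A minimal usage sketch (shown as a Python 2 session, matching this
# module's u'' literals): case is folded, and three-digit shorthand is
# expanded by doubling each digit.
#
#   >>> normalize_hex(u'#0099CC')
#   u'#0099cc'
#   >>> normalize_hex(u'#09c')
#   u'#0099cc'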
def _normalize_integer_rgb(value):
"""
Internal normalization function for clipping integer values into
the permitted range (0-255, inclusive).
"""
return 0 if value < 0 \
else 255 if value > 255 \
else value
def normalize_integer_triplet(rgb_triplet):
"""
Normalize an integer ``rgb()`` triplet so that all values are
within the range 0-255 inclusive.
"""
return tuple(_normalize_integer_rgb(value) for value in rgb_triplet)
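# Clipping sketch: out-of-range components are clamped to the nearest
# bound, while in-range components pass through unchanged.
#
#   >>> normalize_integer_triplet((-10, 300, 128))
#   (0, 255, 128)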
def _normalize_percent_rgb(value):
"""
Internal normalization function for clipping percent values into
the permitted range (0%-100%, inclusive).
"""
percent = value.split(u'%')[0]
percent = float(percent) if u'.' in percent else int(percent)
return u'0%' if percent < 0 \
else u'100%' if percent > 100 \
else u'{}%'.format(percent)
def normalize_percent_triplet(rgb_triplet):
"""
Normalize a percentage ``rgb()`` triplet so that all values are
within the range 0%-100% inclusive.
"""
return tuple(_normalize_percent_rgb(value) for value in rgb_triplet)
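# Percent-clipping sketch (Python 2 session); decimal percentages are
# preserved as floats.
#
#   >>> normalize_percent_triplet((u'-10%', u'250%', u'85.5%'))
#   (u'0%', u'100%', u'85.5%')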
# Conversions from color names to various formats.
#################################################################
def name_to_hex(name, spec=u'css3'):
"""
Convert a color name to a normalized hexadecimal color value.
The optional keyword argument ``spec`` determines which
specification's list of color names will be used; valid values are
``html4``, ``css2``, ``css21`` and ``css3``, and the default is
``css3``.
When no color of that name exists in the given specification,
``ValueError`` is raised.
"""
if spec not in SUPPORTED_SPECIFICATIONS:
raise ValueError(SPECIFICATION_ERROR_TEMPLATE.format(spec=spec))
normalized = name.lower()
hex_value = {u'css2': CSS2_NAMES_TO_HEX,
u'css21': CSS21_NAMES_TO_HEX,
u'css3': CSS3_NAMES_TO_HEX,
u'html4': HTML4_NAMES_TO_HEX}[spec].get(normalized)
if hex_value is None:
raise ValueError(
u"'{name}' is not defined as a named color in {spec}".format(
name=name, spec=spec
)
)
return hex_value
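# Lookup sketch (Python 2 session): names are matched case-insensitively,
# and a name missing from the requested specification raises ValueError.
#
#   >>> name_to_hex(u'Navy')
#   u'#000080'
#   >>> name_to_hex(u'goldenrod', spec=u'html4')
#   Traceback (most recent call last):
#       ...
#   ValueError: 'goldenrod' is not defined as a named color in html4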
def name_to_rgb(name, spec=u'css3'):
"""
Convert a color name to a 3-tuple of integers suitable for use in
an ``rgb()`` triplet specifying that color.
"""
return hex_to_rgb(name_to_hex(name, spec=spec))
def name_to_rgb_percent(name, spec=u'css3'):
"""
Convert a color name to a 3-tuple of percentages suitable for use
in an ``rgb()`` triplet specifying that color.
"""
return rgb_to_rgb_percent(name_to_rgb(name, spec=spec))
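# Both name-based conversions above chain through name_to_hex(), so they
# share its spec handling and its ValueError behavior:
#
#   >>> name_to_rgb(u'navy')
#   (0, 0, 128)
#   >>> name_to_rgb_percent(u'navy')
#   (u'0%', u'0%', u'50%')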
# Conversions from hexadecimal color values to various formats.
#################################################################
def hex_to_name(hex_value, spec=u'css3'):
"""
Convert a hexadecimal color value to its corresponding normalized
color name, if any such name exists.
The optional keyword argument ``spec`` determines which
specification's list of color names will be used; valid values are
``html4``, ``css2``, ``css21`` and ``css3``, and the default is
``css3``.
When no color name for the value is found in the given
specification, ``ValueError`` is raised.
"""
if spec not in SUPPORTED_SPECIFICATIONS:
raise ValueError(SPECIFICATION_ERROR_TEMPLATE.format(spec=spec))
normalized = normalize_hex(hex_value)
name = {u'css2': CSS2_HEX_TO_NAMES,
u'css21': CSS21_HEX_TO_NAMES,
u'css3': CSS3_HEX_TO_NAMES,
u'html4': HTML4_HEX_TO_NAMES}[spec].get(normalized)
if name is None:
raise ValueError(
u"'{}' has no defined color name in {}".format(hex_value, spec)
)
return name
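# Reverse-lookup sketch (Python 2 session): the value is normalized
# first, so shorthand and uppercase inputs resolve; a value with no name
# in the chosen spec raises ValueError.
#
#   >>> hex_to_name(u'#DAA520')
#   u'goldenrod'
#   >>> hex_to_name(u'#daa520', spec=u'html4')
#   Traceback (most recent call last):
#       ...
#   ValueError: '#daa520' has no defined color name in html4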
def hex_to_rgb(hex_value):
"""
Convert a hexadecimal color value to a 3-tuple of integers
suitable for use in an ``rgb()`` triplet specifying that color.
"""
hex_value = normalize_hex(hex_value)
hex_value = int(hex_value[1:], 16)
return (hex_value >> 16,
hex_value >> 8 & 0xff,
hex_value & 0xff)
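# Bit-shift sketch: after normalization, the 24-bit value splits into its
# red (high), green (middle) and blue (low) bytes.
#
#   >>> hex_to_rgb(u'#0099cc')
#   (0, 153, 204)
#   >>> hex_to_rgb(u'#09c')
#   (0, 153, 204)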
def hex_to_rgb_percent(hex_value):
"""
Convert a hexadecimal color value to a 3-tuple of percentages
suitable for use in an ``rgb()`` triplet representing that color.
"""
return rgb_to_rgb_percent(hex_to_rgb(hex_value))
# Conversions from integer rgb() triplets to various formats.
#################################################################
def rgb_to_name(rgb_triplet, spec=u'css3'):
"""
Convert a 3-tuple of integers, suitable for use in an ``rgb()``
color triplet, to its corresponding normalized color name, if any
such name exists.
The optional keyword argument ``spec`` determines which
specification's list of color names will be used; valid values are
``html4``, ``css2``, ``css21`` and ``css3``, and the default is
``css3``.
If there is no matching name, ``ValueError`` is raised.
"""
return hex_to_name(
rgb_to_hex(
normalize_integer_triplet(
rgb_triplet
)
),
spec=spec
)
def rgb_to_hex(rgb_triplet):
"""
Convert a 3-tuple of integers, suitable for use in an ``rgb()``
color triplet, to a normalized hexadecimal value for that color.
"""
return u'#{:02x}{:02x}{:02x}'.format(
*normalize_integer_triplet(
rgb_triplet
)
)
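# Formatting sketch (Python 2 session); components are clipped before
# being formatted as two lowercase hex digits each.
#
#   >>> rgb_to_hex((255, 255, 255))
#   u'#ffffff'
#   >>> rgb_to_hex((300, -1, 128))
#   u'#ff0080'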
def rgb_to_rgb_percent(rgb_triplet):
"""
Convert a 3-tuple of integers, suitable for use in an ``rgb()``
color triplet, to a 3-tuple of percentages suitable for use in
representing that color.
This function makes some trade-offs in terms of the accuracy of
the final representation; for some common integer values,
special-case logic is used to ensure a precise result (e.g.,
integer 128 will always convert to '50%', integer 32 will always
convert to '12.5%'), but for all other values a standard Python
``float`` is used and rounded to two decimal places, which may
result in a loss of precision for some values.
"""
# In order to maintain precision for common values,
# special-case them.
specials = {255: u'100%', 128: u'50%', 64: u'25%',
32: u'12.5%', 16: u'6.25%', 0: u'0%'}
return tuple(specials.get(d, u'{:.02f}%'.format(d / 255.0 * 100))
for d in normalize_integer_triplet(rgb_triplet))
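# Precision sketch (Python 2 session): 32 hits the special-case table,
# while 218 and 165 take the rounded-float path.
#
#   >>> rgb_to_rgb_percent((218, 165, 32))
#   (u'85.49%', u'64.71%', u'12.5%')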
# Conversions from percentage rgb() triplets to various formats.
#################################################################
def rgb_percent_to_name(rgb_percent_triplet, spec=u'css3'):
"""
Convert a 3-tuple of percentages, suitable for use in an ``rgb()``
color triplet, to its corresponding normalized color name, if any
such name exists.
The optional keyword argument ``spec`` determines which
specification's list of color names will be used; valid values are
``html4``, ``css2``, ``css21`` and ``css3``, and the default is
``css3``.
If there is no matching name, ``ValueError`` is raised.
"""
return rgb_to_name(
rgb_percent_to_rgb(
normalize_percent_triplet(
rgb_percent_triplet
)
),
spec=spec
)
def rgb_percent_to_hex(rgb_percent_triplet):
"""
Convert a 3-tuple of percentages, suitable for use in an ``rgb()``
color triplet, to a normalized hexadecimal color value for that
color.
"""
return rgb_to_hex(
rgb_percent_to_rgb(
normalize_percent_triplet(
rgb_percent_triplet
)
)
)
def _percent_to_integer(percent):
"""
Internal helper for converting a percentage value to an integer
between 0 and 255 inclusive.
"""
return int(
round(
float(percent.split(u'%')[0]) / 100 * 255
)
)
def rgb_percent_to_rgb(rgb_percent_triplet):
"""
Convert a 3-tuple of percentages, suitable for use in an ``rgb()``
color triplet, to a 3-tuple of integers suitable for use in
representing that color.
Some precision may be lost in this conversion. See the note
regarding precision for ``rgb_to_rgb_percent()`` for details.
"""
return tuple(
map(
_percent_to_integer,
normalize_percent_triplet(
rgb_percent_triplet
)
)
)
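# Round-trip sketch: converting the percentages back re-derives the
# original integers, subject to the rounding described above.
#
#   >>> rgb_percent_to_rgb((u'85.49%', u'64.71%', u'12.5%'))
#   (218, 165, 32)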
# HTML5 color algorithms.
#################################################################
# These functions are written in a way that may seem strange to
# developers familiar with Python, because they do not use the most
# efficient or idiomatic way of accomplishing their tasks. This is
# because, for compliance, these functions are written as literal
# translations into Python of the algorithms in HTML5.
#
# For ease of understanding, the relevant steps of the algorithm from
# the standard are included as comments interspersed in the
# implementation.
def html5_parse_simple_color(input):
"""
Apply the simple color parsing algorithm from section 2.4.6 of
HTML5.
"""
# 1. Let input be the string being parsed.
#
# 2. If input is not exactly seven characters long, then return an
# error.
if not isinstance(input, unicode) or len(input) != 7:
raise ValueError(
u"An HTML5 simple color must be a Unicode string "
u"exactly seven characters long."
)
# 3. If the first character in input is not a U+0023 NUMBER SIGN
# character (#), then return an error.
if not input.startswith('#'):
raise ValueError(
u"An HTML5 simple color must begin with the "
u"character '#' (U+0023)."
)
# 4. If the last six characters of input are not all ASCII hex
# digits, then return an error.
if not all(c in string.hexdigits for c in input[1:]):
raise ValueError(
u"An HTML5 simple color must contain exactly six ASCII hex digits."
)
# 5. Let result be a simple color.
#
# 6. Interpret the second and third characters as a hexadecimal
# number and let the result be the red component of result.
#
# 7. Interpret the fourth and fifth characters as a hexadecimal
# number and let the result be the green component of result.
#
# 8. Interpret the sixth and seventh characters as a hexadecimal
# number and let the result be the blue component of result.
#
# 9. Return result.
result = (int(input[1:3], 16),
int(input[3:5], 16),
int(input[5:7], 16))
return result
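# Strictness sketch (Python 2 session): the simple-color parser accepts
# exactly u'#' plus six hex digits and nothing else.
#
#   >>> html5_parse_simple_color(u'#0099cc')
#   (0, 153, 204)
#   >>> html5_parse_simple_color(u'#09c')
#   Traceback (most recent call last):
#       ...
#   ValueError: An HTML5 simple color must be a Unicode string exactly seven characters long.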
def html5_serialize_simple_color(simple_color):
"""
Apply the serialization algorithm for a simple color from section
2.4.6 of HTML5.
"""
red, green, blue = simple_color
# 1. Let result be a string consisting of a single "#" (U+0023)
# character.
result = u'#'
# 2. Convert the red, green, and blue components in turn to
# two-digit hexadecimal numbers using lowercase ASCII hex
# digits, zero-padding if necessary, and append these numbers
# to result, in the order red, green, blue.
format_string = '{:02x}'
result += format_string.format(red)
result += format_string.format(green)
result += format_string.format(blue)
# 3. Return result, which will be a valid lowercase simple color.
return result
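# Serialization sketch: always u'#' plus six lowercase, zero-padded hex
# digits -- the inverse of the simple parser above.
#
#   >>> html5_serialize_simple_color((0, 153, 204))
#   u'#0099cc'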
def html5_parse_legacy_color(input):
"""
Apply the legacy color parsing algorithm from section 2.4.6 of
HTML5.
"""
# 1. Let input be the string being parsed.
if not isinstance(input, unicode):
raise ValueError(
u"HTML5 legacy color parsing requires a Unicode string as input."
)
# 2. If input is the empty string, then return an error.
if input == "":
raise ValueError(
u"HTML5 legacy color parsing forbids empty string as a value."
)
# 3. Strip leading and trailing whitespace from input.
input = input.strip()
# 4. If input is an ASCII case-insensitive match for the string
# "transparent", then return an error.
if input.lower() == u"transparent":
raise ValueError(
u'HTML5 legacy color parsing forbids "transparent" as a value.'
)
# 5. If input is an ASCII case-insensitive match for one of the
# keywords listed in the SVG color keywords section of the CSS3
# Color specification, then return the simple color
# corresponding to that keyword.
keyword_hex = CSS3_NAMES_TO_HEX.get(input.lower())
if keyword_hex is not None:
return html5_parse_simple_color(keyword_hex)
# 6. If input is four characters long, and the first character in
# input is a "#" (U+0023) character, and the last three
# characters of input are all ASCII hex digits, then run these
# substeps:
if len(input) == 4 and \
input.startswith(u'#') and \
all(c in string.hexdigits for c in input[1:]):
# 1. Let result be a simple color.
#
# 2. Interpret the second character of input as a hexadecimal
# digit; let the red component of result be the resulting
# number multiplied by 17.
#
# 3. Interpret the third character of input as a hexadecimal
# digit; let the green component of result be the resulting
# number multiplied by 17.
#
# 4. Interpret the fourth character of input as a hexadecimal
# digit; let the blue component of result be the resulting
# number multiplied by 17.
result = (int(input[1], 16) * 17,
int(input[2], 16) * 17,
int(input[3], 16) * 17)
# 5. Return result.
return result
# 7. Replace any characters in input that have a Unicode code
# point greater than U+FFFF (i.e. any characters that are not
# in the basic multilingual plane) with the two-character
# string "00".
# This one's a bit weird due to the existence of multiple internal
# Unicode string representations in different versions and builds
# of Python.
#
# From Python 2.2 through 3.2, Python could be compiled with
# "narrow" or "wide" Unicode strings (see PEP 261). Narrow builds
# handled Unicode strings with two-byte characters and surrogate
# pairs for non-BMP code points. Wide builds handled Unicode
# strings with four-byte characters and no surrogates. This means
# ord() is only sufficient to identify a non-BMP character on a
# wide build.
#
# Starting with Python 3.3, the internal string representation
# (see PEP 393) is now dynamic, and Python chooses an encoding --
# either latin-1, UCS-2 or UCS-4 -- wide enough to handle the
# highest code point in the string.
#
# The code below bypasses all of that for a consistently effective
# method: encode the string to little-endian UTF-32, then perform
# a binary unpack of it as four-byte integers. Those integers will
# be the Unicode code points, and from there filtering out non-BMP
# code points is easy.
encoded_input = input.encode('utf_32_le')
# Format string is '<' (for little-endian byte order), then a
# sequence of 'L' characters (for 4-byte unsigned long integer)
# equal to the length of the original string, which is also
# one-fourth the encoded length. For example, for a six-character
# input the generated format string will be '<LLLLLL'.
format_string = '<' + ('L' * (int(len(encoded_input) / 4)))
codepoints = struct.unpack(format_string, encoded_input)
input = ''.join(u'00' if c > 0xffff
else unichr(c)
for c in codepoints)
# 8. If input is longer than 128 characters, truncate input,
# leaving only the first 128 characters.
if len(input) > 128:
input = input[:128]
# 9. If the first character in input is a "#" (U+0023) character,
# remove it.
if input.startswith(u'#'):
input = input[1:]
# 10. Replace any character in input that is not an ASCII hex
# digit with the character "0" (U+0030).
if any(c not in string.hexdigits for c in input):
input = ''.join(c if c in string.hexdigits else u'0' for c in input)
# 11. While input's length is zero or not a multiple of three,
# append a "0" (U+0030) character to input.
while (len(input) == 0) or (len(input) % 3 != 0):
input += u'0'
# 12. Split input into three strings of equal length, to obtain
# three components. Let length be the length of those
# components (one third the length of input).
length = int(len(input) / 3)
red = input[:length]
green = input[length:length*2]
blue = input[length*2:]
# 13. If length is greater than 8, then remove the leading
# length-8 characters in each component, and let length be 8.
if length > 8:
red, green, blue = (red[length-8:],
green[length-8:],
blue[length-8:])
length = 8
# 14. While length is greater than two and the first character in
# each component is a "0" (U+0030) character, remove that
# character and reduce length by one.
while (length > 2) and (red[0] == u'0' and
green[0] == u'0' and
blue[0] == u'0'):
red, green, blue = (red[1:],
green[1:],
blue[1:])
length -= 1
# 15. If length is still greater than two, truncate each
# component, leaving only the first two characters in each.
if length > 2:
red, green, blue = (red[:2],
green[:2],
blue[:2])
# 16. Let result be a simple color.
#
# 17. Interpret the first component as a hexadecimal number; let
# the red component of result be the resulting number.
#
# 18. Interpret the second component as a hexadecimal number; let
# the green component of result be the resulting number.
#
# 19. Interpret the third component as a hexadecimal number; let
# the blue component of result be the resulting number.
result = (int(red, 16),
int(green, 16),
int(blue, 16))
# 20. Return result.
return result
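# Permissiveness sketch: the legacy parser coerces almost any non-empty
# string to *some* color. In the second call below, non-hex characters
# become '0', the string is padded and split into thirds, and each
# component is truncated to two digits.
#
#   >>> html5_parse_legacy_color(u'#F00')
#   (255, 0, 0)
#   >>> html5_parse_legacy_color(u'chucknorris')
#   (192, 0, 0)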
|
|
# orm/interfaces.py
# Copyright (C) 2005-2011 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
Contains various base classes used throughout the ORM.
Defines the now deprecated ORM extension classes as well
as ORM internals.
Other than the deprecated extensions, this module and the
classes within should be considered mostly private.
"""
from itertools import chain
from sqlalchemy import exc as sa_exc
from sqlalchemy import util
from sqlalchemy.sql import operators
deque = util.importlater('collections').deque
mapperutil = util.importlater('sqlalchemy.orm', 'util')
collections = None
__all__ = (
'AttributeExtension',
'EXT_CONTINUE',
'EXT_STOP',
'ExtensionOption',
'InstrumentationManager',
'LoaderStrategy',
'MapperExtension',
'MapperOption',
'MapperProperty',
'PropComparator',
'PropertyOption',
'SessionExtension',
'StrategizedOption',
'StrategizedProperty',
'build_path',
)
EXT_CONTINUE = util.symbol('EXT_CONTINUE')
EXT_STOP = util.symbol('EXT_STOP')
ONETOMANY = util.symbol('ONETOMANY')
MANYTOONE = util.symbol('MANYTOONE')
MANYTOMANY = util.symbol('MANYTOMANY')
from deprecated_interfaces import AttributeExtension, SessionExtension, \
MapperExtension
class MapperProperty(object):
"""Manage the relationship of a ``Mapper`` to a single class
attribute, as well as that attribute as it appears on individual
instances of the class, including attribute instrumentation,
attribute access, loading behavior, and dependency calculations.
"""
cascade = ()
"""The set of 'cascade' attribute names.
This collection is checked before the 'cascade_iterator' method is called.
"""
def setup(self, context, entity, path, reduced_path, adapter, **kwargs):
"""Called by Query for the purposes of constructing a SQL statement.
Each MapperProperty associated with the target mapper processes the
statement referenced by the query context, adding columns and/or
criterion as appropriate.
"""
pass
def create_row_processor(self, context, path, reduced_path,
mapper, row, adapter):
"""Return a 3-tuple consisting of three row processing functions.
"""
return None, None, None
def cascade_iterator(self, type_, state, visited_instances=None,
halt_on=None):
"""Iterate through instances related to the given instance for
a particular 'cascade', starting with this MapperProperty.
Return an iterator of 3-tuples (instance, mapper, state).
Note that the 'cascade' collection on this MapperProperty is
checked first for the given type before cascade_iterator is called.
See PropertyLoader for the related instance implementation.
"""
return iter(())
def set_parent(self, parent, init):
self.parent = parent
def instrument_class(self, mapper):
raise NotImplementedError()
_compile_started = False
_compile_finished = False
def init(self):
"""Called after all mappers are created to assemble
relationships between mappers and perform other post-mapper-creation
initialization steps.
"""
self._compile_started = True
self.do_init()
self._compile_finished = True
@property
def class_attribute(self):
"""Return the class-bound descriptor corresponding to this
MapperProperty."""
return getattr(self.parent.class_, self.key)
def do_init(self):
"""Perform subclass-specific initialization post-mapper-creation
steps.
This is a template method called by the ``MapperProperty``
object's init() method.
"""
pass
def post_instrument_class(self, mapper):
"""Perform instrumentation adjustments that need to occur
after init() has completed.
"""
pass
def per_property_preprocessors(self, uow):
pass
def is_primary(self):
"""Return True if this ``MapperProperty``'s mapper is the
primary mapper for its class.
This flag is used to indicate that the ``MapperProperty`` can
define attribute instrumentation for the class at the class
level (as opposed to the individual instance level).
"""
return not self.parent.non_primary
def merge(self, session, source_state, source_dict, dest_state,
dest_dict, load, _recursive):
"""Merge the attribute represented by this ``MapperProperty``
from source to destination object"""
pass
def compare(self, operator, value, **kw):
"""Return a compare operation for the columns represented by
this ``MapperProperty`` to the given value, which may be a
column value or an instance. 'operator' is an operator from
the operators module, or from sql.Comparator.
By default uses the PropComparator attached to this MapperProperty
under the attribute name "comparator".
"""
return operator(self.comparator, value)
class PropComparator(operators.ColumnOperators):
"""Defines comparison operations for MapperProperty objects.
User-defined subclasses of :class:`.PropComparator` may be created. The
built-in Python comparison and math operator methods, such as
``__eq__()``, ``__lt__()``, ``__add__()``, can be overridden to provide
new operator behavior. The custom :class:`.PropComparator` is passed to
the mapper property via the ``comparator_factory`` argument. In each case,
the appropriate subclass of :class:`.PropComparator` should be used::
from sqlalchemy.orm.properties import \\
ColumnProperty,\\
CompositeProperty,\\
RelationshipProperty
class MyColumnComparator(ColumnProperty.Comparator):
pass
class MyCompositeComparator(CompositeProperty.Comparator):
pass
class MyRelationshipComparator(RelationshipProperty.Comparator):
pass
"""
def __init__(self, prop, mapper, adapter=None):
self.prop = self.property = prop
self.mapper = mapper
self.adapter = adapter
def __clause_element__(self):
raise NotImplementedError("%r" % self)
def adapted(self, adapter):
"""Return a copy of this PropComparator which will use the given
adaption function on the local side of generated expressions.
"""
return self.__class__(self.prop, self.mapper, adapter)
@staticmethod
def any_op(a, b, **kwargs):
return a.any(b, **kwargs)
@staticmethod
def has_op(a, b, **kwargs):
return a.has(b, **kwargs)
@staticmethod
def of_type_op(a, class_):
return a.of_type(class_)
def of_type(self, class_):
"""Redefine this object in terms of a polymorphic subclass.
Returns a new PropComparator from which further criterion can be
evaluated.
e.g.::
query.join(Company.employees.of_type(Engineer)).\\
filter(Engineer.name=='foo')
\class_
a class or mapper indicating that criterion will be against
this specific subclass.
"""
return self.operate(PropComparator.of_type_op, class_)
def any(self, criterion=None, **kwargs):
"""Return true if this collection contains any member that meets the
given criterion.
criterion
an optional ClauseElement formulated against the member class' table
or attributes.
\**kwargs
key/value pairs corresponding to member class attribute names which
will be compared via equality to the corresponding values.
"""
return self.operate(PropComparator.any_op, criterion, **kwargs)
def has(self, criterion=None, **kwargs):
"""Return true if this element references a member which meets the
given criterion.
criterion
an optional ClauseElement formulated against the member class' table
or attributes.
\**kwargs
key/value pairs corresponding to member class attribute names which
will be compared via equality to the corresponding values.
"""
return self.operate(PropComparator.has_op, criterion, **kwargs)
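# A usage sketch with hypothetical mapped classes (a User with a
# one-to-many 'addresses' relationship to Address): any() and has()
# render as correlated EXISTS subqueries against the related table.
#
#   session.query(User).filter(
#       User.addresses.any(Address.email == u'ed@example.com'))
#   session.query(Address).filter(
#       Address.user.has(name=u'ed'))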
class StrategizedProperty(MapperProperty):
"""A MapperProperty which uses selectable strategies to affect
loading behavior.
There is a single strategy selected by default. Alternate
strategies can be selected at Query time through the usage of
``StrategizedOption`` objects via the Query.options() method.
"""
def _get_context_strategy(self, context, reduced_path):
key = ('loaderstrategy', reduced_path)
if key in context.attributes:
cls = context.attributes[key]
try:
return self._strategies[cls]
except KeyError:
return self.__init_strategy(cls)
else:
return self.strategy
def _get_strategy(self, cls):
try:
return self._strategies[cls]
except KeyError:
return self.__init_strategy(cls)
def __init_strategy(self, cls):
self._strategies[cls] = strategy = cls(self)
return strategy
def setup(self, context, entity, path, reduced_path, adapter, **kwargs):
self._get_context_strategy(context, reduced_path + (self.key,)).\
setup_query(context, entity, path,
reduced_path, adapter, **kwargs)
def create_row_processor(self, context, path, reduced_path, mapper, row, adapter):
return self._get_context_strategy(context, reduced_path + (self.key,)).\
create_row_processor(context, path,
reduced_path, mapper, row, adapter)
def do_init(self):
self._strategies = {}
self.strategy = self.__init_strategy(self.strategy_class)
def post_instrument_class(self, mapper):
if self.is_primary() and \
not mapper.class_manager._attr_has_impl(self.key):
self.strategy.init_class_attribute(mapper)
def build_path(entity, key, prev=None):
if prev:
return prev + (entity, key)
else:
return (entity, key)
def serialize_path(path):
if path is None:
return None
return zip(
[m.class_ for m in [path[i] for i in range(0, len(path), 2)]],
[path[i] for i in range(1, len(path), 2)] + [None]
)
def deserialize_path(path):
if path is None:
return None
p = tuple(chain(*[(mapperutil.class_mapper(cls), key) for cls, key in path]))
if p and p[-1] is None:
p = p[0:-1]
return p
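# Sketch of the path format these helpers share, using hypothetical
# user_mapper and address_mapper objects: a path is a flat tuple
# alternating entities and attribute keys.
#
#   build_path(user_mapper, 'addresses')
#   # -> (user_mapper, 'addresses')
#   build_path(address_mapper, 'email', prev=(user_mapper, 'addresses'))
#   # -> (user_mapper, 'addresses', address_mapper, 'email')
#
# serialize_path() pairs each mapper's class with the key that follows it
# (a trailing entity with no key is paired with None), and
# deserialize_path() reverses that, resolving classes back to mappers.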
class MapperOption(object):
"""Describe a modification to a Query."""
propagate_to_loaders = False
"""if True, indicate this option should be carried along
Query object generated by scalar or object lazy loaders.
"""
def process_query(self, query):
pass
def process_query_conditionally(self, query):
"""same as process_query(), except that this option may not
apply to the given query.
Used when secondary loaders resend existing options to a new
Query."""
self.process_query(query)
class PropertyOption(MapperOption):
"""A MapperOption that is applied to a property off the mapper or
one of its child mappers, identified by a dot-separated key. """
def __init__(self, key, mapper=None):
self.key = key
self.mapper = mapper
def process_query(self, query):
self._process(query, True)
def process_query_conditionally(self, query):
self._process(query, False)
def _process(self, query, raiseerr):
paths, mappers = self._get_paths(query, raiseerr)
if paths:
self.process_query_property(query, paths, mappers)
def process_query_property(self, query, paths, mappers):
pass
def __getstate__(self):
d = self.__dict__.copy()
d['key'] = ret = []
for token in util.to_list(self.key):
if isinstance(token, PropComparator):
ret.append((token.mapper.class_, token.key))
else:
ret.append(token)
return d
def __setstate__(self, state):
ret = []
for key in state['key']:
if isinstance(key, tuple):
cls, propkey = key
ret.append(getattr(cls, propkey))
else:
ret.append(key)
state['key'] = tuple(ret)
self.__dict__ = state
def _find_entity_prop_comparator(self, query, token, mapper, raiseerr):
if mapperutil._is_aliased_class(mapper):
searchfor = mapper
isa = False
else:
searchfor = mapperutil._class_to_mapper(mapper)
isa = True
for ent in query._mapper_entities:
if searchfor is ent.path_entity or isa \
and searchfor.common_parent(ent.path_entity):
return ent
else:
if raiseerr:
if not list(query._mapper_entities):
raise sa_exc.ArgumentError(
"Query has only expression-based entities - "
"can't find property named '%s'."
% (token, )
)
else:
raise sa_exc.ArgumentError(
"Can't find property '%s' on any entity "
"specified in this Query. Note the full path "
"from root (%s) to target entity must be specified."
% (token, ",".join(str(x) for
x in query._mapper_entities))
)
else:
return None
def _find_entity_basestring(self, query, token, raiseerr):
for ent in query._mapper_entities:
# return only the first _MapperEntity when searching
# based on string prop name. Ideally object
# attributes are used to specify more exactly.
return ent
else:
if raiseerr:
raise sa_exc.ArgumentError(
"Query has only expression-based entities - "
"can't find property named '%s'."
% (token, )
)
else:
return None
def _get_paths(self, query, raiseerr):
path = None
entity = None
l = []
mappers = []
# _current_path implies we're in a
# secondary load with an existing path
current_path = list(query._current_path)
tokens = deque(self.key)
while tokens:
token = tokens.popleft()
if isinstance(token, basestring):
sub_tokens = token.split(".", 1)
token = sub_tokens[0]
tokens.extendleft(sub_tokens[1:])
# exhaust current_path before
# matching tokens to entities
if current_path:
if current_path[1] == token:
current_path = current_path[2:]
continue
else:
return [], []
if not entity:
entity = self._find_entity_basestring(
query,
token,
raiseerr)
if entity is None:
return [], []
path_element = entity.path_entity
mapper = entity.mapper
mappers.append(mapper)
if mapper.has_property(token):
prop = mapper.get_property(token)
else:
if raiseerr:
raise sa_exc.ArgumentError(
"Can't find property named '%s' on the "
"mapped entity %s in this Query. " % (
token, mapper)
)
else:
return [], []
elif isinstance(token, PropComparator):
prop = token.property
# exhaust current_path before
# matching tokens to entities
if current_path:
if current_path[0:2] == \
[token.parententity, prop.key]:
current_path = current_path[2:]
continue
else:
return [], []
if not entity:
entity = self._find_entity_prop_comparator(
query,
prop.key,
token.parententity,
raiseerr)
if not entity:
return [], []
path_element = entity.path_entity
mapper = entity.mapper
mappers.append(prop.parent)
else:
raise sa_exc.ArgumentError(
"mapper option expects "
"string key or list of attributes")
assert prop is not None
path = build_path(path_element, prop.key, path)
l.append(path)
if getattr(token, '_of_type', None):
path_element = mapper = token._of_type
else:
path_element = mapper = getattr(prop, 'mapper', None)
if mapper is None and tokens:
raise sa_exc.ArgumentError(
"Attribute '%s' of entity '%s' does not "
"refer to a mapped entity" %
(token, entity)
)
if current_path:
# ran out of tokens before
# current_path was exhausted.
assert not tokens
return [], []
return l, mappers
class StrategizedOption(PropertyOption):
"""A MapperOption that affects which LoaderStrategy will be used
for an operation by a StrategizedProperty.
"""
chained = False
def process_query_property(self, query, paths, mappers):
# _get_context_strategy may receive the path in terms of a base
# mapper - e.g. options(eagerload_all(Company.employees,
# Engineer.machines)) in the polymorphic tests leads to
# "(Person, 'machines')" in the path due to the mechanics of how
# the eager strategy builds up the path
if self.chained:
for path in paths:
query._attributes[('loaderstrategy',
_reduce_path(path))] = \
self.get_strategy_class()
else:
query._attributes[('loaderstrategy',
_reduce_path(paths[-1]))] = \
self.get_strategy_class()
def get_strategy_class(self):
raise NotImplementedError()
def _reduce_path(path):
"""Convert a (mapper, path) path to use base mappers.
This is used to allow more open ended selection of loader strategies, i.e.
Mapper -> prop1 -> Subclass -> prop2, where Subclass is a sub-mapper
of the mapper referenced by Mapper.prop1.
"""
return tuple(element if i % 2 != 0
             else getattr(element, 'base_mapper', element)
             for i, element in enumerate(path))
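# Reduction sketch with a hypothetical inheritance hierarchy in which
# engineer_mapper inherits from person_mapper: entities at even positions
# are swapped for their base_mapper, keys at odd positions pass through.
#
#   _reduce_path((engineer_mapper, 'machines'))
#   # -> (person_mapper, 'machines')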
class LoaderStrategy(object):
"""Describe the loading behavior of a StrategizedProperty object.
The ``LoaderStrategy`` interacts with the querying process in three
ways:
* it controls the configuration of the ``InstrumentedAttribute``
placed on a class to handle the behavior of the attribute. this
may involve setting up class-level callable functions to fire
off a select operation when the attribute is first accessed
(i.e. a lazy load)
* it processes the ``QueryContext`` at statement construction time,
where it can modify the SQL statement that is being produced.
Simple column attributes may add their represented column to the
list of selected columns, *eager loading* properties may add
``LEFT OUTER JOIN`` clauses to the statement.
* It produces "row processor" functions at result fetching time.
These "row processor" functions populate a particular attribute
on a particular mapped instance.
"""
def __init__(self, parent):
self.parent_property = parent
self.is_class_level = False
self.parent = self.parent_property.parent
self.key = self.parent_property.key
# TODO: there's no particular reason we need
# the separate .init() method at this point.
# It's possible someone has written their
# own LS object.
self.init()
def init(self):
raise NotImplementedError("LoaderStrategy")
def init_class_attribute(self, mapper):
pass
def setup_query(self, context, entity, path, reduced_path, adapter, **kwargs):
pass
def create_row_processor(self, context, path, reduced_path, mapper,
row, adapter):
"""Return row processing functions which fulfill the contract
specified by MapperProperty.create_row_processor.
StrategizedProperty delegates its create_row_processor method
directly to this method. """
return None, None, None
def __str__(self):
return str(self.parent_property)
def debug_callable(self, fn, logger, announcement, logfn):
if announcement:
logger.debug(announcement)
if logfn:
def call(*args, **kwargs):
logger.debug(logfn(*args, **kwargs))
return fn(*args, **kwargs)
return call
else:
return fn
class InstrumentationManager(object):
"""User-defined class instrumentation extension.
:class:`.InstrumentationManager` can be subclassed in order
to change
how class instrumentation proceeds. This class exists for
the purposes of integration with other object management
frameworks which would like to entirely modify the
instrumentation methodology of the ORM, and is not intended
for regular usage. For interception of class instrumentation
events, see :class:`.InstrumentationEvents`.
For an example of :class:`.InstrumentationManager`, see the
example :ref:`examples_instrumentation`.
The API for this class should be considered as semi-stable,
and may change slightly with new releases.
"""
# r4361 added a mandatory (cls) constructor to this interface.
# given that, perhaps class_ should be dropped from all of these
# signatures.
def __init__(self, class_):
pass
def manage(self, class_, manager):
setattr(class_, '_default_class_manager', manager)
def dispose(self, class_, manager):
delattr(class_, '_default_class_manager')
def manager_getter(self, class_):
def get(cls):
return cls._default_class_manager
return get
def instrument_attribute(self, class_, key, inst):
pass
def post_configure_attribute(self, class_, key, inst):
pass
def install_descriptor(self, class_, key, inst):
setattr(class_, key, inst)
def uninstall_descriptor(self, class_, key):
delattr(class_, key)
def install_member(self, class_, key, implementation):
setattr(class_, key, implementation)
def uninstall_member(self, class_, key):
delattr(class_, key)
def instrument_collection_class(self, class_, key, collection_class):
global collections
if collections is None:
from sqlalchemy.orm import collections
return collections.prepare_instrumentation(collection_class)
def get_instance_dict(self, class_, instance):
return instance.__dict__
def initialize_instance_dict(self, class_, instance):
pass
def install_state(self, class_, instance, state):
setattr(instance, '_default_state', state)
def remove_state(self, class_, instance):
delattr(instance, '_default_state')
def state_getter(self, class_):
return lambda instance: getattr(instance, '_default_state')
def dict_getter(self, class_):
return lambda inst: self.get_instance_dict(class_, inst)
|
|
# coding: utf-8
#
# Copyright 2014 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Config properties and functions for managing email notifications."""
import datetime
import logging
from core.domain import config_domain
from core.domain import html_cleaner
from core.domain import rights_manager
from core.domain import subscription_services
from core.domain import user_services
from core.platform import models
import feconf
(email_models,) = models.Registry.import_models([models.NAMES.email])
app_identity_services = models.Registry.import_app_identity_services()
email_services = models.Registry.import_email_services()
transaction_services = models.Registry.import_transaction_services()
# Stub for logging.error(), so that it can be swapped out in tests.
def log_new_error(*args, **kwargs):
logging.error(*args, **kwargs)
EMAIL_HTML_BODY_SCHEMA = {
'type': 'unicode',
'ui_config': {
'rows': 20,
}
}
EMAIL_CONTENT_SCHEMA = {
'type': 'dict',
'properties': [{
'name': 'subject',
'schema': {
'type': 'unicode',
},
}, {
'name': 'html_body',
'schema': EMAIL_HTML_BODY_SCHEMA,
}],
}
EMAIL_SENDER_NAME = config_domain.ConfigProperty(
'email_sender_name', {'type': 'unicode'},
'The default sender name for outgoing emails.', 'Site Admin')
EMAIL_FOOTER = config_domain.ConfigProperty(
'email_footer', {'type': 'unicode', 'ui_config': {'rows': 5}},
'The footer to append to all outgoing emails. (This should be written in '
'HTML and include an unsubscribe link.)',
'You can change your email preferences via the '
'<a href="https://www.example.com">Preferences</a> page.')
_PLACEHOLDER_SUBJECT = 'THIS IS A PLACEHOLDER.'
_PLACEHOLDER_HTML_BODY = 'THIS IS A <b>PLACEHOLDER</b> AND SHOULD BE REPLACED.'
SIGNUP_EMAIL_CONTENT = config_domain.ConfigProperty(
'signup_email_content', EMAIL_CONTENT_SCHEMA,
'Content of email sent after a new user signs up. (The email body should '
'be written with HTML and not include a salutation or footer.) These '
'emails are only sent if the functionality is enabled in feconf.py.',
{
'subject': _PLACEHOLDER_SUBJECT,
'html_body': _PLACEHOLDER_HTML_BODY,
})
EXPLORATION_ROLE_MANAGER = 'manager rights'
EXPLORATION_ROLE_EDITOR = 'editor rights'
EXPLORATION_ROLE_PLAYTESTER = 'playtest access'
EDITOR_ROLE_EMAIL_HTML_ROLES = {
rights_manager.ROLE_OWNER: EXPLORATION_ROLE_MANAGER,
rights_manager.ROLE_EDITOR: EXPLORATION_ROLE_EDITOR,
rights_manager.ROLE_VIEWER: EXPLORATION_ROLE_PLAYTESTER
}
_EDITOR_ROLE_EMAIL_HTML_RIGHTS = {
'can_manage': '<li>Change the exploration permissions</li><br>',
'can_edit': '<li>Edit the exploration</li><br>',
'can_play': '<li>View and playtest the exploration</li><br>'
}
EDITOR_ROLE_EMAIL_RIGHTS_FOR_ROLE = {
EXPLORATION_ROLE_MANAGER: (
_EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_manage'] +
_EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_edit'] +
_EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_play']),
EXPLORATION_ROLE_EDITOR: (
_EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_edit'] +
_EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_play']),
EXPLORATION_ROLE_PLAYTESTER: _EDITOR_ROLE_EMAIL_HTML_RIGHTS['can_play']
}
PUBLICIZE_EXPLORATION_EMAIL_HTML_BODY = config_domain.ConfigProperty(
'publicize_exploration_email_html_body', EMAIL_HTML_BODY_SCHEMA,
'Default content for the email sent after an exploration is publicized by '
'a moderator. These emails are only sent if the functionality is enabled '
'in feconf.py. Leave this field blank if emails should not be sent.',
'Congratulations, your exploration has been featured in the Oppia '
'library!')
UNPUBLISH_EXPLORATION_EMAIL_HTML_BODY = config_domain.ConfigProperty(
'unpublish_exploration_email_html_body', EMAIL_HTML_BODY_SCHEMA,
'Default content for the email sent after an exploration is unpublished '
'by a moderator. These emails are only sent if the functionality is '
'enabled in feconf.py. Leave this field blank if emails should not be '
'sent.',
'I\'m writing to inform you that I have unpublished the above '
'exploration.')
SENDER_VALIDATORS = {
feconf.EMAIL_INTENT_SIGNUP: (lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_PUBLICIZE_EXPLORATION: (
lambda x: rights_manager.Actor(x).is_moderator()),
feconf.EMAIL_INTENT_UNPUBLISH_EXPLORATION: (
lambda x: rights_manager.Actor(x).is_moderator()),
feconf.EMAIL_INTENT_DAILY_BATCH: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_EDITOR_ROLE_NOTIFICATION: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_FEEDBACK_MESSAGE_NOTIFICATION: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_SUGGESTION_NOTIFICATION: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_SUBSCRIPTION_NOTIFICATION: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_QUERY_STATUS_NOTIFICATION: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.EMAIL_INTENT_MARKETING: (
lambda x: rights_manager.Actor(x).is_admin()),
feconf.EMAIL_INTENT_DELETE_EXPLORATION: (
lambda x: rights_manager.Actor(x).is_moderator()),
feconf.EMAIL_INTENT_REPORT_BAD_CONTENT: (
lambda x: x == feconf.SYSTEM_COMMITTER_ID),
feconf.BULK_EMAIL_INTENT_MARKETING: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value),
feconf.BULK_EMAIL_INTENT_IMPROVE_EXPLORATION: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value),
feconf.BULK_EMAIL_INTENT_CREATE_EXPLORATION: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value),
feconf.BULK_EMAIL_INTENT_CREATOR_REENGAGEMENT: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value),
feconf.BULK_EMAIL_INTENT_LEARNER_REENGAGEMENT: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value),
feconf.BULK_EMAIL_INTENT_TEST: (
lambda x: user_services.get_username(x) in
config_domain.WHITELISTED_EMAIL_SENDERS.value)
}
def _require_sender_id_is_valid(intent, sender_id):
"""Ensure that the sender ID is valid, based on the email's intent.
Many emails are only allowed to be sent by a certain user or type of user,
e.g. 'admin' or an admin/moderator. This function will raise an exception
if the given sender is not allowed to send this type of email.
Args:
intent: str. The intent string, i.e. the purpose of the email.
Valid intent strings are defined in feconf.py.
sender_id: str. The ID of the user sending the email.
Raises:
Exception: The email intent is invalid.
Exception: The sender_id is not appropriate for the given intent.
"""
if intent not in SENDER_VALIDATORS:
raise Exception('Invalid email intent string: %s' % intent)
else:
if not SENDER_VALIDATORS[intent](sender_id):
logging.error(
'Invalid sender_id %s for email with intent \'%s\'' %
(sender_id, intent))
raise Exception(
'Invalid sender_id for email with intent \'%s\'' % intent)
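# Validation sketch: a signup email may only come from the system
# committer, so the first call below passes silently and the second
# raises (the second user ID is hypothetical).
#
#   _require_sender_id_is_valid(
#       feconf.EMAIL_INTENT_SIGNUP, feconf.SYSTEM_COMMITTER_ID)  # OK
#   _require_sender_id_is_valid(
#       feconf.EMAIL_INTENT_SIGNUP, 'some_other_user_id')  # raises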
def _send_email(
recipient_id, sender_id, intent, email_subject, email_html_body,
sender_email, bcc_admin=False, sender_name=None):
"""Sends an email to the given recipient.
This function should be used for sending all user-facing emails.
Raises an Exception if the sender_id is not appropriate for the given
intent. Currently we support only system-generated emails and emails
initiated by moderator actions.
Args:
recipient_id: str. The user ID of the recipient.
sender_id: str. The user ID of the sender.
intent: str. The intent string for the email, i.e. the purpose/type.
email_subject: str. The subject of the email.
email_html_body: str. The body (message) of the email.
sender_email: str. The sender's email address.
bcc_admin: bool. Whether to send a copy of the email to the admin's
email address.
sender_name: str or None. The name to be shown in the "sender" field of
the email.
"""
if sender_name is None:
sender_name = EMAIL_SENDER_NAME.value
_require_sender_id_is_valid(intent, sender_id)
recipient_email = user_services.get_email_from_user_id(recipient_id)
cleaned_html_body = html_cleaner.clean(email_html_body)
if cleaned_html_body != email_html_body:
log_new_error(
'Original email HTML body does not match cleaned HTML body:\n'
'Original:\n%s\n\nCleaned:\n%s\n' %
(email_html_body, cleaned_html_body))
return
raw_plaintext_body = cleaned_html_body.replace('<br/>', '\n').replace(
'<br>', '\n').replace('<li>', '<li>- ').replace('</p><p>', '</p>\n<p>')
cleaned_plaintext_body = html_cleaner.strip_html_tags(raw_plaintext_body)
if email_models.SentEmailModel.check_duplicate_message(
recipient_id, email_subject, cleaned_plaintext_body):
log_new_error(
'Duplicate email:\n'
'Details:\n%s %s\n%s\n\n' %
(recipient_id, email_subject, cleaned_plaintext_body))
return
def _send_email_in_transaction():
sender_name_email = '%s <%s>' % (sender_name, sender_email)
email_services.send_mail(
sender_name_email, recipient_email, email_subject,
cleaned_plaintext_body, cleaned_html_body, bcc_admin)
email_models.SentEmailModel.create(
recipient_id, recipient_email, sender_id, sender_name_email, intent,
email_subject, cleaned_html_body, datetime.datetime.utcnow())
return transaction_services.run_in_transaction(_send_email_in_transaction)
def _send_bulk_mail(
recipient_ids, sender_id, intent, email_subject, email_html_body,
sender_email, sender_name, instance_id=None):
"""Sends an email to all given recipients.
Args:
recipient_ids: list(str). The user IDs of the email recipients.
sender_id: str. The ID of the user sending the email.
intent: str. The intent string, i.e. the purpose of the email.
email_subject: str. The subject of the email.
email_html_body: str. The body (message) of the email.
sender_email: str. The sender's email address.
sender_name: str. The name to be shown in the "sender" field of the
email.
instance_id: str or None. The ID of the BulkEmailModel entity instance.
"""
_require_sender_id_is_valid(intent, sender_id)
recipients_settings = user_services.get_users_settings(recipient_ids)
recipient_emails = [user.email for user in recipients_settings]
cleaned_html_body = html_cleaner.clean(email_html_body)
if cleaned_html_body != email_html_body:
log_new_error(
'Original email HTML body does not match cleaned HTML body:\n'
'Original:\n%s\n\nCleaned:\n%s\n' %
(email_html_body, cleaned_html_body))
return
raw_plaintext_body = cleaned_html_body.replace('<br/>', '\n').replace(
'<br>', '\n').replace('<li>', '<li>- ').replace('</p><p>', '</p>\n<p>')
cleaned_plaintext_body = html_cleaner.strip_html_tags(raw_plaintext_body)
def _send_bulk_mail_in_transaction(instance_id=None):
sender_name_email = '%s <%s>' % (sender_name, sender_email)
email_services.send_bulk_mail(
sender_name_email, recipient_emails, email_subject,
cleaned_plaintext_body, cleaned_html_body)
if instance_id is None:
instance_id = email_models.BulkEmailModel.get_new_id('')
email_models.BulkEmailModel.create(
instance_id, recipient_ids, sender_id, sender_name_email, intent,
email_subject, cleaned_html_body, datetime.datetime.utcnow())
return transaction_services.run_in_transaction(
_send_bulk_mail_in_transaction, instance_id)
def send_mail_to_admin(email_subject, email_body):
"""Send an email to the admin email address.
The email is sent to the ADMIN_EMAIL_ADDRESS set in feconf.py.
Args:
email_subject: str. Subject of the email.
email_body: str. Body (message) of the email.
"""
app_id = app_identity_services.get_application_id()
body = '(Sent from %s)\n\n%s' % (app_id, email_body)
email_services.send_mail(
feconf.SYSTEM_EMAIL_ADDRESS, feconf.ADMIN_EMAIL_ADDRESS, email_subject,
body, body.replace('\n', '<br/>'), bcc_admin=False)
def send_post_signup_email(user_id):
"""Sends a post-signup email to the given user.
Raises an exception if emails are not allowed to be sent to users (i.e.
feconf.CAN_SEND_EMAILS is False).
Args:
user_id: str. User ID of the user that signed up.
"""
for key, content in SIGNUP_EMAIL_CONTENT.value.iteritems():
if content == SIGNUP_EMAIL_CONTENT.default_value[key]:
log_new_error(
'Please ensure that the value for the admin config property '
'SIGNUP_EMAIL_CONTENT is set, before allowing post-signup '
'emails to be sent.')
return
user_settings = user_services.get_user_settings(user_id)
email_subject = SIGNUP_EMAIL_CONTENT.value['subject']
email_body = 'Hi %s,<br><br>%s<br><br>%s' % (
user_settings.username,
SIGNUP_EMAIL_CONTENT.value['html_body'],
EMAIL_FOOTER.value)
_send_email(
user_id, feconf.SYSTEM_COMMITTER_ID, feconf.EMAIL_INTENT_SIGNUP,
email_subject, email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def require_valid_intent(intent):
"""Checks if the given intent is valid, and raises an exception if it is
not.
Raises:
Exception: The given intent did not match an entry in
feconf.VALID_MODERATOR_ACTIONS.
"""
if intent not in feconf.VALID_MODERATOR_ACTIONS:
raise Exception('Unrecognized email intent: %s' % intent)
def _get_email_config(intent):
"""Return the default body for the email type matching the given moderator
action intent.
Args:
intent: str. The intent string (cause/purpose) of the email.
Returns:
str. The default body for the email type matching the given moderator
action intent.
"""
require_valid_intent(intent)
return config_domain.Registry.get_config_property(
feconf.VALID_MODERATOR_ACTIONS[intent]['email_config'])
def get_draft_moderator_action_email(intent):
"""Returns a draft of the text of the body for an email sent immediately
following a moderator action. An empty body is a signal to the frontend
that no email will be sent.
Args:
intent: str. The intent string (cause/purpose) of the email.
Returns:
str. Draft of the email body for an email sent after a moderator action,
or an empty string if no email should be sent.
"""
try:
require_moderator_email_prereqs_are_satisfied()
return _get_email_config(intent).value
except Exception:
return ''
def require_moderator_email_prereqs_are_satisfied():
"""Raises an exception if, for any reason, moderator emails cannot be sent.
Raises:
Exception: feconf.REQUIRE_EMAIL_ON_MODERATOR_ACTION is False.
Exception: feconf.CAN_SEND_EMAILS is False.
"""
if not feconf.REQUIRE_EMAIL_ON_MODERATOR_ACTION:
raise Exception(
'For moderator emails to be sent, please ensure that '
'REQUIRE_EMAIL_ON_MODERATOR_ACTION is set to True.')
if not feconf.CAN_SEND_EMAILS:
raise Exception(
'For moderator emails to be sent, please ensure that '
'CAN_SEND_EMAILS is set to True.')
def send_moderator_action_email(
sender_id, recipient_id, intent, exploration_title, email_body):
"""Sends a email immediately following a moderator action (publicize,
unpublish, delete) to the given user.
Raises an exception if emails are not allowed to be sent to users (i.e.
feconf.CAN_SEND_EMAILS is False).
Args:
sender_id: str. User ID of the sender.
recipient_id: str. User ID of the recipient.
intent: str. The intent string (cause/purpose) of the email.
exploration_title: str. The title of the exploration on which the
moderator action was taken.
email_body: str. The email content/message.
"""
require_moderator_email_prereqs_are_satisfied()
email_config = feconf.VALID_MODERATOR_ACTIONS[intent]
recipient_user_settings = user_services.get_user_settings(recipient_id)
sender_user_settings = user_services.get_user_settings(sender_id)
email_subject = feconf.VALID_MODERATOR_ACTIONS[intent]['email_subject_fn'](
exploration_title)
email_salutation_html = email_config['email_salutation_html_fn'](
recipient_user_settings.username)
email_signoff_html = email_config['email_signoff_html_fn'](
sender_user_settings.username)
full_email_content = (
'%s<br><br>%s<br><br>%s<br><br>%s' % (
email_salutation_html, email_body, email_signoff_html,
EMAIL_FOOTER.value))
_send_email(
recipient_id, sender_id, intent, email_subject, full_email_content,
feconf.SYSTEM_EMAIL_ADDRESS, bcc_admin=True)
def send_role_notification_email(
inviter_id, recipient_id, recipient_role, exploration_id,
exploration_title):
"""Sends a email when a new user is given activity rights (Manager, Editor,
Viewer) to an exploration by creator of exploration.
Email will only be sent if recipient wants to receive these emails (i.e.
'can_receive_editor_role_email' is set True in recipent's preferences).
Args:
inviter_id: str. ID of the user who invited the recipient to the new
role.
recipient_id: str. User ID of the recipient.
recipient_role: str. Role given to the recipient. Must be defined in
EDITOR_ROLE_EMAIL_HTML_ROLES.
exploration_id: str. ID of the exploration for which the recipient has
been given the new role.
exploration_title: str. Title of the exploration for which the recipient
has been given the new role.
Raises:
Exception: The role is invalid (i.e. not defined in
EDITOR_ROLE_EMAIL_HTML_ROLES).
"""
# Editor role email body and email subject templates.
email_subject_template = (
'%s - invitation to collaborate')
email_body_template = (
'Hi %s,<br>'
'<br>'
'<b>%s</b> has granted you %s to their exploration, '
'"<a href="http://www.oppia.org/create/%s">%s</a>", on Oppia.org.<br>'
'<br>'
'This allows you to:<br>'
'<ul>%s</ul>'
'You can find the exploration '
'<a href="http://www.oppia.org/create/%s">here</a>.<br>'
'<br>'
'Thanks, and happy collaborating!<br>'
'<br>'
'Best wishes,<br>'
'The Oppia Team<br>'
'<br>%s')
# Return from here if sending email is turned off.
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
# Return from here if sending editor role emails is disabled.
if not feconf.CAN_SEND_EDITOR_ROLE_EMAILS:
log_new_error('This app cannot send editor role emails to users.')
return
recipient_user_settings = user_services.get_user_settings(recipient_id)
inviter_user_settings = user_services.get_user_settings(inviter_id)
recipient_preferences = user_services.get_email_preferences(recipient_id)
if not recipient_preferences.can_receive_editor_role_email:
# Do not send email if recipient has declined.
return
if recipient_role not in EDITOR_ROLE_EMAIL_HTML_ROLES:
raise Exception(
'Invalid role: %s' % recipient_role)
role_description = EDITOR_ROLE_EMAIL_HTML_ROLES[recipient_role]
rights_html = EDITOR_ROLE_EMAIL_RIGHTS_FOR_ROLE[role_description]
email_subject = email_subject_template % exploration_title
email_body = email_body_template % (
recipient_user_settings.username, inviter_user_settings.username,
role_description, exploration_id, exploration_title, rights_html,
exploration_id, EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_EDITOR_ROLE_NOTIFICATION, email_subject, email_body,
feconf.NOREPLY_EMAIL_ADDRESS,
sender_name=inviter_user_settings.username)
def send_emails_to_subscribers(creator_id, exploration_id, exploration_title):
"""Sends an email to all the subscribers of the creators when the creator
publishes an exploration.
Args:
creator_id: str. The id of the creator who has published an exploration
and to whose subscribers we are sending emails.
exploration_id: str. The id of the exploration which the creator has
published.
exploration_title: str. The title of the exploration which the creator
has published.
"""
creator_name = user_services.get_username(creator_id)
email_subject = ('%s has published a new exploration!' % creator_name)
email_body_template = (
'Hi %s,<br>'
'<br>'
'%s has published a new exploration! You can play it here: '
'<a href="https://www.oppia.org/explore/%s">%s</a><br>'
'<br>'
'Thanks, and happy learning!<br>'
'<br>'
'Best wishes,<br>'
'- The Oppia Team<br>'
'<br>%s')
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
if not feconf.CAN_SEND_SUBSCRIPTION_EMAILS:
log_new_error('This app cannot send subscription emails to users.')
return
recipient_list = subscription_services.get_all_subscribers_of_creator(
creator_id)
recipients_usernames = user_services.get_usernames(recipient_list)
recipients_preferences = user_services.get_users_email_preferences(
recipient_list)
for index, username in enumerate(recipients_usernames):
if recipients_preferences[index].can_receive_subscription_email:
email_body = email_body_template % (
username, creator_name, exploration_id,
exploration_title, EMAIL_FOOTER.value)
_send_email(
recipient_list[index], feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_SUBSCRIPTION_NOTIFICATION,
email_subject, email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def send_feedback_message_email(recipient_id, feedback_messages):
"""Sends an email when creator receives feedback message to an exploration.
Args:
recipient_id: str. User ID of recipient.
feedback_messages: dict. Contains feedback messages. Example:
{
'exploration_id': {
'title': 'Exploration 1234',
'messages': ['Feedback message 1', 'Feedback message 2']
}
}
"""
email_subject = (
'You\'ve received %s new message%s on your explorations' %
(len(feedback_messages), 's' if len(feedback_messages) > 1 else ''))
email_body_template = (
'Hi %s,<br>'
'<br>'
'You\'ve received %s new message%s on your Oppia explorations:<br>'
'<ul>%s</ul>'
'You can view and reply to your messages from your '
'<a href="https://www.oppia.org/dashboard">dashboard</a>.'
'<br>'
'Thanks, and happy teaching!<br>'
'<br>'
'Best wishes,<br>'
'The Oppia Team<br>'
'<br>%s')
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
if not feconf.CAN_SEND_FEEDBACK_MESSAGE_EMAILS:
log_new_error('This app cannot send feedback message emails to users.')
return
if not feedback_messages:
return
recipient_user_settings = user_services.get_user_settings(recipient_id)
messages_html = ''
for _, reference in feedback_messages.iteritems():
for message in reference['messages']:
messages_html += (
'<li>%s: %s<br></li>' % (reference['title'], message))
email_body = email_body_template % (
recipient_user_settings.username, len(feedback_messages),
's' if len(feedback_messages) > 1 else '',
messages_html, EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_FEEDBACK_MESSAGE_NOTIFICATION,
email_subject, email_body, feconf.NOREPLY_EMAIL_ADDRESS)
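# Illustrative usage sketch (an editorial addition, not part of the original
# service): builds the feedback_messages dict in the exact shape the
# docstring above describes and forwards it. The exploration ID and message
# strings here are hypothetical.
def _demo_send_feedback_digest(recipient_id):
    feedback_messages = {
        'exp_id_1': {
            'title': 'Exploration 1234',
            'messages': ['Feedback message 1', 'Feedback message 2'],
        },
    }
    send_feedback_message_email(recipient_id, feedback_messages)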
def can_users_receive_thread_email(
recipient_ids, exploration_id, has_suggestion):
"""Returns if users can receive email.
Args:
recipient_ids: list(str). IDs of persons that should receive the email.
exploration_id: str. ID of exploration that received new message.
has_suggestion: bool. True if thread contains suggestion.
Returns:
list(bool). True if user can receive the email, False otherwise.
"""
users_global_prefs = (
user_services.get_users_email_preferences(recipient_ids))
users_exploration_prefs = (
user_services.get_users_email_preferences_for_exploration(
recipient_ids, exploration_id))
zipped_preferences = zip(users_global_prefs, users_exploration_prefs)
result = []
if has_suggestion:
for user_global_prefs, user_exploration_prefs in zipped_preferences:
result.append(
user_global_prefs.can_receive_feedback_message_email
and not user_exploration_prefs.mute_suggestion_notifications)
else:
for user_global_prefs, user_exploration_prefs in zipped_preferences:
result.append(
user_global_prefs.can_receive_feedback_message_email
and not user_exploration_prefs.mute_feedback_notifications)
return result
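# Hedged usage sketch: the returned list is positionally aligned with
# recipient_ids, so callers filter recipients with it, as
# send_suggestion_email does below. The function name is hypothetical.
def _demo_filter_thread_recipients(recipient_ids, exploration_id):
    can_receive = can_users_receive_thread_email(
        recipient_ids, exploration_id, has_suggestion=False)
    return [
        recipient_id for recipient_id, can_email
        in zip(recipient_ids, can_receive) if can_email]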
def send_suggestion_email(
exploration_title, exploration_id, author_id, recipient_list):
"""Send emails to notify the given recipients about new suggestion.
Each recipient will only be emailed if their email preferences allow for
incoming feedback message emails.
Args:
exploration_title: str. Title of the exploration with the new
suggestion.
exploration_id: str. The ID of the exploration with the new suggestion.
author_id: str. The user ID of the author of the suggestion.
recipient_list: list(str). The user IDs of the email recipients.
"""
email_subject = 'New suggestion for "%s"' % exploration_title
email_body_template = (
'Hi %s,<br>'
'%s has submitted a new suggestion for your Oppia exploration, '
'<a href="https://www.oppia.org/create/%s">"%s"</a>.<br>'
'You can accept or reject this suggestion by visiting the '
'<a href="https://www.oppia.org/create/%s#/feedback">feedback page</a> '
'for your exploration.<br>'
'<br>'
'Thanks!<br>'
'- The Oppia Team<br>'
'<br>%s')
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
if not feconf.CAN_SEND_FEEDBACK_MESSAGE_EMAILS:
log_new_error('This app cannot send feedback message emails to users.')
return
author_settings = user_services.get_user_settings(author_id)
can_users_receive_email = (
can_users_receive_thread_email(recipient_list, exploration_id, True))
for index, recipient_id in enumerate(recipient_list):
recipient_user_settings = user_services.get_user_settings(recipient_id)
if can_users_receive_email[index]:
# Send email only if recipient wants to receive.
email_body = email_body_template % (
recipient_user_settings.username, author_settings.username,
exploration_id, exploration_title, exploration_id,
EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_SUGGESTION_NOTIFICATION,
email_subject, email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def send_instant_feedback_message_email(
recipient_id, sender_id, message, email_subject, exploration_title,
exploration_id, thread_title):
"""Send an email when a new message is posted to a feedback thread, or when
the thread's status is changed.
Args:
recipient_id: str. The user ID of the recipient.
sender_id: str. The user ID of the sender.
message: str. The message text or status change text from the sender.
email_subject: str. The subject line to be sent in the email.
exploration_title: str. The title of the exploration.
exploration_id: str. ID of the exploration the feedback thread is about.
thread_title: str. The title of the feedback thread.
"""
email_body_template = (
'Hi %s,<br><br>'
'New update to thread "%s" on '
'<a href="https://www.oppia.org/create/%s#/feedback">%s</a>:<br>'
'<ul><li>%s: %s<br></li></ul>'
'(You received this message because you are a '
'participant in this thread.)<br><br>'
'Best wishes,<br>'
'The Oppia team<br>'
'<br>%s')
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
if not feconf.CAN_SEND_FEEDBACK_MESSAGE_EMAILS:
log_new_error('This app cannot send feedback message emails to users.')
return
sender_settings = user_services.get_user_settings(sender_id)
recipient_settings = user_services.get_user_settings(recipient_id)
recipient_preferences = user_services.get_email_preferences(recipient_id)
if recipient_preferences.can_receive_feedback_message_email:
email_body = email_body_template % (
recipient_settings.username, thread_title, exploration_id,
exploration_title, sender_settings.username, message,
EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_FEEDBACK_MESSAGE_NOTIFICATION, email_subject,
email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def send_flag_exploration_email(
exploration_title, exploration_id, reporter_id, report_text):
"""Send an email to all moderators when an exploration is flagged.
Args:
        exploration_title: str. The title of the flagged exploration.
exploration_id: str. The ID of the flagged exploration.
reporter_id: str. The user ID of the reporter.
report_text: str. The message entered by the reporter.
"""
email_subject = 'Exploration flagged by user: "%s"' % exploration_title
email_body_template = (
'Hello Moderator,<br>'
'%s has flagged exploration "%s" on the following '
'grounds: <br>'
        '%s.<br>'
'You can modify the exploration by clicking '
'<a href="https://www.oppia.org/create/%s">here</a>.<br>'
'<br>'
'Thanks!<br>'
'- The Oppia Team<br>'
'<br>%s')
if not feconf.CAN_SEND_EMAILS:
log_new_error('This app cannot send emails to users.')
return
email_body = email_body_template % (
user_services.get_user_settings(reporter_id).username,
exploration_title, report_text, exploration_id,
EMAIL_FOOTER.value)
recipient_list = config_domain.MODERATOR_IDS.value
for recipient_id in recipient_list:
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_REPORT_BAD_CONTENT,
email_subject, email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def send_query_completion_email(recipient_id, query_id):
"""Send an email to the initiator of a bulk email query with a link to view
the query results.
Args:
recipient_id: str. The recipient ID.
query_id: str. The query ID.
"""
email_subject = 'Query %s has successfully completed' % query_id
email_body_template = (
'Hi %s,<br>'
        'Your query with id %s has successfully completed its '
'execution. Visit the result page '
'<a href="https://www.oppia.org/emaildashboardresult/%s">here</a> '
        'to see the result of your query.<br><br>'
'Thanks!<br>'
'<br>'
'Best wishes,<br>'
'The Oppia Team<br>'
'<br>%s')
recipient_user_settings = user_services.get_user_settings(recipient_id)
email_body = email_body_template % (
recipient_user_settings.username, query_id, query_id,
EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_QUERY_STATUS_NOTIFICATION, email_subject,
email_body, feconf.NOREPLY_EMAIL_ADDRESS)
def send_query_failure_email(recipient_id, query_id, query_params):
"""Send an email to the initiator of a failed bulk email query.
Args:
recipient_id: str. The recipient ID.
query_id: str. The query ID.
query_params: dict. The parameters of the query, as key:value.
"""
email_subject = 'Query %s has failed' % query_id
email_body_template = (
'Hi %s,<br>'
        'Your query with id %s has failed due to an error '
        'during execution. '
        'Please check the query parameters and submit the query again.<br><br>'
'Thanks!<br>'
'<br>'
'Best wishes,<br>'
'The Oppia Team<br>'
'<br>%s')
recipient_user_settings = user_services.get_user_settings(recipient_id)
email_body = email_body_template % (
recipient_user_settings.username, query_id, EMAIL_FOOTER.value)
_send_email(
recipient_id, feconf.SYSTEM_COMMITTER_ID,
feconf.EMAIL_INTENT_QUERY_STATUS_NOTIFICATION, email_subject,
email_body, feconf.NOREPLY_EMAIL_ADDRESS)
admin_email_subject = 'Query job has failed.'
admin_email_body_template = (
        'Query job with query id %s has failed during execution.\n'
'Query parameters:\n\n')
for key in sorted(query_params):
admin_email_body_template += '%s: %s\n' % (key, query_params[key])
admin_email_body = admin_email_body_template % query_id
send_mail_to_admin(admin_email_subject, admin_email_body)
def send_user_query_email(
sender_id, recipient_ids, email_subject, email_body, email_intent):
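    """Sends a bulk email to all recipients of a user query.
    Args:
        sender_id: str. The user ID of the sender.
        recipient_ids: list(str). The user IDs of the email recipients.
        email_subject: str. The subject line of the email.
        email_body: str. The body of the email.
        email_intent: str. The intent string, i.e. the purpose of the email.
    Returns:
        str. The ID of the BulkEmailModel instance recording this email.
    """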
bulk_email_model_id = email_models.BulkEmailModel.get_new_id('')
sender_name = user_services.get_username(sender_id)
sender_email = user_services.get_email_from_user_id(sender_id)
_send_bulk_mail(
recipient_ids, sender_id, email_intent, email_subject, email_body,
sender_email, sender_name, bulk_email_model_id)
return bulk_email_model_id
def send_test_email_for_bulk_emails(tester_id, email_subject, email_body):
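    """Sends a test version of a bulk email to the query author themselves.
    Args:
        tester_id: str. The user ID of the tester, who is both the sender
            and the sole recipient.
        email_subject: str. The subject line of the email.
        email_body: str. The body of the email.
    """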
tester_name = user_services.get_username(tester_id)
tester_email = user_services.get_email_from_user_id(tester_id)
return _send_email(
tester_id, tester_id, feconf.BULK_EMAIL_INTENT_TEST,
email_subject, email_body, tester_email, sender_name=tester_name)
|
|
"""
The MNIST dataset.
"""
__authors__ = "Ian Goodfellow"
__copyright__ = "Copyright 2010-2012, Universite de Montreal"
__credits__ = ["Ian Goodfellow"]
__license__ = "3-clause BSD"
__maintainer__ = "LISA Lab"
__email__ = "pylearn-dev@googlegroups"
import numpy as N
np = N
from theano.compat.six.moves import xrange
from pylearn2.datasets import dense_design_matrix
from pylearn2.datasets import control
from pylearn2.datasets import cache
from pylearn2.utils import serial
from pylearn2.utils.mnist_ubyte import read_mnist_images
from pylearn2.utils.mnist_ubyte import read_mnist_labels
from pylearn2.utils.rng import make_np_rng
class MNIST(dense_design_matrix.DenseDesignMatrix):
"""
The MNIST dataset
Parameters
----------
which_set : str
'train' or 'test'
center : bool
If True, preprocess so that each pixel has zero mean.
    shuffle : bool
        If True, shuffle the examples using a fixed random seed.
    binarize : bool
        If True, binarize the pixel values by thresholding at 0.5.
    start : int or None
        If given, use only examples from `start` (inclusive) onwards.
    stop : int or None
        If given, use only examples up to `stop` (exclusive).
    axes : list
        Ordering of the axes of the topological view; a permutation of
        ('b', 0, 1, 'c').
    preprocessor : Preprocessor or None
        Preprocessor to apply to the data.
    fit_preprocessor : bool
        Whether the preprocessor should be fit to this dataset.
    fit_test_preprocessor : bool
        Whether the preprocessor should be fit to the test set returned
        by `get_test_set`.
"""
def __init__(self, which_set, center=False, shuffle=False,
binarize=False, start=None, stop=None,
axes=['b', 0, 1, 'c'],
preprocessor=None,
fit_preprocessor=False,
fit_test_preprocessor=False):
self.args = locals()
if which_set not in ['train', 'test']:
if which_set == 'valid':
                raise ValueError(
                    "There is no such thing as the MNIST validation set. "
                    "MNIST consists of 60,000 train examples and 10,000 "
                    "test examples. If you wish to use a validation set "
                    "you should divide the train set yourself. The "
                    "pylearn2 dataset implements and will only ever "
                    "implement the standard train / test split used in "
                    "the literature.")
            raise ValueError(
                'Unrecognized which_set value "%s". ' % (which_set,) +
                'Valid values are ["train","test"].')
def dimshuffle(b01c):
"""
.. todo::
WRITEME
"""
default = ('b', 0, 1, 'c')
return b01c.transpose(*[default.index(axis) for axis in axes])
if control.get_load_data():
path = "${PYLEARN2_DATA_PATH}/mnist/"
if which_set == 'train':
im_path = path + 'train-images-idx3-ubyte'
label_path = path + 'train-labels-idx1-ubyte'
else:
assert which_set == 'test'
im_path = path + 't10k-images-idx3-ubyte'
label_path = path + 't10k-labels-idx1-ubyte'
# Path substitution done here in order to make the lower-level
# mnist_ubyte.py as stand-alone as possible (for reuse in, e.g.,
# the Deep Learning Tutorials, or in another package).
im_path = serial.preprocess(im_path)
label_path = serial.preprocess(label_path)
# Locally cache the files before reading them
datasetCache = cache.datasetCache
im_path = datasetCache.cache_file(im_path)
label_path = datasetCache.cache_file(label_path)
topo_view = read_mnist_images(im_path, dtype='float32')
y = np.atleast_2d(read_mnist_labels(label_path)).T
else:
if which_set == 'train':
size = 60000
elif which_set == 'test':
size = 10000
else:
raise ValueError(
                    'Unrecognized which_set value "%s". ' % (which_set,) +
                    'Valid values are ["train","test"].')
topo_view = np.random.rand(size, 28, 28)
y = np.random.randint(0, 10, (size, 1))
if binarize:
topo_view = (topo_view > 0.5).astype('float32')
y_labels = 10
m, r, c = topo_view.shape
assert r == 28
assert c == 28
topo_view = topo_view.reshape(m, r, c, 1)
if which_set == 'train':
assert m == 60000
elif which_set == 'test':
assert m == 10000
else:
assert False
if center:
topo_view -= topo_view.mean(axis=0)
if shuffle:
self.shuffle_rng = make_np_rng(
None, [1, 2, 3], which_method="shuffle")
for i in xrange(topo_view.shape[0]):
j = self.shuffle_rng.randint(m)
# Copy ensures that memory is not aliased.
tmp = topo_view[i, :, :, :].copy()
topo_view[i, :, :, :] = topo_view[j, :, :, :]
topo_view[j, :, :, :] = tmp
tmp = y[i:i + 1].copy()
y[i] = y[j]
y[j] = tmp
super(MNIST, self).__init__(topo_view=dimshuffle(topo_view), y=y,
axes=axes, y_labels=y_labels)
assert not N.any(N.isnan(self.X))
if start is not None:
assert start >= 0
if stop > self.X.shape[0]:
raise ValueError('stop=' + str(stop) + '>' +
'm=' + str(self.X.shape[0]))
assert stop > start
self.X = self.X[start:stop, :]
if self.X.shape[0] != stop - start:
raise ValueError("X.shape[0]: %d. start: %d stop: %d"
% (self.X.shape[0], start, stop))
if len(self.y.shape) > 1:
self.y = self.y[start:stop, :]
else:
self.y = self.y[start:stop]
assert self.y.shape[0] == stop - start
if which_set == 'test':
assert fit_test_preprocessor is None or \
(fit_preprocessor == fit_test_preprocessor)
if self.X is not None and preprocessor:
preprocessor.apply(self, fit_preprocessor)
def adjust_for_viewer(self, X):
"""
.. todo::
WRITEME
"""
return N.clip(X * 2. - 1., -1., 1.)
def adjust_to_be_viewed_with(self, X, other, per_example=False):
"""
.. todo::
WRITEME
"""
return self.adjust_for_viewer(X)
def get_test_set(self):
"""
.. todo::
WRITEME
"""
args = {}
args.update(self.args)
del args['self']
args['which_set'] = 'test'
args['start'] = None
args['stop'] = None
args['fit_preprocessor'] = args['fit_test_preprocessor']
args['fit_test_preprocessor'] = None
return MNIST(**args)
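# Minimal usage sketch (an editorial addition for illustration; assumes
# PYLEARN2_DATA_PATH points at a directory containing the raw MNIST ubyte
# files): load the training set, then derive the matching test set.
def _demo_load_mnist():
    train = MNIST(which_set='train', center=True, shuffle=True)
    test = train.get_test_set()
    return train, test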
class MNIST_rotated_background(dense_design_matrix.DenseDesignMatrix):
"""
.. todo::
WRITEME
Parameters
----------
which_set : WRITEME
center : WRITEME
"""
def __init__(self, which_set, center=False):
path = "${PYLEARN2_DATA_PATH}/mnist/mnist_rotation_back_image/" \
+ which_set
obj = serial.load(path)
X = obj['data']
X = N.cast['float32'](X)
y = N.asarray(obj['labels'])
if center:
X -= X.mean(axis=0)
view_converter = dense_design_matrix.DefaultViewConverter((28, 28, 1))
super(MNIST_rotated_background, self).__init__(
X=X, y=y, y_labels=10, view_converter=view_converter)
assert not N.any(N.isnan(self.X))
|
|
# Copyright (C) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest
import datetime
import StringIO
from .bugzilla import Bugzilla, BugzillaQueries, EditUsersParser
from webkitpy.common.config import urls
from webkitpy.common.config.committers import Reviewer, Committer, Contributor, CommitterList
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.common.net.web_mock import MockBrowser
from webkitpy.thirdparty.mock import Mock
from webkitpy.thirdparty.BeautifulSoup import BeautifulSoup
class BugzillaTest(unittest.TestCase):
_example_attachment = '''
<attachment
isobsolete="1"
ispatch="1"
isprivate="0"
>
<attachid>33721</attachid>
<date>2009-07-29 10:23 PDT</date>
<desc>Fixed whitespace issue</desc>
<filename>patch</filename>
<type>text/plain</type>
<size>9719</size>
<attacher>christian.plesner.hansen@gmail.com</attacher>
<flag name="review"
id="17931"
status="+"
setter="one@test.com"
/>
<flag name="commit-queue"
id="17932"
status="+"
setter="two@test.com"
/>
</attachment>
'''
_expected_example_attachment_parsing = {
'attach_date': datetime.datetime(2009, 07, 29, 10, 23),
'bug_id' : 100,
'is_obsolete' : True,
'is_patch' : True,
'id' : 33721,
'url' : "https://bugs.webkit.org/attachment.cgi?id=33721",
'name' : "Fixed whitespace issue",
'type' : "text/plain",
'review' : '+',
'reviewer_email' : 'one@test.com',
'commit-queue' : '+',
'committer_email' : 'two@test.com',
'attacher_email' : 'christian.plesner.hansen@gmail.com',
}
def test_url_creation(self):
# FIXME: These would be all better as doctests
bugs = Bugzilla()
self.assertEqual(None, bugs.bug_url_for_bug_id(None))
self.assertEqual(None, bugs.short_bug_url_for_bug_id(None))
self.assertEqual(None, bugs.attachment_url_for_id(None))
def test_parse_bug_id(self):
# Test that we can parse the urls we produce.
bugs = Bugzilla()
self.assertEqual(12345, urls.parse_bug_id(bugs.short_bug_url_for_bug_id(12345)))
self.assertEqual(12345, urls.parse_bug_id(bugs.bug_url_for_bug_id(12345)))
self.assertEqual(12345, urls.parse_bug_id(bugs.bug_url_for_bug_id(12345, xml=True)))
_bug_xml = """
<bug>
<bug_id>32585</bug_id>
<creation_ts>2009-12-15 15:17 PST</creation_ts>
<short_desc>bug to test webkit-patch's and commit-queue's failures</short_desc>
<delta_ts>2009-12-27 21:04:50 PST</delta_ts>
<reporter_accessible>1</reporter_accessible>
<cclist_accessible>1</cclist_accessible>
<classification_id>1</classification_id>
<classification>Unclassified</classification>
<product>WebKit</product>
<component>Tools / Tests</component>
<version>528+ (Nightly build)</version>
<rep_platform>PC</rep_platform>
<op_sys>Mac OS X 10.5</op_sys>
<bug_status>NEW</bug_status>
<priority>P2</priority>
<bug_severity>Normal</bug_severity>
<target_milestone>---</target_milestone>
<everconfirmed>1</everconfirmed>
<reporter name="Eric Seidel">eric@webkit.org</reporter>
<assigned_to name="Nobody">webkit-unassigned@lists.webkit.org</assigned_to>
<cc>foo@bar.com</cc>
<cc>example@example.com</cc>
<long_desc isprivate="0">
<who name="Eric Seidel">eric@webkit.org</who>
<bug_when>2009-12-15 15:17:28 PST</bug_when>
<thetext>bug to test webkit-patch and commit-queue failures
Ignore this bug. Just for testing failure modes of webkit-patch and the commit-queue.</thetext>
</long_desc>
<attachment
isobsolete="0"
ispatch="1"
isprivate="0"
>
<attachid>45548</attachid>
<date>2009-12-27 23:51 PST</date>
<desc>Patch</desc>
<filename>bug-32585-20091228005112.patch</filename>
<type>text/plain</type>
<size>10882</size>
<attacher>mjs@apple.com</attacher>
<token>1261988248-dc51409e9c421a4358f365fa8bec8357</token>
<data encoding="base64">SW5kZXg6IFdlYktpdC9tYWMvQ2hhbmdlTG9nCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09
removed-because-it-was-really-long
ZEZpbmlzaExvYWRXaXRoUmVhc29uOnJlYXNvbl07Cit9CisKIEBlbmQKIAogI2VuZGlmCg==
</data>
<flag name="review"
id="27602"
status="?"
setter="mjs@apple.com"
/>
</attachment>
</bug>
"""
_single_bug_xml = """
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://bugs.webkit.org/bugzilla.dtd">
<bugzilla version="3.2.3"
urlbase="https://bugs.webkit.org/"
maintainer="admin@webkit.org"
exporter="eric@webkit.org"
>
%s
</bugzilla>
""" % _bug_xml
_expected_example_bug_parsing = {
"id" : 32585,
"title" : u"bug to test webkit-patch's and commit-queue's failures",
"cc_emails" : ["foo@bar.com", "example@example.com"],
"reporter_email" : "eric@webkit.org",
"assigned_to_email" : "webkit-unassigned@lists.webkit.org",
"bug_status": "NEW",
"attachments" : [{
"attach_date": datetime.datetime(2009, 12, 27, 23, 51),
'name': u'Patch',
'url' : "https://bugs.webkit.org/attachment.cgi?id=45548",
'is_obsolete': False,
'review': '?',
'is_patch': True,
'attacher_email': 'mjs@apple.com',
'bug_id': 32585,
'type': 'text/plain',
'id': 45548
}],
"comments" : [{
'comment_date': datetime.datetime(2009, 12, 15, 15, 17, 28),
'comment_email': 'eric@webkit.org',
'text': """bug to test webkit-patch and commit-queue failures
Ignore this bug. Just for testing failure modes of webkit-patch and the commit-queue.""",
}]
}
# FIXME: This should move to a central location and be shared by more unit tests.
def _assert_dictionaries_equal(self, actual, expected):
# Make sure we aren't parsing more or less than we expect
self.assertEqual(sorted(actual.keys()), sorted(expected.keys()))
for key, expected_value in expected.items():
self.assertEqual(actual[key], expected_value, ("Failure for key: %s: Actual='%s' Expected='%s'" % (key, actual[key], expected_value)))
def test_parse_bug_dictionary_from_xml(self):
bug = Bugzilla()._parse_bug_dictionary_from_xml(self._single_bug_xml)
self._assert_dictionaries_equal(bug, self._expected_example_bug_parsing)
_sample_multi_bug_xml = """
<bugzilla version="3.2.3" urlbase="https://bugs.webkit.org/" maintainer="admin@webkit.org" exporter="eric@webkit.org">
%s
%s
</bugzilla>
""" % (_bug_xml, _bug_xml)
def test_parse_bugs_from_xml(self):
bugzilla = Bugzilla()
bugs = bugzilla._parse_bugs_from_xml(self._sample_multi_bug_xml)
self.assertEqual(len(bugs), 2)
self.assertEqual(bugs[0].id(), self._expected_example_bug_parsing['id'])
bugs = bugzilla._parse_bugs_from_xml("")
self.assertEqual(len(bugs), 0)
# This could be combined into test_bug_parsing later if desired.
def test_attachment_parsing(self):
bugzilla = Bugzilla()
soup = BeautifulSoup(self._example_attachment)
attachment_element = soup.find("attachment")
attachment = bugzilla._parse_attachment_element(attachment_element, self._expected_example_attachment_parsing['bug_id'])
self.assertTrue(attachment)
self._assert_dictionaries_equal(attachment, self._expected_example_attachment_parsing)
_sample_attachment_detail_page = """
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>
Attachment 41073 Details for Bug 27314</title>
<link rel="Top" href="https://bugs.webkit.org/">
<link rel="Up" href="show_bug.cgi?id=27314">
"""
def test_attachment_detail_bug_parsing(self):
bugzilla = Bugzilla()
self.assertEqual(27314, bugzilla._parse_bug_id_from_attachment_page(self._sample_attachment_detail_page))
def test_add_cc_to_bug(self):
bugzilla = Bugzilla()
bugzilla.browser = MockBrowser()
bugzilla.authenticate = lambda: None
expected_stderr = "Adding ['adam@example.com'] to the CC list for bug 42\n"
OutputCapture().assert_outputs(self, bugzilla.add_cc_to_bug, [42, ["adam@example.com"]], expected_stderr=expected_stderr)
def _mock_control_item(self, name):
mock_item = Mock()
mock_item.name = name
return mock_item
def _mock_find_control(self, item_names=[], selected_index=0):
mock_control = Mock()
mock_control.items = [self._mock_control_item(name) for name in item_names]
mock_control.value = [item_names[selected_index]] if item_names else None
return lambda name, type: mock_control
def _assert_reopen(self, item_names=None, selected_index=None, extra_stderr=None):
bugzilla = Bugzilla()
bugzilla.browser = MockBrowser()
bugzilla.authenticate = lambda: None
mock_find_control = self._mock_find_control(item_names, selected_index)
bugzilla.browser.find_control = mock_find_control
expected_stderr = "Re-opening bug 42\n['comment']\n"
if extra_stderr:
expected_stderr += extra_stderr
OutputCapture().assert_outputs(self, bugzilla.reopen_bug, [42, ["comment"]], expected_stderr=expected_stderr)
def test_reopen_bug(self):
self._assert_reopen(item_names=["REOPENED", "RESOLVED", "CLOSED"], selected_index=1)
self._assert_reopen(item_names=["UNCONFIRMED", "RESOLVED", "CLOSED"], selected_index=1)
extra_stderr = "Did not reopen bug 42, it appears to already be open with status ['NEW'].\n"
self._assert_reopen(item_names=["NEW", "RESOLVED"], selected_index=0, extra_stderr=extra_stderr)
def test_file_object_for_upload(self):
bugzilla = Bugzilla()
file_object = StringIO.StringIO()
unicode_tor = u"WebKit \u2661 Tor Arne Vestb\u00F8!"
utf8_tor = unicode_tor.encode("utf-8")
self.assertEqual(bugzilla._file_object_for_upload(file_object), file_object)
self.assertEqual(bugzilla._file_object_for_upload(utf8_tor).read(), utf8_tor)
self.assertEqual(bugzilla._file_object_for_upload(unicode_tor).read(), utf8_tor)
def test_filename_for_upload(self):
bugzilla = Bugzilla()
mock_file = Mock()
mock_file.name = "foo"
self.assertEqual(bugzilla._filename_for_upload(mock_file, 1234), 'foo')
mock_timestamp = lambda: "now"
filename = bugzilla._filename_for_upload(StringIO.StringIO(), 1234, extension="patch", timestamp=mock_timestamp)
self.assertEqual(filename, "bug-1234-now.patch")
def test_commit_queue_flag(self):
bugzilla = Bugzilla()
bugzilla.committers = CommitterList(reviewers=[Reviewer("WebKit Reviewer", "reviewer@webkit.org")],
committers=[Committer("WebKit Committer", "committer@webkit.org")],
contributors=[Contributor("WebKit Contributor", "contributor@webkit.org")],
watchers=[])
def assert_commit_queue_flag(mark_for_landing, mark_for_commit_queue, expected, username=None):
bugzilla.username = username
capture = OutputCapture()
capture.capture_output()
try:
self.assertEqual(bugzilla._commit_queue_flag(mark_for_landing=mark_for_landing, mark_for_commit_queue=mark_for_commit_queue), expected)
finally:
capture.restore_output()
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=False, expected='X', username='unknown@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=True, expected='?', username='unknown@webkit.org')
        assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=False, expected='?', username='unknown@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=True, expected='?', username='unknown@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=False, expected='X', username='contributor@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=True, expected='?', username='contributor@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=False, expected='?', username='contributor@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=True, expected='?', username='contributor@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=False, expected='X', username='committer@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=True, expected='?', username='committer@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=False, expected='+', username='committer@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=True, expected='+', username='committer@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=False, expected='X', username='reviewer@webkit.org')
assert_commit_queue_flag(mark_for_landing=False, mark_for_commit_queue=True, expected='?', username='reviewer@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=False, expected='+', username='reviewer@webkit.org')
assert_commit_queue_flag(mark_for_landing=True, mark_for_commit_queue=True, expected='+', username='reviewer@webkit.org')
class BugzillaQueriesTest(unittest.TestCase):
_sample_request_page = """
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Request Queue</title>
</head>
<body>
<h3>Flag: review</h3>
<table class="requests" cellspacing="0" cellpadding="4" border="1">
<tr>
<th>Requester</th>
<th>Requestee</th>
<th>Bug</th>
<th>Attachment</th>
<th>Created</th>
</tr>
<tr>
<td>Shinichiro Hamaji <hamaji@chromium.org></td>
<td></td>
<td><a href="show_bug.cgi?id=30015">30015: text-transform:capitalize is failing in CSS2.1 test suite</a></td>
<td><a href="attachment.cgi?id=40511&action=review">
40511: Patch v0</a></td>
<td>2009-10-02 04:58 PST</td>
</tr>
<tr>
<td>Zan Dobersek <zandobersek@gmail.com></td>
<td></td>
<td><a href="show_bug.cgi?id=26304">26304: [GTK] Add controls for playing html5 video.</a></td>
<td><a href="attachment.cgi?id=40722&action=review">
40722: Media controls, the simple approach</a></td>
<td>2009-10-06 09:13 PST</td>
</tr>
<tr>
<td>Zan Dobersek <zandobersek@gmail.com></td>
<td></td>
<td><a href="show_bug.cgi?id=26304">26304: [GTK] Add controls for playing html5 video.</a></td>
<td><a href="attachment.cgi?id=40723&action=review">
40723: Adjust the media slider thumb size</a></td>
<td>2009-10-06 09:15 PST</td>
</tr>
</table>
</body>
</html>
"""
_sample_quip_page = u"""
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Bugzilla Quip System</title>
</head>
<body>
<h2>
Existing quips:
</h2>
<ul>
<li>Everything should be made as simple as possible, but not simpler. - Albert Einstein</li>
<li>Good artists copy. Great artists steal. - Pablo Picasso</li>
<li>\u00e7gua mole em pedra dura, tanto bate at\u008e que fura.</li>
</ul>
</body>
</html>
"""
def _assert_result_count(self, queries, html, count):
self.assertEqual(queries._parse_result_count(html), count)
def test_parse_result_count(self):
queries = BugzillaQueries(None)
        # Pages with results always list the count at least twice.
self._assert_result_count(queries, '<span class="bz_result_count">314 bugs found.</span><span class="bz_result_count">314 bugs found.</span>', 314)
self._assert_result_count(queries, '<span class="bz_result_count">Zarro Boogs found.</span>', 0)
self._assert_result_count(queries, '<span class="bz_result_count">\n \nOne bug found.</span>', 1)
self.assertRaises(Exception, queries._parse_result_count, ['Invalid'])
def test_request_page_parsing(self):
queries = BugzillaQueries(None)
self.assertEqual([40511, 40722, 40723], queries._parse_attachment_ids_request_query(self._sample_request_page))
def test_quip_page_parsing(self):
queries = BugzillaQueries(None)
expected_quips = ["Everything should be made as simple as possible, but not simpler. - Albert Einstein", "Good artists copy. Great artists steal. - Pablo Picasso", u"\u00e7gua mole em pedra dura, tanto bate at\u008e que fura."]
self.assertEqual(expected_quips, queries._parse_quips(self._sample_quip_page))
def test_load_query(self):
queries = BugzillaQueries(Mock())
queries._load_query("request.cgi?action=queue&type=review&group=type")
class EditUsersParserTest(unittest.TestCase):
_example_user_results = """
<div id="bugzilla-body">
<p>1 user found.</p>
<table id="admin_table" border="1" cellpadding="4" cellspacing="0">
<tr bgcolor="#6666FF">
<th align="left">Edit user...
</th>
<th align="left">Real name
</th>
<th align="left">Account History
</th>
</tr>
<tr>
<td >
<a href="editusers.cgi?action=edit&userid=1234&matchvalue=login_name&groupid=&grouprestrict=&matchtype=substr&matchstr=abarth%40webkit.org">
abarth@webkit.org
</a>
</td>
<td >
Adam Barth
</td>
<td >
<a href="editusers.cgi?action=activity&userid=1234&matchvalue=login_name&groupid=&grouprestrict=&matchtype=substr&matchstr=abarth%40webkit.org">
View
</a>
</td>
</tr>
</table>
"""
_example_empty_user_results = """
<div id="bugzilla-body">
<p>0 users found.</p>
<table id="admin_table" border="1" cellpadding="4" cellspacing="0">
<tr bgcolor="#6666FF">
<th align="left">Edit user...
</th>
<th align="left">Real name
</th>
<th align="left">Account History
</th>
</tr>
<tr><td colspan="3" align="center"><i><none></i></td></tr>
</table>
"""
def _assert_login_userid_pairs(self, results_page, expected_logins):
parser = EditUsersParser()
logins = parser.login_userid_pairs_from_edit_user_results(results_page)
self.assertEqual(logins, expected_logins)
def test_logins_from_editusers_results(self):
self._assert_login_userid_pairs(self._example_user_results, [("abarth@webkit.org", 1234)])
self._assert_login_userid_pairs(self._example_empty_user_results, [])
_example_user_page = """<table class="main"><tr>
<th><label for="login">Login name:</label></th>
<td>eric@webkit.org
</td>
</tr>
<tr>
<th><label for="name">Real name:</label></th>
<td>Eric Seidel
</td>
</tr>
<tr>
<th>Group access:</th>
<td>
<table class="groups">
<tr>
</tr>
<tr>
<th colspan="2">User is a member of these groups</th>
</tr>
<tr class="direct">
<td class="checkbox"><input type="checkbox"
id="group_7"
name="group_7"
value="1" checked="checked" /></td>
<td class="groupname">
<label for="group_7">
<strong>canconfirm:</strong>
Can confirm a bug.
</label>
</td>
</tr>
<tr class="direct">
<td class="checkbox"><input type="checkbox"
id="group_6"
name="group_6"
value="1" /></td>
<td class="groupname">
<label for="group_6">
<strong>editbugs:</strong>
Can edit all aspects of any bug.
                </label>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<th>Product responsibilities:</th>
<td>
<em>none</em>
</td>
</tr>
</table>"""
def test_user_dict_from_edit_user_page(self):
parser = EditUsersParser()
user_dict = parser.user_dict_from_edit_user_page(self._example_user_page)
expected_user_dict = {u'login': u'eric@webkit.org', u'groups': set(['canconfirm']), u'name': u'Eric Seidel'}
self.assertEqual(expected_user_dict, user_dict)
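# Hedged usage sketch, using only the parser calls exercised by the tests
# above: given the HTML of a Bugzilla "edit users" results page and a user
# detail page (both fetched elsewhere), extract the (login, userid) pairs
# and the per-user dictionary.
def _demo_parse_users(results_html, user_page_html):
    parser = EditUsersParser()
    pairs = parser.login_userid_pairs_from_edit_user_results(results_html)
    user_dict = parser.user_dict_from_edit_user_page(user_page_html)
    return pairs, user_dict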
|
|
import numpy as np
from numba import cuda, int32, int64, float32, float64
from numba.cuda.testing import unittest, CUDATestCase, skip_on_cudasim
from numba.core import config
def useful_syncwarp(ary):
i = cuda.grid(1)
if i == 0:
ary[0] = 42
cuda.syncwarp(0xffffffff)
ary[i] = ary[0]
def use_shfl_sync_idx(ary, idx):
i = cuda.grid(1)
val = cuda.shfl_sync(0xffffffff, i, idx)
ary[i] = val
def use_shfl_sync_up(ary, delta):
i = cuda.grid(1)
val = cuda.shfl_up_sync(0xffffffff, i, delta)
ary[i] = val
def use_shfl_sync_down(ary, delta):
i = cuda.grid(1)
val = cuda.shfl_down_sync(0xffffffff, i, delta)
ary[i] = val
def use_shfl_sync_xor(ary, xor):
i = cuda.grid(1)
val = cuda.shfl_xor_sync(0xffffffff, i, xor)
ary[i] = val
def use_shfl_sync_with_val(ary, into):
i = cuda.grid(1)
val = cuda.shfl_sync(0xffffffff, into, 0)
ary[i] = val
def use_vote_sync_all(ary_in, ary_out):
i = cuda.grid(1)
pred = cuda.all_sync(0xffffffff, ary_in[i])
ary_out[i] = pred
def use_vote_sync_any(ary_in, ary_out):
i = cuda.grid(1)
pred = cuda.any_sync(0xffffffff, ary_in[i])
ary_out[i] = pred
def use_vote_sync_eq(ary_in, ary_out):
i = cuda.grid(1)
pred = cuda.eq_sync(0xffffffff, ary_in[i])
ary_out[i] = pred
def use_vote_sync_ballot(ary):
i = cuda.threadIdx.x
ballot = cuda.ballot_sync(0xffffffff, True)
ary[i] = ballot
def use_match_any_sync(ary_in, ary_out):
i = cuda.grid(1)
ballot = cuda.match_any_sync(0xffffffff, ary_in[i])
ary_out[i] = ballot
def use_match_all_sync(ary_in, ary_out):
i = cuda.grid(1)
ballot, pred = cuda.match_all_sync(0xffffffff, ary_in[i])
ary_out[i] = ballot if pred else 0
def use_independent_scheduling(arr):
i = cuda.threadIdx.x
if i % 4 == 0:
ballot = cuda.ballot_sync(0x11111111, True)
elif i % 4 == 1:
ballot = cuda.ballot_sync(0x22222222, True)
elif i % 4 == 2:
ballot = cuda.ballot_sync(0x44444444, True)
elif i % 4 == 3:
ballot = cuda.ballot_sync(0x88888888, True)
arr[i] = ballot
def _safe_cc_check(cc):
if config.ENABLE_CUDASIM:
return True
else:
return cuda.get_current_device().compute_capability >= cc
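# Standalone launch sketch (an illustrative addition; assumes a CUDA-capable
# device, since the simulator does not implement warp operations per the
# skip marker below): compile one of the kernels above and launch a single
# 32-thread block, matching the full-warp mask 0xffffffff they use.
def _demo_broadcast_lane_zero():
    ary = np.zeros(32, dtype=np.int32)
    kernel = cuda.jit("void(int32[:], int32)")(use_shfl_sync_idx)
    kernel[1, 32](ary, 0)  # every lane reads the value held by lane 0
    return ary  # all zeros, since lane 0's grid index is 0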
@skip_on_cudasim("Warp Operations are not yet implemented on cudasim")
class TestCudaWarpOperations(CUDATestCase):
def test_useful_syncwarp(self):
compiled = cuda.jit("void(int32[:])")(useful_syncwarp)
nelem = 32
ary = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary)
self.assertTrue(np.all(ary == 42))
def test_shfl_sync_idx(self):
compiled = cuda.jit("void(int32[:], int32)")(use_shfl_sync_idx)
nelem = 32
idx = 4
ary = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary, idx)
self.assertTrue(np.all(ary == idx))
def test_shfl_sync_up(self):
compiled = cuda.jit("void(int32[:], int32)")(use_shfl_sync_up)
nelem = 32
delta = 4
ary = np.empty(nelem, dtype=np.int32)
exp = np.arange(nelem, dtype=np.int32)
exp[delta:] -= delta
compiled[1, nelem](ary, delta)
self.assertTrue(np.all(ary == exp))
def test_shfl_sync_down(self):
compiled = cuda.jit("void(int32[:], int32)")(use_shfl_sync_down)
nelem = 32
delta = 4
ary = np.empty(nelem, dtype=np.int32)
exp = np.arange(nelem, dtype=np.int32)
exp[:-delta] += delta
compiled[1, nelem](ary, delta)
self.assertTrue(np.all(ary == exp))
def test_shfl_sync_xor(self):
compiled = cuda.jit("void(int32[:], int32)")(use_shfl_sync_xor)
nelem = 32
xor = 16
ary = np.empty(nelem, dtype=np.int32)
exp = np.arange(nelem, dtype=np.int32) ^ xor
compiled[1, nelem](ary, xor)
self.assertTrue(np.all(ary == exp))
def test_shfl_sync_types(self):
types = int32, int64, float32, float64
values = (np.int32(-1), np.int64(1 << 42),
np.float32(np.pi), np.float64(np.pi))
for typ, val in zip(types, values):
compiled = cuda.jit((typ[:], typ))(use_shfl_sync_with_val)
nelem = 32
ary = np.empty(nelem, dtype=val.dtype)
compiled[1, nelem](ary, val)
self.assertTrue(np.all(ary == val))
def test_vote_sync_all(self):
compiled = cuda.jit("void(int32[:], int32[:])")(use_vote_sync_all)
nelem = 32
ary_in = np.ones(nelem, dtype=np.int32)
ary_out = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 1))
ary_in[-1] = 0
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 0))
def test_vote_sync_any(self):
compiled = cuda.jit("void(int32[:], int32[:])")(use_vote_sync_any)
nelem = 32
ary_in = np.zeros(nelem, dtype=np.int32)
ary_out = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 0))
ary_in[2] = 1
ary_in[5] = 1
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 1))
def test_vote_sync_eq(self):
compiled = cuda.jit("void(int32[:], int32[:])")(use_vote_sync_eq)
nelem = 32
ary_in = np.zeros(nelem, dtype=np.int32)
ary_out = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 1))
ary_in[1] = 1
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 0))
ary_in[:] = 1
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 1))
def test_vote_sync_ballot(self):
compiled = cuda.jit("void(uint32[:])")(use_vote_sync_ballot)
nelem = 32
ary = np.empty(nelem, dtype=np.uint32)
compiled[1, nelem](ary)
self.assertTrue(np.all(ary == np.uint32(0xffffffff)))
@unittest.skipUnless(_safe_cc_check((7, 0)),
"Matching requires at least Volta Architecture")
def test_match_any_sync(self):
compiled = cuda.jit("void(int32[:], int32[:])")(use_match_any_sync)
nelem = 10
ary_in = np.arange(nelem, dtype=np.int32) % 2
ary_out = np.empty(nelem, dtype=np.int32)
exp = np.tile((0b0101010101, 0b1010101010), 5)
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == exp))
@unittest.skipUnless(_safe_cc_check((7, 0)),
"Matching requires at least Volta Architecture")
def test_match_all_sync(self):
compiled = cuda.jit("void(int32[:], int32[:])")(use_match_all_sync)
nelem = 10
ary_in = np.zeros(nelem, dtype=np.int32)
ary_out = np.empty(nelem, dtype=np.int32)
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 0b1111111111))
ary_in[1] = 4
compiled[1, nelem](ary_in, ary_out)
self.assertTrue(np.all(ary_out == 0))
@unittest.skipUnless(_safe_cc_check((7, 0)),
"Independent scheduling requires at least Volta "
"Architecture")
def test_independent_scheduling(self):
compiled = cuda.jit("void(uint32[:])")(use_independent_scheduling)
arr = np.empty(32, dtype=np.uint32)
exp = np.tile((0x11111111, 0x22222222, 0x44444444, 0x88888888), 8)
compiled[1, 32](arr)
self.assertTrue(np.all(arr == exp))
def test_activemask(self):
@cuda.jit
def use_activemask(x):
i = cuda.grid(1)
if (i % 2) == 0:
# Even numbered threads fill in even numbered array entries
# with binary "...01010101"
x[i] = cuda.activemask()
else:
# Odd numbered threads fill in odd numbered array entries
# with binary "...10101010"
x[i] = cuda.activemask()
out = np.zeros(32, dtype=np.uint32)
use_activemask[1, 32](out)
# 0x5 = 0101: The pattern from even-numbered threads
# 0xA = 1010: The pattern from odd-numbered threads
expected = np.tile((0x55555555, 0xAAAAAAAA), 16)
np.testing.assert_equal(expected, out)
def test_lanemask_lt(self):
@cuda.jit
def use_lanemask_lt(x):
i = cuda.grid(1)
x[i] = cuda.lanemask_lt()
out = np.zeros(32, dtype=np.uint32)
use_lanemask_lt[1, 32](out)
# A string of 1s that grows from the LSB for each entry:
# 0, 1, 3, 7, F, 1F, 3F, 7F, FF, 1FF, etc.
# or in binary:
# ...0001, ....0011, ...0111, etc.
expected = np.asarray([(2 ** i) - 1 for i in range(32)],
dtype=np.uint32)
np.testing.assert_equal(expected, out)
if __name__ == '__main__':
unittest.main()
|
|
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tensorflow.ops.Einsum."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.client import session
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_linalg_ops
from tensorflow.python.ops import special_math_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import benchmark
from tensorflow.python.platform import test
class EinsumOpTest(test.TestCase):
def _check(self, s, *input_shapes, **kwargs):
dtype = kwargs.pop('dtype', np.float32)
r = np.random.RandomState(0)
inputs = []
for shape in input_shapes:
arr = np.array(r.randn(*shape)).astype(dtype)
if dtype == np.complex64 or dtype == np.complex128:
arr += 1j * np.array(r.randn(*shape)).astype(dtype)
inputs.append(arr)
input_tensors = [constant_op.constant(x, shape=x.shape) for x in inputs]
a = np.einsum(s, *inputs)
b = self.evaluate(gen_linalg_ops.einsum(input_tensors, s))
self.assertAllClose(a, b, atol=1e-4, rtol=1e-4)
def testUnary(self):
self._check('->', ())
self._check('aa->', (3, 3))
self._check('aa->a', (3, 3))
self._check('aaa->', (3, 3, 3))
self._check('aaa->a', (3, 3, 3))
self._check('aab->a', (3, 3, 4))
self._check('ab->', (3, 3))
self._check('ab->ab', (3, 3))
self._check('abc->b', (3, 4, 5))
self._check('abc->ca', (3, 4, 5))
self._check('abc->cab', (3, 4, 5))
self._check('aabcc->a', (3, 3, 5, 4, 4))
self._check('aabcc->ac', (3, 3, 5, 4, 4))
self._check('aabcd->ad', (3, 3, 5, 4, 4))
def testUnaryEllipsis(self):
self._check('...->...', ())
self._check('...->', ())
self._check('->...', ())
# Tests from dask
self._check('a...a->a...', (2, 2))
self._check('a...a->', (2, 2))
self._check('a...a->...', (2, 5, 1, 2))
self._check('a...a->a...', (2, 1, 2))
self._check('a...a->a...', (2, 3, 4, 5, 2))
self._check('...ijk->...ki', (3, 4, 5))
self._check('...ijk->...ki', (1, 3, 4, 5))
self._check('...ijk->...ki', (2, 2, 3, 4, 5))
# Repeated indices.
self._check('i...ii->...i', (3, 2, 3, 3))
def testBinary(self):
self._check(',->', (), ())
self._check('a,a->', (3,), (3,))
self._check('a,a->a', (3,), (3,))
self._check('ba,b->', (3, 2), (3,))
self._check('ab,b->a', (3, 4), (4,))
self._check('ab,ab->', (3, 4), (3, 4))
self._check('nij,jk->nik', (5, 2, 3), (3, 4))
self._check('abc,bad->abcd', (1, 2, 3), (2, 1, 4))
# Repeated indices.
self._check('ijj,k->ik', (2, 3, 3), (4,))
self._check('aba,a->b', (3, 4, 3), (3,))
# From https://github.com/dask/dask/pull/3412#discussion_r182413444
self._check('aab,bc->ac', (2, 2, 3), (3, 4))
self._check('aab,bcc->ac', (2, 2, 3), (3, 4, 4))
# Based on https://github.com/google/jax/issues/37#issuecomment-448572187
self._check('sa,shb->shab', (2, 1), (2, 3, 4))
def testBroadcasting(self):
# Batch matmul without broadcasting.
self._check('...ij,...jk->...ik', (5, 1, 2, 3), (5, 1, 3, 4))
# Batch matmul with broadcasting.
self._check('...ij,...jk->...ik', (1, 2, 3), (3, 5))
self._check('...ij,...jk->...ik', (2, 3), (1, 3, 5))
self._check('...ij,...jk->...ik', (5, 2, 3), (3, 5))
self._check('...ij,...jk->...ik', (2, 3), (5, 3, 5))
self._check('...ij,...jk->...ik', (3, 1, 2, 3), (1, 1, 7, 3, 5))
self._check('i...j,j...k->...ik', (2, 1, 3, 1, 3), (3, 1, 7, 5))
# Broadcasting with repeated indices.
self._check('ij,jk...k->i...', (3, 2), (2, 4, 1, 4))
self._check('ij,jk...k->...i', (3, 2), (2, 4, 5, 4))
self._check('ijj,jk...k->i...', (3, 2, 2), (2, 4, 1, 4))
self._check('i...jj,jk...k->i...', (3, 3, 1, 2, 2), (2, 4, 1, 5, 4))
# Following 2 from # https://stackoverflow.com/a/19203475/1611416
self._check('...abc,...abcd->...d', (1, 1, 2, 3, 4), (5, 2, 3, 4, 6))
self._check('ab...,b->ab...', (2, 3, 1, 1, 5), (3,))
def testDtypes(self):
for dtype in [np.float64, np.float32, np.complex64, np.complex128]:
self._check('ij,jk->ik', (2, 2), (2, 2), dtype=dtype)
self._check('ji,jk->ik', (2, 2), (2, 2), dtype=dtype)
self._check('ji,kj->ik', (2, 2), (2, 2), dtype=dtype)
self._check('ij,jk->ki', (2, 2), (2, 2), dtype=dtype)
self._check('ji,kj->ki', (2, 2), (2, 2), dtype=dtype)
@test_util.run_in_graph_and_eager_modes
def testInvalid(self):
r = np.random.RandomState(0)
cases = [
# incorrect rank.
('ij,jk->ik', r.randn(1, 2, 3), r.randn(3, 4)),
('...ij,jk->ik', r.randn(3), r.randn(3, 4)),
# inconsistent dimensions.
('ij,jk->ik', r.randn(2, 3), r.randn(4, 4)),
# broadcasting is invalid
('...ij,...jk->...ik', r.randn(5, 2, 3), r.randn(7, 3, 4)),
# output should have ellipsis when broadcasting shape is
# non-empty.
('...ij,...jk->ik', r.randn(2, 2, 3), r.randn(3, 4)),
]
for args in cases:
with self.assertRaises((ValueError, errors.InvalidArgumentError)):
_ = self.evaluate(gen_linalg_ops.einsum(args[1:], args[0]))
placeholders = [
array_ops.placeholder_with_default(x, shape=None) for x in args[1:]
]
with self.assertRaises((ValueError, errors.InvalidArgumentError)):
_ = self.evaluate(gen_linalg_ops.einsum(placeholders, args[0]))
@test_util.run_in_graph_and_eager_modes
def testPlaceholder(self):
def check(equation, *input_and_placeholder_shapes):
r = np.random.RandomState(0)
inputs = []
input_placeholders = []
for actual_shape, placeholder_shape in input_and_placeholder_shapes:
input_np = np.array(r.randn(*actual_shape))
inputs.append(input_np)
input_placeholders.append(
array_ops.placeholder_with_default(input_np, placeholder_shape))
a = np.einsum(equation, *inputs)
b = self.evaluate(gen_linalg_ops.einsum(input_placeholders, equation))
self.assertAllClose(a, b, atol=1e-4, rtol=1e-4)
check('bijl,bjkm->bik', ((9, 2, 3, 5), (None, None, None, 5)),
((9, 3, 4, 7), (None, None, 4, None)))
check('bijl,bjkm->bik', ((9, 2, 3, 5), None), ((9, 3, 4, 7), None))
check('...ij,...->...i', ((4, 3, 1, 2), (None, 3, None, 2)),
((4, 3), (None, 3)))
check('...ij,...jk->...ik', ((3, 1, 2, 3), None), ((1, 7, 3, 4), None))
def testOutputRepeatedLabels(self):
# This is the reverse operation of repeated input labels, to be used for
# computing symbolic gradients of einsum.
r = np.random.RandomState(0)
a = r.randn(2, 2)
s = 'a->aa'
diag_a = np.diag(np.diag(a))
b = self.evaluate(gen_linalg_ops.einsum([np.diag(a)], s))
self.assertAllClose(diag_a, b, atol=1e-4, rtol=1e-4)
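# Hedged reference sketch (an editorial addition, not part of the original
# suite): unlike np.einsum, gen_linalg_ops.einsum takes the operand list
# first and the equation string second. The shapes are illustrative.
def _demo_einsum_matmul():
  x = constant_op.constant(np.ones((2, 3), np.float32))
  y = constant_op.constant(np.ones((3, 4), np.float32))
  return gen_linalg_ops.einsum([x, y], 'ij,jk->ik')  # shape (2, 4)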
class EinsumBenchmark(test.Benchmark):
cases = [
# Unary cases.
['ijk->i', 100],
['ijk->kji', 100],
# Regular matmul or batch matmul.
['ij,jk->ik', 1000],
['ji,kj->ik', 1000],
['ab,ab->', 100],
['ab,ba->', 100],
['abc,abc->', 100],
['abc,bac->', 100],
['abc,cba->', 100],
['bij,bjk->bik', 100],
['bji,bjk->bki', 100],
['ikl,kji->kl', 100],
['klj,lki->ij', 100],
['ijk,ilj->kli', 100],
['kij,mkb->ijmb', 100],
['abcd,ad->bc', 40],
# Larger binary contractions.
['ijk,jklm->il', 40],
['efabc,eabcd->efd', 30],
['fabec,abcde->fde', 30],
['efabc,edabc->efd', 30],
['eadbf,dfebc->ecfad', 30],
['abcdef,bcdfg->abcdeg', 30],
]
def benchmarkEinsum(self):
for equation, dim in self.cases:
with ops.Graph().as_default(), \
session.Session(config=benchmark.benchmark_config()) as sess, \
ops.device('/cpu:0'):
r = np.random.RandomState(0)
input_subscripts = equation.split('->')[0].split(',')
input_vars = []
for subscript in input_subscripts:
input_shape = (dim,) * len(subscript)
input_vars.append(
variables.Variable(np.array(r.randn(*input_shape), np.float32)))
variables.global_variables_initializer().run()
# Call einsum_v1.
self.run_op_benchmark(
sess,
special_math_ops.einsum(equation, *input_vars),
min_iters=50,
name='einsum_v1_cpu_({})_{}'.format(equation, dim))
# Call gen_linalg_ops.einsum.
self.run_op_benchmark(
sess,
gen_linalg_ops.einsum(input_vars, equation),
min_iters=50,
name='einsum_v2_cpu_({})_{}'.format(equation, dim))
if __name__ == '__main__':
test.main()
|
|
# Tests for the targz module.
import gzip
import logging
import os
import pytest
import random
import sys
import tarfile
import zlib
# Code under test:
import targzstream
V2 = sys.version_info.major == 2
logging.basicConfig(level=logging.DEBUG)
def test_disallow_compression(tmpdir):
"""The resulting tarfile cannot be streamed nor compressed, unless it's reading."""
base = tmpdir.mkdir("1")
for mode in ('w:bz2', 'w:gz', 'w|bz2', 'w|gz', 'x:bz2', 'x:gz'):
path = base.join("outfile.tar." + mode[2:])
with pytest.raises(ValueError) as raised:
targzstream.open(path, mode=mode)
with pytest.raises(ValueError) as raised:
targzstream.TarFile.open(path, mode=mode)
logging.info("OPEN_METH = %s", targzstream.TarFile.OPEN_METH)
    # Compressed reading is allowed, but this file does not exist, so we
    # expect an IOError/OSError rather than a ValueError.
localpath = base.join("outfile-nosuch.tar")
path = str(localpath)
for mode in ('r', 'r:*', 'r:', 'r|', 'r:bz2', 'r:gz', 'r|*', 'r|bz2', 'r|gz'):
with pytest.raises((OSError, IOError)) as raised:
targzstream.open(path, mode=mode)
assert raised.value.errno == 2
with pytest.raises((OSError, IOError)) as raised:
targzstream.TarFile.open(path, mode=mode)
assert raised.value.errno == 2
with pytest.raises(ValueError) as raised:
targzstream.TarFile(path, mode=mode)
mesg = str(raised.value)
assert mesg.startswith('Mode')
assert 'is not allowed' in mesg
def test_readback(tmpdir):
base = tmpdir.mkdir("2")
tfile = base.join("test2.tar")
rand = random.Random()
tarball = None
files = {}
actions = [('medium', 9999, 1492000015),
('emptyfile', 0, 1492000030),
('smaller', 380, 1492000045),
('biggy', 49999, 1492000060),
('little', 1055, 1492010015),
('emptiness', 0, 1492010030),
('large', 10240, 1492010060),
('one-byter', 1, 1492010045)]
# Open first, re-open in append mode after 3 files:
actions.insert(0, "w")
actions.insert(4, "a")
actions.append(None)
logging.info("---- ACTIONS ----")
for num, action in enumerate(actions):
logging.info("Step %02d: %s", num, action)
for action in actions:
if isinstance(action, str):
if tarball:
tarball.close()
tarball = targzstream.open(str(tfile), mode=action)
os.system("ls -l '%s'" % tfile)
logging.info("Opened(%s, mode='%s') => %s",
tfile, action, tarball.fileobj.tell())
continue
if action is None:
name = 'targzstream.py'
stat = os.stat(name)
mtime = int(stat.st_mtime)
uid = stat.st_uid
gid = stat.st_gid
size = stat.st_size
data = open(name).read()
assert data[-1] == '\n', "%s does not end in a newline!" % name
files[name] = [size, mtime, uid, gid, data.encode()]
try:
target = tarball.add_gz_file(name, mtime=mtime, uid=uid, gid=gid)
for line in data[:-1].split('\n'):
target.write(line + '\n')
finally:
tarball.close_gz_file()
continue
name, size, mtime = action
uid = mtime % 127
gid = mtime % 213
data = []
files[name] = [size, mtime, uid, gid]
try:
# Write in pieces, maybe.
logging.info("@%05x Target(%s) ...", tarball.fileobj.tell(), name)
target = tarball.add_gz_file(name, mtime=mtime, uid=uid, gid=gid)
logging.info(" ==> [%s]: %s", target.__class__.__name__, target)
num = 0
while num < size:
psize = rand.randint(2, 13) ** 3 + rand.randint(5, 48)
                psize = min(size - num, psize * psize)
chunk = os.urandom(psize)
target.write(chunk)
logging.info(" +%5d bytes (%d)", len(chunk), psize)
data.append(chunk)
num += psize
if num > 300:
target.flush()
except Exception as exc:
logging.exception("Failed: %s" % exc)
raise
finally:
if rand.randint(1, 100) < 85:
logging.info("NOT Closing gz stream(%s)...", name)
else:
logging.info("Closing gz stream(%s)...", name)
tarball.close_gz_file()
files[name].append(b''.join(data))
logging.info("++" * 80)
os.system("ls -l '%s' >&2" % tfile)
os.system("tar -Rtvvvf '%s' >&2" % tfile)
logging.info("--" * 80)
tarball.close()
def verify(tarball):
results = []
for info in tarball:
name = info.name
mtime = info.mtime
uid = info.uid
gid = info.gid
logging.info("Found '%s' @%04x mt=%s id=%s/%s",
name, tarball.fileobj.tell(), mtime, uid, gid)
# Read the compressed data.
fobj = tarball.extractfile(info)
data = gzip.GzipFile(fileobj=fobj).read()
            # Look up the expected size, metadata and contents:
expect = files[name]
logging.info("Verifying %d bytes of '%s'", len(data), name)
assert expect[:-1] == [len(data), mtime, uid, gid]
assert expect[-1] == data
results.append(name)
print("Expect: %s" % sorted(files))
print("Result: %s" % sorted(results))
assert set(files) == set(results)
# Read it all back...
verify(tarfile.open(str(tfile), 'r'))
verify(tarfile.TarFile.open(str(tfile), 'r'))
def test_closer(tmpdir):
"""Test that closing the GzipStream is really a call to obj.close_gz_file()"""
base = tmpdir.mkdir("3")
tfile = base.join("test2.tar")
lines = []
setup = {True: ('fizz', 1234567890, 4176 if V2 else 4182,
'This is \xe2\x89\xaa NOT COMPRESSED \xe2\x89\xab'),
False: ('bizz', 2345678901, 22027 if V2 else 22033,
'This is \xe2\x89\xaa COMPRESSED \xe2\x89\xab')}
with targzstream.TarFile(str(tfile), mode='w') as tarball:
assert tarball._TarFile__stream is None
name, mtime, size, header = setup[True]
with tarball.add_gz_file(name, mtime=mtime) as stream:
assert isinstance(tarball._TarFile__stream, targzstream.GzipStream)
stream.write(header)
stream.write('\n')
for i in range(2000):
line = "Line %05d" % i
lines.append(line)
stream.write(line + '\n')
assert tarball._TarFile__stream is None
assert size == stream.size
name, mtime, size, header = setup[False]
with tarball.add_file(name, mtime=mtime) as stream:
assert isinstance(tarball._TarFile__stream, targzstream.GzipStream)
stream.write(header + '\t')
for line in lines:
stream.write(line)
stream.write('\t')
assert tarball._TarFile__stream is None
assert stream.size == size
assert tarball.closed
with tarfile.TarFile(str(tfile), mode='r') as tarball:
for num, member in enumerate(tarball):
print("Member<%s>: %s" % (type(member), member))
name, mtime, size, header = setup[num == 0]
assert member.name == name
assert member.mtime == mtime
assert member.size == size
io = tarball.extractfile(member)
data = io.read()
if num == 0:
data = zlib.decompress(data, 0x1F)
print("Data: \"\"\"%s\"\"\"" % data[:300])
flines = data.decode('utf8').split('\t' if num else '\n')
assert lines == flines[1:-1]
if hasattr(header, 'decode'):
header = header.decode('utf8')
assert header == flines[0]
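
# A minimal usage sketch distilled from the tests above. Hedged: it assumes
# only the targzstream API exercised here (TarFile and add_gz_file as context
# managers); the member name and payload are illustrative.
def _demo_targzstream(path):
    """Write one gzip-compressed member, then read it back via stdlib tarfile."""
    with targzstream.TarFile(str(path), mode='w') as tarball:
        with tarball.add_gz_file('demo.txt.gz', mtime=0) as stream:
            stream.write('hello targzstream\n')
    with tarfile.open(str(path), 'r') as tb:
        member = tb.getmembers()[0]
        # Members are stored gzip-compressed, so decompress on the way out.
        return gzip.GzipFile(fileobj=tb.extractfile(member)).read()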
|
|
import numpy as np
from theano.tensor import as_tensor_variable
from ContinuousTimeMarkovModel.distributions import *
from pymc3 import Model, sample, Metropolis, Dirichlet, Potential, Binomial, Beta, Slice, NUTS
import theano.tensor as TT
from ContinuousTimeMarkovModel.samplers.forwardS import *
from ContinuousTimeMarkovModel.samplers.forwardX import *
#import sys; sys.setrecursionlimit(50000)
#theano.config.compute_test_value = 'off'
N = 100 # Number of patients
M = 6 # Number of hidden states
K = 10 # Number of comorbidities
D = 721 # Number of claims
Dd = 80 # Maximum number of claims that can occur at once
min_obs = 10 # Minimum number of observed claims per patient
max_obs = 30 # Maximum number of observed claims per patient
# Load pre-generated data
from pickle import load
S_start = load(open('../data/X_layer_100_patients/S.pkl', 'rb'))
''' S_start[zeroIndices]
[3, 0, 0, 4, 1, 0, 3, 4, 4, 2, 2, 4, 5, 2, 2, 2, 2, 0, 2, 1, 1, 0, 1, 0, 3, 4, 0, 0, 3, 4, 1, 5, 0, 5, 3, 0, 3, 2, 4, 1, 4, 5, 4, 0, 1, 1, 1, 2, 3, 0, 1, 3, 0, 2, 4, 2, 4, 3, 5, 0, 4, 0, 1, 4, 4, 0, 4, 1, 3, 2, 2, 0, 0, 2, 4, 4, 4, 5, 0, 2, 2, 0, 1, 2, 2, 3, 5, 3, 3, 4, 2, 2, 4, 3, 5, 5, 3, 2, 0, 3]
'''
X_start = load(open('../data/X_layer_100_patients/X.pkl', 'rb'))
Z_start = load(open('../data/X_layer_100_patients/Z.pkl', 'rb'))
L_start = load(open('../data/X_layer_100_patients/L.pkl', 'rb'))
obs_jumps = load(open('../data/X_layer_100_patients/obs_jumps.pkl', 'rb'))
T = load(open('../data/X_layer_100_patients/T.pkl', 'rb'))
O = load(open('../data/X_layer_100_patients/O_input.pkl', 'rb'))
'''
T = load(open('../data/synthetic2000/T.pkl', 'rb'))
obs_jumps = load(open('../data/synthetic2000/obs_jumps.pkl', 'rb'))
S_start = load(open('../data/synthetic2000/S.pkl', 'rb'))
X_start = load(open('../data/synthetic2000/X.pkl', 'rb'))
Z_start = load(open('../data/synthetic2000/Z.pkl', 'rb'))
L_start = load(open('../data/synthetic2000/L.pkl', 'rb'))
O = load(open('../data/synthetic2000/O_input.pkl', 'rb'))
T = load(open('../data/small_model/data/T.pkl', 'rb'))
obs_jumps = load(open('../data/small_model/data/obs_jumps.pkl', 'rb'))
S_start = load(open('../data/small_model/data/S.pkl', 'rb'))
X_start = load(open('../data/small_model/data/X.pkl', 'rb'))
Z_start = load(open('../data/small_model/data/Z.pkl', 'rb'))
L_start = load(open('../data/small_model/data/L.pkl', 'rb'))
O = load(open('../data/small_model/data/O_input.pkl', 'rb'))
'''
#DES: nObs is total number of observations
nObs = T.sum()
#compress n and t indices
# S is (nObs) vector
S_start = np.concatenate([S_start[i,0:T[i]] for i in range(N)])
# add a 0 at the start for the initial steps
obs_jumps = np.hstack([np.zeros((N,1),dtype='int8'),obs_jumps])
obs_jumps = np.concatenate([obs_jumps[i,0:T[i]] for i in range(N)])
# X is now (nObs,K)
X_start = np.concatenate([X_start[:,0:T[i],i].T for i in range(N)])
# O is now (nObs, Dd)
# TODO: implement this with sparse matrices
O = np.concatenate([O[:,0:T[i],i].T for i in range(N)])
#import pdb; pdb.set_trace()
model = Model()
with model:
#Fails: #pi = Dirichlet('pi', a = as_tensor_variable([0.147026,0.102571,0.239819,0.188710,0.267137,0.054738]), shape=M, testval = np.ones(M)/float(M))
pi = Dirichlet('pi', a = as_tensor_variable([0.147026,0.102571,0.239819,0.188710,0.267137,0.054738]), shape=M)
pi_min_potential = Potential('pi_min_potential', TT.switch(TT.min(pi) < .001, -np.inf, 0))
Q = DiscreteObsMJP_unif_prior('Q', M=M, lower=0.0, upper=1.0, shape=(M,M))
#S = DiscreteObsMJP('S', pi=pi, Q=Q, M=M, nObs=nObs, observed_jumps=obs_jumps, T=T, shape=(nObs), testval=np.ones(nObs,dtype='int32'))
S = DiscreteObsMJP('S', pi=pi, Q=Q, M=M, nObs=nObs, observed_jumps=obs_jumps, T=T, shape=(nObs))
#B0 = Beta('B0', alpha = 1., beta = 1., shape=(K,M), testval=0.2*np.ones((K,M)))
#B = Beta('B', alpha = 1., beta = 1., shape=(K,M), testval=0.2*np.ones((K,M)))
B0 = Beta('B0', alpha = 1., beta = 1., shape=(K,M))
B = Beta('B', alpha = 1., beta = 1., shape=(K,M))
#X = Comorbidities('X', S=S, B0=B0,B=B, T=T, shape=(nObs, K), testval=np.ones((nObs,K),dtype='int8'))
X = Comorbidities('X', S=S, B0=B0,B=B, T=T, shape=(nObs, K))
#Z = Beta('Z', alpha = 0.1, beta = 1., shape=(K,D), testval=0.5*np.ones((K,D)))
#L = Beta('L', alpha = 1., beta = 1., shape=D, testval=0.5*np.ones(D))
Z = Beta('Z', alpha = 0.1, beta = 1., shape=(K,D))
L = Beta('L', alpha = 1., beta = 1., shape=D)
O_obs = Claims('O_obs', X=X, Z=Z, L=L, T=T, D=D, O_input=O, shape=(nObs,Dd), observed=O)
#O_obs = Claims('O_obs', X=X, Z=Z, L=L, T=T, D=D, max_obs=max_obs, O_input=O, shape=(Dd,max_obs,N), observed=O)
#import pdb; pdb.set_trace()
import scipy.special
Q_raw_log = scipy.special.logit(np.array([0.631921, 0.229485, 0.450538, 0.206042, 0.609582]))
from scipy.special import logit
B_lo = logit(np.array([
[0.000001,0.760000,0.720000,0.570000,0.700000,0.610000],
[0.000001,0.460000,0.390000,0.220000,0.200000,0.140000],
[0.000001,0.620000,0.620000,0.440000,0.390000,0.240000],
[0.000001,0.270000,0.210000,0.170000,0.190000,0.070000],
[0.000001,0.490000,0.340000,0.220000,0.160000,0.090000],
[0.000001,0.620000,0.340000,0.320000,0.240000,0.120000],
[0.000001,0.550000,0.390000,0.320000,0.290000,0.150000],
[0.000001,0.420000,0.240000,0.170000,0.170000,0.110000],
[0.000001,0.310000,0.300000,0.230000,0.190000,0.110000],
[0.000001,0.470000,0.340000,0.190000,0.190000,0.110000]]))
B0_lo = logit(np.array([
[0.410412,0.410412,0.418293,0.418293,0.429890,0.429890],
[0.240983,0.240983,0.240983,0.240983,0.240983,0.240983],
[0.339714,0.339714,0.339714,0.339714,0.339714,0.339714],
[0.130415,0.130415,0.130415,0.130415,0.130415,0.130415],
[0.143260,0.143260,0.143260,0.143260,0.143260,0.143260],
[0.211465,0.211465,0.211465,0.211465,0.211465,0.211465],
[0.194187,0.194187,0.194187,0.194187,0.194187,0.194187],
[0.185422,0.185422,0.185422,0.185422,0.185422,0.185422],
[0.171973,0.171973,0.171973,0.171973,0.171973,0.171973],
[0.152277,0.152277,0.152277,0.152277,0.152277,0.152277]]))
#DES Random inputs
ranSeed = 144
np.random.seed(ranSeed)
L_start = np.random.rand(D)
np.random.seed(ranSeed+1)
Z_start = np.random.rand(K,D)
np.random.seed(ranSeed+2)
B_lo = logit(np.random.rand(K,M))
np.random.seed(ranSeed+3)
B0_lo = logit(np.random.rand(K,M))
Z_lo = logit(Z_start)
L_lo = logit(L_start)
#L_lo = np.ones_like(L_start)*-4.0
'''
Q_raw_log = np.log(np.array([[1, 0.0000001, 0.0000001, 0.0000001, 0.0000001],
[0.0000001, 1, 0.0000001, 0.0000001, 0.0000001],
[0.0000001, 0.0000001, 1, 0.0000001, 0.0000001],
[0.0000001, 0.0000001, 0.0000001, 1, 0.0000001],
[0.0000001, 0.0000001, 0.0000001, 0.0000001, 1],
[0.0000001, 0.0000001, 0.0000001, 0.0000001, 0.0000001]]))
'''
start = {'Q_ratematrixoneway': Q_raw_log, 'B_logodds':B_lo, 'B0_logodds':B0_lo, 'S':S_start, 'X':X_start, 'Z_logodds':Z_lo, 'L_logodds':L_lo}
#teststart = {'Q_ratematrixoneway': Q_raw_log, 'B_logodds':B_lo, 'B0_logodds':B0_lo, 'S':S_start, 'X':X_start, 'Z_logodds':Z_lo, 'L_logodds':L_lo, 'pi_stickbreaking':np.ones(M)/float(M)}
#start = {'Q_ratematrixoneway': Q_raw_log, 'B_logodds':B_lo, 'B0_logodds':B0_lo, 'S':S_start, 'X':X_start, 'Z_logodds':Z_lo, 'L_logodds':L_start}
with model:
#import pdb; pdb.set_trace()
steps = []
steps.append(NUTS(vars=[pi]))
#steps.append(NUTS(vars=[pi], scaling=np.ones(M-1)*0.058))
#steps.append(Metropolis(vars=[pi], scaling=0.058, tune=False))
steps.append(NUTS(vars=[Q],scaling=np.ones(M-1,dtype=float)*10.))
#steps.append(Metropolis(vars=[Q], scaling=0.2, tune=False))
steps.append(ForwardS(vars=[S], nObs=nObs, T=T, N=N, observed_jumps=obs_jumps))
steps.append(NUTS(vars=[B0,B]))
#steps.append(Metropolis(vars=[B0], scaling=0.2, tune=False))
#steps.append(NUTS(vars=[B]))
#steps.append(Metropolis(vars=[B], scaling=0.198, tune=False))
steps.append(ForwardX(vars=[X], N=N, T=T, K=K, D=D,Dd=Dd, O=O, nObs=nObs))
#steps.append(NUTS(vars=[Z], scaling=np.ones(K*D)))
steps.append(Metropolis(vars=[Z], scaling=0.0132, tune=False))
steps.append(NUTS(vars=[L],scaling=np.ones(D)))
#steps.append(Metropolis(vars=[L],scaling=0.02, tune=False, ))
## 22 minutes per step with all NUTS set
#import pdb; pdb.set_trace()
#model.dlogp()
trace = sample(1001, steps, start=start, random_seed=111,progressbar=True)
#trace = sample(11, steps, start=start, random_seed=111,progressbar=True)
#trace = sample(11, steps, start=start, random_seed=[111,112,113],progressbar=False,njobs=3)
pi = trace[pi]
Q = trace[Q]
S = trace[S]
#S0 = S[:,0] #now pibar
B0 = trace[B0]
B = trace[B]
X = trace[X]
Z = trace[Z]
L = trace[L]
#Sbin = np.vstack([np.bincount(S[i]) for i in range(len(S))])
Sbin = np.vstack([np.bincount(S[i],minlength=6)/float(len(S[i])) for i in range(len(S))])
zeroIndices = np.roll(T.cumsum(),1)
zeroIndices[0] = 0
pibar = np.vstack([np.bincount(S[i][zeroIndices],minlength=M)/float(zeroIndices.shape[0]) for i in range(len(S))])
pibar = np.vstack([np.bincount(S_start[zeroIndices],minlength=M)/float(zeroIndices.shape[0]),pibar])
SEnd = np.vstack([np.bincount(S[i][zeroIndices-1],minlength=M)/float(zeroIndices.shape[0]) for i in range(len(S))])
SEnd = np.vstack([np.bincount(S_start[zeroIndices-1],minlength=M)/float(zeroIndices.shape[0]),SEnd])
#logp = steps[1].logp
logp = steps[2].logp
Xlogp = steps[4].logp
XChanges = np.insert(1-(1-(X[:,1:]-X[:,:-1])).prod(axis=2),0,0,axis=1)
XChanges.T[zeroIndices] = 0
XChanges[XChanges.nonzero()] = 1.0  # mark each changed observation exactly once
XChanges = XChanges.sum(axis=1)/float(N)
logpTotal = [model.logp(trace[i]) for i in range(len(trace))]
#np.set_printoptions(2);np.set_printoptions(linewidth=160)
'''
for i in range(1001):
print "~~~",i ,"~~~"
print pi[i,:]
print "Bincount S0:", np.bincount(S0[i,:],minlength=6)
print "\n"
'''
#from pickle import dump
#with open('file.pkl','wb') as file:
# dump(trace,file)
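# Sketch of the ragged-index bookkeeping used above, on toy numbers (kept
# inactive, following this script's convention for illustrative blocks):
'''
T_demo = np.array([3, 2, 4])             # observation counts for 3 toy patients
zero_demo = np.roll(T_demo.cumsum(), 1)  # [9, 3, 5]
zero_demo[0] = 0                         # [0, 3, 5]: first obs of each patient
# zero_demo - 1 -> [-1, 2, 4]: last obs of each patient (index -1 wraps to the
# final observation overall), which is why S[i][zeroIndices] and
# S[i][zeroIndices-1] above recover the start and end states per patient.
'''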
|
|
#!/usr/bin/env python
"""Translate Wikipedia edit history XML files to JSON.
This script assumes that the Wikipedia edit history XML files are ordered as
follows:
<mediawiki>
...
<page>
<title></title>
<ns></ns>
<id></id>
<redirect title="" /> <!-- Optional -->
<revision>
<id>236939</id>
<timestamp></timestamp>
<contributor>
<username></username>
<id></id>
</contributor>
<comment></comment>
<model></model>
<format></format>
<text xml:space="preserve"></text>
<sha1></sha1>
</revision>
...
</page>
</mediawiki>
Specifically, it assumes that the title, ns, id, redirect tags for a page
appear before the revisions.
This is the format the files are in when they are downloaded from:
https://dumps.wikimedia.org/enwiki/latest/
Example:
This script is designed to be placed in the middle of a series of piped
commands. Most often it will read data on stdin from a 7zip program and
output JSON on stdout, often to a file or compression program, as follows:
$ 7z e -so input_file.7z | xml_to_json.py | gzip > output_file.json.gz
Attributes:
NAMESPACE (str): The default namespace of all tags in the Wikipedia edit
history XML file.
REVISION (dict): A dictionary that stores the information from each
revision as well as some information about the article in general. The
variables are as follows:
- article_title (str): The title of the article.
- article_id (int): A unique identifier for each article.
- article_namespace (int): The namespace of the article, as defined
by Wikipedia.
- redirect_target (str): If the article is a redirect, the target
article's article_title, otherwise None.
- revision_id (int): Unique identifier of the revision.
- parent_id (int): Unique identifier of the parent of the revision.
- timestamp (str): Time the revision was made.
- user_name (str): The name of the user, or their IP address if not
logged in.
- user_id (int): A unique id of the user, or None if they were not
logged in.
- comment (str): The comment the user left about their edit.
- minor (bool): True if the edit was marked as "minor", otherwise
False.
"""
from copy import deepcopy
from datetime import datetime
import argparse
import sys
import xml.etree.cElementTree as ET
import json
NAMESPACE = "{http://www.mediawiki.org/xml/export-0.10/}"
# JSON revision object
REVISION = {
# General article information
"article_title": None, # string
"article_id": None, # int
"article_namespace": None, # int
"redirect_target": None, # String
# Revision specific information
"revision_id": None, # int
"parent_id": None, # int
"timestamp": None, # date and time
"user_name": None, # string or ip as string
"user_id": None, # int if user, otherwise None
"comment": None, # string
"minor": False, # bool
"full_text": None, # str
}
def fill_rev(revision, element, in_revision_tree, namespace='', save_full_text=False):
"""Fill the fields of a revision dictionary given an element from an XML
Element Tree.
Args:
- revision (dict): A revision dictionary with fields already in place.
- element (ElementTree Element): An element from ElementTree.
- in_revision_tree (bool): True if inside a <revision> element,
otherwise should be set to False.
- namespace (Optional[str]): The XML name space that the tags exist in.
- save_full_text (Optional [bool]): Save the full text of the article
if True, otherwise set it to None.
Returns:
None
"""
if element.tag == namespace + "id":
if in_revision_tree:
revision["revision_id"] = int(element.text)
else:
revision["article_id"] = int(element.text)
elif element.tag == namespace + "parentid":
revision["parent_id"] = int(element.text)
elif element.tag == namespace + "timestamp":
revision["timestamp"] = element.text
elif element.tag == namespace + "minor":
revision["minor"] = True
elif element.tag == namespace + "comment":
revision["comment"] = element.text
elif element.tag == namespace + "contributor":
for child in element:
if child.tag == namespace + "username" or child.tag == namespace + "ip":
revision["user_name"] = child.text
elif child.tag == namespace + "id":
revision["user_id"] = int(child.text)
elif element.tag == namespace + "title":
revision["article_title"] = element.text
elif element.tag == namespace + "ns":
revision["article_namespace"] = int(element.text)
elif element.tag == namespace + "redirect":
revision["redirect_target"] = element.get("title")
elif element.tag == namespace + "text":
if save_full_text:
revision["full_text"] = element.text
# Set up command line flag handling
parser = argparse.ArgumentParser(
description="Transform Wikipedia XML to JSON.",
usage="7z e -so input.7z | %(prog)s [options] > output.json",
)
parser.add_argument(
'-f',
'--full-text',
help="keep the full text of each article, otherwise set it to None",
action="store_true",
dest="save_full_text",
default=False,
)
parser.add_argument(
'-l',
'--latest-revision-only',
help="keep only the latest revision of an article",
action="store_true",
dest="save_newest_only",
default=False,
)
# Run only if this script is being called directly
if __name__ == "__main__":
args = parser.parse_args()
in_page = False
in_revision = False
for event, elem in ET.iterparse(sys.stdin, events=("start", "end")):
# When a page element is started we set up a new revisions dictionary
# with the article title, id, namespace, and redirection information.
# This dictionary is then deepcopied for each revision. It is deleted
# when a page ends.
if event == "start" and elem.tag == NAMESPACE + "page":
in_page = True
page_rev = deepcopy(REVISION)
if args.save_newest_only:
newest = None
newest_date = None
elif event == "end" and elem.tag == NAMESPACE + "page":
if args.save_newest_only:
                print(json.dumps(newest))
del newest
del newest_date
in_page = False
del cur_rev
del page_rev
# When a revision starts we copy the current page dictionary and fill
# it. Revisions are sorted last in the XML tree, so the page_rev
# dictionary will be filled out by the time we reach them.
if event == "start" and elem.tag == NAMESPACE + "revision":
in_revision = True
cur_rev = deepcopy(page_rev)
elif event == "end" and elem.tag == NAMESPACE + "revision":
for child in elem:
fill_rev(cur_rev, child, in_revision, NAMESPACE, args.save_full_text)
child.clear()
in_revision = False
if not args.save_newest_only:
                print(json.dumps(cur_rev))
else:
if not newest:
newest = cur_rev
newest_date = datetime.strptime(cur_rev["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
else:
test_date = datetime.strptime(cur_rev["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
if test_date > newest_date:
newest = cur_rev
newest_date = test_date
elem.clear()
# Otherwise if we are not in a revision, but are in a page, then the
# elements are about the article and we save them into the page_rev
# dictionary
if event == "end" and in_page and not in_revision:
fill_rev(page_rev, elem, in_revision, NAMESPACE, args.save_full_text)
elem.clear()
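# Downstream sketch (illustrative, not executed here): because each revision is
# emitted as one JSON object per line, the gzipped output from the usage
# example in the docstring can be consumed line by line, e.g.:
#
#   import gzip, json
#   with gzip.open('output.json.gz', 'rt') as fh:
#       for line in fh:
#           rev = json.loads(line)
#           print(rev['article_title'], rev['revision_id'])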
|
|
# Copyright 2020-2022 Google, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid, datetime, pytz, os, requests
import configparser, difflib, hashlib
import DataCatalogUtils as dc
import BigQueryUtils as bq
from google.cloud import bigquery
from google.cloud import firestore
import constants
class TagEngineUtils:
def __init__(self):
self.db = firestore.Client()
        config = configparser.ConfigParser()
        config.read("tagengine.ini")
        # keep the parsed config around for methods that need it (e.g. ZETA_URL)
        self.config = config
def read_default_settings(self):
settings = {}
exists = False
doc_ref = self.db.collection('settings').document('default_tag_template')
doc = doc_ref.get()
if doc.exists:
settings = doc.to_dict()
exists = True
return exists, settings
def read_tag_history_settings(self):
settings = {}
enabled = False
doc_ref = self.db.collection('settings').document('tag_history')
doc = doc_ref.get()
if doc.exists:
settings = doc.to_dict()
if settings['enabled']:
enabled = True
return enabled, settings
def read_tag_stream_settings(self):
settings = {}
enabled = False
doc_ref = self.db.collection('settings').document('tag_stream')
doc = doc_ref.get()
if doc.exists:
settings = doc.to_dict()
if settings['enabled']:
enabled = True
return enabled, settings
def read_coverage_settings(self):
settings = {}
exists = False
doc_ref = self.db.collection('settings').document('coverage')
doc = doc_ref.get()
if doc.exists:
settings = doc.to_dict()
exists = True
return exists, settings
def read_propagation_settings(self):
settings = {}
exists = False
doc_ref = self.db.collection('settings').document('propagation')
doc = doc_ref.get()
if doc.exists:
settings = doc.to_dict()
exists = True
return exists, settings
def write_propagation_settings(self, source_project_ids, dest_project_ids, excluded_datasets, job_frequency):
report_settings = self.db.collection('settings')
doc_ref = report_settings.document('propagation')
doc_ref.set({
'source_project_ids': source_project_ids,
'dest_project_ids': dest_project_ids,
'excluded_datasets': excluded_datasets,
'job_frequency': job_frequency
})
print('Saved tag propagation settings.')
def write_default_settings(self, template_id, project_id, region):
report_settings = self.db.collection('settings')
doc_ref = report_settings.document('default_tag_template')
doc_ref.set({
'template_id': template_id,
'project_id': project_id,
'region': region
})
print('Saved default settings.')
def write_tag_history_settings(self, enabled, project_id, region, dataset):
history_settings = self.db.collection('settings')
doc_ref = history_settings.document('tag_history')
doc_ref.set({
'enabled': bool(enabled),
'project_id': project_id,
'region': region,
'dataset': dataset
})
print('Saved tag history settings.')
# assume that the BQ dataset exists
#bqu = bq.BigQueryUtils()
#bqu.create_dataset(project_id, region, dataset)
def write_tag_stream_settings(self, enabled, project_id, topic):
history_settings = self.db.collection('settings')
doc_ref = history_settings.document('tag_stream')
doc_ref.set({
'enabled': bool(enabled),
'project_id': project_id,
'topic': topic
})
print('Saved tag stream settings.')
def write_coverage_settings(self, project_ids, datasets, tables):
report_settings = self.db.collection('settings')
doc_ref = report_settings.document('coverage')
doc_ref.set({
'project_ids': project_ids,
'excluded_datasets': datasets,
'excluded_tables': tables
})
print('Saved coverage settings.')
def generate_coverage_report(self):
summary_report = []
detailed_report = []
exists, settings = self.read_coverage_settings()
project_ids = settings['project_ids']
excluded_datasets = settings['excluded_datasets']
excluded_tables = settings['excluded_tables']
print('project_ids: ' + project_ids)
print('excluded_datasets: ' + excluded_datasets)
print('excluded_tables: ' + excluded_tables)
log_ref = self.db.collection('logs')
# list datasets and tables for chosen projects
for project in project_ids.split(','):
project_id = project.strip()
bq_client = bigquery.Client(project=project_id)
datasets = list(bq_client.list_datasets())
total_tags = 0
for dataset in datasets:
print("dataset: " + dataset.dataset_id)
if project_id + "." + dataset.dataset_id in excluded_datasets:
print('skipping ' + project_id + "." + dataset.dataset_id)
continue
qualified_dataset = project_id + "." + dataset.dataset_id
total_tags = 0
table_list = []
tables = list(bq_client.list_tables(dataset.dataset_id))
for table in tables:
print("full_table_id: " + str(table.full_table_id))
table_path_full = table.full_table_id.replace(':', '/datasets/').replace('.', '/tables/')
table_path_short = table.full_table_id.replace(':', '.')
table_name = table_path_full.split('/')[4]
print('table_path_full: ' + table_path_full)
print('table_path_short: ' + table_path_short)
print('table_name: ' + table_name)
if table_path_short in project_id + '.' + excluded_tables:
print('skipping ' + table_path_short)
continue
query = log_ref.where('res', '==', table_path_full).where('dc_op', '==', 'TAG_CREATED')
results = query.stream()
tag_count = len(list(results))
total_tags = total_tags + tag_count
print("tag_count = " + str(tag_count))
print("total_tags = " + str(total_tags))
# add the table name and tag count to a list
table_tag = (table_name, tag_count)
table_list.append(table_tag)
# add record to summary report
summary_record = (qualified_dataset, total_tags)
summary_report.append(summary_record)
detailed_record = {qualified_dataset: table_list}
detailed_report.append(detailed_record)
return summary_report, detailed_report
def run_propagation_job(self, source_project_ids, dest_project_ids, excluded_datasets):
#print('*** enter run_propagation_job ***')
#print("source_project_ids: " + source_project_ids)
#print("dest_project_ids: " + dest_project_ids)
#print("excluded_datasets: " + excluded_datasets)
for dest_project in dest_project_ids.split(','):
dest_project_id = dest_project.strip()
print("dest_project_id: " + dest_project_id)
bq_client = bigquery.Client(project=dest_project_id)
datasets = list(bq_client.list_datasets())
for dataset in datasets:
print("dataset_id: " + dataset.dataset_id)
# filter out excluded datasets
if dest_project_id + "." + dataset.dataset_id in excluded_datasets:
print("excluding " + dest_project_id + "." + dataset.dataset_id + " from propagation")
continue
query_str = """
select table_name as view_name, view_definition
from `""" + dest_project_id + "." + dataset.dataset_id + "." + """INFORMATION_SCHEMA.VIEWS`
"""
query_job = bq_client.query(query_str)
for view in query_job:
view_name = view["view_name"]
view_def = view["view_definition"]
print('view_name: ' + view_name)
print('view_def: ' + view_def)
view_res = dest_project_id + '/datasets/' + dataset.dataset_id + '/views/' + view_name
print('view_res: ' + view_res)
source_tables = self.source_tables_from_view(view_def)
print('source_tables: ' + str(source_tables))
view_source_tables_configs = [] # contains list of all source tag configs per view
for source in source_tables:
source_split = source.split(".")
if len(source_split) == 3:
source_project = source_split[0]
source_dataset = source_split[1]
source_table = source_split[2]
elif len(source_split) == 2:
source_project = dest_project_id
source_dataset = source_split[0]
source_table = source_split[1]
else:
print("Opps, something went wrong, couldn't parse view definition")
continue
source_res = source_project + '/datasets/' + source_dataset + '/tables/' + source_table
print('source_res: ' + source_res)
tag_configs = self.read_tag_configs_on_res(source_res)
if len(tag_configs) == 0:
self.write_unpropagated_log_entry(source_res, view_res, 'PROPAGATED', 'NONE', '')
else:
self.add_source_to_configs(tag_configs, source_res)
view_source_tables_configs.append(tag_configs)
# end for source in source tables
if len(view_source_tables_configs) == 0:
# source tables have zero tags, and we have already logged these events, move on to next view
continue
else:
# go through all tags attached to this view's source tables and triage them
# returns list of configs which are tagged as CONFLICT or AGREE
reconciled_configs = self.triage_tag_configs(view_res, view_source_tables_configs)
for config in reconciled_configs:
config_status = config['config_status']
source_tag_uuid = config['tag_uuid']
source_res = config['source_res']
if isinstance(source_res, list) == False:
source_res = [source_res]
tag_type = config['tag_type']
fields = config['fields']
included_uris = config['included_uris']
template_uuid = config['template_uuid']
print('source_res: ' + str(source_res))
print('fields: ' + str(fields))
# check if tag has been forked, we don't want to override it, if it has
if self.is_forked_tag(template_uuid, source_res, view_res):
continue
template_config = self.read_template_config(template_uuid)
template_id = template_config['template_id']
project_id = template_config['project_id']
region = template_config['region']
dcu = dc.DataCatalogUtils(template_id, project_id, region)
# parse the included_uris field, matching the source_res against it.
# extract table-level and column-level tags from the included_uris field
columns = self.extract_tagged_columns(source_res, view_res, included_uris)
#print('columns: ' + str(columns))
# create or update propagated_config
view_tag_uuid = self.create_or_update_propagated_config(source_tag_uuid, source_res, view_res, config_status, columns, view_def, \
tag_type, fields, template_uuid)
if config_status == 'CONFLICT':
self.write_unpropagated_log_entry(source_res, view_res, 'PROPAGATED', config_status, template_uuid)
elif config_status == 'PROPAGATED':
if tag_type == "STATIC":
dcu.create_update_static_propagated_tag('PROPAGATED', source_res, view_res, columns, fields, source_tag_uuid, view_tag_uuid,\
template_uuid)
if tag_type == "DYNAMIC":
dcu.create_update_dynamic_propagated_tag('PROPAGATED', source_res, view_res, columns, fields, source_tag_uuid, view_tag_uuid,\
template_uuid)
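    # Sketch of the source-table parsing above (illustrative strings): a view
    # definition may reference tables with or without an explicit project, so
    #   'proj.ds.t'.split('.')  -> ['proj', 'ds', 't']   (project given)
    #   'ds.t'.split('.')       -> ['ds', 't']           (inherits dest_project_id)
    # Anything else is logged and skipped rather than guessed at.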
def add_source_to_configs(self, tag_configs, source_res):
for tag_config in tag_configs:
tag_config['source_res'] = source_res
def triage_tag_configs(self, view_res, source_tables_tag_configs):
#print('enter triage_tag_configs')
#print('view_res: ' + view_res)
#print('source_tables_tag_configs: ' + str(source_tables_tag_configs))
        reconciled_tags = [] # tracks configs which conflict and/or agree
overlapping_tags = [] # tracks configs which overlap and still need to be reconciled
template_tag_mapping = {} # key == tag_template_uuid, val == [tag_uuid]
for source_table_tag_configs in source_tables_tag_configs:
for tag_config in source_table_tag_configs:
print('tag_config: ' + str(tag_config))
template_uuid = tag_config['template_uuid']
tag_uuid = tag_config['tag_uuid']
if template_uuid in template_tag_mapping:
tag_uuid_list = template_tag_mapping[template_uuid]
tag_uuid_list.append(tag_uuid)
reconciled_tags, overlapping_tags = self.swap_elements(template_uuid, reconciled_tags, overlapping_tags)
tag_config['config_status'] = 'OVERLAP'
overlapping_tags.append(tag_config)
else:
tag_uuid_list = []
tag_uuid_list.append(tag_uuid)
template_tag_mapping[template_uuid] = tag_uuid_list
tag_config['config_status'] = 'PROPAGATED'
reconciled_tags.append(tag_config)
if len(overlapping_tags) == 0:
print('we have no overlapping tags')
print('reconciled_tags: ' + str(reconciled_tags))
return reconciled_tags
# we have some overlapping tags
tag_uuid_lists = template_tag_mapping.values()
for tag_uuid_list in tag_uuid_lists:
if len(tag_uuid_list) > 1:
agreeing_tag, conflicting_tag = self.run_diff(tag_uuid_list, overlapping_tags)
if len(conflicting_tag) > 0:
print('we have a conflicting tag')
                    print('conflicting_tag: ' + str(conflicting_tag))
conflicting_tag['config_status'] = 'CONFLICT'
conflicting_tag['tag_uuid'] = tag_uuid_list
reconciled_tags.append(conflicting_tag)
if len(agreeing_tag) > 0:
print('we have an agreeing tag')
print('agreeing_tag: ' + str(agreeing_tag))
agreeing_tag['config_status'] = 'PROPAGATED'
agreeing_tag['tag_uuid'] = tag_uuid_list
reconciled_tags.append(agreeing_tag)
print('reconciled_tags: ' + str(reconciled_tags))
return reconciled_tags
def extract_source_res_list(self, tag_configs):
source_res_list = []
for tag_config in tag_configs:
source_res_list.append(tag_config['source_res'])
return source_res_list
def swap_elements(self, template_uuid, reconciled_tags, overlapping_tags):
purge_tag_configs = []
# reconciled_tags and overlapping_tags may both contain more than one config
for tag_config in reconciled_tags:
if template_uuid in tag_config['template_uuid']:
overlapping_tags.append(tag_config)
purge_tag_configs.append(tag_config)
for purge_config in purge_tag_configs:
reconciled_tags.remove(purge_config)
for tag_config in reconciled_tags:
if template_uuid in tag_config['template_uuid']:
tag_config['config_status'] = 'OVERLAP'
return reconciled_tags, overlapping_tags
def run_diff(self, tag_uuid_list, overlapping_tags):
#print('enter run_diff')
#print('tag_uuid_list: ' + str(tag_uuid_list))
#print('overlapping_tags: ' + str(overlapping_tags))
# get template fields
template_uuid = overlapping_tags[0]['template_uuid']
template_config = self.read_template_config(template_uuid)
dcu = dc.DataCatalogUtils(template_config['template_id'], template_config['project_id'], template_config['region'])
template_fields = dcu.get_template()
status = constants.TAGS_AGREE
tag_type = ""
output_fields = []
# for each template field
for field in template_fields:
field_id = field['field_id']
field_type = field['field_type']
field_values = []
for tag in overlapping_tags:
for tagged_field in tag['fields']:
if field_id in tagged_field['field_id']:
if tag['tag_type'] in 'DYNAMIC':
tag_type = constants.DYNAMIC_TAG
field_values.append(tagged_field['query_expression'])
if tag['tag_type'] in 'STATIC':
field_values.append(tagged_field['field_value'])
tag_type = constants.STATIC_TAG
#print('field_values: ' + str(field_values))
continue
# we've collected all the values for a given field and added them to field_values
# values assigned to a field are all equal if the set has one element
if len(set(field_values)) == 1:
# field values all match
if tag_type == constants.DYNAMIC_TAG:
matching_field = {'field_id': field_id, 'field_type': field_type, 'status': 'AGREE', 'query_expression': field_values[0]}
if tag_type == constants.STATIC_TAG:
matching_field = {'field_id': field_id, 'field_type': field_type, 'status': 'AGREE', 'field_value': field_values[0]}
output_fields.append(matching_field)
else:
if len(field_values) > 0:
if tag_type == constants.DYNAMIC_TAG:
conflicting_field = {'field_id': field_id, 'field_type': field_type, 'status': 'CONFLICT', 'query_expression': ', '.join(field_values)}
if tag_type == constants.STATIC_TAG:
conflicting_field = {'field_id': field_id, 'field_type': field_type, 'status': 'CONFLICT', 'field_value': ', '.join(field_values)}
output_fields.append(conflicting_field)
status = constants.TAGS_CONFLICT
#print('output_fields: ' + str(output_fields))
agreeing_tag = [] # output
conflicting_tag = [] # output
source_res_list = self.extract_source_res_list(overlapping_tags)
overlapping_tags[0]['source_res'] = source_res_list
overlapping_tags[0]['fields'] = output_fields
if status == constants.TAGS_CONFLICT:
conflicting_tag = overlapping_tags[0]
if status == constants.TAGS_AGREE:
agreeing_tag = overlapping_tags[0]
return agreeing_tag, conflicting_tag
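    # Tiny sketch of the agreement test used in run_diff (illustrative values):
    # a field AGREEs across overlapping tags exactly when all collected values
    # collapse to one set element, otherwise it is a CONFLICT.
    #
    #   len(set(['PII', 'PII'])) == 1      # True  -> AGREE
    #   len(set(['PII', 'PUBLIC'])) == 1   # False -> CONFLICT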
def extract_tagged_columns(self, source_res_list, view_res, included_uris):
#print('enter extract_tagged_columns')
#print('source_res_list: ' + str(source_res_list))
#print('view_res: ' + view_res)
#print('included: ' + included_uris)
view_res_split = view_res.split("/")
project = view_res_split[0]
dataset = view_res_split[2]
view = view_res.split("/")[4]
tagged_columns = []
for source_res in source_res_list:
print('source_res: ' + source_res)
source_res_full = "bigquery/project/" + source_res.replace('datasets', 'dataset').replace('tables/', '')
print('source_res_full: ' + source_res_full)
included_uri_split = included_uris.split(",")
for included_uri in included_uri_split:
uri = included_uri.strip()
print('uri: ' + uri)
# we may have a column
if len(uri) > len(source_res_full):
start_index = uri.rfind("/")
column = uri[start_index+1:]
exists = self.column_exists(project, dataset, view, column)
if exists:
tagged_columns.append(column)
return tagged_columns
def column_exists(self, project, dataset, view, column):
column_exists = False
query_str = """
select count(*) as count
from `""" + project + "." + dataset + "." + """INFORMATION_SCHEMA.COLUMN_FIELD_PATHS`
where table_name='""" + view + """'
and column_name='""" + column + """'
"""
#print("query_str: " + query_str)
bq_client = bigquery.Client(project=project)
query_job = bq_client.query(query_str)
for row in query_job:
count = row["count"]
if count == 1:
column_exists = True
return column_exists
def is_forked_tag(self, template_uuid, source_res, view_res):
config_ref = self.db.collection('propagated_config')
query = config_ref.where('template_uuid', '==', template_uuid).where('source_res', '==', source_res).where('view_res', '==', view_res).where('config_status', 'in', ['PROPAGATED AND FORKED', 'CONFLICT AND FORKED'])
results = query.stream()
for doc in results:
if doc.exists:
return True
return False
def create_or_update_propagated_config(self, source_tag_uuid, source_res, view_res, config_status, columns, view_def, tag_type, fields,\
template_uuid):
#print('enter create_or_update_propagated_config')
# check to see if we have an active config
tag_ref = self.db.collection('propagated_config')
query = tag_ref.where('template_uuid', '==', template_uuid).where('view_res', '==', view_res).where('source_res', 'array_contains_any', source_res)
results = query.stream()
doc_exists = False
for doc in results:
if doc.exists:
prop_config = doc.to_dict()
if len(columns) > 0:
if prop_config['cols'] == columns:
doc_exists = True
else:
doc_exists = True
if doc_exists == False:
break
view_tag_uuid = doc.id
print('Config already exists. Tag_uuid: ' + str(view_tag_uuid))
if prop_config['config_status'] == 'FORKED':
return view_tag_uuid
if prop_config['fields'] != fields:
self.db.collection('propagated_config').document(view_tag_uuid).update({
'config_status' : config_status,
'fields' : fields,
'last_modified_time' : datetime.datetime.utcnow()
})
print('Updated propagated_config.')
else:
self.db.collection('propagated_config').document(view_tag_uuid).update({
'last_modified_time' : datetime.datetime.utcnow()
})
print('Propagated config fields are equal, updated last_modified_time only.')
if doc_exists == False:
view_tag_uuid = uuid.uuid1().hex
prop_config = self.db.collection('propagated_config')
doc_ref = prop_config.document(view_tag_uuid)
doc = {
'view_tag_uuid': view_tag_uuid,
'source_tag_uuid': source_tag_uuid,
'tag_type': tag_type,
'config_status': config_status,
'creation_time': datetime.datetime.utcnow(),
'fields': fields,
'source_res': source_res,
'view_res': view_res,
'view_def': view_def,
'template_uuid': template_uuid}
if len(columns) > 0:
doc['cols'] = columns
doc_ref.set(doc)
print('Created new propagated tag config.')
return view_tag_uuid
def write_propagated_log_entry(self, config_status, dc_op, res_type, source_res, view_res, column, tag_type, source_tag_uuid, view_tag_uuid, tag_id, template_uuid):
log_entry = {}
log_entry['ts'] = datetime.datetime.utcnow()
log_entry['dc_op'] = dc_op
log_entry['res_type'] = res_type
log_entry['config_type'] = 'PROPAGATED'
log_entry['config_status'] = config_status
log_entry['tag_type'] = tag_type
log_entry['source_res'] = source_res
log_entry['view_res'] = view_res
if len(column) > 0:
log_entry['col'] = column
log_entry['tag_type'] = tag_type
log_entry['source_tag_uuid'] = source_tag_uuid
log_entry['view_tag_uuid'] = view_tag_uuid
log_entry['dc_tag_id'] = tag_id
log_entry['template_uuid'] = template_uuid
self.db.collection('logs').add(log_entry)
#print('Wrote log entry.')
def write_unpropagated_log_entry(self, source_res_list, view_res, config_type, config_status, template_uuid):
log_entry = {}
log_entry['source_res'] = source_res_list
log_entry['view_res'] = view_res
log_entry['config_type'] = config_type
log_entry['config_status'] = config_status
if template_uuid != "":
log_entry['template_uuid'] = template_uuid
log_entry['ts'] = datetime.datetime.utcnow()
self.db.collection('logs').add(log_entry)
#print('Wrote log entry.')
def generate_propagation_report(self):
#print("*** enter generate_propagation_report ***")
report = []
last_run = None
source_view_set = set()
prop_configs = self.db.collection('propagated_config').stream()
for config in prop_configs:
prop_entry = config.to_dict()
print("prop_entry: " + str(prop_entry))
view_res_pretty = prop_entry['view_res'].replace('/datasets', '').replace('/views', '')
prop_entry['view_res'] = view_res_pretty
source_res_list = prop_entry['source_res']
source_res_pretty = source_res_list[0]
if len(source_res_list) > 1:
source_res_pretty = source_res_pretty + '...'
source_res_pretty = source_res_pretty.replace('/datasets', '').replace('/tables', '')
prop_entry['source_res'] = source_res_pretty
template_config = self.read_template_config(prop_entry['template_uuid'])
prop_entry['template_id'] = template_config['template_id']
if last_run is None:
if 'last_modified_time' in prop_entry:
last_run = prop_entry['last_modified_time']
else:
last_run = prop_entry['creation_time']
#print('last_run: ' + str(last_run))
report.append(prop_entry)
last_hour_ts = datetime.datetime.utcnow() - datetime.timedelta(hours = 1)
logs = self.db.collection('logs').where('config_type', '==', 'PROPAGATED').where('config_status', '==', 'NONE').where('ts', '>=',\
last_hour_ts).order_by('ts', direction=firestore.Query.DESCENDING).stream()
for log in logs:
has_missing = True
log_entry = log.to_dict()
view_res_pretty = log_entry['view_res'].replace('/datasets', '').replace('/tables', '')
source_res_pretty = log_entry['source_res'].replace('/datasets', '').replace('/tables', '')
source_view_pair = source_res_pretty + '&' + view_res_pretty
if source_view_pair not in source_view_set:
source_view_set.add(source_view_pair)
report_entry = {}
report_entry['config_status'] = log_entry['config_status']
report_entry['source_res'] = source_res_pretty
report_entry['view_res'] = view_res_pretty
report.append(report_entry)
if last_run is None:
last_run = log_entry['ts']
#print('last_run: ' + str(last_run))
return report, last_run
    def read_propagated_configs_on_res(self, source_res, view_res, template_uuid):
        source_res_full = source_res.replace('.', '/datasets/', 1).replace('.', '/tables/', 1)
        view_res_full = view_res.replace('.', '/datasets/', 1).replace('.', '/tables/', 1)
        print('view_res_full: ' + view_res_full)
        prop_config_ref = self.db.collection('propagated_config')
        log_ref = self.db.collection('logs')
        # most recent propagation log entry for this (template, source, view) triple
        query1 = log_ref.where('template_uuid', '==', template_uuid).where('source_res', '==', source_res_full).where('view_res', '==',\
                 view_res_full).order_by('ts', direction=firestore.Query.DESCENDING).limit(1)
        prop_results = query1.stream()
        for prop_record in prop_results:
            print('found prop log id ' + prop_record.id)
            record = prop_record.to_dict()
            view_tag_uuid = record['view_tag_uuid']
            view_config = prop_config_ref.document(view_tag_uuid).get()
            if view_config.exists:
                view_record = view_config.to_dict()
                # get the tag configs attached to the parent (source) table
                source_config = self.read_tag_configs_on_res(source_res)
                return view_record, source_config, template_uuid
def source_tables_from_view(self, view_def):
source_tables = []
payload = {"sql": view_def}
        zeta = self.config['DEFAULT']['ZETA_URL']
response = requests.post(zeta, json=payload)
print('zeta response: ' + str(response))
resp_dict = response.json()
for resp in resp_dict:
source_res = '.'.join(resp)
source_tables.append(source_res)
return source_tables
def read_propagated_config(self, tag_uuid):
propagated_config = {}
propagated_ref = self.db.collection('propagated_config').document(tag_uuid)
doc = propagated_ref.get()
if doc.exists:
propagated_config = doc.to_dict()
print("propagated_config: " + str(propagated_config))
return propagated_config
@firestore.transactional
def update_in_transaction(transaction, config_ref, config_status, fields, refresh_frequency):
snapshot = config_ref.get(transaction=transaction)
if refresh_frequency != None:
transaction.update(config_ref, {
'fields': fields,
'refresh_frequency': refresh_frequency,
'config_status': config_status
})
else:
transaction.update(config_ref, {
'fields': fields,
'config_status': config_status
})
def fork_propagated_tag(self, tag_uuid, config_status, fields, refresh_frequency):
transaction = self.db.transaction()
config_ref = self.db.collection('propagated_config').document(tag_uuid)
self.update_in_transaction(transaction, config_ref, config_status, fields, refresh_frequency)
updated_config = config_ref.get().to_dict()
return updated_config
def read_tag_configs_on_res(self, res):
#print("*** enter read_tag_configs_on_res ***")
template_uuid_set = set()
tag_config_results = []
table_path_full = res.replace('.', '/datasets/', 1).replace('.', '/tables/', 1)
#print('table_path_full: ' + table_path_full)
log_ref = self.db.collection('logs')
query = log_ref.where('res', '==', table_path_full).where('config_type', '==', 'MANUAL').order_by('ts', direction=firestore.Query.DESCENDING)
create_update_entries = query.stream()
for create_update_entry in create_update_entries:
entry = create_update_entry.to_dict()
template_uuid = entry['template_uuid']
if template_uuid not in template_uuid_set:
template_uuid_set.add(template_uuid)
tag_uuid = entry['tag_uuid']
tag_config = self.read_tag_config(tag_uuid)
template_config = self.read_template_config(template_uuid)
tag_config['template_id'] = template_config['template_id']
tag_config_results.append(tag_config)
else:
continue
return tag_config_results
def read_template_config(self, template_uuid):
tag_config = {}
template_ref = self.db.collection('tag_template').document(template_uuid)
doc = template_ref.get()
if doc.exists:
template_config = doc.to_dict()
#print(str(tag_config))
return template_config
def read_tag_template(self, template_id, project_id, region):
template_exists = False
template_uuid = ""
# check to see if this template already exists
template_ref = self.db.collection('tag_template')
query = template_ref.where('template_id', '==', template_id).where('project_id', '==', project_id).where('region', '==', region)
matches = query.get()
# should either be a single matching template or no matching templates
if len(matches) == 1:
if matches[0].exists:
print('Tag Template exists. Template uuid: ' + str(matches[0].id))
template_uuid = matches[0].id
template_exists = True
return (template_exists, template_uuid)
def write_tag_template(self, template_id, project_id, region):
template_exists, template_uuid = self.read_tag_template(template_id, project_id, region)
if template_exists == False:
print('Tag Template doesn\'t exist. Creating new template')
template_uuid = uuid.uuid1().hex
doc_ref = self.db.collection('tag_template').document(template_uuid)
doc_ref.set({
'template_uuid': template_uuid,
'template_id': template_id,
'project_id': project_id,
'region': region
})
return template_uuid
def write_static_tag(self, config_status, fields, included_uris, excluded_uris, template_uuid, tag_history, tag_stream):
# hash the included_uris string
included_uris_hash = hashlib.md5(included_uris.encode()).hexdigest()
# check to see if this tag config already exists
tag_ref = self.db.collection('tag_config')
query = tag_ref.where('template_uuid', '==', template_uuid).where('included_uris_hash', '==',\
included_uris_hash).where('tag_type', '==', 'STATIC').where('config_status', '==', config_status)
matches = query.get()
for match in matches:
if match.exists:
tag_uuid_match = match.id
print('Tag config already exists. Tag_uuid: ' + str(tag_uuid_match))
# update status to INACTIVE
self.db.collection('tag_config').document(tag_uuid_match).update({
'config_status' : "INACTIVE"
})
print('Updated status to INACTIVE.')
tag_uuid = uuid.uuid1().hex
tag_config = self.db.collection('tag_config')
doc_ref = tag_config.document(tag_uuid)
doc_ref.set({
'tag_uuid': tag_uuid,
'tag_type': 'STATIC',
'config_status': config_status,
'creation_time': datetime.datetime.utcnow(),
'fields': fields,
'included_uris': included_uris,
'included_uris_hash': included_uris_hash,
'excluded_uris': excluded_uris,
'template_uuid': template_uuid,
'tag_history': tag_history,
'tag_stream': tag_stream
})
print('Created new static tag config.')
return tag_uuid, included_uris_hash
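    # Note on the hashing above (a design sketch, not prescribed by Firestore):
    # included_uris is a free-form, potentially long comma-separated list, so
    # configs are matched on a short fixed-length digest instead of the raw
    # string, e.g.
    #   hashlib.md5('bigquery/project/p/dataset/d/t1'.encode()).hexdigest()
    # yields a stable 32-character key suitable for equality filters.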
def write_dynamic_tag(self, config_status, fields, included_uris, excluded_uris, template_uuid, refresh_mode,\
refresh_frequency, refresh_unit, tag_history, tag_stream):
included_uris_hash = hashlib.md5(included_uris.encode()).hexdigest()
# check to see if this tag config already exists
tag_ref = self.db.collection('tag_config')
query = tag_ref.where('template_uuid', '==', template_uuid).where('included_uris_hash', '==', included_uris_hash).where('tag_type', '==', 'DYNAMIC').where('config_status', '==', config_status)
matches = query.get()
for match in matches:
if match.exists:
tag_uuid_match = match.id
print('Tag config already exists. Tag_uuid: ' + str(tag_uuid_match))
# update status to INACTIVE
self.db.collection('tag_config').document(tag_uuid_match).update({
'config_status' : "INACTIVE"
})
print('Updated status to INACTIVE.')
tag_uuid = uuid.uuid1().hex
tag_config = self.db.collection('tag_config')
doc_ref = tag_config.document(tag_uuid)
        if refresh_mode == 'AUTO':
            delta = 24  # default if refresh_frequency is missing or malformed
            if type(refresh_frequency) is int:
if refresh_frequency > 0:
delta = refresh_frequency
else:
delta = 24
if type(refresh_frequency) is str:
if refresh_frequency.isdigit():
delta = int(refresh_frequency)
else:
delta = 24
if refresh_unit == 'hours':
next_run = datetime.datetime.utcnow() + datetime.timedelta(hours=delta)
if refresh_unit == 'days':
next_run = datetime.datetime.utcnow() + datetime.timedelta(days=delta)
doc_ref.set({
'tag_uuid': tag_uuid,
'tag_type': 'DYNAMIC',
'config_status': config_status,
'creation_time': datetime.datetime.utcnow(),
'fields': fields,
'included_uris': included_uris,
'included_uris_hash': included_uris_hash,
'excluded_uris': excluded_uris,
'template_uuid': template_uuid,
'refresh_mode': refresh_mode, # AUTO
'refresh_frequency': delta,
'refresh_unit': refresh_unit,
'tag_history': tag_history,
'tag_stream': tag_stream,
'scheduling_status': 'READY',
'next_run': next_run,
'version': 1
})
else:
doc_ref.set({
'tag_uuid': tag_uuid,
'tag_type': 'DYNAMIC',
'config_status': config_status,
'creation_time': datetime.datetime.utcnow(),
'fields': fields,
'included_uris': included_uris,
'included_uris_hash': included_uris_hash,
'excluded_uris': excluded_uris,
'template_uuid': template_uuid,
'refresh_mode': refresh_mode, # ON_DEMAND
'refresh_frequency': 0,
'tag_history': tag_history,
'tag_stream': tag_stream,
'version': 1
})
print('Created new dynamic tag config.')
return tag_uuid, included_uris_hash
def write_log_entry(self, dc_op, resource_type, resource, column, tag_type, tag_uuid, tag_id, template_uuid):
log_entry = {}
log_entry['ts'] = datetime.datetime.utcnow()
log_entry['dc_op'] = dc_op
log_entry['res_type'] = resource_type
log_entry['config_type'] = 'MANUAL'
log_entry['res'] = resource
if len(column) > 0:
log_entry['col'] = column
log_entry['tag_type'] = tag_type
log_entry['tag_uuid'] = tag_uuid
log_entry['dc_tag_id'] = tag_id
log_entry['template_uuid'] = template_uuid
self.db.collection('logs').add(log_entry)
#print('Wrote log entry.')
def write_error_entry(self, msg):
error_entry = {}
error_entry['ts'] = datetime.datetime.utcnow()
error_entry['msg'] = msg
self.db.collection('errors').add(error_entry)
print('Wrote error entry.')
def read_tag_configs(self, template_id, project_id, region):
tag_configs = []
template_exists, template_uuid = self.read_tag_template(template_id, project_id, region)
tag_ref = self.db.collection('tag_config')
docs = tag_ref.where('template_uuid', '==', template_uuid).where('config_status', '==', 'ACTIVE').stream()
for doc in docs:
tag_config = doc.to_dict()
tag_configs.append(tag_config)
#print(str(tag_configs))
return tag_configs
def read_tag_config(self, tag_uuid):
tag_config = {}
tag_ref = self.db.collection('tag_config').document(tag_uuid)
doc = tag_ref.get()
if doc.exists:
tag_config = doc.to_dict()
#print(str(tag_config))
return tag_config
def read_propagated_tag_config(self, tag_uuid):
propagated_tag_config = {}
propagated_tag_ref = self.db.collection('propagated_config').document(tag_uuid)
doc = propagated_tag_ref.get()
if doc.exists:
propagated_tag_config = doc.to_dict()
return propagated_tag_config
def lookup_tag_config_by_included_uris(self, template_uuid, included_uris, included_uris_hash):
success = False
tag_config = {}
tag_ref = self.db.collection('tag_config')
if included_uris is not None:
docs = tag_ref.where('template_uuid', '==', template_uuid).where('config_status', '==', 'ACTIVE')\
.where('included_uris', '==', included_uris).stream()
if included_uris_hash is not None:
docs = tag_ref.where('template_uuid', '==', template_uuid).where('config_status', '==', 'ACTIVE')\
.where('included_uris_hash', '==', included_uris_hash).stream()
for doc in docs:
tag_config = doc.to_dict()
break
print('tag_config: ' + str(tag_config))
if tag_config:
success = True
return success, tag_config
def increment_tag_config_version(self, tag_uuid, version):
self.db.collection('tag_config').document(tag_uuid).update({
'version' : version + 1
})
def update_tag_config(self, old_tag_uuid, tag_type, config_status, fields, included_uris, excluded_uris, template_uuid, \
refresh_mode, refresh_frequency, refresh_unit, tag_history, tag_stream):
self.db.collection('tag_config').document(old_tag_uuid).update({
'config_status' : "INACTIVE"
})
        if tag_type == 'STATIC':
            new_tag_uuid, included_uris_hash = self.write_static_tag(config_status, fields, included_uris, excluded_uris, \
                                                                     template_uuid, tag_history, tag_stream)
if tag_type == 'DYNAMIC':
new_tag_uuid, included_uris_hash = self.write_dynamic_tag(config_status, fields, included_uris, excluded_uris, \
template_uuid, refresh_mode, refresh_frequency, refresh_unit,\
tag_history, tag_stream)
# note: don't need to return the included_uris_hash
return new_tag_uuid
if __name__ == '__main__':
config = configparser.ConfigParser()
config.read("tagengine.ini")
    te = TagEngineUtils()
    te.write_tag_template('quality_template', config['DEFAULT']['PROJECT'], config['DEFAULT']['REGION'])
|
|
# Copyright (c) 2006-2007 The Regents of The University of Michigan
# Copyright (c) 2009 Advanced Micro Devices, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Brad Beckmann
import math
import m5
from m5.objects import *
from m5.defines import buildEnv
from Ruby import create_topology
#
# Note: the L1 Cache latency is only used by the sequencer on fast path hits
#
class L1Cache(RubyCache):
latency = 2
#
# Note: the L2 Cache latency is not currently used
#
class L2Cache(RubyCache):
latency = 10
#
# Probe filter is a cache, latency is not used
#
class ProbeFilter(RubyCache):
latency = 1
def define_options(parser):
parser.add_option("--allow-atomic-migration", action="store_true",
help="allow migratory sharing for atomic only accessed blocks")
parser.add_option("--pf-on", action="store_true",
help="Hammer: enable Probe Filter")
parser.add_option("--dir-on", action="store_true",
help="Hammer: enable Full-bit Directory")
def create_system(options, system, dma_ports, ruby_system):
if buildEnv['PROTOCOL'] != 'MOESI_hammer':
panic("This script requires the MOESI_hammer protocol to be built.")
cpu_sequencers = []
#
# The ruby network creation expects the list of nodes in the system to be
# consistent with the NetDest list. Therefore the l1 controller nodes must be
# listed before the directory nodes and directory nodes before dma nodes, etc.
#
l1_cntrl_nodes = []
dir_cntrl_nodes = []
dma_cntrl_nodes = []
#
# Must create the individual controllers before the network to ensure the
# controller constructors are called before the network constructor
#
block_size_bits = int(math.log(options.cacheline_size, 2))
for i in xrange(options.num_cpus):
#
# First create the Ruby objects associated with this cpu
#
l1i_cache = L1Cache(size = options.l1i_size,
assoc = options.l1i_assoc,
start_index_bit = block_size_bits,
is_icache = True)
l1d_cache = L1Cache(size = options.l1d_size,
assoc = options.l1d_assoc,
start_index_bit = block_size_bits)
l2_cache = L2Cache(size = options.l2_size,
assoc = options.l2_assoc,
start_index_bit = block_size_bits)
l1_cntrl = L1Cache_Controller(version = i,
L1Icache = l1i_cache,
L1Dcache = l1d_cache,
L2cache = l2_cache,
no_mig_atomic = not \
options.allow_atomic_migration,
send_evictions = (
options.cpu_type == "detailed"),
transitions_per_cycle = options.ports,
ruby_system = ruby_system)
cpu_seq = RubySequencer(version = i,
icache = l1i_cache,
dcache = l1d_cache,
ruby_system = ruby_system)
l1_cntrl.sequencer = cpu_seq
if options.recycle_latency:
l1_cntrl.recycle_latency = options.recycle_latency
exec("ruby_system.l1_cntrl%d = l1_cntrl" % i)
#
# Add controllers and sequencers to the appropriate lists
#
cpu_sequencers.append(cpu_seq)
l1_cntrl_nodes.append(l1_cntrl)
phys_mem_size = sum(map(lambda r: r.size(), system.mem_ranges))
assert(phys_mem_size % options.num_dirs == 0)
mem_module_size = phys_mem_size / options.num_dirs
#
# determine size and index bits for probe filter
# By default, the probe filter size is configured to be twice the
# size of the L2 cache.
#
pf_size = MemorySize(options.l2_size)
pf_size.value = pf_size.value * 2
dir_bits = int(math.log(options.num_dirs, 2))
pf_bits = int(math.log(pf_size.value, 2))
if options.numa_high_bit:
if options.pf_on or options.dir_on:
# if numa high bit explicitly set, make sure it does not overlap
# with the probe filter index
assert(options.numa_high_bit - dir_bits > pf_bits)
# set the probe filter start bit to just above the block offset
pf_start_bit = block_size_bits
else:
if dir_bits > 0:
pf_start_bit = dir_bits + block_size_bits - 1
else:
pf_start_bit = block_size_bits
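    #
    # Worked example (illustrative numbers only, not taken from this script):
    # with a 512kB L2 the probe filter is sized to 1MB. For 64-byte cache
    # lines, block_size_bits = log2(64) = 6, so with a single directory
    # (dir_bits == 0) the probe filter index starts at bit 6, just above the
    # block offset.
    #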
    # Run each of the ruby memory controllers at a ratio of the frequency of
    # the ruby system.
    # The clk_divider value is a fix to pass regressions.
ruby_system.memctrl_clk_domain = DerivedClockDomain(
clk_domain=ruby_system.clk_domain,
clk_divider=3)
for i in xrange(options.num_dirs):
#
# Create the Ruby objects associated with the directory controller
#
mem_cntrl = RubyMemoryControl(
clk_domain = ruby_system.memctrl_clk_domain,
version = i,
ruby_system = ruby_system)
dir_size = MemorySize('0B')
dir_size.value = mem_module_size
pf = ProbeFilter(size = pf_size, assoc = 4,
start_index_bit = pf_start_bit)
dir_cntrl = Directory_Controller(version = i,
directory = \
RubyDirectoryMemory( \
version = i,
size = dir_size,
use_map = options.use_map,
map_levels = \
options.map_levels,
numa_high_bit = \
options.numa_high_bit),
probeFilter = pf,
memBuffer = mem_cntrl,
probe_filter_enabled = options.pf_on,
full_bit_dir_enabled = options.dir_on,
transitions_per_cycle = options.ports,
ruby_system = ruby_system)
if options.recycle_latency:
dir_cntrl.recycle_latency = options.recycle_latency
exec("ruby_system.dir_cntrl%d = dir_cntrl" % i)
dir_cntrl_nodes.append(dir_cntrl)
for i, dma_port in enumerate(dma_ports):
#
# Create the Ruby objects associated with the dma controller
#
dma_seq = DMASequencer(version = i,
ruby_system = ruby_system)
dma_cntrl = DMA_Controller(version = i,
dma_sequencer = dma_seq,
transitions_per_cycle = options.ports,
ruby_system = ruby_system)
exec("ruby_system.dma_cntrl%d = dma_cntrl" % i)
exec("ruby_system.dma_cntrl%d.dma_sequencer.slave = dma_port" % i)
dma_cntrl_nodes.append(dma_cntrl)
if options.recycle_latency:
dma_cntrl.recycle_latency = options.recycle_latency
all_cntrls = l1_cntrl_nodes + dir_cntrl_nodes + dma_cntrl_nodes
topology = create_topology(all_cntrls, options)
return (cpu_sequencers, dir_cntrl_nodes, topology)
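# Illustrative call site (a sketch; the real caller is gem5's generic Ruby
# setup, which imports this module by protocol name):
#   (cpu_sequencers, dir_cntrls, topology) = create_system(
#       options, system, dma_ports, ruby_system)
# The caller attaches the sequencers to the CPU ports and hands the topology
# to the network constructor.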
|
|
"""Abstract tensor product."""
from __future__ import print_function, division
from sympy import Expr, Add, Mul, Matrix, Pow, sympify
from sympy.core.compatibility import u
from sympy.core.trace import Tr
from sympy.printing.pretty.stringpict import prettyForm
from sympy.physics.quantum.qexpr import QuantumError
from sympy.physics.quantum.dagger import Dagger
from sympy.physics.quantum.commutator import Commutator
from sympy.physics.quantum.anticommutator import AntiCommutator
from sympy.physics.quantum.state import Ket, Bra
from sympy.physics.quantum.matrixutils import (
numpy_ndarray,
scipy_sparse_matrix,
matrix_tensor_product
)
__all__ = [
'TensorProduct',
'tensor_product_simp'
]
#-----------------------------------------------------------------------------
# Tensor product
#-----------------------------------------------------------------------------
_combined_printing = False
def combined_tensor_printing(combined):
"""Set flag controlling whether tensor products of states should be
printed as a combined bra/ket or as an explicit tensor product of different
bra/kets. This is a global setting for all TensorProduct class instances.
Parameters
----------
    combined : bool
When true, tensor product states are combined into one ket/bra, and
when false explicit tensor product notation is used between each
ket/bra.
"""
global _combined_printing
_combined_printing = combined
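# Illustrative sketch of the effect (rendered output assumed):
#   combined_tensor_printing(True)   # |a>x|b> now pretty-prints as |a,b>
#   combined_tensor_printing(False)  # ... and as |a>x|b> again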
class TensorProduct(Expr):
"""The tensor product of two or more arguments.
For matrices, this uses ``matrix_tensor_product`` to compute the Kronecker
or tensor product matrix. For other objects a symbolic ``TensorProduct``
instance is returned. The tensor product is a non-commutative
multiplication that is used primarily with operators and states in quantum
mechanics.
Currently, the tensor product distinguishes between commutative and non-
commutative arguments. Commutative arguments are assumed to be scalars and
are pulled out in front of the ``TensorProduct``. Non-commutative arguments
remain in the resulting ``TensorProduct``.
Parameters
==========
args : tuple
A sequence of the objects to take the tensor product of.
Examples
========
Start with a simple tensor product of sympy matrices::
>>> from sympy import I, Matrix, symbols
>>> from sympy.physics.quantum import TensorProduct
>>> m1 = Matrix([[1,2],[3,4]])
>>> m2 = Matrix([[1,0],[0,1]])
>>> TensorProduct(m1, m2)
Matrix([
[1, 0, 2, 0],
[0, 1, 0, 2],
[3, 0, 4, 0],
[0, 3, 0, 4]])
>>> TensorProduct(m2, m1)
Matrix([
[1, 2, 0, 0],
[3, 4, 0, 0],
[0, 0, 1, 2],
[0, 0, 3, 4]])
We can also construct tensor products of non-commutative symbols:
>>> from sympy import Symbol
>>> A = Symbol('A',commutative=False)
>>> B = Symbol('B',commutative=False)
>>> tp = TensorProduct(A, B)
>>> tp
AxB
We can take the dagger of a tensor product (note the order does NOT reverse
like the dagger of a normal product):
>>> from sympy.physics.quantum import Dagger
>>> Dagger(tp)
Dagger(A)xDagger(B)
Expand can be used to distribute a tensor product across addition:
>>> C = Symbol('C',commutative=False)
>>> tp = TensorProduct(A+B,C)
>>> tp
(A + B)xC
>>> tp.expand(tensorproduct=True)
AxC + BxC
"""
is_commutative = False
def __new__(cls, *args):
if isinstance(args[0], (Matrix, numpy_ndarray, scipy_sparse_matrix)):
return matrix_tensor_product(*args)
c_part, new_args = cls.flatten(sympify(args))
c_part = Mul(*c_part)
if len(new_args) == 0:
return c_part
elif len(new_args) == 1:
return c_part * new_args[0]
else:
tp = Expr.__new__(cls, *new_args)
return c_part * tp
@classmethod
def flatten(cls, args):
# TODO: disallow nested TensorProducts.
c_part = []
nc_parts = []
for arg in args:
cp, ncp = arg.args_cnc()
c_part.extend(list(cp))
nc_parts.append(Mul._from_args(ncp))
return c_part, nc_parts
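    # Illustrative note (behaviour inferred from the code above): args_cnc()
    # splits each argument into commutative and non-commutative factors, so
    # TensorProduct(2*A, 3*B) with non-commutative A, B gives
    # c_part == [2, 3], nc_parts == [A, B], and __new__ returns 6*(AxB).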
def _eval_adjoint(self):
return TensorProduct(*[Dagger(i) for i in self.args])
def _eval_rewrite(self, pattern, rule, **hints):
sargs = self.args
terms = [t._eval_rewrite(pattern, rule, **hints) for t in sargs]
return TensorProduct(*terms).expand(tensorproduct=True)
def _sympystr(self, printer, *args):
from sympy.printing.str import sstr
length = len(self.args)
s = ''
for i in range(length):
if isinstance(self.args[i], (Add, Pow, Mul)):
s = s + '('
s = s + sstr(self.args[i])
if isinstance(self.args[i], (Add, Pow, Mul)):
s = s + ')'
if i != length - 1:
s = s + 'x'
return s
def _pretty(self, printer, *args):
if (_combined_printing and
(all([isinstance(arg, Ket) for arg in self.args]) or
all([isinstance(arg, Bra) for arg in self.args]))):
length = len(self.args)
pform = printer._print('', *args)
for i in range(length):
next_pform = printer._print('', *args)
length_i = len(self.args[i].args)
for j in range(length_i):
part_pform = printer._print(self.args[i].args[j], *args)
next_pform = prettyForm(*next_pform.right(part_pform))
if j != length_i - 1:
next_pform = prettyForm(*next_pform.right(', '))
if len(self.args[i].args) > 1:
next_pform = prettyForm(
*next_pform.parens(left='{', right='}'))
pform = prettyForm(*pform.right(next_pform))
if i != length - 1:
pform = prettyForm(*pform.right(',' + ' '))
pform = prettyForm(*pform.left(self.args[0].lbracket))
pform = prettyForm(*pform.right(self.args[0].rbracket))
return pform
length = len(self.args)
pform = printer._print('', *args)
for i in range(length):
next_pform = printer._print(self.args[i], *args)
if isinstance(self.args[i], (Add, Mul)):
next_pform = prettyForm(
*next_pform.parens(left='(', right=')')
)
pform = prettyForm(*pform.right(next_pform))
if i != length - 1:
if printer._use_unicode:
pform = prettyForm(*pform.right(u('\N{N-ARY CIRCLED TIMES OPERATOR}') + u(' ')))
else:
pform = prettyForm(*pform.right('x' + ' '))
return pform
def _latex(self, printer, *args):
if (_combined_printing and
(all([isinstance(arg, Ket) for arg in self.args]) or
all([isinstance(arg, Bra) for arg in self.args]))):
def _label_wrap(label, nlabels):
return label if nlabels == 1 else r"\left\{%s\right\}" % label
s = r", ".join([_label_wrap(arg._print_label_latex(printer, *args),
len(arg.args)) for arg in self.args])
return r"{%s%s%s}" % (self.args[0].lbracket_latex, s,
self.args[0].rbracket_latex)
length = len(self.args)
s = ''
for i in range(length):
if isinstance(self.args[i], (Add, Mul)):
s = s + '\\left('
# The extra {} brackets are needed to get matplotlib's latex
# rendered to render this properly.
s = s + '{' + printer._print(self.args[i], *args) + '}'
if isinstance(self.args[i], (Add, Mul)):
s = s + '\\right)'
if i != length - 1:
s = s + '\\otimes '
return s
def doit(self, **hints):
return TensorProduct(*[item.doit(**hints) for item in self.args])
def _eval_expand_tensorproduct(self, **hints):
"""Distribute TensorProducts across addition."""
args = self.args
add_args = []
stop = False
for i in range(len(args)):
if isinstance(args[i], Add):
for aa in args[i].args:
tp = TensorProduct(*args[:i] + (aa,) + args[i + 1:])
if isinstance(tp, TensorProduct):
tp = tp._eval_expand_tensorproduct()
add_args.append(tp)
break
if add_args:
return Add(*add_args)
else:
return self
def _eval_trace(self, **kwargs):
indices = kwargs.get('indices', None)
exp = tensor_product_simp(self)
if indices is None or len(indices) == 0:
return Mul(*[Tr(arg).doit() for arg in exp.args])
else:
return Mul(*[Tr(value).doit() if idx in indices else value
for idx, value in enumerate(exp.args)])
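    # Illustrative sketch of _eval_trace: with no indices the trace
    # factorises, Tr(AxB) -> Tr(A)*Tr(B); with indices=[0] only the first
    # factor is traced, giving Tr(A)*B.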
def tensor_product_simp_Mul(e):
"""Simplify a Mul with TensorProducts.
    Currently the main use of this is to simplify a ``Mul`` of ``TensorProduct``s
    to a ``TensorProduct`` of ``Muls``. It currently only works for relatively
    simple cases where the initial ``Mul`` only has scalars and raw
    ``TensorProduct``s, not ``Add``, ``Pow``, or ``Commutator``s of
    ``TensorProduct``s.
Parameters
==========
e : Expr
A ``Mul`` of ``TensorProduct``s to be simplified.
Returns
=======
e : Expr
A ``TensorProduct`` of ``Mul``s.
Examples
========
This is an example of the type of simplification that this function
performs::
>>> from sympy.physics.quantum.tensorproduct import \
tensor_product_simp_Mul, TensorProduct
>>> from sympy import Symbol
>>> A = Symbol('A',commutative=False)
>>> B = Symbol('B',commutative=False)
>>> C = Symbol('C',commutative=False)
>>> D = Symbol('D',commutative=False)
>>> e = TensorProduct(A,B)*TensorProduct(C,D)
>>> e
AxB*CxD
>>> tensor_product_simp_Mul(e)
(A*C)x(B*D)
"""
# TODO: This won't work with Muls that have other composites of
# TensorProducts, like an Add, Pow, Commutator, etc.
# TODO: This only works for the equivalent of single Qbit gates.
if not isinstance(e, Mul):
return e
c_part, nc_part = e.args_cnc()
n_nc = len(nc_part)
if n_nc == 0 or n_nc == 1:
return e
elif e.has(TensorProduct):
current = nc_part[0]
if not isinstance(current, TensorProduct):
raise TypeError('TensorProduct expected, got: %r' % current)
n_terms = len(current.args)
new_args = list(current.args)
for next in nc_part[1:]:
# TODO: check the hilbert spaces of next and current here.
if isinstance(next, TensorProduct):
if n_terms != len(next.args):
raise QuantumError(
'TensorProducts of different lengths: %r and %r' %
(current, next)
)
for i in range(len(new_args)):
new_args[i] = new_args[i] * next.args[i]
else:
# this won't quite work as we don't want next in the
# TensorProduct
for i in range(len(new_args)):
new_args[i] = new_args[i] * next
current = next
return Mul(*c_part) * TensorProduct(*new_args)
else:
return e
def tensor_product_simp(e, **hints):
"""Try to simplify and combine TensorProducts.
In general this will try to pull expressions inside of ``TensorProducts``.
    It currently only works for relatively simple cases where the products have
    only scalars and raw ``TensorProducts``, not ``Add``, ``Pow``, or ``Commutators``
of ``TensorProducts``. It is best to see what it does by showing examples.
Examples
========
>>> from sympy.physics.quantum import tensor_product_simp
>>> from sympy.physics.quantum import TensorProduct
>>> from sympy import Symbol
>>> A = Symbol('A',commutative=False)
>>> B = Symbol('B',commutative=False)
>>> C = Symbol('C',commutative=False)
>>> D = Symbol('D',commutative=False)
First see what happens to products of tensor products:
>>> e = TensorProduct(A,B)*TensorProduct(C,D)
>>> e
AxB*CxD
>>> tensor_product_simp(e)
(A*C)x(B*D)
    This is the core logic of this function, and it works inside powers, sums,
commutators and anticommutators as well:
>>> tensor_product_simp(e**2)
(A*C)x(B*D)**2
"""
if isinstance(e, Add):
return Add(*[tensor_product_simp(arg) for arg in e.args])
elif isinstance(e, Pow):
return tensor_product_simp(e.base) ** e.exp
elif isinstance(e, Mul):
return tensor_product_simp_Mul(e)
elif isinstance(e, Commutator):
return Commutator(*[tensor_product_simp(arg) for arg in e.args])
elif isinstance(e, AntiCommutator):
return AntiCommutator(*[tensor_product_simp(arg) for arg in e.args])
else:
return e
|
|
#!/usr/bin/env python
# encoding: UTF-8
import asyncio
from collections import deque
from collections import namedtuple
import concurrent.futures
import datetime
import functools
import logging
import operator
import re
import textwrap
import uuid
import xml.etree.ElementTree as ET
import xml.sax.saxutils
import aiohttp
from chameleon import PageTemplateFile
import pkg_resources
from sqlalchemy import desc
from cloudhands.burst.agent import Agent
from cloudhands.burst.agent import Job
from cloudhands.burst.control import create_node
from cloudhands.burst.control import describe_node
from cloudhands.burst.control import destroy_node
from cloudhands.burst.utils import find_xpath
from cloudhands.burst.utils import unescape_script
from cloudhands.common.discovery import providers
from cloudhands.common.discovery import settings
from cloudhands.common.schema import Appliance
from cloudhands.common.schema import CatalogueChoice
from cloudhands.common.schema import Component
from cloudhands.common.schema import IPAddress
from cloudhands.common.schema import Label
from cloudhands.common.schema import NATRouting
from cloudhands.common.schema import Node
from cloudhands.common.schema import OSImage
from cloudhands.common.schema import Provider
from cloudhands.common.schema import ProviderReport
from cloudhands.common.schema import ProviderToken
from cloudhands.common.schema import Touch
from cloudhands.common.states import ApplianceState
__doc__ = """
.. graphviz::
digraph appliance {
center = true;
compound = true;
nodesep = 0.6;
edge [decorate=true,labeldistance=3,labelfontname=helvetica,
labelfontsize=10,labelfloat=false];
subgraph cluster_states {
label = "Static";
configuring -> pre_provision [style=invis];
pre_check -> pre_operational [style=invis];
pre_operational -> pre_delete [style=invis];
pre_delete -> deleted [style=invis];
deleted -> pre_stop [style=invis];
stopped -> pre_start [style=invis];
pre_start -> running [style=invis];
subgraph cluster_super {
label = "Active";
node [height=1,width=2];
provisioning -> operational [style=invis];
}
}
pre_provision -> provisioning [style=invis];
operational -> pre_check [style=invis];
configuring -> pre_provision [taillabel="user"];
operational -> pre_check [ltail=cluster_super,taillabel="user"];
operational -> pre_stop [taillabel="user"];
subgraph cluster_agents {
label = "Burst controller";
style = filled;
node [shape=box];
pre_provision_agent -> provisioning_agent [style=invis];
provisioning_agent -> pre_check_agent [style=invis];
pre_check_agent -> pre_operational_agent [style=invis];
pre_operational_agent -> pre_delete_agent [style=invis];
pre_delete_agent -> pre_stop_agent [style=invis];
pre_stop_agent -> pre_start_agent [style=invis];
}
pre_provision -> pre_provision_agent[style=dashed,arrowhead=none];
pre_provision_agent -> provisioning;
provisioning -> provisioning_agent [taillabel="delay",style=dashed,arrowhead=none];
provisioning_agent -> pre_check;
pre_check -> pre_check_agent [style=dashed,arrowhead=none];
pre_check_agent -> operational;
pre_check_agent -> pre_operational;
pre_operational -> pre_operational_agent [style=dashed,arrowhead=none];
pre_operational -> pre_stop [taillabel="out of resource"];
pre_operational_agent -> operational;
pre_delete -> pre_delete_agent [style=dashed,arrowhead=none];
pre_delete_agent -> deleted;
pre_stop -> pre_stop_agent [style=dashed,arrowhead=none];
pre_stop_agent -> stopped;
stopped -> pre_delete [taillabel="user"];
stopped -> pre_start [taillabel="user"];
pre_start -> pre_start_agent [style=dashed,arrowhead=none,weight=2];
pre_start_agent -> running [tailport=w,weight=6];
running -> pre_stop [taillabel="user"];
}
"""
customizationScript = """#!/bin/sh
if [ x$1 = x"precustomization" ]; then
echo "Precustomisation"
elif [ x$1 = x"postcustomization" ]; then
echo "Postcustomisation"
/usr/local/bin/activator.sh {host}/appliance/{uuid}
fi
"""
find_catalogueitems = functools.partial(
find_xpath, ".//*[@type='application/vnd.vmware.vcloud.catalogItem+xml']",
namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
find_catalogues = functools.partial(
find_xpath, "./*/[@type='application/vnd.vmware.vcloud.catalog+xml']")
def find_customizationsection(tree):
elems = find_xpath(
".//*[@type='application/vnd.vmware.vcloud.guestCustomizationSection+xml']",
tree, namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
return (i for i in elems if i.tag.endswith("CustomizationSection"))
def find_customizationscript(tree):
return (i for s in find_customizationsection(tree) for i in s
if i.tag.endswith("CustomizationScript"))
find_gatewayserviceconfiguration = functools.partial(
find_xpath,
".//*[@type='application/vnd.vmware.admin.edgeGatewayServiceConfiguration+xml']")
def find_ipranges(tree, namespace="http://www.vmware.com/vcloud/v1.5"):
ranges = tree.iter("{{{}}}IpRange".format(namespace))
for r in ranges:
yield (
r.find("{{{}}}StartAddress".format(namespace)),
r.find("{{{}}}EndAddress".format(namespace)))
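# Illustrative sketch: given vCloud XML such as
#   <IpRange><StartAddress>192.0.2.10</StartAddress>
#            <EndAddress>192.0.2.20</EndAddress></IpRange>
# find_ipranges yields one (StartAddress, EndAddress) pair of Element objects
# per range; callers read the .text of each element.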
def find_networkconnectionsection(tree):
elems = find_xpath(
".//*[@type='application/vnd.vmware.vcloud.networkConnectionSection+xml']",
tree, namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
return (i for i in elems if i.tag.endswith("NetworkConnectionSection"))
def find_networkconnection(tree):
return (i for s in find_networkconnectionsection(tree) for i in s
if i.tag.endswith("NetworkConnection"))
def find_networkconfigsection(tree):
elems = find_xpath(
".//*[@type='application/vnd.vmware.vcloud.networkConfigSection+xml']",
tree, namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
return (i for i in elems if i.tag.endswith("NetworkConfigSection"))
def find_networkconfig(tree):
return (i for s in find_networkconfigsection(tree) for i in s
if i.tag.endswith("NetworkConfig"))
find_networkinterface = functools.partial(
find_xpath, ".//*[@type='application/vnd.vmware.admin.network+xml']",
namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
find_orgs = functools.partial(
find_xpath, "./*/[@type='application/vnd.vmware.vcloud.org+xml']")
find_records = functools.partial(
find_xpath, "./*/[@type='application/vnd.vmware.vcloud.query.records+xml']")
find_results = functools.partial(find_xpath, "./*")
find_templates = functools.partial(
find_xpath, ".//*[@type='application/vnd.vmware.vcloud.vAppTemplate+xml']",
namespaces={"": "http://www.vmware.com/vcloud/v1.5"})
find_vdcs = functools.partial(
find_xpath, "./*/[@type='application/vnd.vmware.vcloud.vdc+xml']")
find_vms = functools.partial(
find_xpath, ".//*[@type='application/vnd.vmware.vcloud.vm+xml']")
def find_catalogrecords(text):
# ElementTree expects QueryResultRecords to declare a namespace. This and
    # other issues mean we use regular expressions instead.
log = logging.getLogger(
"cloudhands.burst.appliance.find_catalogrecords")
records = re.findall("<CatalogRecord[^>]+>", text)
return [ET.fromstring(r) for r in records]
@asyncio.coroutine
def find_template_among_catalogues(
client, headers, templateName, catalogues
):
log = logging.getLogger(
"cloudhands.burst.appliance.find_template_among_catalogues")
rv = None
for catalogue in catalogues:
response = yield from client.request(
"GET", catalogue.attrib.get("href"),
headers=headers)
catalogueData = yield from response.read_and_close()
tree = ET.fromstring(catalogueData.decode("utf-8"))
for catalogueItem in find_catalogueitems(tree, name=templateName):
response = yield from client.request(
"GET", catalogueItem.attrib.get("href"),
headers=headers)
catalogueItemData = yield from response.read_and_close()
tree = ET.fromstring(catalogueItemData.decode("utf-8"))
rv = next(find_templates(tree), None)
if rv is not None:
break
return rv
@asyncio.coroutine
def find_template_among_orgs(
client, headers, orgs, templateName,
catalogName="UN-managed Public Catalog"
):
log = logging.getLogger("cloudhands.burst.appliance.find_template_among_orgs")
rv = None
orgs = list(orgs)
while rv is None:
try:
org = orgs.pop(0)
except IndexError:
break
else:
response = yield from client.request(
"GET", org.attrib.get("href"),
headers=headers)
orgData = yield from response.read_and_close()
tree = ET.fromstring(orgData.decode("utf-8"))
for catalogue in find_catalogues(tree, name=catalogName):
response = yield from client.request(
"GET", catalogue.attrib.get("href"),
headers=headers)
catalogueData = yield from response.read_and_close()
tree = ET.fromstring(catalogueData.decode("utf-8"))
for catalogueItem in find_catalogueitems(tree, name=templateName):
response = yield from client.request(
"GET", catalogueItem.attrib.get("href"),
headers=headers)
catalogueItemData = yield from response.read_and_close()
tree = ET.fromstring(catalogueItemData.decode("utf-8"))
rv = next(find_templates(tree), None)
return rv
def hosts(session, state=None):
query = session.query(Appliance)
if not state:
return query.all()
return [h for h in query.all() if h.changes[-1].state.name == state]
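# Illustrative usage (state names as used elsewhere in this module):
#   hosts(session, state="operational")
# returns only the appliances whose most recent Touch is in that state; with
# no state argument it returns every Appliance.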
class Strategy:
@staticmethod
def config(providerName):
return next((
cfg for p in providers.values() for cfg in p
if cfg["metadata"]["path"] == providerName),
None
)
@staticmethod
def recommend(host): # TODO sort providers
providerName = host.organisation.subscriptions[0].provider.name
return Strategy.config(providerName)
class PreCheckAgent(Agent):
CheckedAsOperational = namedtuple(
"CheckedAsOperational",
["uuid", "ts", "provider", "ip", "creation", "power", "health"])
CheckedAsPreOperational = namedtuple(
"CheckedAsPreOperational",
["uuid", "ts", "provider", "ip", "creation", "power", "health"])
CheckedAsProvisioning = namedtuple(
"CheckedAsProvisioning",
["uuid", "ts", "provider", "ip", "creation", "power", "health"])
@property
def callbacks(self):
return [
(PreCheckAgent.CheckedAsOperational, self.touch_to_operational),
(PreCheckAgent.CheckedAsPreOperational, self.touch_to_preoperational),
(PreCheckAgent.CheckedAsProvisioning, self.touch_to_provisioning),
]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_check":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_operational(self, msg:CheckedAsOperational, session):
operational = session.query(ApplianceState).filter(
ApplianceState.name == "operational").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(
artifact=app, actor=actor, state=operational, at=msg.ts)
resource = ProviderReport(
creation=msg.creation, power=msg.power, health=msg.health,
touch=act, provider=provider)
session.add(resource)
session.commit()
return act
def touch_to_preoperational(self, msg:CheckedAsPreOperational, session):
preoperational = session.query(ApplianceState).filter(
ApplianceState.name == "pre_operational").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(
artifact=app, actor=actor, state=preoperational, at=msg.ts)
ip = IPAddress(value=msg.ip, touch=act, provider=provider)
report = ProviderReport(
creation=msg.creation, power=msg.power, health=msg.health,
touch=act, provider=provider)
session.add_all((ip, report))
session.commit()
return act
def touch_to_provisioning(self, msg:CheckedAsProvisioning, session):
provisioning = session.query(ApplianceState).filter(
ApplianceState.name == "provisioning").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(
artifact=app, actor=actor, state=provisioning, at=msg.ts)
resource = ProviderReport(
creation=msg.creation, power=msg.power, health=msg.health,
touch=act, provider=provider)
session.add(resource)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.precheck")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
while True:
job = yield from self.work.get()
log.debug(job)
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
node = next((i for i in resources if isinstance(i, Node)), None)
config = Strategy.config(node.provider.name)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
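            # job.token is the (provider, key, value) credentials triple
            # yielded by jobs(); key and value become an auth header, e.g. a
            # hypothetical ("vcloud", "x-vcloud-authorization", "<token>")
            # adds "x-vcloud-authorization: <token>" to the request.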
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
response = yield from client.request(
"GET", node.uri, headers=headers)
vApp = yield from response.read_and_close()
log.debug(vApp)
tree = ET.fromstring(vApp.decode("utf-8"))
creation = "unknown"
ipAddr = None
messageType = PreCheckAgent.CheckedAsProvisioning
try:
scriptElement = next(find_customizationscript(tree))
except StopIteration:
# Not necessarily an error; possibly still provisioning
log.warning("Missing customisation script")
else:
try:
nc = next(find_networkconnection(tree))
except StopIteration:
log.debug("Missing network connection")
creation = "undeployed"
else:
try:
ipAddr = next(
i for i in nc if i.tag.endswith("IpAddress")).text
except Exception as e:
log.error(e)
script = unescape_script(scriptElement.text).splitlines()
if len(script) > 5:
# Customisation script is in place
messageType = (PreCheckAgent.CheckedAsOperational if any(
i for i in resources
if i.touch.state.name == "operational")
else PreCheckAgent.CheckedAsPreOperational)
if tree.attrib.get("deployed") == "true":
creation = "deployed"
msg = messageType(
app.uuid, datetime.datetime.utcnow(),
node.provider.name, ipAddr,
creation, None, None
)
yield from msgQ.put(msg)
class PreDeleteAgent(Agent):
Message = namedtuple(
"DeletedMessage", ["uuid", "ts", "provider"])
@property
def callbacks(self):
return [(PreDeleteAgent.Message, self.touch_to_deleted)]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_delete":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_deleted(self, msg:Message, session):
deleted = session.query(ApplianceState).filter(
ApplianceState.name == "deleted").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=deleted, at=msg.ts)
session.add(act)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.predelete")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
while True:
job = yield from self.work.get()
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
node = next(i for i in resources if isinstance(i, Node))
config = Strategy.config(node.provider.name)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
response = yield from client.request(
"DELETE", node.uri,
headers=headers)
reply = yield from response.read_and_close()
msg = PreDeleteAgent.Message(
app.uuid, datetime.datetime.utcnow(),
node.provider.name
)
yield from msgQ.put(msg)
class PreOperationalAgent(Agent):
OperationalMessage = namedtuple(
"OperationalMessage",
["uuid", "ts", "provider", "ip_internal", "ip_external"])
ResourceConstrainedMessage = namedtuple(
"ResourceConstrainedMessage",
["uuid", "ts", "provider", "ip_internal", "ip_external"])
@property
def callbacks(self):
return [
(PreOperationalAgent.OperationalMessage, self.touch_to_operational),
(PreOperationalAgent.ResourceConstrainedMessage, self.touch_to_prestop),
]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_operational":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_operational(self, msg:OperationalMessage, session):
operational = session.query(ApplianceState).filter(
ApplianceState.name == "operational").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=operational, at=msg.ts)
if msg.ip_internal and msg.ip_external:
resource = NATRouting(
touch=act, provider=provider,
ip_int=msg.ip_internal, ip_ext=msg.ip_external)
session.add(resource)
else:
session.add(act)
session.commit()
return act
def touch_to_prestop(self, msg:ResourceConstrainedMessage, session):
prestop = session.query(ApplianceState).filter(
ApplianceState.name == "pre_stop").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=prestop, at=msg.ts)
session.add(act)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, session):
log = logging.getLogger("cloudhands.burst.appliance.preoperation")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
natMacro = PageTemplateFile(pkg_resources.resource_filename(
"cloudhands.burst.drivers", "NatRule.pt"))
fwMacro = PageTemplateFile(pkg_resources.resource_filename(
"cloudhands.burst.drivers", "FirewallRule.pt"))
while True:
job = yield from self.work.get()
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
choice = next(i for i in resources if isinstance(i, CatalogueChoice))
node = next(i for i in resources if isinstance(i, Node))
config = Strategy.config(node.provider.name)
network = config.get("vdc", "network", fallback=None)
if not choice.natrouted:
log.info("No rules applied for {} {}".format(
choice.name, app.uuid))
msg = PreOperationalAgent.OperationalMessage(
app.uuid, datetime.datetime.utcnow(),
node.provider.name,
None, None
)
yield from msgQ.put(msg)
continue
log.info("Applying rules for {} {}".format(choice.name, app.uuid))
try:
privateIP = next(
i for i in resources if isinstance(i, IPAddress))
except StopIteration:
log.error("No IPAddress")
continue
else:
log.debug(privateIP.value)
subs = next(i for i in app.organisation.subscriptions
if i.provider.name == node.provider.name)
ipPool = {r.value for c in subs.changes for r in c.resources
if isinstance(r, IPAddress)}
ipTaken = {i.ip_ext for i in session.query(NATRouting).join(
Provider).filter(Provider.name == node.provider.name).all()}
ipFree = ipPool.difference(ipTaken)
if not ipFree:
log.warning("No public IP Addresses available")
msg = PreOperationalAgent.ResourceConstrainedMessage(
app.uuid, datetime.datetime.utcnow(),
node.provider.name, privateIP.value, None
)
yield from msgQ.put(msg)
continue
else:
log.info("Allocating from {}".format(ipFree))
publicIP = session.query(IPAddress).filter(
IPAddress.value == ipFree.pop()).first()
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
url = "{scheme}://{host}:{port}/{endpoint}".format(
scheme="https",
host=config["host"]["name"],
port=config["host"]["port"],
endpoint="api/org")
response = yield from client.request(
"GET", url,
headers=headers)
orgList = yield from response.read_and_close()
tree = ET.fromstring(orgList.decode("utf-8"))
orgFound = find_orgs(tree, name=config["vdc"]["org"])
try:
org = next(orgFound)
except StopIteration:
log.error("Failed to find org")
continue
response = yield from client.request(
"GET", org.attrib.get("href"),
headers=headers)
orgData = yield from response.read_and_close()
tree = ET.fromstring(orgData.decode("utf-8"))
try:
vdcLink = next(find_vdcs(tree))
except StopIteration:
log.error("Failed to find VDC")
continue
response = yield from client.request(
"GET", vdcLink.attrib.get("href"),
headers=headers)
vdcData = yield from response.read_and_close()
tree = ET.fromstring(vdcData.decode("utf-8"))
# Gateway details via query to vdc
try:
gwLink = next(
find_records(tree, rel="edgeGateways"))
except StopIteration:
log.error("Failed to find gateways")
continue
# Gateway data from link
response = yield from client.request(
"GET", gwLink.attrib.get("href"),
headers=headers)
gwData = yield from response.read_and_close()
tree = ET.fromstring(gwData.decode("utf-8"))
gwRecord = next(
find_results(tree, name=config["gateway"]["name"]))
response = yield from client.request(
"GET", gwRecord.attrib.get("href"),
headers=headers)
gwData = yield from response.read_and_close()
tree = ET.fromstring(gwData.decode("utf-8"))
try:
interface = next(
find_networkinterface(
tree, name=config["gateway"]["interface"]))
except StopIteration:
log.error("Failed to find network")
try:
eGSC = next(
c for i in tree if i.tag.endswith("Configuration")
for c in i
if c.tag.endswith("EdgeGatewayServiceConfiguration"))
except StopIteration:
log.error("Missing Edge gateway service configuration")
continue
try:
natService = next(
i for i in eGSC if i.tag.endswith("NatService"))
except StopIteration:
natService = ET.XML(
"""<NatService><IsEnabled>true</IsEnabled></NatService>""")
eGSC.append(natService)
try:
fwService = next(
i for i in eGSC if i.tag.endswith("FirewallService"))
except StopIteration:
log.error("Failed to find firewall service")
# SNAT rule already defined for entire subnet
defn = {
"typ": "DNAT",
"network": {
"name": config["gateway"]["interface"],
"href": interface.attrib.get("href")
},
"rule": {
"rx": publicIP.value,
"tx": privateIP.value,
},
"description": "Public IP PNAT"
}
fwService.append(ET.XML(fwMacro(**defn)))
natService.append(ET.XML(natMacro(**defn)))
gwServiceCfgs = find_gatewayserviceconfiguration(tree)
try:
gwSCfg = next(gwServiceCfgs)
except StopIteration:
log.error("Failed to find gateway service configuration")
url = gwSCfg.attrib.get("href")
headers["Content-Type"] = (
"application/vnd.vmware.admin.edgeGatewayServiceConfiguration+xml")
response = yield from client.request(
"POST", url,
headers=headers,
data=ET.tostring(eGSC, encoding="utf-8"))
reply = yield from response.read_and_close()
log.debug(reply)
msg = PreOperationalAgent.OperationalMessage(
app.uuid, datetime.datetime.utcnow(),
node.provider.name,
defn["rule"]["tx"], defn["rule"]["rx"]
)
yield from msgQ.put(msg)
class PreProvisionAgent(Agent):
Message = namedtuple(
"ProvisioningMessage", ["uuid", "ts", "provider", "uri"])
@property
def callbacks(self):
return [(PreProvisionAgent.Message, self.touch_to_provisioning)]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_provision":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_provisioning(self, msg:Message, session):
provisioning = session.query(ApplianceState).filter(
ApplianceState.name == "provisioning").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=provisioning, at=msg.ts)
resource = Node(
name="", touch=act, provider=provider,
uri=msg.uri)
session.add(resource)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.preprovision")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
portalName, portal = next(iter(settings.items()))
        macro = PageTemplateFile(pkg_resources.resource_filename(
            "cloudhands.burst.drivers", "ComposeVAppParams.pt"))
while True:
job = yield from self.work.get()
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
label = next(i for i in resources if isinstance(i, Label))
choice = next(i for i in resources if isinstance(i, CatalogueChoice))
image = choice.name
config = Strategy.recommend(app)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
# Find template among catalogues
url = "{scheme}://{host}:{port}/{endpoint}".format(
scheme="https",
host=config["host"]["name"],
port=config["host"]["port"],
endpoint="api/catalogs/query")
response = yield from client.request(
"GET", url, headers=headers)
data = yield from response.read_and_close()
catalogues = [
i for i in find_catalogrecords(data.decode("utf-8"))
if i.attrib.get("name", None) in (
config["vdc"]["org"],
config["vdc"]["catalogue"]
)
]
template = yield from find_template_among_catalogues(
client, headers, image, catalogues
)
            if template is None:
                log.error("Couldn't find template {}".format(image))
                continue
response = yield from client.request(
"GET", template.get("href"),
headers=headers)
reply = yield from response.read_and_close()
log.debug(reply)
tree = ET.fromstring(reply.decode("utf-8"))
script = customizationScript.format(
host=portal["auth.rest"]["host"],
uuid=app.uuid)
vmConfigs = []
for vm in find_vms(tree):
ncs = next(find_networkconnectionsection(tree), None)
if ncs is None:
log.error("Couldn't find network connection section")
vmConfigs.append({
"href": vm.attrib.get("href"),
"name": uuid.uuid4().hex,
"networks": [
{"name": nc.attrib.get("network")}
for nc in find_networkconnection(vm)
],
"script": script})
# VDC details from organisation
url = "{scheme}://{host}:{port}/{endpoint}".format(
scheme="https",
host=config["host"]["name"],
port=config["host"]["port"],
endpoint="api/org")
response = yield from client.request(
"GET", url,
headers=headers)
orgList = yield from response.read_and_close()
tree = ET.fromstring(orgList.decode("utf-8"))
userOrg = next(find_orgs(tree, name=config["vdc"]["org"]), None)
response = yield from client.request(
"GET", userOrg.attrib.get("href"),
headers=headers)
orgData = yield from response.read_and_close()
tree = ET.fromstring(orgData.decode("utf-8"))
try:
vdcLink = next(find_vdcs(tree))
except StopIteration:
log.error("Failed to find VDC")
response = yield from client.request(
"GET", vdcLink.attrib.get("href"),
headers=headers)
vdcData = yield from response.read_and_close()
tree = ET.fromstring(vdcData.decode("utf-8"))
# Network details via query to vdc
try:
netLink = next(
find_records(tree, rel="orgVdcNetworks"))
except StopIteration:
log.error("Failed to find network")
response = yield from client.request(
"GET", netLink.attrib.get("href"),
headers=headers)
netData = yield from response.read_and_close()
tree = ET.fromstring(netData.decode("utf-8"))
netDetails = [
next(find_results(tree, name=name), None)
for n, name in sorted(config.items("network"))]
try:
data = {
"appliance": {
"name": label.name,
"description": "FIXME: Description",
"vms": vmConfigs,
},
"networks": [{
"name": net.attrib.get("name"),
"href": net.attrib.get("href"),
} for net in netDetails],
"template": {
"name": template.attrib.get("name"),
"href": template.attrib.get("href"),
},
}
url = "{vdc}/{endpoint}".format(
vdc=vdcLink.attrib.get("href"),
endpoint="action/composeVApp")
headers["Content-Type"] = (
"application/vnd.vmware.vcloud.instantiateVAppTemplateParams+xml")
headers["Content-Type"] = (
"application/vnd.vmware.vcloud.composeVAppParams+xml")
payload = macro(**data)
log.debug(payload)
except Exception as e:
log.error(e)
response = yield from client.request(
"POST", url,
headers=headers,
data=payload.encode("utf-8"))
reply = yield from response.read_and_close()
log.debug(reply)
tree = ET.fromstring(reply.decode("utf-8"))
try:
vApp = next(find_xpath(".", tree, name=label.name))
except StopIteration:
#TODO: Check error for duplicate, take action
log.error("Failed to find vapp")
else:
msg = PreProvisionAgent.Message(
app.uuid, datetime.datetime.utcnow(),
config["metadata"]["path"],
vApp.attrib.get("href")
)
yield from msgQ.put(msg)
class ProvisioningAgent(Agent):
Message = namedtuple("CheckRequiredMessage", ["uuid", "ts"])
@property
def callbacks(self):
return [
(ProvisioningAgent.Message, self.touch_to_precheck),
]
def jobs(self, session):
# TODO: get token (need user registration ProviderToken)
now = datetime.datetime.utcnow()
then = now - datetime.timedelta(seconds=20)
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "provisioning" and acts[-1].at < then:
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_precheck(self, msg:Message, session):
precheck = session.query(ApplianceState).filter(
ApplianceState.name == "pre_check").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
act = Touch(artifact=app, actor=actor, state=precheck, at=msg.ts)
session.add(act)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.provisioning")
log.info("Activated.")
while True:
job = yield from self.work.get()
log.debug(job)
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
node = next((i for i in resources if isinstance(i, Node)), None)
choice = next((i for i in resources
if isinstance(i, CatalogueChoice)), None)
if not (node and choice):
log.error("Missing data for new node")
config = Strategy.config(node.provider.name)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
response = yield from client.request(
"GET", node.uri, headers=headers)
reply = yield from response.read_and_close()
tree = ET.fromstring(reply.decode("utf-8"))
try:
sectionElement = next(find_customizationsection(tree))
except StopIteration:
log.warning("Missing customisation script")
msg = ProvisioningAgent.Message(
job.uuid, datetime.datetime.utcnow())
yield from msgQ.put(msg)
class PreStartAgent(Agent):
Message = namedtuple(
"OperationalMessage", ["uuid", "ts", "provider"])
@property
def callbacks(self):
return [(PreStartAgent.Message, self.touch_to_running)]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_start":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_running(self, msg:Message, session):
running = session.query(ApplianceState).filter(
ApplianceState.name == "running").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=running, at=msg.ts)
session.add(act)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.prestart")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
while True:
job = yield from self.work.get()
try:
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
node = next(i for i in resources if isinstance(i, Node))
config = Strategy.config(node.provider.name)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
deploy = textwrap.dedent("""
<DeployVAppParams xmlns="http://www.vmware.com/vcloud/v1.5"
powerOn="true" />
""")
url = "{}/action/deploy".format(node.uri)
headers["Content-Type"] = (
"application/vnd.vmware.vcloud.deployVAppParams+xml")
response = yield from client.request(
"POST", url,
headers=headers,
data=deploy.encode("utf-8"))
reply = yield from response.read_and_close()
except Exception as e:
log.error(e)
continue
msg = PreStartAgent.Message(
app.uuid, datetime.datetime.utcnow(),
node.provider.name
)
yield from msgQ.put(msg)
class PreStopAgent(Agent):
Message = namedtuple(
"StoppedMessage", ["uuid", "ts", "provider"])
@property
def callbacks(self):
return [(PreStopAgent.Message, self.touch_to_stopped)]
def jobs(self, session):
for app in session.query(Appliance).all(): # TODO: Filter earlier
acts = app.changes
if acts[-1].state.name == "pre_stop":
prvdrName = app.organisation.subscriptions[0].provider.name
token = session.query(ProviderToken).join(Touch).join(
Provider).filter(Touch.actor == acts[0].actor).filter(
Provider.name == prvdrName).order_by(
desc(Touch.at)).first()
creds = (prvdrName, token.key, token.value) if token else None
yield Job(app.uuid, creds, app)
def touch_to_stopped(self, msg:Message, session):
stopped = session.query(ApplianceState).filter(
ApplianceState.name == "stopped").one()
app = session.query(Appliance).filter(
Appliance.uuid == msg.uuid).first()
actor = session.query(Component).filter(
Component.handle=="burst.controller").one()
provider = session.query(Provider).filter(
Provider.name==msg.provider).one()
act = Touch(artifact=app, actor=actor, state=stopped, at=msg.ts)
session.add(act)
session.commit()
return act
@asyncio.coroutine
def __call__(self, loop, msgQ, *args):
log = logging.getLogger("cloudhands.burst.appliance.prestop")
log.info("Activated.")
ET.register_namespace("", "http://www.vmware.com/vcloud/v1.5")
while True:
job = yield from self.work.get()
app = job.artifact
resources = sorted(
(r for c in app.changes for r in c.resources),
key=operator.attrgetter("touch.at"),
reverse=True)
node = next(i for i in resources if isinstance(i, Node))
config = Strategy.config(node.provider.name)
headers = {
"Accept": "application/*+xml;version=5.5",
}
try:
headers[job.token[1]] = job.token[2]
except (TypeError, IndexError):
log.warning("No token supplied")
client = aiohttp.client.HttpClient(
["{host}:{port}".format(
host=config["host"]["name"],
port=config["host"]["port"])
],
verify_ssl=config["host"].getboolean("verify_ssl_cert")
)
unDeploy = textwrap.dedent("""
<UndeployVAppParams xmlns="http://www.vmware.com/vcloud/v1.5">
<UndeployPowerAction>powerOff</UndeployPowerAction>
</UndeployVAppParams>
""")
url = "{}/action/undeploy".format(node.uri)
headers["Content-Type"] = (
"application/vnd.vmware.vcloud.undeployVAppParams+xml")
response = yield from client.request(
"POST", url,
headers=headers,
data=unDeploy.encode("utf-8"))
reply = yield from response.read_and_close()
msg = PreStopAgent.Message(
app.uuid, datetime.datetime.utcnow(),
node.provider.name
)
yield from msgQ.put(msg)
|
|
# -*- coding: utf-8 -*-
import os, sys; sys.path.insert(0, os.path.join("..", ".."))
import unittest
from pattern import graph
#---------------------------------------------------------------------------------------------------
class TestUtilityFunctions(unittest.TestCase):
def setUp(self):
pass
def test_deepcopy(self):
        # Objects with a copy() method are responsible for deep-copying themselves.
class MyObject:
def __init__(self, i):
self.i = i
def copy(self):
return MyObject(graph.deepcopy(self.i))
# Assert deep copy for different types.
for o1 in (
None, True, False,
"a", u"a",
1, 1.0, 1L, complex(1),
list([1]), tuple([1]), set([1]), frozenset([1]),
dict(a=1), {frozenset(["a"]):1}, {MyObject(1):1},
MyObject(1)):
o2 = graph.deepcopy(o1)
if isinstance(o2, (list, tuple, set, dict, MyObject)):
self.assertTrue(id(o1) != id(o2))
print "pattern.graph.deepcopy()"
def test_unique(self):
# Assert list copy with unique items.
v = graph.unique([1,1,1])
self.assertEqual(len(v), 1)
self.assertEqual(v[0], 1)
print "pattern.graph.unique()"
def test_coordinates(self):
# Assert 2D coordinates.
x, y = graph.coordinates(10, 10, 100, 30)
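        # Worked arithmetic (assuming a polar-offset signature
        # coordinates(x0, y0, distance, angle)):
        #   x = 10 + 100 * cos(30 deg) = 96.60
        #   y = 10 + 100 * sin(30 deg) = 60.00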
self.assertAlmostEqual(x, 96.60, places=2)
self.assertAlmostEqual(y, 60.00, places=2)
print "pattern.graph.coordinates()"
#---------------------------------------------------------------------------------------------------
class TestNode(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph()
self.g.add_node("a", radius=5, stroke=(0,0,0,1), strokewidth=1, fill=None, text=(0,0,0,1))
self.g.add_node("b", radius=5)
self.g.add_node("c", radius=5)
self.g.add_edge("a", "b")
self.g.add_edge("b", "c")
def test_node(self):
# Assert node properties.
n = self.g["a"]
self.assertTrue(isinstance(n, graph.Node))
self.assertTrue(n == self.g["a"])
self.assertTrue(n != self.g["b"])
self.assertTrue(n.graph == self.g)
self.assertTrue(n._distance == self.g.distance)
self.assertTrue(n.id == "a")
self.assertTrue(n.x == 0.0)
self.assertTrue(n.y == 0.0)
self.assertTrue(n.force.x == graph.Vector(0.0, 0.0).x)
self.assertTrue(n.force.y == graph.Vector(0.0, 0.0).y)
self.assertTrue(n.radius == 5)
self.assertTrue(n.fill == None)
self.assertTrue(n.stroke == (0,0,0,1))
self.assertTrue(n.strokewidth == 1)
self.assertTrue(n.text.string == u"a")
self.assertTrue(n.text.width == 85)
self.assertTrue(n.text.fill == (0,0,0,1))
self.assertTrue(n.text.fontsize == 11)
self.assertTrue(n.fixed == False)
self.assertTrue(n.weight == 0)
self.assertTrue(n.centrality == 0)
print "pattern.graph.Node"
def test_edge(self):
# Assert node edges.
n1 = self.g["a"]
n2 = self.g["b"]
self.assertTrue(n1.edges[0].node1.id == "a")
self.assertTrue(n1.edges[0].node2.id == "b")
self.assertTrue(n1.links[0].id == "b")
self.assertTrue(n1.links[0] == self.g.edges[0].node2)
self.assertTrue(n1.links.edge("b") == self.g.edges[0])
self.assertTrue(n1.links.edge(n2) == self.g.edges[0])
print "pattern.graph.Node.links"
print "pattern.graph.Node.edges"
def test_flatten(self):
# Assert node spreading activation.
n = self.g["a"]
self.assertTrue(set(n.flatten(depth=0)) == set([n]))
self.assertTrue(set(n.flatten(depth=1)) == set([n, n.links[0]]))
self.assertTrue(set(n.flatten(depth=2)) == set(self.g.nodes))
print "pattern.graph.Node.flatten()"
def test_text(self):
n = self.g.add_node("d", text=None)
self.assertTrue(n.text == None)
print "pattern.graph.Node.text"
#---------------------------------------------------------------------------------------------------
class TestEdge(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph()
self.g.add_node("a")
self.g.add_node("b")
self.g.add_edge("a", "b", weight=0.0, length=1.0, type="is-a", stroke=(0,0,0,1), strokewidth=1)
def test_edge(self):
# Assert edge properties.
e = self.g.edges[0]
self.assertTrue(isinstance(e, graph.Edge))
self.assertTrue(e.node1 == self.g["a"])
self.assertTrue(e.node2 == self.g["b"])
self.assertTrue(e.weight == 0.0)
self.assertTrue(e.length == 1.0)
self.assertTrue(e.type == "is-a")
self.assertTrue(e.stroke == (0,0,0,1))
self.assertTrue(e.strokewidth == 1)
print "pattern.graph.Edge"
#---------------------------------------------------------------------------------------------------
class TestGraph(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph(layout=graph.SPRING, distance=10.0)
self.g.add_node("a")
self.g.add_node("b")
self.g.add_node("c")
self.g.add_edge("a", "b")
self.g.add_edge("b", "c")
def test_graph(self):
# Assert graph properties.
g = self.g.copy()
self.assertTrue(len(g.nodes) == 3)
self.assertTrue(len(g.edges) == 2)
self.assertTrue(g.distance == 10.0)
self.assertTrue(g.density == 2 / 3.0)
self.assertTrue(g.is_complete == False)
self.assertTrue(g.is_sparse == False)
self.assertTrue(g.is_dense == True)
self.assertTrue(g._adjacency == None)
self.assertTrue(isinstance(g.layout, graph.GraphLayout))
self.assertTrue(isinstance(g.layout, graph.GraphSpringLayout))
print "pattern.graph.Graph"
def test_graph_nodes(self):
# Assert graph nodes.
g = self.g.copy()
g.append(graph.Node, "d")
g.add_node("e", base=graph.Node, root=True)
self.assertTrue("d" in g)
self.assertTrue("e" in g)
self.assertTrue(g.root == g["e"])
self.assertTrue(g["e"] == g.node("e") == g.nodes[-1])
g.remove(g["d"])
g.remove(g["e"])
self.assertTrue("d" not in g)
self.assertTrue("e" not in g)
print "pattern.graph.Graph.add_node()"
def test_graph_edges(self):
# Assert graph edges.
g = self.g.copy()
v1 = g.add_edge("d", "e") # Automatically create Node(d) and Node(e).
v2 = g.add_edge("d", "e") # Yields existing edge.
v3 = g.add_edge("e", "d") # Opposite direction.
self.assertEqual(v1, v2)
self.assertEqual(v2, g.edge("d", "e"))
self.assertEqual(v3, g.edge("e", "d"))
self.assertEqual(g["d"].links.edge(g["e"]), v2)
self.assertEqual(g["e"].links.edge(g["d"]), v3)
g.remove(g["d"])
g.remove(g["e"])
# Edges d->e and e->d should now be removed automatically.
self.assertEqual(len(g.edges), 2)
print "pattern.graph.Graph.add_edge()"
def test_cache(self):
# Assert adjacency cache is flushed when nodes, edges or direction changes.
g = self.g.copy()
g.eigenvector_centrality()
self.assertEqual(g._adjacency[0]["a"], {})
self.assertEqual(g._adjacency[0]["b"]["a"], 1.0)
g.add_node("d")
g.add_node("e")
self.assertEqual(g._adjacency, None)
g.betweenness_centrality()
self.assertEqual(g._adjacency[0]["a"]["b"], 1.0)
self.assertEqual(g._adjacency[0]["b"]["a"], 1.0)
g.add_edge("d", "e", weight=0.0)
g.remove(g.node("d"))
g.remove(g.node("e"))
print "pattern.graph.Graph._adjacency"
def test_paths(self):
# Assert node paths.
g = self.g.copy()
self.assertEqual(g.paths("a", "c"), g.paths(g["a"], g["c"]))
self.assertEqual(g.paths("a", "c"), [[g["a"], g["b"], g["c"]]])
self.assertEqual(g.paths("a", "c", length=2), [])
# Assert node shortest paths.
g.add_edge("a", "c")
self.assertEqual(g.paths("a", "c", length=2), [[g["a"], g["c"]]])
self.assertEqual(g.shortest_path("a", "c"), [g["a"], g["c"]])
self.assertEqual(g.shortest_path("c", "a"), [g["c"], g["a"]])
self.assertEqual(g.shortest_path("c", "a", directed=True), None)
g.remove(g.edge("a", "c"))
g.add_node("d")
self.assertEqual(g.shortest_path("a", "d"), None)
self.assertEqual(g.shortest_paths("a")["b"], [g["a"], g["b"]])
self.assertEqual(g.shortest_paths("a")["c"], [g["a"], g["b"], g["c"]])
self.assertEqual(g.shortest_paths("a")["d"], None)
self.assertEqual(g.shortest_paths("c", directed=True)["a"], None)
g.remove(g["d"])
print "pattern.graph.Graph.paths()"
print "pattern.graph.Graph.shortest_path()"
print "pattern.graph.Graph.shortest_paths()"
def test_eigenvector_centrality(self):
# Assert eigenvector centrality.
self.assertEqual(self.g["a"]._weight, None)
v = self.g.eigenvector_centrality()
self.assertTrue(isinstance(v["a"], float))
self.assertTrue(v["a"] == v[self.g.node("a")])
self.assertTrue(v["a"] < v["c"])
self.assertTrue(v["b"] < v["c"])
print "pattern.graph.Graph.eigenvector_centrality()"
def test_betweenness_centrality(self):
# Assert betweenness centrality.
self.assertEqual(self.g["a"]._centrality, None)
v = self.g.betweenness_centrality()
self.assertTrue(isinstance(v["a"], float))
self.assertTrue(v["a"] == v[self.g.node("a")])
self.assertTrue(v["a"] < v["b"])
self.assertTrue(v["c"] < v["b"])
print "pattern.graph.Graph.eigenvector_centrality()"
def test_sorted(self):
        # Assert graph node sorting.
o1 = self.g.sorted(order=graph.WEIGHT, threshold=0.0)
o2 = self.g.sorted(order=graph.CENTRALITY, threshold=0.0)
self.assertEqual(o1[0], self.g["c"])
self.assertEqual(o2[0], self.g["b"])
print "pattern.graph.Graph.sorted()"
def test_prune(self):
# Assert leaf pruning.
g = self.g.copy()
g.prune(1)
self.assertEqual(len(g), 1)
self.assertEqual(g.nodes, [g["b"]])
print "pattern.graph.Graph.prune()"
def test_fringe(self):
# Assert leaf fetching.
g = self.g.copy()
self.assertEqual(g.fringe(0), [g["a"], g["c"]])
self.assertEqual(g.fringe(1), [g["a"], g["b"], g["c"]])
print "pattern.graph.Graph.fringe()"
def test_split(self):
        # Assert subgraph splitting.
self.assertTrue(isinstance(self.g.split(), list))
self.assertTrue(isinstance(self.g.split()[0], graph.Graph))
print "pattern.graph.Graph.split()"
def test_update(self):
# Assert node position after updating layout algorithm.
self.g.update()
for n in self.g.nodes:
self.assertTrue(n.x != 0)
self.assertTrue(n.y != 0)
self.g.layout.reset()
for n in self.g.nodes:
self.assertTrue(n.x == 0)
self.assertTrue(n.y == 0)
print "pattern.graph.Graph.update()"
def test_copy(self):
# Assert deep copy of Graph.
g1 = self.g
g2 = self.g.copy()
self.assertTrue(set(g1) == set(g2)) # Same node id's.
self.assertTrue(id(g1["a"]) != id(g2["b"])) # Different node objects.
g3 = self.g.copy(nodes=[self.g["a"], self.g["b"]])
g3 = self.g.copy(nodes=["a", "b"])
        self.assertEqual(len(g3.nodes), 2)
        self.assertEqual(len(g3.edges), 1)
# Assert copy with subclasses of Node and Edge.
class MyNode(graph.Node):
pass
class MyEdge(graph.Edge):
pass
g4 = graph.Graph()
g4.append(MyNode, "a")
g4.append(MyNode, "b")
g4.append(MyEdge, "a", "b")
g4 = g4.copy()
self.assertTrue(isinstance(g4.nodes[0], MyNode))
self.assertTrue(isinstance(g4.edges[0], MyEdge))
print "pattern.graph.Graph.copy()"
#---------------------------------------------------------------------------------------------------
class TestGraphLayout(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph(layout=graph.SPRING, distance=10.0)
self.g.add_node("a")
self.g.add_node("b")
self.g.add_node("c")
self.g.add_edge("a", "b")
self.g.add_edge("b", "c")
def test_layout(self):
# Assert GraphLayout properties.
gl = graph.GraphLayout(graph=self.g)
self.assertTrue(gl.graph == self.g)
self.assertTrue(gl.bounds == (0,0,0,0))
self.assertTrue(gl.iterations == 0)
gl.update()
self.assertTrue(gl.iterations == 1)
print "pattern.graph.GraphLayout"
class TestGraphSpringLayout(TestGraphLayout):
def test_layout(self):
# Assert GraphSpringLayout properties.
gl = self.g.layout
self.assertTrue(gl.graph == self.g)
self.assertTrue(gl.k == 4.0)
self.assertTrue(gl.force == 0.01)
self.assertTrue(gl.repulsion == 15)
self.assertTrue(gl.bounds == (0,0,0,0))
self.assertTrue(gl.iterations == 0)
gl.update()
self.assertTrue(gl.iterations == 1)
self.assertTrue(gl.bounds[0] < 0)
self.assertTrue(gl.bounds[1] < 0)
self.assertTrue(gl.bounds[2] > 0)
self.assertTrue(gl.bounds[3] > 0)
print "pattern.graph.GraphSpringLayout"
def test_distance(self):
# Assert 2D distance.
n1 = graph.Node()
n2 = graph.Node()
n1.x = -100
n2.x = +100
d = self.g.layout._distance(n1, n2)
self.assertEqual(d, (200.0, 0.0, 200.0, 40000.0))
print "pattern.graph.GraphSpringLayout._distance"
def test_repulsion(self):
# Assert repulsive node force.
gl = self.g.layout
d1 = gl._distance(self.g["a"], self.g["c"])[2]
gl.update()
d2 = gl._distance(self.g["a"], self.g["c"])[2]
self.assertTrue(d2 > d1)
self.g.layout.reset()
print "pattern.graph.GraphSpringLayout._repulse()"
def test_attraction(self):
# Assert attractive edge force.
gl = self.g.layout
self.g["a"].x = -100
self.g["b"].y = +100
d1 = gl._distance(self.g["a"], self.g["b"])[2]
gl.update()
d2 = gl._distance(self.g["a"], self.g["b"])[2]
self.assertTrue(d2 < d1)
print "pattern.graph.GraphSpringLayout._attract()"
#---------------------------------------------------------------------------------------------------
class TestGraphTraversal(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph()
self.g.add_edge("a", "b", weight=0.5)
self.g.add_edge("a", "c")
self.g.add_edge("b", "d")
self.g.add_edge("d", "e")
self.g.add_node("x")
def test_search(self):
# Assert depth-first vs. breadth-first search.
def visit(node):
a.append(node)
def traversable(node, edge):
if edge.node2.id == "e": return False
g = self.g
a = []
graph.depth_first_search(g["a"], visit, traversable)
self.assertEqual(a, [g["a"], g["b"], g["d"], g["c"]])
a = []
graph.breadth_first_search(g["a"], visit, traversable)
self.assertEqual(a, [g["a"], g["b"], g["c"], g["d"]])
print "pattern.graph.depth_first_search()"
print "pattern.graph.breadth_first_search()"
def test_paths(self):
# Assert depth-first all paths.
g = self.g.copy()
g.add_edge("a","d")
for id1, id2, length, path in (
("a", "a", 1, [["a"]]),
("a", "d", 3, [["a","d"], ["a","b","d"]]),
("a", "d", 2, [["a","d"]]),
("a", "d", 1, []),
("a", "x", 1, [])):
p = graph.paths(g, id1, id2, length)
self.assertEqual(p, path)
print "pattern.graph.paths()"
def test_edges(self):
# Assert path of nodes to edges.
g = self.g
p = [g["a"], g["b"], g["d"], g["x"]]
e = list(graph.edges(p))
self.assertEqual(e, [g.edge("a","b"), g.edge("b","d"), None])
print "pattern.graph.edges()"
def test_adjacency(self):
# Assert adjacency map with different settings.
a = [
graph.adjacency(self.g),
graph.adjacency(self.g, directed=True),
graph.adjacency(self.g, directed=True, reversed=True),
graph.adjacency(self.g, stochastic=True),
graph.adjacency(self.g, heuristic=lambda id1, id2: 0.1),
]
for i in range(len(a)):
a[i] = sorted((id1, sorted((id2, round(w,2)) for id2, w in p.items())) for id1, p in a[i].items())
self.assertEqual(a[0], [
("a", [("b", 0.75), ("c", 1.0)]),
("b", [("a", 0.75), ("d", 1.0)]),
("c", [("a", 1.0)]),
("d", [("b", 1.0), ("e", 1.0)]),
("e", [("d", 1.0)]),
("x", [])])
self.assertEqual(a[1], [
("a", [("b", 0.75), ("c", 1.0)]),
("b", [("d", 1.0)]),
("c", []),
("d", [("e", 1.0)]),
("e", []),
("x", [])])
self.assertEqual(a[2], [
("a", []),
("b", [("a", 0.75)]),
("c", [("a", 1.0)]),
("d", [("b", 1.0)]),
("e", [("d", 1.0)]),
("x", [])])
self.assertEqual(a[3], [
("a", [("b", 0.43), ("c", 0.57)]),
("b", [("a", 0.43), ("d", 0.57)]),
("c", [("a", 1.0)]),
("d", [("b", 0.5), ("e", 0.5)]),
("e", [("d", 1.0)]),
("x", [])])
self.assertEqual(a[4], [
("a", [("b", 0.85), ("c", 1.1)]),
("b", [("a", 0.85), ("d", 1.1)]),
("c", [("a", 1.1)]),
("d", [("b", 1.1), ("e", 1.1)]),
("e", [("d", 1.1)]),
("x", [])])
print "pattern.graph.adjacency()"
def test_dijkstra_shortest_path(self):
# Assert Dijkstra's algorithm (node1 -> node2).
g = self.g.copy()
g.add_edge("d","a")
for id1, id2, heuristic, directed, path in (
("a", "d", None, False, ["a", "d"]),
("a", "d", None, True, ["a", "b", "d"]),
("a", "d", lambda id1, id2: id1=="d" and id2=="a" and 1 or 0, False, ["a", "b", "d"])):
p = graph.dijkstra_shortest_path(g, id1, id2, heuristic, directed)
self.assertEqual(p, path)
print "pattern.graph.dijkstra_shortest_path()"
def test_dijkstra_shortest_paths(self):
# Assert Dijkstra's algorithm (node1 -> all).
g = self.g.copy()
g.add_edge("d","a")
a = [
graph.dijkstra_shortest_paths(g, "a"),
graph.dijkstra_shortest_paths(g, "a", directed=True),
graph.dijkstra_shortest_paths(g, "a", heuristic=lambda id1, id2: id1=="d" and id2=="a" and 1 or 0)
]
for i in range(len(a)):
a[i] = sorted(a[i].items())
self.assertEqual(a[0], [
("a", ["a"]),
("b", ["a", "b"]),
("c", ["a", "c"]),
("d", ["a", "d"]),
("e", ["a", "d", "e"]),
("x", None)])
self.assertEqual(a[1], [
("a", ["a"]),
("b", ["a", "b"]),
("c", ["a", "c"]),
("d", ["a", "b", "d"]),
("e", ["a", "b", "d", "e"]),
("x", None)])
self.assertEqual(a[2], [
("a", ["a"]),
("b", ["a", "b"]),
("c", ["a", "c"]),
("d", ["a", "b", "d"]),
("e", ["a", "b", "d", "e"]),
("x", None)])
print "pattern.graph.dijkstra_shortest_paths()"
def test_floyd_warshall_all_pairs_distance(self):
# Assert all pairs path distance.
p1 = graph.floyd_warshall_all_pairs_distance(self.g)
p2 = sorted((id1, sorted((id2, round(w,2)) for id2, w in p.items())) for id1, p in p1.items())
self.assertEqual(p2, [
("a", [("a", 0.00), ("b", 0.75), ("c", 1.00), ("d", 1.75), ("e", 2.75)]),
("b", [("a", 0.75), ("b", 0.00), ("c", 1.75), ("d", 1.00), ("e", 2.00)]),
("c", [("a", 1.00), ("b", 1.75), ("c", 2.00), ("d", 2.75), ("e", 3.75)]),
("d", [("a", 1.75), ("b", 1.00), ("c", 2.75), ("d", 0.00), ("e", 1.00)]),
("e", [("a", 2.75), ("b", 2.00), ("c", 3.75), ("d", 1.00), ("e", 2.00)]),
("x", [])])
# Assert predecessor tree.
self.assertEqual(graph.predecessor_path(p1.predecessors, "a", "d"), ["a", "b", "d"])
print "pattern.graph.floyd_warshall_all_pairs_distance()"
#---------------------------------------------------------------------------------------------------
class TestGraphPartitioning(unittest.TestCase):
def setUp(self):
# Create test graph.
self.g = graph.Graph()
self.g.add_edge("a", "b", weight=0.5)
self.g.add_edge("a", "c")
self.g.add_edge("b", "d")
self.g.add_edge("d", "e")
self.g.add_edge("x", "y")
self.g.add_node("z")
def test_union(self):
self.assertEqual(graph.union([1,2],[2,3]), [1,2,3])
def test_intersection(self):
self.assertEqual(graph.intersection([1,2],[2,3]), [2])
def test_difference(self):
self.assertEqual(graph.difference([1,2],[2,3]), [1])
def test_partition(self):
# Assert unconnected subgraph partitioning.
g = graph.partition(self.g)
self.assertTrue(len(g) == 3)
self.assertTrue(isinstance(g[0], graph.Graph))
        self.assertEqual(sorted(g[0].keys()), ["a","b","c","d","e"])
        self.assertEqual(sorted(g[1].keys()), ["x","y"])
        self.assertEqual(sorted(g[2].keys()), ["z"])
print "pattern.graph.partition()"
def test_clique(self):
# Assert node cliques.
v = graph.clique(self.g, "a")
self.assertEqual(v, ["a","b"])
self.g.add_edge("b","c")
v = graph.clique(self.g, "a")
self.assertEqual(v, ["a","b","c"])
v = graph.cliques(self.g, 2)
self.assertEqual(v, [["a","b","c"], ["b","d"], ["d","e"], ["x","y"]])
print "pattern.graph.clique()"
print "pattern.graph.cliques()"
#---------------------------------------------------------------------------------------------------
class TestGraphMaintenance(unittest.TestCase):
def setUp(self):
pass
def test_unlink(self):
# Assert remove all edges to/from Node(a).
g = graph.Graph()
g.add_edge("a", "b")
g.add_edge("a", "c")
graph.unlink(g, g["a"])
self.assertTrue(len(g.edges) == 0)
# Assert remove edges between Node(a) and Node(b)
g = graph.Graph()
g.add_edge("a", "b")
g.add_edge("a", "c")
graph.unlink(g, g["a"], "b")
self.assertTrue(len(g.edges) == 1)
print "pattern.graph.unlink()"
def test_redirect(self):
# Assert transfer connections of Node(a) to Node(d).
g = graph.Graph()
g.add_edge("a", "b")
g.add_edge("c", "a")
g.add_node("d")
graph.redirect(g, g["a"], "d")
self.assertTrue(len(g["a"].edges) == 0)
self.assertTrue(len(g["d"].edges) == 2)
self.assertTrue(g.edge("d","c").node1 == g["c"])
print "pattern.graph.redirect()"
def test_cut(self):
# Assert unlink Node(b) and redirect a->c and a->d.
g = graph.Graph()
g.add_edge("a", "b")
g.add_edge("b", "c")
g.add_edge("b", "d")
graph.cut(g, g["b"])
self.assertTrue(len(g["b"].edges) == 0)
self.assertTrue(g.edge("a","c") is not None)
self.assertTrue(g.edge("a","d") is not None)
print "pattern.graph.cut()"
def test_insert(self):
g = graph.Graph()
g.add_edge("a", "b")
g.add_node("c")
graph.insert(g, g["c"], g["a"], g["b"])
self.assertTrue(g.edge("a","b") is None)
self.assertTrue(g.edge("a","c") is not None)
self.assertTrue(g.edge("c","b") is not None)
print "pattern.graph.insert()"
#---------------------------------------------------------------------------------------------------
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestUtilityFunctions))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestNode))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestEdge))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraph))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraphLayout))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraphSpringLayout))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraphTraversal))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraphPartitioning))
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestGraphMaintenance))
return suite
if __name__ == "__main__":
unittest.TextTestRunner(verbosity=1).run(suite())
|
|
from jsonrpc import ServiceProxy
import sys
import string
# ===== BEGIN USER SETTINGS =====
# if you do not set these you will be prompted for a password for every command
rpcuser = ""
rpcpass = ""
# ====== END USER SETTINGS ======
if rpcpass == "":
access = ServiceProxy("http://127.0.0.1:9332")
else:
access = ServiceProxy("http://"+rpcuser+":"+rpcpass+"@127.0.0.1:9332")
cmd = sys.argv[1].lower()
if cmd == "backupwallet":
try:
path = raw_input("Enter destination path/filename: ")
print access.backupwallet(path)
except:
print "\n---An error occurred---\n"
elif cmd == "getaccount":
try:
addr = raw_input("Enter a fossilcoin address: ")
print access.getaccount(addr)
except:
print "\n---An error occurred---\n"
elif cmd == "getaccountaddress":
try:
acct = raw_input("Enter an account name: ")
print access.getaccountaddress(acct)
except:
print "\n---An error occurred---\n"
elif cmd == "getaddressesbyaccount":
try:
acct = raw_input("Enter an account name: ")
print access.getaddressesbyaccount(acct)
except:
print "\n---An error occurred---\n"
elif cmd == "getbalance":
try:
acct = raw_input("Enter an account (optional): ")
mc = raw_input("Minimum confirmations (optional): ")
try:
print access.getbalance(acct, mc)
except:
print access.getbalance()
except:
print "\n---An error occurred---\n"
elif cmd == "getblockbycount":
try:
height = raw_input("Height: ")
print access.getblockbycount(height)
except:
print "\n---An error occurred---\n"
elif cmd == "getblockcount":
try:
print access.getblockcount()
except:
print "\n---An error occurred---\n"
elif cmd == "getblocknumber":
try:
print access.getblocknumber()
except:
print "\n---An error occurred---\n"
elif cmd == "getconnectioncount":
try:
print access.getconnectioncount()
except:
print "\n---An error occurred---\n"
elif cmd == "getdifficulty":
try:
print access.getdifficulty()
except:
print "\n---An error occurred---\n"
elif cmd == "getgenerate":
try:
print access.getgenerate()
except:
print "\n---An error occurred---\n"
elif cmd == "gethashespersec":
try:
print access.gethashespersec()
except:
print "\n---An error occurred---\n"
elif cmd == "getinfo":
try:
print access.getinfo()
except:
print "\n---An error occurred---\n"
elif cmd == "getnewaddress":
try:
acct = raw_input("Enter an account name: ")
try:
print access.getnewaddress(acct)
except:
print access.getnewaddress()
except:
print "\n---An error occurred---\n"
elif cmd == "getreceivedbyaccount":
try:
acct = raw_input("Enter an account (optional): ")
mc = raw_input("Minimum confirmations (optional): ")
try:
print access.getreceivedbyaccount(acct, mc)
except:
print access.getreceivedbyaccount()
except:
print "\n---An error occurred---\n"
elif cmd == "getreceivedbyaddress":
try:
addr = raw_input("Enter a fossilcoin address (optional): ")
mc = raw_input("Minimum confirmations (optional): ")
try:
print access.getreceivedbyaddress(addr, mc)
except:
print access.getreceivedbyaddress()
except:
print "\n---An error occurred---\n"
elif cmd == "gettransaction":
try:
txid = raw_input("Enter a transaction ID: ")
print access.gettransaction(txid)
except:
print "\n---An error occurred---\n"
elif cmd == "getwork":
try:
data = raw_input("Data (optional): ")
try:
            print access.getwork(data)
        except:
            print access.getwork()
except:
print "\n---An error occurred---\n"
elif cmd == "help":
try:
cmd = raw_input("Command (optional): ")
try:
print access.help(cmd)
except:
print access.help()
except:
print "\n---An error occurred---\n"
elif cmd == "listaccounts":
try:
mc = raw_input("Minimum confirmations (optional): ")
try:
print access.listaccounts(mc)
except:
print access.listaccounts()
except:
print "\n---An error occurred---\n"
elif cmd == "listreceivedbyaccount":
try:
mc = raw_input("Minimum confirmations (optional): ")
incemp = raw_input("Include empty? (true/false, optional): ")
try:
print access.listreceivedbyaccount(mc, incemp)
except:
print access.listreceivedbyaccount()
except:
print "\n---An error occurred---\n"
elif cmd == "listreceivedbyaddress":
try:
mc = raw_input("Minimum confirmations (optional): ")
incemp = raw_input("Include empty? (true/false, optional): ")
try:
print access.listreceivedbyaddress(mc, incemp)
except:
print access.listreceivedbyaddress()
except:
print "\n---An error occurred---\n"
elif cmd == "listtransactions":
try:
acct = raw_input("Account (optional): ")
count = raw_input("Number of transactions (optional): ")
frm = raw_input("Skip (optional):")
try:
print access.listtransactions(acct, count, frm)
except:
print access.listtransactions()
except:
print "\n---An error occurred---\n"
elif cmd == "move":
try:
frm = raw_input("From: ")
to = raw_input("To: ")
amt = raw_input("Amount:")
mc = raw_input("Minimum confirmations (optional): ")
comment = raw_input("Comment (optional): ")
try:
print access.move(frm, to, amt, mc, comment)
except:
print access.move(frm, to, amt)
except:
print "\n---An error occurred---\n"
elif cmd == "sendfrom":
try:
frm = raw_input("From: ")
to = raw_input("To: ")
amt = raw_input("Amount:")
mc = raw_input("Minimum confirmations (optional): ")
comment = raw_input("Comment (optional): ")
commentto = raw_input("Comment-to (optional): ")
try:
print access.sendfrom(frm, to, amt, mc, comment, commentto)
except:
print access.sendfrom(frm, to, amt)
except:
print "\n---An error occurred---\n"
elif cmd == "sendmany":
try:
frm = raw_input("From: ")
to = raw_input("To (in format address1:amount1,address2:amount2,...): ")
mc = raw_input("Minimum confirmations (optional): ")
comment = raw_input("Comment (optional): ")
try:
print access.sendmany(frm,to,mc,comment)
except:
print access.sendmany(frm,to)
except:
print "\n---An error occurred---\n"
elif cmd == "sendtoaddress":
try:
to = raw_input("To (in format address1:amount1,address2:amount2,...): ")
amt = raw_input("Amount:")
comment = raw_input("Comment (optional): ")
commentto = raw_input("Comment-to (optional): ")
try:
print access.sendtoaddress(to,amt,comment,commentto)
except:
print access.sendtoaddress(to,amt)
except:
print "\n---An error occurred---\n"
elif cmd == "setaccount":
try:
addr = raw_input("Address: ")
acct = raw_input("Account:")
print access.setaccount(addr,acct)
except:
print "\n---An error occurred---\n"
elif cmd == "setgenerate":
try:
gen= raw_input("Generate? (true/false): ")
cpus = raw_input("Max processors/cores (-1 for unlimited, optional):")
try:
print access.setgenerate(gen, cpus)
except:
print access.setgenerate(gen)
except:
print "\n---An error occurred---\n"
elif cmd == "settxfee":
try:
amt = raw_input("Amount:")
print access.settxfee(amt)
except:
print "\n---An error occurred---\n"
elif cmd == "stop":
try:
print access.stop()
except:
print "\n---An error occurred---\n"
elif cmd == "validateaddress":
try:
addr = raw_input("Address: ")
print access.validateaddress(addr)
except:
print "\n---An error occurred---\n"
elif cmd == "walletpassphrase":
try:
pwd = raw_input("Enter wallet passphrase: ")
access.walletpassphrase(pwd, 60)
print "\n---Wallet unlocked---\n"
except:
print "\n---An error occurred---\n"
elif cmd == "walletpassphrasechange":
try:
pwd = raw_input("Enter old wallet passphrase: ")
pwd2 = raw_input("Enter new wallet passphrase: ")
        access.walletpassphrasechange(pwd, pwd2)
        print "\n---Passphrase changed---\n"
    except:
        print "\n---An error occurred---\n"
else:
print "Command not found or not supported"
|
|
# Licensed under the MIT License - https://opensource.org/licenses/MIT
from sklearn import neighbors, tree, svm
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.utils import shuffle
from sklearn.base import BaseEstimator
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
import math
import numpy as np
import random
import logging
import numbers
logger = logging.getLogger('pycobra.classifiercobra')
class ClassifierCobra(BaseEstimator):
"""
Classification algorithm as introduced by
Mojirsheibani [1999] Combining Classifiers via Discretization,
Journal of the American Statistical Association.
Parameters
----------
random_state: integer or a numpy.random.RandomState object.
Set the state of the random number generator to pass on to shuffle and loading machines, to ensure
reproducibility of your experiments, for example.
Attributes
----------
machines: A dictionary which maps machine names to the machine objects.
The machine object must have a predict method for it to be used during aggregation.
    machine_predictions: A dictionary which maps each machine name to its predictions over X_l.
This value is used to determine which points from y_l are used to aggregate.
"""
def __init__(self, random_state=None, machine_list='basic'):
self.random_state = random_state
self.machine_list = machine_list
def fit(self, X, y, default=True, X_k=None, X_l=None, y_k=None, y_l=None):
"""
Parameters
----------
X: array-like, [n_samples, n_features]
Training data which will be used to create ClassifierCobra.
y: array-like [n_samples]
Training labels for classification.
        default: bool, optional
            If True, sets up COBRA with the default machines and data splitting.
X_k : shape = [n_samples, n_features]
Training data which is used to train the machines loaded into COBRA.
y_k : array-like, shape = [n_samples]
Target values used to train the machines loaded into COBRA.
X_l : shape = [n_samples, n_features]
Training data which is used during the aggregation of COBRA.
y_l : array-like, shape = [n_samples]
Target values which are actually used in the aggregation of COBRA.
"""
X, y = check_X_y(X, y)
self.X_ = X
self.y_ = y
self.X_k_ = X_k
self.X_l_ = X_l
self.y_k_ = y_k
self.y_l_ = y_l
self.estimators_ = {}
# try block to pass scikit-learn estimator check.
try:
# set-up COBRA with default machines
if default:
self.split_data()
self.load_default(machine_list=self.machine_list)
self.load_machine_predictions()
except ValueError:
return self
return self
def pred(self, X, M, info=False):
"""
        Performs the ClassifierCobra aggregation scheme, used in predict method.
Parameters
----------
X: array-like, [n_features]
M: int, optional
            M refers to the number of machines that must agree with the predicted label for a point to be considered during aggregation.
info: boolean, optional
If info is true the list of points selected in the aggregation is returned.
Returns
-------
result: prediction
"""
# dictionary mapping machine to points selected
select = {}
for machine in self.estimators_:
# machine prediction
label = self.estimators_[machine].predict(X)
select[machine] = set()
            # iterate over the points of X_l_
            # (could be replaced with numpy iteration)
for count in range(0, len(self.X_l_)):
if self.machine_predictions_[machine][count] == label:
select[machine].add(count)
points = []
        # count is the index into X_l_.
for count in range(0, len(self.X_l_)):
            # row_check is the number of machines which selected a particular point.
row_check = 0
for machine in select:
if count in select[machine]:
row_check += 1
if row_check == M:
points.append(count)
# if no points are selected, return 0
if len(points) == 0:
if info:
logger.info("No points were selected, prediction is 0")
return (0, 0)
logger.info("No points were selected, prediction is 0")
return 0
# aggregate
classes = {}
for label in np.unique(self.y_l_):
classes[label] = 0
for point in points:
classes[self.y_l_[point]] += 1
result = int(max(classes, key=classes.get))
if info:
return result, points
return result
def predict(self, X, M=None, info=False):
"""
Performs the ClassifierCobra aggregation scheme, calls pred.
ClassifierCobra performs a majority vote among all points which are retained by the COBRA procedure.
Parameters
----------
X: array-like, [n_features]
M: int, optional
            M refers to the number of machines that must agree with the predicted label for a point to be considered during aggregation.
info: boolean, optional
If info is true the list of points selected in the aggregation is returned.
Returns
-------
result: prediction
"""
X = check_array(X)
if M is None:
M = len(self.estimators_)
if X.ndim == 1:
return self.pred(X.reshape(1, -1), M=M)
result = np.zeros(len(X))
avg_points = 0
index = 0
for vector in X:
if info:
result[index], points = self.pred(vector.reshape(1, -1), M=M, info=info)
avg_points += len(points)
else:
result[index] = self.pred(vector.reshape(1, -1), M=M)
index += 1
if info:
            avg_points = avg_points / len(X)
return result, avg_points
return result
def predict_proba(self, X, kernel=None, metric=None, bandwidth=1, **kwargs):
"""
Performs the ClassifierCobra aggregation scheme and calculates probability of a point being in a particular class.
ClassifierCobra performs a majority vote among all points which are retained by the COBRA procedure.
        NOTE: this method is meant for visualising decision boundaries.
        The current implementation is just the mean of the constituent machines' probabilities, as that kind of
        predicted probability doesn't exist (yet) for classifier cobra.
Parameters
----------
X: array-like, [n_features]
"""
probs = []
for machine in self.estimators_:
try:
probs.append(self.estimators_[machine].predict_proba(X))
except AttributeError:
continue
prob = np.mean(probs, axis=0)
return prob
def split_data(self, k=None, l=None, shuffle_data=True):
"""
Split the data into different parts for training machines and for aggregation.
Parameters
----------
k : int, optional
k is the number of points used to train the machines.
Those are the first k points of the data provided.
l: int, optional
l is the number of points used to form the ClassifierCobra aggregate.
        shuffle_data: bool, optional
Boolean value to decide to shuffle the data before splitting.
Returns
-------
self : returns an instance of self.
"""
if shuffle_data:
self.X_, self.y_ = shuffle(self.X_, self.y_, random_state=self.random_state)
if k is None and l is None:
k = int(len(self.X_) / 2)
l = int(len(self.X_))
if k is not None and l is None:
l = len(self.X_) - k
if l is not None and k is None:
k = len(self.X_) - l
self.X_k_ = self.X_[:k]
self.X_l_ = self.X_[k:l]
self.y_k_ = self.y_[:k]
self.y_l_ = self.y_[k:l]
return self
def load_default(self, machine_list='basic'):
"""
        Loads 4 different scikit-learn classifiers by default. The advanced list adds more machines.
As of current release SGD algorithm is not included in the advanced list.
Parameters
----------
machine_list: optional, list of strings
List of default machine names to be loaded.
Returns
-------
self : returns an instance of self.
"""
if machine_list == 'basic':
machine_list = ['sgd', 'tree', 'knn', 'svm']
if machine_list == 'advanced':
machine_list = ['tree', 'knn', 'svm', 'logreg', 'naive_bayes', 'lda', 'neural_network']
for machine in machine_list:
            try:
                # 'sgd' appears in the basic list but previously had no handler; fit it here.
                if machine == 'sgd':
                    self.estimators_['sgd'] = SGDClassifier(random_state=self.random_state).fit(self.X_k_, self.y_k_)
                if machine == 'svm':
                    self.estimators_['svm'] = svm.SVC().fit(self.X_k_, self.y_k_)
if machine == 'knn':
self.estimators_['knn'] = neighbors.KNeighborsClassifier().fit(self.X_k_, self.y_k_)
if machine == 'tree':
self.estimators_['tree'] = tree.DecisionTreeClassifier().fit(self.X_k_, self.y_k_)
if machine == 'logreg':
self.estimators_['logreg'] = LogisticRegression(random_state=self.random_state).fit(self.X_k_, self.y_k_)
if machine == 'naive_bayes':
self.estimators_['naive_bayes'] = GaussianNB().fit(self.X_k_, self.y_k_)
if machine == 'lda':
self.estimators_['lda'] = LinearDiscriminantAnalysis().fit(self.X_k_, self.y_k_)
if machine == 'neural_network':
self.estimators_['neural_network'] = MLPClassifier(random_state=self.random_state).fit(self.X_k_, self.y_k_)
except ValueError:
continue
return self
def load_machine(self, machine_name, machine):
"""
Adds a machine to be used during the aggregation strategy.
The machine object must have been trained using X_k and y_k, and must have a 'predict()' method.
After the machine is loaded, for it to be used during aggregation, load_machine_predictions must be run.
Parameters
----------
machine_name : string
Name of the machine you are loading
machine: machine/regressor object
The regressor machine object which is mapped to the machine_name
Returns
-------
self : returns an instance of self.
"""
self.estimators_[machine_name] = machine
return self
def load_machine_predictions(self, predictions=None):
"""
        Stores the trained machines' predictions on D_l in a dictionary, to be used for predictions.
        Should be run after all the machines to be used for aggregation are loaded.
Parameters
----------
predictions: dictionary, optional
A pre-existing machine:predictions dictionary can also be loaded.
Returns
-------
self : returns an instance of self.
"""
self.machine_predictions_ = {}
if predictions is None:
for machine in self.estimators_:
self.machine_predictions_[machine] = self.estimators_[machine].predict(self.X_l_)
return self
def load_machine_proba_predictions(self, predictions=None):
"""
        Stores the trained machines' probability predictions on D_l in a dictionary, to be used for predictions.
        Should be run after all the machines to be used for aggregation are loaded.
Parameters
----------
predictions: dictionary, optional
A pre-existing machine:predictions dictionary can also be loaded.
Returns
-------
self : returns an instance of self.
"""
self.machine_proba_predictions_ = {}
if predictions is None:
for machine in self.estimators_:
try:
self.machine_proba_predictions_[machine] = self.estimators_[machine].predict_proba(self.X_l_)
except AttributeError:
self.machine_proba_predictions_[machine] = self.estimators_[machine].decision_function(self.X_l_)
return self
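# -----------------------------------------------------------------------------
# Minimal usage sketch for ClassifierCobra (not part of the library; the toy
# dataset and parameter values below are assumptions for illustration only).
if __name__ == '__main__':
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    cc = ClassifierCobra(random_state=0)
    cc.fit(X, y)                    # splits data, trains the default machines
    print(cc.predict(X[:5]))        # aggregated predictions for 5 points
    print(cc.predict_proba(X[:5]))  # mean of the constituent predict_proba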
|
|
#!/usr/bin/env python3
# Copyright (c) 2020 The Bitcoin Unlimited developers
"""
Tests the electrum call 'blockchain.transaction.get'
"""
import asyncio
from test_framework.util import assert_equal, p2p_port
from test_framework.electrumutil import ElectrumTestFramework, ElectrumConnection
from test_framework.nodemessages import ToHex
from test_framework.blocktools import create_transaction, pad_tx
from test_framework.script import (
CScript,
OP_CHECKSIG,
OP_DROP,
OP_DUP,
OP_EQUAL,
OP_EQUALVERIFY,
OP_FALSE,
OP_HASH160,
OP_TRUE,
)
from test_framework.nodemessages import COIN
TX_GET = "blockchain.transaction.get"
DUMMY_HASH = 0x1111111111111111111111111111111111111111
class ElectrumTransactionGet(ElectrumTestFramework):
def run_test(self):
n = self.nodes[0]
self.bootstrap_p2p()
coinbases = self.mine_blocks(n, 104)
# non-coinbase transactions
prevtx = coinbases[0]
nonstandard_tx = create_transaction(
prevtx = prevtx,
value = prevtx.vout[0].nValue, n = 0,
sig = CScript([OP_TRUE]),
out = CScript([OP_FALSE, OP_DROP]))
prevtx = coinbases[1]
p2sh_tx = create_transaction(
prevtx = prevtx,
value = prevtx.vout[0].nValue, n = 0,
sig = CScript([OP_TRUE]),
out = CScript([OP_HASH160, DUMMY_HASH, OP_EQUAL]))
prevtx = coinbases[2]
p2pkh_tx = create_transaction(
prevtx = prevtx,
value = prevtx.vout[0].nValue, n = 0,
sig = CScript([OP_TRUE]),
out = CScript([OP_DUP, OP_HASH160, DUMMY_HASH, OP_EQUALVERIFY, OP_CHECKSIG]))
prevtx = coinbases[3]
unconfirmed_tx = create_transaction(
prevtx = prevtx,
value = prevtx.vout[0].nValue, n = 0,
sig = CScript([OP_TRUE]),
out = CScript([OP_DUP, OP_HASH160, DUMMY_HASH, OP_EQUALVERIFY, OP_CHECKSIG]))
for tx in [nonstandard_tx, p2sh_tx, p2pkh_tx, unconfirmed_tx]:
pad_tx(tx)
coinbases.extend(self.mine_blocks(n, 1, [nonstandard_tx, p2sh_tx, p2pkh_tx]))
self.sync_height()
n.sendrawtransaction(ToHex(unconfirmed_tx))
self.wait_for_mempool_count(count = 1)
async def async_tests(loop):
cli = ElectrumConnection(loop)
await cli.connect()
return await asyncio.gather(
self.test_verbose(n, cli, nonstandard_tx.hash, p2sh_tx.hash, p2pkh_tx.hash, unconfirmed_tx.hash),
self.test_non_verbose(cli, coinbases, unconfirmed_tx)
)
loop = asyncio.get_event_loop()
loop.run_until_complete(async_tests(loop))
async def test_non_verbose(self, cli, coinbases, unconfirmed):
for tx in coinbases + [unconfirmed]:
assert_equal(ToHex(tx), await cli.call(TX_GET, tx.hash))
async def test_verbose(self, n, cli, nonstandard_tx, p2sh_tx, p2pkh_tx, unconfirmed_tx):
"""
The spec is unclear. It states:
"whatever the coin daemon returns when asked for a
verbose form of the raw transaction"
We should test for defacto "common denominators" between bitcoind
implementations.
"""
# All confirmed transactions are confirmed in the tip
block = n.getbestblockhash()
tipheight = n.getblockcount()
coinbase_tx = n.getblock(block)['tx'][0]
async def check_tx(txid, is_confirmed = True, check_output_type = False):
electrum = await cli.call(TX_GET, txid, True)
bitcoind = n.getrawtransaction(txid, True, block)
is_coinbase = 'coinbase' in bitcoind['vin'][0]
if not is_confirmed:
            # Transaction is unconfirmed. We handle this slightly differently
            # from bitcoind.
assert_equal(None, electrum['blockhash'])
assert_equal(None, electrum['confirmations'])
assert_equal(None, electrum['time'])
assert_equal(None, electrum['height'])
else:
assert_equal(n.getbestblockhash(), electrum['blockhash'])
assert_equal(1, electrum['confirmations'])
assert_equal(bitcoind['time'], electrum['time'])
assert_equal(tipheight, electrum['height'])
assert_equal(bitcoind['txid'], electrum['txid'])
assert_equal(bitcoind['locktime'], electrum['locktime'])
assert_equal(bitcoind['size'], electrum['size'])
assert_equal(bitcoind['hex'], electrum['hex'])
assert_equal(bitcoind['version'], electrum['version'])
# inputs
            assert_equal(len(bitcoind['vin']), len(electrum['vin']))
for i in range(len(bitcoind['vin'])):
if 'coinbase' in bitcoind['vin'][i]:
                    # bitcoind drops txid and other fields, but adds 'coinbase'
                    # for coinbase inputs
assert_equal(bitcoind['vin'][i]['coinbase'], electrum['vin'][i]['coinbase'])
assert_equal(bitcoind['vin'][i]['sequence'], electrum['vin'][i]['sequence'])
continue
assert_equal(
bitcoind['vin'][i]['txid'],
electrum['vin'][i]['txid'])
assert_equal(
bitcoind['vin'][i]['vout'],
electrum['vin'][i]['vout'])
assert_equal(
bitcoind['vin'][i]['sequence'],
electrum['vin'][i]['sequence'])
assert_equal(
bitcoind['vin'][i]['scriptSig']['hex'],
electrum['vin'][i]['scriptSig']['hex'])
                # There is more than one way to represent a script as assembly.
                # For instance '51' can be represented as '1' or 'OP_PUSHNUM_1'.
                # Just check for existence.
assert('asm' in electrum['vin'][i]['scriptSig'])
# outputs
            assert_equal(len(bitcoind['vout']), len(electrum['vout']))
for i in range(len(bitcoind['vout'])):
assert_equal(
bitcoind['vout'][i]['n'],
electrum['vout'][i]['n'])
assert_equal(
bitcoind['vout'][i]['value'],
electrum['vout'][i]['value_coin'])
assert_equal(
bitcoind['vout'][i]['value'] * COIN,
electrum['vout'][i]['value_satoshi'])
assert_equal(
bitcoind['vout'][i]['scriptPubKey']['hex'],
electrum['vout'][i]['scriptPubKey']['hex'])
assert('asm' in electrum['vout'][i]['scriptPubKey'])
if 'addresses' in bitcoind['vout'][i]['scriptPubKey']:
assert_equal(
bitcoind['vout'][i]['scriptPubKey']['addresses'],
electrum['vout'][i]['scriptPubKey']['addresses'])
else:
assert_equal([], electrum['vout'][i]['scriptPubKey']['addresses'])
if check_output_type:
assert_equal(
bitcoind['vout'][i]['scriptPubKey']['type'],
electrum['vout'][i]['scriptPubKey']['type'])
await asyncio.gather(
# ElectrsCash cannot tell if it's nonstandard
check_tx(nonstandard_tx, check_output_type = False),
check_tx(p2sh_tx),
check_tx(p2pkh_tx),
check_tx(coinbase_tx),
check_tx(unconfirmed_tx, is_confirmed = False),
)
if __name__ == '__main__':
ElectrumTransactionGet().main()
|
|
"""
Variational Auto-Encoder.
Copyright (C) 2017, Lucas Ondel
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy,
modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
import abc
import pickle
import numpy as np
import theano
import theano.tensor as T
from ..io import PersistentModel
from .mlp_utils import GaussianNeuralNetwork
from .mlp_utils import NeuralNetwork
class SVAE(PersistentModel):
def __init__(self, encoder_struct, decoder_struct, prior_latent,
n_samples=10):
self.n_samples = n_samples
self.encoder_struct = encoder_struct
self.decoder_struct = decoder_struct
self.prior_latent = prior_latent
self._build()
def _build(self):
self.encoder = GaussianNeuralNetwork(self.encoder_struct,
n_samples=self.n_samples)
self.decoder = GaussianNeuralNetwork(self.decoder_struct,
self.encoder.sample)
self.params = self.encoder.params + self.decoder.params
# Mean and variance of the decoder.
mean = T.reshape(
self.decoder.mean,
(self.encoder.n_samples, self.encoder.mean.shape[0], -1)
)
var = T.reshape(
self.decoder.var,
(self.encoder.n_samples, self.encoder.mean.shape[0], -1)
)
# Log-likelihood.
targets = self.encoder.inputs
llh = -.5 * T.sum(T.log(var).mean(axis=0), axis=1)
llh += -.5 * T.sum((((targets - mean) ** 2) / var).mean(axis=0),
axis=1)
llh = T.sum(llh)
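        # (This is the Gaussian log-density up to the constant term
        # -0.5 * D * log(2 * pi), averaged over the encoder's Monte-Carlo
        # samples and summed over frames.)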
# Mean and variance of the encoder (variational distribution).
mean = self.encoder.mean
var = self.encoder.var
# KL divergence posterior/prior.
prior_mean = T.matrix(dtype=theano.config.floatX)
prior_var = T.matrix(dtype=theano.config.floatX)
kl_div = .5 * T.log(prior_var / var) - .5
kl_div += ((prior_mean - mean)**2 + var) / (2 * prior_var)
kl_div = T.sum(kl_div)
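        # For reference, the closed form used above for diagonal Gaussians is:
        #   KL(q || p) = 0.5 * log(var_p / var_q) - 0.5
        #              + ((mean_q - mean_p)**2 + var_q) / (2 * var_p),
        # summed over latent dimensions and frames.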
# Variational objective function.
objective = llh - kl_div
# Gradient function of the neural network.
self._get_gradients = theano.function(
inputs=[self.encoder.inputs, prior_mean, prior_var],
outputs=[objective] + \
[T.grad(objective, param) for param in self.params],
)
# Forward and input to the encoder network.
self.forward = theano.function(
inputs=[self.encoder.inputs],
outputs=[mean, var]
)
def generate_features(self, data):
        mean, var = self.forward(data)
# Expected value of the sufficient statistics.
s_stats = np.c_[mean**2 + var, mean,
np.ones((len(mean), 2 * mean.shape[1]))]
return s_stats
def decode(self, data, state_path=False):
mean, var = self.forward(data)
# Expected value of the sufficient statistics.
s_stats = np.c_[mean**2 + var, mean,
np.ones((len(mean), 2 * mean.shape[1]))]
# Clustering.
return self.prior_latent.decode(s_stats, state_path=state_path)
def get_posteriors(self, data, ac_scale=1.0):
mean, var = self.forward(data)
# Expected value of the sufficient statistics.
s_stats = np.c_[mean**2 + var, mean,
np.ones((len(mean), 2 * mean.shape[1]))]
# Clustering.
        return self.prior_latent.get_posteriors(s_stats, ac_scale=ac_scale)
def _get_state_llh(self, data):
mean, var = self.forward(data)
# Expected value of the sufficient statistics.
s_stats = np.c_[mean**2 + var, mean,
np.ones((len(mean), 2 * mean.shape[1]))]
return self.prior_latent._get_state_llh(s_stats)
def get_gradients(self, data, alignments=None):
mean, var = self.forward(data)
# Expected value of the sufficient statistics.
s_stats = np.c_[mean**2 + var, mean,
np.ones((len(mean), 2 * mean.shape[1]))]
# Clustering.
posts, _, acc_stats = \
self.prior_latent.get_posteriors(s_stats, accumulate=True,
alignments=alignments,
gauss_posteriors=True)
# Expected value of the prior's components parameters.
dim_latent = self.encoder.layers[-1].dim_out
p_np1 = [comp.posterior.grad_log_partition[:dim_latent]
for comp in self.prior_latent.components]
p_np2 = [comp.posterior.grad_log_partition[dim_latent:2 * dim_latent]
for comp in self.prior_latent.components]
q_np1 = posts.T.dot(p_np1)
q_np2 = posts.T.dot(p_np2)
# Convert the natural parameters to the standard parameters.
prior_var = -1 / (2 * q_np1)
prior_mean = q_np2 * prior_var
# Gradients of the objective function w.r.t. the parameters of
# the neural network (encoder + decoder).
val_and_grads = self._get_gradients(data, prior_mean, prior_var)
objective, grads = val_and_grads[0], val_and_grads[1:]
return objective, acc_stats, grads
# Features interface implementation.
# -----------------------------------------------------------------
def transform_features(self, data):
return data
# PersistentModel interface implementation.
# -----------------------------------------------------------------
def to_dict(self):
return {
'prior_latent_class': self.prior_latent.__class__,
'prior_latent_data': self.prior_latent.to_dict(),
'encoder_struct': self.encoder_struct,
'decoder_struct': self.decoder_struct,
'n_samples': self.n_samples,
'params': [param.get_value() for param in self.params]
}
@classmethod
def load_from_dict(cls, model_data):
encoder_struct = model_data['encoder_struct']
decoder_struct = model_data['decoder_struct']
prior_latent_cls = model_data['prior_latent_class']
prior_latent = prior_latent_cls.load_from_dict(
model_data['prior_latent_data']
)
n_samples = model_data['n_samples']
model = SVAE(encoder_struct, decoder_struct, prior_latent, n_samples)
params = model_data['params']
for i, param in enumerate(model.params):
param.set_value(params[i])
return model
class MLPClassifier(object):
def __init__(self, structure):
self.nnet = NeuralNetwork(structure, [])
self.params = self.nnet.params
self._build()
def _build(self):
        # Log-likelihood weighted by the responsibilities.
resps = T.matrix()
prediction = self.nnet.outputs
llh = T.sum(resps * T.log(prediction))
self._get_gradients = theano.function(
inputs=[self.nnet.inputs, resps],
outputs=[llh] + \
[T.grad(llh, param) for param in self.params],
)
self.classify = theano.function(
inputs=[self.nnet.inputs],
outputs=T.argmax(prediction, axis=1)
)
def get_gradients(self, data, log_resps):
val_and_grads = self._get_gradients(data, np.exp(log_resps))
return val_and_grads[0], val_and_grads[1:]
|
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'HelpPage'
db.create_table(
'sentry_helppage', (
('id', self.gf('sentry.db.models.fields.BoundedBigAutoField')(primary_key=True)), (
'key', self.gf('django.db.models.fields.CharField')(
max_length=64, unique=True, null=True
)
), ('title', self.gf('django.db.models.fields.CharField')(max_length=64)),
('content', self.gf('django.db.models.fields.TextField')()),
('is_visible', self.gf('django.db.models.fields.BooleanField')(default=True)),
('priority', self.gf('django.db.models.fields.PositiveIntegerField')(default=50)), (
'date_added',
self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)
),
)
)
db.send_create_signal('sentry', ['HelpPage'])
def backwards(self, orm):
# Deleting model 'HelpPage'
db.delete_table('sentry_helppage')
models = {
'sentry.accessgroup': {
'Meta': {
'unique_together': "(('team', 'name'),)",
'object_name': 'AccessGroup'
},
'data': ('django.db.models.fields.TextField', [], {
'null': 'True',
'blank': 'True'
}),
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'managed': ('django.db.models.fields.BooleanField', [], {
'default': 'False'
}),
'members': (
'django.db.models.fields.related.ManyToManyField', [], {
'to': "orm['sentry.User']",
'symmetrical': 'False'
}
),
'name': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'projects': (
'django.db.models.fields.related.ManyToManyField', [], {
'to': "orm['sentry.Project']",
'symmetrical': 'False'
}
),
'team':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Team']"
}),
'type': ('django.db.models.fields.IntegerField', [], {
'default': '50'
})
},
'sentry.activity': {
'Meta': {
'object_name': 'Activity'
},
'data': ('django.db.models.fields.TextField', [], {
'null': 'True'
}),
'datetime':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'event': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Event']",
'null': 'True'
}
),
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']",
'null': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'ident':
('django.db.models.fields.CharField', [], {
'max_length': '64',
'null': 'True'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'type': ('django.db.models.fields.PositiveIntegerField', [], {}),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']",
'null': 'True'
}
)
},
'sentry.alert': {
'Meta': {
'object_name': 'Alert'
},
'data': ('django.db.models.fields.TextField', [], {
'null': 'True'
}),
'datetime':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']",
'null': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'message': ('django.db.models.fields.TextField', [], {}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'related_groups': (
'django.db.models.fields.related.ManyToManyField', [], {
'related_name': "'related_alerts'",
'symmetrical': 'False',
'through': "orm['sentry.AlertRelatedGroup']",
'to': "orm['sentry.Group']"
}
),
'status': (
'django.db.models.fields.PositiveIntegerField', [], {
'default': '0',
'db_index': 'True'
}
)
},
'sentry.alertrelatedgroup': {
'Meta': {
'unique_together': "(('group', 'alert'),)",
'object_name': 'AlertRelatedGroup'
},
'alert':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Alert']"
}),
'data': ('django.db.models.fields.TextField', [], {
'null': 'True'
}),
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
})
},
'sentry.auditlogentry': {
'Meta': {
'object_name': 'AuditLogEntry'
},
'actor': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'audit_actors'",
'to': "orm['sentry.User']"
}
),
'data': ('sentry.db.models.fields.gzippeddict.GzippedDictField', [], {}),
'datetime':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'event': ('django.db.models.fields.PositiveIntegerField', [], {}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'ip_address': (
'django.db.models.fields.GenericIPAddressField', [], {
'max_length': '39',
'null': 'True'
}
),
'organization': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Organization']"
}
),
'target_object': ('django.db.models.fields.PositiveIntegerField', [], {
'null': 'True'
}),
'target_user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'audit_targets'",
'null': 'True',
'to': "orm['sentry.User']"
}
)
},
'sentry.broadcast': {
'Meta': {
'object_name': 'Broadcast'
},
'badge': (
'django.db.models.fields.CharField', [], {
'max_length': '32',
'null': 'True',
'blank': 'True'
}
),
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'is_active':
('django.db.models.fields.BooleanField', [], {
'default': 'True',
'db_index': 'True'
}),
'link': (
'django.db.models.fields.URLField', [], {
'max_length': '200',
'null': 'True',
'blank': 'True'
}
),
'message': ('django.db.models.fields.CharField', [], {
'max_length': '256'
})
},
'sentry.event': {
'Meta': {
'unique_together': "(('project', 'event_id'),)",
'object_name': 'Event',
'db_table': "'sentry_message'",
'index_together': "(('group', 'datetime'),)"
},
'checksum':
('django.db.models.fields.CharField', [], {
'max_length': '32',
'db_index': 'True'
}),
'data': ('django.db.models.fields.TextField', [], {
'null': 'True',
'blank': 'True'
}),
'datetime': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'db_index': 'True'
}
),
'event_id': (
'django.db.models.fields.CharField', [], {
'max_length': '32',
'null': 'True',
'db_column': "'message_id'"
}
),
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'blank': 'True',
'related_name': "'event_set'",
'null': 'True',
'to': "orm['sentry.Group']"
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'message': ('django.db.models.fields.TextField', [], {}),
'num_comments':
('django.db.models.fields.PositiveIntegerField', [], {
'default': '0',
'null': 'True'
}),
'platform':
('django.db.models.fields.CharField', [], {
'max_length': '64',
'null': 'True'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
),
'time_spent': ('django.db.models.fields.IntegerField', [], {
'null': 'True'
})
},
'sentry.eventmapping': {
'Meta': {
'unique_together': "(('project', 'event_id'),)",
'object_name': 'EventMapping'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'event_id': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
})
},
'sentry.group': {
'Meta': {
'unique_together': "(('project', 'checksum'),)",
'object_name': 'Group',
'db_table': "'sentry_groupedmessage'"
},
'active_at':
('django.db.models.fields.DateTimeField', [], {
'null': 'True',
'db_index': 'True'
}),
'checksum':
('django.db.models.fields.CharField', [], {
'max_length': '32',
'db_index': 'True'
}),
'culprit': (
'django.db.models.fields.CharField', [], {
'max_length': '200',
'null': 'True',
'db_column': "'view'",
'blank': 'True'
}
),
'data': ('django.db.models.fields.TextField', [], {
'null': 'True',
'blank': 'True'
}),
'first_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'db_index': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'is_public': (
'django.db.models.fields.NullBooleanField', [], {
'default': 'False',
'null': 'True',
'blank': 'True'
}
),
'last_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'db_index': 'True'
}
),
'level': (
'django.db.models.fields.PositiveIntegerField', [], {
'default': '40',
'db_index': 'True',
'blank': 'True'
}
),
'logger': (
'django.db.models.fields.CharField', [], {
'default': "'root'",
'max_length': '64',
'db_index': 'True',
'blank': 'True'
}
),
'message': ('django.db.models.fields.TextField', [], {}),
'num_comments':
('django.db.models.fields.PositiveIntegerField', [], {
'default': '0',
'null': 'True'
}),
'platform':
('django.db.models.fields.CharField', [], {
'max_length': '64',
'null': 'True'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
),
'resolved_at':
('django.db.models.fields.DateTimeField', [], {
'null': 'True',
'db_index': 'True'
}),
'score': ('django.db.models.fields.IntegerField', [], {
'default': '0'
}),
'status': (
'django.db.models.fields.PositiveIntegerField', [], {
'default': '0',
'db_index': 'True'
}
),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {
'default': '0'
}),
'time_spent_total': ('django.db.models.fields.IntegerField', [], {
'default': '0'
}),
'times_seen': (
'django.db.models.fields.PositiveIntegerField', [], {
'default': '1',
'db_index': 'True'
}
)
},
'sentry.groupassignee': {
'Meta': {
'object_name': 'GroupAssignee',
'db_table': "'sentry_groupasignee'"
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'assignee_set'",
'unique': 'True',
'to': "orm['sentry.Group']"
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'assignee_set'",
'to': "orm['sentry.Project']"
}
),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'sentry_assignee_set'",
'to': "orm['sentry.User']"
}
)
},
'sentry.groupbookmark': {
'Meta': {
'unique_together': "(('project', 'user', 'group'),)",
'object_name': 'GroupBookmark'
},
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'bookmark_set'",
'to': "orm['sentry.Group']"
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'bookmark_set'",
'to': "orm['sentry.Project']"
}
),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'sentry_bookmark_set'",
'to': "orm['sentry.User']"
}
)
},
'sentry.grouphash': {
'Meta': {
'unique_together': "(('project', 'hash'),)",
'object_name': 'GroupHash'
},
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']",
'null': 'True'
}
),
'hash':
('django.db.models.fields.CharField', [], {
'max_length': '32',
'db_index': 'True'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
)
},
'sentry.groupmeta': {
'Meta': {
'unique_together': "(('group', 'key'),)",
'object_name': 'GroupMeta'
},
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'value': ('django.db.models.fields.TextField', [], {})
},
'sentry.grouprulestatus': {
'Meta': {
'unique_together': "(('rule', 'group'),)",
'object_name': 'GroupRuleStatus'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'rule':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Rule']"
}),
'status': ('django.db.models.fields.PositiveSmallIntegerField', [], {
'default': '0'
})
},
'sentry.groupseen': {
'Meta': {
'unique_together': "(('user', 'group'),)",
'object_name': 'GroupSeen'
},
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'last_seen':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']",
'db_index': 'False'
}
)
},
'sentry.grouptagkey': {
'Meta': {
'unique_together': "(('project', 'group', 'key'),)",
'object_name': 'GroupTagKey'
},
'group':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Group']"
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
),
'values_seen': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
})
},
'sentry.grouptagvalue': {
'Meta': {
'unique_together': "(('project', 'key', 'value', 'group'),)",
'object_name': 'GroupTagValue',
'db_table': "'sentry_messagefiltervalue'"
},
'first_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True',
'db_index': 'True'
}
),
'group': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'grouptag'",
'to': "orm['sentry.Group']"
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'last_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True',
'db_index': 'True'
}
),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'grouptag'",
'null': 'True',
'to': "orm['sentry.Project']"
}
),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
}),
'value': ('django.db.models.fields.CharField', [], {
'max_length': '200'
})
},
'sentry.helppage': {
'Meta': {
'object_name': 'HelpPage'
},
'content': ('django.db.models.fields.TextField', [], {}),
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'is_visible': ('django.db.models.fields.BooleanField', [], {
'default': 'True'
}),
'key': (
'django.db.models.fields.CharField', [], {
'max_length': '64',
'unique': 'True',
'null': 'True'
}
),
'priority': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '50'
}),
'title': ('django.db.models.fields.CharField', [], {
'max_length': '64'
})
},
'sentry.lostpasswordhash': {
'Meta': {
'object_name': 'LostPasswordHash'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'hash': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']",
'unique': 'True'
}
)
},
'sentry.option': {
'Meta': {
'object_name': 'Option'
},
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key':
('django.db.models.fields.CharField', [], {
'unique': 'True',
'max_length': '64'
}),
'last_updated':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'value': ('sentry.db.models.fields.pickle.UnicodePickledObjectField', [], {})
},
'sentry.organization': {
'Meta': {
'object_name': 'Organization'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'members': (
'django.db.models.fields.related.ManyToManyField', [], {
'related_name': "'org_memberships'",
'symmetrical': 'False',
'through': "orm['sentry.OrganizationMember']",
'to': "orm['sentry.User']"
}
),
'name': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'owner':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']"
}),
'slug':
('django.db.models.fields.SlugField', [], {
'unique': 'True',
'max_length': '50'
}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
})
},
'sentry.organizationmember': {
'Meta': {
'unique_together': "(('organization', 'user'), ('organization', 'email'))",
'object_name': 'OrganizationMember'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'email': (
'django.db.models.fields.EmailField', [], {
'max_length': '75',
'null': 'True',
'blank': 'True'
}
),
'has_global_access': ('django.db.models.fields.BooleanField', [], {
'default': 'True'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'organization': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'member_set'",
'to': "orm['sentry.Organization']"
}
),
'teams': (
'django.db.models.fields.related.ManyToManyField', [], {
'to': "orm['sentry.Team']",
'symmetrical': 'False',
'blank': 'True'
}
),
'type': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '50'
}),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'blank': 'True',
'related_name': "'sentry_orgmember_set'",
'null': 'True',
'to': "orm['sentry.User']"
}
)
},
'sentry.pendingteammember': {
'Meta': {
'unique_together': "(('team', 'email'),)",
'object_name': 'PendingTeamMember'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'email': ('django.db.models.fields.EmailField', [], {
'max_length': '75'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'team': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'pending_member_set'",
'to': "orm['sentry.Team']"
}
),
'type': ('django.db.models.fields.IntegerField', [], {
'default': '50'
})
},
'sentry.project': {
'Meta': {
'unique_together': "(('team', 'slug'), ('organization', 'slug'))",
'object_name': 'Project'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'name': ('django.db.models.fields.CharField', [], {
'max_length': '200'
}),
'organization': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Organization']"
}
),
'platform':
('django.db.models.fields.CharField', [], {
'max_length': '32',
'null': 'True'
}),
'public': ('django.db.models.fields.BooleanField', [], {
'default': 'False'
}),
'slug': ('django.db.models.fields.SlugField', [], {
'max_length': '50',
'null': 'True'
}),
'status': (
'django.db.models.fields.PositiveIntegerField', [], {
'default': '0',
'db_index': 'True'
}
),
'team':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Team']"
})
},
'sentry.projectkey': {
'Meta': {
'object_name': 'ProjectKey'
},
'date_added': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'label': (
'django.db.models.fields.CharField', [], {
'max_length': '64',
'null': 'True',
'blank': 'True'
}
),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'key_set'",
'to': "orm['sentry.Project']"
}
),
'public_key': (
'django.db.models.fields.CharField', [], {
'max_length': '32',
'unique': 'True',
'null': 'True'
}
),
'roles': ('django.db.models.fields.BigIntegerField', [], {
'default': '1'
}),
'secret_key': (
'django.db.models.fields.CharField', [], {
'max_length': '32',
'unique': 'True',
'null': 'True'
}
),
'user': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']",
'null': 'True'
}
),
'user_added': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'related_name': "'keys_added_set'",
'null': 'True',
'to': "orm['sentry.User']"
}
)
},
'sentry.projectoption': {
'Meta': {
'unique_together': "(('project', 'key'),)",
'object_name': 'ProjectOption',
'db_table': "'sentry_projectoptions'"
},
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'value': ('sentry.db.models.fields.pickle.UnicodePickledObjectField', [], {})
},
'sentry.release': {
'Meta': {
'unique_together': "(('project', 'version'),)",
'object_name': 'Release'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'version': ('django.db.models.fields.CharField', [], {
'max_length': '64'
})
},
'sentry.rule': {
'Meta': {
'object_name': 'Rule'
},
'data': ('sentry.db.models.fields.gzippeddict.GzippedDictField', [], {}),
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'label': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
})
},
'sentry.tagkey': {
'Meta': {
'unique_together': "(('project', 'key'),)",
'object_name': 'TagKey',
'db_table': "'sentry_filterkey'"
},
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'label':
('django.db.models.fields.CharField', [], {
'max_length': '64',
'null': 'True'
}),
'project':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']"
}),
'values_seen': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
})
},
'sentry.tagvalue': {
'Meta': {
'unique_together': "(('project', 'key', 'value'),)",
'object_name': 'TagValue',
'db_table': "'sentry_filtervalue'"
},
'data': ('django.db.models.fields.TextField', [], {
'null': 'True',
'blank': 'True'
}),
'first_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True',
'db_index': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '32'
}),
'last_seen': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True',
'db_index': 'True'
}
),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
}),
'value': ('django.db.models.fields.CharField', [], {
'max_length': '200'
})
},
'sentry.team': {
'Meta': {
'unique_together': "(('organization', 'slug'),)",
'object_name': 'Team'
},
'date_added': (
'django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now',
'null': 'True'
}
),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'name': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'organization': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Organization']"
}
),
'owner':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']"
}),
'slug': ('django.db.models.fields.SlugField', [], {
'max_length': '50'
}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {
'default': '0'
})
},
'sentry.teammember': {
'Meta': {
'unique_together': "(('team', 'user'),)",
'object_name': 'TeamMember'
},
'date_added':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'team':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Team']"
}),
'type': ('django.db.models.fields.IntegerField', [], {
'default': '50'
}),
'user':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']"
})
},
'sentry.user': {
'Meta': {
'object_name': 'User',
'db_table': "'auth_user'"
},
'date_joined':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'email':
('django.db.models.fields.EmailField', [], {
'max_length': '75',
'blank': 'True'
}),
'first_name':
('django.db.models.fields.CharField', [], {
'max_length': '30',
'blank': 'True'
}),
'id': ('django.db.models.fields.AutoField', [], {
'primary_key': 'True'
}),
'is_active': ('django.db.models.fields.BooleanField', [], {
'default': 'True'
}),
'is_managed': ('django.db.models.fields.BooleanField', [], {
'default': 'False'
}),
'is_staff': ('django.db.models.fields.BooleanField', [], {
'default': 'False'
}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {
'default': 'False'
}),
'last_login':
('django.db.models.fields.DateTimeField', [], {
'default': 'datetime.datetime.now'
}),
'last_name':
('django.db.models.fields.CharField', [], {
'max_length': '30',
'blank': 'True'
}),
'password': ('django.db.models.fields.CharField', [], {
'max_length': '128'
}),
'username':
('django.db.models.fields.CharField', [], {
'unique': 'True',
'max_length': '128'
})
},
'sentry.useroption': {
'Meta': {
'unique_together': "(('user', 'project', 'key'),)",
'object_name': 'UserOption'
},
'id': ('sentry.db.models.fields.BoundedBigAutoField', [], {
'primary_key': 'True'
}),
'key': ('django.db.models.fields.CharField', [], {
'max_length': '64'
}),
'project': (
'sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.Project']",
'null': 'True'
}
),
'user':
('sentry.db.models.fields.FlexibleForeignKey', [], {
'to': "orm['sentry.User']"
}),
'value': ('sentry.db.models.fields.pickle.UnicodePickledObjectField', [], {})
}
}
complete_apps = ['sentry']
|
|
###
# Copyright (c) 2002-2005, Jeremiah Fincher
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
"""
Provides a great number of useful utility functions for IRC. Things to muck around
with hostmasks, set bold or color on strings, IRC-case-insensitive dicts, a
nick class to handle nicks (so comparisons and hashing and whatnot work in an
IRC-case-insensitive fashion), and numerous other things.
"""
import re
import time
import random
import string
import textwrap
from cStringIO import StringIO as sio
import supybot.utils as utils
def debug(s, *args):
"""Prints a debug string. Most likely replaced by our logging debug."""
print '***', s % args
def isUserHostmask(s):
"""Returns whether or not the string s is a valid User hostmask."""
p1 = s.find('!')
p2 = s.find('@')
p3 = s.find('$')
    if (p1 < p2-1 and p1 >= 1 and p2 >= 3 and len(s) > p2+1) or p3 != -1:
return True
else:
return False
def isServerHostmask(s):
"""s => bool
Returns True if s is a valid server hostmask."""
return not isUserHostmask(s)
def nickFromHostmask(hostmask):
"""hostmask => nick
Returns the nick from a user hostmask."""
assert isUserHostmask(hostmask)
return hostmask.split('!', 1)[0]
def userFromHostmask(hostmask):
"""hostmask => user
Returns the user from a user hostmask."""
assert isUserHostmask(hostmask)
return hostmask.split('!', 1)[1].split('@', 1)[0]
def hostFromHostmask(hostmask):
"""hostmask => host
Returns the host from a user hostmask."""
assert isUserHostmask(hostmask)
return hostmask.split('@', 1)[1]
def splitHostmask(hostmask):
"""hostmask => (nick, user, host)
Returns the nick, user, host of a user hostmask."""
assert isUserHostmask(hostmask)
nick, rest = hostmask.split('!', 1)
user, host = rest.split('@', 1)
return (nick, user, host)
def joinHostmask(nick, ident, host):
"""nick, user, host => hostmask
Joins the nick, ident, host into a user hostmask."""
assert nick and ident and host
return '%s!%s@%s' % (nick, ident, host)
_rfc1459trans = string.maketrans(string.ascii_uppercase + r'\[]~',
string.ascii_lowercase + r'|{}^')
def toLower(s, casemapping=None):
"""s => s
Returns the string s lowered according to IRC case rules."""
if casemapping is None or casemapping == 'rfc1459':
return s.translate(_rfc1459trans)
elif casemapping == 'ascii': # freenode
return s.lower()
else:
raise ValueError, 'Invalid casemapping: %r' % casemapping
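# A quick illustrative sketch (not in the original module) of the rfc1459
# casemapping above -- []\~ are the uppercase forms of {}|^:
# >>> toLower('Nick[]\\~')
# 'nick{}|^'
# >>> toLower('Nick[]\\~', casemapping='ascii')
# 'nick[]\\~'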
def strEqual(nick1, nick2):
"""s1, s2 => bool
Returns True if nick1 == nick2 according to IRC case rules."""
assert isinstance(nick1, basestring)
assert isinstance(nick2, basestring)
return toLower(nick1) == toLower(nick2)
nickEqual = strEqual
_nickchars = r'[]\`_^{|}'
nickRe = re.compile(r'^[A-Za-z%s][-0-9A-Za-z%s]*$'
% (re.escape(_nickchars), re.escape(_nickchars)))
def isNick(s, strictRfc=True, nicklen=None):
"""s => bool
Returns True if s is a valid IRC nick."""
if strictRfc:
ret = bool(nickRe.match(s))
if ret and nicklen is not None:
ret = len(s) <= nicklen
return ret
else:
return not isChannel(s) and \
not isUserHostmask(s) and \
not ' ' in s and not '!' in s
def isChannel(s, chantypes='#&+!', channellen=50):
"""s => bool
Returns True if s is a valid IRC channel name."""
return s and \
',' not in s and \
'\x07' not in s and \
s[0] in chantypes and \
len(s) <= channellen and \
len(s.split(None, 1)) == 1
_patternCache = utils.structures.CacheDict(1000)
def _hostmaskPatternEqual(pattern, hostmask):
try:
return _patternCache[pattern](hostmask) is not None
except KeyError:
# We make our own regexps, rather than use fnmatch, because fnmatch's
        # case-insensitivity is not IRC's case-insensitivity.
fd = sio()
for c in pattern:
if c == '*':
fd.write('.*')
elif c == '?':
fd.write('.')
elif c in '[{':
fd.write('[[{]')
elif c in '}]':
fd.write(r'[}\]]')
elif c in '|\\':
fd.write(r'[|\\]')
elif c in '^~':
fd.write('[~^]')
else:
fd.write(re.escape(c))
fd.write('$')
f = re.compile(fd.getvalue(), re.I).match
_patternCache[pattern] = f
return f(hostmask) is not None
_hostmaskPatternEqualCache = utils.structures.CacheDict(1000)
def hostmaskPatternEqual(pattern, hostmask):
"""pattern, hostmask => bool
Returns True if hostmask matches the hostmask pattern pattern."""
try:
return _hostmaskPatternEqualCache[(pattern, hostmask)]
except KeyError:
b = _hostmaskPatternEqual(pattern, hostmask)
_hostmaskPatternEqualCache[(pattern, hostmask)] = b
return b
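# Illustrative sketch of the glob-style matching implemented above ('*' and
# '?' wildcard; bracketish characters match their IRC case-equivalents):
# >>> hostmaskPatternEqual('*!*@*.example.com', 'nick!user@box.example.com')
# True
# >>> hostmaskPatternEqual('nick!*@*', 'other!user@host')
# False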
def banmask(hostmask):
"""Returns a properly generic banning hostmask for a hostmask.
>>> banmask('nick!user@host.domain.tld')
'*!*@*.domain.tld'
>>> banmask('nick!user@10.0.0.1')
'*!*@10.0.0.*'
"""
assert isUserHostmask(hostmask)
host = hostFromHostmask(hostmask)
if utils.net.isIP(host):
L = host.split('.')
L[-1] = '*'
return '*!*@' + '.'.join(L)
elif utils.net.isIPV6(host):
L = host.split(':')
L[-1] = '*'
return '*!*@' + ':'.join(L)
else:
if '.' in host:
return '*!*@*%s' % host[host.find('.'):]
else:
return '*!*@' + host
_plusRequireArguments = 'fjklvobqeI'
_minusRequireArguments = 'fjklvobqeI'
def separateModes(args):
"""Separates modelines into single mode change tuples. Basically, you
should give it the .args of a MODE IrcMsg.
Examples:
>>> separateModes(['+ooo', 'jemfinch', 'StoneTable', 'philmes'])
[('+o', 'jemfinch'), ('+o', 'StoneTable'), ('+o', 'philmes')]
>>> separateModes(['+o-o', 'jemfinch', 'PeterB'])
[('+o', 'jemfinch'), ('-o', 'PeterB')]
>>> separateModes(['+s-o', 'test'])
[('+s', None), ('-o', 'test')]
>>> separateModes(['+sntl', '100'])
[('+s', None), ('+n', None), ('+t', None), ('+l', 100)]
"""
if not args:
return []
modes = args[0]
assert modes[0] in '+-', 'Invalid args: %r' % args
args = list(args[1:])
ret = []
for c in modes:
if c in '+-':
last = c
else:
if last == '+':
requireArguments = _plusRequireArguments
else:
requireArguments = _minusRequireArguments
if c in requireArguments:
arg = args.pop(0)
try:
arg = int(arg)
except ValueError:
pass
ret.append((last + c, arg))
else:
ret.append((last + c, None))
return ret
def joinModes(modes):
"""[(mode, targetOrNone), ...] => args
Joins modes of the same form as returned by separateModes."""
args = []
modeChars = []
currentMode = '\x00'
for (mode, arg) in modes:
if arg is not None:
args.append(arg)
if not mode.startswith(currentMode):
currentMode = mode[0]
modeChars.append(mode[0])
modeChars.append(mode[1])
args.insert(0, ''.join(modeChars))
return args
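# joinModes is roughly the inverse of separateModes; a minimal sketch:
# >>> joinModes([('+o', 'jemfinch'), ('-o', 'PeterB')])
# ['+o-o', 'jemfinch', 'PeterB']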
def bold(s):
"""Returns the string s, bolded."""
return '\x02%s\x02' % s
def reverse(s):
"""Returns the string s, reverse-videoed."""
return '\x16%s\x16' % s
def underline(s):
"""Returns the string s, underlined."""
return '\x1F%s\x1F' % s
# Definition of mircColors dictionary moved below because it became an IrcDict.
def mircColor(s, fg=None, bg=None):
"""Returns s with the appropriate mIRC color codes applied."""
if fg is None and bg is None:
return s
elif bg is None:
fg = mircColors[str(fg)]
return '\x03%s%s\x03' % (fg.zfill(2), s)
elif fg is None:
bg = mircColors[str(bg)]
# According to the mirc color doc, a fg color MUST be specified if a
# background color is specified. So, we'll specify 00 (white) if the
# user doesn't specify one.
return '\x0300,%s%s\x03' % (bg.zfill(2), s)
else:
fg = mircColors[str(fg)]
bg = mircColors[str(bg)]
# No need to zfill fg because the comma delimits.
return '\x03%s,%s%s\x03' % (fg, bg.zfill(2), s)
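# Sketch of the codes produced above (color numbers come from the mircColors
# table defined later in this module):
# >>> mircColor('hi', fg='red')
# '\x0304hi\x03'
# >>> mircColor('hi', fg='red', bg='blue')
# '\x034,02hi\x03'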
def canonicalColor(s, bg=False, shift=0):
"""Assigns an (fg, bg) canonical color pair to a string based on its hash
value. This means it might change between Python versions. This pair can
    be unpacked as the arguments to mircColor (i.e., mircColor(s, *canonicalColor(s))). The shift parameter is how much to
right-shift the hash value initially.
"""
h = hash(s) >> shift
fg = h % 14 + 2 # The + 2 is to rule out black and white.
if bg:
        bg = (h >> 4) & 3 # The 5th and 6th least significant bits.
if fg < 8:
bg += 8
else:
bg += 2
return (fg, bg)
else:
return (fg, None)
def stripBold(s):
"""Returns the string s, with bold removed."""
return s.replace('\x02', '')
_stripColorRe = re.compile(r'\x03(?:\d{1,2},\d{1,2}|\d{1,2}|,\d{1,2}|)')
def stripColor(s):
"""Returns the string s, with color removed."""
return _stripColorRe.sub('', s)
def stripReverse(s):
"""Returns the string s, with reverse-video removed."""
return s.replace('\x16', '')
def stripUnderline(s):
"""Returns the string s, with underlining removed."""
return s.replace('\x1f', '').replace('\x1F', '')
def stripFormatting(s):
"""Returns the string s, with all formatting removed."""
# stripColor has to go first because of some strings, check the tests.
s = stripColor(s)
s = stripBold(s)
s = stripReverse(s)
s = stripUnderline(s)
return s.replace('\x0f', '').replace('\x0F', '')
class FormatContext(object):
def __init__(self):
self.reset()
def reset(self):
self.fg = None
self.bg = None
self.bold = False
self.reverse = False
self.underline = False
def start(self, s):
"""Given a string, starts all the formatters in this context."""
if self.bold:
s = '\x02' + s
if self.reverse:
s = '\x16' + s
if self.underline:
s = '\x1f' + s
if self.fg is not None or self.bg is not None:
s = mircColor(s, fg=self.fg, bg=self.bg)[:-1] # Remove \x03.
return s
def end(self, s):
"""Given a string, ends all the formatters in this context."""
if self.bold or self.reverse or \
self.fg or self.bg or self.underline:
# Should we individually end formatters?
s += '\x0f'
return s
class FormatParser(object):
def __init__(self, s):
self.fd = sio(s)
self.last = None
def getChar(self):
if self.last is not None:
c = self.last
self.last = None
return c
else:
return self.fd.read(1)
def ungetChar(self, c):
self.last = c
def parse(self):
context = FormatContext()
c = self.getChar()
while c:
if c == '\x02':
context.bold = not context.bold
elif c == '\x16':
context.reverse = not context.reverse
elif c == '\x1f':
context.underline = not context.underline
elif c == '\x0f':
context.reset()
elif c == '\x03':
self.getColor(context)
c = self.getChar()
return context
def getInt(self):
i = 0
setI = False
c = self.getChar()
while c.isdigit() and i < 100:
setI = True
i *= 10
i += int(c)
c = self.getChar()
self.ungetChar(c)
if setI:
return i
else:
return None
def getColor(self, context):
context.fg = self.getInt()
c = self.getChar()
if c == ',':
context.bg = self.getInt()
def wrap(s, length):
processed = []
chunks = textwrap.wrap(s, length)
context = None
for chunk in chunks:
if context is not None:
chunk = context.start(chunk)
context = FormatParser(chunk).parse()
processed.append(context.end(chunk))
return processed
def isValidArgument(s):
"""Returns whether s is strictly a valid argument for an IRC message."""
return '\r' not in s and '\n' not in s and '\x00' not in s
def safeArgument(s):
"""If s is unsafe for IRC, returns a safe version."""
if isinstance(s, unicode):
s = s.encode('utf-8')
elif not isinstance(s, basestring):
debug('Got a non-string in safeArgument: %r', s)
s = str(s)
if isValidArgument(s):
return s
else:
return repr(s)
def replyTo(msg):
"""Returns the appropriate target to send responses to msg."""
if isChannel(msg.args[0]):
return msg.args[0]
else:
return msg.nick
def dccIP(ip):
"""Returns in IP in the proper for DCC."""
assert utils.net.isIP(ip), \
'argument must be a string ip in xxx.yyy.zzz.www format.'
i = 0
x = 256**3
for quad in ip.split('.'):
i += int(quad)*x
x /= 256
return i
def unDccIP(i):
"""Takes an integer DCC IP and return a normal string IP."""
    assert isinstance(i, (int, long)), '%r is not a number.' % i
L = []
while len(L) < 4:
L.append(i % 256)
i /= 256
L.reverse()
return '.'.join(utils.iter.imap(str, L))
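# dccIP/unDccIP round-trip sketch: a dotted quad packed into the single
# big-endian base-256 integer that DCC expects:
# >>> dccIP('127.0.0.1')
# 2130706433
# >>> unDccIP(2130706433)
# '127.0.0.1'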
class IrcString(str):
"""This class does case-insensitive comparison and hashing of nicks."""
def __new__(cls, s=''):
x = super(IrcString, cls).__new__(cls, s)
x.lowered = toLower(x)
return x
def __eq__(self, s):
try:
return toLower(s) == self.lowered
except:
return False
def __ne__(self, s):
return not (self == s)
def __hash__(self):
return hash(self.lowered)
class IrcDict(utils.InsensitivePreservingDict):
"""Subclass of dict to make key comparison IRC-case insensitive."""
def key(self, s):
if s is not None:
s = toLower(s)
return s
class IrcSet(utils.NormalizingSet):
"""A sets.Set using IrcStrings instead of regular strings."""
def normalize(self, s):
return IrcString(s)
def __reduce__(self):
return (self.__class__, (list(self),))
class FloodQueue(object):
timeout = 0
def __init__(self, timeout=None, queues=None):
if timeout is not None:
self.timeout = timeout
if queues is None:
queues = IrcDict()
self.queues = queues
def __repr__(self):
return 'FloodQueue(timeout=%r, queues=%s)' % (self.timeout,
repr(self.queues))
def key(self, msg):
return msg.user + '@' + msg.host
def getTimeout(self):
if callable(self.timeout):
return self.timeout()
else:
return self.timeout
def _getQueue(self, msg, insert=True):
key = self.key(msg)
try:
return self.queues[key]
except KeyError:
if insert:
# python--
# instancemethod.__repr__ calls the instance.__repr__, which
# means that our __repr__ calls self.queues.__repr__, which
# calls structures.TimeoutQueue.__repr__, which calls
# getTimeout.__repr__, which calls our __repr__, which calls...
getTimeout = lambda : self.getTimeout()
q = utils.structures.TimeoutQueue(getTimeout)
self.queues[key] = q
return q
else:
return None
def enqueue(self, msg, what=None):
if what is None:
what = msg
q = self._getQueue(msg)
q.enqueue(what)
def len(self, msg):
q = self._getQueue(msg, insert=False)
if q is not None:
return len(q)
else:
return 0
def has(self, msg, what=None):
q = self._getQueue(msg, insert=False)
if q is not None:
if what is None:
what = msg
for elt in q:
if elt == what:
return True
return False
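# Minimal FloodQueue usage sketch; the Msg stand-in below is hypothetical
# (real callers pass IrcMsg objects, which carry .user and .host):
# >>> from collections import namedtuple
# >>> Msg = namedtuple('Msg', 'user host')
# >>> fq = FloodQueue(timeout=60)
# >>> fq.enqueue(Msg('user', 'host'), 'hello')
# >>> fq.len(Msg('user', 'host'))
# 1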
mircColors = IrcDict({
'white': '0',
'black': '1',
'blue': '2',
'green': '3',
'red': '4',
'brown': '5',
'purple': '6',
'orange': '7',
'yellow': '8',
'light green': '9',
'teal': '10',
'light blue': '11',
'dark blue': '12',
'pink': '13',
'dark grey': '14',
'light grey': '15',
'dark gray': '14',
'light gray': '15',
})
# We'll map integers to their string form so mircColor is simpler.
for (k, v) in mircColors.items():
if k is not None: # Ignore empty string for None.
sv = str(v)
mircColors[sv] = sv
def standardSubstitute(irc, msg, text, env=None):
"""Do the standard set of substitutions on text, and return it"""
if isChannel(msg.args[0]):
channel = msg.args[0]
else:
channel = 'somewhere'
def randInt():
return str(random.randint(-1000, 1000))
def randDate():
t = pow(2,30)*random.random()+time.time()/4.0
return time.ctime(t)
def randNick():
if channel != 'somewhere':
L = list(irc.state.channels[channel].users)
if len(L) > 1:
n = msg.nick
while n == msg.nick:
n = utils.iter.choice(L)
return n
else:
return msg.nick
else:
return 'someone'
ctime = time.ctime()
localtime = time.localtime()
vars = IrcDict({
'who': msg.nick,
'nick': msg.nick,
'user': msg.user,
'host': msg.host,
'channel': channel,
'botnick': irc.nick,
'now': ctime, 'ctime': ctime,
'randnick': randNick, 'randomnick': randNick,
'randdate': randDate, 'randomdate': randDate,
'rand': randInt, 'randint': randInt, 'randomint': randInt,
'today': time.strftime('%d %b %Y', localtime),
'year': localtime[0],
'month': localtime[1],
'monthname': time.strftime('%b', localtime),
'date': localtime[2],
'day': time.strftime('%A', localtime),
'h': localtime[3], 'hr': localtime[3], 'hour': localtime[3],
'm': localtime[4], 'min': localtime[4], 'minute': localtime[4],
's': localtime[5], 'sec': localtime[5], 'second': localtime[5],
'tz': time.tzname[time.daylight],
})
if env is not None:
vars.update(env)
return utils.str.perlVariableSubstitute(vars, text)
if __name__ == '__main__':
import sys, doctest
doctest.testmod(sys.modules['__main__'])
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
|
|
#!/usr/bin/env python
# coding: utf-8
""" Test cases for misc plot functions """
import nose
from pandas import Series, DataFrame
from pandas.compat import lmap
import pandas.util.testing as tm
from pandas.util.testing import slow
import numpy as np
from numpy import random
from numpy.random import randn
import pandas.tools.plotting as plotting
from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
_ok_for_gaussian_kde)
""" Test cases for misc plot functions """
@tm.mplskip
class TestSeriesPlots(TestPlotBase):
def setUp(self):
TestPlotBase.setUp(self)
import matplotlib as mpl
mpl.rcdefaults()
self.ts = tm.makeTimeSeries()
self.ts.name = 'ts'
@slow
def test_autocorrelation_plot(self):
from pandas.tools.plotting import autocorrelation_plot
_check_plot_works(autocorrelation_plot, series=self.ts)
_check_plot_works(autocorrelation_plot, series=self.ts.values)
ax = autocorrelation_plot(self.ts, label='Test')
self._check_legend_labels(ax, labels=['Test'])
@slow
def test_lag_plot(self):
from pandas.tools.plotting import lag_plot
_check_plot_works(lag_plot, series=self.ts)
_check_plot_works(lag_plot, series=self.ts, lag=5)
@slow
def test_bootstrap_plot(self):
from pandas.tools.plotting import bootstrap_plot
_check_plot_works(bootstrap_plot, series=self.ts, size=10)
@tm.mplskip
class TestDataFramePlots(TestPlotBase):
@slow
def test_scatter_plot_legacy(self):
tm._skip_if_no_scipy()
df = DataFrame(randn(100, 2))
def scat(**kwds):
return plotting.scatter_matrix(df, **kwds)
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat)
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, marker='+')
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, vmin=0)
if _ok_for_gaussian_kde('kde'):
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, diagonal='kde')
if _ok_for_gaussian_kde('density'):
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, diagonal='density')
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, diagonal='hist')
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat, range_padding=.1)
def scat2(x, y, by=None, ax=None, figsize=None):
return plotting.scatter_plot(df, x, y, by, ax, figsize=None)
_check_plot_works(scat2, x=0, y=1)
grouper = Series(np.repeat([1, 2, 3, 4, 5], 20), df.index)
with tm.assert_produces_warning(UserWarning):
_check_plot_works(scat2, x=0, y=1, by=grouper)
def test_scatter_matrix_axis(self):
tm._skip_if_no_scipy()
scatter_matrix = plotting.scatter_matrix
with tm.RNGContext(42):
df = DataFrame(randn(100, 3))
# we are plotting multiples on a sub-plot
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(scatter_matrix, filterwarnings='always',
frame=df, range_padding=.1)
axes0_labels = axes[0][0].yaxis.get_majorticklabels()
# GH 5662
if self.mpl_ge_2_0_0:
expected = ['-2', '0', '2']
else:
expected = ['-2', '-1', '0', '1', '2']
self._check_text_labels(axes0_labels, expected)
self._check_ticks_props(
axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
df[0] = ((df[0] - 2) / 3)
# we are plotting multiples on a sub-plot
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(scatter_matrix, filterwarnings='always',
frame=df, range_padding=.1)
axes0_labels = axes[0][0].yaxis.get_majorticklabels()
if self.mpl_ge_2_0_0:
expected = ['-1.0', '-0.5', '0.0']
else:
expected = ['-1.2', '-1.0', '-0.8', '-0.6', '-0.4', '-0.2', '0.0']
self._check_text_labels(axes0_labels, expected)
self._check_ticks_props(
axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
@slow
def test_andrews_curves(self):
from pandas.tools.plotting import andrews_curves
from matplotlib import cm
df = self.iris
_check_plot_works(andrews_curves, frame=df, class_column='Name')
rgba = ('#556270', '#4ECDC4', '#C7F464')
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', color=rgba)
self._check_colors(
ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])
cnames = ['dodgerblue', 'aquamarine', 'seagreen']
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', color=cnames)
self._check_colors(
ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10])
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', colormap=cm.jet)
cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique()))
self._check_colors(
ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10])
length = 10
df = DataFrame({"A": random.rand(length),
"B": random.rand(length),
"C": random.rand(length),
"Name": ["A"] * length})
_check_plot_works(andrews_curves, frame=df, class_column='Name')
rgba = ('#556270', '#4ECDC4', '#C7F464')
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', color=rgba)
self._check_colors(
ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])
cnames = ['dodgerblue', 'aquamarine', 'seagreen']
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', color=cnames)
self._check_colors(
ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10])
ax = _check_plot_works(andrews_curves, frame=df,
class_column='Name', colormap=cm.jet)
cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique()))
self._check_colors(
ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10])
colors = ['b', 'g', 'r']
df = DataFrame({"A": [1, 2, 3],
"B": [1, 2, 3],
"C": [1, 2, 3],
"Name": colors})
ax = andrews_curves(df, 'Name', color=colors)
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, linecolors=colors)
with tm.assert_produces_warning(FutureWarning):
andrews_curves(data=df, class_column='Name')
@slow
def test_parallel_coordinates(self):
from pandas.tools.plotting import parallel_coordinates
from matplotlib import cm
df = self.iris
ax = _check_plot_works(parallel_coordinates,
frame=df, class_column='Name')
nlines = len(ax.get_lines())
nxticks = len(ax.xaxis.get_ticklabels())
rgba = ('#556270', '#4ECDC4', '#C7F464')
ax = _check_plot_works(parallel_coordinates,
frame=df, class_column='Name', color=rgba)
self._check_colors(
ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])
cnames = ['dodgerblue', 'aquamarine', 'seagreen']
ax = _check_plot_works(parallel_coordinates,
frame=df, class_column='Name', color=cnames)
self._check_colors(
ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10])
ax = _check_plot_works(parallel_coordinates,
frame=df, class_column='Name', colormap=cm.jet)
cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique()))
self._check_colors(
ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10])
ax = _check_plot_works(parallel_coordinates,
frame=df, class_column='Name', axvlines=False)
assert len(ax.get_lines()) == (nlines - nxticks)
colors = ['b', 'g', 'r']
df = DataFrame({"A": [1, 2, 3],
"B": [1, 2, 3],
"C": [1, 2, 3],
"Name": colors})
ax = parallel_coordinates(df, 'Name', color=colors)
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, linecolors=colors)
with tm.assert_produces_warning(FutureWarning):
parallel_coordinates(data=df, class_column='Name')
with tm.assert_produces_warning(FutureWarning):
parallel_coordinates(df, 'Name', colors=colors)
@slow
def test_radviz(self):
from pandas.tools.plotting import radviz
from matplotlib import cm
df = self.iris
_check_plot_works(radviz, frame=df, class_column='Name')
rgba = ('#556270', '#4ECDC4', '#C7F464')
ax = _check_plot_works(
radviz, frame=df, class_column='Name', color=rgba)
# skip Circle drawn as ticks
patches = [p for p in ax.patches[:20] if p.get_label() != '']
self._check_colors(
patches[:10], facecolors=rgba, mapping=df['Name'][:10])
cnames = ['dodgerblue', 'aquamarine', 'seagreen']
        ax = _check_plot_works(radviz, frame=df, class_column='Name', color=cnames)
patches = [p for p in ax.patches[:20] if p.get_label() != '']
self._check_colors(patches, facecolors=cnames, mapping=df['Name'][:10])
        ax = _check_plot_works(radviz, frame=df,
                               class_column='Name', colormap=cm.jet)
cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique()))
patches = [p for p in ax.patches[:20] if p.get_label() != '']
self._check_colors(patches, facecolors=cmaps, mapping=df['Name'][:10])
colors = [[0., 0., 1., 1.],
[0., 0.5, 1., 1.],
[1., 0., 0., 1.]]
df = DataFrame({"A": [1, 2, 3],
"B": [2, 1, 3],
"C": [3, 2, 1],
"Name": ['b', 'g', 'r']})
ax = radviz(df, 'Name', color=colors)
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, facecolors=colors)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
|
|
#!/usr/bin/python
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: iam_user
short_description: Manage AWS IAM users
description:
- Manage AWS IAM users
version_added: "2.5"
author: Josh Souza (@joshsouza)
options:
name:
description:
- The name of the user to create.
required: true
managed_policy:
description:
- A list of managed policy ARNs or friendly names to attach to the user. To embed an inline policy, use M(iam_policy).
required: false
state:
description:
- Create or remove the IAM user
required: true
choices: [ 'present', 'absent' ]
purge_policy:
description:
- Detach policies which are not included in managed_policy list
required: false
default: false
requirements: [ botocore, boto3 ]
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Note: This module does not allow management of groups that users belong to.
# Groups should manage their membership directly using `iam_group`,
# as users belong to them.
# Create a user
- iam_user:
name: testuser1
state: present
# Create a user and attach a managed policy using its ARN
- iam_user:
name: testuser1
managed_policy:
- arn:aws:iam::aws:policy/AmazonSNSFullAccess
state: present
# Remove all managed policies from an existing user with an empty list
- iam_user:
name: testuser1
state: present
purge_policy: true
# Delete the user
- iam_user:
name: testuser1
state: absent
'''
RETURN = '''
user:
description: dictionary containing all the user information
returned: success
type: complex
contains:
arn:
description: the Amazon Resource Name (ARN) specifying the user
type: string
sample: "arn:aws:iam::1234567890:user/testuser1"
create_date:
description: the date and time, in ISO 8601 date-time format, when the user was created
type: string
sample: "2017-02-08T04:36:28+00:00"
user_id:
description: the stable and unique string identifying the user
type: string
sample: AGPAIDBWE12NSFINE55TM
user_name:
description: the friendly name that identifies the user
type: string
sample: testuser1
path:
description: the path to the user
type: string
sample: /
'''
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import camel_dict_to_snake_dict, ec2_argument_spec, get_aws_connection_info, boto3_conn
from ansible.module_utils.ec2 import HAS_BOTO3
import traceback
try:
from botocore.exceptions import ClientError, ParamValidationError
except ImportError:
pass # caught by imported HAS_BOTO3
def compare_attached_policies(current_attached_policies, new_attached_policies):
# If new_attached_policies is None it means we want to remove all policies
if len(current_attached_policies) > 0 and new_attached_policies is None:
return False
current_attached_policies_arn_list = []
for policy in current_attached_policies:
current_attached_policies_arn_list.append(policy['PolicyArn'])
if not set(current_attached_policies_arn_list).symmetric_difference(set(new_attached_policies)):
return True
else:
return False
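# Illustrative sketch (not part of the module's API): the two lists are
# compared as sets of ARNs, so ordering is irrelevant.
#   compare_attached_policies(
#       [{'PolicyArn': 'arn:aws:iam::aws:policy/ReadOnlyAccess'}],
#       ['arn:aws:iam::aws:policy/ReadOnlyAccess'])       # -> True
#   compare_attached_policies(
#       [{'PolicyArn': 'arn:aws:iam::aws:policy/ReadOnlyAccess'}], [])  # -> False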
def convert_friendly_names_to_arns(connection, module, policy_names):
# List comprehension that looks for any policy in the 'policy_names' list
# that does not begin with 'arn'. If there aren't any, short circuit.
# If there are, translate friendly name to the full arn
if not any([not policy.startswith('arn:') for policy in policy_names if policy is not None]):
return policy_names
allpolicies = {}
paginator = connection.get_paginator('list_policies')
policies = paginator.paginate().build_full_result()['Policies']
for policy in policies:
allpolicies[policy['PolicyName']] = policy['Arn']
allpolicies[policy['Arn']] = policy['Arn']
try:
return [allpolicies[policy] for policy in policy_names]
except KeyError as e:
module.fail_json(msg="Couldn't find policy: " + str(e))
def create_or_update_user(connection, module):
params = dict()
params['UserName'] = module.params.get('name')
managed_policies = module.params.get('managed_policy')
purge_policy = module.params.get('purge_policy')
changed = False
if managed_policies:
managed_policies = convert_friendly_names_to_arns(connection, module, managed_policies)
# Get user
user = get_user(connection, module, params['UserName'])
# If user is None, create it
if user is None:
# Check mode means we would create the user
if module.check_mode:
module.exit_json(changed=True)
try:
connection.create_user(**params)
changed = True
except ClientError as e:
module.fail_json(msg="Unable to create user: {0}".format(to_native(e)), exception=traceback.format_exc(),
**camel_dict_to_snake_dict(e.response))
except ParamValidationError as e:
module.fail_json(msg="Unable to create user: {0}".format(to_native(e)), exception=traceback.format_exc())
# Manage managed policies
current_attached_policies = get_attached_policy_list(connection, module, params['UserName'])
if not compare_attached_policies(current_attached_policies, managed_policies):
current_attached_policies_arn_list = []
for policy in current_attached_policies:
current_attached_policies_arn_list.append(policy['PolicyArn'])
# If managed_policies has a single empty element we want to remove all attached policies
if purge_policy:
# Detach policies not present
for policy_arn in list(set(current_attached_policies_arn_list) - set(managed_policies)):
changed = True
if not module.check_mode:
try:
connection.detach_user_policy(UserName=params['UserName'], PolicyArn=policy_arn)
except ClientError as e:
module.fail_json(msg="Unable to detach policy {0} from user {1}: {2}".format(
policy_arn, params['UserName'], to_native(e)),
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except ParamValidationError as e:
module.fail_json(msg="Unable to detach policy {0} from user {1}: {2}".format(
policy_arn, params['UserName'], to_native(e)),
exception=traceback.format_exc())
# If there are policies to adjust that aren't in the current list, then things have changed
# Otherwise the only changes were in purging above
if set(managed_policies).difference(set(current_attached_policies_arn_list)):
changed = True
# If there are policies in managed_policies attach each policy
if managed_policies != [None] and not module.check_mode:
for policy_arn in managed_policies:
try:
connection.attach_user_policy(UserName=params['UserName'], PolicyArn=policy_arn)
except ClientError as e:
module.fail_json(msg="Unable to attach policy {0} to user {1}: {2}".format(
policy_arn, params['UserName'], to_native(e)),
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except ParamValidationError as e:
module.fail_json(msg="Unable to attach policy {0} to user {1}: {2}".format(
policy_arn, params['UserName'], to_native(e)),
exception=traceback.format_exc())
if module.check_mode:
module.exit_json(changed=changed)
# Get the user again
user = get_user(connection, module, params['UserName'])
module.exit_json(changed=changed, iam_user=camel_dict_to_snake_dict(user))
def destroy_user(connection, module):
params = dict()
params['UserName'] = module.params.get('name')
if get_user(connection, module, params['UserName']):
# Check mode means we would remove this user
if module.check_mode:
module.exit_json(changed=True)
# Remove any attached policies otherwise deletion fails
try:
for policy in get_attached_policy_list(connection, module, params['UserName']):
connection.detach_user_policy(UserName=params['UserName'], PolicyArn=policy['PolicyArn'])
except ClientError as e:
module.fail_json(msg="Unable to detach policy {0} from user {1}: {2}".format(
policy['PolicyArn'], params['UserName'], to_native(e)),
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except ParamValidationError as e:
module.fail_json(msg="Unable to detach policy {0} from user {1}: {2}".format(
policy['PolicyArn'], params['UserName'], to_native(e)),
exception=traceback.format_exc())
try:
connection.delete_user(**params)
except ClientError as e:
module.fail_json(msg="Unable to delete user {0}: {1}".format(params['UserName'], to_native(e)),
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except ParamValidationError as e:
module.fail_json(msg="Unable to delete user {0}: {1}".format(params['UserName'], to_native(e)),
exception=traceback.format_exc())
else:
module.exit_json(changed=False)
module.exit_json(changed=True)
def get_user(connection, module, name):
params = dict()
params['UserName'] = name
try:
return connection.get_user(**params)
except ClientError as e:
if e.response['Error']['Code'] == 'NoSuchEntity':
return None
else:
module.fail_json(msg="Unable to get user {0}: {1}".format(name, to_native(e)),
**camel_dict_to_snake_dict(e.response))
def get_attached_policy_list(connection, module, name):
try:
return connection.list_attached_user_policies(UserName=name)['AttachedPolicies']
except ClientError as e:
if e.response['Error']['Code'] == 'NoSuchEntity':
return None
else:
module.fail_json(msg="Unable to get policies for user {0}: {1}".format(name, to_native(e)),
**camel_dict_to_snake_dict(e.response))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
name=dict(required=True, type='str'),
managed_policy=dict(default=[], type='list'),
state=dict(choices=['present', 'absent'], required=True),
purge_policy=dict(default=False, type='bool')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True
)
if not HAS_BOTO3:
module.fail_json(msg='boto3 required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
connection = boto3_conn(module, conn_type='client', resource='iam', region=region, endpoint=ec2_url, **aws_connect_params)
state = module.params.get("state")
if state == 'present':
create_or_update_user(connection, module)
else:
destroy_user(connection, module)
if __name__ == '__main__':
main()
|
|
from construct3.lib import singleton
import sys
from construct3.lib.binutil import BitStreamReader, BitStreamWriter
from construct3.lib.containers import Container
from construct3.lib.config import Config
try:
from io import BytesIO
except ImportError:
from cStringIO import StringIO as BytesIO
import six
from six.moves import xrange
class PackerError(Exception):
pass
class RawError(PackerError):
pass
class RangeError(PackerError):
pass
class SwitchError(PackerError):
pass
class Packer(object):
__slots__ = ()
def pack(self, obj):
stream = BytesIO()
self._pack(obj, stream, {}, Config())
return stream.getvalue()
def pack_to_stream(self, obj, stream):
self._pack(obj, stream, {}, Config())
def _pack(self, obj, stream, ctx, cfg):
raise NotImplementedError()
def unpack(self, buf_or_stream):
if not hasattr(buf_or_stream, "read"):
buf_or_stream = BytesIO(buf_or_stream)
return self._unpack(buf_or_stream, {}, Config())
def _unpack(self, stream, ctx, cfg):
raise NotImplementedError()
def sizeof(self, ctx = None, cfg = None):
return self._sizeof(ctx or {}, cfg or Config())
def _sizeof(self, ctx, cfg):
raise NotImplementedError()
#
# short hands
#
def __getitem__(self, count):
if isinstance(count, slice):
if count.step:
raise ValueError("Slice must not contain as step: %r" % (count,))
return Range(count.start, count.stop, self)
elif isinstance(count, six.integer_types) or hasattr(count, "__call__"):
return Range(count, count, self)
else:
raise TypeError("Expected a number, a contextual expression or a slice thereof, got %r" % (count,))
def __rtruediv__(self, name):
if name is not None and not isinstance(name, str):
raise TypeError("`name` must be a string or None, got %r" % (name,))
return (name, self)
__rdiv__ = __rtruediv__
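# Shorthand sketch (Raw and Range are defined below): division names a
# packer, and indexing/slicing repeats it.
#   "length" / Raw(1)   ->  ("length", Raw(1))
#   Raw(1)[4]           ->  Range(4, 4, Raw(1))
#   Raw(1)[2:5]         ->  Range(2, 5, Raw(1))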
@singleton
class noop(Packer):
__slots__ = ()
def __repr__(self):
return "noop"
def _pack(self, obj, stream, ctx, cfg):
pass
def _unpack(self, stream, ctx, cfg):
return None
def _sizeof(self, ctx, cfg):
return 0
class CtxConst(object):
__slots__ = ["value"]
def __init__(self, value):
self.value = value
def __repr__(self):
return repr(self.value)
def __call__(self, ctx):
return self.value
def _contextify(value):
if hasattr(value, "__call__"):
return value
else:
return CtxConst(value)
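# _contextify sketch: constants become callables, so packers can evaluate
# lengths and counts against a context dict uniformly.
#   _contextify(4)({})                            -> 4
#   _contextify(lambda ctx: ctx["n"])({"n": 3})   -> 3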
class Adapter(Packer):
#__slots__ = ["underlying", "_decode", "_encode"]
def __init__(self, underlying, decode = None, encode = None):
self.underlying = underlying
        # Use the constructor arguments unless a subclass already provides
        # its own _decode/_encode.
        if not hasattr(self, '_decode'):
            self._decode = decode
        if not hasattr(self, '_encode'):
            self._encode = encode
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, self.underlying)
def _pack(self, obj, stream, ctx, cfg):
obj2 = self.encode(obj, ctx)
self.underlying._pack(obj2, stream, ctx, cfg)
def _unpack(self, stream, ctx, cfg):
obj = self.underlying._unpack(stream, ctx, cfg)
return self.decode(obj, ctx)
def _sizeof(self, ctx, cfg):
return self.underlying._sizeof(ctx, cfg)
def encode(self, obj, ctx):
if self._encode:
return self._encode(obj, ctx)
else:
return obj
def decode(self, obj, ctx):
if self._decode:
return self._decode(obj, ctx)
else:
return obj
class Raw(Packer):
__slots__ = ["length"]
def __init__(self, length):
self.length = _contextify(length)
def __repr__(self):
return "Raw(%r)" % (self.length,)
def _pack(self, obj, stream, ctx, cfg):
length = self.length(ctx)
if len(obj) != length:
raise RawError("Expected buffer of length %d, got %d" % (length, len(obj)))
stream.write(obj)
def _unpack(self, stream, ctx, cfg):
length = self.length(ctx)
data = stream.read(length)
if len(data) != length:
raise RawError("Expected buffer of length %d, got %d" % (length, len(data)))
return data
def _sizeof(self, ctx, cfg):
return self.length(ctx)
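# Raw round-trip sketch (the length may also be contextual, per _contextify):
#   Raw(3).pack(b"abc")            -> b"abc"
#   Raw(3).unpack(b"abc")          -> b"abc"
#   Raw(lambda ctx: 2).sizeof()    -> 2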
def Named(*args, **kwargs):
if (args and kwargs) or (not args and not kwargs):
raise TypeError("This function takes either two positional arguments or a single keyword attribute",
args, kwargs)
elif args:
if len(args) != 2:
raise TypeError("Expected exactly two positional arguments", args)
elif not isinstance(args[0], str):
raise TypeError("The first argument must be a string, got %r" % (args[0],))
elif not isinstance(args[1], Packer):
raise TypeError("The second argument must be a Packer, got %r" % (args[1],))
else:
if len(kwargs) != 1:
raise TypeError("Expected exactly one keyword argument", kwargs)
args = kwargs.popitem()
if not isinstance(args[1], Packer):
raise TypeError("The second argument must be a Packer, got %r" % (args[1],))
if isinstance(args[1], UnnamedPackerMixin):
raise TypeError("%s cannot take a name" % (args[1].__class__.__name__,))
return tuple(args)
class UnnamedPackerMixin(object):
# make us look like a tuple
__slots__ = ()
def __iter__(self):
yield None
yield self
def __len__(self):
return 2
def __getitem__(self, index):
return None if index == 0 else self
def __rtruediv__(self, other):
raise TypeError("%s cannot take a name" % (self.__class__.__name__,))
__rdiv__ = __rtruediv__
class Embedded(UnnamedPackerMixin, Packer):
__slots__ = ["underlying"]
def __init__(self, underlying):
self.underlying = underlying
def _unpack(self, stream, ctx, cfg):
with cfg.set(embedded = True):
return self.underlying._unpack(stream, ctx, cfg)
def _pack(self, obj, stream, ctx, cfg):
with cfg.set(embedded = True):
self.underlying._pack(obj, stream, ctx, cfg)
def _sizeof(self, ctx, cfg):
with cfg.set(embedded = True):
return self.underlying._sizeof(ctx, cfg)
class Struct(Packer):
__slots__ = ["members", "container_factory"]
def __init__(self, *members, **kwargs):
self.members = members
self.container_factory = kwargs.pop("container_factory", None)
if kwargs:
raise TypeError("invalid keyword argument(s): %s" % (", ".join(kwargs.keys()),))
names = set()
for mem in members:
if (not hasattr(mem, "__len__") or len(mem) != 2 or
not isinstance(mem[0], (type(None), str)) or not isinstance(mem[1], Packer)):
raise TypeError("Struct members must be 2-tuples of (name, Packer): %r" % (mem,))
if mem[0] in names:
raise TypeError("Member %r already exists in this struct" % (mem[0],))
if mem[0]:
names.add(mem[0])
def __repr__(self):
return "Struct(%s)" % (", ".join(repr(m) for m in self.members),)
def _unpack(self, stream, ctx, cfg):
factory = self.container_factory or cfg.container_factory or Container
if cfg.embedded:
ctx2 = ctx
obj = cfg.container
del cfg.embedded
else:
ctx2 = {"_" : ctx}
obj = factory()
with cfg.set(container = obj, ctx = ctx2, container_factory = factory):
for name, pkr in self.members:
cfg.name = name
obj2 = pkr._unpack(stream, ctx2, cfg)
if name:
ctx2[name] = obj[name] = obj2
return obj
def _pack(self, obj, stream, ctx, cfg):
if cfg.embedded:
ctx2 = ctx
obj = cfg.container
del cfg.embedded
else:
ctx2 = {"_" : ctx}
with cfg.set(container = obj, ctx = ctx2):
for name, pkr in self.members:
cfg.name = name
if not name:
obj2 = None
else:
obj2 = ctx2[name] = obj[name]
pkr._pack(obj2, stream, ctx2, cfg)
def _sizeof(self, ctx, cfg):
if cfg.embedded:
ctx2 = ctx
del cfg.embedded
else:
ctx2 = {"_" : ctx}
        return sum(pkr._sizeof(ctx2, cfg) for _, pkr in self.members)
class Sequence(Packer):
__slots__ = ["members", "container_factory"]
def __init__(self, *members, **kwargs):
self.members = members
self.container_factory = kwargs.pop("container_factory", list)
if kwargs:
raise TypeError("Invalid keyword argument(s): %s" % (", ".join(kwargs.keys()),))
for mem in members:
if not isinstance(mem, Packer):
raise TypeError("Sequence members must be Packers: %r" % (mem,))
def __repr__(self):
return "Sequence(%s)" % (", ".join(repr(m) for m in self.members),)
def _unpack(self, stream, ctx, cfg):
factory = self.container_factory or cfg.container_factory or Container
if cfg.embedded:
ctx2 = ctx
obj = cfg.container
i = cfg.name + 1
del cfg.embedded
embedded = True
else:
ctx2 = {"_" : ctx}
obj = factory()
i = 0
embedded = False
        with cfg.set(container = obj, ctx = ctx2, container_factory = factory):
for pkr in self.members:
cfg.name = i
obj2 = pkr._unpack(stream, ctx2, cfg)
if obj2 is not None:
obj.append(obj2)
ctx2[i] = obj2
i += 1
return None if embedded else obj
def _pack(self, obj, stream, ctx, cfg):
from construct3.adapters import Padding
ctx2 = {"_" : ctx}
i = 0
for pkr in self.members:
if isinstance(pkr, Padding):
                pkr._pack(None, stream, ctx2, cfg)
else:
obj2 = ctx2[i] = obj[i]
                pkr._pack(obj2, stream, ctx2, cfg)
i += 1
def _sizeof(self, ctx, cfg):
if cfg.embedded:
ctx2 = ctx
del cfg.embedded
else:
ctx2 = {"_" : ctx}
        return sum(pkr._sizeof(ctx2, cfg) for pkr in self.members)
class Range(Packer):
__slots__ = ["mincount", "maxcount", "itempkr"]
def __init__(self, mincount, maxcount, itempkr):
self.mincount = _contextify(mincount)
self.maxcount = _contextify(maxcount)
self.itempkr = itempkr
def __repr__(self):
return "Range(%r, %r, %r)" % (self.mincount, self.maxcount, self.itempkr)
def _pack(self, obj, stream, ctx, cfg):
mincount = self.mincount(ctx)
if mincount is None:
mincount = 0
maxcount = self.maxcount(ctx)
if maxcount is None:
maxcount = sys.maxsize
assert maxcount >= mincount
if len(obj) < mincount or len(obj) > maxcount:
raise RangeError("Expected %s items, found %s" % (
mincount if mincount == maxcount else "%s..%s" % (mincount, maxcount), len(obj)))
ctx2 = {"_" : ctx}
for i, item in enumerate(obj):
ctx2[i] = item
self.itempkr._pack(item, stream, ctx2, cfg)
def _unpack(self, stream, ctx, cfg):
mincount = self.mincount(ctx)
if mincount is None:
mincount = 0
maxcount = self.maxcount(ctx)
if maxcount is None:
maxcount = sys.maxsize
assert maxcount >= mincount
ctx2 = {"_" : ctx}
obj = []
for i in xrange(maxcount):
try:
obj2 = self.itempkr._unpack(stream, ctx2, cfg)
except PackerError as ex:
if i >= mincount:
break
else:
raise RangeError("Expected %s items, found %s\nUnderlying exception: %r" % (
mincount if mincount == maxcount else "%s..%s" % (mincount, maxcount), i, ex))
ctx2[i] = obj2
obj.append(obj2)
return obj
    def _sizeof(self, ctx, cfg):
        # a Range only has a fixed size when mincount == maxcount
        mincount = self.mincount(ctx)
        if mincount != self.maxcount(ctx):
            raise NotImplementedError("Cannot compute sizeof of variable-length %r" % (self,))
        return mincount * self.itempkr._sizeof({"_" : ctx}, cfg)
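# Illustrative note: Range(3, 3, pkr) packs/unpacks exactly three items, while
# Range(0, None, pkr) unpacks greedily until the item packer raises (a None
# maxcount is treated as sys.maxsize above); `pkr` is a hypothetical item packer.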
class While(Packer):
__slots__ = ["cond", "itempkr"]
def __init__(self, cond, itempkr):
self.cond = cond
self.itempkr = itempkr
def __repr__(self):
return "While(%r, %r)" % (self.cond, self.itempkr)
def _pack(self, obj, stream, ctx, cfg):
ctx2 = {"_" : ctx}
for i, item in enumerate(obj):
ctx2[i] = item
            self.itempkr._pack(item, stream, ctx2, cfg)
if not self.cond(ctx2):
break
def _unpack(self, stream, ctx, cfg):
ctx2 = {"_" : ctx}
obj = []
i = 0
while True:
            obj2 = ctx2[i] = self.itempkr._unpack(stream, ctx2, cfg)
if not self.cond(ctx2):
break
obj.append(obj2)
i += 1
return obj
    def _sizeof(self, ctx, cfg):
raise NotImplementedError("Cannot compute sizeof of %r" % (self,))
class Switch(Packer):
__slots__ = ["expr", "cases", "default"]
def __init__(self, expr, cases, default = NotImplemented):
self.expr = expr
self.cases = cases
self.default = default
def _choose_packer(self, ctx):
val = self.expr(ctx)
if val in self.cases:
return self.cases[val]
elif self.default is not NotImplemented:
return self.default
else:
raise SwitchError("Cannot find a handler for %r" % (val,))
def _pack(self, obj, stream, ctx, cfg):
pkr = self._choose_packer(ctx)
pkr._pack(obj, stream, ctx, cfg)
def _unpack(self, stream, ctx, cfg):
pkr = self._choose_packer(ctx)
return pkr._unpack(stream, ctx, cfg)
def _sizeof(self, ctx, cfg):
return self._choose_packer(ctx)._sizeof(ctx, cfg)
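# Illustrative usage sketch (u8/u16 are hypothetical numeric packers):
# dispatch on a field unpacked earlier in the enclosing Struct, e.g.
#     Switch(lambda ctx: ctx["type"], {1: u8, 2: u16}, default = Raw(4))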
#=======================================================================================================================
# Stream-related
#=======================================================================================================================
class Bitwise(Packer):
    __slots__ = ["underlying"]
    def __init__(self, underlying):
self.underlying = underlying
def __repr__(self):
return "Bitwise(%r)" % (self.underlying,)
def _unpack(self, stream, ctx, cfg):
stream2 = BitStreamReader(stream)
obj = self.underlying._unpack(stream2, ctx, cfg)
stream2.close()
return obj
def _pack(self, obj, stream, ctx, cfg):
stream2 = BitStreamWriter(stream)
self.underlying._pack(obj, stream2, ctx, cfg)
stream2.close()
def _sizeof(self, ctx, cfg):
return self.underlying._sizeof(ctx, cfg) // 8
@singleton
class anchor(Packer):
__slots__ = ()
def __repr__(self):
return "anchor"
def _unpack(self, stream, ctx, cfg):
return stream.tell()
def _pack(self, obj, stream, ctx, cfg):
ctx[cfg.name] = stream.tell()
def _sizeof(self, ctx, cfg):
return 0
class Pointer(Packer):
__slots__ = ["underlying", "offset"]
def __init__(self, offset, underlying):
self.underlying = underlying
self.offset = _contextify(offset)
def __repr__(self):
return "Pointer(%r, %r)" % (self.offset, self.underlying)
def _unpack(self, stream, ctx, cfg):
newpos = self.offset(ctx)
origpos = stream.tell()
stream.seek(newpos)
obj = self.underlying._unpack(stream, ctx, cfg)
stream.seek(origpos)
return obj
def _pack(self, obj, stream, ctx, cfg):
newpos = self.offset(ctx)
origpos = stream.tell()
stream.seek(newpos)
self.underlying._pack(obj, stream, ctx, cfg)
stream.seek(origpos)
def _sizeof(self, ctx, cfg):
return 0
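# Illustrative usage sketch: anchor records the current stream position into
# the context, and Pointer packs/unpacks at a computed offset without
# advancing the stream, so an offset-table layout could be declared roughly as
# (u32 is a hypothetical numeric packer; the offset expression is whatever
# _contextify() accepts):
#     Struct(
#         Named("data_offset", u32),
#         Named("data", Pointer(lambda ctx: ctx["data_offset"], Raw(16))),
#     )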
|
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# coding: utf-8
"""Lists of functions whitelisted/blacklisted for automatic mixed precision in symbol API."""
# Functions that should be cast to lower precision
BF16_FUNCS = [
'Convolution',
'FullyConnected',
]
# Functions that should not be cast, either because
# they are irrelevant (not used in the network itself,
# e.g. image transformations or optimizers) or because
# they are dtype-neutral (they work in both bf16 and fp32)
BF16_FP32_FUNCS = [
'abs',
'_add',
'BatchNorm',
'BatchNormWithReLU',
'clip',
'Concat',
'concat',
'LRN',
'Pooling',
'relu',
'shuffle',
'_shuffle',
'sqrt',
'square',
'tanh',
]
# Functions that, when running in bfloat16, still need some of their parameters in float32.
BF16_USE_FP32_PARAMS = {
'BatchNormWithReLU': ["", "gamma", "beta", "moving_mean", "moving_var"],
'BatchNorm': ["", "gamma", "beta", "moving_mean", "moving_var"],
}
# Functions that have to be cast to FP32 due to possible
# overflows
FP32_FUNCS = [
'Deconvolution',
'RNN',
'BatchNorm_v1',
'BilinearSampler',
'BlockGrad',
'Cast',
'cast',
'cast_storage',
'Crop',
'Dropout',
'Embedding',
'_sparse_Embedding',
'_sparse_FullyConnected',
'Flatten',
'GridGenerator',
'Pad',
'Pooling_v1',
'ROIPooling',
'Reshape',
'SequenceLast',
'SequenceMask',
'SequenceReverse',
'SliceChannel',
'SpatialTransformer',
'SwapAxis',
'UpSampling',
'_CachedOp',
'_CrossDeviceCopy',
'_CustomFunction',
'_DivScalar',
'_EqualScalar',
'_GreaterScalar',
'_GreaterEqualScalar',
'_LesserScalar',
'_LesserEqualScalar',
'_LogicalAndScalar',
'_LogicalOrScalar',
'_LogicalXorScalar',
'_MaximumScalar',
'_MinimumScalar',
'_MinusScalar',
'_ModScalar',
'_MulScalar',
'_NoGradient',
'_NotEqualScalar',
'_PlusScalar',
'_RMinusScalar',
'_RModScalar',
'_adamw_update',
'_arange',
'_broadcast_backward',
'_cond',
'_contrib_AdaptiveAvgPooling2D',
'_contrib_BilinearResize2D',
'_contrib_SparseEmbedding',
'_contrib_bipartite_matching',
'_contrib_dequantize',
'_contrib_div_sqrt_dim',
'_contrib_boolean_mask',
'_contrib_getnnz',
'_contrib_gradientmultiplier',
'_contrib_group_adagrad_update',
'_contrib_ifft',
'_contrib_index_array',
'_contrib_index_copy',
'_contrib_quadratic',
'_contrib_quantize',
'_contrib_quantize_v2',
'_contrib_quantized_concat',
'_contrib_quantized_conv',
'_contrib_quantized_flatten',
'_contrib_quantized_fully_connected',
'_contrib_quantized_pooling',
'_contrib_quantized_elemwise_add',
'_contrib_quantized_act',
'_image_crop',
'_linspace',
'_contrib_requantize',
'_copy',
'_copyto',
'_crop_assign',
'_crop_assign_scalar',
'_cvcopyMakeBorder',
'_cvimdecode',
'_cvimread',
'_cvimresize',
'_div_scalar',
'_equal_scalar',
'_eye',
'_foreach',
'_while_loop',
'_full',
'_grad_add',
'_greater_scalar',
'_greater_equal_scalar',
'_histogram',
'_identity_with_attr_like_rhs',
'_image_adjust_lighting',
'_image_flip_left_right',
'_image_flip_top_bottom',
'_image_normalize',
'_image_random_brightness',
'_image_random_color_jitter',
'_image_random_contrast',
'_image_random_flip_left_right',
'_image_random_flip_top_bottom',
'_image_random_hue',
'_image_random_lighting',
'_image_random_saturation',
'_image_resize',
'_image_to_tensor',
'_imdecode',
'_lesser_scalar',
'_lesser_equal_scalar',
'_logical_and_scalar',
'_logical_or_scalar',
'_logical_xor_scalar',
'_maximum_scalar',
'_minimum_scalar',
'_minus_scalar',
'_mod_scalar',
'_mp_adamw_update',
'_mul_scalar',
'_not_equal_scalar',
'_onehot_encode',
'_ones',
'_plus_scalar',
'_random_exponential',
'_random_exponential_like',
'_random_gamma',
'_random_gamma_like',
'_random_generalized_negative_binomial',
'_random_generalized_negative_binomial_like',
'_random_negative_binomial',
'_random_negative_binomial_like',
'_random_normal',
'_random_normal_like',
'_random_poisson',
'_random_poisson_like',
'_random_randint',
'_random_uniform',
'_random_uniform_like',
'_ravel_multi_index',
'_rminus_scalar',
'_rmod_scalar',
'_rnn_param_concat',
'_sample_exponential',
'_sample_gamma',
'_sample_generalized_negative_binomial',
'_sample_multinomial',
'_sample_negative_binomial',
'_sample_normal',
'_sample_poisson',
'_sample_uniform',
'_sample_unique_zipfian',
'_scatter_minus_scalar',
'_scatter_plus_scalar',
'_scatter_set_nd',
'_set_value',
'_slice_assign',
'_slice_assign_scalar',
'_sparse_abs',
'_sparse_adagrad_update',
'_sparse_adam_update',
'_sparse_arccosh',
'_sparse_arcsinh',
'_sparse_arctan',
'_sparse_cast_storage',
'_sparse_cbrt',
'_sparse_ceil',
'_sparse_clip',
'_sparse_concat',
'_sparse_cos',
'_sparse_degrees',
'_sparse_fix',
'_sparse_floor',
'_sparse_ftrl_update',
'_sparse_negative',
'_sparse_radians',
'_sparse_relu',
'_sparse_retain',
'_sparse_rint',
'_sparse_round',
'_sparse_sgd_mom_update',
'_sparse_sgd_update',
'_sparse_sigmoid',
'_sparse_sign',
'_sparse_sin',
'_sparse_sinh',
'_sparse_slice',
'_sparse_sqrt',
'_sparse_stop_gradient',
'_sparse_tanh',
'_sparse_trunc',
'_sparse_zeros_like',
'_split_v2',
'_split_v2_backward',
'_unravel_index',
'_zeros',
'_zeros_without_dtype',
'adam_update',
'all_finite',
# 'amp_cast',
# 'amp_multicast',
'arccosh',
'arcsinh',
'arctan',
'argmax',
'argmax_channel',
'argmin',
'batch_take',
'broadcast_axes',
'broadcast_axis',
'broadcast_like',
'broadcast_to',
'cbrt',
'ceil',
'choose_element_0index',
'cos',
'crop',
'degrees',
'depth_to_space',
'diag',
'erf',
'expand_dims',
'fill_element_0index',
'fix',
'flatten',
'flip',
'floor',
'ftml_update',
'ftrl_update',
'gather_nd',
'hard_sigmoid',
'identity',
'logical_not',
'max_axis',
'max',
'min',
'min_axis',
'mp_sgd_mom_update',
'mp_sgd_update',
'multi_all_finite',
'multi_mp_sgd_mom_update',
'multi_mp_sgd_update',
'multi_sgd_mom_update',
'multi_sgd_update',
'negative',
'normal',
'one_hot',
'ones_like',
'pad',
'pick',
'radians',
'random_exponential',
'random_gamma',
'random_generalized_negative_binomial',
'random_negative_binomial',
'random_normal',
'random_poisson',
'random_randint',
'random_uniform',
'ravel_multi_index',
'repeat',
'reshape',
'reshape_like',
'reverse',
'rint',
'rmsprop_update',
'rmspropalex_update',
'round',
'sample_exponential',
'sample_gamma',
'sample_generalized_negative_binomial',
'sample_multinomial',
'sample_negative_binomial',
'sample_normal',
'sample_poisson',
'sample_uniform',
'scatter_nd',
'sgd_mom_update',
'sgd_update',
'shape_array',
'sigmoid',
'sign',
'signsgd_update',
'signum_update',
'sin',
'size_array',
'slice',
'slice_axis',
'slice_like',
'softsign',
'sort',
'space_to_depth',
'split',
'squeeze',
'stop_gradient',
'swapaxes',
'take',
'tile',
'transpose',
'trunc',
'uniform',
'unravel_index',
'zeros_like',
'_sg_mkldnn_conv',
'_sg_mkldnn_fully_connected',
'broadcast_mul',
'Convolution_v1',
'IdentityAttachKLSparseReg',
'arccos',
'_sparse_arccos',
'arcsin',
'cosh',
'_sparse_cosh',
'erfinv',
'sinh',
'tan',
'_sparse_tan',
'arctanh',
'_sparse_arcsin',
'_sparse_arctanh',
# Exponents
'exp',
'expm1',
'_sparse_exp',
'_sparse_expm1',
'log',
'log10',
'log2',
'log1p',
# Powers
'broadcast_power',
'_sparse_square',
'reciprocal',
'_RDivScalar',
'_rdiv_scalar',
'rsqrt',
'rcbrt',
'_Power',
'_PowerScalar',
'_power',
'_power_scalar',
'_RPowerScalar',
'_rpower_scalar',
'linalg_sumlogdiag',
'_Hypot',
'_HypotScalar',
'_hypot',
'_hypot_scalar',
'broadcast_hypot',
'_square_sum',
'_contrib_hawkesll',
# Reductions
'sum',
'sum_axis',
'nansum',
'prod',
'nanprod',
'mean',
'norm',
'softmin',
'khatri_rao',
'moments',
# Misc
'gamma',
'gammaln',
'_linalg_gelqf',
'_linalg_gemm',
'_linalg_gemm2',
'_linalg_potrf',
'_linalg_potri',
'_linalg_sumlogdiag',
'_linalg_syevd',
'_linalg_syrk',
'_linalg_trmm',
'_linalg_trsm',
'_linalg_makediag',
'_linalg_extractdiag',
'_linalg_maketrian',
'_linalg_extracttrian',
'_linalg_inverse',
'_linalg_det',
'_linalg_slogdet',
'linalg_syrk',
'linalg_potrf',
'linalg_potri',
'linalg_gemm2',
'linalg_gemm',
'linalg_gelqf',
'linalg_trmm',
'linalg_trsm',
'linalg_makediag',
'linalg_extractdiag',
'linalg_maketrian',
'linalg_extracttrian',
'linalg_inverse',
'linalg_det',
'linalg_slogdet',
'_NDArray',
'_Native',
'_contrib_count_sketch',
'_contrib_SyncBatchNorm',
'_contrib_fft',
'_sparse_gamma',
'_sparse_gammaln',
'_sparse_log',
'_sparse_log10',
'_sparse_log1p',
'_sparse_log2',
'_sparse_make_loss',
'_sparse_mean',
'_sparse_norm',
'_sparse_rsqrt',
'argsort',
'topk',
# Neural network
'SoftmaxOutput',
'softmax',
'Softmax',
'log_softmax',
'InstanceNorm',
'LayerNorm',
'GroupNorm',
'L2Normalization',
'SoftmaxActivation',
'LinearRegressionOutput',
'LogisticRegressionOutput',
'MAERegressionOutput',
'_sparse_LinearRegressionOutput',
'_sparse_LogisticRegressionOutput',
'_sparse_MAERegressionOutput',
'SVMOutput',
'softmax_cross_entropy',
'smooth_l1',
'MakeLoss',
'make_loss',
'Custom',
'CTCLoss',
'_contrib_CTCLoss',
'_contrib_ctc_loss',
'ctc_loss',
'_contrib_DeformableConvolution',
'_contrib_DeformablePSROIPooling',
]
# Functions that have to be cast to FP32 only for
# some values of their parameters
CONDITIONAL_FP32_FUNCS = [
('Activation', 'act_type', ['softrelu']),
('LeakyReLU', 'act_type', ['elu', 'selu']),
]
# Functions with multiple inputs, that need the same
# type of all their inputs
WIDEST_TYPE_CASTS = [
'_Plus',
'_plus',
'_Minus',
'_sub',
'_Mul',
'_Div',
'_div',
'_scatter_elemwise_div',
'_Mod',
'_Not_Equal',
'_Equal',
'_equal',
'_Greater',
'_greater',
'_Greater_Equal',
'_greater_equal',
'_Lesser',
'_Lesser_Equal',
'_lesser',
'_lesser_equal',
'_Logical_And',
'_Logical_Or',
'_Logical_Xor',
'_logical_and',
'_logical_or',
'_logical_xor',
'_maximum',
'_minimum',
'_minus',
'_mod',
'_mul',
'_not_equal',
'Correlation',
'ElementWiseSum',
'_sparse_ElementWiseSum',
'add_n',
'_sparse_add_n',
'batch_dot',
'broadcast_add',
'broadcast_plus',
'broadcast_div',
'broadcast_equal',
'broadcast_greater',
'broadcast_greater_equal',
'broadcast_lesser',
'broadcast_lesser_equal',
'broadcast_logical_and',
'broadcast_logical_or',
'broadcast_logical_xor',
'broadcast_maximum',
'broadcast_minimum',
'broadcast_minus',
'broadcast_mod',
'broadcast_not_equal',
'broadcast_sub',
'dot',
'elemwise_add',
'elemwise_div',
'elemwise_mul',
'elemwise_sub',
'stack',
'_Maximum',
'_Minimum',
'_contrib_MultiBoxDetection',
'_contrib_MultiBoxPrior',
'_contrib_MultiBoxTarget',
'_contrib_MultiProposal',
'_contrib_PSROIPooling',
'_contrib_Proposal',
'_contrib_ROIAlign',
'_contrib_box_iou',
'_contrib_box_nms',
'_contrib_box_non_maximum_suppression',
'_contrib_dgl_adjacency',
'_contrib_dgl_csr_neighbor_non_uniform_sample',
'_contrib_dgl_csr_neighbor_uniform_sample',
'_contrib_dgl_graph_compact',
'_contrib_dgl_subgraph',
'_contrib_edge_id',
'where',
'_sparse_where',
'_sparse_broadcast_add',
'_sparse_broadcast_div',
'_sparse_broadcast_minus',
'_sparse_broadcast_mul',
'_sparse_broadcast_plus',
'_sparse_broadcast_sub',
'_sparse_dot',
'_sparse_elemwise_add',
'_sparse_elemwise_div',
'_sparse_elemwise_mul',
'_sparse_elemwise_sub',
'_sparse_sum',
'random_pdf_gamma',
'random_pdf_exponential',
'random_pdf_uniform',
'random_pdf_negative_binomial',
'random_pdf_generalized_negative_binomial',
'random_pdf_dirichlet',
'random_pdf_normal',
'random_pdf_poisson',
'_random_pdf_gamma',
'_random_pdf_exponential',
'_random_pdf_uniform',
'_random_pdf_negative_binomial',
'_random_pdf_generalized_negative_binomial',
'_random_pdf_dirichlet',
'_random_pdf_normal',
'_random_pdf_poisson',
]
LOSS_OUTPUT_FUNCTIONS = [
'SoftmaxOutput',
'LinearRegressionOutput',
'LogisticRegressionOutput',
'MAERegressionOutput',
]
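# Illustrative helper (not part of the upstream lists): classify an operator
# name against the tables above. This is purely a reading aid; the real AMP
# machinery consumes these lists elsewhere. BF16_USE_FP32_PARAMS additionally
# pins individual parameters of bf16 ops to float32.
def _amp_bucket(op_name):
    """Return a short description of how AMP would treat `op_name`."""
    if op_name in BF16_FUNCS:
        return 'cast to bf16'
    if op_name in BF16_FP32_FUNCS:
        return 'dtype neutral'
    if op_name in WIDEST_TYPE_CASTS:
        return 'cast all inputs to the widest input type'
    for name, param, values in CONDITIONAL_FP32_FUNCS:
        if op_name == name:
            return 'cast to fp32 when %s is one of %r' % (param, values)
    if op_name in FP32_FUNCS:
        return 'cast to fp32'
    return 'unlisted'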
|
|
"""
Least Angle Regression algorithm. See the documentation on the
Generalized Linear Model for a complete discussion.
"""
from __future__ import print_function
# Author: Fabian Pedregosa <fabian.pedregosa@inria.fr>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Gael Varoquaux
#
# License: BSD 3 clause
from math import log
import sys
import warnings
import numpy as np
from scipy import linalg, interpolate
from scipy.linalg.lapack import get_lapack_funcs
from .base import LinearModel
from ..base import RegressorMixin
from ..utils import array2d, arrayfuncs, as_float_array, check_arrays
from ..cross_validation import _check_cv as check_cv
from ..externals.joblib import Parallel, delayed
from ..externals.six.moves import xrange
def lars_path(X, y, Xy=None, Gram=None, max_iter=500,
alpha_min=0, method='lar', copy_X=True,
eps=np.finfo(np.float).eps,
copy_Gram=True, verbose=0, return_path=True):
"""Compute Least Angle Regression or Lasso path using LARS algorithm [1]
The optimization objective for the case method='lasso' is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
    in the case of method='lar', the objective function is only known in
    the form of an implicit equation (see discussion in [1])
Parameters
-----------
X : array, shape: (n_samples, n_features)
Input data.
y : array, shape: (n_samples)
Input targets.
max_iter : integer, optional (default=500)
Maximum number of iterations to perform, set to infinity for no limit.
Gram : None, 'auto', array, shape: (n_features, n_features), optional
Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram
matrix is precomputed from the given X, if there are more samples
than features.
alpha_min : float, optional (default=0)
Minimum correlation along the path. It corresponds to the
regularization parameter alpha parameter in the Lasso.
method : {'lar', 'lasso'}, optional (default='lar')
Specifies the returned model. Select ``'lar'`` for Least Angle
Regression, ``'lasso'`` for the Lasso.
eps : float, optional (default=``np.finfo(np.float).eps``)
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
copy_X : bool, optional (default=True)
If ``False``, ``X`` is overwritten.
copy_Gram : bool, optional (default=True)
If ``False``, ``Gram`` is overwritten.
verbose : int (default=0)
Controls output verbosity.
    return_path : bool, optional (default=True)
        If ``True``, returns the entire path, else returns only the
        last point of the path.
Returns
--------
alphas: array, shape: [n_alphas + 1]
Maximum of covariances (in absolute value) at each iteration.
``n_alphas`` is either ``max_iter``, ``n_features`` or the
number of nodes in the path with ``alpha >= alpha_min``, whichever
is smaller.
active: array, shape [n_alphas]
Indices of active variables at the end of the path.
coefs: array, shape (n_features, n_alphas + 1)
Coefficients along the path
See also
--------
lasso_path
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
----------
.. [1] "Least Angle Regression", Effron et al.
http://www-stat.stanford.edu/~tibs/ftp/lars.pdf
.. [2] `Wikipedia entry on the Least-angle regression
<http://en.wikipedia.org/wiki/Least-angle_regression>`_
.. [3] `Wikipedia entry on the Lasso
<http://en.wikipedia.org/wiki/Lasso_(statistics)#Lasso_method>`_
"""
n_features = X.shape[1]
n_samples = y.size
max_features = min(max_iter, n_features)
if return_path:
coefs = np.zeros((max_features + 1, n_features))
alphas = np.zeros(max_features + 1)
else:
coef, prev_coef = np.zeros(n_features), np.zeros(n_features)
alpha, prev_alpha = np.array([0.]), np.array([0.]) # better ideas?
n_iter, n_active = 0, 0
active, indices = list(), np.arange(n_features)
# holds the sign of covariance
sign_active = np.empty(max_features, dtype=np.int8)
drop = False
# will hold the cholesky factorization. Only lower part is
# referenced.
L = np.empty((max_features, max_features), dtype=X.dtype)
swap, nrm2 = linalg.get_blas_funcs(('swap', 'nrm2'), (X,))
solve_cholesky, = get_lapack_funcs(('potrs',), (X,))
if Gram is None:
if copy_X:
            # force copy. setting the array to be fortran-ordered
            # speeds up the calculation of the (partial) Gram matrix
            # and makes it easy to swap columns
X = X.copy('F')
elif Gram == 'auto':
Gram = None
if X.shape[0] > X.shape[1]:
Gram = np.dot(X.T, X)
elif copy_Gram:
Gram = Gram.copy()
if Xy is None:
Cov = np.dot(X.T, y)
else:
Cov = Xy.copy()
if verbose:
if verbose > 1:
print("Step\t\tAdded\t\tDropped\t\tActive set size\t\tC")
else:
sys.stdout.write('.')
sys.stdout.flush()
tiny = np.finfo(np.float).tiny # to avoid division by 0 warning
tiny32 = np.finfo(np.float32).tiny # to avoid division by 0 warning
while True:
if Cov.size:
C_idx = np.argmax(np.abs(Cov))
C_ = Cov[C_idx]
C = np.fabs(C_)
else:
C = 0.
if return_path:
alpha = alphas[n_iter, np.newaxis]
coef = coefs[n_iter]
prev_alpha = alphas[n_iter - 1, np.newaxis]
prev_coef = coefs[n_iter - 1]
alpha[0] = C / n_samples
if alpha[0] <= alpha_min: # early stopping
if not alpha[0] == alpha_min:
# interpolation factor 0 <= ss < 1
if n_iter > 0:
# In the first iteration, all alphas are zero, the formula
# below would make ss a NaN
ss = ((prev_alpha[0] - alpha_min) /
(prev_alpha[0] - alpha[0]))
coef[:] = prev_coef + ss * (coef - prev_coef)
alpha[0] = alpha_min
if return_path:
coefs[n_iter] = coef
break
if n_iter >= max_iter or n_active >= n_features:
break
if not drop:
##########################################################
# Append x_j to the Cholesky factorization of (Xa * Xa') #
# #
# ( L 0 ) #
# L -> ( ) , where L * w = Xa' x_j #
# ( w z ) and z = ||x_j|| #
# #
##########################################################
sign_active[n_active] = np.sign(C_)
m, n = n_active, C_idx + n_active
Cov[C_idx], Cov[0] = swap(Cov[C_idx], Cov[0])
indices[n], indices[m] = indices[m], indices[n]
Cov_not_shortened = Cov
Cov = Cov[1:] # remove Cov[0]
if Gram is None:
X.T[n], X.T[m] = swap(X.T[n], X.T[m])
c = nrm2(X.T[n_active]) ** 2
L[n_active, :n_active] = \
np.dot(X.T[n_active], X.T[:n_active].T)
else:
                # swap only works in place if the matrix is
                # fortran contiguous ...
Gram[m], Gram[n] = swap(Gram[m], Gram[n])
Gram[:, m], Gram[:, n] = swap(Gram[:, m], Gram[:, n])
c = Gram[n_active, n_active]
L[n_active, :n_active] = Gram[n_active, :n_active]
# Update the cholesky decomposition for the Gram matrix
arrayfuncs.solve_triangular(L[:n_active, :n_active],
L[n_active, :n_active])
v = np.dot(L[n_active, :n_active], L[n_active, :n_active])
diag = max(np.sqrt(np.abs(c - v)), eps)
L[n_active, n_active] = diag
if diag < 1e-7:
# The system is becoming too ill-conditioned.
# We have degenerate vectors in our active set.
# We'll 'drop for good' the last regressor added
warnings.warn('Regressors in active set degenerate. '
'Dropping a regressor, after %i iterations, '
'i.e. alpha=%.3e, '
'with an active set of %i regressors, and '
'the smallest cholesky pivot element being %.3e'
% (n_iter, alpha, n_active, diag))
# XXX: need to figure a 'drop for good' way
Cov = Cov_not_shortened
Cov[0] = 0
Cov[C_idx], Cov[0] = swap(Cov[C_idx], Cov[0])
continue
active.append(indices[n_active])
n_active += 1
if verbose > 1:
print("%s\t\t%s\t\t%s\t\t%s\t\t%s" % (n_iter, active[-1], '',
n_active, C))
if method == 'lasso' and n_iter > 0 and prev_alpha[0] < alpha[0]:
            # alpha is increasing. This is because the updates of Cov are
            # bringing in too much numerical error, which is greater
            # than the remaining correlation with the
            # regressors. Time to bail out
warnings.warn('Early stopping the lars path, as the residues '
'are small and the current value of alpha is no '
'longer well controlled. %i iterations, alpha=%.3e, '
'previous alpha=%.3e, with an active set of %i '
'regressors.'
% (n_iter, alpha, prev_alpha, n_active))
break
# least squares solution
least_squares, info = solve_cholesky(L[:n_active, :n_active],
sign_active[:n_active],
lower=True)
if least_squares.size == 1 and least_squares == 0:
# This happens because sign_active[:n_active] = 0
least_squares[...] = 1
AA = 1.
else:
# is this really needed ?
AA = 1. / np.sqrt(np.sum(least_squares * sign_active[:n_active]))
if not np.isfinite(AA):
# L is too ill-conditioned
i = 0
L_ = L[:n_active, :n_active].copy()
while not np.isfinite(AA):
L_.flat[::n_active + 1] += (2 ** i) * eps
least_squares, info = solve_cholesky(
L_, sign_active[:n_active], lower=True)
tmp = max(np.sum(least_squares * sign_active[:n_active]),
eps)
AA = 1. / np.sqrt(tmp)
i += 1
least_squares *= AA
if Gram is None:
# equiangular direction of variables in the active set
eq_dir = np.dot(X.T[:n_active].T, least_squares)
            # correlation between each inactive variable and
            # the equiangular vector
corr_eq_dir = np.dot(X.T[n_active:], eq_dir)
else:
            # with a huge number of features, this takes 50% of the time;
            # it could probably be avoided by updating it using an
            # orthogonal (QR) decomposition of X
corr_eq_dir = np.dot(Gram[:n_active, n_active:].T,
least_squares)
g1 = arrayfuncs.min_pos((C - Cov) / (AA - corr_eq_dir + tiny))
g2 = arrayfuncs.min_pos((C + Cov) / (AA + corr_eq_dir + tiny))
gamma_ = min(g1, g2, C / AA)
# TODO: better names for these variables: z
drop = False
z = -coef[active] / (least_squares + tiny32)
z_pos = arrayfuncs.min_pos(z)
if z_pos < gamma_:
# some coefficients have changed sign
idx = np.where(z == z_pos)[0]
# update the sign, important for LAR
sign_active[idx] = -sign_active[idx]
if method == 'lasso':
gamma_ = z_pos
drop = True
n_iter += 1
if return_path:
if n_iter >= coefs.shape[0]:
del coef, alpha, prev_alpha, prev_coef
# resize the coefs and alphas array
add_features = 2 * max(1, (max_features - n_active))
coefs = np.resize(coefs, (n_iter + add_features, n_features))
alphas = np.resize(alphas, n_iter + add_features)
coef = coefs[n_iter]
prev_coef = coefs[n_iter - 1]
alpha = alphas[n_iter, np.newaxis]
prev_alpha = alphas[n_iter - 1, np.newaxis]
else:
# mimic the effect of incrementing n_iter on the array references
prev_coef = coef
prev_alpha[0] = alpha[0]
coef = np.zeros_like(coef)
coef[active] = prev_coef[active] + gamma_ * least_squares
# update correlations
Cov -= gamma_ * corr_eq_dir
# See if any coefficient has changed sign
if drop and method == 'lasso':
arrayfuncs.cholesky_delete(L[:n_active, :n_active], idx)
n_active -= 1
m, n = idx, n_active
drop_idx = active.pop(idx)
if Gram is None:
# propagate dropped variable
for i in range(idx, n_active):
X.T[i], X.T[i + 1] = swap(X.T[i], X.T[i + 1])
# yeah this is stupid
indices[i], indices[i + 1] = indices[i + 1], indices[i]
# TODO: this could be updated
residual = y - np.dot(X[:, :n_active], coef[active])
temp = np.dot(X.T[n_active], residual)
Cov = np.r_[temp, Cov]
else:
for i in range(idx, n_active):
indices[i], indices[i + 1] = indices[i + 1], indices[i]
Gram[i], Gram[i + 1] = swap(Gram[i], Gram[i + 1])
Gram[:, i], Gram[:, i + 1] = swap(Gram[:, i],
Gram[:, i + 1])
# Cov_n = Cov_j + x_j * X + increment(betas) TODO:
# will this still work with multiple drops ?
# recompute covariance. Probably could be done better
# wrong as Xy is not swapped with the rest of variables
# TODO: this could be updated
residual = y - np.dot(X, coef)
temp = np.dot(X.T[drop_idx], residual)
Cov = np.r_[temp, Cov]
sign_active = np.delete(sign_active, idx)
sign_active = np.append(sign_active, 0.) # just to maintain size
if verbose > 1:
print("%s\t\t%s\t\t%s\t\t%s\t\t%s" % (n_iter, '', drop_idx,
n_active, abs(temp)))
if return_path:
# resize coefs in case of early stop
alphas = alphas[:n_iter + 1]
coefs = coefs[:n_iter + 1]
return alphas, active, coefs.T
else:
return alpha, active, coef
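# Minimal usage sketch (illustrative, toy data): lars_path operates on plain
# numpy arrays and returns the whole regularization path.
#
#     X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
#     y = np.array([-1.1111, 0., -1.1111])
#     alphas, active, coefs = lars_path(X, y, method='lasso')
#     # coefs has shape (n_features, n_alphas + 1); coefs[:, -1] is the
#     # least-regularized (alpha ~ 0) solution on the path.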
###############################################################################
# Estimator classes
class Lars(LinearModel, RegressorMixin):
"""Least Angle Regression model a.k.a. LAR
Parameters
----------
n_nonzero_coefs : int, optional
Target number of non-zero coefficients. Use ``np.inf`` for no limit.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
    normalize : boolean, optional, default True
If ``True``, the regressors X will be normalized before regression.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
copy_X : boolean, optional, default True
If ``True``, X will be copied; else, it may be overwritten.
eps: float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
fit_path : boolean
If True the full path is stored in the ``coef_path_`` attribute.
If you compute the solution for a large problem or many targets,
setting ``fit_path`` to ``False`` will lead to a speedup, especially
with a small alpha.
Attributes
----------
``alphas_`` : array, shape (n_alphas + 1,) | list of n_targets such arrays
Maximum of covariances (in absolute value) at each iteration. \
``n_alphas`` is either ``n_nonzero_coefs`` or ``n_features``, \
whichever is smaller.
``active_`` : list, length = n_alphas | list of n_targets such lists
Indices of active variables at the end of the path.
``coef_path_`` : array, shape (n_features, n_alphas + 1) \
| list of n_targets such arrays
The varying values of the coefficients along the path. It is not
present if the ``fit_path`` parameter is ``False``.
``coef_`` : array, shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
``intercept_`` : float | array, shape (n_targets,)
Independent term in decision function.
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.Lars(n_nonzero_coefs=1)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
Lars(copy_X=True, eps=..., fit_intercept=True, fit_path=True,
n_nonzero_coefs=1, normalize=True, precompute='auto', verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -1.11...]
See also
--------
lars_path, LarsCV
sklearn.decomposition.sparse_encode
"""
def __init__(self, fit_intercept=True, verbose=False, normalize=True,
precompute='auto', n_nonzero_coefs=500,
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True):
self.fit_intercept = fit_intercept
self.verbose = verbose
self.normalize = normalize
self.method = 'lar'
self.precompute = precompute
self.n_nonzero_coefs = n_nonzero_coefs
self.eps = eps
self.copy_X = copy_X
self.fit_path = fit_path
def _get_gram(self):
# precompute if n_samples > n_features
precompute = self.precompute
if hasattr(precompute, '__array__'):
Gram = precompute
elif precompute == 'auto':
Gram = 'auto'
else:
Gram = None
return Gram
def fit(self, X, y, Xy=None):
"""Fit the model using X, y as training data.
        Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data.
y : array-like, shape (n_samples,) or (n_samples, n_targets)
Target values.
Xy : array-like, shape (n_samples,) or (n_samples, n_targets), \
optional
Xy = np.dot(X.T, y) that can be precomputed. It is useful
only when the Gram matrix is precomputed.
        Returns
-------
self : object
returns an instance of self.
"""
X = array2d(X)
y = np.asarray(y)
n_features = X.shape[1]
X, y, X_mean, y_mean, X_std = self._center_data(X, y,
self.fit_intercept,
self.normalize,
self.copy_X)
if y.ndim == 1:
y = y[:, np.newaxis]
n_targets = y.shape[1]
alpha = getattr(self, 'alpha', 0.)
if hasattr(self, 'n_nonzero_coefs'):
alpha = 0. # n_nonzero_coefs parametrization takes priority
max_iter = self.n_nonzero_coefs
else:
max_iter = self.max_iter
precompute = self.precompute
if not hasattr(precompute, '__array__') and (
precompute is True or
(precompute == 'auto' and X.shape[0] > X.shape[1]) or
(precompute == 'auto' and y.shape[1] > 1)):
Gram = np.dot(X.T, X)
else:
Gram = self._get_gram()
self.alphas_ = []
if self.fit_path:
self.coef_ = []
self.active_ = []
self.coef_path_ = []
for k in xrange(n_targets):
this_Xy = None if Xy is None else Xy[:, k]
alphas, active, coef_path = lars_path(
X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X,
copy_Gram=True, alpha_min=alpha, method=self.method,
verbose=max(0, self.verbose - 1), max_iter=max_iter,
eps=self.eps, return_path=True)
self.alphas_.append(alphas)
self.active_.append(active)
self.coef_path_.append(coef_path)
self.coef_.append(coef_path[:, -1])
if n_targets == 1:
self.alphas_, self.active_, self.coef_path_, self.coef_ = [
a[0] for a in (self.alphas_, self.active_, self.coef_path_,
self.coef_)]
else:
self.coef_ = np.empty((n_targets, n_features))
for k in xrange(n_targets):
this_Xy = None if Xy is None else Xy[:, k]
alphas, _, self.coef_[k] = lars_path(
X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X,
copy_Gram=True, alpha_min=alpha, method=self.method,
verbose=max(0, self.verbose - 1), max_iter=max_iter,
eps=self.eps, return_path=False)
self.alphas_.append(alphas)
if n_targets == 1:
self.alphas_ = self.alphas_[0]
self._set_intercept(X_mean, y_mean, X_std)
return self
class LassoLars(Lars):
"""Lasso model fit with Least Angle Regression a.k.a. Lars
It is a Linear Model trained with an L1 prior as regularizer.
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Parameters
----------
alpha : float
Constant that multiplies the penalty term. Defaults to 1.0.
``alpha = 0`` is equivalent to an ordinary least square, solved
by :class:`LinearRegression`. For numerical reasons, using
``alpha = 0`` with the LassoLars object is not advised and you
should prefer the LinearRegression object.
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
    normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter : integer, optional
Maximum number of iterations to perform.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
fit_path : boolean
If ``True`` the full path is stored in the ``coef_path_`` attribute.
If you compute the solution for a large problem or many targets,
setting ``fit_path`` to ``False`` will lead to a speedup, especially
with a small alpha.
Attributes
----------
``alphas_`` : array, shape (n_alphas + 1,) | list of n_targets such arrays
Maximum of covariances (in absolute value) at each iteration. \
``n_alphas`` is either ``max_iter``, ``n_features``, or the number of \
nodes in the path with correlation greater than ``alpha``, whichever \
is smaller.
``active_`` : list, length = n_alphas | list of n_targets such lists
Indices of active variables at the end of the path.
``coef_path_`` : array, shape (n_features, n_alphas + 1) or list
If a list is passed it's expected to be one of n_targets such arrays.
The varying values of the coefficients along the path. It is not
present if the ``fit_path`` parameter is ``False``.
``coef_`` : array, shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
``intercept_`` : float | array, shape (n_targets,)
Independent term in decision function.
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.LassoLars(alpha=0.01)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
LassoLars(alpha=0.01, copy_X=True, eps=..., fit_intercept=True,
fit_path=True, max_iter=500, normalize=True, precompute='auto',
verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -0.963257...]
See also
--------
lars_path
lasso_path
Lasso
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
"""
def __init__(self, alpha=1.0, fit_intercept=True, verbose=False,
normalize=True, precompute='auto', max_iter=500,
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True):
self.alpha = alpha
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.method = 'lasso'
self.precompute = precompute
self.copy_X = copy_X
self.eps = eps
self.fit_path = fit_path
###############################################################################
# Cross-validated estimator classes
def _lars_path_residues(X_train, y_train, X_test, y_test, Gram=None,
                        copy=True, method='lar', verbose=False,
fit_intercept=True, normalize=True, max_iter=500,
eps=np.finfo(np.float).eps):
"""Compute the residues on left-out data for a full LARS path
Parameters
-----------
X_train : array, shape (n_samples, n_features)
The data to fit the LARS on
y_train : array, shape (n_samples)
The target variable to fit LARS on
X_test : array, shape (n_samples, n_features)
The data to compute the residues on
y_test : array, shape (n_samples)
The target variable to compute the residues on
Gram : None, 'auto', array, shape: (n_features, n_features), optional
Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram
matrix is precomputed from the given X, if there are more samples
than features
copy : boolean, optional
Whether X_train, X_test, y_train and y_test should be copied;
if False, they may be overwritten.
method : 'lar' | 'lasso'
Specifies the returned model. Select ``'lar'`` for Least Angle
Regression, ``'lasso'`` for the Lasso.
verbose : integer, optional
Sets the amount of verbosity
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
    normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
max_iter : integer, optional
Maximum number of iterations to perform.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
Returns
--------
alphas : array, shape (n_alphas,)
Maximum of covariances (in absolute value) at each iteration.
``n_alphas`` is either ``max_iter`` or ``n_features``, whichever
is smaller.
active : list
Indices of active variables at the end of the path.
coefs : array, shape (n_features, n_alphas)
Coefficients along the path
residues : array, shape (n_alphas, n_samples)
Residues of the prediction on the test data
"""
if copy:
X_train = X_train.copy()
y_train = y_train.copy()
X_test = X_test.copy()
y_test = y_test.copy()
if fit_intercept:
X_mean = X_train.mean(axis=0)
X_train -= X_mean
X_test -= X_mean
y_mean = y_train.mean(axis=0)
y_train = as_float_array(y_train, copy=False)
y_train -= y_mean
y_test = as_float_array(y_test, copy=False)
y_test -= y_mean
if normalize:
norms = np.sqrt(np.sum(X_train ** 2, axis=0))
nonzeros = np.flatnonzero(norms)
X_train[:, nonzeros] /= norms[nonzeros]
alphas, active, coefs = lars_path(
X_train, y_train, Gram=Gram, copy_X=False, copy_Gram=False,
method=method, verbose=max(0, verbose - 1), max_iter=max_iter, eps=eps)
if normalize:
coefs[nonzeros] /= norms[nonzeros][:, np.newaxis]
residues = np.dot(X_test, coefs) - y_test[:, np.newaxis]
return alphas, active, coefs, residues.T
class LarsCV(Lars):
"""Cross-validated Least Angle Regression model
Parameters
----------
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
    normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If ``True``, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter: integer, optional
Maximum number of iterations to perform.
cv : cross-validation generator, optional
see :mod:`sklearn.cross_validation`. If ``None`` is passed, default to
a 5-fold strategy
max_n_alphas : integer, optional
The maximum number of points on the path used to compute the
residuals in the cross-validation
n_jobs : integer, optional
Number of CPUs to use during the cross validation. If ``-1``, use
all the CPUs
eps: float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
Attributes
----------
``coef_`` : array, shape (n_features,)
parameter vector (w in the formulation formula)
``intercept_`` : float
independent term in decision function
``coef_path_`` : array, shape (n_features, n_alphas)
the varying values of the coefficients along the path
``alpha_`` : float
the estimated regularization parameter alpha
``alphas_`` : array, shape (n_alphas,)
the different values of alpha along the path
``cv_alphas_`` : array, shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
``cv_mse_path_`` : array, shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path
(alpha values given by ``cv_alphas``)
See also
--------
lars_path, LassoLars, LassoLarsCV
"""
method = 'lar'
def __init__(self, fit_intercept=True, verbose=False, max_iter=500,
normalize=True, precompute='auto', cv=None,
max_n_alphas=1000, n_jobs=1, eps=np.finfo(np.float).eps,
copy_X=True):
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.precompute = precompute
self.copy_X = copy_X
self.cv = cv
self.max_n_alphas = max_n_alphas
self.n_jobs = n_jobs
self.eps = eps
def fit(self, X, y):
"""Fit the model using X, y as training data.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data.
y : array-like, shape (n_samples,)
Target values.
Returns
-------
self : object
returns an instance of self.
"""
self.fit_path = True
X = array2d(X)
X, y = check_arrays(X, y)
# init cross-validation generator
cv = check_cv(self.cv, X, y, classifier=False)
Gram = 'auto' if self.precompute else None
cv_paths = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
delayed(_lars_path_residues)(
X[train], y[train], X[test], y[test], Gram=Gram, copy=False,
method=self.method, verbose=max(0, self.verbose - 1),
normalize=self.normalize, fit_intercept=self.fit_intercept,
max_iter=self.max_iter, eps=self.eps)
for train, test in cv)
all_alphas = np.concatenate(list(zip(*cv_paths))[0])
# Unique also sorts
all_alphas = np.unique(all_alphas)
# Take at most max_n_alphas values
stride = int(max(1, int(len(all_alphas) / float(self.max_n_alphas))))
all_alphas = all_alphas[::stride]
mse_path = np.empty((len(all_alphas), len(cv_paths)))
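        # Interpolate each fold's residue curve onto the common, sorted alpha
        # grid so the per-fold MSE curves can be averaged point-by-point.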
for index, (alphas, active, coefs, residues) in enumerate(cv_paths):
alphas = alphas[::-1]
residues = residues[::-1]
if alphas[0] != 0:
alphas = np.r_[0, alphas]
residues = np.r_[residues[0, np.newaxis], residues]
if alphas[-1] != all_alphas[-1]:
alphas = np.r_[alphas, all_alphas[-1]]
residues = np.r_[residues, residues[-1, np.newaxis]]
this_residues = interpolate.interp1d(alphas,
residues,
axis=0)(all_alphas)
this_residues **= 2
mse_path[:, index] = np.mean(this_residues, axis=-1)
mask = np.all(np.isfinite(mse_path), axis=-1)
all_alphas = all_alphas[mask]
mse_path = mse_path[mask]
# Select the alpha that minimizes left-out error
i_best_alpha = np.argmin(mse_path.mean(axis=-1))
best_alpha = all_alphas[i_best_alpha]
# Store our parameters
self.alpha_ = best_alpha
self.cv_alphas_ = all_alphas
self.cv_mse_path_ = mse_path
# Now compute the full model
        # it will call a lasso internally when self is a LassoLarsCV
# as self.method == 'lasso'
Lars.fit(self, X, y)
return self
@property
def alpha(self):
# impedance matching for the above Lars.fit (should not be documented)
return self.alpha_
class LassoLarsCV(LarsCV):
"""Cross-validated Lasso, using the LARS algorithm
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Parameters
----------
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
    normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter: integer, optional
Maximum number of iterations to perform.
cv : cross-validation generator, optional
see sklearn.cross_validation module. If None is passed, default to
a 5-fold strategy
max_n_alphas : integer, optional
The maximum number of points on the path used to compute the
residuals in the cross-validation
n_jobs : integer, optional
Number of CPUs to use during the cross validation. If ``-1``, use
all the CPUs
eps: float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
Attributes
----------
``coef_`` : array, shape (n_features,)
parameter vector (w in the formulation formula)
``intercept_`` : float
independent term in decision function.
``coef_path_`` : array, shape (n_features, n_alphas)
the varying values of the coefficients along the path
``alpha_`` : float
the estimated regularization parameter alpha
``alphas_`` : array, shape (n_alphas,)
the different values of alpha along the path
``cv_alphas_`` : array, shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
``cv_mse_path_`` : array, shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path
(alpha values given by ``cv_alphas``)
Notes
-----
The object solves the same problem as the LassoCV object. However,
    unlike the LassoCV, it finds the relevant alpha values by itself.
In general, because of this property, it will be more stable.
However, it is more fragile to heavily multicollinear datasets.
It is more efficient than the LassoCV if only a small number of
features are selected compared to the total number, for instance if
there are very few samples compared to the number of features.
See also
--------
lars_path, LassoLars, LarsCV, LassoCV
"""
method = 'lasso'
class LassoLarsIC(LassoLars):
"""Lasso model fit with Lars using BIC or AIC for model selection
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
AIC is the Akaike information criterion and BIC is the Bayes
Information criterion. Such criteria are useful to select the value
of the regularization parameter by making a trade-off between the
goodness of fit and the complexity of the model. A good model should
explain well the data while being simple.
Parameters
----------
criterion: 'bic' | 'aic'
The type of criterion to use.
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
    normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter: integer, optional
Maximum number of iterations to perform. Can be used for
early stopping.
eps: float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
Attributes
----------
``coef_`` : array, shape (n_features,)
parameter vector (w in the formulation formula)
``intercept_`` : float
independent term in decision function.
``alpha_`` : float
the alpha parameter chosen by the information criterion
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.LassoLarsIC(criterion='bic')
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
LassoLarsIC(copy_X=True, criterion='bic', eps=..., fit_intercept=True,
max_iter=500, normalize=True, precompute='auto',
verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -1.11...]
Notes
-----
The estimation of the number of degrees of freedom is given by:
"On the degrees of freedom of the lasso"
Hui Zou, Trevor Hastie, and Robert Tibshirani
Ann. Statist. Volume 35, Number 5 (2007), 2173-2192.
http://en.wikipedia.org/wiki/Akaike_information_criterion
http://en.wikipedia.org/wiki/Bayesian_information_criterion
See also
--------
lars_path, LassoLars, LassoLarsCV
"""
def __init__(self, criterion='aic', fit_intercept=True, verbose=False,
normalize=True, precompute='auto', max_iter=500,
eps=np.finfo(np.float).eps, copy_X=True):
self.criterion = criterion
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.copy_X = copy_X
self.precompute = precompute
self.eps = eps
def fit(self, X, y, copy_X=True):
"""Fit the model using X, y as training data.
        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            Training data.
        y : array-like, shape (n_samples,)
            Target values.
        Returns
-------
self : object
returns an instance of self.
"""
self.fit_path = True
X = array2d(X)
y = np.asarray(y)
X, y, Xmean, ymean, Xstd = LinearModel._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X)
max_iter = self.max_iter
Gram = self._get_gram()
alphas_, active_, coef_path_ = lars_path(
X, y, Gram=Gram, copy_X=copy_X, copy_Gram=True, alpha_min=0.0,
method='lasso', verbose=self.verbose, max_iter=max_iter,
eps=self.eps)
n_samples = X.shape[0]
if self.criterion == 'aic':
K = 2 # AIC
elif self.criterion == 'bic':
K = log(n_samples) # BIC
else:
raise ValueError('criterion should be either bic or aic')
R = y[:, np.newaxis] - np.dot(X, coef_path_) # residuals
mean_squared_error = np.mean(R ** 2, axis=0)
df = np.zeros(coef_path_.shape[1], dtype=np.int) # Degrees of freedom
for k, coef in enumerate(coef_path_.T):
mask = np.abs(coef) > np.finfo(coef.dtype).eps
if not np.any(mask):
continue
# get the number of degrees of freedom equal to:
# Xc = X[:, mask]
# Trace(Xc * inv(Xc.T, Xc) * Xc.T) ie the number of non-zero coefs
df[k] = np.sum(mask)
self.alphas_ = alphas_
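        # Gaussian-noise information criterion: n_samples * log(MSE) + K * df,
        # with K = 2 (AIC) or K = log(n_samples) (BIC); df is estimated above
        # as the number of non-zero coefficients (Zou et al., 2007).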
self.criterion_ = n_samples * np.log(mean_squared_error) + K * df
n_best = np.argmin(self.criterion_)
self.alpha_ = alphas_[n_best]
self.coef_ = coef_path_[:, n_best]
self._set_intercept(Xmean, ymean, Xstd)
return self
|
|
import warnings
import numpy as np
from pandas import (
Categorical,
DataFrame,
Series,
)
from .pandas_vb_common import tm
class Dtypes:
params = ["str", "string", "arrow_string"]
param_names = ["dtype"]
    def setup(self, dtype):
        try:
            # the import fails with ImportError when pyarrow is not installed
            from pandas.core.arrays.string_arrow import ArrowStringDtype  # noqa: F401

            self.s = Series(tm.makeStringIndex(10 ** 5), dtype=dtype)
        except ImportError:
            raise NotImplementedError
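# Note on asv conventions (added for clarity): each class is benchmarked once
# per entry in `params`, the entry is passed to setup() and to every time_*/
# peakmem_* method, and raising NotImplementedError in setup() skips that
# combination (used above when pyarrow is unavailable).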
class Construction:
params = ["str", "string"]
param_names = ["dtype"]
def setup(self, dtype):
self.series_arr = tm.rands_array(nchars=10, size=10 ** 5)
self.frame_arr = self.series_arr.reshape((50_000, 2)).copy()
# GH37371. Testing construction of string series/frames from ExtensionArrays
self.series_cat_arr = Categorical(self.series_arr)
self.frame_cat_arr = Categorical(self.frame_arr)
def time_series_construction(self, dtype):
Series(self.series_arr, dtype=dtype)
def peakmem_series_construction(self, dtype):
Series(self.series_arr, dtype=dtype)
def time_frame_construction(self, dtype):
DataFrame(self.frame_arr, dtype=dtype)
def peakmem_frame_construction(self, dtype):
DataFrame(self.frame_arr, dtype=dtype)
def time_cat_series_construction(self, dtype):
Series(self.series_cat_arr, dtype=dtype)
def peakmem_cat_series_construction(self, dtype):
Series(self.series_cat_arr, dtype=dtype)
def time_cat_frame_construction(self, dtype):
DataFrame(self.frame_cat_arr, dtype=dtype)
def peakmem_cat_frame_construction(self, dtype):
DataFrame(self.frame_cat_arr, dtype=dtype)
class Methods(Dtypes):
def time_center(self, dtype):
self.s.str.center(100)
def time_count(self, dtype):
self.s.str.count("A")
def time_endswith(self, dtype):
self.s.str.endswith("A")
def time_extract(self, dtype):
with warnings.catch_warnings(record=True):
self.s.str.extract("(\\w*)A(\\w*)")
def time_findall(self, dtype):
self.s.str.findall("[A-Z]+")
def time_find(self, dtype):
self.s.str.find("[A-Z]+")
def time_rfind(self, dtype):
self.s.str.rfind("[A-Z]+")
def time_fullmatch(self, dtype):
self.s.str.fullmatch("A")
def time_get(self, dtype):
self.s.str.get(0)
def time_len(self, dtype):
self.s.str.len()
def time_join(self, dtype):
self.s.str.join(" ")
def time_match(self, dtype):
self.s.str.match("A")
def time_normalize(self, dtype):
self.s.str.normalize("NFC")
def time_pad(self, dtype):
self.s.str.pad(100, side="both")
def time_partition(self, dtype):
self.s.str.partition("A")
def time_rpartition(self, dtype):
self.s.str.rpartition("A")
def time_replace(self, dtype):
self.s.str.replace("A", "\x01\x01")
def time_translate(self, dtype):
self.s.str.translate({"A": "\x01\x01"})
def time_slice(self, dtype):
self.s.str.slice(5, 15, 2)
def time_startswith(self, dtype):
self.s.str.startswith("A")
def time_strip(self, dtype):
self.s.str.strip("A")
def time_rstrip(self, dtype):
self.s.str.rstrip("A")
def time_lstrip(self, dtype):
self.s.str.lstrip("A")
def time_title(self, dtype):
self.s.str.title()
def time_upper(self, dtype):
self.s.str.upper()
def time_lower(self, dtype):
self.s.str.lower()
def time_wrap(self, dtype):
self.s.str.wrap(10)
def time_zfill(self, dtype):
self.s.str.zfill(10)
def time_isalnum(self, dtype):
self.s.str.isalnum()
def time_isalpha(self, dtype):
self.s.str.isalpha()
def time_isdecimal(self, dtype):
self.s.str.isdecimal()
def time_isdigit(self, dtype):
self.s.str.isdigit()
def time_islower(self, dtype):
self.s.str.islower()
def time_isnumeric(self, dtype):
self.s.str.isnumeric()
def time_isspace(self, dtype):
self.s.str.isspace()
def time_istitle(self, dtype):
self.s.str.istitle()
def time_isupper(self, dtype):
self.s.str.isupper()
class Repeat:
params = ["int", "array"]
param_names = ["repeats"]
def setup(self, repeats):
N = 10 ** 5
self.s = Series(tm.makeStringIndex(N))
repeat = {"int": 1, "array": np.random.randint(1, 3, N)}
self.values = repeat[repeats]
def time_repeat(self, repeats):
self.s.str.repeat(self.values)
class Cat:
params = ([0, 3], [None, ","], [None, "-"], [0.0, 0.001, 0.15])
param_names = ["other_cols", "sep", "na_rep", "na_frac"]
def setup(self, other_cols, sep, na_rep, na_frac):
N = 10 ** 5
mask_gen = lambda: np.random.choice([True, False], N, p=[1 - na_frac, na_frac])
self.s = Series(tm.makeStringIndex(N)).where(mask_gen())
if other_cols == 0:
# str.cat self-concatenates only for others=None
self.others = None
else:
self.others = DataFrame(
{i: tm.makeStringIndex(N).where(mask_gen()) for i in range(other_cols)}
)
def time_cat(self, other_cols, sep, na_rep, na_frac):
# before the concatenation (one caller + other_cols columns), the total
# expected fraction of rows containing any NaN is:
# reduce(lambda t, _: t + (1 - t) * na_frac, range(other_cols + 1), 0)
# for other_cols=3 and na_frac=0.15, this works out to ~48%
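# (equivalently 1 - (1 - na_frac) ** (other_cols + 1); for the values
# above, 1 - 0.85 ** 4 = 0.47799...)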
self.s.str.cat(others=self.others, sep=sep, na_rep=na_rep)
class Contains(Dtypes):
params = (Dtypes.params, [True, False])
param_names = ["dtype", "regex"]
def setup(self, dtype, regex):
super().setup(dtype)
def time_contains(self, dtype, regex):
self.s.str.contains("A", regex=regex)
class Split(Dtypes):
params = (Dtypes.params, [True, False])
param_names = ["dtype", "expand"]
def setup(self, dtype, expand):
super().setup(dtype)
self.s = self.s.str.join("--")
def time_split(self, dtype, expand):
self.s.str.split("--", expand=expand)
def time_rsplit(self, dtype, expand):
self.s.str.rsplit("--", expand=expand)
class Extract(Dtypes):
params = (Dtypes.params, [True, False])
param_names = ["dtype", "expand"]
def setup(self, dtype, expand):
super().setup(dtype)
def time_extract_single_group(self, dtype, expand):
with warnings.catch_warnings(record=True):
self.s.str.extract("(\\w*)A", expand=expand)
class Dummies(Dtypes):
def setup(self, dtype):
super().setup(dtype)
self.s = self.s.str.join("|")
def time_get_dummies(self, dtype):
self.s.str.get_dummies("|")
class Encode:
def setup(self):
self.ser = Series(tm.makeUnicodeIndex())
def time_encode_decode(self):
self.ser.str.encode("utf-8").str.decode("utf-8")
class Slice:
def setup(self):
self.s = Series(["abcdefg", np.nan] * 500000)
def time_vector_slice(self):
# GH 2602
self.s.str[:5]
class Iter(Dtypes):
def time_iter(self, dtype):
for i in self.s:
pass
|
|
from __future__ import division, print_function
import math, os, json, sys, re
import _pickle as pickle
from glob import glob
import numpy as np
from matplotlib import pyplot as plt
from operator import itemgetter, attrgetter, methodcaller
from collections import OrderedDict
import itertools
from itertools import chain
import pandas as pd
import PIL
from PIL import Image
from numpy.random import random, permutation, randn, normal, uniform, choice
from numpy import newaxis
import scipy
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
from scipy.ndimage import imread
from sklearn.metrics import confusion_matrix
import bcolz
from sklearn.preprocessing import OneHotEncoder
from sklearn.manifold import TSNE
from IPython.lib.display import FileLink
import theano
from theano import shared, tensor as T
from theano.tensor.nnet import conv2d, nnet
from theano.tensor.signal import pool
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.utils import np_utils
from keras.utils.np_utils import to_categorical
from keras.models import Sequential, Model
from keras.layers import Input, Embedding, Reshape, merge, LSTM, Bidirectional
from keras.layers import TimeDistributed, Activation, SimpleRNN, GRU
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.regularizers import l2, activity_l2, l1, activity_l1
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD, RMSprop, Adam
from keras.utils.layer_utils import layer_from_config
from keras.metrics import categorical_crossentropy, categorical_accuracy
from keras.layers.convolutional import *
from keras.preprocessing import image, sequence
from keras.preprocessing.text import Tokenizer
# from vgg16 import *
# from vgg16bn import *
np.set_printoptions(precision=4, linewidth=100)
to_bw = np.array([0.299, 0.587, 0.114])
def gray(img):
if K.image_dim_ordering() == 'tf':
return np.rollaxis(img, 0, 1).dot(to_bw)
else:
return np.rollaxis(img, 0, 3).dot(to_bw)
def to_plot(img):
if K.image_dim_ordering() == 'tf':
return np.rollaxis(img, 0, 1).astype(np.uint8)
else:
return np.rollaxis(img, 0, 3).astype(np.uint8)
def plot(img):
plt.imshow(to_plot(img))
def floor(x):
return int(math.floor(x))
def ceil(x):
return int(math.ceil(x))
def plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):
if type(ims[0]) is np.ndarray:
ims = np.array(ims).astype(np.uint8)
if (ims.shape[-1] != 3):
ims = ims.transpose((0,2,3,1))
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None:
sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i], interpolation=None if interp else 'none')
def do_clip(arr, mx):
clipped = np.clip(arr, (1-mx)/1, mx)
return clipped/clipped.sum(axis=1)[:, np.newaxis]
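# e.g. do_clip(preds, 0.98) caps each probability at 0.98 and floors it
# at 0.02, then re-normalizes each row to sum to 1; a common way to bound
# the log-loss of over-confident predictions.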
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4, class_mode='categorical',
target_size=(224,224)):
return gen.flow_from_directory(dirname, target_size=target_size,
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
def onehot(x):
return to_categorical(x)
def wrap_config(layer):
return {'class_name': layer.__class__.__name__, 'config': layer.get_config()}
def copy_layer(layer): return layer_from_config(wrap_config(layer))
def copy_layers(layers): return [copy_layer(layer) for layer in layers]
def copy_weights(from_layers, to_layers):
for from_layer,to_layer in zip(from_layers, to_layers):
to_layer.set_weights(from_layer.get_weights())
def copy_model(m):
res = Sequential(copy_layers(m.layers))
copy_weights(m.layers, res.layers)
return res
def insert_layer(model, new_layer, index):
res = Sequential()
for i,layer in enumerate(model.layers):
if i==index: res.add(new_layer)
copied = layer_from_config(wrap_config(layer))
res.add(copied)
copied.set_weights(layer.get_weights())
return res
def adjust_dropout(weights, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in weights]
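# e.g. adjust_dropout(w, 0.5, 0.0) halves the weights, since
# (1 - 0.5) / (1 - 0.0) = 0.5, compensating for the doubled expected
# activation once dropout is removed from the preceding layer.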
def get_data(path, target_size=(224,224)):
batches = get_batches(path, shuffle=False, batch_size=1, class_mode=None, target_size=target_size)
return np.concatenate([batches.next() for i in range(batches.nb_sample)])
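# Usage sketch (hypothetical directory layout): loads an entire image
# folder into a single array, in directory-listing order since
# shuffle=False. Note this reads every image into memory:
#   trn = get_data('data/train/')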
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
(This function is copied from the scikit docs.)
"""
plt.figure()
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
def load_array(fname):
return bcolz.open(fname)[:]
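# Typical round-trip (hypothetical file name): persist precomputed
# features as a compressed bcolz carray directory, then read them back
# in full:
#   save_array('conv_feats.bc', feats)
#   feats = load_array('conv_feats.bc')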
def mk_size(img, r2c):
r,c,_ = img.shape
curr_r2c = r/c
new_r, new_c = r,c
if r2c>curr_r2c:
new_r = floor(c*r2c)
else:
new_c = floor(r/r2c)
arr = np.zeros((new_r, new_c, 3), dtype=np.float32)
r2=(new_r-r)//2
c2=(new_c-c)//2
arr[floor(r2):floor(r2)+r,floor(c2):floor(c2)+c] = img
return arr
def mk_square(img):
x,y,_ = img.shape
maxs = max(img.shape[:2])
y2=(maxs-y)//2
x2=(maxs-x)//2
arr = np.zeros((maxs,maxs,3), dtype=np.float32)
arr[floor(x2):floor(x2)+x,floor(y2):floor(y2)+y] = img
return arr
def vgg_ft(out_dim):
vgg = Vgg16()
vgg.ft(out_dim)
model = vgg.model
return model
def vgg_ft_bn(out_dim):
vgg = Vgg16BN()
vgg.ft(out_dim)
model = vgg.model
return model
def get_classes(path):
batches = get_batches(path+'train', shuffle=False, batch_size=1)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=1)
test_batches = get_batches(path+'test', shuffle=False, batch_size=1)
return (val_batches.classes, batches.classes, onehot(val_batches.classes), onehot(batches.classes),
val_batches.filenames, batches.filenames, test_batches.filenames)
def split_at(model, layer_type):
layers = model.layers
layer_idx = [index for index,layer in enumerate(layers)
if type(layer) is layer_type][-1]
return layers[:layer_idx+1], layers[layer_idx+1:]
class MixIterator(object):
def __init__(self, iters):
self.iters = iters
self.multi = type(iters) is list
if self.multi:
self.N = sum([it[0].N for it in self.iters])
else:
self.N = sum([it.N for it in self.iters])
def reset(self):
for it in self.iters: it.reset()
def __iter__(self):
return self
def next(self, *args, **kwargs):
if self.multi:
nexts = [[next(it) for it in o] for o in self.iters]
n0 = np.concatenate([n[0] for n in nexts])
n1 = np.concatenate([n[1] for n in nexts])
return (n0, n1)
else:
nexts = [next(it) for it in self.iters]
n0 = np.concatenate([n[0] for n in nexts])
n1 = np.concatenate([n[1] for n in nexts])
return (n0, n1)
|
|
import re
import numpy as np
import scipy.sparse
import pytest
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.utils._testing import assert_almost_equal
from sklearn.utils._testing import assert_array_equal
from sklearn.utils._testing import assert_array_almost_equal
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.naive_bayes import MultinomialNB, ComplementNB
from sklearn.naive_bayes import CategoricalNB
DISCRETE_NAIVE_BAYES_CLASSES = [BernoulliNB, CategoricalNB, ComplementNB, MultinomialNB]
ALL_NAIVE_BAYES_CLASSES = DISCRETE_NAIVE_BAYES_CLASSES + [GaussianNB]
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]])
y = np.array([1, 1, 1, 2, 2, 2])
# A bit more random tests
rng = np.random.RandomState(0)
X1 = rng.normal(size=(10, 3))
y1 = (rng.normal(size=(10)) > 0).astype(int)
# Data is 6 random integer points in a 100 dimensional space classified to
# three classes.
X2 = rng.randint(5, size=(6, 100))
y2 = np.array([1, 1, 2, 2, 3, 3])
def test_gnb():
# Gaussian Naive Bayes classification.
# This checks that GaussianNB implements fit and predict and returns
# correct values for a simple toy dataset.
clf = GaussianNB()
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y)
y_pred_proba = clf.predict_proba(X)
y_pred_log_proba = clf.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba), y_pred_log_proba, 8)
# Test whether label mismatch between target y and classes raises
# an Error
# FIXME Remove this test once the more general partial_fit tests are merged
with pytest.raises(
ValueError, match="The target label.* in y do not exist in the initial classes"
):
GaussianNB().partial_fit(X, y, classes=[0, 1])
# TODO remove in 1.2 once sigma_ attribute is removed (GH #18842)
def test_gnb_var():
clf = GaussianNB()
clf.fit(X, y)
with pytest.warns(FutureWarning, match="Attribute `sigma_` was deprecated"):
assert_array_equal(clf.sigma_, clf.var_)
def test_gnb_prior():
# Test whether class priors are properly set.
clf = GaussianNB().fit(X, y)
assert_array_almost_equal(np.array([3, 3]) / 6.0, clf.class_prior_, 8)
clf = GaussianNB().fit(X1, y1)
# Check that the class priors sum to 1
assert_array_almost_equal(clf.class_prior_.sum(), 1)
def test_gnb_sample_weight():
"""Test whether sample weights are properly used in GNB."""
# Sample weights all being 1 should not change results
sw = np.ones(6)
clf = GaussianNB().fit(X, y)
clf_sw = GaussianNB().fit(X, y, sw)
assert_array_almost_equal(clf.theta_, clf_sw.theta_)
assert_array_almost_equal(clf.var_, clf_sw.var_)
# Fitting twice with half sample-weights should result
# in same result as fitting once with full weights
sw = rng.rand(y.shape[0])
clf1 = GaussianNB().fit(X, y, sample_weight=sw)
clf2 = GaussianNB().partial_fit(X, y, classes=[1, 2], sample_weight=sw / 2)
clf2.partial_fit(X, y, sample_weight=sw / 2)
assert_array_almost_equal(clf1.theta_, clf2.theta_)
assert_array_almost_equal(clf1.var_, clf2.var_)
# Check that duplicate entries and correspondingly increased sample
# weights yield the same result
ind = rng.randint(0, X.shape[0], 20)
sample_weight = np.bincount(ind, minlength=X.shape[0])
clf_dupl = GaussianNB().fit(X[ind], y[ind])
clf_sw = GaussianNB().fit(X, y, sample_weight)
assert_array_almost_equal(clf_dupl.theta_, clf_sw.theta_)
assert_array_almost_equal(clf_dupl.var_, clf_sw.var_)
def test_gnb_neg_priors():
"""Test whether an error is raised in case of negative priors"""
clf = GaussianNB(priors=np.array([-1.0, 2.0]))
msg = "Priors must be non-negative"
with pytest.raises(ValueError, match=msg):
clf.fit(X, y)
def test_gnb_priors():
"""Test whether the class prior override is properly used"""
clf = GaussianNB(priors=np.array([0.3, 0.7])).fit(X, y)
assert_array_almost_equal(
clf.predict_proba([[-0.1, -0.1]]),
np.array([[0.825303662161683, 0.174696337838317]]),
8,
)
assert_array_almost_equal(clf.class_prior_, np.array([0.3, 0.7]))
def test_gnb_priors_sum_isclose():
# Test whether the class prior sum is properly validated.
X = np.array(
[
[-1, -1],
[-2, -1],
[-3, -2],
[-4, -5],
[-5, -4],
[1, 1],
[2, 1],
[3, 2],
[4, 4],
[5, 5],
]
)
priors = np.array([0.08, 0.14, 0.03, 0.16, 0.11, 0.16, 0.07, 0.14, 0.11, 0.0])
Y = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
clf = GaussianNB(priors=priors)
# smoke test for issue #9633
clf.fit(X, Y)
def test_gnb_wrong_nb_priors():
"""Test whether an error is raised if the number of prior is different
from the number of class"""
clf = GaussianNB(priors=np.array([0.25, 0.25, 0.25, 0.25]))
msg = "Number of priors must match number of classes"
with pytest.raises(ValueError, match=msg):
clf.fit(X, y)
def test_gnb_prior_greater_one():
"""Test if an error is raised if the sum of prior greater than one"""
clf = GaussianNB(priors=np.array([2.0, 1.0]))
msg = "The sum of the priors should be 1"
with pytest.raises(ValueError, match=msg):
clf.fit(X, y)
def test_gnb_prior_large_bias():
"""Test if good prediction when class prior favor largely one class"""
clf = GaussianNB(priors=np.array([0.01, 0.99]))
clf.fit(X, y)
assert clf.predict([[-0.1, -0.1]]) == np.array([2])
def test_gnb_check_update_with_no_data():
"""Test when the partial fit is called without any data"""
# Create an empty array
prev_points = 100
mean = 0.0
var = 1.0
x_empty = np.empty((0, X.shape[1]))
tmean, tvar = GaussianNB._update_mean_variance(prev_points, mean, var, x_empty)
assert tmean == mean
assert tvar == var
def test_gnb_partial_fit():
clf = GaussianNB().fit(X, y)
clf_pf = GaussianNB().partial_fit(X, y, np.unique(y))
assert_array_almost_equal(clf.theta_, clf_pf.theta_)
assert_array_almost_equal(clf.var_, clf_pf.var_)
assert_array_almost_equal(clf.class_prior_, clf_pf.class_prior_)
clf_pf2 = GaussianNB().partial_fit(X[0::2, :], y[0::2], np.unique(y))
clf_pf2.partial_fit(X[1::2], y[1::2])
assert_array_almost_equal(clf.theta_, clf_pf2.theta_)
assert_array_almost_equal(clf.var_, clf_pf2.var_)
assert_array_almost_equal(clf.class_prior_, clf_pf2.class_prior_)
def test_gnb_naive_bayes_scale_invariance():
# Scaling the data should not change the prediction results
iris = load_iris()
X, y = iris.data, iris.target
labels = [GaussianNB().fit(f * X, y).predict(f * X) for f in [1e-10, 1, 1e10]]
assert_array_equal(labels[0], labels[1])
assert_array_equal(labels[1], labels[2])
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_prior(DiscreteNaiveBayes):
# Test whether class priors are properly set.
clf = DiscreteNaiveBayes().fit(X2, y2)
assert_array_almost_equal(
np.log(np.array([2, 2, 2]) / 6.0), clf.class_log_prior_, 8
)
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_partial_fit(DiscreteNaiveBayes):
clf1 = DiscreteNaiveBayes()
clf1.fit([[0, 1], [1, 0], [1, 1]], [0, 1, 1])
clf2 = DiscreteNaiveBayes()
clf2.partial_fit([[0, 1], [1, 0], [1, 1]], [0, 1, 1], classes=[0, 1])
assert_array_equal(clf1.class_count_, clf2.class_count_)
if DiscreteNaiveBayes is CategoricalNB:
for i in range(len(clf1.category_count_)):
assert_array_equal(clf1.category_count_[i], clf2.category_count_[i])
else:
assert_array_equal(clf1.feature_count_, clf2.feature_count_)
clf3 = DiscreteNaiveBayes()
# all categories have to appear in the first partial fit
clf3.partial_fit([[0, 1]], [0], classes=[0, 1])
clf3.partial_fit([[1, 0]], [1])
clf3.partial_fit([[1, 1]], [1])
assert_array_equal(clf1.class_count_, clf3.class_count_)
if DiscreteNaiveBayes is CategoricalNB:
# the categories for each feature of CategoricalNB are mapped to an
# index chronologically with each call of partial fit and therefore
# the category_count matrices cannot be compared for equality
for i in range(len(clf1.category_count_)):
assert_array_equal(
clf1.category_count_[i].shape, clf3.category_count_[i].shape
)
assert_array_equal(
np.sum(clf1.category_count_[i], axis=1),
np.sum(clf3.category_count_[i], axis=1),
)
# assert category 0 occurs 1x in the first class and 0x in the 2nd
# class
assert_array_equal(clf1.category_count_[0][0], np.array([1, 0]))
# assert category 1 occurs 0x in the first class and 2x in the 2nd
# class
assert_array_equal(clf1.category_count_[0][1], np.array([0, 2]))
# assert category 0 occurs 0x in the first class and 1x in the 2nd
# class
assert_array_equal(clf1.category_count_[1][0], np.array([0, 1]))
# assert category 1 occurs 1x in the first class and 1x in the 2nd
# class
assert_array_equal(clf1.category_count_[1][1], np.array([1, 1]))
else:
assert_array_equal(clf1.feature_count_, clf3.feature_count_)
@pytest.mark.parametrize("NaiveBayes", ALL_NAIVE_BAYES_CLASSES)
def test_NB_partial_fit_no_first_classes(NaiveBayes):
# classes is required for first call to partial fit
with pytest.raises(
ValueError, match="classes must be passed on the first call to partial_fit."
):
NaiveBayes().partial_fit(X2, y2)
# check consistency of consecutive classes values
clf = NaiveBayes()
clf.partial_fit(X2, y2, classes=np.unique(y2))
with pytest.raises(
ValueError, match="is not the same as on last call to partial_fit"
):
clf.partial_fit(X2, y2, classes=np.arange(42))
def test_discretenb_predict_proba():
# Test discrete NB classes' probability scores
# The 100s below distinguish Bernoulli from multinomial.
# FIXME: write a test to show this.
X_bernoulli = [[1, 100, 0], [0, 1, 0], [0, 100, 1]]
X_multinomial = [[0, 1], [1, 3], [4, 0]]
# test binary case (1-d output)
y = [0, 0, 2] # 2 is regression test for binary case, 02e673
for DiscreteNaiveBayes, X in zip(
[BernoulliNB, MultinomialNB], [X_bernoulli, X_multinomial]
):
clf = DiscreteNaiveBayes().fit(X, y)
assert clf.predict(X[-1:]) == 2
assert clf.predict_proba([X[0]]).shape == (1, 2)
assert_array_almost_equal(
clf.predict_proba(X[:2]).sum(axis=1), np.array([1.0, 1.0]), 6
)
# test multiclass case (2-d output, must sum to one)
y = [0, 1, 2]
for DiscreteNaiveBayes, X in zip(
[BernoulliNB, MultinomialNB], [X_bernoulli, X_multinomial]
):
clf = DiscreteNaiveBayes().fit(X, y)
assert clf.predict_proba(X[0:1]).shape == (1, 3)
assert clf.predict_proba(X[:2]).shape == (2, 3)
assert_almost_equal(np.sum(clf.predict_proba([X[1]])), 1)
assert_almost_equal(np.sum(clf.predict_proba([X[-1]])), 1)
assert_almost_equal(np.sum(np.exp(clf.class_log_prior_)), 1)
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_uniform_prior(DiscreteNaiveBayes):
# Test whether discrete NB classes fit a uniform prior
# when fit_prior=False and class_prior=None
clf = DiscreteNaiveBayes()
clf.set_params(fit_prior=False)
clf.fit([[0], [0], [1]], [0, 0, 1])
prior = np.exp(clf.class_log_prior_)
assert_array_almost_equal(prior, np.array([0.5, 0.5]))
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_provide_prior(DiscreteNaiveBayes):
# Test whether discrete NB classes use provided prior
clf = DiscreteNaiveBayes(class_prior=[0.5, 0.5])
clf.fit([[0], [0], [1]], [0, 0, 1])
prior = np.exp(clf.class_log_prior_)
assert_array_almost_equal(prior, np.array([0.5, 0.5]))
# Inconsistent number of classes with prior
msg = "Number of priors must match number of classes"
with pytest.raises(ValueError, match=msg):
clf.fit([[0], [1], [2]], [0, 1, 2])
msg = "is not the same as on last call to partial_fit"
with pytest.raises(ValueError, match=msg):
clf.partial_fit([[0], [1]], [0, 1], classes=[0, 1, 1])
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_provide_prior_with_partial_fit(DiscreteNaiveBayes):
# Test whether discrete NB classes use provided prior
# when using partial_fit
iris = load_iris()
iris_data1, iris_data2, iris_target1, iris_target2 = train_test_split(
iris.data, iris.target, test_size=0.4, random_state=415
)
for prior in [None, [0.3, 0.3, 0.4]]:
clf_full = DiscreteNaiveBayes(class_prior=prior)
clf_full.fit(iris.data, iris.target)
clf_partial = DiscreteNaiveBayes(class_prior=prior)
clf_partial.partial_fit(iris_data1, iris_target1, classes=[0, 1, 2])
clf_partial.partial_fit(iris_data2, iris_target2)
assert_array_almost_equal(
clf_full.class_log_prior_, clf_partial.class_log_prior_
)
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
def test_discretenb_sample_weight_multiclass(DiscreteNaiveBayes):
# check shape consistency for number of samples at fit time
X = [
[0, 0, 1],
[0, 1, 1],
[0, 1, 1],
[1, 0, 0],
]
y = [0, 0, 1, 2]
sample_weight = np.array([1, 1, 2, 2], dtype=np.float64)
sample_weight /= sample_weight.sum()
clf = DiscreteNaiveBayes().fit(X, y, sample_weight=sample_weight)
assert_array_equal(clf.predict(X), [0, 1, 1, 2])
# Check sample weight using the partial_fit method
clf = DiscreteNaiveBayes()
clf.partial_fit(X[:2], y[:2], classes=[0, 1, 2], sample_weight=sample_weight[:2])
clf.partial_fit(X[2:3], y[2:3], sample_weight=sample_weight[2:3])
clf.partial_fit(X[3:], y[3:], sample_weight=sample_weight[3:])
assert_array_equal(clf.predict(X), [0, 1, 1, 2])
@pytest.mark.parametrize("DiscreteNaiveBayes", DISCRETE_NAIVE_BAYES_CLASSES)
@pytest.mark.parametrize("use_partial_fit", [False, True])
@pytest.mark.parametrize("train_on_single_class_y", [False, True])
def test_discretenb_degenerate_one_class_case(
DiscreteNaiveBayes,
use_partial_fit,
train_on_single_class_y,
):
# Most array attributes of a discrete naive Bayes classifier should have a
# first-axis length equal to the number of classes. Exceptions include:
# ComplementNB.feature_all_, CategoricalNB.n_categories_.
# Confirm that this is the case for binary problems and the degenerate
# case of a single class in the training set, when fitting with `fit` or
# `partial_fit`.
# Non-regression test for handling degenerate one-class case:
# https://github.com/scikit-learn/scikit-learn/issues/18974
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [1, 1, 2]
if train_on_single_class_y:
X = X[:-1]
y = y[:-1]
classes = sorted(list(set(y)))
num_classes = len(classes)
clf = DiscreteNaiveBayes()
if use_partial_fit:
clf.partial_fit(X, y, classes=classes)
else:
clf.fit(X, y)
assert clf.predict(X[:1]) == y[0]
# Check that attributes have expected first-axis lengths
attribute_names = [
"classes_",
"class_count_",
"class_log_prior_",
"feature_count_",
"feature_log_prob_",
]
for attribute_name in attribute_names:
attribute = getattr(clf, attribute_name, None)
if attribute is None:
# CategoricalNB has no feature_count_ attribute
continue
if isinstance(attribute, np.ndarray):
assert attribute.shape[0] == num_classes
else:
# CategoricalNB.feature_log_prob_ is a list of arrays
for element in attribute:
assert element.shape[0] == num_classes
@pytest.mark.parametrize("kind", ("dense", "sparse"))
def test_mnnb(kind):
# Test Multinomial Naive Bayes classification.
# This checks that MultinomialNB implements fit and predict and returns
# correct values for a simple toy dataset.
if kind == "dense":
X = X2
elif kind == "sparse":
X = scipy.sparse.csr_matrix(X2)
# Check the ability to predict the learning set.
clf = MultinomialNB()
msg = "Negative values in data passed to"
with pytest.raises(ValueError, match=msg):
clf.fit(-X, y2)
y_pred = clf.fit(X, y2).predict(X)
assert_array_equal(y_pred, y2)
# Verify that np.log(clf.predict_proba(X)) gives the same results as
# clf.predict_log_proba(X)
y_pred_proba = clf.predict_proba(X)
y_pred_log_proba = clf.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba), y_pred_log_proba, 8)
# Check that incremental fitting yields the same results
clf2 = MultinomialNB()
clf2.partial_fit(X[:2], y2[:2], classes=np.unique(y2))
clf2.partial_fit(X[2:5], y2[2:5])
clf2.partial_fit(X[5:], y2[5:])
y_pred2 = clf2.predict(X)
assert_array_equal(y_pred2, y2)
y_pred_proba2 = clf2.predict_proba(X)
y_pred_log_proba2 = clf2.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba2), y_pred_log_proba2, 8)
assert_array_almost_equal(y_pred_proba2, y_pred_proba)
assert_array_almost_equal(y_pred_log_proba2, y_pred_log_proba)
# Partial fit on the whole data at once should be the same as fit too
clf3 = MultinomialNB()
clf3.partial_fit(X, y2, classes=np.unique(y2))
y_pred3 = clf3.predict(X)
assert_array_equal(y_pred3, y2)
y_pred_proba3 = clf3.predict_proba(X)
y_pred_log_proba3 = clf3.predict_log_proba(X)
assert_array_almost_equal(np.log(y_pred_proba3), y_pred_log_proba3, 8)
assert_array_almost_equal(y_pred_proba3, y_pred_proba)
assert_array_almost_equal(y_pred_log_proba3, y_pred_log_proba)
def test_mnb_prior_unobserved_targets():
# test smoothing of prior for yet unobserved targets
# Create toy training data
X = np.array([[0, 1], [1, 0]])
y = np.array([0, 1])
clf = MultinomialNB()
with pytest.warns(None) as record:
clf.partial_fit(X, y, classes=[0, 1, 2])
assert not [w.message for w in record]
assert clf.predict([[0, 1]]) == 0
assert clf.predict([[1, 0]]) == 1
assert clf.predict([[1, 1]]) == 0
# add a training example with previously unobserved class
with pytest.warns(None) as record:
clf.partial_fit([[1, 1]], [2])
assert not [w.message for w in record]
assert clf.predict([[0, 1]]) == 0
assert clf.predict([[1, 0]]) == 1
assert clf.predict([[1, 1]]) == 2
def test_bnb():
# Tests that BernoulliNB when alpha=1.0 gives the same values as
# those given for the toy example in Manning, Raghavan, and
# Schuetze's "Introduction to Information Retrieval" book:
# https://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
# Training data points are:
# Chinese Beijing Chinese (class: China)
# Chinese Chinese Shanghai (class: China)
# Chinese Macao (class: China)
# Tokyo Japan Chinese (class: Japan)
# Features are Beijing, Chinese, Japan, Macao, Shanghai, and Tokyo
X = np.array(
[[1, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0], [0, 1, 0, 1, 0, 0], [0, 1, 1, 0, 0, 1]]
)
# Classes are China (0), Japan (1)
Y = np.array([0, 0, 0, 1])
# Fit BernoulliNB w/ alpha = 1.0
clf = BernoulliNB(alpha=1.0)
clf.fit(X, Y)
# Check the class prior is correct
class_prior = np.array([0.75, 0.25])
assert_array_almost_equal(np.exp(clf.class_log_prior_), class_prior)
# Check the feature probabilities are correct
feature_prob = np.array(
[
[0.4, 0.8, 0.2, 0.4, 0.4, 0.2],
[1 / 3.0, 2 / 3.0, 2 / 3.0, 1 / 3.0, 1 / 3.0, 2 / 3.0],
]
)
assert_array_almost_equal(np.exp(clf.feature_log_prob_), feature_prob)
# Testing data point is:
# Chinese Chinese Chinese Tokyo Japan
X_test = np.array([[0, 1, 1, 0, 0, 1]])
# Check the predictive probabilities are correct
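# These follow from the class priors and feature probabilities above;
# each present feature contributes p, each absent feature (1 - p):
#   China: 3/4 * (0.8 * 0.2 * 0.2) * 0.6 ** 3 = 0.005184
#   Japan: 1/4 * (2 / 3) ** 6 = 16 / 729 ≈ 0.021948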
unnorm_predict_proba = np.array([[0.005183999999999999, 0.02194787379972565]])
predict_proba = unnorm_predict_proba / np.sum(unnorm_predict_proba)
assert_array_almost_equal(clf.predict_proba(X_test), predict_proba)
def test_bnb_feature_log_prob():
# Test for issue #4268.
# Tests that the feature log prob value computed by BernoulliNB when
# alpha=1.0 is equal to the expression given in Manning, Raghavan,
# and Schuetze's "Introduction to Information Retrieval" book:
# http://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
X = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 0], [1, 0, 1], [0, 1, 0]])
Y = np.array([0, 0, 1, 2, 2])
# Fit Bernoulli NB w/ alpha = 1.0
clf = BernoulliNB(alpha=1.0)
clf.fit(X, Y)
# Manually form the (log) numerator and denominator that
# constitute P(feature presence | class)
num = np.log(clf.feature_count_ + 1.0)
denom = np.tile(np.log(clf.class_count_ + 2.0), (X.shape[1], 1)).T
# Check manual estimate matches
assert_array_almost_equal(clf.feature_log_prob_, (num - denom))
def test_cnb():
# Tests ComplementNB when alpha=1.0 for the toy example in Manning,
# Raghavan, and Schuetze's "Introduction to Information Retrieval" book:
# https://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
# Training data points are:
# Chinese Beijing Chinese (class: China)
# Chinese Chinese Shanghai (class: China)
# Chinese Macao (class: China)
# Tokyo Japan Chinese (class: Japan)
# Features are Beijing, Chinese, Japan, Macao, Shanghai, and Tokyo.
X = np.array(
[[1, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0], [0, 1, 0, 1, 0, 0], [0, 1, 1, 0, 0, 1]]
)
# Classes are China (0), Japan (1).
Y = np.array([0, 0, 0, 1])
# Check that weights are correct. See steps 4-6 in Table 4 of
# Rennie et al. (2003).
theta = np.array(
[
[
(0 + 1) / (3 + 6),
(1 + 1) / (3 + 6),
(1 + 1) / (3 + 6),
(0 + 1) / (3 + 6),
(0 + 1) / (3 + 6),
(1 + 1) / (3 + 6),
],
[
(1 + 1) / (6 + 6),
(3 + 1) / (6 + 6),
(0 + 1) / (6 + 6),
(1 + 1) / (6 + 6),
(1 + 1) / (6 + 6),
(0 + 1) / (6 + 6),
],
]
)
weights = np.zeros(theta.shape)
normed_weights = np.zeros(theta.shape)
for i in range(2):
weights[i] = -np.log(theta[i])
normed_weights[i] = weights[i] / weights[i].sum()
# Verify inputs are nonnegative.
clf = ComplementNB(alpha=1.0)
msg = re.escape("Negative values in data passed to ComplementNB (input X)")
with pytest.raises(ValueError, match=msg):
clf.fit(-X, Y)
clf.fit(X, Y)
# Check that counts/weights are correct.
feature_count = np.array([[1, 3, 0, 1, 1, 0], [0, 1, 1, 0, 0, 1]])
assert_array_equal(clf.feature_count_, feature_count)
class_count = np.array([3, 1])
assert_array_equal(clf.class_count_, class_count)
feature_all = np.array([1, 4, 1, 1, 1, 1])
assert_array_equal(clf.feature_all_, feature_all)
assert_array_almost_equal(clf.feature_log_prob_, weights)
clf = ComplementNB(alpha=1.0, norm=True)
clf.fit(X, Y)
assert_array_almost_equal(clf.feature_log_prob_, normed_weights)
def test_categoricalnb():
# Check the ability to predict the training set.
clf = CategoricalNB()
y_pred = clf.fit(X2, y2).predict(X2)
assert_array_equal(y_pred, y2)
X3 = np.array([[1, 4], [2, 5]])
y3 = np.array([1, 2])
clf = CategoricalNB(alpha=1, fit_prior=False)
clf.fit(X3, y3)
assert_array_equal(clf.n_categories_, np.array([3, 6]))
# Check error is raised for X with negative entries
X = np.array([[0, -1]])
y = np.array([1])
error_msg = re.escape("Negative values in data passed to CategoricalNB (input X)")
with pytest.raises(ValueError, match=error_msg):
clf.predict(X)
with pytest.raises(ValueError, match=error_msg):
clf.fit(X, y)
# Test alpha
X3_test = np.array([[2, 5]])
# alpha=1 increases the count of all categories by one so the final
# probability for each category is not 50/50 but 1/3 to 2/3
bayes_numerator = np.array([[1 / 3 * 1 / 3, 2 / 3 * 2 / 3]])
bayes_denominator = bayes_numerator.sum()
assert_array_almost_equal(
clf.predict_proba(X3_test), bayes_numerator / bayes_denominator
)
# Assert category_count has counted all features
assert len(clf.category_count_) == X3.shape[1]
# Check sample_weight
X = np.array([[0, 0], [0, 1], [0, 0], [1, 1]])
y = np.array([1, 1, 2, 2])
clf = CategoricalNB(alpha=1, fit_prior=False)
clf.fit(X, y)
assert_array_equal(clf.predict(np.array([[0, 0]])), np.array([1]))
assert_array_equal(clf.n_categories_, np.array([2, 2]))
for factor in [1.0, 0.3, 5, 0.0001]:
X = np.array([[0, 0], [0, 1], [0, 0], [1, 1]])
y = np.array([1, 1, 2, 2])
sample_weight = np.array([1, 1, 10, 0.1]) * factor
clf = CategoricalNB(alpha=1, fit_prior=False)
clf.fit(X, y, sample_weight=sample_weight)
assert_array_equal(clf.predict(np.array([[0, 0]])), np.array([2]))
assert_array_equal(clf.n_categories_, np.array([2, 2]))
@pytest.mark.parametrize(
"min_categories, exp_X1_count, exp_X2_count, new_X, exp_n_categories_",
[
# check min_categories with int > observed categories
(
3,
np.array([[2, 0, 0], [1, 1, 0]]),
np.array([[1, 1, 0], [1, 1, 0]]),
np.array([[0, 2]]),
np.array([3, 3]),
),
# check with list input
(
[3, 4],
np.array([[2, 0, 0], [1, 1, 0]]),
np.array([[1, 1, 0, 0], [1, 1, 0, 0]]),
np.array([[0, 3]]),
np.array([3, 4]),
),
# check min_categories with min less than actual
(
1,
np.array([[2, 0], [1, 1]]),
np.array([[1, 1], [1, 1]]),
np.array([[0, 1]]),
np.array([2, 2]),
),
],
)
def test_categoricalnb_with_min_categories(
min_categories, exp_X1_count, exp_X2_count, new_X, exp_n_categories_
):
X_n_categories = np.array([[0, 0], [0, 1], [0, 0], [1, 1]])
y_n_categories = np.array([1, 1, 2, 2])
expected_prediction = np.array([1])
clf = CategoricalNB(alpha=1, fit_prior=False, min_categories=min_categories)
clf.fit(X_n_categories, y_n_categories)
X1_count, X2_count = clf.category_count_
assert_array_equal(X1_count, exp_X1_count)
assert_array_equal(X2_count, exp_X2_count)
predictions = clf.predict(new_X)
assert_array_equal(predictions, expected_prediction)
assert_array_equal(clf.n_categories_, exp_n_categories_)
@pytest.mark.parametrize(
"min_categories, error_msg",
[
("bad_arg", "'min_categories' should have integral"),
([[3, 2], [2, 4]], "'min_categories' should have shape"),
(1.0, "'min_categories' should have integral"),
],
)
def test_categoricalnb_min_categories_errors(min_categories, error_msg):
X = np.array([[0, 0], [0, 1], [0, 0], [1, 1]])
y = np.array([1, 1, 2, 2])
clf = CategoricalNB(alpha=1, fit_prior=False, min_categories=min_categories)
with pytest.raises(ValueError, match=error_msg):
clf.fit(X, y)
def test_alpha():
# Setting alpha=0 should not output nan results when p(x_i|y_j)=0 is a case
X = np.array([[1, 0], [1, 1]])
y = np.array([0, 1])
nb = BernoulliNB(alpha=0.0)
msg = "alpha too small will result in numeric errors, setting alpha = 1.0e-10"
with pytest.warns(UserWarning, match=msg):
nb.partial_fit(X, y, classes=[0, 1])
with pytest.warns(UserWarning, match=msg):
nb.fit(X, y)
prob = np.array([[1, 0], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
nb = MultinomialNB(alpha=0.0)
with pytest.warns(UserWarning, match=msg):
nb.partial_fit(X, y, classes=[0, 1])
with pytest.warns(UserWarning, match=msg):
nb.fit(X, y)
prob = np.array([[2.0 / 3, 1.0 / 3], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
nb = CategoricalNB(alpha=0.0)
with pytest.warns(UserWarning, match=msg):
nb.fit(X, y)
prob = np.array([[1.0, 0.0], [0.0, 1.0]])
assert_array_almost_equal(nb.predict_proba(X), prob)
# Test sparse X
X = scipy.sparse.csr_matrix(X)
nb = BernoulliNB(alpha=0.0)
with pytest.warns(UserWarning, match=msg):
nb.fit(X, y)
prob = np.array([[1, 0], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
nb = MultinomialNB(alpha=0.0)
with pytest.warns(UserWarning, match=msg):
nb.fit(X, y)
prob = np.array([[2.0 / 3, 1.0 / 3], [0, 1]])
assert_array_almost_equal(nb.predict_proba(X), prob)
# Test for alpha < 0
X = np.array([[1, 0], [1, 1]])
y = np.array([0, 1])
expected_msg = re.escape(
"Smoothing parameter alpha = -1.0e-01. alpha should be > 0."
)
b_nb = BernoulliNB(alpha=-0.1)
m_nb = MultinomialNB(alpha=-0.1)
c_nb = CategoricalNB(alpha=-0.1)
with pytest.raises(ValueError, match=expected_msg):
b_nb.fit(X, y)
with pytest.raises(ValueError, match=expected_msg):
m_nb.fit(X, y)
with pytest.raises(ValueError, match=expected_msg):
c_nb.fit(X, y)
b_nb = BernoulliNB(alpha=-0.1)
m_nb = MultinomialNB(alpha=-0.1)
with pytest.raises(ValueError, match=expected_msg):
b_nb.partial_fit(X, y, classes=[0, 1])
with pytest.raises(ValueError, match=expected_msg):
m_nb.partial_fit(X, y, classes=[0, 1])
def test_alpha_vector():
X = np.array([[1, 0], [1, 1]])
y = np.array([0, 1])
# Setting alpha=np.array with same length
# as number of features should be fine
alpha = np.array([1, 2])
nb = MultinomialNB(alpha=alpha)
nb.partial_fit(X, y, classes=[0, 1])
# Test feature probabilities uses pseudo-counts (alpha)
feature_prob = np.array([[1 / 2, 1 / 2], [2 / 5, 3 / 5]])
assert_array_almost_equal(nb.feature_log_prob_, np.log(feature_prob))
# Test predictions
prob = np.array([[5 / 9, 4 / 9], [25 / 49, 24 / 49]])
assert_array_almost_equal(nb.predict_proba(X), prob)
# Test alpha non-negative
alpha = np.array([1.0, -0.1])
m_nb = MultinomialNB(alpha=alpha)
expected_msg = "Smoothing parameter alpha = -1.0e-01. alpha should be > 0."
with pytest.raises(ValueError, match=expected_msg):
m_nb.fit(X, y)
# Test that too small pseudo-counts are replaced
ALPHA_MIN = 1e-10
alpha = np.array([ALPHA_MIN / 2, 0.5])
m_nb = MultinomialNB(alpha=alpha)
m_nb.partial_fit(X, y, classes=[0, 1])
assert_array_almost_equal(m_nb._check_alpha(), [ALPHA_MIN, 0.5], decimal=12)
# Test correct dimensions
alpha = np.array([1.0, 2.0, 3.0])
m_nb = MultinomialNB(alpha=alpha)
expected_msg = re.escape(
"alpha should be a scalar or a numpy array with shape [n_features]"
)
with pytest.raises(ValueError, match=expected_msg):
m_nb.fit(X, y)
def test_check_accuracy_on_digits():
# Non regression test to make sure that any further refactoring / optim
# of the NB models do not harm the performance on a slightly non-linearly
# separable dataset
X, y = load_digits(return_X_y=True)
binary_3v8 = np.logical_or(y == 3, y == 8)
X_3v8, y_3v8 = X[binary_3v8], y[binary_3v8]
# Multinomial NB
scores = cross_val_score(MultinomialNB(alpha=10), X, y, cv=10)
assert scores.mean() > 0.86
scores = cross_val_score(MultinomialNB(alpha=10), X_3v8, y_3v8, cv=10)
assert scores.mean() > 0.94
# Bernoulli NB
scores = cross_val_score(BernoulliNB(alpha=10), X > 4, y, cv=10)
assert scores.mean() > 0.83
scores = cross_val_score(BernoulliNB(alpha=10), X_3v8 > 4, y_3v8, cv=10)
assert scores.mean() > 0.92
# Gaussian NB
scores = cross_val_score(GaussianNB(), X, y, cv=10)
assert scores.mean() > 0.77
scores = cross_val_score(GaussianNB(var_smoothing=0.1), X, y, cv=10)
assert scores.mean() > 0.89
scores = cross_val_score(GaussianNB(), X_3v8, y_3v8, cv=10)
assert scores.mean() > 0.86
# FIXME: remove in 1.2
@pytest.mark.parametrize("Estimator", DISCRETE_NAIVE_BAYES_CLASSES)
def test_n_features_deprecation(Estimator):
# Check that we raise the proper deprecation warning if accessing
# `n_features_`.
X = np.array([[1, 2], [3, 4]])
y = np.array([1, 0])
est = Estimator().fit(X, y)
with pytest.warns(FutureWarning, match="`n_features_` was deprecated"):
est.n_features_
|
|
import io, math, os, os.path, pickle, struct
from struct import Struct
from Catalog.Identifiers import PageId, FileId, TupleId
from Catalog.Schema import DBSchema
from Storage.Page import PageHeader, Page
from Storage.SlottedPage import SlottedPageHeader, SlottedPage
class FileHeader:
"""
A file header class, containing a page size and a schema for the data
entries stored in the file.
Our file header object also keeps its own binary representation per instance
rather than at the class level, since each file may have a variable length schema.
The binary representation is a struct, with six components in its format string:
i. header length
ii. page size
iii. page class length
iv. schema description length
v. a pickled page class
vi. a JSON-serialized schema (from DBSchema.packSchema)
>>> schema = DBSchema('employee', [('id', 'int'), ('dob', 'char(10)'), ('salary', 'int')])
>>> fh = FileHeader(pageSize=io.DEFAULT_BUFFER_SIZE, pageClass=SlottedPage, schema=schema)
>>> b = fh.pack()
>>> fh2 = FileHeader.unpack(b)
>>> fh.pageSize == fh2.pageSize
True
>>> fh.schema.schema() == fh2.schema.schema()
True
## Test the file header's ability to be written to, and read from a Python file object.
>>> f1 = open('test.header', 'wb')
>>> fh.toFile(f1)
>>> f1.flush(); f1.close()
>>> f2 = open('test.header', 'r+b')
>>> fh3 = FileHeader.fromFile(f2)
>>> fh.pageSize == fh3.pageSize \
and fh.pageClass == fh3.pageClass \
and fh.schema.schema() == fh3.schema.schema()
True
>>> os.remove('test.header')
"""
def __init__(self, **kwargs):
other = kwargs.get("other", None)
if other:
self.fromOther(other)
else:
pageSize = kwargs.get("pageSize", None)
pageClass = kwargs.get("pageClass", None)
schema = kwargs.get("schema", None)
if pageSize and pageClass and schema:
pageClassLen = len(pickle.dumps(pageClass))
schemaDescLen = len(schema.packSchema())
self.binrepr = Struct("HHHH"+str(pageClassLen)+"s"+str(schemaDescLen)+"s")
self.size = self.binrepr.size
self.pageSize = pageSize
self.pageClass = pageClass
self.schema = schema
else:
raise ValueError("Invalid file header constructor arguments")
def fromOther(self, other):
self.binrepr = other.binrepr
self.size = other.size
self.pageSize = other.pageSize
self.pageClass = other.pageClass
self.schema = other.schema
def pack(self):
if self.binrepr and self.pageSize and self.schema:
packedPageClass = pickle.dumps(self.pageClass)
packedSchema = self.schema.packSchema()
return self.binrepr.pack(self.size, self.pageSize, \
len(packedPageClass), len(packedSchema), \
packedPageClass, packedSchema)
@classmethod
def binrepr(cls, buffer):
lenStruct = Struct("HHHH")
(headerLen, _, pageClassLen, schemaDescLen) = lenStruct.unpack_from(buffer)
if headerLen > 0 and pageClassLen > 0 and schemaDescLen > 0:
return Struct("HHHH"+str(pageClassLen)+"s"+str(schemaDescLen)+"s")
else:
raise ValueError("Invalid header length read from storage file header")
@classmethod
def unpack(cls, buffer):
brepr = cls.binrepr(buffer)
values = brepr.unpack_from(buffer)
if len(values) == 6:
pageClass = pickle.loads(values[4])
schema = DBSchema.unpackSchema(values[5])
return FileHeader(pageSize=values[1], pageClass=pageClass, schema=schema)
def toFile(self, f):
pos = f.tell()
if pos == 0:
f.write(self.pack())
else:
raise ValueError("Cannot write file header, file positioned beyond its start.")
@classmethod
def fromFile(cls, f):
pos = f.tell()
if pos == 0:
lenStruct = Struct("H")
headerLen = lenStruct.unpack_from(f.peek(lenStruct.size))[0]
if headerLen > 0:
buffer = f.read(headerLen)
return FileHeader.unpack(buffer)
else:
raise ValueError("Invalid header length read from storage file header")
else:
raise ValueError("Cannot read file header, file positioned beyond its start.")
class StorageFile:
"""
A storage file implementation, as a base class for all database files.
All storage files have a file identifier, a file path, a file header and a handle
to a file object as metadata.
This implementation supports a readPage() and writePage() method, enabling I/O
for specific pages to the backing file. Allocation of new pages is handled by the
underlying file system (i.e. simply write the desired page, and the file system
will grow the backing file by the desired amount).
Storage files may also serialize their metadata using the pack() and unpack(),
allowing their metadata to be written to disk when persisting the database catalog.
>>> import shutil, Storage.BufferPool, Storage.FileManager
>>> schema = DBSchema('employee', [('id', 'int'), ('age', 'int')])
>>> bp = Storage.BufferPool.BufferPool()
>>> fm = Storage.FileManager.FileManager(bufferPool=bp)
>>> bp.setFileManager(fm)
# Create a relation for the given schema
>>> fm.createRelation(schema.name, schema)
# Below 'f' is a StorageFile object returned by the FileManager
>>> (fId, f) = fm.relationFile(schema.name)
# Check initial file status
>>> f.numPages() == 0
True
# There should be a valid free page data structure in the file.
>>> f.freePages is not None
True
# The first available page should be at page offset 0.
>>> f.availablePage().pageIndex
0
# Create a pair of pages.
>>> pId = PageId(fId, 0)
>>> pId1 = PageId(fId, 1)
>>> p = SlottedPage(pageId=pId, buffer=bytes(f.pageSize()), schema=schema)
>>> p1 = SlottedPage(pageId=pId1, buffer=bytes(f.pageSize()), schema=schema)
# Populate pages
>>> for tup in [schema.pack(schema.instantiate(i, 2*i+20)) for i in range(10)]:
... _ = p.insertTuple(tup)
...
>>> for tup in [schema.pack(schema.instantiate(i, i+20)) for i in range(10, 20)]:
... _ = p1.insertTuple(tup)
...
# Write out pages and sync to disk.
>>> f.writePage(p)
>>> f.writePage(p1)
>>> f.flush()
# Check the number of pages, and the file size.
>>> f.numPages() == 2
True
>>> f.size() == (f.headerSize() + f.pageSize() * 2)
True
# Read pages in reverse order testing offset and page index.
>>> pageBuffer = bytearray(f.pageSize())
>>> pIn1 = f.readPage(pId1, pageBuffer)
>>> pIn1.pageId == pId1
True
>>> f.pageOffset(pIn1.pageId) == f.header.size + f.pageSize()
True
>>> pIn = f.readPage(pId, pageBuffer)
>>> pIn.pageId == pId
True
>>> f.pageOffset(pIn.pageId) == f.header.size
True
# Test page header iterator
>>> [p[1].usedSpace() for p in f.headers()]
[80, 80]
# Test page iterator
>>> [p[1].pageId.pageIndex for p in f.pages()]
[0, 1]
# Test tuple iterator
>>> [schema.unpack(tup).id for tup in f.tuples()] == list(range(20))
True
# Check buffer pool utilization
>>> (bp.numPages() - bp.numFreePages()) == 2
True
## Clean up the doctest
>>> shutil.rmtree(Storage.FileManager.FileManager.defaultDataDir)
"""
defaultPageClass = SlottedPage
def __init__(self, **kwargs):
other = kwargs.get("other", None)
if other:
self.fromOther(other)
else:
self.bufferPool = kwargs.get("bufferPool", None)
if self.bufferPool is None:
raise ValueError("No buffer pool found when initializing a storage file")
fileId = kwargs.get("fileId", None)
filePath = kwargs.get("filePath", None)
mode = kwargs.get("mode", None)
existing = os.path.exists(filePath)
if fileId and filePath:
initHeader = False
initFreePages = False
if not existing and mode.lower() == "create":
ioMode = "w+b"
pageSize = kwargs.get("pageSize", io.DEFAULT_BUFFER_SIZE)
pageClass = kwargs.get("pageClass", StorageFile.defaultPageClass)
schema = kwargs.get("schema", None)
if pageSize and pageClass and schema:
self.header = FileHeader(pageSize=pageSize, pageClass=pageClass, schema=schema)
initHeader = True
initFreePages = False
else:
raise ValueError("No page size, class or schema specified when creating a new storage file")
elif existing and mode.lower() in ["update", "truncate"]:
ioMode = "r+b" if mode.lower() == "update" else "w+b"
f = io.BufferedReader(io.FileIO(filePath))
self.header = FileHeader.fromFile(f)
pageSize = self.pageSize()
initFreePages = True
f.close()
else:
raise ValueError("Incompatible storage file mode and on-disk file status")
if self.header:
self.fileId = fileId
self.path = filePath
self.file = io.BufferedRandom(io.FileIO(self.path, ioMode), buffer_size=pageSize)
self.binrepr = Struct("H"+str(FileId.binrepr.size)+"s"+str(len(self.path))+"s")
self.freePages = set()
page = self.pageClass()(pageId=self.pageId(0), buffer=bytes(self.pageSize()), schema=self.schema())
self.pageHdrSize = page.header.headerSize()
if initFreePages:
self.initializeFreePages()
if initHeader:
self.file.seek(0)
self.header.toFile(self.file)
self.file.flush()
else:
raise ValueError("No valid header available for storage file")
else:
raise ValueError("No file id or path specified in storage file constructor")
def fromOther(self, other):
self.bufferPool = other.bufferPool
self.fileId = other.fileId
self.path = other.path
self.header = other.header
self.file = other.file
self.binrepr = other.binrepr
self.freePages = other.freePages
self.pageHdrSize = other.pageHdrSize
# Initialize the free page directory by reading all headers and
# checking if the page has free space.
def initializeFreePages(self):
for (pId, hdr) in self.headers():
if hdr.hasFreeTuple():
self.freePages.add(pId)
# File control
def flush(self):
self.file.flush()
def close(self):
if not self.file.closed:
self.file.close()
# Storage file helpers
def pageId(self, pageIndex):
return PageId(self.fileId, pageIndex)
def schema(self):
return self.header.schema
def size(self):
return os.path.getsize(self.path)
def headerSize(self):
return self.header.size
def pageSize(self):
return self.header.pageSize
def pageHeaderSize(self):
return self.pageHdrSize
def pageClass(self):
return self.header.pageClass
def numPages(self):
return math.floor((self.size() - self.headerSize()) / self.pageSize())
def pageOffset(self, pageId):
return self.headerSize() + self.pageSize() * pageId.pageIndex
def pageRange(self, pageId):
start = self.pageOffset(pageId)
return (start, start+self.pageSize())
def validPageId(self, pageId):
return pageId.fileId == self.fileId and pageId.pageIndex < self.numPages()
def validBuffer(self, page):
return len(page) == self.pageSize()
# Page header operations
# Reads a page header from disk.
def readPageHeader(self, pageId):
if self.validPageId(pageId):
self.file.seek(self.pageOffset(pageId))
packedHdr = bytearray(self.pageHeaderSize())
bytesRead = self.file.readinto(packedHdr)
if bytesRead == self.pageHeaderSize():
return self.pageClass().headerClass.unpack(packedHdr)
else:
raise ValueError("Read a partial page header")
else:
raise ValueError("Invalid page id while reading a header")
# Writes a page header to disk.
# The page must already exist, that is we cannot extend the file with only a page header.
def writePageHeader(self, page):
if isinstance(page, self.pageClass()) and self.validPageId(page.pageId):
self.file.seek(self.pageOffset(page.pageId))
self.file.write(page.header.pack())
else:
raise ValueError("Invalid page type or page id while writing a header")
# Page operations
def readPage(self, pageId, bufferForPage):
if self.validPageId(pageId) and self.validBuffer(bufferForPage):
self.file.seek(self.pageOffset(pageId))
bytesRead = self.file.readinto(bufferForPage)
if bytesRead == self.pageSize():
page = self.pageClass().unpack(pageId, bufferForPage)
# Refresh the free page list based on the on-disk header contents.
if page.header.hasFreeTuple() and pageId not in self.freePages:
self.freePages.add(pageId)
return page
else:
raise ValueError("Read a partial page")
else:
raise ValueError("Invalid page id or page buffer")
def writePage(self, page):
if isinstance(page, self.pageClass()):
self.file.seek(self.pageOffset(page.pageId))
self.file.write(page.pack())
# Refresh the free page list based on the in-memory header contents.
# This is needed if the page has been directly modified while resident in the buffer pool.
if not page.header.hasFreeTuple():
self.freePages.discard(page.pageId)
else:
raise ValueError("Incompatible page type during writePage")
# Adds a new page to the file by writing past its end.
def allocatePage(self):
pId = self.pageId(self.numPages())
page = self.pageClass()(pageId=pId, buffer=bytes(self.pageSize()), schema=self.schema())
self.writePage(page)
self.file.flush()
return page
# Returns the page id of a page with available space (an arbitrary
# member of the free-page set).
def availablePage(self):
if not self.freePages:
page = self.allocatePage()
self.freePages.add(page.pageId)
return next(iter(self.freePages))
# Tuple operations
# Inserts the given tuple to the first available page.
def insertTuple(self, tupleData):
pId = self.availablePage()
page = self.bufferPool.getPage(pId)
tupleId = page.insertTuple(tupleData)
if not page.header.hasFreeTuple():
self.freePages.discard(pId)
return tupleId
# Removes the tuple by its id, tracking if the page is now free
# Returns the deleted tuple for further operations (e.g., index maintenance)
def deleteTuple(self, tupleId):
pId = tupleId.pageId
page = self.bufferPool.getPage(pId)
tupleData = page.getTuple(tupleId)
page.deleteTuple(tupleId)
if page.header.hasFreeTuple() and pId not in self.freePages:
self.freePages.add(pId)
return tupleData
# Updates the tuple by id
# Returns the updated tuple for further operations (e.g., index maintenance)
def updateTuple(self, tupleId, tupleData):
pId = tupleId.pageId
page = self.bufferPool.getPage(pId)
oldData = page.getTuple(tupleId)
page.putTuple(tupleId, tupleData)
return oldData
# Iterators
# Page header iterator
def headers(self):
return self.FileHeaderIterator(self)
# Page iterator, using the buffer pool.
# This can optionally pin the pages in the buffer pool while accessing them.
def pages(self, pinned=False):
return self.FilePageIterator(self, pinned)
# Unbuffered page iterator.
# Use with care, direct pages are not authoritative if the
# page is present in the buffer pool.
def directPages(self):
return self.FileDirectPageIterator(self)
# Tuple iterator
# This can optionally pin its accessed pages in the buffer pool.
def tuples(self, pinned=False):
return self.FileTupleIterator(self, pinned)
def pack(self):
if self.fileId and self.path:
return self.binrepr.pack(self.binrepr.size, self.fileId.pack(), self.path.encode())
@classmethod
def binrepr(cls, buffer):
lenStruct = Struct("H")
reprLen = lenStruct.unpack_from(buffer)[0]
if reprLen > 0:
fmt = "H"+str(FileId.binrepr.size)+"s"
filePathLen = reprLen-struct.calcsize(fmt)
return Struct(fmt+str(filePathLen)+"s")
else:
raise ValueError("Invalid format length read from storage file serialization")
@classmethod
def unpack(cls, bufferPool, buffer):
brepr = cls.binrepr(buffer)
values = brepr.unpack_from(buffer)
if len(values) == 3:
fileId = FileId.unpack(values[1])
filePath = values[2].decode()
return cls(bufferPool=bufferPool, fileId=fileId, filePath=filePath, mode="update")
# Iterator class implementations
class FileHeaderIterator:
def __init__(self, storageFile):
self.currentPageIdx = 0
self.storageFile = storageFile
def __iter__(self):
return self
def __next__(self):
pId = self.storageFile.pageId(self.currentPageIdx)
if self.storageFile.validPageId(pId):
self.currentPageIdx += 1
if self.storageFile.bufferPool.hasPage(pId):
return (pId, self.storageFile.bufferPool.getPage(pId).header)
else:
return (pId, self.storageFile.readPageHeader(pId))
else:
raise StopIteration
class FilePageIterator:
def __init__(self, storageFile, pinned=False):
self.currentPageIdx = 0
self.storageFile = storageFile
self.pinned = pinned
def __iter__(self):
return self
def __next__(self):
pId = self.storageFile.pageId(self.currentPageIdx)
if self.storageFile.validPageId(pId):
self.currentPageIdx += 1
return (pId, self.storageFile.bufferPool.getPage(pId, self.pinned))
else:
raise StopIteration
class FileDirectPageIterator:
def __init__(self, storageFile):
self.currentPageIdx = 0
self.storageFile = storageFile
self.buffer = bytearray(storageFile.pageSize())
def __iter__(self):
return self
def __next__(self):
pId = self.storageFile.pageId(self.currentPageIdx)
if self.storageFile.validPageId(pId):
self.currentPageIdx += 1
return (pId, self.storageFile.readPage(pId, self.buffer))
else:
raise StopIteration
class FileTupleIterator:
def __init__(self, storageFile, pinned=False):
self.storageFile = storageFile
self.pageIterator = storageFile.pages(pinned)
self.nextPage()
def __iter__(self):
return self
def __next__(self):
if self.pageIterator is not None:
while self.tupleIterator is not None:
try:
return next(self.tupleIterator)
except StopIteration:
self.nextPage()
if self.pageIterator is None:
raise StopIteration
def nextPage(self):
try:
self.currentPage = next(self.pageIterator)[1]
except StopIteration:
self.pageIterator = None
self.tupleIterator = None
else:
self.tupleIterator = iter(self.currentPage)
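# A minimal usage sketch, assuming an already-opened storage file object
# backed by a buffer pool; the per-tuple processing step is illustrative.
def _exampleScanTuples(storageFile):
    """Scan every tuple in the file, pinning pages while they are read."""
    for tupleData in storageFile.tuples(pinned=True):
        pass  # process tupleData here (e.g., deserialize via the schema)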
if __name__ == "__main__":
import doctest
doctest.testmod()
|
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Provides TestStream for verifying streaming runner semantics.
For internal use only; no backwards-compatibility guarantees.
"""
from __future__ import absolute_import
from abc import ABCMeta
from abc import abstractmethod
from builtins import object
from functools import total_ordering
from future.utils import with_metaclass
import apache_beam as beam
from apache_beam import coders
from apache_beam import pvalue
from apache_beam.portability import common_urns
from apache_beam.portability.api import beam_runner_api_pb2
from apache_beam.transforms import PTransform
from apache_beam.transforms import core
from apache_beam.transforms import window
from apache_beam.transforms.window import TimestampedValue
from apache_beam.utils import timestamp
from apache_beam.utils.windowed_value import WindowedValue
__all__ = [
'Event',
'ElementEvent',
'WatermarkEvent',
'ProcessingTimeEvent',
'TestStream',
]
@total_ordering
class Event(with_metaclass(ABCMeta, object)): # type: ignore[misc]
"""Test stream event to be emitted during execution of a TestStream."""
@abstractmethod
def __eq__(self, other):
raise NotImplementedError
@abstractmethod
def __hash__(self):
raise NotImplementedError
@abstractmethod
def __lt__(self, other):
raise NotImplementedError
def __ne__(self, other):
# TODO(BEAM-5949): Needed for Python 2 compatibility.
return not self == other
@abstractmethod
def to_runner_api(self, element_coder):
raise NotImplementedError
@staticmethod
def from_runner_api(proto, element_coder):
if proto.HasField('element_event'):
return ElementEvent(
[TimestampedValue(
element_coder.decode(tv.encoded_element),
timestamp.Timestamp(micros=1000 * tv.timestamp))
for tv in proto.element_event.elements])
elif proto.HasField('watermark_event'):
return WatermarkEvent(timestamp.Timestamp(
micros=1000 * proto.watermark_event.new_watermark))
elif proto.HasField('processing_time_event'):
return ProcessingTimeEvent(timestamp.Duration(
micros=1000 * proto.processing_time_event.advance_duration))
else:
raise ValueError(
'Unknown TestStream Event type: %s' % proto.WhichOneof('event'))
class ElementEvent(Event):
"""Element-producing test stream event."""
def __init__(self, timestamped_values, tag=None):
self.timestamped_values = timestamped_values
self.tag = tag
def __eq__(self, other):
return (self.timestamped_values == other.timestamped_values and
self.tag == other.tag)
def __hash__(self):
return hash(self.timestamped_values)
def __lt__(self, other):
return self.timestamped_values < other.timestamped_values
def to_runner_api(self, element_coder):
return beam_runner_api_pb2.TestStreamPayload.Event(
element_event=beam_runner_api_pb2.TestStreamPayload.Event.AddElements(
elements=[
beam_runner_api_pb2.TestStreamPayload.TimestampedElement(
encoded_element=element_coder.encode(tv.value),
timestamp=tv.timestamp.micros // 1000)
for tv in self.timestamped_values]))
class WatermarkEvent(Event):
"""Watermark-advancing test stream event."""
def __init__(self, new_watermark, tag=None):
self.new_watermark = timestamp.Timestamp.of(new_watermark)
self.tag = tag
def __eq__(self, other):
return self.new_watermark == other.new_watermark and self.tag == other.tag
def __hash__(self):
return hash(self.new_watermark)
def __lt__(self, other):
return self.new_watermark < other.new_watermark
def to_runner_api(self, unused_element_coder):
return beam_runner_api_pb2.TestStreamPayload.Event(
        watermark_event=beam_runner_api_pb2.TestStreamPayload.Event
        .AdvanceWatermark(new_watermark=self.new_watermark.micros // 1000))
class ProcessingTimeEvent(Event):
"""Processing time-advancing test stream event."""
def __init__(self, advance_by):
self.advance_by = timestamp.Duration.of(advance_by)
def __eq__(self, other):
return self.advance_by == other.advance_by
def __hash__(self):
return hash(self.advance_by)
def __lt__(self, other):
return self.advance_by < other.advance_by
def to_runner_api(self, unused_element_coder):
return beam_runner_api_pb2.TestStreamPayload.Event(
        processing_time_event=beam_runner_api_pb2.TestStreamPayload.Event
        .AdvanceProcessingTime(advance_duration=self.advance_by.micros // 1000))
class TestStream(PTransform):
"""Test stream that generates events on an unbounded PCollection of elements.
Each event emits elements, advances the watermark or advances the processing
  time. After all of the specified elements are emitted, the TestStream
  ceases to produce output.
"""
def __init__(self, coder=coders.FastPrimitivesCoder(), events=()):
super(TestStream, self).__init__()
assert coder is not None
self.coder = coder
self.watermarks = {None: timestamp.MIN_TIMESTAMP}
self._events = list(events)
self.output_tags = set()
def get_windowing(self, unused_inputs):
return core.Windowing(window.GlobalWindows())
def expand(self, pbegin):
assert isinstance(pbegin, pvalue.PBegin)
self.pipeline = pbegin.pipeline
    # This multiplexes the events into the multiple output PCollections.
def mux(event):
if event.tag:
yield pvalue.TaggedOutput(event.tag, event)
else:
yield event
mux_output = (pbegin
| _TestStream(self.output_tags, events=self._events)
| 'TestStream Multiplexer' >> beam.ParDo(mux).with_outputs())
# Apply a way to control the watermark per output. It is necessary to
# have an individual _WatermarkController per PCollection because the
# calculation of the input watermark of a transform is based on the event
    # timestamp of the elements flowing through it. This means it is
    # impossible to control the output watermarks of the individual
    # PCollections solely through the event timestamps.
outputs = {}
for tag in self.output_tags:
label = '_WatermarkController[{}]'.format(tag)
outputs[tag] = (mux_output[tag] | label >> _WatermarkController())
# Downstream consumers expect a PCollection if there is only a single
# output.
if len(outputs) == 1:
return list(outputs.values())[0]
return outputs
def _add(self, event):
if isinstance(event, ElementEvent):
for tv in event.timestamped_values:
assert tv.timestamp < timestamp.MAX_TIMESTAMP, (
'Element timestamp must be before timestamp.MAX_TIMESTAMP.')
elif isinstance(event, WatermarkEvent):
if event.tag not in self.watermarks:
self.watermarks[event.tag] = timestamp.MIN_TIMESTAMP
assert event.new_watermark > self.watermarks[event.tag], (
'Watermark must strictly-monotonically advance.')
self.watermarks[event.tag] = event.new_watermark
elif isinstance(event, ProcessingTimeEvent):
assert event.advance_by > 0, (
'Must advance processing time by positive amount.')
else:
raise ValueError('Unknown event: %s' % event)
self._events.append(event)
def add_elements(self, elements, tag=None, event_timestamp=None):
"""Add elements to the TestStream.
Elements added to the TestStream will be produced during pipeline execution.
These elements can be TimestampedValue, WindowedValue or raw unwrapped
elements that are serializable using the TestStream's specified Coder. When
a TimestampedValue or a WindowedValue element is used, the timestamp of the
TimestampedValue or WindowedValue will be the timestamp of the produced
element; otherwise, the current watermark timestamp will be used for that
element. The windows of a given WindowedValue are ignored by the
TestStream.
"""
self.output_tags.add(tag)
timestamped_values = []
if tag not in self.watermarks:
self.watermarks[tag] = timestamp.MIN_TIMESTAMP
for element in elements:
if isinstance(element, TimestampedValue):
timestamped_values.append(element)
elif isinstance(element, WindowedValue):
# Drop windows for elements in test stream.
timestamped_values.append(
TimestampedValue(element.value, element.timestamp))
else:
# Add elements with timestamp equal to current watermark.
if event_timestamp is None:
event_timestamp = self.watermarks[tag]
timestamped_values.append(TimestampedValue(element, event_timestamp))
self._add(ElementEvent(timestamped_values, tag))
return self
def advance_watermark_to(self, new_watermark, tag=None):
"""Advance the watermark to a given Unix timestamp.
The Unix timestamp value used must be later than the previous watermark
value and should be given as an int, float or utils.timestamp.Timestamp
object.
"""
self.output_tags.add(tag)
self._add(WatermarkEvent(new_watermark, tag))
return self
def advance_watermark_to_infinity(self, tag=None):
"""Advance the watermark to the end of time, completing this TestStream."""
self.advance_watermark_to(timestamp.MAX_TIMESTAMP, tag)
return self
def advance_processing_time(self, advance_by):
"""Advance the current processing time by a given duration in seconds.
The duration must be a positive second duration and should be given as an
int, float or utils.timestamp.Duration object.
"""
self._add(ProcessingTimeEvent(advance_by))
return self
def to_runner_api_parameter(self, context):
return (
common_urns.primitives.TEST_STREAM.urn,
beam_runner_api_pb2.TestStreamPayload(
coder_id=context.coders.get_id(self.coder),
events=[e.to_runner_api(self.coder) for e in self._events]))
@PTransform.register_urn(
common_urns.primitives.TEST_STREAM.urn,
beam_runner_api_pb2.TestStreamPayload)
def from_runner_api_parameter(payload, context):
coder = context.coders.get_by_id(payload.coder_id)
return TestStream(
coder=coder,
events=[Event.from_runner_api(e, coder) for e in payload.events])
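# A minimal sketch of scripting a TestStream with the builder methods above;
# illustrative only, and the resulting transform still needs a pipeline whose
# runner supports TestStream (e.g. the DirectRunner).
def _example_test_stream():
  return (TestStream()
          .add_elements(['a', 'b'])
          .advance_watermark_to(10)
          .advance_processing_time(5)
          .add_elements([TimestampedValue('c', 15)])
          .advance_watermark_to_infinity())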
class _WatermarkController(PTransform):
"""A runner-overridable PTransform Primitive to control the watermark.
Expected implementation behavior:
  - If the instance receives a WatermarkEvent, it sets its output watermark to
the specified value then drops the event.
- If the instance receives an ElementEvent, it emits all specified elements
to the Global Window with the event time set to the element's timestamp.
"""
def get_windowing(self, _):
return core.Windowing(window.GlobalWindows())
def expand(self, pcoll):
return pvalue.PCollection.from_(pcoll)
class _TestStream(PTransform):
"""Test stream that generates events on an unbounded PCollection of elements.
Each event emits elements, advances the watermark or advances the processing
  time. After all of the specified elements are emitted, the TestStream
  ceases to produce output.
Expected implementation behavior:
- If the instance receives a WatermarkEvent with the WATERMARK_CONTROL_TAG
then the instance sets its own watermark hold at the specified value and
drops the event.
- If the instance receives any other WatermarkEvent or ElementEvent, it
passes it to the consumer.
"""
# This tag is used on WatermarkEvents to control the watermark at the root
# TestStream.
WATERMARK_CONTROL_TAG = '_TestStream_Watermark'
def __init__(self, output_tags, coder=coders.FastPrimitivesCoder(),
events=None):
assert coder is not None
self.coder = coder
self._events = self._add_watermark_advancements(output_tags, events)
def _watermark_starts(self, output_tags):
"""Sentinel values to hold the watermark of outputs to -inf.
The output watermarks of the output PCollections (fake unbounded sources) in
a TestStream are controlled by watermark holds. This sets the hold of each
output PCollection so that the individual holds can be controlled by the
given events.
"""
return [WatermarkEvent(timestamp.MIN_TIMESTAMP, tag) for tag in output_tags]
def _watermark_stops(self, output_tags):
"""Sentinel values to close the watermark of outputs."""
return [WatermarkEvent(timestamp.MAX_TIMESTAMP, tag) for tag in output_tags]
def _test_stream_start(self):
"""Sentinel value to move the watermark hold of the TestStream to +inf.
This sets a hold to +inf such that the individual holds of the output
    PCollections are allowed to modify their individual output watermarks with
their holds. This is because the calculation of the output watermark is a
min over all input watermarks.
"""
return [WatermarkEvent(timestamp.MAX_TIMESTAMP - timestamp.TIME_GRANULARITY,
_TestStream.WATERMARK_CONTROL_TAG)]
def _test_stream_stop(self):
"""Sentinel value to close the watermark of the TestStream."""
return [WatermarkEvent(timestamp.MAX_TIMESTAMP,
_TestStream.WATERMARK_CONTROL_TAG)]
def _test_stream_init(self):
"""Sentinel value to hold the watermark of the TestStream to -inf.
This sets a hold to ensure that the output watermarks of the output
PCollections do not advance to +inf before their watermark holds are set.
"""
return [WatermarkEvent(timestamp.MIN_TIMESTAMP,
_TestStream.WATERMARK_CONTROL_TAG)]
def _set_up(self, output_tags):
return (self._test_stream_init()
+ self._watermark_starts(output_tags)
+ self._test_stream_start())
def _tear_down(self, output_tags):
return self._watermark_stops(output_tags) + self._test_stream_stop()
def _add_watermark_advancements(self, output_tags, events):
"""Adds watermark advancements to the given events.
The following watermark advancements can be done on the runner side.
However, it makes the logic on the runner side much more complicated than
it needs to be.
In order for watermarks to be properly advanced in a TestStream, a specific
sequence of watermark holds must be sent:
1. Hold the root watermark at -inf (this prevents the pipeline from
immediately returning).
    2. Hold the watermarks at the WatermarkControllers at -inf (this prevents
the pipeline from immediately returning).
3. Advance the root watermark to +inf - 1 (this allows the downstream
WatermarkControllers to control their watermarks via holds).
4. Advance watermarks as normal.
5. Advance WatermarkController watermarks to +inf
6. Advance root watermark to +inf.
"""
if not events:
return []
return self._set_up(output_tags) + events + self._tear_down(output_tags)
def get_windowing(self, unused_inputs):
return core.Windowing(window.GlobalWindows())
def expand(self, pcoll):
return pvalue.PCollection(pcoll.pipeline, is_bounded=False)
def _infer_output_coder(self, input_type=None, input_coder=None):
return self.coder
def _events_from_script(self, index):
yield self._events[index]
def events(self, index):
return self._events_from_script(index)
def begin(self):
return 0
def end(self, index):
return index >= len(self._events)
def next(self, index):
return index + 1
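# A sketch of how a runner might walk the event script using the cursor
# methods above; the dispatch step is illustrative.
def _example_walk_events(test_stream):
  index = test_stream.begin()
  while not test_stream.end(index):
    for event in test_stream.events(index):
      pass  # a runner would dispatch the event here
    index = test_stream.next(index)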
|
|
"""
Miscellaneous Routines.
"""
import struct
# from sys import maxint as INF  # doesn't work anymore under Python 3,
# but PDF still uses 32-bit ints
INF = (1 << 31) - 1
import six #Python 2+3 compatibility
if six.PY3:
import chardet # For str encoding detection in Py3
unicode = str
def make_compat_bytes(in_str):
"In Py2, does nothing. In Py3, converts to bytes, encoding to unicode."
assert isinstance(in_str, str), str(type(in_str))
if six.PY2:
return in_str
else:
return in_str.encode()
def make_compat_str(in_str):
"In Py2, does nothing. In Py3, converts to string, guessing encoding."
assert isinstance(in_str, (bytes, str, unicode)), str(type(in_str))
if six.PY3 and isinstance(in_str, bytes):
enc = chardet.detect(in_str)
in_str = in_str.decode(enc['encoding'])
return in_str
def compatible_encode_method(bytesorstring, encoding='utf-8', erraction='ignore'):
"When Py2 str.encode is called, it often means bytes.encode in Py3. This does either."
if six.PY2:
assert isinstance(bytesorstring, (str, unicode)), str(type(bytesorstring))
return bytesorstring.encode(encoding, erraction)
if six.PY3:
if isinstance(bytesorstring, str): return bytesorstring
assert isinstance(bytesorstring, bytes), str(type(bytesorstring))
return bytesorstring.decode(encoding, erraction)
## PNG Predictor
##
def apply_png_predictor(pred, colors, columns, bitspercomponent, data):
if bitspercomponent != 8:
# unsupported
raise ValueError("Unsupported `bitspercomponent': %d" %
bitspercomponent)
nbytes = colors * columns * bitspercomponent // 8
i = 0
buf = b''
    line0 = b'\x00' * nbytes  # prior row starts as zeros, one full row wide
for i in range(0, len(data), nbytes+1):
ft = data[i]
if six.PY2:
ft = six.byte2int(ft)
i += 1
line1 = data[i:i+nbytes]
line2 = b''
if ft == 0:
# PNG none
line2 += line1
elif ft == 1:
# PNG sub (UNTESTED)
c = 0
for b in line1:
if six.PY2:
b = six.byte2int(b)
c = (c+b) & 255
line2 += six.int2byte(c)
elif ft == 2:
# PNG up
for (a, b) in zip(line0, line1):
if six.PY2:
a, b = six.byte2int(a), six.byte2int(b)
c = (a+b) & 255
line2 += six.int2byte(c)
elif ft == 3:
# PNG average (UNTESTED)
c = 0
for (a, b) in zip(line0, line1):
if six.PY2:
a, b = six.byte2int(a), six.byte2int(b)
c = ((c+a+b)//2) & 255
line2 += six.int2byte(c)
else:
# unsupported
raise ValueError("Unsupported predictor value: %d" % ft)
buf += line2
line0 = line2
return buf
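# A minimal sketch of the predictor above: one three-byte row using the PNG
# 'up' filter (type 2). With an all-zero prior line, 'up' reconstruction
# returns the raw bytes unchanged.
def _example_png_up():
    # The leading 0x02 is the per-row PNG filter type ('up'); the pred
    # argument is not consulted by the implementation above.
    data = b'\x02\x01\x02\x03'
    return apply_png_predictor(15, 1, 3, 8, data)  # -> b'\x01\x02\x03'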
## Matrix operations
##
MATRIX_IDENTITY = (1, 0, 0, 1, 0, 0)
def mult_matrix(m1, m0):
    """Returns the multiplication of two matrices."""
    (a1, b1, c1, d1, e1, f1) = m1
    (a0, b0, c0, d0, e0, f0) = m0
    return (a0*a1+c0*b1, b0*a1+d0*b1,
            a0*c1+c0*d1, b0*c1+d0*d1,
            a0*e1+c0*f1+e0, b0*e1+d0*f1+f0)
def translate_matrix(m, v):
"""Translates a matrix by (x, y)."""
(a, b, c, d, e, f) = m
(x, y) = v
return (a, b, c, d, x*a+y*c+e, x*b+y*d+f)
def apply_matrix_pt(m, v):
    """Applies a matrix to a point."""
    (a, b, c, d, e, f) = m
    (x, y) = v
    return (a*x+c*y+e, b*x+d*y+f)
def apply_matrix_norm(m, v):
"""Equivalent to apply_matrix_pt(M, (p,q)) - apply_matrix_pt(M, (0,0))"""
(a, b, c, d, e, f) = m
(p, q) = v
return (a*p+c*q, b*p+d*q)
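# A small sketch of composing the helpers above: translating the identity
# matrix and applying the result to the origin.
def _example_matrix_ops():
    m = translate_matrix(MATRIX_IDENTITY, (5, 10))
    # Composing with the identity leaves the translation intact.
    return apply_matrix_pt(mult_matrix(m, MATRIX_IDENTITY), (0, 0))  # (5, 10)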
## Utility functions
##
# isnumber
def isnumber(x):
return isinstance(x, (six.integer_types, float))
# uniq
def uniq(objs):
"""Eliminates duplicated elements."""
done = set()
for obj in objs:
if obj in done:
continue
done.add(obj)
yield obj
return
# csort
def csort(objs, key):
"""Order-preserving sorting function."""
idxs = dict((obj, i) for (i, obj) in enumerate(objs))
return sorted(objs, key=lambda obj: (key(obj), idxs[obj]))
# fsplit
def fsplit(pred, objs):
"""Split a list into two classes according to the predicate."""
t = []
f = []
for obj in objs:
if pred(obj):
t.append(obj)
else:
f.append(obj)
return (t, f)
# drange
def drange(v0, v1, d):
"""Returns a discrete range."""
assert v0 < v1, str((v0, v1, d))
return range(int(v0)//d, int(v1+d)//d)
# get_bound
def get_bound(pts):
"""Compute a minimal rectangle that covers all the points."""
(x0, y0, x1, y1) = (INF, INF, -INF, -INF)
for (x, y) in pts:
x0 = min(x0, x)
y0 = min(y0, y)
x1 = max(x1, x)
y1 = max(y1, y)
return (x0, y0, x1, y1)
# pick
def pick(seq, func, maxobj=None):
"""Picks the object obj where func(obj) has the highest value."""
maxscore = None
for obj in seq:
score = func(obj)
if maxscore is None or maxscore < score:
(maxscore, maxobj) = (score, obj)
return maxobj
# choplist
def choplist(n, seq):
"""Groups every n elements of the list."""
r = []
for x in seq:
r.append(x)
if len(r) == n:
yield tuple(r)
r = []
return
# nunpack
def nunpack(s, default=0):
"""Unpacks 1 to 4 or 8 byte integers (big endian)."""
l = len(s)
if not l:
return default
elif l == 1:
return ord(s)
elif l == 2:
return struct.unpack('>H', s)[0]
elif l == 3:
return struct.unpack('>L', b'\x00'+s)[0]
elif l == 4:
return struct.unpack('>L', s)[0]
elif l == 8:
return struct.unpack('>Q', s)[0]
else:
raise TypeError('invalid length: %d' % l)
# decode_text
PDFDocEncoding = ''.join(six.unichr(x) for x in (
0x0000, 0x0001, 0x0002, 0x0003, 0x0004, 0x0005, 0x0006, 0x0007,
0x0008, 0x0009, 0x000a, 0x000b, 0x000c, 0x000d, 0x000e, 0x000f,
0x0010, 0x0011, 0x0012, 0x0013, 0x0014, 0x0015, 0x0017, 0x0017,
0x02d8, 0x02c7, 0x02c6, 0x02d9, 0x02dd, 0x02db, 0x02da, 0x02dc,
0x0020, 0x0021, 0x0022, 0x0023, 0x0024, 0x0025, 0x0026, 0x0027,
0x0028, 0x0029, 0x002a, 0x002b, 0x002c, 0x002d, 0x002e, 0x002f,
0x0030, 0x0031, 0x0032, 0x0033, 0x0034, 0x0035, 0x0036, 0x0037,
0x0038, 0x0039, 0x003a, 0x003b, 0x003c, 0x003d, 0x003e, 0x003f,
0x0040, 0x0041, 0x0042, 0x0043, 0x0044, 0x0045, 0x0046, 0x0047,
0x0048, 0x0049, 0x004a, 0x004b, 0x004c, 0x004d, 0x004e, 0x004f,
0x0050, 0x0051, 0x0052, 0x0053, 0x0054, 0x0055, 0x0056, 0x0057,
0x0058, 0x0059, 0x005a, 0x005b, 0x005c, 0x005d, 0x005e, 0x005f,
0x0060, 0x0061, 0x0062, 0x0063, 0x0064, 0x0065, 0x0066, 0x0067,
0x0068, 0x0069, 0x006a, 0x006b, 0x006c, 0x006d, 0x006e, 0x006f,
0x0070, 0x0071, 0x0072, 0x0073, 0x0074, 0x0075, 0x0076, 0x0077,
0x0078, 0x0079, 0x007a, 0x007b, 0x007c, 0x007d, 0x007e, 0x0000,
0x2022, 0x2020, 0x2021, 0x2026, 0x2014, 0x2013, 0x0192, 0x2044,
0x2039, 0x203a, 0x2212, 0x2030, 0x201e, 0x201c, 0x201d, 0x2018,
0x2019, 0x201a, 0x2122, 0xfb01, 0xfb02, 0x0141, 0x0152, 0x0160,
0x0178, 0x017d, 0x0131, 0x0142, 0x0153, 0x0161, 0x017e, 0x0000,
0x20ac, 0x00a1, 0x00a2, 0x00a3, 0x00a4, 0x00a5, 0x00a6, 0x00a7,
0x00a8, 0x00a9, 0x00aa, 0x00ab, 0x00ac, 0x0000, 0x00ae, 0x00af,
0x00b0, 0x00b1, 0x00b2, 0x00b3, 0x00b4, 0x00b5, 0x00b6, 0x00b7,
0x00b8, 0x00b9, 0x00ba, 0x00bb, 0x00bc, 0x00bd, 0x00be, 0x00bf,
0x00c0, 0x00c1, 0x00c2, 0x00c3, 0x00c4, 0x00c5, 0x00c6, 0x00c7,
0x00c8, 0x00c9, 0x00ca, 0x00cb, 0x00cc, 0x00cd, 0x00ce, 0x00cf,
0x00d0, 0x00d1, 0x00d2, 0x00d3, 0x00d4, 0x00d5, 0x00d6, 0x00d7,
0x00d8, 0x00d9, 0x00da, 0x00db, 0x00dc, 0x00dd, 0x00de, 0x00df,
0x00e0, 0x00e1, 0x00e2, 0x00e3, 0x00e4, 0x00e5, 0x00e6, 0x00e7,
0x00e8, 0x00e9, 0x00ea, 0x00eb, 0x00ec, 0x00ed, 0x00ee, 0x00ef,
0x00f0, 0x00f1, 0x00f2, 0x00f3, 0x00f4, 0x00f5, 0x00f6, 0x00f7,
0x00f8, 0x00f9, 0x00fa, 0x00fb, 0x00fc, 0x00fd, 0x00fe, 0x00ff,
))
def decode_text(s):
"""Decodes a PDFDocEncoding string to Unicode."""
if s.startswith(b'\xfe\xff'):
return six.text_type(s[2:], 'utf-16be', 'ignore')
else:
return ''.join(PDFDocEncoding[c] for c in s)
# enc
def enc(x, codec='ascii'):
"""Encodes a string for SGML/XML/HTML"""
if isinstance(x, bytes):
return ''
    x = x.replace('&', '&amp;').replace('>', '&gt;').replace('<', '&lt;').replace('"', '&quot;')
if codec:
x = x.encode(codec, 'xmlcharrefreplace')
return x
def bbox2str(bbox):
(x0, y0, x1, y1) = bbox
return '%.3f,%.3f,%.3f,%.3f' % (x0, y0, x1, y1)
def matrix2str(m):
(a, b, c, d, e, f) = m
return '[%.2f,%.2f,%.2f,%.2f, (%.2f,%.2f)]' % (a, b, c, d, e, f)
## Plane
##
## A set-like data structure for objects placed on a plane.
## Can efficiently find objects in a certain rectangular area.
## It maintains two parallel lists of objects, each of
## which is sorted by its x or y coordinate.
##
class Plane(object):
def __init__(self, bbox, gridsize=50):
self._seq = [] # preserve the object order.
self._objs = set()
self._grid = {}
self.gridsize = gridsize
(self.x0, self.y0, self.x1, self.y1) = bbox
return
def __repr__(self):
return ('<Plane objs=%r>' % list(self))
def __iter__(self):
return ( obj for obj in self._seq if obj in self._objs )
def __len__(self):
return len(self._objs)
def __contains__(self, obj):
return obj in self._objs
def _getrange(self, bbox):
(x0, y0, x1, y1) = bbox
if (x1 <= self.x0 or self.x1 <= x0 or
y1 <= self.y0 or self.y1 <= y0): return
x0 = max(self.x0, x0)
y0 = max(self.y0, y0)
x1 = min(self.x1, x1)
y1 = min(self.y1, y1)
for y in drange(y0, y1, self.gridsize):
for x in drange(x0, x1, self.gridsize):
yield (x, y)
return
# extend(objs)
def extend(self, objs):
for obj in objs:
self.add(obj)
return
# add(obj): place an object.
def add(self, obj):
for k in self._getrange((obj.x0, obj.y0, obj.x1, obj.y1)):
if k not in self._grid:
r = []
self._grid[k] = r
else:
r = self._grid[k]
r.append(obj)
self._seq.append(obj)
self._objs.add(obj)
return
# remove(obj): displace an object.
def remove(self, obj):
for k in self._getrange((obj.x0, obj.y0, obj.x1, obj.y1)):
try:
self._grid[k].remove(obj)
except (KeyError, ValueError):
pass
self._objs.remove(obj)
return
# find(): finds objects that are in a certain area.
def find(self, bbox):
(x0, y0, x1, y1) = bbox
done = set()
for k in self._getrange(bbox):
if k not in self._grid:
continue
for obj in self._grid[k]:
if obj in done:
continue
done.add(obj)
if (obj.x1 <= x0 or x1 <= obj.x0 or
obj.y1 <= y0 or y1 <= obj.y0):
continue
yield obj
return
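# A minimal sketch of Plane usage; _Rect is an illustrative stand-in for
# any object exposing x0/y0/x1/y1 attributes (e.g. pdfminer layout items).
class _Rect(object):
    def __init__(self, x0, y0, x1, y1):
        (self.x0, self.y0, self.x1, self.y1) = (x0, y0, x1, y1)

def _example_plane():
    plane = Plane((0, 0, 100, 100))
    plane.extend([_Rect(10, 10, 20, 20), _Rect(50, 50, 60, 60)])
    # Only the first rectangle intersects the queried area.
    return list(plane.find((0, 0, 30, 30)))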
|
|
"""Profile page test"""
import datetime
from django.test import TransactionTestCase
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from apps.managers.challenge_mgr import challenge_mgr
from apps.utils import test_utils
from apps.widgets.smartgrid.models import Activity, ActionMember, Commitment, Event
from apps.widgets.quests.models import Quest
class MyAchievementsTestCase(TransactionTestCase):
"""Profile page test"""
def setUp(self):
"""setup"""
challenge_mgr.init()
self.user = test_utils.setup_user(username="user", password="changeme")
test_utils.set_competition_round()
test_utils.enable_quest()
challenge_mgr.register_page_widget("home", "home")
challenge_mgr.register_page_widget("profile", "my_achievements")
challenge_mgr.register_page_widget("profile", "my_commitments")
from apps.managers.cache_mgr import cache_mgr
cache_mgr.clear()
self.client.login(username="user", password="changeme")
def testActivityAchievement(self):
"""Check that the user's activity achievements are loaded."""
activity = Activity(
title="Test activity",
description="Testing!",
expected_duration=10,
point_value=10,
slug="test-activity",
pub_date=datetime.datetime.today(),
expire_date=datetime.datetime.today() + datetime.timedelta(days=7),
confirm_type="text",
type="activity",
)
activity.save()
        # Test that the profile page shows an approved activity.
member = ActionMember(user=self.user,
action=activity,
approval_status="approved")
member.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response,
reverse("activity_task", args=(activity.type, activity.slug,)))
self.assertContains(response, "Test activity")
# Test adding an event to catch a bug.
event = Event(
title="Test event",
slug="test-event",
description="Testing!",
expected_duration=10,
point_value=10,
pub_date=datetime.datetime.today(),
expire_date=datetime.datetime.today() + datetime.timedelta(days=7),
type="event",
event_date=datetime.datetime.today() + datetime.timedelta(days=3),
)
event.save()
member = ActionMember(user=self.user, action=event, approval_status="pending")
member.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response,
reverse("activity_task", args=(activity.type, activity.slug,)))
self.assertContains(response, "Pending")
self.assertContains(response, "Activity:")
self.assertContains(response, "Event:")
self.assertNotContains(response, "You have nothing in progress or pending.")
def testCommitmentAchievement(self):
"""Check that the user's commitment achievements are loaded."""
commitment = Commitment(
title="Test commitment",
description="A commitment!",
point_value=10,
type="commitment",
slug="test-commitment",
)
commitment.save()
        # Test that the profile page shows an in-progress commitment.
member = ActionMember(user=self.user, action=commitment)
member.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response,
reverse("activity_task", args=(commitment.type, commitment.slug,)))
self.assertContains(response, "In Progress")
self.assertContains(response, "Commitment:")
self.assertNotContains(response, "You have nothing in progress or pending.")
        # Test that the profile page shows an awarded commitment.
member.award_date = datetime.datetime.today()
member.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response,
reverse("activity_task", args=(commitment.type, commitment.slug,)))
self.assertNotContains(response, "You have not been awarded anything yet!")
self.assertNotContains(response, "In Progress")
def testVariablePointAchievement(self):
"""Test that a variable point activity appears correctly in the my achievements list."""
activity = Activity(
title="Test activity",
slug="test-activity",
description="Variable points!",
expected_duration=10,
point_range_start=5,
point_range_end=314160,
pub_date=datetime.datetime.today(),
expire_date=datetime.datetime.today() + datetime.timedelta(days=7),
confirm_type="text",
type="activity",
)
activity.save()
points = self.user.profile.points()
member = ActionMember(
user=self.user,
action=activity,
approval_status="approved",
)
member.points_awarded = 314159
member.save()
self.assertEqual(self.user.profile.points(), points + 314159,
"Variable number of points should have been awarded.")
# Kludge to change point value for the info bar.
profile = self.user.profile
profile.add_points(3, datetime.datetime.today(), "test")
profile.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response,
reverse("activity_task", args=(activity.type, activity.slug,)))
# Note, this test may break if something in the page has the value 314159.
# Try finding another suitable number.
# print response.content
self.assertContains(response, "314159", count=5,
msg_prefix="314159 points should appear for the activity.")
def testSocialBonusAchievement(self):
"""Check that the social bonus appears in the my achievements list."""
# Create a second test user.
user2 = User.objects.create_user("user2", "user2@test.com")
event = Event.objects.create(
title="Test event",
slug="test-event",
description="Testing!",
expected_duration=10,
point_value=10,
social_bonus=10,
pub_date=datetime.datetime.today(),
expire_date=datetime.datetime.today() + datetime.timedelta(days=7),
type="event",
event_date=datetime.datetime.today(),
)
# Create membership for the two users.
m = ActionMember(
user=self.user,
action=event,
approval_status="approved",
)
m.social_email = "user2@test.com"
m.save()
m2 = ActionMember(
user=user2,
action=event,
approval_status="approved",
)
m2.social_email = "user@test.com"
m2.save()
response = self.client.get(reverse("profile_index"))
self.assertContains(response, reverse("activity_task", args=(event.type, event.slug,)))
entry = "Event: Test event (Social Bonus)"
self.assertContains(response, entry, count=1,
msg_prefix="Achievements should contain a social bonus entry")
def testQuestAchievement(self):
"""test quest shown up in achievement"""
quest = Quest(
name="Test quest",
quest_slug="test_quest",
description="test quest",
priority=1,
unlock_conditions="True",
completion_conditions="True",
)
quest.save()
# Accept the quest, which should be automatically completed.
response = self.client.post(
reverse("quests_accept", args=(quest.quest_slug,)),
follow=True,
HTTP_REFERER=reverse("home_index"),
)
response = self.client.get(reverse("profile_index"))
self.assertContains(response, "Quest: Test quest", count=1,
msg_prefix="Achievements should contain a quest entry")
|
|
from open_vsdcli.vsd_common import *
@vsdcli.command(name='user-list')
@click.option('--enterprise-id', metavar='<id>')
@click.option('--group-id', metavar='<id>')
@click.option('--filter', metavar='<filter>',
help='Filter for firstName, lastName, userName, email, '
'lastUpdatedDate, creationDate, externalID')
@click.pass_context
def user_list(ctx, filter, **ids):
"""list users for a given enterprise or group id"""
id_type, id = check_id(**ids)
if not filter:
result = ctx.obj['nc'].get("%ss/%s/users" % (id_type, id))
else:
result = ctx.obj['nc'].get("%ss/%s/users" % (id_type, id),
filter=filter)
table = PrettyTable(["ID",
"User name",
"First name",
"Last name",
"Email"])
for line in result:
table.add_row([line['ID'],
line['userName'],
line['firstName'],
line['lastName'],
line['email']])
print(table)
@vsdcli.command(name='user-show')
@click.argument('user-id', metavar='<user-id>', required=True)
@click.pass_context
def user_show(ctx, user_id):
"""Show information for a given user id"""
result = ctx.obj['nc'].get("users/%s" % user_id)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='user-create')
@click.argument('username', metavar='<username>', required=True)
@click.option('--lastname', metavar='<lastname>', required=True)
@click.option('--firstname', metavar='<firstname>', required=True)
@click.option('--email', metavar='<email>', required=True)
@click.option('--password', metavar='<password>', required=True)
@click.option('--enterprise-id', metavar='<enterprise ID>', required=True)
@click.pass_context
def user_create(ctx, username, firstname, lastname, email, password,
enterprise_id):
"""Add a user to the VSD"""
import hashlib
    # Define mandatory values
params = {'userName': username,
'firstName': firstname,
'lastName': lastname,
'email': email,
'password': hashlib.sha1(password.encode('utf-8')).hexdigest()}
result = ctx.obj['nc'].post("enterprises/%s/users" %
enterprise_id, params)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='user-delete')
@click.argument('user-id', metavar='<user ID>', required=True)
@click.pass_context
def user_delete(ctx, user_id):
"""Delete a given user"""
ctx.obj['nc'].delete("users/%s" % user_id)
@vsdcli.command(name='user-update')
@click.argument('user-id', metavar='<user ID>', required=True)
@click.option('--key-value', metavar='<key:value>', multiple=True)
@click.pass_context
def user_update(ctx, user_id, key_value):
"""Update key/value for a given user"""
params = {}
for kv in key_value:
key, value = kv.split(':', 1)
params[key] = value
ctx.obj['nc'].put("users/%s" % user_id, params)
result = ctx.obj['nc'].get("users/%s" % user_id)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='group-list')
@click.option('--enterprise-id', metavar='<id>')
@click.option('--user-id', metavar='<id>')
@click.option('--filter', metavar='<filter>',
help='Filter for name, description, role, private, '
'lastUpdatedDate, creationDate, externalID')
@click.pass_context
def group_list(ctx, filter, **ids):
"""list groups for a given enterprise id or that an user belongs to"""
id_type, id = check_id(**ids)
if not filter:
result = ctx.obj['nc'].get("%ss/%s/groups" % (id_type, id))
else:
result = ctx.obj['nc'].get("%ss/%s/groups" % (id_type, id),
filter=filter)
table = PrettyTable(["ID", "Name", "Description", "Role", "Private"])
table.max_width['Description'] = 40
for line in result:
table.add_row([line['ID'],
line['name'],
line['description'],
line['role'],
line['private']])
print(table)
@vsdcli.command(name='group-show')
@click.argument('group-id', metavar='<group-id>', required=True)
@click.pass_context
def group_show(ctx, group_id):
"""Show information for a given group id"""
result = ctx.obj['nc'].get("groups/%s" % group_id)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='group-create')
@click.argument('name', metavar='<Group name>', required=True)
@click.option('--enterprise-id', metavar='<enterprise ID>', required=True)
@click.option('--description', metavar='<description>')
@click.option('--private', count=True)
@click.pass_context
def group_create(ctx, name, enterprise_id, description, private):
"""Add a group to the VSD"""
    # Define mandatory values
params = {'name': name}
# Define optional values
if description:
params['description'] = description
if private >= 1:
params['private'] = True
result = ctx.obj['nc'].post("enterprises/%s/groups" % enterprise_id,
params)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='group-update')
@click.argument('group-id', metavar='<group ID>', required=True)
@click.option('--key-value', metavar='<key:value>', multiple=True)
@click.pass_context
def group_update(ctx, group_id, key_value):
"""Update key/value for a given group"""
params = {}
for kv in key_value:
key, value = kv.split(':', 1)
params[key] = value
ctx.obj['nc'].put("groups/%s" % group_id, params)
result = ctx.obj['nc'].get("groups/%s" % group_id)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='group-delete')
@click.argument('group-id', metavar='<group ID>', required=True)
@click.pass_context
def group_delete(ctx, group_id):
"""Delete a given group"""
ctx.obj['nc'].delete("groups/%s" % group_id)
@vsdcli.command(name='group-add-user')
@click.argument('group-id', metavar='<group ID>', required=True)
@click.option('--user-id', metavar='<user ID>', required=True)
@click.pass_context
def group_add_user(ctx, group_id, user_id):
"""Add a user to a given group"""
    # Get all users of this group
    user_list = ctx.obj['nc'].get("groups/%s/users" % group_id)
    user_ids = [u['ID'] for u in user_list]
user_ids.append(user_id)
ctx.obj['nc'].put("groups/%s/users" % group_id, user_ids)
@vsdcli.command(name='group-del-user')
@click.argument('group-id', metavar='<group ID>', required=True)
@click.option('--user-id', metavar='<user ID>', required=True)
@click.pass_context
def group_del_user(ctx, group_id, user_id):
"""delete a user from a given group"""
    # Get all users of this group
user_list = ctx.obj['nc'].get("groups/%s/users" % group_id)
user_ids = [elt.get('ID') for elt in user_list if elt.get('ID') != user_id]
if len(user_ids) == len(user_list):
print("User not present in the group")
else:
ctx.obj['nc'].put("groups/%s/users" % group_id, user_ids)
@vsdcli.command(name='permission-list')
@click.option('--zone-id', metavar='<id>')
@click.option('--domaintemplate-id', metavar='<id>')
@click.option('--redundancygroup-id', metavar='<id>')
@click.option('--gateway-id', metavar='<id>')
@click.option('--vlan-id', metavar='<id>')
@click.option('--domain-id', metavar='<id>')
@click.option('--service-id', metavar='<id>')
@click.option('--port-id', metavar='<id>')
@click.option('--l2domain-id', metavar='<id>')
@click.option('--l2domaintemplate-id', metavar='<id>')
@click.option('--filter', metavar='<filter>',
help='Filter for name, lastUpdatedDate, creationDate, '
'externalID')
@click.pass_context
def permission_list(ctx, filter, **ids):
"""List all permissions"""
id_type, id = check_id(**ids)
request = "%ss/%s/permissions" % (id_type, id)
if not filter:
result = ctx.obj['nc'].get(request)
else:
result = ctx.obj['nc'].get(request, filter=filter)
table = PrettyTable(["ID",
"Action",
"Entity ID",
"Entity type",
"Entity name"])
for line in result:
table.add_row([line['ID'],
line['permittedAction'],
line['permittedEntityID'],
line['permittedEntityType'],
line['permittedEntityName']])
print(table)
@vsdcli.command(name='permission-show')
@click.argument('permission-id', metavar='<permission-id>', required=True)
@click.pass_context
def permission_show(ctx, permission_id):
"""Show information for a given permission id"""
result = ctx.obj['nc'].get("permissions/%s" % permission_id)[0]
print_object(result, only=ctx.obj['show_only'])
@vsdcli.command(name='add-permission')
@click.argument('entity-id', metavar='<group or user ID>', required=True)
@click.option('--action', default='USE', help='Default : USE',
type=click.Choice(['USE',
'EXTEND',
'READ',
'INSTANTIATE']))
@click.option('--zone-id', metavar='<id>')
@click.option('--domaintemplate-id', metavar='<id>')
@click.option('--redundancygroup-id', metavar='<id>')
@click.option('--gateway-id', metavar='<id>')
@click.option('--vlan-id', metavar='<id>')
@click.option('--domain-id', metavar='<id>')
@click.option('--service-id', metavar='<id>')
@click.option('--port-id', metavar='<id>')
@click.option('--l2domain-id', metavar='<id>')
@click.option('--l2domaintemplate-id', metavar='<id>')
@click.pass_context
def add_permission(ctx, entity_id, action, **ids):
"""Add permission for a given element (Domain, Zone, L2Domain, etc...)"""
id_type, id = check_id(**ids)
params = {}
params['permittedEntityID'] = entity_id
params['permittedAction'] = action
ctx.obj['nc'].post("%ss/%s/permissions" % (id_type, id), params)
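# A minimal sketch of driving these commands with Click's test runner;
# the enterprise ID is illustrative, a real call needs valid VSD credentials
# on the top-level vsdcli group, and the vsdcli group itself is assumed to be
# exported by vsd_common (as the star import above suggests).
def _example_invoke_user_list():
    from click.testing import CliRunner
    runner = CliRunner()
    return runner.invoke(vsdcli, ['user-list', '--enterprise-id', 'abc'])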
|
|
# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Magnum object test utilities."""
from magnum import objects
from magnum.tests.unit.db import utils as db_utils
def get_test_baymodel(context, **kw):
"""Return a BayModel object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_baymodel = db_utils.get_test_baymodel(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_baymodel['id']
baymodel = objects.BayModel(context)
for key in db_baymodel:
setattr(baymodel, key, db_baymodel[key])
return baymodel
def create_test_baymodel(context, **kw):
"""Create and return a test baymodel object.
Create a baymodel in the DB and return a BayModel object with appropriate
attributes.
"""
baymodel = get_test_baymodel(context, **kw)
baymodel.create()
return baymodel
def get_test_bay(context, **kw):
"""Return a Bay object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_bay = db_utils.get_test_bay(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_bay['id']
bay = objects.Bay(context)
for key in db_bay:
setattr(bay, key, db_bay[key])
return bay
def create_test_bay(context, **kw):
"""Create and return a test bay object.
Create a bay in the DB and return a Bay object with appropriate
attributes.
"""
bay = get_test_bay(context, **kw)
bay.create()
return bay
def get_test_pod(context, **kw):
"""Return a Pod object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_pod = db_utils.get_test_pod(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_pod['id']
pod = objects.Pod(context)
for key in db_pod:
setattr(pod, key, db_pod[key])
return pod
def create_test_pod(context, **kw):
"""Create and return a test pod object.
Create a pod in the DB and return a Pod object with appropriate
attributes.
"""
pod = get_test_pod(context, **kw)
pod.create()
return pod
def get_test_service(context, **kw):
"""Return a Service object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_service = db_utils.get_test_service(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_service['id']
service = objects.Service(context)
for key in db_service:
setattr(service, key, db_service[key])
return service
def create_test_service(context, **kw):
"""Create and return a test service object.
Create a service in the DB and return a Service object with appropriate
attributes.
"""
service = get_test_service(context, **kw)
service.create()
return service
def get_test_rc(context, **kw):
"""Return a ReplicationController object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_rc = db_utils.get_test_rc(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_rc['id']
rc = objects.ReplicationController(context)
for key in db_rc:
setattr(rc, key, db_rc[key])
return rc
def create_test_rc(context, **kw):
"""Create and return a test ReplicationController object.
Create a replication controller in the DB and return a
ReplicationController object with appropriate attributes.
"""
rc = get_test_rc(context, **kw)
rc.create()
return rc
def get_test_node(context, **kw):
"""Return a Node object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_node = db_utils.get_test_node(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_node['id']
node = objects.Node(context)
for key in db_node:
setattr(node, key, db_node[key])
return node
def create_test_node(context, **kw):
"""Create and return a test Node object.
Create a node in the DB and return a Node object with appropriate
attributes.
"""
node = get_test_node(context, **kw)
node.create()
return node
def get_test_x509keypair(context, **kw):
"""Return a X509KeyPair object with appropriate attributes.
NOTE: The object leaves the attributes marked as changed, such
that a create() could be used to commit it to the DB.
"""
db_x509keypair = db_utils.get_test_x509keypair(**kw)
# Let DB generate ID if it isn't specified explicitly
if 'id' not in kw:
del db_x509keypair['id']
x509keypair = objects.X509KeyPair(context)
for key in db_x509keypair:
setattr(x509keypair, key, db_x509keypair[key])
return x509keypair
def create_test_x509keypair(context, **kw):
"""Create and return a test x509keypair object.
Create a x509keypair in the DB and return a X509KeyPair object with
appropriate attributes.
"""
x509keypair = get_test_x509keypair(context, **kw)
x509keypair.create()
return x509keypair
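# A minimal usage sketch: build a Bay with an overridden attribute and
# persist it. The 'name' override is illustrative; any column of the test
# bay dict can be supplied as a keyword.
def _example_create_named_bay(context):
    return create_test_bay(context, name='example-bay')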
|
|
#!/usr/bin/python
#
# Copyright (c) 2016 Juniper Networks, Inc. All rights reserved.
#
alarm_list = [
{
"alarm_rules": {
"or_list" : [
{
"and_list": [
{
"operand1": "ContrailConfig.elements.virtual_router_ip_address",
"operation": "!=",
"operand2": {
"uve_attribute": "VrouterAgent.control_ip"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-address-mismatch-compute"
],
"id_perms": {
"description": "Compute Node IP Address mismatch."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "ContrailConfig.elements.bgp_router_parameters.address",
"operation": "not in",
"operand2": {
"uve_attribute":
"BgpRouterState.bgp_router_ip_list"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-address-mismatch-control"
],
"id_perms": {
"description": "Control Node IP Address mismatch."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"control-node"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "BgpRouterState.num_up_bgp_peer",
"operation": "==",
"operand2": {
"json_value": 'null'
}
}
]
},
{
"and_list": [
{
"operand1": "BgpRouterState.num_up_bgp_peer",
"operation": "!=",
"operand2": {
"uve_attribute": "BgpRouterState.num_bgp_peer"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-bgp-connectivity"
],
"id_perms": {
"description": "BGP peer mismatch. Not enough BGP peers are up."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"control-node"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "ContrailConfig",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-conf-incorrect"
],
"id_perms": {
"description": "ContrailConfig missing or incorrect. Configuration pushed to Ifmap as ContrailConfig is missing/incorrect."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.disk_usage_info.*.percentage_partition_space_used",
"operation": "range",
"operand2": {
"json_value": "[70, 90]"
},
"variables":
["NodeStatus.disk_usage_info.__key"]
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-disk-usage-high"
],
"id_perms": {
"description": "Disk usage crosses high threshold limit."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.disk_usage_info.*.percentage_partition_space_used",
"operation": ">",
"operand2": {
"json_value": "90"
},
"variables":
["NodeStatus.disk_usage_info.__key"]
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-disk-usage-critical"
],
"id_perms": {
"description": "Disk usage crosses critical threshold limit."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-node-status"
],
"id_perms": {
"description": "Node Failure. NodeStatus UVE not present."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.build_info",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-partial-sysinfo"
],
"id_perms": {
"description": "System Info Incomplete."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.process_status",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
},
{
"and_list": [
{
"operand1": "NodeStatus.process_status.state",
"operation": "!=",
"operand2": {
"json_value": "\"Functional\""
},
"variables": ["NodeStatus.process_status.module_id",
"NodeStatus.process_status.instance_id"]
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-process-connectivity"
],
"id_perms": {
"description": "Process(es) reporting as non-functional."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.process_info",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
},
{
"and_list": [
{
"operand1": "NodeStatus.process_info.process_state",
"operation": "!=",
"operand2": {
"json_value": "\"PROCESS_STATE_RUNNING\""
},
"variables": ["NodeStatus.process_info.process_name"]
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-process-status"
],
"id_perms": {
"description": "Process Failure."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "ContrailConfig.elements.virtual_router_refs",
"operation": "!=",
"operand2": {
"json_value": "null"
}
},
{
"operand1": "ProuterData.connected_agent_list",
"operation": "size!=",
"operand2": {
"json_value": "1"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-prouter-connectivity"
],
"id_perms": {
"description": "Prouter connectivity to controlling tor agent does not exist we look for non-empty value for connected_agent_list"
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"prouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "ContrailConfig.elements.virtual_router_refs",
"operation": "!=",
"operand2": {
"json_value": "null"
}
},
{
"operand1": "ProuterData.tsn_agent_list",
"operation": "size!=",
"operand2": {
"json_value": "1"
}
},
{
"operand1": "ProuterData.gateway_mode",
"operation": "!=",
"operand2": {
"json_value": "\"SERVER\""
}
}
]
},
{
"and_list": [
{
"operand1": "ContrailConfig.elements.virtual_router_refs",
"operation": "!=",
"operand2": {
"json_value": "null"
}
},
{
"operand1": "ProuterData.tsn_agent_list",
"operation": "size!=",
"operand2": {
"json_value": "0"
}
},
{
"operand1": "ProuterData.gateway_mode",
"operation": "==",
"operand2": {
"json_value": "\"SERVER\""
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-prouter-tsn-connectivity"
],
"id_perms": {
"description": "Prouter connectivity to controlling tsn agent does not exist we look for non-empty value for tsn_agent_list"
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"prouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "StorageCluster.info_stats.status",
"operation": "!=",
"operand2": {
"json_value": "0"
},
"variables":
["StorageCluster.info_stats.health_summary"]
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-storage-cluster-state"
],
"id_perms": {
"description": "Storage Cluster warning/errors."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"storage-cluster"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "VrouterAgent.down_interface_count",
"operation": ">=",
"operand2": {
"json_value": "1"
},
"variables": ["VrouterAgent.error_intf_list",
"VrouterAgent.no_config_intf_list"]
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-vrouter-interface"
],
"id_perms": {
"description": "Vrouter interface(s) down."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "BgpRouterState.num_up_xmpp_peer",
"operation": "==",
"operand2": {
"json_value": "null"
}
}
]
},
{
"and_list": [
{
"operand1": "BgpRouterState.num_up_xmpp_peer",
"operation": "!=",
"operand2": {
"uve_attribute": "BgpRouterState.num_xmpp_peer"
}
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-xmpp-connectivity"
],
"id_perms": {
"description": "XMPP peer mismatch."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"control-node"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "XmppPeerInfoData.close_reason",
"operation": "!=",
"operand2": {
"json_value": "null"
}
},
{
"operand1": "XmppPeerInfoData.state_info.state",
"operation": "!=",
"operand2": {
"json_value": "\"Established\""
}
},
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-xmpp-close-reason"
],
"id_perms": {
"description": "XMPP connection closed towards peer, \
alarm has close reason"
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"xmpp-peer"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.all_core_file_list",
"operand2": {
"json_value": "null"
},
"operation": "!="
},
{
"operand1": "NodeStatus.all_core_file_list",
"operand2": {
"json_value": "0"
},
"operation": "size!="
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-core-files"
],
"id_perms": {
"description": "A core file has been generated on the node."
},
"parent_type": "global-system-config",
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "CassandraStatusData.cassandra_compaction_task.pending_compaction_tasks",
"operand2": {
"json_value": "300"
},
"operation": ">="
}
]
}
]
},
"alarm_severity": 1,
"fq_name": [
"default-global-system-config",
"system-defined-pending-cassandra-compaction-tasks"
],
"parent_type": "global-system-config",
"id_perms": {
"description": "Pending compaction tasks in cassandra crossed the configured threshold."
},
"uve_keys": {
"uve_key": [
"database-node"
]
}
},
{
"alarm_rules": {
"or_list": [
{
"and_list": [
{
"operand1": "NodeStatus.running_package_version",
"operation": "!=",
"operand2": {
"uve_attribute": "NodeStatus.installed_package_version"
}
}
]
}
]
},
"alarm_severity": 0,
"fq_name": [
"default-global-system-config",
"system-defined-package-version-mismatch"
],
"parent_type": "global-system-config",
"id_perms": {
"description": "There is a mismatch between installed and running package version."
},
"uve_keys": {
"uve_key": [
"analytics-node",
"config-node",
"control-node",
"database-node",
"vrouter"
]
}
}
]
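# A small sketch of consuming the structure above: collect the leaf name
# of every system-defined alarm.
def _example_alarm_names():
    return [alarm['fq_name'][-1] for alarm in alarm_list]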
|
|
"""
Dynamically generated set of TestCases based on a set of yaml files describing
some integration tests. These files are shared among all official Elasticsearch
clients.
"""
import re
from os import walk, environ
from os.path import exists, join, dirname, pardir
import yaml
from elasticsearch import TransportError
from elasticsearch.compat import string_types
from elasticsearch.helpers.test import _get_version
from ..test_cases import SkipTest
from . import ElasticsearchTestCase
# some params had to be renamed in python to avoid clashing with builtins or
# keywords; keep track of them so we can rename those in the tests accordingly
PARAMS_RENAMES = {
'type': 'doc_type',
'from': 'from_',
}
# mapping from catch values to http status codes
CATCH_CODES = {
'missing': 404,
'conflict': 409,
}
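# e.g. a test step with 'catch: missing' expects the API call to raise a
# TransportError with status code 404 (see run_catch below)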
# test features we have implemented
IMPLEMENTED_FEATURES = ('gtelte', 'stash_in_path')
# broken YAML tests on some releases
SKIP_TESTS = {
(1, 1, 2): set(('TestCatRecovery10Basic', ))
}
class InvalidActionType(Exception):
pass
class YamlTestCase(ElasticsearchTestCase):
def setUp(self):
super(YamlTestCase, self).setUp()
# initialize state before running the setup code, which may stash values
self.last_response = None
self._state = {}
if hasattr(self, '_setup_code'):
self.run_code(self._setup_code)
def _resolve(self, value):
# resolve variables
if isinstance(value, string_types) and value.startswith('$'):
value = value[1:]
self.assertIn(value, self._state)
value = self._state[value]
if isinstance(value, string_types):
value = value.strip()
elif isinstance(value, dict):
value = dict((k, self._resolve(v)) for (k, v) in value.items())
elif isinstance(value, list):
value = list(map(self._resolve, value))
return value
def _lookup(self, path):
# fetch the possibly nested value from last_response
value = self.last_response
if path == '$body':
return value
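# Escaped dots ('\.') are swapped for the '\x01' sentinel so they survive
# the split('.') below, and are restored inside the loop.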
path = path.replace(r'\.', '\1')
for step in path.split('.'):
if not step:
continue
step = step.replace('\1', '.')
step = self._resolve(step)
if step.isdigit() and step not in value:
step = int(step)
self.assertIsInstance(value, list)
self.assertGreater(len(value), step)
else:
self.assertIn(step, value)
value = value[step]
return value
def run_code(self, test):
""" Execute an instruction based on it's type. """
for action in test:
self.assertEquals(1, len(action))
action_type, action = list(action.items())[0]
if hasattr(self, 'run_' + action_type):
getattr(self, 'run_' + action_type)(action)
else:
raise InvalidActionType(action_type)
def run_do(self, action):
""" Perform an api call with given parameters. """
catch = action.pop('catch', None)
self.assertEquals(1, len(action))
method, args = list(action.items())[0]
# locate api endpoint
api = self.client
for m in method.split('.'):
self.assertTrue(hasattr(api, m))
api = getattr(api, m)
# some parameters had to be renamed to not clash with python builtins,
# compensate
for k in PARAMS_RENAMES:
if k in args:
args[PARAMS_RENAMES[k]] = args.pop(k)
# resolve vars
for k in args:
args[k] = self._resolve(args[k])
try:
self.last_response = api(**args)
except Exception as e:
if not catch:
raise
self.run_catch(catch, e)
else:
if catch:
raise AssertionError('Failed to catch %r in %r.' % (catch, self.last_response))
def _get_nodes(self):
if not hasattr(self, '_node_info'):
self._node_info = list(self.client.nodes.info(node_id='_all', metric='clear')['nodes'].values())
return self._node_info
def _get_data_nodes(self):
return len([info for info in self._get_nodes() if info.get('attributes', {}).get('data', 'true') == 'true'])
def _get_benchmark_nodes(self):
return len([info for info in self._get_nodes() if info.get('attributes', {}).get('bench', 'false') == 'true'])
def run_skip(self, skip):
if 'features' in skip:
if skip['features'] in IMPLEMENTED_FEATURES:
return
elif skip['features'] == 'requires_replica':
if self._get_data_nodes() > 1:
return
elif skip['features'] == 'benchmark':
if self._get_benchmark_nodes():
return
raise SkipTest(skip.get('reason', 'Feature %s is not supported' % skip['features']))
if 'version' in skip:
version, reason = skip['version'], skip['reason']
if version == 'all':
raise SkipTest(reason)
min_version, max_version = version.split('-')
min_version = _get_version(min_version) or (0, )
max_version = _get_version(max_version) or (999, )
if min_version <= self.es_version <= max_version:
raise SkipTest(reason)
def run_catch(self, catch, exception):
if catch == 'param':
self.assertIsInstance(exception, TypeError)
return
self.assertIsInstance(exception, TransportError)
if catch in CATCH_CODES:
self.assertEquals(CATCH_CODES[catch], exception.status_code)
elif catch[0] == '/' and catch[-1] == '/':
self.assertTrue(re.search(catch[1:-1], exception.error + ' ' + repr(exception.info)), '%s not in %r' % (catch, exception.info))
self.last_response = exception.info
def run_gt(self, action):
for key, value in action.items():
self.assertGreater(self._lookup(key), value)
def run_gte(self, action):
for key, value in action.items():
self.assertGreaterEqual(self._lookup(key), value)
def run_lt(self, action):
for key, value in action.items():
self.assertLess(self._lookup(key), value)
def run_lte(self, action):
for key, value in action.items():
self.assertLessEqual(self._lookup(key), value)
def run_set(self, action):
for key, value in action.items():
self._state[value] = self._lookup(key)
def run_is_false(self, action):
try:
value = self._lookup(action)
except AssertionError:
pass
else:
self.assertIn(value, ('', None, False, 0))
def run_is_true(self, action):
value = self._lookup(action)
self.assertNotIn(value, ('', None, False, 0))
def run_length(self, action):
for path, expected in action.items():
value = self._lookup(path)
expected = self._resolve(expected)
self.assertEquals(expected, len(value))
def run_match(self, action):
for path, expected in action.items():
value = self._lookup(path)
expected = self._resolve(expected)
if isinstance(expected, string_types) and \
expected.startswith('/') and expected.endswith('/'):
expected = re.compile(expected[1:-1], re.VERBOSE)
self.assertTrue(expected.search(value))
else:
self.assertEquals(expected, value)
def construct_case(filename, name):
"""
Parse a definition of a test case from a yaml file and construct the
TestCase subclass dynamically.
"""
def make_test(test_name, definition, i):
def m(self):
if name in SKIP_TESTS.get(self.es_version, ()):
raise SkipTest()
self.run_code(definition)
m.__doc__ = '%s:%s.test_from_yaml_%d (%s): %s' % (
__name__, name, i, '/'.join(filename.split('/')[-2:]), test_name)
m.__name__ = 'test_from_yaml_%d' % i
return m
with open(filename) as f:
tests = list(yaml.load_all(f))
attrs = {
'_yaml_file': filename
}
i = 0
for test in tests:
for test_name, definition in test.items():
if test_name == 'setup':
attrs['_setup_code'] = definition
continue
attrs['test_from_yaml_%d' % i] = make_test(test_name, definition, i)
i += 1
return type(name, (YamlTestCase, ), attrs)
YAML_DIR = environ.get(
'TEST_ES_YAML_DIR',
join(
dirname(__file__), pardir, pardir, pardir,
'elasticsearch', 'rest-api-spec', 'src', 'main', 'resources', 'rest-api-spec', 'test'
)
)
if exists(YAML_DIR):
# find all the test definitions in yaml files ...
for (path, dirs, files) in walk(YAML_DIR):
for filename in files:
if not filename.endswith(('.yaml', '.yml')):
continue
# ... parse them
name = ('Test' + ''.join(s.title() for s in path[len(YAML_DIR) + 1:].split('/')) + filename.rsplit('.', 1)[0].title()).replace('_', '').replace('.', '')
# and insert them into locals for test runner to find them
locals()[name] = construct_case(join(path, filename), name)
|
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Partially based on AboutMessagePassing in the Ruby Koans
#
from runner.koan import *
class AboutAttributeAccess(Koan):
class TypicalObject:
pass
def test_calling_undefined_functions_normally_results_in_errors(self):
typical = self.TypicalObject()
with self.assertRaises(___): typical.foobar()
def test_calling_getattribute_causes_an_attribute_error(self):
typical = self.TypicalObject()
with self.assertRaises(___): typical.__getattribute__('foobar')
# THINK ABOUT IT:
#
# If the method __getattribute__() causes the AttributeError, then
# what would happen if we redefine __getattribute__()?
# ------------------------------------------------------------------
class CatchAllAttributeReads:
def __getattribute__(self, attr_name):
return "Someone called '" + attr_name + "' and it could not be found"
def test_all_attribute_reads_are_caught(self):
catcher = self.CatchAllAttributeReads()
self.assertRegexpMatches(catcher.foobar, __)
def test_intercepting_return_values_can_disrupt_the_call_chain(self):
catcher = self.CatchAllAttributeReads()
self.assertRegexpMatches(catcher.foobaz, __) # This is fine
try:
catcher.foobaz(1)
except TypeError as ex:
err_msg = ex.args[0]
self.assertRegexpMatches(err_msg, __)
# foobaz returns a string. What happens to the '(1)' part?
# Try entering this into a python console to reproduce the issue:
#
# "foobaz"(1)
#
def test_changes_to_the_getattribute_implementation_affects_getattr_function(self):
catcher = self.CatchAllAttributeReads()
self.assertRegexpMatches(getattr(catcher, 'any_attribute'), __)
# ------------------------------------------------------------------
class WellBehavedFooCatcher:
def __getattribute__(self, attr_name):
if attr_name[:3] == "foo":
return "Foo to you too"
else:
return super().__getattribute__(attr_name)
def test_foo_attributes_are_caught(self):
catcher = self.WellBehavedFooCatcher()
self.assertEqual(__, catcher.foo_bar)
self.assertEqual(__, catcher.foo_baz)
def test_non_foo_messages_are_treated_normally(self):
catcher = self.WellBehavedFooCatcher()
with self.assertRaises(___): catcher.normal_undefined_attribute
# ------------------------------------------------------------------
global stack_depth
stack_depth = 0
class RecursiveCatcher:
def __init__(self):
global stack_depth
stack_depth = 0
self.no_of_getattribute_calls = 0
def __getattribute__(self, attr_name):
global stack_depth # We need something that is outside the scope of this class
stack_depth += 1
if stack_depth<=10: # to prevent a stack overflow
self.no_of_getattribute_calls += 1
# Oops! We just accessed an attribute (no_of_getattribute_calls)
# Guess what happens when self.no_of_getattribute_calls is
# accessed?
# Using 'object' directly because using super() here will also
# trigger a __getattribute__() call.
return object.__getattribute__(self, attr_name)
def my_method(self):
pass
def test_getattribute_is_a_bit_overzealous_sometimes(self):
catcher = self.RecursiveCatcher()
catcher.my_method()
global stack_depth
self.assertEqual(__, stack_depth)
# ------------------------------------------------------------------
class MinimalCatcher:
class DuffObject: pass
def __init__(self):
self.no_of_getattr_calls = 0
def __getattr__(self, attr_name):
self.no_of_getattr_calls += 1
return self.DuffObject
def my_method(self):
pass
def test_getattr_ignores_known_attributes(self):
catcher = self.MinimalCatcher()
catcher.my_method()
self.assertEqual(__, catcher.no_of_getattr_calls)
def test_getattr_only_catches_unknown_attributes(self):
catcher = self.MinimalCatcher()
catcher.purple_flamingos()
catcher.free_pie()
self.assertEqual(__,
type(catcher.give_me_duff_or_give_me_death()).__name__)
self.assertEqual(__, catcher.no_of_getattr_calls)
# ------------------------------------------------------------------
class PossessiveSetter(object):
def __setattr__(self, attr_name, value):
new_attr_name = attr_name
if attr_name[-5:] == 'comic':
new_attr_name = "my_" + new_attr_name
elif attr_name[-3:] == 'pie':
new_attr_name = "a_" + new_attr_name
object.__setattr__(self, new_attr_name, value)
def test_setattr_intercepts_attribute_assignments(self):
fanboy = self.PossessiveSetter()
fanboy.comic = 'The Laminator, issue #1'
fanboy.pie = 'blueberry'
self.assertEqual(__, fanboy.a_pie)
prefix = '__'
self.assertEqual("The Laminator, issue #1", getattr(fanboy, prefix + '_comic'))
# ------------------------------------------------------------------
class ScarySetter:
def __init__(self):
self.num_of_coconuts = 9
self._num_of_private_coconuts = 2
def __setattr__(self, attr_name, value):
new_attr_name = attr_name
if attr_name[0] != '_':
new_attr_name = "altered_" + new_attr_name
object.__setattr__(self, new_attr_name, value)
def test_it_modifies_external_attribute_as_expected(self):
setter = self.ScarySetter()
setter.e = "mc hammer"
self.assertEqual(__, setter.altered_e)
def test_it_mangles_some_internal_attributes(self):
setter = self.ScarySetter()
try:
coconuts = setter.num_of_coconuts
except AttributeError:
self.assertEqual(__, setter.altered_num_of_coconuts)
def test_in_this_case_private_attributes_remain_unmangled(self):
setter = self.ScarySetter()
self.assertEqual(__, setter._num_of_private_coconuts)
|
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Thomas Quintana <quintana.thomas@gmail.com>
from os import SEEK_END
from twisted.internet.protocol import Protocol, ClientFactory
import io
import logging
import urllib.request, urllib.parse, urllib.error
class IEventSocketClientObserver(object):
def on_event(self, event):
pass
def on_start(self, client):
pass
def on_stop(self):
pass
class Event(object):
def __init__(self, headers, body=None):
self.__headers__ = headers
self.__body__ = body
def get_body(self):
return self.__body__
def get_header(self, name):
return self.__headers__.get(name)
def get_headers(self):
return self.__headers__
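# A rough sketch of the event socket wire format that __parse__() consumes:
# a block of 'Name: value' headers terminated by a blank line, optionally
# followed by a body of Content-Length bytes. For 'text/event-plain'
# payloads the body is itself another header block (possibly with its own
# Content-Length and body), which is why __parse__() calls
# __parse_headers__() a second time.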
class EventSocketClient(Protocol):
def __init__(self, observer):
self.__logger__ = logging.getLogger('freepy.lib.esl.eventsocketclient')
# Data buffer.
self.__buffer__ = None
# Client state.
self.__host__ = None
self.__peer__ = None
# Observer for incoming events.
if isinstance(observer, IEventSocketClientObserver):
self.__observer__ = observer
else:
raise TypeError('The observer must extend the '
'IEventSocketClientObserver interface.')
def __parse__(self):
# Make sure we have enough data to process the event.
buffer_contents = self.__buffer__.getvalue().decode('utf-8')
if len(buffer_contents) == 0 or '\n\n' not in buffer_contents:
return None
# Parse the event for processing.
self.__buffer__.seek(0)
body = None
headers = self.__parse_headers__()
length = headers.get('Content-Length')
if length:
length = int(length)
# Remove the Content-Length header.
del headers['Content-Length']
# Make sure we have enough data to process the body.
offset = self.__buffer__.tell()
self.__buffer__.seek(0, SEEK_END)
end = self.__buffer__.tell()
remaining = end - offset
# Handle the event body.
if length > remaining:
return None
self.__buffer__.seek(offset)
type = headers.get('Content-Type')
if type and type == 'text/event-plain':
headers.update(self.__parse_headers__())
length = headers.get('Content-Length')
if length:
length = int(length)
del headers['Content-Length']
body = self.__buffer__.read(length)
else:
body = self.__buffer__.read(length)
# Reclaim resources.
offset = self.__buffer__.tell()
self.__buffer__.seek(0, SEEK_END)
end = self.__buffer__.tell()
remaining = end - offset
if remaining == 0:
self.__buffer__.seek(0)
self.__buffer__.truncate(0)
else:
self.__buffer__.seek(offset)
data = self.__buffer__.read(remaining)
self.__buffer__.seek(0)
self.__buffer__.write(data)
self.__buffer__.truncate(remaining)
return Event(headers, body.decode('utf-8') if body else None)
def __parse_headers__(self):
headers = dict()
while True:
line = self.__parse_line__()
if line == '':
break
else:
tokens = line.split(':', 1)
name = tokens[0].strip()
if len(tokens) == 2:
value = tokens[1].strip()
if value:
value = urllib.parse.unquote(value)
headers.update({name: value})
else:
headers.update({name: None})
return headers
def __parse_line__(self, stride=64):
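# Read the buffer in fixed-size chunks until a newline is found, then
# rewind the buffer position to just past the newline so the remaining
# bytes can be consumed by the next call.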
line = list()
while True:
chunk = self.__buffer__.read(stride).decode('ascii')
end = chunk.find('\n')
if end == -1:
line.append(chunk)
else:
line.append(chunk[:end])
offset = self.__buffer__.tell()
left_over = len(chunk[end + 1:])
self.__buffer__.seek(offset - left_over)
break
if len(chunk) < stride:
break
return ''.join(line)
def connectionLost(self, reason):
self.__logger__.critical(
('A connection to the FreeSWITCH instance located @ %s:%i '
'has been lost due to the following reason.\n%s'),
self.__peer__.host, self.__peer__.port, reason)
self.__logger__.critical(reason.getTraceback())
self.__observer__.on_stop()
if self.__buffer__:
self.__buffer__.close()
self.__buffer__ = None
self.__host__ = None
self.__peer__ = None
def connectionMade(self):
self.__buffer__ = io.BytesIO()
self.__host__ = self.transport.getHost()
self.__peer__ = self.transport.getPeer()
self.__observer__.on_start(self)
def dataReceived(self, data):
if self.__logger__.isEnabledFor(logging.DEBUG):
self.__logger__.debug(
'The following message was received from %s:%i.\n%s',
self.__peer__.host, self.__peer__.port, data)
self.__buffer__.write(data)
while True:
event = self.__parse__()
if event:
self.__observer__.on_event(event)
else:
break
def send(self, command):
serialized_command = str(command)
if self.__logger__.isEnabledFor(logging.DEBUG):
self.__logger__.debug(
'The following message will be sent to %s:%i.\n%s',
self.__peer__.host, self.__peer__.port, serialized_command)
self.transport.write(serialized_command.encode('utf-8'))
# class EventSocketClientFactory(ReconnectingClientFactory):
class EventSocketClientFactory(ClientFactory):
def __init__(self, observer):
self.__logger__ = logging.getLogger(
'freepy.lib.esl.eventsocketclientfactory')
self.__observer__ = observer
def buildProtocol(self, addr):
if self.__logger__.isEnabledFor(logging.INFO):
self.__logger__.info(
'Connected to the FreeSWITCH instance located @ %s:%i.',
addr.host, addr.port)
# self.resetDelay()
return EventSocketClient(self.__observer__)
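# Minimal usage sketch (assumptions: a FreeSWITCH instance reachable on the
# default event socket port 8021 with the default 'ClueCon' password; the
# observer class below is hypothetical):
#
#   from twisted.internet import reactor
#
#   class PrintingObserver(IEventSocketClientObserver):
#       def on_start(self, client):
#           client.send('auth ClueCon\n\n')
#           client.send('event plain ALL\n\n')
#       def on_event(self, event):
#           print(event.get_header('Event-Name'))
#       def on_stop(self):
#           pass
#
#   reactor.connectTCP('127.0.0.1', 8021,
#                      EventSocketClientFactory(PrintingObserver()))
#   reactor.run()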
|
|
#!/usr/bin/python2.7
# encoding: utf-8
# Copyright 2010 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test cases for end-to-end testing. Run with the server_tests script."""
from model import *
from resources import Resource, ResourceBundle
from server_tests_base import ServerTestsBase
class ResourceTests(ServerTestsBase):
"""Tests that verify the Resource mechanism."""
def test_resource_override(self):
"""Verifies that Resources in the datastore override files on disk."""
# Should render normally.
doc = self.go('/haiti/create')
assert 'xyz' not in doc.content
# This Resource should override the create.html.template file.
bundle = ResourceBundle(key_name='1')
key1 = Resource(parent=bundle,
key_name='create.html.template',
content='xyz{{env.repo}}xyz').put()
doc = self.go('/haiti/create')
assert 'xyzhaitixyz' not in doc.content # old template is still cached
# The new template should take effect after 1 second.
self.advance_utcnow(seconds=1.1)
doc = self.go('/haiti/create')
assert 'xyzhaitixyz' in doc.content
# A plain .html Resource should override the .html.template Resource.
key2 = Resource(parent=bundle,
key_name='create.html', content='xyzxyzxyz').put()
self.advance_utcnow(seconds=1.1)
doc = self.go('/haiti/create')
assert 'xyzxyzxyz' in doc.content
# After removing both Resources, should fall back to the original file.
db.delete([key1, key2])
self.advance_utcnow(seconds=1.1)
doc = self.go('/haiti/create')
assert 'xyz' not in doc.content
def test_resource_caching(self):
"""Verifies that Resources are cached properly."""
# There's no file here.
self.go('/global/foo.txt')
assert self.s.status == 404
self.go('/global/foo.txt?lang=fr')
assert self.s.status == 404
# Add a Resource to be served as the static file.
bundle = ResourceBundle(key_name='1')
Resource(
parent=bundle, key_name='static/foo.txt', content='hello').put()
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'hello'
# Add a localized Resource.
fr_key = Resource(parent=bundle, key_name='static/foo.txt:fr',
content='bonjour').put()
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'hello' # original Resource remains cached
# The cached version should expire after 1 second.
self.advance_utcnow(seconds=1.1)
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'bonjour'
# Change the non-localized Resource.
Resource(
parent=bundle, key_name='static/foo.txt', content='goodbye').put()
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'bonjour'
# no effect on the localized Resource
# Remove the localized Resource.
db.delete(fr_key)
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'bonjour'
# localized Resource remains cached
# The cached version should expire after 1 second.
self.advance_utcnow(seconds=1.1)
doc = self.go('/global/foo.txt?lang=fr')
assert doc.content_bytes == 'goodbye'
def test_admin_resources(self):
# Verify that the bundle listing loads.
doc = self.go_as_admin('/global/admin/resources')
# Add a new bundle (redirects to the new bundle's resource listing).
doc = self.s.submit(doc.cssselect('form')[-1], resource_bundle='xyz')
assert doc.cssselect_one('a.sel').text == 'Bundle: xyz'
bundle = ResourceBundle.get_by_key_name('xyz')
assert bundle
# Add a resource (redirects to the resource's edit page).
doc = self.s.submit(doc.cssselect('form')[0], resource_name='abc')
assert doc.cssselect_one('a.sel').text == 'Resource: abc'
# The new Resource shouldn't exist in the datastore until it is saved.
assert not Resource.get_by_key_name('abc', parent=bundle)
# Enter some content for the resource.
doc = self.s.submit(doc.cssselect('form')[0], content='pqr')
assert Resource.get_by_key_name('abc', parent=bundle).content == 'pqr'
# Use the breadcrumb navigation bar to go back to the resource listing.
doc = self.s.follow('Bundle: xyz')
# Add a localized variant of the resource.
row = doc.xpath_one('//tr[td[normalize-space(.)="abc"]]')
doc = self.s.submit(row.cssselect('form')[0], resource_lang='pl')
assert doc.cssselect_one('a.sel').text == 'pl: Polish'
# Enter some content for the localized resource.
doc = self.s.submit(doc.cssselect('form')[0], content='jk')
assert Resource.get_by_key_name('abc:pl', parent=bundle).content == 'jk'
# Confirm that both the generic and localized resource are listed.
doc = self.s.follow('Bundle: xyz')
resource_texts = [a.text for a in doc.cssselect('a.resource')]
assert 'abc' in resource_texts
assert 'pl' in resource_texts
# Copy all the resources to a new bundle.
doc = self.s.submit(doc.cssselect('form')[-1], resource_bundle='zzz',
resource_bundle_original='xyz')
parent = ResourceBundle.get_by_key_name('zzz')
assert Resource.get_by_key_name('abc', parent=parent).content == 'pqr'
assert Resource.get_by_key_name('abc:pl', parent=parent).content == 'jk'
# Verify that we can't add a resource to the default bundle.
bundle = ResourceBundle.get_by_key_name('1')
assert(bundle)
doc = self.go_as_admin('/global/admin/resources')
doc = self.s.follow('1 (default)')
self.s.submit(doc.cssselect('form')[0], resource_name='abc')
assert not Resource.get_by_key_name('abc', parent=bundle)
# Verify that we can't edit a resource in the default bundle.
self.s.back()
doc = self.s.follow('base.html.template')
self.s.submit(doc.cssselect('form')[0], content='xyz')
assert not Resource.get_by_key_name('base.html.template', parent=bundle)
# Verify that we can't copy resources into the default bundle.
doc = self.go_as_admin('/global/admin/resources')
doc = self.s.follow('xyz')
doc = self.s.submit(doc.cssselect('form')[-1], resource_bundle='1',
resource_bundle_original='xyz')
assert not Resource.get_by_key_name('abc', parent=bundle)
# Switch the default bundle version.
doc = self.go_as_admin('/global/admin/resources')
doc = self.s.submit(
doc.cssselect('form')[0], resource_bundle_default='xyz')
assert 'xyz (default)' in doc.text
# Undo.
doc = self.s.submit(
doc.cssselect('form')[0], resource_bundle_default='1')
assert '1 (default)' in doc.text
class CounterTests(ServerTestsBase):
"""Tests related to Counters."""
def test_tasks_count(self):
"""Tests the counting task."""
# Add two Persons and two Notes in the 'haiti' repository.
db.put([Person(
key_name='haiti:test.google.com/person.123',
repo='haiti',
author_name='_test1_author_name',
entry_date=ServerTestsBase.TEST_DATETIME,
full_name='_test1_full_name',
sex='male',
date_of_birth='1970-01-01',
age='50-60',
latest_status='believed_missing'
), Note(
key_name='haiti:test.google.com/note.123',
repo='haiti',
person_record_id='haiti:test.google.com/person.123',
entry_date=ServerTestsBase.TEST_DATETIME,
status='believed_missing'
), Person(
key_name='haiti:test.google.com/person.456',
repo='haiti',
author_name='_test2_author_name',
entry_date=ServerTestsBase.TEST_DATETIME,
full_name='_test2_full_name',
sex='female',
date_of_birth='1970-02-02',
age='30-40',
latest_found=True
), Note(
key_name='haiti:test.google.com/note.456',
repo='haiti',
person_record_id='haiti:test.google.com/person.456',
entry_date=ServerTestsBase.TEST_DATETIME,
author_made_contact=True
)])
# Run the counting task (should finish counting in a single run).
doc = self.go_as_admin('/haiti/tasks/count/person')
# Check the resulting counters.
assert Counter.get_count('haiti', 'person.all') == 2
assert Counter.get_count('haiti', 'person.sex=male') == 1
assert Counter.get_count('haiti', 'person.sex=female') == 1
assert Counter.get_count('haiti', 'person.sex=other') == 0
assert Counter.get_count('haiti', 'person.found=TRUE') == 1
assert Counter.get_count('haiti', 'person.found=') == 1
assert Counter.get_count('haiti', 'person.status=believed_missing') == 1
assert Counter.get_count('haiti', 'person.status=') == 1
assert Counter.get_count('pakistan', 'person.all') == 0
# Add a Person in the 'pakistan' repository.
db.put(Person(
key_name='pakistan:test.google.com/person.789',
repo='pakistan',
author_name='_test3_author_name',
entry_date=ServerTestsBase.TEST_DATETIME,
full_name='_test3_full_name',
sex='male',
date_of_birth='1970-03-03',
age='30-40',
))
# Re-run the counting tasks for both repositories.
doc = self.go('/haiti/tasks/count/person')
doc = self.go('/pakistan/tasks/count/person')
# Check the resulting counters.
assert Counter.get_count('haiti', 'person.all') == 2
assert Counter.get_count('pakistan', 'person.all') == 1
# Check that the counted value shows up correctly on the main page.
doc = self.go('/haiti?flush=*')
assert 'Currently tracking' not in doc.text
# Counts less than 100 should not be shown.
db.put(Counter(scan_name=u'person', repo=u'haiti', last_key=u'',
count_all=5L))
doc = self.go('/haiti?flush=*')
assert 'Currently tracking' not in doc.text
db.put(Counter(scan_name=u'person', repo=u'haiti', last_key=u'',
count_all=86L))
doc = self.go('/haiti?flush=*')
assert 'Currently tracking' not in doc.text
# Counts should be rounded to the nearest 100.
db.put(Counter(scan_name=u'person', repo=u'haiti', last_key=u'',
count_all=278L))
doc = self.go('/haiti?flush=*')
assert 'Currently tracking about 300 records' in doc.text
# If we don't flush, the previously rendered page should stay cached.
db.put(Counter(scan_name=u'person', repo=u'haiti', last_key=u'',
count_all=411L))
doc = self.go('/haiti')
assert 'Currently tracking about 300 records' in doc.text
# After 10 seconds, the cached page should expire.
# The counter is also separately cached in memcache, so we have to
# flush memcache to make the expiry of the cached page observable.
self.advance_utcnow(seconds=11)
doc = self.go('/haiti?flush=memcache')
assert 'Currently tracking about 400 records' in doc.text
def test_admin_dashboard(self):
"""Visits the dashboard page and makes sure it doesn't crash."""
db.put([Counter(
scan_name='Person', repo='haiti', last_key='', count_all=278
), Counter(
scan_name='Person', repo='pakistan', last_key='',
count_all=127
), Counter(
scan_name='Note', repo='haiti', last_key='', count_all=12
), Counter(
scan_name='Note', repo='pakistan', last_key='', count_all=8
)])
assert self.go_as_admin('/global/admin/dashboard')
assert self.s.status == 200
|
|
import codepy.cgen
from cgen import *
import xml.etree.ElementTree as ElementTree
# These are additional classes that, along with codepy's classes, let
# programmers express the C code as a real AST (not the hybrid AST/strings/etc.
# that codepy implements).
#TODO: add all of CodePy's classes we want to support
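# A minimal sketch of how these nodes compose (names are illustrative):
#
#   loop = For('i', CNumber(0), CNumber(9), CNumber(1),
#              Block([Assign(Subscript(CName('a'), CName('i')), CNumber(0))]))
#   print('\n'.join(loop.generate()))
#   # renders roughly: for (int i = 0; (i <= 9); i = (i + 1)) { a[i] = 0; }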
class CNumber(Generable):
def __init__(self, num):
self.num = num
self._fields = []
# def __str__(self):
# return str(self.num)
def to_xml(self):
return ElementTree.Element("CNumber", attrib={"num":str(self.num)})
def generate(self, with_semicolon=False):
if with_semicolon:
# This node type does not represent a complete C++ statement
raise ValueError
yield str(self.num)
class String(Generable):
def __init__(self, text):
self.text = text
def generate(self, with_semicolon=False):
yield '"%s"' % self.text + (";" if with_semicolon else "")
class CName(Generable):
def __init__(self, name):
self.name = name
self._fields = []
def to_xml(self):
return ElementTree.Element("CName", attrib={"name":str(self.name)})
def generate(self, with_semicolon=False):
if with_semicolon:
# This node type does not represent a complete C++ statement
raise ValueError
yield self.name
class Expression(Generable):
def __init__(self):
super(Expression, self).__init__()
self._fields = []
def generate(self, with_semicolon=False):
yield ""
class BinOp(Expression):
def __init__(self, left, op, right):
self.left = left
self.op = op
self.right = right
self._fields = ['left', 'right']
def generate(self, with_semicolon=False):
yield "(%s %s %s)" % (self.left, self.op, self.right) + (";" if with_semicolon else "")
def split(self, x):
return str(self).split(x)
def to_xml(self):
node = ElementTree.Element("BinOp", attrib={"op":str(self.op)})
left = ElementTree.SubElement(node, "left")
left.append(self.left.to_xml())
right = ElementTree.SubElement(node, "right")
right.append(self.right.to_xml())
return node
class UnaryOp(Expression):
def __init__(self, op, operand):
self.op = op
self.operand = operand
self._fields = ['operand']
def generate(self, with_semicolon=False):
yield "(%s(%s))" % (self.op, self.operand) + (";" if with_semicolon else "")
def to_xml(self):
node = ElementTree.Element("UnaryOp", attrib={"op":str(self.op)})
operand = ElementTree.SubElement(node, "operand")
operand.append(self.operand.to_xml())
return node
class Subscript(Expression):
def __init__(self, value, index):
self.value = value
self.index = index
self._fields = ['value', 'index']
def generate(self, with_semicolon=False):
yield "%s[%s]" % (self.value, self.index)
def to_xml(self):
node = ElementTree.Element("Subscript")
ElementTree.SubElement(node, "value").append(self.value.to_xml())
ElementTree.SubElement(node, "index").append(self.index.to_xml())
return node
class Call(Expression):
def __init__(self, func, args):
self.func = func
self.args = args
self._fields = ['func', 'args']
def generate(self, with_semicolon=False):
yield "%s(%s)" % (self.func, ", ".join(map(str, self.args))) + (";" if with_semicolon else "")
def to_xml(self):
node = ElementTree.Element("Call", attrib={"func":str(self.func)})
args = ElementTree.SubElement(node, "args")
for x in self.args:
args.append(x.to_xml())
return node
class PostfixUnaryOp(Expression):
def __init__(self, operand, op):
self.operand = operand
self.op = op
self._fields = ['op', 'operand']
def generate(self, with_semicolon=False):
yield "((%s)%s)" % (self.operand, self.op) + (";" if with_semicolon else "")
def to_xml(self):
node = ElementTree.Element("PostfixUnaryOp", attrib={"op":str(self.op)})
operand = ElementTree.SubElement(node, "operand")
operand.append(self.operand.to_xml())
return node
class ConditionalExpr(Expression):
def __init__(self, test, body, orelse):
self.test = test
self.body = body
self.orelse = orelse
self._fields = ['test', 'body', 'orelse']
def generate(self, with_semicolon=False):
yield "(%s ? %s : %s)" % (self.test, self.body, self.orelse) + (";" if with_semicolon else "")
def to_xml(self):
node = ElementTree.Element("ConditionalExpr")
ElementTree.SubElement(node, "test").append(self.test.to_xml())
ElementTree.SubElement(node, "body").append(self.body.to_xml())
ElementTree.SubElement(node, "orelse").append(self.orelse.to_xml())
return node
class TypeCast(Expression):
# "type" should be a declaration with an empty variable name
# e.g. TypeCast(Pointer(Value('int', '')), ...)
def __init__(self, tp, value):
self.tp = tp
self.value = value
self._fields = ['tp', 'value']
def generate(self, with_semicolon=False):
yield "((%s)%s)" % (self.tp.inline(), self.value)
#class ForInitializer(codepy.cgen.Initializer):
# def __str__(self):
# return super(ForInitializer, self).__str__()[0:-1]
class Initializer(codepy.cgen.Initializer):
def __init__(self, vdecl, data):
self._fields = ['vdecl', 'data']
super(Initializer, self).__init__(vdecl, data)
def generate(self, with_semicolon=False):
tp_lines, tp_decl = self.vdecl.get_decl_pair()
tp_lines = list(tp_lines)
for line in tp_lines[:-1]:
yield line
yield "%s %s = %s" % (tp_lines[-1], tp_decl, self.data) + (";" if with_semicolon else "")
class Pragma(codepy.cgen.Pragma):
def __init__(self, value):
self._fields = ['value']
super(Pragma, self).__init__(value)
def generate(self, with_semicolon=False):
return super(Pragma, self).generate()
class RawFor(codepy.cgen.For):
def __init__(self, start, condition, update, body):
super(RawFor, self).__init__(start, condition, update, body)
self._fields = ['start', 'condition', 'update', 'body']
def generate(self, with_semicolon=False):
return super(RawFor, self).generate()
def to_xml(self):
node = ElementTree.Element("For")
if (not isinstance(self.start, str)):
ElementTree.SubElement(node, "start").append(self.start.to_xml())
else:
ElementTree.SubElement(node,"start").text = self.start
if (not isinstance(self.condition, str)):
ElementTree.SubElement(node, "condition").append(self.condition.to_xml())
else:
ElementTree.SubElement(node, "condition").text = self.condition
if (not isinstance(self.update, str)):
ElementTree.SubElement(node, "update").append(self.update.to_xml())
else:
ElementTree.SubElement(node, "update").text = self.update
ElementTree.SubElement(node, "body").append(self.body.to_xml())
return node
class For(RawFor):
#TODO: setting initial,end,etc should update the field in the shadow
#TODO: should loopvar be a string or a CName?
def __init__(self, loopvar, initial, end, increment, body):
# use setattr on object so we don't use our special one during initialization
object.__setattr__(self, "loopvar", loopvar)
object.__setattr__(self, "initial", initial)
object.__setattr__(self, "end", end)
object.__setattr__(self, "increment", increment)
self._fields = ['start', 'condition', 'update', 'body']
super(For, self).__init__(
Initializer(Value("int", self.loopvar), self.initial),
BinOp(CName(self.loopvar), "<=", self.end),
Assign(CName(self.loopvar), BinOp(CName(self.loopvar), "+", self.increment)),
body)
def set_underlying_for(self):
self.start = Initializer(Value("int", self.loopvar), self.initial)
self.condition = BinOp(CName(self.loopvar), "<=", self.end)
self.update = Assign(CName(self.loopvar), BinOp(CName(self.loopvar), "+", self.increment))
def generate(self, with_semicolon=False):
return super(For, self).generate()
def intro_line(self):
return "for (%s; %s; %s)" % (self.start,
self.condition,
self.update)
def __setattr__(self, name, val):
# we want to convey changes to the for loop to the underlying
# representation.
object.__setattr__(self, name, val)
if name in ["loopvar", "initial", "end", "increment"]:
self.set_underlying_for()
class FunctionBody(codepy.cgen.FunctionBody):
def __init__(self, fdecl, body):
super(FunctionBody, self).__init__(fdecl, body)
self._fields = ['fdecl', 'body']
def generate(self, with_semicolon=False):
return super(FunctionBody, self).generate()
def to_xml(self):
node = ElementTree.Element("FunctionBody")
ElementTree.SubElement(node, "fdecl").append(self.fdecl.to_xml())
ElementTree.SubElement(node, "body").append(self.body.to_xml())
return node
class FunctionDeclaration(codepy.cgen.FunctionDeclaration):
def __init__(self, subdecl, arg_decls):
super(FunctionDeclaration, self).__init__(subdecl, arg_decls)
self._fields = ['subdecl', 'arg_decls']
def to_xml(self):
node = ElementTree.Element("FunctionDeclaration")
ElementTree.SubElement(node, "subdecl").append(self.subdecl.to_xml())
arg_decls = ElementTree.SubElement(node, "arg_decls")
for x in self.arg_decls:
arg_decls.append(x.to_xml())
return node
class Value(codepy.cgen.Value):
def __init__(self, typename, name):
super(Value, self).__init__(typename, name)
self._fields = []
def to_xml(self):
return ElementTree.Element("Value", attrib={"typename":self.typename, "name":self.name})
class Pointer(codepy.cgen.Pointer):
def __init__(self, subdecl):
super(Pointer, self).__init__(subdecl)
self._fields = ['subdecl']
def to_xml(self):
node = ElementTree.Element("Pointer")
ElementTree.SubElement(node, "subdecl").append(self.subdecl.to_xml())
return node
class Block(codepy.cgen.Block):
def __init__(self, contents=[]):
super(Block, self).__init__(contents)
self._fields = ['contents']
def generate(self, with_semicolon=False):
yield "{"
for item in self.contents:
for item_line in item.generate(with_semicolon=True):
yield " " + item_line
yield "}"
def to_xml(self):
node = ElementTree.Element("Block")
for x in self.contents:
node.append(x.to_xml())
return node
class UnbracedBlock(Block):
def generate(self, with_semicolon=False):
for item in self.contents:
for item_line in item.generate(with_semicolon=True):
yield " " + item_line
class Define(codepy.cgen.Define):
def __init__(self, symbol, value):
super(Define, self).__init__(symbol, value)
self._fields = ['symbol', 'value']
def generate(self, with_semicolon=False):
return super(Define, self).generate()
def to_xml(self):
return ElementTree.Element("Define", attrib={"symbol":self.symbol, "value":self.value})
class Statement(codepy.cgen.Statement):
def __init__(self, text):
super(Statement, self).__init__(text)
self._fields = []
def to_xml(self):
node = ElementTree.Element("Statement")
node.text = self.text
return node
class Assign(codepy.cgen.Assign):
def __init__(self, lvalue, rvalue):
super(Assign, self).__init__(lvalue, rvalue)
self._fields = ['lvalue', 'rvalue']
def to_xml(self):
node = ElementTree.Element("Assign")
ElementTree.SubElement(node, "lvalue").append(self.lvalue.to_xml())
ElementTree.SubElement(node, "rvalue").append(self.rvalue.to_xml())
return node
def generate(self, with_semicolon=False):
lvalue = next(self.lvalue.generate(with_semicolon=False))
rvalue = str(self.rvalue)
yield "%s = %s" % (lvalue, rvalue) + (";" if with_semicolon else "")
class FunctionCall(codepy.cgen.Generable):
def __init__(self, fname, params=[]):
self.fname = fname
self.params = params
self._fields = ['fname', 'params']
def generate(self, with_semicolon=False):
yield "%s(%s)" % (self.fname, ','.join(map(str, self.params))) + (";" if with_semicolon else "")
class Print(Generable):
def __init__(self, text, newline):
self.text = text
self.newline = newline
def generate(self, with_semicolon=True):
if self.newline:
yield 'std::cout %s << std::endl;' % self.text
else:
yield 'std::cout %s;' % self.text
class Compare(Generable):
def __init__(self, left, op, right):
self.left = left
self.op = op
self.right = right
self._fields = ('left', 'op', 'right')
# cgen as of 4/24/2012 has a bug that directly calls split() on a Compare object.
# see https://github.com/shoaibkamil/asp/issues/32
def split(self, t):
return str(self).split(t)
def generate(self, with_semicolon=False):
yield '%s %s %s' % (self.left, self.op, self.right)
class IfConv(If):
def generate(self, with_semicolon=False):
return super(IfConv, self).generate()
class ReturnStatement (Generable):
def __init__ (self, retval):
self.retval = retval
self._fields = ['retval']
def generate (self, with_semicolon=True):
ret = 'return ' + str(self.retval)
if with_semicolon:
ret = ret + ';'
yield ret
|
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tf upgrader."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import shutil
import tempfile
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import test_util
from tensorflow.python.platform import test as test_lib
class TestUpgrade(test_util.TensorFlowTestCase):
"""Test various APIs that have been changed in 1.0.
This test will not run in current TensorFlow, but did run in 0.11.
This file is intended to be converted by a genrule() that uses the converter
so that a 1.0 compatible version of this file is generated. That is run as
a unit test if the converter is successful.
"""
@test_util.run_v1_only("b/120545219")
def testArgRenames(self):
with self.cached_session():
a = [[1., 2., 3.], [4., 5., 6.]]
b = [[True, False, False], [False, True, True]]
dim0 = [1]
dim1 = [1]
self.assertAllEqual(
tf.reduce_any(
b, reduction_indices=dim0).eval(), [True, True])
self.assertAllEqual(
tf.reduce_all(
b, reduction_indices=[0]).eval(), [False, False, False])
self.assertAllEqual(
tf.reduce_all(
b, reduction_indices=dim1).eval(), [False, False])
self.assertAllEqual(
tf.reduce_sum(
a, reduction_indices=[1]).eval(), [6., 15.])
self.assertAllEqual(
tf.reduce_sum(
a, reduction_indices=[0, 1]).eval(), 21.0)
self.assertAllEqual(tf.reduce_sum(a, [0, 1]).eval(), 21.0)
self.assertAllEqual(
tf.reduce_prod(
a, reduction_indices=[1]).eval(), [6., 120.])
self.assertAllEqual(
tf.reduce_prod(
a, reduction_indices=[0, 1]).eval(), 720.0)
self.assertAllEqual(tf.reduce_prod(a, [0, 1]).eval(), 720.0)
self.assertAllEqual(
tf.reduce_mean(
a, reduction_indices=[1]).eval(), [2., 5.])
self.assertAllEqual(
tf.reduce_mean(
a, reduction_indices=[0, 1]).eval(), 3.5)
self.assertAllEqual(tf.reduce_mean(a, [0, 1]).eval(), 3.5)
self.assertAllEqual(
tf.reduce_min(
a, reduction_indices=[1]).eval(), [1., 4.])
self.assertAllEqual(
tf.reduce_min(
a, reduction_indices=[0, 1]).eval(), 1.0)
self.assertAllEqual(tf.reduce_min(a, [0, 1]).eval(), 1.0)
self.assertAllEqual(
tf.reduce_max(
a, reduction_indices=[1]).eval(), [3., 6.])
self.assertAllEqual(
tf.reduce_max(
a, reduction_indices=[0, 1]).eval(), 6.0)
self.assertAllEqual(tf.reduce_max(a, [0, 1]).eval(), 6.0)
self.assertAllClose(tf.reduce_logsumexp(a, reduction_indices=[1]).eval(),
[3.40760589, 6.40760612])
self.assertAllClose(
tf.reduce_logsumexp(a, reduction_indices=[0, 1]).eval(),
6.45619344711)
self.assertAllClose(
tf.reduce_logsumexp(a, [0, 1]).eval(), 6.45619344711)
self.assertAllEqual(
tf.expand_dims([[1, 2], [3, 4]], axis=1).eval(),
[[[1, 2]], [[3, 4]]])
@test_util.run_v1_only("b/120545219")
def testArgMinMax(self):
with self.cached_session():
self.assertAllEqual(
tf.argmin([[1, 2, 3], [4, 1, 0]], dimension=1).eval(),
[0, 2])
self.assertAllEqual(
tf.argmin([[1, 2, 3], [4, 1, 0]], dimension=0).eval(),
[0, 1, 1])
self.assertAllEqual(
tf.argmax([[1, 2, 3], [4, 1, 0]], dimension=1).eval(),
[2, 0])
self.assertAllEqual(
tf.argmax([[1, 2, 3], [4, 1, 0]], dimension=0).eval(),
[1, 0, 0])
@test_util.run_v1_only("b/120545219")
def testExpandAndSqueeze(self):
with self.cached_session():
# TODO(aselle): sparse_split, sparse_reduce_sum,
# sparse_reduce_sum_sparse, reduce_join
a = [[1, 2, 3]]
self.assertAllEqual(tf.expand_dims(tf.squeeze(a, [0]), 0).eval(),
a)
self.assertAllEqual(tf.squeeze(tf.expand_dims(a, 1), [1]).eval(),
a)
self.assertAllEqual(
tf.expand_dims(
tf.squeeze(
[[1, 2, 3]], squeeze_dims=[0]), dim=0).eval(),
a)
self.assertAllEqual(
tf.squeeze(
tf.expand_dims(
[[1, 2, 3]], dim=1), squeeze_dims=[1]).eval(),
a)
self.assertAllEqual(
tf.squeeze(
tf.expand_dims(
[[1, 2, 3]], dim=1), squeeze_dims=[1]).eval(),
a)
@test_util.run_v1_only("b/120545219")
def testArithmeticRenames(self):
with self.cached_session() as s:
stuff = tf.split(1, 2, [[1, 2, 3, 4], [4, 5, 6, 7]])
vals = s.run(stuff)
self.assertAllEqual(vals,
[[[1, 2], [4, 5]], [[3, 4], [6, 7]]])
self.assertAllEqual(
tf.neg(tf.mul(tf.add(1, 2), tf.sub(5, 3))).eval(),
-6)
self.assertAllEqual(
s.run(tf.listdiff([1, 2, 3], [3, 3, 4]))[0], [1, 2])
self.assertAllEqual(
tf.list_diff([1, 2, 3], [3, 3, 4])[0].eval(), [1, 2])
a = [[1., 2., 3.], [4., 5., 6.]]
foo = np.where(np.less(a, 2), np.negative(a), a)
self.assertAllEqual(
tf.select(tf.less(a, 2), tf.neg(a), a).eval(),
foo)
self.assertAllEqual(
tf.complex_abs(tf.constant(3 + 4.j)).eval(),
5)
# # TODO(aselle): (tf.batch_*)
# ]
@test_util.run_v1_only("b/120545219")
def testBatchAndSvd(self):
with self.cached_session():
mat = [[1., 2.], [2., 3.]]
batched_mat = tf.expand_dims(mat, [0])
result = tf.matmul(mat, mat).eval()
result_batched = tf.batch_matmul(batched_mat, batched_mat).eval()
self.assertAllEqual(result_batched, np.expand_dims(result, 0))
self.assertAllEqual(
tf.svd(mat, False, True).eval(),
tf.svd(mat, compute_uv=False, full_matrices=True).eval())
@test_util.run_v1_only("b/120545219")
def testCrossEntropy(self):
# TODO(aselle): Test sparse_softmax_...
with self.cached_session():
labels = [.8, .5, .2, .1]
logits = [.9, .1, .3, .1]
self.assertAllEqual(
tf.nn.softmax_cross_entropy_with_logits(
logits, labels).eval(),
tf.nn.softmax_cross_entropy_with_logits(
labels=labels, logits=logits).eval())
self.assertAllEqual(
tf.nn.sigmoid_cross_entropy_with_logits(
logits, labels).eval(),
tf.nn.sigmoid_cross_entropy_with_logits(
labels=labels, logits=logits).eval())
@test_util.run_v1_only("b/120545219")
def testVariables(self):
with self.cached_session() as s:
# make some variables
_ = [tf.Variable([1, 2, 3], dtype=tf.float32),
tf.Variable([1, 2, 3], dtype=tf.int32)]
s.run(tf.global_variables_initializer())
_ = [v.name for v in tf.all_variables()]
_ = [v.name for v in tf.local_variables()]
@test_util.run_v1_only("b/120545219")
def testSummaries(self):
with self.cached_session() as s:
var = tf.Variable([1, 2, 3], dtype=tf.float32)
s.run(tf.global_variables_initializer())
x, y = np.meshgrid(np.linspace(-10, 10, 256), np.linspace(-10, 10, 256))
image = np.sin(x**2 + y**2) / np.sqrt(x**2 + y**2) * .5 + .5
image = image[None, :, :, None]
# make a dummy sound
freq = 440 # A = 440Hz
sampling_frequency = 11000
audio = np.sin(2 * np.pi * np.linspace(0, 1, sampling_frequency) * freq)
audio = audio[None, :, None]
test_dir = tempfile.mkdtemp()
# test summaries
writer = tf.train.SummaryWriter(test_dir)
summaries = [
tf.scalar_summary("scalar_var", var[0]),
tf.scalar_summary("scalar_reduce_var", tf.reduce_sum(var)),
tf.histogram_summary("var_histogram", var),
tf.image_summary("sin_image", image),
tf.audio_summary("sin_wave", audio, sampling_frequency),
]
run_summaries = s.run(summaries)
writer.add_summary(s.run(tf.merge_summary(inputs=run_summaries)))
# This is redundant, but we want to be able to rewrite the command
writer.add_summary(s.run(tf.merge_all_summaries()))
writer.close()
shutil.rmtree(test_dir)
if __name__ == "__main__":
test_lib.main()
|
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Type resolution.
This analyzer uses known live values to further infer object types. This
may include for instance constructed objects and object member functions.
In addition, the analyzer also processes TF (staged) type annotations.
Requires annotations generated by LiveValuesResolver.
"""
# TODO(mdan): This would be more robust with a CFG.
# Situations with multiple reaching modifications (e.g. modified inside and
# outside a control flow statement) should be more robustly detected and
# analyzed.
# TODO(mdan): Look into using Python AST's type annotation fields instead.
# It would be desirable to use that mechanism if we can.
# Some caveats to consider: We may need to annotate other nodes like
# Attribute. It may also not be feasible for us to faithfully replicate
# PY3's type annotations where they aren't available. It would also require us
# to design rigorous type definitions that can accommodate Python types
# as well as TensorFlow dtypes and shapes.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gast
from tensorflow.contrib.py2tf.pyct import anno
from tensorflow.contrib.py2tf.pyct import transformer
from tensorflow.python.util import tf_inspect
class Scope(object):
"""Tracks symbol value references.
Attributes:
values: A dict mapping string to gast.Node, containing the value that was
most recently assigned to the symbol.
"""
def __init__(self, parent):
"""Create a new scope.
Args:
parent: A Scope or None.
"""
self.parent = parent
self.values = {}
def __repr__(self):
return 'Scope[%s]' % self.values.keys()
def copy(self):
s = Scope(self.parent)
s.values = self.values.copy()
return s
def setval(self, name, value):
self.values[name] = value
def hasval(self, name):
return (name in self.values or
(self.parent is not None and self.parent.hasval(name)))
def getval(self, name):
if name in self.values:
return self.values[name]
if self.parent is not None:
return self.parent.getval(name)
raise KeyError(name)
class TypeInfoResolver(transformer.Base):
"""Annotates symbols with type information where possible.
Nodes currently annotated:
* Call (helps detect class constructors)
* Attribute (helps resolve object methods)
"""
def __init__(self, context):
super(TypeInfoResolver, self).__init__(context)
self.scope = Scope(None)
self.function_level = 0
def visit_FunctionDef(self, node):
self.scope = Scope(self.scope)
self.function_level += 1
self.generic_visit(node)
self.function_level -= 1
self.scope = self.scope.parent
return node
def _visit_block(self, block):
self.scope = Scope(self.scope)
for i, n in enumerate(block):
block[i] = self.generic_visit(n)
self.scope = self.scope.parent
return block
def visit_For(self, node):
self.generic_visit(node.target)
self.generic_visit(node.iter)
node.body = self._visit_block(node.body)
node.orelse = self._visit_block(node.orelse)
return node
def visit_While(self, node):
self.generic_visit(node.test)
node.body = self._visit_block(node.body)
node.orelse = self._visit_block(node.orelse)
return node
def visit_If(self, node):
self.generic_visit(node.test)
node.body = self._visit_block(node.body)
node.orelse = self._visit_block(node.orelse)
return node
def _process_function_arg(self, arg_name):
str_name = str(arg_name)
if self.function_level == 1 and str_name in self.context.arg_types:
# Forge a node to hold the type information, so that method calls on
# it can resolve the type.
type_holder = arg_name.ast()
type_string, type_obj = self.context.arg_types[str_name]
anno.setanno(type_holder, 'type', type_obj)
anno.setanno(type_holder, 'type_fqn', tuple(type_string.split('.')))
self.scope.setval(arg_name, type_holder)
def visit_arg(self, node):
self._process_function_arg(anno.getanno(node.arg, anno.Basic.QN))
return node
def visit_Name(self, node):
self.generic_visit(node)
qn = anno.getanno(node, anno.Basic.QN)
if isinstance(node.ctx, gast.Param):
self._process_function_arg(qn)
elif isinstance(node.ctx, gast.Load) and self.scope.hasval(qn):
# E.g. if we had
# a = b
# then for future references to `a` we should have definition = `b`
definition = self.scope.getval(qn)
if anno.hasanno(definition, 'type'):
anno.setanno(node, 'type', anno.getanno(definition, 'type'))
anno.setanno(node, 'type_fqn', anno.getanno(definition, 'type_fqn'))
if anno.hasanno(definition, 'element_type'):
anno.setanno(node, 'element_type',
anno.getanno(definition, 'element_type'))
return node
def _process_variable_assignment(self, source, targets):
if isinstance(source, gast.Call):
func = source.func
if anno.hasanno(func, 'live_val'):
func_obj = anno.getanno(func, 'live_val')
if tf_inspect.isclass(func_obj):
anno.setanno(source, 'is_constructor', True)
anno.setanno(source, 'type', func_obj)
anno.setanno(source, 'type_fqn', anno.getanno(func, 'fqn'))
# TODO(mdan): Raise an error if constructor has side effects.
# We can have a whitelist of no-side-effects constructors.
# We can also step inside the constructor and further analyze.
for t in targets:
if isinstance(t, gast.Tuple):
for i, e in enumerate(t.elts):
self.scope.setval(
anno.getanno(e, anno.Basic.QN),
gast.Subscript(source, gast.Index(i), ctx=gast.Store()))
elif isinstance(t, (gast.Name, gast.Attribute)):
self.scope.setval(anno.getanno(t, anno.Basic.QN), source)
else:
raise ValueError("Don't know how to handle assignment to %s" % t)
def visit_With(self, node):
for wi in node.items:
if wi.optional_vars is not None:
self._process_variable_assignment(wi.context_expr, (wi.optional_vars,))
self.generic_visit(node)
return node
def visit_Assign(self, node):
self.generic_visit(node)
self._process_variable_assignment(node.value, node.targets)
return node
def visit_Call(self, node):
if anno.hasanno(node.func, 'live_val'):
# Symbols targeted by the "set_type" marker function are assigned the data
# type that it specified.
if (anno.getanno(node.func, 'live_val') is
self.context.type_annotation_func):
# Expecting the actual type to be the second argument.
if len(node.args) != 2:
raise ValueError('"%s" must have exactly two parameters'
% self.context.type_annotation_func)
if not anno.hasanno(node.args[0], anno.Basic.QN):
raise ValueError('the first argument of "%s" must be a symbol'
% self.context.type_annotation_func)
if not anno.hasanno(node.args[1], 'live_val'):
raise ValueError(
'the second argument of "%s" must be statically resolvable' %
self.context.type_annotation_func)
target_symbol = anno.getanno(node.args[0], anno.Basic.QN)
element_type = anno.getanno(node.args[1], 'live_val')
# Find the definition of this symbol and annotate it with the given
# data type. That in turn will cause future uses of the symbol
# to receive the same type annotation.
definition = self.scope.getval(target_symbol)
anno.setanno(node, 'element_type', element_type)
anno.setanno(definition, 'element_type', element_type)
# TODO(mdan): Should we update references between definition and here?
return self.generic_visit(node)
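# Illustrative sketch of the marker convention handled by visit_Call above
# (assumption: `set_type` is the function registered as
# context.type_annotation_func):
#
#   l = []
#   set_type(l, tf.Tensor)  # annotates the definition of `l` with
#                           # element_type == tf.Tensor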
def resolve(node, context):
return TypeInfoResolver(context).visit(node)
|
|
#Copyright ReportLab Europe Ltd. 2000-2004
#see license.txt for license details
#history http://www.reportlab.co.uk/cgi-bin/viewcvs.cgi/public/reportlab/trunk/reportlab/graphics/charts/piecharts.py
# experimental pie chart script. Two types of pie - one is a monolithic
#widget with all top-level properties, the other delegates most stuff to
#a wedges collection which lets you customize the group or every individual
#wedge.
"""Basic Pie Chart class.
This permits you to customize and pop out individual wedges;
supports elliptical and circular pies.
"""
__version__=''' $Id: piecharts.py 3021 2007-01-22 14:33:59Z rgbecker $ '''
import copy
from math import sin, cos, pi
from reportlab.lib import colors
from reportlab.lib.validators import isColor, isNumber, isListOfNumbersOrNone,\
isListOfNumbers, isColorOrNone, isString,\
isListOfStringsOrNone, OneOf, SequenceOf,\
isBoolean, isListOfColors, isNumberOrNone,\
isNoneOrListOfNoneOrStrings, isTextAnchor,\
isNoneOrListOfNoneOrNumbers, isBoxAnchor,\
isStringOrNone, NoneOr
from reportlab.graphics.widgets.markers import uSymbol2Symbol, isSymbol
from reportlab.lib.attrmap import *
from reportlab.pdfgen.canvas import Canvas
from reportlab.graphics.shapes import Group, Drawing, Ellipse, Wedge, String, STATE_DEFAULTS, ArcPath, Polygon, Rect, PolyLine
from reportlab.graphics.widgetbase import Widget, TypedPropertyCollection, PropHolder
from reportlab.graphics.charts.areas import PlotArea
from textlabels import Label
_ANGLE2BOXANCHOR={0:'w', 45:'sw', 90:'s', 135:'se', 180:'e', 225:'ne', 270:'n', 315: 'nw', -45: 'nw'}
_ANGLE2RBOXANCHOR={0:'e', 45:'ne', 90:'n', 135:'nw', 180:'w', 225:'sw', 270:'s', 315: 'se', -45: 'se'}
class WedgeLabel(Label):
def _checkDXY(self,ba):
pass
def _getBoxAnchor(self):
na = (int((self._pmv%360)/45.)*45)%360
if not (na % 90): # we have a right angle case
da = (self._pmv - na) % 360
if abs(da)>5:
na = na + (da>0 and 45 or -45)
ba = (getattr(self,'_anti',None) and _ANGLE2RBOXANCHOR or _ANGLE2BOXANCHOR)[na]
self._checkDXY(ba)
return ba
class WedgeProperties(PropHolder):
"""This holds descriptive information about the wedges in a pie chart.
It is not to be confused with the 'wedge itself'; this just holds
a recipe for how to format one, and does not allow you to hack the
angles. It can format a genuine Wedge object for you with its
format method.
"""
_attrMap = AttrMap(
strokeWidth = AttrMapValue(isNumber),
fillColor = AttrMapValue(isColorOrNone),
strokeColor = AttrMapValue(isColorOrNone),
strokeDashArray = AttrMapValue(isListOfNumbersOrNone),
popout = AttrMapValue(isNumber),
fontName = AttrMapValue(isString),
fontSize = AttrMapValue(isNumber),
fontColor = AttrMapValue(isColorOrNone),
labelRadius = AttrMapValue(isNumber),
label_dx = AttrMapValue(isNumber),
label_dy = AttrMapValue(isNumber),
label_angle = AttrMapValue(isNumber),
label_boxAnchor = AttrMapValue(isBoxAnchor),
label_boxStrokeColor = AttrMapValue(isColorOrNone),
label_boxStrokeWidth = AttrMapValue(isNumber),
label_boxFillColor = AttrMapValue(isColorOrNone),
label_strokeColor = AttrMapValue(isColorOrNone),
label_strokeWidth = AttrMapValue(isNumber),
label_text = AttrMapValue(isStringOrNone),
label_leading = AttrMapValue(isNumberOrNone),
label_width = AttrMapValue(isNumberOrNone),
label_maxWidth = AttrMapValue(isNumberOrNone),
label_height = AttrMapValue(isNumberOrNone),
label_textAnchor = AttrMapValue(isTextAnchor),
label_visible = AttrMapValue(isBoolean,desc="True if the label is to be drawn"),
label_topPadding = AttrMapValue(isNumber,'padding at top of box'),
label_leftPadding = AttrMapValue(isNumber,'padding at left of box'),
label_rightPadding = AttrMapValue(isNumber,'padding at right of box'),
label_bottomPadding = AttrMapValue(isNumber,'padding at bottom of box'),
label_pointer_strokeColor = AttrMapValue(isColorOrNone,desc='Color of indicator line'),
label_pointer_strokeWidth = AttrMapValue(isNumber,desc='StrokeWidth of indicator line'),
label_pointer_elbowLength = AttrMapValue(isNumber,desc='length of final indicator line segment'),
label_pointer_edgePad = AttrMapValue(isNumber,desc='pad between pointer label and box'),
label_pointer_piePad = AttrMapValue(isNumber,desc='pad between pointer label and pie'),
swatchMarker = AttrMapValue(NoneOr(isSymbol), desc="None or makeMarker('Diamond') ..."),
visible = AttrMapValue(isBoolean,'set to false to skip displaying'),
)
def __init__(self):
self.strokeWidth = 0
self.fillColor = None
self.strokeColor = STATE_DEFAULTS["strokeColor"]
self.strokeDashArray = STATE_DEFAULTS["strokeDashArray"]
self.popout = 0
self.fontName = STATE_DEFAULTS["fontName"]
self.fontSize = STATE_DEFAULTS["fontSize"]
self.fontColor = STATE_DEFAULTS["fillColor"]
self.labelRadius = 1.2
self.label_dx = self.label_dy = self.label_angle = 0
self.label_text = None
self.label_topPadding = self.label_leftPadding = self.label_rightPadding = self.label_bottomPadding = 0
self.label_boxAnchor = 'c'
self.label_boxStrokeColor = None #boxStroke
self.label_boxStrokeWidth = 0.5 #boxStrokeWidth
self.label_boxFillColor = None
self.label_strokeColor = None
self.label_strokeWidth = 0.1
self.label_leading = self.label_width = self.label_maxWidth = self.label_height = None
self.label_textAnchor = 'start'
self.label_visible = 1
self.label_pointer_strokeColor = colors.black
self.label_pointer_strokeWidth = 0.5
self.label_pointer_elbowLength = 3
self.label_pointer_edgePad = 2
self.label_pointer_piePad = 3
self.visible = 1
def _addWedgeLabel(self,text,add,angle,labelX,labelY,wedgeStyle,labelClass=WedgeLabel):
# now draw a label
if self.simpleLabels:
theLabel = String(labelX, labelY, text)
theLabel.textAnchor = "middle"
theLabel._pmv = angle
else:
theLabel = labelClass()
theLabel._pmv = angle
theLabel.x = labelX
theLabel.y = labelY
theLabel.dx = wedgeStyle.label_dx
theLabel.dy = wedgeStyle.label_dy
theLabel.angle = wedgeStyle.label_angle
theLabel.boxAnchor = wedgeStyle.label_boxAnchor
theLabel.boxStrokeColor = wedgeStyle.label_boxStrokeColor
theLabel.boxStrokeWidth = wedgeStyle.label_boxStrokeWidth
theLabel.boxFillColor = wedgeStyle.label_boxFillColor
theLabel.strokeColor = wedgeStyle.label_strokeColor
theLabel.strokeWidth = wedgeStyle.label_strokeWidth
_text = wedgeStyle.label_text
if _text is None: _text = text
theLabel._text = _text
theLabel.leading = wedgeStyle.label_leading
theLabel.width = wedgeStyle.label_width
theLabel.maxWidth = wedgeStyle.label_maxWidth
theLabel.height = wedgeStyle.label_height
theLabel.textAnchor = wedgeStyle.label_textAnchor
theLabel.visible = wedgeStyle.label_visible
theLabel.topPadding = wedgeStyle.label_topPadding
theLabel.leftPadding = wedgeStyle.label_leftPadding
theLabel.rightPadding = wedgeStyle.label_rightPadding
theLabel.bottomPadding = wedgeStyle.label_bottomPadding
theLabel.fontSize = wedgeStyle.fontSize
theLabel.fontName = wedgeStyle.fontName
theLabel.fillColor = wedgeStyle.fontColor
add(theLabel)
def _fixLabels(labels,n):
if labels is None:
labels = [''] * n
else:
i = n-len(labels)
if i>0: labels = labels + ['']*i
return labels
class AbstractPieChart(PlotArea):
def makeSwatchSample(self, rowNo, x, y, width, height):
baseStyle = self.slices
styleIdx = rowNo % len(baseStyle)
style = baseStyle[styleIdx]
strokeColor = getattr(style, 'strokeColor', getattr(baseStyle,'strokeColor',None))
fillColor = getattr(style, 'fillColor', getattr(baseStyle,'fillColor',None))
strokeDashArray = getattr(style, 'strokeDashArray', getattr(baseStyle,'strokeDashArray',None))
strokeWidth = getattr(style, 'strokeWidth', getattr(baseStyle, 'strokeWidth',None))
swatchMarker = getattr(style, 'swatchMarker', getattr(baseStyle, 'swatchMarker',None))
if swatchMarker:
return uSymbol2Symbol(swatchMarker,x+width/2.,y+height/2.,fillColor)
return Rect(x,y,width,height,strokeWidth=strokeWidth,strokeColor=strokeColor,
strokeDashArray=strokeDashArray,fillColor=fillColor)
def getSeriesName(self,i,default=None):
'''return series name i or default'''
try:
text = str(self.labels[i])
except:
text = default
if not self.simpleLabels:
_text = getattr(self.slices[i],'label_text','')
if _text is not None: text = _text
return text
def boundsOverlap(P,Q):
return not(P[0]>Q[2]-1e-2 or Q[0]>P[2]-1e-2 or P[1]>Q[3]-1e-2 or Q[1]>P[3]-1e-2)
def _findOverlapRun(B,i,wrap):
'''find overlap run containing B[i]'''
n = len(B)
R = [i]
while 1:
i = R[-1]
j = (i+1)%n
if j in R or not boundsOverlap(B[i],B[j]): break
R.append(j)
while 1:
i = R[0]
j = (i-1)%n
if j in R or not boundsOverlap(B[i],B[j]): break
R.insert(0,j)
return R
def findOverlapRun(B,wrap=1):
'''determine a set of overlaps in bounding boxes B or return None'''
n = len(B)
if n>1:
for i in xrange(n-1):
R = _findOverlapRun(B,i,wrap)
if len(R)>1: return R
return None
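#e.g. with bounds given as (x0, y0, x1, y1) boxes,
#findOverlapRun([(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 12, 12)]) --> [0, 1]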
def fixLabelOverlaps(L):
nL = len(L)
if nL<2: return
B = [l._origdata['bounds'] for l in L]
OK = 1
RP = []
iter = 0
mult = 1.
while iter<30:
R = findOverlapRun(B)
if not R: break
nR = len(R)
if nR==nL: break
if not [r for r in RP if r in R]:
mult = 1.0
da = 0
r0 = R[0]
rL = R[-1]
bi = B[r0]
taa = aa = _360(L[r0]._pmv)
for r in R[1:]:
b = B[r]
da = max(da,min(b[3]-bi[1],bi[3]-b[1]))
bi = b
aa += L[r]._pmv
aa = aa/float(nR)
utaa = abs(L[rL]._pmv-taa)
ntaa = _360(utaa)
da *= mult*(nR-1)/ntaa
for r in R:
l = L[r]
orig = l._origdata
angle = l._pmv = _360(l._pmv+da*(_360(l._pmv)-aa))
rad = angle/_180_pi
l.x = orig['cx'] + orig['rx']*cos(rad)
l.y = orig['cy'] + orig['ry']*sin(rad)
B[r] = l.getBounds()
RP = R
mult *= 1.05
iter += 1
def intervalIntersection(A,B):
x,y = max(min(A),min(B)),min(max(A),max(B))
if x>=y: return None
return x,y
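#e.g. intervalIntersection((0, 10), (5, 20)) --> (5, 10); disjoint or merely
#touching intervals give None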
def _makeSideArcDefs(sa,direction):
sa %= 360
if 90<=sa<270:
if direction=='clockwise':
a = (0,90,sa),(1,-90,90),(0,-360+sa,-90)
else:
a = (0,sa,270),(1,270,450),(0,450,360+sa)
else:
offs = sa>=270 and 360 or 0
if direction=='clockwise':
a = (1,offs-90,sa),(0,offs-270,offs-90),(1,-360+sa,offs-270)
else:
a = (1,sa,offs+90),(0,offs+90,offs+270),(1,offs+270,360+sa)
return tuple([a for a in a if a[1]<a[2]])
def _findLargestArc(xArcs,side):
a = [a[1] for a in xArcs if a[0]==side and a[1] is not None]
if not a: return None
if len(a)>1: a.sort(lambda x,y: cmp(y[1]-y[0],x[1]-x[0]))
return a[0]
def _fPLSide(l,width,side=None):
data = l._origdata
if side is None:
li = data['li']
ri = data['ri']
if li is None:
side = 1
i = ri
elif ri is None:
side = 0
i = li
elif li[1]-li[0]>ri[1]-ri[0]:
side = 0
i = li
else:
side = 1
i = ri
w = data['width']
edgePad = data['edgePad']
if not side: #on left
l._pmv = 180
l.x = edgePad+w
i = data['li']
else:
l._pmv = 0
l.x = width - w - edgePad
i = data['ri']
mid = data['mid'] = (i[0]+i[1])*0.5
data['smid'] = sin(mid/_180_pi)
data['cmid'] = cos(mid/_180_pi)
data['side'] = side
return side,w
def _fPLCF(a,b):
return cmp(b._origdata['smid'],a._origdata['smid'])
def _arcCF(a,b):
return cmp(a[1],b[1])
def _fixPointerLabels(n,L,x,y,width,height,side=None):
LR = [],[]
mlr = [0,0]
for l in L:
i,w = _fPLSide(l,width,side)
LR[i].append(l)
mlr[i] = max(w,mlr[i])
mul = 1
G = n*[None]
mel = 0
hh = height*0.5
yhh = y+hh
m = max(mlr)
for i in (0,1):
T = LR[i]
if T:
B = []
aB = B.append
S = []
aS = S.append
T.sort(_fPLCF)
p = 0
yh = y+height
for l in T:
data = l._origdata
inc = x+mul*(m-data['width'])
l.x += inc
G[data['index']] = l
ly = yhh+data['smid']*hh
b = data['bounds']
b2 = (b[3]-b[1])*0.5
if ly+b2>yh: ly = yh-b2
if ly-b2<y: ly = y+b2
data['bounds'] = b = (b[0],ly-b2,b[2],ly+b2)
aB(b)
l.y = ly
aS(max(0,yh-ly-b2))
yh = ly-b2
p = max(p,data['edgePad']+data['piePad'])
mel = max(mel,abs(data['smid']*(hh+data['elbowLength']))-hh)
aS(yh-y)
iter = 0
nT = len(T)
while iter<30:
R = findOverlapRun(B,wrap=0)
if not R: break
nR = len(R)
if nR==nT: break
j0 = R[0]
j1 = R[-1]
jl = j1+1
sAbove = sum(S[:j0+1])
sFree = sAbove+sum(S[jl:])
sNeed = sum([b[3]-b[1] for b in B[j0:jl]])+jl-j0-(B[j0][3]-B[j1][1])
if sNeed>sFree: break
yh = B[j0][3]+sAbove*sNeed/sFree
for r in R:
l = T[r]
data = l._origdata
b = data['bounds']
b2 = (b[3]-b[1])*0.5
yh -= 0.5
ly = l.y = yh-b2
B[r] = data['bounds'] = (b[0],ly-b2,b[2],yh)
yh = ly - b2 - 0.5
mlr[i] = m+p
mul = -1
return G, mlr[0], mlr[1], mel
class Pie(AbstractPieChart):
_attrMap = AttrMap(BASE=AbstractPieChart,
data = AttrMapValue(isListOfNumbers, desc='list of numbers defining wedge sizes; need not sum to 1'),
labels = AttrMapValue(isListOfStringsOrNone, desc="optional list of labels to use for each data point"),
startAngle = AttrMapValue(isNumber, desc="angle of first slice; like the compass, 0 is due North"),
direction = AttrMapValue(OneOf('clockwise', 'anticlockwise'), desc="'clockwise' or 'anticlockwise'"),
slices = AttrMapValue(None, desc="collection of wedge descriptor objects"),
        simpleLabels = AttrMapValue(isBoolean, desc="If true (default), use a plain String label rather than the customisable WedgeLabel"),
        other_threshold = AttrMapValue(isNumber, desc='A value for thresholding; not used yet.'),
        checkLabelOverlap = AttrMapValue(isBoolean, desc="If true, check and attempt to fix standard label overlaps (default off)"),
        pointerLabelMode = AttrMapValue(OneOf(None,'LeftRight','LeftAndRight'), desc="pointer label mode: None, 'LeftRight' or 'LeftAndRight'"),
        sameRadii = AttrMapValue(isBoolean, desc="If true, make the x/y radii the same (default off)"),
orderMode = AttrMapValue(OneOf('fixed','alternate')),
xradius = AttrMapValue(isNumberOrNone, desc="X direction Radius"),
yradius = AttrMapValue(isNumberOrNone, desc="Y direction Radius"),
)
other_threshold=None
def __init__(self,**kwd):
PlotArea.__init__(self)
self.x = 0
self.y = 0
self.width = 100
self.height = 100
self.data = [1,2.3,1.7,4.2]
self.labels = None # or list of strings
self.startAngle = 90
self.direction = "clockwise"
self.simpleLabels = 1
self.checkLabelOverlap = 0
self.pointerLabelMode = None
self.sameRadii = False
self.orderMode = 'fixed'
self.xradius = self.yradius = None
self.slices = TypedPropertyCollection(WedgeProperties)
self.slices[0].fillColor = colors.darkcyan
self.slices[1].fillColor = colors.blueviolet
self.slices[2].fillColor = colors.blue
self.slices[3].fillColor = colors.cyan
self.slices[4].fillColor = colors.pink
self.slices[5].fillColor = colors.magenta
self.slices[6].fillColor = colors.yellow
def demo(self):
d = Drawing(200, 100)
pc = Pie()
pc.x = 50
pc.y = 10
pc.width = 100
pc.height = 80
pc.data = [10,20,30,40,50,60]
pc.labels = ['a','b','c','d','e','f']
pc.slices.strokeWidth=0.5
pc.slices[3].popout = 10
pc.slices[3].strokeWidth = 2
pc.slices[3].strokeDashArray = [2,2]
pc.slices[3].labelRadius = 1.75
pc.slices[3].fontColor = colors.red
pc.slices[0].fillColor = colors.darkcyan
pc.slices[1].fillColor = colors.blueviolet
pc.slices[2].fillColor = colors.blue
pc.slices[3].fillColor = colors.cyan
pc.slices[4].fillColor = colors.aquamarine
pc.slices[5].fillColor = colors.cadetblue
pc.slices[6].fillColor = colors.lightcoral
d.add(pc)
return d
def makePointerLabels(self,angles,plMode):
class PL:
def __init__(self,centerx,centery,xradius,yradius,data,lu=0,ru=0):
self.centerx = centerx
self.centery = centery
self.xradius = xradius
self.yradius = yradius
self.data = data
self.lu = lu
self.ru = ru
labelX = self.width-2
labelY = self.height
n = nr = nl = maxW = sumH = 0
styleCount = len(self.slices)
L=[]
L_add = L.append
refArcs = _makeSideArcDefs(self.startAngle,self.direction)
for i, A in angles:
if A[1] is None: continue
sn = self.getSeriesName(i,'')
if not sn: continue
style = self.slices[i%styleCount]
if not style.label_visible or not style.visible: continue
n += 1
_addWedgeLabel(self,sn,L_add,180,labelX,labelY,style,labelClass=WedgeLabel)
l = L[-1]
b = l.getBounds()
w = b[2]-b[0]
h = b[3]-b[1]
ri = [(a[0],intervalIntersection(A,(a[1],a[2]))) for a in refArcs]
li = _findLargestArc(ri,0)
ri = _findLargestArc(ri,1)
if li and ri:
if plMode=='LeftAndRight':
if li[1]-li[0]<ri[1]-ri[0]:
li = None
else:
ri = None
else:
if li[1]-li[0]<0.02*(ri[1]-ri[0]):
li = None
elif (li[1]-li[0])*0.02>ri[1]-ri[0]:
ri = None
if ri: nr += 1
if li: nl += 1
l._origdata = dict(bounds=b,width=w,height=h,li=li,ri=ri,index=i,edgePad=style.label_pointer_edgePad,piePad=style.label_pointer_piePad,elbowLength=style.label_pointer_elbowLength)
maxW = max(w,maxW)
sumH += h+2
if not n: #we have no labels
xradius = self.width*0.5
yradius = self.height*0.5
centerx = self.x+xradius
centery = self.y+yradius
if self.xradius: xradius = self.xradius
if self.yradius: yradius = self.yradius
if self.sameRadii: xradius=yradius=min(xradius,yradius)
return PL(centerx,centery,xradius,yradius,[])
aonR = nr==n
if sumH<self.height and (aonR or nl==n):
side=int(aonR)
else:
side=None
G,lu,ru,mel = _fixPointerLabels(len(angles),L,self.x,self.y,self.width,self.height,side=side)
if plMode=='LeftAndRight':
lu = ru = max(lu,ru)
x0 = self.x+lu
x1 = self.x+self.width-ru
xradius = (x1-x0)*0.5
yradius = self.height*0.5-mel
centerx = x0+xradius
centery = self.y+yradius+mel
if self.xradius: xradius = self.xradius
if self.yradius: yradius = self.yradius
if self.sameRadii: xradius=yradius=min(xradius,yradius)
return PL(centerx,centery,xradius,yradius,G,lu,ru)
def normalizeData(self):
from operator import add
data = self.data
self._sum = sum = float(reduce(add,data,0))
return abs(sum)>=1e-8 and map(lambda x,f=360./sum: f*x, data) or len(data)*[0]
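        #e.g. [1, 3] -> [90.0, 270.0] (scaled to sum to 360); all-zero data -> zeros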
def makeAngles(self):
startAngle = self.startAngle % 360
whichWay = self.direction == "clockwise" and -1 or 1
D = [a for a in enumerate(self.normalizeData())]
if self.orderMode=='alternate':
W = [a for a in D if abs(a[1])>=1e-5]
W.sort(_arcCF)
T = [[],[]]
i = 0
while W:
if i<2:
a = W.pop(0)
else:
a = W.pop(-1)
T[i%2].append(a)
i += 1
i %= 4
T[1].reverse()
D = T[0]+T[1] + [a for a in D if abs(a[1])<1e-5]
A = []
a = A.append
for i, angle in D:
endAngle = (startAngle + (angle * whichWay))
if abs(angle)>=1e-5:
if startAngle >= endAngle:
aa = endAngle,startAngle
else:
aa = startAngle,endAngle
else:
aa = startAngle, None
startAngle = endAngle
a((i,aa))
return A
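    #e.g. for data [10, 30] with the defaults startAngle=90 and 'clockwise',
    #normalizeData() gives [90.0, 270.0] and makeAngles() returns
    #[(0, (0, 90)), (1, (-270, 0))]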
def makeWedges(self):
angles = self.makeAngles()
n = len(angles)
labels = _fixLabels(self.labels,n)
self._seriesCount = n
styleCount = len(self.slices)
plMode = self.pointerLabelMode
if plMode:
checkLabelOverlap = False
PL=self.makePointerLabels(angles,plMode)
xradius = PL.xradius
yradius = PL.yradius
centerx = PL.centerx
centery = PL.centery
PL_data = PL.data
gSN = lambda i: ''
else:
xradius = self.width*0.5
yradius = self.height*0.5
centerx = self.x + xradius
centery = self.y + yradius
if self.xradius: xradius = self.xradius
if self.yradius: yradius = self.yradius
if self.sameRadii: xradius=yradius=min(xradius,yradius)
checkLabelOverlap = self.checkLabelOverlap
gSN = lambda i: self.getSeriesName(i,'')
g = Group()
g_add = g.add
if checkLabelOverlap:
L = []
L_add = L.append
else:
L_add = g_add
for i,(a1,a2) in angles:
if a2 is None: continue
#if we didn't use %stylecount here we'd end up with the later wedges
#all having the default style
wedgeStyle = self.slices[i%styleCount]
if not wedgeStyle.visible: continue
# is it a popout?
cx, cy = centerx, centery
text = gSN(i)
popout = wedgeStyle.popout
if text or popout:
averageAngle = (a1+a2)/2.0
aveAngleRadians = averageAngle/_180_pi
cosAA = cos(aveAngleRadians)
sinAA = sin(aveAngleRadians)
if popout:
# pop out the wedge
cx = centerx + popout*cosAA
cy = centery + popout*sinAA
if n > 1:
theWedge = Wedge(cx, cy, xradius, a1, a2, yradius=yradius)
elif n==1:
theWedge = Ellipse(cx, cy, xradius, yradius)
theWedge.fillColor = wedgeStyle.fillColor
theWedge.strokeColor = wedgeStyle.strokeColor
theWedge.strokeWidth = wedgeStyle.strokeWidth
theWedge.strokeDashArray = wedgeStyle.strokeDashArray
g_add(theWedge)
if wedgeStyle.label_visible:
if text:
labelRadius = wedgeStyle.labelRadius
rx = xradius*labelRadius
ry = yradius*labelRadius
labelX = cx + rx*cosAA
labelY = cy + ry*sinAA
_addWedgeLabel(self,text,L_add,averageAngle,labelX,labelY,wedgeStyle)
if checkLabelOverlap:
l = L[-1]
l._origdata = { 'x': labelX, 'y':labelY, 'angle': averageAngle,
'rx': rx, 'ry':ry, 'cx':cx, 'cy':cy,
'bounds': l.getBounds(),
}
elif plMode and PL_data:
l = PL_data[i]
if l:
data = l._origdata
sinM = data['smid']
cosM = data['cmid']
lX = cx + xradius*cosM
lY = cy + yradius*sinM
lpel = wedgeStyle.label_pointer_elbowLength
lXi = lX + lpel*cosM
lYi = lY + lpel*sinM
L_add(PolyLine((lX,lY,lXi,lYi,l.x,l.y),
strokeWidth=wedgeStyle.label_pointer_strokeWidth,
strokeColor=wedgeStyle.label_pointer_strokeColor))
L_add(l)
if checkLabelOverlap and L:
fixLabelOverlaps(L)
map(g_add,L)
return g
def draw(self):
G = self.makeBackground()
w = self.makeWedges()
if G: return Group(G,w)
return w
class LegendedPie(Pie):
"""Pie with a two part legend (one editable with swatches, one hidden without swatches)."""
_attrMap = AttrMap(BASE=Pie,
drawLegend = AttrMapValue(isBoolean, desc="If true then create and draw legend"),
legend1 = AttrMapValue(None, desc="Handle to legend for pie"),
legendNumberFormat = AttrMapValue(None, desc="Formatting routine for number on right hand side of legend."),
legendNumberOffset = AttrMapValue(isNumber, desc="Horizontal space between legend and numbers on r/hand side"),
pieAndLegend_colors = AttrMapValue(isListOfColors, desc="Colours used for both swatches and pie"),
legend_names = AttrMapValue(isNoneOrListOfNoneOrStrings, desc="Names used in legend (or None)"),
legend_data = AttrMapValue(isNoneOrListOfNoneOrNumbers, desc="Numbers used on r/hand side of legend (or None)"),
leftPadding = AttrMapValue(isNumber, desc='Padding on left of drawing'),
rightPadding = AttrMapValue(isNumber, desc='Padding on right of drawing'),
topPadding = AttrMapValue(isNumber, desc='Padding at top of drawing'),
bottomPadding = AttrMapValue(isNumber, desc='Padding at bottom of drawing'),
)
def __init__(self):
Pie.__init__(self)
self.x = 0
self.y = 0
self.height = 100
self.width = 100
self.data = [38.4, 20.7, 18.9, 15.4, 6.6]
self.labels = None
self.direction = 'clockwise'
PCMYKColor, black = colors.PCMYKColor, colors.black
self.pieAndLegend_colors = [PCMYKColor(11,11,72,0,spotName='PANTONE 458 CV'),
PCMYKColor(100,65,0,30,spotName='PANTONE 288 CV'),
PCMYKColor(11,11,72,0,spotName='PANTONE 458 CV',density=75),
PCMYKColor(100,65,0,30,spotName='PANTONE 288 CV',density=75),
PCMYKColor(11,11,72,0,spotName='PANTONE 458 CV',density=50),
PCMYKColor(100,65,0,30,spotName='PANTONE 288 CV',density=50)]
#Allows us up to six 'wedges' to be coloured
self.slices[0].fillColor=self.pieAndLegend_colors[0]
self.slices[1].fillColor=self.pieAndLegend_colors[1]
self.slices[2].fillColor=self.pieAndLegend_colors[2]
self.slices[3].fillColor=self.pieAndLegend_colors[3]
self.slices[4].fillColor=self.pieAndLegend_colors[4]
self.slices[5].fillColor=self.pieAndLegend_colors[5]
self.slices.strokeWidth = 0.75
self.slices.strokeColor = black
legendOffset = 17
self.legendNumberOffset = 51
self.legendNumberFormat = '%.1f%%'
self.legend_data = self.data
#set up the legends
from reportlab.graphics.charts.legends import Legend
self.legend1 = Legend()
self.legend1.x = self.width+legendOffset
self.legend1.y = self.height
self.legend1.deltax = 5.67
self.legend1.deltay = 14.17
self.legend1.dxTextSpace = 11.39
self.legend1.dx = 5.67
self.legend1.dy = 5.67
self.legend1.columnMaximum = 7
self.legend1.alignment = 'right'
self.legend_names = ['AAA:','AA:','A:','BBB:','NR:']
for f in range(0,len(self.data)):
self.legend1.colorNamePairs.append((self.pieAndLegend_colors[f], self.legend_names[f]))
self.legend1.fontName = "Helvetica-Bold"
self.legend1.fontSize = 6
self.legend1.strokeColor = black
self.legend1.strokeWidth = 0.5
self._legend2 = Legend()
self._legend2.dxTextSpace = 0
self._legend2.dx = 0
self._legend2.alignment = 'right'
self._legend2.fontName = "Helvetica-Oblique"
self._legend2.fontSize = 6
self._legend2.strokeColor = self.legend1.strokeColor
self.leftPadding = 5
self.rightPadding = 5
self.topPadding = 5
self.bottomPadding = 5
self.drawLegend = 1
def draw(self):
if self.drawLegend:
self.legend1.colorNamePairs = []
self._legend2.colorNamePairs = []
for f in range(0,len(self.data)):
if self.legend_names == None:
self.slices[f].fillColor = self.pieAndLegend_colors[f]
self.legend1.colorNamePairs.append((self.pieAndLegend_colors[f], None))
else:
try:
self.slices[f].fillColor = self.pieAndLegend_colors[f]
self.legend1.colorNamePairs.append((self.pieAndLegend_colors[f], self.legend_names[f]))
except IndexError:
self.slices[f].fillColor = self.pieAndLegend_colors[f%len(self.pieAndLegend_colors)]
self.legend1.colorNamePairs.append((self.pieAndLegend_colors[f%len(self.pieAndLegend_colors)], self.legend_names[f]))
if self.legend_data != None:
ldf = self.legend_data[f]
lNF = self.legendNumberFormat
from types import StringType
if ldf is None or lNF is None:
pass
elif type(lNF) is StringType:
ldf = lNF % ldf
elif callable(lNF):
ldf = lNF(ldf)
else:
p = self.legend_names[f]
if self.legend_data != None:
ldf = self.legend_data[f]
lNF = self.legendNumberFormat
if ldf is None or lNF is None:
pass
elif type(lNF) is StringType:
ldf = lNF % ldf
elif callable(lNF):
ldf = lNF(ldf)
else:
msg = "Unknown formatter type %s, expected string or function" % self.legendNumberFormat
raise Exception, msg
self._legend2.colorNamePairs.append((None,ldf))
p = Pie.draw(self)
if self.drawLegend:
p.add(self.legend1)
#hide from user - keeps both sides lined up!
self._legend2.x = self.legend1.x+self.legendNumberOffset
self._legend2.y = self.legend1.y
self._legend2.deltax = self.legend1.deltax
self._legend2.deltay = self.legend1.deltay
self._legend2.dy = self.legend1.dy
self._legend2.columnMaximum = self.legend1.columnMaximum
p.add(self._legend2)
p.shift(self.leftPadding, self.bottomPadding)
return p
def _getDrawingDimensions(self):
tx = self.rightPadding
if self.drawLegend:
tx = tx+self.legend1.x+self.legendNumberOffset #self._legend2.x
tx = tx + self._legend2._calculateMaxWidth(self._legend2.colorNamePairs)
ty = self.bottomPadding+self.height+self.topPadding
return (tx,ty)
def demo(self, drawing=None):
if not drawing:
tx,ty = self._getDrawingDimensions()
drawing = Drawing(tx, ty)
drawing.add(self.draw())
return drawing
from utils3d import _getShaded, _2rad, _360, _pi_2, _2pi, _180_pi
class Wedge3dProperties(PropHolder):
"""This holds descriptive information about the wedges in a pie chart.
It is not to be confused with the 'wedge itself'; this just holds
a recipe for how to format one, and does not allow you to hack the
angles. It can format a genuine Wedge object for you with its
format method.
"""
_attrMap = AttrMap(
fillColor = AttrMapValue(isColorOrNone),
fillColorShaded = AttrMapValue(isColorOrNone),
fontColor = AttrMapValue(isColorOrNone),
fontName = AttrMapValue(isString),
fontSize = AttrMapValue(isNumber),
label_angle = AttrMapValue(isNumber),
label_bottomPadding = AttrMapValue(isNumber,'padding at bottom of box'),
label_boxAnchor = AttrMapValue(isBoxAnchor),
label_boxFillColor = AttrMapValue(isColorOrNone),
label_boxStrokeColor = AttrMapValue(isColorOrNone),
label_boxStrokeWidth = AttrMapValue(isNumber),
label_dx = AttrMapValue(isNumber),
label_dy = AttrMapValue(isNumber),
label_height = AttrMapValue(isNumberOrNone),
label_leading = AttrMapValue(isNumberOrNone),
label_leftPadding = AttrMapValue(isNumber,'padding at left of box'),
label_maxWidth = AttrMapValue(isNumberOrNone),
label_rightPadding = AttrMapValue(isNumber,'padding at right of box'),
label_strokeColor = AttrMapValue(isColorOrNone),
label_strokeWidth = AttrMapValue(isNumber),
label_text = AttrMapValue(isStringOrNone),
label_textAnchor = AttrMapValue(isTextAnchor),
label_topPadding = AttrMapValue(isNumber,'padding at top of box'),
label_visible = AttrMapValue(isBoolean,desc="True if the label is to be drawn"),
label_width = AttrMapValue(isNumberOrNone),
labelRadius = AttrMapValue(isNumber),
popout = AttrMapValue(isNumber),
shading = AttrMapValue(isNumber),
strokeColor = AttrMapValue(isColorOrNone),
strokeColorShaded = AttrMapValue(isColorOrNone),
strokeDashArray = AttrMapValue(isListOfNumbersOrNone),
strokeWidth = AttrMapValue(isNumber),
visible = AttrMapValue(isBoolean,'set to false to skip displaying'),
)
def __init__(self):
self.strokeWidth = 0
self.shading = 0.3
self.visible = 1
self.strokeColorShaded = self.fillColorShaded = self.fillColor = None
self.strokeColor = STATE_DEFAULTS["strokeColor"]
self.strokeDashArray = STATE_DEFAULTS["strokeDashArray"]
self.popout = 0
self.fontName = STATE_DEFAULTS["fontName"]
self.fontSize = STATE_DEFAULTS["fontSize"]
self.fontColor = STATE_DEFAULTS["fillColor"]
self.labelRadius = 1.2
self.label_dx = self.label_dy = self.label_angle = 0
self.label_text = None
self.label_topPadding = self.label_leftPadding = self.label_rightPadding = self.label_bottomPadding = 0
self.label_boxAnchor = 'c'
self.label_boxStrokeColor = None #boxStroke
self.label_boxStrokeWidth = 0.5 #boxStrokeWidth
self.label_boxFillColor = None
self.label_strokeColor = None
self.label_strokeWidth = 0.1
self.label_leading = self.label_width = self.label_maxWidth = self.label_height = None
self.label_textAnchor = 'start'
self.label_visible = 1
class _SL3D:
def __init__(self,lo,hi):
if lo<0:
lo += 360
hi += 360
self.lo = lo
self.hi = hi
self.mid = (lo+hi)*0.5
def __str__(self):
return '_SL3D(%.2f,%.2f)' % (self.lo,self.hi)
_270r = _2rad(270)
class Pie3d(Pie):
_attrMap = AttrMap(BASE=Pie,
perspective = AttrMapValue(isNumber, desc='A flattening parameter.'),
depth_3d = AttrMapValue(isNumber, desc='depth of the pie.'),
angle_3d = AttrMapValue(isNumber, desc='The view angle.'),
)
perspective = 70
depth_3d = 25
angle_3d = 180
def _popout(self,i):
return self.slices[i].popout or 0
def CX(self, i,d ):
return self._cx+(d and self._xdepth_3d or 0)+self._popout(i)*cos(_2rad(self._sl3d[i].mid))
def CY(self,i,d):
return self._cy+(d and self._ydepth_3d or 0)+self._popout(i)*sin(_2rad(self._sl3d[i].mid))
def OX(self,i,o,d):
return self.CX(i,d)+self._radiusx*cos(_2rad(o))
def OY(self,i,o,d):
return self.CY(i,d)+self._radiusy*sin(_2rad(o))
def rad_dist(self,a):
_3dva = self._3dva
return min(abs(a-_3dva),abs(a-_3dva+360))
def __init__(self):
self.x = 0
self.y = 0
self.width = 300
self.height = 200
self.data = [12.50,20.10,2.00,22.00,5.00,18.00,13.00]
self.labels = None # or list of strings
self.startAngle = 90
self.direction = "clockwise"
self.simpleLabels = 1
self.slices = TypedPropertyCollection(Wedge3dProperties)
self.slices[0].fillColor = colors.darkcyan
self.slices[1].fillColor = colors.blueviolet
self.slices[2].fillColor = colors.blue
self.slices[3].fillColor = colors.cyan
self.slices[4].fillColor = colors.azure
self.slices[5].fillColor = colors.crimson
self.slices[6].fillColor = colors.darkviolet
self.checkLabelOverlap = 0
self.xradius = self.yradius = None
def _fillSide(self,L,i,angle,strokeColor,strokeWidth,fillColor):
rd = self.rad_dist(angle)
if rd<self.rad_dist(self._sl3d[i].mid):
p = [self.CX(i,0),self.CY(i,0),
self.CX(i,1),self.CY(i,1),
self.OX(i,angle,1),self.OY(i,angle,1),
self.OX(i,angle,0),self.OY(i,angle,0)]
L.append((rd,Polygon(p, strokeColor=strokeColor, fillColor=fillColor,strokeWidth=strokeWidth,strokeLineJoin=1)))
def draw(self):
slices = self.slices
_3d_angle = self.angle_3d
_3dva = self._3dva = _360(_3d_angle+90)
a0 = _2rad(_3dva)
self._xdepth_3d = cos(a0)*self.depth_3d
self._ydepth_3d = sin(a0)*self.depth_3d
self._cx = self.x+self.width/2.0
self._cy = self.y+(self.height - self._ydepth_3d)/2.0
radiusx = radiusy = self._cx-self.x
if self.xradius: radiusx = self.xradius
if self.yradius: radiusy = self.yradius
self._radiusx = radiusx
self._radiusy = radiusy = (1.0 - self.perspective/100.0)*radiusy
data = self.normalizeData()
sum = self._sum
CX = self.CX
CY = self.CY
OX = self.OX
OY = self.OY
rad_dist = self.rad_dist
_fillSide = self._fillSide
self._seriesCount = n = len(data)
_sl3d = self._sl3d = []
g = Group()
last = _360(self.startAngle)
a0 = self.direction=='clockwise' and -1 or 1
for v in data:
v *= a0
angle1, angle0 = last, v+last
last = angle0
if a0>0: angle0, angle1 = angle1, angle0
_sl3d.append(_SL3D(angle0,angle1))
labels = _fixLabels(self.labels,n)
a0 = _3d_angle
a1 = _3d_angle+180
T = []
S = []
L = []
class WedgeLabel3d(WedgeLabel):
_ydepth_3d = self._ydepth_3d
def _checkDXY(self,ba):
if ba[0]=='n':
if not hasattr(self,'_ody'):
self._ody = self.dy
self.dy = -self._ody + self._ydepth_3d
checkLabelOverlap = self.checkLabelOverlap
for i in xrange(n):
style = slices[i]
if not style.visible: continue
sl = _sl3d[i]
lo = angle0 = sl.lo
hi = angle1 = sl.hi
if abs(hi-lo)<=1e-7: continue
fillColor = _getShaded(style.fillColor,style.fillColorShaded,style.shading)
strokeColor = _getShaded(style.strokeColor,style.strokeColorShaded,style.shading) or fillColor
strokeWidth = style.strokeWidth
cx0 = CX(i,0)
cy0 = CY(i,0)
cx1 = CX(i,1)
cy1 = CY(i,1)
#background shaded pie bottom
g.add(Wedge(cx1,cy1,radiusx, lo, hi,yradius=radiusy,
strokeColor=strokeColor,strokeWidth=strokeWidth,fillColor=fillColor,
strokeLineJoin=1))
#connect to top
if lo < a0 < hi: angle0 = a0
if lo < a1 < hi: angle1 = a1
if 1:
p = ArcPath(strokeColor=strokeColor, fillColor=fillColor,strokeWidth=strokeWidth,strokeLineJoin=1)
p.addArc(cx1,cy1,radiusx,angle0,angle1,yradius=radiusy,moveTo=1)
p.lineTo(OX(i,angle1,0),OY(i,angle1,0))
p.addArc(cx0,cy0,radiusx,angle0,angle1,yradius=radiusy,reverse=1)
p.closePath()
if angle0<=_3dva and angle1>=_3dva:
rd = 0
else:
rd = min(rad_dist(angle0),rad_dist(angle1))
S.append((rd,p))
_fillSide(S,i,lo,strokeColor,strokeWidth,fillColor)
_fillSide(S,i,hi,strokeColor,strokeWidth,fillColor)
#bright shaded top
fillColor = style.fillColor
strokeColor = style.strokeColor or fillColor
T.append(Wedge(cx0,cy0,radiusx,lo,hi,yradius=radiusy,
strokeColor=strokeColor,strokeWidth=strokeWidth,fillColor=fillColor,strokeLineJoin=1))
text = labels[i]
if style.label_visible and text:
rat = style.labelRadius
self._radiusx *= rat
self._radiusy *= rat
mid = sl.mid
labelX = OX(i,mid,0)
labelY = OY(i,mid,0)
_addWedgeLabel(self,text,L.append,mid,labelX,labelY,style,labelClass=WedgeLabel3d)
if checkLabelOverlap:
l = L[-1]
l._origdata = { 'x': labelX, 'y':labelY, 'angle': mid,
'rx': self._radiusx, 'ry':self._radiusy, 'cx':CX(i,0), 'cy':CY(i,0),
'bounds': l.getBounds(),
}
self._radiusx = radiusx
self._radiusy = radiusy
S.sort(lambda a,b: -cmp(a[0],b[0]))
if checkLabelOverlap and L:
fixLabelOverlaps(L)
map(g.add,map(lambda x:x[1],S)+T+L)
return g
def demo(self):
d = Drawing(200, 100)
pc = Pie()
pc.x = 50
pc.y = 10
pc.width = 100
pc.height = 80
pc.data = [10,20,30,40,50,60]
pc.labels = ['a','b','c','d','e','f']
pc.slices.strokeWidth=0.5
pc.slices[3].popout = 10
pc.slices[3].strokeWidth = 2
pc.slices[3].strokeDashArray = [2,2]
pc.slices[3].labelRadius = 1.75
pc.slices[3].fontColor = colors.red
pc.slices[0].fillColor = colors.darkcyan
pc.slices[1].fillColor = colors.blueviolet
pc.slices[2].fillColor = colors.blue
pc.slices[3].fillColor = colors.cyan
pc.slices[4].fillColor = colors.aquamarine
pc.slices[5].fillColor = colors.cadetblue
pc.slices[6].fillColor = colors.lightcoral
        pc.slices[1].visible = 0
        pc.slices[3].visible = 1
        pc.slices[4].visible = 1
        pc.slices[5].visible = 1
        pc.slices[6].visible = 0
d.add(pc)
return d
def sample0a():
"Make a degenerated pie chart with only one slice."
d = Drawing(400, 200)
pc = Pie()
pc.x = 150
pc.y = 50
pc.data = [10]
pc.labels = ['a']
pc.slices.strokeWidth=1#0.5
d.add(pc)
return d
def sample0b():
"Make a degenerated pie chart with only one slice."
d = Drawing(400, 200)
pc = Pie()
pc.x = 150
pc.y = 50
pc.width = 120
pc.height = 100
pc.data = [10]
pc.labels = ['a']
pc.slices.strokeWidth=1#0.5
d.add(pc)
return d
def sample1():
"Make a typical pie chart with with one slice treated in a special way."
d = Drawing(400, 200)
pc = Pie()
pc.x = 150
pc.y = 50
pc.data = [10, 20, 30, 40, 50, 60]
pc.labels = ['a', 'b', 'c', 'd', 'e', 'f']
pc.slices.strokeWidth=1#0.5
pc.slices[3].popout = 20
pc.slices[3].strokeWidth = 2
pc.slices[3].strokeDashArray = [2,2]
pc.slices[3].labelRadius = 1.75
pc.slices[3].fontColor = colors.red
d.add(pc)
return d
def sample2():
"Make a pie chart with nine slices."
d = Drawing(400, 200)
pc = Pie()
pc.x = 125
pc.y = 25
pc.data = [0.31, 0.148, 0.108,
0.076, 0.033, 0.03,
0.019, 0.126, 0.15]
pc.labels = ['1', '2', '3', '4', '5', '6', '7', '8', 'X']
pc.width = 150
pc.height = 150
pc.slices.strokeWidth=1#0.5
pc.slices[0].fillColor = colors.steelblue
pc.slices[1].fillColor = colors.thistle
pc.slices[2].fillColor = colors.cornflower
pc.slices[3].fillColor = colors.lightsteelblue
pc.slices[4].fillColor = colors.aquamarine
pc.slices[5].fillColor = colors.cadetblue
pc.slices[6].fillColor = colors.lightcoral
pc.slices[7].fillColor = colors.tan
pc.slices[8].fillColor = colors.darkseagreen
d.add(pc)
return d
def sample3():
"Make a pie chart with a very slim slice."
d = Drawing(400, 200)
pc = Pie()
pc.x = 125
pc.y = 25
pc.data = [74, 1, 25]
pc.width = 150
pc.height = 150
pc.slices.strokeWidth=1#0.5
pc.slices[0].fillColor = colors.steelblue
pc.slices[1].fillColor = colors.thistle
pc.slices[2].fillColor = colors.cornflower
d.add(pc)
return d
def sample4():
"Make a pie chart with several very slim slices."
d = Drawing(400, 200)
pc = Pie()
pc.x = 125
pc.y = 25
pc.data = [74, 1, 1, 1, 1, 22]
pc.width = 150
pc.height = 150
pc.slices.strokeWidth=1#0.5
pc.slices[0].fillColor = colors.steelblue
pc.slices[1].fillColor = colors.thistle
pc.slices[2].fillColor = colors.cornflower
pc.slices[3].fillColor = colors.lightsteelblue
pc.slices[4].fillColor = colors.aquamarine
pc.slices[5].fillColor = colors.cadetblue
d.add(pc)
return d
|
|
"""Routine to monitor the modal gain in each pixel as a
function of time. Uses COS Cumulative Image (CCI) files
to produce a modal gain map for each time period. Modal gain
maps for each period are collated to monitor the progress of
each pixel (superpixel) with time. Pixels that drop below
a threshold value are flagged and collected into a
gain sag table reference file (gsagtab).
The PHA modal gain threshold is set by the global variable MODAL_GAIN_LIMIT.
Allowing the modal gain of a distribution to come within 1 gain bin
of the threshold results in ~8% loss of flux. Within
2 gain bins, ~4%
3 gain bins, ~2%
4 gain bins, ~1%
However, due to the column summing, a 4% loss in a region does not appear
as a 4% loss in the extracted spectrum.
"""
from __future__ import absolute_import
__author__ = 'Justin Ely'
__maintainer__ = 'Justin Ely'
__email__ = 'ely@stsci.edu'
__status__ = 'Active'
import argparse
import os
import shutil
import sys
import time
from datetime import datetime
import gzip
import glob
import logging
logger = logging.getLogger(__name__)
from astropy.io import fits
from astropy.modeling import models, fitting
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy
from scipy.optimize import leastsq, newton, curve_fit
import fitsio
from ..utils import rebin, enlarge
from .constants import *  ## star import kept for the many module-level constants used throughout
#from db_interface import session, engine, Gain
if sys.version_info.major == 2:
from itertools import izip as zip
#------------------------------------------------------------
class CCI:
"""Creates a cci_object designed for use in the monitor.
Each COS cumulative image fits file is made into its
own cci_object where the data is more easily used.
Takes a while to make, contains a few large arrays and
various header keywords from the original file.
"""
def __init__(self, filename, **kwargs):
"""Open filename and create CCI Object"""
self.input_file = filename
self.xbinning = kwargs.get('xbinning', 1)
self.ybinning = kwargs.get('ybinning', 1)
self.mincounts = kwargs.get('mincounts', 30)
path, cci_name = os.path.split(filename)
cci_name, ext = os.path.splitext(cci_name)
#-- trim off any remaining extensions: .fits, .gz, .tar, etc
while not ext == '':
cci_name, ext = os.path.splitext(cci_name)
self.cci_name = cci_name
self.open_fits()
if not self.numfiles:
return
gainmap, counts, std = measure_gainimage(self.big_array)
self.gain_image = gainmap
self.std_image = std
if kwargs.get('only_active_area', True):
brftab = os.path.join(os.environ['lref'], self.brftab)
left, right, top, bottom = read_brftab(brftab, self.segment)
left //= self.xbinning
right //= self.xbinning
top //= self.ybinning
bottom //= self.ybinning
self.gain_image[:bottom] = 0
self.gain_image[top:] = 0
self.gain_image[:, :left] = 0
self.gain_image[:, right:] = 0
if kwargs.get('ignore_spots', True):
### Dynamic when delivered to CRDS
reffiles = glob.glob(os.path.join(os.environ['lref'], '*spot.fits'))
creation_dates = np.array([fits.getval(item, 'DATE') for item in reffiles])
spottab = reffiles[creation_dates.argmax()]
if os.path.exists(spottab):
regions = read_spottab(spottab,
self.segment,
self.expstart,
self.expend)
for lx, ly, dx, dy in regions:
lx //= self.xbinning
dx //= self.xbinning
ly //= self.ybinning
dy //= self.ybinning
#-- pad the regions by 1 bin in either direction
lx -= 1
dx += 2
ly -= 1
dy += 2
#--
self.gain_image[ly:ly+dy, lx:lx+dx] = 0
self.gain_index = np.where(self.gain_image > 0)
self.bad_index = np.where((self.gain_image <= 3) &
(self.gain_image > 0))
def open_fits(self):
"""Open CCI file and populated attributes with
header keywords and data arrays.
"""
hdu = fitsio.FITS(self.input_file)
primary = hdu[0].read_header()
assert (hdu[2].read().shape == (Y_UNBINNED, X_UNBINNED)), 'ERROR: Input CCI not standard dimensions'
self.detector = primary['DETECTOR']
self.segment = primary['SEGMENT']
self.obsmode = primary['OBSMODE']
self.expstart = primary['EXPSTART']
self.expend = primary['EXPEND']
self.exptime = primary['EXPTIME']
self.numfiles = primary['NUMFILES']
self.counts = primary['COUNTS']
self.dethv = primary.get('DETHV', -999)
try:
self.brftab = primary['brftab'].split('$')[1]
except:
self.brftab = 'x1u1459il_brf.fits'
if self.expstart:
            #----Find the most recently created HVTAB
hvtable_list = glob.glob(os.path.join(os.environ['lref'], '*hv.fits'))
HVTAB = hvtable_list[np.array([fits.getval(item, 'DATE') for item in hvtable_list]).argmax()]
hvtab = fits.open(HVTAB)
if self.segment == 'FUVA':
hv_string = 'HVLEVELA'
elif self.segment == 'FUVB':
hv_string = 'HVLEVELB'
self.file_list = [line[0].decode("utf-8") for line in hdu[1].read()]
self.big_array = np.array([rebin(hdu[i+2].read(), bins=(self.ybinning, self.xbinning)) for i in range(32)])
self.get_counts(self.big_array)
self.extracted_charge = self.pha_to_coulombs(self.big_array)
self.gain_image = np.zeros((YLEN, XLEN))
self.modal_gain_width = np.zeros((YLEN, XLEN))
self.cnt00_00 = len(self.big_array[0].nonzero()[0])
self.cnt01_01 = len(self.big_array[1].nonzero()[0])
self.cnt02_30 = len(self.big_array[2:31].nonzero()[0])
self.cnt31_31 = len(self.big_array[31:].nonzero()[0])
def get_counts(self, in_array):
"""collapse pha arrays to get total counts accross all
PHA bins.
Will also search for and add in accum data if any exists.
"""
out_array = np.sum(in_array, axis=0)
        ###Test before implementation
        ###Should only affect counts and charge extensions;
        ### no implications for modal gain arrays or measurements
if self.segment == 'FUVA':
accum_name = self.cci_name.replace('00_','02_') ##change when using OPUS data
elif self.segment == 'FUVB':
accum_name = self.cci_name.replace('01_','03_') ##change when using OPUS data
else:
accum_name = None
print('ERROR: name not standard')
        if accum_name and os.path.exists(accum_name):
accum_data = rebin(fits.getdata(CCI_DIR+accum_name, 0),bins=(Y_BINNING,self.xbinning))
out_array += accum_data
self.accum_data = accum_data
else:
self.accum_data = None
self.counts_image = out_array
def pha_to_coulombs(self, in_array):
"""Convert pha to picocoloumbs to calculate extracted charge.
Equation comes from D. Sahnow.
"""
coulomb_value = 1.0e-12*10**((np.array(range(0,32))-11.75)/20.5)
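        #-- e.g. pha = 11.75 gives 10**0 = 1, i.e. 1.0e-12 C (1 pC) per count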
zlen, ylen, xlen = in_array.shape
out_array = np.zeros((ylen, xlen))
for pha,layer in enumerate(in_array):
out_array += (coulomb_value[pha]*layer)
return out_array
def write(self, out_name=None):
'''Write current CCI object to fits file.
Output files are used in later analysis to determine when
regions fall below the threshold.
'''
out_name = out_name or self.cci_name + '_gainmap.fits'
if os.path.exists(out_name):
print("not clobbering existing file")
return
#-------Ext=0
hdu_out = fits.HDUList(fits.PrimaryHDU())
hdu_out[0].header['TELESCOP'] = 'HST'
hdu_out[0].header['INSTRUME'] = 'COS'
hdu_out[0].header['DETECTOR'] = 'FUV'
hdu_out[0].header['OPT_ELEM'] = 'ANY'
hdu_out[0].header['FILETYPE'] = 'GAINMAP'
hdu_out[0].header['XBINNING'] = self.xbinning
hdu_out[0].header['YBINNING'] = self.ybinning
hdu_out[0].header['SRC_FILE'] = self.cci_name
hdu_out[0].header['SEGMENT'] = self.segment
hdu_out[0].header['EXPSTART'] = self.expstart
hdu_out[0].header['EXPEND'] = self.expend
hdu_out[0].header['EXPTIME'] = self.exptime
hdu_out[0].header['NUMFILES'] = self.numfiles
hdu_out[0].header['COUNTS'] = self.counts
hdu_out[0].header['DETHV'] = self.dethv
hdu_out[0].header['cnt00_00'] = self.cnt00_00
hdu_out[0].header['cnt01_01'] = self.cnt01_01
hdu_out[0].header['cnt02_30'] = self.cnt02_30
hdu_out[0].header['cnt31_31'] = self.cnt31_31
#-------EXT=1
included_files = np.array(self.file_list)
files_col = fits.Column('files', '24A', 'rootname', array=included_files)
tab = fits.BinTableHDU.from_columns([files_col])
hdu_out.append(tab)
hdu_out[1].header['EXTNAME'] = 'FILES'
#-------EXT=2
hdu_out.append(fits.ImageHDU(data=self.gain_image))
hdu_out[2].header['EXTNAME'] = 'MOD_GAIN'
#-------EXT=3
hdu_out.append(fits.ImageHDU(data=self.counts_image))
hdu_out[3].header['EXTNAME'] = 'COUNTS'
#-------EXT=4
hdu_out.append(fits.ImageHDU(data=self.extracted_charge))
hdu_out[4].header['EXTNAME'] = 'CHARGE'
#-------EXT=5
hdu_out.append(fits.ImageHDU(data=self.big_array[0]))
hdu_out[5].header['EXTNAME'] = 'cnt00_00'
#-------EXT=6
hdu_out.append(fits.ImageHDU(data=self.big_array[1]))
hdu_out[6].header['EXTNAME'] = 'cnt01_01'
#-------EXT=7
hdu_out.append(fits.ImageHDU(data=np.sum(self.big_array[2:31],axis=0)))
hdu_out[7].header['EXTNAME'] = 'cnt02_30'
#-------EXT=8
hdu_out.append(fits.ImageHDU(data=self.big_array[31]))
hdu_out[8].header['EXTNAME'] = 'cnt31_31'
#-------Write to file
hdu_out.writeto(out_name)
hdu_out.close()
#------------------------------------------------------------
def rename(input_file, mode='move'):
"""Rename CCI file from old to new naming convention
Parameters
----------
input_file : str
Old-style CCI file
    mode : str, optional
        'move' renames the original file, 'copy' writes a copy under the
        new name, and 'print' only prints and returns the new name.
Returns
-------
outname : str
Name in the new naming convention.
"""
options = ['copy', 'move', 'print']
if not mode in options:
raise ValueError("mode: {} must be in {}".format(mode, options))
with fits.open(input_file) as hdu:
path, name = os.path.split(input_file)
name_split = name.split('_')
dethv = int(hdu[0].header['DETHV'])
time_str = name_split[1]
filetype = name_split[0][3:]
ext = ''
if '.fits' in name:
ext += '.fits'
if '.gz' in name:
ext += '.gz'
if hdu[0].header['DETECTOR'] == 'FUV':
if dethv == -1:
dethv = 999
out_name = 'l_{}_{}_{}_cci{}'.format(time_str, filetype, int(dethv), ext)
elif hdu[0].header['DETECTOR'] == 'NUV':
out_name = 'l_{}_{}_cci{}'.format(time_str, filetype, ext)
out_file = os.path.join(path, out_name)
if mode == 'copy':
hdu.writeto(out_name)
if mode == 'print':
print(out_name)
elif mode == 'move':
print("{} --> {}".format(input_file, out_name))
shutil.move(input_file, out_name)
return out_name
#-------------------------------------------------------------------------------
def read_brftab(filename, segment):
"""Parse Baseline Reference Table for needed information
Reads the active area for the specified segment from the COS BRFTAB
(Baseline Reference Table). The four corners (left, right, top, bottom)
of the active area are returned.
Parameters
----------
filename : str
Input BRFTAB
segment : str
'FUVA' or 'FUVB', which segment to parse from
Returns
-------
corners : tuple
left, right, top, bottom corners of the active area
"""
with fits.open(filename) as hdu:
index = np.where(hdu[1].data['segment'] == segment)[0]
left = hdu[1].data[index]['A_LEFT']
right = hdu[1].data[index]['A_RIGHT']
top = hdu[1].data[index]['A_HIGH']
bottom = hdu[1].data[index]['A_LOW']
return left[0], right[0], top[0], bottom[0]
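#-- Hedged usage sketch: the brftab name below matches the fallback used in
#-- CCI.open_fits above and is illustrative only.
#--   left, right, top, bottom = read_brftab('x1u1459il_brf.fits', 'FUVA')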
#-------------------------------------------------------------------------------
def read_spottab(filename, segment, expstart, expend):
"""Parse the COS spottab
Parameters
----------
filename : str
Input SPOTTAB fits file
segment : str
'FUVA' or 'FUVB', which segment to parse from
expstart : float, int
return only rows with STOP > expstart
expend : float, int
return only rows with START < expend
    Returns
    -------
    regions : iterator
        (LX, LY, DX, DY) tuples for the rows matching segment and time range
    """
with fits.open(filename) as hdu:
index = np.where((hdu[1].data['SEGMENT'] == segment) &
(hdu[1].data['START'] < expend) &
(hdu[1].data['STOP'] > expstart))[0]
rows = hdu[1].data[index]
return zip(rows['LX'], rows['LY'], rows['DX'], rows['DY'])
#-------------------------------------------------------------------------------
def make_all_hv_maps():
for hv in range(150, 179):
tmp_hdu = fits.open(os.path.join( MONITOR_DIR, 'total_gain.fits'))
for ext in (1, 2):
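            #-- scale the map to the HV in question;
            #-- e.g. hv = 167 lowers it by 0.393 * (178 - 167) ~= 4.3 gain bins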
tmp_hdu[ext].data -= .393 * (float(178) - hv)
tmp_hdu.writeto( os.path.join( MONITOR_DIR, 'total_gain_%d.fits' % hv ), clobber=True )
print('WRITING total_gain_{}.fits TO {}'.format(hv, MONITOR_DIR))
#-------------------------------------------------------------------------------
def make_total_gain(gainmap_dir=None, segment='FUV', start_mjd=55055, end_mjd=70000, min_hv=163, max_hv=175, reverse=False):
if segment == 'FUVA':
search_string = 'l_*_00_???_cci_gainmap.fits'
elif segment == 'FUVB':
search_string = 'l_*_01_???_cci_gainmap.fits'
all_datasets = [item for item in glob.glob(os.path.join(gainmap_dir, search_string))]
if not len(all_datasets):
search_string = search_string + '.gz'
all_datasets = [item for item in glob.glob(os.path.join(gainmap_dir, search_string))]
all_datasets.sort()
print("Combining {} datasets".format(len(all_datasets)))
if reverse:
all_datasets = all_datasets[::-1]
out_data = np.zeros((YLEN, XLEN))
for item in all_datasets:
cci_hdu = fits.open(item)
if not cci_hdu[0].header['EXPSTART'] >= start_mjd:
continue
if not cci_hdu[0].header['EXPSTART'] <= end_mjd:
continue
if not cci_hdu[0].header['DETHV'] >= min_hv:
continue
if not cci_hdu[0].header['DETHV'] <= max_hv:
continue
cci_data = cci_hdu['MOD_GAIN'].data
dethv = cci_hdu[0].header['DETHV']
index = np.where(cci_data)
#cci_data[index] += .393 * (float(178) - dethv)
index_both = np.where((cci_data > 0) & (out_data > 0))
#mean_data = np.mean([cci_data, out_data], axis=0)
out_data[index] = cci_data[index]
#out_data[index_both] = mean_data[index_both]
return enlarge(out_data, x=X_BINNING, y=Y_BINNING)
#------------------------------------------------------------
def make_all_gainmaps_entry():
parser = argparse.ArgumentParser()
parser.add_argument("-f",
'--filename',
type=str,
default='total_gain.fits',
help="Filename to write gain file to")
parser.add_argument("-d",
'--dir',
type=str,
default='/grp/hst/cos/Monitors/CCI/',
help="directory containing the gainmaps")
parser.add_argument("-s",
'--start',
type=float,
default=55055.0,
help="MJD of the first gainmap to include")
parser.add_argument("-e",
'--end',
type=float,
default=70000,
help="MJD of the last gainmap to include")
parser.add_argument('--hvmin',
type=int,
default=163,
help="Minimum DETHVS of gainmaps to include")
parser.add_argument('--hvmax',
type=int,
default=175,
help="Maximum DETHV of gainmaps to include.")
args = parser.parse_args()
print("Creating all gainmaps using:")
print(args)
make_all_gainmaps(filename=args.filename,
gainmap_dir=args.dir,
start_mjd=args.start,
end_mjd=args.end,
min_hv=args.hvmin,
max_hv=args.hvmax)
#------------------------------------------------------------
def make_all_gainmaps(filename, gainmap_dir, start_mjd=55055, end_mjd=70000, min_hv=163, max_hv=175):
"""
"""
#add_cumulative_data(ending)
hdu_out = fits.HDUList(fits.PrimaryHDU())
#-- Adding primary header with file specifications to make results reproducible
hdu_out[0].header['TELESCOP'] = 'HST'
hdu_out[0].header['INSTRUME'] = 'COS'
hdu_out[0].header['DETECTOR'] = 'FUV'
hdu_out[0].header['OPT_ELEM'] = 'ANY'
hdu_out[0].header['FILETYPE'] = 'GAINMAP'
hdu_out[0].header['EXPSTART'] = start_mjd
hdu_out[0].header['EXP_END'] = end_mjd
hdu_out[0].header['MIN_HV'] = min_hv
hdu_out[0].header['MAX_HV'] = max_hv
hdu_out[0].header['CCI_DIR'] = gainmap_dir
#-- Data ext
hdu_out.append(fits.ImageHDU(data=make_total_gain(gainmap_dir, 'FUVA', start_mjd, end_mjd, min_hv, max_hv, reverse=True)))
hdu_out[1].header['EXTNAME'] = 'FUVAINIT'
hdu_out.append(fits.ImageHDU(data=make_total_gain(gainmap_dir, 'FUVB', start_mjd, end_mjd, min_hv, max_hv, reverse=True)))
hdu_out[2].header['EXTNAME'] = 'FUVBINIT'
hdu_out.append(fits.ImageHDU(data=make_total_gain(gainmap_dir, 'FUVA', start_mjd, end_mjd, min_hv, max_hv)))
hdu_out[3].header['EXTNAME'] = 'FUVALAST'
hdu_out.append(fits.ImageHDU(data=make_total_gain(gainmap_dir, 'FUVB', start_mjd, end_mjd, min_hv, max_hv)))
hdu_out[4].header['EXTNAME'] = 'FUVBLAST'
hdu_out.writeto(filename, clobber=True)
hdu_out.close()
print('Making ALL HV Maps')
###make_all_hv_maps()
#------------------------------------------------------------
def add_cumulative_data(ending):
"""add cumulative counts and charge to each file
Will overwrite current data, so if files are added
in middle of list, they should be accounted for.
"""
data_list = glob.glob(os.path.join(MONITOR_DIR,'*%s*gainmap.fits'%ending))
data_list.sort()
print('Adding cumulative data to gainmaps for %s'%(ending))
shape = fits.getdata(data_list[0], ext=('MOD_GAIN', 1)).shape
total_counts = np.zeros(shape)
total_charge = np.zeros(shape)
for cci_name in data_list:
hdu = fits.open(cci_name, mode='update')
        #-- Skip the file if either extension holds no data
        if hdu['COUNTS'].data is None or hdu['CHARGE'].data is None:
            print("Skipping {}".format(cci_name))
            hdu.close()
            continue
total_counts += hdu['COUNTS'].data
total_charge += hdu['CHARGE'].data
ext_names = [ext.name for ext in hdu]
if 'CUMLCNTS' in ext_names:
hdu['CUMLCNTS'].data = total_counts
else:
head_to_add = fits.Header()
            head_to_add['EXTNAME'] = 'CUMLCNTS'
hdu.append(fits.ImageHDU(header=head_to_add, data=total_counts))
if 'CUMLCHRG' in ext_names:
hdu['CUMLCHRG'].data = total_charge
else:
head_to_add = fits.Header()
            head_to_add['EXTNAME'] = 'CUMLCHRG'
hdu.append(fits.ImageHDU(header=head_to_add, data=total_charge))
hdu.flush()
hdu.close()
#------------------------------------------------------------
def measure_gainimage(data_cube, mincounts=30, phlow=1, phhigh=31):
""" measure the modal gain at each pixel
returns a 2d gainmap
"""
    # Suppress certain PHA ranges
for i in list(range(0, phlow+1)) + list(range(phhigh, len(data_cube))):
data_cube[i] = 0
counts_im = np.sum(data_cube, axis=0)
out_gain = np.zeros(counts_im.shape)
out_counts = np.zeros(counts_im.shape)
out_std = np.zeros(counts_im.shape)
index_search = np.where(counts_im >= mincounts)
    if not len(index_search[0]):
return out_gain, out_counts, out_std
for y, x in zip(*index_search):
dist = data_cube[:, y, x]
g, fit_g, success = fit_distribution(dist)
if not success:
continue
#-- double-check
if g.mean.value <= 3:
sub_dist = dist - g(np.arange(len(dist)))
sub_dist[sub_dist < 0] = 0
g2, fit2_g, success = fit_distribution(sub_dist, start_mean=15)
if success and abs(g2.mean.value - g.mean.value) > 1:
continue
out_gain[y, x] = g.mean.value
out_counts[y, x] = dist.sum()
out_std[y, x] = g.stddev.value
return out_gain, out_counts, out_std
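#-- Hedged illustration of how a gainmap from measure_gainimage could be
#-- thresholded into a sag mask; the default of 3 mirrors CCI.bad_index above,
#-- and MODAL_GAIN_LIMIT (see module docstring) would normally supply it.
def _example_flag_sagged_pixels(gain_image, threshold=3):
    """Return indices of measured pixels at or below `threshold`.
    Minimal sketch only: a value of 0 marks pixels with no measurement.
    """
    return np.where((gain_image > 0) & (gain_image <= threshold))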
#------------------------------------------------------------
def fit_ok(fit, fitter, start_mean, start_amp, start_std):
#-- Check for success in the LevMarLSQ fitting
if not fitter.fit_info['ierr'] in [1, 2, 3, 4]:
return False
#-- If the peak is too low
if fit.amplitude.value < 12:
return False
if not fit.stddev.value:
return False
#-- Check if fitting stayed at initial
if not (start_mean - fit.mean.value):
return False
if not (start_amp - fit.amplitude.value):
return False
#-- Not sure this is possible, but checking anyway
if np.isnan(fit.mean.value):
return False
if (fit.mean.value <= 0) or (fit.mean.value >= 31):
return False
return True
#-------------------------------------------------------------------------------
def write_and_pull_gainmap(cci_name, out_dir=None):
"""Make modal gainmap for cos cumulative image.
"""
"""
#-- Disabling lookback
previous_list = []###get_previous(current)
mincounts = 30
print 'Adding in previous data to distributions'
for past_CCI in previous_list:
print past_CCI.cci_name
index = np.where(np.sum(current.big_array[1:31], axis=0) <= mincounts)
for y, x in zip(*index):
prev_dist = past_CCI.big_array[:, y, x]
if prev_dist.sum() > mincounts:
continue
else:
current.big_array[:, y, x] += prev_dist
"""
current = CCI(cci_name, xbinning=X_BINNING, ybinning=Y_BINNING)
out_name = os.path.join(out_dir, cci_name.replace('.fits', '_gainmap.fits'))
logger.debug("writing gainmap to {}".format(out_name))
current.write(out_name)
index = np.where(current.gain_image > 0)
info = {'segment': current.segment,
'dethv': int(current.dethv),
'expstart': round(current.expstart, 5)}
if not len(index[0]):
yield info
else:
for y, x in zip(*index):
info['gain'] = round(float(current.gain_image[y, x]), 3)
info['counts'] = round(float(current.counts_image[y, x]), 3)
info['std'] = round(float(current.std_image[y, x]), 3)
info['x'] = int(x)
info['y'] = int(y)
yield info
"""
if current.accum_data:
current.extracted_charge += current.accum_data*(1.0e-12*10**((gain_image-11.75)/20.5))
if gain_flag == '':
gain_flag = 'fine'
fit_modal_gain = fit_center
fit_gain_width = fit_std
current.gain_image[y,x] = fit_modal_gain
current.modal_gain_width[y,x] = fit_gain_width
if fit_center > 21:
print '##########################'
print 'WARNING MODAL GAIN: %3.2f'%(fit_center)
print 'Modal gain has been measured to be greater than 21. '
print 'PHA upper limit of 23 may cause flux to be lost.'
print '##########################'
send_email( subject='CCI high modal gain found',
message='Modal gain of %3.2f found on segment %s at (x,y,MJD) %d,%d,%5.5f'%(fit_center,current.KW_SEGMENT,x,y,current.KW_EXPSTART) )
"""
#-------------------------------------------------------------------------------
def fit_distribution(dist, start_mean=None, start_amp=None, start_std=None):
x_vals = np.arange(len(dist))
start_mean = start_mean or dist.argmax()
start_amp = start_amp or int(max(dist))
start_std = start_std or 1.05
g_init = models.Gaussian1D(amplitude=start_amp,
mean=start_mean,
stddev=start_std,
bounds={'mean': [1, 30]})
g_init.stddev.fixed = True
fit_g = fitting.LevMarLSQFitter()
g = fit_g(g_init, x_vals, dist)
success = fit_ok(g, fit_g, start_mean, start_amp, start_std)
return g, fit_g, success
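#-- Illustrative check (a sketch, assuming the module-level numpy import):
#-- fit a synthetic Gaussian PHA distribution and recover its mean.
#
#   pha = np.arange(32)
#   dist = 100.0 * np.exp(-0.5 * ((pha - 12.3) / 1.05) ** 2)
#   g, fit_g, success = fit_distribution(dist)
#   if success:
#       print(g.mean.value)  # ~12.3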
#------------------------------------------------------------
def get_previous(current_cci):
"""Populates list of CCI objects.
A list of cci_objects with the same DETHV and within
NUM_DAYS_PREVIOUS before current_cci will be created.
Parameters
----------
current_cci: cci object
the current cci_object
Returns
-------
output: list
list of previous cci_objects
"""
#---Lookback_time
NUM_DAYS_PREVIOUS = 7.1 ##Just in case something is really close
    out_list = []
print('Retrieving data from previous CCIs:')
print('-----------------------------------')
dethv = current_cci.KW_DETHV
expstart = current_cci.KW_EXPSTART
segment = current_cci.KW_SEGMENT
cci_name = current_cci.input_file
if ((not NUM_DAYS_PREVIOUS) or (not expstart)):
print('None to find')
return out_list
    path, file_name = os.path.split(cci_name)
if segment == 'FUVA':
ending = '*' + FUVA_string + '*'
elif segment == 'FUVB':
ending = '*' + FUVB_string + '*'
    else:
        print('Error, segment error in %s' % file_name)
        print('Returning blank list')
        return []
cci_list = glob.glob(CCI_DIR + ending)
cci_list.sort()
current_cci_index = cci_list.index(cci_name)
    for cci_file in cci_list[:current_cci_index][::-1]:
        cci_hv = fits.getval(cci_file, 'DETHV')
        cci_expstart = fits.getval(cci_file, 'EXPSTART')
        if cci_expstart < (expstart - NUM_DAYS_PREVIOUS):
            break
        if cci_hv == dethv:
            out_list.append(CCI(cci_file, xbinning=X_BINNING, ybinning=Y_BINNING))
            if len(out_list) >= 2 * NUM_DAYS_PREVIOUS:
                print('Breaking off list. %d files retrieved' % len(out_list))
                break
print('Found: %d files' % (len(out_list)))
print([item.cci_name for item in out_list])
print('-----------------------------------')
return out_list
#-------------------------------------------------------------------------------
def explode(filename):
"""Expand an events list into a 3D data cube of PHA images
This function bins event lists into the 3D datacube with a format
like the CSUMs. 1 image containing the events with each integer PHA value
will be stacked into the output datacube.
Parameters
----------
filename : str
name of the COS corrtag file
Returns
-------
out_cube : np.ndarray
3D array of images for each PHA
"""
if isinstance(filename, str):
events = fits.getdata(filename, ext=('events', 1))
else:
raise ValueError('{} needs to be a filename of a COS corrtag file'.format(filename))
out_cube = np.empty((32, 1024, 16384))
for phaval in range(0, 32):
index = np.where(events['PHA'] == phaval)[0]
if not len(index):
out_cube[phaval] = 0
continue
        #-- Bin edges run to 1024/16384 so each unit detector pixel maps to
        #-- exactly one bin.
        image, y_range, x_range = np.histogram2d(events['YCORR'][index],
                                                 events['XCORR'][index],
                                                 bins=(1024, 16384),
                                                 range=((0, 1024), (0, 16384)))
out_cube[phaval] = image
return out_cube
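#-- Illustrative usage (the filename is hypothetical); note that the output
#-- cube is 32 x 1024 x 16384 float64, roughly 4 GB in memory:
#
#   cube = explode('lxxxxxxxxq_corrtag_a.fits')
#   counts_image = cube.sum(axis=0)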
#-------------------------------------------------------------------------------
|
|
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
""" Creates a zip file of a build and upload it to google storage.
This will be used by the ASAN/TSAN security tests uploaded to ClusterFuzz.
To archive files on Google Storage, set the 'gs_bucket' key in the
--factory-properties to 'gs://<bucket-name>'. To control access to archives,
set the 'gs_acl' key to the desired canned-acl (e.g. 'public-read', see
https://developers.google.com/storage/docs/accesscontrol#extension for other
supported canned-acl values). If no 'gs_acl' key is set, the bucket's default
object ACL will be applied (see
https://developers.google.com/storage/docs/accesscontrol#defaultobjects).
"""
import optparse
import os
import re
import stat
import sys
from common import chromium_utils
from common.chromium_utils import GS_COMMIT_POSITION_NUMBER_KEY, \
GS_COMMIT_POSITION_KEY, \
GS_GIT_COMMIT_KEY
from slave import build_directory
from slave import slave_utils
class StagingError(Exception):
pass
def ShouldPackageFile(filename, target):
# Disable 'unused argument' warning for 'target' | pylint: disable=W0613
"""Returns true if the file should be a part of the resulting archive."""
if chromium_utils.IsMac():
file_filter = r'^.+\.(a)$'
elif chromium_utils.IsLinux():
file_filter = r'^.+\.(o|a|d)$'
elif chromium_utils.IsWindows():
file_filter = r'^.+\.(obj|lib|pch|exp)$'
else:
raise NotImplementedError('%s is not supported.' % sys.platform)
if re.match(file_filter, filename):
return False
# Skip files that we don't care about. Mostly directories.
things_to_skip = chromium_utils.FileExclusions()
if filename in things_to_skip:
return False
return True
def GetBuildSortKey(options, primary_project):
"""Returns: (str) the build sort key for the specified project.
Attempts to identify the build sort key for a given project. If
'primary_project' is None or if there is no sort key for the specified
primary project, the checkout-wide sort key will be used.
Raises:
chromium_utils.NoIdentifiedRevision: if the checkout-wide sort key could not
be resolved.
"""
if primary_project:
try:
return chromium_utils.GetBuildSortKey(options, project=primary_project)
except chromium_utils.NoIdentifiedRevision:
pass
return chromium_utils.GetBuildSortKey(options)
def GetGitCommit(options, primary_project):
"""Returns: (str/None) the git commit hash for a given project.
Attempts to identify the git commit hash for a given project. If
'primary_project' is None, or if there is no git commit hash for the specified
primary project, the checkout-wide commit hash will be used.
If none of the candidate configurations are present, 'None' will be returned.
"""
projects = []
if primary_project:
projects += [primary_project]
projects += [None]
for project in projects:
try:
return chromium_utils.GetGitCommit(options, project=project)
except chromium_utils.NoIdentifiedRevision:
pass
return None
def archive(options, args):
# Disable 'unused argument' warning for 'args' | pylint: disable=W0613
build_dir = build_directory.GetBuildOutputDirectory()
src_dir = os.path.abspath(os.path.dirname(build_dir))
build_dir = os.path.join(build_dir, options.target)
revision_dir = (options.revision_dir or
options.factory_properties.get('revision_dir'))
primary_project = chromium_utils.GetPrimaryProject(options)
build_sortkey_branch, build_sortkey_value = GetBuildSortKey(
options,
primary_project)
build_git_commit = GetGitCommit(options, primary_project)
staging_dir = slave_utils.GetStagingDir(src_dir)
chromium_utils.MakeParentDirectoriesWorldReadable(staging_dir)
print 'Full Staging in %s' % staging_dir
print 'Build Directory %s' % build_dir
# Build the list of files to archive.
zip_file_list = [f for f in os.listdir(build_dir)
if ShouldPackageFile(f, options.target)]
if options.cf_archive_subdir_suffix is None:
subdir_suffix = options.factory_properties.get(
'cf_archive_subdir_suffix', '')
else:
subdir_suffix = options.cf_archive_subdir_suffix
pieces = [chromium_utils.PlatformName(), options.target.lower()]
if subdir_suffix:
pieces.append(subdir_suffix)
subdir = '-'.join(pieces)
# Components like v8 get a <name>-v8-component-<revision> infix.
component = ''
if revision_dir:
component = '-%s-component' % revision_dir
prefix = (options.cf_archive_name or
options.factory_properties.get('cf_archive_name', 'cf_archive'))
sortkey_path = chromium_utils.GetSortableUploadPathForSortKey(
build_sortkey_branch, build_sortkey_value)
zip_file_name = '%s-%s-%s%s-%s' % (prefix,
chromium_utils.PlatformName(),
options.target.lower(),
component,
sortkey_path)
(zip_dir, zip_file) = chromium_utils.MakeZip(staging_dir,
zip_file_name,
zip_file_list,
build_dir,
raise_error=True)
chromium_utils.RemoveDirectory(zip_dir)
if not os.path.exists(zip_file):
raise StagingError('Failed to make zip package %s' % zip_file)
chromium_utils.MakeWorldReadable(zip_file)
# Report the size of the zip file to help catch when it gets too big.
zip_size = os.stat(zip_file)[stat.ST_SIZE]
print 'Zip file is %ld bytes' % zip_size
gs_bucket = (options.gs_bucket or
options.factory_properties.get('gs_bucket', None))
gs_acl = options.gs_acl or options.factory_properties.get('gs_acl', None)
gs_metadata = {
GS_COMMIT_POSITION_NUMBER_KEY: build_sortkey_value,
}
if build_sortkey_branch:
gs_metadata[GS_COMMIT_POSITION_KEY] = chromium_utils.BuildCommitPosition(
build_sortkey_branch, build_sortkey_value)
if build_git_commit:
gs_metadata[GS_GIT_COMMIT_KEY] = build_git_commit
status = slave_utils.GSUtilCopyFile(zip_file, gs_bucket, subdir=subdir,
gs_acl=gs_acl, metadata=gs_metadata)
if status:
raise StagingError('Failed to upload %s to %s. Error %d' % (zip_file,
gs_bucket,
status))
else:
# Delete the file, it is not needed anymore.
os.remove(zip_file)
return status
def main(argv):
option_parser = optparse.OptionParser()
option_parser.add_option('--target', default='Release',
help='build target to archive (Debug or Release)')
option_parser.add_option('--build-dir', help='ignored')
option_parser.add_option('--cf_archive_name',
help='prefix of the archive zip file')
option_parser.add_option('--cf_archive_subdir_suffix',
help='suffix of the archive directory')
option_parser.add_option('--gs_acl', help='ACLs to be used on upload')
option_parser.add_option('--gs_bucket',
help='the google storage bucket name')
option_parser.add_option('--revision_dir',
help=('component builds: if set, use directory '
'revision instead of chromium revision and '
'add "-component" to the archive name'))
chromium_utils.AddPropertiesOptions(option_parser)
options, args = option_parser.parse_args(argv)
return archive(options, args)
if '__main__' == __name__:
sys.exit(main(sys.argv))
|
|
"""Template class with arithmetic operations that can be passed through neural
network.
All classes that are being used for derest should inherit from this class."""
class Numlike(object):
"""Template class with arithmetic operations that can be passed through
neural network.
All classes that are being used for derest should inherit from this
class."""
def __init__(self):
"""Create numlike."""
pass
def __getitem__(self, at):
"""Returns specified slice of numlike.
:at: Coordinates / slice to be taken.
:rtype: Numlike
"""
raise NotImplementedError
def __setitem__(self, at, other):
"""Just like Theano set_subtensor function, but as a operator.
:at: Coordinates / slice to be set.
:other: Data to be put at 'at'.
"""
raise NotImplementedError
@property
def shape(self):
"""Returns shape of numlike.
:rtype: integer or tuple of integers or theano shape
"""
raise NotImplementedError
def __add__(self, other):
"""Returns sum of two numlikes.
:param other: value to be added.
:type other: Numlike or np.ndarray or theano.tensor
:rtype: Numlike
"""
raise NotImplementedError
def __sub__(self, other):
"""Returns difference between two numlikes.
:param other: value to be subtracted.
:type other: Numlike or np.ndarray or theano.tensor
:rtype: Numlike
"""
raise NotImplementedError
def __mul__(self, other):
"""Returns product of two numlikes.
:param other: value to be multiplied.
:type other: Numlike or np.ndarray or theano.tensor
:rtype: Numlike
"""
raise NotImplementedError
def __div__(self, other):
"""Returns quotient of self and other.
:param other: divisor
:type other: Numlike or np.ndarray or theano.tensor
:rtype: Numlike
"""
raise NotImplementedError
def __rdiv__(self, other):
"""Returns quotient of other and self.
:param other: dividend
:type other: float
        :rtype: Numlike
        .. warning:: divisor (self) should not contain zero; other must be
                     a float
"""
raise NotImplementedError
def reciprocal(self):
"""Returns reciprocal of the Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def neg(self):
"""Returns (-1) * Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def __neg__(self):
return self.neg()
def exp(self):
"""Returns Numlike representing the exponential of the Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def square(self):
"""Returns square of the Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def power(self, exponent):
"""For numlike N, returns N^exponent.
:param float exponent: Number to be passed as exponent to N^exponent.
:rtype: Numlike
"""
raise NotImplementedError
def __pow__(self, exponent):
return self.power(exponent)
def dot(self, other):
"""Dot product of numlike vector and a other.
:param unspecified other: second dot param, type to be specified
:rtype: Numlike
"""
raise NotImplementedError
def max(self, other):
"""Returns maximum of self and other.
        :param unspecified other: second max param, type to be specified
:rtype: Numlike
"""
raise NotImplementedError
def amax(self, axis=None, keepdims=False):
"""Returns maximum of a Numlike along an axis.
Works like theano.tensor.max
:param axis: axis along which max is evaluated
        :param bool keepdims: whether reduced axes are kept as dimensions
                              of size one
:rtype: Numlike
"""
raise NotImplementedError
def reshape(self, shape):
"""Reshapes numlike tensor like theano Tensor.
:param integer tuple shape: shape to be set
:rtype: Numlike
"""
raise NotImplementedError
def flatten(self):
"""Flattens numlike tensor like theano Tensor.
:rtype: Numlike
"""
raise NotImplementedError
def sum(self, axis=None, dtype=None, keepdims=False):
"""Sum of array elements over a given axis like in numpy.ndarray.
:param axis: axis along which this function sums
:param dtype: just like dtype argument in
theano.tensor.sum
:type dtype: numeric type or None
        :param bool keepdims: whether reduced axes are kept as dimensions
                              of size one
:type axis: integer, tuple of integers or None
:rtype: Numlike
"""
raise NotImplementedError
def abs(self):
"""Returns absolute value of Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def __abs__(self):
return self.abs()
@property
def T(self):
"""Tensor transposition like in numpy.ndarray.
:rtype: Numlike
"""
raise NotImplementedError
@classmethod
def from_shape(cls, shp, neutral=True):
"""Returns Numlike of given shape.
:param integer tuple shp: shape to be set
:param bool neutral: whether created Numlike should have neutral
values or significant values.
:rtype: Numlike
"""
raise NotImplementedError
def reshape_for_padding(self, shape, padding):
"""Returns padded Numlike.
:param tuple of 4 integers shape: shape of input in format
(batch size, number of channels,
height, width)
:param pair of integers padding: padding to be applied
:returns: padded layer_input
:rtype: Numlike
"""
raise NotImplementedError
def broadcast(self, shape):
"""Broadcast Numlike into given shape
:param shape: tuple of integers
:rtype: Numlike
"""
raise NotImplementedError
@staticmethod
def stack(numlikes, axis=0):
""" Takes a sequence of numlikes and stack them on given axis
to make a single numlike. The size in dimension axis of the result
will be equal to the number of numlikes passed.
:param numlikes: numlikes of the same shape
:type numlikes: array or tuple of Numlikes
:param int axis: the axis along which numlikes will be stacked
:rtype: Numlike
"""
raise NotImplementedError
def eval(self, *args):
"""Returns some readable form of stored value."""
raise NotImplementedError
def op_relu(self):
"""Returns result of relu operation on given Numlike.
:rtype: Numlike
"""
raise NotImplementedError
def op_softmax(self, input_shp):
"""Returns result of softmax operation on given Numlike.
:param integer input_shp: shape of 1D input
:rtype: Numlike
"""
raise NotImplementedError
def op_norm(self, input_shape, local_range, k, alpha, beta):
"""Returns estimated activation of LRN layer.
:param input_shape: shape of input in format
(n_channels, height, width)
:param integer local_range: size of local range in local range
normalization
:param integer k: local range normalization k argument
:param integer alpha: local range normalization alpha argument
:param integer beta: local range normalization beta argument
:type input_shape: tuple of 3 integers
:rtype: Numlike
"""
raise NotImplementedError
def op_conv(self, weights, image_shape, filter_shape, biases, stride,
padding, n_groups):
"""Returns estimated activation of convolution applied to Numlike.
:param weights: weights tensor in format (number of output channels,
number of input channels,
filter height,
filter width)
:param image_shape: shape of input in the format
(number of input channels, image height, image width)
:param filter_shape: filter shape in the format
(number of output channels, filter height,
filter width)
:param biases: biases in convolution
:param stride: pair representing interval at which to apply the filters
:param padding: pair representing number of zero-valued pixels to add
on each side of the input.
:param n_groups: number of groups input and output channels will be
split into, two channels are connected only if they
belong to the same group.
:type image_shape: tuple of 3 integers
:type weights: 3D numpy.ndarray or theano.tensor
:type filter_shape: tuple of 3 integers
:type biases: 1D numpy.ndarray or theano.vector
:type stride: pair of integers
:type padding: pair of integers
:type n_groups: integer
:rtype: Numlike
"""
raise NotImplementedError
def op_d_relu(self, activation):
"""Returns estimated impact of input of relu layer on output of
network.
:param Numlike activation: estimated activation of input
:param Numlike self: estimated impact of output of layer on output
of network in shape (batch_size, number of
channels, height, width)
:returns: Estimated impact of input on output of network
:rtype: Numlike
"""
raise NotImplementedError
def op_d_max_pool(self, activation, input_shape, poolsize, stride,
padding):
"""Returns estimated impact of max pool layer on output of network.
:param Numlike self: estimated impact of output of layer on output
of network in shape (batch_size, number of
channels, height, width)
:param Numlike activation: estimated activation of input
:param input_shape: shape of layer input in format (batch size,
number of channels, height, width)
:type input_shape: tuple of 4 integers
:param pair of integers poolsize: pool size in format (height, width),
not equal (1, 1)
:param pair of integers stride: stride of max pool
:param pair of integers padding: padding of max pool
:returns: Estimated impact of input on output of network
:rtype: Numlike
"""
raise NotImplementedError
def op_d_avg_pool(self, activation, input_shape, poolsize, stride,
padding):
"""Returns estimated impact of avg pool layer on output of network.
:param Numlike self: estimated impact of output of layer on output
of network in shape (batch_size, number of
channels, height, width)
:param Numlike activation: estimated activation of input
:param input_shape: shape of layer input in format (batch size,
number of channels, height, width)
:type input_shape: tuple of 4 integers
:param pair of integers poolsize: pool size in format (height, width),
not equal (1, 1)
:param pair of integers stride: stride of avg pool
:param pair of integers padding: padding of avg pool
:returns: Estimated impact of input on output of network
:rtype: Numlike
"""
raise NotImplementedError
def op_d_norm(self, activation, input_shape, local_range, k, alpha,
beta):
"""Returns estimated impact of input of norm layer on output of
network.
:param Numlike self: estimated impact of output of layer on output
of network in shape (batch_size, number of
channels, height, width)
:param Numlike activation: estimated activation of input
:param input_shape: shape of layer input in format (batch size,
number of channels, height, width)
:type input_shape: tuple of 4 integers
:param integer local_range: size of local range in local range
normalization
:param float k: local range normalization k argument
:param float alpha: local range normalization alpha argument
:param float beta: local range normalization beta argument
:rtype: Numlike
"""
raise NotImplementedError
def op_d_conv(self, input_shape, filter_shape, weights,
stride, padding, n_groups, theano_ops=None):
"""Returns estimated impact of input of convolutional layer on output
of network.
:param Numlike self: estimated impact of output of layer on output
of network in shape (batch_size,
number of channels, height, width)
:param input_shape: shape of layer input in the format
(number of batches, number of input channels,
image height, image width)
:type input_shape: tuple of 4 integers
:param filter_shape: filter shape in the format
(number of output channels, filter height,
filter width)
:type filter_shape: tuple of 3 integers
:param weights: Weights tensor in format (number of output channels,
number of input channels,
filter height,
filter width)
:type weights: numpy.ndarray or theano tensor
:param stride: pair representing interval at which to apply the filters
:type stride: pair of integers
:param padding: pair representing number of zero-valued pixels to add
on each side of the input.
:type padding: pair of integers
:param n_groups: number of groups input and output channels will be
split into, two channels are connected only if they
belong to the same group.
:type n_groups: integer
:param theano_ops: map in which theano graph might be saved
:type theano_ops: map of theano functions
:returns: Estimated impact of input on output of network
:rtype: Numlike
"""
raise NotImplementedError
@staticmethod
def derest_output(n_outputs):
"""Generates Numlike of impact of output on output.
:param int n_outputs: Number of outputs of network.
:returns: 2D square Numlike in shape (n_batches, n_outputs) with one
different "1" in every batch.
:rtype: Numlike
"""
raise NotImplementedError
def concat(self, other, axis=0):
"""
        :param other: Numlike variable to be concatenated with
:type other: Numlike
:param axis: The axis along which the Numlikes will be joined.
Default is 0.
:type axis: int, optional
        :returns: Numlike object analogous to
                  np.concatenate([self, other], axis=axis)
"""
raise NotImplementedError
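# A minimal illustrative sketch (not part of the original interface): a toy
# subclass backed by a plain numpy array that implements a few of the
# abstract operations. Real derest classes implement the full interface;
# the name and behaviour here are assumptions for demonstration only.
#
#   import numpy as np
#
#   class NpNumlike(Numlike):
#       def __init__(self, value):
#           super(NpNumlike, self).__init__()
#           self.value = np.asarray(value)
#
#       @property
#       def shape(self):
#           return self.value.shape
#
#       def __add__(self, other):
#           other = other.value if isinstance(other, NpNumlike) else other
#           return NpNumlike(self.value + other)
#
#       def eval(self, *args):
#           return self.value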
|
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Client implementation based on the boto AWS client library
"""
import sys
from heat_cfnclient.openstack.common import log as logging
logger = logging.getLogger(__name__)
from boto.ec2.cloudwatch import CloudWatchConnection
class BotoCWClient(CloudWatchConnection):
'''
Wrapper class for boto CloudWatchConnection class
'''
# TODO(unknown) : These should probably go in the CW API and be imported
DEFAULT_NAMESPACE = "heat/unknown"
METRIC_UNITS = ("Seconds", "Microseconds", "Milliseconds", "Bytes",
"Kilobytes", "Megabytes", "Gigabytes", "Terabytes",
"Bits", "Kilobits", "Megabits", "Gigabits", "Terabits",
"Percent", "Count", "Bytes/Second", "Kilobytes/Second",
"Megabytes/Second", "Gigabytes/Second", "Terabytes/Second",
"Bits/Second", "Kilobits/Second", "Megabits/Second",
"Gigabits/Second", "Terabits/Second", "Count/Second", None)
METRIC_COMPARISONS = (">=", ">", "<", "<=")
ALARM_STATES = ("OK", "ALARM", "INSUFFICIENT_DATA")
METRIC_STATISTICS = ("Average", "Sum", "SampleCount", "Maximum", "Minimum")
# Note, several of these boto calls take a list of alarm names, so
# we could easily handle multiple alarms per-action, but in the
# interests of keeping the client simple, we just handle one 'AlarmName'
def describe_alarm(self, **kwargs):
# If no AlarmName specified, we pass None, which returns
# results for ALL alarms
try:
name = kwargs['AlarmName']
except KeyError:
name = None
return super(BotoCWClient, self).describe_alarms(
alarm_names=[name])
def list_metrics(self, **kwargs):
# list_metrics returns non-null index in next_token if there
# are more than 500 metric results, in which case we have to
# re-read with the token to get the next batch of results
#
# Also note that we can do more advanced filtering by dimension
# and/or namespace, but for simplicity we only filter by
# MetricName for the time being
try:
name = kwargs['MetricName']
except KeyError:
name = None
        results = []
        token = None
        while True:
            batch = super(BotoCWClient, self).list_metrics(
                next_token=token,
                dimensions=None,
                metric_name=name,
                namespace=None)
            results.append(batch)
            # boto's ResultSet is expected to carry the pagination token in
            # next_token; keep reading until the server stops returning one.
            token = getattr(batch, 'next_token', None)
            if not token:
                break
        return results
def put_metric_data(self, **kwargs):
'''
Publish metric data points to CloudWatch
'''
        try:
            metric_name = kwargs['MetricName']
            metric_value = kwargs['MetricValue']
            metric_namespace = kwargs['Namespace']
        except KeyError:
            logger.error("Must pass MetricName, Namespace, MetricValue!")
            return
        # MetricUnit is optional; None is an allowed value in METRIC_UNITS
        metric_unit = kwargs.get('MetricUnit')
# If we're passed AlarmName, we attach it to the metric
# as a dimension
try:
metric_dims = [{'AlarmName': kwargs['AlarmName']}]
except KeyError:
metric_dims = []
if metric_unit not in self.METRIC_UNITS:
logger.error("MetricUnit not an allowed value")
logger.error("MetricUnit must be one of %s" % self.METRIC_UNITS)
return
return super(BotoCWClient, self).put_metric_data(
namespace=metric_namespace,
name=metric_name,
value=metric_value,
timestamp=None, # This means use "now" in the engine
unit=metric_unit,
dimensions=metric_dims,
statistics=None)
def set_alarm_state(self, **kwargs):
return super(BotoCWClient, self).set_alarm_state(
alarm_name=kwargs['AlarmName'],
state_reason=kwargs['StateReason'],
state_value=kwargs['StateValue'],
state_reason_data=kwargs['StateReasonData'])
def format_metric_alarm(self, alarms):
'''
Return string formatted representation of
boto.ec2.cloudwatch.alarm.MetricAlarm objects
'''
ret = []
for s in alarms:
ret.append("AlarmName : %s" % s.name)
ret.append("AlarmDescription : %s" % s.description)
ret.append("ActionsEnabled : %s" % s.actions_enabled)
ret.append("AlarmActions : %s" % s.alarm_actions)
ret.append("AlarmArn : %s" % s.alarm_arn)
ret.append("AlarmConfigurationUpdatedTimestamp : %s" %
s.last_updated)
ret.append("ComparisonOperator : %s" % s.comparison)
ret.append("Dimensions : %s" % s.dimensions)
ret.append("EvaluationPeriods : %s" % s.evaluation_periods)
ret.append("InsufficientDataActions : %s" %
s.insufficient_data_actions)
ret.append("MetricName : %s" % s.metric)
ret.append("Namespace : %s" % s.namespace)
ret.append("OKActions : %s" % s.ok_actions)
ret.append("Period : %s" % s.period)
ret.append("StateReason : %s" % s.state_reason)
ret.append("StateUpdatedTimestamp : %s" %
s.last_updated)
ret.append("StateValue : %s" % s.state_value)
ret.append("Statistic : %s" % s.statistic)
ret.append("Threshold : %s" % s.threshold)
ret.append("Unit : %s" % s.unit)
ret.append("--")
return '\n'.join(ret)
def format_metric(self, metrics):
'''
Return string formatted representation of
boto.ec2.cloudwatch.metric.Metric objects
'''
# Boto appears to return metrics as a list-inside-a-list
# probably a bug in boto, but work around here
if len(metrics) == 1:
metlist = metrics[0]
elif len(metrics) == 0:
metlist = []
else:
# Shouldn't get here, unless boto gets fixed..
logger.error("Unexpected metric list-of-list length (boto fixed?)")
return "ERROR\n--"
ret = []
for m in metlist:
ret.append("MetricName : %s" % m.name)
ret.append("Namespace : %s" % m.namespace)
ret.append("Dimensions : %s" % m.dimensions)
ret.append("--")
return '\n'.join(ret)
def get_client(port=None, aws_access_key=None, aws_secret_key=None):
"""
Returns a new boto CloudWatch client connection to a heat server
Note : Configuration goes in /etc/boto.cfg, not via arguments
"""
# Note we pass None/None for the keys so boto reads /etc/boto.cfg
# Also note is_secure is defaulted to False as HTTPS connections
# don't seem to work atm, FIXME
cloudwatch = BotoCWClient(aws_access_key_id=aws_access_key,
aws_secret_access_key=aws_secret_key,
is_secure=False,
port=port,
path="/v1")
if cloudwatch:
logger.debug("Got CW connection object OK")
else:
logger.error("Error establishing CloudWatch connection!")
sys.exit(1)
return cloudwatch
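# Illustrative usage (a sketch; the port value is assumed to match the
# heat-api-cloudwatch service, and credentials come from /etc/boto.cfg):
#
#   client = get_client(port=8003)
#   client.put_metric_data(MetricName='ServiceFailure',
#                          MetricValue=1,
#                          MetricUnit='Count',
#                          Namespace='system/linux')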
|
|
from oidc_provider.lib.errors import RedirectUriError
try:
from urllib.parse import urlencode, quote
except ImportError:
from urllib import urlencode, quote
try:
from urllib.parse import parse_qs, urlsplit
except ImportError:
from urlparse import parse_qs, urlsplit
import uuid
from mock import patch, mock
from django.contrib.auth.models import AnonymousUser
from django.core.management import call_command
from django.core.urlresolvers import reverse
from django.test import (
RequestFactory,
override_settings,
)
from django.test import TestCase
from jwkest.jwt import JWT
from oidc_provider import settings
from oidc_provider.tests.app.utils import (
create_fake_user,
create_fake_client,
FAKE_CODE_CHALLENGE,
is_code_valid,
)
from oidc_provider.views import AuthorizeView
from oidc_provider.lib.endpoints.authorize import AuthorizeEndpoint
class AuthorizeEndpointMixin(object):
    def _auth_request(self, method, data=None, is_user_authenticated=False):
        data = data or {}
url = reverse('oidc_provider:authorize')
if method.lower() == 'get':
query_str = urlencode(data).replace('+', '%20')
if query_str:
url += '?' + query_str
request = self.factory.get(url)
elif method.lower() == 'post':
request = self.factory.post(url, data=data)
else:
raise Exception('Method unsupported for an Authorization Request.')
# Simulate that the user is logged.
request.user = self.user if is_user_authenticated else AnonymousUser()
response = AuthorizeView.as_view()(request)
return response
class AuthorizationCodeFlowTestCase(TestCase, AuthorizeEndpointMixin):
"""
Test cases for Authorize Endpoint using Code Flow.
"""
def setUp(self):
call_command('creatersakey')
self.factory = RequestFactory()
self.user = create_fake_user()
self.client = create_fake_client(response_type='code')
self.client_with_no_consent = create_fake_client(response_type='code', require_consent=False)
self.client_public = create_fake_client(response_type='code', is_public=True)
self.client_public_with_no_consent = create_fake_client(response_type='code', is_public=True, require_consent=False)
self.state = uuid.uuid4().hex
self.nonce = uuid.uuid4().hex
def test_missing_parameters(self):
"""
If the request fails due to a missing, invalid, or mismatching
redirection URI, or if the client identifier is missing or invalid,
the authorization server SHOULD inform the resource owner of the error.
See: https://tools.ietf.org/html/rfc6749#section-4.1.2.1
"""
response = self._auth_request('get')
self.assertEqual(response.status_code, 200)
self.assertEqual(bool(response.content), True)
def test_invalid_response_type(self):
"""
The OP informs the RP by using the Error Response parameters defined
in Section 4.1.2.1 of OAuth 2.0.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthError
"""
# Create an authorize request with an unsupported response_type.
data = {
'client_id': self.client.client_id,
'response_type': 'something_wrong',
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data)
self.assertEqual(response.status_code, 302)
self.assertEqual(response.has_header('Location'), True)
# Should be an 'error' component in query.
self.assertIn('error=', response['Location'])
def test_user_not_logged(self):
"""
The Authorization Server attempts to Authenticate the End-User by
redirecting to the login view.
See: http://openid.net/specs/openid-connect-core-1_0.html#Authenticates
"""
data = {
'client_id': self.client.client_id,
'response_type': 'code',
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data)
# Check if user was redirected to the login view.
self.assertIn(settings.get('OIDC_LOGIN_URL'), response['Location'])
def test_user_consent_inputs(self):
"""
Once the End-User is authenticated, the Authorization Server MUST
obtain an authorization decision before releasing information to
the Client.
See: http://openid.net/specs/openid-connect-core-1_0.html#Consent
"""
data = {
'client_id': self.client.client_id,
'response_type': 'code',
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
# PKCE parameters.
'code_challenge': FAKE_CODE_CHALLENGE,
'code_challenge_method': 'S256',
}
response = self._auth_request('get', data, is_user_authenticated=True)
# Check if hidden inputs exists in the form,
# also if their values are valid.
input_html = '<input name="{0}" type="hidden" value="{1}" />'
to_check = {
'client_id': self.client.client_id,
'redirect_uri': self.client.default_redirect_uri,
'response_type': 'code',
'code_challenge': FAKE_CODE_CHALLENGE,
'code_challenge_method': 'S256',
}
for key, value in iter(to_check.items()):
is_input_ok = input_html.format(key, value) in response.content.decode('utf-8')
self.assertEqual(is_input_ok, True,
msg='Hidden input for "' + key + '" fails.')
def test_user_consent_response(self):
"""
First,
if the user denied the consent we must ensure that
the error response parameters are added to the query component
of the Redirection URI.
Second,
if the user allow the RP then the server MUST return
the parameters defined in Section 4.1.2 of OAuth 2.0 [RFC6749]
by adding them as query parameters to the redirect_uri.
"""
data = {
'client_id': self.client.client_id,
'redirect_uri': self.client.default_redirect_uri,
'response_type': 'code',
'scope': 'openid email',
'state': self.state,
# PKCE parameters.
'code_challenge': FAKE_CODE_CHALLENGE,
'code_challenge_method': 'S256',
}
response = self._auth_request('post', data, is_user_authenticated=True)
# Because user doesn't allow app, SHOULD exists an error parameter
# in the query.
self.assertIn('error=', response['Location'], msg='error param is missing in query.')
self.assertIn('access_denied', response['Location'], msg='"access_denied" code is missing in query.')
# Simulate user authorization.
data['allow'] = 'Accept' # Will be the value of the button.
response = self._auth_request('post', data, is_user_authenticated=True)
is_code_ok = is_code_valid(url=response['Location'],
user=self.user,
client=self.client)
self.assertEqual(is_code_ok, True,
msg='Code returned is invalid.')
# Check if the state is returned.
state = (response['Location'].split('state='))[1].split('&')[0]
self.assertEqual(state, self.state, msg='State change or is missing.')
def test_user_consent_skipped(self):
"""
        If a user previously gave consent to some client (for a specific
        list of scopes), the server skips prompting for the same
        authorization on subsequent requests.
"""
data = {
'client_id': self.client_with_no_consent.client_id,
'redirect_uri': self.client_with_no_consent.default_redirect_uri,
'response_type': 'code',
'scope': 'openid email',
'state': self.state,
'allow': 'Accept',
}
request = self.factory.post(reverse('oidc_provider:authorize'),
data=data)
# Simulate that the user is logged.
request.user = self.user
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertIn('code', response['Location'], msg='Code is missing in the returned url.')
response = self._auth_request('post', data, is_user_authenticated=True)
is_code_ok = is_code_valid(url=response['Location'],
user=self.user,
client=self.client_with_no_consent)
self.assertEqual(is_code_ok, True, msg='Code returned is invalid.')
del data['allow']
response = self._auth_request('get', data, is_user_authenticated=True)
is_code_ok = is_code_valid(url=response['Location'],
user=self.user,
client=self.client_with_no_consent)
self.assertEqual(is_code_ok, True, msg='Code returned is invalid or missing.')
def test_response_uri_is_properly_constructed(self):
"""
Check that the redirect_uri matches the one configured for the client.
Only 'state' and 'code' should be appended.
"""
data = {
'client_id': self.client.client_id,
'redirect_uri': self.client.default_redirect_uri,
'response_type': 'code',
'scope': 'openid email',
'state': self.state,
'allow': 'Accept',
}
response = self._auth_request('post', data, is_user_authenticated=True)
parsed = urlsplit(response['Location'])
params = parse_qs(parsed.query or parsed.fragment)
state = params['state'][0]
        self.assertEqual(self.state, state, msg="State returned is invalid or missing")
is_code_ok = is_code_valid(url=response['Location'],
user=self.user,
client=self.client)
self.assertTrue(is_code_ok, msg='Code returned is invalid or missing')
        self.assertEqual(set(params.keys()), set(['state', 'code']), msg='More than state or code appended as query params')
self.assertTrue(response['Location'].startswith(self.client.default_redirect_uri), msg='Different redirect_uri returned')
def test_unknown_redirect_uris_are_rejected(self):
"""
If a redirect_uri is not registered with the client the request must be rejected.
See http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest.
"""
data = {
'client_id': self.client.client_id,
'response_type': 'code',
'redirect_uri': 'http://neverseenthis.com',
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data)
self.assertIn(RedirectUriError.error, response.content.decode('utf-8'), msg='No redirect_uri error')
def test_manipulated_redirect_uris_are_rejected(self):
"""
If a redirect_uri does not exactly match the registered uri it must be rejected.
See http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest.
"""
data = {
'client_id': self.client.client_id,
'response_type': 'code',
'redirect_uri': self.client.default_redirect_uri + "?some=query",
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data)
self.assertIn(RedirectUriError.error, response.content.decode('utf-8'), msg='No redirect_uri error')
def test_public_client_auto_approval(self):
"""
        It's recommended not to auto-approve requests for non-confidential
        (public) clients using the Authorization Code flow.
"""
data = {
'client_id': self.client_public_with_no_consent.client_id,
'response_type': 'code',
'redirect_uri': self.client_public_with_no_consent.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data, is_user_authenticated=True)
self.assertIn('Request for Permission', response.content.decode('utf-8'))
def test_prompt_none_parameter(self):
"""
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'prompt': 'none'
}
response = self._auth_request('get', data)
# An error is returned if an End-User is not already authenticated.
self.assertIn('login_required', response['Location'])
response = self._auth_request('get', data, is_user_authenticated=True)
# An error is returned if the Client does not have pre-configured consent for the requested Claims.
self.assertIn('consent_required', response['Location'])
@patch('oidc_provider.views.django_user_logout')
def test_prompt_login_parameter(self, logout_function):
"""
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'prompt': 'login'
}
response = self._auth_request('get', data)
self.assertIn(settings.get('OIDC_LOGIN_URL'), response['Location'])
self.assertNotIn(
quote('prompt=login'),
response['Location'],
"Found prompt=login, this leads to infinite login loop. See https://github.com/juanifioren/django-oidc-provider/issues/197."
)
response = self._auth_request('get', data, is_user_authenticated=True)
self.assertIn(settings.get('OIDC_LOGIN_URL'), response['Location'])
        self.assertEqual(logout_function.call_count, 1)
self.assertNotIn(
quote('prompt=login'),
response['Location'],
"Found prompt=login, this leads to infinite login loop. See https://github.com/juanifioren/django-oidc-provider/issues/197."
)
def test_prompt_login_none_parameter(self):
"""
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'prompt': 'login none'
}
response = self._auth_request('get', data)
self.assertIn('login_required', response['Location'])
response = self._auth_request('get', data, is_user_authenticated=True)
self.assertIn('login_required', response['Location'])
@patch('oidc_provider.views.render')
def test_prompt_consent_parameter(self, render_patched):
"""
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'prompt': 'consent'
}
response = self._auth_request('get', data)
self.assertIn(settings.get('OIDC_LOGIN_URL'), response['Location'])
response = self._auth_request('get', data, is_user_authenticated=True)
render_patched.assert_called_once()
        self.assertEqual(render_patched.call_args[0][1],
                         settings.get('OIDC_TEMPLATES')['authorize'])
def test_prompt_consent_none_parameter(self):
"""
Specifies whether the Authorization Server prompts the End-User for reauthentication and consent.
See: http://openid.net/specs/openid-connect-core-1_0.html#AuthRequest
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'prompt': 'consent none'
}
response = self._auth_request('get', data)
self.assertIn('login_required', response['Location'])
response = self._auth_request('get', data, is_user_authenticated=True)
self.assertIn('consent_required', response['Location'])
class AuthorizationImplicitFlowTestCase(TestCase, AuthorizeEndpointMixin):
"""
Test cases for Authorization Endpoint using Implicit Flow.
"""
def setUp(self):
call_command('creatersakey')
self.factory = RequestFactory()
self.user = create_fake_user()
self.client = create_fake_client(response_type='id_token token')
self.client_public = create_fake_client(response_type='id_token token', is_public=True)
self.client_public_no_consent = create_fake_client(
response_type='id_token token', is_public=True,
require_consent=False)
self.client_no_access = create_fake_client(response_type='id_token')
self.client_public_no_access = create_fake_client(response_type='id_token', is_public=True)
self.state = uuid.uuid4().hex
self.nonce = uuid.uuid4().hex
def test_missing_nonce(self):
"""
The `nonce` parameter is REQUIRED if you use the Implicit Flow.
"""
data = {
'client_id': self.client.client_id,
'response_type': self.client.response_type,
'redirect_uri': self.client.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
}
response = self._auth_request('get', data, is_user_authenticated=True)
self.assertIn('#error=invalid_request', response['Location'])
def test_idtoken_token_response(self):
"""
Implicit client requesting `id_token token` receives both id token
and access token as the result of the authorization request.
"""
data = {
'client_id': self.client.client_id,
'redirect_uri': self.client.default_redirect_uri,
'response_type': self.client.response_type,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
'allow': 'Accept',
}
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertIn('access_token', response['Location'])
self.assertIn('id_token', response['Location'])
# same for public client
        data['client_id'] = self.client_public.client_id
        data['redirect_uri'] = self.client_public.default_redirect_uri
        data['response_type'] = self.client_public.response_type
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertIn('access_token', response['Location'])
self.assertIn('id_token', response['Location'])
def test_idtoken_response(self):
"""
Implicit client requesting `id_token` receives
only an id token as the result of the authorization request.
"""
data = {
'client_id': self.client_no_access.client_id,
'redirect_uri': self.client_no_access.default_redirect_uri,
'response_type': self.client_no_access.response_type,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
'allow': 'Accept',
}
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertNotIn('access_token', response['Location'])
self.assertIn('id_token', response['Location'])
# same for public client
        data['client_id'] = self.client_public_no_access.client_id
        data['redirect_uri'] = self.client_public_no_access.default_redirect_uri
        data['response_type'] = self.client_public_no_access.response_type
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertNotIn('access_token', response['Location'])
self.assertIn('id_token', response['Location'])
def test_idtoken_token_at_hash(self):
"""
Implicit client requesting `id_token token` receives
`at_hash` in `id_token`.
"""
data = {
'client_id': self.client.client_id,
'redirect_uri': self.client.default_redirect_uri,
'response_type': self.client.response_type,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
'allow': 'Accept',
}
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertIn('id_token', response['Location'])
# obtain `id_token` portion of Location
components = urlsplit(response['Location'])
fragment = parse_qs(components[4])
id_token = JWT().unpack(fragment["id_token"][0].encode('utf-8')).payload()
self.assertIn('at_hash', id_token)
def test_idtoken_at_hash(self):
"""
Implicit client requesting `id_token` should not receive
`at_hash` in `id_token`.
"""
data = {
'client_id': self.client_no_access.client_id,
'redirect_uri': self.client_no_access.default_redirect_uri,
'response_type': self.client_no_access.response_type,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
'allow': 'Accept',
}
response = self._auth_request('post', data, is_user_authenticated=True)
self.assertIn('id_token', response['Location'])
# obtain `id_token` portion of Location
components = urlsplit(response['Location'])
fragment = parse_qs(components[4])
id_token = JWT().unpack(fragment["id_token"][0].encode('utf-8')).payload()
self.assertNotIn('at_hash', id_token)
def test_public_client_implicit_auto_approval(self):
"""
Public clients using Implicit Flow should be able to reuse consent.
"""
data = {
'client_id': self.client_public_no_consent.client_id,
'response_type': self.client_public_no_consent.response_type,
'redirect_uri': self.client_public_no_consent.default_redirect_uri,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
}
response = self._auth_request('get', data, is_user_authenticated=True)
response_text = response.content.decode('utf-8')
        self.assertEqual(response_text, '')
components = urlsplit(response['Location'])
fragment = parse_qs(components[4])
self.assertIn('access_token', fragment)
self.assertIn('id_token', fragment)
self.assertIn('expires_in', fragment)
class AuthorizationHybridFlowTestCase(TestCase, AuthorizeEndpointMixin):
"""
Test cases for Authorization Endpoint using Hybrid Flow.
"""
def setUp(self):
call_command('creatersakey')
self.factory = RequestFactory()
self.user = create_fake_user()
self.client_code_idtoken_token = create_fake_client(response_type='code id_token token', is_public=True)
self.state = uuid.uuid4().hex
self.nonce = uuid.uuid4().hex
# Base data for the auth request.
self.data = {
'client_id': self.client_code_idtoken_token.client_id,
'redirect_uri': self.client_code_idtoken_token.default_redirect_uri,
'response_type': self.client_code_idtoken_token.response_type,
'scope': 'openid email',
'state': self.state,
'nonce': self.nonce,
'allow': 'Accept',
}
def test_code_idtoken_token_response(self):
"""
        Hybrid client requesting `code id_token token` receives a code, an
        id token and an access token as the result of the authorization
        request.
"""
response = self._auth_request('post', self.data, is_user_authenticated=True)
self.assertIn('#', response['Location'])
self.assertIn('access_token', response['Location'])
self.assertIn('id_token', response['Location'])
self.assertIn('state', response['Location'])
self.assertIn('code', response['Location'])
# Validate code.
is_code_ok = is_code_valid(url=response['Location'],
user=self.user,
client=self.client_code_idtoken_token)
self.assertEqual(is_code_ok, True, msg='Code returned is invalid.')
@override_settings(OIDC_TOKEN_EXPIRE=36000)
def test_access_token_expiration(self):
"""
Add ten hours of expiration to access_token. Check for the expires_in query in fragment.
"""
response = self._auth_request('post', self.data, is_user_authenticated=True)
self.assertIn('expires_in=36000', response['Location'])
class TestCreateResponseURI(TestCase):
def setUp(self):
url = reverse('oidc_provider:authorize')
user = create_fake_user()
client = create_fake_client(response_type='code', is_public=True)
# Base data to create a uri response
data = {
'client_id': client.client_id,
'redirect_uri': client.default_redirect_uri,
'response_type': client.response_type,
}
factory = RequestFactory()
self.request = factory.post(url, data=data)
self.request.user = user
@patch('oidc_provider.lib.endpoints.authorize.create_code')
@patch('oidc_provider.lib.endpoints.authorize.logger.exception')
def test_create_response_uri_logs_to_error(self, log_exception, create_code):
"""
A lot can go wrong when creating a response uri and this is caught with a general Exception error. The
information contained within this error should show up in the error log so production servers have something
to work with when things don't work as expected.
"""
exception = Exception("Something went wrong!")
create_code.side_effect = exception
authorization_endpoint = AuthorizeEndpoint(self.request)
authorization_endpoint.validate_params()
with self.assertRaises(Exception):
authorization_endpoint.create_response_uri()
log_exception.assert_called_once_with('[Authorize] Error when trying to create response uri: %s', exception)
@override_settings(OIDC_SESSION_MANAGEMENT_ENABLE=True)
def test_create_response_uri_generates_session_state_if_session_management_enabled(self):
# RequestFactory doesn't support sessions, so we mock it
self.request.session = mock.Mock(session_key=None)
authorization_endpoint = AuthorizeEndpoint(self.request)
authorization_endpoint.validate_params()
uri = authorization_endpoint.create_response_uri()
self.assertIn('session_state=', uri)
|
|
# coding=utf-8
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_versionedobjects import fields
from cloudpulse.common import exception
from cloudpulse.common import utils
from cloudpulse.db import api as dbapi
from cloudpulse.objects import base
from cloudpulse.openstack.common._i18n import _LI
from cloudpulse.openstack.common import log as logging
LOG = logging.getLogger(__name__)
class Status(object):
CREATE_IN_PROGRESS = 'CREATE_IN_PROGRESS'
CREATE_FAILED = 'CREATE_FAILED'
CREATED = 'CREATED'
UPDATE_IN_PROGRESS = 'UPDATE_IN_PROGRESS'
UPDATE_FAILED = 'UPDATE_FAILED'
UPDATED = 'UPDATED'
DELETE_IN_PROGRESS = 'DELETE_IN_PROGRESS'
DELETE_FAILED = 'DELETE_FAILED'
DELETED = 'DELETED'
@base.CloudpulseObjectRegistry.register
class Cpulse(base.CloudpulsePersistentObject, base.CloudpulseObject,
base.CloudpulseObjectDictCompat):
# Version 1.0: Initial version
VERSION = '1.0'
dbapi = dbapi.get_instance()
fields = {
'id': fields.IntegerField(),
'uuid': fields.UUIDField(nullable=True),
'name': fields.StringField(nullable=True),
'state': fields.StringField(nullable=True),
'result': fields.StringField(nullable=True),
'testtype': fields.StringField(nullable=True)
}
@staticmethod
def _from_db_object(test, db):
"""Converts a database entity to a formal object."""
for field in test.fields:
test[field] = db[field]
test.obj_reset_changes()
return test
@staticmethod
def _from_db_object_list(db_objects, cls, ctx):
"""Converts a list of db entities to a list of formal objects."""
return [Cpulse._from_db_object(cls(ctx), obj) for obj in db_objects]
@base.remotable_classmethod
def get(cls, context, test_id):
"""Find a test based on its id or uuid and return a Cpulse object.
:param test_id: the id *or* uuid of a test.
:returns: a :class:`Cpulse` object.
"""
if utils.is_int_like(test_id):
return cls.get_by_id(context, test_id)
elif utils.is_uuid_like(test_id):
return cls.get_by_uuid(context, test_id)
else:
raise exception.InvalidIdentity(identity=test_id)
@base.remotable_classmethod
def get_by_id(cls, context, test_id):
"""Find a test based on its integer id and return a Cpulse object.
:param test_id: the id of a test.
:returns: a :class:`Cpulse` object.
"""
db = cls.dbapi.get_test_by_id(context, test_id)
test = Cpulse._from_db_object(cls(context), db)
return test
@base.remotable_classmethod
def get_by_uuid(cls, context, uuid):
"""Find a test based on uuid and return a :class:`Cpulse` object.
:param uuid: the uuid of a test.
:param context: Security context
:returns: a :class:`Cpulse` object.
"""
db = cls.dbapi.get_test_by_uuid(context, uuid)
test = Cpulse._from_db_object(cls(context), db)
return test
@base.remotable_classmethod
def get_by_name(cls, context, name):
"""Find a test based on name and return a Cpulse object.
:param name: the logical name of a test.
:param context: Security context
:returns: a :class:`Cpulse` object.
"""
db = cls.dbapi.get_test_by_name(context, name)
test = Cpulse._from_db_object(cls(context), db)
return test
@base.remotable_classmethod
def list(cls, context, limit=None, marker=None,
sort_key=None, sort_dir=None, filters=None):
"""Return a list of Cpulse objects.
:param context: Security context.
:param limit: maximum number of resources to return in a single result.
:param marker: pagination marker for large data sets.
:param sort_key: column to sort results by.
:param sort_dir: direction to sort. "asc" or "desc".
:returns: a list of :class:`Cpulse` object.
"""
db = cls.dbapi.get_test_list(context, limit=limit,
marker=marker,
sort_key=sort_key,
sort_dir=sort_dir,
filters=filters)
return Cpulse._from_db_object_list(db, cls, context)
@base.remotable
def create(self, context=None):
"""Create a Cpulse record in the DB.
:param context: Security context. NOTE: This should only
be used internally by the indirection_api.
Unfortunately, RPC requires context as the first
argument, even though we don't use it.
A context should be set when instantiating the
object, e.g.: Cpulse(context)
"""
values = self.obj_get_changes()
        LOG.info(_LI('Dumping CREATE test datastructure %s'), values)
db = self.dbapi.create_test(values)
self._from_db_object(self, db)
@base.remotable
def destroy(self, context=None):
"""Delete the Cpulse from the DB.
:param context: Security context. NOTE: This should only
be used internally by the indirection_api.
Unfortunately, RPC requires context as the first
argument, even though we don't use it.
A context should be set when instantiating the
object, e.g.: Cpulse(context)
"""
self.dbapi.destroy_test(self.uuid)
self.obj_reset_changes()
@base.remotable
def save(self, context=None):
"""Save updates to this Cpulse.
Updates will be made column by column based on the result
of self.what_changed().
:param context: Security context. NOTE: This should only
be used internally by the indirection_api.
Unfortunately, RPC requires context as the first
argument, even though we don't use it.
A context should be set when instantiating the
object, e.g.: Cpulse(context)
"""
updates = self.obj_get_changes()
self.dbapi.update_test(self.uuid, updates)
self.obj_reset_changes()
@base.remotable
def refresh(self, context=None):
"""Loads updates for this Cpulse.
Loads a test with the same uuid from the database and
checks for updated attributes. Updates are applied from
the loaded test column by column, if there are any updates.
:param context: Security context. NOTE: This should only
be used internally by the indirection_api.
Unfortunately, RPC requires context as the first
argument, even though we don't use it.
A context should be set when instantiating the
object, e.g.: Cpulse(context)
"""
current = self.__class__.get_by_uuid(self._context, uuid=self.uuid)
for field in self.fields:
if self.obj_attr_is_set(field) and self[field] != current[field]:
self[field] = current[field]
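# A minimal usage sketch (context and ids hypothetical, assuming the dbapi is
# wired up as above). Cpulse.get() dispatches on the identity type, so integer
# ids and uuids resolve through the same entry point:
#
#     test = Cpulse.get(ctx, 42)           # routed to get_by_id()
#     test = Cpulse.get(ctx, fake_uuid)    # routed to get_by_uuid()
#     test.save()                          # persists only changed columns
#     test.refresh()                       # re-reads columns that changed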
# Copyright 2010 Jacob Kaplan-Moss
# Copyright 2011 OpenStack Foundation
# Copyright 2011 Piston Cloud Computing, Inc.
# Copyright 2013 Alessio Ababilov
# Copyright 2013 Grid Dynamics
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
OpenStack Client interface. Handles the REST calls and responses.
"""
# E0202: An attribute inherited from %s hide this method
# pylint: disable=E0202
import logging
import time
try:
import json
except ImportError:
import simplejson as json
try:
from oslo_utils import importutils
except ImportError:
from oslo.utils import importutils
import requests
from .._i18n import _
from . import exceptions
_logger = logging.getLogger(__name__)
class HTTPClient(object):
"""This client handles sending HTTP requests to OpenStack servers.
Features:
- share authentication information between several clients to different
services (e.g., for compute and image clients);
- reissue authentication request for expired tokens;
- encode/decode JSON bodies;
- raise exceptions on HTTP errors;
- pluggable authentication;
- store authentication information in a keyring;
- store time spent for requests;
- register clients for particular services, so one can use
`http_client.identity` or `http_client.compute`;
- log requests and responses in a format that is easy to copy-and-paste
into terminal and send the same request with curl.
"""
user_agent = "bearclient.openstack.common.apiclient"
def __init__(self,
auth_plugin,
region_name=None,
endpoint_type="publicURL",
original_ip=None,
verify=True,
cert=None,
timeout=None,
timings=False,
keyring_saver=None,
debug=False,
user_agent=None,
http=None):
self.auth_plugin = auth_plugin
self.endpoint_type = endpoint_type
self.region_name = region_name
self.original_ip = original_ip
self.timeout = timeout
self.verify = verify
self.cert = cert
self.keyring_saver = keyring_saver
self.debug = debug
self.user_agent = user_agent or self.user_agent
self.times = [] # [("item", starttime, endtime), ...]
self.timings = timings
        # requests within the same session can reuse TCP connections from the pool
self.http = http or requests.Session()
self.cached_token = None
def _http_log_req(self, method, url, kwargs):
if not self.debug:
return
string_parts = [
"curl -g -i",
"-X '%s'" % method,
"'%s'" % url,
]
for element in kwargs['headers']:
header = "-H '%s: %s'" % (element, kwargs['headers'][element])
string_parts.append(header)
_logger.debug("REQ: %s" % " ".join(string_parts))
if 'data' in kwargs:
_logger.debug("REQ BODY: %s\n" % (kwargs['data']))
def _http_log_resp(self, resp):
if not self.debug:
return
_logger.debug(
"RESP: [%s] %s\n",
resp.status_code,
resp.headers)
if resp._content_consumed:
_logger.debug(
"RESP BODY: %s\n",
resp.text)
def serialize(self, kwargs):
if kwargs.get('json') is not None:
kwargs['headers']['Content-Type'] = 'application/json'
kwargs['data'] = json.dumps(kwargs['json'])
            kwargs.pop('json', None)
def get_timings(self):
return self.times
def reset_timings(self):
self.times = []
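    # With timings=True each request appends ("METHOD url", start, end) to
    # self.times (url hypothetical):
    #
    #     client = HTTPClient(auth_plugin, timings=True)
    #     client.request("GET", "http://api.example/v1/tests")
    #     client.get_timings()  # -> [("GET http://api.example/v1/tests", t0, t1)]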
def request(self, method, url, **kwargs):
"""Send an http request with the specified characteristics.
Wrapper around `requests.Session.request` to handle tasks such as
setting headers, JSON encoding/decoding, and error handling.
:param method: method of HTTP request
:param url: URL of HTTP request
:param kwargs: any other parameter that can be passed to
requests.Session.request (such as `headers`) or `json`
that will be encoded as JSON and used as `data` argument
"""
kwargs.setdefault("headers", {})
kwargs["headers"]["User-Agent"] = self.user_agent
if self.original_ip:
kwargs["headers"]["Forwarded"] = "for=%s;by=%s" % (
self.original_ip, self.user_agent)
if self.timeout is not None:
kwargs.setdefault("timeout", self.timeout)
kwargs.setdefault("verify", self.verify)
if self.cert is not None:
kwargs.setdefault("cert", self.cert)
self.serialize(kwargs)
self._http_log_req(method, url, kwargs)
if self.timings:
start_time = time.time()
resp = self.http.request(method, url, **kwargs)
if self.timings:
self.times.append(("%s %s" % (method, url),
start_time, time.time()))
self._http_log_resp(resp)
if resp.status_code >= 400:
_logger.debug(
"Request returned failure status: %s",
resp.status_code)
raise exceptions.from_response(resp, method, url)
return resp
@staticmethod
def concat_url(endpoint, url):
"""Concatenate endpoint and final URL.
E.g., "http://keystone/v2.0/" and "/tokens" are concatenated to
"http://keystone/v2.0/tokens".
:param endpoint: the base URL
:param url: the final URL
"""
return "%s/%s" % (endpoint.rstrip("/"), url.strip("/"))
def client_request(self, client, method, url, **kwargs):
"""Send an http request using `client`'s endpoint and specified `url`.
        If the request was rejected as unauthorized (possibly because the
        token has expired), re-authenticate once and send the request again.
:param client: instance of BaseClient descendant
:param method: method of HTTP request
:param url: URL of HTTP request
:param kwargs: any other parameter that can be passed to
`HTTPClient.request`
"""
filter_args = {
"endpoint_type": client.endpoint_type or self.endpoint_type,
"service_type": client.service_type,
}
token, endpoint = (self.cached_token, client.cached_endpoint)
just_authenticated = False
if not (token and endpoint):
try:
token, endpoint = self.auth_plugin.token_and_endpoint(
**filter_args)
except exceptions.EndpointException:
pass
if not (token and endpoint):
self.authenticate()
just_authenticated = True
token, endpoint = self.auth_plugin.token_and_endpoint(
**filter_args)
if not (token and endpoint):
raise exceptions.AuthorizationFailure(
_("Cannot find endpoint or token for request"))
old_token_endpoint = (token, endpoint)
kwargs.setdefault("headers", {})["X-Auth-Token"] = token
self.cached_token = token
client.cached_endpoint = endpoint
# Perform the request once. If we get Unauthorized, then it
# might be because the auth token expired, so try to
# re-authenticate and try again. If it still fails, bail.
try:
return self.request(
method, self.concat_url(endpoint, url), **kwargs)
except exceptions.Unauthorized as unauth_ex:
if just_authenticated:
raise
self.cached_token = None
client.cached_endpoint = None
if self.auth_plugin.opts.get('token'):
self.auth_plugin.opts['token'] = None
if self.auth_plugin.opts.get('endpoint'):
self.auth_plugin.opts['endpoint'] = None
self.authenticate()
try:
token, endpoint = self.auth_plugin.token_and_endpoint(
**filter_args)
except exceptions.EndpointException:
raise unauth_ex
if (not (token and endpoint) or
old_token_endpoint == (token, endpoint)):
raise unauth_ex
self.cached_token = token
client.cached_endpoint = endpoint
kwargs["headers"]["X-Auth-Token"] = token
return self.request(
method, self.concat_url(endpoint, url), **kwargs)
def add_client(self, base_client_instance):
"""Add a new instance of :class:`BaseClient` descendant.
`self` will store a reference to `base_client_instance`.
"""
service_type = base_client_instance.service_type
if service_type and not hasattr(self, service_type):
setattr(self, service_type, base_client_instance)
def authenticate(self):
self.auth_plugin.authenticate(self)
# Store the authentication results in the keyring for later requests
if self.keyring_saver:
self.keyring_saver.save(self)
class BaseClient(object):
"""Top-level object to access the OpenStack API.
This client uses :class:`HTTPClient` to send requests. :class:`HTTPClient`
will handle a bunch of issues such as authentication.
"""
service_type = None
endpoint_type = None # "publicURL" will be used
cached_endpoint = None
def __init__(self, http_client, extensions=None):
self.http_client = http_client
http_client.add_client(self)
# Add in any extensions...
if extensions:
for extension in extensions:
if extension.manager_class:
setattr(self, extension.name,
extension.manager_class(self))
def client_request(self, method, url, **kwargs):
return self.http_client.client_request(
self, method, url, **kwargs)
def head(self, url, **kwargs):
return self.client_request("HEAD", url, **kwargs)
def get(self, url, **kwargs):
return self.client_request("GET", url, **kwargs)
def post(self, url, **kwargs):
return self.client_request("POST", url, **kwargs)
def put(self, url, **kwargs):
return self.client_request("PUT", url, **kwargs)
def delete(self, url, **kwargs):
return self.client_request("DELETE", url, **kwargs)
def patch(self, url, **kwargs):
return self.client_request("PATCH", url, **kwargs)
@staticmethod
def get_class(api_name, version, version_map):
"""Returns the client class for the requested API version
:param api_name: the name of the API, e.g. 'compute', 'image', etc
:param version: the requested API version
:param version_map: a dict of client classes keyed by version
:rtype: a client class for the requested API version
"""
try:
client_path = version_map[str(version)]
except (KeyError, ValueError):
msg = _("Invalid %(api_name)s client version '%(version)s'. "
"Must be one of: %(version_map)s") % {
'api_name': api_name,
'version': version,
'version_map': ', '.join(version_map.keys())}
raise exceptions.UnsupportedVersion(msg)
return importutils.import_class(client_path)
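# A minimal wiring sketch (all names hypothetical). An auth plugin only needs
# the surface exercised above -- token_and_endpoint(), authenticate(), and an
# `opts` dict used to invalidate cached credentials:
#
#     class StaticAuthPlugin(object):
#         def __init__(self, token, endpoint):
#             self.opts = {'token': token, 'endpoint': endpoint}
#
#         def token_and_endpoint(self, endpoint_type, service_type):
#             return self.opts['token'], self.opts['endpoint']
#
#         def authenticate(self, http_client):
#             pass  # a real plugin would re-issue the token request here
#
#     http = HTTPClient(StaticAuthPlugin('tok', 'http://api.example:8080/v1'))
#
#     class PulseClient(BaseClient):
#         service_type = 'pulse'
#
#     resp = PulseClient(http).get('/tests')  # GET http://api.example:8080/v1/tests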
#!/usr/bin/env python
# ------------------------------------------------------------------------------
# Copyright (c) 2010-2013, EVEthing team
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
# OF SUCH DAMAGE.
# ------------------------------------------------------------------------------
import cPickle
import os
import sys
import time
from decimal import Decimal, InvalidOperation
# Set up our environment and import settings
os.environ['DJANGO_SETTINGS_MODULE'] = 'evething.settings'
import django
django.setup()
from django.db import connections, transaction
from thing.models import * # NOPEP8
# ---------------------------------------------------------------------------
# Override volume for ships, assembled volume is mostly useless :ccp:
PACKAGED = {
25: 2500, # frigate
26: 10000, # cruiser
27: 50000, # battleship
28: 20000, # industrial
31: 500, # shuttle
324: 2500, # assault ship
358: 10000, # heavy assault ship
380: 20000, # transport ship
419: 15000, # battlecruiser
420: 5000, # destroyer
463: 3750, # mining barge
540: 15000, # command ship
541: 5000, # interdictor
543: 3750, # exhumer
830: 2500, # covert ops
831: 2500, # interceptor
832: 10000, # logistics
833: 10000, # force recon
834: 2500, # stealth bomber
893: 2500, # electronic attack ship
894: 10000, # heavy interdictor
898: 50000, # black ops
900: 50000, # marauder
906: 10000, # combat recon
963: 5000, # strategic cruiser
}
# ---------------------------------------------------------------------------
# Skill map things
PREREQ_SKILLS = {
182: 0,
183: 1,
184: 2,
1285: 3,
1289: 4,
1290: 5,
}
PREREQ_LEVELS = {
277: 0,
278: 1,
279: 2,
1286: 3,
1287: 4,
1288: 5,
}
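# Both maps key on dgmTypeAttributes attribute ids and share slot indexes: a
# row (typeID, 182, 3336.0) says "prerequisite skill #0 is type 3336", while
# (typeID, 277, 4.0) says "prerequisite #0 must be trained to level 4" (type
# id 3336 is a hypothetical example). build_skill_map() below zips them into
# {skill_id: {slot: [required_skill_id, required_level]}}.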
def time_func(text, f):
start = time.time()
print '=> %s:' % text,
sys.stdout.flush()
try:
added = f()
except Exception as e:
added = 0
print '>> ERROR!'
print e
print '%d (%0.2fs)' % (added, time.time() - start)
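# time_func() prints one progress line per step, e.g. "=> Region: 68 (0.31s)"
# (counts hypothetical), and swallows exceptions so one broken importer does
# not abort the whole run.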
class Importer:
def __init__(self):
self.cursor = connections['import'].cursor()
# sqlite3 UTF drama workaround
connections['import'].connection.text_factory = lambda x: unicode(x, "utf-8", "ignore")
def import_all(self):
#time_func('MapDenormalize', self.import_map_denormalize)
time_func('Region', self.import_region)
time_func('Constellation', self.import_constellation)
time_func('System', self.import_system)
time_func('Station', self.import_station)
time_func('MarketGroup', self.import_marketgroup)
time_func('ItemCategory', self.import_itemcategory)
time_func('ItemGroup', self.import_itemgroup)
time_func('Item', self.import_item)
time_func('ItemMaterial', self.import_item_material)
time_func('Implant', self.import_implant)
time_func('Blueprint', self.import_blueprint)
time_func('Skill', self.import_skill)
time_func('InventoryFlag', self.import_inventoryflag)
time_func('NPCFaction', self.import_npcfaction)
time_func('NPCCorporation', self.import_npccorporation)
time_func('SkillMap', self.build_skill_map)
# -----------------------------------------------------------------------
# Regions
def import_region(self):
added = 0
self.cursor.execute("SELECT regionID, regionName FROM mapRegions WHERE regionName != 'Unknown'")
bulk_data = {}
for row in self.cursor:
bulk_data[int(row[0])] = row[1:]
data_map = Region.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if id in data_map:
continue
region = Region(
id=id,
name=data[0],
)
new.append(region)
added += 1
if new:
Region.objects.bulk_create(new)
return added
def import_map_denormalize(self):
self.cursor.execute("SELECT * from mapDenormalize")
added = 0
new = []
existing = set(l[0] for l in MapDenormalize.objects.all().values_list('item_id'))
for row in self.cursor:
if int(row[0]) in existing:
continue
entry = MapDenormalize(
item_id=int(row[0]),
type_id=int(row[1]),
group_id=int(row[2]),
solar_system_id=int(row[3]) if row[3] else None,
constellation_id=int(row[4]) if row[4] else None,
region_id=int(row[5]) if row[5] else None,
orbit_id=int(row[6]) if row[6] else None,
x=float(row[7]),
y=float(row[8]),
z=float(row[9]),
radius=float(row[10]) if row[10] else None,
item_name=str(row[11]),
security=float(row[12]) if row[12] else None,
celestial_index=int(row[13]) if row[13] else None,
orbit_index=int(row[14]) if row[14] else None,
)
new.append(entry)
added += 1
if len(new) > 10000:
MapDenormalize.objects.bulk_create(new)
new = []
MapDenormalize.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Constellations
def import_constellation(self):
added = 0
self.cursor.execute('SELECT constellationID,constellationName,regionID FROM mapConstellations')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id:
bulk_data[id] = row[1:]
new = []
for id, data in bulk_data.items():
if not data[0] or not data[1]:
continue
con = Constellation(
id=id,
name=data[0],
region_id=data[1],
)
new.append(con)
added += 1
if new:
Constellation.objects.all().delete()
Constellation.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Systems
def import_system(self):
added = 0
self.cursor.execute('SELECT solarSystemID, solarSystemName, constellationID FROM mapSolarSystems')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id:
bulk_data[id] = row[1:]
new = []
for id, data in bulk_data.items():
if not data[0] or not data[1]:
continue
system = System(
id=id,
name=data[0],
constellation_id=data[1],
)
new.append(system)
added += 1
if new:
System.objects.all().delete()
System.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Stations
def import_station(self):
added = 0
self.cursor.execute('SELECT stationID, stationName, solarSystemID FROM staStations')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id:
bulk_data[id] = row[1:]
data_map = Station.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if id in data_map or not data[0] or not data[1]:
continue
station = Station(
id=id,
name=data[0],
system_id=data[1],
)
station._make_shorter_name()
new.append(station)
added += 1
if new:
Station.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Market groups
def import_marketgroup(self):
added = 0
self.cursor.execute('SELECT marketGroupID, marketGroupName, parentGroupID FROM invMarketGroups')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id:
bulk_data[id] = row[1:]
data_map = MarketGroup.objects.in_bulk(bulk_data.keys())
last_count = 999999
while bulk_data:
items = list(bulk_data.items())
if len(items) == last_count:
print 'infinite loop!'
for id, data in items:
print id, data
break
last_count = len(items)
for id, data in items:
if data[1] is None:
parent = None
else:
# if the parent id doesn't exist yet we have to do this later
try:
parent = MarketGroup.objects.get(pk=data[1])
except MarketGroup.DoesNotExist:
continue
# if we've already added this marketgroup, check that the parent
# hasn't changed
mg = data_map.get(id, None)
if mg is not None:
if parent is not None and mg.parent is not None and mg.parent.id != parent.id:
mg.delete()
else:
if mg.name != data[0]:
mg.name = data[0]
mg.save()
print '==> Updated data for #%s (%r)' % (mg.id, mg.name)
del bulk_data[id]
continue
mg = MarketGroup(
id=id,
name=data[0],
parent=parent,
)
mg.save()
added += 1
del bulk_data[id]
return added
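    # The while loop above is a fixed-point iteration: groups whose parent row
    # has not been created yet are skipped and retried on the next pass, and
    # the last_count guard turns a cyclic parent graph into a loud bail-out
    # instead of an infinite loop.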
# -----------------------------------------------------------------------
# Item Categories
def import_itemcategory(self):
added = 0
self.cursor.execute('SELECT categoryID, categoryName FROM invCategories')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id and row[1]:
bulk_data[id] = row[1:]
data_map = ItemCategory.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if id in data_map or not data[0]:
continue
ic = ItemCategory(
id=id,
name=data[0],
)
new.append(ic)
added += 1
if new:
ItemCategory.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Item Groups
def import_itemgroup(self):
added = 0
self.cursor.execute('SELECT groupID, groupName, categoryID FROM invGroups')
bulk_data = {}
for row in self.cursor:
id = int(row[0])
if id and row[2]:
bulk_data[id] = row[1:]
data_map = ItemGroup.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if not data[1]:
continue
ig = data_map.get(id, None)
if ig is not None:
if ig.name != data[0]:
print '==> Renamed %r to %r' % (ig.name, data[0])
ig.name = data[0]
ig.save()
continue
ig = ItemGroup(
id=id,
name=data[0],
category_id=data[1],
)
new.append(ig)
added += 1
if new:
ItemGroup.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Items
def import_item(self):
added = 0
self.cursor.execute(
'SELECT typeID, typeName, groupID, marketGroupID, portionSize, volume, basePrice FROM invTypes')
bulk_data = {}
mg_ids = set()
for row in self.cursor:
bulk_data[int(row[0])] = row[1:]
if row[3] is not None:
mg_ids.add(int(row[3]))
data_map = Item.objects.in_bulk(bulk_data.keys())
mg_map = MarketGroup.objects.in_bulk(mg_ids)
new = []
for id, data in bulk_data.items():
if not data[0] or not data[1]:
continue
if data[2] is None:
mg_id = None
else:
mg_id = int(data[2])
if mg_id not in mg_map:
print '==> Invalid marketGroupID %s on item %s[%s]' % (mg_id, id, data[0])
continue
portion_size = Decimal(data[3])
            # As of the current patch the volume and basePrice fields are
            # sometimes empty for some items :ccp:
            try:
                volume = PACKAGED.get(data[1], Decimal(str(data[4])))
            except InvalidOperation:
                volume = 0
            try:
                base_price = Decimal(data[5])
            except TypeError:
                base_price = 0
# handle modified items
item = data_map.get(id, None)
if item is not None:
if item.name != data[0] or item.portion_size != portion_size or item.volume != volume or \
item.base_price != base_price or item.market_group_id != mg_id:
print '==> Updated data for #%s (%r)' % (item.id, item.name)
item.name = data[0]
item.portion_size = portion_size
item.volume = volume
item.base_price = base_price
item.market_group_id = mg_id
item.save()
continue
item = Item(
id=id,
name=data[0],
item_group_id=data[1],
market_group_id=mg_id,
portion_size=portion_size,
volume=volume,
base_price=base_price,
)
new.append(item)
added += 1
if new:
Item.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
def import_item_material(self):
self.cursor.execute("""
SELECT
invTypeMaterials.typeID,
invTypeMaterials.materialTypeID,
invTypeMaterials.quantity
FROM invTypeMaterials
""")
materials = []
added = 0
for row in self.cursor:
materials.append(ItemMaterial(
item_id=int(row[0]),
material_id=int(row[1]),
quantity=int(row[2]),
active=True
))
added += 1
ItemMaterial.objects.all().delete()
ItemMaterial.objects.bulk_create(materials)
return added
# -----------------------------------------------------------------------
def import_implant(self):
added = 0
self.cursor.execute("""
SELECT
invTypes.typeid,
invTypes.description,
coalesce(c.valueint,c.valuefloat) as 'cha',
coalesce(i.valueint,i.valuefloat) as 'int',
coalesce(m.valueint,m.valuefloat) as 'mem',
coalesce(p.valueint,p.valuefloat) as 'per',
coalesce(w.valueint,w.valuefloat) as 'wil',
coalesce(s.valueint,s.valuefloat) as 'slot'
FROM invTypes
LEFT JOIN dgmTypeAttributes AS c ON (
invTypes.typeid = c.typeid and c.attributeid=175)
LEFT JOIN dgmTypeAttributes AS i ON (
invTypes.typeid = i.typeid and i.attributeid=176)
LEFT JOIN dgmTypeAttributes AS m ON (
invTypes.typeid = m.typeid and m.attributeid=177)
LEFT JOIN dgmTypeAttributes AS p ON (
invTypes.typeid = p.typeid and p.attributeid=178)
LEFT JOIN dgmTypeAttributes AS w ON (
invTypes.typeid = w.typeid and w.attributeid=179)
JOIN dgmTypeAttributes AS s ON (
invTypes.typeid = s.typeid and s.attributeid=331)
LEFT JOIN invGroups ON (invGroups.groupid=invTypes.groupid)
WHERE invGroups.categoryid=20
ORDER BY invTypes.typeid;
""")
implants = {}
for row in self.cursor:
id = int(row[0])
if row[1] is None:
desc = ''
else:
desc = row[1].strip()
if id:
implants[id] = {
'description': desc,
'charisma_modifier': row[2] if row[2] else 0,
'intelligence_modifier': row[3] if row[3] else 0,
'memory_modifier': row[4] if row[4] else 0,
'perception_modifier': row[5] if row[5] else 0,
'willpower_modifier': row[6] if row[6] else 0,
'implant_slot': row[7]
}
implant_map = {}
for implant in Implant.objects.all():
implant_map[implant.item_id] = implant
new = []
for id, data in implants.items():
implant = implant_map.get(id, None)
if implant is not None:
if implant.description != data['description'] or \
implant.charisma_modifier != data['charisma_modifier'] or \
implant.intelligence_modifier != data['intelligence_modifier'] or \
implant.memory_modifier != data['memory_modifier'] or \
implant.perception_modifier != data['perception_modifier'] or \
implant.willpower_modifier != data['willpower_modifier'] or \
implant.implant_slot != data['implant_slot']:
implant.description = data['description']
implant.charisma_modifier = data['charisma_modifier']
implant.intelligence_modifier = \
data['intelligence_modifier']
implant.memory_modifier = data['memory_modifier']
implant.perception_modifier = data['perception_modifier']
implant.willpower_modifier = data['willpower_modifier']
implant.implant_slot = data['implant_slot']
implant.save()
print '=>> Updated implant details for %d' % (id,)
continue
new.append(Implant(item_id=id, **data))
added += 1
if new:
Implant.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
def import_blueprint(self):
# Blueprints
added = 0
self.cursor.execute("""
SELECT b.typeID, t.typeName, b.maxProductionLimit
FROM industryBlueprints AS b
INNER JOIN invTypes AS t
ON b.typeID = t.typeID
""")
bulk_data = {}
for row in self.cursor:
bulk_data[int(row[0])] = row[1:]
data_map = Blueprint.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if not data[0] or not data[1]:
continue
bp = data_map.get(id, None)
if bp is not None:
if bp.name != data[0]:
print '==> Renamed %r to %r' % (bp.name, data[0])
bp.name = data[0]
bp.save()
else:
new.append(Blueprint(
id=id,
name=data[0],
productionLimit=data[1]
))
added += 1
if new:
Blueprint.objects.bulk_create(new)
# Collect all components
new = []
for id, data in bulk_data.items():
# Base materials
self.cursor.execute('SELECT activityID, materialTypeID, quantity FROM industryActivityMaterials WHERE typeID=%s', (id,))
for baserow in self.cursor:
# blueprint 3927 references itemId 3924 which doesn't exist,
# so ignore it, :ccp:
if id != 3927:
new.append(BlueprintComponent(
blueprint_id=id,
activity=baserow[0],
item_id=baserow[1],
count=baserow[2],
consumed=True
))
added += 1
# If there's any new ones just drop and recreate the whole lot, easier
# than trying to work out what has changed for every single blueprint
if new:
BlueprintComponent.objects.all().delete()
BlueprintComponent.objects.bulk_create(new)
# Products!
new = []
for id, data in bulk_data.items():
# Base materials
self.cursor.execute('SELECT activityID, productTypeID, quantity FROM industryActivityProducts WHERE typeID=%s', (id,))
for baserow in self.cursor:
# blueprint 37016 references itemId 35882 which doesn't exist,
# so ignore it, :ccp:
if id != 37016:
new.append(BlueprintProduct(
blueprint_id=id,
activity=baserow[0],
item_id=baserow[1],
count=baserow[2]
))
added += 1
# If there's any new ones just drop and recreate the whole lot, easier
# than trying to work out what has changed for every single blueprint
if new:
BlueprintProduct.objects.all().delete()
BlueprintProduct.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Skills
def import_skill(self):
added = 0
# AND invTypes.published = 1
skills = {}
self.cursor.execute("""
SELECT DISTINCT invTypes.typeID,
dgmTypeAttributes.valueFloat AS rank,
invTypes.description
FROM invTypes
INNER JOIN invGroups ON (invTypes.groupID = invGroups.groupID)
INNER JOIN dgmTypeAttributes ON (invTypes.typeID = dgmTypeAttributes.typeID)
WHERE invGroups.categoryID = 16
AND dgmTypeAttributes.attributeID = 275
AND dgmTypeAttributes.valueFloat IS NOT NULL
AND invTypes.marketGroupID IS NOT NULL
ORDER BY invTypes.typeID
""")
for row in self.cursor:
# Handle NULL descriptions
if row[2] is None:
desc = ''
else:
desc = row[2].strip()
skills[row[0]] = {
'rank': int(row[1]),
'description': desc,
}
# Primary/secondary attributes
self.cursor.execute("""
SELECT typeID, attributeID, valueInt, valueFloat
FROM dgmTypeAttributes
WHERE attributeID IN (180, 181)
""")
for row in self.cursor:
# skip unpublished
skill = skills.get(row[0], None)
if skill is None:
continue
if row[1] == 180:
k = 'pri'
else:
k = 'sec'
if row[2]:
skill[k] = row[2]
else:
skill[k] = row[3]
# filter skills I guess
skill_map = {}
for skill in Skill.objects.all():
skill_map[skill.item_id] = skill
new = []
for id, data in skills.items():
# TODO: add value verification
skill = skill_map.get(id, None)
if skill is not None:
if skill.rank != data['rank'] or skill.description != data['description'] or \
skill.primary_attribute != data['pri'] or skill.secondary_attribute != data['sec']:
skill.rank = data['rank']
skill.description = data['description']
skill.primary_attribute = data['pri']
skill.secondary_attribute = data['sec']
skill.save()
print '==> Updated skill details for #%d' % (id)
continue
new.append(Skill(
item_id=id,
rank=data['rank'],
primary_attribute=data['pri'],
secondary_attribute=data['sec'],
description=data['description'],
))
added += 1
if new:
Skill.objects.bulk_create(new)
return added
# :skills:
# :prerequisite: # These are the attribute ids for skill prerequisites. [item, level]
# 1: [182, 277]
# 2: [183, 278]
# 3: [184, 279]
# 4: [1285, 1286]
# 5: [1289, 1287]
# 6: [1290, 1288]
# :primary_attribute: 180 # database attribute ID for primary attribute
# :secondary_attribute: 181 # database attribute ID for secondary attribute
# :attributes: # Mapping of id keys to the actual attribute
# 165: :intelligence
# 164: :charisma
# 166: :memory
# 167: :perception
# 168: :willpower
# -----------------------------------------------------------------------
# InventoryFlags
def import_inventoryflag(self):
added = 0
self.cursor.execute('SELECT flagID, flagName, flagText FROM invFlags')
bulk_data = {}
for row in self.cursor:
bulk_data[int(row[0])] = row[1:]
data_map = InventoryFlag.objects.in_bulk(bulk_data.keys())
new = []
for id, data in bulk_data.items():
if not data[0] or not data[1]:
continue
# handle renamed flags
flag = data_map.get(id, None)
if flag is not None:
if flag.name != data[0] or flag.text != data[1]:
print '==> Renamed %r to %r' % (flag.name, data[0])
flag.name = data[0]
flag.text = data[1]
flag.save()
continue
flag = InventoryFlag(
id=id,
name=data[0],
text=data[1],
)
new.append(flag)
added += 1
if new:
InventoryFlag.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# NPC Factions
def import_npcfaction(self):
added = 0
self.cursor.execute('SELECT factionID, factionName FROM chrFactions')
bulk_data = {}
for row in self.cursor:
bulk_data[int(row[0])] = row[1]
data_map = Faction.objects.in_bulk(bulk_data.keys())
new = []
for id, name in bulk_data.items():
faction = data_map.get(id, None)
if faction is not None:
if faction.name != name:
print '==> Renamed %r to %r' % (faction.name, name)
faction.name = name
faction.save()
continue
faction = Faction(
id=id,
name=name,
)
new.append(faction)
added += 1
if new:
Faction.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# NPC Corporations
def import_npccorporation(self):
added = 0
self.cursor.execute("""
SELECT c.corporationID, i.itemName
FROM crpNPCCorporations c, invNames i
WHERE c.corporationID = i.itemID
""")
bulk_data = {}
for row in self.cursor:
bulk_data[int(row[0])] = row[1]
data_map = Corporation.objects.in_bulk(bulk_data.keys())
new = []
for id, name in bulk_data.items():
corp = data_map.get(id, None)
if corp is not None:
if corp.name != name:
print '==> Renamed %r to %r' % (corp.name, name)
corp.name = name
corp.save()
continue
corp = Corporation(
id=id,
name=name,
)
new.append(corp)
added += 1
if new:
Corporation.objects.bulk_create(new)
return added
# -----------------------------------------------------------------------
# Build the skill map
def build_skill_map(self):
# Get all skills
skill_map = {}
for skill in Skill.objects.all():
skill_map[skill.item_id] = {}
ids = ','.join(map(str, skill_map.keys()))
# Gather skill pre-requisite data
self.cursor.execute("""
SELECT typeID,
attributeID,
COALESCE(valueFloat, valueInt)
FROM dgmTypeAttributes
WHERE attributeID in (182, 183, 184, 1285, 1289, 1290, 277, 278, 279, 1286, 1287, 1288)
AND typeID in (%s)
""" % (ids))
for row in self.cursor:
typeID = int(row[0])
attrID = int(row[1])
value = int(row[2])
if attrID in PREREQ_SKILLS:
skill_map[typeID].setdefault(PREREQ_SKILLS[attrID], [None, None])[0] = value
elif attrID in PREREQ_LEVELS:
skill_map[typeID].setdefault(PREREQ_LEVELS[attrID], [None, None])[1] = value
# Save the skill map to a pickle
        with open('skill_map.pickle', 'wb') as f:
            cPickle.dump(skill_map, f)
return 1
# ---------------------------------------------------------------------------
if __name__ == '__main__':
importer = Importer()
importer.import_all()
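# The script expects an EVE static data export loaded into the Django database
# alias 'import' (see connections['import'] above); re-running it after each
# SDE release refreshes the thing.* tables and rewrites skill_map.pickle.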
#
# BaseImage.py -- Abstraction of a generic data image.
#
# This is open-source software licensed under a BSD license.
# Please see the file LICENSE.txt for details.
#
import numpy as np
import logging
from ginga.misc import Bunch, Callback
from ginga import trcalc, AutoCuts
class ImageError(Exception):
pass
class ViewerObjectBase(Callback.Callbacks):
def __init__(self, metadata=None, logger=None, name=None):
Callback.Callbacks.__init__(self)
if logger is None:
logger = logging.getLogger('BaseImage')
logger.addHandler(logging.NullHandler())
self.logger = logger
self.metadata = {}
if metadata:
self.update_metadata(metadata)
# make sure an object has these attributes
# TODO: this maybe should have a unique random string or something
# but we'd have to fix a lot of code that is currently checking for
# None
self.metadata.setdefault('name', None)
# For callbacks
for name in ('modified', ):
self.enable_callback(name)
def get_metadata(self):
return self.metadata.copy()
def clear_metadata(self):
self.metadata = {}
def clear_all(self):
self.clear_metadata()
def update_metadata(self, map_like):
for key, val in map_like.items():
self.metadata[key] = val
def get(self, kwd, *args):
if kwd in self.metadata:
return self.metadata[kwd]
else:
# return a default if there is one
if len(args) > 0:
return args[0]
raise KeyError(kwd)
def get_list(self, *args):
return [self.get(kwd) for kwd in args]
def __getitem__(self, kwd):
return self.metadata[kwd]
def __contains__(self, kwd):
return kwd in self.metadata
def update(self, kwds):
self.metadata.update(kwds)
def set(self, **kwds):
self.update(kwds)
def __setitem__(self, kwd, value):
self.metadata[kwd] = value
class BaseImage(ViewerObjectBase):
def __init__(self, data_np=None, metadata=None, logger=None, order=None,
name=None):
ViewerObjectBase.__init__(self, logger=logger, metadata=metadata,
name=name)
if data_np is None:
data_np = np.zeros((0, 0))
self._data = data_np
self.order = ''
self.name = name
# For navigating multidimensional data
self.axisdim = []
self.naxispath = []
self.revnaxis = []
self._set_minmax()
self._calc_order(order)
self.autocuts = AutoCuts.Histogram(self.logger)
@property
def shape(self):
return self._get_data().shape
@property
def width(self):
if self.ndim < 2:
return 0
        # NOTE: numpy arrays are indexed (row, column), so shape is (ht, wd)
return self.shape[1]
@property
def height(self):
        # NOTE: numpy arrays are indexed (row, column), so shape is (ht, wd)
return self.shape[0]
@property
def depth(self):
return self.get_depth()
@property
def ndim(self):
return len(self.shape)
@property
def dtype(self):
return self._get_data().dtype
def get_size(self):
return (self.width, self.height)
def get_depth(self):
shape = self.shape
if len(shape) > 2:
return shape[-1]
return 1
def get_shape(self):
return self.shape
def get_center(self):
wd, ht = self.get_size()
ctr_x, ctr_y = wd // 2, ht // 2
return (ctr_x, ctr_y)
def get_data(self):
return self._data
def _get_data(self):
return self._data
def _get_fast_data(self):
"""
Return an array similar to but possibly smaller than self._data,
for fast calculation of the intensity distribution.
NOTE: this is used by the Ginga plugin for Glue
"""
return self._data
def copy_data(self):
data = self._get_data()
return data.copy()
def get_data_xy(self, x, y):
assert (x >= 0) and (y >= 0), \
ImageError("Indexes out of range: (x=%d, y=%d)" % (
x, y))
view = np.s_[y, x]
res = self._slice(view)
if isinstance(res, np.ndarray) and self.get('ignore_alpha', False):
# <-- this image has a "hidden" alpha array
# NOTE: assumes that data is at index 0
res = res[0]
return res
def set_data(self, data_np, metadata=None, order=None, astype=None):
"""Use this method to SHARE (not copy) the incoming array.
"""
if astype:
data = data_np.astype(astype, copy=False)
else:
data = data_np
self._data = data
self._calc_order(order)
if metadata:
self.update_metadata(metadata)
self._set_minmax()
self.make_callback('modified')
def clear_all(self):
# clear metadata
super(BaseImage, self).clear_all()
# unreference data array
self._data = np.zeros((0, 0))
def _slice(self, view):
if not isinstance(view, tuple):
view = tuple(view)
return self._get_data()[view]
def get_slice(self, c):
view = [slice(None)] * self.ndim
view[-1] = self.order.index(c.upper())
return self._slice(view)
def has_slice(self, c):
return c.upper() in self.order
def get_array(self, order):
order = order.upper()
if order == self.order:
return self._get_data()
l = [self.get_slice(c) for c in order]
return np.dstack(l)
def set_order(self, order):
self.order = order.upper()
def get_order(self):
return self.order
def get_order_indexes(self, cs):
cs = cs.upper()
return [self.order.index(c) for c in cs]
def _calc_order(self, order):
"""Called to set the order of a multi-channel image.
The order should be determined by the loader, but this will
make a best guess if passed `order` is `None`.
"""
if order is not None and order != '':
self.order = order.upper()
else:
self.order = trcalc.guess_order(self.shape)
def has_valid_wcs(self):
return hasattr(self, 'wcs') and self.wcs.has_valid_wcs()
def _set_minmax(self):
data = self._get_fast_data()
try:
self.maxval = np.nanmax(data)
self.minval = np.nanmin(data)
except Exception:
self.maxval = 0
self.minval = 0
# TODO: see if there is a faster way to ignore infinity
try:
if np.isfinite(self.maxval):
self.maxval_noinf = self.maxval
else:
self.maxval_noinf = np.nanmax(data[np.isfinite(data)])
except Exception:
self.maxval_noinf = self.maxval
try:
if np.isfinite(self.minval):
self.minval_noinf = self.minval
else:
self.minval_noinf = np.nanmin(data[np.isfinite(data)])
except Exception:
self.minval_noinf = self.minval
def get_minmax(self, noinf=False):
if not noinf:
return (self.minval, self.maxval)
else:
return (self.minval_noinf, self.maxval_noinf)
# kwargs is needed so subclasses can interoperate with optional keywords.
def get_header(self, **kwargs):
header = self.get('header', None)
if header is None:
header = Header()
self.set(header=header)
return header
def update_keywords(self, key_dict):
hdr = self.get_header()
hdr.update(key_dict)
def transfer(self, other, astype=None):
data = self._get_data()
other.set_data(data, metadata=self.metadata, astype=astype)
def copy(self, astype=None):
data = self.copy_data()
metadata = self.get_metadata()
other = self.__class__(data_np=data, metadata=metadata)
return other
def cutout_data(self, x1, y1, x2, y2, xstep=1, ystep=1, z=None,
astype=None):
"""Cut out data area based on bounded coordinates.
Parameters
----------
x1, y1 : int
Coordinates defining the minimum corner to be cut out
x2, y2 : int
Coordinates *one greater* than the maximum corner
xstep, ystep : int
Step values for skip intervals in the cutout region
z : int
Value for a depth (slice) component for color images
        astype : numpy dtype, optional
            If given, coerce the cutout to this data type.

        Note that the coordinates for `x2`, `y2` are *outside* the
        cutout region, similar to slicing parameters in Python.
"""
view = np.s_[y1:y2:ystep, x1:x2:xstep]
data_np = self._slice(view)
if z is not None and len(data_np.shape) > 2:
data_np = data_np[..., z]
if astype:
data_np = data_np.astype(astype, copy=False)
return data_np
def cutout_adjust(self, x1, y1, x2, y2, xstep=1, ystep=1, z=0, astype=None):
"""Like `cutout_data`, but adjusts coordinates `x1`, `y1`, `x2`, `y2`
to be inside the data area if they are not already. It tries to
preserve the width and height of the region, so e.g. (-2, -2, 5, 5)
        could become (0, 0, 7, 7).
"""
dx = x2 - x1
dy = y2 - y1
if x1 < 0:
x1, x2 = 0, dx
else:
if x2 >= self.width:
x2 = self.width
x1 = x2 - dx
if y1 < 0:
y1, y2 = 0, dy
else:
if y2 >= self.height:
y2 = self.height
y1 = y2 - dy
data = self.cutout_data(x1, y1, x2, y2, xstep=xstep, ystep=ystep,
z=z, astype=astype)
return (data, x1, y1, x2, y2)
def cutout_radius(self, x, y, radius, xstep=1, ystep=1, astype=None):
return self.cutout_adjust(x - radius, y - radius,
x + radius + 1, y + radius + 1,
xstep=xstep, ystep=ystep, astype=astype)
def cutout_cross(self, x, y, radius):
"""Cut two data subarrays that have a center at (x, y) and with
radius (radius) from (image). Returns the starting pixel (x0, y0)
of each cut and the respective arrays (xarr, yarr).
"""
n = radius
wd, ht = self.get_size()
x0, x1 = max(0, x - n), min(wd - 1, x + n)
y0, y1 = max(0, y - n), min(ht - 1, y + n)
xview = np.s_[y, x0:x1 + 1]
yview = np.s_[y0:y1 + 1, x]
xarr = self._slice(xview)
yarr = self._slice(yview)
return (x0, y0, xarr, yarr)
def get_shape_mask(self, shape_obj):
"""
Return full mask where True marks pixels within the given shape.
"""
wd, ht = self.get_size()
xi, yi = np.meshgrid(range(0, wd), range(0, ht))
pts = np.array((xi, yi)).T
contains = shape_obj.contains_pts(pts)
return contains
def get_shape_view(self, shape_obj, avoid_oob=True):
"""
Calculate a bounding box in the data enclosing `shape_obj` and
return a view that accesses it and a mask that is True only for
pixels enclosed in the region.
If `avoid_oob` is True (default) then the bounding box is clipped
to avoid coordinates outside of the actual data.
"""
x1, y1, x2, y2 = [int(np.round(n)) for n in shape_obj.get_llur()]
if avoid_oob:
# avoid out of bounds indexes
wd, ht = self.get_size()
x1, x2 = max(0, x1), min(x2, wd - 1)
y1, y2 = max(0, y1), min(y2, ht - 1)
# calculate pixel containment mask in bbox
xi, yi = np.meshgrid(range(x1, x2 + 1), range(y1, y2 + 1))
pts = np.array((xi, yi)).T
contains = shape_obj.contains_pts(pts)
view = np.s_[y1:y2 + 1, x1:x2 + 1]
return (view, contains)
def cutout_shape(self, shape_obj):
"""
Cut out and return a portion of the data corresponding to `shape_obj`.
A masked numpy array is returned, where the pixels not enclosed in
the shape are masked out.
"""
view, mask = self.get_shape_view(shape_obj)
# cutout our enclosing (possibly shortened) bbox
data = self._slice(view)
# mask non-containing members
mdata = np.ma.array(data, mask=np.logical_not(mask))
return mdata
def get_scaled_cutout_wdht(self, x1, y1, x2, y2, new_wd, new_ht,
method='basic'):
# TO BE DEPRECATED
data_np = self._get_data()
(newdata, (scale_x, scale_y)) = \
trcalc.get_scaled_cutout_wdht(data_np, x1, y1, x2, y2,
new_wd, new_ht,
interpolation=method,
logger=self.logger)
res = Bunch.Bunch(data=newdata, scale_x=scale_x, scale_y=scale_y)
return res
def get_scaled_cutout_basic(self, x1, y1, x2, y2, scale_x, scale_y,
method='basic'):
# TO BE DEPRECATED
p1, p2 = (x1, y1), (x2, y2)
scales = (scale_x, scale_y)
return self.get_scaled_cutout2(p1, p2, scales, method=method,
logger=self.logger)
def get_scaled_cutout(self, x1, y1, x2, y2, scale_x, scale_y,
method='basic', logger=None):
# TO BE DEPRECATED
p1, p2 = (x1, y1), (x2, y2)
scales = (scale_x, scale_y)
return self.get_scaled_cutout2(p1, p2, scales, method=method,
logger=logger)
def get_scaled_cutout2(self, p1, p2, scales,
method='basic', logger=None):
"""Extract a region of the image defined by points `p1` and `p2`
and scale it by scale factors `scales`.
`method` describes the method of interpolation used, where the
default "basic" is nearest neighbor.
"""
if logger is None:
logger = self.logger
data = self._get_data()
newdata, oscales = trcalc.get_scaled_cutout_basic2(data, p1, p2, scales,
interpolation=method,
logger=logger)
scale_x, scale_y = oscales[:2]
res = Bunch.Bunch(data=newdata, scale_x=scale_x, scale_y=scale_y)
if len(scales) > 2:
res.scale_z = oscales[2]
return res
def get_thumbnail(self, length):
wd, ht = self.get_size()
if ht == 0:
width, height = 1, 1
elif wd > ht:
width, height = length, int(length * float(ht) / wd)
else:
width, height = int(length * float(wd) / ht), length
res = self.get_scaled_cutout_wdht(0, 0, wd, ht, width, height)
return res.data
def get_pixels_on_line(self, x1, y1, x2, y2, getvalues=True):
"""Uses Bresenham's line algorithm to enumerate the pixels along
a line.
(see http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
        If `getvalues` is False, it will return tuples of (x, y) coordinates
        instead of pixel values.
"""
# NOTE: seems to be necessary or we get a non-terminating result
x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
dx = abs(x2 - x1)
dy = abs(y2 - y1)
if x1 < x2:
sx = 1
else:
sx = -1
if y1 < y2:
sy = 1
else:
sy = -1
err = dx - dy
res = []
x, y = x1, y1
while True:
if getvalues:
try:
val = self.get_data_xy(x, y)
except Exception:
                    val = np.nan
res.append(val)
else:
res.append((x, y))
if (x == x2) and (y == y2):
break
e2 = 2 * err
if e2 > -dy:
err = err - dy
x += sx
if e2 < dx:
err = err + dx
y += sy
return res
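    # For example, get_pixels_on_line(0, 0, 3, 3, getvalues=False) walks the
    # diagonal and returns [(0, 0), (1, 1), (2, 2), (3, 3)].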
def info_xy(self, data_x, data_y, settings):
# Get the value under the data coordinates
try:
# We report the value across the pixel, even though the coords
# change halfway across the pixel
_d_x, _d_y = (int(np.floor(data_x + 0.5)),
int(np.floor(data_y + 0.5)))
value = self.get_data_xy(_d_x, _d_y)
        except Exception:
value = None
info = Bunch.Bunch(itype='base', data_x=data_x, data_y=data_y,
x=data_x, y=data_y, value=value)
wd, ht = self.get_size()
if 0 < data_x < wd and 0 < data_y < ht:
info.image_x, info.image_y = data_x, data_y
return info
class Header(dict):
def __init__(self, *args, **kwdargs):
super(Header, self).__init__(*args, **kwdargs)
self.keyorder = []
def __getitem__(self, key):
bnch = super(Header, self).__getitem__(key)
return bnch.value
def __setitem__(self, key, value):
try:
bnch = super(Header, self).__getitem__(key)
bnch.value = value
except KeyError:
bnch = Bunch.Bunch(key=key, value=value, comment='')
self.keyorder.append(key)
super(Header, self).__setitem__(key, bnch)
return bnch
def __delitem__(self, key):
super(Header, self).__delitem__(key)
self.keyorder.remove(key)
def get_card(self, key):
bnch = super(Header, self).__getitem__(key)
return bnch
def set_card(self, key, value, comment=None):
try:
bnch = super(Header, self).__getitem__(key)
bnch.value = value
            if comment is not None:
bnch.comment = comment
except KeyError:
if comment is None:
comment = ''
bnch = Bunch.Bunch(key=key, value=value, comment=comment)
self.keyorder.append(key)
super(Header, self).__setitem__(key, bnch)
return bnch
def get_keyorder(self):
return self.keyorder
def keys(self):
return self.keyorder
def items(self):
return [(key, self[key]) for key in self.keys()]
def get(self, key, alt=None):
try:
return self.__getitem__(key)
except KeyError:
return alt
def merge(self, hdr, override_keywords=False):
if not isinstance(hdr, Header):
raise ValueError("need to pass a compatible header for merge")
for key in hdr.keys():
if key not in self or override_keywords:
card = hdr.get_card(key)
self.set_card(key, card.value, comment=card.comment)
def update(self, map_kind):
for key, value in map_kind.items():
self.__setitem__(key, value)
def asdict(self):
return dict([(key, self[key]) for key in self.keys()])
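# A minimal usage sketch with synthetic data: set_data()/__init__ share (not
# copy) the incoming array, and cutout coordinates follow Python slice
# semantics (x2/y2 exclusive).
#
#     import numpy as np
#     img = BaseImage(data_np=np.arange(100).reshape(10, 10))
#     img.get_data_xy(3, 2)         # -> 23, i.e. data[y, x]
#     img.cutout_data(0, 0, 4, 4)   # 4x4 corner region
#     img.get_minmax()              # -> (0, 99)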
# END
"""
dj-stripe Event Handler tests
"""
from copy import deepcopy
from decimal import Decimal
from unittest.mock import ANY, call, patch
from django.contrib.auth import get_user_model
from django.test import TestCase
from stripe.error import InvalidRequestError
from djstripe.enums import SubscriptionStatus
from djstripe.models import (
Card,
Charge,
Coupon,
Customer,
Dispute,
DjstripePaymentMethod,
Event,
Invoice,
InvoiceItem,
PaymentMethod,
Plan,
Price,
Subscription,
SubscriptionSchedule,
Transfer,
)
from djstripe.models.account import Account
from djstripe.models.billing import TaxId
from djstripe.models.checkout import Session
from djstripe.models.core import File
from djstripe.models.payment_methods import BankAccount
from . import (
FAKE_ACCOUNT,
FAKE_BALANCE_TRANSACTION,
FAKE_BANK_ACCOUNT_IV,
FAKE_CARD,
FAKE_CARD_AS_PAYMENT_METHOD,
FAKE_CARD_III,
FAKE_CARD_IV,
FAKE_CHARGE,
FAKE_CHARGE_II,
FAKE_COUPON,
FAKE_CUSTOM_ACCOUNT,
FAKE_CUSTOMER,
FAKE_CUSTOMER_II,
FAKE_DISPUTE_BALANCE_TRANSACTION,
FAKE_DISPUTE_BALANCE_TRANSACTION_REFUND_FULL,
FAKE_DISPUTE_BALANCE_TRANSACTION_REFUND_PARTIAL,
FAKE_DISPUTE_CHARGE,
FAKE_DISPUTE_I,
FAKE_DISPUTE_II,
FAKE_DISPUTE_III,
FAKE_DISPUTE_PAYMENT_INTENT,
FAKE_DISPUTE_PAYMENT_METHOD,
FAKE_DISPUTE_V_FULL,
FAKE_DISPUTE_V_PARTIAL,
FAKE_EVENT_ACCOUNT_APPLICATION_AUTHORIZED,
FAKE_EVENT_ACCOUNT_APPLICATION_DEAUTHORIZED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_DELETED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_UPDATED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_DELETED,
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_UPDATED,
FAKE_EVENT_CARD_PAYMENT_METHOD_ATTACHED,
FAKE_EVENT_CARD_PAYMENT_METHOD_DETACHED,
FAKE_EVENT_CHARGE_SUCCEEDED,
FAKE_EVENT_CUSTOM_ACCOUNT_UPDATED,
FAKE_EVENT_CUSTOMER_CREATED,
FAKE_EVENT_CUSTOMER_DELETED,
FAKE_EVENT_CUSTOMER_DISCOUNT_CREATED,
FAKE_EVENT_CUSTOMER_DISCOUNT_DELETED,
FAKE_EVENT_CUSTOMER_SOURCE_CREATED,
FAKE_EVENT_CUSTOMER_SOURCE_DELETED,
FAKE_EVENT_CUSTOMER_SOURCE_DELETED_DUPE,
FAKE_EVENT_CUSTOMER_SUBSCRIPTION_CREATED,
FAKE_EVENT_CUSTOMER_SUBSCRIPTION_DELETED,
FAKE_EVENT_CUSTOMER_UPDATED,
FAKE_EVENT_DISPUTE_CLOSED,
FAKE_EVENT_DISPUTE_CREATED,
FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_FULL,
FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_PARTIAL,
FAKE_EVENT_DISPUTE_FUNDS_WITHDRAWN,
FAKE_EVENT_DISPUTE_UPDATED,
FAKE_EVENT_EXPRESS_ACCOUNT_UPDATED,
FAKE_EVENT_FILE_CREATED,
FAKE_EVENT_INVOICE_CREATED,
FAKE_EVENT_INVOICE_DELETED,
FAKE_EVENT_INVOICE_UPCOMING,
FAKE_EVENT_INVOICEITEM_CREATED,
FAKE_EVENT_INVOICEITEM_DELETED,
FAKE_EVENT_PAYMENT_INTENT_SUCCEEDED_DESTINATION_CHARGE,
FAKE_EVENT_PAYMENT_METHOD_ATTACHED,
FAKE_EVENT_PAYMENT_METHOD_DETACHED,
FAKE_EVENT_PLAN_CREATED,
FAKE_EVENT_PLAN_DELETED,
FAKE_EVENT_PLAN_REQUEST_IS_OBJECT,
FAKE_EVENT_PRICE_CREATED,
FAKE_EVENT_PRICE_DELETED,
FAKE_EVENT_PRICE_UPDATED,
FAKE_EVENT_SESSION_COMPLETED,
FAKE_EVENT_STANDARD_ACCOUNT_UPDATED,
FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CANCELED,
FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED,
FAKE_EVENT_SUBSCRIPTION_SCHEDULE_RELEASED,
FAKE_EVENT_SUBSCRIPTION_SCHEDULE_UPDATED,
FAKE_EVENT_TAX_ID_CREATED,
FAKE_EVENT_TAX_ID_DELETED,
FAKE_EVENT_TAX_ID_UPDATED,
FAKE_EVENT_TRANSFER_CREATED,
FAKE_EVENT_TRANSFER_DELETED,
FAKE_EXPRESS_ACCOUNT,
FAKE_FILEUPLOAD_ICON,
FAKE_FILEUPLOAD_LOGO,
FAKE_INVOICE,
FAKE_INVOICE_II,
FAKE_INVOICEITEM,
FAKE_PAYMENT_INTENT_DESTINATION_CHARGE,
FAKE_PAYMENT_INTENT_I,
FAKE_PAYMENT_INTENT_II,
FAKE_PAYMENT_METHOD_I,
FAKE_PAYMENT_METHOD_II,
FAKE_PLAN,
FAKE_PRICE,
FAKE_PRODUCT,
FAKE_SESSION_I,
FAKE_STANDARD_ACCOUNT,
FAKE_SUBSCRIPTION,
FAKE_SUBSCRIPTION_CANCELED,
FAKE_SUBSCRIPTION_III,
FAKE_SUBSCRIPTION_SCHEDULE,
FAKE_TAX_ID,
FAKE_TAX_ID_UPDATED,
FAKE_TRANSFER,
IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
AssertStripeFksMixin,
)
class EventTestCase(TestCase):
#
# Helpers
#
@patch("stripe.Event.retrieve", autospec=True)
def _create_event(self, event_data, event_retrieve_mock, patch_data=None):
event_data = deepcopy(event_data)
if patch_data:
event_data.update(patch_data)
event_retrieve_mock.return_value = event_data
event = Event.sync_from_stripe_data(event_data)
return event
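# Subclasses replay a webhook fixture through the helper above, optionally
# overriding fields (patch_data values hypothetical):
#
#     event = self._create_event(
#         FAKE_EVENT_CUSTOMER_CREATED, patch_data={"livemode": True}
#     )
#     event.invoke_webhook_handlers()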
class TestAccountEvents(EventTestCase):
def setUp(self):
# create a Custom Stripe Account
self.custom_account = FAKE_CUSTOM_ACCOUNT.create()
# create a Standard Stripe Account
self.standard_account = FAKE_STANDARD_ACCOUNT.create()
# create an Express Stripe Account
self.express_account = FAKE_EXPRESS_ACCOUNT.create()
@patch("stripe.Event.retrieve", autospec=True)
def test_account_deauthorized_event(self, event_retrieve_mock):
fake_stripe_event = deepcopy(FAKE_EVENT_ACCOUNT_APPLICATION_DEAUTHORIZED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
@patch("stripe.Event.retrieve", autospec=True)
def test_account_authorized_event(self, event_retrieve_mock):
fake_stripe_event = deepcopy(FAKE_EVENT_ACCOUNT_APPLICATION_AUTHORIZED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# account.external_account.* events are fired for Custom and Express Accounts
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_created_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# fetch the newly created BankAccount object
bankaccount = BankAccount.objects.get(account=self.custom_account)
# assert the ids of the Bank Account and the Accounts were synced correctly.
self.assertEqual(
bankaccount.id,
fake_stripe_event["data"]["object"]["id"],
)
self.assertEqual(
self.custom_account.id,
fake_stripe_event["data"]["object"]["account"],
)
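    # Note the wiring in these external-account tests: retrieve_external_account
    # is patched to return the fixture, so invoke_webhook_handlers() can sync
    # the BankAccount/Card objects without touching the Stripe API.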
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_deleted_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_delete_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_DELETED
)
event = Event.sync_from_stripe_data(fake_stripe_delete_event)
event.invoke_webhook_handlers()
# assert the BankAccount object no longer exists
self.assertFalse(
BankAccount.objects.filter(
id=fake_stripe_create_event["data"]["object"]["id"]
).exists()
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_updated_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_update_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_UPDATED
)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated BankAccount object
bankaccount = BankAccount.objects.get(account=self.custom_account)
# assert we are updating the account_holder_name
self.assertNotEqual(
fake_stripe_update_event["data"]["object"]["account_holder_name"],
fake_stripe_create_event["data"]["object"]["account_holder_name"],
)
# assert the account_holder_name got updated
        self.assertEqual(
bankaccount.account_holder_name,
fake_stripe_update_event["data"]["object"]["account_holder_name"],
)
# assert the expected BankAccount object got updated
self.assertEqual(
bankaccount.id, fake_stripe_create_event["data"]["object"]["id"]
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_created_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# fetch the newly created Card object
card = Card.objects.get(account=self.custom_account)
# assert the ids of the Card and the Accounts were synced correctly.
self.assertEqual(
card.id,
fake_stripe_event["data"]["object"]["id"],
)
self.assertEqual(
self.custom_account.id,
fake_stripe_event["data"]["object"]["account"],
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_deleted_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_delete_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_DELETED
)
event = Event.sync_from_stripe_data(fake_stripe_delete_event)
event.invoke_webhook_handlers()
        # assert the Card object no longer exists
self.assertFalse(
Card.objects.filter(
id=fake_stripe_create_event["data"]["object"]["id"]
).exists()
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_external_account_updated_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_update_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_UPDATED
)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated Card object
card = Card.objects.get(account=self.custom_account)
# assert we are updating the name
self.assertNotEqual(
fake_stripe_update_event["data"]["object"]["name"],
fake_stripe_create_event["data"]["object"]["name"],
)
# assert the name got updated
        self.assertEqual(
            card.name, fake_stripe_update_event["data"]["object"]["name"]
        )
# assert the expected Card object got updated
self.assertEqual(card.id, fake_stripe_create_event["data"]["object"]["id"])
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_created_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# fetch the newly created BankAccount object
bankaccount = BankAccount.objects.get(account=self.express_account)
        # assert the ids of the BankAccount and the Account were synced correctly.
self.assertEqual(
bankaccount.id,
fake_stripe_event["data"]["object"]["id"],
)
self.assertEqual(
self.express_account.id,
fake_stripe_event["data"]["object"]["account"],
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_deleted_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_delete_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_DELETED
)
event = Event.sync_from_stripe_data(fake_stripe_delete_event)
event.invoke_webhook_handlers()
# assert the BankAccount object no longer exists
self.assertFalse(
BankAccount.objects.filter(
id=fake_stripe_create_event["data"]["object"]["id"]
).exists()
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_BANK_ACCOUNT_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_updated_bank_account_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_update_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_BANK_ACCOUNT_UPDATED
)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated BankAccount object
bankaccount = BankAccount.objects.get(account=self.express_account)
# assert we are updating the account_holder_name
self.assertNotEqual(
fake_stripe_update_event["data"]["object"]["account_holder_name"],
fake_stripe_create_event["data"]["object"]["account_holder_name"],
)
# assert the account_holder_name got updated
        self.assertEqual(
            bankaccount.account_holder_name,
            fake_stripe_update_event["data"]["object"]["account_holder_name"],
        )
# assert the expected BankAccount object got updated
self.assertEqual(
bankaccount.id, fake_stripe_create_event["data"]["object"]["id"]
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_created_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# fetch the newly created Card object
card = Card.objects.get(account=self.express_account)
        # assert the ids of the Card and the Account were synced correctly.
self.assertEqual(
card.id,
fake_stripe_event["data"]["object"]["id"],
)
self.assertEqual(
self.express_account.id,
fake_stripe_event["data"]["object"]["account"],
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_deleted_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_delete_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_DELETED
)
event = Event.sync_from_stripe_data(fake_stripe_delete_event)
event.invoke_webhook_handlers()
        # assert the Card object no longer exists
self.assertFalse(
Card.objects.filter(
id=fake_stripe_create_event["data"]["object"]["id"]
).exists()
)
@patch(
"stripe.Account.retrieve_external_account",
return_value=deepcopy(FAKE_CARD_IV),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_external_account_updated_card_event(
self, event_retrieve_mock, account_retrieve_external_account_mock
):
fake_stripe_create_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_CREATED
)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
fake_stripe_update_event = deepcopy(
FAKE_EVENT_ACCOUNT_EXTERNAL_ACCOUNT_CARD_UPDATED
)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated Card object
card = Card.objects.get(account=self.express_account)
# assert we are updating the name
self.assertNotEqual(
fake_stripe_update_event["data"]["object"]["name"],
fake_stripe_create_event["data"]["object"]["name"],
)
# assert the name got updated
        self.assertEqual(
            card.name, fake_stripe_update_event["data"]["object"]["name"]
        )
# assert the expected Card object got updated
self.assertEqual(card.id, fake_stripe_create_event["data"]["object"]["id"])
# account.updated events
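    # these verify that an account.updated event refreshes the metadata of the
    # locally synced Account object for each of the standard, express and
    # custom account types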
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_EVENT_STANDARD_ACCOUNT_UPDATED["data"]["object"]),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_standard_account_updated_event(
self, event_retrieve_mock, account_retrieve_mock
):
# fetch the Stripe Account
standard_account = self.standard_account
# assert metadata is empty
self.assertEqual(standard_account.metadata, {})
fake_stripe_update_event = deepcopy(FAKE_EVENT_STANDARD_ACCOUNT_UPDATED)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated Account object
updated_standard_account = Account.objects.get(id=standard_account.id)
# assert we are updating the metadata
self.assertNotEqual(
updated_standard_account.metadata,
standard_account.metadata,
)
        # assert the metadata got updated
self.assertEqual(
updated_standard_account.metadata,
fake_stripe_update_event["data"]["object"]["metadata"],
)
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_EVENT_EXPRESS_ACCOUNT_UPDATED["data"]["object"]),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_express_account_updated_event(
self, event_retrieve_mock, account_retrieve_mock
):
# fetch the Stripe Account
express_account = self.express_account
# assert metadata is empty
self.assertEqual(express_account.metadata, {})
fake_stripe_update_event = deepcopy(FAKE_EVENT_EXPRESS_ACCOUNT_UPDATED)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated Account object
updated_express_account = Account.objects.get(id=express_account.id)
# assert we are updating the metadata
self.assertNotEqual(
updated_express_account.metadata,
express_account.metadata,
)
        # assert the metadata got updated
self.assertEqual(
updated_express_account.metadata,
fake_stripe_update_event["data"]["object"]["metadata"],
)
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_EVENT_CUSTOM_ACCOUNT_UPDATED["data"]["object"]),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_custom_account_updated_event(
self, event_retrieve_mock, account_retrieve_mock
):
# fetch the Stripe Account
custom_account = self.custom_account
# assert metadata is empty
self.assertEqual(custom_account.metadata, {})
fake_stripe_update_event = deepcopy(FAKE_EVENT_CUSTOM_ACCOUNT_UPDATED)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
# fetch the updated Account object
updated_custom_account = Account.objects.get(id=custom_account.id)
# assert we are updating the metadata
self.assertNotEqual(
updated_custom_account.metadata,
custom_account.metadata,
)
        # assert the metadata got updated
self.assertEqual(
updated_custom_account.metadata,
fake_stripe_update_event["data"]["object"]["metadata"],
)
class TestChargeEvents(EventTestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
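    # charge.succeeded drags in the whole related object graph (invoice,
    # subscription, payment intent, payment method, balance transaction),
    # so every nested retrieve call has to be mocked out below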
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
)
@patch("stripe.Charge.retrieve", autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_I),
autospec=True,
)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.Invoice.retrieve", return_value=deepcopy(FAKE_INVOICE), autospec=True
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
def test_charge_created(
self,
subscription_retrieve_mock,
product_retrieve_mock,
invoice_retrieve_mock,
event_retrieve_mock,
paymentmethod_card_retrieve_mock,
payment_intent_retrieve_mock,
charge_retrieve_mock,
balance_transaction_retrieve_mock,
account_mock,
):
FAKE_CUSTOMER.create_for_user(self.user)
fake_stripe_event = deepcopy(FAKE_EVENT_CHARGE_SUCCEEDED)
event_retrieve_mock.return_value = fake_stripe_event
charge_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
charge = Charge.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
charge.amount,
fake_stripe_event["data"]["object"]["amount"] / Decimal("100"),
)
self.assertEqual(charge.status, fake_stripe_event["data"]["object"]["status"])
class TestCheckoutEvents(EventTestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
self.customer = FAKE_CUSTOMER.create_for_user(self.user)
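    # checkout.session.completed and its async_payment_succeeded /
    # async_payment_failed variants are all handled alike here: the Session
    # is synced and linked back to the local Customer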
@patch(
"stripe.checkout.Session.retrieve", return_value=FAKE_SESSION_I, autospec=True
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=FAKE_PAYMENT_INTENT_I,
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_checkout_session_completed(
self,
event_retrieve_mock,
payment_intent_retrieve_mock,
customer_retrieve_mock,
session_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_SESSION_COMPLETED)
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
session = Session.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(session.customer.id, self.customer.id)
@patch(
"stripe.checkout.Session.retrieve", return_value=FAKE_SESSION_I, autospec=True
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=FAKE_PAYMENT_INTENT_I,
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_checkout_session_async_payment_succeeded(
self,
event_retrieve_mock,
payment_intent_retrieve_mock,
customer_retrieve_mock,
session_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_SESSION_COMPLETED)
fake_stripe_event["type"] = "checkout.session.async_payment_succeeded"
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
session = Session.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(session.customer.id, self.customer.id)
@patch(
"stripe.checkout.Session.retrieve", return_value=FAKE_SESSION_I, autospec=True
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=FAKE_PAYMENT_INTENT_I,
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_checkout_session_async_payment_failed(
self,
event_retrieve_mock,
payment_intent_retrieve_mock,
customer_retrieve_mock,
session_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_SESSION_COMPLETED)
fake_stripe_event["type"] = "checkout.session.async_payment_failed"
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
session = Session.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(session.customer.id, self.customer.id)
@patch(
"stripe.checkout.Session.retrieve", return_value=FAKE_SESSION_I, autospec=True
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=FAKE_PAYMENT_INTENT_I,
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_checkout_session_completed_customer_subscriber_added(
self,
event_retrieve_mock,
payment_intent_retrieve_mock,
customer_retrieve_mock,
session_retrieve_mock,
):
        # create_for_user attaches a subscriber, so detach it again first
        self.customer.subscriber = None
self.customer.save()
fake_stripe_event = deepcopy(FAKE_EVENT_SESSION_COMPLETED)
fake_stripe_event["data"]["object"]["metadata"] = {
"djstripe_subscriber": self.user.id
}
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
# refresh self.customer from db
self.customer.refresh_from_db()
session = Session.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(session.customer.id, self.customer.id)
self.assertEqual(self.customer.subscriber, self.user)
self.assertEqual(self.customer.metadata, {"djstripe_subscriber": self.user.id})
class TestCustomerEvents(EventTestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
self.customer = FAKE_CUSTOMER.create_for_user(self.user)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_customer_created(self, event_retrieve_mock, customer_retrieve_mock):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
customer = Customer.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
customer.balance, fake_stripe_event["data"]["object"]["balance"]
)
self.assertEqual(
customer.currency, fake_stripe_event["data"]["object"]["currency"]
)
@patch("stripe.Customer.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_customer_metadata_created(
self, event_retrieve_mock, customer_retrieve_mock
):
fake_customer = deepcopy(FAKE_CUSTOMER)
fake_customer["metadata"] = {"djstripe_subscriber": self.user.id}
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_CREATED)
fake_stripe_event["data"]["object"] = fake_customer
event_retrieve_mock.return_value = fake_stripe_event
customer_retrieve_mock.return_value = fake_customer
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
customer = Customer.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
customer.balance, fake_stripe_event["data"]["object"]["balance"]
)
self.assertEqual(
customer.currency, fake_stripe_event["data"]["object"]["currency"]
)
self.assertEqual(customer.subscriber, self.user)
self.assertEqual(customer.metadata, {"djstripe_subscriber": self.user.id})
@patch("stripe.Customer.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_customer_metadata_updated(
self, event_retrieve_mock, customer_retrieve_mock
):
fake_customer = deepcopy(FAKE_CUSTOMER)
fake_customer["metadata"] = {"djstripe_subscriber": self.user.id}
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_UPDATED)
fake_stripe_event["data"]["object"] = fake_customer
event_retrieve_mock.return_value = fake_stripe_event
customer_retrieve_mock.return_value = fake_customer
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
customer = Customer.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
customer.balance, fake_stripe_event["data"]["object"]["balance"]
)
self.assertEqual(
customer.currency, fake_stripe_event["data"]["object"]["currency"]
)
self.assertEqual(customer.subscriber, self.user)
self.assertEqual(customer.metadata, {"djstripe_subscriber": self.user.id})
@patch(
"stripe.Customer.retrieve_source",
side_effect=[deepcopy(FAKE_CARD), deepcopy(FAKE_CARD_III)],
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
def test_customer_deleted(
        self, customer_retrieve_mock, customer_retrieve_source_mock
):
FAKE_CUSTOMER.create_for_user(self.user)
event = self._create_event(FAKE_EVENT_CUSTOMER_CREATED)
event.invoke_webhook_handlers()
event = self._create_event(FAKE_EVENT_CUSTOMER_DELETED)
event.invoke_webhook_handlers()
customer = Customer.objects.get(id=FAKE_CUSTOMER["id"])
self.assertIsNotNone(customer.date_purged)
@patch("stripe.Coupon.retrieve", return_value=FAKE_COUPON, autospec=True)
@patch(
"stripe.Event.retrieve",
return_value=FAKE_EVENT_CUSTOMER_DISCOUNT_CREATED,
autospec=True,
)
def test_customer_discount_created(self, event_retrieve_mock, coupon_retrieve_mock):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_DISCOUNT_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
self.assertIsNotNone(event.customer)
self.assertEqual(event.customer.id, FAKE_CUSTOMER["id"])
self.assertIsNotNone(event.customer.coupon)
@patch("stripe.Coupon.retrieve", return_value=FAKE_COUPON, autospec=True)
@patch(
"stripe.Event.retrieve",
return_value=FAKE_EVENT_CUSTOMER_DISCOUNT_DELETED,
autospec=True,
)
def test_customer_discount_deleted(self, event_retrieve_mock, coupon_retrieve_mock):
coupon = Coupon.sync_from_stripe_data(FAKE_COUPON)
self.customer.coupon = coupon
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_DISCOUNT_DELETED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
self.assertIsNotNone(event.customer)
self.assertEqual(event.customer.id, FAKE_CUSTOMER["id"])
self.assertIsNone(event.customer.coupon)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.Customer.retrieve_source",
return_value=deepcopy(FAKE_CARD),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
def test_customer_card_created(
self, customer_retrieve_source_mock, event_retrieve_mock, customer_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_SOURCE_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
card = Card.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertIn(card, self.customer.legacy_cards.all())
self.assertEqual(card.brand, fake_stripe_event["data"]["object"]["brand"])
self.assertEqual(card.last4, fake_stripe_event["data"]["object"]["last4"])
@patch("stripe.Event.retrieve", autospec=True)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
def test_customer_unknown_source_created(
self, customer_retrieve_mock, event_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_SOURCE_CREATED)
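        # rewrite the payload so the source has an unrecognised object type;
        # the handler should then skip it without creating a Card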
fake_stripe_event["data"]["object"]["object"] = "unknown"
fake_stripe_event["data"]["object"][
"id"
] = "card_xxx_test_customer_unk_source_created"
event_retrieve_mock.return_value = fake_stripe_event
FAKE_CUSTOMER.create_for_user(self.user)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
self.assertFalse(
Card.objects.filter(id=fake_stripe_event["data"]["object"]["id"]).exists()
)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
def test_customer_default_source_deleted(self, customer_retrieve_mock):
self.customer.default_source = DjstripePaymentMethod.objects.get(
id=FAKE_CARD["id"]
)
self.customer.save()
self.assertIsNotNone(self.customer.default_source)
self.assertTrue(self.customer.has_valid_source())
event = self._create_event(FAKE_EVENT_CUSTOMER_SOURCE_DELETED)
event.invoke_webhook_handlers()
# fetch the customer. Doubles up as a check that the customer didn't get
# deleted
customer = Customer.objects.get(id=FAKE_CUSTOMER["id"])
self.assertIsNone(customer.default_source)
self.assertFalse(customer.has_valid_source())
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
def test_customer_source_double_delete(self, customer_retrieve_mock):
event = self._create_event(FAKE_EVENT_CUSTOMER_SOURCE_DELETED)
event.invoke_webhook_handlers()
event = self._create_event(FAKE_EVENT_CUSTOMER_SOURCE_DELETED_DUPE)
event.invoke_webhook_handlers()
# fetch the customer. Doubles up as a check that the customer didn't get
# deleted
customer = Customer.objects.get(id=FAKE_CUSTOMER["id"])
self.assertIsNone(customer.default_source)
self.assertFalse(customer.has_valid_source())
@patch("stripe.Plan.retrieve", return_value=deepcopy(FAKE_PLAN), autospec=True)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
@patch("stripe.Event.retrieve", autospec=True)
@patch("stripe.Customer.retrieve", return_value=FAKE_CUSTOMER, autospec=True)
def test_customer_subscription_created(
self,
customer_retrieve_mock,
event_retrieve_mock,
product_retrieve_mock,
subscription_retrieve_mock,
plan_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_SUBSCRIPTION_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
subscription = Subscription.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
self.assertIn(subscription, self.customer.subscriptions.all())
self.assertEqual(
subscription.status, fake_stripe_event["data"]["object"]["status"]
)
self.assertEqual(
subscription.quantity, fake_stripe_event["data"]["object"]["quantity"]
)
@patch("stripe.Plan.retrieve", return_value=deepcopy(FAKE_PLAN), autospec=True)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
@patch(
"stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER), autospec=True
)
def test_customer_subscription_deleted(
self,
customer_retrieve_mock,
product_retrieve_mock,
subscription_retrieve_mock,
plan_retrieve_mock,
):
event = self._create_event(FAKE_EVENT_CUSTOMER_SUBSCRIPTION_CREATED)
event.invoke_webhook_handlers()
sub = Subscription.objects.get(id=FAKE_SUBSCRIPTION["id"])
self.assertEqual(sub.status, SubscriptionStatus.active)
subscription_retrieve_mock.return_value = deepcopy(FAKE_SUBSCRIPTION_CANCELED)
event = self._create_event(FAKE_EVENT_CUSTOMER_SUBSCRIPTION_DELETED)
event.invoke_webhook_handlers()
sub = Subscription.objects.get(id=FAKE_SUBSCRIPTION["id"])
# Check that Subscription is canceled and not deleted
self.assertEqual(sub.status, SubscriptionStatus.canceled)
self.assertIsNotNone(sub.canceled_at)
@patch("stripe.Customer.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_customer_bogus_event_type(
self, event_retrieve_mock, customer_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_CUSTOMER_CREATED)
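        # point the payload's customer field at itself and give the event a
        # type dj-stripe has no handler for; invoking the handlers should be
        # a harmless no-op rather than an error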
fake_stripe_event["data"]["object"]["customer"] = fake_stripe_event["data"][
"object"
]["id"]
fake_stripe_event["type"] = "customer.praised"
event_retrieve_mock.return_value = fake_stripe_event
customer_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
class TestDisputeEvents(EventTestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
username="fake_customer_1", email=FAKE_CUSTOMER["email"]
)
self.customer = FAKE_CUSTOMER.create_for_user(self.user)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_DISPUTE_BALANCE_TRANSACTION),
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve", return_value=deepcopy(FAKE_DISPUTE_I), autospec=True
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_CREATED),
autospec=True,
)
def test_dispute_created(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertEqual(dispute.id, FAKE_DISPUTE_I["id"])
    # funds are withdrawn from the account as soon as a charge is disputed,
    # so in practice there is no difference between charge.dispute.created
    # and charge.dispute.funds_withdrawn
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_DISPUTE_BALANCE_TRANSACTION),
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve", return_value=deepcopy(FAKE_DISPUTE_II), autospec=True
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_FUNDS_WITHDRAWN),
autospec=True,
)
def test_dispute_funds_withdrawn(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_FUNDS_WITHDRAWN)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertEqual(dispute.id, FAKE_DISPUTE_II["id"])
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_DISPUTE_BALANCE_TRANSACTION),
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve",
return_value=deepcopy(FAKE_DISPUTE_III),
autospec=True,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_UPDATED),
autospec=True,
)
def test_dispute_updated(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_UPDATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertEqual(dispute.id, FAKE_DISPUTE_III["id"])
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_DISPUTE_BALANCE_TRANSACTION),
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve",
return_value=deepcopy(FAKE_DISPUTE_III),
autospec=True,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_CLOSED),
autospec=True,
)
def test_dispute_closed(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_CLOSED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertEqual(dispute.id, FAKE_DISPUTE_III["id"])
    # funds are reinstated after the dispute is closed; this test covers a
    # full reinstatement
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
side_effect=[
FAKE_DISPUTE_BALANCE_TRANSACTION,
FAKE_DISPUTE_BALANCE_TRANSACTION_REFUND_FULL,
],
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve",
return_value=deepcopy(FAKE_DISPUTE_V_FULL),
autospec=True,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_FULL),
autospec=True,
)
def test_dispute_funds_reinstated_full(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_FULL)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertEqual(dispute.id, FAKE_DISPUTE_V_FULL["id"])
    # funds are reinstated after the dispute is closed; this test covers a
    # partial refund
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_DISPUTE_PAYMENT_INTENT),
autospec=True,
)
@patch(
"stripe.Charge.retrieve",
return_value=deepcopy(FAKE_DISPUTE_CHARGE),
autospec=True,
)
@patch(
"stripe.BalanceTransaction.retrieve",
side_effect=[
FAKE_DISPUTE_BALANCE_TRANSACTION,
FAKE_DISPUTE_BALANCE_TRANSACTION_REFUND_PARTIAL,
],
)
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Dispute.retrieve",
return_value=deepcopy(FAKE_DISPUTE_V_PARTIAL),
autospec=True,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_PARTIAL),
autospec=True,
)
def test_dispute_funds_reinstated_partial(
self,
event_retrieve_mock,
dispute_retrieve_mock,
file_retrieve_mock,
balance_transaction_retrieve_mock,
charge_retrieve_mock,
payment_intent_retrieve_mock,
payment_method_retrieve_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_DISPUTE_FUNDS_REINSTATED_PARTIAL)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
dispute = Dispute.objects.get()
self.assertGreaterEqual(len(dispute.balance_transactions), 2)
self.assertEqual(dispute.id, FAKE_DISPUTE_V_PARTIAL["id"])
class TestFileEvents(EventTestCase):
@patch(
"stripe.File.retrieve",
return_value=deepcopy(FAKE_FILEUPLOAD_ICON),
autospec=True,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_FILE_CREATED),
autospec=True,
)
def test_file_created(self, event_retrieve_mock, file_retrieve_mock):
fake_stripe_event = deepcopy(FAKE_EVENT_FILE_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
file = File.objects.get()
self.assertEqual(file.id, FAKE_FILEUPLOAD_ICON["id"])
class TestInvoiceEvents(EventTestCase):
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
autospec=True,
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
@patch(
"stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER), autospec=True
)
@patch("stripe.Charge.retrieve", return_value=deepcopy(FAKE_CHARGE), autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_I),
autospec=True,
)
@patch(
"stripe.Invoice.retrieve", return_value=deepcopy(FAKE_INVOICE), autospec=True
)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_invoice_created_no_existing_customer(
self,
product_retrieve_mock,
paymentmethod_card_retrieve_mock,
event_retrieve_mock,
invoice_retrieve_mock,
payment_intent_retrieve_mock,
charge_retrieve_mock,
customer_retrieve_mock,
subscription_retrieve_mock,
balance_transaction_retrieve_mock,
default_account_mock,
):
default_account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
fake_stripe_event = deepcopy(FAKE_EVENT_INVOICE_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
invoice_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
self.assertEqual(Customer.objects.count(), 1)
customer = Customer.objects.get()
self.assertEqual(customer.subscriber, None)
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
autospec=True,
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
@patch(
"stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER), autospec=True
)
@patch("stripe.Charge.retrieve", return_value=deepcopy(FAKE_CHARGE), autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_I),
autospec=True,
)
@patch("stripe.Invoice.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_invoice_created(
self,
product_retrieve_mock,
paymentmethod_card_retrieve_mock,
event_retrieve_mock,
invoice_retrieve_mock,
payment_intent_retrieve_mock,
charge_retrieve_mock,
customer_retrieve_mock,
subscription_retrieve_mock,
balance_transaction_retrieve_mock,
default_account_mock,
):
default_account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
FAKE_CUSTOMER.create_for_user(user)
fake_stripe_event = deepcopy(FAKE_EVENT_INVOICE_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
invoice_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
invoice = Invoice.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
invoice.amount_due,
fake_stripe_event["data"]["object"]["amount_due"] / Decimal("100"),
)
self.assertEqual(invoice.paid, fake_stripe_event["data"]["object"]["paid"])
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
autospec=True,
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION),
autospec=True,
)
@patch("stripe.Charge.retrieve", return_value=deepcopy(FAKE_CHARGE), autospec=True)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_I),
autospec=True,
)
@patch(
"stripe.Invoice.retrieve", return_value=deepcopy(FAKE_INVOICE), autospec=True
)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_CARD_AS_PAYMENT_METHOD),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_invoice_deleted(
self,
product_retrieve_mock,
paymentmethod_card_retrieve_mock,
invoice_retrieve_mock,
payment_intent_retrieve_mock,
charge_retrieve_mock,
subscription_retrieve_mock,
balance_transaction_retrieve_mock,
default_account_mock,
):
default_account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
FAKE_CUSTOMER.create_for_user(user)
event = self._create_event(FAKE_EVENT_INVOICE_CREATED)
event.invoke_webhook_handlers()
Invoice.objects.get(id=FAKE_INVOICE["id"])
event = self._create_event(FAKE_EVENT_INVOICE_DELETED)
event.invoke_webhook_handlers()
with self.assertRaises(Invoice.DoesNotExist):
Invoice.objects.get(id=FAKE_INVOICE["id"])
def test_invoice_upcoming(self):
        # Ensure that invoice.upcoming events are processed. No actual
        # processing occurs, so invoking the handlers is an effective no-op.
event = self._create_event(FAKE_EVENT_INVOICE_UPCOMING)
event.invoke_webhook_handlers()
class TestInvoiceItemEvents(EventTestCase):
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
autospec=True,
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION_III),
autospec=True,
)
@patch(
"stripe.Charge.retrieve", return_value=deepcopy(FAKE_CHARGE_II), autospec=True
)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_PAYMENT_METHOD_II),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_II),
autospec=True,
)
@patch(
"stripe.Invoice.retrieve", return_value=deepcopy(FAKE_INVOICE_II), autospec=True
)
@patch("stripe.InvoiceItem.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_invoiceitem_created(
self,
product_retrieve_mock,
event_retrieve_mock,
invoiceitem_retrieve_mock,
invoice_retrieve_mock,
paymentintent_retrieve_mock,
paymentmethod_retrieve_mock,
charge_retrieve_mock,
subscription_retrieve_mock,
balance_transaction_retrieve_mock,
default_account_mock,
):
default_account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
FAKE_CUSTOMER_II.create_for_user(user)
fake_stripe_event = deepcopy(FAKE_EVENT_INVOICEITEM_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
invoiceitem_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
invoiceitem = InvoiceItem.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
self.assertEqual(
invoiceitem.amount,
fake_stripe_event["data"]["object"]["amount"] / Decimal("100"),
)
@patch(
"djstripe.models.Account.get_default_account",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.BalanceTransaction.retrieve",
return_value=deepcopy(FAKE_BALANCE_TRANSACTION),
autospec=True,
)
@patch(
"stripe.Subscription.retrieve",
return_value=deepcopy(FAKE_SUBSCRIPTION_III),
autospec=True,
)
@patch(
"stripe.Charge.retrieve", return_value=deepcopy(FAKE_CHARGE_II), autospec=True
)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_PAYMENT_METHOD_II),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_II),
autospec=True,
)
@patch(
"stripe.Invoice.retrieve", return_value=deepcopy(FAKE_INVOICE_II), autospec=True
)
@patch(
"stripe.InvoiceItem.retrieve",
return_value=deepcopy(FAKE_INVOICEITEM),
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_invoiceitem_deleted(
self,
product_retrieve_mock,
invoiceitem_retrieve_mock,
invoice_retrieve_mock,
paymentintent_retrieve_mock,
paymentmethod_retrieve_mock,
charge_retrieve_mock,
subscription_retrieve_mock,
balance_transaction_retrieve_mock,
default_account_mock,
):
default_account_mock.return_value = FAKE_STANDARD_ACCOUNT.create()
user = get_user_model().objects.create_user(
username="pydanny", email="pydanny@gmail.com"
)
FAKE_CUSTOMER_II.create_for_user(user)
event = self._create_event(FAKE_EVENT_INVOICEITEM_CREATED)
event.invoke_webhook_handlers()
InvoiceItem.objects.get(id=FAKE_INVOICEITEM["id"])
event = self._create_event(FAKE_EVENT_INVOICEITEM_DELETED)
event.invoke_webhook_handlers()
with self.assertRaises(InvoiceItem.DoesNotExist):
InvoiceItem.objects.get(id=FAKE_INVOICEITEM["id"])
class TestPlanEvents(EventTestCase):
@patch("stripe.Plan.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_plan_created(
self, product_retrieve_mock, event_retrieve_mock, plan_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_PLAN_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
plan_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
plan = Plan.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(plan.nickname, fake_stripe_event["data"]["object"]["nickname"])
@patch("stripe.Plan.retrieve", return_value=FAKE_PLAN, autospec=True)
@patch(
"stripe.Event.retrieve",
return_value=FAKE_EVENT_PLAN_REQUEST_IS_OBJECT,
autospec=True,
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_plan_updated_request_object(
self, product_retrieve_mock, event_retrieve_mock, plan_retrieve_mock
):
plan_retrieve_mock.return_value = FAKE_EVENT_PLAN_REQUEST_IS_OBJECT["data"][
"object"
]
event = Event.sync_from_stripe_data(FAKE_EVENT_PLAN_REQUEST_IS_OBJECT)
event.invoke_webhook_handlers()
plan = Plan.objects.get(
id=FAKE_EVENT_PLAN_REQUEST_IS_OBJECT["data"]["object"]["id"]
)
self.assertEqual(
plan.nickname,
FAKE_EVENT_PLAN_REQUEST_IS_OBJECT["data"]["object"]["nickname"],
)
@patch("stripe.Plan.retrieve", return_value=FAKE_PLAN, autospec=True)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_plan_deleted(self, product_retrieve_mock, plan_retrieve_mock):
event = self._create_event(FAKE_EVENT_PLAN_CREATED)
event.invoke_webhook_handlers()
Plan.objects.get(id=FAKE_PLAN["id"])
event = self._create_event(FAKE_EVENT_PLAN_DELETED)
event.invoke_webhook_handlers()
with self.assertRaises(Plan.DoesNotExist):
Plan.objects.get(id=FAKE_PLAN["id"])
class TestPriceEvents(EventTestCase):
@patch("stripe.Price.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_price_created(
self, product_retrieve_mock, event_retrieve_mock, price_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_PRICE_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
price_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
price = Price.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
price.nickname, fake_stripe_event["data"]["object"]["nickname"]
)
@patch("stripe.Price.retrieve", return_value=FAKE_PRICE, autospec=True)
@patch(
"stripe.Event.retrieve", return_value=FAKE_EVENT_PRICE_UPDATED, autospec=True
)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_price_updated(
self, product_retrieve_mock, event_retrieve_mock, price_retrieve_mock
):
price_retrieve_mock.return_value = FAKE_EVENT_PRICE_UPDATED["data"]["object"]
event = Event.sync_from_stripe_data(FAKE_EVENT_PRICE_UPDATED)
event.invoke_webhook_handlers()
price = Price.objects.get(id=FAKE_EVENT_PRICE_UPDATED["data"]["object"]["id"])
self.assertEqual(
price.unit_amount,
FAKE_EVENT_PRICE_UPDATED["data"]["object"]["unit_amount"],
)
self.assertEqual(
price.unit_amount_decimal,
Decimal(FAKE_EVENT_PRICE_UPDATED["data"]["object"]["unit_amount_decimal"]),
)
@patch("stripe.Price.retrieve", return_value=FAKE_PRICE, autospec=True)
@patch(
"stripe.Product.retrieve", return_value=deepcopy(FAKE_PRODUCT), autospec=True
)
def test_price_deleted(self, product_retrieve_mock, price_retrieve_mock):
event = self._create_event(FAKE_EVENT_PRICE_CREATED)
event.invoke_webhook_handlers()
Price.objects.get(id=FAKE_PRICE["id"])
event = self._create_event(FAKE_EVENT_PRICE_DELETED)
event.invoke_webhook_handlers()
with self.assertRaises(Price.DoesNotExist):
Price.objects.get(id=FAKE_PRICE["id"])
class TestPaymentMethodEvents(AssertStripeFksMixin, EventTestCase):
def setUp(self):
self.user = get_user_model().objects.create_user(
username="fake_customer_1", email=FAKE_CUSTOMER["email"]
)
self.customer = FAKE_CUSTOMER.create_for_user(self.user)
@patch("stripe.PaymentMethod.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_payment_method_attached(
self, event_retrieve_mock, payment_method_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_PAYMENT_METHOD_ATTACHED)
event_retrieve_mock.return_value = fake_stripe_event
payment_method_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
payment_method = PaymentMethod.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
self.assert_fks(
payment_method,
expected_blank_fks={
"djstripe.Customer.coupon",
"djstripe.Customer.default_payment_method",
},
)
@patch("stripe.PaymentMethod.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_card_payment_method_attached(
self, event_retrieve_mock, payment_method_retrieve_mock
):
        # Attaching a legacy id="card_xxx" payment method should behave exactly
        # like a native id="pm_yyy" payment method.
fake_stripe_event = deepcopy(FAKE_EVENT_CARD_PAYMENT_METHOD_ATTACHED)
event_retrieve_mock.return_value = fake_stripe_event
payment_method_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
payment_method = PaymentMethod.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
self.assert_fks(
payment_method,
expected_blank_fks={
"djstripe.Customer.coupon",
"djstripe.Customer.default_payment_method",
},
)
@patch("stripe.PaymentMethod.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_payment_method_detached(
self, event_retrieve_mock, payment_method_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_PAYMENT_METHOD_DETACHED)
event_retrieve_mock.return_value = fake_stripe_event
payment_method_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
payment_method = PaymentMethod.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
self.assertIsNone(
payment_method.customer,
"Detach of a payment_method should set customer to null",
)
self.assert_fks(
payment_method, expected_blank_fks={"djstripe.PaymentMethod.customer"}
)
@patch(
"stripe.PaymentMethod.retrieve",
side_effect=InvalidRequestError(
message="No such payment_method: card_xxxx",
param="payment_method",
code="resource_missing",
),
autospec=True,
)
@patch("stripe.Event.retrieve", autospec=True)
def test_card_payment_method_detached(
self, event_retrieve_mock, payment_method_retrieve_mock
):
        # Detaching a legacy id="card_xxx" payment method is handled specially,
        # since the card is deleted by Stripe and PaymentMethod.retrieve
        # therefore fails.
fake_stripe_event = deepcopy(FAKE_EVENT_CARD_PAYMENT_METHOD_DETACHED)
event_retrieve_mock.return_value = fake_stripe_event
payment_method_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
self.assertEqual(
PaymentMethod.objects.filter(
id=fake_stripe_event["data"]["object"]["id"]
).count(),
0,
"Detach of a 'card_' payment_method should delete it",
)
class TestPaymentIntentEvents(EventTestCase):
"""Test case for payment intent event handling."""
@patch(
"stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER), autospec=True
)
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_ACCOUNT),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.File.retrieve",
side_effect=(deepcopy(FAKE_FILEUPLOAD_ICON), deepcopy(FAKE_FILEUPLOAD_LOGO)),
autospec=True,
)
@patch(
"stripe.PaymentIntent.retrieve",
return_value=deepcopy(FAKE_PAYMENT_INTENT_DESTINATION_CHARGE),
autospec=True,
)
@patch(
"stripe.PaymentMethod.retrieve",
return_value=deepcopy(FAKE_PAYMENT_METHOD_I),
autospec=True,
)
def test_payment_intent_succeeded_with_destination_charge(
self,
        payment_method_retrieve_mock,
        payment_intent_retrieve_mock,
        file_upload_retrieve_mock,
        account_retrieve_mock,
        customer_retrieve_mock,
):
"""Test that the payment intent succeeded event can create all related objects.
This should exercise the machinery to set `stripe_account` when recursing into
objects related to a connect `Account`.
"""
event = self._create_event(
FAKE_EVENT_PAYMENT_INTENT_SUCCEEDED_DESTINATION_CHARGE
)
event.invoke_webhook_handlers()
# Make sure the file uploads were retrieved using the account ID.
file_upload_retrieve_mock.assert_has_calls(
(
call(
id=FAKE_FILEUPLOAD_ICON["id"],
api_key=ANY,
expand=ANY,
stripe_account=FAKE_ACCOUNT["id"],
),
call(
id=FAKE_FILEUPLOAD_LOGO["id"],
api_key=ANY,
expand=ANY,
stripe_account=FAKE_ACCOUNT["id"],
),
)
)
class TestSubscriptionScheduleEvents(EventTestCase):
# TODO: test the following subscription_schedule events:
# * subscription_schedule.aborted
# * subscription_schedule.completed
# * subscription_schedule.expiring
@patch(
"stripe.SubscriptionSchedule.retrieve",
return_value=FAKE_SUBSCRIPTION_SCHEDULE,
autospec=True,
)
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER_II),
autospec=True,
)
def test_subscription_schedule_created(
self, customer_retrieve_mock, schedule_retrieve_mock
):
event = Event.sync_from_stripe_data(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED)
event.invoke_webhook_handlers()
schedule = SubscriptionSchedule.objects.get(
id=FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED["data"]["object"]["id"]
)
assert (
schedule.id
== FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED["data"]["object"]["id"]
)
assert schedule.status == "not_started"
@patch("stripe.SubscriptionSchedule.retrieve", autospec=True)
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER_II),
autospec=True,
)
def test_subscription_schedule_canceled(
self, customer_retrieve_mock, schedule_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_UPDATED)
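        # simulate a cancellation: flip status/canceled_at on the payload and
        # record the prior values under previous_attributes, as Stripe does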
fake_stripe_event["data"]["object"]["canceled_at"] = 1605058030
fake_stripe_event["data"]["object"]["status"] = "canceled"
fake_stripe_event["data"]["previous_attributes"] = {
"canceled_at": None,
"status": "not_started",
}
schedule_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
schedule = SubscriptionSchedule.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
assert schedule.status == "canceled"
assert schedule.canceled_at is not None
schedule_retrieve_mock.return_value = FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CANCELED[
"data"
]["object"]
event = Event.sync_from_stripe_data(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CANCELED)
event.invoke_webhook_handlers()
schedule.refresh_from_db()
assert schedule.status == "canceled"
assert schedule.canceled_at is not None
@patch("stripe.SubscriptionSchedule.retrieve", autospec=True)
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER_II),
autospec=True,
)
def test_subscription_schedule_released(
self, customer_retrieve_mock, schedule_retrieve_mock
):
fake_stripe_event = deepcopy(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_UPDATED)
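        # same shape as the cancellation case: mark the schedule as released
        # and record the prior values under previous_attributes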
fake_stripe_event["data"]["object"]["released_at"] = 1605058030
fake_stripe_event["data"]["object"]["status"] = "released"
fake_stripe_event["data"]["previous_attributes"] = {
"released_at": None,
"status": "not_started",
}
schedule_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
schedule = SubscriptionSchedule.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
assert schedule.status == "released"
assert schedule.released_at is not None
schedule_retrieve_mock.return_value = FAKE_EVENT_SUBSCRIPTION_SCHEDULE_RELEASED[
"data"
]["object"]
event = Event.sync_from_stripe_data(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_RELEASED)
event.invoke_webhook_handlers()
schedule.refresh_from_db()
assert schedule.status == "released"
assert schedule.released_at is not None
@patch("stripe.SubscriptionSchedule.retrieve", autospec=True)
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER_II),
autospec=True,
)
def test_subscription_schedule_updated(
self, customer_retrieve_mock, schedule_retrieve_mock
):
schedule_retrieve_mock.return_value = FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED[
"data"
]["object"]
event = Event.sync_from_stripe_data(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED)
event.invoke_webhook_handlers()
schedule = SubscriptionSchedule.objects.get(
id=FAKE_EVENT_SUBSCRIPTION_SCHEDULE_CREATED["data"]["object"]["id"]
)
assert schedule.status == "not_started"
assert schedule.released_at is None
fake_stripe_event = deepcopy(FAKE_EVENT_SUBSCRIPTION_SCHEDULE_UPDATED)
fake_stripe_event["data"]["object"]["released_at"] = 1605058030
fake_stripe_event["data"]["object"]["status"] = "released"
fake_stripe_event["data"]["previous_attributes"] = {
"released_at": None,
"status": "not_started",
}
schedule_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
schedule = SubscriptionSchedule.objects.get(
id=fake_stripe_event["data"]["object"]["id"]
)
assert schedule.status == "released"
assert schedule.released_at is not None
class TestTaxIdEvents(EventTestCase):
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER),
autospec=True,
)
@patch(
"stripe.Customer.retrieve_tax_id",
return_value=deepcopy(FAKE_TAX_ID),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.Event.retrieve",
return_value=deepcopy(FAKE_EVENT_TAX_ID_CREATED),
autospec=True,
)
def test_tax_id_created(
self, event_retrieve_mock, tax_id_retrieve_mock, customer_retrieve_mock
):
event = Event.sync_from_stripe_data(FAKE_EVENT_TAX_ID_CREATED)
event.invoke_webhook_handlers()
tax_id = TaxId.objects.get()
self.assertEqual(tax_id.id, FAKE_TAX_ID["id"])
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER),
autospec=True,
)
@patch(
"stripe.Customer.retrieve_tax_id",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.Event.retrieve",
autospec=True,
)
def test_tax_id_updated(
self, event_retrieve_mock, tax_id_retrieve_mock, customer_retrieve_mock
):
tax_id_retrieve_mock.return_value = FAKE_TAX_ID
fake_stripe_create_event = deepcopy(FAKE_EVENT_TAX_ID_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
tax_id_retrieve_mock.return_value = FAKE_TAX_ID_UPDATED
fake_stripe_update_event = deepcopy(FAKE_EVENT_TAX_ID_UPDATED)
event = Event.sync_from_stripe_data(fake_stripe_update_event)
event.invoke_webhook_handlers()
tax_id = TaxId.objects.get()
self.assertEqual(tax_id.id, FAKE_TAX_ID["id"])
self.assertEqual(tax_id.verification.get("status"), "verified")
self.assertEqual(tax_id.verification.get("verified_name"), "Test")
@patch(
"stripe.Customer.retrieve",
return_value=deepcopy(FAKE_CUSTOMER),
autospec=True,
)
@patch(
"stripe.Customer.retrieve_tax_id",
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch(
"stripe.Event.retrieve",
autospec=True,
)
def test_tax_id_deleted(
self, event_retrieve_mock, tax_id_retrieve_mock, customer_retrieve_mock
):
tax_id_retrieve_mock.return_value = FAKE_TAX_ID
fake_stripe_create_event = deepcopy(FAKE_EVENT_TAX_ID_CREATED)
event = Event.sync_from_stripe_data(fake_stripe_create_event)
event.invoke_webhook_handlers()
tax_id_retrieve_mock.return_value = FAKE_EVENT_TAX_ID_DELETED
fake_stripe_delete_event = deepcopy(FAKE_EVENT_TAX_ID_DELETED)
event = Event.sync_from_stripe_data(fake_stripe_delete_event)
event.invoke_webhook_handlers()
self.assertFalse(TaxId.objects.filter(id=FAKE_TAX_ID["id"]).exists())
class TestTransferEvents(EventTestCase):
@patch.object(Transfer, "_attach_objects_post_save_hook")
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_STANDARD_ACCOUNT),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Transfer.retrieve", autospec=True)
@patch("stripe.Event.retrieve", autospec=True)
def test_transfer_created(
self,
event_retrieve_mock,
transfer_retrieve_mock,
account_retrieve_mock,
transfer__attach_object_post_save_hook_mock,
):
fake_stripe_event = deepcopy(FAKE_EVENT_TRANSFER_CREATED)
event_retrieve_mock.return_value = fake_stripe_event
transfer_retrieve_mock.return_value = fake_stripe_event["data"]["object"]
event = Event.sync_from_stripe_data(fake_stripe_event)
event.invoke_webhook_handlers()
transfer = Transfer.objects.get(id=fake_stripe_event["data"]["object"]["id"])
self.assertEqual(
transfer.amount,
fake_stripe_event["data"]["object"]["amount"] / Decimal("100"),
)
@patch.object(Transfer, "_attach_objects_post_save_hook")
@patch(
"stripe.Account.retrieve",
return_value=deepcopy(FAKE_STANDARD_ACCOUNT),
autospec=IS_STATICMETHOD_AUTOSPEC_SUPPORTED,
)
@patch("stripe.Transfer.retrieve", return_value=FAKE_TRANSFER, autospec=True)
def test_transfer_deleted(
self,
transfer_retrieve_mock,
account_retrieve_mock,
transfer__attach_object_post_save_hook_mock,
):
event = self._create_event(FAKE_EVENT_TRANSFER_CREATED)
event.invoke_webhook_handlers()
Transfer.objects.get(id=FAKE_TRANSFER["id"])
event = self._create_event(FAKE_EVENT_TRANSFER_DELETED)
event.invoke_webhook_handlers()
with self.assertRaises(Transfer.DoesNotExist):
Transfer.objects.get(id=FAKE_TRANSFER["id"])
event = self._create_event(FAKE_EVENT_TRANSFER_DELETED)
event.invoke_webhook_handlers()
|
|
"""
A collection of functions to synchronize subtitle (SRT) files.
Synchronizes, i.e. delays or hastens, a complete subtitle (SRT) file, or
part of it.
Functions:
sync -- Synchronizes a complete subtitle file
sync_after_time -- Synchronizes the subtitles occurring after a
specified time
sync_before_time -- Synchronizes the subtitles occurring before a
specified time
sync_between_times -- Synchronizes the subtitles occurring between the
specified starting and ending times
sync_after_index -- Synchronizes the subtitles occurring after a
specified index
sync_before_index -- Synchronizes the subtitles occurring before a
specified index
sync_between_indexes -- Synchronizes the subtitles occurring between the
specified starting and ending indexes
"""
"""
Copyright (c) 2014 Prasannajit Acharya - Kanhu
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
import os
import re
__all__ = ['sync', 'sync_after_time', 'sync_before_time', 'sync_between_times',\
'sync_after_index', 'sync_before_index', 'sync_between_indexes']
SUBTITLE_INDEX_FLAG = True
def sync(input_file, sync_time_in_ms, delay=True, output_file=''):
"""
Synchronizes a complete subtitle file.
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
for each_line in inputfile:
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
(start_time, end_time) = get_sync_times(\
sync_time_in_ms, start_time, end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_after_time(sync_after_time_str, input_file, sync_time_in_ms, \
delay=True, output_file=''):
"""
Synchronizes the subtitles occurring after a specified time.
sync_after_time_str -- Time after which synchronization will start.
It is a string in the format of hh:mm:ss,ms
(01:23:45,678).
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
if check_time_str_format(sync_after_time_str):
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
for each_line in inputfile:
sync_after_time_in_ms = \
str_to_ms(sync_after_time_str)
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
if sync_after_time_in_ms < start_time:
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, \
start_time, end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_before_time(sync_before_time_str, input_file, sync_time_in_ms, \
delay=True, output_file=''):
"""
Synchronizes the subtitles occurring before a specified time.
sync_before_time_str -- Time before which synchronization will be done.
It is a string in the format of hh:mm:ss,ms
(01:23:45,678).
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
if check_time_str_format(sync_before_time_str):
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
for each_line in inputfile:
sync_before_time_in_ms = \
str_to_ms(sync_before_time_str)
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
if sync_before_time_in_ms > end_time:
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, \
start_time, end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_between_times(sync_after_time_str, sync_before_time_str, input_file, \
sync_time_in_ms, delay=True, output_file=''):
"""
Synchronizes the subtitles occurring between the specified starting and
ending times.
sync_after_time_str -- Time after which synchronization will start.
It is a string in the format of hh:mm:ss,ms
(01:23:45,678).
sync_before_time_str -- Time before which synchronization will be done.
It is a string in the format of hh:mm:ss,ms
(01:23:45,678).
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
if check_time_str_format(sync_after_time_str) and \
check_time_str_format(sync_before_time_str):
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
for each_line in inputfile:
sync_after_time_in_ms = \
str_to_ms(sync_after_time_str)
sync_before_time_in_ms = \
str_to_ms(sync_before_time_str)
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
if (sync_after_time_in_ms < start_time) and \
(sync_before_time_in_ms > end_time):
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, \
start_time, end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_after_index(index, input_file, sync_time_in_ms, delay=True, \
output_file=''):
"""
Synchronizes the subtitles occurring after a specified index.
index -- Index after which synchronization will be done.
If index < 1, this function behaves the same
as sync().
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
subtitle_index = 0
for each_line in inputfile:
if is_index(each_line):
subtitle_index = int(each_line)
if subtitle_index > index:
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, start_time,\
end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_before_index(index, input_file, sync_time_in_ms, delay=True, \
output_file=''):
"""
Synchronizes the subtitles occurring before a specified index.
index -- Index before which synchronization will be done.
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
subtitle_index = 0
for each_line in inputfile:
if is_index(each_line):
subtitle_index = int(each_line)
if subtitle_index < index:
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, start_time,\
end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def sync_between_indexes(start_index, end_index, input_file, sync_time_in_ms, \
delay=True, output_file=''):
"""
Synchronizes the subtitles occurring between the specified starting and
ending indexes.
start_index -- Index after which synchronization will be done.
end_index -- Index before which synchronization will be done.
input_file -- Path to the input SRT file.
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
delay -- True for delaying the subtitles, False for
hastening them.
output_file -- With the default value, the input file is
replaced. Set it to write the output to a
different file.
"""
if check_srt_extension(input_file):
if output_file == '':
output_file = input_file
try:
with open(input_file) as inputfile:
with open(output_file + '.tmp', 'w') as outputfile:
subtitle_index = 0
for each_line in inputfile:
if is_index(each_line):
subtitle_index = int(each_line)
if (subtitle_index > start_index) and (subtitle_index <\
end_index):
(start_time, end_time) = \
get_start_and_end_times(each_line)
if (start_time is not None) or (end_time is not None):
(start_time, end_time) = \
get_sync_times(sync_time_in_ms, start_time,\
end_time, delay)
print(ms_to_str(start_time) + ' --> ' + \
ms_to_str(end_time), file=outputfile)
else:
print(each_line, end='', file=outputfile)
else:
print(each_line, end='', file=outputfile)
save_srt_file(input_file, output_file)
except IOError as ioerr:
print('File error: ' + str(ioerr))
def get_start_and_end_times(input_line):
"""
Returns the start and end time of a subtitle line if the 'input_line'
matches the hh:mm:ss,ms --> hh:mm:ss,ms format, otherwise returns 'None'
for both start and end time.
input_line -- String to be checked against the time pattern to
get the start and end times
"""
is_time_str = \
re.match(r'\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}', \
input_line)
if is_time_str is not None:
(start_time_str, end_time_str) = input_line.split(' --> ')
start_time = str_to_ms(start_time_str)
end_time = str_to_ms(end_time_str)
else:
start_time = None
end_time = None
return (start_time, end_time)
def get_sync_times(sync_time_in_ms, start_time, end_time, delay):
"""
sync_time_in_ms -- Time (in milliseconds) by which the subtitles
will be delayed, or hastened.
start_time -- Subtitle line start time (in milliseconds)
end_time -- Subtitle line end time (in milliseconds)
delay -- True for delaying the subtitles, False for
hastening them.
"""
if delay:
start_time += sync_time_in_ms
end_time += sync_time_in_ms
else:
if start_time > sync_time_in_ms: # To prevent negative start time
start_time -= sync_time_in_ms
if (end_time - sync_time_in_ms) > start_time: # To prevent end time
# smaller than start time
end_time -= sync_time_in_ms
return (start_time, end_time)
def save_srt_file(input_file, output_file):
"""
input_file -- Input subtitle file
output_file -- Output subtitle file
"""
if input_file == output_file:
try:
os.remove(input_file)
except NotImplementedError as nierr:
print('File modification error: ' + str(nierr))
try:
os.rename(output_file + '.tmp', output_file)
except NotImplementedError as nierr:
print('File modification error: ' + str(nierr))
def str_to_ms(time_str):
"""
time_str -- Time string in (hh:mm:ss,ms) format
"""
(hhmmss_str, ms_str) = time_str.split(',')
(hh_str, mm_str, ss_str) = hhmmss_str.split(':')
return ((int(hh_str) * 3600) + (int(mm_str) * 60) + int(ss_str)) * 1000 + \
int(ms_str)
def ms_to_str(time_in_ms):
"""
time_in_ms -- Time in milliseconds
"""
hour = int(time_in_ms / 3600000)
time_in_ms = time_in_ms % 3600000
minute, second, millisecond = int(time_in_ms / 60000), \
int(time_in_ms % 60000 / 1000), time_in_ms % 1000
return '{:0=2}:{:0=2}:{:0=2},{:0=3}'.format(hour, minute, second, \
millisecond)
def check_srt_extension(input_file):
"""
input_file -- Input file to be checked for .srt extension
"""
if input_file.endswith('.srt'):
return True
else:
print('Error: File needs to have .srt extension. Check the input file.')
return False
def check_time_str_format(time_str):
"""
time_str -- Time string in (hh:mm:ss,ms) format
"""
is_time_str = re.match(r'\d{2}:\d{2}:\d{2},\d{3}', time_str)
if is_time_str is not None:
return True
else:
print('Error: Time string must be in the format of hh:mm:ss,ms ' + \
'(01:23:45,678).')
return False
def is_index(input_line):
"""
input_line -- String to be checked to determine whether it is
a subtitle index line
"""
global SUBTITLE_INDEX_FLAG
if SUBTITLE_INDEX_FLAG:
isindex = re.match(r'^[\d]+$', input_line)
if isindex:
SUBTITLE_INDEX_FLAG = False
return True
else:
isnewline = re.match(r'^\n', input_line)
if isnewline:
SUBTITLE_INDEX_FLAG = True
return False
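# --- Usage sketch (illustrative, not part of the original module) ---
# Assumes an 'example.srt' file exists; the filenames below are invented.
# Demonstrates a whole-file delay, a ranged hasten, and the time helpers.
if __name__ == '__main__':
    # Delay every cue by 2500 ms, writing to a new file (input kept intact).
    sync('example.srt', 2500, delay=True, output_file='example_delayed.srt')
    # Hasten only the cues between 10 and 20 minutes by 750 ms, in place.
    sync_between_times('00:10:00,000', '00:20:00,000',
                       'example_delayed.srt', 750, delay=False)
    # The conversion helpers round-trip between strings and milliseconds.
    assert str_to_ms('01:23:45,678') == 5025678
    assert ms_to_str(5025678) == '01:23:45,678'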
|
|
"""Dictionary Of Keys based matrix"""
from __future__ import division, print_function, absolute_import
__docformat__ = "restructuredtext en"
__all__ = ['dok_matrix', 'isspmatrix_dok']
import functools
import operator
import numpy as np
from scipy._lib.six import zip as izip, xrange
from scipy._lib.six import iteritems
from .base import spmatrix, isspmatrix
from .sputils import (isdense, getdtype, isshape, isintlike, isscalarlike,
upcast, upcast_scalar, IndexMixin, get_index_dtype)
try:
from operator import isSequenceType as _is_sequence
except ImportError:
def _is_sequence(x):
return (hasattr(x, '__len__') or hasattr(x, '__next__')
or hasattr(x, 'next'))
class dok_matrix(spmatrix, IndexMixin, dict):
"""
Dictionary Of Keys based sparse matrix.
This is an efficient structure for constructing sparse
matrices incrementally.
This can be instantiated in several ways:
dok_matrix(D)
with a dense matrix, D
dok_matrix(S)
with a sparse matrix, S
dok_matrix((M,N), [dtype])
create the matrix with initial shape (M,N)
dtype is optional, defaulting to dtype='d'
Attributes
----------
dtype : dtype
Data type of the matrix
shape : 2-tuple
Shape of the matrix
ndim : int
Number of dimensions (this is always 2)
nnz
Number of nonzero elements
Notes
-----
Sparse matrices can be used in arithmetic operations: they support
addition, subtraction, multiplication, division, and matrix power.
Allows for efficient O(1) access of individual elements.
Duplicates are not allowed.
Can be efficiently converted to a coo_matrix once constructed.
Examples
--------
>>> import numpy as np
>>> from scipy.sparse import dok_matrix
>>> S = dok_matrix((5, 5), dtype=np.float32)
>>> for i in range(5):
...     for j in range(5):
...         S[i,j] = i+j # Update element
"""
def __init__(self, arg1, shape=None, dtype=None, copy=False):
dict.__init__(self)
spmatrix.__init__(self)
self.dtype = getdtype(dtype, default=float)
if isinstance(arg1, tuple) and isshape(arg1): # (M,N)
M, N = arg1
self.shape = (M, N)
elif isspmatrix(arg1): # Sparse ctor
if isspmatrix_dok(arg1) and copy:
arg1 = arg1.copy()
else:
arg1 = arg1.todok()
if dtype is not None:
arg1 = arg1.astype(dtype)
self.update(arg1)
self.shape = arg1.shape
self.dtype = arg1.dtype
else: # Dense ctor
try:
arg1 = np.asarray(arg1)
except Exception:
raise TypeError('invalid input format')
if len(arg1.shape) != 2:
raise TypeError('expected rank <=2 dense array or matrix')
from .coo import coo_matrix
d = coo_matrix(arg1, dtype=dtype).todok()
self.update(d)
self.shape = arg1.shape
self.dtype = d.dtype
def getnnz(self):
return dict.__len__(self)
nnz = property(fget=getnnz)
def __len__(self):
return dict.__len__(self)
def get(self, key, default=0.):
"""This overrides the dict.get method, providing type checking
but otherwise equivalent functionality.
"""
try:
i, j = key
assert isintlike(i) and isintlike(j)
except (AssertionError, TypeError, ValueError):
raise IndexError('index must be a pair of integers')
if (i < 0 or i >= self.shape[0] or j < 0 or j >= self.shape[1]):
raise IndexError('index out of bounds')
return dict.get(self, key, default)
def __getitem__(self, index):
"""If key=(i,j) is a pair of integers, return the corresponding
element. If either i or j is a slice or sequence, return a new sparse
matrix with just these elements.
"""
i, j = self._unpack_index(index)
i_intlike = isintlike(i)
j_intlike = isintlike(j)
if i_intlike and j_intlike:
# Scalar index case
i = int(i)
j = int(j)
if i < 0:
i += self.shape[0]
if i < 0 or i >= self.shape[0]:
raise IndexError('index out of bounds')
if j < 0:
j += self.shape[1]
if j < 0 or j >= self.shape[1]:
raise IndexError('index out of bounds')
return dict.get(self, (i,j), 0.)
elif ((i_intlike or isinstance(i, slice)) and
(j_intlike or isinstance(j, slice))):
# Fast path for slicing very sparse matrices
i_slice = slice(i, i+1) if i_intlike else i
j_slice = slice(j, j+1) if j_intlike else j
i_indices = i_slice.indices(self.shape[0])
j_indices = j_slice.indices(self.shape[1])
i_seq = xrange(*i_indices)
j_seq = xrange(*j_indices)
newshape = (len(i_seq), len(j_seq))
newsize = _prod(newshape)
if len(self) < 2*newsize and newsize != 0:
# Switch to the fast path only when advantageous
# (count the iterations in the loops, adjust for complexity)
#
# We also don't handle newsize == 0 here (if
# i/j_intlike, it can mean index i or j was out of
# bounds)
return self._getitem_ranges(i_indices, j_indices, newshape)
i, j = self._index_to_arrays(i, j)
if i.size == 0:
return dok_matrix(i.shape, dtype=self.dtype)
min_i = i.min()
if min_i < -self.shape[0] or i.max() >= self.shape[0]:
raise IndexError('index (%d) out of range -%d to %d)' %
(i.min(), self.shape[0], self.shape[0]-1))
if min_i < 0:
i = i.copy()
i[i < 0] += self.shape[0]
min_j = j.min()
if min_j < -self.shape[1] or j.max() >= self.shape[1]:
raise IndexError('index (%d) out of range -%d to %d)' %
(j.min(), self.shape[1], self.shape[1]-1))
if min_j < 0:
j = j.copy()
j[j < 0] += self.shape[1]
newdok = dok_matrix(i.shape, dtype=self.dtype)
for a in xrange(i.shape[0]):
for b in xrange(i.shape[1]):
v = dict.get(self, (i[a,b], j[a,b]), 0.)
if v != 0:
dict.__setitem__(newdok, (a, b), v)
return newdok
def _getitem_ranges(self, i_indices, j_indices, shape):
# performance golf: we don't want Numpy scalars here, they are slow
i_start, i_stop, i_stride = map(int, i_indices)
j_start, j_stop, j_stride = map(int, j_indices)
newdok = dok_matrix(shape, dtype=self.dtype)
for (ii, jj) in self.keys():
# ditto for numpy scalars
ii = int(ii)
jj = int(jj)
a, ra = divmod(ii - i_start, i_stride)
if a < 0 or a >= shape[0] or ra != 0:
continue
b, rb = divmod(jj - j_start, j_stride)
if b < 0 or b >= shape[1] or rb != 0:
continue
dict.__setitem__(newdok, (a, b),
dict.__getitem__(self, (ii, jj)))
return newdok
def __setitem__(self, index, x):
if isinstance(index, tuple) and len(index) == 2:
# Integer index fast path
i, j = index
if (isintlike(i) and isintlike(j) and 0 <= i < self.shape[0]
and 0 <= j < self.shape[1]):
v = np.asarray(x, dtype=self.dtype)
if v.ndim == 0 and v != 0:
dict.__setitem__(self, (int(i), int(j)), v[()])
return
i, j = self._unpack_index(index)
i, j = self._index_to_arrays(i, j)
if isspmatrix(x):
x = x.toarray()
# Make x and i into the same shape
x = np.asarray(x, dtype=self.dtype)
x, _ = np.broadcast_arrays(x, i)
if x.shape != i.shape:
raise ValueError("shape mismatch in assignment")
if np.size(x) == 0:
return
min_i = i.min()
if min_i < -self.shape[0] or i.max() >= self.shape[0]:
raise IndexError('index (%d) out of range -%d to %d)' %
(i.min(), self.shape[0], self.shape[0]-1))
if min_i < 0:
i = i.copy()
i[i < 0] += self.shape[0]
min_j = j.min()
if min_j < -self.shape[1] or j.max() >= self.shape[1]:
raise IndexError('index (%d) out of range -%d to %d)' %
(j.min(), self.shape[1], self.shape[1]-1))
if min_j < 0:
j = j.copy()
j[j < 0] += self.shape[1]
dict.update(self, izip(izip(i.flat, j.flat), x.flat))
if 0 in x:
zeroes = x == 0
for key in izip(i[zeroes].flat, j[zeroes].flat):
if dict.__getitem__(self, key) == 0:
# may have been superseded by later update
del self[key]
def __add__(self, other):
# First check if argument is a scalar
if isscalarlike(other):
res_dtype = upcast_scalar(self.dtype, other)
new = dok_matrix(self.shape, dtype=res_dtype)
# Add this scalar to every element.
M, N = self.shape
for i in xrange(M):
for j in xrange(N):
aij = self.get((i, j), 0) + other
if aij != 0:
new[i, j] = aij
# new.dtype.char = self.dtype.char
elif isinstance(other, dok_matrix):
if other.shape != self.shape:
raise ValueError("matrix dimensions are not equal")
# We could alternatively set the dimensions to the largest of
# the two matrices to be summed. Would this be a good idea?
res_dtype = upcast(self.dtype, other.dtype)
new = dok_matrix(self.shape, dtype=res_dtype)
new.update(self)
for key in other.keys():
new[key] += other[key]
elif isspmatrix(other):
csc = self.tocsc()
new = csc + other
elif isdense(other):
new = self.todense() + other
else:
return NotImplemented
return new
def __radd__(self, other):
# First check if argument is a scalar
if isscalarlike(other):
new = dok_matrix(self.shape, dtype=self.dtype)
# Add this scalar to every element.
M, N = self.shape
for i in xrange(M):
for j in xrange(N):
aij = self.get((i, j), 0) + other
if aij != 0:
new[i, j] = aij
elif isinstance(other, dok_matrix):
if other.shape != self.shape:
raise ValueError("matrix dimensions are not equal")
new = dok_matrix(self.shape, dtype=self.dtype)
new.update(self)
for key in other:
new[key] += other[key]
elif isspmatrix(other):
csc = self.tocsc()
new = csc + other
elif isdense(other):
new = other + self.todense()
else:
return NotImplemented
return new
def __neg__(self):
new = dok_matrix(self.shape, dtype=self.dtype)
for key in self.keys():
new[key] = -self[key]
return new
def _mul_scalar(self, other):
res_dtype = upcast_scalar(self.dtype, other)
# Multiply this scalar by every element.
new = dok_matrix(self.shape, dtype=res_dtype)
for (key, val) in iteritems(self):
new[key] = val * other
return new
def _mul_vector(self, other):
# matrix * vector
result = np.zeros(self.shape[0], dtype=upcast(self.dtype,other.dtype))
for (i,j),v in iteritems(self):
result[i] += v * other[j]
return result
def _mul_multivector(self, other):
# matrix * multivector
M,N = self.shape
n_vecs = other.shape[1] # number of column vectors
result = np.zeros((M,n_vecs), dtype=upcast(self.dtype,other.dtype))
for (i,j),v in iteritems(self):
result[i,:] += v * other[j,:]
return result
def __imul__(self, other):
if isscalarlike(other):
# Multiply this scalar by every element.
for (key, val) in iteritems(self):
self[key] = val * other
# new.dtype.char = self.dtype.char
return self
else:
return NotImplemented
def __truediv__(self, other):
if isscalarlike(other):
res_dtype = upcast_scalar(self.dtype, other)
new = dok_matrix(self.shape, dtype=res_dtype)
# Multiply this scalar by every element.
for (key, val) in iteritems(self):
new[key] = val / other
# new.dtype.char = self.dtype.char
return new
else:
return self.tocsr() / other
def __itruediv__(self, other):
if isscalarlike(other):
# Multiply this scalar by every element.
for (key, val) in iteritems(self):
self[key] = val / other
return self
else:
return NotImplemented
# What should len(sparse) return? For consistency with dense matrices,
# perhaps it should be the number of rows? For now it returns the number
# of non-zeros.
def transpose(self):
""" Return the transpose
"""
M, N = self.shape
new = dok_matrix((N, M), dtype=self.dtype)
for key, value in iteritems(self):
new[key[1], key[0]] = value
return new
def conjtransp(self):
""" Return the conjugate transpose
"""
M, N = self.shape
new = dok_matrix((N, M), dtype=self.dtype)
for key, value in iteritems(self):
new[key[1], key[0]] = np.conj(value)
return new
def copy(self):
new = dok_matrix(self.shape, dtype=self.dtype)
new.update(self)
return new
def getrow(self, i):
"""Returns a copy of row i of the matrix as a (1 x n)
DOK matrix.
"""
out = self.__class__((1, self.shape[1]), dtype=self.dtype)
for j in range(self.shape[1]):
out[0, j] = self[i, j]
return out
def getcol(self, j):
"""Returns a copy of column j of the matrix as a (m x 1)
DOK matrix.
"""
out = self.__class__((self.shape[0], 1), dtype=self.dtype)
for i in range(self.shape[0]):
out[i, 0] = self[i, j]
return out
def tocoo(self):
""" Return a copy of this matrix in COOrdinate format"""
from .coo import coo_matrix
if self.nnz == 0:
return coo_matrix(self.shape, dtype=self.dtype)
else:
idx_dtype = get_index_dtype(maxval=max(self.shape[0], self.shape[1]))
data = np.asarray(_list(self.values()), dtype=self.dtype)
indices = np.asarray(_list(self.keys()), dtype=idx_dtype).T
return coo_matrix((data,indices), shape=self.shape, dtype=self.dtype)
def todok(self,copy=False):
if copy:
return self.copy()
else:
return self
def tocsr(self):
""" Return a copy of this matrix in Compressed Sparse Row format"""
return self.tocoo().tocsr()
def tocsc(self):
""" Return a copy of this matrix in Compressed Sparse Column format"""
return self.tocoo().tocsc()
def resize(self, shape):
""" Resize the matrix in-place to dimensions given by 'shape'.
Any non-zero elements that lie outside the new shape are removed.
"""
if not isshape(shape):
raise TypeError("dimensions must be a 2-tuple of positive"
" integers")
newM, newN = shape
M, N = self.shape
if newM < M or newN < N:
# Remove all elements outside new dimensions
for (i, j) in list(self.keys()):
if i >= newM or j >= newN:
del self[i, j]
self._shape = shape
def _list(x):
"""Force x to a list."""
if not isinstance(x, list):
x = list(x)
return x
def isspmatrix_dok(x):
return isinstance(x, dok_matrix)
def _prod(x):
"""Product of a list of numbers; ~40x faster vs np.prod for Python tuples"""
if len(x) == 0:
return 1
return functools.reduce(operator.mul, x)
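# --- Demonstration sketch (illustrative only; not part of scipy) ---
# Shows incremental construction, zero-default reads, slicing, and
# conversion to CSR via COO, using only the API defined above.
def _demo_dok():  # hypothetical helper name
    mtx = dok_matrix((4, 4), dtype=float)
    mtx[0, 1] = 1.0
    mtx[2, 3] = 2.5
    assert mtx.nnz == 2                      # only stored entries count
    assert mtx[3, 3] == 0.0                  # missing keys read as zero
    sub = mtx[0:2, 0:2]                      # slicing yields a new dok_matrix
    assert isspmatrix_dok(sub) and sub.shape == (2, 2)
    csr = mtx.tocsr()                        # goes through tocoo() internally
    assert (csr.toarray() == mtx.todense()).all()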
|
|
from apollo.choices import CHARGE_LIST_OPEN
from apollo.viewmixins import LoginRequiredMixin, ActivitySendMixin, StaffRequiredMixin
from applications.business.models import Business
from applications.charge_list.forms import ActivityChargeCatalog, TimeChargeCatalog, UnitChargeCatalog
from applications.charge_list.models import ChargeList
from applications.station.forms import StationBusinessForm, StationRentalForm
from applications.station.models import Station, StationBusiness, StationRental
from django.contrib import messages
from django.contrib.messages.views import SuccessMessageMixin
from django.core.urlresolvers import reverse_lazy
from django.shortcuts import get_object_or_404, redirect
from django.views.generic import ListView, DetailView, CreateView, UpdateView, DeleteView
def StationUUIDRedirect(request, station_uuid=None):
"""
Given a station UUID, redirect to the station detail page.
If no station exists with the specified UUID, raise a 404 exception.
"""
station = get_object_or_404(Station, uuid=station_uuid)
return redirect('station_detail', pk=station.pk)
"""
Station model generic views.
"""
class StationViewList(LoginRequiredMixin, ListView):
context_object_name = "stations"
model = Station
template_name = "station/station_list.html"
def get_context_data(self, **kwargs):
context = super(StationViewList, self).get_context_data(**kwargs)
return context
class StationViewDetail(LoginRequiredMixin, DetailView):
context_object_name = 'station'
model = Station
template_name = "station/station_detail.html"
def get_context_data(self, **kwargs):
context = super(StationViewDetail, self).get_context_data(**kwargs)
user_businesses = Business.objects.filter(businessmembership__user=self.request.user)
station_businesses = self.object.stationbusiness_set.all()
context['can_modify'] = len(station_businesses.filter(business__in=user_businesses)) >= 1
active_cl = ChargeList.objects.filter(station=self.object, status=CHARGE_LIST_OPEN)
if len(active_cl) == 1:
context['chargelist'] = active_cl[0]
price_list_pk = active_cl[0].price_list.pk
context['activitycharge_catalog'] = ActivityChargeCatalog(price_list_pk=price_list_pk)
context['timecharge_catalog'] = TimeChargeCatalog(price_list_pk=price_list_pk)
context['unitcharge_catalog'] = UnitChargeCatalog(price_list_pk=price_list_pk)
return context
class StationViewCreate(LoginRequiredMixin, SuccessMessageMixin, ActivitySendMixin, CreateView):
context_object_name = 'station'
model = Station
success_message = "%(name)s was created successfully!"
template_name = "station/station_form.html"
activity_verb = 'created station'
fields = "__all__"
def dispatch(self, *args, **kwargs):
business = get_object_or_404(Business, pk=self.kwargs.get('business_pk', '-1'))
user_businesses = self.request.user.businessmembership_set.all()
can_modify = user_businesses.filter(business=business)
if can_modify:
return super(StationViewCreate, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "You do not have permissions to create a station for this business.")
return redirect('business_detail', pk=business.pk)
def get_success_url(self):
business = get_object_or_404(Business, pk=self.kwargs.get('business_pk', '-1'))
StationBusiness.objects.create(business=business, station=self.object)
return reverse_lazy('station_detail', kwargs={'pk': self.object.pk})
def get_context_data(self, **kwargs):
context = super(StationViewCreate, self).get_context_data(**kwargs)
context['action'] = "Create New"
return context
class StationViewUpdate(LoginRequiredMixin, SuccessMessageMixin, ActivitySendMixin, UpdateView):
context_object_name = 'station'
model = Station
success_message = "%(name)s was updated successfully!"
template_name = "station/station_form.html"
activity_verb = 'updated station'
fields = "__all__"
def dispatch(self, *args, **kwargs):
station = get_object_or_404(Station, pk=self.kwargs.get('pk', '-1'))
user_businesses = self.request.user.businessmembership_set.all()
can_modify = station.stationbusiness_set.all().filter(business__in=user_businesses)
if can_modify:
return super(StationViewUpdate, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "You do not have permissions to update this station.")
return redirect('station_detail', pk=self.kwargs['pk'])
def get_success_url(self):
return reverse_lazy('station_detail', kwargs={'pk': self.object.pk})
def get_context_data(self, **kwargs):
context = super(StationViewUpdate, self).get_context_data(**kwargs)
context['action'] = "Update"
return context
class StationViewDelete(LoginRequiredMixin, ActivitySendMixin, DeleteView):
context_object_name = 'station'
model = Station
success_url = reverse_lazy('base')
template_name = "station/station_form.html"
activity_verb = 'deleted station'
target_object_valid = False
def dispatch(self, *args, **kwargs):
station = get_object_or_404(Station, pk=self.kwargs.get('pk', '-1'))
user_businesses = self.request.user.businessmembership_set.all()
can_modify = station.stationbusiness_set.all().filter(business__in=user_businesses)
if can_modify:
return super(StationViewDelete, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "You do not have permissions to delete this station.")
return redirect('station_detail', pk=station.pk)
def get_success_url(self):
return self.success_url
def get_context_data(self, **kwargs):
context = super(StationViewDelete, self).get_context_data(**kwargs)
context['action'] = "Delete"
return context
"""
Station Business Association generic views
"""
class StationBusinessViewCreate(LoginRequiredMixin, SuccessMessageMixin, ActivitySendMixin, CreateView):
context_object_name = 'stationbusiness'
model = StationBusiness
template_name = "station/stationbusiness_form.html"
activity_verb = 'created station business association'
success_message = "%(station)s: %(business)s relation successfully created!"
form_class = StationBusinessForm
def dispatch(self, *args, **kwargs):
station = get_object_or_404(Station, pk=self.kwargs.get('station_pk', '-1'))
user_businesses = self.request.user.businessmembership_set.all()
can_modify = station.stationbusiness_set.all().filter(business__in=user_businesses)
if can_modify:
return super(StationBusinessViewCreate, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "You do not have permissions to create this station business.")
return redirect('station_detail', pk=station.pk)
def get_form(self, form_class):
return form_class(station_pk=self.kwargs['station_pk'], **self.get_form_kwargs())
def get_success_url(self):
return reverse_lazy('station_detail', kwargs={'pk': self.kwargs['station_pk']})
def get_context_data(self, **kwargs):
context = super(StationBusinessViewCreate, self).get_context_data(**kwargs)
context['station'] = Station.objects.get(pk=self.kwargs['station_pk'])
return context
class StationBusinessViewDelete(LoginRequiredMixin, ActivitySendMixin, DeleteView):
context_object_name = 'stationbusiness'
model = StationBusiness
template_name = "station/stationbusiness_form.html"
activity_verb = 'deleted station business association'
target_object_valid = False
def get_success_url(self):
return reverse_lazy('station_detail', kwargs={'pk': self.object.station.pk})
def dispatch(self, *args, **kwargs):
stationbusiness = get_object_or_404(StationBusiness, pk=self.kwargs.get('pk', '-1'))
business = stationbusiness.business
station = stationbusiness.station
user_businesses = self.request.user.businessmembership_set.all()
can_modify = business.stationbusiness_set.all().filter(business__in=user_businesses)
last_business = len(station.stationbusiness_set.all()) == 1
if can_modify:
if last_business:
messages.warning(self.request, "You cannot delete the last station business for this station!")
return redirect('station_detail', pk=station.pk)
return super(StationBusinessViewDelete, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "You do not have permissions to delete this a station business.")
return redirect('station_detail', pk=station.pk)
def get_context_data(self, **kwargs):
context = super(StationBusinessViewDelete, self).get_context_data(**kwargs)
context['station'] = self.object.station
return context
"""
Station Rental generic views
"""
class StationRentalViewUpdate(LoginRequiredMixin, ActivitySendMixin, SuccessMessageMixin, UpdateView):
model = StationRental
context_object_name = 'stationrental'
template_name = "station/stationrental_form.html"
activity_verb = 'updated station rental'
success_message = '%(equipment)s rental successfully updated!'
form_class = StationRentalForm
def get_success_url(self):
return reverse_lazy('station_detail', kwargs={'pk': self.object.station.pk})
def dispatch(self, *args, **kwargs):
station_rental = get_object_or_404(StationRental, pk=self.kwargs.get('pk', '-1'))
if self.request.user.is_staff:
return super(StationRentalViewUpdate, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "Only staff may update station rentals.")
return redirect('station_detail', pk=station_rental.station.pk)
def get_context_data(self, **kwargs):
context = super(StationRentalViewUpdate, self).get_context_data(**kwargs)
context['station'] = self.object.station
context['action'] = 'Update'
return context
class StationRentalViewDelete(LoginRequiredMixin, DeleteView):
model = StationRental
context_object_name = 'stationrental'
template_name = "station/stationrental_form.html"
activity_verb = 'deleted station rental'
success_message = '%(equipment)s rental successfully deleted!'
form_class = StationRentalForm
def get_success_url(self):
return reverse_lazy('station_detail', kwargs={'pk': self.object.station.pk})
def dispatch(self, *args, **kwargs):
station_rental = get_object_or_404(StationRental, pk=self.kwargs.get('pk', '-1'))
if self.request.user.is_staff:
return super(StationRentalViewDelete, self).dispatch(*args, **kwargs)
else:
messages.warning(self.request, "Only staff may delete station rentals.")
return redirect('station_detail', pk=station_rental.station.pk)
def get_context_data(self, **kwargs):
context = super(StationRentalViewDelete, self).get_context_data(**kwargs)
context['station'] = self.object.station
context['action'] = 'Delete'
return context
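# --- Hypothetical URL configuration sketch (not part of this module) ---
# The names 'station_detail' and 'business_detail' are the ones the views
# above reverse; the regex patterns and the remaining names are assumptions.
#
# from django.conf.urls import url
# from applications.station import views
#
# urlpatterns = [
#     url(r'^stations/$', views.StationViewList.as_view(), name='station_list'),
#     url(r'^stations/(?P<pk>\d+)/$', views.StationViewDetail.as_view(),
#         name='station_detail'),
#     url(r'^stations/uuid/(?P<station_uuid>[0-9a-f-]+)/$',
#         views.StationUUIDRedirect, name='station_uuid_redirect'),
#     url(r'^businesses/(?P<business_pk>\d+)/stations/add/$',
#         views.StationViewCreate.as_view(), name='station_create'),
#     url(r'^stations/(?P<pk>\d+)/edit/$', views.StationViewUpdate.as_view(),
#         name='station_update'),
#     url(r'^stations/(?P<pk>\d+)/delete/$', views.StationViewDelete.as_view(),
#         name='station_delete'),
# ]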
|
|
""" crmngr puppetmodule module """
# stdlib
from collections import namedtuple
import hashlib
import logging
from datetime import datetime
# crmngr
from crmngr import cprint
from crmngr.forgeapi import ForgeApi
from crmngr.forgeapi import ForgeError
from crmngr.git import GitError
from crmngr.git import Repository
LOG = logging.getLogger(__name__)
class PuppetModule:
"""Base class for puppet modules"""
def __init__(self, name):
"""Initialize puppet module"""
self._name = name
self._version = None
@staticmethod
def parse_module_name(string):
"""parse module file name into author/name"""
ModuleName = namedtuple( # pylint: disable=invalid-name
'ModuleName', ['module', 'author']
)
module_name = string.strip(' \'"').rsplit('/', 1)
try:
module = ModuleName(module=module_name[1], author=module_name[0])
except IndexError:
module = ModuleName(module=module_name[0], author=None)
LOG.debug("%s parsed into %s", string, module)
return module
@classmethod
def from_moduleline(cls, moduleline):
"""returns a crmngr module object based on a puppetfile module line"""
# split module line into comma-separated parts (starting after
# 'mod ')
line_parts = moduleline[4:].split(',')
module_name = cls.parse_module_name(line_parts[0])
# parse additional parts of mod line
module_info = {}
for fragment in line_parts[1:]:
clean = fragment.strip(' \'"')
# if the part starts with a colon, it is a git module option
if clean.startswith(':'):
if clean.startswith(':git'):
module_info['url'] = clean.rsplit('>', 1)[1].strip(' \'"')
elif clean.startswith(':commit'):
module_info['version'] = GitCommit(
clean.rsplit('>', 1)[1].strip(' \'"')
)
elif clean.startswith(':ref'):
module_info['version'] = GitRef(
clean.rsplit('>', 1)[1].strip(' \'"')
)
elif clean.startswith(':tag'):
module_info['version'] = GitTag(
clean.rsplit('>', 1)[1].strip(' \'"')
)
elif clean.startswith(':branch'):
module_info['version'] = GitBranch(
clean.rsplit('>', 1)[1].strip(' \'"')
)
# forge module
else:
module_info['version'] = Forge(clean)
LOG.debug("%s parsed into %s", ','.join(line_parts[1:]), module_info)
# forge module
if module_name.author is not None and 'url' not in module_info:
return ForgeModule(
author=module_name.author,
name=module_name.module,
version=module_info.get('version'),
)
# git module
else:
return GitModule(
name=module_name.module,
url=module_info['url'],
version=module_info.get('version'),
)
@property
def name(self):
"""Name of this module"""
return self._name
@property
def version(self):
"""Return this modules version"""
raise NotImplementedError
@version.setter
def version(self, value):
"""Set this modules version"""
raise NotImplementedError
@property
def update_commit_message(self):
"""returns commit message for updating module"""
try:
commit_message = 'Update {} module ({})'.format(
self.name,
self.version.commit_message,
)
except AttributeError:
commit_message = 'Update {} module'.format(self.name)
return commit_message
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
return "%s" % self.name
def __eq__(self, other):
return str(self) == str(other)
def __lt__(self, other):
return str(other) > str(self)
class GitModule(PuppetModule):
"""Puppet mdoule hosted on git"""
def __init__(self, name, url, version=None):
"""Initialize git module
:argument name Name of module
:argument url Repository URL of module
"""
super().__init__(name)
self._url = url
self.version = version
@property
def version(self):
"""Return this modules version"""
return self._version
@version.setter
def version(self, value):
"""Set version of this puppet module"""
if isinstance(value, (GitBranch, GitCommit, GitRef, GitTag)):
self._version = value
else:
if value is None:
self._version = None
else:
raise TypeError('Unsupported type %s for value' % type(value))
@property
def url(self):
"""URL to git repository for this puppet module"""
return self._url
def __repr__(self):
"""Return unique string representation"""
representation = "%s:git:%s" % (self.name, self.url)
if self.version:
representation += ":%s" % self.version
return representation
@property
def puppetfile(self):
"""Return puppetfile representation of module"""
lines = [
"mod '%s'," % self.name,
]
git_line = " :git => '%s'" % self.url
if self.version:
git_line += ","
lines.append(git_line)
if isinstance(self.version, BaseVersion):
lines.append(self.version.puppetfile)
return lines
def print_version_information(self, version_check=True, version_cache=None):
"""Print out version information"""
if version_check:
latest_version = self.get_latest_version(version_cache)
else:
latest_version = Unknown()
cprint.magenta_bold('Version:', lpad=2)
cprint.white('Git:', lpad=4, rpad=8, end='')
cprint.white(self.url)
if isinstance(self.version, GitBranch):
cprint.blue_bold('Branch: ', lpad=16, end='')
cprint.yellow_bold(self.version.version, end='')
elif isinstance(self.version, GitCommit):
cprint.red('Commit: ', lpad=16, end='')
cprint.red_bold(self.version.version[:7], end='')
elif isinstance(self.version, GitRef):
cprint.red('Ref: ', lpad=16, end='')
cprint.red_bold(self.version.version, end='')
elif isinstance(self.version, GitTag):
cprint.blue_bold('Tag: ', lpad=16, end='')
if latest_version.version == self.version.version:
cprint.green_bold(self.version.version, end='')
else:
cprint.yellow_bold(self.version.version, end='')
else:
cprint.red_bold('UNSPECIFIED', lpad=16, end='')
if version_check:
cprint.white('[Latest: %s]' % (
latest_version.report
), lpad=1)
else:
cprint.white('')
def get_latest_version(self, version_cache=None):
"""return a dict with version, date of newest tag in repository"""
if version_cache is not None:
local_info = version_cache.read(self.cachename)
else:
local_info = {}
if not local_info:
with Repository(self.url) as repository:
try:
latest_tag = repository.latest_tag
except GitError:
local_info = {}
else:
local_info = {
'version': latest_tag.name,
'date': latest_tag.date.strftime('%Y-%m-%d'),
}
if version_cache is not None:
version_cache.write(self.cachename, local_info)
try:
version = GitTag(
version=local_info['version'],
date=datetime.strptime(
local_info['date'], '%Y-%m-%d'
).date()
)
except KeyError:
version = Unknown() # pylint: disable=redefined-variable-type
LOG.debug("latest version for %s is %s", self.name, version)
return version
@property
def cachename(self):
"""returns cache lookup key"""
return hashlib.sha256(self.url.encode('utf-8')).hexdigest()
class ForgeModule(PuppetModule):
"""Puppet module hosted on forge"""
def __init__(self, name, author, version=None):
"""Initialize forge module
:argument name Name of module
:argument author Author/Namespace of the module
"""
super().__init__(name)
self._author = author
self.version = version
@property
def version(self):
"""Return this modules version"""
return self._version
@version.setter
def version(self, value):
"""Set version of this puppet module"""
if isinstance(value, Forge):
self._version = value
else:
if value is None:
self._version = None
else:
raise TypeError('Unsupported type %s for value' % type(value))
@property
def author(self):
"""Puppet module author / namespace"""
return self._author
@property
def forgename(self):
"""Return module name as used on forge"""
return "%s/%s" % (self._author, self._name)
@property
def cachename(self):
"""returns cache lookup key"""
return hashlib.sha256(self.forgename.encode('utf-8')).hexdigest()
def __repr__(self):
"""Return unique string representation"""
representation = "%s:forge:%s" % (self.name, self.author)
if self.version:
representation += ":%s" % self.version
return representation
def print_version_information(self, version_check=True, version_cache=None):
"""Print out version information"""
if version_check:
latest_version = self.get_latest_version(version_cache)
else:
latest_version = Unknown()
cprint.magenta_bold('Version:', lpad=2)
cprint.white('Forge:', lpad=4, rpad=6, end='')
cprint.white(self.forgename, suffix=':')
if not self.version:
cprint.red_bold('UNSPECIFIED', lpad=16, end='')
else:
if latest_version.version == self.version.version:
cprint.green_bold(self.version.version, lpad=16, end='')
else:
cprint.yellow_bold(self.version.version, lpad=16, end='')
if version_check:
cprint.white(' [Latest: %s]' % latest_version.report)
else:
cprint.white('')
def get_latest_version(self, version_cache=None):
"""returns dict with version and date of the newest version on forge"""
if version_cache is not None:
local_info = version_cache.read(self.cachename)
else:
local_info = {}
if not local_info:
try:
local_info = ForgeApi(
name=self.name,
author=self.author
).current_version
except ForgeError:
return Unknown()
if version_cache is not None:
version_cache.write(self.cachename, local_info)
try:
return Forge(
version=local_info['version'],
date=datetime.strptime(
local_info['date'], '%Y-%m-%d'
).date()
)
except KeyError:
return Unknown()
@property
def puppetfile(self):
"""Return puppetfile representation of module"""
line = "mod '%s/%s'" % (self.author, self.name)
if self.version:
line += ", %s" % self.version.puppetfile
return [line, ]
class BaseVersion:
"""Base class for version objects"""
def __init__(self, version, date=None):
"""Initialize Version
:argument version Version(-string) for this module.
"""
self._date = date
self._version = version
def __hash__(self):
return hash(self._version)
def __repr__(self):
return "%s(%s)" % (type(self).__name__, str(self._version))
@property
def version(self):
"""Return Version(-string)"""
return self._version
@property
def date(self):
"""Return Date of Version"""
return self._date
@property
def report(self):
"""Return version in suitable format for crmngr report"""
if self._date is None:
return "%s" % self._version
else:
return "%s (%s)" % (self._version, self._date)
@property
def commit_message(self):
"""Return version in suitable format for commit message"""
return "%s" % self.version
class Unknown(BaseVersion):
"""Object to represent and unknown Version"""
def __init__(self, version=None, date=None):
super().__init__(version, date)
def __repr__(self):
return "%s()" % type(self).__name__
@property
def report(self):
"""Return version in suitable format for crmngr report"""
return 'unknown'
class Forge(BaseVersion):
"""Puppet Forge Version"""
@property
def puppetfile(self):
"""Return version in suitable format for puppetfile"""
return "'%s'" % self.version
class GitBranch(BaseVersion):
"""Git Branch"""
@property
def puppetfile(self):
"""Return version in suitable format for puppetfile"""
return " :branch => '%s'" % self.version
@property
def commit_message(self):
"""Return version in suitable format for commit message"""
return "branch [%s]" % self.version
class GitCommit(BaseVersion):
"""Git Commit"""
@property
def puppetfile(self):
"""Return version in suitable format for puppetfile"""
return " :commit => '%s'" % self.version
@property
def commit_message(self):
"""Return version in suitable format for commit message"""
return "commit [%s]" % self.version
class GitRef(BaseVersion):
"""Git Ref"""
@property
def puppetfile(self):
"""Return version in suitable format for puppetfile"""
return " :ref => '%s'" % self.version
class GitTag(BaseVersion):
""" Git Tag"""
@property
def puppetfile(self):
"""Return version in suitable format for puppetfile"""
return " :tag => '%s'" % self.version
@property
def commit_message(self):
"""Return version in suitable format for commit message"""
return "tag [%s]" % self.version
|
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from sqlalchemy import Column, DateTime, Float, Integer, String, Table, text
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
metadata = Base.metadata
class AllstarFull(Base):
__tablename__ = u'AllstarFull'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
gameNum = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
gameID = Column(String(12))
teamID = Column(String(3))
lgID = Column(String(2))
GP = Column(Integer)
startingPos = Column(Integer)
class Appearance(Base):
__tablename__ = u'Appearances'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
lgID = Column(String(2))
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
G_all = Column(Integer)
GS = Column(Integer)
G_batting = Column(Integer)
G_defense = Column(Integer)
G_p = Column(Integer)
G_c = Column(Integer)
G_1b = Column(Integer)
G_2b = Column(Integer)
G_3b = Column(Integer)
G_ss = Column(Integer)
G_lf = Column(Integer)
G_cf = Column(Integer)
G_rf = Column(Integer)
G_of = Column(Integer)
G_dh = Column(Integer)
G_ph = Column(Integer)
G_pr = Column(Integer)
class AwardsManager(Base):
__tablename__ = u'AwardsManagers'
playerID = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
awardID = Column(String(75), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
tie = Column(String(1))
notes = Column(String(100))
class AwardsPlayer(Base):
__tablename__ = u'AwardsPlayers'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
awardID = Column(String(255), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
tie = Column(String(1))
notes = Column(String(100))
class AwardsShareManager(Base):
__tablename__ = u'AwardsShareManagers'
awardID = Column(String(25), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
playerID = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
pointsWon = Column(Integer)
pointsMax = Column(Integer)
votesFirst = Column(Integer)
class AwardsSharePlayer(Base):
__tablename__ = u'AwardsSharePlayers'
awardID = Column(String(25), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
pointsWon = Column(Float(asdecimal=True))
pointsMax = Column(Integer)
votesFirst = Column(Float(asdecimal=True))
class Batting(Base):
__tablename__ = u'Batting'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
stint = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3))
lgID = Column(String(2))
G = Column(Integer)
G_batting = Column(Integer)
AB = Column(Integer)
R = Column(Integer)
H = Column(Integer)
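    # "2B" and "3B" are not valid Python identifiers, so these attributes are
    # prefixed with an underscore while Column() keeps the real column name.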
_2B = Column(u'2B', Integer)
_3B = Column(u'3B', Integer)
HR = Column(Integer)
RBI = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
IBB = Column(Integer)
HBP = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
G_old = Column(Integer)
class BattingPost(Base):
__tablename__ = u'BattingPost'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
round = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
teamID = Column(String(3))
lgID = Column(String(2))
G = Column(Integer)
AB = Column(Integer)
R = Column(Integer)
H = Column(Integer)
_2B = Column(u'2B', Integer)
_3B = Column(u'3B', Integer)
HR = Column(Integer)
RBI = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
IBB = Column(Integer)
HBP = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
class BattingTotal(Base):
__tablename__ = u'BattingTotal'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
G = Column(Integer)
G_batting = Column(Integer)
AB = Column(Integer)
R = Column(Integer)
H = Column(Integer)
_2B = Column(u'2B', Integer)
_3B = Column(u'3B', Integer)
HR = Column(Integer)
RBI = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
IBB = Column(Integer)
HBP = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
G_old = Column(Integer)
class CollegePlaying(Base):
__tablename__ = u'CollegePlaying'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
schoolID = Column(String(15), primary_key=True)
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
class Fielding(Base):
__tablename__ = u'Fielding'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
stint = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3))
lgID = Column(String(2))
POS = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
G = Column(Integer)
GS = Column(Integer)
InnOuts = Column(Integer)
PO = Column(Integer)
A = Column(Integer)
E = Column(Integer)
DP = Column(Integer)
PB = Column(Integer)
WP = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
ZR = Column(Float(asdecimal=True))
class FieldingOF(Base):
__tablename__ = u'FieldingOF'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
stint = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
Glf = Column(Integer)
Gcf = Column(Integer)
Grf = Column(Integer)
class FieldingPost(Base):
__tablename__ = u'FieldingPost'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3))
lgID = Column(String(2))
round = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
POS = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
G = Column(Integer)
GS = Column(Integer)
InnOuts = Column(Integer)
PO = Column(Integer)
A = Column(Integer)
E = Column(Integer)
DP = Column(Integer)
TP = Column(Integer)
PB = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
class HallOfFame(Base):
__tablename__ = u'HallOfFame'
playerID = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
yearid = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
votedBy = Column(String(64), primary_key=True, nullable=False, server_default=text("''"))
ballots = Column(Integer)
needed = Column(Integer)
votes = Column(Integer)
inducted = Column(String(1))
category = Column(String(20))
needed_note = Column(String(25))
class Manager(Base):
__tablename__ = u'Managers'
playerID = Column(String(10))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
lgID = Column(String(2))
inseason = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
G = Column(Integer)
W = Column(Integer)
L = Column(Integer)
rank = Column(Integer)
plyrMgr = Column(String(1))
class ManagersHalf(Base):
__tablename__ = u'ManagersHalf'
playerID = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
lgID = Column(String(2))
inseason = Column(Integer)
half = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
G = Column(Integer)
W = Column(Integer)
L = Column(Integer)
rank = Column(Integer)
class Master(Base):
__tablename__ = u'Master'
playerID = Column(String(10), primary_key=True)
birthYear = Column(Integer)
birthMonth = Column(Integer)
birthDay = Column(Integer)
birthCountry = Column(String(50))
birthState = Column(String(2))
birthCity = Column(String(50))
deathYear = Column(Integer)
deathMonth = Column(Integer)
deathDay = Column(Integer)
deathCountry = Column(String(50))
deathState = Column(String(2))
deathCity = Column(String(50))
nameFirst = Column(String(50))
nameLast = Column(String(50))
nameGiven = Column(String(255))
weight = Column(Integer)
height = Column(Float(asdecimal=True))
bats = Column(String(1))
throws = Column(String(1))
debut = Column(DateTime)
finalGame = Column(DateTime)
retroID = Column(String(9))
bbrefID = Column(String(9))
class Pitching(Base):
__tablename__ = u'Pitching'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
stint = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3))
lgID = Column(String(2))
W = Column(Integer)
L = Column(Integer)
G = Column(Integer)
GS = Column(Integer)
CG = Column(Integer)
SHO = Column(Integer)
SV = Column(Integer)
IPouts = Column(Integer)
H = Column(Integer)
ER = Column(Integer)
HR = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
BAOpp = Column(Float(asdecimal=True))
ERA = Column(Float(asdecimal=True))
IBB = Column(Integer)
WP = Column(Integer)
HBP = Column(Integer)
BK = Column(Integer)
BFP = Column(Integer)
GF = Column(Integer)
R = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
class PitchingPost(Base):
__tablename__ = u'PitchingPost'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
round = Column(String(10), primary_key=True, nullable=False, server_default=text("''"))
teamID = Column(String(3))
lgID = Column(String(2))
W = Column(Integer)
L = Column(Integer)
G = Column(Integer)
GS = Column(Integer)
CG = Column(Integer)
SHO = Column(Integer)
SV = Column(Integer)
IPouts = Column(Integer)
H = Column(Integer)
ER = Column(Integer)
HR = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
BAOpp = Column(Float(asdecimal=True))
ERA = Column(Float(asdecimal=True))
IBB = Column(Integer)
WP = Column(Integer)
HBP = Column(Integer)
BK = Column(Integer)
BFP = Column(Integer)
GF = Column(Integer)
R = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
class PitchingTotal(Base):
__tablename__ = u'PitchingTotal'
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
W = Column(Integer)
L = Column(Integer)
G = Column(Integer)
GS = Column(Integer)
CG = Column(Integer)
SHO = Column(Integer)
SV = Column(Integer)
IPouts = Column(Integer)
H = Column(Integer)
ER = Column(Integer)
HR = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
IBB = Column(Integer)
WP = Column(Integer)
HBP = Column(Integer)
BK = Column(Integer)
BFP = Column(Integer)
GF = Column(Integer)
R = Column(Integer)
SH = Column(Integer)
SF = Column(Integer)
GIDP = Column(Integer)
class Salary(Base):
__tablename__ = u'Salaries'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
salary = Column(Float(asdecimal=True))
class SalariesTotal(Base):
__tablename__ = u'SalariesTotal'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
playerID = Column(String(9), primary_key=True, nullable=False, server_default=text("''"))
salary = Column(Float(asdecimal=True))
class School(Base):
__tablename__ = u'Schools'
schoolID = Column(String(15), primary_key=True)
name_full = Column(String(255))
city = Column(String(55))
state = Column(String(55))
country = Column(String(55))
class SeriesPost(Base):
__tablename__ = u'SeriesPost'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
round = Column(String(5), primary_key=True, nullable=False, server_default=text("''"))
teamIDwinner = Column(String(3))
lgIDwinner = Column(String(2))
teamIDloser = Column(String(3))
lgIDloser = Column(String(2))
wins = Column(Integer)
losses = Column(Integer)
ties = Column(Integer)
class Team(Base):
__tablename__ = u'Teams'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
franchID = Column(String(3))
divID = Column(String(1))
Rank = Column(Integer)
G = Column(Integer)
Ghome = Column(Integer)
W = Column(Integer)
L = Column(Integer)
DivWin = Column(String(1))
WCWin = Column(String(1))
LgWin = Column(String(1))
WSWin = Column(String(1))
R = Column(Integer)
AB = Column(Integer)
H = Column(Integer)
_2B = Column(u'2B', Integer)
_3B = Column(u'3B', Integer)
HR = Column(Integer)
BB = Column(Integer)
SO = Column(Integer)
SB = Column(Integer)
CS = Column(Integer)
HBP = Column(Integer)
SF = Column(Integer)
RA = Column(Integer)
ER = Column(Integer)
ERA = Column(Float(asdecimal=True))
CG = Column(Integer)
SHO = Column(Integer)
SV = Column(Integer)
IPouts = Column(Integer)
HA = Column(Integer)
HRA = Column(Integer)
BBA = Column(Integer)
SOA = Column(Integer)
E = Column(Integer)
DP = Column(Integer)
FP = Column(Float(asdecimal=True))
name = Column(String(50))
park = Column(String(255))
attendance = Column(Integer)
BPF = Column(Integer)
PPF = Column(Integer)
teamIDBR = Column(String(3))
teamIDlahman45 = Column(String(3))
teamIDretro = Column(String(3))
class TeamsFranchise(Base):
__tablename__ = u'TeamsFranchises'
franchID = Column(String(3), primary_key=True)
franchName = Column(String(50))
active = Column(String(2))
NAassoc = Column(String(3))
class TeamsHalf(Base):
__tablename__ = u'TeamsHalf'
yearID = Column(Integer, primary_key=True, nullable=False, server_default=text("'0'"))
lgID = Column(String(2), primary_key=True, nullable=False, server_default=text("''"))
teamID = Column(String(3), primary_key=True, nullable=False, server_default=text("''"))
Half = Column(String(1), primary_key=True, nullable=False, server_default=text("''"))
divID = Column(String(1))
DivWin = Column(String(1))
Rank = Column(Integer)
G = Column(Integer)
W = Column(Integer)
L = Column(Integer)
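# A minimal query sketch against the models above (the engine URL and table
# contents are assumptions; the declarations only describe the schema):
#
#     from sqlalchemy import create_engine
#     from sqlalchemy.orm import sessionmaker
#
#     engine = create_engine('sqlite:///lahman.db')
#     session = sessionmaker(bind=engine)()
#     top_hr = (session.query(Batting.playerID, Batting.HR)
#               .filter(Batting.yearID == 1927)
#               .order_by(Batting.HR.desc())
#               .limit(5)
#               .all())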
|
|
#!/usr/bin/env python
VERSION = "1.0.4"
profiles = ''' <profiles>
<profile>
<id>scala-2.10</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<scala.version>2.10.6</scala.version>
<scala.binary.version>2.10</scala.binary.version>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
<spark.version>1.6.2</spark.version>
</properties>
</profile>
<profile>
<id>scala-2.11</id>
<activation>
<property><name>scala-2.11</name></property>
</activation>
<properties>
<scala.version>2.11.8</scala.version>
<scala.binary.version>2.11</scala.binary.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<spark.version>2.0.0</spark.version>
</properties>
</profile>
</profiles>
'''
javaversion17 = ''' <maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
'''
javaversion18 = ''' <maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
'''
sourcejar = ''' <execution>
<id>attach-sources</id>
<goals>
<goal>add-source</goal>
</goals>
</execution>
'''
javadocjar = ''' <execution>
<id>attach-javadocs</id>
<goals>
<goal>doc-jar</goal>
</goals>
</execution>
'''
scalatest = ''' <plugin>
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
<version>1.0</version>
<configuration>
<reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
<junitxml>.</junitxml>
</configuration>
<executions>
<execution>
<id>test</id>
<goals>
<goal>test</goal>
</goals>
</execution>
</executions>
</plugin>
'''
copydependencies = ''' <plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.10</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>
target/lib
</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
'''
gpgplugin = ''' <plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-gpg-plugin</artifactId>
<version>1.6</version>
<executions>
<execution>
<id>sign-artifacts</id>
<phase>verify</phase>
<goals>
<goal>sign</goal>
</goals>
</execution>
</executions>
</plugin>
'''
stagingplugin = ''' <plugin>
<groupId>org.sonatype.plugins</groupId>
<artifactId>nexus-staging-maven-plugin</artifactId>
<version>1.6.7</version>
<extensions>true</extensions>
<configuration>
<serverId>ossrh</serverId>
<nexusUrl>https://oss.sonatype.org/</nexusUrl>
<autoReleaseAfterClose>true</autoReleaseAfterClose>
</configuration>
</plugin>
'''
pluginmanagement = ''' <pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.5</version>
<configuration>
<useReleaseProfile>false</useReleaseProfile>
<pushChanges>false</pushChanges>
<localCheckout>true</localCheckout>
<goals>deploy</goals>
</configuration>
</plugin>
</plugins>
</pluginManagement>
'''
distributionmanagement = ''' <distributionManagement>
<snapshotRepository>
<id>ossrh</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
</snapshotRepository>
<repository>
<id>ossrh</id>
<url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
</repository>
</distributionManagement>
'''
template = '''<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<!-- Copyright 2016 Jim Pivarski -->
<!-- -->
<!-- Licensed under the Apache License, Version 2.0 (the "License"); -->
<!-- you may not use this file except in compliance with the License. -->
<!-- You may obtain a copy of the License at -->
<!-- -->
<!-- http://www.apache.org/licenses/LICENSE-2.0 -->
<!-- -->
<!-- Unless required by applicable law or agreed to in writing, software -->
<!-- distributed under the License is distributed on an "AS IS" BASIS, -->
<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
<!-- See the License for the specific language governing permissions and -->
<!-- limitations under the License. -->
<name>{name}</name>
<description>{description}</description>
<url>http://histogrammar.org</url>
<inceptionYear>2016</inceptionYear>
<groupId>org.diana-hep</groupId>
<artifactId>{artifactid}</artifactId>
<version>{version}</version>
<packaging>jar</packaging>
<licenses>
<license>
<name>Apache License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0</url>
<distribution>repo</distribution>
</license>
</licenses>
<developers>
<developer>
<name>Jim Pivarski</name>
<email>jpivarski@gmail.com</email>
<organization>DIANA-HEP</organization>
<organizationUrl>http://diana-hep.org</organizationUrl>
</developer>
</developers>
<scm>
<connection>scm:git:git@github.com:histogrammar/histogrammar-scala.git</connection>
<developerConnection>scm:git:git@github.com:histogrammar/histogrammar-scala.git</developerConnection>
<url>git@github.com:histogrammar/histogrammar-scala.git</url>
</scm>
{profiles} <reporting>
<plugins>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<version>2.15.2</version>
</plugin>
</plugins>
</reporting>
<properties>
<encoding>UTF-8</encoding>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
{javaversion} </properties>
{dependencies}
<repositories>
<repository>
<id>central</id>
<name>Central Repository</name>
<url>http://repo1.maven.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Maven Plugin Repository</name>
<url>http://repo1.maven.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
</pluginRepository>
</pluginRepositories>
<build>
<plugins>
<plugin>
<!-- see http://davidb.github.com/scala-maven-plugin -->
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
<configuration>
<args>
<arg>-Dscalac.patmat.analysisBudget=512</arg>
<arg>-deprecation</arg>
<arg>-feature</arg>
<arg>-unchecked</arg>
<arg>-dependencyfile</arg>
<arg>${{project.build.directory}}/.scala_dependencies</arg>
</args>
<recompileMode>incremental</recompileMode>
<!-- <useZincServer>true</useZincServer> -->
</configuration>
</execution>
{sourcejar}{javadocjar}
</executions>
</plugin>
{scalatest}{copydependencies}
<plugin>
<artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version>
<configuration>
<createChecksum>true</createChecksum>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>2.2.1</version>
<executions>
<execution>
<id>attach-sources</id>
<goals>
<goal>jar-no-fork</goal>
</goals>
</execution>
</executions>
</plugin>
{gpgplugin}{stagingplugin} </plugins>
{pluginmanagement} <resources>
</resources>
<testResources>
</testResources>
</build>
{distributionmanagement}</project>
'''
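# Note on the template above: str.format() treats {name} as a substitution
# field, so Maven property references that must survive into the generated
# pom.xml are written with doubled braces: ${{project.build.directory}}
# renders as ${project.build.directory}. The fragments defined earlier
# (profiles, scalatest, copydependencies, ...) are substituted in as values
# and are never scanned by format(), so they keep single braces. A minimal
# illustration:
#
#     >>> '{artifactid} keeps ${{scala.version}}'.format(artifactid='core')
#     'core keeps ${scala.version}'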
if __name__ == "__main__":
open("core/pom.xml", "w").write(template.format(
name = "histogrammar",
description = "Histogram abstraction to simplify complex aggregations in distributed environments.",
artifactid = "histogrammar_${scala.binary.version}",
version = VERSION,
profiles = profiles,
javaversion = "",
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_${scala.binary.version}</artifactId>
<version>2.2.5</version>
<scope>test</scope>
</dependency>
</dependencies>
''',
sourcejar = "",
javadocjar = "",
scalatest = scalatest,
copydependencies = copydependencies,
gpgplugin = "",
stagingplugin = "",
pluginmanagement = "",
distributionmanagement = ""
))
open("core/deploy-scala-2.10.xml", "w").write(template.format(
name = "histogrammar",
description = "Histogram abstraction to simplify complex aggregations in distributed environments.",
artifactid = "histogrammar_2.10",
version = VERSION,
profiles = "",
javaversion = javaversion17,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.6</version>
</dependency>
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_2.10</artifactId>
<version>2.2.5</version>
<scope>test</scope>
</dependency>
</dependencies>
''',
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = scalatest,
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
open("core/deploy-scala-2.11.xml", "w").write(template.format(
name = "histogrammar",
description = "Histogram abstraction to simplify complex aggregations in distributed environments.",
artifactid = "histogrammar_2.11",
version = VERSION,
profiles = "",
javaversion = javaversion18,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.8</version>
</dependency>
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_2.11</artifactId>
<version>2.2.5</version>
<scope>test</scope>
</dependency>
</dependencies>
''',
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = scalatest,
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
open("sparksql/pom.xml", "w").write(template.format(
name = "histogrammar-sparksql",
description = "Adapter for using Histogrammar in SparkSQL.",
artifactid = "histogrammar-sparksql_${scala.binary.version}",
version = VERSION,
profiles = profiles,
javaversion = "",
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${{scala.version}}</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_${{scala.binary.version}}</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_${{scala.binary.version}}</artifactId>
<version>${{spark.version}}</version>
<scope>provided</scope>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = "",
javadocjar = "",
scalatest = "",
copydependencies = copydependencies,
gpgplugin = "",
stagingplugin = "",
pluginmanagement = "",
distributionmanagement = ""
))
open("sparksql/deploy-scala-2.10.xml", "w").write(template.format(
name = "histogrammar-sparksql",
description = "Adapter for using Histogrammar in SparkSQL.",
artifactid = "histogrammar-sparksql_2.10",
version = VERSION,
profiles = "",
javaversion = javaversion17,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.6</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_2.10</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.2</version>
<scope>provided</scope>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = "",
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
open("sparksql/deploy-scala-2.11.xml", "w").write(template.format(
name = "histogrammar-sparksql",
description = "Adapter for using Histogrammar in SparkSQL.",
artifactid = "histogrammar-sparksql_2.11",
version = VERSION,
profiles = "",
javaversion = javaversion18,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.8</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_2.11</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.0.0</version>
<scope>provided</scope>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = "",
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
open("bokeh/pom.xml", "w").write(template.format(
name = "histogrammar-bokeh",
description = "Adapter for using Histogrammar to generate Bokeh plots.",
artifactid = "histogrammar-bokeh_${scala.binary.version}",
version = VERSION,
profiles = profiles,
javaversion = "",
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${{scala.version}}</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_${{scala.binary.version}}</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>io.continuum.bokeh</groupId>
<artifactId>bokeh_${{scala.binary.version}}</artifactId>
<version>0.7</version>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = "",
javadocjar = "",
scalatest = "",
copydependencies = copydependencies,
gpgplugin = "",
stagingplugin = "",
pluginmanagement = "",
distributionmanagement = ""
))
open("bokeh/deploy-scala-2.10.xml", "w").write(template.format(
name = "histogrammar-bokeh",
description = "Adapter for using Histogrammar to generate Bokeh plots.",
artifactid = "histogrammar-bokeh_2.10",
version = VERSION,
profiles = "",
javaversion = javaversion17,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.6</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_2.10</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>io.continuum.bokeh</groupId>
<artifactId>bokeh_2.10</artifactId>
<version>0.7</version>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = "",
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
open("bokeh/deploy-scala-2.11.xml", "w").write(template.format(
name = "histogrammar-bokeh",
description = "Adapter for using Histogrammar to generate Bokeh plots.",
artifactid = "histogrammar-bokeh_2.11",
version = VERSION,
profiles = "",
javaversion = javaversion18,
dependencies = ''' <dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.8</version>
</dependency>
<dependency>
<groupId>org.diana-hep</groupId>
<artifactId>histogrammar_2.11</artifactId>
<version>{version}</version>
</dependency>
<dependency>
<groupId>io.continuum.bokeh</groupId>
<artifactId>bokeh_2.11</artifactId>
<version>0.7</version>
</dependency>
</dependencies>
'''.format(version = VERSION),
sourcejar = sourcejar,
javadocjar = javadocjar,
scalatest = "",
copydependencies = "",
gpgplugin = gpgplugin,
stagingplugin = stagingplugin,
pluginmanagement = pluginmanagement,
distributionmanagement = distributionmanagement
))
|
|
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
DRAC BIOS specific methods
"""
import re
from xml.etree import ElementTree as ET
from oslo_log import log as logging
from oslo_utils import excutils
from ironic.common import exception
from ironic.common.i18n import _, _LE, _LW
from ironic.conductor import task_manager
from ironic.drivers.modules.drac import client as wsman_client
from ironic.drivers.modules.drac import management
from ironic.drivers.modules.drac import resource_uris
LOG = logging.getLogger(__name__)
def _val_or_none(item):
"""Test to see if an XML element should be treated as None.
If the element contains an XML Schema namespaced nil attribute that
has a value of True, return None. Otherwise, return whatever the
text of the element is.
:param item: an XML element.
    :returns: None or the text of the XML element.
"""
if item is None:
return
itemnil = item.attrib.get('{%s}nil' % resource_uris.CIM_XmlSchema)
if itemnil == "true":
return
else:
return item.text
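# A small illustration of the nil handling above (assuming
# resource_uris.CIM_XmlSchema is the XML Schema instance namespace,
# http://www.w3.org/2001/XMLSchema-instance):
#
#     nil_elem = ET.fromstring(
#         '<Value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
#         ' xsi:nil="true"/>')
#     _val_or_none(nil_elem)                            # -> None
#     _val_or_none(ET.fromstring('<Value>On</Value>'))  # -> 'On'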
def _parse_common(item, ns):
"""Parse common values that all attributes must have.
:param item: an XML element.
:param ns: the namespace to search.
:returns: a dictionary containing the parsed attributes of the element.
:raises: DracOperationFailed if the given element had no AttributeName
value.
"""
searches = {'current_value': './{%s}CurrentValue' % ns,
'read_only': './{%s}IsReadOnly' % ns,
'pending_value': './{%s}PendingValue' % ns}
LOG.debug("Handing %(ns)s for %(xml)s", {
'ns': ns,
'xml': ET.tostring(item),
})
name = item.findtext('./{%s}AttributeName' % ns)
if not name:
raise exception.DracOperationFailed(
message=_('Item has no name: "%s"') % ET.tostring(item))
res = {}
res['name'] = name
for k in searches:
if k == 'read_only':
res[k] = item.findtext(searches[k]) == 'true'
else:
res[k] = _val_or_none(item.find(searches[k]))
return res
def _format_error_msg(invalid_attribs_msgs, read_only_keys):
"""Format a combined error message.
This method creates a combined error message from a list of error messages
and a list of read-only keys.
:param invalid_attribs_msgs: a list of invalid attribute error messages.
:param read_only_keys: a list of read only keys that were attempted to be
written to.
:returns: a formatted error message.
"""
msg = '\n'.join(invalid_attribs_msgs)
if invalid_attribs_msgs and read_only_keys:
msg += '\n'
if read_only_keys:
msg += (_('Cannot set read-only BIOS settings "%r"') % read_only_keys)
return msg
def parse_enumeration(item, ns):
"""Parse an attribute that has a set of distinct values.
:param item: an XML element.
:param ns: the namespace to search.
:returns: a dictionary containing the parsed attributes of the element.
:raises: DracOperationFailed if the given element had no AttributeName
value.
"""
res = _parse_common(item, ns)
res['possible_values'] = sorted(
[v.text for v in item.findall('./{%s}PossibleValues' % ns)])
return res
def parse_string(item, ns):
"""Parse an attribute that should be a freeform string.
:param item: an XML element.
:param ns: the namespace to search.
:returns: a dictionary containing the parsed attributes of the element.
:raises: DracOperationFailed if the given element had no AttributeName
value.
"""
res = _parse_common(item, ns)
searches = {'min_length': './{%s}MinLength' % ns,
'max_length': './{%s}MaxLength' % ns,
'pcre_regex': './{%s}ValueExpression' % ns}
for k in searches:
if k == 'pcre_regex':
res[k] = _val_or_none(item.find(searches[k]))
else:
res[k] = int(item.findtext(searches[k]))
    # Workaround for a BIOS bug on some 13th-gen boxes: the AssetTag
    # ValueExpression contains the literal token MAX_ASSET_TAG_LEN instead of
    # the numeric maximum length, so substitute the reported max_length.
badval = re.compile(r"MAX_ASSET_TAG_LEN")
if (res['pcre_regex'] is not None and
res['name'] == 'AssetTag' and
badval.search(res['pcre_regex'])):
res['pcre_regex'] = badval.sub("%d" % res['max_length'],
res['pcre_regex'])
return res
def parse_integer(item, ns):
"""Parse an attribute that should be an integer.
:param item: an XML element.
:param ns: the namespace to search.
:returns: a dictionary containing the parsed attributes of the element.
:raises: DracOperationFailed if the given element had no AttributeName
value.
"""
res = _parse_common(item, ns)
for k in ['current_value', 'pending_value']:
if res[k]:
res[k] = int(res[k])
searches = {'lower_bound': './{%s}LowerBound' % ns,
'upper_bound': './{%s}UpperBound' % ns}
for k in searches:
res[k] = int(item.findtext(searches[k]))
return res
def _get_config(node, resource):
"""Helper for get_config.
Handles getting BIOS config values for a single namespace
:param node: an ironic node object.
:param resource: the namespace.
:returns: a dictionary that maps the name of each attribute to a dictionary
of values of that attribute.
    :raises: InvalidParameterValue if some information required to connect
to the DRAC is missing on the node or the value of one or more
required parameters is invalid.
:raises: DracClientError on an error from pywsman library.
:raises: DracOperationFailed if the specified resource is unknown.
"""
res = {}
client = wsman_client.get_wsman_client(node)
try:
doc = client.wsman_enumerate(resource)
except exception.DracClientError as exc:
with excutils.save_and_reraise_exception():
LOG.error(_LE('DRAC driver failed to get BIOS settings '
'for resource %(resource)s '
'from node %(node_uuid)s. '
'Reason: %(error)s.'),
{'node_uuid': node.uuid,
'resource': resource,
'error': exc})
items = doc.find('.//{%s}Items' % resource_uris.CIM_WSMAN)
for item in items:
if resource == resource_uris.DCIM_BIOSEnumeration:
attribute = parse_enumeration(item, resource)
elif resource == resource_uris.DCIM_BIOSString:
attribute = parse_string(item, resource)
elif resource == resource_uris.DCIM_BIOSInteger:
attribute = parse_integer(item, resource)
else:
raise exception.DracOperationFailed(
message=_('Unknown namespace %(ns)s for item: "%(item)s"') % {
'item': ET.tostring(item), 'ns': resource})
res[attribute['name']] = attribute
return res
def get_config(node):
"""Get the BIOS configuration from a Dell server using WSMAN
:param node: an ironic node object.
:raises: DracClientError on an error from pywsman.
:raises: DracOperationFailed when a BIOS setting cannot be parsed.
:returns: a dictionary containing BIOS settings in the form of:
{'EnumAttrib': {'name': 'EnumAttrib',
'current_value': 'Value',
'pending_value': 'New Value', # could also be None
'read_only': False,
'possible_values': ['Value', 'New Value', 'None']},
'StringAttrib': {'name': 'StringAttrib',
'current_value': 'Information',
'pending_value': None,
'read_only': False,
'min_length': 0,
'max_length': 255,
'pcre_regex': '^[0-9A-Za-z]{0,255}$'},
'IntegerAttrib': {'name': 'IntegerAttrib',
'current_value': 0,
'pending_value': None,
'read_only': True,
'lower_bound': 0,
'upper_bound': 65535}
}
The above values are only examples, of course. BIOS attributes exposed via
this API will always be either an enumerated attribute, a string attribute,
or an integer attribute. All attributes have the following parameters:
:name: is the name of the BIOS attribute.
:current_value: is the current value of the attribute.
It will always be either an integer or a string.
:pending_value: is the new value that we want the attribute to have.
None means that there is no pending value.
:read_only: indicates whether this attribute can be changed. Trying to
change a read-only value will result in an error.
The read-only flag can change depending on other attributes.
A future version of this call may expose the dependencies
that indicate when that may happen.
Enumerable attributes also have the following parameters:
:possible_values: is an array of values it is permissible to set
the attribute to.
String attributes also have the following parameters:
:min_length: is the minimum length of the string.
:max_length: is the maximum length of the string.
:pcre_regex: is a PCRE compatible regular expression that the string
must match. It may be None if the string is read only
or if the string does not have to match any particular
regular expression.
Integer attributes also have the following parameters:
:lower_bound: is the minimum value the attribute can have.
:upper_bound: is the maximum value the attribute can have.
"""
res = {}
for ns in [resource_uris.DCIM_BIOSEnumeration,
resource_uris.DCIM_BIOSString,
resource_uris.DCIM_BIOSInteger]:
attribs = _get_config(node, ns)
if not set(res).isdisjoint(set(attribs)):
raise exception.DracOperationFailed(
message=_('Colliding attributes %r') % (
set(res) & set(attribs)))
res.update(attribs)
return res
@task_manager.require_exclusive_lock
def set_config(task, **kwargs):
"""Sets the pending_value parameter for each of the values passed in.
:param task: an ironic task object.
:param kwargs: a dictionary of {'AttributeName': 'NewValue'}
:raises: DracOperationFailed if any new values are invalid.
:raises: DracOperationFailed if any of the attributes are read-only.
:raises: DracOperationFailed if any of the attributes cannot be set for
any other reason.
:raises: DracClientError on an error from the pywsman library.
:returns: A boolean indicating whether commit_config needs to be
called to make the changes.
"""
node = task.node
management.check_for_config_job(node)
current = get_config(node)
unknown_keys = set(kwargs) - set(current)
if unknown_keys:
LOG.warning(_LW('Ignoring unknown BIOS attributes "%r"'),
unknown_keys)
candidates = set(kwargs) - unknown_keys
read_only_keys = []
unchanged_attribs = []
invalid_attribs_msgs = []
attrib_names = []
for k in candidates:
if str(kwargs[k]) == str(current[k]['current_value']):
unchanged_attribs.append(k)
elif current[k]['read_only']:
read_only_keys.append(k)
else:
if 'possible_values' in current[k]:
if str(kwargs[k]) not in current[k]['possible_values']:
m = _('Attribute %(attr)s cannot be set to value %(val)s.'
' It must be in %(ok)r') % {
'attr': k,
'val': kwargs[k],
'ok': current[k]['possible_values']}
invalid_attribs_msgs.append(m)
continue
if ('pcre_regex' in current[k] and
current[k]['pcre_regex'] is not None):
regex = re.compile(current[k]['pcre_regex'])
if regex.search(str(kwargs[k])) is None:
# TODO(victor-lowther)
# Leave untranslated for now until the unicode
# issues that the test suite exposes are straightened out.
m = ('Attribute %(attr)s cannot be set to value %(val)s.'
' It must match regex %(re)s.') % {
'attr': k,
'val': kwargs[k],
're': current[k]['pcre_regex']}
invalid_attribs_msgs.append(m)
continue
if 'lower_bound' in current[k]:
lower = current[k]['lower_bound']
upper = current[k]['upper_bound']
val = int(kwargs[k])
if val < lower or val > upper:
m = _('Attribute %(attr)s cannot be set to value %(val)d.'
' It must be between %(lower)d and %(upper)d.') % {
'attr': k,
'val': val,
'lower': lower,
'upper': upper}
invalid_attribs_msgs.append(m)
continue
attrib_names.append(k)
if unchanged_attribs:
LOG.warning(_LW('Ignoring unchanged BIOS settings %r'),
unchanged_attribs)
if invalid_attribs_msgs or read_only_keys:
raise exception.DracOperationFailed(
_format_error_msg(invalid_attribs_msgs, read_only_keys))
if not attrib_names:
return False
client = wsman_client.get_wsman_client(node)
selectors = {'CreationClassName': 'DCIM_BIOSService',
'Name': 'DCIM:BIOSService',
'SystemCreationClassName': 'DCIM_ComputerSystem',
'SystemName': 'DCIM:ComputerSystem'}
properties = {'Target': 'BIOS.Setup.1-1',
'AttributeName': attrib_names,
                  'AttributeValue': [kwargs[k] for k in attrib_names]}
doc = client.wsman_invoke(resource_uris.DCIM_BIOSService,
'SetAttributes',
selectors,
properties)
# Yes, we look for RebootRequired. In this context, that actually means
# that we need to create a lifecycle controller config job and then reboot
# so that the lifecycle controller can commit the BIOS config changes that
# we have proposed.
set_results = doc.findall(
'.//{%s}RebootRequired' % resource_uris.DCIM_BIOSService)
return any(str(res.text) == 'Yes' for res in set_results)
@task_manager.require_exclusive_lock
def commit_config(task, reboot=False):
"""Commits pending changes added by set_config
:param task: is the ironic task for running the config job.
:param reboot: indicates whether a reboot job should be automatically
created with the config job.
:raises: DracClientError on an error from pywsman library.
:raises: DracPendingConfigJobExists if the job is already created.
:raises: DracOperationFailed if the client received response with an
error message.
:raises: DracUnexpectedReturnValue if the client received a response
with unexpected return value
"""
node = task.node
management.check_for_config_job(node)
management.create_config_job(node, reboot)
@task_manager.require_exclusive_lock
def abandon_config(task):
"""Abandons uncommitted changes added by set_config
:param task: is the ironic task for abandoning the changes.
:raises: DracClientError on an error from pywsman library.
:raises: DracOperationFailed on error reported back by DRAC.
:raises: DracUnexpectedReturnValue if the drac did not report success.
"""
node = task.node
client = wsman_client.get_wsman_client(node)
selectors = {'CreationClassName': 'DCIM_BIOSService',
'Name': 'DCIM:BIOSService',
'SystemCreationClassName': 'DCIM_ComputerSystem',
'SystemName': 'DCIM:ComputerSystem'}
properties = {'Target': 'BIOS.Setup.1-1'}
client.wsman_invoke(resource_uris.DCIM_BIOSService,
'DeletePendingConfiguration',
selectors,
properties,
wsman_client.RET_SUCCESS)
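# A sketch of the intended call sequence (ProcVirtualization is only an
# example attribute name; `task` is an ironic task holding an exclusive lock
# on the node):
#
#     if set_config(task, ProcVirtualization='Enabled'):
#         # set_config only stages pending values; a lifecycle controller
#         # config job plus a reboot are needed to apply them.
#         commit_config(task, reboot=True)
#     # abandon_config(task) would instead discard the pending values.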
|
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=too-many-instance-attributes, too-many-arguments, protected-access
# pylint: disable=too-many-public-methods
"""A `BucketingModule` implement the `BaseModule` API, and allows multiple
symbols to be used depending on the `bucket_key` provided by each different
mini-batch of data.
"""
import logging
import warnings
from .. import context as ctx
from ..initializer import Uniform
from .base_module import BaseModule, _check_input_names
from .module import Module
class BucketingModule(BaseModule):
"""This module helps to deal efficiently with varying-length inputs.
Parameters
----------
sym_gen : function
A function when called with a bucket key, returns a triple
``(symbol, data_names, label_names)``.
default_bucket_key : str (or any python object)
The key for the default bucket.
logger : Logger
context : Context or list of Context
Defaults to ``mx.cpu()``
work_load_list : list of number
Defaults to ``None``, indicating uniform workload.
fixed_param_names: list of str
Defaults to ``None``, indicating no network parameters are fixed.
state_names : list of str
States are similar to data and label, but not provided by data iterator.
Instead they are initialized to 0 and can be set by set_states()
"""
def __init__(self, sym_gen, default_bucket_key=None, logger=logging,
context=ctx.cpu(), work_load_list=None,
fixed_param_names=None, state_names=None):
super(BucketingModule, self).__init__(logger=logger)
assert default_bucket_key is not None
self._default_bucket_key = default_bucket_key
self._sym_gen = sym_gen
symbol, data_names, label_names = sym_gen(default_bucket_key)
data_names = list(data_names) if data_names is not None else []
label_names = list(label_names) if label_names is not None else []
state_names = list(state_names) if state_names is not None else []
fixed_param_names = list(fixed_param_names) if fixed_param_names is not None else []
_check_input_names(symbol, data_names, "data", True)
_check_input_names(symbol, label_names, "label", False)
_check_input_names(symbol, state_names, "state", True)
_check_input_names(symbol, fixed_param_names, "fixed_param", True)
self._fixed_param_names = fixed_param_names
self._state_names = state_names
self._context = context
self._work_load_list = work_load_list
self._buckets = {}
self._curr_module = None
self._curr_bucket_key = None
self._params_dirty = False
def _reset_bind(self):
"""Internal utility function to reset binding."""
self.binded = False
self._buckets = {}
self._curr_module = None
self._curr_bucket_key = None
@property
def data_names(self):
"""A list of names for data required by this module."""
if self.binded:
return self._curr_module.data_names
else:
_, data_names, _ = self._sym_gen(self._default_bucket_key)
return data_names
@property
def output_names(self):
"""A list of names for the outputs of this module."""
if self.binded:
return self._curr_module.output_names
else:
symbol, _, _ = self._sym_gen(self._default_bucket_key)
return symbol.list_outputs()
@property
def data_shapes(self):
"""Get data shapes.
Returns
-------
A list of `(name, shape)` pairs.
"""
assert self.binded
return self._curr_module.data_shapes
@property
def label_shapes(self):
"""Get label shapes.
Returns
-------
A list of `(name, shape)` pairs.
The return value could be ``None`` if the module does not need labels,
or if the module is not bound for training (in this case, label information
is not available).
"""
assert self.binded
return self._curr_module.label_shapes
@property
def output_shapes(self):
"""Gets output shapes.
Returns
-------
A list of `(name, shape)` pairs.
"""
assert self.binded
return self._curr_module.output_shapes
def get_params(self):
"""Gets current parameters.
Returns
-------
`(arg_params, aux_params)`
A pair of dictionaries each mapping parameter names to NDArray values.
"""
assert self.binded and self.params_initialized
self._curr_module._params_dirty = self._params_dirty
params = self._curr_module.get_params()
self._params_dirty = False
return params
def set_params(self, arg_params, aux_params, allow_missing=False, force_init=True,
allow_extra=False):
"""Assigns parameters and aux state values.
Parameters
----------
arg_params : dict
Dictionary of name to value (`NDArray`) mapping.
aux_params : dict
Dictionary of name to value (`NDArray`) mapping.
allow_missing : bool
If true, params could contain missing values, and the initializer will be
called to fill those missing params.
force_init : bool
If true, will force re-initialize even if already initialized.
allow_extra : boolean, optional
Whether allow extra parameters that are not needed by symbol.
If this is True, no error will be thrown when arg_params or aux_params
contain extra parameters that is not needed by the executor.
Examples
--------
>>> # An example of setting module parameters.
>>> sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, n_epoch_load)
>>> mod.set_params(arg_params=arg_params, aux_params=aux_params)
"""
if not allow_missing:
self.init_params(initializer=None, arg_params=arg_params, aux_params=aux_params,
allow_missing=allow_missing, force_init=force_init)
return
if self.params_initialized and not force_init:
warnings.warn("Parameters already initialized and force_init=False. "
"set_params call ignored.", stacklevel=2)
return
self._curr_module.set_params(arg_params, aux_params, allow_missing=allow_missing,
force_init=force_init, allow_extra=allow_extra)
# because we didn't update self._arg_params, they are dirty now.
self._params_dirty = True
self.params_initialized = True
def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None,
allow_missing=False, force_init=False, allow_extra=False):
"""Initializes parameters.
Parameters
----------
initializer : Initializer
arg_params : dict
Defaults to ``None``. Existing parameters. This has higher priority
than `initializer`.
aux_params : dict
Defaults to ``None``. Existing auxiliary states. This has higher priority
than `initializer`.
allow_missing : bool
Allow missing values in `arg_params` and `aux_params` (if not ``None``).
In this case, missing values will be filled with `initializer`.
force_init : bool
Defaults to ``False``.
allow_extra : boolean, optional
Whether allow extra parameters that are not needed by symbol.
If this is True, no error will be thrown when arg_params or aux_params
contain extra parameters that is not needed by the executor.
"""
if self.params_initialized and not force_init:
return
assert self.binded, 'call bind before initializing the parameters'
self._curr_module.init_params(initializer=initializer, arg_params=arg_params,
aux_params=aux_params, allow_missing=allow_missing,
force_init=force_init, allow_extra=allow_extra)
self._params_dirty = False
self.params_initialized = True
def get_states(self, merge_multi_context=True):
"""Gets states from all devices.
Parameters
----------
merge_multi_context : bool
Default is `True`. In the case when data-parallelism is used, the states
            will be collected from multiple devices. A `True` value indicates that we
            should merge the collected results so that they appear to come from a single
            executor.
Returns
-------
list of NDArrays or list of list of NDArrays
If `merge_multi_context` is ``True``, it is like ``[out1, out2]``. Otherwise, it
is like ``[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]``. All the output
elements are `NDArray`.
"""
assert self.binded and self.params_initialized
return self._curr_module.get_states(merge_multi_context=merge_multi_context)
def set_states(self, states=None, value=None):
"""Sets value for states. Only one of states & values can be specified.
Parameters
----------
states : list of list of NDArrays
Source states arrays formatted like ``[[state1_dev1, state1_dev2],
[state2_dev1, state2_dev2]]``.
value : number
A single scalar value for all state arrays.
"""
assert self.binded and self.params_initialized
self._curr_module.set_states(states, value)
def bind(self, data_shapes, label_shapes=None, for_training=True,
inputs_need_grad=False, force_rebind=False, shared_module=None,
grad_req='write'):
"""Binding for a `BucketingModule` means setting up the buckets and binding the
executor for the default bucket key. Executors corresponding to other keys are
bound afterwards with `switch_bucket`.
Parameters
----------
data_shapes : list of (str, tuple)
This should correspond to the symbol for the default bucket.
label_shapes : list of (str, tuple)
This should correspond to the symbol for the default bucket.
for_training : bool
Default is ``True``.
inputs_need_grad : bool
Default is ``False``.
force_rebind : bool
Default is ``False``.
shared_module : BucketingModule
Default is ``None``. This value is currently not used.
grad_req : str, list of str, dict of str to str
Requirement for gradient accumulation. Can be 'write', 'add', or 'null'
(default to 'write').
Can be specified globally (str) or for each argument (list, dict).
"""
# in case we already initialized params, keep it
if self.params_initialized:
arg_params, aux_params = self.get_params()
        # force rebinding is typically used when one wants to switch from
        # the training to the prediction phase.
if force_rebind:
self._reset_bind()
if self.binded:
self.logger.warning('Already bound, ignoring bind()')
return
assert shared_module is None, 'shared_module for BucketingModule is not supported'
self.for_training = for_training
self.inputs_need_grad = inputs_need_grad
self.binded = True
symbol, data_names, label_names = self._sym_gen(self._default_bucket_key)
module = Module(symbol, data_names, label_names, logger=self.logger,
context=self._context, work_load_list=self._work_load_list,
fixed_param_names=self._fixed_param_names,
state_names=self._state_names)
module.bind(data_shapes, label_shapes, for_training, inputs_need_grad,
force_rebind=False, shared_module=None, grad_req=grad_req)
self._curr_module = module
self._curr_bucket_key = self._default_bucket_key
self._buckets[self._default_bucket_key] = module
# copy back saved params, if already initialized
if self.params_initialized:
self.set_params(arg_params, aux_params)
def switch_bucket(self, bucket_key, data_shapes, label_shapes=None):
"""Switches to a different bucket. This will change ``self.curr_module``.
Parameters
----------
bucket_key : str (or any python object)
The key of the target bucket.
data_shapes : list of (str, tuple)
Typically ``data_batch.provide_data``.
label_shapes : list of (str, tuple)
Typically ``data_batch.provide_label``.
"""
assert self.binded, 'call bind before switching bucket'
        if bucket_key not in self._buckets:
symbol, data_names, label_names = self._sym_gen(bucket_key)
module = Module(symbol, data_names, label_names,
logger=self.logger, context=self._context,
work_load_list=self._work_load_list,
fixed_param_names=self._fixed_param_names,
state_names=self._state_names)
module.bind(data_shapes, label_shapes, self._curr_module.for_training,
self._curr_module.inputs_need_grad,
force_rebind=False, shared_module=self._buckets[self._default_bucket_key])
self._buckets[bucket_key] = module
self._curr_module = self._buckets[bucket_key]
self._curr_bucket_key = bucket_key
def init_optimizer(self, kvstore='local', optimizer='sgd',
optimizer_params=(('learning_rate', 0.01),),
force_init=False):
"""Installs and initializes optimizers.
Parameters
----------
kvstore : str or KVStore
Defaults to `'local'`.
optimizer : str or Optimizer
Defaults to `'sgd'`
optimizer_params : dict
Defaults to `(('learning_rate', 0.01),)`. The default value is not a dictionary,
            just to avoid a pylint warning about dangerous default values.
force_init : bool
Defaults to ``False``, indicating whether we should force re-initializing the
optimizer in the case an optimizer is already installed.
"""
assert self.binded and self.params_initialized
if self.optimizer_initialized and not force_init:
self.logger.warning('optimizer already initialized, ignoring.')
return
self._curr_module.init_optimizer(kvstore, optimizer, optimizer_params,
force_init=force_init)
for mod in self._buckets.values():
if mod is not self._curr_module:
mod.borrow_optimizer(self._curr_module)
self.optimizer_initialized = True
def prepare(self, data_batch):
"""Prepares a data batch for forward.
Parameters
----------
data_batch : DataBatch
"""
        # the module must already be bound; switch buckets so that an
        # executor for this batch's bucket exists before forward is called
assert self.binded and self.params_initialized
bucket_key = data_batch.bucket_key
original_bucket_key = self._curr_bucket_key
data_shapes = data_batch.provide_data
label_shapes = data_batch.provide_label
self.switch_bucket(bucket_key, data_shapes, label_shapes)
# switch back
self.switch_bucket(original_bucket_key, None, None)
def forward(self, data_batch, is_train=None):
"""Forward computation.
Parameters
----------
data_batch : DataBatch
is_train : bool
            Defaults to ``None``, in which case `is_train` is taken as ``self.for_training``.
"""
assert self.binded and self.params_initialized
self.switch_bucket(data_batch.bucket_key, data_batch.provide_data,
data_batch.provide_label)
self._curr_module.forward(data_batch, is_train=is_train)
def backward(self, out_grads=None):
"""Backward computation."""
assert self.binded and self.params_initialized
self._curr_module.backward(out_grads=out_grads)
def update(self):
"""Updates parameters according to installed optimizer and the gradient computed
in the previous forward-backward cycle.
"""
assert self.binded and self.params_initialized and self.optimizer_initialized
self._params_dirty = True
self._curr_module.update()
def get_outputs(self, merge_multi_context=True):
"""Gets outputs from a previous forward computation.
Parameters
----------
merge_multi_context : bool
Defaults to ``True``. When data-parallelism is used, the outputs
will be collected from multiple devices. A ``True`` value indicates that we
should merge the collected results so that they look as if they came from a
single executor.
Returns
-------
list of numpy arrays or list of list of numpy arrays
If `merge_multi_context` is ``True``, it is like ``[out1, out2]``. Otherwise, it
is like ``[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]``. All the output
elements are numpy arrays.
"""
assert self.binded and self.params_initialized
return self._curr_module.get_outputs(merge_multi_context=merge_multi_context)
def get_input_grads(self, merge_multi_context=True):
"""Gets the gradients with respect to the inputs of the module.
Parameters
----------
merge_multi_context : bool
Defaults to ``True``. When data-parallelism is used, the gradients
will be collected from multiple devices. A ``True`` value indicates that we
should merge the collected results so that they look as if they came from a
single executor.
Returns
-------
list of NDArrays or list of list of NDArrays
If `merge_multi_context` is ``True``, it is like ``[grad1, grad2]``. Otherwise, it
is like ``[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]``. All the output
elements are `NDArray`.
"""
assert self.binded and self.params_initialized and self.inputs_need_grad
return self._curr_module.get_input_grads(merge_multi_context=merge_multi_context)
def update_metric(self, eval_metric, labels):
"""Evaluates and accumulates evaluation metric on outputs of the last forward computation.
Parameters
----------
eval_metric : EvalMetric
labels : list of NDArray
Typically ``data_batch.label``.
"""
assert self.binded and self.params_initialized
self._curr_module.update_metric(eval_metric, labels)
@property
def symbol(self):
"""The symbol of the current bucket being used."""
assert self.binded
return self._curr_module.symbol
def install_monitor(self, mon):
"""Installs monitor on all executors """
assert self.binded
for mod in self._buckets.values():
mod.install_monitor(mon)
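# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the class above): the per-batch bucketing
# flow that ``prepare``/``forward`` drive.  ``train_iter`` is assumed to be an
# MXNet-style DataIter whose batches expose ``bucket_key``, ``provide_data``
# and ``provide_label``; all names here are hypothetical.
def _example_bucketing_step(bucket_module, train_iter, eval_metric):
    """Minimal fit-like loop over a BucketingModule-style object."""
    for data_batch in train_iter:
        # forward() internally calls switch_bucket() with this batch's key,
        # binding a new Module on first use and sharing memory afterwards.
        bucket_module.forward(data_batch, is_train=True)
        bucket_module.backward()
        bucket_module.update()
        bucket_module.update_metric(eval_metric, data_batch.label)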
|
|
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
libcloud driver for the Blue Box Blocks API
This driver implements all libcloud functionality for the Blue Box Blocks API.
Blue Box home page http://bluebox.net
Blue Box API documentation https://boxpanel.bluebox.net/public/the_vault/index.php/Blocks_API
"""
import copy
import base64
from libcloud.utils.py3 import urlencode
from libcloud.utils.py3 import b
from libcloud.common.base import JsonResponse, ConnectionUserAndKey
from libcloud.compute.providers import Provider
from libcloud.compute.types import NodeState, InvalidCredsError
from libcloud.compute.base import Node, NodeDriver
from libcloud.compute.base import NodeSize, NodeImage, NodeLocation
from libcloud.compute.base import NodeAuthPassword, NodeAuthSSHKey
# Current end point for Blue Box API.
BLUEBOX_API_HOST = "boxpanel.bluebox.net"
# The API doesn't currently expose all of the required values for libcloud,
# so we simply list what's available right now, along with all of the various
# attributes that are needed by libcloud.
BLUEBOX_INSTANCE_TYPES = {
'1gb': {
'id': '94fd37a7-2606-47f7-84d5-9000deda52ae',
'name': 'Block 1GB Virtual Server',
'ram': 1024,
'disk': 20,
'cpu': 0.5
},
'2gb': {
'id': 'b412f354-5056-4bf0-a42f-6ddd998aa092',
'name': 'Block 2GB Virtual Server',
'ram': 2048,
'disk': 25,
'cpu': 1
},
'4gb': {
'id': '0cd183d3-0287-4b1a-8288-b3ea8302ed58',
'name': 'Block 4GB Virtual Server',
'ram': 4096,
'disk': 50,
'cpu': 2
},
'8gb': {
'id': 'b9b87a5b-2885-4a2e-b434-44a163ca6251',
'name': 'Block 8GB Virtual Server',
'ram': 8192,
'disk': 100,
'cpu': 4
}
}
RAM_PER_CPU = 2048
NODE_STATE_MAP = {'queued': NodeState.PENDING,
'building': NodeState.PENDING,
'running': NodeState.RUNNING,
'error': NodeState.TERMINATED,
'unknown': NodeState.UNKNOWN}
class BlueboxResponse(JsonResponse):
def parse_error(self):
if int(self.status) == 401:
if not self.body:
raise InvalidCredsError(str(self.status) + ': ' + self.error)
else:
raise InvalidCredsError(self.body)
return self.body
class BlueboxNodeSize(NodeSize):
def __init__(self, id, name, cpu, ram, disk, price, driver):
self.id = id
self.name = name
self.cpu = cpu
self.ram = ram
self.disk = disk
self.price = price
self.driver = driver
def __repr__(self):
return ((
'<NodeSize: id=%s, name=%s, cpu=%s, ram=%s, disk=%s, '
'price=%s, driver=%s ...>')
% (self.id, self.name, self.cpu, self.ram, self.disk,
self.price, self.driver.name))
class BlueboxConnection(ConnectionUserAndKey):
"""
Connection class for the Bluebox driver
"""
host = BLUEBOX_API_HOST
secure = True
responseCls = BlueboxResponse
allow_insecure = False
def add_default_headers(self, headers):
user_b64 = base64.b64encode(b('%s:%s' % (self.user_id, self.key)))
headers['Authorization'] = 'Basic %s' % (user_b64)
return headers
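# The header above is standard HTTP Basic auth.  With hypothetical
# credentials, base64.b64encode(b'customer_id:api_key') yields
# b'Y3VzdG9tZXJfaWQ6YXBpX2tleQ==', i.e. the request carries:
#     Authorization: Basic Y3VzdG9tZXJfaWQ6YXBpX2tleQ==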
class BlueboxNodeDriver(NodeDriver):
"""
Bluebox Blocks node driver
"""
connectionCls = BlueboxConnection
type = Provider.BLUEBOX
api_name = 'bluebox'
name = 'Bluebox Blocks'
website = 'http://bluebox.net'
features = {'create_node': ['ssh_key', 'password']}
def list_nodes(self):
result = self.connection.request('/api/blocks.json')
return [self._to_node(i) for i in result.object]
def list_sizes(self, location=None):
sizes = []
for key, values in list(BLUEBOX_INSTANCE_TYPES.items()):
attributes = copy.deepcopy(values)
attributes.update({'price': self._get_size_price(size_id=key)})
sizes.append(BlueboxNodeSize(driver=self.connection.driver,
**attributes))
return sizes
def list_images(self, location=None):
result = self.connection.request('/api/block_templates.json')
images = []
for image in result.object:
images.append(self._to_image(image))
return images
def create_node(self, **kwargs):
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
size = kwargs["size"]
name = kwargs['name']
image = kwargs['image']
size = kwargs['size']
auth = self._get_and_check_auth(kwargs.get('auth'))
data = {
'hostname': name,
'product': size.id,
'template': image.id
}
ssh = None
password = None
if isinstance(auth, NodeAuthSSHKey):
ssh = auth.pubkey
data.update(ssh_public_key=ssh)
elif isinstance(auth, NodeAuthPassword):
password = auth.password
data.update(password=password)
if "ex_username" in kwargs:
data.update(username=kwargs["ex_username"])
if not ssh and not password:
raise Exception("SSH public key or password required.")
params = urlencode(data)
result = self.connection.request('/api/blocks.json', headers=headers,
data=params, method='POST')
node = self._to_node(result.object)
if getattr(auth, "generated", False):
node.extra['password'] = auth.password
return node
def destroy_node(self, node):
url = '/api/blocks/%s.json' % (node.id)
result = self.connection.request(url, method='DELETE')
return result.status == 200
def list_locations(self):
return [NodeLocation(0, "Blue Box Seattle US", 'US', self)]
def reboot_node(self, node):
url = '/api/blocks/%s/reboot.json' % (node.id)
result = self.connection.request(url, method="PUT")
return result.status == 200
def _to_node(self, vm):
state = NODE_STATE_MAP.get(vm.get('status'), NodeState.UNKNOWN)
n = Node(id=vm['id'],
name=vm['hostname'],
state=state,
public_ips=[ip['address'] for ip in vm['ips']],
private_ips=[],
extra={'storage': vm['storage'], 'cpu': vm['cpu']},
driver=self.connection.driver)
return n
def _to_image(self, image):
image = NodeImage(id=image['id'],
name=image['description'],
driver=self.connection.driver)
return image
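# ---------------------------------------------------------------------------
# Usage sketch (illustrative only; the customer id, API key, node name and
# password are hypothetical placeholders): listing sizes and images, then
# creating a node with password auth through the driver defined above.
def _example_create_node():
    """Minimal end-to-end sketch for BlueboxNodeDriver."""
    driver = BlueboxNodeDriver('customer_id', 'api_key')
    size = driver.list_sizes()[0]
    image = driver.list_images()[0]
    return driver.create_node(name='example-host', size=size, image=image,
                              auth=NodeAuthPassword('s3cr3t'))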
|
|
"""Consulate CLI commands"""
# pragma: no cover
import argparse
import base64
import json
import sys
import os
try:
import urlparse
except ImportError:
import urllib.parse as urlparse
from requests import exceptions
import consulate
from consulate import adapters
from consulate import utils
CONSUL_ENV_VAR = 'CONSUL_RPC_ADDR'
EPILOG = ('If the CONSUL_RPC_ADDR environment variable is set, it will be '
'parsed and used for default values when connecting.')
def on_error(message, exit_code=2):
"""Write out the specified message to stderr and exit the specified
exit code, defaulting to ``2``.
:param str message: The exit message
:param int exit_code: The numeric exit code
"""
sys.stderr.write(message + "\n")
sys.exit(exit_code)
def connection_error():
"""Common exit routine when consulate can't connect to Consul"""
on_error('Could not connect to consul', 1)
KV_PARSERS = [
('backup', 'Backup to stdout or a JSON file', [
[['-b', '--base64'], {'help': 'Base64 encode values',
'action': 'store_true'}],
[['-f', '--file'],
{'help': 'JSON file to read instead of stdin',
'nargs': '?'}]]),
('restore', 'Restore from stdin or a JSON file', [
[['-b', '--base64'], {'help': 'Restore from Base64 encode values',
'action': 'store_true'}],
[['-f', '--file'],
{'help': 'JSON file to read instead of stdin',
'nargs': '?'}],
[['-n', '--no-replace'],
{'help': 'Do not replace existing entries',
'action': 'store_true'}]]),
('ls', 'List all of the keys', [
[['-l', '--long'],
{'help': 'Long format',
'action': 'store_true'}]]),
('mkdir', 'Create a folder', [
[['path'],
{'help': 'The path to create'}]]),
('get', 'Get a key from the database', [
[['key'], {'help': 'The key to get'}],
[['-r', '--recurse'],
{'help': 'Get all keys prefixed with the specified key',
'action': 'store_true'}],
[['-t', '--trim'],
{'help': 'Number of levels of prefix to trim from returned key',
'type': int,
'default': 0}]]),
('set', 'Set a key in the database', [
[['key'], {'help': 'The key to set'}],
[['value'], {'help': 'The value of the key'}]]),
('rm', 'Remove a key from the database', [
[['key'], {'help': 'The key to remove'}],
[['-r', '--recurse'],
{'help': 'Delete all keys prefixed with the specified key',
'action': 'store_true'}]])]
def add_kv_args(parser):
"""Add the kv command and arguments.
:param argparse.Subparser parser: parser
"""
kv_parser = parser.add_parser('kv', help='Key/Value Database Utilities')
subparsers = kv_parser.add_subparsers(dest='action',
title='Key/Value Database Utilities')
for (name, help_text, arguments) in KV_PARSERS:
parser = subparsers.add_parser(name, help=help_text)
for (args, kwargs) in arguments:
parser.add_argument(*args, **kwargs)
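# Sketch of the resulting CLI surface (sample invocation only): the
# table-driven KV_PARSERS above expands into ``kv`` subcommands.
def _example_parse_kv_command():
    """Parse a sample ``kv get`` command line with the parser built above."""
    parser = argparse.ArgumentParser()
    add_kv_args(parser.add_subparsers(dest='command'))
    # Equivalent to: consulate kv get my/key --recurse
    return parser.parse_args(['kv', 'get', 'my/key', '--recurse'])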
def add_register_args(parser):
"""Add the register command and arguments.
:param argparse.Subparser parser: parser
"""
# Service registration
registerp = parser.add_parser('register',
help='Register a service for this node')
registerp.add_argument('name', help='The service name')
registerp.add_argument('-a', '--address', default=None,
help='Specify an address')
registerp.add_argument('-p', '--port', default=None, type=int,
help='Specify a port')
registerp.add_argument('-s', '--service-id', default=None,
help='Specify a service ID')
registerp.add_argument('-t', '--tags', default=[],
help='Specify a comma delimited list of tags')
rsparsers = registerp.add_subparsers(dest='ctype',
title='Service Check Options')
check = rsparsers.add_parser('check',
help='Define an external script-based check')
check.add_argument('interval', default=10, type=int,
help='How often to run the check script')
check.add_argument('path', default=None,
help='Path to the script invoked by Consul')
httpcheck = rsparsers.add_parser('httpcheck',
help='Define an HTTP-based check')
httpcheck.add_argument('interval', default=10, type=int,
help='How often to run the check script')
httpcheck.add_argument('url', default=None,
help='HTTP URL to be polled by Consul')
rsparsers.add_parser('no-check', help='Do not enable service monitoring')
ttl = rsparsers.add_parser('ttl', help='Define a duration based TTL check')
ttl.add_argument('duration', type=int, default=10,
help='TTL duration for a service with missing check data')
def add_run_once_args(parser):
"""Add the run_once command and arguments.
:param argparse.Subparser parser: parser
"""
run_oncep = parser.add_parser('run_once',
help='Run a command locked to a single '
'execution')
run_oncep.add_argument('lock',
help='The name of the lock which will be '
'held in Consul.')
run_oncep.add_argument('command', nargs=argparse.REMAINDER,
help='The command to lock')
run_oncep.add_argument('-i', '--interval', default=None,
help='Hold the lock for X seconds')
def add_deregister_args(parser):
"""Add the deregister command and arguments.
:param argparse.Subparser parser: parser
"""
# Service registration
registerp = parser.add_parser('deregister',
help='Deregister a service for this node')
registerp.add_argument('service_id', help='The service registration id')
def parse_cli_args():
"""Create the argument parser and add the arguments"""
parser = argparse.ArgumentParser(description='CLI utilities for Consul',
epilog=EPILOG)
env_var = os.environ.get(CONSUL_ENV_VAR, '')
parsed_defaults = urlparse.urlparse(env_var)
parser.add_argument('--api-scheme',
default=parsed_defaults.scheme or 'http',
help='The scheme to use for connecting to Consul with')
parser.add_argument('--api-host',
default=parsed_defaults.hostname or 'localhost',
help='The consul host to connect on')
parser.add_argument('--api-port',
default=parsed_defaults.port or 8500,
help='The consul API port to connect to')
parser.add_argument('--datacenter',
dest='dc',
default=None,
help='The datacenter to specify for the connection')
parser.add_argument('--token', default=None, help='ACL token')
sparser = parser.add_subparsers(title='Commands', dest='command')
add_register_args(sparser)
add_deregister_args(sparser)
add_kv_args(sparser)
add_run_once_args(sparser)
return parser.parse_args()
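# For example (hypothetical address), CONSUL_RPC_ADDR=https://consul.example.com:8501
# is split by urlparse into the per-option defaults used above:
def _example_env_defaults():
    """Show how a CONSUL_RPC_ADDR value maps onto the --api-* defaults."""
    parsed = urlparse.urlparse('https://consul.example.com:8501')
    # ('https', 'consul.example.com', 8501)
    return parsed.scheme, parsed.hostname, parsed.port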
def kv_backup(consul, args):
"""Backup the Consul KV database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
handle = open(args.file, 'w') if args.file else sys.stdout
records = consul.kv.records()
if args.base64:
if utils.PYTHON3:
records = [(k, f, str(base64.b64encode(utils.maybe_encode(v)),
'ascii'))
for k, f, v in records]
else:
records = [(k, f, base64.b64encode(v) if v else v) for k, f, v in records]
try:
handle.write(json.dumps(records) + '\n')
except exceptions.ConnectionError:
connection_error()
def kv_delete(consul, args):
"""Remove a key from the Consulate database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
del consul.kv[args.key]
except exceptions.ConnectionError:
connection_error()
def kv_get(consul, args):
"""Get the value of a key from the Consul database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
if args.recurse:
for key in sorted(consul.kv.find(args.key)):
displaykey = key
if args.trim:
keyparts = displaykey.split('/')
if (args.trim >= len(keyparts)):
displaykey = keyparts[-1]
else:
displaykey = '/'.join(keyparts[args.trim:])
sys.stdout.write('%s\t%s\n' % (displaykey, consul.kv.get(key)))
else:
sys.stdout.write('%s\n' % consul.kv.get(args.key))
except exceptions.ConnectionError:
connection_error()
def kv_ls(consul, args):
"""List out the keys from the Consul KV database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
for key in consul.kv.keys():
if args.long:
keylen = 0
if consul.kv[key]:
keylen = len(consul.kv[key])
print('{0:>14} {1}'.format(keylen, key))
else:
print(key)
except exceptions.ConnectionError:
connection_error()
def kv_mkdir(consul, args):
"""Make a key based path/directory in the KV database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
if not args.path.endswith('/'):
args.path += '/'
try:
consul.kv.set(args.path, None)
except exceptions.ConnectionError:
connection_error()
def kv_restore(consul, args):
"""Restore the Consul KV store
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
handle = open(args.file, 'r') if args.file else sys.stdin
data = json.load(handle)
for row in data:
if isinstance(row, dict):
# translate raw api export to internal representation
if row['Value'] is not None:
row['Value'] = base64.b64decode(row['Value'])
row = [row['Key'], row['Flags'], row['Value']]
if args.base64:
row[2] = base64.b64decode(row[2])
# Here's an awesome thing to make things work
if not utils.PYTHON3 and isinstance(row[2], unicode):
row[2] = row[2].encode('utf-8')
try:
consul.kv.set_record(row[0], row[1], row[2], not args.no_replace)
except exceptions.ConnectionError:
connection_error()
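# kv_restore accepts either the internal triplet form written by kv_backup,
#     [["my/key", 0, "value"], ...]
# or the raw Consul API export form (hypothetical sample data),
#     [{"Key": "my/key", "Flags": 0, "Value": "dmFsdWU="}, ...]
# where ``Value`` is base64 encoded.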
def kv_rm(consul, args):
"""Remove a key from the Consulate database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
consul.kv.delete(args.key, args.recurse)
except exceptions.ConnectionError:
connection_error()
def kv_set(consul, args):
"""Set a value of a key int the Consul database
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
consul.kv[args.key] = args.value
except exceptions.ConnectionError:
connection_error()
# Mapping dict to simplify the code in main()
KV_ACTIONS = {
'backup': kv_backup,
'del': kv_delete,
'get': kv_get,
'ls': kv_ls,
'mkdir': kv_mkdir,
'restore': kv_restore,
'rm': kv_rm,
'set': kv_set}
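# Note: 'del' has no corresponding sub-parser in KV_PARSERS above; the CLI
# exposes the same behaviour through 'rm'.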
def register(consul, args):
"""Handle service registration.
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
check = args.path if args.ctype == 'check' else None
httpcheck = args.url if args.ctype == 'httpcheck' else None
interval = '%ss' % args.interval if args.ctype in ['check',
'httpcheck'] else None
ttl = '%ss' % args.duration if args.ctype == 'ttl' else None
tags = args.tags.split(',') if args.tags else None
try:
consul.agent.service.register(args.name, args.service_id, args.address,
args.port, tags, check, interval,
ttl, httpcheck)
except exceptions.ConnectionError:
connection_error()
def deregister(consul, args):
"""Handle service deregistration.
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
consul.agent.service.deregister(args.service_id)
except exceptions.ConnectionError:
connection_error()
def run_once(consul, args):
"""Ensure only one process can run a command at a time
:param consulate.api_old.Consul consul: The Consul instance
:param argparse.Namespace args: The cli args
"""
try:
import time
import subprocess
session = consul.session.create()
if not consul.kv.acquire_lock(args.lock, session):
on_error('Cannot obtain the required lock. Exiting')
if args.interval:
now = int(time.time())
last_run = consul.kv.get("{0}_last_run".format(args.lock))
if str(last_run) not in ['null', 'None'] and \
int(last_run) + int(args.interval) > now:
sys.stdout.write('Last run happened fewer than {0} '
'seconds ago. Exiting\n'.format(args.interval))
consul.kv.release_lock(args.lock, session)
consul.session.destroy(session)
return
consul.kv["{0}_last_run".format(args.prefix)] = now
consul.kv.release_lock(args.prefix, session)
consul.session.destroy(session)
# Should the subprocess return an error code, release the lock
try:
subprocess.check_output(args.command, stderr=subprocess.STDOUT)
# If the subprocess fails
except subprocess.CalledProcessError as e:
on_error('"{0}" exited with return code "{1}" '
'and output {2}'.format(args.operation,
e.returncode,
e.output), 1)
# If the command doesn't exist
except OSError as e:
on_error('"{0}" command does not exist\n'.format(args.operation), 1)
# Otherwise
except Exception as e:
on_error('"{0}" exited with error "{1}"'.format(args.operation, e),
1)
except exceptions.ConnectionError:
connection_error()
def main():
"""Entrypoint for the consulate cli application"""
args = parse_cli_args()
if args.api_scheme == 'http+unix':
adapter = adapters.UnixSocketRequest
port = None
api_host = os.environ.get('CONSUL_HTTP_ADDR', '').replace('unix://', '')
if args.api_host:
api_host = args.api_host
else:
adapter = None
port = args.api_port
api_host = 'localhost'
if args.api_host:
api_host = args.api_host
consul = consulate.Consul(api_host, port, args.dc,
args.token, args.api_scheme, adapter)
if args.command == 'register':
register(consul, args)
elif args.command == 'deregister':
deregister(consul, args)
elif args.command == 'kv':
KV_ACTIONS[args.action](consul, args)
elif args.command == 'run_once':
run_once(consul, args)
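# Hypothetical direct-execution guard; the installed console script normally
# invokes main() through the package entry point instead.
if __name__ == '__main__':  # pragma: no cover
    main()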
|
|
# Copyright (c) 2014 NetApp, Inc.
# Copyright (c) 2015 Alex Meade. All Rights Reserved.
# Copyright (c) 2015 Rushil Chugh. All Rights Reserved.
# Copyright (c) 2015 Navneet Singh. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for NetApp e-series iscsi volume driver."""
import copy
import json
import re
import socket
import mock
import requests
from cinder import exception
from cinder import test
from cinder.tests.unit.volume.drivers.netapp.eseries import fakes
from cinder.volume import configuration as conf
from cinder.volume.drivers.netapp import common
from cinder.volume.drivers.netapp.eseries import client
from cinder.volume.drivers.netapp.eseries import library
from cinder.volume.drivers.netapp.eseries import utils
from cinder.volume.drivers.netapp import options
import cinder.volume.drivers.netapp.utils as na_utils
def create_configuration():
configuration = conf.Configuration(None)
configuration.append_config_values(options.netapp_basicauth_opts)
configuration.append_config_values(options.netapp_eseries_opts)
configuration.append_config_values(options.netapp_san_opts)
return configuration
class FakeEseriesResponse(object):
"""Fake response to requests."""
def __init__(self, code=None, text=None):
self.status_code = code
self.text = text
def json(self):
return json.loads(self.text)
class FakeEseriesServerHandler(object):
"""HTTP handler that fakes enough stuff to allow the driver to run."""
def do_GET(self, path, params, data, headers):
"""Respond to a GET request."""
response = FakeEseriesResponse()
if "/devmgr/vn" not in path:
response.status_code = 404
(__, ___, path) = path.partition("/devmgr/vn")
if re.match("^/storage-systems/[0-9a-zA-Z]+/volumes$", path):
response.status_code = 200
response.text = """[{"extremeProtection": false,
"pitBaseVolume": false,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "2126008832", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023C73400000AAA52D11677",
"listOfMappings": [], "sectorOffset": "6",
"id": "0200000060080E500023C73400000AAA52D11677",
"wwn": "60080E500023C73400000AAA52D11677",
"capacity": "2126008832", "mgmtClientAttribute": 0,
"label": "repos_0006", "volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000002",
"currentManager": "070000000000000000000002",
"applicationTagOwned": true, "status": "optimal",
"segmentSize": 131072, "volumeUse":
"freeRepositoryVolume", "action": "none",
"name": "repos_0006", "worldWideName":
"60080E500023C73400000AAA52D11677", "currentControllerId"
: "070000000000000000000002",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType": "type0Protection"}
,
{"extremeProtection": false, "pitBaseVolume": true,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "2147483648", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023BB3400001FC352D14CB2",
"listOfMappings": [], "sectorOffset": "15",
"id": "0200000060080E500023BB3400001FC352D14CB2",
"wwn": "60080E500023BB3400001FC352D14CB2",
"capacity": "2147483648", "mgmtClientAttribute": 0,
"label": "bdm-vc-test-1", "volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000001",
"currentManager": "070000000000000000000001",
"applicationTagOwned": false, "status": "optimal",
"segmentSize": 131072, "volumeUse": "standardVolume",
"action": "none", "preferredManager":
"070000000000000000000001", "volumeHandle": 15,
"offline": false, "preReadRedundancyCheckEnabled": false,
"dssPreallocEnabled": false, "name": "bdm-vc-test-1",
"worldWideName": "60080E500023BB3400001FC352D14CB2",
"currentControllerId": "070000000000000000000001",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType":
"type1Protection"},
{"extremeProtection": false, "pitBaseVolume": true,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "1073741824", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"listOfMappings": [{
"lunMappingRef":"8800000000000000000000000000000000000000",
"lun": 0,
"ssid": 16384,
"perms": 15,
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"type": "all",
"mapRef": "8400000060080E500023C73400300381515BFBA3"
}], "sectorOffset": "15",
"id": "0200000060080E500023BB34000003FB515C2293",
"wwn": "60080E500023BB3400001FC352D14CB2",
"capacity": "2147483648", "mgmtClientAttribute": 0,
"label": "CFDXJ67BLJH25DXCZFZD4NSF54",
"volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000001",
"currentManager": "070000000000000000000001",
"applicationTagOwned": false, "status": "optimal",
"segmentSize": 131072, "volumeUse": "standardVolume",
"action": "none", "preferredManager":
"070000000000000000000001", "volumeHandle": 15,
"offline": false, "preReadRedundancyCheckEnabled": false,
"dssPreallocEnabled": false, "name": "bdm-vc-test-1",
"worldWideName": "60080E500023BB3400001FC352D14CB2",
"currentControllerId": "070000000000000000000001",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType":
"type1Protection"}]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volumes/[0-9A-Za-z]+$",
path):
response.status_code = 200
response.text = """{"extremeProtection": false,
"pitBaseVolume": true,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "2147483648", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023BB3400001FC352D14CB2",
"listOfMappings": [], "sectorOffset": "15",
"id": "0200000060080E500023BB3400001FC352D14CB2",
"wwn": "60080E500023BB3400001FC352D14CB2",
"capacity": "2147483648", "mgmtClientAttribute": 0,
"label": "bdm-vc-test-1", "volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000001",
"currentManager": "070000000000000000000001",
"applicationTagOwned": false, "status": "optimal",
"segmentSize": 131072, "volumeUse": "standardVolume",
"action": "none", "preferredManager":
"070000000000000000000001", "volumeHandle": 15,
"offline": false, "preReadRedundancyCheckEnabled": false,
"dssPreallocEnabled": false, "name": "bdm-vc-test-1",
"worldWideName": "60080E500023BB3400001FC352D14CB2",
"currentControllerId": "070000000000000000000001",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType":
"type1Protection"}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/hardware-inventory$",
path):
response.status_code = 200
response.text = """
{"iscsiPorts": [{"controllerId":
"070000000000000000000002", "ipv4Enabled": true,
"ipv4Data": {"ipv4Address":
"0.0.0.0", "ipv4AddressConfigMethod": "configStatic",
"ipv4VlanId": {"isEnabled": false, "value": 0},
"ipv4AddressData": {"ipv4Address": "172.20.123.66",
"ipv4SubnetMask": "255.255.255.0", "configState":
"configured", "ipv4GatewayAddress": "0.0.0.0"}},
"tcpListenPort": 3260,
"interfaceRef": "2202040000000000000000000000000000000000"
,"iqn":
"iqn.1992-01.com.lsi:2365.60080e500023c73400000000515af323"
}]}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/hosts$", path):
response.status_code = 200
response.text = """[{"isSAControlled": false,
"confirmLUNMappingCreation"
: false, "label": "stlrx300s7-55", "isLargeBlockFormatHost":
false, "clusterRef": "8500000060080E500023C7340036035F515B78FC",
"protectionInformationCapableAccessMethod": false,
"ports": [], "hostRef":
"8400000060080E500023C73400300381515BFBA3", "hostTypeIndex": 6,
"hostSidePorts": [{"label": "NewStore", "type": "iscsi",
"address": "iqn.1998-01.com.vmware:localhost-28a58148"}]}]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/host-types$", path):
response.status_code = 200
response.text = """[{
"id" : "4",
"code" : "AIX",
"name" : "AIX",
"index" : 4
}, {
"id" : "5",
"code" : "IRX",
"name" : "IRX",
"index" : 5
}, {
"id" : "6",
"code" : "LnxALUA",
"name" : "LnxALUA",
"index" : 6
}]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-groups$", path):
response.status_code = 200
response.text = """[]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-images$", path):
response.status_code = 200
response.text = """[]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/storage-pools$", path):
response.status_code = 200
response.text = """[ {"protectionInformationCapabilities":
{"protectionInformationCapable": true, "protectionType":
"type2Protection"}, "raidLevel": "raidDiskPool", "reserved1":
"000000000000000000000000", "reserved2": "", "isInaccessible":
false, "label": "DDP", "state": "complete", "usage":
"standard", "offline": false, "drawerLossProtection": false,
"trayLossProtection": false, "securityType": "capable",
"volumeGroupRef": "0400000060080E500023BB3400001F9F52CECC3F",
"driveBlockFormat": "__UNDEFINED", "usedSpace": "81604378624",
"volumeGroupData": {"type": "diskPool", "diskPoolData":
{"criticalReconstructPriority": "highest",
"poolUtilizationState": "utilizationOptimal",
"reconstructionReservedDriveCountCurrent": 3, "allocGranularity":
"4294967296", "degradedReconstructPriority": "high",
"backgroundOperationPriority": "low",
"reconstructionReservedAmt": "897111293952", "unusableCapacity":
"0", "reconstructionReservedDriveCount": 1,
"poolUtilizationWarningThreshold": 50,
"poolUtilizationCriticalThreshold": 85}}, "spindleSpeed": 10000,
"worldWideName": "60080E500023BB3400001F9F52CECC3F",
"spindleSpeedMatch": true, "totalRaidedSpace": "17273253317836",
"sequenceNum": 2, "protectionInformationCapable": false}]"""
elif re.match("^/storage-systems$", path):
response.status_code = 200
response.text = """[ {"freePoolSpace": 11142431623168,
"driveCount": 24,
"hostSparesUsed": 0, "id":
"1fa6efb5-f07b-4de4-9f0e-52e5f7ff5d1b",
"hotSpareSizeAsString": "0", "wwn":
"60080E500023C73400000000515AF323", "parameters":
{"minVolSize": 1048576, "maxSnapshotsPerBase": 16,
"maxDrives": 192, "maxVolumes": 512, "maxVolumesPerGroup":
256, "maxMirrors": 0, "maxMappingsPerVolume": 1,
"maxMappableLuns": 256, "maxVolCopys": 511,
"maxSnapshots":
256}, "hotSpareCount": 0, "hostSpareCountInStandby": 0,
"status": "needsattn", "trayCount": 1,
"usedPoolSpaceAsString": "5313000380416",
"ip2": "10.63.165.216", "ip1": "10.63.165.215",
"freePoolSpaceAsString": "11142431623168",
"types": "SAS",
"name": "stle2600-7_8", "hotSpareSize": 0,
"usedPoolSpace":
5313000380416, "driveTypes": ["sas"],
"unconfiguredSpaceByDriveType": {},
"unconfiguredSpaceAsStrings": "0", "model": "2650",
"unconfiguredSpace": 0}]"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+$", path):
response.status_code = 200
response.text = """{"freePoolSpace": 11142431623168,
"driveCount": 24,
"hostSparesUsed": 0, "id":
"1fa6efb5-f07b-4de4-9f0e-52e5f7ff5d1b",
"hotSpareSizeAsString": "0", "wwn":
"60080E500023C73400000000515AF323", "parameters":
{"minVolSize": 1048576, "maxSnapshotsPerBase": 16,
"maxDrives": 192, "maxVolumes": 512, "maxVolumesPerGroup":
256, "maxMirrors": 0, "maxMappingsPerVolume": 1,
"maxMappableLuns": 256, "maxVolCopys": 511,
"maxSnapshots":
256}, "hotSpareCount": 0, "hostSpareCountInStandby": 0,
"status": "needsattn", "trayCount": 1,
"usedPoolSpaceAsString": "5313000380416",
"ip2": "10.63.165.216", "ip1": "10.63.165.215",
"freePoolSpaceAsString": "11142431623168",
"types": "SAS",
"name": "stle2600-7_8", "hotSpareSize": 0,
"usedPoolSpace":
5313000380416, "driveTypes": ["sas"],
"unconfiguredSpaceByDriveType": {},
"unconfiguredSpaceAsStrings": "0", "model": "2650",
"unconfiguredSpace": 0}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-copy-jobs"
"/[0-9a-zA-Z]+$", path):
response.status_code = 200
response.text = """{"status": "complete",
"cloneCopy": true, "pgRef":
"3300000060080E500023C73400000ACA52D29454", "volcopyHandle":49160
, "idleTargetWriteProt": true, "copyPriority": "priority2",
"volcopyRef": "1800000060080E500023C73400000ACF52D29466",
"worldWideName": "60080E500023C73400000ACF52D29466",
"copyCompleteTime": "0", "sourceVolume":
"3500000060080E500023C73400000ACE52D29462", "currentManager":
"070000000000000000000002", "copyStartTime": "1389551671",
"reserved1": "00000000", "targetVolume":
"0200000060080E500023C73400000A8C52D10675"}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-mappings$", path):
response.status_code = 200
response.text = """[
{
"lunMappingRef":"8800000000000000000000000000000000000000",
"lun": 0,
"ssid": 16384,
"perms": 15,
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"type": "all",
"mapRef": "8400000060080E500023C73400300381515BFBA3"
}]
"""
else:
# Unknown API
response.status_code = 500
return response
def do_POST(self, path, params, data, headers):
"""Respond to a POST request."""
response = FakeEseriesResponse()
if "/devmgr/vn" not in path:
response.status_code = 404
data = json.loads(data) if data else None
(__, ___, path) = path.partition("/devmgr/vn")
if re.match("^/storage-systems/[0-9a-zA-Z]+/volumes$", path):
response.status_code = 200
text_json = json.loads("""
{"extremeProtection": false, "pitBaseVolume": true,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "1073741824", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"listOfMappings": [{
"lunMappingRef":"8800000000000000000000000000000000000000",
"lun": 0,
"ssid": 16384,
"perms": 15,
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"type": "all",
"mapRef": "8400000060080E500023C73400300381515BFBA3"
}], "sectorOffset": "15",
"id": "0200000060080E500023BB34000003FB515C2293",
"wwn": "60080E500023BB3400001FC352D14CB2",
"capacity": "2147483648", "mgmtClientAttribute": 0,
"label": "CFDXJ67BLJH25DXCZFZD4NSF54",
"volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000001",
"currentManager": "070000000000000000000001",
"applicationTagOwned": false, "status": "optimal",
"segmentSize": 131072, "volumeUse": "standardVolume",
"action": "none", "preferredManager":
"070000000000000000000001", "volumeHandle": 15,
"offline": false, "preReadRedundancyCheckEnabled": false,
"dssPreallocEnabled": false, "name": "bdm-vc-test-1",
"worldWideName": "60080E500023BB3400001FC352D14CB2",
"currentControllerId": "070000000000000000000001",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType":
"type1Protection"}""")
text_json['label'] = data['name']
text_json['name'] = data['name']
text_json['volumeRef'] = data['name']
text_json['id'] = data['name']
response.text = json.dumps(text_json)
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-mappings$", path):
response.status_code = 200
text_json = json.loads("""
{
"lunMappingRef":"8800000000000000000000000000000000000000",
"lun": 0,
"ssid": 16384,
"perms": 15,
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"type": "all",
"mapRef": "8400000060080E500023C73400300381515BFBA3"
}
""")
text_json['volumeRef'] = data['mappableObjectId']
text_json['mapRef'] = data['targetId']
response.text = json.dumps(text_json)
elif re.match("^/storage-systems/[0-9a-zA-Z]+/hosts$", path):
response.status_code = 200
response.text = """{"isSAControlled": false,
"confirmLUNMappingCreation"
: false, "label": "stlrx300s7-55", "isLargeBlockFormatHost":
false, "clusterRef": "8500000060080E500023C7340036035F515B78FC",
"protectionInformationCapableAccessMethod": false,
"ports": [], "hostRef":
"8400000060080E500023C73400300381515BFBA3", "hostTypeIndex": 10,
"hostSidePorts": [{"label": "NewStore", "type": "iscsi",
"address": "iqn.1998-01.com.vmware:localhost-28a58148"}]}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-groups$", path):
response.status_code = 200
text_json = json.loads("""{"status": "optimal",
"autoDeleteLimit": 0,
"maxRepositoryCapacity": "-65536", "rollbackStatus": "none"
, "unusableRepositoryCapacity": "0", "pitGroupRef":
"3300000060080E500023C7340000098D5294AC9A", "clusterSize":
65536, "label": "C6JICISVHNG2TFZX4XB5ZWL7O",
"maxBaseCapacity":
"476187142128128", "repositoryVolume":
"3600000060080E500023BB3400001FA952CEF12C",
"fullWarnThreshold": 99, "repFullPolicy": "purgepit",
"action": "none", "rollbackPriority": "medium",
"creationPendingStatus": "none", "consistencyGroupRef":
"0000000000000000000000000000000000000000", "volumeHandle":
49153, "consistencyGroup": false, "baseVolume":
"0200000060080E500023C734000009825294A534"}""")
text_json['label'] = data['name']
text_json['name'] = data['name']
text_json['pitGroupRef'] = data['name']
text_json['id'] = data['name']
text_json['baseVolume'] = data['baseMappableObjectId']
response.text = json.dumps(text_json)
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-images$", path):
response.status_code = 200
text_json = json.loads("""{"status": "optimal",
"pitCapacity": "2147483648",
"pitTimestamp": "1389315375", "pitGroupRef":
"3300000060080E500023C7340000098D5294AC9A", "creationMethod":
"user", "repositoryCapacityUtilization": "2818048",
"activeCOW": true, "isRollbackSource": false, "pitRef":
"3400000060080E500023BB3400631F335294A5A8",
"pitSequenceNumber": "19"}""")
text_json['label'] = data['groupId']
text_json['name'] = data['groupId']
text_json['id'] = data['groupId']
text_json['pitGroupRef'] = data['groupId']
response.text = json.dumps(text_json)
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-volumes$",
path):
response.status_code = 200
text_json = json.loads("""{"unusableRepositoryCapacity": "0",
"totalSizeInBytes":
"-1", "worldWideName": "60080E500023BB3400001FAD52CEF2F5",
"boundToPIT": true, "wwn":
"60080E500023BB3400001FAD52CEF2F5", "id":
"3500000060080E500023BB3400001FAD52CEF2F5",
"baseVol": "0200000060080E500023BB3400001FA352CECCAE",
"label": "bdm-pv-1", "volumeFull": false,
"preferredControllerId": "070000000000000000000001", "offline":
false, "viewSequenceNumber": "10", "status": "optimal",
"viewRef": "3500000060080E500023BB3400001FAD52CEF2F5",
"mapped": false, "accessMode": "readOnly", "viewTime":
"1389315613", "repositoryVolume":
"0000000000000000000000000000000000000000", "preferredManager":
"070000000000000000000001", "volumeHandle": 16385,
"currentManager": "070000000000000000000001",
"maxRepositoryCapacity": "0", "name": "bdm-pv-1",
"fullWarnThreshold": 0, "currentControllerId":
"070000000000000000000001", "basePIT":
"3400000060080E500023BB3400631F335294A5A8", "clusterSize":
0, "mgmtClientAttribute": 0}""")
text_json['label'] = data['name']
text_json['name'] = data['name']
text_json['id'] = data['name']
text_json['basePIT'] = data['snapshotImageId']
text_json['baseVol'] = data['baseMappableObjectId']
response.text = json.dumps(text_json)
elif re.match("^/storage-systems$", path):
response.status_code = 200
response.text = """{"freePoolSpace": "17055871480319",
"driveCount": 24,
"wwn": "60080E500023C73400000000515AF323", "id": "1",
"hotSpareSizeAsString": "0", "hostSparesUsed": 0, "types": "",
"hostSpareCountInStandby": 0, "status": "optimal", "trayCount":
1, "usedPoolSpaceAsString": "37452115456", "ip2":
"10.63.165.216", "ip1": "10.63.165.215",
"freePoolSpaceAsString": "17055871480319", "hotSpareCount": 0,
"hotSpareSize": "0", "name": "stle2600-7_8", "usedPoolSpace":
"37452115456", "driveTypes": ["sas"],
"unconfiguredSpaceByDriveType": {}, "unconfiguredSpaceAsStrings":
"0", "model": "2650", "unconfiguredSpace": "0"}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+$",
path):
response.status_code = 200
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-copy-jobs$",
path):
response.status_code = 200
response.text = """{"status": "complete", "cloneCopy": true,
"pgRef":
"3300000060080E500023C73400000ACA52D29454", "volcopyHandle":49160
, "idleTargetWriteProt": true, "copyPriority": "priority2",
"volcopyRef": "1800000060080E500023C73400000ACF52D29466",
"worldWideName": "60080E500023C73400000ACF52D29466",
"copyCompleteTime": "0", "sourceVolume":
"3500000060080E500023C73400000ACE52D29462", "currentManager":
"070000000000000000000002", "copyStartTime": "1389551671",
"reserved1": "00000000", "targetVolume":
"0200000060080E500023C73400000A8C52D10675"}"""
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volumes/[0-9A-Za-z]+$",
path):
response.status_code = 200
response.text = """{"extremeProtection": false,
"pitBaseVolume": true,
"dssMaxSegmentSize": 131072,
"totalSizeInBytes": "1073741824", "raidLevel": "raid6",
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"listOfMappings": [{
"lunMappingRef":"8800000000000000000000000000000000000000",
"lun": 0,
"ssid": 16384,
"perms": 15,
"volumeRef": "0200000060080E500023BB34000003FB515C2293",
"type": "all",
"mapRef": "8400000060080E500023C73400300381515BFBA3"
}], "sectorOffset": "15",
"id": "0200000060080E500023BB34000003FB515C2293",
"wwn": "60080E500023BB3400001FC352D14CB2",
"capacity": "2147483648", "mgmtClientAttribute": 0,
"label": "rename",
"volumeFull": false,
"blkSize": 512, "volumeCopyTarget": false,
"volumeGroupRef":
"0400000060080E500023BB3400001F9F52CECC3F",
"preferredControllerId": "070000000000000000000001",
"currentManager": "070000000000000000000001",
"applicationTagOwned": false, "status": "optimal",
"segmentSize": 131072, "volumeUse": "standardVolume",
"action": "none", "preferredManager":
"070000000000000000000001", "volumeHandle": 15,
"offline": false, "preReadRedundancyCheckEnabled": false,
"dssPreallocEnabled": false, "name": "bdm-vc-test-1",
"worldWideName": "60080E500023BB3400001FC352D14CB2",
"currentControllerId": "070000000000000000000001",
"protectionInformationCapable": false, "mapped": false,
"reconPriority": 1, "protectionType":
"type1Protection"}"""
else:
# Unknown API
response.status_code = 500
return response
def do_DELETE(self, path, params, data, headers):
"""Respond to a DELETE request."""
response = FakeEseriesResponse()
if "/devmgr/vn" not in path:
response.status_code = 500
(__, ___, path) = path.partition("/devmgr/vn")
if re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-images"
"/[0-9A-Za-z]+$", path):
code = 204
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-groups"
"/[0-9A-Za-z]+$", path):
code = 204
elif re.match("^/storage-systems/[0-9a-zA-Z]+/snapshot-volumes"
"/[0-9A-Za-z]+$", path):
code = 204
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-copy-jobs"
"/[0-9A-Za-z]+$", path):
code = 204
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volumes"
"/[0-9A-Za-z]+$", path):
code = 204
elif re.match("^/storage-systems/[0-9a-zA-Z]+/volume-mappings/"
"[0-9a-zA-Z]+$", path):
code = 204
else:
code = 500
response.status_code = code
return response
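# Quick illustration (not used by the tests themselves): the fake handler
# routes on regex matches after stripping the '/devmgr/vn' prefix, so a GET
# of the volume listing resolves to the canned JSON above.
def _example_fake_get_volumes():
    """Fetch the canned volume listing from FakeEseriesServerHandler."""
    handler = FakeEseriesServerHandler()
    response = handler.do_GET('/devmgr/vn/storage-systems/1/volumes',
                              None, None, None)
    return response.status_code, response.json()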
class FakeEseriesHTTPSession(object):
"""A fake requests.Session for netapp tests."""
def __init__(self):
self.handler = FakeEseriesServerHandler()
def request(self, method, url, params, data, headers, timeout, verify):
address = '127.0.0.1:80'
(__, ___, path) = url.partition(address)
if method.upper() == 'GET':
return self.handler.do_GET(path, params, data, headers)
elif method.upper() == 'POST':
return self.handler.do_POST(path, params, data, headers)
elif method.upper() == 'DELETE':
return self.handler.do_DELETE(path, params, data, headers)
else:
raise exception.Invalid()
class NetAppEseriesISCSIDriverTestCase(test.TestCase):
"""Test case for NetApp e-series iscsi driver."""
volume = {'id': '114774fb-e15a-4fae-8ee2-c9723e3645ef', 'size': 1,
'volume_name': 'lun1', 'host': 'hostname@backend#DDP',
'os_type': 'linux', 'provider_location': 'lun1',
'name_id': '114774fb-e15a-4fae-8ee2-c9723e3645ef',
'provider_auth': 'provider a b', 'project_id': 'project',
'display_name': None, 'display_description': 'lun1',
'volume_type_id': None}
snapshot = {'id': '17928122-553b-4da9-9737-e5c3dcd97f75',
'volume_id': '114774fb-e15a-4fae-8ee2-c9723e3645ef',
'size': 2, 'volume_name': 'lun1',
'volume_size': 2, 'project_id': 'project',
'display_name': None, 'display_description': 'lun1',
'volume_type_id': None}
volume_sec = {'id': 'b6c01641-8955-4917-a5e3-077147478575',
'size': 2, 'volume_name': 'lun1',
'os_type': 'linux', 'provider_location': 'lun1',
'name_id': 'b6c01641-8955-4917-a5e3-077147478575',
'provider_auth': None, 'project_id': 'project',
'display_name': None, 'display_description': 'lun1',
'volume_type_id': None}
volume_clone = {'id': 'b4b24b27-c716-4647-b66d-8b93ead770a5', 'size': 3,
'volume_name': 'lun1',
'os_type': 'linux', 'provider_location': 'cl_sm',
'name_id': 'b4b24b27-c716-4647-b66d-8b93ead770a5',
'provider_auth': None,
'project_id': 'project', 'display_name': None,
'display_description': 'lun1',
'volume_type_id': None}
volume_clone_large = {'id': 'f6ef5bf5-e24f-4cbb-b4c4-11d631d6e553',
'size': 6, 'volume_name': 'lun1',
'os_type': 'linux', 'provider_location': 'cl_lg',
'name_id': 'f6ef5bf5-e24f-4cbb-b4c4-11d631d6e553',
'provider_auth': None,
'project_id': 'project', 'display_name': None,
'display_description': 'lun1',
'volume_type_id': None}
fake_eseries_volume_label = utils.convert_uuid_to_es_fmt(volume['id'])
connector = {'initiator': 'iqn.1998-01.com.vmware:localhost-28a58148'}
fake_size_gb = volume['size']
fake_eseries_pool_label = 'DDP'
fake_ref = {'source-name': 'CFDGJSLS'}
fake_ret_vol = {'id': 'vol_id', 'label': 'label',
'worldWideName': 'wwn', 'capacity': '2147583648'}
def setUp(self):
super(NetAppEseriesISCSIDriverTestCase, self).setUp()
self._custom_setup()
def _custom_setup(self):
self.mock_object(na_utils, 'OpenStackInfo')
# Inject fake netapp_lib module classes.
fakes.mock_netapp_lib([client])
self.mock_object(common.na_utils, 'check_netapp_lib')
configuration = self._set_config(create_configuration())
self.driver = common.NetAppDriver(configuration=configuration)
self.library = self.driver.library
self.mock_object(requests, 'Session', FakeEseriesHTTPSession)
self.mock_object(self.library,
'_check_mode_get_or_register_storage_system')
self.driver.do_setup(context='context')
self.driver.library._client._endpoint = fakes.FAKE_ENDPOINT_HTTP
def _set_config(self, configuration):
configuration.netapp_storage_family = 'eseries'
configuration.netapp_storage_protocol = 'iscsi'
configuration.netapp_transport_type = 'http'
configuration.netapp_server_hostname = '127.0.0.1'
configuration.netapp_server_port = None
configuration.netapp_webservice_path = '/devmgr/vn'
configuration.netapp_controller_ips = '127.0.0.2,127.0.0.3'
configuration.netapp_sa_password = 'pass1234'
configuration.netapp_login = 'rw'
configuration.netapp_password = 'rw'
configuration.netapp_storage_pools = 'DDP'
configuration.netapp_enable_multiattach = False
return configuration
def test_embedded_mode(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '127.0.0.1,127.0.0.3'
driver = common.NetAppDriver(configuration=configuration)
self.mock_object(client.RestClient, 'list_storage_systems', mock.Mock(
return_value=[fakes.STORAGE_SYSTEM]))
driver.do_setup(context='context')
self.assertEqual('1fa6efb5-f07b-4de4-9f0e-52e5f7ff5d1b',
driver.library._client.get_system_id())
def test_check_system_pwd_not_sync(self):
def list_system():
if getattr(self, 'test_count', None):
self.test_count = 1
return {'status': 'passwordoutofsync'}
return {'status': 'needsAttention'}
self.library._client.list_storage_system = mock.Mock(wraps=list_system)
result = self.library._check_storage_system()
self.assertTrue(result)
def test_create_destroy(self):
FAKE_POOLS = [{'label': 'DDP', 'volumeGroupRef': 'test'}]
self.library._get_storage_pools = mock.Mock(return_value=FAKE_POOLS)
self.mock_object(self.library._client, '_get_resource_url', mock.Mock(
return_value=fakes.FAKE_ENDPOINT_HTTP))
self.mock_object(self.library._client, '_eval_response')
self.mock_object(self.library._client, 'list_volumes', mock.Mock(
return_value=FAKE_POOLS))
self.driver.create_volume(self.volume)
self.driver.delete_volume(self.volume)
def test_vol_stats(self):
self.driver.get_volume_stats(refresh=False)
def test_get_pool(self):
self.mock_object(self.library, '_get_volume',
mock.Mock(return_value={
'volumeGroupRef': 'fake_ref'}))
self.mock_object(self.library._client, "get_storage_pool",
mock.Mock(return_value={'volumeGroupRef': 'fake_ref',
'label': 'ddp1'}))
pool = self.driver.get_pool({'name_id': 'fake-uuid'})
self.assertEqual('ddp1', pool)
def test_get_pool_no_pools(self):
self.mock_object(self.library, '_get_volume',
mock.Mock(return_value={
'volumeGroupRef': 'fake_ref'}))
self.mock_object(self.library._client, "get_storage_pool",
mock.Mock(return_value=None))
pool = self.driver.get_pool({'name_id': 'fake-uuid'})
self.assertIsNone(pool)
@mock.patch.object(library.NetAppESeriesLibrary, '_create_volume',
mock.Mock())
def test_create_volume(self):
self.driver.create_volume(self.volume)
self.library._create_volume.assert_called_with(
'DDP', self.fake_eseries_volume_label, self.volume['size'])
def test_create_volume_no_pool_provided_by_scheduler(self):
volume = copy.deepcopy(self.volume)
volume['host'] = "host@backend" # missing pool
self.assertRaises(exception.InvalidHost, self.driver.create_volume,
volume)
@mock.patch.object(client.RestClient, 'list_storage_pools')
def test_helper_create_volume_fail(self, fake_list_pools):
fake_pool = {}
fake_pool['label'] = self.fake_eseries_pool_label
fake_pool['volumeGroupRef'] = 'foo'
fake_pool['raidLevel'] = 'raidDiskPool'
fake_pools = [fake_pool]
fake_list_pools.return_value = fake_pools
wrong_eseries_pool_label = 'hostname@backend'
self.assertRaises(exception.NetAppDriverException,
self.library._create_volume,
wrong_eseries_pool_label,
self.fake_eseries_volume_label,
self.fake_size_gb)
@mock.patch.object(library.LOG, 'info')
@mock.patch.object(client.RestClient, 'list_storage_pools')
@mock.patch.object(client.RestClient, 'create_volume',
mock.MagicMock(return_value='CorrectVolume'))
def test_helper_create_volume(self, storage_pools, log_info):
fake_pool = {}
fake_pool['label'] = self.fake_eseries_pool_label
fake_pool['volumeGroupRef'] = 'foo'
fake_pool['raidLevel'] = 'raidDiskPool'
fake_pools = [fake_pool]
storage_pools.return_value = fake_pools
storage_vol = self.library._create_volume(
self.fake_eseries_pool_label,
self.fake_eseries_volume_label,
self.fake_size_gb)
log_info.assert_called_once_with("Created volume with label %s.",
self.fake_eseries_volume_label)
self.assertEqual('CorrectVolume', storage_vol)
@mock.patch.object(client.RestClient, 'list_storage_pools')
@mock.patch.object(client.RestClient, 'create_volume',
mock.MagicMock(
side_effect=exception.NetAppDriverException))
@mock.patch.object(library.LOG, 'info', mock.Mock())
def test_create_volume_check_exception(self, fake_list_pools):
fake_pool = {}
fake_pool['label'] = self.fake_eseries_pool_label
fake_pool['volumeGroupRef'] = 'foo'
fake_pool['raidLevel'] = 'raidDiskPool'
fake_pools = [fake_pool]
fake_list_pools.return_value = fake_pools
self.assertRaises(exception.NetAppDriverException,
self.library._create_volume,
self.fake_eseries_pool_label,
self.fake_eseries_volume_label, self.fake_size_gb)
def test_portal_for_vol_controller(self):
volume = {'id': 'vol_id', 'currentManager': 'ctrl1'}
vol_nomatch = {'id': 'vol_id', 'currentManager': 'ctrl3'}
portals = [{'controller': 'ctrl2', 'iqn': 'iqn2'},
{'controller': 'ctrl1', 'iqn': 'iqn1'}]
portal = self.library._get_iscsi_portal_for_vol(volume, portals)
self.assertEqual({'controller': 'ctrl1', 'iqn': 'iqn1'}, portal)
portal = self.library._get_iscsi_portal_for_vol(vol_nomatch, portals)
self.assertEqual({'controller': 'ctrl2', 'iqn': 'iqn2'}, portal)
def test_portal_for_vol_any_false(self):
vol_nomatch = {'id': 'vol_id', 'currentManager': 'ctrl3'}
portals = [{'controller': 'ctrl2', 'iqn': 'iqn2'},
{'controller': 'ctrl1', 'iqn': 'iqn1'}]
self.assertRaises(exception.NetAppDriverException,
self.library._get_iscsi_portal_for_vol,
vol_nomatch, portals, False)
def test_setup_error_unsupported_host_type(self):
configuration = self._set_config(create_configuration())
configuration.netapp_host_type = 'garbage'
driver = common.NetAppDriver(configuration=configuration)
self.assertRaises(exception.NetAppDriverException,
driver.library.check_for_setup_error)
def test_check_host_type_default(self):
configuration = self._set_config(create_configuration())
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_host_type()
self.assertEqual('LnxALUA', driver.library.host_type)
def test_do_setup_all_default(self):
configuration = self._set_config(create_configuration())
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system = mock.Mock()
mock_invoke = self.mock_object(client, 'RestClient')
driver.do_setup(context='context')
mock_invoke.assert_called_with(**fakes.FAKE_CLIENT_PARAMS)
def test_do_setup_http_default_port(self):
configuration = self._set_config(create_configuration())
configuration.netapp_transport_type = 'http'
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system = mock.Mock()
mock_invoke = self.mock_object(client, 'RestClient')
driver.do_setup(context='context')
mock_invoke.assert_called_with(**fakes.FAKE_CLIENT_PARAMS)
def test_do_setup_https_default_port(self):
configuration = self._set_config(create_configuration())
configuration.netapp_transport_type = 'https'
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system = mock.Mock()
mock_invoke = self.mock_object(client, 'RestClient')
driver.do_setup(context='context')
FAKE_EXPECTED_PARAMS = dict(fakes.FAKE_CLIENT_PARAMS, port=8443,
scheme='https')
mock_invoke.assert_called_with(**FAKE_EXPECTED_PARAMS)
def test_do_setup_http_non_default_port(self):
configuration = self._set_config(create_configuration())
configuration.netapp_server_port = 81
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system = mock.Mock()
mock_invoke = self.mock_object(client, 'RestClient')
driver.do_setup(context='context')
FAKE_EXPECTED_PARAMS = dict(fakes.FAKE_CLIENT_PARAMS, port=81)
mock_invoke.assert_called_with(**FAKE_EXPECTED_PARAMS)
def test_do_setup_https_non_default_port(self):
configuration = self._set_config(create_configuration())
configuration.netapp_transport_type = 'https'
configuration.netapp_server_port = 446
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system = mock.Mock()
mock_invoke = self.mock_object(client, 'RestClient')
driver.do_setup(context='context')
FAKE_EXPECTED_PARAMS = dict(fakes.FAKE_CLIENT_PARAMS, port=446,
scheme='https')
mock_invoke.assert_called_with(**FAKE_EXPECTED_PARAMS)
def test_setup_good_controller_ip(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '127.0.0.1'
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system
def test_setup_good_controller_ips(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '127.0.0.2,127.0.0.1'
driver = common.NetAppDriver(configuration=configuration)
driver.library._check_mode_get_or_register_storage_system
def test_setup_missing_controller_ip(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = None
driver = common.NetAppDriver(configuration=configuration)
self.assertRaises(exception.InvalidInput,
driver.do_setup, context='context')
def test_setup_error_invalid_controller_ip(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '987.65.43.21'
driver = common.NetAppDriver(configuration=configuration)
self.mock_object(na_utils, 'resolve_hostname',
mock.Mock(side_effect=socket.gaierror))
self.assertRaises(
exception.NoValidHost,
driver.library._check_mode_get_or_register_storage_system)
def test_setup_error_invalid_first_controller_ip(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '987.65.43.21,127.0.0.1'
driver = common.NetAppDriver(configuration=configuration)
self.mock_object(na_utils, 'resolve_hostname',
mock.Mock(side_effect=socket.gaierror))
self.assertRaises(
exception.NoValidHost,
driver.library._check_mode_get_or_register_storage_system)
def test_setup_error_invalid_second_controller_ip(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '127.0.0.1,987.65.43.21'
driver = common.NetAppDriver(configuration=configuration)
self.mock_object(na_utils, 'resolve_hostname',
mock.Mock(side_effect=socket.gaierror))
self.assertRaises(
exception.NoValidHost,
driver.library._check_mode_get_or_register_storage_system)
def test_setup_error_invalid_both_controller_ips(self):
configuration = self._set_config(create_configuration())
configuration.netapp_controller_ips = '564.124.1231.1,987.65.43.21'
driver = common.NetAppDriver(configuration=configuration)
self.mock_object(na_utils, 'resolve_hostname',
mock.Mock(side_effect=socket.gaierror))
self.assertRaises(
exception.NoValidHost,
driver.library._check_mode_get_or_register_storage_system)
def test_get_vol_with_label_wwn_missing(self):
self.assertRaises(exception.InvalidInput,
self.library._get_volume_with_label_wwn,
None, None)
def test_get_vol_with_label_wwn_found(self):
fake_vl_list = [{'volumeRef': '1', 'volumeUse': 'standardVolume',
'label': 'l1', 'volumeGroupRef': 'g1',
                         'worldWideName': 'w1ghyu'},
{'volumeRef': '2', 'volumeUse': 'standardVolume',
'label': 'l2', 'volumeGroupRef': 'g2',
'worldWideName': 'w2ghyu'}]
self.library._get_storage_pools = mock.Mock(return_value=['g2', 'g3'])
self.library._client.list_volumes = mock.Mock(
return_value=fake_vl_list)
vol = self.library._get_volume_with_label_wwn('l2', 'w2:gh:yu')
self.assertEqual(1, self.library._client.list_volumes.call_count)
self.assertEqual('2', vol['volumeRef'])
def test_get_vol_with_label_wwn_unmatched(self):
fake_vl_list = [{'volumeRef': '1', 'volumeUse': 'standardVolume',
'label': 'l1', 'volumeGroupRef': 'g1',
                         'worldWideName': 'w1ghyu'},
{'volumeRef': '2', 'volumeUse': 'standardVolume',
'label': 'l2', 'volumeGroupRef': 'g2',
'worldWideName': 'w2ghyu'}]
self.library._get_storage_pools = mock.Mock(return_value=['g2', 'g3'])
self.library._client.list_volumes = mock.Mock(
return_value=fake_vl_list)
self.assertRaises(KeyError, self.library._get_volume_with_label_wwn,
'l2', 'abcdef')
self.assertEqual(1, self.library._client.list_volumes.call_count)
def test_manage_existing_get_size(self):
self.library._get_existing_vol_with_manage_ref = mock.Mock(
return_value=self.fake_ret_vol)
size = self.driver.manage_existing_get_size(self.volume, self.fake_ref)
self.assertEqual(3, size)
self.library._get_existing_vol_with_manage_ref.assert_called_once_with(
self.volume, self.fake_ref)
def test_get_exist_vol_source_name_missing(self):
self.assertRaises(exception.ManageExistingInvalidReference,
self.library._get_existing_vol_with_manage_ref,
self.volume, {'id': '1234'})
def test_get_exist_vol_source_not_found(self):
def _get_volume(v_id, v_name):
d = {'id': '1'}
return d[v_id]
self.library._get_volume_with_label_wwn = mock.Mock(wraps=_get_volume)
self.assertRaises(exception.ManageExistingInvalidReference,
self.library._get_existing_vol_with_manage_ref,
{'id': 'id2'}, {'source-name': 'name2'})
self.library._get_volume_with_label_wwn.assert_called_once_with(
'name2', None)
def test_get_exist_vol_with_manage_ref(self):
fake_ret_vol = {'id': 'right'}
self.library._get_volume_with_label_wwn = mock.Mock(
return_value=fake_ret_vol)
actual_vol = self.library._get_existing_vol_with_manage_ref(
{'id': 'id2'}, {'source-name': 'name2'})
self.library._get_volume_with_label_wwn.assert_called_once_with(
'name2', None)
self.assertEqual(fake_ret_vol, actual_vol)
@mock.patch.object(utils, 'convert_uuid_to_es_fmt')
def test_manage_existing_same_label(self, mock_convert_es_fmt):
self.library._get_existing_vol_with_manage_ref = mock.Mock(
return_value=self.fake_ret_vol)
mock_convert_es_fmt.return_value = 'label'
self.driver.manage_existing(self.volume, self.fake_ref)
self.library._get_existing_vol_with_manage_ref.assert_called_once_with(
self.volume, self.fake_ref)
mock_convert_es_fmt.assert_called_once_with(
'114774fb-e15a-4fae-8ee2-c9723e3645ef')
@mock.patch.object(utils, 'convert_uuid_to_es_fmt')
def test_manage_existing_new(self, mock_convert_es_fmt):
self.library._get_existing_vol_with_manage_ref = mock.Mock(
return_value=self.fake_ret_vol)
mock_convert_es_fmt.return_value = 'vol_label'
self.library._client.update_volume = mock.Mock(
return_value={'id': 'update', 'worldWideName': 'wwn'})
self.driver.manage_existing(self.volume, self.fake_ref)
self.library._get_existing_vol_with_manage_ref.assert_called_once_with(
self.volume, self.fake_ref)
mock_convert_es_fmt.assert_called_once_with(
'114774fb-e15a-4fae-8ee2-c9723e3645ef')
self.library._client.update_volume.assert_called_once_with(
'vol_id', 'vol_label')
@mock.patch.object(library.LOG, 'info')
def test_unmanage(self, log_info):
self.library._get_volume = mock.Mock(return_value=self.fake_ret_vol)
self.driver.unmanage(self.volume)
self.library._get_volume.assert_called_once_with(
'114774fb-e15a-4fae-8ee2-c9723e3645ef')
self.assertEqual(1, log_info.call_count)
|
|
""" types.py
"""
from __future__ import division
import abc
import copy
import six
from cryptography.hazmat.primitives.constant_time import bytes_eq
from ..constants import PacketTag
from ..decorators import sdproperty
from ..types import Dispatchable
from ..types import Field
from ..types import Header as _Header
__all__ = ['Header',
'VersionedHeader',
'Packet',
'VersionedPacket',
'Opaque',
'Key',
'Public',
'Private',
'Primary',
'Sub',
'MPI',
'MPIs', ]
class Header(_Header):
@sdproperty
def tag(self):
return self._tag
@tag.register(int)
@tag.register(PacketTag)
def tag_int(self, val):
_tag = (val & 0x3F) if self._lenfmt else ((val & 0x3C) >> 2)
try:
self._tag = PacketTag(_tag)
except ValueError: # pragma: no cover
self._tag = _tag
@property
def typeid(self):
return self.tag
def __init__(self):
super(Header, self).__init__()
self.tag = 0x00
def __bytearray__(self):
tag = 0x80 | (self._lenfmt << 6)
tag |= (self.tag) if self._lenfmt else ((self.tag << 2) | {1: 0, 2: 1, 4: 2, 0: 3}[self.llen])
_bytes = bytearray(self.int_to_bytes(tag))
_bytes += self.encode_length(self.length, self._lenfmt, self.llen)
return _bytes
def __len__(self):
return 1 + self.llen
def parse(self, packet):
"""
There are two formats for headers
old style
---------
        Old style headers can be 1, 2, 3, or 5 octets long and are composed of a Tag and a Length.
If the header length is 1 octet (length_type == 3), then there is no Length field.
new style
---------
New style headers can be 2, 3, or 6 octets long and are also composed of a Tag and a Length.
Packet Tag
----------
The packet tag is the first byte, comprising the following fields:
        +-------------+----------+---------------+---+---+---+---+----------+----------+
        | byte        | 1                                                              |
        +-------------+----------+---------------+---+---+---+---+----------+----------+
        | bit         | 7        | 6             | 5 | 4 | 3 | 2 | 1        | 0        |
        +-------------+----------+---------------+---+---+---+---+----------+----------+
        | old-style   | always 1 | packet format | packet tag    | length type         |
        | description |          | 0 = old-style |               | 0 = 1 octet         |
        |             |          | 1 = new-style |               | 1 = 2 octets        |
        |             |          |               |               | 2 = 4 octets        |
        |             |          |               |               | 3 = no length field |
        +-------------+          +               +---------------+---------------------+
        | new-style   |          |               | packet tag                          |
        | description |          |               |                                     |
        +-------------+----------+---------------+-------------------------------------+
:param packet: raw packet bytes
"""
self._lenfmt = ((packet[0] & 0x40) >> 6)
self.tag = packet[0]
if self._lenfmt == 0:
self.llen = (packet[0] & 0x03)
del packet[0]
if (self._lenfmt == 0 and self.llen > 0) or self._lenfmt == 1:
self.length = packet
else:
# indeterminate packet length
self.length = len(packet)
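# Worked illustration of the tag byte handled by Header.parse above
# (hypothetical input values, not part of the original module):
#   new format: first octet 0xC2 -> bit 6 set, so _lenfmt == 1 and the
#     packet tag is 0xC2 & 0x3F == 2 (a Signature packet)
#   old format: first octet 0x88 -> bit 6 clear, so _lenfmt == 0, the tag
#     is (0x88 & 0x3C) >> 2 == 2, and the length type is 0x88 & 0x03 == 0,
#     meaning a one-octet Length field follows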
class VersionedHeader(Header):
@sdproperty
def version(self):
return self._version
@version.register(int)
def version_int(self, val):
self._version = val
def __init__(self):
super(VersionedHeader, self).__init__()
self.version = 0
def __bytearray__(self):
_bytes = bytearray(super(VersionedHeader, self).__bytearray__())
_bytes += bytearray([self.version])
return _bytes
def parse(self, packet): # pragma: no cover
if self.tag == 0:
super(VersionedHeader, self).parse(packet)
if self.version == 0:
self.version = packet[0]
del packet[0]
class Packet(Dispatchable):
__typeid__ = -1
__headercls__ = Header
def __init__(self, _=None):
super(Packet, self).__init__()
self.header = self.__headercls__()
if isinstance(self.__typeid__, six.integer_types):
self.header.tag = self.__typeid__
@abc.abstractmethod
def __bytearray__(self):
return self.header.__bytearray__()
def __len__(self):
return len(self.header) + self.header.length
def __repr__(self):
return "<{cls:s} [tag {tag:02d}] at 0x{id:x}>".format(cls=self.__class__.__name__, tag=self.header.tag, id=id(self))
def update_hlen(self):
self.header.length = len(self.__bytearray__()) - len(self.header)
@abc.abstractmethod
def parse(self, packet):
if self.header.tag == 0:
self.header.parse(packet)
class VersionedPacket(Packet):
__headercls__ = VersionedHeader
def __init__(self):
super(VersionedPacket, self).__init__()
if isinstance(self.__ver__, six.integer_types):
self.header.version = self.__ver__
def __repr__(self):
return "<{cls:s} [tag {tag:02d}][v{ver:d}] at 0x{id:x}>".format(cls=self.__class__.__name__, tag=self.header.tag,
ver=self.header.version, id=id(self))
class Opaque(Packet):
__typeid__ = None
@sdproperty
def payload(self):
return self._payload
@payload.register(bytearray)
@payload.register(bytes)
def payload_bin(self, val):
self._payload = val
def __init__(self):
super(Opaque, self).__init__()
self.payload = b''
def __bytearray__(self):
_bytes = super(Opaque, self).__bytearray__()
_bytes += self.payload
return _bytes
def parse(self, packet): # pragma: no cover
super(Opaque, self).parse(packet)
pend = self.header.length
if hasattr(self.header, 'version'):
pend -= 1
self.payload = packet[:pend]
del packet[:pend]
# key marker classes for convenience
class Key(object):
pass
class Public(Key):
pass
class Private(Key):
pass
class Primary(Key):
pass
class Sub(Key):
pass
# This is required for class MPI to work in both Python 2 and 3
if not six.PY2:
long = int
class MPI(long):
def __new__(cls, num):
mpi = num
if isinstance(num, (bytes, bytearray)):
if isinstance(num, bytes): # pragma: no cover
num = bytearray(num)
fl = ((MPIs.bytes_to_int(num[:2]) + 7) // 8)
del num[:2]
mpi = MPIs.bytes_to_int(num[:fl])
del num[:fl]
return super(MPI, cls).__new__(cls, mpi)
def byte_length(self):
return ((self.bit_length() + 7) // 8)
def to_mpibytes(self):
return MPIs.int_to_bytes(self.bit_length(), 2) + MPIs.int_to_bytes(self, self.byte_length())
def __len__(self):
return self.byte_length() + 2
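# Wire-format example for the MPI class above (values chosen for
# illustration): MPI(5).to_mpibytes() == b'\x00\x03\x05', i.e. a two-octet
# big-endian bit count (5 needs 3 bits) followed by the minimal magnitude
# bytes; consistently, len(MPI(5)) == 3 (2 length octets + 1 value octet).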
class MPIs(Field):
    # this differs from MPI in that its subclasses hold/parse several MPI fields
# and, in the case of v4 private keys, also a String2Key specifier/information.
__mpis__ = ()
def __len__(self):
return sum(len(i) for i in self)
def __hash__(self):
return hash(tuple(self))
def __eq__(self, other):
if isinstance(other, MPIs):
result = True
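            # deliberately no early exit on mismatch: every pair is compared
            # with constant-time bytes_eq, so an equality check does not leak
            # where two sets of key material first differ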
if len(self) != len(other):
result = False
for i, j in zip(self, other):
a = i.to_mpibytes()
b = j.to_mpibytes()
if not bytes_eq(a, b):
result = False
return result
return False
def __ne__(self, other):
return not self.__eq__(other)
def __iter__(self):
"""yield all components of an MPI so it can be iterated over"""
for i in self.__mpis__:
yield getattr(self, i)
def __copy__(self):
pk = self.__class__()
for m in self.__mpis__:
setattr(pk, m, copy.copy(getattr(self, m)))
return pk
|
|
# Copyright (C) 2015 ibu radempa <ibu@radempa.de>
#
# Permission is hereby granted, free of charge, to
# any person obtaining a copy of this software and
# associated documentation files (the "Software"),
# to deal in the Software without restriction,
# including without limitation the rights to use,
# copy, modify, merge, publish, distribute,
# sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is
# furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission
# notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY
# OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
# LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
r"""
Create a generalized JSON-table-schema structure from a live postgres database.
The JSON data structure returned from :func:`get_database` is a generalization
of the `JSON-table-schema`_: The *resources* in our structure comply with the
table definition there (we extend it in allwoed ways). Our structure comprises
the whole database. It is the JSON-encoded form of a dictionary with these keys
(values being strings, if not otherwise indicated):
.. _`JSON-table-schema`:
http://dataprotocols.org/json-table-schema/
* **source**: the string 'PostgreSQL'
* **source_version**: the PostgreSQL version returned by the server
* **database_name**: the database name
* **database_description**: the comment on the database
* **generation_begin_time**: begin datetime as returned from PostgreSQL
* **generation_end_time**: end datetime as returned from PostgreSQL
* **datapackages**: a list of dictionaries, one for each PostgreSQL schema,
with these keys:
* **datapackage**: the name of the PostgreSQL schema
* **resources**: a list of dictionaries, each describing a table within
the current PostgreSQL schema and having these keys:
* **name**: the name of the table
* **description**: the table comment (only those components not part of
a weak foreign key definition)
* **primaryKey**: the primary key of the table, which is a list of
column names
    * **fields**: a list of dictionaries describing the table columns and
having these keys:
* **name**: the column name
* **description**: the column comment
      * **position**: the ordinal position of the column within the table
* **type**: the PostgreSQL data type, e.g., 'varchar(100)' or 'int4'
* **defaultValue**: the default value of the column, e.g., '0', or
'person_id_seq()' in case of a sequence
* **constraints**: a dictionary describing constraints on the current
column, with these keys:
* **required**: boolean telling whether the column has a 'NOT NULL'
constraint
* **indexes**: a list of dictionaries, one per index and column,
having these keys:
* **name**: name of the index
* **columns**: a list with the names of the columns used in the
index and ordered by priority
* **creation**: the SQL statement for creating the index
* **definition**: the index definition, e.g., 'btree (id1, id2)'
* **primary**: boolean telling whether the indexed columns form
a primary key
* **unique**: boolean telling whether the indexed columns are
constrained to be unique
* **foreignKeys**: a list of foreign keys used by the current table:
* **columns**: the names of the columns in the current table which
are referencing a remote relation
* **enforced**: a boolean telling whether the foreign key constraint
is being enforced in PostgreSQL (True), or if it is a weak
reference and the constraint is kept only by the application
software (False)
* **reference**: a dict for specifying the reference target, having
these keys:
* **datapackage**: the name of the PostgreSQL schema in which the
referenced table resides
* **resource**: the name of the referenced table
* **name**: the name of the foreign key constraint
* **columns**: a list of the names of the referenced columns
* **cardinalitySelf**: (optional) the cardinality of the foreign
key relation (as obtained from a column or table comment)
on the side of the current table
* **cardinalityRef**: (optional) the cardinality of the foreign
key relation (as obtained from a column or table comment)
on the side of the remote table
* **label**: (optional) a label describing the foreign key relation
(as obtained from a column or table comment)
.. _foreign-key-syntax:
Foreign key syntax
~~~~~~~~~~~~~~~~~~
Foreign keys will be recognized where either a (hard) foreign key constraint
is present in PostgreSQL, or a table or column comment describes a foreign key
relation according to these syntax rules (we call this *weak reference*):
* the comment is split at 1) ``;`` followed by a space character
or 2) ``\n``, and results in what we call *components*
* if a component matches one of the *relation_regexps*, we try to find
a column name, a table name and an optional schema name in it; we match
existing names in one of these four formats:
* schema.table.column
* table.column
* schema.table(column1, column2, ..., columnN)
* table(column1, column2, ..., columnN)
* if a relation is valid, we also extract both cardinalities on the side of
the table (card1) and on the foreign side (card2); the syntax is
``card1`` ``link`` ``card2``, where card1 and card2 are values in
:any:`cardinalities` and link is one of ``--``, ``-`` with an optional
space character on both sides (independently).
* if a relation is valid, we also extract a label for the relation: when
the component contains a string like ``label="<LABEL>"``, ``<LABEL>`` will
be extracted. (On both sides of '=' an arbitrary number of white spaces may
  appear.)
In cases where both a foreign key constraint and a weak reference are present,
the weak reference information supplements the constraint, in particular by
adding cardinalities (if present).
"""
import re
import json
from .pg_query import db_init
from . import pg_database as pd
re_components = re.compile('; |\n')
cardinalities = [
'0..1',
'1',
'0..N',
'1..N'
]
"""
Cardinalities.
These values are allowed in weak references.
"""
cards = '|'.join([re.escape(x) for x in cardinalities])
re_cardinalities = re.compile('(^| )(%s) ?--? ?(%s)( |,|$)'
% (cards, cards), re.I)
re_label = re.compile(r'(^| )label\s*=\s*"([^"]*)"( |$)')
def get_database(db_conn_str,
relation_regexps=None,
exclude_tables_regexps=None):
"""
Return a JSON data structure representing the PostgreSQL database.
Returns a JSON string and a list of notifications.
The notifications inform about invalid or possibly unwanted
syntax of the weak references (contained in the comments).
A valid PostgreSQL connection string (*db_conn_str*) is required for
connecting to a live PostgreSQL database with read permissions.
The resulting data structure is missing some details. Currently
mainly these structures are extracted from the database:
* tables
* foreign key relations (both constraints and weak references)
* indexes
The optional arguments have these meanings:
* *exclude_tables_regexps* is a list of regular expression strings;
if a table name matches any of them, the table and all its relations
to other tables are omitted from the result
* *relation_regexps* is a list of regular expression strings;
if a table comment or a column comment matches any of them, it is
parsed for a 'weak' foreign key relation
(cf. :ref:`foreign-key-syntax`)
"""
db_init(db_conn_str)
if exclude_tables_regexps is None:
exclude_tables_regexps = []
begin_time = pd.get_now()
database = pd.get_database()
schemas = pd.get_schemas()
res = []
for schema in schemas:
res_schema = {}
schema_name = schema['schema_name']
res_schema['datapackage'] = schema_name
res_tables = []
for table in pd.get_tables(schema_name):
table_name = table['table_name']
if not _check_exclude_table(exclude_tables_regexps, table_name):
res_table = {}
res_table['name'] = table_name
table_comment = table['table_comment']
if table_comment is not None:
res_table['description'] = table_comment
constraints = _reshuffle_constraints(schema_name, table_name)
if constraints['primary_key']:
res_table['primaryKey'] = constraints['primary_key']
res_table['foreignKeys'] = constraints['foreign_keys']
                if constraints['unique']:
                    # TODO: exposing multi-column unique constraints at the
                    # table level is provisional; single-column uniques are
                    # reported per-field (see _collect_column_constraints)
                    res_table['unique'] = constraints['unique']
res_table['fields'] = _collect_columns(
schema_name,
table_name,
constraints['unique']
)
res_table['indexes'] = pd.get_indexes(schema_name, table_name)
res_tables.append(res_table)
res_schema['resources'] = res_tables
res.append(res_schema)
notifications = _add_annotated_foreign_keys(res, relation_regexps)
end_time = pd.get_now()
return json.dumps({
'source': 'PostgreSQL',
'source_version': pd.get_server_version(),
'database_name': database,
'database_description': pd.get_database_description(),
'generation_begin_time': begin_time,
'generation_end_time': end_time,
'datapackages': res
}), notifications
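# Rough shape of the JSON produced by get_database() (abridged sketch with
# hypothetical names; see the module docstring for the full key reference):
#
#   {"source": "PostgreSQL", "database_name": "mydb", ...,
#    "datapackages": [
#        {"datapackage": "public",
#         "resources": [
#             {"name": "person", "primaryKey": ["id"],
#              "fields": [{"name": "id", "type": "int4",
#                          "constraints": {"required": true}}],
#              "foreignKeys": [], "indexes": []}]}]}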
def _check_exclude_table(exclude_tables_regexps, table_name):
"""
Return whether table *table_name* is to be excluded.
Apply the patterns from *exclude_tables_regexps*.
If at least one matches, return True.
"""
for exclude_tables_regexp in exclude_tables_regexps:
if re.search(exclude_tables_regexp, table_name):
return True
return False
def _reshuffle_constraints(schema_name, table_name):
"""
Return primary key, foreign key and unique constraints for a table.
See also: :func:`_collect_column_constraints`
"""
constraint_names = []
pk_column_names = []
foreign_keys = {}
unique = {}
for constraint in pd.get_constraints(schema_name, table_name):
# constraints are ordered by column_position
column_name = constraint['column_name']
constraint_type = constraint['constraint_type']
c_oid = constraint['constraint_oid']
if constraint_type == 'u':
if c_oid not in unique:
c_schema = constraint['constraint_schema']
c_name = constraint['constraint_name']
unique[c_oid] = {
'name': c_name,
'fields': []
}
unique[c_oid]['fields'].append(column_name)
elif constraint_type == 'p':
pk_column_names.append(column_name) # using proper column ordering
elif constraint_type == 'f':
if c_oid not in foreign_keys:
foreign_keys[c_oid] = {
'fields': [column_name],
'reference': {
'datapackage': constraint['referenced_schema'],
'resource': constraint['referenced_table'],
'name': constraint['constraint_name'],
'fields': []
},
'enforced': True
}
ref_col = constraint['referenced_column']
foreign_keys[c_oid]['reference']['fields'].append(ref_col)
return {
'unique': list(unique.values()),
'primary_key': pk_column_names,
'foreign_keys': list(foreign_keys.values())
}
def _collect_columns(schema_name, table_name, unique):
"""
Return a column structure for a given table in a given schema.
Return a list of dicts, each describing a table column.
"""
res_columns = []
columns = pd.get_columns(schema_name, table_name)
for column in columns: # columns are ordered by ordinal position
res_column = {}
res_column['name'] = column['column_name']
res_column['type'] = column['datatype']
description = column['column_comment']
if description:
res_column['description'] = description
default_expr = column['column_default']
if default_expr:
res_column['default_value'] = _format_default(default_expr)
constraints = _collect_column_constraints(column, unique)
if constraints:
res_column['constraints'] = constraints
res_columns.append(res_column)
return res_columns
def _collect_column_constraints(column, unique):
"""
Collect constraints for a column.
Use column information as well as unique constraint information.
Note: for a unique constraint on a single column we set
column / constraints / unique = True
(and store all multicolumn uniques in the table realm)
"""
res = {}
if 'null' in column:
res['required'] = column['null']
    for constr in unique:
if column['column_name'] in constr['fields']:
if len(constr['fields']) == 1:
res['unique'] = True
return res
def _format_default(expr):
"""
Return text from a default value expression.
Return a simplified form of a PostgreSQL default value.
"""
if expr.lower().startswith('nextval('):
r = expr.split("'", 1)[1]
r = r.rsplit("'", 1)[0]
return r + '()'
elif expr.startswith("'"):
r = expr.split("'", 1)[1]
r = r.rsplit("'", 1)[0]
return "'" + r + "'"
else:
return expr
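# Examples of the simplification performed by _format_default (illustrative):
#   "nextval('person_id_seq'::regclass)"  ->  "person_id_seq()"
#   "'draft'::character varying"          ->  "'draft'"
#   "0"                                   ->  "0" (returned unchanged)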
def _add_annotated_foreign_keys(schemas, relation_regexps):
"""
Add foreign keys defined in column comments.
*schemas* must be a list of schemas as in :func:`get_database`.
*relation_regexps* must be a list of regular expression strings for
matching a 'weak' foreign key reference.
"""
all_notifications = []
schema_table_column = get_schema_table_column_triples(schemas)
if relation_regexps:
res_relation = [re.compile(x) for x in relation_regexps]
for schema in schemas:
schema_name = schema['datapackage']
for table in schema['resources']:
all_relations = []
table_name = table['name']
for column in table['fields']:
column_name = column['name']
if 'description' in column:
relations, notifications, comments = \
_parse_description(
schema_name,
table_name,
column_name,
column['description'],
schema_table_column,
res_relation
)
all_notifications += notifications
all_relations += relations
column['description'] = '; '.join(comments)
if 'description' in table:
relations, notifications, comments = _parse_description(
schema_name,
table_name,
None,
table['description'],
schema_table_column,
res_relation
)
all_notifications += notifications
all_relations += relations
table['description'] = '; '.join(comments)
_merge_foreign_keys(table['foreignKeys'], all_relations)
return all_notifications
def _parse_description(schema_name, table_name, column_name,
description, schema_table_column, relation_regexps):
r"""
Extract relation information from a column or table comment.
Split the description into components at '\n' as well as at '; '.
    Check each component for whether one of the *relation_regexps* matches.
If so try to match (optionally a schema name,) a table name and
the name(s) of a (tuple of) column(s) as well as two cardinalities.
In case of a table comment also match another tuple of column names
of the current table. For a table comment set *column_name*=None.
Return a list of the found relations, a list of notifications from
    syntax parsing and a list of remaining components (i.e., comment parts
    in which no relation was found).
"""
current = (schema_name, table_name, column_name)
current_text = '(schema=%s, table=%s, column=%s)' % current
relations = []
notifications = []
components = re_components.split(description)
comments = [] # remaining components
if column_name is None:
table_column_names = [x[2] for x in schema_table_column
if x[0] == schema_name and x[1] == table_name]
for component in components:
if not any([regex.search(component) for regex in relation_regexps]):
comments.append(component)
continue
for s, t, c in schema_table_column:
s_ = re.escape(s)
t_ = re.escape(t)
c_ = re.escape(c)
found = 0
            p = re.search(r' %s\.%s\.%s( |$)' % (s_, t_, c_), component)
if p:
found = 1
if not found:
                p = re.search(r' %s\.%s ?\( ?%s[, \)]' % (s_, t_, c_),
                              component)
if p:
found = 2
if not found:
                p = re.search(r' %s\.%s( |$)' % (t_, c_), component)
if p:
found = 3
if not found:
                p = re.search(r' %s ?\( ?%s[, \)]' % (t_, c_), component)
if p:
found = 4
if not found:
continue
matched1 = p.group(0)
related_schema = s
related_table = t
related_columns = [c]
if found in (3, 4):
related_schema = 'public'
if (related_schema, related_table) == current[:2]:
notifications.append(
('INFO', current_text +
' Dropping reference to same table ("%s")' % component)
)
continue
if found in (2, 4):
s1 = s + '.' if found == 2 else ''
                pattern = re.escape(s1 + t) + r' ?\((' +\
                    re.escape(c) + r'[^\)]*)\)'
m = re.search(pattern, component)
if m:
matched1 = m.group(0)
cols = m.group(1).split(',')
related_columns = []
for col in cols:
col_name = col.strip()
related = (related_schema, related_table, col_name)
if related in schema_table_column:
if related[:2] == current[:2]:
notifications.append(
('INFO', current_text +
' Dropping reference to same table ("%s")'
% component)
)
else:
related_columns.append(col_name)
else:
notifications.append(
('WARN', current_text +
' Inexistent referenced column "%s" in "%s"'
% (col_name, component))
)
else:
notifications.append(
('WARN',
current_text + ' No closing bracket: "%s"' % component)
)
m = re_cardinalities.search(component)
cardinality_self = None
cardinality_ref = None
matched2 = ''
if m:
matched2 = m.group(0)
cardinality_self = m.group(2)
cardinality_ref = m.group(3)
m_label = re_label.search(component)
matched3 = ''
label = None
if m_label:
matched3 = m_label.group(0)
label = m_label.group(2)
if found:
break
else:
notifications.append(
('WARN', current_text +
' No valid reference target found: "%s"' % component)
)
if found:
if column_name is not None:
# column comment
relations.append({
'fields': [column_name],
'reference': {
'datapackage': related_schema,
'resource': related_table,
'fields': related_columns,
'cardinalitySelf': cardinality_self,
'cardinalityRef': cardinality_ref,
'label': label
},
'enforced': False
})
else:
# table comment
rest = component.replace(matched1, '')\
.replace(matched2, '')\
.replace(matched3, '')
for col_name in table_column_names:
                    r = re.search(r'(^|\s+)\(\s*(%s\s*,[^\)]+)\)\s'
                                  % re.escape(col_name), rest)
if r:
col_s = r.group(2)
col_names = [s.strip() for s in col_s.split(',')]
if not set(col_names) <= set(table_column_names):
notifications.append(
('WARN', current_text +
' Invalid source column names "%s" found in:'
'"%s"' % (col_s, component))
)
else:
relations.append({
'fields': col_names,
'reference': {
'datapackage': related_schema,
'resource': related_table,
'fields': related_columns,
'cardinalitySelf': cardinality_self,
'cardinalityRef': cardinality_ref,
'label': label
},
'enforced': False
})
else:
comments.append(component)
return relations, notifications, comments
def _merge_foreign_keys(fk_constraints, fk_relations):
"""
Merge annotated foreign key relations into foreign key constraints.
(Both constraints and relations are for the same table.)
Add all relations, except if a matching constraint exists: then amend
the constraint by adding cardinality information.
"""
for rel in fk_relations:
for constr in fk_constraints:
if rel['fields'] == constr['fields']:
r1 = rel['reference']
r2 = constr['reference']
if r1['datapackage'] == r2['datapackage'] and \
r1['resource'] == r2['resource'] and \
r1['fields'] == r2['fields']:
r2['cardinalitySelf'] = r1['cardinalitySelf']
r2['cardinalityRef'] = r1['cardinalityRef']
break
else:
fk_constraints.append(rel)
def get_schema_table_column_triples(database):
"""
Return a list of all (schema_name, table_name, column_name)-combinations.
*database* must have the same structure as obtained from
:func:`get_database`.
"""
res = []
for schema in database:
schema_name = schema['datapackage']
for table in schema['resources']:
table_name = table['name']
for column in table['fields']:
res.append((schema_name, table_name, column['name']))
return res
|
|
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import json
import os
import os.path
import re
import tempfile
from yaml import YAMLError
from ansible.errors import AnsibleFileNotFound, AnsibleParserError
from ansible.errors.yaml_strings import YAML_SYNTAX_ERROR
from ansible.module_utils.basic import is_executable
from ansible.module_utils.six import binary_type, string_types, text_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.quoting import unquote
from ansible.parsing.vault import VaultLib, b_HEADER, is_encrypted, is_encrypted_file, parse_vaulttext_envelope
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleUnicode
from ansible.utils.path import unfrackpath
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
# Tries to determine if a path is inside a role, last dir must be 'tasks'
# this is not perfect but people should really avoid 'tasks' dirs outside roles when using Ansible.
RE_TASKS = re.compile(u'(?:^|%s)+tasks%s?$' % (os.path.sep, os.path.sep))
class DataLoader:
'''
The DataLoader class is used to load and parse YAML or JSON content,
either from a given file name or from a string that was previously
read in through other means. A Vault password can be specified, and
any vault-encrypted files will be decrypted.
Data read from files will also be cached, so the file will never be
read from disk more than once.
Usage:
dl = DataLoader()
# optionally: dl.set_vault_password('foo')
ds = dl.load('...')
ds = dl.load_from_file('/path/to/file')
'''
def __init__(self):
self._basedir = '.'
self._FILE_CACHE = dict()
self._tempfiles = set()
# initialize the vault stuff with an empty password
# TODO: replace with a ref to something that can get the password
# a creds/auth provider
# self.set_vault_password(None)
self._vaults = {}
self._vault = VaultLib()
self.set_vault_secrets(None)
# TODO: since we can query vault_secrets late, we could provide this to DataLoader init
def set_vault_secrets(self, vault_secrets):
self._vault.secrets = vault_secrets
def load(self, data, file_name='<string>', show_content=True):
'''
Creates a python datastructure from the given data, which can be either
a JSON or YAML string.
'''
new_data = None
# YAML parser will take JSON as it is a subset.
try:
# we first try to load this data as JSON
new_data = json.loads(data)
        except ValueError:
# must not be JSON, let the rest try
if isinstance(data, AnsibleUnicode):
# The PyYAML's libyaml bindings use PyUnicode_CheckExact so
# they are unable to cope with our subclass.
# Unwrap and re-wrap the unicode so we can keep track of line
# numbers
in_data = text_type(data)
else:
in_data = data
try:
new_data = self._safe_load(in_data, file_name=file_name)
except YAMLError as yaml_exc:
self._handle_error(yaml_exc, file_name, show_content)
if isinstance(data, AnsibleUnicode):
new_data = AnsibleUnicode(new_data)
new_data.ansible_pos = data.ansible_pos
return new_data
def load_from_file(self, file_name, cache=True, unsafe=False):
''' Loads data from a file, which can contain either JSON or YAML. '''
file_name = self.path_dwim(file_name)
display.debug("Loading data from %s" % file_name)
# if the file has already been read in and cached, we'll
# return those results to avoid more file/vault operations
if cache and file_name in self._FILE_CACHE:
parsed_data = self._FILE_CACHE[file_name]
else:
# read the file contents and load the data structure from them
(b_file_data, show_content) = self._get_file_contents(file_name)
file_data = to_text(b_file_data, errors='surrogate_or_strict')
parsed_data = self.load(data=file_data, file_name=file_name, show_content=show_content)
# cache the file contents for next time
self._FILE_CACHE[file_name] = parsed_data
if unsafe:
return parsed_data
else:
# return a deep copy here, so the cache is not affected
return copy.deepcopy(parsed_data)
def path_exists(self, path):
path = self.path_dwim(path)
return os.path.exists(to_bytes(path, errors='surrogate_or_strict'))
def is_file(self, path):
path = self.path_dwim(path)
return os.path.isfile(to_bytes(path, errors='surrogate_or_strict')) or path == os.devnull
def is_directory(self, path):
path = self.path_dwim(path)
return os.path.isdir(to_bytes(path, errors='surrogate_or_strict'))
def list_directory(self, path):
path = self.path_dwim(path)
return os.listdir(path)
def is_executable(self, path):
'''is the given path executable?'''
path = self.path_dwim(path)
return is_executable(path)
def _safe_load(self, stream, file_name=None):
''' Implements yaml.safe_load(), except using our custom loader class. '''
loader = AnsibleLoader(stream, file_name, self._vault.secrets)
try:
return loader.get_single_data()
finally:
try:
loader.dispose()
except AttributeError:
pass # older versions of yaml don't have dispose function, ignore
def _get_file_contents(self, file_name):
'''
Reads the file contents from the given file name
If the contents are vault-encrypted, it will decrypt them and return
the decrypted data
:arg file_name: The name of the file to read. If this is a relative
path, it will be expanded relative to the basedir
        :raises AnsibleFileNotFound: if the file_name does not refer to a file
:raises AnsibleParserError: if we were unable to read the file
:return: Returns a byte string of the file contents
'''
if not file_name or not isinstance(file_name, (binary_type, text_type)):
raise AnsibleParserError("Invalid filename: '%s'" % str(file_name))
b_file_name = to_bytes(self.path_dwim(file_name))
# This is what we really want but have to fix unittests to make it pass
# if not os.path.exists(b_file_name) or not os.path.isfile(b_file_name):
if not self.path_exists(b_file_name) or not self.is_file(b_file_name):
raise AnsibleFileNotFound("Unable to retrieve file contents", file_name=file_name)
show_content = True
try:
with open(b_file_name, 'rb') as f:
data = f.read()
if is_encrypted(data):
# FIXME: plugin vault selector
b_ciphertext, b_version, cipher_name, vault_id = parse_vaulttext_envelope(data)
data = self._vault.decrypt(data, filename=b_file_name)
show_content = False
return (data, show_content)
except (IOError, OSError) as e:
raise AnsibleParserError("an error occurred while trying to read the file '%s': %s" % (file_name, str(e)), orig_exc=e)
def _handle_error(self, yaml_exc, file_name, show_content):
'''
Optionally constructs an object (AnsibleBaseYAMLObject) to encapsulate the
file name/position where a YAML exception occurred, and raises an AnsibleParserError
to display the syntax exception information.
'''
# if the YAML exception contains a problem mark, use it to construct
# an object the error class can use to display the faulty line
err_obj = None
if hasattr(yaml_exc, 'problem_mark'):
err_obj = AnsibleBaseYAMLObject()
err_obj.ansible_pos = (file_name, yaml_exc.problem_mark.line + 1, yaml_exc.problem_mark.column + 1)
raise AnsibleParserError(YAML_SYNTAX_ERROR, obj=err_obj, show_content=show_content, orig_exc=yaml_exc)
def get_basedir(self):
''' returns the current basedir '''
return self._basedir
def set_basedir(self, basedir):
''' sets the base directory, used to find files when a relative path is given '''
if basedir is not None:
self._basedir = to_text(basedir)
def path_dwim(self, given):
'''
make relative paths work like folks expect.
'''
given = unquote(given)
given = to_text(given, errors='surrogate_or_strict')
if given.startswith(to_text(os.path.sep)) or given.startswith(u'~'):
path = given
else:
basedir = to_text(self._basedir, errors='surrogate_or_strict')
path = os.path.join(basedir, given)
return unfrackpath(path, follow=False)
def _is_role(self, path):
''' imperfect role detection, roles are still valid w/o tasks|meta/main.yml|yaml|etc '''
b_path = to_bytes(path, errors='surrogate_or_strict')
b_upath = to_bytes(unfrackpath(path, follow=False), errors='surrogate_or_strict')
for b_finddir in (b'meta', b'tasks'):
for b_suffix in (b'.yml', b'.yaml', b''):
b_main = b'main%s' % (b_suffix)
b_tasked = os.path.join(b_finddir, b_main)
if (
RE_TASKS.search(path) and
os.path.exists(os.path.join(b_path, b_main)) or
os.path.exists(os.path.join(b_upath, b_tasked)) or
os.path.exists(os.path.join(os.path.dirname(b_path), b_tasked))
):
return True
return False
def path_dwim_relative(self, path, dirname, source, is_role=False):
'''
find one file in either a role or playbook dir with or without
explicitly named dirname subdirs
Used in action plugins and lookups to find supplemental files that
could be in either place.
'''
search = []
source = to_text(source, errors='surrogate_or_strict')
# I have full path, nothing else needs to be looked at
if source.startswith(to_text(os.path.sep)) or source.startswith(u'~'):
search.append(unfrackpath(source, follow=False))
else:
# base role/play path + templates/files/vars + relative filename
search.append(os.path.join(path, dirname, source))
basedir = unfrackpath(path, follow=False)
# not told if role, but detect if it is a role and if so make sure you get correct base path
if not is_role:
is_role = self._is_role(path)
if is_role and RE_TASKS.search(path):
basedir = unfrackpath(os.path.dirname(path), follow=False)
cur_basedir = self._basedir
self.set_basedir(basedir)
# resolved base role/play path + templates/files/vars + relative filename
search.append(unfrackpath(os.path.join(basedir, dirname, source), follow=False))
self.set_basedir(cur_basedir)
if is_role and not source.endswith(dirname):
# look in role's tasks dir w/o dirname
search.append(unfrackpath(os.path.join(basedir, 'tasks', source), follow=False))
# try to create absolute path for loader basedir + templates/files/vars + filename
search.append(unfrackpath(os.path.join(dirname, source), follow=False))
# try to create absolute path for loader basedir
search.append(unfrackpath(os.path.join(basedir, source), follow=False))
# try to create absolute path for dirname + filename
search.append(self.path_dwim(os.path.join(dirname, source)))
# try to create absolute path for filename
search.append(self.path_dwim(source))
for candidate in search:
if os.path.exists(to_bytes(candidate, errors='surrogate_or_strict')):
break
return candidate
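    # Illustrative call (hypothetical paths): from a role task dir,
    # path_dwim_relative('/play/roles/r1/tasks', 'templates', 'foo.j2')
    # walks the candidate list built above -- the role's templates/ dir,
    # then its tasks/ dir, then the loader basedir combinations -- and
    # returns the first candidate that exists (or the last one if none do).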
def path_dwim_relative_stack(self, paths, dirname, source, is_role=False):
'''
find one file in first path in stack taking roles into account and adding play basedir as fallback
:arg paths: A list of text strings which are the paths to look for the filename in.
:arg dirname: A text string representing a directory. The directory
is prepended to the source to form the path to search for.
:arg source: A text string which is the filename to search for
:rtype: A text string
:returns: An absolute path to the filename ``source`` if found
        :raises: An AnsibleFileNotFound Exception if the file is not found in any of the search paths
'''
b_dirname = to_bytes(dirname)
b_source = to_bytes(source)
result = None
search = []
if source is None:
display.warning('Invalid request to find a file that matches a "null" value')
elif source and (source.startswith('~') or source.startswith(os.path.sep)):
# path is absolute, no relative needed, check existence and return source
test_path = unfrackpath(b_source, follow=False)
if os.path.exists(to_bytes(test_path, errors='surrogate_or_strict')):
result = test_path
else:
display.debug(u'evaluation_path:\n\t%s' % '\n\t'.join(paths))
for path in paths:
upath = unfrackpath(path, follow=False)
b_upath = to_bytes(upath, errors='surrogate_or_strict')
b_mydir = os.path.dirname(b_upath)
# if path is in role and 'tasks' not there already, add it into the search
if (is_role or self._is_role(path)) and b_mydir.endswith(b'tasks'):
search.append(os.path.join(os.path.dirname(b_mydir), b_dirname, b_source))
search.append(os.path.join(b_mydir, b_source))
else:
# don't add dirname if user already is using it in source
                    if b_source.split(b'/')[0] != b_dirname:
search.append(os.path.join(b_upath, b_dirname, b_source))
search.append(os.path.join(b_upath, b_source))
# always append basedir as last resort
# don't add dirname if user already is using it in source
            if b_source.split(b'/')[0] != b_dirname:
search.append(os.path.join(to_bytes(self.get_basedir()), b_dirname, b_source))
search.append(os.path.join(to_bytes(self.get_basedir()), b_source))
display.debug(u'search_path:\n\t%s' % to_text(b'\n\t'.join(search)))
for b_candidate in search:
display.vvvvv(u'looking for "%s" at "%s"' % (source, to_text(b_candidate)))
if os.path.exists(b_candidate):
result = to_text(b_candidate)
break
if result is None:
raise AnsibleFileNotFound(file_name=source, paths=[to_text(p) for p in search])
return result
def _create_content_tempfile(self, content):
''' Create a tempfile containing defined content '''
fd, content_tempfile = tempfile.mkstemp()
f = os.fdopen(fd, 'wb')
content = to_bytes(content)
try:
f.write(content)
except Exception as err:
os.remove(content_tempfile)
raise Exception(err)
finally:
f.close()
return content_tempfile
def get_real_file(self, file_path, decrypt=True):
"""
If the file is vault encrypted return a path to a temporary decrypted file
If the file is not encrypted then the path is returned
Temporary files are cleanup in the destructor
"""
if not file_path or not isinstance(file_path, (binary_type, text_type)):
raise AnsibleParserError("Invalid filename: '%s'" % to_native(file_path))
b_file_path = to_bytes(file_path, errors='surrogate_or_strict')
if not self.path_exists(b_file_path) or not self.is_file(b_file_path):
raise AnsibleFileNotFound(file_name=file_path)
real_path = self.path_dwim(file_path)
try:
if decrypt:
with open(to_bytes(real_path), 'rb') as f:
# Limit how much of the file is read since we do not know
# whether this is a vault file and therefore it could be very
# large.
if is_encrypted_file(f, count=len(b_HEADER)):
# if the file is encrypted and no password was specified,
# the decrypt call would throw an error, but we check first
# since the decrypt function doesn't know the file name
data = f.read()
if not self._vault.secrets:
raise AnsibleParserError("A vault password or secret must be specified to decrypt %s" % to_native(file_path))
data = self._vault.decrypt(data, filename=real_path)
# Make a temp file
real_path = self._create_content_tempfile(data)
self._tempfiles.add(real_path)
return real_path
except (IOError, OSError) as e:
raise AnsibleParserError("an error occurred while trying to read the file '%s': %s" % (to_native(real_path), to_native(e)), orig_exc=e)
def cleanup_tmp_file(self, file_path):
"""
Removes any temporary files created from a previous call to
get_real_file. file_path must be the path returned from a
previous call to get_real_file.
"""
if file_path in self._tempfiles:
os.unlink(file_path)
self._tempfiles.remove(file_path)
def cleanup_all_tmp_files(self):
        # iterate over a copy, since cleanup_tmp_file() mutates self._tempfiles
        for f in list(self._tempfiles):
try:
self.cleanup_tmp_file(f)
except Exception as e:
display.warning("Unable to cleanup temp files: %s" % to_native(e))
|
|
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for the Pecan API hooks."""
import json
import mock
from oslo_config import cfg
import oslo_messaging as messaging
import six
from webob import exc as webob_exc
from ironic.api.controllers import root
from ironic.api import hooks
from ironic.common import context
from ironic.tests.api import base
from ironic.tests import policy_fixture
class FakeRequest(object):
def __init__(self, headers, context, environ):
self.headers = headers
self.context = context
self.environ = environ or {}
self.version = (1, 0)
class FakeRequestState(object):
def __init__(self, headers=None, context=None, environ=None):
self.request = FakeRequest(headers, context, environ)
self.response = FakeRequest(headers, context, environ)
def set_context(self):
headers = self.request.headers
creds = {
'user': headers.get('X-User') or headers.get('X-User-Id'),
'tenant': headers.get('X-Tenant') or headers.get('X-Tenant-Id'),
'domain_id': headers.get('X-User-Domain-Id'),
'domain_name': headers.get('X-User-Domain-Name'),
'auth_token': headers.get('X-Auth-Token'),
'roles': headers.get('X-Roles', '').split(','),
}
is_admin = ('admin' in creds['roles'] or
'administrator' in creds['roles'])
is_public_api = self.request.environ.get('is_public_api', False)
show_password = ('admin' in creds['tenant'])
self.request.context = context.RequestContext(
is_admin=is_admin, is_public_api=is_public_api,
show_password=show_password, **creds)
def fake_headers(admin=False):
headers = {
'X-Auth-Token': '8d9f235ca7464dd7ba46f81515797ea0',
'X-Domain-Id': 'None',
'X-Domain-Name': 'None',
'X-Project-Domain-Id': 'default',
'X-Project-Domain-Name': 'Default',
'X-Project-Id': 'b4efa69d4ffa4973863f2eefc094f7f8',
'X-Project-Name': 'admin',
'X-Role': '_member_,admin',
'X-Roles': '_member_,admin',
'X-Tenant': 'foo',
'X-Tenant-Id': 'b4efa69d4ffa4973863f2eefc094f7f8',
'X-Tenant-Name': 'foo',
'X-User': 'foo',
'X-User-Domain-Id': 'default',
'X-User-Domain-Name': 'Default',
'X-User-Id': '604ab2a197c442c2a84aba66708a9e1e',
'X-User-Name': 'foo',
'X-OpenStack-Ironic-API-Version': '1.0'
}
if admin:
headers.update({
'X-Project-Name': 'admin',
'X-Role': '_member_,admin',
'X-Roles': '_member_,admin',
'X-Tenant': 'admin',
'X-Tenant-Name': 'admin',
})
else:
headers.update({
'X-Project-Name': 'foo',
'X-Role': '_member_',
'X-Roles': '_member_',
})
return headers
class TestNoExceptionTracebackHook(base.FunctionalTest):
TRACE = [u'Traceback (most recent call last):',
u' File "/opt/stack/ironic/ironic/openstack/common/rpc/amqp.py",'
' line 434, in _process_data\\n **args)',
u' File "/opt/stack/ironic/ironic/openstack/common/rpc/'
'dispatcher.py", line 172, in dispatch\\n result ='
' getattr(proxyobj, method)(ctxt, **kwargs)']
MSG_WITHOUT_TRACE = "Test exception message."
MSG_WITH_TRACE = MSG_WITHOUT_TRACE + "\n" + "\n".join(TRACE)
def setUp(self):
super(TestNoExceptionTracebackHook, self).setUp()
p = mock.patch.object(root.Root, 'convert')
self.root_convert_mock = p.start()
self.addCleanup(p.stop)
def test_hook_exception_success(self):
self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE)
response = self.get_json('/', path_prefix='', expect_errors=True)
actual_msg = json.loads(response.json['error_message'])['faultstring']
self.assertEqual(self.MSG_WITHOUT_TRACE, actual_msg)
def test_hook_remote_error_success(self):
test_exc_type = 'TestException'
self.root_convert_mock.side_effect = messaging.rpc.RemoteError(
test_exc_type, self.MSG_WITHOUT_TRACE, self.TRACE)
response = self.get_json('/', path_prefix='', expect_errors=True)
# NOTE(max_lobur): For RemoteError the client message will still have
# some garbage because in RemoteError traceback is serialized as a list
        # instead of '\n'.join(trace). But since RemoteError is a rather
        # rare thing (it happens due to wrong deserialization settings etc.)
# we don't care about this garbage.
expected_msg = ("Remote error: %s %s"
% (test_exc_type, self.MSG_WITHOUT_TRACE)
+ ("\n[u'" if six.PY2 else "\n['"))
actual_msg = json.loads(response.json['error_message'])['faultstring']
self.assertEqual(expected_msg, actual_msg)
def test_hook_without_traceback(self):
msg = "Error message without traceback \n but \n multiline"
self.root_convert_mock.side_effect = Exception(msg)
response = self.get_json('/', path_prefix='', expect_errors=True)
actual_msg = json.loads(response.json['error_message'])['faultstring']
self.assertEqual(msg, actual_msg)
def test_hook_server_debug_on_serverfault(self):
cfg.CONF.set_override('debug', True)
self.root_convert_mock.side_effect = Exception(self.MSG_WITH_TRACE)
response = self.get_json('/', path_prefix='', expect_errors=True)
actual_msg = json.loads(
response.json['error_message'])['faultstring']
self.assertEqual(self.MSG_WITHOUT_TRACE, actual_msg)
def test_hook_server_debug_on_clientfault(self):
cfg.CONF.set_override('debug', True)
client_error = Exception(self.MSG_WITH_TRACE)
client_error.code = 400
self.root_convert_mock.side_effect = client_error
response = self.get_json('/', path_prefix='', expect_errors=True)
actual_msg = json.loads(
response.json['error_message'])['faultstring']
self.assertEqual(self.MSG_WITH_TRACE, actual_msg)
class TestContextHook(base.FunctionalTest):
@mock.patch.object(context, 'RequestContext')
def test_context_hook_not_admin(self, mock_ctx):
headers = fake_headers(admin=False)
reqstate = FakeRequestState(headers=headers)
context_hook = hooks.ContextHook(None)
context_hook.before(reqstate)
mock_ctx.assert_called_with(
auth_token=headers['X-Auth-Token'],
user=headers['X-User'],
tenant=headers['X-Tenant'],
domain_id=headers['X-User-Domain-Id'],
domain_name=headers['X-User-Domain-Name'],
is_public_api=False,
show_password=False,
is_admin=False,
roles=headers['X-Roles'].split(','))
@mock.patch.object(context, 'RequestContext')
def test_context_hook_admin(self, mock_ctx):
headers = fake_headers(admin=True)
reqstate = FakeRequestState(headers=headers)
context_hook = hooks.ContextHook(None)
context_hook.before(reqstate)
mock_ctx.assert_called_with(
auth_token=headers['X-Auth-Token'],
user=headers['X-User'],
tenant=headers['X-Tenant'],
domain_id=headers['X-User-Domain-Id'],
domain_name=headers['X-User-Domain-Name'],
is_public_api=False,
show_password=True,
is_admin=True,
roles=headers['X-Roles'].split(','))
@mock.patch.object(context, 'RequestContext')
def test_context_hook_public_api(self, mock_ctx):
headers = fake_headers(admin=True)
env = {'is_public_api': True}
reqstate = FakeRequestState(headers=headers, environ=env)
context_hook = hooks.ContextHook(None)
context_hook.before(reqstate)
mock_ctx.assert_called_with(
auth_token=headers['X-Auth-Token'],
user=headers['X-User'],
tenant=headers['X-Tenant'],
domain_id=headers['X-User-Domain-Id'],
domain_name=headers['X-User-Domain-Name'],
is_public_api=True,
show_password=True,
is_admin=True,
roles=headers['X-Roles'].split(','))
class TestContextHookCompatJuno(TestContextHook):
def setUp(self):
super(TestContextHookCompatJuno, self).setUp()
self.policy = self.useFixture(
policy_fixture.PolicyFixture(compat='juno'))
# override two cases because Juno has no "show_password" policy
@mock.patch.object(context, 'RequestContext')
def test_context_hook_admin(self, mock_ctx):
headers = fake_headers(admin=True)
reqstate = FakeRequestState(headers=headers)
context_hook = hooks.ContextHook(None)
context_hook.before(reqstate)
mock_ctx.assert_called_with(
auth_token=headers['X-Auth-Token'],
user=headers['X-User'],
tenant=headers['X-Tenant'],
domain_id=headers['X-User-Domain-Id'],
domain_name=headers['X-User-Domain-Name'],
is_public_api=False,
show_password=False,
is_admin=True,
roles=headers['X-Roles'].split(','))
@mock.patch.object(context, 'RequestContext')
def test_context_hook_public_api(self, mock_ctx):
headers = fake_headers(admin=True)
env = {'is_public_api': True}
reqstate = FakeRequestState(headers=headers, environ=env)
context_hook = hooks.ContextHook(None)
context_hook.before(reqstate)
mock_ctx.assert_called_with(
auth_token=headers['X-Auth-Token'],
user=headers['X-User'],
tenant=headers['X-Tenant'],
domain_id=headers['X-User-Domain-Id'],
domain_name=headers['X-User-Domain-Name'],
is_public_api=True,
show_password=False,
is_admin=True,
roles=headers['X-Roles'].split(','))
class TestTrustedCallHook(base.FunctionalTest):
def test_trusted_call_hook_not_admin(self):
headers = fake_headers(admin=False)
reqstate = FakeRequestState(headers=headers)
reqstate.set_context()
trusted_call_hook = hooks.TrustedCallHook()
self.assertRaises(webob_exc.HTTPForbidden,
trusted_call_hook.before, reqstate)
def test_trusted_call_hook_admin(self):
headers = fake_headers(admin=True)
reqstate = FakeRequestState(headers=headers)
reqstate.set_context()
trusted_call_hook = hooks.TrustedCallHook()
trusted_call_hook.before(reqstate)
def test_trusted_call_hook_public_api(self):
headers = fake_headers(admin=False)
env = {'is_public_api': True}
reqstate = FakeRequestState(headers=headers, environ=env)
reqstate.set_context()
trusted_call_hook = hooks.TrustedCallHook()
trusted_call_hook.before(reqstate)
class TestTrustedCallHookCompatJuno(TestTrustedCallHook):
def setUp(self):
super(TestTrustedCallHookCompatJuno, self).setUp()
self.policy = self.useFixture(
policy_fixture.PolicyFixture(compat='juno'))
def test_trusted_call_hook_public_api(self):
self.skipTest('no public_api trusted call policy in juno')
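# Editor's note (hedged): the *CompatJuno subclasses above rerun every inherited
# test case; only the cases that depend on the "show_password" and public-api
# trusted-call policies, which do not exist in Juno, are overridden or skipped.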
|
|
# Copyright 2020 Tensorforce Team. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from tempfile import TemporaryDirectory
import unittest
from tensorforce import Agent
from test.unittest_base import UnittestBase
class TestAgents(UnittestBase, unittest.TestCase):
agent = dict(
config=dict(device='CPU', eager_mode=False, create_debug_assertions=True, tf_log_level=20)
)
def test_a2c(self):
self.start_tests(name='A2C')
# TODO: baseline horizon has to be equal to policy horizon
agent, environment = self.prepare(
agent='a2c', batch_size=4, network=dict(type='auto', size=8, depth=1, rnn=2),
critic=dict(type='auto', size=7, depth=1, rnn=2)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_ac(self):
self.start_tests(name='AC')
# TODO: baseline horizon has to be equal to policy horizon
agent, environment = self.prepare(
agent='ac', batch_size=4, network=dict(type='auto', size=8, depth=1, rnn=2),
critic=dict(type='auto', size=7, depth=1, rnn=2)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_constant(self):
self.start_tests(name='Constant')
self.unittest(num_episodes=2, experience_update=False, agent='constant')
def test_dpg(self):
self.start_tests(name='DPG')
actions = dict(
gaussian_action1=dict(type='float', shape=(1, 2), min_value=1.0, max_value=2.0),
gaussian_action2=dict(type='float', shape=(1,), min_value=-2.0, max_value=1.0)
)
agent, environment = self.prepare(
actions=actions, agent='dpg', memory=100, batch_size=4,
# TODO: no-RNN restriction can be removed
network=dict(type='auto', size=8, depth=1, rnn=False),
# TODO: cannot use RNN since value function takes states and actions
critic=dict(type='auto', size=7, depth=1, rnn=False)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_double_dqn(self):
self.start_tests(name='DoubleDQN')
agent, environment = self.prepare(
actions=dict(type='int', shape=(2,), num_values=4),
agent='double_dqn', memory=100, batch_size=4,
network=dict(type='auto', size=8, depth=1, rnn=2)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_dqn(self):
self.start_tests(name='DQN')
agent, environment = self.prepare(
actions=dict(type='int', shape=(2,), num_values=4),
agent='dqn', memory=100, batch_size=4,
network=dict(type='auto', size=8, depth=1, rnn=2)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_dueling_dqn(self):
self.start_tests(name='DuelingDQN')
agent, environment = self.prepare(
actions=dict(type='int', shape=(2,), num_values=4),
agent='dueling_dqn', memory=100, batch_size=4,
network=dict(type='auto', size=8, depth=1, rnn=2)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_ppo(self):
self.start_tests(name='PPO')
agent, environment = self.prepare(
agent='ppo', batch_size=2, network=dict(type='auto', size=8, depth=1, rnn=2),
baseline=dict(type='auto', size=7, depth=1, rnn=1),
baseline_optimizer=dict(optimizer='adam', learning_rate=1e-3)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_random(self):
self.start_tests(name='Random')
self.unittest(num_episodes=2, experience_update=False, agent='random')
def test_tensorforce(self):
self.start_tests(name='Tensorforce')
# Explicit, singleton state/action
self.unittest(
states=dict(type='float', shape=(), min_value=1.0, max_value=2.0),
actions=dict(type='int', shape=(), num_values=4),
agent='tensorforce', **UnittestBase.agent
)
# Implicit
agent, environment = self.prepare(**UnittestBase.agent)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_trpo(self):
self.start_tests(name='TRPO')
agent, environment = self.prepare(
agent='trpo', batch_size=2, network=dict(type='auto', size=8, depth=1, rnn=2),
baseline=dict(type='auto', size=7, depth=1, rnn=1),
baseline_optimizer=dict(optimizer='adam', learning_rate=1e-3)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
def test_vpg(self):
self.start_tests(name='VPG')
agent, environment = self.prepare(
agent='vpg', batch_size=2, network=dict(type='auto', size=8, depth=1, rnn=2),
baseline=dict(type='auto', size=7, depth=1, rnn=1),
baseline_optimizer=dict(optimizer='adam', learning_rate=1e-3)
)
self.execute(agent=agent, environment=environment)
with TemporaryDirectory() as directory:
agent.save(directory=directory, format='numpy')
agent = Agent.load(directory=directory)
states = environment.reset()
agent.act(states=states)
agent.close()
environment.close()
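    # Editor's note (hedged sketch): every test above repeats the same
    # save/load round-trip after self.execute(). Assuming nothing else
    # differs, it could be factored into a helper such as:
    #
    #     def _check_save_load_roundtrip(self, agent, environment):
    #         with TemporaryDirectory() as directory:
    #             agent.save(directory=directory, format='numpy')
    #             agent = Agent.load(directory=directory)
    #             states = environment.reset()
    #             agent.act(states=states)
    #             agent.close()
    #         environment.close()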
|
|
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""linebot.api module."""
from __future__ import unicode_literals
import json
from .__about__ import __version__
from .exceptions import LineBotApiError
from .http_client import HttpClient, RequestsHttpClient
from .models.error import Error
from .models.responses import Profile, MessageContent
class LineBotApi(object):
"""LineBotApi provides interface for LINE messaging API."""
DEFAULT_API_ENDPOINT = 'https://api.line.me'
def __init__(self, channel_access_token, endpoint=DEFAULT_API_ENDPOINT,
timeout=HttpClient.DEFAULT_TIMEOUT, http_client=RequestsHttpClient):
"""__init__ method.
:param str channel_access_token: Your channel access token
:param str endpoint: (optional) Default is https://api.line.me
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is linebot.http_client.HttpClient.DEFAULT_TIMEOUT
:type timeout: float | tuple(float, float)
:param http_client: (optional) Default is
:py:class:`linebot.http_client.RequestsHttpClient`
:type http_client: T <= :py:class:`linebot.http_client.HttpClient`
"""
self.endpoint = endpoint
self.headers = {
'Authorization': 'Bearer ' + channel_access_token,
'User-Agent': 'line-bot-sdk-python/' + __version__
}
if http_client:
self.http_client = http_client(timeout=timeout)
else:
self.http_client = RequestsHttpClient(timeout=timeout)
def reply_message(self, reply_token, messages, timeout=None):
"""Call reply message API.
https://devdocs.line.me/en/#reply-message
Respond to events from users, groups, and rooms.
Webhooks are used to notify you when an event occurs.
For events that you can respond to, a replyToken is issued for replying to messages.
Because the replyToken becomes invalid after a certain period of time,
responses should be sent as soon as a message is received.
Reply tokens can only be used once.
:param str reply_token: replyToken received via webhook
:param messages: Messages.
Max: 5
:type messages: T <= :py:class:`linebot.models.send_messages.SendMessage` |
list[T <= :py:class:`linebot.models.send_messages.SendMessage`]
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
"""
if not isinstance(messages, (list, tuple)):
messages = [messages]
data = {
'replyToken': reply_token,
'messages': [message.as_json_dict() for message in messages]
}
self._post(
'/v2/bot/message/reply', data=json.dumps(data), timeout=timeout
)
def push_message(self, to, messages, timeout=None):
"""Call push message API.
https://devdocs.line.me/en/#push-message
Send messages to users, groups, and rooms at any time.
:param str to: ID of the receiver
:param messages: Messages.
Max: 5
:type messages: T <= :py:class:`linebot.models.send_messages.SendMessage` |
list[T <= :py:class:`linebot.models.send_messages.SendMessage`]
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
"""
if not isinstance(messages, (list, tuple)):
messages = [messages]
data = {
'to': to,
'messages': [message.as_json_dict() for message in messages]
}
self._post(
'/v2/bot/message/push', data=json.dumps(data), timeout=timeout
)
def get_profile(self, user_id, timeout=None):
"""Call get profile API.
https://devdocs.line.me/en/#bot-api-get-profile
Get user profile information.
:param str user_id: User ID
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
:rtype: :py:class:`linebot.models.responses.Profile`
:return: Profile instance
"""
response = self._get(
'/v2/bot/profile/{user_id}'.format(user_id=user_id),
timeout=timeout
)
return Profile.new_from_json_dict(response.json)
def get_message_content(self, message_id, timeout=None):
"""Call get content API.
https://devdocs.line.me/en/#get-content
Retrieve image, video, and audio data sent by users.
:param str message_id: Message ID
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
:rtype: :py:class:`linebot.models.responses.MessageContent`
:return: MessageContent instance
"""
response = self._get(
'/v2/bot/message/{message_id}/content'.format(message_id=message_id),
stream=True, timeout=timeout
)
return MessageContent(response)
def leave_group(self, group_id, timeout=None):
"""Call leave group API.
https://devdocs.line.me/en/#leave
Leave a group.
:param str group_id: Group ID
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
"""
self._post(
'/v2/bot/group/{group_id}/leave'.format(group_id=group_id),
timeout=timeout
)
def leave_room(self, room_id, timeout=None):
"""Call leave room API.
https://devdocs.line.me/en/#leave
Leave a room.
:param str room_id: Room ID
:param timeout: (optional) How long to wait for the server
to send data before giving up, as a float,
            or a (connect timeout, read timeout) float tuple.
Default is self.http_client.timeout
:type timeout: float | tuple(float, float)
"""
self._post(
'/v2/bot/room/{room_id}/leave'.format(room_id=room_id),
timeout=timeout
)
def _get(self, path, stream=False, timeout=None):
url = self.endpoint + path
response = self.http_client.get(
url, headers=self.headers, stream=stream, timeout=timeout
)
self.__check_error(response)
return response
def _post(self, path, data=None, timeout=None):
url = self.endpoint + path
headers = {'Content-Type': 'application/json'}
headers.update(self.headers)
response = self.http_client.post(
url, headers=headers, data=data, timeout=timeout
)
self.__check_error(response)
return response
@staticmethod
def __check_error(response):
if 200 <= response.status_code < 300:
pass
else:
error = Error.new_from_json_dict(response.json)
raise LineBotApiError(response.status_code, error)
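# Usage sketch (editor's addition, hedged): a minimal reply flow, assuming a
# webhook handler has supplied `reply_token`; the channel access token below
# is a placeholder and TextSendMessage is the SDK's plain-text message model.
#
#     from linebot import LineBotApi
#     from linebot.models import TextSendMessage
#
#     line_bot_api = LineBotApi('<channel access token>')
#     line_bot_api.reply_message(
#         reply_token, TextSendMessage(text='Hello, world!'))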
|
|
# -*- coding: utf-8 -*-
import re
from django import forms
from django.db import models
from django.contrib import messages
from django.core.exceptions import ValidationError
from django.db import transaction
from django.http import Http404
from django.http import HttpResponseRedirect
from django.utils import timezone
from django.utils.translation import gettext_lazy, pgettext_lazy
from django.views.generic import TemplateView
from django.views.generic import View
from crispy_forms import layout
from cradmin_legacy import crapp
from cradmin_legacy.crinstance import reverse_cradmin_url
from cradmin_legacy.crispylayouts import PrimarySubmit
from cradmin_legacy.viewhelpers import formbase
from cradmin_legacy.viewhelpers import update, delete, crudbase
from cradmin_legacy.viewhelpers import listbuilderview
from cradmin_legacy.viewhelpers.listbuilder import itemvalue
from cradmin_legacy.viewhelpers import multiselect2view
from cradmin_legacy.viewhelpers import multiselect2
from devilry.apps.core.models import PeriodTag
from devilry.apps.core.models import RelatedStudent, RelatedExaminer
from devilry.devilry_admin.cradminextensions.listfilter import listfilter_tags, listfilter_relateduser
class TagItemValue(itemvalue.EditDelete):
template_name = 'devilry_admin/period/manage_tags/tag-item-value.django.html'
def get_title(self):
return self.value.displayname
class HideShowPeriodTag(TemplateView):
def dispatch(self, request, *args, **kwargs):
tag_id = self.__get_tag_id(request)
period_tag = self.__get_period_tag(tag_id)
        period_tag.is_hidden = not period_tag.is_hidden
period_tag.full_clean()
period_tag.save()
return HttpResponseRedirect(str(self.request.cradmin_app.reverse_appindexurl()))
def __get_tag_id(self, request):
tag_id = request.GET.get('tag_id', None)
if not tag_id:
raise Http404('Missing parameters.')
return tag_id
def __get_period_tag(self, tag_id):
try:
period_tag = PeriodTag.objects.get(id=tag_id)
except PeriodTag.DoesNotExist:
raise Http404('Tag does not exist.')
return period_tag
class TagListBuilderListView(listbuilderview.FilterListMixin, listbuilderview.View):
"""
"""
template_name = 'devilry_admin/period/manage_tags/manage-tags-list-view.django.html'
model = PeriodTag
value_renderer_class = TagItemValue
paginate_by = 10
def get_pagetitle(self):
return gettext_lazy('Tags on %(what)s') % {'what': self.request.cradmin_role.parentnode}
def add_filterlist_items(self, filterlist):
filterlist.append(listfilter_tags.Search())
# filterlist.append(listfilter_tags.IsHiddenFilter())
filterlist.append(listfilter_tags.IsHiddenRadioFilter())
def get_filterlist_url(self, filters_string):
return self.request.cradmin_app.reverse_appurl(
'filter',
kwargs={'filters_string': filters_string}
)
def get_unfiltered_queryset_for_role(self, role):
queryset = self.model.objects.filter(period=role)\
.prefetch_related(
models.Prefetch('relatedstudents',
queryset=RelatedStudent.objects.all().select_related('user')
.order_by('user__shortname')))\
.prefetch_related(
models.Prefetch('relatedexaminers',
queryset=RelatedExaminer.objects.all().select_related('user')
.order_by('user__shortname')))
return queryset
def get_no_items_message(self):
return pgettext_lazy(
'TagListBuilderListView get_no_items_message',
'No period tags'
)
class CreatePeriodTagForm(forms.Form):
tag_text = forms.CharField()
def __init__(self, *args, **kwargs):
super(CreatePeriodTagForm, self).__init__(*args, **kwargs)
self.fields['tag_text'].label = gettext_lazy('Tags')
self.fields['tag_text'].help_text = gettext_lazy(
            'Enter tags here. Tags must be in a comma-separated format, '
            'e.g.: tag1, tag2, tag3. '
'Each tag may be up to 15 characters long.'
)
self.fields['tag_text'].widget = forms.Textarea()
def get_added_tags_list(self):
"""
        Get a list of all the tags added in the form, split on commas.
Returns:
(list): List of tags as strings.
"""
return [tag.strip() for tag in self.cleaned_data['tag_text'].split(',')
if len(tag.strip()) > 0]
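    # Example (editor's sketch): with tag_text = ' math101, physics ,, ',
    # get_added_tags_list() returns ['math101', 'physics']; entries are
    # stripped and whitespace-only pieces are dropped by its filter.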
def clean(self):
super(CreatePeriodTagForm, self).clean()
if 'tag_text' not in self.cleaned_data or len(self.cleaned_data['tag_text']) == 0:
raise ValidationError(gettext_lazy('Tag field is empty.'))
tags_list = self.get_added_tags_list()
        if len(tags_list) == 0:
            raise ValidationError(
                {'tag_text': gettext_lazy('Wrong format. Example: tag1, tag2, tag3')}
            )
for tag in tags_list:
if len(tag) > 15:
raise ValidationError(
{'tag_text': gettext_lazy('One or more tags exceed the limit of 15 characters.')}
)
if tags_list.count(tag) > 1:
raise ValidationError(
{'tag_text': gettext_lazy('"%(what)s" occurs more than once in the form.') % {'what': tag}}
)
class AddTagsView(formbase.FormView):
"""
    View for adding new tags to the semester.
"""
template_name = 'devilry_admin/period/manage_tags/add-tag.django.html'
form_class = CreatePeriodTagForm
@classmethod
def deserialize_preview(cls, serialized):
pass
def serialize_preview(self, form):
pass
def get_field_layout(self):
return [
layout.Div(
layout.Field('tag_text', focusonme='focusonme'),
css_class='cradmin-globalfields'
)
]
def get_buttons(self):
return [
PrimarySubmit('add_tags', gettext_lazy('Add tags'))
]
def get_success_url(self):
return reverse_cradmin_url(
instanceid='devilry_admin_periodadmin',
appname='manage_tags',
roleid=self.request.cradmin_role.id,
viewname=crapp.INDEXVIEW_NAME
)
def __create_tags(self, tags_string_list, excluded_tags):
tags = []
period = self.request.cradmin_role
for tag_string in tags_string_list:
if tag_string not in excluded_tags:
tags.append(PeriodTag(period=period, tag=tag_string))
with transaction.atomic():
PeriodTag.objects.bulk_create(tags)
return len(tags)
def form_valid(self, form):
tags_string_list = form.get_added_tags_list()
excluded_tags = PeriodTag.objects\
.filter_editable_tags_on_period(period=self.request.cradmin_role)\
.filter(tag__in=tags_string_list)\
.values_list('tag', flat=True)
        # Check if all the tags to be added already exist.
        if len(tags_string_list) == excluded_tags.count():
            self.add_error_message(gettext_lazy('The tag(s) you wanted to add already exist.'))
return HttpResponseRedirect(str(self.request.cradmin_app.reverse_appurl(viewname='add_tag')))
# Add success message.
num_tags_created = self.__create_tags(tags_string_list, excluded_tags)
message = gettext_lazy('%(created)d tag(s) added') % {'created': num_tags_created}
if excluded_tags.count() > 0:
message += gettext_lazy(
', %(excluded)d tag(s) already existed and were ignored.') % {
'excluded': excluded_tags.count()
}
self.add_success_message(message)
return super(AddTagsView, self).form_valid(form=form)
def add_success_message(self, message):
messages.success(self.request, message=message)
def add_error_message(self, message):
messages.error(self.request, message=message)
def get_context_data(self, **kwargs):
context_data = super(AddTagsView, self).get_context_data(**kwargs)
period = self.request.cradmin_role
context_data['period'] = period
context_data['period_tags'] = PeriodTag.objects.filter(period=period)
return context_data
class EditPeriodTagForm(forms.ModelForm):
"""
Form for editing :class:`~.devilry.apps.core.models.period_tag.PeriodTag`s.
"""
class Meta:
model = PeriodTag
fields = [
'tag',
]
def __init__(self, *args, **kwargs):
self.period = kwargs.pop('period')
self.tagobject = kwargs.pop('tagobject')
super(EditPeriodTagForm, self).__init__(*args, **kwargs)
self.fields['tag'].label = gettext_lazy('Tag name')
self.fields['tag'].help_text = gettext_lazy(
'Rename the tag here. Up to 15 characters. '
            'Can contain any character except comma (,).'
)
def clean(self):
cleaned_data = super(EditPeriodTagForm, self).clean()
if 'tag' not in self.cleaned_data or len(self.cleaned_data['tag']) == 0:
raise ValidationError(
{'tag': gettext_lazy('Tag cannot be empty.')}
)
tag = cleaned_data['tag']
if PeriodTag.objects.filter(period=self.period, tag=tag).exists():
if tag != self.tagobject.tag:
raise ValidationError(gettext_lazy('%(what)s already exists') % {'what': tag})
if ',' in tag:
raise ValidationError(
                {'tag': gettext_lazy('Tag contains a comma (,).')}
)
return cleaned_data
class EditDeleteViewMixin(View):
"""
Edit/delete mixin for :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
Raises:
Http404: if prefix :attr:`~.devilry.apps.core.models.period_tag.PeriodTag.prefix`
is not blank.
"""
model = PeriodTag
def dispatch(self, request, *args, **kwargs):
self.tag_id = kwargs.get('pk')
self.tag = PeriodTag.objects.get(period=self.request.cradmin_role, id=self.tag_id)
if self.tag.prefix != '':
raise Http404()
return super(EditDeleteViewMixin, self).dispatch(request, *args, **kwargs)
def get_queryset_for_role(self, role):
return PeriodTag.objects.filter(period=role, id=self.tag_id)
def get_success_url(self):
return str(self.request.cradmin_app.reverse_appindexurl())
class EditTagView(crudbase.OnlySaveButtonMixin, EditDeleteViewMixin, update.UpdateView):
"""
Edit a :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
"""
template_name = 'devilry_admin/period/manage_tags/crud.django.html'
form_class = EditPeriodTagForm
def get_pagetitle(self):
return gettext_lazy('Edit %(what)s') % {
'what': self.tag.displayname
}
def get_field_layout(self):
return [
layout.Div(
layout.Field('tag', focusonme='focusonme'),
css_class='cradmin-globalfields'
)
]
def save_object(self, form, commit=True):
period_tag = super(EditTagView, self).save_object(form=form, commit=False)
period_tag.modified_datetime = timezone.now()
self.add_success_messages(gettext_lazy('Tag successfully edited.'))
return super(EditTagView, self).save_object(form=form, commit=True)
def get_form_kwargs(self):
kwargs = super(EditTagView, self).get_form_kwargs()
kwargs['period'] = self.request.cradmin_role
kwargs['tagobject'] = self.tag
return kwargs
def get_context_data(self, **kwargs):
period = self.request.cradmin_role
context_data = super(EditTagView, self).get_context_data(**kwargs)
context_data['period'] = period
context_data['period_tags'] = PeriodTag.objects\
.filter_editable_tags_on_period(period=period)
return context_data
class DeleteTagView(EditDeleteViewMixin, delete.DeleteView):
"""
Delete a :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
"""
template_name = 'devilry_admin/period/manage_tags/delete.django.html'
def get_object_preview(self):
periodtag = self.model.objects.get(id=self.tag_id)
return periodtag.tag
class SelectedRelatedUsersForm(forms.Form):
invalid_item_selected_message = gettext_lazy(
'Invalid user was selected. This may happen if someone else added or '
'removed one or more of the available users while you were selecting. '
'Please try again.'
)
selected_items = forms.ModelMultipleChoiceField(
queryset=None,
error_messages={
'invalid_choice': invalid_item_selected_message
}
)
def __init__(self, *args, **kwargs):
relatedusers_queryset = kwargs.pop('relatedusers_queryset')
super(SelectedRelatedUsersForm, self).__init__(*args, **kwargs)
self.fields['selected_items'].queryset = relatedusers_queryset
class SelectedItemsTarget(multiselect2.target_renderer.Target):
def __init__(self, *args, **kwargs):
self.relateduser_type = kwargs.pop('relateduser_type')
super(SelectedItemsTarget, self).__init__(*args, **kwargs)
def get_with_items_title(self):
return pgettext_lazy('admin multiselect2_relateduser',
'Selected %(what)s') % {'what': self.relateduser_type}
def get_without_items_text(self):
return pgettext_lazy('admin multiselect2_relateduser',
'No %(what)s selected') % {'what': self.relateduser_type}
class SelectedRelatedUserItem(multiselect2.selected_item_renderer.SelectedItem):
valuealias = 'relateduser'
def get_title(self):
return self.relateduser.user.shortname
class SelectableRelatedUserItem(multiselect2.listbuilder_itemvalues.ItemValue):
valuealias = 'relateduser'
selected_item_renderer_class = SelectedRelatedUserItem
def get_title(self):
return self.relateduser.user.shortname
class BaseRelatedUserMultiSelectView(multiselect2view.ListbuilderFilterView):
"""
Base multiselect view for :class:`~.devilry.apps.core.models.relateduser.RelatedExaminer`s and
    :class:`~.devilry.apps.core.models.relateduser.RelatedStudent`s.
"""
template_name = 'devilry_admin/period/manage_tags/base-multiselect-view.django.html'
value_renderer_class = SelectableRelatedUserItem
form_class = SelectedRelatedUsersForm
paginate_by = 20
    #: The id of the specific :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
tag_id = None
    #: Type of related user as shown in the UI,
    #: e.g. 'student' or 'examiner'.
relateduser_string = ''
def dispatch(self, request, *args, **kwargs):
self.tag_id = kwargs.get('tag_id')
return super(BaseRelatedUserMultiSelectView, self).dispatch(request, *args, **kwargs)
def get_target_renderer_class(self):
return SelectedItemsTarget
def get_period_tag(self):
return PeriodTag.objects.get(id=self.tag_id)
def get_tags_for_period(self):
return PeriodTag.objects.filter(period=self.request.cradmin_role)
def add_filterlist_items(self, filterlist):
filterlist.append(listfilter_relateduser.Search())
filterlist.append(listfilter_relateduser.OrderRelatedStudentsFilter())
filterlist.append(listfilter_relateduser.TagSelectFilter(period=self.request.cradmin_role))
def get_unfiltered_queryset_for_role(self, role):
"""
Get all relatedstudents for the period that are not already registered on the
tag provided with the url.
"""
return self.model.objects.filter(period=role)
def get_form_kwargs(self):
period = self.request.cradmin_role
kwargs = super(BaseRelatedUserMultiSelectView, self).get_form_kwargs()
kwargs['relatedusers_queryset'] = self.get_queryset_for_role(role=period)
return kwargs
def get_target_renderer_kwargs(self):
kwargs = super(BaseRelatedUserMultiSelectView, self).get_target_renderer_kwargs()
kwargs['relateduser_type'] = self.relateduser_string
return kwargs
def add_success_message(self, message):
messages.success(self.request, message=message)
def add_error_message(self, message):
messages.error(self.request, message=message)
def get_success_url(self):
return str(self.request.cradmin_app.reverse_appindexurl())
class AddRelatedUserToTagMultiSelectView(BaseRelatedUserMultiSelectView):
"""
Add related users to a :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
"""
def get_pagetitle(self):
tag_displayname = self.get_period_tag().displayname
return gettext_lazy(
'Add %(user)s to %(tag)s') % {
'user': self.relateduser_string,
'tag': tag_displayname
}
def get_queryset_for_role(self, role):
return super(AddRelatedUserToTagMultiSelectView, self)\
.get_queryset_for_role(role=role)\
.exclude(periodtag__id=self.tag_id)
def add_related_users(self, period_tag, related_users):
with transaction.atomic():
for related_user in related_users:
related_user.periodtag_set.add(period_tag)
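    # Editor's note (hedged): periodtag_set is Django's reverse many-to-many
    # manager on the related user, so add() and remove() in these views only
    # touch the m2m through table; no related-user rows are created or deleted.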
def form_valid(self, form):
period_tag = self.get_period_tag()
related_users = form.cleaned_data['selected_items']
self.add_related_users(period_tag=period_tag, related_users=related_users)
self.add_success_message(
message=gettext_lazy(
'%(number_users)d %(user_string)s added successfully.'
) % {
'number_users': len(related_users),
'user_string': self.relateduser_string,
}
)
return super(AddRelatedUserToTagMultiSelectView, self).form_valid(form=form)
class RemoveRelatedUserFromTagMultiSelectView(BaseRelatedUserMultiSelectView):
"""
Remove related users from a :class:`~.devilry.apps.core.models.period_tag.PeriodTag`.
"""
def get_pagetitle(self):
tag_displayname = self.get_period_tag().displayname
return gettext_lazy(
'Remove %(user)s from %(tag)s'
) % {
'user': self.relateduser_string,
'tag': tag_displayname
}
def get_queryset_for_role(self, role):
return super(RemoveRelatedUserFromTagMultiSelectView, self)\
.get_queryset_for_role(role=role)\
.filter(periodtag__id=self.tag_id)
def remove_related_users(self, period_tag, related_users):
with transaction.atomic():
for related_user in related_users:
related_user.periodtag_set.remove(period_tag)
def form_valid(self, form):
period_tag = self.get_period_tag()
related_users = form.cleaned_data['selected_items']
self.remove_related_users(period_tag=period_tag, related_users=related_users)
self.add_success_message(
message=gettext_lazy(
'%(number_users)d %(user_string)s removed successfully'
) % {
'number_users': len(related_users),
'user_string': self.relateduser_string
}
)
return super(RemoveRelatedUserFromTagMultiSelectView, self).form_valid(form=form)
class SelectedRelatedExaminerForm(SelectedRelatedUsersForm):
invalid_item_selected_message = gettext_lazy('Invalid examiner was selected.')
class SelectedRelatedStudentForm(SelectedRelatedUsersForm):
invalid_item_selected_message = gettext_lazy('Invalid student was selected.')
class ExaminerMultiSelectViewMixin(object):
model = RelatedExaminer
relateduser_string = gettext_lazy('examiner')
form_class = SelectedRelatedExaminerForm
class StudentMultiSelectViewMixin(object):
model = RelatedStudent
relateduser_string = gettext_lazy('student')
form_class = SelectedRelatedStudentForm
class RelatedExaminerAddView(ExaminerMultiSelectViewMixin, AddRelatedUserToTagMultiSelectView):
"""
Multi-select add view for :class:`~.devilry.apps.core.models.relateduser.RelatedExaminer`.
"""
def get_filterlist_url(self, filters_string):
return self.request.cradmin_app.reverse_appurl(
'add_examiners_filter', kwargs={
'tag_id': self.tag_id,
'filters_string': filters_string
})
class RelatedExaminerRemoveView(ExaminerMultiSelectViewMixin, RemoveRelatedUserFromTagMultiSelectView):
"""
Multi-select remove view for :class:`~.devilry.apps.core.models.relateduser.RelatedExaminer`.
"""
def get_filterlist_url(self, filters_string):
return self.request.cradmin_app.reverse_appurl(
'remove_examiners_filter', kwargs={
'tag_id': self.tag_id,
'filters_string': filters_string
})
class RelatedStudentAddView(StudentMultiSelectViewMixin, AddRelatedUserToTagMultiSelectView):
"""
Multi-select add view for :class:`~.devilry.apps.core.models.relateduser.RelatedStudent`.
"""
def get_filterlist_url(self, filters_string):
return self.request.cradmin_app.reverse_appurl(
'add_students_filter', kwargs={
'tag_id': self.tag_id,
'filters_string': filters_string
})
class RelatedStudentRemoveView(StudentMultiSelectViewMixin, RemoveRelatedUserFromTagMultiSelectView):
"""
Multi-select remove view for :class:`~.devilry.apps.core.models.relateduser.RelatedStudent`.
"""
def get_filterlist_url(self, filters_string):
return self.request.cradmin_app.reverse_appurl(
'remove_students_filter', kwargs={
'tag_id': self.tag_id,
'filters_string': filters_string
})
class App(crapp.App):
appurls = [
crapp.Url(r'^$',
TagListBuilderListView.as_view(),
name=crapp.INDEXVIEW_NAME),
crapp.Url(r'^filter/(?P<filters_string>.+)?$',
TagListBuilderListView.as_view(),
name='filter'),
crapp.Url(r'^add$',
AddTagsView.as_view(),
name='add_tag'),
crapp.Url(r'^edit/(?P<pk>\d+)$',
EditTagView.as_view(),
name='edit'),
crapp.Url(r'^delete/(?P<pk>\d+)$',
DeleteTagView.as_view(),
name='delete'),
crapp.Url(r'^toggle-visibility$',
HideShowPeriodTag.as_view(),
name='toggle_visibility'),
        crapp.Url(r'^add-examiners/(?P<tag_id>\d+)$',
                  RelatedExaminerAddView.as_view(),
                  name='add_examiners'),
        crapp.Url(r'^add-examiners/(?P<tag_id>\d+)/(?P<filters_string>.+)?$',
                  RelatedExaminerAddView.as_view(),
                  name='add_examiners_filter'),
        crapp.Url(r'^remove-examiners/(?P<tag_id>\d+)$',
                  RelatedExaminerRemoveView.as_view(),
                  name='remove_examiners'),
        crapp.Url(r'^remove-examiners/(?P<tag_id>\d+)/(?P<filters_string>.+)?$',
                  RelatedExaminerRemoveView.as_view(),
                  name='remove_examiners_filter'),
        crapp.Url(r'^add-students/(?P<tag_id>\d+)$',
                  RelatedStudentAddView.as_view(),
                  name='add_students'),
        crapp.Url(r'^add-students/(?P<tag_id>\d+)/(?P<filters_string>.+)?$',
                  RelatedStudentAddView.as_view(),
                  name='add_students_filter'),
        crapp.Url(r'^remove-students/(?P<tag_id>\d+)$',
                  RelatedStudentRemoveView.as_view(),
                  name='remove_students'),
        crapp.Url(r'^remove-students/(?P<tag_id>\d+)/(?P<filters_string>.+)?$',
                  RelatedStudentRemoveView.as_view(),
                  name='remove_students_filter'),
]
|
|
#!/usr/bin/python
# Copyright (C) 2015, WSID
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import unittest
from gi.repository import GObject
from gi.repository import CrankBase
class TestCplxFloat(unittest.TestCase):
def assertFloat (self, a, b, delta=0.0001):
"""A simple custom assert that given values are same.
It takes into delta values into account, so that test can endure little
errors.
"""
try: #if they are both of list type.
if (len(a) != len(b)):
raise AssertionError ("array length: %d != %d" % (len(a), len(b)))
for i in range (0, len(a)):
if ((a[i] < b[i] - delta) or (b[i] + delta < a[i])):
raise AssertionError ("%g != %g (diff=%g)" % (a[i], b[i], b[i]-a[i]))
except TypeError: #then they are numeric type.
if ((a < b - delta) or (b + delta < a)):
raise AssertionError ("%g != %g (diff=%g)" % (a, b, b-a))
def test_equal (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (3, 4)
c = CrankBase.CplxFloat.init (4, 3)
assert (a.equal (b))
assert (not a.equal (c))
def test_equal_delta (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (3.2, 4.1)
c = CrankBase.CplxFloat.init (4, 3)
assert (a.equal_delta (b, 1))
assert (not a.equal_delta (c, 1))
def test_get_norm (self):
a = CrankBase.CplxFloat.init (3, 4)
self.assertFloat (a.get_norm (), 5)
def test_neg (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.neg ()
self.assertFloat (a.real, -3)
self.assertFloat (a.imag, -4)
def test_inverse (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.inverse ()
self.assertFloat (a.real, 0.12)
self.assertFloat (a.imag, -0.16)
def test_conjugate (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.conjugate ()
self.assertFloat (a.real, 3)
self.assertFloat (a.imag, -4)
def test_unit (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.unit ()
self.assertFloat (a.real, 0.6)
self.assertFloat (a.imag, 0.8)
def test_sqrt (self):
a = CrankBase.CplxFloat.init (7, 8)
a = a.sqrt ()
self.assertFloat (a.real, 2.9690)
self.assertFloat (a.imag, 1.3472)
def test_addr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.addr (2)
self.assertFloat (a.real, 5)
self.assertFloat (a.imag, 4)
def test_subr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.subr (2)
self.assertFloat (a.real, 1)
self.assertFloat (a.imag, 4)
def test_mulr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.mulr (2)
self.assertFloat (a.real, 6)
self.assertFloat (a.imag, 8)
def test_divr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.divr (2)
self.assertFloat (a.real, 1.5)
self.assertFloat (a.imag, 2)
def test_rsubr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.rsubr (2)
self.assertFloat (a.real, -1)
self.assertFloat (a.imag, -4)
def test_rdivr (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.rdivr (2)
self.assertFloat (a.real, 0.24)
self.assertFloat (a.imag, -0.32)
def test_add (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.add (b)
self.assertFloat (a.real, 8)
self.assertFloat (a.imag, 16)
def test_sub (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.sub (b)
self.assertFloat (a.real, -2)
self.assertFloat (a.imag, -8)
def test_mul (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.mul (b)
self.assertFloat (a.real, -33)
self.assertFloat (a.imag, 56)
def test_div (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.div (b)
self.assertFloat (a.real, 63.0/169.0)
self.assertFloat (a.imag, -16.0/169.0)
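    # Editor's cross-check of the expected values above: dividing by (5+12i)
    # multiplies by its conjugate over |5+12i|^2 = 25 + 144 = 169, so
    # (3+4i)/(5+12i) = (3+4i)(5-12i)/169 = (63 - 16i)/169.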
def test_mul_conj (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.mul_conj (b)
self.assertFloat (a.real, 63.0)
self.assertFloat (a.imag, -16.0)
def test_mix (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (5, 12)
a = a.mix (b, 0.25)
self.assertFloat (a.real, 3.5)
self.assertFloat (a.imag, 6.0)
def test_ln (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.ln ()
self.assertFloat (a.real, 1.6094)
self.assertFloat (a.imag, 0.9273)
def test_exp (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.exp ()
self.assertFloat (a.real, -13.1287)
self.assertFloat (a.imag, -15.2008)
def test_pow (self):
a = CrankBase.CplxFloat.init (3, 4)
b = CrankBase.CplxFloat.init (1, 2)
a = a.pow (b)
self.assertFloat (a.real, -0.4198)
self.assertFloat (a.imag, -0.6605)
def test_sinh (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.sinh ()
self.assertFloat (a.real, -6.5481)
self.assertFloat (a.imag, -7.6192)
def test_cosh (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.cosh ()
self.assertFloat (a.real, -6.5807)
self.assertFloat (a.imag, -7.5816)
def test_tanh (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.tanh ()
self.assertFloat (a.real, 1.0007)
self.assertFloat (a.imag, 0.0049)
def test_sin (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.sin ()
self.assertFloat (a.real, 3.8537)
self.assertFloat (a.imag, -27.0168)
def test_cos (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.cos ()
self.assertFloat (a.real, -27.0349)
self.assertFloat (a.imag, -3.8512)
def test_tan (self):
a = CrankBase.CplxFloat.init (3, 4)
a = a.tan ()
self.assertFloat (a.real, -0.0001)
self.assertFloat (a.imag, 0.9994)
if __name__ == '__main__':
unittest.main ()
|
|
#
# Copyright 2012 SAS Institute
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import mock
import lockfile
import os
from cStringIO import StringIO
import signal
import tempfile
import testutils
from bigitr import bigitrdaemon
from bigitr import Synchronize
class TestDaemon(testutils.TestCase):
def setUp(self):
self.dir = tempfile.mkdtemp(suffix='.bigitr')
os.environ['DDIR'] = self.dir
self.daemonConfig = self.dir + '/daemon'
file(self.daemonConfig, 'w').write('''
[GLOBAL]
appconfig = ${DDIR}/app1
email = other@other blah@blah
mailfrom = sender@here
[foo]
repoconfig = ${DDIR}/foo1.* ${DDIR}/foo2.*
[bar]
appconfig = ${DDIR}/app2
repoconfig = ${DDIR}/bar
''')
appCfgText = '''
[global]
[import]
[export]
'''
app1Cfg = self.dir + '/app1'
file(app1Cfg, 'w').write(appCfgText)
app2Cfg = self.dir + '/app2'
file(app2Cfg, 'w').write(appCfgText)
repCfgText = '''
[%s]
'''
file(self.dir + '/foo1.1', 'w').write(repCfgText % 'foo1.1')
file(self.dir + '/foo1.2', 'w').write(repCfgText % 'foo1.2')
file(self.dir + '/foo2.1', 'w').write(repCfgText % 'foo2.1')
file(self.dir + '/bar', 'w').write(repCfgText % 'bar')
self.pidFile = self.dir+'/pid'
self.oldcwd = os.getcwd()
os.chdir(self.dir)
def tearDown(self):
os.chdir(self.oldcwd)
self.removeRecursive(self.dir)
os.unsetenv('DDIR')
@mock.patch('bigitr.progress.Progress')
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_initDetach(self, cC, P):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, True, '${DDIR}/pid')
self.assertFalse(os.path.exists(self.pidFile))
self.assertEquals(d.execPath, '/foo')
self.assertEquals(d.config, self.daemonConfig)
self.assertEquals(d.pidfile, self.pidFile)
self.assertFalse(d.restart)
self.assertFalse(d.stop)
cC.assert_called_once_with(True)
P.assert_called_once_with(outFile=None)
@mock.patch('bigitr.progress.Progress')
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_initNoDetach(self, cC, P):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, '${DDIR}/pid')
self.assertFalse(os.path.exists(self.pidFile))
self.assertEquals(d.execPath, '/foo')
self.assertEquals(d.config, self.daemonConfig)
self.assertEquals(d.pidfile, self.pidFile)
self.assertFalse(d.restart)
self.assertFalse(d.stop)
cC.assert_called_once_with(False)
P.assert_called_once_with()
@mock.patch.multiple('daemon.daemon',
close_all_open_files=mock.DEFAULT,
redirect_stream=mock.DEFAULT,
register_atexit_function=mock.DEFAULT,
change_working_directory=mock.DEFAULT,
set_signal_handlers=mock.DEFAULT
)
def test_Context(self, **patches):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
self.assertFalse(os.path.exists(self.pidFile))
self.assertFalse(d.context.pidfile.is_locked())
with d.context:
self.assertEqual(os.getpid(), d.context.pidfile.pid)
self.assertTrue(d.context.pidfile.is_locked())
self.assertFalse(d.context.pidfile.is_locked())
self.assertEqual(d.context.detach_process, False)
self.assertEqual(d.context.working_directory, os.getcwd())
self.assertEqual(d.context.signal_map[signal.SIGHUP], d.sighup)
self.assertEqual(d.context.signal_map[signal.SIGTERM], d.sigterm)
self.assertEqual(d.context.signal_map[signal.SIGINT], d.sigterm)
self.assertEqual(d.context.signal_map[signal.SIGCHLD], d.sigchld)
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_createSynchronizers(self, cC):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
self.assertEqual(len(d.synchronizers), 4)
for s in d.synchronizers:
self.assertTrue(isinstance(s, Synchronize))
self.assertTrue(isinstance(s.repos, list))
for repo in s.ctx.getRepositories():
email = s.ctx.getEmail(repo)
if email is not None:
self.assertFalse('a@b' in email)
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_createSynchronizersAddEmail(self, cC):
cfg = file(self.daemonConfig).read()
cfg = cfg.replace('email = other@other blah@blah\n', '')
cfg = cfg.replace('[GLOBAL]', '[GLOBAL]\nmailall = true\nemail = a@b')
file(self.daemonConfig, 'w').write(cfg)
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
self.assertEqual(len(d.synchronizers), 4)
for s in d.synchronizers:
self.assertTrue(isinstance(s, Synchronize))
for repo in s.ctx.getRepositories():
self.assertTrue('a@b' in s.ctx.getEmail(repo))
@mock.patch('bigitr.bigitrdaemon.Daemon.mainLoop')
@mock.patch.multiple('daemon.daemon',
close_all_open_files=mock.DEFAULT,
redirect_stream=mock.DEFAULT,
register_atexit_function=mock.DEFAULT,
change_working_directory=mock.DEFAULT,
set_signal_handlers=mock.DEFAULT
)
def test_run(self, mainLoop, **patches):
def assertPidFileContents():
self.assertEqual(os.getpid(), int(file(d.pidfile).read()))
self.assertTrue(d.context.pidfile.is_locked())
mainLoop.side_effect = assertPidFileContents
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
self.assertFalse(os.path.exists(self.pidFile))
self.assertFalse(d.context.pidfile.is_locked())
d.run()
self.assertFalse(os.path.exists(self.pidFile))
self.assertFalse(d.context.pidfile.is_locked())
@mock.patch('bigitr.bigitrdaemon.Daemon.mainLoop')
@mock.patch.multiple('daemon.daemon',
close_all_open_files=mock.DEFAULT,
redirect_stream=mock.DEFAULT,
register_atexit_function=mock.DEFAULT,
change_working_directory=mock.DEFAULT,
set_signal_handlers=mock.DEFAULT
)
def test_runLockRecover(self, mainLoop, **patches):
i = []
def raiseAlreadyLockedOnce(*args, **kwargs):
i.append(1)
if len(i) > 1:
return
raise lockfile.AlreadyLocked
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
d.context.pidfile.acquire = mock.Mock()
d.context.pidfile.break_lock = mock.Mock()
d.context.pidfile.is_locked = mock.Mock()
d.context.pidfile.release = mock.Mock()
d.context.pidfile.acquire.side_effect = raiseAlreadyLockedOnce
file(self.pidFile, 'w').write('1234567890')
d.run()
self.assertEqual(len(i), 2)
d.context.pidfile.break_lock.assert_called_once_with()
d.context.pidfile.is_locked.assert_called_once_with()
self.assertEqual(d.context.pidfile.release.call_count, 2)
@mock.patch('bigitr.bigitrdaemon.Daemon.mainLoop')
@mock.patch.multiple('daemon.daemon',
close_all_open_files=mock.DEFAULT,
redirect_stream=mock.DEFAULT,
register_atexit_function=mock.DEFAULT,
change_working_directory=mock.DEFAULT,
set_signal_handlers=mock.DEFAULT
)
def test_runLockFail(self, mainLoop, **patches):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
d.context.pidfile.acquire = mock.Mock()
d.context.pidfile.break_lock = mock.Mock()
d.context.pidfile.is_locked = mock.Mock()
d.context.pidfile.release = mock.Mock()
d.context.pidfile.acquire.side_effect = lockfile.AlreadyLocked
file(self.pidFile, 'w').write(str(os.getppid()))
self.assertRaises(lockfile.AlreadyLocked, d.run)
self.assertEqual(d.context.pidfile.acquire.call_count, 1)
d.context.pidfile.break_lock.assert_not_called()
d.context.pidfile.is_locked.assert_not_called()
d.context.pidfile.release.assert_not_called()
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
def test_sigterm(self, I):
I.return_value = None
d = bigitrdaemon.Daemon()
d.stop = False
d.sigterm(signal.SIGTERM, None)
self.assertEqual(d.stop, True)
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
def test_sighup(self, I):
I.return_value = None
d = bigitrdaemon.Daemon()
d.restart = False
d.sighup(signal.SIGHUP, None)
self.assertEqual(d.restart, True)
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
def test_sigchld(self, I):
I.return_value = None
d = bigitrdaemon.Daemon()
d.sigchld(signal.SIGCHLD, None)
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
def test_runOnce(self, I):
I.return_value = None
d = bigitrdaemon.Daemon()
d.progress = mock.Mock()
s = mock.Mock()
s.repos = ['foo']
s.ctx.getRepositoryName.return_value = 'foo'
d.stop = False
d.restart = False
d.synchronizers = [s]
d.runOnce()
s.run.assert_called_once_with(poll=False)
d.progress.setPhase.assert_called_once_with('sync')
d.progress.add.assert_called_once_with('foo')
d.progress.report.assert_called_once_with()
d.progress.remove.assert_called_once_with('foo')
s.run.reset_mock()
d.progress.reset_mock()
d.runOnce(poll=True)
s.run.assert_called_once_with(poll=True)
d.progress.setPhase.assert_called_once_with('poll')
d.progress.add.assert_called_once_with('foo')
d.progress.report.assert_called_once_with()
d.progress.remove.assert_called_once_with('foo')
d.progress.reset_mock()
d.stop = True
self.assertRaises(SystemExit, d.runOnce)
d.progress.setPhase.assert_called_once_with('sync')
d.progress.reset_mock()
d.stop = False
d.restart = True
self.assertRaises(SystemExit, d.runOnce)
d.progress.setPhase.assert_called_once_with('sync')
s.run.reset_mock()
d.restart = False
s.run.side_effect = lambda **x: [][1]
d.report = mock.Mock()
d.runOnce()
d.report.assert_called_once_with()
@mock.patch('smtplib.SMTP')
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_report(self, cC, S):
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
try:
[][1]
except:
d.report()
conn = S().sendmail
conn.assert_called_once_with(
'sender@here', ['other@other', 'blah@blah'], mock.ANY)
msg = conn.call_args[0][2]
self.assertTrue('\nIndexError: list index out of range\n' in msg)
@mock.patch('traceback.format_exception')
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_reportNoEmail(self, cC, t):
cfg = file(self.daemonConfig).read()
cfg = cfg.replace('email = other@other blah@blah\n', '')
file(self.daemonConfig, 'w').write(cfg)
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
d.report()
t.assert_not_called()
@mock.patch('traceback.format_exception')
@mock.patch('bigitr.bigitrdaemon.Daemon.createContext')
def test_reportNoMailFrom(self, cC, t):
cfg = file(self.daemonConfig).read()
cfg = cfg.replace('mailfrom = sender@here\n', '')
file(self.daemonConfig, 'w').write(cfg)
d = bigitrdaemon.Daemon('/foo', self.daemonConfig, False, self.pidFile)
d.report()
t.assert_not_called()
@mock.patch('os.execl')
@mock.patch('time.time')
@mock.patch('time.sleep')
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
@mock.patch('bigitr.bigitrdaemon.Daemon.runOnce')
def test_mainLoop(self, rO, I, sleep, Time, execl):
I.return_value = None
d = bigitrdaemon.Daemon()
d.progress = mock.Mock()
d.i = 0
def stop(**kw):
if 2 == d.i:
d.stop = True
d.i += 1
d.runOnce.side_effect = stop
d.cfg = mock.Mock()
d.cfg.getFullSyncFrequency.return_value = 10
d.cfg.getPollFrequency.return_value = 5
d.stop = False
d.restart = False
d.context = mock.Mock()
d.context.detach_process = False
        # The first sync is full and finishes well within the full-sync
        # frequency, so the second sync is a poll. The poll ends just late
        # enough to exceed the full-sync frequency since the last full sync
        # started, so the third sync is full again.
Time.side_effect = [0.1, 1.1,
1.2, 10.3,
10.4, 10.5]
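        # Editor's note (hedged): with pollfrequency=5 and syncfrequency=10,
        # the first (full) sync runs 0.1 -> 1.1, so the loop sleeps
        # 5 - 1.0 = 4.0s before polling; the poll ends at 10.3, more than 10s
        # after the last full sync started at 0.1, hence the third sync is full.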
self.assertRaises(SystemExit, d.mainLoop)
d.runOnce.assert_has_calls([mock.call(poll=False),
mock.call(poll=True),
mock.call(poll=False)])
sleep.assert_called_once_with(4.0)
execl.assert_not_called()
@mock.patch('os.execl')
@mock.patch('time.sleep')
@mock.patch('bigitr.bigitrdaemon.Daemon.__init__')
@mock.patch('bigitr.bigitrdaemon.Daemon.runOnce')
def test_mainLoopSignalHandling(self, rO, I, sleep, execl):
I.return_value = None
d = bigitrdaemon.Daemon()
d.progress = mock.Mock()
d.cfg = mock.Mock()
d.cfg.getFullSyncFrequency.return_value = 1000
d.cfg.getPollFrequency.return_value = 10000
d.context = mock.Mock()
d.context.detach_process = False
d.stop = False
d.restart = True
d.execPath = '/foo'
d.config = 'b'
d.pidfile = 'b-p'
self.assertRaises(SystemExit, d.mainLoop)
execl.assert_called_once_with('/foo', '/foo', '--config', 'b', '--pid-file', 'b-p', '--no-daemon')
execl.reset_mock()
d.restart = False
d.stop = True
self.assertRaises(SystemExit, d.mainLoop)
execl.assert_not_called()
@mock.patch('bigitr.bigitrdaemon.Daemon')
class TestMain(testutils.TestCase):
def test_emptyArgs(self, D):
if 'BIGITR_DAEMON_CONFIG' in os.environ:
del os.environ['BIGITR_DAEMON_CONFIG']
if 'BIGITR_DAEMON_PIDFILE' in os.environ:
del os.environ['BIGITR_DAEMON_PIDFILE']
bigitrdaemon.main(['/foo'])
D.assert_called_once_with(
'/foo', '~/.bigitrd', True, '~/.bigitrd-pid')
D().run.assert_called_once_with()
def test_emptyArgsWithEnvironment(self, D):
os.environ['BIGITR_DAEMON_CONFIG'] = '/b'
os.environ['BIGITR_DAEMON_PIDFILE'] = '/b-p'
try:
bigitrdaemon.main(['/foo'])
D.assert_called_once_with('/foo', '/b', True, '/b-p')
D().run.assert_called_once_with()
finally:
os.unsetenv('BIGITR_DAEMON_CONFIG')
os.unsetenv('BIGITR_DAEMON_PIDFILE')
@staticmethod
def assertNonDefaultArgs(D):
D.assert_called_once_with('/foo', '/b', False, '/b-p')
D().run.assert_called_once_with()
def test_ArgsHelp(self, D):
with mock.patch('sys.stdout') as o:
self.assertRaises(SystemExit, bigitrdaemon.main, ['/foo', '--help'])
o.write.assert_called_once_with(mock.ANY)
D.assert_not_called()
def test_Args(self, D):
bigitrdaemon.main(['/foo', '--config', '/b', '--nodaemon', '--pidfile', '/b-p'])
self.assertNonDefaultArgs(D)
def test_ArgsShort(self, D):
bigitrdaemon.main(['/foo', '-c', '/b', '-n', '-p', '/b-p'])
self.assertNonDefaultArgs(D)
def test_ArgsLong(self, D):
bigitrdaemon.main(['/foo', '--config', '/b', '--no-daemon', '--pid-file', '/b-p'])
self.assertNonDefaultArgs(D)
|
|
# coding: utf-8
"""
DocuSign REST API
The DocuSign REST API provides you with a powerful, convenient, and simple Web services API for interacting with DocuSign. # noqa: E501
OpenAPI spec version: v2.1
Contact: devcenter@docusign.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class ListCustomField(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'configuration_type': 'str',
'error_details': 'ErrorDetails',
'field_id': 'str',
'list_items': 'list[str]',
'name': 'str',
'required': 'str',
'show': 'str',
'value': 'str'
}
attribute_map = {
'configuration_type': 'configurationType',
'error_details': 'errorDetails',
'field_id': 'fieldId',
'list_items': 'listItems',
'name': 'name',
'required': 'required',
'show': 'show',
'value': 'value'
}
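    # Editor's note (hedged): swagger_types drives to_dict() below, which
    # getattr()s each attribute and serializes nested models recursively;
    # attribute_map records the Python-name to JSON-key mapping that the
    # generated API client is assumed to use when serializing this model.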
def __init__(self, configuration_type=None, error_details=None, field_id=None, list_items=None, name=None, required=None, show=None, value=None): # noqa: E501
"""ListCustomField - a model defined in Swagger""" # noqa: E501
self._configuration_type = None
self._error_details = None
self._field_id = None
self._list_items = None
self._name = None
self._required = None
self._show = None
self._value = None
self.discriminator = None
if configuration_type is not None:
self.configuration_type = configuration_type
if error_details is not None:
self.error_details = error_details
if field_id is not None:
self.field_id = field_id
if list_items is not None:
self.list_items = list_items
if name is not None:
self.name = name
if required is not None:
self.required = required
if show is not None:
self.show = show
if value is not None:
self.value = value
@property
def configuration_type(self):
"""Gets the configuration_type of this ListCustomField. # noqa: E501
        If merge fields are being used, specifies the type of the merge field. The only supported value is **salesforce**. # noqa: E501
:return: The configuration_type of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._configuration_type
@configuration_type.setter
def configuration_type(self, configuration_type):
"""Sets the configuration_type of this ListCustomField.
        If merge fields are being used, specifies the type of the merge field. The only supported value is **salesforce**. # noqa: E501
:param configuration_type: The configuration_type of this ListCustomField. # noqa: E501
:type: str
"""
self._configuration_type = configuration_type
@property
def error_details(self):
"""Gets the error_details of this ListCustomField. # noqa: E501
:return: The error_details of this ListCustomField. # noqa: E501
:rtype: ErrorDetails
"""
return self._error_details
@error_details.setter
def error_details(self, error_details):
"""Sets the error_details of this ListCustomField.
:param error_details: The error_details of this ListCustomField. # noqa: E501
:type: ErrorDetails
"""
self._error_details = error_details
@property
def field_id(self):
"""Gets the field_id of this ListCustomField. # noqa: E501
An ID used to specify a custom field. # noqa: E501
:return: The field_id of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._field_id
@field_id.setter
def field_id(self, field_id):
"""Sets the field_id of this ListCustomField.
An ID used to specify a custom field. # noqa: E501
:param field_id: The field_id of this ListCustomField. # noqa: E501
:type: str
"""
self._field_id = field_id
@property
def list_items(self):
"""Gets the list_items of this ListCustomField. # noqa: E501
# noqa: E501
:return: The list_items of this ListCustomField. # noqa: E501
:rtype: list[str]
"""
return self._list_items
@list_items.setter
def list_items(self, list_items):
"""Sets the list_items of this ListCustomField.
# noqa: E501
:param list_items: The list_items of this ListCustomField. # noqa: E501
:type: list[str]
"""
self._list_items = list_items
@property
def name(self):
"""Gets the name of this ListCustomField. # noqa: E501
The name of the custom field. # noqa: E501
:return: The name of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this ListCustomField.
The name of the custom field. # noqa: E501
:param name: The name of this ListCustomField. # noqa: E501
:type: str
"""
self._name = name
@property
def required(self):
"""Gets the required of this ListCustomField. # noqa: E501
When set to **true**, the signer is required to fill out this tab # noqa: E501
:return: The required of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._required
@required.setter
def required(self, required):
"""Sets the required of this ListCustomField.
When set to **true**, the signer is required to fill out this tab # noqa: E501
:param required: The required of this ListCustomField. # noqa: E501
:type: str
"""
self._required = required
@property
def show(self):
"""Gets the show of this ListCustomField. # noqa: E501
A boolean indicating if the value should be displayed. # noqa: E501
:return: The show of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._show
@show.setter
def show(self, show):
"""Sets the show of this ListCustomField.
A boolean indicating if the value should be displayed. # noqa: E501
:param show: The show of this ListCustomField. # noqa: E501
:type: str
"""
self._show = show
@property
def value(self):
"""Gets the value of this ListCustomField. # noqa: E501
The value of the custom field. Maximum Length: 100 characters. # noqa: E501
:return: The value of this ListCustomField. # noqa: E501
:rtype: str
"""
return self._value
@value.setter
def value(self, value):
"""Sets the value of this ListCustomField.
The value of the custom field. Maximum Length: 100 characters. # noqa: E501
:param value: The value of this ListCustomField. # noqa: E501
:type: str
"""
self._value = value
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(ListCustomField, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ListCustomField):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
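# --- Usage sketch (illustrative only; not part of the generated client) ---
# A minimal, hedged example of working with this generated model: construct it
# with keyword arguments, then serialize via to_dict()/to_str(). Guarded so it
# only runs when the module is executed directly.
if __name__ == '__main__':
field = ListCustomField(
name='color',
list_items=['red', 'green', 'blue'],
required='true',
show='true',
value='red')
print(field.to_str())  # pprint'ed dict keyed by the python attribute names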
|
|
'''
Cosmo Zhang @ Purdue
neuromodules.py
neural-network layer toolkit in Theano
'''
import theano
import theano.tensor as T
from theano.tensor.signal import conv as signal_conv
from theano.tensor.signal import pool as signal_pool
import numpy as np
class LogisticRegression(object):
"""
Multi-class Logistic Regression Class
The logistic regression is fully described by a weight matrix :math:`W`
and bias vector :math:`b`. Classification is done by projecting data
points onto a set of hyperplanes, the distance to which is used to
determine a class membership probability.
"""
def __init__(self, input, n_in, n_out):
"""
Initialize the parameters of the logistic regression
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
"""
# start-snippet-1
# initialize with 0 the weights W as a matrix of shape (n_in, n_out)
self.W = theano.shared(value=np.zeros((n_in, n_out), dtype=theano.config.floatX), name='W', borrow=True)
# initialize the baises b as a vector of n_out 0s
self.b = theano.shared(value=np.zeros((n_out,), dtype=theano.config.floatX), name='b', borrow=True)
# symbolic expression for computing the matrix of class-membership
# probabilities
# Where:
# W is a matrix where column-k represent the separation hyper plain for
# class-k
# x is a matrix where row-j represents input training sample-j
# b is a vector where element-k represent the free parameter of hyper
# plain-k
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
# symbolic description of how to compute prediction as class whose
# probability is maximal
self.y_pred = T.argmax(self.p_y_given_x, axis=1)
# end-snippet-1
# parameters of the model
self.params = [self.W, self.b]
def negative_log_likelihood(self, y):
"""Return the mean of the negative log-likelihood of the prediction
of this model under a given target distribution.
.. math::
\frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
\log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
\ell (\theta=\{W,b\}, \mathcal{D})
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
Note: we use the mean instead of the sum so that
the learning rate is less dependent on the batch size
"""
# start-snippet-2
# y.shape[0] is (symbolically) the number of rows in y, i.e.,
# number of examples (call it n) in the minibatch
# T.arange(y.shape[0]) is a symbolic vector which will contain
# [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
# Log-Probabilities (call it LP) with one row per example and
# one column per class LP[T.arange(y.shape[0]),y] is a vector
# v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
# LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
# the mean (across minibatch examples) of the elements in v,
# i.e., the mean log-likelihood across the minibatch.
return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
# end-snippet-2
def errors(self, y):
"""Return a float representing the number of errors in the minibatch
over the total number of examples of the minibatch ; zero one
loss over the size of the minibatch
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
"""
# check if y has same dimension of y_pred
if y.ndim != self.y_pred.ndim:
raise TypeError(
'y should have the same shape as self.y_pred',
('y', y.type, 'y_pred', self.y_pred.type)
)
# check if y is of the correct datatype
if y.dtype.startswith('int'):
# the T.neq operator returns a vector of 0s and 1s, where 1
# represents a mistake in prediction
return T.mean(T.neq(self.y_pred, y))
else:
raise NotImplementedError()
class MLP(object):
"""Multi-Layer Perceptron Class
A multilayer perceptron is a feedforward artificial neural network model
that has one or more layers of hidden units with nonlinear activations.
Intermediate layers usually use tanh or the sigmoid function as their
activation (defined here by a ``HiddenLayer`` class), while the
top layer is a softmax layer (defined here by a ``LogisticRegression``
class).
"""
def __init__(self, rng, input, n_in, n_hidden, n_out):
"""Initialize the parameters for the multilayer perceptron
:type rng: np.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_hidden: int
:param n_hidden: number of hidden units
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
"""
# Since we are dealing with a one hidden layer MLP, this will translate
# into a HiddenLayer with a tanh activation function connected to the
# LogisticRegression layer; the activation function can be replaced by
# sigmoid or any other nonlinear function
self.hiddenLayer = HiddenLayer(
rng=rng,
input=input,
n_in=n_in,
n_out=n_hidden,
activation=T.tanh
)
# The logistic regression layer gets as input the hidden units
# of the hidden layer
self.logRegressionLayer = LogisticRegression(
input=self.hiddenLayer.output,
n_in=n_hidden,
n_out=n_out
)
# end-snippet-2 start-snippet-3
# L1 norm ; one regularization option is to enforce L1 norm to
# be small
self.L1 = (
abs(self.hiddenLayer.W).sum()
+ abs(self.logRegressionLayer.W).sum()
)
# square of L2 norm ; one regularization option is to enforce
# square of L2 norm to be small
self.L2_sqr = (
(self.hiddenLayer.W ** 2).sum()
+ (self.logRegressionLayer.W ** 2).sum()
)
# negative log likelihood of the MLP is given by the negative
# log likelihood of the output of the model, computed in the
# logistic regression layer
self.negative_log_likelihood = (
self.logRegressionLayer.negative_log_likelihood
)
# same holds for the function computing the number of errors
self.errors = self.logRegressionLayer.errors
# the parameters of the model are the parameters of the two layers it is
# made out of
self.params = self.hiddenLayer.params + self.logRegressionLayer.params
# end-snippet-3
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=T.tanh):
"""
Typical hidden layer of a MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: np.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
:param activation: Non linearity to be applied in the hidden
layer
"""
self.input = input
# end-snippet-1
# `W` is initialized with `W_values`, which is uniformly sampled
# from -sqrt(6./(n_in+n_out)) to sqrt(6./(n_in+n_out))
# for a tanh activation function
# the output of uniform is converted using asarray to dtype
# theano.config.floatX so that the code is runnable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in [Xavier10] suggest that you
# should use 4 times larger initial weights for sigmoid
# compared to tanh
# We have no info for other functions, so we use the same as
# tanh.
if W is None:
W_values = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == T.nnet.hard_sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = np.zeros((n_out,), dtype=theano.config.floatX)+0.01
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output))
# parameters of the model
self.params = [self.W, self.b]
class ConvPoolLayer(object):
"""2D Pool Layer of a convolutional network"""
def __init__(self, rng, input, input_shape, filter_shape=(3, 2, 2), poolsize=(2, 2), activation=T.tanh):
"""
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: np.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.tensor3
:param input: symbolic input tensor, of shape input_shape
:type input_shape: tuple or list of length 2
:param input_shape: (input height, input width)
:type filter_shape: tuple or list of length 3
:param filter_shape: (number of filters, filter height, filter width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows,#cols)
"""
self.input = input
self.filter_shape = filter_shape
self.input_shape = input_shape
self.poolsize = poolsize
self.activation = activation
# there are "filter height * filter width"
# inputs to each hidden unit
fan_in = np.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * np.prod(filter_shape[1:]) /np.prod(poolsize))
# initialize weights with random weights
if self.activation=="none" or self.activation=="relu":
self.W = theano.shared(np.asarray(rng.uniform(low=-0.01, high=0.01, size=filter_shape), dtype=theano.config.floatX), borrow=True, name="W_conv")
else:
self.W = theano.shared(np.asarray(rng.uniform(low=-np.sqrt(6. / (fan_in + fan_out)), high=np.sqrt(6. / (fan_in + fan_out)), size=filter_shape), dtype=theano.config.floatX), borrow=True, name="W_conv")
b_values = np.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True, name="b_conv")
# convolve input feature maps with filters
conv_out = signal_conv.conv2d(input=self.input, filters=self.W, filter_shape=self.filter_shape)
# print conv_out.tag.test_value.shape
# print self.input_shape[0] - self.filter_shape[1] + 1, self.input_shape[1] - self.filter_shape[2] + 1
conv_out_next = activation(conv_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# print conv_out_next.tag.test_value.shape
self.output = signal_pool.pool_2d(input=conv_out_next, ds=self.poolsize, ignore_border=True).flatten(2)
# print self.output.tag.test_value.shape
pool_outsize = (self.input_shape[0] - self.filter_shape[1] + 1)/self.poolsize[0]*(self.input_shape[1] - self.filter_shape[2] + 1)/self.poolsize[1]
self.out_dim = self.filter_shape[0]*pool_outsize
# print (self.input_shape[0] - self.filter_shape[1] + 1)/self.poolsize[0], (self.input_shape[1] - self.filter_shape[2] + 1)/self.poolsize[1]
# print 'in conv', self.out_dim
self.params = [self.W, self.b]
class DropoutLayer(object):
def __init__(self, rng, input, dropout_rate=0.5):
"""
input: output of last layer
"""
self.input = input
self.dropout_rate = dropout_rate
srng = T.shared_randomstreams.RandomStreams(rng.randint(999999))
if self.dropout_rate > 0:
# p=1-p because 1's indicate keep and p is prob of dropping
mask = srng.binomial(n=1, p = 1-self.dropout_rate, size=self.input.shape)
# The cast is important because
# int * float32 = float64 which pulls things off the gpu
self.output = self.input * T.cast(mask, theano.config.floatX)
else:
self.output = input
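# Note (hedged): this variant does not rescale the surviving activations
# ("inverted dropout"); under standard dropout the kept units would be scaled
# by 1/(1 - dropout_rate) at train time, or the weights by (1 - dropout_rate)
# at test time.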
class DropoutHiddenLayer(HiddenLayer):
def __init__(self, rng, input, n_in, n_out, dropout_rate = 0.5, W=None, b=None, activation=T.tanh):
self.input = input
if W is None:
W_values = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == T.nnet.hard_sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = np.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
output = (
lin_output if activation is None
else activation(lin_output))
srng = T.shared_randomstreams.RandomStreams(
rng.randint(999999))
# p=1-p because 1's indicate keep and p is prob of dropping
mask = srng.binomial(n=1, p = 1-dropout_rate, size=output.shape)
# The cast is important because
# int * float32 = float64 which pulls things off the gpu
self.output = output * T.cast(mask, theano.config.floatX)
# parameters of the model
self.params = [self.W, self.b]
class ElmanRnnLayer(object):
def __init__(self, rng, inputSeq, n_in, n_out, Wx = None, Wh = None, b = None, activation = T.nnet.hard_sigmoid, bptt_truncate = -1, return_final_only = False, batch_mode = False):
if Wx is None:
Wx = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == T.nnet.hard_sigmoid:
Wx *= 4
Wx = theano.shared(value=Wx, name='Wx', borrow=True)
self.Wx = Wx
if Wh is None:
Wh = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_out + n_out)),
high=np.sqrt(6. / (n_out + n_out)),
size=(n_out, n_out)
),
dtype=theano.config.floatX
)
if activation == T.nnet.hard_sigmoid:
Wh *= 4
Wh = theano.shared(value=Wh, name='Wh', borrow=True)
self.Wh = Wh
if b is None:
b_values = np.zeros((n_out), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='bh', borrow=True)
self.b = b
h0 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
if batch_mode:
self.h0 = T.alloc(h0, inputSeq.shape[1], n_out)  # (batch, n_out) initial hidden state
else:
self.h0 = h0
def recurrence(x_t, h_tm1):
h_t = activation(T.dot(x_t, self.Wx) + T.dot(h_tm1, self.Wh) + self.b) # hidden layer
return h_t
self.bptt_truncate = bptt_truncate
hSeq, _ = theano.scan(fn=recurrence,
# the initial hidden state is supplied
# via outputs_info below
sequences=inputSeq,
outputs_info=[self.h0],
# no non_sequences is needed here
truncate_gradient=self.bptt_truncate)
if return_final_only:
self.output = hSeq[-1]
else:
self.output = hSeq
self.params = [self.Wx, self.Wh, self.h0, self.b]
# self.params = [self.Wx, self.Wh, self.b]
class LstmLayer(object):
def __init__(self, rng, inputSeq, n_in, n_out, U = None, W = None, b = None, activation = T.tanh, bptt_truncate = -1, return_final_only = False, batch_mode = False):
# LSTM parameters
if U is None:
U = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (4, n_in, n_out))
if activation == T.nnet.hard_sigmoid:
U *= 4
self.U = theano.shared(name = 'U', value = U.astype(theano.config.floatX))
if W is None:
W = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (4, n_out, n_out))
if activation == T.nnet.hard_sigmoid:
W *= 4
self.W = theano.shared(name = 'W', value = W.astype(theano.config.floatX))
# b: bias
if b is None:
b = np.zeros((4, n_out))
self.b = theano.shared(name = 'b', value = b.astype(theano.config.floatX))
self.bptt_truncate = bptt_truncate
def recurrence(x_t, h_t_prev, c_t_prev):
# This is how we calculated the hidden state in a simple RNN. No longer!
# s_t = T.tanh(U[:,x_t] + W.dot(s_t1_prev))
# LSTM Layer 1
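# Reading of the gates below: i_t = input gate, f_t = forget gate,
# o_t = output gate, g_t = candidate cell value; c_t and h_t are the
# updated cell state and hidden state.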
i_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[0]) + T.dot(h_t_prev, self.W[0]) + self.b[0])
f_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[1]) + T.dot(h_t_prev, self.W[1]) + self.b[1])
o_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[2]) + T.dot(h_t_prev, self.W[2]) + self.b[2])
g_t = activation(T.dot(x_t, self.U[3]) + T.dot(h_t_prev, self.W[3]) + self.b[3])
c_t = c_t_prev * f_t + g_t * i_t
h_t = T.tanh(c_t) * o_t
return [h_t, c_t]
h0 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
c0 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
if batch_mode:
self.h0 = T.alloc(h0, inputSeq.shape[1], n_out)  # (batch, n_out)
self.c0 = T.alloc(c0, inputSeq.shape[1], n_out)  # (batch, n_out)
else:
self.h0 = h0
self.c0 = c0
[hSeq, cSeq], _ = theano.scan(
fn = recurrence,
sequences=inputSeq,
outputs_info=[self.h0, self.c0],
truncate_gradient=self.bptt_truncate)
if return_final_only:
self.output = hSeq[-1]
else:
self.output = hSeq
# bundle
self.params = [self.U, self.W, h0, c0, self.b]
# self.params = [self.U, self.W, self.b]
class DropoutLstmLayer(object):
def __init__(self, rng, inputSeq, n_in, n_out, dropout_rate = 0.5, U = None, W = None, b = None, activation = T.tanh, bptt_truncate = -1, return_final_only = False):
# LSTM parameters
if U is None:
U = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (4, n_in, n_out))
if activation == T.nnet.hard_sigmoid:
U *= 4
self.U = theano.shared(name = 'Uh', value = U.astype(theano.config.floatX))
if W is None:
W = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (4, n_out, n_out))
if activation == T.nnet.hard_sigmoid:
W *= 4
self.W = theano.shared(name = 'Wh', value = W.astype(theano.config.floatX))
# b: bias
if b is None:
b = np.zeros((4, n_out))
self.b = theano.shared(name = 'bh', value = b.astype(theano.config.floatX))
self.bptt_truncate = bptt_truncate
srng = T.shared_randomstreams.RandomStreams(rng.randint(999999))
dropoutSeq = T.cast(srng.binomial(n=1, p = 1-dropout_rate, size=(inputSeq.shape[0], inputSeq.shape[1], n_out)), theano.config.floatX)
def recurrence(x_t, mask_t, h_t_prev, c_t_prev):
# This is how we calculated the hidden state in a simple RNN. No longer!
# s_t = T.tanh(U[:,x_t] + W.dot(s_t1_prev))
# LSTM Layer 1
# x_t is a matrix
i_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[0]) + T.dot(h_t_prev, self.W[0]) + self.b[0])
f_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[1]) + T.dot(h_t_prev, self.W[1]) + self.b[1])
o_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[2]) + T.dot(h_t_prev, self.W[2]) + self.b[2])
g_t = activation(T.dot(x_t, self.U[3]) + T.dot(h_t_prev, self.W[3]) + self.b[3])
c_t = c_t_prev * f_t + g_t * i_t
output = T.tanh(c_t) * o_t
h_t = output * mask_t
return [h_t, c_t]
self.h0 = T.zeros((inputSeq.shape[1], n_out), dtype=theano.config.floatX)
self.c0 = T.zeros((inputSeq.shape[1], n_out), dtype=theano.config.floatX)
[hSeq, cSeq], _ = theano.scan(
fn = recurrence,
sequences=[inputSeq, dropoutSeq],
outputs_info=[self.h0, self.c0],
truncate_gradient=self.bptt_truncate)
# hSeq is a 3d tensor
if return_final_only:
self.output = hSeq[-1]
else:
self.output = hSeq
# bundle
self.params = [self.U, self.W, self.b]
class GruLayer(object):
def __init__(self, rng, inputSeq, n_in, n_out, U = None, W = None, b = None, activation = T.tanh, bptt_truncate = -1, return_final_only = False):
# GRU parameters
if U is None:
U = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (3, n_in, n_out))  # input-to-hidden weights for the 3 GRU gates
if activation == T.nnet.hard_sigmoid:
U *= 4
self.U = theano.shared(name='U', value=U.astype(theano.config.floatX))
if W is None:
W = rng.uniform(-np.sqrt(6./n_out), np.sqrt(6./n_out), (3, n_out, n_out))
if activation == T.nnet.hard_sigmoid:
W *= 4
self.W = theano.shared(name='W', value=W.astype(theano.config.floatX))
if b is None:
b = np.zeros((3, n_out))
self.b = theano.shared(name='b', value=b.astype(theano.config.floatX))
self.bptt_truncate = bptt_truncate
def recurrence(x_t, h_t_prev):
# This is how we calculated the hidden state in a simple RNN. No longer!
# s_t = T.tanh(U[:,x_t] + W.dot(s_t1_prev))
# GRU Layer 1
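# z_t is the update gate, r_t the reset gate, and c_t the candidate state;
# the new hidden state interpolates between h_t_prev and c_t via z_t.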
z_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[0]) + T.dot(h_t_prev, self.W[0]) + self.b[0])
r_t = T.nnet.hard_sigmoid(T.dot(x_t, self.U[1]) + T.dot(h_t_prev, self.W[1]) + self.b[1])
c_t = activation(T.dot(x_t, self.U[2]) + T.dot(h_t_prev * r_t, self.W[2]) + self.b[2])
h_t = (T.ones_like(z_t) - z_t) * c_t + z_t * h_t_prev
return h_t
self.h0 = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
hSeq, _ = theano.scan(
fn = recurrence,
sequences = inputSeq,
outputs_info = [self.h0],
truncate_gradient = self.bptt_truncate)
if return_final_only:
self.output = hSeq[-1]
else:
self.output = hSeq
# bundle
self.params = [self.U, self.W, self.h0, self.b]
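# --- Usage sketch (illustrative; assumes Theano is installed) ---
# Wires an MLP for 784-dimensional inputs and 10 classes, then compiles its
# cost and zero-one error functions. The shapes and hyperparameters here are
# assumptions for demonstration, not values prescribed by this module.
if __name__ == '__main__':
rng = np.random.RandomState(1234)
x = T.matrix('x')
y = T.ivector('y')
mlp = MLP(rng=rng, input=x, n_in=784, n_hidden=500, n_out=10)
cost = mlp.negative_log_likelihood(y) + 1e-4 * mlp.L2_sqr
cost_fn = theano.function([x, y], cost)
error_fn = theano.function([x, y], mlp.errors(y))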
|
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
# This script generates the necessary files to coordinate function calls between the FE
# and BE. In the FE, this creates a mapping between function signature (Operation &
# Arguments) to an opcode. The opcode is a thrift enum which is passed to the backend.
# The backend has all the information from just the opcode and does not need to worry
# about type checking.
#
# This script pulls its function metadata input from
# - src/common/function/doris_functions.py (manually maintained)
# - src/common/function/generated_functions.py (auto-generated metadata)
#
# This script will generate 4 outputs
# 1. Thrift enum for all the opcodes
# - impala/fe/src/thrift/Opcodes.thrift
# 2. FE java operators (one per function, ignoring overloading)
# - impala/fe/generated-sources/gen-java/com/cloudera/impala/opcode/FunctionOperator.java
# 3. Java registry setup (registering all the functions with signatures)
# - impala/fe/generated-sources/gen-java/com/cloudera/impala/opcode/FunctionRegistry.java
# 4. BE registry setup (mapping opcodes to ComputeFunctions)
# - impala/be/generated-sources/opcode/opcode-registry-init.cc
#
# TODO: version the registry on the FE and BE so we can identify if they are out of sync
"""
import sys
import os
import string
sys.path.append(os.getcwd())
import doris_functions
import generated_functions
import generated_vector_functions
native_types = {
'BOOLEAN': 'bool',
'TINYINT': 'char',
'SMALLINT': 'short',
'INT': 'int',
'BIGINT': 'long',
'LARGEINT': 'LargeIntValue',
'FLOAT': 'float',
'DOUBLE': 'double',
'VARCHAR': 'StringValue',
'DATE': 'Date',
'DATETIME': 'DateTime',
'DECIMAL': 'DecimalValue',
'DECIMALV2': 'DecimalV2Value',
'TIME': 'double'
}
thrift_preamble = '\
//\n\
namespace java org.apache.doris.thrift\n\
\n\
enum TExprOpcode {\n'
thrift_epilogue = '\
}\n\
\n'
cc_registry_preamble = '\
// Licensed to the Apache Software Foundation (ASF) under one \n\
// or more contributor license agreements. See the NOTICE file \n\
// distributed with this work for additional information \n\
// regarding copyright ownership. The ASF licenses this file \n\
// to you under the Apache License, Version 2.0 (the \n\
// "License"); you may not use this file except in compliance \n\
// with the License. You may obtain a copy of the License at \n\
// \n\
// http://www.apache.org/licenses/LICENSE-2.0\n\
// \n\
// Unless required by applicable law or agreed to in writing, software\n\
// distributed under the License is distributed on an "AS IS" BASIS,\n\
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\
// See the License for the specific language governing permissions and\n\
// limitations under the License.\n\
\n\
// This is a generated file, DO NOT EDIT.\n\
// To add new functions, see impala/common/function-registry/gen_opcodes.py\n\
\n\
#include "exprs/opcode_registry.h"\n\
#include "exprs/expr.h"\n\
#include "exprs/compound_predicate.h"\n\
#include "exprs/like_predicate.h"\n\
#include "exprs/math_functions.h"\n\
#include "exprs/string_functions.h"\n\
#include "exprs/timestamp_functions.h"\n\
#include "exprs/conditional_functions.h"\n\
#include "exprs/udf_builtins.h"\n\
#include "exprs/utility_functions.h"\n\
#include "gen_cpp/opcode/functions.h"\n\
#include "gen_cpp/opcode/vector-functions.h"\n\
#include "exprs/json_functions.h"\n\
#include "exprs/encryption_functions.h"\n\
#include "exprs/es_functions.h"\n\
#include "exprs/hll_hash_function.h"\n\
\n\
using namespace boost::posix_time;\n\
using namespace boost::gregorian;\n\
\n\
namespace doris {\n\
\n\
void OpcodeRegistry::init() {\n'
cc_registry_epilogue = '\
}\n\
\n\
}\n'
operator_file_preamble = '\
// This is a generated file, DO NOT EDIT.\n\
// To add new functions, see impala/common/function-registry/gen_opcodes.py\n\
\n\
package org.apache.doris.opcode;\n\
\n\
public enum FunctionOperator {\n'
operator_file_epilogue = '\
}\n'
java_registry_preamble = '\
// This is a generated file, DO NOT EDIT.\n\
// To add new functions, see impala/common/function-registry/gen_opcodes.py\n\
\n\
package org.apache.doris.opcode;\n\
\n\
import org.apache.doris.analysis.OpcodeRegistry;\n\
import org.apache.doris.catalog.PrimitiveType;\n\
import org.apache.doris.thrift.TExprOpcode;\n\
import com.google.common.base.Preconditions;\n\
\n\
public class FunctionRegistry { \n\
public static void InitFunctions(OpcodeRegistry registry) { \n\
boolean result = true;\n\
\n'
java_registry_epilogue = '\
Preconditions.checkState(result); \n\
}\n\
}\n'
def initialize_sub(op, return_type, arg_types):
"""
initialize_sub
"""
sub = {}
java_args = "PrimitiveType." + return_type
sub["fn_class"] = "GetValueFunctions"
sub["fn_signature"] = op
sub["num_args"] = len(arg_types)
for idx in range(0, len(arg_types)):
arg = arg_types[idx]
sub["fn_signature"] += "_" + native_types[arg]
sub["native_type" + repr(idx + 1)] = native_types[arg]
java_args += ", PrimitiveType." + arg
sub["thrift_enum"] = sub["fn_signature"].upper()
sub["java_output"] = "FunctionOperator." + op.upper() + ", TExprOpcode." + sub["thrift_enum"]
sub["java_output"] += ", " + java_args
return sub
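# Hedged worked example: initialize_sub('add', 'INT', ['INT', 'INT']) yields
# fn_signature 'add_int_int', thrift_enum 'ADD_INT_INT', and java_output
# 'FunctionOperator.ADD, TExprOpcode.ADD_INT_INT, PrimitiveType.INT,
# PrimitiveType.INT, PrimitiveType.INT'.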
FE_PATH = "../java/org.apache.doris/opcode/"
BE_PATH = "../gen_cpp/opcode/"
THRIFT_PATH = "../thrift/"
# This contains a list of all the opcodes that are built based on the
# function name from the input. Inputs can have multiple signatures
# with the same function name and the opcode is mangled using the
# arg types.
opcodes = []
# This contains a list of all the function names (no overloading/mangling)
operators = []
# This is a mapping of operators to a list of function meta data entries
# Each meta data entry is itself a map to store all the meta data
# - fn_name, ret_type, args, be_fn, sql_names
meta_data_entries = {}
def add_function(fn_meta_data, udf_interface, is_vector_function=False):
"""
Read in the function and add it to the meta_data_entries map
"""
fn_name = fn_meta_data[0]
ret_type = fn_meta_data[1]
args = fn_meta_data[2]
be_fn = fn_meta_data[3]
entry = {}
entry["fn_name"] = fn_name
entry["ret_type"] = fn_meta_data[1]
entry["args"] = fn_meta_data[2]
entry["be_fn"] = fn_meta_data[3]
entry["sql_names"] = fn_meta_data[4]
entry["is_vector_function"] = is_vector_function
if udf_interface:
entry["symbol"] = fn_meta_data[5]
else:
entry["symbol"] = "<no symbol specified>"
entry["udf_interface"] = udf_interface
if fn_name in meta_data_entries:
meta_data_entries[fn_name].append(entry)
else:
fn_list = [entry]
meta_data_entries[fn_name] = fn_list
operators.append(fn_name.upper())
def generate_opcodes():
"""
Iterate over entries in the meta_data_entries map and generate opcodes. Some
entries will have the same name at this stage; qualify the name with the
signature to generate unique enums.
Resulting opcode list is sorted with INVALID_OPCODE at beginning and LAST_OPCODE
at end.
"""
for fn in meta_data_entries:
entries = meta_data_entries[fn]
if len(entries) > 1:
for entry in entries:
opcode = fn.upper()
for arg in entry["args"]:
if arg == "...":
opcode += "_" + 'VARARGS'
else:
opcode += "_" + native_types[arg].upper()
opcodes.append(opcode)
entry["opcode"] = opcode
else:
opcodes.append(fn.upper())
entries[0]["opcode"] = fn.upper()
opcodes.sort()
opcodes.insert(0, 'INVALID_OPCODE')
opcodes.append('LAST_OPCODE')
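# Hedged example of the mangling above: two "add" entries with args
# (INT, INT) and (DOUBLE, DOUBLE) become ADD_INT_INT and ADD_DOUBLE_DOUBLE;
# the sorted list is then bracketed by INVALID_OPCODE and LAST_OPCODE.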
def generate_be_registry_init(filename):
"""
Generates the BE registry init file that will add all the compute functions
to the registry. Outputs the generated-file to 'filename'
"""
cc_registry_file = open(filename, "w")
cc_registry_file.write(cc_registry_preamble)
for fn in meta_data_entries:
entries = meta_data_entries[fn]
for entry in entries:
opcode = entry["opcode"]
be_fn = entry["be_fn"]
symbol = entry["symbol"]
# We generate two casts to work around GCC Bug 11407
if entry["is_vector_function"]:
cc_output = 'TExprOpcode::%s, (void*)(Expr::VectorComputeFn)%s, "%s"' \
% (opcode, be_fn, symbol)
else:
cc_output = 'TExprOpcode::%s, (void*)(Expr::ComputeFn)%s, "%s"' \
% (opcode, be_fn, symbol)
cc_registry_file.write(" this->add(%s);\n" % (cc_output))
cc_registry_file.write(cc_registry_epilogue)
cc_registry_file.close()
def generate_fe_registry_init(filename):
"""
Generates the FE registry init file that registers all the functions. This file
contains all the opcode->function signature mappings and all of the string->operator
mappings for sql functions
"""
java_registry_file = open(filename, "w")
java_registry_file.write(java_registry_preamble)
for fn in meta_data_entries:
entries = meta_data_entries[fn]
for entry in entries:
java_output = ""
if entry["udf_interface"]:
java_output += "true"
else:
java_output += "false"
if entry["is_vector_function"]:
java_output += ", true"
else:
java_output += ", false"
java_output += ", FunctionOperator." + fn.upper()
java_output += ", TExprOpcode." + entry["opcode"]
# Check the last entry for varargs indicator.
if entry["args"] and entry["args"][-1] == "...":
entry["args"].pop()
java_output += ", true"
else:
java_output += ", false"
java_output += ", PrimitiveType." + entry["ret_type"]
for arg in entry["args"]:
java_output += ", PrimitiveType." + arg
java_registry_file.write(" result &= registry.add(%s);\n" % java_output)
java_registry_file.write("\n")
mappings = {}
for fn in meta_data_entries:
entries = meta_data_entries[fn]
for entry in entries:
for name in entry["sql_names"]:
if name in mappings:
if mappings[name] != fn.upper():
print "Invalid mapping \"%s\" -> FunctionOperator.%s." \
% (name, mappings[name])
print "There is already a mapping \"%s\" -> FunctionOperator.%s.\n" \
% (name, fn.upper())
sys.exit(1)
continue
mappings[name] = fn.upper()
java_output = "\"%s\", FunctionOperator.%s" % (name, fn.upper())
java_registry_file.write(" result &= registry.addFunctionMapping(%s);\n" \
% java_output)
java_registry_file.write("\n")
java_registry_file.write(java_registry_epilogue)
java_registry_file.close()
# Read the function metadata inputs
for function in doris_functions.functions:
if len(function) != 5:
print "Invalid function entry in doris_functions.py:\n\t" + repr(function)
sys.exit(1)
add_function(function, False)
for function in doris_functions.udf_functions:
assert len(function) == 6, \
"Invalid function entry in doris_functions.py:\n\t" + repr(function)
add_function(function, True)
for function in generated_functions.functions:
if len(function) != 5:
print "Invalid function entry in generated_functions.py:\n\t" + repr(function)
sys.exit(1)
add_function(function, False)
for function in generated_vector_functions.functions:
if len(function) != 5:
print "Invalid function entry in generated_functions.py:\n\t" + repr(function)
sys.exit(1)
add_function(function, False, True)
generate_opcodes()
if not os.path.exists(FE_PATH):
os.makedirs(FE_PATH)
if not os.path.exists(BE_PATH):
os.makedirs(BE_PATH)
if not os.path.exists(THRIFT_PATH):
os.makedirs(THRIFT_PATH)
generate_be_registry_init(BE_PATH + "opcode-registry-init.cc")
generate_fe_registry_init(FE_PATH + "FunctionRegistry.java")
# Output the opcodes to thrift
thrift_file = open(THRIFT_PATH + "Opcodes.thrift", "w")
thrift_file.write(thrift_preamble)
for opcode in opcodes:
thrift_file.write(" %s,\n" % opcode)
thrift_file.write(thrift_epilogue)
thrift_file.close()
# Output the operators to java
operators.sort()
operators.insert(0, "INVALID_OPERATOR")
operator_java_file = open(FE_PATH + "FunctionOperator.java", "w")
operator_java_file.write(operator_file_preamble)
for op in operators:
operator_java_file.write(" %s,\n" % op)
operator_java_file.write(operator_file_epilogue)
operator_java_file.close()
|
|
# -*- coding: utf-8 -*-
"""Setup the allura application"""
import os
import sys
import logging
import shutil
from collections import defaultdict
from datetime import datetime
import tg
from pylons import c, g
from paste.deploy.converters import asbool
from ming import Session, mim
from ming.orm import state, session
from ming.orm.ormsession import ThreadLocalORMSession
import allura
from allura.lib import plugin
from allura import model as M
from allura.websetup import schema
from allura.command import EnsureIndexCommand
from allura.command import CreateTroveCategoriesCommand
log = logging.getLogger(__name__)
def cache_test_data():
log.info('Saving data to cache in .test-data')
if os.path.exists('.test-data'):
shutil.rmtree('.test-data')
os.system('mongodump -h 127.0.0.1:27018 -o .test-data > mongodump.log 2>&1')
def restore_test_data():
if os.path.exists('.test-data'):
log.info('Restoring data from cache in .test-data')
rc = os.system('mongorestore -h 127.0.0.1:27018 --dir .test-data > mongorestore.log 2>&1')
return rc == 0
else:
return False
def bootstrap(command, conf, vars):
"""Place any commands to setup allura here"""
# are we being called by the test suite?
test_run = conf.get('__file__', '').endswith('test.ini')
# if this is a test_run, skip user project creation to save time
make_user_projects = not test_run
def make_user(*args, **kw):
kw.update(make_project=make_user_projects)
return create_user(*args, **kw)
# Our bootstrap doesn't play nicely with SFX project and user APIs
tg.config['auth.method'] = tg.config['registration.method'] = 'local'
assert tg.config['auth.method'] == 'local'
conf['auth.method'] = conf['registration.method'] = 'local'
# Clean up all old stuff
ThreadLocalORMSession.close_all()
c.queued_messages = defaultdict(list)
c.user = c.project = c.app = None
database=conf.get('db_prefix', '') + 'project:test'
wipe_database()
try:
g.solr.delete(q='*:*')
except: # pragma no cover
log.error('SOLR server is %s', g.solr_server)
log.error('Error clearing solr index')
if asbool(conf.get('cache_test_data')):
if restore_test_data():
from allura.lib import helpers as h
h.set_context('test', neighborhood='Projects')
return
log.info('Initializing search')
log.info('Registering root user & default neighborhoods')
anonymous = M.User(
_id=None,
username='*anonymous',
display_name='Anonymous')
# never make a user project for the root user
root = create_user('Root', make_project=False)
n_projects = M.Neighborhood(name='Projects', url_prefix='/p/',
features=dict(private_projects = True,
max_projects = None,
css = 'none',
google_analytics = False))
n_users = M.Neighborhood(name='Users', url_prefix='/u/',
shortname_prefix='u/',
features=dict(private_projects = True,
max_projects = None,
css = 'none',
google_analytics = False))
n_adobe = M.Neighborhood(name='Adobe', url_prefix='/adobe/', project_list_url='/adobe/',
features=dict(private_projects = True,
max_projects = None,
css = 'custom',
google_analytics = True))
assert tg.config['auth.method'] == 'local'
project_reg = plugin.ProjectRegistrationProvider.get()
p_projects = project_reg.register_neighborhood_project(n_projects, [root], allow_register=True)
p_users = project_reg.register_neighborhood_project(n_users, [root])
p_adobe = project_reg.register_neighborhood_project(n_adobe, [root])
ThreadLocalORMSession.flush_all()
ThreadLocalORMSession.close_all()
# add the adobe icon
file_name = 'adobe_icon.png'
file_path = os.path.join(allura.__path__[0],'public','nf','images',file_name)
M.NeighborhoodFile.from_path(file_path, neighborhood_id=n_adobe._id)
# Add some test users
for unum in range(10):
make_user('Test User %d' % unum)
log.info('Creating basic project categories')
cat1 = M.ProjectCategory(name='clustering', label='Clustering')
cat2 = M.ProjectCategory(name='communications', label='Communications')
cat2_1 = M.ProjectCategory(name='synchronization', label='Synchronization', parent_id=cat2._id)
cat2_2 = M.ProjectCategory(name='streaming', label='Streaming', parent_id=cat2._id)
cat2_3 = M.ProjectCategory(name='fax', label='Fax', parent_id=cat2._id)
cat2_4 = M.ProjectCategory(name='bbs', label='BBS', parent_id=cat2._id)
cat3 = M.ProjectCategory(name='database', label='Database')
cat3_1 = M.ProjectCategory(name='front_ends', label='Front-Ends', parent_id=cat3._id)
cat3_2 = M.ProjectCategory(name='engines_servers', label='Engines/Servers', parent_id=cat3._id)
log.info('Registering "regular users" (non-root) and default projects')
# since this runs a lot for tests, separate test and default users and
# do the minimal needed
if asbool(conf.get('load_test_data')):
u_admin = make_user('Test Admin')
u_admin.preferences = dict(email_address='test-admin@users.localhost')
u_admin.email_addresses = ['test-admin@users.localhost']
u_admin.set_password('foo')
u_admin.claim_address('test-admin@users.localhost')
else:
u_admin = make_user('Admin 1', username='admin1')
# Admin1 is almost root, with admin access for Users and Projects neighborhoods
p_projects.add_user(u_admin, ['Admin'])
p_users.add_user(u_admin, ['Admin'])
p_allura = n_projects.register_project('allura', u_admin)
u1 = make_user('Test User')
p_adobe1 = n_adobe.register_project('adobe-1', u_admin)
p_adobe.add_user(u_admin, ['Admin'])
p0 = n_projects.register_project('test', u_admin)
p1 = n_projects.register_project('test2', u_admin)
p0._extra_tool_status = [ 'alpha', 'beta' ]
sess = session(M.Neighborhood) # all the sessions are the same
for x in (n_adobe, n_projects, n_users, p_projects, p_users, p_adobe):
# Ming doesn't detect substructural changes in newly created objects (vs loaded from DB)
state(x).status = 'dirty'
# TODO: Hope that Ming can be improved to at least avoid stuff below
sess.flush(x)
c.project = p0
c.user = u_admin
p1 = p0.new_subproject('sub1')
ThreadLocalORMSession.flush_all()
if asbool(conf.get('load_test_data')):
if asbool(conf.get('cache_test_data')):
cache_test_data()
else: # pragma no cover
# regular first-time setup
p0.add_user(u_admin, ['Admin'])
log.info('Registering initial apps')
for ep_name, app in g.entry_points['tool'].iteritems():
if not app.installable:
continue
p0.install_app(ep_name)
ThreadLocalORMSession.flush_all()
ThreadLocalORMSession.close_all()
def wipe_database():
conn = M.main_doc_session.bind.conn
create_trove_categories = CreateTroveCategoriesCommand('create_trove_categories')
index = EnsureIndexCommand('ensure_index')
if isinstance(conn, mim.Connection):
clear_all_database_tables()
for db in conn.database_names():
db = conn[db]
else:
for database in conn.database_names():
if database not in ( 'allura', 'pyforge', 'project-data'): continue
log.info('Wiping database %s', database)
db = conn[database]
for coll in db.collection_names():
if coll.startswith('system.'): continue
log.info('Dropping collection %s:%s', database, coll)
try:
db.drop_collection(coll)
except:
pass
create_trove_categories.run([''])
index.run([''])
def clear_all_database_tables():
conn = M.main_doc_session.bind.conn
for db in conn.database_names():
db = conn[db]
for coll in db.collection_names():
if coll == 'system.indexes':
continue
db.drop_collection(coll)
def create_user(display_name, username=None, password='foo', make_project=False):
if not username:
username = display_name.lower().replace(' ', '-')
user = M.User.register(dict(username=username,
display_name=display_name),
make_project=make_project)
user.set_password(password)
return user
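# Hedged example: create_user('Jane Doe') registers username 'jane-doe' with
# the default password 'foo' and, per the default above, no user-project.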
class DBSession(Session):
'''Simple session that takes a pymongo connection and a database name'''
def __init__(self, db):
self._db = db
@property
def db(self):
return self._db
def _impl(self, cls):
return self.db[cls.__mongometa__.name]
def pm(etype, value, tb): # pragma no cover
import pdb, traceback
try:
from IPython.ipapi import make_session; make_session()
from IPython.Debugger import Pdb
sys.stderr.write('Entering post-mortem IPDB shell\n')
p = Pdb(color_scheme='Linux')
p.reset()
p.setup(None, tb)
p.print_stack_trace()
sys.stderr.write('%s: %s\n' % ( etype, value))
p.cmdloop()
p.forget()
# p.interaction(None, tb)
except ImportError:
sys.stderr.write('Entering post-mortem PDB shell\n')
traceback.print_exception(etype, value, tb)
pdb.post_mortem(tb)
sys.excepthook = pm
|
|
"""Integration providing core pieces of infrastructure."""
import asyncio
import itertools as it
import logging
import voluptuous as vol
from homeassistant.auth.permissions.const import CAT_ENTITIES, POLICY_CONTROL
import homeassistant.config as conf_util
from homeassistant.const import (
ATTR_ENTITY_ID,
ATTR_LATITUDE,
ATTR_LONGITUDE,
RESTART_EXIT_CODE,
SERVICE_HOMEASSISTANT_RESTART,
SERVICE_HOMEASSISTANT_STOP,
SERVICE_SAVE_PERSISTENT_STATES,
SERVICE_TOGGLE,
SERVICE_TURN_OFF,
SERVICE_TURN_ON,
)
import homeassistant.core as ha
from homeassistant.exceptions import HomeAssistantError, Unauthorized, UnknownUser
from homeassistant.helpers import config_validation as cv, recorder, restore_state
from homeassistant.helpers.service import (
async_extract_config_entry_ids,
async_extract_referenced_entity_ids,
)
from homeassistant.helpers.typing import ConfigType
ATTR_ENTRY_ID = "entry_id"
_LOGGER = logging.getLogger(__name__)
DOMAIN = ha.DOMAIN
SERVICE_RELOAD_CORE_CONFIG = "reload_core_config"
SERVICE_RELOAD_CONFIG_ENTRY = "reload_config_entry"
SERVICE_CHECK_CONFIG = "check_config"
SERVICE_UPDATE_ENTITY = "update_entity"
SERVICE_SET_LOCATION = "set_location"
SCHEMA_UPDATE_ENTITY = vol.Schema({ATTR_ENTITY_ID: cv.entity_ids})
SCHEMA_RELOAD_CONFIG_ENTRY = vol.All(
vol.Schema(
{
vol.Optional(ATTR_ENTRY_ID): str,
**cv.ENTITY_SERVICE_FIELDS,
},
),
cv.has_at_least_one_key(ATTR_ENTRY_ID, *cv.ENTITY_SERVICE_FIELDS),
)
SHUTDOWN_SERVICES = (SERVICE_HOMEASSISTANT_STOP, SERVICE_HOMEASSISTANT_RESTART)
async def async_setup(hass: ha.HomeAssistant, config: ConfigType) -> bool: # noqa: C901
"""Set up general services related to Home Assistant."""
async def async_save_persistent_states(service):
"""Handle calls to homeassistant.save_persistent_states."""
await restore_state.RestoreStateData.async_save_persistent_states(hass)
async def async_handle_turn_service(service):
"""Handle calls to homeassistant.turn_on/off."""
referenced = async_extract_referenced_entity_ids(hass, service)
all_referenced = referenced.referenced | referenced.indirectly_referenced
# Generic turn on/off method requires entity id
if not all_referenced:
_LOGGER.error(
"The service homeassistant.%s cannot be called without a target",
service.service,
)
return
# Group entity_ids by domain. groupby requires sorted data.
by_domain = it.groupby(
sorted(all_referenced), lambda item: ha.split_entity_id(item)[0]
)
tasks = []
unsupported_entities = set()
for domain, ent_ids in by_domain:
# Calling a homeassistant.* service from here would lead to an endless loop.
if domain == DOMAIN:
_LOGGER.warning(
"Called service homeassistant.%s with invalid entities %s",
service.service,
", ".join(ent_ids),
)
continue
if not hass.services.has_service(domain, service.service):
unsupported_entities.update(set(ent_ids) & referenced.referenced)
continue
# Create a new dict for this call
data = dict(service.data)
# ent_ids is a generator, convert it to a list.
data[ATTR_ENTITY_ID] = list(ent_ids)
tasks.append(
hass.services.async_call(
domain,
service.service,
data,
blocking=True,
context=service.context,
)
)
if unsupported_entities:
_LOGGER.warning(
"The service homeassistant.%s does not support entities %s",
service.service,
", ".join(sorted(unsupported_entities)),
)
if tasks:
await asyncio.gather(*tasks)
hass.services.async_register(
ha.DOMAIN, SERVICE_SAVE_PERSISTENT_STATES, async_save_persistent_states
)
service_schema = vol.Schema({ATTR_ENTITY_ID: cv.entity_ids}, extra=vol.ALLOW_EXTRA)
hass.services.async_register(
ha.DOMAIN, SERVICE_TURN_OFF, async_handle_turn_service, schema=service_schema
)
hass.services.async_register(
ha.DOMAIN, SERVICE_TURN_ON, async_handle_turn_service, schema=service_schema
)
hass.services.async_register(
ha.DOMAIN, SERVICE_TOGGLE, async_handle_turn_service, schema=service_schema
)
async def async_handle_core_service(call):
"""Service handler for handling core services."""
if call.service in SHUTDOWN_SERVICES and recorder.async_migration_in_progress(
hass
):
_LOGGER.error(
"The system cannot %s while a database upgrade is in progress",
call.service,
)
raise HomeAssistantError(
f"The system cannot {call.service} "
"while a database upgrade is in progress."
)
if call.service == SERVICE_HOMEASSISTANT_STOP:
asyncio.create_task(hass.async_stop())
return
errors = await conf_util.async_check_ha_config_file(hass)
if errors:
_LOGGER.error(
"The system cannot %s because the configuration is not valid: %s",
call.service,
errors,
)
hass.components.persistent_notification.async_create(
"Config error. See [the logs](/config/logs) for details.",
"Config validating",
f"{ha.DOMAIN}.check_config",
)
raise HomeAssistantError(
f"The system cannot {call.service} "
f"because the configuration is not valid: {errors}"
)
if call.service == SERVICE_HOMEASSISTANT_RESTART:
asyncio.create_task(hass.async_stop(RESTART_EXIT_CODE))
async def async_handle_update_service(call):
"""Service handler for updating an entity."""
if call.context.user_id:
user = await hass.auth.async_get_user(call.context.user_id)
if user is None:
raise UnknownUser(
context=call.context,
permission=POLICY_CONTROL,
user_id=call.context.user_id,
)
for entity in call.data[ATTR_ENTITY_ID]:
if not user.permissions.check_entity(entity, POLICY_CONTROL):
raise Unauthorized(
context=call.context,
permission=POLICY_CONTROL,
user_id=call.context.user_id,
perm_category=CAT_ENTITIES,
)
tasks = [
hass.helpers.entity_component.async_update_entity(entity)
for entity in call.data[ATTR_ENTITY_ID]
]
if tasks:
await asyncio.wait(tasks)
hass.helpers.service.async_register_admin_service(
ha.DOMAIN, SERVICE_HOMEASSISTANT_STOP, async_handle_core_service
)
hass.helpers.service.async_register_admin_service(
ha.DOMAIN, SERVICE_HOMEASSISTANT_RESTART, async_handle_core_service
)
hass.helpers.service.async_register_admin_service(
ha.DOMAIN, SERVICE_CHECK_CONFIG, async_handle_core_service
)
hass.services.async_register(
ha.DOMAIN,
SERVICE_UPDATE_ENTITY,
async_handle_update_service,
schema=SCHEMA_UPDATE_ENTITY,
)
async def async_handle_reload_config(call):
"""Service handler for reloading core config."""
try:
conf = await conf_util.async_hass_config_yaml(hass)
except HomeAssistantError as err:
_LOGGER.error(err)
return
# auth only processed during startup
await conf_util.async_process_ha_core_config(hass, conf.get(ha.DOMAIN) or {})
hass.helpers.service.async_register_admin_service(
ha.DOMAIN, SERVICE_RELOAD_CORE_CONFIG, async_handle_reload_config
)
async def async_set_location(call):
"""Service handler to set location."""
await hass.config.async_update(
latitude=call.data[ATTR_LATITUDE], longitude=call.data[ATTR_LONGITUDE]
)
hass.helpers.service.async_register_admin_service(
ha.DOMAIN,
SERVICE_SET_LOCATION,
async_set_location,
vol.Schema({ATTR_LATITUDE: cv.latitude, ATTR_LONGITUDE: cv.longitude}),
)
async def async_handle_reload_config_entry(call):
"""Service handler for reloading a config entry."""
reload_entries = set()
if ATTR_ENTRY_ID in call.data:
reload_entries.add(call.data[ATTR_ENTRY_ID])
reload_entries.update(await async_extract_config_entry_ids(hass, call))
if not reload_entries:
raise ValueError("There were no matching config entries to reload")
await asyncio.gather(
*(
hass.config_entries.async_reload(config_entry_id)
for config_entry_id in reload_entries
)
)
hass.helpers.service.async_register_admin_service(
ha.DOMAIN,
SERVICE_RELOAD_CONFIG_ENTRY,
async_handle_reload_config_entry,
schema=SCHEMA_RELOAD_CONFIG_ENTRY,
)
return True
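# --- Illustrative sketch (comments only; not part of the integration) ---
# The domain grouping inside async_handle_turn_service works like this
# (using a plain string split in place of ha.split_entity_id):
#
#   ids = ["light.desk", "switch.fan", "light.bed"]
#   for domain, ents in it.groupby(sorted(ids), lambda e: e.split(".")[0]):
#       ...  # ("light", ["light.bed", "light.desk"]), ("switch", ["switch.fan"])
#
# Sorting first matters: itertools.groupby only merges adjacent items.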
|
|
import mock
import unittest
from .helper import _ResourceMixin
class ChargeTest(_ResourceMixin, unittest.TestCase):
def _getTargetClass(self):
from .. import Charge
return Charge
def _getCardClass(self):
from .. import Card
return Card
def _getSourceClass(self):
from .. import Source
return Source
def _getCollectionClass(self):
from .. import Collection
return Collection
def _getLazyCollectionClass(self):
from .. import LazyCollection
return LazyCollection
def _makeOne(self):
return self._getTargetClass().from_data({
'card': {
'city': 'Bangkok',
'financing': 'credit',
'object': 'card',
'expiration_year': 2018,
'last_digits': '4242',
'created': '2014-10-20T09:41:56Z',
'country': 'th',
'brand': 'Visa',
'livemode': False,
'expiration_month': 10,
'postal_code': '10320',
'fingerprint': '098f6bcd4621d373cade4e832627b4f6',
'id': 'card_test',
'name': 'Somchai Prasert'
},
'capture': False,
'object': 'charge',
'description': 'Order-384',
'created': '2014-10-21T11:12:28Z',
'ip': '127.0.0.1',
'livemode': False,
'currency': 'thb',
'amount': 100000,
'transaction': None,
'refunded': 0,
'refunds': {
'object': 'list',
'from': '1970-01-01T00:00:00+00:00',
'to': '2015-01-26T16:20:43+00:00',
'offset': 0,
'limit': 20,
'total': 0,
'data': [],
'location': '/charges/chrg_test/refunds',
},
'failure_code': None,
'failure_message': None,
'location': '/charges/chrg_test',
'customer': None,
'id': 'chrg_test',
'captured': False,
'authorized': True,
'reversed': False,
'expired': False
})
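# Hedged note: _makeOne builds a Charge fixture from canned API data; the
# mocked-response tests below assert both deserialization of that payload and
# the exact request sent to https://api.omise.co/charges.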
@mock.patch('requests.post')
def test_create(self, api_call):
class_ = self._getTargetClass()
card_class_ = self._getCardClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "Order-384",
"capture": false,
"authorized": false,
"reversed": false,
"captured": false,
"transaction": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"failure_code": null,
"failure_message": null,
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
charge = class_.create(
amount=100000,
currency='thb',
description='Order-384',
ip='127.0.0.1',
card='tokn_test',
)
self.assertTrue(isinstance(charge, class_))
self.assertTrue(isinstance(charge.card, card_class_))
self.assertEqual(charge.id, 'chrg_test')
self.assertEqual(charge.amount, 100000)
self.assertEqual(charge.currency, 'thb')
self.assertEqual(charge.description, 'Order-384')
self.assertEqual(charge.ip, '127.0.0.1')
self.assertEqual(charge.card.id, 'card_test')
self.assertEqual(charge.card.last_digits, '4242')
self.assertRequest(
api_call,
'https://api.omise.co/charges',
{
'amount': 100000,
'currency': 'thb',
'description': 'Order-384',
'ip': '127.0.0.1',
'card': 'tokn_test',
}
)
@mock.patch('requests.post')
def test_create_with_source(self, api_call):
class_ = self._getTargetClass()
source_class_ = self._getSourceClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": null,
"metadata": {},
"status": "pending",
"capture": true,
"authorized": false,
"reversed": false,
"paid": false,
"transaction": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"location": "/charges/chrg_test/refunds",
"data": []
},
"return_uri": "http://www.google.com",
"reference": "ofsp_test",
"authorize_uri": "https://pay.omise.co/offsites/ofsp_test/pay",
"failure_code": null,
"failure_message": null,
"card": null,
"customer": null,
"ip": null,
"dispute": null,
"created": "2014-10-21T11:12:28Z",
"source": {
"object": "source",
"id": "src_test",
"type": "internet_banking_test",
"flow": "redirect",
"amount": 100000,
"currency": "thb"
}
}""")
charge = class_.create(
amount=100000,
currency='thb',
source='src_test',
return_uri='http://www.google.com'
)
self.assertTrue(isinstance(charge, class_))
self.assertTrue(isinstance(charge.source, source_class_))
self.assertEqual(charge.id, 'chrg_test')
self.assertEqual(charge.amount, 100000)
self.assertEqual(charge.currency, 'thb')
self.assertEqual(charge.source.id, 'src_test')
self.assertRequest(
api_call,
'https://api.omise.co/charges',
{
'amount': 100000,
'currency': 'thb',
'source': 'src_test',
'return_uri': 'http://www.google.com'
}
)
charge = class_.create(
amount=100000,
currency='thb',
source={
'type': 'internet_banking_test'
},
return_uri='http://www.google.com'
)
self.assertTrue(isinstance(charge, class_))
self.assertTrue(isinstance(charge.source, source_class_))
self.assertEqual(charge.id, 'chrg_test')
self.assertEqual(charge.amount, 100000)
self.assertEqual(charge.currency, 'thb')
self.assertEqual(charge.source.id, 'src_test')
self.assertRequest(
api_call,
'https://api.omise.co/charges', {
'amount': 100000,
'currency': 'thb',
'source': {
'type': 'internet_banking_test'
},
'return_uri': 'http://www.google.com'
}
)
@mock.patch('requests.get')
def test_retrieve(self, api_call):
class_ = self._getTargetClass()
card_class_ = self._getCardClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "Order-384",
"metadata": {
"order_id": "384"
},
"capture": false,
"authorized": true,
"reversed": false,
"captured": false,
"transaction": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"failure_code": null,
"failure_message": null,
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
charge = class_.retrieve('chrg_test')
self.assertTrue(isinstance(charge, class_))
self.assertTrue(isinstance(charge.card, card_class_))
self.assertEqual(charge.id, 'chrg_test')
self.assertEqual(charge.amount, 100000)
self.assertEqual(charge.currency, 'thb')
self.assertEqual(charge.description, 'Order-384')
self.assertEqual(charge.metadata.order_id, '384')
self.assertEqual(charge.ip, '127.0.0.1')
self.assertEqual(charge.card.id, 'card_test')
self.assertEqual(charge.card.last_digits, '4242')
self.assertRequest(api_call, 'https://api.omise.co/charges/chrg_test')
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 120000,
"currency": "thb",
"description": "Order-384",
"capture": false,
"authorized": true,
"reversed": false,
"captured": false,
"transaction": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"failure_code": null,
"failure_message": null,
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
charge.reload()
self.assertEqual(charge.amount, 120000)
self.assertEqual(charge.currency, 'thb')
self.assertRequest(api_call, 'https://api.omise.co/charges/chrg_test')
@mock.patch('requests.get')
def test_retrieve_no_args(self, api_call):
class_ = self._getTargetClass()
collection_class_ = self._getCollectionClass()
self.mockResponse(api_call, """{
"object": "list",
"from": "1970-01-01T07:00:00+07:00",
"to": "2014-11-20T14:17:24+07:00",
"offset": 0,
"limit": 20,
"total": 2,
"data": [
{
"object": "charge",
"id": "chrg_test_1",
"livemode": false,
"location": "/charges/chrg_test_1",
"amount": 200000,
"currency": "thb",
"description": "on Johns mastercard",
"capture": true,
"authorized": false,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test_1/refunds"
},
"card": {
"object": "card",
"id": "card_test_1",
"livemode": false,
"location": "/customers/cust_test/cards/card_test_1",
"country": "us",
"city": null,
"postal_code": null,
"financing": "debit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": null,
"name": "john_mastercard",
"security_code_check": false,
"created": "2014-11-20T01:30:37Z"
},
"customer": "cust_test",
"ip": "133.71.33.7",
"created": "2014-11-20T01:32:07Z"
},
{
"object": "charge",
"id": "chrg_test_2",
"livemode": false,
"location": "/charges/chrg_test_2",
"amount": 100000,
"currency": "thb",
"description": "on Johns personal visa",
"capture": true,
"authorized": false,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test_2/refunds"
},
"card": {
"object": "card",
"id": "card_test_2",
"livemode": false,
"location": "/customers/cust_test/cards/card_test_2",
"country": "us",
"city": "Dunkerque",
"postal_code": "59140",
"financing": "debit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2015,
"fingerprint": null,
"name": "john_personal_visa",
"security_code_check": false,
"created": "2014-11-20T01:30:27Z"
},
"customer": "cust_test",
"ip": "133.71.33.7",
"created": "2014-11-20T01:32:07Z"
}
]
}""")
charges = class_.retrieve()
self.assertTrue(isinstance(charges, collection_class_))
self.assertTrue(isinstance(charges[0], class_))
        self.assertEqual(charges[0].id, 'chrg_test_1')
        self.assertEqual(charges[0].amount, 200000)
        self.assertEqual(charges[1].id, 'chrg_test_2')
        self.assertEqual(charges[1].amount, 100000)
self.assertRequest(api_call, 'https://api.omise.co/charges')
@mock.patch('requests.get')
def test_list(self, api_call):
class_ = self._getTargetClass()
lazy_collection_class_ = self._getLazyCollectionClass()
self.mockResponse(api_call, """{
"object": "list",
"from": "1970-01-01T07:00:00+07:00",
"to": "2014-11-20T14:17:24+07:00",
"offset": 0,
"limit": 20,
"total": 2,
"data": [
{
"object": "charge",
"id": "chrg_test_1",
"livemode": false,
"location": "/charges/chrg_test_1",
"amount": 200000,
"currency": "thb",
"description": "on Johns mastercard",
"capture": true,
"authorized": false,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test_1/refunds"
},
"card": {
"object": "card",
"id": "card_test_1",
"livemode": false,
"location": "/customers/cust_test/cards/card_test_1",
"country": "us",
"city": null,
"postal_code": null,
"financing": "debit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": null,
"name": "john_mastercard",
"security_code_check": false,
"created": "2014-11-20T01:30:37Z"
},
"customer": "cust_test",
"ip": "133.71.33.7",
"created": "2014-11-20T01:32:07Z"
},
{
"object": "charge",
"id": "chrg_test_2",
"livemode": false,
"location": "/charges/chrg_test_2",
"amount": 100000,
"currency": "thb",
"description": "on Johns personal visa",
"capture": true,
"authorized": false,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test_2/refunds"
},
"card": {
"object": "card",
"id": "card_test_2",
"livemode": false,
"location": "/customers/cust_test/cards/card_test_2",
"country": "us",
"city": "Dunkerque",
"postal_code": "59140",
"financing": "debit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2015,
"fingerprint": null,
"name": "john_personal_visa",
"security_code_check": false,
"created": "2014-11-20T01:30:27Z"
},
"customer": "cust_test",
"ip": "133.71.33.7",
"created": "2014-11-20T01:32:07Z"
}
]
}""")
charges = class_.list()
self.assertTrue(isinstance(charges, lazy_collection_class_))
charges = list(charges)
self.assertTrue(isinstance(charges[0], class_))
        self.assertEqual(charges[0].id, 'chrg_test_1')
        self.assertEqual(charges[0].amount, 200000)
        self.assertEqual(charges[1].id, 'chrg_test_2')
        self.assertEqual(charges[1].amount, 100000)
@mock.patch('requests.patch')
def test_update(self, api_call):
charge = self._makeOne()
class_ = self._getTargetClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "New description",
"capture": false,
"authorized": true,
"reversed": false,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
self.assertTrue(isinstance(charge, class_))
self.assertEqual(charge.description, 'Order-384')
charge.description = 'New description'
charge.update()
self.assertEqual(charge.description, 'New description')
self.assertRequest(
api_call,
'https://api.omise.co/charges/chrg_test',
{'description': 'New description'}
)
@mock.patch('requests.post')
def test_capture(self, api_call):
charge = self._makeOne()
class_ = self._getTargetClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "New description",
"capture": false,
"authorized": true,
"reversed": false,
"captured": true,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
self.assertTrue(isinstance(charge, class_))
self.assertFalse(charge.captured)
charge.capture()
self.assertTrue(charge.captured)
self.assertRequest(
api_call,
'https://api.omise.co/charges/chrg_test/capture',
)
@mock.patch('requests.post')
def test_reverse(self, api_call):
charge = self._makeOne()
class_ = self._getTargetClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "New description",
"capture": false,
"authorized": true,
"reversed": true,
"captured": false,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 0,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 0,
"data": [],
"location": "/charges/chrg_test/refunds"
},
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
self.assertTrue(isinstance(charge, class_))
self.assertFalse(charge.reversed)
charge.reverse()
self.assertTrue(charge.reversed)
self.assertRequest(
api_call,
'https://api.omise.co/charges/chrg_test/reverse',
)
@mock.patch('requests.post')
def test_expire(self, api_call):
charge = self._makeOne()
class_ = self._getTargetClass()
self.mockResponse(api_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"expired": true
}""")
self.assertTrue(isinstance(charge, class_))
self.assertFalse(charge.expired)
charge.expire()
self.assertTrue(charge.expired)
self.assertRequest(
api_call,
'https://api.omise.co/charges/chrg_test/expire',
)
@mock.patch('requests.get')
@mock.patch('requests.post')
def test_refund(self, api_call, reload_call):
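        # Note: stacked patch decorators are passed in bottom-up, so api_call
        # is the mocked requests.post (the refund request) and reload_call is
        # the mocked requests.get (the charge reload that follows it).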
charge = self._makeOne()
class_ = self._getTargetClass()
self.mockResponse(api_call, """{
"object": "refund",
"id": "rfnd_test",
"location": "/charges/chrg_test/refunds/rfnd_test",
"amount": 10000,
"currency": "thb",
"charge": "chrg_test",
"transaction": null,
"created": "2015-01-26T16:17:26Z"
}""")
self.mockResponse(reload_call, """{
"object": "charge",
"id": "chrg_test",
"livemode": false,
"location": "/charges/chrg_test",
"amount": 100000,
"currency": "thb",
"description": "New description",
"capture": true,
"authorized": true,
"reversed": false,
"captured": true,
"transaction": null,
"failure_code": null,
"failure_message": null,
"refunded": 10000,
"refunds": {
"object": "list",
"from": "1970-01-01T00:00:00+00:00",
"to": "2015-01-26T16:20:43+00:00",
"offset": 0,
"limit": 20,
"total": 1,
"data": [
{
"object": "refund",
"id": "rfnd_test_1",
"location": "/charges/chrg_test/refunds/rfnd_test_1",
"amount": 10000,
"currency": "thb",
"charge": "chrg_test",
"transaction": null,
"created": "2015-01-26T15:06:16Z"
}
],
"location": "/charges/chrg_test/refunds"
},
"card": {
"object": "card",
"id": "card_test",
"livemode": false,
"country": "th",
"city": "Bangkok",
"postal_code": "10320",
"financing": "credit",
"last_digits": "4242",
"brand": "Visa",
"expiration_month": 10,
"expiration_year": 2018,
"fingerprint": "098f6bcd4621d373cade4e832627b4f6",
"name": "Somchai Prasert",
"created": "2014-10-20T09:41:56Z"
},
"customer": null,
"ip": "127.0.0.1",
"created": "2014-10-21T11:12:28Z"
}""")
self.assertTrue(isinstance(charge, class_))
refund = charge.refund(amount=10000)
self.assertEqual(refund.amount, 10000)
self.assertEqual(charge.refunded, 10000)
self.assertRequest(
api_call,
'https://api.omise.co/charges/chrg_test/refunds',
{'amount': 10000}
)
@mock.patch('requests.get')
def test_schedule(self, api_call):
class_ = self._getTargetClass()
collection_class_ = self._getCollectionClass()
self.mockResponse(api_call, """{
"object": "list",
"from": "1970-01-01T07:00:00+07:00",
"to": "2017-06-02T12:34:43+07:00",
"offset": 0,
"limit": 20,
"total": 1,
"order": "chronological",
"location": "/charges/schedules",
"data": [
{
"object": "schedule",
"id": "schd_test",
"livemode": false,
"location": "/schedules/schd_test",
"status": "active",
"deleted": false,
"every": 1,
"period": "month",
"on": {
"weekday_of_month": "2nd_monday"
},
"in_words": "Every 1 month(s) on the 2nd Monday",
"start_date": "2017-06-02",
"end_date": "2018-05-01",
"charge": {
"amount": 100000,
"currency": "thb",
"description": "Membership fee",
"customer": "cust_test_58655j2ez4elik6t2xc",
"card": null
},
"occurrences": {
"object": "list",
"from": "1970-01-01T07:00:00+07:00",
"to": "2017-06-02T19:14:21+07:00",
"offset": 0,
"limit": 20,
"total": 0,
"order": null,
"location": "/schedules/schd_test/occurrences",
"data": []
},
"next_occurrence_dates": [
"2017-06-12",
"2017-07-10",
"2017-08-14",
"2017-09-11",
"2017-10-09",
"2017-11-13",
"2017-12-11",
"2018-01-08",
"2018-02-12",
"2018-03-12",
"2018-04-09"
],
"created": "2017-06-02T12:14:21Z"
}
]
}""")
schedules = class_.schedule()
self.assertTrue(isinstance(schedules, collection_class_))
self.assertEqual(schedules.total, 1)
self.assertEqual(schedules.location, '/charges/schedules')
self.assertEqual(schedules[0].period, 'month')
self.assertEqual(schedules[0].status, 'active')
self.assertEqual(schedules[0].start_date, '2017-06-02')
self.assertEqual(schedules[0].end_date, '2018-05-01')
self.assertRequest(api_call, 'https://api.omise.co/charges/schedules')
@mock.patch('requests.get')
def test_list_events(self, api_call):
charge = self._makeOne()
lazy_collection_class_ = self._getLazyCollectionClass()
self.mockResponse(api_call, """{
"object": "list",
"data": [
{
"object": "event",
"id": "evnt_test",
"livemode": false,
"location": "/events/evnt_test",
"webhook_deliveries": [
{
"object": "webhook_delivery",
"id": "whdl_test",
"uri": "https://www.omise.co",
"status": 200
}
],
"data": [
{
"object": "charge",
"id": "chrg_test",
"location": "/charges/chrg_test",
"amount": 100000
}
],
"key": "charge.create",
"created_at": "2021-01-29T02:05:20Z"
}
],
"limit": 20,
"offset": 0,
"total": 2,
"location": null,
"order": "chronological",
"from": "1970-01-01T00:00:00Z",
"to": "2021-02-02T03:16:57Z"
}""")
events = charge.list_events()
self.assertTrue(isinstance(events, lazy_collection_class_))
events = list(events)
        self.assertEqual(events[0].id, 'evnt_test')
        self.assertEqual(events[0].key, 'charge.create')
|
|
# vim: set et sw=4 sts=4 fileencoding=utf-8:
#
# Python camera library for the Raspberry Pi camera module
# Copyright (c) 2013-2017 Dave Jones <dave@waveform.org.uk>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import (
unicode_literals,
print_function,
division,
absolute_import,
)
# Make Py2's str equivalent to Py3's
str = type('')
import ctypes as ct
from . import mmal, mmalobj as mo
from .exc import (
PiCameraRuntimeError,
PiCameraValueError,
mmal_check,
)
class PiRenderer(object):
"""
Wraps :class:`~mmalobj.MMALRenderer` for use by PiCamera.
The *parent* parameter specifies the :class:`PiCamera` instance that has
constructed this renderer. All other parameters set the initial values
of the correspondingly named attributes (e.g. the *layer* parameter
sets the initial value of the :attr:`layer` attribute, the *crop* parameter
sets the initial value of the :attr:`crop` attribute, etc).
This base class isn't directly used by :class:`PiCamera`, but the two
derivatives defined below, :class:`PiOverlayRenderer` and
:class:`PiPreviewRenderer`, are used to produce overlays and the camera
preview respectively.
.. versionchanged:: 1.14
Added *anamorphic* parameter
"""
def __init__(
self, parent, layer=0, alpha=255, fullscreen=True, window=None,
crop=None, rotation=0, vflip=False, hflip=False, anamorphic=False):
# Create and enable the renderer component
self._rotation = 0
self._vflip = False
self._hflip = False
self.renderer = mo.MMALRenderer()
try:
self.layer = layer
self.alpha = alpha
self.fullscreen = fullscreen
self.anamorphic = anamorphic
if window is not None:
self.window = window
if crop is not None:
self.crop = crop
self.rotation = rotation
self.vflip = vflip
self.hflip = hflip
self.renderer.enable()
except:
self.renderer.close()
raise
def close(self):
"""
Finalizes the renderer and deallocates all structures.
This method is called by the camera prior to destroying the renderer
(or more precisely, letting it go out of scope to permit the garbage
collector to destroy it at some future time).
"""
if self.renderer:
self.renderer.close()
self.renderer = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
def _get_alpha(self):
return self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION].alpha
def _set_alpha(self, value):
try:
if not (0 <= value <= 255):
raise PiCameraValueError(
"Invalid alpha value: %d (valid range 0..255)" % value)
except TypeError:
raise PiCameraValueError("Invalid alpha value: %s" % value)
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_ALPHA
mp.alpha = value
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
alpha = property(_get_alpha, _set_alpha, doc="""\
Retrieves or sets the opacity of the renderer.
When queried, the :attr:`alpha` property returns a value between 0 and
255 indicating the opacity of the renderer, where 0 is completely
transparent and 255 is completely opaque. The default value is 255. The
property can be set while recordings or previews are in progress.
.. note::
If the renderer is being fed RGBA data (as in partially transparent
overlays), the alpha property will be ignored.
""")
def _get_layer(self):
return self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION].layer
def _set_layer(self, value):
try:
if not (0 <= value <= 255):
raise PiCameraValueError(
"Invalid layer value: %d (valid range 0..255)" % value)
except TypeError:
raise PiCameraValueError("Invalid layer value: %s" % value)
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_LAYER
mp.layer = value
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
layer = property(_get_layer, _set_layer, doc="""\
Retrieves or sets the layer of the renderer.
The :attr:`layer` property is an integer which controls the layer that
the renderer occupies. Higher valued layers obscure lower valued layers
(with 0 being the "bottom" layer). The default value is 2. The property
can be set while recordings or previews are in progress.
""")
def _get_fullscreen(self):
return self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION].fullscreen.value != mmal.MMAL_FALSE
def _set_fullscreen(self, value):
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_FULLSCREEN
mp.fullscreen = bool(value)
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
fullscreen = property(_get_fullscreen, _set_fullscreen, doc="""\
Retrieves or sets whether the renderer appears full-screen.
The :attr:`fullscreen` property is a bool which controls whether the
renderer takes up the entire display or not. When set to ``False``, the
:attr:`window` property can be used to control the precise size of the
renderer display. The property can be set while recordings or previews
are active.
""")
def _get_anamorphic(self):
return self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION].noaspect.value != mmal.MMAL_FALSE
def _set_anamorphic(self, value):
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_NOASPECT
mp.noaspect = bool(value)
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
anamorphic = property(_get_anamorphic, _set_anamorphic, doc="""\
Retrieves or sets whether the renderer is `anamorphic`_.
The :attr:`anamorphic` property is a bool which controls whether the
renderer respects the `aspect ratio`_ of the source. When ``False``
(the default) the source aspect ratio is respected. When set to
        ``True``, the source is stretched to fill the display regardless of
        its aspect ratio. This can help with things like 16:9 widescreen
        composite outputs for previews without having to change the camera's
        output ratio. The property can be set
while recordings or previews are active.
.. versionadded:: 1.14
.. _aspect ratio: https://en.wikipedia.org/wiki/Aspect_ratio_(image)
.. _anamorphic: https://en.wikipedia.org/wiki/Anamorphic_widescreen
""")
def _get_window(self):
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
return (
mp.dest_rect.x,
mp.dest_rect.y,
mp.dest_rect.width,
mp.dest_rect.height,
)
def _set_window(self, value):
try:
x, y, w, h = value
except (TypeError, ValueError) as e:
raise PiCameraValueError(
"Invalid window rectangle (x, y, w, h) tuple: %s" % value)
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_DEST_RECT
mp.dest_rect = mmal.MMAL_RECT_T(x, y, w, h)
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
window = property(_get_window, _set_window, doc="""\
Retrieves or sets the size of the renderer.
When the :attr:`fullscreen` property is set to ``False``, the
:attr:`window` property specifies the size and position of the renderer
on the display. The property is a 4-tuple consisting of ``(x, y, width,
height)``. The property can be set while recordings or previews are
active.
""")
def _get_crop(self):
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
return (
mp.src_rect.x,
mp.src_rect.y,
mp.src_rect.width,
mp.src_rect.height,
)
def _set_crop(self, value):
try:
x, y, w, h = value
except (TypeError, ValueError) as e:
raise PiCameraValueError(
"Invalid crop rectangle (x, y, w, h) tuple: %s" % value)
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_SRC_RECT
mp.src_rect = mmal.MMAL_RECT_T(x, y, w, h)
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
crop = property(_get_crop, _set_crop, doc="""\
Retrieves or sets the area to read from the source.
The :attr:`crop` property specifies the rectangular area that the
renderer will read from the source as a 4-tuple of ``(x, y, width,
height)``. The special value ``(0, 0, 0, 0)`` (which is also the
        default) means to read the entire area of the source. The property can be
set while recordings or previews are active.
        For example, if the camera's resolution is currently configured as
        1280x720, setting this attribute to ``(320, 160, 640, 400)`` will
        crop the preview to the center 640x400 pixels of the input. Note that
        this property does not affect the size of the output rectangle,
        which is controlled with :attr:`fullscreen` and :attr:`window`.
.. note::
This property only affects the renderer; it has no bearing on image
captures or recordings (unlike the :attr:`~PiCamera.zoom` property
of the :class:`PiCamera` class).
""")
def _get_rotation(self):
return self._rotation
def _set_rotation(self, value):
try:
value = ((int(value) % 360) // 90) * 90
except ValueError:
raise PiCameraValueError("Invalid rotation angle: %s" % value)
self._set_transform(
self._get_transform(value, self._vflip, self._hflip))
self._rotation = value
rotation = property(_get_rotation, _set_rotation, doc="""\
Retrieves or sets the current rotation of the renderer.
When queried, the :attr:`rotation` property returns the rotation
applied to the renderer. Valid values are 0, 90, 180, and 270.
When set, the property changes the rotation applied to the renderer's
output. The property can be set while recordings or previews are
active. The default is 0.
.. note::
This property only affects the renderer; it has no bearing on image
captures or recordings (unlike the :attr:`~PiCamera.rotation`
property of the :class:`PiCamera` class).
""")
def _get_vflip(self):
return self._vflip
def _set_vflip(self, value):
value = bool(value)
self._set_transform(
self._get_transform(self._rotation, value, self._hflip))
self._vflip = value
vflip = property(_get_vflip, _set_vflip, doc="""\
Retrieves or sets whether the renderer's output is vertically flipped.
When queried, the :attr:`vflip` property returns a boolean indicating
whether or not the renderer's output is vertically flipped. The
property can be set while recordings or previews are in progress. The
default is ``False``.
.. note::
This property only affects the renderer; it has no bearing on image
captures or recordings (unlike the :attr:`~PiCamera.vflip` property
of the :class:`PiCamera` class).
""")
def _get_hflip(self):
return self._hflip
def _set_hflip(self, value):
value = bool(value)
self._set_transform(
self._get_transform(self._rotation, self._vflip, value))
self._hflip = value
hflip = property(_get_hflip, _set_hflip, doc="""\
Retrieves or sets whether the renderer's output is horizontally
flipped.
        When queried, the :attr:`hflip` property returns a boolean indicating
whether or not the renderer's output is horizontally flipped. The
property can be set while recordings or previews are in progress. The
default is ``False``.
.. note::
This property only affects the renderer; it has no bearing on image
captures or recordings (unlike the :attr:`~PiCamera.hflip` property
of the :class:`PiCamera` class).
""")
def _get_transform(self, rotate, vflip, hflip):
# Use a (horizontally) mirrored transform if one of vflip or hflip is
# set. If vflip is set, rotate by an extra 180 degrees to make up for
# the lack of a "true" vertical flip
mirror = vflip ^ hflip
if vflip:
rotate = (rotate + 180) % 360
return {
(0, False): mmal.MMAL_DISPLAY_ROT0,
(90, False): mmal.MMAL_DISPLAY_ROT90,
(180, False): mmal.MMAL_DISPLAY_ROT180,
(270, False): mmal.MMAL_DISPLAY_ROT270,
(0, True): mmal.MMAL_DISPLAY_MIRROR_ROT0,
(90, True): mmal.MMAL_DISPLAY_MIRROR_ROT90,
(180, True): mmal.MMAL_DISPLAY_MIRROR_ROT180,
(270, True): mmal.MMAL_DISPLAY_MIRROR_ROT270,
}[(rotate, mirror)]
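    # Worked example of the mapping above: rotation=90 with vflip=True and
    # hflip=False gives mirror=True and rotate=(90+180)%360=270, i.e.
    # MMAL_DISPLAY_MIRROR_ROT270.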
def _set_transform(self, value):
mp = self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
mp.set = mmal.MMAL_DISPLAY_SET_TRANSFORM
mp.transform = value
self.renderer.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = mp
class PiOverlayRenderer(PiRenderer):
"""
Represents an :class:`~mmalobj.MMALRenderer` with a static source for
overlays.
This class descends from :class:`PiRenderer` and adds a static *source* for
the :class:`~mmalobj.MMALRenderer`. The *source* must be an object that
supports the :ref:`buffer protocol <bufferobjects>` in one of the supported
formats.
The optional *resolution* parameter specifies the size of the *source* as a
``(width, height)`` tuple. If this is omitted or ``None`` then the
resolution is assumed to be the same as the parent camera's current
:attr:`~PiCamera.resolution`. The optional *format* parameter specifies the
encoding of the *source*. This can be one of the unencoded formats:
``'yuv'``, ``'rgb'``, ``'rgba'``, ``'bgr'``, or ``'bgra'``. If omitted or
``None``, *format* will be guessed based on the size of *source* (assuming
3 bytes for `RGB`_, and 4 bytes for `RGBA`_).
The length of *source* must take into account that widths are rounded up to
the nearest multiple of 32, and heights to the nearest multiple of 16. For
example, if *resolution* is ``(1280, 720)``, and *format* is ``'rgb'`` then
*source* must be a buffer with length 1280 x 720 x 3 bytes, or 2,764,800
        bytes (because 1280 is a multiple of 32 and 720 is a multiple of 16, no
        extra rounding is required). However, if *resolution* is ``(97, 57)``, and
*format* is ``'rgb'`` then *source* must be a buffer with length 128 x 64 x
3 bytes, or 24,576 bytes (pixels beyond column 97 and row 57 in the source
will be ignored).
The *layer*, *alpha*, *fullscreen*, and *window* parameters are the same
as in :class:`PiRenderer`.
.. _RGB: https://en.wikipedia.org/wiki/RGB
.. _RGBA: https://en.wikipedia.org/wiki/RGBA_color_space
.. versionchanged:: 1.13
Added *format* parameter
.. versionchanged:: 1.14
Added *anamorphic* parameter
"""
SOURCE_BPP = {
3: 'rgb',
4: 'rgba',
}
SOURCE_ENCODINGS = {
'yuv': mmal.MMAL_ENCODING_I420,
'rgb': mmal.MMAL_ENCODING_RGB24,
'rgba': mmal.MMAL_ENCODING_RGBA,
'bgr': mmal.MMAL_ENCODING_BGR24,
'bgra': mmal.MMAL_ENCODING_BGRA,
}
def __init__(
self, parent, source, resolution=None, format=None, layer=0,
alpha=255, fullscreen=True, window=None, crop=None, rotation=0,
vflip=False, hflip=False, anamorphic=False):
super(PiOverlayRenderer, self).__init__(
parent, layer, alpha, fullscreen, window, crop,
rotation, vflip, hflip, anamorphic)
# Copy format from camera's preview port, then adjust the encoding to
# RGB888 or RGBA and optionally adjust the resolution and size
if resolution is not None:
self.renderer.inputs[0].framesize = resolution
else:
self.renderer.inputs[0].framesize = parent.resolution
self.renderer.inputs[0].framerate = 0
if format is None:
source_len = mo.buffer_bytes(source)
plane_size = self.renderer.inputs[0].framesize.pad()
plane_len = plane_size.width * plane_size.height
try:
format = self.SOURCE_BPP[source_len // plane_len]
except KeyError:
raise PiCameraValueError(
'unable to determine format from source size')
try:
self.renderer.inputs[0].format = self.SOURCE_ENCODINGS[format]
except KeyError:
raise PiCameraValueError('unknown format %s' % format)
self.renderer.inputs[0].commit()
# The following callback is required to prevent the mmalobj layer
# automatically passing buffers back to the port
self.renderer.inputs[0].enable(callback=lambda port, buf: True)
self.update(source)
def update(self, source):
"""
Update the overlay with a new source of data.
The new *source* buffer must have the same size as the original buffer
used to create the overlay. There is currently no method for changing
the size of an existing overlay (remove and recreate the overlay if you
require this).
.. note::
If you repeatedly update an overlay renderer, you must make sure
that you do so at a rate equal to, or slower than, the camera's
framerate. Going faster will rapidly starve the renderer's pool of
buffers leading to a runtime error.
"""
buf = self.renderer.inputs[0].get_buffer()
buf.data = source
self.renderer.inputs[0].send_buffer(buf)
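# A minimal sketch (not part of the library) of sizing an overlay source
# buffer by hand, mirroring the 32/16 rounding described in the class
# docstring above; numpy and the PiCamera.add_overlay front-end are assumed:
#
#     import numpy as np
#     w, h = 97, 57
#     padded_w = ((w + 31) // 32) * 32   # -> 128
#     padded_h = ((h + 15) // 16) * 16   # -> 64
#     source = np.zeros((padded_h, padded_w, 3), dtype=np.uint8)  # 'rgb'
#     camera.add_overlay(source.tobytes(), size=(w, h))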
class PiPreviewRenderer(PiRenderer):
"""
Represents an :class:`~mmalobj.MMALRenderer` which uses the camera's
preview as a source.
This class descends from :class:`PiRenderer` and adds an
:class:`~mmalobj.MMALConnection` to connect the renderer to an MMAL port.
The *source* parameter specifies the :class:`~mmalobj.MMALPort` to connect
to the renderer. The *resolution* parameter can be used to override the
framesize of the *source*. See :attr:`resolution` for details of when this
is useful.
All other parameters are the same as in :class:`PiRenderer`.
.. versionchanged:: 1.14
Added *anamorphic* parameter
"""
def __init__(
self, parent, source, resolution=None, layer=2, alpha=255,
fullscreen=True, window=None, crop=None, rotation=0, vflip=False,
hflip=False, anamorphic=False):
super(PiPreviewRenderer, self).__init__(
parent, layer, alpha, fullscreen, window, crop,
rotation, vflip, hflip, anamorphic)
self._parent = parent
if resolution is not None:
resolution = mo.to_resolution(resolution)
source.framesize = resolution
self.renderer.inputs[0].connect(source).enable()
def _get_resolution(self):
result = self._parent._camera.outputs[self._parent.CAMERA_PREVIEW_PORT].framesize
if result != self._parent.resolution:
return result
else:
return None
def _set_resolution(self, value):
if value is not None:
value = mo.to_resolution(value)
if (
value.width > self._parent.resolution.width or
value.height > self._parent.resolution.height
):
raise PiCameraValueError(
'preview resolution cannot exceed camera resolution')
self.renderer.connection.disable()
if value is None:
value = self._parent.resolution
self._parent._camera.outputs[self._parent.CAMERA_PREVIEW_PORT].framesize = value
self._parent._camera.outputs[self._parent.CAMERA_PREVIEW_PORT].commit()
self.renderer.connection.enable()
resolution = property(_get_resolution, _set_resolution, doc="""\
Retrieves or sets the resolution of the preview renderer.
By default, the preview's resolution matches the camera's resolution.
However, particularly high resolutions (such as the maximum resolution
of the V2 camera module) can cause issues. In this case, you may wish
        to set a lower resolution for the preview than the camera's resolution.
When queried, the :attr:`resolution` property returns ``None`` if the
preview's resolution is derived from the camera's. In this case,
changing the camera's resolution will also cause the preview's
resolution to change. Otherwise, it returns the current preview
resolution as a tuple.
.. note::
The preview resolution cannot be greater than the camera's
resolution. If you set a preview resolution, then change the
camera's resolution below the preview's resolution, this property
will silently revert to ``None``, meaning the preview's resolution
will follow the camera's resolution.
When set, the property reconfigures the preview renderer with the new
resolution. As a special case, setting the property to ``None`` will
cause the preview to follow the camera's resolution once more. The
property can be set while recordings are in progress. The default is
``None``.
.. note::
This property only affects the renderer; it has no bearing on image
captures or recordings (unlike the :attr:`~PiCamera.resolution`
property of the :class:`PiCamera` class).
.. versionadded:: 1.11
""")
class PiNullSink(object):
"""
Implements an :class:`~mmalobj.MMALNullSink` which can be used in place of
a renderer.
The *parent* parameter specifies the :class:`PiCamera` instance which
constructed this :class:`~mmalobj.MMALNullSink`. The *source* parameter
specifies the :class:`~mmalobj.MMALPort` which the null-sink should connect
to its input.
The null-sink can act as a drop-in replacement for :class:`PiRenderer` in
most cases, but obviously doesn't implement attributes like ``alpha``,
``layer``, etc. as it simply dumps any incoming frames. This is also the
reason that this class doesn't derive from :class:`PiRenderer` like all
other classes in this module.
"""
def __init__(self, parent, source):
self.renderer = mo.MMALNullSink()
self.renderer.enable()
self.renderer.inputs[0].connect(source).enable()
def close(self):
"""
Finalizes the null-sink and deallocates all structures.
This method is called by the camera prior to destroying the null-sink
(or more precisely, letting it go out of scope to permit the garbage
collector to destroy it at some future time).
"""
if self.renderer:
self.renderer.close()
self.renderer = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
|
|
import pygame
from math import sqrt
#http://www.raywenderlich.com/4946/introduction-to-a-pathfinding
class GridWorld():
"""Grid world that contains animals living in cells."""
def __init__(self,width=10,height=10,cell_size=50):
pygame.init()
self.screen = pygame.display.set_mode((height*cell_size,width*cell_size))
        pygame.display.set_caption('Paul World')
self.actors = {}
self.width = width
self.height = height
self.cell_size = cell_size
self._init_cells()
self._init_paul_and_cake()
self.add_tile_type = None
def _draw_background(self):
WHITE = (255,255,255)
self.screen.fill(WHITE)
def _init_cells(self):
self.cells = {}
for i in range(self.height):
for j in range(self.width):
self.cells[(i,j)] = Cell(self.screen,(i*self.cell_size, j*self.cell_size),(self.cell_size,self.cell_size))
def _add_coords(self,a,b):
return tuple(map(sum,zip(a,b)))
def _init_paul_and_cake(self):
self.paul = Paul( (0,0), self, './images/paul.jpg' )
self.cake = Actor( (9,9), self, './images/cake.jpg' , unremovable = True, is_obstacle = False)
self.actors[(0,0)] = self.paul
self.actors[(9,9)] = self.cake
def _draw_cells(self):
all_cells = self.cells.values()
for cell in all_cells:
cell.draw()
def _draw_actors(self):
all_actors = self.actors.values()
for actor in all_actors:
actor.draw()
def _is_in_grid(self,cell_coord):
"""tells us whether cell_coord is valid and in range of the actual grid dimensions"""
return (-1 < cell_coord[0] < self.width) and (-1 < cell_coord[1] < self.height)
    def _is_occupied(self, cell_coord):
        try:
            actor = self.actors[cell_coord]
            return actor.is_obstacle
        except KeyError:
            return False
def _add_swamp(self, mouse_pos):
#insert swamp code here.
pass
def _add_lava(self, mouse_pos):
        lava_coord = (mouse_pos[0] // self.cell_size, mouse_pos[1] // self.cell_size)
if self._is_occupied(lava_coord):
if self.actors[lava_coord].unremovable == False:
self.actors.pop(lava_coord, None)
else:
self.actors[lava_coord] = ObstacleTile( lava_coord, self, './images/lava.jpg', is_unpassable = True, terrain_cost = 0)
    def get_terrain_cost(self, cell_coord):
        try:
            cost = self.actors[cell_coord].terrain_cost
        except (KeyError, AttributeError):
            # Empty cell, or an actor (e.g. Paul) without a terrain cost.
            return 0
        return cost if cost is not None else 0
def main_loop(self):
running = True
while (running):
self._draw_background()
self._draw_actors()
self._draw_cells()
pygame.display.update()
for event in pygame.event.get():
                if event.type == pygame.QUIT:
running = False
                elif event.type == pygame.MOUSEBUTTONDOWN:
if self.add_tile_type == 'lava':
self._add_lava(event.pos)
#insert swamp code here
                elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_SPACE:
self.paul.run_astar(self.cake.cell_coordinates, self)
self.paul.get_path()
elif event.key == pygame.K_l:
self.add_tile_type = 'lava'
#insert swamp code here
class Actor(object):
    def __init__(self, cell_coordinates, world, image_loc, unremovable=False, is_obstacle=True):
        """Takes coordinates as a tuple."""
        self.is_obstacle = is_obstacle
        self.unremovable = unremovable
        if world._is_occupied(cell_coordinates):
            raise Exception('%s is already occupied!' % cell_coordinates)
self.cell_coordinates = cell_coordinates
self.world = world
self.image = pygame.image.load(image_loc)
self.image_rect = self.image.get_rect()
def draw(self):
cells = self.world.cells
cell = cells[self.cell_coordinates]
x_y_coords = self.world._add_coords(cell.coordinates, (3,3) ) #add an offset so that the image will fit inside the cell border.
rect_dim = (self.image_rect.width, self.image_rect.height)
self.image_rect = pygame.Rect(x_y_coords, rect_dim)
screen = self.world.screen
screen.blit(self.image,self.image_rect)
class ObstacleTile(Actor):
def __init__(self, cell_coordinates, world, image_loc, terrain_cost=0, is_unpassable = True):
super(ObstacleTile, self).__init__(cell_coordinates, world, image_loc, unremovable = False, is_obstacle = is_unpassable)
self.terrain_cost = terrain_cost
class Cell():
def __init__(self, draw_screen, coordinates, dimensions):
self.draw_screen = draw_screen
self.coordinates = coordinates
self.dimensions = dimensions
self.color = (0,0,0)
self.g_cost = None
self.h_cost = None
@property
def f_cost(self):
if self.g_cost is None or self.h_cost is None:
return None
return self.g_cost + self.h_cost
def draw(self):
COST_TO_DRAW = ''
#COST_TO_DRAW = self.g_cost
#COST_TO_DRAW = self.h_cost
#COST_TO_DRAW = self.f_cost
line_width = 2
rect = pygame.Rect((self.coordinates[0],self.coordinates[1]),(self.dimensions[0],self.dimensions[1]))
pygame.draw.rect(self.draw_screen, self.color, rect, line_width)
font = pygame.font.Font(None, 20)
text = font.render(' '+str(COST_TO_DRAW) , 1, (10,10,10))
self.draw_screen.blit(text, self.coordinates)
class Paul(Actor):
def __init__(self, init_coordinates, world, image_loc):
super(Paul, self).__init__(init_coordinates, world, image_loc, unremovable = True)
self.cells = world.cells
self.open_list = []
self.closed_list = []
def get_h_cost(self, coord_a,coord_b):
"""returns the h score, the manhattan distance between coord_a and the coord_b."""
return abs(coord_a[0] - coord_b[0]) + abs(coord_a[1] - coord_b[1])
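        # e.g. get_h_cost((0, 0), (3, 4)) == 7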
def get_open_adj_coords(self, coords):
"""returns list of valid coords that are adjacent to the argument, open, and not in the closed list."""
#modify directions and costs as needed
directions = [(1,0),(0,1),(-1,0),(0,-1)]
costs = [1,1,1,1]
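        # e.g. to allow diagonal movement (hypothetical extension):
        #     directions = [(1,0),(0,1),(-1,0),(0,-1),(1,1),(1,-1),(-1,1),(-1,-1)]
        #     costs = [1,1,1,1,1.4,1.4,1.4,1.4]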
adj_coords = map(lambda d: self.world._add_coords(coords,d), directions)
for i, coord in enumerate(adj_coords):
costs[i] += self.world.get_terrain_cost(coord)
in_bounds = [self.world._is_in_grid(c) and not self.world._is_occupied(c) and c not in self.closed_list for c in adj_coords]
adj_coords = [c for (idx,c) in enumerate(adj_coords) if in_bounds[idx]]
costs = [c for (idx,c) in enumerate(costs) if in_bounds[idx]]
return adj_coords, costs
    def get_lowest_cost_open_coord(self):
        """Returns the coordinate in the open list with the lowest f cost."""
        return min(self.open_list, key=lambda s: self.cells[s].f_cost)
def reset_cell_values(self):
self.destination_coord = None
for cell in self.cells.values():
cell.color = (0,0,0)
cell.parents_coords = None
cell.g_cost = None
cell.h_cost = None
def get_path(self):
"""Follows cell parents backwards until the initial cell is reached to create a path, which is the list of coordinates that paul will travel through to reach the destination."""
coord_list = [self.destination_coord]
print "final cost is", self.cells[coord_list[-1]].f_cost
while self.start_coord not in coord_list:
            try:
                coord_list.append(self.cells[coord_list[-1]].parents_coords)
            except KeyError:
                print 'No path found to destination coord!'
                break
for coord in coord_list:
if coord is not None:
self.cells[coord].color = (0,255,0)
return coord_list
def run_astar(self, destination_coord, world):
"""Updates cells g,h,f, and parent coordinates until the destination square is found."""
self.reset_cell_values()
self.open_list = []
self.closed_list = []
self.start_coord = self.cell_coordinates
self.destination_coord = destination_coord
coord_s = self.cell_coordinates
cell_s = self.cells[coord_s]
cell_s.g_cost = 0
cell_s.h_cost = self.get_h_cost(coord_s, destination_coord)
self.open_list = [coord_s]
while len(self.open_list) > 0:
coord_s = self.get_lowest_cost_open_coord()
cell_s = self.cells[coord_s]
self.open_list.remove(coord_s)
self.closed_list.append(coord_s)
walkable_open_coords, costs = self.get_open_adj_coords(coord_s)
for idx,coord in enumerate(walkable_open_coords):
cell = self.cells[coord]
g_cost = cell_s.g_cost + costs[idx]
h_cost = self.get_h_cost(coord, destination_coord)
f_cost = g_cost + h_cost
if coord in self.open_list:
old_f_cost = cell.f_cost
if f_cost < old_f_cost:
cell.g_cost = g_cost
cell.h_cost = h_cost
cell.parents_coords = coord_s
else:
self.open_list.append(coord)
cell.g_cost = g_cost
cell.h_cost = h_cost
cell.parents_coords = coord_s
if __name__ == "__main__":
g = GridWorld()
g.main_loop()
|
|
import warnings
import uuid
from unittest import TestCase
from .models import (
SimpleTestModel,
DerivedPartitionPrimaryKeyModel,
PartitionPrimaryKeyModel,
ClusterPrimaryKeyModel,
ColumnFamilyTestModel
)
from .util import (
connect_db,
destroy_db,
create_model
)
class DatabaseSimpleQueryTestCase(TestCase):
def setUp(self):
self.connection = connect_db()
self.cached_rows = {}
'''
Let's create some simple data.
'''
create_model(
self.connection,
SimpleTestModel
)
field_names = [
field.name if field.get_internal_type() != 'AutoField' else None
for field in SimpleTestModel._meta.fields
]
unique_value = 'bazinga'
field_values = ['foo', 'bar', 'raw', 'awk', 'lik', 'sik', 'dik', 'doc']
self.total_rows = 400
for x in xrange(self.total_rows):
test_data = {}
i = 0
for name in field_names:
if not name:
continue
test_data[name] = field_values[i]
i += 1
if unique_value:
test_data['field_3'] = unique_value
unique_value = None
created_instance = SimpleTestModel.objects.create(**test_data)
self.cached_rows[created_instance.pk] = created_instance
import django
django.setup()
def tearDown(self):
destroy_db(self.connection)
def test_filter_on_unindexed_column(self):
field_3_filter = SimpleTestModel.objects.filter(field_3='raw')
expected_count = 0
for _, o in self.cached_rows.iteritems():
if o.field_3 == 'raw':
expected_count += 1
self.assertEqual(expected_count, len(field_3_filter))
for o in field_3_filter:
self.assertTrue(o.pk in self.cached_rows.keys())
def test_partial_inefficient_get(self):
field_3_get = SimpleTestModel.objects.get(field_3='bazinga')
partial_inefficient_get = SimpleTestModel.objects.get(
pk=field_3_get.pk,
field_3='bazinga'
)
self.assertIsNotNone(partial_inefficient_get)
self.assertTrue(partial_inefficient_get.pk in self.cached_rows.keys())
def test_get_on_unindexed_column(self):
field_3_get = SimpleTestModel.objects.get(field_3='bazinga')
self.assertIsNotNone(field_3_get)
self.assertTrue(field_3_get.pk in self.cached_rows.keys())
def test_query_all(self):
all_rows = list(SimpleTestModel.objects.all())
self.assertEqual(len(all_rows), self.total_rows)
for row in all_rows:
cache = self.cached_rows.get(row.pk)
fields = cache._meta.fields
for field in fields:
self.assertEqual(
getattr(cache, field.name),
getattr(row, field.name)
)
class DatabaseClusteringKeyTestCase(TestCase):
def setUp(self):
self.connection = connect_db()
self.cached_rows = {}
'''
Now let's create some data that is clustered
'''
create_model(
self.connection,
ClusterPrimaryKeyModel
)
manager = ClusterPrimaryKeyModel.objects
self.uuid0 = str(uuid.uuid4())
manager.create(
field_1=self.uuid0,
field_2='aaaa',
field_3='bbbb',
data='Foo'
)
manager.create(
field_1=self.uuid0,
field_2='bbbb',
field_3='cccc',
data='Tao'
)
self.uuid1 = str(uuid.uuid4())
manager.create(
field_1=self.uuid1,
field_2='aaaa',
field_3='aaaa',
data='Bar'
)
manager.create(
field_1=self.uuid1,
field_2='bbbb',
field_3='aaaa',
data='Lel'
)
import django
django.setup()
def tearDown(self):
destroy_db(self.connection)
    def test_inefficient_filter(self):
manager = ClusterPrimaryKeyModel.objects
all_rows = list(manager.all())
with warnings.catch_warnings(record=True) as w:
filtered_rows = list(manager.filter(field_3='aaaa'))
self.assertEqual(
1,
len(w)
)
filtered_inmem = [r for r in all_rows if r.field_3 == 'aaaa']
self.assertEqual(
len(filtered_rows),
len(filtered_inmem)
)
for r in range(len(filtered_rows)):
self.assertEqual(
filtered_rows[r].data,
filtered_inmem[r].data
)
def test_pk_filter(self):
manager = ClusterPrimaryKeyModel.objects
all_rows = list(manager.all())
filtered_rows = list(manager.filter(field_1=self.uuid1))
filtered_rows_inmem = [r for r in all_rows if r.pk == self.uuid1]
self.assertEqual(
len(filtered_rows),
len(filtered_rows_inmem)
)
for i in range(len(filtered_rows)):
self.assertEqual(
filtered_rows[i].data,
filtered_rows_inmem[i].data
)
def test_clustering_key_filter(self):
manager = ClusterPrimaryKeyModel.objects
all_rows = list(manager.all())
with warnings.catch_warnings(record=True) as w:
filtered_rows = list(manager.filter(
field_1=self.uuid1,
field_2='aaaa',
field_3='aaaa'
))
self.assertEqual(
0,
len(w)
)
with warnings.catch_warnings(record=True) as w:
filtered_rows = list(manager.filter(
field_1=self.uuid1
).filter(
field_2='aaaa'
).filter(
field_3='aaaa'
))
self.assertEqual(
0,
len(w)
)
filtered_rows_inmem = [
r for r in all_rows if
r.field_1 == self.uuid1 and
r.field_2 == 'aaaa' and
r.field_3 == 'aaaa'
]
self.assertEqual(
len(filtered_rows),
len(filtered_rows_inmem)
)
for i in range(len(filtered_rows)):
self.assertEqual(
filtered_rows[i].data,
filtered_rows_inmem[i].data
)
def test_orderby(self):
manager = ClusterPrimaryKeyModel.objects
filtered_rows = list(manager.filter(field_1=self.uuid1))
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
filtered_rows_ordered = list(
manager.filter(
field_1=self.uuid1
).order_by('field_2')
)
filtered_rows_ordered_desc = list(
manager.filter(
field_1=self.uuid1
).order_by('-field_2')
)
self.assertEqual(
0,
len(w)
)
filtered_rows.sort(
key=lambda x: x.field_2,
reverse=False
)
for i in range(len(filtered_rows)):
self.assertEqual(
filtered_rows[i].data,
filtered_rows_ordered[i].data
)
filtered_rows.sort(
key=lambda x: x.field_2,
reverse=True
)
for i in range(len(filtered_rows)):
self.assertEqual(
filtered_rows[i].data,
filtered_rows_ordered_desc[i].data
)
class DatabasePartitionKeyTestCase(TestCase):
def setUp(self):
self.connection = connect_db()
self.cached_rows = {}
'''
Now let's create some data that is clustered
'''
create_model(
self.connection,
PartitionPrimaryKeyModel
)
manager = PartitionPrimaryKeyModel.objects
manager.create(
field_1='aaaa',
field_2='aaaa',
field_3='bbbb',
field_4='cccc',
data='Foo'
)
manager.create(
field_1='aaaa',
field_2='bbbb',
field_3='cccc',
field_4='dddd',
data='Tao'
)
manager.create(
field_1='bbbb',
field_2='aaaa',
field_3='aaaa',
field_4='eeee',
data='Bar'
)
manager.create(
field_1='bbbb',
field_2='bbbb',
field_3='aaaa',
field_4='ffff',
data='Lel'
)
import django
django.setup()
def tearDown(self):
destroy_db(self.connection)
def test_in_filter(self):
qs = PartitionPrimaryKeyModel.objects.filter(pk__in=[
'aaaa',
'bbbb'
])
self.assertEqual(4, len(qs))
def test_filter_all_partition_keys(self):
qs = PartitionPrimaryKeyModel.objects.filter(
field_1='aaaa',
field_2='bbbb'
)
self.assertEqual(1, len(qs))
class DerivedPartitionKeyModelTestCase(TestCase):
def setUp(self):
self.connection = connect_db()
self.cached_rows = {}
'''
Now let's create some data that is clustered
'''
create_model(
self.connection,
DerivedPartitionPrimaryKeyModel
)
manager = DerivedPartitionPrimaryKeyModel.objects
manager.create(
field_1='aaaa',
field_2='aaaa',
inherited_1='bbbb',
inherited_2='cccc',
data='Foo'
)
manager.create(
field_1='aaaa',
field_2='bbbb',
inherited_1='cccc',
inherited_2='dddd',
data='Tao'
)
manager.create(
field_1='bbbb',
field_2='aaaa',
inherited_1='aaaa',
inherited_2='eeee',
data='Bar'
)
manager.create(
field_1='bbbb',
field_2='bbbb',
inherited_1='aaaa',
inherited_2='ffff',
data='Lel'
)
import django
django.setup()
def tearDown(self):
destroy_db(self.connection)
def test_nothing(self):
pass
class ColumnFamilyModelPagingQueryTestCase(TestCase):
def setUp(self):
self.connection = connect_db()
self.cached_rows = {}
'''
Let's create some simple data.
'''
create_model(
self.connection,
ColumnFamilyTestModel
)
field_names = [
field.name if field.get_internal_type() != 'AutoField' and
field.db_column != 'pk__token' else None
for field in ColumnFamilyTestModel._meta.fields
]
field_values = ['foo', 'bar', 'raw', 'awk', 'lik', 'sik', 'dik', 'doc']
self.total_rows = 10
self.created_rows = 0
for x in xrange(self.total_rows):
test_data = {}
for name in field_names:
if not name:
continue
test_data[name] = field_values[x % len(field_values)]
if test_data['field_1'] in self.cached_rows:
continue
created_instance = ColumnFamilyTestModel.objects.create(
**test_data
)
self.cached_rows[created_instance.pk] = created_instance
self.created_rows += 1
import django
django.setup()
def tearDown(self):
destroy_db(self.connection)
def test_paged_query(self):
all_results = []
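        # Page through the results one row at a time: the slice fetches the
        # first page, and each page object exposes .next() to fetch the
        # following page until an empty page signals the end.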
one_result = ColumnFamilyTestModel.objects.all()[:1]
self.assertEqual(len(one_result), 1)
all_results.extend(one_result)
next_result = one_result.next()
self.assertEqual(len(next_result), 1)
all_results.extend(next_result)
self.assertNotEqual(
one_result[0].pk,
next_result[0].pk
)
self.assertNotEqual(
one_result[0].field_2,
next_result[0].field_2
)
self.assertNotEqual(
one_result[0].field_3,
next_result[0].field_3
)
for i in xrange(self.created_rows + 10):
next_result = next_result.next()
if not len(next_result):
break
all_results.extend(next_result)
self.assertEqual(len(all_results), self.created_rows)
|
|
#!/usr/bin/env python3
#
# https://developers.strava.com/
# https://strava.github.io/api/
#
# https://www.strava.com/settings/api
#
# https://groups.google.com/forum/#!forum/strava-api
#
# https://github.com/hozn/stravalib/
#
# https://pythonhosted.org/stravalib/index.html
#
# Step #0 activate the Python virtual environment
# python3 -m virtualenv --no-site-packages --distribute venv
# source venv/bin/activate
# pip3 install stravalib
# pip3 install mechanize
#
# Step #1 create a file with your password in it
# echo -n 'p4ssw0rd' > passfile
#
# Step #2 (mandatory the first time) export your API credentials; re-export them whenever you run this program
# # Register with Strava at https://www.strava.com/settings/api and receive a clientid and a secret
# export CLIENTID=your_Client_ID
# export SECRET=your_Client_Secret
#
# Step #3 run the program
# ./Strava.py --clientid=${CLIENTID} --secret=${SECRET} --username='hamzy@yahoo.com' --passfile=passfile
#
# To download an activity:
# original - http://www.strava.com/activities/870974253/export_original
# TCX - http://www.strava.com/activities/870974253/export_tcx
# https://strava.github.io/api/v3/streams/
# https://pypi.org/project/stravalib/
# https://github.com/hozn/stravalib
# 0.1 on 2014-12-26
# 0.2 on 2015-09-07
# 0.3 on 2020-12-26
__version__ = "0.3"
__date__ = "2020-12-26"
__author__ = "Mark Hamzy (hamzy@yahoo.com)"
import argparse
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer
import mechanize
from multiprocessing import Process
import stravalib
import sys
import threading
from urllib import parse
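# Minimal handler for the OAuth redirect: the useful payload is the query
# string of the redirect URL (read back via the browser object below), not
# the page body served here.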
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title>Strava oauth response</title></head>", "utf-8"))
self.wfile.write(bytes("<body><p>Nothing to see here, the URL is what you want.</p></body></html>", "utf-8"))
#web_thread = WebThread('localhost', 8282)
#web_thread.start()
#
#class WebThread (threading.Thread):
# def __init__(self, hostName, port):
# threading.Thread.__init__(self)
# self.hostName = hostName
# self.port = port
#
# def run(self):
# webServer = HTTPServer((self.hostName, self.port), MyServer)
#
# try:
# webServer.serve_forever()
# except KeyboardInterrupt:
# pass
#
# webServer.server_close()
def run_webserver(hostName, port):
webServer = HTTPServer((hostName, port), MyServer)
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
if __name__ == "__main__":
client = stravalib.client.Client ()
parser = argparse.ArgumentParser(description='Perform Strava queries.')
parser.add_argument("-i", "--clientid", action="store", type=str, required=True, dest="clientid", help="Client ID")
parser.add_argument("-p", "--passfile", action="store", type=str, required=True, dest="passfile", help="filename containing password")
parser.add_argument("-s", "--secret", action="store", type=str, required=True, dest="secret", help="Secret")
parser.add_argument("-u", "--username", action="store", type=str, required=True, dest="username", help="User Name")
parser.add_argument("-y", "--year", action="store", type=int, dest="year", help="Year", default=datetime.date.today().year)
ns = parser.parse_args ()
with open(ns.passfile, "r") as fp:
    ns.password = fp.read()
# Strava's OAuth flow redirects back to this local web server, so make sure it is running.
p = Process(target=run_webserver, args=('localhost', 8282,))
p.start()
# Open the authorization URL and log in through the web form.
br = mechanize.Browser()
br.set_handle_robots(False)
authorize_url = client.authorization_url(client_id=ns.clientid, redirect_uri='http://localhost:8282/authorized')
response1 = br.open(authorize_url)
br.select_form(nr=0)
br.form['email'] = ns.username
br.form['password'] = ns.password
# The form now has the user/password. Try again.
response2 = br.submit()
# The response URL has the necessary code value.
# print (br.geturl())
# http://localhost:8282/authorized?state=&code=f114ae415a3af02c2710409072868752aaf3c39f&scope=read,activity:read
oauth_dict = parse.parse_qs(parse.urlsplit(br.geturl()).query)
# {'code': ['f114ae415a3af02c2710409072868752aaf3c39'], 'scope': ['read,activity:read']}
# parse_qs returns a list for every key, so take the first (and only) code value.
ns.code = oauth_dict['code'][0]
# Kill the process running the web server.
p.terminate()
# The last step in the oauth protocol.
access_token = client.exchange_code_for_token(client_id=ns.clientid,
client_secret=ns.secret,
code=ns.code)
print (access_token)
athlete = client.get_athlete()
# create_spin_classes(client)
dtBegin = datetime.datetime (ns.year, 1, 1)
dtEnd = datetime.datetime (ns.year, 12, 31)
# dtBegin = datetime.datetime (ns.year, 5, 25)
# dtEnd = datetime.datetime (ns.year, 5, 27)
# dtBegin = datetime.datetime (ns.year, 12, 22)
# dtEnd = datetime.datetime (ns.year, 12, 23)
activities = client.get_activities(before=dtEnd, after=dtBegin)
results = list(activities)
bike_rides = {'distance' : 0.0, 'number': 0}
rpm_rides = {'distance' : 0.0, 'number': 0}
indoor_runs = {'distance' : 0.0, 'number': 0}
outdoor_runs = {'distance' : 0.0, 'number': 0}
swims = {'distance' : 0.0, 'number': 0}
yogas = {'distance' : 0.0, 'number': 0}
activity_days = {}
for activity in results:
print ("Processing %s" % (activity.name,))
if activity.calories is not None:
    print ("CALORIES: %s" % (activity,))
activity_days[activity.start_date.timetuple().tm_yday] = True
distance = stravalib.unithelper.miles(activity.distance).num
if activity.type == 'Ride':
if activity.gear_id == "b1022230":
# 'Raleigh Revenio Carbon 2.0'
activity_map = bike_rides
elif activity.gear_id == "b1534808":
# 'RPM/Spin bike'
activity_map = rpm_rides
else:
print("Error: Unknown gear type of '%s'" % (activity.gear_id,), file=sys.stderr)
continue
elif activity.type == 'Run' and activity.start_latlng is None:
activity_map = indoor_runs
elif activity.type == 'Run' and activity.start_latlng is not None:
activity_map = outdoor_runs
elif activity.type == 'Swim':
activity_map = swims
elif activity.type == 'Yoga':
activity_map = yogas
else:
print("Error: Unknown activity type of '%s'" % (activity.type,), file=sys.stderr)
continue
activity_map['distance'] += distance
activity_map['number'] += 1
print ("You have been active for %d days this year" % (len(activity_days),))
print ("There were %3d RPM classes" % (rpm_rides['number'],))
print ("There were %3d Yoga classes" % (yogas['number'],))
print ("There were %3d outdoor bike rides for %f miles" % (bike_rides['number'], bike_rides['distance'],))
print ("There were %3d elliptical runs for %f miles" % (indoor_runs['number'], indoor_runs['distance'],))
print ("There were %3d outdoor runs for %f miles" % (outdoor_runs['number'], outdoor_runs['distance'],))
print ("There were %3d swims for %f miles" % (swims['number'], swims['distance'],))
# (_, num_week, _) = datetime.date.today().isocalendar()
|
|
# coding: utf-8
"""
Talon.One API
The Talon.One API is used to manage applications and campaigns, as well as to integrate with your application. The operations in the _Integration API_ section are used to integrate with our platform, while the other operations are used to manage applications and campaigns. ### Where is the API? The API is available at the same hostname as these docs. For example, if you are reading this page at `https://mycompany.talon.one/docs/api/`, the URL for the [updateCustomerProfile][] operation is `https://mycompany.talon.one/v1/customer_profiles/id` [updateCustomerProfile]: #operation--v1-customer_profiles--integrationId--put # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from talon_one.configuration import Configuration
class UpdateCampaign(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'name': 'str',
'description': 'str',
'start_time': 'datetime',
'end_time': 'datetime',
'attributes': 'object',
'state': 'str',
'active_ruleset_id': 'int',
'tags': 'list[str]',
'features': 'list[str]',
'coupon_settings': 'CodeGeneratorSettings',
'referral_settings': 'CodeGeneratorSettings',
'limits': 'list[LimitConfig]',
'campaign_groups': 'list[int]'
}
attribute_map = {
'name': 'name',
'description': 'description',
'start_time': 'startTime',
'end_time': 'endTime',
'attributes': 'attributes',
'state': 'state',
'active_ruleset_id': 'activeRulesetId',
'tags': 'tags',
'features': 'features',
'coupon_settings': 'couponSettings',
'referral_settings': 'referralSettings',
'limits': 'limits',
'campaign_groups': 'campaignGroups'
}
def __init__(self, name=None, description=None, start_time=None, end_time=None, attributes=None, state='enabled', active_ruleset_id=None, tags=None, features=None, coupon_settings=None, referral_settings=None, limits=None, campaign_groups=None, local_vars_configuration=None): # noqa: E501
"""UpdateCampaign - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._name = None
self._description = None
self._start_time = None
self._end_time = None
self._attributes = None
self._state = None
self._active_ruleset_id = None
self._tags = None
self._features = None
self._coupon_settings = None
self._referral_settings = None
self._limits = None
self._campaign_groups = None
self.discriminator = None
self.name = name
if description is not None:
self.description = description
if start_time is not None:
self.start_time = start_time
if end_time is not None:
self.end_time = end_time
if attributes is not None:
self.attributes = attributes
if state is not None:
self.state = state
if active_ruleset_id is not None:
self.active_ruleset_id = active_ruleset_id
self.tags = tags
self.features = features
if coupon_settings is not None:
self.coupon_settings = coupon_settings
if referral_settings is not None:
self.referral_settings = referral_settings
self.limits = limits
if campaign_groups is not None:
self.campaign_groups = campaign_groups
@property
def name(self):
"""Gets the name of this UpdateCampaign. # noqa: E501
A friendly name for this campaign. # noqa: E501
:return: The name of this UpdateCampaign. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this UpdateCampaign.
A friendly name for this campaign. # noqa: E501
:param name: The name of this UpdateCampaign. # noqa: E501
:type: str
"""
if self.local_vars_configuration.client_side_validation and name is None: # noqa: E501
raise ValueError("Invalid value for `name`, must not be `None`") # noqa: E501
if (self.local_vars_configuration.client_side_validation and
name is not None and len(name) < 1):
raise ValueError("Invalid value for `name`, length must be greater than or equal to `1`") # noqa: E501
self._name = name
@property
def description(self):
"""Gets the description of this UpdateCampaign. # noqa: E501
A detailed description of the campaign. # noqa: E501
:return: The description of this UpdateCampaign. # noqa: E501
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""Sets the description of this UpdateCampaign.
A detailed description of the campaign. # noqa: E501
:param description: The description of this UpdateCampaign. # noqa: E501
:type: str
"""
self._description = description
@property
def start_time(self):
"""Gets the start_time of this UpdateCampaign. # noqa: E501
Datetime when the campaign will become active. # noqa: E501
:return: The start_time of this UpdateCampaign. # noqa: E501
:rtype: datetime
"""
return self._start_time
@start_time.setter
def start_time(self, start_time):
"""Sets the start_time of this UpdateCampaign.
Datetime when the campaign will become active. # noqa: E501
:param start_time: The start_time of this UpdateCampaign. # noqa: E501
:type: datetime
"""
self._start_time = start_time
@property
def end_time(self):
"""Gets the end_time of this UpdateCampaign. # noqa: E501
Datetime when the campaign will become inactive. # noqa: E501
:return: The end_time of this UpdateCampaign. # noqa: E501
:rtype: datetime
"""
return self._end_time
@end_time.setter
def end_time(self, end_time):
"""Sets the end_time of this UpdateCampaign.
Datetime when the campaign will become inactive. # noqa: E501
:param end_time: The end_time of this UpdateCampaign. # noqa: E501
:type: datetime
"""
self._end_time = end_time
@property
def attributes(self):
"""Gets the attributes of this UpdateCampaign. # noqa: E501
Arbitrary properties associated with this campaign # noqa: E501
:return: The attributes of this UpdateCampaign. # noqa: E501
:rtype: object
"""
return self._attributes
@attributes.setter
def attributes(self, attributes):
"""Sets the attributes of this UpdateCampaign.
Arbitrary properties associated with this campaign # noqa: E501
:param attributes: The attributes of this UpdateCampaign. # noqa: E501
:type: object
"""
self._attributes = attributes
@property
def state(self):
"""Gets the state of this UpdateCampaign. # noqa: E501
A disabled or archived campaign is not evaluated for rules or coupons. # noqa: E501
:return: The state of this UpdateCampaign. # noqa: E501
:rtype: str
"""
return self._state
@state.setter
def state(self, state):
"""Sets the state of this UpdateCampaign.
A disabled or archived campaign is not evaluated for rules or coupons. # noqa: E501
:param state: The state of this UpdateCampaign. # noqa: E501
:type: str
"""
allowed_values = ["enabled", "disabled", "archived"] # noqa: E501
if self.local_vars_configuration.client_side_validation and state not in allowed_values: # noqa: E501
raise ValueError(
"Invalid value for `state` ({0}), must be one of {1}" # noqa: E501
.format(state, allowed_values)
)
self._state = state
@property
def active_ruleset_id(self):
"""Gets the active_ruleset_id of this UpdateCampaign. # noqa: E501
ID of Ruleset this campaign applies on customer session evaluation. # noqa: E501
:return: The active_ruleset_id of this UpdateCampaign. # noqa: E501
:rtype: int
"""
return self._active_ruleset_id
@active_ruleset_id.setter
def active_ruleset_id(self, active_ruleset_id):
"""Sets the active_ruleset_id of this UpdateCampaign.
ID of Ruleset this campaign applies on customer session evaluation. # noqa: E501
:param active_ruleset_id: The active_ruleset_id of this UpdateCampaign. # noqa: E501
:type: int
"""
self._active_ruleset_id = active_ruleset_id
@property
def tags(self):
"""Gets the tags of this UpdateCampaign. # noqa: E501
A list of tags for the campaign. # noqa: E501
:return: The tags of this UpdateCampaign. # noqa: E501
:rtype: list[str]
"""
return self._tags
@tags.setter
def tags(self, tags):
"""Sets the tags of this UpdateCampaign.
A list of tags for the campaign. # noqa: E501
:param tags: The tags of this UpdateCampaign. # noqa: E501
:type: list[str]
"""
if self.local_vars_configuration.client_side_validation and tags is None: # noqa: E501
raise ValueError("Invalid value for `tags`, must not be `None`") # noqa: E501
self._tags = tags
@property
def features(self):
"""Gets the features of this UpdateCampaign. # noqa: E501
A list of features for the campaign. # noqa: E501
:return: The features of this UpdateCampaign. # noqa: E501
:rtype: list[str]
"""
return self._features
@features.setter
def features(self, features):
"""Sets the features of this UpdateCampaign.
A list of features for the campaign. # noqa: E501
:param features: The features of this UpdateCampaign. # noqa: E501
:type: list[str]
"""
if self.local_vars_configuration.client_side_validation and features is None: # noqa: E501
raise ValueError("Invalid value for `features`, must not be `None`") # noqa: E501
allowed_values = ["coupons", "referrals", "loyalty"] # noqa: E501
if (self.local_vars_configuration.client_side_validation and
not set(features).issubset(set(allowed_values))): # noqa: E501
raise ValueError(
"Invalid values for `features` [{0}], must be a subset of [{1}]" # noqa: E501
.format(", ".join(map(str, set(features) - set(allowed_values))), # noqa: E501
", ".join(map(str, allowed_values)))
)
self._features = features
@property
def coupon_settings(self):
"""Gets the coupon_settings of this UpdateCampaign. # noqa: E501
:return: The coupon_settings of this UpdateCampaign. # noqa: E501
:rtype: CodeGeneratorSettings
"""
return self._coupon_settings
@coupon_settings.setter
def coupon_settings(self, coupon_settings):
"""Sets the coupon_settings of this UpdateCampaign.
:param coupon_settings: The coupon_settings of this UpdateCampaign. # noqa: E501
:type: CodeGeneratorSettings
"""
self._coupon_settings = coupon_settings
@property
def referral_settings(self):
"""Gets the referral_settings of this UpdateCampaign. # noqa: E501
:return: The referral_settings of this UpdateCampaign. # noqa: E501
:rtype: CodeGeneratorSettings
"""
return self._referral_settings
@referral_settings.setter
def referral_settings(self, referral_settings):
"""Sets the referral_settings of this UpdateCampaign.
:param referral_settings: The referral_settings of this UpdateCampaign. # noqa: E501
:type: CodeGeneratorSettings
"""
self._referral_settings = referral_settings
@property
def limits(self):
"""Gets the limits of this UpdateCampaign. # noqa: E501
The set of limits that will operate for this campaign # noqa: E501
:return: The limits of this UpdateCampaign. # noqa: E501
:rtype: list[LimitConfig]
"""
return self._limits
@limits.setter
def limits(self, limits):
"""Sets the limits of this UpdateCampaign.
The set of limits that will operate for this campaign # noqa: E501
:param limits: The limits of this UpdateCampaign. # noqa: E501
:type: list[LimitConfig]
"""
if self.local_vars_configuration.client_side_validation and limits is None: # noqa: E501
raise ValueError("Invalid value for `limits`, must not be `None`") # noqa: E501
self._limits = limits
@property
def campaign_groups(self):
"""Gets the campaign_groups of this UpdateCampaign. # noqa: E501
The IDs of the campaign groups that own this entity. # noqa: E501
:return: The campaign_groups of this UpdateCampaign. # noqa: E501
:rtype: list[int]
"""
return self._campaign_groups
@campaign_groups.setter
def campaign_groups(self, campaign_groups):
"""Sets the campaign_groups of this UpdateCampaign.
The IDs of the campaign groups that own this entity. # noqa: E501
:param campaign_groups: The campaign_groups of this UpdateCampaign. # noqa: E501
:type: list[int]
"""
self._campaign_groups = campaign_groups
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, UpdateCampaign):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, UpdateCampaign):
return True
return self.to_dict() != other.to_dict()
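# A minimal usage sketch (illustrative values only; assumes the talon_one
# package is importable as laid out above):
if __name__ == "__main__":
    campaign = UpdateCampaign(
        name="Summer Sale",   # required, non-empty
        tags=["seasonal"],    # required
        features=["coupons"], # must be a subset of the allowed values
        limits=[],            # required (may be empty)
        state="enabled",
    )
    print(campaign.to_dict())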
|
|
"""Conversation objects."""
import asyncio
import logging
from hangups import (parsers, event, user, conversation_event, exceptions,
schemas)
logger = logging.getLogger(__name__)
class Conversation(object):
"""Wrapper around Client for working with a single chat conversation."""
def __init__(self, client, user_list, client_conversation,
client_events=[]):
"""Initialize a new Conversation."""
self._client = client # Client
self._user_list = user_list # UserList
self._conversation = client_conversation # ClientConversation
self._events = [] # [ConversationEvent]
self._events_dict = {} # {event_id: ConversationEvent}
self._send_message_lock = asyncio.Lock()
for event_ in client_events:
self.add_event(event_)
# Event fired when a user starts or stops typing with arguments
# (typing_message).
self.on_typing = event.Event('Conversation.on_typing')
# Event fired when a new ConversationEvent arrives with arguments
# (ConversationEvent).
self.on_event = event.Event('Conversation.on_event')
# Event fired when a watermark (read timestamp) is updated with
# arguments (WatermarkNotification).
self.on_watermark_notification = event.Event(
'Conversation.on_watermark_notification'
)
self.on_watermark_notification.add_observer(
self._on_watermark_notification
)
def _on_watermark_notification(self, notif):
"""Update the conversations latest_read_timestamp."""
if self.get_user(notif.user_id).is_self:
logger.info('latest_read_timestamp for {} updated to {}'
.format(self.id_, notif.read_timestamp))
self_conversation_state = self._conversation.self_conversation_state
self_conversation_state.self_read_state.latest_read_timestamp = (
parsers.to_timestamp(notif.read_timestamp)
)
def update_conversation(self, client_conversation):
"""Update the internal ClientConversation."""
# When latest_read_timestamp is 0, this seems to indicate no change
# from the previous value. Work around this by saving and restoring the
# previous value.
old_timestamp = self.latest_read_timestamp
self._conversation = client_conversation
if parsers.to_timestamp(self.latest_read_timestamp) == 0:
self_conversation_state = self._conversation.self_conversation_state
self_conversation_state.self_read_state.latest_read_timestamp = (
parsers.to_timestamp(old_timestamp)
)
@staticmethod
def _wrap_event(event_):
"""Wrap ClientEvent in ConversationEvent subclass."""
if event_.chat_message is not None:
return conversation_event.ChatMessageEvent(event_)
elif event_.conversation_rename is not None:
return conversation_event.RenameEvent(event_)
elif event_.membership_change is not None:
return conversation_event.MembershipChangeEvent(event_)
else:
return conversation_event.ConversationEvent(event_)
def add_event(self, event_):
"""Add a ClientEvent to the Conversation.
Returns an instance of ConversationEvent or subclass.
"""
conv_event = self._wrap_event(event_)
self._events.append(conv_event)
self._events_dict[conv_event.id_] = conv_event
return conv_event
def get_user(self, user_id):
"""Return the User instance with the given UserID."""
return self._user_list.get_user(user_id)
@asyncio.coroutine
def send_message(self, segments, image_file=None, image_id=None):
"""Send a message to this conversation.
A per-conversation lock is acquired to ensure that messages are sent in
the correct order when this method is called multiple times
asynchronously.
segments is a list of ChatMessageSegments to include in the message.
image_file is an optional file-like object containing an image to be
attached to the message.
image_id is an optional ID of an image to be attached to the message
(if you specify both image_file and image_id together, image_file
takes precedence and supplied image_id will be ignored)
Raises hangups.NetworkError if the message can not be sent.
"""
with (yield from self._send_message_lock):
# Send messages with OTR status matching the conversation's status.
otr_status = (schemas.OffTheRecordStatus.OFF_THE_RECORD
if self.is_off_the_record
else schemas.OffTheRecordStatus.ON_THE_RECORD)
if image_file:
try:
image_id = yield from self._client.upload_image(image_file)
except exceptions.NetworkError as e:
logger.warning('Failed to upload image: {}'.format(e))
raise
try:
yield from self._client.sendchatmessage(
self.id_, [seg.serialize() for seg in segments],
image_id=image_id, otr_status=otr_status
)
except exceptions.NetworkError as e:
logger.warning('Failed to send message: {}'.format(e))
raise
@asyncio.coroutine
def leave(self):
"""Leave conversation.
Raises hangups.NetworkError if conversation cannot be left.
"""
try:
if self._conversation.type_ == schemas.ConversationType.GROUP:
yield from self._client.removeuser(self.id_)
else:
yield from self._client.deleteconversation(self.id_)
except exceptions.NetworkError as e:
logger.warning('Failed to leave conversation: {}'.format(e))
raise
@asyncio.coroutine
def rename(self, name):
"""Rename the conversation.
Hangouts only officially supports renaming group conversations, so
custom names for one-to-one conversations may or may not appear in all
first party clients.
Raises hangups.NetworkError if conversation cannot be renamed.
"""
yield from self._client.setchatname(self.id_, name)
@asyncio.coroutine
def set_typing(self, typing=schemas.TypingStatus.TYPING):
"""Set typing status.
TODO: Add rate-limiting to avoid unnecessary requests.
Raises hangups.NetworkError if typing status cannot be set.
"""
try:
yield from self._client.settyping(self.id_, typing)
except exceptions.NetworkError as e:
logger.warning('Failed to set typing status: {}'.format(e))
raise
@asyncio.coroutine
def update_read_timestamp(self, read_timestamp=None):
"""Update the timestamp of the latest event which has been read.
By default, the timestamp of the newest event is used.
This method will avoid making an API request if it will have no effect.
Raises hangups.NetworkError if the timestamp can not be updated.
"""
if read_timestamp is None:
read_timestamp = self.events[-1].timestamp
if read_timestamp > self.latest_read_timestamp:
logger.info(
'Setting {} latest_read_timestamp from {} to {}'
.format(self.id_, self.latest_read_timestamp, read_timestamp)
)
# Prevent duplicate requests by updating the conversation now.
state = self._conversation.self_conversation_state
state.self_read_state.latest_read_timestamp = (
parsers.to_timestamp(read_timestamp)
)
try:
yield from self._client.updatewatermark(self.id_,
read_timestamp)
except exceptions.NetworkError as e:
logger.warning('Failed to update read timestamp: {}'.format(e))
raise
@asyncio.coroutine
def get_events(self, event_id=None, max_events=50):
"""Return list of ConversationEvents ordered newest-first.
If event_id is specified, return events preceding this event.
This method will make an API request to load historical events if
necessary. If the beginning of the conversation is reached, an empty
list will be returned.
Raises KeyError if event_id does not correspond to a known event.
Raises hangups.NetworkError if the events could not be requested.
"""
if event_id is None:
# If no event_id is provided, return the newest events in this
# conversation.
conv_events = self._events[-1 * max_events:]
else:
# If event_id is provided, return the events we have that are
# older, or request older events if event_id corresponds to the
# oldest event we have.
conv_event = self.get_event(event_id)
if self._events[0].id_ != event_id:
conv_events = self._events[self._events.index(conv_event) + 1:]
else:
logger.info('Loading events for conversation {} before {}'
.format(self.id_, conv_event.timestamp))
res = yield from self._client.getconversation(
self.id_, conv_event.timestamp, max_events
)
conv_events = [self._wrap_event(client_event) for client_event
in res.conversation_state.event]
logger.info('Loaded {} events for conversation {}'
.format(len(conv_events), self.id_))
for conv_event in reversed(conv_events):
self._events.insert(0, conv_event)
self._events_dict[conv_event.id_] = conv_event
return conv_events
def next_event(self, event_id, prev=False):
"""Return ConversationEvent following the event with given event_id.
If prev is True, return the previous event rather than the following
one.
Raises KeyError if no such ConversationEvent is known.
Return None if there is no following event.
"""
i = self.events.index(self._events_dict[event_id])
if prev and i > 0:
return self.events[i - 1]
elif not prev and i + 1 < len(self.events):
return self.events[i + 1]
else:
return None
def get_event(self, event_id):
"""Return ConversationEvent with the given event_id.
Raises KeyError if no such ConversationEvent is known.
"""
return self._events_dict[event_id]
@property
def id_(self):
"""The conversation's ID."""
return self._conversation.conversation_id.id_
@property
def users(self):
"""User instances of the conversation's current participants."""
return [self._user_list.get_user(user.UserID(chat_id=part.id_.chat_id,
gaia_id=part.id_.gaia_id))
for part in self._conversation.participant_data]
@property
def name(self):
"""The conversation's custom name, or None if it doesn't have one."""
return self._conversation.name
@property
def last_modified(self):
"""datetime timestamp of when the conversation was last modified."""
return parsers.from_timestamp(
self._conversation.self_conversation_state.sort_timestamp
)
@property
def latest_read_timestamp(self):
"""datetime timestamp of the last read ConversationEvent."""
timestamp = (self._conversation.self_conversation_state
             .self_read_state.latest_read_timestamp)
return parsers.from_timestamp(timestamp)
@property
def events(self):
"""The list of ConversationEvents, sorted oldest to newest."""
return list(self._events)
@property
def unread_events(self):
"""List of ConversationEvents that are unread.
Events are sorted oldest to newest.
Note that some Hangouts clients don't update the read timestamp for
certain event types, such as membership changes, so this method may
return more unread events than these clients will show. There's also a
delay between sending a message and the user's own message being
considered read.
"""
return [conv_event for conv_event in self._events
if conv_event.timestamp > self.latest_read_timestamp]
@property
def is_archived(self):
"""True if this conversation has been archived."""
return (schemas.ClientConversationView.ARCHIVED_VIEW in
self._conversation.self_conversation_state.view)
@property
def is_quiet(self):
"""True if notification level for this conversation is quiet."""
return (self._conversation.self_conversation_state.notification_level
== schemas.ClientNotificationLevel.QUIET)
@property
def is_off_the_record(self):
"""True if conversation is off the record (history is disabled)."""
return (self._conversation.otr_status
== schemas.OffTheRecordStatus.OFF_THE_RECORD)
class ConversationList(object):
"""Wrapper around Client that maintains a list of Conversations."""
def __init__(self, client, conv_states, user_list, sync_timestamp):
self._client = client # Client
self._conv_dict = {} # {conv_id: Conversation}
self._sync_timestamp = sync_timestamp # datetime
self._user_list = user_list # UserList
# Initialize the list of conversations from Client's list of
# ClientConversationStates.
for conv_state in conv_states:
self.add_conversation(conv_state.conversation, conv_state.event)
self._client.on_state_update.add_observer(self._on_state_update)
self._client.on_connect.add_observer(self._sync)
self._client.on_reconnect.add_observer(self._sync)
# Event fired when a new ConversationEvent arrives with arguments
# (ConversationEvent).
self.on_event = event.Event('ConversationList.on_event')
# Event fired when a user starts or stops typing with arguments
# (typing_message).
self.on_typing = event.Event('ConversationList.on_typing')
# Event fired when a watermark (read timestamp) is updated with
# arguments (WatermarkNotification).
self.on_watermark_notification = event.Event(
'ConversationList.on_watermark_notification'
)
def get_all(self, include_archived=False):
"""Return list of all Conversations.
If include_archived is False, do not return any archived conversations.
"""
return [conv for conv in self._conv_dict.values()
if not conv.is_archived or include_archived]
def get(self, conv_id):
"""Return a Conversation from its ID.
Raises KeyError if the conversation ID is invalid.
"""
return self._conv_dict[conv_id]
def add_conversation(self, client_conversation, client_events=[]):
"""Add new conversation from ClientConversation"""
conv_id = client_conversation.conversation_id.id_
logger.info('Adding new conversation: {}'.format(conv_id))
conv = Conversation(
self._client, self._user_list,
client_conversation, client_events
)
self._conv_dict[conv_id] = conv
return conv
@asyncio.coroutine
def leave_conversation(self, conv_id):
"""Leave conversation and remove it from ConversationList"""
logger.info('Leaving conversation: {}'.format(conv_id))
yield from self._conv_dict[conv_id].leave()
del self._conv_dict[conv_id]
@asyncio.coroutine
def _on_state_update(self, state_update):
"""Receive a ClientStateUpdate and fan out to Conversations."""
if state_update.client_conversation is not None:
self._handle_client_conversation(state_update.client_conversation)
if state_update.typing_notification is not None:
yield from self._handle_set_typing_notification(
state_update.typing_notification
)
if state_update.watermark_notification is not None:
yield from self._handle_watermark_notification(
state_update.watermark_notification
)
if state_update.event_notification is not None:
yield from self._on_client_event(
state_update.event_notification.event
)
@asyncio.coroutine
def _on_client_event(self, event_):
"""Receive a ClientEvent and fan out to Conversations."""
self._sync_timestamp = parsers.from_timestamp(event_.timestamp)
try:
conv = self._conv_dict[event_.conversation_id.id_]
except KeyError:
logger.warning('Received ClientEvent for unknown conversation {}'
.format(event_.conversation_id.id_))
else:
conv_event = conv.add_event(event_)
yield from self.on_event.fire(conv_event)
yield from conv.on_event.fire(conv_event)
def _handle_client_conversation(self, client_conversation):
"""Receive ClientConversation and create or update the conversation."""
conv_id = client_conversation.conversation_id.id_
conv = self._conv_dict.get(conv_id, None)
if conv is not None:
conv.update_conversation(client_conversation)
else:
self.add_conversation(client_conversation)
@asyncio.coroutine
def _handle_set_typing_notification(self, set_typing_notification):
"""Receive ClientSetTypingNotification and update the conversation."""
conv_id = set_typing_notification.conversation_id.id_
conv = self._conv_dict.get(conv_id, None)
if conv is not None:
res = parsers.parse_typing_status_message(set_typing_notification)
yield from self.on_typing.fire(res)
yield from conv.on_typing.fire(res)
else:
logger.warning('Received ClientSetTypingNotification for '
'unknown conversation {}'.format(conv_id))
@asyncio.coroutine
def _handle_watermark_notification(self, watermark_notification):
"""Receive ClientWatermarkNotification and update the conversation."""
conv_id = watermark_notification.conversation_id.id_
conv = self._conv_dict.get(conv_id, None)
if conv is not None:
res = parsers.parse_watermark_notification(watermark_notification)
yield from self.on_watermark_notification.fire(res)
yield from conv.on_watermark_notification.fire(res)
else:
logger.warning('Received ClientWatermarkNotification for '
'unknown conversation {}'.format(conv_id))
@asyncio.coroutine
def _sync(self, initial_data=None):
"""Sync conversation state and events that could have been missed."""
logger.info('Syncing events since {}'.format(self._sync_timestamp))
try:
res = yield from self._client.syncallnewevents(self._sync_timestamp)
except exceptions.NetworkError as e:
logger.warning('Failed to sync events, some events may be lost: {}'
.format(e))
else:
for conv_state in res.conversation_state:
conv_id = conv_state.conversation_id.id_
conv = self._conv_dict.get(conv_id, None)
if conv is not None:
conv.update_conversation(conv_state.conversation)
for event_ in conv_state.event:
timestamp = parsers.from_timestamp(event_.timestamp)
if timestamp > self._sync_timestamp:
# This updates the sync_timestamp for us, as well
# as triggering events.
yield from self._on_client_event(event_)
else:
self.add_conversation(conv_state.conversation,
conv_state.event)
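# A minimal usage sketch (a hedged example, not part of the library): assumes
# an already-built ConversationList from a connected Client, that the
# conversation ID below exists, and that ChatMessageSegment is available in
# conversation_event as in contemporary hangups releases.
@asyncio.coroutine
def _example_greet(conv_list):
    conv = conv_list.get('UgzExampleConversationId')  # hypothetical ID
    segments = [conversation_event.ChatMessageSegment('Hello!')]
    yield from conv.send_message(segments)
    yield from conv.update_read_timestamp()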
|
|
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_log import log as logging
import oslo_messaging
import six
import sqlalchemy as sa
from sqlalchemy import func
from sqlalchemy import or_
from sqlalchemy import orm
from sqlalchemy.orm import joinedload
from sqlalchemy import sql
from neutron.common import constants
from neutron.common import utils as n_utils
from neutron import context as n_ctx
from neutron.db import agents_db
from neutron.db import agentschedulers_db
from neutron.db import l3_attrs_db
from neutron.db import model_base
from neutron.extensions import l3agentscheduler
from neutron.i18n import _LE, _LI, _LW
from neutron import manager
from neutron.plugins.common import constants as service_constants
LOG = logging.getLogger(__name__)
L3_AGENTS_SCHEDULER_OPTS = [
cfg.StrOpt('router_scheduler_driver',
default='neutron.scheduler.l3_agent_scheduler.ChanceScheduler',
help=_('Driver to use for scheduling '
'router to a default L3 agent')),
cfg.BoolOpt('router_auto_schedule', default=True,
help=_('Allow auto scheduling of routers to L3 agent.')),
cfg.BoolOpt('allow_automatic_l3agent_failover', default=False,
help=_('Automatically reschedule routers from offline L3 '
'agents to online L3 agents.')),
]
cfg.CONF.register_opts(L3_AGENTS_SCHEDULER_OPTS)
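# Example neutron.conf snippet exercising the options registered above
# (values are illustrative, not recommendations):
#
#   [DEFAULT]
#   router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
#   router_auto_schedule = True
#   allow_automatic_l3agent_failover = True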
class RouterL3AgentBinding(model_base.BASEV2):
"""Represents binding between neutron routers and L3 agents."""
router_id = sa.Column(sa.String(36),
sa.ForeignKey("routers.id", ondelete='CASCADE'),
primary_key=True)
l3_agent = orm.relation(agents_db.Agent)
l3_agent_id = sa.Column(sa.String(36),
sa.ForeignKey("agents.id", ondelete='CASCADE'),
primary_key=True)
class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase,
agentschedulers_db.AgentSchedulerDbMixin):
"""Mixin class to add l3 agent scheduler extension to plugins
using the l3 agent for routing.
"""
router_scheduler = None
def start_periodic_l3_agent_status_check(self):
if not cfg.CONF.allow_automatic_l3agent_failover:
LOG.info(_LI("Skipping period L3 agent status check because "
"automatic router rescheduling is disabled."))
return
self.setup_agent_status_check(
self.reschedule_routers_from_down_agents)
def reschedule_routers_from_down_agents(self):
"""Reschedule routers from down l3 agents if admin state is up."""
agent_dead_limit = self.agent_dead_limit_seconds()
self.wait_down_agents('L3', agent_dead_limit)
cutoff = self.get_cutoff_time(agent_dead_limit)
context = n_ctx.get_admin_context()
down_bindings = (
context.session.query(RouterL3AgentBinding).
join(agents_db.Agent).
filter(agents_db.Agent.heartbeat_timestamp < cutoff,
agents_db.Agent.admin_state_up).
outerjoin(l3_attrs_db.RouterExtraAttributes,
l3_attrs_db.RouterExtraAttributes.router_id ==
RouterL3AgentBinding.router_id).
filter(sa.or_(l3_attrs_db.RouterExtraAttributes.ha == sql.false(),
l3_attrs_db.RouterExtraAttributes.ha == sql.null())))
try:
for binding in down_bindings:
agent_mode = self._get_agent_mode(binding.l3_agent)
if agent_mode == constants.L3_AGENT_MODE_DVR:
# rescheduling from l3 dvr agent on compute node doesn't
# make sense. Router will be removed from that agent once
# there are no dvr serviceable ports on that compute node
LOG.warn(_LW('L3 DVR agent on node %(host)s is down. '
'Not rescheduling from agent in \'dvr\' '
'mode.'), {'host': binding.l3_agent.host})
continue
LOG.warn(_LW(
"Rescheduling router %(router)s from agent %(agent)s "
"because the agent did not report to the server in "
"the last %(dead_time)s seconds."),
{'router': binding.router_id,
'agent': binding.l3_agent_id,
'dead_time': agent_dead_limit})
try:
self.reschedule_router(context, binding.router_id)
except (l3agentscheduler.RouterReschedulingFailed,
oslo_messaging.RemoteError):
# Catch individual router rescheduling errors here
# so one broken one doesn't stop the iteration.
LOG.exception(_LE("Failed to reschedule router %s"),
binding.router_id)
except Exception:
# we want to be thorough and catch whatever is raised
# to avoid loop abortion
LOG.exception(_LE("Exception encountered during router "
"rescheduling."))
def _get_agent_mode(self, agent_db):
agent_conf = self.get_configuration_dict(agent_db)
return agent_conf.get(constants.L3_AGENT_MODE,
constants.L3_AGENT_MODE_LEGACY)
def validate_agent_router_combination(self, context, agent, router):
"""Validate if the router can be correctly assigned to the agent.
:raises: RouterL3AgentMismatch if attempting to assign DVR router
to legacy agent, or centralized router to compute's L3 agents.
:raises: InvalidL3Agent if attempting to assign router to an
unsuitable agent (disabled, type != L3, incompatible configuration)
:raises: DVRL3CannotAssignToDvrAgent if attempting to assign DVR
router from one DVR Agent to another.
"""
is_distributed = router.get('distributed')
agent_mode = self._get_agent_mode(agent)
router_type = (
'distributed' if is_distributed else
'centralized')
is_agent_router_types_incompatible = (
agent_mode == constants.L3_AGENT_MODE_DVR and not is_distributed
or agent_mode == constants.L3_AGENT_MODE_LEGACY and is_distributed
)
if is_agent_router_types_incompatible:
raise l3agentscheduler.RouterL3AgentMismatch(
router_type=router_type, router_id=router['id'],
agent_mode=agent_mode, agent_id=agent['id'])
if agent_mode == constants.L3_AGENT_MODE_DVR and is_distributed:
raise l3agentscheduler.DVRL3CannotAssignToDvrAgent(
router_type=router_type, router_id=router['id'],
agent_id=agent['id'])
is_wrong_type_or_unsuitable_agent = (
agent['agent_type'] != constants.AGENT_TYPE_L3 or
not agentschedulers_db.services_available(agent['admin_state_up'])
or
not self.get_l3_agent_candidates(context, router, [agent],
ignore_admin_state=True))
if is_wrong_type_or_unsuitable_agent:
raise l3agentscheduler.InvalidL3Agent(id=agent['id'])
def check_agent_router_scheduling_needed(self, context, agent, router):
"""Check if the router scheduling is needed.
:raises: RouterHostedByL3Agent if router is already assigned
to a different agent.
:returns: True if scheduling is needed, otherwise False
"""
router_id = router['id']
agent_id = agent['id']
query = context.session.query(RouterL3AgentBinding)
bindings = query.filter_by(router_id=router_id).all()
if not bindings:
return True
for binding in bindings:
if binding.l3_agent_id == agent_id:
# router already bound to the agent we need
return False
if router.get('distributed'):
return False
if router.get('ha'):
return True
# legacy router case: router is already bound to some agent
raise l3agentscheduler.RouterHostedByL3Agent(
router_id=router_id,
agent_id=bindings[0].l3_agent_id)
def create_router_to_agent_binding(self, context, agent, router):
"""Create router to agent binding."""
router_id = router['id']
agent_id = agent['id']
if self.router_scheduler:
try:
if router.get('ha'):
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
self.router_scheduler.create_ha_port_and_bind(
plugin, context, router['id'],
router['tenant_id'], agent)
else:
self.router_scheduler.bind_router(
context, router_id, agent)
except db_exc.DBError:
raise l3agentscheduler.RouterSchedulingFailed(
router_id=router_id, agent_id=agent_id)
def add_router_to_l3_agent(self, context, agent_id, router_id):
"""Add a l3 agent to host a router."""
with context.session.begin(subtransactions=True):
router = self.get_router(context, router_id)
agent = self._get_agent(context, agent_id)
self.validate_agent_router_combination(context, agent, router)
if self.check_agent_router_scheduling_needed(
context, agent, router):
self.create_router_to_agent_binding(context, agent, router)
else:
return
l3_notifier = self.agent_notifiers.get(constants.AGENT_TYPE_L3)
if l3_notifier:
l3_notifier.router_added_to_agent(
context, [router_id], agent.host)
def remove_router_from_l3_agent(self, context, agent_id, router_id):
"""Remove the router from l3 agent.
After removal, the router will be non-hosted until there is update
which leads to re-schedule or be added to another agent manually.
"""
agent = self._get_agent(context, agent_id)
self._unbind_router(context, router_id, agent_id)
router = self.get_router(context, router_id)
if router.get('ha'):
plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
plugin.delete_ha_interfaces_on_host(context, router_id, agent.host)
l3_notifier = self.agent_notifiers.get(constants.AGENT_TYPE_L3)
if l3_notifier:
l3_notifier.router_removed_from_agent(
context, router_id, agent.host)
def _unbind_router(self, context, router_id, agent_id):
with context.session.begin(subtransactions=True):
query = context.session.query(RouterL3AgentBinding)
query = query.filter(
RouterL3AgentBinding.router_id == router_id,
RouterL3AgentBinding.l3_agent_id == agent_id)
query.delete()
def reschedule_router(self, context, router_id, candidates=None):
"""Reschedule router to a new l3 agent
Remove the router from the agent(s) currently hosting it and
schedule it again
"""
cur_agents = self.list_l3_agents_hosting_router(
context, router_id)['agents']
with context.session.begin(subtransactions=True):
for agent in cur_agents:
self._unbind_router(context, router_id, agent['id'])
new_agent = self.schedule_router(context, router_id,
candidates=candidates)
if not new_agent:
raise l3agentscheduler.RouterReschedulingFailed(
router_id=router_id)
l3_notifier = self.agent_notifiers.get(constants.AGENT_TYPE_L3)
if l3_notifier:
for agent in cur_agents:
l3_notifier.router_removed_from_agent(
context, router_id, agent['host'])
l3_notifier.router_added_to_agent(
context, [router_id], new_agent.host)
def list_routers_on_l3_agent(self, context, agent_id):
query = context.session.query(RouterL3AgentBinding.router_id)
query = query.filter(RouterL3AgentBinding.l3_agent_id == agent_id)
router_ids = [item[0] for item in query]
if router_ids:
return {'routers':
self.get_routers(context, filters={'id': router_ids})}
else:
# Exception will be thrown if the requested agent does not exist.
self._get_agent(context, agent_id)
return {'routers': []}
def _get_active_l3_agent_routers_sync_data(self, context, host, agent,
router_ids):
if n_utils.is_extension_supported(self,
constants.L3_HA_MODE_EXT_ALIAS):
return self.get_ha_sync_data_for_host(context, host,
router_ids=router_ids,
active=True)
return self.get_sync_data(context, router_ids=router_ids, active=True)
def list_active_sync_routers_on_active_l3_agent(
self, context, host, router_ids):
agent = self._get_agent_by_type_and_host(
context, constants.AGENT_TYPE_L3, host)
if not agentschedulers_db.services_available(agent.admin_state_up):
return []
query = context.session.query(RouterL3AgentBinding.router_id)
query = query.filter(
RouterL3AgentBinding.l3_agent_id == agent.id)
if router_ids:
query = query.filter(
RouterL3AgentBinding.router_id.in_(router_ids))
router_ids = [item[0] for item in query]
if router_ids:
return self._get_active_l3_agent_routers_sync_data(context, host,
agent,
router_ids)
return []
def get_l3_agents_hosting_routers(self, context, router_ids,
admin_state_up=None,
active=None):
if not router_ids:
return []
query = context.session.query(RouterL3AgentBinding)
query = query.options(orm.contains_eager(
RouterL3AgentBinding.l3_agent))
query = query.join(RouterL3AgentBinding.l3_agent)
query = query.filter(RouterL3AgentBinding.router_id.in_(router_ids))
if admin_state_up is not None:
query = (query.filter(agents_db.Agent.admin_state_up ==
admin_state_up))
l3_agents = [binding.l3_agent for binding in query]
if active is not None:
l3_agents = [l3_agent for l3_agent in
l3_agents if not
agents_db.AgentDbMixin.is_agent_down(
l3_agent['heartbeat_timestamp'])]
return l3_agents
def _get_l3_bindings_hosting_routers(self, context, router_ids):
if not router_ids:
return []
query = context.session.query(RouterL3AgentBinding)
query = query.options(joinedload('l3_agent')).filter(
RouterL3AgentBinding.router_id.in_(router_ids))
return query.all()
def list_l3_agents_hosting_router(self, context, router_id):
with context.session.begin(subtransactions=True):
bindings = self._get_l3_bindings_hosting_routers(
context, [router_id])
return {'agents': [self._make_agent_dict(binding.l3_agent) for
binding in bindings]}
def get_l3_agents(self, context, active=None, filters=None):
query = context.session.query(agents_db.Agent)
query = query.filter(
agents_db.Agent.agent_type == constants.AGENT_TYPE_L3)
if active is not None:
query = (query.filter(agents_db.Agent.admin_state_up == active))
if filters:
for key, value in six.iteritems(filters):
column = getattr(agents_db.Agent, key, None)
if column:
if not value:
return []
query = query.filter(column.in_(value))
agent_modes = filters.get('agent_modes', [])
if agent_modes:
agent_mode_key = '\"agent_mode\": \"'
configuration_filter = (
[agents_db.Agent.configurations.contains('%s%s\"' %
(agent_mode_key, agent_mode))
for agent_mode in agent_modes])
query = query.filter(or_(*configuration_filter))
return [l3_agent
for l3_agent in query
if agentschedulers_db.AgentSchedulerDbMixin.is_eligible_agent(
active, l3_agent)]
def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
"""
This function checks for existence of dvr serviceable
ports on the host, running the input l3agent.
"""
subnet_ids = self.get_subnet_ids_on_router(context, router_id)
if not subnet_ids:
return False
core_plugin = manager.NeutronManager.get_plugin()
# NOTE(swami):Before checking for existence of dvr
# serviceable ports on the host managed by the l3
# agent, let's verify if at least one subnet has
# dhcp enabled. If so, then the host will have a
# dvr serviceable port, which is in fact the DHCP
# port.
# This optimization is valid assuming that the L3
# DVR_SNAT node will be the one hosting the DHCP
# Agent.
agent_mode = self._get_agent_mode(l3_agent)
for subnet_id in subnet_ids:
subnet_dict = core_plugin.get_subnet(context, subnet_id)
if (subnet_dict['enable_dhcp'] and (
agent_mode == constants.L3_AGENT_MODE_DVR_SNAT)):
return True
filter = {'fixed_ips': {'subnet_id': subnet_ids}}
ports = core_plugin.get_ports(context, filters=filter)
for port in ports:
if (n_utils.is_dvr_serviced(port['device_owner']) and
l3_agent['host'] == port['binding:host_id']):
return True
return False
def get_l3_agent_candidates(self, context, sync_router, l3_agents,
ignore_admin_state=False):
"""Get the valid l3 agents for the router from a list of l3_agents."""
candidates = []
for l3_agent in l3_agents:
if not ignore_admin_state and not l3_agent.admin_state_up:
# ignore_admin_state True comes from manual scheduling
# where admin_state_up judgement is already done.
continue
agent_conf = self.get_configuration_dict(l3_agent)
router_id = agent_conf.get('router_id', None)
use_namespaces = agent_conf.get('use_namespaces', True)
handle_internal_only_routers = agent_conf.get(
'handle_internal_only_routers', True)
gateway_external_network_id = agent_conf.get(
'gateway_external_network_id', None)
agent_mode = agent_conf.get(constants.L3_AGENT_MODE,
constants.L3_AGENT_MODE_LEGACY)
if not use_namespaces and router_id != sync_router['id']:
continue
ex_net_id = (sync_router['external_gateway_info'] or {}).get(
'network_id')
if ((not ex_net_id and not handle_internal_only_routers) or
(ex_net_id and gateway_external_network_id and
ex_net_id != gateway_external_network_id)):
continue
is_router_distributed = sync_router.get('distributed', False)
if agent_mode in (
constants.L3_AGENT_MODE_LEGACY,
constants.L3_AGENT_MODE_DVR_SNAT) and (
not is_router_distributed):
candidates.append(l3_agent)
elif is_router_distributed and agent_mode.startswith(
constants.L3_AGENT_MODE_DVR) and (
self.check_ports_exist_on_l3agent(
context, l3_agent, sync_router['id'])):
candidates.append(l3_agent)
return candidates
def auto_schedule_routers(self, context, host, router_ids):
if self.router_scheduler:
return self.router_scheduler.auto_schedule_routers(
self, context, host, router_ids)
def schedule_router(self, context, router, candidates=None):
if self.router_scheduler:
return self.router_scheduler.schedule(
self, context, router, candidates=candidates)
def schedule_routers(self, context, routers):
"""Schedule the routers to l3 agents."""
for router in routers:
self.schedule_router(context, router, candidates=None)
def get_l3_agent_with_min_routers(self, context, agent_ids):
"""Return l3 agent with the least number of routers."""
if not agent_ids:
return None
query = context.session.query(
agents_db.Agent,
func.count(
RouterL3AgentBinding.router_id
).label('count')).outerjoin(RouterL3AgentBinding).group_by(
RouterL3AgentBinding.l3_agent_id).order_by('count')
res = query.filter(agents_db.Agent.id.in_(agent_ids)).first()
return res[0]
|
|
import os
import webapp2
import hmac
import jinja2
from google.appengine.ext import db
from google.appengine.api import memcache
import re
import json
import logging
import datetime
import time
import urllib2
import HTMLParser
import pafy
USER_RE = re.compile("^[a-zA-Z0-9_-]{3,20}$")
PASSWORD_RE = re.compile("^.{3,20}$")
EMAIL_RE = re.compile("^[\S]+@[\S]+\.[\S]+$")
def valid_username(username):
return USER_RE.match(username)
def valid_password(password):
return PASSWORD_RE.match(password)
def valid_verify(password,verify):
return password==verify
def valid_email(email):
return EMAIL_RE.match(email) or email==''
def get_users() :
return list(db.GqlQuery("SELECT * from Users"))
def user_exists(username) :
    for user in get_users():
        if username == user.user_id:
            return True
    return False
def password_match(username,password) :
for user in get_users() :
if username == user.user_id and user.password == password :
return True
return False
def valid_form(username,password,verify,email) :
return valid_username(username) and valid_password(password) and valid_verify(password,verify) and valid_email(email) and not user_exists(username)
def hash_str(s):
return hmac.new('lobo',s).hexdigest()
def make_secure_val(s):
return "%s|%s" % (s, hash_str(s))
def check_secure_val(h):
val = h.split('|')[0]
if h == make_secure_val(val):
return val
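# Example round trip (the digest shown is illustrative):
#   make_secure_val('42') -> '42|<hmac-of-42>'
#   check_secure_val('42|<hmac-of-42>') -> '42'
#   check_secure_val('43|<hmac-of-42>') -> None (value was tampered with)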
def get_posts(update = False) :
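# Cache-aside: serve posts from memcache when possible, falling back to the
# datastore and refreshing both the cached posts and their query timestamp
# ('age') on a miss or when an update is forced.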
key = 'posts'
posts = memcache.get(key)
SAVED_TIME = memcache.get('age')
if not SAVED_TIME :
update = True
if posts is None or update :
logging.error('DB read')
SAVED_TIME = datetime.datetime.utcnow()
posts = list(db.GqlQuery('SELECT * from Post order by post_date desc'))
memcache.set(key,posts)
memcache.set('age',SAVED_TIME)
return posts,SAVED_TIME
def set_posts(subject,content) :
logging.error('DB write')
post = Post(subject = subject, content = content).put()
time.sleep(.1)
get_posts(True)
return post
def age_str(SAVED_TIME) :
return "queried %s seconds ago"%int((datetime.datetime.now().utcnow() - SAVED_TIME).total_seconds())
jinja_environment = jinja2.Environment(autoescape=True,
loader=jinja2.FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates')))
def ret_template(template):
return jinja_environment.get_template(template)
def return_primary_results(search_json):
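# Flatten the DuckDuckGo-style Instant Answer JSON (Definition, Abstract,
# Image, Answer, Results, RelatedTopics) into the flat dict the templates
# expect, special-casing the HTML-wrapped "calc" and "root" answers.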
primary_results = {}
if not search_json["Definition"] == "" :
if not search_json["Definition"].find("definition:") == -1 :
primary_results["definition"] = search_json["Definition"][search_json["Definition"].find("definition:")+len("definition")+1:]
else :
primary_results["definition"] = search_json["Definition"]
primary_results["definition_source"] = search_json["DefinitionSource"]
primary_results["definition_url"] = search_json["DefinitionURL"]
if not search_json["AbstractURL"] == "" :
primary_results["abstract_text"] = search_json["AbstractText"]
primary_results["abstract_source"] = search_json["AbstractSource"]
primary_results["abstract_url"] = search_json["AbstractURL"]
if not search_json["Image"] == "" :
primary_results["image_url"] = search_json["Image"]
if not search_json["Answer"] == "" :
primary_results["instant_answer"] = search_json["Answer"]
if search_json["AnswerType"] == "calc" :
calc_start_loc = search_json["Answer"].find("focus();\">")+len("focus();\">")
calc_remaining_array = search_json["Answer"][calc_start_loc+1:]
calc_end_loc = calc_remaining_array.find("</a>")
# logging.error(calc_start_loc)
# logging.error(calc_end_loc)
# logging.error(calc_remaining_array)
# logging.error(search_json["Answer"][calc_start_loc:calc_start_loc+calc_end_loc+1])
primary_results["instant_answer"] = search_json["Answer"][calc_start_loc:calc_start_loc+calc_end_loc+1]
if search_json["AnswerType"] == "root" :
root_start_loc = search_json["Answer"].find("focus();\">")+len("focus();\">")
root_remaining_array = search_json["Answer"][root_start_loc+1:]
root_end_loc = root_remaining_array.find("</a>")
primary_results["instant_answer"] = search_json["Answer"][root_start_loc:root_start_loc+root_end_loc+1]
if not search_json["Results"] == [] :
primary_results["results_list"] = search_json["Results"]
#logging.error(search_json["Results"])
if not search_json["RelatedTopics"] == [] :
JSON_APPEND = "&format=json"
primary_results["related_topics_list"] = search_json["RelatedTopics"]
for e in primary_results["related_topics_list"] :
if "Result" in e.keys() :
#logging.error(e['FirstURL'])
heading = json.loads(urllib2.urlopen(e['FirstURL']+JSON_APPEND).read())["Heading"]
#logging.error(heading)
# heading = e["Text"]
#if not heading.find(' or') == -1 :
# heading = heading[:e["Text"].find(' or')]
#logging.error(heading)
#if not heading.find(',') == -1 :
# heading = heading[:e["Text"].find(',')]
#if not heading.find(' - ') == -1 :
# heading = heading[:e["Text"].find(' - ')]
logging.error(heading)
e["FirstURL"] = heading
return primary_results
def return_news_results(search_string, user_ip) :
html_parser = HTMLParser.HTMLParser()
search_string = search_string.replace(' ','%20')
logging.error(search_string)
url = urllib2.urlopen("https://ajax.googleapis.com/ajax/services/search/news?v=1.0&q=%s&userip=%s"%(search_string,user_ip)).read()
news_json = json.loads(url)
logging.error(news_json)
news_results = []
for e in news_json["responseData"]["results"] :
news_results.append({"title":html_parser.unescape(e["titleNoFormatting"]) ,"url" : e["unescapedUrl"], "publisher":e["publisher"]})
logging.error(news_results)
more_news_results = []
for e in news_json["responseData"]["results"] :
if "relatedStories" in e.keys() :
for i in e["relatedStories"] :
more_news_results.append({"title":html_parser.unescape(e["titleNoFormatting"]) ,"url" : e["unescapedUrl"] })
key = "&key=RiMKJK3gq2FTBWxx41B76MR2OHc_"
url = urllib2.urlopen("http://www.faroo.com/api?q=%s&start=1&length=10&l=en&src=news&i=true&f=json%s"%(search_string,key)).read()
news_json = json.loads(url)
# logging.error(json_response)
for e in news_json["results"] :
more_news_results.append({"title":html_parser.unescape(e["title"]) ,"url" : e["url"] })
return {"news_results" : news_results , "more_news_results" : more_news_results}
def return_top_results(search_string, user_ip) :
html_parser = HTMLParser.HTMLParser()
search_string = search_string.replace(' ','%20')
logging.error(search_string)
url = urllib2.urlopen("https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=%s&userip=%s"%(search_string,user_ip)).read()
top_json = json.loads(url)
logging.error(top_json)
top_results = []
for e in top_json["responseData"]["results"] :
top_results.append({"title":html_parser.unescape(e["titleNoFormatting"]) ,"url" : e["unescapedUrl"] })
logging.error(top_results)
return {"top_results" : top_results}
def return_web_results(search_string, user_ip) :
html_parser = HTMLParser.HTMLParser()
search_string = search_string.replace(' ','%20')
logging.error(search_string)
# logging.error(json_response)
web_results = []
more_web_results = []
key = "&key=RiMKJK3gq2FTBWxx41B76MR2OHc_"
url = urllib2.urlopen("http://www.faroo.com/api?q=%s&start=1&length=10&l=en&src=web&i=true&f=json%s"%(search_string,key)).read()
# json_response = unirest.get("https://faroo-faroo-web-search.p.mashape.com/api?q=lobo",
# headers={
# "X-Mashape-Authorization": "ArSWXeNxgohO7uPTFIGzNO79TNjPyYNW"
#}
#);
web_json = json.loads(url)
# logging.error(json_response)
for e in web_json["results"] :
if len(web_results) < 4 :
web_results.append({"title":html_parser.unescape(e["title"]) ,"url" : e["url"] })
else :
more_web_results.append({"title":html_parser.unescape(e["title"]) ,"url" : e["url"] })
#url = urllib2.urlopen("http://188.40.64.7:8092/yacysearch.json?query=%s"%(search_string)).read()
# json_response = unirest.get("https://faroo-faroo-web-search.p.mashape.com/api?q=lobo",
# headers={
# "X-Mashape-Authorization": "ArSWXeNxgohO7uPTFIGzNO79TNjPyYNW"
#}
#);
#web_json = json.loads(url)
#for e in web_json["channels"][0]["items"] :
# if len(web_results) < 4 :
# web_results.append({"title":html_parser.unescape(e["title"]) ,"url" : e["link"] })
# else :
# more_web_results.append({"title":html_parser.unescape(e["title"]) ,"url" : e["link"] })
#logging.error(web_results)
return {"web_results" : web_results, "more_web_results" : more_web_results}
def return_image_results(search_string,user_ip) :
html_parser = HTMLParser.HTMLParser()
search_string = search_string.replace(' ','%20')
logging.error(search_string)
url = urllib2.urlopen("https://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=%s&userip=%s"%(search_string,user_ip)).read()
image_json = json.loads(url)
#logging.error(image_json)
image_results = []
for e in image_json["responseData"]["results"] :
image_results.append({"title":html_parser.unescape(e["content"]) ,"url" : e["unescapedUrl"] })
#logging.error(image_results)
return {"image_results" : image_results}
def return_thored_results(search_string, user_ip) :
search_string = search_string.replace(' ','%20')
url = urllib2.urlopen("http://suggestqueries.google.com/complete/search?client=firefox&q=%s"%(search_string)).read()
thored_json = json.loads(url)
thored_results = []
for e in thored_json[1] :
thored_results.append({"result" : e})
return {"thored_results" : thored_results}
def return_video_results(search_string, user_ip) :
html_parser = HTMLParser.HTMLParser()
search_string = search_string.replace(' ','%20')
logging.error(search_string)
url = urllib2.urlopen("https://ajax.googleapis.com/ajax/services/search/video?v=1.0&q=%s&userip=%s"%(search_string,user_ip)).read()
video_json = json.loads(url)
#logging.error(image_json)
video_results = []
for e in video_json["responseData"]["results"] :
video_results.append({"title":html_parser.unescape(e["titleNoFormatting"]) ,"url" : "//www.youtube.com/embed/"+re.findall( r'v\=([\-\w]+)', e["url"] )[0] , 'vid' : re.findall( r'v\=([\-\w]+)', e["url"] )[0] ,'source' : e["url"]})
logging.error(e["url"])
logging.error( "//www.youtube.com/embed/"+re.findall( r'v\=([\-\w]+)', e["url"] )[0])
return {"video_results" : video_results}
class Post(db.Model):
subject = db.StringProperty(required = True)
content = db.TextProperty(required = True)
post_date = db.DateTimeProperty(auto_now_add = True)
class Users(db.Model):
user_id = db.StringProperty(required = True)
password = db.StringProperty(required = True)
email = db.StringProperty()
join_date = db.DateTimeProperty(auto_now_add = True)
class BlogPage(webapp2.RequestHandler):
def get(self):
posts, SAVED_TIME = get_posts()
template_values = {
'posts' : posts ,
'age_str' : age_str(SAVED_TIME)
}
self.response.out.write(ret_template('blog.html').render(template_values))
class BlogPageJsonHandler(BlogPage):
def get(self):
self.response.content_type = 'application/json; charset=utf-8'
posts = get_posts()[0]
post_list = []
for post in posts :
post_list.append({"subject":post.subject,"content":post.content,"created":str(post.post_date)})
j = json.dumps(post_list)
self.response.out.write(j)
class FormHandler(webapp2.RequestHandler):
def get(self):
template_values = {
'subject': '',
'content': '',
}
self.response.out.write(ret_template('form.html').render(template_values))
def post(self):
subject = self.request.get('subject')
content = self.request.get('content')
template_values = {
'subject': subject,
'content': content,
}
if subject and content :
post_key = set_posts(subject, content)
self.redirect("/blog/"+str(post_key.id()))
else :
template_values['error'] = 'Enter both content and subject'
self.response.out.write(ret_template('form.html').render(template_values))
class ThanksHandler(webapp2.RequestHandler):
def get(self , post_key):
post = memcache.get(post_key)
SAVED_TIME = memcache.get('%s|age'%post_key)
if post is None or not SAVED_TIME :
SAVED_TIME = datetime.datetime.utcnow()
post = Post.get_by_id(int(post_key))
memcache.set(post_key,post)
memcache.set('%s|age'%post_key,SAVED_TIME)
template_values = {
'subject': post.subject,
'content': post.content,
'post_date': post.post_date,
'key' : post_key,
'age_str' : age_str(SAVED_TIME)
}
self.response.out.write(ret_template('thanks.html').render(template_values))
class ThanksJsonHandler(webapp2.RequestHandler):
def get(self , post_key) :
post = Post.get_by_id(int(post_key))
post_dict = {"subject":post.subject,"content":post.content,"created":str(post.post_date)}
j = json.dumps(post_dict)
self.response.content_type = 'application/json; charset=utf-8'
self.response.out.write(j)
class SignupHandler(webapp2.RequestHandler):
def get(self) :
template_values = {
'name' : '',
'usererror' : '' ,
'password' : '',
'passworderror': '',
'verify' : '' ,
'verifyerror': '' ,
'email': '',
'emailerror':''
}
self.response.out.write(ret_template('sign.html').render(template_values))
def post(self) :
name=self.request.get('username')
usererror = ''
password=self.request.get('password')
passworderror = ''
verify=self.request.get('verify')
verifyerror=''
email=self.request.get('email')
emailerror=''
if not valid_username(name) :
usererror = "That's not a valid username."
if user_exists(name) :
usererror = "That user already exists!"
if not valid_password(password):
passworderror = "That wasn't a valid password."
if not valid_verify(password,verify) and valid_password(password):
verifyerror = "Your passwords didn't match."
if not valid_email(email):
emailerror = "That's not a valid email."
if not valid_form(name,password,verify,email) :
template_values = {
'name' : name,
'usererror' : usererror ,
'password' : '',
'passworderror': passworderror,
'verify' : '' ,
'verifyerror': verifyerror ,
'email': email,
'emailerror':emailerror
}
self.response.out.write(ret_template('sign.html').render(template_values))
else :
self.response.headers.add_header('Set-Cookie', 'user_id='+make_secure_val(str(name))+'; Path=/')
u = Users(user_id=name, email=email, password=password)
u.put()
self.redirect('/welcome')
class LoginHandler(webapp2.RequestHandler):
def get(self) :
template_values = {
'name' : '',
'password' : '',
'error':''
}
self.response.out.write(ret_template('login.html').render(template_values))
def post(self):
username = self.request.get('username')
password = self.request.get('password')
error = ''
if user_exists(username) :
if password_match(username,password) :
self.response.headers.add_header('Set-Cookie', 'user_id='+make_secure_val(str(username))+'; Path=/')
self.redirect('/welcome')
else :
error = 'Invalid Login'
else :
error = 'Invalid Login'
template_values = { 'name' : '',
'password' : '',
'error': error
}
self.response.out.write(ret_template('login.html').render(template_values))
class LogoutHandler(webapp2.RequestHandler) :
def get(self):
self.response.headers.add_header("Set-Cookie","user_id="+';Path = /')
self.redirect('/signup')
class WelcomeHandler(webapp2.RequestHandler):
def get(self) :
user_cookie = self.request.cookies.get('user_id')
if user_cookie and check_secure_val(user_cookie):
template_values = {'username' : user_cookie.split('|')[0]}
self.response.out.write(ret_template('welcome.html').render(template_values))
else :
self.redirect('/signup')
class FlushHandler(webapp2.RequestHandler) :
def get(self) :
memcache.flush_all()
self.redirect('/')
class SearchHandler(webapp2.RequestHandler) :
def get(self, search_string="") :
search_string = self.request.get('search_string')
logging.error(search_string)
template_values={"tr_show_value" : 'hidden' , "sr_show_value" : 'hidden' , "mr_show_value" : 'hidden'}
self.response.out.write(ret_template('test.html').render(template_values))
def post(self):
search_string = self.request.get('search_string')
user_ip = self.request.remote_addr
if search_string == "" :
template_values={'search_string':search_string , "tr_show_value" : 'hidden' ,
"mr_show_value" : 'hidden' ,
"sr_show_value" : 'hidden'}
else :
template_values={'search_string':search_string}
url = 'http://api.duckduckgo.com/?q=%s&format=json'%search_string.replace(' ','%20')
url_content = urllib2.urlopen(url).read()
search_json = json.loads(url_content)
logging.error(search_json)
try :
primary_results = return_primary_results(search_json)
except Exception :
primary_results = {}
news_results = return_news_results(search_string , user_ip)
top_results = return_top_results(search_string , user_ip)
web_results = return_web_results(search_string , user_ip)
image_results = return_image_results(search_string , user_ip)
thored_results = return_thored_results(search_string , user_ip)
video_results = return_video_results(search_string, user_ip)
template_values = dict(template_values.items() +
primary_results.items() +
news_results.items() +
top_results.items() +
web_results.items() +
image_results.items() +
thored_results.items() +
video_results.items()
)
template_values['search_string'] = search_string
self.response.out.write(ret_template('test.html').render(template_values))
class ShareHandler(webapp2.RequestHandler) :
def get(self) :
#add = self.request.remote_addr
template_values = {}
self.response.out.write(ret_template('share.html').render(template_values))
class TestHandler(SearchHandler):
    """Serves /test with exactly the same get/post behaviour as SearchHandler."""
    pass
class DownloadHandler( webapp2.RequestHandler) :
def get(self) :
vid = self.request.get('vid')
logging.error(vid)
url = "http://www.youtube.com/watch?v=%s"%vid
video = pafy.new(url)
streams = video.allstreams
template_values = { "url" : url ,
"vid" : vid ,
"video" : video,
"streams" : streams
}
self.response.out.write(ret_template('downloader.html').render(template_values))
def post(self) :
pass
def handle_404(request, response, exception):
logging.exception(exception)
response.write(ret_template('404.html').render({}))
response.set_status(404)
def handle_500(request, response, exception):
logging.exception(exception)
response.write(ret_template('500.html').render({}))
response.set_status(500)
app = webapp2.WSGIApplication([('/blog', BlogPage),
('/blog'+'.json', BlogPageJsonHandler),
('/search',SearchHandler),
('/',SearchHandler),
('/share',ShareHandler),
('/blog/newpost',FormHandler),
('/blog/(\d+)',ThanksHandler),
('/blog/(\d+)'+'.json',ThanksJsonHandler),
('/signup',SignupHandler),
('/login',LoginHandler),
('/logout',LogoutHandler),
('/welcome',WelcomeHandler),
('/flush',FlushHandler),
('/test',TestHandler),
('/download', DownloadHandler)], debug=True)
app.error_handlers[404] = handle_404
app.error_handlers[500] = handle_500
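# Illustrative note (assumes the standard App Engine Python SDK and an
# app.yaml that routes all URLs to this module): running
#     dev_appserver.py .
# from the project directory serves the blog at /blog and the search front
# end at / on a local development server.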
|
|
#!/usr/bin/env python
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Model classes which are used to communicate between parts of implementation.
These model classes are describing mapreduce, its current state and
communication messages. They are either stored in the datastore or
serialized to/from json and passed around with other means.
"""
__all__ = ["JsonEncoder",
"JsonDecoder",
"JSON_DEFAULTS",
"JsonMixin",
"JsonProperty",
"MapreduceState",
"MapperSpec",
"MapreduceControl",
"MapreduceSpec",
"ShardState",
"CountersMap",
"TransientShardState",
"QuerySpec",
"HugeTask"]
import cgi
import copy
import datetime
import logging
import os
import random
import simplejson
import time
import urllib
import zlib
from google.appengine.api import datastore_errors
from google.appengine.api import datastore_types
from google.appengine.api import memcache
from google.appengine.api import taskqueue
from google.appengine.datastore import datastore_rpc
from google.appengine.ext import db
from google.appengine.ext.mapreduce import context
from google.appengine.ext.mapreduce import hooks
from google.appengine.ext.mapreduce import util
from google.appengine._internal.graphy.backends import google_chart_api
_MAP_REDUCE_KINDS = ("_GAE_MR_MapreduceControl",
"_GAE_MR_MapreduceState",
"_GAE_MR_ShardState",
"_GAE_MR_TaskPayload")
class _HugeTaskPayload(db.Model):
"""Model object to store task payload."""
payload = db.BlobProperty()
@classmethod
def kind(cls):
"""Returns entity kind."""
return "_GAE_MR_TaskPayload"
class HugeTask(object):
"""HugeTask is a taskqueue.Task-like class that can store big payloads.
Payloads are stored either in the task payload itself or in the datastore.
Task handlers should inherit from base_handler.HugeTaskHandler class.
"""
PAYLOAD_PARAM = "__payload"
PAYLOAD_KEY_PARAM = "__payload_key"
MAX_TASK_PAYLOAD = taskqueue.MAX_PUSH_TASK_SIZE_BYTES - 1024
MAX_DB_PAYLOAD = datastore_rpc.BaseConnection.MAX_RPC_BYTES
PAYLOAD_VERSION_HEADER = "AE-MR-Payload-Version"
PAYLOAD_VERSION = "1"
def __init__(self,
url,
params,
name=None,
eta=None,
countdown=None,
parent=None,
headers=None):
"""Init.
Args:
url: task url in str.
params: a dict from str to str.
name: task name.
eta: task eta.
countdown: task countdown.
parent: parent entity of huge task's payload.
headers: a dict of headers for the task.
Raises:
ValueError: when payload is too big even for datastore, or parent is
not specified when payload is stored in datastore.
"""
self.url = url
self.name = name
self.eta = eta
self.countdown = countdown
self._headers = {
"Content-Type": "application/octet-stream",
self.PAYLOAD_VERSION_HEADER: self.PAYLOAD_VERSION
}
if headers:
self._headers.update(headers)
payload_str = urllib.urlencode(params)
compressed_payload = ""
if len(payload_str) > self.MAX_TASK_PAYLOAD:
compressed_payload = zlib.compress(payload_str)
if not compressed_payload:
self._payload = payload_str
elif len(compressed_payload) < self.MAX_TASK_PAYLOAD:
self._payload = self.PAYLOAD_PARAM + compressed_payload
elif len(compressed_payload) > self.MAX_DB_PAYLOAD:
raise ValueError(
"Payload from %s to big to be stored in database: %s" %
(self.name, len(compressed_payload)))
else:
if not parent:
raise ValueError("Huge tasks should specify parent entity.")
payload_entity = _HugeTaskPayload(payload=compressed_payload,
parent=parent)
payload_key = payload_entity.put()
self._payload = self.PAYLOAD_KEY_PARAM + str(payload_key)
def add(self, queue_name, transactional=False):
"""Add task to the queue."""
task = self.to_task()
task.add(queue_name, transactional)
def to_task(self):
"""Convert to a taskqueue task."""
return taskqueue.Task(
url=self.url,
payload=self._payload,
name=self.name,
eta=self.eta,
countdown=self.countdown,
headers=self._headers)
@classmethod
def decode_payload(cls, request):
"""Decode task payload.
HugeTask controls its own payload entirely including urlencoding.
It doesn't depend on any particular web framework.
Args:
request: a webapp Request instance.
Returns:
A dict of str to str. The same as the params argument to __init__.
Raises:
DeprecationWarning: When the task payload was constructed by an older
incompatible version of mapreduce.
"""
if request.headers.get(cls.PAYLOAD_VERSION_HEADER) != cls.PAYLOAD_VERSION:
raise DeprecationWarning(
"Task is generated by an older incompatible version of mapreduce. "
"Please kill this job manually")
body = request.body
compressed_payload_str = None
if body.startswith(cls.PAYLOAD_KEY_PARAM):
payload_key = body[len(cls.PAYLOAD_KEY_PARAM):]
payload_entity = _HugeTaskPayload.get(payload_key)
compressed_payload_str = payload_entity.payload
elif body.startswith(cls.PAYLOAD_PARAM):
compressed_payload_str = body[len(cls.PAYLOAD_PARAM):]
if compressed_payload_str:
payload_str = zlib.decompress(compressed_payload_str)
else:
payload_str = body
result = {}
for (name, value) in cgi.parse_qs(payload_str).items():
if len(value) == 1:
result[name] = value[0]
else:
result[name] = value
return result
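# Illustrative sketch (hypothetical url and param values; requires the App
# Engine SDK for the taskqueue constants): a params dict small enough to fit
# in the task payload is stored inline as a plain urlencoded string, so no
# compression marker and no datastore entity are involved.
def _huge_task_example():
    task = HugeTask(url="/mapreduce/worker_callback",
                    params={"shard_id": "job1-0"})
    # Peeking at the private payload purely for illustration.
    assert task._payload == "shard_id=job1-0"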
class JsonEncoder(simplejson.JSONEncoder):
"""MR customized json encoder."""
TYPE_ID = "__mr_json_type"
def default(self, o):
"""Inherit docs."""
if type(o) in JSON_DEFAULTS:
encoder = JSON_DEFAULTS[type(o)][0]
json_struct = encoder(o)
json_struct[self.TYPE_ID] = type(o).__name__
return json_struct
return super(JsonEncoder, self).default(o)
class JsonDecoder(simplejson.JSONDecoder):
"""MR customized json decoder."""
def __init__(self, **kwargs):
if "object_hook" not in kwargs:
kwargs["object_hook"] = self._dict_to_obj
super(JsonDecoder, self).__init__(**kwargs)
def _dict_to_obj(self, d):
"""Converts a dictionary of json object to a Python object."""
if JsonEncoder.TYPE_ID not in d:
return d
obj_type = d.pop(JsonEncoder.TYPE_ID)
if obj_type in _TYPE_IDS:
decoder = JSON_DEFAULTS[_TYPE_IDS[obj_type]][1]
return decoder(d)
else:
raise TypeError("Invalid type %s.", obj_type)
_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S.%f"
def _json_encode_datetime(o):
"""Json encode a datetime object.
Args:
o: a datetime object.
Returns:
A dict of json primitives.
"""
return {"isostr": o.strftime(_DATETIME_FORMAT)}
def _json_decode_datetime(d):
"""Converts a dict of json primitives to a datetime object."""
return datetime.datetime.strptime(d["isostr"], _DATETIME_FORMAT)
JSON_DEFAULTS = {
datetime.datetime: (_json_encode_datetime, _json_decode_datetime),
}
_TYPE_IDS = dict(zip([_cls.__name__ for _cls in JSON_DEFAULTS],
JSON_DEFAULTS.keys()))
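# Illustrative sketch: the TYPE_ID tag that JsonEncoder writes is what lets
# JsonDecoder rebuild the original Python type; datetime is the only type
# registered in JSON_DEFAULTS here.
def _json_datetime_roundtrip_example():
    dt = datetime.datetime(2013, 1, 2, 3, 4, 5, 600000)
    encoded = simplejson.dumps(dt, cls=JsonEncoder)
    # encoded is a json object carrying both the "isostr" payload and the
    # "__mr_json_type": "datetime" tag.
    assert simplejson.loads(encoded, cls=JsonDecoder) == dt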
class JsonMixin(object):
"""Simple, stateless json utilities mixin.
Requires class to implement two methods:
to_json(self): convert data to json-compatible datastructure (dict,
list, strings, numbers)
@classmethod from_json(cls, json): load data from json-compatible structure.
"""
def to_json_str(self):
"""Convert data to json string representation.
Returns:
json representation as string.
"""
json = self.to_json()
try:
return simplejson.dumps(json, sort_keys=True, cls=JsonEncoder)
except:
logging.exception("Could not serialize JSON: %r", json)
raise
@classmethod
def from_json_str(cls, json_str):
"""Convert json string representation into class instance.
Args:
json_str: json representation as string.
Returns:
New instance of the class with data loaded from json string.
"""
return cls.from_json(simplejson.loads(json_str, cls=JsonDecoder))
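# Minimal sketch of a JsonMixin user (hypothetical class, not part of the
# mapreduce API): to_json and a from_json classmethod are all the mixin needs.
class _Point(JsonMixin):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def to_json(self):
        return {"x": self.x, "y": self.y}
    @classmethod
    def from_json(cls, json):
        return cls(json["x"], json["y"])
# _Point.from_json_str(_Point(1, 2).to_json_str()) rebuilds an equal point.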
class JsonProperty(db.UnindexedProperty):
"""Property type for storing json representation of data.
Requires data types to implement two methods:
to_json(self): convert data to json-compatible datastructure (dict,
list, strings, numbers)
@classmethod from_json(cls, json): load data from json-compatible structure.
"""
def __init__(self, data_type, default=None, **kwargs):
"""Constructor.
Args:
data_type: underlying data type as class.
default: default value for the property. The value is deep copied
for each model instance.
kwargs: remaining arguments.
"""
kwargs["default"] = default
super(JsonProperty, self).__init__(**kwargs)
self.data_type = data_type
def get_value_for_datastore(self, model_instance):
"""Gets value for datastore.
Args:
model_instance: instance of the model class.
Returns:
datastore-compatible value.
"""
value = super(JsonProperty, self).get_value_for_datastore(model_instance)
if not value:
return None
json_value = value
if not isinstance(value, dict):
json_value = value.to_json()
if not json_value:
return None
return datastore_types.Text(simplejson.dumps(
json_value, sort_keys=True, cls=JsonEncoder))
def make_value_from_datastore(self, value):
"""Convert value from datastore representation.
Args:
value: datastore value.
Returns:
value to store in the model.
"""
if value is None:
return None
json = simplejson.loads(value, cls=JsonDecoder)
if self.data_type == dict:
return json
return self.data_type.from_json(json)
def validate(self, value):
"""Validate value.
Args:
value: model value.
Returns:
Whether the specified value is valid data type value.
Raises:
BadValueError: when value is not of self.data_type type.
"""
if value is not None and not isinstance(value, self.data_type):
raise datastore_errors.BadValueError(
"Property %s must be convertible to a %s instance (%s)" %
(self.name, self.data_type, value))
return super(JsonProperty, self).validate(value)
def empty(self, value):
"""Checks if value is empty.
Args:
value: model value.
Returns:
True if the passed value is empty.
"""
return not value
def default_value(self):
"""Create default model value.
If default option was specified, then it will be deeply copied.
None otherwise.
Returns:
default model value.
"""
if self.default:
return copy.deepcopy(self.default)
else:
return None
_FUTURE_TIME = 2**34
def _get_descending_key(gettime=time.time):
"""Returns a key name lexically ordered by time descending.
This lets us have a key name for use with Datastore entities which returns
rows in time descending order when it is scanned in lexically ascending order,
allowing us to bypass index building for descending indexes.
Args:
gettime: Used for testing.
Returns:
A string with a time descending key.
"""
now_descending = int((_FUTURE_TIME - gettime()) * 100)
request_id_hash = os.environ.get("REQUEST_ID_HASH")
if not request_id_hash:
request_id_hash = str(random.getrandbits(32))
return "%d%s" % (now_descending, request_id_hash)
class CountersMap(JsonMixin):
"""Maintains map from counter name to counter value.
The class is used to provide basic arithmetic on counter values (bulk
add/subtract), increment individual values and store/load data from json.
"""
def __init__(self, initial_map=None):
"""Constructor.
Args:
initial_map: initial counter values map from counter name (string) to
counter value (int).
"""
if initial_map:
self.counters = initial_map
else:
self.counters = {}
def __repr__(self):
"""Compute string representation."""
return "mapreduce.model.CountersMap(%r)" % self.counters
def get(self, counter_name):
"""Get current counter value.
Args:
counter_name: counter name as string.
Returns:
current counter value as int. 0 if counter was not set.
"""
return self.counters.get(counter_name, 0)
def increment(self, counter_name, delta):
"""Increment counter value.
Args:
counter_name: counter name as String.
delta: increment delta as Integer.
Returns:
new counter value.
"""
current_value = self.counters.get(counter_name, 0)
new_value = current_value + delta
self.counters[counter_name] = new_value
return new_value
def add_map(self, counters_map):
"""Add all counters from the map.
For each counter in the passed map, adds its value to the counter in this
map.
Args:
counters_map: CounterMap instance to add.
"""
for counter_name in counters_map.counters:
self.increment(counter_name, counters_map.counters[counter_name])
def sub_map(self, counters_map):
"""Subtracts all counters from the map.
For each counter in the passed map, subtracts its value from the counter in
this map.
Args:
counters_map: CounterMap instance to subtract.
"""
for counter_name in counters_map.counters:
self.increment(counter_name, -counters_map.counters[counter_name])
def clear(self):
"""Clear all values."""
self.counters = {}
def to_json(self):
"""Serializes all the data in this map into json form.
Returns:
json-compatible data representation.
"""
return {"counters": self.counters}
@classmethod
def from_json(cls, json):
"""Create new CountersMap from the json data structure, encoded by to_json.
Args:
json: json representation of CountersMap .
Returns:
an instance of CountersMap with all data deserialized from json.
"""
counters_map = cls()
counters_map.counters = json["counters"]
return counters_map
def to_dict(self):
"""Convert to dictionary.
Returns:
a dictionary with counter name as key and counter values as value.
"""
return self.counters
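# Illustrative sketch: rolling several per-shard counter maps up into one,
# the way the controller task aggregates shard counters (the counter names
# here are hypothetical).
def _counters_map_example():
    total = CountersMap()
    total.add_map(CountersMap({"mapper-calls": 10}))
    total.add_map(CountersMap({"mapper-calls": 5, "errors": 1}))
    assert total.get("mapper-calls") == 15
    assert total.get("errors") == 1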
class MapperSpec(JsonMixin):
"""Contains a specification for the mapper phase of the mapreduce.
MapperSpec instance can be changed only during mapreduce starting process,
and it remains immutable for the rest of mapreduce execution. MapperSpec is
passed as a payload to all mapreduce tasks in JSON encoding as part of
MapreduceSpec.
Specifying mapper handlers:
* '<module_name>.<class_name>' - __call__ method of class instance will be
called
* '<module_name>.<function_name>' - function will be called.
* '<module_name>.<class_name>.<method_name>' - class will be instantiated
and method called.
"""
def __init__(self,
handler_spec,
input_reader_spec,
params,
shard_count,
output_writer_spec=None):
"""Creates a new MapperSpec.
Args:
handler_spec: handler specification as string (see class doc for
details).
input_reader_spec: The class name of the input reader to use.
params: Dictionary of additional parameters for the mapper.
shard_count: number of shards to process in parallel.
Properties:
handler_spec: name of handler class/function to use.
input_reader_spec: The class name of the input reader to use.
params: Dictionary of additional parameters for the mapper.
shard_count: number of shards to process in parallel.
output_writer_spec: The class name of the output writer to use.
"""
self.handler_spec = handler_spec
self.input_reader_spec = input_reader_spec
self.output_writer_spec = output_writer_spec
self.shard_count = int(shard_count)
self.params = params
def get_handler(self):
"""Get mapper handler instance.
Returns:
handler instance as callable.
"""
return util.handler_for_name(self.handler_spec)
handler = property(get_handler)
def input_reader_class(self):
"""Get input reader class.
Returns:
input reader class object.
"""
return util.for_name(self.input_reader_spec)
def output_writer_class(self):
"""Get output writer class.
Returns:
output writer class object.
"""
return self.output_writer_spec and util.for_name(self.output_writer_spec)
def to_json(self):
"""Serializes this MapperSpec into a json-izable object."""
result = {
"mapper_handler_spec": self.handler_spec,
"mapper_input_reader": self.input_reader_spec,
"mapper_params": self.params,
"mapper_shard_count": self.shard_count
}
if self.output_writer_spec:
result["mapper_output_writer"] = self.output_writer_spec
return result
def __str__(self):
return "MapperSpec(%s, %s, %s, %s)" % (
self.handler_spec, self.input_reader_spec, self.params,
self.shard_count)
@classmethod
def from_json(cls, json):
"""Creates MapperSpec from a dict-like object."""
return cls(json["mapper_handler_spec"],
json["mapper_input_reader"],
json["mapper_params"],
json["mapper_shard_count"],
json.get("mapper_output_writer")
)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.to_json() == other.to_json()
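# Illustrative sketch (handler and reader paths are hypothetical): a
# MapperSpec survives a to_json/from_json round trip unchanged, which is how
# it rides inside task payloads as part of MapreduceSpec.
def _mapper_spec_example():
    spec = MapperSpec("my_module.my_map_function",
                      "mapreduce.input_readers.DatastoreInputReader",
                      {"entity_kind": "GuestbookEntry"},
                      8)
    assert MapperSpec.from_json(spec.to_json()) == spec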
class MapreduceSpec(JsonMixin):
"""Contains a specification for the whole mapreduce.
MapreduceSpec instance can be changed only during mapreduce starting process,
and it remains immutable for the rest of mapreduce execution. MapreduceSpec is
passed as a payload to all mapreduce tasks in json encoding.
"""
PARAM_DONE_CALLBACK = "done_callback"
PARAM_DONE_CALLBACK_QUEUE = "done_callback_queue"
def __init__(self,
name,
mapreduce_id,
mapper_spec,
params=None,
hooks_class_name=None):
"""Create new MapreduceSpec.
Args:
name: The name of this mapreduce job type.
mapreduce_id: ID of the mapreduce.
mapper_spec: JSON-encoded string containing a MapperSpec.
params: dictionary of additional mapreduce parameters.
hooks_class_name: The fully qualified name of the hooks class to use.
Properties:
name: The name of this mapreduce job type.
mapreduce_id: unique id of this mapreduce as string.
mapper: This MapreduceSpec's instance of MapperSpec.
params: dictionary of additional mapreduce parameters.
hooks_class_name: The fully qualified name of the hooks class to use.
"""
self.name = name
self.mapreduce_id = mapreduce_id
self.mapper = MapperSpec.from_json(mapper_spec)
self.params = params or {}
self.hooks_class_name = hooks_class_name
self.__hooks = None
self.get_hooks()
def get_hooks(self):
"""Returns a hooks.Hooks class or None if no hooks class has been set."""
if self.__hooks is None and self.hooks_class_name is not None:
hooks_class = util.for_name(self.hooks_class_name)
if not isinstance(hooks_class, type):
raise ValueError("hooks_class_name must refer to a class, got %s" %
type(hooks_class).__name__)
if not issubclass(hooks_class, hooks.Hooks):
raise ValueError(
"hooks_class_name must refer to a hooks.Hooks subclass")
self.__hooks = hooks_class(self)
return self.__hooks
def to_json(self):
"""Serializes all data in this mapreduce spec into json form.
Returns:
data in json format.
"""
mapper_spec = self.mapper.to_json()
return {
"name": self.name,
"mapreduce_id": self.mapreduce_id,
"mapper_spec": mapper_spec,
"params": self.params,
"hooks_class_name": self.hooks_class_name,
}
@classmethod
def from_json(cls, json):
"""Create new MapreduceSpec from the json, encoded by to_json.
Args:
json: json representation of MapreduceSpec.
Returns:
an instance of MapreduceSpec with all data deserialized from json.
"""
mapreduce_spec = cls(json["name"],
json["mapreduce_id"],
json["mapper_spec"],
json.get("params"),
json.get("hooks_class_name"))
return mapreduce_spec
def __str__(self):
return str(self.to_json())
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.to_json() == other.to_json()
@classmethod
def _get_mapreduce_spec(cls, mr_id):
"""Get Mapreduce spec from mr id."""
key = 'GAE-MR-spec: %s' % mr_id
spec_json = memcache.get(key)
if spec_json:
return cls.from_json(spec_json)
state = MapreduceState.get_by_job_id(mr_id)
spec = state.mapreduce_spec
spec_json = spec.to_json()
memcache.set(key, spec_json)
return spec
class MapreduceState(db.Model):
"""Holds accumulated state of mapreduce execution.
MapreduceState is stored in datastore with a key name equal to the
mapreduce ID. Only controller tasks can write to MapreduceState.
Properties:
mapreduce_spec: cached deserialized MapreduceSpec instance. read-only
active: if this MR is still running.
last_poll_time: last time controller job has polled this mapreduce.
counters_map: shard's counters map as CountersMap. Mirrors
counters_map_json.
chart_url: last computed mapreduce status chart url. This chart displays the
progress of all the shards the best way it can.
sparkline_url: last computed mapreduce status chart url in small format.
result_status: If not None, the final status of the job.
active_shards: How many shards are still processing. This starts as 0,
then set by KickOffJob handler to be the actual number of input
readers after input splitting, and is updated by Controller task
as shards finish.
start_time: When the job started.
writer_state: Json property to be used by writer to store its state.
This is filled when there is a single output per job. Will be deprecated.
Use OutputWriter.get_filenames instead.
"""
RESULT_SUCCESS = "success"
RESULT_FAILED = "failed"
RESULT_ABORTED = "aborted"
_RESULTS = frozenset([RESULT_SUCCESS, RESULT_FAILED, RESULT_ABORTED])
mapreduce_spec = JsonProperty(MapreduceSpec, indexed=False)
active = db.BooleanProperty(default=True, indexed=False)
last_poll_time = db.DateTimeProperty(required=True)
counters_map = JsonProperty(CountersMap, default=CountersMap(), indexed=False)
app_id = db.StringProperty(required=False, indexed=True)
writer_state = JsonProperty(dict, indexed=False)
active_shards = db.IntegerProperty(default=0, indexed=False)
failed_shards = db.IntegerProperty(default=0, indexed=False)
aborted_shards = db.IntegerProperty(default=0, indexed=False)
result_status = db.StringProperty(required=False, choices=_RESULTS)
chart_url = db.TextProperty(default="")
chart_width = db.IntegerProperty(default=300, indexed=False)
sparkline_url = db.TextProperty(default="")
start_time = db.DateTimeProperty(auto_now_add=True)
@classmethod
def kind(cls):
"""Returns entity kind."""
return "_GAE_MR_MapreduceState"
@classmethod
def get_key_by_job_id(cls, mapreduce_id):
"""Retrieves the Key for a Job.
Args:
mapreduce_id: The job to retrieve.
Returns:
Datastore Key that can be used to fetch the MapreduceState.
"""
return db.Key.from_path(cls.kind(), str(mapreduce_id))
@classmethod
def get_by_job_id(cls, mapreduce_id):
"""Retrieves the instance of state for a Job.
Args:
mapreduce_id: The mapreduce job to retrieve.
Returns:
instance of MapreduceState for passed id.
"""
return db.get(cls.get_key_by_job_id(mapreduce_id))
def set_processed_counts(self, shards_processed):
"""Updates a chart url to display processed count for each shard.
Args:
shards_processed: list of integers with number of processed entities in
each shard
"""
chart = google_chart_api.BarChart(shards_processed)
shard_count = len(shards_processed)
if shards_processed:
stride_length = max(1, shard_count / 16)
chart.bottom.labels = []
for x in xrange(shard_count):
if (x % stride_length == 0 or
x == shard_count - 1):
chart.bottom.labels.append(x)
else:
chart.bottom.labels.append("")
chart.left.labels = ["0", str(max(shards_processed))]
chart.left.min = 0
self.chart_width = min(700, max(300, shard_count * 20))
self.chart_url = chart.display.Url(self.chart_width, 200)
def get_processed(self):
"""Number of processed entities.
Returns:
The total number of processed entities as int.
"""
return self.counters_map.get(context.COUNTER_MAPPER_CALLS)
processed = property(get_processed)
@staticmethod
def create_new(mapreduce_id=None,
gettime=datetime.datetime.now):
"""Create a new MapreduceState.
Args:
mapreduce_id: Mapreduce id as string.
gettime: Used for testing.
"""
if not mapreduce_id:
mapreduce_id = MapreduceState.new_mapreduce_id()
state = MapreduceState(key_name=mapreduce_id,
last_poll_time=gettime())
state.set_processed_counts([])
return state
@staticmethod
def new_mapreduce_id():
"""Generate new mapreduce id."""
return _get_descending_key()
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.properties() == other.properties()
class TransientShardState(object):
"""Shard's state kept in task payload.
TransientShardState holds the portion of shard processing state which is not
saved in the datastore, but rather is passed around in the task payload.
"""
def __init__(self,
base_path,
mapreduce_spec,
shard_id,
slice_id,
input_reader,
initial_input_reader,
output_writer=None,
retries=0,
handler=None):
"""Init.
Args:
base_path: base path of this mapreduce job.
mapreduce_spec: an instance of MapReduceSpec.
shard_id: shard id.
slice_id: slice id. When enqueuing task for the next slice, this number
is incremented by 1.
input_reader: input reader instance for this shard.
initial_input_reader: the input reader instance before any iteration.
Used by shard retry.
output_writer: output writer instance for this shard, if exists.
retries: the number of retries of the current shard. Used to drop
tasks from old retries.
handler: map/reduce handler.
"""
self.base_path = base_path
self.mapreduce_spec = mapreduce_spec
self.shard_id = shard_id
self.slice_id = slice_id
self.input_reader = input_reader
self.initial_input_reader = initial_input_reader
self.output_writer = output_writer
self.retries = retries
self.handler = handler
def reset_for_retry(self, output_writer):
"""Reset self for shard retry.
Args:
output_writer: new output writer that contains new output files.
"""
self.input_reader = self.initial_input_reader
self.slice_id = 0
self.retries += 1
self.output_writer = output_writer
self.handler = None
def advance_for_next_slice(self):
"""Advance relavent states for next slice."""
self.slice_id += 1
def to_dict(self):
"""Convert state to dictionary to save in task payload."""
result = {"mapreduce_spec": self.mapreduce_spec.to_json_str(),
"shard_id": self.shard_id,
"slice_id": str(self.slice_id),
"input_reader_state": self.input_reader.to_json_str(),
"initial_input_reader_state":
self.initial_input_reader.to_json_str(),
"retries": str(self.retries)}
if self.output_writer:
result["output_writer_state"] = self.output_writer.to_json_str()
serialized_handler = util.try_serialize_handler(self.handler)
if serialized_handler:
result["serialized_handler"] = serialized_handler
return result
@classmethod
def from_request(cls, request):
"""Create new TransientShardState from webapp request."""
mapreduce_spec = MapreduceSpec.from_json_str(request.get("mapreduce_spec"))
mapper_spec = mapreduce_spec.mapper
input_reader_spec_dict = simplejson.loads(request.get("input_reader_state"),
cls=JsonDecoder)
input_reader = mapper_spec.input_reader_class().from_json(
input_reader_spec_dict)
initial_input_reader_spec_dict = simplejson.loads(
request.get("initial_input_reader_state"), cls=JsonDecoder)
initial_input_reader = mapper_spec.input_reader_class().from_json(
initial_input_reader_spec_dict)
output_writer = None
if mapper_spec.output_writer_class():
output_writer = mapper_spec.output_writer_class().from_json(
simplejson.loads(request.get("output_writer_state", "{}"),
cls=JsonDecoder))
assert isinstance(output_writer, mapper_spec.output_writer_class()), (
"%s.from_json returned an instance of wrong class: %s" % (
mapper_spec.output_writer_class(),
output_writer.__class__))
request_path = request.path
base_path = request_path[:request_path.rfind("/")]
handler = util.try_deserialize_handler(request.get("serialized_handler"))
if not handler:
handler = mapreduce_spec.mapper.handler
return cls(base_path,
mapreduce_spec,
str(request.get("shard_id")),
int(request.get("slice_id")),
input_reader,
initial_input_reader,
output_writer=output_writer,
retries=int(request.get("retries")),
handler=handler)
class ShardState(db.Model):
"""Single shard execution state.
The shard state is stored in the datastore and is later aggregated by
controller task. ShardState key_name is equal to shard_id.
Properties:
active: if we have this shard still running as boolean.
counters_map: shard's counters map as CountersMap. All counters yielded
within mapreduce are stored here.
mapreduce_id: unique id of the mapreduce.
shard_id: unique id of this shard as string.
shard_number: ordered number for this shard.
retries: the number of times this shard has been retried.
result_status: If not None, the final status of this shard.
update_time: The last time this shard state was updated.
shard_description: A string description of the work this shard will do.
last_work_item: A string description of the last work item processed.
writer_state: writer state for this shard. This is filled when a job
has one output per shard by MR worker after finalizing output files.
slice_id: slice id of current executing slice. A task
will not run unless its slice_id matches this. Initial
value is 0. By the end of slice execution, this number is
incremented by 1.
slice_start_time: a slice updates this to now at the beginning of
execution transactionally. If transaction succeeds, the current task holds
a lease of slice duration + some grace period. During this time, no
other task with the same slice_id will execute. Upon slice failure,
the task should try to unset this value to allow retries to carry on
ASAP. slice_start_time is only meaningful when slice_id is the same.
slice_request_id: the request id that holds/held the lease. When lease has
expired, new request needs to verify that said request has indeed
ended according to logs API. Do this only when lease has expired
because logs API is expensive. This field should always be set/unset
with slice_start_time. It is possible Logs API doesn't log a request
at all or doesn't log the end of a request. So a new request can
proceed after a long conservative timeout.
slice_retries: the number of times a slice has been retried due to
processing data when lock is held. Taskqueue/datastore errors
related to shard management are not counted. This count is
only a lower bound and is used to determine when to fail a slice
completely.
acquired_once: whether the lock for this slice has been acquired at
least once. When this is True, duplicates in outputs are possible.
This is very different from when slice_retries is 0, e.g. when
outputs have been written but a taskqueue problem prevents a slice
from continuing, acquired_once would be True but slice_retries would be
0.
"""
RESULT_SUCCESS = "success"
RESULT_FAILED = "failed"
RESULT_ABORTED = "aborted"
_RESULTS = frozenset([RESULT_SUCCESS, RESULT_FAILED, RESULT_ABORTED])
active = db.BooleanProperty(default=True, indexed=False)
counters_map = JsonProperty(CountersMap, default=CountersMap(), indexed=False)
result_status = db.StringProperty(choices=_RESULTS, indexed=False)
retries = db.IntegerProperty(default=0, indexed=False)
writer_state = JsonProperty(dict, indexed=False)
slice_id = db.IntegerProperty(default=0, indexed=False)
slice_start_time = db.DateTimeProperty(indexed=False)
slice_request_id = db.ByteStringProperty(indexed=False)
slice_retries = db.IntegerProperty(default=0, indexed=False)
acquired_once = db.BooleanProperty(default=False, indexed=False)
mapreduce_id = db.StringProperty(required=True)
update_time = db.DateTimeProperty(auto_now=True, indexed=False)
shard_description = db.TextProperty(default="")
last_work_item = db.TextProperty(default="")
def __str__(self):
kv = {"active": self.active,
"slice_id": self.slice_id,
"last_work_item": self.last_work_item,
"update_time": self.update_time}
if self.result_status:
kv["result_status"] = self.result_status
if self.retries:
kv["retries"] = self.retries
if self.slice_start_time:
kv["slice_start_time"] = self.slice_start_time
if self.slice_retries:
kv["slice_retries"] = self.slice_retries
if self.slice_request_id:
kv["slice_request_id"] = self.slice_request_id
if self.acquired_once:
kv["acquired_once"] = self.acquired_once
keys = kv.keys()
keys.sort()
result = "ShardState is {"
for k in keys:
result += k + ":" + str(kv[k]) + ","
result += "}"
return result
def reset_for_retry(self):
"""Reset self for shard retry."""
self.retries += 1
self.last_work_item = ""
self.active = True
self.result_status = None
self.counters_map = CountersMap()
self.slice_id = 0
self.slice_start_time = None
self.slice_request_id = None
self.slice_retries = 0
self.acquired_once = False
def advance_for_next_slice(self):
"""Advance self for next slice."""
self.slice_id += 1
self.slice_start_time = None
self.slice_request_id = None
self.slice_retries = 0
self.acquired_once = False
def set_for_failure(self):
self.active = False
self.result_status = self.RESULT_FAILED
def set_for_abort(self):
self.active = False
self.result_status = self.RESULT_ABORTED
def set_for_success(self):
self.active = False
self.result_status = self.RESULT_SUCCESS
self.slice_start_time = None
self.slice_request_id = None
self.slice_retries = 0
self.acquired_once = False
def copy_from(self, other_state):
"""Copy data from another shard state entity to self."""
for prop in self.properties().values():
setattr(self, prop.name, getattr(other_state, prop.name))
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.properties() == other.properties()
def get_shard_number(self):
"""Gets the shard number from the key name."""
return int(self.key().name().split("-")[-1])
shard_number = property(get_shard_number)
def get_shard_id(self):
"""Returns the shard ID."""
return self.key().name()
shard_id = property(get_shard_id)
@classmethod
def kind(cls):
"""Returns entity kind."""
return "_GAE_MR_ShardState"
@classmethod
def shard_id_from_number(cls, mapreduce_id, shard_number):
"""Get shard id by mapreduce id and shard number.
Args:
mapreduce_id: mapreduce id as string.
shard_number: shard number to compute id for as int.
Returns:
shard id as string.
"""
return "%s-%d" % (mapreduce_id, shard_number)
@classmethod
def get_key_by_shard_id(cls, shard_id):
"""Retrieves the Key for this ShardState.
Args:
shard_id: The shard ID to fetch.
Returns:
The Datastore key to use to retrieve this ShardState.
"""
return db.Key.from_path(cls.kind(), shard_id)
@classmethod
def get_by_shard_id(cls, shard_id):
"""Get shard state from datastore by shard_id.
Args:
shard_id: shard id as string.
Returns:
ShardState for given shard id or None if it's not found.
"""
return cls.get_by_key_name(shard_id)
@classmethod
@db.non_transactional
def find_by_mapreduce_state(cls, mapreduce_state):
"""Find all shard states for given mapreduce.
Never runs within a transaction since it may touch >5 entity groups (one
for each shard).
Args:
mapreduce_state: MapreduceState instance
Returns:
iterable of all ShardState for given mapreduce.
"""
keys = cls.calculate_keys_by_mapreduce_state(mapreduce_state)
return [state for state in db.get(keys) if state]
@classmethod
def calculate_keys_by_mapreduce_state(cls, mapreduce_state):
"""Calculate all shard states keys for given mapreduce.
Args:
mapreduce_state: MapreduceState instance
Returns:
A list of keys for shard states. The corresponding shard states
may not exist.
"""
keys = []
for i in range(mapreduce_state.mapreduce_spec.mapper.shard_count):
shard_id = cls.shard_id_from_number(mapreduce_state.key().name(), i)
keys.append(cls.get_key_by_shard_id(shard_id))
return keys
@classmethod
def find_by_mapreduce_id(cls, mapreduce_id):
logging.error(
"ShardState.find_by_mapreduce_id method may be inconsistent. " +
"ShardState.find_by_mapreduce_state should be used instead.")
return cls.all().filter(
"mapreduce_id =", mapreduce_id).fetch(99999)
@classmethod
def create_new(cls, mapreduce_id, shard_number):
"""Create new shard state.
Args:
mapreduce_id: unique mapreduce id as string.
shard_number: shard number for which to create shard state.
Returns:
new instance of ShardState ready to put into datastore.
"""
shard_id = cls.shard_id_from_number(mapreduce_id, shard_number)
state = cls(key_name=shard_id,
mapreduce_id=mapreduce_id)
return state
class MapreduceControl(db.Model):
"""Datastore entity used to control mapreduce job execution.
Only one command may be sent to jobs at a time.
Properties:
command: The command to send to the job.
"""
ABORT = "abort"
_COMMANDS = frozenset([ABORT])
_KEY_NAME = "command"
command = db.TextProperty(choices=_COMMANDS, required=True)
@classmethod
def kind(cls):
"""Returns entity kind."""
return "_GAE_MR_MapreduceControl"
@classmethod
def get_key_by_job_id(cls, mapreduce_id):
"""Retrieves the Key for a mapreduce ID.
Args:
mapreduce_id: The job to fetch.
Returns:
Datastore Key for the command for the given job ID.
"""
return db.Key.from_path(cls.kind(), "%s:%s" % (mapreduce_id, cls._KEY_NAME))
@classmethod
def abort(cls, mapreduce_id, **kwargs):
"""Causes a job to abort.
Args:
mapreduce_id: The job to abort. Not verified as a valid job.
"""
cls(key_name="%s:%s" % (mapreduce_id, cls._KEY_NAME),
command=cls.ABORT).put(**kwargs)
class QuerySpec(object):
"""Encapsulates everything about a query needed by DatastoreInputReader."""
DEFAULT_BATCH_SIZE = 50
def __init__(self,
entity_kind,
keys_only=None,
filters=None,
batch_size=None,
model_class_path=None,
app=None,
ns=None):
self.entity_kind = entity_kind
self.keys_only = keys_only or False
self.filters = filters or None
self.batch_size = batch_size or self.DEFAULT_BATCH_SIZE
self.model_class_path = model_class_path
self.app = app
self.ns = ns
def to_json(self):
return {"entity_kind": self.entity_kind,
"keys_only": self.keys_only,
"filters": self.filters,
"batch_size": self.batch_size,
"model_class_path": self.model_class_path,
"app": self.app,
"ns": self.ns}
@classmethod
def from_json(cls, json):
return cls(json["entity_kind"],
json["keys_only"],
json["filters"],
json["batch_size"],
json["model_class_path"],
json["app"],
json["ns"])
|
|
import inspect
import json
import os
import sys
from ryu.ofproto import ofproto_v1_3
from ryu.ofproto import ofproto_v1_3_parser
# TODO: move configuration to separate directory
CFG_PATH = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
class LoadRyuTables(object):
def __init__(self):
self._ofproto_parser = None
self.ryu_tables = []
self._class_name_to_name_ids = {"OFPTableFeaturePropInstructions":"instruction_ids",
"OFPTableFeaturePropNextTables":"table_ids",
"OFPTableFeaturePropActions":"action_ids",
"OFPTableFeaturePropOxm":"oxm_ids"}
self.ryu_table_translator = OpenflowToRyuTranslator()
def _read_json_document(self, filename):
    try:
        # Check that the file exists and holds well-formed json before
        # handing it to the translator.
        with open(filename) as json_file:
            json.load(json_file)
        self.ryu_table_translator.set_json_document(filename)
        self.ryu_table_translator.create_ryu_structure()
        python_object_result = self.ryu_table_translator.tables
    except (ValueError, IOError) as e:
        print e
        python_object_result = None
    return python_object_result
"""
method that will load the json file with the information of the tables
to convert a json file into a ryu object with all the tables
ofproto: it is the protocol used by the library. Also, this library was test with ofproto_v1_3_parser
"""
def load_tables(self, filename, ofproto_parser):
self.ryu_tables = []
self._ofproto_parser = ofproto_parser
self.tables = self._read_json_document(filename)
if self.tables is None:
return
self.ryu_tables = self._create_tables(self.tables)
"""
this method will create a table with all the stuff that ryu needs
like name, config, max entries, id, and properties. Note that it is only
processes tables, properties are processed by the function create_features
"""
def _create_tables(self, tables_information):
table_array = []
for table in tables_information:
#iteritems is used to iterate a dictionary
for key, value in table.iteritems():
#getattr will get a function of the object entered, this function
#is used to create the table with ryu classes
table_class = getattr(self._ofproto_parser, key)
properties = self._create_features(value["properties"])
value["properties"] = properties
value["name"] = str(value["name"])
#value is a dictionary, with ** it will expand it content to arguments
new_table = table_class(**value)
table_array.append(new_table)
return table_array
"""
same as _create_tables, but processes the properties of each table
"""
def _create_features(self, table_features_information):
features_array = []
for feature in table_features_information:
for key, value in feature.iteritems():
name_id = self._class_name_to_name_ids[key]
feature_class = getattr(self._ofproto_parser, key)
instruction_ids = self._create_instructions(value[name_id])
value[name_id] = instruction_ids
value["type_"] = value.pop("type")
new_feature = feature_class(**value)
features_array.append(new_feature)
return features_array
"""
processes the instructions or fields of each property
"""
def _create_instructions(self, instruction_ids_information):
instruction_array = []
for instruction in instruction_ids_information:
if (isinstance( instruction, dict )):
for key, value in instruction.iteritems():
instruction_class = getattr(self._ofproto_parser, key)
if (isinstance( value["type"], unicode )):
value["type"] = str(value["type"])
value["type_"] = value.pop("type")
new_instruction = instruction_class(**value)
instruction_array.append(new_instruction)
else:
instruction_array = instruction_ids_information
break
return instruction_array
"""
This script allows dynamically create a set of tables. Each table has a set of properties that allows take some actions
depended of the incoming package. Those properties are defined ine th file "openflow_structure_tables.json", which are based on
the openflow protocol version 1.3. Also, the fields allowed in each property are written in this file, each of those fields
are accepted by the switch 5400.
The output of this script is an json file with the tables well structure. This structure is converted from openflow structure
to ryu structure using the file "ofproto_to_ryu.json", so the json file generated will be to the SDN ryu framework. But, if is
necessary convert the structure to another sdn framework, you will only have to change the file ofproto_to_ryu.
"""
class OpenflowToRyuTranslator(object):
def __init__(self):
self.custom_json = CustomJson()
"""
file with the variables in openflow to map them into Ryu variables
"""
self.openflow_to_ryu = CFG_PATH + "/ofproto_to_ryu.json"
self.openflow_to_ryu = self.custom_json.read_json_document(self.openflow_to_ryu)
"""
variable used to store the tables in the Ryu structure
"""
self.tables = []
def set_json_document(self, filepath):
self.document_with_openflow_tables = filepath
self.document_with_openflow_tables = self.custom_json.read_json_document(self.document_with_openflow_tables)
"""
The following functions are used to create the final structure (the same structure that the Ryu library uses)
"""
def create_ryu_structure(self):
table_properties = []
self.tables = []
for openflow_table in self.document_with_openflow_tables:
table_properties = []
for property_item in openflow_table["properties"]:
fields_tag = self.openflow_to_ryu["tables"][property_item["name"]]["action_tag"]
actions_ids = property_item[fields_tag]
table_properties.append(self.create_table_feature(property_item["name"],
actions_ids,
property_item["type"]))
self.tables.append(self.create_table(table_id=openflow_table["table_id"], name=openflow_table["name"],
config=3, max_entries=openflow_table["max_entries"], metadata_match=0,
metadata_write=0, properties=table_properties))
def create_table(self, table_id, name, config, max_entries, metadata_match, metadata_write, properties):
return {self.openflow_to_ryu["table_tag"] : {"config": config, "max_entries" : max_entries,
"metadata_match": metadata_match,
"metadata_write": metadata_write,
"name": name, "properties": properties,
"table_id": table_id}}
def create_table_feature(self, name, actions, type_id):
new_table_feature = {}
new_array_instructions = []
table_feature_name = self.openflow_to_ryu["tables"][name]["name"]
instruction_id_name = self.openflow_to_ryu["tables"][name]["action_tag"]
action_id_name = self.openflow_to_ryu["content"][instruction_id_name]
if action_id_name == []:
new_array_instructions = actions
else:
for action in actions:
if "name" in action:
action.pop("name")
new_array_instructions.append({action_id_name: action})
new_table_feature = {table_feature_name : {instruction_id_name: new_array_instructions, "type": type_id}}
return new_table_feature
class CustomJson(object):
def __init__(self):
self.json = ""
def read_json_document(self, filename):
python_object_result = []
try:
with open(filename) as data_file:
python_object_result = json.load(data_file)
except (ValueError, IOError) as e:
print "Error found:"
print e
python_object_result = []
return python_object_result
def save_document(self, filepath, information):
    # use a context manager so the file handle is closed even on error,
    # and avoid shadowing the built-in name "file"
    with open(filepath, "w+") as output_file:
        output_file.write(information)
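# Hedged usage sketch (not part of the original script): the input and
# output file names below are illustrative, and CFG_PATH plus the json
# import are assumed to be defined earlier in this module.
if __name__ == "__main__":
    translator = OpenflowToRyuTranslator()
    translator.set_json_document(CFG_PATH + "/tables.json")
    translator.create_ryu_structure()
    # translator.tables now holds the Ryu-style structure described in the
    # docstring above; persist it with the same CustomJson helper.
    translator.custom_json.save_document(CFG_PATH + "/ryu_tables.json",
                                         json.dumps(translator.tables, indent=4))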
#!/usr/bin/env python
# Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Run tests in parallel."""
from __future__ import print_function
import argparse
import ast
import collections
import glob
import itertools
import json
import logging
import multiprocessing
import os
import os.path
import pipes
import platform
import random
import re
import socket
import subprocess
import sys
import tempfile
import traceback
import time
from six.moves import urllib
import uuid
import six
import python_utils.jobset as jobset
import python_utils.report_utils as report_utils
import python_utils.watch_dirs as watch_dirs
import python_utils.start_port_server as start_port_server
_ROOT = os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), '../..'))
os.chdir(_ROOT)
_FORCE_ENVIRON_FOR_WRAPPERS = {
'GRPC_VERBOSITY': 'DEBUG',
}
_POLLING_STRATEGIES = {
'linux': ['epoll', 'poll', 'poll-cv']
}
def platform_string():
return jobset.platform_string()
_DEFAULT_TIMEOUT_SECONDS = 5 * 60
def run_shell_command(cmd, env=None, cwd=None):
try:
subprocess.check_output(cmd, shell=True, env=env, cwd=cwd)
except subprocess.CalledProcessError as e:
logging.exception("Error while running command '%s'. Exit status %d. Output:\n%s",
e.cmd, e.returncode, e.output)
raise
# Config: just compile with CONFIG=config, and run the binary to test
class Config(object):
def __init__(self, config, environ=None, timeout_multiplier=1, tool_prefix=[], iomgr_platform='native'):
if environ is None:
environ = {}
self.build_config = config
self.environ = environ
self.environ['CONFIG'] = config
self.tool_prefix = tool_prefix
self.timeout_multiplier = timeout_multiplier
self.iomgr_platform = iomgr_platform
def job_spec(self, cmdline, timeout_seconds=_DEFAULT_TIMEOUT_SECONDS,
shortname=None, environ={}, cpu_cost=1.0, flaky=False):
"""Construct a jobset.JobSpec for a test under this config
Args:
cmdline: a list of strings specifying the command line the test
would like to run
"""
actual_environ = self.environ.copy()
for k, v in environ.items():
actual_environ[k] = v
return jobset.JobSpec(cmdline=self.tool_prefix + cmdline,
shortname=shortname,
environ=actual_environ,
cpu_cost=cpu_cost,
timeout_seconds=(self.timeout_multiplier * timeout_seconds if timeout_seconds else None),
flake_retries=5 if flaky or args.allow_flakes else 0,
timeout_retries=3 if args.allow_flakes else 0)
def get_c_tests(travis, test_lang):
platforms_str = 'ci_platforms' if travis else 'platforms'
with open('tools/run_tests/generated/tests.json') as f:
js = json.load(f)
return [tgt
for tgt in js
if tgt['language'] == test_lang and
platform_string() in tgt[platforms_str] and
not (travis and tgt['flaky'])]
def _check_compiler(compiler, supported_compilers):
if compiler not in supported_compilers:
raise Exception('Compiler %s not supported (on this platform).' % compiler)
def _check_arch(arch, supported_archs):
if arch not in supported_archs:
raise Exception('Architecture %s not supported.' % arch)
def _is_use_docker_child():
"""Returns True if running running as a --use_docker child."""
return True if os.getenv('RUN_TESTS_COMMAND') else False
_PythonConfigVars = collections.namedtuple(
'_ConfigVars', ['shell', 'builder', 'builder_prefix_arguments',
'venv_relative_python', 'toolchain', 'runner'])
def _python_config_generator(name, major, minor, bits, config_vars):
return PythonConfig(
name,
config_vars.shell + config_vars.builder + config_vars.builder_prefix_arguments + [
_python_pattern_function(major=major, minor=minor, bits=bits)] + [
name] + config_vars.venv_relative_python + config_vars.toolchain,
config_vars.shell + config_vars.runner + [
os.path.join(name, config_vars.venv_relative_python[0])])
def _pypy_config_generator(name, major, config_vars):
return PythonConfig(
name,
config_vars.shell + config_vars.builder + config_vars.builder_prefix_arguments + [
_pypy_pattern_function(major=major)] + [
name] + config_vars.venv_relative_python + config_vars.toolchain,
config_vars.shell + config_vars.runner + [
os.path.join(name, config_vars.venv_relative_python[0])])
def _python_pattern_function(major, minor, bits):
# Bit-ness is handled by the test machine's environment
if os.name == "nt":
if bits == "64":
return '/c/Python{major}{minor}/python.exe'.format(
major=major, minor=minor, bits=bits)
else:
return '/c/Python{major}{minor}_{bits}bits/python.exe'.format(
major=major, minor=minor, bits=bits)
else:
return 'python{major}.{minor}'.format(major=major, minor=minor)
def _pypy_pattern_function(major):
if major == '2':
return 'pypy'
elif major == '3':
return 'pypy3'
else:
raise ValueError("Unknown PyPy major version")
class CLanguage(object):
def __init__(self, make_target, test_lang):
self.make_target = make_target
self.platform = platform_string()
self.test_lang = test_lang
def configure(self, config, args):
self.config = config
self.args = args
if self.args.compiler == 'cmake':
_check_arch(self.args.arch, ['default'])
self._use_cmake = True
self._docker_distro = 'jessie'
self._make_options = []
elif self.platform == 'windows':
self._use_cmake = False
self._make_options = [_windows_toolset_option(self.args.compiler),
_windows_arch_option(self.args.arch)]
else:
self._use_cmake = False
self._docker_distro, self._make_options = self._compiler_options(self.args.use_docker,
self.args.compiler)
if args.iomgr_platform == "uv":
cflags = '-DGRPC_UV '
try:
cflags += subprocess.check_output(['pkg-config', '--cflags', 'libuv']).strip() + ' '
except (subprocess.CalledProcessError, OSError):
pass
try:
ldflags = subprocess.check_output(['pkg-config', '--libs', 'libuv']).strip() + ' '
except (subprocess.CalledProcessError, OSError):
ldflags = '-luv '
self._make_options += ['EXTRA_CPPFLAGS={}'.format(cflags),
'EXTRA_LDLIBS={}'.format(ldflags)]
def test_specs(self):
out = []
binaries = get_c_tests(self.args.travis, self.test_lang)
for target in binaries:
if self._use_cmake and target.get('boringssl', False):
# cmake doesn't build boringssl tests
continue
polling_strategies = (_POLLING_STRATEGIES.get(self.platform, ['all'])
if target.get('uses_polling', True)
else ['all'])
if self.args.iomgr_platform == 'uv':
polling_strategies = ['all']
for polling_strategy in polling_strategies:
env={'GRPC_DEFAULT_SSL_ROOTS_FILE_PATH':
_ROOT + '/src/core/tsi/test_creds/ca.pem',
'GRPC_POLL_STRATEGY': polling_strategy,
'GRPC_VERBOSITY': 'DEBUG'}
resolver = os.environ.get('GRPC_DNS_RESOLVER', None)
if resolver:
env['GRPC_DNS_RESOLVER'] = resolver
shortname_ext = '' if polling_strategy=='all' else ' GRPC_POLL_STRATEGY=%s' % polling_strategy
timeout_scaling = 1
if polling_strategy == 'poll-cv':
timeout_scaling *= 5
if polling_strategy in target.get('excluded_poll_engines', []):
continue
# Scale overall test timeout if running under various sanitizers.
config = self.args.config
if ('asan' in config
or config == 'msan'
or config == 'tsan'
or config == 'ubsan'
or config == 'helgrind'
or config == 'memcheck'):
timeout_scaling *= 20
if self.config.build_config in target['exclude_configs']:
continue
if self.args.iomgr_platform in target.get('exclude_iomgrs', []):
continue
if self.platform == 'windows':
if self._use_cmake:
binary = 'cmake/build/%s/%s.exe' % (_MSBUILD_CONFIG[self.config.build_config], target['name'])
else:
binary = 'vsprojects/%s%s/%s.exe' % (
'x64/' if self.args.arch == 'x64' else '',
_MSBUILD_CONFIG[self.config.build_config],
target['name'])
else:
if self._use_cmake:
binary = 'cmake/build/%s' % target['name']
else:
binary = 'bins/%s/%s' % (self.config.build_config, target['name'])
cpu_cost = target['cpu_cost']
if cpu_cost == 'capacity':
cpu_cost = multiprocessing.cpu_count()
if os.path.isfile(binary):
if 'gtest' in target and target['gtest']:
# here we parse the output of --gtest_list_tests to build up a
# complete list of the tests contained in a binary
# for each test, we then add a job to run, filtering for just that
# test
with open(os.devnull, 'w') as fnull:
tests = subprocess.check_output([binary, '--gtest_list_tests'],
stderr=fnull)
base = None
for line in tests.split('\n'):
i = line.find('#')
if i >= 0: line = line[:i]
if not line: continue
if line[0] != ' ':
base = line.strip()
else:
assert base is not None
assert line[1] == ' '
test = base + line.strip()
cmdline = [binary, '--gtest_filter=%s' % test] + target['args']
out.append(self.config.job_spec(cmdline,
shortname='%s %s' % (' '.join(cmdline), shortname_ext),
cpu_cost=cpu_cost,
timeout_seconds=_DEFAULT_TIMEOUT_SECONDS * timeout_scaling,
environ=env))
else:
cmdline = [binary] + target['args']
out.append(self.config.job_spec(cmdline,
shortname=' '.join(
pipes.quote(arg)
for arg in cmdline) +
shortname_ext,
cpu_cost=cpu_cost,
flaky=target.get('flaky', False),
timeout_seconds=target.get('timeout_seconds', _DEFAULT_TIMEOUT_SECONDS) * timeout_scaling,
environ=env))
elif self.args.regex == '.*' or self.platform == 'windows':
print('\nWARNING: binary not found, skipping', binary)
return sorted(out)
def make_targets(self):
if self.platform == 'windows':
# don't build tools on windows just yet
return ['buildtests_%s' % self.make_target]
return ['buildtests_%s' % self.make_target, 'tools_%s' % self.make_target]
def make_options(self):
return self._make_options
def pre_build_steps(self):
if self._use_cmake:
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\pre_build_cmake.bat']]
else:
return [['tools/run_tests/helper_scripts/pre_build_cmake.sh']]
else:
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\pre_build_c.bat']]
else:
return []
def build_steps(self):
return []
def post_tests_steps(self):
if self.platform == 'windows':
return []
else:
return [['tools/run_tests/helper_scripts/post_tests_c.sh']]
def makefile_name(self):
if self._use_cmake:
return 'cmake/build/Makefile'
else:
return 'Makefile'
def _clang_make_options(self, version_suffix=''):
return ['CC=clang%s' % version_suffix,
'CXX=clang++%s' % version_suffix,
'LD=clang%s' % version_suffix,
'LDXX=clang++%s' % version_suffix]
def _gcc_make_options(self, version_suffix):
return ['CC=gcc%s' % version_suffix,
'CXX=g++%s' % version_suffix,
'LD=gcc%s' % version_suffix,
'LDXX=g++%s' % version_suffix]
def _compiler_options(self, use_docker, compiler):
"""Returns docker distro and make options to use for given compiler."""
if not use_docker and not _is_use_docker_child():
_check_compiler(compiler, ['default'])
if compiler == 'gcc4.9' or compiler == 'default':
return ('jessie', [])
elif compiler == 'gcc4.4':
return ('wheezy', self._gcc_make_options(version_suffix='-4.4'))
elif compiler == 'gcc4.6':
return ('wheezy', self._gcc_make_options(version_suffix='-4.6'))
elif compiler == 'gcc4.8':
return ('jessie', self._gcc_make_options(version_suffix='-4.8'))
elif compiler == 'gcc5.3':
return ('ubuntu1604', [])
elif compiler == 'gcc_musl':
return ('alpine', [])
elif compiler == 'clang3.4':
# on ubuntu1404, clang-3.4 alias doesn't exist, just use 'clang'
return ('ubuntu1404', self._clang_make_options())
elif compiler == 'clang3.5':
return ('jessie', self._clang_make_options(version_suffix='-3.5'))
elif compiler == 'clang3.6':
return ('ubuntu1604', self._clang_make_options(version_suffix='-3.6'))
elif compiler == 'clang3.7':
return ('ubuntu1604', self._clang_make_options(version_suffix='-3.7'))
else:
raise Exception('Compiler %s not supported.' % compiler)
def dockerfile_dir(self):
return 'tools/dockerfile/test/cxx_%s_%s' % (self._docker_distro,
_docker_arch_suffix(self.args.arch))
def __str__(self):
return self.make_target
class NodeLanguage(object):
def __init__(self):
self.platform = platform_string()
def configure(self, config, args):
self.config = config
self.args = args
# Note: electron ABI only depends on major and minor version, so that's all
# we should specify in the compiler argument
_check_compiler(self.args.compiler, ['default', 'node0.12',
'node4', 'node5', 'node6',
'node7', 'electron1.3', 'electron1.6'])
if self.args.compiler == 'default':
self.runtime = 'node'
self.node_version = '7'
else:
if self.args.compiler.startswith('electron'):
self.runtime = 'electron'
self.node_version = self.args.compiler[8:]
else:
self.runtime = 'node'
# Take off the word "node"
self.node_version = self.args.compiler[4:]
def test_specs(self):
if self.platform == 'windows':
return [self.config.job_spec(['tools\\run_tests\\helper_scripts\\run_node.bat'])]
else:
run_script = 'run_node'
if self.runtime == 'electron':
run_script += '_electron'
return [self.config.job_spec(['tools/run_tests/helper_scripts/{}.sh'.format(run_script),
self.node_version],
None,
environ=_FORCE_ENVIRON_FOR_WRAPPERS)]
def pre_build_steps(self):
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\pre_build_node.bat']]
else:
build_script = 'pre_build_node'
if self.runtime == 'electron':
build_script += '_electron'
return [['tools/run_tests/helper_scripts/{}.sh'.format(build_script),
self.node_version]]
def make_targets(self):
return []
def make_options(self):
return []
def build_steps(self):
if self.platform == 'windows':
if self.config == 'dbg':
config_flag = '--debug'
else:
config_flag = '--release'
return [['tools\\run_tests\\helper_scripts\\build_node.bat',
config_flag]]
else:
build_script = 'build_node'
if self.runtime == 'electron':
build_script += '_electron'
# building for electron requires a patch version
self.node_version += '.0'
return [['tools/run_tests/helper_scripts/{}.sh'.format(build_script),
self.node_version]]
def post_tests_steps(self):
return []
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/node_jessie_%s' % _docker_arch_suffix(self.args.arch)
def __str__(self):
return 'node'
class PhpLanguage(object):
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default'])
def test_specs(self):
return [self.config.job_spec(['src/php/bin/run_tests.sh'],
environ=_FORCE_ENVIRON_FOR_WRAPPERS)]
def pre_build_steps(self):
return []
def make_targets(self):
return ['static_c', 'shared_c']
def make_options(self):
return []
def build_steps(self):
return [['tools/run_tests/helper_scripts/build_php.sh']]
def post_tests_steps(self):
return [['tools/run_tests/helper_scripts/post_tests_php.sh']]
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/php_jessie_%s' % _docker_arch_suffix(self.args.arch)
def __str__(self):
return 'php'
class Php7Language(object):
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default'])
def test_specs(self):
return [self.config.job_spec(['src/php/bin/run_tests.sh'],
environ=_FORCE_ENVIRON_FOR_WRAPPERS)]
def pre_build_steps(self):
return []
def make_targets(self):
return ['static_c', 'shared_c']
def make_options(self):
return []
def build_steps(self):
return [['tools/run_tests/helper_scripts/build_php.sh']]
def post_tests_steps(self):
return [['tools/run_tests/helper_scripts/post_tests_php.sh']]
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/php7_jessie_%s' % _docker_arch_suffix(self.args.arch)
def __str__(self):
return 'php7'
class PythonConfig(collections.namedtuple('PythonConfig', [
'name', 'build', 'run'])):
"""Tuple of commands (named s.t. 'what it says on the tin' applies)"""
class PythonLanguage(object):
def configure(self, config, args):
self.config = config
self.args = args
self.pythons = self._get_pythons(self.args)
def test_specs(self):
# load list of known test suites
with open('src/python/grpcio_tests/tests/tests.json') as tests_json_file:
tests_json = json.load(tests_json_file)
environment = dict(_FORCE_ENVIRON_FOR_WRAPPERS)
return [self.config.job_spec(
config.run,
timeout_seconds=5*60,
environ=dict(list(environment.items()) +
[('GRPC_PYTHON_TESTRUNNER_FILTER', str(suite_name))]),
shortname='%s.test.%s' % (config.name, suite_name),)
for suite_name in tests_json
for config in self.pythons]
def pre_build_steps(self):
return []
def make_targets(self):
return []
def make_options(self):
return []
def build_steps(self):
return [config.build for config in self.pythons]
def post_tests_steps(self):
if self.config != 'gcov':
return []
else:
return [['tools/run_tests/helper_scripts/post_tests_python.sh']]
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/python_%s_%s' % (self.python_manager_name(), _docker_arch_suffix(self.args.arch))
def python_manager_name(self):
if self.args.compiler in ['python3.5', 'python3.6']:
return 'pyenv'
elif self.args.compiler == 'python_alpine':
return 'alpine'
else:
return 'jessie'
def _get_pythons(self, args):
if args.arch == 'x86':
bits = '32'
else:
bits = '64'
if os.name == 'nt':
shell = ['bash']
builder = [os.path.abspath('tools/run_tests/helper_scripts/build_python_msys2.sh')]
builder_prefix_arguments = ['MINGW{}'.format(bits)]
venv_relative_python = ['Scripts/python.exe']
toolchain = ['mingw32']
else:
shell = []
builder = [os.path.abspath('tools/run_tests/helper_scripts/build_python.sh')]
builder_prefix_arguments = []
venv_relative_python = ['bin/python']
toolchain = ['unix']
runner = [os.path.abspath('tools/run_tests/helper_scripts/run_python.sh')]
config_vars = _PythonConfigVars(shell, builder, builder_prefix_arguments,
venv_relative_python, toolchain, runner)
python27_config = _python_config_generator(name='py27', major='2',
minor='7', bits=bits,
config_vars=config_vars)
python34_config = _python_config_generator(name='py34', major='3',
minor='4', bits=bits,
config_vars=config_vars)
python35_config = _python_config_generator(name='py35', major='3',
minor='5', bits=bits,
config_vars=config_vars)
python36_config = _python_config_generator(name='py36', major='3',
minor='6', bits=bits,
config_vars=config_vars)
pypy27_config = _pypy_config_generator(name='pypy', major='2',
config_vars=config_vars)
pypy32_config = _pypy_config_generator(name='pypy3', major='3',
config_vars=config_vars)
if args.compiler == 'default':
if os.name == 'nt':
return (python27_config,)
else:
return (python27_config, python34_config,)
elif args.compiler == 'python2.7':
return (python27_config,)
elif args.compiler == 'python3.4':
return (python34_config,)
elif args.compiler == 'python3.5':
return (python35_config,)
elif args.compiler == 'python3.6':
return (python36_config,)
elif args.compiler == 'pypy':
return (pypy27_config,)
elif args.compiler == 'pypy3':
return (pypy32_config,)
elif args.compiler == 'python_alpine':
return (python27_config,)
else:
raise Exception('Compiler %s not supported.' % args.compiler)
def __str__(self):
return 'python'
class RubyLanguage(object):
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default'])
def test_specs(self):
tests = [self.config.job_spec(['tools/run_tests/helper_scripts/run_ruby.sh'],
timeout_seconds=10*60,
environ=_FORCE_ENVIRON_FOR_WRAPPERS)]
tests.append(self.config.job_spec(['tools/run_tests/helper_scripts/run_ruby_end2end_tests.sh'],
timeout_seconds=10*60,
environ=_FORCE_ENVIRON_FOR_WRAPPERS))
return tests
def pre_build_steps(self):
return [['tools/run_tests/helper_scripts/pre_build_ruby.sh']]
def make_targets(self):
return []
def make_options(self):
return []
def build_steps(self):
return [['tools/run_tests/helper_scripts/build_ruby.sh']]
def post_tests_steps(self):
return [['tools/run_tests/helper_scripts/post_tests_ruby.sh']]
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/ruby_jessie_%s' % _docker_arch_suffix(self.args.arch)
def __str__(self):
return 'ruby'
class CSharpLanguage(object):
def __init__(self):
self.platform = platform_string()
def configure(self, config, args):
self.config = config
self.args = args
if self.platform == 'windows':
_check_compiler(self.args.compiler, ['coreclr', 'default'])
_check_arch(self.args.arch, ['default'])
self._cmake_arch_option = 'x64'
self._make_options = []
else:
_check_compiler(self.args.compiler, ['default', 'coreclr'])
self._docker_distro = 'jessie'
if self.platform == 'mac':
# TODO(jtattermusch): EMBED_ZLIB=true currently breaks the mac build
self._make_options = ['EMBED_OPENSSL=true']
if self.args.compiler != 'coreclr':
# On Mac, official distribution of mono is 32bit.
self._make_options += ['ARCH_FLAGS=-m32', 'LDFLAGS=-m32']
else:
self._make_options = ['EMBED_OPENSSL=true', 'EMBED_ZLIB=true']
def test_specs(self):
with open('src/csharp/tests.json') as f:
tests_by_assembly = json.load(f)
msbuild_config = _MSBUILD_CONFIG[self.config.build_config]
nunit_args = ['--labels=All', '--noresult', '--workers=1']
assembly_subdir = 'bin/%s' % msbuild_config
assembly_extension = '.exe'
if self.args.compiler == 'coreclr':
assembly_subdir += '/netcoreapp1.0'
runtime_cmd = ['dotnet', 'exec']
assembly_extension = '.dll'
else:
assembly_subdir += '/net45'
if self.platform == 'windows':
runtime_cmd = []
else:
runtime_cmd = ['mono']
specs = []
for assembly in six.iterkeys(tests_by_assembly):
assembly_file = 'src/csharp/%s/%s/%s%s' % (assembly,
assembly_subdir,
assembly,
assembly_extension)
if self.config.build_config != 'gcov' or self.platform != 'windows':
# normally, run each test as a separate process
for test in tests_by_assembly[assembly]:
cmdline = runtime_cmd + [assembly_file, '--test=%s' % test] + nunit_args
specs.append(self.config.job_spec(cmdline,
shortname='csharp.%s' % test,
environ=_FORCE_ENVIRON_FOR_WRAPPERS))
else:
# For C# test coverage, run all tests from the same assembly at once
# using OpenCover.Console (only works on Windows).
cmdline = ['src\\csharp\\packages\\OpenCover.4.6.519\\tools\\OpenCover.Console.exe',
'-target:%s' % assembly_file,
'-targetdir:src\\csharp',
'-targetargs:%s' % ' '.join(nunit_args),
'-filter:+[Grpc.Core]*',
'-register:user',
'-output:src\\csharp\\coverage_csharp_%s.xml' % assembly]
# set really high cpu_cost to make sure instances of OpenCover.Console run exclusively
# to prevent problems with registering the profiler.
run_exclusive = 1000000
specs.append(self.config.job_spec(cmdline,
shortname='csharp.coverage.%s' % assembly,
cpu_cost=run_exclusive,
environ=_FORCE_ENVIRON_FOR_WRAPPERS))
return specs
def pre_build_steps(self):
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\pre_build_csharp.bat', self._cmake_arch_option]]
else:
return [['tools/run_tests/helper_scripts/pre_build_csharp.sh']]
def make_targets(self):
return ['grpc_csharp_ext']
def make_options(self):
return self._make_options
def build_steps(self):
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\build_csharp.bat']]
else:
return [['tools/run_tests/helper_scripts/build_csharp.sh']]
def post_tests_steps(self):
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\post_tests_csharp.bat']]
else:
return [['tools/run_tests/helper_scripts/post_tests_csharp.sh']]
def makefile_name(self):
if self.platform == 'windows':
return 'cmake/build/%s/Makefile' % self._cmake_arch_option
else:
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/csharp_%s_%s' % (self._docker_distro,
_docker_arch_suffix(self.args.arch))
def __str__(self):
return 'csharp'
class ObjCLanguage(object):
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default'])
def test_specs(self):
return [
self.config.job_spec(['src/objective-c/tests/run_tests.sh'],
timeout_seconds=60*60,
shortname='objc-tests',
environ=_FORCE_ENVIRON_FOR_WRAPPERS),
self.config.job_spec(['src/objective-c/tests/build_example_test.sh'],
timeout_seconds=30*60,
shortname='objc-examples-build',
environ=_FORCE_ENVIRON_FOR_WRAPPERS),
]
def pre_build_steps(self):
return []
def make_targets(self):
return ['interop_server']
def make_options(self):
return []
def build_steps(self):
return [['src/objective-c/tests/build_tests.sh']]
def post_tests_steps(self):
return []
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return None
def __str__(self):
return 'objc'
class Sanity(object):
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default'])
def test_specs(self):
import yaml
with open('tools/run_tests/sanity/sanity_tests.yaml', 'r') as f:
environ={'TEST': 'true'}
if _is_use_docker_child():
environ['CLANG_FORMAT_SKIP_DOCKER'] = 'true'
return [self.config.job_spec(cmd['script'].split(),
timeout_seconds=30*60,
environ=environ,
cpu_cost=cmd.get('cpu_cost', 1))
for cmd in yaml.load(f)]
def pre_build_steps(self):
return []
def make_targets(self):
return ['run_dep_checks']
def make_options(self):
return []
def build_steps(self):
return []
def post_tests_steps(self):
return []
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/sanity'
def __str__(self):
return 'sanity'
class NodeExpressLanguage(object):
"""Dummy Node express test target to enable running express performance
benchmarks"""
def __init__(self):
self.platform = platform_string()
def configure(self, config, args):
self.config = config
self.args = args
_check_compiler(self.args.compiler, ['default', 'node0.12',
'node4', 'node5', 'node6'])
if self.args.compiler == 'default':
self.node_version = '4'
else:
# Take off the word "node"
self.node_version = self.args.compiler[4:]
def test_specs(self):
return []
def pre_build_steps(self):
if self.platform == 'windows':
return [['tools\\run_tests\\helper_scripts\\pre_build_node.bat']]
else:
return [['tools/run_tests/helper_scripts/pre_build_node.sh', self.node_version]]
def make_targets(self):
return []
def make_options(self):
return []
def build_steps(self):
return []
def post_tests_steps(self):
return []
def makefile_name(self):
return 'Makefile'
def dockerfile_dir(self):
return 'tools/dockerfile/test/node_jessie_%s' % _docker_arch_suffix(self.args.arch)
def __str__(self):
return 'node_express'
# different configurations we can run under
with open('tools/run_tests/generated/configs.json') as f:
_CONFIGS = dict((cfg['config'], Config(**cfg)) for cfg in ast.literal_eval(f.read()))
_LANGUAGES = {
'c++': CLanguage('cxx', 'c++'),
'c': CLanguage('c', 'c'),
'node': NodeLanguage(),
'node_express': NodeExpressLanguage(),
'php': PhpLanguage(),
'php7': Php7Language(),
'python': PythonLanguage(),
'ruby': RubyLanguage(),
'csharp': CSharpLanguage(),
'objc' : ObjCLanguage(),
'sanity': Sanity()
}
_MSBUILD_CONFIG = {
'dbg': 'Debug',
'opt': 'Release',
'gcov': 'Debug',
}
def _windows_arch_option(arch):
"""Returns msbuild cmdline option for selected architecture."""
if arch == 'default' or arch == 'x86':
return '/p:Platform=Win32'
elif arch == 'x64':
return '/p:Platform=x64'
else:
print('Architecture %s not supported.' % arch)
sys.exit(1)
def _check_arch_option(arch):
"""Checks that architecture option is valid."""
if platform_string() == 'windows':
_windows_arch_option(arch)
elif platform_string() == 'linux':
# On linux, we need to be running under docker with the right architecture.
runtime_arch = platform.architecture()[0]
if arch == 'default':
return
elif runtime_arch == '64bit' and arch == 'x64':
return
elif runtime_arch == '32bit' and arch == 'x86':
return
else:
print('Architecture %s does not match current runtime architecture.' % arch)
sys.exit(1)
else:
if args.arch != 'default':
print('Architecture %s not supported on current platform.' % args.arch)
sys.exit(1)
def _windows_build_bat(compiler):
"""Returns name of build.bat for selected compiler."""
# For CoreCLR, fall back to the default compiler for C core
if compiler == 'default' or compiler == 'vs2013':
return 'vsprojects\\build_vs2013.bat'
elif compiler == 'vs2015':
return 'vsprojects\\build_vs2015.bat'
else:
print('Compiler %s not supported.' % compiler)
sys.exit(1)
def _windows_toolset_option(compiler):
"""Returns msbuild PlatformToolset for selected compiler."""
# For CoreCLR, fall back to the default compiler for C core
if compiler == 'default' or compiler == 'vs2013' or compiler == 'coreclr':
return '/p:PlatformToolset=v120'
elif compiler == 'vs2015':
return '/p:PlatformToolset=v140'
else:
print('Compiler %s not supported.' % compiler)
sys.exit(1)
def _docker_arch_suffix(arch):
"""Returns suffix to dockerfile dir to use."""
if arch == 'default' or arch == 'x64':
return 'x64'
elif arch == 'x86':
return 'x86'
else:
print('Architecture %s not supported with current settings.' % arch)
sys.exit(1)
def runs_per_test_type(arg_str):
"""Auxilary function to parse the "runs_per_test" flag.
Returns:
A positive integer or 0, the latter indicating an infinite number of
runs.
Raises:
argparse.ArgumentTypeError: Upon invalid input.
"""
if arg_str == 'inf':
return 0
try:
n = int(arg_str)
if n <= 0: raise ValueError
return n
except ValueError:
msg = '\'{}\' is not a positive integer or \'inf\''.format(arg_str)
raise argparse.ArgumentTypeError(msg)
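# For example (behavior follows directly from the code above):
#   runs_per_test_type('3')   -> 3
#   runs_per_test_type('inf') -> 0 (infinite runs)
#   runs_per_test_type('0')   raises argparse.ArgumentTypeError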
def percent_type(arg_str):
pct = float(arg_str)
if pct > 100 or pct < 0:
raise argparse.ArgumentTypeError(
"'%f' is not a valid percentage in the [0, 100] range" % pct)
return pct
# This is math.isclose in python >= 3.5
def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
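# e.g. isclose(1.0, 1.0 + 1e-10) -> True, while isclose(1.0, 1.1) -> False;
# with an absolute tolerance, isclose(0.0, 0.1, abs_tol=0.2) -> True.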
# parse command line
argp = argparse.ArgumentParser(description='Run grpc tests.')
argp.add_argument('-c', '--config',
choices=sorted(_CONFIGS.keys()),
default='opt')
argp.add_argument('-n', '--runs_per_test', default=1, type=runs_per_test_type,
help='A positive integer or "inf". If "inf", all tests will run in an '
'infinite loop. Especially useful in combination with "-f"')
argp.add_argument('-r', '--regex', default='.*', type=str)
argp.add_argument('--regex_exclude', default='', type=str)
argp.add_argument('-j', '--jobs', default=multiprocessing.cpu_count(), type=int)
argp.add_argument('-s', '--slowdown', default=1.0, type=float)
argp.add_argument('-p', '--sample_percent', default=100.0, type=percent_type,
help='Run a random sample with that percentage of tests')
argp.add_argument('-f', '--forever',
default=False,
action='store_const',
const=True)
argp.add_argument('-t', '--travis',
default=False,
action='store_const',
const=True)
argp.add_argument('--newline_on_success',
default=False,
action='store_const',
const=True)
argp.add_argument('-l', '--language',
choices=['all'] + sorted(_LANGUAGES.keys()),
nargs='+',
default=['all'])
argp.add_argument('-S', '--stop_on_failure',
default=False,
action='store_const',
const=True)
argp.add_argument('--use_docker',
default=False,
action='store_const',
const=True,
help='Run all the tests under docker. That provides ' +
'additional isolation and prevents the need to install ' +
'language specific prerequisites. Only available on Linux.')
argp.add_argument('--allow_flakes',
default=False,
action='store_const',
const=True,
help='Allow flaky tests to show as passing (re-runs failed tests up to five times)')
argp.add_argument('--arch',
choices=['default', 'x86', 'x64'],
default='default',
help='Selects architecture to target. For some platforms "default" is the only supported choice.')
argp.add_argument('--compiler',
choices=['default',
'gcc4.4', 'gcc4.6', 'gcc4.8', 'gcc4.9', 'gcc5.3', 'gcc_musl',
'clang3.4', 'clang3.5', 'clang3.6', 'clang3.7',
'vs2013', 'vs2015',
'python2.7', 'python3.4', 'python3.5', 'python3.6', 'pypy', 'pypy3', 'python_alpine',
'node0.12', 'node4', 'node5', 'node6', 'node7',
'electron1.3', 'electron1.6',
'coreclr',
'cmake'],
default='default',
help='Selects compiler to use. Allowed values depend on the platform and language.')
argp.add_argument('--iomgr_platform',
choices=['native', 'uv'],
default='native',
help='Selects iomgr platform to build on')
argp.add_argument('--build_only',
default=False,
action='store_const',
const=True,
help="Perform all the build steps but don't run any tests.")
argp.add_argument('--measure_cpu_costs', default=False, action='store_const', const=True,
help='Measure the cpu costs of tests')
argp.add_argument('--update_submodules', default=[], nargs='*',
help='Update some submodules before building. If any are updated, also run generate_projects. ' +
'Submodules are specified as SUBMODULE_NAME:BRANCH; if BRANCH is omitted, master is assumed.')
argp.add_argument('-a', '--antagonists', default=0, type=int)
argp.add_argument('-x', '--xml_report', default=None, type=str,
help='Generates a JUnit-compatible XML report')
argp.add_argument('--report_suite_name', default='tests', type=str,
help='Test suite name to use in generated JUnit XML report')
argp.add_argument('--quiet_success',
default=False,
action='store_const',
const=True,
help="Don't print anything when a test passes. Passing tests also will not be reported in XML report. " +
"Useful when running many iterations of each test (argument -n).")
argp.add_argument('--force_default_poller', default=False, action='store_const', const=True,
help="Don't try to iterate over many polling strategies when they exist")
argp.add_argument('--max_time', default=-1, type=int, help='Maximum test runtime in seconds')
args = argp.parse_args()
if args.force_default_poller:
_POLLING_STRATEGIES = {}
jobset.measure_cpu_costs = args.measure_cpu_costs
# update submodules if necessary
need_to_regenerate_projects = False
for spec in args.update_submodules:
spec = spec.split(':', 1)
if len(spec) == 1:
submodule = spec[0]
branch = 'master'
elif len(spec) == 2:
submodule = spec[0]
branch = spec[1]
cwd = 'third_party/%s' % submodule
def git(cmd, cwd=cwd):
print('in %s: git %s' % (cwd, cmd))
run_shell_command('git %s' % cmd, cwd=cwd)
git('fetch')
git('checkout %s' % branch)
git('pull origin %s' % branch)
if os.path.exists('src/%s/gen_build_yaml.py' % submodule):
need_to_regenerate_projects = True
if need_to_regenerate_projects:
if jobset.platform_string() == 'linux':
run_shell_command('tools/buildgen/generate_projects.sh')
else:
print('WARNING: may need to regenerate projects, but since we are not on')
print(' Linux this step is being skipped. Compilation MAY fail.')
# grab config
run_config = _CONFIGS[args.config]
build_config = run_config.build_config
if args.travis:
_FORCE_ENVIRON_FOR_WRAPPERS = {'GRPC_TRACE': 'api'}
if 'all' in args.language:
lang_list = _LANGUAGES.keys()
else:
lang_list = args.language
# We don't support code coverage for some languages
if 'gcov' in args.config:
for bad in ['objc', 'sanity']:
if bad in lang_list:
lang_list.remove(bad)
languages = set(_LANGUAGES[l] for l in lang_list)
for l in languages:
l.configure(run_config, args)
language_make_options=[]
if any(language.make_options() for language in languages):
if 'gcov' not in args.config and len(languages) != 1:
print('languages with custom make options cannot be built simultaneously with other languages')
sys.exit(1)
else:
# Combining make options is not clean and just happens to work. It allows C/C++ and C# to build
# together, and is only used under gcov. All other configs should build languages individually.
language_make_options = list(set([make_option for lang in languages for make_option in lang.make_options()]))
if args.use_docker:
if not args.travis:
print('Seen --use_docker flag, will run tests under docker.')
print('')
print('IMPORTANT: The changes you are testing need to be locally committed')
print('because only the committed changes in the current branch will be')
print('copied to the docker environment.')
time.sleep(5)
dockerfile_dirs = set([l.dockerfile_dir() for l in languages])
if len(dockerfile_dirs) > 1:
if 'gcov' in args.config:
dockerfile_dir = 'tools/dockerfile/test/multilang_jessie_x64'
print ('Using multilang_jessie_x64 docker image for code coverage for '
'all languages.')
else:
print ('Languages to be tested require running under different docker '
'images.')
sys.exit(1)
else:
dockerfile_dir = next(iter(dockerfile_dirs))
child_argv = [arg for arg in sys.argv if arg != '--use_docker']
run_tests_cmd = 'python tools/run_tests/run_tests.py %s' % ' '.join(child_argv[1:])
env = os.environ.copy()
env['RUN_TESTS_COMMAND'] = run_tests_cmd
env['DOCKERFILE_DIR'] = dockerfile_dir
env['DOCKER_RUN_SCRIPT'] = 'tools/run_tests/dockerize/docker_run_tests.sh'
if args.xml_report:
env['XML_REPORT'] = args.xml_report
if not args.travis:
env['TTY_FLAG'] = '-t' # enables Ctrl-C when not on Jenkins.
subprocess.check_call('tools/run_tests/dockerize/build_docker_and_run_tests.sh',
shell=True,
env=env)
sys.exit(0)
_check_arch_option(args.arch)
def make_jobspec(cfg, targets, makefile='Makefile'):
if platform_string() == 'windows':
if makefile.startswith('cmake/build/'):
return [jobset.JobSpec(['cmake', '--build', '.',
'--target', '%s' % target,
'--config', _MSBUILD_CONFIG[cfg]],
cwd=os.path.dirname(makefile),
timeout_seconds=None) for target in targets]
extra_args = []
# better do parallel compilation
# empirically /m:2 gives the best performance/price and should prevent
# overloading the windows workers.
extra_args.extend(['/m:2'])
# disable PDB generation: it's broken, and we don't need it during CI
extra_args.extend(['/p:Jenkins=true'])
return [
jobset.JobSpec([_windows_build_bat(args.compiler),
'vsprojects\\%s.sln' % target,
'/p:Configuration=%s' % _MSBUILD_CONFIG[cfg]] +
extra_args +
language_make_options,
shell=True, timeout_seconds=None)
for target in targets]
else:
if targets and makefile.startswith('cmake/build/'):
# With cmake, we've passed all the build configuration in the pre-build step already
return [jobset.JobSpec([os.getenv('MAKE', 'make'),
'-j', '%d' % args.jobs] +
targets,
cwd='cmake/build',
timeout_seconds=None)]
if targets:
return [jobset.JobSpec([os.getenv('MAKE', 'make'),
'-f', makefile,
'-j', '%d' % args.jobs,
'EXTRA_DEFINES=GRPC_TEST_SLOWDOWN_MACHINE_FACTOR=%f' % args.slowdown,
'CONFIG=%s' % cfg,
'Q='] +
language_make_options +
([] if not args.travis else ['JENKINS_BUILD=1']) +
targets,
timeout_seconds=None)]
else:
return []
make_targets = {}
for l in languages:
makefile = l.makefile_name()
make_targets[makefile] = make_targets.get(makefile, set()).union(
set(l.make_targets()))
def build_step_environ(cfg):
environ = {'CONFIG': cfg}
msbuild_cfg = _MSBUILD_CONFIG.get(cfg)
if msbuild_cfg:
environ['MSBUILD_CONFIG'] = msbuild_cfg
return environ
build_steps = list(set(
jobset.JobSpec(cmdline, environ=build_step_environ(build_config), flake_retries=5)
for l in languages
for cmdline in l.pre_build_steps()))
if make_targets:
make_commands = itertools.chain.from_iterable(make_jobspec(build_config, list(targets), makefile) for (makefile, targets) in make_targets.items())
build_steps.extend(set(make_commands))
build_steps.extend(set(
jobset.JobSpec(cmdline, environ=build_step_environ(build_config), timeout_seconds=None)
for l in languages
for cmdline in l.build_steps()))
post_tests_steps = list(set(
jobset.JobSpec(cmdline, environ=build_step_environ(build_config))
for l in languages
for cmdline in l.post_tests_steps()))
runs_per_test = args.runs_per_test
forever = args.forever
def _shut_down_legacy_server(legacy_server_port):
try:
version = int(urllib.request.urlopen(
'http://localhost:%d/version_number' % legacy_server_port,
timeout=10).read())
except:
pass
else:
urllib.request.urlopen(
'http://localhost:%d/quitquitquit' % legacy_server_port).read()
def _calculate_num_runs_failures(list_of_results):
"""Caculate number of runs and failures for a particular test.
Args:
list_of_results: (List) of JobResult object.
Returns:
A tuple of total number of runs and failures.
"""
num_runs = len(list_of_results) # By default, there is 1 run per JobResult.
num_failures = 0
for jobresult in list_of_results:
if jobresult.retries > 0:
num_runs += jobresult.retries
if jobresult.num_failures > 0:
num_failures += jobresult.num_failures
return num_runs, num_failures
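# For example, two JobResults where one of them retried twice and recorded
# one failure yield (2 + 2, 1) == (4, 1): the base runs plus the retries,
# and the total failure count.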
# _build_and_run results
class BuildAndRunError(object):
BUILD = object()
TEST = object()
POST_TEST = object()
# returns a list of things that failed (or an empty list on success)
def _build_and_run(
check_cancelled, newline_on_success, xml_report=None, build_only=False):
"""Do one pass of building & running tests."""
# build latest sequentially
num_failures, resultset = jobset.run(
build_steps, maxjobs=1, stop_on_failure=True,
newline_on_success=newline_on_success, travis=args.travis)
if num_failures:
return [BuildAndRunError.BUILD]
if build_only:
if xml_report:
report_utils.render_junit_xml_report(resultset, xml_report,
suite_name=args.report_suite_name)
return []
# start antagonists
antagonists = [subprocess.Popen(['tools/run_tests/python_utils/antagonist.py'])
for _ in range(0, args.antagonists)]
start_port_server.start_port_server()
resultset = None
num_test_failures = 0
try:
infinite_runs = runs_per_test == 0
one_run = set(
spec
for language in languages
for spec in language.test_specs()
if (re.search(args.regex, spec.shortname) and
(args.regex_exclude == '' or
not re.search(args.regex_exclude, spec.shortname))))
# When running on travis, we want our test runs to be as similar as possible
# for reproducibility purposes.
if args.travis and args.max_time <= 0:
massaged_one_run = sorted(one_run, key=lambda x: x.shortname)
else:
# whereas otherwise, we want to shuffle things up to give all tests a
# chance to run.
massaged_one_run = list(one_run) # random.sample needs an indexable seq.
num_jobs = len(massaged_one_run)
# for a random sample, get as many as indicated by the 'sample_percent'
# argument. By default this arg is 100, resulting in a shuffle of all
# jobs.
sample_size = int(num_jobs * args.sample_percent/100.0)
massaged_one_run = random.sample(massaged_one_run, sample_size)
if not isclose(args.sample_percent, 100.0):
assert args.runs_per_test == 1, "Can't do sampling (-p) over multiple runs (-n)."
print("Running %d tests out of %d (~%d%%)" %
(sample_size, num_jobs, args.sample_percent))
if infinite_runs:
assert len(massaged_one_run) > 0, 'Must have at least one test for a -n inf run'
runs_sequence = (itertools.repeat(massaged_one_run) if infinite_runs
else itertools.repeat(massaged_one_run, runs_per_test))
all_runs = itertools.chain.from_iterable(runs_sequence)
if args.quiet_success:
jobset.message('START', 'Running tests quietly, only failing tests will be reported', do_newline=True)
num_test_failures, resultset = jobset.run(
all_runs, check_cancelled, newline_on_success=newline_on_success,
travis=args.travis, maxjobs=args.jobs,
stop_on_failure=args.stop_on_failure,
quiet_success=args.quiet_success, max_time=args.max_time)
if resultset:
for k, v in sorted(resultset.items()):
num_runs, num_failures = _calculate_num_runs_failures(v)
if num_failures > 0:
if num_failures == num_runs: # what about infinite_runs???
jobset.message('FAILED', k, do_newline=True)
else:
jobset.message(
'FLAKE', '%s [%d/%d runs flaked]' % (k, num_failures, num_runs),
do_newline=True)
finally:
for antagonist in antagonists:
antagonist.kill()
if xml_report and resultset:
report_utils.render_junit_xml_report(resultset, xml_report,
suite_name=args.report_suite_name)
number_failures, _ = jobset.run(
post_tests_steps, maxjobs=1, stop_on_failure=True,
newline_on_success=newline_on_success, travis=args.travis)
out = []
if number_failures:
out.append(BuildAndRunError.POST_TEST)
if num_test_failures:
out.append(BuildAndRunError.TEST)
return out
if forever:
success = True
while True:
dw = watch_dirs.DirWatcher(['src', 'include', 'test', 'examples'])
initial_time = dw.most_recent_change()
have_files_changed = lambda: dw.most_recent_change() != initial_time
previous_success = success
errors = _build_and_run(check_cancelled=have_files_changed,
                        newline_on_success=False,
                        build_only=args.build_only)
# _build_and_run returns a list of failures, so comparing it to 0 would
# always be False; track success from the (possibly empty) error list.
success = not errors
if not previous_success and not errors:
jobset.message('SUCCESS',
'All tests are now passing properly',
do_newline=True)
jobset.message('IDLE', 'No change detected')
while not have_files_changed():
time.sleep(1)
else:
errors = _build_and_run(check_cancelled=lambda: False,
newline_on_success=args.newline_on_success,
xml_report=args.xml_report,
build_only=args.build_only)
if not errors:
jobset.message('SUCCESS', 'All tests passed', do_newline=True)
else:
jobset.message('FAILED', 'Some tests failed', do_newline=True)
exit_code = 0
if BuildAndRunError.BUILD in errors:
exit_code |= 1
if BuildAndRunError.TEST in errors:
exit_code |= 2
if BuildAndRunError.POST_TEST in errors:
exit_code |= 4
sys.exit(exit_code)
# Copyright 2014-present MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you
# may not use this file except in compliance with the License. You
# may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
"""Communicate with one MongoDB server in a topology."""
from datetime import datetime
from bson import _decode_all_selective
from pymongo.errors import NotPrimaryError, OperationFailure
from pymongo.helpers import _check_command_response
from pymongo.message import _convert_exception, _OpMsg
from pymongo.response import PinnedResponse, Response
_CURSOR_DOC_FIELDS = {"cursor": {"firstBatch": 1, "nextBatch": 1}}
class Server(object):
def __init__(
self, server_description, pool, monitor, topology_id=None, listeners=None, events=None
):
"""Represent one MongoDB server."""
self._description = server_description
self._pool = pool
self._monitor = monitor
self._topology_id = topology_id
self._publish = listeners is not None and listeners.enabled_for_server
self._listener = listeners
self._events = None
if self._publish:
self._events = events()
def open(self):
"""Start monitoring, or restart after a fork.
Multiple calls have no effect.
"""
if not self._pool.opts.load_balanced:
self._monitor.open()
def reset(self, service_id=None):
"""Clear the connection pool."""
self.pool.reset(service_id)
def close(self):
"""Clear the connection pool and stop the monitor.
Reconnect with open().
"""
if self._publish:
assert self._listener is not None
assert self._events is not None
self._events.put(
(
self._listener.publish_server_closed,
(self._description.address, self._topology_id),
)
)
self._monitor.close()
self._pool.reset_without_pause()
def request_check(self):
"""Check the server's state soon."""
self._monitor.request_check()
def run_operation(self, sock_info, operation, read_preference, listeners, unpack_res):
"""Run a _Query or _GetMore operation and return a Response object.
This method is used only to run _Query/_GetMore operations from
cursors.
Can raise ConnectionFailure, OperationFailure, etc.
:Parameters:
- `sock_info`: A SocketInfo instance.
- `operation`: A _Query or _GetMore object.
- `read_preference`: The read preference to use, passed to operation.get_message.
- `listeners`: Instance of _EventListeners or None.
- `unpack_res`: A callable that decodes the wire protocol response.
"""
duration = None
publish = listeners.enabled_for_commands
if publish:
start = datetime.now()
use_cmd = operation.use_command(sock_info)
more_to_come = operation.sock_mgr and operation.sock_mgr.more_to_come
if more_to_come:
request_id = 0
else:
message = operation.get_message(read_preference, sock_info, use_cmd)
request_id, data, max_doc_size = self._split_message(message)
if publish:
cmd, dbn = operation.as_command(sock_info)
listeners.publish_command_start(
cmd, dbn, request_id, sock_info.address, service_id=sock_info.service_id
)
start = datetime.now()
try:
if more_to_come:
reply = sock_info.receive_message(None)
else:
sock_info.send_message(data, max_doc_size)
reply = sock_info.receive_message(request_id)
# Unpack and check for command errors.
if use_cmd:
user_fields = _CURSOR_DOC_FIELDS
legacy_response = False
else:
user_fields = None
legacy_response = True
docs = unpack_res(
reply,
operation.cursor_id,
operation.codec_options,
legacy_response=legacy_response,
user_fields=user_fields,
)
if use_cmd:
first = docs[0]
operation.client._process_response(first, operation.session)
_check_command_response(first, sock_info.max_wire_version)
except Exception as exc:
if publish:
duration = datetime.now() - start
if isinstance(exc, (NotPrimaryError, OperationFailure)):
failure = exc.details
else:
failure = _convert_exception(exc)
listeners.publish_command_failure(
duration,
failure,
operation.name,
request_id,
sock_info.address,
service_id=sock_info.service_id,
)
raise
if publish:
duration = datetime.now() - start
# Must publish in find / getMore / explain command response
# format.
if use_cmd:
res = docs[0]
elif operation.name == "explain":
res = docs[0] if docs else {}
else:
res = {"cursor": {"id": reply.cursor_id, "ns": operation.namespace()}, "ok": 1}
if operation.name == "find":
res["cursor"]["firstBatch"] = docs
else:
res["cursor"]["nextBatch"] = docs
listeners.publish_command_success(
duration,
res,
operation.name,
request_id,
sock_info.address,
service_id=sock_info.service_id,
)
# Decrypt response.
client = operation.client
if client and client._encrypter:
if use_cmd:
decrypted = client._encrypter.decrypt(reply.raw_command_response())
docs = _decode_all_selective(decrypted, operation.codec_options, user_fields)
response: Response
if client._should_pin_cursor(operation.session) or operation.exhaust:
sock_info.pin_cursor()
if isinstance(reply, _OpMsg):
# In OP_MSG, the server keeps sending only if the
# more_to_come flag is set.
more_to_come = reply.more_to_come
else:
# In OP_REPLY, the server keeps sending until cursor_id is 0.
more_to_come = bool(operation.exhaust and reply.cursor_id)
if operation.sock_mgr:
operation.sock_mgr.update_exhaust(more_to_come)
response = PinnedResponse(
data=reply,
address=self._description.address,
socket_info=sock_info,
duration=duration,
request_id=request_id,
from_command=use_cmd,
docs=docs,
more_to_come=more_to_come,
)
else:
response = Response(
data=reply,
address=self._description.address,
duration=duration,
request_id=request_id,
from_command=use_cmd,
docs=docs,
)
return response
def get_socket(self, handler=None):
return self.pool.get_socket(handler)
@property
def description(self):
return self._description
@description.setter
def description(self, server_description):
assert server_description.address == self._description.address
self._description = server_description
@property
def pool(self):
return self._pool
def _split_message(self, message):
"""Return request_id, data, max_doc_size.
:Parameters:
- `message`: (request_id, data, max_doc_size) or (request_id, data)
"""
if len(message) == 3:
return message
else:
# get_more and kill_cursors messages don't include BSON documents.
request_id, data = message
return request_id, data, 0
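# For example (a sketch of the two accepted message shapes):
#   _split_message((42, b"payload", 1024)) -> (42, b"payload", 1024)
#   _split_message((42, b"payload"))       -> (42, b"payload", 0)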
def __repr__(self):
return "<%s %r>" % (self.__class__.__name__, self._description)
import unittest as real_unittest
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.db.models import get_app, get_apps
from django.db.models.loading import unregister_app
from django.test import _doctest as doctest
from django.test.utils import setup_test_environment, teardown_test_environment
from django.test.testcases import OutputChecker, DocTestRunner, TestCase
from django.utils import unittest
from django.utils.importlib import import_module
from django.utils.module_loading import module_has_submodule
__all__ = ('DjangoTestRunner', 'DjangoTestSuiteRunner')
# The module name for tests outside models.py
TEST_MODULE = 'tests'
doctestOutputChecker = OutputChecker()
class DjangoTestRunner(unittest.TextTestRunner):
def __init__(self, *args, **kwargs):
import warnings
warnings.warn(
"DjangoTestRunner is deprecated; it's functionality is "
"indistinguishable from TextTestRunner",
DeprecationWarning
)
super(DjangoTestRunner, self).__init__(*args, **kwargs)
def get_tests(app_module):
parts = app_module.__name__.split('.')
prefix, last = parts[:-1], parts[-1]
try:
test_module = import_module('.'.join(prefix + [TEST_MODULE]))
except ImportError:
# Couldn't import tests.py. Was it due to a missing file, or
# due to an import error in a tests.py that actually exists?
# app_module either points to a models.py file, or models/__init__.py
# Tests are therefore either in same directory, or one level up
if last == 'models':
app_root = import_module('.'.join(prefix))
else:
app_root = app_module
if not module_has_submodule(app_root, TEST_MODULE):
test_module = None
else:
# The module exists, so there must be an import error in the test
# module itself.
raise
return test_module
def build_suite(app_module):
"""
Create a complete Django test suite for the provided application module.
"""
suite = unittest.TestSuite()
if skip_app(app_module):
unregister_app(app_module.__name__.split('.')[-2])
settings.INSTALLED_APPS.remove('.'.join(app_module.__name__.split('.')[:-1]))
return suite
# Load unit and doctests in the models.py module. If module has
# a suite() method, use it. Otherwise build the test suite ourselves.
if hasattr(app_module, 'suite'):
suite.addTest(app_module.suite())
else:
suite.addTest(unittest.defaultTestLoader.loadTestsFromModule(
app_module))
try:
suite.addTest(doctest.DocTestSuite(app_module,
checker=doctestOutputChecker,
runner=DocTestRunner))
except ValueError:
# No doc tests in models.py
pass
# Check to see if a separate 'tests' module exists parallel to the
# models module
test_module = get_tests(app_module)
if test_module:
# Load unit and doctests in the tests.py module. If module has
# a suite() method, use it. Otherwise build the test suite ourselves.
if hasattr(test_module, 'suite'):
suite.addTest(test_module.suite())
else:
suite.addTest(unittest.defaultTestLoader.loadTestsFromModule(
test_module))
try:
suite.addTest(doctest.DocTestSuite(
test_module, checker=doctestOutputChecker,
runner=DocTestRunner))
except ValueError:
# No doc tests in tests.py
pass
return suite
def skip_app(app_module):
"""return True if the app must be skipped acccording to its
TEST_SKIP_UNLESS_DB_FEATURES and TEST_IF_DB_FEATURE attributes"""
req_db_features = getattr(app_module, 'TEST_SKIP_UNLESS_DB_FEATURES',
[])
if req_db_features:
from django.db import connections
for c in connections:
connection = connections[c]
for feature in req_db_features:
if not getattr(connection.features, feature, False):
return True
forbidden_db_features = getattr(app_module, 'TEST_SKIP_IF_DB_FEATURES',
[])
if forbidden_db_features:
from django.db import connections
for c in connections:
connection = connections[c]
for feature in forbidden_db_features:
if getattr(connection.features, feature, False):
return True
return False
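# Hedged sketch (hypothetical app; names are illustrative): the module-level
# attributes skip_app() reads from an application's models module. The app
# is skipped unless every feature in TEST_SKIP_UNLESS_DB_FEATURES is
# available on all configured connections, and skipped if any feature in
# TEST_SKIP_IF_DB_FEATURES is available on any of them.
import types as _types

_example_app_module = _types.ModuleType('myapp.models')
_example_app_module.TEST_SKIP_UNLESS_DB_FEATURES = ['supports_transactions']
_example_app_module.TEST_SKIP_IF_DB_FEATURES = [
    'interprets_empty_strings_as_nulls']
# skip_app(_example_app_module) would consult these lists against
# connection.features for every alias in django.db.connections.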
def build_test(label):
"""
    Construct a test case with the specified label. The label should be of
    the form app.TestClass or app.TestClass.test_method. Returns an
instantiated test or test suite corresponding to the label provided.
"""
parts = label.split('.')
if len(parts) < 2 or len(parts) > 3:
raise ValueError("Test label '%s' should be of the form app.TestCase "
"or app.TestCase.test_method" % label)
#
# First, look for TestCase instances with a name that matches
#
app_module = get_app(parts[0])
if skip_app(app_module):
unregister_app(app_module.__name__.split('.')[-2])
settings.INSTALLED_APPS.remove('.'.join(app_module.__name__.split('.')[:-1]))
return unittest.TestSuite()
test_module = get_tests(app_module)
TestClass = getattr(app_module, parts[1], None)
# Couldn't find the test class in models.py; look in tests.py
if TestClass is None:
if test_module:
TestClass = getattr(test_module, parts[1], None)
try:
if issubclass(TestClass, (unittest.TestCase, real_unittest.TestCase)):
if len(parts) == 2: # label is app.TestClass
try:
return unittest.TestLoader().loadTestsFromTestCase(
TestClass)
except TypeError:
raise ValueError(
"Test label '%s' does not refer to a test class"
% label)
else: # label is app.TestClass.test_method
return TestClass(parts[2])
except TypeError:
        # TestClass isn't a TestCase subclass - it must be a method or an
        # ordinary class.
pass
#
# If there isn't a TestCase, look for a doctest that matches
#
tests = []
for module in app_module, test_module:
try:
doctests = doctest.DocTestSuite(module,
checker=doctestOutputChecker,
runner=DocTestRunner)
# Now iterate over the suite, looking for doctests whose name
# matches the pattern that was given
for test in doctests:
if test._dt_test.name in (
'%s.%s' % (module.__name__, '.'.join(parts[1:])),
'%s.__test__.%s' % (
module.__name__, '.'.join(parts[1:]))):
tests.append(test)
except ValueError:
# No doctests found.
pass
# If no tests were found, then we were given a bad test label.
if not tests:
raise ValueError("Test label '%s' does not refer to a test" % label)
# Construct a suite out of the tests that matched.
return unittest.TestSuite(tests)
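# For example (hypothetical app and test names):
#   build_test('myapp.MyTestCase')            -> suite of all its methods
#   build_test('myapp.MyTestCase.test_foo')   -> that single test method
#   build_test('myapp.my_doctested_function') -> the matching doctest suite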
def partition_suite(suite, classes, bins):
"""
Partitions a test suite by test type.
classes is a sequence of types
bins is a sequence of TestSuites, one more than classes
Tests of type classes[i] are added to bins[i],
    tests with no match found in classes are placed in bins[-1]
"""
for test in suite:
if isinstance(test, unittest.TestSuite):
partition_suite(test, classes, bins)
else:
for i in range(len(classes)):
if isinstance(test, classes[i]):
bins[i].addTest(test)
break
else:
bins[-1].addTest(test)
def reorder_suite(suite, classes):
"""
Reorders a test suite by test type.
`classes` is a sequence of types
All tests of type classes[0] are placed first, then tests of type
classes[1], etc. Tests with no match in classes are placed last.
"""
class_count = len(classes)
bins = [unittest.TestSuite() for i in range(class_count+1)]
partition_suite(suite, classes, bins)
for i in range(class_count):
bins[0].addTests(bins[i+1])
return bins[0]
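# Illustrative sketch (stand-in classes, not part of this module):
# reorder_suite() moves tests matching classes[0] to the front while keeping
# the relative order within each bin.
class _DemoPriorityTest(real_unittest.TestCase):
    def test_noop(self):
        pass

class _DemoOtherTest(real_unittest.TestCase):
    def test_noop(self):
        pass

_demo_suite = unittest.TestSuite(
    [_DemoOtherTest('test_noop'), _DemoPriorityTest('test_noop')])
_demo_reordered = reorder_suite(_demo_suite, (_DemoPriorityTest,))
# _DemoPriorityTest now precedes _DemoOtherTest in _demo_reordered.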
def dependency_ordered(test_databases, dependencies):
"""Reorder test_databases into an order that honors the dependencies
described in TEST_DEPENDENCIES.
"""
ordered_test_databases = []
resolved_databases = set()
while test_databases:
changed = False
deferred = []
while test_databases:
signature, (db_name, aliases) = test_databases.pop()
dependencies_satisfied = True
for alias in aliases:
if alias in dependencies:
if all(a in resolved_databases
for a in dependencies[alias]):
# all dependencies for this alias are satisfied
dependencies.pop(alias)
resolved_databases.add(alias)
else:
dependencies_satisfied = False
else:
resolved_databases.add(alias)
if dependencies_satisfied:
ordered_test_databases.append((signature, (db_name, aliases)))
changed = True
else:
deferred.append((signature, (db_name, aliases)))
if not changed:
raise ImproperlyConfigured(
"Circular dependency in TEST_DEPENDENCIES")
test_databases = deferred
return ordered_test_databases
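# Worked example (made-up aliases): an alias listing 'default' in
# TEST_DEPENDENCIES is deferred until 'default' has been resolved, so the
# 'default' entry comes out first.
_demo_order = dependency_ordered(
    [('sig_other', ('other_db', ['other'])),
     ('sig_default', ('main_db', ['default']))],
    {'other': ['default']})
assert [sig for sig, _ in _demo_order] == ['sig_default', 'sig_other']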
class DjangoTestSuiteRunner(object):
def __init__(self, verbosity=1, interactive=True, failfast=True, **kwargs):
self.verbosity = verbosity
self.interactive = interactive
self.failfast = failfast
def setup_test_environment(self, **kwargs):
setup_test_environment()
settings.DEBUG = False
unittest.installHandler()
def build_suite(self, test_labels, extra_tests=None, **kwargs):
suite = unittest.TestSuite()
if test_labels:
for label in test_labels:
if '.' in label:
suite.addTest(build_test(label))
else:
app = get_app(label)
suite.addTest(build_suite(app))
else:
for app in get_apps():
suite.addTest(build_suite(app))
if extra_tests:
for test in extra_tests:
suite.addTest(test)
return reorder_suite(suite, (TestCase,))
def setup_databases(self, **kwargs):
from django.db import connections, DEFAULT_DB_ALIAS
# First pass -- work out which databases actually need to be created,
# and which ones are test mirrors or duplicate entries in DATABASES
mirrored_aliases = {}
test_databases = {}
dependencies = {}
for alias in connections:
connection = connections[alias]
if connection.settings_dict['TEST_MIRROR']:
# If the database is marked as a test mirror, save
# the alias.
mirrored_aliases[alias] = (
connection.settings_dict['TEST_MIRROR'])
else:
# Store a tuple with DB parameters that uniquely identify it.
# If we have two aliases with the same values for that tuple,
# we only need to create the test database once.
item = test_databases.setdefault(
connection.creation.test_db_signature(),
(connection.settings_dict['NAME'], [])
)
item[1].append(alias)
if 'TEST_DEPENDENCIES' in connection.settings_dict:
dependencies[alias] = (
connection.settings_dict['TEST_DEPENDENCIES'])
else:
if alias != DEFAULT_DB_ALIAS:
dependencies[alias] = connection.settings_dict.get(
'TEST_DEPENDENCIES', [DEFAULT_DB_ALIAS])
# Second pass -- actually create the databases.
old_names = []
mirrors = []
for signature, (db_name, aliases) in dependency_ordered(
test_databases.items(), dependencies):
# Actually create the database for the first connection
connection = connections[aliases[0]]
old_names.append((connection, db_name, True))
test_db_name = connection.creation.create_test_db(
self.verbosity, autoclobber=not self.interactive)
for alias in aliases[1:]:
connection = connections[alias]
if db_name:
old_names.append((connection, db_name, False))
connection.settings_dict['NAME'] = test_db_name
else:
# If settings_dict['NAME'] isn't defined, we have a backend
# where the name isn't important -- e.g., SQLite, which
# uses :memory:. Force create the database instead of
# assuming it's a duplicate.
old_names.append((connection, db_name, True))
connection.creation.create_test_db(
self.verbosity, autoclobber=not self.interactive)
for alias, mirror_alias in mirrored_aliases.items():
mirrors.append((alias, connections[alias].settings_dict['NAME']))
connections[alias].settings_dict['NAME'] = (
connections[mirror_alias].settings_dict['NAME'])
connections[alias].features = connections[mirror_alias].features
return old_names, mirrors
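    # Note: this return value feeds teardown_databases() below. old_names is
    # a list of (connection, old database name, destroy?) triples; mirrors
    # is a list of (alias, old NAME) pairs for the test-mirror aliases.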
def run_suite(self, suite, **kwargs):
return unittest.TextTestRunner(
verbosity=self.verbosity, failfast=self.failfast).run(suite)
def teardown_databases(self, old_config, **kwargs):
"""
Destroys all the non-mirror databases.
"""
old_names, mirrors = old_config
for connection, old_name, destroy in old_names:
if destroy:
connection.creation.destroy_test_db(old_name, self.verbosity)
def teardown_test_environment(self, **kwargs):
unittest.removeHandler()
teardown_test_environment()
def suite_result(self, suite, result, **kwargs):
return len(result.failures) + len(result.errors)
def run_tests(self, test_labels, extra_tests=None, **kwargs):
"""
Run the unit tests for all the test labels in the provided list.
Labels must be of the form:
- app.TestClass.test_method
Run a single specific test method
- app.TestClass
Run all the test methods in a given class
- app
Search for doctests and unittests in the named application.
When looking for tests, the test runner will look in the models and
tests modules for the application.
A list of 'extra' tests may also be provided; these tests
will be added to the test suite.
Returns the number of tests that failed.
"""
self.setup_test_environment()
suite = self.build_suite(test_labels, extra_tests)
for app in get_apps():
if skip_app(app):
unregister_app(app.__name__.split('.')[-2])
                settings.INSTALLED_APPS.remove(
                    '.'.join(app.__name__.split('.')[:-1]))
old_config = self.setup_databases()
result = self.run_suite(suite)
self.teardown_databases(old_config)
self.teardown_test_environment()
return self.suite_result(suite, result)
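# Minimal usage sketch (hedged: assumes DJANGO_SETTINGS_MODULE is configured
# and 'myapp' is a hypothetical installed app), mirroring what `manage.py
# test` would do:
#
#   runner = DjangoTestSuiteRunner(verbosity=1, interactive=False)
#   failures = runner.run_tests(['myapp'])
#   if failures:
#       raise SystemExit(failures)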
|
|
# coding=utf-8
# Copyright 2022 The Uncertainty Baselines Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Loss utils, including rebalancing based on class distribution."""
# pylint: disable=g-bare-generic
# pylint: disable=g-doc-args
# pylint: disable=g-doc-return-or-yield
# pylint: disable=g-importing-member
# pylint: disable=g-no-space-after-docstring-summary
# pylint: disable=g-short-docstring-punctuation
# pylint: disable=logging-format-interpolation
# pylint: disable=logging-fstring-interpolation
# pylint: disable=missing-function-docstring
from typing import Dict, Union
import tensorflow as tf
import tensorflow.keras.backend as K
import torch
def get_diabetic_retinopathy_class_balance_weights(
    positive_empirical_prob: Union[float, None] = None) -> Dict[int, float]:
r"""Class weights used for rebalancing the dataset, by skewing the `loss`.
Diabetic Retinopathy positive class proportions are imbalanced:
Train: 19.6%
Val: 18.8%
Test: 19.2%
Here, we compute appropriate class weights such that the following
loss reweighting can be done multiplicatively for each element.
\mathcal{L}= -\frac{1}{K n} \sum_{i=1}^{n}
\frac{\mathcal{L}_{\text{cross-entropy}}}{p(k)}
where we have K = 2 classes, n images in a minibatch, and the p(k) is the
empirical probability of class k in the training dataset.
Therefore, we here compute weights
w_k = \frac{1}{K} * \frac{1}{p(k)}
in order to apply the reweighting with an elementwise multiply over the
batch losses.
We can also use the empirical probabilities for a particular minibatch,
i.e. p(k)_{\text{minibatch}}.
Args:
positive_empirical_prob: the empirical probability of a positive label.
Returns:
Reweighted positive and negative example probabilities.
"""
if positive_empirical_prob is None:
# positive_empirical_prob = 0.196
raise NotImplementedError(
'Needs to be updated for APTOS / Severity shifts, '
'different decision thresholds (Mild / Moderate classifiers).')
return {
0: (1 / 2) * (1 / (1 - positive_empirical_prob)),
1: (1 / 2) * (1 / positive_empirical_prob)
}
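# Worked example (using the training proportion quoted above, p = 0.196):
# w_0 = 0.5 / (1 - 0.196) ~= 0.622 and w_1 = 0.5 / 0.196 ~= 2.551, so
# positive examples are upweighted roughly fourfold relative to negatives.
#
#   get_diabetic_retinopathy_class_balance_weights(
#       positive_empirical_prob=0.196)
#   -> {0: 0.6218905472636815, 1: 2.5510204081632653}
#
# Calling it with the default None raises NotImplementedError, as above.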
def get_positive_empirical_prob(labels: tf.Tensor) -> float:
"""Given set of binary labels, determine empirical prob of positive label.
(i.e., the proportion of ones).
Args:
labels: tf.Tensor, batch of labels
Returns:
empirical probability of a positive label
"""
n_pos_labels = tf.math.count_nonzero(labels)
total_n_labels = labels.get_shape()[0]
return n_pos_labels / total_n_labels
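# For example:
#   get_positive_empirical_prob(tf.constant([1, 0, 0, 1, 0]))
# counts 2 positives out of 5 labels and returns 0.4 (as a tf scalar).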
def get_weighted_binary_cross_entropy_keras(weights: Dict[int, float]):
"""Return a function to calculate weighted binary xent with multi-hot labels.
  Based on implementation from @menrfa
(https://stackoverflow.com/questions/46009619/
keras-weighted-binary-crossentropy)
# Example
>>> y_true = tf.convert_to_tensor([1, 0, 0, 0, 0, 0], dtype=tf.int64)
>>> y_pred = tf.convert_to_tensor(
... [0.6, 0.1, 0.1, 0.9, 0.1, 0.], dtype=tf.float32)
>>> weights = {
... 0: 1.,
... 1: 2.
... }
# With weights
>>> loss_fn = get_weighted_binary_cross_entropy_keras(weights=weights)
>>> loss_fn(y_true, y_pred)
  <tf.Tensor: shape=(), dtype=float32, numpy=0.6067193>
# Without weights
>>> loss_fn = tf.keras.losses.binary_crossentropy
>>> loss_fn(y_true, y_pred)
  <tf.Tensor: shape=(), dtype=float32, numpy=0.52158177>
# Another example
>>> y_true = tf.convert_to_tensor([[0., 1.], [0., 0.]], dtype=tf.float32)
>>> y_pred = tf.convert_to_tensor([[0.6, 0.4], [0.4, 0.6]], dtype=tf.float32)
>>> weights = {
... 0: 1.,
... 1: 2.
... }
# With weights
  >>> loss_fn = get_weighted_binary_cross_entropy_keras(weights=weights)
>>> loss_fn(y_true, y_pred)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.3744358 , 0.71355796],
dtype=float32)>
# Without weights
>>> loss_fn = tf.keras.losses.binary_crossentropy
>>> loss_fn(y_true, y_pred)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([0.9162905 , 0.71355796],
dtype=float32)>
Args:
weights: dict, set weights for respective labels, e.g., {
0: 1.
1: 8. } In this case, we aim to compensate for the true (1) label
occurring less in the training dataset than the false (0) label.
e.g. [0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
Returns:
A function to calculate (weighted) binary cross entropy.
"""
if 0 not in weights or 1 not in weights:
raise NotImplementedError
def weighted_cross_entropy_fn(y_true, y_pred, from_logits=False):
tf_y_true = tf.cast(y_true, dtype=y_pred.dtype)
tf_y_pred = tf.cast(y_pred, dtype=y_pred.dtype)
weight_1 = tf.cast(weights[1], dtype=y_pred.dtype)
weight_0 = tf.cast(weights[0], dtype=y_pred.dtype)
weights_v = tf.where(tf.equal(tf_y_true, 1), weight_1, weight_0)
ce = K.binary_crossentropy(tf_y_true, tf_y_pred, from_logits=from_logits)
loss = K.mean(tf.multiply(ce, weights_v), axis=-1)
return loss
return weighted_cross_entropy_fn
def get_weighted_binary_cross_entropy_torch(weights: Dict[int, float]):
"""Return a function to calculate weighted binary xent with multi-hot labels.
Based on implementation from @menrfa
(https://stackoverflow.com/questions/46009619/
keras-weighted-binary-crossentropy)
# Example
>>> y_true = torch.FloatTensor([1, 0, 0, 0, 0, 0])
>>> y_pred = torch.FloatTensor([0.6, 0.1, 0.1, 0.9, 0.1, 0.])
>>> weights = {
... 0: 1.,
... 1: 2.
... }
# With weights
>>> loss_fn = get_weighted_binary_cross_entropy_torch(weights=weights)
>>> loss_fn(y_true, y_pred)
tensor(0.6067)
# Without weights
>>> loss_fn = torch.nn.BCELoss()
>>> loss_fn(input=y_pred, target=y_true)
tensor(0.5216)
Args:
weights: dict, set weights for respective labels, e.g., {
0: 1.
1: 8. } In this case, we aim to compensate for the true (1) label
occurring less in the training dataset than the false (0) label.
e.g. [0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
Returns:
A function to calculate (weighted) binary cross entropy.
"""
if 0 not in weights or 1 not in weights:
raise NotImplementedError
def weighted_cross_entropy_fn(y_true: torch.Tensor,
y_pred: torch.Tensor,
from_logits: bool = False):
assert y_true.dtype == torch.float32
assert y_pred.dtype == torch.float32
# weight_1 = torch.as_tensor(weights[1], dtype=y_pred.dtype)
# weight_0 = torch.as_tensor(weights[0], dtype=y_pred.dtype)
weights_v = torch.where(y_true == 1, weights[1], weights[0])
if from_logits:
ce = torch.nn.BCEWithLogitsLoss(weight=weights_v, reduction='none')
else:
ce = torch.nn.BCELoss(weight=weights_v, reduction='none')
return torch.mean(ce(input=y_pred, target=y_true))
return weighted_cross_entropy_fn
def get_diabetic_retinopathy_loss_fn(
    class_reweight_mode: Union[str, None],
    class_weights: Union[Dict[int, float], None]):
"""Initialize loss function based on class reweighting setting.
Return None for a minibatch loss, which must be defined per-minibatch,
using the minibatch empirical label distribution.
Args:
class_reweight_mode: Union[str, None], None indicates no class reweighting,
`constant` indicates reweighting with the training set empirical
distribution, `minibatch` indicates reweighting with the minibatch
empirical label distribution.
class_weights: Union[Dict[int, float], None], class weights as produced by
`get_diabetic_retinopathy_class_balance_weights`, should only be provided
for the `constant` class_reweight_mode.
Returns:
None, or loss_fn
"""
del class_weights
if class_reweight_mode is None:
loss_fn = tf.keras.losses.binary_crossentropy
elif class_reweight_mode == 'constant':
# Initialize a reweighted BCE using the empirical class distribution
# of the training dataset.
# loss_fn = get_weighted_binary_cross_entropy(weights=class_weights)
raise NotImplementedError('No constant reweighting.')
elif class_reweight_mode == 'minibatch':
# This loss_fn must be reinitialized for each batch, using the
# minibatch empirical class distribution.
loss_fn = None
else:
raise NotImplementedError(
f'Reweighting mode {class_reweight_mode} unsupported.')
return loss_fn
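# Summary of the three modes as implemented above:
#   get_diabetic_retinopathy_loss_fn(None, None)
#     -> tf.keras.losses.binary_crossentropy (no reweighting)
#   get_diabetic_retinopathy_loss_fn('minibatch', None)
#     -> None; rebuild per batch via get_minibatch_reweighted_loss_fn below
#   get_diabetic_retinopathy_loss_fn('constant', class_weights)
#     -> currently raises NotImplementedError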
def get_minibatch_reweighted_loss_fn(labels: tf.Tensor, loss_fn_type='keras'):
"""The minibatch-reweighted loss function can only be initialized
using the labels of a particular minibatch.
Args:
labels: tf.Tensor, the labels of a minibatch
loss_fn_type: str, one of {'keras', 'torch', 'jax'}
Returns:
loss_fn, for use in a particular minibatch
"""
minibatch_positive_empirical_prob = get_positive_empirical_prob(labels=labels)
minibatch_class_weights = (
get_diabetic_retinopathy_class_balance_weights(
positive_empirical_prob=minibatch_positive_empirical_prob))
if loss_fn_type == 'keras':
batch_loss_fn = get_weighted_binary_cross_entropy_keras(
weights=minibatch_class_weights)
elif loss_fn_type == 'torch':
for key, value in minibatch_class_weights.items():
minibatch_class_weights[key] = value._numpy() # pylint: disable=protected-access
batch_loss_fn = get_weighted_binary_cross_entropy_torch(
weights=minibatch_class_weights)
elif loss_fn_type == 'jax':
raise NotImplementedError
else:
raise NotImplementedError
return batch_loss_fn
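# Minimal per-step usage sketch (hedged: `labels` are binary labels and
# `probs` are sigmoid outputs for one minibatch; names are illustrative):
def _example_minibatch_loss(labels, probs):
    # Rebuild the loss every step so the weights track this batch's balance.
    loss_fn = get_minibatch_reweighted_loss_fn(labels, loss_fn_type='keras')
    return loss_fn(labels, probs)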
|
|
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'FaqCategoryTranslation'
db.create_table(u'fluent_faq_faqcategory_translation', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('language_code', self.gf('django.db.models.fields.CharField')(max_length=15, db_index=True)),
('title', self.gf('django.db.models.fields.CharField')(max_length=200)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=50)),
(u'master', self.gf('django.db.models.fields.related.ForeignKey')(related_name='translations', null=True, to=orm['fluent_faq.FaqCategory'])),
))
db.send_create_signal(u'fluent_faq', ['FaqCategoryTranslation'])
# Adding unique constraint on 'FaqCategoryTranslation', fields ['language_code', u'master']
db.create_unique(u'fluent_faq_faqcategory_translation', ['language_code', u'master_id'])
# Adding model 'FaqCategory'
db.create_table(u'fluent_faq_faqcategory', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('meta_keywords', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('meta_description', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('author', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
('creation_date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('modification_date', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
('parent_site', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['sites.Site'])),
('order', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True, null=True, blank=True)),
))
db.send_create_signal(u'fluent_faq', ['FaqCategory'])
# Adding model 'FaqQuestionTranslation'
db.create_table(u'fluent_faq_faqquestion_translation', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('language_code', self.gf('django.db.models.fields.CharField')(max_length=15, db_index=True)),
('title', self.gf('django.db.models.fields.CharField')(max_length=200)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=50)),
(u'master', self.gf('django.db.models.fields.related.ForeignKey')(related_name='translations', null=True, to=orm['fluent_faq.FaqQuestion'])),
))
db.send_create_signal(u'fluent_faq', ['FaqQuestionTranslation'])
# Adding unique constraint on 'FaqQuestionTranslation', fields ['language_code', u'master']
db.create_unique(u'fluent_faq_faqquestion_translation', ['language_code', u'master_id'])
# Adding model 'FaqQuestion'
db.create_table(u'fluent_faq_faqquestion', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('meta_keywords', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('meta_description', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('author', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
('creation_date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('modification_date', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
('parent_site', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['sites.Site'])),
('order', self.gf('django.db.models.fields.PositiveIntegerField')(db_index=True, null=True, blank=True)),
('category', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['fluent_faq.FaqCategory'])),
))
db.send_create_signal(u'fluent_faq', ['FaqQuestion'])
def backwards(self, orm):
# Removing unique constraint on 'FaqQuestionTranslation', fields ['language_code', u'master']
db.delete_unique(u'fluent_faq_faqquestion_translation', ['language_code', u'master_id'])
# Removing unique constraint on 'FaqCategoryTranslation', fields ['language_code', u'master']
db.delete_unique(u'fluent_faq_faqcategory_translation', ['language_code', u'master_id'])
# Deleting model 'FaqCategoryTranslation'
db.delete_table(u'fluent_faq_faqcategory_translation')
# Deleting model 'FaqCategory'
db.delete_table(u'fluent_faq_faqcategory')
# Deleting model 'FaqQuestionTranslation'
db.delete_table(u'fluent_faq_faqquestion_translation')
# Deleting model 'FaqQuestion'
db.delete_table(u'fluent_faq_faqquestion')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'fluent_contents.contentitem': {
'Meta': {'ordering': "('placeholder', 'sort_order')", 'object_name': 'ContentItem'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language_code': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '15', 'db_index': 'True'}),
'parent_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'parent_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'placeholder': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'contentitems'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['fluent_contents.Placeholder']"}),
'polymorphic_ctype': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'polymorphic_fluent_contents.contentitem_set'", 'null': 'True', 'to': u"orm['contenttypes.ContentType']"}),
'sort_order': ('django.db.models.fields.IntegerField', [], {'default': '1', 'db_index': 'True'})
},
'fluent_contents.placeholder': {
'Meta': {'unique_together': "(('parent_type', 'parent_id', 'slot'),)", 'object_name': 'Placeholder'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'parent_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'parent_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']", 'null': 'True', 'blank': 'True'}),
'role': ('django.db.models.fields.CharField', [], {'default': "'m'", 'max_length': '1'}),
'slot': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
u'fluent_faq.faqcategory': {
'Meta': {'ordering': "('order', 'creation_date')", 'object_name': 'FaqCategory'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'meta_description': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'meta_keywords': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'modification_date': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'order': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'}),
'parent_site': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['sites.Site']"})
},
u'fluent_faq.faqcategorytranslation': {
'Meta': {'unique_together': "[(u'language_code', u'master')]", 'object_name': 'FaqCategoryTranslation', 'db_table': "u'fluent_faq_faqcategory_translation'"},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language_code': ('django.db.models.fields.CharField', [], {'max_length': '15', 'db_index': 'True'}),
u'master': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'translations'", 'null': 'True', 'to': u"orm['fluent_faq.FaqCategory']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
u'fluent_faq.faqquestion': {
'Meta': {'ordering': "('order', 'creation_date')", 'object_name': 'FaqQuestion'},
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}),
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['fluent_faq.FaqCategory']"}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'meta_description': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'meta_keywords': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'modification_date': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'order': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'}),
'parent_site': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['sites.Site']"})
},
u'fluent_faq.faqquestiontranslation': {
'Meta': {'unique_together': "[(u'language_code', u'master')]", 'object_name': 'FaqQuestionTranslation', 'db_table': "u'fluent_faq_faqquestion_translation'"},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language_code': ('django.db.models.fields.CharField', [], {'max_length': '15', 'db_index': 'True'}),
u'master': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'translations'", 'null': 'True', 'to': u"orm['fluent_faq.FaqQuestion']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
u'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'taggit.tag': {
'Meta': {'object_name': 'Tag'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '100'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '100'})
},
u'taggit.taggeditem': {
'Meta': {'object_name': 'TaggedItem'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_tagged_items'", 'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_items'", 'to': u"orm['taggit.Tag']"})
}
}
complete_apps = ['fluent_faq']
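# To apply this schema change with South (hedged: standard invocation,
# assuming the app is in INSTALLED_APPS and South is installed):
#   python manage.py migrate fluent_faq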