# ruebot/bagit-profiles-validator: setup.py
from setuptools import setup
description = """
This module can be used to validate BagitProfiles.
"""
setup(
name="bagit_profile",
version="1.3.0",
url="https://github.com/bagit-profiles/bagit-profiles-validator",
install_requires=["bagit", "requests"],
author="Mark Jordan, Nick Ruest",
author_email="mjordan@sfu.ca, ruestn@gmail.com",
license="CC0",
py_modules=["bagit_profile"],
scripts=["bagit_profile.py"],
description=description,
long_description=open("README.rst").read(),
package_data={"": ["README.rst"]},
platforms=["POSIX"],
test_suite="test",
classifiers=[
"License :: Public Domain",
"Intended Audience :: Developers",
"Topic :: Communications :: File Sharing",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Filesystems",
"Topic :: Utilities",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
],
)
# cruzegoodin/TSC-ShippingDetails: flask/lib/python2.7/site-packages/sqlalchemy/orm/collections.py
# orm/collections.py
# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Support for collections of mapped entities.
The collections package supplies the machinery used to inform the ORM of
collection membership changes. An instrumentation via decoration approach is
used, allowing arbitrary types (including built-ins) to be used as entity
collections without requiring inheritance from a base class.
Instrumentation decoration relays membership change events to the
:class:`.CollectionAttributeImpl` that is currently managing the collection.
The decorators observe function call arguments and return values, tracking
entities entering or leaving the collection. Two decorator approaches are
provided. One is a bundle of generic decorators that map function arguments
and return values to events::
from sqlalchemy.orm.collections import collection
class MyClass(object):
# ...
@collection.adds(1)
def store(self, item):
self.data.append(item)
@collection.removes_return()
def pop(self):
return self.data.pop()
The second approach is a bundle of targeted decorators that wrap appropriate
append and remove notifiers around the mutation methods present in the
standard Python ``list``, ``set`` and ``dict`` interfaces. These could be
specified in terms of generic decorator recipes, but are instead hand-tooled
for increased efficiency. The targeted decorators occasionally implement
adapter-like behavior, such as mapping bulk-set methods (``extend``,
``update``, ``__setslice__``, etc.) into the series of atomic mutation events
that the ORM requires.
The targeted decorators are used internally for automatic instrumentation of
entity collection classes. Every collection class goes through a
transformation process roughly like so:
1. If the class is a built-in, substitute a trivial sub-class
2. Is this class already instrumented?
3. Add in generic decorators
4. Sniff out the collection interface through duck-typing
5. Add targeted decoration to any undecorated interface method
This process modifies the class at runtime, decorating methods and adding some
bookkeeping properties. This isn't possible (or desirable) for built-in
classes like ``list``, so trivial sub-classes are substituted to hold
decoration::
class InstrumentedList(list):
pass
Collection classes can be specified in ``relationship(collection_class=)`` as
types or a function that returns an instance. Collection classes are
inspected and instrumented during the mapper compilation phase. The
collection_class callable will be executed once to produce a specimen
instance, and the type of that specimen will be instrumented. Functions that
return built-in types like ``lists`` will be adapted to produce instrumented
instances.
When extending a known type like ``list``, additional decorations are
generally not needed. Odds are, the extension method will delegate to a
method that's already instrumented. For example::
class QueueIsh(list):
def push(self, item):
self.append(item)
def shift(self):
return self.pop(0)
There's no need to decorate these methods. ``append`` and ``pop`` are already
instrumented as part of the ``list`` interface. Decorating them would fire
duplicate events, which should be avoided.
The targeted decoration tries not to rely on other methods in the underlying
collection class, but some are unavoidable. Many depend on 'read' methods
being present to properly instrument a 'write', for example, ``__setitem__``
needs ``__getitem__``. "Bulk" methods like ``update`` and ``extend`` may also
be reimplemented in terms of atomic appends and removes, so the ``extend``
decoration will actually perform many ``append`` operations and not call the
underlying method at all.
Tight control over bulk operation and the firing of events is also possible by
implementing the instrumentation internally in your methods. The basic
instrumentation package works under the general assumption that collection
mutation will not raise unusual exceptions. If you want to closely
orchestrate append and remove events with exception management, internal
instrumentation may be the answer. Within your method,
``collection_adapter(self)`` will retrieve an object that you can use for
explicit control over triggering append and remove events.
The owning object and :class:`.CollectionAttributeImpl` are also reachable
through the adapter, allowing for some very sophisticated behavior.
"""
import inspect
import operator
import weakref
from ..sql import expression
from .. import util, exc as sa_exc
from . import base
__all__ = ['collection', 'collection_adapter',
'mapped_collection', 'column_mapped_collection',
'attribute_mapped_collection']
__instrumentation_mutex = util.threading.Lock()
class _PlainColumnGetter(object):
"""Plain column getter, stores collection of Column objects
directly.
Serializes to a :class:`._SerializableColumnGetterV2`
which has more expensive __call__() performance
and some rare caveats.
"""
def __init__(self, cols):
self.cols = cols
self.composite = len(cols) > 1
def __reduce__(self):
return _SerializableColumnGetterV2._reduce_from_cols(self.cols)
def _cols(self, mapper):
return self.cols
def __call__(self, value):
state = base.instance_state(value)
m = base._state_mapper(state)
key = [
m._get_state_attr_by_column(state, state.dict, col)
for col in self._cols(m)
]
if self.composite:
return tuple(key)
else:
return key[0]
class _SerializableColumnGetter(object):
"""Column-based getter used in version 0.7.6 only.
Remains here for pickle compatibility with 0.7.6.
"""
def __init__(self, colkeys):
self.colkeys = colkeys
self.composite = len(colkeys) > 1
def __reduce__(self):
return _SerializableColumnGetter, (self.colkeys,)
def __call__(self, value):
state = base.instance_state(value)
m = base._state_mapper(state)
key = [m._get_state_attr_by_column(
state, state.dict,
m.mapped_table.columns[k])
for k in self.colkeys]
if self.composite:
return tuple(key)
else:
return key[0]
class _SerializableColumnGetterV2(_PlainColumnGetter):
"""Updated serializable getter which deals with
multi-table mapped classes.
Two extremely unusual cases are not supported.
Mappings which have tables across multiple metadata
objects, or which are mapped to non-Table selectables
linked across inheriting mappers may fail to function
here.
"""
def __init__(self, colkeys):
self.colkeys = colkeys
self.composite = len(colkeys) > 1
def __reduce__(self):
return self.__class__, (self.colkeys,)
@classmethod
def _reduce_from_cols(cls, cols):
def _table_key(c):
if not isinstance(c.table, expression.TableClause):
return None
else:
return c.table.key
colkeys = [(c.key, _table_key(c)) for c in cols]
return _SerializableColumnGetterV2, (colkeys,)
def _cols(self, mapper):
cols = []
metadata = getattr(mapper.local_table, 'metadata', None)
for (ckey, tkey) in self.colkeys:
if tkey is None or \
metadata is None or \
tkey not in metadata:
cols.append(mapper.local_table.c[ckey])
else:
cols.append(metadata.tables[tkey].c[ckey])
return cols
def column_mapped_collection(mapping_spec):
"""A dictionary-based collection type with column-based keying.
Returns a :class:`.MappedCollection` factory with a keying function
generated from mapping_spec, which may be a Column or a sequence
of Columns.
The key value must be immutable for the lifetime of the object. You
can not, for example, map on foreign key values if those key values will
change during the session, i.e. from None to a database-assigned integer
after a session flush.
"""
cols = [expression._only_column_elements(q, "mapping_spec")
for q in util.to_list(mapping_spec)
]
keyfunc = _PlainColumnGetter(cols)
return lambda: MappedCollection(keyfunc)
class _SerializableAttrGetter(object):
def __init__(self, name):
self.name = name
self.getter = operator.attrgetter(name)
def __call__(self, target):
return self.getter(target)
def __reduce__(self):
return _SerializableAttrGetter, (self.name, )
def attribute_mapped_collection(attr_name):
"""A dictionary-based collection type with attribute-based keying.
Returns a :class:`.MappedCollection` factory with a keying based on the
'attr_name' attribute of entities in the collection, where ``attr_name``
is the string name of the attribute.
The key value must be immutable for the lifetime of the object. You
can not, for example, map on foreign key values if those key values will
change during the session, i.e. from None to a database-assigned integer
after a session flush.
"""
getter = _SerializableAttrGetter(attr_name)
return lambda: MappedCollection(getter)
def mapped_collection(keyfunc):
"""A dictionary-based collection type with arbitrary keying.
Returns a :class:`.MappedCollection` factory with a keying function
generated from keyfunc, a callable that takes an entity and returns a
key value.
The key value must be immutable for the lifetime of the object. You
can not, for example, map on foreign key values if those key values will
change during the session, i.e. from None to a database-assigned integer
after a session flush.
"""
return lambda: MappedCollection(keyfunc)
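The factory shape returned above, a zero-argument callable producing a keyed dict, can be sketched without the ORM. This stand-in shows only the keying behavior of a `MappedCollection` (keys derived from values), not the event instrumentation; `KeyedDict` and `make_mapped_collection` are hypothetical names:

```python
class KeyedDict(dict):
    """Stand-in for MappedCollection: dict keys derived from values."""
    def __init__(self, keyfunc):
        super(KeyedDict, self).__init__()
        self.keyfunc = keyfunc

    def set(self, value):
        # The key is computed from the value, never supplied by the caller.
        self[self.keyfunc(value)] = value

def make_mapped_collection(keyfunc):
    # Same shape as the real factory: a no-arg callable returning a
    # fresh collection instance each time it is invoked.
    return lambda: KeyedDict(keyfunc)

factory = make_mapped_collection(lambda note: note['keyword'])
coll = factory()
coll.set({'keyword': 'todo', 'text': 'write docs'})
```

Each call to the factory yields an independent, empty collection, which is why `relationship(collection_class=...)` accepts a callable rather than an instance.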
class collection(object):
"""Decorators for entity collection classes.
The decorators fall into two groups: annotations and interception recipes.
The annotating decorators (appender, remover, iterator, linker, converter,
internally_instrumented) indicate the method's purpose and take no
arguments. They are not written with parens::
@collection.appender
def append(self, item): ...
The recipe decorators all require parens, even those that take no
arguments::
@collection.adds('entity')
def insert(self, position, entity): ...
@collection.removes_return()
def popitem(self): ...
"""
# Bundled as a class solely for ease of use: packaging, doc strings,
# importability.
@staticmethod
def appender(fn):
"""Tag the method as the collection appender.
The appender method is called with one positional argument: the value
to append. The method will be automatically decorated with 'adds(1)'
if not already decorated::
@collection.appender
def add(self, entity): ...
# or, equivalently
@collection.appender
@collection.adds(1)
def add(self, entity): ...
# for mapping type, an 'append' may kick out a previous value
# that occupies that slot. consider d['a'] = 'foo'- any previous
# value in d['a'] is discarded.
@collection.appender
@collection.replaces(1)
def add(self, entity):
key = some_key_func(entity)
previous = None
if key in self:
previous = self[key]
self[key] = entity
return previous
If the value to append is not allowed in the collection, you may
raise an exception. Something to remember is that the appender
will be called for each object mapped by a database query. If the
database contains rows that violate your collection semantics, you
will need to get creative to fix the problem, as access via the
collection will not work.
If the appender method is internally instrumented, you must also
receive the keyword argument '_sa_initiator' and ensure its
promulgation to collection events.
"""
fn._sa_instrument_role = 'appender'
return fn
@staticmethod
def remover(fn):
"""Tag the method as the collection remover.
The remover method is called with one positional argument: the value
to remove. The method will be automatically decorated with
:meth:`removes_return` if not already decorated::
@collection.remover
def zap(self, entity): ...
# or, equivalently
@collection.remover
@collection.removes_return()
def zap(self, entity): ...
If the value to remove is not present in the collection, you may
raise an exception or return None to ignore the error.
If the remove method is internally instrumented, you must also
receive the keyword argument '_sa_initiator' and ensure its
promulgation to collection events.
"""
fn._sa_instrument_role = 'remover'
return fn
@staticmethod
def iterator(fn):
"""Tag the method as the collection remover.
The iterator method is called with no arguments. It is expected to
return an iterator over all collection members::
@collection.iterator
def __iter__(self): ...
"""
fn._sa_instrument_role = 'iterator'
return fn
@staticmethod
def internally_instrumented(fn):
"""Tag the method as instrumented.
This tag will prevent any decoration from being applied to the
method. Use this if you are orchestrating your own calls to
:func:`.collection_adapter` in one of the basic SQLAlchemy
interface methods, or to prevent an automatic ABC method
decoration from wrapping your implementation::
# normally an 'extend' method on a list-like class would be
# automatically intercepted and re-implemented in terms of
# SQLAlchemy events and append(). your implementation will
# never be called, unless:
@collection.internally_instrumented
def extend(self, items): ...
"""
fn._sa_instrumented = True
return fn
@staticmethod
def linker(fn):
"""Tag the method as a "linked to attribute" event handler.
This optional event handler will be called when the collection class
is linked to or unlinked from the InstrumentedAttribute. It is
invoked immediately after the '_sa_adapter' property is set on
the instance. A single argument is passed: the collection adapter
that has been linked, or None if unlinking.
"""
fn._sa_instrument_role = 'linker'
return fn
link = linker
"""deprecated; synonym for :meth:`.collection.linker`."""
@staticmethod
def converter(fn):
"""Tag the method as the collection converter.
This optional method will be called when a collection is being
replaced entirely, as in::
myobj.acollection = [newvalue1, newvalue2]
The converter method will receive the object being assigned and should
return an iterable of values suitable for use by the ``appender``
method. A converter must not assign values or mutate the collection,
its sole job is to adapt the value the user provides into an iterable
of values for the ORM's use.
The default converter implementation will use duck-typing to do the
conversion. A dict-like collection will be converted into an iterable
of dictionary values, and other types will simply be iterated::
@collection.converter
def convert(self, other): ...
If the duck-typing of the object does not match the type of this
collection, a TypeError is raised.
Supply an implementation of this method if you want to expand the
range of possible types that can be assigned in bulk or perform
validation on the values about to be assigned.
"""
fn._sa_instrument_role = 'converter'
return fn
@staticmethod
def adds(arg):
"""Mark the method as adding an entity to the collection.
Adds "add to collection" handling to the method. The decorator
argument indicates which method argument holds the SQLAlchemy-relevant
value. Arguments can be specified positionally (i.e. integer) or by
name::
@collection.adds(1)
def push(self, item): ...
@collection.adds('entity')
def do_stuff(self, thing, entity=None): ...
"""
def decorator(fn):
fn._sa_instrument_before = ('fire_append_event', arg)
return fn
return decorator
@staticmethod
def replaces(arg):
"""Mark the method as replacing an entity in the collection.
Adds "add to collection" and "remove from collection" handling to
the method. The decorator argument indicates which method argument
holds the SQLAlchemy-relevant value to be added, and the return value, if
any, will be considered the value to remove.
Arguments can be specified positionally (i.e. integer) or by name::
@collection.replaces(2)
def __setitem__(self, index, item): ...
"""
def decorator(fn):
fn._sa_instrument_before = ('fire_append_event', arg)
fn._sa_instrument_after = 'fire_remove_event'
return fn
return decorator
@staticmethod
def removes(arg):
"""Mark the method as removing an entity in the collection.
Adds "remove from collection" handling to the method. The decorator
argument indicates which method argument holds the SQLAlchemy-relevant
value to be removed. Arguments can be specified positionally (i.e.
integer) or by name::
@collection.removes(1)
def zap(self, item): ...
For methods where the value to remove is not known at call-time, use
collection.removes_return.
"""
def decorator(fn):
fn._sa_instrument_before = ('fire_remove_event', arg)
return fn
return decorator
@staticmethod
def removes_return():
"""Mark the method as removing an entity in the collection.
Adds "remove from collection" handling to the method. The return
value of the method, if any, is considered the value to remove. The
method arguments are not inspected::
@collection.removes_return()
def pop(self): ...
For methods where the value to remove is known at call-time, use
collection.removes.
"""
def decorator(fn):
fn._sa_instrument_after = 'fire_remove_event'
return fn
return decorator
collection_adapter = operator.attrgetter('_sa_adapter')
"""Fetch the :class:`.CollectionAdapter` for a collection."""
class CollectionAdapter(object):
"""Bridges between the ORM and arbitrary Python collections.
Proxies base-level collection operations (append, remove, iterate)
to the underlying Python collection, and emits add/remove events for
entities entering or leaving the collection.
The ORM uses :class:`.CollectionAdapter` exclusively for interaction with
entity collections.
"""
invalidated = False
def __init__(self, attr, owner_state, data):
self._key = attr.key
self._data = weakref.ref(data)
self.owner_state = owner_state
self.link_to_self(data)
def _warn_invalidated(self):
util.warn("This collection has been invalidated.")
@property
def data(self):
"The entity collection being adapted."
return self._data()
@property
def _referenced_by_owner(self):
"""return True if the owner state still refers to this collection.
This will return False within a bulk replace operation,
where this collection is the one being replaced.
"""
return self.owner_state.dict[self._key] is self._data()
@util.memoized_property
def attr(self):
return self.owner_state.manager[self._key].impl
def link_to_self(self, data):
"""Link a collection to this adapter"""
data._sa_adapter = self
if data._sa_linker:
data._sa_linker(self)
def unlink(self, data):
"""Unlink a collection from any adapter"""
del data._sa_adapter
if data._sa_linker:
data._sa_linker(None)
def adapt_like_to_iterable(self, obj):
"""Converts collection-compatible objects to an iterable of values.
Can be passed any type of object, and if the underlying collection
determines that it can be adapted into a stream of values it can
use, returns an iterable of values suitable for append()ing.
This method may raise TypeError or any other suitable exception
if adaptation fails.
If a converter implementation is not supplied on the collection,
a default duck-typing-based implementation is used.
"""
converter = self._data()._sa_converter
if converter is not None:
return converter(obj)
setting_type = util.duck_type_collection(obj)
receiving_type = util.duck_type_collection(self._data())
if obj is None or setting_type != receiving_type:
given = obj is None and 'None' or obj.__class__.__name__
if receiving_type is None:
wanted = self._data().__class__.__name__
else:
wanted = receiving_type.__name__
raise TypeError(
"Incompatible collection type: %s is not %s-like" % (
given, wanted))
# If the object is an adapted collection, return the (iterable)
# adapter.
if getattr(obj, '_sa_adapter', None) is not None:
return obj._sa_adapter
elif setting_type == dict:
if util.py3k:
return obj.values()
else:
return getattr(obj, 'itervalues', obj.values)()
else:
return iter(obj)
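The duck-typing fallback in `adapt_like_to_iterable` can be sketched in isolation. This follows the same control flow, substituting a crude attribute check for `util.duck_type_collection` (both helper names below are illustrative):

```python
def duck_type(obj):
    # Crude stand-in for util.duck_type_collection: classify by the
    # presence of characteristic methods.
    if hasattr(obj, 'values') and hasattr(obj, '__getitem__'):
        return dict
    if hasattr(obj, 'add'):
        return set
    return list

def adapt_to_iterable(obj, receiving_type):
    # Dict-like input yields its values, other iterables are iterated,
    # and a type mismatch raises TypeError, as in the method above.
    setting_type = None if obj is None else duck_type(obj)
    if obj is None or setting_type is not receiving_type:
        given = 'None' if obj is None else obj.__class__.__name__
        raise TypeError(
            "Incompatible collection type: %s is not %s-like"
            % (given, receiving_type.__name__))
    if setting_type is dict:
        return list(obj.values())
    return list(iter(obj))
```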
def append_with_event(self, item, initiator=None):
"""Add an entity to the collection, firing mutation events."""
self._data()._sa_appender(item, _sa_initiator=initiator)
def append_without_event(self, item):
"""Add or restore an entity to the collection, firing no events."""
self._data()._sa_appender(item, _sa_initiator=False)
def append_multiple_without_event(self, items):
"""Add or restore an entity to the collection, firing no events."""
appender = self._data()._sa_appender
for item in items:
appender(item, _sa_initiator=False)
def remove_with_event(self, item, initiator=None):
"""Remove an entity from the collection, firing mutation events."""
self._data()._sa_remover(item, _sa_initiator=initiator)
def remove_without_event(self, item):
"""Remove an entity from the collection, firing no events."""
self._data()._sa_remover(item, _sa_initiator=False)
def clear_with_event(self, initiator=None):
"""Empty the collection, firing a mutation event for each entity."""
remover = self._data()._sa_remover
for item in list(self):
remover(item, _sa_initiator=initiator)
def clear_without_event(self):
"""Empty the collection, firing no events."""
remover = self._data()._sa_remover
for item in list(self):
remover(item, _sa_initiator=False)
def __iter__(self):
"""Iterate over entities in the collection."""
return iter(self._data()._sa_iterator())
def __len__(self):
"""Count entities in the collection."""
return len(list(self._data()._sa_iterator()))
def __bool__(self):
return True
__nonzero__ = __bool__
def fire_append_event(self, item, initiator=None):
"""Notify that a entity has entered the collection.
Initiator is a token owned by the InstrumentedAttribute that
initiated the membership mutation, and should be left as None
unless you are passing along an initiator value from a chained
operation.
"""
if initiator is not False:
if self.invalidated:
self._warn_invalidated()
return self.attr.fire_append_event(
self.owner_state,
self.owner_state.dict,
item, initiator)
else:
return item
def fire_remove_event(self, item, initiator=None):
"""Notify that a entity has been removed from the collection.
Initiator is the InstrumentedAttribute that initiated the membership
mutation, and should be left as None unless you are passing along
an initiator value from a chained operation.
"""
if initiator is not False:
if self.invalidated:
self._warn_invalidated()
self.attr.fire_remove_event(
self.owner_state,
self.owner_state.dict,
item, initiator)
def fire_pre_remove_event(self, initiator=None):
"""Notify that an entity is about to be removed from the collection.
Only called if the entity cannot be removed after calling
fire_remove_event().
"""
if self.invalidated:
self._warn_invalidated()
self.attr.fire_pre_remove_event(
self.owner_state,
self.owner_state.dict,
initiator=initiator)
def __getstate__(self):
return {'key': self._key,
'owner_state': self.owner_state,
'data': self.data}
def __setstate__(self, d):
self._key = d['key']
self.owner_state = d['owner_state']
self._data = weakref.ref(d['data'])
def bulk_replace(values, existing_adapter, new_adapter):
"""Load a new collection, firing events based on prior like membership.
Appends instances in ``values`` onto the ``new_adapter``. Events will be
fired for any instance not present in the ``existing_adapter``. Any
instances in ``existing_adapter`` not present in ``values`` will have
remove events fired upon them.
:param values: An iterable of collection member instances
:param existing_adapter: A :class:`.CollectionAdapter` of
instances to be replaced
:param new_adapter: An empty :class:`.CollectionAdapter`
to load with ``values``
"""
if not isinstance(values, list):
values = list(values)
idset = util.IdentitySet
existing_idset = idset(existing_adapter or ())
constants = existing_idset.intersection(values or ())
additions = idset(values or ()).difference(constants)
removals = existing_idset.difference(constants)
for member in values or ():
if member in additions:
new_adapter.append_with_event(member)
elif member in constants:
new_adapter.append_without_event(member)
if existing_adapter:
for member in removals:
existing_adapter.remove_with_event(member)
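The membership arithmetic in `bulk_replace` can be illustrated with plain sets. Note the real code uses `util.IdentitySet`, which compares by object identity rather than equality; the helper name here is illustrative:

```python
def plan_bulk_replace(values, existing):
    # Same membership arithmetic as bulk_replace, with plain sets.
    existing_set = set(existing)
    constants = existing_set & set(values)  # kept: append without event
    additions = set(values) - constants     # new: fire append events
    removals = existing_set - constants     # gone: fire remove events
    return constants, additions, removals

constants, additions, removals = plan_bulk_replace(
    values=['b', 'c'], existing=['a', 'b'])
```

So replacing `['a', 'b']` with `['b', 'c']` fires an append event only for `'c'` and a remove event only for `'a'`; `'b'` is carried over silently.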
def prepare_instrumentation(factory):
"""Prepare a callable for future use as a collection class factory.
Given a collection class factory (either a type or no-arg callable),
return another factory that will produce compatible instances when
called.
This function is responsible for converting collection_class=list
into the run-time behavior of collection_class=InstrumentedList.
"""
# Convert a builtin to 'Instrumented*'
if factory in __canned_instrumentation:
factory = __canned_instrumentation[factory]
# Create a specimen
cls = type(factory())
# Did factory callable return a builtin?
if cls in __canned_instrumentation:
# Wrap it so that it returns our 'Instrumented*'
factory = __converting_factory(cls, factory)
cls = factory()
# Instrument the class if needed.
if __instrumentation_mutex.acquire():
try:
if getattr(cls, '_sa_instrumented', None) != id(cls):
_instrument_class(cls)
finally:
__instrumentation_mutex.release()
return factory
def __converting_factory(specimen_cls, original_factory):
"""Return a wrapper that converts a "canned" collection like
set, dict, list into the Instrumented* version.
"""
instrumented_cls = __canned_instrumentation[specimen_cls]
def wrapper():
collection = original_factory()
return instrumented_cls(collection)
# often flawed but better than nothing
wrapper.__name__ = "%sWrapper" % original_factory.__name__
wrapper.__doc__ = original_factory.__doc__
return wrapper
def _instrument_class(cls):
"""Modify methods in a class and install instrumentation."""
# In the normal call flow, a request for any of the 3 basic collection
# types is transformed into one of our trivial subclasses
# (e.g. InstrumentedList). Catch anything else that sneaks in here...
if cls.__module__ == '__builtin__':
raise sa_exc.ArgumentError(
"Can not instrument a built-in type. Use a "
"subclass, even a trivial one.")
roles = {}
methods = {}
# search for _sa_instrument_role-decorated methods in
# method resolution order, assign to roles
for supercls in cls.__mro__:
for name, method in vars(supercls).items():
if not util.callable(method):
continue
# note role declarations
if hasattr(method, '_sa_instrument_role'):
role = method._sa_instrument_role
assert role in ('appender', 'remover', 'iterator',
'linker', 'converter')
roles.setdefault(role, name)
# transfer instrumentation requests from decorated function
# to the combined queue
before, after = None, None
if hasattr(method, '_sa_instrument_before'):
op, argument = method._sa_instrument_before
assert op in ('fire_append_event', 'fire_remove_event')
before = op, argument
if hasattr(method, '_sa_instrument_after'):
op = method._sa_instrument_after
assert op in ('fire_append_event', 'fire_remove_event')
after = op
if before:
methods[name] = before[0], before[1], after
elif after:
methods[name] = None, None, after
# see if this class has "canned" roles based on a known
# collection type (dict, set, list). Apply those roles
# as needed to the "roles" dictionary, and also
# prepare "decorator" methods
collection_type = util.duck_type_collection(cls)
if collection_type in __interfaces:
canned_roles, decorators = __interfaces[collection_type]
for role, name in canned_roles.items():
roles.setdefault(role, name)
# apply ABC auto-decoration to methods that need it
for method, decorator in decorators.items():
fn = getattr(cls, method, None)
if (fn and method not in methods and
not hasattr(fn, '_sa_instrumented')):
setattr(cls, method, decorator(fn))
# ensure all roles are present, and apply implicit instrumentation if
# needed
if 'appender' not in roles or not hasattr(cls, roles['appender']):
raise sa_exc.ArgumentError(
"Type %s must elect an appender method to be "
"a collection class" % cls.__name__)
elif (roles['appender'] not in methods and
not hasattr(getattr(cls, roles['appender']), '_sa_instrumented')):
methods[roles['appender']] = ('fire_append_event', 1, None)
if 'remover' not in roles or not hasattr(cls, roles['remover']):
raise sa_exc.ArgumentError(
"Type %s must elect a remover method to be "
"a collection class" % cls.__name__)
elif (roles['remover'] not in methods and
not hasattr(getattr(cls, roles['remover']), '_sa_instrumented')):
methods[roles['remover']] = ('fire_remove_event', 1, None)
if 'iterator' not in roles or not hasattr(cls, roles['iterator']):
raise sa_exc.ArgumentError(
"Type %s must elect an iterator method to be "
"a collection class" % cls.__name__)
# apply ad-hoc instrumentation from decorators, class-level defaults
# and implicit role declarations
for method_name, (before, argument, after) in methods.items():
setattr(cls, method_name,
_instrument_membership_mutator(getattr(cls, method_name),
before, argument, after))
# intern the role map
for role, method_name in roles.items():
setattr(cls, '_sa_%s' % role, getattr(cls, method_name))
cls._sa_adapter = None
if not hasattr(cls, '_sa_linker'):
cls._sa_linker = None
if not hasattr(cls, '_sa_converter'):
cls._sa_converter = None
cls._sa_instrumented = id(cls)
def _instrument_membership_mutator(method, before, argument, after):
"""Route method args and/or return value through the collection
adapter."""
# This isn't smart enough to handle @adds(1) for 'def fn(self, (a, b))'
if before:
fn_args = list(util.flatten_iterator(inspect.getargspec(method)[0]))
if isinstance(argument, int):
pos_arg = argument
named_arg = len(fn_args) > argument and fn_args[argument] or None
else:
if argument in fn_args:
pos_arg = fn_args.index(argument)
else:
pos_arg = None
named_arg = argument
del fn_args
def wrapper(*args, **kw):
if before:
if pos_arg is None:
if named_arg not in kw:
raise sa_exc.ArgumentError(
"Missing argument %s" % argument)
value = kw[named_arg]
else:
if len(args) > pos_arg:
value = args[pos_arg]
elif named_arg in kw:
value = kw[named_arg]
else:
raise sa_exc.ArgumentError(
"Missing argument %s" % argument)
initiator = kw.pop('_sa_initiator', None)
if initiator is False:
executor = None
else:
executor = args[0]._sa_adapter
if before and executor:
getattr(executor, before)(value, initiator)
if not after or not executor:
return method(*args, **kw)
else:
res = method(*args, **kw)
if res is not None:
getattr(executor, after)(res, initiator)
return res
wrapper._sa_instrumented = True
if hasattr(method, "_sa_instrument_role"):
wrapper._sa_instrument_role = method._sa_instrument_role
wrapper.__name__ = method.__name__
wrapper.__doc__ = method.__doc__
return wrapper
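A stripped-down version of the wrapper built above shows the control flow for the common case of a single positional "before" event: extract the value, pop `_sa_initiator`, fire the event on the adapter, then call through to the original method. `FakeAdapter` stands in for a real `CollectionAdapter` and simply records events:

```python
events = []

class FakeAdapter(object):
    # Records events instead of dispatching them to the ORM.
    def fire_append_event(self, value, initiator):
        events.append(('fire_append_event', value, initiator))

def instrument(method, pos_arg):
    # Stripped-down membership mutator: positional "before" event only.
    def wrapper(*args, **kw):
        value = args[pos_arg]
        initiator = kw.pop('_sa_initiator', None)
        # _sa_initiator=False suppresses event firing entirely.
        executor = None if initiator is False else args[0]._sa_adapter
        if executor:
            executor.fire_append_event(value, initiator)
        return method(*args, **kw)
    wrapper._sa_instrumented = True
    return wrapper

class Bag(list):
    _sa_adapter = FakeAdapter()
    push = instrument(list.append, 1)

b = Bag()
b.push('x')
b.push('y', _sa_initiator=False)  # appended, but no event fired
```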
def __set(collection, item, _sa_initiator=None):
"""Run set events, may eventually be inlined into decorators."""
if _sa_initiator is not False:
executor = collection._sa_adapter
if executor:
item = executor.fire_append_event(item, _sa_initiator)
return item
def __del(collection, item, _sa_initiator=None):
"""Run del events, may eventually be inlined into decorators."""
if _sa_initiator is not False:
executor = collection._sa_adapter
if executor:
executor.fire_remove_event(item, _sa_initiator)
def __before_delete(collection, _sa_initiator=None):
"""Special method to run 'commit existing value' methods"""
executor = collection._sa_adapter
if executor:
executor.fire_pre_remove_event(_sa_initiator)
def _list_decorators():
"""Tailored instrumentation wrappers for any list-like class."""
def _tidy(fn):
fn._sa_instrumented = True
fn.__doc__ = getattr(list, fn.__name__).__doc__
def append(fn):
def append(self, item, _sa_initiator=None):
item = __set(self, item, _sa_initiator)
fn(self, item)
_tidy(append)
return append
def remove(fn):
def remove(self, value, _sa_initiator=None):
__before_delete(self, _sa_initiator)
# testlib.pragma exempt:__eq__
fn(self, value)
__del(self, value, _sa_initiator)
_tidy(remove)
return remove
def insert(fn):
def insert(self, index, value):
value = __set(self, value)
fn(self, index, value)
_tidy(insert)
return insert
def __setitem__(fn):
def __setitem__(self, index, value):
if not isinstance(index, slice):
existing = self[index]
if existing is not None:
__del(self, existing)
value = __set(self, value)
fn(self, index, value)
else:
# slice assignment requires __delitem__, insert, __len__
step = index.step or 1
start = index.start or 0
if start < 0:
start += len(self)
if index.stop is not None:
stop = index.stop
else:
stop = len(self)
if stop < 0:
stop += len(self)
if step == 1:
for i in range(start, stop, step):
if len(self) > start:
del self[start]
for i, item in enumerate(value):
self.insert(i + start, item)
else:
rng = list(range(start, stop, step))
if len(value) != len(rng):
raise ValueError(
"attempt to assign sequence of size %s to "
"extended slice of size %s" % (len(value),
len(rng)))
for i, item in zip(rng, value):
self.__setitem__(i, item)
_tidy(__setitem__)
return __setitem__
def __delitem__(fn):
def __delitem__(self, index):
if not isinstance(index, slice):
item = self[index]
__del(self, item)
fn(self, index)
else:
            # slice deletion requires __getslice__ and a slice-grokking
            # __getitem__ for stepped deletion
# note: not breaking this into atomic dels
for item in self[index]:
__del(self, item)
fn(self, index)
_tidy(__delitem__)
return __delitem__
if util.py2k:
def __setslice__(fn):
def __setslice__(self, start, end, values):
for value in self[start:end]:
__del(self, value)
values = [__set(self, value) for value in values]
fn(self, start, end, values)
_tidy(__setslice__)
return __setslice__
def __delslice__(fn):
def __delslice__(self, start, end):
for value in self[start:end]:
__del(self, value)
fn(self, start, end)
_tidy(__delslice__)
return __delslice__
def extend(fn):
def extend(self, iterable):
for value in iterable:
self.append(value)
_tidy(extend)
return extend
def __iadd__(fn):
def __iadd__(self, iterable):
# list.__iadd__ takes any iterable and seems to let TypeError
# raise as-is instead of returning NotImplemented
for value in iterable:
self.append(value)
return self
_tidy(__iadd__)
return __iadd__
def pop(fn):
def pop(self, index=-1):
__before_delete(self)
item = fn(self, index)
__del(self, item)
return item
_tidy(pop)
return pop
if not util.py2k:
def clear(fn):
def clear(self, index=-1):
for item in self:
__del(self, item)
fn(self)
_tidy(clear)
return clear
# __imul__ : not wrapping this. all members of the collection are already
# present, so no need to fire appends... wrapping it with an explicit
# decorator is still possible, so events on *= can be had if they're
# desired. hard to imagine a use case for __imul__, though.
l = locals().copy()
l.pop('_tidy')
return l
def _dict_decorators():
"""Tailored instrumentation wrappers for any dict-like mapping class."""
def _tidy(fn):
fn._sa_instrumented = True
fn.__doc__ = getattr(dict, fn.__name__).__doc__
Unspecified = util.symbol('Unspecified')
def __setitem__(fn):
def __setitem__(self, key, value, _sa_initiator=None):
if key in self:
__del(self, self[key], _sa_initiator)
value = __set(self, value, _sa_initiator)
fn(self, key, value)
_tidy(__setitem__)
return __setitem__
def __delitem__(fn):
def __delitem__(self, key, _sa_initiator=None):
if key in self:
__del(self, self[key], _sa_initiator)
fn(self, key)
_tidy(__delitem__)
return __delitem__
def clear(fn):
def clear(self):
for key in self:
__del(self, self[key])
fn(self)
_tidy(clear)
return clear
def pop(fn):
def pop(self, key, default=Unspecified):
if key in self:
__del(self, self[key])
if default is Unspecified:
return fn(self, key)
else:
return fn(self, key, default)
_tidy(pop)
return pop
def popitem(fn):
def popitem(self):
__before_delete(self)
item = fn(self)
__del(self, item[1])
return item
_tidy(popitem)
return popitem
def setdefault(fn):
def setdefault(self, key, default=None):
if key not in self:
self.__setitem__(key, default)
return default
else:
return self.__getitem__(key)
_tidy(setdefault)
return setdefault
def update(fn):
def update(self, __other=Unspecified, **kw):
if __other is not Unspecified:
if hasattr(__other, 'keys'):
for key in list(__other):
if (key not in self or
self[key] is not __other[key]):
self[key] = __other[key]
else:
for key, value in __other:
if key not in self or self[key] is not value:
self[key] = value
for key in kw:
if key not in self or self[key] is not kw[key]:
self[key] = kw[key]
_tidy(update)
return update
l = locals().copy()
l.pop('_tidy')
l.pop('Unspecified')
return l
_set_binop_bases = (set, frozenset)
def _set_binops_check_strict(self, obj):
"""Allow only set, frozenset and self.__class__-derived
objects in binops."""
return isinstance(obj, _set_binop_bases + (self.__class__,))
def _set_binops_check_loose(self, obj):
"""Allow anything set-like to participate in set binops."""
return (isinstance(obj, _set_binop_bases + (self.__class__,)) or
util.duck_type_collection(obj) == set)
def _set_decorators():
"""Tailored instrumentation wrappers for any set-like class."""
def _tidy(fn):
fn._sa_instrumented = True
fn.__doc__ = getattr(set, fn.__name__).__doc__
Unspecified = util.symbol('Unspecified')
def add(fn):
def add(self, value, _sa_initiator=None):
if value not in self:
value = __set(self, value, _sa_initiator)
# testlib.pragma exempt:__hash__
fn(self, value)
_tidy(add)
return add
def discard(fn):
def discard(self, value, _sa_initiator=None):
# testlib.pragma exempt:__hash__
if value in self:
__del(self, value, _sa_initiator)
# testlib.pragma exempt:__hash__
fn(self, value)
_tidy(discard)
return discard
def remove(fn):
def remove(self, value, _sa_initiator=None):
# testlib.pragma exempt:__hash__
if value in self:
__del(self, value, _sa_initiator)
# testlib.pragma exempt:__hash__
fn(self, value)
_tidy(remove)
return remove
def pop(fn):
def pop(self):
__before_delete(self)
item = fn(self)
__del(self, item)
return item
_tidy(pop)
return pop
def clear(fn):
def clear(self):
for item in list(self):
self.remove(item)
_tidy(clear)
return clear
def update(fn):
def update(self, value):
for item in value:
self.add(item)
_tidy(update)
return update
def __ior__(fn):
def __ior__(self, value):
if not _set_binops_check_strict(self, value):
return NotImplemented
for item in value:
self.add(item)
return self
_tidy(__ior__)
return __ior__
def difference_update(fn):
def difference_update(self, value):
for item in value:
self.discard(item)
_tidy(difference_update)
return difference_update
def __isub__(fn):
def __isub__(self, value):
if not _set_binops_check_strict(self, value):
return NotImplemented
for item in value:
self.discard(item)
return self
_tidy(__isub__)
return __isub__
def intersection_update(fn):
def intersection_update(self, other):
want, have = self.intersection(other), set(self)
remove, add = have - want, want - have
for item in remove:
self.remove(item)
for item in add:
self.add(item)
_tidy(intersection_update)
return intersection_update
def __iand__(fn):
def __iand__(self, other):
if not _set_binops_check_strict(self, other):
return NotImplemented
want, have = self.intersection(other), set(self)
remove, add = have - want, want - have
for item in remove:
self.remove(item)
for item in add:
self.add(item)
return self
_tidy(__iand__)
return __iand__
def symmetric_difference_update(fn):
def symmetric_difference_update(self, other):
want, have = self.symmetric_difference(other), set(self)
remove, add = have - want, want - have
for item in remove:
self.remove(item)
for item in add:
self.add(item)
_tidy(symmetric_difference_update)
return symmetric_difference_update
def __ixor__(fn):
def __ixor__(self, other):
if not _set_binops_check_strict(self, other):
return NotImplemented
want, have = self.symmetric_difference(other), set(self)
remove, add = have - want, want - have
for item in remove:
self.remove(item)
for item in add:
self.add(item)
return self
_tidy(__ixor__)
return __ixor__
l = locals().copy()
l.pop('_tidy')
l.pop('Unspecified')
return l
class InstrumentedList(list):
"""An instrumented version of the built-in list."""
class InstrumentedSet(set):
"""An instrumented version of the built-in set."""
class InstrumentedDict(dict):
"""An instrumented version of the built-in dict."""
__canned_instrumentation = {
list: InstrumentedList,
set: InstrumentedSet,
dict: InstrumentedDict,
}
__interfaces = {
list: (
{'appender': 'append', 'remover': 'remove',
'iterator': '__iter__'}, _list_decorators()
),
set: ({'appender': 'add',
'remover': 'remove',
'iterator': '__iter__'}, _set_decorators()
),
# decorators are required for dicts and object collections.
dict: ({'iterator': 'values'}, _dict_decorators()) if util.py3k
else ({'iterator': 'itervalues'}, _dict_decorators()),
}
class MappedCollection(dict):
"""A basic dictionary-based collection class.
Extends dict with the minimal bag semantics that collection
classes require. ``set`` and ``remove`` are implemented in terms
of a keying function: any callable that takes an object and
returns an object for use as a dictionary key.
"""
def __init__(self, keyfunc):
"""Create a new collection with keying provided by keyfunc.
keyfunc may be any callable that takes an object and returns an object
for use as a dictionary key.
The keyfunc will be called every time the ORM needs to add a member by
value-only (such as when loading instances from the database) or
remove a member. The usual cautions about dictionary keying apply-
``keyfunc(object)`` should return the same output for the life of the
collection. Keying based on mutable properties can result in
unreachable instances "lost" in the collection.
"""
self.keyfunc = keyfunc
@collection.appender
@collection.internally_instrumented
def set(self, value, _sa_initiator=None):
"""Add an item by value, consulting the keyfunc for the key."""
key = self.keyfunc(value)
self.__setitem__(key, value, _sa_initiator)
@collection.remover
@collection.internally_instrumented
def remove(self, value, _sa_initiator=None):
"""Remove an item by value, consulting the keyfunc for the key."""
key = self.keyfunc(value)
# Let self[key] raise if key is not in this collection
# testlib.pragma exempt:__ne__
if self[key] != value:
raise sa_exc.InvalidRequestError(
"Can not remove '%s': collection holds '%s' for key '%s'. "
"Possible cause: is the MappedCollection key function "
"based on mutable properties or properties that only obtain "
"values after flush?" %
(value, self[key], key))
self.__delitem__(key, _sa_initiator)
@collection.converter
def _convert(self, dictlike):
"""Validate and convert a dict-like object into values for set()ing.
This is called behind the scenes when a MappedCollection is replaced
entirely by another collection, as in::
myobj.mappedcollection = {'a':obj1, 'b': obj2} # ...
Raises a TypeError if the key in any (key, value) pair in the dictlike
object does not match the key that this collection's keyfunc would
have assigned for that value.
"""
for incoming_key, value in util.dictlike_iteritems(dictlike):
new_key = self.keyfunc(value)
if incoming_key != new_key:
raise TypeError(
"Found incompatible key %r for value %r; this "
"collection's "
"keying function requires a key of %r for this value." % (
incoming_key, value, new_key))
yield value
# ensure instrumentation is associated with
# these built-in classes; if a user-defined class
# subclasses these and uses @internally_instrumented,
# the superclass is otherwise not instrumented.
# see [ticket:2406].
_instrument_class(MappedCollection)
_instrument_class(InstrumentedList)
_instrument_class(InstrumentedSet)
| bsd-3-clause |
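The `MappedCollection` entry above keys members by a user-supplied `keyfunc`. A minimal pure-Python sketch of that keying pattern (independent of SQLAlchemy's event instrumentation; the `Item` class and names here are hypothetical, for illustration only):

```python
class KeyedCollection(dict):
    """Dict that derives each member's key from a keying function,
    mirroring MappedCollection.set / MappedCollection.remove semantics."""

    def __init__(self, keyfunc):
        self.keyfunc = keyfunc  # callable: member -> hashable key

    def set(self, value):
        # Add by value only; the key is computed by keyfunc.
        self[self.keyfunc(value)] = value

    def remove(self, value):
        key = self.keyfunc(value)
        # Mirror MappedCollection's sanity check before deleting.
        if self[key] != value:
            raise ValueError(
                "collection holds %r for key %r, not %r" % (self[key], key, value))
        del self[key]


class Item:
    def __init__(self, name):
        self.name = name


coll = KeyedCollection(lambda item: item.name)
a = Item("a")
coll.set(a)
assert coll["a"] is a
coll.remove(a)
assert "a" not in coll
```

As in the original, keying on a mutable attribute would strand members under stale keys, which is exactly the failure mode the `InvalidRequestError` message above warns about.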
Nayaata/Web-Front-end-develeopment | AvalancheReport/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/MSVSToolFile.py | 2736 | 1804 | # Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Visual Studio project reader/writer."""
import gyp.common
import gyp.easy_xml as easy_xml
class Writer(object):
"""Visual Studio XML tool file writer."""
def __init__(self, tool_file_path, name):
"""Initializes the tool file.
Args:
tool_file_path: Path to the tool file.
name: Name of the tool file.
"""
self.tool_file_path = tool_file_path
self.name = name
self.rules_section = ['Rules']
def AddCustomBuildRule(self, name, cmd, description,
additional_dependencies,
outputs, extensions):
"""Adds a rule to the tool file.
Args:
name: Name of the rule.
description: Description of the rule.
cmd: Command line of the rule.
additional_dependencies: other files which may trigger the rule.
outputs: outputs of the rule.
extensions: extensions handled by the rule.
"""
rule = ['CustomBuildRule',
{'Name': name,
'ExecutionDescription': description,
'CommandLine': cmd,
'Outputs': ';'.join(outputs),
'FileExtensions': ';'.join(extensions),
'AdditionalDependencies':
';'.join(additional_dependencies)
}]
self.rules_section.append(rule)
def WriteIfChanged(self):
"""Writes the tool file."""
content = ['VisualStudioToolFile',
{'Version': '8.00',
'Name': self.name
},
self.rules_section
]
easy_xml.WriteXmlIfChanged(content, self.tool_file_path,
encoding="Windows-1252")
| mit |
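The gyp `Writer` above builds XML as nested lists of the shape `['Tag', {attrs}?, child, ...]` and hands them to `easy_xml.WriteXmlIfChanged`. A simplified sketch of how such a spec can be rendered (the real `easy_xml` module also handles pretty-printing, self-closing tags, and file encoding, none of which is reproduced here):

```python
from xml.sax.saxutils import quoteattr


def specification_to_xml(spec):
    """Render a gyp-style nested-list XML spec: ['Tag', {attrs}?, child, ...]."""
    name, rest = spec[0], spec[1:]
    attrs = {}
    # An optional dict in second position carries the tag's attributes.
    if rest and isinstance(rest[0], dict):
        attrs, rest = rest[0], rest[1:]
    attr_text = "".join(
        " %s=%s" % (key, quoteattr(str(value)))
        for key, value in sorted(attrs.items()))
    children = "".join(specification_to_xml(child) for child in rest)
    return "<%s%s>%s</%s>" % (name, attr_text, children, name)


xml = specification_to_xml(
    ["VisualStudioToolFile", {"Version": "8.00", "Name": "demo"}, ["Rules"]])
assert xml == ('<VisualStudioToolFile Name="demo" Version="8.00">'
               '<Rules></Rules></VisualStudioToolFile>')
```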
jaruba/chromium.src | tools/resources/list_resources_removed_by_repack.py | 95 | 3297 | #!/usr/bin/env python
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import re
import sys
usage = """%s BUILDTYPE BUILDDIR
BUILDTYPE: either chromium or chrome.
BUILDDIR: The path to the output directory. e.g. relpath/to/out/Release
Prints out (to stdout) the sorted list of resource ids that are marked as
unused during the repacking process in the given build log (via stdin).
Additionally, attempt to print out the name of the resource and the generated
header file that contains the resource.
This script is used to print the list of resources that are not used so that
developers will notice and fix their .grd files.
"""
def GetResourceIdsFromRepackMessage(in_data):
"""Returns sorted set of resource ids that are not used from in_data.
"""
unused_resources = set()
unused_pattern = re.compile(
'RePackFromDataPackStrings Removed Key: (?P<resource_id>[0-9]+)')
for line in in_data:
match = unused_pattern.match(line)
if match:
resource_id = int(match.group('resource_id'))
unused_resources.add(resource_id)
return sorted(unused_resources)
def Main():
if len(sys.argv) != 3:
sys.stderr.write(usage % sys.argv[0])
return 1
build_type = sys.argv[1]
build_dir = sys.argv[2]
if build_type not in ('chromium', 'chrome'):
sys.stderr.write(usage % sys.argv[0])
return 1
generated_output_dir = os.path.join(build_dir, 'gen')
if not os.path.exists(generated_output_dir):
sys.stderr.write('Cannot find gen dir %s' % generated_output_dir)
return 1
if build_type == 'chromium':
excluded_header = 'google_chrome_strings.h'
else:
excluded_header = 'chromium_strings.h'
data_files = []
for root, dirs, files in os.walk(generated_output_dir):
if os.path.basename(root) != 'grit':
continue
header_files = [header for header in files if header.endswith('.h')]
if excluded_header in header_files:
header_files.remove(excluded_header)
data_files.extend([os.path.join(root, header) for header in header_files])
resource_id_to_name_file_map = {}
resource_pattern = re.compile('#define (?P<resource_name>[A-Z0-9_]+).* '
'(?P<resource_id>[0-9]+)$')
for f in data_files:
data = open(f).read()
for line in data.splitlines():
match = resource_pattern.match(line)
if match:
resource_id = int(match.group('resource_id'))
resource_name = match.group('resource_name')
if resource_id in resource_id_to_name_file_map:
print 'Duplicate:', resource_id
print (resource_name, f)
print resource_id_to_name_file_map[resource_id]
raise
resource_id_to_name_file_map[resource_id] = (resource_name, f)
unused_resources = GetResourceIdsFromRepackMessage(sys.stdin)
for resource_id in unused_resources:
if resource_id not in resource_id_to_name_file_map:
print 'WARNING: Unknown resource id', resource_id
continue
(resource_name, filename) = resource_id_to_name_file_map[resource_id]
sys.stdout.write('%d: %s in %s\n' % (resource_id, resource_name, filename))
return 0
if __name__ == '__main__':
sys.exit(Main())
| bsd-3-clause |
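The core of the Chromium script above is the regex that maps `#define` lines in generated grit headers back to resource names. The same extraction in isolation, against a made-up header string (Python 3 syntax here, although the original script is Python 2):

```python
import re

# Matches lines like "#define IDS_FOO 12345" and captures name and id.
resource_pattern = re.compile(r'#define (?P<resource_name>[A-Z0-9_]+).* '
                              r'(?P<resource_id>[0-9]+)$')

header = """\
#define IDS_HELLO 101
#define IDS_WORLD 202
// not a define
"""

id_to_name = {}
for line in header.splitlines():
    match = resource_pattern.match(line)
    if match:
        id_to_name[int(match.group('resource_id'))] = match.group('resource_name')

assert id_to_name == {101: 'IDS_HELLO', 202: 'IDS_WORLD'}
```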
riccardomc/moto | moto/datapipeline/models.py | 14 | 4716 | from __future__ import unicode_literals
import datetime
import boto.datapipeline
from moto.core import BaseBackend
from .utils import get_random_pipeline_id, remove_capitalization_of_dict_keys
class PipelineObject(object):
def __init__(self, object_id, name, fields):
self.object_id = object_id
self.name = name
self.fields = fields
def to_json(self):
return {
"fields": self.fields,
"id": self.object_id,
"name": self.name,
}
class Pipeline(object):
def __init__(self, name, unique_id):
self.name = name
self.unique_id = unique_id
self.description = ""
self.pipeline_id = get_random_pipeline_id()
self.creation_time = datetime.datetime.utcnow()
self.objects = []
self.status = "PENDING"
@property
def physical_resource_id(self):
return self.pipeline_id
def to_meta_json(self):
return {
"id": self.pipeline_id,
"name": self.name,
}
def to_json(self):
return {
"description": self.description,
"fields": [{
"key": "@pipelineState",
"stringValue": self.status,
}, {
"key": "description",
"stringValue": self.description
}, {
"key": "name",
"stringValue": self.name
}, {
"key": "@creationTime",
"stringValue": datetime.datetime.strftime(self.creation_time, '%Y-%m-%dT%H-%M-%S'),
}, {
"key": "@id",
"stringValue": self.pipeline_id,
}, {
"key": "@sphere",
"stringValue": "PIPELINE"
}, {
"key": "@version",
"stringValue": "1"
}, {
"key": "@userId",
"stringValue": "924374875933"
}, {
"key": "@accountId",
"stringValue": "924374875933"
}, {
"key": "uniqueId",
"stringValue": self.unique_id
}],
"name": self.name,
"pipelineId": self.pipeline_id,
"tags": [
]
}
def set_pipeline_objects(self, pipeline_objects):
self.objects = [
PipelineObject(pipeline_object['id'], pipeline_object['name'], pipeline_object['fields'])
for pipeline_object in remove_capitalization_of_dict_keys(pipeline_objects)
]
def activate(self):
self.status = "SCHEDULED"
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
datapipeline_backend = datapipeline_backends[region_name]
properties = cloudformation_json["Properties"]
cloudformation_unique_id = "cf-" + properties["Name"]
pipeline = datapipeline_backend.create_pipeline(properties["Name"], cloudformation_unique_id)
datapipeline_backend.put_pipeline_definition(pipeline.pipeline_id, properties["PipelineObjects"])
if properties["Activate"]:
pipeline.activate()
return pipeline
class DataPipelineBackend(BaseBackend):
def __init__(self):
self.pipelines = {}
def create_pipeline(self, name, unique_id):
pipeline = Pipeline(name, unique_id)
self.pipelines[pipeline.pipeline_id] = pipeline
return pipeline
def list_pipelines(self):
return self.pipelines.values()
def describe_pipelines(self, pipeline_ids):
pipelines = [pipeline for pipeline in self.pipelines.values() if pipeline.pipeline_id in pipeline_ids]
return pipelines
def get_pipeline(self, pipeline_id):
return self.pipelines[pipeline_id]
def put_pipeline_definition(self, pipeline_id, pipeline_objects):
pipeline = self.get_pipeline(pipeline_id)
pipeline.set_pipeline_objects(pipeline_objects)
def get_pipeline_definition(self, pipeline_id):
pipeline = self.get_pipeline(pipeline_id)
return pipeline.objects
def describe_objects(self, object_ids, pipeline_id):
pipeline = self.get_pipeline(pipeline_id)
pipeline_objects = [
pipeline_object for pipeline_object in pipeline.objects
if pipeline_object.object_id in object_ids
]
return pipeline_objects
def activate_pipeline(self, pipeline_id):
pipeline = self.get_pipeline(pipeline_id)
pipeline.activate()
datapipeline_backends = {}
for region in boto.datapipeline.regions():
datapipeline_backends[region.name] = DataPipelineBackend()
| apache-2.0 |
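The moto `Pipeline.to_json` above serializes attributes into the AWS Data Pipeline "fields" list shape, a list of `{"key": ..., "stringValue": ...}` entries. A small round-trip sketch of that encoding (helper names are made up for illustration; the real API also supports `refValue` entries, not shown):

```python
def to_fields(attrs):
    """Encode a plain dict as the Data Pipeline 'fields' list shape."""
    return [{"key": key, "stringValue": str(value)}
            for key, value in sorted(attrs.items())]


def from_fields(fields):
    """Decode a 'fields' list back into a dict of strings."""
    return {field["key"]: field["stringValue"] for field in fields}


fields = to_fields({"@pipelineState": "PENDING", "name": "demo"})
assert fields == [
    {"key": "@pipelineState", "stringValue": "PENDING"},
    {"key": "name", "stringValue": "demo"},
]
assert from_fields(fields) == {"@pipelineState": "PENDING", "name": "demo"}
```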
Iconik/eve-suite | src/view/blueprintcalculator/ui_blueprint_calculator.py | 1 | 4383 | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'ui_blueprint_calculator.ui'
#
# Created: Sat Mar 19 14:47:54 2011
# by: pyside-uic 0.2.7 running on PySide 1.0.0~rc1
#
# WARNING! All changes made in this file will be lost!
from PySide import QtCore, QtGui
class Ui_BlueprintCalculator(object):
def setupUi(self, BlueprintCalculator):
BlueprintCalculator.setObjectName("BlueprintCalculator")
BlueprintCalculator.resize(801, 544)
self.centralwidget = QtGui.QWidget(BlueprintCalculator)
self.centralwidget.setObjectName("centralwidget")
self.verticalLayout = QtGui.QVBoxLayout(self.centralwidget)
self.verticalLayout.setMargin(0)
self.verticalLayout.setObjectName("verticalLayout")
self.widget = QtGui.QWidget(self.centralwidget)
self.widget.setObjectName("widget")
self.horizontalLayout = QtGui.QHBoxLayout(self.widget)
self.horizontalLayout.setMargin(0)
self.horizontalLayout.setMargin(0)
self.horizontalLayout.setObjectName("horizontalLayout")
self.groupBox = QtGui.QGroupBox(self.widget)
self.groupBox.setObjectName("groupBox")
self.horizontalLayout_2 = QtGui.QHBoxLayout(self.groupBox)
self.horizontalLayout_2.setMargin(0)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.blueprint_combo = QtGui.QComboBox(self.groupBox)
self.blueprint_combo.setEditable(True)
self.blueprint_combo.setInsertPolicy(QtGui.QComboBox.NoInsert)
self.blueprint_combo.setSizeAdjustPolicy(QtGui.QComboBox.AdjustToContents)
self.blueprint_combo.setObjectName("blueprint_combo")
self.horizontalLayout_2.addWidget(self.blueprint_combo)
self.horizontalLayout.addWidget(self.groupBox)
self.groupBox_2 = QtGui.QGroupBox(self.widget)
self.groupBox_2.setObjectName("groupBox_2")
self.horizontalLayout_4 = QtGui.QHBoxLayout(self.groupBox_2)
self.horizontalLayout_4.setMargin(0)
self.horizontalLayout_4.setObjectName("horizontalLayout_4")
self.character_combo = QtGui.QComboBox(self.groupBox_2)
self.character_combo.setEditable(False)
self.character_combo.setSizeAdjustPolicy(QtGui.QComboBox.AdjustToContents)
self.character_combo.setFrame(True)
self.character_combo.setObjectName("character_combo")
self.character_combo.addItem("")
self.character_combo.addItem("")
self.horizontalLayout_4.addWidget(self.character_combo)
self.horizontalLayout.addWidget(self.groupBox_2)
self.verticalLayout.addWidget(self.widget)
self.widget_2 = QtGui.QWidget(self.centralwidget)
self.widget_2.setObjectName("widget_2")
self.horizontalLayout_3 = QtGui.QHBoxLayout(self.widget_2)
self.horizontalLayout_3.setMargin(0)
self.horizontalLayout_3.setMargin(0)
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.manufacturing_tree = QtGui.QTreeView(self.widget_2)
self.manufacturing_tree.setObjectName("manufacturing_tree")
self.horizontalLayout_3.addWidget(self.manufacturing_tree)
self.blueprint_tree = QtGui.QTreeView(self.widget_2)
self.blueprint_tree.setObjectName("blueprint_tree")
self.horizontalLayout_3.addWidget(self.blueprint_tree)
self.verticalLayout.addWidget(self.widget_2)
BlueprintCalculator.setCentralWidget(self.centralwidget)
self.retranslateUi(BlueprintCalculator)
QtCore.QMetaObject.connectSlotsByName(BlueprintCalculator)
def retranslateUi(self, BlueprintCalculator):
BlueprintCalculator.setWindowTitle(QtGui.QApplication.translate("BlueprintCalculator", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
self.groupBox.setTitle(QtGui.QApplication.translate("BlueprintCalculator", "Blueprint", None, QtGui.QApplication.UnicodeUTF8))
self.groupBox_2.setTitle(QtGui.QApplication.translate("BlueprintCalculator", "Character", None, QtGui.QApplication.UnicodeUTF8))
self.character_combo.setItemText(0, QtGui.QApplication.translate("BlueprintCalculator", "All Level 5 Skills", None, QtGui.QApplication.UnicodeUTF8))
self.character_combo.setItemText(1, QtGui.QApplication.translate("BlueprintCalculator", "No Skills", None, QtGui.QApplication.UnicodeUTF8))
| gpl-3.0 |
bwsblake/lettercounter | django-norel-env/lib/python2.7/site-packages/django/contrib/auth/tests/signals.py | 227 | 3278 | from django.contrib.auth import signals
from django.contrib.auth.models import User
from django.contrib.auth.tests.utils import skipIfCustomUser
from django.test import TestCase
from django.test.client import RequestFactory
from django.test.utils import override_settings
@skipIfCustomUser
@override_settings(USE_TZ=False, PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',))
class SignalTestCase(TestCase):
urls = 'django.contrib.auth.tests.urls'
fixtures = ['authtestdata.json']
def listener_login(self, user, **kwargs):
self.logged_in.append(user)
def listener_logout(self, user, **kwargs):
self.logged_out.append(user)
def listener_login_failed(self, sender, credentials, **kwargs):
self.login_failed.append(credentials)
def setUp(self):
"""Set up the listeners and reset the logged in/logged out counters"""
self.logged_in = []
self.logged_out = []
self.login_failed = []
signals.user_logged_in.connect(self.listener_login)
signals.user_logged_out.connect(self.listener_logout)
signals.user_login_failed.connect(self.listener_login_failed)
def tearDown(self):
"""Disconnect the listeners"""
signals.user_logged_in.disconnect(self.listener_login)
signals.user_logged_out.disconnect(self.listener_logout)
signals.user_login_failed.disconnect(self.listener_login_failed)
def test_login(self):
# Only a successful login will trigger the success signal.
self.client.login(username='testclient', password='bad')
self.assertEqual(len(self.logged_in), 0)
self.assertEqual(len(self.login_failed), 1)
self.assertEqual(self.login_failed[0]['username'], 'testclient')
# verify the password is cleansed
self.assertTrue('***' in self.login_failed[0]['password'])
# Like this:
self.client.login(username='testclient', password='password')
self.assertEqual(len(self.logged_in), 1)
self.assertEqual(self.logged_in[0].username, 'testclient')
# Ensure there were no more failures.
self.assertEqual(len(self.login_failed), 1)
def test_logout_anonymous(self):
# The log_out function will still trigger the signal for anonymous
# users.
self.client.get('/logout/next_page/')
self.assertEqual(len(self.logged_out), 1)
self.assertEqual(self.logged_out[0], None)
def test_logout(self):
self.client.login(username='testclient', password='password')
self.client.get('/logout/next_page/')
self.assertEqual(len(self.logged_out), 1)
self.assertEqual(self.logged_out[0].username, 'testclient')
def test_update_last_login(self):
"""Ensure that only `last_login` is updated in `update_last_login`"""
user = User.objects.get(pk=3)
old_last_login = user.last_login
user.username = "This username shouldn't get saved"
request = RequestFactory().get('/login')
signals.user_logged_in.send(sender=user.__class__, request=request,
user=user)
user = User.objects.get(pk=3)
self.assertEqual(user.username, 'staff')
self.assertNotEqual(user.last_login, old_last_login)
| mit |
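The Django test above exercises the connect/disconnect/send lifecycle of `user_logged_in`, `user_logged_out`, and `user_login_failed`. A minimal observer sketch of that dispatch pattern (not Django's actual `Signal` implementation, which also supports weak references, `dispatch_uid`, and sender filtering):

```python
class Signal:
    """Minimal connect/disconnect/send dispatcher."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def disconnect(self, receiver):
        self._receivers.remove(receiver)

    def send(self, sender, **kwargs):
        # Call every receiver; return (receiver, response) pairs like Django.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]


user_logged_in = Signal()
seen = []
listener = lambda sender, user, **kw: seen.append(user)
user_logged_in.connect(listener)
user_logged_in.send(sender=None, user="testclient")
user_logged_in.disconnect(listener)
user_logged_in.send(sender=None, user="ignored")  # no receivers remain
assert seen == ["testclient"]
```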
batermj/algorithm-challenger | code-analysis/programming_anguage/python/source_codes/Python3.5.9/Python-3.5.9/Lib/test/support/__init__.py | 1 | 86205 | """Supporting definitions for the Python regression tests."""
if __name__ != 'test.support':
raise ImportError('support must be imported from the test package')
import collections.abc
import contextlib
import errno
import faulthandler
import fnmatch
import functools
import gc
import importlib
import importlib.util
import logging.handlers
import nntplib
import os
import platform
import re
import shutil
import socket
import stat
import struct
import subprocess
import sys
import sysconfig
import tempfile
import time
import unittest
import urllib.error
import warnings
try:
import _thread, threading
except ImportError:
_thread = None
threading = None
try:
import multiprocessing.process
except ImportError:
multiprocessing = None
try:
import zlib
except ImportError:
zlib = None
try:
import gzip
except ImportError:
gzip = None
try:
import bz2
except ImportError:
bz2 = None
try:
import lzma
except ImportError:
lzma = None
try:
import resource
except ImportError:
resource = None
__all__ = [
# globals
"PIPE_MAX_SIZE", "verbose", "max_memuse", "use_resources", "failfast",
# exceptions
"Error", "TestFailed", "ResourceDenied",
# imports
"import_module", "import_fresh_module", "CleanImport",
# modules
"unload", "forget",
# io
"record_original_stdout", "get_original_stdout", "captured_stdout",
"captured_stdin", "captured_stderr",
# filesystem
"TESTFN", "SAVEDCWD", "unlink", "rmtree", "temp_cwd", "findfile",
"create_empty_file", "can_symlink", "fs_is_case_insensitive",
# unittest
"is_resource_enabled", "requires", "requires_freebsd_version",
"requires_linux_version", "requires_mac_ver", "check_syntax_error",
"TransientResource", "time_out", "socket_peer_reset", "ioerror_peer_reset",
"transient_internet", "BasicTestRunner", "run_unittest", "run_doctest",
"skip_unless_symlink", "requires_gzip", "requires_bz2", "requires_lzma",
"bigmemtest", "bigaddrspacetest", "cpython_only", "get_attribute",
"requires_IEEE_754", "skip_unless_xattr", "requires_zlib",
"anticipate_failure", "load_package_tests", "detect_api_mismatch",
"requires_multiprocessing_queue",
# sys
"is_jython", "check_impl_detail",
# network
"HOST", "IPV6_ENABLED", "find_unused_port", "bind_port", "open_urlresource",
# processes
'temp_umask', "reap_children",
# logging
"TestHandler",
# threads
"threading_setup", "threading_cleanup", "reap_threads", "start_threads",
# miscellaneous
"check_warnings", "check_no_resource_warning", "EnvironmentVarGuard",
"run_with_locale", "swap_item",
"swap_attr", "Matcher", "set_memlimit", "SuppressCrashReport", "sortdict",
"run_with_tz",
]
class Error(Exception):
"""Base class for regression test exceptions."""
class TestFailed(Error):
"""Test failed."""
class ResourceDenied(unittest.SkipTest):
"""Test skipped because it requested a disallowed resource.
This is raised when a test calls requires() for a resource that
has not be enabled. It is used to distinguish between expected
and unexpected skips.
"""
@contextlib.contextmanager
def _ignore_deprecated_imports(ignore=True):
"""Context manager to suppress package and module deprecation
warnings when importing them.
If ignore is False, this context manager has no effect.
"""
if ignore:
with warnings.catch_warnings():
warnings.filterwarnings("ignore", ".+ (module|package)",
DeprecationWarning)
yield
else:
yield
def import_module(name, deprecated=False, *, required_on=()):
"""Import and return the module to be tested, raising SkipTest if
it is not available.
If deprecated is True, any module or package deprecation messages
will be suppressed. If a module is required on a platform but optional for
others, set required_on to an iterable of platform prefixes which will be
compared against sys.platform.
"""
with _ignore_deprecated_imports(deprecated):
try:
return importlib.import_module(name)
except ImportError as msg:
if sys.platform.startswith(tuple(required_on)):
raise
raise unittest.SkipTest(str(msg))
def _save_and_remove_module(name, orig_modules):
"""Helper function to save and remove a module from sys.modules
Raise ImportError if the module can't be imported.
"""
# try to import the module and raise an error if it can't be imported
if name not in sys.modules:
__import__(name)
del sys.modules[name]
for modname in list(sys.modules):
if modname == name or modname.startswith(name + '.'):
orig_modules[modname] = sys.modules[modname]
del sys.modules[modname]
def _save_and_block_module(name, orig_modules):
"""Helper function to save and block a module in sys.modules
Return True if the module was in sys.modules, False otherwise.
"""
saved = True
try:
orig_modules[name] = sys.modules[name]
except KeyError:
saved = False
sys.modules[name] = None
return saved
def anticipate_failure(condition):
"""Decorator to mark a test that is known to be broken in some cases
Any use of this decorator should have a comment identifying the
associated tracker issue.
"""
if condition:
return unittest.expectedFailure
return lambda f: f
def load_package_tests(pkg_dir, loader, standard_tests, pattern):
"""Generic load_tests implementation for simple test packages.
Most packages can implement load_tests using this function as follows:
def load_tests(*args):
return load_package_tests(os.path.dirname(__file__), *args)
"""
if pattern is None:
pattern = "test*"
top_dir = os.path.dirname( # Lib
os.path.dirname( # test
os.path.dirname(__file__))) # support
package_tests = loader.discover(start_dir=pkg_dir,
top_level_dir=top_dir,
pattern=pattern)
standard_tests.addTests(package_tests)
return standard_tests
def import_fresh_module(name, fresh=(), blocked=(), deprecated=False):
"""Import and return a module, deliberately bypassing sys.modules.
This function imports and returns a fresh copy of the named Python module
by removing the named module from sys.modules before doing the import.
Note that unlike reload, the original module is not affected by
this operation.
*fresh* is an iterable of additional module names that are also removed
from the sys.modules cache before doing the import.
*blocked* is an iterable of module names that are replaced with None
in the module cache during the import to ensure that attempts to import
them raise ImportError.
The named module and any modules named in the *fresh* and *blocked*
parameters are saved before starting the import and then reinserted into
sys.modules when the fresh import is complete.
Module and package deprecation messages are suppressed during this import
if *deprecated* is True.
This function will raise ImportError if the named module cannot be
imported.
"""
# NOTE: test_heapq, test_json and test_warnings include extra sanity checks
# to make sure that this utility function is working as expected
with _ignore_deprecated_imports(deprecated):
# Keep track of modules saved for later restoration as well
# as those which just need a blocking entry removed
orig_modules = {}
names_to_remove = []
_save_and_remove_module(name, orig_modules)
try:
for fresh_name in fresh:
_save_and_remove_module(fresh_name, orig_modules)
for blocked_name in blocked:
if not _save_and_block_module(blocked_name, orig_modules):
names_to_remove.append(blocked_name)
fresh_module = importlib.import_module(name)
except ImportError:
fresh_module = None
finally:
for orig_name, module in orig_modules.items():
sys.modules[orig_name] = module
for name_to_remove in names_to_remove:
del sys.modules[name_to_remove]
return fresh_module
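# A minimal, self-contained sketch of the "fresh import" idea that
# import_fresh_module() implements above: drop the module from sys.modules,
# import it again, and restore the original cache entry afterwards. The
# module name ("colorsys") is just an arbitrary stdlib example, not part of
# this helper's API.

```python
import importlib
import sys

def fresh_copy(name):
    # Pop the cached module (if any), import a fresh copy, then restore
    # the original entry so other code is unaffected.
    saved = sys.modules.pop(name, None)
    try:
        return importlib.import_module(name)
    finally:
        if saved is not None:
            sys.modules[name] = saved

import colorsys
fresh = fresh_copy("colorsys")
# The fresh copy is a distinct module object; the cached one is untouched.
assert fresh is not colorsys
assert sys.modules["colorsys"] is colorsys
```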
def get_attribute(obj, name):
"""Get an attribute, raising SkipTest if AttributeError is raised."""
try:
attribute = getattr(obj, name)
except AttributeError:
raise unittest.SkipTest("object %r has no attribute %r" % (obj, name))
else:
return attribute
verbose = 1 # Flag set to 0 by regrtest.py
use_resources = None # Flag set to [] by regrtest.py
max_memuse = 0 # Disable bigmem tests (they will still be run with
                     # small sizes, to make sure they work).
real_max_memuse = 0
failfast = False
match_tests = None
# _original_stdout is meant to hold stdout at the time regrtest began.
# This may be "the real" stdout, or IDLE's emulation of stdout, or whatever.
# The point is to have some flavor of stdout the user can actually see.
_original_stdout = None
def record_original_stdout(stdout):
global _original_stdout
_original_stdout = stdout
def get_original_stdout():
return _original_stdout or sys.stdout
def unload(name):
try:
del sys.modules[name]
except KeyError:
pass
def _force_run(path, func, *args):
try:
return func(*args)
except OSError as err:
if verbose >= 2:
print('%s: %s' % (err.__class__.__name__, err))
print('re-run %s%r' % (func.__name__, args))
os.chmod(path, stat.S_IRWXU)
return func(*args)
if sys.platform.startswith("win"):
def _waitfor(func, pathname, waitall=False):
# Perform the operation
func(pathname)
# Now setup the wait loop
if waitall:
dirname = pathname
else:
dirname, name = os.path.split(pathname)
dirname = dirname or '.'
# Check for `pathname` to be removed from the filesystem.
# The exponential backoff of the timeout amounts to a total
# of ~1 second after which the deletion is probably an error
# anyway.
# Testing on an i7@4.3GHz shows that usually only 1 iteration is
# required when contention occurs.
timeout = 0.001
while timeout < 1.0:
# Note we are only testing for the existence of the file(s) in
# the contents of the directory regardless of any security or
# access rights. If we have made it this far, we have sufficient
# permissions to do that much using Python's equivalent of the
# Windows API FindFirstFile.
# Other Windows APIs can fail or give incorrect results when
# dealing with files that are pending deletion.
L = os.listdir(dirname)
if not (L if waitall else name in L):
return
# Increase the timeout and try again
time.sleep(timeout)
timeout *= 2
warnings.warn('tests may fail, delete still pending for ' + pathname,
RuntimeWarning, stacklevel=4)
def _unlink(filename):
_waitfor(os.unlink, filename)
def _rmdir(dirname):
_waitfor(os.rmdir, dirname)
def _rmtree(path):
def _rmtree_inner(path):
for name in _force_run(path, os.listdir, path):
fullname = os.path.join(path, name)
try:
mode = os.lstat(fullname).st_mode
except OSError as exc:
print("support.rmtree(): os.lstat(%r) failed with %s" % (fullname, exc),
file=sys.__stderr__)
mode = 0
if stat.S_ISDIR(mode):
_waitfor(_rmtree_inner, fullname, waitall=True)
_force_run(fullname, os.rmdir, fullname)
else:
_force_run(fullname, os.unlink, fullname)
_waitfor(_rmtree_inner, path, waitall=True)
_waitfor(lambda p: _force_run(p, os.rmdir, p), path)
else:
_unlink = os.unlink
_rmdir = os.rmdir
def _rmtree(path):
try:
shutil.rmtree(path)
return
except OSError:
pass
def _rmtree_inner(path):
for name in _force_run(path, os.listdir, path):
fullname = os.path.join(path, name)
try:
mode = os.lstat(fullname).st_mode
except OSError:
mode = 0
if stat.S_ISDIR(mode):
_rmtree_inner(fullname)
_force_run(path, os.rmdir, fullname)
else:
_force_run(path, os.unlink, fullname)
_rmtree_inner(path)
os.rmdir(path)
def unlink(filename):
try:
_unlink(filename)
except (FileNotFoundError, NotADirectoryError):
pass
def rmdir(dirname):
try:
_rmdir(dirname)
except FileNotFoundError:
pass
def rmtree(path):
try:
_rmtree(path)
except FileNotFoundError:
pass
def make_legacy_pyc(source):
"""Move a PEP 3147/488 pyc file to its legacy pyc location.
:param source: The file system path to the source file. The source file
does not need to exist, however the PEP 3147/488 pyc file must exist.
:return: The file system path to the legacy pyc file.
"""
pyc_file = importlib.util.cache_from_source(source)
up_one = os.path.dirname(os.path.abspath(source))
legacy_pyc = os.path.join(up_one, source + 'c')
os.rename(pyc_file, legacy_pyc)
return legacy_pyc
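# A quick look at the path arithmetic make_legacy_pyc() relies on. The source
# path below is hypothetical; nothing is created or renamed on disk.

```python
import importlib.util
import os

source = os.path.join("pkg", "mod.py")
pyc = importlib.util.cache_from_source(source)
legacy = os.path.join(os.path.dirname(source), "mod.pyc")

# PEP 3147/488 caches live under __pycache__ with a tagged filename ...
assert "__pycache__" in pyc
# ... while the legacy location is simply the source path plus "c".
assert legacy == source + "c"
```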
def forget(modname):
"""'Forget' a module was ever imported.
This removes the module from sys.modules and deletes any PEP 3147/488 or
legacy .pyc files.
"""
unload(modname)
for dirname in sys.path:
source = os.path.join(dirname, modname + '.py')
# It doesn't matter if they exist or not, unlink all possible
# combinations of PEP 3147/488 and legacy pyc files.
unlink(source + 'c')
for opt in ('', 1, 2):
unlink(importlib.util.cache_from_source(source, optimization=opt))
# Check whether a gui is actually available
def _is_gui_available():
if hasattr(_is_gui_available, 'result'):
return _is_gui_available.result
reason = None
if sys.platform.startswith('win'):
# if Python is running as a service (such as the buildbot service),
# gui interaction may be disallowed
import ctypes
import ctypes.wintypes
UOI_FLAGS = 1
WSF_VISIBLE = 0x0001
class USEROBJECTFLAGS(ctypes.Structure):
_fields_ = [("fInherit", ctypes.wintypes.BOOL),
("fReserved", ctypes.wintypes.BOOL),
("dwFlags", ctypes.wintypes.DWORD)]
dll = ctypes.windll.user32
h = dll.GetProcessWindowStation()
if not h:
raise ctypes.WinError()
uof = USEROBJECTFLAGS()
needed = ctypes.wintypes.DWORD()
res = dll.GetUserObjectInformationW(h,
UOI_FLAGS,
ctypes.byref(uof),
ctypes.sizeof(uof),
ctypes.byref(needed))
if not res:
raise ctypes.WinError()
if not bool(uof.dwFlags & WSF_VISIBLE):
reason = "gui not available (WSF_VISIBLE flag not set)"
elif sys.platform == 'darwin':
# The Aqua Tk implementations on OS X can abort the process if
# being called in an environment where a window server connection
# cannot be made, for instance when invoked by a buildbot or ssh
# process not running under the same user id as the current console
# user. To avoid that, raise an exception if the window manager
# connection is not available.
from ctypes import cdll, c_int, pointer, Structure
from ctypes.util import find_library
app_services = cdll.LoadLibrary(find_library("ApplicationServices"))
if app_services.CGMainDisplayID() == 0:
reason = "gui tests cannot run without OS X window manager"
else:
class ProcessSerialNumber(Structure):
_fields_ = [("highLongOfPSN", c_int),
("lowLongOfPSN", c_int)]
psn = ProcessSerialNumber()
psn_p = pointer(psn)
if ( (app_services.GetCurrentProcess(psn_p) < 0) or
(app_services.SetFrontProcess(psn_p) < 0) ):
reason = "cannot run without OS X gui process"
# check on every platform whether tkinter can actually do anything
if not reason:
try:
from tkinter import Tk
root = Tk()
root.withdraw()
root.update()
root.destroy()
except Exception as e:
err_string = str(e)
if len(err_string) > 50:
err_string = err_string[:50] + ' [...]'
reason = 'Tk unavailable due to {}: {}'.format(type(e).__name__,
err_string)
_is_gui_available.reason = reason
_is_gui_available.result = not reason
return _is_gui_available.result
def is_resource_enabled(resource):
"""Test whether a resource is enabled.
Known resources are set by regrtest.py. If not running under regrtest.py,
all resources are assumed enabled unless use_resources has been set.
"""
return use_resources is None or resource in use_resources
def requires(resource, msg=None):
"""Raise ResourceDenied if the specified resource is not available."""
if not is_resource_enabled(resource):
if msg is None:
msg = "Use of the %r resource not enabled" % resource
raise ResourceDenied(msg)
if resource == 'gui' and not _is_gui_available():
raise ResourceDenied(_is_gui_available.reason)
def _requires_unix_version(sysname, min_version):
"""Decorator raising SkipTest if the OS is `sysname` and the version is less
than `min_version`.
For example, @_requires_unix_version('FreeBSD', (7, 2)) raises SkipTest if
the FreeBSD version is less than 7.2.
"""
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kw):
if platform.system() == sysname:
version_txt = platform.release().split('-', 1)[0]
try:
version = tuple(map(int, version_txt.split('.')))
except ValueError:
pass
else:
if version < min_version:
min_version_txt = '.'.join(map(str, min_version))
raise unittest.SkipTest(
"%s version %s or higher required, not %s"
% (sysname, min_version_txt, version_txt))
return func(*args, **kw)
wrapper.min_version = min_version
return wrapper
return decorator
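# The decorator above compares version tuples lexicographically, which is why
# the release string must be parsed into ints first: comparing the raw strings
# would order "10" before "9". A couple of self-contained spot checks:

```python
def parse(version_txt):
    # Same conversion the decorator performs: "7.2" -> (7, 2)
    return tuple(map(int, version_txt.split('.')))

assert parse("7.1") < (7, 2)
assert parse("7.2") >= (7, 2)
assert parse("10.0") > (9, 9)   # numeric ordering, as intended
assert "10.0" < "9.9"           # the string-comparison pitfall being avoided
```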
def requires_freebsd_version(*min_version):
"""Decorator raising SkipTest if the OS is FreeBSD and the FreeBSD version is
less than `min_version`.
For example, @requires_freebsd_version(7, 2) raises SkipTest if the FreeBSD
version is less than 7.2.
"""
return _requires_unix_version('FreeBSD', min_version)
def requires_linux_version(*min_version):
"""Decorator raising SkipTest if the OS is Linux and the Linux version is
less than `min_version`.
For example, @requires_linux_version(2, 6, 32) raises SkipTest if the Linux
version is less than 2.6.32.
"""
return _requires_unix_version('Linux', min_version)
def requires_mac_ver(*min_version):
"""Decorator raising SkipTest if the OS is Mac OS X and the OS X
    version is less than min_version.
    For example, @requires_mac_ver(10, 5) raises SkipTest if the OS X version
    is less than 10.5.
"""
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kw):
if sys.platform == 'darwin':
version_txt = platform.mac_ver()[0]
try:
version = tuple(map(int, version_txt.split('.')))
except ValueError:
pass
else:
if version < min_version:
min_version_txt = '.'.join(map(str, min_version))
raise unittest.SkipTest(
"Mac OS X %s or higher required, not %s"
% (min_version_txt, version_txt))
return func(*args, **kw)
wrapper.min_version = min_version
return wrapper
return decorator
# Don't use "localhost", since resolving it uses the DNS under recent
# Windows versions (see issue #18792).
HOST = "127.0.0.1"
HOSTv6 = "::1"
def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM):
"""Returns an unused port that should be suitable for binding. This is
achieved by creating a temporary socket with the same family and type as
the 'sock' parameter (default is AF_INET, SOCK_STREAM), and binding it to
    the specified host address (defaults to 127.0.0.1) with the port set to 0,
eliciting an unused ephemeral port from the OS. The temporary socket is
then closed and deleted, and the ephemeral port is returned.
Either this method or bind_port() should be used for any tests where a
server socket needs to be bound to a particular port for the duration of
the test. Which one to use depends on whether the calling code is creating
a python socket, or if an unused port needs to be provided in a constructor
or passed to an external program (i.e. the -accept argument to openssl's
s_server mode). Always prefer bind_port() over find_unused_port() where
possible. Hard coded ports should *NEVER* be used. As soon as a server
socket is bound to a hard coded port, the ability to run multiple instances
of the test simultaneously on the same host is compromised, which makes the
test a ticking time bomb in a buildbot environment. On Unix buildbots, this
may simply manifest as a failed test, which can be recovered from without
intervention in most cases, but on Windows, the entire python process can
completely and utterly wedge, requiring someone to log in to the buildbot
and manually kill the affected process.
(This is easy to reproduce on Windows, unfortunately, and can be traced to
the SO_REUSEADDR socket option having different semantics on Windows versus
Unix/Linux. On Unix, you can't have two AF_INET SOCK_STREAM sockets bind,
listen and then accept connections on identical host/ports. An EADDRINUSE
OSError will be raised at some point (depending on the platform and
the order bind and listen were called on each socket).
However, on Windows, if SO_REUSEADDR is set on the sockets, no EADDRINUSE
will ever be raised when attempting to bind two identical host/ports. When
accept() is called on each socket, the second caller's process will steal
the port from the first caller, leaving them both in an awkwardly wedged
state where they'll no longer respond to any signals or graceful kills, and
must be forcibly killed via OpenProcess()/TerminateProcess().
The solution on Windows is to use the SO_EXCLUSIVEADDRUSE socket option
instead of SO_REUSEADDR, which effectively affords the same semantics as
SO_REUSEADDR on Unix. Given the propensity of Unix developers in the Open
Source world compared to Windows ones, this is a common mistake. A quick
look over OpenSSL's 0.9.8g source shows that they use SO_REUSEADDR when
openssl.exe is called with the 's_server' option, for example. See
http://bugs.python.org/issue2550 for more info. The following site also
has a very thorough description about the implications of both REUSEADDR
and EXCLUSIVEADDRUSE on Windows:
http://msdn2.microsoft.com/en-us/library/ms740621(VS.85).aspx)
XXX: although this approach is a vast improvement on previous attempts to
elicit unused ports, it rests heavily on the assumption that the ephemeral
port returned to us by the OS won't immediately be dished back out to some
other process when we close and delete our temporary socket but before our
calling code has a chance to bind the returned port. We can deal with this
issue if/when we come across it.
"""
tempsock = socket.socket(family, socktype)
port = bind_port(tempsock)
tempsock.close()
del tempsock
return port
def bind_port(sock, host=HOST):
"""Bind the socket to a free port and return the port number. Relies on
ephemeral ports in order to ensure we are using an unbound port. This is
important as many tests may be running simultaneously, especially in a
buildbot environment. This method raises an exception if the sock.family
is AF_INET and sock.type is SOCK_STREAM, *and* the socket has SO_REUSEADDR
or SO_REUSEPORT set on it. Tests should *never* set these socket options
for TCP/IP sockets. The only case for setting these options is testing
multicasting via multiple UDP sockets.
Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e.
on Windows), it will be set on the socket. This will prevent anyone else
from bind()'ing to our host/port for the duration of the test.
"""
if sock.family == socket.AF_INET and sock.type == socket.SOCK_STREAM:
if hasattr(socket, 'SO_REUSEADDR'):
if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 1:
raise TestFailed("tests should never set the SO_REUSEADDR " \
"socket option on TCP/IP sockets!")
if hasattr(socket, 'SO_REUSEPORT'):
try:
if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) == 1:
raise TestFailed("tests should never set the SO_REUSEPORT " \
"socket option on TCP/IP sockets!")
except OSError:
# Python's socket module was compiled using modern headers
# thus defining SO_REUSEPORT but this process is running
# under an older kernel that does not support SO_REUSEPORT.
pass
if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'):
sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
sock.bind((host, 0))
port = sock.getsockname()[1]
return port
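# The ephemeral-port pattern that bind_port() and find_unused_port() build on,
# in self-contained form: bind to port 0 and let the OS pick a free port.

```python
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(("127.0.0.1", 0))          # port 0 -> OS assigns a free port
    port = sock.getsockname()[1]

# Ephemeral ports are always non-zero and fit in 16 bits.
assert 0 < port < 65536
```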
def _is_ipv6_enabled():
"""Check whether IPv6 is enabled on this host."""
if socket.has_ipv6:
sock = None
try:
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.bind((HOSTv6, 0))
return True
except OSError:
pass
finally:
if sock:
sock.close()
return False
IPV6_ENABLED = _is_ipv6_enabled()
def system_must_validate_cert(f):
"""Skip the test on TLS certificate validation failures."""
@functools.wraps(f)
def dec(*args, **kwargs):
try:
f(*args, **kwargs)
except IOError as e:
if "CERTIFICATE_VERIFY_FAILED" in str(e):
raise unittest.SkipTest("system does not contain "
"necessary certificates")
raise
return dec
# A constant likely larger than the underlying OS pipe buffer size, to
# make writes blocking.
# Windows limit seems to be around 512 B, and many Unix kernels have a
# 64 KiB pipe buffer size or 16 * PAGE_SIZE: take a few megs to be sure.
# (see issue #17835 for a discussion of this number).
PIPE_MAX_SIZE = 4 * 1024 * 1024 + 1
# A constant likely larger than the underlying OS socket buffer size, to make
# writes blocking.
# The socket buffer sizes can usually be tuned system-wide (e.g. through sysctl
# on Linux), or on a per-socket basis (SO_SNDBUF/SO_RCVBUF). See issue #18643
# for a discussion of this number.
SOCK_MAX_SIZE = 16 * 1024 * 1024 + 1
# decorator for skipping tests on non-IEEE 754 platforms
requires_IEEE_754 = unittest.skipUnless(
float.__getformat__("double").startswith("IEEE"),
"test requires IEEE 754 doubles")
requires_zlib = unittest.skipUnless(zlib, 'requires zlib')
requires_gzip = unittest.skipUnless(gzip, 'requires gzip')
requires_bz2 = unittest.skipUnless(bz2, 'requires bz2')
requires_lzma = unittest.skipUnless(lzma, 'requires lzma')
is_jython = sys.platform.startswith('java')
# Filename used for testing
if os.name == 'java':
# Jython disallows @ in module names
TESTFN = '$test'
else:
TESTFN = '@test'
# Disambiguate TESTFN for parallel testing, while letting it remain a valid
# module name.
TESTFN = "{}_{}_tmp".format(TESTFN, os.getpid())
# FS_NONASCII: non-ASCII character encodable by os.fsencode(),
# or None if there is no such character.
FS_NONASCII = None
for character in (
# First try printable and common characters to have a readable filename.
# For each character, the encoding list are just example of encodings able
# to encode the character (the list is not exhaustive).
# U+00E6 (Latin Small Letter Ae): cp1252, iso-8859-1
'\u00E6',
# U+0130 (Latin Capital Letter I With Dot Above): cp1254, iso8859_3
'\u0130',
# U+0141 (Latin Capital Letter L With Stroke): cp1250, cp1257
'\u0141',
# U+03C6 (Greek Small Letter Phi): cp1253
'\u03C6',
# U+041A (Cyrillic Capital Letter Ka): cp1251
'\u041A',
# U+05D0 (Hebrew Letter Alef): Encodable to cp424
'\u05D0',
# U+060C (Arabic Comma): cp864, cp1006, iso8859_6, mac_arabic
'\u060C',
# U+062A (Arabic Letter Teh): cp720
'\u062A',
# U+0E01 (Thai Character Ko Kai): cp874
'\u0E01',
# Then try more "special" characters. "special" because they may be
# interpreted or displayed differently depending on the exact locale
# encoding and the font.
# U+00A0 (No-Break Space)
'\u00A0',
# U+20AC (Euro Sign)
'\u20AC',
):
try:
os.fsdecode(os.fsencode(character))
except UnicodeError:
pass
else:
FS_NONASCII = character
break
# TESTFN_UNICODE is a non-ascii filename
TESTFN_UNICODE = TESTFN + "-\xe0\xf2\u0258\u0141\u011f"
if sys.platform == 'darwin':
# In Mac OS X's VFS API file names are, by definition, canonically
# decomposed Unicode, encoded using UTF-8. See QA1173:
# http://developer.apple.com/mac/library/qa/qa2001/qa1173.html
import unicodedata
TESTFN_UNICODE = unicodedata.normalize('NFD', TESTFN_UNICODE)
TESTFN_ENCODING = sys.getfilesystemencoding()
# TESTFN_UNENCODABLE is a filename (str type) that should *not* be able to be
# encoded by the filesystem encoding (in strict mode). It can be None if we
# cannot generate such filename.
TESTFN_UNENCODABLE = None
if os.name in ('nt', 'ce'):
# skip win32s (0) or Windows 9x/ME (1)
if sys.getwindowsversion().platform >= 2:
# Different kinds of characters from various languages to minimize the
# probability that the whole name is encodable to MBCS (issue #9819)
TESTFN_UNENCODABLE = TESTFN + "-\u5171\u0141\u2661\u0363\uDC80"
try:
TESTFN_UNENCODABLE.encode(TESTFN_ENCODING)
except UnicodeEncodeError:
pass
else:
print('WARNING: The filename %r CAN be encoded by the filesystem encoding (%s). '
'Unicode filename tests may not be effective'
% (TESTFN_UNENCODABLE, TESTFN_ENCODING))
TESTFN_UNENCODABLE = None
# Mac OS X denies unencodable filenames (invalid utf-8)
elif sys.platform != 'darwin':
try:
# ascii and utf-8 cannot encode the byte 0xff
b'\xff'.decode(TESTFN_ENCODING)
except UnicodeDecodeError:
# 0xff will be encoded using the surrogate character u+DCFF
TESTFN_UNENCODABLE = TESTFN \
+ b'-\xff'.decode(TESTFN_ENCODING, 'surrogateescape')
else:
# File system encoding (eg. ISO-8859-* encodings) can encode
# the byte 0xff. Skip some unicode filename tests.
pass
# TESTFN_UNDECODABLE is a filename (bytes type) that should *not* be able to be
# decoded from the filesystem encoding (in strict mode). It can be None if we
# cannot generate such filename (ex: the latin1 encoding can decode any byte
# sequence). On UNIX, TESTFN_UNDECODABLE can be decoded by os.fsdecode() thanks
# to the surrogateescape error handler (PEP 383), but not from the filesystem
# encoding in strict mode.
TESTFN_UNDECODABLE = None
for name in (
# b'\xff' is not decodable by os.fsdecode() with code page 932. Windows
    # accepts it when creating a file or a directory, but may refuse to enter
    # such a directory (when the bytes name is used). So test b'\xe7' first: it
    # is not decodable from cp932.
b'\xe7w\xf0',
# undecodable from ASCII, UTF-8
b'\xff',
# undecodable from iso8859-3, iso8859-6, iso8859-7, cp424, iso8859-8, cp856
# and cp857
    b'\xae\xd5',
# undecodable from UTF-8 (UNIX and Mac OS X)
b'\xed\xb2\x80', b'\xed\xb4\x80',
# undecodable from shift_jis, cp869, cp874, cp932, cp1250, cp1251, cp1252,
# cp1253, cp1254, cp1255, cp1257, cp1258
b'\x81\x98',
):
try:
name.decode(TESTFN_ENCODING)
except UnicodeDecodeError:
TESTFN_UNDECODABLE = os.fsencode(TESTFN) + name
break
if FS_NONASCII:
TESTFN_NONASCII = TESTFN + '-' + FS_NONASCII
else:
TESTFN_NONASCII = None
# Save the initial cwd
SAVEDCWD = os.getcwd()
@contextlib.contextmanager
def temp_dir(path=None, quiet=False):
"""Return a context manager that creates a temporary directory.
Arguments:
path: the directory to create temporarily. If omitted or None,
defaults to creating a temporary directory using tempfile.mkdtemp.
quiet: if False (the default), the context manager raises an exception
on error. Otherwise, if the path is specified and cannot be
created, only a warning is issued.
"""
dir_created = False
if path is None:
path = tempfile.mkdtemp()
dir_created = True
path = os.path.realpath(path)
else:
try:
os.mkdir(path)
dir_created = True
except OSError:
if not quiet:
raise
warnings.warn('tests may fail, unable to create temp dir: ' + path,
RuntimeWarning, stacklevel=3)
try:
yield path
finally:
if dir_created:
rmtree(path)
@contextlib.contextmanager
def change_cwd(path, quiet=False):
"""Return a context manager that changes the current working directory.
Arguments:
path: the directory to use as the temporary current working directory.
quiet: if False (the default), the context manager raises an exception
on error. Otherwise, it issues only a warning and keeps the current
working directory the same.
"""
saved_dir = os.getcwd()
try:
os.chdir(path)
except OSError:
if not quiet:
raise
warnings.warn('tests may fail, unable to change CWD to: ' + path,
RuntimeWarning, stacklevel=3)
try:
yield os.getcwd()
finally:
os.chdir(saved_dir)
@contextlib.contextmanager
def temp_cwd(name='tempcwd', quiet=False):
"""
Context manager that temporarily creates and changes the CWD.
The function temporarily changes the current working directory
after creating a temporary directory in the current directory with
name *name*. If *name* is None, the temporary directory is
created using tempfile.mkdtemp.
If *quiet* is False (default) and it is not possible to
create or change the CWD, an error is raised. If *quiet* is True,
only a warning is raised and the original CWD is used.
"""
with temp_dir(path=name, quiet=quiet) as temp_path:
with change_cwd(temp_path, quiet=quiet) as cwd_dir:
yield cwd_dir
if hasattr(os, "umask"):
@contextlib.contextmanager
def temp_umask(umask):
"""Context manager that temporarily sets the process umask."""
oldmask = os.umask(umask)
try:
yield
finally:
os.umask(oldmask)
# TEST_HOME_DIR refers to the top level directory of the "test" package
# that contains Python's regression test suite
TEST_SUPPORT_DIR = os.path.dirname(os.path.abspath(__file__))
TEST_HOME_DIR = os.path.dirname(TEST_SUPPORT_DIR)
# TEST_DATA_DIR is used as a target download location for remote resources
TEST_DATA_DIR = os.path.join(TEST_HOME_DIR, "data")
def findfile(filename, subdir=None):
"""Try to find a file on sys.path or in the test directory. If it is not
found the argument passed to the function is returned (this does not
necessarily signal failure; could still be the legitimate path).
Setting *subdir* indicates a relative path to use to find the file
rather than looking directly in the path directories.
"""
if os.path.isabs(filename):
return filename
if subdir is not None:
filename = os.path.join(subdir, filename)
path = [TEST_HOME_DIR] + sys.path
for dn in path:
fn = os.path.join(dn, filename)
if os.path.exists(fn): return fn
return filename
def create_empty_file(filename):
"""Create an empty file. If the file already exists, truncate it."""
fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.close(fd)
def sortdict(d):
    "Like repr(dict), but in sorted order."
    items = sorted(d.items())
reprpairs = ["%r: %r" % pair for pair in items]
withcommas = ", ".join(reprpairs)
return "{%s}" % withcommas
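# sortdict() exists because plain repr() of a dict depends on insertion
# order, while sorting the items gives output that is stable across runs.
# A self-contained copy (demo_sortdict) for illustration:

```python
def demo_sortdict(d):
    # Same behaviour as sortdict() above, condensed to one expression.
    return "{%s}" % ", ".join("%r: %r" % pair for pair in sorted(d.items()))

assert demo_sortdict({'b': 2, 'a': 1}) == "{'a': 1, 'b': 2}"
# Insertion order no longer matters:
assert demo_sortdict({'a': 1, 'b': 2}) == demo_sortdict({'b': 2, 'a': 1})
```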
def make_bad_fd():
"""
Create an invalid file descriptor by opening and closing a file and return
its fd.
"""
file = open(TESTFN, "wb")
try:
return file.fileno()
finally:
file.close()
unlink(TESTFN)
def check_syntax_error(testcase, statement, *, lineno=None, offset=None):
with testcase.assertRaises(SyntaxError) as cm:
compile(statement, '<test string>', 'exec')
err = cm.exception
testcase.assertIsNotNone(err.lineno)
if lineno is not None:
testcase.assertEqual(err.lineno, lineno)
testcase.assertIsNotNone(err.offset)
if offset is not None:
testcase.assertEqual(err.offset, offset)
def open_urlresource(url, *args, **kw):
import urllib.request, urllib.parse
check = kw.pop('check', None)
filename = urllib.parse.urlparse(url)[2].split('/')[-1] # '/': it's URL!
fn = os.path.join(TEST_DATA_DIR, filename)
def check_valid_file(fn):
f = open(fn, *args, **kw)
if check is None:
return f
elif check(f):
f.seek(0)
return f
f.close()
if os.path.exists(fn):
f = check_valid_file(fn)
if f is not None:
return f
unlink(fn)
# Verify the requirement before downloading the file
requires('urlfetch')
if verbose:
print('\tfetching %s ...' % url, file=get_original_stdout())
opener = urllib.request.build_opener()
if gzip:
opener.addheaders.append(('Accept-Encoding', 'gzip'))
f = opener.open(url, timeout=15)
if gzip and f.headers.get('Content-Encoding') == 'gzip':
f = gzip.GzipFile(fileobj=f)
try:
with open(fn, "wb") as out:
s = f.read()
while s:
out.write(s)
s = f.read()
finally:
f.close()
f = check_valid_file(fn)
if f is not None:
return f
raise TestFailed('invalid resource %r' % fn)
class WarningsRecorder(object):
"""Convenience wrapper for the warnings list returned on
entry to the warnings.catch_warnings() context manager.
"""
def __init__(self, warnings_list):
self._warnings = warnings_list
self._last = 0
def __getattr__(self, attr):
if len(self._warnings) > self._last:
return getattr(self._warnings[-1], attr)
elif attr in warnings.WarningMessage._WARNING_DETAILS:
return None
raise AttributeError("%r has no attribute %r" % (self, attr))
@property
def warnings(self):
return self._warnings[self._last:]
def reset(self):
self._last = len(self._warnings)
def _filterwarnings(filters, quiet=False):
"""Catch the warnings, then check if all the expected
warnings have been raised and re-raise unexpected warnings.
If 'quiet' is True, only re-raise the unexpected warnings.
"""
# Clear the warning registry of the calling module
# in order to re-raise the warnings.
frame = sys._getframe(2)
registry = frame.f_globals.get('__warningregistry__')
if registry:
registry.clear()
with warnings.catch_warnings(record=True) as w:
# Set filter "always" to record all warnings. Because
        # test_warnings swaps the module, we need to look it up in
# the sys.modules dictionary.
sys.modules['warnings'].simplefilter("always")
yield WarningsRecorder(w)
# Filter the recorded warnings
reraise = list(w)
missing = []
for msg, cat in filters:
seen = False
for w in reraise[:]:
warning = w.message
# Filter out the matching messages
if (re.match(msg, str(warning), re.I) and
issubclass(warning.__class__, cat)):
seen = True
reraise.remove(w)
if not seen and not quiet:
# This filter caught nothing
missing.append((msg, cat.__name__))
if reraise:
raise AssertionError("unhandled warning %s" % reraise[0])
if missing:
raise AssertionError("filter (%r, %s) did not catch any warning" %
missing[0])
@contextlib.contextmanager
def check_warnings(*filters, **kwargs):
"""Context manager to silence warnings.
Accept 2-tuples as positional arguments:
("message regexp", WarningCategory)
Optional argument:
- if 'quiet' is True, it does not fail if a filter catches nothing
(default True without argument,
default False if some filters are defined)
Without argument, it defaults to:
check_warnings(("", Warning), quiet=True)
"""
quiet = kwargs.get('quiet')
if not filters:
filters = (("", Warning),)
# Preserve backward compatibility
if quiet is None:
quiet = True
    yield from _filterwarnings(filters, quiet)
@contextlib.contextmanager
def check_no_resource_warning(testcase):
"""Context manager to check that no ResourceWarning is emitted.
Usage:
with check_no_resource_warning(self):
f = open(...)
...
del f
You must remove the object which may emit ResourceWarning before
the end of the context manager.
"""
with warnings.catch_warnings(record=True) as warns:
warnings.filterwarnings('always', category=ResourceWarning)
yield
gc_collect()
testcase.assertEqual(warns, [])
class CleanImport(object):
"""Context manager to force import to return a new module reference.
This is useful for testing module-level behaviours, such as
the emission of a DeprecationWarning on import.
Use like this:
with CleanImport("foo"):
importlib.import_module("foo") # new reference
"""
def __init__(self, *module_names):
self.original_modules = sys.modules.copy()
for module_name in module_names:
if module_name in sys.modules:
module = sys.modules[module_name]
# It is possible that module_name is just an alias for
# another module (e.g. stub for modules renamed in 3.x).
                # In that case, we also need to delete the real module to
                # clear the import cache.
if module.__name__ != module_name:
del sys.modules[module.__name__]
del sys.modules[module_name]
def __enter__(self):
return self
def __exit__(self, *ignore_exc):
sys.modules.update(self.original_modules)
class EnvironmentVarGuard(collections.abc.MutableMapping):
"""Class to help protect the environment variable properly. Can be used as
a context manager."""
def __init__(self):
self._environ = os.environ
self._changed = {}
def __getitem__(self, envvar):
return self._environ[envvar]
def __setitem__(self, envvar, value):
# Remember the initial value on the first access
if envvar not in self._changed:
self._changed[envvar] = self._environ.get(envvar)
self._environ[envvar] = value
def __delitem__(self, envvar):
# Remember the initial value on the first access
if envvar not in self._changed:
self._changed[envvar] = self._environ.get(envvar)
if envvar in self._environ:
del self._environ[envvar]
def keys(self):
return self._environ.keys()
def __iter__(self):
return iter(self._environ)
def __len__(self):
return len(self._environ)
def set(self, envvar, value):
self[envvar] = value
def unset(self, envvar):
del self[envvar]
def __enter__(self):
return self
def __exit__(self, *ignore_exc):
for (k, v) in self._changed.items():
if v is None:
if k in self._environ:
del self._environ[k]
else:
self._environ[k] = v
os.environ = self._environ
class DirsOnSysPath(object):
"""Context manager to temporarily add directories to sys.path.
This makes a copy of sys.path, appends any directories given
as positional arguments, then reverts sys.path to the copied
settings when the context ends.
Note that *all* sys.path modifications in the body of the
context manager, including replacement of the object,
will be reverted at the end of the block.
"""
def __init__(self, *paths):
self.original_value = sys.path[:]
self.original_object = sys.path
sys.path.extend(paths)
def __enter__(self):
return self
def __exit__(self, *ignore_exc):
sys.path = self.original_object
sys.path[:] = self.original_value
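# A standalone sketch of the copy-and-restore discipline DirsOnSysPath uses:
# keep a reference to the original list object *and* a copy of its contents,
# so even a wholesale replacement of sys.path inside the block is undone.
# The directory name below is hypothetical, purely for illustration.
def _example_sys_path_restore():
    import sys
    original_object = sys.path
    original_value = sys.path[:]
    try:
        sys.path.append("/tmp/_hypothetical_extra_dir")
        added = "/tmp/_hypothetical_extra_dir" in sys.path
    finally:
        sys.path = original_object
        sys.path[:] = original_value
    return added and "/tmp/_hypothetical_extra_dir" not in sys.path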
class TransientResource(object):
"""Raise ResourceDenied if an exception is raised while the context manager
is in effect that matches the specified exception and attributes."""
def __init__(self, exc, **kwargs):
self.exc = exc
self.attrs = kwargs
def __enter__(self):
return self
def __exit__(self, type_=None, value=None, traceback=None):
"""If type_ is a subclass of self.exc and value has attributes matching
self.attrs, raise ResourceDenied. Otherwise let the exception
propagate (if any)."""
if type_ is not None and issubclass(self.exc, type_):
for attr, attr_value in self.attrs.items():
if not hasattr(value, attr):
break
if getattr(value, attr) != attr_value:
break
else:
raise ResourceDenied("an optional resource is not available")
# Context managers that raise ResourceDenied when various issues
# with the Internet connection manifest themselves as exceptions.
# XXX deprecate these and use transient_internet() instead
time_out = TransientResource(OSError, errno=errno.ETIMEDOUT)
socket_peer_reset = TransientResource(OSError, errno=errno.ECONNRESET)
ioerror_peer_reset = TransientResource(OSError, errno=errno.ECONNRESET)
@contextlib.contextmanager
def transient_internet(resource_name, *, timeout=30.0, errnos=()):
"""Return a context manager that raises ResourceDenied when various issues
with the Internet connection manifest themselves as exceptions."""
default_errnos = [
('ECONNREFUSED', 111),
('ECONNRESET', 104),
('EHOSTUNREACH', 113),
('ENETUNREACH', 101),
('ETIMEDOUT', 110),
]
default_gai_errnos = [
('EAI_AGAIN', -3),
('EAI_FAIL', -4),
('EAI_NONAME', -2),
('EAI_NODATA', -5),
# Encountered when trying to resolve IPv6-only hostnames
('WSANO_DATA', 11004),
]
denied = ResourceDenied("Resource %r is not available" % resource_name)
captured_errnos = errnos
gai_errnos = []
if not captured_errnos:
captured_errnos = [getattr(errno, name, num)
for (name, num) in default_errnos]
gai_errnos = [getattr(socket, name, num)
for (name, num) in default_gai_errnos]
def filter_error(err):
n = getattr(err, 'errno', None)
if (isinstance(err, socket.timeout) or
(isinstance(err, socket.gaierror) and n in gai_errnos) or
(isinstance(err, urllib.error.HTTPError) and
500 <= err.code <= 599) or
(isinstance(err, urllib.error.URLError) and
(("ConnectionRefusedError" in err.reason) or
("TimeoutError" in err.reason) or
("EOFError" in err.reason))) or
n in captured_errnos):
if not verbose:
sys.stderr.write(denied.args[0] + "\n")
raise denied from err
old_timeout = socket.getdefaulttimeout()
try:
if timeout is not None:
socket.setdefaulttimeout(timeout)
yield
except nntplib.NNTPTemporaryError as err:
if verbose:
sys.stderr.write(denied.args[0] + "\n")
raise denied from err
except OSError as err:
# urllib can wrap original socket errors multiple times (!), we must
# unwrap to get at the original error.
while True:
a = err.args
if len(a) >= 1 and isinstance(a[0], OSError):
err = a[0]
# The error can also be wrapped as args[1]:
# except socket.error as msg:
# raise OSError('socket error', msg).with_traceback(sys.exc_info()[2])
elif len(a) >= 2 and isinstance(a[1], OSError):
err = a[1]
else:
break
filter_error(err)
raise
# XXX should we catch generic exceptions and look for their
# __cause__ or __context__?
finally:
socket.setdefaulttimeout(old_timeout)
@contextlib.contextmanager
def captured_output(stream_name):
"""Return a context manager used by captured_stdout/stdin/stderr
that temporarily replaces the sys stream *stream_name* with a StringIO."""
import io
orig_stdout = getattr(sys, stream_name)
setattr(sys, stream_name, io.StringIO())
try:
yield getattr(sys, stream_name)
finally:
setattr(sys, stream_name, orig_stdout)
def captured_stdout():
"""Capture the output of sys.stdout:
with captured_stdout() as stdout:
print("hello")
self.assertEqual(stdout.getvalue(), "hello\\n")
"""
return captured_output("stdout")
def captured_stderr():
"""Capture the output of sys.stderr:
with captured_stderr() as stderr:
print("hello", file=sys.stderr)
self.assertEqual(stderr.getvalue(), "hello\\n")
"""
return captured_output("stderr")
def captured_stdin():
"""Capture the input to sys.stdin:
with captured_stdin() as stdin:
stdin.write('hello\\n')
stdin.seek(0)
# call test code that consumes from sys.stdin
captured = input()
self.assertEqual(captured, "hello")
"""
return captured_output("stdin")
def gc_collect():
"""Force as many objects as possible to be collected.
In non-CPython implementations of Python, this is needed because timely
    deallocation is not guaranteed by the garbage collector. (Even in CPython
    this can happen when reference cycles are involved.) This means that __del__
methods may be called later than expected and weakrefs may remain alive for
longer than expected. This function tries its best to force all garbage
objects to disappear.
"""
gc.collect()
if is_jython:
time.sleep(0.1)
gc.collect()
gc.collect()
@contextlib.contextmanager
def disable_gc():
have_gc = gc.isenabled()
gc.disable()
try:
yield
finally:
if have_gc:
gc.enable()
def python_is_optimized():
"""Find if Python was built with optimizations."""
cflags = sysconfig.get_config_var('PY_CFLAGS') or ''
final_opt = ""
for opt in cflags.split():
if opt.startswith('-O'):
final_opt = opt
return final_opt not in ('', '-O0', '-Og')
_header = 'nP'
_align = '0n'
if hasattr(sys, "gettotalrefcount"):
_header = '2P' + _header
_align = '0P'
_vheader = _header + 'n'
def calcobjsize(fmt):
return struct.calcsize(_header + fmt + _align)
def calcvobjsize(fmt):
return struct.calcsize(_vheader + fmt + _align)
_TPFLAGS_HAVE_GC = 1<<14
_TPFLAGS_HEAPTYPE = 1<<9
def check_sizeof(test, o, size):
import _testcapi
result = sys.getsizeof(o)
# add GC header size
if ((type(o) == type) and (o.__flags__ & _TPFLAGS_HEAPTYPE) or\
((type(o) != type) and (type(o).__flags__ & _TPFLAGS_HAVE_GC))):
size += _testcapi.SIZEOF_PYGC_HEAD
msg = 'wrong size for %s: got %d, expected %d' \
% (type(o), result, size)
test.assertEqual(result, size, msg)
#=======================================================================
# Decorator for running a function in a different locale, correctly resetting
# it afterwards.
def run_with_locale(catstr, *locales):
def decorator(func):
def inner(*args, **kwds):
try:
import locale
category = getattr(locale, catstr)
orig_locale = locale.setlocale(category)
except AttributeError:
# if the test author gives us an invalid category string
raise
except:
# cannot retrieve original locale, so do nothing
locale = orig_locale = None
else:
for loc in locales:
try:
locale.setlocale(category, loc)
break
except:
pass
# now run the function, resetting the locale on exceptions
try:
return func(*args, **kwds)
finally:
if locale and orig_locale:
locale.setlocale(category, orig_locale)
inner.__name__ = func.__name__
inner.__doc__ = func.__doc__
return inner
return decorator
#=======================================================================
# Decorator for running a function in a specific timezone, correctly
# resetting it afterwards.
def run_with_tz(tz):
def decorator(func):
def inner(*args, **kwds):
try:
tzset = time.tzset
except AttributeError:
raise unittest.SkipTest("tzset required")
if 'TZ' in os.environ:
orig_tz = os.environ['TZ']
else:
orig_tz = None
os.environ['TZ'] = tz
tzset()
# now run the function, resetting the tz on exceptions
try:
return func(*args, **kwds)
finally:
if orig_tz is None:
del os.environ['TZ']
else:
os.environ['TZ'] = orig_tz
time.tzset()
inner.__name__ = func.__name__
inner.__doc__ = func.__doc__
return inner
return decorator
#=======================================================================
# Big-memory-test support. Separate from 'resources' because memory use
# should be configurable.
# Some handy shorthands. Note that these are used for byte-limits as well
# as size-limits, in the various bigmem tests
_1M = 1024*1024
_1G = 1024 * _1M
_2G = 2 * _1G
_4G = 4 * _1G
MAX_Py_ssize_t = sys.maxsize
def set_memlimit(limit):
global max_memuse
global real_max_memuse
sizes = {
'k': 1024,
'm': _1M,
'g': _1G,
't': 1024*_1G,
}
m = re.match(r'(\d+(\.\d+)?) (K|M|G|T)b?$', limit,
re.IGNORECASE | re.VERBOSE)
if m is None:
raise ValueError('Invalid memory limit %r' % (limit,))
memlimit = int(float(m.group(1)) * sizes[m.group(3).lower()])
real_max_memuse = memlimit
if memlimit > MAX_Py_ssize_t:
memlimit = MAX_Py_ssize_t
if memlimit < _2G - 1:
raise ValueError('Memory limit %r too low to be useful' % (limit,))
max_memuse = memlimit
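# A standalone sketch of how set_memlimit() parses a limit string such as
# "2.5Gb"; the sizes table mirrors the one above (note re.VERBOSE makes the
# literal space in the pattern insignificant).
def _example_parse_memlimit(limit="2.5Gb"):
    import re
    sizes = {'k': 1024, 'm': 1024 * 1024, 'g': 1024 ** 3, 't': 1024 ** 4}
    m = re.match(r'(\d+(\.\d+)?) (K|M|G|T)b?$', limit,
                 re.IGNORECASE | re.VERBOSE)
    if m is None:
        raise ValueError('Invalid memory limit %r' % (limit,))
    return int(float(m.group(1)) * sizes[m.group(3).lower()])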
class _MemoryWatchdog:
"""An object which periodically watches the process' memory consumption
and prints it out.
"""
def __init__(self):
self.procfile = '/proc/{pid}/statm'.format(pid=os.getpid())
self.started = False
def start(self):
try:
f = open(self.procfile, 'r')
except OSError as e:
warnings.warn('/proc not available for stats: {}'.format(e),
RuntimeWarning)
sys.stderr.flush()
return
watchdog_script = findfile("memory_watchdog.py")
self.mem_watchdog = subprocess.Popen([sys.executable, watchdog_script],
stdin=f, stderr=subprocess.DEVNULL)
f.close()
self.started = True
def stop(self):
if self.started:
self.mem_watchdog.terminate()
self.mem_watchdog.wait()
def bigmemtest(size, memuse, dry_run=True):
"""Decorator for bigmem tests.
'size' is a requested size for the test (in arbitrary, test-interpreted
units.) 'memuse' is the number of bytes per unit for the test, or a good
estimate of it. For example, a test that needs two byte buffers, of 4 GiB
each, could be decorated with @bigmemtest(size=_4G, memuse=2).
The 'size' argument is normally passed to the decorated test method as an
extra argument. If 'dry_run' is true, the value passed to the test method
may be less than the requested value. If 'dry_run' is false, it means the
test doesn't support dummy runs when -M is not specified.
"""
def decorator(f):
def wrapper(self):
size = wrapper.size
memuse = wrapper.memuse
if not real_max_memuse:
maxsize = 5147
else:
maxsize = size
if ((real_max_memuse or not dry_run)
and real_max_memuse < maxsize * memuse):
raise unittest.SkipTest(
"not enough memory: %.1fG minimum needed"
% (size * memuse / (1024 ** 3)))
if real_max_memuse and verbose:
print()
print(" ... expected peak memory use: {peak:.1f}G"
.format(peak=size * memuse / (1024 ** 3)))
watchdog = _MemoryWatchdog()
watchdog.start()
else:
watchdog = None
try:
return f(self, maxsize)
finally:
if watchdog:
watchdog.stop()
wrapper.size = size
wrapper.memuse = memuse
return wrapper
return decorator
def bigaddrspacetest(f):
"""Decorator for tests that fill the address space."""
def wrapper(self):
if max_memuse < MAX_Py_ssize_t:
if MAX_Py_ssize_t >= 2**63 - 1 and max_memuse >= 2**31:
raise unittest.SkipTest(
"not enough memory: try a 32-bit build instead")
else:
raise unittest.SkipTest(
"not enough memory: %.1fG minimum needed"
% (MAX_Py_ssize_t / (1024 ** 3)))
else:
return f(self)
return wrapper
#=======================================================================
# unittest integration.
class BasicTestRunner:
def run(self, test):
result = unittest.TestResult()
test(result)
return result
def _id(obj):
return obj
def requires_resource(resource):
if resource == 'gui' and not _is_gui_available():
return unittest.skip(_is_gui_available.reason)
if is_resource_enabled(resource):
return _id
else:
return unittest.skip("resource {0!r} is not enabled".format(resource))
def cpython_only(test):
"""
Decorator for tests only applicable on CPython.
"""
return impl_detail(cpython=True)(test)
def impl_detail(msg=None, **guards):
if check_impl_detail(**guards):
return _id
if msg is None:
guardnames, default = _parse_guards(guards)
if default:
msg = "implementation detail not available on {0}"
else:
msg = "implementation detail specific to {0}"
guardnames = sorted(guardnames.keys())
msg = msg.format(' or '.join(guardnames))
return unittest.skip(msg)
_have_mp_queue = None
def requires_multiprocessing_queue(test):
"""Skip decorator for tests that use multiprocessing.Queue."""
global _have_mp_queue
if _have_mp_queue is None:
import multiprocessing
# Without a functioning shared semaphore implementation attempts to
# instantiate a Queue will result in an ImportError (issue #3770).
try:
multiprocessing.Queue()
_have_mp_queue = True
except ImportError:
_have_mp_queue = False
msg = "requires a functioning shared semaphore implementation"
return test if _have_mp_queue else unittest.skip(msg)(test)
def _parse_guards(guards):
# Returns a tuple ({platform_name: run_me}, default_value)
if not guards:
return ({'cpython': True}, False)
is_true = list(guards.values())[0]
assert list(guards.values()) == [is_true] * len(guards) # all True or all False
return (guards, not is_true)
# Use the following check to guard CPython's implementation-specific tests --
# or to run them only on the implementation(s) guarded by the arguments.
def check_impl_detail(**guards):
"""This function returns True or False depending on the host platform.
Examples:
if check_impl_detail(): # only on CPython (default)
if check_impl_detail(jython=True): # only on Jython
if check_impl_detail(cpython=False): # everywhere except on CPython
"""
guards, default = _parse_guards(guards)
return guards.get(platform.python_implementation().lower(), default)
def no_tracing(func):
"""Decorator to temporarily turn off tracing for the duration of a test."""
if not hasattr(sys, 'gettrace'):
return func
else:
@functools.wraps(func)
def wrapper(*args, **kwargs):
original_trace = sys.gettrace()
try:
sys.settrace(None)
return func(*args, **kwargs)
finally:
sys.settrace(original_trace)
return wrapper
def refcount_test(test):
"""Decorator for tests which involve reference counting.
    To start, the decorator does not run the test if it is not run by CPython.
After that, any trace function is unset during the test to prevent
unexpected refcounts caused by the trace function.
"""
return no_tracing(cpython_only(test))
def _filter_suite(suite, pred):
"""Recursively filter test cases in a suite based on a predicate."""
newtests = []
for test in suite._tests:
if isinstance(test, unittest.TestSuite):
_filter_suite(test, pred)
newtests.append(test)
else:
if pred(test):
newtests.append(test)
suite._tests = newtests
def _run_suite(suite):
"""Run tests from a unittest.TestSuite-derived class."""
if verbose:
runner = unittest.TextTestRunner(sys.stdout, verbosity=2,
failfast=failfast)
else:
runner = BasicTestRunner()
result = runner.run(suite)
if not result.wasSuccessful():
if len(result.errors) == 1 and not result.failures:
err = result.errors[0][1]
elif len(result.failures) == 1 and not result.errors:
err = result.failures[0][1]
else:
err = "multiple errors occurred"
if not verbose: err += "; run in verbose mode for details"
raise TestFailed(err)
def _match_test(test):
global match_tests
if match_tests is None:
return True
test_id = test.id()
for match_test in match_tests:
if fnmatch.fnmatchcase(test_id, match_test):
return True
for name in test_id.split("."):
if fnmatch.fnmatchcase(name, match_test):
return True
return False
def run_unittest(*classes):
"""Run tests from unittest.TestCase-derived classes."""
valid_types = (unittest.TestSuite, unittest.TestCase)
suite = unittest.TestSuite()
for cls in classes:
if isinstance(cls, str):
if cls in sys.modules:
suite.addTest(unittest.findTestCases(sys.modules[cls]))
else:
raise ValueError("str arguments must be keys in sys.modules")
elif isinstance(cls, valid_types):
suite.addTest(cls)
else:
suite.addTest(unittest.makeSuite(cls))
_filter_suite(suite, _match_test)
_run_suite(suite)
#=======================================================================
# Check for the presence of docstrings.
# Rather than trying to enumerate all the cases where docstrings may be
# disabled, we just check for that directly
def _check_docstrings():
"""Just used to check if docstrings are enabled"""
MISSING_C_DOCSTRINGS = (check_impl_detail() and
sys.platform != 'win32' and
not sysconfig.get_config_var('WITH_DOC_STRINGS'))
HAVE_DOCSTRINGS = (_check_docstrings.__doc__ is not None and
not MISSING_C_DOCSTRINGS)
requires_docstrings = unittest.skipUnless(HAVE_DOCSTRINGS,
"test requires docstrings")
#=======================================================================
# doctest driver.
def run_doctest(module, verbosity=None, optionflags=0):
"""Run doctest on the given module. Return (#failures, #tests).
If optional argument verbosity is not specified (or is None), pass
support's belief about verbosity on to doctest. Else doctest's
usual behavior is used (it searches sys.argv for -v).
"""
import doctest
if verbosity is None:
verbosity = verbose
else:
verbosity = None
f, t = doctest.testmod(module, verbose=verbosity, optionflags=optionflags)
if f:
raise TestFailed("%d of %d doctests failed" % (f, t))
if verbose:
print('doctest (%s) ... %d tests with zero failures' %
(module.__name__, t))
return f, t
#=======================================================================
# Support for saving and restoring the imported modules.
def modules_setup():
return sys.modules.copy(),
def modules_cleanup(oldmodules):
# Encoders/decoders are registered permanently within the internal
# codec cache. If we destroy the corresponding modules their
# globals will be set to None which will trip up the cached functions.
encodings = [(k, v) for k, v in sys.modules.items()
if k.startswith('encodings.')]
sys.modules.clear()
sys.modules.update(encodings)
# XXX: This kind of problem can affect more than just encodings. In particular
# extension modules (such as _ssl) don't cope with reloading properly.
# Really, test modules should be cleaning out the test specific modules they
# know they added (ala test_runpy) rather than relying on this function (as
# test_importhooks and test_pkg do currently).
# Implicitly imported *real* modules should be left alone (see issue 10556).
sys.modules.update(oldmodules)
#=======================================================================
# Threading support to prevent reporting refleaks when running regrtest.py -R
# NOTE: we use thread._count() rather than threading.enumerate() (or the
# moral equivalent thereof) because a threading.Thread object is still alive
# until its __bootstrap() method has returned, even after it has been
# unregistered from the threading module.
# thread._count(), on the other hand, only gets decremented *after* the
# __bootstrap() method has returned, which gives us reliable reference counts
# at the end of a test run.
def threading_setup():
if _thread:
return _thread._count(), threading._dangling.copy()
else:
return 1, ()
def threading_cleanup(*original_values):
if not _thread:
return
_MAX_COUNT = 100
for count in range(_MAX_COUNT):
values = _thread._count(), threading._dangling
if values == original_values:
break
time.sleep(0.01)
gc_collect()
# XXX print a warning in case of failure?
def reap_threads(func):
"""Use this function when threads are being used. This will
ensure that the threads are cleaned up even when the test fails.
If threading is unavailable this function does nothing.
"""
if not _thread:
return func
@functools.wraps(func)
def decorator(*args):
key = threading_setup()
try:
return func(*args)
finally:
threading_cleanup(*key)
return decorator
def reap_children():
"""Use this function at the end of test_main() whenever sub-processes
are started. This will help ensure that no extra children (zombies)
stick around to hog resources and create problems when looking
for refleaks.
"""
# Reap all our dead child processes so we don't leave zombies around.
# These hog resources and might be causing some of the buildbots to die.
if hasattr(os, 'waitpid'):
any_process = -1
while True:
try:
# This will raise an exception on Windows. That's ok.
pid, status = os.waitpid(any_process, os.WNOHANG)
if pid == 0:
break
except:
break
@contextlib.contextmanager
def start_threads(threads, unlock=None):
threads = list(threads)
started = []
try:
try:
for t in threads:
t.start()
started.append(t)
except:
if verbose:
print("Can't start %d threads, only %d threads started" %
(len(threads), len(started)))
raise
yield
finally:
try:
if unlock:
unlock()
endtime = starttime = time.time()
for timeout in range(1, 16):
endtime += 60
for t in started:
t.join(max(endtime - time.time(), 0.01))
                started = [t for t in started if t.is_alive()]
if not started:
break
if verbose:
print('Unable to join %d threads during a period of '
'%d minutes' % (len(started), timeout))
finally:
            started = [t for t in started if t.is_alive()]
if started:
faulthandler.dump_traceback(sys.stdout)
raise AssertionError('Unable to join %d threads' % len(started))
@contextlib.contextmanager
def swap_attr(obj, attr, new_val):
"""Temporary swap out an attribute with a new object.
Usage:
with swap_attr(obj, "attr", 5):
...
This will set obj.attr to 5 for the duration of the with: block,
restoring the old value at the end of the block. If `attr` doesn't
exist on `obj`, it will be created and then deleted at the end of the
block.
The old value (or None if it doesn't exist) will be assigned to the
target of the "as" clause, if there is one.
"""
if hasattr(obj, attr):
real_val = getattr(obj, attr)
setattr(obj, attr, new_val)
try:
yield real_val
finally:
setattr(obj, attr, real_val)
else:
setattr(obj, attr, new_val)
try:
yield
finally:
if hasattr(obj, attr):
delattr(obj, attr)
@contextlib.contextmanager
def swap_item(obj, item, new_val):
"""Temporary swap out an item with a new object.
Usage:
with swap_item(obj, "item", 5):
...
This will set obj["item"] to 5 for the duration of the with: block,
restoring the old value at the end of the block. If `item` doesn't
exist on `obj`, it will be created and then deleted at the end of the
block.
The old value (or None if it doesn't exist) will be assigned to the
target of the "as" clause, if there is one.
"""
if item in obj:
real_val = obj[item]
obj[item] = new_val
try:
yield real_val
finally:
obj[item] = real_val
else:
obj[item] = new_val
try:
yield
finally:
if item in obj:
del obj[item]
def strip_python_stderr(stderr):
"""Strip the stderr of a Python process from potential debug output
emitted by the interpreter.
This will typically be run on the result of the communicate() method
of a subprocess.Popen object.
"""
stderr = re.sub(br"\[\d+ refs, \d+ blocks\]\r?\n?", b"", stderr).strip()
return stderr
requires_type_collecting = unittest.skipIf(hasattr(sys, 'getcounts'),
'types are immortal if COUNT_ALLOCS is defined')
def args_from_interpreter_flags():
"""Return a list of command-line arguments reproducing the current
settings in sys.flags and sys.warnoptions."""
return subprocess._args_from_interpreter_flags()
#============================================================
# Support for assertions about logging.
#============================================================
class TestHandler(logging.handlers.BufferingHandler):
def __init__(self, matcher):
# BufferingHandler takes a "capacity" argument
# so as to know when to flush. As we're overriding
# shouldFlush anyway, we can set a capacity of zero.
# You can call flush() manually to clear out the
# buffer.
logging.handlers.BufferingHandler.__init__(self, 0)
self.matcher = matcher
def shouldFlush(self):
return False
def emit(self, record):
self.format(record)
self.buffer.append(record.__dict__)
def matches(self, **kwargs):
"""
Look for a saved dict whose keys/values match the supplied arguments.
"""
result = False
for d in self.buffer:
if self.matcher.matches(d, **kwargs):
result = True
break
return result
class Matcher(object):
_partial_matches = ('msg', 'message')
def matches(self, d, **kwargs):
"""
Try to match a single dict with the supplied arguments.
Keys whose values are strings and which are in self._partial_matches
will be checked for partial (i.e. substring) matches. You can extend
this scheme to (for example) do regular expression matching, etc.
"""
result = True
for k in kwargs:
v = kwargs[k]
dv = d.get(k)
if not self.match_value(k, dv, v):
result = False
break
return result
def match_value(self, k, dv, v):
"""
Try to match a single stored value (dv) with a supplied value (v).
"""
if type(v) != type(dv):
result = False
elif type(dv) is not str or k not in self._partial_matches:
result = (v == dv)
else:
result = dv.find(v) >= 0
return result
_can_symlink = None
def can_symlink():
global _can_symlink
if _can_symlink is not None:
return _can_symlink
symlink_path = TESTFN + "can_symlink"
try:
os.symlink(TESTFN, symlink_path)
can = True
except (OSError, NotImplementedError, AttributeError):
can = False
else:
os.remove(symlink_path)
_can_symlink = can
return can
def skip_unless_symlink(test):
"""Skip decorator for tests that require functional symlink"""
ok = can_symlink()
msg = "Requires functional symlink implementation"
return test if ok else unittest.skip(msg)(test)
_can_xattr = None
def can_xattr():
global _can_xattr
if _can_xattr is not None:
return _can_xattr
if not hasattr(os, "setxattr"):
can = False
else:
tmp_fp, tmp_name = tempfile.mkstemp()
try:
with open(TESTFN, "wb") as fp:
try:
# TESTFN & tempfile may use different file systems with
# different capabilities
os.setxattr(tmp_fp, b"user.test", b"")
os.setxattr(fp.fileno(), b"user.test", b"")
# Kernels < 2.6.39 don't respect setxattr flags.
kernel_version = platform.release()
m = re.match("2.6.(\d{1,2})", kernel_version)
can = m is None or int(m.group(1)) >= 39
except OSError:
can = False
        finally:
            os.close(tmp_fp)
            unlink(TESTFN)
            unlink(tmp_name)
_can_xattr = can
return can
def skip_unless_xattr(test):
"""Skip decorator for tests that require functional extended attributes"""
ok = can_xattr()
msg = "no non-broken extended attribute support"
return test if ok else unittest.skip(msg)(test)
def fs_is_case_insensitive(directory):
"""Detects if the file system for the specified directory is case-insensitive."""
with tempfile.NamedTemporaryFile(dir=directory) as base:
base_path = base.name
case_path = base_path.upper()
if case_path == base_path:
case_path = base_path.lower()
try:
return os.path.samefile(base_path, case_path)
except FileNotFoundError:
return False
def detect_api_mismatch(ref_api, other_api, *, ignore=()):
"""Returns the set of items in ref_api not in other_api, except for a
defined list of items to be ignored in this check.
By default this skips private attributes beginning with '_' but
includes all magic methods, i.e. those starting and ending in '__'.
"""
missing_items = set(dir(ref_api)) - set(dir(other_api))
if ignore:
missing_items -= set(ignore)
missing_items = set(m for m in missing_items
if not m.startswith('_') or m.endswith('__'))
return missing_items
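# A standalone sketch of the name-filtering rule detect_api_mismatch() uses:
# drop private names (single leading underscore) but keep dunder methods.
# The attribute sets are hypothetical, purely for illustration.
def _example_api_mismatch_filter():
    ref_names = {'read', 'write', '_internal', '__len__'}
    other_names = {'read'}
    missing = ref_names - other_names
    # '_internal' is dropped; '__len__' survives because it ends with '__'
    missing = {m for m in missing if not m.startswith('_') or m.endswith('__')}
    return sorted(missing)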
class SuppressCrashReport:
"""Try to prevent a crash report from popping up.
On Windows, don't display the Windows Error Reporting dialog. On UNIX,
disable the creation of coredump file.
"""
old_value = None
old_modes = None
def __enter__(self):
"""On Windows, disable Windows Error Reporting dialogs using
SetErrorMode.
On UNIX, try to save the previous core file size limit, then set
soft limit to 0.
"""
if sys.platform.startswith('win'):
# see http://msdn.microsoft.com/en-us/library/windows/desktop/ms680621.aspx
# GetErrorMode is not available on Windows XP and Windows Server 2003,
# but SetErrorMode returns the previous value, so we can use that
import ctypes
self._k32 = ctypes.windll.kernel32
SEM_NOGPFAULTERRORBOX = 0x02
self.old_value = self._k32.SetErrorMode(SEM_NOGPFAULTERRORBOX)
self._k32.SetErrorMode(self.old_value | SEM_NOGPFAULTERRORBOX)
# Suppress assert dialogs in debug builds
# (see http://bugs.python.org/issue23314)
try:
import msvcrt
msvcrt.CrtSetReportMode
except (AttributeError, ImportError):
# no msvcrt or a release build
pass
else:
self.old_modes = {}
for report_type in [msvcrt.CRT_WARN,
msvcrt.CRT_ERROR,
msvcrt.CRT_ASSERT]:
old_mode = msvcrt.CrtSetReportMode(report_type,
msvcrt.CRTDBG_MODE_FILE)
old_file = msvcrt.CrtSetReportFile(report_type,
msvcrt.CRTDBG_FILE_STDERR)
self.old_modes[report_type] = old_mode, old_file
else:
if resource is not None:
try:
self.old_value = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE,
(0, self.old_value[1]))
except (ValueError, OSError):
pass
if sys.platform == 'darwin':
# Check if the 'Crash Reporter' on OSX was configured
# in 'Developer' mode and warn that it will get triggered
# when it is.
#
# This assumes that this context manager is used in tests
# that might trigger the next manager.
cmd = ['/usr/bin/defaults', 'read',
'com.apple.CrashReporter', 'DialogType']
proc = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
with proc:
stdout = proc.communicate()[0]
if stdout.strip() == b'developer':
print("this test triggers the Crash Reporter, "
"that is intentional", end='', flush=True)
return self
def __exit__(self, *ignore_exc):
"""Restore Windows ErrorMode or core file behavior to initial value."""
if self.old_value is None:
return
if sys.platform.startswith('win'):
self._k32.SetErrorMode(self.old_value)
if self.old_modes:
import msvcrt
for report_type, (old_mode, old_file) in self.old_modes.items():
msvcrt.CrtSetReportMode(report_type, old_mode)
msvcrt.CrtSetReportFile(report_type, old_file)
else:
if resource is not None:
try:
resource.setrlimit(resource.RLIMIT_CORE, self.old_value)
except (ValueError, OSError):
pass
def patch(test_instance, object_to_patch, attr_name, new_value):
"""Override 'object_to_patch'.'attr_name' with 'new_value'.
Also, add a cleanup procedure to 'test_instance' to restore
'object_to_patch' value for 'attr_name'.
The 'attr_name' should be a valid attribute for 'object_to_patch'.
"""
# check that 'attr_name' is a real attribute for 'object_to_patch'
# will raise AttributeError if it does not exist
getattr(object_to_patch, attr_name)
# keep a copy of the old value
attr_is_local = False
try:
old_value = object_to_patch.__dict__[attr_name]
except (AttributeError, KeyError):
old_value = getattr(object_to_patch, attr_name, None)
else:
attr_is_local = True
# restore the value when the test is done
def cleanup():
if attr_is_local:
setattr(object_to_patch, attr_name, old_value)
else:
delattr(object_to_patch, attr_name)
test_instance.addCleanup(cleanup)
# actually override the attribute
setattr(object_to_patch, attr_name, new_value)
def run_in_subinterp(code):
"""
Run code in a subinterpreter. Raise unittest.SkipTest if the tracemalloc
module is enabled.
"""
# Issue #10915, #15751: PyGILState_*() functions don't work with
# sub-interpreters, the tracemalloc module uses these functions internally
try:
import tracemalloc
except ImportError:
pass
else:
if tracemalloc.is_tracing():
raise unittest.SkipTest("run_in_subinterp() cannot be used "
"if tracemalloc module is tracing "
"memory allocations")
import _testcapi
return _testcapi.run_in_subinterp(code)
def check_free_after_iterating(test, iter, cls, args=()):
class A(cls):
def __del__(self):
nonlocal done
done = True
try:
next(it)
except StopIteration:
pass
done = False
it = iter(A(*args))
# Issue 26494: Shouldn't crash
test.assertRaises(StopIteration, next, it)
# The sequence should be deallocated just after the end of iterating
gc_collect()
test.assertTrue(done)
| apache-2.0 |
smmoosavi/Theano-Tutorials | 5_convolutional_net.py | 1 | 3264 | import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
import numpy as np
from theano.tensor.nnet.conv import conv2d
from theano.tensor.signal.downsample import max_pool_2d
from load import mnist, save_model
theano.config.floatX = 'float32'
srng = RandomStreams()
def floatX(X):
return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01))
def init_inputs(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01 + 0.5))
def rectify(X):
return T.maximum(X, 0.)
def softmax(X):
e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
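The `softmax` above subtracts the per-row maximum before exponentiating; a plain-Python sketch of the same trick shows why this avoids overflow (the input values here are illustrative, not from the original file):

```python
import math

def softmax_row(xs):
    # same max-shift trick as softmax() above, for one plain-Python row
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# math.exp(1001.0) alone would overflow; the shifted version is fine
probs = softmax_row([1000.0, 1001.0])
```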
def dropout(X, p=0.):
if p > 0:
retain_prob = 1 - p
X *= srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
X /= retain_prob
return X
def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
gradient_scaling = T.sqrt(acc_new + epsilon)
g = g / gradient_scaling
updates.append((acc, acc_new))
updates.append((p, p - lr * g))
return updates
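The update rule built symbolically in `RMSprop` above can be sanity-checked with the same arithmetic on plain numbers (one scalar parameter, one step; the concrete values are illustrative):

```python
import math

def rmsprop_step(p, g, acc, lr=0.001, rho=0.9, epsilon=1e-6):
    # mirrors the symbolic updates above: accumulate squared gradients,
    # then scale the gradient by the running RMS
    acc_new = rho * acc + (1 - rho) * g ** 2
    p_new = p - lr * g / math.sqrt(acc_new + epsilon)
    return p_new, acc_new

p, acc = rmsprop_step(p=1.0, g=0.5, acc=0.0)
# starting from acc == 0, the first step size is about lr / sqrt(1 - rho),
# independent of the gradient's magnitude (up to epsilon)
```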
def model(X, params, p_drop_conv, p_drop_hidden):
w, w2, w3, w4, w_o = params
l1a = rectify(conv2d(X, w, border_mode='full'))
l1 = max_pool_2d(l1a, (2, 2))
l1 = dropout(l1, p_drop_conv)
l2a = rectify(conv2d(l1, w2))
l2 = max_pool_2d(l2a, (2, 2))
l2 = dropout(l2, p_drop_conv)
l3a = rectify(conv2d(l2, w3))
l3b = max_pool_2d(l3a, (2, 2))
l3 = T.flatten(l3b, outdim=2)
l3 = dropout(l3, p_drop_conv)
l4 = rectify(T.dot(l3, w4))
l4 = dropout(l4, p_drop_hidden)
pyx = softmax(T.dot(l4, w_o))
return l1, l2, l3, l4, pyx
def main_rain():
trX, teX, trY, teY = mnist(onehot=True)
trX = trX.reshape(-1, 1, 28, 28)
teX = teX.reshape(-1, 1, 28, 28)
X = T.ftensor4()
Y = T.fmatrix()
w = init_weights((32, 1, 3, 3))
w2 = init_weights((64, 32, 3, 3))
w3 = init_weights((128, 64, 3, 3))
w4 = init_weights((128 * 3 * 3, 625))
w_o = init_weights((625, 10))
params = [w, w2, w3, w4, w_o]
noise_l1, noise_l2, noise_l3, noise_l4, noise_py_x = model(X, params, 0.2, 0.5)
l1, l2, l3, l4, py_x = model(X, params, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
for i in range(100):
for start, end in zip(range(0, len(trX), 128), range(128, len(trX), 128)):
cost = train(trX[start:end], trY[start:end])
        print(np.mean(np.argmax(teY, axis=1) == predict(teX)))
if i % 10 == 0:
name = 'media/model/conv-{0}.model'.format(str(i))
save_model(name, params)
name = 'media/model/conv-final.model'
save_model(name, params)
main_rain()
| mit |
mameneses/python-deployment | venv/lib/python2.7/site-packages/pip/_vendor/html5lib/treewalkers/_base.py | 310 | 6919 | from __future__ import absolute_import, division, unicode_literals
from pip._vendor.six import text_type, string_types
import gettext
_ = gettext.gettext
from xml.dom import Node
DOCUMENT = Node.DOCUMENT_NODE
DOCTYPE = Node.DOCUMENT_TYPE_NODE
TEXT = Node.TEXT_NODE
ELEMENT = Node.ELEMENT_NODE
COMMENT = Node.COMMENT_NODE
ENTITY = Node.ENTITY_NODE
UNKNOWN = "<#UNKNOWN#>"
from ..constants import voidElements, spaceCharacters
spaceCharacters = "".join(spaceCharacters)
def to_text(s, blank_if_none=True):
"""Wrapper around six.text_type to convert None to empty string"""
if s is None:
if blank_if_none:
return ""
else:
return None
elif isinstance(s, text_type):
return s
else:
return text_type(s)
def is_text_or_none(string):
"""Wrapper around isinstance(string_types) or is None"""
return string is None or isinstance(string, string_types)
class TreeWalker(object):
def __init__(self, tree):
self.tree = tree
def __iter__(self):
raise NotImplementedError
def error(self, msg):
return {"type": "SerializeError", "data": msg}
def emptyTag(self, namespace, name, attrs, hasChildren=False):
assert namespace is None or isinstance(namespace, string_types), type(namespace)
assert isinstance(name, string_types), type(name)
assert all((namespace is None or isinstance(namespace, string_types)) and
isinstance(name, string_types) and
isinstance(value, string_types)
for (namespace, name), value in attrs.items())
yield {"type": "EmptyTag", "name": to_text(name, False),
"namespace": to_text(namespace),
"data": attrs}
if hasChildren:
yield self.error(_("Void element has children"))
def startTag(self, namespace, name, attrs):
assert namespace is None or isinstance(namespace, string_types), type(namespace)
assert isinstance(name, string_types), type(name)
assert all((namespace is None or isinstance(namespace, string_types)) and
isinstance(name, string_types) and
isinstance(value, string_types)
for (namespace, name), value in attrs.items())
return {"type": "StartTag",
"name": text_type(name),
"namespace": to_text(namespace),
"data": dict(((to_text(namespace, False), to_text(name)),
to_text(value, False))
for (namespace, name), value in attrs.items())}
def endTag(self, namespace, name):
assert namespace is None or isinstance(namespace, string_types), type(namespace)
        assert isinstance(name, string_types), type(name)
return {"type": "EndTag",
"name": to_text(name, False),
"namespace": to_text(namespace),
"data": {}}
def text(self, data):
assert isinstance(data, string_types), type(data)
data = to_text(data)
middle = data.lstrip(spaceCharacters)
left = data[:len(data) - len(middle)]
if left:
yield {"type": "SpaceCharacters", "data": left}
data = middle
middle = data.rstrip(spaceCharacters)
right = data[len(middle):]
if middle:
yield {"type": "Characters", "data": middle}
if right:
yield {"type": "SpaceCharacters", "data": right}
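The lstrip/rstrip bookkeeping in `text()` above splits a string into at most three tokens (leading space, characters, trailing space); a standalone sketch of the same logic, using a plain whitespace string rather than html5lib's full `spaceCharacters` set:

```python
def split_text(data, space=" \t\n\r\x0c"):
    # same lstrip/rstrip bookkeeping as TreeWalker.text() above
    middle = data.lstrip(space)
    left = data[:len(data) - len(middle)]
    tokens = []
    if left:
        tokens.append({"type": "SpaceCharacters", "data": left})
    data = middle
    middle = data.rstrip(space)
    right = data[len(middle):]
    if middle:
        tokens.append({"type": "Characters", "data": middle})
    if right:
        tokens.append({"type": "SpaceCharacters", "data": right})
    return tokens

split_text("  hello world \n")
# leading "  ", then "hello world", then trailing " \n"
```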
def comment(self, data):
assert isinstance(data, string_types), type(data)
return {"type": "Comment", "data": text_type(data)}
def doctype(self, name, publicId=None, systemId=None, correct=True):
assert is_text_or_none(name), type(name)
assert is_text_or_none(publicId), type(publicId)
assert is_text_or_none(systemId), type(systemId)
return {"type": "Doctype",
"name": to_text(name),
"publicId": to_text(publicId),
"systemId": to_text(systemId),
"correct": to_text(correct)}
def entity(self, name):
assert isinstance(name, string_types), type(name)
return {"type": "Entity", "name": text_type(name)}
def unknown(self, nodeType):
return self.error(_("Unknown node type: ") + nodeType)
class NonRecursiveTreeWalker(TreeWalker):
def getNodeDetails(self, node):
raise NotImplementedError
def getFirstChild(self, node):
raise NotImplementedError
def getNextSibling(self, node):
raise NotImplementedError
def getParentNode(self, node):
raise NotImplementedError
def __iter__(self):
currentNode = self.tree
while currentNode is not None:
details = self.getNodeDetails(currentNode)
type, details = details[0], details[1:]
hasChildren = False
if type == DOCTYPE:
yield self.doctype(*details)
elif type == TEXT:
for token in self.text(*details):
yield token
elif type == ELEMENT:
namespace, name, attributes, hasChildren = details
if name in voidElements:
for token in self.emptyTag(namespace, name, attributes,
hasChildren):
yield token
hasChildren = False
else:
yield self.startTag(namespace, name, attributes)
elif type == COMMENT:
yield self.comment(details[0])
elif type == ENTITY:
yield self.entity(details[0])
elif type == DOCUMENT:
hasChildren = True
else:
yield self.unknown(details[0])
if hasChildren:
firstChild = self.getFirstChild(currentNode)
else:
firstChild = None
if firstChild is not None:
currentNode = firstChild
else:
while currentNode is not None:
details = self.getNodeDetails(currentNode)
type, details = details[0], details[1:]
if type == ELEMENT:
namespace, name, attributes, hasChildren = details
if name not in voidElements:
yield self.endTag(namespace, name)
if self.tree is currentNode:
currentNode = None
break
nextSibling = self.getNextSibling(currentNode)
if nextSibling is not None:
currentNode = nextSibling
break
else:
currentNode = self.getParentNode(currentNode)
| mit |
Byron/bcore | src/python/bcmd/tests/doc/test_examples.py | 1 | 2687 | #-*-coding:utf-8-*-
"""
@package bapp.tests.doc.test_examples
@brief See bapp.tests.doc for more information
@author Sebastian Thiel
@copyright [GNU Lesser General Public License](https://www.gnu.org/licenses/lgpl.html)
"""
from __future__ import print_function
__all__ = []
import sys
import bapp
from butility.tests import (TestCase,
with_rw_directory)
import bcmd
# ==============================================================================
# \name TestTypes
# ------------------------------------------------------------------------------
# Types that derive from the type to be tested
# \{
# [ExampleCommand]
class ExampleCommand(bcmd.Command):
"""A command with verbosity argument"""
__slots__ = ()
name = 'example'
version = '1.0'
description = 'a simple example'
def setup_argparser(self, parser):
"""Add our arguments"""
parser.add_argument('-v',
action='count',
default=0,
dest='verbosity',
help='set the verbosity level - more v characters increase the level')
return self
def execute(self, args, remaining_args):
"""Be verbose or not"""
if args.verbosity > 0:
print('executing example')
if args.verbosity > 1:
print('Its my first time ...')
if args.verbosity > 2:
print('and it feels great')
return 0
# end class ExampleCommand
# [ExampleCommand]
# [ExampleCommandWithSubcommands]
class MasterCommand(bcmd.Command):
"""Allows for subcommands"""
name = 'master'
version = '1.5'
description = 'a simple example command with subcommands support'
subcommands_title = 'Modes'
subcommands_help = 'All modes we support - type "example <mode> --help" for mode-specific help'
# execute() is implemented by our base to handle subcommands automatically - we don't do anything
# end class MasterCommand
class ExampleSubCommand(ExampleCommand, bcmd.SubCommand, bapp.plugin_type()):
"""Shows how to use an existing command as mode of a master command.
@note we make ourselves a plugin to allow the Command implementation to find our command.
This can also be overridden if no plugin system is required, using the
bcmd.Command._find_compatible_subcommands() method"""
# this associates us with the main command
main_command_name = MasterCommand.name
# And this is it - otherwise you would have to implement a SubCommand as any other command
# end class ExampleSubCommand
# [ExampleCommandWithSubcommands]
# -- End TestTypes -- \}
| lgpl-3.0 |
mlperf/inference_results_v0.7 | closed/Nettrix/code/rnnt/tensorrt/preprocessing/parts/text/cleaners.py | 12 | 3466 | # Copyright (c) 2017 Keith Ito
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" from https://github.com/keithito/tacotron
Modified to add puncturation removal
"""
'''
Cleaners are transformations that run over the input text at both training and eval time.
Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
hyperparameter. Some cleaners are English-specific. You'll typically want to use:
1. "english_cleaners" for English text
2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
the Unidecode library (https://pypi.python.org/pypi/Unidecode)
3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
the symbols in symbols.py to match your data).
'''
import re
from unidecode import unidecode
from .numbers import normalize_numbers
# Regular expression matching whitespace:
_whitespace_re = re.compile(r'\s+')
# List of (regular expression, replacement) pairs for abbreviations:
_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
('mrs', 'misess'),
('mr', 'mister'),
('dr', 'doctor'),
('st', 'saint'),
('co', 'company'),
('jr', 'junior'),
('maj', 'major'),
('gen', 'general'),
('drs', 'doctors'),
('rev', 'reverend'),
('lt', 'lieutenant'),
('hon', 'honorable'),
('sgt', 'sergeant'),
('capt', 'captain'),
('esq', 'esquire'),
('ltd', 'limited'),
('col', 'colonel'),
('ft', 'fort'),
]]
def expand_abbreviations(text):
for regex, replacement in _abbreviations:
text = re.sub(regex, replacement, text)
return text
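The abbreviation table above pairs case-insensitive `\b<abbr>\.` patterns with replacements; a trimmed standalone check of the same construction (only two entries kept, for illustration):

```python
import re

_abbrev = [(re.compile(r'\b%s\.' % pat, re.IGNORECASE), rep)
           for pat, rep in [('mr', 'mister'), ('dr', 'doctor')]]

def expand(text):
    # same loop as expand_abbreviations() above
    for regex, replacement in _abbrev:
        text = re.sub(regex, replacement, text)
    return text

expanded = expand('Dr. Smith met Mr. Jones')
```

Note that the trailing `\.` keeps the pattern from firing inside ordinary words such as "drive".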
def expand_numbers(text):
return normalize_numbers(text)
def lowercase(text):
return text.lower()
def collapse_whitespace(text):
return re.sub(_whitespace_re, ' ', text)
def convert_to_ascii(text):
return unidecode(text)
def remove_punctuation(text, table):
text = text.translate(table)
text = re.sub(r'&', " and ", text)
text = re.sub(r'\+', " plus ", text)
return text
def basic_cleaners(text):
'''Basic pipeline that lowercases and collapses whitespace without transliteration.'''
text = lowercase(text)
text = collapse_whitespace(text)
return text
def transliteration_cleaners(text):
'''Pipeline for non-English text that transliterates to ASCII.'''
text = convert_to_ascii(text)
text = lowercase(text)
text = collapse_whitespace(text)
return text
def english_cleaners(text, table=None):
'''Pipeline for English text, including number and abbreviation expansion.'''
text = convert_to_ascii(text)
text = lowercase(text)
text = expand_numbers(text)
text = expand_abbreviations(text)
if table is not None:
text = remove_punctuation(text, table)
text = collapse_whitespace(text)
return text
| apache-2.0 |
648trindade/pygame-engine-gamejam | src/engine/System.py | 1 | 13260 | import pygame
import pygame.gfxdraw
import os
import sys
from engine.Point import Point
from engine.Scene import Scene
from engine.managers.Texture import Texture
from engine.managers.Font import Font
from engine.managers.TextureSpec import TextureSpec
# fake screen size. All objects think the screen has this size
SCREEN_SIZE = Point(1920, 1080)
WINDOW_SIZE = Point(960, 540)
GAME_NAME = "Jogo"
GAME_DIR = os.path.dirname(os.path.abspath(sys.argv[0])) + "/../"
WHITE_COLOR = (255, 255, 255)
class System:
def __init__(self):
pygame.init()
        # center the window on the monitor
os.environ['SDL_VIDEO_CENTERED'] = 'true'
        # create the surface the game sees, with the fake size
self.screen = pygame.Surface(SCREEN_SIZE)
self.fullscreen = False
        # create the window
self.window_size = None
self.window = None
self.scale = None
self.screen_real_size = None
self.offset = None
self.set_window(WINDOW_SIZE)
self.mouse_rel = None
pygame.display.set_caption(GAME_NAME)
# pygame.display.set_icon()
        # scene stack
        self.scene_stack = list()
        # event list
        self.events = None
        # camera rectangle
self.camera = pygame.Rect((0, 0), SCREEN_SIZE)
self.camera_limits = pygame.Rect((0, 0), SCREEN_SIZE)
self.camera_target = None
        # Texture manager
self.textures = Texture(GAME_DIR)
self.fonts = Font(GAME_DIR)
self.texturespecs = TextureSpec(GAME_DIR)
# Clock
self.clock = pygame.time.Clock()
self.delta_time = 0
def __del__(self):
"""
        Shuts down the system. Called when Python deletes the System object from memory
:return: None
"""
pygame.quit()
def set_window(self, new_size):
"""
        Sets a window size and computes the viewport proportion
        :param new_size: new window size
:return: None
"""
if self.fullscreen:
flags = pygame.FULLSCREEN | pygame.HWSURFACE | pygame.DOUBLEBUF
else:
flags = pygame.NOFRAME
self.window = pygame.display.set_mode(new_size, flags)
self.window_size = Point(self.window.get_size())
        # width and height ratio of the window relative to the fake size
proportion = Point(
new_size.x / SCREEN_SIZE.x,
new_size.y / SCREEN_SIZE.y
)
        # pick the smaller ratio so the view fits the screen nicely
self.scale = min(proportion.x, proportion.y)
        # real size of the rendered screen, keeping the fake aspect ratio
self.screen_real_size = (SCREEN_SIZE * self.scale).int()
        # offset of the screen relative to the window origin (creates
        # black bars)
self.offset = (self.window_size - self.screen_real_size)//2
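The scaling logic in `set_window()` above can be checked in isolation (pure arithmetic, no pygame): the scale is the smaller of the two axis ratios, and the leftover space becomes symmetric black bars.

```python
def letterbox(screen_w, screen_h, win_w, win_h):
    # same math as set_window() above, with plain tuples instead of Point
    scale = min(win_w / screen_w, win_h / screen_h)
    real = (int(screen_w * scale), int(screen_h * scale))
    offset = ((win_w - real[0]) // 2, (win_h - real[1]) // 2)
    return scale, real, offset

letterbox(1920, 1080, 960, 540)    # exact half: no black bars
letterbox(1920, 1080, 3840, 1080)  # ultrawide window: pillarbox bars
```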
def set_fullscreen(self, value):
if self.fullscreen ^ value:
self.fullscreen = value
if self.fullscreen:
self.set_window(Point(pygame.display.list_modes()[0]))
else:
self.set_window(self.window_size)
def run(self):
"""
        Scene loop. Runs a scene until it finishes, then looks for new
        scenes on the stack. Ends when there are no more scenes.
:return: None
"""
game_data = {
'system': self,
'screen_size': SCREEN_SIZE,
'scene': None,
'shared': dict()
}
while len(self.scene_stack) > 0:
            scene = self.scene_stack[-1]  # top of the stack
game_data['scene'] = scene
if scene.is_new():
self.load_assets(scene.name)
scene.start(game_data)
elif scene.is_paused():
scene.resume()
if not scene.is_finished():
scene.run()
if scene.is_paused():
scene.pause()
elif scene.is_finished():
scene.finish()
self.unload_assets(scene.name)
            if scene == self.scene_stack[-1]:  # if the scene is on top
self.pop_scene()
def update(self):
"""
        Updates the system. The active scene must call this method before
        any other update
:return: None
"""
        # fetch the events
self.delta_time = self.clock.tick(60)
self.mouse_rel = Point(pygame.mouse.get_rel()) / self.scale
self.events = pygame.event.get()
for event in self.events:
if event.type is pygame.QUIT:
for scene in self.scene_stack:
scene.state = Scene.STATE_FINISHED
elif event.type is pygame.VIDEORESIZE:
self.set_window(Point(event.size))
if self.camera_target:
self.move_camera(Point(self.camera_target.dest.center) - Point(self.camera.center))
        # clear the screen
self.window.fill(pygame.Color(0, 0, 0))
self.screen.fill(pygame.Color(255, 255, 255))
def render(self):
"""
        Renders the screen onto the window. The active scene must call this
        method after all the game's other render methods.
:return: None
"""
viewport = pygame.transform.scale(self.screen, self.screen_real_size)
self.window.blit(viewport, self.offset)
pygame.display.update()
def blit(self, ID, dest, src=None, fixed=False, angle=0, scale=None):
"""
        Draws a surface on the screen. Supports rendering independent of the
        camera position, as is the case for menus. If the size of dest
        differs from src, the surface is resized.
        :param ID: surface ID
        :param dest: destination Rect on the screen
        :param src: source Rect of the surface
        :param fixed: whether rendering is relative to the camera or not
        :param angle: rotation angle in degrees (0 means no rotation)
        :param scale: optional scale factor applied to the src size
        :return: None
"""
        # get the texture from the texture manager
texture = self.textures.get(ID)
        # if no source rectangle was given, use the texture's own
if not src:
src = texture.get_rect()
        # if the source rectangle differs from the texture's, take that portion
elif src != texture.get_rect():
texture = texture.subsurface(src)
        # compute the destination size from the scale factor
if scale is not None:
if type(dest) is pygame.Rect:
dest.size = Point(src.size) * scale
else:
dest = pygame.Rect(dest, Point(src.size) * scale)
if not self.camera.colliderect(dest):
            # the image rectangle is outside the camera
return
        # if the position is relative to the camera
if not fixed:
            # get the position relative to the camera
dest -= Point(self.camera.topleft)
        # if the source and destination rectangles have different sizes,
        # resize the image to the destination size
if Point(src.size) != Point(dest.size):
texture = pygame.transform.scale(texture, dest.size)
        # rotate if needed
if angle % 360 != 0:
texture = pygame.transform.rotate(texture, angle)
src = texture.get_rect()
src.center = dest.center
dest = src
# screen = pygame.Rect((0, 0), SCREEN_SIZE)
# if not screen.contains(dest):
# clip_area = screen.clip(dest)
# src_area = clip_area - Point(dest.topleft)
# dest = clip_area
# texture = texture.subsurface(src_area)
# self.screen.blit(texture, dest.topleft, src_area)
# else:
# self.screen.blit(texture, dest.topleft)
self.screen.blit(texture, dest.topleft)
        # return the screen rectangle that was rendered
return dest
def push_scene(self, scene):
"""
        Pushes a scene onto the top of the stack
        :param scene: new scene
:return: None
"""
self.scene_stack.append(scene)
def pop_scene(self):
"""
        Removes and returns the scene at the top of the stack
:return: Scene
"""
n_scenes = len(self.scene_stack)
if n_scenes > 0:
return self.scene_stack.pop(n_scenes - 1)
def swap_scene(self, scene):
"""
        Replaces the scene currently at the top of the stack with the given
        scene. Returns the removed scene.
        :param scene: new scene
:return: Scene
"""
old_scene = self.pop_scene()
self.push_scene(scene)
return old_scene
def draw_font(self, text, font_name, size, destination, color=WHITE_COLOR,
centered=True, fixed=False):
"""
        Renders a text and draws it on the screen. Supports rendering
        independent of the camera position, as is the case for menus.
        :param text: text to be rendered
        :param font_name: name of the font (with extension) to render
        :param size: font size
        :param destination: destination point (center)
        :param color: font color
        :param fixed: whether rendering is relative to the camera or not
:return: None
"""
texture = self.fonts.render(text, font_name, size, color)[0]
src = texture.get_rect()
dest = pygame.Rect(destination, src.size)
if centered:
dest.topleft = Point(dest.topleft) - Point(src.center)
if not fixed:
            # if some portion of the surface is visible on the screen
if self.camera.colliderect(dest):
                # get the position relative to the camera
dest -= Point(self.camera.topleft)
else:
                # the image rectangle is outside the camera
return
self.screen.blit(texture, dest)
def calculate_size_text(self, text, font_name, size):
return self.fonts.render(text, font_name, size, (0, 0, 0))[0].get_rect()
# src = self.fonts.render(text, font_name, size, (0, 0, 0))[1]
# src.topleft = (0, 0)
# return src
def draw_geom(self, name, **kargs):
if not kargs.get("fixed"):
if kargs.get("rect"):
kargs["rect"] -= Point(self.camera.topleft)
elif kargs.get("x") and kargs.get("y"):
kargs['x'] -= self.camera.x
kargs['y'] -= self.camera.y
if name == "rectangle":
pygame.gfxdraw.rectangle(self.screen, kargs['rect'], kargs['color'])
elif name == "box":
pygame.gfxdraw.box(self.screen, kargs['rect'], kargs['color'])
elif name == "circle":
pygame.gfxdraw.circle(self.screen, kargs['x'], kargs['y'],
kargs['r'], kargs['color'])
        elif name == "aacircle":
pygame.gfxdraw.aacircle(self.screen, kargs['x'], kargs['y'],
kargs['r'], kargs['color'])
elif name == "filled_circle":
pygame.gfxdraw.filled_circle(self.screen, kargs['x'], kargs['y'],
kargs['r'], kargs['color'])
def get_events(self):
"""
        Returns a copy of the list of events that occurred this frame
:return:
"""
return self.events.copy()
def get_mouse_move(self):
return self.mouse_rel
def get_mouse_pos(self):
return (Point(pygame.mouse.get_pos()) - self.offset) / self.scale
def get_animation(self, image, name):
return self.texturespecs.get(image, name)
def get_image_size(self, image):
return self.textures.get_size(image)
def load_assets(self, name):
self.textures.load(name)
self.texturespecs.load(name)
def unload_assets(self, name):
self.textures.unload(name)
self.texturespecs.unload(name)
def register_last_frame(self):
self.textures.surfaces['last_frame'] = pygame.Surface(SCREEN_SIZE)
self.textures.surfaces['last_frame'].blit(self.screen, (0, 0))
def move_camera(self, offset):
self.camera.topleft += offset
if not self.camera_limits.contains(self.camera):
if self.camera.top not in range(self.camera_limits.top,
self.camera_limits.bottom - self.camera.h):
self.camera.top = min(self.camera_limits.bottom - self.camera.h,
self.camera.top)
self.camera.top = max(self.camera_limits.top, self.camera.top)
if self.camera.left not in range(self.camera_limits.left,
self.camera_limits.right - self.camera.w):
self.camera.left = min(self.camera_limits.right- self.camera.w,
self.camera.left)
self.camera.left = max(self.camera_limits.left, self.camera.left)
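The clamping in `move_camera()` above keeps the camera rect inside `camera_limits`; the same idea reduced to one axis with plain numbers (no pygame needed):

```python
def clamp_axis(pos, size, lim_min, lim_max):
    # keep the span [pos, pos + size] inside [lim_min, lim_max],
    # mirroring the min/max pair used in move_camera() above
    pos = min(pos, lim_max - size)
    pos = max(pos, lim_min)
    return pos

clamp_axis(-50, 100, 0, 1920)   # pushed back to the left limit
clamp_axis(1900, 100, 0, 1920)  # pushed back from the right limit
```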
def reset_camera(self):
self.camera.topleft = Point(0, 0)
| gpl-3.0 |
lamby/debian-devel-changes-bot | tests/test_utils_rewrite_topic.py | 1 | 1414 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Debian Changes Bot
# Copyright (C) 2008 Chris Lamb <chris@chris-lamb.co.uk>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest
import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from DebianDevelChangesBot.utils import rewrite_topic
class TestRewriteTopic(unittest.TestCase):
def testEmpty(self):
self.assertEqual(rewrite_topic("", "", 0), "")
def testSimple(self):
self.assertEqual(rewrite_topic("RC bug count: 1", "RC bug count:", 2), "RC bug count: 2")
def testEmbedded(self):
self.assertEqual(rewrite_topic("pre RC bug count: 1 post", "RC bug count:", 2), "pre RC bug count: 2 post")
if __name__ == "__main__":
unittest.main()
| agpl-3.0 |
griest024/PokyrimTools | pyffi-develop/scripts/cgf/cgftoaster.py | 1 | 3462 | #!/usr/bin/python3
"""A script for casting spells on cgf files. This script is essentially
a cgf specific wrapper around L{pyffi.spells.toaster}."""
# --------------------------------------------------------------------------
# ***** BEGIN LICENSE BLOCK *****
#
# Copyright (c) 2007-2012, Python File Format Interface
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# * Neither the name of the Python File Format Interface
# project nor the names of its contributors may be used to endorse
# or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# ***** END LICENSE BLOCK *****
# --------------------------------------------------------------------------
import logging
import sys
import pyffi.spells
import pyffi.spells.cgf
import pyffi.spells.cgf.check
import pyffi.spells.cgf.dump
import pyffi.formats.cgf
import pyffi.spells.check
class CgfToaster(pyffi.spells.cgf.CgfToaster):
"""Class for toasting cgf files, using any of the available spells."""
SPELLS = [
pyffi.spells.check.SpellRead,
pyffi.spells.cgf.check.SpellReadWrite,
pyffi.spells.cgf.check.SpellCheckTangentSpace,
pyffi.spells.cgf.check.SpellCheckHasVertexColors,
pyffi.spells.cgf.dump.SpellDumpAll]
ALIASDICT = {
"read": "check_read",
"readwrite": "check_readwrite"}
EXAMPLES = """* check if PyFFI can read all files in current directory:
python cgftoaster.py read .
* same as above, but also find out profile information on reading cgf
files:
python -m cProfile -s cumulative -o profile_read.txt cgftoaster.py -j 1 read .
* find out time spent on a particular test:
python -m cProfile -s cumulative cgftoaster.py -j 1 dump"""
# if script is called...
if __name__ == "__main__":
# set up logger
logger = logging.getLogger("pyffi")
logger.setLevel(logging.DEBUG)
loghandler = logging.StreamHandler(sys.stdout)
loghandler.setLevel(logging.DEBUG)
logformatter = logging.Formatter("%(name)s:%(levelname)s:%(message)s")
loghandler.setFormatter(logformatter)
logger.addHandler(loghandler)
# call toaster
CgfToaster().cli()
| mit |
kaichogami/sympy | sympy/geometry/tests/test_polygon.py | 6 | 13890 | from __future__ import division
import warnings
from sympy import Abs, Rational, Float, S, Symbol, cos, pi, sqrt, oo
from sympy.functions.elementary.trigonometric import tan
from sympy.geometry import (Circle, Ellipse, GeometryError, Point, Polygon, Ray, RegularPolygon, Segment, Triangle, are_similar,
convex_hull, intersection, Line)
from sympy.utilities.pytest import raises
from sympy.utilities.randtest import verify_numerically
from sympy.geometry.polygon import rad, deg
def feq(a, b):
"""Test if two floating point values are 'equal'."""
t_float = Float("1.0E-10")
return -t_float < a - b < t_float
def test_polygon():
x = Symbol('x', real=True)
y = Symbol('y', real=True)
x1 = Symbol('x1', real=True)
half = Rational(1, 2)
a, b, c = Point(0, 0), Point(2, 0), Point(3, 3)
t = Triangle(a, b, c)
assert Polygon(a, Point(1, 0), b, c) == t
assert Polygon(Point(1, 0), b, c, a) == t
assert Polygon(b, c, a, Point(1, 0)) == t
# 2 "remove folded" tests
assert Polygon(a, Point(3, 0), b, c) == t
assert Polygon(a, b, Point(3, -1), b, c) == t
raises(GeometryError, lambda: Polygon((0, 0), (1, 0), (0, 1), (1, 1)))
# remove multiple collinear points
assert Polygon(Point(-4, 15), Point(-11, 15), Point(-15, 15),
Point(-15, 33/5), Point(-15, -87/10), Point(-15, -15),
Point(-42/5, -15), Point(-2, -15), Point(7, -15), Point(15, -15),
Point(15, -3), Point(15, 10), Point(15, 15)) == \
Polygon(Point(-15,-15), Point(15,-15), Point(15,15), Point(-15,15))
p1 = Polygon(
Point(0, 0), Point(3, -1),
Point(6, 0), Point(4, 5),
Point(2, 3), Point(0, 3))
p2 = Polygon(
Point(6, 0), Point(3, -1),
Point(0, 0), Point(0, 3),
Point(2, 3), Point(4, 5))
p3 = Polygon(
Point(0, 0), Point(3, 0),
Point(5, 2), Point(4, 4))
p4 = Polygon(
Point(0, 0), Point(4, 4),
Point(5, 2), Point(3, 0))
p5 = Polygon(
Point(0, 0), Point(4, 4),
Point(0, 4))
p6 = Polygon(
Point(-11, 1), Point(-9, 6.6),
Point(-4, -3), Point(-8.4, -8.7))
r = Ray(Point(-9,6.6), Point(-9,5.5))
#
# General polygon
#
assert p1 == p2
assert len(p1.args) == 6
assert len(p1.sides) == 6
assert p1.perimeter == 5 + 2*sqrt(10) + sqrt(29) + sqrt(8)
assert p1.area == 22
assert not p1.is_convex()
# ensure convex for both CW and CCW point specification
assert p3.is_convex()
assert p4.is_convex()
dict5 = p5.angles
assert dict5[Point(0, 0)] == pi / 4
assert dict5[Point(0, 4)] == pi / 2
assert p5.encloses_point(Point(x, y)) is None
assert p5.encloses_point(Point(1, 3))
assert p5.encloses_point(Point(0, 0)) is False
assert p5.encloses_point(Point(4, 0)) is False
assert p1.encloses(Circle(Point(2.5,2.5),5)) is False
assert p1.encloses(Ellipse(Point(2.5,2),5,6)) is False
assert p5.plot_interval('x') == [x, 0, 1]
assert p5.distance(
Polygon(Point(10, 10), Point(14, 14), Point(10, 14))) == 6 * sqrt(2)
assert p5.distance(
Polygon(Point(1, 8), Point(5, 8), Point(8, 12), Point(1, 12))) == 4
warnings.filterwarnings(
"error", message="Polygons may intersect producing erroneous output")
raises(UserWarning,
lambda: Polygon(Point(0, 0), Point(1, 0),
Point(1, 1)).distance(
Polygon(Point(0, 0), Point(0, 1), Point(1, 1))))
warnings.filterwarnings(
"ignore", message="Polygons may intersect producing erroneous output")
assert hash(p5) == hash(Polygon(Point(0, 0), Point(4, 4), Point(0, 4)))
assert p5 == Polygon(Point(4, 4), Point(0, 4), Point(0, 0))
assert Polygon(Point(4, 4), Point(0, 4), Point(0, 0)) in p5
assert p5 != Point(0, 4)
assert Point(0, 1) in p5
assert p5.arbitrary_point('t').subs(Symbol('t', real=True), 0) == \
Point(0, 0)
raises(ValueError, lambda: Polygon(
Point(x, 0), Point(0, y), Point(x, y)).arbitrary_point('x'))
assert p6.intersection(r) == [Point(-9, 33/5), Point(-9, -84/13)]
#
# Regular polygon
#
p1 = RegularPolygon(Point(0, 0), 10, 5)
p2 = RegularPolygon(Point(0, 0), 5, 5)
raises(GeometryError, lambda: RegularPolygon(Point(0, 0), Point(0,
1), Point(1, 1)))
raises(GeometryError, lambda: RegularPolygon(Point(0, 0), 1, 2))
raises(ValueError, lambda: RegularPolygon(Point(0, 0), 1, 2.5))
assert p1 != p2
assert p1.interior_angle == 3*pi/5
assert p1.exterior_angle == 2*pi/5
assert p2.apothem == 5*cos(pi/5)
assert p2.circumcenter == p1.circumcenter == Point(0, 0)
assert p1.circumradius == p1.radius == 10
assert p2.circumcircle == Circle(Point(0, 0), 5)
assert p2.incircle == Circle(Point(0, 0), p2.apothem)
assert p2.inradius == p2.apothem == (5 * (1 + sqrt(5)) / 4)
p2.spin(pi / 10)
dict1 = p2.angles
assert dict1[Point(0, 5)] == 3 * pi / 5
assert p1.is_convex()
assert p1.rotation == 0
assert p1.encloses_point(Point(0, 0))
assert p1.encloses_point(Point(11, 0)) is False
assert p2.encloses_point(Point(0, 4.9))
p1.spin(pi/3)
assert p1.rotation == pi/3
assert p1.vertices[0] == Point(5, 5*sqrt(3))
for var in p1.args:
if isinstance(var, Point):
assert var == Point(0, 0)
else:
assert var == 5 or var == 10 or var == pi / 3
assert p1 != Point(0, 0)
assert p1 != p5
# while spin works in place (notice that rotation is 2pi/3 below)
# rotate returns a new object
p1_old = p1
assert p1.rotate(pi/3) == RegularPolygon(Point(0, 0), 10, 5, 2*pi/3)
assert p1 == p1_old
assert p1.area == (-250*sqrt(5) + 1250)/(4*tan(pi/5))
assert p1.length == 20*sqrt(-sqrt(5)/8 + 5/8)
assert p1.scale(2, 2) == \
RegularPolygon(p1.center, p1.radius*2, p1._n, p1.rotation)
assert RegularPolygon((0, 0), 1, 4).scale(2, 3) == \
Polygon(Point(2, 0), Point(0, 3), Point(-2, 0), Point(0, -3))
assert repr(p1) == str(p1)
#
# Angles
#
angles = p4.angles
assert feq(angles[Point(0, 0)].evalf(), Float("0.7853981633974483"))
assert feq(angles[Point(4, 4)].evalf(), Float("1.2490457723982544"))
assert feq(angles[Point(5, 2)].evalf(), Float("1.8925468811915388"))
assert feq(angles[Point(3, 0)].evalf(), Float("2.3561944901923449"))
angles = p3.angles
assert feq(angles[Point(0, 0)].evalf(), Float("0.7853981633974483"))
assert feq(angles[Point(4, 4)].evalf(), Float("1.2490457723982544"))
assert feq(angles[Point(5, 2)].evalf(), Float("1.8925468811915388"))
assert feq(angles[Point(3, 0)].evalf(), Float("2.3561944901923449"))
#
# Triangle
#
p1 = Point(0, 0)
p2 = Point(5, 0)
p3 = Point(0, 5)
t1 = Triangle(p1, p2, p3)
t2 = Triangle(p1, p2, Point(Rational(5, 2), sqrt(Rational(75, 4))))
t3 = Triangle(p1, Point(x1, 0), Point(0, x1))
s1 = t1.sides
assert Triangle(p1, p2, p1) == Polygon(p1, p2, p1) == Segment(p1, p2)
raises(GeometryError, lambda: Triangle(Point(0, 0)))
# Basic stuff
assert Triangle(p1, p1, p1) == p1
assert Triangle(p2, p2*2, p2*3) == Segment(p2, p2*3)
assert t1.area == Rational(25, 2)
assert t1.is_right()
assert t2.is_right() is False
assert t3.is_right()
assert p1 in t1
assert t1.sides[0] in t1
assert Segment((0, 0), (1, 0)) in t1
assert Point(5, 5) not in t2
assert t1.is_convex()
assert feq(t1.angles[p1].evalf(), pi.evalf()/2)
assert t1.is_equilateral() is False
assert t2.is_equilateral()
assert t3.is_equilateral() is False
assert are_similar(t1, t2) is False
assert are_similar(t1, t3)
assert are_similar(t2, t3) is False
assert t1.is_similar(Point(0, 0)) is False
# Bisectors
bisectors = t1.bisectors()
assert bisectors[p1] == Segment(p1, Point(Rational(5, 2), Rational(5, 2)))
ic = (250 - 125*sqrt(2)) / 50
assert t1.incenter == Point(ic, ic)
# Inradius
assert t1.inradius == t1.incircle.radius == 5 - 5*sqrt(2)/2
assert t2.inradius == t2.incircle.radius == 5*sqrt(3)/6
assert t3.inradius == t3.incircle.radius == x1**2/((2 + sqrt(2))*Abs(x1))
# Circumcircle
assert t1.circumcircle.center == Point(2.5, 2.5)
# Medians + Centroid
m = t1.medians
assert t1.centroid == Point(Rational(5, 3), Rational(5, 3))
assert m[p1] == Segment(p1, Point(Rational(5, 2), Rational(5, 2)))
assert t3.medians[p1] == Segment(p1, Point(x1/2, x1/2))
assert intersection(m[p1], m[p2], m[p3]) == [t1.centroid]
assert t1.medial == Triangle(Point(2.5, 0), Point(0, 2.5), Point(2.5, 2.5))
# Perpendicular
altitudes = t1.altitudes
assert altitudes[p1] == Segment(p1, Point(Rational(5, 2), Rational(5, 2)))
assert altitudes[p2] == s1[0]
assert altitudes[p3] == s1[2]
assert t1.orthocenter == p1
t = S('''Triangle(
Point(100080156402737/5000000000000, 79782624633431/500000000000),
Point(39223884078253/2000000000000, 156345163124289/1000000000000),
Point(31241359188437/1250000000000, 338338270939941/1000000000000000))''')
assert t.orthocenter == S('''Point(-780660869050599840216997'''
'''79471538701955848721853/80368430960602242240789074233100000000000000,'''
'''20151573611150265741278060334545897615974257/16073686192120448448157'''
'''8148466200000000000)''')
# Ensure
assert len(intersection(*bisectors.values())) == 1
assert len(intersection(*altitudes.values())) == 1
assert len(intersection(*m.values())) == 1
# Distance
p1 = Polygon(
Point(0, 0), Point(1, 0),
Point(1, 1), Point(0, 1))
p2 = Polygon(
Point(0, Rational(5)/4), Point(1, Rational(5)/4),
Point(1, Rational(9)/4), Point(0, Rational(9)/4))
p3 = Polygon(
Point(1, 2), Point(2, 2),
Point(2, 1))
p4 = Polygon(
Point(1, 1), Point(Rational(6)/5, 1),
Point(1, Rational(6)/5))
pt1 = Point(half, half)
pt2 = Point(1, 1)
'''Polygon to Point'''
assert p1.distance(pt1) == half
assert p1.distance(pt2) == 0
assert p2.distance(pt1) == Rational(3)/4
assert p3.distance(pt2) == sqrt(2)/2
'''Polygon to Polygon'''
# p1.distance(p2) emits a warning
# First, test the warning
warnings.filterwarnings("error",
message="Polygons may intersect producing erroneous output")
raises(UserWarning, lambda: p1.distance(p2))
# now test the actual output
warnings.filterwarnings("ignore",
message="Polygons may intersect producing erroneous output")
assert p1.distance(p2) == half/2
assert p1.distance(p3) == sqrt(2)/2
assert p3.distance(p4) == (sqrt(2)/2 - sqrt(Rational(2)/25)/2)
def test_convex_hull():
p = [Point(-5, -1), Point(-2, 1), Point(-2, -1), Point(-1, -3),
Point(0, 0), Point(1, 1), Point(2, 2), Point(2, -1), Point(3, 1),
Point(4, -1), Point(6, 2)]
ch = Polygon(p[0], p[3], p[9], p[10], p[6], p[1])
#test handling of duplicate points
p.append(p[3])
#more than 3 collinear points
another_p = [Point(-45, -85), Point(-45, 85), Point(-45, 26),
Point(-45, -24)]
ch2 = Segment(another_p[0], another_p[1])
assert convex_hull(*another_p) == ch2
assert convex_hull(*p) == ch
assert convex_hull(p[0]) == p[0]
assert convex_hull(p[0], p[1]) == Segment(p[0], p[1])
# no unique points
assert convex_hull(*[p[-1]]*3) == p[-1]
# collection of items
assert convex_hull(*[Point(0, 0),
Segment(Point(1, 0), Point(1, 1)),
RegularPolygon(Point(2, 0), 2, 4)]) == \
Polygon(Point(0, 0), Point(2, -2), Point(4, 0), Point(2, 2))
def test_encloses():
# square with a dimpled left side
s = Polygon(Point(0, 0), Point(1, 0), Point(1, 1), Point(0, 1),
Point(S.Half, S.Half))
# the following is True if the polygon isn't treated as closing on itself
assert s.encloses(Point(0, S.Half)) is False
assert s.encloses(Point(S.Half, S.Half)) is False # it's a vertex
assert s.encloses(Point(Rational(3, 4), S.Half)) is True
def test_triangle_kwargs():
assert Triangle(sss=(3, 4, 5)) == \
Triangle(Point(0, 0), Point(3, 0), Point(3, 4))
assert Triangle(asa=(30, 2, 30)) == \
Triangle(Point(0, 0), Point(2, 0), Point(1, sqrt(3)/3))
assert Triangle(sas=(1, 45, 2)) == \
Triangle(Point(0, 0), Point(2, 0), Point(sqrt(2)/2, sqrt(2)/2))
assert Triangle(sss=(1, 2, 5)) is None
assert deg(rad(180)) == 180
def test_transform():
pts = [Point(0, 0), Point(1/2, 1/4), Point(1, 1)]
pts_out = [Point(-4, -10), Point(-3, -37/4), Point(-2, -7)]
assert Triangle(*pts).scale(2, 3, (4, 5)) == Triangle(*pts_out)
assert RegularPolygon((0, 0), 1, 4).scale(2, 3, (4, 5)) == \
Polygon(Point(-2, -10), Point(-4, -7), Point(-6, -10), Point(-4, -13))
def test_reflect():
x = Symbol('x', real=True)
y = Symbol('y', real=True)
b = Symbol('b')
m = Symbol('m')
l = Line((0, b), slope=m)
p = Point(x, y)
r = p.reflect(l)
dp = l.perpendicular_segment(p).length
dr = l.perpendicular_segment(r).length
assert verify_numerically(dp, dr)
t = Triangle((0, 0), (1, 0), (2, 3))
assert Polygon((1, 0), (2, 0), (2, 2)).reflect(Line((3, 0), slope=oo)) \
== Triangle(Point(5, 0), Point(4, 0), Point(4, 2))
assert Polygon((1, 0), (2, 0), (2, 2)).reflect(Line((0, 3), slope=oo)) \
== Triangle(Point(-1, 0), Point(-2, 0), Point(-2, 2))
assert Polygon((1, 0), (2, 0), (2, 2)).reflect(Line((0, 3), slope=0)) \
== Triangle(Point(1, 6), Point(2, 6), Point(2, 4))
assert Polygon((1, 0), (2, 0), (2, 2)).reflect(Line((3, 0), slope=0)) \
== Triangle(Point(1, 0), Point(2, 0), Point(2, -2))
| bsd-3-clause |
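The polygon area assertions above (e.g. `p1.area == 22`) can be cross-checked with a plain shoelace-formula sketch; this is an illustrative standalone snippet, not part of the original dataset row:

```python
def shoelace_area(vertices):
    """Shoelace formula; absolute value of the signed sum gives the area."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Same hexagon as p1 in the tests above; the tests assert p1.area == 22.
hexagon = [(0, 0), (3, -1), (6, 0), (4, 5), (2, 3), (0, 3)]
assert shoelace_area(hexagon) == 22.0
```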
klmitch/keystone | keystone/tests/unit/identity/test_core.py | 3 | 7470 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Unit tests for core identity behavior."""
import itertools
import os
import uuid
import mock
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from keystone import exception
from keystone import identity
from keystone.tests import unit
from keystone.tests.unit.ksfixtures import database
CONF = cfg.CONF
class TestDomainConfigs(unit.BaseTestCase):
def setUp(self):
super(TestDomainConfigs, self).setUp()
self.addCleanup(CONF.reset)
self.tmp_dir = unit.dirs.tmp()
self.config_fixture = self.useFixture(config_fixture.Config(CONF))
self.config_fixture.config(domain_config_dir=self.tmp_dir,
group='identity')
def test_config_for_nonexistent_domain(self):
"""Having a config for a non-existent domain will be ignored.
There are no assertions in this test because there are no side
effects. If there is a config file for a domain that does not
exist it should be ignored.
"""
domain_id = uuid.uuid4().hex
domain_config_filename = os.path.join(self.tmp_dir,
'keystone.%s.conf' % domain_id)
self.addCleanup(lambda: os.remove(domain_config_filename))
with open(domain_config_filename, 'w'):
"""Write an empty config file."""
e = exception.DomainNotFound(domain_id=domain_id)
mock_assignment_api = mock.Mock()
mock_assignment_api.get_domain_by_name.side_effect = e
domain_config = identity.DomainConfigs()
fake_standard_driver = None
domain_config.setup_domain_drivers(fake_standard_driver,
mock_assignment_api)
def test_config_for_dot_name_domain(self):
# Ensure we can get the right domain name which has dots within it
# from filename.
domain_config_filename = os.path.join(self.tmp_dir,
'keystone.abc.def.com.conf')
with open(domain_config_filename, 'w'):
"""Write an empty config file."""
self.addCleanup(os.remove, domain_config_filename)
with mock.patch.object(identity.DomainConfigs,
'_load_config_from_file') as mock_load_config:
domain_config = identity.DomainConfigs()
fake_assignment_api = None
fake_standard_driver = None
domain_config.setup_domain_drivers(fake_standard_driver,
fake_assignment_api)
mock_load_config.assert_called_once_with(fake_assignment_api,
[domain_config_filename],
'abc.def.com')
def test_config_for_multiple_sql_backend(self):
domains_config = identity.DomainConfigs()
# Create the right sequence of is_sql in the drivers being
# requested to expose the bug, which is that a False setting
# means it forgets previous True settings.
drivers = []
files = []
for idx, is_sql in enumerate((True, False, True)):
drv = mock.Mock(is_sql=is_sql)
drivers.append(drv)
name = 'dummy.{0}'.format(idx)
files.append(''.join((
identity.DOMAIN_CONF_FHEAD,
name,
identity.DOMAIN_CONF_FTAIL)))
walk_fake = lambda *a, **kwa: (
('/fake/keystone/domains/config', [], files), )
generic_driver = mock.Mock(is_sql=False)
assignment_api = mock.Mock()
id_factory = itertools.count()
assignment_api.get_domain_by_name.side_effect = (
lambda name: {'id': next(id_factory), '_': 'fake_domain'})
load_driver_mock = mock.Mock(side_effect=drivers)
with mock.patch.object(os, 'walk', walk_fake):
with mock.patch.object(identity.cfg, 'ConfigOpts'):
with mock.patch.object(domains_config, '_load_driver',
load_driver_mock):
self.assertRaises(
exception.MultipleSQLDriversInConfig,
domains_config.setup_domain_drivers,
generic_driver, assignment_api)
self.assertEqual(3, load_driver_mock.call_count)
class TestDatabaseDomainConfigs(unit.TestCase):
def setUp(self):
super(TestDatabaseDomainConfigs, self).setUp()
self.useFixture(database.Database())
self.load_backends()
def test_domain_config_in_database_disabled_by_default(self):
self.assertFalse(CONF.identity.domain_configurations_from_database)
def test_loading_config_from_database(self):
self.config_fixture.config(domain_configurations_from_database=True,
group='identity')
domain = unit.new_domain_ref()
self.resource_api.create_domain(domain['id'], domain)
# Override two config options for our domain
conf = {'ldap': {'url': uuid.uuid4().hex,
'suffix': uuid.uuid4().hex,
'use_tls': 'True'},
'identity': {
'driver': 'ldap'}}
self.domain_config_api.create_config(domain['id'], conf)
fake_standard_driver = None
domain_config = identity.DomainConfigs()
domain_config.setup_domain_drivers(fake_standard_driver,
self.resource_api)
# Make sure our two overrides are in place, and others are not affected
res = domain_config.get_domain_conf(domain['id'])
self.assertEqual(conf['ldap']['url'], res.ldap.url)
self.assertEqual(conf['ldap']['suffix'], res.ldap.suffix)
self.assertEqual(CONF.ldap.query_scope, res.ldap.query_scope)
# Make sure the override is not changing the type of the config value
use_tls_type = type(CONF.ldap.use_tls)
self.assertEqual(use_tls_type(conf['ldap']['use_tls']),
res.ldap.use_tls)
# Now turn off using database domain configuration and check that the
# default config file values are now seen instead of the overrides.
CONF.set_override('domain_configurations_from_database', False,
'identity', enforce_type=True)
domain_config = identity.DomainConfigs()
domain_config.setup_domain_drivers(fake_standard_driver,
self.resource_api)
res = domain_config.get_domain_conf(domain['id'])
self.assertEqual(CONF.ldap.url, res.ldap.url)
self.assertEqual(CONF.ldap.suffix, res.ldap.suffix)
self.assertEqual(CONF.ldap.use_tls, res.ldap.use_tls)
self.assertEqual(CONF.ldap.query_scope, res.ldap.query_scope)
| apache-2.0 |
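The keystone tests above lean on `mock.Mock(side_effect=...)` to fabricate domain lookups. A minimal standalone sketch of that pattern, using the stdlib `unittest.mock` instead of the older `mock` package (the names here are illustrative):

```python
import itertools
from unittest import mock

# A counter-backed side_effect returns a fresh fake domain per lookup,
# mirroring the id_factory idiom in test_config_for_multiple_sql_backend.
id_factory = itertools.count()
assignment_api = mock.Mock()
assignment_api.get_domain_by_name.side_effect = (
    lambda name: {'id': next(id_factory), 'name': name})

first = assignment_api.get_domain_by_name('default')
second = assignment_api.get_domain_by_name('ldap_domain')
assert first['id'] == 0 and second['id'] == 1
assert assignment_api.get_domain_by_name.call_count == 2
```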
Kapim/ar-table-itable | art_instructions/src/art_instructions/gui/visual_inspection.py | 6 | 5819 | from art_instructions.gui import GuiInstruction
from art_projected_gui.items import ImageItem, DialogItem
import qimage2ndarray
from cv_bridge import CvBridge, CvBridgeError
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import Bool
from PyQt4 import QtCore
from art_utils import array_from_param
translate = QtCore.QCoreApplication.translate
class VisualInspection(GuiInstruction):
NAME = translate("VisualInspection", "Visual inspection")
def __init__(self, *args, **kwargs):
super(VisualInspection, self).__init__(*args, **kwargs)
self.bridge = CvBridge()
topic = ""
try:
topic = rospy.get_param("/art/visual_inspection/topic")
except KeyError:
self.logerr("Topic for visual inspection not set!")
if not topic:
self.logwarn("Using default topic!")
topic = "image_color"
self.showing_result = False
self.to_be_cleaned_up = False
try:
# TODO maybe this could be defined in instructions yaml?

img_origin = array_from_param("/art/visual_inspection/origin", float, 2)
img_size = array_from_param("/art/visual_inspection/size", float, 2)
fixed = True
except KeyError:
img_origin = (0.3, 0.3)
img_size = (0.2, 0.1)
fixed = False
# TODO display image_item after we receive first image?
self.img_item = ImageItem(self.ui.scene, img_origin[0], img_origin[1], img_size[0], img_size[1], fixed)
self.img_sub = rospy.Subscriber(topic, Image, self.image_callback, queue_size=1)
self.result_sub = rospy.Subscriber("/art/visual_inspection/result", Bool, self.result_callback, queue_size=10)
self.text_timer = QtCore.QTimer(self)
self.text_timer.timeout.connect(self.text_timer_tick)
self.text_timer.setSingleShot(True)
@staticmethod
def get_text(ph, block_id, item_id):
text = "\n"
if ph.is_pose_set(block_id, item_id):
text += translate("VisualInspection", " Pose stored.")
else:
text += translate("VisualInspection", " Pose has to be set.")
return text
def result_callback(self, msg):
if not self.img_item:
return
self.showing_result = True
if msg.data:
self.img_item.set_text("OK", QtCore.Qt.green)
else:
self.img_item.set_text("NOK", QtCore.Qt.red)
self.text_timer.start(5000)
def text_timer_tick(self):
if not self.img_item:
return
self.img_item.set_text()
self.showing_result = False
def image_callback(self, msg):
if not self.img_item:
return
try:
cv_image = self.bridge.imgmsg_to_cv2(msg, "rgb8")
except CvBridgeError as e:
print(e)
return
self.img_item.set_image(qimage2ndarray.array2qimage(cv_image))
def cleanup(self):
self.img_sub.unregister()
self.result_sub.unregister()
self.to_be_cleaned_up = True
if not self.showing_result:
self.ui.scene.removeItem(self.img_item)
self.img_item = None
else:
return (self.img_item, rospy.Time.now() + rospy.Duration(5.0)),
return ()
class VisualInspectionLearn(VisualInspection):
def __init__(self, *args, **kwargs):
super(VisualInspectionLearn, self).__init__(*args, **kwargs)
self.dialog = None
rp_learned, rp_id = self.ui.ph.ref_pick_learned(*self.cid)
if not rp_learned:
self.ui.notif(translate("VisualInspection", "Pick instruction (%1) has to be set first.").arg(rp_id))
self.notified = True
if self.editable:
self.ui.notif(
translate(
"VisualInspection",
"Now you may adjust pose for visual inspection."))
self.notified = True
self.dialog = DialogItem(
self.ui.scene, self.ui.width / 2, 0.1, translate(
"VisualInspection", "Save visual inspection pose"),
[arm.name(self.ui.loc) for arm in self.ui.rh.get_robot_arms()], self.save_pose_cb)
self.dialog_timer = QtCore.QTimer()
self.dialog_timer.timeout.connect(self.dialog_timer_tick)
self.dialog_timer.setSingleShot(True)
def save_pose_cb(self, idx):
ps = self.ui.rh.get_robot_arms()[idx].get_pose()
if ps:
self.ui.notif(translate("VisualInspection", "Pose was stored."), temp=True)
self.ui.notify_info()
self.ui.program_vis.set_pose(ps)
self.dialog.items[idx].set_caption(translate("VisualInspection", "Stored"))
else:
self.ui.notif(translate("VisualInspection", "Failed to get pose."), temp=True)
self.ui.notify_warn()
self.dialog.items[idx].set_caption(translate("VisualInspection", "Failed"))
self.dialog.items[idx].set_enabled(False)
self.dialog_timer.start(1000)
def dialog_timer_tick(self):
for idx, arm in enumerate(self.ui.rh.get_robot_arms()):
self.dialog.items[idx].set_caption(arm.name(self.ui.loc))
for v in self.dialog.items:
v.set_enabled(True)
def learning_done(self):
if self.dialog:
self.ui.scene.removeItem(self.dialog)
self.dialog = None
class VisualInspectionRun(VisualInspection):
def __init__(self, *args, **kwargs):
super(VisualInspectionRun, self).__init__(*args, **kwargs)
self.ui.notif(
translate(
"VisualInspection",
"Visual inspection in progress..."))
| lgpl-2.1 |
davidmueller13/kernel_samsung_trelte | tools/perf/scripts/python/syscall-counts-by-pid.py | 11180 | 1927 | # system call counts, by pid
# (c) 2010, Tom Zanussi <tzanussi@gmail.com>
# Licensed under the terms of the GNU GPL License version 2
#
# Displays system-wide system call totals, broken down by syscall.
# If a [comm] arg is specified, only syscalls called by [comm] are displayed.
import os, sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import syscall_name
usage = "perf script -s syscall-counts-by-pid.py [comm]\n";
for_comm = None
for_pid = None
if len(sys.argv) > 2:
sys.exit(usage)
if len(sys.argv) > 1:
try:
for_pid = int(sys.argv[1])
except:
for_comm = sys.argv[1]
syscalls = autodict()
def trace_begin():
print "Press control+C to stop and show the summary"
def trace_end():
print_syscall_totals()
def raw_syscalls__sys_enter(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
id, args):
if (for_comm and common_comm != for_comm) or \
(for_pid and common_pid != for_pid ):
return
try:
syscalls[common_comm][common_pid][id] += 1
except TypeError:
syscalls[common_comm][common_pid][id] = 1
def print_syscall_totals():
if for_comm is not None:
print "\nsyscall events for %s:\n\n" % (for_comm),
else:
print "\nsyscall events by comm/pid:\n\n",
print "%-40s %10s\n" % ("comm [pid]/syscalls", "count"),
print "%-40s %10s\n" % ("----------------------------------------", \
"----------"),
comm_keys = syscalls.keys()
for comm in comm_keys:
pid_keys = syscalls[comm].keys()
for pid in pid_keys:
print "\n%s [%d]\n" % (comm, pid),
id_keys = syscalls[comm][pid].keys()
for id, val in sorted(syscalls[comm][pid].iteritems(), \
key = lambda(k, v): (v, k), reverse = True):
print " %-38s %10d\n" % (syscall_name(id), val),
| gpl-2.0 |
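The `autodict` plus `try: ... except TypeError` increment idiom above is perf's Python-2 helper; in plain modern Python the same nested counting can be sketched with `collections.defaultdict` (illustrative only, not part of the perf script):

```python
from collections import defaultdict

# comm -> pid -> syscall id -> count, without the TypeError dance.
syscalls = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

events = [('bash', 101, 3), ('bash', 101, 3), ('sshd', 202, 0)]
for comm, pid, syscall_id in events:
    syscalls[comm][pid][syscall_id] += 1

assert syscalls['bash'][101][3] == 2
assert syscalls['sshd'][202][0] == 1
```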
thehyve/variant | eggs/django-1.3.1-py2.7.egg/django/contrib/gis/utils/ogrinfo.py | 389 | 1973 | """
This module includes some utility functions for inspecting the layout
of a GDAL data source -- the functionality is analogous to the output
produced by the `ogrinfo` utility.
"""
from django.contrib.gis.gdal import DataSource
from django.contrib.gis.gdal.geometries import GEO_CLASSES
def ogrinfo(data_source, num_features=10):
"""
Walks the available layers in the supplied `data_source`, displaying
the fields for the first `num_features` features.
"""
# Checking the parameters.
if isinstance(data_source, str):
data_source = DataSource(data_source)
elif isinstance(data_source, DataSource):
pass
else:
raise Exception('Data source parameter must be a string or a DataSource object.')
for i, layer in enumerate(data_source):
print "data source : %s" % data_source.name
print "==== layer %s" % i
print " shape type: %s" % GEO_CLASSES[layer.geom_type.num].__name__
print " # features: %s" % len(layer)
print " srs: %s" % layer.srs
extent_tup = layer.extent.tuple
print " extent: %s - %s" % (extent_tup[0:2], extent_tup[2:4])
print "Displaying the first %s features ====" % num_features
width = max(*map(len,layer.fields))
fmt = " %%%ss: %%s" % width
for j, feature in enumerate(layer[:num_features]):
print "=== Feature %s" % j
for fld_name in layer.fields:
type_name = feature[fld_name].type_name
output = fmt % (fld_name, type_name)
val = feature.get(fld_name)
if val:
if isinstance(val, str):
val_fmt = ' ("%s")'
else:
val_fmt = ' (%s)'
output += val_fmt % val
else:
output += ' (None)'
print output
# For backwards compatibility.
sample = ogrinfo
| apache-2.0 |
SlimRemix/android_external_chromium_org | media/tools/layout_tests/test_expectations_unittest.py | 165 | 1697 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
from test_expectations import TestExpectations
class TestTestExpectations(unittest.TestCase):
def testParseLine(self):
line = ('crbug.com/86714 [ Mac Gpu ] media/video-zoom.html [ Crash '
'ImageOnlyFailure ]')
expected_map = {'CRASH': True, 'IMAGE': True, 'Bugs': ['BUGCR86714'],
'Comments': '', 'MAC': True, 'Gpu': True,
'Platforms': ['MAC', 'Gpu']}
self.assertEquals(TestExpectations.ParseLine(line),
('media/video-zoom.html', expected_map))
def testParseLineWithLineComments(self):
line = ('crbug.com/86714 [ Mac Gpu ] media/video-zoom.html [ Crash '
'ImageOnlyFailure ] # foo')
expected_map = {'CRASH': True, 'IMAGE': True, 'Bugs': ['BUGCR86714'],
'Comments': ' foo', 'MAC': True, 'Gpu': True,
'Platforms': ['MAC', 'Gpu']}
self.assertEquals(TestExpectations.ParseLine(line),
('media/video-zoom.html', expected_map))
def testParseLineWithLineGPUComments(self):
line = ('crbug.com/86714 [ Mac ] media/video-zoom.html [ Crash '
'ImageOnlyFailure ] # Gpu')
expected_map = {'CRASH': True, 'IMAGE': True, 'Bugs': ['BUGCR86714'],
'Comments': ' Gpu', 'MAC': True,
'Platforms': ['MAC']}
self.assertEquals(TestExpectations.ParseLine(line),
('media/video-zoom.html', expected_map))
if __name__ == '__main__':
unittest.main()
| bsd-3-clause |
hn8841182/20150623-test02 | static/Brython3.1.0-20150301-090019/Lib/fnmatch.py | 894 | 3163 | """Filename matching with shell patterns.
fnmatch(FILENAME, PATTERN) matches according to the local convention.
fnmatchcase(FILENAME, PATTERN) always takes case in account.
The functions operate by translating the pattern into a regular
expression. They cache the compiled regular expressions for speed.
The function translate(PATTERN) returns a regular expression
corresponding to PATTERN. (It does not compile it.)
"""
import os
import posixpath
import re
import functools
__all__ = ["filter", "fnmatch", "fnmatchcase", "translate"]
def fnmatch(name, pat):
"""Test whether FILENAME matches PATTERN.
Patterns are Unix shell style:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
An initial period in FILENAME is not special.
Both FILENAME and PATTERN are first case-normalized
if the operating system requires it.
If you don't want this, use fnmatchcase(FILENAME, PATTERN).
"""
name = os.path.normcase(name)
pat = os.path.normcase(pat)
return fnmatchcase(name, pat)
@functools.lru_cache(maxsize=256, typed=True)
def _compile_pattern(pat):
if isinstance(pat, bytes):
pat_str = str(pat, 'ISO-8859-1')
res_str = translate(pat_str)
res = bytes(res_str, 'ISO-8859-1')
else:
res = translate(pat)
return re.compile(res).match
def filter(names, pat):
"""Return the subset of the list NAMES that match PAT."""
result = []
pat = os.path.normcase(pat)
match = _compile_pattern(pat)
if os.path is posixpath:
# normcase on posix is NOP. Optimize it away from the loop.
for name in names:
if match(name):
result.append(name)
else:
for name in names:
if match(os.path.normcase(name)):
result.append(name)
return result
def fnmatchcase(name, pat):
"""Test whether FILENAME matches PATTERN, including case.
This is a version of fnmatch() which doesn't case-normalize
its arguments.
"""
match = _compile_pattern(pat)
return match(name) is not None
def translate(pat):
"""Translate a shell PATTERN to a regular expression.
There is no way to quote meta-characters.
"""
i, n = 0, len(pat)
res = ''
while i < n:
c = pat[i]
i = i+1
if c == '*':
res = res + '.*'
elif c == '?':
res = res + '.'
elif c == '[':
j = i
if j < n and pat[j] == '!':
j = j+1
if j < n and pat[j] == ']':
j = j+1
while j < n and pat[j] != ']':
j = j+1
if j >= n:
res = res + '\\['
else:
stuff = pat[i:j].replace('\\','\\\\')
i = j+1
if stuff[0] == '!':
stuff = '^' + stuff[1:]
elif stuff[0] == '^':
stuff = '\\' + stuff
res = '%s[%s]' % (res, stuff)
else:
res = res + re.escape(c)
return res + '\Z(?ms)'
| gpl-3.0 |
charley-ye/SCFS-v1 | simplecfs/message/network_handler.py | 1 | 1608 | # -*- coding: utf-8 -*-
"""
handle network sending and receiving data
"""
import logging
from simplecfs.common.parameters import DATA_FRAME_SIZE
from simplecfs.message.packet import pack, unpack
def send_command(socket_fd, message):
"""
send command message packet over socket
@socket_fd(socket.makefile('rw'))
@message: message of dict format
"""
logging.info('send command packet')
msg = pack(message)
socket_fd.write('%d\n%s' % (len(msg), msg))
socket_fd.flush()
def recv_command(socket_fd):
"""
receive command message packet from socket_fd
return dict command
"""
logging.info('recv command packet')
line = socket_fd.readline()
command_length = int(line)
command = socket_fd.read(command_length)
logging.info("recv command: %s", command)
return unpack(command)
def send_data(socket_fd, data):
"""send data packet over socket_fd"""
logging.info('sending data packet')
length = len(data)
logging.info('send data length: %d', length)
socket_fd.write('%d\n%s' % (length, data))
socket_fd.flush()
def recv_data(socket_fd, frame_size=DATA_FRAME_SIZE):
"""receive data packet from socket_fd"""
logging.info('receiving data packet')
data = []
line = socket_fd.readline()
length = int(line)
logging.info('recv data length: %d', length)
# read the payload in one call; a frame-sized loop is the alternative:
# while length > 0:
#     recv_ = socket_fd.read(min(length, frame_size))
#     data.append(recv_)
#     length -= len(recv_)
recv_ = socket_fd.read(length)
data.append(recv_)
data = b''.join(data)
return data
| apache-2.0 |
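The wire format above is a simple length-prefixed frame: a decimal length, a newline, then the payload. A socket-free round-trip sketch using an in-memory file object (illustrative; the real code reads and writes through `socket.makefile`):

```python
import io

def frame(payload):
    # mirrors send_data: decimal length, newline, then the raw payload
    return '%d\n%s' % (len(payload), payload)

def unframe(fd):
    # mirrors recv_data: read the length line, then exactly that many chars
    length = int(fd.readline())
    return fd.read(length)

fd = io.StringIO(frame('hello world'))
assert unframe(fd) == 'hello world'

# Frames can be concatenated back-to-back on the same stream.
fd = io.StringIO(frame('a') + frame('bb'))
assert unframe(fd) == 'a'
assert unframe(fd) == 'bb'
```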
lduarte1991/edx-platform | common/test/acceptance/pages/lms/dashboard.py | 16 | 8658 | # -*- coding: utf-8 -*-
"""
Student dashboard page.
"""
from bok_choy.page_object import PageObject
from common.test.acceptance.pages.lms import BASE_URL
class DashboardPage(PageObject):
"""
Student dashboard, where the student can view
courses she/he has registered for.
"""
url = "{base}/dashboard".format(base=BASE_URL)
def is_browser_on_page(self):
return self.q(css='.my-courses').present
@property
def current_courses_text(self):
"""
This is the title label for the section of the student dashboard that
shows all the courses that the student is enrolled in.
The string displayed is defined in lms/templates/dashboard.html.
"""
text_items = self.q(css='#my-courses').text
if len(text_items) > 0:
return text_items[0]
else:
return ""
@property
def available_courses(self):
"""
Return list of the names of available courses (e.g. "999 edX Demonstration Course")
"""
def _get_course_name(el):
return el.text
return self.q(css='h3.course-title > a').map(_get_course_name).results
@property
def banner_text(self):
"""
Return the text of the banner on top of the page, or None if
the banner is not present.
"""
message = self.q(css='div.wrapper-msg')
if message.present:
return message.text[0]
return None
def get_enrollment_mode(self, course_name):
"""Get the enrollment mode for a given course on the dashboard.
Arguments:
course_name (str): The name of the course whose mode should be retrieved.
Returns:
String, indicating the enrollment mode for the course corresponding to
the provided course name.
Raises:
Exception, if no course with the provided name is found on the dashboard.
"""
# Filter elements by course name, only returning the relevant course item
course_listing = self.q(css=".course").filter(lambda el: course_name in el.text).results
if course_listing:
# There should only be one course listing for the provided course name.
# Since 'ENABLE_VERIFIED_CERTIFICATES' is true in the Bok Choy settings, we
# can expect two classes to be present on <article> elements, one being 'course'
# and the other being the enrollment mode.
enrollment_mode = course_listing[0].get_attribute('class').split('course ')[1]
else:
raise Exception("No course named {} was found on the dashboard".format(course_name))
return enrollment_mode
def upgrade_enrollment(self, course_name, upgrade_page):
"""Interact with the upgrade button for the course with the provided name.
Arguments:
course_name (str): The name of the course whose mode should be checked.
upgrade_page (PageObject): The page to wait on after clicking the upgrade button. Importing
the definition of PaymentAndVerificationFlow results in a circular dependency.
Raises:
Exception, if no enrollment corresponding to the provided course name appears
on the dashboard.
"""
# Filter elements by course name, only returning the relevant course item
course_listing = self.q(css=".course").filter(lambda el: course_name in el.text).results
if course_listing:
# There should only be one course listing corresponding to the provided course name.
el = course_listing[0]
# Click the upgrade button
el.find_element_by_css_selector('#upgrade-to-verified').click()
upgrade_page.wait_for_page()
else:
raise Exception("No enrollment for {} is visible on the dashboard.".format(course_name))
def view_course(self, course_id):
"""
Go to the course with `course_id` (e.g. edx/Open_DemoX/edx_demo_course)
"""
link_css = self._link_css(course_id)
if link_css is not None:
self.q(css=link_css).first.click()
else:
msg = "No links found for course {0}".format(course_id)
self.warning(msg)
def _link_css(self, course_id):
"""
Return a CSS selector for the link to the course with `course_id`.
"""
# Get the link hrefs for all courses
all_links = self.q(css='a.enter-course').map(lambda el: el.get_attribute('href')).results
# Search for the first link that matches the course id
link_index = None
for index in range(len(all_links)):
if course_id in all_links[index]:
link_index = index
break
if link_index is not None:
return "a.enter-course:nth-of-type({0})".format(link_index + 1)
else:
return None
def view_course_unenroll_dialog_message(self, course_id):
"""
Go to the course unenroll dialog message for `course_id` (e.g. edx/Open_DemoX/edx_demo_course)
"""
div_index = self.get_course_actions_link_css(course_id)
button_link_css = "#actions-dropdown-link-{}".format(div_index)
unenroll_css = "#unenroll-{}".format(div_index)
if button_link_css is not None:
self.q(css=button_link_css).first.click()
self.wait_for_element_visibility(unenroll_css, 'Unenroll message dialog is visible.')
self.q(css=unenroll_css).first.click()
self.wait_for_ajax()
return {
'track-info': self.q(css='#track-info').html,
'refund-info': self.q(css='#refund-info').html
}
else:
msg = "No links found for course {0}".format(course_id)
self.warning(msg)
def get_course_actions_link_css(self, course_id):
"""
        Return an index for the unenroll button with `course_id`.
"""
        # Get the course keys for all course action dropdowns
        all_divs = self.q(css='div.wrapper-action-more').map(lambda el: el.get_attribute('data-course-key')).results
        # Search for the first entry that matches the course id
div_index = None
for index in range(len(all_divs)):
if course_id in all_divs[index]:
div_index = index
break
return div_index
def pre_requisite_message_displayed(self):
"""
Verify if pre-requisite course messages are being displayed.
"""
return self.q(css='div.prerequisites > .tip').visible
def get_course_listings(self):
"""Retrieve the list of course DOM elements"""
return self.q(css='ul.listing-courses')
def get_course_social_sharing_widget(self, widget_name):
""" Retrieves the specified social sharing widget by its classification """
return self.q(css='a.action-{}'.format(widget_name))
def get_profile_img(self):
""" Retrieves the user's profile image """
return self.q(css='img.user-image-frame')
def get_courses(self):
"""
Get all courses shown in the dashboard
"""
return self.q(css='ul.listing-courses .course-item')
def get_course_date(self):
"""
Get course date of the first course from dashboard
"""
return self.q(css='ul.listing-courses .course-item:first-of-type .info-date-block').first.text[0]
def click_username_dropdown(self):
"""
Click username dropdown.
"""
self.q(css='.toggle-user-dropdown').first.click()
@property
def username_dropdown_link_text(self):
"""
Return list username dropdown links.
"""
return self.q(css='.dropdown-user-menu a').text
@property
def tabs_link_text(self):
"""
Return the text of all the tabs on the dashboard.
"""
return self.q(css='.nav-tab a').text
def click_my_profile_link(self):
"""
Click on `Profile` link.
"""
self.q(css='.nav-tab a').nth(1).click()
def click_account_settings_link(self):
"""
Click on `Account` link.
"""
self.q(css='.dropdown-user-menu a').nth(1).click()
@property
def language_selector(self):
"""
return language selector
"""
self.wait_for_element_visibility(
'#settings-language-value',
'Language selector element is available'
)
return self.q(css='#settings-language-value')
| agpl-3.0 |
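The `_link_css` lookup above mixes browser queries with pure index arithmetic. Pulled out as a plain function over a list of hrefs, the nth-of-type logic is easy to unit-test in isolation (a sketch; the selector string mirrors the page object above):

```python
def link_css_for_course(all_links, course_id):
    """Return an nth-of-type CSS selector for the first link whose
    href contains course_id, or None when no link matches."""
    for index, href in enumerate(all_links):
        if course_id in href:
            # CSS nth-of-type is 1-based, enumerate is 0-based.
            return "a.enter-course:nth-of-type(%d)" % (index + 1)
    return None
```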
oleksa-pavlenko/gae-django-project-template | django/utils/daemonize.py | 169 | 2046 | import os
import sys
from . import six
buffering = int(six.PY3) # No unbuffered text I/O on Python 3 (#20815).
if os.name == 'posix':
def become_daemon(our_home_dir='.', out_log='/dev/null',
err_log='/dev/null', umask=0o022):
"Robustly turn into a UNIX daemon, running in our_home_dir."
# First fork
try:
if os.fork() > 0:
sys.exit(0) # kill off parent
except OSError as e:
sys.stderr.write("fork #1 failed: (%d) %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.setsid()
os.chdir(our_home_dir)
os.umask(umask)
# Second fork
try:
if os.fork() > 0:
os._exit(0)
except OSError as e:
sys.stderr.write("fork #2 failed: (%d) %s\n" % (e.errno, e.strerror))
os._exit(1)
si = open('/dev/null', 'r')
so = open(out_log, 'a+', buffering)
se = open(err_log, 'a+', buffering)
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
# Set custom file descriptors so that they get proper buffering.
sys.stdout, sys.stderr = so, se
else:
def become_daemon(our_home_dir='.', out_log=None, err_log=None, umask=0o022):
"""
If we're not running under a POSIX system, just simulate the daemon
mode by doing redirections and directory changing.
"""
os.chdir(our_home_dir)
os.umask(umask)
sys.stdin.close()
sys.stdout.close()
sys.stderr.close()
if err_log:
sys.stderr = open(err_log, 'a', buffering)
else:
sys.stderr = NullDevice()
if out_log:
sys.stdout = open(out_log, 'a', buffering)
else:
sys.stdout = NullDevice()
class NullDevice:
"A writeable object that writes to nowhere -- like /dev/null."
def write(self, s):
pass
| mit |
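The non-POSIX branch above silences output by swapping `sys.stdout` for a write-to-nowhere object. The same trick in isolation, with the original stream restored afterwards (a sketch, not the Django API):

```python
import sys

class NullDevice:
    """A writeable object that discards everything, like /dev/null."""
    def write(self, s):
        pass
    def flush(self):
        pass

def silenced(func):
    """Run func() with stdout pointed at a NullDevice, then restore it."""
    old_stdout = sys.stdout
    sys.stdout = NullDevice()
    try:
        return func()
    finally:
        sys.stdout = old_stdout

result = silenced(lambda: print("this line goes nowhere") or 42)
```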
teamotrinidad/plugin.video.chicovara | servers/one80upload.py | 44 | 3770 | # -*- coding: utf-8 -*-
#------------------------------------------------------------
# pelisalacarta - XBMC Plugin
# Connector for 180upload
# http://blog.tvalacarta.info/plugin-xbmc/pelisalacarta/
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os
from core import scrapertools
from core import logger
from core import config
def test_video_exists( page_url ):
data = scrapertools.cache_page(page_url)
if "<b>File Not Found" in data:
return False,"El fichero ha sido borrado"
return True,""
def get_video_url( page_url , premium = False , user="" , password="", video_password="" ):
logger.info("[one80upload.py] get_video_url(page_url='%s')" % page_url)
video_urls = []
data = scrapertools.cache_page(page_url)
#op=download2&id=yz6lx411cshb&rand=3wqqg6mjw3nxu254dfw4icuxknqfkzdjnbluhty&referer=&method_free=&method_premium=&down_direct=1
codigo = scrapertools.get_match(data,'<input type="hidden" name="id" value="([^"]+)">[^<]+')
rand = scrapertools.get_match(data,'<input type="hidden" name="rand" value="([^"]+)">')
post = "op=download2&id="+codigo+"&rand="+rand+"&referer=&method_free=&method_premium=&down_direct=1"
data = scrapertools.cache_page( page_url , post=post, headers=[['User-Agent','Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14'],['Referer',page_url]] )
#logger.info("data="+data)
    # Look for the online video or the downloadable file
patron = 'href="([^"]+)" target="_parent"><span class="style1">Download'
matches = re.compile(patron,re.DOTALL).findall(data)
#scrapertools.printMatches(matches)
if len(matches)>0:
logger.info("[180upload.py] encuentra archivo de descarga="+matches[0])
else:
logger.info("[180upload.py] buscando video para ver online")
patron = "this\.play\('([^']+)'"
matches = re.compile(patron,re.DOTALL).findall(data)
if len(matches)>0:
video_urls.append( ["."+matches[0].rsplit('.',1)[1]+" [180upload]",matches[0]])
for video_url in video_urls:
logger.info("[180upload.py] %s - %s" % (video_url[0],video_url[1]))
return video_urls
# Find this server's videos in the given text
def find_videos(data):
encontrados = set()
devuelve = []
#http://180upload.com/embed-6z7cwbswemsv.html
data = urllib.unquote(data)
patronvideos = '180upload.com/embed-([a-z0-9]+)\.html'
logger.info("[one80upload.py] find_videos #"+patronvideos+"#")
matches = re.compile(patronvideos,re.DOTALL).findall(data)
for match in matches:
titulo = "[180upload]"
url = "http://180upload.com/"+match
if url not in encontrados and match!="embed":
logger.info(" url="+url)
devuelve.append( [ titulo , url , 'one80upload' ] )
encontrados.add(url)
else:
logger.info(" url duplicada="+url)
#http://180upload.com/rtp8y30mlfj0
#http%3A%2F%2F180upload.com%2F5k344aiotajv
data = urllib.unquote(data)
patronvideos = '180upload.com/([a-z0-9]+)'
logger.info("[one80upload.py] find_videos #"+patronvideos+"#")
matches = re.compile(patronvideos,re.DOTALL).findall(data)
for match in matches:
titulo = "[180upload]"
url = "http://180upload.com/"+match
if url not in encontrados and match!="embed":
logger.info(" url="+url)
devuelve.append( [ titulo , url , 'one80upload' ] )
encontrados.add(url)
else:
logger.info(" url duplicada="+url)
return devuelve
def test():
video_urls = get_video_url("http://180upload.com/98bpne5grck6")
return len(video_urls)>0 | gpl-2.0 |
adw0rd/lettuce | tests/integration/lib/Django-1.2.5/django/utils/safestring.py | 392 | 3812 | """
Functions for working with "safe strings": strings that can be displayed safely
without further escaping in HTML. Marking something as a "safe string" means
that the producer of the string has already turned characters that should not
be interpreted by the HTML engine (e.g. '<') into the appropriate entities.
"""
from django.utils.functional import curry, Promise
class EscapeData(object):
pass
class EscapeString(str, EscapeData):
"""
A string that should be HTML-escaped when output.
"""
pass
class EscapeUnicode(unicode, EscapeData):
"""
A unicode object that should be HTML-escaped when output.
"""
pass
class SafeData(object):
pass
class SafeString(str, SafeData):
"""
A string subclass that has been specifically marked as "safe" (requires no
further escaping) for HTML output purposes.
"""
def __add__(self, rhs):
"""
Concatenating a safe string with another safe string or safe unicode
object is safe. Otherwise, the result is no longer safe.
"""
t = super(SafeString, self).__add__(rhs)
if isinstance(rhs, SafeUnicode):
return SafeUnicode(t)
elif isinstance(rhs, SafeString):
return SafeString(t)
return t
def _proxy_method(self, *args, **kwargs):
"""
Wrap a call to a normal unicode method up so that we return safe
results. The method that is being wrapped is passed in the 'method'
argument.
"""
method = kwargs.pop('method')
data = method(self, *args, **kwargs)
if isinstance(data, str):
return SafeString(data)
else:
return SafeUnicode(data)
decode = curry(_proxy_method, method = str.decode)
class SafeUnicode(unicode, SafeData):
"""
A unicode subclass that has been specifically marked as "safe" for HTML
output purposes.
"""
def __add__(self, rhs):
"""
Concatenating a safe unicode object with another safe string or safe
unicode object is safe. Otherwise, the result is no longer safe.
"""
t = super(SafeUnicode, self).__add__(rhs)
if isinstance(rhs, SafeData):
return SafeUnicode(t)
return t
def _proxy_method(self, *args, **kwargs):
"""
Wrap a call to a normal unicode method up so that we return safe
results. The method that is being wrapped is passed in the 'method'
argument.
"""
method = kwargs.pop('method')
data = method(self, *args, **kwargs)
if isinstance(data, str):
return SafeString(data)
else:
return SafeUnicode(data)
encode = curry(_proxy_method, method = unicode.encode)
def mark_safe(s):
"""
Explicitly mark a string as safe for (HTML) output purposes. The returned
object can be used everywhere a string or unicode object is appropriate.
Can be called multiple times on a single string.
"""
if isinstance(s, SafeData):
return s
if isinstance(s, str) or (isinstance(s, Promise) and s._delegate_str):
return SafeString(s)
if isinstance(s, (unicode, Promise)):
return SafeUnicode(s)
return SafeString(str(s))
def mark_for_escaping(s):
"""
Explicitly mark a string as requiring HTML escaping upon output. Has no
effect on SafeData subclasses.
Can be called multiple times on a single string (the resulting escaping is
only applied once).
"""
if isinstance(s, (SafeData, EscapeData)):
return s
if isinstance(s, str) or (isinstance(s, Promise) and s._delegate_str):
return EscapeString(s)
if isinstance(s, (unicode, Promise)):
return EscapeUnicode(s)
return EscapeString(str(s))
| gpl-3.0 |
lincolnloop/django-categories | doc_src/code_examples/custom_categories3.py | 13 | 1144 | class Category(CategoryBase):
thumbnail = models.FileField(
upload_to=THUMBNAIL_UPLOAD_PATH,
null=True, blank=True,
storage=STORAGE(),)
thumbnail_width = models.IntegerField(blank=True, null=True)
thumbnail_height = models.IntegerField(blank=True, null=True)
order = models.IntegerField(default=0)
alternate_title = models.CharField(
blank=True,
default="",
max_length=100,
help_text="An alternative title to use on pages with this category.")
alternate_url = models.CharField(
blank=True,
max_length=200,
help_text="An alternative URL to use instead of the one derived from "
"the category hierarchy.")
description = models.TextField(blank=True, null=True)
meta_keywords = models.CharField(
blank=True,
default="",
max_length=255,
help_text="Comma-separated keywords for search engines.")
meta_extra = models.TextField(
blank=True,
default="",
help_text="(Advanced) Any additional HTML to be placed verbatim "
"in the <head>") | apache-2.0 |
eeshangarg/oh-mainline | vendor/packages/mechanize/test-tools/linecache_copy.py | 22 | 3864 | """Cache lines from files.
This is intended to read lines from modules imported -- hence if a filename
is not found, it will look down the module search path for a file by
that name.
"""
import sys
import os
__all__ = ["getline", "clearcache", "checkcache"]
def getline(filename, lineno, module_globals=None):
lines = getlines(filename, module_globals)
if 1 <= lineno <= len(lines):
return lines[lineno-1]
else:
return ''
# The cache
cache = {} # The cache
def clearcache():
"""Clear the cache entirely."""
global cache
cache = {}
def getlines(filename, module_globals=None):
"""Get the lines for a file from the cache.
Update the cache if it doesn't contain an entry for this file already."""
if filename in cache:
return cache[filename][2]
else:
return updatecache(filename, module_globals)
def checkcache(filename=None):
"""Discard cache entries that are out of date.
(This is not checked upon each call!)"""
if filename is None:
filenames = cache.keys()
else:
if filename in cache:
filenames = [filename]
else:
return
for filename in filenames:
size, mtime, lines, fullname = cache[filename]
if mtime is None:
continue # no-op for files loaded via a __loader__
try:
stat = os.stat(fullname)
except os.error:
del cache[filename]
continue
if size != stat.st_size or mtime != stat.st_mtime:
del cache[filename]
def updatecache(filename, module_globals=None):
"""Update a cache entry and return its list of lines.
If something's wrong, print a message, discard the cache entry,
and return an empty list."""
if filename in cache:
del cache[filename]
if not filename or filename[0] + filename[-1] == '<>':
return []
fullname = filename
try:
stat = os.stat(fullname)
except os.error, msg:
basename = os.path.split(filename)[1]
# Try for a __loader__, if available
if module_globals and '__loader__' in module_globals:
name = module_globals.get('__name__')
loader = module_globals['__loader__']
get_source = getattr(loader, 'get_source', None)
if name and get_source:
if basename.startswith(name.split('.')[-1]+'.'):
try:
data = get_source(name)
except (ImportError, IOError):
pass
else:
cache[filename] = (
len(data), None,
[line+'\n' for line in data.splitlines()], fullname
)
return cache[filename][2]
# Try looking through the module search path.
for dirname in sys.path:
# When using imputil, sys.path may contain things other than
# strings; ignore them when it happens.
try:
fullname = os.path.join(dirname, basename)
except (TypeError, AttributeError):
# Not sufficiently string-like to do anything useful with.
pass
else:
try:
stat = os.stat(fullname)
break
except os.error:
pass
else:
# No luck
## print '*** Cannot stat', filename, ':', msg
return []
try:
fp = open(fullname, 'rU')
lines = fp.readlines()
fp.close()
except IOError, msg:
## print '*** Cannot open', fullname, ':', msg
return []
size, mtime = stat.st_size, stat.st_mtime
cache[filename] = size, mtime, lines, fullname
return lines
| agpl-3.0 |
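The cache invalidation in `checkcache`/`updatecache` above hinges on comparing a stored `(size, mtime)` pair against a fresh `os.stat`. A self-contained sketch of the same scheme, folding the check into the read path (names are illustrative):

```python
import os

_cache = {}  # filename -> (size, mtime, lines)

def cached_lines(filename):
    """Return the file's lines, re-reading only when size or mtime changed."""
    st = os.stat(filename)
    entry = _cache.get(filename)
    if entry and entry[0] == st.st_size and entry[1] == st.st_mtime:
        return entry[2]
    with open(filename) as fp:
        lines = fp.readlines()
    _cache[filename] = (st.st_size, st.st_mtime, lines)
    return lines
```

Comparing size as well as mtime matters: on filesystems with coarse timestamp granularity, two writes within the same second can share an mtime.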
PabloTunnon/pykka-deb | tests/performance.py | 1 | 1351 | import time
from pykka.actor import ThreadingActor
from pykka.registry import ActorRegistry
def time_it(func):
start = time.time()
func()
print('%s took %.3fs' % (func.func_name, time.time() - start))
class SomeObject(object):
baz = 'bar.baz'
def func(self):
pass
class AnActor(ThreadingActor):
bar = SomeObject()
bar.pykka_traversable = True
foo = 'foo'
def __init__(self):
self.baz = 'quox'
def func(self):
pass
def test_direct_plain_attribute_access():
actor = AnActor.start().proxy()
for i in range(10000):
actor.foo.get()
def test_direct_callable_attribute_access():
actor = AnActor.start().proxy()
for i in range(10000):
actor.func().get()
def test_traversible_plain_attribute_access():
actor = AnActor.start().proxy()
for i in range(10000):
actor.bar.baz.get()
def test_traversible_callable_attribute_access():
actor = AnActor.start().proxy()
for i in range(10000):
actor.bar.func().get()
if __name__ == '__main__':
try:
time_it(test_direct_plain_attribute_access)
time_it(test_direct_callable_attribute_access)
time_it(test_traversible_plain_attribute_access)
time_it(test_traversible_callable_attribute_access)
finally:
ActorRegistry.stop_all()
| apache-2.0 |
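`time_it` above reads `func.func_name`, which exists only in Python 2 (renamed `__name__` in Python 3). A portable sketch that also returns the measured duration alongside the function's result:

```python
import time

def time_it(func):
    """Run func(), print how long it took, and return (result, seconds)."""
    start = time.time()
    result = func()
    elapsed = time.time() - start
    print('%s took %.3fs' % (func.__name__, elapsed))
    return result, elapsed

def sample_work():
    return sum(range(1000))

value, seconds = time_it(sample_work)
```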
menardorama/ReadyNAS-Add-ons | headphones-1.0.0/files/etc/apps/headphones/lib/cherrypy/lib/cpstats.py | 49 | 22770 | """CPStats, a package for collecting and reporting on program statistics.
Overview
========
Statistics about program operation are an invaluable monitoring and debugging
tool. Unfortunately, the gathering and reporting of these critical values is
usually ad-hoc. This package aims to add a centralized place for gathering
statistical performance data, a structure for recording that data which
provides for extrapolation of that data into more useful information,
and a method of serving that data to both human investigators and
monitoring software. Let's examine each of those in more detail.
Data Gathering
--------------
Just as Python's `logging` module provides a common importable for gathering
and sending messages, performance statistics would benefit from a similar
common mechanism, and one that does *not* require each package which wishes
to collect stats to import a third-party module. Therefore, we choose to
re-use the `logging` module by adding a `statistics` object to it.
That `logging.statistics` object is a nested dict. It is not a custom class,
because that would:
1. require libraries and applications to import a third-party module in
order to participate
2. inhibit innovation in extrapolation approaches and in reporting tools, and
3. be slow.
There are, however, some specifications regarding the structure of the dict.::
{
+----"SQLAlchemy": {
| "Inserts": 4389745,
| "Inserts per Second":
| lambda s: s["Inserts"] / (time() - s["Start"]),
| C +---"Table Statistics": {
| o | "widgets": {-----------+
N | l | "Rows": 1.3M, | Record
a | l | "Inserts": 400, |
m | e | },---------------------+
e | c | "froobles": {
s | t | "Rows": 7845,
p | i | "Inserts": 0,
a | o | },
c | n +---},
e | "Slow Queries":
| [{"Query": "SELECT * FROM widgets;",
| "Processing Time": 47.840923343,
| },
| ],
+----},
}
The `logging.statistics` dict has four levels. The topmost level is nothing
more than a set of names to introduce modularity, usually along the lines of
package names. If the SQLAlchemy project wanted to participate, for example,
it might populate the item `logging.statistics['SQLAlchemy']`, whose value
would be a second-layer dict we call a "namespace". Namespaces help multiple
packages to avoid collisions over key names, and make reports easier to read,
to boot. The maintainers of SQLAlchemy should feel free to use more than one
namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case
or other syntax constraints on the namespace names; they should be chosen
to be maximally readable by humans (neither too short nor too long).
Each namespace, then, is a dict of named statistical values, such as
'Requests/sec' or 'Uptime'. You should choose names which will look
good on a report: spaces and capitalization are just fine.
In addition to scalars, values in a namespace MAY be a (third-layer)
dict, or a list, called a "collection". For example, the CherryPy
:class:`StatsTool` keeps track of what each request is doing (or has most
recently done) in a 'Requests' collection, where each key is a thread ID; each
value in the subdict MUST be a fourth dict (whew!) of statistical data about
each thread. We call each subdict in the collection a "record". Similarly,
the :class:`StatsTool` also keeps a list of slow queries, where each record
contains data about each slow query, in order.
Values in a namespace or record may also be functions, which brings us to:
Extrapolation
-------------
The collection of statistical data needs to be fast, as close to unnoticeable
as possible to the host program. That requires us to minimize I/O, for example,
but in Python it also means we need to minimize function calls. So when you
are designing your namespace and record values, try to insert the most basic
scalar values you already have on hand.
When it comes time to report on the gathered data, however, we usually have
much more freedom in what we can calculate. Therefore, whenever reporting
tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents
of `logging.statistics` for reporting, they first call
`extrapolate_statistics` (passing the whole `statistics` dict as the only
argument). This makes a deep copy of the statistics dict so that the
reporting tool can both iterate over it and even change it without harming
the original. But it also expands any functions in the dict by calling them.
For example, you might have a 'Current Time' entry in the namespace with the
value "lambda scope: time.time()". The "scope" parameter is the current
namespace dict (or record, if we're currently expanding one of those
instead), allowing you access to existing static entries. If you're truly
evil, you can even modify more than one entry at a time.
However, don't try to calculate an entry and then use its value in further
extrapolations; the order in which the functions are called is not guaranteed.
This can lead to a certain amount of duplicated work (or a redesign of your
schema), but that's better than complicating the spec.
After the whole thing has been extrapolated, it's time for:
Reporting
---------
The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates
it all, and then transforms it to HTML for easy viewing. Each namespace gets
its own header and attribute table, plus an extra table for each collection.
This is NOT part of the statistics specification; other tools can format how
they like.
You can control which columns are output and how they are formatted by updating
StatsPage.formatting, which is a dict that mirrors the keys and nesting of
`logging.statistics`. The difference is that, instead of data values, it has
formatting values. Use None for a given key to indicate to the StatsPage that a
given column should not be output. Use a string with formatting
(such as '%.3f') to interpolate the value(s), or use a callable (such as
lambda v: v.isoformat()) for more advanced formatting. Any entry which is not
mentioned in the formatting dict is output unchanged.
Monitoring
----------
Although the HTML output takes pains to assign unique id's to each <td> with
statistical data, you're probably better off fetching /cpstats/data, which
outputs the whole (extrapolated) `logging.statistics` dict in JSON format.
That is probably easier to parse, and doesn't have any formatting controls,
so you get the "original" data in a consistently-serialized format.
Note: there's no treatment yet for datetime objects. Try time.time() instead
for now if you can. Nagios will probably thank you.
Turning Collection Off
----------------------
It is recommended each namespace have an "Enabled" item which, if False,
stops collection (but not reporting) of statistical data. Applications
SHOULD provide controls to pause and resume collection by setting these
entries to False or True, if present.
Usage
=====
To collect statistics on CherryPy applications::
from cherrypy.lib import cpstats
appconfig['/']['tools.cpstats.on'] = True
To collect statistics on your own code::
import logging
# Initialize the repository
if not hasattr(logging, 'statistics'): logging.statistics = {}
# Initialize my namespace
mystats = logging.statistics.setdefault('My Stuff', {})
# Initialize my namespace's scalars and collections
mystats.update({
'Enabled': True,
'Start Time': time.time(),
'Important Events': 0,
'Events/Second': lambda s: (
(s['Important Events'] / (time.time() - s['Start Time']))),
})
...
for event in events:
...
# Collect stats
if mystats.get('Enabled', False):
mystats['Important Events'] += 1
To report statistics::
root.cpstats = cpstats.StatsPage()
To format statistics reports::
See 'Reporting', above.
"""
# ------------------------------- Statistics -------------------------------- #
import logging
if not hasattr(logging, 'statistics'):
logging.statistics = {}
def extrapolate_statistics(scope):
"""Return an extrapolated copy of the given scope."""
c = {}
for k, v in list(scope.items()):
if isinstance(v, dict):
v = extrapolate_statistics(v)
elif isinstance(v, (list, tuple)):
v = [extrapolate_statistics(record) for record in v]
elif hasattr(v, '__call__'):
v = v(scope)
c[k] = v
return c
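The docstring's "Extrapolation" section is exactly what `extrapolate_statistics` implements: callables in the dict are expanded with their enclosing scope as the argument, on a deep copy of the stats. A quick usage sketch against a toy namespace (reimplemented here so the example stands alone):

```python
import time

stats = {
    'My Stuff': {
        'Start Time': time.time() - 10.0,
        'Important Events': 5,
        # Callables receive the *current scope* (this inner dict) as `s`.
        'Events/Second': lambda s: (
            s['Important Events'] / (time.time() - s['Start Time'])),
    },
}

def extrapolate(scope):
    # Same shape as extrapolate_statistics above: deep-copy while
    # expanding callables with their enclosing scope.
    c = {}
    for k, v in scope.items():
        if isinstance(v, dict):
            v = extrapolate(v)
        elif isinstance(v, (list, tuple)):
            v = [extrapolate(record) for record in v]
        elif callable(v):
            v = v(scope)
        c[k] = v
    return c

snapshot = extrapolate(stats)
```

After extrapolation `snapshot` holds only plain values, so it can be serialized for reporting without re-running any of the lambdas.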
# -------------------- CherryPy Applications Statistics --------------------- #
import threading
import time
import cherrypy
appstats = logging.statistics.setdefault('CherryPy Applications', {})
appstats.update({
'Enabled': True,
'Bytes Read/Request': lambda s: (
s['Total Requests'] and
(s['Total Bytes Read'] / float(s['Total Requests'])) or
0.0
),
'Bytes Read/Second': lambda s: s['Total Bytes Read'] / s['Uptime'](s),
'Bytes Written/Request': lambda s: (
s['Total Requests'] and
(s['Total Bytes Written'] / float(s['Total Requests'])) or
0.0
),
'Bytes Written/Second': lambda s: (
s['Total Bytes Written'] / s['Uptime'](s)
),
'Current Time': lambda s: time.time(),
'Current Requests': 0,
'Requests/Second': lambda s: float(s['Total Requests']) / s['Uptime'](s),
'Server Version': cherrypy.__version__,
'Start Time': time.time(),
'Total Bytes Read': 0,
'Total Bytes Written': 0,
'Total Requests': 0,
'Total Time': 0,
'Uptime': lambda s: time.time() - s['Start Time'],
'Requests': {},
})
proc_time = lambda s: time.time() - s['Start Time']
class ByteCountWrapper(object):
"""Wraps a file-like object, counting the number of bytes read."""
def __init__(self, rfile):
self.rfile = rfile
self.bytes_read = 0
def read(self, size=-1):
data = self.rfile.read(size)
self.bytes_read += len(data)
return data
def readline(self, size=-1):
data = self.rfile.readline(size)
self.bytes_read += len(data)
return data
def readlines(self, sizehint=0):
# Shamelessly stolen from StringIO
total = 0
lines = []
line = self.readline()
while line:
lines.append(line)
total += len(line)
if 0 < sizehint <= total:
break
line = self.readline()
return lines
def close(self):
self.rfile.close()
def __iter__(self):
return self
def next(self):
data = self.rfile.next()
self.bytes_read += len(data)
return data
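`ByteCountWrapper` above is a plain delegating wrapper that tallies bytes as they pass through, which is how the tool attributes "Bytes Read" to each request. The same idea over an in-memory stream (a sketch; only `read`/`readline` are shown):

```python
import io

class CountingReader:
    """Delegate reads to an underlying file object, counting bytes."""
    def __init__(self, rfile):
        self.rfile = rfile
        self.bytes_read = 0

    def read(self, size=-1):
        data = self.rfile.read(size)
        self.bytes_read += len(data)
        return data

    def readline(self, size=-1):
        data = self.rfile.readline(size)
        self.bytes_read += len(data)
        return data

r = CountingReader(io.BytesIO(b'abc\ndef'))
first_line = r.readline()
rest = r.read()
```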
average_uriset_time = lambda s: s['Count'] and (s['Sum'] / s['Count']) or 0
class StatsTool(cherrypy.Tool):
"""Record various information about the current request."""
def __init__(self):
cherrypy.Tool.__init__(self, 'on_end_request', self.record_stop)
def _setup(self):
"""Hook this tool into cherrypy.request.
The standard CherryPy request object will automatically call this
method when the tool is "turned on" in config.
"""
if appstats.get('Enabled', False):
cherrypy.Tool._setup(self)
self.record_start()
def record_start(self):
"""Record the beginning of a request."""
request = cherrypy.serving.request
if not hasattr(request.rfile, 'bytes_read'):
request.rfile = ByteCountWrapper(request.rfile)
request.body.fp = request.rfile
r = request.remote
appstats['Current Requests'] += 1
appstats['Total Requests'] += 1
appstats['Requests'][threading._get_ident()] = {
'Bytes Read': None,
'Bytes Written': None,
# Use a lambda so the ip gets updated by tools.proxy later
'Client': lambda s: '%s:%s' % (r.ip, r.port),
'End Time': None,
'Processing Time': proc_time,
'Request-Line': request.request_line,
'Response Status': None,
'Start Time': time.time(),
}
def record_stop(
self, uriset=None, slow_queries=1.0, slow_queries_count=100,
debug=False, **kwargs):
"""Record the end of a request."""
resp = cherrypy.serving.response
w = appstats['Requests'][threading._get_ident()]
r = cherrypy.request.rfile.bytes_read
w['Bytes Read'] = r
appstats['Total Bytes Read'] += r
if resp.stream:
w['Bytes Written'] = 'chunked'
else:
cl = int(resp.headers.get('Content-Length', 0))
w['Bytes Written'] = cl
appstats['Total Bytes Written'] += cl
w['Response Status'] = getattr(
resp, 'output_status', None) or resp.status
w['End Time'] = time.time()
p = w['End Time'] - w['Start Time']
w['Processing Time'] = p
appstats['Total Time'] += p
appstats['Current Requests'] -= 1
if debug:
cherrypy.log('Stats recorded: %s' % repr(w), 'TOOLS.CPSTATS')
if uriset:
rs = appstats.setdefault('URI Set Tracking', {})
r = rs.setdefault(uriset, {
'Min': None, 'Max': None, 'Count': 0, 'Sum': 0,
'Avg': average_uriset_time})
if r['Min'] is None or p < r['Min']:
r['Min'] = p
if r['Max'] is None or p > r['Max']:
r['Max'] = p
r['Count'] += 1
r['Sum'] += p
if slow_queries and p > slow_queries:
sq = appstats.setdefault('Slow Queries', [])
sq.append(w.copy())
if len(sq) > slow_queries_count:
sq.pop(0)
import cherrypy
cherrypy.tools.cpstats = StatsTool()
# ---------------------- CherryPy Statistics Reporting ---------------------- #
import os
thisdir = os.path.abspath(os.path.dirname(__file__))
try:
import json
except ImportError:
try:
import simplejson as json
except ImportError:
json = None
missing = object()
locale_date = lambda v: time.strftime('%c', time.gmtime(v))
iso_format = lambda v: time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(v))
def pause_resume(ns):
def _pause_resume(enabled):
pause_disabled = ''
resume_disabled = ''
if enabled:
resume_disabled = 'disabled="disabled" '
else:
pause_disabled = 'disabled="disabled" '
return """
<form action="pause" method="POST" style="display:inline">
<input type="hidden" name="namespace" value="%s" />
<input type="submit" value="Pause" %s/>
</form>
<form action="resume" method="POST" style="display:inline">
<input type="hidden" name="namespace" value="%s" />
<input type="submit" value="Resume" %s/>
</form>
""" % (ns, pause_disabled, ns, resume_disabled)
return _pause_resume
class StatsPage(object):
formatting = {
'CherryPy Applications': {
'Enabled': pause_resume('CherryPy Applications'),
'Bytes Read/Request': '%.3f',
'Bytes Read/Second': '%.3f',
'Bytes Written/Request': '%.3f',
'Bytes Written/Second': '%.3f',
'Current Time': iso_format,
'Requests/Second': '%.3f',
'Start Time': iso_format,
'Total Time': '%.3f',
'Uptime': '%.3f',
'Slow Queries': {
'End Time': None,
'Processing Time': '%.3f',
'Start Time': iso_format,
},
'URI Set Tracking': {
'Avg': '%.3f',
'Max': '%.3f',
'Min': '%.3f',
'Sum': '%.3f',
},
'Requests': {
'Bytes Read': '%s',
'Bytes Written': '%s',
'End Time': None,
'Processing Time': '%.3f',
'Start Time': None,
},
},
'CherryPy WSGIServer': {
'Enabled': pause_resume('CherryPy WSGIServer'),
'Connections/second': '%.3f',
'Start time': iso_format,
},
}
def index(self):
# Transform the raw data into pretty output for HTML
yield """
<html>
<head>
<title>Statistics</title>
<style>
th, td {
padding: 0.25em 0.5em;
border: 1px solid #666699;
}
table {
border-collapse: collapse;
}
table.stats1 {
width: 100%;
}
table.stats1 th {
font-weight: bold;
text-align: right;
background-color: #CCD5DD;
}
table.stats2, h2 {
margin-left: 50px;
}
table.stats2 th {
font-weight: bold;
text-align: center;
background-color: #CCD5DD;
}
</style>
</head>
<body>
"""
for title, scalars, collections in self.get_namespaces():
yield """
<h1>%s</h1>
<table class='stats1'>
<tbody>
""" % title
for i, (key, value) in enumerate(scalars):
colnum = i % 3
if colnum == 0:
yield """
<tr>"""
yield (
"""
<th>%(key)s</th><td id='%(title)s-%(key)s'>%(value)s</td>""" %
vars()
)
if colnum == 2:
yield """
</tr>"""
if colnum == 0:
yield """
<th></th><td></td>
<th></th><td></td>
</tr>"""
elif colnum == 1:
yield """
<th></th><td></td>
</tr>"""
yield """
</tbody>
</table>"""
for subtitle, headers, subrows in collections:
yield """
<h2>%s</h2>
<table class='stats2'>
<thead>
<tr>""" % subtitle
for key in headers:
yield """
<th>%s</th>""" % key
yield """
</tr>
</thead>
<tbody>"""
for subrow in subrows:
yield """
<tr>"""
for value in subrow:
yield """
<td>%s</td>""" % value
yield """
</tr>"""
yield """
</tbody>
</table>"""
yield """
</body>
</html>
"""
index.exposed = True
def get_namespaces(self):
"""Yield (title, scalars, collections) for each namespace."""
s = extrapolate_statistics(logging.statistics)
for title, ns in sorted(s.items()):
scalars = []
collections = []
ns_fmt = self.formatting.get(title, {})
for k, v in sorted(ns.items()):
fmt = ns_fmt.get(k, {})
if isinstance(v, dict):
headers, subrows = self.get_dict_collection(v, fmt)
collections.append((k, ['ID'] + headers, subrows))
elif isinstance(v, (list, tuple)):
headers, subrows = self.get_list_collection(v, fmt)
collections.append((k, headers, subrows))
else:
format = ns_fmt.get(k, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v = format(v)
elif format is not missing:
v = format % v
scalars.append((k, v))
yield title, scalars, collections
def get_dict_collection(self, v, formatting):
"""Return ([headers], [rows]) for the given collection."""
# E.g., the 'Requests' dict.
headers = []
for record in v.itervalues():
for k3 in record:
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if k3 not in headers:
headers.append(k3)
headers.sort()
subrows = []
for k2, record in sorted(v.items()):
subrow = [k2]
for k3 in headers:
v3 = record.get(k3, '')
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v3 = format(v3)
elif format is not missing:
v3 = format % v3
subrow.append(v3)
subrows.append(subrow)
return headers, subrows
def get_list_collection(self, v, formatting):
"""Return ([headers], [subrows]) for the given collection."""
# E.g., the 'Slow Queries' list.
headers = []
for record in v:
for k3 in record:
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if k3 not in headers:
headers.append(k3)
headers.sort()
subrows = []
for record in v:
subrow = []
for k3 in headers:
v3 = record.get(k3, '')
format = formatting.get(k3, missing)
if format is None:
# Don't output this column.
continue
if hasattr(format, '__call__'):
v3 = format(v3)
elif format is not missing:
v3 = format % v3
subrow.append(v3)
subrows.append(subrow)
return headers, subrows
if json is not None:
def data(self):
s = extrapolate_statistics(logging.statistics)
cherrypy.response.headers['Content-Type'] = 'application/json'
return json.dumps(s, sort_keys=True, indent=4)
data.exposed = True
def pause(self, namespace):
logging.statistics.get(namespace, {})['Enabled'] = False
raise cherrypy.HTTPRedirect('./')
pause.exposed = True
pause.cp_config = {'tools.allow.on': True,
'tools.allow.methods': ['POST']}
def resume(self, namespace):
logging.statistics.get(namespace, {})['Enabled'] = True
raise cherrypy.HTTPRedirect('./')
resume.exposed = True
resume.cp_config = {'tools.allow.on': True,
'tools.allow.methods': ['POST']}
| gpl-2.0 |
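The `record_start`/`record_stop` pair above implements a common per-request accounting pattern: stamp a start time into a shared dict keyed by thread id, then fold the elapsed time into running totals when the response ends. A minimal stdlib-only sketch of that pattern (the dict layout and key names here mirror the code above but are illustrative, not CherryPy's API):

```python
import threading
import time

appstats = {'Current Requests': 0, 'Total Requests': 0,
            'Total Time': 0.0, 'Requests': {}}

def record_start():
    # Open a per-thread record at the start of a request.
    appstats['Current Requests'] += 1
    appstats['Total Requests'] += 1
    appstats['Requests'][threading.get_ident()] = {'Start Time': time.time()}

def record_stop():
    # Close the record and fold elapsed time into the running totals.
    w = appstats['Requests'][threading.get_ident()]
    w['End Time'] = time.time()
    w['Processing Time'] = w['End Time'] - w['Start Time']
    appstats['Total Time'] += w['Processing Time']
    appstats['Current Requests'] -= 1
    return w

# One simulated request:
record_start()
stats = record_stop()
```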
thenakliman/nirikshak | nirikshak/output/base.py | 1 | 1989 | # Copyright 2017 <thenakliman@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from abc import ABCMeta, abstractmethod
import logging
import six
from nirikshak.common import plugins
LOG = logging.getLogger(__name__)
@six.add_metaclass(ABCMeta)
class FormatOutput(object):
@abstractmethod
def output(self, **kwargs):
pass
def output(**jaanch_parameters):
output_plugin = jaanch_parameters.get('output', {}).get('type', 'console')
# NOTE(thenakliman): if plugin is not registered then it returns kwargs
plugin = plugins.get_plugin(output_plugin)
soochis = jaanch_parameters
try:
soochis = getattr(plugin, 'output')(**jaanch_parameters)
except Exception:
LOG.error("%s jaanch get failed for %s output.",
jaanch_parameters['name'], output_plugin, exc_info=True)
else:
LOG.info("%s soochis has been returned by the plugin", soochis)
return soochis
def make_output_dict(expected_result, **kwargs):
output_dict = {}
try:
output_dict = {'actual_output': kwargs['input']['result']}
except KeyError:
LOG.error("result key does not exist in the dictionary")
if expected_result is not None:
output_dict['expected_output'] = expected_result
jaanch = {}
try:
jaanch['input'] = kwargs['input']['args']
except KeyError:
pass
jaanch['name'] = kwargs['name']
jaanch['output'] = output_dict or None
return jaanch
| apache-2.0 |
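The shape of the record `make_output_dict` builds is easiest to see with a quick standalone run. This sketch reproduces the same reshaping with the logging removed (input values are illustrative):

```python
def make_output_dict(expected_result, **kwargs):
    # Pull the actual result; tolerate a missing 'result' key.
    output_dict = {}
    try:
        output_dict = {'actual_output': kwargs['input']['result']}
    except KeyError:
        pass
    if expected_result is not None:
        output_dict['expected_output'] = expected_result
    jaanch = {'name': kwargs['name']}
    try:
        jaanch['input'] = kwargs['input']['args']
    except KeyError:
        pass
    jaanch['output'] = output_dict or None
    return jaanch

result = make_output_dict(
    'up', name='service_check',
    input={'result': 'up', 'args': {'host': 'localhost'}})
```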
wscullin/spack | var/spack/repos/builtin/packages/xauth/package.py | 3 | 1892 | ##############################################################################
# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Xauth(AutotoolsPackage):
"""The xauth program is used to edit and display the authorization
information used in connecting to the X server."""
homepage = "http://cgit.freedesktop.org/xorg/app/xauth"
url = "https://www.x.org/archive/individual/app/xauth-1.0.9.tar.gz"
version('1.0.9', 'def3b4588504ee3d8ec7be607826df02')
depends_on('libx11')
depends_on('libxau')
depends_on('libxext')
depends_on('libxmu')
depends_on('xproto@7.0.17:')
depends_on('pkg-config@0.9.0:', type='build')
depends_on('util-macros', type='build')
# TODO: add package for cmdtest test dependency
| lgpl-2.1 |
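`depends_on` above accepts Spack version-range specs such as `xproto@7.0.17:`, where a trailing colon means "that version or newer". A toy illustration of those range semantics (this is a simplified model, not Spack's actual version resolver):

```python
def parse(v):
    # Split a dotted version string into a comparable tuple of ints.
    return tuple(int(p) for p in v.split('.'))

def satisfies(version, constraint):
    # 'lo:hi' is an inclusive range; either bound may be empty (open-ended).
    if ':' not in constraint:
        return parse(version) == parse(constraint)
    lo, hi = constraint.split(':')
    if lo and parse(version) < parse(lo):
        return False
    if hi and parse(version) > parse(hi):
        return False
    return True

ok = satisfies('7.0.31', '7.0.17:')
```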
pplu/botocore | tests/functional/test_s3_control.py | 2 | 2819 | # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from tests import unittest, mock, BaseSessionTest, create_session
from botocore.config import Config
from botocore.awsrequest import AWSResponse
class S3ControlOperationTest(BaseSessionTest):
def setUp(self):
super(S3ControlOperationTest, self).setUp()
self.region = 'us-west-2'
self.client = self.session.create_client(
's3control', self.region)
self.session_send_patch = mock.patch(
'botocore.endpoint.Endpoint._send')
self.http_session_send_mock = self.session_send_patch.start()
self.http_response = mock.Mock(spec=AWSResponse)
self.http_response.status_code = 200
self.http_response.headers = {}
self.http_response.content = ''
self.http_session_send_mock.return_value = self.http_response
def tearDown(self):
        super(S3ControlOperationTest, self).tearDown()
self.session_send_patch.stop()
def test_does_add_account_id_to_host(self):
self.client.get_public_access_block(AccountId='123')
self.assertEqual(self.http_session_send_mock.call_count, 1)
request = self.http_session_send_mock.call_args_list[0][0][0]
self.assertTrue(request.url.startswith(
'https://123.s3-control.us-west-2.amazonaws.com'))
def test_does_not_remove_account_id_from_headers(self):
self.client.get_public_access_block(AccountId='123')
self.assertEqual(self.http_session_send_mock.call_count, 1)
request = self.http_session_send_mock.call_args_list[0][0][0]
self.assertIn('x-amz-account-id', request.headers)
def test_does_support_dualstack_endpoint(self):
# Re-create the client with the use_dualstack_endpoint configuration
# option set to True.
self.client = self.session.create_client(
's3control', self.region, config=Config(
s3={'use_dualstack_endpoint': True}
)
)
self.client.get_public_access_block(AccountId='123')
self.assertEqual(self.http_session_send_mock.call_count, 1)
request = self.http_session_send_mock.call_args_list[0][0][0]
self.assertTrue(request.url.startswith(
'https://123.s3-control.dualstack.us-west-2.amazonaws.com'))
| apache-2.0 |
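The test above follows a common pattern for HTTP clients: patch the lowest-level send method, let the client build the request, then assert on the captured request object. A toy stdlib-only version of the same pattern (the `Client` class here is hypothetical, standing in for the botocore endpoint):

```python
from unittest import mock

class Client:
    def __init__(self, host):
        self.host = host
    def _send(self, request):
        # Real transport; must never run in tests.
        raise RuntimeError('network disabled in tests')
    def get_block(self, account_id):
        request = {'url': 'https://%s.%s/block' % (account_id, self.host),
                   'headers': {'x-amz-account-id': account_id}}
        return self._send(request)

client = Client('s3-control.us-west-2.amazonaws.com')
with mock.patch.object(Client, '_send') as send_mock:
    send_mock.return_value = {'status': 200}
    client.get_block('123')

# The mock captured the request the client constructed.
request = send_mock.call_args[0][0]
```

Note that patching the method on the class without `autospec` means `self` is not passed to the mock, so the request is the first positional argument, matching the `call_args_list[0][0][0]` indexing in the botocore test.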
adamtiger/tensorflow | tensorflow/python/ops/distributions/dirichlet.py | 60 | 10583 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""The Dirichlet distribution class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import special_math_ops
from tensorflow.python.ops.distributions import distribution
from tensorflow.python.ops.distributions import util as distribution_util
__all__ = [
"Dirichlet",
]
_dirichlet_sample_note = """Note: `value` must be a non-negative tensor with
dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e.,
`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with
`self.batch_shape() + self.event_shape()`."""
class Dirichlet(distribution.Distribution):
"""Dirichlet distribution.
The Dirichlet distribution is defined over the
[`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive,
length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the
Beta distribution when `k = 2`.
#### Mathematical Details
The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e.,
```none
S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.
```
The probability density function (pdf) is,
```none
pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z
Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)
```
where:
* `x in S^{k-1}`, i.e., the `(k-1)`-simplex,
* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
* `Z` is the normalization constant aka the [multivariate beta function](
https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),
and,
* `Gamma` is the [gamma function](
https://en.wikipedia.org/wiki/Gamma_function).
The `concentration` represents mean total counts of class occurrence, i.e.,
```none
concentration = alpha = mean * total_concentration
```
where `mean` in `S^{k-1}` and `total_concentration` is a positive real number
representing a mean total count.
Distribution parameters are automatically broadcast in all functions; see
examples for details.
#### Examples
```python
# Create a single trivariate Dirichlet, with the 3rd class being three times
# more frequent than the first. I.e., batch_shape=[], event_shape=[3].
alpha = [1., 2, 3]
dist = Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 3]
# x has one sample, one batch, three classes:
x = [.2, .3, .5] # shape: [3]
dist.prob(x) # shape: []
# x has two samples from one batch:
x = [[.1, .4, .5],
[.2, .3, .5]]
dist.prob(x) # shape: [2]
# alpha will be broadcast to shape [5, 7, 3] to match x.
x = [[...]] # shape: [5, 7, 3]
dist.prob(x) # shape: [5, 7]
```
```python
# Create batch_shape=[2], event_shape=[3]:
alpha = [[1., 2, 3],
[4, 5, 6]] # shape: [2, 3]
dist = Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 2, 3]
x = [.2, .3, .5]
# x will be broadcast as [[.2, .3, .5],
# [.2, .3, .5]],
# thus matching batch_shape [2, 3].
dist.prob(x) # shape: [2]
```
"""
def __init__(self,
concentration,
validate_args=False,
allow_nan_stats=True,
name="Dirichlet"):
"""Initialize a batch of Dirichlet distributions.
Args:
concentration: Positive floating-point `Tensor` indicating mean number
of class occurrences; aka "alpha". Implies `self.dtype`, and
`self.batch_shape`, `self.event_shape`, i.e., if
`concentration.shape = [N1, N2, ..., Nm, k]` then
`batch_shape = [N1, N2, ..., Nm]` and
`event_shape = [k]`.
validate_args: Python `bool`, default `False`. When `True` distribution
parameters are checked for validity despite possibly degrading runtime
performance. When `False` invalid inputs may silently render incorrect
outputs.
allow_nan_stats: Python `bool`, default `True`. When `True`, statistics
(e.g., mean, mode, variance) use the value "`NaN`" to indicate the
result is undefined. When `False`, an exception is raised if one or
more of the statistic's batch members are undefined.
name: Python `str` name prefixed to Ops created by this class.
"""
parameters = locals()
with ops.name_scope(name, values=[concentration]):
self._concentration = self._maybe_assert_valid_concentration(
ops.convert_to_tensor(concentration, name="concentration"),
validate_args)
self._total_concentration = math_ops.reduce_sum(self._concentration, -1)
super(Dirichlet, self).__init__(
dtype=self._concentration.dtype,
validate_args=validate_args,
allow_nan_stats=allow_nan_stats,
reparameterization_type=distribution.NOT_REPARAMETERIZED,
parameters=parameters,
graph_parents=[self._concentration,
self._total_concentration],
name=name)
@property
def concentration(self):
"""Concentration parameter; expected counts for that coordinate."""
return self._concentration
@property
def total_concentration(self):
"""Sum of last dim of concentration parameter."""
return self._total_concentration
def _batch_shape_tensor(self):
return array_ops.shape(self.total_concentration)
def _batch_shape(self):
return self.total_concentration.get_shape()
def _event_shape_tensor(self):
return array_ops.shape(self.concentration)[-1:]
def _event_shape(self):
return self.concentration.get_shape().with_rank_at_least(1)[-1:]
def _sample_n(self, n, seed=None):
gamma_sample = random_ops.random_gamma(
shape=[n],
alpha=self.concentration,
dtype=self.dtype,
seed=seed)
return gamma_sample / math_ops.reduce_sum(gamma_sample, -1, keep_dims=True)
@distribution_util.AppendDocstring(_dirichlet_sample_note)
def _log_prob(self, x):
return self._log_unnormalized_prob(x) - self._log_normalization()
@distribution_util.AppendDocstring(_dirichlet_sample_note)
def _prob(self, x):
return math_ops.exp(self._log_prob(x))
def _log_unnormalized_prob(self, x):
x = self._maybe_assert_valid_sample(x)
return math_ops.reduce_sum((self.concentration - 1.) * math_ops.log(x), -1)
def _log_normalization(self):
return special_math_ops.lbeta(self.concentration)
def _entropy(self):
k = math_ops.cast(self.event_shape_tensor()[0], self.dtype)
return (
self._log_normalization()
+ ((self.total_concentration - k)
* math_ops.digamma(self.total_concentration))
- math_ops.reduce_sum(
(self.concentration - 1.) * math_ops.digamma(self.concentration),
axis=-1))
def _mean(self):
return self.concentration / self.total_concentration[..., array_ops.newaxis]
def _covariance(self):
x = self._variance_scale_term() * self._mean()
return array_ops.matrix_set_diag(
-math_ops.matmul(x[..., array_ops.newaxis],
x[..., array_ops.newaxis, :]), # outer prod
self._variance())
def _variance(self):
scale = self._variance_scale_term()
x = scale * self._mean()
return x * (scale - x)
def _variance_scale_term(self):
"""Helper to `_covariance` and `_variance` which computes a shared scale."""
return math_ops.rsqrt(1. + self.total_concentration[..., array_ops.newaxis])
@distribution_util.AppendDocstring(
"""Note: The mode is undefined when any `concentration <= 1`. If
`self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If
`self.allow_nan_stats` is `False` an exception is raised when one or more
modes are undefined.""")
def _mode(self):
k = math_ops.cast(self.event_shape_tensor()[0], self.dtype)
mode = (self.concentration - 1.) / (
self.total_concentration[..., array_ops.newaxis] - k)
if self.allow_nan_stats:
nan = array_ops.fill(
array_ops.shape(mode),
np.array(np.nan, dtype=self.dtype.as_numpy_dtype()),
name="nan")
return array_ops.where(
math_ops.reduce_all(self.concentration > 1., axis=-1),
mode, nan)
return control_flow_ops.with_dependencies([
check_ops.assert_less(
array_ops.ones([], self.dtype),
self.concentration,
message="Mode undefined when any concentration <= 1"),
], mode)
def _maybe_assert_valid_concentration(self, concentration, validate_args):
"""Checks the validity of the concentration parameter."""
if not validate_args:
return concentration
return control_flow_ops.with_dependencies([
check_ops.assert_positive(
concentration,
message="Concentration parameter must be positive."),
check_ops.assert_rank_at_least(
concentration, 1,
message="Concentration parameter must have >=1 dimensions."),
check_ops.assert_less(
1, array_ops.shape(concentration)[-1],
message="Concentration parameter must have event_size >= 2."),
], concentration)
def _maybe_assert_valid_sample(self, x):
"""Checks the validity of a sample."""
if not self.validate_args:
return x
return control_flow_ops.with_dependencies([
check_ops.assert_positive(
x,
message="samples must be positive"),
distribution_util.assert_close(
array_ops.ones([], dtype=self.dtype),
math_ops.reduce_sum(x, -1),
message="sample last-dimension must sum to `1`"),
], x)
| apache-2.0 |
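The `_log_prob` path above computes `log_unnormalized - log_normalization`, where the normalizer is the multivariate beta function. The same formula can be checked in plain Python with `math.lgamma` (no TensorFlow), for a single concentration vector:

```python
import math

def dirichlet_log_prob(x, alpha):
    # log unnormalized density: sum_j (alpha_j - 1) * log(x_j)
    log_unnorm = sum((a - 1.0) * math.log(xj) for xj, a in zip(x, alpha))
    # log multivariate beta: sum_j lgamma(alpha_j) - lgamma(sum_j alpha_j)
    log_norm = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return log_unnorm - log_norm

# Dirichlet([1, 1, 1]) is uniform on the 2-simplex with density 2,
# so the log-density is log(2) at every point of the simplex.
lp = dirichlet_log_prob([0.2, 0.3, 0.5], [1.0, 1.0, 1.0])
```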
gomiero/PTVS | Python/Tests/TestData/VirtualEnv/env/Lib/copy_reg.py | 50 | 7001 | """Helper to provide extensibility for pickle/cPickle.
This is only useful to add pickle support for extension types defined in
C, not for instances of user-defined classes.
"""
from types import ClassType as _ClassType
__all__ = ["pickle", "constructor",
"add_extension", "remove_extension", "clear_extension_cache"]
dispatch_table = {}
def pickle(ob_type, pickle_function, constructor_ob=None):
if type(ob_type) is _ClassType:
raise TypeError("copy_reg is not intended for use with classes")
if not hasattr(pickle_function, '__call__'):
raise TypeError("reduction functions must be callable")
dispatch_table[ob_type] = pickle_function
# The constructor_ob function is a vestige of safe for unpickling.
# There is no reason for the caller to pass it anymore.
if constructor_ob is not None:
constructor(constructor_ob)
def constructor(object):
if not hasattr(object, '__call__'):
raise TypeError("constructors must be callable")
# Example: provide pickling support for complex numbers.
try:
complex
except NameError:
pass
else:
def pickle_complex(c):
return complex, (c.real, c.imag)
pickle(complex, pickle_complex, complex)
# Support for pickling new-style objects
def _reconstructor(cls, base, state):
if base is object:
obj = object.__new__(cls)
else:
obj = base.__new__(cls, state)
if base.__init__ != object.__init__:
base.__init__(obj, state)
return obj
_HEAPTYPE = 1<<9
# Python code for object.__reduce_ex__ for protocols 0 and 1
def _reduce_ex(self, proto):
assert proto < 2
for base in self.__class__.__mro__:
if hasattr(base, '__flags__') and not base.__flags__ & _HEAPTYPE:
break
else:
base = object # not really reachable
if base is object:
state = None
else:
if base is self.__class__:
raise TypeError, "can't pickle %s objects" % base.__name__
state = base(self)
args = (self.__class__, base, state)
try:
getstate = self.__getstate__
except AttributeError:
if getattr(self, "__slots__", None):
raise TypeError("a class that defines __slots__ without "
"defining __getstate__ cannot be pickled")
try:
dict = self.__dict__
except AttributeError:
dict = None
else:
dict = getstate()
if dict:
return _reconstructor, args, dict
else:
return _reconstructor, args
# Helper for __reduce_ex__ protocol 2
def __newobj__(cls, *args):
return cls.__new__(cls, *args)
def _slotnames(cls):
"""Return a list of slot names for a given class.
This needs to find slots defined by the class and its bases, so we
can't simply return the __slots__ attribute. We must walk down
the Method Resolution Order and concatenate the __slots__ of each
class found there. (This assumes classes don't modify their
__slots__ attribute to misrepresent their slots after the class is
defined.)
"""
# Get the value from a cache in the class if possible
names = cls.__dict__.get("__slotnames__")
if names is not None:
return names
# Not cached -- calculate the value
names = []
if not hasattr(cls, "__slots__"):
# This class has no slots
pass
else:
# Slots found -- gather slot names from all base classes
for c in cls.__mro__:
if "__slots__" in c.__dict__:
slots = c.__dict__['__slots__']
# if class has a single slot, it can be given as a string
if isinstance(slots, basestring):
slots = (slots,)
for name in slots:
# special descriptors
if name in ("__dict__", "__weakref__"):
continue
# mangled names
elif name.startswith('__') and not name.endswith('__'):
names.append('_%s%s' % (c.__name__, name))
else:
names.append(name)
# Cache the outcome in the class if at all possible
try:
cls.__slotnames__ = names
except:
pass # But don't die if we can't
return names
# A registry of extension codes. This is an ad-hoc compression
# mechanism. Whenever a global reference to <module>, <name> is about
# to be pickled, the (<module>, <name>) tuple is looked up here to see
# if it is a registered extension code for it. Extension codes are
# universal, so that the meaning of a pickle does not depend on
# context. (There are also some codes reserved for local use that
# don't have this restriction.) Codes are positive ints; 0 is
# reserved.
_extension_registry = {} # key -> code
_inverted_registry = {} # code -> key
_extension_cache = {} # code -> object
# Don't ever rebind those names: cPickle grabs a reference to them when
# it's initialized, and won't see a rebinding.
def add_extension(module, name, code):
"""Register an extension code."""
code = int(code)
if not 1 <= code <= 0x7fffffff:
raise ValueError, "code out of range"
key = (module, name)
if (_extension_registry.get(key) == code and
_inverted_registry.get(code) == key):
return # Redundant registrations are benign
if key in _extension_registry:
raise ValueError("key %s is already registered with code %s" %
(key, _extension_registry[key]))
if code in _inverted_registry:
raise ValueError("code %s is already in use for key %s" %
(code, _inverted_registry[code]))
_extension_registry[key] = code
_inverted_registry[code] = key
def remove_extension(module, name, code):
"""Unregister an extension code. For testing only."""
key = (module, name)
if (_extension_registry.get(key) != code or
_inverted_registry.get(code) != key):
raise ValueError("key %s is not registered with code %s" %
(key, code))
del _extension_registry[key]
del _inverted_registry[code]
if code in _extension_cache:
del _extension_cache[code]
def clear_extension_cache():
_extension_cache.clear()
# Standard extension code assignments
# Reserved ranges
# First Last Count Purpose
# 1 127 127 Reserved for Python standard library
# 128 191 64 Reserved for Zope
# 192 239 48 Reserved for 3rd parties
# 240 255 16 Reserved for private use (will never be assigned)
# 256 Inf Inf Reserved for future assignment
# Extension codes are assigned by the Python Software Foundation.
| apache-2.0 |
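Under Python 3 this module lives on as `copyreg`, with the same registration API. A short example of `pickle()` in use, registering a reduction function so pickle rebuilds instances through a constructor call (the `Point` class is illustrative, standing in for an extension type):

```python
import copyreg
import pickle

class Point:
    # Stand-in for an extension type that needs custom pickling.
    def __init__(self, x, y):
        self.x, self.y = x, y

def pickle_point(p):
    # Reduction function: (callable, args) used to rebuild the object.
    return Point, (p.x, p.y)

# Register the reduction in copyreg.dispatch_table, which pickle consults.
copyreg.pickle(Point, pickle_point)

restored = pickle.loads(pickle.dumps(Point(3, 4)))
```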
EvanzzzZ/mxnet | example/speech-demo/io_func/regr_feat_io.py | 15 | 1981 | import os
import sys
import random
import shlex
import time
import re
from utils.utils import to_bool
from feat_readers.common import *
from feat_readers import stats
from feat_io import DataReadStream
class RegrDataReadStream(object):
def __init__(self, dataset_args, n_ins):
dataset_args["has_labels"] = False
assert("seed" in dataset_args)
args1 = dict(dataset_args)
args2 = dict(dataset_args)
args1["lst_file"] = dataset_args["input_lst_file"]
args2["lst_file"] = dataset_args["output_lst_file"]
self.input = DataReadStream(args1, n_ins)
self.output = DataReadStream(args2, n_ins)
def read_by_part(self):
self.input.read_by_part()
self.output.read_by_part()
def read_by_matrix(self):
self.input.read_by_matrix()
self.output.read_by_matrix()
def make_shared(self):
self.input.make_shared()
self.output.make_shared()
def get_shared(self):
iret = self.input.get_shared()
oret = self.output.get_shared()
assert(iret[1] is None)
assert(oret[1] is None)
return iret[0], oret[0]
def initialize_read(self):
self.input.initialize_read()
self.output.initialize_read()
def current_utt_id(self):
a = self.input.current_utt_id()
b = self.output.current_utt_id()
assert(a == b)
return a
def load_next_block(self):
a = self.input.load_next_block()
b = self.output.load_next_block()
assert(a == b)
return a
def get_state(self):
a = self.input.get_state()
b = self.output.get_state()
assert(a[0] == b[0])
assert(a[2] == b[2])
assert(a[3] == b[3])
assert(a[4] == b[4])
assert(numpy.array_equal(a[1], b[1]))
return a
def set_state(self, state):
self.input.set_state(state)
self.output.set_state(state)
| apache-2.0 |
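`RegrDataReadStream` is essentially a lockstep wrapper: every call is mirrored onto the input and output streams, and the utterance ids are asserted equal so the two feature files stay aligned. The core pattern, reduced to plain Python:

```python
class PairedStream:
    """Keep two iterators in lockstep, asserting their keys agree
    (a stdlib sketch of the input/output mirroring above)."""
    def __init__(self, inputs, outputs):
        self.inputs = iter(inputs)
        self.outputs = iter(outputs)
    def next_pair(self):
        key_in, val_in = next(self.inputs)
        key_out, val_out = next(self.outputs)
        # Both streams must be positioned at the same utterance.
        assert key_in == key_out, 'utterance ids diverged'
        return key_in, (val_in, val_out)

stream = PairedStream([('utt1', 1), ('utt2', 2)],
                      [('utt1', 10), ('utt2', 20)])
first = stream.next_pair()
```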
westurner/pgs | setup.py | 1 | 1664 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import codecs
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
with codecs.open('README.rst', encoding='UTF8') as readme_file:
readme = readme_file.read()
with codecs.open('HISTORY.rst', encoding='UTF8') as history_file:
history = history_file.read().replace('.. :changelog:', '')
requirements = [
# TODO: put package requirements here
]
test_requirements = [
# TODO: put package test requirements here
]
setup(
name='pgs',
version='0.1.4',
description="A bottle webapp for serving static files from a git branch, or from the local filesystem.",
long_description=readme + '\n\n' + history,
author="Wes Turner",
author_email='wes@wrd.nu',
url='https://github.com/westurner/pgs',
packages=[
'pgs',
],
package_dir={'pgs':
'pgs'},
include_package_data=True,
install_requires=requirements,
license="BSD",
zip_safe=False,
keywords='pgs',
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
],
entry_points="""
[console_scripts]
pgs = pgs.app:main
""",
test_suite='tests',
tests_require=test_requirements
)
| mit |
BaptisteBriois/rapid_plan | node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/pylib/gyp/__init__.py | 1524 | 22178 | #!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import copy
import gyp.input
import optparse
import os.path
import re
import shlex
import sys
import traceback
from gyp.common import GypError
# Default debug modes for GYP
debug = {}
# List of "official" debug modes, but you can use anything you like.
DEBUG_GENERAL = 'general'
DEBUG_VARIABLES = 'variables'
DEBUG_INCLUDES = 'includes'
def DebugOutput(mode, message, *args):
if 'all' in gyp.debug or mode in gyp.debug:
ctx = ('unknown', 0, 'unknown')
try:
f = traceback.extract_stack(limit=2)
if f:
ctx = f[0][:3]
except:
pass
if args:
message %= args
print '%s:%s:%d:%s %s' % (mode.upper(), os.path.basename(ctx[0]),
ctx[1], ctx[2], message)
def FindBuildFiles():
extension = '.gyp'
files = os.listdir(os.getcwd())
build_files = []
for file in files:
if file.endswith(extension):
build_files.append(file)
return build_files
def Load(build_files, format, default_variables={},
includes=[], depth='.', params=None, check=False,
circular_check=True, duplicate_basename_check=True):
"""
Loads one or more specified build files.
default_variables and includes will be copied before use.
Returns the generator for the specified format and the
data returned by loading the specified build files.
"""
if params is None:
params = {}
if '-' in format:
format, params['flavor'] = format.split('-', 1)
default_variables = copy.copy(default_variables)
# Default variables provided by this program and its modules should be
# named WITH_CAPITAL_LETTERS to provide a distinct "best practice" namespace,
# avoiding collisions with user and automatic variables.
default_variables['GENERATOR'] = format
default_variables['GENERATOR_FLAVOR'] = params.get('flavor', '')
# Format can be a custom python file, or by default the name of a module
# within gyp.generator.
if format.endswith('.py'):
generator_name = os.path.splitext(format)[0]
path, generator_name = os.path.split(generator_name)
# Make sure the path to the custom generator is in sys.path
# Don't worry about removing it once we are done. Keeping the path
# to each generator that is used in sys.path is likely harmless and
# arguably a good idea.
path = os.path.abspath(path)
if path not in sys.path:
sys.path.insert(0, path)
else:
generator_name = 'gyp.generator.' + format
# These parameters are passed in order (as opposed to by key)
# because ActivePython cannot handle key parameters to __import__.
generator = __import__(generator_name, globals(), locals(), generator_name)
for (key, val) in generator.generator_default_variables.items():
default_variables.setdefault(key, val)
# Give the generator the opportunity to set additional variables based on
# the params it will receive in the output phase.
if getattr(generator, 'CalculateVariables', None):
generator.CalculateVariables(default_variables, params)
# Give the generator the opportunity to set generator_input_info based on
# the params it will receive in the output phase.
if getattr(generator, 'CalculateGeneratorInputInfo', None):
generator.CalculateGeneratorInputInfo(params)
# Fetch the generator specific info that gets fed to input, we use getattr
# so we can default things and the generators only have to provide what
# they need.
generator_input_info = {
'non_configuration_keys':
getattr(generator, 'generator_additional_non_configuration_keys', []),
'path_sections':
getattr(generator, 'generator_additional_path_sections', []),
'extra_sources_for_rules':
getattr(generator, 'generator_extra_sources_for_rules', []),
'generator_supports_multiple_toolsets':
getattr(generator, 'generator_supports_multiple_toolsets', False),
'generator_wants_static_library_dependencies_adjusted':
getattr(generator,
'generator_wants_static_library_dependencies_adjusted', True),
'generator_wants_sorted_dependencies':
getattr(generator, 'generator_wants_sorted_dependencies', False),
'generator_filelist_paths':
getattr(generator, 'generator_filelist_paths', None),
}
# Process the input specific to this generator.
result = gyp.input.Load(build_files, default_variables, includes[:],
depth, generator_input_info, check, circular_check,
duplicate_basename_check,
params['parallel'], params['root_targets'])
return [generator] + result
def NameValueListToDict(name_value_list):
"""
Takes an array of strings of the form 'NAME=VALUE' and creates a dictionary
of the pairs. If a string is simply NAME, then the value in the dictionary
is set to True. If VALUE can be converted to an integer, it is.
"""
result = { }
for item in name_value_list:
tokens = item.split('=', 1)
if len(tokens) == 2:
# If we can make it an int, use that, otherwise, use the string.
try:
token_value = int(tokens[1])
except ValueError:
token_value = tokens[1]
# Set the variable to the supplied value.
result[tokens[0]] = token_value
else:
# No value supplied, treat it as a boolean and set it.
result[tokens[0]] = True
return result
def ShlexEnv(env_name):
flags = os.environ.get(env_name, [])
if flags:
flags = shlex.split(flags)
return flags
def FormatOpt(opt, value):
if opt.startswith('--'):
return '%s=%s' % (opt, value)
return opt + value
def RegenerateAppendFlag(flag, values, predicate, env_name, options):
"""Regenerate a list of command line flags, for an option of action='append'.
The |env_name|, if given, is checked in the environment and used to generate
an initial list of options, then the options that were specified on the
command line (given in |values|) are appended. This matches the handling of
environment variables and command line flags where command line flags override
the environment, while not requiring the environment to be set when the flags
are used again.
"""
flags = []
if options.use_environment and env_name:
for flag_value in ShlexEnv(env_name):
value = FormatOpt(flag, predicate(flag_value))
if value in flags:
flags.remove(value)
flags.append(value)
if values:
for flag_value in values:
flags.append(FormatOpt(flag, predicate(flag_value)))
return flags
def RegenerateFlags(options):
"""Given a parsed options object, and taking the environment variables into
account, returns a list of flags that should regenerate an equivalent options
object (even in the absence of the environment variables.)
Any path options will be normalized relative to depth.
The format flag is not included, as it is assumed the calling generator will
set that as appropriate.
"""
def FixPath(path):
path = gyp.common.FixIfRelativePath(path, options.depth)
if not path:
return os.path.curdir
return path
def Noop(value):
return value
# We always want to ignore the environment when regenerating, to avoid
# duplicate or changed flags in the environment at the time of regeneration.
flags = ['--ignore-environment']
for name, metadata in options._regeneration_metadata.iteritems():
opt = metadata['opt']
value = getattr(options, name)
value_predicate = metadata['type'] == 'path' and FixPath or Noop
action = metadata['action']
env_name = metadata['env_name']
if action == 'append':
flags.extend(RegenerateAppendFlag(opt, value, value_predicate,
env_name, options))
elif action in ('store', None): # None is a synonym for 'store'.
if value:
flags.append(FormatOpt(opt, value_predicate(value)))
elif options.use_environment and env_name and os.environ.get(env_name):
flags.append(FormatOpt(opt, value_predicate(os.environ.get(env_name))))
elif action in ('store_true', 'store_false'):
if ((action == 'store_true' and value) or
(action == 'store_false' and not value)):
flags.append(opt)
elif options.use_environment and env_name:
print >>sys.stderr, ('Warning: environment regeneration unimplemented '
'for %s flag %r env_name %r' % (action, opt,
env_name))
else:
print >>sys.stderr, ('Warning: regeneration unimplemented for action %r '
'flag %r' % (action, opt))
return flags
class RegeneratableOptionParser(optparse.OptionParser):
def __init__(self):
self.__regeneratable_options = {}
optparse.OptionParser.__init__(self)
def add_option(self, *args, **kw):
"""Add an option to the parser.
This accepts the same arguments as OptionParser.add_option, plus the
following:
regenerate: can be set to False to prevent this option from being included
in regeneration.
env_name: name of environment variable that additional values for this
option come from.
type: adds type='path', to tell the regenerator that the values of
this option need to be made relative to options.depth
"""
env_name = kw.pop('env_name', None)
if 'dest' in kw and kw.pop('regenerate', True):
dest = kw['dest']
# The path type is needed for regenerating, for optparse we can just treat
# it as a string.
type = kw.get('type')
if type == 'path':
kw['type'] = 'string'
self.__regeneratable_options[dest] = {
'action': kw.get('action'),
'type': type,
'env_name': env_name,
'opt': args[0],
}
optparse.OptionParser.add_option(self, *args, **kw)
def parse_args(self, *args):
values, args = optparse.OptionParser.parse_args(self, *args)
values._regeneration_metadata = self.__regeneratable_options
return values, args
def gyp_main(args):
my_name = os.path.basename(sys.argv[0])
parser = RegeneratableOptionParser()
usage = 'usage: %s [options ...] [build_file ...]'
parser.set_usage(usage.replace('%s', '%prog'))
parser.add_option('--build', dest='configs', action='append',
help='configuration for build after project generation')
parser.add_option('--check', dest='check', action='store_true',
help='check format of gyp files')
parser.add_option('--config-dir', dest='config_dir', action='store',
env_name='GYP_CONFIG_DIR', default=None,
help='The location for configuration files like '
'include.gypi.')
parser.add_option('-d', '--debug', dest='debug', metavar='DEBUGMODE',
action='append', default=[], help='turn on a debugging '
'mode for debugging GYP. Supported modes are "variables", '
'"includes" and "general" or "all" for all of them.')
parser.add_option('-D', dest='defines', action='append', metavar='VAR=VAL',
env_name='GYP_DEFINES',
help='sets variable VAR to value VAL')
parser.add_option('--depth', dest='depth', metavar='PATH', type='path',
help='set DEPTH gyp variable to a relative path to PATH')
parser.add_option('-f', '--format', dest='formats', action='append',
env_name='GYP_GENERATORS', regenerate=False,
help='output formats to generate')
parser.add_option('-G', dest='generator_flags', action='append', default=[],
metavar='FLAG=VAL', env_name='GYP_GENERATOR_FLAGS',
help='sets generator flag FLAG to VAL')
parser.add_option('--generator-output', dest='generator_output',
action='store', default=None, metavar='DIR', type='path',
env_name='GYP_GENERATOR_OUTPUT',
help='puts generated build files under DIR')
parser.add_option('--ignore-environment', dest='use_environment',
action='store_false', default=True, regenerate=False,
help='do not read options from environment variables')
parser.add_option('-I', '--include', dest='includes', action='append',
metavar='INCLUDE', type='path',
help='files to include in all loaded .gyp files')
# --no-circular-check disables the check for circular relationships between
# .gyp files. These relationships should not exist, but they've only been
# observed to be harmful with the Xcode generator. Chromium's .gyp files
# currently have some circular relationships on non-Mac platforms, so this
# option allows the strict behavior to be used on Macs and the lenient
# behavior to be used elsewhere.
# TODO(mark): Remove this option when http://crbug.com/35878 is fixed.
parser.add_option('--no-circular-check', dest='circular_check',
action='store_false', default=True, regenerate=False,
help="don't check for circular relationships between files")
# --no-duplicate-basename-check disables the check for duplicate basenames
# in a static_library/shared_library project. Visual C++ 2008 generator
# doesn't support this configuration. Libtool on Mac also generates warnings
# when duplicate basenames are passed into Make generator on Mac.
# TODO(yukawa): Remove this option when these legacy generators are
# deprecated.
parser.add_option('--no-duplicate-basename-check',
dest='duplicate_basename_check', action='store_false',
default=True, regenerate=False,
help="don't check for duplicate basenames")
parser.add_option('--no-parallel', action='store_true', default=False,
help='Disable multiprocessing')
parser.add_option('-S', '--suffix', dest='suffix', default='',
help='suffix to add to generated files')
parser.add_option('--toplevel-dir', dest='toplevel_dir', action='store',
default=None, metavar='DIR', type='path',
help='directory to use as the root of the source tree')
parser.add_option('-R', '--root-target', dest='root_targets',
action='append', metavar='TARGET',
help='include only TARGET and its deep dependencies')
options, build_files_arg = parser.parse_args(args)
build_files = build_files_arg
# Set up the configuration directory (defaults to ~/.gyp)
if not options.config_dir:
home = None
home_dot_gyp = None
if options.use_environment:
home_dot_gyp = os.environ.get('GYP_CONFIG_DIR', None)
if home_dot_gyp:
home_dot_gyp = os.path.expanduser(home_dot_gyp)
if not home_dot_gyp:
home_vars = ['HOME']
if sys.platform in ('cygwin', 'win32'):
home_vars.append('USERPROFILE')
for home_var in home_vars:
home = os.getenv(home_var)
if home != None:
home_dot_gyp = os.path.join(home, '.gyp')
if not os.path.exists(home_dot_gyp):
home_dot_gyp = None
else:
break
else:
home_dot_gyp = os.path.expanduser(options.config_dir)
if home_dot_gyp and not os.path.exists(home_dot_gyp):
home_dot_gyp = None
if not options.formats:
# If no format was given on the command line, then check the env variable.
generate_formats = []
if options.use_environment:
generate_formats = os.environ.get('GYP_GENERATORS', [])
if generate_formats:
generate_formats = re.split(r'[\s,]', generate_formats)
if generate_formats:
options.formats = generate_formats
else:
# Nothing in the variable, default based on platform.
if sys.platform == 'darwin':
options.formats = ['xcode']
elif sys.platform in ('win32', 'cygwin'):
options.formats = ['msvs']
else:
options.formats = ['make']
if not options.generator_output and options.use_environment:
g_o = os.environ.get('GYP_GENERATOR_OUTPUT')
if g_o:
options.generator_output = g_o
options.parallel = not options.no_parallel
for mode in options.debug:
gyp.debug[mode] = 1
# Do an extra check to avoid work when we're not debugging.
if DEBUG_GENERAL in gyp.debug:
DebugOutput(DEBUG_GENERAL, 'running with these options:')
for option, value in sorted(options.__dict__.items()):
if option[0] == '_':
continue
if isinstance(value, basestring):
DebugOutput(DEBUG_GENERAL, " %s: '%s'", option, value)
else:
DebugOutput(DEBUG_GENERAL, " %s: %s", option, value)
if not build_files:
build_files = FindBuildFiles()
if not build_files:
raise GypError((usage + '\n\n%s: error: no build_file') %
(my_name, my_name))
# TODO(mark): Chromium-specific hack!
# For Chromium, the gyp "depth" variable should always be a relative path
# to Chromium's top-level "src" directory. If no depth variable was set
# on the command line, try to find a "src" directory by looking at the
# absolute path to each build file's directory. The first "src" component
# found will be treated as though it were the path used for --depth.
if not options.depth:
for build_file in build_files:
build_file_dir = os.path.abspath(os.path.dirname(build_file))
build_file_dir_components = build_file_dir.split(os.path.sep)
components_len = len(build_file_dir_components)
for index in xrange(components_len - 1, -1, -1):
if build_file_dir_components[index] == 'src':
options.depth = os.path.sep.join(build_file_dir_components)
break
del build_file_dir_components[index]
# If the inner loop found something, break without advancing to another
# build file.
if options.depth:
break
if not options.depth:
    raise GypError('Could not automatically locate src directory. This is '
                   'a temporary Chromium feature that will be removed. Use '
                   '--depth as a workaround.')
# If toplevel-dir is not set, we assume that depth is the root of our source
# tree.
if not options.toplevel_dir:
options.toplevel_dir = options.depth
# -D on the command line sets variable defaults - D isn't just for define,
# it's for default. Perhaps there should be a way to force (-F?) a
# variable's value so that it can't be overridden by anything else.
cmdline_default_variables = {}
defines = []
if options.use_environment:
defines += ShlexEnv('GYP_DEFINES')
if options.defines:
defines += options.defines
cmdline_default_variables = NameValueListToDict(defines)
if DEBUG_GENERAL in gyp.debug:
DebugOutput(DEBUG_GENERAL,
"cmdline_default_variables: %s", cmdline_default_variables)
# Set up includes.
includes = []
# If ~/.gyp/include.gypi exists, it'll be forcibly included into every
# .gyp file that's loaded, before anything else is included.
if home_dot_gyp != None:
default_include = os.path.join(home_dot_gyp, 'include.gypi')
if os.path.exists(default_include):
print 'Using overrides found in ' + default_include
includes.append(default_include)
# Command-line --include files come after the default include.
if options.includes:
includes.extend(options.includes)
# Generator flags should be prefixed with the target generator since they
# are global across all generator runs.
gen_flags = []
if options.use_environment:
gen_flags += ShlexEnv('GYP_GENERATOR_FLAGS')
if options.generator_flags:
gen_flags += options.generator_flags
generator_flags = NameValueListToDict(gen_flags)
if DEBUG_GENERAL in gyp.debug.keys():
DebugOutput(DEBUG_GENERAL, "generator_flags: %s", generator_flags)
# Generate all requested formats (use a set in case we got one format request
# twice)
for format in set(options.formats):
params = {'options': options,
'build_files': build_files,
'generator_flags': generator_flags,
'cwd': os.getcwd(),
'build_files_arg': build_files_arg,
'gyp_binary': sys.argv[0],
'home_dot_gyp': home_dot_gyp,
'parallel': options.parallel,
'root_targets': options.root_targets,
'target_arch': cmdline_default_variables.get('target_arch', '')}
# Start with the default variables from the command line.
[generator, flat_list, targets, data] = Load(
build_files, format, cmdline_default_variables, includes, options.depth,
params, options.check, options.circular_check,
options.duplicate_basename_check)
# TODO(mark): Pass |data| for now because the generator needs a list of
# build files that came in. In the future, maybe it should just accept
# a list, and not the whole data dict.
# NOTE: flat_list is the flattened dependency graph specifying the order
# that targets may be built. Build systems that operate serially or that
# need to have dependencies defined before dependents reference them should
# generate targets in the order specified in flat_list.
generator.GenerateOutput(flat_list, targets, data, params)
if options.configs:
valid_configs = targets[flat_list[0]]['configurations'].keys()
for conf in options.configs:
if conf not in valid_configs:
raise GypError('Invalid config specified via --build: %s' % conf)
generator.PerformBuild(data, options.configs, params)
# Done
return 0
def main(args):
try:
return gyp_main(args)
except GypError, e:
sys.stderr.write("gyp: %s\n" % e)
return 1
# NOTE: setuptools generated console_scripts calls function with no arguments
def script_main():
return main(sys.argv[1:])
if __name__ == '__main__':
sys.exit(script_main())
| gpl-2.0 |
DonLakeFlyer/ardupilot | Tools/scripts/make_intel_hex.py | 41 | 1315 | #!/usr/bin/env python
import sys, os, shutil, struct
import intelhex
# make two intel hex files, one including bootloader and one without
# for loading with DFU based tools
if len(sys.argv) != 4:
print("Usage: make_intel_hex.py BINFILE BOOTLOADER RESERVE_KB")
sys.exit(1)
scripts = os.path.dirname(__file__)
binfile = sys.argv[1]
bootloaderfile = sys.argv[2]
reserve_kb = int(sys.argv[3])
(root,ext) = os.path.splitext(binfile)
hexfile = root + ".hex"
hex_with_bl = root + "_with_bl.hex"
if not os.path.exists(binfile):
print("Can't find bin file %s" % binfile)
sys.exit(1)
if not os.path.exists(bootloaderfile):
print("Can't find bootloader file %s" % bootloaderfile)
sys.exit(1)
blimage = bytes(open(bootloaderfile, "rb").read())
blimage += bytes(struct.pack('B',255) * (reserve_kb * 1024 - len(blimage)))
if reserve_kb > 0 and len(blimage) != reserve_kb * 1024:
print("Bad blimage size %u" % len(blimage))
sys.exit(1)
appimage = bytes(open(binfile,"rb").read())
with_bl = blimage + appimage
tmpfile = hexfile + ".tmp"
open(tmpfile, "wb").write(appimage)
intelhex.bin2hex(tmpfile, hexfile, offset=(0x08000000 + reserve_kb*1024))
if reserve_kb > 0:
open(tmpfile, "wb").write(with_bl)
intelhex.bin2hex(tmpfile, hex_with_bl, offset=0x08000000)
os.unlink(tmpfile)
| gpl-3.0 |
Bulochkin/tensorflow_pack | tensorflow/contrib/distributions/python/ops/operator_test_util.py | 79 | 6603 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities for testing `OperatorPDBase` and related classes."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import numpy as np
import six
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import linalg_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
@six.add_metaclass(abc.ABCMeta) # pylint: disable=no-init
class OperatorPDDerivedClassTest(test.TestCase):
"""Tests for derived classes.
Subclasses should implement every abstractmethod, and this will enable all
test methods to work.
"""
def setUp(self):
self._rng = np.random.RandomState(42)
def _compare_results(self, expected, actual, static_shapes=True, atol=1e-5):
"""Compare expected value (array) to the actual value (Tensor)."""
if static_shapes:
self.assertEqual(expected.shape, actual.get_shape())
self.assertAllClose(expected, actual.eval(), atol=atol)
@abc.abstractmethod
def _build_operator_and_mat(self, batch_shape, k, dtype=np.float64):
"""Build a batch matrix and an Operator that should have similar behavior.
Every operator represents a (batch) matrix. This method returns both
together, and is used e.g. by tests.
Args:
batch_shape: List-like of Python integers giving batch shape of operator.
k: Python integer, the event size.
dtype: Numpy dtype. Data type of returned array/operator.
Returns:
operator: `OperatorPDBase` subclass.
mat: numpy array representing a (batch) matrix.
"""
# Create a matrix as a numpy array. Shape = batch_shape + [k, k].
# Create an OperatorPDDiag that should have the same behavior as the matrix.
    # All arguments are convertible to numpy arrays.
#
batch_shape = list(batch_shape)
mat_shape = batch_shape + [k, k]
# return operator, mat
raise NotImplementedError("Not implemented yet.")
def testToDense(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
for dtype in [np.float32, np.float64]:
operator, mat = self._build_operator_and_mat(
batch_shape, k, dtype=dtype)
self._compare_results(expected=mat, actual=operator.to_dense())
def testSqrtToDense(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
sqrt = operator.sqrt_to_dense()
self.assertEqual(mat.shape, sqrt.get_shape())
# Square roots are not unique, but SS^T should equal mat. In this
# case however, we should have S = S^T.
self._compare_results(
expected=mat, actual=math_ops.matmul(sqrt, sqrt))
def testDeterminants(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
expected_det = linalg_ops.matrix_determinant(mat).eval()
self._compare_results(expected_det, operator.det())
self._compare_results(np.log(expected_det), operator.log_det())
def testMatmul(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
# Work with 5 simultaneous systems. 5 is arbitrary.
x = self._rng.randn(*(batch_shape + (k, 5)))
self._compare_results(
expected=math_ops.matmul(mat, x).eval(),
actual=operator.matmul(x))
def testSqrtMatmul(self):
# Square roots are not unique, but we should have SS^T x = Ax, and in our
# case, we should have S = S^T, so SSx = Ax.
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
# Work with 5 simultaneous systems. 5 is arbitrary.
x = self._rng.randn(*(batch_shape + (k, 5)))
self._compare_results(
expected=math_ops.matmul(mat, x).eval(),
actual=operator.sqrt_matmul(operator.sqrt_matmul(x)))
def testSolve(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
# Work with 5 simultaneous systems. 5 is arbitrary.
x = self._rng.randn(*(batch_shape + (k, 5)))
self._compare_results(
expected=linalg_ops.matrix_solve(mat, x).eval(),
actual=operator.solve(x))
def testSqrtSolve(self):
# Square roots are not unique, but we should still have
# S^{-T} S^{-1} x = A^{-1} x.
# In our case, we should have S = S^T, so then S^{-1} S^{-1} x = A^{-1} x.
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
# Work with 5 simultaneous systems. 5 is arbitrary.
x = self._rng.randn(*(batch_shape + (k, 5)))
self._compare_results(
expected=linalg_ops.matrix_solve(mat, x).eval(),
actual=operator.sqrt_solve(operator.sqrt_solve(x)))
def testAddToTensor(self):
with self.test_session():
for batch_shape in [(), (
2,
3,)]:
for k in [1, 4]:
operator, mat = self._build_operator_and_mat(batch_shape, k)
tensor = array_ops.ones_like(mat)
self._compare_results(
expected=(mat + tensor).eval(),
actual=operator.add_to_tensor(tensor))
| apache-2.0 |
adelton/django | tests/gis_tests/geoadmin/tests.py | 304 | 3157 | from __future__ import unicode_literals
from django.contrib.gis import admin
from django.contrib.gis.geos import Point
from django.test import TestCase, override_settings, skipUnlessDBFeature
from .admin import UnmodifiableAdmin
from .models import City, site
@skipUnlessDBFeature("gis_enabled")
@override_settings(ROOT_URLCONF='django.contrib.gis.tests.geoadmin.urls')
class GeoAdminTest(TestCase):
def test_ensure_geographic_media(self):
geoadmin = site._registry[City]
admin_js = geoadmin.media.render_js()
self.assertTrue(any(geoadmin.openlayers_url in js for js in admin_js))
def test_olmap_OSM_rendering(self):
delete_all_btn = """<a href="javascript:geodjango_point.clearFeatures()">Delete all Features</a>"""
original_geoadmin = site._registry[City]
params = original_geoadmin.get_map_widget(City._meta.get_field('point')).params
result = original_geoadmin.get_map_widget(City._meta.get_field('point'))(
).render('point', Point(-79.460734, 40.18476), params)
self.assertIn(
"""geodjango_point.layers.base = new OpenLayers.Layer.OSM("OpenStreetMap (Mapnik)");""",
result)
self.assertIn(delete_all_btn, result)
site.unregister(City)
site.register(City, UnmodifiableAdmin)
try:
geoadmin = site._registry[City]
params = geoadmin.get_map_widget(City._meta.get_field('point')).params
result = geoadmin.get_map_widget(City._meta.get_field('point'))(
).render('point', Point(-79.460734, 40.18476), params)
self.assertNotIn(delete_all_btn, result)
finally:
site.unregister(City)
site.register(City, original_geoadmin.__class__)
def test_olmap_WMS_rendering(self):
geoadmin = admin.GeoModelAdmin(City, site)
result = geoadmin.get_map_widget(City._meta.get_field('point'))(
).render('point', Point(-79.460734, 40.18476))
self.assertIn(
"""geodjango_point.layers.base = new OpenLayers.Layer.WMS("OpenLayers WMS", """
""""http://vmap0.tiles.osgeo.org/wms/vmap0", {layers: 'basic', format: 'image/jpeg'});""",
result)
def test_olwidget_has_changed(self):
"""
Check that changes are accurately noticed by OpenLayersWidget.
"""
geoadmin = site._registry[City]
form = geoadmin.get_changelist_form(None)()
has_changed = form.fields['point'].has_changed
initial = Point(13.4197458572965953, 52.5194108501149799, srid=4326)
data_same = "SRID=3857;POINT(1493879.2754093995 6894592.019687599)"
data_almost_same = "SRID=3857;POINT(1493879.2754093990 6894592.019687590)"
data_changed = "SRID=3857;POINT(1493884.0527237 6894593.8111804)"
self.assertTrue(has_changed(None, data_changed))
self.assertTrue(has_changed(initial, ""))
self.assertFalse(has_changed(None, ""))
self.assertFalse(has_changed(initial, data_same))
self.assertFalse(has_changed(initial, data_almost_same))
self.assertTrue(has_changed(initial, data_changed))
| bsd-3-clause |
fabrickit/fabkit | core/agent/fabcontext.py | 1 | 1417 | # coding: utf-8
from oslo_context import context
from oslo_db.sqlalchemy import enginefacade
@enginefacade.transaction_context_provider
class RequestContext(context.RequestContext):
"""Security context and request information.
Represents the user taking a given action within the system.
"""
def __init__(self, user_id=None, project_id=None,
is_admin=None, read_deleted="no",
roles=None, remote_address=None, timestamp=None,
request_id=None, auth_token=None, overwrite=True,
quota_class=None, user_name=None, project_name=None,
service_catalog=None, instance_lock_checked=False,
user_auth_plugin=None, **kwargs):
user = kwargs.pop('user', None)
tenant = kwargs.pop('tenant', None)
super(RequestContext, self).__init__(
auth_token=auth_token,
user=user_id or user,
tenant=project_id or tenant,
domain=kwargs.pop('domain', None),
user_domain=kwargs.pop('user_domain', None),
project_domain=kwargs.pop('project_domain', None),
is_admin=is_admin,
read_only=kwargs.pop('read_only', False),
show_deleted=kwargs.pop('show_deleted', False),
request_id=request_id,
resource_uuid=kwargs.pop('resource_uuid', None),
overwrite=overwrite)
| mit |
babycaseny/poedit | deps/boost/tools/build/test/library_order.py | 44 | 2111 | #!/usr/bin/python
# Copyright 2004 Vladimir Prus
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
# Test that on compilers sensitive to library order on linker's command line,
# we generate the correct order.
import BoostBuild
t = BoostBuild.Tester(use_test_config=False)
t.write("main.cpp", """\
void a();
int main() { a(); }
""")
t.write("a.cpp", """\
void b();
void a() { b(); }
""")
t.write("b.cpp", """\
void c();
void b() { c(); }
""")
t.write("c.cpp", """\
void d();
void c() { d(); }
""")
t.write("d.cpp", """\
void d() {}
""")
# The order of libraries in 'main' is crafted so that we get an error unless we
# do something about the order ourselves.
t.write("jamroot.jam", """\
exe main : main.cpp libd libc libb liba ;
lib libd : d.cpp ;
lib libc : c.cpp : <link>static <use>libd ;
lib libb : b.cpp : <use>libc ;
lib liba : a.cpp : <use>libb ;
""")
t.run_build_system(["-d2"])
t.expect_addition("bin/$toolset/debug/main.exe")
# Test the order between searched libraries.
t.write("jamroot.jam", """\
exe main : main.cpp png z ;
lib png : z : <name>png ;
lib z : : <name>zzz ;
""")
t.run_build_system(["-a", "-n", "-d+2"])
t.fail_test(t.stdout().find("png") > t.stdout().find("zzz"))
t.write("jamroot.jam", """\
exe main : main.cpp png z ;
lib png : : <name>png ;
lib z : png : <name>zzz ;
""")
t.run_build_system(["-a", "-n", "-d+2"])
t.fail_test(t.stdout().find("png") < t.stdout().find("zzz"))
# Test the order between prebuilt libraries.
t.write("first.a", "")
t.write("second.a", "")
t.write("jamroot.jam", """\
exe main : main.cpp first second ;
lib first : second : <file>first.a ;
lib second : : <file>second.a ;
""")
t.run_build_system(["-a", "-n", "-d+2"])
t.fail_test(t.stdout().find("first") > t.stdout().find("second"))
t.write("jamroot.jam", """
exe main : main.cpp first second ;
lib first : : <file>first.a ;
lib second : first : <file>second.a ;
""")
t.run_build_system(["-a", "-n", "-d+2"])
t.fail_test(t.stdout().find("first") < t.stdout().find("second"))
t.cleanup()
| mit |
dkubiak789/OpenUpgrade | addons/hw_proxy/__openerp__.py | 67 | 1617 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': 'Hardware Proxy',
'version': '1.0',
'category': 'Point Of Sale',
'sequence': 6,
'summary': 'Connect the Web Client to Hardware Peripherals',
'description': """
Hardware Proxy
==============
This module allows you to remotely use peripherals connected to this server.
This module only contains the enabling framework. The actual device drivers
are found in other modules that must be installed separately.
""",
'author': 'OpenERP SA',
'depends': [],
'test': [
],
'installable': True,
'auto_install': False,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
wangjun/djangae | djangae/contrib/uniquetool/admin.py | 19 | 2536 | from django import forms
from django.db import models
from django.contrib import admin
from .models import UniqueAction, ActionLog, encode_model
def _show_model(m):
if m._meta.app_label == "uniquetool":
return False
    if any(f.unique for f in m._meta.fields):
        return True
    if m._meta.unique_together:
        return True
return False
class ActionLogInline(admin.TabularInline):
model = ActionLog
verbose_name_plural = 'List of action messages'
can_delete = False
extra = 0
editable_fields = []
readonly_fields = ('log_type', 'instance_key', 'marker_key', )
def has_add_permission(self, request):
return False
class UniqueActionAdmin(admin.ModelAdmin):
actions = None
list_display = ('action_type', 'model_name', 'status')
change_form_template = "admin/unique_action_change_form.html"
inlines = [ActionLogInline]
@classmethod
def model_choices(cls):
if not hasattr(cls, '_model_choices'):
all_models = sorted([
(encode_model(m), m.__name__)
for m in models.get_models()
if _show_model(m)
], key=lambda x: x[1])
cls._model_choices = all_models
return cls._model_choices
def model_name(self, instance):
return dict(self.model_choices())[instance.model]
def get_form(self, request, obj=None, **kwargs):
if obj is not None and obj.pk:
kwargs['fields'] = []
return super(UniqueActionAdmin, self).get_form(request, obj=obj, **kwargs)
form = super(UniqueActionAdmin, self).get_form(request, obj=obj, **kwargs)
# FIXME: this field should be optional when a "clean" action is selected
form.base_fields['model'] = forms.ChoiceField(choices=self.model_choices())
return form
def has_delete_permission(self, request, obj=None):
if obj and obj.status == 'running':
return False
return super(UniqueActionAdmin, self).has_delete_permission(request, obj=obj)
def render_change_form(self, request, context, add=False, change=False, form_url='', obj=None):
if obj:
context["title"] = u"Errors from %s on %s (%s)" % (obj.action_type, self.model_name(obj), obj.get_status_display())
context["readonly"] = True
return super(UniqueActionAdmin, self).render_change_form(request, context, add=add, change=change, form_url=form_url, obj=obj)
admin.site.register(UniqueAction, UniqueActionAdmin)
| bsd-3-clause |
tmpvar/skia.cc | tools/skp/page_sets/skia_pokemonwiki_desktop.py | 32 | 1308 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# pylint: disable=W0401,W0614
from telemetry import story
from telemetry.page import page as page_module
from telemetry.page import shared_page_state
class SkiaBuildbotDesktopPage(page_module.Page):
def __init__(self, url, page_set):
super(SkiaBuildbotDesktopPage, self).__init__(
url=url,
page_set=page_set,
credentials_path='data/credentials.json',
shared_page_state_class=shared_page_state.SharedDesktopPageState)
self.archive_data_file = 'data/skia_pokemonwiki_desktop.json'
def RunNavigateSteps(self, action_runner):
action_runner.Navigate(self.url)
action_runner.Wait(5)
class SkiaPokemonwikiDesktopPageSet(story.StorySet):
""" Pages designed to represent the median, not highly optimized web """
def __init__(self):
super(SkiaPokemonwikiDesktopPageSet, self).__init__(
archive_data_file='data/skia_pokemonwiki_desktop.json')
urls_list = [
# Why: http://code.google.com/p/chromium/issues/detail?id=136555
'http://en.wikipedia.org/wiki/List_of_Pok%C3%A9mon',
]
for url in urls_list:
self.AddStory(SkiaBuildbotDesktopPage(url, self))
| apache-2.0 |
2014c2g1/c2g1 | exts/exts/sphinxcontrib/bibtex/__init__.py | 38 | 4471 | # -*- coding: utf-8 -*-
"""
Sphinx Interface
~~~~~~~~~~~~~~~~
.. autofunction:: setup
.. autofunction:: init_bibtex_cache
.. autofunction:: purge_bibtex_cache
.. autofunction:: process_citations
.. autofunction:: process_citation_references
.. autofunction:: check_duplicate_labels
"""
import docutils.nodes
from sphinxcontrib.bibtex.cache import Cache
from sphinxcontrib.bibtex.nodes import bibliography
from sphinxcontrib.bibtex.roles import CiteRole
from sphinxcontrib.bibtex.directives import BibliographyDirective
from sphinxcontrib.bibtex.transforms import BibliographyTransform
def init_bibtex_cache(app):
"""Create ``app.env.bibtex_cache`` if it does not exist yet.
Reset citation label dictionary.
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
"""
if not hasattr(app.env, "bibtex_cache"):
app.env.bibtex_cache = Cache()
def purge_bibtex_cache(app, env, docname):
"""Remove all information related to *docname* from the cache.
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
:param env: The sphinx build environment.
:type env: :class:`sphinx.environment.BuildEnvironment`
"""
env.bibtex_cache.purge(docname)
def process_citations(app, doctree, docname):
"""Replace labels of citation nodes by actual labels.
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
:param doctree: The document tree.
:type doctree: :class:`docutils.nodes.document`
:param docname: The document name.
:type docname: :class:`str`
"""
for node in doctree.traverse(docutils.nodes.citation):
key = node[0].astext()
try:
label = app.env.bibtex_cache.get_label_from_key(key)
except KeyError:
app.warn("could not relabel citation [%s]" % key)
else:
node[0] = docutils.nodes.label('', label)
def process_citation_references(app, doctree, docname):
"""Replace text of citation reference nodes by actual labels.
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
:param doctree: The document tree.
:type doctree: :class:`docutils.nodes.document`
:param docname: The document name.
:type docname: :class:`str`
"""
# XXX sphinx has already turned citation_reference nodes
# XXX into reference nodes
for node in doctree.traverse(docutils.nodes.reference):
# exclude sphinx [source] labels
if isinstance(node[0], docutils.nodes.Element):
if 'viewcode-link' in node[0]['classes']:
continue
text = node[0].astext()
if text.startswith('[') and text.endswith(']'):
key = text[1:-1]
try:
label = app.env.bibtex_cache.get_label_from_key(key)
except KeyError:
app.warn("could not relabel citation reference [%s]" % key)
else:
node[0] = docutils.nodes.Text('[' + label + ']')
def check_duplicate_labels(app, env):
"""Check and warn about duplicate citation labels.
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
:param env: The sphinx build environment.
:type env: :class:`sphinx.environment.BuildEnvironment`
"""
label_to_key = {}
for info in env.bibtex_cache.bibliographies.values():
for key, label in info.labels.items():
if label in label_to_key:
app.warn(
"duplicate label for keys %s and %s"
% (key, label_to_key[label]))
else:
label_to_key[label] = key
def setup(app):
"""Set up the bibtex extension:
* register directives
* register nodes
* register roles
* register transforms
* connect events to functions
:param app: The sphinx application.
:type app: :class:`sphinx.application.Sphinx`
"""
app.add_directive("bibliography", BibliographyDirective)
app.add_role("cite", CiteRole())
app.add_node(bibliography)
app.add_transform(BibliographyTransform)
app.connect("builder-inited", init_bibtex_cache)
app.connect("doctree-resolved", process_citations)
app.connect("doctree-resolved", process_citation_references)
app.connect("env-purge-doc", purge_bibtex_cache)
app.connect("env-updated", check_duplicate_labels)
| gpl-2.0 |
pgmillon/ansible | lib/ansible/modules/cloud/azure/azure_rm_cdnprofile.py | 27 | 9890 | #!/usr/bin/python
#
# Copyright (c) 2018 Hai Cao, <t-haicao@microsoft.com>, Yunge Zhu <yungez@microsoft.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_cdnprofile
version_added: "2.8"
short_description: Manage a Azure CDN profile
description:
- Create, update and delete a Azure CDN profile.
options:
resource_group:
description:
- Name of a resource group where the CDN profile exists or will be created.
required: true
name:
description:
- Name of the CDN profile.
required: true
location:
description:
- Valid Azure location. Defaults to location of the resource group.
sku:
description:
- The pricing tier, defines a CDN provider, feature list and rate of the CDN profile.
- Detailed pricing can be find at U(https://azure.microsoft.com/en-us/pricing/details/cdn/).
choices:
- standard_verizon
- premium_verizon
- custom_verizon
- standard_akamai
- standard_chinacdn
- standard_microsoft
state:
description:
- Assert the state of the CDN profile. Use C(present) to create or update a CDN profile and C(absent) to delete it.
default: present
choices:
- absent
- present
extends_documentation_fragment:
- azure
- azure_tags
author:
- Hai Cao (@caohai)
- Yunge Zhu (@yungezz)
'''
EXAMPLES = '''
- name: Create a CDN profile
azure_rm_cdnprofile:
resource_group: myResourceGroup
name: myCDN
sku: standard_akamai
tags:
testing: testing
- name: Delete the CDN profile
azure_rm_cdnprofile:
resource_group: myResourceGroup
name: myCDN
state: absent
'''
RETURN = '''
id:
description: Current state of the CDN profile.
returned: always
type: dict
example:
id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.Cdn/profiles/myCDN
'''
from ansible.module_utils.azure_rm_common import AzureRMModuleBase
try:
from azure.mgmt.cdn.models import Profile, Sku, ErrorResponseException
from azure.mgmt.cdn import CdnManagementClient
except ImportError:
# This is handled in azure_rm_common
pass
def cdnprofile_to_dict(cdnprofile):
return dict(
id=cdnprofile.id,
name=cdnprofile.name,
type=cdnprofile.type,
location=cdnprofile.location,
sku=cdnprofile.sku.name,
resource_state=cdnprofile.resource_state,
provisioning_state=cdnprofile.provisioning_state,
tags=cdnprofile.tags
)
class AzureRMCdnprofile(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(
type='str',
required=True
),
name=dict(
type='str',
required=True
),
location=dict(
type='str'
),
state=dict(
type='str',
default='present',
choices=['present', 'absent']
),
sku=dict(
type='str',
choices=['standard_verizon', 'premium_verizon', 'custom_verizon', 'standard_akamai', 'standard_chinacdn', 'standard_microsoft']
)
)
self.resource_group = None
self.name = None
self.location = None
self.state = None
self.tags = None
self.sku = None
self.cdn_client = None
required_if = [
('state', 'present', ['sku'])
]
self.results = dict(changed=False)
super(AzureRMCdnprofile, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True,
supports_tags=True,
required_if=required_if)
def exec_module(self, **kwargs):
"""Main module execution method"""
for key in list(self.module_arg_spec.keys()) + ['tags']:
setattr(self, key, kwargs[key])
self.cdn_client = self.get_cdn_client()
to_be_updated = False
resource_group = self.get_resource_group(self.resource_group)
if not self.location:
self.location = resource_group.location
response = self.get_cdnprofile()
if self.state == 'present':
if not response:
self.log("Need to create the CDN profile")
if not self.check_mode:
new_response = self.create_cdnprofile()
self.results['id'] = new_response['id']
self.results['changed'] = True
else:
self.log('Results : {0}'.format(response))
update_tags, response['tags'] = self.update_tags(response['tags'])
if response['provisioning_state'] == "Succeeded":
if update_tags:
to_be_updated = True
if to_be_updated:
self.log("Need to update the CDN profile")
if not self.check_mode:
new_response = self.update_cdnprofile()
self.results['id'] = new_response['id']
self.results['changed'] = True
elif self.state == 'absent':
if not response:
self.fail("CDN profile {0} not exists.".format(self.name))
else:
self.log("Need to delete the CDN profile")
self.results['changed'] = True
if not self.check_mode:
self.delete_cdnprofile()
self.results['id'] = response['id']
return self.results
def create_cdnprofile(self):
'''
Creates a Azure CDN profile.
:return: deserialized Azure CDN profile instance state dictionary
'''
self.log("Creating the Azure CDN profile instance {0}".format(self.name))
parameters = Profile(
location=self.location,
sku=Sku(name=self.sku),
tags=self.tags
)
import uuid
xid = str(uuid.uuid1())
try:
poller = self.cdn_client.profiles.create(self.resource_group,
self.name,
parameters,
custom_headers={'x-ms-client-request-id': xid}
)
response = self.get_poller_result(poller)
return cdnprofile_to_dict(response)
except ErrorResponseException as exc:
self.log('Error attempting to create Azure CDN profile instance.')
self.fail("Error creating Azure CDN profile instance: {0}.\n Request id: {1}".format(exc.message, xid))
def update_cdnprofile(self):
'''
Updates a Azure CDN profile.
:return: deserialized Azure CDN profile instance state dictionary
'''
self.log("Updating the Azure CDN profile instance {0}".format(self.name))
try:
poller = self.cdn_client.profiles.update(self.resource_group, self.name, self.tags)
response = self.get_poller_result(poller)
return cdnprofile_to_dict(response)
except ErrorResponseException as exc:
self.log('Error attempting to update Azure CDN profile instance.')
self.fail("Error updating Azure CDN profile instance: {0}".format(exc.message))
def delete_cdnprofile(self):
'''
Deletes the specified Azure CDN profile in the specified subscription and resource group.
:return: True
'''
self.log("Deleting the CDN profile {0}".format(self.name))
try:
poller = self.cdn_client.profiles.delete(
self.resource_group, self.name)
self.get_poller_result(poller)
return True
except ErrorResponseException as e:
self.log('Error attempting to delete the CDN profile.')
self.fail("Error deleting the CDN profile: {0}".format(e.message))
return False
def get_cdnprofile(self):
'''
Gets the properties of the specified CDN profile.
:return: deserialized CDN profile state dictionary
'''
self.log(
"Checking if the CDN profile {0} is present".format(self.name))
try:
response = self.cdn_client.profiles.get(self.resource_group, self.name)
self.log("Response : {0}".format(response))
self.log("CDN profile : {0} found".format(response.name))
return cdnprofile_to_dict(response)
except ErrorResponseException:
self.log('Did not find the CDN profile.')
return False
def get_cdn_client(self):
if not self.cdn_client:
self.cdn_client = self.get_mgmt_svc_client(CdnManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-04-02')
return self.cdn_client
def main():
"""Main execution"""
AzureRMCdnprofile()
if __name__ == '__main__':
main()
| gpl-3.0 |
SerCeMan/intellij-community | python/lib/Lib/uuid.py | 170 | 20165 | r"""UUID objects (universally unique identifiers) according to RFC 4122.
This module provides immutable UUID objects (class UUID) and the functions
uuid1(), uuid3(), uuid4(), uuid5() for generating version 1, 3, 4, and 5
UUIDs as specified in RFC 4122.
If all you want is a unique ID, you should probably call uuid1() or uuid4().
Note that uuid1() may compromise privacy since it creates a UUID containing
the computer's network address. uuid4() creates a random UUID.
Typical usage:
>>> import uuid
# make a UUID based on the host ID and current time
>>> uuid.uuid1()
UUID('a8098c1a-f86e-11da-bd1a-00112444be1e')
# make a UUID using an MD5 hash of a namespace UUID and a name
>>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
# make a random UUID
>>> uuid.uuid4()
UUID('16fd2706-8baf-433b-82eb-8c7fada847da')
# make a UUID using a SHA-1 hash of a namespace UUID and a name
>>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')
# make a UUID from a string of hex digits (braces and hyphens ignored)
>>> x = uuid.UUID('{00010203-0405-0607-0809-0a0b0c0d0e0f}')
# convert a UUID to a string of hex digits in standard form
>>> str(x)
'00010203-0405-0607-0809-0a0b0c0d0e0f'
# get the raw 16 bytes of the UUID
>>> x.bytes
'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f'
# make a UUID from a 16-byte string
>>> uuid.UUID(bytes=x.bytes)
UUID('00010203-0405-0607-0809-0a0b0c0d0e0f')
"""
__author__ = 'Ka-Ping Yee <ping@zesty.ca>'
RESERVED_NCS, RFC_4122, RESERVED_MICROSOFT, RESERVED_FUTURE = [
'reserved for NCS compatibility', 'specified in RFC 4122',
'reserved for Microsoft compatibility', 'reserved for future definition']
class UUID(object):
"""Instances of the UUID class represent UUIDs as specified in RFC 4122.
UUID objects are immutable, hashable, and usable as dictionary keys.
Converting a UUID to a string with str() yields something in the form
'12345678-1234-1234-1234-123456789abc'. The UUID constructor accepts
five possible forms: a similar string of hexadecimal digits, or a tuple
of six integer fields (with 32-bit, 16-bit, 16-bit, 8-bit, 8-bit, and
48-bit values respectively) as an argument named 'fields', or a string
of 16 bytes (with all the integer fields in big-endian order) as an
argument named 'bytes', or a string of 16 bytes (with the first three
fields in little-endian order) as an argument named 'bytes_le', or a
single 128-bit integer as an argument named 'int'.
UUIDs have these read-only attributes:
bytes the UUID as a 16-byte string (containing the six
integer fields in big-endian byte order)
bytes_le the UUID as a 16-byte string (with time_low, time_mid,
and time_hi_version in little-endian byte order)
fields a tuple of the six integer fields of the UUID,
which are also available as six individual attributes
and two derived attributes:
time_low the first 32 bits of the UUID
time_mid the next 16 bits of the UUID
time_hi_version the next 16 bits of the UUID
clock_seq_hi_variant the next 8 bits of the UUID
clock_seq_low the next 8 bits of the UUID
node the last 48 bits of the UUID
time the 60-bit timestamp
clock_seq the 14-bit sequence number
hex the UUID as a 32-character hexadecimal string
int the UUID as a 128-bit integer
urn the UUID as a URN as specified in RFC 4122
variant the UUID variant (one of the constants RESERVED_NCS,
RFC_4122, RESERVED_MICROSOFT, or RESERVED_FUTURE)
version the UUID version number (1 through 5, meaningful only
when the variant is RFC_4122)
"""
def __init__(self, hex=None, bytes=None, bytes_le=None, fields=None,
int=None, version=None):
r"""Create a UUID from either a string of 32 hexadecimal digits,
a string of 16 bytes as the 'bytes' argument, a string of 16 bytes
in little-endian order as the 'bytes_le' argument, a tuple of six
integers (32-bit time_low, 16-bit time_mid, 16-bit time_hi_version,
8-bit clock_seq_hi_variant, 8-bit clock_seq_low, 48-bit node) as
the 'fields' argument, or a single 128-bit integer as the 'int'
argument. When a string of hex digits is given, curly braces,
hyphens, and a URN prefix are all optional. For example, these
expressions all yield the same UUID:
UUID('{12345678-1234-5678-1234-567812345678}')
UUID('12345678123456781234567812345678')
UUID('urn:uuid:12345678-1234-5678-1234-567812345678')
UUID(bytes='\x12\x34\x56\x78'*4)
UUID(bytes_le='\x78\x56\x34\x12\x34\x12\x78\x56' +
'\x12\x34\x56\x78\x12\x34\x56\x78')
UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678))
UUID(int=0x12345678123456781234567812345678)
Exactly one of 'hex', 'bytes', 'bytes_le', 'fields', or 'int' must
be given. The 'version' argument is optional; if given, the resulting
UUID will have its variant and version set according to RFC 4122,
overriding the given 'hex', 'bytes', 'bytes_le', 'fields', or 'int'.
"""
if [hex, bytes, bytes_le, fields, int].count(None) != 4:
raise TypeError('need one of hex, bytes, bytes_le, fields, or int')
if hex is not None:
hex = hex.replace('urn:', '').replace('uuid:', '')
hex = hex.strip('{}').replace('-', '')
if len(hex) != 32:
raise ValueError('badly formed hexadecimal UUID string')
int = long(hex, 16)
if bytes_le is not None:
if len(bytes_le) != 16:
raise ValueError('bytes_le is not a 16-char string')
bytes = (bytes_le[3] + bytes_le[2] + bytes_le[1] + bytes_le[0] +
bytes_le[5] + bytes_le[4] + bytes_le[7] + bytes_le[6] +
bytes_le[8:])
if bytes is not None:
if len(bytes) != 16:
raise ValueError('bytes is not a 16-char string')
int = long(('%02x'*16) % tuple(map(ord, bytes)), 16)
if fields is not None:
if len(fields) != 6:
raise ValueError('fields is not a 6-tuple')
(time_low, time_mid, time_hi_version,
clock_seq_hi_variant, clock_seq_low, node) = fields
if not 0 <= time_low < 1<<32L:
raise ValueError('field 1 out of range (need a 32-bit value)')
if not 0 <= time_mid < 1<<16L:
raise ValueError('field 2 out of range (need a 16-bit value)')
if not 0 <= time_hi_version < 1<<16L:
raise ValueError('field 3 out of range (need a 16-bit value)')
if not 0 <= clock_seq_hi_variant < 1<<8L:
raise ValueError('field 4 out of range (need an 8-bit value)')
if not 0 <= clock_seq_low < 1<<8L:
raise ValueError('field 5 out of range (need an 8-bit value)')
if not 0 <= node < 1<<48L:
raise ValueError('field 6 out of range (need a 48-bit value)')
clock_seq = (clock_seq_hi_variant << 8L) | clock_seq_low
int = ((time_low << 96L) | (time_mid << 80L) |
(time_hi_version << 64L) | (clock_seq << 48L) | node)
if int is not None:
if not 0 <= int < 1<<128L:
raise ValueError('int is out of range (need a 128-bit value)')
if version is not None:
if not 1 <= version <= 5:
raise ValueError('illegal version number')
# Set the variant to RFC 4122.
int &= ~(0xc000 << 48L)
int |= 0x8000 << 48L
# Set the version number.
int &= ~(0xf000 << 64L)
int |= version << 76L
self.__dict__['int'] = int
def __cmp__(self, other):
if isinstance(other, UUID):
return cmp(self.int, other.int)
return NotImplemented
def __hash__(self):
return hash(self.int)
def __int__(self):
return self.int
def __repr__(self):
return 'UUID(%r)' % str(self)
def __setattr__(self, name, value):
raise TypeError('UUID objects are immutable')
def __str__(self):
hex = '%032x' % self.int
return '%s-%s-%s-%s-%s' % (
hex[:8], hex[8:12], hex[12:16], hex[16:20], hex[20:])
def get_bytes(self):
bytes = ''
for shift in range(0, 128, 8):
bytes = chr((self.int >> shift) & 0xff) + bytes
return bytes
bytes = property(get_bytes)
def get_bytes_le(self):
bytes = self.bytes
return (bytes[3] + bytes[2] + bytes[1] + bytes[0] +
bytes[5] + bytes[4] + bytes[7] + bytes[6] + bytes[8:])
bytes_le = property(get_bytes_le)
def get_fields(self):
return (self.time_low, self.time_mid, self.time_hi_version,
self.clock_seq_hi_variant, self.clock_seq_low, self.node)
fields = property(get_fields)
def get_time_low(self):
return self.int >> 96L
time_low = property(get_time_low)
def get_time_mid(self):
return (self.int >> 80L) & 0xffff
time_mid = property(get_time_mid)
def get_time_hi_version(self):
return (self.int >> 64L) & 0xffff
time_hi_version = property(get_time_hi_version)
def get_clock_seq_hi_variant(self):
return (self.int >> 56L) & 0xff
clock_seq_hi_variant = property(get_clock_seq_hi_variant)
def get_clock_seq_low(self):
return (self.int >> 48L) & 0xff
clock_seq_low = property(get_clock_seq_low)
def get_time(self):
return (((self.time_hi_version & 0x0fffL) << 48L) |
(self.time_mid << 32L) | self.time_low)
time = property(get_time)
def get_clock_seq(self):
return (((self.clock_seq_hi_variant & 0x3fL) << 8L) |
self.clock_seq_low)
clock_seq = property(get_clock_seq)
def get_node(self):
return self.int & 0xffffffffffff
node = property(get_node)
def get_hex(self):
return '%032x' % self.int
hex = property(get_hex)
def get_urn(self):
return 'urn:uuid:' + str(self)
urn = property(get_urn)
def get_variant(self):
if not self.int & (0x8000 << 48L):
return RESERVED_NCS
elif not self.int & (0x4000 << 48L):
return RFC_4122
elif not self.int & (0x2000 << 48L):
return RESERVED_MICROSOFT
else:
return RESERVED_FUTURE
variant = property(get_variant)
def get_version(self):
# The version bits are only meaningful for RFC 4122 UUIDs.
if self.variant == RFC_4122:
return int((self.int >> 76L) & 0xf)
version = property(get_version)
def _find_mac(command, args, hw_identifiers, get_index):
import os
for dir in ['', '/sbin/', '/usr/sbin']:
executable = os.path.join(dir, command)
if not os.path.exists(executable):
continue
try:
# LC_ALL to get English output, 2>/dev/null to
# prevent output on stderr
cmd = 'LC_ALL=C %s %s 2>/dev/null' % (executable, args)
pipe = os.popen(cmd)
except IOError:
continue
for line in pipe:
words = line.lower().split()
for i in range(len(words)):
if words[i] in hw_identifiers:
return int(words[get_index(i)].replace(':', ''), 16)
return None
def _ifconfig_getnode():
"""Get the hardware address on Unix by running ifconfig."""
# This works on Linux ('' or '-a'), Tru64 ('-av'), but not all Unixes.
for args in ('', '-a', '-av'):
mac = _find_mac('ifconfig', args, ['hwaddr', 'ether'], lambda i: i+1)
if mac:
return mac
import socket
ip_addr = socket.gethostbyname(socket.gethostname())
# Try getting the MAC addr from arp based on our IP address (Solaris).
mac = _find_mac('arp', '-an', [ip_addr], lambda i: -1)
if mac:
return mac
# This might work on HP-UX.
mac = _find_mac('lanscan', '-ai', ['lan0'], lambda i: 0)
if mac:
return mac
return None
def _ipconfig_getnode():
"""Get the hardware address on Windows by running ipconfig.exe."""
import os, re
dirs = ['', r'c:\windows\system32', r'c:\winnt\system32']
try:
import ctypes
buffer = ctypes.create_string_buffer(300)
ctypes.windll.kernel32.GetSystemDirectoryA(buffer, 300)
dirs.insert(0, buffer.value.decode('mbcs'))
except:
pass
for dir in dirs:
try:
pipe = os.popen(os.path.join(dir, 'ipconfig') + ' /all')
except IOError:
continue
for line in pipe:
value = line.split(':')[-1].strip().lower()
if re.match('([0-9a-f][0-9a-f]-){5}[0-9a-f][0-9a-f]', value):
return int(value.replace('-', ''), 16)
def _netbios_getnode():
"""Get the hardware address on Windows using NetBIOS calls.
See http://support.microsoft.com/kb/118623 for details."""
import win32wnet, netbios
ncb = netbios.NCB()
ncb.Command = netbios.NCBENUM
ncb.Buffer = adapters = netbios.LANA_ENUM()
adapters._pack()
if win32wnet.Netbios(ncb) != 0:
return
adapters._unpack()
for i in range(adapters.length):
ncb.Reset()
ncb.Command = netbios.NCBRESET
ncb.Lana_num = ord(adapters.lana[i])
if win32wnet.Netbios(ncb) != 0:
continue
ncb.Reset()
ncb.Command = netbios.NCBASTAT
ncb.Lana_num = ord(adapters.lana[i])
ncb.Callname = '*'.ljust(16)
ncb.Buffer = status = netbios.ADAPTER_STATUS()
if win32wnet.Netbios(ncb) != 0:
continue
status._unpack()
bytes = map(ord, status.adapter_address)
return ((bytes[0]<<40L) + (bytes[1]<<32L) + (bytes[2]<<24L) +
(bytes[3]<<16L) + (bytes[4]<<8L) + bytes[5])
# Thanks to Thomas Heller for ctypes and for his help with its use here.
# If ctypes is available, use it to find system routines for UUID generation.
_uuid_generate_random = _uuid_generate_time = _UuidCreate = None
try:
import ctypes, ctypes.util
_buffer = ctypes.create_string_buffer(16)
# The uuid_generate_* routines are provided by libuuid on at least
# Linux and FreeBSD, and provided by libc on Mac OS X.
for libname in ['uuid', 'c']:
try:
lib = ctypes.CDLL(ctypes.util.find_library(libname))
except:
continue
if hasattr(lib, 'uuid_generate_random'):
_uuid_generate_random = lib.uuid_generate_random
if hasattr(lib, 'uuid_generate_time'):
_uuid_generate_time = lib.uuid_generate_time
# On Windows prior to 2000, UuidCreate gives a UUID containing the
# hardware address. On Windows 2000 and later, UuidCreate makes a
# random UUID and UuidCreateSequential gives a UUID containing the
# hardware address. These routines are provided by the RPC runtime.
# NOTE: at least on Tim's WinXP Pro SP2 desktop box, while the last
# 6 bytes returned by UuidCreateSequential are fixed, they don't appear
# to bear any relationship to the MAC address of any network device
# on the box.
try:
lib = ctypes.windll.rpcrt4
except:
lib = None
_UuidCreate = getattr(lib, 'UuidCreateSequential',
getattr(lib, 'UuidCreate', None))
except:
pass
def _unixdll_getnode():
"""Get the hardware address on Unix using ctypes."""
_uuid_generate_time(_buffer)
return UUID(bytes=_buffer.raw).node
def _windll_getnode():
"""Get the hardware address on Windows using ctypes."""
if _UuidCreate(_buffer) == 0:
return UUID(bytes=_buffer.raw).node
def _random_getnode():
"""Get a random node ID, with eighth bit set as suggested by RFC 4122."""
import random
return random.randrange(0, 1<<48L) | 0x010000000000L
_node = None
def getnode():
"""Get the hardware address as a 48-bit positive integer.
The first time this runs, it may launch a separate program, which could
be quite slow. If all attempts to obtain the hardware address fail, we
choose a random 48-bit number with its eighth bit set to 1 as recommended
in RFC 4122.
"""
global _node
if _node is not None:
return _node
import sys
if sys.platform == 'win32':
getters = [_windll_getnode, _netbios_getnode, _ipconfig_getnode]
else:
getters = [_unixdll_getnode, _ifconfig_getnode]
for getter in getters + [_random_getnode]:
try:
_node = getter()
except:
continue
if _node is not None:
return _node
_last_timestamp = None
def uuid1(node=None, clock_seq=None):
"""Generate a UUID from a host ID, sequence number, and the current time.
If 'node' is not given, getnode() is used to obtain the hardware
address. If 'clock_seq' is given, it is used as the sequence number;
otherwise a random 14-bit sequence number is chosen."""
# When the system provides a version-1 UUID generator, use it (but don't
# use UuidCreate here because its UUIDs don't conform to RFC 4122).
if _uuid_generate_time and node is clock_seq is None:
_uuid_generate_time(_buffer)
return UUID(bytes=_buffer.raw)
global _last_timestamp
import time
nanoseconds = int(time.time() * 1e9)
# 0x01b21dd213814000 is the number of 100-ns intervals between the
# UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.
timestamp = int(nanoseconds/100) + 0x01b21dd213814000L
if timestamp <= _last_timestamp:
timestamp = _last_timestamp + 1
_last_timestamp = timestamp
if clock_seq is None:
import random
clock_seq = random.randrange(1<<14L) # instead of stable storage
time_low = timestamp & 0xffffffffL
time_mid = (timestamp >> 32L) & 0xffffL
time_hi_version = (timestamp >> 48L) & 0x0fffL
clock_seq_low = clock_seq & 0xffL
clock_seq_hi_variant = (clock_seq >> 8L) & 0x3fL
if node is None:
node = getnode()
return UUID(fields=(time_low, time_mid, time_hi_version,
clock_seq_hi_variant, clock_seq_low, node), version=1)
def uuid3(namespace, name):
"""Generate a UUID from the MD5 hash of a namespace UUID and a name."""
import md5
hash = md5.md5(namespace.bytes + name).digest()
return UUID(bytes=hash[:16], version=3)
def uuid4():
"""Generate a random UUID."""
# When the system provides a version-4 UUID generator, use it.
if _uuid_generate_random:
_uuid_generate_random(_buffer)
return UUID(bytes=_buffer.raw)
# Otherwise, get randomness from urandom or the 'random' module.
try:
import os
return UUID(bytes=os.urandom(16), version=4)
except:
import random
bytes = [chr(random.randrange(256)) for i in range(16)]
return UUID(bytes=bytes, version=4)
def uuid5(namespace, name):
"""Generate a UUID from the SHA-1 hash of a namespace UUID and a name."""
import sha
hash = sha.sha(namespace.bytes + name).digest()
return UUID(bytes=hash[:16], version=5)
# The following standard UUIDs are for use with uuid3() or uuid5().
NAMESPACE_DNS = UUID('6ba7b810-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_URL = UUID('6ba7b811-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_OID = UUID('6ba7b812-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_X500 = UUID('6ba7b814-9dad-11d1-80b4-00c04fd430c8')
| apache-2.0 |
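The generators in the excerpt above mirror the standard-library `uuid` API, so their behaviour can be sketched against the modern stdlib module (same interface; the `python.org` name-based values are the well-known examples from the Python documentation):

```python
import uuid

# Name-based UUIDs are deterministic: a hash of a namespace UUID plus a name.
print(uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org'))  # 6fa459ea-ee8a-3ca4-894e-db77e160355e
print(uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org'))  # 886313e1-3b8a-5372-9b90-0c9aee199e5d

# Random UUIDs carry their version number and the RFC 4122 variant bits.
u = uuid.uuid4()
print(u.version, u.variant)  # 4 specified in RFC 4122
```

The same calls work against the Python 2 module shown above; only `print` syntax differs.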
lorenamgUMU/sakai | reference/library/src/webapp/editor/FCKeditor/editor/filemanager/connectors/py/fckutil.py | 114 | 4490 | #!/usr/bin/env python
"""
FCKeditor - The text editor for Internet - http://www.fckeditor.net
Copyright (C) 2003-2010 Frederico Caldeira Knabben
== BEGIN LICENSE ==
Licensed under the terms of any of the following licenses at your
choice:
- GNU General Public License Version 2 or later (the "GPL")
http://www.gnu.org/licenses/gpl.html
- GNU Lesser General Public License Version 2.1 or later (the "LGPL")
http://www.gnu.org/licenses/lgpl.html
- Mozilla Public License Version 1.1 or later (the "MPL")
http://www.mozilla.org/MPL/MPL-1.1.html
== END LICENSE ==
Utility functions for the File Manager Connector for Python
"""
import string, re
import os
import config as Config
# Generic manipulation functions
def removeExtension(fileName):
index = fileName.rindex(".")
newFileName = fileName[0:index]
return newFileName
def getExtension(fileName):
index = fileName.rindex(".") + 1
fileExtension = fileName[index:]
return fileExtension
def removeFromStart(string, char):
return string.lstrip(char)
def removeFromEnd(string, char):
return string.rstrip(char)
# Path functions
def combinePaths( basePath, folder ):
return removeFromEnd( basePath, '/' ) + '/' + removeFromStart( folder, '/' )
def getFileName(filename):
	" Purpose: helper function to extract the filename "
for splitChar in ["/", "\\"]:
array = filename.split(splitChar)
if (len(array) > 1):
filename = array[-1]
return filename
def sanitizeFolderName( newFolderName ):
"Do a cleanup of the folder name to avoid possible problems"
# Remove . \ / | : ? * " < > and control characters
return re.sub( '\\.|\\\\|\\/|\\||\\:|\\?|\\*|"|<|>|[\x00-\x1f\x7f-\x9f]', '_', newFolderName )
def sanitizeFileName( newFileName ):
"Do a cleanup of the file name to avoid possible problems"
# Replace dots in the name with underscores (only one dot can be there... security issue).
if ( Config.ForceSingleExtension ): # remove dots
newFileName = re.sub ( '\\.(?![^.]*$)', '_', newFileName ) ;
newFileName = newFileName.replace('\\','/') # convert windows to unix path
newFileName = os.path.basename (newFileName) # strip directories
# Remove \ / | : ? *
return re.sub ( '\\\\|\\/|\\||\\:|\\?|\\*|"|<|>|[\x00-\x1f\x7f-\x9f]/', '_', newFileName )
def getCurrentFolder(currentFolder):
if not currentFolder:
currentFolder = '/'
# Check the current folder syntax (must begin and end with a slash).
if (currentFolder[-1] <> "/"):
currentFolder += "/"
if (currentFolder[0] <> "/"):
currentFolder = "/" + currentFolder
# Ensure the folder path has no double-slashes
while '//' in currentFolder:
currentFolder = currentFolder.replace('//','/')
# Check for invalid folder paths (..)
if '..' in currentFolder or '\\' in currentFolder:
return None
	# Check for invalid characters and sequences in the folder path
if re.search( '(/\\.)|(//)|([\\\\:\\*\\?\\""\\<\\>\\|]|[\x00-\x1F]|[\x7f-\x9f])', currentFolder ):
return None
return currentFolder
def mapServerPath( environ, url):
	" Emulate the asp Server.mapPath function. Given a URL path, return the physical directory it corresponds to "
# This isn't correct but for the moment there's no other solution
# If this script is under a virtual directory or symlink it will detect the problem and stop
return combinePaths( getRootPath(environ), url )
def mapServerFolder(resourceTypePath, folderPath):
return combinePaths ( resourceTypePath , folderPath )
def getRootPath(environ):
"Purpose: returns the root path on the server"
# WARNING: this may not be thread safe, and doesn't work w/ VirtualServer/mod_python
# Use Config.UserFilesAbsolutePath instead
if environ.has_key('DOCUMENT_ROOT'):
return environ['DOCUMENT_ROOT']
else:
realPath = os.path.realpath( './' )
selfPath = environ['SCRIPT_FILENAME']
selfPath = selfPath [ : selfPath.rfind( '/' ) ]
selfPath = selfPath.replace( '/', os.path.sep)
position = realPath.find(selfPath)
# This can check only that this script isn't run from a virtual dir
# But it avoids the problems that arise if it isn't checked
if ( position < 0 or position <> len(realPath) - len(selfPath) or realPath[ : position ]==''):
raise Exception('Sorry, can\'t map "UserFilesPath" to a physical path. You must set the "UserFilesAbsolutePath" value in "editor/filemanager/connectors/py/config.py".')
return realPath[ : position ]
| apache-2.0 |
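The sanitation rules above can be condensed into a self-contained sketch (a Python 3 re-implementation for illustration only — the connector's own `sanitizeFileName` reads the flag from `Config.ForceSingleExtension` and keeps the quirks of its original regexes):

```python
import os
import re

def sanitize_file_name(name, force_single_extension=True):
    # Collapse every dot except the last into an underscore, so a file can
    # carry only a single extension (mirrors ForceSingleExtension).
    if force_single_extension:
        name = re.sub(r'\.(?![^.]*$)', '_', name)
    name = name.replace('\\', '/')   # normalise Windows separators
    name = os.path.basename(name)    # strip any directory component
    # Replace characters that are unsafe in file names.
    return re.sub(r'[\\/|:?*"<>\x00-\x1f\x7f-\x9f]', '_', name)

print(sanitize_file_name('evil.php.jpg'))        # evil_php.jpg
print(sanitize_file_name('..\\..\\etc/passwd'))  # passwd
```

The first example shows why double extensions are collapsed: a server configured to execute `.php` should never see one hiding before a `.jpg`.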
mylxiaoyi/wicd | tests/testwnettools.py | 4 | 2217 | import unittest
from wicd import wnettools
class TestWnettools(unittest.TestCase):
def setUp(self):
self.interface = wnettools.BaseInterface('eth0')
def test_find_wireless_interface(self):
interfaces = wnettools.GetWirelessInterfaces()
# wlan0 may change depending on your system
#self.assertTrue('wlan0' in interfaces)
self.assertTrue(type(interfaces) == list)
def test_find_wired_interface(self):
interfaces = wnettools.GetWiredInterfaces()
# eth0 may change depending on your system
self.assertTrue('eth0' in interfaces)
def test_wext_is_valid_wpasupplicant_driver(self):
self.assertTrue(wnettools.IsValidWpaSuppDriver('wext'))
def test_needs_external_calls_not_implemented(self):
self.assertRaises(NotImplementedError, wnettools.NeedsExternalCalls)
def test_is_up_boolean(self):
self.assertTrue(type(self.interface.IsUp()) == bool)
def test_enable_debug_mode(self):
self.interface.SetDebugMode(True)
self.assertTrue(self.interface.verbose)
def test_disable_debug_mode(self):
self.interface.SetDebugMode(False)
self.assertFalse(self.interface.verbose)
def test_interface_name_sanitation(self):
interface = wnettools.BaseInterface('blahblah; uptime > /tmp/blah | cat')
self.assertEquals(interface.iface, 'blahblahuptimetmpblahcat')
def test_freq_translation_low(self):
freq = '2.412 GHz'
interface = wnettools.BaseWirelessInterface('wlan0')
self.assertEquals(interface._FreqToChannel(freq), 1)
def test_freq_translation_high(self):
freq = '2.484 GHz'
interface = wnettools.BaseWirelessInterface('wlan0')
self.assertEquals(interface._FreqToChannel(freq), 14)
def test_generate_psk(self):
interface = wnettools.BaseWirelessInterface('wlan0')
if 'wlan0' in wnettools.GetWirelessInterfaces():
psk = interface.GeneratePSK({'essid' : 'Network 1', 'key' : 'arandompassphrase'})
self.assertEquals(psk, 'd70463014514f4b4ebb8e3aebbdec13f4437ac3a9af084b3433f3710e658a7be')
def suite():
suite = unittest.TestSuite()
tests = []
[ tests.append(test) for test in dir(TestWnettools) if test.startswith('test') ]
for test in tests:
suite.addTest(TestWnettools(test))
return suite
if __name__ == '__main__':
unittest.main()
| gpl-2.0 |
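The `_FreqToChannel` expectations in the tests above follow the 2.4 GHz channel plan, which a minimal mapping can reproduce (an illustrative function, not wicd's actual implementation, which parses wireless-tool output):

```python
def freq_to_channel(freq_str):
    # '2.412 GHz' -> 2412 MHz; round() guards against float artifacts.
    mhz = round(float(freq_str.split()[0]) * 1000)
    if mhz == 2484:                 # channel 14 sits 12 MHz above channel 13
        return 14
    return (mhz - 2412) // 5 + 1    # channels 1-13 are spaced 5 MHz apart

print(freq_to_channel('2.412 GHz'))  # 1
print(freq_to_channel('2.437 GHz'))  # 6
print(freq_to_channel('2.484 GHz'))  # 14
```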
jhjguxin/blogserver | lib/python2.7/site-packages/django/contrib/localflavor/fi/fi_municipalities.py | 394 | 10822 | # -*- coding: utf-8 -*-
"""
An alphabetical list of Finnish municipalities for use as `choices` in a
formfield.
This exists in this standalone file so that it's only imported into memory
when explicitly needed.
"""
MUNICIPALITY_CHOICES = (
('akaa', u"Akaa"),
('alajarvi', u"Alajärvi"),
('alavieska', u"Alavieska"),
('alavus', u"Alavus"),
('artjarvi', u"Artjärvi"),
('asikkala', u"Asikkala"),
('askola', u"Askola"),
('aura', u"Aura"),
('brando', u"Brändö"),
('eckero', u"Eckerö"),
('enonkoski', u"Enonkoski"),
('enontekio', u"Enontekiö"),
('espoo', u"Espoo"),
('eura', u"Eura"),
('eurajoki', u"Eurajoki"),
('evijarvi', u"Evijärvi"),
('finstrom', u"Finström"),
('forssa', u"Forssa"),
('foglo', u"Föglö"),
('geta', u"Geta"),
('haapajarvi', u"Haapajärvi"),
('haapavesi', u"Haapavesi"),
('hailuoto', u"Hailuoto"),
('halsua', u"Halsua"),
('hamina', u"Hamina"),
('hammarland', u"Hammarland"),
('hankasalmi', u"Hankasalmi"),
('hanko', u"Hanko"),
('harjavalta', u"Harjavalta"),
('hartola', u"Hartola"),
('hattula', u"Hattula"),
('haukipudas', u"Haukipudas"),
('hausjarvi', u"Hausjärvi"),
('heinola', u"Heinola"),
('heinavesi', u"Heinävesi"),
('helsinki', u"Helsinki"),
('hirvensalmi', u"Hirvensalmi"),
('hollola', u"Hollola"),
('honkajoki', u"Honkajoki"),
('huittinen', u"Huittinen"),
('humppila', u"Humppila"),
('hyrynsalmi', u"Hyrynsalmi"),
('hyvinkaa', u"Hyvinkää"),
('hameenkoski', u"Hämeenkoski"),
('hameenkyro', u"Hämeenkyrö"),
('hameenlinna', u"Hämeenlinna"),
('ii', u"Ii"),
('iisalmi', u"Iisalmi"),
('iitti', u"Iitti"),
('ikaalinen', u"Ikaalinen"),
('ilmajoki', u"Ilmajoki"),
('ilomantsi', u"Ilomantsi"),
('imatra', u"Imatra"),
('inari', u"Inari"),
('inkoo', u"Inkoo"),
('isojoki', u"Isojoki"),
('isokyro', u"Isokyrö"),
('jalasjarvi', u"Jalasjärvi"),
('janakkala', u"Janakkala"),
('joensuu', u"Joensuu"),
('jokioinen', u"Jokioinen"),
('jomala', u"Jomala"),
('joroinen', u"Joroinen"),
('joutsa', u"Joutsa"),
('juankoski', u"Juankoski"),
('juuka', u"Juuka"),
('juupajoki', u"Juupajoki"),
('juva', u"Juva"),
('jyvaskyla', u"Jyväskylä"),
('jamijarvi', u"Jämijärvi"),
('jamsa', u"Jämsä"),
('jarvenpaa', u"Järvenpää"),
('kaarina', u"Kaarina"),
('kaavi', u"Kaavi"),
('kajaani', u"Kajaani"),
('kalajoki', u"Kalajoki"),
('kangasala', u"Kangasala"),
('kangasniemi', u"Kangasniemi"),
('kankaanpaa', u"Kankaanpää"),
('kannonkoski', u"Kannonkoski"),
('kannus', u"Kannus"),
('karijoki', u"Karijoki"),
('karjalohja', u"Karjalohja"),
('karkkila', u"Karkkila"),
('karstula', u"Karstula"),
('karttula', u"Karttula"),
('karvia', u"Karvia"),
('kaskinen', u"Kaskinen"),
('kauhajoki', u"Kauhajoki"),
('kauhava', u"Kauhava"),
('kauniainen', u"Kauniainen"),
('kaustinen', u"Kaustinen"),
('keitele', u"Keitele"),
('kemi', u"Kemi"),
('kemijarvi', u"Kemijärvi"),
('keminmaa', u"Keminmaa"),
('kemionsaari', u"Kemiönsaari"),
('kempele', u"Kempele"),
('kerava', u"Kerava"),
('kerimaki', u"Kerimäki"),
('kesalahti', u"Kesälahti"),
('keuruu', u"Keuruu"),
('kihnio', u"Kihniö"),
('kiikoinen', u"Kiikoinen"),
('kiiminki', u"Kiiminki"),
('kinnula', u"Kinnula"),
('kirkkonummi', u"Kirkkonummi"),
('kitee', u"Kitee"),
('kittila', u"Kittilä"),
('kiuruvesi', u"Kiuruvesi"),
('kivijarvi', u"Kivijärvi"),
('kokemaki', u"Kokemäki"),
('kokkola', u"Kokkola"),
('kolari', u"Kolari"),
('konnevesi', u"Konnevesi"),
('kontiolahti', u"Kontiolahti"),
('korsnas', u"Korsnäs"),
('koskitl', u"Koski Tl"),
('kotka', u"Kotka"),
('kouvola', u"Kouvola"),
('kristiinankaupunki', u"Kristiinankaupunki"),
('kruunupyy', u"Kruunupyy"),
('kuhmalahti', u"Kuhmalahti"),
('kuhmo', u"Kuhmo"),
('kuhmoinen', u"Kuhmoinen"),
('kumlinge', u"Kumlinge"),
('kuopio', u"Kuopio"),
('kuortane', u"Kuortane"),
('kurikka', u"Kurikka"),
('kustavi', u"Kustavi"),
('kuusamo', u"Kuusamo"),
('kylmakoski', u"Kylmäkoski"),
('kyyjarvi', u"Kyyjärvi"),
('karkola', u"Kärkölä"),
('karsamaki', u"Kärsämäki"),
('kokar', u"Kökar"),
('koylio', u"Köyliö"),
('lahti', u"Lahti"),
('laihia', u"Laihia"),
('laitila', u"Laitila"),
('lapinjarvi', u"Lapinjärvi"),
('lapinlahti', u"Lapinlahti"),
('lappajarvi', u"Lappajärvi"),
('lappeenranta', u"Lappeenranta"),
('lapua', u"Lapua"),
('laukaa', u"Laukaa"),
('lavia', u"Lavia"),
('lemi', u"Lemi"),
('lemland', u"Lemland"),
('lempaala', u"Lempäälä"),
('leppavirta', u"Leppävirta"),
('lestijarvi', u"Lestijärvi"),
('lieksa', u"Lieksa"),
('lieto', u"Lieto"),
('liminka', u"Liminka"),
('liperi', u"Liperi"),
('lohja', u"Lohja"),
('loimaa', u"Loimaa"),
('loppi', u"Loppi"),
('loviisa', u"Loviisa"),
('luhanka', u"Luhanka"),
('lumijoki', u"Lumijoki"),
('lumparland', u"Lumparland"),
('luoto', u"Luoto"),
('luumaki', u"Luumäki"),
('luvia', u"Luvia"),
('lansi-turunmaa', u"Länsi-Turunmaa"),
('maalahti', u"Maalahti"),
('maaninka', u"Maaninka"),
('maarianhamina', u"Maarianhamina"),
('marttila', u"Marttila"),
('masku', u"Masku"),
('merijarvi', u"Merijärvi"),
('merikarvia', u"Merikarvia"),
('miehikkala', u"Miehikkälä"),
('mikkeli', u"Mikkeli"),
('muhos', u"Muhos"),
('multia', u"Multia"),
('muonio', u"Muonio"),
('mustasaari', u"Mustasaari"),
('muurame', u"Muurame"),
('mynamaki', u"Mynämäki"),
('myrskyla', u"Myrskylä"),
('mantsala', u"Mäntsälä"),
('mantta-vilppula', u"Mänttä-Vilppula"),
('mantyharju', u"Mäntyharju"),
('naantali', u"Naantali"),
('nakkila', u"Nakkila"),
('nastola', u"Nastola"),
('nilsia', u"Nilsiä"),
('nivala', u"Nivala"),
('nokia', u"Nokia"),
('nousiainen', u"Nousiainen"),
('nummi-pusula', u"Nummi-Pusula"),
('nurmes', u"Nurmes"),
('nurmijarvi', u"Nurmijärvi"),
('narpio', u"Närpiö"),
('oravainen', u"Oravainen"),
('orimattila', u"Orimattila"),
('oripaa', u"Oripää"),
('orivesi', u"Orivesi"),
('oulainen', u"Oulainen"),
('oulu', u"Oulu"),
('oulunsalo', u"Oulunsalo"),
('outokumpu', u"Outokumpu"),
('padasjoki', u"Padasjoki"),
('paimio', u"Paimio"),
('paltamo', u"Paltamo"),
('parikkala', u"Parikkala"),
('parkano', u"Parkano"),
('pedersore', u"Pedersöre"),
('pelkosenniemi', u"Pelkosenniemi"),
('pello', u"Pello"),
('perho', u"Perho"),
('pertunmaa', u"Pertunmaa"),
('petajavesi', u"Petäjävesi"),
('pieksamaki', u"Pieksämäki"),
('pielavesi', u"Pielavesi"),
('pietarsaari', u"Pietarsaari"),
('pihtipudas', u"Pihtipudas"),
('pirkkala', u"Pirkkala"),
('polvijarvi', u"Polvijärvi"),
('pomarkku', u"Pomarkku"),
('pori', u"Pori"),
('pornainen', u"Pornainen"),
('porvoo', u"Porvoo"),
('posio', u"Posio"),
('pudasjarvi', u"Pudasjärvi"),
('pukkila', u"Pukkila"),
('punkaharju', u"Punkaharju"),
('punkalaidun', u"Punkalaidun"),
('puolanka', u"Puolanka"),
('puumala', u"Puumala"),
('pyhtaa', u"Pyhtää"),
('pyhajoki', u"Pyhäjoki"),
('pyhajarvi', u"Pyhäjärvi"),
('pyhanta', u"Pyhäntä"),
('pyharanta', u"Pyhäranta"),
('palkane', u"Pälkäne"),
('poytya', u"Pöytyä"),
('raahe', u"Raahe"),
('raasepori', u"Raasepori"),
('raisio', u"Raisio"),
('rantasalmi', u"Rantasalmi"),
('ranua', u"Ranua"),
('rauma', u"Rauma"),
('rautalampi', u"Rautalampi"),
('rautavaara', u"Rautavaara"),
('rautjarvi', u"Rautjärvi"),
('reisjarvi', u"Reisjärvi"),
('riihimaki', u"Riihimäki"),
('ristiina', u"Ristiina"),
('ristijarvi', u"Ristijärvi"),
('rovaniemi', u"Rovaniemi"),
('ruokolahti', u"Ruokolahti"),
('ruovesi', u"Ruovesi"),
('rusko', u"Rusko"),
('raakkyla', u"Rääkkylä"),
('saarijarvi', u"Saarijärvi"),
('salla', u"Salla"),
('salo', u"Salo"),
('saltvik', u"Saltvik"),
('sastamala', u"Sastamala"),
('sauvo', u"Sauvo"),
('savitaipale', u"Savitaipale"),
('savonlinna', u"Savonlinna"),
('savukoski', u"Savukoski"),
('seinajoki', u"Seinäjoki"),
('sievi', u"Sievi"),
('siikainen', u"Siikainen"),
('siikajoki', u"Siikajoki"),
('siikalatva', u"Siikalatva"),
('siilinjarvi', u"Siilinjärvi"),
('simo', u"Simo"),
('sipoo', u"Sipoo"),
('siuntio', u"Siuntio"),
('sodankyla', u"Sodankylä"),
('soini', u"Soini"),
('somero', u"Somero"),
('sonkajarvi', u"Sonkajärvi"),
('sotkamo', u"Sotkamo"),
('sottunga', u"Sottunga"),
('sulkava', u"Sulkava"),
('sund', u"Sund"),
('suomenniemi', u"Suomenniemi"),
('suomussalmi', u"Suomussalmi"),
('suonenjoki', u"Suonenjoki"),
('sysma', u"Sysmä"),
('sakyla', u"Säkylä"),
('taipalsaari', u"Taipalsaari"),
('taivalkoski', u"Taivalkoski"),
('taivassalo', u"Taivassalo"),
('tammela', u"Tammela"),
('tampere', u"Tampere"),
('tarvasjoki', u"Tarvasjoki"),
('tervo', u"Tervo"),
('tervola', u"Tervola"),
('teuva', u"Teuva"),
('tohmajarvi', u"Tohmajärvi"),
('toholampi', u"Toholampi"),
('toivakka', u"Toivakka"),
('tornio', u"Tornio"),
('turku', u"Turku"),
('tuusniemi', u"Tuusniemi"),
('tuusula', u"Tuusula"),
('tyrnava', u"Tyrnävä"),
('toysa', u"Töysä"),
('ulvila', u"Ulvila"),
('urjala', u"Urjala"),
('utajarvi', u"Utajärvi"),
('utsjoki', u"Utsjoki"),
('uurainen', u"Uurainen"),
('uusikaarlepyy', u"Uusikaarlepyy"),
('uusikaupunki', u"Uusikaupunki"),
('vaala', u"Vaala"),
('vaasa', u"Vaasa"),
('valkeakoski', u"Valkeakoski"),
('valtimo', u"Valtimo"),
('vantaa', u"Vantaa"),
('varkaus', u"Varkaus"),
('varpaisjarvi', u"Varpaisjärvi"),
('vehmaa', u"Vehmaa"),
('vesanto', u"Vesanto"),
('vesilahti', u"Vesilahti"),
('veteli', u"Veteli"),
('vierema', u"Vieremä"),
('vihanti', u"Vihanti"),
('vihti', u"Vihti"),
('viitasaari', u"Viitasaari"),
('vimpeli', u"Vimpeli"),
('virolahti', u"Virolahti"),
('virrat', u"Virrat"),
('vardo', u"Vårdö"),
('vahakyro', u"Vähäkyrö"),
('voyri-maksamaa', u"Vöyri-Maksamaa"),
('yli-ii', u"Yli-Ii"),
('ylitornio', u"Ylitornio"),
('ylivieska', u"Ylivieska"),
('ylojarvi', u"Ylöjärvi"),
('ypaja', u"Ypäjä"),
('ahtari', u"Ähtäri"),
('aanekoski', u"Äänekoski")
) | mit |
calebmadrigal/tracker-jacker | plugin_examples/plugin_template.py | 2 | 1426 | __author__ = 'Caleb Madrigal'
__email__ = 'caleb.madrigal@gmail.com'
__version__ = '0.0.2'
__apiversion__ = 1
__config__ = {'power': -100, 'log_level': 'ERROR', 'trigger_cooldown': 1}
class Trigger:
def __init__(self):
        # Track the total packet count and the set of unique device ids seen
self.packets_seen = 0
self.unique_mac_addrs = set()
def __call__(self,
dev_id=None,
dev_type=None,
num_bytes=None,
data_threshold=None,
vendor=None,
power=None,
power_threshold=None,
bssid=None,
ssid=None,
iface=None,
channel=None,
frame_type=None,
frame=None,
**kwargs):
self.packets_seen += 1
self.unique_mac_addrs |= {dev_id}
print('[!] Total packets: {}, Unique devices: {}'.format(self.packets_seen, len(self.unique_mac_addrs)))
print('\tdev_id = {}, dev_type = {}, num_bytes = {}, data_threshold = {}, vendor = {}, '
'power = {}, power_threshold = {}, bssid = {}, ssid = {}, iface = {}, channel = {}, '
'frame_types = {}, frame = {}'
.format(dev_id, dev_type, num_bytes, data_threshold, vendor,
power, power_threshold, bssid, ssid, iface, channel,
frame_type, frame))
| mit |
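How a capture loop might drive a `Trigger`-style plugin (the driver loop and MAC addresses here are hypothetical; only the keyword-argument call pattern comes from the `__call__` signature above):

```python
class Trigger:
    """Trimmed copy of the plugin above: count packets and unique devices."""
    def __init__(self):
        self.packets_seen = 0
        self.unique_mac_addrs = set()

    def __call__(self, dev_id=None, **kwargs):
        self.packets_seen += 1
        self.unique_mac_addrs |= {dev_id}

trigger = Trigger()
for mac in ('aa:bb:cc:00:00:01', 'aa:bb:cc:00:00:02', 'aa:bb:cc:00:00:01'):
    trigger(dev_id=mac, power=-60, channel=6)   # one call per sniffed frame

print(trigger.packets_seen, len(trigger.unique_mac_addrs))  # 3 2
```

Accepting `**kwargs` is what lets the framework grow new fields (as the long signature above shows) without breaking older plugins.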
Ervii/garage-time | garage/src/python/pants/java/nailgun_executor.py | 1 | 12377 | # coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (nested_scopes, generators, division, absolute_import, with_statement,
print_function, unicode_literals)
import hashlib
import os
import re
import time
from collections import namedtuple
import psutil
# TODO: Once we integrate standard logging into our reporting framework, we can consider making
# some of the log.debug() below into log.info(). Right now it just looks wrong on the console.
from twitter.common import log
from twitter.common.collections import maybe_list
from twitter.common.lang import Compatibility
from pants.base.build_environment import get_buildroot
from pants.java.executor import Executor, SubprocessExecutor
from pants.java.nailgun_client import NailgunClient
from pants.util.dirutil import safe_open
class NailgunExecutor(Executor):
"""Executes java programs by launching them in nailgun server.
If a nailgun is not available for a given set of jvm args and classpath, one is launched and
re-used for the given jvm args and classpath on subsequent runs.
"""
class Endpoint(namedtuple('Endpoint', ['exe', 'fingerprint', 'pid', 'port'])):
"""The coordinates for a nailgun server controlled by NailgunExecutor."""
@classmethod
def parse(cls, endpoint):
"""Parses an endpoint from a string of the form exe:fingerprint:pid:port"""
components = endpoint.split(':')
if len(components) != 4:
raise ValueError('Invalid endpoint spec %s' % endpoint)
exe, fingerprint, pid, port = components
return cls(exe, fingerprint, int(pid), int(port))
# Used to identify we own a given java nailgun server
_PANTS_NG_ARG_PREFIX = b'-Dpants.buildroot'
_PANTS_NG_ARG = b'{0}={1}'.format(_PANTS_NG_ARG_PREFIX, get_buildroot())
_PANTS_FINGERPRINT_ARG_PREFIX = b'-Dpants.nailgun.fingerprint='
@staticmethod
def _check_pid(pid):
try:
os.kill(pid, 0)
return True
except OSError:
return False
@staticmethod
def create_owner_arg(workdir):
# Currently the owner is identified via the full path to the workdir.
return b'-Dpants.nailgun.owner={0}'.format(workdir)
@classmethod
def _create_fingerprint_arg(cls, fingerprint):
return cls._PANTS_FINGERPRINT_ARG_PREFIX + fingerprint
@classmethod
def parse_fingerprint_arg(cls, args):
for arg in args:
components = arg.split(cls._PANTS_FINGERPRINT_ARG_PREFIX)
if len(components) == 2 and components[0] == '':
return components[1]
return None
@staticmethod
def _fingerprint(jvm_args, classpath, java_version):
"""Compute a fingerprint for this invocation of a Java task.
:param list jvm_args: JVM arguments passed to the java invocation
:param list classpath: The -cp arguments passed to the java invocation
:param Revision java_version: return value from Distribution.version()
:return: a hexstring representing a fingerprint of the java invocation
"""
digest = hashlib.sha1()
digest.update(''.join(sorted(jvm_args)))
digest.update(''.join(sorted(classpath))) # TODO(John Sirois): hash classpath contents?
digest.update(repr(java_version))
return digest.hexdigest()
@staticmethod
def _log_kill(pid, port=None, logger=None):
logger = logger or log.info
port_desc = ' port:{0}'.format(port if port else '')
logger('killing ng server @ pid:{pid}{port}'.format(pid=pid, port=port_desc))
@classmethod
def _find_ngs(cls, everywhere=False):
def cmdline_matches(cmdline):
if everywhere:
return any(filter(lambda arg: arg.startswith(cls._PANTS_NG_ARG_PREFIX), cmdline))
else:
return cls._PANTS_NG_ARG in cmdline
for proc in psutil.process_iter():
try:
if b'java' == proc.name and cmdline_matches(proc.cmdline):
yield proc
except (psutil.AccessDenied, psutil.NoSuchProcess):
pass
@classmethod
def killall(cls, logger=None, everywhere=False):
"""Kills all nailgun servers started by pants.
    :param bool everywhere: If ``True``, kills all pants-started nailguns on this machine; otherwise
restricts the nailguns killed to those started for the current build root.
"""
success = True
for proc in cls._find_ngs(everywhere=everywhere):
try:
cls._log_kill(proc.pid, logger=logger)
proc.kill()
except (psutil.AccessDenied, psutil.NoSuchProcess):
success = False
return success
@staticmethod
def _find_ng_listen_port(proc):
for connection in proc.get_connections(kind=b'tcp'):
if connection.status == b'LISTEN':
host, port = connection.laddr
return port
return None
@classmethod
def _find(cls, workdir):
owner_arg = cls.create_owner_arg(workdir)
for proc in cls._find_ngs(everywhere=False):
try:
if owner_arg in proc.cmdline:
fingerprint = cls.parse_fingerprint_arg(proc.cmdline)
port = cls._find_ng_listen_port(proc)
exe = proc.cmdline[0]
if fingerprint and port:
return cls.Endpoint(exe, fingerprint, proc.pid, port)
except (psutil.AccessDenied, psutil.NoSuchProcess):
pass
return None
def __init__(self, workdir, nailgun_classpath, distribution=None, ins=None):
super(NailgunExecutor, self).__init__(distribution=distribution)
self._nailgun_classpath = maybe_list(nailgun_classpath)
if not isinstance(workdir, Compatibility.string):
raise ValueError('Workdir must be a path string, given {workdir}'.format(workdir=workdir))
self._workdir = workdir
self._ng_out = os.path.join(workdir, 'stdout')
self._ng_err = os.path.join(workdir, 'stderr')
self._ins = ins
def _runner(self, classpath, main, jvm_options, args):
command = self._create_command(classpath, main, jvm_options, args)
class Runner(self.Runner):
@property
def executor(this):
return self
@property
def cmd(this):
return ' '.join(command)
def run(this, stdout=None, stderr=None):
nailgun = self._get_nailgun_client(jvm_options, classpath, stdout, stderr)
try:
log.debug('Executing via {ng_desc}: {cmd}'.format(ng_desc=nailgun, cmd=this.cmd))
return nailgun(main, *args)
except nailgun.NailgunError as e:
self.kill()
raise self.Error('Problem launching via {ng_desc} command {main} {args}: {msg}'
.format(ng_desc=nailgun, main=main, args=' '.join(args), msg=e))
return Runner()
def kill(self):
    """Kills the nailgun server owned by this executor if it's currently running."""
endpoint = self._get_nailgun_endpoint()
if endpoint:
self._log_kill(endpoint.pid, endpoint.port)
try:
os.kill(endpoint.pid, 9)
except OSError:
pass
def _get_nailgun_endpoint(self):
endpoint = self._find(self._workdir)
if endpoint:
log.debug('Found ng server launched with {endpoint}'.format(endpoint=repr(endpoint)))
return endpoint
def _get_nailgun_client(self, jvm_args, classpath, stdout, stderr):
classpath = self._nailgun_classpath + classpath
new_fingerprint = self._fingerprint(jvm_args, classpath, self._distribution.version)
endpoint = self._get_nailgun_endpoint()
running = endpoint and self._check_pid(endpoint.pid)
updated = endpoint and endpoint.fingerprint != new_fingerprint
updated = updated or (endpoint and endpoint.exe != self._distribution.java)
if running and not updated:
return self._create_ngclient(endpoint.port, stdout, stderr)
else:
if running and updated:
log.debug(
'Killing ng server launched with {endpoint}'.format(endpoint=repr(endpoint)))
self.kill()
return self._spawn_nailgun_server(new_fingerprint, jvm_args, classpath, stdout, stderr)
# 'NGServer started on 127.0.0.1, port 53785.'
_PARSE_NG_PORT = re.compile('.*\s+port\s+(\d+)\.$')
def _parse_nailgun_port(self, line):
match = self._PARSE_NG_PORT.match(line)
if not match:
raise NailgunClient.NailgunError('Failed to determine spawned ng port from response'
' line: {line}'.format(line=line))
return int(match.group(1))
def _await_nailgun_server(self, stdout, stderr, debug_desc):
# TODO(Eric Ayers) Make these cmdline/config parameters once we have a global way to fetch
# the global options scope.
nailgun_timeout_seconds = 10
max_socket_connect_attempts = 5
nailgun = None
port_parse_start = time.time()
with safe_open(self._ng_out, 'r') as ng_out:
while not nailgun:
started = ng_out.readline()
if started.find('Listening for transport dt_socket at address:') >= 0:
nailgun_timeout_seconds = 60
log.warn('Timeout extended to {timeout} seconds for debugger to attach to ng server.'
.format(timeout=nailgun_timeout_seconds))
started = ng_out.readline()
if started:
port = self._parse_nailgun_port(started)
nailgun = self._create_ngclient(port, stdout, stderr)
log.debug('Detected ng server up on port {port}'.format(port=port))
elif time.time() - port_parse_start > nailgun_timeout_seconds:
raise NailgunClient.NailgunError(
'Failed to read ng output after {sec} seconds.\n {desc}'
.format(sec=nailgun_timeout_seconds, desc=debug_desc))
attempt = 0
while nailgun:
sock = nailgun.try_connect()
if sock:
sock.close()
endpoint = self._get_nailgun_endpoint()
if endpoint:
log.debug('Connected to ng server launched with {endpoint}'
.format(endpoint=repr(endpoint)))
else:
raise NailgunClient.NailgunError('Failed to connect to ng server.')
return nailgun
elif attempt > max_socket_connect_attempts:
raise nailgun.NailgunError('Failed to connect to ng output after {count} connect attempts'
.format(count=max_socket_connect_attempts))
attempt += 1
log.debug('Failed to connect on attempt {count}'.format(count=attempt))
time.sleep(0.1)
def _create_ngclient(self, port, stdout, stderr):
return NailgunClient(port=port, ins=self._ins, out=stdout, err=stderr, workdir=get_buildroot())
def _spawn_nailgun_server(self, fingerprint, jvm_args, classpath, stdout, stderr):
log.debug('No ng server found with fingerprint {fingerprint}, spawning...'
.format(fingerprint=fingerprint))
with safe_open(self._ng_out, 'w'):
pass # truncate
pid = os.fork()
if pid != 0:
      # In the parent - block on ng being up for connections
return self._await_nailgun_server(stdout, stderr,
'jvm_args={jvm_args} classpath={classpath}'
.format(jvm_args=jvm_args, classpath=classpath))
os.setsid()
in_fd = open('/dev/null', 'r')
out_fd = safe_open(self._ng_out, 'w')
err_fd = safe_open(self._ng_err, 'w')
java = SubprocessExecutor(self._distribution)
jvm_args = jvm_args + [self._PANTS_NG_ARG,
self.create_owner_arg(self._workdir),
self._create_fingerprint_arg(fingerprint)]
process = java.spawn(classpath=classpath,
main='com.martiansoftware.nailgun.NGServer',
jvm_options=jvm_args,
args=[':0'],
stdin=in_fd,
stdout=out_fd,
stderr=err_fd,
close_fds=True)
log.debug('Spawned ng server with fingerprint {fingerprint} @ {pid}'
.format(fingerprint=fingerprint, pid=process.pid))
# Prevents finally blocks and atexit handlers from being executed, unlike sys.exit(). We
# don't want to execute finally blocks because we might, e.g., clean up tempfiles that the
# parent still needs.
os._exit(0)
def __str__(self):
return 'NailgunExecutor({dist}, server={endpoint})' \
.format(dist=self._distribution, endpoint=self._get_nailgun_endpoint())
| apache-2.0 |
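The server-reuse key computed by `_fingerprint` above can be illustrated with a Python 3 port (an illustrative re-implementation; the original feeds Python 2 `str` arguments straight into the digest):

```python
import hashlib

def fingerprint(jvm_args, classpath, java_version):
    # Sorted joins make the hash insensitive to argument order, so a server
    # launched with a permuted but equivalent invocation is still reused.
    digest = hashlib.sha1()
    digest.update(''.join(sorted(jvm_args)).encode())
    digest.update(''.join(sorted(classpath)).encode())
    digest.update(repr(java_version).encode())
    return digest.hexdigest()

a = fingerprint(['-Xmx1g', '-ea'], ['guava.jar', 'nailgun.jar'], '1.8.0')
b = fingerprint(['-ea', '-Xmx1g'], ['nailgun.jar', 'guava.jar'], '1.8.0')
c = fingerprint(['-ea', '-Xmx2g'], ['nailgun.jar', 'guava.jar'], '1.8.0')
print(a == b, a == c)  # True False
```

The hexdigest is what `_create_fingerprint_arg` embeds in the server's command line, so `parse_fingerprint_arg` can later recover it from `psutil` process listings.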
jpaalasm/pyglet | contrib/layout/layout/css.py | 29 | 32122 | #!/usr/bin/env python
'''CSS 2.1 parsing and rule matching.
This module is distinct from the CSS 2.1 properties, which are contained
in properties.py; allowing users to use the CSS syntax and rule matching for
custom properties and applications if desired.
The Stylesheet class is the top-level interface to rules and declarations.
It contains methods for quickly retrieving matching declarations for a given
element.
Implement the SelectableElement interface on your objects to allow rule
matching based on attributes, ancestors and siblings.
Currently several features of CSS are unimplemented, such as media
declarations and the stylesheet priority in the cascade (this can be faked
by applying stylesheets in increasing order of priority, with only !important
declarations being sorted incorrectly).
'''
__docformat__ = 'restructuredtext'
__version__ = '$Id$'
from StringIO import StringIO
import warnings
from Plex import *
from Plex.Traditional import re
from layout.base import *
# Interfaces
# ---------------------------------------------------------------------------
class SelectableElement(object):
'''Elements must implement this interface to allow CSS selectors to
traverse them.
'''
parent = None # SelectableElement
previous_sibling = None # SelectableElement
attributes = None # dict of str: str
id = None # str
classes = () # list of str
name = None # str
pseudo_classes = () # set of str (without colon)
def add_pseudo_class(self, c):
if self.pseudo_classes == ():
self.pseudo_classes = set()
self.pseudo_classes.add(c)
def remove_pseudo_class(self, c):
if self.pseudo_classes is not None:
self.pseudo_classes.remove(c)
# Debug methods only
def short_repr(self):
s = self.name
if self.id:
s += '#%s' % self.id
for c in self.classes:
s += '.%s' % c
return s
def __repr__(self, short=False):
s = self.short_repr()
for key, value in self.attributes.items():
if key != 'id':
s += '[%s=%r]' % (key, value)
if self.parent:
s += '(parent=%s)' % self.parent.short_repr()
if self.previous_sibling:
s += '(previous_sibling=%s)' % self.previous_sibling.short_repr()
return s
# Stylesheet objects
# ---------------------------------------------------------------------------
class RuleSet(object):
    '''A rule set is a collection of rules, organised for quick matching
against an element.
'''
def __init__(self):
self.names = {}
self.ids = {}
self.classes = {}
self.universals = []
def add_rule(self, rule):
primary = rule.selector.primary
if primary.name:
self.add_name(primary.name, rule)
elif primary.id:
self.add_id(primary.id, rule)
elif primary.classes:
self.add_classes(primary.classes, rule)
else:
self.add_universal(rule)
def add_name(self, name, rule):
if name not in self.names:
self.names[name] = []
self.names[name].append(rule)
def add_id(self, id, rule):
if id not in self.ids:
self.ids[id] = []
self.ids[id].append(rule)
def add_classes(self, classes, rule):
for klass in classes:
if klass not in self.classes:
self.classes[klass] = []
self.classes[klass].append(rule)
def add_universal(self, rule):
self.universals.append(rule)
def get_matching_rules(self, elem):
'''Return a list of declarations that should be applied to the
given element.
The element must implement the SelectableElement interface. The
declarations are returned in the order that they should be applied
        (sorted in increasing specificity). Redundant declarations are
currently not filtered out.
'''
# Quickly get some starting points.
primaries = []
primaries += self.names.get(elem.name, [])
if elem.id:
primaries += self.ids.get(elem.id, [])
for c in elem.classes or ():
primaries += self.classes.get(c, [])
primaries += self.universals
# Filter out non-matching
matches = [rule for rule in primaries if self.matches(rule, elem)]
        # Order by specificity
matches.sort(lambda a,b: a.specifity - b.specifity)
return matches
def matches(self, rule, elem):
'''Determine if the given rule applies to the given element. Returns
True if so, False otherwise.
# XXX why isn't this on Rule?
'''
if not rule.selector.primary.matches(elem):
return False
for selector in rule.selector.combiners:
if selector.combinator == '>':
elem = elem.parent
elif selector.combinator == '+':
elem = elem.previous_sibling
else:
elem = elem.parent
while elem:
if selector.simple.matches(elem):
break
elem = elem.parent
else:
return False
continue
if not elem:
return False
if not selector.simple.matches(elem):
return False
return True
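The descendant-combinator branch of `matches` above walks up the parent chain until the simple selector matches or the chain is exhausted. A minimal standalone sketch of that walk (the `Node` class and names here are illustrative stand-ins for the SelectableElement interface, not part of this module):

```python
class Node(object):
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def walk_ancestors(elem, predicate):
    # Walk up the parent chain, returning the first element (starting
    # from elem itself) for which predicate is true, or None when the
    # chain is exhausted -- the same loop used for the '' combinator.
    while elem:
        if predicate(elem):
            return elem
        elem = elem.parent
    return None
```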
class Stylesheet(object):
'''Top-level container for rules and declarations. Typically initialised
from a CSS stylesheet file. Elements can then be searched for matching
declarations.
'''
def __init__(self, data):
'''Initialise the stylesheet with the given data, which can be
a string or file-like object.
Most parse errors are ignored as defined in the CSS specification.
Any that slip through as exceptions are bugs.
'''
if not hasattr(data, 'read'):
data = StringIO(data)
scanner = Scanner(lexicon, data)
parser = Parser(scanner)
charset, imports, rules = parser.stylesheet()
self.rules = rules # only for debugging
self.ruleset = RuleSet()
for rule in rules:
self.ruleset.add_rule(rule)
def get_element_declaration_sets(self, element):
return [rule.declaration_set for rule in self.ruleset.get_matching_rules(element)]
def get_declarations(self, element):
# XXX deprecated
declarations = []
for declaration_set in self.get_element_declaration_sets(element):
declarations += declaration_set.declarations
return declarations
def matches(self, rule, elem):
return self.ruleset.matches(rule, elem)
def pprint(self):
for rule in self.rules:
rule.pprint()
def parse_style_declaration_set(style):
scanner = Scanner(lexicon, StringIO(style))
parser = Parser(scanner)
declaration_set = parser.declaration_set()
return declaration_set
def parse_style_expression(value):
scanner = Scanner(lexicon, StringIO(value))
parser = Parser(scanner)
return parser.expr()
class Import(object):
'''An @import declaration. Currently ignored.
'''
def __init__(self, location, media):
self.location = location
self.media = media
def pprint(self):
print '@import', self.location, ','.join(self.media)
class Page(object):
'''An @page declaration. Currently ignored.
'''
def __init__(self, pseudo, declaration_set):
self.pseudo = pseudo
self.declaration_set = declaration_set
class Rule(object):
'''A rule, consisting of a single selector and one or more declarations.
The rule may also contain one or more media strings, but these are
currently ignored.
'''
media = None
def __init__(self, selector, declaration_set):
self.selector = selector
self.declaration_set = declaration_set
        # Specificity calculated according to 6.4.3 with base 256
specifity = 0
simples = [selector.primary] + [s.simple for s in selector.combiners]
for s in simples:
if s.id:
specifity += 1 << 16
specifity += (1 << 8) * (len(s.classes) +
len(s.attribs) +
len(s.pseudos))
if s.name:
specifity += 1
self.specifity = specifity
def is_media(self, media):
'''Return True if this rule applies to the given media string.'''
return (self.media is None or
'all' in self.media or
media in self.media)
def pprint(self):
if self.media:
print '@media', ','.join(self.media), '{',
self.selector.pprint()
print '{'
self.declaration_set.pprint()
print '}'
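The specificity computed in `Rule.__init__` packs the CSS 2.1 counts into a single integer with base 256, so one id outweighs any realistic number of classes/attributes/pseudos, which in turn outweigh an element name. The same packing as a hedged standalone sketch (function name is illustrative):

```python
def pack_specificity(ids, class_like, names):
    # Base-256 packing as in Rule.__init__: 'class_like' counts classes,
    # attribute selectors and pseudo-classes together. Only overflows if
    # a single selector carries 256+ components of one kind.
    return (ids << 16) + (class_like << 8) + names
```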
class Selector(object):
'''A single selector, consisting of a primary SimpleSelector and zero or
more combining selectors.
    The primary selector is the final selector in a sequence of descendant
    and sibling operators, and is the starting point for searches. The
    combiners list is "backwards", running from closest descendant/sibling
    to most distant (opposite order to that listed in the CSS file).
'''
def __init__(self, primary, combiners):
self.primary = primary
self.combiners = combiners
@staticmethod
def from_string(data):
scanner = Scanner(lexicon, StringIO(data.strip()))
parser = Parser(scanner)
return parser.selector()
def pprint(self):
print 'Selector(primary=%r,combiners=%r)' % \
(self.primary, self.combiners),
class SimpleSelector(object):
'''A single selector consisting of an optional name, id, class list,
attribute value list and pseudo-class/element appliers. If none of these
are present, the selector is a universal selector.
'''
def __init__(self, name, id, classes, attribs, pseudos):
self.name = name
self.id = id
self.classes = classes
self.attribs = attribs
self.pseudos = pseudos
def __repr__(self):
s = self.name or '*'
if self.id:
s += '#%s' % self.id
for c in self.classes:
s += '.%s' % c
for a in self.attribs:
s += repr(a)
for p in self.pseudos:
s += repr(p)
return s
def matches(self, elem):
'''Determines if the selector matches the given element. Returns True
if so, False otherwise.
'''
if self.name is not None and elem.name != self.name:
return False
if self.id is not None and elem.id != self.id:
return False
for c in self.classes:
if c not in elem.classes:
return False
        for attr in self.attribs:
            # Look up by attribute name; the attributes dict is keyed by
            # str (see SelectableElement), not by Attrib objects.
            if attr.name not in elem.attributes:
                return False
            value = elem.attributes[attr.name]
if attr.op == '=' and value != attr.value:
return False
elif attr.op == '~=' and attr.value not in value.split():
return False
elif attr.op == '|=':
pre = attr.value.split('-')
if value.split('-')[:len(pre)] != pre:
return False
for pseudo in self.pseudos:
if pseudo.name not in elem.pseudo_classes:
return False
return True
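The `~=` and `|=` branches of `matches` implement the CSS word-list and dash-prefix attribute comparisons. Extracted as standalone helpers (illustrative names, same logic as above):

```python
def includes_match(attr_value, sel_value):
    # '~=' : sel_value must appear as a whitespace-separated word.
    return sel_value in attr_value.split()

def dash_match(attr_value, sel_value):
    # '|=' : attr_value must equal sel_value, or start with it followed
    # by a hyphen (e.g. lang="en-US" matches [lang|=en]).
    pre = sel_value.split('-')
    return attr_value.split('-')[:len(pre)] == pre
```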
class CombiningSelector(object):
'''A selector and the combinator required to reach the element this selector
should be applied to.
The combinator can be one of '' (ancestor), '>' (parent) or '+' (previous
sibling).
'''
def __init__(self, combinator, simple):
self.combinator = combinator
self.simple = simple
def __repr__(self):
return '%s%r' % (self.combinator or '', self.simple)
class Attrib(object):
'''An attribute name, and optional value and comparison operator.
If no operator is given, the attribute is merely checked for existence.
The operator can be one of '=', '|=', '~='. Values must be strings.
'''
def __init__(self, name, op, value):
self.name = name
self.op = op
self.value = value
def __repr__(self):
if self.op:
return '[%s%s%s]' % (self.name, self.op, self.value)
else:
            return '[%s]' % self.name
class Pseudo(object):
'''A pseudo-class or pseudo-element declaration.
The 'name' value does not include the ':' symbol.
'''
def __init__(self, name):
self.name = name
def __repr__(self):
return ':%s' % self.name
class PseudoFunction(Pseudo):
'''A function applied as a pseudo-element or pseudo-class. Currently
unused.
'''
def __init__(self, name, param):
super(PseudoFunction, self).__init__(name)
self.param = param
def __repr__(self):
return ':%s(%s)' % (self.name, self.param)
class DeclarationSet(object):
'''Set of declarations, for example within a rule block.
'''
def __init__(self, declarations):
self.declarations = declarations
def __str__(self):
return '; '.join([str(d) for d in self.declarations])
def pprint(self):
for declaration in self.declarations:
print declaration, ';'
class Declaration(object):
'''A single declaration, consisting of a property name, a list of values,
and optional priority.
The property name must be a string, the list of values is typically one in
length, but may have several values (for example, a shortcut property).
If the property has several values separated by commas, these commas
appear as separate items in the value list. If specified, the priority is
the string '!important', otherwise empty.
'''
def __init__(self, property, values, priority):
self.property = property
self.values = values
self.priority = priority
def __str__(self):
s = '%s: %s' % (self.property, ' '.join([str(v) for v in self.values]))
if self.priority:
s += ' ! important'
return s
def __repr__(self):
s = '%s: %r' % (self.property, self.values)
if self.priority:
s += ' ! important'
return '%s(%s)' % (self.__class__.__name__, s)
# Scanner tokens (in addition to the basic types given in base.py)
# ----------------------------------------------------------------
class Hash(str):
    '''A keyword beginning with a hash, or a colour hash. Value does not
    include the '#' symbol.'''
def __new__(cls, text):
return str.__new__(cls, text[1:])
class AtKeyword(str):
    '''A keyword such as @media, @import, etc. Includes the '@' symbol.'''
def __new__(cls, text):
return str.__new__(cls, text.lower())
class CDO(object):
'''Comment delimiter open token '<!--'.'''
pass
class CDC(object):
'''Comment delimiter close token '-->'.'''
pass
class Whitespace(object):
pass
class Function(str):
'''A function name, including the '(' at the end signifying the beginning
of a function call.
'''
pass
class Important(object):
'''The '!important' token.'''
pass
class Delim(str):
'''Any other punctuation, such as comma, period, operator.'''
pass
# Scanner macros
# ----------------------------------------------------------------------------
nonascii = re('[^\0-\177]')
_h = NoCase(re('[0-9a-f]'))
_unicode_num = _h + Opt(_h + Opt(_h + Opt(_h + Opt(_h + Opt(_h)))))
unicode = \
(Str('\\') + _unicode_num + Opt(Str('\r\n') | Any(' \n\r\t\f')))
escape = unicode | (Str('\\') + NoCase(re('[^\n\r\f0-9a-f]')))
nmstart = NoCase(re('[_a-z]')) | nonascii | escape
nmchar = NoCase(re('[_a-z0-9-]')) | nonascii | escape
name = Rep1(nmchar)
ident = Opt(Str('-')) + nmstart + Rep(nmchar)
num = re('[0-9]+') | re('[0-9]*\\.[0-9]+')
nl = Str('\n') | Str('\r\n') | Str('\r') | Str('\f')
string1 = (Str('"') +
Rep(AnyBut('\n\r\f\\"') | (Str('\\') + nl) | escape) +
Str('"'))
string2 = (Str("'") +
Rep(AnyBut("\n\r\f\\'") | (Str('\\') + nl) | escape) +
Str("'"))
string = string1 | string2
invalid1 = (Str('"') +
Rep(AnyBut('\n\r\f\\"') | (Str('\\') + nl) | escape))
invalid2 = (Str("'") +
Rep(AnyBut("\n\r\f\\'") | (Str('\\') + nl) | escape))
invalid = invalid1 | invalid2
w = Rep(Any(' \t\r\n\f'))
# Scanner tokens
# ----------------------------------------------------------------------------
IDENT = ident
ATKEYWORD = Str('@') + ident
STRING = string
HASH = Str('#') + name
NUMBER = num
PERCENTAGE = num + Str('%')
DIMENSION = num + ident
URITOKEN = (NoCase(Str('url(')) + w + string + w + Str(')')) | \
(NoCase(Str('url(')) + w +
# XXX Following is in CSS spec (twice!) but clearly wrong
#Rep(Any('!#$%&*-~')|nonascii|escape) + w + Str(')'))
# Using this instead:
Rep(AnyBut(')')) + w + Str(')'))
UNICODE_RANGE = NoCase(Str('U+')) + _unicode_num + Opt(Str('-') + _unicode_num)
CDO = Str('<!--')
CDC = Str('-->')
S = Rep1(Any(' \t\r\n\f'))
COMMENT = re(r'/\*[^*]*\*+([^/*][^*]*\*+)*/')
FUNCTION = ident + Str('(')
INCLUDES = Str('~=')
DASHMATCH = Str('|=')
DELIM = AnyBut('\'"')
IMPORTANT = NoCase(Str('important'))
lexicon = Lexicon([
(IDENT, lambda s,t: Ident(t)),
(ATKEYWORD, lambda s,t: AtKeyword(t)),
(STRING, lambda s,t: String(t)),
(HASH, lambda s,t: Hash(t)),
(NUMBER, lambda s,t: Number(t)),
(PERCENTAGE, lambda s,t: Percentage(t)),
(DIMENSION, lambda s,t: Dimension(t)),
(URITOKEN, lambda s,t: URI(t)),
(UNICODE_RANGE, lambda s,t: UnicodeRange(t)),
(CDO, lambda s,t: CDO()),
(CDC, lambda s,t: CDC()),
(S, lambda s,t: Whitespace()),
(COMMENT, IGNORE),
(FUNCTION, lambda s,t: Function(t)),
(INCLUDES, lambda s,t: Delim(t)),
(DASHMATCH, lambda s,t: Delim(t)),
(DELIM, lambda s,t: Delim(t)),
(IMPORTANT, lambda s,t: Important())
])
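The macros above use a Plex-style lexer DSL in which alternation picks the longest match, so `num = re('[0-9]+') | re('[0-9]*\.[0-9]+')` tokenises '3.14' as one number. Python's standard `re` alternation is leftmost-first instead, so a port of that pattern must put the fractional branch first (a hedged sketch, not part of this module):

```python
import re

# Fraction branch first: with re's leftmost alternation, putting
# '[0-9]+' first would match only '3' out of '3.14'.
NUM = re.compile(r'[0-9]*\.[0-9]+|[0-9]+')
```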
# Parser
# ----------------------------------------------------------------------------
class ParserException(Exception):
def __init__(self, file, line, col):
self.file = file
self.line = line
self.col = col
def __str__(self):
return 'Parse error in "%s" at line %d, column %d' % \
(self.file, self.line, self.col)
class UnexpectedToken(ParserException):
def __init__(self, position, expected_list, got_token):
ParserException.__init__(self, *position)
if len(expected_list) == 1:
self.expected = 'expected %r' % expected_list[0]
else:
self.expected = 'expected one of %r' % expected_list
self.expected_list = expected_list
self.got_token = got_token
def __str__(self):
return '%s: %s, got %r' % \
(super(UnexpectedToken, self).__str__(),
self.expected, self.got_token)
class Parser(object):
'''Grammar parser for CSS 2.1.
This is a hand-coded LL(1) parser. There are convenience functions for
peeking at the next token and checking if the next token is of a given type
or delimiter. Otherwise, it is a straightforward recursive implementation
of the production rules given in Appendix G.1.
Some attention is paid to ignoring errors according to the specification,
but this is not perfect yet. In particular, some parse errors will result
in an exception, which halts parsing.
'''
def __init__(self, scanner):
self._scanner = scanner
self._lookahead = None
def _read(self, *_types):
if self._lookahead is not None:
r = self._lookahead
self._lookahead = None
else:
r = self._scanner.read()[0]
if _types and r.__class__ not in _types and r not in _types:
raise UnexpectedToken(self._scanner.position(), _types, r)
return r
def _peek(self):
if self._lookahead is None:
self._lookahead = self._scanner.read()[0]
return self._lookahead
def _is(self, *_types):
peek = self._peek()
return peek is not None and (peek.__class__ in _types or peek in _types)
def _eat_whitespace(self):
while self._is(Whitespace):
self._read()
# Productions
# See Appendix G.1
# -----------------------------------------------------------------------
def stylesheet(self):
# [ CHARSET_SYM STRING ';']?
# [S|CDO|CDC]* [ import [S|CDO|CDC]* ]*
# [ [ ruleset | media | page ] [S|CDO|CDC]* ]*
charset = None
if self.is_charset():
charset = self.charset()
while self._is(CDO, CDC, Whitespace):
self._read()
imports = []
while self.is_import():
imports.append(self.import_())
while self._is(CDO, CDC, Whitespace):
self._read()
rules = []
while True:
if self.is_page():
self.page()
                # Pages are currently ignored
elif self.is_media():
rules += self.media()
elif self.is_ruleset():
rules += self.ruleset()
else:
break
while self._is(CDO, CDC, Whitespace):
self._read()
return charset, imports, rules
def is_charset(self):
t = self._peek()
return isinstance(t, AtKeyword) and t == '@charset'
def charset(self):
# charset : CHARSET_SYM STRING ';'
self._read(AtKeyword)
charset = self._read(String)
self._read(';')
return charset
def medium_list(self):
# medium_list : IDENT S* [ COMMA S* IDENT S*]*
# (Not in CSS grammar, but common to media and import productions)
media = []
media.append(self._read(Ident))
self._eat_whitespace()
while self._is(','):
self._eat_whitespace()
media.append(self._read(Ident))
self._eat_whitespace()
return media
def is_import(self):
t = self._peek()
return isinstance(t, AtKeyword) and t == '@import'
def import_(self):
# import : IMPORT_SYM S* [STRING|URI] S* [ medium_list ]? ';' S*
self._read(AtKeyword)
self._eat_whitespace()
loc = self._read(String, URI)
self._eat_whitespace()
if self._is(Ident):
media = self.medium_list()
else:
media = []
self._read(';')
return Import(loc, media)
def is_page(self):
t = self._peek()
return isinstance(t, AtKeyword) and t == '@page'
def page(self):
# page : PAGE_SYM S* pseudo_page? S*
# LBRACE S* declaration [ ';' declaration ]* '}' S*
self._read(AtKeyword)
self._eat_whitespace()
if self._is(':'):
self._read()
pseudo = self._read(Ident)
self._eat_whitespace()
self._read('{')
declaration_set = self.declaration_set()
self._read('}')
self._eat_whitespace()
return Page(pseudo, declaration_set)
def is_media(self):
t = self._peek()
return isinstance(t, AtKeyword) and t == '@media'
def media(self):
# media : MEDIA_SYM S* medium_list LBRACE S* ruleset* '}' S*
self._read(AtKeyword)
self._eat_whitespace()
media = self.medium_list()
self._read('{')
self._eat_whitespace()
rules = []
while self.is_ruleset():
ruleset = self.ruleset()
for rule in ruleset:
rule.media = media
rules += ruleset
self._read('}')
self._eat_whitespace()
return rules
def is_operator(self):
return self._is('/', ',')
def operator(self):
# operator: '/' S* | COMMA S* | /* empty */
# (empty production isn't matched here, see expr)
op = self._read()
self._eat_whitespace()
return op
def combinator(self):
combinator = None
if self._is('+', '>'):
combinator = self._read()
self._eat_whitespace()
return combinator
def is_unary_operator(self):
return self._is('+', '-')
def unary_operator(self):
# unary_operator : '-' | PLUS
if self._read('+', '-') == '-':
return -1
return 1
def is_property(self):
return self._is(Ident)
def property(self):
# property : IDENT S*
prop = self._read(Ident)
self._eat_whitespace()
return prop
def is_ruleset(self):
return self.is_selector()
def ruleset(self):
# ruleset : selector [ COMMA S* selector ]*
# LBRACE S* declaration [ ';' S* declaration ]* '}' S*
selectors = [self.selector()]
while self._is(','):
self._read()
self._eat_whitespace()
selectors.append(self.selector())
self._read('{')
self._eat_whitespace()
declaration_set = self.declaration_set()
self._read('}')
self._eat_whitespace()
return [Rule(s, declaration_set) for s in selectors]
def is_selector(self):
return self._is(Ident, '*', Hash, '.', '[', ':')
def selector(self):
# selector : simple_selector [ combinator simple_selector ]*
combiners = []
simple = self.simple_selector()
while self._is('+', '>', Ident, '*', Hash, '.', '[', ':'):
combinator = self.combinator()
combiners.insert(0, CombiningSelector(combinator, simple))
simple = self.simple_selector()
return Selector(simple, combiners)
def simple_selector(self):
# simple_selector : element_name [ HASH | class | attrib | pseudo ]*
# | [ HASH | class | attrib | pseudo ]+
name = id = None
attribs = []
classes = []
pseudos = []
if self._is(Ident):
name = self._read()
elif self._is('*'):
self._read()
while True:
if self._is(Hash):
id = self._read() # more than 1 id? too bad (css unspecified)
elif self._is('.'):
self._read()
classes.append(self._read(Ident))
elif self._is('['):
attribs.append(self.attrib())
elif self._is(':'):
pseudos.append(self.pseudo())
else:
break
# Not in spec but definitely required.
self._eat_whitespace()
return SimpleSelector(name, id, classes, attribs, pseudos)
def attrib(self):
# attrib : '[' S* IDENT S* [ [ '=' | INCLUDES | DASHMATCH ] S*
# [ IDENT | STRING ] S* ]? ']'
self._read('[')
self._eat_whitespace()
name = self._read(Ident)
self._eat_whitespace()
op = value = None
if self._is('=', '~=', '|='):
op = self._read()
self._eat_whitespace()
value = self._read(Ident, String)
self._eat_whitespace()
self._read(']')
return Attrib(name, op, value)
def pseudo(self):
# pseudo : ':' [ IDENT | FUNCTION S* IDENT? S* ')' ]
self._read(':')
if self._is(Ident):
return Pseudo(self._read())
else:
name = self._read(Function)
self._eat_whitespace()
param = None
if self._is(Ident):
param = self._read(Ident)
self._eat_whitespace()
self._read(')')
return PseudoFunction(name, param)
def declaration_set(self):
# declaration_list : S* declaration [';' S* declaration ]*
# Adapted from bracketed section of ruleset; this is the start
# production for parsing the style attribute of HTML/XHTML.
# Declaration can also be empty, handle this here.
self._eat_whitespace()
declarations = []
while self._is(Ident, ';'):
if self._is(Ident):
try:
declarations.append(self.declaration())
except ParserException:
pass
if not self._is(';'):
break
self._read(';')
self._eat_whitespace()
return DeclarationSet(declarations)
def is_declaration(self):
return self.is_property()
def declaration(self):
# declaration : property ':' S* expr prio? | /* empty */
# Empty production of declaration is not handled here, see
# declaration_list.
prop = self.property()
self._read(':')
self._eat_whitespace()
expr = self.expr()
priority = None
if self.is_prio():
priority = self.prio()
return Declaration(prop, expr, priority)
def is_prio(self):
return self._is('!')
def prio(self):
# prio : IMPORTANT_SYM S*
self._read('!')
self._eat_whitespace()
self._read(Important)
self._eat_whitespace()
return 'important'
def expr(self):
# expr : term [ operator term ]*
# operator is optional, implemented here not in operator.
terms = []
terms.append(self.term())
while self.is_operator() or self.is_term():
if self.is_operator():
terms.append(self.operator())
terms.append(self.term())
return terms
def is_term(self):
return (self.is_unary_operator() or
self.is_function() or
self.is_hexcolor() or
self._is(Number, Percentage, Dimension, String, Ident, URI))
def term(self):
# term : unary_operator? [ NUMBER S* | PERCENTAGE S* | DIMENSION S* ]
# | STRING S* | IDENT S* | URI S* | hexcolor | function
if self.is_unary_operator():
un = self.unary_operator()
value = self._read(Number, Percentage, Dimension)
if un == -1:
value = -value
self._eat_whitespace()
return value
if self.is_function():
return self.function()
elif self.is_hexcolor():
return self.hexcolor()
value = self._read(Number, Percentage, Dimension, String, Ident, URI)
self._eat_whitespace()
return value
def is_function(self):
return self._is(Function)
def function(self):
# function : FUNCTION S* expr ')' S*
name = self._read()[:-1]
position = self._scanner.position()
self._eat_whitespace()
args = self.expr()
self._read(')')
self._eat_whitespace()
if name == 'rgb':
if len(args) != 5 or args[1] != ',' or args[3] != ',':
raise ParserException(*position)
def component(c):
if c.__class__ is Percentage:
return max(min(c / 100., 1.), 0.)
elif c.__class__ is Number:
return max(min(c / 255., 1.), 0.)
else:
raise ParserException(*position)
r = component(args[0])
g = component(args[2])
b = component(args[4])
return Color(r,g,b)
else:
raise ParserException(*position)
def is_hexcolor(self):
return self._is(Hash)
def hexcolor(self):
# hexcolor : HASH S*
hash = self._read(Hash)
if len(hash) not in (3, 6):
raise ParserException(*self._scanner.position())
self._eat_whitespace()
return Color.from_hex(hash)
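`hexcolor` accepts 3- or 6-digit hashes and delegates to `Color.from_hex`, which is defined elsewhere. A plausible sketch of that conversion, assuming components scale into the 0..1 float range used by the `rgb()` branch of `function` (helper name is hypothetical):

```python
def hex_to_rgb(h):
    # Expand 3-digit shorthand ('f80' -> 'ff8800'), then scale each
    # 8-bit component into the 0..1 range.
    if len(h) == 3:
        h = ''.join(c * 2 for c in h)
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
```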
| bsd-3-clause |
sklnet/openatv-enigma2 | lib/python/Screens/VideoMode.py | 2 | 30865 | from os import path
from enigma import iPlayableService, iServiceInformation, eTimer, eServiceCenter, eServiceReference, eDVBDB
from Screens.Screen import Screen
from Screens.ChannelSelection import FLAG_IS_DEDICATED_3D
from Components.About import about
from Components.SystemInfo import SystemInfo
from Components.ConfigList import ConfigListScreen
from Components.config import config, configfile, getConfigListEntry
from Components.Label import Label
from Components.Sources.StaticText import StaticText
from Components.Pixmap import Pixmap
from Components.Sources.Boolean import Boolean
from Components.ServiceEventTracker import ServiceEventTracker
from Tools.Directories import resolveFilename, SCOPE_PLUGINS
from Tools.HardwareInfo import HardwareInfo
from Components.AVSwitch import iAVSwitch
resolutionlabel = None
class VideoSetup(Screen, ConfigListScreen):
def __init__(self, session):
Screen.__init__(self, session)
# for the skin: first try VideoSetup, then Setup, this allows individual skinning
self.skinName = ["VideoSetup", "Setup" ]
self.setup_title = _("Video settings")
self["HelpWindow"] = Pixmap()
self["HelpWindow"].hide()
self["VKeyIcon"] = Boolean(False)
self['footnote'] = Label()
self.hw = iAVSwitch
self.onChangedEntry = [ ]
# handle hotplug by re-creating setup
self.onShow.append(self.startHotplug)
self.onHide.append(self.stopHotplug)
self.list = [ ]
ConfigListScreen.__init__(self, self.list, session = session, on_change = self.changedEntry)
from Components.ActionMap import ActionMap
self["actions"] = ActionMap(["SetupActions", "MenuActions", "ColorActions"],
{
"cancel": self.keyCancel,
"save": self.apply,
"menu": self.closeRecursive,
}, -2)
self["key_red"] = StaticText(_("Cancel"))
self["key_green"] = StaticText(_("OK"))
self["description"] = Label("")
self.createSetup()
self.grabLastGoodMode()
self.onLayoutFinish.append(self.layoutFinished)
def layoutFinished(self):
self.setTitle(self.setup_title)
def startHotplug(self):
self.hw.on_hotplug.append(self.createSetup)
def stopHotplug(self):
self.hw.on_hotplug.remove(self.createSetup)
def createSetup(self):
level = config.usage.setup_level.index
self.list = [
getConfigListEntry(_("Video output"), config.av.videoport, _("Configures which video output connector will be used."))
]
if config.av.videoport.value in ('HDMI', 'YPbPr', 'Scart-YPbPr') and not path.exists(resolveFilename(SCOPE_PLUGINS)+'SystemPlugins/AutoResolution'):
			self.list.append(getConfigListEntry(_("Automatic resolution"), config.av.autores,_("If enabled, the output resolution of the box will try to match the resolution of the video content.")))
if config.av.autores.value in ('all', 'hd'):
				self.list.append(getConfigListEntry(_("Delay time"), config.av.autores_delay,_("Set the time before checking the video source for resolution information.")))
self.list.append(getConfigListEntry(_("Force de-interlace"), config.av.autores_deinterlace,_("If enabled the video will always be de-interlaced.")))
				self.list.append(getConfigListEntry(_("Automatic resolution label"), config.av.autores_label_timeout,_("Allows you to adjust the amount of time the resolution information is displayed on screen.")))
if config.av.autores.value in 'hd':
				self.list.append(getConfigListEntry(_("Show SD as"), config.av.autores_sd,_("This option allows you to choose how to display standard definition video on your TV.")))
				self.list.append(getConfigListEntry(_("Show 480/576p 24fps as"), config.av.autores_480p24,_("This option allows you to choose how to display SD progressive 24Hz on your TV. (as not all TVs support these resolutions)")))
				self.list.append(getConfigListEntry(_("Show 720p 24fps as"), config.av.autores_720p24,_("This option allows you to choose how to display 720p 24Hz on your TV. (as not all TVs support these resolutions)")))
				self.list.append(getConfigListEntry(_("Show 1080p 24fps as"), config.av.autores_1080p24,_("This option allows you to choose how to display 1080p 24Hz on your TV. (as not all TVs support these resolutions)")))
				self.list.append(getConfigListEntry(_("Show 1080p 25fps as"), config.av.autores_1080p25,_("This option allows you to choose how to display 1080p 25Hz on your TV. (as not all TVs support these resolutions)")))
				self.list.append(getConfigListEntry(_("Show 1080p 30fps as"), config.av.autores_1080p30,_("This option allows you to choose how to display 1080p 30Hz on your TV. (as not all TVs support these resolutions)")))
self.list.append(getConfigListEntry(_('Always use smart1080p mode'), config.av.smart1080p, _("This option allows you to always use e.g. 1080p50 for TV/.ts, and 1080p24/p50/p60 for videos")))
# if we have modes for this port:
if (config.av.videoport.value in config.av.videomode and config.av.autores.value == 'disabled') or config.av.videoport.value == 'Scart':
# add mode- and rate-selection:
self.list.append(getConfigListEntry(pgettext(_("Video output mode"), _("Mode")), config.av.videomode[config.av.videoport.value], _("This option configures the video output mode (or resolution).")))
if config.av.videomode[config.av.videoport.value].value == 'PC':
self.list.append(getConfigListEntry(_("Resolution"), config.av.videorate[config.av.videomode[config.av.videoport.value].value], _("This option configures the screen resolution in PC output mode.")))
elif config.av.videoport.value != 'Scart':
self.list.append(getConfigListEntry(_("Refresh rate"), config.av.videorate[config.av.videomode[config.av.videoport.value].value], _("Configure the refresh rate of the screen.")))
port = config.av.videoport.value
if port not in config.av.videomode:
mode = None
else:
mode = config.av.videomode[port].value
# some modes (720p, 1080i) are always widescreen. Don't let the user select something here, "auto" is not what he wants.
force_wide = self.hw.isWidescreenMode(port, mode)
if not force_wide:
self.list.append(getConfigListEntry(_("Aspect ratio"), config.av.aspect, _("Configure the aspect ratio of the screen.")))
if force_wide or config.av.aspect.value in ("16:9", "16:10"):
self.list.extend((
getConfigListEntry(_("Display 4:3 content as"), config.av.policy_43, _("When the content has an aspect ratio of 4:3, choose whether to scale/stretch the picture.")),
getConfigListEntry(_("Display >16:9 content as"), config.av.policy_169, _("When the content has an aspect ratio of 16:9, choose whether to scale/stretch the picture."))
))
elif config.av.aspect.value == "4:3":
self.list.append(getConfigListEntry(_("Display 16:9 content as"), config.av.policy_169, _("When the content has an aspect ratio of 16:9, choose whether to scale/stretch the picture.")))
# if config.av.videoport.value == "HDMI":
# self.list.append(getConfigListEntry(_("Allow unsupported modes"), config.av.edid_override))
if config.av.videoport.value == "Scart":
self.list.append(getConfigListEntry(_("Color format"), config.av.colorformat, _("Configure which color format should be used on the SCART output.")))
if level >= 1:
self.list.append(getConfigListEntry(_("WSS on 4:3"), config.av.wss, _("When enabled, content with an aspect ratio of 4:3 will be stretched to fit the screen.")))
if SystemInfo["ScartSwitch"]:
self.list.append(getConfigListEntry(_("Auto scart switching"), config.av.vcrswitch, _("When enabled, your receiver will detect activity on the VCR SCART input.")))
# if not isinstance(config.av.scaler_sharpness, ConfigNothing):
# self.list.append(getConfigListEntry(_("Scaler sharpness"), config.av.scaler_sharpness, _("This option configures the picture sharpness.")))
if SystemInfo["havecolorspace"]:
			self.list.append(getConfigListEntry(_("HDMI Colorspace"), config.av.hdmicolorspace,_("This option allows you to configure the HDMI colorspace (e.g. from Auto to RGB).")))
self["config"].list = self.list
self["config"].l.setList(self.list)
if config.usage.sort_settings.value:
self["config"].list.sort()
def keyLeft(self):
ConfigListScreen.keyLeft(self)
self.createSetup()
def keyRight(self):
ConfigListScreen.keyRight(self)
self.createSetup()
def confirm(self, confirmed):
if not confirmed:
config.av.videoport.setValue(self.last_good[0])
config.av.videomode[self.last_good[0]].setValue(self.last_good[1])
config.av.videorate[self.last_good[1]].setValue(self.last_good[2])
config.av.autores_sd.setValue(self.last_good_extra[0])
config.av.smart1080p.setValue(self.last_good_extra[1])
self.hw.setMode(*self.last_good)
else:
self.keySave()
def grabLastGoodMode(self):
port = config.av.videoport.value
mode = config.av.videomode[port].value
rate = config.av.videorate[mode].value
self.last_good = (port, mode, rate)
autores_sd = config.av.autores_sd.value
smart1080p = config.av.smart1080p.value
self.last_good_extra = (autores_sd, smart1080p)
def saveAll(self):
if config.av.videoport.value == 'Scart':
config.av.autores.setValue('disabled')
for x in self["config"].list:
x[1].save()
configfile.save()
def apply(self):
port = config.av.videoport.value
mode = config.av.videomode[port].value
rate = config.av.videorate[mode].value
autores_sd = config.av.autores_sd.value
smart1080p = config.av.smart1080p.value
if ((port, mode, rate) != self.last_good) or (autores_sd, smart1080p) != self.last_good_extra:
if autores_sd.find('1080') >= 0:
self.hw.setMode(port, '1080p', '50Hz')
elif (smart1080p == '1080p50') or (smart1080p == 'true'): # for compatibility with old ConfigEnableDisable
self.hw.setMode(port, '1080p', '50Hz')
elif smart1080p == '2160p50':
self.hw.setMode(port, '2160p', '50Hz')
elif smart1080p == '1080i50':
self.hw.setMode(port, '1080i', '50Hz')
elif smart1080p == '720p50':
self.hw.setMode(port, '720p', '50Hz')
else:
self.hw.setMode(port, mode, rate)
from Screens.MessageBox import MessageBox
self.session.openWithCallback(self.confirm, MessageBox, _("Is this video mode ok?"), MessageBox.TYPE_YESNO, timeout = 20, default = False)
else:
self.keySave()
# for summary:
def changedEntry(self):
for x in self.onChangedEntry:
x()
def getCurrentEntry(self):
return self["config"].getCurrent()[0]
def getCurrentValue(self):
return str(self["config"].getCurrent()[1].getText())
def getCurrentDescription(self):
return self["config"].getCurrent() and len(self["config"].getCurrent()) > 2 and self["config"].getCurrent()[2] or ""
def createSummary(self):
from Screens.Setup import SetupSummary
return SetupSummary
class AudioSetup(Screen, ConfigListScreen):
def __init__(self, session):
Screen.__init__(self, session)
# for the skin: first try AudioSetup, then Setup, this allows individual skinning
self.skinName = ["AudioSetup", "Setup" ]
self.setup_title = _("Audio settings")
self["HelpWindow"] = Pixmap()
self["HelpWindow"].hide()
self["VKeyIcon"] = Boolean(False)
self['footnote'] = Label()
self.hw = iAVSwitch
self.onChangedEntry = [ ]
# handle hotplug by re-creating setup
self.onShow.append(self.startHotplug)
self.onHide.append(self.stopHotplug)
self.list = [ ]
ConfigListScreen.__init__(self, self.list, session = session, on_change = self.changedEntry)
from Components.ActionMap import ActionMap
self["actions"] = ActionMap(["SetupActions", "MenuActions", "ColorActions"],
{
"cancel": self.keyCancel,
"save": self.apply,
"menu": self.closeRecursive,
}, -2)
self["key_red"] = StaticText(_("Cancel"))
self["key_green"] = StaticText(_("OK"))
self["description"] = Label("")
self.createSetup()
self.onLayoutFinish.append(self.layoutFinished)
def layoutFinished(self):
self.setTitle(self.setup_title)
def startHotplug(self):
self.hw.on_hotplug.append(self.createSetup)
def stopHotplug(self):
self.hw.on_hotplug.remove(self.createSetup)
def createSetup(self):
level = config.usage.setup_level.index
self.list = [ ]
if level >= 1:
if SystemInfo["CanPcmMultichannel"]:
self.list.append(getConfigListEntry(_("PCM Multichannel"), config.av.pcm_multichannel, _("Choose whether multi channel sound tracks should be output as PCM.")))
if SystemInfo["CanDownmixAC3"]:
self.list.append(getConfigListEntry(_("Dolby Digital / DTS downmix"), config.av.downmix_ac3, _("Choose whether multi channel sound tracks should be downmixed to stereo.")))
if SystemInfo["CanDownmixAAC"]:
self.list.append(getConfigListEntry(_("AAC downmix"), config.av.downmix_aac, _("Choose whether multi channel sound tracks should be downmixed to stereo.")))
if SystemInfo["Canaudiosource"]:
self.list.append(getConfigListEntry(_("Audio Source"), config.av.audio_source, _("Choose whether multi channel sound tracks should be converted to PCM or SPDIF.")))
if SystemInfo["CanAACTranscode"]:
self.list.append(getConfigListEntry(_("AAC transcoding"), config.av.transcodeaac, _("Choose whether AAC sound tracks should be transcoded.")))
self.list.extend((
getConfigListEntry(_("General AC3 delay"), config.av.generalAC3delay, _("This option configures the general audio delay of Dolby Digital sound tracks.")),
getConfigListEntry(_("General PCM delay"), config.av.generalPCMdelay, _("This option configures the general audio delay of stereo sound tracks."))
))
if SystemInfo["Can3DSurround"]:
self.list.append(getConfigListEntry(_("3D Surround"), config.av.surround_3d,_("This option allows you to enable 3D Surround Sound.")))
if SystemInfo["Can3DSpeaker"] and config.av.surround_3d.value != "none":
self.list.append(getConfigListEntry(_("3D Surround Speaker Position"), config.av.surround_3d_speaker,_("This option allows you to change the virtual loudspeaker position.")))
if SystemInfo["CanAutoVolume"]:
self.list.append(getConfigListEntry(_("Audio Auto Volume Level"), config.av.autovolume,_("This option allows you to set the Auto Volume Level.")))
if SystemInfo["Canedidchecking"]:
self.list.append(getConfigListEntry(_("Bypass HDMI EDID Check"), config.av.bypass_edid_checking,_("This option allows you to bypass the HDMI EDID check.")))
self["config"].list = self.list
self["config"].l.setList(self.list)
if config.usage.sort_settings.value:
self["config"].list.sort()
def keyLeft(self):
ConfigListScreen.keyLeft(self)
self.createSetup()
def keyRight(self):
ConfigListScreen.keyRight(self)
self.createSetup()
def confirm(self, confirmed):
self.keySave()
def apply(self):
self.keySave()
# for summary:
def changedEntry(self):
for x in self.onChangedEntry:
x()
def getCurrentEntry(self):
return self["config"].getCurrent()[0]
def getCurrentValue(self):
return str(self["config"].getCurrent()[1].getText())
def getCurrentDescription(self):
return self["config"].getCurrent() and len(self["config"].getCurrent()) > 2 and self["config"].getCurrent()[2] or ""
def createSummary(self):
from Screens.Setup import SetupSummary
return SetupSummary
class AutoVideoModeLabel(Screen):
def __init__(self, session):
Screen.__init__(self, session)
self["content"] = Label()
self["restxt"] = Label()
self.hideTimer = eTimer()
self.hideTimer.callback.append(self.hide)
self.onShow.append(self.hide_me)
def hide_me(self):
idx = config.av.autores_label_timeout.index
if idx:
idx += 4
self.hideTimer.start(idx*1000, True)
previous = None
isDedicated3D = False
def applySettings(mode=config.osd.threeDmode.value, znorm=int(config.osd.threeDznorm.value)):
global previous, isDedicated3D
mode = isDedicated3D and mode == "auto" and "sidebyside" or mode
if previous != (mode, znorm):
try:
previous = (mode, znorm)
if SystemInfo["CanUse3DModeChoices"]:
f = open("/proc/stb/fb/3dmode_choices", "r")
choices = f.readlines()[0].split()
f.close()
if mode not in choices:
if mode == "sidebyside":
mode = "sbs"
elif mode == "topandbottom":
mode = "tab"
elif mode == "auto":
mode = "off"
open(SystemInfo["3DMode"], "w").write(mode)
open(SystemInfo["3DZNorm"], "w").write('%d' % znorm)
except:
return
class AutoVideoMode(Screen):
def __init__(self, session):
Screen.__init__(self, session)
self.__event_tracker = ServiceEventTracker(screen=self, eventmap=
{
iPlayableService.evStart: self.__evStart,
iPlayableService.evVideoSizeChanged: self.VideoChanged,
iPlayableService.evVideoProgressiveChanged: self.VideoChanged,
iPlayableService.evVideoFramerateChanged: self.VideoChanged,
iPlayableService.evBuffering: self.BufferInfo,
iPlayableService.evStopped: self.BufferInfoStop
})
self.delay = False
self.bufferfull = True
self.detecttimer = eTimer()
self.detecttimer.callback.append(self.VideoChangeDetect)
def checkIfDedicated3D(self):
service = self.session.nav.getCurrentlyPlayingServiceReference()
servicepath = service and service.getPath()
if servicepath and servicepath.startswith("/"):
if service.toString().startswith("1:"):
info = eServiceCenter.getInstance().info(service)
service = info and info.getInfoString(service, iServiceInformation.sServiceref)
return service and eDVBDB.getInstance().getFlag(eServiceReference(service)) & FLAG_IS_DEDICATED_3D == FLAG_IS_DEDICATED_3D and "sidebyside"
else:
return ".3d." in servicepath.lower() and "sidebyside" or ".tab." in servicepath.lower() and "topandbottom"
service = self.session.nav.getCurrentService()
info = service and service.info()
return info and info.getInfo(iServiceInformation.sIsDedicated3D) == 1 and "sidebyside"
def __evStart(self):
if config.osd.threeDmode.value == "auto":
global isDedicated3D
isDedicated3D = self.checkIfDedicated3D()
if isDedicated3D:
applySettings(isDedicated3D)
else:
applySettings()
def BufferInfo(self):
bufferInfo = self.session.nav.getCurrentService().streamed().getBufferCharge()
if bufferInfo[0] > 98:
self.bufferfull = True
self.VideoChanged()
else:
self.bufferfull = False
def BufferInfoStop(self):
self.bufferfull = True
def VideoChanged(self):
if self.session.nav.getCurrentlyPlayingServiceReference() and not self.session.nav.getCurrentlyPlayingServiceReference().toString().startswith('4097:'):
delay = config.av.autores_delay.value
else:
delay = config.av.autores_delay.value * 2
if not self.detecttimer.isActive() and not self.delay:
self.delay = True
self.detecttimer.start(delay)
else:
self.delay = True
self.detecttimer.stop()
self.detecttimer.start(delay)
def VideoChangeDetect(self):
global resolutionlabel
config_port = config.av.videoport.value
config_mode = str(config.av.videomode[config_port].value).replace('\n','')
config_res = str(config.av.videomode[config_port].value[:-1]).replace('\n','')
config_pol = str(config.av.videomode[config_port].value[-1:]).replace('\n','')
config_rate = str(config.av.videorate[config_mode].value).replace('Hz','').replace('\n','')
f = open("/proc/stb/video/videomode")
current_mode = f.read()[:-1].replace('\n','')
f.close()
if current_mode.upper() in ('PAL', 'NTSC'):
current_mode = current_mode.upper()
current_pol = ''
if 'i' in current_mode:
current_pol = 'i'
elif 'p' in current_mode:
current_pol = 'p'
current_res = current_pol and current_mode.split(current_pol)[0].replace('\n','') or ""
current_rate = current_pol and current_mode.split(current_pol)[0].replace('\n','') and current_mode.split(current_pol)[1].replace('\n','') or ""
video_height = None
video_width = None
video_pol = None
video_rate = None
if path.exists("/proc/stb/vmpeg/0/yres"):
try:
f = open("/proc/stb/vmpeg/0/yres", "r")
video_height = int(f.read(),16)
f.close()
except:
video_height = 0
if path.exists("/proc/stb/vmpeg/0/xres"):
try:
f = open("/proc/stb/vmpeg/0/xres", "r")
video_width = int(f.read(),16)
f.close()
except:
video_width = 0
if path.exists("/proc/stb/vmpeg/0/progressive"):
try:
f = open("/proc/stb/vmpeg/0/progressive", "r")
video_pol = "p" if int(f.read(),16) else "i"
f.close()
except:
video_pol = "i"
if path.exists("/proc/stb/vmpeg/0/framerate"):
f = open("/proc/stb/vmpeg/0/framerate", "r")
try:
video_rate = int(f.read())
except:
video_rate = 50
f.close()
if not video_height or not video_width or not video_pol or not video_rate:
service = self.session.nav.getCurrentService()
if service is not None:
info = service.info()
else:
info = None
if info:
video_height = int(info.getInfo(iServiceInformation.sVideoHeight))
video_width = int(info.getInfo(iServiceInformation.sVideoWidth))
video_pol = ("i", "p")[info.getInfo(iServiceInformation.sProgressive)]
video_rate = int(info.getInfo(iServiceInformation.sFrameRate))
if (video_height and video_width and video_pol and video_rate) or (config.av.smart1080p.value != 'false'):
resolutionlabel["content"].setText(_("Video content: %ix%i%s %iHz") % (video_width, video_height, video_pol, (video_rate + 500) / 1000))
if video_height != -1:
if video_height > 720 or video_width > 1280:
new_res = "1080"
elif (576 < video_height <= 720) or video_width > 1024:
new_res = "720"
elif (480 < video_height <= 576) or video_width > 720 or video_rate in (25000, 23976, 24000):
new_res = "576"
else:
new_res = "480"
else:
new_res = config_res
if video_rate != -1:
if video_rate == 25000 and video_pol == 'i':
new_rate = 50000
elif video_rate == 59940 or (video_rate == 29970 and video_pol == 'i') or (video_rate == 29970 and video_pol == 'p' and config.av.autores.value == 'disabled'):
new_rate = 60000
elif video_rate == 23976:
new_rate = 24000
elif video_rate == 29970:
new_rate = 30000
else:
new_rate = video_rate
new_rate = str((new_rate + 500) / 1000)
else:
new_rate = config_rate
if video_pol != -1:
new_pol = str(video_pol)
else:
new_pol = config_pol
write_mode = None
new_mode = None
if config_mode in ('PAL', 'NTSC'):
write_mode = config_mode
elif config.av.autores.value == 'all' or (config.av.autores.value == 'hd' and int(new_res) >= 720):
if (config.av.autores_deinterlace.value and HardwareInfo().is_nextgen()) or (config.av.autores_deinterlace.value and not HardwareInfo().is_nextgen() and int(new_res) <= 720):
new_pol = new_pol.replace('i','p')
if new_res+new_pol+new_rate in iAVSwitch.modes_available:
new_mode = new_res+new_pol+new_rate
if new_mode == '480p24' or new_mode == '576p24':
new_mode = config.av.autores_480p24.value
if new_mode == '720p24':
new_mode = config.av.autores_720p24.value
if new_mode == '1080p24':
new_mode = config.av.autores_1080p24.value
if new_mode == '1080p25':
new_mode = config.av.autores_1080p25.value
if new_mode == '1080p30':
new_mode = config.av.autores_1080p30.value
elif new_res+new_pol in iAVSwitch.modes_available:
new_mode = new_res+new_pol
else:
new_mode = config_mode+new_rate
write_mode = new_mode
elif config.av.autores.value == 'hd' and int(new_res) <= 576:
if (config.av.autores_deinterlace.value and HardwareInfo().is_nextgen()) or (config.av.autores_deinterlace.value and not HardwareInfo().is_nextgen() and not config.av.autores_sd.value == '1080i'):
new_mode = config.av.autores_sd.value.replace('i','p')+new_rate
else:
if new_pol in 'p':
new_mode = config.av.autores_sd.value.replace('i','p')+new_rate
else:
new_mode = config.av.autores_sd.value+new_rate
if new_mode == '720p24':
new_mode = config.av.autores_720p24.value
if new_mode == '1080p24':
new_mode = config.av.autores_1080p24.value
if new_mode == '1080p25':
new_mode = config.av.autores_1080p25.value
if new_mode == '1080p30':
new_mode = config.av.autores_1080p30.value
write_mode = new_mode
else:
if path.exists('/proc/stb/video/videomode_%shz' % new_rate) and config_rate == 'multi':
f = open("/proc/stb/video/videomode_%shz" % new_rate, "r")
multi_videomode = f.read().replace('\n','')
f.close()
if multi_videomode and (current_mode != multi_videomode):
write_mode = multi_videomode
else:
write_mode = config_mode+new_rate
# workaround for bug, see http://www.opena.tv/forum/showthread.php?1642-Autoresolution-Plugin&p=38836&viewfull=1#post38836
# always use a fixed resolution and frame rate (e.g. 1080p50 if supported) for TV or .ts files
# always use a fixed resolution and correct rate (e.g. 1080p24/p50/p60) for all other videos
if config.av.smart1080p.value != 'false':
ref = self.session.nav.getCurrentlyPlayingServiceReference()
if ref is not None:
try:
mypath = ref.getPath()
except:
mypath = ''
else:
mypath = ''
# no frame rate information available, check if filename (or directory name) contains a hint
# (allow user to force a frame rate this way):
if (mypath.find('p24.') >= 0) or (mypath.find('24p.') >= 0):
new_rate = '24'
elif (mypath.find('p25.') >= 0) or (mypath.find('25p.') >= 0):
new_rate = '25'
elif (mypath.find('p30.') >= 0) or (mypath.find('30p.') >= 0):
new_rate = '30'
elif (mypath.find('p50.') >= 0) or (mypath.find('50p.') >= 0):
new_rate = '50'
elif (mypath.find('p60.') >= 0) or (mypath.find('60p.') >= 0):
new_rate = '60'
elif new_rate == 'multi':
new_rate = '' # omit frame rate specifier, e.g. '1080p' instead of '1080p50' if there is no clue
if mypath != '':
if mypath.endswith('.ts'):
print "DEBUG VIDEOMODE/ playing .ts file"
new_rate = '50' # for .ts files
else:
print "DEBUG VIDEOMODE/ playing other (non .ts) file"
# new_rate from above for all other videos
else:
print "DEBUG VIDEOMODE/ no path or no service reference, presumably live TV"
new_rate = '50' # for TV / or no service reference, then stay at 1080p50
new_rate = new_rate.replace('25', '50')
new_rate = new_rate.replace('30', '60')
if (config.av.smart1080p.value == '1080p50') or (config.av.smart1080p.value == 'true'): # for compatibility with old ConfigEnableDisable
write_mode = '1080p' + new_rate
elif config.av.smart1080p.value == '2160p50':
write_mode = '2160p' + new_rate
elif config.av.smart1080p.value == '1080i50':
if new_rate == '24':
write_mode = '1080p24' # instead of 1080i24
else:
write_mode = '1080i' + new_rate
elif config.av.smart1080p.value == '720p50':
write_mode = '720p' + new_rate
print "[VideoMode] smart1080p mode, selecting ",write_mode
if write_mode and current_mode != write_mode and self.bufferfull:
# first we read now the real available values for every stb,
# before we try to write the new mode
changeResolution = False
try:
if path.exists("/proc/stb/video/videomode_choices"):
vf = open("/proc/stb/video/videomode_choices")
values = vf.readline().replace("\n", "").split(" ", -1)
for x in values:
if x == write_mode:
try:
f = open("/proc/stb/video/videomode", "w")
f.write(write_mode)
f.close()
changeResolution = True
except Exception, e:
print("[VideoMode] write_mode exception:" + str(e))
if not changeResolution:
print "[VideoMode] setMode - port: %s, mode: %s is not available" % (config_port, write_mode)
resolutionlabel["restxt"].setText(_("Video mode: %s not available") % write_mode)
# we try to go for not available 1080p24/1080p30/1080p60 to change to 1080p from 60hz_choices if available
# TODO: can we make this simpler, or more importantly, smaller?
# Should this be factored out, e.g. into two helper functions,
# rather than being repeated inline like this?
if (write_mode == "1080p24") or (write_mode == "1080p30") or (write_mode == "1080p60"):
for x in values:
if x == "1080p":
try:
f = open("/proc/stb/video/videomode", "w")
f.write(x)
f.close()
changeResolution = True
except Exception, e:
print("[VideoMode] write_mode exception:" + str(e))
if not changeResolution:
print "[VideoMode] setMode - port: %s, mode: 1080p is also not available" % config_port
resolutionlabel["restxt"].setText(_("Video mode: 1080p also not available"))
else:
print "[VideoMode] setMode - port: %s, mode: %s" % (config_port, x)
resolutionlabel["restxt"].setText(_("Video mode: %s") % x)
if (write_mode == "2160p24") or (write_mode == "2160p30") or (write_mode == "2160p60"):
for x in values:
if x == "2160p":
try:
f = open("/proc/stb/video/videomode", "w")
f.write(x)
f.close()
changeResolution = True
except Exception, e:
print("[VideoMode] write_mode exception:" + str(e))
if not changeResolution:
print "[VideoMode] setMode - port: %s, mode: 2160p is also not available" % config_port
resolutionlabel["restxt"].setText(_("Video mode: 2160p also not available"))
else:
print "[VideoMode] setMode - port: %s, mode: %s" % (config_port, x)
resolutionlabel["restxt"].setText(_("Video mode: %s") % x)
else:
resolutionlabel["restxt"].setText(_("Video mode: %s") % write_mode)
print "[VideoMode] setMode - port: %s, mode: %s" % (config_port, write_mode)
if config.av.autores.value != "disabled" and config.av.autores_label_timeout.value != '0':
resolutionlabel.show()
vf.close()
except Exception, e:
print("[VideoMode] read videomode_choices exception:" + str(e))
elif write_mode and current_mode != write_mode:
# the resolution remained stuck at a wrong setting after streaming when self.bufferfull was False (should be fixed now after adding BufferInfoStop)
print "[VideoMode] not changing from",current_mode,"to",write_mode,"as self.bufferfull is",self.bufferfull
iAVSwitch.setAspect(config.av.aspect)
iAVSwitch.setWss(config.av.wss)
iAVSwitch.setPolicy43(config.av.policy_43)
iAVSwitch.setPolicy169(config.av.policy_169)
self.delay = False
self.detecttimer.stop()
def autostart(session):
global resolutionlabel
if not path.exists(resolveFilename(SCOPE_PLUGINS)+'SystemPlugins/AutoResolution'):
if resolutionlabel is None:
resolutionlabel = session.instantiateDialog(AutoVideoModeLabel)
AutoVideoMode(session)
else:
config.av.autores.setValue(False)
config.av.autores.save()
configfile.save()
| gpl-2.0 |
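The resolution-matching step in `VideoChangeDetect` above maps the detected video height, width, and frame rate to a target vertical resolution string. The following is a minimal standalone sketch of that classification logic; `classify_resolution` is a hypothetical helper written for illustration and is not part of the Enigma2 codebase:

```python
def classify_resolution(height, width, rate):
    """Map detected video dimensions to a target vertical resolution,
    mirroring the threshold logic used in VideoChangeDetect above."""
    if height > 720 or width > 1280:
        return "1080"
    if 576 < height <= 720 or width > 1024:
        return "720"
    if 480 < height <= 576 or width > 720 or rate in (25000, 23976, 24000):
        return "576"
    return "480"

# e.g. classify_resolution(1080, 1920, 50000) yields "1080"
```

Note the rate check: SD material at 25/23.976/24 fps is forced into the "576" bucket even when the frame dimensions alone would suggest "480", matching the branch order in the original code.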
sander76/home-assistant | homeassistant/helpers/config_entry_flow.py | 3 | 6388 | """Helpers for data entry flows for config entries."""
from __future__ import annotations
from typing import Any, Awaitable, Callable, Union
from homeassistant import config_entries
from homeassistant.core import HomeAssistant
DiscoveryFunctionType = Callable[[], Union[Awaitable[bool], bool]]
class DiscoveryFlowHandler(config_entries.ConfigFlow):
"""Handle a discovery config flow."""
VERSION = 1
def __init__(
self,
domain: str,
title: str,
discovery_function: DiscoveryFunctionType,
connection_class: str,
) -> None:
"""Initialize the discovery config flow."""
self._domain = domain
self._title = title
self._discovery_function = discovery_function
self.CONNECTION_CLASS = connection_class # pylint: disable=invalid-name
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> dict[str, Any]:
"""Handle a flow initialized by the user."""
if self._async_current_entries():
return self.async_abort(reason="single_instance_allowed")
await self.async_set_unique_id(self._domain, raise_on_progress=False)
return await self.async_step_confirm()
async def async_step_confirm(
self, user_input: dict[str, Any] | None = None
) -> dict[str, Any]:
"""Confirm setup."""
if user_input is None:
self._set_confirm_only()
return self.async_show_form(step_id="confirm")
if self.source == config_entries.SOURCE_USER:
# Get current discovered entries.
in_progress = self._async_in_progress()
has_devices = in_progress
if not has_devices:
has_devices = await self.hass.async_add_job( # type: ignore
self._discovery_function, self.hass
)
if not has_devices:
return self.async_abort(reason="no_devices_found")
# Cancel the discovered one.
for flow in in_progress:
self.hass.config_entries.flow.async_abort(flow["flow_id"])
if self._async_current_entries():
return self.async_abort(reason="single_instance_allowed")
return self.async_create_entry(title=self._title, data={})
async def async_step_discovery(
self, discovery_info: dict[str, Any]
) -> dict[str, Any]:
"""Handle a flow initialized by discovery."""
if self._async_in_progress() or self._async_current_entries():
return self.async_abort(reason="single_instance_allowed")
await self.async_set_unique_id(self._domain)
return await self.async_step_confirm()
async_step_zeroconf = async_step_discovery
async_step_ssdp = async_step_discovery
async_step_mqtt = async_step_discovery
async_step_homekit = async_step_discovery
async_step_dhcp = async_step_discovery
async def async_step_import(self, _: dict[str, Any] | None) -> dict[str, Any]:
"""Handle a flow initialized by import."""
if self._async_current_entries():
return self.async_abort(reason="single_instance_allowed")
# Cancel other flows.
in_progress = self._async_in_progress()
for flow in in_progress:
self.hass.config_entries.flow.async_abort(flow["flow_id"])
return self.async_create_entry(title=self._title, data={})
def register_discovery_flow(
domain: str,
title: str,
discovery_function: DiscoveryFunctionType,
connection_class: str,
) -> None:
"""Register flow for discovered integrations that do not require auth."""
class DiscoveryFlow(DiscoveryFlowHandler):
"""Discovery flow handler."""
def __init__(self) -> None:
super().__init__(domain, title, discovery_function, connection_class)
config_entries.HANDLERS.register(domain)(DiscoveryFlow)
class WebhookFlowHandler(config_entries.ConfigFlow):
"""Handle a webhook config flow."""
VERSION = 1
def __init__(
self,
domain: str,
title: str,
description_placeholder: dict,
allow_multiple: bool,
) -> None:
"""Initialize the discovery config flow."""
self._domain = domain
self._title = title
self._description_placeholder = description_placeholder
self._allow_multiple = allow_multiple
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> dict[str, Any]:
"""Handle a user initiated set up flow to create a webhook."""
if not self._allow_multiple and self._async_current_entries():
return self.async_abort(reason="single_instance_allowed")
if user_input is None:
return self.async_show_form(step_id="user")
webhook_id = self.hass.components.webhook.async_generate_id()
if (
"cloud" in self.hass.config.components
and self.hass.components.cloud.async_active_subscription()
):
webhook_url = await self.hass.components.cloud.async_create_cloudhook(
webhook_id
)
cloudhook = True
else:
webhook_url = self.hass.components.webhook.async_generate_url(webhook_id)
cloudhook = False
self._description_placeholder["webhook_url"] = webhook_url
return self.async_create_entry(
title=self._title,
data={"webhook_id": webhook_id, "cloudhook": cloudhook},
description_placeholders=self._description_placeholder,
)
def register_webhook_flow(
domain: str, title: str, description_placeholder: dict, allow_multiple: bool = False
) -> None:
"""Register flow for webhook integrations."""
class WebhookFlow(WebhookFlowHandler):
"""Webhook flow handler."""
def __init__(self) -> None:
super().__init__(domain, title, description_placeholder, allow_multiple)
config_entries.HANDLERS.register(domain)(WebhookFlow)
async def webhook_async_remove_entry(
hass: HomeAssistant, entry: config_entries.ConfigEntry
) -> None:
"""Remove a webhook config entry."""
if not entry.data.get("cloudhook") or "cloud" not in hass.config.components:
return
await hass.components.cloud.async_delete_cloudhook(entry.data["webhook_id"])
| apache-2.0 |
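Both handlers in the Home Assistant helper above guard against duplicate setup by aborting with `single_instance_allowed` whenever an entry for the domain already exists. The pattern can be sketched without the Home Assistant framework; the `FlowRegistry` class below is a toy stand-in for illustration only, not the real `config_entries` API:

```python
class FlowRegistry:
    """Toy stand-in for the entry registry consulted by _async_current_entries."""

    def __init__(self):
        self.entries = []

    def start_flow(self, domain):
        # Abort if an entry for this domain was already created.
        if domain in self.entries:
            return {"type": "abort", "reason": "single_instance_allowed"}
        self.entries.append(domain)
        return {"type": "create_entry", "domain": domain}

registry = FlowRegistry()
first = registry.start_flow("hue")
second = registry.start_flow("hue")
```

Here the first flow creates an entry and the second aborts, which is the same observable behaviour as the `_async_current_entries()` check at the top of `async_step_user`.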
wateraccounting/wa | Collect/CFSR/DataAccess_CFSR.py | 1 | 8868 | # -*- coding: utf-8 -*-
"""
Authors: Tim Hessels
UNESCO-IHE 2016
Contact: t.hessels@unesco-ihe.org
Repository: https://github.com/wateraccounting/wa
Module: Collect/CFSR
"""
# General modules
import pandas as pd
import os
import numpy as np
from netCDF4 import Dataset
import re
from joblib import Parallel, delayed
# WA+ modules
from wa.Collect.CFSR.Download_data_CFSR import Download_data
from wa.General import data_conversions as DC
def CollectData(Dir, Var, Startdate, Enddate, latlim, lonlim, Waitbar, cores, Version):
"""
This function collects daily CFSR data in geotiff format
Keyword arguments:
Dir -- 'C:/file/to/path/'
Var -- 'dlwsfc','dswsfc','ulwsfc', or 'uswsfc'
Startdate -- 'yyyy-mm-dd'
Enddate -- 'yyyy-mm-dd'
latlim -- [ymin, ymax] (values must be between -90 and 90)
lonlim -- [xmin, xmax] (values must be between -180 and 180)
Waitbar -- 1 (Default) will print a wait bar
cores -- The number of cores used to run the routine.
It can be 'False' to avoid using parallel computing
routines.
Version -- 1 or 2 (1 = CFSR, 2 = CFSRv2)
"""
# Creates an array of the days of which the ET is taken
Dates = pd.date_range(Startdate,Enddate,freq = 'D')
# Create Waitbar
if Waitbar == 1:
import wa.Functions.Start.WaitbarConsole as WaitbarConsole
total_amount = len(Dates)
amount = 0
WaitbarConsole.printWaitBar(amount, total_amount, prefix = 'Progress:', suffix = 'Complete', length = 50)
# For collecting CFSR data
if Version == 1:
# Check the latitude and longitude and otherwise set lat or lon on greatest extent
if latlim[0] < -89.9171038899 or latlim[1] > 89.9171038899:
print 'Latitude above 89.917N or below 89.917S is not possible. Value set to maximum'
latlim[0] = np.maximum(latlim[0],-89.9171038899)
latlim[1] = np.minimum(latlim[1],89.9171038899)
if lonlim[0] < -180 or lonlim[1] > 179.843249782:
print 'Longitude must be between 180W and 179.84E. Now value is set to maximum'
lonlim[0] = np.maximum(lonlim[0],-180)
lonlim[1] = np.minimum(lonlim[1],179.843249782)
# Make directory for the CFSR data
output_folder=os.path.join(Dir,'Radiation','CFSR')
if not os.path.exists(output_folder):
os.makedirs(output_folder)
# For collecting CFSRv2 data
if Version == 2:
# Check the latitude and longitude and otherwise set lat or lon on greatest extent
if latlim[0] < -89.9462116040955806 or latlim[1] > 89.9462116040955806:
print 'Latitude above 89.946N or below 89.946S is not possible. Value set to maximum'
latlim[0] = np.maximum(latlim[0],-89.9462116040955806)
latlim[1] = np.minimum(latlim[1],89.9462116040955806)
if lonlim[0] < -180 or lonlim[1] > 179.8977275:
print 'Longitude must be between 180W and 179.90E. Now value is set to maximum'
lonlim[0] = np.maximum(lonlim[0],-180)
lonlim[1] = np.minimum(lonlim[1],179.8977275)
# Make directory for the CFSRv2 data
output_folder=os.path.join(Dir,'Radiation','CFSRv2')
if not os.path.exists(output_folder):
os.makedirs(output_folder)
# Pass variables to parallel function and run
args = [output_folder, latlim, lonlim, Var, Version]
if not cores:
for Date in Dates:
RetrieveData(Date, args)
if Waitbar == 1:
amount += 1
WaitbarConsole.printWaitBar(amount, total_amount, prefix = 'Progress:', suffix = 'Complete', length = 50)
results = True
else:
results = Parallel(n_jobs=cores)(delayed(RetrieveData)(Date, args)
for Date in Dates)
# Remove all .nc and .grb2 files
for f in os.listdir(output_folder):
if re.search(".nc", f):
os.remove(os.path.join(output_folder, f))
for f in os.listdir(output_folder):
if re.search(".grb2", f):
os.remove(os.path.join(output_folder, f))
for f in os.listdir(output_folder):
if re.search(".grib2", f):
os.remove(os.path.join(output_folder, f))
return results
def RetrieveData(Date, args):
# unpack the arguments
[output_folder, latlim, lonlim, Var, Version] = args
# Name of the model
if Version == 1:
version_name = 'CFSR'
if Version == 2:
version_name = 'CFSRv2'
# Name of the outputfile
if Var == 'dlwsfc':
Outputname = 'DLWR_%s_W-m2_' %version_name + str(Date.strftime('%Y')) + '.' + str(Date.strftime('%m')) + '.' + str(Date.strftime('%d')) + '.tif'
if Var == 'dswsfc':
Outputname = 'DSWR_%s_W-m2_' %version_name + str(Date.strftime('%Y')) + '.' + str(Date.strftime('%m')) + '.' + str(Date.strftime('%d')) + '.tif'
if Var == 'ulwsfc':
Outputname = 'ULWR_%s_W-m2_' %version_name + str(Date.strftime('%Y')) + '.' + str(Date.strftime('%m')) + '.' + str(Date.strftime('%d')) + '.tif'
if Var == 'uswsfc':
Outputname = 'USWR_%s_W-m2_' %version_name + str(Date.strftime('%Y')) + '.' + str(Date.strftime('%m')) + '.' + str(Date.strftime('%d')) + '.tif'
# Create the total end output name
outputnamePath = os.path.join(output_folder, Outputname)
# If the output name not exists than create this output
if not os.path.exists(outputnamePath):
local_filename = Download_data(Date, Version, output_folder, Var)
# convert grb2 to netcdf (wgrib2 module is needed)
for i in range(0,4):
nameNC = 'Output' + str(Date.strftime('%Y')) + str(Date.strftime('%m')) + str(Date.strftime('%d')) + '-' + str(i+1) + '.nc'
# Total path of the output
FileNC6hour = os.path.join(output_folder, nameNC)
# Band number of the grib data which is converted in .nc
band=(int(Date.strftime('%d')) - 1) * 28 + (i + 1) * 7
# Convert the data
DC.Convert_grb2_to_nc(local_filename, FileNC6hour, band)
if Version == 1:
if Date < pd.Timestamp(pd.datetime(2011, 01, 01)):
# Convert the latlim and lonlim into array
Xstart = np.floor((lonlim[0] + 180.1562497) / 0.3125)
Xend = np.ceil((lonlim[1] + 180.1562497) / 0.3125) + 1
Ystart = np.floor((latlim[0] + 89.9171038899) / 0.3122121663)
Yend = np.ceil((latlim[1] + 89.9171038899) / 0.3122121663)
# Create a new dataset
Datatot = np.zeros([576, 1152])
else:
Version = 2
if Version == 2:
# Convert the latlim and lonlim into array
Xstart = np.floor((lonlim[0] + 180.102272725) / 0.204545)
Xend = np.ceil((lonlim[1] + 180.102272725) / 0.204545) + 1
Ystart = np.floor((latlim[0] + 89.9462116040955806) / 0.204423)
Yend = np.ceil((latlim[1] + 89.9462116040955806) / 0.204423)
# Create a new dataset
Datatot = np.zeros([880, 1760])
# Open 4 times 6 hourly dataset
for i in range (0, 4):
nameNC = 'Output' + str(Date.strftime('%Y')) + str(Date.strftime('%m')) + str(Date.strftime('%d')) + '-' + str(i + 1) + '.nc'
FileNC6hour = os.path.join(output_folder, nameNC)
f = Dataset(FileNC6hour, mode = 'r')
Data = f.variables['Band1'][0:int(Datatot.shape[0]), 0:int(Datatot.shape[1])]
f.close()
data = np.array(Data)
Datatot = Datatot + data
# Calculate the average in W/m^2 over the day
DatatotDay = Datatot / 4
DatatotDayEnd = np.zeros([int(Datatot.shape[0]), int(Datatot.shape[1])])
DatatotDayEnd[:,0:int(Datatot.shape[0])] = DatatotDay[:, int(Datatot.shape[0]):int(Datatot.shape[1])]
DatatotDayEnd[:,int(Datatot.shape[0]):int(Datatot.shape[1])] = DatatotDay[:, 0:int(Datatot.shape[0])]
# clip the data to the extent defined by the user
DatasetEnd = DatatotDayEnd[int(Ystart):int(Yend), int(Xstart):int(Xend)]
# save file
if Version == 1:
pixel_size = 0.3125
if Version == 2:
pixel_size = 0.204545
geo = [lonlim[0],pixel_size,0,latlim[1],0,-pixel_size]
DC.Save_as_tiff(data = np.flipud(DatasetEnd), name = outputnamePath, geo = geo, projection = "WGS84")
return()
| apache-2.0 |
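In `RetrieveData` above, the clipping window is obtained by converting the lat/lon limits into array indices using the grid origin and pixel size, with `floor` on the lower bound and `ceil` on the upper. A standalone sketch of that conversion for the CFSR v1 grid is shown below; the default constants are copied from the code above, and the function name is a hypothetical helper, not part of the `wa` package:

```python
import math

def lonlat_to_window(lonlim, latlim,
                     lon0=-180.1562497, dlon=0.3125,
                     lat0=-89.9171038899, dlat=0.3122121663):
    """Return (xstart, xend, ystart, yend) array indices for a lon/lat box,
    following the floor/ceil convention used in RetrieveData."""
    xstart = int(math.floor((lonlim[0] - lon0) / dlon))
    xend = int(math.ceil((lonlim[1] - lon0) / dlon)) + 1
    ystart = int(math.floor((latlim[0] - lat0) / dlat))
    yend = int(math.ceil((latlim[1] - lat0) / dlat))
    return xstart, xend, ystart, yend
```

Because the x upper bound gets an extra `+ 1` while the y upper bound does not, the window is deliberately one column wider than a symmetric floor/ceil pair would give, matching the slicing `DatatotDayEnd[int(Ystart):int(Yend), int(Xstart):int(Xend)]` in the original.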
sanabby/kubernetes | cluster/juju/charms/trusty/kubernetes-master/hooks/hooks.py | 101 | 11762 | #!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The main hook file is called by Juju.
"""
import contextlib
import os
import socket
import subprocess
import sys
from charmhelpers.core import hookenv, host
from charmhelpers.contrib import ssl
from kubernetes_installer import KubernetesInstaller
from path import Path
hooks = hookenv.Hooks()
@contextlib.contextmanager
def check_sentinel(filepath):
"""
A context manager method to write a file while the code block is doing
something and remove the file when done.
"""
fail = False
try:
yield filepath.exists()
except:
fail = True
filepath.touch()
raise
finally:
if fail is False and filepath.exists():
filepath.remove()
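The sentinel pattern above (create a marker file while a block runs, leave it behind on failure so the next run can detect the broken state) can be sketched in a self-contained Python 3 form. The names `sentinel`, `path`, and `last` are illustrative, and `os.path` stands in for the charm's `Path` objects:

```python
# A hedged, dependency-free sketch of the sentinel pattern: the file exists
# only while the managed block runs, and is deliberately left behind when
# the block raises, so a later run sees that the previous one failed.
import contextlib
import os
import tempfile

@contextlib.contextmanager
def sentinel(filepath):
    fail = False
    try:
        # Tell the caller whether a previous run left the sentinel behind.
        yield os.path.exists(filepath)
    except Exception:
        fail = True
        open(filepath, "a").close()  # leave the sentinel in place
        raise
    finally:
        if not fail and os.path.exists(filepath):
            os.remove(filepath)

path = os.path.join(tempfile.mkdtemp(), "broken_build")
try:
    with sentinel(path) as failed_before:
        assert not failed_before  # first run: no sentinel yet
        raise RuntimeError("build broke")
except RuntimeError:
    pass
with sentinel(path) as failed_before:
    last = failed_before  # True: the failed run left the file behind
```

After the second `with` block exits cleanly, the sentinel file is removed again.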
@hooks.hook('config-changed')
def config_changed():
"""
On the execution of the juju event 'config-changed' this function
determines the appropriate architecture and the configured version to
create kubernetes binary files.
"""
hookenv.log('Starting config-changed')
charm_dir = Path(hookenv.charm_dir())
config = hookenv.config()
# Get the version of kubernetes to install.
version = config['version']
username = config['username']
password = config['password']
certificate = config['apiserver-cert']
key = config['apiserver-key']
if version == 'master':
        # The 'master' branch of kubernetes is used when master is configured.
branch = 'master'
elif version == 'local':
# Check for kubernetes binaries in the local files/output directory.
branch = None
else:
# Create a branch to a tag to get the release version.
branch = 'tags/{0}'.format(version)
cert_file = '/srv/kubernetes/apiserver.crt'
key_file = '/srv/kubernetes/apiserver.key'
# When the cert or key changes we need to restart the apiserver.
if config.changed('apiserver-cert') or config.changed('apiserver-key'):
hookenv.log('Certificate or key has changed.')
if not certificate or not key:
generate_cert(key=key_file, cert=cert_file)
else:
hookenv.log('Writing new certificate and key to server.')
with open(key_file, 'w') as file:
file.write(key)
with open(cert_file, 'w') as file:
file.write(certificate)
# Restart apiserver as the certificate or key has changed.
if host.service_running('apiserver'):
host.service_restart('apiserver')
# Reload nginx because it proxies https to apiserver.
if host.service_running('nginx'):
host.service_reload('nginx')
if config.changed('username') or config.changed('password'):
hookenv.log('Username or password changed, creating authentication.')
basic_auth(username, username, password)
if host.service_running('apiserver'):
host.service_restart('apiserver')
# Get package architecture, rather than arch from the kernel (uname -m).
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
if not branch:
output_path = charm_dir / 'files/output'
kube_installer = KubernetesInstaller(arch, version, output_path)
else:
        # Build the kubernetes binaries from source on the units.
kubernetes_dir = Path('/opt/kubernetes')
# Construct the path to the binaries using the arch.
output_path = kubernetes_dir / '_output/local/bin/linux' / arch
kube_installer = KubernetesInstaller(arch, version, output_path)
if not kubernetes_dir.exists():
message = 'The kubernetes source directory {0} does not exist. ' \
'Was the kubernetes repository cloned during the install?'
print(message.format(kubernetes_dir))
exit(1)
# Change to the kubernetes directory (git repository).
with kubernetes_dir:
# Create a command to get the current branch.
git_branch = 'git branch | grep "\*" | cut -d" " -f2'
current_branch = subprocess.check_output(git_branch, shell=True)
current_branch = current_branch.strip()
print('Current branch: ', current_branch)
# Create the path to a file to indicate if the build was broken.
broken_build = charm_dir / '.broken_build'
# write out the .broken_build file while this block is executing.
with check_sentinel(broken_build) as last_build_failed:
print('Last build failed: ', last_build_failed)
# Rebuild if current version is different or last build failed.
if current_branch != version or last_build_failed:
kube_installer.build(branch)
if not output_path.isdir():
broken_build.touch()
    # Create the symbolic links to the right directories.
kube_installer.install()
relation_changed()
hookenv.log('The config-changed hook completed successfully.')
@hooks.hook('etcd-relation-changed', 'minions-api-relation-changed')
def relation_changed():
template_data = get_template_data()
# Check required keys
for k in ('etcd_servers',):
if not template_data.get(k):
print 'Missing data for', k, template_data
return
print 'Running with\n', template_data
# Render and restart as needed
for n in ('apiserver', 'controller-manager', 'scheduler'):
if render_file(n, template_data) or not host.service_running(n):
host.service_restart(n)
# Render the file that makes the kubernetes binaries available to minions.
if render_file(
'distribution', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/distribution') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Render the default nginx template.
if render_file(
'nginx', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/default') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Send api endpoint to minions
notify_minions()
@hooks.hook('network-relation-changed')
def network_relation_changed():
relation_id = hookenv.relation_id()
hookenv.relation_set(relation_id, ignore_errors=True)
def notify_minions():
print('Notify minions.')
config = hookenv.config()
for r in hookenv.relation_ids('minions-api'):
hookenv.relation_set(
r,
hostname=hookenv.unit_private_ip(),
port=8080,
version=config['version'])
print('Notified minions of version ' + config['version'])
def basic_auth(name, id, pwd=None, file='/srv/kubernetes/basic-auth.csv'):
"""
Create a basic authentication file for kubernetes. The file is a csv file
with 3 columns: password, user name, user id. From the Kubernetes docs:
The basic auth credentials last indefinitely, and the password cannot be
changed without restarting apiserver.
"""
if not pwd:
import random
import string
alphanumeric = string.ascii_letters + string.digits
pwd = ''.join(random.choice(alphanumeric) for _ in range(16))
lines = []
auth_file = Path(file)
if auth_file.isfile():
lines = auth_file.lines()
for line in lines:
target = ',{0},{1}'.format(name, id)
if target in line:
lines.remove(line)
auth_line = '{0},{1},{2}'.format(pwd, name, id)
lines.append(auth_line)
auth_file.write_lines(lines)
def generate_cert(common_name=None,
key='/srv/kubernetes/apiserver.key',
cert='/srv/kubernetes/apiserver.crt'):
"""
Create the certificate and key for the Kubernetes tls enablement.
"""
hookenv.log('Generating new self signed certificate and key', 'INFO')
if not common_name:
common_name = hookenv.unit_get('public-address')
if os.path.isfile(key) or os.path.isfile(cert):
hookenv.log('Overwriting the existing certificate or key', 'WARNING')
hookenv.log('Generating certificate for {0}'.format(common_name), 'INFO')
# Generate the self signed certificate with the public address as CN.
# https://pythonhosted.org/charmhelpers/api/charmhelpers.contrib.ssl.html
ssl.generate_selfsigned(key, cert, cn=common_name)
def get_template_data():
rels = hookenv.relations()
config = hookenv.config()
version = config['version']
template_data = {}
template_data['etcd_servers'] = ','.join([
'http://%s:%s' % (s[0], s[1]) for s in sorted(
get_rel_hosts('etcd', rels, ('hostname', 'port')))])
template_data['minions'] = ','.join(get_rel_hosts('minions-api', rels))
private_ip = hookenv.unit_private_ip()
public_ip = hookenv.unit_public_ip()
template_data['api_public_address'] = _bind_addr(public_ip)
template_data['api_private_address'] = _bind_addr(private_ip)
template_data['bind_address'] = '127.0.0.1'
template_data['api_http_uri'] = 'http://%s:%s' % (private_ip, 8080)
template_data['api_https_uri'] = 'https://%s:%s' % (private_ip, 6443)
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
template_data['web_uri'] = '/kubernetes/%s/local/bin/linux/%s/' % (version,
arch)
if version == 'local':
template_data['alias'] = hookenv.charm_dir() + '/files/output/'
else:
directory = '/opt/kubernetes/_output/local/bin/linux/%s/' % arch
template_data['alias'] = directory
_encode(template_data)
return template_data
def _bind_addr(addr):
if addr.replace('.', '').isdigit():
return addr
try:
return socket.gethostbyname(addr)
except socket.error:
raise ValueError('Could not resolve address %s' % addr)
def _encode(d):
for k, v in d.items():
if isinstance(v, unicode):
d[k] = v.encode('utf8')
def get_rel_hosts(rel_name, rels, keys=('private-address',)):
hosts = []
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_id == hookenv.local_unit():
continue
values = [unit_data.get(k) for k in keys]
if not all(values):
continue
hosts.append(len(values) == 1 and values[0] or values)
return hosts
def render_file(name, data, src_suffix='upstart.tmpl', tgt_path=None):
tmpl_path = os.path.join(
os.environ.get('CHARM_DIR'), 'files', '%s.%s' % (name, src_suffix))
with open(tmpl_path) as fh:
tmpl = fh.read()
rendered = tmpl % data
if tgt_path is None:
tgt_path = '/etc/init/%s.conf' % name
if os.path.exists(tgt_path):
with open(tgt_path) as fh:
contents = fh.read()
if contents == rendered:
return False
with open(tgt_path, 'w') as fh:
fh.write(rendered)
return True
if __name__ == '__main__':
hooks.execute(sys.argv)
| apache-2.0 |
john-wang-metro/metro-openerp | metro_mrp_id_stock/mfg_id_stock.py | 2 | 12410 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
from openerp.tools.translate import _
import time, datetime
import openerp.addons.decimal_precision as dp
from dateutil.relativedelta import relativedelta
from openerp.tools import DEFAULT_SERVER_DATETIME_FORMAT
class mfg_id_reserve(osv.osv):
_name = "mfg.id.reserve"
_inherit = ['mail.thread']
_columns={
'mfg_id': fields.many2one("sale.product", string="MFG ID"),
'product_id': fields.many2one("product.product", string="Product"),
'location_id': fields.many2one("stock.location", string="Location"),
'product_qty': fields.float('Reserved Quantity', track_visibility='onchange'),
'product_qty_consumed': fields.float('Consumed Quantity', track_visibility='onchange'),
'pur_req_line_id': fields.many2one("pur.req.line", string="Requisition Line"),
'pur_req_id': fields.related("pur_req_line_id","req_id",type="many2one", relation="pur.req", string="Requisition")
}
mfg_id_reserve()
class product_product(osv.osv):
_inherit = 'product.product'
def _reserved_qty(self, cr, uid, ids, field_names=None, arg=False, context=None):
if context is None:
context = {}
res = {}
mfg_inv_obj = self.pool.get('mfg.id.reserve')
for prod in self.browse(cr, uid, ids, context=context):
line_ids = mfg_inv_obj.search(cr, uid, [('product_id','=',prod.id)], context=context)
reserved_qty = 0.0
for line in mfg_inv_obj.browse(cr, uid, line_ids, context=context):
reserved_qty += line.product_qty
res[prod.id] = reserved_qty
return res
def _get_mfg_id_products(self, cr, uid, ids, context=None):
res = set()
for mfg_id_inv in self.browse(cr, uid, ids, context=context):
res.add(mfg_id_inv.product_id.id)
return res
_columns={
'reserved_qty': fields.function(_reserved_qty, string="Reserved Quantity", type="float",
store = {'mfg.id.reserve': (_get_mfg_id_products, ['product_id', 'product_qty'], 10),}
, digits_compute=dp.get_precision('Product Unit of Measure')),
}
class sale_product(osv.osv):
_inherit = 'sale.product'
_columns={
'bom_id_material': fields.many2one('mrp.bom',string='Material BOM', track_visibility='onchange',readonly=True, states={'draft':[('readonly',False)]}),
}
def onchange_bom_id(self, cr, uid, ids, bom_id, context=None):
return {'value':{'bom_id_material':bom_id}}
def _get_purchase_schedule_date(self, cr, uid, company, context=None):
date_planned = datetime.datetime.now()
schedule_date = (date_planned + relativedelta(days=company.po_lead))
return schedule_date
def reserve_and_req(self, cr, uid, ids, location_id, context=None):
bom_obj = self.pool.get('mrp.bom')
uom_obj = self.pool.get('product.uom')
mfg_id_obj = self.pool.get('sale.product')
pur_req_obj = self.pool.get('pur.req')
pur_req_line_obj = self.pool.get('pur.req.line')
prod_obj = self.pool.get('product.product')
id_reserve_obj = self.pool.get('mfg.id.reserve')
cur_user = self.pool.get('res.users').browse(cr, uid, uid, context=context)
company_id = cur_user.company_id.id
#1.check all of mfg_id's bom_id and merge mfg_ids with same bom
bom_mfg_ids = {}
for mfg_id in self.browse(cr, uid, ids, context=context):
if not mfg_id.bom_id_material:
            raise osv.except_osv(_('Error'), _('Please assign a material BOM to MFG ID (%s); then you can generate purchase requisitions from it')%(mfg_id.name,))
bom = mfg_id.bom_id_material
mfg_ids = bom_mfg_ids.get(bom.id,[])
if not mfg_ids:
bom_mfg_ids[bom.id] = mfg_ids
mfg_ids.append(mfg_id.id)
#2.generate the purchase requisitions by bom, and make inventory reservation
warehouse_id = self.pool.get('stock.warehouse').search(cr, uid, [('company_id', '=', company_id)], context=context)
warehouse_id = warehouse_id and warehouse_id[0] or False
req_ids = []
for bom_id, mfg_ids in bom_mfg_ids.items():
bom = bom_obj.browse(cr, uid, bom_id, context=context)
#generate pur_req
pur_req_vals = {
'warehouse_id': warehouse_id,
'user_id': uid,
'company_id': company_id,
'state': 'draft',
}
pur_req_id = pur_req_obj.create(cr, uid, pur_req_vals, context=context)
#generate pur_req_line
result = bom_obj._bom_explode(cr, uid, bom, factor=len(mfg_ids))[0]
req_line_cnt = 0
for line in result:
#get the quantity to request
bom_qty = line['product_qty']
req_qty = bom_qty
product = prod_obj.browse(cr, uid, line['product_id'], context=context)
if not product.purchase_ok:
continue
qty_avail = product.qty_virtual + product.product_qty_req - product.reserved_qty
if qty_avail < bom_qty:
req_qty = bom_qty - qty_avail
#create reservation line
pur_req_line_id = None
if req_qty > 0:
uom_bom_id = line['product_uom']
uom_po_id = product.uom_po_id.id
req_qty = uom_obj._compute_qty(cr, uid, uom_bom_id, req_qty, uom_po_id)
schedule_date = self._get_purchase_schedule_date(cr, uid, bom.company_id, context=context)
mfg_ids_str = ','.join([mfg_id.name for mfg_id in mfg_id_obj.browse(cr, uid, mfg_ids, context=context)])
pur_req_line_vals = {
'req_id': pur_req_id,
'name': product.partner_ref,
'product_qty': req_qty,
'product_id': product.id,
'product_uom_id': uom_po_id,
'inv_qty': product.qty_available,
'date_required': schedule_date.strftime(DEFAULT_SERVER_DATETIME_FORMAT),
'req_reason': 'MFG ID [%s]'%(mfg_ids_str,),
'req_emp_id': cur_user.employee_id and cur_user.employee_id.id or False,
'mfg_ids':[[6,0,mfg_ids]]
}
pur_req_line_id = pur_req_line_obj.create(cr, uid, pur_req_line_vals, context=context)
req_line_cnt += 1
#create reservation line
for mfg_id in mfg_ids:
                # if there is an existing reservation with consumed quantity, raise an error
ids_used = id_reserve_obj.search(cr, uid, [('mfg_id','=',mfg_id),('product_id','=',line['product_id']),('product_qty_consumed','>',0)], context=context)
if ids_used:
                    raise osv.except_osv(_('Error'), _('MFG ID(%s) consumed product [%s]%s, cannot generate a reservation again!')%(mfg_id,product.default_code,product.name))
cr.execute('delete from mfg_id_reserve where mfg_id=%s and product_id=%s',(mfg_id, line['product_id']))
reserve_vals = {'mfg_id':mfg_id, 'product_id':line['product_id'], 'location_id':location_id, 'product_qty':bom_qty/len(mfg_ids), 'pur_req_line_id':pur_req_line_id}
id_reserve_obj.create(cr, uid, reserve_vals, context=context)
#delete the pur_req if there are no req lines generated
if req_line_cnt == 0:
pur_req_obj.unlink(cr, uid, pur_req_id, context=context)
else:
req_ids.append(pur_req_id)
        # finished; go to the purchase requisition page if any requisitions were generated
# if req_ids:
# return self.material_requested(cr, uid, ids, context)
# else:
# return self.material_reserved(cr, uid, ids, context)
return self.material_reserved(cr, uid, ids, context)
def material_reserved(self, cr, uid, ids, context):
act_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'metro_mrp_id_stock', 'action_mfg_id_reserve')
act_id = act_id and act_id[1] or False
act_win = self.pool.get('ir.actions.act_window').read(cr, uid, act_id, [], context=context)
act_win['context'] = {'search_default_mfg_id': ids[0]}
return act_win
class stock_move(osv.osv):
_inherit = "stock.move"
def action_done(self, cr, uid, ids, context=None):
resu = super(stock_move,self).action_done(cr, uid, ids, context)
#get the mfg id's moving out quantity {mfg_id:{product_id:qty,...},...}
mfg_ids = {}
mat_req_line = self.pool.get('material.request.line')
id_reserve_obj = self.pool.get('mfg.id.reserve')
for move in self.browse(cr, uid, ids, context=context):
if move.picking_id and move.picking_id.type == 'mr' and move.state == 'done':
move = mat_req_line.browse(cr, uid, move.id, context=context)
if not move.mr_sale_prod_id or move.product_qty <= 0:
continue
product_id = move.product_id.id
mfg_id = move.mr_sale_prod_id.id
#get the mfg_id's product
mfg_id_products = {}
if mfg_id not in mfg_ids:
mfg_ids[mfg_id] = mfg_id_products
else:
mfg_id_products = mfg_ids.get(mfg_id)
#set the products quantity
prod_qty = mfg_id_products.get(product_id, 0)
mfg_id_products[product_id] = prod_qty + move.product_qty
#remove the reserved quantity
if mfg_ids:
for mfg_id, product_qty in mfg_ids.items():
for product_id, qty in product_qty.items():
#cr.execute('update mfg_id_reserve set product_qty=product_qty-%s, product_qty_consumed=product_qty_consumed+%s where mfg_id=%s and product_id=%s',(qty, qty, mfg_id, product_id))
#In order to record the quantity changing messages, need use OpenERP to do update
id_reserve_ids = id_reserve_obj.search(cr, uid, [('mfg_id','=',mfg_id),('product_id','=',product_id)], context=context)
if id_reserve_ids:
qty_reserve = id_reserve_obj.read(cr, uid, id_reserve_ids[0], ['product_qty','product_qty_consumed'], context=context)
qty_old = qty_reserve['product_qty']
qty_consumed_old = qty_reserve['product_qty_consumed']
id_reserve_obj.write(cr, uid, id_reserve_ids[0], {'product_qty':qty_old-qty, 'product_qty_consumed':qty_consumed_old+qty}, context=context)
return resu
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
CoDEmanX/ArangoDB | 3rdParty/V8-4.3.61/third_party/python_26/Lib/test/test_extcall.py | 55 | 6263 | """Doctest for method/function calls.
We're going to use these types for extra testing
>>> from UserList import UserList
>>> from UserDict import UserDict
We're defining four helper functions
>>> def e(a,b):
... print a, b
>>> def f(*a, **k):
... print a, test_support.sortdict(k)
>>> def g(x, *y, **z):
... print x, y, test_support.sortdict(z)
>>> def h(j=1, a=2, h=3):
... print j, a, h
Argument list examples
>>> f()
() {}
>>> f(1)
(1,) {}
>>> f(1, 2)
(1, 2) {}
>>> f(1, 2, 3)
(1, 2, 3) {}
>>> f(1, 2, 3, *(4, 5))
(1, 2, 3, 4, 5) {}
>>> f(1, 2, 3, *[4, 5])
(1, 2, 3, 4, 5) {}
>>> f(1, 2, 3, *UserList([4, 5]))
(1, 2, 3, 4, 5) {}
Here we add keyword arguments
>>> f(1, 2, 3, **{'a':4, 'b':5})
(1, 2, 3) {'a': 4, 'b': 5}
>>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7})
(1, 2, 3, 4, 5) {'a': 6, 'b': 7}
>>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9})
(1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5}
>>> f(1, 2, 3, **UserDict(a=4, b=5))
(1, 2, 3) {'a': 4, 'b': 5}
>>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7))
(1, 2, 3, 4, 5) {'a': 6, 'b': 7}
>>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9))
(1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5}
Examples with invalid arguments (TypeErrors). We're also testing the function
names in the exception messages.
Verify clearing of SF bug #733667
>>> e(c=4)
Traceback (most recent call last):
...
TypeError: e() got an unexpected keyword argument 'c'
>>> g()
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(*())
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(*(), **{})
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(1)
1 () {}
>>> g(1, 2)
1 (2,) {}
>>> g(1, 2, 3)
1 (2, 3) {}
>>> g(1, 2, 3, *(4, 5))
1 (2, 3, 4, 5) {}
>>> class Nothing: pass
...
>>> g(*Nothing())
Traceback (most recent call last):
...
TypeError: g() argument after * must be a sequence, not instance
>>> class Nothing:
... def __len__(self): return 5
...
>>> g(*Nothing())
Traceback (most recent call last):
...
TypeError: g() argument after * must be a sequence, not instance
>>> class Nothing():
... def __len__(self): return 5
... def __getitem__(self, i):
... if i<3: return i
... else: raise IndexError(i)
...
>>> g(*Nothing())
0 (1, 2) {}
>>> class Nothing:
... def __init__(self): self.c = 0
... def __iter__(self): return self
... def next(self):
... if self.c == 4:
... raise StopIteration
... c = self.c
... self.c += 1
... return c
...
>>> g(*Nothing())
0 (1, 2, 3) {}
Make sure that the function doesn't stomp the dictionary
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> d2 = d.copy()
>>> g(1, d=4, **d)
1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4}
>>> d == d2
True
What about willful misconduct?
>>> def saboteur(**kw):
... kw['x'] = 'm'
... return kw
>>> d = {}
>>> kw = saboteur(a=1, **d)
>>> d
{}
>>> g(1, 2, 3, **{'x': 4, 'y': 5})
Traceback (most recent call last):
...
TypeError: g() got multiple values for keyword argument 'x'
>>> f(**{1:2})
Traceback (most recent call last):
...
TypeError: f() keywords must be strings
>>> h(**{'e': 2})
Traceback (most recent call last):
...
TypeError: h() got an unexpected keyword argument 'e'
>>> h(*h)
Traceback (most recent call last):
...
TypeError: h() argument after * must be a sequence, not function
>>> dir(*h)
Traceback (most recent call last):
...
TypeError: dir() argument after * must be a sequence, not function
>>> None(*h)
Traceback (most recent call last):
...
TypeError: NoneType object argument after * must be a sequence, \
not function
>>> h(**h)
Traceback (most recent call last):
...
TypeError: h() argument after ** must be a mapping, not function
>>> dir(**h)
Traceback (most recent call last):
...
TypeError: dir() argument after ** must be a mapping, not function
>>> None(**h)
Traceback (most recent call last):
...
TypeError: NoneType object argument after ** must be a mapping, \
not function
>>> dir(b=1, **{'b': 1})
Traceback (most recent call last):
...
TypeError: dir() got multiple values for keyword argument 'b'
Another helper function
>>> def f2(*a, **b):
... return a, b
>>> d = {}
>>> for i in xrange(512):
... key = 'k%d' % i
... d[key] = i
>>> a, b = f2(1, *(2,3), **d)
>>> len(a), len(b), b == d
(3, 512, True)
>>> class Foo:
... def method(self, arg1, arg2):
... return arg1+arg2
>>> x = Foo()
>>> Foo.method(*(x, 1, 2))
3
>>> Foo.method(x, *(1, 2))
3
>>> Foo.method(*(1, 2, 3))
Traceback (most recent call last):
...
TypeError: unbound method method() must be called with Foo instance as \
first argument (got int instance instead)
>>> Foo.method(1, *[2, 3])
Traceback (most recent call last):
...
TypeError: unbound method method() must be called with Foo instance as \
first argument (got int instance instead)
A PyCFunction that takes only positional parameters should allow an
empty keyword dictionary to pass without a complaint, but raise a
TypeError if the dictionary is not empty
>>> try:
... silence = id(1, *{})
... True
... except:
... False
True
>>> id(1, **{'foo': 1})
Traceback (most recent call last):
...
TypeError: id() takes no keyword arguments
"""
from test import test_support
def test_main():
from test import test_extcall # self import
test_support.run_doctest(test_extcall, True)
if __name__ == '__main__':
test_main()
| apache-2.0 |
sktjdgns1189/android_kernel_pantech_ef56s | tools/perf/util/setup.py | 4998 | 1330 | #!/usr/bin/python2
from distutils.core import setup, Extension
from os import getenv
from distutils.command.build_ext import build_ext as _build_ext
from distutils.command.install_lib import install_lib as _install_lib
class build_ext(_build_ext):
def finalize_options(self):
_build_ext.finalize_options(self)
self.build_lib = build_lib
self.build_temp = build_tmp
class install_lib(_install_lib):
def finalize_options(self):
_install_lib.finalize_options(self)
self.build_dir = build_lib
cflags = ['-fno-strict-aliasing', '-Wno-write-strings']
cflags += getenv('CFLAGS', '').split()
build_lib = getenv('PYTHON_EXTBUILD_LIB')
build_tmp = getenv('PYTHON_EXTBUILD_TMP')
ext_sources = [f.strip() for f in file('util/python-ext-sources')
if len(f.strip()) > 0 and f[0] != '#']
perf = Extension('perf',
sources = ext_sources,
include_dirs = ['util/include'],
extra_compile_args = cflags,
)
setup(name='perf',
version='0.1',
description='Interface with the Linux profiling infrastructure',
author='Arnaldo Carvalho de Melo',
author_email='acme@redhat.com',
license='GPLv2',
url='http://perf.wiki.kernel.org',
ext_modules=[perf],
cmdclass={'build_ext': build_ext, 'install_lib': install_lib})
| gpl-2.0 |
mandeepdhami/neutron | neutron/plugins/embrane/common/operation.py | 59 | 1466 | # Copyright 2013 Embrane, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class Operation(object):
"""Defines a series of operations which shall be executed in order.
    The operations are expected to be procedures; their return values are discarded.
"""
def __init__(self, procedure, args=(), kwargs={}, nextop=None):
self._procedure = procedure
self.args = args[:]
self.kwargs = dict(kwargs)
self.nextop = nextop
def execute(self):
args = self.args
self._procedure(*args, **self.kwargs)
return self.nextop
def execute_all(self):
nextop = self.execute()
while nextop:
            nextop = nextop.execute()
def has_next(self):
return self.nextop is not None
def add_bottom_operation(self, operation):
op = self
while op.has_next():
op = op.nextop
op.nextop = operation
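A minimal usage sketch of the chaining semantics the docstring describes: operations run in order and their return values are discarded. The `Op` class here is a stripped-down, illustrative copy (with an iterative `execute_all`), not the plugin's own class, and `log`/`step` are invented for the demo:

```python
# Chain three operations and run them in order with execute_all().
log = []

def step(name):
    log.append(name)

class Op:
    """Stripped-down copy of Operation with the same chaining semantics."""
    def __init__(self, procedure, args=(), nextop=None):
        self._procedure = procedure
        self.args = args
        self.nextop = nextop

    def execute(self):
        # Run the procedure, discard its result, hand back the next link.
        self._procedure(*self.args)
        return self.nextop

    def execute_all(self):
        nextop = self.execute()
        while nextop:
            nextop = nextop.execute()

    def add_bottom_operation(self, operation):
        op = self
        while op.nextop is not None:
            op = op.nextop
        op.nextop = operation

first = Op(step, args=("first",), nextop=Op(step, args=("second",)))
first.add_bottom_operation(Op(step, args=("third",)))
first.execute_all()
# log is now ["first", "second", "third"]
```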
| apache-2.0 |
tmimori/erpnext | erpnext/patches/v4_0/update_custom_print_formats_for_renamed_fields.py | 119 | 1265 | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import frappe
import re
def execute():
# NOTE: sequence is important
fields_list = (
("amount", "base_amount"),
("ref_rate", "price_list_rate"),
("base_ref_rate", "base_price_list_rate"),
("adj_rate", "discount_percentage"),
("export_rate", "rate"),
("basic_rate", "base_rate"),
("export_amount", "amount"),
("reserved_warehouse", "warehouse"),
("import_ref_rate", "price_list_rate"),
("purchase_ref_rate", "base_price_list_rate"),
("discount_rate", "discount_percentage"),
("import_rate", "rate"),
("purchase_rate", "base_rate"),
("import_amount", "amount")
)
condition = " or ".join("""html like "%%{}%%" """.format(d[0].replace("_", "\\_")) for d in fields_list
if d[0] != "amount")
for name, html in frappe.db.sql("""select name, html from `tabPrint Format`
where standard = 'No' and ({}) and html not like '%%frappe.%%'""".format(condition)):
html = html.replace("wn.", "frappe.")
for from_field, to_field in fields_list:
html = re.sub(r"\b{}\b".format(from_field), to_field, html)
frappe.db.set_value("Print Format", name, "html", html)
| agpl-3.0 |
Faiz7412/or-tools | examples/python/nqueens.py | 34 | 2563 | # Copyright 2010 Hakan Kjellerstrand hakank@bonetmail.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
n-queens problem in Google CP Solver.
N queens problem.
This model was created by Hakan Kjellerstrand (hakank@bonetmail.com)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
from ortools.constraint_solver import pywrapcp
def main(n=8):
# Create the solver.
solver = pywrapcp.Solver("n-queens")
#
# data
#
# n = 8 # size of board (n x n)
# declare variables
q = [solver.IntVar(0, n - 1, "x%i" % i) for i in range(n)]
#
# constraints
#
solver.Add(solver.AllDifferent(q))
for i in range(n):
for j in range(i):
solver.Add(q[i] != q[j])
solver.Add(q[i] + i != q[j] + j)
solver.Add(q[i] - i != q[j] - j)
# for i in range(n):
# for j in range(i):
# solver.Add(abs(q[i]-q[j]) != abs(i-j))
# symmetry breaking
# solver.Add(q[0] == 0)
#
# solution and search
#
solution = solver.Assignment()
solution.Add([q[i] for i in range(n)])
collector = solver.AllSolutionCollector(solution)
# collector = solver.FirstSolutionCollector(solution)
# search_log = solver.SearchLog(100, x[0])
solver.Solve(solver.Phase([q[i] for i in range(n)],
solver.INT_VAR_SIMPLE,
solver.ASSIGN_MIN_VALUE),
[collector])
num_solutions = collector.SolutionCount()
print "num_solutions: ", num_solutions
if num_solutions > 0:
for s in range(num_solutions):
qval = [collector.Value(s, q[i]) for i in range(n)]
print "q:", qval
for i in range(n):
for j in range(n):
if qval[i] == j:
print "Q",
else:
print "_",
print
print
print
print "num_solutions:", num_solutions
print "failures:", solver.Failures()
print "branches:", solver.Branches()
print "WallTime:", solver.WallTime()
else:
print "No solutions found"
n = 8
if __name__ == "__main__":
main(n)
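The three constraints in the model (distinct columns, distinct `q[i]+i`, distinct `q[i]-i`) are exactly the column and two diagonal conditions of n-queens. A dependency-free backtracking sketch of the same constraints (no or-tools required; names are illustrative) reproduces the classic solution counts:

```python
# Place one queen per row, tracking occupied columns and both diagonals.
# For n = 8 this counts the well-known 92 solutions.
def solve_nqueens(n):
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            # Same column, same / diagonal (row+col), same \ diagonal (row-col)
            if col in cols or (row + col) in diag1 or (row - col) in diag2:
                continue
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)

    place(0)
    return count
```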
| apache-2.0 |
skuarch/namebench | nb_third_party/dns/rdtypes/ANY/LOC.py | 248 | 12571 | # Copyright (C) 2003-2007, 2009, 2010 Nominum, Inc.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for any purpose with or without fee is hereby granted,
# provided that the above copyright notice and this permission notice
# appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import cStringIO
import struct
import dns.exception
import dns.rdata
_pows = (1L, 10L, 100L, 1000L, 10000L, 100000L, 1000000L, 10000000L,
100000000L, 1000000000L, 10000000000L)
def _exponent_of(what, desc):
exp = None
for i in xrange(len(_pows)):
if what // _pows[i] == 0L:
exp = i - 1
break
if exp is None or exp < 0:
raise dns.exception.SyntaxError("%s value out of bounds" % desc)
return exp
def _float_to_tuple(what):
if what < 0:
sign = -1
what *= -1
else:
sign = 1
what = long(round(what * 3600000))
degrees = int(what // 3600000)
what -= degrees * 3600000
minutes = int(what // 60000)
what -= minutes * 60000
seconds = int(what // 1000)
what -= int(seconds * 1000)
what = int(what)
return (degrees * sign, minutes, seconds, what)
def _tuple_to_float(what):
if what[0] < 0:
sign = -1
value = float(what[0]) * -1
else:
sign = 1
value = float(what[0])
value += float(what[1]) / 60.0
value += float(what[2]) / 3600.0
value += float(what[3]) / 3600000.0
return sign * value
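# _float_to_tuple and _tuple_to_float above convert between decimal degrees and
# (degrees, minutes, seconds, milliseconds) tuples. A standalone Python 3 sketch
# of the same round trip; the names are illustrative, not the dnspython API:

```python
def float_to_dms(value):
    """Split decimal degrees into (degrees, minutes, seconds, milliseconds)."""
    sign = -1 if value < 0 else 1
    total_ms = int(round(abs(value) * 3600000))
    degrees, rem = divmod(total_ms, 3600000)
    minutes, rem = divmod(rem, 60000)
    seconds, ms = divmod(rem, 1000)
    return (sign * degrees, minutes, seconds, ms)

def dms_to_float(dms):
    """Rebuild decimal degrees from a (d, m, s, ms) tuple."""
    d, m, s, ms = dms
    sign = -1 if d < 0 else 1
    return sign * (abs(d) + m / 60.0 + s / 3600.0 + ms / 3600000.0)

# As in the library code, the sign rides on the degrees field, so values in
# (-1, 0) degrees lose their sign -- a known limitation of this representation.
```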
def _encode_size(what, desc):
    what = long(what)
exponent = _exponent_of(what, desc) & 0xF
base = what // pow(10, exponent) & 0xF
return base * 16 + exponent
def _decode_size(what, desc):
exponent = what & 0x0F
if exponent > 9:
raise dns.exception.SyntaxError("bad %s exponent" % desc)
base = (what & 0xF0) >> 4
if base > 9:
raise dns.exception.SyntaxError("bad %s base" % desc)
return long(base) * pow(10, exponent)
class LOC(dns.rdata.Rdata):
"""LOC record
@ivar latitude: latitude
@type latitude: (int, int, int, int) tuple specifying the degrees, minutes,
seconds, and milliseconds of the coordinate.
@ivar longitude: longitude
@type longitude: (int, int, int, int) tuple specifying the degrees,
minutes, seconds, and milliseconds of the coordinate.
@ivar altitude: altitude
@type altitude: float
@ivar size: size of the sphere
@type size: float
@ivar horizontal_precision: horizontal precision
@type horizontal_precision: float
@ivar vertical_precision: vertical precision
@type vertical_precision: float
@see: RFC 1876"""
__slots__ = ['latitude', 'longitude', 'altitude', 'size',
'horizontal_precision', 'vertical_precision']
def __init__(self, rdclass, rdtype, latitude, longitude, altitude,
size=1.0, hprec=10000.0, vprec=10.0):
"""Initialize a LOC record instance.
The parameters I{latitude} and I{longitude} may be either a 4-tuple
of integers specifying (degrees, minutes, seconds, milliseconds),
or they may be floating point values specifying the number of
degrees. The other parameters are floats."""
super(LOC, self).__init__(rdclass, rdtype)
if isinstance(latitude, int) or isinstance(latitude, long):
latitude = float(latitude)
if isinstance(latitude, float):
latitude = _float_to_tuple(latitude)
self.latitude = latitude
if isinstance(longitude, int) or isinstance(longitude, long):
longitude = float(longitude)
if isinstance(longitude, float):
longitude = _float_to_tuple(longitude)
self.longitude = longitude
self.altitude = float(altitude)
self.size = float(size)
self.horizontal_precision = float(hprec)
self.vertical_precision = float(vprec)
def to_text(self, origin=None, relativize=True, **kw):
if self.latitude[0] > 0:
lat_hemisphere = 'N'
lat_degrees = self.latitude[0]
else:
lat_hemisphere = 'S'
lat_degrees = -1 * self.latitude[0]
if self.longitude[0] > 0:
long_hemisphere = 'E'
long_degrees = self.longitude[0]
else:
long_hemisphere = 'W'
long_degrees = -1 * self.longitude[0]
text = "%d %d %d.%03d %s %d %d %d.%03d %s %0.2fm" % (
lat_degrees, self.latitude[1], self.latitude[2], self.latitude[3],
lat_hemisphere, long_degrees, self.longitude[1], self.longitude[2],
self.longitude[3], long_hemisphere, self.altitude / 100.0
)
if self.size != 1.0 or self.horizontal_precision != 10000.0 or \
self.vertical_precision != 10.0:
text += " %0.2fm %0.2fm %0.2fm" % (
self.size / 100.0, self.horizontal_precision / 100.0,
self.vertical_precision / 100.0
)
return text
def from_text(cls, rdclass, rdtype, tok, origin = None, relativize = True):
latitude = [0, 0, 0, 0]
longitude = [0, 0, 0, 0]
size = 1.0
hprec = 10000.0
vprec = 10.0
latitude[0] = tok.get_int()
t = tok.get_string()
if t.isdigit():
latitude[1] = int(t)
t = tok.get_string()
if '.' in t:
(seconds, milliseconds) = t.split('.')
if not seconds.isdigit():
raise dns.exception.SyntaxError('bad latitude seconds value')
latitude[2] = int(seconds)
if latitude[2] >= 60:
raise dns.exception.SyntaxError('latitude seconds >= 60')
l = len(milliseconds)
if l == 0 or l > 3 or not milliseconds.isdigit():
raise dns.exception.SyntaxError('bad latitude milliseconds value')
if l == 1:
m = 100
elif l == 2:
m = 10
else:
m = 1
latitude[3] = m * int(milliseconds)
t = tok.get_string()
elif t.isdigit():
latitude[2] = int(t)
t = tok.get_string()
if t == 'S':
latitude[0] *= -1
elif t != 'N':
raise dns.exception.SyntaxError('bad latitude hemisphere value')
longitude[0] = tok.get_int()
t = tok.get_string()
if t.isdigit():
longitude[1] = int(t)
t = tok.get_string()
if '.' in t:
(seconds, milliseconds) = t.split('.')
if not seconds.isdigit():
raise dns.exception.SyntaxError('bad longitude seconds value')
longitude[2] = int(seconds)
if longitude[2] >= 60:
raise dns.exception.SyntaxError('longitude seconds >= 60')
l = len(milliseconds)
if l == 0 or l > 3 or not milliseconds.isdigit():
raise dns.exception.SyntaxError('bad longitude milliseconds value')
if l == 1:
m = 100
elif l == 2:
m = 10
else:
m = 1
longitude[3] = m * int(milliseconds)
t = tok.get_string()
elif t.isdigit():
longitude[2] = int(t)
t = tok.get_string()
if t == 'W':
longitude[0] *= -1
elif t != 'E':
raise dns.exception.SyntaxError('bad longitude hemisphere value')
t = tok.get_string()
if t[-1] == 'm':
t = t[0 : -1]
altitude = float(t) * 100.0 # m -> cm
token = tok.get().unescape()
if not token.is_eol_or_eof():
value = token.value
if value[-1] == 'm':
value = value[0 : -1]
size = float(value) * 100.0 # m -> cm
token = tok.get().unescape()
if not token.is_eol_or_eof():
value = token.value
if value[-1] == 'm':
value = value[0 : -1]
hprec = float(value) * 100.0 # m -> cm
token = tok.get().unescape()
if not token.is_eol_or_eof():
value = token.value
if value[-1] == 'm':
value = value[0 : -1]
vprec = float(value) * 100.0 # m -> cm
tok.get_eol()
return cls(rdclass, rdtype, latitude, longitude, altitude,
size, hprec, vprec)
from_text = classmethod(from_text)
def to_wire(self, file, compress = None, origin = None):
if self.latitude[0] < 0:
sign = -1
degrees = long(-1 * self.latitude[0])
else:
sign = 1
degrees = long(self.latitude[0])
milliseconds = (degrees * 3600000 +
self.latitude[1] * 60000 +
self.latitude[2] * 1000 +
self.latitude[3]) * sign
latitude = 0x80000000L + milliseconds
if self.longitude[0] < 0:
sign = -1
degrees = long(-1 * self.longitude[0])
else:
sign = 1
degrees = long(self.longitude[0])
milliseconds = (degrees * 3600000 +
self.longitude[1] * 60000 +
self.longitude[2] * 1000 +
self.longitude[3]) * sign
longitude = 0x80000000L + milliseconds
altitude = long(self.altitude) + 10000000L
size = _encode_size(self.size, "size")
hprec = _encode_size(self.horizontal_precision, "horizontal precision")
vprec = _encode_size(self.vertical_precision, "vertical precision")
wire = struct.pack("!BBBBIII", 0, size, hprec, vprec, latitude,
longitude, altitude)
file.write(wire)
def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin = None):
(version, size, hprec, vprec, latitude, longitude, altitude) = \
struct.unpack("!BBBBIII", wire[current : current + rdlen])
if latitude > 0x80000000L:
latitude = float(latitude - 0x80000000L) / 3600000
else:
latitude = -1 * float(0x80000000L - latitude) / 3600000
if latitude < -90.0 or latitude > 90.0:
raise dns.exception.FormError("bad latitude")
if longitude > 0x80000000L:
longitude = float(longitude - 0x80000000L) / 3600000
else:
longitude = -1 * float(0x80000000L - longitude) / 3600000
if longitude < -180.0 or longitude > 180.0:
raise dns.exception.FormError("bad longitude")
altitude = float(altitude) - 10000000.0
size = _decode_size(size, "size")
hprec = _decode_size(hprec, "horizontal precision")
vprec = _decode_size(vprec, "vertical precision")
return cls(rdclass, rdtype, latitude, longitude, altitude,
size, hprec, vprec)
from_wire = classmethod(from_wire)
def _cmp(self, other):
f = cStringIO.StringIO()
self.to_wire(f)
wire1 = f.getvalue()
f.seek(0)
f.truncate()
other.to_wire(f)
wire2 = f.getvalue()
f.close()
return cmp(wire1, wire2)
def _get_float_latitude(self):
return _tuple_to_float(self.latitude)
def _set_float_latitude(self, value):
self.latitude = _float_to_tuple(value)
float_latitude = property(_get_float_latitude, _set_float_latitude,
doc="latitude as a floating point value")
def _get_float_longitude(self):
return _tuple_to_float(self.longitude)
def _set_float_longitude(self, value):
self.longitude = _float_to_tuple(value)
float_longitude = property(_get_float_longitude, _set_float_longitude,
doc="longitude as a floating point value")
| apache-2.0 |
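`_encode_size`/`_decode_size` in the LOC module above pack a size in centimeters into a single byte: the high nibble holds one significant digit and the low nibble a power-of-ten exponent, per RFC 1876. A minimal Python 3 sketch of that scheme (illustrative, not the library's code):

```python
def encode_size(cm):
    """Pack centimeters into one byte: high nibble = leading digit,
    low nibble = power-of-ten exponent (RFC 1876 representation)."""
    exponent = 0
    while cm >= 10 * pow(10, exponent):
        exponent += 1
    base = cm // pow(10, exponent)
    return (base << 4) | exponent

def decode_size(byte):
    """Unpack the base/exponent byte back into centimeters."""
    return (byte >> 4) * pow(10, byte & 0x0F)

# The encoding keeps only one significant digit, so it is deliberately lossy:
# encode_size(12345) decodes back to 10000.
```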
vnsofthe/odoo-dev | addons/vnsoft020/vnsoft_sale.py | 1 | 5299 | # -*- coding: utf-8 -*-
from openerp import SUPERUSER_ID
from openerp.osv import fields, osv
from openerp.tools.translate import _
import openerp.addons.decimal_precision as dp
from openerp import tools, api
import datetime
import logging
_logger = logging.getLogger(__name__)
class vnsoft_sale_order(osv.osv):
_inherit = "sale.order"
def do_create_purchase(self,cr,uid,ids,context=None):
res=self.browse(cr,uid,ids,context=context)
res_id=[]
detail_id = self.pool.get("purchase.order").search(cr,uid,[('origin','=',res.name)],context=context)
if detail_id:
result = self.pool.get("product.template")._get_act_window_dict(cr, uid, 'purchase.purchase_rfq', context=context)
result['domain'] = "[('id','in',[" + ','.join(map(str, detail_id)) + "])]"
return result
#for i in res.order_line:
#res_id.append(self.pool.get("sale.order.purchase").create(cr,uid,{"name":res.id,"product_id":i.product_id.id},context=context))
return {'type': 'ir.actions.act_window',
'res_model': 'sale.order.purchase',
'view_mode': 'form',
#'view_id':'vnsoft023_view_sale_purchase',
#'res_id': res_id,
'target': 'new',
'context':{"id":res.id},
'flags': {'form': {'action_buttons': False}}}
class vnsoft_purchase_order_line(osv.osv):
_inherit ="purchase.order.line"
_columns={
"sale_order_line_id":fields.integer("Sale Order Line ID"),
}
class vnsoft_purchase(osv.osv_memory):
_name = 'sale.order.purchase'
_columns = {
"name":fields.many2one("sale.order",u"销售单号",domain=[("state","in",["progress","manual"])]),
"line":fields.one2many("sale.order.purchase.line","name",u"明细")
}
@api.onchange("name")
def _onchange_salename(self):
res=[]
if self.line:
self.line.unlink()
if self.name:
obj=self.env["sale.order"].browse(self.name.id)
ids=[]
for i in obj.order_line:
id = self.env["purchase.order"].search([('origin','=',self.name.name),('order_line.product_id','=',i.product_id.id)])
if not id:
ids.append({'name':self.id,'product_id':i.product_id.id,'product_qty':i.product_uom_qty,"sale_order_line_id":i.id})
self.update({"line":ids})
def default_get(self, cr, uid, fields, context=None):
res = super(vnsoft_purchase,self).default_get(cr,uid,fields,context)
if context.get("id"):
id=context.get("id")
obj=self.pool.get("sale.order").browse(cr,uid,id,context=context)
res['name'] = id
res['line']=[]
for i in obj.order_line:
res['line'].append({'product_id':i.product_id.id,'product_qty':i.product_uom_qty,"sale_order_line_id":i.id})
return res
def do_create(self,cr,uid,ids,context=None):
d={}
res_id=[]
obj=self.browse(cr,uid,ids)
for i in obj.line:
            if i.partner_id.id in d:
d[i.partner_id.id].append([i.product_id.id,i.product_qty,i.sale_order_line_id])
else:
d[i.partner_id.id]=[[i.product_id.id,i.product_qty,i.sale_order_line_id]]
        # iterate over the distinct suppliers
        for k, v in d.items():
            # build the purchase lines for each product under this supplier
pline=[]
pick = self.pool.get("purchase.order")._get_picking_in(cr,uid)
local = self.pool.get("purchase.order").onchange_picking_type_id(cr,uid,0,pick,context=context)
val = self.pool.get("purchase.order").onchange_partner_id(cr,uid,0,k,context=context).get("value")
val.update(local.get('value'))
val.update({'picking_type_id':pick,'partner_id':k,'origin':obj.name.name,})
for j in v:
detail_val = self.pool.get("purchase.order.line").onchange_product_id(cr, uid, 0, val.get("pricelist_id"),j[0], j[1], False, k,val.get("date_order"),val.get("fiscal_position"),val.get("date_planned"),False,False,'draft',context=context).get("value")
detail_val.update({'product_id':j[0],'product_qty':j[1],"sale_order_line_id":j[2]})
pline.append([0,0,detail_val])
val.update({'company_id':1,'order_line':pline})
res_id.append(self.pool.get("purchase.order").create(cr,uid,val,context=context))
result = self.pool.get("product.template")._get_act_window_dict(cr, uid, 'purchase.purchase_rfq', context=context)
result['domain'] = "[('id','in',[" + ','.join(map(str, res_id)) + "])]"
return result
class vnsoft_purchase_line(osv.osv_memory):
_name = "sale.order.purchase.line"
_columns = {
"name":fields.many2one("sale.order.purchase",u"销售单号"),
"product_id":fields.many2one("product.product",u"产品"),
"product_qty": fields.float(u'数量', digits_compute=dp.get_precision('Product Unit of Measure'),
required=True),
"partner_id":fields.many2one("res.partner",u"供应商",domain="[('supplier','=',True)]"),
"sale_order_line_id":fields.integer("Line ID")
}
| agpl-3.0 |
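`do_create` in the wizard above first groups the wizard lines by supplier (the dict `d`) and then creates one purchase order per vendor. The grouping step is a dict-of-lists accumulation, sketched below with plain tuples; the field layout is illustrative rather than the Odoo API:

```python
from collections import defaultdict

def group_by_supplier(lines):
    """Group (supplier_id, product_id, qty, sale_line_id) tuples by supplier."""
    grouped = defaultdict(list)
    for supplier_id, product_id, qty, sale_line_id in lines:
        grouped[supplier_id].append((product_id, qty, sale_line_id))
    return dict(grouped)
```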
vganapath/rally | rally/plugins/openstack/scenarios/neutron/utils.py | 1 | 27676 | # Copyright 2014: Intel Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from rally.common.i18n import _
from rally.common import logging
from rally import exceptions
from rally.plugins.openstack import scenario
from rally.plugins.openstack.wrappers import network as network_wrapper
from rally.task import atomic
LOG = logging.getLogger(__name__)
class NeutronScenario(scenario.OpenStackScenario):
"""Base class for Neutron scenarios with basic atomic actions."""
SUBNET_IP_VERSION = 4
# TODO(rkiran): modify in case LBaaS-v2 requires
LB_METHOD = "ROUND_ROBIN"
LB_PROTOCOL = "HTTP"
LB_PROTOCOL_PORT = 80
HM_TYPE = "PING"
HM_MAX_RETRIES = 3
HM_DELAY = 20
HM_TIMEOUT = 10
def _get_network_id(self, network, **kwargs):
"""Get Neutron network ID for the network name.
        :param network: str, network name/id
        :param kwargs: dict, network options
        :returns: str, Neutron network-id
"""
networks = self._list_networks(atomic_action=False)
for net in networks:
if (net["name"] == network) or (net["id"] == network):
return net["id"]
msg = (_("Network %s not found.") % network)
raise exceptions.NotFoundException(message=msg)
@atomic.action_timer("neutron.create_network")
def _create_network(self, network_create_args):
"""Create neutron network.
:param network_create_args: dict, POST /v2.0/networks request options
:returns: neutron network dict
"""
network_create_args["name"] = self.generate_random_name()
return self.clients("neutron").create_network(
{"network": network_create_args})
@atomic.optional_action_timer("neutron.list_networks")
def _list_networks(self, **kwargs):
"""Return user networks list.
:param atomic_action: True if this is an atomic action. added
and handled by the
optional_action_timer() decorator
:param kwargs: network list options
"""
return self.clients("neutron").list_networks(**kwargs)["networks"]
@atomic.action_timer("neutron.update_network")
def _update_network(self, network, network_update_args):
"""Update the network.
This atomic function updates the network with network_update_args.
:param network: Network object
:param network_update_args: dict, POST /v2.0/networks update options
:returns: updated neutron network dict
"""
network_update_args["name"] = self.generate_random_name()
body = {"network": network_update_args}
return self.clients("neutron").update_network(
network["network"]["id"], body)
@atomic.action_timer("neutron.delete_network")
def _delete_network(self, network):
"""Delete neutron network.
:param network: Network object
"""
self.clients("neutron").delete_network(network["id"])
@atomic.action_timer("neutron.create_subnet")
def _create_subnet(self, network, subnet_create_args, start_cidr=None):
"""Create neutron subnet.
:param network: neutron network dict
:param subnet_create_args: POST /v2.0/subnets request options
:returns: neutron subnet dict
"""
network_id = network["network"]["id"]
if not subnet_create_args.get("cidr"):
start_cidr = start_cidr or "10.2.0.0/24"
subnet_create_args["cidr"] = (
network_wrapper.generate_cidr(start_cidr=start_cidr))
subnet_create_args["network_id"] = network_id
subnet_create_args["name"] = self.generate_random_name()
subnet_create_args.setdefault("ip_version", self.SUBNET_IP_VERSION)
return self.clients("neutron").create_subnet(
{"subnet": subnet_create_args})
@atomic.action_timer("neutron.list_subnets")
def _list_subnets(self):
"""Returns user subnetworks list."""
return self.clients("neutron").list_subnets()["subnets"]
@atomic.action_timer("neutron.update_subnet")
def _update_subnet(self, subnet, subnet_update_args):
"""Update the neutron subnet.
This atomic function updates the subnet with subnet_update_args.
:param subnet: Subnet object
:param subnet_update_args: dict, PUT /v2.0/subnets update options
:returns: updated neutron subnet dict
"""
subnet_update_args["name"] = self.generate_random_name()
body = {"subnet": subnet_update_args}
return self.clients("neutron").update_subnet(
subnet["subnet"]["id"], body)
@atomic.action_timer("neutron.delete_subnet")
def _delete_subnet(self, subnet):
"""Delete neutron subnet
:param subnet: Subnet object
"""
self.clients("neutron").delete_subnet(subnet["subnet"]["id"])
@atomic.action_timer("neutron.create_router")
def _create_router(self, router_create_args, external_gw=False):
"""Create neutron router.
:param router_create_args: POST /v2.0/routers request options
:returns: neutron router dict
"""
router_create_args["name"] = self.generate_random_name()
if external_gw:
for network in self._list_networks():
if network.get("router:external"):
external_network = network
gw_info = {"network_id": external_network["id"],
"enable_snat": True}
router_create_args.setdefault("external_gateway_info",
gw_info)
return self.clients("neutron").create_router(
{"router": router_create_args})
@atomic.action_timer("neutron.list_routers")
def _list_routers(self):
"""Returns user routers list."""
return self.clients("neutron").list_routers()["routers"]
@atomic.action_timer("neutron.delete_router")
def _delete_router(self, router):
"""Delete neutron router
:param router: Router object
"""
self.clients("neutron").delete_router(router["router"]["id"])
@atomic.action_timer("neutron.update_router")
def _update_router(self, router, router_update_args):
"""Update the neutron router.
This atomic function updates the router with router_update_args.
:param router: dict, neutron router
:param router_update_args: dict, PUT /v2.0/routers update options
:returns: updated neutron router dict
"""
router_update_args["name"] = self.generate_random_name()
body = {"router": router_update_args}
return self.clients("neutron").update_router(
router["router"]["id"], body)
@atomic.action_timer("neutron.create_port")
def _create_port(self, network, port_create_args):
"""Create neutron port.
:param network: neutron network dict
:param port_create_args: POST /v2.0/ports request options
:returns: neutron port dict
"""
port_create_args["network_id"] = network["network"]["id"]
port_create_args["name"] = self.generate_random_name()
return self.clients("neutron").create_port({"port": port_create_args})
@atomic.action_timer("neutron.list_ports")
def _list_ports(self):
"""Return user ports list."""
return self.clients("neutron").list_ports()["ports"]
@atomic.action_timer("neutron.update_port")
def _update_port(self, port, port_update_args):
"""Update the neutron port.
This atomic function updates port with port_update_args.
:param port: dict, neutron port
:param port_update_args: dict, PUT /v2.0/ports update options
:returns: updated neutron port dict
"""
port_update_args["name"] = self.generate_random_name()
body = {"port": port_update_args}
return self.clients("neutron").update_port(port["port"]["id"], body)
@atomic.action_timer("neutron.delete_port")
def _delete_port(self, port):
"""Delete neutron port.
:param port: Port object
"""
self.clients("neutron").delete_port(port["port"]["id"])
@logging.log_deprecated_args(_("network_create_args is deprecated; "
"use the network context instead"),
"0.1.0", "network_create_args")
def _get_or_create_network(self, network_create_args=None):
"""Get a network from context, or create a new one.
This lets users either create networks with the 'network'
context, provide existing networks with the 'existing_network'
context, or let the scenario create a default network for
them. Running this without one of the network contexts is
deprecated.
:param network_create_args: Deprecated way to provide network
creation args; use the network
context instead.
:returns: Network dict
"""
if "networks" in self.context["tenant"]:
return {"network":
random.choice(self.context["tenant"]["networks"])}
else:
LOG.warning(_("Running this scenario without either the 'network' "
"or 'existing_network' context is deprecated"))
return self._create_network(network_create_args or {})
def _create_subnets(self, network,
subnet_create_args=None,
subnet_cidr_start=None,
subnets_per_network=1):
"""Create <count> new subnets in the given network.
:param network: network to create subnets in
:param subnet_create_args: dict, POST /v2.0/subnets request options
:param subnet_cidr_start: str, start value for subnets CIDR
:param subnets_per_network: int, number of subnets for one network
:returns: List of subnet dicts
"""
return [self._create_subnet(network, subnet_create_args or {},
subnet_cidr_start)
for i in range(subnets_per_network)]
def _create_network_and_subnets(self,
network_create_args=None,
subnet_create_args=None,
subnets_per_network=1,
subnet_cidr_start="1.0.0.0/24"):
"""Create network and subnets.
        :param network_create_args: dict, POST /v2.0/networks request options
        :param subnet_create_args: dict, POST /v2.0/subnets request options
        :param subnets_per_network: int, number of subnets for one network
        :param subnet_cidr_start: str, start value for subnets CIDR
        :returns: tuple of result network and subnets list
"""
network = self._create_network(network_create_args or {})
subnets = self._create_subnets(network, subnet_create_args,
subnet_cidr_start, subnets_per_network)
return network, subnets
def _create_network_structure(self, network_create_args=None,
subnet_create_args=None,
subnet_cidr_start=None,
subnets_per_network=None,
router_create_args=None):
"""Create a network and a given number of subnets and routers.
:param network_create_args: dict, POST /v2.0/networks request options
:param subnet_create_args: dict, POST /v2.0/subnets request options
:param subnet_cidr_start: str, start value for subnets CIDR
:param subnets_per_network: int, number of subnets for one network
:param router_create_args: dict, POST /v2.0/routers request options
:returns: tuple of (network, subnets, routers)
"""
network = self._get_or_create_network(network_create_args)
subnets = self._create_subnets(network, subnet_create_args,
subnet_cidr_start,
subnets_per_network)
routers = []
for subnet in subnets:
router = self._create_router(router_create_args or {})
self._add_interface_router(subnet["subnet"],
router["router"])
routers.append(router)
return (network, subnets, routers)
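# The helpers in this class are wrapped in atomic.action_timer(...), which
# records the duration of each named action. A minimal sketch of that decorator
# pattern, using a plain dict as the results sink -- a simplification for
# illustration, not rally's implementation:

```python
import functools
import time

def action_timer(name, sink):
    """Record the wall-clock duration of every call under `name` in `sink`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                # append the duration even if the wrapped call raises
                sink.setdefault(name, []).append(time.time() - start)
        return wrapper
    return decorator
```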
@atomic.action_timer("neutron.add_interface_router")
def _add_interface_router(self, subnet, router):
"""Connect subnet to router.
:param subnet: dict, neutron subnet
:param router: dict, neutron router
"""
self.clients("neutron").add_interface_router(
router["id"], {"subnet_id": subnet["id"]})
@atomic.action_timer("neutron.remove_interface_router")
def _remove_interface_router(self, subnet, router):
"""Remove subnet from router
:param subnet: dict, neutron subnet
:param router: dict, neutron router
"""
self.clients("neutron").remove_interface_router(
router["id"], {"subnet_id": subnet["id"]})
@atomic.optional_action_timer("neutron.create_loadbalancer")
def _create_loadbalancer(self, subnet_id, **lb_create_args):
"""Create LB loadbalancer(v2)
:param subnet_id: str, neutron subnet-id
        :param lb_create_args: dict, POST /lbaas/loadbalancers request options
:param atomic_action: True if this is an atomic action. added
and handled by the
optional_action_timer() decorator
        :returns: dict, neutron loadbalancer
"""
args = {"name": self.generate_random_name(),
"vip_subnet_id": subnet_id}
args.update(lb_create_args)
        return self.clients("neutron").create_loadbalancer(
            {"loadbalancer": args})
def _create_v2_loadbalancer(self, networks, **lb_create_args):
"""Create LB loadbalancer(v2)
:param networks: list, neutron networks
:param pool_create_args: dict, POST /lb/pools request options
:returns: list, neutron lb pools
"""
subnets = []
lb = []
for net in networks:
subnets.extend(net.get("subnets", []))
with atomic.ActionTimer(self, "neutron.create_%s_lbs" %
len(subnets)):
for subnet_id in subnets:
lb.append(self._create_loadbalancer(
subnet_id, atomic_action=False, **lb_create_args))
return lb
@atomic.action_timer("neutron.delete_loadbalancer")
def _delete_v2_loadbalancer(self, lb):
        """Delete neutron loadbalancer.
        :param lb: neutron loadbalancer
"""
self.clients("neutron").delete_loadbalancer(lb)
@atomic.action_timer("neutron.create_listener")
def _create_v2_listener(self, lb, **listener_create_args):
"""Create Listener(lbaasv2)
        :param lb: dict, neutron loadbalancer
        :param listener_create_args: dict, POST /lbaas/listeners request options
        :returns: dict, neutron listener
"""
args = {"protocol": self.LB_PROTOCOL,
"protocol_port": self.LB_PROTOCOL_PORT,
"name": self.generate_random_name(),
"loadbalancer_id": lb["loadbalancer"]["id"]}
args.update(listener_create_args)
return self.clients("neutron").create_listener({"listener": args})
@atomic.action_timer("neutron.delete_listener")
def _delete_v2_listener(self, listener):
        """Delete neutron listener.
        :param listener: neutron listener
"""
self.clients("neutron").delete_listener(listener)
@atomic.optional_action_timer("neutron.create_lbaas_pool")
def _create_v2_pool(self, listener, **pool_create_args):
"""Create LB pool(v2)
        :param listener: dict, neutron listener
        :param pool_create_args: dict, POST /lbaas/pools request options
:param atomic_action: True if this is an atomic action. added
and handled by the
optional_action_timer() decorator
:returns: dict, neutron lb pool
"""
args = {"lb_algorithm": self.LB_METHOD,
"protocol": self.LB_PROTOCOL,
"name": self.generate_random_name(),
"listener_id": listener["listener"]["id"]}
args.update(pool_create_args)
return self.clients("neutron").create_lbaas_pool({"pool": args})
    @atomic.action_timer("neutron.delete_lbaas_pool")
    def _delete_v2_pool(self, pool):
        """Delete lbaas pool.
        :param pool: neutron lbaas pool
        """
self.clients("neutron").delete_lbaas_pool(pool)
@atomic.optional_action_timer("neutron.create_lbaas_member")
def _create_v2_pool_member(self, subnet_id, pool, **mem_create_args):
"""Create LB pool member (v2)
:param subnet_id: str, neutron subnet-id
        :param pool: dict, neutron lbaas pool
        :param mem_create_args: dict, POST /lbaas/members request options
:param atomic_action: True if this is an atomic action. added
and handled by the
optional_action_timer() decorator
        :returns: dict, neutron pool member
"""
args = {"subnet_id": subnet_id,
"protocol_port": self.LB_PROTOCOL_PORT}
args.update(mem_create_args)
        return self.clients("neutron").create_lbaas_member(
            pool["pool"]["id"], {"member": args})
@atomic.action_timer("neutron.delete_pool_member")
def _delete_v2_pool_member(self, member, pool):
"""Delete lbaas pool member.
        :param member: neutron lbaas member
        :param pool: neutron lbaas pool the member belongs to
"""
self.clients("neutron").delete_lbaas_member(member, pool)
@atomic.optional_action_timer("neutron.create_pool")
def _create_lb_pool(self, subnet_id, **pool_create_args):
"""Create LB pool(v1)
:param subnet_id: str, neutron subnet-id
:param pool_create_args: dict, POST /lb/pools request options
:param atomic_action: True if this is an atomic action. added
and handled by the
optional_action_timer() decorator
:returns: dict, neutron lb pool
"""
args = {"lb_method": self.LB_METHOD,
"protocol": self.LB_PROTOCOL,
"name": self.generate_random_name(),
"subnet_id": subnet_id}
args.update(pool_create_args)
return self.clients("neutron").create_pool({"pool": args})
def _create_v1_pools(self, networks, **pool_create_args):
"""Create LB pools(v1)
:param networks: list, neutron networks
:param pool_create_args: dict, POST /lb/pools request options
:returns: list, neutron lb pools
"""
subnets = []
pools = []
for net in networks:
subnets.extend(net.get("subnets", []))
with atomic.ActionTimer(self, "neutron.create_%s_pools" %
len(subnets)):
for subnet_id in subnets:
pools.append(self._create_lb_pool(
subnet_id, atomic_action=False, **pool_create_args))
return pools
@atomic.action_timer("neutron.list_pools")
def _list_v1_pools(self, **kwargs):
"""Return user lb pool list(v1)."""
return self.clients("neutron").list_pools(**kwargs)
@atomic.action_timer("neutron.delete_pool")
def _delete_v1_pool(self, pool):
"""Delete neutron pool.
:param pool: Pool object
"""
self.clients("neutron").delete_pool(pool["id"])
@atomic.action_timer("neutron.update_pool")
def _update_v1_pool(self, pool, **pool_update_args):
"""Update pool.
This atomic function updates the pool with pool_update_args.
:param pool: Pool object
:param pool_update_args: dict, POST /lb/pools update options
:returns: updated neutron pool dict
"""
pool_update_args["name"] = self.generate_random_name()
body = {"pool": pool_update_args}
return self.clients("neutron").update_pool(pool["pool"]["id"], body)
def _create_v1_vip(self, pool, **vip_create_args):
"""Create VIP(v1)
        :param pool: dict, neutron lb-pool
        :param vip_create_args: dict, POST /lb/vips request options
        :returns: dict, neutron lb vip
"""
args = {"protocol": self.LB_PROTOCOL,
"protocol_port": self.LB_PROTOCOL_PORT,
"name": self.generate_random_name(),
"pool_id": pool["pool"]["id"],
"subnet_id": pool["pool"]["subnet_id"]}
args.update(vip_create_args)
return self.clients("neutron").create_vip({"vip": args})
@atomic.action_timer("neutron.list_vips")
def _list_v1_vips(self, **kwargs):
"""Return user lb vip list(v1)."""
return self.clients("neutron").list_vips(**kwargs)
@atomic.action_timer("neutron.delete_vip")
def _delete_v1_vip(self, vip):
"""Delete neutron vip.
:param vip: neutron Virtual IP object
"""
self.clients("neutron").delete_vip(vip["id"])
@atomic.action_timer("neutron.update_vip")
def _update_v1_vip(self, vip, **vip_update_args):
"""Updates vip.
This atomic function updates vip name and admin state
:param vip: Vip object
:param vip_update_args: dict, POST /lb/vips update options
:returns: updated neutron vip dict
"""
vip_update_args["name"] = self.generate_random_name()
body = {"vip": vip_update_args}
return self.clients("neutron").update_vip(vip["vip"]["id"], body)
@atomic.action_timer("neutron.create_floating_ip")
def _create_floatingip(self, floating_network, **floating_ip_args):
"""Create floating IP with floating_network.
        :param floating_network: str, external network to create floating IP
        :param floating_ip_args: dict, POST /floatingips create options
        :returns: dict, neutron floating IP
"""
floating_network_id = self._get_network_id(
floating_network)
args = {"floating_network_id": floating_network_id}
args.update(floating_ip_args)
return self.clients("neutron").create_floatingip({"floatingip": args})
@atomic.action_timer("neutron.list_floating_ips")
def _list_floating_ips(self, **kwargs):
"""Return floating IPs list."""
return self.clients("neutron").list_floatingips(**kwargs)
@atomic.action_timer("neutron.delete_floating_ip")
def _delete_floating_ip(self, floating_ip):
"""Delete floating IP.
:param floating_ip: dict, floating IP object
"""
return self.clients("neutron").delete_floatingip(floating_ip["id"])
@atomic.optional_action_timer("neutron.create_healthmonitor")
def _create_v1_healthmonitor(self, **healthmonitor_create_args):
"""Create LB healthmonitor.
This atomic function creates healthmonitor with the provided
healthmonitor_create_args.
:param atomic_action: True if this is an atomic action; added
and handled by the
optional_action_timer() decorator
:param healthmonitor_create_args: dict, POST /lb/healthmonitors
:returns: neutron healthmonitor dict
"""
args = {"type": self.HM_TYPE,
"delay": self.HM_DELAY,
"max_retries": self.HM_MAX_RETRIES,
"timeout": self.HM_TIMEOUT}
args.update(healthmonitor_create_args)
return self.clients("neutron").create_health_monitor(
{"health_monitor": args})
@atomic.action_timer("neutron.list_healthmonitors")
def _list_v1_healthmonitors(self, **kwargs):
"""List LB healthmonitors.
This atomic function lists all healthmonitors.
:param kwargs: optional parameters
:returns: neutron lb healthmonitor list
"""
return self.clients("neutron").list_health_monitors(**kwargs)
@atomic.action_timer("neutron.delete_healthmonitor")
def _delete_v1_healthmonitor(self, healthmonitor):
"""Delete neutron healthmonitor.
:param healthmonitor: neutron healthmonitor dict
"""
self.clients("neutron").delete_health_monitor(healthmonitor["id"])
@atomic.action_timer("neutron.update_healthmonitor")
def _update_v1_healthmonitor(self, healthmonitor,
**healthmonitor_update_args):
"""Update neutron healthmonitor.
:param healthmonitor: neutron lb healthmonitor dict
:param healthmonitor_update_args: POST /lb/healthmonitors
update options
:returns: updated neutron lb healthmonitor dict
"""
body = {"health_monitor": healthmonitor_update_args}
return self.clients("neutron").update_health_monitor(
healthmonitor["health_monitor"]["id"], body)
@atomic.action_timer("neutron.create_security_group")
def _create_security_group(self, **security_group_create_args):
"""Create Neutron security-group.
:param security_group_create_args: dict, POST /v2.0/security-groups
request options
:returns: dict, neutron security-group
"""
security_group_create_args["name"] = self.generate_random_name()
return self.clients("neutron").create_security_group(
{"security_group": security_group_create_args})
@atomic.action_timer("neutron.delete_security_group")
def _delete_security_group(self, security_group):
"""Delete Neutron security group.
:param security_group: dict, neutron security_group
"""
return self.clients("neutron").delete_security_group(
security_group["security_group"]["id"])
@atomic.action_timer("neutron.list_security_groups")
def _list_security_groups(self, **kwargs):
"""Return list of Neutron security groups."""
return self.clients("neutron").list_security_groups(**kwargs)
@atomic.action_timer("neutron.update_security_group")
def _update_security_group(self, security_group,
**security_group_update_args):
"""Update Neutron security-group.
:param security_group: dict, neutron security_group
:param security_group_update_args: dict, POST /v2.0/security-groups
update options
:returns: dict, updated neutron security-group
"""
security_group_update_args["name"] = self.generate_random_name()
body = {"security_group": security_group_update_args}
return self.clients("neutron").update_security_group(
security_group["security_group"]["id"], body)
| apache-2.0 |
maxikov/attfocus | 2dbluetooths/reg_bayes/test.py | 1 | 1183 | #!/usr/bin/env python
import featurebuilder
import pickle
import numpy
def load_data(filename):
src = open(filename, "r")
X, Y = eval(src.readline())
src.close()
return X, Y
def main():
print "Loading data..."
X, Y = load_data("test_set.py")
f = open("rnode.p", "rb")
rnode = pickle.load(f)
f.close()
rnode.debug = False
print "len(X):", len(X), "len(X[0]):", len(X[0]), "len(Y):", len(Y)
Y = numpy.array(Y, dtype="float64")
print "Shape of Y:", Y.shape
X = numpy.array(X, dtype="float64")
print "Shape of X:", X.shape
dsum = 0
rights = 0
for i in xrange(X.shape[0]):
x = X[i]
x = x.reshape(1, x.size)
y_est = rnode.execute(x, minx = -1.0, maxx = 10.0, miny = -1.0, maxy = 10.0, step = 0.5)
y_true = Y[i]
d = y_true - y_est[0]
dist = numpy.sqrt(numpy.dot(d, d))
dsum += dist
y_est_r = numpy.round(y_est)
got_it_right = numpy.array_equal(y_true, y_est_r[0])
if got_it_right:
rights += 1
print "true:", y_true, "estimate:", y_est, "dist:", dist, "rounded estimate:", y_est_r, "they're equal:", got_it_right
print "Average distance:", dsum/Y.shape[0]
print "Success rate:", float(rights)/Y.shape[0]
if __name__ == "__main__":
main()
| gpl-3.0 |
VladimirVystupkin/AMRParsing1.x | stanfordnlp/unidecode/x021.py | 72 | 3974 | data = (
'', # 0x00
'', # 0x01
'', # 0x02
'', # 0x03
'', # 0x04
'', # 0x05
'', # 0x06
'', # 0x07
'', # 0x08
'', # 0x09
'', # 0x0a
'', # 0x0b
'', # 0x0c
'', # 0x0d
'', # 0x0e
'', # 0x0f
'', # 0x10
'', # 0x11
'', # 0x12
'', # 0x13
'', # 0x14
'', # 0x15
'', # 0x16
'', # 0x17
'', # 0x18
'', # 0x19
'', # 0x1a
'', # 0x1b
'', # 0x1c
'', # 0x1d
'', # 0x1e
'', # 0x1f
'(sm)', # 0x20
'TEL', # 0x21
'(tm)', # 0x22
'', # 0x23
'', # 0x24
'', # 0x25
'', # 0x26
'', # 0x27
'', # 0x28
'', # 0x29
'K', # 0x2a
'A', # 0x2b
'', # 0x2c
'', # 0x2d
'', # 0x2e
'', # 0x2f
'', # 0x30
'', # 0x31
'F', # 0x32
'', # 0x33
'', # 0x34
'', # 0x35
'', # 0x36
'', # 0x37
'', # 0x38
'', # 0x39
'', # 0x3a
'FAX', # 0x3b
'[?]', # 0x3c
'[?]', # 0x3d
'[?]', # 0x3e
'[?]', # 0x3f
'[?]', # 0x40
'[?]', # 0x41
'[?]', # 0x42
'[?]', # 0x43
'[?]', # 0x44
'[?]', # 0x45
'[?]', # 0x46
'[?]', # 0x47
'[?]', # 0x48
'[?]', # 0x49
'[?]', # 0x4a
'[?]', # 0x4b
'[?]', # 0x4c
'[?]', # 0x4d
'F', # 0x4e
'[?]', # 0x4f
'[?]', # 0x50
'[?]', # 0x51
'[?]', # 0x52
' 1/3 ', # 0x53
' 2/3 ', # 0x54
' 1/5 ', # 0x55
' 2/5 ', # 0x56
' 3/5 ', # 0x57
' 4/5 ', # 0x58
' 1/6 ', # 0x59
' 5/6 ', # 0x5a
' 1/8 ', # 0x5b
' 3/8 ', # 0x5c
' 5/8 ', # 0x5d
' 7/8 ', # 0x5e
' 1/', # 0x5f
'I', # 0x60
'II', # 0x61
'III', # 0x62
'IV', # 0x63
'V', # 0x64
'VI', # 0x65
'VII', # 0x66
'VIII', # 0x67
'IX', # 0x68
'X', # 0x69
'XI', # 0x6a
'XII', # 0x6b
'L', # 0x6c
'C', # 0x6d
'D', # 0x6e
'M', # 0x6f
'i', # 0x70
'ii', # 0x71
'iii', # 0x72
'iv', # 0x73
'v', # 0x74
'vi', # 0x75
'vii', # 0x76
'viii', # 0x77
'ix', # 0x78
'x', # 0x79
'xi', # 0x7a
'xii', # 0x7b
'l', # 0x7c
'c', # 0x7d
'd', # 0x7e
'm', # 0x7f
'(D', # 0x80
'D)', # 0x81
'((|))', # 0x82
')', # 0x83
'[?]', # 0x84
'[?]', # 0x85
'[?]', # 0x86
'[?]', # 0x87
'[?]', # 0x88
'[?]', # 0x89
'[?]', # 0x8a
'[?]', # 0x8b
'[?]', # 0x8c
'[?]', # 0x8d
'[?]', # 0x8e
'[?]', # 0x8f
'-', # 0x90
'|', # 0x91
'-', # 0x92
'|', # 0x93
'-', # 0x94
'|', # 0x95
'\\', # 0x96
'/', # 0x97
'\\', # 0x98
'/', # 0x99
'-', # 0x9a
'-', # 0x9b
'~', # 0x9c
'~', # 0x9d
'-', # 0x9e
'|', # 0x9f
'-', # 0xa0
'|', # 0xa1
'-', # 0xa2
'-', # 0xa3
'-', # 0xa4
'|', # 0xa5
'-', # 0xa6
'|', # 0xa7
'|', # 0xa8
'-', # 0xa9
'-', # 0xaa
'-', # 0xab
'-', # 0xac
'-', # 0xad
'-', # 0xae
'|', # 0xaf
'|', # 0xb0
'|', # 0xb1
'|', # 0xb2
'|', # 0xb3
'|', # 0xb4
'|', # 0xb5
'^', # 0xb6
'V', # 0xb7
'\\', # 0xb8
'=', # 0xb9
'V', # 0xba
'^', # 0xbb
'-', # 0xbc
'-', # 0xbd
'|', # 0xbe
'|', # 0xbf
'-', # 0xc0
'-', # 0xc1
'|', # 0xc2
'|', # 0xc3
'=', # 0xc4
'|', # 0xc5
'=', # 0xc6
'=', # 0xc7
'|', # 0xc8
'=', # 0xc9
'|', # 0xca
'=', # 0xcb
'=', # 0xcc
'=', # 0xcd
'=', # 0xce
'=', # 0xcf
'=', # 0xd0
'|', # 0xd1
'=', # 0xd2
'|', # 0xd3
'=', # 0xd4
'|', # 0xd5
'\\', # 0xd6
'/', # 0xd7
'\\', # 0xd8
'/', # 0xd9
'=', # 0xda
'=', # 0xdb
'~', # 0xdc
'~', # 0xdd
'|', # 0xde
'|', # 0xdf
'-', # 0xe0
'|', # 0xe1
'-', # 0xe2
'|', # 0xe3
'-', # 0xe4
'-', # 0xe5
'-', # 0xe6
'|', # 0xe7
'-', # 0xe8
'|', # 0xe9
'|', # 0xea
'|', # 0xeb
'|', # 0xec
'|', # 0xed
'|', # 0xee
'|', # 0xef
'-', # 0xf0
'\\', # 0xf1
'\\', # 0xf2
'|', # 0xf3
'[?]', # 0xf4
'[?]', # 0xf5
'[?]', # 0xf6
'[?]', # 0xf7
'[?]', # 0xf8
'[?]', # 0xf9
'[?]', # 0xfa
'[?]', # 0xfb
'[?]', # 0xfc
'[?]', # 0xfd
'[?]', # 0xfe
)
| gpl-2.0 |
jlmadurga/permabots | permabots/test/factories/messenger_lib.py | 2 | 1111 | # coding=utf-8
from factory import Factory, SubFactory
from factory.fuzzy import FuzzyText, FuzzyInteger
from django.utils import timezone
from permabots.views.hooks import messenger_hook
class MessengerTextMessageFactory(Factory):
class Meta:
model = messenger_hook.MessengerTextMessage
mid = FuzzyText()
seq = FuzzyInteger(10000000)
text = FuzzyText()
class MessengerPostBackMessageFactory(Factory):
class Meta:
model = messenger_hook.MessengerPostbackMessage
payload = FuzzyText()
class MessengerMessagingFactory(Factory):
class Meta:
model = messenger_hook.MessengerMessaging
sender = FuzzyText()
recipient = FuzzyText()
timestamp = timezone.now()
type = "message"
message = SubFactory(MessengerTextMessageFactory)
class MessengerEntryFactory(Factory):
class Meta:
model = messenger_hook.MessengerEntry
page_id = FuzzyText()
time = timezone.now()
messaging = []
class MessengerWebhookFactory(Factory):
class Meta:
model = messenger_hook.Webhook
object = "page"
entries = [] | bsd-3-clause |
okin/doit | tests/test_runner.py | 1 | 24930 | import os
from multiprocessing import Queue
import pytest
from mock import Mock
from doit.dependency import Dependency
from doit.task import Task
from doit.control import TaskDispatcher, ExecNode
from doit import runner
# sample actions
def my_print(*args):
pass
def _fail():
return False
def _error():
raise Exception("I am the exception.\n")
def _exit():
raise SystemExit()
class FakeReporter(object):
"""Just log everything in internal attribute - used on tests"""
def __init__(self, outstream=None, options=None):
self.log = []
def get_status(self, task):
self.log.append(('start', task))
def execute_task(self, task):
self.log.append(('execute', task))
def add_failure(self, task, exception):
self.log.append(('fail', task))
def add_success(self, task):
self.log.append(('success', task))
def skip_uptodate(self, task):
self.log.append(('up-to-date', task))
def skip_ignore(self, task):
self.log.append(('ignore', task))
def cleanup_error(self, exception):
self.log.append(('cleanup_error',))
def runtime_error(self, msg):
self.log.append(('runtime_error',))
def teardown_task(self, task):
self.log.append(('teardown', task))
def complete_run(self):
pass
@pytest.fixture
def reporter(request):
return FakeReporter()
class TestRunner(object):
def testInit(self, reporter, depfile):
my_runner = runner.Runner(depfile.name, reporter)
assert False == my_runner._stop_running
assert runner.SUCCESS == my_runner.final_result
class TestRunner_SelectTask(object):
def test_ready(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )])
my_runner = runner.Runner(depfile.name, reporter)
assert True == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert not reporter.log
def test_DependencyError(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )],
file_dep=["i_dont_exist"])
my_runner = runner.Runner(depfile.name, reporter)
assert False == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
assert not reporter.log
def test_upToDate(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )], file_dep=[__file__])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.dep_manager.save_success(t1)
assert False == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert ('up-to-date', t1) == reporter.log.pop(0)
assert not reporter.log
def test_ignore(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.dep_manager.ignore(t1)
assert False == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert ('ignore', t1) == reporter.log.pop(0)
assert not reporter.log
def test_alwaysExecute(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )])
my_runner = runner.Runner(depfile.name, reporter, always_execute=True)
my_runner.dep_manager.save_success(t1)
assert True == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert not reporter.log
def test_noSetup_ok(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )])
my_runner = runner.Runner(depfile.name, reporter)
assert True == my_runner.select_task(ExecNode(t1, None), {})
assert ('start', t1) == reporter.log.pop(0)
assert not reporter.log
def test_withSetup(self, reporter, depfile):
t1 = Task("taskX", [(my_print, ["out a"] )], setup=["taskY"])
my_runner = runner.Runner(depfile.name, reporter)
# defer execution
n1 = ExecNode(t1, None)
assert False == my_runner.select_task(n1, {})
assert ('start', t1) == reporter.log.pop(0)
assert not reporter.log
# trying to select again
assert True == my_runner.select_task(n1, {})
assert not reporter.log
def test_getargs_ok(self, reporter, depfile):
def ok(): return {'x':1}
def check_x(my_x): return my_x == 1
t1 = Task('t1', [(ok,)])
n1 = ExecNode(t1, None)
t2 = Task('t2', [(check_x,)], getargs={'my_x':('t1','x')})
n2 = ExecNode(t2, None)
tasks_dict = {'t1': t1, 't2':t2}
my_runner = runner.Runner(depfile.name, reporter)
# t2 gives chance for setup tasks to be executed
assert False == my_runner.select_task(n2, tasks_dict)
assert ('start', t2) == reporter.log.pop(0)
# execute task t1 to calculate value
assert True == my_runner.select_task(n1, tasks_dict)
assert ('start', t1) == reporter.log.pop(0)
t1_result = my_runner.execute_task(t1)
assert ('execute', t1) == reporter.log.pop(0)
my_runner.process_task_result(n1, t1_result)
assert ('success', t1) == reporter.log.pop(0)
# t2.options are set on select_task
assert {} == t2.options
assert True == my_runner.select_task(n2, tasks_dict)
assert not reporter.log
assert {'my_x': 1} == t2.options
def test_getargs_fail(self, reporter, depfile):
# invalid getargs. Exception wil be raised and task will fail
def check_x(my_x): return True
t1 = Task('t1', [lambda :True])
n1 = ExecNode(t1, None)
t2 = Task('t2', [(check_x,)], getargs={'my_x':('t1','x')})
n2 = ExecNode(t2, None)
tasks_dict = {'t1': t1, 't2':t2}
my_runner = runner.Runner(depfile.name, reporter)
# t2 gives chance for setup tasks to be executed
assert False == my_runner.select_task(n2, tasks_dict)
assert ('start', t2) == reporter.log.pop(0)
# execute task t1 to calculate value
assert True == my_runner.select_task(n1, tasks_dict)
assert ('start', t1) == reporter.log.pop(0)
t1_result = my_runner.execute_task(t1)
assert ('execute', t1) == reporter.log.pop(0)
my_runner.process_task_result(n1, t1_result)
assert ('success', t1) == reporter.log.pop(0)
# select_task t2 fails
assert False == my_runner.select_task(n2, tasks_dict)
assert ('fail', t2) == reporter.log.pop(0)
assert not reporter.log
def test_getargs_dict(self, reporter, depfile):
def ok(): return {'x':1}
t1 = Task('t1', [(ok,)])
n1 = ExecNode(t1, None)
t2 = Task('t2', None, getargs={'my_x':('t1', None)})
tasks_dict = {'t1': t1, 't2':t2}
my_runner = runner.Runner(depfile.name, reporter)
t1_result = my_runner.execute_task(t1)
my_runner.process_task_result(n1, t1_result)
# t2.options are set on _get_task_args
assert {} == t2.options
my_runner._get_task_args(t2, tasks_dict)
assert {'my_x': {'x':1}} == t2.options
def test_getargs_group(self, reporter, depfile):
def ok(): return {'x':1}
t1 = Task('t1', None, task_dep=['t1:a'], has_subtask=True)
t1a = Task('t1:a', [(ok,)], is_subtask=True)
t2 = Task('t2', None, getargs={'my_x':('t1', None)})
tasks_dict = {'t1': t1, 't1a':t1a, 't2':t2}
my_runner = runner.Runner(depfile.name, reporter)
t1a_result = my_runner.execute_task(t1a)
my_runner.process_task_result(ExecNode(t1a, None), t1a_result)
# t2.options are set on _get_task_args
assert {} == t2.options
my_runner._get_task_args(t2, tasks_dict)
assert {'my_x': {'a':{'x':1}} } == t2.options
def test_getargs_group_value(self, reporter, depfile):
def ok(): return {'x':1}
t1 = Task('t1', None, task_dep=['t1:a'], has_subtask=True)
t1a = Task('t1:a', [(ok,)], is_subtask=True)
t2 = Task('t2', None, getargs={'my_x':('t1', 'x')})
tasks_dict = {'t1': t1, 't1a':t1a, 't2':t2}
my_runner = runner.Runner(depfile.name, reporter)
t1a_result = my_runner.execute_task(t1a)
my_runner.process_task_result(ExecNode(t1a, None), t1a_result)
# t2.options are set on _get_task_args
assert {} == t2.options
my_runner._get_task_args(t2, tasks_dict)
assert {'my_x': {'a':1} } == t2.options
class TestTask_Teardown(object):
def test_ok(self, reporter, depfile):
touched = []
def touch():
touched.append(1)
t1 = Task('t1', [], teardown=[(touch,)])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.teardown_list = [t1]
my_runner.teardown()
assert 1 == len(touched)
assert ('teardown', t1) == reporter.log.pop(0)
assert not reporter.log
def test_reverse_order(self, reporter, depfile):
def do_nothing():pass
t1 = Task('t1', [], teardown=[do_nothing])
t2 = Task('t2', [], teardown=[do_nothing])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.teardown_list = [t1, t2]
my_runner.teardown()
assert ('teardown', t2) == reporter.log.pop(0)
assert ('teardown', t1) == reporter.log.pop(0)
assert not reporter.log
def test_errors(self, reporter, depfile):
def raise_something(x):
raise Exception(x)
t1 = Task('t1', [], teardown=[(raise_something,['t1 blow'])])
t2 = Task('t2', [], teardown=[(raise_something,['t2 blow'])])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.teardown_list = [t1, t2]
my_runner.teardown()
assert ('teardown', t2) == reporter.log.pop(0)
assert ('cleanup_error',) == reporter.log.pop(0)
assert ('teardown', t1) == reporter.log.pop(0)
assert ('cleanup_error',) == reporter.log.pop(0)
assert not reporter.log
class TestTask_RunAll(object):
def test_reporter_runtime_error(self, reporter, depfile):
t1 = Task('t1', [], calc_dep=['t2'])
t2 = Task('t2', [lambda: {'file_dep':[1]}])
my_runner = runner.Runner(depfile.name, reporter)
my_runner.run_all(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert ('start', t2) == reporter.log.pop(0)
assert ('execute', t2) == reporter.log.pop(0)
assert ('success', t2) == reporter.log.pop(0)
assert ('runtime_error',) == reporter.log.pop(0)
assert not reporter.log
# run tests in both single process runner and multi-process runner
RUNNERS = [runner.Runner]
# TODO: test should be added and skipped!
if runner.MRunner.available():
RUNNERS.append(runner.MRunner)
@pytest.fixture(params=RUNNERS)
def RunnerClass(request):
return request.param
def ok(): return "ok"
def ok2(): return "different"
class TestRunner_run_tasks(object):
def test_teardown(self, reporter, RunnerClass, depfile):
t1 = Task('t1', [], teardown=[ok])
t2 = Task('t2', [])
my_runner = RunnerClass(depfile.name, reporter)
assert [] == my_runner.teardown_list
my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
my_runner.finish()
assert ('teardown', t1) == reporter.log[-1]
# testing whole process/API
def test_success(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [(my_print, ["out a"] )] )
t2 = Task("t2", [(my_print, ["out a"] )] )
my_runner = RunnerClass(depfile.name, reporter)
my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert runner.SUCCESS == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0), reporter.log
assert ('execute', t1) == reporter.log.pop(0)
assert ('success', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('execute', t2) == reporter.log.pop(0)
assert ('success', t2) == reporter.log.pop(0)
# test result, value, out, err are saved into task
def test_result(self, reporter, RunnerClass, depfile):
def my_action():
import sys
sys.stdout.write('out here')
sys.stderr.write('err here')
return {'bb': 5}
task = Task("taskY", [my_action] )
my_runner = RunnerClass(depfile.name, reporter)
assert None == task.result
assert {} == task.values
assert [None] == [a.out for a in task.actions]
assert [None] == [a.err for a in task.actions]
my_runner.run_tasks(TaskDispatcher({'taskY':task}, [], ['taskY']))
assert runner.SUCCESS == my_runner.finish()
assert {'bb': 5} == task.result
assert {'bb': 5} == task.values
assert ['out here'] == [a.out for a in task.actions]
assert ['err here'] == [a.err for a in task.actions]
# whenever a task fails remaining task are not executed
def test_failureOutput(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [_fail])
t2 = Task("t2", [_fail])
my_runner = RunnerClass(depfile.name, reporter)
my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert runner.FAILURE == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('execute', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
# second task is not executed
assert 0 == len(reporter.log)
def test_error(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [_error])
t2 = Task("t2", [_error])
my_runner = RunnerClass(depfile.name, reporter)
my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert runner.ERROR == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('execute', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
# second task is not executed
assert 0 == len(reporter.log)
# when successful dependencies are updated
def test_updateDependencies(self, reporter, RunnerClass, depfile):
depPath = os.path.join(os.path.dirname(__file__),"data/dependency1")
ff = open(depPath,"a")
ff.write("xxx")
ff.close()
dependencies = [depPath]
filePath = os.path.join(os.path.dirname(__file__),"data/target")
ff = open(filePath,"a")
ff.write("xxx")
ff.close()
targets = [filePath]
t1 = Task("t1", [my_print], dependencies, targets)
my_runner = RunnerClass(depfile.name, reporter)
my_runner.run_tasks(TaskDispatcher({'t1':t1}, [], ['t1']))
assert runner.SUCCESS == my_runner.finish()
d = Dependency(depfile.name)
assert d._get("t1", os.path.abspath(depPath))
def test_continue(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [(_fail,)] )
t2 = Task("t2", [(_error,)] )
t3 = Task("t3", [(ok,)])
my_runner = RunnerClass(depfile.name, reporter, continue_=True)
disp = TaskDispatcher({'t1':t1, 't2':t2, 't3':t3}, [], ['t1', 't2', 't3'])
my_runner.run_tasks(disp)
assert runner.ERROR == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('execute', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('execute', t2) == reporter.log.pop(0)
assert ('fail', t2) == reporter.log.pop(0)
assert ('start', t3) == reporter.log.pop(0)
assert ('execute', t3) == reporter.log.pop(0)
assert ('success', t3) == reporter.log.pop(0)
assert 0 == len(reporter.log)
def test_continue_dont_execute_parent_of_failed_task(self, reporter,
RunnerClass, depfile):
t1 = Task("t1", [(_error,)] )
t2 = Task("t2", [(ok,)], task_dep=['t1'])
t3 = Task("t3", [(ok,)])
my_runner = RunnerClass(depfile.name, reporter, continue_=True)
disp = TaskDispatcher({'t1':t1, 't2':t2, 't3':t3}, [], ['t1', 't2', 't3'])
my_runner.run_tasks(disp)
assert runner.ERROR == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('execute', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('fail', t2) == reporter.log.pop(0)
assert ('start', t3) == reporter.log.pop(0)
assert ('execute', t3) == reporter.log.pop(0)
assert ('success', t3) == reporter.log.pop(0)
assert 0 == len(reporter.log)
def test_continue_dep_error(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [(ok,)], file_dep=['i_dont_exist'] )
t2 = Task("t2", [(ok,)], task_dep=['t1'])
my_runner = RunnerClass(depfile.name, reporter, continue_=True)
disp = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
my_runner.run_tasks(disp)
assert runner.ERROR == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('fail', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('fail', t2) == reporter.log.pop(0)
assert 0 == len(reporter.log)
def test_continue_ignored_dep(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [(ok,)], )
t2 = Task("t2", [(ok,)], task_dep=['t1'])
my_runner = RunnerClass(depfile.name, reporter, continue_=True)
my_runner.dep_manager.ignore(t1)
disp = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
my_runner.run_tasks(disp)
assert runner.SUCCESS == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('ignore', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('ignore', t2) == reporter.log.pop(0)
assert 0 == len(reporter.log)
def test_getargs(self, reporter, RunnerClass, depfile):
def use_args(arg1):
print arg1
def make_args(): return {'myarg':1}
t1 = Task("t1", [(use_args,)], getargs=dict(arg1=('t2','myarg')) )
t2 = Task("t2", [(make_args,)])
my_runner = RunnerClass(depfile.name, reporter)
my_runner.run_tasks(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert runner.SUCCESS == my_runner.finish()
assert ('start', t1) == reporter.log.pop(0)
assert ('start', t2) == reporter.log.pop(0)
assert ('execute', t2) == reporter.log.pop(0)
assert ('success', t2) == reporter.log.pop(0)
assert ('execute', t1) == reporter.log.pop(0)
assert ('success', t1) == reporter.log.pop(0)
assert 0 == len(reporter.log)
# SystemExit runner should not interfere with SystemExit
def testSystemExitRaises(self, reporter, RunnerClass, depfile):
t1 = Task("t1", [_exit])
my_runner = RunnerClass(depfile.name, reporter)
disp = TaskDispatcher({'t1':t1}, [], ['t1'])
pytest.raises(SystemExit, my_runner.run_tasks, disp)
my_runner.finish()
@pytest.mark.skipif('not runner.MRunner.available()')
class TestMReporter(object):
class MyRunner(object):
def __init__(self):
self.result_q = Queue()
def testReporterMethod(self, reporter):
fake_runner = self.MyRunner()
mp_reporter = runner.MReporter(fake_runner, reporter)
my_task = Task("task x", [])
mp_reporter.add_success(my_task)
got = fake_runner.result_q.get(True, 1)
assert {'name': "task x", "reporter": 'add_success'} == got
def testNonReporterMethod(self, reporter):
fake_runner = self.MyRunner()
mp_reporter = runner.MReporter(fake_runner, reporter)
assert hasattr(mp_reporter, 'add_success')
assert not hasattr(mp_reporter, 'no_existent_method')
@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_get_next_task(object):
# simple normal case
def test_run_task(self, reporter, depfile):
t1 = Task('t1', [])
t2 = Task('t2', [])
run = runner.MRunner(depfile.name, reporter)
run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert t1 == run.get_next_task(None).task
assert t2 == run.get_next_task(None).task
assert None == run.get_next_task(None)
def test_stop_running(self, reporter, depfile):
t1 = Task('t1', [])
t2 = Task('t2', [])
run = runner.MRunner(depfile.name, reporter)
run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
assert t1 == run.get_next_task(None).task
run._stop_running = True
assert None == run.get_next_task(None)
def test_waiting(self, reporter, depfile):
t1 = Task('t1', [])
t2 = Task('t2', [], setup=('t1',))
run = runner.MRunner(depfile.name, reporter)
run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t2']))
# first start task 1
n1 = run.get_next_task(None)
assert t1 == n1.task
# hold until t1 is done
assert isinstance(run.get_next_task(None), runner.Hold)
assert isinstance(run.get_next_task(None), runner.Hold)
n1.run_status = 'done'
n2 = run.get_next_task(n1)
assert t2 == n2.task
assert None == run.get_next_task(n2)
def test_waiting_controller(self, reporter, depfile):
t1 = Task('t1', [])
t2 = Task('t2', [], calc_dep=('t1',))
run = runner.MRunner(depfile.name, reporter)
run._run_tasks_init(TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2']))
# first task ok
assert t1 == run.get_next_task(None).task
# hold until t1 finishes
assert 0 == run.free_proc
assert isinstance(run.get_next_task(None), runner.Hold)
assert 1 == run.free_proc
@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_start_process(object):
# 2 process, 3 tasks
def test_all_processes(self, reporter, monkeypatch, depfile):
mock_process = Mock()
monkeypatch.setattr(runner, 'Process', mock_process)
t1 = Task('t1', [])
t2 = Task('t2', [])
td = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
run = runner.MRunner(depfile.name, reporter, num_process=2)
run._run_tasks_init(td)
result_q = Queue()
task_q = Queue()
proc_list = run._run_start_processes(task_q, result_q)
run.finish()
assert 2 == len(proc_list)
assert t1.name == task_q.get().name
assert t2.name == task_q.get().name
# 2 process, 1 task
def test_less_processes(self, reporter, monkeypatch, depfile):
mock_process = Mock()
monkeypatch.setattr(runner, 'Process', mock_process)
t1 = Task('t1', [])
td = TaskDispatcher({'t1':t1}, [], ['t1'])
run = runner.MRunner(depfile.name, reporter, num_process=2)
run._run_tasks_init(td)
result_q = Queue()
task_q = Queue()
proc_list = run._run_start_processes(task_q, result_q)
run.finish()
assert 1 == len(proc_list)
assert t1.name == task_q.get().name
# 2 process, 2 tasks (but only one task can be started)
def test_waiting_process(self, reporter, monkeypatch, depfile):
mock_process = Mock()
monkeypatch.setattr(runner, 'Process', mock_process)
t1 = Task('t1', [])
t2 = Task('t2', [], task_dep=['t1'])
td = TaskDispatcher({'t1':t1, 't2':t2}, [], ['t1', 't2'])
run = runner.MRunner(depfile.name, reporter, num_process=2)
run._run_tasks_init(td)
result_q = Queue()
task_q = Queue()
proc_list = run._run_start_processes(task_q, result_q)
run.finish()
assert 2 == len(proc_list)
assert t1.name == task_q.get().name
assert isinstance(task_q.get(), runner.Hold)
@pytest.mark.skipif('not runner.MRunner.available()')
class TestMRunner_execute_task(object):
def test_hold(self, reporter, depfile):
run = runner.MRunner(depfile.name, reporter)
task_q = Queue()
task_q.put(runner.Hold()) # to test
task_q.put(None) # to terminate function
result_q = Queue()
run.execute_task_subprocess(task_q, result_q)
run.finish()
# nothing was done
assert result_q.empty() # pragma: no cover (coverage bug?)
| mit |
petebachant/scipy | benchmarks/benchmarks/linalg.py | 44 | 2636 | from __future__ import division, absolute_import, print_function
import numpy.linalg as nl
import numpy as np
from numpy.testing import assert_
from numpy.random import rand
try:
import scipy.linalg as sl
except ImportError:
pass
from .common import Benchmark
def random(size):
return rand(*size)
class Bench(Benchmark):
params = [
[20, 100, 500, 1000],
['contig', 'nocont'],
['numpy', 'scipy']
]
param_names = ['size', 'contiguous', 'module']
def setup(self, size, contig, module):
a = random([size,size])
# larger diagonal ensures non-singularity:
for i in range(size):
a[i,i] = 10*(.1+a[i,i])
b = random([size])
if contig != 'contig':
a = a[-1::-1,-1::-1] # turn into a non-contiguous array
assert_(not a.flags['CONTIGUOUS'])
self.a = a
self.b = b
def time_solve(self, size, contig, module):
if module == 'numpy':
nl.solve(self.a, self.b)
else:
sl.solve(self.a, self.b)
def time_inv(self, size, contig, module):
if module == 'numpy':
nl.inv(self.a)
else:
sl.inv(self.a)
def time_det(self, size, contig, module):
if module == 'numpy':
nl.det(self.a)
else:
sl.det(self.a)
def time_eigvals(self, size, contig, module):
if module == 'numpy':
nl.eigvals(self.a)
else:
sl.eigvals(self.a)
def time_svd(self, size, contig, module):
if module == 'numpy':
nl.svd(self.a)
else:
sl.svd(self.a)
class Norm(Benchmark):
params = [
[(20, 20), (100, 100), (1000, 1000), (20, 1000), (1000, 20)],
['contig', 'nocont'],
['numpy', 'scipy']
]
param_names = ['shape', 'contiguous', 'module']
def setup(self, shape, contig, module):
a = np.random.randn(*shape)
if contig != 'contig':
a = a[-1::-1,-1::-1] # turn into a non-contiguous array
assert_(not a.flags['CONTIGUOUS'])
self.a = a
def time_1_norm(self, size, contig, module):
if module == 'numpy':
nl.norm(self.a, ord=1)
else:
sl.norm(self.a, ord=1)
def time_inf_norm(self, size, contig, module):
if module == 'numpy':
nl.norm(self.a, ord=np.inf)
else:
sl.norm(self.a, ord=np.inf)
    def time_frobenius_norm(self, shape, contig, module):
if module == 'numpy':
nl.norm(self.a)
else:
sl.norm(self.a)
| bsd-3-clause |
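A minimal sketch (not part of the benchmark suite above; the variable names are illustrative) of what the `'nocont'` setup variant does: reversing both axes yields a view with the same shape but non-contiguous memory layout, and the boosted diagonal keeps the matrix safely non-singular so `solve` is well posed.

```python
import numpy as np

# Build a well-conditioned random system the same way Bench.setup() does:
# boosting the diagonal keeps the matrix comfortably non-singular.
size = 50
rng = np.random.default_rng(0)
a = rng.random((size, size))
for i in range(size):
    a[i, i] = 10 * (0.1 + a[i, i])
b = rng.random(size)

# The same double-reversal trick as setup(): a view, no copy,
# but no longer C-contiguous in memory.
a_nocont = a[-1::-1, -1::-1]
assert not a_nocont.flags['C_CONTIGUOUS']

# solve() accepts either layout; check the residual rather than timing.
x = np.linalg.solve(a_nocont, b)
residual = np.linalg.norm(a_nocont @ x - b)
assert residual < 1e-8
```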
stewartpark/django | tests/transactions/tests.py | 88 | 19722 | from __future__ import unicode_literals
import sys
import threading
import time
from unittest import skipIf, skipUnless
from django.db import (
DatabaseError, Error, IntegrityError, OperationalError, connection,
transaction,
)
from django.test import (
TransactionTestCase, skipIfDBFeature, skipUnlessDBFeature,
)
from django.utils import six
from .models import Reporter
@skipUnless(connection.features.uses_savepoints,
"'atomic' requires transactions and savepoints.")
class AtomicTests(TransactionTestCase):
"""
Tests for the atomic decorator and context manager.
The tests make assertions on internal attributes because there isn't a
robust way to ask the database for its current transaction state.
Since the decorator syntax is converted into a context manager (see the
implementation), there are only a few basic tests with the decorator
syntax and the bulk of the tests use the context manager syntax.
"""
available_apps = ['transactions']
def test_decorator_syntax_commit(self):
@transaction.atomic
def make_reporter():
Reporter.objects.create(first_name="Tintin")
make_reporter()
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
def test_decorator_syntax_rollback(self):
@transaction.atomic
def make_reporter():
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
with six.assertRaisesRegex(self, Exception, "Oops"):
make_reporter()
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_alternate_decorator_syntax_commit(self):
@transaction.atomic()
def make_reporter():
Reporter.objects.create(first_name="Tintin")
make_reporter()
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
def test_alternate_decorator_syntax_rollback(self):
@transaction.atomic()
def make_reporter():
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
with six.assertRaisesRegex(self, Exception, "Oops"):
make_reporter()
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_commit(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
def test_rollback(self):
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_nested_commit_commit(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with transaction.atomic():
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
self.assertQuerysetEqual(Reporter.objects.all(),
['<Reporter: Archibald Haddock>', '<Reporter: Tintin>'])
def test_nested_commit_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
def test_nested_rollback_commit(self):
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(last_name="Tintin")
with transaction.atomic():
Reporter.objects.create(last_name="Haddock")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_nested_rollback_rollback(self):
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(last_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_merged_commit_commit(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
self.assertQuerysetEqual(Reporter.objects.all(),
['<Reporter: Archibald Haddock>', '<Reporter: Tintin>'])
def test_merged_commit_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
# Writes in the outer block are rolled back too.
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_merged_rollback_commit(self):
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(last_name="Tintin")
with transaction.atomic(savepoint=False):
Reporter.objects.create(last_name="Haddock")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_merged_rollback_rollback(self):
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic():
Reporter.objects.create(last_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_reuse_commit_commit(self):
atomic = transaction.atomic()
with atomic:
Reporter.objects.create(first_name="Tintin")
with atomic:
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
self.assertQuerysetEqual(Reporter.objects.all(),
['<Reporter: Archibald Haddock>', '<Reporter: Tintin>'])
def test_reuse_commit_rollback(self):
atomic = transaction.atomic()
with atomic:
Reporter.objects.create(first_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with atomic:
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
def test_reuse_rollback_commit(self):
atomic = transaction.atomic()
with six.assertRaisesRegex(self, Exception, "Oops"):
with atomic:
Reporter.objects.create(last_name="Tintin")
with atomic:
Reporter.objects.create(last_name="Haddock")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_reuse_rollback_rollback(self):
atomic = transaction.atomic()
with six.assertRaisesRegex(self, Exception, "Oops"):
with atomic:
Reporter.objects.create(last_name="Tintin")
with six.assertRaisesRegex(self, Exception, "Oops"):
with atomic:
Reporter.objects.create(first_name="Haddock")
raise Exception("Oops, that's his last name")
raise Exception("Oops, that's his first name")
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_force_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
            # atomic block shouldn't roll back, but force it.
self.assertFalse(transaction.get_rollback())
transaction.set_rollback(True)
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_prevent_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
sid = transaction.savepoint()
# trigger a database error inside an inner atomic without savepoint
with self.assertRaises(DatabaseError):
with transaction.atomic(savepoint=False):
with connection.cursor() as cursor:
cursor.execute(
"SELECT no_such_col FROM transactions_reporter")
# prevent atomic from rolling back since we're recovering manually
self.assertTrue(transaction.get_rollback())
transaction.set_rollback(False)
transaction.savepoint_rollback(sid)
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
class AtomicInsideTransactionTests(AtomicTests):
"""All basic tests for atomic should also pass within an existing transaction."""
def setUp(self):
self.atomic = transaction.atomic()
self.atomic.__enter__()
def tearDown(self):
self.atomic.__exit__(*sys.exc_info())
@skipIf(connection.features.autocommits_when_autocommit_is_off,
"This test requires a non-autocommit mode that doesn't autocommit.")
class AtomicWithoutAutocommitTests(AtomicTests):
"""All basic tests for atomic should also pass when autocommit is turned off."""
def setUp(self):
transaction.set_autocommit(False)
def tearDown(self):
# The tests access the database after exercising 'atomic', initiating
        # a transaction; a rollback is required before restoring autocommit.
transaction.rollback()
transaction.set_autocommit(True)
@skipUnless(connection.features.uses_savepoints,
"'atomic' requires transactions and savepoints.")
class AtomicMergeTests(TransactionTestCase):
"""Test merging transactions with savepoint=False."""
available_apps = ['transactions']
def test_merged_outer_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Calculus")
raise Exception("Oops, that's his last name")
            # The third insert couldn't be rolled back. Temporarily mark the
# connection as not needing rollback to check it.
self.assertTrue(transaction.get_rollback())
transaction.set_rollback(False)
self.assertEqual(Reporter.objects.count(), 3)
transaction.set_rollback(True)
            # The second insert couldn't be rolled back. Temporarily mark the
# connection as not needing rollback to check it.
self.assertTrue(transaction.get_rollback())
transaction.set_rollback(False)
self.assertEqual(Reporter.objects.count(), 3)
transaction.set_rollback(True)
# The first block has a savepoint and must roll back.
self.assertQuerysetEqual(Reporter.objects.all(), [])
def test_merged_inner_savepoint_rollback(self):
with transaction.atomic():
Reporter.objects.create(first_name="Tintin")
with transaction.atomic():
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
with six.assertRaisesRegex(self, Exception, "Oops"):
with transaction.atomic(savepoint=False):
Reporter.objects.create(first_name="Calculus")
raise Exception("Oops, that's his last name")
                # The third insert couldn't be rolled back. Temporarily mark the
# connection as not needing rollback to check it.
self.assertTrue(transaction.get_rollback())
transaction.set_rollback(False)
self.assertEqual(Reporter.objects.count(), 3)
transaction.set_rollback(True)
# The second block has a savepoint and must roll back.
self.assertEqual(Reporter.objects.count(), 1)
self.assertQuerysetEqual(Reporter.objects.all(), ['<Reporter: Tintin>'])
@skipUnless(connection.features.uses_savepoints,
"'atomic' requires transactions and savepoints.")
class AtomicErrorsTests(TransactionTestCase):
available_apps = ['transactions']
def test_atomic_prevents_setting_autocommit(self):
autocommit = transaction.get_autocommit()
with transaction.atomic():
with self.assertRaises(transaction.TransactionManagementError):
transaction.set_autocommit(not autocommit)
# Make sure autocommit wasn't changed.
self.assertEqual(connection.autocommit, autocommit)
def test_atomic_prevents_calling_transaction_methods(self):
with transaction.atomic():
with self.assertRaises(transaction.TransactionManagementError):
transaction.commit()
with self.assertRaises(transaction.TransactionManagementError):
transaction.rollback()
def test_atomic_prevents_queries_in_broken_transaction(self):
r1 = Reporter.objects.create(first_name="Archibald", last_name="Haddock")
with transaction.atomic():
r2 = Reporter(first_name="Cuthbert", last_name="Calculus", id=r1.id)
with self.assertRaises(IntegrityError):
r2.save(force_insert=True)
# The transaction is marked as needing rollback.
with self.assertRaises(transaction.TransactionManagementError):
r2.save(force_update=True)
self.assertEqual(Reporter.objects.get(pk=r1.pk).last_name, "Haddock")
@skipIfDBFeature('atomic_transactions')
def test_atomic_allows_queries_after_fixing_transaction(self):
r1 = Reporter.objects.create(first_name="Archibald", last_name="Haddock")
with transaction.atomic():
r2 = Reporter(first_name="Cuthbert", last_name="Calculus", id=r1.id)
with self.assertRaises(IntegrityError):
r2.save(force_insert=True)
# Mark the transaction as no longer needing rollback.
transaction.set_rollback(False)
r2.save(force_update=True)
self.assertEqual(Reporter.objects.get(pk=r1.pk).last_name, "Calculus")
@skipUnlessDBFeature('test_db_allows_multiple_connections')
def test_atomic_prevents_queries_in_broken_transaction_after_client_close(self):
with transaction.atomic():
Reporter.objects.create(first_name="Archibald", last_name="Haddock")
connection.close()
# The connection is closed and the transaction is marked as
# needing rollback. This will raise an InterfaceError on databases
# that refuse to create cursors on closed connections (PostgreSQL)
# and a TransactionManagementError on other databases.
with self.assertRaises(Error):
Reporter.objects.create(first_name="Cuthbert", last_name="Calculus")
        # The connection is usable again.
self.assertEqual(Reporter.objects.count(), 0)
@skipUnless(connection.vendor == 'mysql', "MySQL-specific behaviors")
class AtomicMySQLTests(TransactionTestCase):
available_apps = ['transactions']
@skipIf(threading is None, "Test requires threading")
def test_implicit_savepoint_rollback(self):
"""MySQL implicitly rolls back savepoints when it deadlocks (#22291)."""
other_thread_ready = threading.Event()
def other_thread():
try:
with transaction.atomic():
Reporter.objects.create(id=1, first_name="Tintin")
other_thread_ready.set()
# We cannot synchronize the two threads with an event here
# because the main thread locks. Sleep for a little while.
time.sleep(1)
# 2) ... and this line deadlocks. (see below for 1)
Reporter.objects.exclude(id=1).update(id=2)
finally:
# This is the thread-local connection, not the main connection.
connection.close()
other_thread = threading.Thread(target=other_thread)
other_thread.start()
other_thread_ready.wait()
with six.assertRaisesRegex(self, OperationalError, 'Deadlock found'):
# Double atomic to enter a transaction and create a savepoint.
with transaction.atomic():
with transaction.atomic():
# 1) This line locks... (see above for 2)
Reporter.objects.create(id=1, first_name="Tintin")
other_thread.join()
class AtomicMiscTests(TransactionTestCase):
available_apps = []
def test_wrap_callable_instance(self):
"""#20028 -- Atomic must support wrapping callable instances."""
class Callable(object):
def __call__(self):
pass
# Must not raise an exception
transaction.atomic(Callable())
@skipUnlessDBFeature('can_release_savepoints')
def test_atomic_does_not_leak_savepoints_on_failure(self):
"""#23074 -- Savepoints must be released after rollback."""
# Expect an error when rolling back a savepoint that doesn't exist.
# Done outside of the transaction block to ensure proper recovery.
with self.assertRaises(Error):
# Start a plain transaction.
with transaction.atomic():
# Swallow the intentional error raised in the sub-transaction.
with six.assertRaisesRegex(self, Exception, "Oops"):
# Start a sub-transaction with a savepoint.
with transaction.atomic():
sid = connection.savepoint_ids[-1]
raise Exception("Oops")
# This is expected to fail because the savepoint no longer exists.
connection.savepoint_rollback(sid)
@skipIf(connection.features.autocommits_when_autocommit_is_off,
"This test requires a non-autocommit mode that doesn't autocommit.")
def test_orm_query_without_autocommit(self):
"""#24921 -- ORM queries must be possible after set_autocommit(False)."""
transaction.set_autocommit(False)
try:
Reporter.objects.create(first_name="Tintin")
finally:
transaction.rollback()
transaction.set_autocommit(True)
| bsd-3-clause |
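The savepoint semantics the Django tests above exercise can be sketched with the standard-library `sqlite3` module alone (no Django; the table and names are illustrative): an inner failure rolls back to its savepoint while the outer transaction's writes survive, mirroring `test_nested_commit_rollback`.

```python
import sqlite3

# Plain-SQL sketch of nested transaction.atomic(): the outer block is a
# transaction, the inner block is a savepoint.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
cur = conn.cursor()
cur.execute("CREATE TABLE reporter (first_name TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO reporter VALUES ('Tintin')")
cur.execute("SAVEPOINT sp1")              # inner atomic() -> savepoint
try:
    cur.execute("INSERT INTO reporter VALUES ('Haddock')")
    raise Exception("Oops, that's his last name")
except Exception:
    cur.execute("ROLLBACK TO SAVEPOINT sp1")  # only the inner work is undone
cur.execute("COMMIT")                      # outer block commits

names = [row[0] for row in cur.execute("SELECT first_name FROM reporter")]
assert names == ["Tintin"]
```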
czgu/opendataexperience | env/lib/python2.7/site-packages/django/template/debug.py | 42 | 3778 | from django.template.base import Lexer, Parser, tag_re, NodeList, VariableNode, TemplateSyntaxError
from django.utils.encoding import force_text
from django.utils.html import conditional_escape
from django.utils.safestring import SafeData, EscapeData
from django.utils.formats import localize
from django.utils.timezone import template_localtime
class DebugLexer(Lexer):
def __init__(self, template_string, origin):
super(DebugLexer, self).__init__(template_string, origin)
def tokenize(self):
"Return a list of tokens from a given template_string"
result, upto = [], 0
for match in tag_re.finditer(self.template_string):
start, end = match.span()
if start > upto:
result.append(self.create_token(self.template_string[upto:start], (upto, start), False))
upto = start
result.append(self.create_token(self.template_string[start:end], (start, end), True))
upto = end
last_bit = self.template_string[upto:]
if last_bit:
result.append(self.create_token(last_bit, (upto, upto + len(last_bit)), False))
return result
def create_token(self, token_string, source, in_tag):
token = super(DebugLexer, self).create_token(token_string, in_tag)
token.source = self.origin, source
return token
class DebugParser(Parser):
def __init__(self, lexer):
super(DebugParser, self).__init__(lexer)
self.command_stack = []
def enter_command(self, command, token):
self.command_stack.append((command, token.source))
def exit_command(self):
self.command_stack.pop()
def error(self, token, msg):
return self.source_error(token.source, msg)
def source_error(self, source, msg):
e = TemplateSyntaxError(msg)
e.django_template_source = source
return e
def create_nodelist(self):
return DebugNodeList()
def create_variable_node(self, contents):
return DebugVariableNode(contents)
def extend_nodelist(self, nodelist, node, token):
node.source = token.source
super(DebugParser, self).extend_nodelist(nodelist, node, token)
def unclosed_block_tag(self, parse_until):
command, source = self.command_stack.pop()
msg = "Unclosed tag '%s'. Looking for one of: %s " % (command, ', '.join(parse_until))
raise self.source_error(source, msg)
def compile_filter_error(self, token, e):
if not hasattr(e, 'django_template_source'):
e.django_template_source = token.source
def compile_function_error(self, token, e):
if not hasattr(e, 'django_template_source'):
e.django_template_source = token.source
class DebugNodeList(NodeList):
def render_node(self, node, context):
try:
return node.render(context)
except Exception as e:
if not hasattr(e, 'django_template_source'):
e.django_template_source = node.source
raise
class DebugVariableNode(VariableNode):
def render(self, context):
try:
output = self.filter_expression.resolve(context)
output = template_localtime(output, use_tz=context.use_tz)
output = localize(output, use_l10n=context.use_l10n)
output = force_text(output)
except UnicodeDecodeError:
return ''
except Exception as e:
if not hasattr(e, 'django_template_source'):
e.django_template_source = self.source
raise
if (context.autoescape and not isinstance(output, SafeData)) or isinstance(output, EscapeData):
return conditional_escape(output)
else:
return output
| apache-2.0 |
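The `DebugLexer.tokenize()` loop above alternates between literal text and tag tokens by walking `tag_re.finditer()` spans. A simplified, self-contained sketch (the real `tag_re` also matches `{{ ... }}` variables and `{# ... #}` comments; this toy version handles only `{% ... %}` block tags):

```python
import re

# Toy tag regex; Django's real tag_re covers more token kinds.
tag_re = re.compile(r"{%.*?%}")

def tokenize(template_string):
    # Returns (text, in_tag) pairs, in document order.
    result, upto = [], 0
    for match in tag_re.finditer(template_string):
        start, end = match.span()
        if start > upto:                       # literal text before the tag
            result.append((template_string[upto:start], False))
        result.append((template_string[start:end], True))
        upto = end
    last_bit = template_string[upto:]
    if last_bit:                               # trailing literal text
        result.append((last_bit, False))
    return result

tokens = tokenize("Hello {% if x %}world{% endif %}!")
```

The `DebugLexer` subclass additionally attaches the `(start, end)` span to each token as its `source`, which is what lets template errors point back at the offending characters.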
Qalthos/ansible | test/units/modules/network/f5/test_bigip_service_policy.py | 38 | 4128 | # -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
import sys
if sys.version_info < (2, 7):
pytestmark = pytest.mark.skip("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
try:
from library.modules.bigip_service_policy import ApiParameters
from library.modules.bigip_service_policy import ModuleParameters
from library.modules.bigip_service_policy import ModuleManager
from library.modules.bigip_service_policy import ArgumentSpec
# In Ansible 2.8, Ansible changed import paths.
from test.units.compat import unittest
from test.units.compat.mock import Mock
from test.units.compat.mock import patch
from test.units.modules.utils import set_module_args
except ImportError:
from ansible.modules.network.f5.bigip_service_policy import ApiParameters
from ansible.modules.network.f5.bigip_service_policy import ModuleParameters
from ansible.modules.network.f5.bigip_service_policy import ModuleManager
from ansible.modules.network.f5.bigip_service_policy import ArgumentSpec
# Ansible 2.8 imports
from units.compat import unittest
from units.compat.mock import Mock
from units.compat.mock import patch
from units.modules.utils import set_module_args
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_module_parameters(self):
args = dict(
name='foo',
description='my description',
timer_policy='timer1',
port_misuse_policy='misuse1',
)
p = ModuleParameters(params=args)
assert p.name == 'foo'
assert p.description == 'my description'
assert p.timer_policy == '/Common/timer1'
assert p.port_misuse_policy == '/Common/misuse1'
def test_api_parameters(self):
args = load_fixture('load_net_service_policy_1.json')
p = ApiParameters(params=args)
assert p.name == 'baz'
assert p.description == 'my description'
assert p.timer_policy == '/Common/foo'
assert p.port_misuse_policy == '/Common/bar'
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
try:
self.p1 = patch('library.modules.bigip_service_policy.module_provisioned')
self.m1 = self.p1.start()
self.m1.return_value = True
except Exception:
self.p1 = patch('ansible.modules.network.f5.bigip_service_policy.module_provisioned')
self.m1 = self.p1.start()
self.m1.return_value = True
def test_create_selfip(self, *args):
set_module_args(dict(
name='foo',
description='my description',
timer_policy='timer1',
port_misuse_policy='misuse1',
partition='Common',
state='present',
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode
)
mm = ModuleManager(module=module)
# Override methods to force specific logic in the module to happen
mm.exists = Mock(side_effect=[False, True])
mm.create_on_device = Mock(return_value=True)
mm.module_provisioned = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
| gpl-3.0 |
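The `load_fixture()` helper above memoizes file contents and opportunistically parses JSON. A standalone sketch of the same pattern, with an in-memory dict standing in for the fixtures directory (the file names and contents are illustrative):

```python
import json

fixture_data = {}
fake_files = {'policy.json': '{"name": "baz"}', 'notes.txt': 'plain text'}

def load_fixture(name):
    if name in fixture_data:
        return fixture_data[name]      # cached on repeat lookups
    data = fake_files[name]
    try:
        data = json.loads(data)        # JSON fixtures are parsed...
    except ValueError:
        pass                           # ...anything else stays raw text
    fixture_data[name] = data
    return data

policy = load_fixture('policy.json')
notes = load_fixture('notes.txt')
```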
phihag/youtube-dl | youtube_dl/extractor/eagleplatform.py | 17 | 7755 | # coding: utf-8
from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..compat import (
compat_HTTPError,
compat_str,
)
from ..utils import (
ExtractorError,
int_or_none,
unsmuggle_url,
)
class EaglePlatformIE(InfoExtractor):
_VALID_URL = r'''(?x)
(?:
eagleplatform:(?P<custom_host>[^/]+):|
https?://(?P<host>.+?\.media\.eagleplatform\.com)/index/player\?.*\brecord_id=
)
(?P<id>\d+)
'''
_TESTS = [{
# http://lenta.ru/news/2015/03/06/navalny/
'url': 'http://lentaru.media.eagleplatform.com/index/player?player=new&record_id=227304&player_template_id=5201',
# Not checking MD5 as sometimes the direct HTTP link results in 404 and HLS is used
'info_dict': {
'id': '227304',
'ext': 'mp4',
'title': 'Навальный вышел на свободу',
'description': 'md5:d97861ac9ae77377f3f20eaf9d04b4f5',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 87,
'view_count': int,
'age_limit': 0,
},
}, {
# http://muz-tv.ru/play/7129/
# http://media.clipyou.ru/index/player?record_id=12820&width=730&height=415&autoplay=true
'url': 'eagleplatform:media.clipyou.ru:12820',
'md5': '358597369cf8ba56675c1df15e7af624',
'info_dict': {
'id': '12820',
'ext': 'mp4',
'title': "'O Sole Mio",
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 216,
'view_count': int,
},
'skip': 'Georestricted',
}, {
# referrer protected video (https://tvrain.ru/lite/teleshow/kak_vse_nachinalos/namin-418921/)
'url': 'eagleplatform:tvrainru.media.eagleplatform.com:582306',
'only_matching': True,
}]
@staticmethod
def _extract_url(webpage):
# Regular iframe embedding
mobj = re.search(
r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//.+?\.media\.eagleplatform\.com/index/player\?.+?)\1',
webpage)
if mobj is not None:
return mobj.group('url')
PLAYER_JS_RE = r'''
<script[^>]+
src=(?P<qjs>["\'])(?:https?:)?//(?P<host>(?:(?!(?P=qjs)).)+\.media\.eagleplatform\.com)/player/player\.js(?P=qjs)
.+?
'''
# "Basic usage" embedding (see http://dultonmedia.github.io/eplayer/)
mobj = re.search(
r'''(?xs)
%s
<div[^>]+
class=(?P<qclass>["\'])eagleplayer(?P=qclass)[^>]+
data-id=["\'](?P<id>\d+)
''' % PLAYER_JS_RE, webpage)
if mobj is not None:
return 'eagleplatform:%(host)s:%(id)s' % mobj.groupdict()
# Generalization of "Javascript code usage", "Combined usage" and
# "Usage without attaching to DOM" embeddings (see
# http://dultonmedia.github.io/eplayer/)
mobj = re.search(
r'''(?xs)
%s
<script>
.+?
new\s+EaglePlayer\(
(?:[^,]+\s*,\s*)?
{
.+?
\bid\s*:\s*["\']?(?P<id>\d+)
.+?
}
\s*\)
.+?
</script>
''' % PLAYER_JS_RE, webpage)
if mobj is not None:
return 'eagleplatform:%(host)s:%(id)s' % mobj.groupdict()
@staticmethod
def _handle_error(response):
status = int_or_none(response.get('status', 200))
if status != 200:
raise ExtractorError(' '.join(response['errors']), expected=True)
def _download_json(self, url_or_request, video_id, *args, **kwargs):
try:
response = super(EaglePlatformIE, self)._download_json(
url_or_request, video_id, *args, **kwargs)
except ExtractorError as ee:
if isinstance(ee.cause, compat_HTTPError):
response = self._parse_json(ee.cause.read().decode('utf-8'), video_id)
self._handle_error(response)
raise
return response
def _get_video_url(self, url_or_request, video_id, note='Downloading JSON metadata'):
return self._download_json(url_or_request, video_id, note)['data'][0]
def _real_extract(self, url):
url, smuggled_data = unsmuggle_url(url, {})
mobj = re.match(self._VALID_URL, url)
host, video_id = mobj.group('custom_host') or mobj.group('host'), mobj.group('id')
headers = {}
query = {
'id': video_id,
}
referrer = smuggled_data.get('referrer')
if referrer:
headers['Referer'] = referrer
query['referrer'] = referrer
player_data = self._download_json(
'http://%s/api/player_data' % host, video_id,
headers=headers, query=query)
media = player_data['data']['playlist']['viewports'][0]['medialist'][0]
title = media['title']
description = media.get('description')
thumbnail = self._proto_relative_url(media.get('snapshot'), 'http:')
duration = int_or_none(media.get('duration'))
view_count = int_or_none(media.get('views'))
age_restriction = media.get('age_restriction')
age_limit = None
if age_restriction:
age_limit = 0 if age_restriction == 'allow_all' else 18
secure_m3u8 = self._proto_relative_url(media['sources']['secure_m3u8']['auto'], 'http:')
formats = []
m3u8_url = self._get_video_url(secure_m3u8, video_id, 'Downloading m3u8 JSON')
m3u8_formats = self._extract_m3u8_formats(
m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False)
formats.extend(m3u8_formats)
m3u8_formats_dict = {}
for f in m3u8_formats:
if f.get('height') is not None:
m3u8_formats_dict[f['height']] = f
mp4_data = self._download_json(
# Secure mp4 URL is constructed according to Player.prototype.mp4 from
# http://lentaru.media.eagleplatform.com/player/player.js
re.sub(r'm3u8|hlsvod|hls|f4m', 'mp4s', secure_m3u8),
video_id, 'Downloading mp4 JSON', fatal=False)
if mp4_data:
for format_id, format_url in mp4_data.get('data', {}).items():
if not isinstance(format_url, compat_str):
continue
height = int_or_none(format_id)
if height is not None and m3u8_formats_dict.get(height):
f = m3u8_formats_dict[height].copy()
f.update({
'format_id': f['format_id'].replace('hls', 'http'),
'protocol': 'http',
})
else:
f = {
'format_id': 'http-%s' % format_id,
'height': int_or_none(format_id),
}
f['url'] = format_url
formats.append(f)
self._sort_formats(formats)
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'duration': duration,
'view_count': view_count,
'age_limit': age_limit,
'formats': formats,
}
| unlicense |
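The extractor's `_VALID_URL` above accepts two forms: the `eagleplatform:<host>:<id>` shorthand and the full player URL. A quick sketch using the same pattern (copied from the file, so the named groups match what `_real_extract` reads):

```python
import re

VALID_URL = re.compile(r'''(?x)
    (?:
        eagleplatform:(?P<custom_host>[^/]+):|
        https?://(?P<host>.+?\.media\.eagleplatform\.com)/index/player\?.*\brecord_id=
    )
    (?P<id>\d+)
''')

# Shorthand form populates custom_host; the URL form populates host.
m1 = VALID_URL.match('eagleplatform:media.clipyou.ru:12820')
m2 = VALID_URL.match(
    'http://lentaru.media.eagleplatform.com/index/player?player=new&record_id=227304')
```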
abstract-open-solutions/OCB | addons/account/wizard/account_open_closed_fiscalyear.py | 237 | 2537 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
from openerp.tools.translate import _
class account_open_closed_fiscalyear(osv.osv_memory):
_name = "account.open.closed.fiscalyear"
_description = "Choose Fiscal Year"
_columns = {
'fyear_id': fields.many2one('account.fiscalyear', \
'Fiscal Year', required=True, help='Select Fiscal Year which you want to remove entries for its End of year entries journal'),
}
def remove_entries(self, cr, uid, ids, context=None):
move_obj = self.pool.get('account.move')
data = self.browse(cr, uid, ids, context=context)[0]
period_journal = data.fyear_id.end_journal_period_id or False
if not period_journal:
raise osv.except_osv(_('Error!'), _("You have to set the 'End of Year Entries Journal' for this Fiscal Year which is set after generating opening entries from 'Generate Opening Entries'."))
if period_journal.period_id.state == 'done':
raise osv.except_osv(_('Error!'), _("You can not cancel closing entries if the 'End of Year Entries Journal' period is closed."))
ids_move = move_obj.search(cr, uid, [('journal_id','=',period_journal.journal_id.id),('period_id','=',period_journal.period_id.id)])
if ids_move:
cr.execute('delete from account_move where id IN %s', (tuple(ids_move),))
self.invalidate_cache(cr, uid, context=context)
return {'type': 'ir.actions.act_window_close'}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
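The wizard above deletes moves with a parameterized `id IN %s`, relying on psycopg2's adaptation of a Python tuple to a SQL list. A portable sketch of the same safe-parameterization idea using `sqlite3` (which has no tuple adaptation, so one placeholder is expanded per id; table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE account_move (id INTEGER)")
cur.executemany("INSERT INTO account_move VALUES (?)", [(1,), (2,), (3,), (4,)])

ids_move = [2, 3]
# Build "?, ?" dynamically; the values themselves still travel as parameters,
# never interpolated into the SQL text.
placeholders = ", ".join("?" for _ in ids_move)
cur.execute("DELETE FROM account_move WHERE id IN (%s)" % placeholders, ids_move)

remaining = sorted(r[0] for r in cur.execute("SELECT id FROM account_move"))
assert remaining == [1, 4]
```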
paulromano/openmc | tests/unit_tests/test_pin.py | 8 | 3545 | """
Tests for constructing Pin universes
"""
import numpy as np
import pytest
import openmc
from openmc.model import pin
def get_pin_radii(pin_univ):
"""Return a sorted list of all radii from pin"""
rads = set()
for cell in pin_univ.get_all_cells().values():
surfs = cell.region.get_surfaces().values()
rads.update(set(s.r for s in surfs))
return list(sorted(rads))
@pytest.fixture
def pin_mats():
fuel = openmc.Material(name="UO2")
fuel.volume = 100
clad = openmc.Material(name="zirc")
clad.volume = 100
water = openmc.Material(name="water")
return fuel, clad, water
@pytest.fixture
def good_radii():
return (0.4, 0.42)
def test_failure(pin_mats, good_radii):
"""Check for various failure modes"""
good_surfaces = [openmc.ZCylinder(r=r) for r in good_radii]
# Bad material type
with pytest.raises(TypeError):
pin(good_surfaces, [mat.name for mat in pin_mats])
# Incorrect lengths
with pytest.raises(ValueError, match="length"):
pin(good_surfaces[:len(pin_mats) - 2], pin_mats)
# Non-positive radii
rad = [openmc.ZCylinder(r=-0.1)] + good_surfaces[1:]
with pytest.raises(ValueError, match="index 0"):
pin(rad, pin_mats)
# Non-increasing radii
surfs = tuple(reversed(good_surfaces))
with pytest.raises(ValueError, match="index 1"):
pin(surfs, pin_mats)
# Bad orientation
surfs = [openmc.XCylinder(r=good_surfaces[0].r)] + good_surfaces[1:]
with pytest.raises(TypeError, match="surfaces"):
pin(surfs, pin_mats)
# Passing cells argument
with pytest.raises(ValueError, match="Cells"):
pin(surfs, pin_mats, cells=[])
def test_pins_of_universes(pin_mats, good_radii):
"""Build a pin with a Universe in one ring"""
u1 = openmc.Universe(cells=[openmc.Cell(fill=pin_mats[1])])
new_items = pin_mats[:1] + (u1, ) + pin_mats[2:]
new_pin = pin(
[openmc.ZCylinder(r=r) for r in good_radii], new_items,
subdivisions={0: 2}, divide_vols=True)
assert len(new_pin.cells) == len(pin_mats) + 1
@pytest.mark.parametrize(
"surf_type", [openmc.ZCylinder, openmc.XCylinder, openmc.YCylinder])
def test_subdivide(pin_mats, good_radii, surf_type):
"""Test the subdivision with various orientations"""
surfs = [surf_type(r=r) for r in good_radii]
fresh = pin(surfs, pin_mats, name="fresh pin")
assert len(fresh.cells) == len(pin_mats)
assert fresh.name == "fresh pin"
# subdivide inner region
N = 5
div0 = pin(surfs, pin_mats, {0: N})
assert len(div0.cells) == len(pin_mats) + N - 1
# Check volume of fuel material
for mid, mat in div0.get_all_materials().items():
if mat.name == "UO2":
assert mat.volume == pytest.approx(100 / N)
# check volumes of new rings
radii = get_pin_radii(div0)
bounds = [0] + radii[:N]
sqrs = np.square(bounds)
assert np.all(sqrs[1:] - sqrs[:-1] == pytest.approx(good_radii[0] ** 2 / N))
# subdivide non-inner most region
new_pin = pin(surfs, pin_mats, {1: N})
assert len(new_pin.cells) == len(pin_mats) + N - 1
# Check volume of clad material
for mid, mat in div0.get_all_materials().items():
if mat.name == "zirc":
assert mat.volume == pytest.approx(100 / N)
# check volumes of new rings
radii = get_pin_radii(new_pin)
sqrs = np.square(radii[:N + 1])
assert np.all(sqrs[1:] - sqrs[:-1] == pytest.approx(
(good_radii[1] ** 2 - good_radii[0] ** 2) / N))
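# Illustrative, standalone check of the equal-volume ring arithmetic the
# assertions above rely on (R and N are hypothetical, nothing imported
# from openmc):

```python
import numpy as np

# Splitting a cylinder of radius R into N equal-volume (equal-area) rings
# requires r_i = R * sqrt(i / N), so r_i**2 - r_{i-1}**2 is constant.
R, N = 0.4, 5
radii = R * np.sqrt(np.arange(1, N + 1) / N)
sqrs = np.square(np.concatenate(([0.0], radii)))
ring_areas = sqrs[1:] - sqrs[:-1]   # each equals R**2 / N (up to a factor of pi)
```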
| mit |
metpy/SHARPpy | sharppy/sharptab/winds.py | 1 | 12346 | ''' Wind Manipulation Routines '''
import math
from sharppy.sharptab import interp, vector
from sharppy.sharptab.constants import *
__all__ = ['mean_wind', 'mean_wind_npw', 'sr_wind', 'sr_wind_npw',
'wind_shear', 'helicity', 'max_wind', 'corfidi_mcs_motion',
'non_parcel_bunkers_motion', 'mbe_vectors']
def mean_wind(pbot, ptop, prof, psteps=20, stu=0, stv=0):
'''
Calculates a pressure-weighted mean wind through a layer. The default
layer is 850 to 200 hPa.
Inputs
------
pbot (float) Pressure of the bottom level (hPa)
ptop (float) Pressure of the top level (hPa)
prof (profile object) Profile Object
psteps (int; optional) Number of steps to loop through (int)
stu (float; optional) U-component of storm-motion vector
stv (float; optional) V-component of storm-motion vector
Returns
-------
mnu (float) U-component
mnv (float) V-component
'''
    if pbot == -1: pbot = 850.
    if ptop == -1: ptop = 200.
pinc = int((pbot - ptop) / psteps)
if pinc < 1:
u1, v1 = interp.components(pbot, prof)
u2, v2 = interp.components(ptop, prof)
u1 = (u1 - stu) * pbot
v1 = (v1 - stv) * pbot
u2 = (u2 - stu) * ptop
v2 = (v2 - stv) * ptop
usum = u1 + u2
vsum = v1 + v2
wgt = pbot + ptop
else:
wgt = 0
usum = 0
vsum = 0
for p in range(int(pbot), int(ptop)+1, -pinc):
utmp, vtmp = interp.components(p, prof)
usum += (utmp - stu) * p
vsum += (vtmp - stv) * p
wgt += p
return float(usum / wgt), float(vsum / wgt)
def mean_wind_npw(pbot, ptop, prof, psteps=20, stu=0, stv=0):
'''
    Calculates a non-pressure-weighted mean wind through a layer. The default
layer is 850 to 200 hPa.
Inputs
------
pbot (float) Pressure of the bottom level (hPa)
ptop (float) Pressure of the top level (hPa)
prof (profile object) Profile Object
psteps (int; optional) Number of steps to loop through (int)
stu (float; optional) U-component of storm-motion vector
stv (float; optional) V-component of storm-motion vector
Returns
-------
mnu (float) U-component
mnv (float) V-component
'''
    if pbot == -1: pbot = 850.
    if ptop == -1: ptop = 200.
pinc = int((pbot - ptop) / psteps)
if pinc < 1:
u1, v1 = interp.components(pbot, prof)
u2, v2 = interp.components(ptop, prof)
        u1 = u1 - stu
        v1 = v1 - stv
        u2 = u2 - stu
        v2 = v2 - stv
usum = u1 + u2
vsum = v1 + v2
wgt = 2
else:
wgt = 0
usum = 0
vsum = 0
for p in range(int(pbot), int(ptop), -pinc):
utmp, vtmp = interp.components(p, prof)
usum += (utmp - stu)
vsum += (vtmp - stv)
wgt += 1
return float(usum / wgt), float(vsum / wgt)
def sr_wind(pbot, ptop, stu, stv, prof, psteps=20):
'''
Calculates a pressure-weighted mean storm-relative wind through a layer.
The default layer is 850 to 200 hPa. This is a thin wrapper around
mean_wind().
Inputs
------
pbot (float) Pressure of the bottom level (hPa)
ptop (float) Pressure of the top level (hPa)
stu (float) U-component of storm-motion vector
stv (float) V-component of storm-motion vector
prof (profile object) Profile Object
psteps (int; optional) Number of steps to loop through (int)
Returns
-------
mnu (float) U-component
mnv (float) V-component
'''
return mean_wind(pbot, ptop, prof, psteps, stu, stv)
def sr_wind_npw(pbot, ptop, stu, stv, prof, psteps=20):
'''
Calculates a non-pressure-weighted storm-relative mean wind through a
layer. The default layer is 850 to 200 hPa. This is a thin wrapper
around mean_wind_npw().
Inputs
------
pbot (float) Pressure of the bottom level (hPa)
ptop (float) Pressure of the top level (hPa)
stu (float) U-component of storm-motion vector
stv (float) V-component of storm-motion vector
prof (profile object) Profile Object
psteps (int; optional) Number of steps to loop through (int)
Returns
-------
mnu (float) U-component
mnv (float) V-component
'''
return mean_wind_npw(pbot, ptop, prof, psteps, stu, stv)
def wind_shear(pbot, ptop, prof):
'''
Calculates the shear between the wind at (pbot) and (ptop).
Inputs
------
pbot (float) Pressure of the bottom level (hPa)
ptop (float) Pressure of the top level (hPa)
prof (profile object) Profile Object
Returns
-------
shu (float) U-component
shv (float) V-component
'''
ubot, vbot = interp.components(pbot, prof)
utop, vtop = interp.components(ptop, prof)
shu = utop - ubot
shv = vtop - vbot
return shu, shv
def helicity(lower, upper, prof, stu=0, stv=0):
'''
Calculates the relative helicity (m2/s2) of a layer from lower to upper.
    If a storm-motion vector is supplied, storm-relative helicity, both
    positive and negative, is returned.
Inputs
------
lower (float) Bottom level of layer (m, AGL)
upper (float) Top level of layer (m, AGL)
prof (profile object) Profile Object
stu (float; optional) U-component of storm-motion
stv (float; optional) V-component of storm-motion
Returns
-------
phel+nhel (float) Combined Helicity (m2/s2)
phel (float) Positive Helicity (m2/s2)
nhel (float) Negative Helicity (m2/s2)
'''
lower = interp.msl(lower, prof)
upper = interp.msl(upper, prof)
plower = interp.pres(lower, prof)
pupper = interp.pres(upper, prof)
phel = 0
nhel = 0
# Find lower and upper ind bounds for looping
i = 0
while interp.msl(prof.gSndg[i][prof.zind], prof) < lower: i+=1
lptr = i
if interp.msl(prof.gSndg[i][prof.zind], prof) == lower: lptr+=1
while interp.msl(prof.gSndg[i][prof.zind], prof) <= upper: i+=1
uptr = i
if interp.msl(prof.gSndg[i][prof.zind], prof) == upper: uptr-=1
# Integrate from interpolated bottom level to iptr level
sru1, srv1 = interp.components(plower, prof)
sru1 = KTS2MS(sru1 - stu)
srv1 = KTS2MS(srv1 - stv)
# Loop through levels
for i in range(lptr, uptr+1):
if QC(prof.gSndg[i][prof.uind]) and QC(prof.gSndg[i][prof.vind]):
sru2, srv2 = interp.components(prof.gSndg[i][prof.pind], prof)
sru2 = KTS2MS(sru2 - stu)
srv2 = KTS2MS(srv2 - stv)
lyrh = (sru2 * srv1) - (sru1 * srv2)
if lyrh > 0: phel += lyrh
else: nhel += lyrh
sru1 = sru2
srv1 = srv2
# Integrate from tptr level to interpolated top level
sru2, srv2 = interp.components(pupper, prof)
sru2 = KTS2MS(sru2 - stu)
srv2 = KTS2MS(srv2 - stv)
lyrh = (sru2 * srv1) - (sru1 * srv2)
if lyrh > 0: phel += lyrh
else: nhel += lyrh
return phel+nhel, phel, nhel
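# Standalone sketch (hypothetical winds, nothing from SHARPpy) of the
# per-layer term that helicity() accumulates:

```python
stu, stv = 5.0, 3.0        # assumed storm-motion components (m/s)
u1, v1 = 10.0, 0.0         # ground-relative wind at the lower level (m/s)
u2, v2 = 12.0, 8.0         # ground-relative wind at the upper level (m/s)

# Storm-relative winds at each level
sru1, srv1 = u1 - stu, v1 - stv
sru2, srv2 = u2 - stu, v2 - stv

# Layer contribution, same form as lyrh above; it is minus the 2-D cross
# product of the two storm-relative wind vectors.
lyrh = (sru2 * srv1) - (sru1 * srv2)
cross = sru1 * srv2 - srv1 * sru2
# Here lyrh is -46.0 m2/s2, a negative contribution added to nhel.
```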
def max_wind(lower, upper, prof):
'''
Finds the maximum wind speed of the layer given by lower and upper levels.
    In the event of the maximum wind speed occurring at multiple levels, the
    lowest level at which it occurs is returned.
Inputs
------
lower (float) Bottom level of layer (m, AGL)
upper (float) Top level of layer (m, AGL)
prof (profile object) Profile Object
Returns
-------
p (float) Pressure level (hPa) of max wind speed
maxu (float) Maximum U-component
maxv (float) Maximum V-component
'''
if lower == -1: lower = prof.gSndg[prof.sfc][prof.pind]
if upper == -1: upper = prof.gSndg[prof.gNumLevels-1][prof.pind]
# Find lower and upper ind bounds for looping
i = 0
while prof.gSndg[i][prof.pind] > lower: i+=1
lptr = i
while prof.gSndg[i][prof.pind] > upper: i+=1
uptr = i
# Start with interpolated bottom level
maxu, maxv = interp.components(lower, prof)
maxspd = vector.comp2vec(maxu, maxv)[1]
p = lower
# Loop through all levels in layer
for i in range(lptr, uptr+1):
if QC(prof.gSndg[i][prof.pind]) and QC(prof.gSndg[i][prof.uind]) and \
QC(prof.gSndg[i][prof.vind]):
spd = vector.comp2vec(prof.gSndg[i][prof.uind],
prof.gSndg[i][prof.vind])[1]
if spd > maxspd:
maxspd = spd
maxu = prof.gSndg[i][prof.uind]
maxv = prof.gSndg[i][prof.vind]
p = prof.gSndg[i][prof.pind]
# Finish with interpolated top level
tmpu, tmpv = interp.components(upper, prof)
tmpspd = vector.comp2vec(tmpu, tmpv)[1]
if tmpspd > maxspd:
maxu = tmpu
maxv = tmpv
maxspd = tmpspd
p = upper
return p, maxu, maxv
def corfidi_mcs_motion(prof):
'''
Calculated the Meso-beta Elements (Corfidi) Vectors
Inputs
------
prof (profile object) Profile Object
Returns
-------
upu (float) U-component of the upshear vector
upv (float) V-component of the upshear vector
dnu (float) U-component of the downshear vector
dnv (float) V-component of the downshear vector
'''
# Compute the tropospheric (850hPa-300hPa) mean wind
mnu1, mnv1 = mean_wind_npw(850., 300., prof)
# Compute the low-level (SFC-1500m) mean wind
p_1p5km = interp.pres(interp.msl(1500., prof), prof)
mnu2, mnv2 = mean_wind_npw(prof.gSndg[prof.sfc][prof.pind], p_1p5km, prof)
# Compute the upshear vector
upu = mnu1 - mnu2
upv = mnv1 - mnv2
# Compute the downshear vector
dnu = mnu1 + upu
dnv = mnv1 + upv
return upu, upv, dnu, dnv
def non_parcel_bunkers_motion(prof):
'''
Compute the Bunkers Storm Motion for a Right Moving Supercell
Inputs
------
prof (profile object) Profile Object
Returns
-------
rstu (float) Right Storm Motion U-component
rstv (float) Right Storm Motion V-component
lstu (float) Left Storm Motion U-component
lstv (float) Left Storm Motion V-component
'''
    d = MS2KTS(7.5) # Deviation value empirically derived as 7.5 m/s
msl6km = interp.msl(6000., prof)
p6km = interp.pres(msl6km, prof)
# SFC-6km Mean Wind
mnu6, mnv6 = mean_wind_npw(prof.gSndg[prof.sfc][prof.pind],
p6km, prof, 20)
# SFC-6km Shear Vector
shru6, shrv6 = wind_shear(prof.gSndg[prof.sfc][prof.pind],
p6km, prof)
# Bunkers Right Motion
tmp = d / vector.comp2vec(shru6, shrv6)[1]
rstu = mnu6 + (tmp * shrv6)
rstv = mnv6 - (tmp * shru6)
lstu = mnu6 - (tmp * shrv6)
lstv = mnv6 + (tmp * shru6)
return rstu, rstv, lstu, lstv
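# Standalone arithmetic check of the Bunkers construction above: both
# movers deviate from the mean wind by d, perpendicular to the shear
# vector (all wind values below are hypothetical):

```python
import math

d = 7.5 * 1.94384            # assumed m/s -> kt conversion for MS2KTS(7.5)
mnu6, mnv6 = 20.0, 10.0      # hypothetical SFC-6km mean wind (kt)
shru6, shrv6 = 30.0, 15.0    # hypothetical SFC-6km shear vector (kt)

tmp = d / math.hypot(shru6, shrv6)
rstu, rstv = mnu6 + tmp * shrv6, mnv6 - tmp * shru6   # right mover
lstu, lstv = mnu6 - tmp * shrv6, mnv6 + tmp * shru6   # left mover
```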
def mbe_vectors(prof):
'''
Thin wrapper around corfidi_mcs_motion()
Inputs
------
prof (profile object) Profile Object
Returns
-------
upu (float) U-component of the upshear vector
upv (float) V-component of the upshear vector
dnu (float) U-component of the downshear vector
dnv (float) V-component of the downshear vector
'''
return corfidi_mcs_motion(prof) | bsd-3-clause |
ifduyue/sentry | src/sentry/utils/audit.py | 2 | 3240 | from __future__ import absolute_import
from sentry.models import (
ApiKey, AuditLogEntry, AuditLogEntryEvent, DeletedOrganization, DeletedProject, DeletedTeam, Organization, Project, Team
)
def create_audit_entry(request, transaction_id=None, logger=None, **kwargs):
user = request.user if request.user.is_authenticated() else None
api_key = request.auth if hasattr(request, 'auth') \
and isinstance(request.auth, ApiKey) else None
entry = AuditLogEntry(
actor=user, actor_key=api_key, ip_address=request.META['REMOTE_ADDR'], **kwargs
)
# Only create a real AuditLogEntry record if we are passing an event type
# otherwise, we want to still log to our actual logging
if entry.event is not None:
entry.save()
if entry.event == AuditLogEntryEvent.ORG_REMOVE:
create_org_delete_log(entry)
elif entry.event == AuditLogEntryEvent.PROJECT_REMOVE:
create_project_delete_log(entry)
elif entry.event == AuditLogEntryEvent.TEAM_REMOVE:
create_team_delete_log(entry)
extra = {
'ip_address': entry.ip_address,
'organization_id': entry.organization_id,
'object_id': entry.target_object,
'entry_id': entry.id,
'actor_label': entry.actor_label
}
if entry.actor_id:
extra['actor_id'] = entry.actor_id
if entry.actor_key_id:
extra['actor_key_id'] = entry.actor_key_id
if transaction_id is not None:
extra['transaction_id'] = transaction_id
if logger:
logger.info(entry.get_event_display(), extra=extra)
return entry
def create_org_delete_log(entry):
delete_log = DeletedOrganization()
organization = Organization.objects.get(id=entry.target_object)
delete_log.name = organization.name
delete_log.slug = organization.slug
delete_log.date_created = organization.date_added
complete_delete_log(delete_log, entry)
def create_project_delete_log(entry):
delete_log = DeletedProject()
project = Project.objects.get(id=entry.target_object)
delete_log.name = project.name
delete_log.slug = project.slug
delete_log.date_created = project.date_added
delete_log.platform = project.platform
delete_log.organization_id = entry.organization.id
delete_log.organization_name = entry.organization.name
delete_log.organization_slug = entry.organization.slug
complete_delete_log(delete_log, entry)
def create_team_delete_log(entry):
delete_log = DeletedTeam()
team = Team.objects.get(id=entry.target_object)
delete_log.name = team.name
delete_log.slug = team.slug
delete_log.date_created = team.date_added
delete_log.organization_id = entry.organization.id
delete_log.organization_name = entry.organization.name
delete_log.organization_slug = entry.organization.slug
complete_delete_log(delete_log, entry)
def complete_delete_log(delete_log, entry):
"""
Adds common information on a delete log from an audit entry and
saves that delete log.
"""
delete_log.actor_label = entry.actor_label
delete_log.actor_id = entry.actor_id
delete_log.actor_key = entry.actor_key
delete_log.ip_address = entry.ip_address
delete_log.save()
| bsd-3-clause |
jledet/linux-xlnx | scripts/gdb/linux/symbols.py | 68 | 6310 | #
# gdb helper commands and functions for Linux kernel debugging
#
# load kernel and module symbols
#
# Copyright (c) Siemens AG, 2011-2013
#
# Authors:
# Jan Kiszka <jan.kiszka@siemens.com>
#
# This work is licensed under the terms of the GNU GPL version 2.
#
import gdb
import os
import re
from linux import modules
if hasattr(gdb, 'Breakpoint'):
class LoadModuleBreakpoint(gdb.Breakpoint):
def __init__(self, spec, gdb_command):
super(LoadModuleBreakpoint, self).__init__(spec, internal=True)
self.silent = True
self.gdb_command = gdb_command
def stop(self):
module = gdb.parse_and_eval("mod")
module_name = module['name'].string()
cmd = self.gdb_command
# enforce update if object file is not found
cmd.module_files_updated = False
# Disable pagination while reporting symbol (re-)loading.
# The console input is blocked in this context so that we would
# get stuck waiting for the user to acknowledge paged output.
show_pagination = gdb.execute("show pagination", to_string=True)
pagination = show_pagination.endswith("on.\n")
gdb.execute("set pagination off")
if module_name in cmd.loaded_modules:
gdb.write("refreshing all symbols to reload module "
"'{0}'\n".format(module_name))
cmd.load_all_symbols()
else:
cmd.load_module_symbols(module)
# restore pagination state
gdb.execute("set pagination %s" % ("on" if pagination else "off"))
return False
class LxSymbols(gdb.Command):
"""(Re-)load symbols of Linux kernel and currently loaded modules.
The kernel (vmlinux) is taken from the current working directory. Modules (.ko)
are scanned recursively, starting in the same directory. Optionally, the module
search path can be extended by a space separated list of paths passed to the
lx-symbols command."""
module_paths = []
module_files = []
module_files_updated = False
loaded_modules = []
breakpoint = None
def __init__(self):
super(LxSymbols, self).__init__("lx-symbols", gdb.COMMAND_FILES,
gdb.COMPLETE_FILENAME)
def _update_module_files(self):
self.module_files = []
for path in self.module_paths:
gdb.write("scanning for modules in {0}\n".format(path))
for root, dirs, files in os.walk(path):
for name in files:
if name.endswith(".ko"):
self.module_files.append(root + "/" + name)
self.module_files_updated = True
def _get_module_file(self, module_name):
        module_pattern = r".*/{0}\.ko$".format(
            module_name.replace("_", r"[_\-]"))
for name in self.module_files:
if re.match(module_pattern, name) and os.path.exists(name):
return name
return None
def _section_arguments(self, module):
try:
sect_attrs = module['sect_attrs'].dereference()
except gdb.error:
return ""
attrs = sect_attrs['attrs']
section_name_to_address = {
attrs[n]['name'].string(): attrs[n]['address']
for n in range(int(sect_attrs['nsections']))}
args = []
for section_name in [".data", ".data..read_mostly", ".rodata", ".bss"]:
address = section_name_to_address.get(section_name)
if address:
args.append(" -s {name} {addr}".format(
name=section_name, addr=str(address)))
return "".join(args)
def load_module_symbols(self, module):
module_name = module['name'].string()
module_addr = str(module['core_layout']['base']).split()[0]
module_file = self._get_module_file(module_name)
if not module_file and not self.module_files_updated:
self._update_module_files()
module_file = self._get_module_file(module_name)
if module_file:
            gdb.write("loading @{addr}: {filename}\n".format(
                addr=module_addr, filename=module_file))
            cmdline = "add-symbol-file {filename} {addr}{sections}".format(
filename=module_file,
addr=module_addr,
sections=self._section_arguments(module))
gdb.execute(cmdline, to_string=True)
if module_name not in self.loaded_modules:
self.loaded_modules.append(module_name)
else:
gdb.write("no module object found for '{0}'\n".format(module_name))
def load_all_symbols(self):
gdb.write("loading vmlinux\n")
# Dropping symbols will disable all breakpoints. So save their states
# and restore them afterward.
saved_states = []
if hasattr(gdb, 'breakpoints') and not gdb.breakpoints() is None:
for bp in gdb.breakpoints():
saved_states.append({'breakpoint': bp, 'enabled': bp.enabled})
# drop all current symbols and reload vmlinux
gdb.execute("symbol-file", to_string=True)
gdb.execute("symbol-file vmlinux")
self.loaded_modules = []
module_list = modules.module_list()
if not module_list:
gdb.write("no modules found\n")
else:
[self.load_module_symbols(module) for module in module_list]
for saved_state in saved_states:
saved_state['breakpoint'].enabled = saved_state['enabled']
def invoke(self, arg, from_tty):
self.module_paths = arg.split()
self.module_paths.append(os.getcwd())
# enforce update
self.module_files = []
self.module_files_updated = False
self.load_all_symbols()
if hasattr(gdb, 'Breakpoint'):
if self.breakpoint is not None:
self.breakpoint.delete()
self.breakpoint = None
self.breakpoint = LoadModuleBreakpoint(
"kernel/module.c:do_init_module", self)
else:
gdb.write("Note: symbol update on module loading not supported "
"with this gdb version\n")
LxSymbols()
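# Standalone demo of the module-name pattern built in _get_module_file():
# '_' and '-' are treated as interchangeable, mirroring kernel module
# name normalization (the paths below are made up):

```python
import re

module_name = "snd_hda_intel"
module_pattern = r".*/{0}\.ko$".format(module_name.replace("_", r"[_\-]"))

hit = re.match(module_pattern, "/lib/modules/extra/snd-hda-intel.ko")
miss = re.match(module_pattern, "/lib/modules/extra/snd-hda.ko")
```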
| gpl-2.0 |
mktoni/anglerfish | anglerfish/set_single_instance.py | 2 | 1102 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Set process name and cpu priority,return socket.socket or None."""
import logging as log
import socket
import sys
def set_single_instance(name: str, port: int=8_888) -> socket.socket:
"""Set process name and cpu priority,return socket.socket or None."""
    try:  # Single instance app ~crossplatform, uses an abstract Unix or TCP stream socket.
        log.info("Creating Abstract Socket Lock for Single Instance.")
__lock = socket.socket(
socket.AF_UNIX if sys.platform.startswith("linux")
else socket.AF_INET, socket.SOCK_STREAM)
__lock.bind(f"\0_{name}__lock" if sys.platform.startswith("linux")
else ("127.0.0.1", port))
except (socket.error, OSError, Exception) as e:
__lock = None
log.critical("Another instance of App is already running!, Exiting!.")
log.exception(e)
sys.exit()
exit()
0 / 0 # should never reach here.
else:
log.info(f"Socket Lock for 1 Single App Instance: {__lock},{__lock!r}")
finally:
return __lock
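# Minimal standalone sketch of the same locking idea: a second bind to an
# already-bound address fails, which is what makes the socket act as a
# lock (plain TCP on an ephemeral port here, not the abstract-socket
# variant used above):

```python
import socket

lock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lock.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
port = lock.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port)) # "second instance" collides
    already_running = False
except OSError:
    already_running = True
finally:
    second.close()
lock.close()
```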
| gpl-3.0 |
jchodera/assaytools | AssayTools/pymcmodels.py | 1 | 27554 | #!/usr/bin/env python
"""
A test of pymc for ITC.
"""
#=============================================================================================
# IMPORTS
#=============================================================================================
import numpy as np
import pymc
import pint
#=============================================================================================
# Physical constants
#=============================================================================================
Na = 6.02214179e23 # Avogadro's number (number/mol)
kB = Na * 1.3806504e-23 / 4184.0 # Boltzmann constant (kcal/mol/K)
C0 = 1.0 # standard concentration (M)
#=============================================================================================
# Parameters for MCMC sampling
#=============================================================================================
DG_min = np.log(1e-15) # kT, most favorable (negative) binding free energy possible; 1 fM
DG_max = +0 # kT, least favorable binding free energy possible
niter = 500000 # number of iterations
nburn = 50000 # number of burn-in iterations to discard
nthin = 500 # thinning interval
#=============================================================================================
# PyMC models
#=============================================================================================
def inner_filter_effect_attenuation(epsilon_ex, epsilon_em, path_length, concentration, geometry='top'):
"""
Compute primary and secondar inner filter effect attenuation for top and bottom observation geometries.
Parameters
----------
epsilon_ex : float
        Extinction coefficient at excitation wavelength. Units of 1/M/cm
epsilon_em : float
Extinction coefficient at emission wavelength. Units of 1/M/cm
path_length : float
Path length. Units of cm.
concentration : float
Concentration of species whose extinction coefficient is provided. Units of M.
geometry : str, optional, default='top'
Observation geometry, one of ['top', 'bottom'].
Returns
-------
scaling : factor by which expected fluorescence is attenuated by primary and secondary inner filter effects
"""
# Ensure concentration is a vector.
if not hasattr(concentration, '__getitem__'):
concentration = np.array([concentration])
ELC_ex = epsilon_ex*path_length*concentration
ELC_em = epsilon_em*path_length*concentration
scaling = 1.0 # no attenuation
if geometry == 'top':
alpha = (ELC_ex + ELC_em)
scaling = (1 - np.exp(-alpha)) / alpha
# Handle alpha -> 0 case explicitly.
indices = np.where(np.abs(alpha) < 0.01)
scaling[indices] = 1. - alpha[indices]/2. + (alpha[indices]**2)/6. - (alpha[indices]**3)/24. + (alpha[indices]**4)/120.
elif geometry == 'bottom':
alpha = (ELC_ex - ELC_em)
scaling = (1 - np.exp(-alpha)) / alpha
# Handle alpha -> 0 case explicitly.
indices = np.where(np.abs(alpha) < 0.01)
scaling[indices] = 1. - alpha[indices]/2. + (alpha[indices]**2)/6. - (alpha[indices]**3)/24. + (alpha[indices]**4)/120.
# Include additional term.
scaling *= np.exp(-ELC_em)
else:
raise Exception("geometry '%s' unknown, must be one of ['top', 'bottom']" % geometry)
return scaling
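# Quick standalone check of the attenuation formula above for the 'top'
# geometry (extinction coefficients and concentrations are hypothetical;
# 0.63 cm roughly matches the default assay_volume / well_area used below):

```python
import numpy as np

epsilon_ex, epsilon_em = 70000.0, 10000.0     # 1/M/cm, hypothetical
path_length = 0.63                            # cm
concentration = np.array([1e-9, 1e-5])        # M: dilute vs absorbing well

alpha = (epsilon_ex + epsilon_em) * path_length * concentration
scaling = (1.0 - np.exp(-alpha)) / alpha
# Small-alpha Taylor series used above to avoid 0/0:
series = 1. - alpha/2. + alpha**2/6. - alpha**3/24. + alpha**4/120.
```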
# Create a pymc model
def make_model(Pstated, dPstated, Lstated, dLstated,
top_complex_fluorescence=None, top_ligand_fluorescence=None,
bottom_complex_fluorescence=None, bottom_ligand_fluorescence=None,
DG_prior='uniform',
concentration_priors='lognormal',
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=100e-6, well_area=0.1586,
epsilon_ex=None, depsilon_ex=None,
epsilon_em=None, depsilon_em=None,
ligand_ex_absorbance=None, ligand_em_absorbance=None,
link_top_and_bottom_sigma=True):
"""
Build a PyMC model for an assay that consists of N wells of protein:ligand at various concentrations and an additional N wells of ligand in buffer, with the ligand at the same concentrations as the corresponding protein:ligand wells.
Parameters
----------
Pstated : numpy.array of N values
Stated protein concentrations for all protein:ligand wells of assay. Units of molarity.
dPstated : numpy.array of N values
Absolute uncertainty in stated protein concentrations for all wells of assay. Units of molarity.
Uncertainties currently cannot be zero.
Lstated : numpy.array of N values
Stated ligand concentrations for all protein:ligand and ligand wells of assay, which must be the same with and without protein. Units of molarity.
dLstated : numpy.array of N values
Absolute uncertainty in stated protein concentrations for all wells of assay. Units of molarity.
Uncertainties currently cannot be zero
    top_complex_fluorescence : numpy.array of N values, optional, default=None
Fluorescence intensity (top) for protein:ligand mixture.
top_ligand_fluorescence : numpy.array of N values, optional, default=None
Fluorescence intensity (top) for ligand control.
bottom_complex_fluorescence: numpy.array of N values, optional, default=None
Fluorescence intensity (bottom) for protein:ligand mixture.
bottom_ligand_fluorescence : numpy.array of N values, optional, default=None
Fluorescence intensity (bottom) for ligand control.
DG_prior : str, optional, default='uniform'
Prior to use for reduced free energy of binding (DG): 'uniform' (uniform over reasonable range), or 'chembl' (ChEMBL-inspired distribution); default: 'uniform'
concentration_priors : str, optional, default='lognormal'
Prior to use for protein and ligand concentrations. Available options are ['lognormal', 'normal'].
use_primary_inner_filter_correction : bool, optional, default=True
If true, will infer ligand extinction coefficient epsilon and apply primary inner filter correction to attenuate excitation light.
use_secondary_inner_filter_correction : bool, optional, default=True
If true, will infer ligand extinction coefficient epsilon and apply secondary inner filter correction to attenuate excitation light.
assay_volume : float, optional, default=100e-6
Assay volume. Units of L. Default 100 uL.
well_area : float, optional, default=0.1586
Well area. Units of cm^2. Default 0.1586 cm^2, for half-area plate.
epsilon_ex, depsilon_ex : float, optional, default=None
Orthogonal measurement of ligand extinction coefficient at excitation wavelength (and uncertainty). If None, will use a uniform prior.
epsilon_em, depsilon_em : float, optional, default=None
        Orthogonal measurement of ligand extinction coefficient at emission wavelength (and uncertainty). If None, will use a uniform prior.
ligand_ex_absorbance : np.array of N values, optional, default=None
Ligand absorbance measurement for excitation wavelength.
ligand_em_absorbance : np.array of N values, optional, default=None
Ligand absorbance measurement for emission wavelength.
link_top_and_bottom_sigma : bool, optional, default=True
If True, will link top and bottom fluorescence uncertainty sigma.
Returns
-------
pymc_model : dict
        A dict mapping variable names to objects that can be used as a PyMC model object.
Examples
--------
Create a simple model
>>> N = 12 # 12 wells per series of protein:ligand or ligand alone
>>> Pstated = np.ones([N], np.float64) * 1e-6
>>> Lstated = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(N)])
>>> dPstated = 0.10 * Pstated
>>> dLstated = 0.08 * Lstated
>>> top_complex_fluorescence = np.array([ 689., 683., 664., 588., 207., 80., 28., 17., 10., 11., 10., 10.], np.float32)
>>> top_ligand_fluorescence = np.array([ 174., 115., 57., 20., 7., 6., 6., 6., 6., 7., 6., 7.], np.float32)
>>> from pymcmodels import make_model
>>> pymc_model = make_model(Pstated, dPstated, Lstated, dLstated, top_complex_fluorescence=top_complex_fluorescence, top_ligand_fluorescence=top_ligand_fluorescence)
"""
# Compute path length.
path_length = assay_volume * 1000 / well_area # cm, needed for inner filter effect corrections
# Compute number of samples.
N = len(Lstated)
# Check input.
# TODO: Check fluorescence and absorbance measurements for correct dimensions.
if (len(Pstated) != N):
raise Exception('len(Pstated) [%d] must equal len(Lstated) [%d].' % (len(Pstated), len(Lstated)))
if (len(dPstated) != N):
raise Exception('len(dPstated) [%d] must equal len(Lstated) [%d].' % (len(dPstated), len(Lstated)))
if (len(dLstated) != N):
raise Exception('len(dLstated) [%d] must equal len(Lstated) [%d].' % (len(dLstated), len(Lstated)))
# Note whether we have top or bottom fluorescence measurements.
top_fluorescence = (top_complex_fluorescence is not None) or (top_ligand_fluorescence is not None) # True if any top fluorescence measurements provided
bottom_fluorescence = (bottom_complex_fluorescence is not None) or (bottom_ligand_fluorescence is not None) # True if any bottom fluorescence measurements provided
# Create an empty dict to hold the model.
model = dict()
# Prior on binding free energies.
if DG_prior == 'uniform':
DeltaG = pymc.Uniform('DeltaG', lower=DG_min, upper=DG_max) # binding free energy (kT), uniform over huge range
elif DG_prior == 'chembl':
        DeltaG = pymc.Normal('DeltaG', mu=0, tau=1./(12.5**2)) # binding free energy (kT), using a Gaussian prior inspired by ChEMBL
else:
raise Exception("DG_prior = '%s' unknown. Must be one of 'DeltaG' or 'chembl'." % DG_prior)
# Add to model.
model['DeltaG'] = DeltaG
# Create priors on true concentrations of protein and ligand.
if concentration_priors == 'lognormal':
Ptrue = pymc.Lognormal('Ptrue', mu=np.log(Pstated**2 / np.sqrt(dPstated**2 + Pstated**2)), tau=np.sqrt(np.log(1.0 + (dPstated/Pstated)**2))**(-2)) # protein concentration (M)
Ltrue = pymc.Lognormal('Ltrue', mu=np.log(Lstated**2 / np.sqrt(dLstated**2 + Lstated**2)), tau=np.sqrt(np.log(1.0 + (dLstated/Lstated)**2))**(-2)) # ligand concentration (M)
Ltrue_control = pymc.Lognormal('Ltrue_control', mu=np.log(Lstated**2 / np.sqrt(dLstated**2 + Lstated**2)), tau=np.sqrt(np.log(1.0 + (dLstated/Lstated)**2))**(-2)) # ligand concentration (M)
elif concentration_priors == 'gaussian':
# Warning: These priors could lead to negative concentrations.
Ptrue = pymc.Normal('Ptrue', mu=Pstated, tau=dPstated**(-2)) # protein concentration (M)
Ltrue = pymc.Normal('Ltrue', mu=Lstated, tau=dLstated**(-2)) # ligand concentration (M)
Ltrue_control = pymc.Normal('Ltrue_control', mu=Lstated, tau=dLstated**(-2)) # ligand concentration (M)
else:
raise Exception("concentration_priors = '%s' unknown. Must be one of ['lognormal', 'normal']." % concentration_priors)
# Add to model.
model['Ptrue'] = Ptrue
model['Ltrue'] = Ltrue
model['Ltrue_control'] = Ltrue_control
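# Standalone sanity check of the lognormal parameterization above: with
# mu = ln(m**2 / sqrt(s**2 + m**2)) and sigma**2 = ln(1 + (s/m)**2), the
# distribution has mean m and standard deviation s (values hypothetical):

```python
import numpy as np

m, s = 1e-6, 1e-7                    # stated concentration and uncertainty (M)
mu = np.log(m**2 / np.sqrt(s**2 + m**2))
sigma = np.sqrt(np.log(1.0 + (s / m)**2))   # pymc's tau is sigma**-2

rng = np.random.default_rng(0)
draws = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
```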
# extinction coefficient
if use_primary_inner_filter_correction:
if epsilon_ex:
model['epsilon_ex'] = pymc.Lognormal('epsilon_ex', mu=np.log(epsilon_ex**2 / np.sqrt(depsilon_ex**2 + epsilon_ex**2)), tau=np.sqrt(np.log(1.0 + (depsilon_ex/epsilon_ex)**2))**(-2)) # prior is centered on measured extinction coefficient
else:
model['epsilon_ex'] = pymc.Uniform('epsilon_ex', lower=0.0, upper=1000e3, value=70000.0) # extinction coefficient or molar absorptivity for ligand, units of 1/M/cm
if use_secondary_inner_filter_correction:
if epsilon_em:
model['epsilon_em'] = pymc.Lognormal('epsilon_em', mu=np.log(epsilon_em**2 / np.sqrt(depsilon_em**2 + epsilon_em**2)), tau=np.sqrt(np.log(1.0 + (depsilon_em/epsilon_em)**2))**(-2)) # prior is centered on measured extinction coefficient
else:
model['epsilon_em'] = pymc.Uniform('epsilon_em', lower=0.0, upper=1000e3, value=0.0) # extinction coefficient or molar absorptivity for ligand, units of 1/M/cm
# Min and max observed fluorescence.
Fmax = 0.0; Fmin = 1e6;
if top_complex_fluorescence is not None:
Fmax = max(Fmax, top_complex_fluorescence.max()); Fmin = min(Fmin, top_complex_fluorescence.min())
if top_ligand_fluorescence is not None:
Fmax = max(Fmax, top_ligand_fluorescence.max()); Fmin = min(Fmin, top_ligand_fluorescence.min())
if bottom_complex_fluorescence is not None:
Fmax = max(Fmax, bottom_complex_fluorescence.max()); Fmin = min(Fmin, bottom_complex_fluorescence.min())
if bottom_ligand_fluorescence is not None:
Fmax = max(Fmax, bottom_ligand_fluorescence.max()); Fmin = min(Fmin, bottom_ligand_fluorescence.min())
# Compute initial guesses for fluorescence quantum yield quantities.
F_plate_guess = Fmin
F_buffer_guess = Fmin / path_length
F_L_guess = (Fmax - Fmin) / Lstated.max()
    F_P_guess = Fmin / Pstated.min()
F_PL_guess = (Fmax - Fmin) / min(Pstated.max(), Lstated.max())
# Priors on fluorescence intensities of complexes (later divided by a factor of Pstated for scale).
model['F_plate'] = pymc.Uniform('F_plate', lower=0.0, upper=Fmax, value=F_plate_guess) # plate fluorescence
model['F_buffer'] = pymc.Uniform('F_buffer', lower=0.0, upper=Fmax/path_length, value=F_buffer_guess) # buffer fluorescence
model['F_PL'] = pymc.Uniform('F_PL', lower=0.0, upper=2*Fmax/min(Pstated.max(),Lstated.max()), value=F_PL_guess) # complex fluorescence
model['F_P'] = pymc.Uniform('F_P', lower=0.0, upper=2*(Fmax/Pstated).max(), value=F_P_guess) # protein fluorescence
model['F_L'] = pymc.Uniform('F_L', lower=0.0, upper=2*(Fmax/Lstated).max(), value=F_L_guess) # ligand fluorescence
# Unknown experimental measurement error.
if top_fluorescence:
model['log_sigma_top'] = pymc.Uniform('log_sigma_top', lower=-10, upper=np.log(Fmax), value=np.log(5))
model['sigma_top'] = pymc.Lambda('sigma_top', lambda log_sigma=model['log_sigma_top'] : np.exp(log_sigma) )
model['precision_top'] = pymc.Lambda('precision_top', lambda log_sigma=model['log_sigma_top'] : np.exp(-2*log_sigma) )
if bottom_fluorescence:
if top_fluorescence and bottom_fluorescence and link_top_and_bottom_sigma:
# Use the same log_sigma for top and bottom fluorescence
model['log_sigma_bottom'] = pymc.Lambda('log_sigma_bottom', lambda log_sigma_top=model['log_sigma_top'] : log_sigma_top )
else:
model['log_sigma_bottom'] = pymc.Uniform('log_sigma_bottom', lower=-10, upper=np.log(Fmax), value=np.log(5))
model['sigma_bottom'] = pymc.Lambda('sigma_bottom', lambda log_sigma=model['log_sigma_bottom'] : np.exp(log_sigma) )
model['precision_bottom'] = pymc.Lambda('precision_bottom', lambda log_sigma=model['log_sigma_bottom'] : np.exp(-2*log_sigma) )
if top_fluorescence and bottom_fluorescence:
# Gain that attenuates bottom fluorescence relative to top.
# TODO: Replace this with plate absorbance?
log_gain_guess = - np.log((top_complex_fluorescence.max() - top_complex_fluorescence.min()) / (bottom_complex_fluorescence.max() - bottom_complex_fluorescence.min()))
model['log_gain_bottom'] = pymc.Uniform('log_gain_bottom', lower=-6.0, upper=6.0, value=log_gain_guess) # plate material absorbance at emission wavelength
model['gain_bottom'] = pymc.Lambda('gain_bottom', lambda log_gain_bottom=model['log_gain_bottom'] : np.exp(log_gain_bottom) )
elif (not top_fluorescence) and bottom_fluorescence:
model['log_gain_bottom'] = 0.0 # no gain
model['gain_bottom'] = pymc.Lambda('gain_bottom', lambda log_gain_bottom=model['log_gain_bottom'] : np.exp(log_gain_bottom) )
if top_fluorescence:
model['log_sigma_abs'] = pymc.Uniform('log_sigma_abs', lower=-10, upper=0, value=np.log(0.01))
model['sigma_abs'] = pymc.Lambda('sigma_abs', lambda log_sigma=model['log_sigma_abs'] : np.exp(log_sigma) )
model['precision_abs'] = pymc.Lambda('precision_abs', lambda log_sigma=model['log_sigma_abs'] : np.exp(-2*log_sigma) )
# Fluorescence model.
from assaytools.bindingmodels import TwoComponentBindingModel
    if 'epsilon_ex' in model:  # model is a dict, so test key membership rather than hasattr()
epsilon_ex = model['epsilon_ex']
else:
epsilon_ex = 0.0
    if 'epsilon_em' in model:  # model is a dict, so test key membership rather than hasattr()
epsilon_em = model['epsilon_em']
else:
epsilon_em = 0.0
if top_complex_fluorescence is not None:
@pymc.deterministic
def top_complex_fluorescence_model(F_plate=model['F_plate'], F_buffer=model['F_buffer'],
F_PL=model['F_PL'], F_P=model['F_P'], F_L=model['F_L'],
Ptrue=Ptrue, Ltrue=Ltrue, DeltaG=DeltaG,
epsilon_ex=epsilon_ex, epsilon_em=epsilon_em):
[P_i, L_i, PL_i] = TwoComponentBindingModel.equilibrium_concentrations(DeltaG, Ptrue[:], Ltrue[:])
IF_i = inner_filter_effect_attenuation(epsilon_ex, epsilon_em, path_length, L_i, geometry='top')
IF_i_plate = np.exp(-(epsilon_ex+epsilon_em)*path_length*L_i) # inner filter effect applied only to plate
Fmodel_i = IF_i[:]*(F_PL*PL_i + F_L*L_i + F_P*P_i + F_buffer*path_length) + IF_i_plate*F_plate
return Fmodel_i
# Add to model.
model['top_complex_fluorescence_model'] = top_complex_fluorescence_model
model['top_complex_fluorescence'] = pymc.Normal('top_complex_fluorescence',
mu=model['top_complex_fluorescence_model'], tau=model['precision_top'],
size=[N], observed=True, value=top_complex_fluorescence) # observed data
if top_ligand_fluorescence is not None:
@pymc.deterministic
def top_ligand_fluorescence_model(F_plate=model['F_plate'], F_buffer=model['F_buffer'],
F_L=model['F_L'],
Ltrue=Ltrue,
epsilon_ex=epsilon_ex, epsilon_em=epsilon_em):
IF_i = inner_filter_effect_attenuation(epsilon_ex, epsilon_em, path_length, Ltrue, geometry='top')
IF_i_plate = np.exp(-(epsilon_ex+epsilon_em)*path_length*Ltrue) # inner filter effect applied only to plate
Fmodel_i = IF_i[:]*(F_L*Ltrue + F_buffer*path_length) + IF_i_plate*F_plate
return Fmodel_i
# Add to model.
model['top_ligand_fluorescence_model'] = top_ligand_fluorescence_model
model['top_ligand_fluorescence'] = pymc.Normal('top_ligand_fluorescence',
mu=model['top_ligand_fluorescence_model'], tau=model['precision_top'],
size=[N], observed=True, value=top_ligand_fluorescence) # observed data
if bottom_complex_fluorescence is not None:
@pymc.deterministic
def bottom_complex_fluorescence_model(F_plate=model['F_plate'], F_buffer=model['F_buffer'],
F_PL=model['F_PL'], F_P=model['F_P'], F_L=model['F_L'],
Ptrue=Ptrue, Ltrue=Ltrue, DeltaG=DeltaG,
epsilon_ex=epsilon_ex, epsilon_em=epsilon_em,
log_gain_bottom=model['log_gain_bottom']):
[P_i, L_i, PL_i] = TwoComponentBindingModel.equilibrium_concentrations(DeltaG, Ptrue[:], Ltrue[:])
IF_i = inner_filter_effect_attenuation(epsilon_ex, epsilon_em, path_length, L_i, geometry='bottom')
IF_i_plate = np.exp(-epsilon_ex*path_length*L_i) # inner filter effect applied only to plate
Fmodel_i = IF_i[:]*(F_PL*PL_i + F_L*L_i + F_P*P_i + F_buffer*path_length)*np.exp(log_gain_bottom) + IF_i_plate*F_plate
return Fmodel_i
# Add to model.
model['bottom_complex_fluorescence_model'] = bottom_complex_fluorescence_model
model['bottom_complex_fluorescence'] = pymc.Normal('bottom_complex_fluorescence',
mu=model['bottom_complex_fluorescence_model'], tau=model['precision_bottom'],
size=[N], observed=True, value=bottom_complex_fluorescence) # observed data
if bottom_ligand_fluorescence is not None:
@pymc.deterministic
def bottom_ligand_fluorescence_model(F_plate=model['F_plate'], F_buffer=model['F_buffer'],
F_PL=model['F_PL'], F_P=model['F_P'], F_L=model['F_L'],
Ltrue=Ltrue,
epsilon_ex=epsilon_ex, epsilon_em=epsilon_em,
log_gain_bottom=model['log_gain_bottom']):
IF_i = inner_filter_effect_attenuation(epsilon_ex, epsilon_em, path_length, Ltrue, geometry='bottom')
IF_i_plate = np.exp(-epsilon_ex*path_length*Ltrue) # inner filter effect applied only to plate
Fmodel_i = IF_i[:]*(F_L*Ltrue + F_buffer*path_length)*np.exp(log_gain_bottom) + IF_i_plate*F_plate
return Fmodel_i
# Add to model.
model['bottom_ligand_fluorescence_model'] = bottom_ligand_fluorescence_model
model['bottom_ligand_fluorescence'] = pymc.Normal('bottom_ligand_fluorescence',
mu=model['bottom_ligand_fluorescence_model'], tau=model['precision_bottom'],
size=[N], observed=True, value=bottom_ligand_fluorescence) # observed data
if ligand_ex_absorbance is not None:
model['plate_abs_ex'] = pymc.Uniform('plate_abs_ex', lower=0.0, upper=1.0, value=ligand_ex_absorbance.min())
@pymc.deterministic
def ligand_ex_absorbance_model(Ltrue=Ltrue,
epsilon_ex=epsilon_ex,
                                       plate_abs_ex=model['plate_abs_ex']):
Fmodel_i = (1.0 - np.exp(-epsilon_ex*path_length*Ltrue)) + plate_abs_ex
return Fmodel_i
# Add to model.
model['ligand_ex_absorbance_model'] = ligand_ex_absorbance_model
model['ligand_ex_absorbance'] = pymc.Normal('ligand_ex_absorbance',
mu=model['ligand_ex_absorbance_model'], tau=model['precision_abs'],
size=[N], observed=True, value=ligand_ex_absorbance) # observed data
if ligand_em_absorbance is not None:
model['plate_abs_em'] = pymc.Uniform('plate_abs_em', lower=0.0, upper=1.0, value=ligand_em_absorbance.min())
@pymc.deterministic
def ligand_em_absorbance_model(Ltrue=Ltrue,
epsilon_em=model['epsilon_em'],
plate_abs_em=model['plate_abs_em']):
Fmodel_i = (1.0 - np.exp(-epsilon_em*path_length*Ltrue)) + plate_abs_em
return Fmodel_i
# Add to model.
model['ligand_em_absorbance_model'] = ligand_em_absorbance_model
model['ligand_em_absorbance'] = pymc.Normal('ligand_em_absorbance',
mu=model['ligand_em_absorbance_model'], tau=model['precision_abs'],
size=[N], observed=True, value=ligand_em_absorbance) # observed data
# Promote this to a full-fledged PyMC model.
pymc_model = pymc.Model(model)
# Return the pymc model
return pymc_model
def map_fit(pymc_model):
"""
Find the maximum a posteriori (MAP) fit.
Parameters
----------
pymc_model : pymc model
The pymc model to sample.
Returns
-------
map : pymc.MAP
The MAP fit.
"""
map = pymc.MAP(pymc_model)
ncycles = 50
# DEBUG
ncycles = 5
for cycle in range(ncycles):
if (cycle+1)%5==0: print('MAP fitting cycle %d/%d' % (cycle+1, ncycles))
map.fit()
return map
def run_mcmc(pymc_model):
"""
Sample the model with pymc using sensible defaults.
Parameters
----------
pymc_model : pymc model
The pymc model to sample.
Returns
-------
mcmc : pymc.MCMC
The MCMC samples.
"""
# Sample the model with pymc
mcmc = pymc.MCMC(pymc_model, db='ram', name='Sampler', verbose=True)
nthin = 20
nburn = nthin*10000
niter = nthin*10000
# DEBUG
nburn = nthin*1000
niter = nthin*1000
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'DeltaG'), proposal_sd=1.0, proposal_distribution='Normal')
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'F_PL'), proposal_sd=10.0, proposal_distribution='Normal')
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'F_P'), proposal_sd=10.0, proposal_distribution='Normal')
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'F_L'), proposal_sd=10.0, proposal_distribution='Normal')
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'F_plate'), proposal_sd=10.0, proposal_distribution='Normal')
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'F_buffer'), proposal_sd=10.0, proposal_distribution='Normal')
if hasattr(pymc_model, 'epsilon_ex'):
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'epsilon_ex'), proposal_sd=10000.0, proposal_distribution='Normal')
if hasattr(pymc_model, 'epsilon_em'):
mcmc.use_step_method(pymc.Metropolis, getattr(pymc_model, 'epsilon_em'), proposal_sd=10000.0, proposal_distribution='Normal')
mcmc.sample(iter=(nburn+niter), burn=nburn, thin=nthin, progress_bar=False, tune_throughout=False)
return mcmc
def show_summary(pymc_model, mcmc, map):
"""
Show summary statistics of MCMC and MAP estimates.
Parameters
----------
pymc_model : pymc model
The pymc model to sample.
map : pymc.MAP
The MAP fit.
mcmc : pymc.MCMC
MCMC samples
TODO
----
* Automatically determine appropriate number of decimal places from statistical uncertainty.
* Automatically adjust concentration units (e.g. pM, nM, uM) depending on estimated affinity.
"""
# Compute summary statistics.
DeltaG = map.DeltaG.value
dDeltaG = mcmc.DeltaG.trace().std()
Kd = np.exp(map.DeltaG.value)
dKd = np.exp(mcmc.DeltaG.trace()).std()
    print("DeltaG = %.1f +- %.1f kT" % (DeltaG, dDeltaG))
    if (Kd < 1e-12):
        print("Kd = %.1f fM +- %.1f fM" % (Kd/1e-15, dKd/1e-15))
    elif (Kd < 1e-9):
        print("Kd = %.1f pM +- %.1f pM" % (Kd/1e-12, dKd/1e-12))
    elif (Kd < 1e-6):
        print("Kd = %.1f nM +- %.1f nM" % (Kd/1e-9, dKd/1e-9))
    elif (Kd < 1e-3):
        print("Kd = %.1f uM +- %.1f uM" % (Kd/1e-6, dKd/1e-6))
    elif (Kd < 1):
        print("Kd = %.1f mM +- %.1f mM" % (Kd/1e-3, dKd/1e-3))
    else:
        print("Kd = %.3e M +- %.3e M" % (Kd, dKd))
| lgpl-2.1 |
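The concentration priors above moment-match a lognormal to each stated concentration and its uncertainty. A minimal pure-Python sketch of that parameterization (the 1 uM +/- 10% numbers are hypothetical, and `pymc` itself is not needed for the check):

```python
import math

def lognormal_params(mean, std):
    """Moment-match a lognormal to a stated mean and standard deviation.

    Mirrors the Ptrue/Ltrue priors above: if X ~ Lognormal(mu, sigma^2)
    with E[X] = mean and Std[X] = std, then
        mu    = log(mean^2 / sqrt(std^2 + mean^2))
        sigma = sqrt(log(1 + (std/mean)^2));  tau = sigma**(-2)
    """
    mu = math.log(mean**2 / math.sqrt(std**2 + mean**2))
    sigma = math.sqrt(math.log(1.0 + (std / mean)**2))
    return mu, sigma**(-2)  # (mu, tau), the arguments pymc.Lognormal takes

# Hypothetical stated ligand concentration: 1 uM +/- 10%
mu, tau = lognormal_params(1e-6, 1e-7)
sigma = tau**(-0.5)
# Round trip: recover the stated mean and std from (mu, sigma)
recovered_mean = math.exp(mu + 0.5 * sigma**2)
recovered_std = recovered_mean * math.sqrt(math.exp(sigma**2) - 1.0)
```

The round trip recovers the stated mean and standard deviation exactly (up to floating-point error), which is the point of this parameterization.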
awainer/7539 | aplicaciones_informaticas/aplicaciones_informaticas/settings.py | 1 | 3741 | """
Django settings for aplicaciones_informaticas project.
Generated by 'django-admin startproject' using Django 1.10.3.
For more information on this file, see
https://docs.djangoproject.com/en/1.10/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.10/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'r@@6sgb_j9v#=x!u3!j%1jvfs6c02)#948k^sffb-)0i1by4zx'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['192.168.5.20', 'ariwainer.com.ar', 'localhost']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'geoposition',
'backend',
'rest_framework',
'corsheaders'
]
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.AllowAny',),
'PAGE_SIZE': 10
}
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'aplicaciones_informaticas.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'aplicaciones_informaticas.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.10/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.10/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.10/howto/static-files/
STATIC_URL = '/static/'
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
import geo_settings
GEOPOSITION_GOOGLE_MAPS_API_KEY = geo_settings.api_key
CORS_ORIGIN_ALLOW_ALL = True
CORS_ORIGIN_WHITELIST = ('localhost:3000',)
| unlicense |
epage/DialCentral | dialcentral/util/overloading.py | 10 | 7887 | #!/usr/bin/env python
import new
# Make the environment more like Python 3.0
__metaclass__ = type
from itertools import izip as zip
import textwrap
import inspect
__all__ = [
"AnyType",
"overloaded"
]
AnyType = object
class overloaded:
"""
Dynamically overloaded functions.
This is an implementation of (dynamically, or run-time) overloaded
functions; also known as generic functions or multi-methods.
The dispatch algorithm uses the types of all argument for dispatch,
similar to (compile-time) overloaded functions or methods in C++ and
Java.
Most of the complexity in the algorithm comes from the need to support
subclasses in call signatures. For example, if an function is
registered for a signature (T1, T2), then a call with a signature (S1,
S2) is acceptable, assuming that S1 is a subclass of T1, S2 a subclass
of T2, and there are no other more specific matches (see below).
If there are multiple matches and one of those doesn't *dominate* all
others, the match is deemed ambiguous and an exception is raised. A
subtlety here: if, after removing the dominated matches, there are
still multiple matches left, but they all map to the same function,
then the match is not deemed ambiguous and that function is used.
Read the method find_func() below for details.
@note Python 2.5 is required due to the use of predicates any() and all().
@note only supports positional arguments
@author http://www.artima.com/weblogs/viewpost.jsp?thread=155514
>>> import misc
>>> misc.validate_decorator (overloaded)
>>>
>>>
>>>
>>>
>>> #################
>>> #Basics, with reusing names and without
>>> @overloaded
... def foo(x):
... "prints x"
... print x
...
>>> @foo.register(int)
... def foo(x):
... "prints the hex representation of x"
... print hex(x)
...
>>> from types import DictType
>>> @foo.register(DictType)
... def foo_dict(x):
... "prints the keys of x"
... print [k for k in x.iterkeys()]
...
>>> #combines all of the doc strings to help keep track of the specializations
>>> foo.__doc__ # doctest: +ELLIPSIS
"prints x\\n\\n...overloading.foo (<type 'int'>):\\n\\tprints the hex representation of x\\n\\n...overloading.foo_dict (<type 'dict'>):\\n\\tprints the keys of x"
>>> foo ("text")
text
>>> foo (10) #calling the specialized foo
0xa
>>> foo ({3:5, 6:7}) #calling the specialization foo_dict
[3, 6]
>>> foo_dict ({3:5, 6:7}) #with using a unique name, you still have the option of calling the function directly
[3, 6]
>>>
>>>
>>>
>>>
>>> #################
>>> #Multiple arguments, accessing the default, and function finding
>>> @overloaded
... def two_arg (x, y):
... print x,y
...
>>> @two_arg.register(int, int)
... def two_arg_int_int (x, y):
... print hex(x), hex(y)
...
>>> @two_arg.register(float, int)
... def two_arg_float_int (x, y):
... print x, hex(y)
...
>>> @two_arg.register(int, float)
... def two_arg_int_float (x, y):
... print hex(x), y
...
>>> two_arg.__doc__ # doctest: +ELLIPSIS
"...overloading.two_arg_int_int (<type 'int'>, <type 'int'>):\\n\\n...overloading.two_arg_float_int (<type 'float'>, <type 'int'>):\\n\\n...overloading.two_arg_int_float (<type 'int'>, <type 'float'>):"
>>> two_arg(9, 10)
0x9 0xa
>>> two_arg(9.0, 10)
9.0 0xa
>>> two_arg(15, 16.0)
0xf 16.0
>>> two_arg.default_func(9, 10)
9 10
>>> two_arg.find_func ((int, float)) == two_arg_int_float
True
>>> (int, float) in two_arg
True
>>> (str, int) in two_arg
False
>>>
>>>
>>>
>>> #################
>>> #wildcard
>>> @two_arg.register(AnyType, str)
... def two_arg_any_str (x, y):
... print x, y.lower()
...
>>> two_arg("Hello", "World")
Hello world
>>> two_arg(500, "World")
500 world
"""
def __init__(self, default_func):
# Decorator to declare new overloaded function.
self.registry = {}
self.cache = {}
self.default_func = default_func
self.__name__ = self.default_func.__name__
self.__doc__ = self.default_func.__doc__
self.__dict__.update (self.default_func.__dict__)
def __get__(self, obj, type=None):
if obj is None:
return self
return new.instancemethod(self, obj)
def register(self, *types):
"""
Decorator to register an implementation for a specific set of types.
.register(t1, t2)(f) is equivalent to .register_func((t1, t2), f).
"""
def helper(func):
self.register_func(types, func)
originalDoc = self.__doc__ if self.__doc__ is not None else ""
typeNames = ", ".join ([str(type) for type in types])
typeNames = "".join ([func.__module__+".", func.__name__, " (", typeNames, "):"])
overloadedDoc = ""
if func.__doc__ is not None:
overloadedDoc = textwrap.fill (func.__doc__, width=60, initial_indent="\t", subsequent_indent="\t")
self.__doc__ = "\n".join ([originalDoc, "", typeNames, overloadedDoc]).strip()
new_func = func
#Masking the function, so we want to take on its traits
if func.__name__ == self.__name__:
self.__dict__.update (func.__dict__)
new_func = self
return new_func
return helper
def register_func(self, types, func):
"""Helper to register an implementation."""
self.registry[tuple(types)] = func
self.cache = {} # Clear the cache (later we can optimize this).
def __call__(self, *args):
"""Call the overloaded function."""
types = tuple(map(type, args))
func = self.cache.get(types)
if func is None:
self.cache[types] = func = self.find_func(types)
return func(*args)
def __contains__ (self, types):
return self.find_func(types) is not self.default_func
def find_func(self, types):
"""Find the appropriate overloaded function; don't call it.
@note This won't work for old-style classes or classes without __mro__
"""
func = self.registry.get(types)
if func is not None:
# Easy case -- direct hit in registry.
return func
# Phillip Eby suggests to use issubclass() instead of __mro__.
# There are advantages and disadvantages.
# I can't help myself -- this is going to be intense functional code.
# Find all possible candidate signatures.
mros = tuple(inspect.getmro(t) for t in types)
n = len(mros)
candidates = [sig for sig in self.registry
if len(sig) == n and
all(t in mro for t, mro in zip(sig, mros))]
if not candidates:
# No match at all -- use the default function.
return self.default_func
elif len(candidates) == 1:
# Unique match -- that's an easy case.
return self.registry[candidates[0]]
# More than one match -- weed out the subordinate ones.
def dominates(dom, sub,
orders=tuple(dict((t, i) for i, t in enumerate(mro))
for mro in mros)):
# Predicate to decide whether dom strictly dominates sub.
# Strict domination is defined as domination without equality.
# The arguments dom and sub are type tuples of equal length.
# The orders argument is a precomputed auxiliary data structure
# giving dicts of ordering information corresponding to the
# positions in the type tuples.
# A type d dominates a type s iff order[d] <= order[s].
# A type tuple (d1, d2, ...) dominates a type tuple of equal length
# (s1, s2, ...) iff d1 dominates s1, d2 dominates s2, etc.
if dom is sub:
return False
return all(order[d] <= order[s] for d, s, order in zip(dom, sub, orders))
# I suppose I could inline dominates() but it wouldn't get any clearer.
candidates = [cand
for cand in candidates
if not any(dominates(dom, cand) for dom in candidates)]
if len(candidates) == 1:
# There's exactly one candidate left.
return self.registry[candidates[0]]
# Perhaps these multiple candidates all have the same implementation?
funcs = set(self.registry[cand] for cand in candidates)
if len(funcs) == 1:
return funcs.pop()
# No, the situation is irreducibly ambiguous.
        raise TypeError("ambiguous call; types=%r; candidates=%r" %
(types, candidates))
| lgpl-2.1 |
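The `find_func` dispatch above hinges on MRO-based candidate selection and domination. A self-contained Python 3 sketch of the same idea, independent of the Python 2 module (the names here are illustrative, not part of the original API):

```python
import inspect

def find_most_specific(registry, types):
    """Pick the most specific registered signature for a call, or None if ambiguous.

    registry maps type tuples to implementations (strings here, for brevity).
    """
    # Candidates: signatures where each registered type appears in the
    # corresponding call argument's MRO (i.e. is that argument's class or a base).
    mros = tuple(inspect.getmro(t) for t in types)
    cands = [sig for sig in registry
             if len(sig) == len(types)
             and all(t in mro for t, mro in zip(sig, mros))]
    if not cands:
        return None
    # Position in the MRO orders specificity: smaller index = more specific.
    orders = tuple({t: i for i, t in enumerate(mro)} for mro in mros)
    def dominates(dom, sub):
        return dom is not sub and all(
            o[d] <= o[s] for d, s, o in zip(dom, sub, orders))
    # Weed out dominated candidates, as in find_func() above.
    cands = [c for c in cands
             if not any(dominates(d, c) for d in cands)]
    return registry[cands[0]] if len(cands) == 1 else None

registry = {(object,): "generic", (int,): "int", (bool,): "bool"}
```

Because `bool` subclasses `int`, a `(bool,)` call matches all three signatures, but the `(bool,)` entry dominates the others and wins.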
biotrump/xbmc | lib/gtest/test/gtest_color_test.py | 3259 | 4911 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Verifies that Google Test correctly determines whether to use colors."""
__author__ = 'wan@google.com (Zhanyong Wan)'
import os
import gtest_test_utils
IS_WINDOWS = os.name == 'nt'
COLOR_ENV_VAR = 'GTEST_COLOR'
COLOR_FLAG = 'gtest_color'
COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_color_test_')
def SetEnvVar(env_var, value):
"""Sets the env variable to 'value'; unsets it when 'value' is None."""
if value is not None:
os.environ[env_var] = value
elif env_var in os.environ:
del os.environ[env_var]
def UsesColor(term, color_env_var, color_flag):
"""Runs gtest_color_test_ and returns its exit code."""
SetEnvVar('TERM', term)
SetEnvVar(COLOR_ENV_VAR, color_env_var)
if color_flag is None:
args = []
else:
args = ['--%s=%s' % (COLOR_FLAG, color_flag)]
p = gtest_test_utils.Subprocess([COMMAND] + args)
return not p.exited or p.exit_code
class GTestColorTest(gtest_test_utils.TestCase):
def testNoEnvVarNoFlag(self):
"""Tests the case when there's neither GTEST_COLOR nor --gtest_color."""
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', None, None))
self.assert_(not UsesColor('emacs', None, None))
self.assert_(not UsesColor('xterm-mono', None, None))
self.assert_(not UsesColor('unknown', None, None))
self.assert_(not UsesColor(None, None, None))
self.assert_(UsesColor('linux', None, None))
self.assert_(UsesColor('cygwin', None, None))
self.assert_(UsesColor('xterm', None, None))
self.assert_(UsesColor('xterm-color', None, None))
self.assert_(UsesColor('xterm-256color', None, None))
def testFlagOnly(self):
"""Tests the case when there's --gtest_color but not GTEST_COLOR."""
self.assert_(not UsesColor('dumb', None, 'no'))
self.assert_(not UsesColor('xterm-color', None, 'no'))
if not IS_WINDOWS:
self.assert_(not UsesColor('emacs', None, 'auto'))
self.assert_(UsesColor('xterm', None, 'auto'))
self.assert_(UsesColor('dumb', None, 'yes'))
self.assert_(UsesColor('xterm', None, 'yes'))
def testEnvVarOnly(self):
"""Tests the case when there's GTEST_COLOR but not --gtest_color."""
self.assert_(not UsesColor('dumb', 'no', None))
self.assert_(not UsesColor('xterm-color', 'no', None))
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', 'auto', None))
self.assert_(UsesColor('xterm-color', 'auto', None))
self.assert_(UsesColor('dumb', 'yes', None))
self.assert_(UsesColor('xterm-color', 'yes', None))
def testEnvVarAndFlag(self):
"""Tests the case when there are both GTEST_COLOR and --gtest_color."""
self.assert_(not UsesColor('xterm-color', 'no', 'no'))
self.assert_(UsesColor('dumb', 'no', 'yes'))
self.assert_(UsesColor('xterm-color', 'no', 'auto'))
def testAliasesOfYesAndNo(self):
"""Tests using aliases in specifying --gtest_color."""
self.assert_(UsesColor('dumb', None, 'true'))
self.assert_(UsesColor('dumb', None, 'YES'))
self.assert_(UsesColor('dumb', None, 'T'))
self.assert_(UsesColor('dumb', None, '1'))
self.assert_(not UsesColor('xterm', None, 'f'))
self.assert_(not UsesColor('xterm', None, 'false'))
self.assert_(not UsesColor('xterm', None, '0'))
self.assert_(not UsesColor('xterm', None, 'unknown'))
if __name__ == '__main__':
gtest_test_utils.Main()
| gpl-2.0 |
kevinpt/ripyl | ripyl/util/bitops.py | 1 | 1559 | #!/usr/bin/python
# -*- coding: utf-8 -*-
'''Bit-wise operations
'''
# Copyright © 2013 Kevin Thibedeau
# This file is part of Ripyl.
# Ripyl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
# Ripyl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with Ripyl. If not, see <http://www.gnu.org/licenses/>.
def split_bits(n, num_bits):
'''Convert integer to a list of bits (MSB-first)
n (int)
The number to convert to bits.
num_bits (int)
The number of bits in the result.
Returns a list of ints representing each bit in n.
'''
bits = [0] * num_bits
for i in xrange(num_bits-1, -1, -1):
bits[i] = n & 0x01
n >>= 1
return bits
def join_bits(bits):
'''Convert an array of bits (MSB first) to an integer word
bits (sequence of ints)
The bits to be merged into an integer
Returns an int representing the bits contained in the bits parameter.
'''
word = 0
for b in bits:
word = (word << 1) | b
return word
| lgpl-3.0 |
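For reference, the round-trip property of the two helpers above (`join_bits(split_bits(n, k)) == n`) can be checked with a direct Python 3 port, using `range` in place of `xrange`:

```python
def split_bits(n, num_bits):
    # MSB-first bit list, as in the module above
    bits = [0] * num_bits
    for i in range(num_bits - 1, -1, -1):
        bits[i] = n & 0x01
        n >>= 1
    return bits

def join_bits(bits):
    # Fold an MSB-first bit list back into an integer
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word
```

Any value that fits in `num_bits` survives the round trip unchanged; wider values are silently truncated to the low `num_bits` bits by `split_bits`.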
onshape-public/onshape-clients | python/onshape_client/oas/models/bt_bill_of_materials_table1073_all_of.py | 1 | 5229 | # coding: utf-8
"""
Onshape REST API
The Onshape REST API consumed by all clients. # noqa: E501
The version of the OpenAPI document: 1.113
Contact: api-support@onshape.zendesk.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
import sys # noqa: F401
import six # noqa: F401
import nulltype # noqa: F401
from onshape_client.oas.model_utils import ( # noqa: F401
ModelComposed,
ModelNormal,
ModelSimple,
date,
datetime,
file_type,
int,
none_type,
str,
validate_get_composed_info,
)
class BTBillOfMaterialsTable1073AllOf(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {}
validations = {}
additional_properties_type = None
@staticmethod
def openapi_types():
"""
This must be a class method so a model may have properties that are
of type self, this ensures that we don't create a cyclic import
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
"bt_type": (str,), # noqa: E501
"failed_metadata_representative_occurrences": ([str],), # noqa: E501
"indented": (bool,), # noqa: E501
"showing_excluded": (bool,), # noqa: E501
}
@staticmethod
def discriminator():
return None
attribute_map = {
"bt_type": "btType", # noqa: E501
"failed_metadata_representative_occurrences": "failedMetadataRepresentativeOccurrences", # noqa: E501
"indented": "indented", # noqa: E501
"showing_excluded": "showingExcluded", # noqa: E501
}
@staticmethod
def _composed_schemas():
return None
required_properties = set(
[
"_data_store",
"_check_type",
"_from_server",
"_path_to_item",
"_configuration",
]
)
def __init__(
self,
_check_type=True,
_from_server=False,
_path_to_item=(),
_configuration=None,
**kwargs
): # noqa: E501
"""bt_bill_of_materials_table1073_all_of.BTBillOfMaterialsTable1073AllOf - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_from_server (bool): True if the data is from the server
False if the data is from the client (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
bt_type (str): [optional] # noqa: E501
failed_metadata_representative_occurrences ([str]): [optional] # noqa: E501
indented (bool): [optional] # noqa: E501
showing_excluded (bool): [optional] # noqa: E501
"""
self._data_store = {}
self._check_type = _check_type
self._from_server = _from_server
self._path_to_item = _path_to_item
self._configuration = _configuration
for var_name, var_value in six.iteritems(kwargs):
if (
var_name not in self.attribute_map
and self._configuration is not None
and self._configuration.discard_unknown_keys
and self.additional_properties_type is None
):
# discard variable.
continue
setattr(self, var_name, var_value)
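The `attribute_map` above pairs each Python attribute name with its JSON key. A minimal, hypothetical sketch of using such a map to serialize a model's data store (the `to_json_dict` helper and the sample `data` dict are illustrative, not part of the generated class):

```python
# Map Python attribute names to JSON keys, mirroring attribute_map above.
attribute_map = {
    "bt_type": "btType",
    "indented": "indented",
    "showing_excluded": "showingExcluded",
}

def to_json_dict(data_store, attribute_map):
    """Rename snake_case attributes to their camelCase JSON keys."""
    return {attribute_map[k]: v for k, v in data_store.items() if k in attribute_map}

data = {"bt_type": "BTBillOfMaterialsTable-1073", "indented": True}
print(to_json_dict(data, attribute_map))
# → {'btType': 'BTBillOfMaterialsTable-1073', 'indented': True}
```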
| mit |
jscn/django | django/http/response.py | 15 | 18100 | from __future__ import unicode_literals
import datetime
import json
import re
import sys
import time
from email.header import Header
from django.conf import settings
from django.core import signals, signing
from django.core.exceptions import DisallowedRedirect
from django.core.serializers.json import DjangoJSONEncoder
from django.http.cookie import SimpleCookie
from django.utils import six, timezone
from django.utils.encoding import (
force_bytes, force_str, force_text, iri_to_uri,
)
from django.utils.http import cookie_date
from django.utils.six.moves import map
from django.utils.six.moves.http_client import responses
from django.utils.six.moves.urllib.parse import urlparse
_charset_from_content_type_re = re.compile(r';\s*charset=(?P<charset>[^\s;]+)', re.I)
class BadHeaderError(ValueError):
pass
class HttpResponseBase(six.Iterator):
"""
An HTTP response base class with dictionary-accessed headers.
This class doesn't handle content. It should not be used directly.
Use the HttpResponse and StreamingHttpResponse subclasses instead.
"""
status_code = 200
def __init__(self, content_type=None, status=None, reason=None, charset=None):
# _headers is a mapping of the lower-case name to the original case of
# the header (required for working with legacy systems) and the header
# value. Both the name of the header and its value are ASCII strings.
self._headers = {}
self._closable_objects = []
# This parameter is set by the handler. It's necessary to preserve the
# historical behavior of request_finished.
self._handler_class = None
self.cookies = SimpleCookie()
self.closed = False
if status is not None:
self.status_code = status
self._reason_phrase = reason
self._charset = charset
if content_type is None:
content_type = '%s; charset=%s' % (settings.DEFAULT_CONTENT_TYPE,
self.charset)
self['Content-Type'] = content_type
@property
def reason_phrase(self):
if self._reason_phrase is not None:
return self._reason_phrase
# Leave self._reason_phrase unset in order to use the default
# reason phrase for status code.
return responses.get(self.status_code, 'Unknown Status Code')
@reason_phrase.setter
def reason_phrase(self, value):
self._reason_phrase = value
@property
def charset(self):
if self._charset is not None:
return self._charset
content_type = self.get('Content-Type', '')
matched = _charset_from_content_type_re.search(content_type)
if matched:
# Extract the charset and strip its double quotes
return matched.group('charset').replace('"', '')
return settings.DEFAULT_CHARSET
@charset.setter
def charset(self, value):
self._charset = value
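The `_charset_from_content_type_re` pattern driving the `charset` property can be exercised on its own; a small self-contained sketch (the `charset_of` helper name is an invention for illustration):

```python
import re

# Same pattern as _charset_from_content_type_re above.
pattern = re.compile(r';\s*charset=(?P<charset>[^\s;]+)', re.I)

def charset_of(content_type, default='utf-8'):
    """Extract the charset parameter, stripping double quotes, else default."""
    matched = pattern.search(content_type)
    if matched:
        return matched.group('charset').replace('"', '')
    return default

print(charset_of('text/html; charset="ISO-8859-1"'))  # ISO-8859-1
print(charset_of('application/json'))                 # utf-8
```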
def serialize_headers(self):
"""HTTP headers as a bytestring."""
def to_bytes(val, encoding):
return val if isinstance(val, bytes) else val.encode(encoding)
headers = [
(b': '.join([to_bytes(key, 'ascii'), to_bytes(value, 'latin-1')]))
for key, value in self._headers.values()
]
return b'\r\n'.join(headers)
if six.PY3:
__bytes__ = serialize_headers
else:
__str__ = serialize_headers
def _convert_to_charset(self, value, charset, mime_encode=False):
"""Converts headers key/value to ascii/latin-1 native strings.
`charset` must be 'ascii' or 'latin-1'. If `mime_encode` is True and
`value` can't be represented in the given charset, MIME-encoding
is applied.
"""
if not isinstance(value, (bytes, six.text_type)):
value = str(value)
if ((isinstance(value, bytes) and (b'\n' in value or b'\r' in value)) or
isinstance(value, six.text_type) and ('\n' in value or '\r' in value)):
raise BadHeaderError("Header values can't contain newlines (got %r)" % value)
try:
if six.PY3:
if isinstance(value, str):
# Ensure string is valid in given charset
value.encode(charset)
else:
# Convert bytestring using given charset
value = value.decode(charset)
else:
if isinstance(value, str):
# Ensure string is valid in given charset
value.decode(charset)
else:
# Convert unicode string to given charset
value = value.encode(charset)
except UnicodeError as e:
if mime_encode:
# Wrapping in str() is a workaround for #12422 under Python 2.
value = str(Header(value, 'utf-8', maxlinelen=sys.maxsize).encode())
else:
e.reason += ', HTTP response headers must be in %s format' % charset
raise
return value
def __setitem__(self, header, value):
header = self._convert_to_charset(header, 'ascii')
value = self._convert_to_charset(value, 'latin-1', mime_encode=True)
self._headers[header.lower()] = (header, value)
def __delitem__(self, header):
try:
del self._headers[header.lower()]
except KeyError:
pass
def __getitem__(self, header):
return self._headers[header.lower()][1]
def has_header(self, header):
"""Case-insensitive check for a header."""
return header.lower() in self._headers
__contains__ = has_header
def items(self):
return self._headers.values()
def get(self, header, alternate=None):
return self._headers.get(header.lower(), (None, alternate))[1]
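`_headers` stores `lowercase name -> (original name, value)`, so lookups are case-insensitive while the original header casing is preserved. A stripped-down, self-contained sketch of the same idea (class name invented):

```python
class Headers:
    """Case-insensitive header store preserving original casing."""
    def __init__(self):
        self._headers = {}
    def __setitem__(self, header, value):
        self._headers[header.lower()] = (header, value)
    def __getitem__(self, header):
        return self._headers[header.lower()][1]
    def __contains__(self, header):
        return header.lower() in self._headers
    def items(self):
        return list(self._headers.values())

h = Headers()
h['Content-Type'] = 'text/html'
print(h['content-type'])    # text/html
print('CONTENT-TYPE' in h)  # True
```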
def set_cookie(self, key, value='', max_age=None, expires=None, path='/',
domain=None, secure=False, httponly=False):
"""
Sets a cookie.
``expires`` can be:
- a string in the correct format,
- a naive ``datetime.datetime`` object in UTC,
- an aware ``datetime.datetime`` object in any time zone.
If it is a ``datetime.datetime`` object then ``max_age`` will be calculated.
"""
value = force_str(value)
self.cookies[key] = value
if expires is not None:
if isinstance(expires, datetime.datetime):
if timezone.is_aware(expires):
expires = timezone.make_naive(expires, timezone.utc)
delta = expires - expires.utcnow()
# Add one second so the date matches exactly (a fraction of
# time gets lost between converting to a timedelta and
# then the date string).
delta = delta + datetime.timedelta(seconds=1)
# Just set max_age - the max_age logic will set expires.
expires = None
max_age = max(0, delta.days * 86400 + delta.seconds)
else:
self.cookies[key]['expires'] = expires
else:
self.cookies[key]['expires'] = ''
if max_age is not None:
self.cookies[key]['max-age'] = max_age
# IE requires expires, so set it if hasn't been already.
if not expires:
self.cookies[key]['expires'] = cookie_date(time.time() +
max_age)
if path is not None:
self.cookies[key]['path'] = path
if domain is not None:
self.cookies[key]['domain'] = domain
if secure:
self.cookies[key]['secure'] = True
if httponly:
self.cookies[key]['httponly'] = True
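When `expires` is a `datetime`, `set_cookie` derives `max_age` from the delta to the current UTC time, padded by one second. The core arithmetic in isolation (the helper name and the explicit `now` parameter are illustrative additions):

```python
import datetime

def max_age_from_expires(expires, now=None):
    """Compute a max-age in seconds from a naive UTC expiry datetime."""
    now = now or datetime.datetime.utcnow()
    delta = expires - now
    # Add one second so the date matches exactly (a fraction of time
    # gets lost between converting to a timedelta and the date string).
    delta = delta + datetime.timedelta(seconds=1)
    return max(0, delta.days * 86400 + delta.seconds)

now = datetime.datetime(2015, 1, 1, 12, 0, 0)
expires = now + datetime.timedelta(hours=1)
print(max_age_from_expires(expires, now=now))  # 3601
```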
def setdefault(self, key, value):
"""Sets a header unless it has already been set."""
if key not in self:
self[key] = value
def set_signed_cookie(self, key, value, salt='', **kwargs):
value = signing.get_cookie_signer(salt=key + salt).sign(value)
return self.set_cookie(key, value, **kwargs)
def delete_cookie(self, key, path='/', domain=None):
self.set_cookie(key, max_age=0, path=path, domain=domain,
expires='Thu, 01-Jan-1970 00:00:00 GMT')
# Common methods used by subclasses
def make_bytes(self, value):
"""Turn a value into a bytestring encoded in the output charset."""
# Per PEP 3333, this response body must be bytes. To avoid returning
# an instance of a subclass, this function returns `bytes(value)`.
# This doesn't make a copy when `value` already contains bytes.
# Handle string types -- we can't rely on force_bytes here because:
# - under Python 3 it attempts str conversion first
# - when self._charset != 'utf-8' it re-encodes the content
if isinstance(value, bytes):
return bytes(value)
if isinstance(value, six.text_type):
return bytes(value.encode(self.charset))
# Handle non-string types (#16494)
return force_bytes(value, self.charset)
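`make_bytes` normalizes any value to a bytestring in the response charset. A Python 3-only sketch of the three branches, without the `six`/`force_bytes` shims (assumptions: standalone function, not the method above):

```python
def make_bytes(value, charset='utf-8'):
    """Turn a value into a bytestring encoded in the given charset."""
    if isinstance(value, bytes):
        return bytes(value)            # no copy when already bytes
    if isinstance(value, str):
        return value.encode(charset)   # encode text in the response charset
    return str(value).encode(charset)  # non-string types, e.g. integers

print(make_bytes(b'raw'))             # b'raw'
print(make_bytes('café', 'latin-1'))  # b'caf\xe9'
print(make_bytes(42))                 # b'42'
```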
# These methods partially implement the file-like object interface.
# See http://docs.python.org/lib/bltin-file-objects.html
# The WSGI server must call this method upon completion of the request.
# See http://blog.dscpl.com.au/2012/10/obligations-for-calling-close-on.html
def close(self):
for closable in self._closable_objects:
try:
closable.close()
except Exception:
pass
self.closed = True
signals.request_finished.send(sender=self._handler_class)
def write(self, content):
raise IOError("This %s instance is not writable" % self.__class__.__name__)
def flush(self):
pass
def tell(self):
raise IOError("This %s instance cannot tell its position" % self.__class__.__name__)
# These methods partially implement a stream-like object interface.
# See https://docs.python.org/library/io.html#io.IOBase
def readable(self):
return False
def seekable(self):
return False
def writable(self):
return False
def writelines(self, lines):
raise IOError("This %s instance is not writable" % self.__class__.__name__)
class HttpResponse(HttpResponseBase):
"""
An HTTP response class with a string as content.
    This content can be read, appended to, or replaced.
"""
streaming = False
def __init__(self, content=b'', *args, **kwargs):
super(HttpResponse, self).__init__(*args, **kwargs)
# Content is a bytestring. See the `content` property methods.
self.content = content
def __repr__(self):
return '<%(cls)s status_code=%(status_code)d, "%(content_type)s">' % {
'cls': self.__class__.__name__,
'status_code': self.status_code,
'content_type': self['Content-Type'],
}
def serialize(self):
"""Full HTTP message, including headers, as a bytestring."""
return self.serialize_headers() + b'\r\n\r\n' + self.content
if six.PY3:
__bytes__ = serialize
else:
__str__ = serialize
@property
def content(self):
return b''.join(self._container)
@content.setter
def content(self, value):
# Consume iterators upon assignment to allow repeated iteration.
if hasattr(value, '__iter__') and not isinstance(value, (bytes, six.string_types)):
content = b''.join(self.make_bytes(chunk) for chunk in value)
if hasattr(value, 'close'):
try:
value.close()
except Exception:
pass
else:
content = self.make_bytes(value)
# Create a list of properly encoded bytestrings to support write().
self._container = [content]
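The setter above consumes any non-string iterable once and stores a single joined bytestring, which is what makes repeated iteration and `write()` safe afterwards. The same consumption logic in a self-contained sketch (helper name invented, Python 3 only):

```python
def consume_content(value, charset='utf-8'):
    """Join an iterable of chunks into one bytestring, like the setter above."""
    to_bytes = lambda c: c if isinstance(c, bytes) else str(c).encode(charset)
    if hasattr(value, '__iter__') and not isinstance(value, (bytes, str)):
        # Consume the iterator exactly once.
        return b''.join(to_bytes(chunk) for chunk in value)
    return to_bytes(value)

gen = (word for word in ['a', 'b', 'c'])  # a one-shot generator
print(consume_content(gen))   # b'abc'
print(consume_content('xy'))  # b'xy'
```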
def __iter__(self):
return iter(self._container)
def write(self, content):
self._container.append(self.make_bytes(content))
def tell(self):
return len(self.content)
def getvalue(self):
return self.content
def writable(self):
return True
def writelines(self, lines):
for line in lines:
self.write(line)
class StreamingHttpResponse(HttpResponseBase):
"""
A streaming HTTP response class with an iterator as content.
This should only be iterated once, when the response is streamed to the
client. However, it can be appended to or replaced with a new iterator
that wraps the original content (or yields entirely new content).
"""
streaming = True
def __init__(self, streaming_content=(), *args, **kwargs):
super(StreamingHttpResponse, self).__init__(*args, **kwargs)
# `streaming_content` should be an iterable of bytestrings.
# See the `streaming_content` property methods.
self.streaming_content = streaming_content
@property
def content(self):
raise AttributeError(
"This %s instance has no `content` attribute. Use "
"`streaming_content` instead." % self.__class__.__name__
)
@property
def streaming_content(self):
return map(self.make_bytes, self._iterator)
@streaming_content.setter
def streaming_content(self, value):
self._set_streaming_content(value)
def _set_streaming_content(self, value):
# Ensure we can never iterate on "value" more than once.
self._iterator = iter(value)
if hasattr(value, 'close'):
self._closable_objects.append(value)
def __iter__(self):
return self.streaming_content
def getvalue(self):
return b''.join(self.streaming_content)
class FileResponse(StreamingHttpResponse):
"""
A streaming HTTP response class optimized for files.
"""
block_size = 4096
def _set_streaming_content(self, value):
if hasattr(value, 'read'):
self.file_to_stream = value
filelike = value
if hasattr(filelike, 'close'):
self._closable_objects.append(filelike)
value = iter(lambda: filelike.read(self.block_size), b'')
else:
self.file_to_stream = None
super(FileResponse, self)._set_streaming_content(value)
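`iter(lambda: filelike.read(self.block_size), b'')` is the two-argument form of `iter()`: it calls the lambda until the sentinel `b''` (end of file) is returned, yielding fixed-size blocks. The same pattern with an in-memory file:

```python
import io

block_size = 4
filelike = io.BytesIO(b'abcdefghij')
# Call read(block_size) repeatedly until it returns the sentinel b''.
blocks = list(iter(lambda: filelike.read(block_size), b''))
print(blocks)  # [b'abcd', b'efgh', b'ij']
```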
class HttpResponseRedirectBase(HttpResponse):
allowed_schemes = ['http', 'https', 'ftp']
def __init__(self, redirect_to, *args, **kwargs):
parsed = urlparse(force_text(redirect_to))
if parsed.scheme and parsed.scheme not in self.allowed_schemes:
raise DisallowedRedirect("Unsafe redirect to URL with protocol '%s'" % parsed.scheme)
super(HttpResponseRedirectBase, self).__init__(*args, **kwargs)
self['Location'] = iri_to_uri(redirect_to)
url = property(lambda self: self['Location'])
def __repr__(self):
return '<%(cls)s status_code=%(status_code)d, "%(content_type)s", url="%(url)s">' % {
'cls': self.__class__.__name__,
'status_code': self.status_code,
'content_type': self['Content-Type'],
'url': self.url,
}
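The redirect base class rejects any explicit scheme outside its allow-list before setting the `Location` header. The check itself is plain `urlparse` logic; a sketch (the `is_safe_redirect` helper returning a bool instead of raising is an illustrative simplification):

```python
from urllib.parse import urlparse

allowed_schemes = ['http', 'https', 'ftp']

def is_safe_redirect(redirect_to):
    """Reject URLs with an explicit scheme outside the allow-list."""
    parsed = urlparse(redirect_to)
    return not parsed.scheme or parsed.scheme in allowed_schemes

print(is_safe_redirect('https://example.com/'))  # True
print(is_safe_redirect('/relative/path'))        # True (no scheme)
print(is_safe_redirect('javascript:alert(1)'))   # False
```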
class HttpResponseRedirect(HttpResponseRedirectBase):
status_code = 302
class HttpResponsePermanentRedirect(HttpResponseRedirectBase):
status_code = 301
class HttpResponseNotModified(HttpResponse):
status_code = 304
def __init__(self, *args, **kwargs):
super(HttpResponseNotModified, self).__init__(*args, **kwargs)
del self['content-type']
@HttpResponse.content.setter
def content(self, value):
if value:
raise AttributeError("You cannot set content to a 304 (Not Modified) response")
self._container = []
class HttpResponseBadRequest(HttpResponse):
status_code = 400
class HttpResponseNotFound(HttpResponse):
status_code = 404
class HttpResponseForbidden(HttpResponse):
status_code = 403
class HttpResponseNotAllowed(HttpResponse):
status_code = 405
def __init__(self, permitted_methods, *args, **kwargs):
super(HttpResponseNotAllowed, self).__init__(*args, **kwargs)
self['Allow'] = ', '.join(permitted_methods)
def __repr__(self):
return '<%(cls)s [%(methods)s] status_code=%(status_code)d, "%(content_type)s">' % {
'cls': self.__class__.__name__,
'status_code': self.status_code,
'content_type': self['Content-Type'],
'methods': self['Allow'],
}
class HttpResponseGone(HttpResponse):
status_code = 410
class HttpResponseServerError(HttpResponse):
status_code = 500
class Http404(Exception):
pass
class JsonResponse(HttpResponse):
"""
An HTTP response class that consumes data to be serialized to JSON.
:param data: Data to be dumped into json. By default only ``dict`` objects
are allowed to be passed due to a security flaw before EcmaScript 5. See
the ``safe`` parameter for more information.
    :param encoder: Should be a JSON encoder class. Defaults to
``django.core.serializers.json.DjangoJSONEncoder``.
:param safe: Controls if only ``dict`` objects may be serialized. Defaults
to ``True``.
:param json_dumps_params: A dictionary of kwargs passed to json.dumps().
"""
def __init__(self, data, encoder=DjangoJSONEncoder, safe=True,
json_dumps_params=None, **kwargs):
if safe and not isinstance(data, dict):
raise TypeError(
'In order to allow non-dict objects to be serialized set the '
'safe parameter to False.'
)
if json_dumps_params is None:
json_dumps_params = {}
kwargs.setdefault('content_type', 'application/json')
data = json.dumps(data, cls=encoder, **json_dumps_params)
super(JsonResponse, self).__init__(content=data, **kwargs)
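The `safe=True` default guards against serializing top-level non-dict values. A self-contained sketch of the same guard using only the stdlib (the Django encoder is replaced here by the default `json.JSONEncoder`, and `dump_json` is an invented name):

```python
import json

def dump_json(data, safe=True, **json_dumps_params):
    """Serialize data to JSON, refusing non-dict payloads when safe=True."""
    if safe and not isinstance(data, dict):
        raise TypeError(
            'In order to allow non-dict objects to be serialized set the '
            'safe parameter to False.'
        )
    return json.dumps(data, **json_dumps_params)

print(dump_json({'ok': True}))  # {"ok": true}
try:
    dump_json([1, 2, 3])        # refused: lists are unsafe by default
except TypeError as exc:
    print('refused:', exc)
```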
| bsd-3-clause |
Pillar1989/BBG_linux-3.8 | tools/perf/scripts/python/check-perf-trace.py | 1997 | 2539 | # perf script event handlers, generated by perf script -g python
# (c) 2010, Tom Zanussi <tzanussi@gmail.com>
# Licensed under the terms of the GNU GPL License version 2
#
# This script tests basic functionality such as flag and symbol
# strings, common_xxx() calls back into perf, begin, end, unhandled
# events, etc. Basically, if this script runs successfully and
# displays expected results, Python scripting support should be ok.
import os
import sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from Core import *
from perf_trace_context import *
unhandled = autodict()
def trace_begin():
print "trace_begin"
pass
def trace_end():
print_unhandled()
def irq__softirq_entry(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
common_callchain, vec):
print_header(event_name, common_cpu, common_secs, common_nsecs,
common_pid, common_comm)
print_uncommon(context)
print "vec=%s\n" % \
(symbol_str("irq__softirq_entry", "vec", vec)),
def kmem__kmalloc(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
common_callchain, call_site, ptr, bytes_req, bytes_alloc,
gfp_flags):
print_header(event_name, common_cpu, common_secs, common_nsecs,
common_pid, common_comm)
print_uncommon(context)
print "call_site=%u, ptr=%u, bytes_req=%u, " \
"bytes_alloc=%u, gfp_flags=%s\n" % \
(call_site, ptr, bytes_req, bytes_alloc,
flag_str("kmem__kmalloc", "gfp_flags", gfp_flags)),
def trace_unhandled(event_name, context, event_fields_dict):
try:
unhandled[event_name] += 1
except TypeError:
unhandled[event_name] = 1
def print_header(event_name, cpu, secs, nsecs, pid, comm):
print "%-20s %5u %05u.%09u %8u %-20s " % \
(event_name, cpu, secs, nsecs, pid, comm),
# print trace fields not included in handler args
def print_uncommon(context):
print "common_preempt_count=%d, common_flags=%s, common_lock_depth=%d, " \
% (common_pc(context), trace_flag_str(common_flags(context)), \
common_lock_depth(context))
def print_unhandled():
keys = unhandled.keys()
if not keys:
return
print "\nunhandled events:\n\n",
print "%-40s %10s\n" % ("event", "count"),
print "%-40s %10s\n" % ("----------------------------------------", \
"-----------"),
for event_name in keys:
print "%-40s %10d\n" % (event_name, unhandled[event_name])
| gpl-2.0 |
ndtran/l10n-switzerland | l10n_ch_account_statement_base_import/parser/base_parser.py | 1 | 4932 | # -*- coding: utf-8 -*-
##############################################################################
#
# Author: Nicolas Bessi
# Copyright 2015 Camptocamp SA
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from abc import ABCMeta, abstractmethod
import logging
from openerp import _, exceptions
_logger = logging.getLogger(__name__)
class BaseSwissParser(object):
"""Base parser class for every Swiss file format.
It provides a base abstraction for every parser
of swiss localisation"""
__metaclass__ = ABCMeta
_ftype = None
def __init__(self, data_file):
"""Constructor
:param data_file: the raw content of the file to be imported
:type data_file: string
"""
if not data_file:
raise ValueError('File must not be empty')
self.data_file = data_file
self.currency_code = None
self.account_number = None
self.statements = []
@abstractmethod
def ftype(self):
"""Gives the type of file we want to import
        This method is abstract so that developers are aware of it.
        If the base behavior is enough, the child implementation should
        consist of a simple call to super.
:return: imported file type
:rtype: string
"""
if not self._ftype:
raise ValueError('No file type defined')
return self._ftype
def parse(self):
"""Parse the file the file to import"""
try:
return self._parse()
except Exception as exc:
_logger.exception(
'Error when parsing {ftype} file'.format(ftype=self.ftype())
)
raise exceptions.Warning(
_("The following problem occurred during {ftype} import. "
"The file might not be valid.\n\n {msg}").format(
ftype=self.ftype(), msg=exc.message)
)
@abstractmethod
def file_is_known(self):
"""Predicate the tells if the parser can parse the data file
This method is abstract
:return: True if file is supported
:rtype: bool
"""
pass
@abstractmethod
def _parse(self):
"""Do the parsing process job
This method is abstract
"""
pass
@abstractmethod
def get_currency(self):
"""Returns the ISO currency code of the parsed file
        This method is abstract so that developers are aware of it.
        If the base behavior is enough, the child implementation should
        consist of a simple call to super.
:return: The ISO currency code of the parsed file eg: CHF
:rtype: string
"""
return self.currency_code
@abstractmethod
def get_account_number(self):
"""Return the account_number related to parsed file
        This method is abstract so that developers are aware of it.
        If the base behavior is enough, the child implementation should
        consist of a simple call to super.
:return: The account number of the parsed file
:rtype: dict
"""
return self.account_number
@abstractmethod
def get_statements(self):
"""Return the list of bank statement dict.
Bank statements data: list of dict containing
(optional items marked by o) :
- 'name': string (e.g: '000000123')
- 'date': date (e.g: 2013-06-26)
-o 'balance_start': float (e.g: 8368.56)
-o 'balance_end_real': float (e.g: 8888.88)
- 'transactions': list of dict containing :
- 'name': string
(e.g: 'KBC-INVESTERINGSKREDIET 787-5562831-01')
- 'date': date
- 'amount': float
- 'unique_import_id': string
-o 'account_number': string
Will be used to find/create the res.partner.bank in odoo
-o 'note': string
-o 'partner_name': string
-o 'ref': string
This method is abstract
:return: a list of statement
:rtype: list
"""
return self.statements
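A concrete parser fills in `_ftype`, `file_is_known` and `_parse`. A minimal, hypothetical subclass sketched against a stand-in base class (no Odoo imports; the stand-in keeps only the abstract surface, and the "v11" format name and `'002'` prefix check are invented for illustration):

```python
class StandInBase(object):
    """Stand-in for BaseSwissParser, keeping only the relevant surface."""
    _ftype = None

    def __init__(self, data_file):
        if not data_file:
            raise ValueError('File must not be empty')
        self.data_file = data_file
        self.statements = []

    def ftype(self):
        if not self._ftype:
            raise ValueError('No file type defined')
        return self._ftype


class DummyV11Parser(StandInBase):
    _ftype = 'v11'  # hypothetical file type name

    def file_is_known(self):
        # Hypothetical signature check on the first bytes of the file.
        return self.data_file.startswith('002')

    def _parse(self):
        self.statements.append({'name': '000000123', 'transactions': []})
        return True


p = DummyV11Parser('002...')
print(p.ftype(), p.file_is_known())  # v11 True
```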
| agpl-3.0 |
technologiescollege/Blockly-rduino-communication | scripts_XP/Lib/site-packages/serial/serialwin32.py | 147 | 18260 | #! python
# Python Serial Port Extension for Win32, Linux, BSD, Jython
# serial driver for win32
# see __init__.py
#
# (C) 2001-2011 Chris Liechti <cliechti@gmx.net>
# this is distributed under a free software license, see license.txt
#
# Initial patch to use ctypes by Giovanni Bajo <rasky@develer.com>
import ctypes
import time  # used by flush() below; sendBreak() also imports it locally
from serial import win32
from serial.serialutil import *
def device(portnum):
"""Turn a port number into a device name"""
return 'COM%d' % (portnum+1) # numbers are transformed to a string
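`device()` maps a zero-based port index to the Windows device name; for example:

```python
def device(portnum):
    """Turn a zero-based port number into a Windows device name (as above)."""
    return 'COM%d' % (portnum + 1)

print(device(0))   # COM1
print(device(14))  # COM15
```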
class Win32Serial(SerialBase):
"""Serial port implementation for Win32 based on ctypes."""
BAUDRATES = (50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800,
9600, 19200, 38400, 57600, 115200)
def __init__(self, *args, **kwargs):
self.hComPort = None
self._overlappedRead = None
self._overlappedWrite = None
self._rtsToggle = False
self._rtsState = win32.RTS_CONTROL_ENABLE
self._dtrState = win32.DTR_CONTROL_ENABLE
SerialBase.__init__(self, *args, **kwargs)
def open(self):
"""Open port with current settings. This may throw a SerialException
if the port cannot be opened."""
if self._port is None:
raise SerialException("Port must be configured before it can be used.")
if self._isOpen:
raise SerialException("Port is already open.")
# the "\\.\COMx" format is required for devices other than COM1-COM8
# not all versions of windows seem to support this properly
# so that the first few ports are used with the DOS device name
port = self.portstr
try:
if port.upper().startswith('COM') and int(port[3:]) > 8:
port = '\\\\.\\' + port
except ValueError:
# for like COMnotanumber
pass
self.hComPort = win32.CreateFile(port,
win32.GENERIC_READ | win32.GENERIC_WRITE,
0, # exclusive access
None, # no security
win32.OPEN_EXISTING,
win32.FILE_ATTRIBUTE_NORMAL | win32.FILE_FLAG_OVERLAPPED,
0)
if self.hComPort == win32.INVALID_HANDLE_VALUE:
self.hComPort = None # 'cause __del__ is called anyway
raise SerialException("could not open port %r: %r" % (self.portstr, ctypes.WinError()))
try:
self._overlappedRead = win32.OVERLAPPED()
self._overlappedRead.hEvent = win32.CreateEvent(None, 1, 0, None)
self._overlappedWrite = win32.OVERLAPPED()
#~ self._overlappedWrite.hEvent = win32.CreateEvent(None, 1, 0, None)
self._overlappedWrite.hEvent = win32.CreateEvent(None, 0, 0, None)
# Setup a 4k buffer
win32.SetupComm(self.hComPort, 4096, 4096)
# Save original timeout values:
self._orgTimeouts = win32.COMMTIMEOUTS()
win32.GetCommTimeouts(self.hComPort, ctypes.byref(self._orgTimeouts))
self._reconfigurePort()
# Clear buffers:
# Remove anything that was there
win32.PurgeComm(self.hComPort,
win32.PURGE_TXCLEAR | win32.PURGE_TXABORT |
win32.PURGE_RXCLEAR | win32.PURGE_RXABORT)
except:
try:
self._close()
except:
# ignore any exception when closing the port
# also to keep original exception that happened when setting up
pass
self.hComPort = None
raise
else:
self._isOpen = True
def _reconfigurePort(self):
"""Set communication parameters on opened port."""
if not self.hComPort:
raise SerialException("Can only operate on a valid port handle")
# Set Windows timeout values
# timeouts is a tuple with the following items:
# (ReadIntervalTimeout,ReadTotalTimeoutMultiplier,
# ReadTotalTimeoutConstant,WriteTotalTimeoutMultiplier,
# WriteTotalTimeoutConstant)
if self._timeout is None:
timeouts = (0, 0, 0, 0, 0)
elif self._timeout == 0:
timeouts = (win32.MAXDWORD, 0, 0, 0, 0)
else:
timeouts = (0, 0, int(self._timeout*1000), 0, 0)
if self._timeout != 0 and self._interCharTimeout is not None:
timeouts = (int(self._interCharTimeout * 1000),) + timeouts[1:]
if self._writeTimeout is None:
pass
elif self._writeTimeout == 0:
timeouts = timeouts[:-2] + (0, win32.MAXDWORD)
else:
timeouts = timeouts[:-2] + (0, int(self._writeTimeout*1000))
win32.SetCommTimeouts(self.hComPort, ctypes.byref(win32.COMMTIMEOUTS(*timeouts)))
win32.SetCommMask(self.hComPort, win32.EV_ERR)
# Setup the connection info.
# Get state and modify it:
comDCB = win32.DCB()
win32.GetCommState(self.hComPort, ctypes.byref(comDCB))
comDCB.BaudRate = self._baudrate
if self._bytesize == FIVEBITS:
comDCB.ByteSize = 5
elif self._bytesize == SIXBITS:
comDCB.ByteSize = 6
elif self._bytesize == SEVENBITS:
comDCB.ByteSize = 7
elif self._bytesize == EIGHTBITS:
comDCB.ByteSize = 8
else:
raise ValueError("Unsupported number of data bits: %r" % self._bytesize)
if self._parity == PARITY_NONE:
comDCB.Parity = win32.NOPARITY
comDCB.fParity = 0 # Disable Parity Check
elif self._parity == PARITY_EVEN:
comDCB.Parity = win32.EVENPARITY
comDCB.fParity = 1 # Enable Parity Check
elif self._parity == PARITY_ODD:
comDCB.Parity = win32.ODDPARITY
comDCB.fParity = 1 # Enable Parity Check
elif self._parity == PARITY_MARK:
comDCB.Parity = win32.MARKPARITY
comDCB.fParity = 1 # Enable Parity Check
elif self._parity == PARITY_SPACE:
comDCB.Parity = win32.SPACEPARITY
comDCB.fParity = 1 # Enable Parity Check
else:
raise ValueError("Unsupported parity mode: %r" % self._parity)
if self._stopbits == STOPBITS_ONE:
comDCB.StopBits = win32.ONESTOPBIT
elif self._stopbits == STOPBITS_ONE_POINT_FIVE:
comDCB.StopBits = win32.ONE5STOPBITS
elif self._stopbits == STOPBITS_TWO:
comDCB.StopBits = win32.TWOSTOPBITS
else:
raise ValueError("Unsupported number of stop bits: %r" % self._stopbits)
comDCB.fBinary = 1 # Enable Binary Transmission
# Char. w/ Parity-Err are replaced with 0xff (if fErrorChar is set to TRUE)
if self._rtscts:
comDCB.fRtsControl = win32.RTS_CONTROL_HANDSHAKE
elif self._rtsToggle:
comDCB.fRtsControl = win32.RTS_CONTROL_TOGGLE
else:
comDCB.fRtsControl = self._rtsState
if self._dsrdtr:
comDCB.fDtrControl = win32.DTR_CONTROL_HANDSHAKE
else:
comDCB.fDtrControl = self._dtrState
if self._rtsToggle:
comDCB.fOutxCtsFlow = 0
else:
comDCB.fOutxCtsFlow = self._rtscts
comDCB.fOutxDsrFlow = self._dsrdtr
comDCB.fOutX = self._xonxoff
comDCB.fInX = self._xonxoff
comDCB.fNull = 0
comDCB.fErrorChar = 0
comDCB.fAbortOnError = 0
comDCB.XonChar = XON
comDCB.XoffChar = XOFF
if not win32.SetCommState(self.hComPort, ctypes.byref(comDCB)):
raise ValueError("Cannot configure port, some setting was wrong. Original message: %r" % ctypes.WinError())
#~ def __del__(self):
#~ self.close()
def _close(self):
"""internal close port helper"""
if self.hComPort:
# Restore original timeout values:
win32.SetCommTimeouts(self.hComPort, self._orgTimeouts)
# Close COM-Port:
win32.CloseHandle(self.hComPort)
if self._overlappedRead is not None:
win32.CloseHandle(self._overlappedRead.hEvent)
self._overlappedRead = None
if self._overlappedWrite is not None:
win32.CloseHandle(self._overlappedWrite.hEvent)
self._overlappedWrite = None
self.hComPort = None
def close(self):
"""Close port"""
if self._isOpen:
self._close()
self._isOpen = False
def makeDeviceName(self, port):
return device(port)
# - - - - - - - - - - - - - - - - - - - - - - - -
def inWaiting(self):
"""Return the number of characters currently in the input buffer."""
flags = win32.DWORD()
comstat = win32.COMSTAT()
if not win32.ClearCommError(self.hComPort, ctypes.byref(flags), ctypes.byref(comstat)):
raise SerialException('call to ClearCommError failed')
return comstat.cbInQue
def read(self, size=1):
"""Read size bytes from the serial port. If a timeout is set it may
        return fewer characters than requested. With no timeout it will block
until the requested number of bytes is read."""
if not self.hComPort: raise portNotOpenError
if size > 0:
win32.ResetEvent(self._overlappedRead.hEvent)
flags = win32.DWORD()
comstat = win32.COMSTAT()
if not win32.ClearCommError(self.hComPort, ctypes.byref(flags), ctypes.byref(comstat)):
raise SerialException('call to ClearCommError failed')
if self.timeout == 0:
n = min(comstat.cbInQue, size)
if n > 0:
buf = ctypes.create_string_buffer(n)
rc = win32.DWORD()
err = win32.ReadFile(self.hComPort, buf, n, ctypes.byref(rc), ctypes.byref(self._overlappedRead))
if not err and win32.GetLastError() != win32.ERROR_IO_PENDING:
raise SerialException("ReadFile failed (%r)" % ctypes.WinError())
err = win32.WaitForSingleObject(self._overlappedRead.hEvent, win32.INFINITE)
read = buf.raw[:rc.value]
else:
read = bytes()
else:
buf = ctypes.create_string_buffer(size)
rc = win32.DWORD()
err = win32.ReadFile(self.hComPort, buf, size, ctypes.byref(rc), ctypes.byref(self._overlappedRead))
if not err and win32.GetLastError() != win32.ERROR_IO_PENDING:
raise SerialException("ReadFile failed (%r)" % ctypes.WinError())
err = win32.GetOverlappedResult(self.hComPort, ctypes.byref(self._overlappedRead), ctypes.byref(rc), True)
read = buf.raw[:rc.value]
else:
read = bytes()
return bytes(read)
def write(self, data):
"""Output the given string over the serial port."""
if not self.hComPort: raise portNotOpenError
#~ if not isinstance(data, (bytes, bytearray)):
#~ raise TypeError('expected %s or bytearray, got %s' % (bytes, type(data)))
# convert data (needed in case of memoryview instance: Py 3.1 io lib), ctypes doesn't like memoryview
data = to_bytes(data)
if data:
#~ win32event.ResetEvent(self._overlappedWrite.hEvent)
n = win32.DWORD()
err = win32.WriteFile(self.hComPort, data, len(data), ctypes.byref(n), self._overlappedWrite)
if not err and win32.GetLastError() != win32.ERROR_IO_PENDING:
raise SerialException("WriteFile failed (%r)" % ctypes.WinError())
if self._writeTimeout != 0: # if blocking (None) or w/ write timeout (>0)
# Wait for the write to complete.
#~ win32.WaitForSingleObject(self._overlappedWrite.hEvent, win32.INFINITE)
err = win32.GetOverlappedResult(self.hComPort, self._overlappedWrite, ctypes.byref(n), True)
if n.value != len(data):
raise writeTimeoutError
return n.value
else:
return 0
def flush(self):
"""Flush of file like objects. In this case, wait until all data
is written."""
while self.outWaiting():
time.sleep(0.05)
        # XXX could also use WaitCommEvent with mask EV_TXEMPTY, but it would
        # require overlapped IO and it's also only possible to set a single mask
        # on the port
def flushInput(self):
"""Clear input buffer, discarding all that is in the buffer."""
if not self.hComPort: raise portNotOpenError
win32.PurgeComm(self.hComPort, win32.PURGE_RXCLEAR | win32.PURGE_RXABORT)
def flushOutput(self):
"""Clear output buffer, aborting the current output and
discarding all that is in the buffer."""
if not self.hComPort: raise portNotOpenError
win32.PurgeComm(self.hComPort, win32.PURGE_TXCLEAR | win32.PURGE_TXABORT)
def sendBreak(self, duration=0.25):
"""Send break condition. Timed, returns to idle state after given duration."""
if not self.hComPort: raise portNotOpenError
import time
win32.SetCommBreak(self.hComPort)
time.sleep(duration)
win32.ClearCommBreak(self.hComPort)
def setBreak(self, level=1):
"""Set break: Controls TXD. When active, to transmitting is possible."""
if not self.hComPort: raise portNotOpenError
if level:
win32.SetCommBreak(self.hComPort)
else:
win32.ClearCommBreak(self.hComPort)
def setRTS(self, level=1):
"""Set terminal status line: Request To Send"""
# remember level for reconfigure
if level:
self._rtsState = win32.RTS_CONTROL_ENABLE
else:
self._rtsState = win32.RTS_CONTROL_DISABLE
# also apply now if port is open
if self.hComPort:
if level:
win32.EscapeCommFunction(self.hComPort, win32.SETRTS)
else:
win32.EscapeCommFunction(self.hComPort, win32.CLRRTS)
def setDTR(self, level=1):
"""Set terminal status line: Data Terminal Ready"""
# remember level for reconfigure
if level:
self._dtrState = win32.DTR_CONTROL_ENABLE
else:
self._dtrState = win32.DTR_CONTROL_DISABLE
# also apply now if port is open
if self.hComPort:
if level:
win32.EscapeCommFunction(self.hComPort, win32.SETDTR)
else:
win32.EscapeCommFunction(self.hComPort, win32.CLRDTR)
def _GetCommModemStatus(self):
stat = win32.DWORD()
win32.GetCommModemStatus(self.hComPort, ctypes.byref(stat))
return stat.value
def getCTS(self):
"""Read terminal status line: Clear To Send"""
if not self.hComPort: raise portNotOpenError
return win32.MS_CTS_ON & self._GetCommModemStatus() != 0
def getDSR(self):
"""Read terminal status line: Data Set Ready"""
if not self.hComPort: raise portNotOpenError
return win32.MS_DSR_ON & self._GetCommModemStatus() != 0
def getRI(self):
"""Read terminal status line: Ring Indicator"""
if not self.hComPort: raise portNotOpenError
return win32.MS_RING_ON & self._GetCommModemStatus() != 0
def getCD(self):
"""Read terminal status line: Carrier Detect"""
if not self.hComPort: raise portNotOpenError
return win32.MS_RLSD_ON & self._GetCommModemStatus() != 0
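The getCTS/getDSR/getRI/getCD helpers above each mask one bit out of the DWORD filled in by GetCommModemStatus. The masking logic itself is portable and can be exercised without a serial port; the constant values below are the standard Win32 modem-status masks and are assumed here rather than imported from this module:

```python
# Standard Win32 modem-status bit masks (winbase.h); values assumed here.
MS_CTS_ON = 0x0010   # Clear To Send
MS_DSR_ON = 0x0020   # Data Set Ready
MS_RING_ON = 0x0040  # Ring Indicator
MS_RLSD_ON = 0x0080  # Carrier Detect (Receive Line Signal Detect)

def decode_modem_status(status):
    """Return the four status lines as booleans, mirroring getCTS() etc."""
    # Note: & binds tighter than != in Python, so the parenthesis-free
    # form used in the methods above parses as (MASK & status) != 0.
    return {
        'cts': MS_CTS_ON & status != 0,
        'dsr': MS_DSR_ON & status != 0,
        'ri': MS_RING_ON & status != 0,
        'cd': MS_RLSD_ON & status != 0,
    }

# Example: CTS and carrier asserted, DSR and RI clear.
print(decode_modem_status(MS_CTS_ON | MS_RLSD_ON))
```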
# - - platform specific - - - -
def setBufferSize(self, rx_size=4096, tx_size=None):
"""\
        Recommend a buffer size to the driver (device driver can ignore this
        value). Must be called before the port is opened.
"""
if tx_size is None: tx_size = rx_size
win32.SetupComm(self.hComPort, rx_size, tx_size)
def setXON(self, level=True):
"""\
Manually control flow - when software flow control is enabled.
This will send XON (true) and XOFF (false) to the other device.
WARNING: this function is not portable to different platforms!
"""
if not self.hComPort: raise portNotOpenError
if level:
win32.EscapeCommFunction(self.hComPort, win32.SETXON)
else:
win32.EscapeCommFunction(self.hComPort, win32.SETXOFF)
def outWaiting(self):
"""return how many characters the in the outgoing buffer"""
flags = win32.DWORD()
comstat = win32.COMSTAT()
if not win32.ClearCommError(self.hComPort, ctypes.byref(flags), ctypes.byref(comstat)):
raise SerialException('call to ClearCommError failed')
return comstat.cbOutQue
# functions useful for RS-485 adapters
def setRtsToggle(self, rtsToggle):
"""Change RTS toggle control setting."""
self._rtsToggle = rtsToggle
if self._isOpen: self._reconfigurePort()
def getRtsToggle(self):
"""Get the current RTS toggle control setting."""
return self._rtsToggle
rtsToggle = property(getRtsToggle, setRtsToggle, doc="RTS toggle control setting")
# assemble Serial class with the platform specific implementation and the base
# for file-like behavior. For Python 2.6 and newer, which provide the new I/O
# library, derive from io.RawIOBase
try:
import io
except ImportError:
# classic version with our own file-like emulation
class Serial(Win32Serial, FileLike):
pass
else:
# io library present
class Serial(Win32Serial, io.RawIOBase):
pass
# Test function only!!
if __name__ == '__main__':
s = Serial(0)
sys.stdout.write("%s\n" % s)
s = Serial()
sys.stdout.write("%s\n" % s)
s.baudrate = 19200
s.databits = 7
s.close()
s.port = 0
s.open()
sys.stdout.write("%s\n" % s)
| gpl-3.0 |
SickGear/SickGear | lib/hachoir_py2/parser/misc/androidxml.py | 2 | 8482 | """
AndroidManifest.xml parser
References:
- http://code.google.com/p/androguard/source/browse/core/bytecodes/apk.py
Author: Robert Xiao
Creation Date: May 29, 2011
"""
from hachoir_py2.parser import Parser
from hachoir_py2.field import (FieldSet, ParserError,
String, Enum, GenericVector,
UInt8, UInt16, UInt32, Int32,
Float32, Bits, )
from hachoir_py2.core.text_handler import textHandler, hexadecimal, filesizeHandler
from hachoir_py2.core.tools import createDict
from hachoir_py2.core.endian import LITTLE_ENDIAN
class PascalCString16(FieldSet):
def createFields(self):
yield UInt16(self, "size")
self._size = (self['size'].value + 2) * 16
yield String(self, "string", (self['size'].value + 1) * 2, strip='\0', charset="UTF-16-LE")
def createValue(self):
return self['string'].value
class StringTable(FieldSet):
def createFields(self):
for field in self['../offsets']:
pad = self.seekByte(field.value)
if pad:
yield pad
yield PascalCString16(self, "string[]")
def Top(self):
while not self.eof:
yield Chunk(self, "chunk[]")
def StringChunk(self):
# TODO: styles
yield UInt32(self, "string_count")
yield UInt32(self, "style_count")
yield UInt32(self, "reserved[]")
yield UInt32(self, "string_offset")
yield UInt32(self, "style_offset")
yield GenericVector(self, "offsets", self['string_count'].value, UInt32,
description="Offsets for string table")
pad = self.seekByte(self['string_offset'].value)
if pad:
yield pad
yield StringTable(self, "table")
def ResourceIDs(self):
while self._current_size < self._size:
yield textHandler(UInt32(self, "resource_id[]"), hexadecimal)
def stringIndex(field):
if field.value == -1:
return ''
return field['/xml_file/string_table/table/string[%d]' % field.value].display
def NamespaceTag(self):
yield UInt32(self, "lineno", "Line number from original XML file")
yield Int32(self, "unk[]", "Always -1")
yield textHandler(Int32(self, "prefix"), stringIndex)
yield textHandler(Int32(self, "uri"), stringIndex)
def NamespaceStartValue(self):
return "xmlns:%s='%s'" % (self['prefix'].display, self['uri'].display)
def NamespaceEndValue(self):
return "/%s" % self['prefix'].display
def IntTextHandler(func):
return lambda *args, **kwargs: textHandler(Int32(*args, **kwargs), func)
def booleanText(field):
if field.value == 0:
return 'false'
return 'true'
class XMLUnitFloat(FieldSet):
static_size = 32
UNIT_MAP = {}
RADIX_MAP = {
0: 0,
1: 7,
2: 15,
3: 23,
}
def createFields(self):
yield Enum(Bits(self, "unit", 4), self.UNIT_MAP)
yield Enum(Bits(self, "exponent", 2), self.RADIX_MAP)
yield Bits(self, "reserved[]", 2)
yield Bits(self, "mantissa", 24)
def createValue(self):
        return float(self['mantissa'].value) / (1 << self.RADIX_MAP[self['exponent'].value])
def createDisplay(self):
return '%f%s' % (self.value, self.UNIT_MAP.get(self['unit'].value, ''))
class XMLDimensionFloat(XMLUnitFloat):
UNIT_MAP = dict(enumerate(["px", "dip", "sp", "pt", "in", "mm"]))
class XMLFractionFloat(XMLUnitFloat):
UNIT_MAP = {0: '%', 1: '%p'}
class XMLAttribute(FieldSet):
TYPE_INFO = {
0: ('Null', IntTextHandler(lambda field: '')),
1: ('Reference', IntTextHandler(lambda field: '@%08x' % field.value)),
2: ('Attribute', IntTextHandler(lambda field: '?%08x' % field.value)),
3: ('String', IntTextHandler(stringIndex)),
4: ('Float', Float32),
5: ('Dimension', XMLDimensionFloat),
6: ('Fraction', XMLFractionFloat),
16: ('Int_Dec', Int32),
17: ('Int_Hex', IntTextHandler(hexadecimal)),
18: ('Int_Boolean', IntTextHandler(booleanText)),
28: ('Int_Color_Argb8', IntTextHandler(lambda field: '#%08x' % field.value)),
29: ('Int_Color_Rgb8', IntTextHandler(lambda field: '#%08x' % field.value)),
30: ('Int_Color_Argb4', IntTextHandler(lambda field: '#%08x' % field.value)),
31: ('Int_Color_Rgb4', IntTextHandler(lambda field: '#%08x' % field.value)),
}
TYPE_NAME = createDict(TYPE_INFO, 0)
TYPE_FUNC = createDict(TYPE_INFO, 1)
static_size = 5 * 32
def createFields(self):
yield textHandler(Int32(self, "ns"), stringIndex)
yield textHandler(Int32(self, "name"), stringIndex)
yield textHandler(Int32(self, "value_string"), stringIndex)
yield UInt16(self, "unk[]")
yield UInt8(self, "unk[]")
yield Enum(UInt8(self, "value_type"), self.TYPE_NAME)
func = self.TYPE_FUNC.get(self['value_type'].value, None)
if not func:
func = UInt32
yield func(self, "value_data")
def createValue(self):
return (self['name'].display, self['value_data'].value)
def createDisplay(self):
return '%s="%s"' % (self['name'].display, self['value_data'].display)
def TagStart(self):
yield UInt32(self, "lineno", "Line number from original XML file")
yield Int32(self, "unk[]", "Always -1")
yield textHandler(Int32(self, "ns"), stringIndex)
yield textHandler(Int32(self, "name"), stringIndex)
yield UInt32(self, "flags")
yield UInt16(self, "attrib_count")
yield UInt16(self, "attrib_id")
yield UInt16(self, "attrib_class")
yield UInt16(self, "attrib_style")
for i in xrange(self['attrib_count'].value):
yield XMLAttribute(self, "attrib[]")
def TagStartValue(self):
attrstr = ' '.join(attr.display for attr in self.array('attrib'))
if attrstr: attrstr = ' ' + attrstr
if not self['ns'].display:
return '<%s%s>' % (self['name'].display, attrstr)
return "<%s:%s%s>" % (self['ns'].display, self['name'].display, attrstr)
def TagEnd(self):
yield UInt32(self, "lineno", "Line number from original XML file")
yield Int32(self, "unk[]", "Always -1")
yield textHandler(Int32(self, "ns"), stringIndex)
yield textHandler(Int32(self, "name"), stringIndex)
def TagEndValue(self):
if not self['ns'].display:
return '</%s>' % self['name'].display
return "</%s:%s>" % (self['ns'].display, self['name'].display)
def TextChunk(self):
# TODO
yield UInt32(self, "lineno", "Line number from original XML file")
yield Int32(self, "unk[]", "Always -1")
class Chunk(FieldSet):
CHUNK_INFO = {
0x0001: ("string_table", "String Table", StringChunk, None),
0x0003: ("xml_file", "XML File", Top, None),
0x0100: ("namespace_start[]", "Start Namespace", NamespaceTag, NamespaceStartValue),
0x0101: ("namespace_end[]", "End Namespace", NamespaceTag, NamespaceEndValue),
0x0102: ("tag_start[]", "Start Tag", TagStart, TagStartValue),
0x0103: ("tag_end[]", "End Tag", TagEnd, TagEndValue),
0x0104: ("text[]", "Text", TextChunk, None),
0x0180: ("resource_ids", "Resource IDs", ResourceIDs, None),
}
CHUNK_DESC = createDict(CHUNK_INFO, 1)
def __init__(self, parent, name, description=None):
FieldSet.__init__(self, parent, name, description)
self._size = self['chunk_size'].value * 8
type = self['type'].value
self.parse_func = None
if type in self.CHUNK_INFO:
self._name, self._description, self.parse_func, value_func = self.CHUNK_INFO[type]
if value_func:
self.createValue = lambda: value_func(self)
def createFields(self):
yield Enum(UInt16(self, "type"), self.CHUNK_DESC)
yield UInt16(self, "header_size")
yield UInt32(self, "chunk_size")
if self.parse_func:
for field in self.parse_func(self):
yield field
class AndroidXMLFile(Parser):
MAGIC = "\x03\x00\x08\x00"
PARSER_TAGS = {
"id": "axml",
"category": "misc",
"file_ext": ("xml",),
"min_size": 32 * 8,
"magic": ((MAGIC, 0),),
"description": "Android binary XML format",
}
endian = LITTLE_ENDIAN
def validate(self):
if self.stream.readBytes(0, len(self.MAGIC)) != self.MAGIC:
return "Invalid magic"
return True
def createFields(self):
yield Chunk(self, "xml_file")
| gpl-3.0 |
zimmermegan/MARDA | nltk-3.0.3/nltk/test/unit/test_classify.py | 28 | 1370 | # -*- coding: utf-8 -*-
"""
Unit tests for nltk.classify. See also: nltk/test/classify.doctest
"""
from __future__ import absolute_import
from nose import SkipTest
from nltk import classify
TRAIN = [
(dict(a=1,b=1,c=1), 'y'),
(dict(a=1,b=1,c=1), 'x'),
(dict(a=1,b=1,c=0), 'y'),
(dict(a=0,b=1,c=1), 'x'),
(dict(a=0,b=1,c=1), 'y'),
(dict(a=0,b=0,c=1), 'y'),
(dict(a=0,b=1,c=0), 'x'),
(dict(a=0,b=0,c=0), 'x'),
(dict(a=0,b=1,c=1), 'y'),
]
TEST = [
(dict(a=1,b=0,c=1)), # unseen
(dict(a=1,b=0,c=0)), # unseen
(dict(a=0,b=1,c=1)), # seen 3 times, labels=y,y,x
(dict(a=0,b=1,c=0)), # seen 1 time, label=x
]
RESULTS = [
(0.16, 0.84),
(0.46, 0.54),
(0.41, 0.59),
(0.76, 0.24),
]
def assert_classifier_correct(algorithm):
try:
classifier = classify.MaxentClassifier.train(
TRAIN, algorithm, trace=0, max_iter=1000
)
except (LookupError, AttributeError) as e:
raise SkipTest(str(e))
for (px, py), featureset in zip(RESULTS, TEST):
pdist = classifier.prob_classify(featureset)
assert abs(pdist.prob('x') - px) < 1e-2, (pdist.prob('x'), px)
assert abs(pdist.prob('y') - py) < 1e-2, (pdist.prob('y'), py)
def test_megam():
assert_classifier_correct('MEGAM')
def test_tadm():
assert_classifier_correct('TADM')
| mit |
furrykef/lltvg | sf64-audio/sf64dec.py | 1 | 7342 | #!/usr/bin/env python
# By hcs with tweaks by Kef Schecter
# Requires Python 2.7 or greater
from __future__ import division
from contextlib import contextmanager
from struct import unpack, pack
import argparse
import sys
import wave
EN_SAMPLE_RATES = (
12000, # File 0
15000, # File 1
7000, # File 2
)
JA_SAMPLE_RATES = (
12000, # File 0
15000, # File 1
6000, # File 2
)
# Wave_write objects can raise an exception on close.
# This fixes it so that it will not throw on close if used with the
# 'with' statement.
@contextmanager
def my_wave_open(filename, mode=None):
wav = wave.open(filename, mode)
try:
yield wav
finally:
try:
wav.close()
except:
pass
def main(argv=None):
global header_base_offset
global data_max_offset
global data_base_offset
if argv is None:
argv = sys.argv[1:]
args = parseArgs(argv)
header_infile = args.ctl_file
data_infile = args.tbl_file
with header_infile, data_infile:
header_base_offset = 0
data_max_offset = 0
data_base_offset = 0
bank_sizes = (1, 1, 0x13)
file_idx = 0
for file_banks in bank_sizes:
for bank_idx in range(file_banks):
print "doing file %d, bank %d at %08x, %08x" % (file_idx, bank_idx, header_base_offset, data_base_offset)
header_max_offset = 0
for sample_idx in xrange(0, 128):
header_infile.seek(header_base_offset + sample_idx * 4)
sample_offset = unpack('>I', header_infile.read(4))[0]
if sample_offset == 0:
continue
header_max_offset = max(header_max_offset, header_base_offset+sample_offset)
outfile_name_base = '%s%02x_%02x_%02x' % (args.prefix, file_idx, bank_idx, sample_idx)
if args.japanese:
sample_rate = JA_SAMPLE_RATES[file_idx]
else:
sample_rate = EN_SAMPLE_RATES[file_idx]
process_sample_pair(header_infile, data_infile, sample_offset, outfile_name_base, sample_rate)
# next bank
header_base_offset = header_max_offset + 0x20
# next file
data_base_offset = (data_max_offset + 15)//16*16
#print data_max_offset,data_base_offset
file_idx += 1
def process_sample_pair(header_infile, data_infile, sample_header_offset, outfile_name_base, sample_rate):
global header_base_offset
header_infile.seek(header_base_offset+sample_header_offset+0x10)
true_header_offset1, true_header_offset2 = unpack('>IxxxxI', header_infile.read(12))
outfile_name = outfile_name_base + "_0"
process_sample(header_infile, data_infile, true_header_offset1, sample_rate, outfile_name)
if true_header_offset2 != 0:
outfile_name = outfile_name_base + "_1"
process_sample(header_infile, data_infile, true_header_offset2, sample_rate, outfile_name)
def process_sample(header_infile, data_infile, true_header_offset, sample_rate, outfile_name):
global data_max_offset, header_base_offset, data_base_offset
header_infile.seek(header_base_offset+true_header_offset)
sample_size, sample_offset, info_offset, coef_offset = \
unpack('>IIII', header_infile.read(16))
format = 0
if sample_size >= 0x20000000:
sample_size -= 0x20000000
format = 1
data_max_offset = max(data_max_offset, data_base_offset+sample_offset+sample_size)
if format == 1:
outfile_name += '.bin'
print 'dumping %s at %08x, size %08x' % (outfile_name, data_base_offset+sample_offset, sample_size)
data_infile.seek(sample_offset+data_base_offset)
with open(outfile_name, 'wb') as outfile:
for i in range(sample_size):
outfile.write(data_infile.read(1))
return
# read general header
header_infile.seek(header_base_offset+info_offset)
unk1, sample_count, unk2, unk3 = unpack('>IIII', header_infile.read(16))
# read coefficient bank
header_infile.seek(header_base_offset+coef_offset)
channels1, npredictors = unpack('>II', header_infile.read(8))
coefs = {}
for i in range(0,npredictors*16):
coefs[i] = unpack('>h', header_infile.read(2))[0]
outfile_name += '.wav'
print 'decoding %s at %08x, size %08x, samples %d' % (outfile_name, data_base_offset+sample_offset, sample_size, sample_count)
with my_wave_open(outfile_name, 'wb') as outfile:
outfile.setnchannels(1)
outfile.setsampwidth(2)
outfile.setframerate(sample_rate)
outfile.setnframes(sample_count)
decode_VADPCM(npredictors, coefs, sample_offset, sample_count, data_infile, outfile)
# based on icemario's code as found in N64AIFCAudio.cpp
# clips a little...
def decode_VADPCM(npredictors, coefs, sample_offset, sample_count, data_infile, outfile):
#print "decode at %08x" % (data_base_offset+sample_offset)
data_infile.seek(data_base_offset+sample_offset)
clip_count = 0
hist = (0,0,0,0,0,0,0,0)
out = {}
for i in xrange(0, sample_count, 16):
frame = data_infile.read(9)
scale = 1<<(ord(frame[0])>>4)
pred = (ord(frame[0])&0xf) * 16
for k in range(2):
samples = {}
for j in range(8):
sample = ord(frame[1+k*4+j//2])
if (j&1):
sample = sample&0xf
else:
sample = sample>>4
if sample >= 8:
sample -= 16
samples[j] = sample * scale
for j in range(8):
total = coefs[pred+0+j] * hist[6]
total += coefs[pred+8+j] * hist[7]
if j>0:
for x in range(j):
total += samples[((j-1)-x)] * coefs[pred+8+x]
total = ((samples[j] << 11) + total) >> 11
if (total > 32767):
total = 32767
clip_count += 1
elif total < -32768:
total = -32768
clip_count += 1
outfile.writeframesraw(pack('<h', total))
out[j] = total
hist = out
out = {}
if clip_count > 0:
print "clipped %d times" % clip_count
# @TODO@ -- argparse calls sys.exit() in case of '--help' or failure
# @TODO@ -- I do not like argparse.FileType at all
def parseArgs(argv):
parser = argparse.ArgumentParser(description="Star Fox 64 sound ripper")
parser.add_argument(
"ctl_file",
type=argparse.FileType('rb'),
help="filename of the CTL file"
)
parser.add_argument(
"tbl_file",
type=argparse.FileType('rb'),
help="filename of the TBL file"
)
parser.add_argument(
"--prefix",
default="sample_",
help="prefix for output files"
)
parser.add_argument(
"--japanese", "--ja", "--jp",
action="store_true",
default=False,
help="use sample rates for Japanese ROM"
)
return parser.parse_args(argv)
if __name__ == '__main__':
sys.exit(main())
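The inner loop of `decode_VADPCM` above unpacks two 4-bit two's-complement samples per byte and sign-extends them (nibble values 8..15 map to -8..-1) before applying the frame's scale factor. That step can be checked in isolation; this is a sketch of just the nibble decode, not the full predictor:

```python
def unpack_nibbles(byte, scale):
    """Split one byte into two signed 4-bit samples, high nibble first,
    then apply the frame scale (mirrors the sample loop in decode_VADPCM)."""
    samples = []
    for nibble in (byte >> 4, byte & 0xF):
        if nibble >= 8:       # sign-extend 4-bit two's complement
            nibble -= 16
        samples.append(nibble * scale)
    return samples

print(unpack_nibbles(0x7F, 2))  # high nibble 7 -> 14, low nibble 15 -> -2
```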
| mit |
jaapz/werkzeug | tests/test_datastructures.py | 25 | 29192 | # -*- coding: utf-8 -*-
"""
tests.datastructures
~~~~~~~~~~~~~~~~~~~~
Tests the functionality of the provided Werkzeug
datastructures.
Classes prefixed with an underscore are mixins and are not discovered by
the test runner.
TODO:
- FileMultiDict
- Immutable types undertested
- Split up dict tests
:copyright: (c) 2014 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
from __future__ import with_statement
import pytest
from tests import strict_eq
import pickle
from contextlib import contextmanager
from copy import copy, deepcopy
from werkzeug import datastructures
from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \
iterlistvalues, text_type, PY2
from werkzeug.exceptions import BadRequestKeyError
class TestNativeItermethods(object):
def test_basic(self):
@datastructures.native_itermethods(['keys', 'values', 'items'])
class StupidDict(object):
def keys(self, multi=1):
return iter(['a', 'b', 'c'] * multi)
def values(self, multi=1):
return iter([1, 2, 3] * multi)
def items(self, multi=1):
return iter(zip(iterkeys(self, multi=multi),
itervalues(self, multi=multi)))
d = StupidDict()
expected_keys = ['a', 'b', 'c']
expected_values = [1, 2, 3]
expected_items = list(zip(expected_keys, expected_values))
assert list(iterkeys(d)) == expected_keys
assert list(itervalues(d)) == expected_values
assert list(iteritems(d)) == expected_items
assert list(iterkeys(d, 2)) == expected_keys * 2
assert list(itervalues(d, 2)) == expected_values * 2
assert list(iteritems(d, 2)) == expected_items * 2
class _MutableMultiDictTests(object):
storage_class = None
def test_pickle(self):
cls = self.storage_class
def create_instance(module=None):
if module is None:
d = cls()
else:
old = cls.__module__
cls.__module__ = module
d = cls()
cls.__module__ = old
d.setlist(b'foo', [1, 2, 3, 4])
d.setlist(b'bar', b'foo bar baz'.split())
return d
for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
d = create_instance()
s = pickle.dumps(d, protocol)
ud = pickle.loads(s)
assert type(ud) == type(d)
assert ud == d
alternative = pickle.dumps(create_instance('werkzeug'), protocol)
assert pickle.loads(alternative) == d
ud[b'newkey'] = b'bla'
assert ud != d
def test_basic_interface(self):
md = self.storage_class()
assert isinstance(md, dict)
mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
('a', 1), ('a', 3), ('d', 4), ('c', 3)]
md = self.storage_class(mapping)
# simple getitem gives the first value
assert md['a'] == 1
assert md['c'] == 3
with pytest.raises(KeyError):
md['e']
assert md.get('a') == 1
# list getitem
assert md.getlist('a') == [1, 2, 1, 3]
assert md.getlist('d') == [3, 4]
# do not raise if key not found
assert md.getlist('x') == []
# simple setitem overwrites all values
md['a'] = 42
assert md.getlist('a') == [42]
# list setitem
md.setlist('a', [1, 2, 3])
assert md['a'] == 1
assert md.getlist('a') == [1, 2, 3]
# verify that it does not change original lists
l1 = [1, 2, 3]
md.setlist('a', l1)
del l1[:]
assert md['a'] == 1
# setdefault, setlistdefault
assert md.setdefault('u', 23) == 23
assert md.getlist('u') == [23]
del md['u']
md.setlist('u', [-1, -2])
# delitem
del md['u']
with pytest.raises(KeyError):
md['u']
del md['d']
assert md.getlist('d') == []
# keys, values, items, lists
assert list(sorted(md.keys())) == ['a', 'b', 'c']
assert list(sorted(iterkeys(md))) == ['a', 'b', 'c']
assert list(sorted(itervalues(md))) == [1, 2, 3]
assert list(sorted(itervalues(md))) == [1, 2, 3]
assert list(sorted(md.items())) == [('a', 1), ('b', 2), ('c', 3)]
assert list(sorted(md.items(multi=True))) == \
[('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)]
assert list(sorted(iteritems(md))) == [('a', 1), ('b', 2), ('c', 3)]
assert list(sorted(iteritems(md, multi=True))) == \
[('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)]
assert list(sorted(md.lists())) == \
[('a', [1, 2, 3]), ('b', [2]), ('c', [3])]
assert list(sorted(iterlists(md))) == \
[('a', [1, 2, 3]), ('b', [2]), ('c', [3])]
# copy method
c = md.copy()
assert c['a'] == 1
assert c.getlist('a') == [1, 2, 3]
# copy method 2
c = copy(md)
assert c['a'] == 1
assert c.getlist('a') == [1, 2, 3]
# deepcopy method
c = md.deepcopy()
assert c['a'] == 1
assert c.getlist('a') == [1, 2, 3]
# deepcopy method 2
c = deepcopy(md)
assert c['a'] == 1
assert c.getlist('a') == [1, 2, 3]
# update with a multidict
od = self.storage_class([('a', 4), ('a', 5), ('y', 0)])
md.update(od)
assert md.getlist('a') == [1, 2, 3, 4, 5]
assert md.getlist('y') == [0]
# update with a regular dict
md = c
od = {'a': 4, 'y': 0}
md.update(od)
assert md.getlist('a') == [1, 2, 3, 4]
assert md.getlist('y') == [0]
# pop, poplist, popitem, popitemlist
assert md.pop('y') == 0
assert 'y' not in md
assert md.poplist('a') == [1, 2, 3, 4]
assert 'a' not in md
assert md.poplist('missing') == []
# remaining: b=2, c=3
popped = md.popitem()
assert popped in [('b', 2), ('c', 3)]
popped = md.popitemlist()
assert popped in [('b', [2]), ('c', [3])]
# type conversion
md = self.storage_class({'a': '4', 'b': ['2', '3']})
assert md.get('a', type=int) == 4
assert md.getlist('b', type=int) == [2, 3]
# repr
md = self.storage_class([('a', 1), ('a', 2), ('b', 3)])
assert "('a', 1)" in repr(md)
assert "('a', 2)" in repr(md)
assert "('b', 3)" in repr(md)
# add and getlist
md.add('c', '42')
md.add('c', '23')
assert md.getlist('c') == ['42', '23']
md.add('c', 'blah')
assert md.getlist('c', type=int) == [42, 23]
# setdefault
md = self.storage_class()
md.setdefault('x', []).append(42)
md.setdefault('x', []).append(23)
assert md['x'] == [42, 23]
# to dict
md = self.storage_class()
md['foo'] = 42
md.add('bar', 1)
md.add('bar', 2)
assert md.to_dict() == {'foo': 42, 'bar': 1}
assert md.to_dict(flat=False) == {'foo': [42], 'bar': [1, 2]}
# popitem from empty dict
with pytest.raises(KeyError):
self.storage_class().popitem()
with pytest.raises(KeyError):
self.storage_class().popitemlist()
# key errors are of a special type
with pytest.raises(BadRequestKeyError):
self.storage_class()[42]
# setlist works
md = self.storage_class()
md['foo'] = 42
md.setlist('foo', [1, 2])
assert md.getlist('foo') == [1, 2]
class _ImmutableDictTests(object):
storage_class = None
def test_follows_dict_interface(self):
cls = self.storage_class
data = {'foo': 1, 'bar': 2, 'baz': 3}
d = cls(data)
assert d['foo'] == 1
assert d['bar'] == 2
assert d['baz'] == 3
assert sorted(d.keys()) == ['bar', 'baz', 'foo']
assert 'foo' in d
assert 'foox' not in d
assert len(d) == 3
def test_copies_are_mutable(self):
cls = self.storage_class
immutable = cls({'a': 1})
with pytest.raises(TypeError):
immutable.pop('a')
mutable = immutable.copy()
mutable.pop('a')
assert 'a' in immutable
assert mutable is not immutable
assert copy(immutable) is immutable
def test_dict_is_hashable(self):
cls = self.storage_class
immutable = cls({'a': 1, 'b': 2})
immutable2 = cls({'a': 2, 'b': 2})
x = set([immutable])
assert immutable in x
assert immutable2 not in x
x.discard(immutable)
assert immutable not in x
assert immutable2 not in x
x.add(immutable2)
assert immutable not in x
assert immutable2 in x
x.add(immutable)
assert immutable in x
assert immutable2 in x
class TestImmutableTypeConversionDict(_ImmutableDictTests):
storage_class = datastructures.ImmutableTypeConversionDict
class TestImmutableMultiDict(_ImmutableDictTests):
storage_class = datastructures.ImmutableMultiDict
def test_multidict_is_hashable(self):
cls = self.storage_class
immutable = cls({'a': [1, 2], 'b': 2})
immutable2 = cls({'a': [1], 'b': 2})
x = set([immutable])
assert immutable in x
assert immutable2 not in x
x.discard(immutable)
assert immutable not in x
assert immutable2 not in x
x.add(immutable2)
assert immutable not in x
assert immutable2 in x
x.add(immutable)
assert immutable in x
assert immutable2 in x
class TestImmutableDict(_ImmutableDictTests):
storage_class = datastructures.ImmutableDict
class TestImmutableOrderedMultiDict(_ImmutableDictTests):
storage_class = datastructures.ImmutableOrderedMultiDict
def test_ordered_multidict_is_hashable(self):
a = self.storage_class([('a', 1), ('b', 1), ('a', 2)])
b = self.storage_class([('a', 1), ('a', 2), ('b', 1)])
assert hash(a) != hash(b)
class TestMultiDict(_MutableMultiDictTests):
storage_class = datastructures.MultiDict
def test_multidict_pop(self):
make_d = lambda: self.storage_class({'foo': [1, 2, 3, 4]})
d = make_d()
assert d.pop('foo') == 1
assert not d
d = make_d()
assert d.pop('foo', 32) == 1
assert not d
d = make_d()
assert d.pop('foos', 32) == 32
assert d
with pytest.raises(KeyError):
d.pop('foos')
def test_setlistdefault(self):
md = self.storage_class()
assert md.setlistdefault('u', [-1, -2]) == [-1, -2]
assert md.getlist('u') == [-1, -2]
assert md['u'] == -1
def test_iter_interfaces(self):
mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
('a', 1), ('a', 3), ('d', 4), ('c', 3)]
md = self.storage_class(mapping)
assert list(zip(md.keys(), md.listvalues())) == list(md.lists())
assert list(zip(md, iterlistvalues(md))) == list(iterlists(md))
assert list(zip(iterkeys(md), iterlistvalues(md))) == \
list(iterlists(md))
class TestOrderedMultiDict(_MutableMultiDictTests):
storage_class = datastructures.OrderedMultiDict
def test_ordered_interface(self):
cls = self.storage_class
d = cls()
assert not d
d.add('foo', 'bar')
assert len(d) == 1
d.add('foo', 'baz')
assert len(d) == 1
assert list(iteritems(d)) == [('foo', 'bar')]
assert list(d) == ['foo']
assert list(iteritems(d, multi=True)) == \
[('foo', 'bar'), ('foo', 'baz')]
del d['foo']
assert not d
assert len(d) == 0
assert list(d) == []
d.update([('foo', 1), ('foo', 2), ('bar', 42)])
d.add('foo', 3)
assert d.getlist('foo') == [1, 2, 3]
assert d.getlist('bar') == [42]
assert list(iteritems(d)) == [('foo', 1), ('bar', 42)]
expected = ['foo', 'bar']
assert list(d.keys()) == expected
assert list(d) == expected
assert list(iterkeys(d)) == expected
assert list(iteritems(d, multi=True)) == \
[('foo', 1), ('foo', 2), ('bar', 42), ('foo', 3)]
assert len(d) == 2
assert d.pop('foo') == 1
assert d.pop('blafasel', None) is None
assert d.pop('blafasel', 42) == 42
assert len(d) == 1
assert d.poplist('bar') == [42]
assert not d
d.get('missingkey') is None
d.add('foo', 42)
d.add('foo', 23)
d.add('bar', 2)
d.add('foo', 42)
assert d == datastructures.MultiDict(d)
id = self.storage_class(d)
assert d == id
d.add('foo', 2)
assert d != id
d.update({'blah': [1, 2, 3]})
assert d['blah'] == 1
assert d.getlist('blah') == [1, 2, 3]
# setlist works
d = self.storage_class()
d['foo'] = 42
d.setlist('foo', [1, 2])
assert d.getlist('foo') == [1, 2]
with pytest.raises(BadRequestKeyError):
d.pop('missing')
with pytest.raises(BadRequestKeyError):
d['missing']
# popping
d = self.storage_class()
d.add('foo', 23)
d.add('foo', 42)
d.add('foo', 1)
assert d.popitem() == ('foo', 23)
with pytest.raises(BadRequestKeyError):
d.popitem()
assert not d
d.add('foo', 23)
d.add('foo', 42)
d.add('foo', 1)
assert d.popitemlist() == ('foo', [23, 42, 1])
with pytest.raises(BadRequestKeyError):
d.popitemlist()
def test_iterables(self):
a = datastructures.MultiDict((("key_a", "value_a"),))
b = datastructures.MultiDict((("key_b", "value_b"),))
ab = datastructures.CombinedMultiDict((a, b))
assert sorted(ab.lists()) == [('key_a', ['value_a']), ('key_b', ['value_b'])]
assert sorted(ab.listvalues()) == [['value_a'], ['value_b']]
assert sorted(ab.keys()) == ["key_a", "key_b"]
assert sorted(iterlists(ab)) == [('key_a', ['value_a']), ('key_b', ['value_b'])]
assert sorted(iterlistvalues(ab)) == [['value_a'], ['value_b']]
assert sorted(iterkeys(ab)) == ["key_a", "key_b"]
class TestCombinedMultiDict(object):
storage_class = datastructures.CombinedMultiDict
def test_basic_interface(self):
d1 = datastructures.MultiDict([('foo', '1')])
d2 = datastructures.MultiDict([('bar', '2'), ('bar', '3')])
d = self.storage_class([d1, d2])
# lookup
assert d['foo'] == '1'
assert d['bar'] == '2'
assert d.getlist('bar') == ['2', '3']
assert sorted(d.items()) == [('bar', '2'), ('foo', '1')]
assert sorted(d.items(multi=True)) == \
[('bar', '2'), ('bar', '3'), ('foo', '1')]
assert 'missingkey' not in d
assert 'foo' in d
# type lookup
assert d.get('foo', type=int) == 1
assert d.getlist('bar', type=int) == [2, 3]
# get key errors for missing stuff
with pytest.raises(KeyError):
d['missing']
# make sure that they are immutable
with pytest.raises(TypeError):
d['foo'] = 'blub'
# copies are immutable
d = d.copy()
with pytest.raises(TypeError):
d['foo'] = 'blub'
# make sure lists merges
md1 = datastructures.MultiDict((("foo", "bar"),))
md2 = datastructures.MultiDict((("foo", "blafasel"),))
x = self.storage_class((md1, md2))
assert list(iterlists(x)) == [('foo', ['bar', 'blafasel'])]
def test_length(self):
d1 = datastructures.MultiDict([('foo', '1')])
d2 = datastructures.MultiDict([('bar', '2')])
assert len(d1) == len(d2) == 1
d = self.storage_class([d1, d2])
assert len(d) == 2
d1.clear()
assert len(d1) == 0
assert len(d) == 1
class TestHeaders(object):
storage_class = datastructures.Headers
def test_basic_interface(self):
headers = self.storage_class()
headers.add('Content-Type', 'text/plain')
headers.add('X-Foo', 'bar')
assert 'x-Foo' in headers
assert 'Content-type' in headers
headers['Content-Type'] = 'foo/bar'
assert headers['Content-Type'] == 'foo/bar'
assert len(headers.getlist('Content-Type')) == 1
# list conversion
assert headers.to_wsgi_list() == [
('Content-Type', 'foo/bar'),
('X-Foo', 'bar')
]
assert str(headers) == (
"Content-Type: foo/bar\r\n"
"X-Foo: bar\r\n"
"\r\n"
)
assert str(self.storage_class()) == "\r\n"
# extended add
headers.add('Content-Disposition', 'attachment', filename='foo')
assert headers['Content-Disposition'] == 'attachment; filename=foo'
headers.add('x', 'y', z='"')
assert headers['x'] == r'y; z="\""'
def test_defaults_and_conversion(self):
# defaults
headers = self.storage_class([
('Content-Type', 'text/plain'),
('X-Foo', 'bar'),
('X-Bar', '1'),
('X-Bar', '2')
])
assert headers.getlist('x-bar') == ['1', '2']
assert headers.get('x-Bar') == '1'
assert headers.get('Content-Type') == 'text/plain'
assert headers.setdefault('X-Foo', 'nope') == 'bar'
assert headers.setdefault('X-Bar', 'nope') == '1'
assert headers.setdefault('X-Baz', 'quux') == 'quux'
assert headers.setdefault('X-Baz', 'nope') == 'quux'
headers.pop('X-Baz')
# type conversion
assert headers.get('x-bar', type=int) == 1
assert headers.getlist('x-bar', type=int) == [1, 2]
# list like operations
assert headers[0] == ('Content-Type', 'text/plain')
assert headers[:1] == self.storage_class([('Content-Type', 'text/plain')])
del headers[:2]
del headers[-1]
assert headers == self.storage_class([('X-Bar', '1')])
def test_copying(self):
a = self.storage_class([('foo', 'bar')])
b = a.copy()
a.add('foo', 'baz')
assert a.getlist('foo') == ['bar', 'baz']
assert b.getlist('foo') == ['bar']
def test_popping(self):
headers = self.storage_class([('a', 1)])
assert headers.pop('a') == 1
assert headers.pop('b', 2) == 2
with pytest.raises(KeyError):
headers.pop('c')
def test_set_arguments(self):
a = self.storage_class()
a.set('Content-Disposition', 'useless')
a.set('Content-Disposition', 'attachment', filename='foo')
assert a['Content-Disposition'] == 'attachment; filename=foo'
def test_reject_newlines(self):
h = self.storage_class()
for variation in 'foo\nbar', 'foo\r\nbar', 'foo\rbar':
with pytest.raises(ValueError):
h['foo'] = variation
with pytest.raises(ValueError):
h.add('foo', variation)
with pytest.raises(ValueError):
h.add('foo', 'test', option=variation)
with pytest.raises(ValueError):
h.set('foo', variation)
with pytest.raises(ValueError):
h.set('foo', 'test', option=variation)
def test_slicing(self):
# there's nothing wrong with these being native strings
# Headers doesn't care about the data types
h = self.storage_class()
h.set('X-Foo-Poo', 'bleh')
h.set('Content-Type', 'application/whocares')
h.set('X-Forwarded-For', '192.168.0.123')
h[:] = [(k, v) for k, v in h if k.startswith(u'X-')]
assert list(h) == [
('X-Foo-Poo', 'bleh'),
('X-Forwarded-For', '192.168.0.123')
]
def test_bytes_operations(self):
h = self.storage_class()
h.set('X-Foo-Poo', 'bleh')
h.set('X-Whoops', b'\xff')
assert h.get('x-foo-poo', as_bytes=True) == b'bleh'
assert h.get('x-whoops', as_bytes=True) == b'\xff'
def test_to_wsgi_list(self):
h = self.storage_class()
h.set(u'Key', u'Value')
for key, value in h.to_wsgi_list():
if PY2:
strict_eq(key, b'Key')
strict_eq(value, b'Value')
else:
strict_eq(key, u'Key')
strict_eq(value, u'Value')
class TestEnvironHeaders(object):
storage_class = datastructures.EnvironHeaders
def test_basic_interface(self):
# this happens in multiple WSGI servers because they
        # use a very naive way to convert the headers
broken_env = {
'HTTP_CONTENT_TYPE': 'text/html',
'CONTENT_TYPE': 'text/html',
'HTTP_CONTENT_LENGTH': '0',
'CONTENT_LENGTH': '0',
'HTTP_ACCEPT': '*',
'wsgi.version': (1, 0)
}
headers = self.storage_class(broken_env)
assert headers
assert len(headers) == 3
assert sorted(headers) == [
('Accept', '*'),
('Content-Length', '0'),
('Content-Type', 'text/html')
]
assert not self.storage_class({'wsgi.version': (1, 0)})
assert len(self.storage_class({'wsgi.version': (1, 0)})) == 0
def test_return_type_is_unicode(self):
# environ contains native strings; we return unicode
headers = self.storage_class({
'HTTP_FOO': '\xe2\x9c\x93',
'CONTENT_TYPE': 'text/plain',
})
assert headers['Foo'] == u"\xe2\x9c\x93"
assert isinstance(headers['Foo'], text_type)
assert isinstance(headers['Content-Type'], text_type)
iter_output = dict(iter(headers))
assert iter_output['Foo'] == u"\xe2\x9c\x93"
assert isinstance(iter_output['Foo'], text_type)
assert isinstance(iter_output['Content-Type'], text_type)
def test_bytes_operations(self):
foo_val = '\xff'
h = self.storage_class({
'HTTP_X_FOO': foo_val
})
assert h.get('x-foo', as_bytes=True) == b'\xff'
assert h.get('x-foo') == u'\xff'
class TestHeaderSet(object):
storage_class = datastructures.HeaderSet
def test_basic_interface(self):
hs = self.storage_class()
hs.add('foo')
hs.add('bar')
assert 'Bar' in hs
assert hs.find('foo') == 0
assert hs.find('BAR') == 1
assert hs.find('baz') < 0
hs.discard('missing')
hs.discard('foo')
assert hs.find('foo') < 0
assert hs.find('bar') == 0
with pytest.raises(IndexError):
hs.index('missing')
assert hs.index('bar') == 0
assert hs
hs.clear()
assert not hs
class TestImmutableList(object):
storage_class = datastructures.ImmutableList
def test_list_hashable(self):
t = (1, 2, 3, 4)
l = self.storage_class(t)
assert hash(t) == hash(l)
assert t != l
def make_call_asserter(func=None):
"""Utility to assert a certain number of function calls.
:param func: Additional callback for each function call.
>>> assert_calls, func = make_call_asserter()
    >>> with assert_calls(2):
    ...     func()
    ...     func()
"""
calls = [0]
@contextmanager
def asserter(count, msg=None):
calls[0] = 0
yield
        assert calls[0] == count, msg
def wrapped(*args, **kwargs):
calls[0] += 1
if func is not None:
return func(*args, **kwargs)
return asserter, wrapped
class TestCallbackDict(object):
storage_class = datastructures.CallbackDict
def test_callback_dict_reads(self):
assert_calls, func = make_call_asserter()
initial = {'a': 'foo', 'b': 'bar'}
dct = self.storage_class(initial=initial, on_update=func)
with assert_calls(0, 'callback triggered by read-only method'):
# read-only methods
dct['a']
dct.get('a')
pytest.raises(KeyError, lambda: dct['x'])
'a' in dct
list(iter(dct))
dct.copy()
with assert_calls(0, 'callback triggered without modification'):
# methods that may write but don't
dct.pop('z', None)
dct.setdefault('a')
def test_callback_dict_writes(self):
assert_calls, func = make_call_asserter()
initial = {'a': 'foo', 'b': 'bar'}
dct = self.storage_class(initial=initial, on_update=func)
with assert_calls(8, 'callback not triggered by write method'):
# always-write methods
dct['z'] = 123
dct['z'] = 123 # must trigger again
del dct['z']
dct.pop('b', None)
dct.setdefault('x')
dct.popitem()
dct.update([])
dct.clear()
with assert_calls(0, 'callback triggered by failed del'):
pytest.raises(KeyError, lambda: dct.__delitem__('x'))
with assert_calls(0, 'callback triggered by failed pop'):
pytest.raises(KeyError, lambda: dct.pop('x'))
class TestCacheControl(object):
def test_repr(self):
cc = datastructures.RequestCacheControl(
[("max-age", "0"), ("private", "True")],
)
assert repr(cc) == "<RequestCacheControl max-age='0' private='True'>"
class TestAccept(object):
storage_class = datastructures.Accept
def test_accept_basic(self):
accept = self.storage_class([('tinker', 0), ('tailor', 0.333),
('soldier', 0.667), ('sailor', 1)])
# check __getitem__ on indices
assert accept[3] == ('tinker', 0)
assert accept[2] == ('tailor', 0.333)
assert accept[1] == ('soldier', 0.667)
        assert accept[0] == ('sailor', 1)
# check __getitem__ on string
assert accept['tinker'] == 0
assert accept['tailor'] == 0.333
assert accept['soldier'] == 0.667
assert accept['sailor'] == 1
assert accept['spy'] == 0
# check quality method
assert accept.quality('tinker') == 0
assert accept.quality('tailor') == 0.333
assert accept.quality('soldier') == 0.667
assert accept.quality('sailor') == 1
assert accept.quality('spy') == 0
# check __contains__
assert 'sailor' in accept
assert 'spy' not in accept
# check index method
assert accept.index('tinker') == 3
assert accept.index('tailor') == 2
assert accept.index('soldier') == 1
assert accept.index('sailor') == 0
with pytest.raises(ValueError):
accept.index('spy')
# check find method
assert accept.find('tinker') == 3
assert accept.find('tailor') == 2
assert accept.find('soldier') == 1
assert accept.find('sailor') == 0
assert accept.find('spy') == -1
# check to_header method
assert accept.to_header() == \
'sailor,soldier;q=0.667,tailor;q=0.333,tinker;q=0'
# check best_match method
assert accept.best_match(['tinker', 'tailor', 'soldier', 'sailor'],
default=None) == 'sailor'
assert accept.best_match(['tinker', 'tailor', 'soldier'],
default=None) == 'soldier'
assert accept.best_match(['tinker', 'tailor'], default=None) == \
'tailor'
assert accept.best_match(['tinker'], default=None) is None
assert accept.best_match(['tinker'], default='x') == 'x'
def test_accept_wildcard(self):
accept = self.storage_class([('*', 0), ('asterisk', 1)])
assert '*' in accept
assert accept.best_match(['asterisk', 'star'], default=None) == \
'asterisk'
assert accept.best_match(['star'], default=None) is None
@pytest.mark.skipif(True, reason='Werkzeug doesn\'t respect specificity.')
def test_accept_wildcard_specificity(self):
accept = self.storage_class([('asterisk', 0), ('star', 0.5), ('*', 1)])
assert accept.best_match(['star', 'asterisk'], default=None) == 'star'
assert accept.best_match(['asterisk', 'star'], default=None) == 'star'
assert accept.best_match(['asterisk', 'times'], default=None) == \
'times'
assert accept.best_match(['asterisk'], default=None) is None
class TestFileStorage(object):
storage_class = datastructures.FileStorage
def test_mimetype_always_lowercase(self):
file_storage = self.storage_class(content_type='APPLICATION/JSON')
assert file_storage.mimetype == 'application/json'
| bsd-3-clause |
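The `Accept` tests that close the file above all revolve around quality-sorted matching: each acceptable value carries a q-factor, and `best_match` picks the offered value with the highest non-zero quality. The core idea can be sketched in plain Python (a hypothetical `best_match` helper for illustration, not werkzeug's implementation):

```python
def best_match(accepted, offered, default=None):
    """accepted: list of (value, quality) pairs, as parsed from an Accept header.

    Return the offered value with the highest non-zero quality, or `default`
    if nothing acceptable is offered.
    """
    best, best_q = default, 0
    for value in offered:
        for accept_value, quality in accepted:
            if accept_value == value and quality > best_q:
                best, best_q = value, quality
    return best


accept = [('tinker', 0), ('tailor', 0.333), ('soldier', 0.667), ('sailor', 1)]
print(best_match(accept, ['tinker', 'tailor', 'soldier']))  # -> 'soldier'
print(best_match(accept, ['tinker'], default='x'))          # -> 'x'
```

A quality of 0 means "explicitly not acceptable", which is why `'tinker'` falls through to the default — the same behaviour the `test_accept_basic` assertions above verify against the real `Accept` class.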
trunca/enigma2 | lib/python/Screens/Screen.py | 10 | 4777 | from Tools.Profile import profile
profile("LOAD:GUISkin")
from Components.GUISkin import GUISkin
profile("LOAD:Source")
from Components.Sources.Source import Source
profile("LOAD:GUIComponent")
from Components.GUIComponent import GUIComponent
profile("LOAD:eRCInput")
from enigma import eRCInput
class Screen(dict, GUISkin):
	NO_SUSPEND, SUSPEND_STOPS, SUSPEND_PAUSES = range(3)
ALLOW_SUSPEND = False
global_screen = None
def __init__(self, session, parent = None):
dict.__init__(self)
self.skinName = self.__class__.__name__
self.session = session
self.parent = parent
GUISkin.__init__(self)
self.onClose = [ ]
self.onFirstExecBegin = [ ]
self.onExecBegin = [ ]
self.onExecEnd = [ ]
self.onShown = [ ]
self.onShow = [ ]
self.onHide = [ ]
self.execing = False
self.shown = True
# already shown is false until the screen is really shown (after creation)
self.already_shown = False
self.renderer = [ ]
# in order to support screens *without* a help,
# we need the list in every screen. how ironic.
self.helpList = [ ]
self.close_on_next_exec = None
# stand alone screens (for example web screens)
# don't care about having or not having focus.
self.stand_alone = False
self.keyboardMode = None
def saveKeyboardMode(self):
rcinput = eRCInput.getInstance()
self.keyboardMode = rcinput.getKeyboardMode()
def setKeyboardModeAscii(self):
rcinput = eRCInput.getInstance()
rcinput.setKeyboardMode(rcinput.kmAscii)
def setKeyboardModeNone(self):
rcinput = eRCInput.getInstance()
rcinput.setKeyboardMode(rcinput.kmNone)
def restoreKeyboardMode(self):
rcinput = eRCInput.getInstance()
if self.keyboardMode is not None:
rcinput.setKeyboardMode(self.keyboardMode)
def execBegin(self):
self.active_components = [ ]
if self.close_on_next_exec is not None:
tmp = self.close_on_next_exec
self.close_on_next_exec = None
self.execing = True
self.close(*tmp)
else:
single = self.onFirstExecBegin
self.onFirstExecBegin = []
for x in self.onExecBegin + single:
x()
if not self.stand_alone and self.session.current_dialog != self:
return
# assert self.session == None, "a screen can only exec once per time"
# self.session = session
for val in self.values() + self.renderer:
val.execBegin()
if not self.stand_alone and self.session.current_dialog != self:
return
self.active_components.append(val)
self.execing = True
for x in self.onShown:
x()
def execEnd(self):
active_components = self.active_components
# for (name, val) in self.items():
self.active_components = None
if active_components is not None:
for val in active_components:
val.execEnd()
# assert self.session != None, "execEnd on non-execing screen!"
# self.session = None
self.execing = False
for x in self.onExecEnd:
x()
# never call this directly - it will be called from the session!
def doClose(self):
self.hide()
for x in self.onClose:
x()
# fixup circular references
del self.helpList
GUISkin.close(self)
# first disconnect all render from their sources.
# we might split this out into a "unskin"-call,
# but currently we destroy the screen afterwards
# anyway.
for val in self.renderer:
val.disconnectAll() # disconnected converter/sources and probably destroy them. Sources will not be destroyed.
del self.session
for (name, val) in self.items():
val.destroy()
del self[name]
self.renderer = [ ]
# really delete all elements now
self.__dict__.clear()
def close(self, *retval):
if not self.execing:
self.close_on_next_exec = retval
else:
self.session.close(self, *retval)
def setFocus(self, o):
self.instance.setFocus(o.instance)
def show(self):
# Temporarily add to ease up identification of screens
print '[SCREENNAME] ',self.skinName
if (self.shown and self.already_shown) or not self.instance:
return
self.shown = True
self.already_shown = True
self.instance.show()
for x in self.onShow:
x()
for val in self.values() + self.renderer:
if isinstance(val, GUIComponent) or isinstance(val, Source):
val.onShow()
def hide(self):
if not self.shown or not self.instance:
return
self.shown = False
self.instance.hide()
for x in self.onHide:
x()
for val in self.values() + self.renderer:
if isinstance(val, GUIComponent) or isinstance(val, Source):
val.onHide()
def setAnimationMode(self, mode):
if self.instance:
self.instance.setAnimationMode(mode)
def __repr__(self):
return str(type(self))
def getRelatedScreen(self, name):
if name == "session":
return self.session.screen
elif name == "parent":
return self.parent
elif name == "global":
return self.global_screen
else:
return None
| gpl-2.0 |
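The `Screen` class above drives its whole lifecycle through plain lists of callbacks (`onShow`, `onHide`, `onClose`, `onExecBegin`, ...): interested components append callables, and the screen iterates the list at the matching lifecycle point. A minimal framework-free sketch of that pattern (hypothetical `Widget` class, not enigma2 API):

```python
class Widget(object):
    def __init__(self):
        self.onShow = []   # callbacks fired by show()
        self.onHide = []   # callbacks fired by hide()
        self.shown = False

    def show(self):
        if self.shown:     # mirror Screen.show(): skip if already visible
            return
        self.shown = True
        for callback in self.onShow:
            callback()

    def hide(self):
        if not self.shown:
            return
        self.shown = False
        for callback in self.onHide:
            callback()


events = []
w = Widget()
w.onShow.append(lambda: events.append('shown'))
w.onHide.append(lambda: events.append('hidden'))
w.show()
w.show()   # no-op: already shown, so callbacks fire only once
w.hide()
print(events)  # -> ['shown', 'hidden']
```

Because the lists are ordinary attributes, subscribers can be added or removed at any time without the screen knowing about them — the same reason `Screen.doClose` can simply clear `self.__dict__` to break the resulting circular references.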
NeuralEnsemble/python-neo | neo/io/blkio.py | 5 | 13527 | from .baseio import BaseIO
from neo.core import ImageSequence, Segment, Block
import numpy as np
import struct
import os
import math
import quantities as pq
class BlkIO(BaseIO):
"""
Neo IO module for optical imaging data stored as BLK file
*Usage*:
>>> from neo import io
>>> import quantities as pq
>>> r = io.BlkIO("file_blk_1.BLK",units='V',sampling_rate=1.0*pq.Hz,
... spatial_scale=1.0*pq.Hz)
>>> block = r.read_block()
reading the header
reading block
returning block
>>> block
Block with 6 segments
file_origin: 'file_blk_1.BLK'
# segments (N=6)
0: Segment with 1 imagesequences description: 'stim nb:0' # analogsignals (N=0)
1: Segment with 1 imagesequences description: 'stim nb:1' # analogsignals (N=0)
2: Segment with 1 imagesequences description: 'stim nb:2' # analogsignals (N=0)
3: Segment with 1 imagesequences description: 'stim nb:3' # analogsignals (N=0)
4: Segment with 1 imagesequences description: 'stim nb:4' # analogsignals (N=0)
5: Segment with 1 imagesequences description: 'stim nb:5' # analogsignals (N=0)
Many thanks to Thomas Deneux for the MATLAB code on which this was based.
"""
name = 'BLK IO'
description = "Neo IO module for optical imaging data stored as BLK file"
    _prefered_signal_group_mode = 'group-by-same-units'
    is_readable = True
    is_writable = False
    supported_objects = [Block, Segment, ImageSequence]
    readable_objects = supported_objects
support_lazy = False
read_params = {}
write_params = {}
extensions = []
mode = 'file'
def __init__(self, file_name=None, units=None, sampling_rate=None, spatial_scale=None, **kwargs):
BaseIO.__init__(self, file_name, **kwargs)
self.units = units
self.sampling_rate = sampling_rate
self.spatial_scale = spatial_scale
def read(self, lazy=False, **kwargs):
"""
Return all data from the file as a list of Blocks
"""
if lazy:
raise ValueError('This IO module does not support lazy loading')
return [self.read_block(lazy=lazy, units=self.units, sampling_rate=self.sampling_rate,
spatial_scale=self.spatial_scale, **kwargs)]
def read_block(self, lazy=False, **kargs):
def read(name, type, nb, dictionary, file):
if type == 'int32':
# dictionary[name] = int.from_bytes(file.read(4), byteorder=sys.byteorder, signed=True)
dictionary[name] = struct.unpack("i", file.read(4))[0]
if type == 'float32':
dictionary[name] = struct.unpack('f', file.read(4))[0]
if type == 'uint8':
l = []
for i in range(nb):
l.append(chr(struct.unpack('B', file.read(1))[0]))
dictionary[name] = l
if type == 'uint16':
l = []
for i in range(nb):
l.append((struct.unpack('H', file.read(2)))[0])
dictionary[name] = l
if type == 'short':
dictionary[name] = struct.unpack('h', file.read(2))[0]
return dictionary
def read_header(file_name):
file = open(file_name, "rb")
i = [
['file_size', 'int32', 1],
['checksum_header', 'int32', 1],
['check_data', 'int32', 1],
['lenheader', 'int32', 1],
['versionid', 'float32', 1],
['filetype', 'int32', 1],
['filesubtype', 'int32', 1],
['datatype', 'int32', 1],
['sizeof', 'int32', 1],
['framewidth', 'int32', 1],
['frameheight', 'int32', 1],
['nframesperstim', 'int32', 1],
['nstimuli', 'int32', 1],
['initialxbinfactor', 'int32', 1],
['initialybinfactor', 'int32', 1],
['xbinfactor', 'int32', 1],
['ybinfactor', 'int32', 1],
['username', 'uint8', 32],
['recordingdate', 'uint8', 16],
['x1roi', 'int32', 1],
['y1roi', 'int32', 1],
['x2roi', 'int32', 1],
['y2roi', 'int32', 1],
['stimoffs', 'int32', 1],
['stimsize', 'int32', 1],
['frameoffs', 'int32', 1],
['framesize', 'int32', 1],
['refoffs', 'int32', 1],
['refsize', 'int32', 1],
['refwidth', 'int32', 1],
['refheight', 'int32', 1],
['whichblocks', 'uint16', 16],
['whichframe', 'uint16', 16],
['loclip', 'int32', 1],
['hiclip', 'int32', 1],
['lopass', 'int32', 1],
['hipass', 'int32', 1],
['operationsperformed', 'uint8', 64],
['magnification', 'float32', 1],
['gain', 'uint16', 1],
['wavelength', 'uint16', 1],
['exposuretime', 'int32', 1],
['nrepetitions', 'int32', 1],
['acquisitiondelay', 'int32', 1],
['interstiminterval', 'int32', 1],
['creationdate', 'uint8', 16],
['datafilename', 'uint8', 64],
['orareserved', 'uint8', 256]
]
dic = {}
for x in i:
dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file)
if dic['filesubtype'] == 13:
i = [
["includesrefframe", "int32", 1],
["temp", "uint8", 128],
["ntrials", "int32", 1],
["scalefactors", "int32", 1],
["cameragain", "short", 1],
["ampgain", "short", 1],
["samplingrate", "short", 1],
["average", "short", 1],
["exposuretime", "short", 1],
["samplingaverage", "short", 1],
["presentaverage", "short", 1],
["framesperstim", "short", 1],
["trialsperblock", "short", 1],
["sizeofanalogbufferinframes", "short", 1],
["cameratrials", "short", 1],
["filler", "uint8", 106],
["dyedaqreserved", "uint8", 106]
]
for x in i:
dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file)
# nottested
# p.listofstimuli=temp(1:max(find(temp~=0)))'; % up to first non-zero stimulus
                temp = np.array([ord(c) for c in dic["temp"]])
                dic["listofstimuli"] = dic["temp"][0:np.argwhere(temp != 0).max() + 1]
else:
i = [
["includesrefframe", "int32", 1],
["listofstimuli", "uint8", 256],
["nvideoframesperdataframe", "int32", 1],
["ntrials", "int32", 1],
["scalefactor", "int32", 1],
["meanampgain", "float32", 1],
["meanampdc", "float32", 1],
["vdaqreserved", "uint8", 256]
]
for x in i:
dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file)
i = [["user", "uint8", 256], ["comment", "uint8", 256], ["refscalefactor", "int32", 1]]
for x in i:
dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file)
dic["actuallength"] = os.stat(file_name).st_size
file.close()
return dic
# start of the reading process
nblocks = 1
print("reading the header")
header = read_header(self.filename)
nstim = header['nstimuli']
ni = header['framewidth']
nj = header['frameheight']
nfr = header['nframesperstim']
lenh = header['lenheader']
framesize = header['framesize']
filesize = header['file_size']
dtype = header['datatype']
gain = header['meanampgain']
dc = header['meanampdc']
scalefactor = header['scalefactor']
# [["dtype","nbytes","datatype","type_out"],[...]]
l = [
[11, 1, "uchar", "uint8", "B"], [12, 2, "ushort", "uint16", "H"],
[13, 4, "ulong", "uint32", "I"], [14, 4, "float", "single", "f"]
]
for i in l:
if dtype == i[0]:
nbytes, datatype, type_out, struct_type = i[1], i[2], i[3], i[4]
if framesize != ni * nj * nbytes:
print("BAD HEADER!!! framesize does not match framewidth*frameheight*nbytes!")
framesize = ni * nj * nbytes
if (filesize - lenh) > (framesize * nfr * nstim):
nfr2 = nfr + 1
includesrefframe = True
else:
nfr2 = nfr
includesrefframe = False
nbin = nblocks
conds = [i for i in range(1, nstim + 1)]
ncond = len(conds)
data = [[[np.zeros((ni, nj, nfr), type_out)] for x in range(ncond)] for i in range(nbin)]
for k in range(1, nbin + 1):
print("reading block")
bin = np.arange(math.floor((k - 1 / nbin * nblocks) + 1),
math.floor((k / nbin * nblocks) + 1))
sbin = bin.size
for j in range(1, sbin + 1):
file = open(self.filename, 'rb')
for i in range(1, ncond + 1):
framestart = conds[i - 1] * nfr2 - nfr
offset = framestart * ni * nj * nbytes + lenh
file.seek(offset, 0)
a = [(struct.unpack(struct_type, file.read(nbytes)))[0]
for m in range(ni * nj * nfr)]
a = np.reshape(np.array(a, dtype=type_out, order='F'),
(ni * nj, nfr), order='F')
a = np.reshape(a, (ni, nj, nfr), order='F')
if includesrefframe:
# not tested
framestart = (conds[i] - 1) * nfr2
offset = framestart * ni * nj * nbytes + lenh
file.seek(offset)
ref = [(struct.unpack(struct_type, file.read(nbytes)))[0]
for m in range(ni * nj)]
ref = np.array(ref, dtype=type_out)
for y in range(len(ref)):
ref[y] *= scalefactor
ref = np.reshape(ref, (ni, nj))
b = np.tile(ref, [1, 1, nfr])
for y in range(len(a)):
b.append([])
for x in range(len(a[y])):
b[y + 1].append([])
for frame in range(len(a[y][x])):
b[y + 1][x][frame] = (a[y][x][frame] / gain) - \
(scalefactor * dc / gain)
a = b
if sbin == 1:
data[k - 1][i - 1] = a
else:
# not tested
for y in range(len(a)):
for x in range(len(a[y])):
a[y][x] /= sbin
data[k - 1][i - 1] = data[k - 1][i - 1] + a / sbin
file.close()
# data format [block][stim][width][height][frame]]
# data structure should be [block][stim][frame][width][height] in order to be easy to use with neo
# each file is a block
# each stim could be a segment
# then an image sequence [frame][width][height]
# image need to be rotated
# changing order of data for compatibility
# [block][stim][width][height][frame]]
# to
# [block][stim][frame][width][height]
for block in range(len(data)):
for stim in range(len(data[block])):
a = []
for frame in range(header['nframesperstim']):
a.append([])
for width in range(len(data[block][stim])):
a[frame].append([])
for height in range(len(data[block][stim][width])):
a[frame][width].append(data[block][stim][width][height][frame])
# rotation of data to be the same as thomas deneux screenshot
a[frame] = np.rot90(np.fliplr(a[frame]))
data[block][stim] = a
block = Block(file_origin=self.filename)
for stim in range(len(data[0])):
image_sequence = ImageSequence(data[0][stim], units=self.units,
sampling_rate=self.sampling_rate,
spatial_scale=self.spatial_scale)
segment = Segment(file_origin=self.filename, description=("stim nb:"+str(stim)))
segment.imagesequences = [image_sequence]
segment.block = block
for key in header:
block.annotations[key] = header[key]
block.segments.append(segment)
print("returning block")
return block
| bsd-3-clause |
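`BlkIO.read_block` above reorders each stimulus from `[width][height][frame]` to `[frame][width][height]` with three nested Python loops, then rotates every frame with `np.rot90(np.fliplr(...))`. With NumPy the axis reshuffle collapses to a single transpose — a sketch of the idea on dummy data, not a drop-in replacement for the IO code:

```python
import numpy as np

# one stimulus: width=2, height=3, nframes=4, laid out as [width][height][frame]
a = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# move the frame axis to the front: [frame][width][height]
b = np.transpose(a, (2, 0, 1))
assert b.shape == (4, 2, 3)
assert b[1][0][2] == a[0][2][1]  # frame 1, width 0, height 2

# per-frame rotation as done in BlkIO: rot90(fliplr(frame))
rotated = [np.rot90(np.fliplr(frame)) for frame in b]
assert rotated[0].shape == (3, 2)
```

`np.transpose` returns a view, so this also avoids copying each pixel through a Python list the way the nested-loop version does; only the rotation step materialises new arrays.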