| repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
wskplho/sl4a | python-build/python-libs/gdata/src/gdata/Crypto/Protocol/Chaffing.py | 226 | 9467 | """This file implements the chaffing algorithm.
Winnowing and chaffing is a technique for enhancing privacy without requiring
strong encryption. In short, the technique takes a set of authenticated
message blocks (the wheat) and adds a number of chaff blocks which have
randomly chosen data and MAC fields. This means that to an adversary, the
chaff blocks look as valid as the wheat blocks, and so the authentication
would have to be performed on every block. By tailoring the number of chaff
blocks added to the message, the sender can make breaking the message
computationally infeasible. There are many other interesting properties of
the winnow/chaff technique.
For example, say Alice is sending a message to Bob. She packetizes the
message and performs an all-or-nothing transformation on the packets. Then
she authenticates each packet with a message authentication code (MAC). The
MAC is a hash of the data packet, and there is a secret key which she must
share with Bob (key distribution is an exercise left to the reader). She then
adds a serial number to each packet, and sends the packets to Bob.
Bob receives the packets, and using the shared secret authentication key,
authenticates the MACs for each packet. Those packets that have bad MACs are
simply discarded. The remainder are sorted by serial number, and passed
through the reverse all-or-nothing transform. The transform means that an
eavesdropper (say Eve) must acquire all the packets before any of the data can
be read. If even one packet is missing, the data is useless.
There's one twist: by adding chaff packets, Alice and Bob can make Eve's job
much harder, since Eve now has to break the shared secret key, or try every
combination of wheat and chaff packet to read any of the message. The cool
thing is that Bob doesn't need to add any additional code; the chaff packets
are already filtered out because their MACs don't match (in all likelihood --
since the data and MACs for the chaff packets are randomly chosen, it is
possible, though very unlikely, that a chaff MAC will match the chaff data). And
Alice need not even be the party adding the chaff! She could be completely
unaware that a third party, say Charles, is adding chaff packets to her
messages as they are transmitted.
For more information on winnowing and chaffing see this paper:
Ronald L. Rivest, "Chaffing and Winnowing: Confidentiality without Encryption"
http://theory.lcs.mit.edu/~rivest/chaffing.txt
"""
__revision__ = "$Id: Chaffing.py,v 1.7 2003/02/28 15:23:21 akuchling Exp $"
from Crypto.Util.number import bytes_to_long
class Chaff:
"""Class implementing the chaff adding algorithm.
Methods for subclasses:
_randnum(size):
Returns a randomly generated number with a byte-length equal
to size. Subclasses can use this to implement better random
data and MAC generating algorithms. The default algorithm is
probably not very cryptographically secure. It is most
important that the chaff data does not contain any patterns
that can be used to discern it from wheat data without running
the MAC.
"""
def __init__(self, factor=1.0, blocksper=1):
"""Chaff(factor:float, blocksper:int)
factor is the fraction of message blocks to add chaff to,
expressed as a float between 0.0 and 1.0. blocksper is
the number of chaff blocks to include for each block being
chaffed. Thus the defaults add one chaff block to every
message block. By changing the defaults, you can adjust how
computationally difficult it could be for an adversary to
brute-force crack the message. The difficulty is expressed
as:
pow(blocksper, int(factor * number-of-blocks))
For ease of implementation, when factor < 1.0, only the first
int(factor*number-of-blocks) message blocks are chaffed.
"""
if not (0.0 <= factor <= 1.0):
raise ValueError("'factor' must be between 0.0 and 1.0")
if blocksper < 0:
raise ValueError("'blocksper' must be zero or more")
self.__factor = factor
self.__blocksper = blocksper
def chaff(self, blocks):
"""chaff( [(serial-number:int, data:string, MAC:string)] )
: [(int, string, string)]
Add chaff to message blocks. blocks is a list of 3-tuples of the
form (serial-number, data, MAC).
Chaff is created by choosing a random number of the same
byte-length as data, and another random number of the same
byte-length as MAC. The message block's serial number is
placed on the chaff block and all the packet's chaff blocks
are randomly interspersed with the single wheat block. This
method then returns a list of 3-tuples of the same form.
Chaffed blocks will contain multiple instances of 3-tuples
with the same serial number, but the only way to figure out
which blocks are wheat and which are chaff is to perform the
MAC hash and compare values.
"""
chaffedblocks = []
# count is the number of blocks to add chaff to. blocksper is the
# number of chaff blocks to add per message block that is being
# chaffed.
count = len(blocks) * self.__factor
blocksper = range(self.__blocksper)
for i, wheat in enumerate(blocks):
# it shouldn't matter which of the n blocks we add chaff to, so for
# ease of implementation, we'll just add them to the first count
# blocks
if i < count:
serial, data, mac = wheat
datasize = len(data)
macsize = len(mac)
addwheat = 1
# add chaff to this block
for j in blocksper:
chaffdata = self._randnum(datasize)
chaffmac = self._randnum(macsize)
chaff = (serial, chaffdata, chaffmac)
# mix up the order: if bit 0x40 of a fresh random value is
# set, put the wheat block on the list first
if addwheat and bytes_to_long(self._randnum(16)) & 0x40:
chaffedblocks.append(wheat)
addwheat = 0
chaffedblocks.append(chaff)
if addwheat:
chaffedblocks.append(wheat)
else:
# just add the wheat
chaffedblocks.append(wheat)
return chaffedblocks
def _randnum(self, size):
# TBD: Not a very secure algorithm.
# TBD: size * 2 to work around possible bug in RandomPool
from Crypto.Util import randpool
import time
pool = randpool.RandomPool(size * 2)
while size > pool.entropy:
pass
# we now have enough entropy in the pool to get size bytes of random
# data... well, probably
return pool.get_bytes(size)
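# The class docstring invites subclasses to supply a better generator; a
# minimal sketch (StrongChaff is a hypothetical name, not part of this
# module) could simply defer to the operating system:
#
#     import os
#
#     class StrongChaff(Chaff):
#         def _randnum(self, size):
#             return os.urandom(size)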
if __name__ == '__main__':
text = """\
We hold these truths to be self-evident, that all men are created equal, that
they are endowed by their Creator with certain unalienable Rights, that among
these are Life, Liberty, and the pursuit of Happiness. That to secure these
rights, Governments are instituted among Men, deriving their just powers from
the consent of the governed. That whenever any Form of Government becomes
destructive of these ends, it is the Right of the People to alter or to
abolish it, and to institute new Government, laying its foundation on such
principles and organizing its powers in such form, as to them shall seem most
likely to effect their Safety and Happiness.
"""
print 'Original text:\n=========='
print text
print '=========='
# first transform the text into packets
blocks = [] ; size = 40
for i in range(0, len(text), size):
blocks.append( text[i:i+size] )
# now get MACs for all the text blocks. The key is obvious...
print 'Calculating MACs...'
from Crypto.Hash import HMAC, SHA
key = 'Jefferson'
macs = [HMAC.new(key, block, digestmod=SHA).digest()
for block in blocks]
assert len(blocks) == len(macs)
# put these into a form acceptable as input to the chaffing procedure
source = []
for i, (data, mac) in enumerate(zip(blocks, macs)):
source.append((i, data, mac))
# now chaff these
print 'Adding chaff...'
c = Chaff(factor=0.5, blocksper=2)
chaffed = c.chaff(source)
from base64 import encodestring
# print the chaffed message blocks. meanwhile, separate the wheat from
# the chaff
wheat = []
print 'chaffed message blocks:'
for i, data, mac in chaffed:
# do the authentication
h = HMAC.new(key, data, digestmod=SHA)
pmac = h.digest()
if pmac == mac:
tag = '-->'
wheat.append(data)
else:
tag = ' '
# base64 adds a trailing newline
print tag, '%3d' % i, \
repr(data), encodestring(mac)[:-1]
# now decode the message packets and check it against the original text
print 'Undigesting wheat...'
newtext = "".join(wheat)
if newtext == text:
print 'They match!'
else:
print 'They differ!'
| apache-2.0 |
kenshay/ImageScript | ProgramData/SystemFiles/Python/Lib/site-packages/nbformat/v4/nbbase.py | 7 | 3674 | """Python API for composing notebook elements
The Python representation of a notebook is a nested structure of
dictionary subclasses that support attribute access
(ipython_genutils.ipstruct.Struct). The functions in this module are merely
helpers to build the structs in the right form.
"""
# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
from ..notebooknode import from_dict, NotebookNode
# Change this when incrementing the nbformat version
nbformat = 4
nbformat_minor = 2
nbformat_schema = 'nbformat.v4.schema.json'
def validate(node, ref=None):
"""validate a v4 node"""
from .. import validate
return validate(node, ref=ref, version=nbformat)
def new_output(output_type, data=None, **kwargs):
"""Create a new output, to go in the ``cell.outputs`` list of a code cell."""
output = NotebookNode(output_type=output_type)
# populate defaults:
if output_type == 'stream':
output.name = u'stdout'
output.text = u''
elif output_type in {'execute_result', 'display_data'}:
output.metadata = NotebookNode()
output.data = NotebookNode()
# load from args:
output.update(from_dict(kwargs))
if data is not None:
output.data = from_dict(data)
# validate
validate(output, output_type)
return output
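# For illustration, a hand-built execute_result could look like this (a
# sketch; the payload values are assumed examples):
#
#     out = new_output('execute_result',
#                      data={'text/plain': u'4'},
#                      execution_count=1)
#     assert out.data['text/plain'] == u'4'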
def output_from_msg(msg):
"""Create a NotebookNode for an output from a kernel's IOPub message.
Returns
-------
NotebookNode: the output as a notebook node.
Raises
------
ValueError: if the message is not an output message.
"""
msg_type = msg['header']['msg_type']
content = msg['content']
if msg_type == 'execute_result':
return new_output(output_type=msg_type,
metadata=content['metadata'],
data=content['data'],
execution_count=content['execution_count'],
)
elif msg_type == 'stream':
return new_output(output_type=msg_type,
name=content['name'],
text=content['text'],
)
elif msg_type == 'display_data':
return new_output(output_type=msg_type,
metadata=content['metadata'],
data=content['data'],
)
elif msg_type == 'error':
return new_output(output_type=msg_type,
ename=content['ename'],
evalue=content['evalue'],
traceback=content['traceback'],
)
else:
raise ValueError("Unrecognized output msg type: %r" % msg_type)
def new_code_cell(source='', **kwargs):
"""Create a new code cell"""
cell = NotebookNode(
cell_type='code',
metadata=NotebookNode(),
execution_count=None,
source=source,
outputs=[],
)
cell.update(from_dict(kwargs))
validate(cell, 'code_cell')
return cell
def new_markdown_cell(source='', **kwargs):
"""Create a new markdown cell"""
cell = NotebookNode(
cell_type='markdown',
source=source,
metadata=NotebookNode(),
)
cell.update(from_dict(kwargs))
validate(cell, 'markdown_cell')
return cell
def new_raw_cell(source='', **kwargs):
"""Create a new raw cell"""
cell = NotebookNode(
cell_type='raw',
source=source,
metadata=NotebookNode(),
)
cell.update(from_dict(kwargs))
validate(cell, 'raw_cell')
return cell
def new_notebook(**kwargs):
"""Create a new notebook"""
nb = NotebookNode(
nbformat=nbformat,
nbformat_minor=nbformat_minor,
metadata=NotebookNode(),
cells=[],
)
nb.update(from_dict(kwargs))
validate(nb)
return nb
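# Putting the helpers together (a minimal sketch):
#
#     nb = new_notebook(cells=[
#         new_markdown_cell(u'# Title'),
#         new_code_cell(u'print("hi")'),
#     ])
#     assert nb.nbformat == 4 and len(nb.cells) == 2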
| gpl-3.0 |
rosmo/ansible | test/units/plugins/terminal/test_junos.py | 43 | 1895 | # -*- coding: utf-8 -*-
# Copyright: (c) 2018, Fran Fitzpatrick <francis.x.fitzpatrick@gmail.com> fxfitz
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from mock import call, MagicMock
import pytest
from ansible.errors import AnsibleConnectionFailure
from ansible.plugins.terminal import junos
@pytest.fixture
def junos_terminal():
mock_connection = MagicMock()
return junos.TerminalModule(mock_connection)
def test_on_open_shell_sets_terminal_parameters(junos_terminal):
expected_calls = [
call(b'set cli timestamp disable'),
call(b'set cli screen-length 0'),
call(b'set cli screen-width 1024'),
]
junos_terminal._exec_cli_command = MagicMock()
junos_terminal._get_prompt = MagicMock()
junos_terminal._get_prompt.return_value = b'user@localhost >'
junos_terminal.on_open_shell()
junos_terminal._exec_cli_command.assert_has_calls(expected_calls)
def test_on_open_shell_enters_cli_if_root_prompt(junos_terminal):
expected_calls = [
call(b'cli'),
call(b'set cli timestamp disable'),
call(b'set cli screen-length 0'),
call(b'set cli screen-width 1024'),
]
junos_terminal._exec_cli_command = MagicMock()
junos_terminal._get_prompt = MagicMock()
junos_terminal._connection.get_prompt.return_value = b'root@localhost%'
junos_terminal.on_open_shell()
junos_terminal._exec_cli_command.assert_has_calls(expected_calls)
def test_on_open_shell_raises_problem_setting_terminal_config(junos_terminal):
junos_terminal._connection.exec_command.side_effect = AnsibleConnectionFailure
with pytest.raises(AnsibleConnectionFailure) as exc:
junos_terminal.on_open_shell()
assert 'unable to set terminal parameters' in str(exc)
| gpl-3.0 |
oscarolar/odoo | addons/account/wizard/account_move_line_reconcile_select.py | 385 | 2362 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
from openerp.tools.translate import _
class account_move_line_reconcile_select(osv.osv_memory):
_name = "account.move.line.reconcile.select"
_description = "Move line reconcile select"
_columns = {
'account_id': fields.many2one('account.account', 'Account', \
domain = [('reconcile', '=', 1)], required=True),
}
def action_open_window(self, cr, uid, ids, context=None):
"""
This function opens the account move line window for reconciliation on the given account id
@param cr: the current row, from the database cursor,
@param uid: the current user’s ID for security checks,
@param ids: account move line reconcile select’s ID or list of IDs
@return: dictionary for the act_window action that opens the account move lines to reconcile for the given account id
"""
data = self.read(cr, uid, ids, context=context)[0]
return {
'domain': "[('account_id','=',%d),('reconcile_id','=',False),('state','<>','draft')]" % data['account_id'],
'name': _('Reconciliation'),
'view_type': 'form',
'view_mode': 'tree,form',
'view_id': False,
'res_model': 'account.move.line',
'type': 'ir.actions.act_window'
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
bocaaust/FreshLife | django_project/env/lib/python2.7/site-packages/django/conf/global_settings.py | 63 | 21974 | # Default Django settings. Override these with settings in the module
# pointed-to by the DJANGO_SETTINGS_MODULE environment variable.
# This is defined here as a do-nothing function because we can't import
# django.utils.translation -- that module depends on the settings.
gettext_noop = lambda s: s
####################
# CORE #
####################
DEBUG = False
TEMPLATE_DEBUG = False
# Whether the framework should propagate raw exceptions rather than catching
# them. This is useful under some testing situations and should never be used
# on a live site.
DEBUG_PROPAGATE_EXCEPTIONS = False
# Whether to use the "Etag" header. This saves bandwidth but slows down performance.
USE_ETAGS = False
# People who get code error notifications.
# In the format (('Full Name', 'email@example.com'), ('Full Name', 'anotheremail@example.com'))
ADMINS = ()
# Tuple of IP addresses, as strings, that:
# * See debug comments, when DEBUG is true
# * Receive x-headers
INTERNAL_IPS = ()
# Hosts/domain names that are valid for this site.
# "*" matches anything, ".example.com" matches example.com and all subdomains
ALLOWED_HOSTS = []
# Local time zone for this installation. All choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name (although not all
# systems may support all possibilities). When USE_TZ is True, this is
# interpreted as the default user time zone.
TIME_ZONE = 'America/Chicago'
# If you set this to True, Django will use timezone-aware datetimes.
USE_TZ = False
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
# Languages we provide translations for, out of the box. The language name
# should be the utf-8 encoded local name for the language.
LANGUAGES = (
('af', gettext_noop('Afrikaans')),
('ar', gettext_noop('Arabic')),
('az', gettext_noop('Azerbaijani')),
('bg', gettext_noop('Bulgarian')),
('be', gettext_noop('Belarusian')),
('bn', gettext_noop('Bengali')),
('br', gettext_noop('Breton')),
('bs', gettext_noop('Bosnian')),
('ca', gettext_noop('Catalan')),
('cs', gettext_noop('Czech')),
('cy', gettext_noop('Welsh')),
('da', gettext_noop('Danish')),
('de', gettext_noop('German')),
('el', gettext_noop('Greek')),
('en', gettext_noop('English')),
('en-gb', gettext_noop('British English')),
('eo', gettext_noop('Esperanto')),
('es', gettext_noop('Spanish')),
('es-ar', gettext_noop('Argentinian Spanish')),
('es-mx', gettext_noop('Mexican Spanish')),
('es-ni', gettext_noop('Nicaraguan Spanish')),
('es-ve', gettext_noop('Venezuelan Spanish')),
('et', gettext_noop('Estonian')),
('eu', gettext_noop('Basque')),
('fa', gettext_noop('Persian')),
('fi', gettext_noop('Finnish')),
('fr', gettext_noop('French')),
('fy-nl', gettext_noop('Frisian')),
('ga', gettext_noop('Irish')),
('gl', gettext_noop('Galician')),
('he', gettext_noop('Hebrew')),
('hi', gettext_noop('Hindi')),
('hr', gettext_noop('Croatian')),
('hu', gettext_noop('Hungarian')),
('ia', gettext_noop('Interlingua')),
('id', gettext_noop('Indonesian')),
('is', gettext_noop('Icelandic')),
('it', gettext_noop('Italian')),
('ja', gettext_noop('Japanese')),
('ka', gettext_noop('Georgian')),
('kk', gettext_noop('Kazakh')),
('km', gettext_noop('Khmer')),
('kn', gettext_noop('Kannada')),
('ko', gettext_noop('Korean')),
('lb', gettext_noop('Luxembourgish')),
('lt', gettext_noop('Lithuanian')),
('lv', gettext_noop('Latvian')),
('mk', gettext_noop('Macedonian')),
('ml', gettext_noop('Malayalam')),
('mn', gettext_noop('Mongolian')),
('nb', gettext_noop('Norwegian Bokmal')),
('ne', gettext_noop('Nepali')),
('nl', gettext_noop('Dutch')),
('nn', gettext_noop('Norwegian Nynorsk')),
('pa', gettext_noop('Punjabi')),
('pl', gettext_noop('Polish')),
('pt', gettext_noop('Portuguese')),
('pt-br', gettext_noop('Brazilian Portuguese')),
('ro', gettext_noop('Romanian')),
('ru', gettext_noop('Russian')),
('sk', gettext_noop('Slovak')),
('sl', gettext_noop('Slovenian')),
('sq', gettext_noop('Albanian')),
('sr', gettext_noop('Serbian')),
('sr-latn', gettext_noop('Serbian Latin')),
('sv', gettext_noop('Swedish')),
('sw', gettext_noop('Swahili')),
('ta', gettext_noop('Tamil')),
('te', gettext_noop('Telugu')),
('th', gettext_noop('Thai')),
('tr', gettext_noop('Turkish')),
('tt', gettext_noop('Tatar')),
('udm', gettext_noop('Udmurt')),
('uk', gettext_noop('Ukrainian')),
('ur', gettext_noop('Urdu')),
('vi', gettext_noop('Vietnamese')),
('zh-cn', gettext_noop('Simplified Chinese')),
('zh-tw', gettext_noop('Traditional Chinese')),
)
# Languages using BiDi (right-to-left) layout
LANGUAGES_BIDI = ("he", "ar", "fa")
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
LOCALE_PATHS = ()
LANGUAGE_COOKIE_NAME = 'django_language'
# If you set this to True, Django will format dates, numbers and calendars
# according to user current locale.
USE_L10N = False
# Not-necessarily-technical managers of the site. They get broken link
# notifications and other various emails.
MANAGERS = ADMINS
# Default content type and charset to use for all HttpResponse objects, if a
# MIME type isn't manually specified. These are used to construct the
# Content-Type header.
DEFAULT_CONTENT_TYPE = 'text/html'
DEFAULT_CHARSET = 'utf-8'
# Encoding of files read from disk (template and initial SQL files).
FILE_CHARSET = 'utf-8'
# Email address that error messages come from.
SERVER_EMAIL = 'root@localhost'
# Whether to send broken-link emails.
SEND_BROKEN_LINK_EMAILS = False
# Database connection info. If left empty, will default to the dummy backend.
DATABASES = {}
# Classes used to implement DB routing behavior.
DATABASE_ROUTERS = []
# The email backend to use. For possible shortcuts see django.core.mail.
# The default is to use the SMTP backend.
# Third-party backends can be specified by providing a Python path
# to a module that defines an EmailBackend class.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending email.
EMAIL_HOST = 'localhost'
# Port for sending email.
EMAIL_PORT = 25
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
# List of strings representing installed apps.
INSTALLED_APPS = ()
# List of locations of the template source files, in search order.
TEMPLATE_DIRS = ()
# List of callables that know how to import templates from various sources.
# See the comments in django/core/template/loader.py for interface
# documentation.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
# List of processors used by RequestContext to populate the context.
# Each one should be a callable that takes the request object as its
# only parameter and returns a dictionary to add to the context.
TEMPLATE_CONTEXT_PROCESSORS = (
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.debug',
'django.core.context_processors.i18n',
'django.core.context_processors.media',
'django.core.context_processors.static',
'django.core.context_processors.tz',
# 'django.core.context_processors.request',
'django.contrib.messages.context_processors.messages',
)
# Output to use in template system for invalid (e.g. misspelled) variables.
TEMPLATE_STRING_IF_INVALID = ''
# Default email address to use for various automated correspondence from
# the site managers.
DEFAULT_FROM_EMAIL = 'webmaster@localhost'
# Subject-line prefix for email messages send with django.core.mail.mail_admins
# or ...mail_managers. Make sure to include the trailing space.
EMAIL_SUBJECT_PREFIX = '[Django] '
# Whether to append trailing slashes to URLs.
APPEND_SLASH = True
# Whether to prepend the "www." subdomain to URLs that don't have it.
PREPEND_WWW = False
# Override the server-derived value of SCRIPT_NAME
FORCE_SCRIPT_NAME = None
# List of compiled regular expression objects representing User-Agent strings
# that are not allowed to visit any page, systemwide. Use this for bad
# robots/crawlers. Here are a few examples:
# import re
# DISALLOWED_USER_AGENTS = (
# re.compile(r'^NaverBot.*'),
# re.compile(r'^EmailSiphon.*'),
# re.compile(r'^SiteSucker.*'),
# re.compile(r'^sohu-search')
# )
DISALLOWED_USER_AGENTS = ()
ABSOLUTE_URL_OVERRIDES = {}
# Tuple of strings representing allowed prefixes for the {% ssi %} tag.
# Example: ('/home/html', '/var/www')
ALLOWED_INCLUDE_ROOTS = ()
# If this is an admin settings module, this should be a list of
# settings modules (in the format 'foo.bar.baz') for which this admin
# is an admin.
ADMIN_FOR = ()
# List of compiled regular expression objects representing URLs that need not
# be reported when SEND_BROKEN_LINK_EMAILS is True. Here are a few examples:
# import re
# IGNORABLE_404_URLS = (
# re.compile(r'^/apple-touch-icon.*\.png$'),
# re.compile(r'^/favicon\.ico$'),
# re.compile(r'^/robots\.txt$'),
# re.compile(r'^/phpmyadmin/'),
# re.compile(r'\.(cgi|php|pl)$'),
# )
IGNORABLE_404_URLS = ()
# A secret key for this particular Django installation. Used in secret-key
# hashing algorithms. Set this in your settings, or Django will complain
# loudly.
SECRET_KEY = ''
# Default file storage mechanism that holds media.
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = ''
# URL that handles the static files served from STATIC_ROOT.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = None
# List of upload handler classes to be applied in order.
FILE_UPLOAD_HANDLERS = (
'django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler',
)
# Maximum size, in bytes, of a request before it will be streamed to the
# file system instead of into memory.
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440 # i.e. 2.5 MB
# Directory in which upload streamed files will be temporarily saved. A value of
# `None` will make Django use the operating system's default temporary directory
# (i.e. "/tmp" on *nix systems).
FILE_UPLOAD_TEMP_DIR = None
# The numeric mode to set newly-uploaded files to. The value should be a mode
# you'd pass directly to os.chmod; see http://docs.python.org/lib/os-file-dir.html.
FILE_UPLOAD_PERMISSIONS = None
# Python module path where user will place custom format definition.
# The directory where this setting is pointing should contain subdirectories
# named as the locales, containing a formats.py file
# (i.e. "myproject.locale" for myproject/locale/en/formats.py etc. use)
FORMAT_MODULE_PATH = None
# Default formatting for date objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATE_FORMAT = 'N j, Y'
# Default formatting for datetime objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATETIME_FORMAT = 'N j, Y, P'
# Default formatting for time objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
TIME_FORMAT = 'P'
# Default formatting for date objects when only the year and month are relevant.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
YEAR_MONTH_FORMAT = 'F Y'
# Default formatting for date objects when only the month and day are relevant.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
MONTH_DAY_FORMAT = 'F j'
# Default short formatting for date objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
SHORT_DATE_FORMAT = 'm/d/Y'
# Default short formatting for datetime objects.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
SHORT_DATETIME_FORMAT = 'm/d/Y P'
# Default formats to be used when parsing dates from input boxes, in order
# See all available format string here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
DATE_INPUT_FORMATS = (
'%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06'
'%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006'
'%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006'
'%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006'
'%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006'
)
# Default formats to be used when parsing times from input boxes, in order
# See all available format string here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
TIME_INPUT_FORMATS = (
'%H:%M:%S', # '14:30:59'
'%H:%M', # '14:30'
)
# Default formats to be used when parsing dates and times from input boxes,
# in order
# See all available format string here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
DATETIME_INPUT_FORMATS = (
'%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59'
'%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200'
'%Y-%m-%d %H:%M', # '2006-10-25 14:30'
'%Y-%m-%d', # '2006-10-25'
'%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59'
'%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200'
'%m/%d/%Y %H:%M', # '10/25/2006 14:30'
'%m/%d/%Y', # '10/25/2006'
'%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59'
'%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200'
'%m/%d/%y %H:%M', # '10/25/06 14:30'
'%m/%d/%y', # '10/25/06'
)
# First day of week, to be used on calendars
# 0 means Sunday, 1 means Monday...
FIRST_DAY_OF_WEEK = 0
# Decimal separator symbol
DECIMAL_SEPARATOR = '.'
# Boolean that sets whether to add thousand separator when formatting numbers
USE_THOUSAND_SEPARATOR = False
# Number of digits that will be together, when splitting them by
# THOUSAND_SEPARATOR. 0 means no grouping, 3 means splitting by thousands...
NUMBER_GROUPING = 0
# Thousand separator symbol
THOUSAND_SEPARATOR = ','
# Do you want to manage transactions manually?
# Hint: you really don't!
TRANSACTIONS_MANAGED = False
# The tablespaces to use for each model when not specified otherwise.
DEFAULT_TABLESPACE = ''
DEFAULT_INDEX_TABLESPACE = ''
# Default X-Frame-Options header value
X_FRAME_OPTIONS = 'SAMEORIGIN'
USE_X_FORWARDED_HOST = False
# The Python dotted path to the WSGI application that Django's internal servers
# (runserver, runfcgi) will use. If `None`, the return value of
# 'django.core.wsgi.get_wsgi_application' is used, thus preserving the same
# behavior as previous versions of Django. Otherwise this should point to an
# actual WSGI application object.
WSGI_APPLICATION = None
# If your Django app is behind a proxy that sets a header to specify secure
# connections, AND that proxy ensures that user-submitted headers with the
# same name are ignored (so that people can't spoof it), set this value to
# a tuple of (header_name, header_value). For any requests that come in with
# that header/value, request.is_secure() will return True.
# WARNING! Only set this if you fully understand what you're doing. Otherwise,
# you may be opening yourself up to a security risk.
SECURE_PROXY_SSL_HEADER = None
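# For example, if your proxy always sets X-Forwarded-Proto itself and strips
# any copy the client sends, a typical (illustrative) value would be:
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')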
##############
# MIDDLEWARE #
##############
# List of middleware classes to use. Order is important; in the request phase,
# this middleware classes will be applied in the order given, and in the
# response phase the middleware will be applied in reverse order.
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# 'django.middleware.http.ConditionalGetMiddleware',
# 'django.middleware.gzip.GZipMiddleware',
)
############
# SESSIONS #
############
SESSION_CACHE_ALIAS = 'default' # Cache to store session data if using the cache session backend.
SESSION_COOKIE_NAME = 'sessionid' # Cookie name. This can be whatever you want.
SESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2 # Age of cookie, in seconds (default: 2 weeks).
SESSION_COOKIE_DOMAIN = None # A string like ".example.com", or None for standard domain cookie.
SESSION_COOKIE_SECURE = False # Whether the session cookie should be secure (https:// only).
SESSION_COOKIE_PATH = '/' # The path of the session cookie.
SESSION_COOKIE_HTTPONLY = True # Whether to use the non-RFC standard httpOnly flag (IE, FF3+, others)
SESSION_SAVE_EVERY_REQUEST = False # Whether to save the session data on every request.
SESSION_EXPIRE_AT_BROWSER_CLOSE = False # Whether a user's session cookie expires when the Web browser is closed.
SESSION_ENGINE = 'django.contrib.sessions.backends.db' # The module to store session data
SESSION_FILE_PATH = None # Directory to store session files if using the file session module. If None, the backend will use a sensible default.
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer' # class to serialize session data
#########
# CACHE #
#########
# The cache backends to use.
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
}
}
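# For example, a memcached-backed default cache could be configured like
# this (illustrative only; the address is an assumed example):
# CACHES = {
#     'default': {
#         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
#         'LOCATION': '127.0.0.1:11211',
#     }
# }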
CACHE_MIDDLEWARE_KEY_PREFIX = ''
CACHE_MIDDLEWARE_SECONDS = 600
CACHE_MIDDLEWARE_ALIAS = 'default'
####################
# COMMENTS #
####################
COMMENTS_ALLOW_PROFANITIES = False
# The profanities that will trigger a validation error in
# CommentDetailsForm.clean_comment. All of these should be in lowercase.
PROFANITIES_LIST = ()
##################
# AUTHENTICATION #
##################
AUTH_USER_MODEL = 'auth.User'
AUTHENTICATION_BACKENDS = ('django.contrib.auth.backends.ModelBackend',)
LOGIN_URL = '/accounts/login/'
LOGOUT_URL = '/accounts/logout/'
LOGIN_REDIRECT_URL = '/accounts/profile/'
# The number of days a password reset link is valid for
PASSWORD_RESET_TIMEOUT_DAYS = 3
# The first hasher in this list is the preferred algorithm. Any
# password using a different algorithm will be converted automatically
# upon login.
PASSWORD_HASHERS = (
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.BCryptPasswordHasher',
'django.contrib.auth.hashers.SHA1PasswordHasher',
'django.contrib.auth.hashers.MD5PasswordHasher',
'django.contrib.auth.hashers.UnsaltedSHA1PasswordHasher',
'django.contrib.auth.hashers.UnsaltedMD5PasswordHasher',
'django.contrib.auth.hashers.CryptPasswordHasher',
)
###########
# SIGNING #
###########
SIGNING_BACKEND = 'django.core.signing.TimestampSigner'
########
# CSRF #
########
# Dotted path to callable to be used as view when a request is
# rejected by the CSRF middleware.
CSRF_FAILURE_VIEW = 'django.views.csrf.csrf_failure'
# Settings for CSRF cookie.
CSRF_COOKIE_NAME = 'csrftoken'
CSRF_COOKIE_DOMAIN = None
CSRF_COOKIE_PATH = '/'
CSRF_COOKIE_SECURE = False
############
# MESSAGES #
############
# Class to use as messages backend
MESSAGE_STORAGE = 'django.contrib.messages.storage.fallback.FallbackStorage'
# Default values of MESSAGE_LEVEL and MESSAGE_TAGS are defined within
# django.contrib.messages to avoid imports in this settings file.
###########
# LOGGING #
###########
# The callable to use to configure logging
LOGGING_CONFIG = 'django.utils.log.dictConfig'
# Custom logging configuration.
LOGGING = {}
# Default exception reporter filter class used in case none has been
# specifically assigned to the HttpRequest instance.
DEFAULT_EXCEPTION_REPORTER_FILTER = 'django.views.debug.SafeExceptionReporterFilter'
###########
# TESTING #
###########
# The name of the class to use to run the test suite
TEST_RUNNER = 'django.test.simple.DjangoTestSuiteRunner'
############
# FIXTURES #
############
# The list of directories to search for fixtures
FIXTURE_DIRS = ()
###############
# STATICFILES #
###############
# A list of locations of additional static files
STATICFILES_DIRS = ()
# The default file storage backend used during the build process
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
| apache-2.0 |
ryfeus/lambda-packs | Tensorflow_Pandas_Numpy/source3.6/tensorflow/python/training/input.py | 20 | 60558 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Input pipeline.
Please see the @{$reading_data$reading data how-to}
for context.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import io_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.summary import summary
from tensorflow.python.training import queue_runner
# pylint: disable=protected-access
_store_sparse = sparse_ops._add_sparse_to_tensors_map
_store_many_sparse = sparse_ops._add_many_sparse_to_tensors_map
_restore_sparse = sparse_ops._take_many_sparse_from_tensors_map
# pylint: enable=protected-access
def match_filenames_once(pattern, name=None):
"""Save the list of files matching pattern, so it is only computed once.
Args:
pattern: A file pattern (glob), or 1D tensor of file patterns.
name: A name for the operations (optional).
Returns:
A variable that is initialized to the list of files matching the pattern(s).
"""
with ops.name_scope(name, "matching_filenames", [pattern]) as name:
return vs.variable(
name=name, initial_value=io_ops.matching_files(pattern),
trainable=False, validate_shape=False,
collections=[ops.GraphKeys.LOCAL_VARIABLES])
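# e.g. (a sketch; the glob pattern is an assumed example):
#
#   filenames = match_filenames_once("/data/train-*.tfrecord")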
def limit_epochs(tensor, num_epochs=None, name=None):
"""Returns tensor `num_epochs` times and then raises an `OutOfRange` error.
Note: creates local counter `epochs`. Use `local_variables_initializer()` to
initialize local variables.
Args:
tensor: Any `Tensor`.
num_epochs: A positive integer (optional). If specified, limits the number
of steps the output tensor may be evaluated.
name: A name for the operations (optional).
Returns:
tensor or `OutOfRange`.
Raises:
ValueError: if `num_epochs` is invalid.
"""
if num_epochs is None:
return tensor
if num_epochs <= 0:
raise ValueError("num_epochs must be > 0 not %d." % num_epochs)
with ops.name_scope(name, "limit_epochs", [tensor]) as name:
zero64 = constant_op.constant(0, dtype=dtypes.int64)
epochs = vs.variable(
zero64, name="epochs", trainable=False,
collections=[ops.GraphKeys.LOCAL_VARIABLES])
counter = epochs.count_up_to(num_epochs)
with ops.control_dependencies([counter]):
return array_ops.identity(tensor, name=name)
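# e.g. (a sketch): allow a tensor to be evaluated at most twice before an
# OutOfRange error is raised:
#
#   images = limit_epochs(images, num_epochs=2)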
def input_producer(input_tensor,
element_shape=None,
num_epochs=None,
shuffle=True,
seed=None,
capacity=32,
shared_name=None,
summary_name=None,
name=None,
cancel_op=None):
"""Output the rows of `input_tensor` to a queue for an input pipeline.
Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Args:
input_tensor: A tensor with the rows to produce. Must be at least
one-dimensional. Must either have a fully-defined shape, or
`element_shape` must be defined.
element_shape: (Optional.) A `TensorShape` representing the shape of a
row of `input_tensor`, if it cannot be inferred.
num_epochs: (Optional.) An integer. If specified `input_producer` produces
each row of `input_tensor` `num_epochs` times before generating an
`OutOfRange` error. If not specified, `input_producer` can cycle through
the rows of `input_tensor` an unlimited number of times.
shuffle: (Optional.) A boolean. If true, the rows are randomly shuffled
within each epoch.
seed: (Optional.) An integer. The seed to use if `shuffle` is true.
capacity: (Optional.) The capacity of the queue to be used for buffering
the input.
shared_name: (Optional.) If set, this queue will be shared under the given
name across multiple sessions.
summary_name: (Optional.) If set, a scalar summary for the current queue
size will be generated, using this name as part of the tag.
name: (Optional.) A name for queue.
cancel_op: (Optional.) Cancel op for the queue
Returns:
A queue with the output rows. A `QueueRunner` for the queue is
added to the current `QUEUE_RUNNER` collection of the current
graph.
Raises:
ValueError: If the shape of the input cannot be inferred from the arguments.
"""
with ops.name_scope(name, "input_producer", [input_tensor]):
input_tensor = ops.convert_to_tensor(input_tensor, name="input_tensor")
element_shape = input_tensor.shape[1:].merge_with(element_shape)
if not element_shape.is_fully_defined():
raise ValueError("Either `input_tensor` must have a fully defined shape "
"or `element_shape` must be specified")
if shuffle:
input_tensor = random_ops.random_shuffle(input_tensor, seed=seed)
input_tensor = limit_epochs(input_tensor, num_epochs)
q = data_flow_ops.FIFOQueue(capacity=capacity,
dtypes=[input_tensor.dtype.base_dtype],
shapes=[element_shape],
shared_name=shared_name, name=name)
enq = q.enqueue_many([input_tensor])
queue_runner.add_queue_runner(
queue_runner.QueueRunner(
q, [enq], cancel_op=cancel_op))
if summary_name is not None:
summary.scalar(summary_name,
math_ops.to_float(q.size()) * (1. / capacity))
return q
def string_input_producer(string_tensor,
num_epochs=None,
shuffle=True,
seed=None,
capacity=32,
shared_name=None,
name=None,
cancel_op=None):
"""Output strings (e.g. filenames) to a queue for an input pipeline.
Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Args:
string_tensor: A 1-D string tensor with the strings to produce.
num_epochs: An integer (optional). If specified, `string_input_producer`
produces each string from `string_tensor` `num_epochs` times before
generating an `OutOfRange` error. If not specified,
`string_input_producer` can cycle through the strings in `string_tensor`
an unlimited number of times.
shuffle: Boolean. If true, the strings are randomly shuffled within each
epoch.
seed: An integer (optional). Seed used if shuffle == True.
capacity: An integer. Sets the queue capacity.
shared_name: (optional). If set, this queue will be shared under the given
name across multiple sessions. All sessions open to the device which has
this queue will be able to access it via the shared_name. Using this in
a distributed setting means each name will only be seen by one of the
sessions which has access to this operation.
name: A name for the operations (optional).
cancel_op: Cancel op for the queue (optional).
Returns:
A queue with the output strings. A `QueueRunner` for the Queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
Raises:
ValueError: If the string_tensor is a null Python list. At runtime,
will fail with an assertion if string_tensor becomes a null tensor.
"""
not_null_err = "string_input_producer requires a non-null input tensor"
if not isinstance(string_tensor, ops.Tensor) and not string_tensor:
raise ValueError(not_null_err)
with ops.name_scope(name, "input_producer", [string_tensor]) as name:
string_tensor = ops.convert_to_tensor(string_tensor, dtype=dtypes.string)
with ops.control_dependencies([
control_flow_ops.Assert(
math_ops.greater(array_ops.size(string_tensor), 0),
[not_null_err])]):
string_tensor = array_ops.identity(string_tensor)
return input_producer(
input_tensor=string_tensor,
element_shape=[],
num_epochs=num_epochs,
shuffle=shuffle,
seed=seed,
capacity=capacity,
shared_name=shared_name,
name=name,
summary_name="fraction_of_%d_full" % capacity,
cancel_op=cancel_op)
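# Typical usage (a sketch; the file names are assumed examples):
#
#   filename_queue = string_input_producer(["a.csv", "b.csv"], num_epochs=2)
#   reader = io_ops.TextLineReader()
#   key, value = reader.read(filename_queue)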
def range_input_producer(limit, num_epochs=None, shuffle=True, seed=None,
capacity=32, shared_name=None, name=None):
"""Produces the integers from 0 to limit-1 in a queue.
Note: if `num_epochs` is not `None`, this function creates local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
Args:
limit: An int32 scalar tensor.
num_epochs: An integer (optional). If specified, `range_input_producer`
produces each integer `num_epochs` times before generating an
OutOfRange error. If not specified, `range_input_producer` can cycle
through the integers an unlimited number of times.
shuffle: Boolean. If true, the integers are randomly shuffled within each
epoch.
seed: An integer (optional). Seed used if shuffle == True.
capacity: An integer. Sets the queue capacity.
shared_name: (optional). If set, this queue will be shared under the given
name across multiple sessions.
name: A name for the operations (optional).
Returns:
A Queue with the output integers. A `QueueRunner` for the Queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
"""
with ops.name_scope(name, "input_producer", [limit]) as name:
range_tensor = math_ops.range(limit)
return input_producer(
range_tensor, [], num_epochs, shuffle, seed, capacity,
shared_name, "fraction_of_%d_full" % capacity, name)
def slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None,
capacity=32, shared_name=None, name=None):
"""Produces a slice of each `Tensor` in `tensor_list`.
Implemented using a Queue -- a `QueueRunner` for the Queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
Args:
tensor_list: A list of `Tensor` objects. Every `Tensor` in
`tensor_list` must have the same size in the first dimension.
num_epochs: An integer (optional). If specified, `slice_input_producer`
produces each slice `num_epochs` times before generating
an `OutOfRange` error. If not specified, `slice_input_producer` can cycle
through the slices an unlimited number of times.
shuffle: Boolean. If true, the integers are randomly shuffled within each
epoch.
seed: An integer (optional). Seed used if shuffle == True.
capacity: An integer. Sets the queue capacity.
shared_name: (optional). If set, this queue will be shared under the given
name across multiple sessions.
name: A name for the operations (optional).
Returns:
A list of tensors, one for each element of `tensor_list`. If the tensor
in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output
tensor will have shape `[a, b, ..., z]`.
Raises:
ValueError: if `slice_input_producer` produces nothing from `tensor_list`.
"""
with ops.name_scope(name, "input_producer", tensor_list):
tensor_list = ops.convert_n_to_tensor_or_indexed_slices(tensor_list)
if not tensor_list:
raise ValueError(
"Expected at least one tensor in slice_input_producer().")
range_size = array_ops.shape(tensor_list[0])[0]
# TODO(josh11b): Add an assertion that the first dimension of
# everything in TensorList matches. Maybe just check the inferred shapes?
queue = range_input_producer(range_size, num_epochs=num_epochs,
shuffle=shuffle, seed=seed, capacity=capacity,
shared_name=shared_name)
index = queue.dequeue()
output = [array_ops.gather(t, index) for t in tensor_list]
return output
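# Typical usage (a sketch; shapes are assumed examples):
#
#   # images: a [N, 32, 32, 3] tensor, labels: a [N] tensor
#   image, label = slice_input_producer([images, labels], num_epochs=1)
#   # each (image, label) pair is one slice; usually batched downstream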
# Helpers for the batching functions ------------------------------------------
def _flatten(tensor_list_list):
return [tensor for tensor_list in tensor_list_list for tensor in tensor_list]
class _SparseMetaData(object):
"""Store information about the Tensor: Is it sparse?, map_op, and rank."""
def __init__(self, sparse, map_op, rank):
"""Create the metadata.
Args:
sparse: Python boolean.
map_op: The `Operation` that created the `SparseTensorsMap` in question.
This Op contains information about the underlying Map object and the
dtype of the original data.
rank: The statically known rank of the `SparseTensor`.
"""
self._sparse = sparse
self._map_op = map_op
self._rank = rank
def __eq__(self, other):
if self.sparse != other.sparse:
return False
if not self.sparse:
return True
# If map_ops are not the same, the data source is not the same.
if (self.map_op is not None) != (other.map_op is not None):
return False
if self.map_op != other.map_op:
return False
if not self.rank.is_compatible_with(other.rank):
return False
return True
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return "[SparseMetaData(%s, %s, %s)]" % (self.sparse, self.map_op.name,
self.rank)
def merge_with(self, other):
if self != other:
raise ValueError("SparseMetaData objects are incompatible: %s vs. %s"
% (self, other))
if self.sparse:
self.rank.merge_with(other.rank)
return self
@property
def map_op(self):
return self._map_op
@property
def sparse(self):
return self._sparse
@property
def rank(self):
return self._rank
def _as_tensor_list(tensors):
if isinstance(tensors, dict):
return [tensors[k] for k in sorted(tensors, key=str)]
else:
return tensors
def _as_tensor_list_list(tensors_list):
if not tensors_list:
raise ValueError("Expected at least one set of tensors")
if isinstance(tensors_list[0], dict):
expected_keys = set(tensors_list[0].keys())
for tensors in tensors_list[1:]:
if set(tensors.keys()) != expected_keys:
raise ValueError("All dictionaries in tensors_list must have "
"the same keys")
return [_as_tensor_list(tensors) for tensors in tensors_list]
else:
return tensors_list
def _as_original_type(original_tensors, tensor_list):
if isinstance(original_tensors, dict):
if len(original_tensors) == 1:
# tensor_list is bogusly returned as a single tensor if only one tensor
# was enqueued. Make it a list again. See b/28117485.
tensor_list = [tensor_list]
return {k: tensor_list[i]
for i, k in enumerate(sorted(original_tensors, key=str))}
else:
return tensor_list
def _smart_cond(pred, if_true, if_false):
"""A `tf.cond` that does nothing when the condition is static."""
pred = ops.convert_to_tensor(pred)
static_pred = tensor_util.constant_value(pred)
if static_pred is not None:
if static_pred:
return if_true()
else:
return if_false()
else:
return control_flow_ops.cond(
pred,
if_true,
if_false)
def _store_sparse_tensors(tensor_list, enqueue_many, keep_input,
shared_map_ops=None):
"""Store SparseTensors for feeding into batch, etc.
If `shared_map_ops` is provided, the underlying `SparseTensorsMap` objects
are reused (shared). This argument is useful for, e.g., `batch_join`
where multiple enqueue operations write to the same Queue component,
and another (dequeue) thread reads from that same location and must then
restore the associated `SparseTensor` objects. In this case, the sparse
restore must have a single `SparseTensorMap` from which to read out the
handles; so a single `SparseTensorMap` must be shared for storing
across the multiple enqueue operations. This sharing is performed by
calling `_store_sparse_tensors` the first time with `shared_map_ops=None`,
and then in subsequent times with this value set to the list of `Operation`
objects created in the first call.
Args:
tensor_list: List of `Tensor` and `SparseTensor` objects.
enqueue_many: Python `Boolean`.
keep_input: Must be a scalar bool Tensor (not a Python bool). If False,
don't store.
shared_map_ops: (optional) List of `Operation` objects from a previous
call to `_store_sparse_tensors`. If not `None`, the op types should be
one of `AddSparseToTensorsMap` or `AddManySparseToTensorsMap` in the
locations corresponding to `SparseTensors` in `tensor_list`.
Returns:
A tuple `(stored_list, sparse_info_list)` where `stored_list` is a list
of `Tensor` objects (same length as `tensor_list`) and `sparse_info_list`
is a list of the same length of `_SparseMetaData` objects.
"""
maybe_shared_map_ops = shared_map_ops or [None] * len(tensor_list)
def _sparse_meta_data(t, storing_op, map_op):
if not isinstance(t, sparse_tensor.SparseTensor):
return _SparseMetaData(False, None, None)
rank = t.dense_shape.shape.with_rank(1)[0]
if enqueue_many:
rank -= 1
# If a shared map_op was provided, use that. Otherwise use the name of
# the operation used to store the SparseTensor.
return _SparseMetaData(
sparse=True, map_op=map_op or storing_op, rank=rank)
def _maybe_store(t, shared_map_op):
"""Store Sparse tensor, if necessary."""
if not isinstance(t, sparse_tensor.SparseTensor):
return t
map_op_name = shared_map_op.name if shared_map_op else None
def _maybe_store_sparse(t, map_op_name, keep_input):
"""Conditionally store a single sparse Tensor."""
return _smart_cond(
keep_input,
lambda: _store_sparse(t, shared_name=map_op_name),
lambda: constant_op.constant(-1, dtypes.int64))
def _maybe_store_many_sparse(t, map_op_name, keep_input):
"""Conditionally store multiple sparse Tensors."""
out_tensor = _smart_cond(
keep_input,
lambda: _store_many_sparse(t, shared_name=map_op_name),
lambda: -1 * array_ops.ones(array_ops.shape(t)[0:1], dtypes.int64))
out_tensor.set_shape([None]) # necessary when t.ndims is unknown
return out_tensor
def _sparse_values_to_keep(t, keep_input):
"""Convert a per-row `keep_input` vector to a per-value one."""
# Get the rows of every value in the sparse Tensor.
row_values = array_ops.reshape(
t.indices, [array_ops.shape(t.indices)[0], -1])[:, 0]
# The value should be kept iff the row should be kept.
return array_ops.gather(keep_input, row_values)
if keep_input.shape.ndims == 1:
t = sparse_ops.sparse_retain(t, _sparse_values_to_keep(t, keep_input))
store_f = lambda t, name, _: _store_many_sparse(t, shared_name=name)
elif enqueue_many:
store_f = _maybe_store_many_sparse
else:
store_f = _maybe_store_sparse
return store_f(t, map_op_name, keep_input)
stored_list = [
_maybe_store(t, shared_map_op) for t, shared_map_op
in zip(tensor_list, maybe_shared_map_ops)]
# Since the output of `_store{_many}_sparse is wrapped in a tf.cond `Merge`,
# we can't just get the Op of the resulting tensor.
def _sparse_op(stored):
for input_tensor in stored.op.inputs:
if input_tensor.op.type in ("AddSparseToTensorsMap",
"AddManySparseToTensorsMap"):
return input_tensor.op
# If there was no sparse input, then the original stored Tensor wasn't
# sparse and we can just return the original Tensor's Op.
return stored.op
sparse_info_list = [
_sparse_meta_data(t, _sparse_op(stored), shared_map_op)
for t, stored, shared_map_op
in zip(tensor_list, stored_list, maybe_shared_map_ops)]
# Expand dims of stored tensors by 1 for proper enqueue shape
stored_list = [
array_ops.expand_dims(s, [-1]) if s_info.sparse else s
for s, s_info in zip(stored_list, sparse_info_list)]
return stored_list, sparse_info_list
def _store_sparse_tensors_join(tensor_list_list, enqueue_many, keep_input):
"""Store SparseTensors for feeding into batch_join, etc."""
(s0, sparse_info_list) = _store_sparse_tensors(
tensor_list_list[0], enqueue_many, keep_input)
stored_list_list = [s0]
for tensor_list in tensor_list_list[1:]:
s, sparse_info_candidate = _store_sparse_tensors(
tensor_list, enqueue_many, keep_input,
[st.map_op for st in sparse_info_list])
if sparse_info_list != sparse_info_candidate:
raise ValueError("Inconsistent SparseTensors list: %s vs. %s"
% (tensor_list_list[0], tensor_list))
sparse_info_list = [
info.merge_with(candidate)
for (info, candidate) in zip(sparse_info_list, sparse_info_candidate)]
stored_list_list.append(s)
return (stored_list_list, sparse_info_list)
def _restore_sparse_tensors(stored_list, sparse_info_list):
"""Restore SparseTensors after dequeue in batch, batch_join, etc."""
received_sequence = isinstance(stored_list, collections.Sequence)
if not received_sequence:
stored_list = (stored_list,)
tensors = [
_restore_sparse(sparse_map_op=info.map_op,
sparse_handles=array_ops.squeeze(s, [1]),
rank=(info.rank + 1).value)
if info.sparse else s
for (s, info) in zip(stored_list, sparse_info_list)]
return tensors if received_sequence else tensors[0]
def _validate(tensor_list):
tensor_list = ops.convert_n_to_tensor_or_indexed_slices(tensor_list)
if not tensor_list:
raise ValueError("Expected at least one tensor in batch().")
return tensor_list
def _validate_join(tensor_list_list):
tensor_list_list = [ops.convert_n_to_tensor_or_indexed_slices(tl)
for tl in tensor_list_list]
if not tensor_list_list:
raise ValueError("Expected at least one input in batch_join().")
return tensor_list_list
def _validate_keep_input(keep_input, enqueue_many):
"""Validate `keep_input` argument to conditional batching functions."""
keep_input = ops.convert_to_tensor(keep_input)
if keep_input.shape.ndims is None:
raise ValueError(
"`keep_input` dimensions must be known at graph construction.")
if not enqueue_many and keep_input.shape.ndims == 1:
raise ValueError(
"`keep_input` cannot be a vector when `enqueue_many=False`.")
if keep_input.shape.ndims > 1:
raise ValueError("`keep_input` must be 0 or 1 dimensions.")
return keep_input
def _dtypes(tensor_list_list):
all_types = [[t.dtype for t in tl] for tl in tensor_list_list]
types = all_types[0]
for other_types in all_types[1:]:
if other_types != types:
raise TypeError("Expected types to be consistent: %s vs. %s." %
(", ".join(x.name for x in types),
", ".join(x.name for x in other_types)))
return types
def _merge_shapes(shape_list, enqueue_many):
shape_list = [tensor_shape.as_shape(s) for s in shape_list]
if enqueue_many:
# We want the shapes without the leading batch dimension.
shape_list = [s.with_rank_at_least(1)[1:] for s in shape_list]
merged_shape = shape_list[0]
for s in shape_list[1:]:
merged_shape.merge_with(s)
return merged_shape.as_list()
def _shapes(tensor_list_list, shapes, enqueue_many):
"""Calculate and merge the shapes of incoming tensors.
Args:
tensor_list_list: List of tensor lists.
shapes: List of shape tuples corresponding to tensors within the lists.
enqueue_many: Boolean describing whether shapes will be enqueued as
batches or individual entries.
Returns:
A list of shapes aggregating shape inference info from `tensor_list_list`,
or `shapes` itself if it is not `None`.
Raises:
ValueError: If any of the inferred shapes in `tensor_list_list` lacks a
well-defined rank.
"""
if shapes is None:
len0 = len(tensor_list_list[0])
for tl in tensor_list_list:
for i in xrange(len0):
if tl[i].shape.ndims is None:
raise ValueError("Cannot infer Tensor's rank: %s" % tl[i])
shapes = [_merge_shapes(
[tl[i].shape.as_list() for tl in tensor_list_list], enqueue_many)
for i in xrange(len0)]
return shapes
def _select_which_to_enqueue(tensor_list, keep_input):
"""Select which examples to enqueue based on vector `keep_input`."""
select_i = math_ops.to_int32(keep_input)
tensor_list = [
data_flow_ops.dynamic_partition(x, select_i, num_partitions=2)[1]
for x in tensor_list]
return tensor_list
def _enqueue_join(queue, tensor_list_list, enqueue_many, keep_input):
"""Enqueue `tensor_list_list` in `queue`."""
if enqueue_many:
enqueue_fn = queue.enqueue_many
else:
enqueue_fn = queue.enqueue
if keep_input.shape.ndims == 1:
enqueue_ops = [enqueue_fn(_select_which_to_enqueue(x, keep_input))
for x in tensor_list_list]
else:
enqueue_ops = [_smart_cond(
keep_input,
lambda: enqueue_fn(tl), # pylint:disable=cell-var-from-loop
control_flow_ops.no_op) for tl in tensor_list_list]
queue_runner.add_queue_runner(queue_runner.QueueRunner(queue, enqueue_ops))
def _enqueue(queue, tensor_list, threads, enqueue_many, keep_input):
"""Enqueue `tensor_list` in `queue`."""
if enqueue_many:
enqueue_fn = queue.enqueue_many
else:
enqueue_fn = queue.enqueue
if keep_input.shape.ndims == 1:
enqueue_ops = [
enqueue_fn(_select_which_to_enqueue(tensor_list, keep_input))] * threads
else:
enqueue_ops = [_smart_cond(
keep_input,
lambda: enqueue_fn(tensor_list),
control_flow_ops.no_op)] * threads
queue_runner.add_queue_runner(queue_runner.QueueRunner(queue, enqueue_ops))
def _which_queue(dynamic_pad):
return (data_flow_ops.PaddingFIFOQueue if dynamic_pad
else data_flow_ops.FIFOQueue)
def _batch(tensors, batch_size, keep_input, num_threads=1, capacity=32,
enqueue_many=False, shapes=None, dynamic_pad=False,
allow_smaller_final_batch=False, shared_name=None,
name=None):
"""Helper function for `batch` and `maybe_batch`."""
tensor_list = _as_tensor_list(tensors)
with ops.name_scope(name, "batch", list(tensor_list) + [keep_input]) as name:
tensor_list = _validate(tensor_list)
keep_input = _validate_keep_input(keep_input, enqueue_many)
(tensor_list, sparse_info) = _store_sparse_tensors(
tensor_list, enqueue_many, keep_input)
types = _dtypes([tensor_list])
shapes = _shapes([tensor_list], shapes, enqueue_many)
# TODO(josh11b,mrry): Switch to BatchQueue once it is written.
queue = _which_queue(dynamic_pad)(
capacity=capacity, dtypes=types, shapes=shapes, shared_name=shared_name)
_enqueue(queue, tensor_list, num_threads, enqueue_many, keep_input)
summary.scalar("fraction_of_%d_full" % capacity,
math_ops.to_float(queue.size()) * (1. / capacity))
if allow_smaller_final_batch:
dequeued = queue.dequeue_up_to(batch_size, name=name)
else:
dequeued = queue.dequeue_many(batch_size, name=name)
dequeued = _restore_sparse_tensors(dequeued, sparse_info)
return _as_original_type(tensors, dequeued)
# TODO(josh11b): Add a thread_multiplier or num_threads (that has to be
# a multiple of len(tensor_list_list)?) parameter, to address the use
# case where you want more parallelism than you can support different
# readers (either because you don't have that many files or can't
# read that many files in parallel due to the number of seeks required).
# Once this is done, batch() can be written as a call to batch_join().
def _batch_join(tensors_list, batch_size, keep_input, capacity=32,
enqueue_many=False, shapes=None, dynamic_pad=False,
allow_smaller_final_batch=False, shared_name=None, name=None):
"""Helper function for `batch_join` and `maybe_batch_join`."""
tensor_list_list = _as_tensor_list_list(tensors_list)
with ops.name_scope(name, "batch_join",
_flatten(tensor_list_list) + [keep_input]) as name:
tensor_list_list = _validate_join(tensor_list_list)
keep_input = _validate_keep_input(keep_input, enqueue_many)
tensor_list_list, sparse_info = _store_sparse_tensors_join(
tensor_list_list, enqueue_many, keep_input)
types = _dtypes(tensor_list_list)
shapes = _shapes(tensor_list_list, shapes, enqueue_many)
# TODO(josh11b,mrry): Switch to BatchQueue once it is written.
queue = _which_queue(dynamic_pad)(
capacity=capacity, dtypes=types, shapes=shapes, shared_name=shared_name)
_enqueue_join(queue, tensor_list_list, enqueue_many, keep_input)
summary.scalar("fraction_of_%d_full" % capacity,
math_ops.to_float(queue.size()) * (1. / capacity))
if allow_smaller_final_batch:
dequeued = queue.dequeue_up_to(batch_size, name=name)
else:
dequeued = queue.dequeue_many(batch_size, name=name)
dequeued = _restore_sparse_tensors(dequeued, sparse_info)
# tensors_list was validated to not be empty.
return _as_original_type(tensors_list[0], dequeued)
def _shuffle_batch(tensors, batch_size, capacity, min_after_dequeue,
keep_input, num_threads=1, seed=None, enqueue_many=False,
shapes=None, allow_smaller_final_batch=False,
shared_name=None, name=None):
"""Helper function for `shuffle_batch` and `maybe_shuffle_batch`."""
tensor_list = _as_tensor_list(tensors)
with ops.name_scope(name, "shuffle_batch",
list(tensor_list) + [keep_input]) as name:
if capacity <= min_after_dequeue:
raise ValueError("capacity %d must be bigger than min_after_dequeue %d."
% (capacity, min_after_dequeue))
tensor_list = _validate(tensor_list)
keep_input = _validate_keep_input(keep_input, enqueue_many)
tensor_list, sparse_info = _store_sparse_tensors(
tensor_list, enqueue_many, keep_input)
types = _dtypes([tensor_list])
shapes = _shapes([tensor_list], shapes, enqueue_many)
queue = data_flow_ops.RandomShuffleQueue(
capacity=capacity, min_after_dequeue=min_after_dequeue, seed=seed,
dtypes=types, shapes=shapes, shared_name=shared_name)
_enqueue(queue, tensor_list, num_threads, enqueue_many, keep_input)
full = (math_ops.to_float(
math_ops.maximum(0, queue.size() - min_after_dequeue)) *
(1. / (capacity - min_after_dequeue)))
# Note that name contains a '/' at the end so we intentionally do not place
# a '/' after %s below.
summary_name = (
"fraction_over_%d_of_%d_full" %
(min_after_dequeue, capacity - min_after_dequeue))
summary.scalar(summary_name, full)
if allow_smaller_final_batch:
dequeued = queue.dequeue_up_to(batch_size, name=name)
else:
dequeued = queue.dequeue_many(batch_size, name=name)
dequeued = _restore_sparse_tensors(dequeued, sparse_info)
return _as_original_type(tensors, dequeued)
def _shuffle_batch_join(tensors_list, batch_size, capacity,
min_after_dequeue, keep_input, seed=None,
enqueue_many=False, shapes=None,
allow_smaller_final_batch=False, shared_name=None,
name=None):
"""Helper function for `shuffle_batch_join` and `maybe_shuffle_batch_join`."""
tensor_list_list = _as_tensor_list_list(tensors_list)
with ops.name_scope(name, "shuffle_batch_join",
_flatten(tensor_list_list) + [keep_input]) as name:
tensor_list_list = _validate_join(tensor_list_list)
keep_input = _validate_keep_input(keep_input, enqueue_many)
tensor_list_list, sparse_info = _store_sparse_tensors_join(
tensor_list_list, enqueue_many, keep_input)
types = _dtypes(tensor_list_list)
shapes = _shapes(tensor_list_list, shapes, enqueue_many)
queue = data_flow_ops.RandomShuffleQueue(
capacity=capacity, min_after_dequeue=min_after_dequeue, seed=seed,
dtypes=types, shapes=shapes, shared_name=shared_name)
_enqueue_join(queue, tensor_list_list, enqueue_many, keep_input)
full = (math_ops.to_float(
math_ops.maximum(0, queue.size() - min_after_dequeue)) *
(1. / (capacity - min_after_dequeue)))
# Note that name contains a '/' at the end so we intentionally do not place
# a '/' after %s below.
summary_name = (
"fraction_over_%d_of_%d_full" %
(min_after_dequeue, capacity - min_after_dequeue))
summary.scalar(summary_name, full)
if allow_smaller_final_batch:
dequeued = queue.dequeue_up_to(batch_size, name=name)
else:
dequeued = queue.dequeue_many(batch_size, name=name)
dequeued = _restore_sparse_tensors(dequeued, sparse_info)
# tensors_list was validated to not be empty.
return _as_original_type(tensors_list[0], dequeued)
# Batching functions ----------------------------------------------------------
def batch(tensors, batch_size, num_threads=1, capacity=32,
enqueue_many=False, shapes=None, dynamic_pad=False,
allow_smaller_final_batch=False, shared_name=None, name=None):
"""Creates batches of tensors in `tensors`.
The argument `tensors` can be a list or a dictionary of tensors.
The value returned by the function will be of the same type
as `tensors`.
This function is implemented using a queue. A `QueueRunner` for the
queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single
example. An input tensor with shape `[x, y, z]` will be output as a tensor
with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of
examples, where the first dimension is indexed by example, and all members of
`tensors` should have the same size in the first dimension. If an input
tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,
y, z]`. The `capacity` argument controls how long the prefetching is
allowed to grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
(i) the `shapes` argument is passed, or (ii) all of the tensors in
`tensors` have fully-defined shapes. `ValueError` will be
raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
tensors is known, but individual dimensions may have value `None`.
In this case, for each enqueue the dimensions with value `None`
may have a variable length; upon dequeue, the output tensors will be padded
on the right to the maximum shape of the tensors in the current minibatch.
For numbers, this padding takes value 0. For strings, this padding is
the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch, otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed batch_size will fail.
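For example (an illustrative sketch; `read_single_example` stands in for
your own per-example parsing logic):
```python
single_image, single_label = read_single_example(filename_queue)
# Batch single examples into groups of 32, using 4 enqueuing threads.
image_batch, label_batch = tf.train.batch(
[single_image, single_label], batch_size=32, num_threads=4, capacity=1024)
```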
Args:
tensors: The list or dictionary of tensors to enqueue.
batch_size: The new batch size pulled from the queue.
num_threads: The number of threads enqueuing `tensors`. The batching will
be nondeterministic if `num_threads > 1`.
capacity: An integer. The maximum number of elements in the queue.
enqueue_many: Whether each tensor in `tensors` is a single example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors`.
dynamic_pad: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same types as `tensors` (if the
input is a list containing a single element, a tensor is returned rather
than a list).
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors`.
"""
return _batch(
tensors,
batch_size,
keep_input=True,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
shapes=shapes,
dynamic_pad=dynamic_pad,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32,
enqueue_many=False, shapes=None, dynamic_pad=False,
allow_smaller_final_batch=False, shared_name=None, name=None):
"""Conditionally creates batches of tensors based on `keep_input`.
See docstring in `batch` for more details.
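For example, to batch only examples whose label is nonzero (an
illustrative sketch; `read_single_example` stands in for your own input
logic):
```python
example, label = read_single_example(filename_queue)
# `keep_input` is a scalar bool here, so the whole example is kept/dropped.
example_batch, label_batch = tf.train.maybe_batch(
[example, label], keep_input=tf.not_equal(label, 0), batch_size=32)
```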
Args:
tensors: The list or dictionary of tensors to enqueue.
keep_input: A `bool` Tensor. This tensor controls whether the input is
added to the queue or not. If it is a scalar and evaluates to `True`, then
`tensors` are all added to the queue. If it is a vector and `enqueue_many`
is `True`, then each example is added to the queue only if the
corresponding value in `keep_input` is `True`. This tensor essentially
acts as a filtering mechanism.
batch_size: The new batch size pulled from the queue.
num_threads: The number of threads enqueuing `tensors`. The batching will
be nondeterministic if `num_threads > 1`.
capacity: An integer. The maximum number of elements in the queue.
enqueue_many: Whether each tensor in `tensors` is a single example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors`.
dynamic_pad: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same types as `tensors`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors`.
"""
return _batch(
tensors,
batch_size,
keep_input,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
shapes=shapes,
dynamic_pad=dynamic_pad,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False,
shapes=None, dynamic_pad=False, allow_smaller_final_batch=False,
shared_name=None, name=None):
"""Runs a list of tensors to fill a queue to create batches of examples.
The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.train.batch()`.
WARNING: This function is nondeterministic, since it starts a separate
thread for each element of `tensors_list`.
Enqueues a different list of tensors in different threads.
Implemented using a queue -- a `QueueRunner` for the queue
is added to the current `Graph`'s `QUEUE_RUNNER` collection.
`len(tensors_list)` threads will be started,
with thread `i` enqueuing the tensors from
`tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first
dimension if `enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor `x` will be output as a
tensor with shape `[batch_size] + x.shape`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. The slices of any input tensor
`x` are treated as examples, and the output tensors will have shape
`[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching this yourself.
*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
(i) the `shapes` argument is passed, or (ii) all of the tensors in
`tensors_list` have fully-defined shapes. `ValueError` will be
raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
tensors is known, but individual dimensions may have value `None`.
In this case, for each enqueue the dimensions with value `None`
may have a variable length; upon dequeue, the output tensors will be padded
on the right to the maximum shape of the tensors in the current minibatch.
For numbers, this padding takes value 0. For strings, this padding is
the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch, otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed batch_size will fail.
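For example, to read examples with several parallel reader instances (an
illustrative sketch; `read_example_from` stands in for your own reader
logic, one invocation per thread):
```python
example_lists = [read_example_from(filename_queue) for _ in range(4)]
# One enqueuing thread is started per element of `example_lists`.
example_batch, label_batch = tf.train.batch_join(
example_lists, batch_size=32, capacity=1024)
```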
Args:
tensors_list: A list of tuples or dictionaries of tensors to enqueue.
batch_size: An integer. The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
enqueue_many: Whether each tensor in `tensors_list` is a single
example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors_list[i]`.
dynamic_pad: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors_list`.
"""
return _batch_join(
tensors_list,
batch_size,
keep_input=True,
capacity=capacity,
enqueue_many=enqueue_many,
shapes=shapes,
dynamic_pad=dynamic_pad,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32,
enqueue_many=False, shapes=None, dynamic_pad=False,
allow_smaller_final_batch=False, shared_name=None,
name=None):
"""Runs a list of tensors to conditionally fill a queue to create batches.
See docstring in `batch_join` for more details.
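For example, to keep only the rows of pre-batched inputs whose label is
nonzero (an illustrative sketch; `examples` and `labels` stand in for
your own pre-batched input tensors):
```python
# With `enqueue_many=True`, a vector `keep_input` filters per row.
keep = tf.not_equal(labels, 0)
example_batch, label_batch = tf.train.maybe_batch_join(
[[examples, labels]], keep_input=keep, batch_size=32, enqueue_many=True)
```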
Args:
tensors_list: A list of tuples or dictionaries of tensors to enqueue.
keep_input: A `bool` Tensor. This tensor controls whether the input is
added to the queue or not. If it is a scalar and evaluates to `True`, then
`tensors` are all added to the queue. If it is a vector and `enqueue_many`
is `True`, then each example is added to the queue only if the
corresponding value in `keep_input` is `True`. This tensor essentially
acts as a filtering mechanism.
batch_size: An integer. The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
enqueue_many: Whether each tensor in `tensors_list` is a single
example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors_list[i]`.
dynamic_pad: Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors_list`.
"""
return _batch_join(
tensors_list,
batch_size,
keep_input,
capacity=capacity,
enqueue_many=enqueue_many,
shapes=shapes,
dynamic_pad=dynamic_pad,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def shuffle_batch(tensors, batch_size, capacity, min_after_dequeue,
num_threads=1, seed=None, enqueue_many=False, shapes=None,
allow_smaller_final_batch=False, shared_name=None, name=None):
"""Creates batches by randomly shuffling tensors.
This function adds the following to the current `Graph`:
* A shuffling queue into which tensors from `tensors` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors`.
If `enqueue_many` is `False`, `tensors` is assumed to represent a
single example. An input tensor with shape `[x, y, z]` will be output
as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a
batch of examples, where the first dimension is indexed by example,
and all members of `tensors` should have the same size in the
first dimension. If an input tensor has shape `[*, x, y, z]`, the
output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching this yourself.
For example:
```python
# Creates batches of 32 images and 32 labels.
image_batch, label_batch = tf.train.shuffle_batch(
[single_image, single_label],
batch_size=32,
num_threads=4,
capacity=50000,
min_after_dequeue=10000)
```
*N.B.:* You must ensure that either (i) the `shapes` argument is
passed, or (ii) all of the tensors in `tensors` have
fully-defined shapes. `ValueError` will be raised if neither of
these conditions holds.
If `allow_smaller_final_batch` is `True`, a smaller batch value than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch, otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed batch_size will fail.
Args:
tensors: The list or dictionary of tensors to enqueue.
batch_size: The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
min_after_dequeue: Minimum number of elements in the queue after a
dequeue, used to ensure a level of mixing of elements.
num_threads: The number of threads enqueuing `tensors`.
seed: Seed for the random shuffling within the queue.
enqueue_many: Whether each tensor in `tensors` is a single example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors`.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same types as `tensors`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors`.
"""
return _shuffle_batch(
tensors,
batch_size,
capacity,
min_after_dequeue,
keep_input=True,
num_threads=num_threads,
seed=seed,
enqueue_many=enqueue_many,
shapes=shapes,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue,
keep_input, num_threads=1, seed=None,
enqueue_many=False, shapes=None,
allow_smaller_final_batch=False, shared_name=None,
name=None):
"""Creates batches by randomly shuffling conditionally-enqueued tensors.
See docstring in `shuffle_batch` for more details.
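For example, to shuffle and batch only examples that pass a predicate
(an illustrative sketch; `read_single_example` and `is_valid` stand in
for your own input and filtering logic):
```python
example, label = read_single_example(filename_queue)
# Scalar `keep_input`: the whole example is conditionally enqueued.
example_batch, label_batch = tf.train.maybe_shuffle_batch(
[example, label], batch_size=32, capacity=50000,
min_after_dequeue=10000, keep_input=is_valid(example))
```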
Args:
tensors: The list or dictionary of tensors to enqueue.
batch_size: The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
min_after_dequeue: Minimum number of elements in the queue after a
dequeue, used to ensure a level of mixing of elements.
keep_input: A `bool` Tensor. This tensor controls whether the input is
added to the queue or not. If it is a scalar and evaluates to `True`,
then `tensors` are all added to the queue. If it is a vector and
`enqueue_many` is `True`, then each example is added to the queue only
if the corresponding value in `keep_input` is `True`. This tensor
essentially acts as a filtering mechanism.
num_threads: The number of threads enqueuing `tensors`.
seed: Seed for the random shuffling within the queue.
enqueue_many: Whether each tensor in `tensors` is a single example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors`.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same types as `tensors`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors`.
"""
return _shuffle_batch(
tensors,
batch_size,
capacity,
min_after_dequeue,
keep_input,
num_threads=num_threads,
seed=seed,
enqueue_many=enqueue_many,
shapes=shapes,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def shuffle_batch_join(tensors_list, batch_size, capacity,
min_after_dequeue, seed=None, enqueue_many=False,
shapes=None, allow_smaller_final_batch=False,
shared_name=None, name=None):
"""Create batches by randomly shuffling tensors.
The `tensors_list` argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the `tensors` argument of `tf.train.shuffle_batch()`.
This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`:
* A shuffling queue into which tensors from `tensors_list` are enqueued.
* A `dequeue_many` operation to create batches from the queue.
* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
from `tensors_list`.
`len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x,
y, z]`, the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception; however, if this operation is used in your main thread,
you are responsible for catching this yourself.
If `allow_smaller_final_batch` is `True`, a smaller batch value than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch, otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`shape` property, will have a first `Dimension` value of `None`, and
operations that depend on a fixed batch_size will fail.
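For example, to shuffle examples produced by several parallel readers
(an illustrative sketch; `read_example_from` stands in for your own
reader logic, one invocation per thread):
```python
example_lists = [read_example_from(filename_queue) for _ in range(4)]
# One enqueuing thread per element of `example_lists`, shuffled dequeue.
example_batch, label_batch = tf.train.shuffle_batch_join(
example_lists, batch_size=32, capacity=50000, min_after_dequeue=10000)
```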
Args:
tensors_list: A list of tuples or dictionaries of tensors to enqueue.
batch_size: An integer. The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
min_after_dequeue: Minimum number of elements in the queue after a
dequeue, used to ensure a level of mixing of elements.
seed: Seed for the random shuffling within the queue.
enqueue_many: Whether each tensor in `tensors_list` is a single
example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors_list[i]`.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors_list`.
"""
return _shuffle_batch_join(
tensors_list,
batch_size,
capacity,
min_after_dequeue,
keep_input=True,
seed=seed,
enqueue_many=enqueue_many,
shapes=shapes,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
def maybe_shuffle_batch_join(tensors_list, batch_size, capacity,
min_after_dequeue, keep_input, seed=None,
enqueue_many=False, shapes=None,
allow_smaller_final_batch=False, shared_name=None,
name=None):
"""Create batches by randomly shuffling conditionally-enqueued tensors.
See docstring in `shuffle_batch_join` for more details.
Args:
tensors_list: A list of tuples or dictionaries of tensors to enqueue.
batch_size: An integer. The new batch size pulled from the queue.
capacity: An integer. The maximum number of elements in the queue.
min_after_dequeue: Minimum number of elements in the queue after a
dequeue, used to ensure a level of mixing of elements.
keep_input: A `bool` Tensor. This tensor controls whether the input is
added to the queue or not. If it is a scalar and evaluates to `True`,
then `tensors` are all added to the queue. If it is a vector and
`enqueue_many` is `True`, then each example is added to the queue only
if the corresponding value in `keep_input` is `True`. This tensor
essentially acts as a filtering mechanism.
seed: Seed for the random shuffling within the queue.
enqueue_many: Whether each tensor in `tensors_list` is a single
example.
shapes: (Optional) The shapes for each example. Defaults to the
inferred shapes for `tensors_list[i]`.
allow_smaller_final_batch: (Optional) Boolean. If `True`, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional) If set, this queue will be shared under the given
name across multiple sessions.
name: (Optional) A name for the operations.
Returns:
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.
Raises:
ValueError: If the `shapes` are not specified, and cannot be
inferred from the elements of `tensors_list`.
"""
return _shuffle_batch_join(
tensors_list,
batch_size,
capacity,
min_after_dequeue,
keep_input,
seed=seed,
enqueue_many=enqueue_many,
shapes=shapes,
allow_smaller_final_batch=allow_smaller_final_batch,
shared_name=shared_name,
name=name)
| mit |
filippog/pysnmp | examples/hlapi/asyncore/sync/manager/cmdgen/set-multiple-scalar-values.py | 1 | 1568 | """
SET scalar values
++++++++++++++++++
Send SNMP SET request using the following options:
* with SNMPv1, community 'public'
* over IPv4/UDP
* to an Agent at demo.snmplabs.com:161
* setting three var-bindings to new values
Please note that in this example MIB lookup is only used
for the second var-binding. For the rest, the value types are
inferred from the passed objects.
Functionally similar to:
| $ snmpset -v1 -c public demo.snmplabs.com \
| 1.3.6.1.2.1.1.9.1.2.1 o 1.3.6.1.4.1.20408.1.1 \
| 1.3.6.1.2.1.1.9.1.2.1 = 1.3.6.1.4.1.20408.1.1 \
| 1.3.6.1.2.1.1.9.1.3.1 s "new system name"
"""#
from pysnmp.hlapi import *
errorIndication, errorStatus, errorIndex, varBinds = next(
setCmd(SnmpEngine(),
CommunityData('public', mpModel=0),
UdpTransportTarget(('demo.snmplabs.com', 161)),
ContextData(),
ObjectType(ObjectIdentity('1.3.6.1.2.1.1.9.1.2.1'),
ObjectIdentifier('1.3.6.1.4.1.20408.1.1')),
ObjectType(ObjectIdentity('1.3.6.1.2.1.1.9.1.2.1'),
'1.3.6.1.4.1.20408.1.1'),
ObjectType(ObjectIdentity('1.3.6.1.2.1.1.9.1.3.1'),
OctetString('new system name')))
)
if errorIndication:
print(errorIndication)
elif errorStatus:
print('%s at %s' % (
errorStatus.prettyPrint(),
errorIndex and varBinds[int(errorIndex)-1][0] or '?'
)
)
else:
for varBind in varBinds:
print(' = '.join([ x.prettyPrint() for x in varBind ]))
| bsd-3-clause |
anoopsarkar/nlp-class-hw | neuralmt/zipsrc.py | 1 | 1477 | """
Run:
python3 zipsrc.py
This will create a file `source.zip` which you can upload to Coursys (courses.cs.sfu.ca) as your source code submission.
To customize the files used by default, run:
python3 zipsrc.py -h
"""
import sys, os, optparse, shutil, iocollect
if __name__ == '__main__':
optparser = optparse.OptionParser()
optparser.add_option("-a", "--answerdir", dest="answer_dir", default='answer', help="answer directory containing your source files")
optparser.add_option("-s", "--srcfile", dest="src_file", default='neuralmt.py', help="name of source file for homework")
optparser.add_option("-n", "--notebook", dest="notebook_file", default='neuralmt.ipynb', help="name of iPython notebook for homework")
optparser.add_option("-z", "--zipfile", dest="zipfile", default='source', help="zip file you should upload to Coursys (courses.cs.sfu.ca)")
(opts, _) = optparser.parse_args()
answer_files = iocollect.getfiles(opts.answer_dir)
if opts.src_file not in answer_files:
raise ValueError("Error: missing answer file {}. Did you name your answer program correctly?".format(opts.src_file))
if opts.notebook_file not in answer_files:
raise ValueError("Error: missing notebook file {}. Did you name your iPython notebook correctly?".format(opts.notebook_file))
outputs_zipfile = shutil.make_archive(opts.zipfile, 'zip', opts.answer_dir)
print("{0} created".format(outputs_zipfile), file=sys.stderr)
| apache-2.0 |
ksrajkumar/openerp-6.1 | openerp/addons/itara_medical_survey/medical_pack_question.py | 1 | 2469 | from osv import fields, osv
import time
from datetime import datetime
import pooler
import netsvc
import sys
import os
from osv import osv,fields
from mx.DateTime import *
from tools.translate import _
import logging
import netsvc
logger = logging.getLogger('server')
class med_question_pack(osv.osv):
_name="med.question.pack"
_description = ""
_columns = {
'name': fields.char('Requirements', size=64),
'code': fields.char('Code', size=6),
#'question_ids': fields.one2many('med.question', 'q_id', 'Questionnaire Lines'),
'pack_line': fields.one2many('med.question.pack.line', 'pack_id', 'Question pack Lines'),
'physical_activity_id': fields.one2many('physical.activity', 'name_id', 'Physical Activity'),
}
med_question_pack()
class med_question_pack_line(osv.osv):
_name="med.question.pack.line"
_description=""
_columns={
'pack_id': fields.many2one('med.question.pack', 'Pack_line'),
'question_id': fields.many2one('med.question', 'questions'),
'remark':fields.char('Remarks', size=68),
'yesno': fields.selection([('Yes', 'Yes'), ('No', 'No'), ('Dont Know', 'Dont Know')], 'Status'),
}
med_question_pack_line()
class med_question(osv.osv):
_name="med.question"
_description = ""
_columns = {
'name': fields.char('Questionnaire', size=250),
'code': fields.char('Code', size=6),
'yesno': fields.selection([('Yes', 'Yes'), ('No', 'No'), ('Dont Know', 'Dont Know')], 'Status'),
'remark':fields.char('Remarks', size=68),
'q_id': fields.many2one('med.question.pack','Question_link'),
#'quest_sur_id':fields.many2one('general.medical.survey', 'qes')
# 'select_req': fields.many2many('data.collect', 'data_collect_rel', 'categ_id', 'req_id', 'Select Multi Requirements')
}
med_question()
class physical_activity_many(osv.osv):
_name='physical.activity.many'
_description='Creating master for Physical Activity'
_columns={
'name':fields.char('Physical Activity',size=256),
}
physical_activity_many()
class physical_activity(osv.osv):
_name='physical.activity'
_description='Creating master for Physical Activity'
_columns={
'name': fields.many2one('physical.activity.many',''),
'name_id':fields.many2one('med.question.pack','Physical Activity'),
}
physical_activity()
| agpl-3.0 |
FlappyPig/pigcrypto | setup.py | 1 | 4014 | """A setuptools based setup module.
See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='pigcrypto',
# Versions should comply with PEP440. For a discussion on single-sourcing
# the version across setup.py and the project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version='0.0.1.dev1',
description='CTF crypto tools',
long_description=long_description,
# The project's main homepage.
url='https://github.com/FlappyPig/pigcrypto',
# Author details
author='FlappyPig',
author_email='flappypig@flappypig.club',
# Choose your license
license='MIT',
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 3 - Alpha',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Topic :: Security :: Cryptography',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
# 'Programming Language :: Python :: 2',
# 'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
# 'Programming Language :: Python :: 3',
# 'Programming Language :: Python :: 3.3',
# 'Programming Language :: Python :: 3.4',
# 'Programming Language :: Python :: 3.5',
],
# What does your project relate to?
keywords='ctf crypto',
# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
packages=find_packages(exclude=['contrib', 'docs', 'tests']),
# Alternatively, if you want to distribute just a my_module.py, uncomment
# this:
# py_modules=["my_module"],
# List run-time dependencies here. These will be installed by pip when
# your project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/requirements.html
# install_requires=[''],
# List additional groups of dependencies here (e.g. development
# dependencies). You can install these using the following syntax,
# for example:
# $ pip install -e .[dev,test]
# extras_require={
# 'dev': [''],
# 'test': [''],
# },
# If there are data files included in your packages that need to be
# installed, specify them here. If using Python 2.6 or less, then these
# have to be included in MANIFEST.in as well.
# package_data={
# 'sample': ['package_data.dat'],
# },
# Although 'package_data' is the preferred approach, in some cases you may
# need to place data files outside of your packages. See:
# http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
# In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
# data_files=[('my_data', ['data/data_file'])],
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# pip to create the appropriate form of executable for the target platform.
# entry_points={
# 'console_scripts': [
# 'sample=sample:main',
# ],
# },
)
| mit |
dkodnik/arp | addons/account_anglo_saxon/__init__.py | 436 | 1090 | ##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import product
import stock
import purchase
import invoice
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
socillion/twistedpusher | client.py | 1 | 6906 | #!/usr/bin/env python
import logging
import warnings
from twisted.application.service import MultiService
from twisted.internet.endpoints import clientFromString
from zope.interface import implementer
from twistedpusher.connection import Connection
from twistedpusher import channel, websocket
from twistedpusher.events import EventEmitter
from twistedpusher.interfaces import IPusherClientService, IPusherClient
log = logging.getLogger(__name__)
# References:
# http://pusher.com/docs/pusher_protocol
# http://autobahn.ws/python/reference.html
# http://autobahn.ws/python/examples.html
VERSION = '1.3.1'
@implementer(IPusherClientService)
class PusherService(MultiService, EventEmitter):
protocol_version = 7
host = 'ws.pusherapp.com'
client_name = 'twistedpusher'
def __init__(self, key, encrypted=True, endpoint_string=None, reactor=None, **kwargs):
"""
Pusher client service. Start it with ``startService`` and stop it with ``stopService``.
:param key: key for the Pusher application to connect to
:type key: str
:param encrypted: whether to use secure websockets
:type encrypted: bool
:param endpoint_string: a string to build the endpoint with, using clientFromString
:type endpoint_string: str
:param reactor: optional Twisted reactor
"""
# Must do it this way so both constructors execute.
# (Multi)Service constructor is not equipped for multiple inheritance.
MultiService.__init__(self)
EventEmitter.__init__(self)
self.key = key
self.encrypted = encrypted
if not reactor:
from twisted.internet import reactor
# todo add support for other factory classes
factory = websocket.PusherWebsocketFactory(
url=PusherService._build_url(key, encrypted),
useragent='{0}/{1}'.format(PusherService.client_name, VERSION),
**kwargs)
endpoint = self.__class__._build_endpoint(endpoint_string, encrypted, reactor=reactor)
self.connection = Connection(factory, endpoint, self._on_event, reactor=reactor)
self.addService(self.connection)
# List of subscribed channels
self.channels = dict()
####################
##### Channels #####
####################
def subscribe(self, channel_name, **kwargs):
"""
Subscribe to a channel.
:param channel_name: the channel's name
:type channel_name: str or unicode
:param json_data: flag to enable parsing client event data as JSON
:return: the created channel
:rtype: twistedpusher.Channel
:raises BadChannelNameError: if channel_name is not a valid Pusher channel name
"""
if channel_name not in self.channels:
chan = channel.buildChannel(channel_name, self.connection, **kwargs)
# only subscribe if connected, it'll get automatically triggered on connect if we aren't
if self.connection.state == 'connected':
chan.subscribe()
self.channels[channel_name] = chan
else:
warnings.warn("Already subscribed to channel {0}".format(channel_name))
return self.channels[channel_name]
def unsubscribe(self, channel_name):
"""
Unsubscribe from a channel.
:param channel_name: the channel's name
:type channel_name: str
"""
if channel_name in self.channels:
self.channels[channel_name].unsubscribe()
self.channels.pop(channel_name)
else:
warnings.warn("Attempted to unsubscribe from channel {0} when not subscribed".format(channel_name))
def channel(self, channel_name):
"""
Get a channel by name.
:type channel_name: str
:return: the requested channel, if found
:rtype: twistedpusher.Channel
:raises ValueError: if the channel is not found (i.e. subscribed to)
"""
try:
return self.channels[channel_name]
except KeyError:
raise ValueError("Channel not found: '{0}'.".format(channel_name))
####################
##### Handlers #####
####################
def _on_event(self, event):
"""
:type event: events.Event
"""
try:
chan = self.channels[event['channel']]
except (KeyError, AttributeError) as e:
# not subscribed to the channel, or this isn't a channel event.
pass
else:
chan.emit_event(event)
self.emit_event(event)
#####################
##### Utilities #####
#####################
@classmethod
def _build_url(cls, key, encrypted):
"""
Build a Pusher URL using the specified app key.
:param key: Pusher application key
:type key: str
:param encrypted: whether to use secure websockets (wss)
:type encrypted: bool
:return: a Pusher URL
:rtype: str
"""
path = "/app/{0}?client={1}&version={2}&protocol={3}".format(
key,
cls.client_name,
VERSION,
cls.protocol_version)
return "{0}://{1}:{2}{3}".format(
"wss" if encrypted else "ws",
cls.host,
443 if encrypted else 80,
path)
@classmethod
def _build_endpoint(cls, endpoint_string=None, encrypted=True, timeout=5, host=None, reactor=None):
if not reactor:
from twisted.internet import reactor
host = host or cls.host
if endpoint_string:
endpoint = clientFromString(reactor, endpoint_string)
else:
if encrypted:
port = 443
proto = 'ssl'
else:
port = 80
proto = 'tcp'
endpoint = clientFromString(reactor, '{0}:host={1}:port={2}:timeout={3}'.format(proto, host, port, timeout))
return endpoint
@implementer(IPusherClient)
class Pusher(PusherService):
def __init__(self, key, encrypted=True, endpoint_client_string=None, **kwargs):
"""
Pusher client. Extends PusherService.
:param key: key for the Pusher application to connect to
:type key: str
:param encrypted: whether to use secure websockets
:type encrypted: bool
"""
super(Pusher, self).__init__(key, encrypted, endpoint_client_string, **kwargs)
from twisted.internet import reactor
reactor.addSystemEventTrigger('before', 'shutdown', self.stopService)
self.connect()
def connect(self):
"""
Connect to Pusher.
"""
self.startService()
def disconnect(self):
"""Disconnect from Pusher."""
self.stopService()
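# Example usage (an illustrative sketch; assumes a valid Pusher app key,
# a running Twisted reactor, and that event listeners are registered on
# the channel object returned by `subscribe`):
#
#   client = Pusher('your-app-key')
#   chan = client.subscribe('test_channel')
#   # ... bind event listeners on `chan`, then run the reactor.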
Client = Pusher
ClientService = PusherService
| mit |
karllessard/tensorflow | tensorflow/python/data/ops/iterator_ops.py | 5 | 36187 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Python wrappers for Iterators."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import threading
import warnings
import six
from tensorflow.python.data.experimental.ops import distribute_options
from tensorflow.python.data.ops import optional_ops
from tensorflow.python.data.util import nest
from tensorflow.python.data.util import structure
from tensorflow.python.eager import context
from tensorflow.python.framework import composite_tensor
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_spec
from tensorflow.python.framework import type_spec
from tensorflow.python.ops import gen_dataset_ops
from tensorflow.python.training.saver import BaseSaverBuilder
from tensorflow.python.training.tracking import base as trackable
from tensorflow.python.util import deprecation
from tensorflow.python.util.compat import collections_abc
from tensorflow.python.util.tf_export import tf_export
# NOTE(mrry): It is legitimate to call `Iterator.get_next()` multiple
# times, e.g. when you are distributing different elements to multiple
# devices in a single step. However, a common pitfall arises when
# users call `Iterator.get_next()` in each iteration of their training
# loop. `Iterator.get_next()` adds ops to the graph, and executing
# each op allocates resources (including threads); as a consequence,
# invoking it in every iteration of a training loop causes slowdown
# and eventual resource exhaustion. To guard against this outcome, we
# log a warning when the number of uses crosses a threshold of suspicion.
GET_NEXT_CALL_WARNING_THRESHOLD = 32
GET_NEXT_CALL_WARNING_MESSAGE = (
"An unusually high number of `Iterator.get_next()` calls was detected. "
"This often indicates that `Iterator.get_next()` is being called inside "
"a training loop, which will cause gradual slowdown and eventual resource "
"exhaustion. If this is the case, restructure your code to call "
"`next_element = iterator.get_next()` once outside the loop, and use "
"`next_element` as the input to some computation that is invoked inside "
"the loop.")
# Collection of all IteratorResources in the `Graph`.
GLOBAL_ITERATORS = "iterators"
def _device_stack_is_empty():
if context.executing_eagerly():
return context.context().device_name is None
# pylint: disable=protected-access
device_stack = ops.get_default_graph()._device_functions_outer_to_inner
# pylint: enable=protected-access
return not bool(device_stack)
@tf_export(v1=["data.Iterator"])
class Iterator(trackable.Trackable):
"""Represents the state of iterating through a `Dataset`."""
def __init__(self, iterator_resource, initializer, output_types,
output_shapes, output_classes):
"""Creates a new iterator from the given iterator resource.
Note: Most users will not call this initializer directly, and will
instead use `Dataset.make_initializable_iterator()` or
`Dataset.make_one_shot_iterator()`.
Args:
iterator_resource: A `tf.resource` scalar `tf.Tensor` representing the
iterator.
initializer: A `tf.Operation` that should be run to initialize this
iterator.
output_types: A nested structure of `tf.DType` objects corresponding to
each component of an element of this iterator.
output_shapes: A nested structure of `tf.TensorShape` objects
corresponding to each component of an element of this iterator.
output_classes: A nested structure of Python `type` objects corresponding
to each component of an element of this iterator.
"""
self._iterator_resource = iterator_resource
self._initializer = initializer
if (output_types is None or output_shapes is None
or output_classes is None):
raise ValueError("If `structure` is not specified, all of "
"`output_types`, `output_shapes`, and `output_classes`"
" must be specified.")
self._element_spec = structure.convert_legacy_structure(
output_types, output_shapes, output_classes)
self._flat_tensor_shapes = structure.get_flat_tensor_shapes(
self._element_spec)
self._flat_tensor_types = structure.get_flat_tensor_types(
self._element_spec)
self._string_handle = gen_dataset_ops.iterator_to_string_handle(
self._iterator_resource)
self._get_next_call_count = 0
ops.add_to_collection(GLOBAL_ITERATORS, self._iterator_resource)
@staticmethod
def from_structure(output_types,
output_shapes=None,
shared_name=None,
output_classes=None):
"""Creates a new, uninitialized `Iterator` with the given structure.
This iterator-constructing method can be used to create an iterator that
is reusable with many different datasets.
The returned iterator is not bound to a particular dataset, and it has
no `initializer`. To initialize the iterator, run the operation returned by
`Iterator.make_initializer(dataset)`.
The following is an example
```python
iterator = Iterator.from_structure(tf.int64, tf.TensorShape([]))
dataset_range = Dataset.range(10)
range_initializer = iterator.make_initializer(dataset_range)
dataset_evens = dataset_range.filter(lambda x: x % 2 == 0)
evens_initializer = iterator.make_initializer(dataset_evens)
# Define a model based on the iterator; in this example, the model_fn
# is expected to take scalar tf.int64 Tensors as input (see
# the definition of 'iterator' above).
prediction, loss = model_fn(iterator.get_next())
# Train for `num_epochs`, where for each epoch, we first iterate over
# dataset_range, and then iterate over dataset_evens.
for _ in range(num_epochs):
# Initialize the iterator to `dataset_range`
sess.run(range_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
# Initialize the iterator to `dataset_evens`
sess.run(evens_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
```
Args:
output_types: A nested structure of `tf.DType` objects corresponding to
each component of an element of this dataset.
output_shapes: (Optional.) A nested structure of `tf.TensorShape` objects
corresponding to each component of an element of this dataset. If
omitted, each component will have an unconstrained shape.
shared_name: (Optional.) If non-empty, this iterator will be shared under
the given name across multiple sessions that share the same devices
(e.g. when using a remote server).
output_classes: (Optional.) A nested structure of Python `type` objects
corresponding to each component of an element of this iterator. If
omitted, each component is assumed to be of type `tf.Tensor`.
Returns:
An `Iterator`.
Raises:
TypeError: If the structures of `output_shapes` and `output_types` are
not the same.
"""
output_types = nest.map_structure(dtypes.as_dtype, output_types)
if output_shapes is None:
output_shapes = nest.map_structure(
lambda _: tensor_shape.TensorShape(None), output_types)
else:
output_shapes = nest.map_structure_up_to(output_types,
tensor_shape.as_shape,
output_shapes)
if output_classes is None:
output_classes = nest.map_structure(lambda _: ops.Tensor, output_types)
nest.assert_same_structure(output_types, output_shapes)
output_structure = structure.convert_legacy_structure(
output_types, output_shapes, output_classes)
if shared_name is None:
shared_name = ""
iterator_resource = gen_dataset_ops.iterator_v2(
container="",
shared_name=shared_name,
output_types=structure.get_flat_tensor_types(output_structure),
output_shapes=structure.get_flat_tensor_shapes(
output_structure))
return Iterator(iterator_resource, None, output_types, output_shapes,
output_classes)
@staticmethod
def from_string_handle(string_handle,
output_types,
output_shapes=None,
output_classes=None):
"""Creates a new, uninitialized `Iterator` based on the given handle.
This method allows you to define a "feedable" iterator where you can choose
between concrete iterators by feeding a value in a `tf.Session.run` call.
    In that case, `string_handle` would be a `tf.compat.v1.placeholder`, and
    you would feed it with the value of `tf.data.Iterator.string_handle` in
    each step.
For example, if you had two iterators that marked the current position in
a training dataset and a test dataset, you could choose which to use in
each step as follows:
```python
train_iterator = tf.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.compat.v1.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
```
Args:
string_handle: A scalar `tf.Tensor` of type `tf.string` that evaluates to
a handle produced by the `Iterator.string_handle()` method.
output_types: A nested structure of `tf.DType` objects corresponding to
each component of an element of this dataset.
output_shapes: (Optional.) A nested structure of `tf.TensorShape` objects
corresponding to each component of an element of this dataset. If
        omitted, each component will have an unconstrained shape.
output_classes: (Optional.) A nested structure of Python `type` objects
corresponding to each component of an element of this iterator. If
omitted, each component is assumed to be of type `tf.Tensor`.
Returns:
An `Iterator`.
"""
output_types = nest.map_structure(dtypes.as_dtype, output_types)
if output_shapes is None:
output_shapes = nest.map_structure(
lambda _: tensor_shape.TensorShape(None), output_types)
else:
output_shapes = nest.map_structure_up_to(output_types,
tensor_shape.as_shape,
output_shapes)
if output_classes is None:
output_classes = nest.map_structure(lambda _: ops.Tensor, output_types)
nest.assert_same_structure(output_types, output_shapes)
output_structure = structure.convert_legacy_structure(
output_types, output_shapes, output_classes)
string_handle = ops.convert_to_tensor(string_handle, dtype=dtypes.string)
iterator_resource = gen_dataset_ops.iterator_from_string_handle_v2(
string_handle,
output_types=structure.get_flat_tensor_types(output_structure),
output_shapes=structure.get_flat_tensor_shapes(output_structure))
return Iterator(iterator_resource, None, output_types, output_shapes,
output_classes)
@property
def initializer(self):
"""A `tf.Operation` that should be run to initialize this iterator.
Returns:
      A `tf.Operation` that should be run to initialize this iterator.
Raises:
ValueError: If this iterator initializes itself automatically.
"""
if self._initializer is not None:
return self._initializer
else:
# TODO(mrry): Consider whether one-shot iterators should have
# initializers that simply reset their state to the beginning.
raise ValueError("Iterator does not have an initializer.")
def make_initializer(self, dataset, name=None):
"""Returns a `tf.Operation` that initializes this iterator on `dataset`.
Args:
dataset: A `Dataset` with compatible structure to this iterator.
name: (Optional.) A name for the created operation.
Returns:
A `tf.Operation` that can be run to initialize this iterator on the given
`dataset`.
Raises:
TypeError: If `dataset` and this iterator do not have a compatible
element structure.
"""
with ops.name_scope(name, "make_initializer") as name:
      # NOTE(mrry): Cannot depend on `dataset_ops.get_legacy_output*()`
      # because doing so would create a circular dependency.
# pylint: disable=protected-access
dataset_output_types = nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_types(),
dataset.element_spec)
dataset_output_shapes = nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_shapes(),
dataset.element_spec)
dataset_output_classes = nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_classes(),
dataset.element_spec)
# pylint: enable=protected-access
nest.assert_same_structure(self.output_types, dataset_output_types)
nest.assert_same_structure(self.output_shapes, dataset_output_shapes)
for iterator_class, dataset_class in zip(
nest.flatten(self.output_classes),
nest.flatten(dataset_output_classes)):
if iterator_class is not dataset_class:
raise TypeError(
"Expected output classes %r but got dataset with output class %r."
% (self.output_classes, dataset_output_classes))
for iterator_dtype, dataset_dtype in zip(
nest.flatten(self.output_types), nest.flatten(dataset_output_types)):
if iterator_dtype != dataset_dtype:
raise TypeError(
"Expected output types %r but got dataset with output types %r." %
(self.output_types, dataset_output_types))
for iterator_shape, dataset_shape in zip(
nest.flatten(self.output_shapes), nest.flatten(
dataset_output_shapes)):
if not iterator_shape.is_compatible_with(dataset_shape):
raise TypeError("Expected output shapes compatible with %r but got "
"dataset with output shapes %r." %
(self.output_shapes, dataset_output_shapes))
# TODO(b/169442955): Investigate the need for this colocation constraint.
with ops.colocate_with(self._iterator_resource):
# pylint: disable=protected-access
return gen_dataset_ops.make_iterator(
dataset._variant_tensor, self._iterator_resource, name=name)
def get_next(self, name=None):
"""Returns a nested structure of `tf.Tensor`s representing the next element.
In graph mode, you should typically call this method *once* and use its
result as the input to another computation. A typical loop will then call
`tf.Session.run` on the result of that computation. The loop will terminate
when the `Iterator.get_next()` operation raises
`tf.errors.OutOfRangeError`. The following skeleton shows how to use
this method when building a training loop:
```python
dataset = ... # A `tf.data.Dataset` object.
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
# Build a TensorFlow graph that does something with each element.
loss = model_function(next_element)
optimizer = ... # A `tf.compat.v1.train.Optimizer` object.
train_op = optimizer.minimize(loss)
with tf.compat.v1.Session() as sess:
try:
while True:
sess.run(train_op)
except tf.errors.OutOfRangeError:
pass
```
NOTE: It is legitimate to call `Iterator.get_next()` multiple times, e.g.
when you are distributing different elements to multiple devices in a single
step. However, a common pitfall arises when users call `Iterator.get_next()`
in each iteration of their training loop. `Iterator.get_next()` adds ops to
the graph, and executing each op allocates resources (including threads); as
a consequence, invoking it in every iteration of a training loop causes
slowdown and eventual resource exhaustion. To guard against this outcome, we
log a warning when the number of uses crosses a fixed threshold of
suspiciousness.
Args:
name: (Optional.) A name for the created operation.
Returns:
A nested structure of `tf.Tensor` objects.
"""
self._get_next_call_count += 1
if self._get_next_call_count > GET_NEXT_CALL_WARNING_THRESHOLD:
warnings.warn(GET_NEXT_CALL_WARNING_MESSAGE)
# TODO(b/169442955): Investigate the need for this colocation constraint.
with ops.colocate_with(self._iterator_resource):
# pylint: disable=protected-access
flat_ret = gen_dataset_ops.iterator_get_next(
self._iterator_resource,
output_types=self._flat_tensor_types,
output_shapes=self._flat_tensor_shapes,
name=name)
return structure.from_tensor_list(self._element_spec, flat_ret)
  def get_next_as_optional(self):
    """Returns a `tf.experimental.Optional` wrapping the next element, if any."""
# TODO(b/169442955): Investigate the need for this colocation constraint.
with ops.colocate_with(self._iterator_resource):
# pylint: disable=protected-access
return optional_ops._OptionalImpl(
gen_dataset_ops.iterator_get_next_as_optional(
self._iterator_resource,
output_types=structure.get_flat_tensor_types(self.element_spec),
output_shapes=structure.get_flat_tensor_shapes(
self.element_spec)), self.element_spec)
def string_handle(self, name=None):
"""Returns a string-valued `tf.Tensor` that represents this iterator.
Args:
name: (Optional.) A name for the created operation.
Returns:
A scalar `tf.Tensor` of type `tf.string`.
"""
if name is None:
return self._string_handle
else:
return gen_dataset_ops.iterator_to_string_handle(
self._iterator_resource, name=name)
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_classes(iterator)`.")
def output_classes(self):
"""Returns the class of each component of an element of this iterator.
The expected values are `tf.Tensor` and `tf.sparse.SparseTensor`.
Returns:
A nested structure of Python `type` objects corresponding to each
component of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_classes(), # pylint: disable=protected-access
self._element_spec)
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_shapes(iterator)`.")
def output_shapes(self):
"""Returns the shape of each component of an element of this iterator.
Returns:
A nested structure of `tf.TensorShape` objects corresponding to each
component of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_shapes(), # pylint: disable=protected-access
self._element_spec)
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_types(iterator)`.")
def output_types(self):
"""Returns the type of each component of an element of this iterator.
Returns:
A nested structure of `tf.DType` objects corresponding to each component
of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_types(), # pylint: disable=protected-access
self._element_spec)
@property
def element_spec(self):
return self._element_spec
def _gather_saveables_for_checkpoint(self):
def _saveable_factory(name):
return _IteratorSaveable(self._iterator_resource, name)
return {"ITERATOR": _saveable_factory}
_uid_counter = 0
_uid_lock = threading.Lock()
def _generate_shared_name(prefix):
with _uid_lock:
global _uid_counter
uid = _uid_counter
_uid_counter += 1
return "{}{}".format(prefix, uid)
class IteratorResourceDeleter(object):
"""An object which cleans up an iterator resource handle.
An alternative to defining a __del__ method on an object. Even if the parent
object is part of a reference cycle, the cycle will be collectable.
"""
__slots__ = ["_deleter", "_handle", "_eager_mode"]
def __init__(self, handle, deleter):
self._deleter = deleter
self._handle = handle
self._eager_mode = context.executing_eagerly()
def __del__(self):
# Make sure the resource is deleted in the same mode as it was created in.
if self._eager_mode:
with context.eager_mode():
gen_dataset_ops.delete_iterator(
handle=self._handle, deleter=self._deleter)
else:
with context.graph_mode():
gen_dataset_ops.delete_iterator(
handle=self._handle, deleter=self._deleter)
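# The class above illustrates a general pattern: move `__del__` onto a small
# helper that holds only the handles needed for cleanup, so the parent object
# stays collectable even when it participates in a reference cycle. A minimal,
# framework-free sketch of the same idea (illustrative only; these names are
# hypothetical and unused by this module):
class _ExampleDeleter(object):
  __slots__ = ["_handle"]
  def __init__(self, handle):
    self._handle = handle
  def __del__(self):
    # Stand-in for the real cleanup op (cf. `delete_iterator` above).
    print("released %r" % (self._handle,))
class _ExampleResource(object):
  def __init__(self):
    self._handle = object()  # stand-in for an acquired resource handle
    # The deleter, not `self`, owns the cleanup, so it fires even if `self`
    # is caught in a reference cycle.
    self._deleter = _ExampleDeleter(self._handle)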
@tf_export("data.Iterator", v1=[])
@six.add_metaclass(abc.ABCMeta)
class IteratorBase(collections_abc.Iterator, trackable.Trackable,
composite_tensor.CompositeTensor):
"""Represents an iterator of a `tf.data.Dataset`.
`tf.data.Iterator` is the primary mechanism for enumerating elements of a
`tf.data.Dataset`. It supports the Python Iterator protocol, which means
it can be iterated over using a for-loop:
>>> dataset = tf.data.Dataset.range(2)
>>> for element in dataset:
... print(element)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
or by fetching individual elements explicitly via `get_next()`:
>>> dataset = tf.data.Dataset.range(2)
>>> iterator = iter(dataset)
>>> print(iterator.get_next())
tf.Tensor(0, shape=(), dtype=int64)
>>> print(iterator.get_next())
tf.Tensor(1, shape=(), dtype=int64)
In addition, non-raising iteration is supported via `get_next_as_optional()`,
which returns the next element (if available) wrapped in a
`tf.experimental.Optional`.
>>> dataset = tf.data.Dataset.from_tensors(42)
>>> iterator = iter(dataset)
>>> optional = iterator.get_next_as_optional()
>>> print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
>>> optional = iterator.get_next_as_optional()
>>> print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
"""
@abc.abstractproperty
def element_spec(self):
"""The type specification of an element of this iterator.
>>> dataset = tf.data.Dataset.from_tensors(42)
>>> iterator = iter(dataset)
>>> iterator.element_spec
tf.TensorSpec(shape=(), dtype=tf.int32, name=None)
Returns:
A nested structure of `tf.TypeSpec` objects matching the structure of an
element of this iterator, specifying the type of individual components.
"""
raise NotImplementedError("Iterator.element_spec")
@abc.abstractmethod
def get_next(self):
"""Returns a nested structure of `tf.Tensor`s containing the next element.
>>> dataset = tf.data.Dataset.from_tensors(42)
>>> iterator = iter(dataset)
>>> print(iterator.get_next())
tf.Tensor(42, shape=(), dtype=int32)
Returns:
A nested structure of `tf.Tensor` objects.
Raises:
`tf.errors.OutOfRangeError`: If the end of the iterator has been reached.
"""
raise NotImplementedError("Iterator.get_next()")
@abc.abstractmethod
def get_next_as_optional(self):
"""Returns a `tf.experimental.Optional` which contains the next element.
If the iterator has reached the end of the sequence, the returned
`tf.experimental.Optional` will have no value.
>>> dataset = tf.data.Dataset.from_tensors(42)
>>> iterator = iter(dataset)
>>> optional = iterator.get_next_as_optional()
>>> print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
>>> print(optional.get_value())
tf.Tensor(42, shape=(), dtype=int32)
>>> optional = iterator.get_next_as_optional()
>>> print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
Returns:
A `tf.experimental.Optional` object representing the next element.
"""
raise NotImplementedError("Iterator.get_next_as_optional()")
class OwnedIterator(IteratorBase):
"""An iterator producing tf.Tensor objects from a tf.data.Dataset.
The iterator resource created through `OwnedIterator` is owned by the Python
  object and the lifetime of the underlying resource is tied to the lifetime
  of the `OwnedIterator` object. This makes `OwnedIterator` appropriate for use
in eager mode and inside of tf.functions.
"""
def __init__(self, dataset=None, components=None, element_spec=None):
"""Creates a new iterator from the given dataset.
    If `dataset` is not specified, the iterator will be created from the given
    tensor components and element structure. In particular, this alternative
    constructor is used when the iterator is reconstructed from its
    `CompositeTensor` representation.
Args:
dataset: A `tf.data.Dataset` object.
components: Tensor components to construct the iterator from.
element_spec: A nested structure of `TypeSpec` objects that
represents the type specification of elements of the iterator.
Raises:
      ValueError: If `dataset` is not provided and either `components` or
        `element_spec` is not provided, or if `dataset` is provided together
        with either `components` or `element_spec`.
"""
super(OwnedIterator, self).__init__()
error_message = ("Either `dataset` or both `components` and "
"`element_spec` need to be provided.")
if dataset is None:
if (components is None or element_spec is None):
raise ValueError(error_message)
# pylint: disable=protected-access
self._element_spec = element_spec
self._flat_output_types = structure.get_flat_tensor_types(
self._element_spec)
self._flat_output_shapes = structure.get_flat_tensor_shapes(
self._element_spec)
self._iterator_resource, self._deleter = components
else:
if (components is not None or element_spec is not None):
raise ValueError(error_message)
self._create_iterator(dataset)
def _create_iterator(self, dataset):
# pylint: disable=protected-access
dataset = dataset._apply_options()
    # Store a reference to the dataset to ensure that it stays alive while
    # this iterator is in use. For example, `tf.data.Dataset.from_generator`
    # registers a few py_funcs that are needed in `self._next_internal`. If
    # the dataset were deleted, this iterator would crash when
    # `self.__next__()` is called.
self._dataset = dataset
ds_variant = dataset._variant_tensor
self._element_spec = dataset.element_spec
self._flat_output_types = structure.get_flat_tensor_types(
self._element_spec)
self._flat_output_shapes = structure.get_flat_tensor_shapes(
self._element_spec)
with ops.colocate_with(ds_variant):
self._iterator_resource, self._deleter = (
gen_dataset_ops.anonymous_iterator_v2(
output_types=self._flat_output_types,
output_shapes=self._flat_output_shapes))
gen_dataset_ops.make_iterator(ds_variant, self._iterator_resource)
# Delete the resource when this object is deleted
self._resource_deleter = IteratorResourceDeleter(
handle=self._iterator_resource,
deleter=self._deleter)
def __iter__(self):
return self
def next(self): # For Python 2 compatibility
return self.__next__()
def _next_internal(self):
if not context.executing_eagerly():
# TODO(b/169442955): Investigate the need for this colocation constraint.
with ops.colocate_with(self._iterator_resource):
ret = gen_dataset_ops.iterator_get_next(
self._iterator_resource,
output_types=self._flat_output_types,
output_shapes=self._flat_output_shapes)
return structure.from_compatible_tensor_list(self._element_spec, ret)
# TODO(b/77291417): This runs in sync mode as iterators use an error status
# to communicate that there is no more data to iterate over.
with context.execution_mode(context.SYNC):
ret = gen_dataset_ops.iterator_get_next(
self._iterator_resource,
output_types=self._flat_output_types,
output_shapes=self._flat_output_shapes)
try:
      # Fast path for the case where `self._element_spec` is not a nested structure.
return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access
except AttributeError:
return structure.from_compatible_tensor_list(self._element_spec, ret)
@property
def _type_spec(self):
return IteratorSpec(self.element_spec)
def __next__(self):
try:
return self._next_internal()
except errors.OutOfRangeError:
raise StopIteration
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_classes(iterator)`.")
def output_classes(self):
"""Returns the class of each component of an element of this iterator.
The expected values are `tf.Tensor` and `tf.sparse.SparseTensor`.
Returns:
A nested structure of Python `type` objects corresponding to each
component of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_classes(), # pylint: disable=protected-access
self._element_spec)
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_shapes(iterator)`.")
def output_shapes(self):
"""Returns the shape of each component of an element of this iterator.
Returns:
A nested structure of `tf.TensorShape` objects corresponding to each
component of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_shapes(), # pylint: disable=protected-access
self._element_spec)
@property
@deprecation.deprecated(
None, "Use `tf.compat.v1.data.get_output_types(iterator)`.")
def output_types(self):
"""Returns the type of each component of an element of this iterator.
Returns:
A nested structure of `tf.DType` objects corresponding to each component
of an element of this dataset.
"""
return nest.map_structure(
lambda component_spec: component_spec._to_legacy_output_types(), # pylint: disable=protected-access
self._element_spec)
@property
def element_spec(self):
return self._element_spec
def get_next(self):
return self._next_internal()
def get_next_as_optional(self):
# TODO(b/169442955): Investigate the need for this colocation constraint.
with ops.colocate_with(self._iterator_resource):
# pylint: disable=protected-access
return optional_ops._OptionalImpl(
gen_dataset_ops.iterator_get_next_as_optional(
self._iterator_resource,
output_types=structure.get_flat_tensor_types(self.element_spec),
output_shapes=structure.get_flat_tensor_shapes(
self.element_spec)), self.element_spec)
def _gather_saveables_for_checkpoint(self):
def _saveable_factory(name):
"""Returns a SaveableObject for serialization/deserialization."""
policy = None
if self._dataset:
policy = self._dataset.options().experimental_external_state_policy
if policy:
return _IteratorSaveable(
self._iterator_resource,
name,
external_state_policy=policy)
else:
return _IteratorSaveable(self._iterator_resource, name)
return {"ITERATOR": _saveable_factory}
@tf_export("data.IteratorSpec", v1=[])
class IteratorSpec(type_spec.TypeSpec):
"""Type specification for `tf.data.Iterator`.
  For instance, `tf.data.IteratorSpec` can be used to define a `tf.function` that
takes `tf.data.Iterator` as an input argument:
>>> @tf.function(input_signature=[tf.data.IteratorSpec(
... tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])
... def square(iterator):
... x = iterator.get_next()
... return x * x
>>> dataset = tf.data.Dataset.from_tensors(5)
>>> iterator = iter(dataset)
>>> print(square(iterator))
tf.Tensor(25, shape=(), dtype=int32)
Attributes:
element_spec: A nested structure of `TypeSpec` objects that represents the
type specification of the iterator elements.
"""
__slots__ = ["_element_spec"]
def __init__(self, element_spec):
self._element_spec = element_spec
@property
def value_type(self):
return OwnedIterator
def _serialize(self):
return (self._element_spec,)
@property
def _component_specs(self):
return (
tensor_spec.TensorSpec([], dtypes.resource),
tensor_spec.TensorSpec([], dtypes.variant),
)
def _to_components(self, value):
return (value._iterator_resource, value._deleter) # pylint: disable=protected-access
def _from_components(self, components):
return OwnedIterator(
dataset=None,
components=components,
element_spec=self._element_spec)
@staticmethod
def from_value(value):
return IteratorSpec(value.element_spec) # pylint: disable=protected-access
# TODO(b/71645805): Expose trackable stateful objects from dataset.
class _IteratorSaveable(BaseSaverBuilder.SaveableObject):
"""SaveableObject for saving/restoring iterator state."""
def __init__(
self,
iterator_resource,
name,
external_state_policy=distribute_options.ExternalStatePolicy.FAIL):
serialized_iterator = gen_dataset_ops.serialize_iterator(
iterator_resource, external_state_policy=external_state_policy.value)
specs = [
BaseSaverBuilder.SaveSpec(
serialized_iterator,
"",
name + "_STATE",
device=iterator_resource.device)
]
super(_IteratorSaveable, self).__init__(iterator_resource, specs, name)
def restore(self, restored_tensors, restored_shapes):
with ops.colocate_with(self.op):
return gen_dataset_ops.deserialize_iterator(self.op, restored_tensors[0])
@deprecation.deprecated(
None, "Use `tf.data.Iterator.get_next_as_optional()` instead.")
@tf_export("data.experimental.get_next_as_optional")
def get_next_as_optional(iterator):
"""Returns a `tf.experimental.Optional` with the next element of the iterator.
If the iterator has reached the end of the sequence, the returned
`tf.experimental.Optional` will have no value.
Args:
iterator: A `tf.data.Iterator`.
Returns:
A `tf.experimental.Optional` object which either contains the next element
of the iterator (if it exists) or no value.
"""
return iterator.get_next_as_optional()
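# Hedged eager-mode sketch of the function above (assumes TF 2.x; the dataset
# is hypothetical; wrapped in a function so nothing runs at import time):
def _example_get_next_as_optional():
  import tensorflow as tf  # assumption: TF 2.x installed
  iterator = iter(tf.data.Dataset.range(2))
  optional = get_next_as_optional(iterator)
  while optional.has_value():
    print(optional.get_value().numpy())  # prints 0, then 1
    optional = get_next_as_optional(iterator)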
| apache-2.0 |
beni55/edx-platform | lms/djangoapps/verify_student/migrations/0004_auto__add_verificationcheckpoint__add_unique_verificationcheckpoint_co.py | 98 | 10801 | # -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'VerificationCheckpoint'
db.create_table('verify_student_verificationcheckpoint', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('course_id', self.gf('xmodule_django.models.CourseKeyField')(max_length=255, db_index=True)),
('checkpoint_name', self.gf('django.db.models.fields.CharField')(max_length=32)),
))
db.send_create_signal('verify_student', ['VerificationCheckpoint'])
# Adding M2M table for field photo_verification on 'VerificationCheckpoint'
m2m_table_name = db.shorten_name('verify_student_verificationcheckpoint_photo_verification')
db.create_table(m2m_table_name, (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('verificationcheckpoint', models.ForeignKey(orm['verify_student.verificationcheckpoint'], null=False)),
('softwaresecurephotoverification', models.ForeignKey(orm['verify_student.softwaresecurephotoverification'], null=False))
))
db.create_unique(m2m_table_name, ['verificationcheckpoint_id', 'softwaresecurephotoverification_id'])
# Adding unique constraint on 'VerificationCheckpoint', fields ['course_id', 'checkpoint_name']
db.create_unique('verify_student_verificationcheckpoint', ['course_id', 'checkpoint_name'])
# Adding model 'VerificationStatus'
db.create_table('verify_student_verificationstatus', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('checkpoint', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['verify_student.VerificationCheckpoint'])),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
('status', self.gf('django.db.models.fields.CharField')(max_length=32, db_index=True)),
('timestamp', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('response', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
('error', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
))
db.send_create_signal('verify_student', ['VerificationStatus'])
def backwards(self, orm):
# Removing unique constraint on 'VerificationCheckpoint', fields ['course_id', 'checkpoint_name']
db.delete_unique('verify_student_verificationcheckpoint', ['course_id', 'checkpoint_name'])
# Deleting model 'VerificationCheckpoint'
db.delete_table('verify_student_verificationcheckpoint')
# Removing M2M table for field photo_verification on 'VerificationCheckpoint'
db.delete_table(db.shorten_name('verify_student_verificationcheckpoint_photo_verification'))
# Deleting model 'VerificationStatus'
db.delete_table('verify_student_verificationstatus')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'reverification.midcoursereverificationwindow': {
'Meta': {'object_name': 'MidcourseReverificationWindow'},
'course_id': ('xmodule_django.models.CourseKeyField', [], {'max_length': '255', 'db_index': 'True'}),
'end_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'start_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True', 'blank': 'True'})
},
'verify_student.softwaresecurephotoverification': {
'Meta': {'ordering': "['-created_at']", 'object_name': 'SoftwareSecurePhotoVerification'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'db_index': 'True', 'blank': 'True'}),
'display': ('django.db.models.fields.BooleanField', [], {'default': 'True', 'db_index': 'True'}),
'error_code': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'error_msg': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'face_image_url': ('django.db.models.fields.URLField', [], {'max_length': '255', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'photo_id_image_url': ('django.db.models.fields.URLField', [], {'max_length': '255', 'blank': 'True'}),
'photo_id_key': ('django.db.models.fields.TextField', [], {'max_length': '1024'}),
'receipt_id': ('django.db.models.fields.CharField', [], {'default': "'4f091843-1377-4d3b-af5d-3a4ae3d17943'", 'max_length': '255', 'db_index': 'True'}),
'reviewing_service': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'reviewing_user': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'photo_verifications_reviewed'", 'null': 'True', 'to': "orm['auth.User']"}),
'status': ('model_utils.fields.StatusField', [], {'default': "'created'", 'max_length': '100', u'no_check_for_status': 'True'}),
'status_changed': ('model_utils.fields.MonitorField', [], {'default': 'datetime.datetime.now', u'monitor': "u'status'"}),
'submitted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'db_index': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'db_index': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'window': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['reverification.MidcourseReverificationWindow']", 'null': 'True'})
},
'verify_student.verificationcheckpoint': {
'Meta': {'unique_together': "(('course_id', 'checkpoint_name'),)", 'object_name': 'VerificationCheckpoint'},
'checkpoint_name': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'course_id': ('xmodule_django.models.CourseKeyField', [], {'max_length': '255', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'photo_verification': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['verify_student.SoftwareSecurePhotoVerification']", 'symmetrical': 'False'})
},
'verify_student.verificationstatus': {
'Meta': {'object_name': 'VerificationStatus'},
'checkpoint': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['verify_student.VerificationCheckpoint']"}),
'error': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'response': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
}
}
complete_apps = ['verify_student']
| agpl-3.0 |
saurabhbajaj207/CarpeDiem | venv/Lib/site-packages/Crypto/SelfTest/Hash/test_MD4.py | 116 | 2368 | # -*- coding: utf-8 -*-
#
# SelfTest/Hash/MD4.py: Self-test for the MD4 hash function
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================
"""Self-test suite for Crypto.Hash.MD4"""
__revision__ = "$Id$"
from Crypto.Util.py3compat import *
# This is a list of (expected_result, input[, description]) tuples.
test_data = [
# Test vectors from RFC 1320
('31d6cfe0d16ae931b73c59d7e0c089c0', '', "'' (empty string)"),
('bde52cb31de33e46245e05fbdbd6fb24', 'a'),
('a448017aaf21d8525fc10ae87aa6729d', 'abc'),
('d9130a8164549fe818874806e1c7014b', 'message digest'),
('d79e1c308aa5bbcdeea8ed63df412da9', 'abcdefghijklmnopqrstuvwxyz',
'a-z'),
('043f8582f241db351ce627e153e7f0e4',
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
'A-Z, a-z, 0-9'),
('e33b4ddc9c38f2199c3e7b164fcc0536',
'1234567890123456789012345678901234567890123456'
+ '7890123456789012345678901234567890',
"'1234567890' * 8"),
]
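# A minimal sketch of checking one of the RFC 1320 vectors above by hand
# (assumes PyCrypto is importable; `b()` comes from the py3compat import at
# the top of this file). Wrapped in a function so nothing runs at import time.
def _example_check_vector():
    from Crypto.Hash import MD4
    assert MD4.new(b('abc')).hexdigest() == 'a448017aaf21d8525fc10ae87aa6729d'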
def get_tests(config={}):
from Crypto.Hash import MD4
from common import make_hash_tests
return make_hash_tests(MD4, "MD4", test_data,
digest_size=16,
oid="\x06\x08\x2a\x86\x48\x86\xf7\x0d\x02\x04")
if __name__ == '__main__':
import unittest
suite = lambda: unittest.TestSuite(get_tests())
unittest.main(defaultTest='suite')
# vim:set ts=4 sw=4 sts=4 expandtab:
| mit |
HewlettPackard/python-hpOneView | tests/unit/resources/servers/test_id_pools.py | 2 | 4253 | # -*- coding: utf-8 -*-
###
# (C) Copyright (2012-2017) Hewlett Packard Enterprise Development LP
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the 'Software'), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###
import mock
import unittest
from hpOneView.connection import connection
from hpOneView.resources.resource import ResourceClient
from hpOneView.resources.servers.id_pools import IdPools
class TestIdPools(unittest.TestCase):
resource_info = {'type': 'Range',
'name': 'No name'}
example_uri = "/rest/id-pools/ipv4"
def setUp(self):
self.host = '127.0.0.1'
self.connection = connection(self.host)
self.client = IdPools(self.connection)
@mock.patch.object(ResourceClient, 'get')
def test_get_called_once_by_id(self, mock_get):
id_pools_range_id = "f0a0a113-ec97-41b4-83ce-d7c92b900e7c"
self.client.get(id_pools_range_id)
mock_get.assert_called_once_with(id_pools_range_id)
@mock.patch.object(ResourceClient, 'get')
def test_get_called_once_by_uri(self, mock_get):
self.client.get(self.example_uri)
mock_get.assert_called_once_with(self.example_uri)
@mock.patch.object(ResourceClient, 'get')
def test_generate_called_once(self, mock_get):
self.client.generate(self.example_uri)
mock_get.assert_called_once_with(self.example_uri + '/generate')
@mock.patch.object(ResourceClient, 'get')
def test_validate_id_pool_called_once(self, mock_get):
self.client.validate_id_pool(self.example_uri, ['VCGYOAA023',
'VCGYOAA024'])
mock_get.assert_called_once_with(self.example_uri + "/validate?idList=VCGYOAA023&idList=VCGYOAA024")
@mock.patch.object(ResourceClient, 'update')
def test_validate_called_once(self, update):
self.client.validate(self.resource_info.copy(), self.example_uri)
update.assert_called_once_with(self.resource_info.copy(), self.example_uri + "/validate", timeout=-1)
@mock.patch.object(ResourceClient, 'update')
def test_enable_called_once(self, update):
self.client.enable(self.resource_info.copy(), self.example_uri)
update.assert_called_once_with(self.resource_info.copy(), self.example_uri, timeout=-1)
@mock.patch.object(ResourceClient, 'get')
def test_get_check_range_availability_called_once_with_defaults(self, mock_get):
self.client.get_check_range_availability(self.example_uri, ['VCGYOAA023',
'VCGYOAA024'])
mock_get.assert_called_once_with(
self.example_uri + "/checkrangeavailability?idList=VCGYOAA023&idList=VCGYOAA024")
@mock.patch.object(ResourceClient, 'update')
def test_allocate_called_once(self, mock_update):
self.client.allocate(self.resource_info.copy(), self.example_uri)
mock_update.assert_called_once_with(self.resource_info.copy(), self.example_uri + "/allocator", timeout=-1)
@mock.patch.object(ResourceClient, 'update')
def test_collect_called_once(self, update):
self.client.collect(self.resource_info.copy(), self.example_uri)
update.assert_called_once_with(self.resource_info.copy(), self.example_uri + "/collector", timeout=-1)
| mit |
mancoast/CPythonPyc_test | cpython/279_test_zipimport_support.py | 90 | 10850 | # This test module covers support in various parts of the standard library
# for working with modules located inside zipfiles
# The tests are centralised in this fashion to make it easy to drop them
# if a platform doesn't support zipimport
import test.test_support
import os
import os.path
import sys
import textwrap
import zipfile
import zipimport
import doctest
import inspect
import linecache
import pdb
import warnings
from test.script_helper import (spawn_python, kill_python, run_python,
temp_dir, make_script, make_zip_script)
verbose = test.test_support.verbose
# Library modules covered by this test set
# pdb (Issue 4201)
# inspect (Issue 4223)
# doctest (Issue 4197)
# Other test modules with zipimport related tests
# test_zipimport (of course!)
# test_cmd_line_script (covers the zipimport support in runpy)
# Retrieve some helpers from other test cases
from test import (test_doctest, sample_doctest, sample_doctest_no_doctests,
sample_doctest_no_docstrings)
from test.test_importhooks import ImportHooksBaseTestCase
def _run_object_doctest(obj, module):
# Direct doctest output (normally just errors) to real stdout; doctest
# output shouldn't be compared by regrtest.
save_stdout = sys.stdout
sys.stdout = test.test_support.get_original_stdout()
try:
finder = doctest.DocTestFinder(verbose=verbose, recurse=False)
runner = doctest.DocTestRunner(verbose=verbose)
# Use the object's fully qualified name if it has one
# Otherwise, use the module's name
try:
name = "%s.%s" % (obj.__module__, obj.__name__)
except AttributeError:
name = module.__name__
for example in finder.find(obj, name, module):
runner.run(example)
f, t = runner.failures, runner.tries
if f:
raise test.test_support.TestFailed("%d of %d doctests failed" % (f, t))
finally:
sys.stdout = save_stdout
if verbose:
print 'doctest (%s) ... %d tests with zero failures' % (module.__name__, t)
return f, t
class ZipSupportTests(ImportHooksBaseTestCase):
# We use the ImportHooksBaseTestCase to restore
# the state of the import related information
# in the sys module after each test
# We also clear the linecache and zipimport cache
# just to avoid any bogus errors due to name reuse in the tests
def setUp(self):
linecache.clearcache()
zipimport._zip_directory_cache.clear()
ImportHooksBaseTestCase.setUp(self)
def test_inspect_getsource_issue4223(self):
test_src = "def foo(): pass\n"
with temp_dir() as d:
init_name = make_script(d, '__init__', test_src)
name_in_zip = os.path.join('zip_pkg',
os.path.basename(init_name))
zip_name, run_name = make_zip_script(d, 'test_zip',
init_name, name_in_zip)
os.remove(init_name)
sys.path.insert(0, zip_name)
import zip_pkg
self.assertEqual(inspect.getsource(zip_pkg.foo), test_src)
def test_doctest_issue4197(self):
# To avoid having to keep two copies of the doctest module's
# unit tests in sync, this test works by taking the source of
# test_doctest itself, rewriting it a bit to cope with a new
# location, and then throwing it in a zip file to make sure
# everything still works correctly
test_src = inspect.getsource(test_doctest)
test_src = test_src.replace(
"from test import test_doctest",
"import test_zipped_doctest as test_doctest")
test_src = test_src.replace("test.test_doctest",
"test_zipped_doctest")
test_src = test_src.replace("test.sample_doctest",
"sample_zipped_doctest")
# The sample doctest files rewritten to include in the zipped version.
sample_sources = {}
for mod in [sample_doctest, sample_doctest_no_doctests,
sample_doctest_no_docstrings]:
src = inspect.getsource(mod)
src = src.replace("test.test_doctest", "test_zipped_doctest")
# Rewrite the module name so that, for example,
# "test.sample_doctest" becomes "sample_zipped_doctest".
mod_name = mod.__name__.split(".")[-1]
mod_name = mod_name.replace("sample_", "sample_zipped_")
sample_sources[mod_name] = src
with temp_dir() as d:
script_name = make_script(d, 'test_zipped_doctest',
test_src)
zip_name, run_name = make_zip_script(d, 'test_zip',
script_name)
z = zipfile.ZipFile(zip_name, 'a')
for mod_name, src in sample_sources.items():
z.writestr(mod_name + ".py", src)
z.close()
if verbose:
zip_file = zipfile.ZipFile(zip_name, 'r')
print 'Contents of %r:' % zip_name
zip_file.printdir()
zip_file.close()
os.remove(script_name)
sys.path.insert(0, zip_name)
import test_zipped_doctest
# Some of the doc tests depend on the colocated text files
# which aren't available to the zipped version (the doctest
# module currently requires real filenames for non-embedded
# tests). So we're forced to be selective about which tests
# to run.
# doctest could really use some APIs which take a text
# string or a file object instead of a filename...
known_good_tests = [
test_zipped_doctest.SampleClass,
test_zipped_doctest.SampleClass.NestedClass,
test_zipped_doctest.SampleClass.NestedClass.__init__,
test_zipped_doctest.SampleClass.__init__,
test_zipped_doctest.SampleClass.a_classmethod,
test_zipped_doctest.SampleClass.a_property,
test_zipped_doctest.SampleClass.a_staticmethod,
test_zipped_doctest.SampleClass.double,
test_zipped_doctest.SampleClass.get,
test_zipped_doctest.SampleNewStyleClass,
test_zipped_doctest.SampleNewStyleClass.__init__,
test_zipped_doctest.SampleNewStyleClass.double,
test_zipped_doctest.SampleNewStyleClass.get,
test_zipped_doctest.old_test1,
test_zipped_doctest.old_test2,
test_zipped_doctest.old_test3,
test_zipped_doctest.old_test4,
test_zipped_doctest.sample_func,
test_zipped_doctest.test_DocTest,
test_zipped_doctest.test_DocTestParser,
test_zipped_doctest.test_DocTestRunner.basics,
test_zipped_doctest.test_DocTestRunner.exceptions,
test_zipped_doctest.test_DocTestRunner.option_directives,
test_zipped_doctest.test_DocTestRunner.optionflags,
test_zipped_doctest.test_DocTestRunner.verbose_flag,
test_zipped_doctest.test_Example,
test_zipped_doctest.test_debug,
test_zipped_doctest.test_pdb_set_trace,
test_zipped_doctest.test_pdb_set_trace_nested,
test_zipped_doctest.test_testsource,
test_zipped_doctest.test_trailing_space_in_test,
test_zipped_doctest.test_DocTestSuite,
test_zipped_doctest.test_DocTestFinder,
]
# These remaining tests are the ones which need access
# to the data files, so we don't run them
fail_due_to_missing_data_files = [
test_zipped_doctest.test_DocFileSuite,
test_zipped_doctest.test_testfile,
test_zipped_doctest.test_unittest_reportflags,
]
# Needed for test_DocTestParser and test_debug
deprecations = []
if __debug__:
# Ignore all warnings about the use of class Tester in this module.
deprecations.append(("class Tester is deprecated", DeprecationWarning))
if sys.py3kwarning:
deprecations += [
("backquote not supported", SyntaxWarning),
("execfile.. not supported", DeprecationWarning)]
with test.test_support.check_warnings(*deprecations):
for obj in known_good_tests:
_run_object_doctest(obj, test_zipped_doctest)
def test_doctest_main_issue4197(self):
test_src = textwrap.dedent("""\
class Test:
">>> 'line 2'"
pass
import doctest
doctest.testmod()
""")
pattern = 'File "%s", line 2, in %s'
with temp_dir() as d:
script_name = make_script(d, 'script', test_src)
exit_code, data = run_python(script_name)
expected = pattern % (script_name, "__main__.Test")
if verbose:
print "Expected line", expected
print "Got stdout:"
print data
self.assertIn(expected, data)
zip_name, run_name = make_zip_script(d, "test_zip",
script_name, '__main__.py')
exit_code, data = run_python(zip_name)
expected = pattern % (run_name, "__main__.Test")
if verbose:
print "Expected line", expected
print "Got stdout:"
print data
self.assertIn(expected, data)
def test_pdb_issue4201(self):
test_src = textwrap.dedent("""\
def f():
pass
import pdb
pdb.runcall(f)
""")
with temp_dir() as d:
script_name = make_script(d, 'script', test_src)
p = spawn_python(script_name)
p.stdin.write('l\n')
data = kill_python(p)
self.assertIn(script_name, data)
zip_name, run_name = make_zip_script(d, "test_zip",
script_name, '__main__.py')
p = spawn_python(zip_name)
p.stdin.write('l\n')
data = kill_python(p)
self.assertIn(run_name, data)
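# Hedged sketch of the script_helper trio used throughout the tests above;
# the script body is hypothetical. Wrapped in a function so nothing runs at
# import time.
def _example_zip_script_helpers():
    with temp_dir() as d:
        script_name = make_script(d, 'demo', "print 'hello from a zip'\n")
        zip_name, run_name = make_zip_script(d, 'demo_zip', script_name,
                                             '__main__.py')
        exit_code, data = run_python(zip_name)  # executes the zipped __main__
        assert 'hello from a zip' in data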
def test_main():
test.test_support.run_unittest(ZipSupportTests)
test.test_support.reap_children()
if __name__ == '__main__':
test_main()
| gpl-3.0 |
erkrishna9/odoo | addons/l10n_si/__openerp__.py | 430 | 1826 | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright: (C) 2012 - Mentis d.o.o., Dravograd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
"name" : "Slovenian - Accounting",
"version" : "1.2",
"author" : "Mentis d.o.o.",
"website" : "http://www.mentis.si",
"category" : "Localization/Account Charts",
"description" : " ",
"depends" : ["account", "base_iban", "base_vat", "account_chart", "account_cancel"],
"description" : "Kontni načrt za gospodarske družbe",
"data" : [
"data/account.account.type.csv",
"data/account.account.template.csv",
"data/account.tax.code.template.csv",
"data/account.chart.template.csv",
"data/account.tax.template.csv",
"data/account.fiscal.position.template.csv",
"data/account.fiscal.position.account.template.csv",
"data/account.fiscal.position.tax.template.csv",
"l10n_si_wizard.xml"
],
'auto_install': False,
"installable": True,
}
| agpl-3.0 |
samnazarko/osmc | package/mediacenter-addon-osmc/src/script.module.osmcsetting.updates/resources/lib/simple_scheduler.py | 6 | 5802 |
import datetime
import random
class SimpleScheduler(object):
def __init__(self, setting_dict, right_now=None):
self.frequency = setting_dict.get('check_freq', 1) # how often the action occurs (Never, Daily, Weekly, Monthly)
self.specific_time = setting_dict.get('check_time', 0) # whether the action should take place at a specific or random time (boolean)
self.day = setting_dict.get('check_weekday', 0) # the weekday when the action should occur, Monday=0, Sunday=6
self.daynum = setting_dict.get('check_day', 1) # the days from month end that the action should occur [-16, 16]
self.hour = setting_dict.get('check_hour', 3) # the hour the action should occur (integer)
self.minute = setting_dict.get('check_minute', 0) # the minute the action should occur (integer)
self.trigger_time = datetime.datetime.now().replace(year=2224) # the time of the next update check
self.leeway = 15 # the number of minutes past the action time that the action can still take place
if right_now is None:
right_now = datetime.datetime.now()
self.set_trigger(right_now)
def set_trigger(self, right_now):
# use right_nows year and month
if self.frequency == 1:
			# the initial trigger time will be this day, and the user's specified hour and minute (using defaults if not provided)
# if the user wants a specific time, then use that, otherwise use a random time
self.trigger_time = self.set_trigger_time(right_now)
elif self.frequency == 2:
			# the initial trigger time will be this year and month, but the day is the one the user has chosen, as well as the user's
			# specified hour and minute (using defaults if not provided)
right_now_weekday = right_now.weekday()
delta_days = self.day - right_now_weekday
# mon = 0, tue = 1, wed = 2, thu = 3, fri = 4, sat = 5, sun = 6
# if the user wants a specific time, then use that, otherwise use a random time
self.trigger_time = self.set_trigger_time( right_now + datetime.timedelta(days=delta_days) )
elif self.frequency == 3:
			# the initial trigger time will be this year and month, but the day number is the one the user has chosen, as well as the user's
			# specified hour and minute (using defaults if not provided)
# End of this current month plus or minus the number of days the user has chosen in settings
month = max([1, (right_now.month + 1) % 13])
year = right_now.year if month != 1 else right_now.year + 1
trigger_day = right_now.replace(year=year, month=month, day = 1) + datetime.timedelta(days=self.daynum-1)
# today, with day replaced by 1 and adding 1 to the month plus the number of days the user has chosen
# minus an extra day to rebase it to month-end
# if the user wants a specific time, then use that, otherwise use a random time
self.trigger_time = self.set_trigger_time(trigger_day)
# if the trigger time is before the current time, then step it to the next period
while self.trigger_time < right_now:
self.step_trigger()
def set_trigger_time(self, trigger_time):
''' Applies either the users desired time, or a random one, to the trigger '''
if self.specific_time:
new_trigger = trigger_time.replace(hour=self.hour, minute=self.minute)
else:
new_trigger = trigger_time.replace(hour=random.randint(0,23), minute=random.randint(0,59))
return new_trigger
def step_trigger(self):
''' Progress the trigger time from its current position to its next position '''
if self.frequency == 1:
			# jump one day ahead from the current trigger date
self.trigger_time = self.trigger_time + datetime.timedelta(days=1)
elif self.frequency == 2:
			# jump 7 days ahead from the current trigger date
self.trigger_time = self.trigger_time + datetime.timedelta(days=7)
elif self.frequency == 3:
if self.daynum > 0:
# if the daynum is 1 to 16 then just add one month to the existing month and set the day to be the users chosen date
month = (self.trigger_time.month % 12) + 1
year = self.trigger_time.year + 1 if month == 1 else self.trigger_time.year
self.trigger_time = self.trigger_time.replace(year=year, month = month, day = self.daynum)
else:
# if the daynum is negative, that is, the user wants the update to run a certain number of days BEFORE month-end,
# then jump to the first day of the month two months ahead of the current one, and then move back one day to get
				# next month's month-end date, then subtract the number of days the user has chosen
month = (((self.trigger_time.month % 12) + 1) % 12) + 1
if month < 3:
year = self.trigger_time.year + 1
else:
year = self.trigger_time.year
self.trigger_time = self.trigger_time.replace(year=year, month = month, day = 1) + datetime.timedelta(days=self.daynum-1)
def check_trigger(self):
right_now = datetime.datetime.now()
# check if the current time is between the trigger time and the trigger time plus leeway
if self.trigger_time < right_now < self.trigger_time + datetime.timedelta(minutes=self.leeway):
# time is currently after the trigger time, but within the leeway
self.step_trigger()
return True
else:
return False
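def _example_month_end_daynum():
	# Illustrative sketch of the daynum < 0 arithmetic in step_trigger above:
	# daynum = -2 means "two days before month end". Starting from a trigger
	# in January 2016, the next trigger lands on 27 February (February 2016
	# has 29 days, and 29 - 2 = 27). The dates here are hypothetical.
	trigger = datetime.datetime(2016, 1, 27)
	next_trigger = trigger.replace(month=3, day=1) + datetime.timedelta(days=-2 - 1)
	print next_trigger  # 2016-02-27 00:00:00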
def test(settings=None):
if settings is not None:
x = settings
else:
x = {'check_freq':3, 'check_time':0, 'check_weekday':1, 'check_day': 5, 'check_hour':22, 'check_minute':00}
right_now = datetime.datetime.now()
for z in range(3700):
right_now += datetime.timedelta(days=1)
print z
print right_now
s = SimpleScheduler(x, right_now)
print '%s\n' % s.trigger_time
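def _example_service_poll():
	# Hedged sketch of how a long-running service might use the scheduler;
	# the settings keys mirror those read in SimpleScheduler.__init__ and the
	# values here are hypothetical.
	settings = {'check_freq': 1, 'check_time': 1, 'check_hour': 3, 'check_minute': 0}
	scheduler = SimpleScheduler(settings)
	# Poll periodically (e.g. once a minute); check_trigger() returns True at
	# most once per period, within `leeway` minutes of the trigger time.
	if scheduler.check_trigger():
		print 'trigger fired; next trigger at %s' % scheduler.trigger_time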
| gpl-2.0 |
GoogleChrome/big-rig | app/src/handlers/help.py | 2 | 2102 | #!/usr/bin/env python
#
# Copyright 2015 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
import webapp2
import jinja2
import codecs
import re
from google.appengine.ext import vendor
vendor.add('thirdparty')
import markdown
from jinja2 import Environment, meta
from bigrig.usermanager import UserManager
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(
os.path.join(
os.path.dirname(__file__), '..'
)
),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
class HelpHandler(webapp2.RequestHandler):
def get (self, url):
if UserManager.get_current_user() == None:
self.redirect('/user-not-found')
return
if (url == '' or os.path.isdir('help/%s' % url)):
url += 'index.md'
template = JINJA_ENVIRONMENT.get_template('templates/help/help.html')
help_file_path = 'help/%s' % re.sub('html$', 'md', url)
if not os.path.exists(help_file_path):
print help_file_path
return
with codecs.open(help_file_path, 'r', 'utf-8') as md_file:
help_file = md_file.read()
self.response.write(template.render({
'sign_out_url': UserManager.get_signout_url(),
'gravatar_url': UserManager.get_gravatar_url(),
'user_email': UserManager.get_email(),
'user_is_admin': UserManager.is_admin(),
'sections': [{
"name": "Help"
}],
'help': markdown.markdown(help_file, extensions=['markdown.extensions.fenced_code'])
}))
app = webapp2.WSGIApplication([
('/help/(.*)', HelpHandler)
], debug=True)
| apache-2.0 |
mancoast/CPythonPyc_test | cpython/276_test_getopt.py | 133 | 6973 | # test_getopt.py
# David Goodger <dgoodger@bigfoot.com> 2000-08-19
from test.test_support import verbose, run_doctest, run_unittest, EnvironmentVarGuard
import unittest
import getopt
sentinel = object()
class GetoptTests(unittest.TestCase):
def setUp(self):
self.env = EnvironmentVarGuard()
if "POSIXLY_CORRECT" in self.env:
del self.env["POSIXLY_CORRECT"]
def tearDown(self):
self.env.__exit__()
del self.env
def assertError(self, *args, **kwargs):
self.assertRaises(getopt.GetoptError, *args, **kwargs)
def test_short_has_arg(self):
self.assertTrue(getopt.short_has_arg('a', 'a:'))
self.assertFalse(getopt.short_has_arg('a', 'a'))
self.assertError(getopt.short_has_arg, 'a', 'b')
def test_long_has_args(self):
has_arg, option = getopt.long_has_args('abc', ['abc='])
self.assertTrue(has_arg)
self.assertEqual(option, 'abc')
has_arg, option = getopt.long_has_args('abc', ['abc'])
self.assertFalse(has_arg)
self.assertEqual(option, 'abc')
has_arg, option = getopt.long_has_args('abc', ['abcd'])
self.assertFalse(has_arg)
self.assertEqual(option, 'abcd')
self.assertError(getopt.long_has_args, 'abc', ['def'])
self.assertError(getopt.long_has_args, 'abc', [])
self.assertError(getopt.long_has_args, 'abc', ['abcd','abcde'])
def test_do_shorts(self):
opts, args = getopt.do_shorts([], 'a', 'a', [])
self.assertEqual(opts, [('-a', '')])
self.assertEqual(args, [])
opts, args = getopt.do_shorts([], 'a1', 'a:', [])
self.assertEqual(opts, [('-a', '1')])
self.assertEqual(args, [])
#opts, args = getopt.do_shorts([], 'a=1', 'a:', [])
#self.assertEqual(opts, [('-a', '1')])
#self.assertEqual(args, [])
opts, args = getopt.do_shorts([], 'a', 'a:', ['1'])
self.assertEqual(opts, [('-a', '1')])
self.assertEqual(args, [])
opts, args = getopt.do_shorts([], 'a', 'a:', ['1', '2'])
self.assertEqual(opts, [('-a', '1')])
self.assertEqual(args, ['2'])
self.assertError(getopt.do_shorts, [], 'a1', 'a', [])
self.assertError(getopt.do_shorts, [], 'a', 'a:', [])
def test_do_longs(self):
opts, args = getopt.do_longs([], 'abc', ['abc'], [])
self.assertEqual(opts, [('--abc', '')])
self.assertEqual(args, [])
opts, args = getopt.do_longs([], 'abc=1', ['abc='], [])
self.assertEqual(opts, [('--abc', '1')])
self.assertEqual(args, [])
opts, args = getopt.do_longs([], 'abc=1', ['abcd='], [])
self.assertEqual(opts, [('--abcd', '1')])
self.assertEqual(args, [])
opts, args = getopt.do_longs([], 'abc', ['ab', 'abc', 'abcd'], [])
self.assertEqual(opts, [('--abc', '')])
self.assertEqual(args, [])
# Much like the preceding, except with a non-alpha character ("-") in
# option name that precedes "="; failed in
# http://python.org/sf/126863
opts, args = getopt.do_longs([], 'foo=42', ['foo-bar', 'foo=',], [])
self.assertEqual(opts, [('--foo', '42')])
self.assertEqual(args, [])
self.assertError(getopt.do_longs, [], 'abc=1', ['abc'], [])
self.assertError(getopt.do_longs, [], 'abc', ['abc='], [])
def test_getopt(self):
# note: the empty string between '-a' and '--beta' is significant:
# it simulates an empty string option argument ('-a ""') on the
# command line.
cmdline = ['-a', '1', '-b', '--alpha=2', '--beta', '-a', '3', '-a',
'', '--beta', 'arg1', 'arg2']
opts, args = getopt.getopt(cmdline, 'a:b', ['alpha=', 'beta'])
self.assertEqual(opts, [('-a', '1'), ('-b', ''),
('--alpha', '2'), ('--beta', ''),
('-a', '3'), ('-a', ''), ('--beta', '')])
# Note ambiguity of ('-b', '') and ('-a', '') above. This must be
# accounted for in the code that calls getopt().
self.assertEqual(args, ['arg1', 'arg2'])
self.assertError(getopt.getopt, cmdline, 'a:b', ['alpha', 'beta'])
def test_gnu_getopt(self):
# Test handling of GNU style scanning mode.
cmdline = ['-a', 'arg1', '-b', '1', '--alpha', '--beta=2']
# GNU style
opts, args = getopt.gnu_getopt(cmdline, 'ab:', ['alpha', 'beta='])
self.assertEqual(args, ['arg1'])
self.assertEqual(opts, [('-a', ''), ('-b', '1'),
('--alpha', ''), ('--beta', '2')])
# recognize "-" as an argument
opts, args = getopt.gnu_getopt(['-a', '-', '-b', '-'], 'ab:', [])
self.assertEqual(args, ['-'])
self.assertEqual(opts, [('-a', ''), ('-b', '-')])
# Posix style via +
opts, args = getopt.gnu_getopt(cmdline, '+ab:', ['alpha', 'beta='])
self.assertEqual(opts, [('-a', '')])
self.assertEqual(args, ['arg1', '-b', '1', '--alpha', '--beta=2'])
# Posix style via POSIXLY_CORRECT
self.env["POSIXLY_CORRECT"] = "1"
opts, args = getopt.gnu_getopt(cmdline, 'ab:', ['alpha', 'beta='])
self.assertEqual(opts, [('-a', '')])
self.assertEqual(args, ['arg1', '-b', '1', '--alpha', '--beta=2'])
def test_libref_examples(self):
s = """
Examples from the Library Reference: Doc/lib/libgetopt.tex
An example using only Unix style options:
>>> import getopt
>>> args = '-a -b -cfoo -d bar a1 a2'.split()
>>> args
['-a', '-b', '-cfoo', '-d', 'bar', 'a1', 'a2']
>>> optlist, args = getopt.getopt(args, 'abc:d:')
>>> optlist
[('-a', ''), ('-b', ''), ('-c', 'foo'), ('-d', 'bar')]
>>> args
['a1', 'a2']
Using long option names is equally easy:
>>> s = '--condition=foo --testing --output-file abc.def -x a1 a2'
>>> args = s.split()
>>> args
['--condition=foo', '--testing', '--output-file', 'abc.def', '-x', 'a1', 'a2']
>>> optlist, args = getopt.getopt(args, 'x', [
... 'condition=', 'output-file=', 'testing'])
>>> optlist
[('--condition', 'foo'), ('--testing', ''), ('--output-file', 'abc.def'), ('-x', '')]
>>> args
['a1', 'a2']
"""
import types
m = types.ModuleType("libreftest", s)
run_doctest(m, verbose)
def test_issue4629(self):
longopts, shortopts = getopt.getopt(['--help='], '', ['help='])
self.assertEqual(longopts, [('--help', '')])
longopts, shortopts = getopt.getopt(['--help=x'], '', ['help='])
self.assertEqual(longopts, [('--help', 'x')])
self.assertRaises(getopt.GetoptError, getopt.getopt, ['--help='], '', ['help'])
def test_main():
run_unittest(GetoptTests)
if __name__ == "__main__":
test_main()
| gpl-3.0 |
tejastank/tryton_module_product | setup.py | 1 | 2857 | #!/usr/bin/env python
#This file is part of Tryton. The COPYRIGHT file at the top level of
#this repository contains the full copyright notices and license terms.
from setuptools import setup
import re
import os
import ConfigParser
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
config = ConfigParser.ConfigParser()
config.readfp(open('tryton.cfg'))
info = dict(config.items('tryton'))
for key in ('depends', 'extras_depend', 'xml'):
if key in info:
info[key] = info[key].strip().splitlines()
major_version, minor_version, _ = info.get('version', '0.0.1').split('.', 2)
major_version = int(major_version)
minor_version = int(minor_version)
requires = []
for dep in info.get('depends', []):
if not re.match(r'(ir|res|webdav)(\W|$)', dep):
requires.append('trytond_%s >= %s.%s, < %s.%s' %
(dep, major_version, minor_version, major_version,
minor_version + 1))
requires.append('trytond >= %s.%s, < %s.%s' %
(major_version, minor_version, major_version, minor_version + 1))
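# For example (illustrative values, not taken from a real tryton.cfg): with
# info['version'] == '2.8.0' and info['depends'] == ['company'], the loop
# above produces:
#   requires == ['trytond_company >= 2.8, < 2.9', 'trytond >= 2.8, < 2.9']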
setup(name='trytond_product',
version=info.get('version', '0.0.1'),
description='Tryton module with products',
long_description=read('README'),
author='Tryton',
url='http://www.tryton.org/',
download_url="http://downloads.tryton.org/" + \
info.get('version', '0.0.1').rsplit('.', 1)[0] + '/',
package_dir={'trytond.modules.product': '.'},
packages=[
'trytond.modules.product',
'trytond.modules.product.tests',
],
package_data={
'trytond.modules.product': info.get('xml', []) \
+ ['tryton.cfg', 'locale/*.po', 'icons/*.svg'],
},
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Plugins',
'Framework :: Tryton',
'Intended Audience :: Developers',
'Intended Audience :: Financial and Insurance Industry',
'Intended Audience :: Legal Industry',
'Intended Audience :: Manufacturing',
'License :: OSI Approved :: GNU General Public License (GPL)',
'Natural Language :: Bulgarian',
'Natural Language :: Czech',
'Natural Language :: Dutch',
'Natural Language :: English',
'Natural Language :: French',
'Natural Language :: German',
'Natural Language :: Russian',
'Natural Language :: Spanish',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Topic :: Office/Business',
],
license='GPL-3',
install_requires=requires,
zip_safe=False,
entry_points="""
[trytond.modules]
product = trytond.modules.product
""",
test_suite='tests',
test_loader='trytond.test_loader:Loader',
)
| gpl-3.0 |
bbc/kamaelia | Code/Python/Kamaelia/Kamaelia/Util/PureTransformer.py | 9 | 2737 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2010 British Broadcasting Corporation and Kamaelia Contributors(1)
#
# (1) Kamaelia Contributors are listed in the AUTHORS file and at
# http://www.kamaelia.org/AUTHORS - please extend this file,
# not this notice.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -------------------------------------------------------------------------
"""\
==========================
Pure Transformer component
==========================
This component applies a function specified at its creation to messages
received (a filter). If the function returns None, no message is sent,
otherwise the result of the function is sent to "outbox".
Example Usage
-------------
To read in lines of text, convert to upper case and then write to the console::
Pipeline(
ConsoleReader(),
PureTransformer(lambda x : x.upper()),
ConsoleEchoer()
).run()
"""
from Axon.Component import component
from Axon.Ipc import producerFinished,shutdownMicroprocess,shutdown
class PureTransformer(component):
def __init__(self, function=None):
super(PureTransformer, self).__init__()
if function:
self.processMessage = function
def processMessage(self, msg):
pass
def main(self):
while 1:
yield 1
while self.dataReady("inbox"):
returnval = self.processMessage(self.recv("inbox"))
if returnval != None:
self.send(returnval, "outbox")
while self.dataReady("control"):
msg = self.recv("control")
if isinstance(msg, producerFinished) or isinstance(msg, shutdown):
self.send(producerFinished(self), "signal")
return
self.pause()
__kamaelia_components__ = ( PureTransformer, )
if __name__ == "__main__":
from Kamaelia.Chassis.Pipeline import pipeline
from Kamaelia.Util.Console import ConsoleReader, ConsoleEchoer
# Example - prepend "foo" and append "bar" to lines entered.
pipeline(
ConsoleReader(eol=""),
PureTransformer(lambda x : "foo" + x + "bar!\n"),
ConsoleEchoer()
).run()
| apache-2.0 |
mkheirkhah/ns-3.23 | bindings/python/topsort.py | 218 | 13288 | # topsort - dependency (topological) sorting and cycle finding functions
# Copyright (C) 2007 RADLogic
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation;
# version 2.1 of the License.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# See http://www.fsf.org/licensing/licenses/lgpl.txt for full license text.
"""Provide toplogical sorting (i.e. dependency sorting) functions.
The topsort function is based on code posted on Usenet by Tim Peters.
Modifications:
- added doctests
- changed some bits to use current Python idioms
(listcomp instead of filter, +=/-=, inherit from Exception)
- added a topsort_levels version that puts items in each dependency level
into a sub-list
- added find_cycles to aid in cycle debugging
Run this module directly to run the doctests (unittests).
Make sure they all pass before checking in any modifications.
Requires Python >= 2.2
(For Python 2.2 also requires separate sets.py module)
This requires the rad_util.py module.
"""
# Provide support for Python 2.2
from __future__ import generators
__version__ = '$Revision: 0.9 $'
__date__ = '$Date: 2007/03/27 04:15:26 $'
__credits__ = '''Tim Peters -- original topsort code
Tim Wegener -- doctesting, updating to current idioms, topsort_levels,
find_cycles
'''
# Make Python 2.3 sets look like Python 2.4 sets.
try:
set
except NameError:
from sets import Set as set
from rad_util import is_rotated
class CycleError(Exception):
"""Cycle Error"""
pass
def topsort(pairlist):
"""Topologically sort a list of (parent, child) pairs.
Return a list of the elements in dependency order (parent to child order).
>>> print topsort( [(1,2), (3,4), (5,6), (1,3), (1,5), (1,6), (2,5)] )
[1, 2, 3, 5, 4, 6]
>>> print topsort( [(1,2), (1,3), (2,4), (3,4), (5,6), (4,5)] )
[1, 2, 3, 4, 5, 6]
>>> print topsort( [(1,2), (2,3), (3,2)] )
Traceback (most recent call last):
CycleError: ([1], {2: 1, 3: 1}, {2: [3], 3: [2]})
"""
num_parents = {} # element -> # of predecessors
children = {} # element -> list of successors
for parent, child in pairlist:
# Make sure every element is a key in num_parents.
if not num_parents.has_key( parent ):
num_parents[parent] = 0
if not num_parents.has_key( child ):
num_parents[child] = 0
# Since child has a parent, increment child's num_parents count.
num_parents[child] += 1
# ... and parent gains a child.
children.setdefault(parent, []).append(child)
# Suck up everything without a parent.
answer = [x for x in num_parents.keys() if num_parents[x] == 0]
# For everything in answer, knock down the parent count on its children.
# Note that answer grows *in* the loop.
for parent in answer:
del num_parents[parent]
if children.has_key( parent ):
for child in children[parent]:
num_parents[child] -= 1
if num_parents[child] == 0:
answer.append( child )
# Following "del" isn't needed; just makes
# CycleError details easier to grasp.
del children[parent]
if num_parents:
# Everything in num_parents has at least one child ->
# there's a cycle.
raise CycleError(answer, num_parents, children)
return answer
def topsort_levels(pairlist):
"""Topologically sort a list of (parent, child) pairs into depth levels.
This returns a generator.
Turn this into an iterator using the iter built-in function.
(if you iterate over the iterator, each element gets generated when
it is asked for, rather than generating the whole list up-front.)
Each generated element is a list of items at that dependency level.
>>> dependency_pairs = [(1,2), (3,4), (5,6), (1,3), (1,5), (1,6), (2,5)]
>>> for level in iter(topsort_levels( dependency_pairs )):
... print level
[1]
[2, 3]
[4, 5]
[6]
>>> dependency_pairs = [(1,2), (1,3), (2,4), (3,4), (5,6), (4,5)]
>>> for level in iter(topsort_levels( dependency_pairs )):
... print level
[1]
[2, 3]
[4]
[5]
[6]
>>> dependency_pairs = [(1,2), (2,3), (3,4), (4, 3)]
>>> try:
... for level in iter(topsort_levels( dependency_pairs )):
... print level
... except CycleError, exc:
... print 'CycleError:', exc
[1]
[2]
CycleError: ({3: 1, 4: 1}, {3: [4], 4: [3]})
The cycle error should look like:
CycleError: ({3: 1, 4: 1}, {3: [4], 4: [3]})
# todo: Make the doctest more robust (i.e. handle arbitrary dict order).
"""
num_parents = {} # element -> # of predecessors
children = {} # element -> list of successors
for parent, child in pairlist:
# Make sure every element is a key in num_parents.
if not num_parents.has_key( parent ):
num_parents[parent] = 0
if not num_parents.has_key( child ):
num_parents[child] = 0
# Since child has a parent, increment child's num_parents count.
num_parents[child] += 1
# ... and parent gains a child.
children.setdefault(parent, []).append(child)
return topsort_levels_core(num_parents, children)
def topsort_levels_core(num_parents, children):
"""Topologically sort a bunch of interdependent items based on dependency.
This returns a generator.
Turn this into an iterator using the iter built-in function.
(if you iterate over the iterator, each element gets generated when
it is asked for, rather than generating the whole list up-front.)
Each generated element is a list of items at that dependency level.
>>> list(topsort_levels_core(
... {1: 0, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2},
... {1: [2, 3, 5, 6], 2: [5], 3: [4], 4: [], 5: [6]}))
[[1], [2, 3], [4, 5], [6]]
>>> list(topsort_levels_core(
... {1: 0, 2: 2, 3: 1},
... {1: [2], 2: [3], 3: [2]}))
Traceback (most recent call last):
CycleError: ({2: 1, 3: 1}, {2: [3], 3: [2]})
This function has a more complicated interface than topsort_levels,
but is useful if the data is easier to generate in this form.
Arguments:
num_parents -- key: item, value: number of parents (predecessors)
children -- key: item, value: list of children (successors)
"""
while 1:
# Suck up everything without a predecessor.
level_parents = [x for x in num_parents.keys() if num_parents[x] == 0]
if not level_parents:
break
# Offer the next generated item,
# which is a list of the items at this dependency level.
yield level_parents
# For everything item in this level,
# decrement the parent count,
# since we have accounted for its parent.
for level_parent in level_parents:
del num_parents[level_parent]
if children.has_key(level_parent):
for level_parent_child in children[level_parent]:
num_parents[level_parent_child] -= 1
del children[level_parent]
if num_parents:
# Everything in num_parents has at least one child ->
# there's a cycle.
raise CycleError(num_parents, children)
else:
# This is the end of the generator.
raise StopIteration
def find_cycles(parent_children):
"""Yield cycles. Each result is a list of items comprising a cycle.
Use a 'stack' based approach to find all the cycles.
This is a generator, so yields each cycle as it finds it.
It is implicit that the last item in each cycle list is a parent of the
first item (thereby forming a cycle).
Arguments:
parent_children -- parent -> collection of children
Simplest cycle:
>>> cycles = list(find_cycles({'A': ['B'], 'B': ['A']}))
>>> len(cycles)
1
>>> cycle = cycles[0]
>>> cycle.sort()
>>> print cycle
['A', 'B']
Simplest cycle with extra baggage at the start and the end:
>>> cycles = list(find_cycles(parent_children={'A': ['B'],
... 'B': ['C'],
... 'C': ['B', 'D'],
... 'D': [],
... }))
>>> len(cycles)
1
>>> cycle = cycles[0]
>>> cycle.sort()
>>> print cycle
['B', 'C']
Double cycle:
>>> cycles = list(find_cycles(parent_children={'A': ['B'],
... 'B': ['C1', 'C2'],
... 'C1': ['D1'],
... 'D1': ['E1'],
... 'E1': ['D1'],
... 'C2': ['D2'],
... 'D2': ['E2'],
... 'E2': ['D2'],
... }))
>>> len(cycles)
2
>>> for cycle in cycles:
... cycle.sort()
>>> cycles.sort()
>>> cycle1 = cycles[0]
>>> cycle1.sort()
>>> print cycle1
['D1', 'E1']
>>> cycle2 = cycles[1]
>>> cycle2.sort()
>>> print cycle2
['D2', 'E2']
Simple cycle with children not specified for one item:
# todo: Should this barf instead?
>>> cycles = list(find_cycles(parent_children={'A': ['B'],
... 'B': ['A'],
... 'C': ['D']}))
>>> len(cycles)
1
>>> cycle = cycles[0]
>>> cycle.sort()
>>> print cycle
['A', 'B']
Diamond cycle
>>> cycles = list(find_cycles(parent_children={'A': ['B1', 'B2'],
... 'B1': ['C'],
... 'B2': ['C'],
... 'C': ['A', 'B1']}))
>>> len(cycles)
3
>>> sorted_cycles = []
>>> for cycle in cycles:
... cycle = list(cycle)
... cycle.sort()
... sorted_cycles.append(cycle)
>>> sorted_cycles.sort()
>>> for cycle in sorted_cycles:
... print cycle
['A', 'B1', 'C']
['A', 'B2', 'C']
['B1', 'C']
Hairy case (order can matter if something is wrong):
(Note order of B and C in the list.)
>>> cycles = list(find_cycles(parent_children={
... 'TD': ['DD'],
... 'TC': ['DC'],
... 'DC': ['DQ'],
... 'C': ['DQ'],
... 'DQ': ['IA', 'TO'],
... 'IA': ['A'],
... 'A': ['B', 'C'],
... }))
>>> len(cycles)
1
>>> cycle = cycles[0]
>>> cycle.sort()
>>> print cycle
['A', 'C', 'DQ', 'IA']
"""
cycles = []
visited_nodes = set()
for parent in parent_children:
if parent in visited_nodes:
# This node is part of a path that has already been traversed.
continue
paths = [[parent]]
while paths:
path = paths.pop()
parent = path[-1]
try:
children = parent_children[parent]
except KeyError:
continue
for child in children:
# Keeping a set of the path nodes, for O(1) lookups at the
# expense of more memory and complexity, actually makes speed
# worse. (Due to construction of sets.)
# This is O(N).
if child in path:
# This is a cycle.
cycle = path[path.index(child):]
# Check that this is not a dup cycle.
is_dup = False
for other_cycle in cycles:
if is_rotated(other_cycle, cycle):
is_dup = True
break
if not is_dup:
cycles.append(cycle)
yield cycle
else:
# Push this new path onto the 'stack'.
# This is probably the most expensive part of the algorithm
# (a list copy).
paths.append(path + [child])
# Mark the node as visited.
visited_nodes.add(child)
if __name__ == '__main__':
# Run the doctest tests.
import sys
import doctest
doctest.testmod(sys.modules['__main__'])
| gpl-2.0 |
stevenewey/django | tests/template_tests/filter_tests/test_dictsort.py | 342 | 1477 | from django.template.defaultfilters import dictsort
from django.test import SimpleTestCase
class FunctionTests(SimpleTestCase):
def test_sort(self):
sorted_dicts = dictsort(
[{'age': 23, 'name': 'Barbara-Ann'},
{'age': 63, 'name': 'Ra Ra Rasputin'},
{'name': 'Jonny B Goode', 'age': 18}],
'age',
)
self.assertEqual(
[sorted(dict.items()) for dict in sorted_dicts],
[[('age', 18), ('name', 'Jonny B Goode')],
[('age', 23), ('name', 'Barbara-Ann')],
[('age', 63), ('name', 'Ra Ra Rasputin')]],
)
def test_dictsort_complex_sorting_key(self):
"""
Since dictsort uses template.Variable under the hood, it can sort
on keys like 'foo.bar'.
"""
data = [
{'foo': {'bar': 1, 'baz': 'c'}},
{'foo': {'bar': 2, 'baz': 'b'}},
{'foo': {'bar': 3, 'baz': 'a'}},
]
sorted_data = dictsort(data, 'foo.baz')
self.assertEqual([d['foo']['bar'] for d in sorted_data], [3, 2, 1])
def test_invalid_values(self):
"""
If dictsort is passed something other than a list of dictionaries,
fail silently.
"""
self.assertEqual(dictsort([1, 2, 3], 'age'), '')
self.assertEqual(dictsort('Hello!', 'age'), '')
self.assertEqual(dictsort({'a': 1}, 'age'), '')
self.assertEqual(dictsort(1, 'age'), '')
| bsd-3-clause |
the100rabh/Barcamp-Bangalore-Android-App | gcm_flask/tests/tests.py | 20 | 2577 | #!/usr/bin/env python
# encoding: utf-8
"""
tests.py
TODO: These tests need to be updated to support the Python 2.7 runtime
"""
import os
import unittest
from google.appengine.ext import testbed
from application import app
class DemoTestCase(unittest.TestCase):
def setUp(self):
# Flask apps testing. See: http://flask.pocoo.org/docs/testing/
app.config['TESTING'] = True
app.config['CSRF_ENABLED'] = False
self.app = app.test_client()
# Setups app engine test bed. See: http://code.google.com/appengine/docs/python/tools/localunittesting.html#Introducing_the_Python_Testing_Utilities
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_datastore_v3_stub()
self.testbed.init_user_stub()
self.testbed.init_memcache_stub()
def tearDown(self):
self.testbed.deactivate()
def setCurrentUser(self, email, user_id, is_admin=False):
os.environ['USER_EMAIL'] = email or ''
os.environ['USER_ID'] = user_id or ''
os.environ['USER_IS_ADMIN'] = '1' if is_admin else '0'
def test_home_redirects(self):
rv = self.app.get('/')
assert rv.status == '302 FOUND'
def test_says_hello(self):
rv = self.app.get('/hello/world')
assert 'Hello world' in rv.data
def test_displays_no_data(self):
rv = self.app.get('/examples')
assert 'No examples yet' in rv.data
def test_inserts_data(self):
self.setCurrentUser(u'john@example.com', u'123')
rv = self.app.post('/example/new', data=dict(
example_name='An example',
example_description='Description of an example'
), follow_redirects=True)
assert 'Example successfully saved' in rv.data
rv = self.app.get('/examples')
assert 'No examples yet' not in rv.data
assert 'An example' in rv.data
def test_admin_login(self):
#Anonymous
rv = self.app.get('/admin_only')
assert rv.status == '302 FOUND'
#Normal user
self.setCurrentUser(u'john@example.com', u'123')
rv = self.app.get('/admin_only')
assert rv.status == '302 FOUND'
#Admin
self.setCurrentUser(u'john@example.com', u'123', True)
rv = self.app.get('/admin_only')
assert rv.status == '200 OK'
def test_404(self):
rv = self.app.get('/missing')
assert rv.status == '404 NOT FOUND'
assert '<h1>Not found</h1>' in rv.data
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
jibeiz/PokemonGo-Map | pogom/pgoapi/exceptions.py | 54 | 1299 | """
pgoapi - Pokemon Go API
Copyright (c) 2016 tjado <https://github.com/tejado>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
Author: tjado <https://github.com/tejado>
"""
class AuthException(Exception):
pass
class NotLoggedInException(Exception):
pass
class ServerBusyOrOfflineException(Exception):
pass | mit |
valtandor/easybuild-framework | easybuild/toolchains/mpi/psmpi.py | 4 | 1702 | # #
# Copyright 2012-2015 Ghent University
#
# This file is part of EasyBuild,
# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
# with support of Ghent University (http://ugent.be/hpc),
# the Flemish Supercomputer Centre (VSC) (https://vscentrum.be/nl/en),
# the Hercules foundation (http://www.herculesstichting.be/in_English)
# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
#
# http://github.com/hpcugent/easybuild
#
# EasyBuild is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation v2.
#
# EasyBuild is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
# #
"""
Support for Parastation MPI as toolchain MPI library.
@author: Kenneth Hoste (Ghent University)
"""
from easybuild.toolchains.mpi.mpich import Mpich
class Psmpi(Mpich):
"""Parastation MPI class"""
MPI_MODULE_NAME = ['psmpi']
def _set_mpi_compiler_variables(self):
"""Set the MPICH_{CC, CXX, F77, F90, FC} variables."""
# hardwire MPI wrapper commands (otherwise Mpich parent class sets them based on MPICH version)
self.MPI_COMPILER_MPIF77 = 'mpif77'
self.MPI_COMPILER_MPIF90 = 'mpif90'
self.MPI_COMPILER_MPIFC = 'mpif90'
super(Psmpi, self)._set_mpi_compiler_variables()
| gpl-2.0 |
tkerola/chainer | examples/text_classification/run_text_classifier.py | 3 | 3937 | #!/usr/bin/env python
import argparse
import json
import sys
import chainer
import numpy
import nets
import nlp_utils
def setup_model(device, model_setup):
sys.stderr.write(json.dumps(args.__dict__, indent=2) + '\n')
setup = json.load(open(model_setup))
sys.stderr.write(json.dumps(setup, indent=2) + '\n')
vocab = json.load(open(setup['vocab_path']))
n_class = setup['n_class']
# Setup a model
if setup['model'] == 'rnn':
Encoder = nets.RNNEncoder
elif setup['model'] == 'cnn':
Encoder = nets.CNNEncoder
elif setup['model'] == 'bow':
Encoder = nets.BOWMLPEncoder
encoder = Encoder(n_layers=setup['layer'], n_vocab=len(vocab),
n_units=setup['unit'], dropout=setup['dropout'])
model = nets.TextClassifier(encoder, n_class)
chainer.serializers.load_npz(setup['model_path'], model)
model.to_device(device) # Copy the model to the device
return model, vocab, setup
def run_online(device):
# predict labels online
for l in sys.stdin:
l = l.strip()
if not l:
print('# blank line')
continue
text = nlp_utils.normalize_text(l)
words = nlp_utils.split_text(text, char_based=setup['char_based'])
xs = nlp_utils.transform_to_array([words], vocab, with_label=False)
xs = nlp_utils.convert_seq(xs, device=device, with_label=False)
with chainer.using_config('train', False), chainer.no_backprop_mode():
prob = model.predict(xs, softmax=True)[0]
answer = int(model.xp.argmax(prob))
score = float(prob[answer])
print('{}\t{:.4f}\t{}'.format(answer, score, ' '.join(words)))
def run_batch(device, batchsize=64):
# predict labels by batch
def predict_batch(words_batch):
xs = nlp_utils.transform_to_array(words_batch, vocab, with_label=False)
xs = nlp_utils.convert_seq(xs, device=device, with_label=False)
with chainer.using_config('train', False), chainer.no_backprop_mode():
probs = model.predict(xs, softmax=True)
answers = model.xp.argmax(probs, axis=1)
scores = probs[model.xp.arange(answers.size), answers].tolist()
for words, answer, score in zip(words_batch, answers, scores):
print('{}\t{:.4f}\t{}'.format(answer, score, ' '.join(words)))
batch = []
for l in sys.stdin:
l = l.strip()
if not l:
if batch:
predict_batch(batch)
batch = []
print('# blank line')
continue
text = nlp_utils.normalize_text(l)
words = nlp_utils.split_text(text, char_based=setup['char_based'])
batch.append(words)
if len(batch) >= batchsize:
predict_batch(batch)
batch = []
if batch:
predict_batch(batch)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Chainer example: Text Classification')
parser.add_argument('--device', '-d', type=str, default='-1',
help='Device specifier. Either ChainerX device '
'specifier or an integer. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--model-setup', required=True,
help='Model setup dictionary.')
group = parser.add_argument_group('deprecated arguments')
group.add_argument('--gpu', '-g', dest='device',
type=int, nargs='?', const=0,
help='GPU ID (negative value indicates CPU)')
args = parser.parse_args()
device = chainer.get_device(args.device)
device.use()
model, vocab, setup = setup_model(device, args.model_setup)
if device.xp is numpy:
run_online(device)
else:
run_batch(device)
| mit |
alex/kombu | kombu/utils/functional.py | 30 | 2069 | from __future__ import absolute_import
import sys
from collections import Iterable, Mapping
from kombu.five import string_t
__all__ = ['lazy', 'maybe_evaluate', 'is_list', 'maybe_list']
class lazy(object):
"""Holds lazy evaluation.
Evaluated when called or if the :meth:`evaluate` method is called.
The function is re-evaluated on every call.
Overloaded operations that will evaluate the promise:
:meth:`__str__`, :meth:`__repr__`, :meth:`__cmp__`.
"""
def __init__(self, fun, *args, **kwargs):
self._fun = fun
self._args = args
self._kwargs = kwargs
def __call__(self):
return self.evaluate()
def evaluate(self):
return self._fun(*self._args, **self._kwargs)
def __str__(self):
return str(self())
def __repr__(self):
return repr(self())
def __eq__(self, rhs):
return self() == rhs
def __ne__(self, rhs):
return self() != rhs
def __deepcopy__(self, memo):
memo[id(self)] = self
return self
def __reduce__(self):
return (self.__class__, (self._fun, ), {'_args': self._args,
'_kwargs': self._kwargs})
if sys.version_info[0] < 3:
def __cmp__(self, rhs):
if isinstance(rhs, self.__class__):
return -cmp(rhs, self())
return cmp(self(), rhs)
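# Example (hedged, illustrative; ``maybe_evaluate`` is defined just below):
#
#     >>> p = lazy(lambda x: x * 2, 4)
#     >>> p()                    # evaluates the wrapped call
#     8
#     >>> maybe_evaluate(p)      # lazy instances are evaluated
#     8
#     >>> maybe_evaluate(10)     # plain values pass through unchanged
#     10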
def maybe_evaluate(value):
"""Evaluates if the value is a :class:`lazy` instance."""
if isinstance(value, lazy):
return value.evaluate()
return value
def is_list(l, scalars=(Mapping, string_t), iters=(Iterable, )):
"""Return true if the object is iterable (but not
if the object is a mapping or string)."""
return isinstance(l, iters) and not isinstance(l, scalars or ())
def maybe_list(l, scalars=(Mapping, string_t)):
"""Return list of one element if ``l`` is a scalar."""
return l if l is None or is_list(l, scalars) else [l]
# Compat names (before kombu 3.0)
promise = lazy
maybe_promise = maybe_evaluate
| bsd-3-clause |
zhumengyuan/kallithea | kallithea/tests/functional/test_my_account.py | 1 | 10877 | # -*- coding: utf-8 -*-
from kallithea.model.db import User, UserFollowing, Repository, UserApiKeys
from kallithea.tests import *
from kallithea.tests.fixture import Fixture
from kallithea.lib import helpers as h
from kallithea.model.user import UserModel
from kallithea.model.meta import Session
fixture = Fixture()
class TestMyAccountController(TestController):
test_user_1 = 'testme'
@classmethod
def teardown_class(cls):
if User.get_by_username(cls.test_user_1):
UserModel().delete(cls.test_user_1)
Session().commit()
def test_my_account(self):
self.log_user()
response = self.app.get(url('my_account'))
response.mustcontain('value="test_admin')
def test_my_account_my_repos(self):
self.log_user()
response = self.app.get(url('my_account_repos'))
cnt = Repository.query().filter(Repository.user ==
User.get_by_username(TEST_USER_ADMIN_LOGIN)).count()
response.mustcontain('"totalRecords": %s' % cnt)
def test_my_account_my_watched(self):
self.log_user()
response = self.app.get(url('my_account_watched'))
cnt = UserFollowing.query().filter(UserFollowing.user ==
User.get_by_username(TEST_USER_ADMIN_LOGIN)).count()
response.mustcontain('"totalRecords": %s' % cnt)
def test_my_account_my_emails(self):
self.log_user()
response = self.app.get(url('my_account_emails'))
response.mustcontain('No additional emails specified')
def test_my_account_my_emails_add_existing_email(self):
self.log_user()
response = self.app.get(url('my_account_emails'))
response.mustcontain('No additional emails specified')
response = self.app.post(url('my_account_emails'),
{'new_email': TEST_USER_REGULAR_EMAIL, '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'This e-mail address is already taken')
def test_my_account_my_emails_add_missing_email_in_form(self):
self.log_user()
response = self.app.get(url('my_account_emails'))
response.mustcontain('No additional emails specified')
response = self.app.post(url('my_account_emails'),)
self.checkSessionFlash(response, 'Please enter an email address')
def test_my_account_my_emails_add_remove(self):
self.log_user()
response = self.app.get(url('my_account_emails'))
response.mustcontain('No additional emails specified')
response = self.app.post(url('my_account_emails'),
{'new_email': 'foo@barz.com', '_authentication_token': self.authentication_token()})
response = self.app.get(url('my_account_emails'))
from kallithea.model.db import UserEmailMap
email_id = UserEmailMap.query()\
.filter(UserEmailMap.user == User.get_by_username(TEST_USER_ADMIN_LOGIN))\
.filter(UserEmailMap.email == 'foo@barz.com').one().email_id
response.mustcontain('foo@barz.com')
response.mustcontain('<input id="del_email_id" name="del_email_id" type="hidden" value="%s" />' % email_id)
response = self.app.post(url('my_account_emails'),
{'del_email_id': email_id, '_method': 'delete', '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'Removed email from user')
response = self.app.get(url('my_account_emails'))
response.mustcontain('No additional emails specified')
@parameterized.expand(
[('firstname', {'firstname': 'new_username'}),
('lastname', {'lastname': 'new_username'}),
('admin', {'admin': True}),
('admin', {'admin': False}),
('extern_type', {'extern_type': 'ldap'}),
('extern_type', {'extern_type': None}),
#('extern_name', {'extern_name': 'test'}),
#('extern_name', {'extern_name': None}),
('active', {'active': False}),
('active', {'active': True}),
('email', {'email': 'some@email.com'}),
# ('new_password', {'new_password': 'foobar123',
# 'password_confirmation': 'foobar123'})
])
def test_my_account_update(self, name, attrs):
usr = fixture.create_user(self.test_user_1, password='qweqwe',
email='testme@example.com',
extern_type='internal',
extern_name=self.test_user_1,
skip_if_exists=True)
params = usr.get_api_data(True) # current user data
user_id = usr.user_id
self.log_user(username=self.test_user_1, password='qweqwe')
params.update({'password_confirmation': ''})
params.update({'new_password': ''})
params.update({'extern_type': 'internal'})
params.update({'extern_name': self.test_user_1})
params.update({'_authentication_token': self.authentication_token()})
params.update(attrs)
response = self.app.post(url('my_account'), params)
self.checkSessionFlash(response,
'Your account was updated successfully')
updated_user = User.get_by_username(self.test_user_1)
updated_params = updated_user.get_api_data(True)
updated_params.update({'password_confirmation': ''})
updated_params.update({'new_password': ''})
params['last_login'] = updated_params['last_login']
if name == 'email':
params['emails'] = [attrs['email']]
if name == 'extern_type':
# cannot update this via the form; the expected value is the original one
params['extern_type'] = "internal"
if name == 'extern_name':
# cannot update this via the form; the expected value is the original one
params['extern_name'] = str(user_id)
if name == 'active':
# my account cannot deactivate the account
params['active'] = True
if name == 'admin':
# my account cannot make you an admin!
params['admin'] = False
params.pop('_authentication_token')
self.assertEqual(params, updated_params)
def test_my_account_update_err_email_exists(self):
self.log_user()
new_email = 'test_regular@mail.com' # already existing email
response = self.app.post(url('my_account'),
params=dict(
username='test_admin',
new_password='test12',
password_confirmation='test122',
firstname='NewName',
lastname='NewLastname',
email=new_email,
_authentication_token=self.authentication_token())
)
response.mustcontain('This e-mail address is already taken')
def test_my_account_update_err(self):
self.log_user('test_regular2', 'test12')
new_email = 'newmail.pl'
response = self.app.post(url('my_account'),
params=dict(
username='test_admin',
new_password='test12',
password_confirmation='test122',
firstname='NewName',
lastname='NewLastname',
email=new_email,
_authentication_token=self.authentication_token()))
response.mustcontain('An email address must contain a single @')
from kallithea.model import validators
msg = validators.ValidUsername(edit=False, old_data={})\
._messages['username_exists']
msg = h.html_escape(msg % {'username': 'test_admin'})
response.mustcontain(u"%s" % msg)
def test_my_account_api_keys(self):
usr = self.log_user('test_regular2', 'test12')
user = User.get(usr['user_id'])
response = self.app.get(url('my_account_api_keys'))
response.mustcontain(user.api_key)
response.mustcontain('expires: never')
@parameterized.expand([
('forever', -1),
('5mins', 60*5),
('30days', 60*60*24*30),
])
def test_my_account_add_api_keys(self, desc, lifetime):
usr = self.log_user('test_regular2', 'test12')
user = User.get(usr['user_id'])
response = self.app.post(url('my_account_api_keys'),
{'description': desc, 'lifetime': lifetime, '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'Api key successfully created')
try:
response = response.follow()
user = User.get(usr['user_id'])
for api_key in user.api_keys:
response.mustcontain(api_key)
finally:
for api_key in UserApiKeys.query().all():
Session().delete(api_key)
Session().commit()
def test_my_account_remove_api_key(self):
usr = self.log_user('test_regular2', 'test12')
user = User.get(usr['user_id'])
response = self.app.post(url('my_account_api_keys'),
{'description': 'desc', 'lifetime': -1, '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'Api key successfully created')
response = response.follow()
#now delete our key
keys = UserApiKeys.query().all()
self.assertEqual(1, len(keys))
response = self.app.post(url('my_account_api_keys'),
{'_method': 'delete', 'del_api_key': keys[0].api_key, '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'Api key successfully deleted')
keys = UserApiKeys.query().all()
self.assertEqual(0, len(keys))
def test_my_account_reset_main_api_key(self):
usr = self.log_user('test_regular2', 'test12')
user = User.get(usr['user_id'])
api_key = user.api_key
response = self.app.get(url('my_account_api_keys'))
response.mustcontain(api_key)
response.mustcontain('expires: never')
response = self.app.post(url('my_account_api_keys'),
{'_method': 'delete', 'del_api_key_builtin': api_key, '_authentication_token': self.authentication_token()})
self.checkSessionFlash(response, 'Api key successfully reset')
response = response.follow()
response.mustcontain(no=[api_key])
| gpl-3.0 |
jirikuncar/invenio | invenio/testsuite/test_ext_template.py | 9 | 4215 | # -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2013, 2014 CERN.
#
# Invenio is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Unit test for the template extensions.
"""
from flask import url_for
from invenio.ext.template import render_template_to_string
from invenio.testsuite import make_test_suite, run_test_suite, InvenioTestCase
from invenio.testsuite import unittest
class TemplateTest(InvenioTestCase):
"""
jinja2utils TestSuite.
"""
def tplEqualToString(self, tpl, text, **ctx):
self.assertEqual(
render_template_to_string(tpl, _from_string=True, **ctx),
text)
def test_wrap_equal_to_prefix_and_suffix(self):
wrap_tpl = '{{ test_variable|wrap(prefix="***", suffix="###") }}'
pxsx_tpl = '{{ test_variable|prefix("***")|suffix("###") }}'
# None is printed as empty string
self.tplEqualToString(wrap_tpl, '', test_variable=None)
self.tplEqualToString(pxsx_tpl, '', test_variable=None)
# Nothing is appended to empty string
self.tplEqualToString(wrap_tpl, '', test_variable='')
self.tplEqualToString(pxsx_tpl, '', test_variable='')
# x|prefix|suffix is equal to x|wrap
self.tplEqualToString(wrap_tpl, '***test###', test_variable='test')
self.tplEqualToString(pxsx_tpl, '***test###', test_variable='test')
class TemplateLoaderCase(InvenioTestCase):
@property
def config(self):
cfg = super(TemplateLoaderCase, self).config
cfg['PACKAGES'] = [
'invenio.testsuite.test_apps.first',
'invenio.modules.*',
'invenio.testsuite.test_apps.last',
]
return cfg
def test_first_blueprint(self):
response = self.client.get('/')
self.assertEqual(response.data.strip(), 'First')
self.assertNotEqual(response.data.strip(), 'Last')
class TemplateArgsTest(InvenioTestCase):
"""Test ``template_args`` decorator."""
@classmethod
def setup_app(cls, app):
"""Custom setup function."""
from invenio.ext.template.context_processor import template_args
from invenio.modules.collections.views.collections import index
@template_args(index)
def foo():
return {'foo': 'foo', 'baz': 'baz'}
@template_args('collections.index', app=app)
def bar():
return {'bar': 'bar', 'baz': 'BAZ'}
@property
def config(self):
from invenio.base.config import EXTENSIONS
cfg = super(TemplateArgsTest, self).config
cfg['EXTENSIONS'] = EXTENSIONS + [
'invenio.testsuite.test_ext_template.TemplateArgsTest']
return cfg
def test_template_args_loading(self):
self.client.get(url_for('collections.index'))
self.assertEqual(self.get_context_variable('foo'), 'foo')
self.assertEqual(self.get_context_variable('bar'), 'bar')
self.assertEqual(self.get_context_variable('baz'), 'BAZ')
class TemplateArgsLoadingTest(unittest.TestCase):
"""Test ``template_args`` decorator outside app context."""
def test_broken_loading(self):
from invenio.ext.template.context_processor import template_args
def foo():
return {'foo': 'foo'}
self.assertRaises(Exception,
lambda: template_args('collections.index')(foo))
TEST_SUITE = make_test_suite(TemplateTest, TemplateLoaderCase,
TemplateArgsTest, TemplateArgsLoadingTest)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
| gpl-2.0 |
nerzhul/ansible | lib/ansible/modules/univention/udm_group.py | 27 | 5389 | #!/usr/bin/python
# -*- coding: UTF-8 -*-
# Copyright (c) 2016, Adfinis SyGroup AG
# Tobias Rueetschi <tobias.ruetschi@adfinis-sygroup.ch>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.univention_umc import (
umc_module_for_add,
umc_module_for_edit,
ldap_search,
base_dn,
)
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: udm_group
version_added: "2.2"
author: "Tobias Rueetschi (@2-B)"
short_description: Manage of the posix group
description:
- "This module allows to manage user groups on a univention corporate server (UCS).
It uses the python API of the UCS to create a new object or edit it."
requirements:
- Python >= 2.6
options:
state:
required: false
default: "present"
choices: [ present, absent ]
description:
- Whether the group is present or not.
name:
required: true
description:
- Name of the posix group.
description:
required: false
description:
- Group description.
position:
required: false
description:
- define the whole ldap position of the group, e.g.
C(cn=g123m-1A,cn=classes,cn=schueler,cn=groups,ou=schule,dc=example,dc=com).
ou:
required: false
description:
- LDAP OU, e.g. school for LDAP OU C(ou=school,dc=example,dc=com).
subpath:
required: false
description:
- Subpath inside the OU, e.g. C(cn=classes,cn=students,cn=groups).
'''
EXAMPLES = '''
# Create a POSIX group
- udm_group:
name: g123m-1A
# Create a POSIX group with the exact DN
# C(cn=g123m-1A,cn=classes,cn=students,cn=groups,ou=school,dc=school,dc=example,dc=com)
- udm_group:
name: g123m-1A
subpath: 'cn=classes,cn=students,cn=groups'
ou: school
# or
- udm_group:
name: g123m-1A
position: 'cn=classes,cn=students,cn=groups,ou=school,dc=school,dc=example,dc=com'
'''
RETURN = '''# '''
def main():
module = AnsibleModule(
argument_spec = dict(
name = dict(required=True,
type='str'),
description = dict(default=None,
type='str'),
position = dict(default='',
type='str'),
ou = dict(default='',
type='str'),
subpath = dict(default='cn=groups',
type='str'),
state = dict(default='present',
choices=['present', 'absent'],
type='str')
),
supports_check_mode=True
)
name = module.params['name']
description = module.params['description']
position = module.params['position']
ou = module.params['ou']
subpath = module.params['subpath']
state = module.params['state']
changed = False
diff = None  # ensure 'diff' is defined for exit_json even when nothing was edited
groups = list(ldap_search(
'(&(objectClass=posixGroup)(cn={}))'.format(name),
attr=['cn']
))
if position != '':
container = position
else:
if ou != '':
ou = 'ou={},'.format(ou)
if subpath != '':
subpath = '{},'.format(subpath)
container = '{}{}{}'.format(subpath, ou, base_dn())
group_dn = 'cn={},{}'.format(name, container)
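# For example (illustrative): with name='g123m-1A', ou='school',
# subpath='cn=groups' and a base DN of 'dc=example,dc=com', the code
# above yields:
#   container == 'cn=groups,ou=school,dc=example,dc=com'
#   group_dn  == 'cn=g123m-1A,cn=groups,ou=school,dc=example,dc=com'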
exists = bool(len(groups))
if state == 'present':
try:
if not exists:
grp = umc_module_for_add('groups/group', container)
else:
grp = umc_module_for_edit('groups/group', group_dn)
grp['name'] = name
grp['description'] = description
diff = grp.diff()
changed = diff != []
if not module.check_mode:
if not exists:
grp.create()
else:
grp.modify()
except:
module.fail_json(
msg="Creating/editing group {} in {} failed".format(name, container)
)
if state == 'absent' and exists:
try:
grp = umc_module_for_edit('groups/group', group_dn)
if not module.check_mode:
grp.remove()
changed = True
except:
module.fail_json(
msg="Removing group {} failed".format(name)
)
module.exit_json(
changed=changed,
name=name,
diff=diff,
container=container
)
if __name__ == '__main__':
main()
| gpl-3.0 |
deworrall92/harmonicConvolutions | harmonic_network_lite.py | 2 | 3212 | """
Harmonic Convolutions Lite
A simplified API for harmomin_network_ops
"""
import numpy as np
import tensorflow as tf
from harmonic_network_ops import *
def conv2d(x, n_channels, ksize, strides=(1,1,1,1), padding='VALID', phase=True,
max_order=1, stddev=0.4, n_rings=None, name='conv2d'):
"""Harmonic Convolution lite
x: input tf tensor, shape [batchsize,height,width,order,complex,channels],
e.g. a real input tensor of rotation order 0 could have shape
[16,32,32,3,1,9], or a complex input tensor of rotation orders 0,1,2, could
have shape [32,121,121,3,2,10]
n_channels: number of output channels (int)
ksize: size of square filter (int)
strides: stride size (4-tuple: default (1,1,1,1))
padding: SAME or VALID (defult VALID)
phase: use a per-channel phase offset (default True)
max_order: maximum rotation order e.g. max_order=2 uses 0,1,2 (default 1)
stddev: scale of filter initialization wrt He initialization
name: name scope of the op (default 'conv2d')
"""
xsh = x.get_shape().as_list()
shape = [ksize, ksize, xsh[5], n_channels]
Q = get_weights_dict(shape, max_order, std_mult=stddev, n_rings=n_rings, name='W'+name)
if phase == True:
P = get_phase_dict(xsh[5], n_channels, max_order, name='phase'+name)
else:
P = None
W = get_filters(Q, filter_size=ksize, P=P, n_rings=n_rings)
R = h_conv(x, W, strides=strides, padding=padding, max_order=max_order,
name=name)
return R
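# Hedged usage sketch (added for exposition; the placeholder sizes are
# illustrative assumptions, and the axis layout follows the docstring
# above: [batch, height, width, order, complex, channels]):
#
#   x = tf.placeholder(tf.float32, [16, 32, 32, 1, 1, 3])   # order-0 real input
#   y = conv2d(x, n_channels=8, ksize=5, padding='SAME', max_order=1,
#              name='hconv1')
#   # y carries rotation orders 0..max_order and a complex axis of size 2,
#   # i.e. an expected shape of [16, 32, 32, 2, 2, 8]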
def batch_norm(x, train_phase, fnc=tf.nn.relu, decay=0.99, eps=1e-4, name='hbn'):
"""Batch normalization for the magnitudes of X"""
return h_batch_norm(x, fnc, train_phase, decay=decay, eps=eps, name=name)
def non_linearity(x, fnc=tf.nn.relu, eps=1e-4, name='nl'):
"""Alter nonlinearity for the complex domains"""
return h_nonlin(x, fnc, eps=eps, name=name)
def mean_pool(x, ksize=(1,1,1,1), strides=(1,1,1,1), name='mp'):
"""Mean pooling"""
with tf.name_scope(name) as scope:
return mean_pooling(x, ksize=ksize, strides=strides)
def sum_magnitudes(x, eps=1e-12, keep_dims=True):
"""Sum the magnitudes of each of the complex feature maps in X.
Output U = sum_i |x_i|
x: input tf tensor, shape [batchsize,height,width,channels,complex,order],
e.g. a real input tensor of rotation order 0 could have shape
[16,32,32,3,1,1], or a complex input tensor of rotation orders 0,1,2, could
have shape [32,121,121,32,2,3]
eps: regularization since grad |x| is infinite at zero (default 1e-4)
keep_dims: whether to collapse summed dimensions (default True)
"""
R = tf.reduce_sum(tf.square(x), axis=[4], keep_dims=keep_dims)
return tf.reduce_sum(tf.sqrt(tf.maximum(R,eps)), axis=[3], keep_dims=keep_dims)
def stack_magnitudes(X, eps=1e-12, keep_dims=True):
"""Stack the magnitudes of each of the complex feature maps in X.
Output U = concat(|X_i|)
X: input tf tensor; the complex axis (dimension 4) is reduced
eps: regularization since grad |X| is infinite at zero (default 1e-12)
"""
R = tf.reduce_sum(tf.square(X), axis=[4], keep_dims=keep_dims)
return tf.sqrt(tf.maximum(R,eps))
| mit |
iledarn/addons-yelizariev | mail_move_message/controllers/main.py | 13 | 2454 | from openerp.addons.web.controllers.main import DataSet
from openerp.tools.translate import _
from openerp import http
from openerp.http import request
class DataSetCustom(DataSet):
def _extend_name(self, model, records):
cr, uid, context = request.cr, request.uid, request.context
Model = request.registry[model]
fields = Model.fields_get(cr, uid, False, context)
contact_field = False
for n, f in fields.iteritems():
if f['type'] == 'many2one' and f['relation'] == 'res.partner':
contact_field = n
break
partner_info = {}
if contact_field:
partner_info = Model.read(cr, uid, [r[0] for r in records], [contact_field], context)
partner_info = dict([(p['id'], p[contact_field]) for p in partner_info])
res = []
for r in records:
if partner_info.get(r[0]):
res.append((r[0], _('%s [%s] ID %s') % (r[1], partner_info.get(r[0])[1], r[0])))
else:
res.append((r[0], _('%s ID %s') % (r[1], r[0])))
return res
@http.route('/web/dataset/call_kw/<model>/name_search', type='json', auth="user")
def name_search(self, model, method, args, kwargs):
context = kwargs.get('context')
if context and context.get('extended_name_with_contact'):
# add order by ID desc
cr, uid = request.cr, request.uid
Model = request.registry[model]
search_args = list(kwargs.get('args') or [])
limit = int(kwargs.get('limit') or 100)
operator = kwargs.get('operator')
name = kwargs.get('name')
if Model._rec_name and (not name == '' and operator == 'ilike'):
search_args += [(Model._rec_name, operator, name)]
ids = Model.search(cr, uid, search_args, limit=limit, order='id desc', context=context)
res = Model.name_get(cr, uid, ids, context)
return self._extend_name(model, res)
return self._call_kw(model, method, args, kwargs)
@http.route('/web/dataset/call_kw/<model>/name_get', type='json', auth="user")
def name_get(self, model, method, args, kwargs):
res = self._call_kw(model, method, args, kwargs)
context = kwargs.get('context')
if context and context.get('extended_name_with_contact'):
res = self._extend_name(model, res)
return res
| lgpl-3.0 |
brijeshkesariya/odoo | addons/hr_timesheet_sheet/wizard/__init__.py | 443 | 1075 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import hr_timesheet_current
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
onceuponatimeforever/oh-mainline | vendor/packages/kombu/funtests/transport.py | 20 | 9620 | from __future__ import absolute_import, print_function
import random
import socket
import string
import sys
import time
import unittest2 as unittest
import warnings
import weakref
from nose import SkipTest
from kombu import Connection
from kombu import Exchange, Queue
from kombu.five import range
from kombu.tests.case import skip_if_quick
if sys.version_info >= (2, 5):
from hashlib import sha256 as _digest
else:
from sha import new as _digest # noqa
def say(msg):
print(msg, file=sys.stderr)
def _nobuf(x):
return [str(i) if isinstance(i, buffer) else i for i in x]
def consumeN(conn, consumer, n=1, timeout=30):
messages = []
def callback(message_data, message):
messages.append(message_data)
message.ack()
prev, consumer.callbacks = consumer.callbacks, [callback]
consumer.consume()
seconds = 0
while True:
try:
conn.drain_events(timeout=1)
except socket.timeout:
seconds += 1
msg = 'Received %s/%s messages. %s seconds passed.' % (
len(messages), n, seconds)
if seconds >= timeout:
raise socket.timeout(msg)
if seconds > 1:
say(msg)
if len(messages) >= n:
break
consumer.cancel()
    consumer.callbacks = prev
return messages
class TransportCase(unittest.TestCase):
transport = None
prefix = None
sep = '.'
userid = None
password = None
event_loop_max = 100
connection_options = {}
suppress_disorder_warning = False
reliable_purge = True
connected = False
skip_test_reason = None
message_size_limit = None
def before_connect(self):
pass
def after_connect(self, connection):
pass
def setUp(self):
if self.transport:
try:
self.before_connect()
except SkipTest as exc:
self.skip_test_reason = str(exc)
else:
self.do_connect()
self.exchange = Exchange(self.prefix, 'direct')
self.queue = Queue(self.prefix, self.exchange, self.prefix)
def purge(self, names):
chan = self.connection.channel()
total = 0
for queue in names:
            while True:
                # ensure the queue is completely empty
purged = chan.queue_purge(queue=queue)
if not purged:
break
total += purged
chan.close()
return total
def get_connection(self, **options):
if self.userid:
options.setdefault('userid', self.userid)
if self.password:
options.setdefault('password', self.password)
return Connection(transport=self.transport, **options)
def do_connect(self):
self.connection = self.get_connection(**self.connection_options)
try:
self.connection.connect()
self.after_connect(self.connection)
except self.connection.connection_errors:
self.skip_test_reason = '%s transport cannot connect' % (
self.transport, )
else:
self.connected = True
def verify_alive(self):
if self.transport:
if not self.connected:
raise SkipTest(self.skip_test_reason)
return True
def purge_consumer(self, consumer):
return self.purge([queue.name for queue in consumer.queues])
def test_produce__consume(self):
if not self.verify_alive():
return
chan1 = self.connection.channel()
consumer = chan1.Consumer(self.queue)
self.purge_consumer(consumer)
producer = chan1.Producer(self.exchange)
producer.publish({'foo': 'bar'}, routing_key=self.prefix)
message = consumeN(self.connection, consumer)
self.assertDictEqual(message[0], {'foo': 'bar'})
chan1.close()
self.purge([self.queue.name])
def test_purge(self):
if not self.verify_alive():
return
chan1 = self.connection.channel()
consumer = chan1.Consumer(self.queue)
self.purge_consumer(consumer)
producer = chan1.Producer(self.exchange)
for i in range(10):
producer.publish({'foo': 'bar'}, routing_key=self.prefix)
if self.reliable_purge:
self.assertEqual(consumer.purge(), 10)
self.assertEqual(consumer.purge(), 0)
else:
purged = 0
while purged < 9:
purged += self.purge_consumer(consumer)
def _digest(self, data):
return _digest(data).hexdigest()
@skip_if_quick
def test_produce__consume_large_messages(
self, bytes=1048576, n=10,
charset=string.punctuation + string.letters + string.digits):
if not self.verify_alive():
return
bytes = min(x for x in [bytes, self.message_size_limit] if x)
messages = [''.join(random.choice(charset)
for j in range(bytes)) + '--%s' % n
for i in range(n)]
digests = []
chan1 = self.connection.channel()
consumer = chan1.Consumer(self.queue)
self.purge_consumer(consumer)
producer = chan1.Producer(self.exchange)
for i, message in enumerate(messages):
producer.publish({'text': message,
'i': i}, routing_key=self.prefix)
digests.append(self._digest(message))
received = [(msg['i'], msg['text'])
for msg in consumeN(self.connection, consumer, n)]
self.assertEqual(len(received), n)
ordering = [i for i, _ in received]
if ordering != list(range(n)) and not self.suppress_disorder_warning:
warnings.warn(
'%s did not deliver messages in FIFO order: %r' % (
self.transport, ordering))
for i, text in received:
if text != messages[i]:
raise AssertionError('%i: %r is not %r' % (
i, text[-100:], messages[i][-100:]))
self.assertEqual(self._digest(text), digests[i])
chan1.close()
self.purge([self.queue.name])
def P(self, rest):
return '%s%s%s' % (self.prefix, self.sep, rest)
def test_produce__consume_multiple(self):
if not self.verify_alive():
return
chan1 = self.connection.channel()
producer = chan1.Producer(self.exchange)
b1 = Queue(self.P('b1'), self.exchange, 'b1')(chan1)
b2 = Queue(self.P('b2'), self.exchange, 'b2')(chan1)
b3 = Queue(self.P('b3'), self.exchange, 'b3')(chan1)
[q.declare() for q in (b1, b2, b3)]
self.purge([b1.name, b2.name, b3.name])
producer.publish('b1', routing_key='b1')
producer.publish('b2', routing_key='b2')
producer.publish('b3', routing_key='b3')
chan1.close()
chan2 = self.connection.channel()
consumer = chan2.Consumer([b1, b2, b3])
messages = consumeN(self.connection, consumer, 3)
self.assertItemsEqual(_nobuf(messages), ['b1', 'b2', 'b3'])
chan2.close()
self.purge([self.P('b1'), self.P('b2'), self.P('b3')])
def test_timeout(self):
if not self.verify_alive():
return
chan = self.connection.channel()
self.purge([self.queue.name])
consumer = chan.Consumer(self.queue)
self.assertRaises(
socket.timeout, self.connection.drain_events, timeout=0.3,
)
consumer.cancel()
chan.close()
def test_basic_get(self):
if not self.verify_alive():
return
chan1 = self.connection.channel()
producer = chan1.Producer(self.exchange)
chan2 = self.connection.channel()
queue = Queue(self.P('basic_get'), self.exchange, 'basic_get')
queue = queue(chan2)
queue.declare()
producer.publish({'basic.get': 'this'}, routing_key='basic_get')
chan1.close()
for i in range(self.event_loop_max):
m = queue.get()
if m:
break
time.sleep(0.1)
self.assertEqual(m.payload, {'basic.get': 'this'})
self.purge([queue.name])
chan2.close()
def test_cyclic_reference_transport(self):
if not self.verify_alive():
return
def _createref():
conn = self.get_connection()
conn.transport
conn.close()
return weakref.ref(conn)
self.assertIsNone(_createref()())
def test_cyclic_reference_connection(self):
if not self.verify_alive():
return
def _createref():
conn = self.get_connection()
conn.connect()
conn.close()
return weakref.ref(conn)
self.assertIsNone(_createref()())
def test_cyclic_reference_channel(self):
if not self.verify_alive():
return
def _createref():
conn = self.get_connection()
conn.connect()
chanrefs = []
try:
for i in range(100):
channel = conn.channel()
chanrefs.append(weakref.ref(channel))
channel.close()
finally:
conn.close()
return chanrefs
for chanref in _createref():
self.assertIsNone(chanref())
def tearDown(self):
if self.transport and self.connected:
self.connection.close()
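# Illustrative concrete case (hypothetical -- the real suite would define
# one such subclass per broker): a transport is exercised by pinning down
# the class attributes and inheriting every test from TransportCase.
#
#     class test_memory(TransportCase):
#         transport = 'memory'
#         prefix = 'test_memory'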
| agpl-3.0 |
Lekensteyn/buildbot | master/buildbot/test/unit/test_schedulers_basic.py | 10 | 25471 | # This file is part of Buildbot. Buildbot is free software: you can
# redistribute it and/or modify it under the terms of the GNU General Public
# License as published by the Free Software Foundation, version 2.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 51
# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Copyright Buildbot Team Members
from __future__ import absolute_import
from __future__ import print_function
import mock
from twisted.internet import defer
from twisted.internet import task
from twisted.trial import unittest
from buildbot import config
from buildbot.schedulers import basic
from buildbot.test.fake import fakedb
from buildbot.test.util import scheduler
class CommonStuffMixin(object):
def makeScheduler(self, klass, **kwargs_override):
kwargs = dict(name="tsched", treeStableTimer=60,
builderNames=['tbuild'])
kwargs.update(kwargs_override)
self.master.db.insertTestData(
[fakedb.Builder(name=builderName) for builderName in kwargs['builderNames']])
sched = self.attachScheduler(
klass(**kwargs), self.OBJECTID, self.SCHEDULERID)
# add a Clock to help checking timing issues
self.clock = sched._reactor = task.Clock()
# keep track of builds in self.events
self.events = []
@self.assertArgSpecMatches(sched.addBuildsetForChanges)
def addBuildsetForChanges(
waited_for=False, reason='', external_idstring=None, changeids=None,
builderNames=None, properties=None, **kw):
self.assertEqual(external_idstring, None)
self.assertEqual(reason, sched.reason)
self.events.append('B%s@%d' % (repr(changeids).replace(' ', ''),
self.clock.seconds()))
return defer.succeed(None)
sched.addBuildsetForChanges = addBuildsetForChanges
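        # each recorded event reads like 'B[13,14]@11': the changeids that
        # went into the buildset, plus the fake-clock time at which it
        # fired; the assertions in the tests below compare these strings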
# see self.assertConsumingChanges
self.consumingChanges = None
def startConsumingChanges(**kwargs):
self.consumingChanges = kwargs
return defer.succeed(None)
sched.startConsumingChanges = startConsumingChanges
return sched
def assertConsumingChanges(self, **kwargs):
self.assertEqual(self.consumingChanges, kwargs)
class BaseBasicScheduler(CommonStuffMixin,
scheduler.SchedulerMixin, unittest.TestCase):
OBJECTID = 244
SCHEDULERID = 4
# a custom subclass since we're testing the base class. This basically
# re-implements SingleBranchScheduler, but with more asserts
class Subclass(basic.BaseBasicScheduler):
timer_started = False
def getChangeFilter(self, *args, **kwargs):
return kwargs.get('change_filter')
def getTimerNameForChange(self, change):
self.timer_started = True
return "xxx"
def getChangeClassificationsForTimer(self, schedulerid, timer_name):
assert timer_name == "xxx"
assert schedulerid == BaseBasicScheduler.SCHEDULERID
return self.master.db.schedulers.getChangeClassifications(schedulerid)
def setUp(self):
self.setUpScheduler()
def tearDown(self):
self.tearDownScheduler()
# tests
def test_constructor_positional_exception(self):
self.assertRaises(config.ConfigErrors,
lambda: self.Subclass("tsched", "master", 60))
def test_activate_no_treeStableTimer(self):
cf = mock.Mock('cf')
fII = mock.Mock('fII')
sched = self.makeScheduler(self.Subclass, treeStableTimer=None, change_filter=cf,
fileIsImportant=fII)
self.db.schedulers.fakeClassifications(self.SCHEDULERID, {20: True})
d = sched.activate()
# check that the scheduler has started to consume changes, and the
# classifications *have* been flushed, since they will not be used
def check(_):
self.assertConsumingChanges(fileIsImportant=fII, change_filter=cf,
onlyImportant=False)
self.db.schedulers.assertClassifications(self.SCHEDULERID, {})
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
return d
def test_subclass_fileIsImportant(self):
class Subclass(self.Subclass):
def fileIsImportant(self, change):
return False
sched = self.makeScheduler(Subclass, onlyImportant=True)
self.assertEqual(
Subclass.fileIsImportant.__get__(sched), sched.fileIsImportant)
def test_activate_treeStableTimer(self):
cf = mock.Mock()
sched = self.makeScheduler(
self.Subclass, treeStableTimer=10, change_filter=cf)
self.db.schedulers.fakeClassifications(self.SCHEDULERID, {20: True})
self.master.db.insertTestData([
fakedb.Change(changeid=20),
fakedb.SchedulerChange(schedulerid=self.SCHEDULERID,
changeid=20, important=1)
])
d = sched.activate()
# check that the scheduler has started to consume changes, and no
# classifications have been flushed. Furthermore, the existing
# classification should have been acted on, so the timer should be
# running
def check(_):
self.assertConsumingChanges(fileIsImportant=None, change_filter=cf,
onlyImportant=False)
self.db.schedulers.assertClassifications(
self.SCHEDULERID, {20: True})
self.assertTrue(sched.timer_started)
self.clock.advance(10)
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
return d
def test_gotChange_no_treeStableTimer_unimportant(self):
sched = self.makeScheduler(
self.Subclass, treeStableTimer=None, branch='master')
sched.activate()
d = sched.gotChange(
self.makeFakeChange(branch='master', number=13), False)
def check(_):
self.assertEqual(self.events, [])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_no_treeStableTimer_important(self):
sched = self.makeScheduler(
self.Subclass, treeStableTimer=None, branch='master')
sched.activate()
d = sched.gotChange(
self.makeFakeChange(branch='master', number=13), True)
def check(_):
self.assertEqual(self.events, ['B[13]@0'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_treeStableTimer_unimportant(self):
sched = self.makeScheduler(
self.Subclass, treeStableTimer=10, branch='master')
sched.activate()
d = sched.gotChange(
self.makeFakeChange(branch='master', number=13), False)
def check(_):
self.assertEqual(self.events, [])
d.addCallback(check)
d.addCallback(lambda _: self.clock.advance(10))
d.addCallback(check) # should still be empty
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_treeStableTimer_important(self):
sched = self.makeScheduler(
self.Subclass, treeStableTimer=10, branch='master')
sched.activate()
d = sched.gotChange(
self.makeFakeChange(branch='master', number=13), True)
d.addCallback(lambda _: self.clock.advance(10))
def check(_):
self.assertEqual(self.events, ['B[13]@10'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
@defer.inlineCallbacks
def test_gotChange_treeStableTimer_sequence(self):
sched = self.makeScheduler(
self.Subclass, treeStableTimer=9, branch='master')
self.master.db.insertTestData([
fakedb.Change(changeid=1, branch='master', when_timestamp=1110),
fakedb.ChangeFile(changeid=1, filename='readme.txt'),
fakedb.Change(changeid=2, branch='master', when_timestamp=2220),
fakedb.ChangeFile(changeid=2, filename='readme.txt'),
fakedb.Change(changeid=3, branch='master', when_timestamp=3330),
fakedb.ChangeFile(changeid=3, filename='readme.txt'),
fakedb.Change(changeid=4, branch='master', when_timestamp=4440),
fakedb.ChangeFile(changeid=4, filename='readme.txt'),
])
sched.activate()
self.clock.advance(2220)
# this important change arrives at 2220, so the stable timer will last
# until 2229
yield sched.gotChange(
self.makeFakeChange(branch='master', number=1, when=2220),
True)
self.assertEqual(self.events, [])
self.db.schedulers.assertClassifications(self.SCHEDULERID, {1: True})
# but another (unimportant) change arrives before then
self.clock.advance(6) # to 2226
self.assertEqual(self.events, [])
yield sched.gotChange(
self.makeFakeChange(branch='master', number=2, when=2226),
False)
self.assertEqual(self.events, [])
self.db.schedulers.assertClassifications(
self.SCHEDULERID, {1: True, 2: False})
self.clock.advance(3) # to 2229
self.assertEqual(self.events, [])
self.clock.advance(3) # to 2232
self.assertEqual(self.events, [])
# another important change arrives at 2232
yield sched.gotChange(
self.makeFakeChange(branch='master', number=3, when=2232),
True)
self.assertEqual(self.events, [])
self.db.schedulers.assertClassifications(
self.SCHEDULERID, {1: True, 2: False, 3: True})
self.clock.advance(3) # to 2235
self.assertEqual(self.events, [])
# finally, time to start the build!
self.clock.advance(6) # to 2241
self.assertEqual(self.events, ['B[1,2,3]@2241'])
self.db.schedulers.assertClassifications(self.SCHEDULERID, {})
yield sched.deactivate()
@defer.inlineCallbacks
def test_enabled_callback(self):
sched = self.makeScheduler(self.Subclass)
expectedValue = not sched.enabled
yield sched._enabledCallback(None, {'enabled': not sched.enabled})
self.assertEqual(sched.enabled, expectedValue)
expectedValue = not sched.enabled
yield sched._enabledCallback(None, {'enabled': not sched.enabled})
self.assertEqual(sched.enabled, expectedValue)
@defer.inlineCallbacks
def test_disabled_activate(self):
sched = self.makeScheduler(self.Subclass)
yield sched._enabledCallback(None, {'enabled': not sched.enabled})
self.assertEqual(sched.enabled, False)
r = yield sched.activate()
self.assertEqual(r, None)
@defer.inlineCallbacks
def test_disabled_deactivate(self):
sched = self.makeScheduler(self.Subclass)
yield sched._enabledCallback(None, {'enabled': not sched.enabled})
self.assertEqual(sched.enabled, False)
r = yield sched.deactivate()
self.assertEqual(r, None)
class SingleBranchScheduler(CommonStuffMixin,
scheduler.SchedulerMixin, unittest.TestCase):
SCHEDULERID = 245
OBJECTID = 224455
codebases = {'a': {'repository': "", 'branch': 'master'},
'b': {'repository': "", 'branch': 'master'}}
def makeFullScheduler(self, **kwargs):
self.master.db.insertTestData(
[fakedb.Builder(name=builderName) for builderName in kwargs['builderNames']])
sched = self.attachScheduler(basic.SingleBranchScheduler(**kwargs),
self.OBJECTID, self.SCHEDULERID,
overrideBuildsetMethods=True)
# add a Clock to help checking timing issues
self.clock = sched._reactor = task.Clock()
return sched
def mkbs(self, **kwargs):
# create buildset for expected_buildset in assertBuildset.
bs = dict(reason=self.sched.reason, external_idstring=None, sourcestampsetid=100,
properties=[('scheduler', ('test', 'Scheduler'))])
bs.update(kwargs)
return bs
def mkss(self, **kwargs):
# create sourcestamp for expected_sourcestamps in assertBuildset.
ss = dict(
branch='master', project='', repository='', sourcestampsetid=100)
ss.update(kwargs)
return ss
def mkch(self, **kwargs):
# create changeset and insert in database.
chd = dict(branch='master', project='', repository='')
chd.update(kwargs)
ch = self.makeFakeChange(**chd)
# fakedb.Change requires changeid instead of number
chd['changeid'] = chd['number']
del chd['number']
self.db.insertTestData([fakedb.Change(**chd)])
return ch
def setUp(self):
self.setUpScheduler()
def tearDown(self):
self.tearDownScheduler()
def test_constructor_no_reason(self):
sched = self.makeScheduler(
basic.SingleBranchScheduler, branch="master")
self.assertEqual(
sched.reason, "The SingleBranchScheduler scheduler named 'tsched' triggered this build")
def test_constructor_reason(self):
sched = self.makeScheduler(
basic.SingleBranchScheduler, branch="master", reason="Changeset")
self.assertEqual(sched.reason, "Changeset")
def test_constructor_branch_mandatory(self):
self.assertRaises(config.ConfigErrors,
lambda: basic.SingleBranchScheduler(name="tsched", treeStableTimer=60))
def test_constructor_no_branch_but_filter(self):
# this shouldn't fail
basic.SingleBranchScheduler(name="tsched", treeStableTimer=60,
builderNames=['a', 'b'], change_filter=mock.Mock())
def test_constructor_branches_forbidden(self):
self.assertRaises(config.ConfigErrors,
lambda: basic.SingleBranchScheduler(name="tsched", treeStableTimer=60, branches='x'))
def test_gotChange_treeStableTimer_important(self):
# this looks suspiciously like the same test above, because SingleBranchScheduler
# is about the same as the test subclass used above
sched = self.makeScheduler(basic.SingleBranchScheduler,
treeStableTimer=10, branch='master')
sched.activate()
d = sched.gotChange(
self.makeFakeChange(branch='master', number=13), True)
d.addCallback(lambda _: self.clock.advance(10))
def check(_):
self.assertEqual(self.events, ['B[13]@10'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
@defer.inlineCallbacks
def test_gotChange_createAbsoluteSourceStamps_saveCodebase(self):
# check codebase is stored after receiving change.
sched = self.makeFullScheduler(name='test', builderNames=['test'],
treeStableTimer=None, branch='master',
codebases=self.codebases,
createAbsoluteSourceStamps=True)
self.db.insertTestData([
fakedb.Object(id=self.OBJECTID, name='test', class_name='SingleBranchScheduler')])
yield sched.activate()
yield sched.gotChange(self.mkch(codebase='a', revision='1234:abc', repository='A', number=0), True)
yield sched.gotChange(self.mkch(codebase='b', revision='2345:bcd', repository='B', number=1), True)
self.db.state.assertState(self.OBJECTID, lastCodebases={
'a': dict(branch='master', repository='A', revision=u'1234:abc', lastChange=0),
'b': dict(branch='master', repository='B', revision=u'2345:bcd', lastChange=1)})
yield sched.deactivate()
@defer.inlineCallbacks
def test_gotChange_createAbsoluteSourceStamps_older_change(self):
# check codebase is not stored if it's older than the most recent
sched = self.makeFullScheduler(name='test', builderNames=['test'],
treeStableTimer=None, branch='master',
codebases=self.codebases,
createAbsoluteSourceStamps=True)
self.db.insertTestData([
fakedb.Object(id=self.OBJECTID, name='test',
class_name='SingleBranchScheduler'),
fakedb.ObjectState(objectid=self.OBJECTID, name='lastCodebases',
value_json='{"a": {"branch": "master", "repository": "A", '
'"revision": "5555:def", "lastChange": 20}}')])
yield sched.activate()
# this change is not recorded, since it's older than
# change 20
yield sched.gotChange(self.mkch(codebase='a', revision='1234:abc', repository='A', number=10), True)
self.db.state.assertState(self.OBJECTID, lastCodebases={
'a': dict(branch='master', repository='A', revision=u'5555:def', lastChange=20)})
yield sched.deactivate()
@defer.inlineCallbacks
def test_getCodebaseDict(self):
sched = self.makeFullScheduler(name='test', builderNames=['test'],
treeStableTimer=None, branch='master',
codebases=self.codebases,
createAbsoluteSourceStamps=True)
sched._lastCodebases = {'a': dict(branch='master', repository='A',
revision=u'5555:def', lastChange=20)}
cbd = yield sched.getCodebaseDict('a')
self.assertEqual(cbd, dict(branch='master', repository='A',
revision=u'5555:def', lastChange=20))
@defer.inlineCallbacks
def test_getCodebaseDict_no_createAbsoluteSourceStamps(self):
sched = self.makeFullScheduler(name='test', builderNames=['test'],
treeStableTimer=None, branch='master',
codebases=self.codebases,
createAbsoluteSourceStamps=False)
sched._lastCodebases = {'a': dict(branch='master', repository='A',
revision=u'5555:def', lastChange=20)}
cbd = yield sched.getCodebaseDict('a')
# _lastCodebases is ignored
self.assertEqual(cbd, {'branch': 'master', 'repository': ''})
class AnyBranchScheduler(CommonStuffMixin,
scheduler.SchedulerMixin, unittest.TestCase):
SCHEDULERID = 6
OBJECTID = 246
def setUp(self):
self.setUpScheduler()
def tearDown(self):
self.tearDownScheduler()
def test_constructor_branch_forbidden(self):
self.assertRaises(config.ConfigErrors,
lambda: basic.SingleBranchScheduler(name="tsched", treeStableTimer=60, branch='x'))
def test_gotChange_treeStableTimer_multiple_branches(self):
"""Two changes with different branches get different treeStableTimers"""
sched = self.makeScheduler(basic.AnyBranchScheduler,
treeStableTimer=10, branches=['master', 'devel', 'boring'])
sched.activate()
def mkch(**kwargs):
ch = self.makeFakeChange(**kwargs)
self.db.changes.fakeAddChangeInstance(ch)
return ch
d = defer.succeed(None)
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', number=13), True))
d.addCallback(lambda _:
self.clock.advance(1)) # time is now 1
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', number=14), False))
d.addCallback(lambda _:
sched.gotChange(mkch(branch='boring', number=15), False))
d.addCallback(lambda _:
self.clock.pump([1] * 4)) # time is now 5
d.addCallback(lambda _:
sched.gotChange(mkch(branch='devel', number=16), True))
d.addCallback(lambda _:
self.clock.pump([1] * 10)) # time is now 15
def check(_):
self.assertEqual(self.events, ['B[13,14]@11', 'B[16]@15'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_treeStableTimer_multiple_repositories(self):
"""Two repositories, even with the same branch name, have different treeStableTimers"""
sched = self.makeScheduler(basic.AnyBranchScheduler,
treeStableTimer=10, branches=['master'])
d = sched.activate()
def mkch(**kwargs):
ch = self.makeFakeChange(**kwargs)
self.db.changes.fakeAddChangeInstance(ch)
return ch
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', repository="repo", number=13), True))
d.addCallback(lambda _:
self.clock.advance(1)) # time is now 1
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', repository="repo", number=14), False))
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', repository="other_repo", number=15), False))
d.addCallback(lambda _:
self.clock.pump([1] * 4)) # time is now 5
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', repository="other_repo", number=17), True))
d.addCallback(lambda _:
self.clock.pump([1] * 10)) # time is now 15
def check(_):
self.assertEqual(self.events, ['B[13,14]@11', 'B[15,17]@15'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_treeStableTimer_multiple_projects(self):
"""Two projects, even with the same branch name, have different treeStableTimers"""
sched = self.makeScheduler(basic.AnyBranchScheduler,
treeStableTimer=10, branches=['master'])
sched.startService()
def mkch(**kwargs):
ch = self.makeFakeChange(**kwargs)
self.db.changes.fakeAddChangeInstance(ch)
return ch
d = defer.succeed(None)
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', project="proj", number=13), True))
d.addCallback(lambda _:
self.clock.advance(1)) # time is now 1
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', project="proj", number=14), False))
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', project="other_proj", number=15), False))
d.addCallback(lambda _:
self.clock.pump([1] * 4)) # time is now 5
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', project="other_proj", number=17), True))
d.addCallback(lambda _:
self.clock.pump([1] * 10)) # time is now 15
def check(_):
self.assertEqual(self.events, ['B[13,14]@11', 'B[15,17]@15'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
def test_gotChange_treeStableTimer_multiple_codebases(self):
"""Two codebases, even with the same branch name, have different treeStableTimers"""
sched = self.makeScheduler(basic.AnyBranchScheduler,
treeStableTimer=10, branches=['master'])
sched.startService()
def mkch(**kwargs):
ch = self.makeFakeChange(**kwargs)
self.db.changes.fakeAddChangeInstance(ch)
return ch
d = defer.succeed(None)
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', codebase="base", number=13), True))
d.addCallback(lambda _:
self.clock.advance(1)) # time is now 1
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', codebase="base", number=14), False))
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', codebase="other_base", number=15), False))
d.addCallback(lambda _:
self.clock.pump([1] * 4)) # time is now 5
d.addCallback(lambda _:
sched.gotChange(mkch(branch='master', codebase="other_base", number=17), True))
d.addCallback(lambda _:
self.clock.pump([1] * 10)) # time is now 15
def check(_):
self.assertEqual(self.events, ['B[13,14]@11', 'B[15,17]@15'])
d.addCallback(check)
d.addCallback(lambda _: sched.deactivate())
| gpl-2.0 |
treejames/android_kernel_htc_msm7x30 | Documentation/networking/cxacru-cf.py | 14668 | 1626 | #!/usr/bin/env python
# Copyright 2009 Simon Arlott
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; either version 2 of the License, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# Usage: cxacru-cf.py < cxacru-cf.bin
# Output: values string suitable for the sysfs adsl_config attribute
#
# Warning: cxacru-cf.bin with MD5 hash cdbac2689969d5ed5d4850f117702110
# contains mis-aligned values which will stop the modem from being able
# to make a connection. If the first and last two bytes are removed then
# the values become valid, but the modulation will be forced to ANSI
# T1.413 only which may not be appropriate.
#
# The original binary format is a packed list of le32 values.
import sys
import struct
i = 0
while True:
buf = sys.stdin.read(4)
if len(buf) == 0:
break
elif len(buf) != 4:
sys.stdout.write("\n")
sys.stderr.write("Error: read {0} not 4 bytes\n".format(len(buf)))
sys.exit(1)
if i > 0:
sys.stdout.write(" ")
sys.stdout.write("{0:x}={1}".format(i, struct.unpack("<I", buf)[0]))
i += 1
sys.stdout.write("\n")
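# Quick self-check (illustrative, not part of the original script): two
# packed little-endian u32 values round-trip through the loop above, e.g.
#
#     printf '\x01\x00\x00\x00\x10\x00\x00\x00' | ./cxacru-cf.py
#
# prints "0=1 1=16" -- a hex index and a decimal value per field.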
| gpl-2.0 |
barak066/python-chardet | chardet/euckrprober.py | 236 | 1672 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from mbcharsetprober import MultiByteCharSetProber
from codingstatemachine import CodingStateMachine
from chardistribution import EUCKRDistributionAnalysis
from mbcssm import EUCKRSMModel
class EUCKRProber(MultiByteCharSetProber):
def __init__(self):
MultiByteCharSetProber.__init__(self)
self._mCodingSM = CodingStateMachine(EUCKRSMModel)
self._mDistributionAnalyzer = EUCKRDistributionAnalysis()
self.reset()
def get_charset_name(self):
return "EUC-KR"
| lgpl-2.1 |
tony5856/repository.tonysrepo | plugin.program.xmlerator/unidecode/x014.py | 252 | 4300 | data = (
'[?]', # 0x00
'e', # 0x01
'aai', # 0x02
'i', # 0x03
'ii', # 0x04
'o', # 0x05
'oo', # 0x06
'oo', # 0x07
'ee', # 0x08
'i', # 0x09
'a', # 0x0a
'aa', # 0x0b
'we', # 0x0c
'we', # 0x0d
'wi', # 0x0e
'wi', # 0x0f
'wii', # 0x10
'wii', # 0x11
'wo', # 0x12
'wo', # 0x13
'woo', # 0x14
'woo', # 0x15
'woo', # 0x16
'wa', # 0x17
'wa', # 0x18
'waa', # 0x19
'waa', # 0x1a
'waa', # 0x1b
'ai', # 0x1c
'w', # 0x1d
'\'', # 0x1e
't', # 0x1f
'k', # 0x20
'sh', # 0x21
's', # 0x22
'n', # 0x23
'w', # 0x24
'n', # 0x25
'[?]', # 0x26
'w', # 0x27
'c', # 0x28
'?', # 0x29
'l', # 0x2a
'en', # 0x2b
'in', # 0x2c
'on', # 0x2d
'an', # 0x2e
'pe', # 0x2f
'paai', # 0x30
'pi', # 0x31
'pii', # 0x32
'po', # 0x33
'poo', # 0x34
'poo', # 0x35
'hee', # 0x36
'hi', # 0x37
'pa', # 0x38
'paa', # 0x39
'pwe', # 0x3a
'pwe', # 0x3b
'pwi', # 0x3c
'pwi', # 0x3d
'pwii', # 0x3e
'pwii', # 0x3f
'pwo', # 0x40
'pwo', # 0x41
'pwoo', # 0x42
'pwoo', # 0x43
'pwa', # 0x44
'pwa', # 0x45
'pwaa', # 0x46
'pwaa', # 0x47
'pwaa', # 0x48
'p', # 0x49
'p', # 0x4a
'h', # 0x4b
'te', # 0x4c
'taai', # 0x4d
'ti', # 0x4e
'tii', # 0x4f
'to', # 0x50
'too', # 0x51
'too', # 0x52
'dee', # 0x53
'di', # 0x54
'ta', # 0x55
'taa', # 0x56
'twe', # 0x57
'twe', # 0x58
'twi', # 0x59
'twi', # 0x5a
'twii', # 0x5b
'twii', # 0x5c
'two', # 0x5d
'two', # 0x5e
'twoo', # 0x5f
'twoo', # 0x60
'twa', # 0x61
'twa', # 0x62
'twaa', # 0x63
'twaa', # 0x64
'twaa', # 0x65
't', # 0x66
'tte', # 0x67
'tti', # 0x68
'tto', # 0x69
'tta', # 0x6a
'ke', # 0x6b
'kaai', # 0x6c
'ki', # 0x6d
'kii', # 0x6e
'ko', # 0x6f
'koo', # 0x70
'koo', # 0x71
'ka', # 0x72
'kaa', # 0x73
'kwe', # 0x74
'kwe', # 0x75
'kwi', # 0x76
'kwi', # 0x77
'kwii', # 0x78
'kwii', # 0x79
'kwo', # 0x7a
'kwo', # 0x7b
'kwoo', # 0x7c
'kwoo', # 0x7d
'kwa', # 0x7e
'kwa', # 0x7f
'kwaa', # 0x80
'kwaa', # 0x81
'kwaa', # 0x82
'k', # 0x83
'kw', # 0x84
'keh', # 0x85
'kih', # 0x86
'koh', # 0x87
'kah', # 0x88
'ce', # 0x89
'caai', # 0x8a
'ci', # 0x8b
'cii', # 0x8c
'co', # 0x8d
'coo', # 0x8e
'coo', # 0x8f
'ca', # 0x90
'caa', # 0x91
'cwe', # 0x92
'cwe', # 0x93
'cwi', # 0x94
'cwi', # 0x95
'cwii', # 0x96
'cwii', # 0x97
'cwo', # 0x98
'cwo', # 0x99
'cwoo', # 0x9a
'cwoo', # 0x9b
'cwa', # 0x9c
'cwa', # 0x9d
'cwaa', # 0x9e
'cwaa', # 0x9f
'cwaa', # 0xa0
'c', # 0xa1
'th', # 0xa2
'me', # 0xa3
'maai', # 0xa4
'mi', # 0xa5
'mii', # 0xa6
'mo', # 0xa7
'moo', # 0xa8
'moo', # 0xa9
'ma', # 0xaa
'maa', # 0xab
'mwe', # 0xac
'mwe', # 0xad
'mwi', # 0xae
'mwi', # 0xaf
'mwii', # 0xb0
'mwii', # 0xb1
'mwo', # 0xb2
'mwo', # 0xb3
'mwoo', # 0xb4
'mwoo', # 0xb5
'mwa', # 0xb6
'mwa', # 0xb7
'mwaa', # 0xb8
'mwaa', # 0xb9
'mwaa', # 0xba
'm', # 0xbb
'm', # 0xbc
'mh', # 0xbd
'm', # 0xbe
'm', # 0xbf
'ne', # 0xc0
'naai', # 0xc1
'ni', # 0xc2
'nii', # 0xc3
'no', # 0xc4
'noo', # 0xc5
'noo', # 0xc6
'na', # 0xc7
'naa', # 0xc8
'nwe', # 0xc9
'nwe', # 0xca
'nwa', # 0xcb
'nwa', # 0xcc
'nwaa', # 0xcd
'nwaa', # 0xce
'nwaa', # 0xcf
'n', # 0xd0
'ng', # 0xd1
'nh', # 0xd2
'le', # 0xd3
'laai', # 0xd4
'li', # 0xd5
'lii', # 0xd6
'lo', # 0xd7
'loo', # 0xd8
'loo', # 0xd9
'la', # 0xda
'laa', # 0xdb
'lwe', # 0xdc
'lwe', # 0xdd
'lwi', # 0xde
'lwi', # 0xdf
'lwii', # 0xe0
'lwii', # 0xe1
'lwo', # 0xe2
'lwo', # 0xe3
'lwoo', # 0xe4
'lwoo', # 0xe5
'lwa', # 0xe6
'lwa', # 0xe7
'lwaa', # 0xe8
'lwaa', # 0xe9
'l', # 0xea
'l', # 0xeb
'l', # 0xec
'se', # 0xed
'saai', # 0xee
'si', # 0xef
'sii', # 0xf0
'so', # 0xf1
'soo', # 0xf2
'soo', # 0xf3
'sa', # 0xf4
'saa', # 0xf5
'swe', # 0xf6
'swe', # 0xf7
'swi', # 0xf8
'swi', # 0xf9
'swii', # 0xfa
'swii', # 0xfb
'swo', # 0xfc
'swo', # 0xfd
'swoo', # 0xfe
'swoo', # 0xff
)
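# How these tables are consumed (illustrative): unidecode resolves a code
# point U+14xy by indexing this module's ``data`` with the low byte, e.g.
# u'\u1401' (CANADIAN SYLLABICS E) -> data[0x01] == 'e'.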
| gpl-3.0 |
pnedunuri/scikit-learn | examples/covariance/plot_lw_vs_oas.py | 248 | 2903 | """
=============================
Ledoit-Wolf vs OAS estimation
=============================
The usual covariance maximum likelihood estimate can be regularized
using shrinkage. Ledoit and Wolf proposed a closed formula to compute
the asymptotically optimal shrinkage parameter (minimizing an MSE
criterion), yielding the Ledoit-Wolf covariance estimate.
Chen et al. proposed an improvement of the Ledoit-Wolf shrinkage
parameter, the OAS coefficient, whose convergence is significantly
better under the assumption that the data are Gaussian.
This example, inspired by Chen's publication [1], shows a comparison
of the estimated MSE of the LW and OAS methods, using Gaussian
distributed data.
[1] "Shrinkage Algorithms for MMSE Covariance Estimation"
Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz, cholesky
from sklearn.covariance import LedoitWolf, OAS
np.random.seed(0)
###############################################################################
n_features = 100
# simulation covariance matrix (AR(1) process)
r = 0.1
real_cov = toeplitz(r ** np.arange(n_features))
coloring_matrix = cholesky(real_cov)
n_samples_range = np.arange(6, 31, 1)
repeat = 100
lw_mse = np.zeros((n_samples_range.size, repeat))
oa_mse = np.zeros((n_samples_range.size, repeat))
lw_shrinkage = np.zeros((n_samples_range.size, repeat))
oa_shrinkage = np.zeros((n_samples_range.size, repeat))
for i, n_samples in enumerate(n_samples_range):
for j in range(repeat):
X = np.dot(
np.random.normal(size=(n_samples, n_features)), coloring_matrix.T)
lw = LedoitWolf(store_precision=False, assume_centered=True)
lw.fit(X)
lw_mse[i, j] = lw.error_norm(real_cov, scaling=False)
lw_shrinkage[i, j] = lw.shrinkage_
oa = OAS(store_precision=False, assume_centered=True)
oa.fit(X)
oa_mse[i, j] = oa.error_norm(real_cov, scaling=False)
oa_shrinkage[i, j] = oa.shrinkage_
# plot MSE
plt.subplot(2, 1, 1)
plt.errorbar(n_samples_range, lw_mse.mean(1), yerr=lw_mse.std(1),
label='Ledoit-Wolf', color='g')
plt.errorbar(n_samples_range, oa_mse.mean(1), yerr=oa_mse.std(1),
label='OAS', color='r')
plt.ylabel("Squared error")
plt.legend(loc="upper right")
plt.title("Comparison of covariance estimators")
plt.xlim(5, 31)
# plot shrinkage coefficient
plt.subplot(2, 1, 2)
plt.errorbar(n_samples_range, lw_shrinkage.mean(1), yerr=lw_shrinkage.std(1),
label='Ledoit-Wolf', color='g')
plt.errorbar(n_samples_range, oa_shrinkage.mean(1), yerr=oa_shrinkage.std(1),
label='OAS', color='r')
plt.xlabel("n_samples")
plt.ylabel("Shrinkage")
plt.legend(loc="lower right")
plt.ylim(plt.ylim()[0], 1. + (plt.ylim()[1] - plt.ylim()[0]) / 10.)
plt.xlim(5, 31)
plt.show()
| bsd-3-clause |
anujkhare/poppler | regtest/commands/create-report.py | 5 | 2559 | # create-report.py
#
# Copyright (C) 2012 Carlos Garcia Campos <carlosgc@gnome.org>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
from commands import Command, register_command
from HTMLReport import HTMLReport
from Config import Config
import os
import tempfile
class CreateReport(Command):
name = 'create-report'
usage_args = '[ options ... ] tests '
description = 'Create report of test results'
def __init__(self):
Command.__init__(self)
parser = self._get_args_parser()
parser.add_argument('--refs-dir',
action = 'store', dest = 'refs_dir', default = os.path.join(tempfile.gettempdir(), 'refs'),
help = 'Directory containing the references')
parser.add_argument('-o', '--out-dir',
action = 'store', dest = 'out_dir', default = os.path.join(tempfile.gettempdir(), 'out'),
help = 'Directory containing the results')
parser.add_argument('-p', '--pretty-diff',
action = 'store_true', dest = 'pretty_diff', default = False,
help = 'Include pretty diff output')
parser.add_argument('-n', '--no-browser',
action = 'store_false', dest = 'launch_browser', default = True,
help = 'Do not launch a web browser with the results')
parser.add_argument('tests')
def run(self, options):
config = Config()
config.pretty_diff = options['pretty_diff']
doc = options['tests']
if os.path.isdir(doc):
docs_dir = doc
else:
docs_dir = os.path.dirname(doc)
report = HTMLReport(docs_dir, options['refs_dir'], options['out_dir'])
report.create(options['launch_browser'])
return 0
register_command('create-report', CreateReport)
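# Hypothetical invocation (the entry-point name is an assumption; the
# options map one-to-one onto the parser arguments above):
#
#     regtest create-report --refs-dir /tmp/refs -o /tmp/out -p tests/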
| gpl-2.0 |
ssbarnea/ansible | test/units/compat/unittest.py | 156 | 1302 | # (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
'''
Compat module for Python2.7's unittest module
'''
import sys
# Allow wildcard import because we really do want to import all of
# unittests's symbols into this compat shim
# pylint: disable=wildcard-import,unused-wildcard-import
if sys.version_info < (2, 7):
try:
# Need unittest2 on python2.6
from unittest2 import *
except ImportError:
print('You need unittest2 installed on python2.6.x to run tests')
else:
from unittest import *
| gpl-3.0 |
lindamar/ecclesi | env/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py | 19 | 57277 | # orm/attributes.py
# Copyright (C) 2005-2016 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Defines instrumentation for class attributes and their interaction
with instances.
This module is usually not directly visible to user applications, but
defines a large part of the ORM's interactivity.
"""
import operator
from .. import util, event, inspection
from . import interfaces, collections, exc as orm_exc
from .base import instance_state, instance_dict, manager_of_class
from .base import PASSIVE_NO_RESULT, ATTR_WAS_SET, ATTR_EMPTY, NO_VALUE,\
NEVER_SET, NO_CHANGE, CALLABLES_OK, SQL_OK, RELATED_OBJECT_OK,\
INIT_OK, NON_PERSISTENT_OK, LOAD_AGAINST_COMMITTED, PASSIVE_OFF,\
PASSIVE_RETURN_NEVER_SET, PASSIVE_NO_INITIALIZE, PASSIVE_NO_FETCH,\
PASSIVE_NO_FETCH_RELATED, PASSIVE_ONLY_PERSISTENT, NO_AUTOFLUSH
from .base import state_str, instance_str
@inspection._self_inspects
class QueryableAttribute(interfaces._MappedAttribute,
interfaces.InspectionAttr,
interfaces.PropComparator):
"""Base class for :term:`descriptor` objects that intercept
attribute events on behalf of a :class:`.MapperProperty`
object. The actual :class:`.MapperProperty` is accessible
via the :attr:`.QueryableAttribute.property`
attribute.
.. seealso::
:class:`.InstrumentedAttribute`
:class:`.MapperProperty`
:attr:`.Mapper.all_orm_descriptors`
:attr:`.Mapper.attrs`
"""
is_attribute = True
def __init__(self, class_, key, impl=None,
comparator=None, parententity=None,
of_type=None):
self.class_ = class_
self.key = key
self.impl = impl
self.comparator = comparator
self._parententity = parententity
self._of_type = of_type
manager = manager_of_class(class_)
# manager is None in the case of AliasedClass
if manager:
# propagate existing event listeners from
# immediate superclass
for base in manager._bases:
if key in base:
self.dispatch._update(base[key].dispatch)
@util.memoized_property
def _supports_population(self):
return self.impl.supports_population
def get_history(self, instance, passive=PASSIVE_OFF):
return self.impl.get_history(instance_state(instance),
instance_dict(instance), passive)
def __selectable__(self):
# TODO: conditionally attach this method based on clause_element ?
return self
@util.memoized_property
def info(self):
"""Return the 'info' dictionary for the underlying SQL element.
The behavior here is as follows:
* If the attribute is a column-mapped property, i.e.
:class:`.ColumnProperty`, which is mapped directly
to a schema-level :class:`.Column` object, this attribute
will return the :attr:`.SchemaItem.info` dictionary associated
with the core-level :class:`.Column` object.
* If the attribute is a :class:`.ColumnProperty` but is mapped to
any other kind of SQL expression other than a :class:`.Column`,
the attribute will refer to the :attr:`.MapperProperty.info`
dictionary associated directly with the :class:`.ColumnProperty`,
assuming the SQL expression itself does not have its own ``.info``
attribute (which should be the case, unless a user-defined SQL
construct has defined one).
* If the attribute refers to any other kind of
:class:`.MapperProperty`, including :class:`.RelationshipProperty`,
the attribute will refer to the :attr:`.MapperProperty.info`
dictionary associated with that :class:`.MapperProperty`.
* To access the :attr:`.MapperProperty.info` dictionary of the
:class:`.MapperProperty` unconditionally, including for a
:class:`.ColumnProperty` that's associated directly with a
:class:`.schema.Column`, the attribute can be referred to using
:attr:`.QueryableAttribute.property` attribute, as
``MyClass.someattribute.property.info``.
.. versionadded:: 0.8.0
.. seealso::
:attr:`.SchemaItem.info`
:attr:`.MapperProperty.info`
"""
return self.comparator.info
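    # Illustrative access pattern (hypothetical mapped class ``User`` whose
    # ``id`` Column was created with info={'pii': False}):
    #
    #     User.id.info            # the Column's info dict, per the rules above
    #     User.id.property.info   # always the MapperProperty's own info dict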
@util.memoized_property
def parent(self):
"""Return an inspection instance representing the parent.
This will be either an instance of :class:`.Mapper`
or :class:`.AliasedInsp`, depending upon the nature
of the parent entity which this attribute is associated
with.
"""
return inspection.inspect(self._parententity)
@property
def expression(self):
return self.comparator.__clause_element__()
def __clause_element__(self):
return self.comparator.__clause_element__()
def _query_clause_element(self):
"""like __clause_element__(), but called specifically
by :class:`.Query` to allow special behavior."""
return self.comparator._query_clause_element()
def adapt_to_entity(self, adapt_to_entity):
assert not self._of_type
return self.__class__(adapt_to_entity.entity,
self.key, impl=self.impl,
comparator=self.comparator.adapt_to_entity(
adapt_to_entity),
parententity=adapt_to_entity)
def of_type(self, cls):
return QueryableAttribute(
self.class_,
self.key,
self.impl,
self.comparator.of_type(cls),
self._parententity,
of_type=cls)
def label(self, name):
return self._query_clause_element().label(name)
def operate(self, op, *other, **kwargs):
return op(self.comparator, *other, **kwargs)
def reverse_operate(self, op, other, **kwargs):
return op(other, self.comparator, **kwargs)
def hasparent(self, state, optimistic=False):
return self.impl.hasparent(state, optimistic=optimistic) is not False
def __getattr__(self, key):
try:
return getattr(self.comparator, key)
except AttributeError:
raise AttributeError(
'Neither %r object nor %r object associated with %s '
'has an attribute %r' % (
type(self).__name__,
type(self.comparator).__name__,
self,
key)
)
def __str__(self):
return "%s.%s" % (self.class_.__name__, self.key)
@util.memoized_property
def property(self):
"""Return the :class:`.MapperProperty` associated with this
:class:`.QueryableAttribute`.
Return values here will commonly be instances of
:class:`.ColumnProperty` or :class:`.RelationshipProperty`.
"""
return self.comparator.property
class InstrumentedAttribute(QueryableAttribute):
"""Class bound instrumented attribute which adds basic
:term:`descriptor` methods.
See :class:`.QueryableAttribute` for a description of most features.
"""
def __set__(self, instance, value):
self.impl.set(instance_state(instance),
instance_dict(instance), value, None)
def __delete__(self, instance):
self.impl.delete(instance_state(instance), instance_dict(instance))
def __get__(self, instance, owner):
if instance is None:
return self
dict_ = instance_dict(instance)
if self._supports_population and self.key in dict_:
return dict_[self.key]
else:
return self.impl.get(instance_state(instance), dict_)
def create_proxied_attribute(descriptor):
"""Create an QueryableAttribute / user descriptor hybrid.
Returns a new QueryableAttribute type that delegates descriptor
behavior and getattr() to the given descriptor.
"""
# TODO: can move this to descriptor_props if the need for this
# function is removed from ext/hybrid.py
class Proxy(QueryableAttribute):
"""Presents the :class:`.QueryableAttribute` interface as a
proxy on top of a Python descriptor / :class:`.PropComparator`
combination.
"""
def __init__(self, class_, key, descriptor,
comparator,
adapt_to_entity=None, doc=None,
original_property=None):
self.class_ = class_
self.key = key
self.descriptor = descriptor
self.original_property = original_property
self._comparator = comparator
self._adapt_to_entity = adapt_to_entity
self.__doc__ = doc
@property
def property(self):
return self.comparator.property
@util.memoized_property
def comparator(self):
if util.callable(self._comparator):
self._comparator = self._comparator()
if self._adapt_to_entity:
self._comparator = self._comparator.adapt_to_entity(
self._adapt_to_entity)
return self._comparator
def adapt_to_entity(self, adapt_to_entity):
return self.__class__(adapt_to_entity.entity,
self.key,
self.descriptor,
self._comparator,
adapt_to_entity)
def __get__(self, instance, owner):
if instance is None:
return self
else:
return self.descriptor.__get__(instance, owner)
def __str__(self):
return "%s.%s" % (self.class_.__name__, self.key)
def __getattr__(self, attribute):
"""Delegate __getattr__ to the original descriptor and/or
comparator."""
try:
return getattr(descriptor, attribute)
except AttributeError:
try:
return getattr(self.comparator, attribute)
except AttributeError:
raise AttributeError(
'Neither %r object nor %r object associated with %s '
'has an attribute %r' % (
type(descriptor).__name__,
type(self.comparator).__name__,
self,
attribute)
)
Proxy.__name__ = type(descriptor).__name__ + 'Proxy'
util.monkeypatch_proxied_specials(Proxy, type(descriptor),
name='descriptor',
from_instance=descriptor)
return Proxy
OP_REMOVE = util.symbol("REMOVE")
OP_APPEND = util.symbol("APPEND")
OP_REPLACE = util.symbol("REPLACE")
class Event(object):
"""A token propagated throughout the course of a chain of attribute
events.
Serves as an indicator of the source of the event and also provides
a means of controlling propagation across a chain of attribute
operations.
The :class:`.Event` object is sent as the ``initiator`` argument
when dealing with the :meth:`.AttributeEvents.append`,
:meth:`.AttributeEvents.set`,
and :meth:`.AttributeEvents.remove` events.
The :class:`.Event` object is currently interpreted by the backref
event handlers, and is used to control the propagation of operations
across two mutually-dependent attributes.
.. versionadded:: 0.9.0
:var impl: The :class:`.AttributeImpl` which is the current event
initiator.
:var op: The symbol :attr:`.OP_APPEND`, :attr:`.OP_REMOVE` or
:attr:`.OP_REPLACE`, indicating the source operation.
"""
__slots__ = 'impl', 'op', 'parent_token'
def __init__(self, attribute_impl, op):
self.impl = attribute_impl
self.op = op
self.parent_token = self.impl.parent_token
def __eq__(self, other):
return isinstance(other, Event) and \
other.impl is self.impl and \
other.op == self.op
@property
def key(self):
return self.impl.key
def hasparent(self, state):
return self.impl.hasparent(state)
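# Illustrative listener (hypothetical mapped class ``Parent`` with a
# collection ``Parent.children``; not part of this module): the Event token
# arrives as the ``initiator`` argument, so a handler can tell which
# attribute started a chain of mirrored backref operations.
#
#     from sqlalchemy import event
#
#     @event.listens_for(Parent.children, 'append')
#     def receive_append(target, value, initiator):
#         if initiator.op is OP_APPEND:
#             ...  # a direct append, not one replayed by a backref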
class AttributeImpl(object):
"""internal implementation for instrumented attributes."""
def __init__(self, class_, key,
callable_, dispatch, trackparent=False, extension=None,
compare_function=None, active_history=False,
parent_token=None, expire_missing=True,
send_modified_events=True,
**kwargs):
"""Construct an AttributeImpl.
\class_
associated class
key
string name of the attribute
\callable_
optional function which generates a callable based on a parent
instance, which produces the "default" values for a scalar or
collection attribute when it's first accessed, if not present
already.
trackparent
if True, attempt to track if an instance has a parent attached
to it via this attribute.
extension
a single or list of AttributeExtension object(s) which will
receive set/delete/append/remove/etc. events. Deprecated.
The event package is now used.
compare_function
a function that compares two values which are normally
assignable to this attribute.
active_history
indicates that get_history() should always return the "old" value,
even if it means executing a lazy callable upon attribute change.
parent_token
Usually references the MapperProperty, used as a key for
the hasparent() function to identify an "owning" attribute.
Allows multiple AttributeImpls to all match a single
owner attribute.
expire_missing
if False, don't add an "expiry" callable to this attribute
during state.expire_attributes(None), if no value is present
for this key.
send_modified_events
if False, the InstanceState._modified_event method will have no
effect; this means the attribute will never show up as changed in a
history entry.
"""
self.class_ = class_
self.key = key
self.callable_ = callable_
self.dispatch = dispatch
self.trackparent = trackparent
self.parent_token = parent_token or self
self.send_modified_events = send_modified_events
if compare_function is None:
self.is_equal = operator.eq
else:
self.is_equal = compare_function
# TODO: pass in the manager here
# instead of doing a lookup
attr = manager_of_class(class_)[key]
for ext in util.to_list(extension or []):
ext._adapt_listener(attr, ext)
if active_history:
self.dispatch._active_history = True
self.expire_missing = expire_missing
__slots__ = (
'class_', 'key', 'callable_', 'dispatch', 'trackparent',
'parent_token', 'send_modified_events', 'is_equal', 'expire_missing'
)
def __str__(self):
return "%s.%s" % (self.class_.__name__, self.key)
def _get_active_history(self):
"""Backwards compat for impl.active_history"""
return self.dispatch._active_history
def _set_active_history(self, value):
self.dispatch._active_history = value
active_history = property(_get_active_history, _set_active_history)
def hasparent(self, state, optimistic=False):
"""Return the boolean value of a `hasparent` flag attached to
the given state.
The `optimistic` flag determines what the default return value
should be if no `hasparent` flag can be located.
As this function is used to determine if an instance is an
*orphan*, instances that were loaded from storage should be
assumed to not be orphans, until a True/False value for this
flag is set.
An instance attribute that is loaded by a callable function
will also not have a `hasparent` flag.
"""
msg = "This AttributeImpl is not configured to track parents."
assert self.trackparent, msg
return state.parents.get(id(self.parent_token), optimistic) \
is not False
def sethasparent(self, state, parent_state, value):
"""Set a boolean flag on the given item corresponding to
whether or not it is attached to a parent object via the
attribute represented by this ``InstrumentedAttribute``.
"""
msg = "This AttributeImpl is not configured to track parents."
assert self.trackparent, msg
id_ = id(self.parent_token)
if value:
state.parents[id_] = parent_state
else:
if id_ in state.parents:
last_parent = state.parents[id_]
if last_parent is not False and \
last_parent.key != parent_state.key:
if last_parent.obj() is None:
raise orm_exc.StaleDataError(
"Removing state %s from parent "
"state %s along attribute '%s', "
"but the parent record "
"has gone stale, can't be sure this "
"is the most recent parent." %
(state_str(state),
state_str(parent_state),
self.key))
return
state.parents[id_] = False
def get_history(self, state, dict_, passive=PASSIVE_OFF):
raise NotImplementedError()
def get_all_pending(self, state, dict_, passive=PASSIVE_NO_INITIALIZE):
"""Return a list of tuples of (state, obj)
for all objects in this attribute's current state
+ history.
Only applies to object-based attributes.
This is an inlining of existing functionality
which roughly corresponds to:
get_state_history(
state,
key,
passive=PASSIVE_NO_INITIALIZE).sum()
"""
raise NotImplementedError()
def initialize(self, state, dict_):
"""Initialize the given state's attribute with an empty value."""
value = None
for fn in self.dispatch.init_scalar:
ret = fn(state, value, dict_)
if ret is not ATTR_EMPTY:
value = ret
return value
def get(self, state, dict_, passive=PASSIVE_OFF):
"""Retrieve a value from the given object.
If a callable is assembled on this object's attribute, and
passive is False, the callable will be executed and the
resulting value will be set as the new value for this attribute.
"""
if self.key in dict_:
return dict_[self.key]
else:
# if history present, don't load
key = self.key
if key not in state.committed_state or \
state.committed_state[key] is NEVER_SET:
if not passive & CALLABLES_OK:
return PASSIVE_NO_RESULT
if key in state.expired_attributes:
value = state._load_expired(state, passive)
elif key in state.callables:
callable_ = state.callables[key]
value = callable_(state, passive)
elif self.callable_:
value = self.callable_(state, passive)
else:
value = ATTR_EMPTY
if value is PASSIVE_NO_RESULT or value is NEVER_SET:
return value
elif value is ATTR_WAS_SET:
try:
return dict_[key]
except KeyError:
# TODO: no test coverage here.
raise KeyError(
"Deferred loader for attribute "
"%r failed to populate "
"correctly" % key)
elif value is not ATTR_EMPTY:
return self.set_committed_value(state, dict_, value)
if not passive & INIT_OK:
return NEVER_SET
else:
# Return a new, empty value
return self.initialize(state, dict_)
def append(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
self.set(state, dict_, value, initiator, passive=passive)
def remove(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
self.set(state, dict_, None, initiator,
passive=passive, check_old=value)
def pop(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
self.set(state, dict_, None, initiator,
passive=passive, check_old=value, pop=True)
def set(self, state, dict_, value, initiator,
passive=PASSIVE_OFF, check_old=None, pop=False):
raise NotImplementedError()
def get_committed_value(self, state, dict_, passive=PASSIVE_OFF):
"""return the unchanged value of this attribute"""
if self.key in state.committed_state:
value = state.committed_state[self.key]
if value in (NO_VALUE, NEVER_SET):
return None
else:
return value
else:
return self.get(state, dict_, passive=passive)
def set_committed_value(self, state, dict_, value):
"""set an attribute value on the given instance and 'commit' it."""
dict_[self.key] = value
state._commit(dict_, [self.key])
return value
class ScalarAttributeImpl(AttributeImpl):
"""represents a scalar value-holding InstrumentedAttribute."""
accepts_scalar_loader = True
uses_objects = False
supports_population = True
collection = False
__slots__ = '_replace_token', '_append_token', '_remove_token'
def __init__(self, *arg, **kw):
super(ScalarAttributeImpl, self).__init__(*arg, **kw)
self._replace_token = self._append_token = None
self._remove_token = None
def _init_append_token(self):
self._replace_token = self._append_token = Event(self, OP_REPLACE)
return self._replace_token
_init_append_or_replace_token = _init_append_token
def _init_remove_token(self):
self._remove_token = Event(self, OP_REMOVE)
return self._remove_token
def delete(self, state, dict_):
# TODO: catch key errors, convert to attributeerror?
if self.dispatch._active_history:
old = self.get(state, dict_, PASSIVE_RETURN_NEVER_SET)
else:
old = dict_.get(self.key, NO_VALUE)
if self.dispatch.remove:
self.fire_remove_event(state, dict_, old, self._remove_token)
state._modified_event(dict_, self, old)
del dict_[self.key]
def get_history(self, state, dict_, passive=PASSIVE_OFF):
if self.key in dict_:
return History.from_scalar_attribute(self, state, dict_[self.key])
else:
if passive & INIT_OK:
passive ^= INIT_OK
current = self.get(state, dict_, passive=passive)
if current is PASSIVE_NO_RESULT:
return HISTORY_BLANK
else:
return History.from_scalar_attribute(self, state, current)
def set(self, state, dict_, value, initiator,
passive=PASSIVE_OFF, check_old=None, pop=False):
if self.dispatch._active_history:
old = self.get(state, dict_, PASSIVE_RETURN_NEVER_SET)
else:
old = dict_.get(self.key, NO_VALUE)
if self.dispatch.set:
value = self.fire_replace_event(state, dict_,
value, old, initiator)
state._modified_event(dict_, self, old)
dict_[self.key] = value
def fire_replace_event(self, state, dict_, value, previous, initiator):
for fn in self.dispatch.set:
value = fn(
state, value, previous,
initiator or self._replace_token or
self._init_append_or_replace_token())
return value
def fire_remove_event(self, state, dict_, value, initiator):
for fn in self.dispatch.remove:
fn(state, value,
initiator or self._remove_token or self._init_remove_token())
@property
def type(self):
return self.property.columns[0].type
class ScalarObjectAttributeImpl(ScalarAttributeImpl):
"""represents a scalar-holding InstrumentedAttribute,
where the target object is also instrumented.
Adds events to delete/set operations.
"""
accepts_scalar_loader = False
uses_objects = True
supports_population = True
collection = False
__slots__ = ()
def delete(self, state, dict_):
old = self.get(state, dict_)
self.fire_remove_event(
state, dict_, old,
self._remove_token or self._init_remove_token())
del dict_[self.key]
def get_history(self, state, dict_, passive=PASSIVE_OFF):
if self.key in dict_:
return History.from_object_attribute(self, state, dict_[self.key])
else:
if passive & INIT_OK:
passive ^= INIT_OK
current = self.get(state, dict_, passive=passive)
if current is PASSIVE_NO_RESULT:
return HISTORY_BLANK
else:
return History.from_object_attribute(self, state, current)
def get_all_pending(self, state, dict_, passive=PASSIVE_NO_INITIALIZE):
if self.key in dict_:
current = dict_[self.key]
elif passive & CALLABLES_OK:
current = self.get(state, dict_, passive=passive)
else:
return []
# can't use __hash__(), can't use __eq__() here
if current is not None and \
current is not PASSIVE_NO_RESULT and \
current is not NEVER_SET:
ret = [(instance_state(current), current)]
else:
ret = [(None, None)]
if self.key in state.committed_state:
original = state.committed_state[self.key]
if original is not None and \
original is not PASSIVE_NO_RESULT and \
original is not NEVER_SET and \
original is not current:
ret.append((instance_state(original), original))
return ret
def set(self, state, dict_, value, initiator,
passive=PASSIVE_OFF, check_old=None, pop=False):
"""Set a value on the given InstanceState.
"""
if self.dispatch._active_history:
old = self.get(
state, dict_,
passive=PASSIVE_ONLY_PERSISTENT |
NO_AUTOFLUSH | LOAD_AGAINST_COMMITTED)
else:
old = self.get(
state, dict_, passive=PASSIVE_NO_FETCH ^ INIT_OK |
LOAD_AGAINST_COMMITTED)
if check_old is not None and \
old is not PASSIVE_NO_RESULT and \
check_old is not old:
if pop:
return
else:
raise ValueError(
"Object %s not associated with %s on attribute '%s'" % (
instance_str(check_old),
state_str(state),
self.key
))
value = self.fire_replace_event(state, dict_, value, old, initiator)
dict_[self.key] = value
def fire_remove_event(self, state, dict_, value, initiator):
if self.trackparent and value is not None:
self.sethasparent(instance_state(value), state, False)
for fn in self.dispatch.remove:
fn(state, value, initiator or
self._remove_token or self._init_remove_token())
state._modified_event(dict_, self, value)
def fire_replace_event(self, state, dict_, value, previous, initiator):
if self.trackparent:
if (previous is not value and
previous not in (None, PASSIVE_NO_RESULT, NEVER_SET)):
self.sethasparent(instance_state(previous), state, False)
for fn in self.dispatch.set:
value = fn(
state, value, previous, initiator or
self._replace_token or self._init_append_or_replace_token())
state._modified_event(dict_, self, previous)
if self.trackparent:
if value is not None:
self.sethasparent(instance_state(value), state, True)
return value
class CollectionAttributeImpl(AttributeImpl):
"""A collection-holding attribute that instruments changes in membership.
Only handles collections of instrumented objects.
InstrumentedCollectionAttribute holds an arbitrary, user-specified
container object (defaulting to a list) and brokers access to the
CollectionAdapter, a "view" onto that object that presents consistent bag
semantics to the orm layer independent of the user data implementation.
"""
accepts_scalar_loader = False
uses_objects = True
supports_population = True
collection = True
__slots__ = (
'copy', 'collection_factory', '_append_token', '_remove_token',
'_duck_typed_as'
)
def __init__(self, class_, key, callable_, dispatch,
typecallable=None, trackparent=False, extension=None,
copy_function=None, compare_function=None, **kwargs):
super(CollectionAttributeImpl, self).__init__(
class_,
key,
callable_, dispatch,
trackparent=trackparent,
extension=extension,
compare_function=compare_function,
**kwargs)
if copy_function is None:
copy_function = self.__copy
self.copy = copy_function
self.collection_factory = typecallable
self._append_token = None
self._remove_token = None
self._duck_typed_as = util.duck_type_collection(
self.collection_factory())
if getattr(self.collection_factory, "_sa_linker", None):
@event.listens_for(self, "init_collection")
def link(target, collection, collection_adapter):
collection._sa_linker(collection_adapter)
@event.listens_for(self, "dispose_collection")
def unlink(target, collection, collection_adapter):
collection._sa_linker(None)
def _init_append_token(self):
self._append_token = Event(self, OP_APPEND)
return self._append_token
def _init_remove_token(self):
self._remove_token = Event(self, OP_REMOVE)
return self._remove_token
def __copy(self, item):
return [y for y in collections.collection_adapter(item)]
def get_history(self, state, dict_, passive=PASSIVE_OFF):
current = self.get(state, dict_, passive=passive)
if current is PASSIVE_NO_RESULT:
return HISTORY_BLANK
else:
return History.from_collection(self, state, current)
def get_all_pending(self, state, dict_, passive=PASSIVE_NO_INITIALIZE):
# NOTE: passive is ignored here at the moment
if self.key not in dict_:
return []
current = dict_[self.key]
current = getattr(current, '_sa_adapter')
if self.key in state.committed_state:
original = state.committed_state[self.key]
if original not in (NO_VALUE, NEVER_SET):
current_states = [((c is not None) and
instance_state(c) or None, c)
for c in current]
original_states = [((c is not None) and
instance_state(c) or None, c)
for c in original]
current_set = dict(current_states)
original_set = dict(original_states)
return \
[(s, o) for s, o in current_states
if s not in original_set] + \
[(s, o) for s, o in current_states
if s in original_set] + \
[(s, o) for s, o in original_states
if s not in current_set]
return [(instance_state(o), o) for o in current]
def fire_append_event(self, state, dict_, value, initiator):
for fn in self.dispatch.append:
value = fn(
state, value,
initiator or self._append_token or self._init_append_token())
state._modified_event(dict_, self, NEVER_SET, True)
if self.trackparent and value is not None:
self.sethasparent(instance_state(value), state, True)
return value
def fire_pre_remove_event(self, state, dict_, initiator):
state._modified_event(dict_, self, NEVER_SET, True)
def fire_remove_event(self, state, dict_, value, initiator):
if self.trackparent and value is not None:
self.sethasparent(instance_state(value), state, False)
for fn in self.dispatch.remove:
fn(state, value,
initiator or self._remove_token or self._init_remove_token())
state._modified_event(dict_, self, NEVER_SET, True)
def delete(self, state, dict_):
if self.key not in dict_:
return
state._modified_event(dict_, self, NEVER_SET, True)
collection = self.get_collection(state, state.dict)
collection.clear_with_event()
# TODO: catch key errors, convert to attributeerror?
del dict_[self.key]
def initialize(self, state, dict_):
"""Initialize this attribute with an empty collection."""
_, user_data = self._initialize_collection(state)
dict_[self.key] = user_data
return user_data
def _initialize_collection(self, state):
adapter, collection = state.manager.initialize_collection(
self.key, state, self.collection_factory)
self.dispatch.init_collection(state, collection, adapter)
return adapter, collection
def append(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
collection = self.get_collection(state, dict_, passive=passive)
if collection is PASSIVE_NO_RESULT:
value = self.fire_append_event(state, dict_, value, initiator)
assert self.key not in dict_, \
"Collection was loaded during event handling."
state._get_pending_mutation(self.key).append(value)
else:
collection.append_with_event(value, initiator)
def remove(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
collection = self.get_collection(state, state.dict, passive=passive)
if collection is PASSIVE_NO_RESULT:
self.fire_remove_event(state, dict_, value, initiator)
assert self.key not in dict_, \
"Collection was loaded during event handling."
state._get_pending_mutation(self.key).remove(value)
else:
collection.remove_with_event(value, initiator)
def pop(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
try:
# TODO: better solution here would be to add
# a "popper" role to collections.py to complement
# "remover".
self.remove(state, dict_, value, initiator, passive=passive)
except (ValueError, KeyError, IndexError):
pass
def set(self, state, dict_, value, initiator=None,
passive=PASSIVE_OFF, pop=False, _adapt=True):
iterable = orig_iterable = value
# pulling a new collection first so that an adaptation exception does
# not trigger a lazy load of the old collection.
new_collection, user_data = self._initialize_collection(state)
if _adapt:
if new_collection._converter is not None:
iterable = new_collection._converter(iterable)
else:
setting_type = util.duck_type_collection(iterable)
receiving_type = self._duck_typed_as
if setting_type is not receiving_type:
given = iterable is None and 'None' or \
iterable.__class__.__name__
wanted = self._duck_typed_as.__name__
raise TypeError(
"Incompatible collection type: %s is not %s-like" % (
given, wanted))
# If the object is an adapted collection, return the (iterable)
# adapter.
if hasattr(iterable, '_sa_iterator'):
iterable = iterable._sa_iterator()
elif setting_type is dict:
if util.py3k:
iterable = iterable.values()
else:
iterable = getattr(
iterable, 'itervalues', iterable.values)()
else:
iterable = iter(iterable)
new_values = list(iterable)
old = self.get(state, dict_, passive=PASSIVE_ONLY_PERSISTENT)
if old is PASSIVE_NO_RESULT:
old = self.initialize(state, dict_)
elif old is orig_iterable:
# ignore re-assignment of the current collection, as happens
# implicitly with in-place operators (foo.collection |= other)
return
# place a copy of "old" in state.committed_state
state._modified_event(dict_, self, old, True)
old_collection = old._sa_adapter
dict_[self.key] = user_data
collections.bulk_replace(
new_values, old_collection, new_collection)
del old._sa_adapter
self.dispatch.dispose_collection(state, old, old_collection)
def _invalidate_collection(self, collection):
adapter = getattr(collection, '_sa_adapter')
adapter.invalidated = True
def set_committed_value(self, state, dict_, value):
"""Set an attribute value on the given instance and 'commit' it."""
collection, user_data = self._initialize_collection(state)
if value:
collection.append_multiple_without_event(value)
state.dict[self.key] = user_data
state._commit(dict_, [self.key])
if self.key in state._pending_mutations:
# pending items exist. issue a modified event,
# add/remove new items.
state._modified_event(dict_, self, user_data, True)
pending = state._pending_mutations.pop(self.key)
added = pending.added_items
removed = pending.deleted_items
for item in added:
collection.append_without_event(item)
for item in removed:
collection.remove_without_event(item)
return user_data
def get_collection(self, state, dict_,
user_data=None, passive=PASSIVE_OFF):
"""Retrieve the CollectionAdapter associated with the given state.
Creates a new CollectionAdapter if one does not exist.
"""
if user_data is None:
user_data = self.get(state, dict_, passive=passive)
if user_data is PASSIVE_NO_RESULT:
return user_data
return getattr(user_data, '_sa_adapter')
def backref_listeners(attribute, key, uselist):
"""Apply listeners to synchronize a two-way relationship."""
# use easily recognizable names for stack traces
parent_token = attribute.impl.parent_token
parent_impl = attribute.impl
def _acceptable_key_err(child_state, initiator, child_impl):
raise ValueError(
"Bidirectional attribute conflict detected: "
'Passing object %s to attribute "%s" '
'triggers a modify event on attribute "%s" '
'via the backref "%s".' % (
state_str(child_state),
initiator.parent_token,
child_impl.parent_token,
attribute.impl.parent_token
)
)
def emit_backref_from_scalar_set_event(state, child, oldchild, initiator):
if oldchild is child:
return child
if oldchild is not None and \
oldchild is not PASSIVE_NO_RESULT and \
oldchild is not NEVER_SET:
# With lazy=None, there's no guarantee that the full collection is
# present when updating via a backref.
old_state, old_dict = instance_state(oldchild),\
instance_dict(oldchild)
impl = old_state.manager[key].impl
if initiator.impl is not impl or \
initiator.op not in (OP_REPLACE, OP_REMOVE):
impl.pop(old_state,
old_dict,
state.obj(),
parent_impl._append_token or
parent_impl._init_append_token(),
passive=PASSIVE_NO_FETCH)
if child is not None:
child_state, child_dict = instance_state(child),\
instance_dict(child)
child_impl = child_state.manager[key].impl
if initiator.parent_token is not parent_token and \
initiator.parent_token is not child_impl.parent_token:
_acceptable_key_err(state, initiator, child_impl)
elif initiator.impl is not child_impl or \
initiator.op not in (OP_APPEND, OP_REPLACE):
child_impl.append(
child_state,
child_dict,
state.obj(),
initiator,
passive=PASSIVE_NO_FETCH)
return child
def emit_backref_from_collection_append_event(state, child, initiator):
if child is None:
return
child_state, child_dict = instance_state(child), \
instance_dict(child)
child_impl = child_state.manager[key].impl
if initiator.parent_token is not parent_token and \
initiator.parent_token is not child_impl.parent_token:
_acceptable_key_err(state, initiator, child_impl)
elif initiator.impl is not child_impl or \
initiator.op not in (OP_APPEND, OP_REPLACE):
child_impl.append(
child_state,
child_dict,
state.obj(),
initiator,
passive=PASSIVE_NO_FETCH)
return child
def emit_backref_from_collection_remove_event(state, child, initiator):
if child is not None:
child_state, child_dict = instance_state(child),\
instance_dict(child)
child_impl = child_state.manager[key].impl
if initiator.impl is not child_impl or \
initiator.op not in (OP_REMOVE, OP_REPLACE):
child_impl.pop(
child_state,
child_dict,
state.obj(),
initiator,
passive=PASSIVE_NO_FETCH)
if uselist:
event.listen(attribute, "append",
emit_backref_from_collection_append_event,
retval=True, raw=True)
else:
event.listen(attribute, "set",
emit_backref_from_scalar_set_event,
retval=True, raw=True)
# TODO: need coverage in test/orm/ of remove event
event.listen(attribute, "remove",
emit_backref_from_collection_remove_event,
retval=True, raw=True)
_NO_HISTORY = util.symbol('NO_HISTORY')
_NO_STATE_SYMBOLS = frozenset([
id(PASSIVE_NO_RESULT),
id(NO_VALUE),
id(NEVER_SET)])
History = util.namedtuple("History", [
"added", "unchanged", "deleted"
])
class History(History):
"""A 3-tuple of added, unchanged and deleted values,
representing the changes which have occurred on an instrumented
attribute.
The easiest way to get a :class:`.History` object for a particular
attribute on an object is to use the :func:`.inspect` function::
from sqlalchemy import inspect
hist = inspect(myobject).attrs.myattribute.history
Each tuple member is an iterable sequence:
* ``added`` - the collection of items added to the attribute (the first
tuple element).
* ``unchanged`` - the collection of items that have not changed on the
attribute (the second tuple element).
* ``deleted`` - the collection of items that have been removed from the
attribute (the third tuple element).
"""
def __bool__(self):
return self != HISTORY_BLANK
__nonzero__ = __bool__
def empty(self):
"""Return True if this :class:`.History` has no changes
and no existing, unchanged state.
"""
return not bool(
(self.added or self.deleted)
or self.unchanged
)
def sum(self):
"""Return a collection of added + unchanged + deleted."""
return (self.added or []) +\
(self.unchanged or []) +\
(self.deleted or [])
def non_deleted(self):
"""Return a collection of added + unchanged."""
return (self.added or []) +\
(self.unchanged or [])
def non_added(self):
"""Return a collection of unchanged + deleted."""
return (self.unchanged or []) +\
(self.deleted or [])
def has_changes(self):
"""Return True if this :class:`.History` has changes."""
return bool(self.added or self.deleted)
def as_state(self):
return History(
[(c is not None)
and instance_state(c) or None
for c in self.added],
[(c is not None)
and instance_state(c) or None
for c in self.unchanged],
[(c is not None)
and instance_state(c) or None
for c in self.deleted],
)
@classmethod
def from_scalar_attribute(cls, attribute, state, current):
original = state.committed_state.get(attribute.key, _NO_HISTORY)
if original is _NO_HISTORY:
if current is NEVER_SET:
return cls((), (), ())
else:
return cls((), [current], ())
# don't let ClauseElement expressions here trip things up
elif attribute.is_equal(current, original) is True:
return cls((), [current], ())
else:
# current convention on native scalars is to not
# include information
# about missing previous value in "deleted", but
# we do include None, which helps in some primary
# key situations
if id(original) in _NO_STATE_SYMBOLS:
deleted = ()
else:
deleted = [original]
if current is NEVER_SET:
return cls((), (), deleted)
else:
return cls([current], (), deleted)
@classmethod
def from_object_attribute(cls, attribute, state, current):
original = state.committed_state.get(attribute.key, _NO_HISTORY)
if original is _NO_HISTORY:
if current is NO_VALUE or current is NEVER_SET:
return cls((), (), ())
else:
return cls((), [current], ())
elif current is original:
return cls((), [current], ())
else:
# current convention on related objects is to not
# include information
# about missing previous value in "deleted", and
# to also not include None - the dependency.py rules
# ignore the None in any case.
if id(original) in _NO_STATE_SYMBOLS or original is None:
deleted = ()
else:
deleted = [original]
if current is NO_VALUE or current is NEVER_SET:
return cls((), (), deleted)
else:
return cls([current], (), deleted)
@classmethod
def from_collection(cls, attribute, state, current):
original = state.committed_state.get(attribute.key, _NO_HISTORY)
if current is NO_VALUE or current is NEVER_SET:
return cls((), (), ())
current = getattr(current, '_sa_adapter')
if original in (NO_VALUE, NEVER_SET):
return cls(list(current), (), ())
elif original is _NO_HISTORY:
return cls((), list(current), ())
else:
current_states = [((c is not None) and instance_state(c)
or None, c)
for c in current
]
original_states = [((c is not None) and instance_state(c)
or None, c)
for c in original
]
current_set = dict(current_states)
original_set = dict(original_states)
return cls(
[o for s, o in current_states if s not in original_set],
[o for s, o in current_states if s in original_set],
[o for s, o in original_states if s not in current_set]
)
HISTORY_BLANK = History(None, None, None)
def get_history(obj, key, passive=PASSIVE_OFF):
"""Return a :class:`.History` record for the given object
and attribute key.
:param obj: an object whose class is instrumented by the
attributes package.
:param key: string attribute name.
:param passive: indicates loading behavior for the attribute
if the value is not already present. This is a
bitflag attribute, which defaults to the symbol
:attr:`.PASSIVE_OFF` indicating all necessary SQL
should be emitted.
"""
if passive is True:
util.warn_deprecated("Passing True for 'passive' is deprecated. "
"Use attributes.PASSIVE_NO_INITIALIZE")
passive = PASSIVE_NO_INITIALIZE
elif passive is False:
util.warn_deprecated("Passing False for 'passive' is "
"deprecated. Use attributes.PASSIVE_OFF")
passive = PASSIVE_OFF
return get_state_history(instance_state(obj), key, passive)
def get_state_history(state, key, passive=PASSIVE_OFF):
return state.get_history(key, passive)
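# Usage sketch (hedged; ``User`` and its ``name`` attribute are assumed,
# hypothetical mapped names, not part of this module):
#
#   user = session.query(User).first()
#   user.name = "new name"
#   hist = get_history(user, 'name')
#   # hist.added -> ['new name']; hist.deleted holds the prior value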
def has_parent(cls, obj, key, optimistic=False):
"""TODO"""
manager = manager_of_class(cls)
state = instance_state(obj)
return manager.has_parent(state, key, optimistic)
def register_attribute(class_, key, **kw):
comparator = kw.pop('comparator', None)
parententity = kw.pop('parententity', None)
doc = kw.pop('doc', None)
desc = register_descriptor(class_, key,
comparator, parententity, doc=doc)
register_attribute_impl(class_, key, **kw)
return desc
def register_attribute_impl(class_, key,
uselist=False, callable_=None,
useobject=False,
impl_class=None, backref=None, **kw):
manager = manager_of_class(class_)
if uselist:
factory = kw.pop('typecallable', None)
typecallable = manager.instrument_collection_class(
key, factory or list)
else:
typecallable = kw.pop('typecallable', None)
dispatch = manager[key].dispatch
if impl_class:
impl = impl_class(class_, key, typecallable, dispatch, **kw)
elif uselist:
impl = CollectionAttributeImpl(class_, key, callable_, dispatch,
typecallable=typecallable, **kw)
elif useobject:
impl = ScalarObjectAttributeImpl(class_, key, callable_,
dispatch, **kw)
else:
impl = ScalarAttributeImpl(class_, key, callable_, dispatch, **kw)
manager[key].impl = impl
if backref:
backref_listeners(manager[key], backref, uselist)
manager.post_configure_attribute(key)
return manager[key]
def register_descriptor(class_, key, comparator=None,
parententity=None, doc=None):
manager = manager_of_class(class_)
descriptor = InstrumentedAttribute(class_, key, comparator=comparator,
parententity=parententity)
descriptor.__doc__ = doc
manager.instrument_attribute(key, descriptor)
return descriptor
def unregister_attribute(class_, key):
manager_of_class(class_).uninstrument_attribute(key)
def init_collection(obj, key):
"""Initialize a collection attribute and return the collection adapter.
This function is used to provide direct access to collection internals
for a previously unloaded attribute. e.g.::
collection_adapter = init_collection(someobject, 'elements')
for elem in values:
collection_adapter.append_without_event(elem)
For an easier way to do the above, see
:func:`~sqlalchemy.orm.attributes.set_committed_value`.
obj is an instrumented object instance. An InstanceState
is accepted directly for backwards compatibility but
this usage is deprecated.
"""
state = instance_state(obj)
dict_ = state.dict
return init_state_collection(state, dict_, key)
def init_state_collection(state, dict_, key):
"""Initialize a collection attribute and return the collection adapter."""
attr = state.manager[key].impl
user_data = attr.initialize(state, dict_)
return attr.get_collection(state, dict_, user_data)
def set_committed_value(instance, key, value):
"""Set the value of an attribute with no history events.
Cancels any previous history present. The value should be
a scalar value for scalar-holding attributes, or
an iterable for any collection-holding attribute.
This is the same underlying method used when a lazy loader
fires off and loads additional data from the database.
In particular, this method can be used by application code
which has loaded additional attributes or collections through
separate queries, which can then be attached to an instance
as though it were part of its original loaded state.
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.manager[key].impl.set_committed_value(state, dict_, value)
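# Usage sketch (hedged; ``user`` and ``Address`` are hypothetical mapped
# names): attach separately-queried rows as if they were lazy-loaded,
# without creating history:
#
#   addresses = session.query(Address).filter_by(user_id=user.id).all()
#   set_committed_value(user, 'addresses', addresses)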
def set_attribute(instance, key, value):
"""Set the value of an attribute, firing history events.
This function may be used regardless of instrumentation
applied directly to the class, i.e. no descriptors are required.
Custom attribute management schemes will need to use
this method to establish attribute state as understood
by SQLAlchemy.
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.manager[key].impl.set(state, dict_, value, None)
def get_attribute(instance, key):
"""Get the value of an attribute, firing any callables required.
This function may be used regardless of instrumentation
applied directly to the class, i.e. no descriptors are required.
Custom attribute management schemes will need to use
this method to access attribute state as understood
by SQLAlchemy.
"""
state, dict_ = instance_state(instance), instance_dict(instance)
return state.manager[key].impl.get(state, dict_)
def del_attribute(instance, key):
"""Delete the value of an attribute, firing history events.
This function may be used regardless of instrumentation
applied directly to the class, i.e. no descriptors are required.
Custom attribute management schemes will need to use
this method to establish attribute state as understood
by SQLAlchemy.
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.manager[key].impl.delete(state, dict_)
def flag_modified(instance, key):
"""Mark an attribute on an instance as 'modified'.
This sets the 'modified' flag on the instance and
establishes an unconditional change event for the given attribute.
"""
state, dict_ = instance_state(instance), instance_dict(instance)
impl = state.manager[key].impl
state._modified_event(dict_, impl, NO_VALUE, force=True)
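# Usage sketch (hedged; ``user`` and its mutable ``data`` attribute are
# hypothetical): force an in-place mutation into the next flush:
#
#   user.data['flag'] = True     # mutation not seen by instrumentation
#   flag_modified(user, 'data')  # establishes an unconditional change event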
| mit |
ivanhorvath/openshift-tools | ansible/roles/lib_git/build/ansible/git_clone.py | 13 | 1587 | # pylint: skip-file
def main():
'''
ansible git module for cloning
'''
module = AnsibleModule(
argument_spec=dict(
state=dict(default='present', type='str', choices=['present']),
dest=dict(default=None, required=True, type='str'),
repo=dict(default=None, required=True, type='str'),
branch=dict(default=None, required=False, type='str'),
bare=dict(default=False, required=False, type='bool'),
ssh_key=dict(default=None, required=False, type='str'),
),
supports_check_mode=False,
)
git = GitClone(module.params['dest'],
module.params['repo'],
module.params['branch'],
module.params['bare'],
module.params['ssh_key'])
state = module.params['state']
if state == 'present':
results = git.clone()
if results['returncode'] != 0:
module.fail_json(msg=results)
if results['no_clone_needed']:
module.exit_json(changed=False, results=results, state="present")
module.exit_json(changed=True, results=results, state="present")
module.exit_json(failed=True,
changed=False,
results='Unknown state passed. %s' % state,
state="unknown")
# pylint: disable=redefined-builtin, unused-wildcard-import, wildcard-import, locally-disabled
# import module snippets. These are required
if __name__ == '__main__':
from ansible.module_utils.basic import *
main()
| apache-2.0 |
flowersteam/naminggamesal | naminggamesal/ngmeth_utils/zipf_utils.py | 1 | 2917 | import copy
import random
from scipy.optimize import curve_fit
from scipy.special import zetac
def zipf_rank_freq(x, a):
return 1./((x**a)*(1.+zetac(a)))
def fit_zipf(distrib):
distrib_2 = copy.deepcopy(distrib)
distrib_2.sort(reverse=True)
exponent,zipferror = curve_fit(zipf_rank_freq, xdata=range(1,len(distrib_2)+1), ydata=distrib_2)
return exponent[0],zipferror[0]
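# Usage sketch (illustrative values): fit the exponent of a normalized
# frequency list.
#   freqs = [1./r for r in range(1, 101)]
#   total = sum(freqs)
#   exponent, err = fit_zipf([f/total for f in freqs])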
def zipfpick(agent=None,pop=None):
if agent is not None and pop is not None:
raise ValueError('agent and pop args should not be used at the same time')
elif agent is not None:
ms = agent.pick_m()
return ms,agent.pick_w(m=ms)
elif pop is not None:
ag = random.choice(pop._agentlist)
ms = ag.pick_m()
return ms,ag.pick_w(m=ms)
else:
raise ValueError('pop or agent args should be defined')
def zipf_current(agent=None,pop=None,option='w'):
if agent is not None and pop is not None:
raise ValueError('agent and pop args should not be used at the same time')
elif agent is not None:
nbiter = agent._vocabulary.get_W()*100
elif pop is not None:
nbiter = pop._agentlist[0]._vocabulary.get_W()*100
else:
raise ValueError('pop or agent args should be defined')
distrib = {'m':{},'w':{}}
delta = 1./nbiter
for i in range(nbiter):
ms,w = zipfpick(agent=agent,pop=pop)
if ms not in list(distrib['m'].keys()):
distrib['m'][ms] = delta
else:
distrib['m'][ms] += delta
if w not in list(distrib['w'].keys()):
distrib['w'][w] = delta
else:
distrib['w'][w] += delta
distrib_m = list(distrib['m'].values())
distrib_w = list(distrib['w'].values())
if option == 'm':
return fit_zipf(distrib_m)
elif option == 'w':
return fit_zipf(distrib_w)
def zipf_past(agent=None,pop=None,option='w'):
if agent is not None and pop is not None:
raise ValueError('agent and pop args should not be used at the same time')
elif agent is not None:
#iterlist = agent._memory
raise NotImplementedError
elif pop is not None:
templist = pop._past
iterlist = [(p[0],p[1]) for p in templist]
else:
raise ValueError('pop or agent args should be defined')
distrib = {'m':{},'w':{}}
delta = 1./len(iterlist)
for elt in iterlist:
ms,w = elt[0],elt[1]
if ms not in list(distrib['m'].keys()):
distrib['m'][ms] = delta
else:
distrib['m'][ms] += delta
if w not in list(distrib['w'].keys()):
distrib['w'][w] = delta
else:
distrib['w'][w] += delta
distrib_m = list(distrib['m'].values())
distrib_w = list(distrib['w'].values())
if option == 'm':
return fit_zipf(distrib_m)
elif option == 'w':
return fit_zipf(distrib_w)
#zipf_current(agent=agent,option='m')[0]
#zipf_current(agent=agent,option='m')[1]
#zipf_current(agent=agent,option='w')[0]
#zipf_current(agent=agent,option='w')[1]
#zipf_current(pop=pop,option='m')[0]
#zipf_current(pop=pop,option='m')[1]
#zipf_current(pop=pop,option='w')[0]
#zipf_current(pop=pop,option='w')[1]
| agpl-3.0 |
instantchow/home-assistant | tests/components/switch/test_mfi.py | 11 | 3778 | """The tests for the mFi switch platform."""
import unittest
import unittest.mock as mock
import homeassistant.components.switch as switch
import homeassistant.components.switch.mfi as mfi
from tests.components.sensor import test_mfi as test_mfi_sensor
from tests.common import get_test_home_assistant
class TestMfiSwitchSetup(test_mfi_sensor.TestMfiSensorSetup):
"""Test the mFi switch."""
PLATFORM = mfi
COMPONENT = switch
THING = 'switch'
GOOD_CONFIG = {
'switch': {
'platform': 'mfi',
'host': 'foo',
'port': 6123,
'username': 'user',
'password': 'pass',
}
}
@mock.patch('mficlient.client.MFiClient')
@mock.patch('homeassistant.components.switch.mfi.MfiSwitch')
def test_setup_adds_proper_devices(self, mock_switch, mock_client):
"""Test if setup adds devices."""
ports = {i: mock.MagicMock(model=model)
for i, model in enumerate(mfi.SWITCH_MODELS)}
ports['bad'] = mock.MagicMock(model='notaswitch')
print(ports['bad'].model)
mock_client.return_value.get_devices.return_value = \
[mock.MagicMock(ports=ports)]
assert self.COMPONENT.setup(self.hass, self.GOOD_CONFIG)
for ident, port in ports.items():
if ident != 'bad':
mock_switch.assert_any_call(port)
assert mock.call(ports['bad'], self.hass) not in mock_switch.mock_calls
class TestMfiSwitch(unittest.TestCase):
"""Test for mFi switch platform."""
def setup_method(self, method):
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.hass.config.latitude = 32.87336
self.hass.config.longitude = 117.22743
self.port = mock.MagicMock()
self.switch = mfi.MfiSwitch(self.port)
def teardown_method(self, method):
"""Stop everything that was started."""
self.hass.stop()
def test_name(self):
"""Test the name."""
self.assertEqual(self.port.label, self.switch.name)
def test_update(self):
"""Test update."""
self.switch.update()
self.port.refresh.assert_called_once_with()
def test_update_with_target_state(self):
"""Test update with target state."""
self.switch._target_state = True
self.port.data = {}
self.port.data['output'] = 'stale'
self.switch.update()
self.assertEqual(1.0, self.port.data['output'])
self.assertEqual(None, self.switch._target_state)
self.port.data['output'] = 'untouched'
self.switch.update()
self.assertEqual('untouched', self.port.data['output'])
def test_turn_on(self):
"""Test turn_on."""
self.switch.turn_on()
self.port.control.assert_called_once_with(True)
self.assertTrue(self.switch._target_state)
def test_turn_off(self):
"""Test turn_off."""
self.switch.turn_off()
self.port.control.assert_called_once_with(False)
self.assertFalse(self.switch._target_state)
def test_current_power_mwh(self):
"""Test current power."""
self.port.data = {'active_pwr': 1}
self.assertEqual(1000, self.switch.current_power_mwh)
def test_current_power_mwh_no_data(self):
"""Test current power if there is no data."""
self.port.data = {'notpower': 123}
self.assertEqual(0, self.switch.current_power_mwh)
def test_device_state_attributes(self):
"""Test the state attributes."""
self.port.data = {'v_rms': 1.25,
'i_rms': 2.75}
self.assertEqual({'volts': 1.2, 'amps': 2.8},
self.switch.device_state_attributes)
| mit |
40123248/2015cd_midterm2 | static/Brython3.1.0-20150301-090019/Lib/multiprocessing/__init__.py | 693 | 6866 | #
# Package analogous to 'threading.py' but using processes
#
# multiprocessing/__init__.py
#
# This package is intended to duplicate the functionality (and much of
# the API) of threading.py but uses processes instead of threads. A
# subpackage 'multiprocessing.dummy' has the same API but is a simple
# wrapper for 'threading'.
#
# Try calling `multiprocessing.doc.main()` to read the html
# documentation in a webbrowser.
#
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
__version__ = '0.70a1'
__all__ = [
'Process', 'current_process', 'active_children', 'freeze_support',
'Manager', 'Pipe', 'cpu_count', 'log_to_stderr', 'get_logger',
'allow_connection_pickling', 'BufferTooShort', 'TimeoutError',
'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Condition',
'Event', 'Barrier', 'Queue', 'SimpleQueue', 'JoinableQueue', 'Pool',
'Value', 'Array', 'RawValue', 'RawArray', 'SUBDEBUG', 'SUBWARNING',
]
__author__ = 'R. Oudkerk (r.m.oudkerk@gmail.com)'
#
# Imports
#
import os
import sys
from multiprocessing.process import Process, current_process, active_children
from multiprocessing.util import SUBDEBUG, SUBWARNING
#
# Exceptions
#
class ProcessError(Exception):
pass
class BufferTooShort(ProcessError):
pass
class TimeoutError(ProcessError):
pass
class AuthenticationError(ProcessError):
pass
import _multiprocessing
#
# Definitions not depending on native semaphores
#
def Manager():
'''
Returns a manager associated with a running server process
The managers methods such as `Lock()`, `Condition()` and `Queue()`
can be used to create shared objects.
'''
from multiprocessing.managers import SyncManager
m = SyncManager()
m.start()
return m
#brython fix me
#def Pipe(duplex=True):
# '''
# Returns two connection object connected by a pipe
# '''
# from multiprocessing.connection import Pipe
# return Pipe(duplex)
def cpu_count():
'''
Returns the number of CPUs in the system
'''
if sys.platform == 'win32':
try:
num = int(os.environ['NUMBER_OF_PROCESSORS'])
except (ValueError, KeyError):
num = 0
elif 'bsd' in sys.platform or sys.platform == 'darwin':
comm = '/sbin/sysctl -n hw.ncpu'
if sys.platform == 'darwin':
comm = '/usr' + comm
try:
with os.popen(comm) as p:
num = int(p.read())
except ValueError:
num = 0
else:
try:
num = os.sysconf('SC_NPROCESSORS_ONLN')
except (ValueError, OSError, AttributeError):
num = 0
if num >= 1:
return num
else:
raise NotImplementedError('cannot determine number of cpus')
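# Usage sketch: callers should be prepared for the NotImplementedError
# raised when no probe above succeeds.
#   try:
#       ncpu = cpu_count()
#   except NotImplementedError:
#       ncpu = 1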
def freeze_support():
'''
Check whether this is a fake forked process in a frozen executable.
If so then run code specified by commandline and exit.
'''
if sys.platform == 'win32' and getattr(sys, 'frozen', False):
from multiprocessing.forking import freeze_support
freeze_support()
def get_logger():
'''
Return package logger -- if it does not already exist then it is created
'''
from multiprocessing.util import get_logger
return get_logger()
def log_to_stderr(level=None):
'''
Turn on logging and add a handler which prints to stderr
'''
from multiprocessing.util import log_to_stderr
return log_to_stderr(level)
#brython fix me
#def allow_connection_pickling():
# '''
# Install support for sending connections and sockets between processes
# '''
# # This is undocumented. In previous versions of multiprocessing
# # its only effect was to make socket objects inheritable on Windows.
# import multiprocessing.connection
#
# Definitions depending on native semaphores
#
def Lock():
'''
Returns a non-recursive lock object
'''
from multiprocessing.synchronize import Lock
return Lock()
def RLock():
'''
Returns a recursive lock object
'''
from multiprocessing.synchronize import RLock
return RLock()
def Condition(lock=None):
'''
Returns a condition object
'''
from multiprocessing.synchronize import Condition
return Condition(lock)
def Semaphore(value=1):
'''
Returns a semaphore object
'''
from multiprocessing.synchronize import Semaphore
return Semaphore(value)
def BoundedSemaphore(value=1):
'''
Returns a bounded semaphore object
'''
from multiprocessing.synchronize import BoundedSemaphore
return BoundedSemaphore(value)
def Event():
'''
Returns an event object
'''
from multiprocessing.synchronize import Event
return Event()
def Barrier(parties, action=None, timeout=None):
'''
Returns a barrier object
'''
from multiprocessing.synchronize import Barrier
return Barrier(parties, action, timeout)
def Queue(maxsize=0):
'''
Returns a queue object
'''
from multiprocessing.queues import Queue
return Queue(maxsize)
def JoinableQueue(maxsize=0):
'''
Returns a queue object
'''
from multiprocessing.queues import JoinableQueue
return JoinableQueue(maxsize)
def SimpleQueue():
'''
Returns a queue object
'''
from multiprocessing.queues import SimpleQueue
return SimpleQueue()
def Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None):
'''
Returns a process pool object
'''
from multiprocessing.pool import Pool
return Pool(processes, initializer, initargs, maxtasksperchild)
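# Usage sketch (CPython semantics; behavior under this Brython port may
# differ):
#   pool = Pool(processes=2)
#   print(pool.map(abs, [-1, -2, 3]))   # [1, 2, 3]
#   pool.close()
#   pool.join()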
def RawValue(typecode_or_type, *args):
'''
Returns a shared object
'''
from multiprocessing.sharedctypes import RawValue
return RawValue(typecode_or_type, *args)
def RawArray(typecode_or_type, size_or_initializer):
'''
Returns a shared array
'''
from multiprocessing.sharedctypes import RawArray
return RawArray(typecode_or_type, size_or_initializer)
def Value(typecode_or_type, *args, lock=True):
'''
Returns a synchronized shared object
'''
from multiprocessing.sharedctypes import Value
return Value(typecode_or_type, *args, lock=lock)
def Array(typecode_or_type, size_or_initializer, *, lock=True):
'''
Returns a synchronized shared array
'''
from multiprocessing.sharedctypes import Array
return Array(typecode_or_type, size_or_initializer, lock=lock)
#
#
#
if sys.platform == 'win32':
def set_executable(executable):
'''
Sets the path to a python.exe or pythonw.exe binary used to run
child processes on Windows instead of sys.executable.
Useful for people embedding Python.
'''
from multiprocessing.forking import set_executable
set_executable(executable)
__all__ += ['set_executable']
| gpl-3.0 |
oVirt/ovirt-node | src/ovirt/node/ui/__init__.py | 1 | 33652 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# __init__.py - Copyright (C) 2012 Red Hat, Inc.
# Written by Fabian Deutsch <fabiand@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA. A copy of the GNU General Public License is
# also available at http://www.gnu.org/copyleft/gpl.html.
from ovirt.node import base
from ovirt.node.utils import console, security
from ovirt.node.exceptions import InvalidData
import threading
"""
This contains abstract UI Elements
"""
# http://stackoverflow.com/questions/739654/understanding-python-decorators
class Element(base.Base):
"""An abstract UI Element.
This basically provides signals to communicate between the real UI
widgets and the plugins.
Args:
path: The model path this item is mapped to
on_value_change: Emitted by the value() method if the value changes
on_exception: Signal to be called when an exception is raised during a
callback
"""
path = None
on_value_change = None
on_exception = None
def __init__(self, path=None):
"""Registers all widget signals.
All signals must be given in self.signals
"""
super(Element, self).__init__()
self.path = path
self.on_value_change = self.new_signal()
self.on_notice_change = self.new_signal()
self.logger.debug("Initializing %s" % self)
def value(self, value=None):
"""A general way to set the "value" of a widget
Can be a text or selection, ...
"""
raise NotImplementedError
def elements(self):
return [self]
def notice(self, txt):
"""Protoype command to show a notice associated with this element
"""
self.on_notice_change(txt)
def __repr__(self):
return "<%s path='%s' at %s>" % (self.__class__.__name__, self.path,
hex(id(self)))
class InputElement(Element):
"""An abstract UI Element for user input
Args:
on_change: To be called by the consumer when the associated widget
changed
on_enabled_change: Called by the Element when enabled changes
on_valid_change: Called by the Element when validity changes
"""
on_change = None
on_label_change = None
on_enabled_change = None
on_valid_change = None
def __init__(self, path, label, is_enabled):
super(InputElement, self).__init__(path)
self.on_label_change = self.new_signal()
self.on_enabled_change = self.new_signal()
self.on_change = self.new_signal()
self.on_valid_change = self.new_signal()
self.label(label)
self.enabled(is_enabled)
self.text("")
self.valid(True)
self.on_change.connect(ChangeAction())
def enabled(self, is_enabled=None):
"""Enable or disable the widget wrt user input
"""
if is_enabled in [True, False]:
self.on_enabled_change(is_enabled)
self._enabled = is_enabled
return self._enabled
def valid(self, is_valid=None):
"""Get or set the validity of this element.
If a reason is given show it as a notice.
"""
if is_valid in [True, False]:
self.on_valid_change(is_valid)
self._valid = is_valid
return self._valid
def _validates(self):
"""Validate the value of this widget against the validator
This funcion is mainly needed to implement widget specific
validation methods
"""
pass
def text(self, text=None):
"""Get or set the textual value
"""
if text is not None:
self.on_value_change(text)
self._text = text
return self._text
def label(self, label=None):
"""Can be used to retrieve or change the label
"""
if label is not None:
self.on_label_change(label)
self._label = label
return self._label
def value(self, txt=None):
return self.text(txt)
class ContainerElement(Element):
"""An abstract container Element containing other Elements
"""
children = []
title = None
def __init__(self, path, children, title=None):
super(ContainerElement, self).__init__(path)
self.children = children
self.title = title
def elements(self):
"""Retrieve all Elements in this Element-Tree in a flat dict
Returns:
dict of mapping (path, element)
"""
elements = [self]
for element in self.children:
elements += element.elements()
return elements
def value(self, dummy):
"""A container doesn't have a single value, therefor None
"""
return NotImplementedError
def enabled(self, is_enabled=True):
"""Enable/Disable all children
"""
if is_enabled in [True, False]:
for child in self.children:
child.enabled(is_enabled)
return all(child.enabled() for child in self.children)
def __getitem__(self, path):
return dict((c.path, c) for c in self.children)[path]
class Action(base.Base):
callback = None
def __init__(self, callback=None):
super(Action, self).__init__()
self.callback = callback
def __call__(self, target, userdata=None):
r = None
if self.callback:
self.logger.debug("Calling action %s %s with %s" % (self,
self.callback,
userdata))
r = self.callback(userdata)
self.logger.debug("Action %s called and returned: %s" % (self,
r))
else:
self.logger.warning("No callback for %s" % self)
return r
def __str__(self):
return "<%s '%s'>" % (self.__class__.__name__, self.callback)
class ChangeAction(Action):
"""Action to validate the current change
"""
pass
class SaveAction(Action):
"""Action to save the current page/dialog
"""
pass
class CloseAction(Action):
"""Action to close the current/given dialog
Args:
dialog: The dialog to close
"""
dialog = None
def __init__(self, callback=None, dialog=None):
super(CloseAction, self).__init__(callback)
self.dialog = dialog
class ResetAction(Action):
"""Action to reset all InputElements on the current page/dialog
"""
pass
class ReloadAction(Action):
"""Action to reload the current page/dialog
"""
pass
class QuitAction(Action):
"""Action to quit the application
"""
pass
class Row(ContainerElement):
"""Align elements horizontally in one row
"""
pass
class Label(Element):
"""Represents a r/o label
"""
def __init__(self, path, text):
super(Label, self).__init__(path)
self.text(text)
def text(self, text=None):
if text is not None:
self.on_value_change(text)
self._text = text
return self._text
def value(self, txt=None):
return self.text(txt)
class Notice(Label):
def __init__(self, path, text):
super(Notice, self).__init__(path, text)
class Header(Label):
template = "\n %s\n"
def __init__(self, path, text, template=template):
super(Header, self).__init__(path, text)
self.template = template
class KeywordLabel(Label):
"""A label consisting of a prominent keyword and a value.
E.g.: <b>Networking:</b> Enabled
"""
def __init__(self, path, keyword, text=""):
super(KeywordLabel, self).__init__(path, text)
self.keyword = keyword
class Entry(InputElement):
"""Represents an entry field
TODO multiline
Args:
on_valid_change: Is emitted by this class when the value of valid
changes e.g. when a plugin is changing it.
"""
def __init__(self, path, label, enabled=True, align_vertical=False):
super(Entry, self).__init__(path, label, enabled)
self.align_vertical = align_vertical
class PasswordEntry(Entry):
pass
class ConfirmedEntry(ContainerElement):
"""A container for elements which must be identical
"""
on_change = None
on_valid_change = None
_primary = None
_secondary = None
_changes = None
is_password = False
min_length = 0
_additional_notice = None
def __init__(self, path, label, is_password=False, min_length=0):
self.on_change = self.new_signal()
self.on_valid_change = self.new_signal()
self._changes = {}
children = []
entry_class = PasswordEntry if is_password else Entry
self._primary = entry_class("%s[0]" % path,
label)
self._secondary = entry_class("%s[1]" % path,
"Confirm %s" % label)
self._notice = Notice("%s.notice" % path, "")
children += [self._primary, self._secondary, self._notice]
for child in [self._primary, self._secondary]:
self._changes[child.path] = ""
# Remove all callbacks - so we don't trigger on_change and friends
# We redirect it to call the validation methods of this widget
child.on_change.clear()
child.on_change.connect(self.__on_change)
if is_password:
self.is_password = is_password
self.min_length = min_length
self.on_change.connect(ChangeAction())
super(ConfirmedEntry, self).__init__(path, children)
def _validates(self):
if self.is_password:
self.logger.debug("Doing security check")
msg = ""
pw, pwc = self._values()
try:
msg = security.password_check(pw, pwc,
min_length=self.min_length)
except ValueError as e:
msg = e.message
if msg:
raise InvalidData(msg)
self._additional_notice = msg
def __on_change(self, target, change):
self._additional_notice = ""
self._changes.update(change)
self.on_change({self.path: self.value()})
def _values(self):
return (self._changes[self._primary.path],
self._changes[self._secondary.path])
def value(self, new_value=None):
if new_value is not None:
pass
return self._values()[0]
def valid(self, is_valid=None):
if is_valid in [True, False]:
self._primary.valid(is_valid)
self._secondary.valid(is_valid)
self.on_valid_change(is_valid)
return self._primary.valid()
def notice(self, txt=""):
msg = "\n".join(t for t in [txt, self._additional_notice] if t)
self._notice.text(msg)
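# Usage sketch: a confirmed password field; "credentials.password" is a
# hypothetical model path.
#   pw = ConfirmedEntry("credentials.password", "Password",
#                       is_password=True, min_length=8)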
class Button(InputElement):
"""A button can be used to submit or save the current page/dialog
There are several derivatives which are "shortcuts" to launch a specific
action.
Args:
on_activate: The signal shall be called by the toolkit implementing the
button when the button is "clicked"
"""
on_activate = None
def __init__(self, path, label, enabled=True):
"""Constructor
Args:
path: Path within the model
label: Label of the button
enabled: If the button is enabled (can be clicked)
"""
super(Button, self).__init__(path, label, enabled)
self.text(label)
self.label(label)
self.on_activate = self.new_signal()
self.on_activate.connect(ChangeAction())
self.on_activate.connect(SaveAction())
def value(self, value=None):
self.on_value_change(value)
self.label(value)
class SaveButton(Button):
"""This derived class is primarily needed to allow an easy disabling of the
save button when the changed data is invalid.
"""
def __init__(self, path, label=None, enabled=True):
label = label or _("Save")
super(SaveButton, self).__init__(path, label, enabled)
class ResetButton(Button):
"""This button calls the ResetAction to reset all UI data to the current
model, discarding all pending changes.
"""
def __init__(self, path, label=None, enabled=True):
label = label or _("Reset")
super(ResetButton, self).__init__(path, label, enabled)
self.on_activate.clear()
self.on_activate.connect(ResetAction())
self.on_activate.connect(ReloadAction())
class CloseButton(Button):
"""The close button can be used to close the top-most dialog
"""
def __init__(self, path, label=None, enabled=True):
label = label or _("Close")
super(CloseButton, self).__init__(path, label, enabled)
self.on_activate.clear()
self.on_activate.connect(CloseAction())
class QuitButton(Button):
"""The quit button can be used to quit the whole application
"""
def __init__(self, path, label=None, enabled=True):
label = label or _("Quit")
super(QuitButton, self).__init__(path, label, enabled)
self.on_activate.clear()
self.on_activate.connect(QuitAction())
class Divider(Element):
"""A divider can be used to add some space between UI Elements.
Args:
char: An (optional) char to be used as a separator
"""
def __init__(self, path, char=u" "):
super(Divider, self).__init__(path)
self.char = char
class Options(InputElement):
"""A selection of options
Args:
label: The caption of the options
options: A list of (key, label) tuples to choose from
"""
def __init__(self, path, label, options, selected=None):
super(Options, self).__init__(path, label, True)
self.options = options
self.option(selected or options[0][0])
def option(self, option=None):
if option in dict(self.options).keys():
self.on_value_change(option)
self._option = option
return self._option
def value(self, value=None):
return self.option(value)
class Checkbox(InputElement):
"""A simple Checkbox
Args:
label: Caption of this checkbox
state: The initial state
"""
def __init__(self, path, label, state=False, is_enabled=True):
super(Checkbox, self).__init__(path, label, is_enabled)
self.state(state)
def state(self, s=None):
if s in [True, False]:
self.on_value_change(s)
self._state = s
return self._state
def value(self, value=None):
return self.state(value)
class ProgressBar(Element):
"""A abstract progress bar.
Args:
current: The initial value
done: The maximum value
"""
def __init__(self, path, current=0, done=100):
super(ProgressBar, self).__init__(path)
self.current(current)
self.done = done
def current(self, current=None):
"""Get/Set the current status
Args:
current: New value or None
Returns:
The current progress
"""
if current is not None:
self.on_value_change(current)
self._current = current
return self._current
def value(self, value):
return self.current(value)
class Table(InputElement):
"""Represents a simple Table with one column
"""
_selected = None
on_activate = None
def __init__(self, path, label, header, items, selected_item=None,
height=5, enabled=True, multi=False):
"""Args:
path: The model path to map to
label: An optional label
header: The header above the cells (can be used to name the rows)
items: A list of tuples (key, label) (both str like)
height: The height of the Table
selected_item: The item (key) which shall be selected initially
enabled: Whether the table can be changed or not
multi: Whether we allow multiple items to be selected
"""
super(Table, self).__init__(path, label, enabled)
self.header = header
if type(items) in [str, tuple, unicode]:
# For convenience, create a list of tuples if it is not already one
if type(items) in [str, unicode]:
self.items = [(i, item) for i, item in zip(range(len(
items.split('\n'))),
items.split('\n'))]
elif type(items) is tuple:
self.items = [items]
else:
self.items = items
self.height = height
self.multi = multi
self.on_activate = self.new_signal()
if multi:
self.selection(selected_item or [])
self.on_activate.connect(ChangeAction())
self.on_change.clear()
else:
if selected_item or self.items:
self.selection(selected_item or self.items[0][0])
self.on_activate.connect(ChangeAction())
self.on_activate.connect(SaveAction())
def selection(self, selected=None):
"""Get/Select the given item (key) or multiple items if multi
Args:
selected: The item key to be selected
Returns:
The select item key or a list of keys which are selected
"""
if self.multi:
return self.__selection_multi(selected)
return self.__selection_single(selected)
def __selection_single(self, selected=None):
if selected in dict(self.items).keys():
self.on_value_change(selected)
self._selected = selected
return self._selected
def __selection_multi(self, selected=None):
if type(selected) in [list, str, unicode]:
if type(selected) in [str, unicode]:
# for convenience create a list for single strings
selected = [selected]
self.on_value_change(selected)
self._selected = set([k for k in selected
if k in dict(self.items).keys()])
return list(self._selected)
def value(self, value=None):
return self.selection(value)
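# Usage sketch: a single-column table; with no selected_item given the
# first key is preselected.
#   t = Table("page.table", "Choose one", "Name",
#             [("a", "Item A"), ("b", "Item B")])
#   t.selection()   # -> "a"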
class Window(Element):
"""Abstract Window definition
"""
application = None
__hotkeys_enabled = True
def __init__(self, path, application):
super(Window, self).__init__(path=path)
self.logger.info("Creating UI for application '%s'" % application)
self.application = application
self._plugins = {}
self._hotkeys = {}
self.footer = None
self.navigate = Window.Navigation(self.application)
def register_plugin(self, title, plugin):
"""Register a plugin to be shown in the UI
"""
if title in self._plugins:
raise RuntimeError("Plugin with same path is " +
"already registered: %s" % title)
self._plugins[title] = plugin
def hotkeys_enabled(self, new_value=None):
"""Disable all attached hotkey callbacks
Args:
new_value: If hotkeys shall be enabled or disabled
Returns:
If the hotkeys are enabled or disabled
"""
if new_value in [True, False]:
self.__hotkeys_enabled = new_value
return self.__hotkeys_enabled
def register_hotkey(self, hotkey, cb):
"""Register a hotkey
Args:
hotkey: The key combination triggering the callback
cb: The callback to be called
"""
if type(hotkey) is str:
hotkey = [hotkey]
self.logger.debug("Registering hotkey '%s': %s" % (hotkey, cb))
self._hotkeys[str(hotkey)] = cb
def _show_on_page(self, page):
"""Shows the ui.Page as on a dialog.
"""
raise NotImplementedError
def _show_on_dialog(self, dialog):
"""Shows the ui.Dialog as on a dialog.
"""
raise NotImplementedError
def _show_on_notice(self, text):
"""Shows the text in the notice field
Something like a HUD display
"""
raise NotImplementedError
def close_dialog(self, dialog):
"""Close the ui.Dialog
"""
raise NotImplementedError
def suspended(self):
"""Supspends the screen to do something in the foreground
Returns:
...
"""
raise NotImplementedError
def force_redraw(self):
"""Forces a complete redraw of the UI
"""
raise NotImplementedError
def reset(self):
"""Reset the UI
"""
raise NotImplementedError
def run(self):
"""Starts the UI
"""
raise NotImplementedError
def thread_connection(self):
"""Run a callback in the context of the UI thread
Returns:
A new UIThreadConnection instance
"""
raise NotImplementedError
class UIThreadConnection(base.Base):
"""A class to interact with the UI thread
This is needed if other threads want to interact with the UI
"""
def call(self, callback):
"""Call a callback in the context of the UI thread
This needs to be used when updates to the ui are made
Args:
callback: A callable to be called in the ctx of the ui thread
Returns:
Nothing
"""
raise NotImplementedError
class Navigation(base.Base):
"""A convenience class to navigate through a window
"""
application = None
def __init__(self, application):
self.application = application
super(Window.Navigation, self).__init__()
def index(self):
plugins = self.application.plugins().items()
get_rank = lambda path_plugin: path_plugin[1].rank()
self.logger.debug("Available plugins: %s" % plugins)
sorted_plugins = [p for n, p in sorted(plugins, key=get_rank)
if p.has_ui()]
self.logger.debug("Available plugins with ui: %s" % sorted_plugins)
return sorted_plugins
def to_plugin(self, plugin_candidate):
"""Goes to the plugin (by instance or type)
Args:
plugin_candidate: The plugin instance/type to go to
"""
self.logger.debug("Navigating to plugin %s" % plugin_candidate)
self.application.switch_to_plugin(plugin_candidate)
self.logger.debug("Navigated to plugin %s" % plugin_candidate)
def to_nth(self, idx, is_relative=False):
"""Goes to the plugin (by idx)
Any pending changes are ignored.
Args:
idx: The plugin idx to go to
is_relative: Whether idx is relative to the current plugin
"""
plugins = self.index()
self.logger.debug("Switching to page %s (%s)" % (idx, plugins))
if is_relative:
idx += plugins.index(self.application.current_plugin())
plugin = plugins[idx]
self.to_plugin(plugin)
def to_next_plugin(self):
"""Goes to the next plugin, based on the current one
"""
self.to_nth(1, True)
def to_previous_plugin(self):
"""Goes to the previous plugin, based on the current one
"""
self.to_nth(-1, True)
def to_first_plugin(self):
"""Goes to the first plugin
"""
self.to_nth(0)
def to_last_plugin(self):
"""Goes to the last plugin
"""
self.to_nth(-1)
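# Editor's note: a hedged sketch, not part of the original module, of
# how a concrete Window subclass could wire hotkeys to the Navigation
# helper above. The key names ("f2", etc.) are assumptions; only
# register_hotkey() and the navigate.* methods exist in this module.
def _example_wire_hotkeys(window):
    window.register_hotkey("f2", window.navigate.to_first_plugin)
    window.register_hotkey("f3", window.navigate.to_last_plugin)
    # A key combination can also be given as a list of keys
    window.register_hotkey(["ctrl", "n"], window.navigate.to_next_plugin)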
class Page(ContainerElement):
"""An abstract page with a couple of widgets
"""
buttons = []
def __init__(self, path, children, title=None):
super(Page, self).__init__(path, children, title)
self.buttons = self.buttons or [SaveButton("%s.save" % path),
ResetButton("%s.reset" % path)
]
def elements(self):
return super(Page, self).elements() + self.buttons
class Dialog(Page):
"""An abstract dialog, similar to a page
Args:
on_close: Emitted by the Dialog when it requests a close
"""
escape_key = "esc"
on_close_change = None
def __init__(self, path, title, children):
super(Dialog, self).__init__(path, children, title)
self.buttons = [SaveButton("%s.save" % path),
CloseButton("%s.close" % path)
]
self.on_close_change = self.new_signal()
self.close(False)
self.on_close_change.connect(CloseAction(dialog=self))
def close(self, v=True):
if v:
self.on_close_change(self)
class InfoDialog(Dialog):
"""A dialog with a title and a text
"""
def __init__(self, path, title, text, buttons=None):
super(InfoDialog, self).__init__(path, title, [])
self.children = [Label(path + ".label", text)]
self.buttons = buttons or [CloseButton(path + ".close")]
class TextViewDialog(Dialog):
"""A dialog to display much text, e.g. log files
"""
def __init__(self, path, title, contents, height=16):
super(TextViewDialog, self).__init__(path, title, [])
self.children = [Table("contents", "", _("Contents"),
contents, height=height)]
self.buttons = [CloseButton("dialog.close")]
class ConfirmationDialog(InfoDialog):
"""A generic dialog showing a text and offering buttons
By default an OK and a Cancel button will be shown.
"""
def __init__(self, path, title, text, buttons=None):
self.children = [Divider("divider[0]"),
Label("label[0]", text)
]
if not buttons:
# Default: OK and Close
self.buttons = [Button(path + ".yes", _("OK")),
CloseButton(path + ".close", _("Cancel"))]
buttons = self.buttons
super(ConfirmationDialog, self).__init__(path, title, text,
buttons)
class TransactionProgressDialog(Dialog):
"""Display the progress of a transaction in a dialog
"""
def __init__(self, path, transaction, plugin, initial_text=""):
self.transaction = transaction
title = _("Transaction: %s") % self.transaction.title
self._progress_label = Label("dialog.progress", initial_text)
super(TransactionProgressDialog, self).__init__(path,
title,
[self._progress_label])
self.texts = [initial_text, ""]
self.plugin = plugin
self._close_button = CloseButton("button.close")
self.buttons = [self._close_button]
self.buttons[0].on_activate.connect(self._clear_event)
self.event = threading.Event()
def _clear_event(self, *args, **kwargs):
self.event.set()
self.event.clear()
def add_update(self, txt):
self.texts.append(txt)
self._progress_label.text("\n".join(self.texts))
def run(self):
try:
self.plugin.application.show(self)
self._close_button.enabled(False)
if self.transaction:
self.logger.debug("Initiating transaction")
self.__run_transaction()
else:
self.add_update("There were no changes, nothing to do.")
self._close_button.enabled(True)
# We enforce a redraw, because this runs in a non-mainloop thread
self.plugin.application.ui.force_redraw()
except Exception as e:
self.logger.warning("An exception in the Transaction: %s" % e,
exc_info=True)
def __run_transaction(self):
try:
self.add_update("Checking pre-conditions ...")
for idx, tx_element in self.transaction.step():
txt = "(%s/%s) %s" % (idx + 1, len(self.transaction),
tx_element.title)
self.add_update(txt)
with console.CaptureOutput() as captured:
# Sometimes a tx_element is wrapping some code that
# writes to stdout/stderr which scrambles the screen,
# therefore we are capturing this
self.plugin.dry_or(lambda: tx_element.commit())
self.add_update("\nAll changes were applied successfully.")
except Exception as e:
self.logger.info("An exception during the transaction: %s" % e,
exc_info=True)
self.add_update("\nAn error occurred while applying the changes:")
self.add_update("%s" % e)
self._clear_event()
se = captured.stderr.getvalue()
if se:
self.add_update("Stderr: %s" % se)
class AbstractUIBuilder(base.Base):
"""An abstract class
Every toolkit that wants to be a backend for the above elements needs to
implement this builder. An instance of that specific builder is then
passed to the application which uses the builder to build the UI.
"""
application = None
def __init__(self, application):
super(AbstractUIBuilder, self).__init__()
self.application = application
def build(self, ui_element):
assert Element in type(ui_element).mro()
builder_for_element = {
ContainerElement: self._build_container,
Window: self._build_window,
Page: self._build_page,
Dialog: self._build_dialog,
Label: self._build_label,
KeywordLabel: self._build_keywordlabel,
Entry: self._build_entry,
PasswordEntry: self._build_passwordentry,
Header: self._build_header,
Notice: self._build_notice,
Button: self._build_button,
Options: self._build_options,
ProgressBar: self._build_progressbar,
Table: self._build_table,
Checkbox: self._build_checkbox,
Divider: self._build_divider,
Row: self._build_row,
}
self.logger.debug("Building %s" % ui_element)
ui_element_type = type(ui_element)
builder_func = None
# Check if builder is available for UI Element
if ui_element_type in builder_for_element:
builder_func = builder_for_element[ui_element_type]
else:
# It could be a derived type, therefore find its base:
for sub_type in type(ui_element).mro():
if sub_type in builder_for_element:
builder_func = builder_for_element[sub_type]
break  # stop at the most derived registered base class
if not builder_func:
raise Exception("No builder for UI element '%s' (%s)" %
(ui_element, type(ui_element).mro()))
# Build widget from UI Element
widget = builder_func(ui_element)
# Give the widget the ability to also use the ui_builder
widget._ui_builder = self
return widget
def _build_container(self, ui_container):
raise NotImplementedError
def _build_window(self, ui_window):
raise NotImplementedError
def _build_page(self, ui_page):
raise NotImplementedError
def _build_dialog(self, ui_dialog):
raise NotImplementedError
def _build_label(self, ui_label):
raise NotImplementedError
def _build_keywordlabel(self, ui_keywordlabel):
raise NotImplementedError
def _build_header(self, ui_header):
raise NotImplementedError
def _build_notice(self, ui_notice):
raise NotImplementedError
def _build_button(self, ui_button):
raise NotImplementedError
def _build_button_bar(self, ui_button):
raise NotImplementedError
def _build_entry(self, ui_entry):
raise NotImplementedError
def _build_passwordentry(self, ui_passwordentry):
raise NotImplementedError
def _build_divider(self, ui_divider):
raise NotImplementedError
def _build_options(self, ui_options):
raise NotImplementedError
def _build_checkbox(self, ui_checkbox):
raise NotImplementedError
def _build_progressbar(self, ui_progressbar):
raise NotImplementedError
def _build_table(self, ui_table):
raise NotImplementedError
def _build_row(self, ui_row):
raise NotImplementedError
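# Editor's note: an illustrative sketch, not part of the original
# module. build() above dispatches on the element type's MRO, so a
# toolkit backend only implements the _build_* hooks; derived element
# types fall back to their nearest registered base class.
class _ExampleNoopUIBuilder(AbstractUIBuilder):
    """A do-nothing backend, e.g. for tests: every element is "built"
    into a plain description string.
    """
    def _build_container(self, ui_container):
        return "container(%s)" % ui_container
    def _build_label(self, ui_label):
        return "label(%s)" % ui_label
    # ... the remaining _build_* hooks would be implemented alike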
| gpl-2.0 |
40023255/-w16b_test | static/Brython3.1.1-20150328-091302/Lib/logging/config.py | 739 | 35619 | # Copyright 2001-2013 by Vinay Sajip. All Rights Reserved.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for any purpose and without fee is hereby granted,
# provided that the above copyright notice appear in all copies and that
# both that copyright notice and this permission notice appear in
# supporting documentation, and that the name of Vinay Sajip
# not be used in advertising or publicity pertaining to distribution
# of the software without specific, written prior permission.
# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING
# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""
Configuration functions for the logging package for Python. The core package
is based on PEP 282 and comments thereto in comp.lang.python, and influenced
by Apache's log4j system.
Copyright (C) 2001-2013 Vinay Sajip. All Rights Reserved.
To use, simply 'import logging' and log away!
"""
import sys, logging, logging.handlers, socket, struct, traceback, re
import io
try:
import _thread as thread
import threading
except ImportError: #pragma: no cover
thread = None
from socketserver import ThreadingTCPServer, StreamRequestHandler
DEFAULT_LOGGING_CONFIG_PORT = 9030
if sys.platform == "win32":
RESET_ERROR = 10054 #WSAECONNRESET
else:
RESET_ERROR = 104 #ECONNRESET
#
# The following code implements a socket listener for on-the-fly
# reconfiguration of logging.
#
# _listener holds the server object doing the listening
_listener = None
def fileConfig(fname, defaults=None, disable_existing_loggers=True):
"""
Read the logging configuration from a ConfigParser-format file.
This can be called several times from an application, allowing an end user
the ability to select from various pre-canned configurations (if the
developer provides a mechanism to present the choices and load the chosen
configuration).
"""
import configparser
cp = configparser.ConfigParser(defaults)
if hasattr(fname, 'readline'):
cp.read_file(fname)
else:
cp.read(fname)
formatters = _create_formatters(cp)
# critical section
logging._acquireLock()
try:
logging._handlers.clear()
del logging._handlerList[:]
# Handlers add themselves to logging._handlers
handlers = _install_handlers(cp, formatters)
_install_loggers(cp, handlers, disable_existing_loggers)
finally:
logging._releaseLock()
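# Editor's note: an illustrative sketch, not part of the original
# module, of the ConfigParser format consumed by fileConfig() above.
# The handler/formatter names ("console", "simple") are assumptions.
_EXAMPLE_FILE_CONFIG = """
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
level=INFO
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(levelname)s %(message)s
"""
# Usage sketch: fileConfig(io.StringIO(_EXAMPLE_FILE_CONFIG))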
def _resolve(name):
"""Resolve a dotted name to a global object."""
name = name.split('.')
used = name.pop(0)
found = __import__(used)
for n in name:
used = used + '.' + n
try:
found = getattr(found, n)
except AttributeError:
__import__(used)
found = getattr(found, n)
return found
def _strip_spaces(alist):
return map(lambda x: x.strip(), alist)
def _create_formatters(cp):
"""Create and return formatters"""
flist = cp["formatters"]["keys"]
if not len(flist):
return {}
flist = flist.split(",")
flist = _strip_spaces(flist)
formatters = {}
for form in flist:
sectname = "formatter_%s" % form
fs = cp.get(sectname, "format", raw=True, fallback=None)
dfs = cp.get(sectname, "datefmt", raw=True, fallback=None)
c = logging.Formatter
class_name = cp[sectname].get("class")
if class_name:
c = _resolve(class_name)
f = c(fs, dfs)
formatters[form] = f
return formatters
def _install_handlers(cp, formatters):
"""Install and return handlers"""
hlist = cp["handlers"]["keys"]
if not len(hlist):
return {}
hlist = hlist.split(",")
hlist = _strip_spaces(hlist)
handlers = {}
fixups = [] #for inter-handler references
for hand in hlist:
section = cp["handler_%s" % hand]
klass = section["class"]
fmt = section.get("formatter", "")
try:
klass = eval(klass, vars(logging))
except (AttributeError, NameError):
klass = _resolve(klass)
args = section["args"]
args = eval(args, vars(logging))
h = klass(*args)
if "level" in section:
level = section["level"]
h.setLevel(logging._levelNames[level])
if len(fmt):
h.setFormatter(formatters[fmt])
if issubclass(klass, logging.handlers.MemoryHandler):
target = section.get("target", "")
if len(target): #the target handler may not be loaded yet, so keep for later...
fixups.append((h, target))
handlers[hand] = h
#now all handlers are loaded, fixup inter-handler references...
for h, t in fixups:
h.setTarget(handlers[t])
return handlers
def _handle_existing_loggers(existing, child_loggers, disable_existing):
"""
When (re)configuring logging, handle loggers which were in the previous
configuration but are not in the new configuration. There's no point
deleting them as other threads may continue to hold references to them;
and by disabling them, you stop them doing any logging.
However, don't disable children of named loggers, as that's probably not
what was intended by the user. Also, allow existing loggers to NOT be
disabled if disable_existing is false.
"""
root = logging.root
for log in existing:
logger = root.manager.loggerDict[log]
if log in child_loggers:
logger.level = logging.NOTSET
logger.handlers = []
logger.propagate = True
else:
logger.disabled = disable_existing
def _install_loggers(cp, handlers, disable_existing):
"""Create and install loggers"""
# configure the root first
llist = cp["loggers"]["keys"]
llist = llist.split(",")
llist = list(map(lambda x: x.strip(), llist))
llist.remove("root")
section = cp["logger_root"]
root = logging.root
log = root
if "level" in section:
level = section["level"]
log.setLevel(logging._levelNames[level])
for h in root.handlers[:]:
root.removeHandler(h)
hlist = section["handlers"]
if len(hlist):
hlist = hlist.split(",")
hlist = _strip_spaces(hlist)
for hand in hlist:
log.addHandler(handlers[hand])
#and now the others...
#we don't want to lose the existing loggers,
#since other threads may have pointers to them.
#existing is set to contain all existing loggers,
#and as we go through the new configuration we
#remove any which are configured. At the end,
#what's left in existing is the set of loggers
#which were in the previous configuration but
#which are not in the new configuration.
existing = list(root.manager.loggerDict.keys())
#The list needs to be sorted so that we can
#avoid disabling child loggers of explicitly
#named loggers. With a sorted list it is easier
#to find the child loggers.
existing.sort()
#We'll keep the list of existing loggers
#which are children of named loggers here...
child_loggers = []
#now set up the new ones...
for log in llist:
section = cp["logger_%s" % log]
qn = section["qualname"]
propagate = section.getint("propagate", fallback=1)
logger = logging.getLogger(qn)
if qn in existing:
i = existing.index(qn) + 1 # start with the entry after qn
prefixed = qn + "."
pflen = len(prefixed)
num_existing = len(existing)
while i < num_existing:
if existing[i][:pflen] == prefixed:
child_loggers.append(existing[i])
i += 1
existing.remove(qn)
if "level" in section:
level = section["level"]
logger.setLevel(logging._levelNames[level])
for h in logger.handlers[:]:
logger.removeHandler(h)
logger.propagate = propagate
logger.disabled = 0
hlist = section["handlers"]
if len(hlist):
hlist = hlist.split(",")
hlist = _strip_spaces(hlist)
for hand in hlist:
logger.addHandler(handlers[hand])
#Disable any old loggers. There's no point deleting
#them as other threads may continue to hold references
#and by disabling them, you stop them doing any logging.
#However, don't disable children of named loggers, as that's
#probably not what was intended by the user.
#for log in existing:
# logger = root.manager.loggerDict[log]
# if log in child_loggers:
# logger.level = logging.NOTSET
# logger.handlers = []
# logger.propagate = 1
# elif disable_existing_loggers:
# logger.disabled = 1
_handle_existing_loggers(existing, child_loggers, disable_existing)
IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I)
def valid_ident(s):
m = IDENTIFIER.match(s)
if not m:
raise ValueError('Not a valid Python identifier: %r' % s)
return True
# The ConvertingXXX classes are wrappers around standard Python containers,
# and they serve to convert any suitable values in the container. The
# conversion converts base dicts, lists and tuples to their wrapped
# equivalents, whereas strings which match a conversion format are converted
# appropriately.
#
# Each wrapper should have a configurator attribute holding the actual
# configurator to use for conversion.
class ConvertingDict(dict):
"""A converting dictionary wrapper."""
def __getitem__(self, key):
value = dict.__getitem__(self, key)
result = self.configurator.convert(value)
#If the converted value is different, save for next time
if value is not result:
self[key] = result
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
result.key = key
return result
def get(self, key, default=None):
value = dict.get(self, key, default)
result = self.configurator.convert(value)
#If the converted value is different, save for next time
if value is not result:
self[key] = result
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
result.key = key
return result
def pop(self, key, default=None):
value = dict.pop(self, key, default)
result = self.configurator.convert(value)
if value is not result:
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
result.key = key
return result
class ConvertingList(list):
"""A converting list wrapper."""
def __getitem__(self, key):
value = list.__getitem__(self, key)
result = self.configurator.convert(value)
#If the converted value is different, save for next time
if value is not result:
self[key] = result
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
result.key = key
return result
def pop(self, idx=-1):
value = list.pop(self, idx)
result = self.configurator.convert(value)
if value is not result:
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
return result
class ConvertingTuple(tuple):
"""A converting tuple wrapper."""
def __getitem__(self, key):
value = tuple.__getitem__(self, key)
result = self.configurator.convert(value)
if value is not result:
if type(result) in (ConvertingDict, ConvertingList,
ConvertingTuple):
result.parent = self
result.key = key
return result
class BaseConfigurator(object):
"""
The configurator base class which defines some useful defaults.
"""
CONVERT_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')
WORD_PATTERN = re.compile(r'^\s*(\w+)\s*')
DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*')
INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*')
DIGIT_PATTERN = re.compile(r'^\d+$')
value_converters = {
'ext' : 'ext_convert',
'cfg' : 'cfg_convert',
}
# We might want to use a different one, e.g. importlib
importer = staticmethod(__import__)
def __init__(self, config):
self.config = ConvertingDict(config)
self.config.configurator = self
def resolve(self, s):
"""
Resolve strings to objects using standard import and attribute
syntax.
"""
name = s.split('.')
used = name.pop(0)
try:
found = self.importer(used)
for frag in name:
used += '.' + frag
try:
found = getattr(found, frag)
except AttributeError:
self.importer(used)
found = getattr(found, frag)
return found
except ImportError:
e, tb = sys.exc_info()[1:]
v = ValueError('Cannot resolve %r: %s' % (s, e))
v.__cause__, v.__traceback__ = e, tb
raise v
def ext_convert(self, value):
"""Default converter for the ext:// protocol."""
return self.resolve(value)
def cfg_convert(self, value):
"""Default converter for the cfg:// protocol."""
rest = value
m = self.WORD_PATTERN.match(rest)
if m is None:
raise ValueError("Unable to convert %r" % value)
else:
rest = rest[m.end():]
d = self.config[m.groups()[0]]
#print d, rest
while rest:
m = self.DOT_PATTERN.match(rest)
if m:
d = d[m.groups()[0]]
else:
m = self.INDEX_PATTERN.match(rest)
if m:
idx = m.groups()[0]
if not self.DIGIT_PATTERN.match(idx):
d = d[idx]
else:
try:
n = int(idx) # try as number first (most likely)
d = d[n]
except TypeError:
d = d[idx]
if m:
rest = rest[m.end():]
else:
raise ValueError('Unable to convert '
'%r at %r' % (value, rest))
#rest should be empty
return d
def convert(self, value):
"""
Convert values to an appropriate type. dicts, lists and tuples are
replaced by their converting alternatives. Strings are checked to
see if they have a conversion format and are converted if they do.
"""
if not isinstance(value, ConvertingDict) and isinstance(value, dict):
value = ConvertingDict(value)
value.configurator = self
elif not isinstance(value, ConvertingList) and isinstance(value, list):
value = ConvertingList(value)
value.configurator = self
elif not isinstance(value, ConvertingTuple) and\
isinstance(value, tuple):
value = ConvertingTuple(value)
value.configurator = self
elif isinstance(value, str): # str for py3k
m = self.CONVERT_PATTERN.match(value)
if m:
d = m.groupdict()
prefix = d['prefix']
converter = self.value_converters.get(prefix, None)
if converter:
suffix = d['suffix']
converter = getattr(self, converter)
value = converter(suffix)
return value
def configure_custom(self, config):
"""Configure an object with a user-supplied factory."""
c = config.pop('()')
if not callable(c):
c = self.resolve(c)
props = config.pop('.', None)
# Check for valid identifiers
kwargs = dict([(k, config[k]) for k in config if valid_ident(k)])
result = c(**kwargs)
if props:
for name, value in props.items():
setattr(result, name, value)
return result
def as_tuple(self, value):
"""Utility function which converts lists to tuples."""
if isinstance(value, list):
value = tuple(value)
return value
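# Editor's note: an illustrative sketch, not part of the original
# module, of the ext:// and cfg:// protocols handled by convert()
# above; the configuration content is made up.
def _example_convert():
    cfg = BaseConfigurator({'handlers': {'console': {'level': 'DEBUG'}}})
    # ext:// resolves a dotted import path to the live object
    stream = cfg.convert('ext://sys.stderr')              # sys.stderr
    # cfg:// walks the configuration dictionary itself
    level = cfg.convert('cfg://handlers.console[level]')  # 'DEBUG'
    return stream, level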
class DictConfigurator(BaseConfigurator):
"""
Configure logging using a dictionary-like object to describe the
configuration.
"""
def configure(self):
"""Do the configuration."""
config = self.config
if 'version' not in config:
raise ValueError("dictionary doesn't specify a version")
if config['version'] != 1:
raise ValueError("Unsupported version: %s" % config['version'])
incremental = config.pop('incremental', False)
EMPTY_DICT = {}
logging._acquireLock()
try:
if incremental:
handlers = config.get('handlers', EMPTY_DICT)
for name in handlers:
if name not in logging._handlers:
raise ValueError('No handler found with '
'name %r' % name)
else:
try:
handler = logging._handlers[name]
handler_config = handlers[name]
level = handler_config.get('level', None)
if level:
handler.setLevel(logging._checkLevel(level))
except Exception as e:
raise ValueError('Unable to configure handler '
'%r: %s' % (name, e))
loggers = config.get('loggers', EMPTY_DICT)
for name in loggers:
try:
self.configure_logger(name, loggers[name], True)
except Exception as e:
raise ValueError('Unable to configure logger '
'%r: %s' % (name, e))
root = config.get('root', None)
if root:
try:
self.configure_root(root, True)
except Exception as e:
raise ValueError('Unable to configure root '
'logger: %s' % e)
else:
disable_existing = config.pop('disable_existing_loggers', True)
logging._handlers.clear()
del logging._handlerList[:]
# Do formatters first - they don't refer to anything else
formatters = config.get('formatters', EMPTY_DICT)
for name in formatters:
try:
formatters[name] = self.configure_formatter(
formatters[name])
except Exception as e:
raise ValueError('Unable to configure '
'formatter %r: %s' % (name, e))
# Next, do filters - they don't refer to anything else, either
filters = config.get('filters', EMPTY_DICT)
for name in filters:
try:
filters[name] = self.configure_filter(filters[name])
except Exception as e:
raise ValueError('Unable to configure '
'filter %r: %s' % (name, e))
# Next, do handlers - they refer to formatters and filters
# As handlers can refer to other handlers, sort the keys
# to allow a deterministic order of configuration
handlers = config.get('handlers', EMPTY_DICT)
deferred = []
for name in sorted(handlers):
try:
handler = self.configure_handler(handlers[name])
handler.name = name
handlers[name] = handler
except Exception as e:
if 'target not configured yet' in str(e):
deferred.append(name)
else:
raise ValueError('Unable to configure handler '
'%r: %s' % (name, e))
# Now do any that were deferred
for name in deferred:
try:
handler = self.configure_handler(handlers[name])
handler.name = name
handlers[name] = handler
except Exception as e:
raise ValueError('Unable to configure handler '
'%r: %s' % (name, e))
# Next, do loggers - they refer to handlers and filters
#we don't want to lose the existing loggers,
#since other threads may have pointers to them.
#existing is set to contain all existing loggers,
#and as we go through the new configuration we
#remove any which are configured. At the end,
#what's left in existing is the set of loggers
#which were in the previous configuration but
#which are not in the new configuration.
root = logging.root
existing = list(root.manager.loggerDict.keys())
#The list needs to be sorted so that we can
#avoid disabling child loggers of explicitly
#named loggers. With a sorted list it is easier
#to find the child loggers.
existing.sort()
#We'll keep the list of existing loggers
#which are children of named loggers here...
child_loggers = []
#now set up the new ones...
loggers = config.get('loggers', EMPTY_DICT)
for name in loggers:
if name in existing:
i = existing.index(name) + 1 # look after name
prefixed = name + "."
pflen = len(prefixed)
num_existing = len(existing)
while i < num_existing:
if existing[i][:pflen] == prefixed:
child_loggers.append(existing[i])
i += 1
existing.remove(name)
try:
self.configure_logger(name, loggers[name])
except Exception as e:
raise ValueError('Unable to configure logger '
'%r: %s' % (name, e))
#Disable any old loggers. There's no point deleting
#them as other threads may continue to hold references
#and by disabling them, you stop them doing any logging.
#However, don't disable children of named loggers, as that's
#probably not what was intended by the user.
#for log in existing:
# logger = root.manager.loggerDict[log]
# if log in child_loggers:
# logger.level = logging.NOTSET
# logger.handlers = []
# logger.propagate = True
# elif disable_existing:
# logger.disabled = True
_handle_existing_loggers(existing, child_loggers,
disable_existing)
# And finally, do the root logger
root = config.get('root', None)
if root:
try:
self.configure_root(root)
except Exception as e:
raise ValueError('Unable to configure root '
'logger: %s' % e)
finally:
logging._releaseLock()
def configure_formatter(self, config):
"""Configure a formatter from a dictionary."""
if '()' in config:
factory = config['()'] # for use in exception handler
try:
result = self.configure_custom(config)
except TypeError as te:
if "'format'" not in str(te):
raise
#Name of parameter changed from fmt to format.
#Retry with old name.
#This is so that code can be used with older Python versions
#(e.g. by Django)
config['fmt'] = config.pop('format')
config['()'] = factory
result = self.configure_custom(config)
else:
fmt = config.get('format', None)
dfmt = config.get('datefmt', None)
style = config.get('style', '%')
result = logging.Formatter(fmt, dfmt, style)
return result
def configure_filter(self, config):
"""Configure a filter from a dictionary."""
if '()' in config:
result = self.configure_custom(config)
else:
name = config.get('name', '')
result = logging.Filter(name)
return result
def add_filters(self, filterer, filters):
"""Add filters to a filterer from a list of names."""
for f in filters:
try:
filterer.addFilter(self.config['filters'][f])
except Exception as e:
raise ValueError('Unable to add filter %r: %s' % (f, e))
def configure_handler(self, config):
"""Configure a handler from a dictionary."""
config_copy = dict(config) # for restoring in case of error
formatter = config.pop('formatter', None)
if formatter:
try:
formatter = self.config['formatters'][formatter]
except Exception as e:
raise ValueError('Unable to set formatter '
'%r: %s' % (formatter, e))
level = config.pop('level', None)
filters = config.pop('filters', None)
if '()' in config:
c = config.pop('()')
if not callable(c):
c = self.resolve(c)
factory = c
else:
cname = config.pop('class')
klass = self.resolve(cname)
#Special case for handler which refers to another handler
if issubclass(klass, logging.handlers.MemoryHandler) and\
'target' in config:
try:
th = self.config['handlers'][config['target']]
if not isinstance(th, logging.Handler):
config.update(config_copy) # restore for deferred cfg
raise TypeError('target not configured yet')
config['target'] = th
except Exception as e:
raise ValueError('Unable to set target handler '
'%r: %s' % (config['target'], e))
elif issubclass(klass, logging.handlers.SMTPHandler) and\
'mailhost' in config:
config['mailhost'] = self.as_tuple(config['mailhost'])
elif issubclass(klass, logging.handlers.SysLogHandler) and\
'address' in config:
config['address'] = self.as_tuple(config['address'])
factory = klass
kwargs = dict([(k, config[k]) for k in config if valid_ident(k)])
try:
result = factory(**kwargs)
except TypeError as te:
if "'stream'" not in str(te):
raise
#The argument name changed from strm to stream
#Retry with old name.
#This is so that code can be used with older Python versions
#(e.g. by Django)
kwargs['strm'] = kwargs.pop('stream')
result = factory(**kwargs)
if formatter:
result.setFormatter(formatter)
if level is not None:
result.setLevel(logging._checkLevel(level))
if filters:
self.add_filters(result, filters)
return result
def add_handlers(self, logger, handlers):
"""Add handlers to a logger from a list of names."""
for h in handlers:
try:
logger.addHandler(self.config['handlers'][h])
except Exception as e:
raise ValueError('Unable to add handler %r: %s' % (h, e))
def common_logger_config(self, logger, config, incremental=False):
"""
Perform configuration which is common to root and non-root loggers.
"""
level = config.get('level', None)
if level is not None:
logger.setLevel(logging._checkLevel(level))
if not incremental:
#Remove any existing handlers
for h in logger.handlers[:]:
logger.removeHandler(h)
handlers = config.get('handlers', None)
if handlers:
self.add_handlers(logger, handlers)
filters = config.get('filters', None)
if filters:
self.add_filters(logger, filters)
def configure_logger(self, name, config, incremental=False):
"""Configure a non-root logger from a dictionary."""
logger = logging.getLogger(name)
self.common_logger_config(logger, config, incremental)
propagate = config.get('propagate', None)
if propagate is not None:
logger.propagate = propagate
def configure_root(self, config, incremental=False):
"""Configure a root logger from a dictionary."""
root = logging.getLogger()
self.common_logger_config(root, config, incremental)
dictConfigClass = DictConfigurator
def dictConfig(config):
"""Configure logging using a dictionary."""
dictConfigClass(config).configure()
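# Editor's note: an illustrative sketch, not part of the original
# module, of a minimal dictionary accepted by dictConfig() above; the
# names "simple" and "console" are assumptions.
_EXAMPLE_DICT_CONFIG = {
    'version': 1,
    'formatters': {
        'simple': {'format': '%(asctime)s %(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
            'level': 'INFO',
        },
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
}
# Usage sketch: dictConfig(_EXAMPLE_DICT_CONFIG)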
def listen(port=DEFAULT_LOGGING_CONFIG_PORT):
"""
Start up a socket server on the specified port, and listen for new
configurations.
These will be sent as a file suitable for processing by fileConfig().
Returns a Thread object on which you can call start() to start the server,
and which you can join() when appropriate. To stop the server, call
stopListening().
"""
if not thread: #pragma: no cover
raise NotImplementedError("listen() needs threading to work")
class ConfigStreamHandler(StreamRequestHandler):
"""
Handler for a logging configuration request.
It expects a completely new logging configuration: a JSON payload is
installed via dictConfig(), anything else via fileConfig().
"""
def handle(self):
"""
Handle a request.
Each request is expected to be a 4-byte length, packed using
struct.pack(">L", n), followed by the config file.
Uses fileConfig() to do the grunt work.
"""
try:
conn = self.connection
chunk = conn.recv(4)
if len(chunk) == 4:
slen = struct.unpack(">L", chunk)[0]
chunk = self.connection.recv(slen)
while len(chunk) < slen:
chunk = chunk + conn.recv(slen - len(chunk))
chunk = chunk.decode("utf-8")
try:
import json
d = json.loads(chunk)
assert isinstance(d, dict)
dictConfig(d)
except:
#Apply new configuration.
file = io.StringIO(chunk)
try:
fileConfig(file)
except (KeyboardInterrupt, SystemExit): #pragma: no cover
raise
except:
traceback.print_exc()
if self.server.ready:
self.server.ready.set()
except socket.error as e:
if not isinstance(e.args, tuple):
raise
else:
errcode = e.args[0]
if errcode != RESET_ERROR:
raise
class ConfigSocketReceiver(ThreadingTCPServer):
"""
A simple TCP socket-based logging config receiver.
"""
allow_reuse_address = 1
def __init__(self, host='localhost', port=DEFAULT_LOGGING_CONFIG_PORT,
handler=None, ready=None):
ThreadingTCPServer.__init__(self, (host, port), handler)
logging._acquireLock()
self.abort = 0
logging._releaseLock()
self.timeout = 1
self.ready = ready
def serve_until_stopped(self):
import select
abort = 0
while not abort:
rd, wr, ex = select.select([self.socket.fileno()],
[], [],
self.timeout)
if rd:
self.handle_request()
logging._acquireLock()
abort = self.abort
logging._releaseLock()
self.socket.close()
class Server(threading.Thread):
def __init__(self, rcvr, hdlr, port):
super(Server, self).__init__()
self.rcvr = rcvr
self.hdlr = hdlr
self.port = port
self.ready = threading.Event()
def run(self):
server = self.rcvr(port=self.port, handler=self.hdlr,
ready=self.ready)
if self.port == 0:
self.port = server.server_address[1]
self.ready.set()
global _listener
logging._acquireLock()
_listener = server
logging._releaseLock()
server.serve_until_stopped()
return Server(ConfigSocketReceiver, ConfigStreamHandler, port)
def stopListening():
"""
Stop the listening server which was created with a call to listen().
"""
global _listener
logging._acquireLock()
try:
if _listener:
_listener.abort = 1
_listener = None
finally:
logging._releaseLock()
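# Editor's note: an illustrative sketch, not part of the original
# module, of a client for the listen() server above. Per
# ConfigStreamHandler.handle(), each request is a 4-byte big-endian
# length followed by the configuration payload (JSON or INI text).
def _example_send_config(payload, host='localhost',
                         port=DEFAULT_LOGGING_CONFIG_PORT):
    data = payload.encode('utf-8')
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    try:
        s.sendall(struct.pack('>L', len(data)) + data)
    finally:
        s.close()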
| agpl-3.0 |
colinligertwood/odoo | addons/hr_expense/__init__.py | 436 | 1079 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import hr_expense
import report
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
BaichuanWu/Blog_on_django | site-packages/pip/_vendor/requests/packages/chardet/langhungarianmodel.py | 2763 | 12536 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
# 255: Control characters that usually do not exist in any text
# 254: Carriage/Return
# 253: symbol (punctuation) that does not belong to word
# 252: 0 - 9
# Character Mapping Table:
Latin2_HungarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
46, 71, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,
175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,
191,192,193,194,195,196,197, 75,198,199,200,201,202,203,204,205,
79,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
221, 51, 81,222, 78,223,224,225,226, 44,227,228,229, 61,230,231,
232,233,234, 58,235, 66, 59,236,237,238, 60, 69, 63,239,240,241,
82, 14, 74,242, 70, 80,243, 72,244, 15, 83, 77, 84, 30, 76, 85,
245,246,247, 25, 73, 42, 24,248,249,250, 31, 56, 29,251,252,253,
)
win1250HungarianCharToOrderMap = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 28, 40, 54, 45, 32, 50, 49, 38, 39, 53, 36, 41, 34, 35, 47,
46, 72, 43, 33, 37, 57, 48, 64, 68, 55, 52,253,253,253,253,253,
253, 2, 18, 26, 17, 1, 27, 12, 20, 9, 22, 7, 6, 13, 4, 8,
23, 67, 10, 5, 3, 21, 19, 65, 62, 16, 11,253,253,253,253,253,
161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,
177,178,179,180, 78,181, 69,182,183,184,185,186,187,188,189,190,
191,192,193,194,195,196,197, 76,198,199,200,201,202,203,204,205,
81,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,
221, 51, 83,222, 80,223,224,225,226, 44,227,228,229, 61,230,231,
232,233,234, 58,235, 66, 59,236,237,238, 60, 70, 63,239,240,241,
84, 14, 75,242, 71, 82,243, 73,244, 15, 85, 79, 86, 30, 77, 87,
245,246,247, 25, 74, 42, 24,248,249,250, 31, 56, 29,251,252,253,
)
# Model Table:
# total sequences: 100%
# first 512 sequences: 94.7368%
# first 1024 sequences: 5.2623%
# rest sequences: 0.8894%
# negative sequences: 0.0009%
HungarianLangModel = (
0,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,2,2,3,3,1,1,2,2,2,2,2,1,2,
3,2,2,3,3,3,3,3,2,3,3,3,3,3,3,1,2,3,3,3,3,2,3,3,1,1,3,3,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,
3,2,1,3,3,3,3,3,2,3,3,3,3,3,1,1,2,3,3,3,3,3,3,3,1,1,3,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,1,1,2,3,3,3,1,3,3,3,3,3,1,3,3,2,2,0,3,2,3,
0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,3,3,2,3,3,2,2,3,2,3,2,0,3,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,3,3,3,3,2,3,3,3,1,2,3,2,2,3,1,2,3,3,2,2,0,3,3,3,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,3,2,3,3,3,3,2,3,3,3,3,0,2,3,2,
0,0,0,1,1,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,3,1,1,1,3,3,2,1,3,2,2,3,2,1,3,2,2,1,0,3,3,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,2,2,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,3,2,2,3,1,1,3,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,2,2,3,3,3,3,3,2,1,3,3,3,3,3,2,2,1,3,3,3,0,1,1,2,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,3,3,3,2,0,3,2,3,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,1,3,2,2,2,3,1,1,3,3,1,1,0,3,3,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,2,3,3,3,2,3,2,3,3,3,2,3,3,3,3,3,1,2,3,2,2,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,2,2,3,1,3,3,2,2,1,3,3,3,1,1,3,1,2,3,2,3,2,2,2,1,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,2,1,3,3,3,2,2,3,2,1,0,3,2,0,1,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,3,3,3,3,3,1,2,3,3,3,3,1,1,0,3,3,3,3,0,2,3,0,0,2,1,0,1,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,2,2,3,3,2,2,2,2,3,3,0,1,2,3,2,3,2,2,3,2,1,2,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,
3,3,3,3,3,3,1,2,3,3,3,2,1,2,3,3,2,2,2,3,2,3,3,1,3,3,1,1,0,2,3,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,1,2,2,2,2,3,3,3,1,1,1,3,3,1,1,3,1,1,3,2,1,2,3,1,1,0,2,2,2,
0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,1,2,1,1,3,3,1,1,1,1,3,3,1,1,2,2,1,2,1,1,2,2,1,1,0,2,2,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,1,1,2,1,1,3,3,1,0,1,1,3,3,2,0,1,1,2,3,1,0,2,2,1,0,0,1,3,2,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,2,1,3,3,3,3,3,1,2,3,2,3,3,2,1,1,3,2,3,2,1,2,2,0,1,2,1,0,0,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,2,2,2,2,3,1,2,2,1,1,3,3,0,3,2,1,2,3,2,1,3,3,1,1,0,2,1,3,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,3,3,2,2,2,3,2,3,3,3,2,1,1,3,3,1,1,1,2,2,3,2,3,2,2,2,1,0,2,2,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
1,0,0,3,3,3,3,3,0,0,3,3,2,3,0,0,0,2,3,3,1,0,1,2,0,0,1,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,2,3,3,3,3,3,1,2,3,3,2,2,1,1,0,3,3,2,2,1,2,2,1,0,2,2,0,1,1,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,2,2,1,3,1,2,3,3,2,2,1,1,2,2,1,1,1,1,3,2,1,1,1,1,2,1,0,1,2,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,
2,3,3,1,1,1,1,1,3,3,3,0,1,1,3,3,1,1,1,1,1,2,2,0,3,1,1,2,0,2,1,1,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,
3,1,0,1,2,1,2,2,0,1,2,3,1,2,0,0,0,2,1,1,1,1,1,2,0,0,1,1,0,0,0,0,
1,2,1,2,2,2,1,2,1,2,0,2,0,2,2,1,1,2,1,1,2,1,1,1,0,1,0,0,0,1,1,0,
1,1,1,2,3,2,3,3,0,1,2,2,3,1,0,1,0,2,1,2,2,0,1,1,0,0,1,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,3,3,2,2,1,0,0,3,2,3,2,0,0,0,1,1,3,0,0,1,1,0,0,2,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,2,2,3,3,1,0,1,3,2,3,1,1,1,0,1,1,1,1,1,3,1,0,0,2,2,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,1,2,2,2,1,0,1,2,3,3,2,0,0,0,2,1,1,1,2,1,1,1,0,1,1,1,0,0,0,
1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,2,1,1,1,1,1,1,0,1,1,1,0,0,1,1,
3,2,2,1,0,0,1,1,2,2,0,3,0,1,2,1,1,0,0,1,1,1,0,1,1,1,1,0,2,1,1,1,
2,2,1,1,1,2,1,2,1,1,1,1,1,1,1,2,1,1,1,2,3,1,1,1,1,1,1,1,1,1,0,1,
2,3,3,0,1,0,0,0,3,3,1,0,0,1,2,2,1,0,0,0,0,2,0,0,1,1,1,0,2,1,1,1,
2,1,1,1,1,1,1,2,1,1,0,1,1,0,1,1,1,0,1,2,1,1,0,1,1,1,1,1,1,1,0,1,
2,3,3,0,1,0,0,0,2,2,0,0,0,0,1,2,2,0,0,0,0,1,0,0,1,1,0,0,2,0,1,0,
2,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1,
3,2,2,0,1,0,1,0,2,3,2,0,0,1,2,2,1,0,0,1,1,1,0,0,2,1,0,1,2,2,1,1,
2,1,1,1,1,1,1,2,1,1,1,1,1,1,0,2,1,0,1,1,0,1,1,1,0,1,1,2,1,1,0,1,
2,2,2,0,0,1,0,0,2,2,1,1,0,0,2,1,1,0,0,0,1,2,0,0,2,1,0,0,2,1,1,1,
2,1,1,1,1,2,1,2,1,1,1,2,2,1,1,2,1,1,1,2,1,1,1,1,1,1,1,1,1,1,0,1,
1,2,3,0,0,0,1,0,3,2,1,0,0,1,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,2,1,
1,1,0,0,0,1,0,1,1,1,1,1,2,0,0,1,0,0,0,2,0,0,1,1,1,1,1,1,1,1,0,1,
3,0,0,2,1,2,2,1,0,0,2,1,2,2,0,0,0,2,1,1,1,0,1,1,0,0,1,1,2,0,0,0,
1,2,1,2,2,1,1,2,1,2,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,0,0,1,
1,3,2,0,0,0,1,0,2,2,2,0,0,0,2,2,1,0,0,0,0,3,1,1,1,1,0,0,2,1,1,1,
2,1,0,1,1,1,0,1,1,1,1,1,1,1,0,2,1,0,0,1,0,1,1,0,1,1,1,1,1,1,0,1,
2,3,2,0,0,0,1,0,2,2,0,0,0,0,2,1,1,0,0,0,0,2,1,0,1,1,0,0,2,1,1,0,
2,1,1,1,1,2,1,2,1,2,0,1,1,1,0,2,1,1,1,2,1,1,1,1,0,1,1,1,1,1,0,1,
3,1,1,2,2,2,3,2,1,1,2,2,1,1,0,1,0,2,2,1,1,1,1,1,0,0,1,1,0,1,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,0,0,0,0,0,2,2,0,0,0,0,2,2,1,0,0,0,1,1,0,0,1,2,0,0,2,1,1,1,
2,2,1,1,1,2,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,1,1,0,1,2,1,1,1,0,1,
1,0,0,1,2,3,2,1,0,0,2,0,1,1,0,0,0,1,1,1,1,0,1,1,0,0,1,0,0,0,0,0,
1,2,1,2,1,2,1,1,1,2,0,2,1,1,1,0,1,2,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
2,3,2,0,0,0,0,0,1,1,2,1,0,0,1,1,1,0,0,0,0,2,0,0,1,1,0,0,2,1,1,1,
2,1,1,1,1,1,1,2,1,0,1,1,1,1,0,2,1,1,1,1,1,1,0,1,0,1,1,1,1,1,0,1,
1,2,2,0,1,1,1,0,2,2,2,0,0,0,3,2,1,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,
1,1,0,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,0,0,1,1,1,0,1,0,1,
2,1,0,2,1,1,2,2,1,1,2,1,1,1,0,0,0,1,1,0,1,1,1,1,0,0,1,1,1,0,0,0,
1,2,2,2,2,2,1,1,1,2,0,2,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,1,0,
1,2,3,0,0,0,1,0,2,2,0,0,0,0,2,2,0,0,0,0,0,1,0,0,1,0,0,0,2,0,1,0,
2,1,1,1,1,1,0,2,0,0,0,1,2,1,1,1,1,0,1,2,0,1,0,1,0,1,1,1,0,1,0,1,
2,2,2,0,0,0,1,0,2,1,2,0,0,0,1,1,2,0,0,0,0,1,0,0,1,1,0,0,2,1,0,1,
2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,0,1,1,1,1,1,0,1,
1,2,2,0,0,0,1,0,2,2,2,0,0,0,1,1,0,0,0,0,0,1,1,0,2,0,0,1,1,1,0,1,
1,0,1,1,1,1,1,1,0,1,1,1,1,0,0,1,0,0,1,1,0,1,0,1,1,1,1,1,0,0,0,1,
1,0,0,1,0,1,2,1,0,0,1,1,1,2,0,0,0,1,1,0,1,0,1,1,0,0,1,0,0,0,0,0,
0,2,1,2,1,1,1,1,1,2,0,2,0,1,1,0,1,2,1,0,1,1,1,0,0,0,0,0,0,1,0,0,
2,1,1,0,1,2,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,1,0,0,0,2,1,0,1,
2,2,1,1,1,1,1,2,1,1,0,1,1,1,1,2,1,1,1,2,1,1,0,1,0,1,1,1,1,1,0,1,
1,2,2,0,0,0,0,0,1,1,0,0,0,0,2,1,0,0,0,0,0,2,0,0,2,2,0,0,2,0,0,1,
2,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,0,0,0,1,1,1,1,0,0,1,1,1,1,0,0,1,
1,1,2,0,0,3,1,0,2,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,0,0,1,0,1,0,
1,2,1,0,1,1,1,2,1,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,1,0,0,0,1,0,0,
2,1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,2,0,0,0,
2,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,2,1,1,0,0,1,1,1,1,1,0,1,
2,1,1,1,2,1,1,1,0,1,1,2,1,0,0,0,0,1,1,1,1,0,1,0,0,0,0,1,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,0,1,1,1,1,1,0,0,1,1,2,1,0,0,0,1,1,0,0,0,1,1,0,0,1,0,1,0,0,0,
1,2,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0,
2,0,0,0,1,1,1,1,0,0,1,1,0,0,0,0,0,1,1,1,2,0,0,1,0,0,1,0,1,0,0,0,
0,1,1,1,1,1,1,1,1,2,0,1,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,1,0,0,2,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,
0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,0,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,0,0,1,1,0,1,0,1,0,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,1,0,0,0,0,0,0,1,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,0,1,0,0,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,
2,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0,0,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,1,1,1,1,1,1,0,1,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
)
Latin2HungarianModel = {
'charToOrderMap': Latin2_HungarianCharToOrderMap,
'precedenceMatrix': HungarianLangModel,
'mTypicalPositiveRatio': 0.947368,
'keepEnglishLetter': True,
'charsetName': "ISO-8859-2"
}
Win1250HungarianModel = {
'charToOrderMap': win1250HungarianCharToOrderMap,
'precedenceMatrix': HungarianLangModel,
'mTypicalPositiveRatio': 0.947368,
'keepEnglishLetter': True,
'charsetName': "windows-1250"
}
# flake8: noqa
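# Editor's note: a hedged sketch, not part of the original module. In
# this chardet version the model dicts above are consumed by
# SingleByteCharSetProber (see sbcsgroupprober.py); the import path
# below is an assumption for standalone use:
#
#   from chardet.sbcharsetprober import SingleByteCharSetProber
#   prober = SingleByteCharSetProber(Win1250HungarianModel)
#   prober.feed(some_bytes)
#   print(prober.get_charset_name(), prober.get_confidence())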
| mit |
g-vidal/upm | examples/python/grovevdiv.py | 4 | 2183 | #!/usr/bin/env python
# Author: Zion Orent <zorent@ics.com>
# Copyright (c) 2015 Intel Corporation.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from __future__ import print_function
import time, sys, signal, atexit
from upm import pyupm_grovevdiv as upmGrovevdiv
def main():
# Instantiate a Grove Voltage Divider sensor on analog pin A0
myVoltageDivider = upmGrovevdiv.GroveVDiv(0)
## Exit handlers ##
# This stops python from printing a stacktrace when you hit control-C
def SIGINTHandler(signum, frame):
raise SystemExit
# This function lets you run code on exit,
# including functions from myVoltageDivider
def exitHandler():
print("Exiting")
sys.exit(0)
# Register exit handlers
atexit.register(exitHandler)
signal.signal(signal.SIGINT, SIGINTHandler)
while(1):
val = myVoltageDivider.value(100)
gain3val = myVoltageDivider.computedValue(3, val)
gain10val = myVoltageDivider.computedValue(10, val)
print("ADC value: {0} Gain 3: {1}v Gain 10: {2}v".format(
val, gain3val, gain10val))
time.sleep(1)
if __name__ == '__main__':
main()
| mit |
phaniso/phpmonitor-frontend | user_guide_src/cilexer/cilexer/cilexer.py | 241 | 2214 | # CodeIgniter
# http://codeigniter.com
#
# An open source application development framework for PHP
#
# This content is released under the MIT License (MIT)
#
# Copyright (c) 2014 - 2015, British Columbia Institute of Technology
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# Copyright (c) 2008 - 2014, EllisLab, Inc. (http://ellislab.com/)
# Copyright (c) 2014 - 2015, British Columbia Institute of Technology (http://bcit.ca/)
#
# http://opensource.org/licenses/MIT MIT License
import re
import copy
from pygments.lexer import DelegatingLexer
from pygments.lexers.web import PhpLexer, HtmlLexer
__all__ = ['CodeIgniterLexer']
class CodeIgniterLexer(DelegatingLexer):
"""
Handles highlighting of HTML, PHP, JavaScript, and CSS
PHP is highlighted with the "startinline" option
"""
name = 'CodeIgniter'
aliases = ['ci', 'codeigniter']
filenames = ['*.html', '*.css', '*.php', '*.xml', '*.static']
mimetypes = ['text/html', 'application/xhtml+xml']
def __init__(self, **options):
super(CodeIgniterLexer, self).__init__(HtmlLexer,
PhpLexer,
startinline=True)
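# Editor's note: an illustrative sketch, not part of the original
# module, of using the lexer with Pygments' standard highlight API.
def _example_highlight(source_code):
    from pygments import highlight
    from pygments.formatters import HtmlFormatter
    return highlight(source_code, CodeIgniterLexer(), HtmlFormatter())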
| mit |
shawger/pucklab | webapp/src/pieces/page.py | 1 | 1311 |
'''
Page is an object used for displaying a standard page
'''
import os
import jinja2
import sys
import nav
from flask.ext.pymongo import PyMongo
# Set up the JINJA environment
templatePath = os.path.abspath(os.path.join(os.path.dirname(__file__),
'..',
'..',
'templates'))
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(templatePath),
extensions=['jinja2.ext.autoescape'],
autoescape=False)
class Page:
def __init__(self, db):
self.title = ""
self.content = ""
self.nav = nav.Nav(db)
self.db = db
self.script = ""
self.data = ""
self.styles = []
self.d3 = False
'''
Render returns a string that can be sent back to a browser
in the form of a web page
'''
def render(self):
if self.nav is not None:
nav = self.nav.render()
else:
nav = ""
template_values = {
'page': self,
'nav': nav
}
template = JINJA_ENVIRONMENT.get_template('standard.html')
return template.render(template_values)
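# Editor's note: a hedged sketch, not part of the original module, of
# the intended usage; it assumes a PyMongo database handle `db` as
# consumed by nav.Nav, and that templates/standard.html exists.
def _example_render(db):
    p = Page(db)
    p.title = "Home"
    p.content = "<p>Hello</p>"
    return p.render()  # HTML string ready to send to the browser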
| gpl-3.0 |
iver333/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/common/system/outputcapture.py | 124 | 5478 | # Copyright (c) 2009, Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Class for unittest support. Used for capturing stderr/stdout.
import logging
import unittest # Don't use unittest2 here as the autoinstaller may not have it yet.
import sys
from StringIO import StringIO
class OutputCapture(object):
# By default we capture the output to a stream. Other modules may override
# this function in order to do things like pass through the output. See
# webkitpy.test.main for an example.
@staticmethod
def stream_wrapper(stream):
return StringIO()
def __init__(self):
self.saved_outputs = dict()
self._log_level = logging.INFO
def set_log_level(self, log_level):
self._log_level = log_level
if hasattr(self, '_logs_handler'):
self._logs_handler.setLevel(self._log_level)
def _capture_output_with_name(self, output_name):
stream = getattr(sys, output_name)
captured_output = self.stream_wrapper(stream)
self.saved_outputs[output_name] = stream
setattr(sys, output_name, captured_output)
return captured_output
def _restore_output_with_name(self, output_name):
captured_output = getattr(sys, output_name).getvalue()
setattr(sys, output_name, self.saved_outputs[output_name])
del self.saved_outputs[output_name]
return captured_output
def capture_output(self):
self._logs = StringIO()
self._logs_handler = logging.StreamHandler(self._logs)
self._logs_handler.setLevel(self._log_level)
self._logger = logging.getLogger()
self._orig_log_level = self._logger.level
self._logger.addHandler(self._logs_handler)
self._logger.setLevel(min(self._log_level, self._orig_log_level))
return (self._capture_output_with_name("stdout"), self._capture_output_with_name("stderr"))
def restore_output(self):
self._logger.removeHandler(self._logs_handler)
self._logger.setLevel(self._orig_log_level)
self._logs_handler.flush()
self._logs.flush()
logs_string = self._logs.getvalue()
delattr(self, '_logs_handler')
delattr(self, '_logs')
return (self._restore_output_with_name("stdout"), self._restore_output_with_name("stderr"), logs_string)
def assert_outputs(self, testcase, function, args=[], kwargs={}, expected_stdout="", expected_stderr="", expected_exception=None, expected_logs=None):
self.capture_output()
try:
if expected_exception:
return_value = testcase.assertRaises(expected_exception, function, *args, **kwargs)
else:
return_value = function(*args, **kwargs)
finally:
(stdout_string, stderr_string, logs_string) = self.restore_output()
if hasattr(testcase, 'assertMultiLineEqual'):
testassert = testcase.assertMultiLineEqual
else:
testassert = testcase.assertEqual
testassert(stdout_string, expected_stdout)
testassert(stderr_string, expected_stderr)
if expected_logs is not None:
testassert(logs_string, expected_logs)
# This is a little strange, but I don't know where else to return this information.
return return_value
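# Illustrative capture/restore sketch (the print is a stand-in for code under
# test; Python 2 syntax to match this module):
#
# capturer = OutputCapture()
# capturer.capture_output()
# print "hello" # written to the captured StringIO, not the real stdout
# stdout, stderr, logs = capturer.restore_output()
# assert stdout == "hello\n"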
class OutputCaptureTestCaseBase(unittest.TestCase):
maxDiff = None
def setUp(self):
unittest.TestCase.setUp(self)
self.output_capture = OutputCapture()
(self.__captured_stdout, self.__captured_stderr) = self.output_capture.capture_output()
def tearDown(self):
del self.__captured_stdout
del self.__captured_stderr
self.output_capture.restore_output()
unittest.TestCase.tearDown(self)
def assertStdout(self, expected_stdout):
self.assertEqual(expected_stdout, self.__captured_stdout.getvalue())
def assertStderr(self, expected_stderr):
self.assertEqual(expected_stderr, self.__captured_stderr.getvalue())
| bsd-3-clause |
abhattad4/Digi-Menu | django/forms/extras/widgets.py | 82 | 5299 | """
Extra HTML Widget classes
"""
from __future__ import unicode_literals
import datetime
import re
from django.conf import settings
from django.forms.widgets import Select, Widget
from django.utils import datetime_safe, six
from django.utils.dates import MONTHS
from django.utils.encoding import force_str
from django.utils.formats import get_format
from django.utils.safestring import mark_safe
__all__ = ('SelectDateWidget',)
RE_DATE = re.compile(r'(\d{4})-(\d\d?)-(\d\d?)$')
def _parse_date_fmt():
fmt = get_format('DATE_FORMAT')
escaped = False
for char in fmt:
if escaped:
escaped = False
elif char == '\\':
escaped = True
elif char in 'Yy':
yield 'year'
elif char in 'bEFMmNn':
yield 'month'
elif char in 'dj':
yield 'day'
class SelectDateWidget(Widget):
"""
A Widget that splits date input into three <select> boxes.
This also serves as an example of a Widget that has more than one HTML
element and hence implements value_from_datadict.
"""
none_value = (0, '---')
month_field = '%s_month'
day_field = '%s_day'
year_field = '%s_year'
def __init__(self, attrs=None, years=None, months=None, empty_label=None):
self.attrs = attrs or {}
# Optional list or tuple of years to use in the "year" select box.
if years:
self.years = years
else:
this_year = datetime.date.today().year
self.years = range(this_year, this_year + 10)
# Optional dict of months to use in the "month" select box.
if months:
self.months = months
else:
self.months = MONTHS
# Optional string, list, or tuple to use as empty_label.
if isinstance(empty_label, (list, tuple)):
if not len(empty_label) == 3:
raise ValueError('empty_label list/tuple must have 3 elements.')
self.year_none_value = (0, empty_label[0])
self.month_none_value = (0, empty_label[1])
self.day_none_value = (0, empty_label[2])
else:
if empty_label is not None:
self.none_value = (0, empty_label)
self.year_none_value = self.none_value
self.month_none_value = self.none_value
self.day_none_value = self.none_value
def render(self, name, value, attrs=None):
try:
year_val, month_val, day_val = value.year, value.month, value.day
except AttributeError:
year_val = month_val = day_val = None
if isinstance(value, six.string_types):
if settings.USE_L10N:
try:
input_format = get_format('DATE_INPUT_FORMATS')[0]
v = datetime.datetime.strptime(force_str(value), input_format)
year_val, month_val, day_val = v.year, v.month, v.day
except ValueError:
pass
else:
match = RE_DATE.match(value)
if match:
year_val, month_val, day_val = [int(v) for v in match.groups()]
html = {}
choices = [(i, i) for i in self.years]
html['year'] = self.create_select(name, self.year_field, value, year_val, choices, self.year_none_value)
choices = list(six.iteritems(self.months))
html['month'] = self.create_select(name, self.month_field, value, month_val, choices, self.month_none_value)
choices = [(i, i) for i in range(1, 32)]
html['day'] = self.create_select(name, self.day_field, value, day_val, choices, self.day_none_value)
output = []
for field in _parse_date_fmt():
output.append(html[field])
return mark_safe('\n'.join(output))
def id_for_label(self, id_):
for first_select in _parse_date_fmt():
return '%s_%s' % (id_, first_select)
else:
return '%s_month' % id_
def value_from_datadict(self, data, files, name):
y = data.get(self.year_field % name)
m = data.get(self.month_field % name)
d = data.get(self.day_field % name)
if y == m == d == "0":
return None
if y and m and d:
if settings.USE_L10N:
input_format = get_format('DATE_INPUT_FORMATS')[0]
try:
date_value = datetime.date(int(y), int(m), int(d))
except ValueError:
return '%s-%s-%s' % (y, m, d)
else:
date_value = datetime_safe.new_date(date_value)
return date_value.strftime(input_format)
else:
return '%s-%s-%s' % (y, m, d)
return data.get(name, None)
def create_select(self, name, field, value, val, choices, none_value):
if 'id' in self.attrs:
id_ = self.attrs['id']
else:
id_ = 'id_%s' % name
if not self.is_required:
choices.insert(0, none_value)
local_attrs = self.build_attrs(id=field % id_)
s = Select(choices=choices)
select_html = s.render(field % name, val, local_attrs)
return select_html
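# Illustrative usage sketch (a hypothetical form; the widget renders one
# DateField as three <select> boxes):
#
# from django import forms
# class BirthdayForm(forms.Form):
# birthday = forms.DateField(widget=SelectDateWidget(years=range(1950, 2016)))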
| bsd-3-clause |
maljac/odoo-addons | project_task_order/__openerp__.py | 1 | 1554 | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2015 ADHOC SA (http://www.adhoc.com.ar)
# All Rights Reserved.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': 'Project Order',
'version': '1.0',
'category': 'Projects & Services',
'sequence': 14,
'summary': '',
'description': """
Project Order
===================
Change default tasks order to the following criteria:
"priority desc, sequence, date_deadline, duration, create_date desc"
""",
'author': 'ADHOC SA',
'website': 'www.adhoc.com.ar',
'images': [
],
'depends': [
'project',
],
'data': [
],
'demo': [
],
'test': [
],
'installable': True,
'auto_install': False,
'application': False,
} | agpl-3.0 |
cloudera/hue | apps/jobbrowser/src/jobbrowser/api.py | 2 | 9316 | #!/usr/bin/env python
# Licensed to Cloudera, Inc. under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Cloudera, Inc. licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from builtins import object
import logging
import sys
from datetime import datetime, timedelta
from django.core.paginator import Paginator
from desktop.lib.exceptions_renderable import PopupException
from desktop.lib.rest.http_client import RestException
from hadoop.conf import YARN_CLUSTERS
from hadoop.cluster import rm_ha
import hadoop.yarn.history_server_api as history_server_api
import hadoop.yarn.mapreduce_api as mapreduce_api
import hadoop.yarn.node_manager_api as node_manager_api
import hadoop.yarn.resource_manager_api as resource_manager_api
import hadoop.yarn.spark_history_server_api as spark_history_server_api
from jobbrowser.conf import SHARE_JOBS
from jobbrowser.yarn_models import Application, YarnV2Job, Job as YarnJob, KilledJob as KilledYarnJob, Container, SparkJob
from desktop.auth.backend import is_admin
if sys.version_info[0] > 2:
from django.utils.translation import gettext as _
else:
from django.utils.translation import ugettext as _
LOG = logging.getLogger(__name__)
_DEFAULT_OBJ_PER_PAGINATION = 10
def get_api(user, jt):
return YarnApi(user)
class JobBrowserApi(object):
def paginate_task(self, task_list, pagenum):
paginator = Paginator(task_list, _DEFAULT_OBJ_PER_PAGINATION, allow_empty_first_page=True)
return paginator.page(pagenum)
class YarnApi(JobBrowserApi):
"""
List all the jobs with Resource Manager API.
Get running single job information with MapReduce API.
Get finished single job information with History Server API.
The trick is that we use appid when the job is running and jobid when it is finished.
We also suppose that each app id has only one MR job id.
e.g. job_1355791146953_0105, application_1355791146953_0105
A better alternative might be to call the Resource Manager instead of relying on the type of job id.
The perfect solution would be to have all this logic embedded
"""
def __init__(self, user):
self.user = user
self.resource_manager_api = None
self.mapreduce_api = None
self.history_server_api = None
self.spark_history_server_api = None
if list(YARN_CLUSTERS.keys()):
self.resource_manager_api = resource_manager_api.get_resource_manager(user.username)
self.mapreduce_api = mapreduce_api.get_mapreduce_api(user.username)
self.history_server_api = history_server_api.get_history_server_api(user.username)
self.spark_history_server_api = spark_history_server_api.get_history_server_api() # Spark HS does not support setuser
def get_job_link(self, job_id):
return self.get_job(job_id)
@rm_ha
def get_jobs(self, user, **kwargs):
state_filters = {'running': 'UNDEFINED', 'completed': 'SUCCEEDED', 'failed': 'FAILED', 'killed': 'KILLED',}
states_filters = {'running': 'NEW,NEW_SAVING,SUBMITTED,ACCEPTED,RUNNING', 'completed': 'FINISHED', 'failed': 'FAILED,KILLED',}
filters = {}
if kwargs['username']:
filters['user'] = kwargs['username']
if kwargs['state'] and kwargs['state'] != 'all':
filters['finalStatus'] = state_filters[kwargs['state']]
if kwargs.get('states'):
filters['states'] = ','.join([states_filters[_s] for _s in kwargs['states']])
if kwargs.get('limit'):
filters['limit'] = kwargs['limit']
if kwargs.get('time_value'):
filters['startedTimeBegin'] = self._get_started_time_begin(kwargs.get('time_value'), kwargs.get('time_unit'))
if self.resource_manager_api: # This happens when yarn is not configured, but we need jobbrowser for Impala
json = self.resource_manager_api.apps(**filters)
else:
json = {}
if type(json) == str and 'This is standby RM' in json:
raise Exception(json)
if json.get('apps'):
jobs = [Application(app) for app in json['apps']['app']]
else:
return []
if kwargs['text']:
text = kwargs['text'].lower()
jobs = [job for job in jobs if text in job.name.lower() or
text in job.id.lower() or
text in job.user.lower() or
text in job.queue.lower()]
return self.filter_jobs(user, jobs)
def _get_started_time_begin(self, time_value, time_unit):
if time_unit == 'hours':
start_date = datetime.utcnow() - timedelta(hours=time_value)
elif time_unit == 'minutes':
start_date = datetime.utcnow() - timedelta(minutes=time_value)
else:
start_date = datetime.utcnow() - timedelta(days=time_value)
elapsed_time = start_date - datetime.utcfromtimestamp(0)
return int(elapsed_time.days * 86400 + elapsed_time.seconds) * 1000
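# Illustrative example (hypothetical values): _get_started_time_begin(4, 'hours')
# returns the epoch timestamp in milliseconds for "4 hours ago".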
def filter_jobs(self, user, jobs, **kwargs):
check_permission = not SHARE_JOBS.get() and not is_admin(user)
return [job for job in jobs if not check_permission or
is_admin(user) or
job.user == user.username]
def _get_job_from_history_server(self, job_id):
resp = self.history_server_api.job(self.user, job_id)
return YarnJob(self.history_server_api, resp['job'])
@rm_ha
def get_job(self, jobid):
job_id = jobid.replace('application', 'job')
app_id = jobid.replace('job', 'application')
try:
app = self.resource_manager_api.app(app_id)['app']
if app['applicationType'] == 'Oozie Launcher' or app['applicationType'] == 'TEZ' or app['applicationType'] == 'yarn-service':
job = YarnV2Job(self.resource_manager_api, app)
elif app['finalStatus'] in ('SUCCEEDED', 'FAILED', 'KILLED'):
if app['applicationType'] == 'SPARK':
job = SparkJob(app, rm_api=self.resource_manager_api, hs_api=self.spark_history_server_api)
elif app['state'] in ('KILLED', 'FAILED'):
job = KilledYarnJob(self.resource_manager_api, app)
else: # Job succeeded, attempt to fetch from JHS
job = self._get_job_from_history_server(job_id)
else:
if app['state'] == 'ACCEPTED':
raise ApplicationNotRunning(app_id, app)
# The MapReduce API only returns JSON when the application is in a RUNNING state
elif app['state'] in ('NEW', 'SUBMITTED', 'RUNNING') and app['applicationType'] == 'MAPREDUCE':
resp = self.mapreduce_api.job(self.user, job_id)
if not isinstance(resp, dict):
raise PopupException(_('Mapreduce Proxy API did not return JSON response, check if the job is running.'))
job = YarnJob(self.mapreduce_api, resp['job'])
elif app['state'] in ('NEW', 'SUBMITTED', 'RUNNING') and app['applicationType'] == 'SPARK':
job = SparkJob(app, rm_api=self.resource_manager_api, hs_api=self.spark_history_server_api)
else:
job = Application(app, self.resource_manager_api)
except RestException as e:
if e.code == 404: # Job not found in RM so attempt to find job in JHS
job = self._get_job_from_history_server(job_id)
else:
LOG.error("Job %s has expired: %s" % (app_id, e))
raise JobExpired(app_id)
except PopupException as e:
if 'NotFoundException' in e.message:
job = self._get_job_from_history_server(job_id)
else:
raise e
except ApplicationNotRunning as e:
raise e
except Exception as e:
raise PopupException('Job %s could not be found: %s' % (jobid, e), detail=e)
return job
def get_application(self, jobid):
app = None
app_id = jobid.replace('job', 'application')
try:
app = self.resource_manager_api.app(app_id)['app']
except RestException as e:
raise PopupException(_('Job %s could not be found in Resource Manager: %s') % (jobid, e), detail=e)
except ApplicationNotRunning as e:
raise PopupException(_('Application is not running: %s') % e, detail=e)
except Exception as e:
raise PopupException(_('Job %s could not be found: %s') % (jobid, e), detail=e)
return app
def get_tasks(self, jobid, **filters):
filters.pop('pagenum')
return self.get_job(jobid).filter_tasks(**filters)
def get_task(self, jobid, task_id):
return self.get_job(jobid).get_task(task_id)
def get_tracker(self, node_manager_http_address, container_id):
api = node_manager_api.get_node_manager_api('http://' + node_manager_http_address)
return Container(api.container(container_id))
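# Illustrative sketch of the job/application id translation described in the
# YarnApi docstring (hypothetical helper, not part of the Hue API):
#
# def to_job_and_app_id(any_id):
# return any_id.replace('application', 'job'), any_id.replace('job', 'application')
#
# to_job_and_app_id('application_1355791146953_0105')
# # -> ('job_1355791146953_0105', 'application_1355791146953_0105')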
class ApplicationNotRunning(Exception):
def __init__(self, application_id, job):
self.application_id = application_id
self.job = job
class JobExpired(Exception):
def __init__(self, job):
super(JobExpired, self).__init__('JobExpired: %s' %job)
self.job = job
| apache-2.0 |
pombredanne/teamwork | exts/wsgi/static/Brython2.1.0-20140419-113919/Lib/ui/dialog.py | 109 | 2102 | import widget
from browser import html, doc
class Dialog(widget.DraggableWidget):
def __init__(self, id=None):
self._div_shell=html.DIV(
Class="ui-dialog ui-widget ui-widget-content ui-corner-all ui-front ui-draggable ui-resizable",
style={'position': 'absolute', 'height': 'auto', 'width': '300px',
'top': '98px', 'left': '140px', 'display': 'block'})
widget.DraggableWidget.__init__(self, self._div_shell, 'dialog', id)
_div_titlebar=html.DIV(Id="titlebar",
Class="ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix")
self._div_shell <= _div_titlebar
self._div_title=html.SPAN(Id="title", Class="ui-dialog-title")
_div_titlebar <= self._div_title
self._title_button=html.BUTTON(Title="close",
Class="ui-button ui-widget ui-state-default ui-corner-all ui-button-icon-only ui-dialog-titlebar-close")
def dialog_close(e):
#del document[self._div_shell.id]
del doc[self._div_shell.id]
self._title_button.bind('click', dialog_close)
_span=html.SPAN(Class="ui-button-icon-primary ui-icon ui-icon-closethick")
self._title_button <= _span
_span=html.SPAN('close', Class="ui-button-text")
self._title_button <= _span
_div_titlebar <= self._title_button
self._div_dialog=html.DIV(Class="ui-dialog-content ui-widget-content",
style={'width': 'auto', 'min-height': '105px',
'max-height': 'none', 'height': 'auto'})
self._div_shell <= self._div_dialog
for _i in ['n', 'e', 's', 'w', 'se', 'sw', 'ne', 'nw']:
if _i == 'se':
_class="ui-resizable-handle ui-resizable-%s ui-icon ui-icon-gripsmall-diagonal-%s" % (_i, _i)
else:
_class="ui-resizable-handle ui-resizable-%s" % _i
self._div_shell <= html.DIV(Class=_class, style={'z-index': '90'})
doc <= self._div_shell
def set_title(self, title):
self._div_title.set_text(title)
def set_body(self, body):
self._div_dialog.set_html(body)
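# Illustrative usage sketch (runs in the browser under Brython; the dialog
# attaches itself to the document on construction):
#
# d = Dialog()
# d.set_title('About')
# d.set_body('<p>Hello from Brython</p>')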
| gpl-2.0 |
inn1983/teropi | lib/freetype/src/tools/cordic.py | 401 | 1588 | # compute arctangent table for CORDIC computations in fttrigon.c
import sys, math
#units = 64*65536.0 # don't change !!
units = 256
scale = units/math.pi
shrink = 1.0
comma = ""
def calc_val( x ):
global units, shrink
angle = math.atan(x)
shrink = shrink * math.cos(angle)
return angle/math.pi * units
def print_val( n, x ):
global comma
lo = int(x)
hi = lo + 1
alo = math.atan(lo)
ahi = math.atan(hi)
ax = math.atan(2.0**n)
errlo = abs( alo - ax )
errhi = abs( ahi - ax )
if ( errlo < errhi ):
hi = lo
sys.stdout.write( comma + repr( int(hi) ) )
comma = ", "
print ""
print "table of arctan( 1/2^n ) for PI = " + repr(units/65536.0) + " units"
# compute range of "i"
r = [-1]
r = r + range(32)
for n in r:
if n >= 0:
x = 1.0/(2.0**n) # tangent value
else:
x = 2.0**(-n)
angle = math.atan(x) # arctangent
angle2 = angle*scale # arctangent in FT_Angle units
# determine which integer value for angle gives the best tangent
lo = int(angle2)
hi = lo + 1
tlo = math.tan(lo/scale)
thi = math.tan(hi/scale)
errlo = abs( tlo - x )
errhi = abs( thi - x )
angle2 = hi
if errlo < errhi:
angle2 = lo
if angle2 <= 0:
break
sys.stdout.write( comma + repr( int(angle2) ) )
comma = ", "
shrink = shrink * math.cos( angle2/scale)
print
print "shrink factor = " + repr( shrink )
print "shrink factor 2 = " + repr( shrink * (2.0**32) )
print "expansion factor = " + repr(1/shrink)
print ""
| gpl-2.0 |
mazafrav/JdeRobot | src/drivers/MAVLinkServer/modules/mavproxy_misseditor/me_event.py | 14 | 1762 | #!/usr/bin/env python
'''
Event class and enums for Mission Editor
Michael Day
June 2014
'''
#MissionEditorEvents come FROM the GUI (with a few exceptions where the Mission Editor Module sends a message to itself, e.g., MEE_TIME_TO_QUIT)
#MissionEditorGUIEvents go TO the GUI
#enum for MissionEditorEvent types
MEE_READ_WPS = 0
MEE_WRITE_WPS = 1
MEE_TIME_TO_QUIT = 2
MEE_GET_WP_RAD = 3
MEE_GET_LOIT_RAD = 4
MEE_GET_WP_DEFAULT_ALT = 5
MEE_WRITE_WP_NUM = 6
MEE_LOAD_WP_FILE = 7
MEE_SAVE_WP_FILE = 8
MEE_SET_WP_RAD = 9
MEE_SET_LOIT_RAD = 10
MEE_SET_WP_DEFAULT_ALT = 11
#enum of MissionEditorGUIEvent types
MEGE_CLEAR_MISS_TABLE = 0
MEGE_ADD_MISS_TABLE_ROWS = 1
MEGE_SET_MISS_ITEM = 2
MEGE_SET_WP_RAD = 3
MEGE_SET_LOIT_RAD = 4
MEGE_SET_WP_DEFAULT_ALT = 5
MEGE_SET_LAST_MAP_CLICK_POS = 6
class MissionEditorEvent:
def __init__(self, type, **kwargs):
self.type = type
self.arg_dict = kwargs
if not self.type in [MEE_READ_WPS, MEE_WRITE_WPS, MEGE_CLEAR_MISS_TABLE,
MEGE_ADD_MISS_TABLE_ROWS, MEGE_SET_MISS_ITEM, MEE_TIME_TO_QUIT,
MEE_GET_WP_RAD, MEE_GET_LOIT_RAD, MEGE_SET_WP_RAD, MEGE_SET_LOIT_RAD,
MEE_GET_WP_DEFAULT_ALT, MEGE_SET_WP_DEFAULT_ALT, MEE_WRITE_WP_NUM,
MEE_LOAD_WP_FILE, MEE_SAVE_WP_FILE, MEE_SET_WP_RAD, MEE_SET_LOIT_RAD,
MEE_SET_WP_DEFAULT_ALT]:
raise TypeError("Unrecongized MissionEditorEvent type:" + str(self.type))
def get_type(self):
return self.type
def get_arg(self, key):
if not key in self.arg_dict:
print("No key %s in %s" % (key, str(self.type)))
return None
return self.arg_dict[key]
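# Illustrative usage sketch (hypothetical values):
#
# evt = MissionEditorEvent(MEE_SET_WP_RAD, rad=100)
# evt.get_type() # -> MEE_SET_WP_RAD (9)
# evt.get_arg('rad') # -> 100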
| gpl-3.0 |
tillahoffmann/tensorflow | tensorflow/contrib/mpi_collectives/__init__.py | 6 | 11367 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=g-short-docstring-punctuation
"""## Communicating Between Processes with MPI
TensorFlow natively provides inter-device communication through send and
receive ops and inter-node communication through Distributed TensorFlow, based
on the same send and receive abstractions. On HPC clusters where Infiniband or
other high-speed node interconnects are available, these can end up being
insufficient for synchronous data-parallel training (without asynchronous
gradient descent). This module implements a variety of MPI ops which can take
advantage of hardware-specific MPI libraries for efficient communication.
In order to use this module, TensorFlow must be built with an MPI library,
which can be provided to the `./configure` script at build time. As a user of
TensorFlow, you will need to build TensorFlow yourself to select the MPI
library to use; to do so, follow the [instructions for building TensorFlow from
source](https://www.tensorflow.org/get_started/os_setup#installing_from_sources).
### Utility Ops
In addition to reductions and gathers, this module provides utility operations
for detecting the running MPI configuration.
Example:
```python
from tensorflow.contrib import mpi
# Use `mpi.Session` instead of `tf.Session`
with mpi.Session() as session:
rank = session.run(mpi.rank())
print("My MPI Rank:", rank)
if rank == 0:
print("MPI Size:", session.run(mpi.size()))
```
@@rank
@@size
### Ring Allreduce and Allgather
When summing or averaging tensors across many processes, communication can
easily become a bottleneck. A naive implementation will send all the tensor
values to the same process, perform the reduction, and then broadcast the
values back to all other processes, effectively creating a synchronous
parameter server in one process. However, the process responsible for
performing the reduction will have to receive and send a massive amount of data
which scales with the number of processes *and* the number of parameters in the
model.
Instead of centralizing the reduction and having one primary reducer, we can
implement a distributed allreduce or allgather. A bandwidth-optimal allreduce
will end up sending 2(N - 1) values for every value in the input tensor,
and can be implemented with a ring allreduce [1]. (Intuitively, a linear reduce
requires at least (N - 1) sends between the different nodes, and a broadcast of
the result also requires (N - 1) sends, for a total of 2 (N - 1); these two
steps cannot be combined in a clever way to reduce the number of required
sends.) This module implements bandwidth-optimal ring allreduce and ring
allgather operations using MPI; by choosing a hardware-appropriate MPI
implementation (such as OpenMPI with CUDA-IPC support), you can train large
models with synchronous gradient descent with minimal communication overhead.
In addition to the `allreduce` and `allgather` functions, a convenience
`DistributedOptimizer` wrapper is provided to simplify using these functions
for reducing model gradients.
Example:
```python
import tensorflow as tf
from tensorflow.contrib import mpi_collectives as mpi
# Construct a simple linear regression model to optimize
W = tf.get_variable("W", shape=[20, 1], dtype=tf.float32)
B = tf.get_variable("B", shape=[1, 1], dtype=tf.float32)
inputs = tf.placeholder("Inputs", shape=[None, 20])
outputs = tf.placeholder("Outputs", shape=[None, 1])
loss = tf.nn.l2_loss(tf.matmul(inputs, W) + B - outputs)
# Training using MPI allreduce with DistributedOptimizer
optimizer = mpi.DistributedOptimizer(tf.train.AdamOptimizer())
train = optimizer.minimize(loss)
# Average loss over all ranks, for printing.
# Do not pass this to an optimizer!
avg_loss = mpi.allreduce(loss)
# On different ranks, feed different input data.
with mpi.Session() as session:
rank = session.run(mpi.rank())
batch_inputs, batch_outputs = construct_batch_for_rank(rank)
feed_dict = {inputs: batch_inputs, outputs: batch_outputs}
_, l = session.run([train, avg_loss], feed_dict=feed_dict)
print("Average Loss:", l)
```
[1] Patarasuk, Pitch and Yuan, Xin. "Bandwidth Optimal All-reduce Algorithms
for Clusters of Workstations".
@@Session
@@DistributedOptimizer
@@allreduce
@@allgather
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib.mpi_collectives.mpi_ops import size
from tensorflow.contrib.mpi_collectives.mpi_ops import rank
from tensorflow.contrib.mpi_collectives.mpi_ops import local_rank
from tensorflow.contrib.mpi_collectives.mpi_ops import allgather
from tensorflow.contrib.mpi_collectives.mpi_ops import _allreduce
from tensorflow.contrib.mpi_collectives.mpi_ops import init
def allreduce(tensor, average=True):
"""Perform an MPI allreduce on a tf.Tensor or tf.IndexedSlices.
Arguments:
tensor: tf.Tensor, tf.Variable, or tf.IndexedSlices to reduce.
The shape of the input must be identical across all ranks.
average: If True, computes the average over all ranks.
Otherwise, computes the sum over all ranks.
This function performs a bandwidth-optimal ring allreduce on the input
tensor. If the input is an tf.IndexedSlices, the function instead does an
allgather on the values and the indices, effectively doing an allreduce on
the represented tensor.
"""
if isinstance(tensor, tf.IndexedSlices):
# For IndexedSlices, do two allgathers instead of an allreduce.
mpi_size = tf.cast(size(), tensor.values.dtype)
values = allgather(tensor.values)
indices = allgather(tensor.indices)
# To make this operation into an average, divide all gathered values by
# the MPI size.
new_values = tf.div(values, mpi_size) if average else values
return tf.IndexedSlices(new_values, indices,
dense_shape=tensor.dense_shape)
else:
mpi_size = tf.cast(size(), tensor.dtype)
summed_tensor = _allreduce(tensor)
new_tensor = (tf.div(summed_tensor, mpi_size)
if average else summed_tensor)
return new_tensor
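# Illustrative usage sketch (assumes an initialized MPI session as shown in the
# module docstring; `grads` is a hypothetical list of gradient tensors):
#
# averaged = [allreduce(g, average=True) for g in grads]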
class DistributedOptimizer(tf.train.Optimizer):
"""An optimizer that wraps another tf.Optimizer, using an MPI allreduce to
average gradient values before applying gradients to model weights."""
def __init__(self, optimizer, name=None, use_locking=False):
"""Construct a new DistributedOptimizer, which uses another optimizer
under the hood for computing single-process gradient values and
applying gradient updates after the gradient values have been averaged
across all the MPI ranks.
Args:
optimizer: Optimizer to use for computing gradients and applying updates.
name: Optional name prefix for the operations created when applying
gradients. Defaults to "Distributed" followed by the provided
optimizer type.
use_locking: Whether to use locking when updating variables. See
Optimizer.__init__ for more info.
"""
if name is None:
name = "Distributed{}".format(type(optimizer).__name__)
self._optimizer = optimizer
super(DistributedOptimizer, self).__init__(
name=name, use_locking=use_locking)
def compute_gradients(self, *args, **kwargs):
"""Compute gradients of all trainable variables.
See Optimizer.compute_gradients() for more info.
In DistributedOptimizer, compute_gradients() is overridden to also
allreduce the gradients before returning them.
"""
gradients = (super(DistributedOptimizer, self)
.compute_gradients(*args, **kwargs))
return [(allreduce(gradient), var) for (gradient, var) in gradients]
def _apply_dense(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._apply_dense(*args, **kwargs)
def _apply_sparse(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._apply_sparse(*args, **kwargs)
def _apply_sparse_duplicate_indices(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._apply_sparse_duplicate_indices(*args,
**kwargs)
def _prepare(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._prepare(*args, **kwargs)
def _create_slots(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._create_slots(*args, **kwargs)
def _valid_dtypes(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._valid_dtypes(*args, **kwargs)
def _finish(self, *args, **kwargs):
"""Calls this same method on the underlying optimizer."""
return self._optimizer._finish(*args, **kwargs)
class Session(tf.Session):
"""A class for running TensorFlow operations, with copies of the same graph
running distributed across different MPI nodes.
The primary difference between `tf.Session` and
`tf.contrib.mpi_collectives.Session` is that the MPI `Session` ensures that
the `Session` options are correct for use with `tf.contrib.mpi`, and
initializes MPI immediately upon the start of the session.
"""
def __init__(self, target='', graph=None, config=None):
"""Creates a new TensorFlow MPI session.
Unlike a normal `tf.Session`, an MPI Session may only use a single GPU,
which must be specified in advance before the session is initialized.
In addition, it only uses a single graph evaluation thread, and
initializes MPI immediately upon starting.
If no `graph` argument is specified when constructing the session,
the default graph will be launched in the session. If you are
using more than one graph (created with `tf.Graph()` in the same
process, you will have to use different sessions for each graph,
but each graph can be used in multiple sessions. In this case, it
is often clearer to pass the graph to be launched explicitly to
the session constructor.
Args:
target: (Optional.) The execution engine to connect to.
graph: (Optional.) The `Graph` to be launched (described above).
config: (Optional.) A `ConfigProto` protocol buffer with configuration
options for the session.
"""
super(Session, self).__init__(target, graph, config=config)
# Initialize MPI on the relevant device.
# TODO: Move this to library load and eliminate mpi.Session()
if graph is None:
graph = tf.get_default_graph()
with graph.as_default():
self.run(init())
| apache-2.0 |
paulboco/phpmyadmin-insecure | doc/_ext/configext.py | 141 | 6618 | from sphinx.domains import Domain, ObjType
from sphinx.roles import XRefRole
from sphinx.domains.std import GenericObject, StandardDomain
from sphinx.directives import ObjectDescription
from sphinx.util.nodes import clean_astext, make_refnode
from sphinx.util import ws_re
from sphinx import addnodes
from sphinx.util.docfields import Field
from docutils import nodes
def get_id_from_cfg(text):
'''
Formats anchor ID from config option.
'''
if text[:6] == '$cfg[\'':
text = text[6:]
if text[-2:] == '\']':
text = text[:-2]
text = text.replace('[$i]', '')
parts = text.split("']['")
return parts
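# Illustrative example of the anchor-id normalization (hypothetical input):
#
# get_id_from_cfg("$cfg['Servers'][$i]['host']") # -> ['Servers', 'host']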
class ConfigOption(ObjectDescription):
indextemplate = 'configuration option; %s'
parse_node = None
has_arguments = True
doc_field_types = [
Field('default', label='Default value', has_arg=False,
names=('default', )),
Field('type', label='Type', has_arg=False,
names=('type',)),
]
def handle_signature(self, sig, signode):
signode.clear()
signode += addnodes.desc_name(sig, sig)
# normalize whitespace like XRefRole does
name = ws_re.sub('', sig)
return name
def add_target_and_index(self, name, sig, signode):
targetparts = get_id_from_cfg(name)
targetname = 'cfg_%s' % '_'.join(targetparts)
signode['ids'].append(targetname)
self.state.document.note_explicit_target(signode)
indextype = 'single'
# Generic index entries
indexentry = self.indextemplate % (name,)
self.indexnode['entries'].append((indextype, indexentry,
targetname, targetname))
self.indexnode['entries'].append((indextype, name,
targetname, targetname))
# Server section
if targetparts[0] == 'Servers' and len(targetparts) > 1:
indexname = ', '.join(targetparts[1:])
self.indexnode['entries'].append((indextype, 'server configuration; %s' % indexname,
targetname, targetname))
self.indexnode['entries'].append((indextype, indexname,
targetname, targetname))
else:
indexname = ', '.join(targetparts)
self.indexnode['entries'].append((indextype, indexname,
targetname, targetname))
self.env.domaindata['config']['objects'][self.objtype, name] = \
self.env.docname, targetname
class ConfigSectionXRefRole(XRefRole):
"""
Cross-referencing role for configuration sections (adds an index entry).
"""
def result_nodes(self, document, env, node, is_ref):
if not is_ref:
return [node], []
varname = node['reftarget']
tgtid = 'index-%s' % env.new_serialno('index')
indexnode = addnodes.index()
indexnode['entries'] = [
('single', varname, tgtid, varname),
('single', 'configuration section; %s' % varname, tgtid, varname)
]
targetnode = nodes.target('', '', ids=[tgtid])
document.note_explicit_target(targetnode)
return [indexnode, targetnode, node], []
class ConfigSection(ObjectDescription):
indextemplate = 'configuration section; %s'
parse_node = None
def handle_signature(self, sig, signode):
if self.parse_node:
name = self.parse_node(self.env, sig, signode)
else:
signode.clear()
signode += addnodes.desc_name(sig, sig)
# normalize whitespace like XRefRole does
name = ws_re.sub('', sig)
return name
def add_target_and_index(self, name, sig, signode):
targetname = '%s-%s' % (self.objtype, name)
signode['ids'].append(targetname)
self.state.document.note_explicit_target(signode)
if self.indextemplate:
colon = self.indextemplate.find(':')
if colon != -1:
indextype = self.indextemplate[:colon].strip()
indexentry = self.indextemplate[colon+1:].strip() % (name,)
else:
indextype = 'single'
indexentry = self.indextemplate % (name,)
self.indexnode['entries'].append((indextype, indexentry,
targetname, targetname))
self.env.domaindata['config']['objects'][self.objtype, name] = \
self.env.docname, targetname
class ConfigOptionXRefRole(XRefRole):
"""
Cross-referencing role for configuration options (adds an index entry).
"""
def result_nodes(self, document, env, node, is_ref):
if not is_ref:
return [node], []
varname = node['reftarget']
tgtid = 'index-%s' % env.new_serialno('index')
indexnode = addnodes.index()
indexnode['entries'] = [
('single', varname, tgtid, varname),
('single', 'configuration option; %s' % varname, tgtid, varname)
]
targetnode = nodes.target('', '', ids=[tgtid])
document.note_explicit_target(targetnode)
return [indexnode, targetnode, node], []
class ConfigFileDomain(Domain):
name = 'config'
label = 'Config'
object_types = {
'option': ObjType('config option', 'option'),
'section': ObjType('config section', 'section'),
}
directives = {
'option': ConfigOption,
'section': ConfigSection,
}
roles = {
'option': ConfigOptionXRefRole(),
'section': ConfigSectionXRefRole(),
}
initial_data = {
'objects': {}, # (type, name) -> docname, labelid
}
def clear_doc(self, docname):
for key, (fn, _) in self.data['objects'].items():
if fn == docname:
del self.data['objects'][key]
def resolve_xref(self, env, fromdocname, builder,
typ, target, node, contnode):
docname, labelid = self.data['objects'].get((typ, target), ('', ''))
if not docname:
return None
else:
return make_refnode(builder, fromdocname, docname,
labelid, contnode)
def get_objects(self):
for (type, name), info in self.data['objects'].items():
yield (name, name, type, info[0], info[1],
self.object_types[type].attrs['searchprio'])
def setup(app):
app.add_domain(ConfigFileDomain)
| gpl-2.0 |
dx9/cattle | tests/integration/cattletest/core/test_external_handler.py | 5 | 7600 | from common_fixtures import * # NOQA
TEST_HANDLER_PREFIX = 'test-handler-'
@pytest.fixture(scope='module', autouse=True)
def tear_down(request, admin_user_client):
request.addfinalizer(lambda: _disable_test_handlers(admin_user_client))
def _get_extension(admin_client, extension_point_name, impl_name,
format='Dynamic : {}'):
for ep in admin_client.list_extension_point():
if ep.name == extension_point_name:
for impl in ep.implementations:
try:
if impl.properties.name == impl_name:
return impl
except AttributeError:
pass
if impl.name == format.format(impl_name):
return impl
return None
def _disable_test_handlers(client):
name = TEST_HANDLER_PREFIX + '%'
for h in client.list_external_handler(state='active',
name_like=name):
client.wait_success(h.deactivate())
def test_external_handler(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'instance.start', 'onError': 'instance.stop'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs)
assert h.state == 'registering'
assert h.get('processConfigs') is None
assert h.data.fields.processConfigs == configs
h = admin_user_client.wait_success(h)
assert h.state == 'active'
assert h.data.fields.processConfigs is None
maps = h.externalHandlerExternalHandlerProcessMaps()
assert len(maps) == 1
assert maps[0].state == 'active'
assert maps[0].onError == 'instance.stop'
process = maps[0].externalHandlerProcess()
assert process.state == 'active'
assert process.name == 'instance.start'
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
def test_defaults(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs)
h = admin_user_client.wait_success(h)
assert h.state == 'active'
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
assert ep.properties.retry is None
assert ep.properties.timeoutMillis is None
assert ep.properties.name == name
assert ep.properties.priority == '1000'
assert ep.properties.eventName == 'instance.start;handler={}'.format(name)
def test_properties(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs,
timeoutMillis=2000,
retries=4,
priority=1234)
h = admin_user_client.wait_success(h)
assert h.state == 'active'
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
assert ep.properties.retry == '4'
assert ep.properties.timeoutMillis == '2000'
assert ep.properties.priority == '1234'
assert ep.properties.name == name
assert ep.properties.eventName == 'instance.start;handler={}'.format(name)
def test_pre_handler(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'pre.instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs,
timeoutMillis=2000,
retries=4,
priority=1234)
h = admin_user_client.wait_success(h)
assert h.state == 'active'
ep = _get_extension(admin_user_client,
'process.instance.start.pre.listeners',
name)
assert ep is not None
assert ep.properties.eventName == \
'pre.instance.start;handler={}'.format(name)
def test_post_handler(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'post.instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs,
timeoutMillis=2000,
retries=4,
priority=1234)
h = admin_user_client.wait_success(h)
assert h.state == 'active'
ep = _get_extension(admin_user_client,
'process.instance.start.post.listeners',
name)
assert ep is not None
assert ep.properties.eventName == \
'post.instance.start;handler={}'.format(name)
def test_enabled_disable(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs)
h = admin_user_client.wait_success(h)
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
h = admin_user_client.wait_success(h.deactivate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is None
h = admin_user_client.wait_success(h.activate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
admin_user_client.wait_success(
h.externalHandlerProcesses()[0].deactivate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is None
admin_user_client.wait_success(h.externalHandlerProcesses()[0].activate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
admin_user_client.wait_success(
h.externalHandlerExternalHandlerProcessMaps()[0].deactivate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is None
admin_user_client.wait_success(
h.externalHandlerExternalHandlerProcessMaps()[0].activate())
ep = _get_extension(admin_user_client, 'process.instance.start.handlers',
name)
assert ep is not None
def test_event_name_comma(admin_user_client):
name = '{}-{}'.format(TEST_HANDLER_PREFIX, random_str())
configs = [{'name': 'pre.instance.start,instance.start'}]
h = admin_user_client.create_external_handler(name=name,
processConfigs=configs)
h = admin_user_client.wait_success(h)
processes = [x.name for x in h.externalHandlerProcesses()]
assert len(processes) == 2
assert 'pre.instance.start' in processes
assert 'instance.start' in processes
| apache-2.0 |
infobloxopen/neutron | neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py | 9 | 18655 | # Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import datetime
import mock
from oslo_config import cfg
from oslo_utils import importutils
from oslo_utils import timeutils
import testscenarios
from neutron.common import constants
from neutron.common import topics
from neutron import context
from neutron.db import agents_db
from neutron.db import agentschedulers_db as sched_db
from neutron.db import models_v2
from neutron.extensions import dhcpagentscheduler
from neutron.scheduler import dhcp_agent_scheduler
from neutron.tests.unit import testlib_api
# Required to generate tests from scenarios. Not compatible with nose.
load_tests = testscenarios.load_tests_apply_scenarios
class TestDhcpSchedulerBaseTestCase(testlib_api.SqlTestCase):
def setUp(self):
super(TestDhcpSchedulerBaseTestCase, self).setUp()
self.ctx = context.get_admin_context()
self.network = {'id': 'foo_network_id'}
self.network_id = 'foo_network_id'
self._save_networks([self.network_id])
def _get_agents(self, hosts):
return [
agents_db.Agent(
binary='neutron-dhcp-agent',
host=host,
topic=topics.DHCP_AGENT,
configurations="",
agent_type=constants.AGENT_TYPE_DHCP,
created_at=timeutils.utcnow(),
started_at=timeutils.utcnow(),
heartbeat_timestamp=timeutils.utcnow())
for host in hosts
]
def _save_agents(self, agents):
for agent in agents:
with self.ctx.session.begin(subtransactions=True):
self.ctx.session.add(agent)
def _create_and_set_agents_down(self, hosts, down_agent_count=0, **kwargs):
dhcp_agents = self._get_agents(hosts)
# bring down the specified agents
for agent in dhcp_agents[:down_agent_count]:
old_time = agent['heartbeat_timestamp']
hour_old = old_time - datetime.timedelta(hours=1)
agent['heartbeat_timestamp'] = hour_old
agent['started_at'] = hour_old
for agent in dhcp_agents:
agent.update(kwargs)
self._save_agents(dhcp_agents)
return dhcp_agents
def _save_networks(self, networks):
for network_id in networks:
with self.ctx.session.begin(subtransactions=True):
self.ctx.session.add(models_v2.Network(id=network_id))
def _test_schedule_bind_network(self, agents, network_id):
scheduler = dhcp_agent_scheduler.ChanceScheduler()
scheduler.resource_filter.bind(self.ctx, agents, network_id)
results = self.ctx.session.query(
sched_db.NetworkDhcpAgentBinding).filter_by(
network_id=network_id).all()
self.assertEqual(len(agents), len(results))
for result in results:
self.assertEqual(network_id, result.network_id)
class TestDhcpScheduler(TestDhcpSchedulerBaseTestCase):
def test_schedule_bind_network_single_agent(self):
agents = self._create_and_set_agents_down(['host-a'])
self._test_schedule_bind_network(agents, self.network_id)
def test_schedule_bind_network_multi_agents(self):
agents = self._create_and_set_agents_down(['host-a', 'host-b'])
self._test_schedule_bind_network(agents, self.network_id)
def test_schedule_bind_network_multi_agent_fail_one(self):
agents = self._create_and_set_agents_down(['host-a'])
self._test_schedule_bind_network(agents, self.network_id)
with mock.patch.object(dhcp_agent_scheduler.LOG, 'info') as fake_log:
self._test_schedule_bind_network(agents, self.network_id)
self.assertEqual(1, fake_log.call_count)
class TestAutoScheduleNetworks(TestDhcpSchedulerBaseTestCase):
"""Unit test scenarios for ChanceScheduler.auto_schedule_networks.
network_present
Network is present or not
enable_dhcp
Dhcp is enabled or disabled in the subnet of the network
scheduled_already
Network is already scheduled to the agent or not
agent_down
Dhcp agent is down or alive
valid_host
If true, then an valid host is passed to schedule the network,
else an invalid host is passed.
"""
scenarios = [
('Network present',
dict(network_present=True,
enable_dhcp=True,
scheduled_already=False,
agent_down=False,
valid_host=True)),
('No network',
dict(network_present=False,
enable_dhcp=False,
scheduled_already=False,
agent_down=False,
valid_host=True)),
('Network already scheduled',
dict(network_present=True,
enable_dhcp=True,
scheduled_already=True,
agent_down=False,
valid_host=True)),
('Agent down',
dict(network_present=True,
enable_dhcp=True,
scheduled_already=False,
agent_down=False,
valid_host=True)),
('dhcp disabled',
dict(network_present=True,
enable_dhcp=False,
scheduled_already=False,
agent_down=False,
valid_host=False)),
('Invalid host',
dict(network_present=True,
enable_dhcp=True,
scheduled_already=False,
agent_down=False,
valid_host=False)),
]
def test_auto_schedule_network(self):
plugin = mock.MagicMock()
plugin.get_subnets.return_value = (
[{"network_id": self.network_id, "enable_dhcp": self.enable_dhcp}]
if self.network_present else [])
scheduler = dhcp_agent_scheduler.ChanceScheduler()
if self.network_present:
down_agent_count = 1 if self.agent_down else 0
agents = self._create_and_set_agents_down(
['host-a'], down_agent_count=down_agent_count)
if self.scheduled_already:
self._test_schedule_bind_network(agents, self.network_id)
expected_result = (self.network_present and self.enable_dhcp)
expected_hosted_agents = (1 if expected_result and
self.valid_host else 0)
host = "host-a" if self.valid_host else "host-b"
observed_ret_value = scheduler.auto_schedule_networks(
plugin, self.ctx, host)
self.assertEqual(expected_result, observed_ret_value)
hosted_agents = self.ctx.session.query(
sched_db.NetworkDhcpAgentBinding).all()
self.assertEqual(expected_hosted_agents, len(hosted_agents))
class TestNetworksFailover(TestDhcpSchedulerBaseTestCase,
sched_db.DhcpAgentSchedulerDbMixin):
def test_reschedule_network_from_down_agent(self):
agents = self._create_and_set_agents_down(['host-a', 'host-b'], 1)
self._test_schedule_bind_network([agents[0]], self.network_id)
self._save_networks(["foo-network-2"])
self._test_schedule_bind_network([agents[1]], "foo-network-2")
with contextlib.nested(
mock.patch.object(self, 'remove_network_from_dhcp_agent'),
mock.patch.object(self, 'schedule_network',
return_value=[agents[1]]),
mock.patch.object(self, 'get_network', create=True,
return_value={'id': self.network_id})
) as (rn, sch, getn):
notifier = mock.MagicMock()
self.agent_notifiers[constants.AGENT_TYPE_DHCP] = notifier
self.remove_networks_from_down_agents()
rn.assert_called_with(mock.ANY, agents[0].id, self.network_id,
notify=False)
sch.assert_called_with(mock.ANY, {'id': self.network_id})
notifier.network_added_to_agent.assert_called_with(
mock.ANY, self.network_id, agents[1].host)
def _test_failed_rescheduling(self, rn_side_effect=None):
agents = self._create_and_set_agents_down(['host-a'], 1)
self._test_schedule_bind_network([agents[0]], self.network_id)
with contextlib.nested(
mock.patch.object(
self, 'remove_network_from_dhcp_agent',
side_effect=rn_side_effect),
mock.patch.object(self, 'schedule_network',
return_value=None),
mock.patch.object(self, 'get_network', create=True,
return_value={'id': self.network_id})
) as (rn, sch, getn):
notifier = mock.MagicMock()
self.agent_notifiers[constants.AGENT_TYPE_DHCP] = notifier
self.remove_networks_from_down_agents()
rn.assert_called_with(mock.ANY, agents[0].id, self.network_id,
notify=False)
sch.assert_called_with(mock.ANY, {'id': self.network_id})
self.assertFalse(notifier.network_added_to_agent.called)
def test_reschedule_network_from_down_agent_failed(self):
self._test_failed_rescheduling()
def test_reschedule_network_from_down_agent_concurrent_removal(self):
self._test_failed_rescheduling(
rn_side_effect=dhcpagentscheduler.NetworkNotHostedByDhcpAgent(
network_id='foo', agent_id='bar'))
def test_filter_bindings(self):
bindings = [
sched_db.NetworkDhcpAgentBinding(network_id='foo1',
dhcp_agent={'id': 'id1'}),
sched_db.NetworkDhcpAgentBinding(network_id='foo2',
dhcp_agent={'id': 'id1'}),
sched_db.NetworkDhcpAgentBinding(network_id='foo3',
dhcp_agent={'id': 'id2'}),
sched_db.NetworkDhcpAgentBinding(network_id='foo4',
dhcp_agent={'id': 'id2'})]
with mock.patch.object(self, 'agent_starting_up',
side_effect=[True, False]):
res = [b for b in self._filter_bindings(None, bindings)]
# once per each agent id1 and id2
self.assertEqual(2, len(res))
res_ids = [b.network_id for b in res]
self.assertIn('foo3', res_ids)
self.assertIn('foo4', res_ids)
class DHCPAgentWeightSchedulerTestCase(TestDhcpSchedulerBaseTestCase):
"""Unit test scenarios for WeightScheduler.schedule."""
hostc = {
'binary': 'neutron-dhcp-agent',
'host': 'host-c',
'topic': 'DHCP_AGENT',
'configurations': {'dhcp_driver': 'dhcp_driver',
'networks': 0,
'use_namespaces': True,
},
'agent_type': constants.AGENT_TYPE_DHCP}
hostd = {
'binary': 'neutron-dhcp-agent',
'host': 'host-d',
'topic': 'DHCP_AGENT',
'configurations': {'dhcp_driver': 'dhcp_driver',
'networks': 1,
'use_namespaces': True,
},
'agent_type': constants.AGENT_TYPE_DHCP}
def setUp(self):
super(DHCPAgentWeightSchedulerTestCase, self).setUp()
DB_PLUGIN_KLASS = 'neutron.plugins.ml2.plugin.Ml2Plugin'
self.setup_coreplugin(DB_PLUGIN_KLASS)
cfg.CONF.set_override("network_scheduler_driver",
'neutron.scheduler.dhcp_agent_scheduler.WeightScheduler')
self.dhcp_periodic_p = mock.patch(
'neutron.db.agentschedulers_db.DhcpAgentSchedulerDbMixin.'
'start_periodic_dhcp_agent_status_check')
self.patched_dhcp_periodic = self.dhcp_periodic_p.start()
self.plugin = importutils.import_object('neutron.plugins.ml2.plugin.'
'Ml2Plugin')
self.assertEqual(1, self.patched_dhcp_periodic.call_count)
self.plugin.network_scheduler = importutils.import_object(
'neutron.scheduler.dhcp_agent_scheduler.WeightScheduler'
)
cfg.CONF.set_override('dhcp_agents_per_network', 1)
cfg.CONF.set_override("dhcp_load_type", "networks")
def test_scheduler_one_agents_per_network(self):
cfg.CONF.set_override('dhcp_agents_per_network', 1)
self._save_networks(['1111'])
agents = self._get_agents(['host-c', 'host-d'])
self._save_agents(agents)
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '1111'})
agents = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['1111'])
self.assertEqual(1, len(agents))
def test_scheduler_two_agents_per_network(self):
cfg.CONF.set_override('dhcp_agents_per_network', 2)
self._save_networks(['1111'])
agents = self._get_agents(['host-c', 'host-d'])
self._save_agents(agents)
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '1111'})
agents = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['1111'])
self.assertEqual(2, len(agents))
def test_scheduler_no_active_agents(self):
self._save_networks(['1111'])
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '1111'})
agents = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['1111'])
self.assertEqual(0, len(agents))
def test_scheduler_equal_distribution(self):
cfg.CONF.set_override('dhcp_agents_per_network', 1)
self._save_networks(['1111', '2222', '3333'])
agents = self._get_agents(['host-c', 'host-d'])
self._save_agents(agents)
callback = agents_db.AgentExtRpcCallback()
callback.report_state(self.ctx,
agent_state={'agent_state': self.hostc},
time=timeutils.strtime())
callback.report_state(self.ctx,
agent_state={'agent_state': self.hostd},
time=timeutils.strtime())
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '1111'})
agent1 = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['1111'])
self.hostd['configurations']['networks'] = 2
callback.report_state(self.ctx,
agent_state={'agent_state': self.hostd},
time=timeutils.strtime())
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '2222'})
agent2 = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['2222'])
self.hostc['configurations']['networks'] = 4
callback.report_state(self.ctx,
agent_state={'agent_state': self.hostc},
time=timeutils.strtime())
self.plugin.network_scheduler.schedule(self.plugin, self.ctx,
{'id': '3333'})
agent3 = self.plugin.get_dhcp_agents_hosting_networks(self.ctx,
['3333'])
self.assertEqual('host-c', agent1[0]['host'])
self.assertEqual('host-c', agent2[0]['host'])
self.assertEqual('host-d', agent3[0]['host'])
class TestDhcpSchedulerFilter(TestDhcpSchedulerBaseTestCase,
sched_db.DhcpAgentSchedulerDbMixin):
def _test_get_dhcp_agents_hosting_networks(self, expected, **kwargs):
agents = self._create_and_set_agents_down(['host-a', 'host-b'], 1)
agents += self._create_and_set_agents_down(['host-c', 'host-d'], 1,
admin_state_up=False)
self._test_schedule_bind_network(agents, self.network_id)
agents = self.get_dhcp_agents_hosting_networks(self.ctx,
[self.network_id],
**kwargs)
host_ids = set(a['host'] for a in agents)
self.assertEqual(expected, host_ids)
def test_get_dhcp_agents_hosting_networks_default(self):
self._test_get_dhcp_agents_hosting_networks({'host-a', 'host-b',
'host-c', 'host-d'})
def test_get_dhcp_agents_hosting_networks_active(self):
self._test_get_dhcp_agents_hosting_networks({'host-b', 'host-d'},
active=True)
def test_get_dhcp_agents_hosting_networks_admin_up(self):
self._test_get_dhcp_agents_hosting_networks({'host-a', 'host-b'},
admin_state_up=True)
def test_get_dhcp_agents_hosting_networks_active_admin_up(self):
self._test_get_dhcp_agents_hosting_networks({'host-b'},
active=True,
admin_state_up=True)
def test_get_dhcp_agents_hosting_networks_admin_down(self):
self._test_get_dhcp_agents_hosting_networks({'host-c', 'host-d'},
admin_state_up=False)
def test_get_dhcp_agents_hosting_networks_active_admin_down(self):
self._test_get_dhcp_agents_hosting_networks({'host-d'},
active=True,
admin_state_up=False)
| apache-2.0 |
tumi8/nasty | tools/get-extended-metrics.py | 1 | 8128 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Export of AggregateSet time series from NASTY tables
# created by Gerhard Muenz, August 2008
import MySQLdb, getopt, os, sys, string, calendar, time
def query_metrics(c, starttime, endtime, interval, addr, mask, port, proto, nasty):
def ip2int(adresse):
# converts an IP address in dotted-quad format to an integer
adresse = adresse.split('.')
return 2**24*long(adresse[0]) + 2**16*long(adresse[1]) + 2**8*long(adresse[2]) + long(adresse[3])
def create_where_expression(addr, mask, port, proto):
bitmask = ['0x00000000', '0x80000000', '0xC0000000', '0xE0000000', \
'0xF0000000', '0xF8000000', '0xFC000000', '0xFE000000', \
'0xFF000000', '0xFF800000', '0xFFC00000', '0xFFE00000', \
'0xFFF00000', '0xFFF80000', '0xFFFC0000', '0xFFFE0000', \
'0xFFFF0000', '0xFFFF8000', '0xFFFFC000', '0xFFFFE000', \
'0xFFFFF000', '0xFFFFF800', '0xFFFFFC00', '0xFFFFFE00', \
'0xFFFFFF00', '0xFFFFFF80', '0xFFFFFFC0', '0xFFFFFFE0', \
'0xFFFFFFF0', '0xFFFFFFF8', '0xFFFFFFFC', '0xFFFFFFFE']
if ((addr=='' or mask==0) and port=='' and proto==''):
return ''
needand = False
result = ' WHERE ('
if addr != '':
if ((mask > 0) and (mask < 32)):
result=result+'((srcIp & '+bitmask[mask]+')='+str(ip2int(addr))+' OR (dstIp & '+bitmask[mask]+')='+str(ip2int(addr))+')'
needand = True
elif mask == 32:
result=result+'(srcIp='+str(ip2int(addr))+' OR dstIp='+str(ip2int(addr))+')'
needand = True
if port != '':
if needand:
result = result + ' AND '
result=result+'(srcPort='+port+' OR dstPort='+port+')'
needand = True
if proto != '':
if needand:
result = result + ' AND '
result = result+'proto='+proto
return result+')'
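# Worked example (hedged): addr='10.0.0.0', mask=8, port='80', proto='6'
# makes the function above return
#   WHERE (((srcIp & 0xFF000000)=167772160 OR (dstIp & 0xFF000000)=167772160)
#   AND (srcPort=80 OR dstPort=80) AND proto=6)
# where 167772160 is ip2int('10.0.0.0').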
def time2tables(starttime, endtime):
# finds the half-hour tables relevant for the given time range
tables = []
c.execute("SHOW TABLES LIKE 'h\\_%'")
for row in c.fetchall():
tabletime = calendar.timegm([string.atoi(row[0][2:6]), string.atoi(row[0][6:8]), string.atoi(row[0][8:10]), \
string.atoi(row[0][11:13]), string.atoi(row[0][14])*30, 0, 0, 0, 0])
if ((endtime==0 or tabletime < endtime) and (starttime==0 or tabletime+30*60 > starttime)):
tables.append(row[0])
return tables
tables = time2tables(starttime, endtime)
if tables==[]:
print('No tables found for '+str(starttime)+'<=firstSwitched<='+str(endtime)+'.')
return
filter = create_where_expression(addr, mask, port, proto)
timecol = "(firstSwitched DIV "+str(interval)+")*"+str(interval)+" AS time"
if starttime>0:
if endtime>0:
timecond = "HAVING time BETWEEN "+str(starttime)+" AND "+str(endtime)
else:
timecond = "HAVING time>="+str(starttime)
elif endtime>0:
timecond = "HAVING time<="+str(endtime)
else:
timecond = ''
earliest = 0
for table in tables:
if nasty:
c.execute('SELECT '+timecol+', SUM(bytes), SUM(pkts), COUNT(*), (COUNT(DISTINCT dstIp) + COUNT(DISTINCT srcIp)), (COUNT(DISTINCT dstPort) + COUNT(DISTINCT srcPort)), AVG(lastSwitched-firstSwitched) FROM '+table+filter+' GROUP BY time '+timecond)
else:
c.execute('SELECT '+timecol+', SUM(bytes), SUM(pkts), COUNT(*), COUNT(DISTINCT srcIp), COUNT(DISTINCT dstIp), COUNT(DISTINCT srcPort), COUNT(DISTINCT dstPort), AVG(lastSwitched-firstSwitched) FROM '+table+filter+' GROUP BY time '+timecond)
for row in c.fetchall():
if earliest == 0:
# initialize interval counter
i = 0
earliest = row[0]
else:
# increase interval counter and fill in zero rows if necessary
i = i + 1
while i < (row[0]-earliest) / interval:
# emit one zero per output column: 6 metrics with -n, 8 otherwise
print(str(earliest + i*interval) + '\t0' * (6 if nasty else 8))
i = i + 1
# print current row
if nasty:
print(str(row[0])+'\t'+str(row[1])+'\t'+str(row[2])+'\t'+str(row[3])+'\t'+str(row[4])+'\t'+str(row[5])+'\t'+str(row[6]))
else:
print(str(row[0])+'\t'+str(row[1])+'\t'+str(row[2])+'\t'+str(row[3])+'\t'+str(row[4])+'\t'+str(row[5])+'\t'+str(row[6])+'\t'+str(row[7])+'\t'+str(row[8]))
return
def usage():
print ('''Usage: ''' + sys.argv[0] + ''' -d <database> [options]
Database options:
-d, --database= name of the database
-h, --host= host name
-u, --user= user name
-p, --password= password
Time interval options:
-I, --interval= interval length
-S, --starttime= start time (UNIX time)
-E, --endtime= end time (UNIX time)
Filter options:
-A, --address= IP address
-M, --mask= address mask
-P, --port= port
-R, --protocol= protocol
-n, --nasty output AggregateSets as Nasty does''')
print ('''Output:
interval start time, #bytes, #pkts, #records, #srcips, #dstips, #srcports, #dstports, avg_duration''')
print ('''Output with option -n:
interval start time, #bytes, #pkts, #records, #ips, #ports, avg_duration''')
def main():
user=os.environ['LOGNAME']
host='localhost'
password=''
database=''
bin=False
addr=''
mask=32
port=''
proto=''
interval=60
starttime=0
endtime=0
nasty=False
try:
opts, args = getopt.gnu_getopt(sys.argv[1:], "u:h:p:d:I:S:E:A:M:P:R:n", ["user=", "host=", "password=", "database=", "interval=", "starttime=", "endtime=", "mask=", "port=", "protocol=", "nasty"])
except getopt.GetoptError:
print "Ungueltige Option."
usage()
sys.exit(2)
for o, a in opts:
if o in ("-u", "--user"):
user=a
if o in ("-h", "--host"):
host=a
if o in ("-p", "--password"):
password=a
if o in ("-d", "--database"):
database=a
if o in ("-I", "--interval"):
interval=string.atoi(a)
if o in ("-S", "--starttime"):
starttime=string.atoi(a)
starttime = (starttime//interval)*interval
if o in ("-E", "--endtime"):
endtime=string.atoi(a)
endtime = (endtime//interval+1)*interval
if o in ("-A", "--address"):
addr=a
if o in ("-M", "--mask"):
mask=string.atoi(a)
if o in ("-P", "--port"):
port=str(string.atoi(a))
if o in ("-R", "--protocol"):
proto=str(string.atoi(a))
if o in ("-n", "--nasty"):
nasty=True
if interval<1 or starttime<0 or endtime<0:
print('Start time, end time and interval must be positive')
return
if (database):
try:
connection = MySQLdb.connect(host,user,password,database)
except MySQLdb.OperationalError, message:
print ('%d: Could not connect to the database: %s' % (message[0], message[1]))
return
c = connection.cursor()
query_metrics(c, starttime, endtime, interval, addr, mask, port, proto, nasty)
else:
usage()
if __name__ == "__main__":
main()
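# Example invocation (hedged; database name and credentials are made up):
#   ./get-extended-metrics.py -d nasty -u flowuser -p secret -I 300 \
#       -S 1217548800 -E 1217552400 -A 10.0.0.0 -M 8 -P 80 -R 6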
| gpl-2.0 |
2014c2g3/2015cd_midterm | static/Brython3.1.1-20150328-091302/Lib/importlib/abc.py | 743 | 14595 | """Abstract base classes related to import."""
from . import _bootstrap
from . import machinery
try:
import _frozen_importlib
except ImportError as exc:
if exc.name != '_frozen_importlib':
raise
_frozen_importlib = None
import abc
import imp
import marshal
import sys
import tokenize
import warnings
def _register(abstract_cls, *classes):
for cls in classes:
abstract_cls.register(cls)
if _frozen_importlib is not None:
frozen_cls = getattr(_frozen_importlib, cls.__name__)
abstract_cls.register(frozen_cls)
class Finder(metaclass=abc.ABCMeta):
"""Legacy abstract base class for import finders.
It may be subclassed for compatibility with legacy third party
reimplementations of the import system. Otherwise, finder
implementations should derive from the more specific MetaPathFinder
or PathEntryFinder ABCs.
"""
@abc.abstractmethod
def find_module(self, fullname, path=None):
"""An abstract method that should find a module.
The fullname is a str and the optional path is a str or None.
Returns a Loader object.
"""
raise NotImplementedError
class MetaPathFinder(Finder):
"""Abstract base class for import finders on sys.meta_path."""
@abc.abstractmethod
def find_module(self, fullname, path):
"""Abstract method which, when implemented, should find a module.
The fullname is a str and the path is a str or None.
Returns a Loader object.
"""
raise NotImplementedError
def invalidate_caches(self):
"""An optional method for clearing the finder's cache, if any.
This method is used by importlib.invalidate_caches().
"""
return NotImplemented
_register(MetaPathFinder, machinery.BuiltinImporter, machinery.FrozenImporter,
machinery.PathFinder, machinery.WindowsRegistryFinder)
class PathEntryFinder(Finder):
"""Abstract base class for path entry finders used by PathFinder."""
@abc.abstractmethod
def find_loader(self, fullname):
"""Abstract method which, when implemented, returns a module loader.
The fullname is a str. Returns a 2-tuple of (Loader, portion) where
portion is a sequence of file system locations contributing to part of
a namespace package. The sequence may be empty and the loader may be
None.
"""
raise NotImplementedError
find_module = _bootstrap._find_module_shim
def invalidate_caches(self):
"""An optional method for clearing the finder's cache, if any.
This method is used by PathFinder.invalidate_caches().
"""
return NotImplemented
_register(PathEntryFinder, machinery.FileFinder)
class Loader(metaclass=abc.ABCMeta):
"""Abstract base class for import loaders."""
@abc.abstractmethod
def load_module(self, fullname):
"""Abstract method which when implemented should load a module.
The fullname is a str."""
raise NotImplementedError
@abc.abstractmethod
def module_repr(self, module):
"""Abstract method which when implemented calculates and returns the
given module's repr."""
raise NotImplementedError
class ResourceLoader(Loader):
"""Abstract base class for loaders which can return data from their
back-end storage.
This ABC represents one of the optional protocols specified by PEP 302.
"""
@abc.abstractmethod
def get_data(self, path):
"""Abstract method which when implemented should return the bytes for
the specified path. The path must be a str."""
raise NotImplementedError
class InspectLoader(Loader):
"""Abstract base class for loaders which support inspection about the
modules they can load.
This ABC represents one of the optional protocols specified by PEP 302.
"""
@abc.abstractmethod
def is_package(self, fullname):
"""Abstract method which when implemented should return whether the
module is a package. The fullname is a str. Returns a bool."""
raise NotImplementedError
@abc.abstractmethod
def get_code(self, fullname):
"""Abstract method which when implemented should return the code object
for the module. The fullname is a str. Returns a types.CodeType."""
raise NotImplementedError
@abc.abstractmethod
def get_source(self, fullname):
"""Abstract method which should return the source code for the
module. The fullname is a str. Returns a str."""
raise NotImplementedError
_register(InspectLoader, machinery.BuiltinImporter, machinery.FrozenImporter,
machinery.ExtensionFileLoader)
class ExecutionLoader(InspectLoader):
"""Abstract base class for loaders that wish to support the execution of
modules as scripts.
This ABC represents one of the optional protocols specified in PEP 302.
"""
@abc.abstractmethod
def get_filename(self, fullname):
"""Abstract method which should return the value that __file__ is to be
set to."""
raise NotImplementedError
class FileLoader(_bootstrap.FileLoader, ResourceLoader, ExecutionLoader):
"""Abstract base class partially implementing the ResourceLoader and
ExecutionLoader ABCs."""
_register(FileLoader, machinery.SourceFileLoader,
machinery.SourcelessFileLoader)
class SourceLoader(_bootstrap.SourceLoader, ResourceLoader, ExecutionLoader):
"""Abstract base class for loading source code (and optionally any
corresponding bytecode).
To support loading from source code, the abstractmethods inherited from
ResourceLoader and ExecutionLoader need to be implemented. To also support
loading from bytecode, the optional methods specified directly by this ABC
are required.
Inherited abstractmethods not implemented in this ABC:
* ResourceLoader.get_data
* ExecutionLoader.get_filename
"""
def path_mtime(self, path):
"""Return the (int) modification time for the path (str)."""
if self.path_stats.__func__ is SourceLoader.path_stats:
raise NotImplementedError
return int(self.path_stats(path)['mtime'])
def path_stats(self, path):
"""Return a metadata dict for the source pointed to by the path (str).
Possible keys:
- 'mtime' (mandatory) is the numeric timestamp of last source
code modification;
- 'size' (optional) is the size in bytes of the source code.
"""
if self.path_mtime.__func__ is SourceLoader.path_mtime:
raise NotImplementedError
return {'mtime': self.path_mtime(path)}
def set_data(self, path, data):
"""Write the bytes to the path (if possible).
Accepts a str path and data as bytes.
Any needed intermediary directories are to be created. If for some
reason the file cannot be written because of permissions, fail
silently.
"""
raise NotImplementedError
_register(SourceLoader, machinery.SourceFileLoader)
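# --- Hedged sketch, not part of the original module ---
# A minimal in-memory SourceLoader: the _ExampleMemoryLoader name and its
# {pseudo-path: source bytes} mapping are assumptions for illustration.
# Only the two inherited abstractmethods are implemented; the optional
# bytecode hooks (path_stats/path_mtime/set_data) keep their defaults.
class _ExampleMemoryLoader(SourceLoader):
    def __init__(self, files):
        self._files = files  # maps pseudo-paths to source bytes
    def get_filename(self, fullname):
        return fullname + '.py'  # pseudo-path, shown in tracebacks
    def get_data(self, path):
        try:
            return self._files[path]
        except KeyError:
            # get_data signals an unloadable path with IOError
            raise IOError('no source for {0!r}'.format(path))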
class PyLoader(SourceLoader):
"""Implement the deprecated PyLoader ABC in terms of SourceLoader.
This class has been deprecated! It is slated for removal in Python 3.4.
If compatibility with Python 3.1 is not needed then implement the
SourceLoader ABC instead of this class. If Python 3.1 compatibility is
needed, then use the following idiom to have a single class that is
compatible with Python 3.1 onwards::
try:
from importlib.abc import SourceLoader
except ImportError:
from importlib.abc import PyLoader as SourceLoader
class CustomLoader(SourceLoader):
def get_filename(self, fullname):
# Implement ...
def source_path(self, fullname):
'''Implement source_path in terms of get_filename.'''
try:
return self.get_filename(fullname)
except ImportError:
return None
def is_package(self, fullname):
filename = os.path.basename(self.get_filename(fullname))
return os.path.splitext(filename)[0] == '__init__'
"""
@abc.abstractmethod
def is_package(self, fullname):
raise NotImplementedError
@abc.abstractmethod
def source_path(self, fullname):
"""Abstract method. Accepts a str module name and returns the path to
the source code for the module."""
raise NotImplementedError
def get_filename(self, fullname):
"""Implement get_filename in terms of source_path.
As get_filename should only return a source file path there is no
chance of the path not existing but loading still being possible, so
ImportError should propagate instead of being turned into returning
None.
"""
warnings.warn("importlib.abc.PyLoader is deprecated and is "
"slated for removal in Python 3.4; "
"use SourceLoader instead. "
"See the importlib documentation on how to be "
"compatible with Python 3.1 onwards.",
DeprecationWarning)
path = self.source_path(fullname)
if path is None:
raise ImportError(name=fullname)
else:
return path
class PyPycLoader(PyLoader):
"""Abstract base class to assist in loading source and bytecode by
requiring only back-end storage methods to be implemented.
This class has been deprecated! Removal is slated for Python 3.4. Implement
the SourceLoader ABC instead. If Python 3.1 compatibility is needed, see
PyLoader.
The methods get_code, get_source, and load_module are implemented for the
user.
"""
def get_filename(self, fullname):
"""Return the source or bytecode file path."""
path = self.source_path(fullname)
if path is not None:
return path
path = self.bytecode_path(fullname)
if path is not None:
return path
raise ImportError("no source or bytecode path available for "
"{0!r}".format(fullname), name=fullname)
def get_code(self, fullname):
"""Get a code object from source or bytecode."""
warnings.warn("importlib.abc.PyPycLoader is deprecated and slated for "
"removal in Python 3.4; use SourceLoader instead. "
"If Python 3.1 compatibility is required, see the "
"latest documentation for PyLoader.",
DeprecationWarning)
source_timestamp = self.source_mtime(fullname)
# Try to use bytecode if it is available.
bytecode_path = self.bytecode_path(fullname)
if bytecode_path:
data = self.get_data(bytecode_path)
try:
magic = data[:4]
if len(magic) < 4:
raise ImportError(
"bad magic number in {}".format(fullname),
name=fullname, path=bytecode_path)
raw_timestamp = data[4:8]
if len(raw_timestamp) < 4:
raise EOFError("bad timestamp in {}".format(fullname))
pyc_timestamp = _bootstrap._r_long(raw_timestamp)
raw_source_size = data[8:12]
if len(raw_source_size) != 4:
raise EOFError("bad file size in {}".format(fullname))
# Source size is unused as the ABC does not provide a way to
# get the size of the source ahead of reading it.
bytecode = data[12:]
# Verify that the magic number is valid.
if imp.get_magic() != magic:
raise ImportError(
"bad magic number in {}".format(fullname),
name=fullname, path=bytecode_path)
# Verify that the bytecode is not stale (only matters when
# there is source to fall back on).
if source_timestamp:
if pyc_timestamp < source_timestamp:
raise ImportError("bytecode is stale", name=fullname,
path=bytecode_path)
except (ImportError, EOFError):
# If source is available give it a shot.
if source_timestamp is not None:
pass
else:
raise
else:
# Bytecode seems fine, so try to use it.
return marshal.loads(bytecode)
elif source_timestamp is None:
raise ImportError("no source or bytecode available to create code "
"object for {0!r}".format(fullname),
name=fullname)
# Use the source.
source_path = self.source_path(fullname)
if source_path is None:
message = "a source path must exist to load {0}".format(fullname)
raise ImportError(message, name=fullname)
source = self.get_data(source_path)
code_object = compile(source, source_path, 'exec', dont_inherit=True)
# Generate bytecode and write it out.
if not sys.dont_write_bytecode:
data = bytearray(imp.get_magic())
data.extend(_bootstrap._w_long(source_timestamp))
data.extend(_bootstrap._w_long(len(source) & 0xFFFFFFFF))
data.extend(marshal.dumps(code_object))
self.write_bytecode(fullname, data)
return code_object
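# Hedged layout note (mirrors the parsing above; early-3.x era .pyc):
# bytes 0-3 magic number, bytes 4-7 source mtime, bytes 8-11 source
# size, bytes 12..end the marshalled code object.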
@abc.abstractmethod
def source_mtime(self, fullname):
"""Abstract method. Accepts a str filename and returns an int
modification time for the source of the module."""
raise NotImplementedError
@abc.abstractmethod
def bytecode_path(self, fullname):
"""Abstract method. Accepts a str filename and returns the str pathname
to the bytecode for the module."""
raise NotImplementedError
@abc.abstractmethod
def write_bytecode(self, fullname, bytecode):
"""Abstract method. Accepts a str filename and bytes object
representing the bytecode for the module. Returns a boolean
representing whether the bytecode was written or not."""
raise NotImplementedError
| gpl-3.0 |
Dziolas/inspire-next | inspire/modules/authors/workflows/authornew.py | 1 | 4173 | # -*- coding: utf-8 -*-
#
# This file is part of INSPIRE.
# Copyright (C) 2014, 2015 CERN.
#
# INSPIRE is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# INSPIRE is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with INSPIRE. If not, see <http://www.gnu.org/licenses/>.
#
# In applying this licence, CERN does not waive the privileges and immunities
# granted to it by virtue of its status as an Intergovernmental Organization
# or submit itself to any jurisdiction.
from flask import render_template
from invenio_access.control import acc_get_user_email
from invenio_workflows.definitions import WorkflowBase
from invenio_workflows.tasks.logic_tasks import (
workflow_else,
workflow_if,
)
from invenio_workflows.tasks.marcxml_tasks import (
was_approved
)
from invenio_workflows.tasks.workflows_tasks import log_info
from inspire.modules.workflows.tasks.submission import (
halt_record_with_action,
close_ticket
)
from ..tasks import send_robotupload, \
create_marcxml_record, \
convert_data_to_model, \
create_curator_ticket_new, \
reply_ticket, \
curation_ticket_needed, \
create_curation_ticket
class authornew(WorkflowBase):
"""Workflow for new author information."""
object_type = "Author New"
workflow = [
convert_data_to_model(),
create_marcxml_record(),
create_curator_ticket_new(
template="authors/tickets/curator_new.html",
queue="Authors_add_user"),
reply_ticket(template="authors/tickets/user_new.html",
keep_new=True),
halt_record_with_action(action="author_approval",
message="Accept submission?"),
workflow_if(was_approved),
[
send_robotupload(mode="insert"),
reply_ticket(template="authors/tickets/user_accepted.html"),
log_info("New author info has been approved"),
close_ticket(ticket_id_key="ticket_id"),
workflow_if(curation_ticket_needed),
[
create_curation_ticket(
template="authors/tickets/curation_needed.html",
queue="AUTHORS_curation",
ticket_id_key="curation_ticket_id"
),
],
],
workflow_else,
[
log_info("New author info has been rejected"),
close_ticket(ticket_id_key="ticket_id"),
]
]
@staticmethod
def get_title(bwo):
"""Return title of object."""
id_user = bwo.id_user
user_email = acc_get_user_email(id_user)
return u"New Author by: {0}".format(user_email)
@staticmethod
def get_description(bwo):
"""Return description of object."""
return bwo.get_data().get("name").get("display_name")
@staticmethod
def formatter(bwo, **kwargs):
"""Return formatted data of object."""
of = kwargs.get("of", "hp")
extra_data = bwo.get_extra_data()
xml = extra_data.get("marcxml")
id_user = bwo.id_user
user_email = acc_get_user_email(id_user)
ticket_id = extra_data.get("ticket_id")
ticket_url = "https://rt.inspirehep.net/Ticket/Display.html?id={}".format(
ticket_id
)
if of == "xm":
return xml
else:
# FIXME add a template for the author display in the HP
return render_template("authors/workflows/authorupdate.html",
record_preview="",
user_email=user_email,
ticket_url=ticket_url,
comments=extra_data.get("comments"))
| gpl-2.0 |
keithroe/vtkoptix | ThirdParty/Twisted/twisted/protocols/htb.py | 51 | 9330 | # -*- test-case-name: twisted.test.test_htb -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Hierarchical Token Bucket traffic shaping.
Patterned after U{Martin Devera's Hierarchical Token Bucket traffic
shaper for the Linux kernel<http://luxik.cdi.cz/~devik/qos/htb/>}.
@seealso: U{HTB Linux queuing discipline manual - user guide
<http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm>}
@seealso: U{Token Bucket Filter in Linux Advanced Routing & Traffic Control
HOWTO<http://lartc.org/howto/lartc.qdisc.classless.html#AEN682>}
"""
# TODO: Investigate whether we should be using os.times()[-1] instead of
# time.time. time.time, it has been pointed out, can go backwards. Is
# the same true of os.times?
from time import time
from zope.interface import implements, Interface
from twisted.protocols import pcp
class Bucket:
"""
Implementation of a Token bucket.
A bucket can hold a certain number of tokens and it drains over time.
@cvar maxburst: The maximum number of tokens that the bucket can
hold at any given time. If this is C{None}, the bucket has
an infinite size.
@type maxburst: C{int}
@cvar rate: The rate at which the bucket drains, in number
of tokens per second. If the rate is C{None}, the bucket
drains instantaneously.
@type rate: C{int}
"""
maxburst = None
rate = None
_refcount = 0
def __init__(self, parentBucket=None):
"""
Create a L{Bucket} that may have a parent L{Bucket}.
@param parentBucket: If a parent Bucket is specified,
all L{add} and L{drip} operations on this L{Bucket}
will be applied on the parent L{Bucket} as well.
@type parentBucket: L{Bucket}
"""
self.content = 0
self.parentBucket = parentBucket
self.lastDrip = time()
def add(self, amount):
"""
Adds tokens to the L{Bucket} and its C{parentBucket}.
This will add as many of the C{amount} tokens as will fit into both
this L{Bucket} and its C{parentBucket}.
@param amount: The number of tokens to try to add.
@type amount: C{int}
@returns: The number of tokens that actually fit.
@returntype: C{int}
"""
self.drip()
if self.maxburst is None:
allowable = amount
else:
allowable = min(amount, self.maxburst - self.content)
if self.parentBucket is not None:
allowable = self.parentBucket.add(allowable)
self.content += allowable
return allowable
def drip(self):
"""
Let some of the bucket drain.
The L{Bucket} drains at the rate specified by the class
variable C{rate}.
@returns: C{True} if the bucket is empty after this drip.
@returntype: C{bool}
"""
if self.parentBucket is not None:
self.parentBucket.drip()
if self.rate is None:
self.content = 0
else:
now = time()
deltaTime = now - self.lastDrip
deltaTokens = deltaTime * self.rate
self.content = max(0, self.content - deltaTokens)
self.lastDrip = now
return self.content == 0
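# --- Hedged usage sketch, not part of the original module ---
# A Bucket tuned to roughly 4 KiB/s with 16 KiB bursts; the class name
# and both numbers are illustrative assumptions only.
class _ExampleShapingBucket(Bucket):
    maxburst = 16384  # hold at most 16 KiB worth of tokens
    rate = 4096       # drain 4 KiB worth of tokens per second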
class IBucketFilter(Interface):
def getBucketFor(*somethings, **some_kw):
"""
Return a L{Bucket} corresponding to the provided parameters.
@returntype: L{Bucket}
"""
class HierarchicalBucketFilter:
"""
Filter things into buckets that can be nested.
@cvar bucketFactory: Class of buckets to make.
@type bucketFactory: L{Bucket}
@cvar sweepInterval: Seconds between sweeping out the bucket cache.
@type sweepInterval: C{int}
"""
implements(IBucketFilter)
bucketFactory = Bucket
sweepInterval = None
def __init__(self, parentFilter=None):
self.buckets = {}
self.parentFilter = parentFilter
self.lastSweep = time()
def getBucketFor(self, *a, **kw):
"""
Find or create a L{Bucket} corresponding to the provided parameters.
Any parameters are passed on to L{getBucketKey}, from them it
decides which bucket you get.
@returntype: L{Bucket}
"""
if ((self.sweepInterval is not None)
and ((time() - self.lastSweep) > self.sweepInterval)):
self.sweep()
if self.parentFilter:
parentBucket = self.parentFilter.getBucketFor(self, *a, **kw)
else:
parentBucket = None
key = self.getBucketKey(*a, **kw)
bucket = self.buckets.get(key)
if bucket is None:
bucket = self.bucketFactory(parentBucket)
self.buckets[key] = bucket
return bucket
def getBucketKey(self, *a, **kw):
"""
Construct a key based on the input parameters to choose a L{Bucket}.
The default implementation returns the same key for all
arguments. Override this method to provide L{Bucket} selection.
@returns: Something to be used as a key in the bucket cache.
"""
return None
def sweep(self):
"""
Remove empty buckets.
"""
for key, bucket in self.buckets.items():
bucket_is_empty = bucket.drip()
if (bucket._refcount == 0) and bucket_is_empty:
del self.buckets[key]
self.lastSweep = time()
class FilterByHost(HierarchicalBucketFilter):
"""
A Hierarchical Bucket filter with a L{Bucket} for each host.
"""
sweepInterval = 60 * 20
def getBucketKey(self, transport):
return transport.getPeer()[1]
class FilterByServer(HierarchicalBucketFilter):
"""
A Hierarchical Bucket filter with a L{Bucket} for each service.
"""
sweepInterval = None
def getBucketKey(self, transport):
return transport.getHost()[2]
class ShapedConsumer(pcp.ProducerConsumerProxy):
"""
Wraps a C{Consumer} and shapes the rate at which it receives data.
"""
# Providing a Pull interface means I don't have to try to schedule
# traffic with callLaters.
iAmStreaming = False
def __init__(self, consumer, bucket):
pcp.ProducerConsumerProxy.__init__(self, consumer)
self.bucket = bucket
self.bucket._refcount += 1
def _writeSomeData(self, data):
# In practice, this actually results in obscene amounts of
# overhead, as a result of generating lots and lots of packets
# with twelve-byte payloads. We may need to do a version of
# this with scheduled writes after all.
amount = self.bucket.add(len(data))
return pcp.ProducerConsumerProxy._writeSomeData(self, data[:amount])
def stopProducing(self):
pcp.ProducerConsumerProxy.stopProducing(self)
self.bucket._refcount -= 1
class ShapedTransport(ShapedConsumer):
"""
Wraps a C{Transport} and shapes the rate at which it receives data.
This is a L{ShapedConsumer} with a little bit of magic to provide for
the case where the consumer it wraps is also a C{Transport} and people
will be attempting to access attributes this does not proxy as a
C{Consumer} (e.g. C{loseConnection}).
"""
# Ugh. We only wanted to filter IConsumer, not ITransport.
iAmStreaming = False
def __getattr__(self, name):
# Because people will be doing things like .getPeer and
# .loseConnection on me.
return getattr(self.consumer, name)
class ShapedProtocolFactory:
"""
Dispense C{Protocols} with traffic shaping on their transports.
Usage::
myserver = SomeFactory()
myserver.protocol = ShapedProtocolFactory(myserver.protocol,
bucketFilter)
Where C{SomeFactory} is a L{twisted.internet.protocol.Factory}, and
C{bucketFilter} is an instance of L{HierarchicalBucketFilter}.
"""
def __init__(self, protoClass, bucketFilter):
"""
Tell me what to wrap and where to get buckets.
@param protoClass: The class of C{Protocol} this will generate
wrapped instances of.
@type protoClass: L{Protocol<twisted.internet.interfaces.IProtocol>}
class
@param bucketFilter: The filter which will determine how
traffic is shaped.
@type bucketFilter: L{HierarchicalBucketFilter}.
"""
# More precisely, protoClass can be any callable that will return
# instances of something that implements IProtocol.
self.protocol = protoClass
self.bucketFilter = bucketFilter
def __call__(self, *a, **kw):
"""
Make a C{Protocol} instance with a shaped transport.
Any parameters will be passed on to the protocol's initializer.
@returns: A C{Protocol} instance with a L{ShapedTransport}.
"""
proto = self.protocol(*a, **kw)
origMakeConnection = proto.makeConnection
def makeConnection(transport):
bucket = self.bucketFilter.getBucketFor(transport)
shapedTransport = ShapedTransport(transport, bucket)
return origMakeConnection(shapedTransport)
proto.makeConnection = makeConnection
return proto
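# --- Hedged wiring sketch, not part of the original module ---
# Build a server factory whose connections share per-host token buckets.
# _ExampleShapingBucket is the illustrative subclass defined above; the
# rest uses only classes from this module and twisted.internet.protocol.
def _example_shaped_factory():
    from twisted.internet import protocol
    host_filter = FilterByHost()
    host_filter.bucketFactory = _ExampleShapingBucket
    factory = protocol.ServerFactory()
    factory.protocol = ShapedProtocolFactory(protocol.Protocol, host_filter)
    return factory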
| bsd-3-clause |
kaidokert/prospector | prospector/tools/pyroma/__init__.py | 3 | 1998 | import os
from prospector.message import Location, Message
from prospector.tools.base import ToolBase
from pyroma import projectdata, ratings
PYROMA_CODES = {
ratings.Name: 'PYR01',
ratings.Version: 'PYR02',
ratings.VersionIsString: 'PYR03',
ratings.PEP386Version: 'PYR04',
ratings.Description: 'PYR05',
ratings.LongDescription: 'PYR06',
ratings.Classifiers: 'PYR07',
ratings.PythonVersion: 'PYR08',
ratings.Keywords: 'PYR09',
ratings.Author: 'PYR10',
ratings.AuthorEmail: 'PYR11',
ratings.Url: 'PYR12',
ratings.License: 'PYR13',
ratings.ZipSafe: 'PYR14',
ratings.TestSuite: 'PYR15',
ratings.SDist: 'PYR16',
ratings.PackageDocs: 'PYR17',
ratings.ValidREST: 'PYR18',
ratings.BusFactor: 'PYR19',
}
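# Hedged usage note: a prospector profile can disable these codes by
# name. For example, a hypothetical .prospector.yaml containing
#   pyroma:
#     disable:
#       - PYR19
# makes configure() below place 'PYR19' into self.ignore_codes.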
PYROMA_TEST_CLASSES = [t.__class__ for t in ratings.ALL_TESTS]
class PyromaTool(ToolBase):
def __init__(self, *args, **kwargs):
super(PyromaTool, self).__init__(*args, **kwargs)
self.ignore_codes = ()
def configure(self, prospector_config, found_files):
self.ignore_codes = prospector_config.get_disabled_messages('pyroma')
return None
def run(self, found_files):
messages = []
for module in found_files.iter_module_paths(include_ignored=True):
dirname, filename = os.path.split(module)
if filename != 'setup.py':
continue
data = projectdata.get_data(dirname)
all_tests = [m() for m in PYROMA_TEST_CLASSES]
for test in all_tests:
code = PYROMA_CODES[test.__class__]
if code in self.ignore_codes:
continue
passed = test.test(data)
if passed is False: # passed can be True, False or None...
loc = Location(module, 'setup', None, -1, -1)
msg = Message('pyroma', code, loc, test.message())
messages.append(msg)
return messages
| gpl-2.0 |
tseaver/google-cloud-python | iam/setup.py | 2 | 2556 | # -*- coding: utf-8 -*-
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import os
import setuptools
name = "google-cloud-iam"
description = "IAM Service Account Credentials API client library"
version = "0.2.1"
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
# 'Development Status :: 5 - Production/Stable'
release_status = "Development Status :: 3 - Alpha"
dependencies = [
"google-api-core[grpc] >= 1.14.0, < 2.0.0dev",
'enum34; python_version < "3.4"',
]
package_root = os.path.abspath(os.path.dirname(__file__))
readme_filename = os.path.join(package_root, "README.rst")
with io.open(readme_filename, encoding="utf-8") as readme_file:
readme = readme_file.read()
packages = [
package for package in setuptools.find_packages() if package.startswith("google")
]
namespaces = ["google"]
if "google.cloud" in packages:
namespaces.append("google.cloud")
setuptools.setup(
name=name,
version=version,
description=description,
long_description=readme,
author="Google LLC",
author_email="googleapis-packages@google.com",
license="Apache 2.0",
url="https://github.com/GoogleCloudPlatform/google-cloud-python",
classifiers=[
release_status,
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Operating System :: OS Independent",
"Topic :: Internet",
],
platforms="Posix; MacOS X; Windows",
packages=packages,
namespace_packages=namespaces,
install_requires=dependencies,
python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
include_package_data=True,
zip_safe=False,
)
| apache-2.0 |
bcnice20/android-kernel-common | tools/perf/scripts/python/failed-syscalls-by-pid.py | 11180 | 2058 | # failed system call counts, by pid
# (c) 2010, Tom Zanussi <tzanussi@gmail.com>
# Licensed under the terms of the GNU GPL License version 2
#
# Displays system-wide failed system call totals, broken down by pid.
# If a [comm] arg is specified, only syscalls called by [comm] are displayed.
import os
import sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
usage = "perf script -s syscall-counts-by-pid.py [comm|pid]\n";
for_comm = None
for_pid = None
if len(sys.argv) > 2:
sys.exit(usage)
if len(sys.argv) > 1:
try:
for_pid = int(sys.argv[1])
except:
for_comm = sys.argv[1]
syscalls = autodict()
def trace_begin():
print "Press control+C to stop and show the summary"
def trace_end():
print_error_totals()
def raw_syscalls__sys_exit(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
id, ret):
if (for_comm and common_comm != for_comm) or \
(for_pid and common_pid != for_pid ):
return
if ret < 0:
try:
syscalls[common_comm][common_pid][id][ret] += 1
except TypeError:
syscalls[common_comm][common_pid][id][ret] = 1
def print_error_totals():
if for_comm is not None:
print "\nsyscall errors for %s:\n\n" % (for_comm),
else:
print "\nsyscall errors:\n\n",
print "%-30s %10s\n" % ("comm [pid]", "count"),
print "%-30s %10s\n" % ("------------------------------", \
"----------"),
comm_keys = syscalls.keys()
for comm in comm_keys:
pid_keys = syscalls[comm].keys()
for pid in pid_keys:
print "\n%s [%d]\n" % (comm, pid),
id_keys = syscalls[comm][pid].keys()
for id in id_keys:
print " syscall: %-16s\n" % syscall_name(id),
ret_keys = syscalls[comm][pid][id].keys()
for ret, val in sorted(syscalls[comm][pid][id].iteritems(), key = lambda(k, v): (v, k), reverse = True):
print " err = %-20s %10d\n" % (strerror(ret), val),
| gpl-2.0 |
Universal-Model-Converter/UMC3.0a | data/Python/x86/Lib/site-packages/OpenGL/GL/NV/gpu_program4.py | 4 | 2510 | '''OpenGL extension NV.gpu_program4
This module customises the behaviour of the
OpenGL.raw.GL.NV.gpu_program4 to provide a more
Python-friendly API
Overview (from the spec)
This specification documents the common instruction set and basic
functionality provided by NVIDIA's 4th generation of assembly instruction
sets supporting programmable graphics pipeline stages.
The instruction set builds upon the basic framework provided by the
ARB_vertex_program and ARB_fragment_program extensions to expose
considerably more capable hardware. In addition to new capabilities for
vertex and fragment programs, this extension provides a new program type
(geometry programs) further described in the NV_geometry_program4
specification.
NV_gpu_program4 provides a unified instruction set -- all instruction set
features are available for all program types, except for a small number of
features that make sense only for a specific program type. It provides
fully capable signed and unsigned integer data types, along with a set of
arithmetic, logical, and data type conversion instructions capable of
operating on integers. It also provides a uniform set of structured
branching constructs (if tests, loops, and subroutines) that fully support
run-time condition testing.
This extension provides several new texture mapping capabilities. Shadow
cube maps are supported, where cube map faces can encode depth values.
Texture lookup instructions can include an immediate texel offset, which
can assist in advanced filtering. New instructions are provided to fetch
a single texel by address in a texture map (TXF) and query the size of a
specified texture level (TXQ).
By and large, vertex and fragment programs written to ARB_vertex_program
and ARB_fragment_program can be ported directly by simply changing the
program header from "!!ARBvp1.0" or "!!ARBfp1.0" to "!!NVvp4.0" or
"!!NVfp4.0", and then modifying the code to take advantage of the expanded
feature set. There are a small number of areas where this extension is
not a functional superset of previous vertex program extensions, which are
documented in this specification.
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/NV/gpu_program4.txt
'''
from OpenGL import platform, constants, constant, arrays
from OpenGL import extensions, wrapper
from OpenGL.GL import glget
import ctypes
from OpenGL.raw.GL.NV.gpu_program4 import *
### END AUTOGENERATED SECTION
| mit |
pigate/mongo-python-driver | bson/timestamp.py | 48 | 3932 | # Copyright 2010-2015 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tools for representing MongoDB internal Timestamps.
"""
import calendar
import datetime
from bson.py3compat import integer_types
from bson.tz_util import utc
UPPERBOUND = 4294967296
class Timestamp(object):
"""MongoDB internal timestamps used in the opLog.
"""
_type_marker = 17
def __init__(self, time, inc):
"""Create a new :class:`Timestamp`.
This class is only for use with the MongoDB opLog. If you need
to store a regular timestamp, please use a
:class:`~datetime.datetime`.
Raises :class:`TypeError` if `time` is not an instance of
:class:`int` or :class:`~datetime.datetime`, or `inc` is not
an instance of :class:`int`. Raises :class:`ValueError` if
`time` or `inc` is not in [0, 2**32).
:Parameters:
- `time`: time in seconds since epoch UTC, or a naive UTC
:class:`~datetime.datetime`, or an aware
:class:`~datetime.datetime`
- `inc`: the incrementing counter
"""
if isinstance(time, datetime.datetime):
if time.utcoffset() is not None:
time = time - time.utcoffset()
time = int(calendar.timegm(time.timetuple()))
if not isinstance(time, integer_types):
raise TypeError("time must be an instance of int")
if not isinstance(inc, integer_types):
raise TypeError("inc must be an instance of int")
if not 0 <= time < UPPERBOUND:
raise ValueError("time must be contained in [0, 2**32)")
if not 0 <= inc < UPPERBOUND:
raise ValueError("inc must be contained in [0, 2**32)")
self.__time = time
self.__inc = inc
@property
def time(self):
"""Get the time portion of this :class:`Timestamp`.
"""
return self.__time
@property
def inc(self):
"""Get the inc portion of this :class:`Timestamp`.
"""
return self.__inc
def __eq__(self, other):
if isinstance(other, Timestamp):
return (self.__time == other.time and self.__inc == other.inc)
else:
return NotImplemented
def __hash__(self):
return hash(self.time) ^ hash(self.inc)
def __ne__(self, other):
return not self == other
def __lt__(self, other):
if isinstance(other, Timestamp):
return (self.time, self.inc) < (other.time, other.inc)
return NotImplemented
def __le__(self, other):
if isinstance(other, Timestamp):
return (self.time, self.inc) <= (other.time, other.inc)
return NotImplemented
def __gt__(self, other):
if isinstance(other, Timestamp):
return (self.time, self.inc) > (other.time, other.inc)
return NotImplemented
def __ge__(self, other):
if isinstance(other, Timestamp):
return (self.time, self.inc) >= (other.time, other.inc)
return NotImplemented
def __repr__(self):
return "Timestamp(%s, %s)" % (self.__time, self.__inc)
def as_datetime(self):
"""Return a :class:`~datetime.datetime` instance corresponding
to the time portion of this :class:`Timestamp`.
The returned datetime's timezone is UTC.
"""
return datetime.datetime.fromtimestamp(self.__time, utc)
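# --- Hedged usage sketch, not part of the original module ---
# Round-trip a naive UTC datetime through Timestamp; the date is an
# arbitrary example value.
def _example_round_trip():
    ts = Timestamp(datetime.datetime(2015, 1, 1), 0)
    # as_datetime() hands back the same instant as an aware UTC datetime
    assert ts.as_datetime() == datetime.datetime(2015, 1, 1, tzinfo=utc)
    return ts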
| apache-2.0 |
rail/treeherder | tests/e2e/conftest.py | 5 | 1995 | import os
import pytest
import simplejson as json
from django.template import Context, Template
from tests import test_utils
from treeherder.client import TreeherderJobCollection
base_dir = os.path.dirname(__file__)
@pytest.fixture
def pending_jobs():
"""returns a list of buildapi pending jobs"""
with open(os.path.join(base_dir, "pending.json")) as f:
return json.loads(f.read())
@pytest.fixture
def running_jobs():
"""returns a list of buildapi running jobs"""
with open(os.path.join(base_dir, "running.json")) as f:
return json.loads(f.read())
@pytest.fixture
def completed_jobs(sample_data):
"""returns a list of buildapi completed jobs"""
with open(os.path.join(base_dir, "finished.json")) as f:
content = f.read()
t = Template(content)
c = Context({"base_dir": base_dir})
return json.loads(t.render(c))
@pytest.fixture
def pending_jobs_stored(
jm, pending_jobs, result_set_stored):
"""
stores a list of buildapi pending jobs into the jobs store
using BuildApiTreeHerderAdapter
"""
pending_jobs.update(result_set_stored[0])
tjc = TreeherderJobCollection()
tj = tjc.get_job(pending_jobs)
tjc.add(tj)
test_utils.post_collection(jm.project, tjc)
@pytest.fixture
def running_jobs_stored(
jm, running_jobs, result_set_stored):
"""
stores a list of buildapi running jobs
"""
running_jobs.update(result_set_stored[0])
tjc = TreeherderJobCollection()
tj = tjc.get_job(running_jobs)
tjc.add(tj)
test_utils.post_collection(jm.project, tjc)
@pytest.fixture
def completed_jobs_stored(
jm, completed_jobs, result_set_stored, mock_post_json):
"""
stores a list of buildapi completed jobs
"""
completed_jobs['revision_hash'] = result_set_stored[0]['revision_hash']
tjc = TreeherderJobCollection()
tj = tjc.get_job(completed_jobs)
tjc.add(tj)
test_utils.post_collection(jm.project, tjc)
| mpl-2.0 |
orgito/ansible | lib/ansible/modules/storage/netapp/na_ontap_command.py | 9 | 3138 | #!/usr/bin/python
# (c) 2018, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = '''
author: NetApp Ansible Team (@carchi8py) <ng-ansibleteam@netapp.com>
description:
- "Run system-cli commands on ONTAP"
extends_documentation_fragment:
- netapp.na_ontap
module: na_ontap_command
short_description: "NetApp ONTAP Run any cli command"
version_added: "2.7"
options:
command:
description:
- a comma separated list containing the command and arguments.
'''
EXAMPLES = """
- name: run ontap cli command
na_ontap_command:
hostname: "{{ hostname }}"
username: "{{ admin username }}"
password: "{{ admin password }}"
command: ['version']
- name: run ontap cli command
na_ontap_command:
hostname: "{{ hostname }}"
username: "{{ admin username }}"
password: "{{ admin password }}"
command: ['network', 'interface', 'show']
"""
RETURN = """
"""
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
import ansible.module_utils.netapp as netapp_utils
HAS_NETAPP_LIB = netapp_utils.has_netapp_lib()
class NetAppONTAPCommand(object):
def __init__(self):
self.argument_spec = netapp_utils.na_ontap_host_argument_spec()
self.argument_spec.update(dict(
command=dict(required=True, type='list')
))
self.module = AnsibleModule(
argument_spec=self.argument_spec,
supports_check_mode=True
)
parameters = self.module.params
# set up state variables
self.command = parameters['command']
if HAS_NETAPP_LIB is False:
self.module.fail_json(msg="the python NetApp-Lib module is required")
else:
self.server = netapp_utils.setup_na_ontap_zapi(module=self.module)
def run_command(self):
command_obj = netapp_utils.zapi.NaElement("system-cli")
args_obj = netapp_utils.zapi.NaElement("args")
for arg in self.command:
args_obj.add_new_child('arg', arg)
command_obj.add_child_elem(args_obj)
try:
output = self.server.invoke_successfully(command_obj, True)
return output.to_string()
except netapp_utils.zapi.NaApiError as error:
self.module.fail_json(msg='Error running command %s: %s' %
(self.command, to_native(error)),
exception=traceback.format_exc())
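# Hedged illustration: for command=['network', 'interface', 'show'] the
# request built above serializes roughly as
#   <system-cli><args><arg>network</arg><arg>interface</arg><arg>show</arg></args></system-cli>
# (layout inferred from the NaElement calls, not from NetApp docs).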
def apply(self):
changed = True
output = self.run_command()
self.module.exit_json(changed=changed, msg=output)
def main():
"""
Execute action from playbook
"""
command = NetAppONTAPCommand()
command.apply()
if __name__ == '__main__':
main()
| gpl-3.0 |
imtapps/authorize | authorize/aim.py | 2 | 8529 | import urllib
from authorize import gen_xml as xml, util, base, responses as resp
from authorize.gen_xml import CREDIT_CARD, ECHECK, AuthorizeSystemError
from authorize.gen_xml import AUTH_CAPTURE, AUTH_ONLY, CAPTURE_ONLY
from authorize.gen_xml import CREDIT, PRIOR_AUTH_CAPTURE, VOID
class Api(base.BaseApi):
"""
This object is the interface to Authorize.net AIM payment API, the
following list is a list of all arguments that the spec supports,
you can pass all of these in the transaction method as keyword
arguments without adding 'x_' at the beginning of the name. For
example:
result_dict = aim.transaction(
amount=10.00,
card_num=u"4111111111111111",
exp_date=u"2009-07"
)
Every string argument must be unicode.
NOTE:
It's important that you make sure that your Authorize dashboard
uses the same delimiter and encapsulator that you are using in
your API objects. If you don't check this it could happen that the
direct_response cannot be parsed even in those cases where it's
absolutely necessary, like in the AIM API.
Minimum required arguments for a transaction:
x_login: up to 20 chars
x_tran_key: 16 chars
x_type: AUTH_CAPTURE (default), AUTH_ONLY, CAPTURE_ONLY, CREDIT, PRIOR_AUTH_CAPTURE, VOID
x_amount: up to 15 digits with a decimal point, no dollar symbol, all inclusive (tax, shipping etc)
x_card_num: between 13 and 16 digits, with x_type == CREDIT only the last 4 digits are required
x_exp_date: MMYY, MM/YY, MM-YY, MMYYYY, MM/YYYY, MM-YYYY
x_trans_id: only required for CREDIT, PRIOR_AUTH_CAPTURE, VOID
x_auth_code: authorization code of an original transaction not authorized on the payment gateway,
6 chars only for CAPTURE_ONLY
Protocol related required arguments, you MUST NOT pass these:
x_version: 3.1
x_delim_char: char delimitator for the response
x_delim_data: TRUE (return a transaction response)
x_encap_char: character used to encapsulate each response field (optional)
x_relay_response: FALSE
Optional arguments or conditional arguments (arguments required only in certain cases):
x_method: CC (default), ECHECK
x_recurring_billing: TRUE, FALSE, T, F, YES, NO, Y, N (optional, default F)
x_card_code: optional
x_test_request: TRUE, FALSE, T, F, YES, NO, Y, N, 1, 0 (default FALSE)
x_duplicate_window: avoid multiple transactions submitted. (default 0)
x_invoice_num: up to 20 chars (optional)
x_description: up to 255 chars (optional)
Repeat multiple times for each itemID:
NOTE: This argument can also be passed using a list of lists with
the name 'items'.
x_line_item: ItemID<|>Item name<|>item description<|>itemX quantity<|>item price (unit cost)<|>itemX taxable<|>
31 chars, 31 chars, 255 chars , up to 2 digits >0, up to 2 digits >0 , TRUE, FALSE etc...
x_first_name: billing name, 50 chars (opt)
x_last_name: 50 chars (opt)
x_company: 50 chars (opt)
x_address: billing address, 60 chars (opt), required with avs
x_city: 40 chars
x_state: 40 chars or valid 2 char code
x_zip: 20 chars required with avs
x_country: 60 chars
x_phone: 25 digits (no letters)
x_fax: 25 digits (no letters)
x_email: up to 255 chars
x_email_customer: send an email to the customer: TRUE, FALSE, T, F, YES, NO, Y, N, 1, 0
x_header_email_receipt: plain text, header of the email receipt
x_footer_email_receipt: plain text, footer of the email receipt
x_cust_id: merchant customer id, 20 chars
x_customer_ip: customer ip address, 15 chars, (Useful for fraud detection)
x_ship_to_first_name: 50 chars
x_ship_to_last_name: 50 chars
x_ship_to_company: 50 chars
x_ship_to_address: 60 chars
x_ship_to_city: 40 chars
x_ship_to_state: 40 chars or 2 char state code
x_ship_to_zip: 20 chars
x_ship_to_country: 60 chars
x_tax: tax item name<|>tax description<|>tax amount
name of tax , describe tax , digits with no $ sign
x_amount includes this already
x_freight: freight item name<|>freight description<|>freight amount
name of freight , describe it , digits with no $ sign
x_amount includes this already
x_duty: duty item name<|>duty description<|>duty amount
item name , description , digits with no $ sign
x_amount includes this already
x_tax_exempt: TRUE, FALSE, T, F, YES, NO, Y, N, 1, 0
x_po_num: 25 chars, merchant assigned purchase order number
x_authenticator_indicator: only AUTH_CAPTURE or AUTH_ONLY when processed
through cardholder authentication program
x_cardholder_authentication_value: only AUTH_CAPTURE or AUTH_ONLY when processed
through cardholder authentication program
valid combinations of the fields above:
VISA:
indicator - value
-----------------
5 - something
6 - something
6 - <blank>
7 - <blank>
7 - something
<blank> - <blank>
MasterCard:
indicator - value
-----------------
0 - <blank>
2 - something
1 - Null
Null - Null
there are also custom fields that will be added to the transaction
and can be passed to the transaction method using the keyword argument:
extra_fields of type C{dict}
like:
result_dict = aim.transaction(
amount=10.00,
card_num=u"4111111111111111",
exp_date=u"2009-07",
extra_fields={u'color': u'blue',
u'comment': u'ring twice at the door'}
)
"""
responses = resp.aim_codes
def __init__(self, *args, **kwargs):
super(Api, self).__init__(*args, **kwargs)
if not self.is_test:
self.server = "secure.authorize.net"
else:
self.server = "test.authorize.net"
self.path = "/gateway/transact.dll"
self.headers = {'Content-Type': 'x-www-url-encoded; charset=utf-8'}
self.required_arguments = {
'x_login': self.login,
'x_tran_key': self.key,
'x_delim_char': u'|',
'x_encap_char': u'',
'x_delim_data': True,
'x_relay_response': False,
'x_version': u'3.1'
}
def transaction(self, **kwargs):
"""
Create a transaction with the service through aim
"""
extra_fields = kwargs.pop('extra_fields', {})
argslist = []
for field, value in kwargs.iteritems():
if field == "items":
# these are the items that are bought here.
field_name = "x_line_item"
for item in value:
s_item = u"<|>".join(unicode(item))
argslist.append((field_name, s_item.encode('utf-8')))
else:
if field == "authentication_indicator" or \
field == "cardholder_authentication_value":
value = unicode(urllib.quote(str(value)))
field_name = "x_" + field
if isinstance(value, list):
value = u'<|>'.join(value)
argslist.append((field_name, xml.utf8convert(value)))
for args in [self.required_arguments, extra_fields]:
for field, value in args.iteritems():
argslist.append((field, xml.utf8convert(value)))
body = urllib.urlencode(argslist)
return self.request(body)
def parse_response(self, response):
"""
Parse the response string.
@param response: The response string
@type response: C{str}
"""
dict_response = xml.parse_direct_response(response, self.delimiter, self.encapsulator)
if dict_response.code != u"1":
if self.do_raise:
raise resp.AuthorizeError(dict_response.code,
dict_response.reason_text,
dict_response)
return dict_response
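# --- Hedged usage sketch, not part of the original module ---
# Submit a test AUTH_CAPTURE through an existing Api instance; the card
# number, expiry and invoice number are placeholder values only.
def _example_auth_capture(api):
    return api.transaction(
        type=AUTH_CAPTURE,
        amount=10.00,
        card_num=u"4111111111111111",
        exp_date=u"2009-07",
        invoice_num=u"INV-0001",  # hypothetical merchant invoice number
    )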
| mit |
ge0rgi/cinder | cinder/objects/cluster.py | 5 | 8814 | # Copyright (c) 2016 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_versionedobjects import fields
from cinder import db
from cinder import exception
from cinder.i18n import _
from cinder import objects
from cinder.objects import base
from cinder.objects import fields as c_fields
from cinder import utils
@base.CinderObjectRegistry.register
class Cluster(base.CinderPersistentObject, base.CinderObject,
base.CinderComparableObject):
"""Cluster Versioned Object.
Method get_by_id supports as additional named arguments:
- get_services: If we want to load all services from this cluster.
- services_summary: If we want to load num_nodes and num_down_nodes
fields.
- is_up: Boolean value to filter based on the cluster's up status.
- read_deleted: Filtering based on delete status. Default value "no".
- Any other cluster field will be used as a filter.
"""
# Version 1.0: Initial version
# Version 1.1: Add replication fields
VERSION = '1.1'
OPTIONAL_FIELDS = ('num_hosts', 'num_down_hosts', 'services')
# NOTE(geguileo): We don't want to expose race_preventer field at the OVO
# layer since it is only meant for the DB layer internal mechanism to
# prevent races.
fields = {
'id': fields.IntegerField(),
'name': fields.StringField(nullable=False),
'binary': fields.StringField(nullable=False),
'disabled': fields.BooleanField(default=False, nullable=True),
'disabled_reason': fields.StringField(nullable=True),
'num_hosts': fields.IntegerField(default=0, read_only=True),
'num_down_hosts': fields.IntegerField(default=0, read_only=True),
'last_heartbeat': fields.DateTimeField(nullable=True, read_only=True),
'services': fields.ObjectField('ServiceList', nullable=True,
read_only=True),
# Replication properties
'replication_status': c_fields.ReplicationStatusField(nullable=True),
'frozen': fields.BooleanField(default=False),
'active_backend_id': fields.StringField(nullable=True),
}
def obj_make_compatible(self, primitive, target_version):
"""Make a cluster representation compatible with a target version."""
# Convert all related objects
super(Cluster, self).obj_make_compatible(primitive, target_version)
# Before v1.1 we didn't have replication fields, so we have to remove
# them.
if target_version == '1.0':
for obj_field in ('replication_status', 'frozen',
'active_backend_id'):
primitive.pop(obj_field, None)
@classmethod
def _get_expected_attrs(cls, context, *args, **kwargs):
"""Return expected attributes when getting a cluster.
Expected attributes depend on whether we are retrieving all related
services as well as if we are getting the services summary.
"""
expected_attrs = []
if kwargs.get('get_services'):
expected_attrs.append('services')
if kwargs.get('services_summary'):
expected_attrs.extend(('num_hosts', 'num_down_hosts'))
return expected_attrs
@staticmethod
def _from_db_object(context, cluster, db_cluster, expected_attrs=None):
"""Fill cluster OVO fields from cluster ORM instance."""
expected_attrs = expected_attrs or tuple()
for name, field in cluster.fields.items():
# The only field that cannot be assigned using setattr is services,
# because it is an ObjectField. So we don't assign the value if
it's an unexpected optional field or if it's the services field.
if ((name in Cluster.OPTIONAL_FIELDS
and name not in expected_attrs) or name == 'services'):
continue
value = getattr(db_cluster, name)
setattr(cluster, name, value)
cluster._context = context
if 'services' in expected_attrs:
cluster.services = base.obj_make_list(
context,
objects.ServiceList(context),
objects.Service,
db_cluster.services)
cluster.obj_reset_changes()
return cluster
def obj_load_attr(self, attrname):
"""Lazy load services attribute."""
# NOTE(geguileo): We only allow lazy loading services to raise
# awareness of the high cost of lazy loading num_hosts and
# num_down_hosts, so if we are going to need this information we should
be certain we really need it, and it should be loaded when retrieving the
# data from the DB the first time we read the OVO.
if attrname != 'services':
raise exception.ObjectActionError(
action='obj_load_attr',
reason=_('attribute %s not lazy-loadable') % attrname)
if not self._context:
raise exception.OrphanedObjectError(method='obj_load_attr',
objtype=self.obj_name())
self.services = objects.ServiceList.get_all(
self._context, {'cluster_name': self.name})
self.obj_reset_changes(fields=('services',))
def create(self):
if self.obj_attr_is_set('id'):
raise exception.ObjectActionError(action='create',
reason=_('already created'))
updates = self.cinder_obj_get_changes()
if updates:
for field in self.OPTIONAL_FIELDS:
if field in updates:
raise exception.ObjectActionError(
action='create', reason=_('%s assigned') % field)
db_cluster = db.cluster_create(self._context, updates)
self._from_db_object(self._context, self, db_cluster)
def save(self):
updates = self.cinder_obj_get_changes()
if updates:
for field in self.OPTIONAL_FIELDS:
if field in updates:
raise exception.ObjectActionError(
action='save', reason=_('%s changed') % field)
db.cluster_update(self._context, self.id, updates)
self.obj_reset_changes()
def destroy(self):
with self.obj_as_admin():
updated_values = db.cluster_destroy(self._context, self.id)
for field, value in updated_values.items():
setattr(self, field, value)
self.obj_reset_changes(updated_values.keys())
@property
def is_up(self):
return (self.last_heartbeat and
self.last_heartbeat >= utils.service_expired_time(True))
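# A minimal usage sketch (assumes a valid request context `ctxt`; get_by_id
# is inherited from the persistent-object base class and supports the named
# arguments listed in the class docstring):
#
#   cluster = objects.Cluster.get_by_id(ctxt, None, name='mycluster@lvm',
#                                       services_summary=True)
#   if cluster.is_up:
#       print(cluster.num_hosts, cluster.num_down_hosts)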
@base.CinderObjectRegistry.register
class ClusterList(base.ObjectListBase, base.CinderObject):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {'objects': fields.ListOfObjectsField('Cluster')}
@classmethod
def get_all(cls, context, is_up=None, get_services=False,
services_summary=False, read_deleted='no', **filters):
"""Get all clusters that match the criteria.
:param is_up: Boolean value to filter based on the cluster's up status.
:param get_services: If we want to load all services from this cluster.
:param services_summary: If we want to load num_nodes and
num_down_nodes fields.
:param read_deleted: Filtering based on delete status. Default value is
"no".
:param filters: Field based filters in the form of key/value.
"""
expected_attrs = Cluster._get_expected_attrs(
context,
get_services=get_services,
services_summary=services_summary)
clusters = db.cluster_get_all(context, is_up=is_up,
get_services=get_services,
services_summary=services_summary,
read_deleted=read_deleted,
**filters)
return base.obj_make_list(context, cls(context), Cluster, clusters,
expected_attrs=expected_attrs)
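# Example (illustrative): fetch summaries for all up clusters running the
# volume service, without loading each cluster's full service list:
#
#   clusters = objects.ClusterList.get_all(ctxt, is_up=True,
#                                          services_summary=True,
#                                          binary='cinder-volume')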
| apache-2.0 |
Lekanich/intellij-community | python/lib/Lib/distutils/command/bdist.py | 81 | 5471 | """distutils.command.bdist
Implements the Distutils 'bdist' command (create a built [binary]
distribution)."""
# This module should be kept compatible with Python 2.1.
__revision__ = "$Id: bdist.py 37828 2004-11-10 22:23:15Z loewis $"
import os, string
from types import *
from distutils.core import Command
from distutils.errors import *
from distutils.util import get_platform
def show_formats ():
"""Print list of available formats (arguments to "--format" option).
"""
from distutils.fancy_getopt import FancyGetopt
formats=[]
for format in bdist.format_commands:
formats.append(("formats=" + format, None,
bdist.format_command[format][1]))
pretty_printer = FancyGetopt(formats)
pretty_printer.print_help("List of available distribution formats:")
class bdist (Command):
description = "create a built (binary) distribution"
user_options = [('bdist-base=', 'b',
"temporary directory for creating built distributions"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('formats=', None,
"formats for distribution (comma-separated list)"),
('dist-dir=', 'd',
"directory to put final built distributions in "
"[default: dist]"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
]
boolean_options = ['skip-build']
help_options = [
('help-formats', None,
"lists available distribution formats", show_formats),
]
# The following commands do not take a format option from bdist
no_format_option = ('bdist_rpm',
#'bdist_sdux', 'bdist_pkgtool'
)
# This won't do in reality: will need to distinguish RPM-ish Linux,
# Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS.
default_format = { 'posix': 'gztar',
'java': 'gztar',
'nt': 'zip',
'os2': 'zip', }
# Establish the preferred order (for the --help-formats option).
format_commands = ['rpm', 'gztar', 'bztar', 'ztar', 'tar',
'wininst', 'zip',
#'pkgtool', 'sdux'
]
# And the real information.
format_command = { 'rpm': ('bdist_rpm', "RPM distribution"),
'zip': ('bdist_dumb', "ZIP file"),
'gztar': ('bdist_dumb', "gzip'ed tar file"),
'bztar': ('bdist_dumb', "bzip2'ed tar file"),
'ztar': ('bdist_dumb', "compressed tar file"),
'tar': ('bdist_dumb', "tar file"),
'wininst': ('bdist_wininst',
"Windows executable installer"),
#'pkgtool': ('bdist_pkgtool',
# "Solaris pkgtool distribution"),
#'sdux': ('bdist_sdux', "HP-UX swinstall depot"),
}
def initialize_options (self):
self.bdist_base = None
self.plat_name = None
self.formats = None
self.dist_dir = None
self.skip_build = 0
# initialize_options()
def finalize_options (self):
# have to finalize 'plat_name' before 'bdist_base'
if self.plat_name is None:
self.plat_name = get_platform()
# 'bdist_base' -- parent of per-built-distribution-format
# temporary directories (eg. we'll probably have
# "build/bdist.<plat>/dumb", "build/bdist.<plat>/rpm", etc.)
if self.bdist_base is None:
build_base = self.get_finalized_command('build').build_base
self.bdist_base = os.path.join(build_base,
'bdist.' + self.plat_name)
self.ensure_string_list('formats')
if self.formats is None:
try:
self.formats = [self.default_format[os.name]]
except KeyError:
raise DistutilsPlatformError, \
"don't know how to create built distributions " + \
"on platform %s" % os.name
if self.dist_dir is None:
self.dist_dir = "dist"
# finalize_options()
def run (self):
# Figure out which sub-commands we need to run.
commands = []
for format in self.formats:
try:
commands.append(self.format_command[format][0])
except KeyError:
raise DistutilsOptionError, "invalid format '%s'" % format
# Reinitialize and run each command.
for i in range(len(self.formats)):
cmd_name = commands[i]
sub_cmd = self.reinitialize_command(cmd_name)
if cmd_name not in self.no_format_option:
sub_cmd.format = self.formats[i]
# If we're going to need to run this command again, tell it to
# keep its temporary files around so subsequent runs go faster.
if cmd_name in commands[i+1:]:
sub_cmd.keep_temp = 1
self.run_command(cmd_name)
# run()
# class bdist
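# Usage sketch (standard distutils invocation; the format list is
# illustrative):
#
#   python setup.py bdist --formats=gztar,zip
#
# run() resolves each requested format through format_command, reinitializes
# the matching sub-command (bdist_dumb for both formats here) and runs it
# once per format, keeping temporary files around when the same command is
# needed again.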
| apache-2.0 |
rianf2/FinalYearProject | Website/node_modules/node-gyp/gyp/pylib/gyp/MSVSVersion.py | 1509 | 17165 | # Copyright (c) 2013 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Handle version information related to Visual Stuio."""
import errno
import os
import re
import subprocess
import sys
import gyp
import glob
class VisualStudioVersion(object):
"""Information regarding a version of Visual Studio."""
def __init__(self, short_name, description,
solution_version, project_version, flat_sln, uses_vcxproj,
path, sdk_based, default_toolset=None):
self.short_name = short_name
self.description = description
self.solution_version = solution_version
self.project_version = project_version
self.flat_sln = flat_sln
self.uses_vcxproj = uses_vcxproj
self.path = path
self.sdk_based = sdk_based
self.default_toolset = default_toolset
def ShortName(self):
return self.short_name
def Description(self):
"""Get the full description of the version."""
return self.description
def SolutionVersion(self):
"""Get the version number of the sln files."""
return self.solution_version
def ProjectVersion(self):
"""Get the version number of the vcproj or vcxproj files."""
return self.project_version
def FlatSolution(self):
return self.flat_sln
def UsesVcxproj(self):
"""Returns true if this version uses a vcxproj file."""
return self.uses_vcxproj
def ProjectExtension(self):
"""Returns the file extension for the project."""
return self.uses_vcxproj and '.vcxproj' or '.vcproj'
def Path(self):
"""Returns the path to Visual Studio installation."""
return self.path
def ToolPath(self, tool):
"""Returns the path to a given compiler tool. """
return os.path.normpath(os.path.join(self.path, "VC/bin", tool))
def DefaultToolset(self):
"""Returns the msbuild toolset version that will be used in the absence
of a user override."""
return self.default_toolset
def SetupScript(self, target_arch):
"""Returns a command (with arguments) to be used to set up the
environment."""
# Check if we are running in the SDK command line environment and use
# the setup script from the SDK if so. |target_arch| should be either
# 'x86' or 'x64'.
assert target_arch in ('x86', 'x64')
sdk_dir = os.environ.get('WindowsSDKDir')
if self.sdk_based and sdk_dir:
return [os.path.normpath(os.path.join(sdk_dir, 'Bin/SetEnv.Cmd')),
'/' + target_arch]
else:
# We don't use VC/vcvarsall.bat for x86 because vcvarsall calls
# vcvars32, which it can only find if VS??COMNTOOLS is set, which it
# isn't always.
if target_arch == 'x86':
if self.short_name >= '2013' and self.short_name[-1] != 'e' and (
os.environ.get('PROCESSOR_ARCHITECTURE') == 'AMD64' or
os.environ.get('PROCESSOR_ARCHITEW6432') == 'AMD64'):
# VS2013 and later, non-Express have a x64-x86 cross that we want
# to prefer.
return [os.path.normpath(
os.path.join(self.path, 'VC/vcvarsall.bat')), 'amd64_x86']
# Otherwise, the standard x86 compiler.
return [os.path.normpath(
os.path.join(self.path, 'Common7/Tools/vsvars32.bat'))]
else:
assert target_arch == 'x64'
arg = 'x86_amd64'
# Use the 64-on-64 compiler if we're not using an express
# edition and we're running on a 64bit OS.
if self.short_name[-1] != 'e' and (
os.environ.get('PROCESSOR_ARCHITECTURE') == 'AMD64' or
os.environ.get('PROCESSOR_ARCHITEW6432') == 'AMD64'):
arg = 'amd64'
return [os.path.normpath(
os.path.join(self.path, 'VC/vcvarsall.bat')), arg]
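# For example, on a 64-bit host with a non-Express VS2013 install
# (hypothetical path), SetupScript('x86') would return something like
#   [r'C:\...\VC\vcvarsall.bat', 'amd64_x86']
# preferring the x64->x86 cross compiler per the branch above.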
def _RegistryQueryBase(sysdir, key, value):
"""Use reg.exe to read a particular key.
While ideally we might use the win32 module, we would like gyp to be
python neutral, so for instance cygwin python lacks this module.
Arguments:
sysdir: The system subdirectory to attempt to launch reg.exe from.
key: The registry key to read from.
value: The particular value to read.
Return:
stdout from reg.exe, or None for failure.
"""
# Skip if not on Windows or Python Win32 setup issue
if sys.platform not in ('win32', 'cygwin'):
return None
# Setup params to pass to and attempt to launch reg.exe
cmd = [os.path.join(os.environ.get('WINDIR', ''), sysdir, 'reg.exe'),
'query', key]
if value:
cmd.extend(['/v', value])
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Obtain the stdout from reg.exe, reading to the end so p.returncode is valid
# Note that the error text may be in [1] in some cases
text = p.communicate()[0]
# Check return code from reg.exe; officially 0==success and 1==error
if p.returncode:
return None
return text
def _RegistryQuery(key, value=None):
r"""Use reg.exe to read a particular key through _RegistryQueryBase.
First tries to launch from %WinDir%\Sysnative to avoid WoW64 redirection. If
that fails, it falls back to System32. Sysnative is available on Vista and
up and available on Windows Server 2003 and XP through KB patch 942589. Note
that Sysnative will always fail if using 64-bit python due to it being a
virtual directory and System32 will work correctly in the first place.
KB 942589 - http://support.microsoft.com/kb/942589/en-us.
Arguments:
key: The registry key.
value: The particular registry value to read (optional).
Return:
stdout from reg.exe, or None for failure.
"""
text = None
try:
text = _RegistryQueryBase('Sysnative', key, value)
except OSError, e:
if e.errno == errno.ENOENT:
text = _RegistryQueryBase('System32', key, value)
else:
raise
return text
def _RegistryGetValueUsingWinReg(key, value):
"""Use the _winreg module to obtain the value of a registry key.
Args:
key: The registry key.
value: The particular registry value to read.
Return:
contents of the registry key's value, or None on failure. Throws
ImportError if _winreg is unavailable.
"""
import _winreg
try:
root, subkey = key.split('\\', 1)
assert root == 'HKLM' # Only need HKLM for now.
with _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, subkey) as hkey:
return _winreg.QueryValueEx(hkey, value)[0]
except WindowsError:
return None
def _RegistryGetValue(key, value):
"""Use _winreg or reg.exe to obtain the value of a registry key.
Using _winreg is preferable because it solves an issue on some corporate
environments where access to reg.exe is locked down. However, we still need
to fallback to reg.exe for the case where the _winreg module is not available
(for example in cygwin python).
Args:
key: The registry key.
value: The particular registry value to read.
Return:
contents of the registry key's value, or None on failure.
"""
try:
return _RegistryGetValueUsingWinReg(key, value)
except ImportError:
pass
# Fallback to reg.exe if we fail to import _winreg.
text = _RegistryQuery(key, value)
if not text:
return None
# Extract value.
match = re.search(r'REG_\w+\s+([^\r]+)\r\n', text)
if not match:
return None
return match.group(1)
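# A hedged example (key and value are illustrative; returns None on
# non-Windows platforms or when the key is absent):
#
#   _RegistryGetValue(r'HKLM\Software\Microsoft\VisualStudio\12.0',
#                     'InstallDir')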
def _CreateVersion(name, path, sdk_based=False):
"""Sets up MSVS project generation.
Setup is based off the GYP_MSVS_VERSION environment variable or whatever is
autodetected if GYP_MSVS_VERSION is not explicitly specified. If a version is
passed in that doesn't match a value in versions, Python will throw an error.
"""
if path:
path = os.path.normpath(path)
versions = {
'2015': VisualStudioVersion('2015',
'Visual Studio 2015',
solution_version='12.00',
project_version='14.0',
flat_sln=False,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based,
default_toolset='v140'),
'2013': VisualStudioVersion('2013',
'Visual Studio 2013',
solution_version='13.00',
project_version='12.0',
flat_sln=False,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based,
default_toolset='v120'),
'2013e': VisualStudioVersion('2013e',
'Visual Studio 2013',
solution_version='13.00',
project_version='12.0',
flat_sln=True,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based,
default_toolset='v120'),
'2012': VisualStudioVersion('2012',
'Visual Studio 2012',
solution_version='12.00',
project_version='4.0',
flat_sln=False,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based,
default_toolset='v110'),
'2012e': VisualStudioVersion('2012e',
'Visual Studio 2012',
solution_version='12.00',
project_version='4.0',
flat_sln=True,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based,
default_toolset='v110'),
'2010': VisualStudioVersion('2010',
'Visual Studio 2010',
solution_version='11.00',
project_version='4.0',
flat_sln=False,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based),
'2010e': VisualStudioVersion('2010e',
'Visual C++ Express 2010',
solution_version='11.00',
project_version='4.0',
flat_sln=True,
uses_vcxproj=True,
path=path,
sdk_based=sdk_based),
'2008': VisualStudioVersion('2008',
'Visual Studio 2008',
solution_version='10.00',
project_version='9.00',
flat_sln=False,
uses_vcxproj=False,
path=path,
sdk_based=sdk_based),
'2008e': VisualStudioVersion('2008e',
'Visual Studio 2008',
solution_version='10.00',
project_version='9.00',
flat_sln=True,
uses_vcxproj=False,
path=path,
sdk_based=sdk_based),
'2005': VisualStudioVersion('2005',
'Visual Studio 2005',
solution_version='9.00',
project_version='8.00',
flat_sln=False,
uses_vcxproj=False,
path=path,
sdk_based=sdk_based),
'2005e': VisualStudioVersion('2005e',
'Visual Studio 2005',
solution_version='9.00',
project_version='8.00',
flat_sln=True,
uses_vcxproj=False,
path=path,
sdk_based=sdk_based),
}
return versions[str(name)]
def _ConvertToCygpath(path):
"""Convert to cygwin path if we are using cygwin."""
if sys.platform == 'cygwin':
p = subprocess.Popen(['cygpath', path], stdout=subprocess.PIPE)
path = p.communicate()[0].strip()
return path
def _DetectVisualStudioVersions(versions_to_check, force_express):
"""Collect the list of installed visual studio versions.
Returns:
A list of visual studio versions installed in descending order of
usage preference.
Base this on the registry and a quick check if devenv.exe exists.
Only versions 8-10 are considered.
Possibilities are:
2005(e) - Visual Studio 2005 (8)
2008(e) - Visual Studio 2008 (9)
2010(e) - Visual Studio 2010 (10)
2012(e) - Visual Studio 2012 (11)
2013(e) - Visual Studio 2013 (12)
2015 - Visual Studio 2015 (14)
Where (e) is e for express editions of MSVS and blank otherwise.
"""
version_to_year = {
'8.0': '2005',
'9.0': '2008',
'10.0': '2010',
'11.0': '2012',
'12.0': '2013',
'14.0': '2015',
}
versions = []
for version in versions_to_check:
# Old method of searching for which VS version is installed
# We don't use the 2010-encouraged-way because we also want to get the
# path to the binaries, which it doesn't offer.
keys = [r'HKLM\Software\Microsoft\VisualStudio\%s' % version,
r'HKLM\Software\Wow6432Node\Microsoft\VisualStudio\%s' % version,
r'HKLM\Software\Microsoft\VCExpress\%s' % version,
r'HKLM\Software\Wow6432Node\Microsoft\VCExpress\%s' % version]
for index in range(len(keys)):
path = _RegistryGetValue(keys[index], 'InstallDir')
if not path:
continue
path = _ConvertToCygpath(path)
# Check for full.
full_path = os.path.join(path, 'devenv.exe')
express_path = os.path.join(path, '*express.exe')
if not force_express and os.path.exists(full_path):
# Add this one.
versions.append(_CreateVersion(version_to_year[version],
os.path.join(path, '..', '..')))
# Check for express.
elif glob.glob(express_path):
# Add this one.
versions.append(_CreateVersion(version_to_year[version] + 'e',
os.path.join(path, '..', '..')))
# The old method above does not work when only SDK is installed.
keys = [r'HKLM\Software\Microsoft\VisualStudio\SxS\VC7',
r'HKLM\Software\Wow6432Node\Microsoft\VisualStudio\SxS\VC7']
for index in range(len(keys)):
path = _RegistryGetValue(keys[index], version)
if not path:
continue
path = _ConvertToCygpath(path)
if version != '14.0': # There is no Express edition for 2015.
versions.append(_CreateVersion(version_to_year[version] + 'e',
os.path.join(path, '..'), sdk_based=True))
return versions
def SelectVisualStudioVersion(version='auto', allow_fallback=True):
"""Select which version of Visual Studio projects to generate.
Arguments:
version: Hook to allow caller to force a particular version (vs auto).
Returns:
An object representing a visual studio project format version.
"""
# In auto mode, check environment variable for override.
if version == 'auto':
version = os.environ.get('GYP_MSVS_VERSION', 'auto')
version_map = {
'auto': ('14.0', '12.0', '10.0', '9.0', '8.0', '11.0'),
'2005': ('8.0',),
'2005e': ('8.0',),
'2008': ('9.0',),
'2008e': ('9.0',),
'2010': ('10.0',),
'2010e': ('10.0',),
'2012': ('11.0',),
'2012e': ('11.0',),
'2013': ('12.0',),
'2013e': ('12.0',),
'2015': ('14.0',),
}
override_path = os.environ.get('GYP_MSVS_OVERRIDE_PATH')
if override_path:
msvs_version = os.environ.get('GYP_MSVS_VERSION')
if not msvs_version:
raise ValueError('GYP_MSVS_OVERRIDE_PATH requires GYP_MSVS_VERSION to be '
'set to a particular version (e.g. 2010e).')
return _CreateVersion(msvs_version, override_path, sdk_based=True)
version = str(version)
versions = _DetectVisualStudioVersions(version_map[version], 'e' in version)
if not versions:
if not allow_fallback:
raise ValueError('Could not locate Visual Studio installation.')
if version == 'auto':
# Default to 2005 if we couldn't find anything
return _CreateVersion('2005', None)
else:
return _CreateVersion(version, None)
return versions[0]
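# Typical use from a generator (auto-detection, honoring GYP_MSVS_VERSION
# and GYP_MSVS_OVERRIDE_PATH, falling back to 2005 when nothing is found):
#
#   version = SelectVisualStudioVersion()
#   ext = version.ProjectExtension()  # '.vcxproj' for VS2010 and later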
| mit |
lara-unb/ema_tao | scripts/ema/modules/control.py | 1 | 2094 | #!/usr/bin/env python
theta = {
'left': {'min': 300, 'max': 40},
'right': {'min': 120, 'max': 220},
'shift': 35
}
class Control:
def __init__(self, config_dict):
self.config_dict = config_dict
def fx(self, id, angle, speed, speed_ref):
dth = (speed/speed_ref)*theta['shift']
theta_min = theta[id]['min'] - dth
theta_max = theta[id]['max'] - dth
# check if angle in range (theta_min, theta_max)
if theta_min <= angle and angle <= theta_max:
return 1
elif theta[id]['min'] > theta[id]['max']:
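# the stimulation window wraps past 360 degrees (e.g. the 'left'
# channel spans 300..40), so membership is tested modulo 360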
if angle <= theta_min and angle <= theta_max:
if theta_min <= angle + 360 and angle <= theta_max:
return 1
elif angle >= theta_min and angle >= theta_max:
if theta_min <= angle and angle <= theta_max + 360:
return 1
# return 0 otherwise
return 0
def g(self, error):
Kp = 1/float(5000)
Ki = 1/float(100000)
# If there is a change of signal, reset
if ((error[-2] >= 0) and (error[-1] < 0)) or ((error[-2] < 0) and (error[-1] >= 0)):
errorTemp = [0 for x in range(len(error))]
errorTemp[-1] = error[-1]
error = errorTemp
signal = 0.5 + Kp*error[-1]+Ki*sum(error)
# saturation
if signal > 1:
signal = 1
error[-1] = 0
elif signal < 0:
signal = 0
error[-1] = 0
return signal
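# g() above is a discrete PI controller around a 0.5 baseline duty cycle:
# signal = 0.5 + Kp*error[-1] + Ki*sum(error), clamped to [0, 1], with the
# error history reset on sign changes and on saturation (a basic
# anti-windup). Worked example (numbers illustrative): with error history
# [0, 1000], signal = 0.5 + 1000/5000 + 1000/100000 = 0.71.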
def calculate(self, angle, speed, speed_ref, speed_err):
pw_max = 500
fx_left = self.fx('left', angle, speed, speed_ref)
fx_right = self.fx('right', angle, speed, speed_ref)
g = self.g(speed_err)
pw_left = fx_left*g*pw_max
pw_right = fx_right*g*pw_max
# print g, pw_left, pw_right
# print [round(x) for x in speed_err[-10:]]
return pw_left, pw_right
| apache-2.0 |
pizzathief/scipy | scipy/sparse/_matrix_io.py | 5 | 5309 | import numpy as np
import scipy.sparse
__all__ = ['save_npz', 'load_npz']
# Make loading safe vs. malicious input
PICKLE_KWARGS = dict(allow_pickle=False)
def save_npz(file, matrix, compressed=True):
""" Save a sparse matrix to a file using ``.npz`` format.
Parameters
----------
file : str or file-like object
Either the file name (string) or an open file (file-like object)
where the data will be saved. If file is a string, the ``.npz``
extension will be appended to the file name if it is not already
there.
matrix: spmatrix (format: ``csc``, ``csr``, ``bsr``, ``dia`` or ``coo``)
The sparse matrix to save.
compressed : bool, optional
Allow compressing the file. Default: True
See Also
--------
scipy.sparse.load_npz: Load a sparse matrix from a file using ``.npz`` format.
numpy.savez: Save several arrays into a ``.npz`` archive.
numpy.savez_compressed : Save several arrays into a compressed ``.npz`` archive.
Examples
--------
Store sparse matrix to disk, and load it again:
>>> import numpy as np
>>> import scipy.sparse
>>> sparse_matrix = scipy.sparse.csc_matrix(np.array([[0, 0, 3], [4, 0, 0]]))
>>> sparse_matrix
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
>>> sparse_matrix.todense()
matrix([[0, 0, 3],
[4, 0, 0]], dtype=int64)
>>> scipy.sparse.save_npz('/tmp/sparse_matrix.npz', sparse_matrix)
>>> sparse_matrix = scipy.sparse.load_npz('/tmp/sparse_matrix.npz')
>>> sparse_matrix
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
>>> sparse_matrix.todense()
matrix([[0, 0, 3],
[4, 0, 0]], dtype=int64)
"""
arrays_dict = {}
if matrix.format in ('csc', 'csr', 'bsr'):
arrays_dict.update(indices=matrix.indices, indptr=matrix.indptr)
elif matrix.format == 'dia':
arrays_dict.update(offsets=matrix.offsets)
elif matrix.format == 'coo':
arrays_dict.update(row=matrix.row, col=matrix.col)
else:
raise NotImplementedError('Save is not implemented for sparse matrix of format {}.'.format(matrix.format))
arrays_dict.update(
format=matrix.format.encode('ascii'),
shape=matrix.shape,
data=matrix.data
)
if compressed:
np.savez_compressed(file, **arrays_dict)
else:
np.savez(file, **arrays_dict)
def load_npz(file):
""" Load a sparse matrix from a file using ``.npz`` format.
Parameters
----------
file : str or file-like object
Either the file name (string) or an open file (file-like object)
where the data will be loaded.
Returns
-------
result : csc_matrix, csr_matrix, bsr_matrix, dia_matrix or coo_matrix
A sparse matrix containing the loaded data.
Raises
------
IOError
If the input file does not exist or cannot be read.
See Also
--------
scipy.sparse.save_npz: Save a sparse matrix to a file using ``.npz`` format.
numpy.load: Load several arrays from a ``.npz`` archive.
Examples
--------
Store sparse matrix to disk, and load it again:
>>> import numpy as np
>>> import scipy.sparse
>>> sparse_matrix = scipy.sparse.csc_matrix(np.array([[0, 0, 3], [4, 0, 0]]))
>>> sparse_matrix
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
>>> sparse_matrix.todense()
matrix([[0, 0, 3],
[4, 0, 0]], dtype=int64)
>>> scipy.sparse.save_npz('/tmp/sparse_matrix.npz', sparse_matrix)
>>> sparse_matrix = scipy.sparse.load_npz('/tmp/sparse_matrix.npz')
>>> sparse_matrix
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
>>> sparse_matrix.todense()
matrix([[0, 0, 3],
[4, 0, 0]], dtype=int64)
"""
with np.load(file, **PICKLE_KWARGS) as loaded:
try:
matrix_format = loaded['format']
except KeyError:
raise ValueError('The file {} does not contain a sparse matrix.'.format(file))
matrix_format = matrix_format.item()
if not isinstance(matrix_format, str):
# Play safe with Python 2 vs 3 backward compatibility;
# files saved with SciPy < 1.0.0 may contain unicode or bytes.
matrix_format = matrix_format.decode('ascii')
try:
cls = getattr(scipy.sparse, '{}_matrix'.format(matrix_format))
except AttributeError:
raise ValueError('Unknown matrix format "{}"'.format(matrix_format))
if matrix_format in ('csc', 'csr', 'bsr'):
return cls((loaded['data'], loaded['indices'], loaded['indptr']), shape=loaded['shape'])
elif matrix_format == 'dia':
return cls((loaded['data'], loaded['offsets']), shape=loaded['shape'])
elif matrix_format == 'coo':
return cls((loaded['data'], (loaded['row'], loaded['col'])), shape=loaded['shape'])
else:
raise NotImplementedError('Load is not implemented for '
'sparse matrix of format {}.'.format(matrix_format))
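# The on-disk layout written by save_npz is a plain .npz archive; for a CSR
# matrix it holds 'data', 'indices', 'indptr', 'shape' and 'format' entries,
# so the archive can also be inspected with numpy directly (sketch):
#
#   with np.load('sparse_matrix.npz') as f:
#       print(f['format'].item())  # b'csr'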
| bsd-3-clause |
Slach/vkontakte | vkontakte/tests.py | 3 | 29321 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Tests for vkontakte package.
Requires mock >= 0.7.2.
"""
import os
import sys
import urllib
sys.path.insert(0, os.path.abspath('..'))
import unittest
import mock
import vkontakte
import vkontakte.api
API_ID = 'api_id'
API_SECRET = 'api_secret'
class VkontakteTest(unittest.TestCase):
def test_api_creation_error(self):
self.assertRaises(ValueError, lambda: vkontakte.API())
class SignatureTest(unittest.TestCase):
def test_signature_supports_unicode(self):
params = {'foo': u'клен'}
self.assertEqual(
vkontakte.signature(API_SECRET, params),
'560b3f1e09ff65167b8dc211604fed2b'
)
class IterparseTest(unittest.TestCase):
def test_iterparse(self):
data = '{"error":{"error_code":8,"error_msg":"Invalid request: this auth method is obsolete, please use oauth. vk.com\/developers","request_params":[{"key":"sig","value":"97aasff03cc81d5db25de67893e207"},{"key":"uids","value":"1,2"},{"key":"timestamp","value":"1355095295"},{"key":"v","value":"3.0"},{"key":"fields","value":"education"},{"key":"format","value":"JSON"},{"key":"random","value":"937530097"},{"key":"method","value":"getProfiles"},{"key":"api_id","value":"3267523"}]}}{"error":{"error_code":8,"error_msg":"Invalid request: this auth method is obsolete, please use oauth. vk.com\/developers","request_params":[{"key":"sig","value":"97aasff03cc81d5db25de67893e207"},{"key":"uids","value":"1,2"},{"key":"timestamp","value":"1355095295"},{"key":"v","value":"3.0"},{"key":"fields","value":"education"},{"key":"format","value":"JSON"},{"key":"random","value":"937530097"},{"key":"method","value":"getProfiles"},{"key":"api_id","value":"3267523"}]}}{"response":[{"uid":1,"first_name":"Павел","last_name":"Дуров","university":1,"university_name":"СПбГУ","faculty":15,"faculty_name":"Филологический","graduation":2006},{"uid":2,"first_name":"Александра","last_name":"Владимирова"}]}'
parses = list(vkontakte.api._json_iterparse(data))
self.assertEqual(len(parses), 3)
assert "error" in parses[0]
assert "error" in parses[1]
self.assertEqual(parses[2]["response"][0]["first_name"], u"Павел")
def test_iterparse_edge(self):
data = '{"error": {"}{": "foo"}}{"foo":"bar"}'
parses = list(vkontakte.api._json_iterparse(data))
self.assertEqual(parses[0]["error"]["}{"], "foo")
self.assertEqual(parses[1]["foo"], "bar")
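# _json_iterparse (exercised above) yields each top-level JSON document from
# a string that may contain several concatenated responses -- which is why
# three parses come back from the error+error+response payload in
# test_iterparse, and why the brace-containing key in test_iterparse_edge
# must not confuse the scanner.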
class VkontakteMagicTest(unittest.TestCase):
def setUp(self):
self.api = vkontakte.API(API_ID, API_SECRET)
@mock.patch('vkontakte.api._API._get')
def test_basic(self, _get):
_get.return_value = '123'
time = self.api.getServerTime()
self.assertEqual(time, '123')
_get.assert_called_once_with('getServerTime')
@mock.patch('vkontakte.api._API._get')
def test_with_arguments(self, _get):
_get.return_value = [{'last_name': u'Дуров'}]
res = self.api.getProfiles(uids='1,2', fields='education')
self.assertEqual(res, _get.return_value)
_get.assert_called_once_with('getProfiles', uids='1,2', fields='education')
@mock.patch('vkontakte.api._API._get')
def test_with_arguments_get(self, _get):
_get.return_value = [{'last_name': u'Дуров'}]
res = self.api.get('getProfiles', uids='1,2', fields='education')
self.assertEqual(res, _get.return_value)
_get.assert_called_once_with('getProfiles', vkontakte.api.DEFAULT_TIMEOUT, uids='1,2', fields='education')
@mock.patch('vkontakte.http.post')
def test_timeout(self, post):
post.return_value = 200, '{"response":123}'
api = vkontakte.API(API_ID, API_SECRET, timeout=5)
res = api.getServerTime()
self.assertEqual(res, 123)
self.assertEqual(post.call_args[0][3], 5)
@mock.patch('vkontakte.api._API._get')
def test_magic(self, _get):
for method in vkontakte.api.COMPLEX_METHODS:
_get.return_value = None
res = getattr(self.api, method).test()
self.assertEqual(res, None)
_get.assert_called_once_with('%s.test' % method)
_get.reset_mock()
@mock.patch('vkontakte.api._API._get')
def test_magic_get(self, _get):
_get.return_value = 'foo'
res = self.api.friends.get(uid=642177)
self.assertEqual(res, 'foo')
_get.assert_called_once_with('friends.get', uid=642177)
@mock.patch('vkontakte.http.post')
def test_urlencode_bug(self, post):
post.return_value = 200, '{"response":123}'
res = self.api.search(q=u'клен')
self.assertEqual(res, 123)
@mock.patch('vkontakte.http.post')
def test_valid_quoted_json(self, post):
post.return_value = 200, '{"response": 123}'
self.api.ads.getStat(data={'type': '1', 'id': 1})
posted_data = urllib.unquote(post.call_args[0][1])
self.assertTrue('data={"type":+"1",+"id":+1}' in posted_data, posted_data)
@mock.patch('vkontakte.http.post')
def test_unicode_decode_error(self, post):
post.return_value = 200, '{"response":[35,{"cid":195478,"uid":223297741,"from_id":223297741,"date":1386616969,"text":"[id152534333|\xd0\x90\xd0\xbd\xd0\xb4\xd1\x80\xd0\xb5\xd0\xb9], \xd0\xb4\xd0\xbb\xd1\x8f \xd1\x81\xd1\x82\xd0\xb0\xd1\x82\xd0\xb8\xd1\x81\xd1\x82\xd0\xb8\xd0\xba\xd0\xb8 \xd0\xb2 \xd1\x84\xd1\x81\xd0\xb1","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":152534333,"reply_to_cid":195368},{"cid":195370,"uid":225734427,"from_id":225734427,"date":1386606029,"text":"[id14808949|\xd0\x9b\xd0\xb8\xd0\xbb\xd0\xb8\xd1\x8f], \xd0\xb2 \xd0\xbc\xd0\xb0\xd1\x80\xd1\x81 \xd1\x8f \xd0\xb2\xd0\xb5\xd1\x80\xd1\x8e!)))))) http:\\/\\/ru.wikipedia.org\\/wiki\\/\xd0\x9a\xd0\xbe\xd0\xbb\xd0\xbe\xd0\xbd\xd0\xb8\xd0\xb7\xd0\xb0\xd1\x86\xd0\xb8\xd1\x8f_\xd0\x9c\xd0\xb0\xd1\x80\xd1\x81\xd0\xb0 http:\\/\\/ru.wikipedia.org\\/wiki\\/\xca\xee\xeb\xee\xed\xe8\xe7\xe0\xf6\xe8\xff_\xcc\xe0\xf0\xf1\xe0","likes":{"count":3,"user_likes":0,"can_like":1},"reply_to_uid":14808949,"reply_to_cid":195359},{"cid":195368,"uid":152534333,"from_id":152534333,"date":1386605970,"text":"\xd0\x9e\xd0\xb4\xd0\xb8\xd0\xbd \xd0\xb2\xd0\xbe\xd0\xbf\xd1\x80\xd0\xbe\xd1\x81: \xd0\xb7\xd0\xb0\xd1\x87\xd0\xb5\xd0\xbc?","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195359,"uid":14808949,"from_id":14808949,"date":1386605354,"text":"[id225734427|\xd0\xa0\xd1\x83\xd1\x81\xd1\x82\xd0\xb0\xd0\xbc], \xd1\x81\xd0\xb0\xd0\xbc \xd0\xb2 \xd1\x8d\xd1\x82\xd0\xbe \xd0\xb2\xd0\xb5\xd1\x80\xd0\xb8\xd1\x88\xd1\x8c ?","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":225734427,"reply_to_cid":195358},{"cid":195358,"uid":225734427,"from_id":225734427,"date":1386605334,"text":"[id14808949|\xd0\x9b\xd0\xb8\xd0\xbb\xd0\xb8\xd1\x8f], \xd0\xbf\xd1\x83\xd1\x81\xd1\x82\xd1\x8c \xd0\xbe\xd0\xbf\xd1\x80\xd0\xb5\xd0\xb4\xd0\xb5\xd0\xbb\xd1\x8f\xd1\x8e\xd1\x82 \xd0\xba\xd0\xb0\xd0\xba \xd1\x85\xd0\xbe\xd1\x82\xd1\x8f\xd1\x82. \xd1\x85\xd0\xbe\xd1\x87\xd0\xb5\xd1\x82 \xd0\xb1\xd1\x8b\xd1\x82\xd1\x8c \xd1\x87\xd0\xb5\xd0\xbb\xd0\xbe\xd0\xb2\xd0\xb5\xd0\xba - \\"\xd0\xbc\xd0\xb0\xd1\x80\xd1\x81\xd0\xb8\xd0\xb0\xd0\xbd\xd0\xb8\xd0\xbd\xd0\xbe\xd0\xbc\\" \xd0\xbf\xd1\x83\xd1\x81\xd1\x82\xd1\x8c \xd0\xb1\xd1\x83\xd0\xb4\xd0\xb5\xd1\x82. 
\xd1\x83 \xd0\xba\xd0\xb0\xd0\xb6\xd0\xb4\xd0\xbe\xd0\xb3\xd0\xbe \xd1\x81\xd0\xb2\xd0\xbe\xd0\xb8 \xd0\xbc\xd0\xbe\xd0\xb7\xd0\xb3\xd0\xb8 (\xd0\xb4\xd0\xb0\xd0\xb6\xd0\xb5 \xd0\xb5\xd1\x81\xd0\xbb\xd0\xb8 \xd0\xbe\xd0\xbd\xd0\xb8 \xd0\xbf\xd1\x83\xd1\x81\xd1\x82\xd1\x8b\xd0\xb5))) \xd0\xb2 \xd0\xba\xd0\xbe\xd0\xbd\xd1\x86\xd0\xb5 \xd0\xba\xd0\xbe\xd0\xbd\xd1\x86\xd0\xbe\xd0\xb2 \xd0\xbc\xd0\xb0\xd1\x80\xd1\x81 \xd0\xb2\xd0\xb5\xd0\xb4\xd1\x8c \xd0\xb1\xd1\x83\xd0\xb4\xd1\x83\xd1\x82 \xd0\xba\xd0\xbe\xd0\xb3\xd0\xb4\xd0\xb0-\xd0\xbd\xd0\xb8\xd0\xb1\xd1\x83\xd0\xb4\xd1\x8c \xd0\xba\xd0\xbe\xd0\xbb\xd0\xbe\xd0\xbd\xd0\xb8\xd0\xb7\xd0\xb8\xd1\x80\xd0\xbe\xd0\xb2\xd0\xb0\xd1\x82\xd1\x8c))))","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":14808949,"reply_to_cid":195340},{"cid":195354,"uid":231583480,"from_id":231583480,"date":1386605275,"text":"https:\\/\\/docs.google.com\\/forms\\/d\\/19ZSLp3TGSIWtW_zYfYelpiyZ7eVfPrw_TSQidEhBojg\\/viewform","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195348,"uid":90729922,"from_id":90729922,"date":1386604909,"text":"\xd0\xa0\xd0\xb0\xd0\xb7\xd1\x80\xd0\xb0\xd0\xb1\xd0\xbe\xd1\x82\xd0\xb0\xd0\xbb \xd0\xb8\xd0\xbd\xd1\x81\xd1\x82\xd0\xb8\xd1\x82\xd1\x83\xd1\x82 \xd0\xb3\xd0\xbb\xd1\x83\xd0\xb1\xd0\xbe\xd0\xba\xd0\xbe - \xd0\xbd\xd0\xb0\xd1\x83\xd1\x87\xd0\xbd\xd1\x8b\xd0\xb9 \xd1\x82\xd1\x80\xd1\x83\xd0\xb4: \\"\xd0\x9e \xd0\xb2\xd0\xbb\xd0\xb8\xd1\x8f\xd0\xbd\xd0\xb8\xd0\xb8 \xd0\xba\xd0\xbe\xd0\xbc\xd0\xb0\xd1\x80\xd0\xbe\xd0\xb2 \xd0\xbd\xd0\xb0 \xd0\xbc\xd1\x8b\xd1\x87\xd0\xb0\xd0\xbd\xd0\xb8\xd0\xb5 \xd0\xba\xd0\xbe\xd1\x80\xd0\xbe\xd0\xb2!\\"","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195342,"uid":1976765,"from_id":1976765,"date":1386604786,"text":"[id197770104|\xd0\x90\xd0\xbb\xd0\xb5\xd0\xba\xd1\x81\xd0\xb5\xd0\xb9], \xd0\xb0 \xd0\xb5\xd1\x89\xd0\xb5 \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb8 \xd0\xbf\xd1\x80\xd0\xb8\xd0\xb4\xd1\x83\xd0\xbc\xd0\xb0\xd0\xbb\xd0\xb8 \xd1\x82\xd0\xb5\xd0\xbe\xd1\x80\xd0\xb8\xd1\x8e \xd0\xbe\xd1\x82\xd0\xbd\xd0\xbe\xd1\x81\xd0\xb8\xd1\x82\xd0\xb5\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd0\xb8. \xd0\xba\xd0\xb0\xd0\xba \xd0\xbb\xd0\xb5\xd1\x82\xd0\xb0\xd1\x82\xd1\x8c \xd0\xb2 \xd0\xba\xd0\xbe\xd1\x81\xd0\xbc\xd0\xbe\xd1\x81. \xd0\xb4\xd0\xb0\xd0\xbb\xd0\xb8 \xd0\xbd\xd0\xb0\xd1\x87\xd0\xb0\xd0\xbb\xd0\xbe \xd0\xba\xd0\xb2\xd0\xb0\xd0\xbd\xd1\x82\xd0\xbe\xd0\xb2\xd0\xbe\xd0\xb9 \xd1\x84\xd0\xb8\xd0\xb7\xd0\xb8\xd0\xba\xd0\xb5 \xd0\xb8 \xd0\xb5\xd1\x89\xd0\xb5 \xd0\xbc\xd0\xbd\xd0\xbe\xd0\xb3\xd0\xbe \xd1\x87\xd0\xb5\xd0\xb3\xd0\xbe. \xd0\xba\xd0\xbe\xd0\xbd\xd0\xb5\xd1\x87\xd0\xbd\xd0\xbe. \xd0\xb8\xd0\xbc \xd1\x81\xd1\x82\xd1\x8b\xd0\xb4\xd0\xbd\xd0\xbe \xd0\xb7\xd0\xb0 \xd1\x82\xd0\xbe, \xd1\x87\xd1\x82\xd0\xbe \xd0\xbe\xd0\xbd\xd0\xb8 \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb8. \xd0\xbf\xd1\x80\xd0\xbe\xd1\x81\xd1\x82\xd0\xbe \xd0\xb1\xd1\x80\xd0\xb1\xd1\x80\xd0\xb1\xd1\x80 \xd0\xba\xd0\xb0\xd0\xba \xd1\x81\xd1\x82\xd1\x8b\xd0\xb4\xd0\xbd\xd0\xbe","likes":{"count":1,"user_likes":0,"can_like":1},"reply_to_uid":197770104,"reply_to_cid":195316},{"cid":195340,"uid":14808949,"from_id":14808949,"date":1386604639,"text":"[id225734427|\xd0\xa0\xd1\x83\xd1\x81\xd1\x82\xd0\xb0\xd0\xbc], \xd0\x92\xd0\xb0\xd1\x88\xd0\xb0 \xd0\xbf\xd0\xbe\xd0\xb7\xd0\xb8\xd1\x86\xd0\xb8\xd1\x8f \xd0\xbf\xd0\xbe\xd0\xbd\xd1\x8f\xd1\x82\xd0\xbd\xd0\xb0. 
\xd0\xb5\xd1\x81\xd1\x82\xd1\x8c \xd1\x82\xd0\xb5 \xd0\xba\xd1\x82\xd0\xbe \xd1\x81\xd0\xb2\xd0\xbe\xd1\x8e \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c \xd0\xbe\xd0\xbf\xd1\x80\xd0\xb5\xd0\xb4\xd0\xb5\xd0\xbb\xd1\x8f\xd1\x8e\xd1\x82 \xd0\xb8 \xd0\xbf\xd0\xbe \xd0\xbe\xd1\x82\xd1\x86\xd1\x83 \xd0\xb8 \xd0\xbf\xd0\xbe \xd0\xbc\xd0\xb0\xd0\xbc\xd0\xb5. \xd0\x90 \xd0\xb5\xd1\x81\xd1\x82\xd1\x8c \xd1\x82\xd0\xb5, \xd0\xba\xd1\x82\xd0\xbe \xd1\x8d\xd1\x82\xd0\xbe\xd0\xbc\xd1\x83 \xd0\xbf\xd1\x80\xd0\xb0\xd0\xb2\xd0\xb8\xd0\xbb\xd1\x83 \xd0\xbd\xd0\xb5 \xd1\x81\xd0\xbb\xd0\xb5\xd0\xb4\xd1\x83\xd0\xb5\xd1\x82. \xd1\x82\xd0\xb0\xd0\xba \xd1\x87\xd1\x82\xd0\xbe \xd1\x8d\xd1\x82\xd0\xb0 \xd0\xb8\xd0\xbd\xd0\xb8\xd1\x86\xd0\xb8\xd0\xb0\xd1\x82\xd0\xb8\xd0\xb2\xd0\xb0 \xd0\xbd\xd0\xb0 \xd0\xbc\xd0\xbe\xd0\xb9 \xd0\xb2\xd0\xb7\xd0\xb3\xd0\xbb\xd1\x8f\xd0\xb4 \xd0\xbd\xd0\xb5 \xd0\xb1\xd1\x83\xd0\xb4\xd0\xb5\xd1\x82 \xd0\xbe\xd1\x82\xd1\x80\xd0\xb0\xd0\xb6\xd0\xb0\xd1\x82\xd1\x8c \xd0\xb4\xd0\xb5\xd0\xb9\xd1\x81\xd1\x82\xd0\xb2\xd0\xb8\xd1\x82\xd0\xb5\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd0\xb8.","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":225734427,"reply_to_cid":195339},{"cid":195339,"uid":225734427,"from_id":225734427,"date":1386604440,"text":"[id14808949|\xd0\x9b\xd0\xb8\xd0\xbb\xd0\xb8\xd1\x8f], \xd1\x83 \xd1\x87\xd0\xb5\xd0\xbb\xd0\xbe\xd0\xb2\xd0\xb5\xd0\xba\xd0\xb0 \xd0\xbd\xd0\xb5 \xd0\xbc\xd0\xbe\xd0\xb6\xd0\xb5\xd1\x82 \xd0\xb1\xd1\x8b\xd1\x82\xd1\x8c \xd0\xb4\xd0\xb2\xd1\x83\xd1\x85 \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd0\xb5\xd0\xb9. \xd0\xbc\xd0\xbe\xd0\xb6\xd0\xbd\xd0\xbe (\xd0\xbd\xd1\x83\xd0\xb6\xd0\xbd\xd0\xbe!) 
\xd1\x83\xd0\xb2\xd0\xb0\xd0\xb6\xd0\xb0\xd1\x82\xd1\x8c \xd0\xb4\xd1\x80\xd1\x83\xd0\xb3\xd0\xb8\xd0\xb5 \xd0\xbd\xd0\xb0\xd1\x80\xd0\xbe\xd0\xb4\xd1\x8b, \xd0\xbd\xd0\xbe \xd0\xbd\xd0\xb0\xd0\xb4\xd0\xbe \xd0\xb1\xd1\x8b\xd1\x82\xd1\x8c \xd0\xba\xd0\xb5\xd0\xbc-\xd1\x82\xd0\xbe \xd0\xb0 \xd0\xbd\xd0\xb5 \xd0\xb2\xd1\x81\xd0\xb5\xd0\xbc)) \xd0\xbd\xd0\xb5\xd0\xbb\xd1\x8c\xd0\xb7\xd1\x8f \xd0\xb1\xd1\x8b\xd1\x82\xd1\x8c \xd1\x80\xd1\x83\xd1\x81\xd1\x81\xd0\xba\xd0\xbe-\xd1\x82\xd0\xb0\xd1\x82\xd0\xb0\xd1\x80\xd0\xb8\xd0\xbd\xd0\xbe\xd0\xbc, \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb5-\xd0\xbf\xd0\xbe\xd0\xbb\xd1\x8f\xd0\xba\xd0\xbe\xd0\xbc \xd0\xb8\xd0\xbb\xd0\xb8 \xd0\xbc\xd1\x83\xd1\x81\xd1\x83\xd0\xbb\xd1\x8c\xd0\xbc\xd0\xb0\xd0\xbd\xd0\xbe-\xd0\xbf\xd1\x80\xd0\xb0\xd0\xb2\xd0\xbe\xd1\x81\xd0\xbb\xd0\xb0\xd0\xb2\xd0\xbd\xd1\x8b\xd0\xbc)))) \xd0\xb2\xd1\x81\xd0\xb5 \xd0\xbe\xd1\x81\xd1\x82\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd0\xb5 \xd0\x94\xd0\x95\xd0\x9c\xd0\x90\xd0\x93\xd0\x9e\xd0\x93\xd0\x98\xd0\xaf \xd1\x87\xd1\x82\xd0\xbe-\xd0\xb1\xd1\x8b \xd0\xbf\xd1\x80\xd0\xb8\xd0\xba\xd1\x80\xd1\x8b\xd1\x82\xd1\x8c \xd0\xbe\xd1\x82\xd1\x81\xd1\x83\xd1\x82\xd1\x81\xd1\x82\xd0\xb2\xd0\xb8\xd0\xb5 \xd0\xbb\xd1\x8e\xd0\xb1\xd0\xbe\xd0\xb9 \xd0\xba\xd1\x83\xd0\xbb\xd1\x8c\xd1\x82\xd1\x83\xd1\x80\xd0\xbd\xd0\xbe\xd0\xb9 \xd0\xbe\xd1\x81\xd0\xbd\xd0\xbe\xd0\xb2\xd1\x8b \xd0\xb8\xd0\xbb\xd0\xb8 \xd1\x85\xd0\xb8\xd1\x82\xd1\x80\xd0\xb0\xd1\x8f \xd0\xbf\xd0\xbe\xd0\xb7\xd0\xb8\xd1\x86\xd0\xb8\xd1\x8f (\xd0\xb2 \xd1\x80\xd0\xbe\xd1\x81\xd1\x81\xd0\xb8\xd1\x8f - \xd1\x8f \xd1\x80\xd1\x83\xd1\x81\xd1\x81\xd0\xba\xd0\xb8\xd0\xb9 \xd0\xb2 \xd0\xb8\xd0\xb7\xd1\x80\xd0\xb0\xd0\xb8\xd0\xbb\xd0\xb5 - \xd1\x8f \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb9))) \xd0\xb4\xd0\xb5\xd1\x82\xd0\xb8 \xd0\xb4\xd0\xbe\xd0\xbb\xd0\xb6\xd0\xbd\xd1\x8b \xd1\x81\xd0\xb4\xd0\xb5\xd0\xbb\xd0\xb0\xd1\x82\xd1\x8c \xd0\xb2\xd1\x8b\xd0\xb1\xd0\xbe\xd1\x80 \xd0\xba\xd0\xbe\xd0\xb3\xd0\xb4\xd0\xb0 \xd0\xbf\xd0\xbe\xd0\xb2\xd0\xb7\xd1\x80\xd0\xbe\xd1\x81\xd0\xbb\xd0\xb5\xd1\x8e\xd1\x82","likes":{"count":8,"user_likes":0,"can_like":1},"reply_to_uid":14808949,"reply_to_cid":195326},{"cid":195326,"uid":14808949,"from_id":14808949,"date":1386603984,"text":"[id225734427|\xd0\xa0\xd1\x83\xd1\x81\xd1\x82\xd0\xb0\xd0\xbc], \xd1\x8f \xd0\xbe \xd0\xb4\xd0\xb5\xd1\x82\xd1\x8f\xd1\x85 \xd0\xb2 \xd1\x81\xd0\xbc\xd0\xb5\xd1\x88\xd0\xb0\xd0\xbd\xd0\xbd\xd1\x8b\xd1\x85 \xd0\xb1\xd1\x80\xd0\xb0\xd0\xba\xd0\xb0\xd1\x85.","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":225734427,"reply_to_cid":195323},{"cid":195323,"uid":225734427,"from_id":225734427,"date":1386603941,"text":"[id14808949|\xd0\x9b\xd0\xb8\xd0\xbb\xd0\xb8\xd1\x8f], \xd0\xb5\xd1\x81\xd0\xbb\xd0\xb8 \xd1\x82\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb9 \xd0\xb2\xd0\xbe\xd0\xbf\xd1\x80\xd0\xbe\xd1\x81 \xd0\xb7\xd0\xb0\xd0\xb4\xd0\xb0\xd0\xb5\xd1\x82\xd1\x81\xd1\x8f \xd1\x82\xd0\xbe \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c \xd0\xbe\xd0\xb4\xd0\xbd\xd0\xb0 - \xd0\xbc\xd0\xb0\xd0\xbd\xd0\xba\xd1\x83\xd1\x80\xd1\x82","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":14808949,"reply_to_cid":195314},{"cid":195319,"uid":167496147,"from_id":167496147,"date":1386603872,"text":"[id165400079|\xd0\x90\xd1\x80\xd1\x82\xd1\x91\xd0\xbc], \xd1\x8f \xd0\xbd\xd0\xb5 \xd0\xbf\xd1\x80\xd0\xbe \xd0\xb3\xd1\x80\xd1\x83\xd0\xb4\xd1\x8c, \xd0\xb1\xd0\xb5\xd1\x80\xd0\xb8 \xd0\xbd\xd0\xb8\xd0\xb6\xd0\xb5 
)))","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":165400079,"reply_to_cid":195299},{"cid":195318,"uid":209113652,"from_id":209113652,"date":1386603848,"text":"\xd0\xa1\xd0\xbb\xd0\xb0\xd0\xb2\xd0\xb0 \xd0\x9f\xd0\xb5\xd1\x82\xd0\xb5\xd1\x80\xd0\xb1\xd1\x83\xd1\x80\xd0\xb3\xd1\x83!!!","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195317,"uid":207803816,"from_id":207803816,"date":1386603803,"text":"\xd0\x98 \xd1\x81\xd1\x82\xd0\xb0\xd1\x82\xd0\xbe\xd1\x82\xd0\xb4\xd0\xb5\xd0\xbb \xd1\x80\xd0\xb0\xd1\x81\xd1\x88\xd0\xb8\xd1\x80\xd0\xb8\xd0\xbc, \xd0\xb8 \xd0\xbd\xd0\xbe\xd0\xb2\xd1\x8b\xd0\xb5 \xd0\xb1\xd0\xbb\xd0\xb0\xd0\xbd\xd0\xba\xd0\xb8 \xd0\xbd\xd0\xb0\xd0\xbf\xd0\xb5\xd1\x87\xd0\xb0\xd1\x82\xd0\xb0\xd0\xb5\xd0\xbc, \xd0\xb8 \xd0\xbd\xd0\xbe\xd0\xb2\xd1\x8b\xd0\xb5 \xd0\xba\xd0\xbe\xd0\xbc\xd0\xbf\xd1\x8c\xd1\x8e\xd1\x82\xd0\xb5\xd1\x80\xd0\xbd\xd1\x8b\xd0\xb5 \xd0\xbf\xd1\x80\xd0\xbe\xd0\xb3\xd0\xb8 \xd0\xb7\xd0\xb0\xd0\xba\xd1\x83\xd0\xbf\xd0\xb8\xd0\xbc... \xd1\x87\xd0\xb8\xd1\x81\xd1\x82\xd1\x8b\xd0\xb9 \xd1\x80\xd0\xb0\xd1\x81\xd0\xbf\xd0\xb8\xd0\xbb \xd0\xb8 \xd0\xbd\xd0\xb8\xd0\xba\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb3\xd0\xbe \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd0\xb8\xd0\xb7\xd0\xbc\xd0\xb0.<br>\xd0\x90 \xd0\xbf\xd0\xb0\xd1\x82\xd1\x80\xd0\xb8\xd0\xbe\xd1\x82\xd0\xb0\xd0\xbc \xd0\xbb\xd0\xb8\xd1\x88\xd1\x8c \xd0\xb1\xd1\x8b \xd0\xbf\xd0\xbe\xd0\xb3\xd1\x83\xd0\xbd\xd0\xb4\xd0\xb5\xd1\x82\xd1\x8c \xd0\xbe \xd1\x81\xd0\xb2\xd0\xbe\xd1\x91\xd0\xbc, \xd0\xbe \xd0\xb4\xd0\xb5\xd0\xb2\xd0\xb8\xd1\x87\xd1\x8c\xd0\xb5\xd0\xbc.","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195316,"uid":197770104,"from_id":197770104,"date":1386603710,"text":"\xd1\x8d\xd1\x82\xd0\xbe \xd0\xbf\xd1\x80\xd0\xb8\xd0\xb4\xd1\x83\xd0\xbc\xd0\xb0\xd0\xbb\xd0\xb8 \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb8 \xd0\xb8\xd0\xbc \xd1\x81\xd1\x82\xd1\x8b\xd0\xb4\xd0\xbd\xd0\xbe \xd1\x81\xd0\xba\xd0\xb0\xd0\xb7\xd0\xb0\xd1\x82\xd1\x8c \xd1\x87\xd1\x82\xd0\xbe \xd0\xbe\xd0\xbd\xd0\xb8 \xd0\xb5\xd0\xb2\xd1\x80\xd0\xb5\xd0\xb8 ))","likes":{"count":1,"user_likes":0,"can_like":1}},{"cid":195314,"uid":14808949,"from_id":14808949,"date":1386603606,"text":"[id225734427|\xd0\xa0\xd1\x83\xd1\x81\xd1\x82\xd0\xb0\xd0\xbc], \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c\xd1\x8e, \xd0\xb0 \xd0\xba\xd0\xb0\xd0\xba \xd0\xb5\xd1\x91 \xd0\xbe\xd0\xbf\xd1\x80\xd0\xb5\xd0\xb4\xd0\xb5\xd0\xbb\xd1\x8f\xd1\x82\xd1\x8c?","likes":{"count":1,"user_likes":0,"can_like":1},"reply_to_uid":225734427,"reply_to_cid":195310},{"cid":195313,"uid":213252379,"from_id":213252379,"date":1386603572,"text":"\xd0\xa5\xd0\xb0\xd0\xba\xd0\xb8\xd0\xbc\xd0\xbe\xd0\xb2, \xd0\xbd\xd0\xb5 \xd1\x83 \xd0\xb2\xd1\x81\xd0\xb5\xd1\x85 \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c \xd0\xbd\xd0\xb0 \xd0\xbb\xd0\xb1\xd1\x83 \xd0\xbd\xd0\xb0\xd0\xbf\xd0\xb8\xd1\x81\xd0\xb0\xd0\xbd\xd0\xb0.","likes":{"count":0,"user_likes":0,"can_like":1},"reply_to_uid":225734427,"reply_to_cid":195310},{"cid":195312,"uid":144124676,"from_id":144124676,"date":1386603564,"text":"\xd0\xa3\xd0\xb6\xd0\xb5 \xd0\xbd\xd0\xb5 \xd0\xb7\xd0\xbd\xd0\xb0\xd1\x8e\xd1\x82 \xd0\xbd\xd0\xb0 \xd0\xba\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb9 \xd1\x83\xd1\x87\xd0\xb5\xd1\x82 \xd0\xbd\xd0\xb0\xd1\x80\xd0\xbe\xd0\xb4 
\xd0\xbf\xd0\xbe\xd1\x81\xd1\x82\xd0\xb0\xd0\xb2\xd0\xb8\xd1\x82\xd1\x8c!!! \xd0\x98\xd0\xbd\xd1\x81\xd1\x82\xd0\xb8\xd1\x82\xd1\x83\xd1\x82 \xd0\xbf\xd1\x80\xd0\xbe\xd0\xbf\xd0\xb8\xd1\x81\xd0\xba\xd0\xbc \xd1\x82\xd0\xbe\xd0\xb6\xd0\xb5 \xd0\xb2\xd1\x80\xd0\xbe\xd0\xb4\xd0\xb5 \xd0\xba\xd0\xb0\xd0\xba \xd0\xbe\xd1\x82\xd0\xbc\xd0\xbd\xd0\xb5\xd0\xbd, \xd0\xbd\xd0\xbe \xd0\xb7\xd0\xb0 \xd0\xbf\xd1\x80\xd0\xbe\xd0\xb6\xd0\xb8\xd0\xb2\xd0\xb0\xd0\xbd\xd0\xb8\xd0\xb5 \xd0\xb1\xd0\xb5\xd0\xb7 \xd1\x80\xd0\xb5\xd0\xb3\xd0\xb8\xd1\x81\xd1\x82\xd1\x80\xd0\xb0\xd1\x86\xd0\xb8\xd0\xb8 \xd0\xbf\xd0\xbe \xd0\xbc\\/\xd0\xb6\xd0\xb8\xd1\x82\xd0\xb5\xd0\xbb\xd1\x8c\xd1\x81\xd1\x82\xd0\xb2\xd0\xb0 \xd0\xb8\xd0\xbb\xd0\xb8 \xd0\xbc\\/\xd0\xbf\xd1\x80\xd0\xb5\xd0\xb1\xd1\x8b\xd0\xb2\xd0\xb0\xd0\xbd\xd0\xb8\xd1\x8f \xd0\xb7\xd0\xb0\xd0\xbf\xd1\x80\xd0\xbe\xd1\x81\xd1\x82\xd0\xbe \xd1\x81\xd1\x85\xd0\xbb\xd0\xbe\xd0\xbf\xd0\xbe\xd1\x87\xd0\xb5\xd1\x88\xd1\x8c \xd0\xb0\xd0\xb4\xd0\xbc\xd0\xb8\xd0\xbd\xd0\xb8\xd1\x81\xd1\x82\xd1\x80\xd0\xb0\xd1\x82\xd0\xb8\xd0\xb2\xd0\xbd\xd1\x8b\xd0\xb9 \xd1\x88\xd1\x82\xd1\x80\xd0\xb0\xd1\x84 \xd0\xb2 \xd1\x80-\xd1\x80\xd0\xb5 1500 \xd1\x80\xd1\x83\xd0\xb1. \xd0\xbc\xd0\xb8\xd0\xbd\xd0\xb8\xd0\xbc\xd1\x83\xd0\xbc. \xd0\x98 \xd0\xb7\xd0\xb4\xd0\xb5\xd1\x81\xd1\x8c \xd0\xb1\xd1\x83\xd0\xb4\xd0\xb5\xd1\x82 \xd1\x82\xd0\xbe\xd0\xb6\xd0\xb5 \xd1\x81\xd0\xb0\xd0\xbc\xd0\xbe\xd0\xb5, \xd0\xb2\xd1\x80\xd0\xbe\xd0\xb4\xd0\xb5 \xd0\xb1\xd1\x8b \xd0\xb8 \xd1\x81\xd1\x82. 26 \xd0\x9a\xd0\xbe\xd0\xbd\xd1\x81\xd1\x82\xd0\xb8\xd1\x82\xd1\x83\xd1\x86\xd0\xb8\xd0\xb8 \xd0\xbd\xd0\xb8\xd0\xba\xd1\x82\xd0\xbe \xd0\xbd\xd0\xb5 \xd0\xbe\xd1\x82\xd0\xbc\xd0\xb5\xd0\xbd\xd1\x8f\xd0\xbb, \xd0\xb0 \xd0\xb7\xd0\xb0 \xd0\xbd\xd0\xb5 \xd1\x83\xd0\xba\xd0\xb0\xd0\xb7\xd0\xb0\xd0\xbd\xd0\xb8\xd0\xb5 \xd1\x81\xd0\xb2\xd0\xb5\xd0\xb4\xd0\xb5\xd0\xbd\xd0\xb8\xd0\xb9 \xd0\xb2 \xd1\x80\xd0\xb0\xd0\xb7\xd0\xb4\xd0\xb5\xd0\xbb\xd0\xb5 \\"\xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c\\" \xd0\xbd\xd0\xb0\xd1\x88\xd0\xb5 \xd1\x80\xd0\xbe\xd0\xb4\xd0\xbd\xd0\xbe\xd0\xb5 \xd0\xb3\xd0\xbe\xd1\x81\xd1\x83\xd0\xb4\xd0\xb0\xd1\x80\xd1\x81\xd1\x82\xd0\xb2\xd0\xbe \xd1\x83\xd1\x81\xd1\x82\xd0\xb0\xd0\xbd\xd0\xbe\xd0\xb2\xd0\xb8\xd1\x82 \xd0\xba\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb9-\xd0\xbd\xd0\xb8\xd0\xb1\xd1\x83\xd0\xb4\xd1\x8c \xd1\x88\xd1\x82\xd1\x80\xd0\xb0\xd1\x84. \xd0\xa3\xd0\xb6\xd0\xb5 \xd0\xbd\xd0\xb5 \xd0\xb7\xd0\xbd\xd0\xb0\xd1\x8e\xd1\x82 \xd0\xbf\xd0\xbe \xd0\xba\xd0\xb0\xd0\xba\xd0\xb8\xd0\xbc \xd0\xba\xd1\x80\xd0\xb8\xd1\x82\xd0\xb5\xd1\x80\xd0\xb8\xd1\x8f\xd0\xbc \xd0\xbd\xd0\xb0\xd1\x80\xd0\xbe\xd0\xb4 \xd0\xba\xd0\xb0\xd0\xba \xd0\xb1\xd0\xb0\xd1\x80\xd0\xb0\xd0\xbd\xd0\xbe\xd0\xb2 \xd0\xbf\xd0\xbe\xd1\x81\xd1\x87\xd0\xb8\xd1\x82\xd0\xb0\xd1\x82\xd1\x8c. 
","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195310,"uid":225734427,"from_id":225734427,"date":1386603316,"text":"\xd0\xbf\xd1\x80\xd0\xb5\xd0\xb7\xd0\xb8\xd1\x80\xd0\xb0\xd1\x8e \xd0\xbb\xd1\x8e\xd0\xb4\xd0\xb5\xd0\xb9 \xd0\xba\xd0\xbe\xd1\x82\xd0\xbe\xd1\x80\xd1\x8b\xd0\xb5 \xd1\x81\xd1\x82\xd0\xb5\xd1\x81\xd0\xbd\xd1\x8f\xd1\x8e\xd1\x82\xd1\x81\xd1\x8f \xd1\x81\xd0\xb2\xd0\xbe\xd0\xb5\xd0\xb9 \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd0\xb8","likes":{"count":7,"user_likes":0,"can_like":1}},{"cid":195309,"uid":1441686,"from_id":1441686,"date":1386603180,"text":"\xd0\x9d\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd1\x8b\xd0\xb9 \xd0\xb4\xd0\xbe\xd0\xbf\xd1\x80\xd0\xbe\xd1\x81, \xd1\x80\xd0\xb5\xd0\xba\xd0\xbe\xd0\xbc\xd0\xb5\xd0\xbd\xd0\xb4\xd1\x83\xd1\x8e! http:\\/\\/www.echo.msk.ru\\/blog\\/minkin\\/1116562-echo\\/ http:\\/\\/echo.msk.ru\\/blog\\/minkin\\/1116562-echo\\/","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195308,"uid":213252379,"from_id":213252379,"date":1386603171,"text":"\xd0\x9f\xd1\x80\xd0\xbe\xd0\xb5\xd0\xba\xd1\x82 \xd0\xbe\xd0\xb4\xd0\xbd\xd0\xbe\xd0\xb7\xd0\xbd\xd0\xb0\xd1\x87\xd0\xbd\xd0\xbe \xd0\xbd\xd0\xb5 \xd0\xbf\xd1\x80\xd0\xbe\xd0\xba\xd0\xb0\xd1\x82\xd0\xb8\xd1\x82 \xd1\x81\xd0\xbe\xd0\xb3\xd0\xbb\xd0\xb0\xd1\x81\xd0\xbe\xd0\xb2\xd0\xb0\xd0\xbd\xd0\xb8\xd0\xb5 \xd0\xb2 \xd0\x9c\xd0\xb8\xd0\xbd\xd1\x8e\xd1\x81\xd1\x82\xd0\xb5)).","likes":{"count":1,"user_likes":0,"can_like":1}},{"cid":195307,"uid":160959903,"from_id":160959903,"date":1386603167,"text":"\xd0\x98 \xd1\x87\xd1\x82\xd0\xbe \xd0\xbc\xd0\xb5\xd0\xbd\xd1\x8f\xd0\xb5\xd1\x82?\xd0\x9b\xd1\x8e\xd0\xb4\xd0\xb8 \xd1\x80\xd0\xb5\xd1\x88\xd0\xb8\xd0\xbb\xd0\xb8 \xd0\xb1\xd1\x8b\xd1\x82\xd1\x8c \xd0\xb2\xd0\xbc\xd0\xb5\xd1\x81\xd1\x82\xd0\xb5,\xd0\xb4\xd0\xb0\xd0\xb9 \xd0\x91\xd0\x9e\xd0\x93.","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195304,"uid":126980853,"from_id":126980853,"date":1386603102,"text":"5-\xd0\xb3\xd0\xbe \xd0\xbf\xd1\x83\xd0\xbd\xd0\xba\xd1\x82\xd0\xb0 \xd0\xb2 \xd0\xa4\xd0\xbe\xd1\x80\xd0\xbc\xd0\xb5 \xe2\x84\x96 1 \xd0\xb8\xd0\xbc \xd0\xbc\xd0\xb0\xd0\xbb\xd0\xbe \xf0\x9f\x98\x8a \xd0\xa1\xd0\xba\xd0\xb0\xd0\xb6\xd1\x83 - \xd1\x81\xd0\xb8\xd0\xb1\xd0\xb8\xd1\x80\xd1\x8f\xd0\xba, \xd0\xb1\xd0\xbb\xd0\xb8\xd0\xbd, \xd0\xbf\xd1\x83\xd1\x81\xd1\x82\xd1\x8c \xd0\xb4\xd0\xbe\xd0\xba\xd0\xb0\xd0\xb6\xd1\x83\xd1\x82, \xd1\x87\xd1\x82\xd0\xbe \xd0\xbd\xd0\xb5\xd0\xbf\xd1\x80\xd0\xb0\xd0\xb2 \xf0\x9f\x98\x8a","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195303,"uid":99696,"from_id":99696,"date":1386603042,"text":"\xd0\x95\xd0\xb2\xd1\x80\xd0\xb0\xd0\xb7\xd0\xb8\xd0\xb5\xd1\x86!","likes":{"count":2,"user_likes":0,"can_like":1}},{"cid":195302,"uid":2091173,"from_id":2091173,"date":1386602981,"text":"\xd0\xbd\xd1\x83 \xd0\xb2\xd0\xbe\xd0\xbe\xd0\xb1\xd1\x89\xd0\xb5-\xd1\x82\xd0\xbe \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd1\x8c \xd0\xb2 \xd0\xb1\xd0\xbb\xd0\xb0\xd0\xbd\xd0\xba\xd0\xb0\xd1\x85 \xd0\xb8 \xd0\xb1\xd1\x8b\xd0\xbb\xd0\xb0, \xd1\x82\xd0\xbe\xd0\xbb\xd1\x8c\xd0\xba\xd0\xbe \xd0\xb7\xd0\xb0\xd0\xbf\xd0\xbe\xd0\xbb\xd0\xbd\xd1\x8f\xd0\xbb\xd0\xb8 \xd0\xbf\xd0\xbe \xd0\xb6\xd0\xb5\xd0\xbb\xd0\xb0\xd0\xbd\xd0\xb8\xd1\x8e","likes":{"count":1,"user_likes":0,"can_like":1}},{"cid":195301,"uid":6717923,"from_id":6717923,"date":1386602934,"text":"\xd1\x82\xd0\xb0\xd0\xba 
\xd0\xb2\xd1\x80\xd0\xbe\xd0\xb4\xd0\xb5\xd0\xb6 \xd0\xbd\xd0\xb5\xd1\x82 \xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\xd1\x8c\xd0\xbd\xd0\xbe\xd1\x81\xd1\x82\xd0\xb8 \xd0\xb4\xd0\xbb\xd1\x8f \xd1\x80\xd1\x83\xd1\x81\xd1\x81\xd0\xba\xd0\xb8\xd1\x85. \xd0\xb5\xd0\xb5 \xd0\xb7\xd0\xb0\xd0\xbc\xd0\xb5\xd0\xbd\xd0\xb8\xd0\xbb\xd0\xb8 \xd0\xbd\xd0\xb0 \\"\xd0\xb3\xd0\xbe\xd1\x80\xd0\xb4\xd0\xbe\xd0\xb5\\" \xd1\x80\xd0\xbe\xd1\x81\xd1\x81\xd0\xb8\xd1\x8f\xd0\xbd\xd0\xb8\xd0\xbd","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195300,"uid":56807327,"from_id":56807327,"date":1386602876,"text":"http:\\/\\/vk.com\\/photo-23482909_315789481","likes":{"count":0,"user_likes":0,"can_like":1}},{"cid":195299,"uid":165400079,"from_id":165400079,"date":1386602872,"text":"[id167496147|\xd0\x90\xd0\xbd\xd0\xb4\xd1\x80\xd0\xb5\xd0\xb9], \xd0\xbd\xd0\xb5, \xd1\x82\xd0\xbe\xd0\xbb\xd1\x8c\xd0\xba\xd0\xbe \xd0\xb4\xd0\xbb\xd0\xb8\xd0\xbd\xd1\x83 \xd0\xb8 \xd1\x80\xd0\xb0\xd0\xb7\xd0\xbc\xd0\xb5\xd1\x80 \xd0\xb3\xd1\x80\xd1\x83\xd0\xb4\xd0\xb8","likes":{"count":1,"user_likes":0,"can_like":1},"reply_to_uid":167496147,"reply_to_cid":195296},{"cid":195298,"uid":76332491,"from_id":76332491,"date":1386602852,"text":"\xd0\x90 \xd1\x87\xd0\xb5\xd0\xb3\xd0\xbe \xd1\x81\xd1\x82\xd0\xb5\xd1\x81\xd0\xbd\xd1\x8f\xd1\x82\xd1\x8c\xd1\x81\xd1\x8f?","likes":{"count":1,"user_likes":0,"can_like":1}},{"cid":195297,"uid":115228782,"from_id":115228782,"date":1386602852,"text":"\xd1\x8d\xd0\xbb\xd1\x8c\xd1\x84\xd1\x8b, \xd1\x85\xd0\xbe\xd0\xb1\xd0\xb1\xd0\xb8\xd1\x82\xd1\x8b, \xd0\xb3\xd0\xbe\xd0\xb1\xd0\xbb\xd0\xb8\xd0\xbd\xd1\x8b \xd0\xb8 \xd1\x82.\xd0\xb4.","likes":{"count":2,"user_likes":0,"can_like":1}},{"cid":195296,"uid":167496147,"from_id":167496147,"date":1386602836,"text":"\xd0\xb0 \xd0\xb4\xd0\xbb\xd0\xb8\xd0\xbd\xd1\x83 \xd0\xb8 \xd0\xb3\xd0\xbb\xd1\x83\xd0\xb1\xd0\xb8\xd0\xbd\xd1\x83 \xd0\xb8\xd0\xbc \xd0\xbd\xd0\xb5 \xd0\xbd\xd0\xb0\xd0\xb4\xd0\xbe \xd1\x83\xd0\xba\xd0\xb0\xd0\xb7\xd1\x8b\xd0\xb2\xd0\xb0\xd1\x82\xd1\x8c???","likes":{"count":8,"user_likes":0,"can_like":1}},{"cid":195295,"uid":70236500,"from_id":70236500,"date":1386602820,"text":"\xd0\x90 \xd0\xb2\xd0\xbe\xd0\xbe\xd0\xb1\xd1\x89\xd0\xb5, \xd1\x81\xd1\x82\xd1\x80\xd0\xb0\xd0\xbd\xd0\xbd\xd0\xbe \xd1\x8d\xd1\x82\xd0\xbe. \xd0\x98 \xd0\xb7\xd0\xb0\xd1\x87\xd0\xb5\xd0\xbc \xd0\xbe\xd0\xbd\xd0\xbe, \xd1\x84\xd0\xb0\xd1\x88\xd0\xb8\xd0\xb7\xd0\xbc \xd0\xb6\xd0\xb5","likes":{"count":4,"user_likes":0,"can_like":1}},{"cid":195294,"uid":156031418,"from_id":156031418,"date":1386602812,"text":"\xd0\xb4\xd0\xb0\xd0\xb2\xd0\xbd\xd0\xbe \xd0\xbf\xd0\xbe\xd1\x80\xd0\xb0.","likes":{"count":4,"user_likes":0,"can_like":1}},{"cid":195293,"uid":70236500,"from_id":70236500,"date":1386602804,"text":"\\"\xd0\x9c\xd0\xbd\xd0\xbe\xd0\xb3\xd0\xbe\xd0\xbd\xd0\xb0\xd1\x86\xd0\xb8\xd0\xbe\xd0\xbd\xd0\xb0\xd0\xbb\\"","likes":{"count":4,"user_likes":0,"can_like":1}}]}'
kwargs = {'sort': 'desc', 'count': 90, 'need_likes': 1, 'post_id': 195292, 'offset': 0, 'preview_length': 0, 'owner_id': -23482909}
comments = self.api.get('wall.getComments', **kwargs)
self.assertEqual(len(comments), 36)
if __name__ == '__main__':
unittest.main() | mit |
Elettronik/SickRage | lib/stevedore/tests/test_dispatch.py | 55 | 3590 | from stevedore.tests import utils
from stevedore import dispatch
def check_dispatch(ep, *args, **kwds):
return ep.name == 't2'
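# Contract assumed by DispatchExtensionManager.map(): this predicate is called
# once per loaded extension (with the same extra map() arguments) and the
# mapped function is only invoked for extensions where it returns True, so
# only 't2' participates in the tests below.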
class TestDispatch(utils.TestCase):
    def check_dispatch(self, ep, *args, **kwds):
        return ep.name == 't2'
def test_dispatch(self):
def invoke(ep, *args, **kwds):
return (ep.name, args, kwds)
em = dispatch.DispatchExtensionManager('stevedore.test.extension',
lambda *args, **kwds: True,
invoke_on_load=True,
invoke_args=('a',),
invoke_kwds={'b': 'B'},
)
self.assertEqual(len(em.extensions), 2)
self.assertEqual(set(em.names()), set(['t1', 't2']))
results = em.map(check_dispatch,
invoke,
'first',
named='named value',
)
expected = [('t2', ('first',), {'named': 'named value'})]
self.assertEqual(results, expected)
def test_dispatch_map_method(self):
em = dispatch.DispatchExtensionManager('stevedore.test.extension',
lambda *args, **kwds: True,
invoke_on_load=True,
invoke_args=('a',),
invoke_kwds={'b': 'B'},
)
results = em.map_method(check_dispatch, 'get_args_and_data', 'first')
self.assertEqual(results, [(('a',), {'b': 'B'}, 'first')])
def test_name_dispatch(self):
def invoke(ep, *args, **kwds):
return (ep.name, args, kwds)
em = dispatch.NameDispatchExtensionManager('stevedore.test.extension',
lambda *args, **kwds: True,
invoke_on_load=True,
invoke_args=('a',),
invoke_kwds={'b': 'B'},
)
self.assertEqual(len(em.extensions), 2)
self.assertEqual(set(em.names()), set(['t1', 't2']))
results = em.map(['t2'], invoke, 'first', named='named value',)
expected = [('t2', ('first',), {'named': 'named value'})]
self.assertEqual(results, expected)
def test_name_dispatch_ignore_missing(self):
def invoke(ep, *args, **kwds):
return (ep.name, args, kwds)
em = dispatch.NameDispatchExtensionManager(
'stevedore.test.extension',
lambda *args, **kwds: True,
invoke_on_load=True,
invoke_args=('a',),
invoke_kwds={'b': 'B'},
)
results = em.map(['t3', 't1'], invoke, 'first', named='named value',)
expected = [('t1', ('first',), {'named': 'named value'})]
self.assertEqual(results, expected)
def test_name_dispatch_map_method(self):
em = dispatch.NameDispatchExtensionManager(
'stevedore.test.extension',
lambda *args, **kwds: True,
invoke_on_load=True,
invoke_args=('a',),
invoke_kwds={'b': 'B'},
)
results = em.map_method(['t3', 't1'], 'get_args_and_data', 'first')
self.assertEqual(results, [(('a',), {'b': 'B'}, 'first')])
| gpl-3.0 |
has2k1/numpy | numpy/core/memmap.py | 97 | 10378 | from __future__ import division, absolute_import, print_function
import numpy as np
from .numeric import uint8, ndarray, dtype
from numpy.compat import long, basestring
__all__ = ['memmap']
dtypedescr = dtype
valid_filemodes = ["r", "c", "r+", "w+"]
writeable_filemodes = ["r+", "w+"]
mode_equivalents = {
"readonly":"r",
"copyonwrite":"c",
"readwrite":"r+",
"write":"w+"
}
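# Illustrative note (not in the original module): long-form mode names are
# normalized inside memmap.__new__ via this table, e.g.
# mode_equivalents['readonly'] gives 'r', while short forms such as 'r+'
# bypass the table and are checked against valid_filemodes instead.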
class memmap(ndarray):
"""Create a memory-map to an array stored in a *binary* file on disk.
Memory-mapped files are used for accessing small segments of large files
    on disk, without reading the entire file into memory.  NumPy's
    memmaps are array-like objects.  This differs from Python's ``mmap``
module, which uses file-like objects.
This subclass of ndarray has some unpleasant interactions with
some operations, because it doesn't quite fit properly as a subclass.
An alternative to using this subclass is to create the ``mmap``
object yourself, then create an ndarray with ndarray.__new__ directly,
passing the object created in its 'buffer=' parameter.
This class may at some point be turned into a factory function
which returns a view into an mmap buffer.
Delete the memmap instance to close.
Parameters
----------
filename : str or file-like object
The file name or file object to be used as the array data buffer.
dtype : data-type, optional
The data-type used to interpret the file contents.
Default is `uint8`.
mode : {'r+', 'r', 'w+', 'c'}, optional
The file is opened in this mode:
+------+-------------------------------------------------------------+
| 'r' | Open existing file for reading only. |
+------+-------------------------------------------------------------+
| 'r+' | Open existing file for reading and writing. |
+------+-------------------------------------------------------------+
| 'w+' | Create or overwrite existing file for reading and writing. |
+------+-------------------------------------------------------------+
| 'c' | Copy-on-write: assignments affect data in memory, but |
| | changes are not saved to disk. The file on disk is |
| | read-only. |
+------+-------------------------------------------------------------+
Default is 'r+'.
offset : int, optional
In the file, array data starts at this offset. Since `offset` is
measured in bytes, it should normally be a multiple of the byte-size
of `dtype`. When ``mode != 'r'``, even positive offsets beyond end of
file are valid; The file will be extended to accommodate the
additional data. By default, ``memmap`` will start at the beginning of
the file, even if ``filename`` is a file pointer ``fp`` and
``fp.tell() != 0``.
shape : tuple, optional
The desired shape of the array. If ``mode == 'r'`` and the number
of remaining bytes after `offset` is not a multiple of the byte-size
of `dtype`, you must specify `shape`. By default, the returned array
will be 1-D with the number of elements determined by file size
and data-type.
order : {'C', 'F'}, optional
Specify the order of the ndarray memory layout:
:term:`row-major`, C-style or :term:`column-major`,
Fortran-style. This only has an effect if the shape is
greater than 1-D. The default order is 'C'.
Attributes
----------
filename : str
Path to the mapped file.
offset : int
Offset position in the file.
mode : str
File mode.
Methods
-------
flush
Flush any changes in memory to file on disk.
When you delete a memmap object, flush is called first to write
changes to disk before removing the object.
Notes
-----
The memmap object can be used anywhere an ndarray is accepted.
Given a memmap ``fp``, ``isinstance(fp, numpy.ndarray)`` returns
``True``.
Memory-mapped arrays use the Python memory-map object which
(prior to Python 2.5) does not allow files to be larger than a
certain size depending on the platform. This size is always < 2GB
even on 64-bit systems.
When a memmap causes a file to be created or extended beyond its
current size in the filesystem, the contents of the new part are
unspecified. On systems with POSIX filesystem semantics, the extended
part will be filled with zero bytes.
Examples
--------
>>> data = np.arange(12, dtype='float32')
>>> data.resize((3,4))
This example uses a temporary file so that doctest doesn't write
files to your directory. You would use a 'normal' filename.
>>> from tempfile import mkdtemp
>>> import os.path as path
>>> filename = path.join(mkdtemp(), 'newfile.dat')
Create a memmap with dtype and shape that matches our data:
>>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4))
>>> fp
memmap([[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]], dtype=float32)
Write data to memmap array:
>>> fp[:] = data[:]
>>> fp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fp.filename == path.abspath(filename)
True
Deletion flushes memory changes to disk before removing the object:
>>> del fp
Load the memmap and verify data was stored:
>>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> newfp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Read-only memmap:
>>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> fpr.flags.writeable
False
Copy-on-write memmap:
>>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4))
>>> fpc.flags.writeable
True
It's possible to assign to copy-on-write array, but values are only
written into the memory copy of the array, and not written to disk:
>>> fpc
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fpc[0,:] = 0
>>> fpc
memmap([[ 0., 0., 0., 0.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
File on disk is unchanged:
>>> fpr
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Offset into a memmap:
>>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16)
>>> fpo
memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32)
"""
__array_priority__ = -100.0
def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0,
shape=None, order='C'):
# Import here to minimize 'import numpy' overhead
import mmap
import os.path
try:
mode = mode_equivalents[mode]
except KeyError:
if mode not in valid_filemodes:
raise ValueError("mode must be one of %s" %
(valid_filemodes + list(mode_equivalents.keys())))
if hasattr(filename, 'read'):
fid = filename
own_file = False
else:
fid = open(filename, (mode == 'c' and 'r' or mode)+'b')
own_file = True
if (mode == 'w+') and shape is None:
raise ValueError("shape must be given")
fid.seek(0, 2)
flen = fid.tell()
descr = dtypedescr(dtype)
_dbytes = descr.itemsize
if shape is None:
bytes = flen - offset
if (bytes % _dbytes):
fid.close()
raise ValueError("Size of available data is not a "
"multiple of the data-type size.")
size = bytes // _dbytes
shape = (size,)
else:
if not isinstance(shape, tuple):
shape = (shape,)
size = 1
for k in shape:
size *= k
bytes = long(offset + size*_dbytes)
if mode == 'w+' or (mode == 'r+' and flen < bytes):
fid.seek(bytes - 1, 0)
fid.write(np.compat.asbytes('\0'))
fid.flush()
if mode == 'c':
acc = mmap.ACCESS_COPY
elif mode == 'r':
acc = mmap.ACCESS_READ
else:
acc = mmap.ACCESS_WRITE
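        # Alignment note: mmap offsets must be a multiple of the OS
        # ALLOCATIONGRANULARITY, so the map below starts at the nearest
        # aligned byte and the residual 'offset' is kept so the ndarray
        # still begins at the byte the caller asked for.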
start = offset - offset % mmap.ALLOCATIONGRANULARITY
bytes -= start
offset -= start
mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)
self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
offset=offset, order=order)
self._mmap = mm
self.offset = offset
self.mode = mode
if isinstance(filename, basestring):
self.filename = os.path.abspath(filename)
# py3 returns int for TemporaryFile().name
elif (hasattr(filename, "name") and
isinstance(filename.name, basestring)):
self.filename = os.path.abspath(filename.name)
# same as memmap copies (e.g. memmap + 1)
else:
self.filename = None
if own_file:
fid.close()
return self
def __array_finalize__(self, obj):
if hasattr(obj, '_mmap') and np.may_share_memory(self, obj):
self._mmap = obj._mmap
self.filename = obj.filename
self.offset = obj.offset
self.mode = obj.mode
else:
self._mmap = None
self.filename = None
self.offset = None
self.mode = None
def flush(self):
"""
Write any changes in the array to the file on disk.
For further information, see `memmap`.
Parameters
----------
None
See Also
--------
memmap
"""
if self.base is not None and hasattr(self.base, 'flush'):
self.base.flush()
| bsd-3-clause |
wilsonfreitas/pelican-plugins | pdf/test_pdf.py | 24 | 1527 | import unittest
import os
import locale
import logging
import pdf
from tempfile import mkdtemp
from pelican import Pelican
from pelican.readers import MarkdownReader
from pelican.settings import read_settings
from shutil import rmtree
CUR_DIR = os.path.dirname(__file__)
class TestPdfGeneration(unittest.TestCase):
def setUp(self, override=None):
self.temp_path = mkdtemp(prefix='pelicantests.')
settings = {
'PATH': os.path.join(os.path.dirname(CUR_DIR), '..', 'test_data',
'content'),
'OUTPUT_PATH': self.temp_path,
'PLUGINS': [pdf],
'LOCALE': locale.normalize('en_US'),
}
if override:
settings.update(override)
self.settings = read_settings(override=settings)
pelican = Pelican(settings=self.settings)
try:
pelican.run()
except ValueError:
logging.warn('Relative links in the form of ' +
'|filename|images/test.png are not yet handled by ' +
' the pdf generator')
pass
def tearDown(self):
rmtree(self.temp_path)
def test_existence(self):
assert os.path.exists(os.path.join(self.temp_path, 'pdf',
'this-is-a-super-article.pdf'))
if MarkdownReader.enabled:
assert os.path.exists(os.path.join(self.temp_path, 'pdf',
'a-markdown-powered-article.pdf'))
| agpl-3.0 |
manojgudi/sandhi | modules/gr36/gr-utils/src/python/modtool/code_generator.py | 7 | 2263 | #
# Copyright 2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
""" A code generator (needed by ModToolAdd) """
from templates import Templates
import Cheetah.Template
from util_functions import str_to_fancyc_comment
from util_functions import str_to_python_comment
from util_functions import strip_default_values
from util_functions import strip_arg_types
from util_functions import strip_arg_types_grc
class GRMTemplate(Cheetah.Template.Template):
""" An extended template class """
def __init__(self, src, searchList):
self.grtypelist = {
'sync': 'gr_sync_block',
'sink': 'gr_sync_block',
'source': 'gr_sync_block',
'decimator': 'gr_sync_decimator',
'interpolator': 'gr_sync_interpolator',
'general': 'gr_block',
'hier': 'gr_hier_block2',
'noblock': ''}
searchList['str_to_fancyc_comment'] = str_to_fancyc_comment
searchList['str_to_python_comment'] = str_to_python_comment
searchList['strip_default_values'] = strip_default_values
searchList['strip_arg_types'] = strip_arg_types
searchList['strip_arg_types_grc'] = strip_arg_types_grc
Cheetah.Template.Template.__init__(self, src, searchList=searchList)
self.grblocktype = self.grtypelist[searchList['blocktype']]
def get_template(tpl_id, **kwargs):
""" Return the template given by tpl_id, parsed through Cheetah """
return str(GRMTemplate(Templates[tpl_id], searchList=kwargs))
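# A hypothetical usage sketch (the template id and keywords are illustrative,
# not taken from this file): rendering a template through the Cheetah wrapper.
#
#   code = get_template('block_impl_cpp', blocktype='sync',
#                       blockname='my_block', arglist='')
#
# GRMTemplate first maps 'blocktype' through grtypelist to the GNU Radio base
# class ('gr_sync_block' here) before Cheetah expands the template body.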
| gpl-3.0 |
BhavyaLight/kaggle-predicting-Red-Hat-Business-Value | Initial_Classification_Models/Ensemble/RandomForest500XGBoost.py | 1 | 8265 | import pandas as pd
import xgboost as xgb1
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from Classification import Utility
import pickle
import time
from sklearn.preprocessing import StandardScaler
# Function to change labels of categories to one-hot encoding using scikit's OneHotEncoder
# pd.get_dummies(df) does the same and provides sweet headers as well, but it is not fast enough and kills memory
def category_to_one_hot(dataset, non_feature, continuous_feature):
    # Function to change labels of categories to one-hot encoding using scikit's OneHotEncoder sparse matrix
    # pd.get_dummies(df) does the same, provides sweet headers as well but it kills memory
ds = dataset.drop(non_feature, axis=1, errors='ignore')
boolean_column = []
counter = 0
    if 'days' in ds.columns:
        ds['weekend'] = ds['days'] // 5
for column in ds.columns:
if column not in continuous_feature:
boolean_column.append(counter)
counter += 1
# boolean_column is not the column name but index
print("Done filtering columns...")
grd_enc = OneHotEncoder(categorical_features=boolean_column)
encoded_arr = grd_enc.fit_transform(ds)
return encoded_arr
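# A minimal usage sketch (column names are illustrative, not from the real
# feature set); note this relies on the old scikit-learn OneHotEncoder
# 'categorical_features' argument:
#
#   df = pd.DataFrame({'colour': [0, 1, 2], 'days': [3, 6, 1]})
#   sparse = category_to_one_hot(df, non_feature=[], continuous_feature=['days'])
#   # 'colour' (and the derived 'weekend' flag, since it is not listed as
#   # continuous) is one-hot encoded; 'days' passes through unchanged.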
start_time = time.time()
# features = [(1.0, 'people_group_1')]
# columns = []
#
# filename = "randomForest500Model"
#
# for val in features:
# if (val[0] == 1.0):
# columns.append(val[1])
#
# RandomForestFilename = "randomForest500Model"
#
# train_dataset = pd.read_csv("../../Data/act_train_features_reduced.csv")
# test_dataset = pd.read_csv("../../Data/act_test_features_reduced.csv")
# train_output = pd.read_csv("../../Data/act_train_output.csv")
#
# train_dataset = pd.merge(train_dataset, train_output, on="activity_id", how='inner')
# print("--- %s seconds ---" % (time.time() - start_time))
#
# randomForestModel = Utility.loadModel("randomForestModel_OHE")
# # randomForestModel = RandomForestClassifier(n_estimators=500)
# #
# # randomForestModel.fit(train_dataset[columns], train_dataset[["outcome"]].values.ravel())
#
# prob_train = randomForestModel.predict_proba(train_dataset[columns])
# prob_test = randomForestModel.predict_proba(test_dataset[columns])
# # Utility.saveModel(randomForestModel, "randomForestModel_OHE")
#
# train_dataset["Random_Forest_1"] = prob_train[:,1]
#
# test_dataset["Random_Forest_1"] = prob_test[:,1]
#
# Utility.saveModel(train_dataset, "train_randomforest")
# Utility.saveModel(test_dataset, "test_randomforest")
train_dataset = Utility.loadModel("train_randomforest")
test_dataset = Utility.loadModel("test_randomforest")
print("Random Forest Done")
print("--- %s seconds ---" % (time.time() - start_time))
features = [(1.0, 'Random_Forest_1'), (1.0, 'char_3'), (1.0, 'char_4'), (1.0, 'char_5'),
(1.0, 'char_6'), (1.0, 'char_8'), (1.0, 'char_9'), (1.0, 'days'), (1.0, 'month'), (1.0, 'people_char_1'),
(1.0, 'people_char_10'), (1.0, 'people_char_11'), (1.0, 'people_char_12'), (1.0, 'people_char_13'),
(1.0, 'people_char_14'), (1.0, 'people_char_15'), (1.0, 'people_char_16'), (1.0, 'people_char_17'),
(1.0, 'people_char_18'), (1.0, 'people_char_19'), (1.0, 'people_char_2'), (1.0, 'people_char_20'),
(1.0, 'people_char_21'), (1.0, 'people_char_22'), (1.0, 'people_char_23'), (1.0, 'people_char_24'),
(1.0, 'people_char_25'), (1.0, 'people_char_26'), (1.0, 'people_char_27'), (1.0, 'people_char_28'),
(1.0, 'people_char_29'), (1.0, 'people_char_3'), (1.0, 'people_char_30'), (1.0, 'people_char_31'),
(1.0, 'people_char_32'), (1.0, 'people_char_33'), (1.0, 'people_char_34'), (1.0, 'people_char_35'),
(1.0, 'people_char_36'), (1.0, 'people_char_37'), (1.0, 'people_char_38'), (1.0, 'people_char_4'),
(1.0, 'people_char_5'), (1.0, 'people_char_6'), (1.0, 'people_char_7'), (1.0, 'people_char_8'),
(1.0, 'people_char_9'), (1.0, 'people_dayOfMonth'), (1.0, 'people_month'), (1.0, 'people_quarter'),
(1.0, 'people_week'), (1.0, 'people_year'), (1.0, 'quarter'), (1.0, 'week'), (1.0, 'year'), (2.0, 'char_7'),
(3.0, 'char_1'), (4.0, 'dayOfMonth'), (5.0, 'activity_category'), (6.0, 'people_days'), (7.0, 'char_2'),
(8.0, 'people_group_1'), (9.0, 'people_id')]
columns = []
filename = 'randomPlusXGBOHE_new_woGP10_2'
for val in features:
# if(val[0] == 1.0):
columns.append(val[1])
train_dataset_outcome = train_dataset[["outcome"]]
train_dataset = train_dataset[columns]
# Non feature
NON_FEATURE=['activity_id','people_id','date','people_date']
# Categorical data that is only label encoded
CATEGORICAL_DATA = ['people_char_1', 'people_char_2','people_group_1',
'people_char_3', 'people_char_4', 'people_char_5',
'people_char_6', 'people_char_7', 'people_char_8',
'people_char_9', 'activity_category',
'char_1', 'char_2', 'char_3', 'char_4', 'char_5', 'char_6',
'char_7', 'char_8', 'char_9']
#removed char_10 to check xgb
# Already in a one-hot encoded form
CATEGORICAL_BINARY = ['people_char_10', 'people_char_11', 'people_char_12',
'people_char_13', 'people_char_14', 'people_char_15',
'people_char_16', 'people_char_17', 'people_char_18',
'people_char_19', 'people_char_20', 'people_char_21',
'people_char_22', 'people_char_23', 'people_char_24',
'people_char_25', 'people_char_26', 'people_char_27',
'people_char_28', 'people_char_29', 'people_char_30',
'people_char_31', 'people_char_32', 'people_char_33',
'people_char_34', 'people_char_35', 'people_char_36',
'people_char_37', 'Random_Forest_1' ]
# Continuous categories
CONT = ['people_days', 'days',
'people_month', 'month',
'people_quarter', 'quarter',
'people_week', 'week',
'people_dayOfMonth', 'dayOfMonth',
'people_year', 'year',
'people_char_38']
train_dataset_array = (category_to_one_hot(train_dataset, NON_FEATURE, CONT))
Utility.saveModel(train_dataset_array, "ohe_log")
norm = StandardScaler(with_mean=False, with_std=True)
norm1 = StandardScaler(with_mean=False, with_std=True)
# train_dataset_array =
print("--- %s seconds ---" % (time.time() - start_time))
print("Starting Log Reg")
X = train_dataset_array
#norm.fit(X)
#X = norm.transform(X)
Y = train_dataset_outcome.values.ravel()
# logisticModel = Utility.loadModel(filename)
test_dataset_act_id = test_dataset[["activity_id"]]
test_dataset = test_dataset[columns]
test_dataset_array = (category_to_one_hot(test_dataset, NON_FEATURE, CONT))
#norm1.fit(test_dataset_array)
#test_dataset_array = norm1.transform(test_dataset_array)
xgb = XGBClassifier(max_depth=10, learning_rate=0.3, n_estimators=25,
objective='binary:logistic', subsample=0.7,
colsample_bytree=0.7, seed=0, silent=1, nthread=4,
min_child_weight=0)
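# Note: this sklearn-style XGBClassifier is constructed but never fitted; the
# actual training below goes through xgboost's native API (DMatrix plus
# xgb1.train) with a separate 'param' dict, so changes here have no effect.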
dtrain = xgb1.DMatrix(X,label=Y)
dtest = xgb1.DMatrix(test_dataset_array)
param = {'max_depth':10, 'eta':0.02, 'silent':1, 'objective':'binary:logistic' }
param['nthread'] = 4
param['eval_metric'] = 'auc'
param['subsample'] = 0.7
param['colsample_bytree']= 0.7
param['min_child_weight'] = 0
param['booster'] = "gblinear"
watchlist = [(dtrain,'train')]
num_round = 1500
early_stopping_rounds=10
bst = xgb1.train(param, dtrain, num_round, watchlist,early_stopping_rounds=early_stopping_rounds)
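# Note on early stopping: xgboost watches the last entry of the watchlist for
# early stopping, and here that entry is the training set itself, so training
# only stops early if train AUC plateaus; a held-out eval set would be needed
# for validation-based early stopping.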
print("--- %s seconds ---" % (time.time() - start_time))
Utility.saveModel(bst, filename)
# pickle.dump(logisticModel, open(filename, 'wb'))
ypred = bst.predict(dtest)
# probs = (xgb.predict_proba(test_dataset_array))
# test_dataset_act_id["outcome"] = probs[:,1]
test_dataset_act_id["outcome"] = ypred
print("--- %s seconds ---" % (time.time() - start_time))
Utility.saveInOutputForm(test_dataset_act_id, filename + ".csv", "ensemble")
# test_dataset_act_id[["activity_id", "outcome"]].set_index(["activity_id"]).to_csv("../../Data/" + filename + ".csv")
| mit |
JamesMGreene/phantomjs | src/breakpad/src/third_party/protobuf/protobuf/examples/add_person.py | 432 | 1656 | #! /usr/bin/python
# See README.txt for information and build instructions.
import addressbook_pb2
import sys
# This function fills in a Person message based on user input.
def PromptForAddress(person):
person.id = int(raw_input("Enter person ID number: "))
person.name = raw_input("Enter name: ")
email = raw_input("Enter email address (blank for none): ")
if email != "":
person.email = email
while True:
number = raw_input("Enter a phone number (or leave blank to finish): ")
if number == "":
break
phone_number = person.phone.add()
phone_number.number = number
type = raw_input("Is this a mobile, home, or work phone? ")
if type == "mobile":
phone_number.type = addressbook_pb2.Person.MOBILE
elif type == "home":
phone_number.type = addressbook_pb2.Person.HOME
elif type == "work":
phone_number.type = addressbook_pb2.Person.WORK
else:
print "Unknown phone type; leaving as default value."
# Main procedure: Reads the entire address book from a file,
# adds one person based on user input, then writes it back out to the same
# file.
if len(sys.argv) != 2:
print "Usage:", sys.argv[0], "ADDRESS_BOOK_FILE"
sys.exit(-1)
address_book = addressbook_pb2.AddressBook()
# Read the existing address book.
try:
f = open(sys.argv[1], "rb")
address_book.ParseFromString(f.read())
f.close()
except IOError:
print sys.argv[1] + ": File not found. Creating a new file."
# Add an address.
PromptForAddress(address_book.person.add())
# Write the new address book back to disk.
f = open(sys.argv[1], "wb")
f.write(address_book.SerializeToString())
f.close()
| bsd-3-clause |
NHellFire/kmotion | install.py | 2 | 19107 | #!/usr/bin/env python
# Copyright 2008 David Selby dave6502@googlemail.com
# This file is part of kmotion.
# kmotion is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# kmotion is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with kmotion. If not, see <http://www.gnu.org/licenses/>.
"""
The auto installer
"""
import os, sys, pwd, grp, time, stat, ConfigParser, shutil
from subprocess import * # breaking habit of a lifetime !
import core.init_core as init_core
import core.daemon_whip as daemon_whip
class exit_(Exception): pass
DEP_TEXT = """
****************************************************************************
Welcome to the kmotion v2 automatic installer. If this installer fails
please file a bug at http://code.google.com/p/kmotion-v2-code/issues/list
with full details so kmotion can be improved.
****************************************************************************
kmotion v2 has the following dependencies :
apache2 ... v2.2.x
apache2 python module v3.3.x
motion ... v3.2.x
python ... v2.4.x
ssh ... vx.x.x (advised for remote server install)
ntp ... vx.x.x (advised for remote server install)
screen ... vx.x.x (advised for remote server install)
If you are running Debian / Ubuntu or a derivative use the following command
apt-get install apache2 libapache2-mod-python motion python
Have the above dependencies been met (yes/no) ? (default yes) :"""
INSTALL_TEXT = """
****************************************************************************
This automatic installer will modify your system in the following ways :
(1) The 'kmotion' and 'kmotion_ptz' exes will be added to 'bin'
(2) A kmotion vhosts include will be added to 'apache2.conf'
(3) An @reboot 'kmotion start' will be added to 'crontab'
All of which are reversible manually or by executing uninstall.py.
**IMPORTANT** KMOTION USES MOTION AS PART OF ITS BACK END, YOU CANNOT USE
MOTION INDEPENDENTLY IF YOU ARE USING KMOTION.
Type \'install\' to start install :"""
SELECT_USER = """
****************************************************************************
kmotion v2 runs as a service. Select the user who will run this service
by typing the users name below. Please ensure that the selected user has
sufficient authority to execute kmotion in its current location.
This would normally be the current user unless you are root !
Available users are :"""
LINE_TEXT = """
****************************************************************************
"""
FOOTER_TEXT = """
****************************************************************************
PORT : %s
IMAGES DIRECTORY : %s
MAX IMAGES DIR SIZE : %s GB
LDAP : %s
TO CHANGE ANY OF THE ABOVE EDIT 'kmotion_rc'
FOR FURTHER CONFIGURATION DETAILS PLEASE REFER TO THE VIDEOS AT :
'http://kmotion.eu/mediawiki/index.php/Videos_v2'
****************************************************************************
INSTALLATION HAS FINISHED, REBOOT THE SYSTEM TO TEST THE AUTO START SCRIPT OR
EXECUTE 'kmotion start'.
POINT YOUR BROWSER (HOPEFULLY FIREFOX) TO 'http://localhost:%s' OR
'http://xx.xx.xx.xx:%s' THE DEFAULT USERNAME IS 'kmotion', THE DEFAULT
PASSWORD IS 'kmotion'.
"""
def install():
"""
The auto install script
args :
excepts :
return : none
"""
# ##########################################################################
print DEP_TEXT,
raw_ip = raw_input()
if raw_ip != '' and raw_ip != 'yes':
raise exit_('Please satisfy the above dependencies')
# ##########################################################################
print INSTALL_TEXT,
if raw_input() != 'install':
raise exit_('Install aborted')
print LINE_TEXT
# ##########################################################################
# check we are running as root
checking('Checking install is running as root')
uid = os.getuid()
if uid != 0:
fail()
raise exit_('The installer needs to be runs as root')
ok()
# ##########################################################################
# check we can read ./core/core_rc - if we can't, assume we are
# not in the kmotion root directory
checking('Checking installer is running in correct directory')
if not os.path.isfile('./core/core_rc'):
fail()
raise exit_('Please \'cd\' to the kmotion root directory before running the installer')
ok()
# if we are in the root dir set kmotion_dir
kmotion_dir = os.getcwd()
# check for existing motion instances
checking('Checking for existing \'motion\' daemon instances')
check_motion(kmotion_dir)
ok()
checking('Killing kmotion daemons')
daemon_whip.kill_daemons()
ok()
# select a user to run the kmotion service
checking('Searching for possible users to run kmotion service')
ok()
print SELECT_USER,
users_uid = [[i[0], i[2], i[3]] for i in pwd.getpwall() if i[2] >= 500 or i[2] == 0]
users = [i[0] for i in users_uid if i[0] != 'root' and i[0] != 'nobody']
uid = [i[1] for i in users_uid if i[0] != 'root' and i[0] != 'nobody']
gid = [i[2] for i in users_uid if i[0] != 'root' and i[0] != 'nobody']
for user in users:
print '\'%s\'' % user,
print '\n\nType \'user\' to continue :',
select = raw_input()
if select not in users:
raise exit_('Invalid user selected, Install aborted')
kmotion_user = select
kmotion_uid = uid[users.index(select)]
kmotion_gid = gid[users.index(select)]
# ##########################################################################
# select ramdisk type
df_out = Popen(['df'], stdout=PIPE).communicate()[0].split('\n')
for line in df_out:
split = line.split()
if len(split) < 6:
continue
#if False: # debug option to force 'virtual_ramdisk'
if split[5] == '/dev/shm' and int(split[1]) > 30000:
ramdisk_dir = '/dev/shm/kmotion_ramdisk'
checking('Selected ramdisk ... /dev/shm')
ok()
break
else:
ramdisk_dir = '%s/www/virtual_ramdisk' % kmotion_dir
checking('Selected virtual_ramdisk')
ok()
# ##########################################################################
# initialise resource configurations
checking('Initialise resource configurations')
try: # wrapping in a try - except because parsing data from kmotion_rc
init_core.init_rcs(kmotion_dir, ramdisk_dir)
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
fail()
raise exit_('Corrupt \'kmotion_rc\' : %s' % sys.exc_info()[1])
ok()
# ##########################################################################
# generate kmotion vhost
checking('Generating kmotion vhost')
try: # wrapping in a try - except because parsing data from kmotion_rc
init_core.gen_vhost(kmotion_dir)
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
fail()
raise exit_('Corrupt \'kmotion_rc\' : %s' % sys.exc_info()[1])
# chown is needed so kmotion vhost is not locked to root allowing non root
# kmotion to regenerate the vhost
os.chown('%s/www/vhosts/kmotion' % kmotion_dir, kmotion_uid, kmotion_gid)
ok()
# ##########################################################################
# modifying 'apache2.conf'
checking('Adding kmotion include to \'apache2.conf\'')
try:
modify_apache2(kmotion_dir)
except exit_, text_:
fail()
raise exit_(text_)
ok()
# ##########################################################################
checking('Apache2 restart')
ok()
restart_apache2()
# ##########################################################################
checking('Waiting for apache2 to init processes ... please wait')
ok()
time.sleep(5)
# ##########################################################################
apache2_group, apache2_gid = get_apache2_gid()
checking('Searching for Apache2 group ... found \'%s\'' % apache2_group)
ok()
# ##########################################################################
checking('Generating \'kmotion\' executable')
init_core.gen_kmotion(kmotion_dir, kmotion_uid, kmotion_gid)
ok()
# ##########################################################################
checking('Generating \'kmotion_ptz\' executable')
init_core.gen_kmotion_ptz(kmotion_dir, kmotion_uid, kmotion_gid)
ok()
# ##########################################################################
checking('Moving executables to \'bin\' directories')
try:
exe_path = move_exes(kmotion_dir)
except exit_, text_:
fail()
raise exit_(text_)
ok()
# ##########################################################################
checking('Adding @reboot to crontab')
modify_crontab(kmotion_user, exe_path)
ok()
# ##########################################################################
checking('Setting named pipes permissions')
init_core.set_uid_gid_named_pipes(kmotion_dir, kmotion_uid, apache2_gid)
ok()
# ##########################################################################
checking('Setting \'servo_state\' permissions')
init_core.set_uid_gid_servo_state(kmotion_dir, kmotion_uid, apache2_gid)
ok()
# ##########################################################################
checking('Setting mutex permissions')
init_core.set_uid_gid_mutex(kmotion_dir, kmotion_uid, apache2_gid)
ok()
# ##########################################################################
# kmotion not running, no need for mutex
parser = ConfigParser.SafeConfigParser()
parser.read('%s/kmotion_rc' % kmotion_dir)
ldap = parser.get('LDAP', 'enabled')
images_dbase_dir = parser.get('dirs', 'images_dbase_dir')
port = parser.get('misc', 'port')
images_dbase_limit_gb = parser.get('storage', 'images_dbase_limit_gb')
# ##########################################################################
checking('Removing root pyc\'s')
rm_root_pycs(kmotion_dir)
ok()
# ##########################################################################
print FOOTER_TEXT % (port, images_dbase_dir, images_dbase_limit_gb, ldap, port, port),
print LINE_TEXT
def check_motion(kmotion_dir):
"""
Check for any invalid motion processes
args :
excepts : exit_ ... if motion daemon already running
return : none
"""
p_objs = Popen('ps ax | grep -e [[:space:]/]motion[[:space:]/] | grep -v \'\-c %s/core/motion_conf/motion.conf\'' % kmotion_dir, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
line = p_objs.stdout.readline()
if line != '':
raise exit_("""\nAn instance of the motion daemon has been detected which is not under control
of kmotion. Please kill this instance and ensure that motion is not started
automatically on system bootup. This is a known problem with Ubuntu 8.04,
reference Bug #235599.""")
def modify_apache2(kmotion_dir):
"""
Locates apache2.conf, remove any previous include references, add kmotion
include and comment
args : kmotion_dir ... the 'root' directory of kmotion
excepts : exit_ ... if apache2.conf can not be found
return : none
"""
confs = ['/etc/apache2/apache2.conf', '/etc/apache2/httpd.conf',
'/etc/httpd/conf/httpd.conf', '/etc/httpd/httpd.conf',
'/etc/apache2/default-server.conf']
for conf in confs: # coded this way to allow for different .conf files
if os.path.isfile(conf):
f_obj = open(conf)
lines = f_obj.readlines()
f_obj.close()
for i in range(len(lines) - 1):
if i >= len(lines): break # 'del' will shorten the list length
if (lines[i].rstrip() == '# Include kmotion vhosts directory' and
lines[i + 1][:7] == 'Include'):
del lines[i:i + 2]
if lines[-1] == '\n': del lines[-1] # strip off new line
f_obj = open(conf, 'w')
f_obj.writelines(lines)
f_obj.write('\n# Include kmotion vhosts directory\n')
f_obj.write('Include %s/www/vhosts/kmotion\n' % kmotion_dir)
f_obj.close()
break
else:
raise exit_('Unable to locate : %s' % list_format(confs))
def restart_apache2():
"""
Restart the apache2 server
args :
excepts :
return :
"""
if call(['which', 'apachectl']) == 1:
call(['apache2ctl', 'restart'])
else:
call(['apachectl', 'restart'])
#apache2s = ['/etc/init.d/apache2', '/etc/init.d/httpd']
#for apache2 in apache2s:
#if os.path.isfile(apache2):
#print
#os.system('%s restart' % apache2) # os.system used deliberately as
## it dumps output to term real time
#break
#else:
#raise exit_('Unable to locate : %s' % list_format(apache2s))
def get_apache2_gid():
"""
Return apache2's group name and gid
args :
excepts :
return : apache2's group name
apache2's gid
"""
# debian and derivatives
if os.path.isfile('/etc/apache2/envvars'):
f_obj = open('/etc/apache2/envvars')
lines = f_obj.readlines()
f_obj.close()
for line in lines:
split = line.split('=')
if split[0] == 'export APACHE_RUN_GROUP':
apache2_group = split[1].strip()
break
# suse
elif os.path.isfile('/etc/apache2/uid.conf'):
f_obj = open('/etc/apache2/uid.conf')
lines = f_obj.readlines()
f_obj.close()
for line in lines:
split = line.split(' ')
if split[0] == 'Group':
apache2_group = split[1].strip()
break
# slackware
elif os.path.isfile('/etc/httpd/httpd.conf'):
f_obj = open('/etc/httpd/httpd.conf')
lines = f_obj.readlines()
f_obj.close()
for line in lines:
split = line.split(' ')
if split[0] == 'Group':
apache2_group = split[1].strip()
break
return apache2_group, int(grp.getgrnam(apache2_group)[2])
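# Illustrative sketch of what this parsing expects (file contents are
# examples, not from this repo): on Debian, /etc/apache2/envvars typically
# contains a line like "export APACHE_RUN_GROUP=www-data", so apache2_group
# becomes 'www-data' and grp.getgrnam resolves it to the numeric gid.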
def move_exes(kmotion_dir):
"""
Move bin files kmotion and kmotion_ptz to /usr/local/bin, /usr/bin or /bin
overwrite old version if they already exist
args : kmotion_dir ... the 'root' directory of kmotion
excepts : exit_ ... if neither /usr/local/bin, /usr/bin or /bin can be found
return : string ... path to the executables
"""
paths = ['/usr/local/bin', '/usr/bin', '/bin']
for exe_path in paths:
if os.path.isdir(exe_path):
            # os.rename replaced by os.system (mv) because many home
            # directories are on a different partition and os.rename only
            # works within one partition .... added by Gudy
            os_command_string = 'mv ' + kmotion_dir + '/kmotion ' + exe_path + '/kmotion'
            os.system(os_command_string)
            os_command_string = 'mv ' + kmotion_dir + '/kmotion_ptz ' + exe_path + '/kmotion_ptz'
            os.system(os_command_string)
return exe_path
raise exit_('Unable to locate : %s' % list_format(paths))
def modify_crontab(sel_user, exe_path):
"""
Read users crontab, remove any previous references, add @reboot to start
kmotion as a background process
args : sel_user ... the users that is to run kmotion
exe_path ... the path to kmotion
excepts :
return : none
"""
# delete all existing @reboot kmotion lines in case installer called a
# second time with a different user selected
for user in [i[0] for i in pwd.getpwall() if i[2] > 500 or i[2] == 0]:
f_obj = os.popen('crontab -u %s -l' % user)
ctab = f_obj.readlines()
f_obj.close()
if ctab == []: ctab = [''] # 'no crontab for ....'
tmp = []
for line in ctab:
if not (line[:7] == '@reboot' and line[-17:] == '/kmotion start &\n'):
tmp.append(line)
f_obj = os.popen('crontab - -u %s' % user, 'w')
f_obj.writelines(tmp)
f_obj.close()
f_obj = os.popen('crontab -u %s -l' % sel_user)
ctab = f_obj.readlines()
f_obj.close()
f_obj = os.popen('crontab - -u %s' % sel_user, 'w')
f_obj.writelines(ctab)
f_obj.write('@reboot %s/kmotion start &\n' % exe_path)
f_obj.close()
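# After this runs, the selected user's crontab ends with a line of the form
# (the path depends on where move_exes placed the executable):
#
#   @reboot /usr/local/bin/kmotion start &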
def rm_root_pycs(kmotion_dir):
"""
Remove any root generated pyc's to allow normal users to generate fresh
pyc's after any upgrades, helps performance.
args : kmotion_dir ... the 'root' directory of kmotion
excepts :
return : none
"""
for pyc in [i for i in os.listdir('%s/core' % kmotion_dir) if i[-4:] == '.pyc']:
os.remove('%s/core/%s' % (kmotion_dir, pyc))
def list_format(list_):
"""
    Changes a list of strings into a comma-separated string containing all the
list items with an 'or' instead of the last comma
args : list_ ... a list of strings
excepts :
return : string ... a single formatted string
"""
tmp = '\'%s\'' % list_[0] # for correct ',' logic
for i in list_[1:-1]: tmp += ', \'%s\'' % i
return '%s or \'%s\'' % (tmp, list_[-1])
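# A quick doctest-style sketch of the formatting:
#
#   >>> list_format(['/usr/local/bin', '/usr/bin', '/bin'])
#   "'/usr/local/bin', '/usr/bin' or '/bin'"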
def checking(text_):
"""
Print the text and calculate the number of '.'s
args : text ... the string to print
excepts :
return : none
"""
    print text_, '.' * (68 - len(text_)),
def ok():
"""
print [ OK ]
args :
excepts :
return : none
"""
print '[ OK ]'
def fail():
"""
print [FAIL]
args :
excepts :
return : none
"""
print '[FAIL]'
try:
install()
except exit_, text:
print '\n%s\n' % text
| gpl-3.0 |
podhmo/boto | boto/mashups/interactive.py | 148 | 2783 | # Copyright (C) 2003-2007 Robey Pointer <robey@lag.net>
#
# This file is part of paramiko.
#
# Paramiko is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from __future__ import print_function
import socket
import sys
# windows does not have termios...
try:
import termios
import tty
has_termios = True
except ImportError:
has_termios = False
def interactive_shell(chan):
if has_termios:
posix_shell(chan)
else:
windows_shell(chan)
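# A hedged usage sketch (host and credentials are placeholders): this helper
# is typically handed a paramiko channel that already has a PTY, e.g.
#
#   client = paramiko.SSHClient()
#   client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
#   client.connect('example.com', username='user', password='secret')
#   interactive_shell(client.invoke_shell())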
def posix_shell(chan):
import select
oldtty = termios.tcgetattr(sys.stdin)
try:
tty.setraw(sys.stdin.fileno())
tty.setcbreak(sys.stdin.fileno())
chan.settimeout(0.0)
while True:
r, w, e = select.select([chan, sys.stdin], [], [])
if chan in r:
try:
x = chan.recv(1024)
if len(x) == 0:
print('\r\n*** EOF\r\n', end=' ')
break
sys.stdout.write(x)
sys.stdout.flush()
except socket.timeout:
pass
if sys.stdin in r:
x = sys.stdin.read(1)
if len(x) == 0:
break
chan.send(x)
finally:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, oldtty)
# thanks to Mike Looijmans for this code
def windows_shell(chan):
import threading
sys.stdout.write("Line-buffered terminal emulation. Press F6 or ^Z to send EOF.\r\n\r\n")
def writeall(sock):
while True:
data = sock.recv(256)
if not data:
sys.stdout.write('\r\n*** EOF ***\r\n\r\n')
sys.stdout.flush()
break
sys.stdout.write(data)
sys.stdout.flush()
writer = threading.Thread(target=writeall, args=(chan,))
writer.start()
try:
while True:
d = sys.stdin.read(1)
if not d:
break
chan.send(d)
except EOFError:
# user hit ^Z or F6
pass
| mit |
gokuale/weblate | weblate/trans/scripts.py | 4 | 2637 |
# -*- coding: utf-8 -*-
#
# Copyright © 2012 - 2015 Michal Čihař <michal@cihar.com>
#
# This file is part of Weblate <http://weblate.org/>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
"""Hook scripts handling"""
import os.path
import subprocess
from weblate.trans.util import get_clean_env
def get_script_name(name):
'''
Returns script name from string possibly containing full path and
parameters.
'''
return os.path.basename(name).split()[0]
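# Doctest-style sketch (the path is illustrative):
#
#   >>> get_script_name('/usr/local/bin/post_push.sh --verbose')
#   'post_push.sh'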
def run_post_push_script(component):
"""Run post push hook"""
run_hook(component, component.post_push_script)
def run_post_update_script(component):
"""Run post update hook"""
run_hook(component, component.post_update_script)
def run_pre_commit_script(component, filename):
"""
Pre commit hook
"""
run_hook(component, component.pre_commit_script, filename)
def run_post_commit_script(component, filename):
"""
Post commit hook
"""
run_hook(component, component.post_commit_script, filename)
def run_hook(component, script, *args):
"""
Generic script hook executor.
"""
if script:
command = [script]
if args:
command.extend(args)
environment = get_clean_env()
if component.is_repo_link:
target = component.linked_subproject
else:
target = component
environment['WL_VCS'] = target.vcs
environment['WL_REPO'] = target.repo
environment['WL_PATH'] = target.get_path()
environment['WL_FILEMASK'] = component.filemask
environment['WL_FILE_FORMAT'] = component.file_format
try:
subprocess.check_call(
command,
env=environment,
cwd=component.get_path(),
)
return True
except (OSError, subprocess.CalledProcessError) as err:
component.log_error(
'failed to run hook script %s: %s',
script,
err
)
return False
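# A hypothetical hook script consuming the environment set above (a sketch,
# not shipped with Weblate):
#
#   #!/usr/bin/env python
#   import os
#   print('hook for %s checkout at %s' %
#         (os.environ['WL_VCS'], os.environ['WL_PATH']))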
| gpl-3.0 |
molobrakos/home-assistant | homeassistant/components/yessssms/notify.py | 7 | 2657 | """Support for the YesssSMS platform."""
import logging
import voluptuous as vol
from homeassistant.const import CONF_PASSWORD, CONF_RECIPIENT, CONF_USERNAME
import homeassistant.helpers.config_validation as cv
from homeassistant.components.notify import (PLATFORM_SCHEMA,
BaseNotificationService)
_LOGGER = logging.getLogger(__name__)
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_USERNAME): cv.string,
vol.Required(CONF_PASSWORD): cv.string,
vol.Required(CONF_RECIPIENT): cv.string,
})
def get_service(hass, config, discovery_info=None):
"""Get the YesssSMS notification service."""
return YesssSMSNotificationService(
config[CONF_USERNAME], config[CONF_PASSWORD], config[CONF_RECIPIENT])
class YesssSMSNotificationService(BaseNotificationService):
"""Implement a notification service for the YesssSMS service."""
def __init__(self, username, password, recipient):
"""Initialize the service."""
from YesssSMS import YesssSMS
self.yesss = YesssSMS(username, password)
self._recipient = recipient
_LOGGER.debug(
"initialized; library version: %s", self.yesss.version())
def send_message(self, message="", **kwargs):
"""Send a SMS message via Yesss.at's website."""
if self.yesss.account_is_suspended():
# only retry to login after HASS was restarted with (hopefully)
# new login data.
_LOGGER.error(
"Account is suspended, cannot send SMS. "
"Check your login data and restart Home Assistant")
return
try:
self.yesss.send(self._recipient, message)
except self.yesss.NoRecipientError as ex:
_LOGGER.error(
"You need to provide a recipient for SMS notification: %s",
ex)
except self.yesss.EmptyMessageError as ex:
_LOGGER.error(
"Cannot send empty SMS message: %s", ex)
except self.yesss.SMSSendingError as ex:
_LOGGER.error(str(ex), exc_info=ex)
except ConnectionError as ex:
_LOGGER.error(
"YesssSMS: unable to connect to yesss.at server.",
exc_info=ex)
except self.yesss.AccountSuspendedError as ex:
_LOGGER.error(
"Wrong login credentials!! Verify correct credentials and "
"restart Home Assistant: %s", ex)
except self.yesss.LoginError as ex:
_LOGGER.error("Wrong login credentials: %s", ex)
else:
_LOGGER.info("SMS sent")
| apache-2.0 |
jmontoyam/mne-python | mne/forward/tests/test_make_forward.py | 3 | 18288 | from __future__ import print_function
from itertools import product
import os
import os.path as op
import warnings
from nose.tools import assert_raises, assert_true
import numpy as np
from numpy.testing import (assert_equal, assert_allclose)
from mne.datasets import testing
from mne.io import read_raw_fif, read_raw_kit, read_raw_bti, read_info
from mne.io.constants import FIFF
from mne import (read_forward_solution, make_forward_solution,
convert_forward_solution, setup_volume_source_space,
read_source_spaces, make_sphere_model,
pick_types_forward, pick_info, pick_types, Transform,
read_evokeds, read_cov, read_dipole)
from mne.utils import (requires_mne, requires_nibabel, _TempDir,
run_tests_if_main, slow_test, run_subprocess)
from mne.forward._make_forward import _create_meg_coils, make_forward_dipole
from mne.forward._compute_forward import _magnetic_dipole_field_vec
from mne.forward import Forward, _do_forward_solution
from mne.dipole import Dipole, fit_dipole
from mne.simulation import simulate_evoked
from mne.source_estimate import VolSourceEstimate
from mne.source_space import (get_volume_labels_from_aseg,
_compare_source_spaces, setup_source_space)
data_path = testing.data_path(download=False)
fname_meeg = op.join(data_path, 'MEG', 'sample',
'sample_audvis_trunc-meg-eeg-oct-4-fwd.fif')
fname_raw = op.join(op.dirname(__file__), '..', '..', 'io', 'tests', 'data',
'test_raw.fif')
fname_evo = op.join(data_path, 'MEG', 'sample', 'sample_audvis_trunc-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis_trunc-cov.fif')
fname_dip = op.join(data_path, 'MEG', 'sample', 'sample_audvis_trunc_set1.dip')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_trunc-trans.fif')
subjects_dir = os.path.join(data_path, 'subjects')
fname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-4-src.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem',
'sample-1280-1280-1280-bem-sol.fif')
fname_aseg = op.join(subjects_dir, 'sample', 'mri', 'aseg.mgz')
fname_bem_meg = op.join(subjects_dir, 'sample', 'bem',
'sample-1280-bem-sol.fif')
def _compare_forwards(fwd, fwd_py, n_sensors, n_src,
meg_rtol=1e-4, meg_atol=1e-9,
eeg_rtol=1e-3, eeg_atol=1e-3):
"""Helper to test forwards"""
# check source spaces
assert_equal(len(fwd['src']), len(fwd_py['src']))
_compare_source_spaces(fwd['src'], fwd_py['src'], mode='approx')
for surf_ori, force_fixed in product([False, True], [False, True]):
# use copy here to leave our originals unmodified
fwd = convert_forward_solution(fwd, surf_ori, force_fixed,
copy=True)
fwd_py = convert_forward_solution(fwd_py, surf_ori, force_fixed,
copy=True)
check_src = n_src // 3 if force_fixed else n_src
for key in ('nchan', 'source_rr', 'source_ori',
'surf_ori', 'coord_frame', 'nsource'):
assert_allclose(fwd_py[key], fwd[key], rtol=1e-4, atol=1e-7,
err_msg=key)
# In surf_ori=True only Z matters for source_nn
if surf_ori and not force_fixed:
ori_sl = slice(2, None, 3)
else:
ori_sl = slice(None)
assert_allclose(fwd_py['source_nn'][ori_sl], fwd['source_nn'][ori_sl],
rtol=1e-4, atol=1e-6)
assert_allclose(fwd_py['mri_head_t']['trans'],
fwd['mri_head_t']['trans'], rtol=1e-5, atol=1e-8)
assert_equal(fwd_py['sol']['data'].shape, (n_sensors, check_src))
assert_equal(len(fwd['sol']['row_names']), n_sensors)
assert_equal(len(fwd_py['sol']['row_names']), n_sensors)
# check MEG
assert_allclose(fwd['sol']['data'][:306, ori_sl],
fwd_py['sol']['data'][:306, ori_sl],
rtol=meg_rtol, atol=meg_atol,
err_msg='MEG mismatch')
# check EEG
if fwd['sol']['data'].shape[0] > 306:
assert_allclose(fwd['sol']['data'][306:, ori_sl],
fwd_py['sol']['data'][306:, ori_sl],
rtol=eeg_rtol, atol=eeg_atol,
err_msg='EEG mismatch')
def test_magnetic_dipole():
"""Test basic magnetic dipole forward calculation
"""
trans = Transform('mri', 'head', np.eye(4))
info = read_info(fname_raw)
picks = pick_types(info, meg=True, eeg=False, exclude=[])
info = pick_info(info, picks[:12])
coils = _create_meg_coils(info['chs'], 'normal', trans)
# magnetic dipole at device origin
r0 = np.array([0., 13., -6.])
for ch, coil in zip(info['chs'], coils):
rr = (ch['loc'][:3] + r0) / 2.
far_fwd = _magnetic_dipole_field_vec(r0[np.newaxis, :], [coil])
near_fwd = _magnetic_dipole_field_vec(rr[np.newaxis, :], [coil])
ratio = 8. if ch['ch_name'][-1] == '1' else 16. # grad vs mag
assert_allclose(np.median(near_fwd / far_fwd), ratio, atol=1e-1)
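# The 8x / 16x ratios above encode the falloff of a magnetic dipole field:
# moving the dipole to the midpoint halves the source-sensor distance, and
# magnetometer signals scale as 1/r**3 (2**3 = 8) while gradiometer signals
# scale as 1/r**4 (2**4 = 16); Vectorview channel names ending in '1' are
# the magnetometers.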
@testing.requires_testing_data
@requires_mne
def test_make_forward_solution_kit():
"""Test making fwd using KIT, BTI, and CTF (compensated) files
"""
kit_dir = op.join(op.dirname(__file__), '..', '..', 'io', 'kit',
'tests', 'data')
sqd_path = op.join(kit_dir, 'test.sqd')
mrk_path = op.join(kit_dir, 'test_mrk.sqd')
elp_path = op.join(kit_dir, 'test_elp.txt')
hsp_path = op.join(kit_dir, 'test_hsp.txt')
trans_path = op.join(kit_dir, 'trans-sample.fif')
fname_kit_raw = op.join(kit_dir, 'test_bin_raw.fif')
bti_dir = op.join(op.dirname(__file__), '..', '..', 'io', 'bti',
'tests', 'data')
bti_pdf = op.join(bti_dir, 'test_pdf_linux')
bti_config = op.join(bti_dir, 'test_config_linux')
bti_hs = op.join(bti_dir, 'test_hs_linux')
fname_bti_raw = op.join(bti_dir, 'exported4D_linux_raw.fif')
fname_ctf_raw = op.join(op.dirname(__file__), '..', '..', 'io', 'tests',
'data', 'test_ctf_comp_raw.fif')
# first set up a small testing source space
temp_dir = _TempDir()
fname_src_small = op.join(temp_dir, 'sample-oct-2-src.fif')
src = setup_source_space('sample', fname_src_small, 'oct2',
subjects_dir=subjects_dir, add_dist=False)
n_src = 108 # this is the resulting # of verts in fwd
# first use mne-C: convert file, make forward solution
fwd = _do_forward_solution('sample', fname_kit_raw, src=fname_src_small,
bem=fname_bem_meg, mri=trans_path,
eeg=False, meg=True, subjects_dir=subjects_dir)
assert_true(isinstance(fwd, Forward))
# now let's use python with the same raw file
fwd_py = make_forward_solution(fname_kit_raw, trans_path, src,
fname_bem_meg, eeg=False, meg=True)
_compare_forwards(fwd, fwd_py, 157, n_src)
assert_true(isinstance(fwd_py, Forward))
# now let's use mne-python all the way
raw_py = read_raw_kit(sqd_path, mrk_path, elp_path, hsp_path)
# without ignore_ref=True, this should throw an error:
assert_raises(NotImplementedError, make_forward_solution, raw_py.info,
src=src, eeg=False, meg=True,
bem=fname_bem_meg, trans=trans_path)
# check that asking for eeg channels (even if they don't exist) is handled
meg_only_info = pick_info(raw_py.info, pick_types(raw_py.info, meg=True,
eeg=False))
fwd_py = make_forward_solution(meg_only_info, src=src, meg=True, eeg=True,
bem=fname_bem_meg, trans=trans_path,
ignore_ref=True)
_compare_forwards(fwd, fwd_py, 157, n_src,
meg_rtol=1e-3, meg_atol=1e-7)
# BTI python end-to-end versus C
fwd = _do_forward_solution('sample', fname_bti_raw, src=fname_src_small,
bem=fname_bem_meg, mri=trans_path,
eeg=False, meg=True, subjects_dir=subjects_dir)
with warnings.catch_warnings(record=True): # weight tables
raw_py = read_raw_bti(bti_pdf, bti_config, bti_hs, preload=False)
fwd_py = make_forward_solution(raw_py.info, src=src, eeg=False, meg=True,
bem=fname_bem_meg, trans=trans_path)
_compare_forwards(fwd, fwd_py, 248, n_src)
# now let's test CTF w/compensation
fwd_py = make_forward_solution(fname_ctf_raw, fname_trans, src,
fname_bem_meg, eeg=False, meg=True)
fwd = _do_forward_solution('sample', fname_ctf_raw, mri=fname_trans,
src=fname_src_small, bem=fname_bem_meg,
eeg=False, meg=True, subjects_dir=subjects_dir)
_compare_forwards(fwd, fwd_py, 274, n_src)
# CTF with compensation changed in python
ctf_raw = read_raw_fif(fname_ctf_raw, add_eeg_ref=False)
ctf_raw.apply_gradient_compensation(2)
fwd_py = make_forward_solution(ctf_raw.info, fname_trans, src,
fname_bem_meg, eeg=False, meg=True)
with warnings.catch_warnings(record=True):
fwd = _do_forward_solution('sample', ctf_raw, mri=fname_trans,
src=fname_src_small, bem=fname_bem_meg,
eeg=False, meg=True,
subjects_dir=subjects_dir)
_compare_forwards(fwd, fwd_py, 274, n_src)
@slow_test
@testing.requires_testing_data
def test_make_forward_solution():
"""Test making M-EEG forward solution from python
"""
fwd_py = make_forward_solution(fname_raw, fname_trans, fname_src,
fname_bem, mindist=5.0, eeg=True, meg=True)
assert_true(isinstance(fwd_py, Forward))
fwd = read_forward_solution(fname_meeg)
assert_true(isinstance(fwd, Forward))
_compare_forwards(fwd, fwd_py, 366, 1494, meg_rtol=1e-3)
@testing.requires_testing_data
@requires_mne
def test_make_forward_solution_sphere():
"""Test making a forward solution with a sphere model"""
temp_dir = _TempDir()
fname_src_small = op.join(temp_dir, 'sample-oct-2-src.fif')
src = setup_source_space('sample', fname_src_small, 'oct2',
subjects_dir=subjects_dir, add_dist=False)
out_name = op.join(temp_dir, 'tmp-fwd.fif')
run_subprocess(['mne_forward_solution', '--meg', '--eeg',
'--meas', fname_raw, '--src', fname_src_small,
'--mri', fname_trans, '--fwd', out_name])
fwd = read_forward_solution(out_name)
sphere = make_sphere_model(verbose=True)
fwd_py = make_forward_solution(fname_raw, fname_trans, src, sphere,
meg=True, eeg=True, verbose=True)
_compare_forwards(fwd, fwd_py, 366, 108,
meg_rtol=5e-1, meg_atol=1e-6,
eeg_rtol=5e-1, eeg_atol=5e-1)
# Since the above is pretty lax, let's check a different way
for meg, eeg in zip([True, False], [False, True]):
fwd_ = pick_types_forward(fwd, meg=meg, eeg=eeg)
fwd_py_ = pick_types_forward(fwd, meg=meg, eeg=eeg)
assert_allclose(np.corrcoef(fwd_['sol']['data'].ravel(),
fwd_py_['sol']['data'].ravel())[0, 1],
1.0, rtol=1e-3)
@slow_test
@testing.requires_testing_data
@requires_nibabel(False)
def test_forward_mixed_source_space():
"""Test making the forward solution for a mixed source space
"""
temp_dir = _TempDir()
# get the surface source space
surf = read_source_spaces(fname_src)
# setup two volume source spaces
label_names = get_volume_labels_from_aseg(fname_aseg)
vol_labels = [label_names[int(np.random.rand() * len(label_names))]
for _ in range(2)]
vol1 = setup_volume_source_space('sample', fname=None, pos=20.,
mri=fname_aseg,
volume_label=vol_labels[0],
add_interpolator=False)
vol2 = setup_volume_source_space('sample', fname=None, pos=20.,
mri=fname_aseg,
volume_label=vol_labels[1],
add_interpolator=False)
# merge surfaces and volume
src = surf + vol1 + vol2
# calculate forward solution
fwd = make_forward_solution(fname_raw, fname_trans, src, fname_bem, None)
assert_true(repr(fwd))
# extract source spaces
src_from_fwd = fwd['src']
# get the coordinate frame of each source space
coord_frames = np.array([s['coord_frame'] for s in src_from_fwd])
# assert that all source spaces are in head coordinates
assert_true((coord_frames == FIFF.FIFFV_COORD_HEAD).all())
# run tests for SourceSpaces.export_volume
fname_img = op.join(temp_dir, 'temp-image.mgz')
    # head coordinates and mri_resolution, but no trans file
assert_raises(ValueError, src_from_fwd.export_volume, fname_img,
mri_resolution=True, trans=None)
# head coordinates and mri_resolution, but wrong trans file
vox_mri_t = vol1[0]['vox_mri_t']
assert_raises(ValueError, src_from_fwd.export_volume, fname_img,
mri_resolution=True, trans=vox_mri_t)
@slow_test
@testing.requires_testing_data
def test_make_forward_dipole():
"""Test forward-projecting dipoles"""
rng = np.random.RandomState(0)
evoked = read_evokeds(fname_evo)[0]
cov = read_cov(fname_cov)
dip_c = read_dipole(fname_dip)
# Only use magnetometers for speed!
picks = pick_types(evoked.info, meg='mag', eeg=False)
evoked.pick_channels([evoked.ch_names[p] for p in picks])
info = evoked.info
# Make new Dipole object with n_test_dipoles picked from the dipoles
# in the test dataset.
n_test_dipoles = 3 # minimum 3 needed to get uneven sampling in time
dipsel = np.sort(rng.permutation(np.arange(len(dip_c)))[:n_test_dipoles])
dip_test = Dipole(times=dip_c.times[dipsel],
pos=dip_c.pos[dipsel],
amplitude=dip_c.amplitude[dipsel],
ori=dip_c.ori[dipsel],
gof=dip_c.gof[dipsel])
sphere = make_sphere_model(head_radius=0.1)
# Warning emitted due to uneven sampling in time
with warnings.catch_warnings(record=True) as w:
fwd, stc = make_forward_dipole(dip_test, sphere, info,
trans=fname_trans)
assert_true(issubclass(w[-1].category, RuntimeWarning))
# stc is list of VolSourceEstimate's
assert_true(isinstance(stc, list))
for nd in range(n_test_dipoles):
assert_true(isinstance(stc[nd], VolSourceEstimate))
# Now simulate evoked responses for each of the test dipoles,
# and fit dipoles to them (sphere model, MEG and EEG)
times, pos, amplitude, ori, gof = [], [], [], [], []
snr = 20. # add a tiny amount of noise to the simulated evokeds
for s in stc:
evo_test = simulate_evoked(fwd, s, info, cov,
snr=snr, random_state=rng)
# evo_test.add_proj(make_eeg_average_ref_proj(evo_test.info))
dfit, resid = fit_dipole(evo_test, cov, sphere, None)
times += dfit.times.tolist()
pos += dfit.pos.tolist()
amplitude += dfit.amplitude.tolist()
ori += dfit.ori.tolist()
gof += dfit.gof.tolist()
# Create a new Dipole object with the dipole fits
dip_fit = Dipole(times, pos, amplitude, ori, gof)
# check that true (test) dipoles and fits are "close"
# cf. mne/tests/test_dipole.py
diff = dip_test.pos - dip_fit.pos
corr = np.corrcoef(dip_test.pos.ravel(), dip_fit.pos.ravel())[0, 1]
dist = np.sqrt(np.mean(np.sum(diff * diff, axis=1)))
gc_dist = 180 / np.pi * \
np.mean(np.arccos(np.sum(dip_test.ori * dip_fit.ori, axis=1)))
amp_err = np.sqrt(np.mean((dip_test.amplitude - dip_fit.amplitude) ** 2))
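    # Summary metrics: dist is the RMS position error (meters), gc_dist the
    # mean great-circle angle between true and fitted orientations (degrees),
    # and amp_err the RMS amplitude error (Am)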
# Make sure each coordinate is close to reference
# NB tolerance should be set relative to snr of simulated evoked!
assert_allclose(dip_fit.pos, dip_test.pos, rtol=0, atol=1e-2,
err_msg='position mismatch')
assert_true(dist < 1e-2, 'dist: %s' % dist) # within 1 cm
assert_true(corr > 1 - 1e-2, 'corr: %s' % corr)
assert_true(gc_dist < 20, 'gc_dist: %s' % gc_dist) # less than 20 degrees
assert_true(amp_err < 10e-9, 'amp_err: %s' % amp_err) # within 10 nAm
# Make sure rejection works with BEM: one dipole at z=1m
# NB _make_forward.py:_prepare_for_forward will raise a RuntimeError
# if no points are left after min_dist exclusions, hence 2 dips here!
dip_outside = Dipole(times=[0., 0.001],
pos=[[0., 0., 1.0], [0., 0., 0.040]],
amplitude=[100e-9, 100e-9],
ori=[[1., 0., 0.], [1., 0., 0.]], gof=1)
assert_raises(ValueError, make_forward_dipole, dip_outside, fname_bem,
info, fname_trans)
# if we get this far, can safely assume the code works with BEMs too
# -> use sphere again below for speed
# Now make an evenly sampled set of dipoles, some simultaneous,
# should return a VolSourceEstimate regardless
times = [0., 0., 0., 0.001, 0.001, 0.002]
pos = np.random.rand(6, 3) * 0.020 + \
np.array([0., 0., 0.040])[np.newaxis, :]
amplitude = np.random.rand(6) * 100e-9
ori = np.eye(6, 3) + np.eye(6, 3, -3)
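    # rows of ori cycle through the x, y, z unit vectors twice, so each of the
    # six dipoles gets a valid unit orientation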
gof = np.arange(len(times)) / len(times) # arbitrary
dip_even_samp = Dipole(times, pos, amplitude, ori, gof)
fwd, stc = make_forward_dipole(dip_even_samp, sphere, info,
trans=fname_trans)
    assert_true(isinstance(stc, VolSourceEstimate))
assert_allclose(stc.times, np.arange(0., 0.003, 0.001))
run_tests_if_main()
| bsd-3-clause |
kirca/odoo | addons/mrp/mrp.py | 10 | 60444 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import time
import openerp.addons.decimal_precision as dp
from openerp.osv import fields, osv, orm
from openerp.tools import DEFAULT_SERVER_DATETIME_FORMAT
from openerp.tools import float_compare
from openerp.tools.translate import _
from openerp import tools, SUPERUSER_ID
from openerp.addons.product import _common
class mrp_property_group(osv.osv):
"""
Group of mrp properties.
"""
_name = 'mrp.property.group'
_description = 'Property Group'
_columns = {
'name': fields.char('Property Group', size=64, required=True),
'description': fields.text('Description'),
}
class mrp_property(osv.osv):
"""
Properties of mrp.
"""
_name = 'mrp.property'
_description = 'Property'
_columns = {
'name': fields.char('Name', size=64, required=True),
'composition': fields.selection([('min','min'),('max','max'),('plus','plus')], 'Properties composition', required=True, help="Not used in computations, for information purpose only."),
'group_id': fields.many2one('mrp.property.group', 'Property Group', required=True),
'description': fields.text('Description'),
}
_defaults = {
'composition': lambda *a: 'min',
}
#----------------------------------------------------------
# Work Centers
#----------------------------------------------------------
# capacity_hour : capacity per hour. default: 1.0.
# Eg: If 5 concurrent operations at one time: capacity = 5 (because 5 employees)
# unit_per_cycle : how many units are produced for one cycle
class mrp_workcenter(osv.osv):
_name = 'mrp.workcenter'
_description = 'Work Center'
_inherits = {'resource.resource':"resource_id"}
_columns = {
'note': fields.text('Description', help="Description of the Work Center. Explain here what's a cycle according to this Work Center."),
'capacity_per_cycle': fields.float('Capacity per Cycle', help="Number of operations this Work Center can do in parallel. If this Work Center represents a team of 5 workers, the capacity per cycle is 5."),
'time_cycle': fields.float('Time for 1 cycle (hour)', help="Time in hours for doing one cycle."),
'time_start': fields.float('Time before prod.', help="Time in hours for the setup."),
'time_stop': fields.float('Time after prod.', help="Time in hours for the cleaning."),
'costs_hour': fields.float('Cost per hour', help="Specify Cost of Work Center per hour."),
'costs_hour_account_id': fields.many2one('account.analytic.account', 'Hour Account', domain=[('type','!=','view')],
help="Fill this only if you want automatic analytic accounting entries on production orders."),
'costs_cycle': fields.float('Cost per cycle', help="Specify Cost of Work Center per cycle."),
'costs_cycle_account_id': fields.many2one('account.analytic.account', 'Cycle Account', domain=[('type','!=','view')],
help="Fill this only if you want automatic analytic accounting entries on production orders."),
'costs_journal_id': fields.many2one('account.analytic.journal', 'Analytic Journal'),
'costs_general_account_id': fields.many2one('account.account', 'General Account', domain=[('type','!=','view')]),
'resource_id': fields.many2one('resource.resource','Resource', ondelete='cascade', required=True),
'product_id': fields.many2one('product.product','Work Center Product', help="Fill this product to easily track your production costs in the analytic accounting."),
}
_defaults = {
'capacity_per_cycle': 1.0,
'resource_type': 'material',
}
def on_change_product_cost(self, cr, uid, ids, product_id, context=None):
value = {}
if product_id:
cost = self.pool.get('product.product').browse(cr, uid, product_id, context=context)
value = {'costs_hour': cost.standard_price}
return {'value': value}
class mrp_routing(osv.osv):
"""
For specifying the routings of Work Centers.
"""
_name = 'mrp.routing'
_description = 'Routing'
_columns = {
'name': fields.char('Name', size=64, required=True),
'active': fields.boolean('Active', help="If the active field is set to False, it will allow you to hide the routing without removing it."),
'code': fields.char('Code', size=8),
'note': fields.text('Description'),
'workcenter_lines': fields.one2many('mrp.routing.workcenter', 'routing_id', 'Work Centers'),
'location_id': fields.many2one('stock.location', 'Production Location',
help="Keep empty if you produce at the location where the finished products are needed." \
"Set a location if you produce at a fixed location. This can be a partner location " \
"if you subcontract the manufacturing operations."
),
'company_id': fields.many2one('res.company', 'Company'),
}
_defaults = {
'active': lambda *a: 1,
'company_id': lambda self, cr, uid, context: self.pool.get('res.company')._company_default_get(cr, uid, 'mrp.routing', context=context)
}
class mrp_routing_workcenter(osv.osv):
"""
Defines working cycles and hours of a Work Center using routings.
"""
_name = 'mrp.routing.workcenter'
_description = 'Work Center Usage'
_order = 'sequence'
_columns = {
'workcenter_id': fields.many2one('mrp.workcenter', 'Work Center', required=True),
'name': fields.char('Name', size=64, required=True),
'sequence': fields.integer('Sequence', help="Gives the sequence order when displaying a list of routing Work Centers."),
'cycle_nbr': fields.float('Number of Cycles', required=True,
help="Number of iterations this work center has to do in the specified operation of the routing."),
'hour_nbr': fields.float('Number of Hours', required=True, help="Time in hours for this Work Center to achieve the operation of the specified routing."),
'routing_id': fields.many2one('mrp.routing', 'Parent Routing', select=True, ondelete='cascade',
help="Routing indicates all the Work Centers used, for how long and/or cycles." \
"If Routing is indicated then,the third tab of a production order (Work Centers) will be automatically pre-completed."),
'note': fields.text('Description'),
'company_id': fields.related('routing_id', 'company_id', type='many2one', relation='res.company', string='Company', store=True, readonly=True),
}
_defaults = {
'cycle_nbr': lambda *a: 1.0,
'hour_nbr': lambda *a: 0.0,
}
class mrp_bom(osv.osv):
"""
Defines bills of material for a product.
"""
_name = 'mrp.bom'
_description = 'Bill of Material'
_inherit = ['mail.thread']
def _child_compute(self, cr, uid, ids, name, arg, context=None):
""" Gets child bom.
@param self: The object pointer
@param cr: The current row, from the database cursor,
@param uid: The current user ID for security checks
@param ids: List of selected IDs
@param name: Name of the field
@param arg: User defined argument
@param context: A standard dictionary for contextual values
@return: Dictionary of values
"""
result = {}
if context is None:
context = {}
bom_obj = self.pool.get('mrp.bom')
bom_id = context and context.get('active_id', False) or False
cr.execute('select id from mrp_bom')
if all(bom_id != r[0] for r in cr.fetchall()):
ids.sort()
bom_id = ids[0]
bom_parent = bom_obj.browse(cr, uid, bom_id, context=context)
for bom in self.browse(cr, uid, ids, context=context):
if (bom_parent) or (bom.id == bom_id):
result[bom.id] = map(lambda x: x.id, bom.bom_lines)
else:
result[bom.id] = []
if bom.bom_lines:
continue
ok = ((name=='child_complete_ids'))
if (bom.type=='phantom' or ok):
sids = bom_obj.search(cr, uid, [('bom_id','=',False),('product_id','=',bom.product_id.id)])
if sids:
bom2 = bom_obj.browse(cr, uid, sids[0], context=context)
result[bom.id] += map(lambda x: x.id, bom2.bom_lines)
return result
_columns = {
'name': fields.char('Name', size=64),
'code': fields.char('Reference', size=16),
'active': fields.boolean('Active', help="If the active field is set to False, it will allow you to hide the bills of material without removing it."),
'type': fields.selection([('normal', 'Normal BoM'), ('phantom', 'Sets / Phantom')], 'BoM Type', required=True,
help= "If a by-product is used in several products, it can be useful to create its own BoM. "\
"Though if you don't want separated production orders for this by-product, select Set/Phantom as BoM type. "\
"If a Phantom BoM is used for a root product, it will be sold and shipped as a set of components, instead of being produced."),
'date_start': fields.date('Valid From', help="Validity of this BoM or component. Keep empty if it's always valid."),
'date_stop': fields.date('Valid Until', help="Validity of this BoM or component. Keep empty if it's always valid."),
'sequence': fields.integer('Sequence', help="Gives the sequence order when displaying a list of bills of material."),
'position': fields.char('Internal Reference', size=64, help="Reference to a position in an external plan."),
'product_id': fields.many2one('product.product', 'Product', required=True),
'product_uos_qty': fields.float('Product UOS Qty'),
'product_uos': fields.many2one('product.uom', 'Product UOS', help="Product UOS (Unit of Sale) is the unit of measurement for the invoicing and promotion of stock."),
'product_qty': fields.float('Product Quantity', required=True, digits_compute=dp.get_precision('Product Unit of Measure')),
        'product_uom': fields.many2one('product.uom', 'Product Unit of Measure', required=True, help="Unit of Measure is the unit of measurement for inventory control."),
'product_rounding': fields.float('Product Rounding', help="Rounding applied on the product quantity."),
'product_efficiency': fields.float('Manufacturing Efficiency', required=True, help="A factor of 0.9 means a loss of 10% within the production process."),
'bom_lines': fields.one2many('mrp.bom', 'bom_id', 'BoM Lines'),
'bom_id': fields.many2one('mrp.bom', 'Parent BoM', ondelete='cascade', select=True),
'routing_id': fields.many2one('mrp.routing', 'Routing', help="The list of operations (list of work centers) to produce the finished product. The routing is mainly used to compute work center costs during operations and to plan future loads on work centers based on production planning."),
'property_ids': fields.many2many('mrp.property', 'mrp_bom_property_rel', 'bom_id', 'property_id', 'Properties'),
'child_complete_ids': fields.function(_child_compute, relation='mrp.bom', string="BoM Hierarchy", type='many2many'),
'company_id': fields.many2one('res.company', 'Company', required=True),
}
_defaults = {
'active': lambda *a: 1,
'product_efficiency': lambda *a: 1.0,
'product_qty': lambda *a: 1.0,
'product_rounding': lambda *a: 0.0,
'type': lambda *a: 'normal',
'company_id': lambda self, cr, uid, c: self.pool.get('res.company')._company_default_get(cr, uid, 'mrp.bom', context=c),
}
_order = "sequence"
_parent_name = "bom_id"
_sql_constraints = [
('bom_qty_zero', 'CHECK (product_qty>0)', 'All product quantities must be greater than 0.\n' \
'You should install the mrp_byproduct module if you want to manage extra products on BoMs !'),
]
def _check_recursion(self, cr, uid, ids, context=None):
level = 100
while len(ids):
cr.execute('select distinct bom_id from mrp_bom where id IN %s', (tuple(ids),))
ids = filter(None, map(lambda x: x[0], cr.fetchall()))
if not level:
return False
level -= 1
return True
def _check_product(self, cr, uid, ids, context=None):
all_prod = []
boms = self.browse(cr, uid, ids, context=context)
def check_bom(boms):
res = True
for bom in boms:
if bom.product_id.id in all_prod:
res = res and False
all_prod.append(bom.product_id.id)
lines = bom.bom_lines
if lines:
res = res and check_bom([bom_id for bom_id in lines if bom_id not in boms])
return res
return check_bom(boms)
_constraints = [
(_check_recursion, 'Error ! You cannot create recursive BoM.', ['parent_id']),
(_check_product, 'BoM line product should not be same as BoM product.', ['product_id']),
]
def onchange_product_id(self, cr, uid, ids, product_id, name, product_qty=0, context=None):
""" Changes UoM and name if product_id changes.
@param name: Name of the field
@param product_id: Changed product_id
@return: Dictionary of changed values
"""
res = {}
if product_id:
prod = self.pool.get('product.product').browse(cr, uid, product_id, context=context)
res['value'] = {'name': prod.name, 'product_uom': prod.uom_id.id, 'product_uos_qty': 0, 'product_uos': False}
if prod.uos_id.id:
res['value']['product_uos_qty'] = product_qty * prod.uos_coeff
res['value']['product_uos'] = prod.uos_id.id
return res
def onchange_uom(self, cr, uid, ids, product_id, product_uom, context=None):
res = {'value': {}}
if not product_uom or not product_id:
return res
product = self.pool.get('product.product').browse(cr, uid, product_id, context=context)
uom = self.pool.get('product.uom').browse(cr, uid, product_uom, context=context)
if uom.category_id.id != product.uom_id.category_id.id:
res['warning'] = {'title': _('Warning'), 'message': _('The Product Unit of Measure you chose has a different category than in the product form.')}
res['value'].update({'product_uom': product.uom_id.id})
return res
def _bom_find(self, cr, uid, product_id, product_uom, properties=None):
""" Finds BoM for particular product and product uom.
@param product_id: Selected product.
@param product_uom: Unit of measure of a product.
@param properties: List of related properties.
@return: False or BoM id.
"""
if properties is None:
properties = []
domain = [('product_id', '=', product_id), ('bom_id', '=', False),
'|', ('date_start', '=', False), ('date_start', '<=', time.strftime(DEFAULT_SERVER_DATETIME_FORMAT)),
'|', ('date_stop', '=', False), ('date_stop', '>=', time.strftime(DEFAULT_SERVER_DATETIME_FORMAT))]
ids = self.search(cr, uid, domain)
max_prop = 0
result = False
for bom in self.pool.get('mrp.bom').browse(cr, uid, ids):
prop = 0
for prop_id in bom.property_ids:
if prop_id.id in properties:
prop += 1
if (prop > max_prop) or ((max_prop == 0) and not result):
result = bom.id
max_prop = prop
return result
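    # Illustrative usage sketch (hypothetical ids, not from the original
    # module): the BoM with the largest overlap between its property_ids and
    # the requested properties wins; with no properties, the first date-valid
    # root BoM found for the product is returned.
    #
    #     bom_id = bom_obj._bom_find(cr, uid, product.id, product.uom_id.id,
    #                                properties=[prop_a_id, prop_b_id])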
def _bom_explode(self, cr, uid, bom, factor, properties=None, addthis=False, level=0, routing_id=False):
""" Finds Products and Work Centers for related BoM for manufacturing order.
@param bom: BoM of particular product.
@param factor: Factor of product UoM.
@param properties: A List of properties Ids.
@param addthis: If BoM found then True else False.
        @param level: Depth level of the BoM line expansion; incremented by 10 per recursion.
@return: result: List of dictionaries containing product details.
result2: List of dictionaries containing Work Center details.
"""
routing_obj = self.pool.get('mrp.routing')
factor = factor / (bom.product_efficiency or 1.0)
factor = _common.ceiling(factor, bom.product_rounding)
if factor < bom.product_rounding:
factor = bom.product_rounding
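        # Worked example (assuming _common.ceiling rounds up to the nearest
        # multiple of the rounding step): factor=10.0 with efficiency 0.9
        # becomes 11.11..., which product_rounding=1.0 ceils to 12.0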
result = []
result2 = []
phantom = False
if bom.type == 'phantom' and not bom.bom_lines:
newbom = self._bom_find(cr, uid, bom.product_id.id, bom.product_uom.id, properties)
if newbom:
res = self._bom_explode(cr, uid, self.browse(cr, uid, [newbom])[0], factor * bom.product_qty, properties, addthis=True, level=level + 10)
result = result + res[0]
result2 = result2 + res[1]
phantom = True
else:
phantom = False
if not phantom:
if addthis and not bom.bom_lines:
result.append({
'name': bom.product_id.name,
'product_id': bom.product_id.id,
'product_qty': bom.product_qty * factor,
'product_uom': bom.product_uom.id,
'product_uos_qty': bom.product_uos and bom.product_uos_qty * factor or False,
'product_uos': bom.product_uos and bom.product_uos.id or False,
})
routing = (routing_id and routing_obj.browse(cr, uid, routing_id)) or bom.routing_id or False
if routing:
for wc_use in routing.workcenter_lines:
wc = wc_use.workcenter_id
d, m = divmod(factor, wc_use.workcenter_id.capacity_per_cycle)
mult = (d + (m and 1.0 or 0.0))
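                # e.g. factor=7.0 with capacity_per_cycle=5.0: divmod gives
                # (1.0, 2.0), so mult=2.0 -- any remainder costs one extra cycle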
cycle = mult * wc_use.cycle_nbr
result2.append({
'name': tools.ustr(wc_use.name) + ' - ' + tools.ustr(bom.product_id.name),
'workcenter_id': wc.id,
'sequence': level + (wc_use.sequence or 0),
'cycle': cycle,
'hour': float(wc_use.hour_nbr * mult + ((wc.time_start or 0.0) + (wc.time_stop or 0.0) + cycle * (wc.time_cycle or 0.0)) * (wc.time_efficiency or 1.0)),
})
for bom2 in bom.bom_lines:
res = self._bom_explode(cr, uid, bom2, factor, properties, addthis=True, level=level + 10)
result = result + res[0]
result2 = result2 + res[1]
return result, result2
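    # Shape of the values returned by _bom_explode (illustrative):
    #     result  -> [{'name': ..., 'product_id': ..., 'product_qty': ...,
    #                  'product_uom': ..., 'product_uos_qty': ..., 'product_uos': ...}]
    #     result2 -> [{'name': ..., 'workcenter_id': ..., 'sequence': ...,
    #                  'cycle': ..., 'hour': ...}]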
def copy_data(self, cr, uid, id, default=None, context=None):
if default is None:
default = {}
bom_data = self.read(cr, uid, id, [], context=context)
default.update(name=_("%s (copy)") % (bom_data['name']), bom_id=False)
return super(mrp_bom, self).copy_data(cr, uid, id, default, context=context)
class mrp_production(osv.osv):
"""
Production Orders / Manufacturing Orders
"""
_name = 'mrp.production'
_description = 'Manufacturing Order'
_date_name = 'date_planned'
_inherit = ['mail.thread', 'ir.needaction_mixin']
def _production_calc(self, cr, uid, ids, prop, unknow_none, context=None):
""" Calculates total hours and total no. of cycles for a production order.
@param prop: Name of field.
@param unknow_none:
@return: Dictionary of values.
"""
result = {}
for prod in self.browse(cr, uid, ids, context=context):
result[prod.id] = {
'hour_total': 0.0,
'cycle_total': 0.0,
}
for wc in prod.workcenter_lines:
result[prod.id]['hour_total'] += wc.hour
result[prod.id]['cycle_total'] += wc.cycle
return result
def _src_id_default(self, cr, uid, ids, context=None):
try:
location_model, location_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'stock', 'stock_location_stock')
self.pool.get('stock.location').check_access_rule(cr, uid, [location_id], 'read', context=context)
except (orm.except_orm, ValueError):
location_id = False
return location_id
def _dest_id_default(self, cr, uid, ids, context=None):
try:
location_model, location_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'stock', 'stock_location_stock')
self.pool.get('stock.location').check_access_rule(cr, uid, [location_id], 'read', context=context)
except (orm.except_orm, ValueError):
location_id = False
return location_id
def _get_progress(self, cr, uid, ids, name, arg, context=None):
""" Return product quantity percentage """
result = dict.fromkeys(ids, 100)
for mrp_production in self.browse(cr, uid, ids, context=context):
if mrp_production.product_qty:
done = 0.0
for move in mrp_production.move_created_ids2:
if not move.scrapped and move.product_id == mrp_production.product_id:
done += move.product_qty
result[mrp_production.id] = done / mrp_production.product_qty * 100
return result
def _moves_assigned(self, cr, uid, ids, name, arg, context=None):
""" Test whether all the consume lines are assigned """
res = {}
for production in self.browse(cr, uid, ids, context=context):
res[production.id] = True
states = [x.state != 'assigned' for x in production.move_lines if x]
if any(states) or len(states) == 0:
res[production.id] = False
return res
def _mrp_from_move(self, cr, uid, ids, context=None):
""" Return mrp"""
res = []
for move in self.browse(cr, uid, ids, context=context):
res += self.pool.get("mrp.production").search(cr, uid, [('move_lines', 'in', move.id)], context=context)
return res
_columns = {
'name': fields.char('Reference', size=64, required=True, readonly=True, states={'draft': [('readonly', False)]}),
'origin': fields.char('Source Document', size=64, readonly=True, states={'draft': [('readonly', False)]},
help="Reference of the document that generated this production order request."),
'priority': fields.selection([('0', 'Not urgent'), ('1', 'Normal'), ('2', 'Urgent'), ('3', 'Very Urgent')], 'Priority',
select=True, readonly=True, states=dict.fromkeys(['draft', 'confirmed'], [('readonly', False)])),
'product_id': fields.many2one('product.product', 'Product', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'product_qty': fields.float('Product Quantity', digits_compute=dp.get_precision('Product Unit of Measure'), required=True, readonly=True, states={'draft': [('readonly', False)]}),
'product_uom': fields.many2one('product.uom', 'Product Unit of Measure', required=True, readonly=True, states={'draft': [('readonly', False)]}),
'product_uos_qty': fields.float('Product UoS Quantity', readonly=True, states={'draft': [('readonly', False)]}),
'product_uos': fields.many2one('product.uom', 'Product UoS', readonly=True, states={'draft': [('readonly', False)]}),
'progress': fields.function(_get_progress, type='float',
string='Production progress'),
'location_src_id': fields.many2one('stock.location', 'Raw Materials Location', required=True,
readonly=True, states={'draft': [('readonly', False)]},
help="Location where the system will look for components."),
'location_dest_id': fields.many2one('stock.location', 'Finished Products Location', required=True,
readonly=True, states={'draft': [('readonly', False)]},
help="Location where the system will stock the finished products."),
'date_planned': fields.datetime('Scheduled Date', required=True, select=1, readonly=True, states={'draft': [('readonly', False)]}),
'date_start': fields.datetime('Start Date', select=True, readonly=True),
'date_finished': fields.datetime('End Date', select=True, readonly=True),
'bom_id': fields.many2one('mrp.bom', 'Bill of Material', domain=[('bom_id', '=', False)], readonly=True, states={'draft': [('readonly', False)]},
help="Bill of Materials allow you to define the list of required raw materials to make a finished product."),
'routing_id': fields.many2one('mrp.routing', string='Routing', on_delete='set null', readonly=True, states={'draft': [('readonly', False)]},
help="The list of operations (list of work centers) to produce the finished product. The routing is mainly used to compute work center costs during operations and to plan future loads on work centers based on production plannification."),
'move_prod_id': fields.many2one('stock.move', 'Product Move', readonly=True),
'move_lines': fields.one2many('stock.move', 'raw_material_production_id', 'Products to Consume',
domain=[('state', 'not in', ('done', 'cancel'))], readonly=True, states={'draft': [('readonly', False)]}),
'move_lines2': fields.one2many('stock.move', 'raw_material_production_id', 'Consumed Products',
domain=[('state', 'in', ('done', 'cancel'))], readonly=True),
'move_created_ids': fields.one2many('stock.move', 'production_id', 'Products to Produce',
domain=[('state', 'not in', ('done', 'cancel'))], readonly=True),
'move_created_ids2': fields.one2many('stock.move', 'production_id', 'Produced Products',
domain=[('state', 'in', ('done', 'cancel'))], readonly=True),
'product_lines': fields.one2many('mrp.production.product.line', 'production_id', 'Scheduled goods',
readonly=True),
'workcenter_lines': fields.one2many('mrp.production.workcenter.line', 'production_id', 'Work Centers Utilisation',
readonly=True, states={'draft': [('readonly', False)]}),
'state': fields.selection(
[('draft', 'New'), ('cancel', 'Cancelled'), ('confirmed', 'Awaiting Raw Materials'),
('ready', 'Ready to Produce'), ('in_production', 'Production Started'), ('done', 'Done')],
string='Status', readonly=True,
track_visibility='onchange',
help="When the production order is created the status is set to 'Draft'.\n\
If the order is confirmed the status is set to 'Waiting Goods'.\n\
If any exceptions are there, the status is set to 'Picking Exception'.\n\
If the stock is available then the status is set to 'Ready to Produce'.\n\
When the production gets started then the status is set to 'In Production'.\n\
When the production is over, the status is set to 'Done'."),
'hour_total': fields.function(_production_calc, type='float', string='Total Hours', multi='workorder', store=True),
'cycle_total': fields.function(_production_calc, type='float', string='Total Cycles', multi='workorder', store=True),
'user_id': fields.many2one('res.users', 'Responsible'),
'company_id': fields.many2one('res.company', 'Company', required=True),
'ready_production': fields.function(_moves_assigned, type='boolean', store={'stock.move': (_mrp_from_move, ['state'], 10)}),
}
_defaults = {
'priority': lambda *a: '1',
'state': lambda *a: 'draft',
'date_planned': lambda *a: time.strftime('%Y-%m-%d %H:%M:%S'),
'product_qty': lambda *a: 1.0,
'user_id': lambda self, cr, uid, c: uid,
'name': lambda x, y, z, c: x.pool.get('ir.sequence').get(y, z, 'mrp.production') or '/',
'company_id': lambda self, cr, uid, c: self.pool.get('res.company')._company_default_get(cr, uid, 'mrp.production', context=c),
'location_src_id': _src_id_default,
'location_dest_id': _dest_id_default
}
_sql_constraints = [
('name_uniq', 'unique(name, company_id)', 'Reference must be unique per Company!'),
]
_order = 'priority desc, date_planned asc'
def _check_qty(self, cr, uid, ids, context=None):
for order in self.browse(cr, uid, ids, context=context):
if order.product_qty <= 0:
return False
return True
_constraints = [
(_check_qty, 'Order quantity cannot be negative or zero!', ['product_qty']),
]
def unlink(self, cr, uid, ids, context=None):
for production in self.browse(cr, uid, ids, context=context):
if production.state not in ('draft', 'cancel'):
raise osv.except_osv(_('Invalid Action!'), _('Cannot delete a manufacturing order in state \'%s\'.') % production.state)
return super(mrp_production, self).unlink(cr, uid, ids, context=context)
def copy(self, cr, uid, id, default=None, context=None):
if default is None:
default = {}
default.update({
'name': self.pool.get('ir.sequence').get(cr, uid, 'mrp.production'),
'move_lines': [],
'move_lines2': [],
'move_created_ids': [],
'move_created_ids2': [],
'product_lines': [],
'move_prod_id': False,
})
return super(mrp_production, self).copy(cr, uid, id, default, context)
def location_id_change(self, cr, uid, ids, src, dest, context=None):
""" Changes destination location if source location is changed.
@param src: Source location id.
@param dest: Destination location id.
@return: Dictionary of values.
"""
if dest:
return {}
if src:
return {'value': {'location_dest_id': src}}
return {}
def product_id_change(self, cr, uid, ids, product_id, product_qty=0, context=None):
""" Finds UoM of changed product.
@param product_id: Id of changed product.
@return: Dictionary of values.
"""
result = {}
if not product_id:
return {'value': {
'product_uom': False,
'bom_id': False,
'routing_id': False,
'product_uos_qty': 0,
'product_uos': False
}}
bom_obj = self.pool.get('mrp.bom')
product = self.pool.get('product.product').browse(cr, uid, product_id, context=context)
bom_id = bom_obj._bom_find(cr, uid, product.id, product.uom_id and product.uom_id.id, [])
routing_id = False
if bom_id:
bom_point = bom_obj.browse(cr, uid, bom_id, context=context)
routing_id = bom_point.routing_id.id or False
product_uom_id = product.uom_id and product.uom_id.id or False
result['value'] = {'product_uos_qty': 0, 'product_uos': False, 'product_uom': product_uom_id, 'bom_id': bom_id, 'routing_id': routing_id}
if product.uos_id.id:
result['value']['product_uos_qty'] = product_qty * product.uos_coeff
result['value']['product_uos'] = product.uos_id.id
return result
def bom_id_change(self, cr, uid, ids, bom_id, context=None):
""" Finds routing for changed BoM.
@param product: Id of product.
@return: Dictionary of values.
"""
if not bom_id:
return {'value': {
'routing_id': False
}}
bom_point = self.pool.get('mrp.bom').browse(cr, uid, bom_id, context=context)
routing_id = bom_point.routing_id.id or False
result = {
'routing_id': routing_id
}
return {'value': result}
def _action_compute_lines(self, cr, uid, ids, properties=None, context=None):
""" Compute product_lines and workcenter_lines from BoM structure
@return: product_lines
"""
if properties is None:
properties = []
results = []
bom_obj = self.pool.get('mrp.bom')
uom_obj = self.pool.get('product.uom')
prod_line_obj = self.pool.get('mrp.production.product.line')
workcenter_line_obj = self.pool.get('mrp.production.workcenter.line')
for production in self.browse(cr, uid, ids, context=context):
#unlink product_lines
prod_line_obj.unlink(cr, SUPERUSER_ID, [line.id for line in production.product_lines], context=context)
#unlink workcenter_lines
workcenter_line_obj.unlink(cr, SUPERUSER_ID, [line.id for line in production.workcenter_lines], context=context)
# search BoM structure and route
bom_point = production.bom_id
bom_id = production.bom_id.id
if not bom_point:
bom_id = bom_obj._bom_find(cr, uid, production.product_id.id, production.product_uom.id, properties)
if bom_id:
bom_point = bom_obj.browse(cr, uid, bom_id)
routing_id = bom_point.routing_id.id or False
self.write(cr, uid, [production.id], {'bom_id': bom_id, 'routing_id': routing_id})
if not bom_id:
raise osv.except_osv(_('Error!'), _("Cannot find a bill of material for this product."))
# get components and workcenter_lines from BoM structure
factor = uom_obj._compute_qty(cr, uid, production.product_uom.id, production.product_qty, bom_point.product_uom.id)
res = bom_obj._bom_explode(cr, uid, bom_point, factor / bom_point.product_qty, properties, routing_id=production.routing_id.id)
results = res[0] # product_lines
results2 = res[1] # workcenter_lines
# reset product_lines in production order
for line in results:
line['production_id'] = production.id
prod_line_obj.create(cr, uid, line)
#reset workcenter_lines in production order
for line in results2:
line['production_id'] = production.id
workcenter_line_obj.create(cr, uid, line)
return results
def action_compute(self, cr, uid, ids, properties=None, context=None):
""" Computes bills of material of a product.
@param properties: List containing dictionaries of properties.
@return: No. of products.
"""
return len(self._action_compute_lines(cr, uid, ids, properties=properties, context=context))
def action_cancel(self, cr, uid, ids, context=None):
""" Cancels the production order and related stock moves.
@return: True
"""
if context is None:
context = {}
move_obj = self.pool.get('stock.move')
for production in self.browse(cr, uid, ids, context=context):
if production.move_created_ids:
move_obj.action_cancel(cr, uid, [x.id for x in production.move_created_ids])
move_obj.action_cancel(cr, uid, [x.id for x in production.move_lines])
self.write(cr, uid, ids, {'state': 'cancel'})
# Put related procurements in exception
proc_obj = self.pool.get("procurement.order")
procs = proc_obj.search(cr, uid, [('production_id', 'in', ids)], context=context)
if procs:
proc_obj.write(cr, uid, procs, {'state': 'exception'}, context=context)
return True
def action_ready(self, cr, uid, ids, context=None):
""" Changes the production state to Ready and location id of stock move.
@return: True
"""
move_obj = self.pool.get('stock.move')
self.write(cr, uid, ids, {'state': 'ready'})
for production in self.browse(cr, uid, ids, context=context):
if not production.move_created_ids:
self._make_production_produce_line(cr, uid, production, context=context)
if production.move_prod_id and production.move_prod_id.location_id.id != production.location_dest_id.id:
move_obj.write(cr, uid, [production.move_prod_id.id],
{'location_id': production.location_dest_id.id})
return True
def action_production_end(self, cr, uid, ids, context=None):
""" Changes production state to Finish and writes finished date.
@return: True
"""
for production in self.browse(cr, uid, ids):
self._costs_generate(cr, uid, production)
write_res = self.write(cr, uid, ids, {'state': 'done', 'date_finished': time.strftime('%Y-%m-%d %H:%M:%S')})
# Check related procurements
proc_obj = self.pool.get("procurement.order")
procs = proc_obj.search(cr, uid, [('production_id', 'in', ids)], context=context)
proc_obj.check(cr, uid, procs, context=context)
return write_res
def test_production_done(self, cr, uid, ids):
""" Tests whether production is done or not.
@return: True or False
"""
res = True
for production in self.browse(cr, uid, ids):
if production.move_lines:
res = False
if production.move_created_ids:
res = False
return res
def _get_subproduct_factor(self, cr, uid, production_id, move_id=None, context=None):
""" Compute the factor to compute the qty of procucts to produce for the given production_id. By default,
it's always equal to the quantity encoded in the production order or the production wizard, but if the
module mrp_subproduct is installed, then we must use the move_id to identify the product to produce
and its quantity.
:param production_id: ID of the mrp.order
:param move_id: ID of the stock move that needs to be produced. Will be used in mrp_subproduct.
:return: The factor to apply to the quantity that we should produce for the given production order.
"""
return 1
def _get_produced_qty(self, cr, uid, production, context=None):
''' returns the produced quantity of product 'production.product_id' for the given production, in the product UoM
'''
produced_qty = 0
for produced_product in production.move_created_ids2:
if (produced_product.scrapped) or (produced_product.product_id.id != production.product_id.id):
continue
produced_qty += produced_product.product_qty
return produced_qty
def _get_consumed_data(self, cr, uid, production, context=None):
''' returns a dictionary containing for each raw material of the given production, its quantity already consumed (in the raw material UoM)
'''
consumed_data = {}
# Calculate already consumed qtys
for consumed in production.move_lines2:
if consumed.scrapped:
continue
if not consumed_data.get(consumed.product_id.id, False):
consumed_data[consumed.product_id.id] = 0
consumed_data[consumed.product_id.id] += consumed.product_qty
return consumed_data
def _calculate_qty(self, cr, uid, production, product_qty=0.0, context=None):
"""
Calculates the quantity still needed to produce an extra number of products
"""
quant_obj = self.pool.get("stock.quant")
produced_qty = self._get_produced_qty(cr, uid, production, context=context)
consumed_data = self._get_consumed_data(cr, uid, production, context=context)
#In case no product_qty is given, take the remaining qty to produce for the given production
if not product_qty:
product_qty = production.product_qty - produced_qty
dicts = {}
# Find product qty to be consumed and consume it
for scheduled in production.product_lines:
consumed_qty = consumed_data.get(scheduled.product_id.id, 0.0)
# qty available for consume and produce
qty_avail = scheduled.product_qty - consumed_qty
if qty_avail <= 0.0:
# there will be nothing to consume for this raw material
continue
if not dicts.get(scheduled.product_id.id):
dicts[scheduled.product_id.id] = {}
# total qty of consumed product we need after this consumption
total_consume = ((product_qty + produced_qty) * scheduled.product_qty / production.product_qty)
qty = total_consume - consumed_qty
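            # Worked example: production.product_qty=10, scheduled qty=20
            # (2 per unit), produced_qty=4 and product_qty=2 extra give
            # total_consume = (2 + 4) * 20 / 10 = 12; with 8 already consumed,
            # qty = 4 remains to consume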
# Search for quants related to this related move
for move in production.move_lines:
if qty <= 0.0:
break
if move.product_id.id != scheduled.product_id.id:
continue
product_id = scheduled.product_id.id
q = min(move.product_qty, qty)
quants = quant_obj.quants_get_prefered_domain(cr, uid, move.location_id, scheduled.product_id, q, domain=[('qty', '>', 0.0)],
prefered_domain_list=[[('reservation_id', '=', move.id)], [('reservation_id', '=', False)]], context=context)
for quant, quant_qty in quants:
if quant:
lot_id = quant.lot_id.id
if not product_id in dicts.keys():
dicts[product_id] = {lot_id: quant_qty}
elif lot_id in dicts[product_id].keys():
dicts[product_id][lot_id] += quant_qty
else:
dicts[product_id][lot_id] = quant_qty
qty -= quant_qty
if qty > 0:
if dicts[product_id].get(False):
dicts[product_id][False] += qty
else:
dicts[product_id][False] = qty
consume_lines = []
for prod in dicts.keys():
for lot, qty in dicts[prod].items():
consume_lines.append({'product_id': prod, 'product_qty': qty, 'lot_id': lot})
return consume_lines
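    # _calculate_qty returns one dict per (product, lot) pair, e.g. with
    # hypothetical ids:
    #     [{'product_id': 42, 'product_qty': 4.0, 'lot_id': 7},
    #      {'product_id': 42, 'product_qty': 1.0, 'lot_id': False}]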
def action_produce(self, cr, uid, production_id, production_qty, production_mode, wiz=False, context=None):
""" To produce final product based on production mode (consume/consume&produce).
If Production mode is consume, all stock move lines of raw materials will be done/consumed.
If Production mode is consume & produce, all stock move lines of raw materials will be done/consumed
and stock move lines of final product will be also done/produced.
@param production_id: the ID of mrp.production object
@param production_qty: specify qty to produce
@param production_mode: specify production mode (consume/consume&produce).
@param wiz: the mrp produce product wizard, which will tell the amount of consumed products needed
@return: True
"""
stock_mov_obj = self.pool.get('stock.move')
production = self.browse(cr, uid, production_id, context=context)
if not production.move_lines and production.state == 'ready':
# trigger workflow if not products to consume (eg: services)
self.signal_button_produce(cr, uid, [production_id])
produced_qty = self._get_produced_qty(cr, uid, production, context=context)
main_production_move = False
if production_mode == 'consume_produce':
# To produce remaining qty of final product
#vals = {'state':'confirmed'}
#final_product_todo = [x.id for x in production.move_created_ids]
#stock_mov_obj.write(cr, uid, final_product_todo, vals)
#stock_mov_obj.action_confirm(cr, uid, final_product_todo, context)
produced_products = {}
for produced_product in production.move_created_ids2:
if produced_product.scrapped:
continue
if not produced_products.get(produced_product.product_id.id, False):
produced_products[produced_product.product_id.id] = 0
produced_products[produced_product.product_id.id] += produced_product.product_qty
for produce_product in production.move_created_ids:
produced_qty = produced_products.get(produce_product.product_id.id, 0)
subproduct_factor = self._get_subproduct_factor(cr, uid, production.id, produce_product.id, context=context)
rest_qty = (subproduct_factor * production.product_qty) - produced_qty
if float_compare(rest_qty, (subproduct_factor * production_qty), precision_rounding=produce_product.product_id.uom_id.rounding) < 0:
prod_name = produce_product.product_id.name_get()[0][1]
raise osv.except_osv(_('Warning!'), _('You are going to produce total %s quantities of "%s".\nBut you can only produce up to total %s quantities.') % ((subproduct_factor * production_qty), prod_name, rest_qty))
if float_compare(rest_qty, 0, precision_rounding=produce_product.product_id.uom_id.rounding) > 0:
lot_id = False
if wiz:
lot_id = wiz.lot_id.id
new_moves = stock_mov_obj.action_consume(cr, uid, [produce_product.id], (subproduct_factor * production_qty), location_id=produce_product.location_id.id, restrict_lot_id=lot_id, context=context)
stock_mov_obj.write(cr, uid, new_moves, {'production_id': production_id}, context=context)
if produce_product.product_id.id == production.product_id.id and new_moves:
main_production_move = new_moves[0]
if production_mode in ['consume', 'consume_produce']:
if wiz:
consume_lines = []
for cons in wiz.consume_lines:
consume_lines.append({'product_id': cons.product_id.id, 'lot_id': cons.lot_id.id, 'product_qty': cons.product_qty})
else:
consume_lines = self._calculate_qty(cr, uid, production, production_qty, context=context)
for consume in consume_lines:
remaining_qty = consume['product_qty']
for raw_material_line in production.move_lines:
if remaining_qty <= 0:
break
if consume['product_id'] != raw_material_line.product_id.id:
continue
consumed_qty = min(remaining_qty, raw_material_line.product_qty)
stock_mov_obj.action_consume(cr, uid, [raw_material_line.id], consumed_qty, raw_material_line.location_id.id, restrict_lot_id=consume['lot_id'], consumed_for=main_production_move, context=context)
remaining_qty -= consumed_qty
if remaining_qty:
#consumed more in wizard than previously planned
product = self.pool.get('product.product').browse(cr, uid, consume['product_id'], context=context)
extra_move_id = self._make_consume_line_from_data(cr, uid, production, product, product.uom_id.id, remaining_qty, False, 0, context=context)
stock_mov_obj.action_done(cr, uid, [extra_move_id], context=context)
self.message_post(cr, uid, production_id, body=_("%s produced") % self._description, context=context)
self.signal_button_produce_done(cr, uid, [production_id])
return True
def _costs_generate(self, cr, uid, production):
""" Calculates total costs at the end of the production.
        @param production: browse record of the production order.
@return: Calculated amount.
"""
amount = 0.0
analytic_line_obj = self.pool.get('account.analytic.line')
for wc_line in production.workcenter_lines:
wc = wc_line.workcenter_id
if wc.costs_journal_id and wc.costs_general_account_id:
# Cost per hour
value = wc_line.hour * wc.costs_hour
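                # e.g. 2.0 hours at costs_hour=50.0 posts an analytic line of 100.0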
account = wc.costs_hour_account_id.id
if value and account:
amount += value
                    # we use SUPERUSER_ID as we do not guarantee an mrp user
# has access to account analytic lines but still should be
# able to produce orders
analytic_line_obj.create(cr, SUPERUSER_ID, {
'name': wc_line.name + ' (H)',
'amount': value,
'account_id': account,
'general_account_id': wc.costs_general_account_id.id,
'journal_id': wc.costs_journal_id.id,
'ref': wc.code,
'product_id': wc.product_id.id,
'unit_amount': wc_line.hour,
'product_uom_id': wc.product_id and wc.product_id.uom_id.id or False
})
# Cost per cycle
value = wc_line.cycle * wc.costs_cycle
account = wc.costs_cycle_account_id.id
if value and account:
amount += value
analytic_line_obj.create(cr, SUPERUSER_ID, {
'name': wc_line.name + ' (C)',
'amount': value,
'account_id': account,
'general_account_id': wc.costs_general_account_id.id,
'journal_id': wc.costs_journal_id.id,
'ref': wc.code,
'product_id': wc.product_id.id,
'unit_amount': wc_line.cycle,
'product_uom_id': wc.product_id and wc.product_id.uom_id.id or False
})
return amount
def action_in_production(self, cr, uid, ids, context=None):
""" Changes state to In Production and writes starting date.
@return: True
"""
return self.write(cr, uid, ids, {'state': 'in_production', 'date_start': time.strftime('%Y-%m-%d %H:%M:%S')})
def consume_lines_get(self, cr, uid, ids, *args):
res = []
for order in self.browse(cr, uid, ids, context={}):
res += [x.id for x in order.move_lines]
return res
def test_ready(self, cr, uid, ids):
res = False
for production in self.browse(cr, uid, ids):
if production.ready_production:
res = True
return res
def _make_production_produce_line(self, cr, uid, production, context=None):
stock_move = self.pool.get('stock.move')
source_location_id = production.product_id.property_stock_production.id
destination_location_id = production.location_dest_id.id
data = {
'name': production.name,
'date': production.date_planned,
'product_id': production.product_id.id,
'product_uom': production.product_uom.id,
'product_uom_qty': production.product_qty,
'product_uos_qty': production.product_uos and production.product_uos_qty or False,
'product_uos': production.product_uos and production.product_uos.id or False,
'location_id': source_location_id,
'location_dest_id': destination_location_id,
'move_dest_id': production.move_prod_id.id,
'company_id': production.company_id.id,
'production_id': production.id,
'origin': production.name,
}
move_id = stock_move.create(cr, uid, data, context=context)
#a phantom bom cannot be used in mrp order so it's ok to assume the list returned by action_confirm
#is 1 element long, so we can take the first.
return stock_move.action_confirm(cr, uid, [move_id], context=context)[0]
def _get_raw_material_procure_method(self, cr, uid, product, context=None):
'''This method returns the procure_method to use when creating the stock move for the production raw materials'''
try:
mto_route = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'stock', 'route_warehouse0_mto')[1]
except:
return "make_to_stock"
routes = product.route_ids + product.categ_id.total_route_ids
if mto_route in [x.id for x in routes]:
return "make_to_order"
return "make_to_stock"
def _make_consume_line_from_data(self, cr, uid, production, product, uom_id, qty, uos_id, uos_qty, context=None):
stock_move = self.pool.get('stock.move')
        # Internal shipment is created for Stockable and Consumable Products
if product.type not in ('product', 'consu'):
return False
# Take routing location as a Source Location.
source_location_id = production.location_src_id.id
if production.bom_id.routing_id and production.bom_id.routing_id.location_id:
source_location_id = production.bom_id.routing_id.location_id.id
destination_location_id = production.product_id.property_stock_production.id
if not source_location_id:
source_location_id = production.location_src_id.id
move_id = stock_move.create(cr, uid, {
'name': production.name,
'date': production.date_planned,
'product_id': product.id,
'product_uom_qty': qty,
'product_uom': uom_id,
'product_uos_qty': uos_id and uos_qty or False,
'product_uos': uos_id or False,
'location_id': source_location_id,
'location_dest_id': destination_location_id,
'company_id': production.company_id.id,
'procure_method': self._get_raw_material_procure_method(cr, uid, product, context=context),
'raw_material_production_id': production.id,
#this saves us a browse in create()
'price_unit': product.standard_price,
'origin': production.name,
})
return move_id
def _make_production_consume_line(self, cr, uid, line, context=None):
return self._make_consume_line_from_data(cr, uid, line.production_id, line.product_id, line.product_uom.id, line.product_qty, line.product_uos.id, line.product_uos_qty, context=context)
def action_confirm(self, cr, uid, ids, context=None):
""" Confirms production order.
@return: Newly generated Shipment Id.
"""
uncompute_ids = filter(lambda x: x, [not x.product_lines and x.id or False for x in self.browse(cr, uid, ids, context=context)])
self.action_compute(cr, uid, uncompute_ids, context=context)
for production in self.browse(cr, uid, ids, context=context):
self._make_production_produce_line(cr, uid, production, context=context)
stock_moves = []
for line in production.product_lines:
stock_move_id = self._make_production_consume_line(cr, uid, line, context=context)
if stock_move_id:
stock_moves.append(stock_move_id)
if stock_moves:
self.pool.get('stock.move').action_confirm(cr, uid, stock_moves, context=context)
production.write({'state': 'confirmed'}, context=context)
return 0
def action_assign(self, cr, uid, ids, context=None):
"""
Checks the availability on the consume lines of the production order
"""
move_obj = self.pool.get("stock.move")
for production in self.browse(cr, uid, ids, context=context):
move_obj.action_assign(cr, uid, [x.id for x in production.move_lines], context=context)
def force_production(self, cr, uid, ids, *args):
""" Assigns products.
@param *args: Arguments
@return: True
"""
move_obj = self.pool.get('stock.move')
for order in self.browse(cr, uid, ids):
move_obj.force_assign(cr, uid, [x.id for x in order.move_lines])
return True
class mrp_production_workcenter_line(osv.osv):
_name = 'mrp.production.workcenter.line'
_description = 'Work Order'
_order = 'sequence'
_inherit = ['mail.thread']
_columns = {
'name': fields.char('Work Order', size=64, required=True),
'workcenter_id': fields.many2one('mrp.workcenter', 'Work Center', required=True),
'cycle': fields.float('Number of Cycles', digits=(16, 2)),
'hour': fields.float('Number of Hours', digits=(16, 2)),
'sequence': fields.integer('Sequence', required=True, help="Gives the sequence order when displaying a list of work orders."),
'production_id': fields.many2one('mrp.production', 'Manufacturing Order',
track_visibility='onchange', select=True, ondelete='cascade', required=True),
}
_defaults = {
'sequence': lambda *a: 1,
'hour': lambda *a: 0,
'cycle': lambda *a: 0,
}
class mrp_production_product_line(osv.osv):
_name = 'mrp.production.product.line'
_description = 'Production Scheduled Product'
_columns = {
'name': fields.char('Name', size=64, required=True),
'product_id': fields.many2one('product.product', 'Product', required=True),
'product_qty': fields.float('Product Quantity', digits_compute=dp.get_precision('Product Unit of Measure'), required=True),
'product_uom': fields.many2one('product.uom', 'Product Unit of Measure', required=True),
'product_uos_qty': fields.float('Product UOS Quantity'),
'product_uos': fields.many2one('product.uom', 'Product UOS'),
'production_id': fields.many2one('mrp.production', 'Production Order', select=True),
}
class product_product(osv.osv):
_inherit = "product.product"
def _bom_orders_count(self, cr, uid, ids, field_name, arg, context=None):
Bom = self.pool('mrp.bom')
Production = self.pool('mrp.production')
return {
product_id: {
'bom_count': Bom.search_count(cr, uid, [('product_id', '=', product_id), ('bom_id', '=', False)], context=context),
                'mo_count': Production.search_count(cr, uid, [('product_id', '=', product_id)], context=context),
'bom_strct': Bom.search_count(cr, uid, [('product_id', '=', product_id), ('bom_id', '=', False)], context=context),
}
for product_id in ids
}
_columns = {
'bom_ids': fields.one2many('mrp.bom', 'product_id', 'Bill of Materials'),
'bom_count': fields.function(_bom_orders_count, string='# Bill of Material', type='integer', multi="_bom_order_count"),
'bom_strct': fields.function(_bom_orders_count, string='# Bill of Material Structure', type='integer', multi="_bom_order_count"),
'mo_count': fields.function(_bom_orders_count, string='# Manufacturing Orders', type='integer', multi="_bom_order_count"),
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
DrMeers/django | django/contrib/admin/checks.py | 3 | 36925 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from itertools import chain
from django.contrib.admin.utils import get_fields_from_path, NotRelationField
from django.core import checks
from django.db import models
from django.db.models.fields import FieldDoesNotExist
from django.forms.models import BaseModelForm, _get_foreign_key, BaseModelFormSet
def check_admin_app(**kwargs):
from django.contrib.admin.sites import site
return list(chain.from_iterable(
model_admin.check(model, **kwargs)
for model, model_admin in site._registry.items()
))
class BaseModelAdminChecks(object):
def check(self, cls, model, **kwargs):
errors = []
errors.extend(self._check_raw_id_fields(cls, model))
errors.extend(self._check_fields(cls, model))
errors.extend(self._check_fieldsets(cls, model))
errors.extend(self._check_exclude(cls, model))
errors.extend(self._check_form(cls, model))
errors.extend(self._check_filter_vertical(cls, model))
errors.extend(self._check_filter_horizontal(cls, model))
errors.extend(self._check_radio_fields(cls, model))
errors.extend(self._check_prepopulated_fields(cls, model))
errors.extend(self._check_view_on_site_url(cls, model))
errors.extend(self._check_ordering(cls, model))
errors.extend(self._check_readonly_fields(cls, model))
return errors
def _check_raw_id_fields(self, cls, model):
""" Check that `raw_id_fields` only contains field names that are listed
on the model. """
if not isinstance(cls.raw_id_fields, (list, tuple)):
return must_be('a list or tuple', option='raw_id_fields', obj=cls, id='admin.E001')
else:
return list(chain(*[
self._check_raw_id_fields_item(cls, model, field_name, 'raw_id_fields[%d]' % index)
for index, field_name in enumerate(cls.raw_id_fields)
]))
def _check_raw_id_fields_item(self, cls, model, field_name, label):
""" Check an item of `raw_id_fields`, i.e. check that field named
`field_name` exists in model `model` and is a ForeignKey or a
ManyToManyField. """
try:
field = model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E002')
else:
if not isinstance(field, (models.ForeignKey, models.ManyToManyField)):
return must_be('a ForeignKey or ManyToManyField',
option=label, obj=cls, id='admin.E003')
else:
return []
def _check_fields(self, cls, model):
""" Check that `fields` only refer to existing fields, doesn't contain
duplicates. Check if at most one of `fields` and `fieldsets` is defined.
"""
if cls.fields is None:
return []
elif not isinstance(cls.fields, (list, tuple)):
return must_be('a list or tuple', option='fields', obj=cls, id='admin.E004')
elif cls.fieldsets:
return [
checks.Error(
'Both "fieldsets" and "fields" are specified.',
hint=None,
obj=cls,
id='admin.E005',
)
]
elif len(cls.fields) != len(set(cls.fields)):
return [
checks.Error(
'There are duplicate field(s) in "fields".',
hint=None,
obj=cls,
id='admin.E006',
)
]
else:
return list(chain(*[
self._check_field_spec(cls, model, field_name, 'fields')
for field_name in cls.fields
]))
def _check_fieldsets(self, cls, model):
""" Check that fieldsets is properly formatted and doesn't contain
duplicates. """
if cls.fieldsets is None:
return []
elif not isinstance(cls.fieldsets, (list, tuple)):
return must_be('a list or tuple', option='fieldsets', obj=cls, id='admin.E007')
else:
return list(chain(*[
self._check_fieldsets_item(cls, model, fieldset, 'fieldsets[%d]' % index)
for index, fieldset in enumerate(cls.fieldsets)
]))
def _check_fieldsets_item(self, cls, model, fieldset, label):
""" Check an item of `fieldsets`, i.e. check that this is a pair of a
set name and a dictionary containing "fields" key. """
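        # Illustrative (hypothetical fields): a well-formed item looks like
        #     ('Advanced options', {'classes': ('collapse',), 'fields': ('slug', 'status')})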
if not isinstance(fieldset, (list, tuple)):
return must_be('a list or tuple', option=label, obj=cls, id='admin.E008')
elif len(fieldset) != 2:
return must_be('a pair', option=label, obj=cls, id='admin.E009')
elif not isinstance(fieldset[1], dict):
return must_be('a dictionary', option='%s[1]' % label, obj=cls, id='admin.E010')
elif 'fields' not in fieldset[1]:
return [
checks.Error(
'"%s[1]" must contain "fields" key.' % label,
hint=None,
obj=cls,
id='admin.E011',
)
]
elif len(fieldset[1]['fields']) != len(set(fieldset[1]['fields'])):
return [
checks.Error(
'There are duplicate field(s) in "%s[1]".' % label,
hint=None,
obj=cls,
id='admin.E012',
)
]
else:
return list(chain(*[
self._check_field_spec(cls, model, fields, '%s[1][\'fields\']' % label)
for fields in fieldset[1]['fields']
]))
def _check_field_spec(self, cls, model, fields, label):
""" `fields` should be an item of `fields` or an item of
fieldset[1]['fields'] for any `fieldset` in `fieldsets`. It should be a
field name or a tuple of field names. """
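        # Illustrative (hypothetical fields): valid specs include 'title' or
        # the grouped tuple ('first_name', 'last_name').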
if isinstance(fields, tuple):
return list(chain(*[
self._check_field_spec_item(cls, model, field_name, "%s[%d]" % (label, index))
for index, field_name in enumerate(fields)
]))
else:
return self._check_field_spec_item(cls, model, fields, label)
def _check_field_spec_item(self, cls, model, field_name, label):
if field_name in cls.readonly_fields:
            # Things that aren't actual model fields can be put in `fields` if
            # they are in readonly_fields; readonly_fields will handle the
            # validation of such things.
return []
else:
try:
field = model._meta.get_field(field_name)
except models.FieldDoesNotExist:
# If we can't find a field on the model that matches, it could
# be an extra field on the form.
return []
else:
if (isinstance(field, models.ManyToManyField) and
not field.rel.through._meta.auto_created):
return [
checks.Error(
'"%s" cannot include the ManyToManyField "%s", '
'because "%s" manually specifies relationship model.'
% (label, field_name, field_name),
hint=None,
obj=cls,
id='admin.E013',
)
]
else:
return []
def _check_exclude(self, cls, model):
""" Check that exclude is a sequence without duplicates. """
if cls.exclude is None: # default value is None
return []
elif not isinstance(cls.exclude, (list, tuple)):
return must_be('a list or tuple', option='exclude', obj=cls, id='admin.E014')
elif len(cls.exclude) > len(set(cls.exclude)):
return [
checks.Error(
'"exclude" contains duplicate field(s).',
hint=None,
obj=cls,
id='admin.E015',
)
]
else:
return []
def _check_form(self, cls, model):
""" Check that form subclasses BaseModelForm. """
if hasattr(cls, 'form') and not issubclass(cls.form, BaseModelForm):
return must_inherit_from(parent='BaseModelForm', option='form',
obj=cls, id='admin.E016')
else:
return []
def _check_filter_vertical(self, cls, model):
""" Check that filter_vertical is a sequence of field names. """
if not hasattr(cls, 'filter_vertical'):
return []
elif not isinstance(cls.filter_vertical, (list, tuple)):
return must_be('a list or tuple', option='filter_vertical', obj=cls, id='admin.E017')
else:
return list(chain(*[
self._check_filter_item(cls, model, field_name, "filter_vertical[%d]" % index)
for index, field_name in enumerate(cls.filter_vertical)
]))
def _check_filter_horizontal(self, cls, model):
""" Check that filter_horizontal is a sequence of field names. """
if not hasattr(cls, 'filter_horizontal'):
return []
elif not isinstance(cls.filter_horizontal, (list, tuple)):
return must_be('a list or tuple', option='filter_horizontal', obj=cls, id='admin.E018')
else:
return list(chain(*[
self._check_filter_item(cls, model, field_name, "filter_horizontal[%d]" % index)
for index, field_name in enumerate(cls.filter_horizontal)
]))
def _check_filter_item(self, cls, model, field_name, label):
""" Check one item of `filter_vertical` or `filter_horizontal`, i.e.
check that given field exists and is a ManyToManyField. """
try:
field = model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E019')
else:
if not isinstance(field, models.ManyToManyField):
return must_be('a ManyToManyField', option=label, obj=cls, id='admin.E020')
else:
return []
def _check_radio_fields(self, cls, model):
""" Check that `radio_fields` is a dictionary. """
if not hasattr(cls, 'radio_fields'):
return []
elif not isinstance(cls.radio_fields, dict):
return must_be('a dictionary', option='radio_fields', obj=cls, id='admin.E021')
else:
return list(chain(*[
self._check_radio_fields_key(cls, model, field_name, 'radio_fields') +
self._check_radio_fields_value(cls, model, val, 'radio_fields[\'%s\']' % field_name)
for field_name, val in cls.radio_fields.items()
]))
def _check_radio_fields_key(self, cls, model, field_name, label):
""" Check that a key of `radio_fields` dictionary is name of existing
field and that the field is a ForeignKey or has `choices` defined. """
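        # Illustrative (hypothetical field): radio_fields = {'group': admin.VERTICAL}
        # is valid when `group` is a ForeignKey or defines `choices`.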
try:
field = model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E022')
else:
if not (isinstance(field, models.ForeignKey) or field.choices):
return [
checks.Error(
'"%s" refers to "%s", which is neither an instance of ForeignKey nor does have choices set.' % (
label, field_name
),
hint=None,
obj=cls,
id='admin.E023',
)
]
else:
return []
def _check_radio_fields_value(self, cls, model, val, label):
""" Check type of a value of `radio_fields` dictionary. """
from django.contrib.admin.options import HORIZONTAL, VERTICAL
if val not in (HORIZONTAL, VERTICAL):
return [
checks.Error(
'"%s" is neither admin.HORIZONTAL nor admin.VERTICAL.' % label,
hint=None,
obj=cls,
id='admin.E024',
)
]
else:
return []
def _check_view_on_site_url(self, cls, model):
if hasattr(cls, 'view_on_site'):
if not callable(cls.view_on_site) and not isinstance(cls.view_on_site, bool):
return [
checks.Error(
'"view_on_site" is not a callable or a boolean value.',
hint=None,
obj=cls,
id='admin.E025',
)
]
else:
return []
else:
return []
def _check_prepopulated_fields(self, cls, model):
""" Check that `prepopulated_fields` is a dictionary containing allowed
field types. """
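        # Illustrative (hypothetical fields): a typical declaration is
        #     prepopulated_fields = {'slug': ('title',)}
        # where keys and value items are names of model fields.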
if not hasattr(cls, 'prepopulated_fields'):
return []
elif not isinstance(cls.prepopulated_fields, dict):
return must_be('a dictionary', option='prepopulated_fields', obj=cls, id='admin.E026')
else:
return list(chain(*[
self._check_prepopulated_fields_key(cls, model, field_name, 'prepopulated_fields') +
self._check_prepopulated_fields_value(cls, model, val, 'prepopulated_fields[\'%s\']' % field_name)
for field_name, val in cls.prepopulated_fields.items()
]))
def _check_prepopulated_fields_key(self, cls, model, field_name, label):
""" Check a key of `prepopulated_fields` dictionary, i.e. check that it
is a name of existing field and the field is one of the allowed types.
"""
forbidden_field_types = (
models.DateTimeField,
models.ForeignKey,
models.ManyToManyField
)
try:
field = model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E027')
else:
if isinstance(field, forbidden_field_types):
return [
checks.Error(
'"%s" refers to "%s", which must not be a DateTimeField, '
'ForeignKey or ManyToManyField.' % (
label, field_name
),
hint=None,
obj=cls,
id='admin.E028',
)
]
else:
return []
def _check_prepopulated_fields_value(self, cls, model, val, label):
""" Check a value of `prepopulated_fields` dictionary, i.e. it's an
iterable of existing fields. """
if not isinstance(val, (list, tuple)):
return must_be('a list or tuple', option=label, obj=cls, id='admin.E029')
else:
return list(chain(*[
self._check_prepopulated_fields_value_item(cls, model, subfield_name, "%s[%r]" % (label, index))
for index, subfield_name in enumerate(val)
]))
def _check_prepopulated_fields_value_item(self, cls, model, field_name, label):
""" For `prepopulated_fields` equal to {"slug": ("title",)},
`field_name` is "title". """
try:
model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E030')
else:
return []
def _check_ordering(self, cls, model):
""" Check that ordering refers to existing fields or is random. """
if cls.ordering is None: # The default value is None
return []
elif not isinstance(cls.ordering, (list, tuple)):
return must_be('a list or tuple', option='ordering', obj=cls, id='admin.E031')
else:
return list(chain(*[
self._check_ordering_item(cls, model, field_name, 'ordering[%d]' % index)
for index, field_name in enumerate(cls.ordering)
]))
def _check_ordering_item(self, cls, model, field_name, label):
""" Check that `ordering` refers to existing fields. """
if field_name == '?' and len(cls.ordering) != 1:
return [
checks.Error(
'"ordering" has the random ordering marker "?", '
'but contains other fields as well.',
hint='Either remove the "?", or remove the other fields.',
obj=cls,
id='admin.E032',
)
]
elif field_name == '?':
return []
elif '__' in field_name:
# Skip ordering in the format field1__field2 (FIXME: checking
# this format would be nice, but it's a little fiddly).
return []
else:
if field_name.startswith('-'):
field_name = field_name[1:]
try:
model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E033')
else:
return []
def _check_readonly_fields(self, cls, model):
""" Check that readonly_fields refers to proper attribute or field. """
if cls.readonly_fields == ():
return []
elif not isinstance(cls.readonly_fields, (list, tuple)):
return must_be('a list or tuple', option='readonly_fields', obj=cls, id='admin.E034')
else:
return list(chain(*[
self._check_readonly_fields_item(cls, model, field_name, "readonly_fields[%d]" % index)
for index, field_name in enumerate(cls.readonly_fields)
]))
def _check_readonly_fields_item(self, cls, model, field_name, label):
if callable(field_name):
return []
elif hasattr(cls, field_name):
return []
elif hasattr(model, field_name):
return []
else:
try:
model._meta.get_field(field_name)
except models.FieldDoesNotExist:
return [
checks.Error(
'"%s" is neither a callable nor an attribute of "%s" nor found in the model %s.%s.' % (
label, cls.__name__, model._meta.app_label, model._meta.object_name
),
hint=None,
obj=cls,
id='admin.E035',
)
]
else:
return []
class ModelAdminChecks(BaseModelAdminChecks):
def check(self, cls, model, **kwargs):
errors = super(ModelAdminChecks, self).check(cls, model=model, **kwargs)
errors.extend(self._check_save_as(cls, model))
errors.extend(self._check_save_on_top(cls, model))
errors.extend(self._check_inlines(cls, model))
errors.extend(self._check_list_display(cls, model))
errors.extend(self._check_list_display_links(cls, model))
errors.extend(self._check_list_filter(cls, model))
errors.extend(self._check_list_select_related(cls, model))
errors.extend(self._check_list_per_page(cls, model))
errors.extend(self._check_list_max_show_all(cls, model))
errors.extend(self._check_list_editable(cls, model))
errors.extend(self._check_search_fields(cls, model))
errors.extend(self._check_date_hierarchy(cls, model))
return errors
def _check_save_as(self, cls, model):
""" Check save_as is a boolean. """
if not isinstance(cls.save_as, bool):
return must_be('a boolean', option='save_as',
obj=cls, id='admin.E101')
else:
return []
def _check_save_on_top(self, cls, model):
""" Check save_on_top is a boolean. """
if not isinstance(cls.save_on_top, bool):
return must_be('a boolean', option='save_on_top',
obj=cls, id='admin.E102')
else:
return []
def _check_inlines(self, cls, model):
""" Check all inline model admin classes. """
if not isinstance(cls.inlines, (list, tuple)):
return must_be('a list or tuple', option='inlines', obj=cls, id='admin.E103')
else:
return list(chain(*[
self._check_inlines_item(cls, model, item, "inlines[%d]" % index)
for index, item in enumerate(cls.inlines)
]))
def _check_inlines_item(self, cls, model, inline, label):
""" Check one inline model admin. """
from django.contrib.admin.options import BaseModelAdmin
if not issubclass(inline, BaseModelAdmin):
return must_inherit_from(parent='BaseModelAdmin', option=label,
obj=cls, id='admin.E104')
elif not inline.model:
return [
checks.Error(
'"model" is a required attribute of "%s".' % label,
hint=None,
obj=cls,
id='admin.E105',
)
]
elif not issubclass(inline.model, models.Model):
return must_be('a Model', option='%s.model' % label,
obj=cls, id='admin.E106')
else:
return inline.check(model)
def _check_list_display(self, cls, model):
""" Check that list_display only contains fields or usable attributes.
"""
if not isinstance(cls.list_display, (list, tuple)):
return must_be('a list or tuple', option='list_display', obj=cls, id='admin.E107')
else:
return list(chain(*[
self._check_list_display_item(cls, model, item, "list_display[%d]" % index)
for index, item in enumerate(cls.list_display)
]))
def _check_list_display_item(self, cls, model, item, label):
if callable(item):
return []
elif hasattr(cls, item):
return []
elif hasattr(model, item):
# getattr(model, item) could be an X_RelatedObjectsDescriptor
try:
field = model._meta.get_field(item)
except models.FieldDoesNotExist:
try:
field = getattr(model, item)
except AttributeError:
field = None
if field is None:
return [
checks.Error(
'"%s" refers to "%s" that is neither a field, method nor a property of model %s.%s.' % (
label, item, model._meta.app_label, model._meta.object_name
),
hint=None,
obj=cls,
id='admin.E108',
)
]
elif isinstance(field, models.ManyToManyField):
return [
checks.Error(
'"%s" must not be a ManyToManyField.' % label,
hint=None,
obj=cls,
id='admin.E109',
)
]
else:
return []
else:
try:
model._meta.get_field(item)
except models.FieldDoesNotExist:
return [
checks.Error(
'"%s" is neither a callable nor an attribute of "%s" nor found in model %s.%s.' % (
label, cls.__name__, model._meta.app_label, model._meta.object_name
),
hint=None,
obj=cls,
id='admin.E110',
)
]
else:
return []
def _check_list_display_links(self, cls, model):
""" Check that list_display_links is a unique subset of list_display.
"""
if cls.list_display_links is None:
return []
elif not isinstance(cls.list_display_links, (list, tuple)):
return must_be('a list or tuple or None', option='list_display_links', obj=cls, id='admin.E111')
else:
return list(chain(*[
self._check_list_display_links_item(cls, model, field_name, "list_display_links[%d]" % index)
for index, field_name in enumerate(cls.list_display_links)
]))
def _check_list_display_links_item(self, cls, model, field_name, label):
if field_name not in cls.list_display:
return [
checks.Error(
'"%s" refers to "%s", which is not defined in "list_display".' % (
label, field_name
),
hint=None,
obj=cls,
id='admin.E112',
)
]
else:
return []
def _check_list_filter(self, cls, model):
if not isinstance(cls.list_filter, (list, tuple)):
return must_be('a list or tuple', option='list_filter', obj=cls, id='admin.E113')
else:
return list(chain(*[
self._check_list_filter_item(cls, model, item, "list_filter[%d]" % index)
for index, item in enumerate(cls.list_filter)
]))
def _check_list_filter_item(self, cls, model, item, label):
"""
Check one item of `list_filter`, i.e. check if it is one of three options:
1. 'field' -- a basic field filter, possibly w/ relationships (e.g.
'field__rel')
2. ('field', SomeFieldListFilter) - a field-based list filter class
3. SomeListFilter - a non-field list filter class
"""
from django.contrib.admin import ListFilter, FieldListFilter
if callable(item) and not isinstance(item, models.Field):
# If item is option 3, it should be a ListFilter...
if not issubclass(item, ListFilter):
return must_inherit_from(parent='ListFilter', option=label,
obj=cls, id='admin.E114')
# ... but not a FieldListFilter.
elif issubclass(item, FieldListFilter):
return [
checks.Error(
'"%s" must not inherit from FieldListFilter.' % label,
hint=None,
obj=cls,
id='admin.E115',
)
]
else:
return []
elif isinstance(item, (tuple, list)):
# item is option #2
field, list_filter_class = item
if not issubclass(list_filter_class, FieldListFilter):
return must_inherit_from(parent='FieldListFilter', option='%s[1]' % label,
obj=cls, id='admin.E116')
else:
return []
else:
# item is option #1
field = item
# Validate the field string
try:
get_fields_from_path(model, field)
except (NotRelationField, FieldDoesNotExist):
return [
checks.Error(
'"%s" refers to "%s", which does not refer to a Field.' % (label, field),
hint=None,
obj=cls,
id='admin.E117',
)
]
else:
return []
def _check_list_select_related(self, cls, model):
""" Check that list_select_related is a boolean, a list or a tuple. """
if not isinstance(cls.list_select_related, (bool, list, tuple)):
return must_be('a boolean, tuple or list', option='list_select_related',
obj=cls, id='admin.E118')
else:
return []
def _check_list_per_page(self, cls, model):
""" Check that list_per_page is an integer. """
if not isinstance(cls.list_per_page, int):
return must_be('an integer', option='list_per_page', obj=cls, id='admin.E119')
else:
return []
def _check_list_max_show_all(self, cls, model):
""" Check that list_max_show_all is an integer. """
if not isinstance(cls.list_max_show_all, int):
return must_be('an integer', option='list_max_show_all', obj=cls, id='admin.E120')
else:
return []
def _check_list_editable(self, cls, model):
""" Check that list_editable is a sequence of editable fields from
list_display without first element. """
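        # Illustrative (hypothetical fields): with
        #     list_display = ('name', 'price', 'in_stock')
        #     list_display_links = ('name',)
        # a valid choice is list_editable = ('price', 'in_stock').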
if not isinstance(cls.list_editable, (list, tuple)):
return must_be('a list or tuple', option='list_editable', obj=cls, id='admin.E121')
else:
return list(chain(*[
self._check_list_editable_item(cls, model, item, "list_editable[%d]" % index)
for index, item in enumerate(cls.list_editable)
]))
def _check_list_editable_item(self, cls, model, field_name, label):
try:
field = model._meta.get_field_by_name(field_name)[0]
except models.FieldDoesNotExist:
return refer_to_missing_field(field=field_name, option=label,
model=model, obj=cls, id='admin.E122')
else:
            if field_name not in cls.list_display:
                return [checks.Error(
                    '"%s" refers to "%s", which is not contained in "list_display".'
                    % (label, field_name),
                    hint=None, obj=cls, id='admin.E123',
                )]
            elif cls.list_display_links and field_name in cls.list_display_links:
return [
checks.Error(
'"%s" cannot be in both "list_editable" and "list_display_links".' % field_name,
hint=None,
obj=cls,
id='admin.E124',
)
]
elif not cls.list_display_links and cls.list_display[0] in cls.list_editable:
return [
checks.Error(
'"%s" refers to the first field in list_display ("%s"), '
'which cannot be used unless list_display_links is set.' % (
label, cls.list_display[0]
),
hint=None,
obj=cls,
id='admin.E125',
)
]
elif not field.editable:
return [
checks.Error(
'"%s" refers to field "%s", which is not editable through the admin.' % (
label, field_name
),
hint=None,
obj=cls,
id='admin.E126',
)
]
else:
return []
def _check_search_fields(self, cls, model):
""" Check search_fields is a sequence. """
if not isinstance(cls.search_fields, (list, tuple)):
return must_be('a list or tuple', option='search_fields', obj=cls, id='admin.E127')
else:
return []
def _check_date_hierarchy(self, cls, model):
""" Check that date_hierarchy refers to DateField or DateTimeField. """
if cls.date_hierarchy is None:
return []
else:
try:
field = model._meta.get_field(cls.date_hierarchy)
except models.FieldDoesNotExist:
return refer_to_missing_field(option='date_hierarchy',
field=cls.date_hierarchy,
model=model, obj=cls, id='admin.E128')
else:
if not isinstance(field, (models.DateField, models.DateTimeField)):
return must_be('a DateField or DateTimeField', option='date_hierarchy',
obj=cls, id='admin.E129')
else:
return []
class InlineModelAdminChecks(BaseModelAdminChecks):
def check(self, cls, parent_model, **kwargs):
errors = super(InlineModelAdminChecks, self).check(cls, model=cls.model, **kwargs)
errors.extend(self._check_fk_name(cls, parent_model))
errors.extend(self._check_exclude_of_parent_model(cls, parent_model))
errors.extend(self._check_extra(cls))
errors.extend(self._check_max_num(cls))
errors.extend(self._check_formset(cls))
return errors
def _check_exclude_of_parent_model(self, cls, parent_model):
# Do not perform more specific checks if the base checks result in an
# error.
errors = super(InlineModelAdminChecks, self)._check_exclude(cls, parent_model)
if errors:
return []
# Skip if `fk_name` is invalid.
if self._check_fk_name(cls, parent_model):
return []
if cls.exclude is None:
return []
fk = _get_foreign_key(parent_model, cls.model, fk_name=cls.fk_name)
if fk.name in cls.exclude:
return [
checks.Error(
'Cannot exclude the field "%s", because it is the foreign key '
'to the parent model %s.%s.' % (
fk.name, parent_model._meta.app_label, parent_model._meta.object_name
),
hint=None,
obj=cls,
id='admin.E201',
)
]
else:
return []
def _check_fk_name(self, cls, parent_model):
try:
_get_foreign_key(parent_model, cls.model, fk_name=cls.fk_name)
except ValueError as e:
return [checks.Error(e.args[0], hint=None, obj=cls, id='admin.E202')]
else:
return []
def _check_extra(self, cls):
""" Check that extra is an integer. """
if not isinstance(cls.extra, int):
return must_be('an integer', option='extra', obj=cls, id='admin.E203')
else:
return []
def _check_max_num(self, cls):
""" Check that max_num is an integer. """
if cls.max_num is None:
return []
elif not isinstance(cls.max_num, int):
return must_be('an integer', option='max_num', obj=cls, id='admin.E204')
else:
return []
def _check_formset(self, cls):
""" Check formset is a subclass of BaseModelFormSet. """
if not issubclass(cls.formset, BaseModelFormSet):
return must_inherit_from(parent='BaseModelFormSet', option='formset',
obj=cls, id='admin.E205')
else:
return []
def must_be(type, option, obj, id):
return [
checks.Error(
'"%s" must be %s.' % (option, type),
hint=None,
obj=obj,
id=id,
),
]
def must_inherit_from(parent, option, obj, id):
return [
checks.Error(
'"%s" must inherit from %s.' % (option, parent),
hint=None,
obj=obj,
id=id,
),
]
def refer_to_missing_field(field, option, model, obj, id):
return [
checks.Error(
'"%s" refers to field "%s", which is missing from model %s.%s.' % (
option, field, model._meta.app_label, model._meta.object_name
),
hint=None,
obj=obj,
id=id,
),
]
| bsd-3-clause |
elkingtonmcb/scikit-learn | sklearn/decomposition/sparse_pca.py | 284 | 9424 | """Matrix factorization with Sparse PCA"""
# Author: Vlad Niculae, Gael Varoquaux, Alexandre Gramfort
# License: BSD 3 clause
import numpy as np
from ..utils import check_random_state, check_array
from ..utils.validation import check_is_fitted
from ..linear_model import ridge_regression
from ..base import BaseEstimator, TransformerMixin
from .dict_learning import dict_learning, dict_learning_online
class SparsePCA(BaseEstimator, TransformerMixin):
"""Sparse Principal Components Analysis (SparsePCA)
Finds the set of sparse components that can optimally reconstruct
the data. The amount of sparseness is controllable by the coefficient
of the L1 penalty, given by the parameter alpha.
Read more in the :ref:`User Guide <SparsePCA>`.
Parameters
----------
n_components : int,
Number of sparse atoms to extract.
alpha : float,
Sparsity controlling parameter. Higher values lead to sparser
components.
ridge_alpha : float,
Amount of ridge shrinkage to apply in order to improve
conditioning when calling the transform method.
max_iter : int,
Maximum number of iterations to perform.
tol : float,
Tolerance for the stopping condition.
method : {'lars', 'cd'}
lars: uses the least angle regression method to solve the lasso problem
(linear_model.lars_path)
cd: uses the coordinate descent method to compute the
Lasso solution (linear_model.Lasso). Lars will be faster if
the estimated components are sparse.
n_jobs : int,
Number of parallel jobs to run.
U_init : array of shape (n_samples, n_components),
Initial values for the loadings for warm restart scenarios.
V_init : array of shape (n_components, n_features),
Initial values for the components for warm restart scenarios.
verbose :
Degree of verbosity of the printed output.
random_state : int or RandomState
Pseudo number generator state used for random sampling.
Attributes
----------
components_ : array, [n_components, n_features]
Sparse components extracted from the data.
error_ : array
Vector of errors at each iteration.
n_iter_ : int
Number of iterations run.
See also
--------
PCA
MiniBatchSparsePCA
DictionaryLearning
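    Examples
    --------
    A minimal usage sketch (illustrative; exact values depend on the data and solver)::
        >>> import numpy as np
        >>> rng = np.random.RandomState(0)
        >>> X = rng.randn(20, 10)
        >>> spca = SparsePCA(n_components=3, random_state=0)
        >>> codes = spca.fit(X).transform(X)
        >>> codes.shape
        (20, 3)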
"""
def __init__(self, n_components=None, alpha=1, ridge_alpha=0.01,
max_iter=1000, tol=1e-8, method='lars', n_jobs=1, U_init=None,
V_init=None, verbose=False, random_state=None):
self.n_components = n_components
self.alpha = alpha
self.ridge_alpha = ridge_alpha
self.max_iter = max_iter
self.tol = tol
self.method = method
self.n_jobs = n_jobs
self.U_init = U_init
self.V_init = V_init
self.verbose = verbose
self.random_state = random_state
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X: array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples
and n_features is the number of features.
Returns
-------
self : object
Returns the instance itself.
"""
random_state = check_random_state(self.random_state)
X = check_array(X)
if self.n_components is None:
n_components = X.shape[1]
else:
n_components = self.n_components
code_init = self.V_init.T if self.V_init is not None else None
dict_init = self.U_init.T if self.U_init is not None else None
Vt, _, E, self.n_iter_ = dict_learning(X.T, n_components, self.alpha,
tol=self.tol,
max_iter=self.max_iter,
method=self.method,
n_jobs=self.n_jobs,
verbose=self.verbose,
random_state=random_state,
code_init=code_init,
dict_init=dict_init,
return_n_iter=True
)
self.components_ = Vt.T
self.error_ = E
return self
def transform(self, X, ridge_alpha=None):
"""Least Squares projection of the data onto the sparse components.
To avoid instability issues in case the system is under-determined,
regularization can be applied (Ridge regression) via the
`ridge_alpha` parameter.
        Note that Sparse PCA components' orthogonality is not enforced as in
        PCA; hence one cannot use a simple linear projection.
Parameters
----------
X: array of shape (n_samples, n_features)
Test data to be transformed, must have the same number of
features as the data used to train the model.
ridge_alpha: float, default: 0.01
Amount of ridge shrinkage to apply in order to improve
conditioning.
Returns
-------
        X_new : array, shape (n_samples, n_components)
Transformed data.
"""
check_is_fitted(self, 'components_')
X = check_array(X)
ridge_alpha = self.ridge_alpha if ridge_alpha is None else ridge_alpha
U = ridge_regression(self.components_.T, X.T, ridge_alpha,
solver='cholesky')
s = np.sqrt((U ** 2).sum(axis=0))
s[s == 0] = 1
U /= s
return U
class MiniBatchSparsePCA(SparsePCA):
"""Mini-batch Sparse Principal Components Analysis
Finds the set of sparse components that can optimally reconstruct
the data. The amount of sparseness is controllable by the coefficient
of the L1 penalty, given by the parameter alpha.
Read more in the :ref:`User Guide <SparsePCA>`.
Parameters
----------
n_components : int,
number of sparse atoms to extract
    alpha : float,
Sparsity controlling parameter. Higher values lead to sparser
components.
ridge_alpha : float,
Amount of ridge shrinkage to apply in order to improve
conditioning when calling the transform method.
n_iter : int,
number of iterations to perform for each mini batch
callback : callable,
callable that gets invoked every five iterations
batch_size : int,
the number of features to take in each mini batch
verbose :
degree of output the procedure will print
shuffle : boolean,
whether to shuffle the data before splitting it in batches
n_jobs : int,
number of parallel jobs to run, or -1 to autodetect.
method : {'lars', 'cd'}
lars: uses the least angle regression method to solve the lasso problem
(linear_model.lars_path)
cd: uses the coordinate descent method to compute the
Lasso solution (linear_model.Lasso). Lars will be faster if
the estimated components are sparse.
random_state : int or RandomState
Pseudo number generator state used for random sampling.
Attributes
----------
components_ : array, [n_components, n_features]
Sparse components extracted from the data.
error_ : array
Vector of errors at each iteration.
n_iter_ : int
Number of iterations run.
See also
--------
PCA
SparsePCA
DictionaryLearning
"""
def __init__(self, n_components=None, alpha=1, ridge_alpha=0.01,
n_iter=100, callback=None, batch_size=3, verbose=False,
shuffle=True, n_jobs=1, method='lars', random_state=None):
self.n_components = n_components
self.alpha = alpha
self.ridge_alpha = ridge_alpha
self.n_iter = n_iter
self.callback = callback
self.batch_size = batch_size
self.verbose = verbose
self.shuffle = shuffle
self.n_jobs = n_jobs
self.method = method
self.random_state = random_state
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X: array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples
and n_features is the number of features.
Returns
-------
self : object
Returns the instance itself.
"""
random_state = check_random_state(self.random_state)
X = check_array(X)
if self.n_components is None:
n_components = X.shape[1]
else:
n_components = self.n_components
Vt, _, self.n_iter_ = dict_learning_online(
X.T, n_components, alpha=self.alpha,
n_iter=self.n_iter, return_code=True,
dict_init=None, verbose=self.verbose,
callback=self.callback,
batch_size=self.batch_size,
shuffle=self.shuffle,
n_jobs=self.n_jobs, method=self.method,
random_state=random_state,
return_n_iter=True)
self.components_ = Vt.T
return self
| bsd-3-clause |
CredoReference/edx-platform | lms/djangoapps/discussion/apps.py | 13 | 1211 | """
Discussion Application Configuration
Signal handlers are connected here.
"""
from django.apps import AppConfig
from openedx.core.constants import COURSE_ID_PATTERN
from openedx.core.djangoapps.plugins.constants import ProjectType, SettingsType, PluginURLs, PluginSettings
class DiscussionConfig(AppConfig):
"""
    Application Configuration for Discussion.
"""
name = u'lms.djangoapps.discussion'
plugin_app = {
PluginURLs.CONFIG: {
ProjectType.LMS: {
PluginURLs.NAMESPACE: u'',
PluginURLs.REGEX: r'^courses/{}/discussion/forum/'.format(COURSE_ID_PATTERN),
PluginURLs.RELATIVE_PATH: u'urls',
}
},
PluginSettings.CONFIG: {
ProjectType.CMS: {
SettingsType.COMMON: {PluginSettings.RELATIVE_PATH: u'settings.common'},
},
ProjectType.LMS: {
SettingsType.COMMON: {PluginSettings.RELATIVE_PATH: u'settings.common'},
},
}
}
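    # With the URL plugin config above, the LMS mounts this app's urls.py at
    # courses/<course_id>/discussion/forum/ (shown illustratively; the exact
    # course-id pattern comes from COURSE_ID_PATTERN).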
def ready(self):
"""
Connect handlers to send notifications about discussions.
"""
from .signals import handlers # pylint: disable=unused-variable
| agpl-3.0 |