| repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
LiaoPan/scikit-learn | examples/applications/plot_outlier_detection_housing.py | 243 | 5577 | """
====================================
Outlier detection on a real data set
====================================
This example illustrates the need for robust covariance estimation
on a real data set. It is useful both for outlier detection and for
a better understanding of the data structure.
We selected two sets of two variables from the Boston housing data set
as an illustration of what kind of analysis can be done with several
outlier detection tools. For the purpose of visualization, we are working
with two-dimensional examples, but one should be aware that things are
not so trivial in high dimension, as will be pointed out below.
In both examples below, the main result is that the empirical covariance
estimate, as a non-robust one, is highly influenced by the heterogeneous
structure of the observations. Although the robust covariance estimate is
able to focus on the main mode of the data distribution, it sticks to the
assumption that the data should be Gaussian distributed, yielding a somewhat
biased estimation of the data structure that is nevertheless accurate to
some extent.

First example
-------------
The first example illustrates how robust covariance estimation can help
concentrate on a relevant cluster when another one exists. Here, many
observations are confounded into one and break down the empirical covariance
estimation.
Of course, some screening tools would have pointed out the presence of two
clusters (Support Vector Machines, Gaussian Mixture Models, univariate
outlier detection, ...). But had it been a high-dimensional example, none
of these could be applied that easily.
Second example
--------------
The second example shows the ability of the Minimum Covariance Determinant
robust estimator of covariance to concentrate on the main mode of the data
distribution: the location seems to be well estimated, although the covariance
is hard to estimate due to the banana-shaped distribution. Anyway, we can
get rid of some outlying observations.
The One-Class SVM is able to capture the real data structure, but the
difficulty is to adjust its kernel bandwidth parameter so as to obtain
a good compromise between the shape of the data scatter matrix and the
risk of over-fitting the data.
"""
print(__doc__)
# Author: Virgile Fritsch <virgile.fritsch@inria.fr>
# License: BSD 3 clause
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn.datasets import load_boston
# Get data
X1 = load_boston()['data'][:, [8, 10]] # two clusters
X2 = load_boston()['data'][:, [5, 12]] # "banana"-shaped
# Define "classifiers" to be used
classifiers = {
    "Empirical Covariance": EllipticEnvelope(support_fraction=1.,
                                             contamination=0.261),
    "Robust Covariance (Minimum Covariance Determinant)":
        EllipticEnvelope(contamination=0.261),
    "OCSVM": OneClassSVM(nu=0.261, gamma=0.05)}
colors = ['m', 'g', 'b']
legend1 = {}
legend2 = {}
# Learn a frontier for outlier detection with several classifiers
xx1, yy1 = np.meshgrid(np.linspace(-8, 28, 500), np.linspace(3, 40, 500))
xx2, yy2 = np.meshgrid(np.linspace(3, 10, 500), np.linspace(-5, 45, 500))
for i, (clf_name, clf) in enumerate(classifiers.items()):
    plt.figure(1)
    clf.fit(X1)
    Z1 = clf.decision_function(np.c_[xx1.ravel(), yy1.ravel()])
    Z1 = Z1.reshape(xx1.shape)
    legend1[clf_name] = plt.contour(
        xx1, yy1, Z1, levels=[0], linewidths=2, colors=colors[i])
    plt.figure(2)
    clf.fit(X2)
    Z2 = clf.decision_function(np.c_[xx2.ravel(), yy2.ravel()])
    Z2 = Z2.reshape(xx2.shape)
    legend2[clf_name] = plt.contour(
        xx2, yy2, Z2, levels=[0], linewidths=2, colors=colors[i])
legend1_values_list = list(legend1.values())
legend1_keys_list = list(legend1.keys())
# Plot the results (= shape of the data points cloud)
plt.figure(1) # two clusters
plt.title("Outlier detection on a real data set (boston housing)")
plt.scatter(X1[:, 0], X1[:, 1], color='black')
bbox_args = dict(boxstyle="round", fc="0.8")
arrow_args = dict(arrowstyle="->")
plt.annotate("several confounded points", xy=(24, 19),
             xycoords="data", textcoords="data",
             xytext=(13, 10), bbox=bbox_args, arrowprops=arrow_args)
plt.xlim((xx1.min(), xx1.max()))
plt.ylim((yy1.min(), yy1.max()))
plt.legend((legend1_values_list[0].collections[0],
            legend1_values_list[1].collections[0],
            legend1_values_list[2].collections[0]),
           (legend1_keys_list[0], legend1_keys_list[1], legend1_keys_list[2]),
           loc="upper center",
           prop=matplotlib.font_manager.FontProperties(size=12))
plt.ylabel("accessibility to radial highways")
plt.xlabel("pupil-teacher ratio by town")
legend2_values_list = list(legend2.values())
legend2_keys_list = list(legend2.keys())
plt.figure(2) # "banana" shape
plt.title("Outlier detection on a real data set (boston housing)")
plt.scatter(X2[:, 0], X2[:, 1], color='black')
plt.xlim((xx2.min(), xx2.max()))
plt.ylim((yy2.min(), yy2.max()))
plt.legend((legend2_values_list[0].collections[0],
            legend2_values_list[1].collections[0],
            legend2_values_list[2].collections[0]),
           (legend2_keys_list[0], legend2_keys_list[1], legend2_keys_list[2]),
           loc="upper center",
           prop=matplotlib.font_manager.FontProperties(size=12))
plt.ylabel("% lower status of the population")
plt.xlabel("average number of rooms per dwelling")
plt.show()
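# A hedged aside, not part of the original example: after fitting, each of
# these estimators can also flag individual samples. With the pre-0.18
# scikit-learn API used in this file, negative decision_function values
# fall outside the learned frontier.
for clf_name, clf in classifiers.items():
    clf.fit(X1)
    n_outliers = (clf.decision_function(X1).ravel() < 0).sum()
    print("%s flags %d of %d points as outliers" % (clf_name, n_outliers, len(X1)))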
| bsd-3-clause |
umkay/zulip | zerver/lib/migrate.py | 8 | 4082 | from __future__ import print_function
from typing import Any, Callable, Dict, List, Tuple
from django.db.models.query import QuerySet
import re
import time
def timed_ddl(db, stmt):
    # type: (Any, str) -> None
    print()
    print(time.asctime())
    print(stmt)
    t = time.time()
    db.execute(stmt)
    delay = time.time() - t
    print('Took %.2fs' % (delay,))
def validate(sql_thingy):
    # type: (str) -> None
    # Do basic validation that table/col name is safe.
    if not re.match(r'^[a-z][a-z\d_]+$', sql_thingy):
        raise Exception('Invalid SQL object: %s' % (sql_thingy,))
def do_batch_update(db, table, cols, vals, batch_size=10000, sleep=0.1):
    # type: (Any, str, List[str], List[str], int, float) -> None
    validate(table)
    for col in cols:
        validate(col)
    stmt = '''
        UPDATE %s
        SET (%s) = (%s)
        WHERE id >= %%s AND id < %%s
    ''' % (table, ', '.join(cols), ', '.join(['%s'] * len(cols)))
    print(stmt)
    (min_id, max_id) = db.execute("SELECT MIN(id), MAX(id) FROM %s" % (table,))[0]
    if min_id is None:
        return
    print("%s rows need updating" % (max_id - min_id,))
    while min_id <= max_id:
        lower = min_id
        upper = min_id + batch_size
        print('%s about to update range [%s,%s)' % (time.asctime(), lower, upper))
        db.start_transaction()
        params = list(vals) + [lower, upper]
        db.execute(stmt, params=params)
        db.commit_transaction()
        min_id = upper
        time.sleep(sleep)
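# Illustrative only (not in the original module): a sketch of calling
# do_batch_update from a migration. The `db` object is assumed to be the
# database wrapper used throughout this module (it provides execute(),
# start_transaction() and commit_transaction()); the table and column
# names are hypothetical.
#
#     do_batch_update(db, 'zerver_message', ['rendered_content_version'],
#                     ['1'], batch_size=5000, sleep=0.25)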
def add_bool_columns(db, table, cols):
    # type: (Any, str, List[str]) -> None
    validate(table)
    for col in cols:
        validate(col)
    coltype = 'boolean'
    val = 'false'
    stmt = ('ALTER TABLE %s ' % (table,)) \
        + ', '.join(['ADD %s %s' % (col, coltype) for col in cols])
    timed_ddl(db, stmt)
    stmt = ('ALTER TABLE %s ' % (table,)) \
        + ', '.join(['ALTER %s SET DEFAULT %s' % (col, val) for col in cols])
    timed_ddl(db, stmt)
    vals = [val] * len(cols)
    do_batch_update(db, table, cols, vals)
    stmt = 'ANALYZE %s' % (table,)
    timed_ddl(db, stmt)
    stmt = ('ALTER TABLE %s ' % (table,)) \
        + ', '.join(['ALTER %s SET NOT NULL' % (col,) for col in cols])
    timed_ddl(db, stmt)
def create_index_if_nonexistant(db, table, col, index):
    # type: (Any, str, str, str) -> None
    validate(table)
    validate(col)
    validate(index)
    test = """SELECT relname FROM pg_class
              WHERE relname = %s"""
    if len(db.execute(test, params=[index])) != 0:
        print("Not creating index '%s' because it already exists" % (index,))
    else:
        stmt = "CREATE INDEX %s ON %s (%s)" % (index, table, col)
        timed_ddl(db, stmt)
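# Illustrative only: a hypothetical call creating a single-column index,
# with table, column, and index names chosen purely for the example.
#
#     create_index_if_nonexistant(db, 'zerver_message', 'sender_id',
#                                 'zerver_message_sender_id_idx')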
def act_on_message_ranges(db, orm, tasks, batch_size=5000, sleep=0.5):
    # type: (Any, Dict[str, Any], List[Tuple[Callable[[QuerySet], QuerySet], Callable[[QuerySet], None]]], int, float) -> None
    # tasks should be an array of (filterer, action) tuples
    # where filterer is a function that returns a filtered QuerySet
    # and action is a function that acts on a QuerySet
    all_objects = orm['zerver.Message'].objects
    try:
        min_id = all_objects.all().order_by('id')[0].id
    except IndexError:
        print('There is no work to do')
        return
    max_id = all_objects.all().order_by('-id')[0].id
    print("max_id = %d" % (max_id,))
    overhead = int((max_id + 1 - min_id) / batch_size * sleep / 60)
    print("Expect this to take at least %d minutes, just due to sleeps alone." % (overhead,))
    while min_id <= max_id:
        lower = min_id
        upper = min_id + batch_size - 1
        if upper > max_id:
            upper = max_id
        print('%s about to update range %s to %s' % (time.asctime(), lower, upper))
        db.start_transaction()
        for filterer, action in tasks:
            objects = all_objects.filter(id__range=(lower, upper))
            targets = filterer(objects)
            action(targets)
        db.commit_transaction()
        min_id = upper + 1
        time.sleep(sleep)
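# Illustrative only (not in the original module): one way to build the
# (filterer, action) tasks that act_on_message_ranges expects. The field
# name and render() helper are hypothetical.
#
#     def needs_rerender(qs):
#         return qs.filter(rendered_content=None)
#
#     def rerender(qs):
#         for message in qs:
#             message.rendered_content = render(message.content)
#             message.save()
#
#     act_on_message_ranges(db, orm, [(needs_rerender, rerender)])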
| apache-2.0 |
pwong-mapr/private-hue | desktop/core/ext-py/Django-1.4.5/django/core/mail/backends/smtp.py | 94 | 4022 | """SMTP email backend class."""
import smtplib
import socket
import threading
from django.conf import settings
from django.core.mail.backends.base import BaseEmailBackend
from django.core.mail.utils import DNS_NAME
from django.core.mail.message import sanitize_address
class EmailBackend(BaseEmailBackend):
    """
    A wrapper that manages the SMTP network connection.
    """
    def __init__(self, host=None, port=None, username=None, password=None,
                 use_tls=None, fail_silently=False, **kwargs):
        super(EmailBackend, self).__init__(fail_silently=fail_silently)
        self.host = host or settings.EMAIL_HOST
        self.port = port or settings.EMAIL_PORT
        if username is None:
            self.username = settings.EMAIL_HOST_USER
        else:
            self.username = username
        if password is None:
            self.password = settings.EMAIL_HOST_PASSWORD
        else:
            self.password = password
        if use_tls is None:
            self.use_tls = settings.EMAIL_USE_TLS
        else:
            self.use_tls = use_tls
        self.connection = None
        self._lock = threading.RLock()
    def open(self):
        """
        Ensures we have a connection to the email server. Returns whether or
        not a new connection was required (True or False).
        """
        if self.connection:
            # Nothing to do if the connection is already open.
            return False
        try:
            # If local_hostname is not specified, socket.getfqdn() gets used.
            # For performance, we use the cached FQDN for local_hostname.
            self.connection = smtplib.SMTP(self.host, self.port,
                                           local_hostname=DNS_NAME.get_fqdn())
            if self.use_tls:
                self.connection.ehlo()
                self.connection.starttls()
                self.connection.ehlo()
            if self.username and self.password:
                self.connection.login(self.username, self.password)
            return True
        except:
            if not self.fail_silently:
                raise
    def close(self):
        """Closes the connection to the email server."""
        try:
            try:
                self.connection.quit()
            except socket.sslerror:
                # This happens when calling quit() on a TLS connection
                # sometimes.
                self.connection.close()
            except:
                if self.fail_silently:
                    return
                raise
        finally:
            self.connection = None
    def send_messages(self, email_messages):
        """
        Sends one or more EmailMessage objects and returns the number of email
        messages sent.
        """
        if not email_messages:
            return
        self._lock.acquire()
        try:
            new_conn_created = self.open()
            if not self.connection:
                # We failed silently on open().
                # Trying to send would be pointless.
                return
            num_sent = 0
            for message in email_messages:
                sent = self._send(message)
                if sent:
                    num_sent += 1
            if new_conn_created:
                self.close()
        finally:
            self._lock.release()
        return num_sent
    def _send(self, email_message):
        """A helper method that does the actual sending."""
        if not email_message.recipients():
            return False
        from_email = sanitize_address(email_message.from_email, email_message.encoding)
        recipients = [sanitize_address(addr, email_message.encoding)
                      for addr in email_message.recipients()]
        try:
            self.connection.sendmail(from_email, recipients,
                                     email_message.message().as_string())
        except:
            if not self.fail_silently:
                raise
            return False
        return True
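# Illustrative only (not part of Django's source): the backend is normally
# reached via django.core.mail, but it can also be driven directly when one
# SMTP connection should be reused for a batch of messages; the addresses
# below are placeholders.
#
#     from django.core.mail import EmailMessage
#     backend = EmailBackend(fail_silently=False)
#     messages = [EmailMessage('subject', 'body', 'from@example.com',
#                              ['to@example.com'])]
#     num_sent = backend.send_messages(messages)  # opens, sends, closes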
| apache-2.0 |
ojengwa/oh-mainline | vendor/packages/python-openid/openid/test/storetest.py | 77 | 13015 | from openid.association import Association
from openid.cryptutil import randomString
from openid.store.nonce import mkNonce, split
import unittest
import string
import time
import socket
import random
import os
db_host = 'dbtest'
allowed_handle = []
for c in string.printable:
    if c not in string.whitespace:
        allowed_handle.append(c)
allowed_handle = ''.join(allowed_handle)

def generateHandle(n):
    return randomString(n, allowed_handle)

generateSecret = randomString

def getTmpDbName():
    hostname = socket.gethostname()
    hostname = hostname.replace('.', '_')
    hostname = hostname.replace('-', '_')
    return "%s_%d_%s_openid_test" % \
           (hostname, os.getpid(),
            random.randrange(1, int(time.time())))
def testStore(store):
    """Make sure a given store has a minimum of API compliance. Call
    this function with an empty store.

    Raises AssertionError if the store does not work as expected.

    OpenIDStore -> NoneType
    """
    ### Association functions
    now = int(time.time())
    server_url = 'http://www.myopenid.com/openid'

    def genAssoc(issued, lifetime=600):
        sec = generateSecret(20)
        hdl = generateHandle(128)
        return Association(hdl, sec, now + issued, lifetime, 'HMAC-SHA1')

    def checkRetrieve(url, handle=None, expected=None):
        retrieved_assoc = store.getAssociation(url, handle)
        assert retrieved_assoc == expected, (retrieved_assoc, expected)
        if expected is not None:
            if retrieved_assoc is expected:
                print ('Unexpected: retrieved a reference to the expected '
                       'value instead of a new object')
            assert retrieved_assoc.handle == expected.handle
            assert retrieved_assoc.secret == expected.secret

    def checkRemove(url, handle, expected):
        present = store.removeAssociation(url, handle)
        assert bool(expected) == bool(present)

    assoc = genAssoc(issued=0)

    # Make sure that a missing association returns no result
    checkRetrieve(server_url)

    # Check that after storage, getting returns the same result
    store.storeAssociation(server_url, assoc)
    checkRetrieve(server_url, None, assoc)

    # more than once
    checkRetrieve(server_url, None, assoc)

    # Storing more than once has no ill effect
    store.storeAssociation(server_url, assoc)
    checkRetrieve(server_url, None, assoc)

    # Removing an association that does not exist returns not present
    checkRemove(server_url, assoc.handle + 'x', False)

    # Removing an association that does not exist returns not present
    checkRemove(server_url + 'x', assoc.handle, False)

    # Removing an association that is present returns present
    checkRemove(server_url, assoc.handle, True)

    # but not present on subsequent calls
    checkRemove(server_url, assoc.handle, False)

    # Put assoc back in the store
    store.storeAssociation(server_url, assoc)

    # More recent and expires after assoc
    assoc2 = genAssoc(issued=1)
    store.storeAssociation(server_url, assoc2)

    # After storing an association with a different handle, but the
    # same server_url, the handle with the later issue date is returned.
    checkRetrieve(server_url, None, assoc2)

    # We can still retrieve the older association
    checkRetrieve(server_url, assoc.handle, assoc)

    # Plus we can retrieve the association with the later issue date
    # explicitly
    checkRetrieve(server_url, assoc2.handle, assoc2)

    # More recent, and expires earlier than assoc2 or assoc. Make sure
    # that we're picking the one with the latest issued date and not
    # taking into account the expiration.
    assoc3 = genAssoc(issued=2, lifetime=100)
    store.storeAssociation(server_url, assoc3)
    checkRetrieve(server_url, None, assoc3)

    checkRetrieve(server_url, assoc.handle, assoc)
    checkRetrieve(server_url, assoc2.handle, assoc2)
    checkRetrieve(server_url, assoc3.handle, assoc3)

    checkRemove(server_url, assoc2.handle, True)

    checkRetrieve(server_url, None, assoc3)
    checkRetrieve(server_url, assoc.handle, assoc)
    checkRetrieve(server_url, assoc2.handle, None)
    checkRetrieve(server_url, assoc3.handle, assoc3)

    checkRemove(server_url, assoc2.handle, False)
    checkRemove(server_url, assoc3.handle, True)

    checkRetrieve(server_url, None, assoc)
    checkRetrieve(server_url, assoc.handle, assoc)
    checkRetrieve(server_url, assoc2.handle, None)
    checkRetrieve(server_url, assoc3.handle, None)

    checkRemove(server_url, assoc2.handle, False)
    checkRemove(server_url, assoc.handle, True)
    checkRemove(server_url, assoc3.handle, False)

    checkRetrieve(server_url, None, None)
    checkRetrieve(server_url, assoc.handle, None)
    checkRetrieve(server_url, assoc2.handle, None)
    checkRetrieve(server_url, assoc3.handle, None)

    checkRemove(server_url, assoc2.handle, False)
    checkRemove(server_url, assoc.handle, False)
    checkRemove(server_url, assoc3.handle, False)

    ### test expired associations
    # assoc 1: server 1, valid
    # assoc 2: server 1, expired
    # assoc 3: server 2, expired
    # assoc 4: server 3, valid
    assocValid1 = genAssoc(issued=-3600, lifetime=7200)
    assocValid2 = genAssoc(issued=-5)
    assocExpired1 = genAssoc(issued=-7200, lifetime=3600)
    assocExpired2 = genAssoc(issued=-7200, lifetime=3600)

    store.cleanupAssociations()
    store.storeAssociation(server_url + '1', assocValid1)
    store.storeAssociation(server_url + '1', assocExpired1)
    store.storeAssociation(server_url + '2', assocExpired2)
    store.storeAssociation(server_url + '3', assocValid2)

    cleaned = store.cleanupAssociations()
    assert cleaned == 2, cleaned

    ### Nonce functions

    def checkUseNonce(nonce, expected, server_url, msg=''):
        stamp, salt = split(nonce)
        actual = store.useNonce(server_url, stamp, salt)
        assert bool(actual) == bool(expected), "%r != %r: %s" % (actual, expected,
                                                                 msg)

    for url in [server_url, '']:
        # Random nonce (not in store)
        nonce1 = mkNonce()

        # A nonce is allowed by default
        checkUseNonce(nonce1, True, url)

        # Storing once causes useNonce to return True the first, and only
        # the first, time it is called after the store.
        checkUseNonce(nonce1, False, url)
        checkUseNonce(nonce1, False, url)

        # Nonces from when the universe was an hour old should not pass these days.
        old_nonce = mkNonce(3600)
        checkUseNonce(old_nonce, False, url,
                      "Old nonce (%r) passed." % (old_nonce,))

    old_nonce1 = mkNonce(now - 20000)
    old_nonce2 = mkNonce(now - 10000)
    recent_nonce = mkNonce(now - 600)

    from openid.store import nonce as nonceModule
    orig_skew = nonceModule.SKEW
    try:
        nonceModule.SKEW = 0
        store.cleanupNonces()
        # Set SKEW high so stores will keep our nonces.
        nonceModule.SKEW = 100000
        assert store.useNonce(server_url, *split(old_nonce1))
        assert store.useNonce(server_url, *split(old_nonce2))
        assert store.useNonce(server_url, *split(recent_nonce))

        nonceModule.SKEW = 3600
        cleaned = store.cleanupNonces()
        assert cleaned == 2, "Cleaned %r nonces." % (cleaned,)

        nonceModule.SKEW = 100000
        # A roundabout method of checking that the old nonces were cleaned is
        # to see if we're allowed to add them again.
        assert store.useNonce(server_url, *split(old_nonce1))
        assert store.useNonce(server_url, *split(old_nonce2))
        # The recent nonce wasn't cleaned, so it should still fail.
        assert not store.useNonce(server_url, *split(recent_nonce))
    finally:
        nonceModule.SKEW = orig_skew
def test_filestore():
    from openid.store import filestore
    import tempfile
    import shutil
    try:
        temp_dir = tempfile.mkdtemp()
    except AttributeError:
        import os
        temp_dir = os.tmpnam()
        os.mkdir(temp_dir)

    store = filestore.FileOpenIDStore(temp_dir)
    try:
        testStore(store)
        store.cleanup()
    except:
        raise
    else:
        shutil.rmtree(temp_dir)

def test_sqlite():
    from openid.store import sqlstore
    try:
        from pysqlite2 import dbapi2 as sqlite
    except ImportError:
        pass
    else:
        conn = sqlite.connect(':memory:')
        store = sqlstore.SQLiteStore(conn)
        store.createTables()
        testStore(store)
def test_mysql():
    from openid.store import sqlstore
    try:
        import MySQLdb
    except ImportError:
        pass
    else:
        db_user = 'openid_test'
        db_passwd = ''
        db_name = getTmpDbName()

        from MySQLdb.constants import ER

        # Change this connect line to use the right user and password
        try:
            conn = MySQLdb.connect(user=db_user, passwd=db_passwd, host=db_host)
        except MySQLdb.OperationalError, why:
            if why[0] == 2005:
                print ('Skipping MySQL store test (cannot connect '
                       'to test server on host %r)' % (db_host,))
                return
            else:
                raise

        conn.query('CREATE DATABASE %s;' % db_name)
        try:
            conn.query('USE %s;' % db_name)

            # OK, we're in the right environment. Create store and
            # create the tables.
            store = sqlstore.MySQLStore(conn)
            store.createTables()

            # At last, we get to run the test.
            testStore(store)
        finally:
            # Remove the database. If you want to do post-mortem on a
            # failing test, comment out this line.
            conn.query('DROP DATABASE %s;' % db_name)
def test_postgresql():
    """
    Tests the PostgreSQLStore on a locally-hosted PostgreSQL database
    cluster, version 7.4 or later. To run this test, you must have:

    - The 'psycopg' python module (version 1.1) installed

    - PostgreSQL running locally

    - An 'openid_test' user account in your database cluster, which
      you can create by running 'createuser -Ad openid_test' as the
      'postgres' user

    - Trust auth for the 'openid_test' account, which you can activate
      by adding the following line to your pg_hba.conf file:

      local all openid_test trust

    This test connects to the database cluster three times:

    - To the 'template1' database, to create the test database

    - To the test database, to run the store tests

    - To the 'template1' database once more, to drop the test database
    """
    from openid.store import sqlstore
    try:
        import psycopg
    except ImportError:
        pass
    else:
        db_name = getTmpDbName()
        db_user = 'openid_test'

        # Connect once to create the database; reconnect to access the
        # new database.
        conn_create = psycopg.connect(database='template1', user=db_user,
                                      host=db_host)
        conn_create.autocommit()

        # Create the test database.
        cursor = conn_create.cursor()
        cursor.execute('CREATE DATABASE %s;' % (db_name,))
        conn_create.close()

        # Connect to the test database.
        conn_test = psycopg.connect(database=db_name, user=db_user,
                                    host=db_host)

        # OK, we're in the right environment. Create the store
        # instance and create the tables.
        store = sqlstore.PostgreSQLStore(conn_test)
        store.createTables()

        # At last, we get to run the test.
        testStore(store)

        # Disconnect.
        conn_test.close()

        # It takes a little time for the close() call above to take
        # effect, so we'll wait for a second before trying to remove
        # the database. (Maybe this is because we're using a UNIX
        # socket to connect to postgres rather than TCP?)
        import time
        time.sleep(1)

        # Remove the database now that the test is over.
        conn_remove = psycopg.connect(database='template1', user=db_user,
                                      host=db_host)
        conn_remove.autocommit()

        cursor = conn_remove.cursor()
        cursor.execute('DROP DATABASE %s;' % (db_name,))
        conn_remove.close()
def test_memstore():
    from openid.store import memstore
    testStore(memstore.MemoryStore())

test_functions = [
    test_filestore,
    test_sqlite,
    test_mysql,
    test_postgresql,
    test_memstore,
]

def pyUnitTests():
    tests = map(unittest.FunctionTestCase, test_functions)
    load = unittest.defaultTestLoader.loadTestsFromTestCase
    return unittest.TestSuite(tests)

if __name__ == '__main__':
    import sys
    suite = pyUnitTests()
    runner = unittest.TextTestRunner()
    result = runner.run(suite)
    if result.wasSuccessful():
        sys.exit(0)
    else:
        sys.exit(1)
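# Illustrative only (not in the original file): a single backend can also be
# exercised directly from an interpreter, without the unittest wrapper:
#
#     >>> from openid.test import storetest
#     >>> storetest.test_memstore()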
| agpl-3.0 |
mprinc/McMap | src/scripts/CSN_Archive/check_object_names.py | 1 | 4677 | #!/usr/bin/env python
# Copyright (c) 2015, Scott D. Peckham
#------------------------------------------------------
# S.D. Peckham
# July 9, 2015
#
# Tool to extract the object part of every CSDMS Standard
# Variable Name and generate a list of objects that
# includes those as well as all parent objects.
#
# Example of use at a Unix prompt:
#
# % ./check_object_names.py CSN_VarNames_v0.82.txt
#------------------------------------------------------
#
# Functions:
# check_objects()
#
#------------------------------------------------------
import os.path
import sys
#------------------------------------------------------
def check_objects( in_file='CSN_VarNames_v0.82.txt' ):

    #--------------------------------------------------
    # Open input file that contains copied names table
    #--------------------------------------------------
    try:
        in_unit = open( in_file, 'r' )
    except:
        print 'SORRY: Could not open TXT file named:'
        print ' ' + in_file

    #-------------------------
    # Open new CSV text file
    #-------------------------
    ## pos = in_file.rfind('.')
    ## prefix = in_file[0:pos]
    ## out_file = prefix + '.ttl'
    out_file = 'All_Object_Names.txt'
    #-------------------------------------------
    OUT_EXISTS = os.path.exists( out_file )
    if (OUT_EXISTS):
        print 'SORRY, A text file with the name'
        print ' ' + out_file
        print ' already exists.'
        return
    out_unit = open( out_file, 'w' )

    #---------------------------
    # Parse all variable names
    #---------------------------
    n_objects = 0
    object_list1 = list()
    object_list2 = list()
    while (True):
        #------------------------------
        # Read data line from in_file
        #------------------------------
        line = in_unit.readline()
        if (line == ''):
            break

        #--------------------------------------------------
        # Write object and quantity fullnames to TTL file
        #--------------------------------------------------
        line = line.strip()  # (strip leading/trailing white space)
        main_parts = line.split('__')
        object_fullname = main_parts[0]
        # quantity_fullname = main_parts[1]

        #------------------------------------
        # Append object name to object_list
        #------------------------------------
        object_list1.append( object_fullname )
        object_list2.append( object_fullname )

        #------------------------------------------------
        # Append all parent object names to object_list
        #------------------------------------------------
        object_name = object_fullname
        while (True):
            pos = object_name.rfind('_')
            if (pos < 0):
                break
            object_name = object_name[:pos]
            object_list2.append( object_name )

    #---------------------------------------------
    # Create sorted lists of unique object names
    # Not fastest method, but simple.
    #---------------------------------------------
    old_list = sorted( set(object_list1) )
    new_list = sorted( set(object_list2) )
    n_objects1 = len( old_list )
    n_objects2 = len( new_list )

    #--------------------------------------------
    # Write complete object list to output file
    #--------------------------------------------
    for k in xrange( n_objects2 ):
        out_unit.write( new_list[k] + '\n' )

    #----------------------
    # Close the input file
    #----------------------
    in_unit.close()

    #----------------------------
    # Close the TXT output file
    #----------------------------
    out_unit.close()

    print 'Finished checking all object names.'
    print 'Number of old object names =', n_objects1, '.'
    print 'Number of new object names =', n_objects2, '.'
    print ' '

# check_objects()
#------------------------------------------------------
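# Illustrative only (not in the original script): given a CSN object name
# such as 'atmosphere_air_flow', the parent-expansion loop above also emits
# 'atmosphere_air' and 'atmosphere', so all three appear in the output list.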
if (__name__ == "__main__"):

    #-----------------------------------------------------
    # Note: First arg in sys.argv is the command itself.
    #-----------------------------------------------------
    n_args = len(sys.argv)
    if (n_args < 2):
        print 'ERROR: This tool requires an input'
        print ' text file argument.'
        print 'sys.argv =', sys.argv
        print ' '
    elif (n_args == 2):
        check_objects( sys.argv[1] )
    else:
        print 'ERROR: Invalid number of arguments.'
#-----------------------------------------------------------------------
| mit |
liorvh/raspberry_pwn | src/pentest/fimap/singleScan.py | 8 | 5441 | #
# This file is part of fimap.
#
# Copyright(c) 2009-2010 Iman Karim(ikarim2s@smail.inf.fh-brs.de).
# http://fimap.googlecode.com
#
# This file may be licensed under the terms of of the
# GNU General Public License Version 2 (the ``GPL'').
#
# Software distributed under the License is distributed
# on an ``AS IS'' basis, WITHOUT WARRANTY OF ANY KIND, either
# express or implied. See the GPL for the specific language
# governing rights and limitations.
#
# You should have received a copy of the GPL along with this
# program. If not, go to http://www.gnu.org/licenses/gpl.html
# or write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
from baseClass import baseClass
from targetScanner import targetScanner
import sys, time
__author__="Iman Karim(ikarim2s@smail.inf.fh-brs.de)"
__date__ ="$03.09.2009 01:29:37$"
class singleScan(baseClass):
    def _load(self):
        self.URL = None
        self.quite = False

    def setURL(self, URL):
        self.URL = URL

    def setQuite(self, b):
        self.quite = b

    def scan(self):
        try:
            self.localLog("SingleScan is testing URL: '%s'" % self.URL)
            t = targetScanner(self.config)
            t.MonkeyTechnique = self.config["p_monkeymode"]
            idx = 0
            if (t.prepareTarget(self.URL)):
                res = t.testTargetVuln()
                if (len(res) == 0):
                    self.localLog("Target URL isn't affected by any file inclusion bug :(")
                else:
                    for i in res:
                        report = i[0]
                        files = i[1]
                        idx = idx + 1
                        boxarr = []
                        header = "[%d] Possible File Inclusion" % (idx)
                        if (report.getLanguage() != None):
                            header = "[%d] Possible %s-File Inclusion" % (idx, report.getLanguage())
                        boxarr.append(" [URL] %s" % report.getURL())
                        if (report.getPostData() != None and report.getPostData() != ""):
                            boxarr.append(" [POST] %s" % report.getPostData())
                        if (report.isPost):
                            boxarr.append(" [POSTPARM] %s" % report.getVulnKey())
                        else:
                            boxarr.append(" [PARAM] %s" % report.getVulnKey())
                        if (report.isBlindDiscovered()):
                            boxarr.append(" [PATH] Not received (Blindmode)")
                        else:
                            boxarr.append(" [PATH] %s" % report.getServerPath())
                        if (report.isUnix()):
                            boxarr.append(" [OS] Unix")
                        else:
                            boxarr.append(" [OS] Windows")
                        boxarr.append(" [TYPE] %s" % report.getType())
                        if (not report.isBlindDiscovered()):
                            if (report.isNullbytePossible() == None):
                                boxarr.append(" [NULLBYTE] No Need. It's clean.")
                            else:
                                if (report.isNullbytePossible()):
                                    boxarr.append(" [NULLBYTE] Works. :)")
                                else:
                                    boxarr.append(" [NULLBYTE] Doesn't work. :(")
                        else:
                            if (report.isNullbytePossible()):
                                boxarr.append(" [NULLBYTE] Is needed.")
                            else:
                                boxarr.append(" [NULLBYTE] Not tested.")
                        boxarr.append(" [READABLE FILES]")
                        if (len(files) == 0):
                            boxarr.append(" No Readable files found :(")
                        else:
                            fidx = 0
                            for file in files:
                                payload = "%s%s%s" % (report.getPrefix(), file, report.getSurfix())
                                if (file != payload):
                                    if report.isWindows() and file[1] == ":":
                                        file = file[3:]
                                    txt = " [%d] %s -> %s" % (fidx, file, payload)
                                    #if (fidx == 0): txt = txt.strip()
                                    boxarr.append(txt)
                                else:
                                    txt = " [%d] %s" % (fidx, file)
                                    #if (fidx == 0): txt = txt.strip()
                                    boxarr.append(txt)
                                fidx = fidx + 1
                        self.drawBox(header, boxarr)
        except KeyboardInterrupt:
            if (self.quite):  # We are in google mode.
                print "\nCancelled current target..."
                print "Press CTRL+C again in the next second to terminate fimap."
                try:
                    time.sleep(1)
                except KeyboardInterrupt:
                    raise
            else:  # We are in single mode. Simply raise the exception.
                raise

    def localLog(self, txt):
        if (not self.quite):
            print txt | gpl-3.0 |
PrFalken/exaproxy | lib/exaproxy/icap/response.py | 1 | 2403 |
class ICAPResponse (object):
    def __init__ (self, version, code, status, headers, icap_header, http_header):
        self.version = version
        self.code = code
        self.status = status
        self.headers = headers

        icap_len = len(icap_header)
        http_len = len(http_header)
        icap_end = icap_len

        if http_header:
            http_len_string = '%x\n' % http_len
            http_string = http_len_string + http_header + '0\n'
            http_offset = icap_end + len(http_len_string)
            http_end = http_offset + http_len
        else:
            http_string = http_header
            http_offset = icap_end
            http_end = icap_end

        self.response_view = memoryview(icap_header + http_string)
        self.icap_view = self.response_view[:icap_end]
        self.http_view = self.response_view[http_offset:http_end]

    @property
    def response_string (self):
        return self.response_view.tobytes()

    @property
    def icap_header (self):
        return self.icap_view.tobytes()

    @property
    def http_header (self):
        return self.http_view.tobytes()

    @property
    def pragma (self):
        return self.headers.get('pragma', {})

    @property
    def is_permit (self):
        return False

    @property
    def is_modify (self):
        return False

    @property
    def is_content (self):
        return False

    @property
    def is_intercept (self):
        return False


class ICAPRequestModification (ICAPResponse):
    def __init__ (self, version, code, status, headers, icap_header, http_header, intercept_header=None):
        ICAPResponse.__init__(self, version, code, status, headers, icap_header, http_header)
        self.intercept_header = intercept_header

    @property
    def is_permit (self):
        return self.code == 304

    @property
    def is_modify (self):
        return self.code == 200 and self.intercept_header is None

    @property
    def is_intercept (self):
        return self.code == 200 and self.intercept_header is not None


class ICAPResponseModification (ICAPResponse):
    @property
    def is_content (self):
        return self.code == 200


class ICAPResponseFactory:
    def __init__ (self, configuration):
        self.configuration = configuration

    def create (self, version, code, status, headers, icap_header, request_header, response_header, intercept_header=None):
        if response_header:
            response = ICAPResponseModification(version, code, status, headers, icap_header, response_header)
        else:
            response = ICAPRequestModification(version, code, status, headers, icap_header, request_header, intercept_header=intercept_header)

        return response
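# Illustrative only (not part of exaproxy): a sketch of driving the factory
# above once an ICAP message has been parsed; all argument values here are
# placeholders.
#
#     factory = ICAPResponseFactory(configuration=None)
#     response = factory.create('1.0', 304, 'Not Modified', {},
#                               icap_header, request_header, '')
#     if response.is_permit:
#         pass  # forward the original request unmodified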
| bsd-2-clause |
burntcustard/DeskBot-Zero | neural-net/keras/neuralNet.py | 1 | 4088 | '''
How to run:
$ source ~/Documents/tensorflow/bin/activate
$ cd Documents/DeskBot-Zero/neural-net/keras
$ python neuralNet.py
Heavily based on:
https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
'''
import os
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import deskBotData
batch_size = 100
epochs = 100
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_deskbot_distance_trained_model.h5'
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test), num_classes = deskBotData.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(num_classes, '(potential) classes')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initiate RMSprop optimizer
opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation.
    # This should be the main difference between rotation and distance training:
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0,  # randomly shift images horizontally (fraction of width)
        height_shift_range=0,  # randomly shift images vertically (fraction of height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images

    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)

    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(x_train, y_train,
                                     batch_size=batch_size),
                        epochs=epochs,
                        validation_data=(x_test, y_test),
                        workers=4)
# Save model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
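# Illustrative only (not in the original script): once saved, the model can
# be reloaded for inference; model_path is the path saved above, and inputs
# must match x_train's shape and 0-1 scaling.
#
#     from keras.models import load_model
#     reloaded = load_model(model_path)
#     probabilities = reloaded.predict(x_test[:1])  # shape: (1, num_classes)
#     print('predicted class:', probabilities.argmax())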
| mit |
BeDjango/intef-openedx | openedx/core/djangoapps/user_api/accounts/views.py | 21 | 8229 | """
NOTE: this API is WIP and has not yet been approved. Do not use this API without talking to Christina or Andy.
For more information, see:
https://openedx.atlassian.net/wiki/display/TNL/User+API
"""
from django.db import transaction
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from rest_framework import permissions
from openedx.core.lib.api.authentication import (
SessionAuthenticationAllowInactiveUser,
OAuth2AuthenticationAllowInactiveUser,
)
from ..errors import UserNotFound, UserNotAuthorized, AccountUpdateError, AccountValidationError
from openedx.core.lib.api.parsers import MergePatchParser
from .api import get_account_settings, update_account_settings
from .serializers import PROFILE_IMAGE_KEY_PREFIX
class AccountView(APIView):
    """
    **Use Cases**

        Get or update a user's account information. Updates are supported
        only through merge patch.

    **Example Requests**

        GET /api/user/v1/accounts/{username}/[?view=shared]

        PATCH /api/user/v1/accounts/{username}/ {"key":"value"} "application/merge-patch+json"

    **Response Values for GET**

        If no user exists with the specified username, an HTTP 404 "Not
        Found" response is returned.

        If the user makes the request for her own account, or makes a
        request for another account and has "is_staff" access, an HTTP 200
        "OK" response is returned. The response contains the following
        values.

        * bio: null or textual representation of user biographical
          information ("about me").
        * country: An ISO 3166 country code or null.
        * date_joined: The date the account was created, in the string
          format provided by datetime. For example, "2014-08-26T17:52:11Z".
        * email: Email address for the user. New email addresses must be confirmed
          via a confirmation email, so GET does not reflect the change until
          the address has been confirmed.
        * gender: One of the following values:

            * null
            * "f"
            * "m"
            * "o"

        * goals: The textual representation of the user's goals, or null.
        * is_active: Boolean representation of whether a user is active.
        * language: The user's preferred language, or null.
        * language_proficiencies: Array of language preferences. Each
          preference is a JSON object with the following keys:

            * "code": string ISO 639-1 language code e.g. "en".

        * level_of_education: One of the following values:

            * "p": PhD or Doctorate
            * "m": Master's or professional degree
            * "b": Bachelor's degree
            * "a": Associate's degree
            * "hs": Secondary/high school
            * "jhs": Junior secondary/junior high/middle school
            * "el": Elementary/primary school
            * "none": None
            * "o": Other
            * null: The user did not enter a value

        * mailing_address: The textual representation of the user's mailing
          address, or null.
        * name: The full name of the user.
        * profile_image: A JSON representation of a user's profile image
          information. This representation has the following keys.

            * "has_image": Boolean indicating whether the user has a profile
              image.
            * "image_url_*": Absolute URL to various sizes of a user's
              profile image, where '*' matches a representation of the
              corresponding image size, such as 'small', 'medium', 'large',
              and 'full'. These are configurable via PROFILE_IMAGE_SIZES_MAP.

        * requires_parental_consent: True if the user is a minor
          requiring parental consent.
        * username: The username associated with the account.
        * year_of_birth: The year the user was born, as an integer, or null.
        * account_privacy: The user's setting for sharing her personal
          profile. Possible values are "all_users" or "private".

        For all text fields, plain text instead of HTML is supported. The
        data is stored exactly as specified. Clients must HTML escape
        rendered values to avoid script injections.

        If a user who does not have "is_staff" access requests account
        information for a different user, only a subset of these fields is
        returned. The returned fields depend on the
        ACCOUNT_VISIBILITY_CONFIGURATION configuration setting and the
        visibility preference of the user for whom data is requested.

        Note that a user can view which account fields they have shared
        with other users by requesting their own username and providing
        the "view=shared" URL parameter.

    **Response Values for PATCH**

        Users can only modify their own account information. If the
        requesting user does not have the specified username and has staff
        access, the request returns an HTTP 403 "Forbidden" response. If
        the requesting user does not have staff access, the request
        returns an HTTP 404 "Not Found" response to avoid revealing the
        existence of the account.

        If no user exists with the specified username, an HTTP 404 "Not
        Found" response is returned.

        If "application/merge-patch+json" is not the specified content
        type, a 415 "Unsupported Media Type" response is returned.

        If validation errors prevent the update, this method returns a 400
        "Bad Request" response that includes a "field_errors" field that
        lists all error messages.

        If a failure at the time of the update prevents the update, a 400
        "Bad Request" error is returned. The JSON collection contains
        specific errors.

        If the update is successful, updated user account data is returned.
    """
    authentication_classes = (OAuth2AuthenticationAllowInactiveUser, SessionAuthenticationAllowInactiveUser)
    permission_classes = (permissions.IsAuthenticated,)
    parser_classes = (MergePatchParser,)
    def get(self, request, username):
        """
        GET /api/user/v1/accounts/{username}/
        """
        try:
            account_settings = get_account_settings(request, username, view=request.query_params.get('view'))
        except UserNotFound:
            return Response(status=status.HTTP_403_FORBIDDEN if request.user.is_staff else status.HTTP_404_NOT_FOUND)

        return Response(account_settings)
    def patch(self, request, username):
        """
        PATCH /api/user/v1/accounts/{username}/

        Note that this implementation is the "merge patch" implementation proposed in
        https://tools.ietf.org/html/rfc7396. The content_type must be "application/merge-patch+json" or
        else an error response with status code 415 will be returned.
        """
        try:
            with transaction.atomic():
                update_account_settings(request.user, request.data, username=username)
                account_settings = get_account_settings(request, username)
        except UserNotAuthorized:
            return Response(status=status.HTTP_403_FORBIDDEN if request.user.is_staff else status.HTTP_404_NOT_FOUND)
        except UserNotFound:
            return Response(status=status.HTTP_404_NOT_FOUND)
        except AccountValidationError as err:
            return Response({"field_errors": err.field_errors}, status=status.HTTP_400_BAD_REQUEST)
        except AccountUpdateError as err:
            return Response(
                {
                    "developer_message": err.developer_message,
                    "user_message": err.user_message
                },
                status=status.HTTP_400_BAD_REQUEST
            )

        return Response(account_settings)
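    # Illustrative only (not part of the module): a client-side sketch of the
    # merge-patch update handled above, using the `requests` library; the
    # host, credentials, and field value are placeholders.
    #
    #     import requests
    #     requests.patch(
    #         'https://lms.example.com/api/user/v1/accounts/someuser',
    #         json={"goals": "Learn Python"},
    #         headers={'Content-Type': 'application/merge-patch+json',
    #                  'Authorization': 'Bearer <token>'},
    #     )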
| agpl-3.0 |
anthonyryan1/xbmc | lib/gtest/test/gtest_shuffle_test.py | 3023 | 12549 | #!/usr/bin/env python
#
# Copyright 2009 Google Inc. All Rights Reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Verifies that test shuffling works."""
__author__ = 'wan@google.com (Zhanyong Wan)'
import os
import gtest_test_utils
# Command to run the gtest_shuffle_test_ program.
COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_shuffle_test_')
# The environment variables for test sharding.
TOTAL_SHARDS_ENV_VAR = 'GTEST_TOTAL_SHARDS'
SHARD_INDEX_ENV_VAR = 'GTEST_SHARD_INDEX'
TEST_FILTER = 'A*.A:A*.B:C*'
ALL_TESTS = []
ACTIVE_TESTS = []
FILTERED_TESTS = []
SHARDED_TESTS = []
SHUFFLED_ALL_TESTS = []
SHUFFLED_ACTIVE_TESTS = []
SHUFFLED_FILTERED_TESTS = []
SHUFFLED_SHARDED_TESTS = []
def AlsoRunDisabledTestsFlag():
  return '--gtest_also_run_disabled_tests'


def FilterFlag(test_filter):
  return '--gtest_filter=%s' % (test_filter,)


def RepeatFlag(n):
  return '--gtest_repeat=%s' % (n,)


def ShuffleFlag():
  return '--gtest_shuffle'


def RandomSeedFlag(n):
  return '--gtest_random_seed=%s' % (n,)


def RunAndReturnOutput(extra_env, args):
  """Runs the test program and returns its output."""

  environ_copy = os.environ.copy()
  environ_copy.update(extra_env)

  return gtest_test_utils.Subprocess([COMMAND] + args, env=environ_copy).output


def GetTestsForAllIterations(extra_env, args):
  """Runs the test program and returns a list of test lists.

  Args:
    extra_env: a map from environment variables to their values
    args: command line flags to pass to gtest_shuffle_test_

  Returns:
    A list where the i-th element is the list of tests run in the i-th
    test iteration.
  """

  test_iterations = []
  for line in RunAndReturnOutput(extra_env, args).split('\n'):
    if line.startswith('----'):
      tests = []
      test_iterations.append(tests)
    elif line.strip():
      tests.append(line.strip())  # 'TestCaseName.TestName'

  return test_iterations


def GetTestCases(tests):
  """Returns a list of test cases in the given full test names.

  Args:
    tests: a list of full test names

  Returns:
    A list of test cases from 'tests', in their original order.
    Consecutive duplicates are removed.
  """

  test_cases = []
  for test in tests:
    test_case = test.split('.')[0]
    if not test_case in test_cases:
      test_cases.append(test_case)

  return test_cases


def CalculateTestLists():
  """Calculates the list of tests run under different flags."""

  if not ALL_TESTS:
    ALL_TESTS.extend(
        GetTestsForAllIterations({}, [AlsoRunDisabledTestsFlag()])[0])

  if not ACTIVE_TESTS:
    ACTIVE_TESTS.extend(GetTestsForAllIterations({}, [])[0])

  if not FILTERED_TESTS:
    FILTERED_TESTS.extend(
        GetTestsForAllIterations({}, [FilterFlag(TEST_FILTER)])[0])

  if not SHARDED_TESTS:
    SHARDED_TESTS.extend(
        GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
                                  SHARD_INDEX_ENV_VAR: '1'},
                                 [])[0])

  if not SHUFFLED_ALL_TESTS:
    SHUFFLED_ALL_TESTS.extend(GetTestsForAllIterations(
        {}, [AlsoRunDisabledTestsFlag(), ShuffleFlag(), RandomSeedFlag(1)])[0])

  if not SHUFFLED_ACTIVE_TESTS:
    SHUFFLED_ACTIVE_TESTS.extend(GetTestsForAllIterations(
        {}, [ShuffleFlag(), RandomSeedFlag(1)])[0])

  if not SHUFFLED_FILTERED_TESTS:
    SHUFFLED_FILTERED_TESTS.extend(GetTestsForAllIterations(
        {}, [ShuffleFlag(), RandomSeedFlag(1), FilterFlag(TEST_FILTER)])[0])

  if not SHUFFLED_SHARDED_TESTS:
    SHUFFLED_SHARDED_TESTS.extend(
        GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
                                  SHARD_INDEX_ENV_VAR: '1'},
                                 [ShuffleFlag(), RandomSeedFlag(1)])[0])
class GTestShuffleUnitTest(gtest_test_utils.TestCase):
  """Tests test shuffling."""

  def setUp(self):
    CalculateTestLists()

  def testShufflePreservesNumberOfTests(self):
    self.assertEqual(len(ALL_TESTS), len(SHUFFLED_ALL_TESTS))
    self.assertEqual(len(ACTIVE_TESTS), len(SHUFFLED_ACTIVE_TESTS))
    self.assertEqual(len(FILTERED_TESTS), len(SHUFFLED_FILTERED_TESTS))
    self.assertEqual(len(SHARDED_TESTS), len(SHUFFLED_SHARDED_TESTS))

  def testShuffleChangesTestOrder(self):
    self.assert_(SHUFFLED_ALL_TESTS != ALL_TESTS, SHUFFLED_ALL_TESTS)
    self.assert_(SHUFFLED_ACTIVE_TESTS != ACTIVE_TESTS, SHUFFLED_ACTIVE_TESTS)
    self.assert_(SHUFFLED_FILTERED_TESTS != FILTERED_TESTS,
                 SHUFFLED_FILTERED_TESTS)
    self.assert_(SHUFFLED_SHARDED_TESTS != SHARDED_TESTS,
                 SHUFFLED_SHARDED_TESTS)

  def testShuffleChangesTestCaseOrder(self):
    self.assert_(GetTestCases(SHUFFLED_ALL_TESTS) != GetTestCases(ALL_TESTS),
                 GetTestCases(SHUFFLED_ALL_TESTS))
    self.assert_(
        GetTestCases(SHUFFLED_ACTIVE_TESTS) != GetTestCases(ACTIVE_TESTS),
        GetTestCases(SHUFFLED_ACTIVE_TESTS))
    self.assert_(
        GetTestCases(SHUFFLED_FILTERED_TESTS) != GetTestCases(FILTERED_TESTS),
        GetTestCases(SHUFFLED_FILTERED_TESTS))
    self.assert_(
        GetTestCases(SHUFFLED_SHARDED_TESTS) != GetTestCases(SHARDED_TESTS),
        GetTestCases(SHUFFLED_SHARDED_TESTS))

  def testShuffleDoesNotRepeatTest(self):
    for test in SHUFFLED_ALL_TESTS:
      self.assertEqual(1, SHUFFLED_ALL_TESTS.count(test),
                       '%s appears more than once' % (test,))
    for test in SHUFFLED_ACTIVE_TESTS:
      self.assertEqual(1, SHUFFLED_ACTIVE_TESTS.count(test),
                       '%s appears more than once' % (test,))
    for test in SHUFFLED_FILTERED_TESTS:
      self.assertEqual(1, SHUFFLED_FILTERED_TESTS.count(test),
                       '%s appears more than once' % (test,))
    for test in SHUFFLED_SHARDED_TESTS:
      self.assertEqual(1, SHUFFLED_SHARDED_TESTS.count(test),
                       '%s appears more than once' % (test,))

  def testShuffleDoesNotCreateNewTest(self):
    for test in SHUFFLED_ALL_TESTS:
      self.assert_(test in ALL_TESTS, '%s is an invalid test' % (test,))
    for test in SHUFFLED_ACTIVE_TESTS:
      self.assert_(test in ACTIVE_TESTS, '%s is an invalid test' % (test,))
    for test in SHUFFLED_FILTERED_TESTS:
      self.assert_(test in FILTERED_TESTS, '%s is an invalid test' % (test,))
    for test in SHUFFLED_SHARDED_TESTS:
      self.assert_(test in SHARDED_TESTS, '%s is an invalid test' % (test,))

  def testShuffleIncludesAllTests(self):
    for test in ALL_TESTS:
      self.assert_(test in SHUFFLED_ALL_TESTS, '%s is missing' % (test,))
    for test in ACTIVE_TESTS:
      self.assert_(test in SHUFFLED_ACTIVE_TESTS, '%s is missing' % (test,))
    for test in FILTERED_TESTS:
      self.assert_(test in SHUFFLED_FILTERED_TESTS, '%s is missing' % (test,))
    for test in SHARDED_TESTS:
      self.assert_(test in SHUFFLED_SHARDED_TESTS, '%s is missing' % (test,))

  def testShuffleLeavesDeathTestsAtFront(self):
    non_death_test_found = False
    for test in SHUFFLED_ACTIVE_TESTS:
      if 'DeathTest.' in test:
        self.assert_(not non_death_test_found,
                     '%s appears after a non-death test' % (test,))
      else:
        non_death_test_found = True

  def _VerifyTestCasesDoNotInterleave(self, tests):
    test_cases = []
    for test in tests:
      [test_case, _] = test.split('.')
      if test_cases and test_cases[-1] != test_case:
        test_cases.append(test_case)
        self.assertEqual(1, test_cases.count(test_case),
                         'Test case %s is not grouped together in %s' %
                         (test_case, tests))

  def testShuffleDoesNotInterleaveTestCases(self):
    self._VerifyTestCasesDoNotInterleave(SHUFFLED_ALL_TESTS)
    self._VerifyTestCasesDoNotInterleave(SHUFFLED_ACTIVE_TESTS)
    self._VerifyTestCasesDoNotInterleave(SHUFFLED_FILTERED_TESTS)
    self._VerifyTestCasesDoNotInterleave(SHUFFLED_SHARDED_TESTS)

  def testShuffleRestoresOrderAfterEachIteration(self):
    # Get the test lists in all 3 iterations, using random seed 1, 2,
    # and 3 respectively.  Google Test picks a different seed in each
    # iteration, and this test depends on the current implementation
    # picking successive numbers.  This dependency is not ideal, but
    # makes the test much easier to write.
    [tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = (
        GetTestsForAllIterations(
            {}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)]))

    # Make sure running the tests with random seed 1 gets the same
    # order as in iteration 1 above.
    [tests_with_seed1] = GetTestsForAllIterations(
        {}, [ShuffleFlag(), RandomSeedFlag(1)])
    self.assertEqual(tests_in_iteration1, tests_with_seed1)

    # Make sure running the tests with random seed 2 gets the same
    # order as in iteration 2 above.  Success means that Google Test
    # correctly restores the test order before re-shuffling at the
    # beginning of iteration 2.
    [tests_with_seed2] = GetTestsForAllIterations(
        {}, [ShuffleFlag(), RandomSeedFlag(2)])
    self.assertEqual(tests_in_iteration2, tests_with_seed2)

    # Make sure running the tests with random seed 3 gets the same
    # order as in iteration 3 above.  Success means that Google Test
    # correctly restores the test order before re-shuffling at the
    # beginning of iteration 3.
    [tests_with_seed3] = GetTestsForAllIterations(
        {}, [ShuffleFlag(), RandomSeedFlag(3)])
    self.assertEqual(tests_in_iteration3, tests_with_seed3)

  def testShuffleGeneratesNewOrderInEachIteration(self):
    [tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = (
        GetTestsForAllIterations(
            {}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)]))

    self.assert_(tests_in_iteration1 != tests_in_iteration2,
                 tests_in_iteration1)
    self.assert_(tests_in_iteration1 != tests_in_iteration3,
                 tests_in_iteration1)
    self.assert_(tests_in_iteration2 != tests_in_iteration3,
                 tests_in_iteration2)

  def testShuffleShardedTestsPreservesPartition(self):
    # If we run M tests on N shards, the same M tests should be run in
    # total, regardless of the random seeds used by the shards.
    [tests1] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
                                         SHARD_INDEX_ENV_VAR: '0'},
                                        [ShuffleFlag(), RandomSeedFlag(1)])
    [tests2] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
                                         SHARD_INDEX_ENV_VAR: '1'},
                                        [ShuffleFlag(), RandomSeedFlag(20)])
    [tests3] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3',
                                         SHARD_INDEX_ENV_VAR: '2'},
                                        [ShuffleFlag(), RandomSeedFlag(25)])
    sorted_sharded_tests = tests1 + tests2 + tests3
    sorted_sharded_tests.sort()
    sorted_active_tests = []
    sorted_active_tests.extend(ACTIVE_TESTS)
    sorted_active_tests.sort()
    self.assertEqual(sorted_active_tests, sorted_sharded_tests)


if __name__ == '__main__':
  gtest_test_utils.Main()
| gpl-2.0 |
chidea/GoPythonDLLWrapper | bin/lib/test/test_compare.py | 12 | 3994 | import unittest
from test import support
class Empty:
    def __repr__(self):
        return '<Empty>'

class Cmp:
    def __init__(self, arg):
        self.arg = arg

    def __repr__(self):
        return '<Cmp %s>' % self.arg

    def __eq__(self, other):
        return self.arg == other

class Anything:
    def __eq__(self, other):
        return True

    def __ne__(self, other):
        return False

class ComparisonTest(unittest.TestCase):
    set1 = [2, 2.0, 2, 2+0j, Cmp(2.0)]
    set2 = [[1], (3,), None, Empty()]
    candidates = set1 + set2

    def test_comparisons(self):
        for a in self.candidates:
            for b in self.candidates:
                if ((a in self.set1) and (b in self.set1)) or a is b:
                    self.assertEqual(a, b)
                else:
                    self.assertNotEqual(a, b)

    def test_id_comparisons(self):
        # Ensure default comparison compares id() of args
        L = []
        for i in range(10):
            L.insert(len(L)//2, Empty())
        for a in L:
            for b in L:
                self.assertEqual(a == b, id(a) == id(b),
                                 'a=%r, b=%r' % (a, b))

    def test_ne_defaults_to_not_eq(self):
        a = Cmp(1)
        b = Cmp(1)
        c = Cmp(2)
        self.assertIs(a == b, True)
        self.assertIs(a != b, False)
        self.assertIs(a != c, True)

    def test_ne_high_priority(self):
        """object.__ne__() should allow reflected __ne__() to be tried"""
        calls = []
        class Left:
            # Inherits object.__ne__()
            def __eq__(*args):
                calls.append('Left.__eq__')
                return NotImplemented
        class Right:
            def __eq__(*args):
                calls.append('Right.__eq__')
                return NotImplemented
            def __ne__(*args):
                calls.append('Right.__ne__')
                return NotImplemented
        Left() != Right()
        self.assertSequenceEqual(calls, ['Left.__eq__', 'Right.__ne__'])

    def test_ne_low_priority(self):
        """object.__ne__() should not invoke reflected __eq__()"""
        calls = []
        class Base:
            # Inherits object.__ne__()
            def __eq__(*args):
                calls.append('Base.__eq__')
                return NotImplemented
        class Derived(Base):  # Subclassing forces higher priority
            def __eq__(*args):
                calls.append('Derived.__eq__')
                return NotImplemented
            def __ne__(*args):
                calls.append('Derived.__ne__')
                return NotImplemented
        Base() != Derived()
        self.assertSequenceEqual(calls, ['Derived.__ne__', 'Base.__eq__'])

    def test_other_delegation(self):
        """No default delegation between operations except __ne__()"""
        ops = (
            ('__eq__', lambda a, b: a == b),
            ('__lt__', lambda a, b: a < b),
            ('__le__', lambda a, b: a <= b),
            ('__gt__', lambda a, b: a > b),
            ('__ge__', lambda a, b: a >= b),
        )
        for name, func in ops:
            with self.subTest(name):
                def unexpected(*args):
                    self.fail('Unexpected operator method called')
                class C:
                    __ne__ = unexpected
                for other, _ in ops:
                    if other != name:
                        setattr(C, other, unexpected)
                if name == '__eq__':
                    self.assertIs(func(C(), object()), False)
                else:
                    self.assertRaises(TypeError, func, C(), object())

    def test_issue_1393(self):
        x = lambda: None
        self.assertEqual(x, Anything())
        self.assertEqual(Anything(), x)
        y = object()
        self.assertEqual(y, Anything())
        self.assertEqual(Anything(), y)

def test_main():
    support.run_unittest(ComparisonTest)

if __name__ == '__main__':
    test_main()
| mit |
yoe/veyepar | dj/scripts/enc.py | 1 | 23477 | #!/usr/bin/python
"""
assembles raw cuts into the final video: adds titles, tweaks audio, and encodes to formats for upload.
"""
import re
import os
import sys
import subprocess
import xml.etree.ElementTree
from mk_mlt import mk_mlt
import pprint
from process import process
from main.models import Client, Show, Location, Episode, Raw_File, Cut_List
class enc(process):
ready_state = 2
def mk_title_svg(self, raw_svg, texts):
"""
Make a title slide by filling in a pre-made svg with name/authors.
return: svg
"""
tree = xml.etree.ElementTree.XMLID(raw_svg)
for key in texts:
if self.options.verbose:
print("looking for:", key)
# tolerate templates where tokens have been removed
if key in tree[1]:
if key == "license":
# CC license image
if self.options.verbose:
print("found in svg:", tree[1][key])
print("replacing with:", texts[key])
t = tree[1][key]
# import code; code.interact(local=locals())
if texts[key] is None:
# del(tree[1][key])
# print tree[1].has_key(key)
tree[1][key].clear()
else:
t.set('{http://www.w3.org/1999/xlink}href', texts[key])
elif key == "date":
if self.options.verbose:
print("found in svg:", tree[1][key].text)
print("replacing with:", re.split(',',texts[key])[0]) # .encode()
tree[1][key].text = re.split(',',texts[key])[0]
else:
if self.options.verbose:
print("found in svg:", tree[1][key].text)
print("replacing with:", texts[key]) # .encode()
tree[1][key].text = texts[key]
# cooked_svg = xml.etree.ElementTree.tostring(tree[0])
# print "testing...", "license" in cooked_svg
if 'presenternames' in tree[1]:
# some people like to add spiffy text near the presenter name(s)
if texts['authors']:
# prefix = u"Featuring" if "," in texts['authors'] else "By"
# tree[1]['presenternames'].text=u"%s %s" % (prefix,texts['authors'])
tree[1]['presenternames'].text = texts['authors']
else:
# remove the text (there is a placeholder to make editing sane)
tree[1]['presenternames'].text = ""
cooked_svg = xml.etree.ElementTree.tostring(tree[0]).decode('ascii')
return cooked_svg
def get_title_text(self, episode):
# lets try putting (stuff) on a new line
title = episode.name
authors = episode.authors
if episode.show.slug == 'write_docs_na_2016':
title = title.upper()
authors = authors.upper()
if False and episode.show.slug != 'pygotham_2015' and len(title) > 80: # crazy long titles need all the lines
title2 = ''
elif ": " in title: # the space keeps 9:00 from breaking
pos = title.index(":") + 1
title, title2 = title[:pos], title[pos:].strip()
elif " - " in title:
# error if there is more than 1.
title, title2 = title.split(' - ')
elif " -- " in title:
# error if there is more than 1.
title, title2 = title.split(' -- ')
elif " (" in title:
pos = title.index(" (")
# +1 skip space in " ("
title, title2 = title[:pos], title[pos + 1:]
elif " using " in title:
pos = title.index(" using ")
title, title2 = title[:pos], title[pos + 1:]
elif ";" in title:
pos = title.index(";") + 1
title, title2 = title[:pos], title[pos:].strip()
elif "? " in title: # ?(space) to not break on 'can you?'
pos = title.index("?") + 1
title, title2 = title[:pos], title[pos:].strip()
elif ". " in title:
pos = title.index(". ") + 1
title, title2 = title[:pos], title[pos:].strip()
else:
title2 = ""
if episode.license:
license = "cc/{}.svg".format(episode.license.lower())
else:
license = None
if episode.tags:
tags = episode.tags.split(',')
tag1 = tags[0]
else:
tags = []
tag1 = ''
"""
# split authors over two objects
# breaking on comma, not space.
if ',' in authors:
authors = authors.split(', ')
author2 = ', '.join(authors[1:])
authors = authors[0].strip()
else:
author2 = ''
"""
author2 = ''
date = episode.start.strftime("%B %-d, %Y")
# DebConf style
# date = episode.start.strftime("%Y-%m-%-d")
texts = {
'client': episode.show.client.name,
'show': episode.show.name,
'title': title,
'title2': title2,
'tag1': tag1,
'authors': authors,
'author2': author2,
'presentertitle': "",
'twitter_id': episode.twitter_id,
'date': date,
'time': episode.start.strftime("%H:%M"),
'license': license,
'room': episode.location.name,
}
return texts
def svg2png(self, svg_name, png_name, episode):
"""
Make a title slide png file.
melt uses librsvg which doesn't support flow,
which is needed for long titles, so render it to a .png using inkscape
"""
# create png file
# inkscape does not return an error code on failure
# so clean up previous run and
# check for the existence of a new png
if os.path.exists(png_name):
os.remove(png_name)
cmd = ["inkscape", svg_name,
"--export-png", png_name,
# "--export-width", "720",
]
ret = self.run_cmds(episode, [cmd])
ret = os.path.exists(png_name)
# if self.options.verbose: print cooked_svg
if self.options.verbose:
print(png_name)
if not ret:
print("svg:", svg_name)
png_name = None
return png_name
def mk_title(self, episode):
# make a title slide
# if we find titles/custom/(slug).svg, use that
# else make one from the template
custom_svg_name = os.path.join( "..",
"custom", "titles", episode.slug + ".svg")
if self.options.verbose: print("custom:", custom_svg_name)
abs_path = os.path.join( self.show_dir, "tmp", custom_svg_name )
if os.path.exists(abs_path):
# cooked_svg_name = custom_svg_name
cooked_svg_name = abs_path
else:
svg_name = episode.show.client.title_svg
print(svg_name)
template = os.path.join(
os.path.split(os.path.abspath(__file__))[0],
"bling",
svg_name)
raw_svg = open(template).read()
# happy_filename = episode.slug.encode('utf-8')
happy_filename = episode.slug
# happy_filename = ''.join([c for c in happy_filename if c.isalpha()])
# title_base = os.path.join(self.show_dir, "titles", happy_filename)
title_base = os.path.join("..", "titles", happy_filename)
texts = self.get_title_text(episode)
cooked_svg = self.mk_title_svg(raw_svg, texts)
# save svg to a file
# strip 'broken' chars because inkscape can't handle the truth
# output_base=''.join([ c for c in output_base if c.isalpha()])
# output_base=''.join([ c for c in output_base if ord(c)<128])
# output_base=output_base.encode('utf-8','ignore')
cooked_svg_name = os.path.join(
self.show_dir, "titles", '{}.svg'.format(episode.slug))
open(cooked_svg_name, 'w').write(cooked_svg)
png_name = os.path.join( "..",
"titles", '{}.png'.format(episode.slug))
abs_path = os.path.join( self.show_dir, "tmp", png_name )
title_img = self.svg2png(cooked_svg_name, abs_path, episode)
if title_img is None:
print("missing title png")
return False
return png_name
def get_params(self, episode, rfs, cls):
"""
assemble a dict of params to send to mk_mlt
mlt template, title screen image,
filter parameters (currently just audio)
and cutlist+raw filenames
"""
def get_title(episode):
# if we find show_dir/custom/titles/(slug).svg, use that
# else make one from the template
custom_png_name = os.path.join(
self.show_dir, "custom", "titles", episode.slug + ".png")
print("custom:", custom_png_name)
if os.path.exists(custom_png_name):
title_img = custom_png_name
else:
title_img = self.mk_title(episode)
return title_img
def get_foot(episode):
credits_img = episode.show.client.credits
credits_pathname = os.path.join("..", "assets", credits_img )
return credits_pathname
def get_clips(rfs, ep):
"""
return list of possible input files
this may get the files and store them locally.
start/end segments are under get_cuts.
ps. this is not used for encoding,
just shows in ShotCut for easy dragging onto the timeline.
"""
clips = []
for rf in rfs:
clip = {'id': rf.id }
# if rf.filename.startswith('\\'):
# rawpathname = rf.filename
# else:
raw_pathname = os.path.join( "../dv",
rf.location.slug, rf.filename)
# self.episode_dir, rf.filename)
# check for missing input file
# typically due to incorrect fs mount
abs_path = os.path.join(
self.show_dir, "tmp", raw_pathname)
if not os.path.exists(abs_path):
print(( 'raw_pathname not found: "{}"'.format(
abs_path)))
return False
clip['filename']=raw_pathname
# trim start/end based on episode start/end
if rf.start < ep.start < rf.end:
# if the ep start falls during this clip,
# trim it
d = ep.start - rf.start
clip['in']="00:00:{}".format(d.total_seconds())
else:
clip['in']=None
# if "mkv" in rf.filename:
# import code; code.interact(local=locals())
if rf.start < ep.end < rf.end:
# if the ep end falls during this clip,
d = ep.end - rf.start
clip['out']="00:00:{}".format(d.total_seconds())
else:
clip['out']=None
pprint.pprint(clip)
clips.append(clip)
return clips
def get_cuts(cls):
"""
gets the list of cuts.
input file, start, end, filters
ps. this does not reference the clips above.
"""
def hms_to_clock(hms):
"""
Converts the h:m:s format that media players show
to the mlt time format h:m:s.s
for more on this:
http://mltframework.blogspot.com/2012/04/time-properties.html
"""
if not hms:
return None
if ":" not in hms:
hms = "0:" + hms
if "." not in hms:
hms = hms + ".0"
return hms
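# Illustrative behaviour of hms_to_clock (example values, not from the source):
#   hms_to_clock("23")     -> "0:23.0"
#   hms_to_clock("1:23")   -> "1:23.0"
#   hms_to_clock("1:23.5") -> "1:23.5"
#   hms_to_clock("")       -> None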
cuts = []
for cl in cls:
cut = {}
cut['id'] = cl.id
rawpathname = os.path.join( "../dv",
cl.raw_file.location.slug, cl.raw_file.filename)
# self.episode_dir, cl.raw_file.filename)
# print(rawpathname)
cut['filename'] = rawpathname
# set start/end on the clips if they are set in the db
# else None
cut['in']=hms_to_clock(cl.start)
cut['out']=hms_to_clock(cl.end)
cut['length'] = cl.duration()
if cl.episode.channelcopy:
cut['channelcopy'] = cl.episode.channelcopy
else:
cut['channelcopy']='01'
if cl.episode.normalise:
cut['normalize'] = cl.episode.normalise
else:
cut['normalize']='-12.0'
cut['video_delay']='0.0'
cuts.append(cut)
return cuts
params = {}
params['title_img'] = get_title(episode)
params['foot_img'] = get_foot(episode)
params['clips'] = get_clips(rfs, episode)
params['cuts'] = get_cuts(cls)
return params
def enc_all(self, mlt_pathname, episode):
def enc_one(ext):
out_pathname = os.path.join(
self.show_dir, ext, "%s.%s" % (episode.slug, ext))
if ext == 'webm':
parms = {
'dv_format': self.options.dv_format,
'mlt': mlt_pathname,
'out': out_pathname,
'threads': self.options.threads,
'test': '',
}
# cmds=["melt %s -profile dv_ntsc -consumer avformat:%s progress=1 acodec=libvorbis ab=128k ar=44100 vcodec=libvpx minrate=0 b=600k aspect=@4/3 maxrate=1800k g=120 qmax=42 qmin=10"% (mlt_pathname,out_pathname,)]
cmds = [
"melt -profile %(dv_format)s %(mlt)s force_aspect_ratio=@64/45 -consumer avformat:%(out)s progress=1 threads=0 ab=256k vb=2000k quality=good deadline=good deinterlace=1 deinterlace_method=yadif" % parms]
if ext == 'flv':
cmds = [
"melt %(mlt)s -progress -profile %(dv_format)s -consumer avformat:%(out)s progressive=1 acodec=libfaac ab=96k ar=44100 vcodec=libx264 b=110k vpre=/usr/share/ffmpeg/libx264-hq.ffpreset" % parms]
if ext == 'flac':
# 16kHz/mono
cmds = ["melt -verbose -progress %s -consumer avformat:%s ar=16000" %
(mlt_pathname, out_pathname)]
if ext == 'mp3':
cmds = ["melt -verbose -progress %s -consumer avformat:%s" %
(mlt_pathname, out_pathname)]
if ext == 'mp4':
# High Quality Master 720x480 NTSC
parms = {
'dv_format': self.options.dv_format,
'mlt': mlt_pathname,
'out': out_pathname,
'threads': self.options.threads,
'test': '',
}
cmd = "melt -verbose -progress "\
"-profile %(dv_format)s %(mlt)s "\
"-consumer avformat:%(out)s "\
"threads=%(threads)s "\
"progressive=1 "\
"strict=-2 "\
"properties=x264-high "\
"ab=256k "\
% parms
cmd = cmd.split()
# 2 pass causes no video track, so dumping this.
# need to figure out how to switch between good and fast
if False:
cmds = [cmd + ['pass=1'],
cmd + ['pass=2']]
if True: # even faster!
cmds[0].append('fastfirstpass=1')
else:
cmds = [cmd]
# cmds.append( ["qt-faststart", tmp_pathname, out_pathname] )
if self.options.rm_temp:
cmds.append(["rm", tmp_pathname])
if ext == 'm4v':
# iPhone
tmp_pathname = os.path.join(
self.tmp_dir, "%s.%s" % (episode.slug, ext))
# combine settings from 2 files
ffpreset = open(
'/usr/share/ffmpeg/libx264-default.ffpreset').read().split('\n')
ffpreset.extend(
open('/usr/share/ffmpeg/libx264-ipod640.ffpreset').read().split('\n'))
ffpreset = [i for i in ffpreset if i]
cmd = "melt %(mlt)s -progress -profile %(dv_format)s -consumer avformat:%(tmp)s s=432x320 aspect=@4/3 progressive=1 acodec=libfaac ar=44100 ab=128k vcodec=libx264 b=70k" % parms
cmd = cmd.split()
cmd.extend(ffpreset)
cmds = [cmd]
cmds.append(["qt-faststart", tmp_pathname, out_pathname])
if self.options.rm_temp:
cmds.append(["rm", tmp_pathname])
if ext == 'dv':
out_pathname = os.path.join(
self.tmp_dir, "%s.%s" % (episode.slug, ext))
cmds = ["melt -verbose -progress %s -consumer avformat:%s pix_fmt=yuv411p progressive=1" %
(mlt_pathname, out_pathname)]
if ext == 'ogv':
# melt/ffmpeg ogv encoder is loopy,
# so make a .dv and pass it to ffmpeg2theora
ret = enc_one("dv")
if ret:
dv_pathname = os.path.join(
self.tmp_dir, "%s.dv" % (episode.slug,))
cmds = [
"ffmpeg2theora --videoquality 5 -V 600 --audioquality 5 --channels 1 %s -o %s" % (dv_pathname, out_pathname)]
if self.options.rm_temp:
cmds.append(["rm", dv_pathname])
else:
return ret
# run encoder:
if self.options.noencode:
print("sorce files generated, skipping encode.")
if self.options.melt:
self.run_cmd(['melt', mlt_pathname])
ret = False
else:
ret = self.run_cmds(episode, cmds, )
if ret and not os.path.exists(out_pathname):
print("melt returned %ret, but no output: %s" % \
(ret, out_pathname))
ret = False
return ret
ret = True
# create all the formats for uploading
for ext in self.options.upload_formats:
print("encoding to %s" % (ext,))
ret = enc_one(ext) and ret
"""
if self.options.enc_script:
cmd = [self.options.enc_script,
self.show_dir, episode.slug]
ret = ret and self.run_cmds(episode, [cmd])
"""
return ret
def dv2theora(self, episode, dv_path_name, cls, rfs):
"""
Not used any more.
transcode dv to ogv
"""
oggpathname = os.path.join(
self.show_dir, "ogv", "%s.ogv" % episode.slug)
# cmd="ffmpeg2theora --videoquality 5 -V 600 --audioquality 5 --speedlevel 0 --optimize --keyint 256 --channels 1".split()
cmd = "ffmpeg2theora --videoquality 5 -V 600 --audioquality 5 --keyint 256 --channels 1".split()
cmd += ['--output', oggpathname]
cmd += [dv_path_name]
return cmd
def process_ep(self, episode):
ret = False
cls = Cut_List.objects.filter(
episode=episode, apply=True).order_by('sequence')
if cls:
# get list of raw footage for this episode
rfs = Raw_File.objects. \
filter(cut_list__episode=episode).\
exclude(trash=True).distinct()
# get a .mlt file for this episode (mlt_pathname)
# look for custom/slug.mlt and just use it,
# else build one from client.template_mlt
mlt_pathname = os.path.join(
self.show_dir, "custom",
"{}.mlt".format(episode.slug))
if os.path.exists(mlt_pathname):
print(("found custom/slug.mlt:\n{}".format( mlt_pathname )))
ret = True
else:
template_mlt = episode.show.client.template_mlt
mlt_pathname = os.path.join(self.show_dir,
"mlt", "%s.mlt" % episode.slug)
params = self.get_params(episode, rfs, cls )
pprint.pprint(params)
print((2, mlt_pathname))
ret = mk_mlt( template_mlt, mlt_pathname, params )
if not ret:
episode.state = 0
episode.comment += "\nenc.py mlt = self.mkmlt_1 failed.\n"
episode.save()
return False
# do the final encoding:
# using melt
ret = self.enc_all(mlt_pathname, episode)
if self.options.load_temp and self.options.rm_temp:
cmds = []
for rf in rfs:
dst_path = os.path.join(
self.tmp_dir, episode.slug, os.path.dirname(rf.filename))
rawpathname = os.path.join(
self.tmp_dir, episode.slug, rf.filename)
cmds.append(['rm', rawpathname])
cmds.append(['rmdir', dst_path])
dst_path = os.path.join(self.tmp_dir, episode.slug)
cmds.append(['rmdir', dst_path])
self.run_cmds(episode, cmds)
else:
err_msg = "No cutlist found."
episode.state = 0
episode.comment += "\nenc error: %s\n" % (err_msg,)
episode.save()
print(err_msg)
return False
if self.options.test:
ret = False
# save the episode so the test suite can get the slug
self.episode = episode
return ret
def add_more_options(self, parser):
parser.add_option('--enc-script',
help='encode shell script')
parser.add_option('--noencode', action="store_true",
help="don't encode, just make svg, png, mlt")
parser.add_option('--melt', action="store_true",
help="call melt slug.melt (only w/noencode)")
parser.add_option('--load-temp', action="store_true",
help='copy .dv to temp files')
parser.add_option('--rm-temp',
help='remove large temp files')
parser.add_option('--threads',
help='thread parameter passed to encoder')
def add_more_option_defaults(self, parser):
parser.set_defaults(threads=0)
if __name__ == '__main__':
p = enc()
p.main()
| mit |
GoodCloud/johnny-cache | johnny/backends/memcached.py | 1 | 1842 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Infinite caching memcached class. Caches forever when passed a timeout
of 0. For Django >= 1.3, this module also provides ``MemcachedCache`` and
``PyLibMCCache``, which use the backends of their respective analogs in
django's default backend modules.
"""
from django.core.cache.backends import memcached
from django.utils.encoding import smart_str
import django
class CacheClass(memcached.CacheClass):
"""By checking ``timeout is None`` rather than ``not timeout``, this
cache class allows for non-expiring cache writes on certain backends,
notably memcached."""
def _get_memcache_timeout(self, timeout=None):
if timeout == 0: return 0 #2591999
return super(CacheClass, self)._get_memcache_timeout(timeout)
if django.VERSION[:2] > (1, 2):
class MemcachedCache(memcached.MemcachedCache):
"""Infinitely Caching version of django's MemcachedCache backend."""
def _get_memcache_timeout(self, timeout=None):
if timeout == 0: return 0 #2591999
return super(MemcachedCache, self)._get_memcache_timeout(timeout)
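# A minimal usage sketch (assumed settings; the memcached address is
# illustrative, not part of this module): selecting this backend in a
# Django >= 1.3 settings file would look like
#   CACHES = {
#       'default': {
#           'BACKEND': 'johnny.backends.memcached.MemcachedCache',
#           'LOCATION': '127.0.0.1:11211',
#       }
#   }
# after which cache.set(key, value, 0) stores the entry without expiry.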
class PyLibMCCache(memcached.PyLibMCCache):
"""PyLibMCCache version that interprets 0 to mean, roughly, 30 days.
This is because `pylibmc interprets 0 to mean literally zero seconds
<http://sendapatch.se/projects/pylibmc/misc.html#differences-from-python-memcached>`_
rather than "infinity" as memcached itself does. The maximum timeout
memcached allows before treating the timeout as a timestamp is just
under 30 days."""
def _get_memcache_timeout(self, timeout=None):
# pylibmc doesn't like our definition of 0
if timeout == 0: return 2591999
return super(PyLibMCCache, self)._get_memcache_timeout(timeout)
| mit |
freifunk-darmstadt/tools | update-telemetry.py | 1 | 8987 | #!/usr/bin/env python3
import psutil
import os
import json
import re
import itertools
from contextlib import contextmanager
import pprint
import time
import socket
import subprocess
import logging
logger = logging.getLogger(__name__)
def pairwise(iterable):
"s -> (s0,s1), (s2,s3), (s4, s5), ..."
a = iter(iterable)
return zip(a, a)
@contextmanager
def get_socket(host, port):
sock = socket.socket()
sock.settimeout(1)
sock.connect((host, port))
yield sock
sock.close()
@contextmanager
def get_unix_socket(filename):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(1)
sock.connect(filename)
yield sock
sock.close()
def write_to_graphite(data, prefix='freifunk', hostname=socket.gethostname()):
if '.' in hostname:
hostname = hostname.split('.')[0]
now = time.time()
with get_socket('stats.darmstadt.freifunk.net', 2013) as s:
for key, value in data.items():
line = "%s.%s.%s %s %s\n" % (prefix, hostname, key, value, now)
s.sendall(line.encode('latin-1'))
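# Illustrative plaintext-protocol line produced above (hostname and values
# are made up): "freifunk.gw01.load.1 0.42 1433865600.0\n",
# i.e. "<prefix>.<hostname>.<key> <value> <unix-timestamp>\n" per metric.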
def write_to_node_collector(filename, data, patterns, prefix='freifunk'):
patterns = [re.compile(exp) for exp in patterns]
print(data)
updates = []
for metric, value in data.items():
for pattern in patterns:
m = pattern.match(metric)
if m:
groups = m.groupdict()
if all(key in groups for key in ['key']):
updates.append([groups, value])
break
content = []
for update, value in updates:
key = update['key'].replace('.', '_')
sub_key = update.pop('sub_key', None)
if prefix:
key = '{}_{}'.format(prefix, key)
if sub_key:
key += '_' + sub_key
params = update.copy()
params.pop('key')
params = ','.join(['{}={}'.format(k, v) for k, v in params.items()])
params = '{%s}' % (params)
content.append('{key}{params} {value}'.format(key=key, params=params, value=value))
with open(filename, 'w') as fh:
fh.write('\n'.join(content))
def read_from_fastd_socket(filename):
with get_unix_socket(filename) as client:
try:
strings = []
while True:
s = client.recv(8096)
if not s:
break
strings.append(s.decode('utf-8'))
data = json.loads(''.join(strings))
#pprint.pprint(data['statistics'])
online_peers = len([None for name, d in data['peers'].items() if d['connection']])
return {
'peers.count': len(data['peers']),
'peers.online': online_peers,
'rx.packets': data['statistics']['rx']['packets'],
'rx.bytes': data['statistics']['rx']['bytes'],
'rx.reordered.bytes': data['statistics']['rx_reordered']['bytes'],
'rx.reordered.packets': data['statistics']['rx_reordered']['packets'],
'tx.bytes': data['statistics']['tx']['bytes'],
'tx.packets': data['statistics']['tx']['packets'],
'tx.dropped.bytes': data['statistics']['tx_dropped']['bytes'],
'tx.dropped.packets': data['statistics']['tx_dropped']['packets'],
}
except Exception as e:
print(e)
return {}
def get_fastd_process_stats():
for proc in psutil.process_iter():
if proc.name() == 'fastd':
# 11905: 00000000000000000000000001000000:0035 00000000000000000000000000000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 4469598 2 ffff880519be5100 0
drop_count = 0
for proto in ['udp', 'udp6']:
with open('/proc/{}/net/{}'.format(proc.pid, proto), 'r') as fh:
for line in (line.strip() for line in fh.read().split('\n')):
if not line:
continue
if line.startswith('sl'):
continue
parts = line.split(' ')
drop_count += int(parts[-1])
return drop_count
return None
def get_neighbour_table_states(family=socket.AF_INET6):
if family is socket.AF_INET:
family = '-4'
elif family is socket.AF_INET6:
family = '-6'
else:
return
response = subprocess.check_output(
['/bin/ip', family, 'neigh', 'show', 'nud', 'all']
).decode()
states = {'PERMANENT': 0, 'NOARP': 0, 'REACHABLE': 0, 'STALE': 0, 'NONE': 0,
'INCOMPLETE': 0, 'DELAY': 0, 'PROBE': 0, 'FAILED': 0}
for neigh_entry in response.split('\n'):
if not neigh_entry:
continue
state = neigh_entry.split()[-1]
if state not in states:
continue
states[state] += 1
return states
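# Illustrative `ip -6 neigh show nud all` output line this parser expects
# (addresses are made up); only the trailing state token is counted:
#   fe80::1 dev eth0 lladdr 00:11:22:33:44:55 REACHABLE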
def main():
fastd_sockets = (
('0', '/run/fastd-ffda-vpn.sock'),
('1', '/run/fastd-ffda-vpn1.sock'),
)
device_name_mapping = {
'freifunk': 'ffda-br',
'bat0': 'ffda-bat',
'mesh-vpn': 'ffda-vpn'
}
device_whitelist = [
'eth0',
'ffda-vpn',
'ffda-vpn-1280',
'ffda-vpn-1312',
'ffda-bat',
'ffda-br',
'ffda-transport',
'services',
]
fields = [
'bytes', 'packets', 'errs', 'drop', 'fifo',
'frame', 'compressed', 'multicast',
]
field_format = '(?P<{direction}_{field}>\d+)'
pattern = re.compile(
'^\s*(?P<device_name>[\w-]+):\s+' + '\s+'.join(
itertools.chain.from_iterable((field_format.format(direction=direction, field=field)
for field in fields) for direction in ['rx', 'tx'])
)
)
update = {}
with open('/proc/net/dev') as fh:
lines = fh.readlines()
for line in lines:
m = pattern.match(line)
if m:
groupdict = m.groupdict()
device_name = groupdict.pop('device_name')
device_name = device_name_mapping.get(device_name, device_name)
if device_name in device_whitelist or device_name.endswith('-vpn') or \
device_name.endswith('-bat') or \
device_name.endswith('-br') or \
device_name.endswith('-transport'):
for key, value in groupdict.items():
direction, metric = key.split('_')
update['%s.%s.%s' % (device_name, direction, metric)] = value
with open('/proc/loadavg', 'r') as fh:
line = fh.read()
values = line.split(' ', 3)
update['load.15'] = values[0]
update['load.5'] = values[1]
update['load.1'] = values[2]
for key in ['count', 'max']:
try:
with open('/proc/sys/net/netfilter/nf_conntrack_%s' % key, 'r') as fh:
update['netfilter.%s' % key] = fh.read().strip()
except IOError as e:
pass
with open('/proc/net/snmp6', 'r') as fh:
for line in fh.readlines():
key, value = line.split(' ', 1)
value = value.strip()
update['ipv6.%s' % key] = value
with open('/proc/net/snmp', 'r') as fh:
for heading, values in pairwise(fh.readlines()):
section, headings = heading.split(':')
headings = headings.strip().split(' ')
_, values = values.split(':')
values = values.strip().split(' ')
for key, value in zip(headings, values):
update['ipv4.%s.%s' % (section, key)] = value
for af, prefix in [(socket.AF_INET, 'ipv4.Neigh'),
(socket.AF_INET6, 'ipv6.Neigh')]:
for state, count in get_neighbour_table_states(af).items():
update['{0}.{1}'.format(prefix, state.lower())] = count
with open('/proc/stat', 'r') as fh:
for line in fh.readlines():
key, value = line.split(' ', 1)
if key == 'ctxt':
update['context_switches'] = value.strip()
break
for name, filename in fastd_sockets:
if not os.path.exists(filename):
continue
data = read_from_fastd_socket(filename)
if len(data) > 0:
update.update({'fastd.%s.%s' % (name, key): value for (key, value) in data.items()})
fastd_drops = get_fastd_process_stats()
if fastd_drops:
update['fastd.drops'] = fastd_drops
#pprint.pprint(update)
write_to_graphite(update)
write_to_node_collector('/dev/shm/telemetry.prom', update, patterns=[
# '^(?P<interface>[^.]+)\.(?P<key>(rx|tx).+)',
'^(?P<key>fastd)\.(?P<fast_instance>.+)\.(?P<sub_key>.+)',
# '^(?P<key>load)\.(?P<period>\d+)'
], prefix='ffda_')
if __name__ == "__main__":
main()
| agpl-3.0 |
anastue/netforce | netforce_mfg/netforce_mfg/models/workcenter.py | 2 | 4110 | # Copyright (c) 2012-2015 Netforce Co. Ltd.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
# OR OTHER DEALINGS IN THE SOFTWARE.
from netforce.model import Model, fields
from netforce.database import get_connection
from datetime import *
import time
def get_days(n):
days = []
d = date.today()
days.append(d.strftime("%Y-%m-%d"))
while n > 1:
d -= timedelta(days=1)
days.append(d.strftime("%Y-%m-%d"))
n -= 1
days.reverse()
return days
def js_time(d):
return time.mktime(datetime.strptime(d, "%Y-%m-%d").timetuple()) * 1000
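# Illustrative: js_time("2015-06-01") yields the POSIX timestamp of local
# midnight in milliseconds, e.g. 1433116800000.0 when run under UTC
# (the exact value is timezone-dependent).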
class Workcenter(Model):
_name = "workcenter"
_string = "Workcenter"
_key = ["code"]
_export_field = "code"
_fields = {
"code": fields.Char("Workcenter Code", search=True),
"name": fields.Char("Workcenter Name", search=True),
"location_id": fields.Many2One("stock.location", "Location"),
"asset_id": fields.Many2One("account.fixed.asset", "Fixed Asset"),
"hours_history": fields.Json("Hours History", function="get_hours_history"),
"hours_week": fields.Decimal("Hours This Week", function="get_hours", function_multi=True),
"comments": fields.One2Many("message", "related_id", "Comments"),
"documents": fields.One2Many("document", "related_id", "Documents"),
}
_order = "code"
def name_get(self, ids, context={}):
vals = []
for obj in self.browse(ids):
if obj.code:
name = "%s [%s]" % (obj.name, obj.code)
else:
name = obj.name
vals.append((obj.id, name))
return vals
def name_search(self, name, condition=None, context={}, limit=None, **kw):
cond = [["code", "ilike", "%" + name + "%"]]
if condition:
cond = [cond, condition]
ids1 = self.search(cond, limit=limit)
cond = [["name", "ilike", "%" + name + "%"]]
if condition:
cond = [cond, condition]
ids2 = self.search(cond, limit=limit)
ids = list(set(ids1 + ids2))
return self.name_get(ids, context=context)
def get_hours_history(self, ids, context={}):
db = get_connection()
vals = {}
days = get_days(30)
for id in ids:
res = db.query(
"SELECT o.date,SUM(o.hours) AS hours FROM mrp_operation o WHERE workcenter_id=%s GROUP BY o.date", id)
hours = {}
for r in res:
hours[r.date] = r.hours
data = []
for d in days:
data.append((js_time(d), hours.get(d, 0)))
vals[id] = data
return vals
def get_hours(self, ids, context={}):
db = get_connection()
d = date.today()
date_from = d - timedelta(days=d.weekday())
vals = {}
for id in ids:
res = db.get(
"SELECT SUM(o.hours) AS hours FROM mrp_operation o WHERE o.workcenter_id=%s AND o.date>=%s", id, date_from)
vals[id] = {
"hours_week": res.hours or 0,
}
return vals
Workcenter.register()
| mit |
ddong8/ihasy | lib/gravatar.py | 12 | 4549 | #!/usr/bin/env python3
#
# python-gravatar - Copyright (c) 2009 Pablo Seminario
# This software is distributed under the terms of the GNU General
# Public License
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
A library that provides a Python 3 interface to the Gravatar APIs.
"""
__author__ = 'Pablo SEMINARIO <pabluk@gmail.com>'
__version__ = '0.2'
import xmlrpc.client  # needed by GravatarXMLRPC below
from hashlib import md5
class Gravatar(object):
"""
This class encapsulates all the unauthenticated methods from APIs.
Gravatar Image Requests http://en.gravatar.com/site/implement/images/
Gravatar Profile Requests http://en.gravatar.com/site/implement/profiles/
"""
def __init__(self, email):
self.email = sanitize_email(email)
self.email_hash = md5_hash(self.email)
def get_image(self, size=80, filetype_extension=True):
"""
Returns an URL to the user profile image.
"""
base_url = 'http://www.gravatar.com/avatar/' \
'{hash}{extension}?size={size}'
extension = '.jpg' if filetype_extension else ''
data = {
'hash': self.email_hash,
'extension': extension,
'size': size,
}
return base_url.format(**data)
def get_profile(self, data_format=''):
"""
Returns an URL to the profile information associated with the
Gravatar account.
"""
base_url = 'http://www.gravatar.com/{hash}{data_format}'
valid_formats = ['json', 'xml', 'php', 'vcf', 'qr']
if data_format and data_format in valid_formats:
data_format = '.%s' % data_format
data = {
'hash': self.email_hash,
'data_format': data_format,
}
return base_url.format(**data)
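# A minimal usage sketch (the e-mail address is a made-up example):
#   g = Gravatar('someone@example.com')
#   g.get_image(size=128)  # -> 'http://www.gravatar.com/avatar/<hash>.jpg?size=128'
#   g.get_profile('json')  # -> 'http://www.gravatar.com/<hash>.json'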
class GravatarXMLRPC(object):
"""
This class encapsulates all the authenticated methods from the XML-RPC API.
API details: http://en.gravatar.com/site/implement/xmlrpc
"""
API_URI = 'https://secure.gravatar.com/xmlrpc?user={0}'
def __init__(self, email, apikey='', password=''):
self.apikey = apikey
self.password = password
self.email = sanitize_email(email)
self.email_hash = md5_hash(self.email)
self._server = xmlrpc.client.ServerProxy(
self.API_URI.format(self.email_hash))
def exists(self, hashes):
"""Checks whether a hash has a gravatar."""
response = self._call('exists', params={'hashes': hashes})
results = {}
for key, value in response.items():
results[key] = True if value else False
return results
def addresses(self):
"""Gets a list of addresses for this account."""
return self._call('addresses')
def userimages(self):
"""Returns a dict of userimages for this account."""
return self._call('userimages')
def test(self):
"""Test the API."""
return self._call('test')
def _call(self, method, params={}):
"""Call a method from the API, gets 'grav.' prepended to it."""
args = {
'apikey': self.apikey,
'password': self.password,
}
args.update(params)
try:
return getattr(self._server, 'grav.' + method, None)(args)
except xmlrpc.client.Fault as error:
error_msg = "Server error: {1} (error code: {0})"
print(error_msg.format(error.faultCode, error.faultString))
def sanitize_email(email):
"""
Returns an e-mail address in lower-case and strip leading and trailing
whitespaces.
>>> sanitize_email(' MyEmailAddress@example.com ')
'myemailaddress@example.com'
"""
return email.lower().strip()
def md5_hash(email):
"""
Returns a md5 hash from an e-mail address.
>>> md5_hash('myemailaddress@example.com')
'0bc83cb571cd1c50ba6f3e8a78ef1346'
"""
return md5(email.encode('utf-8')).hexdigest()
| bsd-3-clause |
maferelo/saleor | saleor/account/migrations/0001_initial.py | 3 | 19366 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import django.db.models.deletion
import django.utils.timezone
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [("auth", "0006_require_contenttypes_0002")]
replaces = [("userprofile", "0001_initial")]
operations = [
migrations.CreateModel(
name="User",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"is_superuser",
models.BooleanField(
default=False,
help_text=(
"Designates that this user has all permissions "
"without explicitly assigning them."
),
verbose_name="superuser status",
),
),
("email", models.EmailField(unique=True, max_length=254)),
(
"is_staff",
models.BooleanField(default=False, verbose_name="staff status"),
),
(
"is_active",
models.BooleanField(default=False, verbose_name="active"),
),
(
"password",
models.CharField(
verbose_name="password", max_length=128, editable=False
),
),
(
"date_joined",
models.DateTimeField(
default=django.utils.timezone.now,
verbose_name="date joined",
editable=False,
),
),
(
"last_login",
models.DateTimeField(
default=django.utils.timezone.now,
verbose_name="last login",
editable=False,
),
),
],
options={"db_table": "userprofile_user", "abstract": False},
),
migrations.CreateModel(
name="Address",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"first_name",
models.CharField(max_length=256, verbose_name="first name"),
),
(
"last_name",
models.CharField(max_length=256, verbose_name="last name"),
),
(
"company_name",
models.CharField(
max_length=256,
verbose_name="company or organization",
blank=True,
),
),
(
"street_address_1",
models.CharField(max_length=256, verbose_name="address"),
),
(
"street_address_2",
models.CharField(
max_length=256, verbose_name="address", blank=True
),
),
("city", models.CharField(max_length=256, verbose_name="city")),
(
"postal_code",
models.CharField(max_length=20, verbose_name="postal code"),
),
(
"country",
models.CharField(
max_length=2,
verbose_name="country",
choices=[
("AF", "Afghanistan"),
("AX", "\xc5land Islands"),
("AL", "Albania"),
("DZ", "Algeria"),
("AS", "American Samoa"),
("AD", "Andorra"),
("AO", "Angola"),
("AI", "Anguilla"),
("AQ", "Antarctica"),
("AG", "Antigua And Barbuda"),
("AR", "Argentina"),
("AM", "Armenia"),
("AW", "Aruba"),
("AU", "Australia"),
("AT", "Austria"),
("AZ", "Azerbaijan"),
("BS", "Bahamas"),
("BH", "Bahrain"),
("BD", "Bangladesh"),
("BB", "Barbados"),
("BY", "Belarus"),
("BE", "Belgium"),
("BZ", "Belize"),
("BJ", "Benin"),
("BM", "Bermuda"),
("BT", "Bhutan"),
("BO", "Bolivia"),
("BQ", "Bonaire, Saint Eustatius And Saba"),
("BA", "Bosnia And Herzegovina"),
("BW", "Botswana"),
("BV", "Bouvet Island"),
("BR", "Brazil"),
("IO", "British Indian Ocean Territory"),
("BN", "Brunei Darussalam"),
("BG", "Bulgaria"),
("BF", "Burkina Faso"),
("BI", "Burundi"),
("KH", "Cambodia"),
("CM", "Cameroon"),
("CA", "Canada"),
("CV", "Cape Verde"),
("KY", "Cayman Islands"),
("CF", "Central African Republic"),
("TD", "Chad"),
("CL", "Chile"),
("CN", "China"),
("CX", "Christmas Island"),
("CC", "Cocos (Keeling) Islands"),
("CO", "Colombia"),
("KM", "Comoros"),
("CG", "Congo"),
("CD", "Congo, The Democratic Republic of the"),
("CK", "Cook Islands"),
("CR", "Costa Rica"),
("CI", "C\xf4te D'Ivoire"),
("HR", "Croatia"),
("CU", "Cuba"),
("CW", "Cura\xe7o"),
("CY", "Cyprus"),
("CZ", "Czech Republic"),
("DK", "Denmark"),
("DJ", "Djibouti"),
("DM", "Dominica"),
("DO", "Dominican Republic"),
("EC", "Ecuador"),
("EG", "Egypt"),
("SV", "El Salvador"),
("GQ", "Equatorial Guinea"),
("ER", "Eritrea"),
("EE", "Estonia"),
("ET", "Ethiopia"),
("FK", "Falkland Islands (Malvinas)"),
("FO", "Faroe Islands"),
("FJ", "Fiji"),
("FI", "Finland"),
("FR", "France"),
("GF", "French Guiana"),
("PF", "French Polynesia"),
("TF", "French Southern Territories"),
("GA", "Gabon"),
("GM", "Gambia"),
("GE", "Georgia"),
("DE", "Germany"),
("GH", "Ghana"),
("GI", "Gibraltar"),
("GR", "Greece"),
("GL", "Greenland"),
("GD", "Grenada"),
("GP", "Guadeloupe"),
("GU", "Guam"),
("GT", "Guatemala"),
("GG", "Guernsey"),
("GN", "Guinea"),
("GW", "Guinea-Bissau"),
("GY", "Guyana"),
("HT", "Haiti"),
("HM", "Heard Island And Mcdonald Islands"),
("VA", "Holy See (Vatican City State)"),
("HN", "Honduras"),
("HK", "Hong Kong"),
("HU", "Hungary"),
("IS", "Iceland"),
("IN", "India"),
("ID", "Indonesia"),
("IR", "Iran, Islamic Republic of"),
("IQ", "Iraq"),
("IE", "Ireland"),
("IM", "Isle of Man"),
("IL", "Israel"),
("IT", "Italy"),
("JM", "Jamaica"),
("JP", "Japan"),
("JE", "Jersey"),
("JO", "Jordan"),
("KZ", "Kazakhstan"),
("KE", "Kenya"),
("KI", "Kiribati"),
("KP", "Korea, Democratic People's Republic of"),
("KR", "Korea, Republic of"),
("KW", "Kuwait"),
("KG", "Kyrgyzstan"),
("LA", "Lao People's Democratic Republic"),
("LV", "Latvia"),
("LB", "Lebanon"),
("LS", "Lesotho"),
("LR", "Liberia"),
("LY", "Libya"),
("LI", "Liechtenstein"),
("LT", "Lithuania"),
("LU", "Luxembourg"),
("MO", "Macao"),
("MK", "Macedonia, The Former Yugoslav Republic of"),
("MG", "Madagascar"),
("MW", "Malawi"),
("MY", "Malaysia"),
("MV", "Maldives"),
("ML", "Mali"),
("MT", "Malta"),
("MH", "Marshall Islands"),
("MQ", "Martinique"),
("MR", "Mauritania"),
("MU", "Mauritius"),
("YT", "Mayotte"),
("MX", "Mexico"),
("FM", "Micronesia, Federated States of"),
("MD", "Moldova, Republic of"),
("MC", "Monaco"),
("MN", "Mongolia"),
("ME", "Montenegro"),
("MS", "Montserrat"),
("MA", "Morocco"),
("MZ", "Mozambique"),
("MM", "Myanmar"),
("NA", "Namibia"),
("NR", "Nauru"),
("NP", "Nepal"),
("NL", "Netherlands"),
("NC", "New Caledonia"),
("NZ", "New Zealand"),
("NI", "Nicaragua"),
("NE", "Niger"),
("NG", "Nigeria"),
("NU", "Niue"),
("NF", "Norfolk Island"),
("MP", "Northern Mariana Islands"),
("NO", "Norway"),
("OM", "Oman"),
("PK", "Pakistan"),
("PW", "Palau"),
("PS", "Palestinian Territory, Occupied"),
("PA", "Panama"),
("PG", "Papua New Guinea"),
("PY", "Paraguay"),
("PE", "Peru"),
("PH", "Philippines"),
("PN", "Pitcairn"),
("PL", "Poland"),
("PT", "Portugal"),
("PR", "Puerto Rico"),
("QA", "Qatar"),
("RE", "R\xe9union"),
("RO", "Romania"),
("RU", "Russian Federation"),
("RW", "Rwanda"),
("BL", "Saint Barth\xe9lemy"),
("SH", "Saint Helena, Ascension And Tristan Da Cunha"),
("KN", "Saint Kitts And Nevis"),
("LC", "Saint Lucia"),
("MF", "Saint Martin (French Part)"),
("PM", "Saint Pierre And Miquelon"),
("VC", "Saint Vincent And the Grenadines"),
("WS", "Samoa"),
("SM", "San Marino"),
("ST", "Sao Tome And Principe"),
("SA", "Saudi Arabia"),
("SN", "Senegal"),
("RS", "Serbia"),
("SC", "Seychelles"),
("SL", "Sierra Leone"),
("SG", "Singapore"),
("SX", "Sint Maarten (Dutch Part)"),
("SK", "Slovakia"),
("SI", "Slovenia"),
("SB", "Solomon Islands"),
("SO", "Somalia"),
("ZA", "South Africa"),
("GS", "South Georgia and the South Sandwich Islands"),
("ES", "Spain"),
("LK", "Sri Lanka"),
("SD", "Sudan"),
("SR", "Suriname"),
("SJ", "Svalbard and Jan Mayen"),
("SZ", "Swaziland"),
("SE", "Sweden"),
("CH", "Switzerland"),
("SY", "Syria"),
("TW", "Taiwan"),
("TJ", "Tajikistan"),
("TZ", "Tanzania"),
("TH", "Thailand"),
("TL", "Timor-Leste"),
("TG", "Togo"),
("TK", "Tokelau"),
("TO", "Tonga"),
("TT", "Trinidad And Tobago"),
("TN", "Tunisia"),
("TR", "Turkey"),
("TM", "Turkmenistan"),
("TC", "Turks And Caicos Islands"),
("TV", "Tuvalu"),
("UG", "Uganda"),
("UA", "Ukraine"),
("AE", "United Arab Emirates"),
("GB", "United Kingdom"),
("US", "United States"),
("UM", "United States Minor Outlying Islands"),
("UY", "Uruguay"),
("UZ", "Uzbekistan"),
("VU", "Vanuatu"),
("VE", "Venezuela"),
("VN", "Viet Nam"),
("VG", "Virgin Islands, British"),
("VI", "Virgin Islands, U.S."),
("WF", "Wallis And Futuna"),
("EH", "Western Sahara"),
("YE", "Yemen"),
("ZM", "Zambia"),
("ZW", "Zimbabwe"),
],
),
),
(
"country_area",
models.CharField(
max_length=128, verbose_name="state or province", blank=True
),
),
(
"phone",
models.CharField(
max_length=30, verbose_name="phone number", blank=True
),
),
],
options={"db_table": "userprofile_address"},
),
migrations.AddField(
model_name="user",
name="addresses",
field=models.ManyToManyField(to="account.Address"),
),
migrations.AddField(
model_name="user",
name="default_billing_address",
field=models.ForeignKey(
related_name="+",
on_delete=django.db.models.deletion.SET_NULL,
verbose_name="default billing address",
blank=True,
to="account.Address",
null=True,
),
),
migrations.AddField(
model_name="user",
name="default_shipping_address",
field=models.ForeignKey(
related_name="+",
on_delete=django.db.models.deletion.SET_NULL,
verbose_name="default shipping address",
blank=True,
to="account.Address",
null=True,
),
),
migrations.AddField(
model_name="user",
name="groups",
field=models.ManyToManyField(
related_query_name="user",
related_name="user_set",
to="auth.Group",
blank=True,
help_text=(
"The groups this user belongs to. "
"A user will get all permissions granted to each of their groups."
),
verbose_name="groups",
),
),
migrations.AddField(
model_name="user",
name="user_permissions",
field=models.ManyToManyField(
related_query_name="user",
related_name="user_set",
to="auth.Permission",
blank=True,
help_text="Specific permissions for this user.",
verbose_name="user permissions",
),
),
]
| bsd-3-clause |
elelsee/pycfn-elasticsearch | pycfn_elasticsearch/vendored/docutils/languages/he.py | 148 | 2683 | # Author: Meir Kriheli
# Id: $Id: he.py 4837 2006-12-26 09:59:41Z sfcben $
# Copyright: This module has been placed in the public domain.
# New language mappings are welcome. Before doing a new translation, please
# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
# translated for each language: one in docutils/languages, the other in
# docutils/parsers/rst/languages.
"""
Hebrew-language mappings for language-dependent features of Docutils.
"""
__docformat__ = 'reStructuredText'
labels = {
# fixed: language-dependent
'author': u'\u05de\u05d7\u05d1\u05e8',
'authors': u'\u05de\u05d7\u05d1\u05e8\u05d9',
'organization': u'\u05d0\u05e8\u05d2\u05d5\u05df',
'address': u'\u05db\u05ea\u05d5\u05d1\u05ea',
'contact': u'\u05d0\u05d9\u05e9 \u05e7\u05e9\u05e8',
'version': u'\u05d2\u05e8\u05e1\u05d4',
'revision': u'\u05de\u05d4\u05d3\u05d5\u05e8\u05d4',
'status': u'\u05e1\u05d8\u05d8\u05d5\u05e1',
'date': u'\u05ea\u05d0\u05e8\u05d9\u05da',
'copyright': u'\u05d6\u05db\u05d5\u05d9\u05d5\u05ea \u05e9\u05de\u05d5\u05e8\u05d5\u05ea',
'dedication': u'\u05d4\u05e7\u05d3\u05e9\u05d4',
'abstract': u'\u05ea\u05e7\u05e6\u05d9\u05e8',
'attention': u'\u05ea\u05e9\u05d5\u05de\u05ea \u05dc\u05d1',
'caution': u'\u05d6\u05d4\u05d9\u05e8\u05d5\u05ea',
'danger': u'\u05e1\u05db\u05e0\u05d4',
'error': u'\u05e9\u05d2\u05d9\u05d0\u05d4',
'hint': u'\u05e8\u05de\u05d6',
'important': u'\u05d7\u05e9\u05d5\u05d1',
'note': u'\u05d4\u05e2\u05e8\u05d4',
'tip': u'\u05d8\u05d9\u05e4',
'warning': u'\u05d0\u05d6\u05d4\u05e8\u05d4',
'contents': u'\u05ea\u05d5\u05db\u05df'}
"""Mapping of node class name to label text."""
bibliographic_fields = {
# language-dependent: fixed
u'\u05de\u05d7\u05d1\u05e8': 'author',
u'\u05de\u05d7\u05d1\u05e8\u05d9': 'authors',
u'\u05d0\u05e8\u05d2\u05d5\u05df': 'organization',
u'\u05db\u05ea\u05d5\u05d1\u05ea': 'address',
u'\u05d0\u05d9\u05e9 \u05e7\u05e9\u05e8': 'contact',
u'\u05d2\u05e8\u05e1\u05d4': 'version',
u'\u05de\u05d4\u05d3\u05d5\u05e8\u05d4': 'revision',
u'\u05e1\u05d8\u05d8\u05d5\u05e1': 'status',
u'\u05ea\u05d0\u05e8\u05d9\u05da': 'date',
u'\u05d6\u05db\u05d5\u05d9\u05d5\u05ea \u05e9\u05de\u05d5\u05e8\u05d5\u05ea': 'copyright',
u'\u05d4\u05e7\u05d3\u05e9\u05d4': 'dedication',
u'\u05ea\u05e7\u05e6\u05d9\u05e8': 'abstract'}
"""Hebrew to canonical name mapping for bibliographic fields."""
author_separators = [';', ',']
"""List of separator strings for the 'Authors' bibliographic field. Tried in
order."""
| apache-2.0 |
ag-sc/QALD | 4/scripts/Evaluation.py | 1 | 26257 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import xml.dom.minidom as dom
import xml.dom
from decimal import *
import sys
import os
import datetime
#from Numeric import *
# Create the document
implement = xml.dom.getDOMImplementation()
################### Global variables ###################
task=None
choosen_tag={}
system_time=0
filename_out_html = None
filename_out_txt = None
system_name=None
configuration=None
testing=False
################### Functions ##########################
def set_system_name(name):
global system_name
system_name=name
def set_configuration(name):
global configuration
configuration=name
def _ausgabe_(ausgabe):
print ausgabe
def set_filename_txt_out(time):
global filename_out_txt
filename_out_txt="upload/out"+str(time)+".txt"
def set_filename_out(time):
global filename_out_html
filename_out_html="upload/out"+str(time)+".html"
def _knoten_auslesen(knoten):
try:
string = knoten.firstChild.data.strip().encode("utf-8")
# print "knoten_auslesen: "+string
return string
except:
# print "Unexpected error:", sys.exc_info()[0]
pass
#def _knoten_auslesen(knoten):
# return eval("%s('%s')" % (knoten.getAttribute("typ"),
# knoten.firstChild.data.strip()))
def lade_musterloesung(dateiname):
d = {}
global choosen_tag
#baum = dom.parse(dateiname.encode( "utf-8" ))
baum = dom.parse(dateiname)
zaehler=1
for eintrag in baum.firstChild.childNodes:
if eintrag.nodeName == "question":
id=(eintrag.attributes["id"]).value
question_text = query = None
answer=[]
for knoten in eintrag.childNodes:
if knoten.nodeName == "text" or knoten.nodeName == "string":
if (knoten.attributes["lang"]).value == "en":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "de":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "es":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "it":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "fr":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "nl":
question_text = _knoten_auslesen(knoten)
# elif knoten.nodeName == "query":
# query=knoten.firstChild.data.strip()
if knoten.nodeName=="answers":
answer_elem_1=[]
for knoten_answer in knoten.childNodes:
#here i have to check for optional.
if knoten_answer.nodeName=="answer":
answer_elem=[]
for knoten_answer1 in knoten_answer.childNodes:
for id_loesung,tag_loesung in choosen_tag.iteritems():
if(id==id_loesung):
###########################
#
#
# In QALD3 only uri/boolean/number and date are allowed, so string is "turned off"
#
#
###########################
if knoten_answer1.nodeName == "string" and choosen_tag[id]=="string":
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "boolean" and choosen_tag[id]=="boolean":
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "number"and choosen_tag[id]=="number":
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "date" and choosen_tag[id]=="date":
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "uri" and choosen_tag[id]=="uri":
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
answer_elem_1.append(answer_elem)
answer.append(answer_elem_1)
# print(answer)
d[id] = [query,question_text,answer]
# print str(d)
return d
def bearbeite_baum(dateiname):
# insert line breaks so that the parser can cope with the document more easily later
fobj = open(dateiname, "r")
string=""
for line1 in fobj:
line=str(line1)
line=line.replace('<question','\n<question')
#line=line.replace('<string>','\n<string>')
line=line.replace('</string>','</string>\n')
line=line.replace('</keywords>','</keywords>\n')
line=line.replace('</query>','</query>\n')
line=line.replace('<answers>','<answers>\n')
line=line.replace('<answer>','<answer>\n')
line=line.replace('</answer>','</answer>\n')
line=line.replace('</answers>','</answers>\n')
line=line.replace('</uri>','</uri>\n')
line=line.replace('</boolean>','</boolean>\n')
line=line.replace('</number>','</number>\n')
line=line.replace('</date>','</date>\n')
#line=line.replace('&','&')
string+=line
fobj.close()
# print string
fobj = open(dateiname, "w")
fobj.write(string)
fobj.close()
def lade_baum(dateiname):
d = {}
bearbeite_baum(dateiname)
global choosen_tag
global testing
# print "after bearbeite baum"
baum = dom.parse(dateiname.encode( "utf-8" ))
zaehler=1
# print "after parsing baum"
for eintrag in baum.firstChild.childNodes:
if(zaehler==1):
knoten_id=((eintrag.parentNode).attributes["id"]).value
zaehler=2
# print "after 1"
if eintrag.nodeName == "question":
# print "in question"
id=(eintrag.attributes["id"]).value
# print "id: "+str(id)
question_text = query = None
answer=[]
for knoten in eintrag.childNodes: #
# print "in for knoten in eintrag.childNodes: "
if knoten.nodeName == "text" or knoten.nodeName == "string":
if (knoten.attributes["lang"]).value == "en":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "de":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "es":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "it":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "fr":
question_text = _knoten_auslesen(knoten)
elif (knoten.attributes["lang"]).value == "nl":
question_text = _knoten_auslesen(knoten)
# print str(question_txt)
# elif knoten.nodeName == "query":
# query=knoten.firstChild.data.strip()
elif knoten.nodeName=="answers":
try:
answer_elem_1=[]
for knoten_answer in knoten.childNodes:
if knoten_answer.nodeName=="answer":
answer_elem=[]
###########################
#
#
# In QALD3 only uri/boolean/number and date are allowed, so string is "turned off"
#
#
###########################
mehr_als_ein_typ=False
eins=zwei=None
eins=((knoten_answer.childNodes).item(1)).nodeName
if((knoten_answer.childNodes).item(3)):
zwei=((knoten_answer.childNodes).item(3)).nodeName
else:
zwei= None
if(eins==zwei or zwei==None):
mehr_als_ein_typ=False
choosen_tag[id]=((knoten_answer.childNodes).item(1)).nodeName
else:
mehr_als_ein_typ=True
#choosen_tag[id]="string"
choosen_tag[id]="uri"
for knoten_answer1 in knoten_answer.childNodes:
if(knoten_answer1.nodeName!="#text"):
if knoten_answer1.nodeName == "string" and mehr_als_ein_typ==False:
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "boolean" and mehr_als_ein_typ==False:
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "number" and mehr_als_ein_typ==False:
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "date" and mehr_als_ein_typ==False:
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
if knoten_answer1.nodeName == "uri" and mehr_als_ein_typ==False:
try:
answer_elem.append(knoten_answer1.firstChild.data.strip())
except Exception:
answer_elem.append(" ")
#if knoten_answer1.nodeName == choosen_tag[id] and mehr_als_ein_typ==True:
# try:
# answer_elem.append(knoten_answer1.firstChild.data.strip())
# except Exception:
# answer_elem.append(" ")
answer_elem_1.append(answer_elem)
except Exception as inst:
error= "<!doctype html> <html> <head> <title>ERROR</title></head> <body> <p>"+str(type(inst))+"</p><p>"+str(inst.args)+"</p><p>"+str(inst)+"</p><p>"+id+"</p><p>PLEASE CHECK YOUR XML FILE</p></body></html>"
outfile=open(filename_out_html,"w")
# _ausgabe_(filename_out_html)
outfile.write(error)
outfile.close()
choosen_tag[id]="string"
answer_elem_1.append("ERROR IN FILE")
# print "Unexpected error:", sys.exc_info()[0]
# print "9"
answer.append(answer_elem_1)
d[question_text] = [query,id,answer]
# print str(d)
return d
def sortedDictValues2(adict):
keys = adict.keys()
keys.sort()
return [adict[key] for key in keys]
def _evaluation(loesung, musterloesung, task):
anzahl_bearbeiteter_fragen=0
anzahl_korrekter_antworten=0
anzahl_falscher_antworten=0
falsche_antworten=[]
anzahl_bearbeiteter_fragen=len(loesung)
bewertung_ausgabe={}
#number_answers_goldstandard = 0
number_answers_user = 0
#for question_text, query_loesung in musterloesung.iteritems():
# gold_loesung1=query_loesung[2]
# gold_loesung=gold_loesung1[0]
# number_answer_goldstandard += len(gold_loesung)
for question_text, query_loesung in loesung.iteritems():
anzahl_falscher_frageelemente=anzahl_richtiger_frageelemente=0
R=P=F=0
# print question_text
# print
# print str(query_loesung[2])
answer_loesung1=query_loesung[2]
answer_loesung=answer_loesung1[0]
number_answers_user += len(answer_loesung)
loesung_id=query_loesung[1]
answer_musterloesung1=musterloesung[loesung_id]
answer_musterloesung2=answer_musterloesung1[2]
answer_musterloesung=answer_musterloesung2[0]
#print "user: "+str(answer_loesung)
#print "gold: "+str(answer_musterloesung)
if len(answer_musterloesung) == len(answer_loesung) and len(answer_loesung) == 0:
bewertung_ausgabe[loesung_id]=[question_text,str(1.0),str(1.0),str(1.0)]
anzahl_korrekter_antworten+=1
elif(len(answer_loesung)==0):
# anzahl_falscher_fragen+=1
anzahl_falscher_antworten+=1
falsche_antworten.append(loesung_id)
R=P=F=0
bewertung_ausgabe[loesung_id]=[question_text,str(R),str(P),str(F)]
else:
if(len(answer_musterloesung)>len(answer_loesung)):
anzahl_falscher_antworten+=1
anzahl_falscher_frageelemente+=(len(answer_musterloesung)-len(answer_loesung))
falsche_antworten.append(loesung_id)
for i in range(0,len(answer_loesung)):
for j in range(0,len(answer_musterloesung)):
if(answer_loesung[i]==answer_musterloesung[j]):
anzahl_richtiger_frageelemente+=1
break
if(anzahl_richtiger_frageelemente==0):
R=F=P=0
else:
R1=Decimal(anzahl_richtiger_frageelemente)
R2=Decimal(len(answer_musterloesung))
R=round((R1/R2),5)
P1=R1
P2=Decimal(len(answer_loesung))
P=round((P1/P2),5)
F=round(((2*P*R)/(R+P)),5)
bewertung_ausgabe[loesung_id]=[question_text,str(R),str(P),str(F)]
else:
for i in range(0,len(answer_loesung)):
for j in range(0,len(answer_musterloesung)):
if(answer_loesung[i]==answer_musterloesung[j]):
anzahl_richtiger_frageelemente+=1
break
if(anzahl_richtiger_frageelemente==len(answer_loesung)):
anzahl_korrekter_antworten+=1
else:
anzahl_falscher_antworten+=1
falsche_antworten.append(loesung_id)
if(anzahl_richtiger_frageelemente==0):
R=F=P=0
else:
R1=Decimal(anzahl_richtiger_frageelemente)
R2=Decimal(len(answer_musterloesung))
R=round((R1/R2),5)
P1=R1
P2=Decimal(len(answer_loesung))
P=round((P1/P2),5)
F=round(((2*P*R)/(R+P)),5)
bewertung_ausgabe[loesung_id]=[question_text,str(R),str(P),str(F)]
if(anzahl_korrekter_antworten==0):
fmeasure=recall=precision=0
else:
wert1=Decimal(anzahl_korrekter_antworten)
wert2=Decimal(anzahl_bearbeiteter_fragen)
recall=round(((wert1/len(musterloesung))),5)
precision=round(((wert1/wert2)),5)
fmeasure=round(((2*recall*precision)/(recall+precision)),5)
recall=str(recall)
precision=str(precision)
fmeasure=str(fmeasure)
number_correct_user_answers = anzahl_bearbeiteter_fragen
anzahl_bearbeiteter_fragen=str(anzahl_bearbeiteter_fragen)
anzahl_korrekter_antworten=str(anzahl_korrekter_antworten)
anzahl_falscher_antworten=str(anzahl_falscher_antworten)
############################################################################################
#                                                                                          #
# Recall    = overall number of correct answers / overall number of gold-standard answers  #
# Precision = overall number of correct answers / overall number of all answers (given XML)#
# F-Measure = (2 * Recall * Precision) / (Recall + Precision)                              #
#                                                                                          #
############################################################################################
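# Worked example of the formulas above (illustrative numbers, not from a real run):
# with 7 correct answers, 10 gold-standard questions and 8 constructed queries,
# Recall = 7/10 = 0.7, Precision = 7/8 = 0.875 and
# F-Measure = (2 * 0.7 * 0.875) / (0.7 + 0.875) = 0.77778 (rounded to 5 places).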
global_precision=0.0
global_recall=0.0
global_fmeasure=0.0
for id,value in bewertung_ausgabe.iteritems():
tmp = id +";"
x = value[0]
x = x.decode("ascii","ignore")
tmp += x +";"
tmp += str(value[2])+";"
tmp += str(value[1])+";"
tmp += str(value[3])+";"
#print"tmp: "+ tmp
#tmp = (id+";"+str(value[0])+";"+str(value[2])+";"+str(value[1])+";"+str(value[3])+"\n").encode("utf-8")
string = "qald-4_"
if task == '1': string += "multilingual"
if task == '2': string += "biomedical"
if task == '3': string += "hybrid"
string += tmp
global_precision += float(value[2])
global_recall += float(value[1])
if global_recall == 0.0 or global_precision == 0.0:
global_precision = str(0)
global_recall = str(0)
global_fmeasure = str(0)
else:
global_precision = global_precision/len(musterloesung)
global_recall = global_recall/len(musterloesung)
global_fmeasure=str((2*global_recall*global_precision)/(global_precision + global_recall))
global_precision = str(global_precision)
global_recall = str(global_recall)
write_html(string,anzahl_falscher_antworten,anzahl_korrekter_antworten,anzahl_bearbeiteter_fragen,global_fmeasure,global_precision,global_recall,bewertung_ausgabe,falsche_antworten)
def write_txt(anzahl_falscher_antworten,anzahl_korrekter_antworten,anzahl_bearbeiteter_fragen,fmeasure,precision,recall,bewertung_ausgabe,falsche_antworten):
#global system_name, configuration
bla=""
bla=system_name+";"+configuration+"\n"
globale_uebersicht_txt= anzahl_bearbeiteter_fragen+";"+anzahl_korrekter_antworten+";"+anzahl_falscher_antworten+";"+recall+";"+precision+";"+fmeasure+"\n"
string=""
for id,answer in bewertung_ausgabe.iteritems():
question = answer[0]
question = question.decode("ascii","ignore")
string += id+";"+question+";"+answer[1]+";"+answer[2]+";"+answer[3]+"\n"
outfile=open(filename_out_txt,"w")
outfile.write(bla+globale_uebersicht_txt+string)
outfile.close()
_ausgabe_(filename_out_txt)
def write_html(string,anzahl_falscher_antworten,anzahl_korrekter_antworten,anzahl_bearbeiteter_fragen,fmeasure,precision,recall,bewertung_ausgabe,falsche_antworten):
tabelle3="<table class=\"eval\" border=\"1\"><tr><th>Failed questions (IDs)</th></tr>"
string_question =""
for i in range(0,len(falsche_antworten)):
string_question+="<tr><td>"+str(falsche_antworten[i])+"</td></tr>"
end_tabelle3="</table>"
start_table= "<!doctype html> <html> <head> <title>Evaluation of "+string+"</title></head> <body> <p>Evaluation</p><p>Skript Version 5.5</p>"
space="<p></p><p></p><p></p><p></p><p></p>"
tabelle1="<table class=\"eval\" border=\"1\"><tr><th>ID</th><th>Question</th><th>Recall</th><th>Precision</th><th>F-Measure</th></tr>"
tabelle2="<table class=\"eval\" border=\"1\"><tr><th>Number of constructed Queries</th><th>Number of correct Answers</th><th>Number of wrong Answers</th><th>Global Recall</th><th>Global Precision</th><th>Global F-Measure</th></tr>"
inhalt_tabelle2="<tr><td>"+anzahl_bearbeiteter_fragen+"</td><td>"+anzahl_korrekter_antworten+"</td><td>"+anzahl_falscher_antworten+"</td><td>"+recall+"</td><td>"+precision+"</td><td>"+fmeasure+"</td></tr>"
end_tabelle2="</table>"
end_tabelle1="</table>"
ende="</body> </html>"
string=""
for id,answer in bewertung_ausgabe.iteritems():
question = answer[0]
question = question.decode("ascii","ignore")
string_bla="<tr><td>"+id+"</td><td>"+question+"</td><td>"+answer[1]+"</td><td>"+answer[2]+"</td><td>"+answer[3]+"</td></tr>"
string+=string_bla
outfile=open(filename_out_html,"w")
outfile.write(start_table+space+tabelle2+inhalt_tabelle2+end_tabelle2+space+tabelle1+string+end_tabelle1+space+tabelle3+string_question+end_tabelle3+ende)
outfile.close()
_ausgabe_(filename_out_html)
################### MAIN ##################################################
def main():
global system_time, testing, task
system_time = datetime.datetime.now()
set_filename_out(system_time)
set_filename_txt_out(system_time)
#print system_time
#print filename_out_html
# Train or Test
if sys.argv[2] == "test":
testing = True
else:
testing = False
# Task
task = sys.argv[3]
# Set gold standard
gold = '../data/qald-4_'
if task == '1': gold += 'multilingual'
elif task == '2': gold += 'biomedical'
elif task == '3': gold += 'hybrid'
if testing: gold += '_test'
else: gold += '_train'
gold += '_withanswers.xml'
import urllib
dateiname=sys.argv[1]
if (len(sys.argv)>=6):
set_system_name(sys.argv[4])
set_configuration(sys.argv[5])
else:
set_system_name("None")
set_configuration("None")
loesung=None
try:
loesung=lade_baum(dateiname)
except Exception as inst:
error= "<!doctype html> <html> <head> <title>ERROR</title></head> <body> <p>"+str(type(inst))+"</p><p>"+str(inst.args)+"</p><p>"+str(inst)+"</p><p>PLEASE CHECK YOUR XML FILE</p></body></html>"
outfile=open(filename_out_html,"w")
outfile.write(error)
outfile.close()
_ausgabe_(filename_out_html)
# print "Unexpected error:", sys.exc_info()[0]
# print "8"
gstandard_imported=True
try:
musterloesung=lade_musterloesung(urllib.urlopen(gold))
except Exception as inst:
error= "<!doctype html> <html> <head> <title>ERROR</title></head> <body> <p>"+str(type(inst))+"</p><p>"+str(inst.args)+"</p><p>"+str(inst)+"</p></body></html>"
write_error(error)
# print "Unexpected error:", sys.exc_info()[0]
# print "7"
else:
_evaluation(loesung,musterloesung,task)
# print "Unexpected error:", sys.exc_info()[0]
# print "6"
def write_error(error):
global filename_out_html
outfile=open(filename_out_html,"w")
outfile.write(error)
outfile.close()
_ausgabe_(filename_out_html)
if __name__ == "__main__":
main()
| mit |
Re4son/Kali-Pi | Menus/menu_pause.py | 1 | 1785 | #!/usr/bin/env python
import pygame, os, sys, subprocess, time
import RPi.GPIO as GPIO
from pygame.locals import *
from subprocess import *
if "TFT" in os.environ and os.environ["TFT"] == "0":
# No TFT screen
SCREEN=0
pass
elif "TFT" in os.environ and os.environ["TFT"] == "2":
# TFT screen with mouse
SCREEN=2
os.environ["SDL_FBDEV"] = "/dev/fb1"
elif "TFT" in os.environ and os.environ["TFT"] == "3":
# HDMI touchscreen
SCREEN=3
os.environ["SDL_FBDEV"] = "/dev/fb0"
os.environ["SDL_MOUSEDEV"] = "/dev/input/touchscreen"
os.environ["SDL_MOUSEDRV"] = "TSLIB"
elif "TFT" in os.environ and os.environ["TFT"] == "4":
# Raspberry Pi 7" touchscreen
SCREEN=4
from ft5406 import Touchscreen
os.environ["SDL_FBDEV"] = "/dev/fb0"
ts = Touchscreen()
else:
# TFT touchscreen
SCREEN=1
os.environ["SDL_FBDEV"] = "/dev/fb1"
os.environ["SDL_MOUSEDEV"] = "/dev/input/touchscreen"
os.environ["SDL_MOUSEDRV"] = "TSLIB"
# Initialize pygame modules individually (to avoid ALSA errors) and hide mouse
pygame.font.init()
pygame.display.init()
pygame.mouse.set_visible(0)
# Initialise GPIO
GPIO.setwarnings(False)
#While loop to manage touch screen inputs
state = [False for x in range(10)]
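# One flag per touch slot; the ft5406 driver reports up to ten concurrent touches.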
while 1:
if SCREEN==4:
for touch in ts.poll():
if state[touch.slot] != touch.valid:
if touch.valid:
sys.exit()
else:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONDOWN:
sys.exit()
#Debug:
#ensure there is always a safe way to end the program if the touch screen fails
##if event.type == KEYDOWN:
## if event.key == K_ESCAPE:
## sys.exit()
time.sleep(0.4)
| gpl-3.0 |
ZotPlus/zotero-better-bibtex | util/scrub-profile.py | 2 | 3327 | #!/usr/bin/env python3
import os
import glob
import shutil
import fileinput
import json
import sys
from configparser import ConfigParser
root = os.path.expanduser('~/.BBTZ5TEST')
false = False
true = True
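# prefs.js consists of lines such as: user_pref("extensions.foo", true);
# The false/true aliases above plus the user_pref() shim below let eval() run
# each such line as Python and capture the key/value pair for filtering.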
def user_pref(key, value):
global user_pref_key
global user_pref_value
user_pref_key = key
user_pref_value = value
for client in ['zotero', 'jurism']:
bbt = os.path.join(root, client, 'better-bibtex')
if os.path.exists(bbt): shutil.rmtree(bbt)
bbt = os.path.join(root, client, 'better-bibtex.sqlite')
if os.path.exists(bbt): os.remove(bbt)
for bbt in glob.glob(os.path.join(root, client, 'translators', '*.js')):
name = os.path.basename(bbt)
if name.startswith('Better') or name == 'Collected notes.js' or name == 'Citation graph.js':
os.remove(bbt)
for bbt in ['better-bibtex', 'debug-bridge']:
bbt = os.path.join(root, f'extensions/{bbt}@iris-advies.com')
if os.path.exists(bbt): shutil.rmtree(bbt)
prefs = os.path.join(root, 'prefs.js')
with open(prefs) as f:
lines = f.readlines()
with open(prefs, 'w') as f:
for line in lines:
if line.startswith('user_pref("'):
js = line.strip()
if js[-1] == ';': js = js[:-1]
eval(js)
if line.startswith('user_pref("extensions.zotero.translators.better-bibtex.'): continue
if line.startswith('user_pref("extensions.xpiState",'):
xpiState = json.loads(user_pref_value)
if 'app-profile' in xpiState:
xpiState['app-profile'].pop('debug-bridge@iris-advies.com', None)
xpiState['app-profile'].pop('better-bibtex@iris-advies.com', None)
if len(xpiState['app-profile']) == 0: xpiState.pop('app-profile')
print(f'user_pref({json.dumps(user_pref_key)}, {json.dumps(json.dumps(xpiState))});', file=f)
continue
if line.startswith('user_pref("extensions.enabledAddons",'):
enabledAddons = [addon for addon in user_pref_value.split(',') if not addon.startswith('debug-bridge%40iris-advies.com') and not addon.startswith('better-bibtex%40iris-advies.com')]
print(f'user_pref({json.dumps(user_pref_key)}, {json.dumps(",".join(enabledAddons))});', file=f)
continue
if line.startswith('user_pref("extensions.zotero.pane.persist"'):
persist = json.loads(user_pref_value)
persist.pop('zotero-items-column-citekey', None)
print(f'user_pref({json.dumps(user_pref_key)}, {json.dumps(json.dumps(persist))});', file=f)
continue
print(line, file=f, end='')
_extensions = os.path.join(root, 'extensions.json')
with open(_extensions) as f:
extensions = json.load(f)
with open(_extensions, 'w') as f:
extensions['addons'] = [ext for ext in extensions['addons'] if ext['id'] not in ['debug-bridge@iris-advies.com', 'better-bibtex@iris-advies.com']]
f.write(json.dumps(extensions))
_extensions = os.path.join(root, 'extensions.ini')
config = ConfigParser()
config.read(_extensions)
for section in config.sections():
if section in ['ExtensionDirs', 'MultiprocessIncompatibleExtensions']:
for key in config[section]:
if os.path.basename(config[section][key]) in ['debug-bridge@iris-advies.com', 'better-bibtex@iris-advies.com']:
del config[section][key]
with open(_extensions, 'w') as f:
config.write(f)
| unlicense |
grantsewell/nzbToMedia | libs/beets/autotag/match.py | 18 | 17713 | # This file is part of beets.
# Copyright 2013, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Matches existing metadata with canonical information to identify
releases and tracks.
"""
from __future__ import division
import datetime
import logging
import re
from munkres import Munkres
from beets import plugins
from beets import config
from beets.util import plurality
from beets.util.enumeration import enum
from beets.autotag import hooks
# Recommendation enumeration.
recommendation = enum('none', 'low', 'medium', 'strong', name='recommendation')
# Artist signals that indicate "various artists". These are used at the
# album level to determine whether a given release is likely a VA
# release and also on the track level to remove the penalty for
# differing artists.
VA_ARTISTS = (u'', u'various artists', u'various', u'va', u'unknown')
# Global logger.
log = logging.getLogger('beets')
# Primary matching functionality.
def current_metadata(items):
"""Extract the likely current metadata for an album given a list of its
items. Return two dictionaries:
- The most common value for each field.
- Whether each field's value was unanimous (values are booleans).
"""
assert items # Must be nonempty.
likelies = {}
consensus = {}
fields = ['artist', 'album', 'albumartist', 'year', 'disctotal',
'mb_albumid', 'label', 'catalognum', 'country', 'media',
'albumdisambig']
for key in fields:
values = [getattr(item, key) for item in items if item]
likelies[key], freq = plurality(values)
consensus[key] = (freq == len(values))
# If there's an album artist consensus, use this for the artist.
if consensus['albumartist'] and likelies['albumartist']:
likelies['artist'] = likelies['albumartist']
return likelies, consensus
def assign_items(items, tracks):
"""Given a list of Items and a list of TrackInfo objects, find the
best mapping between them. Returns a mapping from Items to TrackInfo
objects, a set of extra Items, and a set of extra TrackInfo
objects. These "extra" objects occur when there is an unequal number
of objects of the two types.
"""
# Construct the cost matrix.
costs = []
for item in items:
row = []
for i, track in enumerate(tracks):
row.append(track_distance(item, track))
costs.append(row)
# Find a minimum-cost bipartite matching.
matching = Munkres().compute(costs)
# Produce the output matching.
mapping = dict((items[i], tracks[j]) for (i, j) in matching)
extra_items = list(set(items) - set(mapping.keys()))
extra_items.sort(key=lambda i: (i.disc, i.track, i.title))
extra_tracks = list(set(tracks) - set(mapping.values()))
extra_tracks.sort(key=lambda t: (t.index, t.title))
return mapping, extra_items, extra_tracks
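# Minimal sketch of the matching step above (illustrative cost values): Munkres
# solves the minimum-cost bipartite assignment on the cost matrix and returns
# (row, column) index pairs.
# >>> Munkres().compute([[0.1, 0.9], [0.8, 0.2]])
# [(0, 0), (1, 1)]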
def track_index_changed(item, track_info):
"""Returns True if the item and track info index is different. Tolerates
per disc and per release numbering.
"""
return item.track not in (track_info.medium_index, track_info.index)
def track_distance(item, track_info, incl_artist=False):
"""Determines the significance of a track metadata change. Returns a
Distance object. `incl_artist` indicates that a distance component should
be included for the track artist (i.e., for various-artist releases).
"""
dist = hooks.Distance()
# Length.
if track_info.length:
diff = abs(item.length - track_info.length) - \
config['match']['track_length_grace'].as_number()
dist.add_ratio('track_length', diff,
config['match']['track_length_max'].as_number())
# Title.
dist.add_string('track_title', item.title, track_info.title)
# Artist. Only check if there is actually an artist in the track data.
if incl_artist and track_info.artist and \
item.artist.lower() not in VA_ARTISTS:
dist.add_string('track_artist', item.artist, track_info.artist)
# Track index.
if track_info.index and item.track:
dist.add_expr('track_index', track_index_changed(item, track_info))
# Track ID.
if item.mb_trackid:
dist.add_expr('track_id', item.mb_trackid != track_info.track_id)
# Plugins.
dist.update(plugins.track_distance(item, track_info))
return dist
def distance(items, album_info, mapping):
"""Determines how "significant" an album metadata change would be.
Returns a Distance object. `album_info` is an AlbumInfo object
reflecting the album to be compared. `items` is a sequence of all
Item objects that will be matched (order is not important).
`mapping` is a dictionary mapping Items to TrackInfo objects; the
keys are a subset of `items` and the values are a subset of
`album_info.tracks`.
"""
likelies, _ = current_metadata(items)
dist = hooks.Distance()
# Artist, if not various.
if not album_info.va:
dist.add_string('artist', likelies['artist'], album_info.artist)
# Album.
dist.add_string('album', likelies['album'], album_info.album)
# Current or preferred media.
if album_info.media:
# Preferred media options.
patterns = config['match']['preferred']['media'].as_str_seq()
options = [re.compile(r'(\d+x)?(%s)' % pat, re.I) for pat in patterns]
if options:
dist.add_priority('media', album_info.media, options)
# Current media.
elif likelies['media']:
dist.add_equality('media', album_info.media, likelies['media'])
# Mediums.
if likelies['disctotal'] and album_info.mediums:
dist.add_number('mediums', likelies['disctotal'], album_info.mediums)
# Prefer earliest release.
if album_info.year and config['match']['preferred']['original_year']:
# Assume 1889 (the year of the first gramophone discs) if we don't know the
# original year.
original = album_info.original_year or 1889
diff = abs(album_info.year - original)
diff_max = abs(datetime.date.today().year - original)
dist.add_ratio('year', diff, diff_max)
# Year.
elif likelies['year'] and album_info.year:
if likelies['year'] in (album_info.year, album_info.original_year):
# No penalty for matching release or original year.
dist.add('year', 0.0)
elif album_info.original_year:
# Prefer matches closest to the release year.
diff = abs(likelies['year'] - album_info.year)
diff_max = abs(datetime.date.today().year -
album_info.original_year)
dist.add_ratio('year', diff, diff_max)
else:
# Full penalty when there is no original year.
dist.add('year', 1.0)
# Preferred countries.
patterns = config['match']['preferred']['countries'].as_str_seq()
options = [re.compile(pat, re.I) for pat in patterns]
if album_info.country and options:
dist.add_priority('country', album_info.country, options)
# Country.
elif likelies['country'] and album_info.country:
dist.add_string('country', likelies['country'], album_info.country)
# Label.
if likelies['label'] and album_info.label:
dist.add_string('label', likelies['label'], album_info.label)
# Catalog number.
if likelies['catalognum'] and album_info.catalognum:
dist.add_string('catalognum', likelies['catalognum'],
album_info.catalognum)
# Disambiguation.
if likelies['albumdisambig'] and album_info.albumdisambig:
dist.add_string('albumdisambig', likelies['albumdisambig'],
album_info.albumdisambig)
# Album ID.
if likelies['mb_albumid']:
dist.add_equality('album_id', likelies['mb_albumid'],
album_info.album_id)
# Tracks.
dist.tracks = {}
for item, track in mapping.iteritems():
dist.tracks[track] = track_distance(item, track, album_info.va)
dist.add('tracks', dist.tracks[track].distance)
# Missing tracks.
for i in range(len(album_info.tracks) - len(mapping)):
dist.add('missing_tracks', 1.0)
# Unmatched tracks.
for i in range(len(items) - len(mapping)):
dist.add('unmatched_tracks', 1.0)
# Plugins.
dist.update(plugins.album_distance(items, album_info, mapping))
return dist
def match_by_id(items):
"""If the items are tagged with a MusicBrainz album ID, returns an
AlbumInfo object for the corresponding album. Otherwise, returns
None.
"""
# Is there a consensus on the MB album ID?
albumids = [item.mb_albumid for item in items if item.mb_albumid]
if not albumids:
log.debug('No album IDs found.')
return None
# If all album IDs are equal, look up the album.
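# reduce() collapses the list to the shared ID when all elements are equal, and to () otherwise.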
if bool(reduce(lambda x,y: x if x==y else (), albumids)):
albumid = albumids[0]
log.debug('Searching for discovered album ID: ' + albumid)
return hooks.album_for_mbid(albumid)
else:
log.debug('No album ID consensus.')
def _recommendation(results):
"""Given a sorted list of AlbumMatch or TrackMatch objects, return a
recommendation based on the results' distances.
If the recommendation is higher than the configured maximum for
an applied penalty, the recommendation will be downgraded to the
configured maximum for that penalty.
"""
if not results:
# No candidates: no recommendation.
return recommendation.none
# Basic distance thresholding.
min_dist = results[0].distance
if min_dist < config['match']['strong_rec_thresh'].as_number():
# Strong recommendation level.
rec = recommendation.strong
elif min_dist <= config['match']['medium_rec_thresh'].as_number():
# Medium recommendation level.
rec = recommendation.medium
elif len(results) == 1:
# Only a single candidate.
rec = recommendation.low
elif results[1].distance - min_dist >= \
config['match']['rec_gap_thresh'].as_number():
# Gap between first two candidates is large.
rec = recommendation.low
else:
# No conclusion. Return immediately. Can't be downgraded any further.
return recommendation.none
# Downgrade to the max rec if it is lower than the current rec for an
# applied penalty.
keys = set(min_dist.keys())
if isinstance(results[0], hooks.AlbumMatch):
for track_dist in min_dist.tracks.values():
keys.update(track_dist.keys())
max_rec_view = config['match']['max_rec']
for key in keys:
if key in max_rec_view.keys():
max_rec = max_rec_view[key].as_choice({
'strong': recommendation.strong,
'medium': recommendation.medium,
'low': recommendation.low,
'none': recommendation.none,
})
rec = min(rec, max_rec)
return rec
def _add_candidate(items, results, info):
"""Given a candidate AlbumInfo object, attempt to add the candidate
to the output dictionary of AlbumMatch objects. This involves
checking the track count, ordering the items, checking for
duplicates, and calculating the distance.
"""
log.debug('Candidate: %s - %s' % (info.artist, info.album))
# Don't duplicate.
if info.album_id in results:
log.debug('Duplicate.')
return
# Find mapping between the items and the track info.
mapping, extra_items, extra_tracks = assign_items(items, info.tracks)
# Get the change distance.
dist = distance(items, info, mapping)
# Skip matches with ignored penalties.
penalties = [key for _, key in dist]
for penalty in config['match']['ignored'].as_str_seq():
if penalty in penalties:
log.debug('Ignored. Penalty: %s' % penalty)
return
log.debug('Success. Distance: %f' % dist)
results[info.album_id] = hooks.AlbumMatch(dist, info, mapping,
extra_items, extra_tracks)
def tag_album(items, search_artist=None, search_album=None,
search_id=None):
"""Bundles together the functionality used to infer tags for a
set of items comprised by an album. Returns everything relevant:
- The current artist.
- The current album.
- A list of AlbumMatch objects. The candidates are sorted by
distance (i.e., best match first).
- A recommendation.
If search_artist and search_album or search_id are provided, then
they are used as search terms in place of the current metadata.
"""
# Get current metadata.
likelies, consensus = current_metadata(items)
cur_artist = likelies['artist']
cur_album = likelies['album']
log.debug('Tagging %s - %s' % (cur_artist, cur_album))
# The output result (distance, AlbumInfo) tuples (keyed by MB album
# ID).
candidates = {}
# Search by explicit ID.
if search_id is not None:
log.debug('Searching for album ID: ' + search_id)
search_cands = hooks.albums_for_id(search_id)
# Use existing metadata or text search.
else:
# Try search based on current ID.
id_info = match_by_id(items)
if id_info:
_add_candidate(items, candidates, id_info)
rec = _recommendation(candidates.values())
log.debug('Album ID match recommendation is ' + str(rec))
if candidates and not config['import']['timid']:
# If we have a very good MBID match, return immediately.
# Otherwise, this match will compete against metadata-based
# matches.
if rec == recommendation.strong:
log.debug('ID match.')
return cur_artist, cur_album, candidates.values(), rec
# Search terms.
if not (search_artist and search_album):
# No explicit search terms -- use current metadata.
search_artist, search_album = cur_artist, cur_album
log.debug(u'Search terms: %s - %s' % (search_artist, search_album))
# Is this album likely to be a "various artist" release?
va_likely = ((not consensus['artist']) or
(search_artist.lower() in VA_ARTISTS) or
any(item.comp for item in items))
log.debug(u'Album might be VA: %s' % str(va_likely))
# Get the results from the data sources.
search_cands = hooks.album_candidates(items, search_artist,
search_album, va_likely)
log.debug(u'Evaluating %i candidates.' % len(search_cands))
for info in search_cands:
_add_candidate(items, candidates, info)
# Sort and get the recommendation.
candidates = sorted(candidates.itervalues())
rec = _recommendation(candidates)
return cur_artist, cur_album, candidates, rec
def tag_item(item, search_artist=None, search_title=None,
search_id=None):
"""Attempts to find metadata for a single track. Returns a
`(candidates, recommendation)` pair where `candidates` is a list of
TrackMatch objects. `search_artist` and `search_title` may be used
to override the current metadata for the purposes of the MusicBrainz
title; likewise `search_id`.
"""
# Holds candidates found so far: keys are MBIDs; values are
# (distance, TrackInfo) pairs.
candidates = {}
# First, try matching by MusicBrainz ID.
trackid = search_id or item.mb_trackid
if trackid:
log.debug('Searching for track ID: ' + trackid)
for track_info in hooks.tracks_for_id(trackid):
dist = track_distance(item, track_info, incl_artist=True)
candidates[track_info.track_id] = \
hooks.TrackMatch(dist, track_info)
# If this is a good match, then don't keep searching.
rec = _recommendation(candidates.values())
if rec == recommendation.strong and not config['import']['timid']:
log.debug('Track ID match.')
return candidates.values(), rec
# If we're searching by ID, don't proceed.
if search_id is not None:
if candidates:
return candidates.values(), rec
else:
return [], recommendation.none
# Search terms.
if not (search_artist and search_title):
search_artist, search_title = item.artist, item.title
log.debug(u'Item search terms: %s - %s' % (search_artist, search_title))
# Get and evaluate candidate metadata.
for track_info in hooks.item_candidates(item, search_artist, search_title):
dist = track_distance(item, track_info, incl_artist=True)
candidates[track_info.track_id] = hooks.TrackMatch(dist, track_info)
# Sort by distance and return with recommendation.
log.debug('Found %i candidates.' % len(candidates))
candidates = sorted(candidates.itervalues())
rec = _recommendation(candidates)
return candidates, rec
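# Typical call pattern for the two entry points above (an illustrative sketch;
# `items` is assumed to hold the library Items of a single album):
# cur_artist, cur_album, candidates, rec = tag_album(items)
# if rec == recommendation.strong:
#     best_match = candidates[0] # AlbumMatch; candidates are sorted best-first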
| gpl-3.0 |
foobarbazblarg/stayclean | stayclean-2016-april/serve-signups-with-flask.py | 1 | 8086 | #!/usr/bin/python
import subprocess
import praw
from hashlib import sha1
from flask import Flask
from flask import Response
from flask import request
from cStringIO import StringIO
from base64 import b64encode
from base64 import b64decode
from ConfigParser import ConfigParser
import OAuth2Util
import os
import markdown
import bleach
# encoding=utf8
import sys
from participantCollection import ParticipantCollection
reload(sys)
sys.setdefaultencoding('utf8')
# Edit Me!
# Each day after you post a signup post, copy its 6-character ID to this array.
signupPageSubmissionIds = [ '4bvb7i', '4c1crs', '4c5lvg', '4ca9ff', '4cf91t', '4ckta7', '4cp4ir' ]
flaskport = 8883
app = Flask(__name__)
app.debug = True
commentHashesAndComments = {}
def loginAndReturnRedditSession():
config = ConfigParser()
config.read("../reddit-password-credentials.cfg")
user = config.get("Reddit", "user")
password = config.get("Reddit", "password")
# TODO: password auth is going away, and we will soon need to do oauth.
redditSession = praw.Reddit(user_agent='Test Script by /u/foobarbazblarg')
redditSession.login(user, password, disable_warning=True)
# submissions = redditSession.get_subreddit('pornfree').get_hot(limit=5)
# print [str(x) for x in submissions]
return redditSession
def loginOAuthAndReturnRedditSession():
redditSession = praw.Reddit(user_agent='Test Script by /u/foobarbazblarg')
o = OAuth2Util.OAuth2Util(redditSession, print_log=True, configfile="../reddit-oauth-credentials.cfg")
o.refresh(force=True)
return redditSession
def getSubmissionsForRedditSession(redditSession):
submissions = [redditSession.get_submission(submission_id=submissionId) for submissionId in signupPageSubmissionIds]
for submission in submissions:
submission.replace_more_comments(limit=None, threshold=0)
return submissions
def getCommentsForSubmissions(submissions):
comments = []
for submission in submissions:
comments += praw.helpers.flatten_tree(submission.comments)
return comments
def retireCommentHash(commentHash):
with open("retiredcommenthashes.txt", "a") as commentHashFile:
commentHashFile.write(commentHash + '\n')
def retiredCommentHashes():
with open("retiredcommenthashes.txt", "r") as commentHashFile:
# return commentHashFile.readlines()
return commentHashFile.read().splitlines()
@app.route('/moderatesignups.html')
def moderatesignups():
global commentHashesAndComments
commentHashesAndComments = {}
stringio = StringIO()
stringio.write('<html>\n<head>\n</head>\n\n')
# redditSession = loginAndReturnRedditSession()
redditSession = loginOAuthAndReturnRedditSession()
submissions = getSubmissionsForRedditSession(redditSession)
flat_comments = getCommentsForSubmissions(submissions)
retiredHashes = retiredCommentHashes()
i = 1
stringio.write('<iframe name="invisibleiframe" style="display:none;"></iframe>\n')
stringio.write("<h3>")
stringio.write(os.getcwd())
stringio.write("<br>\n")
for submission in submissions:
stringio.write(submission.title)
stringio.write("<br>\n")
stringio.write("</h3>\n\n")
stringio.write('<form action="copydisplayduringsignuptoclipboard.html" method="post" target="invisibleiframe">')
stringio.write('<input type="submit" value="Copy display-during-signup.py stdout to clipboard">')
stringio.write('</form>')
for comment in flat_comments:
# print comment.is_root
# print comment.score
i += 1
commentHash = sha1()
commentHash.update(comment.permalink)
commentHash.update(comment.body.encode('utf-8'))
commentHash = commentHash.hexdigest()
if commentHash not in retiredHashes:
commentHashesAndComments[commentHash] = comment
authorName = str(comment.author) # can be None if author was deleted. So check for that and skip if it's None.
stringio.write("<hr>\n")
stringio.write('<font color="blue"><b>')
stringio.write(authorName) # can be None if author was deleted. So check for that and skip if it's None.
stringio.write('</b></font><br>')
if ParticipantCollection().hasParticipantNamed(authorName):
stringio.write(' <small><font color="green">(member)</font></small>')
# if ParticipantCollection().participantNamed(authorName).isStillIn:
# stringio.write(' <small><font color="green">(in)</font></small>')
# else:
# stringio.write(' <small><font color="red">(out)</font></small>')
else:
stringio.write(' <small><font color="red">(not a member)</font></small>')
stringio.write('<form action="takeaction.html" method="post" target="invisibleiframe">')
stringio.write('<input type="submit" name="actiontotake" value="Signup" style="color:white;background-color:green">')
# stringio.write('<input type="submit" name="actiontotake" value="Signup and checkin">')
# stringio.write('<input type="submit" name="actiontotake" value="Relapse">')
# stringio.write('<input type="submit" name="actiontotake" value="Reinstate">')
stringio.write('<input type="submit" name="actiontotake" value="Skip comment">')
stringio.write('<input type="submit" name="actiontotake" value="Skip comment and don\'t upvote">')
stringio.write('<input type="hidden" name="username" value="' + b64encode(authorName) + '">')
stringio.write('<input type="hidden" name="commenthash" value="' + commentHash + '">')
stringio.write('<input type="hidden" name="commentpermalink" value="' + comment.permalink + '">')
stringio.write('</form>')
stringio.write(bleach.clean(markdown.markdown(comment.body.encode('utf-8')), tags=['p']))
stringio.write("\n<br><br>\n\n")
stringio.write('</html>')
pageString = stringio.getvalue()
stringio.close()
return Response(pageString, mimetype='text/html')
@app.route('/takeaction.html', methods=["POST"])
def takeaction():
username = b64decode(request.form["username"])
commentHash = str(request.form["commenthash"])
# commentPermalink = request.form["commentpermalink"]
actionToTake = request.form["actiontotake"]
# print commentHashesAndComments
comment = commentHashesAndComments[commentHash]
# print "comment: " + str(comment)
if actionToTake == 'Signup':
print "signup - " + username
subprocess.call(['./signup.py', username])
comment.upvote()
retireCommentHash(commentHash)
# if actionToTake == 'Signup and checkin':
# print "signup and checkin - " + username
# subprocess.call(['./signup-and-checkin.sh', username])
# comment.upvote()
# retireCommentHash(commentHash)
# elif actionToTake == 'Relapse':
# print "relapse - " + username
# subprocess.call(['./relapse.py', username])
# comment.upvote()
# retireCommentHash(commentHash)
# elif actionToTake == 'Reinstate':
# print "reinstate - " + username
# subprocess.call(['./reinstate.py', username])
# comment.upvote()
# retireCommentHash(commentHash)
elif actionToTake == 'Skip comment':
print "Skip comment - " + username
comment.upvote()
retireCommentHash(commentHash)
elif actionToTake == "Skip comment and don't upvote":
print "Skip comment and don't upvote - " + username
retireCommentHash(commentHash)
return Response("hello", mimetype='text/html')
@app.route('/copydisplayduringsignuptoclipboard.html', methods=["POST"])
def copydisplayduringsignuptoclipboard():
print "TODO: Copy display to clipboard"
subprocess.call(['./display-during-signup.py'])
return Response("hello", mimetype='text/html')
if __name__ == '__main__':
app.run(host='127.0.0.1', port=flaskport)
| mit |
sentient-energy/emsw-bitbake-mirror | lib/bb/server/none.py | 3 | 6071 | #
# BitBake 'dummy' Passthrough Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements a passthrough server for BitBake.
Use register_idle_function() to add a function which the server
calls from within idle_commands when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""
import time
import bb
import signal
DEBUG = False
import inspect, select
class BitBakeServerCommands():
def __init__(self, server):
self.server = server
def runCommand(self, command):
"""
Run a cooker command on the server
"""
#print "Running Command %s" % command
return self.cooker.command.runCommand(command)
def terminateServer(self):
"""
Trigger the server to quit
"""
self.server.server_exit()
#print "Server (cooker) exitting"
return
def ping(self):
"""
Dummy method which can be used to check the server is still alive
"""
return True
eventQueue = []
class BBUIEventQueue:
class event:
def __init__(self, parent):
self.parent = parent
@staticmethod
def send(event):
bb.server.none.eventQueue.append(event)
@staticmethod
def quit():
return
def __init__(self, BBServer):
self.eventQueue = bb.server.none.eventQueue
self.BBServer = BBServer
self.EventHandle = bb.event.register_UIHhandler(self)
def __popEvent(self):
if len(self.eventQueue) == 0:
return None
return self.eventQueue.pop(0)
def getEvent(self):
if len(self.eventQueue) == 0:
self.BBServer.idle_commands(0)
return self.__popEvent()
def waitEvent(self, delay):
event = self.__popEvent()
if event:
return event
self.BBServer.idle_commands(delay)
return self.__popEvent()
def queue_event(self, event):
self.eventQueue.append(event)
def system_quit( self ):
bb.event.unregister_UIHhandler(self.EventHandle)
# Dummy signal handler to ensure we break out of sleep upon SIGCHLD
def chldhandler(signum, stackframe):
pass
class BitBakeNoneServer():
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self):
self._idlefuns = {}
self.commands = BitBakeServerCommands(self)
def addcooker(self, cooker):
self.cooker = cooker
self.commands.cooker = cooker
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefuns[function] = data
def idle_commands(self, delay):
#print "Idle queue length %s" % len(self._idlefuns)
#print "Idle timeout, running idle functions"
#if len(self._idlefuns) == 0:
nextsleep = delay
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
#print "Idle function returned %s" % (retval)
if retval is False:
del self._idlefuns[function]
elif retval is True:
nextsleep = None
elif nextsleep is None:
continue
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except:
import traceback
traceback.print_exc()
self.commands.runCommand(["stateShutdown"])
pass
if nextsleep is not None:
#print "Sleeping for %s (%s)" % (nextsleep, delay)
signal.signal(signal.SIGCHLD, chldhandler)
time.sleep(nextsleep)
signal.signal(signal.SIGCHLD, signal.SIG_DFL)
def server_exit(self):
# Tell idle functions we're exiting
for function, data in self._idlefuns.items():
try:
retval = function(self, data, True)
except:
pass
class BitBakeServerConnection():
def __init__(self, server):
self.server = server.server
self.connection = self.server.commands
self.events = bb.server.none.BBUIEventQueue(self.server)
for event in bb.event.ui_queue:
self.events.queue_event(event)
def terminate(self):
try:
self.events.system_quit()
except:
pass
try:
self.connection.terminateServer()
except:
pass
class BitBakeServer(object):
def initServer(self):
self.server = BitBakeNoneServer()
def addcooker(self, cooker):
self.cooker = cooker
self.server.addcooker(cooker)
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
return
def detach(self, cooker_logfile):
self.logfile = cooker_logfile
def establishConnection(self):
self.connection = BitBakeServerConnection(self)
return self.connection
def launchUI(self, uifunc, *args):
return bb.cooker.server_main(self.cooker, uifunc, *args)
| gpl-2.0 |
zhibolau/webApp | www/models.py | 1 | 1589 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'Zhibo Liu'
import time,uuid
from transwarp.db import next_id # a direct "from ... import" errors out unless an __init__.py file is created in that directory!
from transwarp.orm import Model, StringField, BooleanField, FloatField, TextField
class User(Model):
__table__ = 'users'
id = StringField(primary_key=True, default=next_id, ddl='varchar(50)')
email = StringField(updatable=False, ddl='varchar(50)')
password = StringField(ddl='varchar(50)')
admin = BooleanField()
name = StringField(ddl='varchar(50)')
image = StringField(ddl='varchar(500)')
created_at = FloatField(updatable=False, default=time.time)
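# Hedged usage sketch (assumes the transwarp.orm Model API; the values are made up):
# u = User(email='test@example.com', password='secret', name='Test', image='about:blank')
# u.insert() # persist through the ORM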
class Blog(Model):
__table__ = 'blogs'
id = StringField(primary_key=True, default=next_id, ddl='varchar(50)')
user_id = StringField(updatable=False, ddl='varchar(50)')
user_name = StringField(ddl='varchar(50)')
user_image = StringField(ddl='varchar(500)')
name = StringField(ddl='varchar(50)')
summary = StringField(ddl='varchar(200)')
content = TextField()
created_at = FloatField(updatable=False, default=time.time)
class Comment(Model):
__table__ = 'comments'
id = StringField(primary_key=True, default=next_id, ddl='varchar(50)')
blog_id = StringField(updatable=False, ddl='varchar(50)')
user_id = StringField(updatable=False, ddl='varchar(50)')
user_name = StringField(ddl='varchar(50)')
user_image = StringField(ddl='varchar(500)')
content = TextField()
created_at = FloatField(updatable=False, default=time.time)
| gpl-2.0 |
splav/servo | tests/wpt/web-platform-tests/tools/third_party/html5lib/html5lib/html5parser.py | 45 | 118951 | from __future__ import absolute_import, division, unicode_literals
from six import with_metaclass, viewkeys
import types
from collections import OrderedDict
from . import _inputstream
from . import _tokenizer
from . import treebuilders
from .treebuilders.base import Marker
from . import _utils
from .constants import (
spaceCharacters, asciiUpper2Lower,
specialElements, headingElements, cdataElements, rcdataElements,
tokenTypes, tagTokenTypes,
namespaces,
htmlIntegrationPointElements, mathmlTextIntegrationPointElements,
adjustForeignAttributes as adjustForeignAttributesMap,
adjustMathMLAttributes, adjustSVGAttributes,
E,
_ReparseException
)
def parse(doc, treebuilder="etree", namespaceHTMLElements=True, **kwargs):
"""Parse an HTML document as a string or file-like object into a tree
:arg doc: the document to parse as a string or file-like object
:arg treebuilder: the treebuilder to use when parsing
:arg namespaceHTMLElements: whether or not to namespace HTML elements
:returns: parsed tree
Example:
>>> from html5lib.html5parser import parse
>>> parse('<html><body><p>This is a doc</p></body></html>')
<Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>
"""
tb = treebuilders.getTreeBuilder(treebuilder)
p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements)
return p.parse(doc, **kwargs)
def parseFragment(doc, container="div", treebuilder="etree", namespaceHTMLElements=True, **kwargs):
"""Parse an HTML fragment as a string or file-like object into a tree
:arg doc: the fragment to parse as a string or file-like object
:arg container: the container context to parse the fragment in
:arg treebuilder: the treebuilder to use when parsing
:arg namespaceHTMLElements: whether or not to namespace HTML elements
:returns: parsed tree
Example:
>>> from html5lib.html5parser import parseFragment
>>> parseFragment('<b>this is a fragment</b>')
<Element u'DOCUMENT_FRAGMENT' at 0x7feac484b090>
"""
tb = treebuilders.getTreeBuilder(treebuilder)
p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements)
return p.parseFragment(doc, container=container, **kwargs)
def method_decorator_metaclass(function):
class Decorated(type):
def __new__(meta, classname, bases, classDict):
for attributeName, attribute in classDict.items():
if isinstance(attribute, types.FunctionType):
attribute = function(attribute)
classDict[attributeName] = attribute
return type.__new__(meta, classname, bases, classDict)
return Decorated
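# Illustrative sketch of how this metaclass is consumed below (the `trace`
# decorator name is hypothetical): every plain function attribute is wrapped
# at class-creation time.
# class Traced(with_metaclass(method_decorator_metaclass(trace))):
#     def work(self): # wrapped by trace()
#         pass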
class HTMLParser(object):
"""HTML parser
Generates a tree structure from a stream of (possibly malformed) HTML.
"""
def __init__(self, tree=None, strict=False, namespaceHTMLElements=True, debug=False):
"""
:arg tree: a treebuilder class controlling the type of tree that will be
returned. Built in treebuilders can be accessed through
html5lib.treebuilders.getTreeBuilder(treeType)
:arg strict: raise an exception when a parse error is encountered
:arg namespaceHTMLElements: whether or not to namespace HTML elements
:arg debug: whether or not to enable debug mode which logs things
Example:
>>> from html5lib.html5parser import HTMLParser
>>> parser = HTMLParser() # generates parser with etree builder
>>> parser = HTMLParser('lxml', strict=True) # generates parser with lxml builder which is strict
"""
# Raise an exception on the first error encountered
self.strict = strict
if tree is None:
tree = treebuilders.getTreeBuilder("etree")
self.tree = tree(namespaceHTMLElements)
self.errors = []
self.phases = dict([(name, cls(self, self.tree)) for name, cls in
getPhases(debug).items()])
def _parse(self, stream, innerHTML=False, container="div", scripting=False, **kwargs):
self.innerHTMLMode = innerHTML
self.container = container
self.scripting = scripting
self.tokenizer = _tokenizer.HTMLTokenizer(stream, parser=self, **kwargs)
self.reset()
try:
self.mainLoop()
except _ReparseException:
self.reset()
self.mainLoop()
def reset(self):
self.tree.reset()
self.firstStartTag = False
self.errors = []
self.log = [] # only used with debug mode
# "quirks" / "limited quirks" / "no quirks"
self.compatMode = "no quirks"
if self.innerHTMLMode:
self.innerHTML = self.container.lower()
if self.innerHTML in cdataElements:
self.tokenizer.state = self.tokenizer.rcdataState
elif self.innerHTML in rcdataElements:
self.tokenizer.state = self.tokenizer.rawtextState
elif self.innerHTML == 'plaintext':
self.tokenizer.state = self.tokenizer.plaintextState
else:
# state already is data state
# self.tokenizer.state = self.tokenizer.dataState
pass
self.phase = self.phases["beforeHtml"]
self.phase.insertHtmlElement()
self.resetInsertionMode()
else:
self.innerHTML = False # pylint:disable=redefined-variable-type
self.phase = self.phases["initial"]
self.lastPhase = None
self.beforeRCDataPhase = None
self.framesetOK = True
@property
def documentEncoding(self):
"""Name of the character encoding that was used to decode the input stream, or
:obj:`None` if that is not determined yet
"""
if not hasattr(self, 'tokenizer'):
return None
return self.tokenizer.stream.charEncoding[0].name
def isHTMLIntegrationPoint(self, element):
if (element.name == "annotation-xml" and
element.namespace == namespaces["mathml"]):
return ("encoding" in element.attributes and
element.attributes["encoding"].translate(
asciiUpper2Lower) in
("text/html", "application/xhtml+xml"))
else:
return (element.namespace, element.name) in htmlIntegrationPointElements
def isMathMLTextIntegrationPoint(self, element):
return (element.namespace, element.name) in mathmlTextIntegrationPointElements
def mainLoop(self):
CharactersToken = tokenTypes["Characters"]
SpaceCharactersToken = tokenTypes["SpaceCharacters"]
StartTagToken = tokenTypes["StartTag"]
EndTagToken = tokenTypes["EndTag"]
CommentToken = tokenTypes["Comment"]
DoctypeToken = tokenTypes["Doctype"]
ParseErrorToken = tokenTypes["ParseError"]
for token in self.normalizedTokens():
prev_token = None
new_token = token
while new_token is not None:
prev_token = new_token
currentNode = self.tree.openElements[-1] if self.tree.openElements else None
currentNodeNamespace = currentNode.namespace if currentNode else None
currentNodeName = currentNode.name if currentNode else None
type = new_token["type"]
if type == ParseErrorToken:
self.parseError(new_token["data"], new_token.get("datavars", {}))
new_token = None
else:
if (len(self.tree.openElements) == 0 or
currentNodeNamespace == self.tree.defaultNamespace or
(self.isMathMLTextIntegrationPoint(currentNode) and
((type == StartTagToken and
token["name"] not in frozenset(["mglyph", "malignmark"])) or
type in (CharactersToken, SpaceCharactersToken))) or
(currentNodeNamespace == namespaces["mathml"] and
currentNodeName == "annotation-xml" and
type == StartTagToken and
token["name"] == "svg") or
(self.isHTMLIntegrationPoint(currentNode) and
type in (StartTagToken, CharactersToken, SpaceCharactersToken))):
phase = self.phase
else:
phase = self.phases["inForeignContent"]
if type == CharactersToken:
new_token = phase.processCharacters(new_token)
elif type == SpaceCharactersToken:
new_token = phase.processSpaceCharacters(new_token)
elif type == StartTagToken:
new_token = phase.processStartTag(new_token)
elif type == EndTagToken:
new_token = phase.processEndTag(new_token)
elif type == CommentToken:
new_token = phase.processComment(new_token)
elif type == DoctypeToken:
new_token = phase.processDoctype(new_token)
if (type == StartTagToken and prev_token["selfClosing"] and
not prev_token["selfClosingAcknowledged"]):
self.parseError("non-void-element-with-trailing-solidus",
{"name": prev_token["name"]})
# When the loop finishes it's EOF
reprocess = True
phases = []
while reprocess:
phases.append(self.phase)
reprocess = self.phase.processEOF()
if reprocess:
assert self.phase not in phases
def normalizedTokens(self):
for token in self.tokenizer:
yield self.normalizeToken(token)
def parse(self, stream, *args, **kwargs):
"""Parse a HTML document into a well-formed tree
:arg stream: a file-like object or string containing the HTML to be parsed
The optional encoding parameter must be a string that indicates
the encoding. If specified, that encoding will be used,
regardless of any BOM or later declaration (such as in a meta
element).
:arg scripting: treat noscript elements as if JavaScript was turned on
:returns: parsed tree
Example:
>>> from html5lib.html5parser import HTMLParser
>>> parser = HTMLParser()
>>> parser.parse('<html><body><p>This is a doc</p></body></html>')
<Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>
"""
self._parse(stream, False, None, *args, **kwargs)
return self.tree.getDocument()
def parseFragment(self, stream, *args, **kwargs):
"""Parse a HTML fragment into a well-formed tree fragment
:arg container: name of the element we're setting the innerHTML
property on; if set to None, defaults to 'div'
:arg stream: a file-like object or string containing the HTML to be parsed
The optional encoding parameter must be a string that indicates
the encoding. If specified, that encoding will be used,
regardless of any BOM or later declaration (such as in a meta
element)
:arg scripting: treat noscript elements as if JavaScript was turned on
:returns: parsed tree
Example:
>>> from html5lib.html5libparser import HTMLParser
>>> parser = HTMLParser()
>>> parser.parseFragment('<b>this is a fragment</b>')
<Element u'DOCUMENT_FRAGMENT' at 0x7feac484b090>
"""
self._parse(stream, True, *args, **kwargs)
return self.tree.getFragment()
def parseError(self, errorcode="XXX-undefined-error", datavars=None):
# XXX The idea is to make errorcode mandatory.
if datavars is None:
datavars = {}
self.errors.append((self.tokenizer.stream.position(), errorcode, datavars))
if self.strict:
raise ParseError(E[errorcode] % datavars)
def normalizeToken(self, token):
# HTML5 specific normalizations to the token stream
if token["type"] == tokenTypes["StartTag"]:
raw = token["data"]
token["data"] = OrderedDict(raw)
if len(raw) > len(token["data"]):
# we had some duplicated attribute, fix so first wins
token["data"].update(raw[::-1])
return token
def adjustMathMLAttributes(self, token):
adjust_attributes(token, adjustMathMLAttributes)
def adjustSVGAttributes(self, token):
adjust_attributes(token, adjustSVGAttributes)
def adjustForeignAttributes(self, token):
adjust_attributes(token, adjustForeignAttributesMap)
def reparseTokenNormal(self, token):
# pylint:disable=unused-argument
self.parser.phase()
def resetInsertionMode(self):
# The name of this method is mostly historical. (It's also used in the
# specification.)
last = False
newModes = {
"select": "inSelect",
"td": "inCell",
"th": "inCell",
"tr": "inRow",
"tbody": "inTableBody",
"thead": "inTableBody",
"tfoot": "inTableBody",
"caption": "inCaption",
"colgroup": "inColumnGroup",
"table": "inTable",
"head": "inBody",
"body": "inBody",
"frameset": "inFrameset",
"html": "beforeHead"
}
for node in self.tree.openElements[::-1]:
nodeName = node.name
new_phase = None
if node == self.tree.openElements[0]:
assert self.innerHTML
last = True
nodeName = self.innerHTML
# Check for conditions that should only happen in the innerHTML
# case
if nodeName in ("select", "colgroup", "head", "html"):
assert self.innerHTML
if not last and node.namespace != self.tree.defaultNamespace:
continue
if nodeName in newModes:
new_phase = self.phases[newModes[nodeName]]
break
elif last:
new_phase = self.phases["inBody"]
break
self.phase = new_phase
def parseRCDataRawtext(self, token, contentType):
# Generic RCDATA/RAWTEXT Parsing algorithm
assert contentType in ("RAWTEXT", "RCDATA")
self.tree.insertElement(token)
if contentType == "RAWTEXT":
self.tokenizer.state = self.tokenizer.rawtextState
else:
self.tokenizer.state = self.tokenizer.rcdataState
self.originalPhase = self.phase
self.phase = self.phases["text"]
@_utils.memoize
def getPhases(debug):
def log(function):
"""Logger that records which phase processes each token"""
type_names = dict((value, key) for key, value in
tokenTypes.items())
def wrapped(self, *args, **kwargs):
if function.__name__.startswith("process") and len(args) > 0:
token = args[0]
try:
info = {"type": type_names[token['type']]}
except:
raise
if token['type'] in tagTokenTypes:
info["name"] = token['name']
self.parser.log.append((self.parser.tokenizer.state.__name__,
self.parser.phase.__class__.__name__,
self.__class__.__name__,
function.__name__,
info))
return function(self, *args, **kwargs)
else:
return function(self, *args, **kwargs)
return wrapped
def getMetaclass(use_metaclass, metaclass_func):
if use_metaclass:
return method_decorator_metaclass(metaclass_func)
else:
return type
# pylint:disable=unused-argument
class Phase(with_metaclass(getMetaclass(debug, log))):
"""Base class for helper object that implements each phase of processing
"""
def __init__(self, parser, tree):
self.parser = parser
self.tree = tree
def processEOF(self):
raise NotImplementedError
def processComment(self, token):
# For most phases the following is correct. Where it's not it will be
# overridden.
self.tree.insertComment(token, self.tree.openElements[-1])
def processDoctype(self, token):
self.parser.parseError("unexpected-doctype")
def processCharacters(self, token):
self.tree.insertText(token["data"])
def processSpaceCharacters(self, token):
self.tree.insertText(token["data"])
def processStartTag(self, token):
return self.startTagHandler[token["name"]](token)
def startTagHtml(self, token):
if not self.parser.firstStartTag and token["name"] == "html":
self.parser.parseError("non-html-root")
# XXX Need a check here to see if the first start tag token emitted is
# this token... If it's not, invoke self.parser.parseError().
for attr, value in token["data"].items():
if attr not in self.tree.openElements[0].attributes:
self.tree.openElements[0].attributes[attr] = value
self.parser.firstStartTag = False
def processEndTag(self, token):
return self.endTagHandler[token["name"]](token)
class InitialPhase(Phase):
def processSpaceCharacters(self, token):
pass
def processComment(self, token):
self.tree.insertComment(token, self.tree.document)
def processDoctype(self, token):
name = token["name"]
publicId = token["publicId"]
systemId = token["systemId"]
correct = token["correct"]
if (name != "html" or publicId is not None or
systemId is not None and systemId != "about:legacy-compat"):
self.parser.parseError("unknown-doctype")
if publicId is None:
publicId = ""
self.tree.insertDoctype(token)
if publicId != "":
publicId = publicId.translate(asciiUpper2Lower)
if (not correct or token["name"] != "html" or
publicId.startswith(
("+//silmaril//dtd html pro v0r11 19970101//",
"-//advasoft ltd//dtd html 3.0 aswedit + extensions//",
"-//as//dtd html 3.0 aswedit + extensions//",
"-//ietf//dtd html 2.0 level 1//",
"-//ietf//dtd html 2.0 level 2//",
"-//ietf//dtd html 2.0 strict level 1//",
"-//ietf//dtd html 2.0 strict level 2//",
"-//ietf//dtd html 2.0 strict//",
"-//ietf//dtd html 2.0//",
"-//ietf//dtd html 2.1e//",
"-//ietf//dtd html 3.0//",
"-//ietf//dtd html 3.2 final//",
"-//ietf//dtd html 3.2//",
"-//ietf//dtd html 3//",
"-//ietf//dtd html level 0//",
"-//ietf//dtd html level 1//",
"-//ietf//dtd html level 2//",
"-//ietf//dtd html level 3//",
"-//ietf//dtd html strict level 0//",
"-//ietf//dtd html strict level 1//",
"-//ietf//dtd html strict level 2//",
"-//ietf//dtd html strict level 3//",
"-//ietf//dtd html strict//",
"-//ietf//dtd html//",
"-//metrius//dtd metrius presentational//",
"-//microsoft//dtd internet explorer 2.0 html strict//",
"-//microsoft//dtd internet explorer 2.0 html//",
"-//microsoft//dtd internet explorer 2.0 tables//",
"-//microsoft//dtd internet explorer 3.0 html strict//",
"-//microsoft//dtd internet explorer 3.0 html//",
"-//microsoft//dtd internet explorer 3.0 tables//",
"-//netscape comm. corp.//dtd html//",
"-//netscape comm. corp.//dtd strict html//",
"-//o'reilly and associates//dtd html 2.0//",
"-//o'reilly and associates//dtd html extended 1.0//",
"-//o'reilly and associates//dtd html extended relaxed 1.0//",
"-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//",
"-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//",
"-//spyglass//dtd html 2.0 extended//",
"-//sq//dtd html 2.0 hotmetal + extensions//",
"-//sun microsystems corp.//dtd hotjava html//",
"-//sun microsystems corp.//dtd hotjava strict html//",
"-//w3c//dtd html 3 1995-03-24//",
"-//w3c//dtd html 3.2 draft//",
"-//w3c//dtd html 3.2 final//",
"-//w3c//dtd html 3.2//",
"-//w3c//dtd html 3.2s draft//",
"-//w3c//dtd html 4.0 frameset//",
"-//w3c//dtd html 4.0 transitional//",
"-//w3c//dtd html experimental 19960712//",
"-//w3c//dtd html experimental 970421//",
"-//w3c//dtd w3 html//",
"-//w3o//dtd w3 html 3.0//",
"-//webtechs//dtd mozilla html 2.0//",
"-//webtechs//dtd mozilla html//")) or
publicId in ("-//w3o//dtd w3 html strict 3.0//en//",
"-/w3c/dtd html 4.0 transitional/en",
"html") or
publicId.startswith(
("-//w3c//dtd html 4.01 frameset//",
"-//w3c//dtd html 4.01 transitional//")) and
systemId is None or
systemId and systemId.lower() == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd"):
self.parser.compatMode = "quirks"
elif (publicId.startswith(
("-//w3c//dtd xhtml 1.0 frameset//",
"-//w3c//dtd xhtml 1.0 transitional//")) or
publicId.startswith(
("-//w3c//dtd html 4.01 frameset//",
"-//w3c//dtd html 4.01 transitional//")) and
systemId is not None):
self.parser.compatMode = "limited quirks"
self.parser.phase = self.parser.phases["beforeHtml"]
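# For example, <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
# with no system identifier selects quirks mode, while the same public
# identifier with a system identifier present selects limited-quirks mode;
# a plain <!DOCTYPE html> leaves the parser in no-quirks mode.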
def anythingElse(self):
self.parser.compatMode = "quirks"
self.parser.phase = self.parser.phases["beforeHtml"]
def processCharacters(self, token):
self.parser.parseError("expected-doctype-but-got-chars")
self.anythingElse()
return token
def processStartTag(self, token):
self.parser.parseError("expected-doctype-but-got-start-tag",
{"name": token["name"]})
self.anythingElse()
return token
def processEndTag(self, token):
self.parser.parseError("expected-doctype-but-got-end-tag",
{"name": token["name"]})
self.anythingElse()
return token
def processEOF(self):
self.parser.parseError("expected-doctype-but-got-eof")
self.anythingElse()
return True
class BeforeHtmlPhase(Phase):
# helper methods
def insertHtmlElement(self):
self.tree.insertRoot(impliedTagToken("html", "StartTag"))
self.parser.phase = self.parser.phases["beforeHead"]
# other
def processEOF(self):
self.insertHtmlElement()
return True
def processComment(self, token):
self.tree.insertComment(token, self.tree.document)
def processSpaceCharacters(self, token):
pass
def processCharacters(self, token):
self.insertHtmlElement()
return token
def processStartTag(self, token):
if token["name"] == "html":
self.parser.firstStartTag = True
self.insertHtmlElement()
return token
def processEndTag(self, token):
if token["name"] not in ("head", "body", "html", "br"):
self.parser.parseError("unexpected-end-tag-before-html",
{"name": token["name"]})
else:
self.insertHtmlElement()
return token
class BeforeHeadPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("head", self.startTagHead)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
(("head", "body", "html", "br"), self.endTagImplyHead)
])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
self.startTagHead(impliedTagToken("head", "StartTag"))
return True
def processSpaceCharacters(self, token):
pass
def processCharacters(self, token):
self.startTagHead(impliedTagToken("head", "StartTag"))
return token
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagHead(self, token):
self.tree.insertElement(token)
self.tree.headPointer = self.tree.openElements[-1]
self.parser.phase = self.parser.phases["inHead"]
def startTagOther(self, token):
self.startTagHead(impliedTagToken("head", "StartTag"))
return token
def endTagImplyHead(self, token):
self.startTagHead(impliedTagToken("head", "StartTag"))
return token
def endTagOther(self, token):
self.parser.parseError("end-tag-after-implied-root",
{"name": token["name"]})
class InHeadPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("title", self.startTagTitle),
(("noframes", "style"), self.startTagNoFramesStyle),
("noscript", self.startTagNoscript),
("script", self.startTagScript),
(("base", "basefont", "bgsound", "command", "link"),
self.startTagBaseLinkCommand),
("meta", self.startTagMeta),
("head", self.startTagHead)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("head", self.endTagHead),
(("br", "html", "body"), self.endTagHtmlBodyBr)
])
self.endTagHandler.default = self.endTagOther
# the real thing
def processEOF(self):
self.anythingElse()
return True
def processCharacters(self, token):
self.anythingElse()
return token
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagHead(self, token):
self.parser.parseError("two-heads-are-not-better-than-one")
def startTagBaseLinkCommand(self, token):
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def startTagMeta(self, token):
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
attributes = token["data"]
if self.parser.tokenizer.stream.charEncoding[1] == "tentative":
if "charset" in attributes:
self.parser.tokenizer.stream.changeEncoding(attributes["charset"])
elif ("content" in attributes and
"http-equiv" in attributes and
attributes["http-equiv"].lower() == "content-type"):
# Encoding it as UTF-8 here is a hack: really we should pass
# the abstract Unicode string and run the ContentAttrParser on
# that, but UTF-8 lets every character be encoded and, being an
# ASCII superset, still works with the byte-oriented parser.
data = _inputstream.EncodingBytes(attributes["content"].encode("utf-8"))
parser = _inputstream.ContentAttrParser(data)
codec = parser.parse()
self.parser.tokenizer.stream.changeEncoding(codec)
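# Two ways a <meta> can change the sniffed encoding while it is still
# "tentative": <meta charset="utf-8"> passes the charset straight to
# changeEncoding, while <meta http-equiv="Content-Type"
# content="text/html; charset=utf-8"> has its content attribute parsed
# by ContentAttrParser first to extract the charset value.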
def startTagTitle(self, token):
self.parser.parseRCDataRawtext(token, "RCDATA")
def startTagNoFramesStyle(self, token):
# Need to decide whether to implement the scripting-disabled case
self.parser.parseRCDataRawtext(token, "RAWTEXT")
def startTagNoscript(self, token):
if self.parser.scripting:
self.parser.parseRCDataRawtext(token, "RAWTEXT")
else:
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inHeadNoscript"]
def startTagScript(self, token):
self.tree.insertElement(token)
self.parser.tokenizer.state = self.parser.tokenizer.scriptDataState
self.parser.originalPhase = self.parser.phase
self.parser.phase = self.parser.phases["text"]
def startTagOther(self, token):
self.anythingElse()
return token
def endTagHead(self, token):
node = self.parser.tree.openElements.pop()
assert node.name == "head", "Expected head got %s" % node.name
self.parser.phase = self.parser.phases["afterHead"]
def endTagHtmlBodyBr(self, token):
self.anythingElse()
return token
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def anythingElse(self):
self.endTagHead(impliedTagToken("head"))
class InHeadNoscriptPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
(("basefont", "bgsound", "link", "meta", "noframes", "style"), self.startTagBaseLinkCommand),
(("head", "noscript"), self.startTagHeadNoscript),
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("noscript", self.endTagNoscript),
("br", self.endTagBr),
])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
self.parser.parseError("eof-in-head-noscript")
self.anythingElse()
return True
def processComment(self, token):
return self.parser.phases["inHead"].processComment(token)
def processCharacters(self, token):
self.parser.parseError("char-in-head-noscript")
self.anythingElse()
return token
def processSpaceCharacters(self, token):
return self.parser.phases["inHead"].processSpaceCharacters(token)
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagBaseLinkCommand(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagHeadNoscript(self, token):
self.parser.parseError("unexpected-start-tag", {"name": token["name"]})
def startTagOther(self, token):
self.parser.parseError("unexpected-inhead-noscript-tag", {"name": token["name"]})
self.anythingElse()
return token
def endTagNoscript(self, token):
node = self.parser.tree.openElements.pop()
assert node.name == "noscript", "Expected noscript got %s" % node.name
self.parser.phase = self.parser.phases["inHead"]
def endTagBr(self, token):
self.parser.parseError("unexpected-inhead-noscript-tag", {"name": token["name"]})
self.anythingElse()
return token
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def anythingElse(self):
# Caller must raise parse error first!
self.endTagNoscript(impliedTagToken("noscript"))
class AfterHeadPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("body", self.startTagBody),
("frameset", self.startTagFrameset),
(("base", "basefont", "bgsound", "link", "meta", "noframes", "script",
"style", "title"),
self.startTagFromHead),
("head", self.startTagHead)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([(("body", "html", "br"),
self.endTagHtmlBodyBr)])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
self.anythingElse()
return True
def processCharacters(self, token):
self.anythingElse()
return token
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagBody(self, token):
self.parser.framesetOK = False
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inBody"]
def startTagFrameset(self, token):
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inFrameset"]
def startTagFromHead(self, token):
self.parser.parseError("unexpected-start-tag-out-of-my-head",
{"name": token["name"]})
self.tree.openElements.append(self.tree.headPointer)
self.parser.phases["inHead"].processStartTag(token)
for node in self.tree.openElements[::-1]:
if node.name == "head":
self.tree.openElements.remove(node)
break
def startTagHead(self, token):
self.parser.parseError("unexpected-start-tag", {"name": token["name"]})
def startTagOther(self, token):
self.anythingElse()
return token
def endTagHtmlBodyBr(self, token):
self.anythingElse()
return token
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def anythingElse(self):
self.tree.insertElement(impliedTagToken("body", "StartTag"))
self.parser.phase = self.parser.phases["inBody"]
self.parser.framesetOK = True
class InBodyPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#parsing-main-inbody
# the really-really-really-very crazy mode
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
# Set this to the default handler
self.processSpaceCharacters = self.processSpaceCharactersNonPre
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
(("base", "basefont", "bgsound", "command", "link", "meta",
"script", "style", "title"),
self.startTagProcessInHead),
("body", self.startTagBody),
("frameset", self.startTagFrameset),
(("address", "article", "aside", "blockquote", "center", "details",
"dir", "div", "dl", "fieldset", "figcaption", "figure",
"footer", "header", "hgroup", "main", "menu", "nav", "ol", "p",
"section", "summary", "ul"),
self.startTagCloseP),
(headingElements, self.startTagHeading),
(("pre", "listing"), self.startTagPreListing),
("form", self.startTagForm),
(("li", "dd", "dt"), self.startTagListItem),
("plaintext", self.startTagPlaintext),
("a", self.startTagA),
(("b", "big", "code", "em", "font", "i", "s", "small", "strike",
"strong", "tt", "u"), self.startTagFormatting),
("nobr", self.startTagNobr),
("button", self.startTagButton),
(("applet", "marquee", "object"), self.startTagAppletMarqueeObject),
("xmp", self.startTagXmp),
("table", self.startTagTable),
(("area", "br", "embed", "img", "keygen", "wbr"),
self.startTagVoidFormatting),
(("param", "source", "track"), self.startTagParamSource),
("input", self.startTagInput),
("hr", self.startTagHr),
("image", self.startTagImage),
("isindex", self.startTagIsIndex),
("textarea", self.startTagTextarea),
("iframe", self.startTagIFrame),
("noscript", self.startTagNoscript),
(("noembed", "noframes"), self.startTagRawtext),
("select", self.startTagSelect),
(("rp", "rt"), self.startTagRpRt),
(("option", "optgroup"), self.startTagOpt),
(("math"), self.startTagMath),
(("svg"), self.startTagSvg),
(("caption", "col", "colgroup", "frame", "head",
"tbody", "td", "tfoot", "th", "thead",
"tr"), self.startTagMisplaced)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("body", self.endTagBody),
("html", self.endTagHtml),
(("address", "article", "aside", "blockquote", "button", "center",
"details", "dialog", "dir", "div", "dl", "fieldset", "figcaption", "figure",
"footer", "header", "hgroup", "listing", "main", "menu", "nav", "ol", "pre",
"section", "summary", "ul"), self.endTagBlock),
("form", self.endTagForm),
("p", self.endTagP),
(("dd", "dt", "li"), self.endTagListItem),
(headingElements, self.endTagHeading),
(("a", "b", "big", "code", "em", "font", "i", "nobr", "s", "small",
"strike", "strong", "tt", "u"), self.endTagFormatting),
(("applet", "marquee", "object"), self.endTagAppletMarqueeObject),
("br", self.endTagBr),
])
self.endTagHandler.default = self.endTagOther
def isMatchingFormattingElement(self, node1, node2):
return (node1.name == node2.name and
node1.namespace == node2.namespace and
node1.attributes == node2.attributes)
# helper
def addFormattingElement(self, token):
self.tree.insertElement(token)
element = self.tree.openElements[-1]
matchingElements = []
for node in self.tree.activeFormattingElements[::-1]:
if node is Marker:
break
elif self.isMatchingFormattingElement(node, element):
matchingElements.append(node)
assert len(matchingElements) <= 3
if len(matchingElements) == 3:
self.tree.activeFormattingElements.remove(matchingElements[-1])
self.tree.activeFormattingElements.append(element)
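# This implements the spec's "Noah's Ark clause": at most three
# formatting elements with the same name, namespace and attributes may
# sit between the last marker and the end of the list. E.g. after
# parsing <b><b><b><b>, the earliest of the four <b> entries has been
# dropped from activeFormattingElements.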
# the real deal
def processEOF(self):
allowed_elements = frozenset(("dd", "dt", "li", "p", "tbody", "td",
"tfoot", "th", "thead", "tr", "body",
"html"))
for node in self.tree.openElements[::-1]:
if node.name not in allowed_elements:
self.parser.parseError("expected-closing-tag-but-got-eof")
break
# Stop parsing
def processSpaceCharactersDropNewline(self, token):
# Sometimes (start of <pre>, <listing>, and <textarea> blocks) we
# want to drop leading newlines
data = token["data"]
self.processSpaceCharacters = self.processSpaceCharactersNonPre
if (data.startswith("\n") and
self.tree.openElements[-1].name in ("pre", "listing", "textarea") and
not self.tree.openElements[-1].hasContent()):
data = data[1:]
if data:
self.tree.reconstructActiveFormattingElements()
self.tree.insertText(data)
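# E.g. "<pre>\nfoo</pre>" produces a <pre> containing just "foo": the
# newline immediately after the start tag is dropped, but a second
# newline ("<pre>\n\nfoo") would be kept.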
def processCharacters(self, token):
if token["data"] == "\u0000":
# The tokenizer should always emit null on its own
return
self.tree.reconstructActiveFormattingElements()
self.tree.insertText(token["data"])
# This must be bad for performance
if (self.parser.framesetOK and
any([char not in spaceCharacters
for char in token["data"]])):
self.parser.framesetOK = False
def processSpaceCharactersNonPre(self, token):
self.tree.reconstructActiveFormattingElements()
self.tree.insertText(token["data"])
def startTagProcessInHead(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagBody(self, token):
self.parser.parseError("unexpected-start-tag", {"name": "body"})
if (len(self.tree.openElements) == 1 or
self.tree.openElements[1].name != "body"):
assert self.parser.innerHTML
else:
self.parser.framesetOK = False
for attr, value in token["data"].items():
if attr not in self.tree.openElements[1].attributes:
self.tree.openElements[1].attributes[attr] = value
def startTagFrameset(self, token):
self.parser.parseError("unexpected-start-tag", {"name": "frameset"})
if (len(self.tree.openElements) == 1 or self.tree.openElements[1].name != "body"):
assert self.parser.innerHTML
elif not self.parser.framesetOK:
pass
else:
if self.tree.openElements[1].parent:
self.tree.openElements[1].parent.removeChild(self.tree.openElements[1])
while self.tree.openElements[-1].name != "html":
self.tree.openElements.pop()
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inFrameset"]
def startTagCloseP(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.insertElement(token)
def startTagPreListing(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.insertElement(token)
self.parser.framesetOK = False
self.processSpaceCharacters = self.processSpaceCharactersDropNewline
def startTagForm(self, token):
if self.tree.formPointer:
self.parser.parseError("unexpected-start-tag", {"name": "form"})
else:
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.insertElement(token)
self.tree.formPointer = self.tree.openElements[-1]
def startTagListItem(self, token):
self.parser.framesetOK = False
stopNamesMap = {"li": ["li"],
"dt": ["dt", "dd"],
"dd": ["dt", "dd"]}
stopNames = stopNamesMap[token["name"]]
for node in reversed(self.tree.openElements):
if node.name in stopNames:
self.parser.phase.processEndTag(
impliedTagToken(node.name, "EndTag"))
break
if (node.nameTuple in specialElements and
node.name not in ("address", "div", "p")):
break
if self.tree.elementInScope("p", variant="button"):
self.parser.phase.processEndTag(
impliedTagToken("p", "EndTag"))
self.tree.insertElement(token)
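# E.g. "<ul><li>one<li>two" implies </li> before the second <li>, so
# the two list items become siblings; <dt> and <dd> close each other
# the same way. The nameTuple check stops the upward search at special
# elements other than address, div and p, so the implied close cannot
# escape containers such as <div>.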
def startTagPlaintext(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.insertElement(token)
self.parser.tokenizer.state = self.parser.tokenizer.plaintextState
def startTagHeading(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
if self.tree.openElements[-1].name in headingElements:
self.parser.parseError("unexpected-start-tag", {"name": token["name"]})
self.tree.openElements.pop()
self.tree.insertElement(token)
def startTagA(self, token):
afeAElement = self.tree.elementInActiveFormattingElements("a")
if afeAElement:
self.parser.parseError("unexpected-start-tag-implies-end-tag",
{"startName": "a", "endName": "a"})
self.endTagFormatting(impliedTagToken("a"))
if afeAElement in self.tree.openElements:
self.tree.openElements.remove(afeAElement)
if afeAElement in self.tree.activeFormattingElements:
self.tree.activeFormattingElements.remove(afeAElement)
self.tree.reconstructActiveFormattingElements()
self.addFormattingElement(token)
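# E.g. "<a href=1>x<a href=2>y": the second <a> forces an implied </a>
# (via the adoption agency) and removes the first <a> from the
# open-element and active-formatting lists before the new one is added.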
def startTagFormatting(self, token):
self.tree.reconstructActiveFormattingElements()
self.addFormattingElement(token)
def startTagNobr(self, token):
self.tree.reconstructActiveFormattingElements()
if self.tree.elementInScope("nobr"):
self.parser.parseError("unexpected-start-tag-implies-end-tag",
{"startName": "nobr", "endName": "nobr"})
self.processEndTag(impliedTagToken("nobr"))
# XXX Need tests that trigger the following
self.tree.reconstructActiveFormattingElements()
self.addFormattingElement(token)
def startTagButton(self, token):
if self.tree.elementInScope("button"):
self.parser.parseError("unexpected-start-tag-implies-end-tag",
{"startName": "button", "endName": "button"})
self.processEndTag(impliedTagToken("button"))
return token
else:
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(token)
self.parser.framesetOK = False
def startTagAppletMarqueeObject(self, token):
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(token)
self.tree.activeFormattingElements.append(Marker)
self.parser.framesetOK = False
def startTagXmp(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.reconstructActiveFormattingElements()
self.parser.framesetOK = False
self.parser.parseRCDataRawtext(token, "RAWTEXT")
def startTagTable(self, token):
if self.parser.compatMode != "quirks":
if self.tree.elementInScope("p", variant="button"):
self.processEndTag(impliedTagToken("p"))
self.tree.insertElement(token)
self.parser.framesetOK = False
self.parser.phase = self.parser.phases["inTable"]
def startTagVoidFormatting(self, token):
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
self.parser.framesetOK = False
def startTagInput(self, token):
framesetOK = self.parser.framesetOK
self.startTagVoidFormatting(token)
if ("type" in token["data"] and
token["data"]["type"].translate(asciiUpper2Lower) == "hidden"):
# input type=hidden doesn't change framesetOK
self.parser.framesetOK = framesetOK
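# The framesetOK save/restore makes a hidden input "transparent": after
# "<input type=hidden>" a following <frameset> can still replace the
# body, whereas any other <input> sets framesetOK to False.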
def startTagParamSource(self, token):
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def startTagHr(self, token):
if self.tree.elementInScope("p", variant="button"):
self.endTagP(impliedTagToken("p"))
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
self.parser.framesetOK = False
def startTagImage(self, token):
# No, really: per the spec, an <image> start tag is treated as <img>.
self.parser.parseError("unexpected-start-tag-treated-as",
{"originalName": "image", "newName": "img"})
self.processStartTag(impliedTagToken("img", "StartTag",
attributes=token["data"],
selfClosing=token["selfClosing"]))
def startTagIsIndex(self, token):
self.parser.parseError("deprecated-tag", {"name": "isindex"})
if self.tree.formPointer:
return
form_attrs = {}
if "action" in token["data"]:
form_attrs["action"] = token["data"]["action"]
self.processStartTag(impliedTagToken("form", "StartTag",
attributes=form_attrs))
self.processStartTag(impliedTagToken("hr", "StartTag"))
self.processStartTag(impliedTagToken("label", "StartTag"))
# XXX Localization ...
if "prompt" in token["data"]:
prompt = token["data"]["prompt"]
else:
prompt = "This is a searchable index. Enter search keywords: "
self.processCharacters(
{"type": tokenTypes["Characters"], "data": prompt})
attributes = token["data"].copy()
if "action" in attributes:
del attributes["action"]
if "prompt" in attributes:
del attributes["prompt"]
attributes["name"] = "isindex"
self.processStartTag(impliedTagToken("input", "StartTag",
attributes=attributes,
selfClosing=token["selfClosing"]))
self.processEndTag(impliedTagToken("label"))
self.processStartTag(impliedTagToken("hr", "StartTag"))
self.processEndTag(impliedTagToken("form"))
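# The deprecated <isindex> is rewritten into equivalent markup, e.g.
# <isindex action="/s" prompt="Find:"> becomes
# <form action="/s"><hr><label>Find:<input name=isindex></label><hr></form>.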
def startTagTextarea(self, token):
self.tree.insertElement(token)
self.parser.tokenizer.state = self.parser.tokenizer.rcdataState
self.processSpaceCharacters = self.processSpaceCharactersDropNewline
self.parser.framesetOK = False
def startTagIFrame(self, token):
self.parser.framesetOK = False
self.startTagRawtext(token)
def startTagNoscript(self, token):
if self.parser.scripting:
self.startTagRawtext(token)
else:
self.startTagOther(token)
def startTagRawtext(self, token):
"""iframe, noembed noframes, noscript(if scripting enabled)"""
self.parser.parseRCDataRawtext(token, "RAWTEXT")
def startTagOpt(self, token):
if self.tree.openElements[-1].name == "option":
self.parser.phase.processEndTag(impliedTagToken("option"))
self.tree.reconstructActiveFormattingElements()
self.parser.tree.insertElement(token)
def startTagSelect(self, token):
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(token)
self.parser.framesetOK = False
if self.parser.phase in (self.parser.phases["inTable"],
self.parser.phases["inCaption"],
self.parser.phases["inColumnGroup"],
self.parser.phases["inTableBody"],
self.parser.phases["inRow"],
self.parser.phases["inCell"]):
self.parser.phase = self.parser.phases["inSelectInTable"]
else:
self.parser.phase = self.parser.phases["inSelect"]
def startTagRpRt(self, token):
if self.tree.elementInScope("ruby"):
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1].name != "ruby":
self.parser.parseError()
self.tree.insertElement(token)
def startTagMath(self, token):
self.tree.reconstructActiveFormattingElements()
self.parser.adjustMathMLAttributes(token)
self.parser.adjustForeignAttributes(token)
token["namespace"] = namespaces["mathml"]
self.tree.insertElement(token)
# Need to get the parse error right for the case where the token
# has a namespace not equal to the xmlns attribute
if token["selfClosing"]:
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def startTagSvg(self, token):
self.tree.reconstructActiveFormattingElements()
self.parser.adjustSVGAttributes(token)
self.parser.adjustForeignAttributes(token)
token["namespace"] = namespaces["svg"]
self.tree.insertElement(token)
# Need to get the parse error right for the case where the token
# has a namespace not equal to the xmlns attribute
if token["selfClosing"]:
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def startTagMisplaced(self, token):
""" Elements that should be children of other elements that have a
different insertion mode; here they are ignored
"caption", "col", "colgroup", "frame", "frameset", "head",
"option", "optgroup", "tbody", "td", "tfoot", "th", "thead",
"tr", "noscript"
"""
self.parser.parseError("unexpected-start-tag-ignored", {"name": token["name"]})
def startTagOther(self, token):
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(token)
def endTagP(self, token):
if not self.tree.elementInScope("p", variant="button"):
self.startTagCloseP(impliedTagToken("p", "StartTag"))
self.parser.parseError("unexpected-end-tag", {"name": "p"})
self.endTagP(impliedTagToken("p", "EndTag"))
else:
self.tree.generateImpliedEndTags("p")
if self.tree.openElements[-1].name != "p":
self.parser.parseError("unexpected-end-tag", {"name": "p"})
node = self.tree.openElements.pop()
while node.name != "p":
node = self.tree.openElements.pop()
def endTagBody(self, token):
if not self.tree.elementInScope("body"):
self.parser.parseError()
return
elif self.tree.openElements[-1].name != "body":
for node in self.tree.openElements[2:]:
if node.name not in frozenset(("dd", "dt", "li", "optgroup",
"option", "p", "rp", "rt",
"tbody", "td", "tfoot",
"th", "thead", "tr", "body",
"html")):
# Not sure this is the correct name for the parse error
self.parser.parseError(
"expected-one-end-tag-but-got-another",
{"gotName": "body", "expectedName": node.name})
break
self.parser.phase = self.parser.phases["afterBody"]
def endTagHtml(self, token):
# We repeat the test for the body end tag token being ignored here
if self.tree.elementInScope("body"):
self.endTagBody(impliedTagToken("body"))
return token
def endTagBlock(self, token):
# Put us back in the right whitespace handling mode
if token["name"] == "pre":
self.processSpaceCharacters = self.processSpaceCharactersNonPre
inScope = self.tree.elementInScope(token["name"])
if inScope:
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError("end-tag-too-early", {"name": token["name"]})
if inScope:
node = self.tree.openElements.pop()
while node.name != token["name"]:
node = self.tree.openElements.pop()
def endTagForm(self, token):
node = self.tree.formPointer
self.tree.formPointer = None
if node is None or not self.tree.elementInScope(node):
self.parser.parseError("unexpected-end-tag",
{"name": "form"})
else:
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1] != node:
self.parser.parseError("end-tag-too-early-ignored",
{"name": "form"})
self.tree.openElements.remove(node)
def endTagListItem(self, token):
if token["name"] == "li":
variant = "list"
else:
variant = None
if not self.tree.elementInScope(token["name"], variant=variant):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
else:
self.tree.generateImpliedEndTags(exclude=token["name"])
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError(
"end-tag-too-early",
{"name": token["name"]})
node = self.tree.openElements.pop()
while node.name != token["name"]:
node = self.tree.openElements.pop()
def endTagHeading(self, token):
for item in headingElements:
if self.tree.elementInScope(item):
self.tree.generateImpliedEndTags()
break
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError("end-tag-too-early", {"name": token["name"]})
for item in headingElements:
if self.tree.elementInScope(item):
item = self.tree.openElements.pop()
while item.name not in headingElements:
item = self.tree.openElements.pop()
break
def endTagFormatting(self, token):
"""The much-feared adoption agency algorithm"""
# http://svn.whatwg.org/webapps/complete.html#adoptionAgency revision 7867
# XXX Better parseError messages appreciated.
# Step 1
outerLoopCounter = 0
# Step 2
while outerLoopCounter < 8:
# Step 3
outerLoopCounter += 1
# Step 4:
# Let the formatting element be the last element in
# the list of active formatting elements that:
# - is between the end of the list and the last scope
# marker in the list, if any, or the start of the list
# otherwise, and
# - has the same tag name as the token.
formattingElement = self.tree.elementInActiveFormattingElements(
token["name"])
if (not formattingElement or
(formattingElement in self.tree.openElements and
not self.tree.elementInScope(formattingElement.name))):
# If there is no such node, then abort these steps
# and instead act as described in the "any other
# end tag" entry below.
self.endTagOther(token)
return
# Otherwise, if there is such a node, but that node is
# not in the stack of open elements, then this is a
# parse error; remove the element from the list, and
# abort these steps.
elif formattingElement not in self.tree.openElements:
self.parser.parseError("adoption-agency-1.2", {"name": token["name"]})
self.tree.activeFormattingElements.remove(formattingElement)
return
# Otherwise, if there is such a node, and that node is
# also in the stack of open elements, but the element
# is not in scope, then this is a parse error; ignore
# the token, and abort these steps.
elif not self.tree.elementInScope(formattingElement.name):
self.parser.parseError("adoption-agency-4.4", {"name": token["name"]})
return
# Otherwise, there is a formatting element and that
# element is in the stack and is in scope. If the
# element is not the current node, this is a parse
# error. In any case, proceed with the algorithm as
# written in the following steps.
else:
if formattingElement != self.tree.openElements[-1]:
self.parser.parseError("adoption-agency-1.3", {"name": token["name"]})
# Step 5:
# Let the furthest block be the topmost node in the
# stack of open elements that is lower in the stack
# than the formatting element, and is an element in
# the special category. There might not be one.
afeIndex = self.tree.openElements.index(formattingElement)
furthestBlock = None
for element in self.tree.openElements[afeIndex:]:
if element.nameTuple in specialElements:
furthestBlock = element
break
# Step 6:
# If there is no furthest block, then the UA must
# first pop all the nodes from the bottom of the stack
# of open elements, from the current node up to and
# including the formatting element, then remove the
# formatting element from the list of active
# formatting elements, and finally abort these steps.
if furthestBlock is None:
element = self.tree.openElements.pop()
while element != formattingElement:
element = self.tree.openElements.pop()
self.tree.activeFormattingElements.remove(element)
return
# Step 7
commonAncestor = self.tree.openElements[afeIndex - 1]
# Step 8:
# The bookmark is supposed to help us identify where to reinsert
# nodes in step 15. We have to ensure that we reinsert nodes after
# the node before the active formatting element. Note the bookmark
# can move in step 9.7
bookmark = self.tree.activeFormattingElements.index(formattingElement)
# Step 9
lastNode = node = furthestBlock
innerLoopCounter = 0
index = self.tree.openElements.index(node)
while innerLoopCounter < 3:
innerLoopCounter += 1
# Node is element before node in open elements
index -= 1
node = self.tree.openElements[index]
if node not in self.tree.activeFormattingElements:
self.tree.openElements.remove(node)
continue
# Step 9.6
if node == formattingElement:
break
# Step 9.7
if lastNode == furthestBlock:
bookmark = self.tree.activeFormattingElements.index(node) + 1
# Step 9.8
clone = node.cloneNode()
# Replace node with clone
self.tree.activeFormattingElements[
self.tree.activeFormattingElements.index(node)] = clone
self.tree.openElements[
self.tree.openElements.index(node)] = clone
node = clone
# Step 9.9
# Remove lastNode from its parents, if any
if lastNode.parent:
lastNode.parent.removeChild(lastNode)
node.appendChild(lastNode)
# Step 9.10
lastNode = node
# Step 10
# Foster parent lastNode if commonAncestor is a
# table, tbody, tfoot, thead, or tr we need to foster
# parent the lastNode
if lastNode.parent:
lastNode.parent.removeChild(lastNode)
if commonAncestor.name in frozenset(("table", "tbody", "tfoot", "thead", "tr")):
parent, insertBefore = self.tree.getTableMisnestedNodePosition()
parent.insertBefore(lastNode, insertBefore)
else:
commonAncestor.appendChild(lastNode)
# Step 11
clone = formattingElement.cloneNode()
# Step 12
furthestBlock.reparentChildren(clone)
# Step 13
furthestBlock.appendChild(clone)
# Step 14
self.tree.activeFormattingElements.remove(formattingElement)
self.tree.activeFormattingElements.insert(bookmark, clone)
# Step 15
self.tree.openElements.remove(formattingElement)
self.tree.openElements.insert(
self.tree.openElements.index(furthestBlock) + 1, clone)
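# Classic adoption-agency example: "<b>1<p>2</b>3" is repaired to
# <b>1</b><p><b>2</b>3</p>: the <b> is cloned into the furthest block
# (the <p>) so the formatting applies on both sides of the misnesting.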
def endTagAppletMarqueeObject(self, token):
if self.tree.elementInScope(token["name"]):
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError("end-tag-too-early", {"name": token["name"]})
if self.tree.elementInScope(token["name"]):
element = self.tree.openElements.pop()
while element.name != token["name"]:
element = self.tree.openElements.pop()
self.tree.clearActiveFormattingElements()
def endTagBr(self, token):
self.parser.parseError("unexpected-end-tag-treated-as",
{"originalName": "br", "newName": "br element"})
self.tree.reconstructActiveFormattingElements()
self.tree.insertElement(impliedTagToken("br", "StartTag"))
self.tree.openElements.pop()
def endTagOther(self, token):
for node in self.tree.openElements[::-1]:
if node.name == token["name"]:
self.tree.generateImpliedEndTags(exclude=token["name"])
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
while self.tree.openElements.pop() != node:
pass
break
else:
if node.nameTuple in specialElements:
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
break
class TextPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("script", self.endTagScript)])
self.endTagHandler.default = self.endTagOther
def processCharacters(self, token):
self.tree.insertText(token["data"])
def processEOF(self):
self.parser.parseError("expected-named-closing-tag-but-got-eof",
{"name": self.tree.openElements[-1].name})
self.tree.openElements.pop()
self.parser.phase = self.parser.originalPhase
return True
def startTagOther(self, token):
assert False, "Tried to process start tag %s in RCDATA/RAWTEXT mode" % token['name']
def endTagScript(self, token):
node = self.tree.openElements.pop()
assert node.name == "script"
self.parser.phase = self.parser.originalPhase
# The rest of this method is all stuff that only happens if
# document.write works
def endTagOther(self, token):
self.tree.openElements.pop()
self.parser.phase = self.parser.originalPhase
class InTablePhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-table
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("caption", self.startTagCaption),
("colgroup", self.startTagColgroup),
("col", self.startTagCol),
(("tbody", "tfoot", "thead"), self.startTagRowGroup),
(("td", "th", "tr"), self.startTagImplyTbody),
("table", self.startTagTable),
(("style", "script"), self.startTagStyleScript),
("input", self.startTagInput),
("form", self.startTagForm)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("table", self.endTagTable),
(("body", "caption", "col", "colgroup", "html", "tbody", "td",
"tfoot", "th", "thead", "tr"), self.endTagIgnore)
])
self.endTagHandler.default = self.endTagOther
# helper methods
def clearStackToTableContext(self):
# "clear the stack back to a table context"
while self.tree.openElements[-1].name not in ("table", "html"):
# self.parser.parseError("unexpected-implied-end-tag-in-table",
# {"name": self.tree.openElements[-1].name})
self.tree.openElements.pop()
# When the current node is <html> it's an innerHTML case
# processing methods
def processEOF(self):
if self.tree.openElements[-1].name != "html":
self.parser.parseError("eof-in-table")
else:
assert self.parser.innerHTML
# Stop parsing
def processSpaceCharacters(self, token):
originalPhase = self.parser.phase
self.parser.phase = self.parser.phases["inTableText"]
self.parser.phase.originalPhase = originalPhase
self.parser.phase.processSpaceCharacters(token)
def processCharacters(self, token):
originalPhase = self.parser.phase
self.parser.phase = self.parser.phases["inTableText"]
self.parser.phase.originalPhase = originalPhase
self.parser.phase.processCharacters(token)
def insertText(self, token):
# If we get here there must be at least one non-whitespace character
# Do the table magic!
self.tree.insertFromTable = True
self.parser.phases["inBody"].processCharacters(token)
self.tree.insertFromTable = False
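# "Table magic" is foster parenting: with insertFromTable set, content
# that is illegal directly inside a table is inserted before the table
# instead. E.g. "<table>foo<tr>" yields a "foo" text node placed
# before the <table> element.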
def startTagCaption(self, token):
self.clearStackToTableContext()
self.tree.activeFormattingElements.append(Marker)
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inCaption"]
def startTagColgroup(self, token):
self.clearStackToTableContext()
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inColumnGroup"]
def startTagCol(self, token):
self.startTagColgroup(impliedTagToken("colgroup", "StartTag"))
return token
def startTagRowGroup(self, token):
self.clearStackToTableContext()
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inTableBody"]
def startTagImplyTbody(self, token):
self.startTagRowGroup(impliedTagToken("tbody", "StartTag"))
return token
def startTagTable(self, token):
self.parser.parseError("unexpected-start-tag-implies-end-tag",
{"startName": "table", "endName": "table"})
self.parser.phase.processEndTag(impliedTagToken("table"))
if not self.parser.innerHTML:
return token
def startTagStyleScript(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagInput(self, token):
if ("type" in token["data"] and
token["data"]["type"].translate(asciiUpper2Lower) == "hidden"):
self.parser.parseError("unexpected-hidden-input-in-table")
self.tree.insertElement(token)
# XXX associate with form
self.tree.openElements.pop()
else:
self.startTagOther(token)
def startTagForm(self, token):
self.parser.parseError("unexpected-form-in-table")
if self.tree.formPointer is None:
self.tree.insertElement(token)
self.tree.formPointer = self.tree.openElements[-1]
self.tree.openElements.pop()
def startTagOther(self, token):
self.parser.parseError("unexpected-start-tag-implies-table-voodoo", {"name": token["name"]})
# Do the table magic!
self.tree.insertFromTable = True
self.parser.phases["inBody"].processStartTag(token)
self.tree.insertFromTable = False
def endTagTable(self, token):
if self.tree.elementInScope("table", variant="table"):
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1].name != "table":
self.parser.parseError("end-tag-too-early-named",
{"gotName": "table",
"expectedName": self.tree.openElements[-1].name})
while self.tree.openElements[-1].name != "table":
self.tree.openElements.pop()
self.tree.openElements.pop()
self.parser.resetInsertionMode()
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def endTagIgnore(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag-implies-table-voodoo", {"name": token["name"]})
# Do the table magic!
self.tree.insertFromTable = True
self.parser.phases["inBody"].processEndTag(token)
self.tree.insertFromTable = False
class InTableTextPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.originalPhase = None
self.characterTokens = []
def flushCharacters(self):
data = "".join([item["data"] for item in self.characterTokens])
if any([item not in spaceCharacters for item in data]):
token = {"type": tokenTypes["Characters"], "data": data}
self.parser.phases["inTable"].insertText(token)
elif data:
self.tree.insertText(data)
self.characterTokens = []
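# Character tokens are buffered until the next non-character token so
# a whole run can be classified at once: a run that is all whitespace
# stays inside the table, while a run containing any other character
# is foster-parented via InTablePhase.insertText.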
def processComment(self, token):
self.flushCharacters()
self.parser.phase = self.originalPhase
return token
def processEOF(self):
self.flushCharacters()
self.parser.phase = self.originalPhase
return True
def processCharacters(self, token):
if token["data"] == "\u0000":
return
self.characterTokens.append(token)
def processSpaceCharacters(self, token):
# pretty sure we should never reach here
self.characterTokens.append(token)
# assert False
def processStartTag(self, token):
self.flushCharacters()
self.parser.phase = self.originalPhase
return token
def processEndTag(self, token):
self.flushCharacters()
self.parser.phase = self.originalPhase
return token
class InCaptionPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-caption
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
(("caption", "col", "colgroup", "tbody", "td", "tfoot", "th",
"thead", "tr"), self.startTagTableElement)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("caption", self.endTagCaption),
("table", self.endTagTable),
(("body", "col", "colgroup", "html", "tbody", "td", "tfoot", "th",
"thead", "tr"), self.endTagIgnore)
])
self.endTagHandler.default = self.endTagOther
def ignoreEndTagCaption(self):
return not self.tree.elementInScope("caption", variant="table")
def processEOF(self):
self.parser.phases["inBody"].processEOF()
def processCharacters(self, token):
return self.parser.phases["inBody"].processCharacters(token)
def startTagTableElement(self, token):
self.parser.parseError()
# XXX Have to duplicate logic here to find out if the tag is ignored
ignoreEndTag = self.ignoreEndTagCaption()
self.parser.phase.processEndTag(impliedTagToken("caption"))
if not ignoreEndTag:
return token
def startTagOther(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def endTagCaption(self, token):
if not self.ignoreEndTagCaption():
# AT this code is quite similar to endTagTable in "InTable"
self.tree.generateImpliedEndTags()
if self.tree.openElements[-1].name != "caption":
self.parser.parseError("expected-one-end-tag-but-got-another",
{"gotName": "caption",
"expectedName": self.tree.openElements[-1].name})
while self.tree.openElements[-1].name != "caption":
self.tree.openElements.pop()
self.tree.openElements.pop()
self.tree.clearActiveFormattingElements()
self.parser.phase = self.parser.phases["inTable"]
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def endTagTable(self, token):
self.parser.parseError()
ignoreEndTag = self.ignoreEndTagCaption()
self.parser.phase.processEndTag(impliedTagToken("caption"))
if not ignoreEndTag:
return token
def endTagIgnore(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def endTagOther(self, token):
return self.parser.phases["inBody"].processEndTag(token)
class InColumnGroupPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-column
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("col", self.startTagCol)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("colgroup", self.endTagColgroup),
("col", self.endTagCol)
])
self.endTagHandler.default = self.endTagOther
def ignoreEndTagColgroup(self):
return self.tree.openElements[-1].name == "html"
def processEOF(self):
if self.tree.openElements[-1].name == "html":
assert self.parser.innerHTML
return
else:
ignoreEndTag = self.ignoreEndTagColgroup()
self.endTagColgroup(impliedTagToken("colgroup"))
if not ignoreEndTag:
return True
def processCharacters(self, token):
ignoreEndTag = self.ignoreEndTagColgroup()
self.endTagColgroup(impliedTagToken("colgroup"))
if not ignoreEndTag:
return token
def startTagCol(self, token):
self.tree.insertElement(token)
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def startTagOther(self, token):
ignoreEndTag = self.ignoreEndTagColgroup()
self.endTagColgroup(impliedTagToken("colgroup"))
if not ignoreEndTag:
return token
def endTagColgroup(self, token):
if self.ignoreEndTagColgroup():
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
else:
self.tree.openElements.pop()
self.parser.phase = self.parser.phases["inTable"]
def endTagCol(self, token):
self.parser.parseError("no-end-tag", {"name": "col"})
def endTagOther(self, token):
ignoreEndTag = self.ignoreEndTagColgroup()
self.endTagColgroup(impliedTagToken("colgroup"))
if not ignoreEndTag:
return token
class InTableBodyPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-table0
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("tr", self.startTagTr),
(("td", "th"), self.startTagTableCell),
(("caption", "col", "colgroup", "tbody", "tfoot", "thead"),
self.startTagTableOther)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
(("tbody", "tfoot", "thead"), self.endTagTableRowGroup),
("table", self.endTagTable),
(("body", "caption", "col", "colgroup", "html", "td", "th",
"tr"), self.endTagIgnore)
])
self.endTagHandler.default = self.endTagOther
# helper methods
def clearStackToTableBodyContext(self):
while self.tree.openElements[-1].name not in ("tbody", "tfoot",
"thead", "html"):
# self.parser.parseError("unexpected-implied-end-tag-in-table",
# {"name": self.tree.openElements[-1].name})
self.tree.openElements.pop()
if self.tree.openElements[-1].name == "html":
assert self.parser.innerHTML
# the rest
def processEOF(self):
self.parser.phases["inTable"].processEOF()
def processSpaceCharacters(self, token):
return self.parser.phases["inTable"].processSpaceCharacters(token)
def processCharacters(self, token):
return self.parser.phases["inTable"].processCharacters(token)
def startTagTr(self, token):
self.clearStackToTableBodyContext()
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inRow"]
def startTagTableCell(self, token):
self.parser.parseError("unexpected-cell-in-table-body",
{"name": token["name"]})
self.startTagTr(impliedTagToken("tr", "StartTag"))
return token
def startTagTableOther(self, token):
# XXX AT Any ideas on how to share this with endTagTable?
if (self.tree.elementInScope("tbody", variant="table") or
self.tree.elementInScope("thead", variant="table") or
self.tree.elementInScope("tfoot", variant="table")):
self.clearStackToTableBodyContext()
self.endTagTableRowGroup(
impliedTagToken(self.tree.openElements[-1].name))
return token
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def startTagOther(self, token):
return self.parser.phases["inTable"].processStartTag(token)
def endTagTableRowGroup(self, token):
if self.tree.elementInScope(token["name"], variant="table"):
self.clearStackToTableBodyContext()
self.tree.openElements.pop()
self.parser.phase = self.parser.phases["inTable"]
else:
self.parser.parseError("unexpected-end-tag-in-table-body",
{"name": token["name"]})
def endTagTable(self, token):
if (self.tree.elementInScope("tbody", variant="table") or
self.tree.elementInScope("thead", variant="table") or
self.tree.elementInScope("tfoot", variant="table")):
self.clearStackToTableBodyContext()
self.endTagTableRowGroup(
impliedTagToken(self.tree.openElements[-1].name))
return token
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def endTagIgnore(self, token):
self.parser.parseError("unexpected-end-tag-in-table-body",
{"name": token["name"]})
def endTagOther(self, token):
return self.parser.phases["inTable"].processEndTag(token)
class InRowPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-row
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
(("td", "th"), self.startTagTableCell),
(("caption", "col", "colgroup", "tbody", "tfoot", "thead",
"tr"), self.startTagTableOther)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("tr", self.endTagTr),
("table", self.endTagTable),
(("tbody", "tfoot", "thead"), self.endTagTableRowGroup),
(("body", "caption", "col", "colgroup", "html", "td", "th"),
self.endTagIgnore)
])
self.endTagHandler.default = self.endTagOther
# helper methods (XXX unify this with other table helper methods)
def clearStackToTableRowContext(self):
while self.tree.openElements[-1].name not in ("tr", "html"):
self.parser.parseError("unexpected-implied-end-tag-in-table-row",
{"name": self.tree.openElements[-1].name})
self.tree.openElements.pop()
def ignoreEndTagTr(self):
return not self.tree.elementInScope("tr", variant="table")
# the rest
def processEOF(self):
self.parser.phases["inTable"].processEOF()
def processSpaceCharacters(self, token):
return self.parser.phases["inTable"].processSpaceCharacters(token)
def processCharacters(self, token):
return self.parser.phases["inTable"].processCharacters(token)
def startTagTableCell(self, token):
self.clearStackToTableRowContext()
self.tree.insertElement(token)
self.parser.phase = self.parser.phases["inCell"]
self.tree.activeFormattingElements.append(Marker)
def startTagTableOther(self, token):
ignoreEndTag = self.ignoreEndTagTr()
self.endTagTr(impliedTagToken("tr"))
# XXX how are we sure it's always ignored in the innerHTML case?
if not ignoreEndTag:
return token
def startTagOther(self, token):
return self.parser.phases["inTable"].processStartTag(token)
def endTagTr(self, token):
if not self.ignoreEndTagTr():
self.clearStackToTableRowContext()
self.tree.openElements.pop()
self.parser.phase = self.parser.phases["inTableBody"]
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def endTagTable(self, token):
ignoreEndTag = self.ignoreEndTagTr()
self.endTagTr(impliedTagToken("tr"))
# Reprocess the current tag if the tr end tag was not ignored
# XXX how are we sure it's always ignored in the innerHTML case?
if not ignoreEndTag:
return token
def endTagTableRowGroup(self, token):
if self.tree.elementInScope(token["name"], variant="table"):
self.endTagTr(impliedTagToken("tr"))
return token
else:
self.parser.parseError()
def endTagIgnore(self, token):
self.parser.parseError("unexpected-end-tag-in-table-row",
{"name": token["name"]})
def endTagOther(self, token):
return self.parser.phases["inTable"].processEndTag(token)
class InCellPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-cell
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
(("caption", "col", "colgroup", "tbody", "td", "tfoot", "th",
"thead", "tr"), self.startTagTableOther)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
(("td", "th"), self.endTagTableCell),
(("body", "caption", "col", "colgroup", "html"), self.endTagIgnore),
(("table", "tbody", "tfoot", "thead", "tr"), self.endTagImply)
])
self.endTagHandler.default = self.endTagOther
# helper
def closeCell(self):
if self.tree.elementInScope("td", variant="table"):
self.endTagTableCell(impliedTagToken("td"))
elif self.tree.elementInScope("th", variant="table"):
self.endTagTableCell(impliedTagToken("th"))
# the rest
def processEOF(self):
self.parser.phases["inBody"].processEOF()
def processCharacters(self, token):
return self.parser.phases["inBody"].processCharacters(token)
def startTagTableOther(self, token):
if (self.tree.elementInScope("td", variant="table") or
self.tree.elementInScope("th", variant="table")):
self.closeCell()
return token
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def startTagOther(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def endTagTableCell(self, token):
if self.tree.elementInScope(token["name"], variant="table"):
self.tree.generateImpliedEndTags(token["name"])
if self.tree.openElements[-1].name != token["name"]:
self.parser.parseError("unexpected-cell-end-tag",
{"name": token["name"]})
while True:
node = self.tree.openElements.pop()
if node.name == token["name"]:
break
else:
self.tree.openElements.pop()
self.tree.clearActiveFormattingElements()
self.parser.phase = self.parser.phases["inRow"]
else:
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def endTagIgnore(self, token):
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
def endTagImply(self, token):
if self.tree.elementInScope(token["name"], variant="table"):
self.closeCell()
return token
else:
# sometimes innerHTML case
self.parser.parseError()
def endTagOther(self, token):
return self.parser.phases["inBody"].processEndTag(token)
class InSelectPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("option", self.startTagOption),
("optgroup", self.startTagOptgroup),
("select", self.startTagSelect),
(("input", "keygen", "textarea"), self.startTagInput),
("script", self.startTagScript)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("option", self.endTagOption),
("optgroup", self.endTagOptgroup),
("select", self.endTagSelect)
])
self.endTagHandler.default = self.endTagOther
# http://www.whatwg.org/specs/web-apps/current-work/#in-select
def processEOF(self):
if self.tree.openElements[-1].name != "html":
self.parser.parseError("eof-in-select")
else:
assert self.parser.innerHTML
def processCharacters(self, token):
if token["data"] == "\u0000":
return
self.tree.insertText(token["data"])
def startTagOption(self, token):
# We need to imply </option> if <option> is the current node.
if self.tree.openElements[-1].name == "option":
self.tree.openElements.pop()
self.tree.insertElement(token)
def startTagOptgroup(self, token):
if self.tree.openElements[-1].name == "option":
self.tree.openElements.pop()
if self.tree.openElements[-1].name == "optgroup":
self.tree.openElements.pop()
self.tree.insertElement(token)
def startTagSelect(self, token):
self.parser.parseError("unexpected-select-in-select")
self.endTagSelect(impliedTagToken("select"))
def startTagInput(self, token):
self.parser.parseError("unexpected-input-in-select")
if self.tree.elementInScope("select", variant="select"):
self.endTagSelect(impliedTagToken("select"))
return token
else:
assert self.parser.innerHTML
def startTagScript(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("unexpected-start-tag-in-select",
{"name": token["name"]})
def endTagOption(self, token):
if self.tree.openElements[-1].name == "option":
self.tree.openElements.pop()
else:
self.parser.parseError("unexpected-end-tag-in-select",
{"name": "option"})
def endTagOptgroup(self, token):
# </optgroup> implicitly closes <option>
if (self.tree.openElements[-1].name == "option" and
self.tree.openElements[-2].name == "optgroup"):
self.tree.openElements.pop()
# It also closes </optgroup>
if self.tree.openElements[-1].name == "optgroup":
self.tree.openElements.pop()
# But nothing else
else:
self.parser.parseError("unexpected-end-tag-in-select",
{"name": "optgroup"})
def endTagSelect(self, token):
if self.tree.elementInScope("select", variant="select"):
node = self.tree.openElements.pop()
while node.name != "select":
node = self.tree.openElements.pop()
self.parser.resetInsertionMode()
else:
# innerHTML case
assert self.parser.innerHTML
self.parser.parseError()
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag-in-select",
{"name": token["name"]})
class InSelectInTablePhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
(("caption", "table", "tbody", "tfoot", "thead", "tr", "td", "th"),
self.startTagTable)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
(("caption", "table", "tbody", "tfoot", "thead", "tr", "td", "th"),
self.endTagTable)
])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
self.parser.phases["inSelect"].processEOF()
def processCharacters(self, token):
return self.parser.phases["inSelect"].processCharacters(token)
def startTagTable(self, token):
self.parser.parseError("unexpected-table-element-start-tag-in-select-in-table", {"name": token["name"]})
self.endTagOther(impliedTagToken("select"))
return token
def startTagOther(self, token):
return self.parser.phases["inSelect"].processStartTag(token)
def endTagTable(self, token):
self.parser.parseError("unexpected-table-element-end-tag-in-select-in-table", {"name": token["name"]})
if self.tree.elementInScope(token["name"], variant="table"):
self.endTagOther(impliedTagToken("select"))
return token
def endTagOther(self, token):
return self.parser.phases["inSelect"].processEndTag(token)
class InForeignContentPhase(Phase):
breakoutElements = frozenset(["b", "big", "blockquote", "body", "br",
"center", "code", "dd", "div", "dl", "dt",
"em", "embed", "h1", "h2", "h3",
"h4", "h5", "h6", "head", "hr", "i", "img",
"li", "listing", "menu", "meta", "nobr",
"ol", "p", "pre", "ruby", "s", "small",
"span", "strong", "strike", "sub", "sup",
"table", "tt", "u", "ul", "var"])
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
def adjustSVGTagNames(self, token):
replacements = {"altglyph": "altGlyph",
"altglyphdef": "altGlyphDef",
"altglyphitem": "altGlyphItem",
"animatecolor": "animateColor",
"animatemotion": "animateMotion",
"animatetransform": "animateTransform",
"clippath": "clipPath",
"feblend": "feBlend",
"fecolormatrix": "feColorMatrix",
"fecomponenttransfer": "feComponentTransfer",
"fecomposite": "feComposite",
"feconvolvematrix": "feConvolveMatrix",
"fediffuselighting": "feDiffuseLighting",
"fedisplacementmap": "feDisplacementMap",
"fedistantlight": "feDistantLight",
"feflood": "feFlood",
"fefunca": "feFuncA",
"fefuncb": "feFuncB",
"fefuncg": "feFuncG",
"fefuncr": "feFuncR",
"fegaussianblur": "feGaussianBlur",
"feimage": "feImage",
"femerge": "feMerge",
"femergenode": "feMergeNode",
"femorphology": "feMorphology",
"feoffset": "feOffset",
"fepointlight": "fePointLight",
"fespecularlighting": "feSpecularLighting",
"fespotlight": "feSpotLight",
"fetile": "feTile",
"feturbulence": "feTurbulence",
"foreignobject": "foreignObject",
"glyphref": "glyphRef",
"lineargradient": "linearGradient",
"radialgradient": "radialGradient",
"textpath": "textPath"}
if token["name"] in replacements:
token["name"] = replacements[token["name"]]
def processCharacters(self, token):
if token["data"] == "\u0000":
token["data"] = "\uFFFD"
elif (self.parser.framesetOK and
any(char not in spaceCharacters for char in token["data"])):
self.parser.framesetOK = False
Phase.processCharacters(self, token)
def processStartTag(self, token):
currentNode = self.tree.openElements[-1]
if (token["name"] in self.breakoutElements or
(token["name"] == "font" and
set(token["data"].keys()) & set(["color", "face", "size"]))):
self.parser.parseError("unexpected-html-element-in-foreign-content",
{"name": token["name"]})
while (self.tree.openElements[-1].namespace !=
self.tree.defaultNamespace and
not self.parser.isHTMLIntegrationPoint(self.tree.openElements[-1]) and
not self.parser.isMathMLTextIntegrationPoint(self.tree.openElements[-1])):
self.tree.openElements.pop()
return token
else:
if currentNode.namespace == namespaces["mathml"]:
self.parser.adjustMathMLAttributes(token)
elif currentNode.namespace == namespaces["svg"]:
self.adjustSVGTagNames(token)
self.parser.adjustSVGAttributes(token)
self.parser.adjustForeignAttributes(token)
token["namespace"] = currentNode.namespace
self.tree.insertElement(token)
if token["selfClosing"]:
self.tree.openElements.pop()
token["selfClosingAcknowledged"] = True
def processEndTag(self, token):
nodeIndex = len(self.tree.openElements) - 1
node = self.tree.openElements[-1]
if node.name.translate(asciiUpper2Lower) != token["name"]:
self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
while True:
if node.name.translate(asciiUpper2Lower) == token["name"]:
# XXX this isn't in the spec but it seems necessary
if self.parser.phase == self.parser.phases["inTableText"]:
self.parser.phase.flushCharacters()
self.parser.phase = self.parser.phase.originalPhase
while self.tree.openElements.pop() != node:
assert self.tree.openElements
new_token = None
break
nodeIndex -= 1
node = self.tree.openElements[nodeIndex]
if node.namespace != self.tree.defaultNamespace:
continue
else:
new_token = self.parser.phase.processEndTag(token)
break
return new_token
class AfterBodyPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([("html", self.endTagHtml)])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
# Stop parsing
pass
def processComment(self, token):
# This is needed because data is to be appended to the <html> element
# here and not to whatever is currently open.
self.tree.insertComment(token, self.tree.openElements[0])
def processCharacters(self, token):
self.parser.parseError("unexpected-char-after-body")
self.parser.phase = self.parser.phases["inBody"]
return token
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("unexpected-start-tag-after-body",
{"name": token["name"]})
self.parser.phase = self.parser.phases["inBody"]
return token
def endTagHtml(self, token):
if self.parser.innerHTML:
self.parser.parseError("unexpected-end-tag-after-body-innerhtml")
else:
self.parser.phase = self.parser.phases["afterAfterBody"]
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag-after-body",
{"name": token["name"]})
self.parser.phase = self.parser.phases["inBody"]
return token
class InFramesetPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#in-frameset
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("frameset", self.startTagFrameset),
("frame", self.startTagFrame),
("noframes", self.startTagNoframes)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("frameset", self.endTagFrameset)
])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
if self.tree.openElements[-1].name != "html":
self.parser.parseError("eof-in-frameset")
else:
assert self.parser.innerHTML
def processCharacters(self, token):
self.parser.parseError("unexpected-char-in-frameset")
def startTagFrameset(self, token):
self.tree.insertElement(token)
def startTagFrame(self, token):
self.tree.insertElement(token)
self.tree.openElements.pop()
def startTagNoframes(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("unexpected-start-tag-in-frameset",
{"name": token["name"]})
def endTagFrameset(self, token):
if self.tree.openElements[-1].name == "html":
# innerHTML case
self.parser.parseError("unexpected-frameset-in-frameset-innerhtml")
else:
self.tree.openElements.pop()
if (not self.parser.innerHTML and
self.tree.openElements[-1].name != "frameset"):
# If we're not in innerHTML mode and the current node is not a
# "frameset" element (anymore) then switch.
self.parser.phase = self.parser.phases["afterFrameset"]
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag-in-frameset",
{"name": token["name"]})
class AfterFramesetPhase(Phase):
# http://www.whatwg.org/specs/web-apps/current-work/#after3
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("noframes", self.startTagNoframes)
])
self.startTagHandler.default = self.startTagOther
self.endTagHandler = _utils.MethodDispatcher([
("html", self.endTagHtml)
])
self.endTagHandler.default = self.endTagOther
def processEOF(self):
# Stop parsing
pass
def processCharacters(self, token):
self.parser.parseError("unexpected-char-after-frameset")
def startTagNoframes(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("unexpected-start-tag-after-frameset",
{"name": token["name"]})
def endTagHtml(self, token):
self.parser.phase = self.parser.phases["afterAfterFrameset"]
def endTagOther(self, token):
self.parser.parseError("unexpected-end-tag-after-frameset",
{"name": token["name"]})
class AfterAfterBodyPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml)
])
self.startTagHandler.default = self.startTagOther
def processEOF(self):
pass
def processComment(self, token):
self.tree.insertComment(token, self.tree.document)
def processSpaceCharacters(self, token):
return self.parser.phases["inBody"].processSpaceCharacters(token)
def processCharacters(self, token):
self.parser.parseError("expected-eof-but-got-char")
self.parser.phase = self.parser.phases["inBody"]
return token
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("expected-eof-but-got-start-tag",
{"name": token["name"]})
self.parser.phase = self.parser.phases["inBody"]
return token
def processEndTag(self, token):
self.parser.parseError("expected-eof-but-got-end-tag",
{"name": token["name"]})
self.parser.phase = self.parser.phases["inBody"]
return token
class AfterAfterFramesetPhase(Phase):
def __init__(self, parser, tree):
Phase.__init__(self, parser, tree)
self.startTagHandler = _utils.MethodDispatcher([
("html", self.startTagHtml),
("noframes", self.startTagNoFrames)
])
self.startTagHandler.default = self.startTagOther
def processEOF(self):
pass
def processComment(self, token):
self.tree.insertComment(token, self.tree.document)
def processSpaceCharacters(self, token):
return self.parser.phases["inBody"].processSpaceCharacters(token)
def processCharacters(self, token):
self.parser.parseError("expected-eof-but-got-char")
def startTagHtml(self, token):
return self.parser.phases["inBody"].processStartTag(token)
def startTagNoFrames(self, token):
return self.parser.phases["inHead"].processStartTag(token)
def startTagOther(self, token):
self.parser.parseError("expected-eof-but-got-start-tag",
{"name": token["name"]})
def processEndTag(self, token):
self.parser.parseError("expected-eof-but-got-end-tag",
{"name": token["name"]})
# pylint:enable=unused-argument
return {
"initial": InitialPhase,
"beforeHtml": BeforeHtmlPhase,
"beforeHead": BeforeHeadPhase,
"inHead": InHeadPhase,
"inHeadNoscript": InHeadNoscriptPhase,
"afterHead": AfterHeadPhase,
"inBody": InBodyPhase,
"text": TextPhase,
"inTable": InTablePhase,
"inTableText": InTableTextPhase,
"inCaption": InCaptionPhase,
"inColumnGroup": InColumnGroupPhase,
"inTableBody": InTableBodyPhase,
"inRow": InRowPhase,
"inCell": InCellPhase,
"inSelect": InSelectPhase,
"inSelectInTable": InSelectInTablePhase,
"inForeignContent": InForeignContentPhase,
"afterBody": AfterBodyPhase,
"inFrameset": InFramesetPhase,
"afterFrameset": AfterFramesetPhase,
"afterAfterBody": AfterAfterBodyPhase,
"afterAfterFrameset": AfterAfterFramesetPhase,
}
def adjust_attributes(token, replacements):
needs_adjustment = viewkeys(token['data']) & viewkeys(replacements)
if needs_adjustment:
token['data'] = OrderedDict((replacements.get(k, k), v)
for k, v in token['data'].items())
def impliedTagToken(name, type="EndTag", attributes=None,
selfClosing=False):
if attributes is None:
attributes = {}
return {"type": tokenTypes[type], "name": name, "data": attributes,
"selfClosing": selfClosing}
class ParseError(Exception):
"""Error in parsed document"""
pass
| mpl-2.0 |
maniteja123/numpy | numpy/distutils/command/build_py.py | 264 | 1210 | from __future__ import division, absolute_import, print_function
from distutils.command.build_py import build_py as old_build_py
from numpy.distutils.misc_util import is_string
class build_py(old_build_py):
def run(self):
build_src = self.get_finalized_command('build_src')
if build_src.py_modules_dict and self.packages is None:
self.packages = list(build_src.py_modules_dict.keys())
old_build_py.run(self)
def find_package_modules(self, package, package_dir):
modules = old_build_py.find_package_modules(self, package, package_dir)
# Find build_src generated *.py files.
build_src = self.get_finalized_command('build_src')
modules += build_src.py_modules_dict.get(package, [])
return modules
def find_modules(self):
old_py_modules = self.py_modules[:]
new_py_modules = [_m for _m in self.py_modules if is_string(_m)]
self.py_modules[:] = new_py_modules
modules = old_build_py.find_modules(self)
self.py_modules[:] = old_py_modules
return modules
# XXX: Fix find_source_files for item in py_modules such that item is 3-tuple
# and item[2] is source file.
| bsd-3-clause |
shekkbuilder/the-backdoor-factory | onionduke/onionduke.py | 15 | 4890 | #!/usr/bin/env python
import struct
import os
def xor_file(input_file, output_file, xorkey):
number_added = 0
while True:
some_bytes = input_file.read(4)
if len(some_bytes) == 0:
break
if len(some_bytes) % 4 != 0:
number_added = 4 - len(some_bytes)
some_bytes = some_bytes + "\x00" * (number_added)
writable_bytes = struct.pack("<I", (struct.unpack("<I", some_bytes)[0]) ^ xorkey)
output_file.write(writable_bytes)
if number_added != 0:
number_added = 0 - number_added
output_file.seek(number_added, os.SEEK_END)
output_file.truncate()
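# Illustrative round trip (hypothetical file names): XOR is its own inverse,
# and the zero padding added for a trailing partial word is truncated away
# again, so running xor_file twice with the same key restores the input.
#
# with open("payload.bin", "rb") as src, open("enc.bin", "wb") as dst:
#     xor_file(src, dst, 0xDEADBEEF)
# with open("enc.bin", "rb") as src, open("dec.bin", "wb") as dst:
#     xor_file(src, dst, 0xDEADBEEF)  # dec.bin now matches payload.bin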
def write_rsrc(f, oldrva, newRva):
'''
This parses a .rsrc section and will adjust the RVA attributes
for patching on to the OnionDuke Stub
'''
rsrc_structure = {}
def parse_header(f):
return {"Characteristics": struct.unpack("<I", f.read(4))[0],
"TimeDataStamp": struct.unpack("<I", f.read(4))[0],
"MajorVersion": struct.unpack("<H", f.read(2))[0],
"MinorVersion": struct.unpack("<H", f.read(2))[0],
"NumberOfNamedEntries": struct.unpack("<H", f.read(2))[0],
"NumberofIDEntries": struct.unpack("<H", f.read(2))[0],
}
def merge_two_dicts(x, y):
'''Given two dicts, merge them into a new dict as a shallow copy.'''
z = x.copy()
z.update(y)
return z
def parse_data_entry(f):
return {"WriteME": f.tell(),
"RVA of Data": struct.unpack("<I", f.read(4))[0],
"Size": struct.unpack("<I", f.read(4))[0],
"CodePage": struct.unpack("<I", f.read(4))[0],
"Reserved": struct.unpack("<I", f.read(4))[0]
}
def parse_ID(f, number):
temp = {}
for i in range(0, number):
_tempid = struct.unpack("<I", f.read(4))[0]
temp[_tempid] = struct.unpack("<I", f.read(4))[0]
return temp
#parse initial header
rsrc_structure['Typeheader'] = parse_header(f)
rsrc_structure['Typeheader']['NameEntries'] = {}
rsrc_structure['Typeheader']["IDentries"] = {}
if rsrc_structure['Typeheader']["NumberofIDEntries"]:
rsrc_structure['Typeheader']["IDentries"] = parse_ID(f, rsrc_structure['Typeheader']["NumberofIDEntries"])
if rsrc_structure['Typeheader']["NumberOfNamedEntries"]:
rsrc_structure['Typeheader']['NameEntries'] = parse_ID(f, rsrc_structure['Typeheader']['NumberOfNamedEntries'])
#merge, flatten
rsrc_structure['Typeheader']['Entries'] = merge_two_dicts(rsrc_structure['Typeheader']["IDentries"],
rsrc_structure['Typeheader']['NameEntries'])
for entry, value in rsrc_structure['Typeheader']["Entries"].iteritems():
#jump to location in PE adjusted for RVA
f.seek((value & 0xffffff), 0)
rsrc_structure[entry] = parse_header(f)
rsrc_structure[entry]["IDs"] = {}
rsrc_structure[entry]["Names"] = {}
if rsrc_structure[entry]["NumberofIDEntries"]:
rsrc_structure[entry]["IDs"] = parse_ID(f, rsrc_structure[entry]["NumberofIDEntries"])
if rsrc_structure[entry]["NumberOfNamedEntries"]:
rsrc_structure[entry]["Names"] = parse_ID(f, rsrc_structure[entry]["NumberOfNamedEntries"])
rsrc_structure[entry]["NameIDs"] = merge_two_dicts(rsrc_structure[entry]["IDs"],
rsrc_structure[entry]["Names"])
#Now get language
for name_id, offset in rsrc_structure[entry]["NameIDs"].iteritems():
f.seek((offset & 0xffffff), 0)
rsrc_structure[name_id] = parse_header(f)
rsrc_structure[name_id]["IDs"] = {}
rsrc_structure[name_id]["Names"] = {}
if rsrc_structure[name_id]["NumberofIDEntries"]:
rsrc_structure[name_id]["IDs"] = parse_ID(f, rsrc_structure[name_id]["NumberofIDEntries"])
if rsrc_structure[name_id]["NumberOfNamedEntries"]:
rsrc_structure[name_id]["Names"] = parse_ID(f, rsrc_structure[name_id]["NumberOfNamedEntries"])
rsrc_structure[name_id]["language"] = merge_two_dicts(rsrc_structure[name_id]["IDs"],
rsrc_structure[name_id]["Names"])
#now get Data Entry Details and write
for lanID, offsetDataEntry in rsrc_structure[name_id]["language"].iteritems():
f.seek((offsetDataEntry & 0xffffff), 0)
rsrc_structure[lanID] = parse_data_entry(f)
#write to location
f.seek(rsrc_structure[lanID]["WriteME"], 0)
f.write(struct.pack("<I", rsrc_structure[lanID]["RVA of Data"] - oldrva + newRva))
| bsd-3-clause |
kimjinyong/i2nsf-framework | Hackathon-104/DMS/confd-6.6/lib/pyang/pyang/xpath.py | 9 | 5841 | import re
import sys
# not 100% XPath / XML, but good enough for YANG
namestr = r'[a-zA-Z_][a-zA-Z0-9_\-.]*'
ncnamestr = '((' + namestr + '):)?(' + namestr + ')'
prefixteststr = '((' + namestr + r'):)?\*'
patterns = [
('whitespace', re.compile(r'\s+')),
# Expr tokens
('(', re.compile(r'\(')),
(')', re.compile(r'\)')),
('[', re.compile(r'\[')),
(']', re.compile(r'\]')),
('..', re.compile(r'\.\.')),
('.', re.compile(r'\.')),
('@', re.compile(r'\@')),
(',', re.compile(r',')),
('::', re.compile(r'::')),
# operators
('//', re.compile(r'\/\/')),
('/', re.compile(r'\/')),
('|', re.compile(r'\|')),
('+', re.compile(r'\+')),
('-', re.compile(r'-')),
('=', re.compile(r'=')),
('!=', re.compile(r'!=')),
('<=', re.compile(r'<=')),
('>=', re.compile(r'>=')),
('>', re.compile(r'>')),
('<', re.compile(r'<')),
('*', re.compile(r'\*')),
# others
('number', re.compile(r'[0-9]+(\.[0-9]+)?')),
('prefix-test', re.compile(prefixteststr)),
('name', re.compile(ncnamestr)),
('attribute', re.compile(r'\@' + ncnamestr)),
('variable', re.compile(r'\$' + ncnamestr)),
('literal', re.compile(r'(\".*?\")|(\'.*?\')')),
]
operators = [ 'div', 'and', 'or', 'mod' ]
node_types = [ 'comment', 'text', 'processing-instruction', 'node' ]
axes = [ 'ancestor-or-self', 'ancestor', 'attribute', 'child',
'descendant-or-self', 'descendant', 'following-sibling',
'following', 'namespace', 'parent', 'preceding-sibling',
'preceding', 'self' ]
re_open_para = re.compile(r'\s*\(')
re_axis = re.compile(r'\s*::')
def validate(s):
"""Validate the XPath expression in the string `s`
Return True if the expression is correct, and throw
SyntaxError on failure."""
t = tokens(s)
return True
def tokens(s):
"""Return a list of tokens, or throw SyntaxError on failure.
A token is one of the patterns or:
('wildcard', '*')
('axis', axisname)
"""
pos = 0
toks = []
while pos < len(s):
matched = False
for (tokname, r) in patterns:
m = r.match(s, pos)
if m is not None:
# found a matching token
prec = _preceding_token(toks)
if tokname == '*' and prec is not None and _is_special(prec):
# XPath 1.0 spec, 3.7 special rule 1a
# interpret '*' as a wildcard
tok = ('wildcard', m.group(0))
elif (tokname == 'name' and prec is not None and
not _is_special(prec)):
# XPath 1.0 spec, 3.7 special rule 1b
# interpret the name as an operator
if m.group(0) in operators:
tok = (m.group(0), m.group(0))
else:
e = "%s: unknown operator %s" % (pos+1, m.group(0))
raise SyntaxError(e)
elif tokname == 'name':
# check if next token is '('
if re_open_para.match(s, pos + len(m.group(0))):
# XPath 1.0 spec, 3.7 special rule 2
if m.group(0) in node_types:
# XPath 1.0 spec, 3.7 special rule 2a
tok = (m.group(0), m.group(0))
else:
# XPath 1.0 spec, 3.7 special rule 2b
tok = ('function', m.group(0))
# check if next token is '::'
elif re_axis.match(s, pos + len(m.group(0))):
# XPath 1.0 spec, 3.7 special rule 3
if m.group(0) in axes:
tok = ('axis', m.group(0))
else:
e = "%s: unknown axis %s" % (pos+1, m.group(0))
raise SyntaxError(e)
else:
tok = ('name', m.group(0))
else:
tok = (tokname, m.group(0))
pos += len(m.group(0))
toks.append(tok)
matched = True
break
if not matched:
# no patterns matched
raise SyntaxError("at position %s" % str(pos+1))
return toks
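# Illustrative token stream (derived from the special rules above):
#
# >>> tokens("child::para")
# [('axis', 'child'), ('::', '::'), ('name', 'para')]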
def _preceding_token(toks):
if len(toks) > 1 and toks[-1][0] == 'whitespace':
return toks[-2][0]
if len(toks) > 0 and toks[-1][0] != 'whitespace':
return toks[-1][0]
return None
_special_toks = [ ',', '@', '::', '(', '[', '/', '//', '|', '+', '-',
'=', '!=', '<', '<=', '>', '>=',
'and', 'or', 'mod', 'div' ]
def _is_special(tok):
return tok in _special_toks
def add_prefix(prefix, s):
"Add `prefix` to all unprefixed names in `s`"
# tokenize the XPath expression
toks = tokens(s)
# add default prefix to unprefixed names
toks2 = [_add_prefix(prefix, tok) for tok in toks]
# build a string of the patched expression
ls = [x for (_tokname, x) in toks2]
return ''.join(ls)
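# Illustrative example (see _add_prefix below): only unprefixed names gain
# the prefix; operators, separators and already-prefixed names are left as-is.
#
# >>> add_prefix("p", "interface/name | p:mtu")
# 'p:interface/p:name | p:mtu'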
_re_ncname = re.compile(ncnamestr)
def _add_prefix(prefix, tok):
(tokname, s) = tok
if tokname == 'name':
m = _re_ncname.match(s)
if m.group(2) is None:
return (tokname, prefix + ':' + s)
return tok
core_functions = (
'last',
'position',
'count',
'id',
'local-name',
'namespace-uri',
'name',
'string',
'concat',
'starts-with',
'contains',
'substring-before',
'substring-after',
'substring',
'string-length',
'normalize-space',
'translate',
'boolean',
'not',
'true',
'false',
'lang',
'number',
'sum',
'floor',
'ceiling',
'round',
)
| apache-2.0 |
weisongchen/flaskapp | venv/lib/python2.7/site-packages/pip/_vendor/html5lib/filters/whitespace.py | 353 | 1139 | from __future__ import absolute_import, division, unicode_literals
import re
from . import base
from ..constants import rcdataElements, spaceCharacters
spaceCharacters = "".join(spaceCharacters)
SPACES_REGEX = re.compile("[%s]+" % spaceCharacters)
class Filter(base.Filter):
spacePreserveElements = frozenset(["pre", "textarea"] + list(rcdataElements))
def __iter__(self):
preserve = 0
for token in base.Filter.__iter__(self):
type = token["type"]
if type == "StartTag" \
and (preserve or token["name"] in self.spacePreserveElements):
preserve += 1
elif type == "EndTag" and preserve:
preserve -= 1
elif not preserve and type == "SpaceCharacters" and token["data"]:
# Test on token["data"] above to not introduce spaces where there were not
token["data"] = " "
elif not preserve and type == "Characters":
token["data"] = collapse_spaces(token["data"])
yield token
def collapse_spaces(text):
return SPACES_REGEX.sub(' ', text)
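# Illustrative example (spaceCharacters covers the usual ASCII whitespace):
#
# >>> collapse_spaces("foo \t\n  bar")
# 'foo bar'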
| mit |
thomaserlang/XenBackup | src/xenbackup/XenAPI.py | 1 | 9750 | # Copyright (c) Citrix Systems, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1) Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2) Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# --------------------------------------------------------------------
# Parts of this file are based upon xmlrpclib.py, the XML-RPC client
# interface included in the Python distribution.
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
import gettext
import xmlrpclib
import httplib
import socket
import sys
import ssl
translation = gettext.translation('xen-xm', fallback=True)
API_VERSION_1_1 = '1.1'
API_VERSION_1_2 = '1.2'
class Failure(Exception):
def __init__(self, details):
self.details = details
def __str__(self):
try:
return str(self.details)
except Exception, exn:
print >>sys.stderr, exn
return "Xen-API failure: %s" % str(self.details)
def _details_map(self):
return dict([(str(i), self.details[i])
for i in range(len(self.details))])
# Just a "constant" that we use to decide whether to retry the RPC
_RECONNECT_AND_RETRY = object()
class UDSHTTPConnection(httplib.HTTPConnection):
"""HTTPConnection subclass to allow HTTP over Unix domain sockets. """
def connect(self):
path = self.host.replace("_", "/")
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.connect(path)
class UDSHTTP(httplib.HTTP):
_connection_class = UDSHTTPConnection
class UDSTransport(xmlrpclib.Transport):
def __init__(self, use_datetime=0):
self._use_datetime = use_datetime
self._extra_headers=[]
def add_extra_header(self, key, value):
self._extra_headers += [ (key,value) ]
def make_connection(self, host):
# Python 2.4 compatibility
if sys.version_info < (2, 6):
return UDSHTTP(host)
else:
return UDSHTTPConnection(host)
def send_request(self, connection, handler, request_body):
connection.putrequest("POST", handler)
for key, value in self._extra_headers:
connection.putheader(key, value)
class Session(xmlrpclib.ServerProxy):
"""A server proxy and session manager for communicating with xapi using
the Xen-API.
Example:
session = Session('http://localhost/')
session.login_with_password('me', 'mypassword')
session.xenapi.VM.start(vm_uuid)
session.xenapi.session.logout()
"""
def __init__(self, uri, transport=None, encoding=None, verbose=0,
allow_none=1):
try:
xmlrpclib.ServerProxy.__init__(self, uri, transport, encoding,
verbose, allow_none, context=ssl._create_unverified_context())
except AttributeError:
xmlrpclib.ServerProxy.__init__(self, uri, transport, encoding, verbose, allow_none)
self.transport = transport
self._session = None
self.last_login_method = None
self.last_login_params = None
self.API_version = API_VERSION_1_1
def xenapi_request(self, methodname, params):
if methodname.startswith('login'):
self._login(methodname, params)
return None
elif methodname == 'logout' or methodname == 'session.logout':
self._logout()
return None
else:
retry_count = 0
while retry_count < 3:
full_params = (self._session,) + params
result = _parse_result(getattr(self, methodname)(*full_params))
if result is _RECONNECT_AND_RETRY:
retry_count += 1
if self.last_login_method:
self._login(self.last_login_method,
self.last_login_params)
else:
raise xmlrpclib.Fault(401, 'You must log in')
else:
return result
raise xmlrpclib.Fault(
500, 'Tried 3 times to get a valid session, but failed')
def _login(self, method, params):
result = _parse_result(getattr(self, 'session.%s' % method)(*params))
if result is _RECONNECT_AND_RETRY:
raise xmlrpclib.Fault(
500, 'Received SESSION_INVALID when logging in')
self._session = result
self.last_login_method = method
self.last_login_params = params
self.API_version = self._get_api_version()
def _logout(self):
try:
if self.last_login_method.startswith("slave_local"):
return _parse_result(self.session.local_logout(self._session))
else:
return _parse_result(self.session.logout(self._session))
finally:
self._session = None
self.last_login_method = None
self.last_login_params = None
self.API_version = API_VERSION_1_1
def _get_api_version(self):
pool = self.xenapi.pool.get_all()[0]
host = self.xenapi.pool.get_master(pool)
major = self.xenapi.host.get_API_version_major(host)
minor = self.xenapi.host.get_API_version_minor(host)
return "%s.%s"%(major,minor)
def __getattr__(self, name):
if name == 'handle':
return self._session
elif name == 'xenapi':
return _Dispatcher(self.API_version, self.xenapi_request, None)
elif name.startswith('login') or name.startswith('slave_local'):
return lambda *params: self._login(name, params)
else:
return xmlrpclib.ServerProxy.__getattr__(self, name)
def xapi_local():
return Session("http://_var_xapi_xapi/", transport=UDSTransport())
def _parse_result(result):
if not isinstance(result, dict) or 'Status' not in result:
raise xmlrpclib.Fault(500, 'Missing Status in response from server: ' + repr(result))
if result['Status'] == 'Success':
if 'Value' in result:
return result['Value']
else:
raise xmlrpclib.Fault(500,
'Missing Value in response from server')
else:
if 'ErrorDescription' in result:
if result['ErrorDescription'][0] == 'SESSION_INVALID':
return _RECONNECT_AND_RETRY
else:
raise Failure(result['ErrorDescription'])
else:
raise xmlrpclib.Fault(
500, 'Missing ErrorDescription in response from server')
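# Illustrative behaviour of _parse_result (follows from the branches above):
#
#   _parse_result({"Status": "Success", "Value": "OpaqueRef:0"})
#       -> "OpaqueRef:0"
#   _parse_result({"Status": "Failure",
#                  "ErrorDescription": ["SESSION_INVALID"]})
#       -> _RECONNECT_AND_RETRY  (the Session re-logs-in and retries)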
# Based upon _Method from xmlrpclib.
class _Dispatcher:
def __init__(self, API_version, send, name):
self.__API_version = API_version
self.__send = send
self.__name = name
def __repr__(self):
if self.__name:
return '<XenAPI._Dispatcher for %s>' % self.__name
else:
return '<XenAPI._Dispatcher>'
def __getattr__(self, name):
if self.__name is None:
return _Dispatcher(self.__API_version, self.__send, name)
else:
return _Dispatcher(self.__API_version, self.__send, "%s.%s" % (self.__name, name))
def __call__(self, *args):
return self.__send(self.__name, args)
| mit |
kawamon/hue | desktop/core/ext-py/urllib3-1.25.8/test/with_dummyserver/test_socketlevel.py | 2 | 58975 | # TODO: Break this module up into pieces. Maybe group by functionality tested
# rather than the socket level-ness of it.
from urllib3 import HTTPConnectionPool, HTTPSConnectionPool
from urllib3.poolmanager import proxy_from_url
from urllib3.exceptions import (
MaxRetryError,
ProxyError,
ReadTimeoutError,
SSLError,
ProtocolError,
)
from urllib3.response import httplib
from urllib3.util.ssl_ import HAS_SNI
from urllib3.util import ssl_
from urllib3.util.timeout import Timeout
from urllib3.util.retry import Retry
from urllib3._collections import HTTPHeaderDict
from dummyserver.testcase import SocketDummyServerTestCase, consume_socket
from dummyserver.server import (
DEFAULT_CERTS,
DEFAULT_CA,
COMBINED_CERT_AND_KEY,
PASSWORD_KEYFILE,
get_unreachable_address,
)
from .. import onlyPy3, LogRecorder
try:
from mimetools import Message as MimeToolMessage
except ImportError:
class MimeToolMessage(object):
pass
from collections import OrderedDict
from threading import Event
import select
import socket
import ssl
import mock
import pytest
from test import (
fails_on_travis_gce,
requires_ssl_context_keyfile_password,
SHORT_TIMEOUT,
LONG_TIMEOUT,
notPyPy2,
)
# Retry failed tests
pytestmark = pytest.mark.flaky
class TestCookies(SocketDummyServerTestCase):
def test_multi_setcookie(self):
def multicookie_response_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Set-Cookie: foo=1\r\n"
b"Set-Cookie: bar=1\r\n"
b"\r\n"
)
sock.close()
self._start_server(multicookie_response_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
r = pool.request("GET", "/", retries=0)
assert r.headers == {"set-cookie": "foo=1, bar=1"}
assert r.headers.getlist("set-cookie") == ["foo=1", "bar=1"]
class TestSNI(SocketDummyServerTestCase):
@pytest.mark.skipif(not HAS_SNI, reason="SNI-support not available")
def test_hostname_in_first_request_packet(self):
done_receiving = Event()
self.buf = b""
def socket_handler(listener):
sock = listener.accept()[0]
self.buf = sock.recv(65536) # We only accept one packet
done_receiving.set() # let the test know it can proceed
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
try:
pool.request("GET", "/", retries=0)
except MaxRetryError: # We are violating the protocol
pass
done_receiving.wait()
assert (
self.host.encode("ascii") in self.buf
), "missing hostname in SSL handshake"
class TestClientCerts(SocketDummyServerTestCase):
"""
Tests for client certificate support.
"""
def _wrap_in_ssl(self, sock):
"""
Given a single socket, wraps it in TLS.
"""
return ssl.wrap_socket(
sock,
ssl_version=ssl.PROTOCOL_SSLv23,
cert_reqs=ssl.CERT_REQUIRED,
ca_certs=DEFAULT_CA,
certfile=DEFAULT_CERTS["certfile"],
keyfile=DEFAULT_CERTS["keyfile"],
server_side=True,
)
def test_client_certs_two_files(self):
"""
Having a client cert in a separate file to its associated key works
properly.
"""
done_receiving = Event()
client_certs = []
def socket_handler(listener):
sock = listener.accept()[0]
sock = self._wrap_in_ssl(sock)
client_certs.append(sock.getpeercert())
data = b""
while not data.endswith(b"\r\n\r\n"):
data += sock.recv(8192)
sock.sendall(
b"HTTP/1.1 200 OK\r\n"
b"Server: testsocket\r\n"
b"Connection: close\r\n"
b"Content-Length: 6\r\n"
b"\r\n"
b"Valid!"
)
done_receiving.wait(5)
sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(
self.host,
self.port,
cert_file=DEFAULT_CERTS["certfile"],
key_file=DEFAULT_CERTS["keyfile"],
cert_reqs="REQUIRED",
ca_certs=DEFAULT_CA,
) as pool:
pool.request("GET", "/", retries=0)
done_receiving.set()
assert len(client_certs) == 1
def test_client_certs_one_file(self):
"""
Having a client cert and its associated private key in just one file
works properly.
"""
done_receiving = Event()
client_certs = []
def socket_handler(listener):
sock = listener.accept()[0]
sock = self._wrap_in_ssl(sock)
client_certs.append(sock.getpeercert())
data = b""
while not data.endswith(b"\r\n\r\n"):
data += sock.recv(8192)
sock.sendall(
b"HTTP/1.1 200 OK\r\n"
b"Server: testsocket\r\n"
b"Connection: close\r\n"
b"Content-Length: 6\r\n"
b"\r\n"
b"Valid!"
)
done_receiving.wait(5)
sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(
self.host,
self.port,
cert_file=COMBINED_CERT_AND_KEY,
cert_reqs="REQUIRED",
ca_certs=DEFAULT_CA,
) as pool:
pool.request("GET", "/", retries=0)
done_receiving.set()
assert len(client_certs) == 1
def test_missing_client_certs_raises_error(self):
"""
Having client certs not be present causes an error.
"""
done_receiving = Event()
def socket_handler(listener):
sock = listener.accept()[0]
try:
self._wrap_in_ssl(sock)
except ssl.SSLError:
pass
done_receiving.wait(5)
sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(
self.host, self.port, cert_reqs="REQUIRED", ca_certs=DEFAULT_CA
) as pool:
with pytest.raises(MaxRetryError):
pool.request("GET", "/", retries=0)
done_receiving.set()
done_receiving.set()
@requires_ssl_context_keyfile_password
def test_client_cert_with_string_password(self):
self.run_client_cert_with_password_test(u"letmein")
@requires_ssl_context_keyfile_password
def test_client_cert_with_bytes_password(self):
self.run_client_cert_with_password_test(b"letmein")
def run_client_cert_with_password_test(self, password):
"""
Tests client certificate password functionality
"""
done_receiving = Event()
client_certs = []
def socket_handler(listener):
sock = listener.accept()[0]
sock = self._wrap_in_ssl(sock)
client_certs.append(sock.getpeercert())
data = b""
while not data.endswith(b"\r\n\r\n"):
data += sock.recv(8192)
sock.sendall(
b"HTTP/1.1 200 OK\r\n"
b"Server: testsocket\r\n"
b"Connection: close\r\n"
b"Content-Length: 6\r\n"
b"\r\n"
b"Valid!"
)
done_receiving.wait(5)
sock.close()
self._start_server(socket_handler)
ssl_context = ssl_.SSLContext(ssl_.PROTOCOL_SSLv23)
ssl_context.load_cert_chain(
certfile=DEFAULT_CERTS["certfile"],
keyfile=PASSWORD_KEYFILE,
password=password,
)
with HTTPSConnectionPool(
self.host,
self.port,
ssl_context=ssl_context,
cert_reqs="REQUIRED",
ca_certs=DEFAULT_CA,
) as pool:
pool.request("GET", "/", retries=0)
done_receiving.set()
assert len(client_certs) == 1
@requires_ssl_context_keyfile_password
def test_load_keyfile_with_invalid_password(self):
context = ssl_.SSLContext(ssl_.PROTOCOL_SSLv23)
# Different error is raised depending on context.
if ssl_.IS_PYOPENSSL:
from OpenSSL.SSL import Error
expected_error = Error
else:
expected_error = ssl.SSLError
with pytest.raises(expected_error):
context.load_cert_chain(
certfile=DEFAULT_CERTS["certfile"],
keyfile=PASSWORD_KEYFILE,
password=b"letmei",
)
class TestSocketClosing(SocketDummyServerTestCase):
def test_recovery_when_server_closes_connection(self):
# Does the pool work seamlessly if an open connection in the
# connection pool gets hung up on by the server, then reaches
# the front of the queue again?
done_closing = Event()
def socket_handler(listener):
for i in 0, 1:
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
body = "Response %d" % i
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(body), body)
).encode("utf-8")
)
sock.close() # simulate a server timing out, closing socket
done_closing.set() # let the test know it can proceed
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
response = pool.request("GET", "/", retries=0)
assert response.status == 200
assert response.data == b"Response 0"
done_closing.wait() # wait until the socket in our pool gets closed
response = pool.request("GET", "/", retries=0)
assert response.status == 200
assert response.data == b"Response 1"
def test_connection_refused(self):
# Does the pool retry if there is no listener on the port?
host, port = get_unreachable_address()
with HTTPConnectionPool(host, port, maxsize=3, block=True) as http:
with pytest.raises(MaxRetryError):
http.request("GET", "/", retries=0, release_conn=False)
assert http.pool.qsize() == http.pool.maxsize
def test_connection_read_timeout(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
while not sock.recv(65536).endswith(b"\r\n\r\n"):
pass
timed_out.wait()
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(
self.host,
self.port,
timeout=SHORT_TIMEOUT,
retries=False,
maxsize=3,
block=True,
) as http:
try:
with pytest.raises(ReadTimeoutError):
http.request("GET", "/", release_conn=False)
finally:
timed_out.set()
assert http.pool.qsize() == http.pool.maxsize
def test_read_timeout_dont_retry_method_not_in_whitelist(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
sock.recv(65536)
timed_out.wait()
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(
self.host, self.port, timeout=SHORT_TIMEOUT, retries=True
) as pool:
try:
with pytest.raises(ReadTimeoutError):
pool.request("POST", "/")
finally:
timed_out.set()
def test_https_connection_read_timeout(self):
""" Handshake timeouts should fail with a Timeout"""
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
while not sock.recv(65536):
pass
timed_out.wait()
sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(
self.host, self.port, timeout=SHORT_TIMEOUT, retries=False
) as pool:
try:
with pytest.raises(ReadTimeoutError):
pool.request("GET", "/")
finally:
timed_out.set()
def test_timeout_errors_cause_retries(self):
def socket_handler(listener):
sock_timeout = listener.accept()[0]
# Wait for a second request before closing the first socket.
sock = listener.accept()[0]
sock_timeout.close()
# Second request.
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
# Now respond immediately.
body = "Response 2"
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(body), body)
).encode("utf-8")
)
sock.close()
# In situations where the main thread throws an exception, the server
# thread can hang on an accept() call. This ensures everything times
# out within 1 second. This should be long enough for any socket
# operations in the test suite to complete
default_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(1)
try:
self._start_server(socket_handler)
t = Timeout(connect=SHORT_TIMEOUT, read=LONG_TIMEOUT)
with HTTPConnectionPool(self.host, self.port, timeout=t) as pool:
response = pool.request("GET", "/", retries=1)
assert response.status == 200
assert response.data == b"Response 2"
finally:
socket.setdefaulttimeout(default_timeout)
def test_delayed_body_read_timeout(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
body = "Hi"
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n" % len(body)
).encode("utf-8")
)
timed_out.wait()
sock.send(body.encode("utf-8"))
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
response = pool.urlopen(
"GET",
"/",
retries=0,
preload_content=False,
timeout=Timeout(connect=1, read=LONG_TIMEOUT),
)
try:
with pytest.raises(ReadTimeoutError):
response.read()
finally:
timed_out.set()
def test_delayed_body_read_timeout_with_preload(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
body = "Hi"
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n" % len(body)
).encode("utf-8")
)
timed_out.wait(5)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
try:
with pytest.raises(ReadTimeoutError):
timeout = Timeout(connect=LONG_TIMEOUT, read=SHORT_TIMEOUT)
pool.urlopen("GET", "/", retries=False, timeout=timeout)
finally:
timed_out.set()
def test_incomplete_response(self):
body = "Response"
partial_body = body[:2]
def socket_handler(listener):
sock = listener.accept()[0]
# Consume request
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
# Send partial response and close socket.
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(body), partial_body)
).encode("utf-8")
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
response = pool.request("GET", "/", retries=0, preload_content=False)
with pytest.raises(ProtocolError):
response.read()
def test_retry_weird_http_version(self):
""" Retry class should handle httplib.BadStatusLine errors properly """
def socket_handler(listener):
sock = listener.accept()[0]
# First request.
# Pause before responding so the first request times out.
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
# send unknown http protocol
body = "bad http 0.5 response"
sock.send(
(
"HTTP/0.5 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(body), body)
).encode("utf-8")
)
sock.close()
# Second request.
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
# Now respond immediately.
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"foo" % (len("foo"))
).encode("utf-8")
)
sock.close() # Close the socket.
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
retry = Retry(read=1)
response = pool.request("GET", "/", retries=retry)
assert response.status == 200
assert response.data == b"foo"
def test_connection_cleanup_on_read_timeout(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
body = "Hi"
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n" % len(body)
).encode("utf-8")
)
timed_out.wait()
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
poolsize = pool.pool.qsize()
timeout = Timeout(connect=LONG_TIMEOUT, read=SHORT_TIMEOUT)
response = pool.urlopen(
"GET", "/", retries=0, preload_content=False, timeout=timeout
)
try:
with pytest.raises(ReadTimeoutError):
response.read()
assert poolsize == pool.pool.qsize()
finally:
timed_out.set()
def test_connection_cleanup_on_protocol_error_during_read(self):
body = "Response"
partial_body = body[:2]
def socket_handler(listener):
sock = listener.accept()[0]
# Consume request
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
# Send partial response and close socket.
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(body), partial_body)
).encode("utf-8")
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
poolsize = pool.pool.qsize()
response = pool.request("GET", "/", retries=0, preload_content=False)
with pytest.raises(ProtocolError):
response.read()
assert poolsize == pool.pool.qsize()
def test_connection_closed_on_read_timeout_preload_false(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
# Consume request
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65535)
# Send partial chunked response and then hang.
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Transfer-Encoding: chunked\r\n"
"\r\n"
"8\r\n"
"12345678\r\n"
).encode("utf-8")
)
timed_out.wait(5)
# Expect a new request, but keep hold of the old socket to avoid
# leaking it. Because we don't want to hang this thread, we
# actually use select.select to confirm that a new request is
# coming in: this lets us time the thread out.
rlist, _, _ = select.select([listener], [], [], 1)
assert rlist
new_sock = listener.accept()[0]
# Consume request
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = new_sock.recv(65535)
# Send complete chunked response.
new_sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Transfer-Encoding: chunked\r\n"
"\r\n"
"8\r\n"
"12345678\r\n"
"0\r\n\r\n"
).encode("utf-8")
)
new_sock.close()
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
# First request should fail.
response = pool.urlopen(
"GET", "/", retries=0, preload_content=False, timeout=LONG_TIMEOUT
)
try:
with pytest.raises(ReadTimeoutError):
response.read()
finally:
timed_out.set()
# Second should succeed.
response = pool.urlopen(
"GET", "/", retries=0, preload_content=False, timeout=LONG_TIMEOUT
)
assert len(response.read()) == 8
def test_closing_response_actually_closes_connection(self):
done_closing = Event()
complete = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf = sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: 0\r\n"
"\r\n"
).encode("utf-8")
)
# Wait for the socket to close.
done_closing.wait(timeout=LONG_TIMEOUT)
# Look for the empty string to show that the connection got closed.
# Don't get stuck in a timeout.
sock.settimeout(LONG_TIMEOUT)
new_data = sock.recv(65536)
assert not new_data
sock.close()
complete.set()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
response = pool.request("GET", "/", retries=0, preload_content=False)
assert response.status == 200
response.close()
done_closing.set() # wait until the socket in our pool gets closed
successful = complete.wait(timeout=LONG_TIMEOUT)
assert successful, "Timed out waiting for connection close"
def test_release_conn_param_is_respected_after_timeout_retry(self):
"""For successful ```urlopen(release_conn=False)```,
the connection isn't released, even after a retry.
This test allows a retry: one request fails, the next request succeeds.
This is a regression test for issue #651 [1], where the connection
would be released if the initial request failed, even if a retry
succeeded.
[1] <https://github.com/urllib3/urllib3/issues/651>
"""
def socket_handler(listener):
sock = listener.accept()[0]
consume_socket(sock)
# Close the connection, without sending any response (not even the
# HTTP status line). This will trigger a `Timeout` on the client,
# inside `urlopen()`.
sock.close()
# Expect a new request. Because we don't want to hang this thread,
# we actually use select.select to confirm that a new request is
# coming in: this lets us time the thread out.
rlist, _, _ = select.select([listener], [], [], 5)
assert rlist
sock = listener.accept()[0]
consume_socket(sock)
# Send complete chunked response.
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Transfer-Encoding: chunked\r\n"
"\r\n"
"8\r\n"
"12345678\r\n"
"0\r\n\r\n"
).encode("utf-8")
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port, maxsize=1) as pool:
# First request should fail, but the timeout and `retries=1` should
# save it.
response = pool.urlopen(
"GET",
"/",
retries=1,
release_conn=False,
preload_content=False,
timeout=Timeout(connect=LONG_TIMEOUT, read=SHORT_TIMEOUT),
)
# The connection should still be on the response object, and none
# should be in the pool. We opened two though.
assert pool.num_connections == 2
assert pool.pool.qsize() == 0
assert response.connection is not None
# Consume the data. This should put the connection back.
response.read()
assert pool.pool.qsize() == 1
assert response.connection is None
class TestProxyManager(SocketDummyServerTestCase):
def test_simple(self):
def echo_socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(buf), buf.decode("utf-8"))
).encode("utf-8")
)
sock.close()
self._start_server(echo_socket_handler)
base_url = "http://%s:%d" % (self.host, self.port)
with proxy_from_url(base_url) as proxy:
r = proxy.request("GET", "http://google.com/")
assert r.status == 200
# FIXME: The order of the headers is not predictable right now. We
# should fix that someday (maybe when we migrate to
# OrderedDict/MultiDict).
assert sorted(r.data.split(b"\r\n")) == sorted(
[
b"GET http://google.com/ HTTP/1.1",
b"Host: google.com",
b"Accept-Encoding: identity",
b"Accept: */*",
b"",
b"",
]
)
def test_headers(self):
def echo_socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(buf), buf.decode("utf-8"))
).encode("utf-8")
)
sock.close()
self._start_server(echo_socket_handler)
base_url = "http://%s:%d" % (self.host, self.port)
# Define some proxy headers.
proxy_headers = HTTPHeaderDict({"For The Proxy": "YEAH!"})
with proxy_from_url(base_url, proxy_headers=proxy_headers) as proxy:
conn = proxy.connection_from_url("http://www.google.com/")
r = conn.urlopen("GET", "http://www.google.com/", assert_same_host=False)
assert r.status == 200
# FIXME: The order of the headers is not predictable right now. We
# should fix that someday (maybe when we migrate to
# OrderedDict/MultiDict).
assert b"For The Proxy: YEAH!\r\n" in r.data
def test_retries(self):
close_event = Event()
def echo_socket_handler(listener):
sock = listener.accept()[0]
# First request, which should fail
sock.close()
# Second request
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: %d\r\n"
"\r\n"
"%s" % (len(buf), buf.decode("utf-8"))
).encode("utf-8")
)
sock.close()
close_event.set()
self._start_server(echo_socket_handler)
base_url = "http://%s:%d" % (self.host, self.port)
with proxy_from_url(base_url) as proxy:
conn = proxy.connection_from_url("http://www.google.com")
r = conn.urlopen(
"GET", "http://www.google.com", assert_same_host=False, retries=1
)
assert r.status == 200
close_event.wait(timeout=LONG_TIMEOUT)
with pytest.raises(ProxyError):
conn.urlopen(
"GET",
"http://www.google.com",
assert_same_host=False,
retries=False,
)
def test_connect_reconn(self):
def proxy_ssl_one(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
s = buf.decode("utf-8")
if not s.startswith("CONNECT "):
sock.send(
(
"HTTP/1.1 405 Method not allowed\r\nAllow: CONNECT\r\n\r\n"
).encode("utf-8")
)
sock.close()
return
if not s.startswith("CONNECT %s:443" % (self.host,)):
sock.send(("HTTP/1.1 403 Forbidden\r\n\r\n").encode("utf-8"))
sock.close()
return
sock.send(("HTTP/1.1 200 Connection Established\r\n\r\n").encode("utf-8"))
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
ca_certs=DEFAULT_CA,
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
ssl_sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: 2\r\n"
"Connection: close\r\n"
"\r\n"
"Hi"
).encode("utf-8")
)
ssl_sock.close()
def echo_socket_handler(listener):
proxy_ssl_one(listener)
proxy_ssl_one(listener)
self._start_server(echo_socket_handler)
base_url = "http://%s:%d" % (self.host, self.port)
with proxy_from_url(base_url, ca_certs=DEFAULT_CA) as proxy:
url = "https://{0}".format(self.host)
conn = proxy.connection_from_url(url)
r = conn.urlopen("GET", url, retries=0)
assert r.status == 200
r = conn.urlopen("GET", url, retries=0)
assert r.status == 200
def test_connect_ipv6_addr(self):
ipv6_addr = "2001:4998:c:a06::2:4008"
def echo_socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
s = buf.decode("utf-8")
if s.startswith("CONNECT [%s]:443" % (ipv6_addr,)):
sock.send(b"HTTP/1.1 200 Connection Established\r\n\r\n")
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
ssl_sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Type: text/plain\r\n"
b"Content-Length: 2\r\n"
b"Connection: close\r\n"
b"\r\n"
b"Hi"
)
ssl_sock.close()
else:
sock.close()
self._start_server(echo_socket_handler)
base_url = "http://%s:%d" % (self.host, self.port)
with proxy_from_url(base_url, cert_reqs="NONE") as proxy:
url = "https://[{0}]".format(ipv6_addr)
conn = proxy.connection_from_url(url)
try:
r = conn.urlopen("GET", url, retries=0)
assert r.status == 200
except MaxRetryError:
self.fail("Invalid IPv6 format in HTTP CONNECT request")
class TestSSL(SocketDummyServerTestCase):
def test_ssl_failure_midway_through_conn(self):
def socket_handler(listener):
sock = listener.accept()[0]
sock2 = sock.dup()
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
ca_certs=DEFAULT_CA,
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
# Deliberately send from the non-SSL socket.
sock2.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: 2\r\n"
"\r\n"
"Hi"
).encode("utf-8")
)
sock2.close()
ssl_sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(self.host, self.port) as pool:
with pytest.raises(MaxRetryError) as cm:
pool.request("GET", "/", retries=0)
assert isinstance(cm.value.reason, SSLError)
def test_ssl_read_timeout(self):
timed_out = Event()
def socket_handler(listener):
sock = listener.accept()[0]
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
# Send incomplete message (note Content-Length)
ssl_sock.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: 10\r\n"
"\r\n"
"Hi-"
).encode("utf-8")
)
timed_out.wait()
sock.close()
ssl_sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(self.host, self.port, ca_certs=DEFAULT_CA) as pool:
response = pool.urlopen(
"GET", "/", retries=0, preload_content=False, timeout=LONG_TIMEOUT
)
try:
with pytest.raises(ReadTimeoutError):
response.read()
finally:
timed_out.set()
def test_ssl_failed_fingerprint_verification(self):
def socket_handler(listener):
for i in range(2):
sock = listener.accept()[0]
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
ca_certs=DEFAULT_CA,
)
ssl_sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Type: text/plain\r\n"
b"Content-Length: 5\r\n\r\n"
b"Hello"
)
ssl_sock.close()
sock.close()
self._start_server(socket_handler)
# GitHub's fingerprint. Valid, but not matching.
fingerprint = "A0:C4:A7:46:00:ED:A7:2D:C0:BE:CB:9A:8C:B6:07:CA:58:EE:74:5E"
def request():
pool = HTTPSConnectionPool(
self.host, self.port, assert_fingerprint=fingerprint
)
try:
timeout = Timeout(connect=LONG_TIMEOUT, read=SHORT_TIMEOUT)
response = pool.urlopen(
"GET", "/", preload_content=False, retries=0, timeout=timeout
)
response.read()
finally:
pool.close()
with pytest.raises(MaxRetryError) as cm:
request()
assert isinstance(cm.value.reason, SSLError)
# Should not hang, see https://github.com/urllib3/urllib3/issues/529
with pytest.raises(MaxRetryError):
request()
def test_retry_ssl_error(self):
def socket_handler(listener):
# first request, trigger an SSLError
sock = listener.accept()[0]
sock2 = sock.dup()
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
# Deliberately send from the non-SSL socket to trigger an SSLError
sock2.send(
(
"HTTP/1.1 200 OK\r\n"
"Content-Type: text/plain\r\n"
"Content-Length: 4\r\n"
"\r\n"
"Fail"
).encode("utf-8")
)
sock2.close()
ssl_sock.close()
# retried request
sock = listener.accept()[0]
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
ssl_sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Type: text/plain\r\n"
b"Content-Length: 7\r\n\r\n"
b"Success"
)
ssl_sock.close()
self._start_server(socket_handler)
with HTTPSConnectionPool(self.host, self.port, ca_certs=DEFAULT_CA) as pool:
response = pool.urlopen("GET", "/", retries=1)
assert response.data == b"Success"
def test_ssl_load_default_certs_when_empty(self):
def socket_handler(listener):
sock = listener.accept()[0]
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
ca_certs=DEFAULT_CA,
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
ssl_sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Type: text/plain\r\n"
b"Content-Length: 5\r\n\r\n"
b"Hello"
)
ssl_sock.close()
sock.close()
context = mock.create_autospec(ssl_.SSLContext)
context.load_default_certs = mock.Mock()
context.options = 0
with mock.patch("urllib3.util.ssl_.SSLContext", lambda *_, **__: context):
self._start_server(socket_handler)
with HTTPSConnectionPool(self.host, self.port) as pool:
with pytest.raises(MaxRetryError):
pool.request("GET", "/", timeout=SHORT_TIMEOUT)
context.load_default_certs.assert_called_with()
@notPyPy2
def test_ssl_dont_load_default_certs_when_given(self):
def socket_handler(listener):
sock = listener.accept()[0]
ssl_sock = ssl.wrap_socket(
sock,
server_side=True,
keyfile=DEFAULT_CERTS["keyfile"],
certfile=DEFAULT_CERTS["certfile"],
ca_certs=DEFAULT_CA,
)
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += ssl_sock.recv(65536)
ssl_sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Type: text/plain\r\n"
b"Content-Length: 5\r\n\r\n"
b"Hello"
)
ssl_sock.close()
sock.close()
context = mock.create_autospec(ssl_.SSLContext)
context.load_default_certs = mock.Mock()
context.options = 0
with mock.patch("urllib3.util.ssl_.SSLContext", lambda *_, **__: context):
for kwargs in [
{"ca_certs": "/a"},
{"ca_cert_dir": "/a"},
{"ca_certs": "a", "ca_cert_dir": "a"},
{"ssl_context": context},
]:
self._start_server(socket_handler)
with HTTPSConnectionPool(self.host, self.port, **kwargs) as pool:
with pytest.raises(MaxRetryError):
pool.request("GET", "/", timeout=SHORT_TIMEOUT)
context.load_default_certs.assert_not_called()
class TestErrorWrapping(SocketDummyServerTestCase):
def test_bad_statusline(self):
self.start_response_handler(
b"HTTP/1.1 Omg What Is This?\r\n" b"Content-Length: 0\r\n" b"\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
with pytest.raises(ProtocolError):
pool.request("GET", "/")
def test_unknown_protocol(self):
self.start_response_handler(
b"HTTP/1000 200 OK\r\n" b"Content-Length: 0\r\n" b"\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
with pytest.raises(ProtocolError):
pool.request("GET", "/")
class TestHeaders(SocketDummyServerTestCase):
@onlyPy3
def test_httplib_headers_case_insensitive(self):
self.start_response_handler(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 0\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
HEADERS = {"Content-Length": "0", "Content-type": "text/plain"}
r = pool.request("GET", "/")
assert HEADERS == dict(r.headers.items()) # to preserve case sensitivity
def test_headers_are_sent_with_the_original_case(self):
headers = {"foo": "bar", "bAz": "quux"}
parsed_headers = {}
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
headers_list = [header for header in buf.split(b"\r\n")[1:] if header]
for header in headers_list:
(key, value) = header.split(b": ")
parsed_headers[key.decode("ascii")] = value.decode("ascii")
sock.send(
("HTTP/1.1 204 No Content\r\nContent-Length: 0\r\n\r\n").encode("utf-8")
)
sock.close()
self._start_server(socket_handler)
expected_headers = {
"Accept-Encoding": "identity",
"Host": "{0}:{1}".format(self.host, self.port),
}
expected_headers.update(headers)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
pool.request("GET", "/", headers=HTTPHeaderDict(headers))
assert expected_headers == parsed_headers
def test_request_headers_are_sent_in_the_original_order(self):
# NOTE: Probability this test gives a false negative is 1/(K!)
K = 16
# NOTE: Provide headers in non-sorted order (i.e. reversed)
# so that if the internal implementation tries to sort them,
# a change will be detected.
expected_request_headers = [
(u"X-Header-%d" % i, str(i)) for i in reversed(range(K))
]
actual_request_headers = []
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
headers_list = [header for header in buf.split(b"\r\n")[1:] if header]
for header in headers_list:
(key, value) = header.split(b": ")
if not key.decode("ascii").startswith(u"X-Header-"):
continue
actual_request_headers.append(
(key.decode("ascii"), value.decode("ascii"))
)
sock.send(
(u"HTTP/1.1 204 No Content\r\nContent-Length: 0\r\n\r\n").encode(
"ascii"
)
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
pool.request("GET", "/", headers=OrderedDict(expected_request_headers))
assert expected_request_headers == actual_request_headers
@fails_on_travis_gce
def test_request_host_header_ignores_fqdn_dot(self):
received_headers = []
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
for header in buf.split(b"\r\n")[1:]:
if header:
received_headers.append(header)
sock.send(
(u"HTTP/1.1 204 No Content\r\nContent-Length: 0\r\n\r\n").encode(
"ascii"
)
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host + ".", self.port, retries=False) as pool:
pool.request("GET", "/")
self.assert_header_received(
received_headers, "Host", "%s:%s" % (self.host, self.port)
)
def test_response_headers_are_returned_in_the_original_order(self):
# NOTE: Probability this test gives a false negative is 1/(K!)
K = 16
# NOTE: Provide headers in non-sorted order (i.e. reversed)
# so that if the internal implementation tries to sort them,
# a change will be detected.
expected_response_headers = [
("X-Header-%d" % i, str(i)) for i in reversed(range(K))
]
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
b"HTTP/1.1 200 OK\r\n"
+ b"\r\n".join(
[
(k.encode("utf8") + b": " + v.encode("utf8"))
for (k, v) in expected_response_headers
]
)
+ b"\r\n"
)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port) as pool:
r = pool.request("GET", "/", retries=0)
actual_response_headers = [
(k, v) for (k, v) in r.headers.items() if k.startswith("X-Header-")
]
assert expected_response_headers == actual_response_headers
@pytest.mark.skipif(
issubclass(httplib.HTTPMessage, MimeToolMessage),
reason="Header parsing errors not available",
)
class TestBrokenHeaders(SocketDummyServerTestCase):
def _test_broken_header_parsing(self, headers, unparsed_data_check=None):
self.start_response_handler(
(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 0\r\n"
b"Content-type: text/plain\r\n"
)
+ b"\r\n".join(headers)
+ b"\r\n\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
with LogRecorder() as logs:
pool.request("GET", "/")
for record in logs:
if (
"Failed to parse headers" in record.msg
and pool._absolute_url("/") == record.args[0]
):
if (
unparsed_data_check is None
or unparsed_data_check in record.getMessage()
):
return
self.fail("Missing log about unparsed headers")
def test_header_without_name(self):
self._test_broken_header_parsing([b": Value", b"Another: Header"])
def test_header_without_name_or_value(self):
self._test_broken_header_parsing([b":", b"Another: Header"])
def test_header_without_colon_or_value(self):
self._test_broken_header_parsing(
[b"Broken Header", b"Another: Header"], "Broken Header"
)
class TestHeaderParsingContentType(SocketDummyServerTestCase):
def _test_okay_header_parsing(self, header):
self.start_response_handler(
(b"HTTP/1.1 200 OK\r\n" b"Content-Length: 0\r\n") + header + b"\r\n\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
with LogRecorder() as logs:
pool.request("GET", "/")
for record in logs:
assert "Failed to parse headers" not in record.msg
def test_header_text_plain(self):
self._test_okay_header_parsing(b"Content-type: text/plain")
def test_header_message_rfc822(self):
self._test_okay_header_parsing(b"Content-type: message/rfc822")
class TestHEAD(SocketDummyServerTestCase):
def test_chunked_head_response_does_not_hang(self):
self.start_response_handler(
b"HTTP/1.1 200 OK\r\n"
b"Transfer-Encoding: chunked\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
r = pool.request("HEAD", "/", timeout=LONG_TIMEOUT, preload_content=False)
# stream will use the read_chunked method here.
assert [] == list(r.stream())
def test_empty_head_response_does_not_hang(self):
self.start_response_handler(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 256\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
r = pool.request("HEAD", "/", timeout=LONG_TIMEOUT, preload_content=False)
# stream will use the read method here.
assert [] == list(r.stream())
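# Editor's sketch (hypothetical client code): the pattern both HEAD tests
# above exercise -- stream a response without preloading it; for a HEAD
# response the generator must yield nothing instead of hanging on the
# (absent) body.
# with HTTPConnectionPool(host, port) as pool:
#     r = pool.request("HEAD", "/", preload_content=False)
#     for chunk in r.stream():
#         process(chunk)  # never reached for a HEAD response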
class TestStream(SocketDummyServerTestCase):
def test_stream_none_unchunked_response_does_not_hang(self):
done_event = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 12\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
b"hello, world"
)
done_event.wait(5)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port, retries=False) as pool:
r = pool.request("GET", "/", timeout=LONG_TIMEOUT, preload_content=False)
# Stream should read to the end.
assert [b"hello, world"] == list(r.stream(None))
done_event.set()
class TestBadContentLength(SocketDummyServerTestCase):
def test_enforce_content_length_get(self):
done_event = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 22\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
b"hello, world"
)
done_event.wait(LONG_TIMEOUT)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port, maxsize=1) as conn:
# Test stream read when content length less than headers claim
get_response = conn.request(
"GET", url="/", preload_content=False, enforce_content_length=True
)
data = get_response.stream(100)
# Read "good" data before we try to read again.
# This won't trigger till generator is exhausted.
next(data)
try:
next(data)
assert False
except ProtocolError as e:
assert "12 bytes read, 10 more expected" in str(e)
done_event.set()
def test_enforce_content_length_no_body(self):
done_event = Event()
def socket_handler(listener):
sock = listener.accept()[0]
buf = b""
while not buf.endswith(b"\r\n\r\n"):
buf += sock.recv(65536)
sock.send(
b"HTTP/1.1 200 OK\r\n"
b"Content-Length: 22\r\n"
b"Content-type: text/plain\r\n"
b"\r\n"
)
done_event.wait(1)
sock.close()
self._start_server(socket_handler)
with HTTPConnectionPool(self.host, self.port, maxsize=1) as conn:
# Test stream on 0 length body
head_response = conn.request(
"HEAD", url="/", preload_content=False, enforce_content_length=True
)
data = [chunk for chunk in head_response.stream(1)]
assert len(data) == 0
done_event.set()
class TestRetryPoolSizeDrainFail(SocketDummyServerTestCase):
def test_pool_size_retry_drain_fail(self):
def socket_handler(listener):
for _ in range(2):
sock = listener.accept()[0]
while not sock.recv(65536).endswith(b"\r\n\r\n"):
pass
# send a response with an invalid content length -- this causes
# a ProtocolError to raise when trying to drain the connection
sock.send(
b"HTTP/1.1 404 NOT FOUND\r\n"
b"Content-Length: 1000\r\n"
b"Content-Type: text/plain\r\n"
b"\r\n"
)
sock.close()
self._start_server(socket_handler)
retries = Retry(total=1, raise_on_status=False, status_forcelist=[404])
with HTTPConnectionPool(
self.host, self.port, maxsize=10, retries=retries, block=True
) as pool:
pool.urlopen("GET", "/not_found", preload_content=False)
assert pool.num_connections == 1
| apache-2.0 |
ageron/tensorflow | tensorflow/contrib/util/__init__.py | 18 | 1477 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities for dealing with Tensors.
@@constant_value
@@make_tensor_proto
@@make_ndarray
@@ops_used_by_graph_def
@@stripped_op_list_for_graph
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=unused-import
from tensorflow.python.framework.meta_graph import ops_used_by_graph_def
from tensorflow.python.framework.meta_graph import stripped_op_list_for_graph
from tensorflow.python.framework.tensor_util import constant_value
from tensorflow.python.framework.tensor_util import make_tensor_proto
from tensorflow.python.framework.tensor_util import MakeNdarray as make_ndarray
# pylint: disable=unused-import
from tensorflow.python.util.all_util import remove_undocumented
remove_undocumented(__name__)
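# Editor's sketch (hypothetical usage, assuming TF 1.x with numpy installed):
# round-trip a numpy array through the re-exported TensorProto helpers.
# import numpy as np
# from tensorflow.contrib.util import make_tensor_proto, make_ndarray
# proto = make_tensor_proto(np.arange(6.0).reshape(2, 3))
# assert (make_ndarray(proto) == np.arange(6.0).reshape(2, 3)).all()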
| apache-2.0 |
nvoron23/arangodb | 3rdParty/V8-4.3.61/third_party/python_26/Lib/site-packages/pythonwin/pywin/framework/toolmenu.py | 17 | 7743 | # toolmenu.py
import win32ui
import win32con
import win32api
import app
import sys
import string
tools = {}
idPos = 100
# The default items, used if no Tools menu exists in the INI file.
defaultToolMenuItems = [
('Browser', 'win32ui.GetApp().OnViewBrowse(0,0)'),
('Browse PythonPath', 'from pywin.tools import browseProjects;browseProjects.Browse()'),
('Edit Python Path', 'from pywin.tools import regedit;regedit.EditRegistry()'),
('COM Makepy utility', 'from win32com.client import makepy;makepy.main()'),
('COM Browser', 'from win32com.client import combrowse;combrowse.main()'),
('Trace Collector Debugging tool', 'from pywin.tools import TraceCollector;TraceCollector.MakeOutputWindow()'),
]
def LoadToolMenuItems():
# Load from the registry.
items = []
lookNo = 1
while 1:
menu = win32ui.GetProfileVal("Tools Menu\\%s" % lookNo, "", "")
if menu=="":
break
cmd = win32ui.GetProfileVal("Tools Menu\\%s" % lookNo, "Command", "")
items.append((menu, cmd))
lookNo = lookNo + 1
if len(items)==0:
items = defaultToolMenuItems
return items
def WriteToolMenuItems( items ):
# Items is a list of (menu, command)
# Delete the entire registry tree.
try:
mainKey = win32ui.GetAppRegistryKey()
toolKey = win32api.RegOpenKey(mainKey, "Tools Menu")
except win32ui.error:
toolKey = None
if toolKey is not None:
while 1:
try:
subkey = win32api.RegEnumKey(toolKey, 0)
except win32api.error:
break
win32api.RegDeleteKey(toolKey, subkey)
# Keys are now removed - write the new ones.
# But first check if we have the defaults - and if so, don't write anything!
if items==defaultToolMenuItems:
return
itemNo = 1
for menu, cmd in items:
win32ui.WriteProfileVal("Tools Menu\\%s" % itemNo, "", menu)
win32ui.WriteProfileVal("Tools Menu\\%s" % itemNo, "Command", cmd)
itemNo = itemNo + 1
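# Editor's sketch (hypothetical): extending the menu through the same
# registry layout -- each "Tools Menu\\<n>" key stores the caption as its
# default value and the Python source under "Command".
# items = LoadToolMenuItems()
# items.append(('Say Hello', 'print "hello from the Tools menu"'))
# WriteToolMenuItems(items)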
def SetToolsMenu(menu, menuPos = None):
global tools
global idPos
# todo - check the menu does not already exist.
# Create the new menu
toolsMenu = win32ui.CreatePopupMenu()
# Load from the ini file.
items = LoadToolMenuItems()
for menuString, cmd in items:
tools[idPos] = (menuString, cmd, menuString)
toolsMenu.AppendMenu(win32con.MF_ENABLED|win32con.MF_STRING,idPos, menuString)
win32ui.GetMainFrame().HookCommand(HandleToolCommand, idPos)
idPos=idPos+1
# Find the correct spot to insert the new tools menu.
if menuPos is None:
menuPos = menu.GetMenuItemCount()-2
if menuPos<0: menuPos=0
menu.InsertMenu(menuPos, win32con.MF_BYPOSITION|win32con.MF_ENABLED|win32con.MF_STRING|win32con.MF_POPUP, toolsMenu.GetHandle(), '&Tools')
def HandleToolCommand(cmd, code):
import traceback
import re
global tools
(menuString, pyCmd, desc) = tools[cmd]
win32ui.SetStatusText("Executing tool %s" % desc, 1)
pyCmd = re.sub('\\\\n','\n', pyCmd)
win32ui.DoWaitCursor(1)
oldFlag = None
try:
oldFlag = sys.stdout.template.writeQueueing
sys.stdout.template.writeQueueing = 0
except (NameError, AttributeError):
pass
try:
exec "%s\n" % pyCmd
worked=1
except SystemExit:
# The program raised a SystemExit - ignore it.
worked = 1
except:
print "Failed to execute command:\n%s" % pyCmd
traceback.print_exc()
worked=0
if oldFlag is not None:
sys.stdout.template.writeQueueing = oldFlag
win32ui.DoWaitCursor(0)
if worked:
text = "Completed successfully."
else:
text = "Error executing %s." % desc
win32ui.SetStatusText(text, 1)
# The property page for maintaining the items on the Tools menu.
import commctrl
from pywin.mfc import dialog
class ToolMenuPropPage(dialog.PropertyPage):
def __init__(self):
self.bImChangingEditControls = 0 # Am I programmatically changing the controls?
dialog.PropertyPage.__init__(self, win32ui.IDD_PP_TOOLMENU)
def OnInitDialog(self):
self.editMenuCommand = self.GetDlgItem(win32ui.IDC_EDIT2)
self.butNew = self.GetDlgItem(win32ui.IDC_BUTTON3)
# Now hook the change notification messages for the edit controls.
self.HookCommand(self.OnCommandEditControls, win32ui.IDC_EDIT1)
self.HookCommand(self.OnCommandEditControls, win32ui.IDC_EDIT2)
self.HookNotify(self.OnNotifyListControl, commctrl.LVN_ITEMCHANGED)
self.HookNotify(self.OnNotifyListControlEndLabelEdit, commctrl.LVN_ENDLABELEDIT)
# Hook the button clicks.
self.HookCommand(self.OnButtonNew, win32ui.IDC_BUTTON3) # New Item
self.HookCommand(self.OnButtonDelete, win32ui.IDC_BUTTON4) # Delete item
self.HookCommand(self.OnButtonMove, win32ui.IDC_BUTTON1) # Move up
self.HookCommand(self.OnButtonMove, win32ui.IDC_BUTTON2) # Move down
# Setup the columns in the list control
lc = self.GetDlgItem(win32ui.IDC_LIST1)
rect = lc.GetWindowRect()
cx = rect[2] - rect[0]
colSize = cx/2 - win32api.GetSystemMetrics(win32con.SM_CXBORDER) - 1
item = commctrl.LVCFMT_LEFT, colSize, "Menu Text"
lc.InsertColumn(0, item)
item = commctrl.LVCFMT_LEFT, colSize, "Python Command"
lc.InsertColumn(1, item)
# Insert the existing tools menu
itemNo = 0
for desc, cmd in LoadToolMenuItems():
lc.InsertItem(itemNo, desc)
lc.SetItemText(itemNo, 1, cmd)
itemNo = itemNo + 1
self.listControl = lc
return dialog.PropertyPage.OnInitDialog(self)
def OnOK(self):
# Write the menu back to the registry.
items = []
itemLook = 0
while 1:
try:
items.append( ( self.listControl.GetItemText(itemLook, 0), self.listControl.GetItemText(itemLook, 1) ) )
except win32ui.error:
# no more items!
break
itemLook = itemLook + 1
WriteToolMenuItems( items )
return self._obj_.OnOK()
def OnCommandEditControls(self, id, cmd):
# print "OnEditControls", id, cmd
if cmd==win32con.EN_CHANGE and not self.bImChangingEditControls:
itemNo = self.listControl.GetNextItem(-1, commctrl.LVNI_SELECTED)
newText = self.editMenuCommand.GetWindowText()
self.listControl.SetItemText(itemNo, 1, newText)
return 0
def OnNotifyListControlEndLabelEdit(self, id, cmd):
newText = self.listControl.GetEditControl().GetWindowText()
itemNo = self.listControl.GetNextItem(-1, commctrl.LVNI_SELECTED)
self.listControl.SetItemText(itemNo, 0, newText)
def OnNotifyListControl(self, id, cmd):
# print id, cmd
try:
itemNo = self.listControl.GetNextItem(-1, commctrl.LVNI_SELECTED)
except win32ui.error: # No selection!
return
self.bImChangingEditControls = 1
try:
item = self.listControl.GetItem(itemNo, 1)
self.editMenuCommand.SetWindowText(item[4])
finally:
self.bImChangingEditControls = 0
return 0 # we have handled this!
def OnButtonNew(self, id, cmd):
if cmd==win32con.BN_CLICKED:
newIndex = self.listControl.GetItemCount()
self.listControl.InsertItem(newIndex, "Click to edit the text")
self.listControl.EnsureVisible(newIndex, 0)
def OnButtonMove(self, id, cmd):
if cmd==win32con.BN_CLICKED:
try:
itemNo = self.listControl.GetNextItem(-1, commctrl.LVNI_SELECTED)
except win32ui.error:
return
menu = self.listControl.GetItemText(itemNo, 0)
cmd = self.listControl.GetItemText(itemNo, 1)
if id == win32ui.IDC_BUTTON1:
# Move up
if itemNo > 0:
self.listControl.DeleteItem(itemNo)
# reinsert it.
self.listControl.InsertItem(itemNo-1, menu)
self.listControl.SetItemText(itemNo-1, 1, cmd)
else:
# Move down.
if itemNo < self.listControl.GetItemCount()-1:
self.listControl.DeleteItem(itemNo)
# reinsert it.
self.listControl.InsertItem(itemNo+1, menu)
self.listControl.SetItemText(itemNo+1, 1, cmd)
def OnButtonDelete(self, id, cmd):
if cmd==win32con.BN_CLICKED:
try:
itemNo = self.listControl.GetNextItem(-1, commctrl.LVNI_SELECTED)
except win32ui.error: # No selection!
return
self.listControl.DeleteItem(itemNo)
| apache-2.0 |
sp1rs/vyked | vyked/packet.py | 2 | 5504 | from collections import defaultdict
from uuid import uuid4
class _Packet:
_pid = 0
@classmethod
def _next_pid(cls):
return str(uuid4())
@classmethod
def ack(cls, request_id):
return {'pid': cls._next_pid(), 'type': 'ack', 'request_id': request_id}
@classmethod
def pong(cls, node_id):
return cls._get_ping_pong(node_id, 'pong')
@classmethod
def ping(cls, node_id):
return cls._get_ping_pong(node_id, 'ping')
@classmethod
def _get_ping_pong(cls, node_id, packet_type):
return {'pid': cls._next_pid(), 'type': packet_type, 'node_id': node_id}
class ControlPacket(_Packet):
@classmethod
def registration(cls, ip: str, port: int, node_id, service: str, version: str, vendors, service_type: str):
v = [{'service': vendor.name, 'version': vendor.version} for vendor in vendors]
params = {'service': service,
'version': version,
'host': ip,
'port': port,
'node_id': node_id,
'vendors': v,
'type': service_type}
packet = {'pid': cls._next_pid(), 'type': 'register', 'params': params}
return packet
@classmethod
def get_instances(cls, service, version):
params = {'service': service, 'version': version}
packet = {'pid': cls._next_pid(),
'type': 'get_instances',
'service': service,
'version': version,
'params': params,
'request_id': str(uuid4())}
return packet
@classmethod
def get_subscribers(cls, service, version, endpoint):
params = {'service': service, 'version': version, 'endpoint': endpoint}
packet = {'pid': cls._next_pid(),
'type': 'get_subscribers',
'params': params,
'request_id': str(uuid4())}
return packet
@classmethod
def send_instances(cls, service, version, instances):
instances = [{'host': host, 'port': port, 'node': node, 'type': service_type} for host, port, node, service_type
in instances]
instance_packet_params = {'service': service, 'version': version, 'instances': instances}
return {'pid': cls._next_pid(), 'type': 'instances', 'params': instance_packet_params}
@classmethod
# TODO: fix parsing on client side
def deregister(cls, service, version, node_id):
params = {'node_id': node_id, 'service': service, 'version': version}
packet = {'pid': cls._next_pid(), 'type': 'deregister', 'params': params}
return packet
@classmethod
def activated(cls, instances):
vendors_packet = []
for k, v in instances.items():
vendor_packet = defaultdict(list)
vendor_packet['name'] = k[0]
vendor_packet['version'] = k[1]
for host, port, node, service_type in v:
vendor_node_packet = {
'host': host,
'port': port,
'node_id': node,
'type': service_type
}
vendor_packet['addresses'].append(vendor_node_packet)
vendors_packet.append(vendor_packet)
params = {
'vendors': vendors_packet
}
packet = {'pid': cls._next_pid(),
'type': 'registered',
'params': params}
return packet
@classmethod
def xsubscribe(cls, service, version, host, port, node_id, endpoints):
params = {'service': service, 'version': version, 'host': host, 'port': port, 'node_id': node_id}
events = [{'service': service, 'version': version, 'endpoint': endpoint, 'strategy': strategy} for
service, version, endpoint, strategy in endpoints]
params['events'] = events
packet = {'pid': cls._next_pid(),
'type': 'xsubscribe',
'params': params}
return packet
@classmethod
def subscribers(cls, service, version, endpoint, request_id, subscribers):
params = {'service': service, 'version': version, 'endpoint': endpoint}
subscribers = [{'service': service, 'version': version, 'host': host, 'port': port, 'node_id': node_id,
'strategy': strategy} for service, version, host, port, node_id, strategy in subscribers]
params['subscribers'] = subscribers
packet = {'pid': cls._next_pid(),
'request_id': request_id,
'type': 'subscribers',
'params': params}
return packet
class MessagePacket(_Packet):
@classmethod
def request(cls, name, version, app_name, packet_type, endpoint, params, entity):
return {'pid': cls._next_pid(),
'app': app_name,
'service': name,
'version': version,
'entity': entity,
'endpoint': endpoint,
'type': packet_type,
'payload': params}
@classmethod
def publish(cls, publish_id, service, version, endpoint, payload):
return {'pid': cls._next_pid(),
'type': 'publish',
'service': service,
'version': version,
'endpoint': endpoint,
'payload': payload,
'publish_id': publish_id}
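# Editor's sketch (hypothetical caller): building a registration packet.
# The vendors argument only needs objects exposing .name and .version.
# from collections import namedtuple
# Vendor = namedtuple('Vendor', 'name version')
# packet = ControlPacket.registration(
#     '127.0.0.1', 4501, 'node-1', 'accounts', '1.0.0',
#     [Vendor('identity', '1.0.0')], 'tcp')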
| mit |
MalloyPower/parsing-python | front-end/testsuite-python-lib/Python-2.5/Lib/plat-freebsd6/IN.py | 8 | 11516 | # Generated by h2py from /usr/include/netinet/in.h
# Included from sys/cdefs.h
def __P(protos): return protos
def __STRING(x): return #x
def __XSTRING(x): return __STRING(x)
def __P(protos): return ()
def __STRING(x): return "x"
def __aligned(x): return __attribute__((__aligned__(x)))
def __section(x): return __attribute__((__section__(x)))
def __aligned(x): return __attribute__((__aligned__(x)))
def __section(x): return __attribute__((__section__(x)))
def __nonnull(x): return __attribute__((__nonnull__(x)))
def __predict_true(exp): return __builtin_expect((exp), 1)
def __predict_false(exp): return __builtin_expect((exp), 0)
def __predict_true(exp): return (exp)
def __predict_false(exp): return (exp)
def __FBSDID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
def __RCSID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
def __RCSID_SOURCE(s): return __IDSTRING(__CONCAT(__rcsid_source_,__LINE__),s)
def __SCCSID(s): return __IDSTRING(__CONCAT(__sccsid_,__LINE__),s)
def __COPYRIGHT(s): return __IDSTRING(__CONCAT(__copyright_,__LINE__),s)
_POSIX_C_SOURCE = 199009
_POSIX_C_SOURCE = 199209
__XSI_VISIBLE = 600
_POSIX_C_SOURCE = 200112
__XSI_VISIBLE = 500
_POSIX_C_SOURCE = 199506
_POSIX_C_SOURCE = 198808
__POSIX_VISIBLE = 200112
__ISO_C_VISIBLE = 1999
__POSIX_VISIBLE = 199506
__ISO_C_VISIBLE = 1990
__POSIX_VISIBLE = 199309
__ISO_C_VISIBLE = 1990
__POSIX_VISIBLE = 199209
__ISO_C_VISIBLE = 1990
__POSIX_VISIBLE = 199009
__ISO_C_VISIBLE = 1990
__POSIX_VISIBLE = 198808
__ISO_C_VISIBLE = 0
__POSIX_VISIBLE = 0
__XSI_VISIBLE = 0
__BSD_VISIBLE = 0
__ISO_C_VISIBLE = 1990
__POSIX_VISIBLE = 0
__XSI_VISIBLE = 0
__BSD_VISIBLE = 0
__ISO_C_VISIBLE = 1999
__POSIX_VISIBLE = 200112
__XSI_VISIBLE = 600
__BSD_VISIBLE = 1
__ISO_C_VISIBLE = 1999
# Included from sys/_types.h
# Included from machine/_types.h
# Included from machine/endian.h
_QUAD_HIGHWORD = 1
_QUAD_LOWWORD = 0
_LITTLE_ENDIAN = 1234
_BIG_ENDIAN = 4321
_PDP_ENDIAN = 3412
_BYTE_ORDER = _LITTLE_ENDIAN
LITTLE_ENDIAN = _LITTLE_ENDIAN
BIG_ENDIAN = _BIG_ENDIAN
PDP_ENDIAN = _PDP_ENDIAN
BYTE_ORDER = _BYTE_ORDER
__INTEL_COMPILER_with_FreeBSD_endian = 1
__INTEL_COMPILER_with_FreeBSD_endian = 1
def __word_swap_int_var(x): return \
def __word_swap_int_const(x): return \
def __word_swap_int(x): return __word_swap_int_var(x)
def __byte_swap_int_var(x): return \
def __byte_swap_int_var(x): return \
def __byte_swap_int_const(x): return \
def __byte_swap_int(x): return __byte_swap_int_var(x)
def __byte_swap_word_var(x): return \
def __byte_swap_word_const(x): return \
def __byte_swap_word(x): return __byte_swap_word_var(x)
def __htonl(x): return __bswap32(x)
def __htons(x): return __bswap16(x)
def __ntohl(x): return __bswap32(x)
def __ntohs(x): return __bswap16(x)
IPPROTO_IP = 0
IPPROTO_ICMP = 1
IPPROTO_TCP = 6
IPPROTO_UDP = 17
def htonl(x): return __htonl(x)
def htons(x): return __htons(x)
def ntohl(x): return __ntohl(x)
def ntohs(x): return __ntohs(x)
IPPROTO_RAW = 255
INET_ADDRSTRLEN = 16
IPPROTO_HOPOPTS = 0
IPPROTO_IGMP = 2
IPPROTO_GGP = 3
IPPROTO_IPV4 = 4
IPPROTO_IPIP = IPPROTO_IPV4
IPPROTO_ST = 7
IPPROTO_EGP = 8
IPPROTO_PIGP = 9
IPPROTO_RCCMON = 10
IPPROTO_NVPII = 11
IPPROTO_PUP = 12
IPPROTO_ARGUS = 13
IPPROTO_EMCON = 14
IPPROTO_XNET = 15
IPPROTO_CHAOS = 16
IPPROTO_MUX = 18
IPPROTO_MEAS = 19
IPPROTO_HMP = 20
IPPROTO_PRM = 21
IPPROTO_IDP = 22
IPPROTO_TRUNK1 = 23
IPPROTO_TRUNK2 = 24
IPPROTO_LEAF1 = 25
IPPROTO_LEAF2 = 26
IPPROTO_RDP = 27
IPPROTO_IRTP = 28
IPPROTO_TP = 29
IPPROTO_BLT = 30
IPPROTO_NSP = 31
IPPROTO_INP = 32
IPPROTO_SEP = 33
IPPROTO_3PC = 34
IPPROTO_IDPR = 35
IPPROTO_XTP = 36
IPPROTO_DDP = 37
IPPROTO_CMTP = 38
IPPROTO_TPXX = 39
IPPROTO_IL = 40
IPPROTO_IPV6 = 41
IPPROTO_SDRP = 42
IPPROTO_ROUTING = 43
IPPROTO_FRAGMENT = 44
IPPROTO_IDRP = 45
IPPROTO_RSVP = 46
IPPROTO_GRE = 47
IPPROTO_MHRP = 48
IPPROTO_BHA = 49
IPPROTO_ESP = 50
IPPROTO_AH = 51
IPPROTO_INLSP = 52
IPPROTO_SWIPE = 53
IPPROTO_NHRP = 54
IPPROTO_MOBILE = 55
IPPROTO_TLSP = 56
IPPROTO_SKIP = 57
IPPROTO_ICMPV6 = 58
IPPROTO_NONE = 59
IPPROTO_DSTOPTS = 60
IPPROTO_AHIP = 61
IPPROTO_CFTP = 62
IPPROTO_HELLO = 63
IPPROTO_SATEXPAK = 64
IPPROTO_KRYPTOLAN = 65
IPPROTO_RVD = 66
IPPROTO_IPPC = 67
IPPROTO_ADFS = 68
IPPROTO_SATMON = 69
IPPROTO_VISA = 70
IPPROTO_IPCV = 71
IPPROTO_CPNX = 72
IPPROTO_CPHB = 73
IPPROTO_WSN = 74
IPPROTO_PVP = 75
IPPROTO_BRSATMON = 76
IPPROTO_ND = 77
IPPROTO_WBMON = 78
IPPROTO_WBEXPAK = 79
IPPROTO_EON = 80
IPPROTO_VMTP = 81
IPPROTO_SVMTP = 82
IPPROTO_VINES = 83
IPPROTO_TTP = 84
IPPROTO_IGP = 85
IPPROTO_DGP = 86
IPPROTO_TCF = 87
IPPROTO_IGRP = 88
IPPROTO_OSPFIGP = 89
IPPROTO_SRPC = 90
IPPROTO_LARP = 91
IPPROTO_MTP = 92
IPPROTO_AX25 = 93
IPPROTO_IPEIP = 94
IPPROTO_MICP = 95
IPPROTO_SCCSP = 96
IPPROTO_ETHERIP = 97
IPPROTO_ENCAP = 98
IPPROTO_APES = 99
IPPROTO_GMTP = 100
IPPROTO_IPCOMP = 108
IPPROTO_PIM = 103
IPPROTO_PGM = 113
IPPROTO_PFSYNC = 240
IPPROTO_OLD_DIVERT = 254
IPPROTO_MAX = 256
IPPROTO_DONE = 257
IPPROTO_DIVERT = 258
IPPORT_RESERVED = 1024
IPPORT_HIFIRSTAUTO = 49152
IPPORT_HILASTAUTO = 65535
IPPORT_RESERVEDSTART = 600
IPPORT_MAX = 65535
def IN_CLASSA(i): return (((u_int32_t)(i) & (-2147483648)) == 0)
IN_CLASSA_NET = (-16777216)
IN_CLASSA_NSHIFT = 24
IN_CLASSA_HOST = 0x00ffffff
IN_CLASSA_MAX = 128
def IN_CLASSB(i): return (((u_int32_t)(i) & (-1073741824)) == (-2147483648))
IN_CLASSB_NET = (-65536)
IN_CLASSB_NSHIFT = 16
IN_CLASSB_HOST = 0x0000ffff
IN_CLASSB_MAX = 65536
def IN_CLASSC(i): return (((u_int32_t)(i) & (-536870912)) == (-1073741824))
IN_CLASSC_NET = (-256)
IN_CLASSC_NSHIFT = 8
IN_CLASSC_HOST = 0x000000ff
def IN_CLASSD(i): return (((u_int32_t)(i) & (-268435456)) == (-536870912))
IN_CLASSD_NET = (-268435456)
IN_CLASSD_NSHIFT = 28
IN_CLASSD_HOST = 0x0fffffff
def IN_MULTICAST(i): return IN_CLASSD(i)
def IN_EXPERIMENTAL(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
def IN_BADCLASS(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
INADDR_NONE = (-1)
IN_LOOPBACKNET = 127
IP_OPTIONS = 1
IP_HDRINCL = 2
IP_TOS = 3
IP_TTL = 4
IP_RECVOPTS = 5
IP_RECVRETOPTS = 6
IP_RECVDSTADDR = 7
IP_SENDSRCADDR = IP_RECVDSTADDR
IP_RETOPTS = 8
IP_MULTICAST_IF = 9
IP_MULTICAST_TTL = 10
IP_MULTICAST_LOOP = 11
IP_ADD_MEMBERSHIP = 12
IP_DROP_MEMBERSHIP = 13
IP_MULTICAST_VIF = 14
IP_RSVP_ON = 15
IP_RSVP_OFF = 16
IP_RSVP_VIF_ON = 17
IP_RSVP_VIF_OFF = 18
IP_PORTRANGE = 19
IP_RECVIF = 20
IP_IPSEC_POLICY = 21
IP_FAITH = 22
IP_ONESBCAST = 23
IP_FW_TABLE_ADD = 40
IP_FW_TABLE_DEL = 41
IP_FW_TABLE_FLUSH = 42
IP_FW_TABLE_GETSIZE = 43
IP_FW_TABLE_LIST = 44
IP_FW_ADD = 50
IP_FW_DEL = 51
IP_FW_FLUSH = 52
IP_FW_ZERO = 53
IP_FW_GET = 54
IP_FW_RESETLOG = 55
IP_DUMMYNET_CONFIGURE = 60
IP_DUMMYNET_DEL = 61
IP_DUMMYNET_FLUSH = 62
IP_DUMMYNET_GET = 64
IP_RECVTTL = 65
IP_DEFAULT_MULTICAST_TTL = 1
IP_DEFAULT_MULTICAST_LOOP = 1
IP_MAX_MEMBERSHIPS = 20
IP_PORTRANGE_DEFAULT = 0
IP_PORTRANGE_HIGH = 1
IP_PORTRANGE_LOW = 2
IPPROTO_MAXID = (IPPROTO_AH + 1)
IPCTL_FORWARDING = 1
IPCTL_SENDREDIRECTS = 2
IPCTL_DEFTTL = 3
IPCTL_DEFMTU = 4
IPCTL_RTEXPIRE = 5
IPCTL_RTMINEXPIRE = 6
IPCTL_RTMAXCACHE = 7
IPCTL_SOURCEROUTE = 8
IPCTL_DIRECTEDBROADCAST = 9
IPCTL_INTRQMAXLEN = 10
IPCTL_INTRQDROPS = 11
IPCTL_STATS = 12
IPCTL_ACCEPTSOURCEROUTE = 13
IPCTL_FASTFORWARDING = 14
IPCTL_KEEPFAITH = 15
IPCTL_GIF_TTL = 16
IPCTL_MAXID = 17
def in_nullhost(x): return ((x).s_addr == INADDR_ANY)
# Included from netinet6/in6.h
__KAME_VERSION = "20010528/FreeBSD"
IPV6PORT_RESERVED = 1024
IPV6PORT_ANONMIN = 49152
IPV6PORT_ANONMAX = 65535
IPV6PORT_RESERVEDMIN = 600
IPV6PORT_RESERVEDMAX = (IPV6PORT_RESERVED-1)
INET6_ADDRSTRLEN = 46
IPV6_ADDR_INT32_ONE = 1
IPV6_ADDR_INT32_TWO = 2
IPV6_ADDR_INT32_MNL = (-16711680)
IPV6_ADDR_INT32_MLL = (-16646144)
IPV6_ADDR_INT32_SMP = 0x0000ffff
IPV6_ADDR_INT16_ULL = 0xfe80
IPV6_ADDR_INT16_USL = 0xfec0
IPV6_ADDR_INT16_MLL = 0xff02
IPV6_ADDR_INT32_ONE = 0x01000000
IPV6_ADDR_INT32_TWO = 0x02000000
IPV6_ADDR_INT32_MNL = 0x000001ff
IPV6_ADDR_INT32_MLL = 0x000002ff
IPV6_ADDR_INT32_SMP = (-65536)
IPV6_ADDR_INT16_ULL = 0x80fe
IPV6_ADDR_INT16_USL = 0xc0fe
IPV6_ADDR_INT16_MLL = 0x02ff
def IN6_IS_ADDR_UNSPECIFIED(a): return \
def IN6_IS_ADDR_LOOPBACK(a): return \
def IN6_IS_ADDR_V4COMPAT(a): return \
def IN6_IS_ADDR_V4MAPPED(a): return \
IPV6_ADDR_SCOPE_NODELOCAL = 0x01
IPV6_ADDR_SCOPE_INTFACELOCAL = 0x01
IPV6_ADDR_SCOPE_LINKLOCAL = 0x02
IPV6_ADDR_SCOPE_SITELOCAL = 0x05
IPV6_ADDR_SCOPE_ORGLOCAL = 0x08
IPV6_ADDR_SCOPE_GLOBAL = 0x0e
__IPV6_ADDR_SCOPE_NODELOCAL = 0x01
__IPV6_ADDR_SCOPE_INTFACELOCAL = 0x01
__IPV6_ADDR_SCOPE_LINKLOCAL = 0x02
__IPV6_ADDR_SCOPE_SITELOCAL = 0x05
__IPV6_ADDR_SCOPE_ORGLOCAL = 0x08
__IPV6_ADDR_SCOPE_GLOBAL = 0x0e
def IN6_IS_ADDR_LINKLOCAL(a): return \
def IN6_IS_ADDR_SITELOCAL(a): return \
def IN6_IS_ADDR_MC_NODELOCAL(a): return \
def IN6_IS_ADDR_MC_INTFACELOCAL(a): return \
def IN6_IS_ADDR_MC_LINKLOCAL(a): return \
def IN6_IS_ADDR_MC_SITELOCAL(a): return \
def IN6_IS_ADDR_MC_ORGLOCAL(a): return \
def IN6_IS_ADDR_MC_GLOBAL(a): return \
def IN6_IS_ADDR_MC_NODELOCAL(a): return \
def IN6_IS_ADDR_MC_LINKLOCAL(a): return \
def IN6_IS_ADDR_MC_SITELOCAL(a): return \
def IN6_IS_ADDR_MC_ORGLOCAL(a): return \
def IN6_IS_ADDR_MC_GLOBAL(a): return \
def IN6_IS_SCOPE_LINKLOCAL(a): return \
def IFA6_IS_DEPRECATED(a): return \
def IFA6_IS_INVALID(a): return \
IPV6_OPTIONS = 1
IPV6_RECVOPTS = 5
IPV6_RECVRETOPTS = 6
IPV6_RECVDSTADDR = 7
IPV6_RETOPTS = 8
IPV6_SOCKOPT_RESERVED1 = 3
IPV6_UNICAST_HOPS = 4
IPV6_MULTICAST_IF = 9
IPV6_MULTICAST_HOPS = 10
IPV6_MULTICAST_LOOP = 11
IPV6_JOIN_GROUP = 12
IPV6_LEAVE_GROUP = 13
IPV6_PORTRANGE = 14
ICMP6_FILTER = 18
IPV6_2292PKTINFO = 19
IPV6_2292HOPLIMIT = 20
IPV6_2292NEXTHOP = 21
IPV6_2292HOPOPTS = 22
IPV6_2292DSTOPTS = 23
IPV6_2292RTHDR = 24
IPV6_2292PKTOPTIONS = 25
IPV6_CHECKSUM = 26
IPV6_V6ONLY = 27
IPV6_BINDV6ONLY = IPV6_V6ONLY
IPV6_IPSEC_POLICY = 28
IPV6_FAITH = 29
IPV6_FW_ADD = 30
IPV6_FW_DEL = 31
IPV6_FW_FLUSH = 32
IPV6_FW_ZERO = 33
IPV6_FW_GET = 34
IPV6_RTHDRDSTOPTS = 35
IPV6_RECVPKTINFO = 36
IPV6_RECVHOPLIMIT = 37
IPV6_RECVRTHDR = 38
IPV6_RECVHOPOPTS = 39
IPV6_RECVDSTOPTS = 40
IPV6_RECVRTHDRDSTOPTS = 41
IPV6_USE_MIN_MTU = 42
IPV6_RECVPATHMTU = 43
IPV6_PATHMTU = 44
IPV6_REACHCONF = 45
IPV6_PKTINFO = 46
IPV6_HOPLIMIT = 47
IPV6_NEXTHOP = 48
IPV6_HOPOPTS = 49
IPV6_DSTOPTS = 50
IPV6_RTHDR = 51
IPV6_PKTOPTIONS = 52
IPV6_RECVTCLASS = 57
IPV6_AUTOFLOWLABEL = 59
IPV6_TCLASS = 61
IPV6_DONTFRAG = 62
IPV6_PREFER_TEMPADDR = 63
IPV6_RTHDR_LOOSE = 0
IPV6_RTHDR_STRICT = 1
IPV6_RTHDR_TYPE_0 = 0
IPV6_DEFAULT_MULTICAST_HOPS = 1
IPV6_DEFAULT_MULTICAST_LOOP = 1
IPV6_PORTRANGE_DEFAULT = 0
IPV6_PORTRANGE_HIGH = 1
IPV6_PORTRANGE_LOW = 2
IPV6PROTO_MAXID = (IPPROTO_PIM + 1)
IPV6CTL_FORWARDING = 1
IPV6CTL_SENDREDIRECTS = 2
IPV6CTL_DEFHLIM = 3
IPV6CTL_DEFMTU = 4
IPV6CTL_FORWSRCRT = 5
IPV6CTL_STATS = 6
IPV6CTL_MRTSTATS = 7
IPV6CTL_MRTPROTO = 8
IPV6CTL_MAXFRAGPACKETS = 9
IPV6CTL_SOURCECHECK = 10
IPV6CTL_SOURCECHECK_LOGINT = 11
IPV6CTL_ACCEPT_RTADV = 12
IPV6CTL_KEEPFAITH = 13
IPV6CTL_LOG_INTERVAL = 14
IPV6CTL_HDRNESTLIMIT = 15
IPV6CTL_DAD_COUNT = 16
IPV6CTL_AUTO_FLOWLABEL = 17
IPV6CTL_DEFMCASTHLIM = 18
IPV6CTL_GIF_HLIM = 19
IPV6CTL_KAME_VERSION = 20
IPV6CTL_USE_DEPRECATED = 21
IPV6CTL_RR_PRUNE = 22
IPV6CTL_MAPPED_ADDR = 23
IPV6CTL_V6ONLY = 24
IPV6CTL_RTEXPIRE = 25
IPV6CTL_RTMINEXPIRE = 26
IPV6CTL_RTMAXCACHE = 27
IPV6CTL_USETEMPADDR = 32
IPV6CTL_TEMPPLTIME = 33
IPV6CTL_TEMPVLTIME = 34
IPV6CTL_AUTO_LINKLOCAL = 35
IPV6CTL_RIP6STATS = 36
IPV6CTL_PREFER_TEMPADDR = 37
IPV6CTL_ADDRCTLPOLICY = 38
IPV6CTL_MAXFRAGS = 41
IPV6CTL_MAXID = 42
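# Editor's note (sketch): these generated values mirror <netinet/in.h>; the
# common ones agree with the socket module on the same platform, e.g.:
# import socket, IN
# assert IN.IPPROTO_TCP == socket.IPPROTO_TCP == 6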
| mit |
aduric/crossfit | nonrel/tests/regressiontests/admin_views/customadmin.py | 52 | 1279 | """
A second, custom AdminSite -- see tests.CustomAdminSiteTests.
"""
from django.conf.urls.defaults import patterns
from django.contrib import admin
from django.http import HttpResponse
import models, forms
class Admin2(admin.AdminSite):
login_form = forms.CustomAdminAuthenticationForm
login_template = 'custom_admin/login.html'
logout_template = 'custom_admin/logout.html'
index_template = 'custom_admin/index.html'
password_change_template = 'custom_admin/password_change_form.html'
password_change_done_template = 'custom_admin/password_change_done.html'
# A custom index view.
def index(self, request, extra_context=None):
return super(Admin2, self).index(request, {'foo': '*bar*'})
def get_urls(self):
return patterns('',
(r'^my_view/$', self.admin_view(self.my_view)),
) + super(Admin2, self).get_urls()
def my_view(self, request):
return HttpResponse("Django is a magical pony!")
site = Admin2(name="admin2")
site.register(models.Article, models.ArticleAdmin)
site.register(models.Section, inlines=[models.ArticleInline])
site.register(models.Thing, models.ThingAdmin)
site.register(models.Fabric, models.FabricAdmin)
site.register(models.ChapterXtra1, models.ChapterXtra1Admin)
| bsd-3-clause |
kantel/processingpy | mpmathtest/mpmath/libmp/libmpi.py | 6 | 27622 | """
Computational functions for interval arithmetic.
"""
from .backend import xrange
from .libmpf import (
ComplexResult,
round_down, round_up, round_floor, round_ceiling, round_nearest,
prec_to_dps, repr_dps, dps_to_prec,
bitcount,
from_float,
fnan, finf, fninf, fzero, fhalf, fone, fnone,
mpf_sign, mpf_lt, mpf_le, mpf_gt, mpf_ge, mpf_eq, mpf_cmp,
mpf_min_max,
mpf_floor, from_int, to_int, to_str, from_str,
mpf_abs, mpf_neg, mpf_pos, mpf_add, mpf_sub, mpf_mul, mpf_mul_int,
mpf_div, mpf_shift, mpf_pow_int,
from_man_exp, MPZ_ONE)
from .libelefun import (
mpf_log, mpf_exp, mpf_sqrt, mpf_atan, mpf_atan2,
mpf_pi, mod_pi2, mpf_cos_sin
)
from .gammazeta import mpf_gamma, mpf_rgamma, mpf_loggamma, mpc_loggamma
def mpi_str(s, prec):
sa, sb = s
dps = prec_to_dps(prec) + 5
return "[%s, %s]" % (to_str(sa, dps), to_str(sb, dps))
#dps = prec_to_dps(prec)
#m = mpi_mid(s, prec)
#d = mpf_shift(mpi_delta(s, 20), -1)
#return "%s +/- %s" % (to_str(m, dps), to_str(d, 3))
mpi_zero = (fzero, fzero)
mpi_one = (fone, fone)
def mpi_eq(s, t):
return s == t
def mpi_ne(s, t):
return s != t
def mpi_lt(s, t):
sa, sb = s
ta, tb = t
if mpf_lt(sb, ta): return True
if mpf_ge(sa, tb): return False
return None
def mpi_le(s, t):
sa, sb = s
ta, tb = t
if mpf_le(sb, ta): return True
if mpf_gt(sa, tb): return False
return None
def mpi_gt(s, t): return mpi_lt(t, s)
def mpi_ge(s, t): return mpi_le(t, s)
def mpi_add(s, t, prec=0):
sa, sb = s
ta, tb = t
a = mpf_add(sa, ta, prec, round_floor)
b = mpf_add(sb, tb, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = finf
return a, b
def mpi_sub(s, t, prec=0):
sa, sb = s
ta, tb = t
a = mpf_sub(sa, tb, prec, round_floor)
b = mpf_sub(sb, ta, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = finf
return a, b
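# Editor's sketch (not part of the original module): the floor/ceiling
# rounding above guarantees enclosure -- the exact sum always lies inside
# the returned interval.
def _demo_mpi_add_encloses(prec=53):
    x = (from_float(0.1), from_float(0.1))  # 0.1 is not exactly representable
    y = (from_float(0.2), from_float(0.2))
    # The result is a rigorous bracket for the exact value 0.3.
    return mpi_str(mpi_add(x, y, prec), prec)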
def mpi_delta(s, prec):
sa, sb = s
return mpf_sub(sb, sa, prec, round_up)
def mpi_mid(s, prec):
sa, sb = s
return mpf_shift(mpf_add(sa, sb, prec, round_nearest), -1)
def mpi_pos(s, prec):
sa, sb = s
a = mpf_pos(sa, prec, round_floor)
b = mpf_pos(sb, prec, round_ceiling)
return a, b
def mpi_neg(s, prec=0):
sa, sb = s
a = mpf_neg(sb, prec, round_floor)
b = mpf_neg(sa, prec, round_ceiling)
return a, b
def mpi_abs(s, prec=0):
sa, sb = s
sas = mpf_sign(sa)
sbs = mpf_sign(sb)
# Both points nonnegative?
if sas >= 0:
a = mpf_pos(sa, prec, round_floor)
b = mpf_pos(sb, prec, round_ceiling)
# Upper point nonnegative?
elif sbs >= 0:
a = fzero
negsa = mpf_neg(sa)
if mpf_lt(negsa, sb):
b = mpf_pos(sb, prec, round_ceiling)
else:
b = mpf_pos(negsa, prec, round_ceiling)
# Both negative?
else:
a = mpf_neg(sb, prec, round_floor)
b = mpf_neg(sa, prec, round_ceiling)
return a, b
# TODO: optimize
def mpi_mul_mpf(s, t, prec):
return mpi_mul(s, (t, t), prec)
def mpi_div_mpf(s, t, prec):
return mpi_div(s, (t, t), prec)
def mpi_mul(s, t, prec=0):
sa, sb = s
ta, tb = t
sas = mpf_sign(sa)
sbs = mpf_sign(sb)
tas = mpf_sign(ta)
tbs = mpf_sign(tb)
if sas == sbs == 0:
# Should maybe be undefined
if ta == fninf or tb == finf:
return fninf, finf
return fzero, fzero
if tas == tbs == 0:
# Should maybe be undefined
if sa == fninf or sb == finf:
return fninf, finf
return fzero, fzero
if sas >= 0:
# positive * positive
if tas >= 0:
a = mpf_mul(sa, ta, prec, round_floor)
b = mpf_mul(sb, tb, prec, round_ceiling)
if a == fnan: a = fzero
if b == fnan: b = finf
# positive * negative
elif tbs <= 0:
a = mpf_mul(sb, ta, prec, round_floor)
b = mpf_mul(sa, tb, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = fzero
# positive * both signs
else:
a = mpf_mul(sb, ta, prec, round_floor)
b = mpf_mul(sb, tb, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = finf
elif sbs <= 0:
# negative * positive
if tas >= 0:
a = mpf_mul(sa, tb, prec, round_floor)
b = mpf_mul(sb, ta, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = fzero
# negative * negative
elif tbs <= 0:
a = mpf_mul(sb, tb, prec, round_floor)
b = mpf_mul(sa, ta, prec, round_ceiling)
if a == fnan: a = fzero
if b == fnan: b = finf
# negative * both signs
else:
a = mpf_mul(sa, tb, prec, round_floor)
b = mpf_mul(sa, ta, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = finf
else:
# General case: perform all cross-multiplications and compare
# Since the multiplications can be done exactly, we need only
# do 4 (instead of 8: two for each rounding mode)
cases = [mpf_mul(sa, ta), mpf_mul(sa, tb), mpf_mul(sb, ta), mpf_mul(sb, tb)]
if fnan in cases:
a, b = (fninf, finf)
else:
a, b = mpf_min_max(cases)
a = mpf_pos(a, prec, round_floor)
b = mpf_pos(b, prec, round_ceiling)
return a, b
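# Editor's check (sketch): for finite endpoints, the sign-case analysis in
# mpi_mul agrees with the brute-force enclosure built from the four exact
# cross products.
def _demo_mpi_mul_brute_force(s, t, prec=53):
    sa, sb = s
    ta, tb = t
    lo, hi = mpf_min_max([mpf_mul(sa, ta), mpf_mul(sa, tb),
                          mpf_mul(sb, ta), mpf_mul(sb, tb)])
    return (mpf_pos(lo, prec, round_floor), mpf_pos(hi, prec, round_ceiling))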
def mpi_square(s, prec=0):
sa, sb = s
if mpf_ge(sa, fzero):
a = mpf_mul(sa, sa, prec, round_floor)
b = mpf_mul(sb, sb, prec, round_ceiling)
elif mpf_le(sb, fzero):
a = mpf_mul(sb, sb, prec, round_floor)
b = mpf_mul(sa, sa, prec, round_ceiling)
else:
sa = mpf_neg(sa)
sa, sb = mpf_min_max([sa, sb])
a = fzero
b = mpf_mul(sb, sb, prec, round_ceiling)
return a, b
def mpi_div(s, t, prec):
sa, sb = s
ta, tb = t
sas = mpf_sign(sa)
sbs = mpf_sign(sb)
tas = mpf_sign(ta)
tbs = mpf_sign(tb)
# 0 / X
if sas == sbs == 0:
# 0 / <interval containing 0>
if (tas < 0 and tbs > 0) or (tas == 0 or tbs == 0):
return fninf, finf
return fzero, fzero
# Denominator contains both negative and positive numbers;
# this should properly be a multi-interval, but the closest
# match is the entire (extended) real line
if tas < 0 and tbs > 0:
return fninf, finf
# Assume denominator to be nonnegative
if tas < 0:
return mpi_div(mpi_neg(s), mpi_neg(t), prec)
# Division by zero
# XXX: make sure all results make sense
if tas == 0:
# Numerator contains both signs?
if sas < 0 and sbs > 0:
return fninf, finf
if tas == tbs:
return fninf, finf
# Numerator positive?
if sas >= 0:
a = mpf_div(sa, tb, prec, round_floor)
b = finf
if sbs <= 0:
a = fninf
b = mpf_div(sb, tb, prec, round_ceiling)
# Division with positive denominator
# We still have to handle nans resulting from inf/0 or inf/inf
else:
# Nonnegative numerator
if sas >= 0:
a = mpf_div(sa, tb, prec, round_floor)
b = mpf_div(sb, ta, prec, round_ceiling)
if a == fnan: a = fzero
if b == fnan: b = finf
# Nonpositive numerator
elif sbs <= 0:
a = mpf_div(sa, ta, prec, round_floor)
b = mpf_div(sb, tb, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = fzero
# Numerator contains both signs?
else:
a = mpf_div(sa, ta, prec, round_floor)
b = mpf_div(sb, ta, prec, round_ceiling)
if a == fnan: a = fninf
if b == fnan: b = finf
return a, b
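# Editor's example (sketch): a denominator straddling zero cannot be
# represented as a single interval, so mpi_div falls back to the whole
# extended real line:
# mpi_div((fone, fone), (fnone, fone), 53) == (fninf, finf)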
def mpi_pi(prec):
a = mpf_pi(prec, round_floor)
b = mpf_pi(prec, round_ceiling)
return a, b
def mpi_exp(s, prec):
sa, sb = s
# exp is monotonic
a = mpf_exp(sa, prec, round_floor)
b = mpf_exp(sb, prec, round_ceiling)
return a, b
def mpi_log(s, prec):
sa, sb = s
# log is monotonic
a = mpf_log(sa, prec, round_floor)
b = mpf_log(sb, prec, round_ceiling)
return a, b
def mpi_sqrt(s, prec):
sa, sb = s
# sqrt is monotonic
a = mpf_sqrt(sa, prec, round_floor)
b = mpf_sqrt(sb, prec, round_ceiling)
return a, b
def mpi_atan(s, prec):
sa, sb = s
a = mpf_atan(sa, prec, round_floor)
b = mpf_atan(sb, prec, round_ceiling)
return a, b
def mpi_pow_int(s, n, prec):
sa, sb = s
if n < 0:
return mpi_div((fone, fone), mpi_pow_int(s, -n, prec+20), prec)
if n == 0:
return (fone, fone)
if n == 1:
return s
if n == 2:
return mpi_square(s, prec)
# Odd -- signs are preserved
if n & 1:
a = mpf_pow_int(sa, n, prec, round_floor)
b = mpf_pow_int(sb, n, prec, round_ceiling)
# Even -- important to ensure positivity
else:
sas = mpf_sign(sa)
sbs = mpf_sign(sb)
# Nonnegative?
if sas >= 0:
a = mpf_pow_int(sa, n, prec, round_floor)
b = mpf_pow_int(sb, n, prec, round_ceiling)
# Nonpositive?
elif sbs <= 0:
a = mpf_pow_int(sb, n, prec, round_floor)
b = mpf_pow_int(sa, n, prec, round_ceiling)
# Mixed signs?
else:
a = fzero
# max(-a,b)**n
sa = mpf_neg(sa)
if mpf_ge(sa, sb):
b = mpf_pow_int(sa, n, prec, round_ceiling)
else:
b = mpf_pow_int(sb, n, prec, round_ceiling)
return a, b
def mpi_pow(s, t, prec):
ta, tb = t
if ta == tb and ta not in (finf, fninf):
if ta == from_int(to_int(ta)):
return mpi_pow_int(s, to_int(ta), prec)
if ta == fhalf:
return mpi_sqrt(s, prec)
u = mpi_log(s, prec + 20)
v = mpi_mul(u, t, prec + 20)
return mpi_exp(v, prec)
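# Editor's note (sketch): only exact integer and one-half exponents are
# special-cased above; any other exponent routes through exp(t*log(s)), so
# the base interval must be positive for a real-valued result, e.g.:
# mpi_pow(s, (fhalf, fhalf), prec) is equivalent to mpi_sqrt(s, prec)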
def MIN(x, y):
if mpf_le(x, y):
return x
return y
def MAX(x, y):
if mpf_ge(x, y):
return x
return y
def cos_sin_quadrant(x, wp):
sign, man, exp, bc = x
if x == fzero:
return fone, fzero, 0
# TODO: combine evaluation code to avoid duplicate modulo
c, s = mpf_cos_sin(x, wp)
t, n, wp_ = mod_pi2(man, exp, exp+bc, 15)
if sign:
n = -1-n
return c, s, n
def mpi_cos_sin(x, prec):
a, b = x
if a == b == fzero:
return (fone, fone), (fzero, fzero)
# Guaranteed to contain both -1 and 1
if (finf in x) or (fninf in x):
return (fnone, fone), (fnone, fone)
wp = prec + 20
ca, sa, na = cos_sin_quadrant(a, wp)
cb, sb, nb = cos_sin_quadrant(b, wp)
ca, cb = mpf_min_max([ca, cb])
sa, sb = mpf_min_max([sa, sb])
# Both functions are monotonic within one quadrant
if na == nb:
pass
# Guaranteed to contain both -1 and 1
elif nb - na >= 4:
return (fnone, fone), (fnone, fone)
else:
# cos has maximum between a and b
if na//4 != nb//4:
cb = fone
# cos has minimum
if (na-2)//4 != (nb-2)//4:
ca = fnone
# sin has maximum
if (na-1)//4 != (nb-1)//4:
sb = fone
# sin has minimum
if (na-3)//4 != (nb-3)//4:
sa = fnone
# Perturb to force interval rounding
more = from_man_exp((MPZ_ONE<<wp) + (MPZ_ONE<<10), -wp)
less = from_man_exp((MPZ_ONE<<wp) - (MPZ_ONE<<10), -wp)
def finalize(v, rounding):
if bool(v[0]) == (rounding == round_floor):
p = more
else:
p = less
v = mpf_mul(v, p, prec, rounding)
sign, man, exp, bc = v
if exp+bc >= 1:
if sign:
return fnone
return fone
return v
ca = finalize(ca, round_floor)
cb = finalize(cb, round_ceiling)
sa = finalize(sa, round_floor)
sb = finalize(sb, round_ceiling)
return (ca,cb), (sa,sb)
def mpi_cos(x, prec):
return mpi_cos_sin(x, prec)[0]
def mpi_sin(x, prec):
return mpi_cos_sin(x, prec)[1]
def mpi_tan(x, prec):
cos, sin = mpi_cos_sin(x, prec+20)
return mpi_div(sin, cos, prec)
def mpi_cot(x, prec):
cos, sin = mpi_cos_sin(x, prec+20)
return mpi_div(cos, sin, prec)
def mpi_from_str_a_b(x, y, percent, prec):
wp = prec + 20
xa = from_str(x, wp, round_floor)
xb = from_str(x, wp, round_ceiling)
#ya = from_str(y, wp, round_floor)
y = from_str(y, wp, round_ceiling)
assert mpf_ge(y, fzero)
if percent:
y = mpf_mul(MAX(mpf_abs(xa), mpf_abs(xb)), y, wp, round_ceiling)
y = mpf_div(y, from_int(100), wp, round_ceiling)
a = mpf_sub(xa, y, prec, round_floor)
b = mpf_add(xb, y, prec, round_ceiling)
return a, b
def mpi_from_str(s, prec):
"""
Parse an interval number given as a string.
Allowed forms are
"-1.23e-27"
Any single decimal floating-point literal.
"a +- b" or "a (b)"
a is the midpoint of the interval and b is the half-width
"a +- b%" or "a (b%)"
a is the midpoint of the interval and the half-width
is b percent of a (`a \times b / 100`).
"[a, b]"
The interval indicated directly.
"x[y,z]e"
x is the shared digit prefix, y and z are the differing digits, e is the exponent.
"""
e = ValueError("Improperly formed interval number '%s'" % s)
s = s.replace(" ", "")
wp = prec + 20
if "+-" in s:
x, y = s.split("+-")
return mpi_from_str_a_b(x, y, False, prec)
# case 2
elif "(" in s:
# Don't confuse with a complex number (x,y)
if s[0] == "(" or ")" not in s:
raise e
s = s.replace(")", "")
percent = False
if "%" in s:
if s[-1] != "%":
raise e
percent = True
s = s.replace("%", "")
x, y = s.split("(")
return mpi_from_str_a_b(x, y, percent, prec)
elif "," in s:
if ('[' not in s) or (']' not in s):
raise e
if s[0] == '[':
# case 3
s = s.replace("[", "")
s = s.replace("]", "")
a, b = s.split(",")
a = from_str(a, prec, round_floor)
b = from_str(b, prec, round_ceiling)
return a, b
else:
# case 4
x, y = s.split('[')
y, z = y.split(',')
if 'e' in s:
z, e = z.split(']')
else:
z, e = z.rstrip(']'), ''
a = from_str(x+y+e, prec, round_floor)
b = from_str(x+z+e, prec, round_ceiling)
return a, b
else:
a = from_str(s, prec, round_floor)
b = from_str(s, prec, round_ceiling)
return a, b
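# Editor's examples (sketch) of the accepted forms, at prec = dps_to_prec(15):
# mpi_from_str('1.5 +- 0.25', prec)   -> an enclosure of [1.25, 1.75]
# mpi_from_str('2.0 (10%)', prec)     -> an enclosure of [1.8, 2.2]
# mpi_from_str('[1, 2]', prec)        -> the interval [1, 2]
# mpi_from_str('3.14[15, 16]', prec)  -> an enclosure of [3.1415, 3.1416]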
def mpi_to_str(x, dps, use_spaces=True, brackets='[]', mode='brackets', error_dps=4, **kwargs):
"""
Convert a mpi interval to a string.
**Arguments**
*dps*
decimal places to use for printing
*use_spaces*
use spaces for more readable output, defaults to true
*brackets*
pair of strings (or two-character string) giving left and right brackets
*mode*
mode of display: 'plusminus', 'percent', 'brackets' (default) or 'diff'
*error_dps*
limit the error to *error_dps* digits (mode 'plusminus and 'percent')
Additional keyword arguments are forwarded to the mpf-to-string conversion
for the components of the output.
**Examples**
>>> from mpmath import mpi, mp
>>> mp.dps = 30
>>> x = mpi(1, 2)._mpi_
>>> mpi_to_str(x, 2, mode='plusminus')
'1.5 +- 0.5'
>>> mpi_to_str(x, 2, mode='percent')
'1.5 (33.33%)'
>>> mpi_to_str(x, 2, mode='brackets')
'[1.0, 2.0]'
>>> mpi_to_str(x, 2, mode='brackets' , brackets=('<', '>'))
'<1.0, 2.0>'
>>> x = mpi('5.2582327113062393041', '5.2582327113062749951')._mpi_
>>> mpi_to_str(x, 15, mode='diff')
'5.2582327113062[4, 7]'
>>> mpi_to_str(mpi(0)._mpi_, 2, mode='percent')
'0.0 (0.0%)'
"""
prec = dps_to_prec(dps)
wp = prec + 20
a, b = x
mid = mpi_mid(x, prec)
delta = mpi_delta(x, prec)
a_str = to_str(a, dps, **kwargs)
b_str = to_str(b, dps, **kwargs)
mid_str = to_str(mid, dps, **kwargs)
sp = ""
if use_spaces:
sp = " "
br1, br2 = brackets
if mode == 'plusminus':
delta_str = to_str(mpf_shift(delta,-1), dps, **kwargs)
s = mid_str + sp + "+-" + sp + delta_str
elif mode == 'percent':
if mid == fzero:
p = fzero
else:
# p = 100 * delta(x) / (2*mid(x))
p = mpf_mul(delta, from_int(100))
p = mpf_div(p, mpf_mul(mid, from_int(2)), wp)
s = mid_str + sp + "(" + to_str(p, error_dps) + "%)"
elif mode == 'brackets':
s = br1 + a_str + "," + sp + b_str + br2
elif mode == 'diff':
# use more digits if str(x.a) and str(x.b) are equal
if a_str == b_str:
a_str = to_str(a, dps+3, **kwargs)
b_str = to_str(b, dps+3, **kwargs)
# separate mantissa and exponent
a = a_str.split('e')
if len(a) == 1:
a.append('')
b = b_str.split('e')
if len(b) == 1:
b.append('')
if a[1] == b[1]:
if a[0] != b[0]:
for i in xrange(len(a[0]) + 1):
if a[0][i] != b[0][i]:
break
s = (a[0][:i] + br1 + a[0][i:] + ',' + sp + b[0][i:] + br2
+ 'e'*min(len(a[1]), 1) + a[1])
else: # no difference
s = a[0] + br1 + br2 + 'e'*min(len(a[1]), 1) + a[1]
else:
s = br1 + 'e'.join(a) + ',' + sp + 'e'.join(b) + br2
else:
raise ValueError("'%s' is unknown mode for printing mpi" % mode)
return s
def mpci_add(x, y, prec):
a, b = x
c, d = y
return mpi_add(a, c, prec), mpi_add(b, d, prec)
def mpci_sub(x, y, prec):
a, b = x
c, d = y
return mpi_sub(a, c, prec), mpi_sub(b, d, prec)
def mpci_neg(x, prec=0):
a, b = x
return mpi_neg(a, prec), mpi_neg(b, prec)
def mpci_pos(x, prec):
a, b = x
return mpi_pos(a, prec), mpi_pos(b, prec)
def mpci_mul(x, y, prec):
# TODO: optimize for real/imag cases
a, b = x
c, d = y
r1 = mpi_mul(a,c)
r2 = mpi_mul(b,d)
re = mpi_sub(r1,r2,prec)
i1 = mpi_mul(a,d)
i2 = mpi_mul(b,c)
im = mpi_add(i1,i2,prec)
return re, im
def mpci_div(x, y, prec):
# TODO: optimize for real/imag cases
a, b = x
c, d = y
wp = prec+20
m1 = mpi_square(c)
m2 = mpi_square(d)
m = mpi_add(m1,m2,wp)
re = mpi_add(mpi_mul(a,c), mpi_mul(b,d), wp)
im = mpi_sub(mpi_mul(b,c), mpi_mul(a,d), wp)
re = mpi_div(re, m, prec)
im = mpi_div(im, m, prec)
return re, im
def mpci_exp(x, prec):
a, b = x
wp = prec+20
r = mpi_exp(a, wp)
c, s = mpi_cos_sin(b, wp)
a = mpi_mul(r, c, prec)
b = mpi_mul(r, s, prec)
return a, b
def mpi_shift(x, n):
a, b = x
return mpf_shift(a,n), mpf_shift(b,n)
def mpi_cosh_sinh(x, prec):
# TODO: accuracy for small x
wp = prec+20
e1 = mpi_exp(x, wp)
e2 = mpi_div(mpi_one, e1, wp)
c = mpi_add(e1, e2, prec)
s = mpi_sub(e1, e2, prec)
c = mpi_shift(c, -1)
s = mpi_shift(s, -1)
return c, s
def mpci_cos(x, prec):
a, b = x
wp = prec+10
c, s = mpi_cos_sin(a, wp)
ch, sh = mpi_cosh_sinh(b, wp)
re = mpi_mul(c, ch, prec)
im = mpi_mul(s, sh, prec)
return re, mpi_neg(im)
def mpci_sin(x, prec):
a, b = x
wp = prec+10
c, s = mpi_cos_sin(a, wp)
ch, sh = mpi_cosh_sinh(b, wp)
re = mpi_mul(s, ch, prec)
im = mpi_mul(c, sh, prec)
return re, im
def mpci_abs(x, prec):
a, b = x
if a == mpi_zero:
return mpi_abs(b)
if b == mpi_zero:
return mpi_abs(a)
# Important: nonnegative
a = mpi_square(a)
b = mpi_square(b)
t = mpi_add(a, b, prec+20)
return mpi_sqrt(t, prec)
def mpi_atan2(y, x, prec):
ya, yb = y
xa, xb = x
# Constrained to the real line
if ya == yb == fzero:
if mpf_ge(xa, fzero):
return mpi_zero
return mpi_pi(prec)
# Right half-plane
if mpf_ge(xa, fzero):
if mpf_ge(ya, fzero):
a = mpf_atan2(ya, xb, prec, round_floor)
else:
a = mpf_atan2(ya, xa, prec, round_floor)
if mpf_ge(yb, fzero):
b = mpf_atan2(yb, xa, prec, round_ceiling)
else:
b = mpf_atan2(yb, xb, prec, round_ceiling)
# Upper half-plane
elif mpf_ge(ya, fzero):
b = mpf_atan2(ya, xa, prec, round_ceiling)
if mpf_le(xb, fzero):
a = mpf_atan2(yb, xb, prec, round_floor)
else:
a = mpf_atan2(ya, xb, prec, round_floor)
# Lower half-plane
elif mpf_le(yb, fzero):
a = mpf_atan2(yb, xa, prec, round_floor)
if mpf_le(xb, fzero):
b = mpf_atan2(ya, xb, prec, round_ceiling)
else:
b = mpf_atan2(yb, xb, prec, round_ceiling)
# Covering the origin
else:
b = mpf_pi(prec, round_ceiling)
a = mpf_neg(b)
return a, b
def mpci_arg(z, prec):
x, y = z
return mpi_atan2(y, x, prec)
def mpci_log(z, prec):
x, y = z
re = mpi_log(mpci_abs(z, prec+20), prec)
im = mpci_arg(z, prec)
return re, im
def mpci_pow(x, y, prec):
# TODO: recognize/speed up real cases, integer y
yre, yim = y
if yim == mpi_zero:
ya, yb = yre
if ya == yb:
sign, man, exp, bc = yb
if man and exp >= 0:
return mpci_pow_int(x, (-1)**sign * int(man<<exp), prec)
# x^0
if yb == fzero:
return mpci_pow_int(x, 0, prec)
wp = prec+20
return mpci_exp(mpci_mul(y, mpci_log(x, wp), wp), prec)
def mpci_square(x, prec):
a, b = x
# (a+bi)^2 = (a^2-b^2) + 2abi
re = mpi_sub(mpi_square(a), mpi_square(b), prec)
im = mpi_mul(a, b, prec)
im = mpi_shift(im, 1)
return re, im
def mpci_pow_int(x, n, prec):
if n < 0:
return mpci_div((mpi_one,mpi_zero), mpci_pow_int(x, -n, prec+20), prec)
if n == 0:
return mpi_one, mpi_zero
if n == 1:
return mpci_pos(x, prec)
if n == 2:
return mpci_square(x, prec)
wp = prec + 20
result = (mpi_one, mpi_zero)
while n:
if n & 1:
result = mpci_mul(result, x, wp)
n -= 1
x = mpci_square(x, wp)
n >>= 1
return mpci_pos(result, prec)
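# Usage sketch (not a doctest): with z = (mpi_one, mpi_one), i.e. the
# complex interval 1 + 1i, mpci_pow_int(z, 8, 53) returns an enclosure of
# (1+i)^8 = 16. For n > 2 the loop above is exponentiation by squaring,
# so it costs O(log n) interval multiplications instead of n - 1.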
gamma_min_a = from_float(1.46163214496)
gamma_min_b = from_float(1.46163214497)
gamma_min = (gamma_min_a, gamma_min_b)
gamma_mono_imag_a = from_float(-1.1)
gamma_mono_imag_b = from_float(1.1)
def mpi_overlap(x, y):
a, b = x
c, d = y
if mpf_lt(d, a): return False
if mpf_gt(c, b): return False
return True
# type = 0 -- gamma
# type = 1 -- factorial
# type = 2 -- 1/gamma
# type = 3 -- log-gamma
def mpi_gamma(z, prec, type=0):
a, b = z
wp = prec+20
if type == 1:
return mpi_gamma(mpi_add(z, mpi_one, wp), prec, 0)
# increasing
if mpf_gt(a, gamma_min_b):
if type == 0:
c = mpf_gamma(a, prec, round_floor)
d = mpf_gamma(b, prec, round_ceiling)
elif type == 2:
c = mpf_rgamma(b, prec, round_floor)
d = mpf_rgamma(a, prec, round_ceiling)
elif type == 3:
c = mpf_loggamma(a, prec, round_floor)
d = mpf_loggamma(b, prec, round_ceiling)
# decreasing
elif mpf_gt(a, fzero) and mpf_lt(b, gamma_min_a):
if type == 0:
c = mpf_gamma(b, prec, round_floor)
d = mpf_gamma(a, prec, round_ceiling)
elif type == 2:
c = mpf_rgamma(a, prec, round_floor)
d = mpf_rgamma(b, prec, round_ceiling)
elif type == 3:
c = mpf_loggamma(b, prec, round_floor)
d = mpf_loggamma(a, prec, round_ceiling)
else:
# TODO: reflection formula
znew = mpi_add(z, mpi_one, wp)
if type == 0: return mpi_div(mpi_gamma(znew, prec+2, 0), z, prec)
if type == 2: return mpi_mul(mpi_gamma(znew, prec+2, 2), z, prec)
if type == 3: return mpi_sub(mpi_gamma(znew, prec+2, 3), mpi_log(z, prec+2), prec)
return c, d
def mpci_gamma(z, prec, type=0):
(a1,a2), (b1,b2) = z
# Real case
if b1 == b2 == fzero and (type != 3 or mpf_gt(a1,fzero)):
return mpi_gamma(z, prec, type), mpi_zero
# Estimate precision
wp = prec+20
if type != 3:
amag = a2[2]+a2[3]
bmag = b2[2]+b2[3]
if a2 != fzero:
mag = max(amag, bmag)
else:
mag = bmag
an = abs(to_int(a2))
bn = abs(to_int(b2))
absn = max(an, bn)
gamma_size = max(0,absn*mag)
wp += bitcount(gamma_size)
    # Reduce factorial (type 1) to gamma (type 0) using z! = gamma(z+1)
if type == 1:
(a1,a2) = mpi_add((a1,a2), mpi_one, wp); z = (a1,a2), (b1,b2)
type = 0
# Avoid non-monotonic region near the negative real axis
if mpf_lt(a1, gamma_min_b):
if mpi_overlap((b1,b2), (gamma_mono_imag_a, gamma_mono_imag_b)):
# TODO: reflection formula
#if mpf_lt(a2, mpf_shift(fone,-1)):
# znew = mpci_sub((mpi_one,mpi_zero),z,wp)
# ...
# Recurrence:
# gamma(z) = gamma(z+1)/z
znew = mpi_add((a1,a2), mpi_one, wp), (b1,b2)
if type == 0: return mpci_div(mpci_gamma(znew, prec+2, 0), z, prec)
if type == 2: return mpci_mul(mpci_gamma(znew, prec+2, 2), z, prec)
if type == 3: return mpci_sub(mpci_gamma(znew, prec+2, 3), mpci_log(z,prec+2), prec)
# Use monotonicity (except for a small region close to the
# origin and near poles)
# upper half-plane
if mpf_ge(b1, fzero):
minre = mpc_loggamma((a1,b2), wp, round_floor)
maxre = mpc_loggamma((a2,b1), wp, round_ceiling)
minim = mpc_loggamma((a1,b1), wp, round_floor)
maxim = mpc_loggamma((a2,b2), wp, round_ceiling)
# lower half-plane
elif mpf_le(b2, fzero):
minre = mpc_loggamma((a1,b1), wp, round_floor)
maxre = mpc_loggamma((a2,b2), wp, round_ceiling)
minim = mpc_loggamma((a2,b1), wp, round_floor)
maxim = mpc_loggamma((a1,b2), wp, round_ceiling)
# crosses real axis
else:
maxre = mpc_loggamma((a2,fzero), wp, round_ceiling)
# stretches more into the lower half-plane
if mpf_gt(mpf_neg(b1), b2):
minre = mpc_loggamma((a1,b1), wp, round_ceiling)
else:
minre = mpc_loggamma((a1,b2), wp, round_ceiling)
minim = mpc_loggamma((a2,b1), wp, round_floor)
maxim = mpc_loggamma((a2,b2), wp, round_floor)
w = (minre[0], maxre[0]), (minim[1], maxim[1])
if type == 3:
return mpi_pos(w[0], prec), mpi_pos(w[1], prec)
if type == 2:
w = mpci_neg(w)
return mpci_exp(w, prec)
def mpi_loggamma(z, prec): return mpi_gamma(z, prec, type=3)
def mpci_loggamma(z, prec): return mpci_gamma(z, prec, type=3)
def mpi_rgamma(z, prec): return mpi_gamma(z, prec, type=2)
def mpci_rgamma(z, prec): return mpci_gamma(z, prec, type=2)
def mpi_factorial(z, prec): return mpi_gamma(z, prec, type=1)
def mpci_factorial(z, prec): return mpci_gamma(z, prec, type=1)
| mit |
volkandkaya/trader | trader/joins/migrations/0005_auto__add_unique_join_email_ref_id.py | 1 | 1346 | # -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding unique constraint on 'Join', fields ['email', 'ref_id']
db.create_unique(u'joins_join', ['email', 'ref_id'])
def backwards(self, orm):
# Removing unique constraint on 'Join', fields ['email', 'ref_id']
db.delete_unique(u'joins_join', ['email', 'ref_id'])
models = {
u'joins.join': {
'Meta': {'unique_together': "(('email', 'ref_id'),)", 'object_name': 'Join'},
'email': ('django.db.models.fields.EmailField', [], {'unique': 'True', 'max_length': '75'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip_address': ('django.db.models.fields.CharField', [], {'default': "'ABC'", 'max_length': '123'}),
'ref_id': ('django.db.models.fields.CharField', [], {'default': "'ABC'", 'max_length': '123'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
}
}
complete_apps = ['joins'] | mit |
gdi2290/rethinkdb | test/rql_test/connections/http_support/flask/json.py | 428 | 8113 | # -*- coding: utf-8 -*-
"""
flask.jsonimpl
~~~~~~~~~~~~~~
Implementation helpers for the JSON support in Flask.
:copyright: (c) 2012 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
import io
import uuid
from datetime import datetime
from .globals import current_app, request
from ._compat import text_type, PY2
from werkzeug.http import http_date
from jinja2 import Markup
# Use the same json implementation as itsdangerous on which we
# depend anyways.
try:
from itsdangerous import simplejson as _json
except ImportError:
from itsdangerous import json as _json
# figure out if simplejson escapes slashes. This behavior was changed
# from one version to another without reason.
_slash_escape = '\\/' not in _json.dumps('/')
__all__ = ['dump', 'dumps', 'load', 'loads', 'htmlsafe_dump',
'htmlsafe_dumps', 'JSONDecoder', 'JSONEncoder',
'jsonify']
def _wrap_reader_for_text(fp, encoding):
if isinstance(fp.read(0), bytes):
fp = io.TextIOWrapper(io.BufferedReader(fp), encoding)
return fp
def _wrap_writer_for_text(fp, encoding):
try:
fp.write('')
except TypeError:
fp = io.TextIOWrapper(fp, encoding)
return fp
class JSONEncoder(_json.JSONEncoder):
"""The default Flask JSON encoder. This one extends the default simplejson
encoder by also supporting ``datetime`` objects, ``UUID`` as well as
``Markup`` objects which are serialized as RFC 822 datetime strings (same
as the HTTP date format). In order to support more data types override the
:meth:`default` method.
"""
def default(self, o):
"""Implement this method in a subclass such that it returns a
serializable object for ``o``, or calls the base implementation (to
raise a ``TypeError``).
For example, to support arbitrary iterators, you could implement
default like this::
def default(self, o):
try:
iterable = iter(o)
except TypeError:
pass
else:
return list(iterable)
return JSONEncoder.default(self, o)
"""
if isinstance(o, datetime):
return http_date(o)
if isinstance(o, uuid.UUID):
return str(o)
if hasattr(o, '__html__'):
return text_type(o.__html__())
return _json.JSONEncoder.default(self, o)
class JSONDecoder(_json.JSONDecoder):
"""The default JSON decoder. This one does not change the behavior from
the default simplejson encoder. Consult the :mod:`json` documentation
for more information. This decoder is not only used for the load
functions of this module but also :attr:`~flask.Request`.
"""
def _dump_arg_defaults(kwargs):
"""Inject default arguments for dump functions."""
if current_app:
kwargs.setdefault('cls', current_app.json_encoder)
if not current_app.config['JSON_AS_ASCII']:
kwargs.setdefault('ensure_ascii', False)
kwargs.setdefault('sort_keys', current_app.config['JSON_SORT_KEYS'])
else:
kwargs.setdefault('sort_keys', True)
kwargs.setdefault('cls', JSONEncoder)
def _load_arg_defaults(kwargs):
"""Inject default arguments for load functions."""
if current_app:
kwargs.setdefault('cls', current_app.json_decoder)
else:
kwargs.setdefault('cls', JSONDecoder)
def dumps(obj, **kwargs):
"""Serialize ``obj`` to a JSON formatted ``str`` by using the application's
configured encoder (:attr:`~flask.Flask.json_encoder`) if there is an
application on the stack.
This function can return ``unicode`` strings or ascii-only bytestrings by
default which coerce into unicode strings automatically. That behavior by
default is controlled by the ``JSON_AS_ASCII`` configuration variable
    and can be overridden by the simplejson ``ensure_ascii`` parameter.
"""
_dump_arg_defaults(kwargs)
encoding = kwargs.pop('encoding', None)
rv = _json.dumps(obj, **kwargs)
if encoding is not None and isinstance(rv, text_type):
rv = rv.encode(encoding)
return rv
def dump(obj, fp, **kwargs):
"""Like :func:`dumps` but writes into a file object."""
_dump_arg_defaults(kwargs)
encoding = kwargs.pop('encoding', None)
if encoding is not None:
fp = _wrap_writer_for_text(fp, encoding)
_json.dump(obj, fp, **kwargs)
def loads(s, **kwargs):
"""Unserialize a JSON object from a string ``s`` by using the application's
configured decoder (:attr:`~flask.Flask.json_decoder`) if there is an
application on the stack.
"""
_load_arg_defaults(kwargs)
if isinstance(s, bytes):
s = s.decode(kwargs.pop('encoding', None) or 'utf-8')
return _json.loads(s, **kwargs)
def load(fp, **kwargs):
"""Like :func:`loads` but reads from a file object.
"""
_load_arg_defaults(kwargs)
if not PY2:
fp = _wrap_reader_for_text(fp, kwargs.pop('encoding', None) or 'utf-8')
return _json.load(fp, **kwargs)
def htmlsafe_dumps(obj, **kwargs):
"""Works exactly like :func:`dumps` but is safe for use in ``<script>``
tags. It accepts the same arguments and returns a JSON string. Note that
this is available in templates through the ``|tojson`` filter which will
also mark the result as safe. Due to how this function escapes certain
characters this is safe even if used outside of ``<script>`` tags.
The following characters are escaped in strings:
- ``<``
- ``>``
- ``&``
- ``'``
This makes it safe to embed such strings in any place in HTML with the
notable exception of double quoted attributes. In that case single
quote your attributes or HTML escape it in addition.
.. versionchanged:: 0.10
This function's return value is now always safe for HTML usage, even
if outside of script tags or if used in XHTML. This rule does not
hold true when using this function in HTML attributes that are double
quoted. Always single quote attributes if you use the ``|tojson``
filter. Alternatively use ``|tojson|forceescape``.
"""
rv = dumps(obj, **kwargs) \
.replace(u'<', u'\\u003c') \
.replace(u'>', u'\\u003e') \
.replace(u'&', u'\\u0026') \
.replace(u"'", u'\\u0027')
if not _slash_escape:
rv = rv.replace('\\/', '/')
return rv
def htmlsafe_dump(obj, fp, **kwargs):
"""Like :func:`htmlsafe_dumps` but writes into a file object."""
fp.write(unicode(htmlsafe_dumps(obj, **kwargs)))
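# Illustration (sketch): htmlsafe_dumps(u'</script>') produces the JSON
# string "\u003c/script\u003e" (with literal backslash escapes), so the
# payload cannot terminate a surrounding <script> block even before any
# additional HTML escaping is applied.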
def jsonify(*args, **kwargs):
"""Creates a :class:`~flask.Response` with the JSON representation of
the given arguments with an `application/json` mimetype. The arguments
to this function are the same as to the :class:`dict` constructor.
Example usage::
from flask import jsonify
@app.route('/_get_current_user')
def get_current_user():
return jsonify(username=g.user.username,
email=g.user.email,
id=g.user.id)
This will send a JSON response like this to the browser::
{
"username": "admin",
"email": "admin@localhost",
"id": 42
}
For security reasons only objects are supported toplevel. For more
information about this, have a look at :ref:`json-security`.
This function's response will be pretty printed if it was not requested
with ``X-Requested-With: XMLHttpRequest`` to simplify debugging unless
the ``JSONIFY_PRETTYPRINT_REGULAR`` config parameter is set to false.
.. versionadded:: 0.2
"""
indent = None
if current_app.config['JSONIFY_PRETTYPRINT_REGULAR'] \
and not request.is_xhr:
indent = 2
return current_app.response_class(dumps(dict(*args, **kwargs),
indent=indent),
mimetype='application/json')
def tojson_filter(obj, **kwargs):
return Markup(htmlsafe_dumps(obj, **kwargs))
| agpl-3.0 |
mohseniaref/PySAR-1 | pysar/pysarApp.py | 1 | 21751 | #! /usr/bin/env python
###############################################################################
#
# Project: PySAR
# Purpose: Python Module for InSAR Time-series Analysis
# Author: Heresh Fattahi
# Created: July 2013
# Modified: Yunjun Zhang, Feb 2015
###############################################################################
# Copyright (c) 2013, Heresh Fattahi
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###############################################################################
import os
import sys
import glob
import time
import _readfile as readfile
import h5py
import subprocess
from pysar._pysar_utilities import check_variable_name
def radar_Or_geo(igramFile):
h5file=h5py.File(igramFile,'r')
igramList=h5file['interferograms'].keys()
if 'X_FIRST' in h5file['interferograms'][igramList[0]].attrs.keys():
rdr_geo='geo'
else:
rdr_geo='radar'
h5file.close()
return rdr_geo
def Usage():
print '''
*******************************************************
*******************************************************
*******************************************************
*******************************************************
********* OOOOO OOOOO O OOOO *********
********* O O O O O O O O O *********
********* OOOOO OOO OOOOO OOOOO OOOO *********
********* O O O O O O O *********
********* O OOO OOOOO O O O O *********
********* *********
*******************************************************
*******************************************************
*******************************************************
*******************************************************
A Python Module for InSAR time-series analysis.
PySAR v1.0 July 2013, InSAR Lab, RSMAS, University of Miami
usage:
pysarApp.py TEMPLATEFILE
example:
pysarApp.py /nethome/hfattahi/SanAndreasT356EnvD.template
pysarApp.py $TE/SanAndreasT356EnvD.template
*******************************************************
Template file options:
pysar.inputdata=/scratch/hfattahi/PROCESS/SanAndreasT356EnvD/DONE/IFG*/filt*0*c10.unw
pysar.CorFiles = /scratch/hfattahi/PROCESS/SanAndreasT356EnvD/DONE/IFG*/filt*0*.cor
pysar.wraped = /scratch/hfattahi/PROCESS/SanAndreasT356EnvD/DONE/IFG*/filt*0*.int
pysar.geomap = /scratch/hfattahi/PROCESS/SanAndreasT356EnvD/GEO/geomap_12/geomap_8rlks.trans
pysar.dem = /scratch/hfattahi/PROCESS/SanAndreasT356EnvD/DONE/IFG_20050102_20070809/radar_8lks.hgt
pysar.subset.yx = 1800:2000,700:800
pysar.seed.ll=31.5, 67 or pysar.seed.yx=257 , 151
pysar.unwrap_error = yes [no]
pysar.tropospheric_delay = yes ['no']
pysar.tropospheric_delay.method = pyaps ['height-correlation']
pysar.Numerical_Weather_Model = ECMWF ['MERRA', 'NARR']
pysar.acquisition_time = 00:00 ['06:00', '12:00', '18:00']
pysar.topo_error = yes [no]
pysar.orbit_error = yes [no]
pysar.orbit_error.method = plane ['quadratic', 'plane', 'quardatic_range', 'quadratic_azimiuth', 'plane_range', 'plane_azimuth','baselineCor','BaseTropCor']
pysar.mask=yes
pysar.mask.threshold = 0.7
pysar.geocode = yes
*******************************************************
'''
#########################################
def main(argv):
try:
templateFile = argv[1]
except:
Usage();sys.exit(1)
projectName = os.path.basename(templateFile.partition('.')[0])
try:
tssarProjectDir = os.getenv('TSSARDIR') +'/'+projectName
except:
tssarProjectDir = os.getenv('SCRATCHDIR') + '/' + projectName + "/TSSAR" # FA 7/2015: adopted for new directory structure
print "QQ " + tssarProjectDir
if not os.path.isdir(tssarProjectDir): os.mkdir(tssarProjectDir)
os.chdir(tssarProjectDir)
igramFile = 'LoadedData.h5'
Modified_igramFile = 'Modified_LoadedData.h5'
if os.path.isfile(Modified_igramFile):
print Modified_igramFile + ' already exists.'
igramFile=Modified_igramFile
template = readfile.read_template(templateFile)
Rlooks = template['Rlooks_unw']
#########################################
# Loading interferograms
#########################################
print '******************************************'
print''
if os.path.isfile(igramFile):
print igramFile + ' already exists.'
else:
loadCmd='load_data.py ' + templateFile
print loadCmd
os.system(loadCmd)
# copyDemCmd='copy_dem_trans.py ' + templateFile
# print copyDemCmd
# os.system(copyDemCmd)
print''
print '******************************************'
#########################################
# Check the subset
#########################################
try:
subset= template['pysar.subset.yx'].split(',')
print subset
print subset[0]
subsetOutName='subset_'+igramFile
subsetCmd='subset.py -f '+ igramFile + ' -y '+subset[0]+' -x '+subset[1] + ' -o ' + subsetOutName
print '*****************************************'
print 'Subset the area ...'
print subsetCmd
os.system(subsetCmd)
igramFile=subsetOutName
print '*****************************************'
except:
print '*****************************************'
print 'No Subset selected. Processing the whole area'
print '*****************************************'
#########################################
#Referencing all interferograms to the same pixel
#########################################
rdr_or_geo=radar_Or_geo(igramFile)
print '******************************************'
print''
if os.path.isfile('Seeded_'+igramFile):
igramFile = 'Seeded_'+igramFile
print igramFile + ' already exists.'
else:
        print 'referencing all interferograms to the same pixel.'
if 'pysar.seed.ll' in template.keys():
            print 'Checking the lat/lon reference point'
lat= template['pysar.seed.ll'].split(',')[0]
lon= template['pysar.seed.ll'].split(',')[1]
            seedCmd= 'seed_data.py -f ' + igramFile + ' -l ' +lat+ ' -L '+lon
elif 'pysar.seed.yx' in template.keys():
            print 'Checking y/x reference point'
y= template['pysar.seed.yx'].split(',')[0]
x= template['pysar.seed.yx'].split(',')[1]
seedCmd= 'seed_data.py -f ' + igramFile + ' -y ' +y+ ' -x '+x
else:
seedCmd= 'seed_data.py -f ' + igramFile
igramFile = 'Seeded_'+igramFile
print seedCmd
os.system(seedCmd)
print''
print '******************************************'
############################################
#unwrapping error correction based on the
# consistency of triplets of interferograms
############################################
print '******************************************'
print''
try:
template['pysar.unwrap_error']
if template['pysar.unwrap_error'] in ('y','yes','Yes','YES'):
print 'unwrapping error correction might take a while depending on the size of your data set! '
unwCmd='unwrap_error.py '+igramFile
os.system(unwCmd)
igramFile=igramFile.split('.')[0]+'_unwCor.h5'
else:
print 'No unwrapping error correction.'
except:
print 'No unwrapping error correction.'
print''
print '******************************************'
#########################################
# inversion of interferograms
########################################
print '******************************************'
print''
if os.path.isfile(igramFile.split('.')[0]+'_unwCor.h5'):
igramFile = igramFile.split('.')[0]+'_unwCor.h5'
print igramFile + ' exists.'
if os.path.isfile('timeseries.h5'):
print 'timeseries.h5 already exists, inversion is not needed.'
else:
invertCmd = 'igram_inversion.py '+ igramFile
print invertCmd
os.system(invertCmd)
timeseriesFile='timeseries.h5'
print''
print '******************************************'
##############################################
#temporal coherence:
#A parameter to evaluate the consistency of
# timeseries with the interferograms
##############################################
print '******************************************'
print''
# if os.path.isfile('temporal_coherence.h5'):
# print 'temporal_coherence.h5 already exists.'
# else:
# tempcohCmd='temporal_coherence.py '+igramFile+' '+timeseriesFile
# print tempcohCmd
# os.system(tempcohCmd)
tempcohCmd='temporal_coherence.py '+igramFile+' '+timeseriesFile
print tempcohCmd
os.system(tempcohCmd)
print''
print '******************************************'
##############################################
#update Mask based on temporal coherence
# add by Yunjun Feb 15, 2015
##############################################
print '******************************************'
print''
try:
template['pysar.mask']
if template['pysar.mask'] in ('yes','Yes','YES','y'):
print 'Updating mask according to temporal coherence'
cohT=template['pysar.mask.threshold']
maskCmd='generate_mask.py -f temporal_coherence.h5 -m '+ cohT +' -M 1.0 -o Mask.h5'
print maskCmd
os.system(maskCmd)
else:
print 'No update for mask.'
except:
print 'No update for mask.'
print''
print '******************************************'
##############################################
# Generate incident angle
# add by Yunjun Feb 15, 2015
##############################################
print '******************************************'
print''
inciCmd='incidence_angle.py -f timeseries.h5'
print inciCmd
os.system(inciCmd)
print''
print '******************************************'
##############################################
#If Satellite is Envisat and if Coordinate
#system is radar then LOD correction
##############################################
print '******************************************'
print''
h5file=h5py.File(timeseriesFile,'r')
if rdr_or_geo =='radar':
if h5file['timeseries'].attrs['PLATFORM']=='ENVISAT':
LODcmd='lod.py '+timeseriesFile
print LODcmd
os.system(LODcmd)
timeseriesFile=timeseriesFile.split('.')[0]+'_LODcor.h5'
print''
print '******************************************'
##############################################
# Tropospheric Correction
##############################################
print '******************************************'
print''
try:
if (template['pysar.tropospheric_delay'] in ('y','yes','Yes','YES')) and template['pysar.orbit_error.method']=='BaseTropCor':
print '''
+++++++++++++++++++++++++++++++++++++++++++++++++++
WARNING:
Orbital error correction was BaseTropCor.
Tropospheric correction was already applied simultaneously with baseline error correction.
Tropospheric correction cannot be applied again.
To apply the tropospheric correction separately from baseline error correction, choose one of the other orbital error correction options.
+++++++++++++++++++++++++++++++++++++++++++++++++++
'''
template['pysar.tropospheric_delay']='no'
except:
print 'Checking the tropospheric delay correction ...'
if template['pysar.tropospheric_delay'] in ('y','yes','Yes','YES'):
# demFile='radar_'+Rlooks+'rlks.hgt'
demFile=template['pysar.dem']
demFile=check_variable_name(demFile)
# print 'DEM file: '+demFile
if not os.path.isfile(demFile):
print '++++++++++++++++++++++++++++++++++++++++++++++'
print 'Error:'
print 'DEM (radar_*rlks.hgt file) was not found!'
print 'Continue without tropospheric correction ...'
print '++++++++++++++++++++++++++++++++++++++++++++++'
else:
if template['pysar.tropospheric_delay.method'] in ['height-correlation','height_correlation','Height-Correlation','Height_Correlation']:
print 'tropospheric delay correction with height-correlation approach'
try:
polyOrder=template['pysar.trop.polyOrder']
except:
                    print 'Default polynomial order for tropospheric correction = 1'
polyOrder='1'
                cmdTrop='tropospheric_correction.py'+ ' -f '+ timeseriesFile + ' -d '+ demFile + ' -p '+ polyOrder
os.system(cmdTrop)
timeseriesFile=timeseriesFile.split('.')[0]+'_tropCor.h5'
elif template['pysar.tropospheric_delay.method']=='pyaps':
print 'Atmospheric correction using Numerical Weather Models (using PyAPS software)'
print 'reading DEM, source of NWM and acquisition time from template file'
source_of_NWM=template['pysar.Numerical_Weather_Model']
print 'Numerical Weather Model: '+source_of_NWM
acquisition_time=template['pysar.acquisition_time']
print 'acquisition time: '+acquisition_time
# cmdTrop = ["tropcor_pyaps.py -f ",timeseriesFile," -d ",demFile," -s ",source_of_NWM," -h ",acquisition_time," -i incidence_angle.h5"]
cmdTrop = 'tropcor_pyaps.py -f '+timeseriesFile+ ' -d '+ demFile +' -s ' + source_of_NWM + ' -h '+ acquisition_time + ' -i incidence_angle.h5'
print cmdTrop
os.system(cmdTrop)
# subprocess.Popen(cmdTrop).wait()
timeseriesFile=timeseriesFile.split('.')[0]+'_'+source_of_NWM+'.h5'
else:
print 'Atmospheric correction method not recognized.'
else:
print 'No atmospheric delay correction.'
print''
print '******************************************'
##############################################
#topographic residuals
##############################################
print '******************************************'
print''
try:
template['pysar.topo_error']
if template['pysar.topo_error'] in ('yes','Yes','YES','y'):
print 'Correcting topographic residuals'
topoCmd='dem_error.py '+ timeseriesFile +' '+ igramFile
print topoCmd
os.system(topoCmd)
timeseriesFile=timeseriesFile.split('.')[0]+'_demCor.h5'
else:
print 'No correction for topographic residuals.'
except:
print 'No correction for topographic residuals.'
print''
print '******************************************'
##############################################
#Orbit correction
##############################################
print '******************************************'
print''
try:
template['pysar.orbit_error']
if template['pysar.orbit_error'] in ('yes','Yes','YES','y'):
try:
orbit_error_method=template['pysar.orbit_error.method']
print 'orbit error correction method : '+orbit_error_method
if orbit_error_method in ['quadratic', 'plane', 'quardatic_range', 'quadratic_azimiuth', 'plane_range', 'plane_azimuth']:
orbitCmd='remove_plane.py '+timeseriesFile+' '+template['pysar.orbit_error.method'] #+ ' Mask.h5'
timeseriesFile=timeseriesFile.split('.')[0]+'_'+template['pysar.orbit_error.method']+'.h5'
print orbitCmd
os.system(orbitCmd)
elif orbit_error_method == 'baselineCor':
orbitCmd='baseline_error.py ' +timeseriesFile #+ ' Mask.h5'
print orbitCmd
try:
h5file=h5py.File(timeseriesFile,'r')
daz=float(h5file['timeseries'].attrs['AZIMUTH_PIXEL_SIZE'])
os.system(orbitCmd)
timeseriesFile=timeseriesFile.split('.')[0]+'_'+template['pysar.orbit_error.method']+'.h5'
except:
print 'WARNING!'
print 'Skipping orbital error correction.'
                        print 'baselineCor method can only be applied in radar coordinates'
elif orbit_error_method =='BaseTropCor':
demfile=template['pysar.dem']
demfile=check_variable_name(demfile)
try:
polyOrder=template['pysar.trop.polyOrder']
except:
                        print 'Default polynomial order for tropospheric correction = 1'
                        polyOrder='1'
try:
h5file=h5py.File(timeseriesFile,'r')
daz=float(h5file['timeseries'].attrs['AZIMUTH_PIXEL_SIZE'])
                        orbitCmd='baseline_trop.py '+timeseriesFile+' '+ demfile +' '+ polyOrder +' range_and_azimuth'
print 'Joint estimation of Baseline error and tropospheric delay [height-correlation approach]'
print orbitCmd
os.system(orbitCmd)
timeseriesFile=timeseriesFile.split('.')[0]+'_'+template['pysar.orbit_error.method']+'.h5'
except:
print 'WARNING!'
print 'Skipping orbital error correction.'
                        print 'BaseTropCor method can only be applied in radar coordinates'
else:
print '+++++++++++++++++++++++++++++++++++++++++++++++++++++++'
print 'WARNING!'
print 'Orbital error correction method was not recognized!'
print 'Possible options are:'
print 'quadratic, plane, quardatic_range, quadratic_azimiuth, plane_range, plane_azimuth,baselineCor,BaseTropCor'
print 'Continue without orbital errors correction...'
print '+++++++++++++++++++++++++++++++++++++++++++++++++++++++'
except:
print 'No orbital errors correction.'
else:
print 'No orbital errors correction.'
except:
print 'No orbital errors correction.'
print''
print '******************************************'
#############################################
#Velocity and rmse maps
#############################################
print '******************************************'
print''
velCmd='timeseries2velocity.py '+timeseriesFile
print velCmd
os.system(velCmd)
print''
print '******************************************'
#############################################
#Masking the velocity based on the temporal
#coherence or rmse if it's specified
#############################################
print '******************************************'
print''
try:
template['pysar.mask']
if template['pysar.mask'] in ('yes','Yes','YES','y'):
try:
template['pysar.mask.threshold']
maskCmd='masking.py -f velocity.h5 -m temporal_coherence.h5 -t '+template['pysar.mask.threshold']
print 'Masking the velocity file using the temporal coherence with the threshold of '+template['pysar.mask.threshold']
except:
                maskCmd='masking.py -f velocity.h5 -m temporal_coherence.h5 -t 0.7'
print 'Masking the velocity file using the temporal coherence with the threshold of 0.7'
os.system(maskCmd)
# rmCmd='rm velocity.h5'
# os.system(rmCmd)
# mvCmd='mv velocity_masked.h5 velocity.h5'
# os.system(mvCmd)
else:
print 'No masking applied'
except:
print 'No masking applied'
print''
print '******************************************'
############################################
#Geocoding
############################################
print '******************************************'
print''
try:
template['pysar.geocode']
if template['pysar.geocode'] in ('y','yes','Yes','YES'):
geomapFile='geomap_'+Rlooks+'rlks.trans'
# geoCmd = 'geocode.py '+timeseriesFile+' '+geomapFile
# print geoCmd
# os.system(geoCmd)
geoCmd = 'geocode.py velocity.h5 '+geomapFile
print geoCmd
os.system(geoCmd)
geoCmd = 'geocode.py Mask.h5 '+geomapFile
print geoCmd
os.system(geoCmd)
# maskCmd = 'Masking.py -f geo_'+timeseriesFile+' -m geo_Mask.h5'
# print maskCmd
# os.system(maskCmd)
maskCmd = 'masking.py -f geo_velocity.h5 -m geo_Mask.h5'
print maskCmd
os.system(maskCmd)
else:
print 'No geocoding applied'
except:
print 'No geocoding applied'
print''
print '******************************************'
#############################################
# PySAR v1.0 #
#############################################
print''
print '###############################################'
print ''
print 'End of PySAR processing.'
print ''
print '################################################'
if __name__ == '__main__':
main(sys.argv[:])
| mit |
janisz/Diamond-1 | src/collectors/endecadgraph/endecadgraph.py | 57 | 3912 | # coding=utf-8
"""
Collects stats from Endeca Dgraph/MDEX server.
Tested with: Endeca Information Access Platform version 6.3.0.655584
=== Authors
Jan van Bemmelen <jvanbemmelen@bol.com>
Renzo Toma <rtoma@bol.com>
"""
import diamond.collector
import urllib2
from StringIO import StringIO
import re
import sys
if sys.version_info >= (2, 5):
import xml.etree.cElementTree as ElementTree
else:
import cElementTree as ElementTree
class EndecaDgraphCollector(diamond.collector.Collector):
# ignore these elements, because they are of no use
IGNORE_ELEMENTS = [
'most_expensive_queries',
'general_information',
'analytics_performance',
'disk_usage',
'configupdates',
'xqueryconfigupdates',
'spelling_updates',
'precomputed_sorts',
'analytics_performance',
'cache_slices',
]
# ignore these metrics, because they can be generated by graphite
IGNORE_STATS = [
'name',
'units',
]
# set of regular expressions for matching & sub'ing.
NUMVAL_MATCH = re.compile('^[\d\.e\-\+]*$')
CHAR_BLACKLIST = re.compile('\-|\ |,|:|/|>|\(|\)')
UNDERSCORE_UNDUPE = re.compile('_+')
# endeca xml namespace
XML_NS = '{http://xmlns.endeca.com/ene/dgraph}'
def get_default_config_help(self):
config_help = super(EndecaDgraphCollector,
self).get_default_config_help()
config_help.update({
'host': "Hostname of Endeca Dgraph instance",
'port': "Port of the Dgraph API listener",
'timeout': "Timeout for http API calls",
})
return config_help
def get_default_config(self):
"""
Returns the default collector settings
"""
config = super(EndecaDgraphCollector, self).get_default_config()
config.update({
'path': 'endeca.dgraph',
'host': 'localhost',
'port': 8080,
'timeout': 1,
})
return config
def collect(self):
def makeSane(stat):
stat = self.CHAR_BLACKLIST.sub('_', stat.lower())
stat = self.UNDERSCORE_UNDUPE.sub('_', stat)
return stat
def createKey(element):
if element.attrib.get("name"):
key = element.attrib.get("name")
key = makeSane(key)
else:
key = element.tag[len(self.XML_NS):]
return key
def processElem(elem, keyList):
for k, v in elem.items():
prefix = '.'.join(keyList)
                if k not in self.IGNORE_STATS and self.NUMVAL_MATCH.match(v):
k = makeSane(k)
self.publish('%s.%s' % (prefix, k), v)
def walkXML(context, elemList):
try:
for event, elem in context:
elemName = createKey(elem)
if event == 'start':
elemList.append(elemName)
if len(elem) == 0:
if set(elemList).intersection(self.IGNORE_ELEMENTS):
continue
processElem(elem, elemList)
elif event == 'end':
elemList.pop()
except Exception, e:
self.log.error('Something went wrong: %s', e)
url = 'http://%s:%d/admin?op=stats' % (self.config['host'],
self.config['port'])
try:
xml = urllib2.urlopen(url, timeout=self.config['timeout']).read()
except Exception, e:
self.log.error('Could not connect to endeca on %s: %s' % (url, e))
return {}
context = ElementTree.iterparse(StringIO(xml), events=('start', 'end'))
elemList = []
walkXML(context, elemList)
| mit |
AsimmHirani/ISpyPi | tensorflow/contrib/tensorflow-master/tensorflow/contrib/layers/python/ops/sparse_feature_cross_op.py | 34 | 5025 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Wrappers for sparse cross operations."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.framework import deprecated_arg_values
from tensorflow.contrib.util import loader
from tensorflow.python.framework import common_shapes
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import resource_loader
_sparse_feature_cross_op = loader.load_op_library(
resource_loader.get_path_to_datafile("_sparse_feature_cross_op.so"))
# Default hash key for the FingerprintCat64.
SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY = 0xDECAFCAFFE
@deprecated_arg_values(
"2016-11-20",
"The default behavior of sparse_feature_cross is changing, the default\n"
"value for hash_key will change to SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY.\n"
"From that point on sparse_feature_cross will always use FingerprintCat64\n"
"to concatenate the feature fingerprints. And the underlying\n"
"_sparse_feature_cross_op.sparse_feature_cross operation will be marked\n"
"as deprecated.",
hash_key=None)
def sparse_feature_cross(inputs, hashed_output=False, num_buckets=0,
name=None, hash_key=None):
"""Crosses a list of Tensor or SparseTensor objects.
See sparse_feature_cross_kernel.cc for more details.
Args:
inputs: List of `SparseTensor` or `Tensor` to be crossed.
hashed_output: If true, returns the hash of the cross instead of the string.
      This allows us to avoid string manipulations.
num_buckets: It is used if hashed_output is true.
output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.
name: A name prefix for the returned tensors (optional).
hash_key: Specify the hash_key that will be used by the `FingerprintCat64`
function to combine the crosses fingerprints on SparseFeatureCrossOp.
The default value is None, but will become
SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY after 2016-11-20 (optional).
Returns:
A `SparseTensor` with the crossed features.
Return type is string if hashed_output=False, int64 otherwise.
Raises:
TypeError: If the inputs aren't either SparseTensor or Tensor.
"""
if not isinstance(inputs, list):
raise TypeError("Inputs must be a list")
if not all(isinstance(i, sparse_tensor.SparseTensor) or
isinstance(i, ops.Tensor) for i in inputs):
raise TypeError("All inputs must be SparseTensors")
sparse_inputs = [i for i in inputs
if isinstance(i, sparse_tensor.SparseTensor)]
dense_inputs = [i for i in inputs
if not isinstance(i, sparse_tensor.SparseTensor)]
indices = [sp_input.indices for sp_input in sparse_inputs]
values = [sp_input.values for sp_input in sparse_inputs]
shapes = [sp_input.dense_shape for sp_input in sparse_inputs]
out_type = dtypes.int64 if hashed_output else dtypes.string
internal_type = dtypes.string
for i in range(len(values)):
if values[i].dtype != dtypes.string:
values[i] = math_ops.to_int64(values[i])
internal_type = dtypes.int64
for i in range(len(dense_inputs)):
if dense_inputs[i].dtype != dtypes.string:
dense_inputs[i] = math_ops.to_int64(dense_inputs[i])
internal_type = dtypes.int64
if hash_key:
indices_out, values_out, shape_out = (
_sparse_feature_cross_op.sparse_feature_cross_v2(
indices,
values,
shapes,
dense_inputs,
hashed_output,
num_buckets,
hash_key=hash_key,
out_type=out_type,
internal_type=internal_type,
name=name))
else:
indices_out, values_out, shape_out = (
_sparse_feature_cross_op.sparse_feature_cross(
indices,
values,
shapes,
dense_inputs,
hashed_output,
num_buckets,
out_type=out_type,
internal_type=internal_type,
name=name))
return sparse_tensor.SparseTensor(indices_out, values_out, shape_out)
ops.NotDifferentiable("SparseFeatureCross")
ops.NotDifferentiable("SparseFeatureCrossV2")
| apache-2.0 |
qbuat/rootpy | rootpy/tree/model.py | 1 | 5162 | # Copyright 2012 the rootpy developers
# distributed under the terms of the GNU General Public License
from __future__ import absolute_import
import inspect
from cStringIO import StringIO
import types
import ROOT
from .. import log; log = log[__name__]
from .treetypes import Column
from .treebuffer import TreeBuffer
__all__ = [
'TreeModel',
]
class TreeModelMeta(type):
"""
Metaclass for all TreeModels
Addition/subtraction of TreeModels is handled
as set union and difference of class attributes
"""
def __new__(cls, name, bases, dct):
for attr, value in dct.items():
TreeModelMeta.checkattr(attr, value)
return type.__new__(cls, name, bases, dct)
def __add__(cls, other):
return type('_'.join([cls.__name__, other.__name__]),
(cls, other), {})
def __iadd__(cls, other):
return cls.__add__(other)
def __sub__(cls, other):
attrs = dict(set(cls.get_attrs()).difference(set(other.get_attrs())))
return type('_'.join([cls.__name__, other.__name__]),
(TreeModel,), attrs)
def __isub__(cls, other):
return cls.__sub__(other)
def __setattr__(cls, attr, value):
TreeModelMeta.checkattr(attr, value)
type.__setattr__(cls, attr, value)
@classmethod
def checkattr(metacls, attr, value):
"""
Only allow class attributes that are instances of
rootpy.types.Column, ROOT.TObject, or ROOT.ObjectProxy
"""
if not isinstance(value, (
types.MethodType,
types.FunctionType,
classmethod,
staticmethod,
property)):
if attr in dir(type('dummy', (object,), {})) + \
['__metaclass__']:
return
if attr.startswith('_'):
raise SyntaxError(
"TreeModel attribute `{0}` "
"must not start with `_`".format(attr))
if not inspect.isclass(value):
if not isinstance(value, Column):
raise TypeError(
"TreeModel attribute `{0}` "
"must be an instance of "
"`rootpy.tree.treetypes.Column`".format(attr))
return
if not issubclass(value, (ROOT.TObject, ROOT.ObjectProxy)):
raise TypeError(
"TreeModel attribute `{0}` must inherit "
"from `ROOT.TObject` or `ROOT.ObjectProxy`".format(
attr))
def prefix(cls, name):
"""
Create a new TreeModel where class attribute
names are prefixed with ``name``
"""
attrs = dict([(name + attr, value) for attr, value in cls.get_attrs()])
return TreeModelMeta(
'_'.join([name, cls.__name__]),
(TreeModel,), attrs)
def suffix(cls, name):
"""
Create a new TreeModel where class attribute
names are suffixed with ``name``
"""
attrs = dict([(attr + name, value) for attr, value in cls.get_attrs()])
return TreeModelMeta(
'_'.join([cls.__name__, name]),
(TreeModel,), attrs)
def get_attrs(cls):
"""
Get all class attributes ordered by definition
"""
ignore = dir(type('dummy', (object,), {})) + ['__metaclass__']
attrs = [
item for item in inspect.getmembers(cls) if item[0] not in ignore
and not isinstance(
item[1], (
types.FunctionType,
types.MethodType,
classmethod,
staticmethod,
property))]
# sort by idx and use attribute name to break ties
attrs.sort(key=lambda attr: (getattr(attr[1], 'idx', -1), attr[0]))
return attrs
def to_struct(cls, name=None):
"""
Convert the TreeModel into a compiled C struct
"""
if name is None:
name = cls.__name__
basic_attrs = dict([(attr_name, value)
for attr_name, value in cls.get_attrs()
if isinstance(value, Column)])
if not basic_attrs:
return None
src = 'struct {0} {{'.format(name)
for attr_name, value in basic_attrs.items():
src += '{0} {1};'.format(value.type.typename, attr_name)
src += '};'
if ROOT.gROOT.ProcessLine(src) != 0:
return None
return getattr(ROOT, name, None)
def __repr__(cls):
out = StringIO()
for name, value in cls.get_attrs():
print >> out, '{0} -> {1}'.format(name, value)
return out.getvalue()[:-1]
def __str__(cls):
return repr(cls)
class TreeModel(object):
__metaclass__ = TreeModelMeta
def __new__(cls):
"""
Return a TreeBuffer for this TreeModel
"""
treebuffer = TreeBuffer()
for name, attr in cls.get_attrs():
treebuffer[name] = attr()
return treebuffer
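# Minimal usage sketch (FloatCol and IntCol are column types from
# rootpy.tree.treetypes, shown here only for illustration):
#   class Left(TreeModel):
#       x = FloatCol()
#   class Right(TreeModel):
#       n = IntCol()
#   Both = Left + Right   # metaclass __add__: union of the class attributes
#   buffer = Both()       # TreeModel.__new__ returns a filled TreeBuffer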
| gpl-3.0 |
Easy-as-Bit/p2pool | p2pool/util/math.py | 130 | 6565 | from __future__ import absolute_import, division
import __builtin__
import math
import random
import time
def median(x, use_float=True):
# there exist better algorithms...
y = sorted(x)
if not y:
raise ValueError('empty sequence!')
left = (len(y) - 1)//2
right = len(y)//2
sum = y[left] + y[right]
if use_float:
return sum/2
else:
return sum//2
def mean(x):
total = 0
count = 0
for y in x:
total += y
count += 1
return total/count
def shuffled(x):
x = list(x)
random.shuffle(x)
return x
def shift_left(n, m):
# python: :(
if m >= 0:
return n << m
return n >> -m
def clip(x, (low, high)):
if x < low:
return low
elif x > high:
return high
else:
return x
add_to_range = lambda x, (low, high): (min(low, x), max(high, x))
def nth(i, n=0):
i = iter(i)
for _ in xrange(n):
i.next()
return i.next()
def geometric(p):
if p <= 0 or p > 1:
raise ValueError('p must be in the interval (0.0, 1.0]')
if p == 1:
return 1
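    # Inverse-transform sampling: for U ~ Uniform(0,1),
    # floor(log(1-U)/log(1-p)) + 1 is geometrically distributed with
    # success probability p; log1p keeps this accurate for small p.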
return int(math.log1p(-random.random()) / math.log1p(-p)) + 1
def add_dicts_ext(add_func=lambda a, b: a+b, zero=0):
def add_dicts(*dicts):
res = {}
for d in dicts:
for k, v in d.iteritems():
res[k] = add_func(res.get(k, zero), v)
return dict((k, v) for k, v in res.iteritems() if v != zero)
return add_dicts
add_dicts = add_dicts_ext()
mult_dict = lambda c, x: dict((k, c*v) for k, v in x.iteritems())
def format(x):
prefixes = 'kMGTPEZY'
count = 0
while x >= 100000 and count < len(prefixes) - 2:
x = x//1000
count += 1
s = '' if count == 0 else prefixes[count - 1]
return '%i' % (x,) + s
def format_dt(dt):
for value, name in [
(365.2425*60*60*24, 'years'),
(60*60*24, 'days'),
(60*60, 'hours'),
(60, 'minutes'),
(1, 'seconds'),
]:
if dt > value:
break
return '%.01f %s' % (dt/value, name)
perfect_round = lambda x: int(x + random.random())
def erf(x):
# save the sign of x
sign = 1
if x < 0:
sign = -1
x = abs(x)
# constants
a1 = 0.254829592
a2 = -0.284496736
a3 = 1.421413741
a4 = -1.453152027
a5 = 1.061405429
p = 0.3275911
# A&S formula 7.1.26
t = 1.0/(1.0 + p*x)
y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)
return sign*y # erf(-x) = -erf(x)
def find_root(y_over_dy, start, steps=10, bounds=(None, None)):
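    # Newton's method: y_over_dy(x) should return f(x)/f'(x); each step
    # moves the guess to x - f(x)/f'(x), clamped to the optional bounds.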
guess = start
for i in xrange(steps):
prev, guess = guess, guess - y_over_dy(guess)
if bounds[0] is not None and guess < bounds[0]: guess = bounds[0]
if bounds[1] is not None and guess > bounds[1]: guess = bounds[1]
if guess == prev:
break
return guess
def ierf(z):
return find_root(lambda x: (erf(x) - z)/(2*math.e**(-x**2)/math.sqrt(math.pi)), 0)
def binomial_conf_interval(x, n, conf=0.95):
assert 0 <= x <= n and 0 <= conf < 1
if n == 0:
left = random.random()*(1 - conf)
return left, left + conf
# approximate - Wilson score interval
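    # (p + z^2/(2n) +- z*sqrt(p*(1-p)/n + z^2/(4n^2))) / (1 + z^2/n)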
z = math.sqrt(2)*ierf(conf)
p = x/n
topa = p + z**2/2/n
topb = z * math.sqrt(p*(1-p)/n + z**2/4/n**2)
bottom = 1 + z**2/n
return [clip(x, (0, 1)) for x in add_to_range(x/n, [(topa - topb)/bottom, (topa + topb)/bottom])]
minmax = lambda x: (min(x), max(x))
def format_binomial_conf(x, n, conf=0.95, f=lambda x: x):
if n == 0:
return '???'
left, right = minmax(map(f, binomial_conf_interval(x, n, conf)))
return '~%.1f%% (%.f-%.f%%)' % (100*f(x/n), math.floor(100*left), math.ceil(100*right))
def reversed(x):
try:
return __builtin__.reversed(x)
except TypeError:
return reversed(list(x))
class Object(object):
def __init__(self, **kwargs):
for k, v in kwargs.iteritems():
setattr(self, k, v)
def add_tuples(res, *tuples):
for t in tuples:
if len(t) != len(res):
raise ValueError('tuples must all be the same length')
res = tuple(a + b for a, b in zip(res, t))
return res
def flatten_linked_list(x):
while x is not None:
x, cur = x
yield cur
def weighted_choice(choices):
choices = list((item, weight) for item, weight in choices)
target = random.randrange(sum(weight for item, weight in choices))
for item, weight in choices:
if weight > target:
return item
target -= weight
raise AssertionError()
def natural_to_string(n, alphabet=None):
if n < 0:
raise TypeError('n must be a natural')
if alphabet is None:
s = ('%x' % (n,)).lstrip('0')
if len(s) % 2:
s = '0' + s
return s.decode('hex')
else:
assert len(set(alphabet)) == len(alphabet)
res = []
while n:
n, x = divmod(n, len(alphabet))
res.append(alphabet[x])
res.reverse()
return ''.join(res)
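# Sketch: natural_to_string(255) == '\xff' (big-endian bytes), while
# natural_to_string(255, '0123456789abcdef') == 'ff' (positional encoding
# in the given alphabet); string_to_natural inverts both forms.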
def string_to_natural(s, alphabet=None):
if alphabet is None:
assert not s.startswith('\x00')
return int(s.encode('hex'), 16) if s else 0
else:
assert len(set(alphabet)) == len(alphabet)
assert not s.startswith(alphabet[0])
return sum(alphabet.index(char) * len(alphabet)**i for i, char in enumerate(reversed(s)))
class RateMonitor(object):
def __init__(self, max_lookback_time):
self.max_lookback_time = max_lookback_time
self.datums = []
self.first_timestamp = None
def _prune(self):
start_time = time.time() - self.max_lookback_time
for i, (ts, datum) in enumerate(self.datums):
if ts > start_time:
self.datums[:] = self.datums[i:]
return
def get_datums_in_last(self, dt=None):
if dt is None:
dt = self.max_lookback_time
assert dt <= self.max_lookback_time
self._prune()
now = time.time()
return [datum for ts, datum in self.datums if ts > now - dt], min(dt, now - self.first_timestamp) if self.first_timestamp is not None else 0
def add_datum(self, datum):
self._prune()
t = time.time()
if self.first_timestamp is None:
self.first_timestamp = t
else:
self.datums.append((t, datum))
def merge_dicts(*dicts):
res = {}
for d in dicts: res.update(d)
return res
| gpl-3.0 |
uniphil/heroku-buildpack-pythonsass | vendor/setuptools-2.1/setuptools/extension.py | 284 | 1404 | import sys
import distutils.core
import distutils.extension
from setuptools.dist import _get_unpatched
_Extension = _get_unpatched(distutils.core.Extension)
def have_pyrex():
"""
Return True if Cython or Pyrex can be imported.
"""
pyrex_impls = 'Cython.Distutils.build_ext', 'Pyrex.Distutils.build_ext'
for pyrex_impl in pyrex_impls:
try:
# from (pyrex_impl) import build_ext
__import__(pyrex_impl, fromlist=['build_ext']).build_ext
return True
except Exception:
pass
return False
class Extension(_Extension):
"""Extension that uses '.c' files in place of '.pyx' files"""
def __init__(self, *args, **kw):
_Extension.__init__(self, *args, **kw)
if not have_pyrex():
self._convert_pyx_sources_to_c()
def _convert_pyx_sources_to_c(self):
"convert .pyx extensions to .c"
def pyx_to_c(source):
if source.endswith('.pyx'):
source = source[:-4] + '.c'
return source
self.sources = list(map(pyx_to_c, self.sources))
class Library(Extension):
"""Just like a regular Extension, but built as a library instead"""
distutils.core.Extension = Extension
distutils.extension.Extension = Extension
if 'distutils.command.build_ext' in sys.modules:
sys.modules['distutils.command.build_ext'].Extension = Extension
| mit |
natefoo/ansible-modules-extras | cloud/amazon/ec2_remote_facts.py | 42 | 5671 | #!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_remote_facts
short_description: Gather facts about ec2 instances in AWS
description:
- Gather facts about ec2 instances in AWS
version_added: "2.0"
options:
filters:
description:
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html) for possible filters.
required: false
default: null
author:
- "Michael Schuett (@michaeljs1990)"
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Gather facts about all ec2 instances
- ec2_remote_facts:
# Gather facts about all running ec2 instances with a tag of Name:Example
- ec2_remote_facts:
filters:
instance-state-name: running
"tag:Name": Example
# Gather facts about instance i-123456
- ec2_remote_facts:
filters:
instance-id: i-123456
# Gather facts about all instances in vpc-123456 that are t2.small type
- ec2_remote_facts:
filters:
vpc-id: vpc-123456
instance-type: t2.small
'''
try:
import boto.ec2
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def get_instance_info(instance):
# Get groups
groups = []
for group in instance.groups:
groups.append({ 'id': group.id, 'name': group.name }.copy())
# Get interfaces
interfaces = []
for interface in instance.interfaces:
interfaces.append({ 'id': interface.id, 'mac_address': interface.mac_address }.copy())
# If an instance is terminated, sourceDestCheck is no longer returned
try:
source_dest_check = instance.sourceDestCheck
except AttributeError:
source_dest_check = None
instance_info = { 'id': instance.id,
'kernel': instance.kernel,
'instance_profile': instance.instance_profile,
'root_device_type': instance.root_device_type,
'private_dns_name': instance.private_dns_name,
'public_dns_name': instance.public_dns_name,
'ebs_optimized': instance.ebs_optimized,
'client_token': instance.client_token,
'virtualization_type': instance.virtualization_type,
'architecture': instance.architecture,
'ramdisk': instance.ramdisk,
'tags': instance.tags,
'key_name': instance.key_name,
'source_destination_check': source_dest_check,
'image_id': instance.image_id,
'groups': groups,
'interfaces': interfaces,
'spot_instance_request_id': instance.spot_instance_request_id,
'requester_id': instance.requester_id,
'monitoring_state': instance.monitoring_state,
'placement': {
'tenancy': instance._placement.tenancy,
'zone': instance._placement.zone
},
'ami_launch_index': instance.ami_launch_index,
'launch_time': instance.launch_time,
'hypervisor': instance.hypervisor,
'region': instance.region.name,
'persistent': instance.persistent,
'private_ip_address': instance.private_ip_address,
'state': instance._state.name,
'vpc_id': instance.vpc_id,
}
return instance_info
def list_ec2_instances(connection, module):
filters = module.params.get("filters")
instance_dict_array = []
try:
all_instances = connection.get_only_instances(filters=filters)
except BotoServerError as e:
module.fail_json(msg=e.message)
for instance in all_instances:
instance_dict_array.append(get_instance_info(instance))
module.exit_json(instances=instance_dict_array)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
filters = dict(default=None, type='dict')
)
)
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.ec2, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, AnsibleAWSError), e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
list_ec2_instances(connection, module)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
if __name__ == '__main__':
main()
| gpl-3.0 |
daxxi13/CouchPotatoServer | libs/html5lib/treewalkers/pulldom.py | 1729 | 2302 | from __future__ import absolute_import, division, unicode_literals
from xml.dom.pulldom import START_ELEMENT, END_ELEMENT, \
COMMENT, IGNORABLE_WHITESPACE, CHARACTERS
from . import _base
from ..constants import voidElements
class TreeWalker(_base.TreeWalker):
def __iter__(self):
ignore_until = None
previous = None
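        # one event of lookahead: 'previous' is emitted only once the
        # following event is known, so tokens() can tell whether a void
        # element is closed by the very next END_ELEMENT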
for event in self.tree:
if previous is not None and \
(ignore_until is None or previous[1] is ignore_until):
if previous[1] is ignore_until:
ignore_until = None
for token in self.tokens(previous, event):
yield token
if token["type"] == "EmptyTag":
ignore_until = previous[1]
previous = event
if ignore_until is None or previous[1] is ignore_until:
for token in self.tokens(previous, None):
yield token
elif ignore_until is not None:
            raise ValueError("Ill-formed DOM event stream: void element without END_ELEMENT")
def tokens(self, event, next):
type, node = event
if type == START_ELEMENT:
name = node.nodeName
namespace = node.namespaceURI
attrs = {}
for attr in list(node.attributes.keys()):
attr = node.getAttributeNode(attr)
attrs[(attr.namespaceURI, attr.localName)] = attr.value
if name in voidElements:
for token in self.emptyTag(namespace,
name,
attrs,
not next or next[1] is not node):
yield token
else:
yield self.startTag(namespace, name, attrs)
elif type == END_ELEMENT:
name = node.nodeName
namespace = node.namespaceURI
if name not in voidElements:
yield self.endTag(namespace, name)
elif type == COMMENT:
yield self.comment(node.nodeValue)
elif type in (IGNORABLE_WHITESPACE, CHARACTERS):
for token in self.text(node.nodeValue):
yield token
else:
yield self.unknown(type)
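# Hedged usage sketch (not in the original file; the exact wiring is an
# assumption): html5lib hands this walker a pulldom event stream, e.g.
#   from xml.dom.pulldom import parseString
#   walker = TreeWalker(parseString("<p>hi<br/></p>"))
#   for token in walker:   # dicts: StartTag, Characters, EmptyTag, EndTag
#       print(token["type"])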
| gpl-3.0 |
tessercat/ddj | languages/tr.py | 121 | 7256 | # coding: utf-8
{
'!langcode!': 'tr',
'!langname!': 'Türkçe',
'"update" is an optional expression like "field1=\'newvalue\'". You cannot update or delete the results of a JOIN': '"güncelle" ("update") "field1=\'yenideğer\'" gibi isteğe bağlı bir ifadedir. JOIN sonuçlarını güncelleyemez veya silemezsiniz.',
'%s %%(shop)': '%s %%(shop)',
'%s %%(shop[0])': '%s %%(shop[0])',
'%s %%{quark[0]}': '%s %%{quark[0]}',
'%s %%{shop[0]}': '%s %%{shop[0]}',
'%s %%{shop}': '%s %%{shop}',
'%s selected': '%s selected',
'%Y-%m-%d': '%d-%m-%Y',
'%Y-%m-%d %H:%M:%S': '%d-%m-%Y %H:%M:%S',
'@markmin\x01**Hello World**': '**Merhaba Dünya**',
'@markmin\x01An error occured, please [[reload %s]] the page': 'Bir hata oluştu, lütfen sayfayı [[yenileyin yükleyin %s]] ',
'About': 'Hakkında',
'Access Control': 'Erişim Denetimi',
'Administrative Interface': 'Yönetim Arayüzü',
'Ajax Recipes': 'Ajax Tarifleri',
'An error occured, please %s the page': 'Bir hata meydana geldi, lütfen sayfayı %s',
'Apply changes': 'Değişiklikleri uygula',
'Are you sure you want to delete this object?': 'Bu nesneyi silmek istediğinden emin misin?',
'Available Databases and Tables': 'Kullanılabilir Veritabanları ve Tablolar',
'Buy this book': 'Bu kitabı satın alın',
'cache': 'zula',
'Cannot be empty': 'Boş bırakılamaz',
'Change password': 'Parolayı değiştir',
'Check to delete': 'Silmek için denetle',
'Client IP': 'İstemci IP',
'Community': 'Topluluk',
'Components and Plugins': 'Bileşenler ve Eklentiler',
'Controller': 'Denetçi',
'Copyright': 'Telif',
'Created By': 'Tasarlayan',
'Created On': 'Oluşturma tarihi',
'customize me!': 'burayı değiştir!',
'Database': 'Veritabanı',
'Database %s select': '%s veritabanı seç',
'Database Administration (appadmin)': 'Veritabanı Yönetimi (appadmin)',
'db': 'db',
'DB Model': 'DB Modeli',
'Delete:': 'Sil:',
'Demo': 'Tanıtım',
'Deployment Recipes': 'Yayınlama tarifleri',
'Description': 'Açıklama',
'design': 'tasarım',
'Documentation': 'Kitap',
"Don't know what to do?": 'Neleri nasıl yapacağını bilmiyor musun?',
'Download': 'İndir',
'E-mail': 'E-posta',
'Email and SMS': 'E-posta ve kısa mesaj (SMS)',
'enter a value': 'bir değer giriniz',
'enter an integer between %(min)g and %(max)g': '%(min)g ve %(max)g arasında bir sayı girin',
'enter date and time as %(format)s': 'tarih ve saati %(format)s biçiminde girin',
'Errors': 'Hatalar',
'Errors in form, please check it out.': 'Formda hatalar var, lütfen kontrol edin.',
'export as csv file': 'csv dosyası olarak dışa aktar',
'FAQ': 'SSS',
'First name': 'Ad',
'Forgot username?': 'Kullanıcı adını mı unuttun?',
'Forms and Validators': 'Biçimler ve Doğrulayıcılar',
'Free Applications': 'Ücretsiz uygulamalar',
'Giriş': 'Giriş',
'Graph Model': 'Grafik Modeli',
'Group %(group_id)s created': '%(group_id)s takımı oluşturuldu',
'Group ID': 'Takım ID',
'Group uniquely assigned to user %(id)s': 'Grup özgün olarak %(id)s kullanıcılara atandı',
'Groups': 'Gruplar',
'Hello World': 'Merhaba Dünya',
'Hello World ## comment': 'Merhaba Dünya ## yorum ',
'Hello World## comment': 'Merhaba Dünya## yorum ',
'Home': 'Anasayfa',
'How did you get here?': 'Bu sayfayı görüntüleme uğruna neler mi oldu?',
'import': 'import',
'Import/Export': 'Dışa/İçe Aktar',
'Introduction': 'Giriş',
'Invalid email': 'Yanlış eposta',
'Is Active': 'Etkin',
'Kayıt ol': 'Kayıt ol',
'Last name': 'Soyad',
'Layout': 'Şablon',
'Layout Plugins': 'Şablon Eklentileri',
'Layouts': 'Şablonlar',
'Live Chat': 'Canlı Sohbet',
'Logged in': 'Giriş yapıldı',
'Logged out': 'Çıkış yapıldı',
'Login': 'Giriş',
'Logout': 'Terket',
'Lost Password': 'Şifremi unuttum',
'Lost password?': 'Şifrenizi mi unuttunuz?',
'Menu Model': 'Model Menü',
'Modified By': 'Değiştiren',
'Modified On': 'Değiştirilme tarihi',
'My Sites': 'Sitelerim',
'Name': 'İsim',
'New password': 'Yeni parola',
'New Record': 'Yeni Kayıt',
'Object or table name': 'Nesne ya da tablo adı',
'Old password': 'Eski parola',
'Online examples': 'Canlı örnekler',
'or import from csv file': 'veya csv dosyasından içe aktar',
'Origin': 'Asıl',
'Other Plugins': 'Diğer eklentiler',
'Other Recipes': 'Diğer Tarifler',
'Overview': 'Göz gezdir',
'Password': 'Parola',
"Password fields don't match": 'Parolalar uyuşmuyor',
'please input your password again': 'lütfen parolanızı tekrar girin',
'Plugins': 'Eklentiler',
'Powered by': 'Yazılım Temeli',
'Preface': 'Önsöz',
'Profile': 'Profil',
'pygraphviz library not found': 'pygraphviz library not found',
'Python': 'Python',
'Query:': 'Sorgu:',
'Quick Examples': 'Hızlı Örnekler',
'Recipes': 'Tarifeler',
'Record ID': 'Kayıt ID',
'Register': 'Kayıt ol',
'Registration identifier': 'Kayıt belirleyicisi',
'Registration key': 'Kayıt anahtarı',
'Registration successful': 'Kayıt başarılı',
'reload': 'yeniden yükle',
'Remember me (for 30 days)': 'Beni hatırla (30 gün)',
'Request reset password': 'Parolanı sıfırla',
'Reset Password key': 'Parola anahtarını sıfırla',
'Role': 'Rol',
'Rows in Table': 'Tablodaki Satırlar',
'Save model as...': 'Modeli farklı kaydet...',
'Semantic': 'Anlamsal',
'Services': 'Hizmetler',
'state': 'durum',
'Stylesheet': 'Stil Şablonu',
'submit': 'gönder',
'Submit': 'Gönder',
'Support': 'Destek',
'Table': 'Tablo',
'The "query" is a condition like "db.table1.field1==\'value\'". Something like "db.table1.field1==db.table2.field2" results in a SQL JOIN.': '"sorgulama" "db.table1.field1==\'değer\'" şeklinde bir durumu ifade eder. SQL birleştirmede (JOIN) "db.table1.field1==db.table2.field2" şeklindedir.',
'The Core': 'Çekirdek',
'The output of the file is a dictionary that was rendered by the view %s': 'Son olarak fonksiyonların vs. işlenip %s dosyasıyla tasarıma yedirilmesiyle sayfayı görüntüledin',
'The Views': 'Görünümler',
'This App': 'Bu Uygulama',
'This email already has an account': 'Bu e-postaya ait bir hesap zaten var',
'Timestamp': 'Zaman damgası',
'Twitter': 'Twitter',
'Update:': 'Güncelle:',
'Use (...)&(...) for AND, (...)|(...) for OR, and ~(...) for NOT to build more complex queries.': 'Karmaşık sorgularda Ve (AND) için (...)&(...) kullanın, Veya (OR) için (...)|(...) kullanın ve DEĞİL (NOT) için ~(...) kullanın. ',
'User %(id)s Logged-in': '%(id)s Giriş yaptı',
'User %(id)s Logged-out': '%(id)s çıkış yaptı',
'User %(id)s Password reset': 'Kullanıcı %(id)s Parolasını sıfırla',
'User %(id)s Registered': '%(id)s Kayıt oldu',
'User ID': 'Kullanıcı ID',
'value already in database or empty': 'değer boş ya da veritabanında zaten mevcut',
'Verify Password': 'Parolanı Onayla',
'Videos': 'Videolar',
'View': 'Görünüm',
'Welcome': 'Hoşgeldin',
'Welcome to web2py!': "web2py'ye hoşgeldiniz!",
'Which called the function %s located in the file %s': 'Bu ziyaretle %s fonksiyonunu %s dosyasından çağırmış oldun ',
'Working...': 'Çalışıyor...',
'You are successfully running web2py': 'web2py çatısını çalıştırmayı başardın',
'You can modify this application and adapt it to your needs': 'Artık uygulamayı istediğin gibi düzenleyebilirsin!',
'You visited the url %s': '%s adresini ziyaret ettin',
'invalid controller': 'geçersiz denetleyici',
}
| mit |
13k/pygmentize | vendor/pygments/formatters/bbcode.py | 75 | 3314 | # -*- coding: utf-8 -*-
"""
pygments.formatters.bbcode
~~~~~~~~~~~~~~~~~~~~~~~~~~
BBcode formatter.
:copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.formatter import Formatter
from pygments.util import get_bool_opt
__all__ = ['BBCodeFormatter']
class BBCodeFormatter(Formatter):
"""
Format tokens with BBcodes. These formatting codes are used by many
bulletin boards, so you can highlight your sourcecode with pygments before
posting it there.
This formatter has no support for background colors and borders, as there
are no common BBcode tags for that.
Some board systems (e.g. phpBB) don't support colors in their [code] tag,
so you can't use the highlighting together with that tag.
    Text in a [code] tag is usually shown with a monospace font (which this
    formatter can enable with the ``monofont`` option), and spaces (which you
    need for indentation) are not removed.
Additional options accepted:
`style`
The style to use, can be a string or a Style subclass (default:
``'default'``).
`codetag`
If set to true, put the output into ``[code]`` tags (default:
``false``)
`monofont`
If set to true, add a tag to show the code with a monospace font
(default: ``false``).
"""
name = 'BBCode'
aliases = ['bbcode', 'bb']
filenames = []
def __init__(self, **options):
Formatter.__init__(self, **options)
self._code = get_bool_opt(options, 'codetag', False)
self._mono = get_bool_opt(options, 'monofont', False)
self.styles = {}
self._make_styles()
def _make_styles(self):
for ttype, ndef in self.style:
start = end = ''
if ndef['color']:
start += '[color=#%s]' % ndef['color']
end = '[/color]' + end
if ndef['bold']:
start += '[b]'
end = '[/b]' + end
if ndef['italic']:
start += '[i]'
end = '[/i]' + end
if ndef['underline']:
start += '[u]'
end = '[/u]' + end
# there are no common BBcodes for background-color and border
self.styles[ttype] = start, end
def format_unencoded(self, tokensource, outfile):
if self._code:
outfile.write('[code]')
if self._mono:
outfile.write('[font=monospace]')
lastval = ''
lasttype = None
for ttype, value in tokensource:
while ttype not in self.styles:
ttype = ttype.parent
if ttype == lasttype:
lastval += value
else:
if lastval:
start, end = self.styles[lasttype]
outfile.write(''.join((start, lastval, end)))
lastval = value
lasttype = ttype
if lastval:
start, end = self.styles[lasttype]
outfile.write(''.join((start, lastval, end)))
if self._mono:
outfile.write('[/font]')
if self._code:
outfile.write('[/code]')
if self._code or self._mono:
outfile.write('\n')
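# Hedged usage sketch (not part of the original file): BBCodeFormatter is a
# regular pygments formatter, so it plugs straight into pygments.highlight().
def _bbcode_demo():  # hypothetical helper, illustration only
    from pygments import highlight
    from pygments.lexers import PythonLexer
    # codetag=True wraps the result in [code]...[/code]
    return highlight('print 42', PythonLexer(), BBCodeFormatter(codetag=True))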
| mit |
LarsDu/DeepNuc | deepnuc/nucbinaryclassifier.py | 2 | 15464 | import tensorflow as tf
import numpy as np
import sklearn.metrics as metrics
#from databatcher import DataBatcher
import nucconvmodel
#import dubiotools as dbt
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pprint
from itertools import cycle
import os
import sys
#Logging imports
from logger import Logger
from nucinference import NucInference
from collections import OrderedDict
class NucBinaryClassifier(NucInference):
use_onehot_labels = True
def __init__(self,
sess,
train_batcher,
test_batcher,
num_epochs,
learning_rate,
batch_size,
seq_len,
save_dir,
keep_prob=0.5,
beta1=0.9,
concat_revcom_input=False,
nn_method_key="inferenceA",
pos_index=1):
"""NucBinaryClassifier encapsulates training and data
        evaluation for a binary classification model.
:param sess: tf.Session() object
:param train_batcher: DataBatcher object for training set
:param test_batcher: DataBatcher object for test set
:param num_epochs: Number of epoch cycles to perform training
:param learning_rate: Learning rate
:param batch_size: Mini-batch pull size
:param seq_len: Sequence length
:param save_dir: Root save directory for binary classification model
:param keep_prob: Probability of keeping weight for dropout
regularization
:param beta1: Beta1 parameter for AdamOptimizer
:param concat_revcom_input: If true, concatenate reverse
complement of nucleotide sequence to input vector
:param nn_method_key: Dictionary key for inference
method found in nucconvmodels.py file. Determines which model
to use. Example: "inferenceA" will run nucconvmodels.inferenceA
:param pos_index: The index to use for the positive class
(defaults to 1)
:returns: a NucBinaryClassifier object
:rtype: NucBinaryClassifier
"""
super(NucBinaryClassifier, self).__init__(sess,
train_batcher,
test_batcher,
num_epochs,
learning_rate,
batch_size,
seq_len,
save_dir,
keep_prob,
beta1,
concat_revcom_input,
                                                  nn_method_key=nn_method_key)
if self.train_batcher.num_classes != 2:
print "Error, more than two classes detected in train batcher"
else:
self.num_classes = 2
#The index for the label that should be considered the positive class
self.pos_index=pos_index
self.save_on_epoch = 5
def build_model(self):
self.dna_seq_placeholder = tf.placeholder(tf.float32,
shape=[None,self.seq_len,4],
name="dna_seq")
self.labels_placeholder = tf.placeholder(tf.float32,
shape=[None, self.num_classes],
name="labels")
self.keep_prob_placeholder = tf.placeholder(tf.float32,name="keep_prob")
self.logits, self.network = self.nn_method(self.dna_seq_placeholder,
self.keep_prob_placeholder,
self.num_classes)
self.probs = tf.nn.softmax(self.logits)
self.loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=self.labels_placeholder,
logits=self.logits))
'''
        Calculate metrics. num_true_positives is the number of true positives for the current batch.
        The table below shows the resulting index when tf.argmax is applied:
+-----+-----------+---------+
| | Classifier| Label |
+-----+-----------+---------+
| TP | 1 | 1 |
+-----+-----------+---------+
| FP | 1 | 0 |
+-----+-----------+---------+
| TN | 0 | 0 |
+-----+-----------+---------+
| FN | 0 | 1 |
+-----+-----------+---------+
Precision = TP/(TP+FP)
Recall = TP/(TP+FN)
F1-score = 2*(Prec*Rec)/(Prec+Rec)
        # Note: the tp, fp, tn, fn ops defined below are currently unused;
        # these metrics are instead computed with sklearn (see calc_classifier_metrics).
'''
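        # Worked example (illustration only, numbers are hypothetical):
        # with TP=8, FP=2, FN=4 the formulas above give
        #   Precision = 8/(8+2)  = 0.800
        #   Recall    = 8/(8+4)  = 0.667
        #   F1-score  = 2*(0.800*0.667)/(0.800+0.667) ~= 0.727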
#correct = TN+TP #Used for calculating accuracy
self.logits_ind = tf.argmax(self.logits,1)
self.labels_ind = tf.argmax(self.labels_placeholder,1)
#Create max_mask of logits (ie: [-.5,.5] --> [0 1]. Note logits have
# shape [batch_size * num_classes= 2]
#self.inverse_logits_col = tf.ones_like(self.logits_ind) - self.logits_ind
#self.max_mask_logits = tf.concat([self.inverse_logits_col,self.logits_ind],1)
#True positives where logits_ind+labels_ind == 2
#True negatives where logits_ind+labels_ind == 0
self.sum_ind = tf.add(self.logits_ind,self.labels_ind)
self.true_positives = tf.equal(self.sum_ind,2*tf.ones_like(self.sum_ind)) #bool
self.num_true_positives =tf.reduce_sum(tf.cast(self.true_positives, tf.int32))
#For FP classifier index > label index
self.false_positives=tf.greater(self.logits_ind,self.labels_ind)
self.num_false_positives = tf.reduce_sum(tf.cast(self.false_positives, tf.int32))
self.true_negatives = tf.equal(self.sum_ind,tf.zeros_like(self.sum_ind)) #bool
self.num_true_negatives= tf.reduce_sum(tf.cast(self.true_negatives,tf.int32))
#For FN classifier index < label index
self.false_negatives=tf.less(self.logits_ind,self.labels_ind)
self.num_false_negatives = tf.reduce_sum(tf.cast(self.false_negatives,tf.int32))
#num correct can be used to calculate accuracy
self.correct = tf.equal(self.logits_ind,self.labels_ind)
self.num_correct= tf.reduce_sum(tf.cast(self.correct, tf.int32))
self.relevance =self.network.relevance_backprop(tf.multiply(self.logits,
self.labels_placeholder))
'''Write and consolidate summaries'''
self.loss_summary = tf.summary.scalar('loss',self.loss)
self.summary_writer = tf.summary.FileWriter(self.summary_dir,self.sess.graph)
self.summary_op = tf.summary.merge([self.loss_summary])
#Note: Do not use tf.summary.merge_all() here. This will break encapsulation for
# cross validation and lead to crashes when training multiple models
# Add gradient ops to graph with learning rate
self.train_op = tf.train.AdamOptimizer(self.learning_rate,
beta1=self.beta1).minimize(self.loss)
self.vars = tf.trainable_variables()
self.var_names = [var.name for var in self.vars]
#print "Trainable variables:\n"
#for vname in self.var_names:
# print vname
self.saver = tf.train.Saver()
self.init_op = tf.global_variables_initializer()
#Important note: Restoring model does not require init_op.
#In fact calling tf.global_variables_initializer() after loading a model
#will overwrite loaded weights
self.sess.run(self.init_op)
self.load(self.checkpoint_dir)
def eval_model_metrics(self,
batcher,
save_plots=False,
image_name ='metrics.png',
eval_batch_size=50):
"""
Note: This method only works for binary classification
        as auPRC and auROC graphs only apply to binary classification problems.
TODO: Modify this code to perform auROC generation
for one-vs-all in the case of multiclass classification.
"""
#Ref: http://scikit-learn.org/stable/modules/model_evaluation.html#roc-metrics
##auROC calculations
        #Pull whole batches first, then single pulls for the remainder, so exactly one full epoch is evaluated
all_labels = np.zeros((batcher.num_records,self.num_classes), dtype = np.float32)
all_probs = np.zeros((batcher.num_records,self.num_classes), dtype = np.float32)
#num_correct = 0 #counts number of correct predictions
num_whole_pulls = batcher.num_records//eval_batch_size
num_single_pulls = batcher.num_records%eval_batch_size
num_steps = num_whole_pulls+num_single_pulls
for i in range(num_steps):
if i<num_whole_pulls:
batch_size=eval_batch_size
else:
batch_size=1
labels_batch, dna_seq_batch = batcher.pull_batch(batch_size)
feed_dict = {
self.dna_seq_placeholder:dna_seq_batch,
self.labels_placeholder:labels_batch,
self.keep_prob_placeholder:1.0
}
cur_prob= self.sess.run(self.probs,feed_dict=feed_dict)
#Fill labels array
if batch_size > 1:
start_ind = batch_size*i
elif batch_size == 1:
start_ind = num_whole_pulls*eval_batch_size+(i-num_whole_pulls)
            else:
                print "Error: unreachable batch-size branch"
all_labels[start_ind:start_ind+batch_size,:] = labels_batch
all_probs[start_ind:start_ind+batch_size,:] = cur_prob
#Calculate metrics and save results in a dict
md = self.calc_classifier_metrics(all_labels,all_probs)
md["epoch"]=self.epoch
md["step"]=self.step
#print "Testing accuracy",float(num_correct)/float(batcher.num_records)
print 'Num examples: %d Num correct: %d Accuracy: %0.04f' % \
(batcher.num_records, md["num_correct"], md["accuracy"])+'\n'
if save_plots:
###Plot some metrics
plot_colors = cycle(['cyan','blue','orange','teal'])
#print "Labels shape",all_labels.shape
#print "Probs shape",all_probs.shape
#print "Preds shape",all_preds.shape
#Generate auROC plot axes
fig1,ax1 = plt.subplots(2)
fig1.subplots_adjust(bottom=0.2)
ax1[0].plot([0,1],[0,1],color='navy',lw=2,linestyle='--')
ax1[0].set_xbound(0.0,1.0)
ax1[0].set_ybound(0.0,1.05)
ax1[0].set_xlabel('False Positive Rate')
ax1[0].set_ylabel('True Positive Rate')
ax1[0].set_title('auROC')
#plt.legend(loc='lower right')
ax1[0].plot(md["fpr"],md["tpr"],color=plot_colors.next(),
lw=2,linestyle='-',label='auROC curve (area=%0.2f)' % md["auroc"] )
#Generate auPRC plot axes
#ax1[1].plot([0,1],[1,1],color='royalblue',lw=2,linestyle='--')
ax1[1].set_xlabel('Precision')
ax1[1].set_ylabel('Recall')
ax1[1].set_title('auPRC')
ax1[1].plot(md["thresh_precision"],md["thresh_recall"],color=plot_colors.next(),
lw=2,linestyle='-',label='auPRC curve (area=%0.2f)' % md["auprc"] )
ax1[1].set_xbound(0.0,1.0)
ax1[1].set_ybound(0.0,1.05)
#Note: avg prec score is the area under the prec recall curve
#Note: Presumably class 1 (pos examples) should be the only f1 score we focus on
#print "F1 score for class",i,"is",f1_score
plt.tight_layout()
plt_fname = self.save_dir+os.sep+image_name
print "Saving auROC image to",plt_fname
fig1.savefig(plt_fname)
#Return metrics dictionary
return md
def calc_classifier_metrics(self,all_labels,all_probs):
"""Calculate some metrics for the dataset
return dictionary with metrics
        :param all_labels: nx2 labels
        :param all_probs: nx2 prob values
:returns: dictionary of metrics
:rtype: dict()
"""
num_records = all_probs.shape[0]
all_preds = np.zeros((num_records, self.num_classes),dtype = np.float32)
all_preds[np.arange(num_records),all_probs.argmax(1)] = 1
#Calculate accuracy
num_correct = metrics.accuracy_score(all_labels[:,self.pos_index],all_preds[:,self.pos_index],normalize=False)
accuracy = num_correct/float(all_preds.shape[0])
###Calculate auROC
#http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
#metrics.roc_curve(y_true, y_score[, ...]) #y_score is probs
fpr,tpr,_ = metrics.roc_curve(all_labels[:,self.pos_index],
all_probs[:,self.pos_index],
pos_label=self.pos_index)
auroc = metrics.auc(fpr,tpr)
thresh_precision,thresh_recall,prc_thresholds = metrics.precision_recall_curve(
all_labels[:,self.pos_index],
all_probs[:,self.pos_index])
#Calculate precision, recall, and f1-score for threshold = 0.5
#confusion_matrix = metrics.confusion_matrix(all_labels[:,self.pos_index],all_probs[:,self.pos_index])
precision, recall, f1_score, support = metrics.precision_recall_fscore_support(
all_labels[:,self.pos_index],
all_preds[:,self.pos_index],
pos_label=self.pos_index)
precision = precision[self.pos_index]
recall = recall[self.pos_index]
f1_score = f1_score[self.pos_index]
support = support[self.pos_index]
auprc = metrics.average_precision_score(all_labels[:,self.pos_index],
all_probs[:,self.pos_index])
return OrderedDict([
("num_correct",num_correct),
("accuracy",accuracy),
("auroc",auroc),
("auprc",auprc),
("fpr",fpr),
("tpr",tpr),
("precision",precision),
("recall",recall),
("f1_score",f1_score),
("support",support),
("thresh_precision",thresh_precision),
("thresh_recall",thresh_recall),
("prc_thresholds",prc_thresholds)
])
| gpl-3.0 |
xgwubin/vitess | py/vtproto/vtctldata_pb2.py | 6 | 4617 | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: vtctldata.proto
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
import logutil_pb2 as logutil__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='vtctldata.proto',
package='vtctldata',
syntax='proto3',
serialized_pb=b'\n\x0fvtctldata.proto\x12\tvtctldata\x1a\rlogutil.proto\"X\n\x1a\x45xecuteVtctlCommandRequest\x12\x0c\n\x04\x61rgs\x18\x01 \x03(\t\x12\x16\n\x0e\x61\x63tion_timeout\x18\x02 \x01(\x03\x12\x14\n\x0clock_timeout\x18\x03 \x01(\x03\"<\n\x1b\x45xecuteVtctlCommandResponse\x12\x1d\n\x05\x65vent\x18\x01 \x01(\x0b\x32\x0e.logutil.Eventb\x06proto3'
,
dependencies=[logutil__pb2.DESCRIPTOR,])
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_EXECUTEVTCTLCOMMANDREQUEST = _descriptor.Descriptor(
name='ExecuteVtctlCommandRequest',
full_name='vtctldata.ExecuteVtctlCommandRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='args', full_name='vtctldata.ExecuteVtctlCommandRequest.args', index=0,
number=1, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='action_timeout', full_name='vtctldata.ExecuteVtctlCommandRequest.action_timeout', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='lock_timeout', full_name='vtctldata.ExecuteVtctlCommandRequest.lock_timeout', index=2,
number=3, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=45,
serialized_end=133,
)
_EXECUTEVTCTLCOMMANDRESPONSE = _descriptor.Descriptor(
name='ExecuteVtctlCommandResponse',
full_name='vtctldata.ExecuteVtctlCommandResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='event', full_name='vtctldata.ExecuteVtctlCommandResponse.event', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=135,
serialized_end=195,
)
_EXECUTEVTCTLCOMMANDRESPONSE.fields_by_name['event'].message_type = logutil__pb2._EVENT
DESCRIPTOR.message_types_by_name['ExecuteVtctlCommandRequest'] = _EXECUTEVTCTLCOMMANDREQUEST
DESCRIPTOR.message_types_by_name['ExecuteVtctlCommandResponse'] = _EXECUTEVTCTLCOMMANDRESPONSE
ExecuteVtctlCommandRequest = _reflection.GeneratedProtocolMessageType('ExecuteVtctlCommandRequest', (_message.Message,), dict(
DESCRIPTOR = _EXECUTEVTCTLCOMMANDREQUEST,
__module__ = 'vtctldata_pb2'
# @@protoc_insertion_point(class_scope:vtctldata.ExecuteVtctlCommandRequest)
))
_sym_db.RegisterMessage(ExecuteVtctlCommandRequest)
ExecuteVtctlCommandResponse = _reflection.GeneratedProtocolMessageType('ExecuteVtctlCommandResponse', (_message.Message,), dict(
DESCRIPTOR = _EXECUTEVTCTLCOMMANDRESPONSE,
__module__ = 'vtctldata_pb2'
# @@protoc_insertion_point(class_scope:vtctldata.ExecuteVtctlCommandResponse)
))
_sym_db.RegisterMessage(ExecuteVtctlCommandResponse)
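# Hedged usage sketch (not part of the generated file): the registered
# messages behave like any proto3 class; the field names below come from the
# descriptors above.
#   req = ExecuteVtctlCommandRequest(args=['ListAllTablets'], action_timeout=30)
#   payload = req.SerializeToString()
#   copy = ExecuteVtctlCommandRequest.FromString(payload)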
import abc
from grpc.beta import implementations as beta_implementations
from grpc.early_adopter import implementations as early_adopter_implementations
from grpc.framework.alpha import utilities as alpha_utilities
from grpc.framework.common import cardinality
from grpc.framework.interfaces.face import utilities as face_utilities
# @@protoc_insertion_point(module_scope)
| bsd-3-clause |
Guneet-Dhillon/mxnet | python/setup.py | 14 | 3481 | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, exec-used
"""Setup mxnet package."""
from __future__ import absolute_import
import os
import sys
# need to use distutils.core for correct placement of cython dll
kwargs = {}
if "--inplace" in sys.argv:
from distutils.core import setup
from distutils.extension import Extension
else:
from setuptools import setup
from setuptools.extension import Extension
kwargs = {'install_requires': ['numpy', 'requests', 'graphviz'], 'zip_safe': False}
from setuptools import find_packages
with_cython = False
if '--with-cython' in sys.argv:
with_cython = True
sys.argv.remove('--with-cython')
# We can not import `mxnet.info.py` in setup.py directly since mxnet/__init__.py
# Will be invoked which introduces dependences
CURRENT_DIR = os.path.dirname(__file__)
libinfo_py = os.path.join(CURRENT_DIR, 'mxnet/libinfo.py')
libinfo = {'__file__': libinfo_py}
exec(compile(open(libinfo_py, "rb").read(), libinfo_py, 'exec'), libinfo, libinfo)
LIB_PATH = libinfo['find_lib_path']()
__version__ = libinfo['__version__']
def config_cython():
"""Try to configure cython and return cython configuration"""
if not with_cython:
return []
# pylint: disable=unreachable
if os.name == 'nt':
print("WARNING: Cython is not supported on Windows, will compile without cython module")
return []
try:
from Cython.Build import cythonize
# from setuptools.extension import Extension
if sys.version_info >= (3, 0):
subdir = "_cy3"
else:
subdir = "_cy2"
ret = []
path = "mxnet/cython"
if os.name == 'nt':
library_dirs = ['mxnet', '../build/Release', '../build']
libraries = ['libmxnet']
else:
library_dirs = None
libraries = None
for fn in os.listdir(path):
if not fn.endswith(".pyx"):
continue
ret.append(Extension(
"mxnet/%s/.%s" % (subdir, fn[:-4]),
["mxnet/cython/%s" % fn],
include_dirs=["../include/", "../nnvm/include"],
library_dirs=library_dirs,
libraries=libraries,
language="c++"))
return cythonize(ret)
except ImportError:
print("WARNING: Cython is not installed, will compile without cython module")
return []
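# Hedged usage note (not in the original file): the custom flags above are
# consumed before setup() runs, so a cython-enabled in-place build would be
#   python setup.py build_ext --inplace --with-cython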
setup(name='mxnet',
version=__version__,
description=open(os.path.join(CURRENT_DIR, 'README.md')).read(),
packages=find_packages(),
data_files=[('mxnet', [LIB_PATH[0]])],
url='https://github.com/dmlc/mxnet',
ext_modules=config_cython(),
**kwargs)
| apache-2.0 |
lirui-apache/hive | hcatalog/src/test/e2e/templeton/inpdir/xmlmapper.py | 11 | 1320 | #!/usr/bin/env python
# xmlmapper.py
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
parts = []
title = "Unknown"
inText = False
for line in sys.stdin:
line = line.strip()
if line.find( "<title>" )!= -1:
title = line[ len( "<title>" ) : -len( "</title>" ) ]
if line.find( "<text>" ) != -1:
inText = True
continue
if line.find( "</text>" ) != -1:
inText = False
continue
if inText:
        parts.append( line )
text = ' '.join( parts )
text = text[0:10] + "..." + text[-10:]
print '[[%s]]\t[[%s]]' % (title, text)
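# Worked example (illustration only): given the stdin lines
#   <title>Foo</title>
#   <text>
#   hello streaming world
#   </text>
# the mapper joins the body, keeps only its first and last ten characters,
# and emits:  [[Foo]]\t[[hello stre...ming world]]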
| apache-2.0 |
jolid/script.module.donnie | lib/donnie/furk.py | 1 | 6496 | import urllib2, urllib, sys, os, re, random, copy
from BeautifulSoup import BeautifulSoup, Tag, NavigableString
import xbmc,xbmcplugin,xbmcgui,xbmcaddon
from t0mm0.common.net import Net
from t0mm0.common.addon import Addon
from scrapers import CommonScraper
net = Net()
try:
import json
except:
# pre-frodo and python 2.4
import simplejson as json
''' ###########################################################
Usage and helper functions
############################################################'''
class FurkServiceSracper(CommonScraper):
def __init__(self, settingsid, DB=None, REG=None):
if DB:
self.DB=DB
if REG:
self.REG=REG
self.addon_id = 'script.module.donnie'
self.service='furk'
self.name = 'furk.net'
self.raiseError = False
self.referrer = 'http://www.furk.net/'
self.base_url = 'https://api.furk.net/api/'
self.user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3'
self.provides = []
self.settingsid = settingsid
self._loadsettings()
self.settings_addon = self.addon
def _getShows(self, silent=False):
self.log('Do Nothing here')
def _getRecentShows(self, silent=False):
self.log('Do Nothing here')
def _getEpisodes(self, showid, show, url, pDialog, percent, silent):
self.log('Do Nothing here')
def _getMovies(self, silent=False):
self.log('Do Nothing here')
def _getRecentMovies(self, silent):
self.log('Do Nothing here')
def _setKey(self, api_key):
xbmcaddon.Addon(id='script.module.donnie').setSetting('furk-apikey', api_key)
def _getKey(self):
api_key = xbmcaddon.Addon(id='script.module.donnie').getSetting('furk-apikey')
if api_key == '':
return None
return api_key
def cleanQuery(self, query):
self.log('Cleaning furk search string')
cleaned = query
if re.search('\\(\\d\\d\\d\\d\\)$', cleaned):
cleaned = cleaned[0:len(cleaned)-7]
cleaned = cleaned.replace(":", '')
cleaned = cleaned.replace("'", '')
cleaned = cleaned.replace("-", ' ')
cleaned = cleaned.replace("_", ' ')
        self.log(cleaned)
return cleaned
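    # Worked example (illustration only, hypothetical title):
    #   cleanQuery("Kick-Ass: Part_2 (2013)")  ->  "Kick Ass Part 2"
    # the trailing year, the colon and the separators are stripped for searching.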
def _login(self):
api_key = self._getKey()
if api_key:
self.log('Using cached api key')
return api_key
loginurl = "%slogin/login" % self.base_url
login = self.getSetting('furk-username')
password = self.getSetting('furk-password')
post_dict = {"login": login, "pwd": password}
cookiejar = os.path.join(self.cookie_path,'furk.lwp')
try:
response = net.http_POST(loginurl, post_dict).content
data = json.loads(response)
status = data['status']
            api_key = data.get('api_key')
self._setKey(api_key)
self.log("Furk response: %s", response)
if status=="ok":
net.save_cookies(cookiejar)
else:
print 'Furk Account: login failed'
return api_key
except Exception, e:
print '**** Furk Error: %s' % e
pass
def _getStreams(self, episodeid=None, movieid=None):
api_key = self._login()
query = ""
if episodeid:
row = self.DB.query("SELECT rw_shows.showname, season, episode FROM rw_episodes JOIN rw_shows ON rw_shows.showid=rw_episodes.showid WHERE episodeid=?", [episodeid])
name = row[0].replace("'", "")
if re.search('\\(\\d\\d\\d\\d\\)$', row[0]):
name = name[0:len(name)-7]
season = row[1].zfill(2)
episode = row[2].zfill(2)
query = str("%s S%sE%s" % (name, season, episode))
elif movieid:
row = self.DB.query("SELECT movie, year FROM rw_movies WHERE imdb=? LIMIT 1", [movieid])
movie = self.cleanQuery(row[0])
query = "%s %s" %(movie, row[1])
streams = []
url = "%splugins/metasearch" % self.base_url
params = {"type": "video", "filter": "cached", "api_key": api_key, "q": query}
pagedata = net.http_POST(url, params).content
if pagedata=='':
return False
data = json.loads(pagedata)
try:
files = data['files']
for f in files:
if f['type'] == 'video':
raw_url = f['id']
name = f['name']
size = int(f['size']) / (1024 * 1024)
if size > 2000:
size = size / 1024
unit = 'GB'
else :
unit = 'MB'
self.getStreamByPriority('Furk - %s ([COLOR blue]%s %s[/COLOR])' %(name, size, unit), self.service + '://' + raw_url)
except Exception, e:
self.log("********Donnie Error: %s, %s" % (self.service, e))
self.DB.commit()
def getStreamByPriority(self, link, stream):
self.log(link)
host = 'furk.net'
SQL = "INSERT INTO rw_stream_list(stream, url, priority, machineid) " \
"SELECT ?, ?, priority, ? " \
"FROM rw_providers " \
"WHERE mirror=? and provider=?"
self.DB.execute(SQL, [link, stream, self.REG.getSetting('machine-id'), host, self.service])
def _getServicePriority(self, link):
self.log(link)
host = 'furk.net'
row = self.DB.query("SELECT priority FROM rw_providers WHERE mirror=? and provider=?", [host, self.service])
return row[0]
def _resolveStream(self, stream):
raw_url = stream.replace(self.service + '://', '')
resolved_url = ''
t_files = []
t_options = []
sdialog = xbmcgui.Dialog()
api_key = self._getKey()
params = {"type": "video", "id": raw_url, "api_key": api_key, 't_files': 1}
url = "%sfile/get" % self.base_url
pagedata = net.http_POST(url, params).content
if pagedata=='':
return False
#print pagedata
data = json.loads(str(pagedata))
try:
files = data['files'][0]['t_files']
for f in files:
if re.search('^video/', f['ct']):
size = int(f['size']) / (1024 * 1024)
if size > 2000:
size = size / 1024
unit = 'GB'
else :
unit = 'MB'
t_files.append("%s ([COLOR blue]%s %s[/COLOR])" %(f['name'], size, unit))
t_options.append(f['url_dl'])
file_select = sdialog.select('Select Furk Stream', t_files)
if file_select < 0:
return resolved_url
resolved_url = str(t_options[file_select])
except Exception, e:
self.log("********Donnie Error: %s, %s" % (self.service, e))
self.log("Furk retruned: %s", resolved_url, level=0)
return resolved_url
    def _resolveIMDB(self, uri): #Often needed if a site's movie index does not include imdb links but the movie page does
imdb = ''
print uri
pagedata = self.getURL(uri, append_base_url=True)
if pagedata=='':
return
imdb = re.search('http://www.imdb.com/title/(.+?)/', pagedata).group(1)
return imdb
def whichHost(self, host): #Sometimes needed
table = { 'Watch Blah' : 'blah.com',
'Watch Blah2' : 'blah2.com',
}
try:
host_url = table[host]
return host_url
except:
return 'Unknown'
| gpl-2.0 |
axinging/chromium-crosswalk | third_party/cython/src/Cython/Compiler/StringEncoding.py | 97 | 9235 | #
# Cython -- encoding related tools
#
import re
import sys
if sys.version_info[0] >= 3:
_unicode, _str, _bytes = str, str, bytes
IS_PYTHON3 = True
else:
_unicode, _str, _bytes = unicode, str, str
IS_PYTHON3 = False
empty_bytes = _bytes()
empty_unicode = _unicode()
join_bytes = empty_bytes.join
class UnicodeLiteralBuilder(object):
"""Assemble a unicode string.
"""
def __init__(self):
self.chars = []
def append(self, characters):
if isinstance(characters, _bytes):
# this came from a Py2 string literal in the parser code
characters = characters.decode("ASCII")
assert isinstance(characters, _unicode), str(type(characters))
self.chars.append(characters)
if sys.maxunicode == 65535:
def append_charval(self, char_number):
if char_number > 65535:
# wide Unicode character on narrow platform => replace
# by surrogate pair
char_number -= 0x10000
self.chars.append( unichr((char_number // 1024) + 0xD800) )
self.chars.append( unichr((char_number % 1024) + 0xDC00) )
else:
self.chars.append( unichr(char_number) )
else:
def append_charval(self, char_number):
self.chars.append( unichr(char_number) )
def append_uescape(self, char_number, escape_string):
self.append_charval(char_number)
def getstring(self):
return EncodedString(u''.join(self.chars))
def getstrings(self):
return (None, self.getstring())
class BytesLiteralBuilder(object):
"""Assemble a byte string or char value.
"""
def __init__(self, target_encoding):
self.chars = []
self.target_encoding = target_encoding
def append(self, characters):
if isinstance(characters, _unicode):
characters = characters.encode(self.target_encoding)
assert isinstance(characters, _bytes), str(type(characters))
self.chars.append(characters)
def append_charval(self, char_number):
self.chars.append( unichr(char_number).encode('ISO-8859-1') )
def append_uescape(self, char_number, escape_string):
self.append(escape_string)
def getstring(self):
# this *must* return a byte string!
s = BytesLiteral(join_bytes(self.chars))
s.encoding = self.target_encoding
return s
def getchar(self):
# this *must* return a byte string!
return self.getstring()
def getstrings(self):
return (self.getstring(), None)
class StrLiteralBuilder(object):
"""Assemble both a bytes and a unicode representation of a string.
"""
def __init__(self, target_encoding):
self._bytes = BytesLiteralBuilder(target_encoding)
self._unicode = UnicodeLiteralBuilder()
def append(self, characters):
self._bytes.append(characters)
self._unicode.append(characters)
def append_charval(self, char_number):
self._bytes.append_charval(char_number)
self._unicode.append_charval(char_number)
def append_uescape(self, char_number, escape_string):
self._bytes.append(escape_string)
self._unicode.append_charval(char_number)
def getstrings(self):
return (self._bytes.getstring(), self._unicode.getstring())
class EncodedString(_unicode):
# unicode string subclass to keep track of the original encoding.
# 'encoding' is None for unicode strings and the source encoding
# otherwise
encoding = None
def __deepcopy__(self, memo):
return self
def byteencode(self):
assert self.encoding is not None
return self.encode(self.encoding)
def utf8encode(self):
assert self.encoding is None
return self.encode("UTF-8")
@property
def is_unicode(self):
return self.encoding is None
def contains_surrogates(self):
return string_contains_surrogates(self)
def string_contains_surrogates(ustring):
"""
Check if the unicode string contains surrogate code points
on a CPython platform with wide (UCS-4) or narrow (UTF-16)
Unicode, i.e. characters that would be spelled as two
separate code units on a narrow platform.
"""
for c in map(ord, ustring):
if c > 65535: # can only happen on wide platforms
return True
if 0xD800 <= c <= 0xDFFF:
return True
return False
class BytesLiteral(_bytes):
# bytes subclass that is compatible with EncodedString
encoding = None
def __deepcopy__(self, memo):
return self
def byteencode(self):
if IS_PYTHON3:
return _bytes(self)
else:
# fake-recode the string to make it a plain bytes object
return self.decode('ISO-8859-1').encode('ISO-8859-1')
def utf8encode(self):
assert False, "this is not a unicode string: %r" % self
def __str__(self):
"""Fake-decode the byte string to unicode to support %
formatting of unicode strings.
"""
return self.decode('ISO-8859-1')
is_unicode = False
char_from_escape_sequence = {
r'\a' : u'\a',
r'\b' : u'\b',
r'\f' : u'\f',
r'\n' : u'\n',
r'\r' : u'\r',
r'\t' : u'\t',
r'\v' : u'\v',
}.get
_c_special = ('\\', '??', '"') + tuple(map(chr, range(32)))
def _to_escape_sequence(s):
if s in '\n\r\t':
return repr(s)[1:-1]
elif s == '"':
return r'\"'
elif s == '\\':
return r'\\'
else:
# within a character sequence, oct passes much better than hex
return ''.join(['\\%03o' % ord(c) for c in s])
def _build_specials_replacer():
subexps = []
replacements = {}
for special in _c_special:
regexp = ''.join(['[%s]' % c.replace('\\', '\\\\') for c in special])
subexps.append(regexp)
replacements[special.encode('ASCII')] = _to_escape_sequence(special).encode('ASCII')
sub = re.compile(('(%s)' % '|'.join(subexps)).encode('ASCII')).sub
def replace_specials(m):
return replacements[m.group(1)]
def replace(s):
return sub(replace_specials, s)
return replace
_replace_specials = _build_specials_replacer()
def escape_char(c):
if IS_PYTHON3:
c = c.decode('ISO-8859-1')
if c in '\n\r\t\\':
return repr(c)[1:-1]
elif c == "'":
return "\\'"
n = ord(c)
if n < 32 or n > 127:
# hex works well for characters
return "\\x%02X" % n
else:
return c
def escape_byte_string(s):
"""Escape a byte string so that it can be written into C code.
Note that this returns a Unicode string instead which, when
encoded as ISO-8859-1, will result in the correct byte sequence
being written.
"""
s = _replace_specials(s)
try:
return s.decode("ASCII") # trial decoding: plain ASCII => done
except UnicodeDecodeError:
pass
if IS_PYTHON3:
s_new = bytearray()
append, extend = s_new.append, s_new.extend
for b in s:
if b >= 128:
extend(('\\%3o' % b).encode('ASCII'))
else:
append(b)
return s_new.decode('ISO-8859-1')
else:
l = []
append = l.append
for c in s:
o = ord(c)
if o >= 128:
append('\\%3o' % o)
else:
append(c)
return join_bytes(l).decode('ISO-8859-1')
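# Illustration (computed by hand from the rules above, Python 2 branch):
#   escape_byte_string('\xff"')  ->  u'\\377\\"'
# the quote is rewritten by _replace_specials and the non-ASCII byte is
# octal-escaped, so encoding the result as ISO-8859-1 is safe for C code.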
def split_string_literal(s, limit=2000):
# MSVC can't handle long string literals.
if len(s) < limit:
return s
else:
start = 0
chunks = []
while start < len(s):
end = start + limit
if len(s) > end-4 and '\\' in s[end-4:end]:
end -= 4 - s[end-4:end].find('\\') # just before the backslash
while s[end-1] == '\\':
end -= 1
if end == start:
# must have been a long line of backslashes
end = start + limit - (limit % 2) - 4
break
chunks.append(s[start:end])
start = end
return '""'.join(chunks)
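# Illustration (assumption: a tiny limit chosen for readability):
#   split_string_literal('abcdefghij', limit=8)  ->  'abcdefgh""ij'
# adjacent C string literals concatenate, so the '""' joins are invisible
# to the compiler while each chunk stays under the MSVC length limit.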
def encode_pyunicode_string(s):
"""Create Py_UNICODE[] representation of a given unicode string.
"""
s = map(ord, s) + [0]
if sys.maxunicode >= 0x10000: # Wide build or Py3.3
utf16, utf32 = [], s
for code_point in s:
if code_point >= 0x10000: # outside of BMP
high, low = divmod(code_point - 0x10000, 1024)
utf16.append(high + 0xD800)
utf16.append(low + 0xDC00)
else:
utf16.append(code_point)
else:
utf16, utf32 = s, []
for code_unit in s:
if 0xDC00 <= code_unit <= 0xDFFF and utf32 and 0xD800 <= utf32[-1] <= 0xDBFF:
high, low = utf32[-1], code_unit
utf32[-1] = ((high & 0x3FF) << 10) + (low & 0x3FF) + 0x10000
else:
utf32.append(code_unit)
if utf16 == utf32:
utf16 = []
return ",".join(map(unicode, utf16)), ",".join(map(unicode, utf32))
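# Hedged illustration (not part of the original file): the same divmod
# arithmetic used above, applied to U+1F600, which lies outside the BMP.
def _demo_surrogate_pair():  # hypothetical helper, illustration only
    code_point = 0x1F600
    high, low = divmod(code_point - 0x10000, 1024)
    return (high + 0xD800, low + 0xDC00)  # -> (0xD83D, 0xDE00)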
| bsd-3-clause |
twitterdev/twitter-leaderboard | services/migrations/0001_initial.py | 1 | 1248 | # -*- coding: utf-8 -*-
# Generated by Django 1.9.1 on 2016-01-28 08:34
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UserProfile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('timezone', models.CharField(blank=True, max_length=100, null=True)),
('curator_auth_token', models.CharField(blank=True, max_length=40, null=True)),
('twitter_id', models.CharField(blank=True, max_length=25, null=True)),
('twitter_access_token', models.CharField(blank=True, max_length=75, null=True)),
('twitter_access_token_secret', models.CharField(blank=True, max_length=75, null=True)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='profile', to=settings.AUTH_USER_MODEL)),
],
),
]
| mit |
codingforfun/Olena-Mirror | swilena/python/complex2-misc.py | 2 | 2380 | #! /usr/bin/env python
# Copyright (C) 2010 EPITA Research and Development Laboratory (LRDE)
#
# This file is part of Olena.
#
# Olena is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation, version 2 of the License.
#
# Olena is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Olena. If not, see <http://www.gnu.org/licenses/>.
# \file python/mesh-complex-segm.py
# \brief Test complex2.
#
# See also Milena's tests/topo/complex.cc.
from swilena import *
# A 2-d (simplicial) complex and its adjacency graph.
#
# v0 e3 v3
# o-----------o v0----e3----v3
# / \ ,-----. / / \ | /
# / . \ \ t1/ / / \ t1 /
# e0 / / \ e1\ / / e4 e0. ,e1' `e4
# / /t0 \ \ ' / / t0 \ /
# / `-----' \ / / | \ /
# o-----------o v1----e2----v2
# v1 e2 v2
#
# v = vertex (0-face)
# e = edge (1-face)
# t = triangle (2-face)
## ---------------------- ##
## Complex construction. ##
## ---------------------- ##
c = complex2()
# 0-faces (points).
v0 = c.add_face()
v1 = c.add_face()
v2 = c.add_face()
v3 = c.add_face()
# 1-faces (segments).
e0 = c.add_face(-v1 + v0)
e1 = c.add_face(-v0 + v2)
e2 = c.add_face(-v2 + v1)
e3 = c.add_face(-v0 + v3)
e4 = c.add_face(-v3 + v2)
# 2-faces (triangles).
t0 = c.add_face( e0 + e1 + e2)
t1 = c.add_face(-e1 + e3 + e4)
print c
## ------------------ ##
## Handles and data. ##
## ------------------ ##
# Get the face data from (``static'') face handle E0.
face1 = e0.data()
# Face handle.
f = face_2(e0)
print f
print
# Get the face data from (``dynamic'') face handle AF.
face2 = f.data_1()
## ----------- ##
## Iteration. ##
## ----------- ##
# --------------- #
# Iterator on C. #
# --------------- #
# (Forward) Iterator on a complex (not complex_image), or more
# precisely on (all) the faces of complex C.
for f in c:
print f
# FIXME: Test more iterators.
| gpl-2.0 |
movmov/cc | vendor/Twisted-10.0.0/twisted/web/error.py | 52 | 7408 | # -*- test-case-name: twisted.web.test.test_error -*-
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Exception definitions for L{twisted.web}.
"""
import operator, warnings
from twisted.web import http
class Error(Exception):
"""
A basic HTTP error.
@type status: C{str}
@ivar status: Refers to an HTTP status code, for example L{http.NOT_FOUND}.
@type message: C{str}
@param message: A short error message, for example "NOT FOUND".
@type response: C{str}
@ivar response: A complete HTML document for an error page.
"""
def __init__(self, code, message=None, response=None):
"""
Initializes a basic exception.
@type code: C{str}
@param code: Refers to an HTTP status code, for example
L{http.NOT_FOUND}. If no C{message} is given, C{code} is mapped to a
descriptive string that is used instead.
@type message: C{str}
@param message: A short error message, for example "NOT FOUND".
@type response: C{str}
@param response: A complete HTML document for an error page.
"""
if not message:
try:
message = http.responses.get(int(code))
except ValueError:
# If code wasn't a stringified int, can't map the
# status code to a descriptive string so keep message
# unchanged.
pass
Exception.__init__(self, code, message, response)
self.status = code
self.message = message
self.response = response
def __str__(self):
return '%s %s' % (self[0], self[1])
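# Hedged example (not in the original file): http.responses maps status codes
# to their standard phrases, so assuming it maps 404 to "Not Found":
#   str(Error(404)) == '404 Not Found'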
class PageRedirect(Error):
"""
A request resulted in an HTTP redirect.
@type location: C{str}
@ivar location: The location of the redirect which was not followed.
"""
def __init__(self, code, message=None, response=None, location=None):
"""
Initializes a page redirect exception.
@type code: C{str}
@param code: Refers to an HTTP status code, for example
L{http.NOT_FOUND}. If no C{message} is given, C{code} is mapped to a
descriptive string that is used instead.
@type message: C{str}
@param message: A short error message, for example "NOT FOUND".
@type response: C{str}
@param response: A complete HTML document for an error page.
@type location: C{str}
@param location: The location response-header field value. It is an
absolute URI used to redirect the receiver to a location other than
the Request-URI so the request can be completed.
"""
if not message:
try:
message = http.responses.get(int(code))
except ValueError:
# If code wasn't a stringified int, can't map the
# status code to a descriptive string so keep message
# unchanged.
pass
if location and message:
message = "%s to %s" % (message, location)
Error.__init__(self, code, message, response)
self.location = location
class InfiniteRedirection(Error):
"""
HTTP redirection is occurring endlessly.
@type location: C{str}
@ivar location: The first URL in the series of redirections which was
not followed.
"""
def __init__(self, code, message=None, response=None, location=None):
"""
Initializes an infinite redirection exception.
@type code: C{str}
@param code: Refers to an HTTP status code, for example
L{http.NOT_FOUND}. If no C{message} is given, C{code} is mapped to a
descriptive string that is used instead.
@type message: C{str}
@param message: A short error message, for example "NOT FOUND".
@type response: C{str}
@param response: A complete HTML document for an error page.
@type location: C{str}
@param location: The location response-header field value. It is an
absolute URI used to redirect the receiver to a location other than
the Request-URI so the request can be completed.
"""
if not message:
try:
message = http.responses.get(int(code))
except ValueError:
# If code wasn't a stringified int, can't map the
# status code to a descriptive string so keep message
# unchanged.
pass
if location and message:
message = "%s to %s" % (message, location)
Error.__init__(self, code, message, response)
self.location = location
class UnsupportedMethod(Exception):
"""
Raised by a resource when faced with a strange request method.
RFC 2616 (HTTP 1.1) gives us two choices when faced with this situtation:
If the type of request is known to us, but not allowed for the requested
resource, respond with NOT_ALLOWED. Otherwise, if the request is something
we don't know how to deal with in any case, respond with NOT_IMPLEMENTED.
When this exception is raised by a Resource's render method, the server
will make the appropriate response.
This exception's first argument MUST be a sequence of the methods the
resource *does* support.
"""
allowedMethods = ()
def __init__(self, allowedMethods, *args):
Exception.__init__(self, allowedMethods, *args)
self.allowedMethods = allowedMethods
if not operator.isSequenceType(allowedMethods):
why = "but my first argument is not a sequence."
s = ("First argument must be a sequence of"
" supported methods, %s" % (why,))
raise TypeError, s
class SchemeNotSupported(Exception):
"""
The scheme of a URI was not one of the supported values.
"""
from twisted.web import resource as _resource
class ErrorPage(_resource.ErrorPage):
"""
Deprecated alias for L{twisted.web.resource.ErrorPage}.
"""
def __init__(self, *args, **kwargs):
warnings.warn(
"twisted.web.error.ErrorPage is deprecated since Twisted 9.0. "
"See twisted.web.resource.ErrorPage.", DeprecationWarning,
stacklevel=2)
_resource.ErrorPage.__init__(self, *args, **kwargs)
class NoResource(_resource.NoResource):
"""
Deprecated alias for L{twisted.web.resource.NoResource}.
"""
def __init__(self, *args, **kwargs):
warnings.warn(
"twisted.web.error.NoResource is deprecated since Twisted 9.0. "
"See twisted.web.resource.NoResource.", DeprecationWarning,
stacklevel=2)
_resource.NoResource.__init__(self, *args, **kwargs)
class ForbiddenResource(_resource.ForbiddenResource):
"""
Deprecated alias for L{twisted.web.resource.ForbiddenResource}.
"""
def __init__(self, *args, **kwargs):
warnings.warn(
"twisted.web.error.ForbiddenResource is deprecated since Twisted "
"9.0. See twisted.web.resource.ForbiddenResource.",
DeprecationWarning, stacklevel=2)
_resource.ForbiddenResource.__init__(self, *args, **kwargs)
__all__ = [
'Error', 'PageRedirect', 'InfiniteRedirection',
'ErrorPage', 'NoResource', 'ForbiddenResource']
| apache-2.0 |
angelman/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/tool/steps/commit_unittest.py | 124 | 3270 | # Copyright (C) 2012 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest2 as unittest
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.common.system.executive import ScriptError
from webkitpy.common.system.executive_mock import MockExecutive
from webkitpy.tool.mocktool import MockOptions, MockTool
from webkitpy.tool.steps.commit import Commit
class CommitTest(unittest.TestCase):
def _test_check_test_expectations(self, filename):
capture = OutputCapture()
options = MockOptions()
options.git_commit = ""
options.non_interactive = True
tool = MockTool()
tool.user = None # Will cause any access of tool.user to raise an exception.
step = Commit(tool, options)
state = {
"changed_files": [filename + "XXX"],
}
tool.executive = MockExecutive(should_log=True, should_throw_when_run=False)
expected_logs = "Committed r49824: <http://trac.webkit.org/changeset/49824>\n"
capture.assert_outputs(self, step.run, [state], expected_logs=expected_logs)
state = {
"changed_files": ["platform/chromium/" + filename],
}
expected_logs = """MOCK run_and_throw_if_fail: ['mock-check-webkit-style', '--diff-files', 'platform/chromium/%s'], cwd=/mock-checkout
Committed r49824: <http://trac.webkit.org/changeset/49824>
""" % filename
capture.assert_outputs(self, step.run, [state], expected_logs=expected_logs)
tool.executive = MockExecutive(should_log=True, should_throw_when_run=set(["platform/chromium/" + filename]))
self.assertRaises(ScriptError, capture.assert_outputs, self, step.run, [state])
def test_check_test_expectations(self):
self._test_check_test_expectations('TestExpectations')
| bsd-3-clause |
was4444/chromium.src | build/android/pylib/uirobot/uirobot_test_instance.py | 26 | 2028 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import json
import logging
from devil.android import apk_helper
from pylib.base import test_instance
class UirobotTestInstance(test_instance.TestInstance):
def __init__(self, args, error_func):
"""Constructor.
Args:
args: Command line arguments.
error_func: Called with an error message when required arguments are missing.
"""
super(UirobotTestInstance, self).__init__()
if not args.app_under_test:
error_func('Must set --app-under-test.')
self._app_under_test = args.app_under_test
self._minutes = args.minutes
if args.remote_device_file:
with open(args.remote_device_file) as remote_device_file:
device_json = json.load(remote_device_file)
else:
device_json = {}
device_type = device_json.get('device_type', 'Android')
if args.device_type:
if device_type and device_type != args.device_type:
logging.info('Overriding device_type from %s to %s',
device_type, args.device_type)
device_type = args.device_type
if device_type == 'Android':
self._suite = 'Android Uirobot'
self._package_name = apk_helper.GetPackageName(self._app_under_test)
elif device_type == 'iOS':
self._suite = 'iOS Uirobot'
self._package_name = self._app_under_test
#override
def TestType(self):
"""Returns type of test."""
return 'uirobot'
#override
def SetUp(self):
"""Setup for test."""
pass
#override
def TearDown(self):
"""Teardown for test."""
pass
@property
def app_under_test(self):
"""Returns the app to run the test on."""
return self._app_under_test
@property
def minutes(self):
"""Returns the number of minutes to run the uirobot for."""
return self._minutes
@property
def package_name(self):
"""Returns the name of the package in the APK."""
return self._package_name
@property
def suite(self):
return self._suite
| bsd-3-clause |
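A minimal sketch of the device_type resolution implemented in the constructor above; the values stand in for the parsed --remote-device-file JSON and a hypothetical --device-type flag.
device_json = {'device_type': 'Android'}   # as loaded from --remote-device-file
cli_device_type = 'iOS'                    # hypothetical --device-type override
device_type = device_json.get('device_type', 'Android')
if cli_device_type and device_type != cli_device_type:
    device_type = cli_device_type          # the CLI flag wins, after logging the override
print(device_type)                         # -> iOS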
ksmit799/Toontown-Source | toontown/effects/SparksTrail.py | 6 | 4236 | from pandac.PandaModules import *
from direct.interval.IntervalGlobal import *
from direct.particles import ParticleEffect, Particles, ForceGroup
from PooledEffect import PooledEffect
from EffectController import EffectController
class SparksTrail(PooledEffect, EffectController):
def __init__(self):
PooledEffect.__init__(self)
EffectController.__init__(self)
model = loader.loadModel('phase_4/models/props/tt_m_efx_ext_particleCards')
self.card = model.find('**/tt_t_efx_ext_particleStars')
self.cardScale = 64.0
self.effectColor = Vec4(1, 1, 1, 1)
self.effectScale = 1.0
self.lifespan = 1.0
if not SparksTrail.particleDummy:
SparksTrail.particleDummy = render.attachNewNode(ModelNode('SparksTrailParticleDummy'))
SparksTrail.particleDummy.setDepthWrite(0)
SparksTrail.particleDummy.setLightOff()
SparksTrail.particleDummy.setFogOff()
self.f = ParticleEffect.ParticleEffect('SparksTrail')
self.f.reparentTo(self)
self.p0 = Particles.Particles('particles-1')
self.p0.setFactory('ZSpinParticleFactory')
self.p0.setRenderer('SpriteParticleRenderer')
self.p0.setEmitter('PointEmitter')
self.f.addParticles(self.p0)
self.p0.setPoolSize(64)
self.p0.setBirthRate(0.02)
self.p0.setLitterSize(1)
self.p0.setLitterSpread(0)
self.p0.setSystemLifespan(0.0)
self.p0.setLocalVelocityFlag(0)
self.p0.setSystemGrowsOlderFlag(0)
self.p0.factory.setLifespanBase(0.5)
self.p0.factory.setLifespanSpread(0.1)
self.p0.factory.setMassBase(1.0)
self.p0.factory.setMassSpread(0.0)
self.p0.factory.setTerminalVelocityBase(400.0)
self.p0.factory.setTerminalVelocitySpread(0.0)
self.p0.factory.setInitialAngle(0.0)
self.p0.factory.setInitialAngleSpread(90.0)
self.p0.factory.enableAngularVelocity(1)
self.p0.factory.setAngularVelocity(0.0)
self.p0.factory.setAngularVelocitySpread(25.0)
self.p0.renderer.setAlphaMode(BaseParticleRenderer.PRALPHAOUT)
self.p0.renderer.setUserAlpha(1.0)
self.p0.renderer.setFromNode(self.card)
self.p0.renderer.setColor(Vec4(1.0, 1.0, 1.0, 1.0))
self.p0.renderer.setXScaleFlag(1)
self.p0.renderer.setYScaleFlag(1)
self.p0.renderer.setAnimAngleFlag(1)
self.p0.renderer.setNonanimatedTheta(0.0)
self.p0.renderer.setAlphaBlendMethod(BaseParticleRenderer.PPBLENDLINEAR)
self.p0.renderer.setAlphaDisable(0)
self.p0.renderer.setColorBlendMode(ColorBlendAttrib.MAdd, ColorBlendAttrib.OIncomingAlpha, ColorBlendAttrib.OOne)
self.p0.emitter.setEmissionType(BaseParticleEmitter.ETRADIATE)
self.p0.emitter.setAmplitudeSpread(0.0)
self.p0.emitter.setOffsetForce(Vec3(0.0, 0.0, -2.0))
self.p0.emitter.setExplicitLaunchVector(Vec3(1.0, 0.0, 0.0))
self.p0.emitter.setRadiateOrigin(Point3(0.0, 0.0, 0.0))
self.setEffectScale(self.effectScale)
def createTrack(self):
self.startEffect = Sequence(Func(self.p0.setBirthRate, 0.01), Func(self.p0.clearToInitial), Func(self.f.start, self, self.particleDummy))
self.endEffect = Sequence(Func(self.p0.setBirthRate, 100.0), Wait(1.0), Func(self.cleanUpEffect))
self.track = Sequence(self.startEffect, Wait(1.0), self.endEffect)
def setEffectColor(self, color):
self.effectColor = color
self.p0.renderer.setColor(self.effectColor)
def setEffectScale(self, scale):
self.effectScale = scale
self.p0.renderer.setInitialXScale(0.1 * self.cardScale * scale)
self.p0.renderer.setFinalXScale(0.2 * self.cardScale * scale)
self.p0.renderer.setInitialYScale(0.1 * self.cardScale * scale)
self.p0.renderer.setFinalYScale(0.2 * self.cardScale * scale)
self.p0.emitter.setAmplitude(20.0 * scale)
def cleanUpEffect(self):
EffectController.cleanUpEffect(self)
if self.pool and self.pool.isUsed(self):
self.pool.checkin(self)
def destroy(self):
EffectController.destroy(self)
PooledEffect.destroy(self)
| mit |
pferreir/indico | indico/web/flask/app.py | 2 | 19781 | # This file is part of Indico.
# Copyright (C) 2002 - 2021 CERN
#
# Indico is free software; you can redistribute it and/or
# modify it under the terms of the MIT License; see the
# LICENSE file for more details.
import os
import uuid
from babel.numbers import format_currency, get_currency_name
from flask import _app_ctx_stack, render_template, request
from flask.helpers import get_root_path
from flask_pluginengine import current_plugin, plugins_loaded
from markupsafe import Markup
from packaging.version import Version
from pywebpack import WebpackBundleProject
from sqlalchemy.orm import configure_mappers
from sqlalchemy.pool import QueuePool
from werkzeug.exceptions import BadRequest
from werkzeug.local import LocalProxy
from werkzeug.middleware.proxy_fix import ProxyFix
from werkzeug.urls import url_parse
from wtforms.widgets import html_params
import indico
from indico.core import signals
from indico.core.auth import multipass
from indico.core.cache import cache
from indico.core.celery import celery
from indico.core.config import IndicoConfig, config, load_config
from indico.core.db.sqlalchemy import db
from indico.core.db.sqlalchemy.logging import apply_db_loggers
from indico.core.db.sqlalchemy.util.models import import_all_models
from indico.core.limiter import limiter
from indico.core.logger import Logger
from indico.core.marshmallow import mm
from indico.core.oauth.oauth2 import setup_oauth_provider
from indico.core.plugins import plugin_engine, url_for_plugin
from indico.core.sentry import init_sentry
from indico.core.webpack import IndicoManifestLoader, webpack
from indico.modules.auth.providers import IndicoAuthProvider, IndicoIdentityProvider
from indico.modules.auth.util import url_for_login, url_for_logout
from indico.util import date_time as date_time_util
from indico.util.i18n import (_, babel, get_all_locales, get_current_locale, gettext_context, ngettext_context,
npgettext_context, pgettext_context)
from indico.util.mimetypes import icon_from_mimetype
from indico.util.signals import values_from_signal
from indico.util.string import RichMarkup, alpha_enum, crc32, html_to_plaintext, sanitize_html, slugify
from indico.web.flask.errors import errors_bp
from indico.web.flask.stats import get_request_stats, setup_request_stats
from indico.web.flask.templating import (call_template_hook, decodeprincipal, dedent, groupby, instanceof, markdown,
natsort, plusdelta, subclassof, underline)
from indico.web.flask.util import ListConverter, XAccelMiddleware, discover_blueprints, url_for, url_rule_to_js
from indico.web.flask.wrappers import IndicoFlask
from indico.web.forms.jinja_helpers import is_single_line_field, iter_form_fields, render_field
from indico.web.menu import render_sidemenu
from indico.web.util import url_for_index
from indico.web.views import render_session_bar
def configure_app(app):
config = IndicoConfig(app.config['INDICO']) # needed since we don't have an app ctx yet
app.config['DEBUG'] = config.DEBUG
app.config['SECRET_KEY'] = config.SECRET_KEY
app.config['LOGGER_NAME'] = 'flask.app'
app.config['LOGGER_HANDLER_POLICY'] = 'never'
if not app.config['SECRET_KEY'] or len(app.config['SECRET_KEY']) < 16:
raise ValueError('SECRET_KEY must be set to a random secret of at least 16 characters. '
'You can generate one using os.urandom(32) in Python shell.')
if config.MAX_UPLOAD_FILES_TOTAL_SIZE > 0:
app.config['MAX_CONTENT_LENGTH'] = config.MAX_UPLOAD_FILES_TOTAL_SIZE * 1024 * 1024
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config['TRAP_HTTP_EXCEPTIONS'] = False
app.config['TRAP_BAD_REQUEST_ERRORS'] = config.DEBUG
app.config['SESSION_COOKIE_NAME'] = 'indico_session'
app.config['PERMANENT_SESSION_LIFETIME'] = config.SESSION_LIFETIME
app.config['RATELIMIT_STORAGE_URL'] = config.REDIS_CACHE_URL or 'memory://'
configure_cache(app, config)
configure_multipass(app, config)
app.config['PLUGINENGINE_NAMESPACE'] = 'indico.plugins'
app.config['PLUGINENGINE_PLUGINS'] = config.PLUGINS
base = url_parse(config.BASE_URL)
app.config['PREFERRED_URL_SCHEME'] = base.scheme
app.config['SERVER_NAME'] = base.netloc
if base.path:
app.config['APPLICATION_ROOT'] = base.path
configure_xsendfile(app, config.STATIC_FILE_METHOD)
if config.USE_PROXY:
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)
configure_webpack(app)
configure_emails(app, config)
def configure_cache(app, config):
app.config['CACHE_DEFAULT_TIMEOUT'] = 0
app.config['CACHE_KEY_PREFIX'] = f'indico_{_get_cache_version()}_'
if config.REDIS_CACHE_URL is not None or not app.testing:
# We configure the redis cache if we have the URL. Outside testing we configure it even
# without a URL, so that a missing redis configuration fails loudly instead of silently.
app.config['CACHE_TYPE'] = 'indico.core.cache.IndicoRedisCache'
app.config['CACHE_REDIS_URL'] = config.REDIS_CACHE_URL
else:
app.config['CACHE_TYPE'] = 'flask_caching.backends.nullcache.NullCache'
app.config['CACHE_NO_NULL_WARNING'] = True
def configure_multipass(app, config):
app.config['MULTIPASS_AUTH_PROVIDERS'] = config.AUTH_PROVIDERS
app.config['MULTIPASS_IDENTITY_PROVIDERS'] = config.IDENTITY_PROVIDERS
app.config['MULTIPASS_PROVIDER_MAP'] = config.PROVIDER_MAP or {x: x for x in config.AUTH_PROVIDERS}
if 'indico' in app.config['MULTIPASS_AUTH_PROVIDERS'] or 'indico' in app.config['MULTIPASS_IDENTITY_PROVIDERS']:
raise ValueError('The name `indico` is reserved and cannot be used as an Auth/Identity provider name.')
if config.LOCAL_IDENTITIES:
configure_multipass_local(app)
app.config['MULTIPASS_IDENTITY_INFO_KEYS'] = {'first_name', 'last_name', 'email', 'affiliation', 'phone',
'address'}
app.config['MULTIPASS_LOGIN_ENDPOINT'] = 'auth.login'
app.config['MULTIPASS_LOGIN_URLS'] = None # registered in a blueprint
app.config['MULTIPASS_SUCCESS_ENDPOINT'] = 'categories.display'
app.config['MULTIPASS_FAILURE_MESSAGE'] = _('Login failed: {error}')
def configure_multipass_local(app):
app.config['MULTIPASS_AUTH_PROVIDERS'] = dict(app.config['MULTIPASS_AUTH_PROVIDERS'], indico={
'type': IndicoAuthProvider,
'title': 'Indico',
'default': not any(p.get('default') for p in app.config['MULTIPASS_AUTH_PROVIDERS'].values())
})
app.config['MULTIPASS_IDENTITY_PROVIDERS'] = dict(app.config['MULTIPASS_IDENTITY_PROVIDERS'], indico={
'type': IndicoIdentityProvider,
# We don't want any user info from this provider
'identity_info_keys': {}
})
app.config['MULTIPASS_PROVIDER_MAP'] = dict(app.config['MULTIPASS_PROVIDER_MAP'], indico='indico')
def configure_webpack(app):
pkg_path = os.path.dirname(get_root_path('indico'))
project = WebpackBundleProject(pkg_path, None)
app.config['WEBPACKEXT_PROJECT'] = project
app.config['WEBPACKEXT_MANIFEST_LOADER'] = IndicoManifestLoader
app.config['WEBPACKEXT_MANIFEST_PATH'] = os.path.join('dist', 'manifest.json')
def configure_emails(app, config):
# TODO: use more straightforward mapping between EMAIL_* app settings and indico.conf settings
app.config['EMAIL_BACKEND'] = 'indico.vendor.django_mail.backends.smtp.EmailBackend'
app.config['EMAIL_HOST'] = config.SMTP_SERVER[0]
app.config['EMAIL_PORT'] = config.SMTP_SERVER[1]
app.config['EMAIL_HOST_USER'] = config.SMTP_LOGIN
app.config['EMAIL_HOST_PASSWORD'] = config.SMTP_PASSWORD
app.config['EMAIL_USE_TLS'] = config.SMTP_USE_TLS
app.config['EMAIL_USE_SSL'] = False
app.config['EMAIL_TIMEOUT'] = config.SMTP_TIMEOUT
def configure_xsendfile(app, method):
if not method:
return
elif isinstance(method, str):
args = None
else:
method, args = method
if not method:
return
app.config['USE_X_SENDFILE'] = True
if method == 'xsendfile': # apache mod_xsendfile, lighttpd
pass
elif method == 'xaccelredirect': # nginx
if not args or not hasattr(args, 'items'):
raise ValueError('STATIC_FILE_METHOD args must be a dict containing at least one mapping')
app.wsgi_app = XAccelMiddleware(app.wsgi_app, args)
else:
raise ValueError('Invalid static file method: %s' % method)
def _get_indico_version():
version = Version(indico.__version__)
version_parts = [version.base_version]
if version.is_prerelease:
version_parts.append('pre')
return 'v' + '-'.join(version_parts)
def _get_cache_version():
version = Version(indico.__version__)
return f'v{version.major}.{version.minor}'
def setup_jinja(app):
app.jinja_env.policies['ext.i18n.trimmed'] = True
# Useful (Python) builtins
app.add_template_global(dict)
# Global functions
app.add_template_global(url_for)
app.add_template_global(url_for_plugin)
app.add_template_global(url_rule_to_js)
app.add_template_global(IndicoConfig(exc=Exception), 'indico_config')
app.add_template_global(call_template_hook, 'template_hook')
app.add_template_global(is_single_line_field, '_is_single_line_field')
app.add_template_global(render_field, '_render_field')
app.add_template_global(iter_form_fields, '_iter_form_fields')
app.add_template_global(format_currency)
app.add_template_global(get_currency_name)
app.add_template_global(url_for_index)
app.add_template_global(url_for_login)
app.add_template_global(url_for_logout)
app.add_template_global(lambda: str(uuid.uuid4()), 'uuid')
app.add_template_global(icon_from_mimetype)
app.add_template_global(render_sidemenu)
app.add_template_global(slugify)
app.add_template_global(lambda: date_time_util.now_utc(False), 'now')
app.add_template_global(render_session_bar)
app.add_template_global(get_request_stats)
app.add_template_global(_get_indico_version(), 'indico_version')
# Global variables
app.add_template_global(LocalProxy(get_current_locale), 'current_locale')
app.add_template_global(LocalProxy(lambda: current_plugin.manifest if current_plugin else None), 'plugin_webpack')
# Useful constants
app.add_template_global('^([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]$', name='time_regex_hhmm') # for input[type=time]
# Filters (indico functions returning UTF8)
app.add_template_filter(date_time_util.format_date)
app.add_template_filter(date_time_util.format_time)
app.add_template_filter(date_time_util.format_datetime)
app.add_template_filter(date_time_util.format_human_date)
app.add_template_filter(date_time_util.format_timedelta)
app.add_template_filter(date_time_util.format_number)
# Filters (new ones returning unicode)
app.add_template_filter(date_time_util.format_human_timedelta)
app.add_template_filter(date_time_util.format_pretty_date)
app.add_template_filter(date_time_util.format_pretty_datetime)
app.add_template_filter(lambda d: Markup(html_params(**d)), 'html_params')
app.add_template_filter(underline)
app.add_template_filter(markdown)
app.add_template_filter(dedent)
app.add_template_filter(html_to_plaintext)
app.add_template_filter(natsort)
app.add_template_filter(groupby)
app.add_template_filter(plusdelta)
app.add_template_filter(decodeprincipal)
app.add_template_filter(any)
app.add_template_filter(alpha_enum)
app.add_template_filter(crc32)
app.add_template_filter(bool)
app.add_template_filter(lambda s: Markup(sanitize_html(s or '')), 'sanitize_html')
app.add_template_filter(RichMarkup, 'rich_markup')
# Tests
app.add_template_test(instanceof) # only use this test if you really have to!
app.add_template_test(subclassof) # only use this test if you really have to!
# i18n
app.jinja_env.add_extension('jinja2.ext.i18n')
app.jinja_env.install_gettext_callables(gettext_context, ngettext_context, True,
pgettext=pgettext_context, npgettext=npgettext_context)
def setup_jinja_customization(app):
# add template customization paths provided by plugins
paths = values_from_signal(signals.plugin.get_template_customization_paths.send())
app.jinja_env.loader.fs_loader.searchpath += sorted(paths)
def configure_db(app):
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
if app.config['TESTING']:
# tests do not actually use sqlite but run a postgres instance and
# reconfigure flask-sqlalchemy to use that database. by setting
# a dummy uri explicitly instead of letting flask-sqlalchemy do
# the exact same thing we avoid a warning when running tests.
app.config.setdefault('SQLALCHEMY_DATABASE_URI', 'sqlite:///:memory:')
else:
if config.SQLALCHEMY_DATABASE_URI is None:
raise Exception("No proper SQLAlchemy store has been configured. Please edit your indico.conf")
app.config['SQLALCHEMY_DATABASE_URI'] = config.SQLALCHEMY_DATABASE_URI
app.config['SQLALCHEMY_RECORD_QUERIES'] = False
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
'pool_size': config.SQLALCHEMY_POOL_SIZE,
'pool_timeout': config.SQLALCHEMY_POOL_TIMEOUT,
'pool_recycle': config.SQLALCHEMY_POOL_RECYCLE,
'max_overflow': config.SQLALCHEMY_MAX_OVERFLOW,
}
import_all_models()
db.init_app(app)
if not app.config['TESTING']:
apply_db_loggers(app)
plugins_loaded.connect(lambda sender: configure_mappers(), app, weak=False)
def check_db():
# If something triggered database queries during startup, we have connections in the pool
# and when using uwsgi which preforks workers after initializing the WSGI app (ie ``make_app``
# runs only once for all workers), this passes a pool containing unusable connections to the
# workers, which results in VERY strange databases errors such as "error with status PGRES_TUPLES_OK
# and no message from the libpq".
# Since we do not expect any queries during startup, and it's a bad idea, we fail hard when
# in debug mode. For the unlikely case that existing code is doing this (it shouldn't!), we
# just dispose the engine which creates a fresh pool (with no connections in it) to avoid
# the errors.
if not isinstance(db.engine.pool, QueuePool):
# tests that don't use the database have a StaticPool
return
if db.engine.pool.checkedin() or db.engine.pool.checkedout():
if config.DEBUG:
raise Exception('Something triggered queries during app initialization. This is probably a bad idea. '
'Read the comment in the method that raised this exception for details.')
Logger.get('db').warning('Connection pool populated during startup (%s); resetting it', db.engine.pool.status())
db.engine.dispose()
def extend_url_map(app):
app.url_map.converters['list'] = ListConverter
def add_handlers(app):
app.before_request(canonicalize_url)
app.before_request(reject_nuls)
app.after_request(inject_current_url)
app.register_blueprint(errors_bp)
def add_blueprints(app):
blueprints, compat_blueprints = discover_blueprints()
for blueprint in blueprints:
app.register_blueprint(blueprint)
if config.ROUTE_OLD_URLS:
for blueprint in compat_blueprints:
app.register_blueprint(blueprint)
def add_plugin_blueprints(app):
blueprint_names = set()
for plugin, blueprint in values_from_signal(signals.plugin.get_blueprints.send(app), return_plugins=True):
expected_names = {f'plugin_{plugin.name}', f'plugin_compat_{plugin.name}'}
if blueprint.name not in expected_names:
raise Exception(f"Blueprint '{blueprint.name}' does not match plugin name '{plugin.name}'")
if blueprint.name in blueprint_names:
raise Exception(f"Blueprint '{blueprint.name}' defined by multiple plugins")
if not config.ROUTE_OLD_URLS and blueprint.name.startswith('plugin_compat_'):
continue
blueprint_names.add(blueprint.name)
with plugin.plugin_context():
app.register_blueprint(blueprint)
def canonicalize_url():
url_root = request.url_root.rstrip('/')
if config.BASE_URL != url_root:
Logger.get('flask').info('Received request with invalid url root for %s', request.url)
return render_template('bad_url_error.html'), 404
def reject_nuls():
for key, values in request.values.lists():
if '\0' in key or any('\0' in x for x in values):
raise BadRequest('NUL byte found in request data')
def inject_current_url(response):
# Make the current URL available. This is useful e.g. in case of
# AJAX requests that were redirected due to url normalization if
# we need to know the actual URL
url = request.relative_url
# Headers cannot contain linebreaks, and while Flask rejects such
# headers on its own, it does so by raising a ValueError.
if '\r' in url or '\n' in url:
return response
try:
# Werkzeug encodes header values as latin1 in Python2.
# In case of URLs containing utter garbage (usually a 404
# anyway) they may not be latin1-compatible so let's not
# add the header at all in this case instead of failing later
# XXX: apparently this is still the case in Python3
url.encode('latin1')
except UnicodeEncodeError:
return response
response.headers['X-Indico-URL'] = url
return response
def make_app(testing=False, config_override=None):
# If you are reading this code and wonder how to access the app:
# >>> from flask import current_app as app
# This only works while inside an application context but you really shouldn't have any
# reason to access it outside this method without being inside an application context.
if _app_ctx_stack.top:
Logger.get('flask').warning('make_app called within app context, using existing app')
return _app_ctx_stack.top.app
app = IndicoFlask('indico', static_folder='web/static', static_url_path='/', template_folder='web/templates')
app.config['TESTING'] = testing
app.config['INDICO'] = load_config(only_defaults=testing, override=config_override)
configure_app(app)
with app.app_context():
if not testing:
Logger.init(app)
init_sentry(app)
celery.init_app(app)
cache.init_app(app)
babel.init_app(app)
if config.DEFAULT_LOCALE not in get_all_locales():
Logger.get('i18n').error(f'Configured DEFAULT_LOCALE ({config.DEFAULT_LOCALE}) does not exist')
multipass.init_app(app)
setup_oauth_provider(app)
webpack.init_app(app)
setup_jinja(app)
configure_db(app)
mm.init_app(app) # must be called after `configure_db`!
limiter.init_app(app)
extend_url_map(app)
add_handlers(app)
setup_request_stats(app)
add_blueprints(app)
plugin_engine.init_app(app, Logger.get('plugins'))
if not plugin_engine.load_plugins(app):
raise Exception('Could not load some plugins: {}'.format(', '.join(plugin_engine.get_failed_plugins(app))))
setup_jinja_customization(app)
# Below this points plugins are available, i.e. sending signals makes sense
add_plugin_blueprints(app)
# themes can be provided by plugins
signals.app_created.send(app)
config.validate()
check_db()
return app
| mit |
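A hedged usage sketch for make_app() above; the import path follows this row's indico/web/flask/app.py location, and the override keys are assumed minimal settings (a real deployment needs a full indico.conf).
from indico.web.flask.app import make_app

app = make_app(testing=True, config_override={
    'BASE_URL': 'http://localhost/',   # assumed minimal settings; testing=True loads
    'SECRET_KEY': 'x' * 16,            # only config defaults plus these overrides
})
with app.app_context():                # current_app is only valid inside an app context
    from flask import current_app
    assert current_app.config['TESTING']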
jayrumi/walmart-reviews | setup.py | 1 | 1197 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
setup(
name = 'walmart-reviews',
version = '1.2.0.dev1',
packages = find_packages(),
requires = ['python (>= 3.5)'],
#install_requires = ['random', 'requests', 'lxml', 'datetime', 'time'],
description = 'Parsing reviews from Walmart.com without using API',
long_description = long_description, #'A package for parsing reviews and all information about reviewers from walmart.com for specific item. For more information read README.rst', #open('README.rst').read(),
author = 'Yauheni Rumiantsau',
author_email = 'jrumyantsev@gmail.com',
url = 'https://github.com/jayrumi/walmart-reviews',
#download_url = '',
license = 'MIT License',
keywords = 'walmart parsing',
classifiers = [
'Intended Audience :: Developers',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
],
)
| mit |
harrylewis/python-uinames | uinames/models.py | 1 | 1384 | from utils import PropertyUnavailable
class People(object):
"""
A collection of people, represented by the Person class.
"""
def __init__(self, json=None):
self._json = json or {}
self.data = [Person(identity) for identity in self._json]
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return "<People instance: {} Persons>".format(len(self.data))
class Person(object):
"""
A representation of a person identity, generated from the UINames API.
"""
def __init__(self, json=None):
self._json = json or {}
def __getattr__(self, item):
try:
obj = self._json[item]
# determine if string or dict
if isinstance(obj, basestring):
return obj.encode("utf-8")
return obj
except KeyError:
raise PropertyUnavailable(
"Property '{}' is does not exist or is not available for this "
"Person.".format(item))
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return "<Person instance: {} {} from {}>".format(self.name,
self.surname,
self.region)
if __name__ == "__main__":
pass
| mit |
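A small sketch of the People/Person wrappers above, fed an inline sample instead of a live UINames API response; the field names mirror that API and are assumptions here.
sample = [{'name': 'Ada', 'surname': 'Lovelace', 'region': 'United Kingdom'}]
people = People(sample)
print people                 # <People instance: 1 Persons>
print people.data[0].name    # 'Ada', resolved through Person.__getattr__
print people.data[0].region  # 'United Kingdom'; a missing key raises PropertyUnavailable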
valexandersaulys/prudential_insurance_kaggle | venv/lib/python2.7/site-packages/pandas/io/tests/test_common.py | 9 | 2087 | """
Tests for the pandas.io.common functionalities
"""
from pandas.compat import StringIO
import os
from os.path import isabs
import nose
import pandas.util.testing as tm
from pandas.io import common
try:
from pathlib import Path
except ImportError:
pass
try:
from py.path import local as LocalPath
except ImportError:
pass
class TestCommonIOCapabilities(tm.TestCase):
def test_expand_user(self):
filename = '~/sometest'
expanded_name = common._expand_user(filename)
self.assertNotEqual(expanded_name, filename)
self.assertTrue(isabs(expanded_name))
self.assertEqual(os.path.expanduser(filename), expanded_name)
def test_expand_user_normal_path(self):
filename = '/somefolder/sometest'
expanded_name = common._expand_user(filename)
self.assertEqual(expanded_name, filename)
self.assertEqual(os.path.expanduser(filename), expanded_name)
def test_stringify_path_pathlib(self):
tm._skip_if_no_pathlib()
rel_path = common._stringify_path(Path('.'))
self.assertEqual(rel_path, '.')
redundant_path = common._stringify_path(Path('foo//bar'))
self.assertEqual(redundant_path, os.path.join('foo', 'bar'))
def test_stringify_path_localpath(self):
tm._skip_if_no_localpath()
path = os.path.join('foo', 'bar')
abs_path = os.path.abspath(path)
lpath = LocalPath(path)
self.assertEqual(common._stringify_path(lpath), abs_path)
def test_get_filepath_or_buffer_with_path(self):
filename = '~/sometest'
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(filename)
self.assertNotEqual(filepath_or_buffer, filename)
self.assertTrue(isabs(filepath_or_buffer))
self.assertEqual(os.path.expanduser(filename), filepath_or_buffer)
def test_get_filepath_or_buffer_with_buffer(self):
input_buffer = StringIO()
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(input_buffer)
self.assertEqual(filepath_or_buffer, input_buffer)
| gpl-2.0 |
yonggang985/Sniper | scripts/stattrace.py | 2 | 2693 | """
stattrace.py
Write a trace of deltas for an arbitrary statistic.
First argument is the name of the statistic (<component-name>[.<subcomponent>].<stat-name>)
Second argument is a filename, or empty/omitted to write to standard output
Third argument is the interval size in nanoseconds (default is 10000)
"""
import sys, os, sim
class StatTrace:
def setup(self, args):
args = dict(enumerate((args or '').split(':')))
stat = args[0]
filename = args.get(1, None)
interval_ns = long(args.get(2, 10000))
if '.' not in stat:
print 'Stat name needs to be of the format <component>.<statname>, got %s' % stat
return
self.stat_name = stat
stat_component, stat_name = stat.rsplit('.', 1)
valid = False
for core in range(sim.config.ncores):
try:
sim.stats.get(stat_component, core, stat_name)
except ValueError:
continue
else:
valid = True
break
if not valid:
print 'Stat %s[*].%s not found' % (stat_component, stat_name)
return
if filename:
self.fd = file(os.path.join(sim.config.output_dir, filename), 'w')
self.isTerminal = False
else:
self.fd = sys.stdout
self.isTerminal = True
self.sd = sim.util.StatsDelta()
self.stats = {
'time': [ self.getStatsGetter('performance_model', core, 'elapsed_time') for core in range(sim.config.ncores) ],
'ffwd_time': [ self.getStatsGetter('fastforward_performance_model', core, 'fastforwarded_time') for core in range(sim.config.ncores) ],
'stat': [ self.getStatsGetter(stat_component, core, stat_name) for core in range(sim.config.ncores) ],
}
sim.util.Every(interval_ns * sim.util.Time.NS, self.periodic, statsdelta = self.sd, roi_only = True)
def periodic(self, time, time_delta):
if self.isTerminal:
self.fd.write('[STAT:%s] ' % self.stat_name)
self.fd.write('%u' % (time / 1e6)) # Time in ns
for core in range(sim.config.ncores):
timediff = (self.stats['time'][core].delta - self.stats['ffwd_time'][core].delta) / 1e6 # Time in ns
statdiff = self.stats['stat'][core].delta
value = statdiff / (timediff or 1) # Avoid division by zero
self.fd.write(' %.3f' % value)
self.fd.write('\n')
def getStatsGetter(self, component, core, metric):
# Some components don't exist (i.e. DRAM reads on cores that don't have a DRAM controller),
# return a special object that always returns 0 in these cases
try:
return self.sd.getter(component, core, metric)
except:
class Zero():
def __init__(self): self.delta = 0
def update(self): pass
return Zero()
sim.util.register(StatTrace())
| mit |
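The script argument is a colon-separated string, as StatTrace.setup() above parses it; a quick illustration of that decomposition (the statistic name and filename are hypothetical).
raw = 'performance_model.instruction_count:icount.out:5000'  # hypothetical values
args = dict(enumerate(raw.split(':')))
print args[0]                    # 'performance_model.instruction_count' -> <component>.<statname>
print args.get(1, None)          # 'icount.out'; absent or empty means standard output
print long(args.get(2, 10000))   # 5000, the interval in nanoseconds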
TheKK/Shedskin | examples/score4.py | 6 | 4867 | # connect four / four-in-a-row
# http://users.softlab.ece.ntua.gr/~ttsiod/score4.html
from sys import argv
WIDTH = 7
HEIGHT = 6
ORANGE_WINS = 1000000
YELLOW_WINS = -ORANGE_WINS
g_max_depth = 7
g_debug = False
class Cell:
Barren = 0
Orange = 1
Yellow = -1
def score_board(board):
counters = [0] * 9
# Horizontal spans
for y in xrange(HEIGHT):
score = board[y][0] + board[y][1] + board[y][2]
for x in xrange(3, WIDTH):
score += board[y][x]
counters[score + 4] += 1
score -= board[y][x - 3]
# Vertical spans
for x in xrange(WIDTH):
score = board[0][x] + board[1][x] + board[2][x]
for y in xrange(3, HEIGHT):
score += board[y][x]
counters[score + 4] += 1
score -= board[y - 3][x]
# Down-right (and up-left) diagonals
for y in xrange(HEIGHT - 3):
for x in xrange(WIDTH - 3):
score = 0
for idx in xrange(4):
yy = y + idx
xx = x + idx
score += board[yy][xx]
counters[score + 4] += 1
# up-right (and down-left) diagonals
for y in xrange(3, HEIGHT):
for x in xrange(WIDTH - 3):
score = 0
for idx in xrange(4):
yy = y - idx
xx = x + idx
score += board[yy][xx]
counters[score + 4] += 1
if counters[0] != 0:
return YELLOW_WINS
elif counters[8] != 0:
return ORANGE_WINS
else:
return (counters[5] + 2 * counters[6] + 5 * counters[7] +
10 * counters[8] - counters[3] - 2 * counters[2] -
5 * counters[1] - 10 * counters[0])
def drop_disk(board, column, color):
for y in xrange(HEIGHT - 1, -1, -1):
if board[y][column] == Cell.Barren:
board[y][column] = color
return y
return -1
def load_board(args):
global g_debug, g_max_depth
new_board = [[Cell.Barren] * WIDTH for _ in xrange(HEIGHT)]
for i, arg in enumerate(args[1:]):
if arg[0] == 'o' or arg[0] == 'y':
new_board[ord(arg[1]) - ord('0')][ord(arg[2]) - ord('0')] = \
Cell.Orange if arg[0] == 'o' else Cell.Yellow
elif arg == "-debug":
g_debug = True
elif arg == "-level" and i < (len(args) - 2):
g_max_depth = int(args[i + 2])
return new_board
def ab_minimax(maximize_or_minimize, color, depth, board):
global g_max_depth, g_debug
if depth == 0:
return (-1, score_board(board))
else:
best_score = -10000000 if maximize_or_minimize else 10000000
bestMove = -1
for column in xrange(WIDTH):
if board[0][column] != Cell.Barren:
continue
rowFilled = drop_disk(board, column, color)
if rowFilled == -1:
continue
s = score_board(board)
if s == (ORANGE_WINS if maximize_or_minimize else YELLOW_WINS):
bestMove = column
best_score = s
board[rowFilled][column] = Cell.Barren
break
move, score = ab_minimax(not maximize_or_minimize,
Cell.Yellow if color == Cell.Orange else Cell.Orange,
depth - 1, board)
board[rowFilled][column] = Cell.Barren
if depth == g_max_depth and g_debug:
print "Depth %d, placing on %d, score:%d" % (depth, column, score)
if maximize_or_minimize:
if score >= best_score:
best_score = score
bestMove = column
else:
if score <= best_score:
best_score = score
bestMove = column
return (bestMove, best_score)
def main(args):
global g_max_depth
board = load_board(args)
score_orig = score_board(board)
if score_orig == ORANGE_WINS:
print "I win."
return -1
elif score_orig == YELLOW_WINS:
print "You win."
return -1
else:
move, score = ab_minimax(True, Cell.Orange, g_max_depth, board)
if move != -1:
print move
drop_disk(board, move, Cell.Orange)
score_orig = score_board(board)
if score_orig == ORANGE_WINS:
print "I win."
return -1
elif score_orig == YELLOW_WINS:
print "You win."
return -1
else:
return 0
else:
print "No move possible."
return -1
exit(main(argv))
| gpl-3.0 |
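A hedged check of the sliding-window scoring above, pasted into a session rather than imported (the module calls exit(main(argv)) at import time): four Orange discs in one horizontal window sum to +4, incrementing counters[8] and yielding ORANGE_WINS.
board = [[Cell.Barren] * WIDTH for _ in xrange(HEIGHT)]
for x in xrange(4):
    drop_disk(board, x, Cell.Orange)      # fills the bottom row, columns 0..3
assert score_board(board) == ORANGE_WINS  # the 4-cell window sums to +4 -> counters[8]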
digwanderlust/pants | tests/python/pants_test/base/test_payload_field.py | 1 | 10723 | # coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
from hashlib import sha1
from pants.backend.jvm.targets.exclude import Exclude
from pants.backend.jvm.targets.jar_dependency import IvyArtifact, JarDependency
from pants.backend.python.python_requirement import PythonRequirement
from pants.base.payload import Payload
from pants.base.payload_field import (ExcludesField, FileField, FingerprintedField,
FingerprintedMixin, JarsField, PrimitiveField,
PythonRequirementsField, SourcesField, TargetListField)
from pants_test.base_test import BaseTest
class PayloadTest(BaseTest):
def test_excludes_field(self):
empty = ExcludesField()
empty_fp = empty.fingerprint()
self.assertEqual(empty_fp, empty.fingerprint())
normal = ExcludesField([Exclude('com', 'foozle'), Exclude('org')])
normal_fp = normal.fingerprint()
self.assertEqual(normal_fp, normal.fingerprint())
normal_dup = ExcludesField([Exclude('com', 'foozle'), Exclude('org')])
self.assertEqual(normal_fp, normal_dup.fingerprint())
self.assertNotEqual(empty_fp, normal_fp)
def test_jars_field_order(self):
jar1 = JarDependency('com', 'foo', '1.0.0')
jar2 = JarDependency('org', 'baz')
self.assertNotEqual(
JarsField([jar1, jar2]).fingerprint(),
JarsField([jar2, jar1]).fingerprint(),
)
def test_jars_field_artifacts(self):
jar1 = JarDependency('com', 'foo', '1.0.0').with_artifact('com', 'baz')
jar2 = JarDependency('com', 'foo', '1.0.0')
self.assertNotEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_jars_field_artifacts_arg(self):
jar1 = JarDependency('com', 'foo', '1.0.0', artifacts=[IvyArtifact('com', 'baz')])
jar2 = JarDependency('com', 'foo', '1.0.0')
self.assertNotEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_jars_field_artifacts_arg_vs_method(self):
jar1 = JarDependency('com', 'foo', '1.0.0', artifacts=[IvyArtifact('com', 'baz')])
jar2 = JarDependency('com', 'foo', '1.0.0').with_artifact('com', 'baz')
self.assertEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_jars_field_multiple_artifacts(self):
jar1 = (JarDependency('com', 'foo', '1.0.0')
.with_artifact('com', 'baz')
.with_artifact('org', 'bat'))
jar2 = (JarDependency('com', 'foo', '1.0.0')
.with_artifact('org', 'bat')
.with_artifact('com', 'baz'))
jar3 = (JarDependency('com', 'foo', '1.0.0')
.with_artifact('org', 'bat'))
jar4 = JarDependency('com', 'foo', '1.0.0')
self.assertEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
self.assertNotEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar3]).fingerprint(),
)
self.assertNotEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar4]).fingerprint(),
)
self.assertNotEqual(
JarsField([jar3]).fingerprint(),
JarsField([jar4]).fingerprint(),
)
def test_jars_field_artifacts_ordering(self):
"""JarDependencies throw away ordering information about their artifacts in the cache key.
But they do not throw it away in their internal representation! In the future, this should be
fixed: either they should sort them as they are added and keep a canonical representation, or
the order information should be preserved.
"""
jar1 = (JarDependency('com', 'foo', '1.0.0')
.with_artifact('com', 'baz')
.with_artifact('org', 'bat'))
jar2 = (JarDependency('com', 'foo', '1.0.0')
.with_artifact('org', 'bat')
.with_artifact('com', 'baz'))
self.assertEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_deprecated_jars_field_methods(self):
"""with_sources() and with_docs() are now no-ops. This test shows they don't affect
fingerprinting.
"""
jar1 = (JarDependency('com', 'foo', '1.0.0'))
jar2 = (JarDependency('com', 'foo', '1.0.0')
.with_sources()
.with_docs())
self.assertEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_jars_field_apidocs(self):
"""apidocs are not properly rolled into the cache key right now. Is this intentional?"""
jar1 = JarDependency('com', 'foo', '1.0.0', apidocs='pantsbuild.github.io')
jar2 = JarDependency('com', 'foo', '1.0.0', apidocs='someother.pantsbuild.github.io')
self.assertEqual(
JarsField([jar1]).fingerprint(),
JarsField([jar2]).fingerprint(),
)
def test_python_requirements_field(self):
req1 = PythonRequirement('foo==1.0')
req2 = PythonRequirement('bar==1.0')
self.assertNotEqual(
PythonRequirementsField([req1]).fingerprint(),
PythonRequirementsField([req2]).fingerprint(),
)
def test_python_requirements_field_version_filter(self):
"""version_filter is a lambda and can't be hashed properly.
Since in practice this is only ever used to differentiate between py3k and py2, it should use
a tuple of strings or even just a flag instead.
"""
req1 = PythonRequirement('foo==1.0', version_filter=lambda py, pl: False)
req2 = PythonRequirement('foo==1.0')
self.assertEqual(
PythonRequirementsField([req1]).fingerprint(),
PythonRequirementsField([req2]).fingerprint(),
)
def test_primitive_field(self):
self.assertEqual(
PrimitiveField({'foo': 'bar'}).fingerprint(),
PrimitiveField({'foo': 'bar'}).fingerprint(),
)
self.assertEqual(
PrimitiveField(['foo', 'bar']).fingerprint(),
PrimitiveField(('foo', 'bar')).fingerprint(),
)
self.assertEqual(
PrimitiveField(['foo', 'bar']).fingerprint(),
PrimitiveField(('foo', 'bar')).fingerprint(),
)
self.assertEqual(
PrimitiveField('foo').fingerprint(),
PrimitiveField(b'foo').fingerprint(),
)
self.assertNotEqual(
PrimitiveField('foo').fingerprint(),
PrimitiveField('bar').fingerprint(),
)
def test_excludes_field_ordering(self):
self.assertEqual(
ExcludesField([Exclude('com', 'foo')]).fingerprint(),
ExcludesField([Exclude('com', 'foo')]).fingerprint(),
)
self.assertEqual(
ExcludesField([]).fingerprint(),
ExcludesField().fingerprint(),
)
self.assertNotEqual(
ExcludesField([Exclude('com', 'foo')]).fingerprint(),
ExcludesField([Exclude('com')]).fingerprint(),
)
self.assertNotEqual(
ExcludesField([Exclude('com', 'foo'), Exclude('org', 'bar')]).fingerprint(),
ExcludesField([Exclude('org', 'bar'), Exclude('com', 'foo')]).fingerprint(),
)
def test_sources_field(self):
self.create_file('foo/bar/a.txt', 'a_contents')
self.create_file('foo/bar/b.txt', 'b_contents')
self.assertNotEqual(
SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt'],
).fingerprint(),
SourcesField(
sources_rel_path='foo/bar',
sources=['b.txt'],
).fingerprint(),
)
self.assertEqual(
SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt'],
).fingerprint(),
SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt'],
).fingerprint(),
)
self.assertEqual(
SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt', 'b.txt'],
).fingerprint(),
SourcesField(
sources_rel_path='foo/bar',
sources=['b.txt', 'a.txt'],
).fingerprint(),
)
fp1 = SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt'],
).fingerprint()
self.create_file('foo/bar/a.txt', 'a_contents_different')
fp2 = SourcesField(
sources_rel_path='foo/bar',
sources=['a.txt'],
).fingerprint()
self.assertNotEqual(fp1, fp2)
def test_fingerprinted_field(self):
class TestValue(FingerprintedMixin):
def __init__(self, test_value):
self.test_value = test_value
def fingerprint(self):
hasher = sha1()
hasher.update(self.test_value)
return hasher.hexdigest()
field1 = TestValue('field1')
field1_same = TestValue('field1')
field2 = TestValue('field2')
self.assertEquals(field1.fingerprint(), field1_same.fingerprint())
self.assertNotEquals(field1.fingerprint(), field2.fingerprint())
fingerprinted_field1 = FingerprintedField(field1)
fingerprinted_field1_same = FingerprintedField(field1_same)
fingerprinted_field2 = FingerprintedField(field2)
self.assertEquals(fingerprinted_field1.fingerprint(), fingerprinted_field1_same.fingerprint())
self.assertNotEquals(fingerprinted_field1.fingerprint(), fingerprinted_field2.fingerprint())
def test_unimplemented_fingerprinted_field(self):
class TestUnimplementedValue(FingerprintedMixin):
pass
with self.assertRaises(NotImplementedError):
FingerprintedField(TestUnimplementedValue()).fingerprint()
def test_file_field(self):
fp1 = FileField(self.create_file('foo/bar.config', contents='blah blah blah')).fingerprint()
fp2 = FileField(self.create_file('foo/bar.config', contents='meow meow meow')).fingerprint()
fp3 = FileField(self.create_file('spam/egg.config', contents='blah blah blah')).fingerprint()
self.assertNotEquals(fp1, fp2)
self.assertNotEquals(fp1, fp3)
self.assertNotEquals(fp2, fp3)
def test_target_list_field(self):
specs = [':t1', ':t2', ':t3']
payloads = [Payload() for i in range(3)]
for i, (s, p) in enumerate(zip(specs, payloads)):
p.add_field('foo', PrimitiveField(i))
self.make_target(s, payload=p)
s1, s2, s3 = specs
context = self.context()
fp1 = TargetListField([s1, s2]).fingerprint_with_context(context)
fp2 = TargetListField([s2, s1]).fingerprint_with_context(context)
fp3 = TargetListField([s1, s3]).fingerprint_with_context(context)
self.assertEquals(fp1, fp2)
self.assertNotEquals(fp1, fp3)
| apache-2.0 |
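A compact sketch of the FingerprintedMixin contract that test_fingerprinted_field above exercises: equal underlying values must fingerprint identically once wrapped in FingerprintedField. The VersionTag type is hypothetical.
from hashlib import sha1

class VersionTag(FingerprintedMixin):
    def __init__(self, tag):
        self.tag = tag
    def fingerprint(self):
        return sha1(self.tag).hexdigest()

assert (FingerprintedField(VersionTag('v1')).fingerprint() ==
        FingerprintedField(VersionTag('v1')).fingerprint())
assert (FingerprintedField(VersionTag('v1')).fingerprint() !=
        FingerprintedField(VersionTag('v2')).fingerprint())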
VeNoMouS/Sick-Beard | sickbeard/generic_queue.py | 52 | 3782 | # Author: Nic Wolfe <nic@wolfeden.ca>
# URL: http://code.google.com/p/sickbeard/
#
# This file is part of Sick Beard.
#
# Sick Beard is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Sick Beard is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Sick Beard. If not, see <http://www.gnu.org/licenses/>.
import datetime
import threading
from sickbeard import logger
class QueuePriorities:
LOW = 10
NORMAL = 20
HIGH = 30
class GenericQueue(object):
def __init__(self):
self.currentItem = None
self.queue = []
self.thread = None
self.queue_name = "QUEUE"
self.min_priority = 0
self.currentItem = None
def pause(self):
logger.log(u"Pausing queue")
self.min_priority = 999999999999
def unpause(self):
logger.log(u"Unpausing queue")
self.min_priority = 0
def add_item(self, item):
item.added = datetime.datetime.now()
self.queue.append(item)
return item
def run(self):
# only start a new task if one isn't already going
if self.thread is None or not self.thread.isAlive():
# if the thread is dead then the current item should be finished
if self.currentItem is not None:
self.currentItem.finish()
self.currentItem = None
# if there's something in the queue then run it in a thread and take it out of the queue
if len(self.queue) > 0:
# sort by priority
def sorter(x,y):
"""
Sorts by priority descending then time ascending
"""
if x.priority == y.priority:
if y.added == x.added:
return 0
elif y.added < x.added:
return 1
elif y.added > x.added:
return -1
else:
return y.priority-x.priority
self.queue.sort(cmp=sorter)
queueItem = self.queue[0]
if queueItem.priority < self.min_priority:
return
# launch the queue item in a thread
# TODO: improve thread name
threadName = self.queue_name + '-' + queueItem.get_thread_name()
self.thread = threading.Thread(None, queueItem.execute, threadName)
self.thread.start()
self.currentItem = queueItem
# take it out of the queue
del self.queue[0]
class QueueItem:
def __init__(self, name, action_id = 0):
self.name = name
self.inProgress = False
self.priority = QueuePriorities.NORMAL
self.thread_name = None
self.action_id = action_id
self.added = None
def get_thread_name(self):
if self.thread_name:
return self.thread_name
else:
return self.name.replace(" ","-").upper()
def execute(self):
"""Implementing classes should call this"""
self.inProgress = True
def finish(self):
"""Implementing Classes should call this"""
self.inProgress = False
| gpl-3.0 |
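A minimal subclass sketch of the QueueItem/GenericQueue handshake above; EchoItem is hypothetical and exists only to show execute() being run on a worker thread.
class EchoItem(QueueItem):
    def execute(self):
        QueueItem.execute(self)   # marks the item inProgress
        print 'running %s' % self.name

q = GenericQueue()
q.add_item(EchoItem('demo'))      # stamps .added with the current time
q.run()                           # spawns thread 'QUEUE-DEMO' and pops the item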
RobKal/Photomasking-Deansfield | createTasks.py | 34 | 8692 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of PyBOSSA.
#
# PyBOSSA is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# PyBOSSA is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with PyBOSSA. If not, see <http://www.gnu.org/licenses/>.
import json
from optparse import OptionParser
import pbclient
from get_images import get_s3_photos
import random
import logging
from requests import exceptions
from time import sleep
def contents(filename):
return file(filename).read()
def handle_arguments():
# Arguments for the application
usage = "usage: %prog [options]"
parser = OptionParser(usage)
# URL where PyBossa listens
parser.add_option("-s", "--server", dest="api_url",
help="PyBossa URL http://domain.com/", metavar="URL",
default="http://localhost:5000/")
# API-KEY
parser.add_option("-k", "--api-key", dest="api_key",
help="PyBossa User API-KEY to interact with PyBossa",
metavar="API-KEY")
# Create App
parser.add_option("-c", "--create-app", action="store_true",
dest="create_app",
help="Create the application",
metavar="CREATE-APP")
# Update template for tasks and long_description for app
parser.add_option("-t", "--update-template", action="store_true",
dest="update_template",
help="Update Tasks template",
metavar="UPDATE-TEMPLATE")
# Update tasks question
parser.add_option("-q", "--update-tasks",
type="int",
dest="update_tasks",
help="Update Tasks n_answers",
metavar="UPDATE-TASKS")
parser.add_option("-x", "--extra-task", action="store_true",
dest="add_more_tasks",
help="Add more tasks",
metavar="ADD-MORE-TASKS")
# S3 Bucket folder
parser.add_option("-b", "--s3-bucket", dest="s3_bucket_folder",
help="S3 Bucket folder to get pictures/photos",
metavar="S3-BUCKET-FOLDER")
# Modify the number of TaskRuns per Task
# (default 30)
parser.add_option("-n", "--number-answers",
type="int",
dest="n_answers",
help="Number of answers per task",
metavar="N-ANSWERS",
default=3)
parser.add_option("-a", "--application-config",
dest="app_config",
help="Application config file",
metavar="APP-CONFIG",
default="app.json")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose")
(options, args) = parser.parse_args()
if not options.create_app and not options.update_template\
and not options.add_more_tasks and not options.update_tasks:
parser.error("Please check --help or -h for the available options")
if not options.api_key:
parser.error("You must supply an API-KEY to create an \
application and tasks in PyBossa")
return options
def get_configuration():
options = handle_arguments()
# Load app details
try:
with file(options.app_config) as app_json:
app_config = json.load(app_json)
except IOError:
print "application config file is missing! Please create a new one"
exit(1)
return (app_config, options)
def run(app_config, options):
def check_api_error(api_response):
"""Check if returned API response contains an error"""
if type(api_response) == dict and (api_response.get('status') == 'failed'):
raise exceptions.HTTPError
def format_error(module, error):
"""Format the error for the given module"""
logging.error(module)
# Beautify JSON error
if type(error) == list:
print "Application not found"
else:
print json.dumps(error, sort_keys=True, indent=4, separators=(',', ': '))
exit(1)
def find_app_by_short_name():
try:
response = pbclient.find_app(short_name=app_config['short_name'])
check_api_error(response)
return response[0]
except:
format_error("pbclient.find_app", response)
def setup_app():
app = find_app_by_short_name()
app.long_description = contents('long_description.html')
app.info['task_presenter'] = contents('template.html')
app.info['thumbnail'] = app_config['thumbnail']
app.info['tutorial'] = contents('tutorial.html')
try:
response = pbclient.update_app(app)
check_api_error(response)
return app
except:
format_error("pbclient.update_app", response)
def create_photo_task(app, photo, question, priority=0):
# Data for the tasks
task_info = photo
try:
response = pbclient.create_task(app.id, task_info, priority_0=priority)
check_api_error(response)
except:
format_error("pbclient.create_task", response)
def add_photo_tasks(app):
# First of all we get the URL photos
# Then, we have to create a set of tasks for the application
# For this, we get first the photo URLs from S3
photos = get_s3_photos(options.s3_bucket_folder)
question = app_config['question']
#[create_photo_task(app, p, question, priority=random.random()) for p in photos]
for p in photos:
create_photo_task(app, p, question)
print "Creating task..."
sleep(4)
pbclient.set('api_key', options.api_key)
pbclient.set('endpoint', options.api_url)
if options.verbose:
print('Running against PyBosssa instance at: %s' % options.api_url)
print('Using API-KEY: %s' % options.api_key)
if options.create_app or options.add_more_tasks:
if options.s3_bucket_folder:
if options.create_app:
try:
response = pbclient.create_app(app_config['name'],
app_config['short_name'],
app_config['description'])
check_api_error(response)
app = setup_app()
except:
format_error("pbclient.create_app", response)
else:
app = find_app_by_short_name()
add_photo_tasks(app)
else:
parser.error("Please check --help or -h for the available options")
if options.update_template:
print "Updating app template"
# discard return value
setup_app()
if options.update_tasks:
def tasks(app):
offset = 0
limit = 100
while True:
try:
tasks = pbclient.get_tasks(app.id, offset=offset, limit=limit)
check_api_error(tasks)
if len(tasks) == 0:
break
for task in tasks:
yield task
offset += len(tasks)
except:
format_error("pbclient.get_tasks", response)
def update_task(task, count):
print "Updating task: %s" % task.id
if 'n_answers' in task.info:
del(task.info['n_answers'])
task.n_answers = options.update_tasks
try:
response = pbclient.update_task(task)
check_api_error(response)
count[0] += 1
except:
format_error("pbclient.update_task", response)
print "Updating task n_answers"
app = find_app_by_short_name()
n_tasks = [0]
[update_task(t, n_tasks) for t in tasks(app)]
print "%s Tasks have been updated!" % n_tasks[0]
if __name__ == "__main__":
app_config, options = get_configuration()
run(app_config, options)
| agpl-3.0 |
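A hedged invocation sketch derived from the OptionParser flags above; the server URL, key, and bucket name are placeholders.
# python createTasks.py -s http://pybossa.example.com/ -k <API-KEY> -c -b my-photos
#   -c creates the app described in app.json, -b names the S3 folder whose photo
#   URLs become tasks, and -n (default 3) sets the answers required per task.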
valkyriesavage/gasustainability | django/utils/unittest/result.py | 570 | 6105 | """Test result object"""
import sys
import traceback
import unittest
from StringIO import StringIO
from django.utils.unittest import util
from django.utils.unittest.compatibility import wraps
__unittest = True
def failfast(method):
@wraps(method)
def inner(self, *args, **kw):
if getattr(self, 'failfast', False):
self.stop()
return method(self, *args, **kw)
return inner
STDOUT_LINE = '\nStdout:\n%s'
STDERR_LINE = '\nStderr:\n%s'
class TestResult(unittest.TestResult):
"""Holder for test result information.
Test results are automatically managed by the TestCase and TestSuite
classes, and do not need to be explicitly manipulated by writers of tests.
Each instance holds the total number of tests run, and collections of
failures and errors that occurred among those test runs. The collections
contain tuples of (testcase, exceptioninfo), where exceptioninfo is the
formatted traceback of the error that occurred.
"""
_previousTestClass = None
_moduleSetUpFailed = False
def __init__(self):
self.failfast = False
self.failures = []
self.errors = []
self.testsRun = 0
self.skipped = []
self.expectedFailures = []
self.unexpectedSuccesses = []
self.shouldStop = False
self.buffer = False
self._stdout_buffer = None
self._stderr_buffer = None
self._original_stdout = sys.stdout
self._original_stderr = sys.stderr
self._mirrorOutput = False
def startTest(self, test):
"Called when the given test is about to be run"
self.testsRun += 1
self._mirrorOutput = False
if self.buffer:
if self._stderr_buffer is None:
self._stderr_buffer = StringIO()
self._stdout_buffer = StringIO()
sys.stdout = self._stdout_buffer
sys.stderr = self._stderr_buffer
def startTestRun(self):
"""Called once before any tests are executed.
See startTest for a method called before each test.
"""
def stopTest(self, test):
"""Called when the given test has been run"""
if self.buffer:
if self._mirrorOutput:
output = sys.stdout.getvalue()
error = sys.stderr.getvalue()
if output:
if not output.endswith('\n'):
output += '\n'
self._original_stdout.write(STDOUT_LINE % output)
if error:
if not error.endswith('\n'):
error += '\n'
self._original_stderr.write(STDERR_LINE % error)
sys.stdout = self._original_stdout
sys.stderr = self._original_stderr
self._stdout_buffer.seek(0)
self._stdout_buffer.truncate()
self._stderr_buffer.seek(0)
self._stderr_buffer.truncate()
self._mirrorOutput = False
def stopTestRun(self):
"""Called once after all tests are executed.
See stopTest for a method called after each test.
"""
@failfast
def addError(self, test, err):
"""Called when an error has occurred. 'err' is a tuple of values as
returned by sys.exc_info().
"""
self.errors.append((test, self._exc_info_to_string(err, test)))
self._mirrorOutput = True
@failfast
def addFailure(self, test, err):
"""Called when an error has occurred. 'err' is a tuple of values as
returned by sys.exc_info()."""
self.failures.append((test, self._exc_info_to_string(err, test)))
self._mirrorOutput = True
def addSuccess(self, test):
"Called when a test has completed successfully"
pass
def addSkip(self, test, reason):
"""Called when a test is skipped."""
self.skipped.append((test, reason))
def addExpectedFailure(self, test, err):
"""Called when an expected failure/error occured."""
self.expectedFailures.append(
(test, self._exc_info_to_string(err, test)))
@failfast
def addUnexpectedSuccess(self, test):
"""Called when a test was expected to fail, but succeed."""
self.unexpectedSuccesses.append(test)
def wasSuccessful(self):
"Tells whether or not this result was a success"
return (len(self.failures) + len(self.errors) == 0)
def stop(self):
"Indicates that the tests should be aborted"
self.shouldStop = True
def _exc_info_to_string(self, err, test):
"""Converts a sys.exc_info()-style tuple of values into a string."""
exctype, value, tb = err
# Skip test runner traceback levels
while tb and self._is_relevant_tb_level(tb):
tb = tb.tb_next
if exctype is test.failureException:
# Skip assert*() traceback levels
length = self._count_relevant_tb_levels(tb)
msgLines = traceback.format_exception(exctype, value, tb, length)
else:
msgLines = traceback.format_exception(exctype, value, tb)
if self.buffer:
output = sys.stdout.getvalue()
error = sys.stderr.getvalue()
if output:
if not output.endswith('\n'):
output += '\n'
msgLines.append(STDOUT_LINE % output)
if error:
if not error.endswith('\n'):
error += '\n'
msgLines.append(STDERR_LINE % error)
return ''.join(msgLines)
def _is_relevant_tb_level(self, tb):
return '__unittest' in tb.tb_frame.f_globals
def _count_relevant_tb_levels(self, tb):
length = 0
while tb and not self._is_relevant_tb_level(tb):
length += 1
tb = tb.tb_next
return length
def __repr__(self):
return "<%s run=%i errors=%i failures=%i>" % \
(util.strclass(self.__class__), self.testsRun, len(self.errors),
len(self.failures))
| bsd-3-clause |
nott/next.filmfest.by | cpm_generic/constants.py | 2 | 7163 | from django.utils.translation import ugettext_lazy as _
COUNTRIES = (
('AD', _('Andorra')),
('AE', _('United Arab Emirates')),
('AF', _('Afghanistan')),
('AG', _('Antigua & Barbuda')),
('AI', _('Anguilla')),
('AL', _('Albania')),
('AM', _('Armenia')),
('AN', _('Netherlands Antilles')),
('AO', _('Angola')),
('AQ', _('Antarctica')),
('AR', _('Argentina')),
('AS', _('American Samoa')),
('AT', _('Austria')),
('AU', _('Australia')),
('AW', _('Aruba')),
('AZ', _('Azerbaijan')),
('BA', _('Bosnia and Herzegovina')),
('BB', _('Barbados')),
('BD', _('Bangladesh')),
('BE', _('Belgium')),
('BF', _('Burkina Faso')),
('BG', _('Bulgaria')),
('BH', _('Bahrain')),
('BI', _('Burundi')),
('BJ', _('Benin')),
('BM', _('Bermuda')),
('BN', _('Brunei Darussalam')),
('BO', _('Bolivia')),
('BR', _('Brazil')),
('BS', _('Bahama')),
('BT', _('Bhutan')),
('BV', _('Bouvet Island')),
('BW', _('Botswana')),
('BY', _('Belarus')),
('BZ', _('Belize')),
('CA', _('Canada')),
('CC', _('Cocos (Keeling) Islands')),
('CF', _('Central African Republic')),
('CG', _('Congo')),
('CH', _('Switzerland')),
('CI', _('Ivory Coast')),
    ('CK', _('Cook Islands')),
('CL', _('Chile')),
('CM', _('Cameroon')),
('CN', _('China')),
('CO', _('Colombia')),
('CR', _('Costa Rica')),
('CU', _('Cuba')),
('CV', _('Cape Verde')),
('CX', _('Christmas Island')),
('CY', _('Cyprus')),
('CZ', _('Czech Republic')),
('DE', _('Germany')),
('DJ', _('Djibouti')),
('DK', _('Denmark')),
('DM', _('Dominica')),
('DO', _('Dominican Republic')),
('DZ', _('Algeria')),
('EC', _('Ecuador')),
('EE', _('Estonia')),
('EG', _('Egypt')),
('EH', _('Western Sahara')),
('ER', _('Eritrea')),
('ES', _('Spain')),
('ET', _('Ethiopia')),
('FI', _('Finland')),
('FJ', _('Fiji')),
('FK', _('Falkland Islands (Malvinas)')),
('FM', _('Micronesia')),
('FO', _('Faroe Islands')),
('FR', _('France')),
('FX', _('France, Metropolitan')),
('GA', _('Gabon')),
('GB', _('United Kingdom (Great Britain)')),
('GD', _('Grenada')),
('GE', _('Georgia')),
('GF', _('French Guiana')),
('GH', _('Ghana')),
('GI', _('Gibraltar')),
('GL', _('Greenland')),
('GM', _('Gambia')),
('GN', _('Guinea')),
('GP', _('Guadeloupe')),
('GQ', _('Equatorial Guinea')),
('GR', _('Greece')),
('GS', _('South Georgia and the South Sandwich Islands')),
('GT', _('Guatemala')),
('GU', _('Guam')),
('GW', _('Guinea-Bissau')),
('GY', _('Guyana')),
('HK', _('Hong Kong')),
('HM', _('Heard & McDonald Islands')),
('HN', _('Honduras')),
('HR', _('Croatia')),
('HT', _('Haiti')),
('HU', _('Hungary')),
('ID', _('Indonesia')),
('IE', _('Ireland')),
('IL', _('Israel')),
('IN', _('India')),
('IO', _('British Indian Ocean Territory')),
('IQ', _('Iraq')),
('IR', _('Islamic Republic of Iran')),
('IS', _('Iceland')),
('IT', _('Italy')),
('JM', _('Jamaica')),
('JO', _('Jordan')),
('JP', _('Japan')),
('KE', _('Kenya')),
('KG', _('Kyrgyzstan')),
('KH', _('Cambodia')),
('KI', _('Kiribati')),
('KM', _('Comoros')),
('KN', _('St. Kitts and Nevis')),
('KP', _('Korea, Democratic People\'s Republic of')),
('KR', _('Korea, Republic of')),
('KW', _('Kuwait')),
('KY', _('Cayman Islands')),
('KZ', _('Kazakhstan')),
('LA', _('Lao People\'s Democratic Republic')),
('LB', _('Lebanon')),
('LC', _('Saint Lucia')),
('LI', _('Liechtenstein')),
('LK', _('Sri Lanka')),
('LR', _('Liberia')),
('LS', _('Lesotho')),
('LT', _('Lithuania')),
('LU', _('Luxembourg')),
('LV', _('Latvia')),
('LY', _('Libyan Arab Jamahiriya')),
('MA', _('Morocco')),
('MC', _('Monaco')),
('MD', _('Moldova, Republic of')),
('MG', _('Madagascar')),
('MH', _('Marshall Islands')),
('ML', _('Mali')),
('MN', _('Mongolia')),
('MM', _('Myanmar')),
('MO', _('Macau')),
('MP', _('Northern Mariana Islands')),
('MQ', _('Martinique')),
('MR', _('Mauritania')),
    ('MS', _('Montserrat')),
('MT', _('Malta')),
('MU', _('Mauritius')),
('MV', _('Maldives')),
('MW', _('Malawi')),
('MX', _('Mexico')),
('MY', _('Malaysia')),
('MZ', _('Mozambique')),
('NA', _('Namibia')),
('NC', _('New Caledonia')),
('NE', _('Niger')),
('NF', _('Norfolk Island')),
('NG', _('Nigeria')),
('NI', _('Nicaragua')),
('NL', _('Netherlands')),
('NO', _('Norway')),
('NP', _('Nepal')),
('NR', _('Nauru')),
('NU', _('Niue')),
('NZ', _('New Zealand')),
('OM', _('Oman')),
('PA', _('Panama')),
('PE', _('Peru')),
('PF', _('French Polynesia')),
('PG', _('Papua New Guinea')),
('PH', _('Philippines')),
('PK', _('Pakistan')),
('PL', _('Poland')),
('PM', _('St. Pierre & Miquelon')),
('PN', _('Pitcairn')),
('PR', _('Puerto Rico')),
('PT', _('Portugal')),
('PW', _('Palau')),
('PY', _('Paraguay')),
('QA', _('Qatar')),
('RE', _('Reunion')),
('RO', _('Romania')),
('RU', _('Russian Federation')),
('RW', _('Rwanda')),
('SA', _('Saudi Arabia')),
('SB', _('Solomon Islands')),
('SC', _('Seychelles')),
('SD', _('Sudan')),
('SE', _('Sweden')),
('SG', _('Singapore')),
('SH', _('St. Helena')),
('SI', _('Slovenia')),
('SJ', _('Svalbard & Jan Mayen Islands')),
('SK', _('Slovakia')),
('SL', _('Sierra Leone')),
('SM', _('San Marino')),
('SN', _('Senegal')),
('SO', _('Somalia')),
('SR', _('Suriname')),
('ST', _('Sao Tome & Principe')),
('SV', _('El Salvador')),
('SY', _('Syrian Arab Republic')),
('SZ', _('Swaziland')),
('TC', _('Turks & Caicos Islands')),
('TD', _('Chad')),
('TF', _('French Southern Territories')),
('TG', _('Togo')),
('TH', _('Thailand')),
('TJ', _('Tajikistan')),
('TK', _('Tokelau')),
('TM', _('Turkmenistan')),
('TN', _('Tunisia')),
('TO', _('Tonga')),
('TP', _('East Timor')),
('TR', _('Turkey')),
('TT', _('Trinidad & Tobago')),
('TV', _('Tuvalu')),
('TW', _('Taiwan, Province of China')),
('TZ', _('Tanzania, United Republic of')),
('UA', _('Ukraine')),
('UG', _('Uganda')),
('UM', _('United States Minor Outlying Islands')),
('US', _('United States of America')),
('UY', _('Uruguay')),
('UZ', _('Uzbekistan')),
('VA', _('Vatican City State (Holy See)')),
('VC', _('St. Vincent & the Grenadines')),
('VE', _('Venezuela')),
('VG', _('British Virgin Islands')),
('VI', _('United States Virgin Islands')),
('VN', _('Viet Nam')),
('VU', _('Vanuatu')),
('WF', _('Wallis & Futuna Islands')),
('WS', _('Samoa')),
('YE', _('Yemen')),
('YT', _('Mayotte')),
('YU', _('Yugoslavia')),
('ZA', _('South Africa')),
('ZM', _('Zambia')),
('ZR', _('Zaire')),
('ZW', _('Zimbabwe')),
('ZZ', _('Other')),
)
| unlicense |
Bushstar/UFO-Project | test/functional/mempool_limit.py | 2 | 3315 | #!/usr/bin/env python3
# Copyright (c) 2014-2018 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test mempool limiting together/eviction with the wallet."""
from decimal import Decimal
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, assert_greater_than, assert_raises_rpc_error, create_confirmed_utxos, create_lots_of_big_transactions, gen_return_txouts
class MempoolLimitTest(BitcoinTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 1
self.extra_args = [["-maxmempool=5", "-spendzeroconfchange=0"]]
def run_test(self):
txouts = gen_return_txouts()
relayfee = self.nodes[0].getnetworkinfo()['relayfee']
        self.log.info('Check that mempoolminfee is minrelaytxfee')
assert_equal(self.nodes[0].getmempoolinfo()['minrelaytxfee'], Decimal('0.00001000'))
assert_equal(self.nodes[0].getmempoolinfo()['mempoolminfee'], Decimal('0.00001000'))
txids = []
utxos = create_confirmed_utxos(relayfee, self.nodes[0], 91)
self.log.info('Create a mempool tx that will be evicted')
us0 = utxos.pop()
inputs = [{ "txid" : us0["txid"], "vout" : us0["vout"]}]
outputs = {self.nodes[0].getnewaddress() : 0.0001}
tx = self.nodes[0].createrawtransaction(inputs, outputs)
self.nodes[0].settxfee(relayfee) # specifically fund this tx with low fee
txF = self.nodes[0].fundrawtransaction(tx)
self.nodes[0].settxfee(0) # return to automatic fee selection
txFS = self.nodes[0].signrawtransactionwithwallet(txF['hex'])
txid = self.nodes[0].sendrawtransaction(txFS['hex'])
relayfee = self.nodes[0].getnetworkinfo()['relayfee']
base_fee = relayfee*100
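        # Fill the mempool in three batches of 30 big txs at increasing fee rates;
        # the low-fee tx funded above should be evicted first.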
        for i in range(3):
txids.append([])
txids[i] = create_lots_of_big_transactions(self.nodes[0], txouts, utxos[30*i:30*i+30], 30, (i+1)*base_fee)
self.log.info('The tx should be evicted by now')
        assert txid not in self.nodes[0].getrawmempool()
        txdata = self.nodes[0].gettransaction(txid)
        assert txdata['confirmations'] == 0  # confirmation should still be 0
        self.log.info('Check that mempoolminfee is larger than minrelaytxfee')
assert_equal(self.nodes[0].getmempoolinfo()['minrelaytxfee'], Decimal('0.00001000'))
assert_greater_than(self.nodes[0].getmempoolinfo()['mempoolminfee'], Decimal('0.00001000'))
self.log.info('Create a mempool tx that will not pass mempoolminfee')
us0 = utxos.pop()
inputs = [{ "txid" : us0["txid"], "vout" : us0["vout"]}]
outputs = {self.nodes[0].getnewaddress() : 0.0001}
tx = self.nodes[0].createrawtransaction(inputs, outputs)
# specifically fund this tx with a fee < mempoolminfee, >= than minrelaytxfee
txF = self.nodes[0].fundrawtransaction(tx, {'feeRate': relayfee})
txFS = self.nodes[0].signrawtransactionwithwallet(txF['hex'])
assert_raises_rpc_error(-26, "mempool min fee not met", self.nodes[0].sendrawtransaction, txFS['hex'])
if __name__ == '__main__':
MempoolLimitTest().main()
| mit |
hendradarwin/VTK | ThirdParty/AutobahnPython/autobahn/wamp1/prefixmap.py | 35 | 4187 | ###############################################################################
##
## Copyright (C) 2011-2013 Tavendo GmbH
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
###############################################################################
__all__ = ("PrefixMap",)
class PrefixMap:
"""
Provides a two-way mapping between CURIEs (Compact URI Expressions) and
full URIs. See http://www.w3.org/TR/curie/.
"""
def __init__(self):
self.index = {}
self.rindex = {}
## add a couple of well-know prefixes
##
#self.set("owl", "http://www.w3.org/2002/07/owl#")
#self.set("rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#")
#self.set("rdfs", "http://www.w3.org/2000/01/rdf-schema#")
#self.set("rdfa", "http://www.w3.org/ns/rdfa#")
#self.set("xhv", "http://www.w3.org/1999/xhtml/vocab#")
#self.set("xml", "http://www.w3.org/XML/1998/namespace")
#self.set("xsd", "http://www.w3.org/2001/XMLSchema#")
def get(self, prefix):
"""
Returns the URI for the prefix or None if prefix has no mapped URI.
:param prefix: Prefix to map.
:type prefix: str
:returns: str -- Mapped URI for prefix or None.
"""
return self.index.get(prefix, None)
def set(self, prefix, uri):
"""
Set mapping of prefix to URI.
:param prefix: Prefix to be mapped.
:type prefix: str
:param uri: URI the prefix is to be mapped to.
:type uri: str
"""
self.index[prefix] = uri
self.rindex[uri] = prefix
def setDefault(self, uri):
"""
Set default URI mapping of empty prefix (prefix of length 0).
:param uri: URI the empty prefix to be mapped to (i.e. :label should map to uri:label).
        :type uri: str
"""
self.set("", uri)
def remove(self, prefix):
"""
Remove mapping of prefix to URI.
:param prefix: Prefix for which mapping should be removed.
        :type prefix: str
"""
uri = self.index.get(prefix, None)
if uri:
del self.index[prefix]
del self.rindex[uri]
def resolve(self, curie):
"""
Resolve given CURIE to full URI.
:param curie: CURIE (i.e. "rdf:label").
:type curie: str
:returns: str -- Full URI for CURIE or None.
"""
i = curie.find(":")
if i > 0:
prefix = curie[:i]
            if prefix in self.index:
return self.index[prefix] + curie[i+1:]
return None
def resolveOrPass(self, curieOrUri):
"""
Resolve given CURIE/URI and return string verbatim if cannot be resolved.
:param curieOrUri: CURIE or URI.
:type curieOrUri: str
:returns: str -- Full URI for CURIE or original string.
"""
u = self.resolve(curieOrUri)
if u:
return u
else:
return curieOrUri
def shrink(self, uri):
"""
Shrink given URI to CURIE. If no appropriate prefix mapping is available,
return original URI.
:param uri: URI to shrink.
:type uri: str
:returns str -- CURIE or original URI.
"""
for i in xrange(len(uri), 1, -1):
u = uri[:i]
p = self.rindex.get(u, None)
if p:
return p + ":" + uri[i:]
return uri
if __name__ == '__main__':
m = PrefixMap()
print(m.resolve("http://www.w3.org/1999/02/22-rdf-syntax-ns#label"))
print(m.resolve("rdf:label"))
print(m.resolve("foobar:label"))
print(m.shrink("http://www.w3.org/1999/02/22-rdf-syntax-ns#"))
print(m.shrink("http://www.w3.org/1999/02/22-rdf-syntax-ns#label"))
print(m.shrink("http://foobar.org#label"))
| bsd-3-clause |
rex-xxx/mt6572_x201 | external/markdown/markdown/commandline.py | 126 | 3534 | """
COMMAND-LINE SPECIFIC STUFF
=============================================================================
The rest of the code is specifically for handling the case where Python
Markdown is called from the command line.
"""
import markdown
import sys
import logging
from logging import DEBUG, INFO, WARN, ERROR, CRITICAL
EXECUTABLE_NAME_FOR_USAGE = "python markdown.py"
""" The name used in the usage statement displayed for python versions < 2.3.
(With python 2.3 and higher the usage statement is generated by optparse
and uses the actual name of the executable called.) """
OPTPARSE_WARNING = """
Python 2.3 or higher required for advanced command line options.
For lower versions of Python use:
%s INPUT_FILE > OUTPUT_FILE
""" % EXECUTABLE_NAME_FOR_USAGE
def parse_options():
"""
Define and parse `optparse` options for command-line usage.
"""
try:
optparse = __import__("optparse")
except:
if len(sys.argv) == 2:
return {'input': sys.argv[1],
'output': None,
'safe': False,
'extensions': [],
'encoding': None }, CRITICAL
else:
print OPTPARSE_WARNING
return None, None
parser = optparse.OptionParser(usage="%prog INPUTFILE [options]")
parser.add_option("-f", "--file", dest="filename", default=sys.stdout,
help="write output to OUTPUT_FILE",
metavar="OUTPUT_FILE")
parser.add_option("-e", "--encoding", dest="encoding",
help="encoding for input and output files",)
parser.add_option("-q", "--quiet", default = CRITICAL,
action="store_const", const=CRITICAL+10, dest="verbose",
help="suppress all messages")
parser.add_option("-v", "--verbose",
action="store_const", const=INFO, dest="verbose",
help="print info messages")
parser.add_option("-s", "--safe", dest="safe", default=False,
metavar="SAFE_MODE",
help="safe mode ('replace', 'remove' or 'escape' user's HTML tag)")
parser.add_option("-o", "--output_format", dest="output_format",
default='xhtml1', metavar="OUTPUT_FORMAT",
help="Format of output. One of 'xhtml1' (default) or 'html4'.")
parser.add_option("--noisy",
action="store_const", const=DEBUG, dest="verbose",
help="print debug messages")
parser.add_option("-x", "--extension", action="append", dest="extensions",
help = "load extension EXTENSION", metavar="EXTENSION")
(options, args) = parser.parse_args()
if not len(args) == 1:
parser.print_help()
return None, None
else:
input_file = args[0]
if not options.extensions:
options.extensions = []
return {'input': input_file,
'output': options.filename,
'safe_mode': options.safe,
'extensions': options.extensions,
'encoding': options.encoding,
'output_format': options.output_format}, options.verbose
def run():
"""Run Markdown from the command line."""
# Parse options and adjust logging level if necessary
options, logging_level = parse_options()
if not options: sys.exit(0)
if logging_level: logging.getLogger('MARKDOWN').setLevel(logging_level)
# Run
markdown.markdownFromFile(**options)
| gpl-2.0 |
c0deh4xor/CapTipper | storage/fiddler2pcap/fiddler2pcap.py | 7 | 7783 | #!/usr/bin/python
# This is probably useful to like 4 people. Some of the packet injection stuff is taken from rule2alert https://code.google.com/p/rule2alert/ which is GPLv2, so I guess this is as well.
# This is ultra-alpha: if anything isn't right it will fall on its face and probably cause you to run away from it screaming into the night
# This file is part of fiddler2pcap project (https://github.com/EmergingThreats/fiddler2pcap) by David McNelis
#TODO:
# 1. Optionally trim request line to start with uripath
# 2. Better error checking... Well any error checking really.
import random
import os
import sys
import re
import zipfile
import tempfile
import shutil
from xml.dom.minidom import parse, parseString
from scapy.utils import PcapWriter
from scapy.all import *
import glob
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-i", dest="input_target", type="string", help="path to fiddler raw directory we will read from glob format or path to saz file with --saz option")
parser.add_option("-o", dest="output_pcap", type="string", help="path to output PCAP file")
parser.add_option("--src", dest="srcip", type="string", help="src ip address to use if not specified we read it from the XML")
parser.add_option("--dst", dest="dstip", type="string", help="dst ip address to use if not specified we read it from the XML")
parser.add_option("--dproxy", dest="dproxy", action="store_true", default=False, help="attempt to unproxify the pcap")
parser.add_option("--saz", dest="input_is_saz", action="store_true", default=False, help="input is saz instead of raw directory")
src = None
dst = None
def validate_ip(ip):
if re.match(r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$",ip) != None:
return True
else:
print "The ip address you provides is invalid %s exiting" % (ip)
sys.exit(-1)
(options, args) = parser.parse_args()
if options == []:
print parser.print_help()
sys.exit(-1)
if not options.input_target or options.input_target == "":
print parser.print_help()
sys.exit(-1)
if not options.output_pcap or options.output_pcap == "":
print parser.print_help()
sys.exit(-1)
if options.srcip and validate_ip(options.srcip):
src = options.srcip
if options.dstip and validate_ip(options.dstip):
dst = options.dstip
#Open our packet dumper
pktdump = PcapWriter(options.output_pcap, sync=True)
def build_handshake(src,dst,sport,dport):
ipsrc = src
ipdst = dst
portsrc = sport
portdst = dport
# We don't deal with session wrap around so lets make the range smaller for now
# client_isn = random.randint(1024, (2**32)-1)
# server_isn = random.randint(1024, (2**32)-1)
client_isn = random.randint(1024, 10000)
server_isn = random.randint(1024, 10000)
syn = IP(src=ipsrc, dst=ipdst)/TCP(flags="S", sport=portsrc, dport=portdst, seq=client_isn)
synack = IP(src=ipdst, dst=ipsrc)/TCP(flags="SA", sport=portdst, dport=portsrc, seq=server_isn, ack=syn.seq+1)
ack = IP(src=ipsrc, dst=ipdst)/TCP(flags="A", sport=portsrc, dport=portdst, seq=syn.seq+1, ack=synack.seq+1)
pktdump.write(syn)
pktdump.write(synack)
pktdump.write(ack)
return(ack.seq,ack.ack)
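# build_handshake() hands back the (seq, ack) state that seeds the first data segment in make_poop().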
def build_finshake(src,dst,sport,dport,seq,ack):
ipsrc = src
ipdst = dst
portsrc = sport
portdst = dport
finAck = IP(src=ipsrc, dst=ipdst)/TCP(flags="FA", sport=sport, dport=dport, seq=seq, ack=ack)
finalAck = IP(src=ipdst, dst=ipsrc)/TCP(flags="A", sport=dport, dport=sport, seq=finAck.ack, ack=finAck.seq+1)
pktdump.write(finAck)
pktdump.write(finalAck)
#http://stackoverflow.com/questions/18854620/whats-the-best-way-to-split-a-string-into-fixed-length-chunks-and-work-with-the
def chunkstring(string, length):
return (string[0+i:length+i] for i in range(0, len(string), length))
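# e.g. chunkstring("abcdef", 2) lazily yields "ab", "cd", "ef".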
def make_poop(src,dst,sport,dport,seq,ack,payload):
segments = []
if len(payload) > 1460:
segments=chunkstring(payload,1460)
else:
segments.append(payload)
ipsrc = src
ipdst = dst
portsrc = sport
portdst = dport
for segment in segments:
p = IP(src=ipsrc, dst=ipdst)/TCP(flags="PA", sport=sport, dport=dport, seq=seq, ack=ack)/segment
returnAck = IP(src=ipdst, dst=ipsrc)/TCP(flags="A", sport=dport, dport=sport, seq=p.ack, ack=(p.seq + len(p[Raw])))
seq = returnAck.ack
ack = returnAck.seq
pktdump.write(p)
pktdump.write(returnAck)
return(returnAck.seq,returnAck.ack)
if options.input_is_saz and os.path.isfile(options.input_target):
try:
options.tmpdir = tempfile.mkdtemp()
except:
print "failed to create temp directory for saz extraction"
sys.exit(-1)
try:
z = zipfile.ZipFile(options.input_target,"r")
except:
print "failed to open saz file %s" % (options.input_target)
sys.exit(-1)
try:
z.extractall(options.tmpdir)
z.close()
except:
print "failed to extract saz file %s to %s" % (options.input_target, options.tmpdir)
sys.exit(-1)
if os.path.isdir("%s/raw/" % (options.tmpdir)):
options.fiddler_raw_dir = "%s/raw/" % (options.tmpdir)
else:
print "failed to find raw directory in extracted files %s/raw (must remove tmp file yourself)" % (options.tmpdir)
sys.exit(-1)
elif os.path.isdir(options.input_target):
options.fiddler_raw_dir = options.input_target
options.tmpdir = None
if os.path.isdir(options.fiddler_raw_dir):
m_file_list=glob.glob("%s/%s" % (options.fiddler_raw_dir,"*_m.xml"))
m_file_list.sort()
for xml_file in m_file_list:
sport=""
dport=80
dom = parse(xml_file)
m = re.match(r"^(?P<fid>\d+)_m\.xml",os.path.basename(xml_file))
if m:
fid = m.group("fid")
else:
print("failed to get fiddler id tag")
sys.exit(-1)
xmlTags = dom.getElementsByTagName('SessionFlag')
for xmlTag in xmlTags:
xmlTag = xmlTag.toxml()
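            # Pull the x-clientip / x-clientport / x-hostip session flags out of the
            # XML metadata to recover the connection endpoints.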
m = re.match(r"\<SessionFlag N=\x22x-(?:client(?:ip\x22 V=\x22[^\x22]*?(?P<clientip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})|port\x22 V=\x22(?P<sport>\d+))|hostip\x22 V=\x22[^\x22]*?(?P<hostip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}))\x22",xmlTag)
if m and m.group("sport"):
sport = int(m.group("sport"))
#sport = random.randint(1024, 65535)
elif m and m.group("clientip") and src == None:
src = m.group("clientip")
elif m and m.group("hostip") and dst == None:
dst = m.group("hostip")
req = open(options.fiddler_raw_dir + fid + "_c.txt").read()
m=re.match(r"^[^\r\n\s]+\s+(?P<host_and_port>https?\:\/\/[^\/\r\n\:]+(\:(?P<dport>\d{1,5}))?)\/",req)
if m and options.dproxy and m.group("host_and_port"):
req = req.replace(m.group("host_and_port"),"",1)
if m.group("dport") and int(m.group("dport")) <= 65535:
dport = int(m.group("dport"))
resp = open(options.fiddler_raw_dir + fid + "_s.txt").read()
print "src: %s dst: %s sport: %s dport: %s" % (src, dst, sport, dport)
(seq,ack)=build_handshake(src,dst,sport,dport)
(seq,ack)=make_poop(src,dst,sport,dport,seq,ack,req)
(seq,ack)=make_poop(dst,src,dport,sport,seq,ack,resp)
build_finshake(src,dst,sport,dport,seq,ack)
if options.tmpdir:
try:
shutil.rmtree(options.tmpdir)
except:
print "failed to clean up tmpdir %s you will have to do it" % (options.tmpdir)
else:
print "fiddler raw dir specified:%s dos not exist" % (options.fiddler_raw_dir)
sys.exit(-1)
pktdump.close()
| gpl-3.0 |
hectormartinez/rougexstem | taln2016/icsisumm-primary-sys34_v1/nltk/nltk-0.9.2/yaml/constructor.py | 9 | 25101 |
__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
'ConstructorError']
from error import *
from nodes import *
import datetime
try:
set
except NameError:
from sets import Set as set
import binascii, re, sys
class ConstructorError(MarkedYAMLError):
pass
class BaseConstructor(object):
yaml_constructors = {}
yaml_multi_constructors = {}
def __init__(self):
self.constructed_objects = {}
self.recursive_objects = {}
self.state_generators = []
self.deep_construct = False
def check_data(self):
# If there are more documents available?
return self.check_node()
def get_data(self):
# Construct and return the next document.
if self.check_node():
return self.construct_document(self.get_node())
def g(): yield None
generator_type = type(g())
del g
def construct_document(self, node):
data = self.construct_object(node)
while self.state_generators:
state_generators = self.state_generators
self.state_generators = []
for generator in state_generators:
for dummy in generator:
pass
self.constructed_objects = {}
self.recursive_objects = {}
self.deep_construct = False
return data
def construct_object(self, node, deep=False):
if deep:
old_deep = self.deep_construct
self.deep_construct = True
if node in self.constructed_objects:
return self.constructed_objects[node]
if node in self.recursive_objects:
raise ConstructorError(None, None,
"found unconstructable recursive node", node.start_mark)
self.recursive_objects[node] = None
constructor = None
state_constructor = None
tag_suffix = None
if node.tag in self.yaml_constructors:
constructor = self.yaml_constructors[node.tag]
else:
for tag_prefix in self.yaml_multi_constructors:
if node.tag.startswith(tag_prefix):
tag_suffix = node.tag[len(tag_prefix):]
constructor = self.yaml_multi_constructors[tag_prefix]
break
else:
if None in self.yaml_multi_constructors:
tag_suffix = node.tag
constructor = self.yaml_multi_constructors[None]
elif None in self.yaml_constructors:
constructor = self.yaml_constructors[None]
elif isinstance(node, ScalarNode):
constructor = self.__class__.construct_scalar
elif isinstance(node, SequenceNode):
constructor = self.__class__.construct_sequence
elif isinstance(node, MappingNode):
constructor = self.__class__.construct_mapping
if tag_suffix is None:
data = constructor(self, node)
else:
data = constructor(self, tag_suffix, node)
if isinstance(data, self.generator_type):
generator = data
data = generator.next()
if self.deep_construct:
for dummy in generator:
pass
else:
self.state_generators.append(generator)
self.constructed_objects[node] = data
del self.recursive_objects[node]
if deep:
self.deep_construct = old_deep
return data
def construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(None, None,
"expected a scalar node, but found %s" % node.id,
node.start_mark)
return node.value
def construct_sequence(self, node, deep=False):
if not isinstance(node, SequenceNode):
raise ConstructorError(None, None,
"expected a sequence node, but found %s" % node.id,
node.start_mark)
return [self.construct_object(child, deep=deep)
for child in node.value]
def construct_mapping(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
mapping = {}
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError, exc:
raise ConstructorError("while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark)
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
def construct_pairs(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
pairs = []
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
value = self.construct_object(value_node, deep=deep)
pairs.append((key, value))
return pairs
def add_constructor(cls, tag, constructor):
if not 'yaml_constructors' in cls.__dict__:
cls.yaml_constructors = cls.yaml_constructors.copy()
cls.yaml_constructors[tag] = constructor
add_constructor = classmethod(add_constructor)
def add_multi_constructor(cls, tag_prefix, multi_constructor):
if not 'yaml_multi_constructors' in cls.__dict__:
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
add_multi_constructor = classmethod(add_multi_constructor)
class SafeConstructor(BaseConstructor):
def construct_scalar(self, node):
if isinstance(node, MappingNode):
for key_node, value_node in node.value:
if key_node.tag == u'tag:yaml.org,2002:value':
return self.construct_scalar(value_node)
return BaseConstructor.construct_scalar(self, node)
def flatten_mapping(self, node):
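        # Resolve YAML merge keys ("<<", tag:yaml.org,2002:merge) by splicing
        # the referenced mapping(s) into this node's own key/value list.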
merge = []
index = 0
while index < len(node.value):
key_node, value_node = node.value[index]
if key_node.tag == u'tag:yaml.org,2002:merge':
del node.value[index]
if isinstance(value_node, MappingNode):
self.flatten_mapping(value_node)
merge.extend(value_node.value)
elif isinstance(value_node, SequenceNode):
submerge = []
for subnode in value_node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing a mapping",
node.start_mark,
"expected a mapping for merging, but found %s"
% subnode.id, subnode.start_mark)
self.flatten_mapping(subnode)
submerge.append(subnode.value)
submerge.reverse()
for value in submerge:
merge.extend(value)
else:
raise ConstructorError("while constructing a mapping", node.start_mark,
"expected a mapping or list of mappings for merging, but found %s"
% value_node.id, value_node.start_mark)
elif key_node.tag == u'tag:yaml.org,2002:value':
key_node.tag = u'tag:yaml.org,2002:str'
index += 1
else:
index += 1
if merge:
node.value = merge + node.value
def construct_mapping(self, node, deep=False):
if isinstance(node, MappingNode):
self.flatten_mapping(node)
return BaseConstructor.construct_mapping(self, node, deep=deep)
def construct_yaml_null(self, node):
self.construct_scalar(node)
return None
bool_values = {
u'yes': True,
u'no': False,
u'true': True,
u'false': False,
u'on': True,
u'off': False,
}
def construct_yaml_bool(self, node):
value = self.construct_scalar(node)
return self.bool_values[value.lower()]
def construct_yaml_int(self, node):
value = str(self.construct_scalar(node))
value = value.replace('_', '')
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '0':
return 0
elif value.startswith('0b'):
return sign*int(value[2:], 2)
elif value.startswith('0x'):
return sign*int(value[2:], 16)
elif value[0] == '0':
return sign*int(value, 8)
elif ':' in value:
digits = [int(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*int(value)
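    # The ':'-separated branch above implements YAML 1.1 sexagesimal integers,
    # e.g. "1:30" parses to 90.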
inf_value = 1e300
while inf_value != inf_value*inf_value:
inf_value *= inf_value
nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99).
def construct_yaml_float(self, node):
value = str(self.construct_scalar(node))
value = value.replace('_', '').lower()
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '.inf':
return sign*self.inf_value
elif value == '.nan':
return self.nan_value
elif ':' in value:
digits = [float(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0.0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*float(value)
def construct_yaml_binary(self, node):
value = self.construct_scalar(node)
try:
return str(value).decode('base64')
except (binascii.Error, UnicodeEncodeError), exc:
raise ConstructorError(None, None,
"failed to decode base64 data: %s" % exc, node.start_mark)
timestamp_regexp = re.compile(
ur'''^(?P<year>[0-9][0-9][0-9][0-9])
-(?P<month>[0-9][0-9]?)
-(?P<day>[0-9][0-9]?)
(?:(?:[Tt]|[ \t]+)
(?P<hour>[0-9][0-9]?)
:(?P<minute>[0-9][0-9])
:(?P<second>[0-9][0-9])
(?:(?P<fraction>\.[0-9]*))?
(?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
def construct_yaml_timestamp(self, node):
value = self.construct_scalar(node)
match = self.timestamp_regexp.match(node.value)
values = match.groupdict()
year = int(values['year'])
month = int(values['month'])
day = int(values['day'])
if not values['hour']:
return datetime.date(year, month, day)
hour = int(values['hour'])
minute = int(values['minute'])
second = int(values['second'])
fraction = 0
if values['fraction']:
fraction = int(float(values['fraction'])*1000000)
delta = None
if values['tz_sign']:
tz_hour = int(values['tz_hour'])
tz_minute = int(values['tz_minute'] or 0)
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
if values['tz_sign'] == '-':
delta = -delta
data = datetime.datetime(year, month, day, hour, minute, second, fraction)
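        # Normalize to UTC by subtracting the timezone offset; the result is a naive datetime.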
if delta:
data -= delta
return data
def construct_yaml_omap(self, node):
# Note: we do not check for duplicate keys, because it's too
# CPU-expensive.
omap = []
yield omap
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
omap.append((key, value))
def construct_yaml_pairs(self, node):
# Note: the same code as `construct_yaml_omap`.
pairs = []
yield pairs
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
pairs.append((key, value))
def construct_yaml_set(self, node):
data = set()
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_str(self, node):
value = self.construct_scalar(node)
try:
return str(value)
except UnicodeEncodeError:
return value
def construct_yaml_seq(self, node):
data = []
yield data
data.extend(self.construct_sequence(node))
def construct_yaml_map(self, node):
data = {}
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_object(self, node, cls):
data = cls.__new__(cls)
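        # Yield the bare instance before populating it so the base constructor can
        # register it, letting recursive/aliased nodes resolve while state is set below.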
yield data
if hasattr(data, '__setstate__'):
state = self.construct_mapping(node, deep=True)
data.__setstate__(state)
else:
state = self.construct_mapping(node)
data.__dict__.update(state)
def construct_undefined(self, node):
raise ConstructorError(None, None,
"could not determine a constructor for the tag %r" % node.tag.encode('utf-8'),
node.start_mark)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:null',
SafeConstructor.construct_yaml_null)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:bool',
SafeConstructor.construct_yaml_bool)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:int',
SafeConstructor.construct_yaml_int)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:float',
SafeConstructor.construct_yaml_float)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:binary',
SafeConstructor.construct_yaml_binary)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:timestamp',
SafeConstructor.construct_yaml_timestamp)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:omap',
SafeConstructor.construct_yaml_omap)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:pairs',
SafeConstructor.construct_yaml_pairs)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:set',
SafeConstructor.construct_yaml_set)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:str',
SafeConstructor.construct_yaml_str)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:seq',
SafeConstructor.construct_yaml_seq)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:map',
SafeConstructor.construct_yaml_map)
SafeConstructor.add_constructor(None,
SafeConstructor.construct_undefined)
class Constructor(SafeConstructor):
def construct_python_str(self, node):
return self.construct_scalar(node).encode('utf-8')
def construct_python_unicode(self, node):
return self.construct_scalar(node)
def construct_python_long(self, node):
return long(self.construct_yaml_int(node))
def construct_python_complex(self, node):
return complex(self.construct_scalar(node))
def construct_python_tuple(self, node):
return tuple(self.construct_sequence(node))
def find_python_module(self, name, mark):
if not name:
raise ConstructorError("while constructing a Python module", mark,
"expected non-empty name appended to the tag", mark)
try:
__import__(name)
except ImportError, exc:
raise ConstructorError("while constructing a Python module", mark,
"cannot find module %r (%s)" % (name.encode('utf-8'), exc), mark)
return sys.modules[name]
def find_python_name(self, name, mark):
if not name:
raise ConstructorError("while constructing a Python object", mark,
"expected non-empty name appended to the tag", mark)
if u'.' in name:
# Python 2.4 only
#module_name, object_name = name.rsplit('.', 1)
items = name.split('.')
object_name = items.pop()
module_name = '.'.join(items)
else:
module_name = '__builtin__'
object_name = name
try:
__import__(module_name)
except ImportError, exc:
raise ConstructorError("while constructing a Python object", mark,
"cannot find module %r (%s)" % (module_name.encode('utf-8'), exc), mark)
module = sys.modules[module_name]
if not hasattr(module, object_name):
raise ConstructorError("while constructing a Python object", mark,
"cannot find %r in the module %r" % (object_name.encode('utf-8'),
module.__name__), mark)
return getattr(module, object_name)
def construct_python_name(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python name", node.start_mark,
"expected the empty value, but found %r" % value.encode('utf-8'),
node.start_mark)
return self.find_python_name(suffix, node.start_mark)
def construct_python_module(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python module", node.start_mark,
"expected the empty value, but found %r" % value.encode('utf-8'),
node.start_mark)
return self.find_python_module(suffix, node.start_mark)
class classobj: pass
def make_python_instance(self, suffix, node,
args=None, kwds=None, newobj=False):
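        # newobj=True mirrors pickle's __new__-style creation; the classobj shim
        # below covers old-style classes, which have no __new__.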
if not args:
args = []
if not kwds:
kwds = {}
cls = self.find_python_name(suffix, node.start_mark)
if newobj and isinstance(cls, type(self.classobj)) \
and not args and not kwds:
instance = self.classobj()
instance.__class__ = cls
return instance
elif newobj and isinstance(cls, type):
return cls.__new__(cls, *args, **kwds)
else:
return cls(*args, **kwds)
def set_python_instance_state(self, instance, state):
if hasattr(instance, '__setstate__'):
instance.__setstate__(state)
else:
slotstate = {}
if isinstance(state, tuple) and len(state) == 2:
state, slotstate = state
if hasattr(instance, '__dict__'):
instance.__dict__.update(state)
elif state:
slotstate.update(state)
for key, value in slotstate.items():
                setattr(instance, key, value)
def construct_python_object(self, suffix, node):
# Format:
# !!python/object:module.name { ... state ... }
instance = self.make_python_instance(suffix, node, newobj=True)
yield instance
deep = hasattr(instance, '__setstate__')
state = self.construct_mapping(node, deep=deep)
self.set_python_instance_state(instance, state)
def construct_python_object_apply(self, suffix, node, newobj=False):
# Format:
# !!python/object/apply # (or !!python/object/new)
# args: [ ... arguments ... ]
# kwds: { ... keywords ... }
# state: ... state ...
# listitems: [ ... listitems ... ]
# dictitems: { ... dictitems ... }
# or short format:
# !!python/object/apply [ ... arguments ... ]
# The difference between !!python/object/apply and !!python/object/new
# is how an object is created, check make_python_instance for details.
if isinstance(node, SequenceNode):
args = self.construct_sequence(node, deep=True)
kwds = {}
state = {}
listitems = []
dictitems = {}
else:
value = self.construct_mapping(node, deep=True)
args = value.get('args', [])
kwds = value.get('kwds', {})
state = value.get('state', {})
listitems = value.get('listitems', [])
dictitems = value.get('dictitems', {})
instance = self.make_python_instance(suffix, node, args, kwds, newobj)
if state:
self.set_python_instance_state(instance, state)
if listitems:
instance.extend(listitems)
if dictitems:
for key in dictitems:
instance[key] = dictitems[key]
return instance
def construct_python_object_new(self, suffix, node):
return self.construct_python_object_apply(suffix, node, newobj=True)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/none',
Constructor.construct_yaml_null)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/bool',
Constructor.construct_yaml_bool)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/str',
Constructor.construct_python_str)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/unicode',
Constructor.construct_python_unicode)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/int',
Constructor.construct_yaml_int)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/long',
Constructor.construct_python_long)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/float',
Constructor.construct_yaml_float)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/complex',
Constructor.construct_python_complex)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/list',
Constructor.construct_yaml_seq)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/tuple',
Constructor.construct_python_tuple)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/dict',
Constructor.construct_yaml_map)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/name:',
Constructor.construct_python_name)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/module:',
Constructor.construct_python_module)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object:',
Constructor.construct_python_object)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object/apply:',
Constructor.construct_python_object_apply)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object/new:',
Constructor.construct_python_object_new)
| apache-2.0 |
pinax/pinax-eventlog | pinax/eventlog/migrations/0001_initial.py | 1 | 1443 | # Generated by Django 3.1 on 2020-08-15 10:08
from django.conf import settings
import django.core.serializers.json
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
from ..compat import JSONField
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Log',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('timestamp', models.DateTimeField(db_index=True, default=django.utils.timezone.now)),
('action', models.CharField(db_index=True, max_length=50)),
('object_id', models.PositiveIntegerField(blank=True, null=True)),
('extra', JSONField(blank=True, encoder=django.core.serializers.json.DjangoJSONEncoder)),
('content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='contenttypes.contenttype')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-timestamp'],
},
),
]
| mit |
wrCisco/Sigil | src/Resource_Files/python3lib/ncxgenerator.py | 5 | 8708 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
import sys
import os
from quickparser import QuickXHTMLParser
from hrefutils import startingDir, buildBookPath, buildRelativePath, relativePath
from hrefutils import urldecodepart, urlencodepart
_epubtype_guide_map = {
'acknowledgements' : 'acknowledgments',
'afterword' : 'other.afterword',
'appendix' : 'other.appendix',
'backmatter' : 'other.backmatter',
'bibliography' : 'bibliography',
'bodymatter' : 'text',
'chapter' : 'other.chapter',
'colophon' : 'colophon',
'conclusion' : 'other.conclusion',
'contributors' : 'other.contributors',
'copyright-page' : 'copyright-page',
'cover' : 'cover',
'dedication' : 'dedication',
'division' : 'other.division',
'epigraph' : 'epigraph',
'epilogue' : 'other.epilogue',
'errata' : 'other.errata',
'footnotes' : 'other.footnotes',
'foreword' : 'foreword',
'frontmatter' : 'other.frontmatter',
'glossary' : 'glossary',
'halftitlepage' : 'other.halftitlepage',
'imprint' : 'other.imprint',
'imprimatur' : 'other.imprimatur',
'index' : 'index',
'introduction' : 'other.introduction',
'landmarks' : 'other.landmarks',
'loa' : 'other.loa',
'loi' : 'loi',
'lot' : 'lot',
'lov' : 'other.lov',
# '' : 'notes',
'notice' : 'other.notice',
'other-credits' : 'other.other-credits',
'part' : 'other.part',
'preamble' : 'other.preamble',
'preface' : 'preface',
'prologue' : 'other.prologue',
'rearnotes' : 'other.rearnotes',
'subchapter' : 'other.subchapter',
'titlepage' : 'title-page',
'toc' : 'toc',
'volume' : 'other.volume',
'warning' : 'other.warning'
}
# parse the current nav.xhtml to extract toc, pagelist, and landmarks
# note all hrefs, src are in urlencoded (raw) form when they are stored
# note all hrefs, src, have been converted to be relative to newdir
def parse_nav(qp, navdata, navbkpath, newdir):
qp.setContent(navdata)
toclist = []
pagelist = []
landmarks = []
lvl = 0
pgcnt = 0
maxlvl = -1
nav_type = None
href = None
title = ""
play = 0
navdir = startingDir(navbkpath)
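    # Walk the XHTML token stream; nav_type tracks which <nav epub:type="..."> section
    # (toc, page-list or landmarks) the parser is currently inside.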
for txt, tp, tname, ttype, tattr in qp.parse_iter():
if txt is not None:
if ".a." in tp or tp.endswith(".a"):
title = title + txt
else:
title = ""
else:
if tname == "nav":
if ttype == "begin":
nav_type = tattr.get("epub:type", None)
if ttype == "end":
nav_type = None
continue
if tname == "ol" and nav_type is not None and nav_type in ("toc","page-list","landmarks"):
if ttype == "begin":
lvl += 1
if nav_type == "toc":
if lvl > maxlvl: maxlvl = lvl
if ttype == "end": lvl -= 1
continue
if tname == "a" and ttype == "begin":
# get the raw href (urlencoded)
href = tattr.get("href", "")
if href.find(":") == -1:
# first strip off any fragment
fragment = ""
if href.find("#") != -1:
href, fragment = href.split("#")
# find destination bookpath
href = urldecodepart(href)
fragment = urldecodepart(fragment)
if href.startswith("./"): href=href[2:]
if href == "":
destbkpath = navbkpath
else:
destbkpath = buildBookPath(href, navdir)
# create relative path to destbkpath from newdir
href = relativePath(destbkpath, newdir)
href = urlencodepart(href)
fragment = urlencodepart(fragment)
if fragment != "":
href = href + "#" + fragment
epubtype = tattr.get("epub:type", None)
continue
if tname == "a" and ttype == "end":
if nav_type == "toc":
play += 1
toclist.append((play, lvl, href, title))
elif nav_type == "page-list":
pgcnt += 1
pagelist.append((pgcnt, href, title))
elif nav_type == "landmarks":
if epubtype is not None:
gtype = _epubtype_guide_map.get(epubtype, None)
landmarks.append((gtype, href, title))
title = ""
continue
return toclist, pagelist, landmarks, maxlvl, pgcnt
# build ncx from epub3 toc from toclist, pagelist and old opf2 guide info for landmarks
def build_ncx(doctitle, mainid, maxlvl, pgcnt, toclist, pagelist):
ncxres = []
ind = ' '
ncxres.append('<?xml version="1.0" encoding="utf-8"?>\n')
ncxres.append('<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1">\n')
ncxres.append(' <head>\n')
ncxres.append(' <meta name="dtb:uid" content="' + mainid + '" />\n')
ncxres.append(' <meta name="dtb:depth" content="' + str(maxlvl) + '" />\n')
ncxres.append(' <meta name="dtb:totalPageCount" content="' + str(pgcnt) + '" />\n')
ncxres.append(' <meta name="dtb:maxPageNumber" content="' + str(pgcnt) + '" />\n')
ncxres.append(' </head>\n')
ncxres.append('<docTitle>\n')
ncxres.append(' <text>' + doctitle + '</text>\n')
ncxres.append('</docTitle>\n')
ncxres.append('<navMap>\n')
plvl = -1
for (po, lvl, href, title) in toclist:
# note all hrefs should already be in urlencoded form
# first close off any already opened navPoints
while lvl <= plvl:
space = ind*plvl
ncxres.append(space + '</navPoint>\n')
plvl -= 1
# now append this navpoint
space = ind*lvl
porder = str(po)
if title is None:
title = ""
ncxres.append(space + '<navPoint id="navPoint' + porder +'">\n')
ncxres.append(space + ' <navLabel>\n')
ncxres.append(space + ' <text>' + title + '</text>\n')
ncxres.append(space + ' </navLabel>\n')
ncxres.append(space + ' <content src="' + href + '" />\n')
plvl = lvl
# now finish off any open navpoints
while plvl > 0:
space = ind*plvl
ncxres.append(space + '</navPoint>\n')
plvl -= 1
ncxres.append('</navMap>\n')
if pgcnt > 0:
play = len(toclist)
ncxres.append('<pageList>\n')
for (cnt, href, title) in pagelist:
porder = str(play + cnt)
target = ind + '<pageTarget id="navPoint' + porder + '" type="normal"'
target += ' value="' + title + '">\n'
ncxres.append(target)
ncxres.append(ind*2 + '<navLabel><text>' + title + '</text></navLabel>\n')
ncxres.append(ind*2 + '<content src="' + href + '" />\n')
ncxres.append(ind + '</pageTarget>\n')
ncxres.append('</pageList>\n')
# now close it off
ncxres.append('</ncx>\n')
return "".join(ncxres)
# the entry points
def generateNCX(navdata, navbkpath, ncxdir, doctitle, mainid):
has_error = False
    # main id must exactly match the one used in the opf
# if mainid.startswith("urn:uuid:"): mainid = mainid[9:]
# try:
qp = QuickXHTMLParser()
toclist, pagelist, landmarks, maxlvl, pgcnt = parse_nav(qp, navdata, navbkpath, ncxdir)
ncxdata = build_ncx(doctitle, mainid, maxlvl, pgcnt, toclist, pagelist)
# except:
# has_error = True
# pass
# if has_error:
# return ""
return ncxdata
def generateGuideEntries(navdata, navbkpath, opfdir):
has_error = False
try:
qp = QuickXHTMLParser()
toclist, pagelist, landmarks, maxlvl, pgcnt = parse_nav(qp, navdata, navbkpath, opfdir)
except:
has_error = True
pass
if has_error:
return [("","","")]
return landmarks
def main():
argv = sys.argv
return 0
if __name__ == '__main__':
sys.exit(main())
| gpl-3.0 |
zhukaixy/kbengine | kbe/res/scripts/common/Lib/test/test_genericpath.py | 81 | 16219 | """
Tests common to genericpath, macpath, ntpath and posixpath
"""
import genericpath
import os
import sys
import unittest
import warnings
from test import support
def safe_rmdir(dirname):
try:
os.rmdir(dirname)
except OSError:
pass
class GenericTest:
common_attributes = ['commonprefix', 'getsize', 'getatime', 'getctime',
'getmtime', 'exists', 'isdir', 'isfile']
attributes = []
def test_no_argument(self):
for attr in self.common_attributes + self.attributes:
with self.assertRaises(TypeError):
getattr(self.pathmodule, attr)()
raise self.fail("{}.{}() did not raise a TypeError"
.format(self.pathmodule.__name__, attr))
def test_commonprefix(self):
commonprefix = self.pathmodule.commonprefix
self.assertEqual(
commonprefix([]),
""
)
self.assertEqual(
commonprefix(["/home/swenson/spam", "/home/swen/spam"]),
"/home/swen"
)
self.assertEqual(
commonprefix(["/home/swen/spam", "/home/swen/eggs"]),
"/home/swen/"
)
self.assertEqual(
commonprefix(["/home/swen/spam", "/home/swen/spam"]),
"/home/swen/spam"
)
self.assertEqual(
commonprefix(["home:swenson:spam", "home:swen:spam"]),
"home:swen"
)
self.assertEqual(
commonprefix([":home:swen:spam", ":home:swen:eggs"]),
":home:swen:"
)
self.assertEqual(
commonprefix([":home:swen:spam", ":home:swen:spam"]),
":home:swen:spam"
)
self.assertEqual(
commonprefix([b"/home/swenson/spam", b"/home/swen/spam"]),
b"/home/swen"
)
self.assertEqual(
commonprefix([b"/home/swen/spam", b"/home/swen/eggs"]),
b"/home/swen/"
)
self.assertEqual(
commonprefix([b"/home/swen/spam", b"/home/swen/spam"]),
b"/home/swen/spam"
)
self.assertEqual(
commonprefix([b"home:swenson:spam", b"home:swen:spam"]),
b"home:swen"
)
self.assertEqual(
commonprefix([b":home:swen:spam", b":home:swen:eggs"]),
b":home:swen:"
)
self.assertEqual(
commonprefix([b":home:swen:spam", b":home:swen:spam"]),
b":home:swen:spam"
)
testlist = ['', 'abc', 'Xbcd', 'Xb', 'XY', 'abcd',
'aXc', 'abd', 'ab', 'aX', 'abcX']
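        # Brute-force check: the result must be a common prefix of both strings and
        # maximal, since the characters just past it differ whenever s1 != s2.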
for s1 in testlist:
for s2 in testlist:
p = commonprefix([s1, s2])
self.assertTrue(s1.startswith(p))
self.assertTrue(s2.startswith(p))
if s1 != s2:
n = len(p)
self.assertNotEqual(s1[n:n+1], s2[n:n+1])
def test_getsize(self):
f = open(support.TESTFN, "wb")
try:
f.write(b"foo")
f.close()
self.assertEqual(self.pathmodule.getsize(support.TESTFN), 3)
finally:
if not f.closed:
f.close()
support.unlink(support.TESTFN)
def test_time(self):
f = open(support.TESTFN, "wb")
try:
f.write(b"foo")
f.close()
f = open(support.TESTFN, "ab")
f.write(b"bar")
f.close()
f = open(support.TESTFN, "rb")
d = f.read()
f.close()
self.assertEqual(d, b"foobar")
self.assertLessEqual(
self.pathmodule.getctime(support.TESTFN),
self.pathmodule.getmtime(support.TESTFN)
)
finally:
if not f.closed:
f.close()
support.unlink(support.TESTFN)
def test_exists(self):
self.assertIs(self.pathmodule.exists(support.TESTFN), False)
f = open(support.TESTFN, "wb")
try:
f.write(b"foo")
f.close()
self.assertIs(self.pathmodule.exists(support.TESTFN), True)
if not self.pathmodule == genericpath:
self.assertIs(self.pathmodule.lexists(support.TESTFN),
True)
finally:
            if not f.closed:
f.close()
support.unlink(support.TESTFN)
@unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()")
def test_exists_fd(self):
r, w = os.pipe()
try:
self.assertTrue(self.pathmodule.exists(r))
finally:
os.close(r)
os.close(w)
self.assertFalse(self.pathmodule.exists(r))
def test_isdir(self):
self.assertIs(self.pathmodule.isdir(support.TESTFN), False)
f = open(support.TESTFN, "wb")
try:
f.write(b"foo")
f.close()
self.assertIs(self.pathmodule.isdir(support.TESTFN), False)
os.remove(support.TESTFN)
os.mkdir(support.TESTFN)
self.assertIs(self.pathmodule.isdir(support.TESTFN), True)
os.rmdir(support.TESTFN)
finally:
            if not f.closed:
f.close()
support.unlink(support.TESTFN)
safe_rmdir(support.TESTFN)
def test_isfile(self):
self.assertIs(self.pathmodule.isfile(support.TESTFN), False)
f = open(support.TESTFN, "wb")
try:
f.write(b"foo")
f.close()
self.assertIs(self.pathmodule.isfile(support.TESTFN), True)
os.remove(support.TESTFN)
os.mkdir(support.TESTFN)
self.assertIs(self.pathmodule.isfile(support.TESTFN), False)
os.rmdir(support.TESTFN)
finally:
            if not f.closed:
f.close()
support.unlink(support.TESTFN)
safe_rmdir(support.TESTFN)
@staticmethod
def _create_file(filename):
with open(filename, 'wb') as f:
f.write(b'foo')
def test_samefile(self):
try:
test_fn = support.TESTFN + "1"
self._create_file(test_fn)
self.assertTrue(self.pathmodule.samefile(test_fn, test_fn))
self.assertRaises(TypeError, self.pathmodule.samefile)
finally:
os.remove(test_fn)
@support.skip_unless_symlink
def test_samefile_on_symlink(self):
self._test_samefile_on_link_func(os.symlink)
def test_samefile_on_link(self):
self._test_samefile_on_link_func(os.link)
def _test_samefile_on_link_func(self, func):
try:
test_fn1 = support.TESTFN + "1"
test_fn2 = support.TESTFN + "2"
self._create_file(test_fn1)
func(test_fn1, test_fn2)
self.assertTrue(self.pathmodule.samefile(test_fn1, test_fn2))
os.remove(test_fn2)
self._create_file(test_fn2)
self.assertFalse(self.pathmodule.samefile(test_fn1, test_fn2))
finally:
os.remove(test_fn1)
os.remove(test_fn2)
def test_samestat(self):
try:
test_fn = support.TESTFN + "1"
self._create_file(test_fn)
test_fns = [test_fn]*2
stats = map(os.stat, test_fns)
self.assertTrue(self.pathmodule.samestat(*stats))
finally:
os.remove(test_fn)
@support.skip_unless_symlink
def test_samestat_on_symlink(self):
self._test_samestat_on_link_func(os.symlink)
def test_samestat_on_link(self):
self._test_samestat_on_link_func(os.link)
def _test_samestat_on_link_func(self, func):
try:
test_fn1 = support.TESTFN + "1"
test_fn2 = support.TESTFN + "2"
self._create_file(test_fn1)
test_fns = (test_fn1, test_fn2)
func(*test_fns)
stats = map(os.stat, test_fns)
self.assertTrue(self.pathmodule.samestat(*stats))
os.remove(test_fn2)
self._create_file(test_fn2)
stats = map(os.stat, test_fns)
self.assertFalse(self.pathmodule.samestat(*stats))
self.assertRaises(TypeError, self.pathmodule.samestat)
finally:
os.remove(test_fn1)
os.remove(test_fn2)
def test_sameopenfile(self):
fname = support.TESTFN + "1"
with open(fname, "wb") as a, open(fname, "wb") as b:
self.assertTrue(self.pathmodule.sameopenfile(
a.fileno(), b.fileno()))
class TestGenericTest(GenericTest, unittest.TestCase):
# Issue 16852: GenericTest can't inherit from unittest.TestCase
# for test discovery purposes; CommonTest inherits from GenericTest
# and is only meant to be inherited by others.
pathmodule = genericpath
# The following TestCase is not supposed to be run from test_genericpath.
# It is inherited by other test modules (macpath, ntpath, posixpath).
class CommonTest(GenericTest):
common_attributes = GenericTest.common_attributes + [
# Properties
'curdir', 'pardir', 'extsep', 'sep',
'pathsep', 'defpath', 'altsep', 'devnull',
# Methods
'normcase', 'splitdrive', 'expandvars', 'normpath', 'abspath',
'join', 'split', 'splitext', 'isabs', 'basename', 'dirname',
'lexists', 'islink', 'ismount', 'expanduser', 'realpath',
]
def test_normcase(self):
normcase = self.pathmodule.normcase
# check that normcase() is idempotent
for p in ["FoO/./BaR", b"FoO/./BaR"]:
p = normcase(p)
self.assertEqual(p, normcase(p))
self.assertEqual(normcase(''), '')
self.assertEqual(normcase(b''), b'')
# check that normcase raises a TypeError for invalid types
for path in (None, True, 0, 2.5, [], bytearray(b''), {'o','o'}):
self.assertRaises(TypeError, normcase, path)
def test_splitdrive(self):
# splitdrive for non-NT paths
splitdrive = self.pathmodule.splitdrive
self.assertEqual(splitdrive("/foo/bar"), ("", "/foo/bar"))
self.assertEqual(splitdrive("foo:bar"), ("", "foo:bar"))
self.assertEqual(splitdrive(":foo:bar"), ("", ":foo:bar"))
self.assertEqual(splitdrive(b"/foo/bar"), (b"", b"/foo/bar"))
self.assertEqual(splitdrive(b"foo:bar"), (b"", b"foo:bar"))
self.assertEqual(splitdrive(b":foo:bar"), (b"", b":foo:bar"))
def test_expandvars(self):
if self.pathmodule.__name__ == 'macpath':
self.skipTest('macpath.expandvars is a stub')
expandvars = self.pathmodule.expandvars
with support.EnvironmentVarGuard() as env:
env.clear()
env["foo"] = "bar"
env["{foo"] = "baz1"
env["{foo}"] = "baz2"
self.assertEqual(expandvars("foo"), "foo")
self.assertEqual(expandvars("$foo bar"), "bar bar")
self.assertEqual(expandvars("${foo}bar"), "barbar")
self.assertEqual(expandvars("$[foo]bar"), "$[foo]bar")
self.assertEqual(expandvars("$bar bar"), "$bar bar")
self.assertEqual(expandvars("$?bar"), "$?bar")
self.assertEqual(expandvars("$foo}bar"), "bar}bar")
self.assertEqual(expandvars("${foo"), "${foo")
self.assertEqual(expandvars("${{foo}}"), "baz1}")
self.assertEqual(expandvars("$foo$foo"), "barbar")
self.assertEqual(expandvars("$bar$bar"), "$bar$bar")
self.assertEqual(expandvars(b"foo"), b"foo")
self.assertEqual(expandvars(b"$foo bar"), b"bar bar")
self.assertEqual(expandvars(b"${foo}bar"), b"barbar")
self.assertEqual(expandvars(b"$[foo]bar"), b"$[foo]bar")
self.assertEqual(expandvars(b"$bar bar"), b"$bar bar")
self.assertEqual(expandvars(b"$?bar"), b"$?bar")
self.assertEqual(expandvars(b"$foo}bar"), b"bar}bar")
self.assertEqual(expandvars(b"${foo"), b"${foo")
self.assertEqual(expandvars(b"${{foo}}"), b"baz1}")
self.assertEqual(expandvars(b"$foo$foo"), b"barbar")
self.assertEqual(expandvars(b"$bar$bar"), b"$bar$bar")
@unittest.skipUnless(support.FS_NONASCII, 'need support.FS_NONASCII')
def test_expandvars_nonascii(self):
if self.pathmodule.__name__ == 'macpath':
self.skipTest('macpath.expandvars is a stub')
expandvars = self.pathmodule.expandvars
def check(value, expected):
self.assertEqual(expandvars(value), expected)
with support.EnvironmentVarGuard() as env:
env.clear()
nonascii = support.FS_NONASCII
env['spam'] = nonascii
env[nonascii] = 'ham' + nonascii
check(nonascii, nonascii)
check('$spam bar', '%s bar' % nonascii)
check('${spam}bar', '%sbar' % nonascii)
check('${%s}bar' % nonascii, 'ham%sbar' % nonascii)
check('$bar%s bar' % nonascii, '$bar%s bar' % nonascii)
check('$spam}bar', '%s}bar' % nonascii)
check(os.fsencode(nonascii), os.fsencode(nonascii))
check(b'$spam bar', os.fsencode('%s bar' % nonascii))
check(b'${spam}bar', os.fsencode('%sbar' % nonascii))
check(os.fsencode('${%s}bar' % nonascii),
os.fsencode('ham%sbar' % nonascii))
check(os.fsencode('$bar%s bar' % nonascii),
os.fsencode('$bar%s bar' % nonascii))
check(b'$spam}bar', os.fsencode('%s}bar' % nonascii))
def test_abspath(self):
self.assertIn("foo", self.pathmodule.abspath("foo"))
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
self.assertIn(b"foo", self.pathmodule.abspath(b"foo"))
# Abspath returns bytes when the arg is bytes
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
for path in (b'', b'foo', b'f\xf2\xf2', b'/foo', b'C:\\'):
self.assertIsInstance(self.pathmodule.abspath(path), bytes)
def test_realpath(self):
self.assertIn("foo", self.pathmodule.realpath("foo"))
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
self.assertIn(b"foo", self.pathmodule.realpath(b"foo"))
def test_normpath_issue5827(self):
# Make sure normpath preserves unicode
for path in ('', '.', '/', '\\', '///foo/.//bar//'):
self.assertIsInstance(self.pathmodule.normpath(path), str)
def test_abspath_issue3426(self):
# Check that abspath returns unicode when the arg is unicode
# with both ASCII and non-ASCII cwds.
abspath = self.pathmodule.abspath
for path in ('', 'fuu', 'f\xf9\xf9', '/fuu', 'U:\\'):
self.assertIsInstance(abspath(path), str)
unicwd = '\xe7w\xf0'
try:
os.fsencode(unicwd)
except (AttributeError, UnicodeEncodeError):
# FS encoding is probably ASCII
pass
else:
with support.temp_cwd(unicwd):
for path in ('', 'fuu', 'f\xf9\xf9', '/fuu', 'U:\\'):
self.assertIsInstance(abspath(path), str)
def test_nonascii_abspath(self):
if (support.TESTFN_UNDECODABLE
# Mac OS X denies the creation of a directory with an invalid
# UTF-8 name. Windows allows to create a directory with an
# arbitrary bytes name, but fails to enter this directory
# (when the bytes name is used).
and sys.platform not in ('win32', 'darwin')):
name = support.TESTFN_UNDECODABLE
elif support.TESTFN_NONASCII:
name = support.TESTFN_NONASCII
else:
self.skipTest("need support.TESTFN_NONASCII")
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
with support.temp_cwd(name):
self.test_abspath()
if __name__ == "__main__":
unittest.main()
| lgpl-3.0 |
ppries/tensorflow | tensorflow/contrib/labeled_tensor/python/ops/core.py | 8 | 38128 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Core classes and core ops for LabeledTensor.
Core ops are ops which will eventually be called by LabeledTensor methods,
and ops which a core op depends upon.
For example, `add` is a core op because we'll eventually support the `+`
operator.
Non-core ops should go in `ops.py`.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import contextlib
import numbers
import types
import numpy as np
from six import binary_type
from six import string_types
from six import text_type
from six.moves import range # pylint: disable=redefined-builtin
from tensorflow.contrib.labeled_tensor.python.ops import _typecheck as tc
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
# pylint: disable=invalid-name
# Types coercible to Axis.labels
# We use this instead of collections.Sequence to exclude strings.
LabelsLike = tc.Union(np.ndarray, range, list, tuple)
# Types coercible to a tf.Dimension
DimensionLike = tc.Optional(tc.Union(tensor_shape.Dimension, int))
# Types usable for axis values
AxisValue = tc.Union(LabelsLike, DimensionLike)
# Valid scalar values for TensorFlow
Scalar = tc.Union(numbers.Number, bool, binary_type, text_type)
# pylint: enable=invalid-name
class Axis(object):
"""Size and label information for an axis.
Axis contains either a tf.Dimension indicating the size of an axis,
or a tuple of tick labels for the axis.
If tick labels are provided, they must be unique.
"""
@tc.accepts(object, string_types, AxisValue)
def __init__(self, name, value):
"""Construct an Axis.
Args:
name: Name of the axis.
value: Either None, an int or tf.Dimension giving the size of the axis,
or a sequence that is not a string additionally providing coordinate
(tick) labels.
Raises:
ValueError: If the user provides labels with duplicate values.
"""
if isinstance(value, tensor_shape.Dimension):
dimension = value
labels = None
elif isinstance(value, int) or value is None:
dimension = tensor_shape.Dimension(value)
labels = None
else:
dimension = tensor_shape.Dimension(len(value))
labels = tuple(value)
if dimension.value == 0:
# Treat a zero-length axis as if it has labels.
labels = ()
if labels is not None:
index = dict(zip(labels, range(len(labels))))
if len(index) != len(labels):
raise ValueError('Tick labels must be unique, but got {}'
.format(labels))
else:
index = None
self._name = name # type: string_types
self._dimension = dimension # type: tensor_shape.Dimension
self._labels = labels # type: Optional[tuple]
self._index = index # type: Optional[Dict[Any, int]]
@property
@tc.returns(string_types)
def name(self):
return self._name
@tc.returns(string_types)
def __repr__(self):
# Axis('x', Dimension(2))
# TODO(shoyer): make very long reprs more succinct?
return "%s('%s', %r)" % (type(self).__name__, self.name, self.value)
@tc.returns(bool)
def __eq__(self, other):
return (isinstance(other, Axis) and
self.name == other.name and
self.size == other.size and
self.labels == other.labels)
def __hash__(self):
return hash((self.name, self.size, self.labels))
@tc.returns(bool)
def __ne__(self, other):
return not self == other
@tc.returns(int)
def __len__(self):
size = self.size
if size is None:
raise ValueError('axis %r has unknown length' % self.name)
return size
@property
@tc.returns(tc.Optional(tensor_shape.Dimension))
def dimension(self):
return self._dimension
@property
@tc.returns(tc.Optional(int))
def size(self):
return self._dimension.value
@property
@tc.returns(tc.Union(tuple, tensor_shape.Dimension))
def value(self):
"""Returns the tf.Dimension or tuple specifying axis ticks."""
if self.labels is None:
return self.dimension
else:
return self.labels
@property
@tc.returns(tc.Optional(tuple))
def labels(self):
"""Returns the tuple containing coordinate labels, else None."""
return self._labels
def index(self, value):
"""Returns the integer position of the given tick label."""
if self._index is None:
raise ValueError('Axis does not have tick labels')
return self._index[value]
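# Illustrative sketch (added, not part of the original module): a minimal
# doctest-style example of an Axis with tick labels; values are hypothetical.
# >>> axis = Axis('x', ['a', 'b', 'c'])
# >>> len(axis)
# 3
# >>> axis.index('b')
# 1
# >>> axis.labels
# ('a', 'b', 'c')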
# tc class for anything that can be coerced into an Axis
# pylint: disable=invalid-name
AxisLike = tc.Union(Axis, tc.Tuple(string_types, AxisValue))
# pylint: enable=invalid-name
@tc.returns(Axis)
@tc.accepts(AxisLike)
def as_axis(axis_data):
"""Convert an AxisLike object into an Axis.
Args:
axis_data: Axis object or tuple (axis_name, axis_value) describing an axis.
Returns:
Axis object. This may be the original object if axis_data is an Axis.
"""
if isinstance(axis_data, Axis):
axis = axis_data
else:
axis = Axis(*axis_data)
return axis
class Axes(collections.Mapping):
"""Axis names and indices for a tensor.
It is an ordered mapping, with keys given by axis name and values given
by Axis objects. Duplicate axis names are not allowed.
"""
@tc.accepts(object, tc.List(AxisLike))
def __init__(self, axes):
"""Construct an Axes.
Args:
axes: A list of Axis objects or (axis_name, axis_value) tuples.
Raises:
ValueError: If the user provides empty or duplicate axis names.
"""
self._axes = collections.OrderedDict()
for axis_data in axes:
axis = as_axis(axis_data)
name = axis.name
if name in self._axes:
raise ValueError('Duplicate axis name: %s' % name)
self._axes[name] = axis
def __iter__(self):
return iter(self._axes)
@tc.returns(string_types)
def __repr__(self):
# Axes([('x', Dimension(2)),
# ('y', ['a', 'b', 'c']),
# ('z', Dimension(4))])
cls_name = type(self).__name__
values = ["('%s', %r)" % (v.name, v.value) for v in self._axes.values()]
values_repr = (',\n' + ' ' * len(cls_name + '([')).join(values)
return '%s([%s])' % (cls_name, values_repr)
@tc.returns(Axis)
@tc.accepts(object, string_types)
def __getitem__(self, name):
return self._axes[name]
@tc.returns(bool)
def __contains__(self, name):
return name in self._axes
@tc.returns(int)
def __len__(self):
return len(self._axes)
def __hash__(self):
return hash(tuple(self.items()))
@tc.accepts(object, string_types)
def remove(self, axis_name):
"""Creates a new Axes object without the given axis."""
if axis_name not in self:
raise KeyError(axis_name)
remaining_axes = [axis for axis in self.values() if axis.name != axis_name]
return Axes(remaining_axes)
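# Illustrative sketch (added, not from the original source): Axes acts as an
# ordered mapping from axis name to Axis; the sizes here are hypothetical.
# >>> axes = Axes([('x', 2), ('y', ['a', 'b'])])
# >>> list(axes)
# ['x', 'y']
# >>> axes['y'].labels
# ('a', 'b')
# >>> list(axes.remove('x'))
# ['y']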
class LabeledTensor(object):
"""A tensor with annotated axes.
It has the following invariants:
1) The dimensionality of the tensor is equal to the number of elements
in axes.
2) The number of coordinate values in the ith dimension is equal to the
size of the tensor in the ith dimension.
Attributes:
tensor: tf.Tensor containing the data.
axes: lt.Axes containing axis names and coordinate labels.
"""
@tc.accepts(object, ops.Tensor,
tc.Union(Axes, tc.Collection(tc.Union(string_types, AxisLike))))
def __init__(self, tensor, axes):
"""Construct a LabeledTenor.
Args:
tensor: The underlying tensor containing the data.
axes: An Axes object, or a collection of strings, Axis objects or tuples
of (name, value) pairs indicating the axes.
Raises:
ValueError: If the provided axes do not satisfy the class invariants.
"""
self._tensor = tensor
shape = tensor.get_shape()
if isinstance(axes, Axes):
unvalidated_axes = axes
else:
mutable_axes = []
for position, axis_like in enumerate(axes):
if isinstance(axis_like, string_types):
# The coordinates for this axis are unlabeled.
# Infer the size of the axis.
value = shape[position]
axis_like = (axis_like, value)
mutable_axes.append(axis_like)
# Construct the Axes object, which will additionally validate the contents
# of the object.
unvalidated_axes = Axes(mutable_axes)
# Check our invariants.
# First, the rank of the tensor must be equal to the number of axes.
if len(shape) != len(unvalidated_axes):
raise ValueError('Tensor rank was not equal to the number of axes: %r, %r'
% (shape, unvalidated_axes))
# Second, the size of each tensor dimension must match the size of the
# corresponding indices.
for (d, axis) in zip(shape, unvalidated_axes.values()):
if d != axis.size:
raise ValueError(
'Provided axis size %d does not match tensor dimension size %d' %
(axis.size, d))
self._axes = unvalidated_axes
def __repr__(self):
# <LabeledTensor 'foo' shape=(2, 3, 4) dtype=float32
# axes=[('x', Dimension(2)),
# ('y', ('a', 'b', 'c'),
# ('z', Dimension(4))]>
axes = ["('%s', %r)" % (v.name, v.value) for v in self.axes.values()]
axes_repr = (',\n' + ' ' * len(' axes=[')).join(axes)
return ("<%s '%s' shape=%s dtype=%s\n axes=[%s]>" %
(type(self).__name__, self.tensor.name, self.tensor.get_shape(),
self.tensor.dtype.name, axes_repr))
@property
def tensor(self):
return self._tensor
def _as_graph_element(self):
"""Support tf.Graph.as_graph_element on LabeledTensor objects.
This allows operations such as tf.name_scope to take labeled tensors.
Returns:
self.tensor
"""
return self.tensor
@property
def axes(self):
return self._axes
# properties/methods directly borrowed from tf.Tensor:
@property
def dtype(self):
return self._tensor.dtype
@property
def name(self):
return self._tensor.name
def get_shape(self):
"""Returns the TensorShape that represents the shape of this tensor.
See tf.Tensor.get_shape().
Returns:
A TensorShape representing the shape of this tensor.
"""
return self._tensor.get_shape()
# TODO(shoyer): consider how/if to implement .eval(). Maybe it should return
# an xarray.DataArray?
def __getitem__(self, key):
# This should work exactly like tf.Tensor.__getitem__, except it preserves
# labels.
if not isinstance(key, tuple):
key = (key,)
if len(key) != len(self.axes):
raise ValueError('indexer %r must have the same length as the Tensor '
'rank (%r)' % (key, len(self.axes)))
selection = {a: k for a, k in zip(self.axes.keys(), key)}
return slice_function(self, selection)
# special methods for overloading arithmetic operations:
def __abs__(self):
return abs_function(self)
def __neg__(self):
return neg(self)
def __pos__(self):
return self
def __add__(self, other):
return add(self, other)
def __radd__(self, other):
return add(other, self)
def __sub__(self, other):
return sub(self, other)
def __rsub__(self, other):
return sub(other, self)
def __mul__(self, other):
return mul(self, other)
def __rmul__(self, other):
return mul(other, self)
def __truediv__(self, other):
return div(self, other)
__div__ = __truediv__
def __rtruediv__(self, other):
return div(other, self)
__rdiv__ = __rtruediv__
def __mod__(self, other):
return mod(self, other)
def __rmod__(self, other):
return mod(other, self)
def __pow__(self, other):
return pow_function(self, other)
def __rpow__(self, other):
return pow_function(other, self)
# logical operations:
def __invert__(self):
return logical_not(self)
def __and__(self, other):
return logical_and(self, other)
def __or__(self, other):
return logical_or(self, other)
def __xor__(self, other):
return logical_xor(self, other)
# boolean operations:
def __lt__(self, other):
return less(self, other)
def __le__(self, other):
return less_equal(self, other)
def __gt__(self, other):
return greater(self, other)
def __ge__(self, other):
return greater_equal(self, other)
def __eq__(self, other):
# for consistency with tf.Tensor
if not isinstance(other, LabeledTensor):
return False
return self.tensor == other.tensor and self.axes == other.axes
def __ne__(self, other):
return not self == other
def __hash__(self):
return hash((self.tensor, self.axes))
# typecheck type abbreviations:
# abbreviations for third-party types with very long reprs
tc.register_type_abbreviation(tensor_shape.Dimension, 'tensorflow.Dimension')
tc.register_type_abbreviation(ops.Tensor, 'tensorflow.Tensor')
tc.register_type_abbreviation(dtypes.DType, 'tensorflow.DType')
# core LabeledTensor types
tc.register_type_abbreviation(Axis, 'labeled_tensor.Axis')
tc.register_type_abbreviation(Axes, 'labeled_tensor.Axes')
tc.register_type_abbreviation(LabeledTensor, 'labeled_tensor.LabeledTensor')
@tc.returns(ops.Tensor)
@tc.accepts(LabeledTensor)
def _convert_labeled_tensor_to_tensor(value, *args, **kwargs):
# call ops.convert_to_tensor to handle optional arguments appropriately
return ops.internal_convert_to_tensor(value.tensor, *args, **kwargs)
ops.register_tensor_conversion_function(
LabeledTensor, _convert_labeled_tensor_to_tensor)
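# Illustrative sketch (added): once the conversion function is registered,
# ordinary TensorFlow ops should accept LabeledTensors by implicitly
# unwrapping .tensor; the names and shapes below are hypothetical.
# >>> lt_x = LabeledTensor(tf.ones((2,)), ['x'])
# >>> tf.reduce_sum(lt_x)  # works via the registered conversion function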
# tc class for anything that can be coerced into a LabeledTensor
# pylint: disable=invalid-name
LabeledTensorLike = tc.Union(LabeledTensor, ops.Tensor, np.ndarray, Scalar)
# pylint: enable=invalid-name
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, object, tc.Optional(string_types))
def convert_to_labeled_tensor(value, dtype=None, name=None):
"""Converts the given `value` to a `LabeledTensor`.
This function accepts `LabeledTensor` objects, 0-dimensional `Tensor` objects
and numpy arrays, and Python scalars. Higher dimensional unlabeled tensors
must use the `LabeledTensor` constructor explicitly.
Args:
value: Object to convert.
dtype: Optional element type for the returned tensor. If missing, the type
is inferred from the type of value.
name: Optional name to use if a new Tensor is created.
Returns:
`value` converted into a `LabeledTensor` object.
Raises:
ValueError: If the output would have rank>0 but the input was not already a
`LabeledTensor`.
"""
# TODO(shoyer): consider extending to accept xarray.DataArray as input.
if isinstance(value, LabeledTensor):
axes = value.axes.values()
value = value.tensor
else:
axes = []
# We call convert_to_tensor even for LabeledTensor input because it also
# checks to make sure the dtype argument is compatible.
tensor = ops.convert_to_tensor(value, dtype=dtype, name=name)
if len(tensor.get_shape()) != len(axes):
raise ValueError('cannot automatically convert unlabeled arrays or tensors '
'with rank>0 into LabeledTensors: %r' % value)
return LabeledTensor(tensor, axes)
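# Illustrative sketch (added, hypothetical values): scalars and rank-0 inputs
# convert implicitly, while unlabeled rank>0 inputs are rejected.
# >>> convert_to_labeled_tensor(1.0)  # ok: rank-0 LabeledTensor, no axes
# >>> convert_to_labeled_tensor(np.ones((2,)))  # raises ValueError (rank>0)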
@tc.returns(Axis)
@tc.accepts(tc.Collection(Axis))
def concat_axes(axes):
"""Concatenate a list of Axes.
Args:
axes: A collection of Axis objects.
Returns:
The concatenation of the axes.
If all axes have labels, the result has the concatenation of the labels.
Else, the result has no labels, and its size is the sum of the sizes
of the axes.
Raises:
ValueError: If `axes` is not a collection of Axis objects or if it is empty.
"""
if not axes:
raise ValueError('axes must not be empty')
for a in axes:
if not isinstance(a, Axis):
raise ValueError('Expected an Axis, but got %r of type %r' % (a, type(a)))
names = set(a.name for a in axes)
if len(names) > 1:
raise ValueError('axes do not all have the same name: %r' % names)
name, = names
all_have_labels = all(a.labels is not None for a in axes)
any_has_unknown_size = any(a.size is None for a in axes)
if all_have_labels:
value = tuple(label for a in axes for label in a.labels)
elif any_has_unknown_size:
value = None
else:
value = sum(len(a) for a in axes)
return Axis(name, value)
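# Illustrative sketch (added, hypothetical axes): labeled axes concatenate
# their labels; unlabeled axes concatenate by summing sizes.
# >>> concat_axes([Axis('x', ['a']), Axis('x', ['b'])])
# Axis('x', ('a', 'b'))
# >>> concat_axes([Axis('x', 2), Axis('x', 3)]).size
# 5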
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Optional(string_types))
def identity(labeled_tensor, name=None):
"""The identity op.
See tf.identity.
Args:
labeled_tensor: The input tensor.
name: Optional op name.
Returns:
The tensor.
"""
with ops.name_scope(name, 'lt_identity', [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
return LabeledTensor(
array_ops.identity(labeled_tensor.tensor, name=scope),
labeled_tensor.axes)
# We don't call this slice because that shadows a built-in. Instead, we alias
# this to lt.slice in __init__.py.
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Mapping(string_types, tc.Union(int, slice)),
tc.Optional(string_types))
def slice_function(labeled_tensor, selection, name=None):
"""Slice out a subset of the tensor.
This is an analogue of tf.slice.
For example:
>>> tensor = tf.reshape(tf.range(0, 6), [3, 2])
>>> labeled_tensor = lt.LabeledTensor(tensor, ['a', ('b', ['foo', 'bar'])])
>>> lt.slice(labeled_tensor, {'a': slice(0, 2), 'b': 1})
<LabeledTensor 'lt_slice:...' shape=(2,) dtype=int32
axes=[('a', Dimension(2))]>
Args:
labeled_tensor: The input tensor.
selection: A dictionary of type str -> Union(int, slice of int) mapping
axis names to sub-selections.
name: Optional op name.
Returns:
The slice as a `LabeledTensor`.
"""
with ops.name_scope(name, 'lt_slice', [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
slices = []
for axis_name in labeled_tensor.axes:
if axis_name not in selection:
# We're not sub-selecting this axis, so use the full slice.
slices.append(slice(None))
else:
slices.append(selection[axis_name])
sliced_tensor = labeled_tensor.tensor[tuple(slices)]
sliced_axes = []
for axis, s in zip(labeled_tensor.axes.values(), slices):
# We sub-select this axis's index with the slice s.
# `s` is either an int or a proper slice.
if isinstance(s, slice):
if axis.labels is None:
# We're not tracking coordinate names for this axis.
sliced_axes.append(axis.name)
else:
sliced_axes.append((axis.name, axis.labels[s]))
else:
# If the selection is an int, this dimension is dropped from the result, so we omit its axis.
assert isinstance(s, int)
return LabeledTensor(array_ops.identity(sliced_tensor, name=scope),
sliced_axes)
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Optional(tc.Collection(string_types)),
tc.Optional(string_types))
def transpose(labeled_tensor, axis_order=None, name=None):
"""Permute a tensor's axes.
See tf.transpose.
Args:
labeled_tensor: The input tensor.
axis_order: Optional desired axis order, as a list of names. By default, the
order of axes is reversed.
name: Optional op name.
Returns:
The permuted tensor.
Raises:
ValueError: If axis_order isn't a permutation of the existing axes.
"""
with ops.name_scope(name, 'lt_transpose', [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
original_order = list(labeled_tensor.axes.keys())
if axis_order is None:
axis_order = list(reversed(original_order))
elif sorted(axis_order) != sorted(original_order):
raise ValueError(
'The new axis order must have the same names as the original axes, '
'but the new order is %r while the original order is %r' %
(axis_order, original_order))
axis_names = list(labeled_tensor.axes.keys())
permutation = [axis_names.index(n) for n in axis_order]
# Note: TensorFlow doesn't copy data for the identity transpose.
transpose_tensor = array_ops.transpose(labeled_tensor.tensor,
permutation,
name=scope)
permuted_axes = [labeled_tensor.axes[n] for n in axis_order]
return LabeledTensor(transpose_tensor, permuted_axes)
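# Illustrative sketch (added): assumes the `lt` module alias used in the
# docstring examples; the shapes are hypothetical.
# >>> a = lt.LabeledTensor(tf.ones((2, 3)), ['x', 'y'])
# >>> lt.transpose(a)  # axes reversed to ['y', 'x'] by default
# >>> lt.transpose(a, ['y', 'x'])  # equivalent explicit order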
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Collection(tc.Union(string_types, tc.Tuple(
string_types, collections.Hashable))), tc.Optional(string_types))
def expand_dims(labeled_tensor, axes, name=None):
"""Insert dimensions of size 1.
See tf.expand_dims.
Args:
labeled_tensor: The input tensor.
axes: The desired axis names as strings or tuples of (name, label),
where `label` is the coordinate name for the new dimension `name`.
These must include the existing axis names, and the existing names must
appear in the same order in this list as they do in the input tensor.
name: Optional op name.
Returns:
A tensor with an axis for each axis in axes.
New axes are created with size 1 and do not have labeled coordinates.
Raises:
AxisOrderError: If axis names don't appear in the same order in axes
and the labeled tensor.
"""
with ops.name_scope(name, 'lt_expand_dims', [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
axis_names = [a if isinstance(a, string_types) else a[0] for a in axes]
check_axis_order(labeled_tensor, axis_names)
reshaped_axes = []
shape = []
for axis_spec in axes:
if axis_spec in labeled_tensor.axes:
axis = labeled_tensor.axes[axis_spec]
reshaped_axes.append(axis)
shape.append(-1 if axis.size is None else axis.size)
else:
if isinstance(axis_spec, string_types):
reshaped_axes.append((axis_spec, 1))
else:
(name, label) = axis_spec
reshaped_axes.append((name, (label,)))
shape.append(1)
reshaped_tensor = array_ops.reshape(labeled_tensor.tensor, shape,
name=scope)
return LabeledTensor(reshaped_tensor, reshaped_axes)
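# Illustrative sketch (added, hypothetical names): existing axes must keep
# their relative order; new names become size-1 axes.
# >>> a = lt.LabeledTensor(tf.ones((2,)), ['x'])
# >>> lt.expand_dims(a, ['batch', 'x', 'y'])  # result shape (1, 2, 1)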
# This should only be added to a graph collection once.
_AXIS_ORDER_KEY = ('__axis_order',)
@tc.returns(tc.Optional(tc.List(string_types)))
def get_axis_order():
"""Get the axis_order set by any containing axis_order_scope.
Returns:
List of strings giving an order to use for axis names, or None, if no axis
order is set.
"""
# By storing axis_order in the graph, we can ensure that axis_order_scope is
# thread-safe.
axis_order_list = ops.get_collection(_AXIS_ORDER_KEY)
if axis_order_list:
axis_order, = axis_order_list
else:
axis_order = None
return axis_order
@tc.accepts(tc.Optional(tc.List(string_types)))
def _set_axis_order(axis_order):
axis_order_list = ops.get_collection_ref(_AXIS_ORDER_KEY)
if axis_order_list:
axis_order_list[0] = axis_order
else:
axis_order_list.append(axis_order)
@contextlib.contextmanager
@tc.accepts(tc.Optional(tc.List(string_types)))
def axis_order_scope(axis_order=None):
"""Set axis order for the result of broadcasting operations within a scope.
This allows you to ensure that tensors resulting from arithmetic have a
predictable axis order.
Example usage:
with lt.axis_order_scope(['x', 'y', 'z']):
# result is guaranteed to have the correct axis order
result = w + b
You can nest scopes, in which case only the inner-most scope applies, e.g.,
with lt.axis_order_scope(['x', 'y', 'z']):
with lt.axis_order_scope():
result = w + b # uses the default (left-most) axis ordering
Args:
axis_order: optional list of strings providing axis names. By default,
creates a scope without axis order.
Yields:
The provided axis_order or `None`.
"""
original_axis_order = get_axis_order()
_set_axis_order(axis_order)
try:
yield axis_order
finally:
_set_axis_order(original_axis_order)
@tc.returns(tc.List(string_types))
def _get_valid_axis_order():
axis_order = get_axis_order()
if axis_order is None:
raise AxisOrderError('an explicit axis order must be provided with the '
'axis_order argument or by using an axis_order_scope')
return axis_order
class AxisOrderError(ValueError):
"""Error class for cases where there is no valid axis order."""
# TODO(shoyer): should this function accept a list of labeled tensors instead?
@tc.returns(type(None))
@tc.accepts(LabeledTensorLike, tc.Optional(tc.Collection(string_types)))
def check_axis_order(labeled_tensor, axis_order=None):
"""Verify that the given tensor has a consistent axis order.
Args:
labeled_tensor: The input tensor. All axes on this tensor must appear in
axis_order.
axis_order: Optional desired axis order, as a list of names. If not
provided, defaults to the current axis_order_scope (if set).
Raises:
AxisOrderError: If the axis_order is unavailable, inconsistent or does not
include all existing axes.
"""
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
if axis_order is None:
axis_order = _get_valid_axis_order()
relevant_axis_order = [a for a in axis_order if a in labeled_tensor.axes]
if len(relevant_axis_order) < len(labeled_tensor.axes):
raise AxisOrderError(
'not all axis names appear in the required axis order %r: %r' %
(axis_order, labeled_tensor))
if relevant_axis_order != list(labeled_tensor.axes):
raise AxisOrderError(
'axes on a labeled tensor do not appear in the same order as the '
'required axis order %r: %r' % (axis_order, labeled_tensor))
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Optional(tc.Collection(string_types)),
tc.Optional(string_types))
def impose_axis_order(labeled_tensor, axis_order=None, name=None):
"""Impose desired axis order on a labeled tensor.
Args:
labeled_tensor: The input tensor.
axis_order: Optional desired axis order, as a list of names. If not
provided, defaults to the current axis_order_scope (if set).
name: Optional op name.
Returns:
Labeled tensor with possibly transposed axes.
Raises:
AxisOrderError: If no axis_order is provided or axis_order does not contain
all axes on the input tensor.
"""
with ops.name_scope(name, 'lt_impose_axis_order', [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
if axis_order is None:
axis_order = _get_valid_axis_order()
relevant_axis_order = [a for a in axis_order if a in labeled_tensor.axes]
return transpose(labeled_tensor, relevant_axis_order, name=scope)
@tc.returns(tc.Optional(list))
@tc.accepts(list, list)
def _find_consistent_ordering(a, b):
"""Find the left-most consistent ordering between two lists of unique items.
A consistent ordering combines all elements in both a and b while keeping all
elements in their original order in both inputs. The left-most consistent
ordering orders elements from `a` not found in `b` before elements in `b` not
found in `a`.
For example, given ['x', 'z'] and ['y', 'z'], both ['x', 'y', 'z'] and ['y',
'x', 'z'] are consistent orderings because each of the inputs appears in
each consistent ordering in the same order, and ['x', 'y', 'z'] is the
left-most, because 'x' appears only in `a` and 'y' appears only in `b`. In
contrast, there is no consistent ordering between ['x', 'y'] and ['y', 'x'].
Args:
a: list with unique elements.
b: list with unique elements.
Returns:
List containing all elements in either a or b, or None, if no consistent
ordering exists.
"""
a_set = set(a)
b_set = set(b)
i = 0
j = 0
ordering = []
while i < len(a) and j < len(b):
if a[i] not in b_set:
ordering.append(a[i])
i += 1
elif b[j] not in a_set:
ordering.append(b[j])
j += 1
elif a[i] == b[j]:
ordering.append(a[i])
i += 1
j += 1
else:
return None
ordering.extend(a[i:])
ordering.extend(b[j:])
return ordering
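# Illustrative sketch (added), mirroring the cases in the docstring above:
# >>> _find_consistent_ordering(['x', 'z'], ['y', 'z'])
# ['x', 'y', 'z']
# >>> _find_consistent_ordering(['x', 'y'], ['y', 'x']) is None
# True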
@tc.returns(LabeledTensor, LabeledTensor, Axes)
@tc.accepts(LabeledTensorLike, LabeledTensorLike, tc.Optional(string_types))
def align(labeled_tensor_0, labeled_tensor_1, name=None):
"""Align the axes of two tensors so they may be broadcast to each other.
Axes are ordered by the current axis order scope, if present, or by the left-
most consistent ordering. An exception is raised if it is impossible to align
the tensors without a transpose (align never copies the input data).
Example usage:
>>> a = lt.LabeledTensor(tf.ones((2, 4)), ['x', 'z'])
>>> b = lt.LabeledTensor(tf.ones((3, 4)), ['y', 'z'])
>>> a2, b2, axes = lt.align(a, b)
>>> a2
<LabeledTensor 'lt_align_1/lt_align_1/0:...' shape=(2, 1, 4) dtype=float32
axes=[('x', Dimension(2)),
('y', Dimension(1)),
('z', Dimension(4))]>
>>> b2
<LabeledTensor 'lt_align_1/lt_align_1/1:...' shape=(1, 3, 4) dtype=float32
axes=[('x', Dimension(1)),
('y', Dimension(3)),
('z', Dimension(4))]>
>>> axes
Axes([('x', Dimension(2)),
('y', Dimension(3)),
('z', Dimension(4))])
Args:
labeled_tensor_0: An input tensor.
labeled_tensor_1: An input tensor.
name: Optional op name.
Returns:
The aligned tensors and the axes the resulting tensor would have if the two
aligned tensors were broadcast to each other. The aligned tensors have the
same rank but not necessarily the same shape, with axes in the same order.
Raises:
ValueError: If axes with the same name on the inputs are not equal.
AxisOrderError: If there is no way to reshape the input tensors into the
output without a transpose.
"""
with ops.name_scope(name, 'lt_align',
[labeled_tensor_0, labeled_tensor_1]) as scope:
labeled_tensor_0 = convert_to_labeled_tensor(labeled_tensor_0)
labeled_tensor_1 = convert_to_labeled_tensor(labeled_tensor_1)
axes_0 = labeled_tensor_0.axes
axes_1 = labeled_tensor_1.axes
for axis_name in axes_0:
if axis_name in axes_1:
if axes_0[axis_name] != axes_1[axis_name]:
raise ValueError('Mismatched %r axis on input tensors: %r and %r' %
(axis_name, axes_0[axis_name], axes_1[axis_name]))
axis_scope_order = get_axis_order()
if axis_scope_order is not None:
# we are in an axis_order_scope
axis_names_set = set(axes_0) | set(axes_1)
new_axis_names = [a for a in axis_scope_order if a in axis_names_set]
check_axis_order(labeled_tensor_0, axis_scope_order)
check_axis_order(labeled_tensor_1, axis_scope_order)
else:
# attempt to find a consistent ordering
new_axis_names = _find_consistent_ordering(list(axes_0), list(axes_1))
if new_axis_names is None:
raise AxisOrderError(
'No consistent axis order allows for aligning tensors with axis '
'orders %r and %r without copying data. Use transpose or '
'impose_axis_order to reorder axes on one or more of the inputs.' %
(axes_0.keys(), axes_1.keys()))
labeled_tensor_0 = expand_dims(labeled_tensor_0,
new_axis_names,
name=scope + '0')
labeled_tensor_1 = expand_dims(labeled_tensor_1,
new_axis_names,
name=scope + '1')
broadcast_axes = []
for axis_name in new_axis_names:
if axis_name in axes_0:
broadcast_axes.append(axes_0[axis_name])
else:
broadcast_axes.append(axes_1[axis_name])
return labeled_tensor_0, labeled_tensor_1, Axes(broadcast_axes)
@tc.returns(types.FunctionType)
@tc.accepts(string_types, collections.Callable)
def define_unary_op(op_name, elementwise_function):
"""Define a unary operation for labeled tensors.
Args:
op_name: string name of the TensorFlow op.
elementwise_function: function to call to evaluate the op on a single
tf.Tensor object. This function must accept two arguments: a tf.Tensor
object, and an optional `name`.
Returns:
Function defining the given op that acts on LabeledTensors.
"""
default_name = 'lt_%s' % op_name
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, tc.Optional(string_types))
def op(labeled_tensor, name=None):
"""LabeledTensor version of `tf.{op_name}`.
See `tf.{op_name}` for full details.
Args:
labeled_tensor: Input tensor.
name: Optional op name.
Returns:
A LabeledTensor with result of applying `tf.{op_name}` elementwise.
"""
with ops.name_scope(name, default_name, [labeled_tensor]) as scope:
labeled_tensor = convert_to_labeled_tensor(labeled_tensor)
result_tensor = elementwise_function(labeled_tensor.tensor, name=scope)
return LabeledTensor(result_tensor, labeled_tensor.axes)
op.__doc__ = op.__doc__.format(op_name=op_name)
op.__name__ = op_name
return op
abs_function = define_unary_op('abs', math_ops.abs)
neg = define_unary_op('neg', math_ops.neg)
sign = define_unary_op('sign', math_ops.sign)
reciprocal = define_unary_op('reciprocal', math_ops.reciprocal)
square = define_unary_op('square', math_ops.square)
round_function = define_unary_op('round', math_ops.round)
sqrt = define_unary_op('sqrt', math_ops.sqrt)
rsqrt = define_unary_op('rsqrt', math_ops.rsqrt)
exp = define_unary_op('exp', math_ops.exp)
log = define_unary_op('log', math_ops.log)
ceil = define_unary_op('ceil', math_ops.ceil)
floor = define_unary_op('floor', math_ops.floor)
cos = define_unary_op('cos', math_ops.cos)
sin = define_unary_op('sin', math_ops.sin)
tan = define_unary_op('tan', math_ops.tan)
acos = define_unary_op('acos', math_ops.acos)
asin = define_unary_op('asin', math_ops.asin)
atan = define_unary_op('atan', math_ops.atan)
lgamma = define_unary_op('lgamma', math_ops.lgamma)
digamma = define_unary_op('digamma', math_ops.digamma)
erf = define_unary_op('erf', math_ops.erf)
erfc = define_unary_op('erfc', math_ops.erfc)
logical_not = define_unary_op('logical_not', math_ops.logical_not)
tanh = define_unary_op('tanh', math_ops.tanh)
sigmoid = define_unary_op('sigmoid', math_ops.sigmoid)
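# Illustrative sketch (added): each generated op preserves the input's axes.
# This assumes abs_function is re-exported as lt.abs, in the same way that
# slice_function is aliased to lt.slice; values are hypothetical.
# >>> x = lt.LabeledTensor(tf.constant([-1.0, 2.0]), ['x'])
# >>> lt.abs(x)  # elementwise tf.abs with the same 'x' axis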
@tc.returns(types.FunctionType)
@tc.accepts(string_types, collections.Callable)
def define_binary_op(op_name, elementwise_function):
"""Define a binary operation that broadcasts labeled tensors.
Args:
op_name: string name of the TensorFlow op.
elementwise_function: function to call to evaluate the op on tf.Tensor
objects. This function must accept three arguments: two tf.Tensor objects,
and an optional `name`.
Returns:
Function defining the given op that acts on LabeledTensors.
"""
default_name = 'lt_%s' % op_name
@tc.returns(LabeledTensor)
@tc.accepts(LabeledTensorLike, LabeledTensorLike, tc.Optional(string_types))
def op(labeled_tensor_0, labeled_tensor_1, name=None):
"""LabeledTensor version of `tf.{op_name}` with label based alignment.
See `tf.{op_name}` for full details.
Args:
labeled_tensor_0: Input tensor.
labeled_tensor_1: Input tensor.
name: Optional op name.
Returns:
A LabeledTensor with result of applying `tf.{op_name}` elementwise.
"""
with ops.name_scope(name, default_name,
[labeled_tensor_0, labeled_tensor_1]) as scope:
align_0, align_1, broadcast_axes = align(labeled_tensor_0,
labeled_tensor_1)
tensor = elementwise_function(align_0.tensor, align_1.tensor, name=scope)
return LabeledTensor(tensor, broadcast_axes)
op.__doc__ = op.__doc__.format(op_name=op_name)
op.__name__ = op_name
return op
add = define_binary_op('add', math_ops.add)
sub = define_binary_op('sub', math_ops.sub)
mul = define_binary_op('mul', math_ops.mul)
div = define_binary_op('div', math_ops.div)
mod = define_binary_op('mod', math_ops.mod)
pow_function = define_binary_op('pow', math_ops.pow)
equal = define_binary_op('equal', math_ops.equal)
greater = define_binary_op('greater', math_ops.greater)
greater_equal = define_binary_op('greater_equal', math_ops.greater_equal)
not_equal = define_binary_op('not_equal', math_ops.not_equal)
less = define_binary_op('less', math_ops.less)
less_equal = define_binary_op('less_equal', math_ops.less_equal)
logical_and = define_binary_op('logical_and', math_ops.logical_and)
logical_or = define_binary_op('logical_or', math_ops.logical_or)
logical_xor = define_binary_op('logical_xor', math_ops.logical_xor)
maximum = define_binary_op('maximum', math_ops.maximum)
minimum = define_binary_op('minimum', math_ops.minimum)
squared_difference = define_binary_op(
'squared_difference', math_ops.squared_difference)
igamma = define_binary_op('igamma', math_ops.igamma)
igammac = define_binary_op('igammac', math_ops.igammac)
zeta = define_binary_op('zeta', math_ops.zeta)
polygamma = define_binary_op('polygamma', math_ops.polygamma)
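# Illustrative sketch (added, hypothetical shapes): binary ops align operands
# by axis name before broadcasting.
# >>> a = lt.LabeledTensor(tf.ones((2, 3)), ['x', 'y'])
# >>> b = lt.LabeledTensor(tf.ones((3,)), ['y'])
# >>> add(a, b)  # aligned on 'y', broadcast over 'x'; result axes ('x', 'y')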
| apache-2.0 |
JamesMura/sentry | src/sentry/south_migrations/0069_auto__add_lostpasswordhash.py | 36 | 19967 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'LostPasswordHash'
db.create_table('sentry_lostpasswordhash', (
('id', self.gf('sentry.db.models.fields.bounded.BoundedBigAutoField')(primary_key=True)),
('user', self.gf('sentry.db.models.fields.FlexibleForeignKey')(to=orm['sentry.User'], unique=True)),
('hash', self.gf('django.db.models.fields.CharField')(max_length=32)),
('date_added', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
))
db.send_create_signal('sentry', ['LostPasswordHash'])
def backwards(self, orm):
# Deleting model 'LostPasswordHash'
db.delete_table('sentry_lostpasswordhash')
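# Usage note (added; not part of the generated file): with South installed,
# this migration would typically be applied or reverted with commands such as
# python manage.py migrate sentry 0069
# (the exact invocation depends on the project settings; shown only as an
# illustration).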
models = {
'sentry.user': {
'Meta': {'object_name': 'User', 'db_table': "'auth_user'"},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'sentry.event': {
'Meta': {'unique_together': "(('project', 'event_id'),)", 'object_name': 'Event', 'db_table': "'sentry_message'"},
'checksum': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'culprit': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'db_column': "'view'", 'blank': 'True'}),
'data': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'datetime': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'event_id': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True', 'db_column': "'message_id'"}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'blank': 'True', 'related_name': "'event_set'", 'null': 'True', 'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'default': '40', 'db_index': 'True', 'blank': 'True'}),
'logger': ('django.db.models.fields.CharField', [], {'default': "'root'", 'max_length': '64', 'db_index': 'True', 'blank': 'True'}),
'message': ('django.db.models.fields.TextField', [], {}),
'platform': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'server_name': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'db_index': 'True'}),
'site': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'db_index': 'True'}),
'time_spent': ('django.db.models.fields.FloatField', [], {'null': 'True'})
},
'sentry.filterkey': {
'Meta': {'unique_together': "(('project', 'key'),)", 'object_name': 'FilterKey'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"})
},
'sentry.filtervalue': {
'Meta': {'unique_together': "(('project', 'key', 'value'),)", 'object_name': 'FilterValue'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
'sentry.group': {
'Meta': {'unique_together': "(('project', 'logger', 'culprit', 'checksum'),)", 'object_name': 'Group', 'db_table': "'sentry_groupedmessage'"},
'active_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'db_index': 'True'}),
'checksum': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'culprit': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'db_column': "'view'", 'blank': 'True'}),
'data': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'is_public': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'default': '40', 'db_index': 'True', 'blank': 'True'}),
'logger': ('django.db.models.fields.CharField', [], {'default': "'root'", 'max_length': '64', 'db_index': 'True', 'blank': 'True'}),
'message': ('django.db.models.fields.TextField', [], {}),
'platform': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'resolved_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'db_index': 'True'}),
'score': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'db_index': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1', 'db_index': 'True'})
},
'sentry.groupbookmark': {
'Meta': {'unique_together': "(('project', 'user', 'group'),)", 'object_name': 'GroupBookmark'},
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'bookmark_set'", 'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'bookmark_set'", 'to': "orm['sentry.Project']"}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_bookmark_set'", 'to': "orm['sentry.User']"})
},
'sentry.groupmeta': {
'Meta': {'unique_together': "(('group', 'key'),)", 'object_name': 'GroupMeta'},
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'value': ('django.db.models.fields.TextField', [], {})
},
'sentry.lostpasswordhash': {
'Meta': {'object_name': 'LostPasswordHash'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'hash': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']", 'unique': 'True'})
},
'sentry.messagecountbyminute': {
'Meta': {'unique_together': "(('project', 'group', 'date'),)", 'object_name': 'MessageCountByMinute'},
'date': ('django.db.models.fields.DateTimeField', [], {'db_index': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'sentry.messagefiltervalue': {
'Meta': {'unique_together': "(('project', 'key', 'value', 'group'),)", 'object_name': 'MessageFilterValue'},
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
'sentry.messageindex': {
'Meta': {'unique_together': "(('column', 'value', 'object_id'),)", 'object_name': 'MessageIndex'},
'column': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
'sentry.option': {
'Meta': {'object_name': 'Option'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '64'}),
'value': ('picklefield.fields.PickledObjectField', [], {})
},
'sentry.pendingteammember': {
'Meta': {'unique_together': "(('team', 'email'),)", 'object_name': 'PendingTeamMember'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'pending_member_set'", 'to': "orm['sentry.Team']"}),
'type': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
'sentry.project': {
'Meta': {'object_name': 'Project'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'owner': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_owned_project_set'", 'null': 'True', 'to': "orm['sentry.User']"}),
'public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'unique': 'True', 'null': 'True'}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'db_index': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Team']", 'null': 'True'})
},
'sentry.projectcountbyminute': {
'Meta': {'unique_together': "(('project', 'date'),)", 'object_name': 'ProjectCountByMinute'},
'date': ('django.db.models.fields.DateTimeField', [], {}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'sentry.projectkey': {
'Meta': {'object_name': 'ProjectKey'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'key_set'", 'to': "orm['sentry.Project']"}),
'public_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'unique': 'True', 'null': 'True'}),
'secret_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'unique': 'True', 'null': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']", 'null': 'True'}),
'user_added': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'keys_added_set'", 'null': 'True', 'to': "orm['sentry.User']"})
},
'sentry.projectoption': {
'Meta': {'unique_together': "(('project', 'key'),)", 'object_name': 'ProjectOption', 'db_table': "'sentry_projectoptions'"},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'value': ('picklefield.fields.PickledObjectField', [], {})
},
'sentry.searchdocument': {
'Meta': {'unique_together': "(('project', 'group'),)", 'object_name': 'SearchDocument'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'date_changed': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'total_events': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'})
},
'sentry.searchtoken': {
'Meta': {'unique_together': "(('document', 'field', 'token'),)", 'object_name': 'SearchToken'},
'document': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'token_set'", 'to': "orm['sentry.SearchDocument']"}),
'field': ('django.db.models.fields.CharField', [], {'default': "'text'", 'max_length': '64'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'}),
'token': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
'sentry.team': {
'Meta': {'object_name': 'Team'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'owner': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']"}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'})
},
'sentry.teammember': {
'Meta': {'unique_together': "(('team', 'user'),)", 'object_name': 'TeamMember'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'member_set'", 'to': "orm['sentry.Team']"}),
'type': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_teammember_set'", 'to': "orm['sentry.User']"})
},
'sentry.useroption': {
'Meta': {'unique_together': "(('user', 'project', 'key'),)", 'object_name': 'UserOption'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']"}),
'value': ('picklefield.fields.PickledObjectField', [], {})
}
}
complete_apps = ['sentry']
| bsd-3-clause |
simbha/mAngE-Gin | lib/Django 1.7/django/utils/autoreload.py | 40 | 10748 | # Autoreloading launcher.
# Borrowed from Peter Hunt and the CherryPy project (http://www.cherrypy.org).
# Some taken from Ian Bicking's Paste (http://pythonpaste.org/).
#
# Portions copyright (c) 2004, CherryPy Team (team@cherrypy.org)
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the CherryPy Team nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import # Avoid importing `importlib` from this package.
import os
import signal
import sys
import time
import traceback
from django.apps import apps
from django.conf import settings
from django.core.signals import request_finished
try:
from django.utils.six.moves import _thread as thread
except ImportError:
from django.utils.six.moves import _dummy_thread as thread
# This import does nothing, but it's necessary to avoid some race conditions
# in the threading module. See http://code.djangoproject.com/ticket/2330 .
try:
import threading # NOQA
except ImportError:
pass
try:
import termios
except ImportError:
termios = None
USE_INOTIFY = False
try:
# Test whether inotify is enabled and likely to work
import pyinotify
fd = pyinotify.INotifyWrapper.create().inotify_init()
if fd >= 0:
USE_INOTIFY = True
os.close(fd)
except ImportError:
pass
RUN_RELOADER = True
FILE_MODIFIED = 1
I18N_MODIFIED = 2
_mtimes = {}
_win = (sys.platform == "win32")
_error_files = []
_cached_modules = set()
_cached_filenames = []
def gen_filenames(only_new=False):
"""
Returns a list of filenames referenced in sys.modules and translation
files.
"""
# N.B. ``list(...)`` is needed, because this runs in parallel with
# application code which might be mutating ``sys.modules``, and this will
# fail with RuntimeError: cannot mutate dictionary while iterating
global _cached_modules, _cached_filenames
module_values = set(sys.modules.values())
_cached_filenames = clean_files(_cached_filenames)
if _cached_modules == module_values:
# No changes in module list, short-circuit the function
if only_new:
return []
else:
return _cached_filenames
new_modules = module_values - _cached_modules
new_filenames = clean_files(
[filename.__file__ for filename in new_modules
if hasattr(filename, '__file__')])
if not _cached_filenames and settings.USE_I18N:
# Add the names of the .mo files that can be generated
# by compilemessages management command to the list of files watched.
basedirs = [os.path.join(os.path.dirname(os.path.dirname(__file__)),
'conf', 'locale'),
'locale']
for app_config in reversed(list(apps.get_app_configs())):
basedirs.append(os.path.join(app_config.path, 'locale'))
basedirs.extend(settings.LOCALE_PATHS)
basedirs = [os.path.abspath(basedir) for basedir in basedirs
if os.path.isdir(basedir)]
for basedir in basedirs:
for dirpath, dirnames, locale_filenames in os.walk(basedir):
for filename in locale_filenames:
if filename.endswith('.mo'):
new_filenames.append(os.path.join(dirpath, filename))
_cached_modules = _cached_modules.union(new_modules)
_cached_filenames += new_filenames
if only_new:
return new_filenames
else:
return _cached_filenames + clean_files(_error_files)
def clean_files(filelist):
filenames = []
for filename in filelist:
if not filename:
continue
if filename.endswith(".pyc") or filename.endswith(".pyo"):
filename = filename[:-1]
if filename.endswith("$py.class"):
filename = filename[:-9] + ".py"
if os.path.exists(filename):
filenames.append(filename)
return filenames
def reset_translations():
import gettext
from django.utils.translation import trans_real
gettext._translations = {}
trans_real._translations = {}
trans_real._default = None
trans_real._active = threading.local()
def inotify_code_changed():
"""
Checks for changed code using inotify. After being called
it blocks until a change event has been fired.
"""
class EventHandler(pyinotify.ProcessEvent):
modified_code = None
def process_default(self, event):
if event.path.endswith('.mo'):
EventHandler.modified_code = I18N_MODIFIED
else:
EventHandler.modified_code = FILE_MODIFIED
wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, EventHandler())
def update_watch(sender=None, **kwargs):
if sender and getattr(sender, 'handles_files', False):
# No need to update watches when request serves files.
# (sender is supposed to be a django.core.handlers.BaseHandler subclass)
return
mask = (
pyinotify.IN_MODIFY |
pyinotify.IN_DELETE |
pyinotify.IN_ATTRIB |
pyinotify.IN_MOVED_FROM |
pyinotify.IN_MOVED_TO |
pyinotify.IN_CREATE
)
for path in gen_filenames(only_new=True):
wm.add_watch(path, mask)
# New modules may get imported when a request is processed.
request_finished.connect(update_watch)
# Block until an event happens.
update_watch()
notifier.check_events(timeout=None)
notifier.read_events()
notifier.process_events()
notifier.stop()
# If we are here the code must have changed.
return EventHandler.modified_code
def code_changed():
global _mtimes, _win
for filename in gen_filenames():
stat = os.stat(filename)
mtime = stat.st_mtime
if _win:
mtime -= stat.st_ctime
if filename not in _mtimes:
_mtimes[filename] = mtime
continue
if mtime != _mtimes[filename]:
_mtimes = {}
try:
del _error_files[_error_files.index(filename)]
except ValueError:
pass
return I18N_MODIFIED if filename.endswith('.mo') else FILE_MODIFIED
return False
def check_errors(fn):
def wrapper(*args, **kwargs):
try:
fn(*args, **kwargs)
except (ImportError, IndentationError, NameError, SyntaxError,
TypeError, AttributeError):
et, ev, tb = sys.exc_info()
if getattr(ev, 'filename', None) is None:
# get the filename from the last item in the stack
filename = traceback.extract_tb(tb)[-1][0]
else:
filename = ev.filename
if filename not in _error_files:
_error_files.append(filename)
raise
return wrapper
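# Editor's addition: a minimal, hypothetical sketch of the `check_errors`
# wrapper above (the failing function is illustrative only). A wrapped
# callable that dies with e.g. an ImportError gets the offending filename
# recorded in `_error_files`, so the reloader keeps watching a module that
# never imported cleanly.
def _check_errors_sketch():
    @check_errors
    def broken():
        raise ImportError('simulated broken module')
    try:
        broken()
    except ImportError:
        pass
    return list(_error_files)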
def ensure_echo_on():
if termios:
fd = sys.stdin
if fd.isatty():
attr_list = termios.tcgetattr(fd)
if not attr_list[3] & termios.ECHO:
attr_list[3] |= termios.ECHO
if hasattr(signal, 'SIGTTOU'):
old_handler = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
else:
old_handler = None
termios.tcsetattr(fd, termios.TCSANOW, attr_list)
if old_handler is not None:
signal.signal(signal.SIGTTOU, old_handler)
def reloader_thread():
ensure_echo_on()
if USE_INOTIFY:
fn = inotify_code_changed
else:
fn = code_changed
while RUN_RELOADER:
change = fn()
if change == FILE_MODIFIED:
sys.exit(3) # force reload
elif change == I18N_MODIFIED:
reset_translations()
time.sleep(1)
def restart_with_reloader():
while True:
args = [sys.executable] + ['-W%s' % o for o in sys.warnoptions] + sys.argv
if sys.platform == "win32":
args = ['"%s"' % arg for arg in args]
new_environ = os.environ.copy()
new_environ["RUN_MAIN"] = 'true'
exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)
if exit_code != 3:
return exit_code
def python_reloader(main_func, args, kwargs):
if os.environ.get("RUN_MAIN") == "true":
thread.start_new_thread(main_func, args, kwargs)
try:
reloader_thread()
except KeyboardInterrupt:
pass
else:
try:
exit_code = restart_with_reloader()
if exit_code < 0:
os.kill(os.getpid(), -exit_code)
else:
sys.exit(exit_code)
except KeyboardInterrupt:
pass
def jython_reloader(main_func, args, kwargs):
from _systemrestart import SystemRestart
thread.start_new_thread(main_func, args)
while True:
if code_changed():
raise SystemRestart
time.sleep(1)
def main(main_func, args=None, kwargs=None):
if args is None:
args = ()
if kwargs is None:
kwargs = {}
if sys.platform.startswith('java'):
reloader = jython_reloader
else:
reloader = python_reloader
wrapped_main_func = check_errors(main_func)
reloader(wrapped_main_func, args, kwargs)
| mit |
lucasb-eyer/DeepFried2 | DeepFried2/containers/Backward.py | 2 | 3344 | import DeepFried2 as df
def backward(start, end):
"""
Returns a function that, when executed, sends its argument backwards
through the (sub-)graph that goes from `start` to `end`.
Useful e.g. for bwd-conv and un-pooling.
Please give credit when simply copy-pasting into your code.
"""
return lambda x: df.th.grad(None, wrt=start, known_grads={end: x})
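# Editor's addition: a minimal, hypothetical usage sketch of `backward` above.
# It assumes a working Theano backend behind `df.th`; the toy graph and the
# all-ones gradient fed in are illustrative, not part of the original module.
def _backward_usage_sketch():
    x = df.th.tensor.matrix('x')  # `start` of the (sub-)graph
    y = df.th.tensor.tanh(x)      # `end` of the (sub-)graph
    bwd = backward(x, y)          # maps gradients at `y` back to gradients at `x`
    return bwd(df.th.tensor.ones_like(y))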
class Backward(df.SingleModuleContainer):
"""
Uses the backward-pass of the contained `Module` as forward pass.
For example, to get an un-pooling layer:
p = df.PoolingCUDNN((2,2))
u = df.Backward(p)
net = df.Sequential(..., p, ..., u, ...)
Or a "deconvolution", better called backward-convolution:
c = df.SpatialConvolutionCUDNN(n_in, n_out, (3,3))
d = df.Backward(c)
net = df.Sequential(..., c, ..., d, ...)
Note that in this case, the convolution and backward-convolution share
the same weights! This might not be what you want.
If you don't want to share weights, you need to create a second `Module`
which shall be used for the backward-pass, but "relate" it to the original
module that it should "undo" using the `wrt` keyword argument:
c = df.SpatialConvolutionCUDNN(n_in, n_out, (3,3))
d = df.Backward(df.SpatialConvolutionCUDNN(n_in, n_out, (3,3)), wrt=c)
net = df.Sequential(..., c, ..., d, ...)
In this case, each have their own set of independent weights.
NOTE: The contained module (or that in `wrt`, if given) must appear in the
network's graph before the `Backward` version of it.
"""
def __init__(self, module, wrt=None):
df.SingleModuleContainer.__init__(self, module)
self.wrt = wrt
def symb_forward(self, symb_input):
# If no `wrt` is passed, we use the referenced module's graph and go
# through it backwards, using all its parameters etc.
if self.wrt is None:
try:
start = self.modules[0]._last_symb_inp[self._mode]
end = self.modules[0]._last_symb_out[self._mode]
except KeyError:
raise ValueError("The module contained by `Backward` needs to occur first in the graph!")
# But if `wrt` is passed, we call `symb_forward` of the referenced
# module with the input that `wrt` had gotten, in order to create a
# new graph through which we then go backwards.
else:
try:
start = self.wrt._last_symb_inp[self._mode]
except KeyError:
raise ValueError("The module referenced by `Backward`'s `wrt` argument needs to occur first in the graph!")
end = self.modules[0](start)
end = df.utils.flatten(end)
inp = df.utils.flatten(symb_input)
# Match all "backward outputs" with inputs here.
assert len(end) == len(inp), "Need same number of inputs to `Backward` as contained module has outputs ({})".format(len(end))
known_grads = dict(zip(end, inp))
# Go backwards for each "backward input" which then becomes output here.
if isinstance(start, (list, tuple)):
return [df.th.grad(None, wrt=s, known_grads=known_grads) for s in start]
else:
return df.th.grad(None, wrt=start, known_grads=known_grads)
| mit |
telefonicaid/iotqatools | iotqatools/cb_ngsiv2_utils.py | 2 | 20060 | # -*- coding: utf-8 -*-
"""
Copyright 2015 Telefonica Investigación y Desarrollo, S.A.U
This file is part of telefonica-iotqatools
iotqatools is free software: you can redistribute it and/or
modify it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the License,
or (at your option) any later version.
iotqatools is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public
License along with iotqatools.
If not, see http://www.gnu.org/licenses/.
For those usages not covered by the GNU Affero General Public License
please contact with: iot_support@tid.es
"""
__author__ = 'Manu'
import requests
import json
from iotqatools.iot_logger import get_logger
from requests.exceptions import RequestException
from iotqatools.iot_tools import PqaTools
class MetadataV2(object):
"""
- a metadata name, describing the role of the metadata at the place where it occurs;
for example, the metadata name accuracy indicates that the metadata value describes how accurate a given attribute
value is
- a metadata type, describing the NGSI value type of the metadata value
- a metadata value containing the actual metadata
"""
def __init__(self, md_name, md_value, md_type=None):
# set medatada attributes
self.md_name = md_name
self.md_value = md_value
if md_type is not None:
self.md_type = md_type
# Compose the metadata
self.metadata = {md_name: {'value': md_value}}
if md_type is not None:
self.metadata[md_name].update({'type': md_type})
def get_metadata(self):
return self.metadata
class AttributeV2(object):
"""
Class that represent the attributes to build the payload to send to a contextBroker
The format created is:
{
"value": <...>,
"type": <...>,
"metadata": <...>
}
"""
def __init__(self, att_name, att_value, att_type=None, metadata_list=None):
# Set attributes
self.metadata_list = []
self.att_name = att_name
self.att_value = att_value
# Compose the attribute
self.attribute = {att_name: {'value': att_value}}
if att_type is not None:
self.attribute[att_name].update({'type': att_type})
self.att_type = att_type
if metadata_list is not None:
# Check if metadata is an instance of Metadata class
for metadata in metadata_list:
if not isinstance(metadata, MetadataV2):
raise ValueError('The metadata argument has to be an instance of Metadata class')
self.add_metadata(metadata)
def add_metadata(self, metadata):
if not isinstance(metadata, MetadataV2):
raise ValueError('The metadata argument has to be an instance of Metadata class')
if 'metadata' in self.attribute[self.att_name]:
self.attribute[self.att_name]['metadata'].update(metadata.get_metadata())
else:
self.attribute[self.att_name].update({'metadata': metadata.get_metadata()})
self.metadata_list.append(metadata)
def get_attribute(self):
return self.attribute
class EntityV2(object):
"""
Class that represent the entities to build the payload to send to a contextBroker
The format created is:
{
"id": "entityID",
"type": "entityType",
"attr_1": <val_1>,
"attr_2": <val_2>,
...
"attr_N": <val_N>
}
"""
def __init__(self, entity_id, entity_type, attribute_list=None):
# set class attributes
self.entity_id = entity_id
self.entity_type = entity_type
self.attribute_list = []
# Compose the entity
self.entity = {'id': entity_id, 'type': entity_type}
if attribute_list is not None:
for attribute in attribute_list:
if not isinstance(attribute, AttributeV2):
raise ValueError('The attributes argument has to be an instance of Attribute class')
self.entity.update(attribute.get_attribute())
def add_attribute(self, attribute):
if not isinstance(attribute, AttributeV2):
raise ValueError('The attributes argument has to be an instance of Attribute class')
self.entity.update(attribute.get_attribute())
def get_entity(self):
return self.entity
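# Editor's addition: a minimal sketch (all values made up) of the payload
# shape MetadataV2, AttributeV2 and EntityV2 compose together; the __main__
# block at the bottom of this module is the original author's fuller example.
def _entity_payload_sketch():
    md = MetadataV2('accuracy', 0.95, md_type='float')
    attr = AttributeV2('temperature', 21.7, att_type='number', metadata_list=[md])
    ent = EntityV2('Room1', 'Room', attribute_list=[attr])
    # -> {'id': 'Room1', 'type': 'Room',
    #     'temperature': {'value': 21.7, 'type': 'number',
    #                     'metadata': {'accuracy': {'value': 0.95, 'type': 'float'}}}}
    return ent.get_entity()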
class PayloadUtilsV2(object):
"""
Class who construct the payloads
"""
@staticmethod
def build_create_entity_payload(entity):
"""
Build the payload to send to context broker to create a new entity with the standard api
:param entity: EntityV2 type
:return: the payload in json format
"""
if not isinstance(entity, EntityV2):
raise ValueError('The entity argument has to be an instance of EntityV2')
payload = entity.get_entity()
return payload
class CbNgsi10v2Utils(object):
"""
Basic functionality for ContextBroker v2 API
"""
def __init__(self, instance,
protocol="http",
port="1026",
path_list_entities="/v2/entities",
path_create_entity="/v2/entities",
path_retrieve_entity_by_id="/v2/entities/entityId",
path_retrieve_entity_attributes="/v2/entities/entityId/attrs",
path_update_or_append_entity_attributes="/v2/entities/entityId/attrs",
path_update_existing_entity_attributes="/v2/entities/entityId/attrs",
path_replace_all_entity_attributes="/v2/entities/entityId/attrs",
path_remove_entity="/v2/entities",
path_get_attribute_data="/v2/entities/entityId/attrs/attrName",
path_update_attribute_data="/v2/entities/entityId/attrs/attrName",
path_remove_a_single_attribute="/v2/entities/entityId/attrs/attrName",
path_get_attribute_value="/v2/entities/entityId/attrs/attrName/value",
path_attribute_value_update="/v2/entities/entityId/attrs/attrName/value",
path_retrieve_entity_types="/v2/types",
path_retrieve_emtity_type="/v2/types/entityType",
path_retrieve_subscriptions="/v2/subscriptions",
path_retrieve_subscription_by_id="/v2/subscriptions/subscriptionId",
path_update_subscription="/v2/subscriptions/subscriptionId",
path_delete_subscription="/v2/subscriptions/subscriptionId",
path_statistics="/statistics",
path_version="/version",
log_instance=None,
log_verbosity='DEBUG',
default_headers={'Accept': 'application/json'},
verify=False,
check_json=True):
"""
CB Utils constructor
:param instance:
:param protocol:
:param port:
:param path_list_entities:
:param path_create_entity:
:param path_retrieve_entity_by_id:
:param path_retrieve_entity_attributes:
:param path_update_or_append_entity_attributes:
:param path_update_existing_entity_attributes:
:param path_replace_all_entity_attributes:
:param path_remove_entity:
:param path_get_attribute_data:
:param path_update_attribute_data:
:param path_remove_a_single_attribute:
:param path_get_attribute_value:
:param path_attribute_value_update:
:param path_retrieve_entity_types:
:param path_retrieve_emtity_type:
:param path_retrieve_subscriptions:
:param path_retrieve_subscription_by_id:
:param path_update_subscription:
:param path_delete_subscription:
:param path_statistics:
:param path_version:
:param log_instance:
:param log_verbosity:
:param default_headers:
:param verify: ssl check
:param check_json:
"""
# initialize logger
if log_instance is not None:
self.log = log_instance
else:
self.log = get_logger('CbNgsi10Utilsv2', log_verbosity)
# Assign the values
self.default_endpoint = "{}://{}:{}".format(protocol, instance, port)
self.headers = default_headers
self.path_list_entities = "{}{}".format(self.default_endpoint, path_list_entities)
        self.path_get_attribute_data = "{}{}".format(self.default_endpoint, path_get_attribute_data)
self.path_statistics = path_statistics
self.path_create_entity = "{}{}".format(self.default_endpoint, path_create_entity)
self.path_context_subscriptions = "{}{}".format(self.default_endpoint, path_retrieve_subscriptions)
self.path_context_subscriptions_by_id = "{}{}".format(self.default_endpoint, path_retrieve_subscription_by_id)
self.path_version = path_version
self.verify = verify
self.check_json = check_json
def __send_request(self, method, url, headers=None, payload=None, verify=None, query=None):
"""
Send a request to a specific url in a specifying type of http request
"""
parameters = {
'method': method,
'url': url,
}
if headers is not None:
parameters.update({'headers': headers})
if payload is not None:
if self.check_json:
parameters.update({'data': json.dumps(payload, ensure_ascii=False).encode('utf-8')})
else:
parameters.update({'data': payload})
if query is not None:
parameters.update({'params': query})
if verify is not None:
parameters.update({'verify': verify})
else:
# If the method does not include the verify parameter, it takes the value from object
parameters.update({'verify': self.verify})
# Send the requests
try:
response = requests.request(**parameters)
except RequestException, e:
PqaTools.log_requestAndResponse(url=url, headers=headers, params=query, data=payload, comp='CB',
method=method)
assert False, 'ERROR: [NETWORK ERROR] {}'.format(e)
# Log data
PqaTools.log_fullRequest(comp='CB', response=response, params=parameters)
return response
def version(self):
"""
Get CB version
"""
url = self.default_endpoint + self.path_version
# send the request for the subscription
response = self.__send_request('get', url, self.headers)
return response
def statistics(self):
"""
Get CB statistics
"""
url = self.default_endpoint + self.path_statistics
# send the request for the subscription
response = self.__send_request('get', url, self.headers)
return response
def create_entity(self, headers, payload, params=None):
"""
Create a entity in ContextBroker with the standard entity creation
:param headers: headers for the requests (fiware-service, fiware-servic-path and x-auth-token)
:param payload: the payload
:param params: params of the query if applicable
The payload has to be like:
{
"type": "Room",
"id": "Bcn-Welt",
"temperature": {
"value": 21.7
},
"humidity": {
"value": 60
},
"location": {
"value": "41.3763726, 2.1864475",
"type": "geo:point",
"metadata": {
"crs": {
"value": "WGS84"
}
}
}
}
"""
headers.update(self.headers)
headers.update({'content-type': 'application/json'})
return self.__send_request('post', self.path_create_entity, payload=payload, headers=headers, query=params,
verify=None)
def list_entities(self, headers, filters=None):
"""
Retrieves a list of entities which match different criteria (by id, idPattern, type or those which match a
query or geographical query)
:param headers:
:param filters:
        :rtype: object
"""
# Add default headers to the request
headers.update(self.headers)
# Set the filters of the requests as params
if filters is not None:
params = filters
else:
params = None
return self.__send_request('get', self.path_list_entities, headers=headers, verify=None, query=params)
def get_attribute_data(self, headers, entity_id, entity_type, attribute_name):
"""
GET
http://orion.lab.fiware.org/v2/entities/entityId/attrs/attrName?type=type
Parameters
entityId:
Entity ID Example: Bcn_Welt. (String)
type:
Entity type, to avoid ambiguity in the case there are several entities with the same entity id. (String)
attrName:
Attribute to be retrieved. Example: temperature. (String)
Response
200
HEADERS
Content-Type:application/json
BODY
{
"value": 21.7,
"type": "none",
"metadata": {}
}
:param entity_id:
:param entity_type:
:param attribute_name:
:return:
"""
# Add default headers to the request
headers.update(self.headers)
# Compose path
path = self.path_get_attribute_data.replace('entityId', entity_id).replace('attrName', attribute_name)
# Compose params
params = {'type': entity_type}
# Make request
return self.__send_request('get', path, headers=headers, verify=None, query=params)
def retrieve_subscriptions(self, headers, options=None):
"""
Response
200
BODY
[
{
"id": "abcdefg",
"description": "One subscription to rule them all",
"subject": {
"entities": [
{
"id": "Bcn_Welt",
"type": "Room"
}
],
"condition": {
"attrs": [
"temperature "
],
"expression": {
"q": "temperature>40"
}
}
},
"notification": {
"httpCustom": {
"url": "http://localhost:1234",
"headers": {
"X-MyHeader": "foo"
},
"qs": {
"authToken": "bar"
}
},
"attrsFormat": "keyValues",
"attrs": [
"temperature",
"humidity"
],
"timesSent": 12,
"lastNotification": "2015-10-05T16:00:00.00Z"
},
"expires": "2016-04-05T14:00:00.00Z",
"status": "active",
"throttling": 5
}
]
"""
# Add default headers to the request
headers.update(self.headers)
# Check params is a correct dict
if options is not None:
if not isinstance(options, dict):
raise Exception('Wrong type in options. Dictionary is needed')
# Make request
return self.__send_request('get', self.path_context_subscriptions, headers=headers, verify=None, query=options)
def retrieve_subscription_by_id(self, headers, subscription_id):
"""
Response
200
HEADERS
Content-Type:application/json
BODY
{
"id": "abcdef",
"description": "One subscription to rule them all",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "Room"
}
],
"condition": {
"attrs": [ "temperature " ],
"expression": {
"q": "temperature>40"
}
}
},
"notification": {
"http": {
"url": "http://localhost:1234"
},
"attrs": ["temperature", "humidity"],
"timesSent": 12,
"lastNotification": "2015-10-05T16:00:00.00Z"
},
"expires": "2016-04-05T14:00:00.00Z",
"status": "active",
"throttling": 5,
}
:param headers:
:param options:
:return:
"""
# Add default headers to the request
headers.update(self.headers)
# Compose path
path = self.path_context_subscriptions_by_id.replace('subscriptionId', subscription_id)
# Make request
return self.__send_request('get', path, headers=headers, verify=None)
if __name__ == '__main__':
# Example if use of the library as a client
cb = CbNgsi10v2Utils('127.0.0.1', 'http')
# ====================create entity================
# Compose the metadatas
md = MetadataV2(md_name='crs', md_value='WGS84')
md2 = MetadataV2(md_name='crs2', md_value='WGS83')
print(md.get_metadata())
# Compose the attributes
attr = AttributeV2(att_name='location', att_value='41.3763726, 2.1864475', att_type='geo:point')
attr.add_metadata(md)
attr.add_metadata(md2)
attr2 = AttributeV2(att_name='temperature', att_value=21.7)
attr3 = AttributeV2(att_name='humidity', att_value=120)
print(attr.get_attribute())
# Compose the entity
ent1 = EntityV2(entity_id='Bcn-Welt25', entity_type='Room5')
ent1.add_attribute(attr)
ent1.add_attribute(attr2)
ent1.add_attribute(attr3)
print(ent1.get_entity())
# create payload
pl = PayloadUtilsV2.build_create_entity_payload(ent1)
# invoke CB
headers = {'fiware-service': 'city012', 'fiware-servicepath': '/electricidad'}
# headers = {'fiware-service': 'eeee', 'fiware-servicepath': '/uuu'}
resp = cb.create_entity(headers=headers, payload=pl)
# ====================list entities================
# create filters
filters2 = {'type': 'Room5', 'limit': 3, 'q': 'humidity~=120;temperature~=21.7'}
filters = {'q': 'dateCreated>2016-04-04T14:00:00.00Z', 'options':'dateCreated'}
# invoke CB
resp = cb.list_entities(headers=headers, filters=filters2)
# ===================get attribute data============
# Get attribute data
# resp = cb.get_attribute_data(headers=headers, entity_id='Bcn-Welt25', entity_type='Room5', attribute_name='location')
# ===============retrieve subscriptions============
    parameters = {'limit': 1, 'offset': 4, 'options': 'count'}
    wrong_parameters = '23'
    # resp = cb.retrieve_subscriptions(headers=headers, options=parameters)
# resp = cb.retrieve_subscriptions(headers=headers, options=wrong_parameters)
# ===============retrieve subscriptions by id ============
id = '572b35cc377ea57e2ay69771'
# resp = cb.retrieve_subscription_by_id(headers=headers, subscription_id=id)
| agpl-3.0 |
tinkerinestudio/Tinkerine-Suite | TinkerineSuite/PIL/SpiderImagePlugin.py | 10 | 9101 | #
# The Python Imaging Library.
#
# SPIDER image file handling
#
# History:
# 2004-08-02 Created BB
# 2006-03-02 added save method
# 2006-03-13 added support for stack images
#
# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144.
# Copyright (c) 2004 by William Baxter.
# Copyright (c) 2004 by Secret Labs AB.
# Copyright (c) 2004 by Fredrik Lundh.
#
##
# Image plugin for the Spider image format. This format is used
# by the SPIDER software, in processing image data from electron
# microscopy and tomography.
##
#
# SpiderImagePlugin.py
#
# The Spider image format is used by SPIDER software, in processing
# image data from electron microscopy and tomography.
#
# Spider home page:
# http://www.wadsworth.org/spider_doc/spider/docs/spider.html
#
# Details about the Spider image format:
# http://www.wadsworth.org/spider_doc/spider/docs/image_doc.html
#
from __future__ import print_function
from PIL import Image, ImageFile
import os, struct, sys
def isInt(f):
try:
i = int(f)
if f-i == 0: return 1
else: return 0
except:
return 0
iforms = [1,3,-11,-12,-21,-22]
# There is no magic number to identify Spider files, so just check a
# series of header locations to see if they have reasonable values.
# Returns no.of bytes in the header, if it is a valid Spider header,
# otherwise returns 0
def isSpiderHeader(t):
h = (99,) + t # add 1 value so can use spider header index start=1
# header values 1,2,5,12,13,22,23 should be integers
for i in [1,2,5,12,13,22,23]:
if not isInt(h[i]): return 0
# check iform
iform = int(h[5])
if not iform in iforms: return 0
# check other header values
labrec = int(h[13]) # no. records in file header
labbyt = int(h[22]) # total no. of bytes in header
lenbyt = int(h[23]) # record length in bytes
#print "labrec = %d, labbyt = %d, lenbyt = %d" % (labrec,labbyt,lenbyt)
if labbyt != (labrec * lenbyt): return 0
# looks like a valid header
return labbyt
def isSpiderImage(filename):
fp = open(filename,'rb')
f = fp.read(92) # read 23 * 4 bytes
fp.close()
bigendian = 1
t = struct.unpack('>23f',f) # try big-endian first
hdrlen = isSpiderHeader(t)
if hdrlen == 0:
bigendian = 0
t = struct.unpack('<23f',f) # little-endian
hdrlen = isSpiderHeader(t)
return hdrlen
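# Editor's addition: a tiny, hypothetical sketch of the probe above (the
# filename is a placeholder). A non-zero return value is the header length in
# bytes, so truthiness doubles as an "is this a SPIDER file?" check.
def _spider_probe_sketch(path='img001.spi'):
    return isSpiderImage(path) > 0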
class SpiderImageFile(ImageFile.ImageFile):
format = "SPIDER"
format_description = "Spider 2D image"
def _open(self):
# check header
n = 27 * 4 # read 27 float values
f = self.fp.read(n)
try:
self.bigendian = 1
t = struct.unpack('>27f',f) # try big-endian first
hdrlen = isSpiderHeader(t)
if hdrlen == 0:
self.bigendian = 0
t = struct.unpack('<27f',f) # little-endian
hdrlen = isSpiderHeader(t)
if hdrlen == 0:
raise SyntaxError("not a valid Spider file")
except struct.error:
raise SyntaxError("not a valid Spider file")
h = (99,) + t # add 1 value : spider header index starts at 1
iform = int(h[5])
if iform != 1:
raise SyntaxError("not a Spider 2D image")
self.size = int(h[12]), int(h[2]) # size in pixels (width, height)
self.istack = int(h[24])
self.imgnumber = int(h[27])
if self.istack == 0 and self.imgnumber == 0:
# stk=0, img=0: a regular 2D image
offset = hdrlen
self.nimages = 1
elif self.istack > 0 and self.imgnumber == 0:
# stk>0, img=0: Opening the stack for the first time
self.imgbytes = int(h[12]) * int(h[2]) * 4
self.hdrlen = hdrlen
self.nimages = int(h[26])
# Point to the first image in the stack
offset = hdrlen * 2
self.imgnumber = 1
elif self.istack == 0 and self.imgnumber > 0:
# stk=0, img>0: an image within the stack
offset = hdrlen + self.stkoffset
self.istack = 2 # So Image knows it's still a stack
else:
raise SyntaxError("inconsistent stack header values")
if self.bigendian:
self.rawmode = "F;32BF"
else:
self.rawmode = "F;32F"
self.mode = "F"
self.tile = [("raw", (0, 0) + self.size, offset,
(self.rawmode, 0, 1))]
self.__fp = self.fp # FIXME: hack
# 1st image index is zero (although SPIDER imgnumber starts at 1)
def tell(self):
if self.imgnumber < 1:
return 0
else:
return self.imgnumber - 1
def seek(self, frame):
if self.istack == 0:
return
if frame >= self.nimages:
raise EOFError("attempt to seek past end of file")
self.stkoffset = self.hdrlen + frame * (self.hdrlen + self.imgbytes)
self.fp = self.__fp
self.fp.seek(self.stkoffset)
self._open()
# returns a byte image after rescaling to 0..255
def convert2byte(self, depth=255):
(min, max) = self.getextrema()
m = 1
if max != min:
m = depth / (max-min)
b = -m * min
return self.point(lambda i, m=m, b=b: i * m + b).convert("L")
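    # Editor's note (illustrative; the filename is a placeholder): SPIDER
    # images open as 32-bit float mode "F", and convert2byte maps the range
    # [min, max] linearly onto [0, depth], returning an 8-bit "L" image:
    #   im = Image.open('img001.spi')
    #   im8 = im.convert2byte()   # mode "L", values 0..255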
# returns a ImageTk.PhotoImage object, after rescaling to 0..255
def tkPhotoImage(self):
from PIL import ImageTk
return ImageTk.PhotoImage(self.convert2byte(), palette=256)
# --------------------------------------------------------------------
# Image series
# given a list of filenames, return a list of images
def loadImageSeries(filelist=None):
" create a list of Image.images for use in montage "
    if filelist is None or len(filelist) < 1:
return
imglist = []
for img in filelist:
if not os.path.exists(img):
print("unable to find %s" % img)
continue
try:
im = Image.open(img).convert2byte()
except:
if not isSpiderImage(img):
print(img + " is not a Spider image file")
continue
im.info['filename'] = img
imglist.append(im)
return imglist
# --------------------------------------------------------------------
# For saving images in Spider format
def makeSpiderHeader(im):
nsam,nrow = im.size
lenbyt = nsam * 4 # There are labrec records in the header
labrec = 1024 / lenbyt
if 1024%lenbyt != 0: labrec += 1
labbyt = labrec * lenbyt
hdr = []
nvalues = labbyt / 4
for i in range(nvalues):
hdr.append(0.0)
if len(hdr) < 23:
return []
# NB these are Fortran indices
hdr[1] = 1.0 # nslice (=1 for an image)
hdr[2] = float(nrow) # number of rows per slice
hdr[5] = 1.0 # iform for 2D image
hdr[12] = float(nsam) # number of pixels per line
hdr[13] = float(labrec) # number of records in file header
hdr[22] = float(labbyt) # total number of bytes in header
hdr[23] = float(lenbyt) # record length in bytes
# adjust for Fortran indexing
hdr = hdr[1:]
hdr.append(0.0)
# pack binary data into a string
hdrstr = []
for v in hdr:
hdrstr.append(struct.pack('f',v))
return hdrstr
def _save(im, fp, filename):
if im.mode[0] != "F":
im = im.convert('F')
hdr = makeSpiderHeader(im)
if len(hdr) < 256:
raise IOError("Error creating Spider header")
# write the SPIDER header
try:
fp = open(filename, 'wb')
except:
raise IOError("Unable to open %s for writing" % filename)
fp.writelines(hdr)
rawmode = "F;32NF" #32-bit native floating point
ImageFile._save(im, fp, [("raw", (0,0)+im.size, 0, (rawmode,0,1))])
fp.close()
def _save_spider(im, fp, filename):
# get the filename extension and register it with Image
fn, ext = os.path.splitext(filename)
Image.register_extension("SPIDER", ext)
_save(im, fp, filename)
# --------------------------------------------------------------------
Image.register_open("SPIDER", SpiderImageFile)
Image.register_save("SPIDER", _save_spider)
if __name__ == "__main__":
if not sys.argv[1:]:
print("Syntax: python SpiderImagePlugin.py Spiderimage [outfile]")
sys.exit()
filename = sys.argv[1]
if not isSpiderImage(filename):
print("input image must be in Spider format")
sys.exit()
outfile = ""
if len(sys.argv[1:]) > 1:
outfile = sys.argv[2]
im = Image.open(filename)
print("image: " + str(im))
print("format: " + str(im.format))
print("size: " + str(im.size))
print("mode: " + str(im.mode))
print("max, min: ", end=' ')
print(im.getextrema())
if outfile != "":
# perform some image operation
im = im.transpose(Image.FLIP_LEFT_RIGHT)
print("saving a flipped version of %s as %s " % (os.path.basename(filename), outfile))
im.save(outfile, "SPIDER")
| agpl-3.0 |
deathping1994/sendmail-api | venv/lib/python2.7/site-packages/wheel/util.py | 219 | 4192 | """Utility functions."""
import sys
import os
import base64
import json
import hashlib
__all__ = ['urlsafe_b64encode', 'urlsafe_b64decode', 'utf8',
'to_json', 'from_json', 'matches_requirement']
def urlsafe_b64encode(data):
"""urlsafe_b64encode without padding"""
return base64.urlsafe_b64encode(data).rstrip(binary('='))
def urlsafe_b64decode(data):
"""urlsafe_b64decode without padding"""
pad = b'=' * (4 - (len(data) & 3))
return base64.urlsafe_b64decode(data + pad)
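# Editor's addition: a minimal round-trip sketch for the unpadded helpers
# above (the sample bytes are illustrative).
def _b64_roundtrip_sketch():
    data = b'wheel'
    token = urlsafe_b64encode(data)          # b'd2hlZWw' -- padding stripped
    assert urlsafe_b64decode(token) == data  # padding restored before decoding
    return token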
def to_json(o):
'''Convert given data to JSON.'''
return json.dumps(o, sort_keys=True)
def from_json(j):
'''Decode a JSON payload.'''
return json.loads(j)
def open_for_csv(name, mode):
if sys.version_info[0] < 3:
nl = {}
bin = 'b'
else:
nl = { 'newline': '' }
bin = ''
return open(name, mode + bin, **nl)
try:
unicode
def utf8(data):
'''Utf-8 encode data.'''
if isinstance(data, unicode):
return data.encode('utf-8')
return data
except NameError:
def utf8(data):
'''Utf-8 encode data.'''
if isinstance(data, str):
return data.encode('utf-8')
return data
try:
# For encoding ascii back and forth between bytestrings, as is repeatedly
# necessary in JSON-based crypto under Python 3
unicode
def native(s):
return s
def binary(s):
if isinstance(s, unicode):
return s.encode('ascii')
return s
except NameError:
def native(s):
if isinstance(s, bytes):
return s.decode('ascii')
return s
    def binary(s):
        if isinstance(s, str):
            return s.encode('ascii')
        return s
class HashingFile(object):
def __init__(self, fd, hashtype='sha256'):
self.fd = fd
self.hashtype = hashtype
self.hash = hashlib.new(hashtype)
self.length = 0
def write(self, data):
self.hash.update(data)
self.length += len(data)
self.fd.write(data)
def close(self):
self.fd.close()
def digest(self):
if self.hashtype == 'md5':
return self.hash.hexdigest()
digest = self.hash.digest()
return self.hashtype + '=' + native(urlsafe_b64encode(digest))
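# Editor's addition: an illustrative sketch of HashingFile (the path and
# payload are placeholders) -- it hashes bytes as they are written and
# reports the digest in the "<hashtype>=<urlsafe-b64>" form built above.
def _hashing_file_sketch(path='/tmp/example.bin', payload=b'example bytes'):
    hf = HashingFile(open(path, 'wb'))
    hf.write(payload)
    hf.close()
    return hf.digest()  # e.g. 'sha256=...'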
if sys.platform == 'win32':
import ctypes.wintypes
# CSIDL_APPDATA for reference - not used here for compatibility with
# dirspec, which uses LOCAL_APPDATA and COMMON_APPDATA in that order
csidl = dict(CSIDL_APPDATA=26, CSIDL_LOCAL_APPDATA=28,
CSIDL_COMMON_APPDATA=35)
def get_path(name):
SHGFP_TYPE_CURRENT = 0
buf = ctypes.create_unicode_buffer(ctypes.wintypes.MAX_PATH)
ctypes.windll.shell32.SHGetFolderPathW(0, csidl[name], 0, SHGFP_TYPE_CURRENT, buf)
return buf.value
def save_config_path(*resource):
appdata = get_path("CSIDL_LOCAL_APPDATA")
path = os.path.join(appdata, *resource)
if not os.path.isdir(path):
os.makedirs(path)
return path
def load_config_paths(*resource):
ids = ["CSIDL_LOCAL_APPDATA", "CSIDL_COMMON_APPDATA"]
for id in ids:
base = get_path(id)
path = os.path.join(base, *resource)
if os.path.exists(path):
yield path
else:
def save_config_path(*resource):
import xdg.BaseDirectory
return xdg.BaseDirectory.save_config_path(*resource)
def load_config_paths(*resource):
import xdg.BaseDirectory
return xdg.BaseDirectory.load_config_paths(*resource)
def matches_requirement(req, wheels):
"""List of wheels matching a requirement.
:param req: The requirement to satisfy
:param wheels: List of wheels to search.
"""
try:
from pkg_resources import Distribution, Requirement
except ImportError:
raise RuntimeError("Cannot use requirements without pkg_resources")
req = Requirement.parse(req)
selected = []
for wf in wheels:
f = wf.parsed_filename
dist = Distribution(project_name=f.group("name"), version=f.group("ver"))
if dist in req:
selected.append(wf)
return selected
| apache-2.0 |
fivejjs/ibis | scripts/cleanup_testing_data.py | 5 | 1516 | #! /usr/bin/env python
# Copyright 2015 Cloudera Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Cleans up the ibis-testing-data from Impala/HDFS and also the HDFS tmp data
# directory
from __future__ import print_function
from posixpath import join as pjoin
import os
import posixpath
import shutil
import sys
import tempfile
import subprocess
import ibis
from ibis.tests.util import IbisTestEnv
ENV = IbisTestEnv()
def make_connection():
ic = ibis.impala_connect(host=ENV.impala_host, port=ENV.impala_port,
protocol=ENV.impala_protocol)
hdfs = ibis.hdfs_connect(host=ENV.nn_host, port=ENV.webhdfs_port)
return ibis.make_client(ic, hdfs_client=hdfs)
if __name__ == '__main__':
if ENV.cleanup_test_data:
con = make_connection()
con.drop_database(ENV.test_data_db, force=True)
con.hdfs.rmdir(ENV.test_data_dir)
con.hdfs.rmdir(ENV.tmp_dir)
else:
print('IBIS_TEST_CLEANUP_TEST_DATA not set to True; refusing to clean')
| apache-2.0 |
jzoldak/edx-platform | lms/djangoapps/student_profile/test/test_views.py | 113 | 3370 | # -*- coding: utf-8 -*-
""" Tests for student profile views. """
from django.conf import settings
from django.core.urlresolvers import reverse
from django.test import TestCase
from django.test.client import RequestFactory
from util.testing import UrlResetMixin
from student.tests.factories import UserFactory
from student_profile.views import learner_profile_context
class LearnerProfileViewTest(UrlResetMixin, TestCase):
""" Tests for the student profile view. """
USERNAME = "username"
PASSWORD = "password"
CONTEXT_DATA = [
'default_public_account_fields',
'accounts_api_url',
'preferences_api_url',
'account_settings_page_url',
'has_preferences_access',
'own_profile',
'country_options',
'language_options',
'account_settings_data',
'preferences_data',
]
def setUp(self):
super(LearnerProfileViewTest, self).setUp()
self.user = UserFactory.create(username=self.USERNAME, password=self.PASSWORD)
self.client.login(username=self.USERNAME, password=self.PASSWORD)
def test_context(self):
"""
Verify learner profile page context data.
"""
request = RequestFactory().get('/url')
request.user = self.user
context = learner_profile_context(request, self.USERNAME, self.user.is_staff)
self.assertEqual(
context['data']['default_public_account_fields'],
settings.ACCOUNT_VISIBILITY_CONFIGURATION['public_fields']
)
self.assertEqual(
context['data']['accounts_api_url'],
reverse("accounts_api", kwargs={'username': self.user.username})
)
self.assertEqual(
context['data']['preferences_api_url'],
reverse('preferences_api', kwargs={'username': self.user.username})
)
self.assertEqual(
context['data']['profile_image_upload_url'],
reverse("profile_image_upload", kwargs={'username': self.user.username})
)
self.assertEqual(
context['data']['profile_image_remove_url'],
reverse('profile_image_remove', kwargs={'username': self.user.username})
)
self.assertEqual(
context['data']['profile_image_max_bytes'],
settings.PROFILE_IMAGE_MAX_BYTES
)
self.assertEqual(
context['data']['profile_image_min_bytes'],
settings.PROFILE_IMAGE_MIN_BYTES
)
self.assertEqual(context['data']['account_settings_page_url'], reverse('account_settings'))
for attribute in self.CONTEXT_DATA:
self.assertIn(attribute, context['data'])
def test_view(self):
"""
Verify learner profile page view.
"""
profile_path = reverse('learner_profile', kwargs={'username': self.USERNAME})
response = self.client.get(path=profile_path)
for attribute in self.CONTEXT_DATA:
self.assertIn(attribute, response.content)
def test_undefined_profile_page(self):
"""
Verify that a 404 is returned for a non-existent profile page.
"""
profile_path = reverse('learner_profile', kwargs={'username': "no_such_user"})
response = self.client.get(path=profile_path)
self.assertEqual(404, response.status_code)
| agpl-3.0 |
xme1226/sahara | sahara/utils/remote.py | 2 | 5045 | # Copyright (c) 2013 Mirantis Inc.
# Copyright (c) 2013 Hortonworks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from oslo.config import cfg
import six
from sahara import exceptions as ex
from sahara.i18n import _
# These options are for SSH remote only
ssh_opts = [
cfg.IntOpt('global_remote_threshold', default=100,
help='Maximum number of remote operations that will '
'be running at the same time. Note that each '
'remote operation requires its own process to '
'run.'),
cfg.IntOpt('cluster_remote_threshold', default=70,
help='The same as global_remote_threshold, but for '
'a single cluster.'),
cfg.StrOpt('proxy_command', default='',
help='Proxy command used to connect to instances. If set, this '
'command should open a netcat socket, that Sahara will use for '
'SSH and HTTP connections. Use {host} and {port} to describe '
'the destination. Other available keywords: {tenant_id}, '
'{network_id}, {router_id}.'),
]
CONF = cfg.CONF
CONF.register_opts(ssh_opts)
DRIVER = None
@six.add_metaclass(abc.ABCMeta)
class RemoteDriver(object):
@abc.abstractmethod
def setup_remote(self, engine):
"""Performs driver initialization."""
@abc.abstractmethod
def get_remote(self, instance):
"""Returns driver specific Remote."""
@abc.abstractmethod
def get_userdata_template(self):
"""Returns userdata template preparing instance to work with driver."""
@abc.abstractmethod
def get_type_and_version(self):
"""Returns engine type and version
Result should be in the form 'type.major.minor'.
"""
@six.add_metaclass(abc.ABCMeta)
class Remote(object):
@abc.abstractmethod
def get_neutron_info(self):
"""Returns dict which later could be passed to get_http_client."""
@abc.abstractmethod
def get_http_client(self, port, info=None):
"""Returns HTTP client for a given instance's port."""
@abc.abstractmethod
def close_http_session(self, port):
"""Closes cached HTTP session for a given instance's port."""
@abc.abstractmethod
def execute_command(self, cmd, run_as_root=False, get_stderr=False,
raise_when_error=True, timeout=300):
"""Execute specified command remotely using existing ssh connection.
Return exit code, stdout data and stderr data of the executed command.
"""
@abc.abstractmethod
def write_file_to(self, remote_file, data, run_as_root=False, timeout=120):
"""Create remote file and write the given data to it.
Uses existing ssh connection.
"""
@abc.abstractmethod
def append_to_file(self, r_file, data, run_as_root=False, timeout=120):
"""Append the given data to remote file.
Uses existing ssh connection.
"""
@abc.abstractmethod
def write_files_to(self, files, run_as_root=False, timeout=120):
"""Copy file->data dictionary in a single ssh connection."""
@abc.abstractmethod
def append_to_files(self, files, run_as_root=False, timeout=120):
"""Copy file->data dictionary in a single ssh connection."""
@abc.abstractmethod
def read_file_from(self, remote_file, run_as_root=False, timeout=120):
"""Read remote file from the specified host and return given data."""
@abc.abstractmethod
def replace_remote_string(self, remote_file, old_str, new_str,
timeout=120):
"""Replaces strings in remote file using sed command."""
def setup_remote(driver, engine):
global DRIVER
DRIVER = driver
DRIVER.setup_remote(engine)
def get_remote_type_and_version():
return DRIVER.get_type_and_version()
def _check_driver_is_loaded():
if not DRIVER:
raise ex.SystemError(_('Remote driver is not loaded. Most probably '
'you see this error because you are running '
                               'Sahara in distributed mode and it is broken. '
'Try running sahara-all instead.'))
def get_remote(instance):
"""Returns Remote for a given instance."""
_check_driver_is_loaded()
return DRIVER.get_remote(instance)
def get_userdata_template():
"""Returns userdata template as a string."""
_check_driver_is_loaded()
return DRIVER.get_userdata_template()
| apache-2.0 |
AutorestCI/azure-sdk-for-python | azure-mgmt-network/azure/mgmt/network/v2017_11_01/models/express_route_circuit_sku.py | 1 | 1525 | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class ExpressRouteCircuitSku(Model):
"""Contains SKU in an ExpressRouteCircuit.
:param name: The name of the SKU.
:type name: str
:param tier: The tier of the SKU. Possible values are 'Standard' and
'Premium'. Possible values include: 'Standard', 'Premium'
:type tier: str or
~azure.mgmt.network.v2017_11_01.models.ExpressRouteCircuitSkuTier
:param family: The family of the SKU. Possible values are: 'UnlimitedData'
and 'MeteredData'. Possible values include: 'UnlimitedData', 'MeteredData'
:type family: str or
~azure.mgmt.network.v2017_11_01.models.ExpressRouteCircuitSkuFamily
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'tier': {'key': 'tier', 'type': 'str'},
'family': {'key': 'family', 'type': 'str'},
}
def __init__(self, name=None, tier=None, family=None):
super(ExpressRouteCircuitSku, self).__init__()
self.name = name
self.tier = tier
self.family = family
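# Editor's addition: a minimal, hypothetical construction of the model above;
# the name/tier/family values are illustrative examples only.
def _express_route_sku_sketch():
    return ExpressRouteCircuitSku(name='Standard_MeteredData',
                                  tier='Standard',
                                  family='MeteredData')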
| mit |
byterom/android_external_chromium_org | tools/memory_inspector/memory_inspector/frontends/command_line.py | 83 | 5717 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Command line frontend for Memory Inspector"""
import json
import memory_inspector
import optparse
import os
import time
from memory_inspector import constants
from memory_inspector.classification import mmap_classifier
from memory_inspector.core import backends
from memory_inspector.data import serialization
def main():
COMMANDS = ['devices', 'ps', 'stats', 'mmaps', 'classified_mmaps']
usage = ('%prog [options] ' + ' | '.join(COMMANDS))
parser = optparse.OptionParser(usage=usage)
parser.add_option('-b', '--backend', help='Backend name '
'(e.g., Android)', type='string', default='Android')
parser.add_option('-s', '--device_id', help='Device '
'id (e.g., Android serial)', type='string')
parser.add_option('-p', '--process_id', help='Target process id',
type='int')
parser.add_option('-m', '--filter_process_name', help='Process '
'name to match', type='string')
parser.add_option('-r', '--mmap_rule',
help='mmap rule', type='string',
default=os.path.join(constants.CLASSIFICATION_RULES_PATH,
'default', 'mmap-android.py'))
(options, args) = parser.parse_args()
memory_inspector.RegisterAllBackends()
if not args or args[0] not in COMMANDS:
parser.print_help()
return -1
if args[0] == 'devices':
_ListDevices(options.backend)
return 0
number_of_devices = 0
if options.device_id:
device_id = options.device_id
number_of_devices = 1
else:
for device in backends.ListDevices():
if device.backend.name == options.backend:
number_of_devices += 1
device_id = device.id
if number_of_devices == 0:
print "No devices connected"
return -1
if number_of_devices > 1:
print ('More than 1 device connected. You need to provide'
' --device_id')
return -1
device = backends.GetDevice(options.backend, device_id)
if not device:
print 'Device', device_id, 'does not exist'
return -1
device.Initialize()
if args[0] == 'ps':
if not options.filter_process_name:
print 'Listing all processes'
else:
print ('Listing processes matching '
+ options.filter_process_name.lower())
print ''
print '%-10s : %-50s : %12s %12s %12s' % (
'Process ID', 'Process Name', 'RUN_TIME', 'THREADS',
'MEM_RSS_KB')
print ''
for process in device.ListProcesses():
if (not options.filter_process_name or
options.filter_process_name.lower() in process.name.lower()):
stats = process.GetStats()
run_time_min, run_time_sec = divmod(stats.run_time, 60)
print '%10s : %-50s : %6s m %2s s %8s %12s' % (
process.pid, _Truncate(process.name, 50), run_time_min,
run_time_sec, stats.threads, stats.vm_rss)
return 0
if not options.process_id:
print 'You need to provide --process_id'
return -1
process = device.GetProcess(options.process_id)
if not process:
print 'Cannot find process [%d] on device %s' % (
options.process_id, device.id)
return -1
elif args[0] == 'stats':
_ListProcessStats(process)
return 0
elif args[0] == 'mmaps':
_ListProcessMmaps(process)
return 0
elif args[0] == 'classified_mmaps':
_ListProcessClassifiedMmaps(process, options.mmap_rule)
return 0
def _ListDevices(backend_name):
print 'Device list:'
print ''
for device in backends.ListDevices():
if device.backend.name == backend_name:
print '%-16s : %s' % (device.id, device.name)
def _ListProcessStats(process):
"""Prints process stats periodically
"""
print 'Stats for process: [%d] %s' % (process.pid, process.name)
print '%-10s : %-50s : %12s %12s %13s %12s %14s' % (
'Process ID', 'Process Name', 'RUN_TIME', 'THREADS',
'CPU_USAGE', 'MEM_RSS_KB', 'PAGE_FAULTS')
print ''
while True:
stats = process.GetStats()
run_time_min, run_time_sec = divmod(stats.run_time, 60)
print '%10s : %-50s : %6s m %2s s %8s %12s %13s %11s' % (
process.pid, _Truncate(process.name, 50), run_time_min, run_time_sec,
stats.threads, stats.cpu_usage, stats.vm_rss, stats.page_faults)
time.sleep(1)
def _ListProcessMmaps(process):
"""Prints process memory maps
"""
print 'Memory Maps for process: [%d] %s' % (process.pid, process.name)
print '%-10s %-10s %6s %12s %12s %13s %13s %-40s' % (
'START', 'END', 'FLAGS', 'PRIV.DIRTY', 'PRIV.CLEAN',
'SHARED DIRTY', 'SHARED CLEAN', 'MAPPED_FILE')
print '%38s %12s %12s %13s' % ('(kb)', '(kb)', '(kb)', '(kb)')
print ''
maps = process.DumpMemoryMaps()
for entry in maps.entries:
print '%-10x %-10x %6s %12s %12s %13s %13s %-40s' % (
entry.start, entry.end, entry.prot_flags,
entry.priv_dirty_bytes / 1024, entry.priv_clean_bytes / 1024,
entry.shared_dirty_bytes / 1024,
entry.shared_clean_bytes / 1024, entry.mapped_file)
def _ListProcessClassifiedMmaps(process, mmap_rule):
"""Prints process classified memory maps
"""
maps = process.DumpMemoryMaps()
if not os.path.exists(mmap_rule):
print 'File', mmap_rule, 'not found'
return
with open(mmap_rule) as f:
rules = mmap_classifier.LoadRules(f.read())
classified_results_tree = mmap_classifier.Classify(maps, rules)
print json.dumps(classified_results_tree, cls=serialization.Encoder)
def _Truncate(name, max_length):
if len(name) <= max_length:
return name
return '%s...' % name[0:(max_length - 3)]
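# Editor's addition: illustrative behaviour of _Truncate -- it keeps
# (max_length - 3) characters and appends an ellipsis when the name is long.
def _truncate_sketch():
    assert _Truncate('memory_inspector', 10) == 'memory_...'
    assert _Truncate('short', 10) == 'short'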
| bsd-3-clause |
rolando/scrapy | tests/test_utils_url.py | 16 | 19716 | # -*- coding: utf-8 -*-
import unittest
import six
from six.moves.urllib.parse import urlparse
from scrapy.spiders import Spider
from scrapy.utils.url import (url_is_from_any_domain, url_is_from_spider,
add_http_if_no_scheme, guess_scheme,
parse_url, strip_url)
__doctests__ = ['scrapy.utils.url']
class UrlUtilsTest(unittest.TestCase):
def test_url_is_from_any_domain(self):
url = 'http://www.wheele-bin-art.co.uk/get/product/123'
self.assertTrue(url_is_from_any_domain(url, ['wheele-bin-art.co.uk']))
self.assertFalse(url_is_from_any_domain(url, ['art.co.uk']))
url = 'http://wheele-bin-art.co.uk/get/product/123'
self.assertTrue(url_is_from_any_domain(url, ['wheele-bin-art.co.uk']))
self.assertFalse(url_is_from_any_domain(url, ['art.co.uk']))
url = 'http://www.Wheele-Bin-Art.co.uk/get/product/123'
self.assertTrue(url_is_from_any_domain(url, ['wheele-bin-art.CO.UK']))
self.assertTrue(url_is_from_any_domain(url, ['WHEELE-BIN-ART.CO.UK']))
url = 'http://192.169.0.15:8080/mypage.html'
self.assertTrue(url_is_from_any_domain(url, ['192.169.0.15:8080']))
self.assertFalse(url_is_from_any_domain(url, ['192.169.0.15']))
url = 'javascript:%20document.orderform_2581_1190810811.mode.value=%27add%27;%20javascript:%20document.orderform_2581_1190810811.submit%28%29'
self.assertFalse(url_is_from_any_domain(url, ['testdomain.com']))
self.assertFalse(url_is_from_any_domain(url+'.testdomain.com', ['testdomain.com']))
def test_url_is_from_spider(self):
spider = Spider(name='example.com')
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', spider))
self.assertTrue(url_is_from_spider('http://sub.example.com/some/page.html', spider))
self.assertFalse(url_is_from_spider('http://www.example.org/some/page.html', spider))
self.assertFalse(url_is_from_spider('http://www.example.net/some/page.html', spider))
def test_url_is_from_spider_class_attributes(self):
class MySpider(Spider):
name = 'example.com'
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', MySpider))
self.assertTrue(url_is_from_spider('http://sub.example.com/some/page.html', MySpider))
self.assertFalse(url_is_from_spider('http://www.example.org/some/page.html', MySpider))
self.assertFalse(url_is_from_spider('http://www.example.net/some/page.html', MySpider))
def test_url_is_from_spider_with_allowed_domains(self):
spider = Spider(name='example.com', allowed_domains=['example.org', 'example.net'])
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', spider))
self.assertTrue(url_is_from_spider('http://sub.example.com/some/page.html', spider))
self.assertTrue(url_is_from_spider('http://example.com/some/page.html', spider))
self.assertTrue(url_is_from_spider('http://www.example.org/some/page.html', spider))
self.assertTrue(url_is_from_spider('http://www.example.net/some/page.html', spider))
self.assertFalse(url_is_from_spider('http://www.example.us/some/page.html', spider))
spider = Spider(name='example.com', allowed_domains=set(('example.com', 'example.net')))
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', spider))
spider = Spider(name='example.com', allowed_domains=('example.com', 'example.net'))
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', spider))
def test_url_is_from_spider_with_allowed_domains_class_attributes(self):
class MySpider(Spider):
name = 'example.com'
allowed_domains = ('example.org', 'example.net')
self.assertTrue(url_is_from_spider('http://www.example.com/some/page.html', MySpider))
self.assertTrue(url_is_from_spider('http://sub.example.com/some/page.html', MySpider))
self.assertTrue(url_is_from_spider('http://example.com/some/page.html', MySpider))
self.assertTrue(url_is_from_spider('http://www.example.org/some/page.html', MySpider))
self.assertTrue(url_is_from_spider('http://www.example.net/some/page.html', MySpider))
self.assertFalse(url_is_from_spider('http://www.example.us/some/page.html', MySpider))
class AddHttpIfNoScheme(unittest.TestCase):
def test_add_scheme(self):
self.assertEqual(add_http_if_no_scheme('www.example.com'),
'http://www.example.com')
def test_without_subdomain(self):
self.assertEqual(add_http_if_no_scheme('example.com'),
'http://example.com')
def test_path(self):
self.assertEqual(add_http_if_no_scheme('www.example.com/some/page.html'),
'http://www.example.com/some/page.html')
def test_port(self):
self.assertEqual(add_http_if_no_scheme('www.example.com:80'),
'http://www.example.com:80')
def test_fragment(self):
self.assertEqual(add_http_if_no_scheme('www.example.com/some/page#frag'),
'http://www.example.com/some/page#frag')
def test_query(self):
self.assertEqual(add_http_if_no_scheme('www.example.com/do?a=1&b=2&c=3'),
'http://www.example.com/do?a=1&b=2&c=3')
def test_username_password(self):
self.assertEqual(add_http_if_no_scheme('username:password@www.example.com'),
'http://username:password@www.example.com')
def test_complete_url(self):
self.assertEqual(add_http_if_no_scheme('username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag'),
'http://username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag')
def test_preserve_http(self):
self.assertEqual(add_http_if_no_scheme('http://www.example.com'),
'http://www.example.com')
def test_preserve_http_without_subdomain(self):
self.assertEqual(add_http_if_no_scheme('http://example.com'),
'http://example.com')
def test_preserve_http_path(self):
self.assertEqual(add_http_if_no_scheme('http://www.example.com/some/page.html'),
'http://www.example.com/some/page.html')
def test_preserve_http_port(self):
self.assertEqual(add_http_if_no_scheme('http://www.example.com:80'),
'http://www.example.com:80')
def test_preserve_http_fragment(self):
self.assertEqual(add_http_if_no_scheme('http://www.example.com/some/page#frag'),
'http://www.example.com/some/page#frag')
def test_preserve_http_query(self):
self.assertEqual(add_http_if_no_scheme('http://www.example.com/do?a=1&b=2&c=3'),
'http://www.example.com/do?a=1&b=2&c=3')
def test_preserve_http_username_password(self):
self.assertEqual(add_http_if_no_scheme('http://username:password@www.example.com'),
'http://username:password@www.example.com')
def test_preserve_http_complete_url(self):
self.assertEqual(add_http_if_no_scheme('http://username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag'),
'http://username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag')
def test_protocol_relative(self):
self.assertEqual(add_http_if_no_scheme('//www.example.com'),
'http://www.example.com')
def test_protocol_relative_without_subdomain(self):
self.assertEqual(add_http_if_no_scheme('//example.com'),
'http://example.com')
def test_protocol_relative_path(self):
self.assertEqual(add_http_if_no_scheme('//www.example.com/some/page.html'),
'http://www.example.com/some/page.html')
def test_protocol_relative_port(self):
self.assertEqual(add_http_if_no_scheme('//www.example.com:80'),
'http://www.example.com:80')
def test_protocol_relative_fragment(self):
self.assertEqual(add_http_if_no_scheme('//www.example.com/some/page#frag'),
'http://www.example.com/some/page#frag')
def test_protocol_relative_query(self):
self.assertEqual(add_http_if_no_scheme('//www.example.com/do?a=1&b=2&c=3'),
'http://www.example.com/do?a=1&b=2&c=3')
def test_protocol_relative_username_password(self):
self.assertEqual(add_http_if_no_scheme('//username:password@www.example.com'),
'http://username:password@www.example.com')
def test_protocol_relative_complete_url(self):
self.assertEqual(add_http_if_no_scheme('//username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag'),
'http://username:password@www.example.com:80/some/page/do?a=1&b=2&c=3#frag')
def test_preserve_https(self):
self.assertEqual(add_http_if_no_scheme('https://www.example.com'),
'https://www.example.com')
def test_preserve_ftp(self):
self.assertEqual(add_http_if_no_scheme('ftp://www.example.com'),
'ftp://www.example.com')
class GuessSchemeTest(unittest.TestCase):
pass
def create_guess_scheme_t(args):
def do_expected(self):
url = guess_scheme(args[0])
assert url.startswith(args[1]), \
'Wrong scheme guessed: for `%s` got `%s`, expected `%s...`' % (
args[0], url, args[1])
return do_expected
def create_skipped_scheme_t(args):
def do_expected(self):
raise unittest.SkipTest(args[2])
url = guess_scheme(args[0])
assert url.startswith(args[1])
return do_expected
for k, args in enumerate([
('/index', 'file://'),
('/index.html', 'file://'),
('./index.html', 'file://'),
('../index.html', 'file://'),
('../../index.html', 'file://'),
('./data/index.html', 'file://'),
('.hidden/data/index.html', 'file://'),
('/home/user/www/index.html', 'file://'),
('//home/user/www/index.html', 'file://'),
('file:///home/user/www/index.html', 'file://'),
('index.html', 'http://'),
('example.com', 'http://'),
('www.example.com', 'http://'),
('www.example.com/index.html', 'http://'),
('http://example.com', 'http://'),
('http://example.com/index.html', 'http://'),
('localhost', 'http://'),
('localhost/index.html', 'http://'),
# some corner cases (default to http://)
('/', 'http://'),
('.../test', 'http://'),
], start=1):
t_method = create_guess_scheme_t(args)
t_method.__name__ = 'test_uri_%03d' % k
    setattr(GuessSchemeTest, t_method.__name__, t_method)
# TODO: the following tests do not pass with current implementation
for k, args in enumerate([
    (r'C:\absolute\path\to\a\file.html', 'file://',
     'Windows filepaths are not supported for scrapy shell'),
], start=1):
t_method = create_skipped_scheme_t(args)
t_method.__name__ = 'test_uri_skipped_%03d' % k
    setattr(GuessSchemeTest, t_method.__name__, t_method)
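# A hedged illustration (not part of the suite): each (url, expected-scheme)
# pair above becomes its own method on GuessSchemeTest, so a failure report
# names the exact case, e.g.
#
#     GuessSchemeTest('test_uri_001').run()  # checks guess_scheme('/index')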
class StripUrl(unittest.TestCase):
def test_noop(self):
self.assertEqual(strip_url(
'http://www.example.com/index.html'),
'http://www.example.com/index.html')
def test_noop_query_string(self):
self.assertEqual(strip_url(
'http://www.example.com/index.html?somekey=somevalue'),
'http://www.example.com/index.html?somekey=somevalue')
def test_fragments(self):
self.assertEqual(strip_url(
'http://www.example.com/index.html?somekey=somevalue#section', strip_fragment=False),
'http://www.example.com/index.html?somekey=somevalue#section')
def test_path(self):
for input_url, origin, output_url in [
('http://www.example.com/',
False,
'http://www.example.com/'),
('http://www.example.com',
False,
'http://www.example.com'),
('http://www.example.com',
True,
'http://www.example.com/'),
]:
self.assertEqual(strip_url(input_url, origin_only=origin), output_url)
def test_credentials(self):
for i, o in [
('http://username@www.example.com/index.html?somekey=somevalue#section',
'http://www.example.com/index.html?somekey=somevalue'),
('https://username:@www.example.com/index.html?somekey=somevalue#section',
'https://www.example.com/index.html?somekey=somevalue'),
('ftp://username:password@www.example.com/index.html?somekey=somevalue#section',
'ftp://www.example.com/index.html?somekey=somevalue'),
]:
self.assertEqual(strip_url(i, strip_credentials=True), o)
def test_credentials_encoded_delims(self):
for i, o in [
# user: "username@"
# password: none
('http://username%40@www.example.com/index.html?somekey=somevalue#section',
'http://www.example.com/index.html?somekey=somevalue'),
# user: "username:pass"
# password: ""
('https://username%3Apass:@www.example.com/index.html?somekey=somevalue#section',
'https://www.example.com/index.html?somekey=somevalue'),
# user: "me"
# password: "user@domain.com"
('ftp://me:user%40domain.com@www.example.com/index.html?somekey=somevalue#section',
'ftp://www.example.com/index.html?somekey=somevalue'),
]:
self.assertEqual(strip_url(i, strip_credentials=True), o)
def test_default_ports_creds_off(self):
for i, o in [
('http://username:password@www.example.com:80/index.html?somekey=somevalue#section',
'http://www.example.com/index.html?somekey=somevalue'),
('http://username:password@www.example.com:8080/index.html#section',
'http://www.example.com:8080/index.html'),
('http://username:password@www.example.com:443/index.html?somekey=somevalue&someotherkey=sov#section',
'http://www.example.com:443/index.html?somekey=somevalue&someotherkey=sov'),
('https://username:password@www.example.com:443/index.html',
'https://www.example.com/index.html'),
('https://username:password@www.example.com:442/index.html',
'https://www.example.com:442/index.html'),
('https://username:password@www.example.com:80/index.html',
'https://www.example.com:80/index.html'),
('ftp://username:password@www.example.com:21/file.txt',
'ftp://www.example.com/file.txt'),
('ftp://username:password@www.example.com:221/file.txt',
'ftp://www.example.com:221/file.txt'),
]:
self.assertEqual(strip_url(i), o)
def test_default_ports(self):
for i, o in [
('http://username:password@www.example.com:80/index.html',
'http://username:password@www.example.com/index.html'),
('http://username:password@www.example.com:8080/index.html',
'http://username:password@www.example.com:8080/index.html'),
('http://username:password@www.example.com:443/index.html',
'http://username:password@www.example.com:443/index.html'),
('https://username:password@www.example.com:443/index.html',
'https://username:password@www.example.com/index.html'),
('https://username:password@www.example.com:442/index.html',
'https://username:password@www.example.com:442/index.html'),
('https://username:password@www.example.com:80/index.html',
'https://username:password@www.example.com:80/index.html'),
('ftp://username:password@www.example.com:21/file.txt',
'ftp://username:password@www.example.com/file.txt'),
('ftp://username:password@www.example.com:221/file.txt',
'ftp://username:password@www.example.com:221/file.txt'),
]:
self.assertEqual(strip_url(i, strip_default_port=True, strip_credentials=False), o)
def test_default_ports_keep(self):
for i, o in [
('http://username:password@www.example.com:80/index.html?somekey=somevalue&someotherkey=sov#section',
'http://username:password@www.example.com:80/index.html?somekey=somevalue&someotherkey=sov'),
('http://username:password@www.example.com:8080/index.html?somekey=somevalue&someotherkey=sov#section',
'http://username:password@www.example.com:8080/index.html?somekey=somevalue&someotherkey=sov'),
('http://username:password@www.example.com:443/index.html',
'http://username:password@www.example.com:443/index.html'),
('https://username:password@www.example.com:443/index.html',
'https://username:password@www.example.com:443/index.html'),
('https://username:password@www.example.com:442/index.html',
'https://username:password@www.example.com:442/index.html'),
('https://username:password@www.example.com:80/index.html',
'https://username:password@www.example.com:80/index.html'),
('ftp://username:password@www.example.com:21/file.txt',
'ftp://username:password@www.example.com:21/file.txt'),
('ftp://username:password@www.example.com:221/file.txt',
'ftp://username:password@www.example.com:221/file.txt'),
]:
self.assertEqual(strip_url(i, strip_default_port=False, strip_credentials=False), o)
def test_origin_only(self):
for i, o in [
('http://username:password@www.example.com/index.html',
'http://www.example.com/'),
('http://username:password@www.example.com:80/foo/bar?query=value#somefrag',
'http://www.example.com/'),
('http://username:password@www.example.com:8008/foo/bar?query=value#somefrag',
'http://www.example.com:8008/'),
('https://username:password@www.example.com:443/index.html',
'https://www.example.com/'),
]:
self.assertEqual(strip_url(i, origin_only=True), o)
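# Taken together, the cases above cover the strip_url options exercised in
# this file: strip_credentials, strip_default_port, origin_only and
# strip_fragment. A hedged combined example:
#
#     strip_url('http://user:pw@www.example.com:80/foo?q=1#frag',
#               origin_only=True)   # -> 'http://www.example.com/'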
if __name__ == "__main__":
unittest.main()
| bsd-3-clause |
jwhitehorn/p2pool | p2pool/util/logging.py | 287 | 2995 | import codecs
import datetime
import os
import sys
from twisted.python import log
class EncodeReplacerPipe(object):
def __init__(self, inner_file):
self.inner_file = inner_file
self.softspace = 0
def write(self, data):
if isinstance(data, unicode):
try:
data = data.encode(self.inner_file.encoding, 'replace')
except:
data = data.encode('ascii', 'replace')
self.inner_file.write(data)
def flush(self):
self.inner_file.flush()
class LogFile(object):
def __init__(self, filename):
self.filename = filename
self.inner_file = None
self.reopen()
def reopen(self):
if self.inner_file is not None:
self.inner_file.close()
open(self.filename, 'a').close()
f = open(self.filename, 'rb')
f.seek(0, os.SEEK_END)
length = f.tell()
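        # When the log exceeds ~100 MB, keep only roughly its last megabyte,
        # skipping forward to the next newline so the kept tail starts on a
        # whole line.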
if length > 100*1000*1000:
f.seek(-1000*1000, os.SEEK_END)
while True:
if f.read(1) in ('', '\n'):
break
data = f.read()
f.close()
f = open(self.filename, 'wb')
f.write(data)
f.close()
self.inner_file = codecs.open(self.filename, 'a', 'utf-8')
def write(self, data):
self.inner_file.write(data)
def flush(self):
self.inner_file.flush()
class TeePipe(object):
def __init__(self, outputs):
self.outputs = outputs
def write(self, data):
for output in self.outputs:
output.write(data)
def flush(self):
for output in self.outputs:
output.flush()
class TimestampingPipe(object):
def __init__(self, inner_file):
self.inner_file = inner_file
self.buf = ''
self.softspace = 0
def write(self, data):
buf = self.buf + data
lines = buf.split('\n')
for line in lines[:-1]:
self.inner_file.write('%s %s\n' % (datetime.datetime.now(), line))
self.inner_file.flush()
self.buf = lines[-1]
def flush(self):
pass
class AbortPipe(object):
def __init__(self, inner_file):
self.inner_file = inner_file
self.softspace = 0
def write(self, data):
try:
self.inner_file.write(data)
except:
sys.stdout = sys.__stdout__
log.DefaultObserver.stderr = sys.stderr = sys.__stderr__
raise
def flush(self):
self.inner_file.flush()
class PrefixPipe(object):
def __init__(self, inner_file, prefix):
self.inner_file = inner_file
self.prefix = prefix
self.buf = ''
self.softspace = 0
def write(self, data):
buf = self.buf + data
lines = buf.split('\n')
for line in lines[:-1]:
self.inner_file.write(self.prefix + line + '\n')
self.inner_file.flush()
self.buf = lines[-1]
def flush(self):
pass
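# A hedged sketch (not part of this module) of how these pipes compose: tee
# output to both the console and a LogFile, timestamping each line and
# replacing characters the console cannot encode.
#
#     log_file = LogFile('run.log')
#     sys.stdout = TimestampingPipe(TeePipe([
#         EncodeReplacerPipe(sys.__stdout__), log_file]))
#     print 'hello'  # goes to the console and run.log, prefixed with a time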
| gpl-3.0 |
tlatzko/spmcluster | .tox/docs/lib/python2.7/site-packages/docutils/transforms/parts.py | 187 | 6980 | # $Id: parts.py 6073 2009-08-06 12:21:10Z milde $
# Authors: David Goodger <goodger@python.org>; Ueli Schlaepfer; Dmitry Jemerov
# Copyright: This module has been placed in the public domain.
"""
Transforms related to document parts.
"""
__docformat__ = 'reStructuredText'
import re
import sys
from docutils import nodes, utils
from docutils.transforms import TransformError, Transform
class SectNum(Transform):
"""
Automatically assigns numbers to the titles of document sections.
It is possible to limit the maximum section level for which the numbers
are added. For those sections that are auto-numbered, the "autonum"
attribute is set, informing the contents table generator that a different
form of the TOC should be used.
"""
default_priority = 710
"""Should be applied before `Contents`."""
def apply(self):
self.maxdepth = self.startnode.details.get('depth', None)
self.startvalue = self.startnode.details.get('start', 1)
self.prefix = self.startnode.details.get('prefix', '')
self.suffix = self.startnode.details.get('suffix', '')
self.startnode.parent.remove(self.startnode)
if self.document.settings.sectnum_xform:
if self.maxdepth is None:
self.maxdepth = sys.maxint
self.update_section_numbers(self.document)
else: # store details for eventual section numbering by the writer
self.document.settings.sectnum_depth = self.maxdepth
self.document.settings.sectnum_start = self.startvalue
self.document.settings.sectnum_prefix = self.prefix
self.document.settings.sectnum_suffix = self.suffix
def update_section_numbers(self, node, prefix=(), depth=0):
depth += 1
if prefix:
sectnum = 1
else:
sectnum = self.startvalue
for child in node:
if isinstance(child, nodes.section):
numbers = prefix + (str(sectnum),)
title = child[0]
                # Use &nbsp; (no-break spaces) for spacing:
generated = nodes.generated(
'', (self.prefix + '.'.join(numbers) + self.suffix
+ u'\u00a0' * 3),
classes=['sectnum'])
title.insert(0, generated)
title['auto'] = 1
if depth < self.maxdepth:
self.update_section_numbers(child, numbers, depth)
sectnum += 1
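# A hedged usage sketch (not part of this module): the transform is attached
# by the "sectnum" directive in a reStructuredText source; the options map to
# the details read in apply() above:
#
#     .. sectnum::
#        :depth: 2
#        :prefix: A.
#        :start: 3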
class Contents(Transform):
"""
This transform generates a table of contents from the entire document tree
or from a single branch. It locates "section" elements and builds them
into a nested bullet list, which is placed within a "topic" created by the
contents directive. A title is either explicitly specified, taken from
the appropriate language module, or omitted (local table of contents).
The depth may be specified. Two-way references between the table of
contents and section titles are generated (requires Writer support).
This transform requires a startnode, which contains generation
options and provides the location for the generated table of contents (the
startnode is replaced by the table of contents "topic").
"""
default_priority = 720
def apply(self):
try: # let the writer (or output software) build the contents list?
toc_by_writer = self.document.settings.use_latex_toc
except AttributeError:
toc_by_writer = False
details = self.startnode.details
if 'local' in details:
startnode = self.startnode.parent.parent
while not (isinstance(startnode, nodes.section)
or isinstance(startnode, nodes.document)):
# find the ToC root: a direct ancestor of startnode
startnode = startnode.parent
else:
startnode = self.document
self.toc_id = self.startnode.parent['ids'][0]
if 'backlinks' in details:
self.backlinks = details['backlinks']
else:
self.backlinks = self.document.settings.toc_backlinks
if toc_by_writer:
# move customization settings to the parent node
self.startnode.parent.attributes.update(details)
self.startnode.parent.remove(self.startnode)
else:
contents = self.build_contents(startnode)
if len(contents):
self.startnode.replace_self(contents)
else:
self.startnode.parent.parent.remove(self.startnode.parent)
def build_contents(self, node, level=0):
level += 1
sections = [sect for sect in node if isinstance(sect, nodes.section)]
entries = []
autonum = 0
depth = self.startnode.details.get('depth', sys.maxint)
for section in sections:
title = section[0]
auto = title.get('auto') # May be set by SectNum.
entrytext = self.copy_and_filter(title)
reference = nodes.reference('', '', refid=section['ids'][0],
*entrytext)
ref_id = self.document.set_id(reference)
entry = nodes.paragraph('', '', reference)
item = nodes.list_item('', entry)
if ( self.backlinks in ('entry', 'top')
and title.next_node(nodes.reference) is None):
if self.backlinks == 'entry':
title['refid'] = ref_id
elif self.backlinks == 'top':
title['refid'] = self.toc_id
if level < depth:
subsects = self.build_contents(section, level)
item += subsects
entries.append(item)
if entries:
contents = nodes.bullet_list('', *entries)
if auto:
contents['classes'].append('auto-toc')
return contents
else:
return []
def copy_and_filter(self, node):
"""Return a copy of a title, with references, images, etc. removed."""
visitor = ContentsFilter(self.document)
node.walkabout(visitor)
return visitor.get_entry_text()
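# A hedged usage sketch (not part of this module): the transform is attached
# by the "contents" directive; 'depth', 'local' and 'backlinks' arrive via
# startnode.details as read in apply() and build_contents() above:
#
#     .. contents:: Table of Contents
#        :depth: 2
#        :local:
#        :backlinks: entry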
class ContentsFilter(nodes.TreeCopyVisitor):
def get_entry_text(self):
return self.get_tree_copy().children
def visit_citation_reference(self, node):
raise nodes.SkipNode
def visit_footnote_reference(self, node):
raise nodes.SkipNode
def visit_image(self, node):
if node.hasattr('alt'):
self.parent.append(nodes.Text(node['alt']))
raise nodes.SkipNode
def ignore_node_but_process_children(self, node):
raise nodes.SkipDeparture
visit_interpreted = ignore_node_but_process_children
visit_problematic = ignore_node_but_process_children
visit_reference = ignore_node_but_process_children
visit_target = ignore_node_but_process_children
| bsd-2-clause |
bdh1011/wau | venv/lib/python2.7/site-packages/pandas/io/tests/test_sql.py | 1 | 93879 | """SQL io tests
The SQL tests are broken down in different classes:
- `PandasSQLTest`: base class with common methods for all test classes
- Tests for the public API (only tests with sqlite3)
- `_TestSQLApi` base class
- `TestSQLApi`: test the public API with sqlalchemy engine
- `TestSQLiteFallbackApi`: test the public API with a sqlite DBAPI connection
- Tests for the different SQL flavors (flavor specific type conversions)
- Tests for the sqlalchemy mode: `_TestSQLAlchemy` is the base class with
common methods, the different tested flavors (sqlite3, MySQL, PostgreSQL)
derive from the base class
- Tests for the fallback mode (`TestSQLiteFallback` and `TestMySQLLegacy`)
"""
from __future__ import print_function
import unittest
import sqlite3
import csv
import os
import sys
import nose
import warnings
import numpy as np
from datetime import datetime, date, time
from pandas import DataFrame, Series, Index, MultiIndex, isnull, concat
from pandas import date_range, to_datetime, to_timedelta, Timestamp
import pandas.compat as compat
from pandas.compat import StringIO, range, lrange, string_types
from pandas.core.datetools import format as date_format
import pandas.io.sql as sql
from pandas.io.sql import read_sql_table, read_sql_query
import pandas.util.testing as tm
try:
import sqlalchemy
import sqlalchemy.schema
import sqlalchemy.sql.sqltypes as sqltypes
SQLALCHEMY_INSTALLED = True
except ImportError:
SQLALCHEMY_INSTALLED = False
SQL_STRINGS = {
'create_iris': {
'sqlite': """CREATE TABLE iris (
"SepalLength" REAL,
"SepalWidth" REAL,
"PetalLength" REAL,
"PetalWidth" REAL,
"Name" TEXT
)""",
'mysql': """CREATE TABLE iris (
`SepalLength` DOUBLE,
`SepalWidth` DOUBLE,
`PetalLength` DOUBLE,
`PetalWidth` DOUBLE,
`Name` VARCHAR(200)
)""",
'postgresql': """CREATE TABLE iris (
"SepalLength" DOUBLE PRECISION,
"SepalWidth" DOUBLE PRECISION,
"PetalLength" DOUBLE PRECISION,
"PetalWidth" DOUBLE PRECISION,
"Name" VARCHAR(200)
)"""
},
'insert_iris': {
'sqlite': """INSERT INTO iris VALUES(?, ?, ?, ?, ?)""",
'mysql': """INSERT INTO iris VALUES(%s, %s, %s, %s, "%s");""",
'postgresql': """INSERT INTO iris VALUES(%s, %s, %s, %s, %s);"""
},
'create_test_types': {
'sqlite': """CREATE TABLE types_test_data (
"TextCol" TEXT,
"DateCol" TEXT,
"IntDateCol" INTEGER,
"FloatCol" REAL,
"IntCol" INTEGER,
"BoolCol" INTEGER,
"IntColWithNull" INTEGER,
"BoolColWithNull" INTEGER
)""",
'mysql': """CREATE TABLE types_test_data (
`TextCol` TEXT,
`DateCol` DATETIME,
`IntDateCol` INTEGER,
`FloatCol` DOUBLE,
`IntCol` INTEGER,
`BoolCol` BOOLEAN,
`IntColWithNull` INTEGER,
`BoolColWithNull` BOOLEAN
)""",
'postgresql': """CREATE TABLE types_test_data (
"TextCol" TEXT,
"DateCol" TIMESTAMP,
"DateColWithTz" TIMESTAMP WITH TIME ZONE,
"IntDateCol" INTEGER,
"FloatCol" DOUBLE PRECISION,
"IntCol" INTEGER,
"BoolCol" BOOLEAN,
"IntColWithNull" INTEGER,
"BoolColWithNull" BOOLEAN
)"""
},
'insert_test_types': {
'sqlite': {
'query': """
INSERT INTO types_test_data
VALUES(?, ?, ?, ?, ?, ?, ?, ?)
""",
'fields': (
'TextCol', 'DateCol', 'IntDateCol', 'FloatCol',
'IntCol', 'BoolCol', 'IntColWithNull', 'BoolColWithNull'
)
},
'mysql': {
'query': """
INSERT INTO types_test_data
VALUES("%s", %s, %s, %s, %s, %s, %s, %s)
""",
'fields': (
'TextCol', 'DateCol', 'IntDateCol', 'FloatCol',
'IntCol', 'BoolCol', 'IntColWithNull', 'BoolColWithNull'
)
},
'postgresql': {
'query': """
INSERT INTO types_test_data
VALUES(%s, %s, %s, %s, %s, %s, %s, %s, %s)
""",
'fields': (
'TextCol', 'DateCol', 'DateColWithTz', 'IntDateCol', 'FloatCol',
'IntCol', 'BoolCol', 'IntColWithNull', 'BoolColWithNull'
)
},
},
'read_parameters': {
'sqlite': "SELECT * FROM iris WHERE Name=? AND SepalLength=?",
'mysql': 'SELECT * FROM iris WHERE `Name`="%s" AND `SepalLength`=%s',
'postgresql': 'SELECT * FROM iris WHERE "Name"=%s AND "SepalLength"=%s'
},
'read_named_parameters': {
'sqlite': """
SELECT * FROM iris WHERE Name=:name AND SepalLength=:length
""",
'mysql': """
SELECT * FROM iris WHERE
`Name`="%(name)s" AND `SepalLength`=%(length)s
""",
'postgresql': """
SELECT * FROM iris WHERE
"Name"=%(name)s AND "SepalLength"=%(length)s
"""
}
}
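# The dict above is keyed by operation and then by flavor; e.g. the sqlite
# DDL for the iris table is SQL_STRINGS['create_iris']['sqlite'], which is
# how _load_iris_data below selects it via self.flavor.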
class PandasSQLTest(unittest.TestCase):
"""
Base class with common private methods for SQLAlchemy and fallback cases.
"""
def drop_table(self, table_name):
self._get_exec().execute("DROP TABLE IF EXISTS %s" % table_name)
def _get_exec(self):
if hasattr(self.conn, 'execute'):
return self.conn
else:
return self.conn.cursor()
def _load_iris_data(self):
import io
iris_csv_file = os.path.join(tm.get_data_path(), 'iris.csv')
self.drop_table('iris')
self._get_exec().execute(SQL_STRINGS['create_iris'][self.flavor])
with io.open(iris_csv_file, mode='r', newline=None) as iris_csv:
r = csv.reader(iris_csv)
next(r) # skip header row
ins = SQL_STRINGS['insert_iris'][self.flavor]
for row in r:
self._get_exec().execute(ins, row)
def _check_iris_loaded_frame(self, iris_frame):
pytype = iris_frame.dtypes[0].type
row = iris_frame.iloc[0]
self.assertTrue(
issubclass(pytype, np.floating), 'Loaded frame has incorrect type')
tm.equalContents(row.values, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
def _load_test1_data(self):
columns = ['index', 'A', 'B', 'C', 'D']
        data = [
            ('2000-01-03 00:00:00', 0.980268513777, 3.68573087906,
             -0.364216805298, -1.15973806169),
            ('2000-01-04 00:00:00', 1.04791624281, -0.0412318367011,
             -0.16181208307, 0.212549316967),
            ('2000-01-05 00:00:00', 0.498580885705, 0.731167677815,
             -0.537677223318, 1.34627041952),
            ('2000-01-06 00:00:00', 1.12020151869, 1.56762092543,
             0.00364077397681, 0.67525259227)]
self.test_frame1 = DataFrame(data, columns=columns)
def _load_test2_data(self):
df = DataFrame(dict(A=[4, 1, 3, 6],
B=['asd', 'gsq', 'ylt', 'jkl'],
C=[1.1, 3.1, 6.9, 5.3],
D=[False, True, True, False],
E=['1990-11-22', '1991-10-26', '1993-11-26', '1995-12-12']))
df['E'] = to_datetime(df['E'])
self.test_frame2 = df
def _load_test3_data(self):
columns = ['index', 'A', 'B']
        data = [
            ('2000-01-03 00:00:00', 2 ** 31 - 1, -1.987670),
            ('2000-01-04 00:00:00', -29, -0.0412318367011),
            ('2000-01-05 00:00:00', 20000, 0.731167677815),
            ('2000-01-06 00:00:00', -290867, 1.56762092543)]
self.test_frame3 = DataFrame(data, columns=columns)
def _load_raw_sql(self):
self.drop_table('types_test_data')
self._get_exec().execute(SQL_STRINGS['create_test_types'][self.flavor])
ins = SQL_STRINGS['insert_test_types'][self.flavor]
data = [
{
'TextCol': 'first',
'DateCol': '2000-01-03 00:00:00',
'DateColWithTz': '2000-01-01 00:00:00-08:00',
'IntDateCol': 535852800,
'FloatCol': 10.10,
'IntCol': 1,
'BoolCol': False,
'IntColWithNull': 1,
'BoolColWithNull': False,
},
{
'TextCol': 'first',
'DateCol': '2000-01-04 00:00:00',
'DateColWithTz': '2000-06-01 00:00:00-07:00',
'IntDateCol': 1356998400,
'FloatCol': 10.10,
'IntCol': 1,
'BoolCol': False,
'IntColWithNull': None,
'BoolColWithNull': None,
},
]
for d in data:
self._get_exec().execute(
ins['query'],
[d[field] for field in ins['fields']]
)
def _count_rows(self, table_name):
result = self._get_exec().execute(
"SELECT count(*) AS count_1 FROM %s" % table_name).fetchone()
return result[0]
def _read_sql_iris(self):
iris_frame = self.pandasSQL.read_query("SELECT * FROM iris")
self._check_iris_loaded_frame(iris_frame)
def _read_sql_iris_parameter(self):
query = SQL_STRINGS['read_parameters'][self.flavor]
params = ['Iris-setosa', 5.1]
iris_frame = self.pandasSQL.read_query(query, params=params)
self._check_iris_loaded_frame(iris_frame)
def _read_sql_iris_named_parameter(self):
query = SQL_STRINGS['read_named_parameters'][self.flavor]
params = {'name': 'Iris-setosa', 'length': 5.1}
iris_frame = self.pandasSQL.read_query(query, params=params)
self._check_iris_loaded_frame(iris_frame)
def _to_sql(self):
self.drop_table('test_frame1')
self.pandasSQL.to_sql(self.test_frame1, 'test_frame1')
self.assertTrue(self.pandasSQL.has_table(
'test_frame1'), 'Table not written to DB')
# Nuke table
self.drop_table('test_frame1')
def _to_sql_empty(self):
self.drop_table('test_frame1')
self.pandasSQL.to_sql(self.test_frame1.iloc[:0], 'test_frame1')
def _to_sql_fail(self):
self.drop_table('test_frame1')
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='fail')
self.assertTrue(self.pandasSQL.has_table(
'test_frame1'), 'Table not written to DB')
self.assertRaises(ValueError, self.pandasSQL.to_sql,
self.test_frame1, 'test_frame1', if_exists='fail')
self.drop_table('test_frame1')
def _to_sql_replace(self):
self.drop_table('test_frame1')
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='fail')
# Add to table again
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='replace')
self.assertTrue(self.pandasSQL.has_table(
'test_frame1'), 'Table not written to DB')
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
self.assertEqual(
num_rows, num_entries, "not the same number of rows as entries")
self.drop_table('test_frame1')
def _to_sql_append(self):
# Nuke table just in case
self.drop_table('test_frame1')
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='fail')
# Add to table again
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='append')
self.assertTrue(self.pandasSQL.has_table(
'test_frame1'), 'Table not written to DB')
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
self.assertEqual(
num_rows, num_entries, "not the same number of rows as entries")
self.drop_table('test_frame1')
def _roundtrip(self):
self.drop_table('test_frame_roundtrip')
self.pandasSQL.to_sql(self.test_frame1, 'test_frame_roundtrip')
result = self.pandasSQL.read_query('SELECT * FROM test_frame_roundtrip')
result.set_index('level_0', inplace=True)
# result.index.astype(int)
result.index.name = None
tm.assert_frame_equal(result, self.test_frame1)
def _execute_sql(self):
# drop_sql = "DROP TABLE IF EXISTS test" # should already be done
iris_results = self.pandasSQL.execute("SELECT * FROM iris")
row = iris_results.fetchone()
tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
def _to_sql_save_index(self):
df = DataFrame.from_records([(1,2.1,'line1'), (2,1.5,'line2')],
columns=['A','B','C'], index=['A'])
self.pandasSQL.to_sql(df, 'test_to_sql_saves_index')
ix_cols = self._get_index_columns('test_to_sql_saves_index')
self.assertEqual(ix_cols, [['A',],])
def _transaction_test(self):
self.pandasSQL.execute("CREATE TABLE test_trans (A INT, B TEXT)")
ins_sql = "INSERT INTO test_trans (A,B) VALUES (1, 'blah')"
# Make sure when transaction is rolled back, no rows get inserted
try:
with self.pandasSQL.run_transaction() as trans:
trans.execute(ins_sql)
raise Exception('error')
except:
# ignore raised exception
pass
res = self.pandasSQL.read_query('SELECT * FROM test_trans')
self.assertEqual(len(res), 0)
# Make sure when transaction is committed, rows do get inserted
with self.pandasSQL.run_transaction() as trans:
trans.execute(ins_sql)
res2 = self.pandasSQL.read_query('SELECT * FROM test_trans')
self.assertEqual(len(res2), 1)
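    # For reference, a hedged standalone sketch of the same rollback
    # guarantee using plain sqlite3 (the connection as a context manager
    # commits on success and rolls back on error):
    #
    #     conn = sqlite3.connect(':memory:')
    #     conn.execute("CREATE TABLE t (x INT)")
    #     try:
    #         with conn:
    #             conn.execute("INSERT INTO t VALUES (1)")
    #             raise Exception('boom')
    #     except Exception:
    #         pass
    #     assert conn.execute("SELECT count(*) FROM t").fetchone()[0] == 0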
#------------------------------------------------------------------------------
#--- Testing the public API
class _TestSQLApi(PandasSQLTest):
"""
Base class to test the public API.
    From this, two classes are derived to run these tests for both the
    sqlalchemy mode (`TestSQLApi`) and the fallback mode
    (`TestSQLiteFallbackApi`).
    These tests are run with sqlite3. Specific tests for the different
    SQL flavors are included in `_TestSQLAlchemy`.
    Notes:
    - `flavor` can always be passed, even in SQLAlchemy mode; it should be
      correctly ignored.
    - We don't use drop_table because that isn't part of the public API.
    """
flavor = 'sqlite'
mode = None
def setUp(self):
self.conn = self.connect()
self._load_iris_data()
self._load_test1_data()
self._load_test2_data()
self._load_test3_data()
self._load_raw_sql()
def test_read_sql_iris(self):
iris_frame = sql.read_sql_query(
"SELECT * FROM iris", self.conn)
self._check_iris_loaded_frame(iris_frame)
def test_legacy_read_frame(self):
with tm.assert_produces_warning(FutureWarning):
iris_frame = sql.read_frame(
"SELECT * FROM iris", self.conn)
self._check_iris_loaded_frame(iris_frame)
def test_to_sql(self):
sql.to_sql(self.test_frame1, 'test_frame1', self.conn, flavor='sqlite')
self.assertTrue(
sql.has_table('test_frame1', self.conn, flavor='sqlite'), 'Table not written to DB')
def test_to_sql_fail(self):
sql.to_sql(self.test_frame1, 'test_frame2',
self.conn, flavor='sqlite', if_exists='fail')
self.assertTrue(
sql.has_table('test_frame2', self.conn, flavor='sqlite'), 'Table not written to DB')
self.assertRaises(ValueError, sql.to_sql, self.test_frame1,
'test_frame2', self.conn, flavor='sqlite', if_exists='fail')
def test_to_sql_replace(self):
sql.to_sql(self.test_frame1, 'test_frame3',
self.conn, flavor='sqlite', if_exists='fail')
# Add to table again
sql.to_sql(self.test_frame1, 'test_frame3',
self.conn, flavor='sqlite', if_exists='replace')
self.assertTrue(
sql.has_table('test_frame3', self.conn, flavor='sqlite'),
'Table not written to DB')
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame3')
self.assertEqual(
num_rows, num_entries, "not the same number of rows as entries")
def test_to_sql_append(self):
sql.to_sql(self.test_frame1, 'test_frame4',
self.conn, flavor='sqlite', if_exists='fail')
# Add to table again
sql.to_sql(self.test_frame1, 'test_frame4',
self.conn, flavor='sqlite', if_exists='append')
self.assertTrue(
sql.has_table('test_frame4', self.conn, flavor='sqlite'),
'Table not written to DB')
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame4')
self.assertEqual(
num_rows, num_entries, "not the same number of rows as entries")
def test_to_sql_type_mapping(self):
sql.to_sql(self.test_frame3, 'test_frame5',
self.conn, flavor='sqlite', index=False)
result = sql.read_sql("SELECT * FROM test_frame5", self.conn)
tm.assert_frame_equal(self.test_frame3, result)
def test_to_sql_series(self):
s = Series(np.arange(5, dtype='int64'), name='series')
sql.to_sql(s, "test_series", self.conn, flavor='sqlite', index=False)
s2 = sql.read_sql_query("SELECT * FROM test_series", self.conn)
tm.assert_frame_equal(s.to_frame(), s2)
def test_to_sql_panel(self):
panel = tm.makePanel()
self.assertRaises(NotImplementedError, sql.to_sql, panel,
'test_panel', self.conn, flavor='sqlite')
def test_legacy_write_frame(self):
# Assume that functionality is already tested above so just do
# quick check that it basically works
with tm.assert_produces_warning(FutureWarning):
sql.write_frame(self.test_frame1, 'test_frame_legacy', self.conn,
flavor='sqlite')
self.assertTrue(
sql.has_table('test_frame_legacy', self.conn, flavor='sqlite'),
'Table not written to DB')
def test_roundtrip(self):
sql.to_sql(self.test_frame1, 'test_frame_roundtrip',
con=self.conn, flavor='sqlite')
result = sql.read_sql_query(
'SELECT * FROM test_frame_roundtrip',
con=self.conn)
# HACK!
result.index = self.test_frame1.index
result.set_index('level_0', inplace=True)
result.index.astype(int)
result.index.name = None
tm.assert_frame_equal(result, self.test_frame1)
def test_roundtrip_chunksize(self):
sql.to_sql(self.test_frame1, 'test_frame_roundtrip', con=self.conn,
index=False, flavor='sqlite', chunksize=2)
result = sql.read_sql_query(
'SELECT * FROM test_frame_roundtrip',
con=self.conn)
tm.assert_frame_equal(result, self.test_frame1)
def test_execute_sql(self):
# drop_sql = "DROP TABLE IF EXISTS test" # should already be done
iris_results = sql.execute("SELECT * FROM iris", con=self.conn)
row = iris_results.fetchone()
tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
def test_date_parsing(self):
        # Test date parsing in read_sql_query
# No Parsing
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn)
self.assertFalse(
issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates=['DateCol'])
self.assertTrue(
issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates={'DateCol': '%Y-%m-%d %H:%M:%S'})
self.assertTrue(
issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates=['IntDateCol'])
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates={'IntDateCol': 's'})
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
def test_date_and_index(self):
# Test case where same column appears in parse_date and index_col
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
index_col='DateCol',
parse_dates=['DateCol', 'IntDateCol'])
self.assertTrue(issubclass(df.index.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
def test_timedelta(self):
# see #6921
df = to_timedelta(Series(['00:00:01', '00:00:03'], name='foo')).to_frame()
with tm.assert_produces_warning(UserWarning):
df.to_sql('test_timedelta', self.conn)
result = sql.read_sql_query('SELECT * FROM test_timedelta', self.conn)
tm.assert_series_equal(result['foo'], df['foo'].astype('int64'))
def test_complex(self):
df = DataFrame({'a':[1+1j, 2j]})
# Complex data type should raise error
self.assertRaises(ValueError, df.to_sql, 'test_complex', self.conn)
def test_to_sql_index_label(self):
temp_frame = DataFrame({'col1': range(4)})
# no index name, defaults to 'index'
sql.to_sql(temp_frame, 'test_index_label', self.conn)
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[0], 'index')
# specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label='other_label')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[0], 'other_label',
"Specified index_label not written to database")
# using the index name
temp_frame.index.name = 'index_name'
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[0], 'index_name',
"Index name not written to database")
# has index name, but specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label='other_label')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[0], 'other_label',
"Specified index_label not written to database")
def test_to_sql_index_label_multiindex(self):
temp_frame = DataFrame({'col1': range(4)},
index=MultiIndex.from_product([('A0', 'A1'), ('B0', 'B1')]))
# no index name, defaults to 'level_0' and 'level_1'
sql.to_sql(temp_frame, 'test_index_label', self.conn)
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[0], 'level_0')
self.assertEqual(frame.columns[1], 'level_1')
# specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label=['A', 'B'])
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[:2].tolist(), ['A', 'B'],
"Specified index_labels not written to database")
# using the index name
temp_frame.index.names = ['A', 'B']
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[:2].tolist(), ['A', 'B'],
"Index names not written to database")
# has index name, but specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label=['C', 'D'])
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
self.assertEqual(frame.columns[:2].tolist(), ['C', 'D'],
"Specified index_labels not written to database")
# wrong length of index_label
self.assertRaises(ValueError, sql.to_sql, temp_frame,
'test_index_label', self.conn, if_exists='replace',
index_label='C')
def test_multiindex_roundtrip(self):
df = DataFrame.from_records([(1,2.1,'line1'), (2,1.5,'line2')],
columns=['A','B','C'], index=['A','B'])
df.to_sql('test_multiindex_roundtrip', self.conn)
result = sql.read_sql_query('SELECT * FROM test_multiindex_roundtrip',
self.conn, index_col=['A','B'])
tm.assert_frame_equal(df, result, check_index_type=True)
def test_integer_col_names(self):
df = DataFrame([[1, 2], [3, 4]], columns=[0, 1])
sql.to_sql(df, "test_frame_integer_col_names", self.conn,
if_exists='replace')
def test_get_schema(self):
create_sql = sql.get_schema(self.test_frame1, 'test', 'sqlite',
con=self.conn)
self.assertTrue('CREATE' in create_sql)
def test_get_schema_dtypes(self):
float_frame = DataFrame({'a':[1.1,1.2], 'b':[2.1,2.2]})
dtype = sqlalchemy.Integer if self.mode == 'sqlalchemy' else 'INTEGER'
create_sql = sql.get_schema(float_frame, 'test', 'sqlite',
con=self.conn, dtype={'b':dtype})
self.assertTrue('CREATE' in create_sql)
self.assertTrue('INTEGER' in create_sql)
def test_chunksize_read(self):
df = DataFrame(np.random.randn(22, 5), columns=list('abcde'))
df.to_sql('test_chunksize', self.conn, index=False)
# reading the query in one time
res1 = sql.read_sql_query("select * from test_chunksize", self.conn)
# reading the query in chunks with read_sql_query
res2 = DataFrame()
i = 0
sizes = [5, 5, 5, 5, 2]
for chunk in sql.read_sql_query("select * from test_chunksize",
self.conn, chunksize=5):
res2 = concat([res2, chunk], ignore_index=True)
self.assertEqual(len(chunk), sizes[i])
i += 1
tm.assert_frame_equal(res1, res2)
# reading the query in chunks with read_sql_query
if self.mode == 'sqlalchemy':
res3 = DataFrame()
i = 0
sizes = [5, 5, 5, 5, 2]
for chunk in sql.read_sql_table("test_chunksize", self.conn,
chunksize=5):
res3 = concat([res3, chunk], ignore_index=True)
self.assertEqual(len(chunk), sizes[i])
i += 1
tm.assert_frame_equal(res1, res3)
def test_categorical(self):
# GH8624
# test that categorical gets written correctly as dense column
df = DataFrame(
{'person_id': [1, 2, 3],
'person_name': ['John P. Doe', 'Jane Dove', 'John P. Doe']})
df2 = df.copy()
df2['person_name'] = df2['person_name'].astype('category')
df2.to_sql('test_categorical', self.conn, index=False)
res = sql.read_sql_query('SELECT * FROM test_categorical', self.conn)
tm.assert_frame_equal(res, df)
class TestSQLApi(_TestSQLApi):
"""
Test the public API as it would be used directly
Tests for `read_sql_table` are included here, as this is specific for the
sqlalchemy mode.
"""
flavor = 'sqlite'
mode = 'sqlalchemy'
def connect(self):
if SQLALCHEMY_INSTALLED:
return sqlalchemy.create_engine('sqlite:///:memory:')
else:
raise nose.SkipTest('SQLAlchemy not installed')
def test_read_table_columns(self):
# test columns argument in read_table
sql.to_sql(self.test_frame1, 'test_frame', self.conn)
cols = ['A', 'B']
result = sql.read_sql_table('test_frame', self.conn, columns=cols)
self.assertEqual(result.columns.tolist(), cols,
"Columns not correctly selected")
def test_read_table_index_col(self):
# test columns argument in read_table
sql.to_sql(self.test_frame1, 'test_frame', self.conn)
result = sql.read_sql_table('test_frame', self.conn, index_col="index")
self.assertEqual(result.index.names, ["index"],
"index_col not correctly set")
result = sql.read_sql_table('test_frame', self.conn, index_col=["A", "B"])
self.assertEqual(result.index.names, ["A", "B"],
"index_col not correctly set")
result = sql.read_sql_table('test_frame', self.conn, index_col=["A", "B"],
columns=["C", "D"])
self.assertEqual(result.index.names, ["A", "B"],
"index_col not correctly set")
self.assertEqual(result.columns.tolist(), ["C", "D"],
"columns not set correctly whith index_col")
def test_read_sql_delegate(self):
iris_frame1 = sql.read_sql_query(
"SELECT * FROM iris", self.conn)
iris_frame2 = sql.read_sql(
"SELECT * FROM iris", self.conn)
tm.assert_frame_equal(iris_frame1, iris_frame2)
iris_frame1 = sql.read_sql_table('iris', self.conn)
iris_frame2 = sql.read_sql('iris', self.conn)
tm.assert_frame_equal(iris_frame1, iris_frame2)
def test_not_reflect_all_tables(self):
# create invalid table
qry = """CREATE TABLE invalid (x INTEGER, y UNKNOWN);"""
self.conn.execute(qry)
qry = """CREATE TABLE other_table (x INTEGER, y INTEGER);"""
self.conn.execute(qry)
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a warning.
sql.read_sql_table('other_table', self.conn)
sql.read_sql_query('SELECT * FROM other_table', self.conn)
# Verify some things
self.assertEqual(len(w), 0, "Warning triggered for other table")
def test_warning_case_insensitive_table_name(self):
# see GH7815.
        # We can't test that this warning is triggered, as the database
# configuration would have to be altered. But here we test that
# the warning is certainly NOT triggered in a normal case.
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# This should not trigger a Warning
self.test_frame1.to_sql('CaseSensitive', self.conn)
# Verify some things
self.assertEqual(len(w), 0, "Warning triggered for writing a table")
def _get_index_columns(self, tbl_name):
from sqlalchemy.engine import reflection
insp = reflection.Inspector.from_engine(self.conn)
        ixs = insp.get_indexes(tbl_name)
ixs = [i['column_names'] for i in ixs]
return ixs
def test_sqlalchemy_type_mapping(self):
# Test Timestamp objects (no datetime64 because of timezone) (GH9085)
df = DataFrame({'time': to_datetime(['201412120154', '201412110254'],
utc=True)})
db = sql.SQLDatabase(self.conn)
table = sql.SQLTable("test_type", db, frame=df)
self.assertTrue(isinstance(table.table.c['time'].type, sqltypes.DateTime))
class TestSQLiteFallbackApi(_TestSQLApi):
"""
Test the public sqlite connection fallback API
"""
flavor = 'sqlite'
mode = 'fallback'
def connect(self, database=":memory:"):
return sqlite3.connect(database)
def test_sql_open_close(self):
# Test if the IO in the database still work if the connection closed
# between the writing and reading (as in many real situations).
with tm.ensure_clean() as name:
conn = self.connect(name)
sql.to_sql(self.test_frame3, "test_frame3_legacy", conn,
flavor="sqlite", index=False)
conn.close()
conn = self.connect(name)
result = sql.read_sql_query("SELECT * FROM test_frame3_legacy;",
conn)
conn.close()
tm.assert_frame_equal(self.test_frame3, result)
def test_read_sql_delegate(self):
iris_frame1 = sql.read_sql_query("SELECT * FROM iris", self.conn)
iris_frame2 = sql.read_sql("SELECT * FROM iris", self.conn)
tm.assert_frame_equal(iris_frame1, iris_frame2)
self.assertRaises(sql.DatabaseError, sql.read_sql, 'iris', self.conn)
def test_safe_names_warning(self):
# GH 6798
df = DataFrame([[1, 2], [3, 4]], columns=['a', 'b ']) # has a space
# warns on create table with spaces in names
with tm.assert_produces_warning():
sql.to_sql(df, "test_frame3_legacy", self.conn,
flavor="sqlite", index=False)
def test_get_schema2(self):
# without providing a connection object (available for backwards comp)
create_sql = sql.get_schema(self.test_frame1, 'test', 'sqlite')
self.assertTrue('CREATE' in create_sql)
def test_tquery(self):
with tm.assert_produces_warning(FutureWarning):
iris_results = sql.tquery("SELECT * FROM iris", con=self.conn)
row = iris_results[0]
tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
def test_uquery(self):
with tm.assert_produces_warning(FutureWarning):
rows = sql.uquery("SELECT * FROM iris LIMIT 1", con=self.conn)
self.assertEqual(rows, -1)
def _get_sqlite_column_type(self, schema, column):
for col in schema.split('\n'):
if col.split()[0].strip('""') == column:
return col.split()[1]
raise ValueError('Column %s not found' % (column))
def test_sqlite_type_mapping(self):
# Test Timestamp objects (no datetime64 because of timezone) (GH9085)
df = DataFrame({'time': to_datetime(['201412120154', '201412110254'],
utc=True)})
db = sql.SQLiteDatabase(self.conn, self.flavor)
table = sql.SQLiteTable("test_type", db, frame=df)
schema = table.sql_schema()
self.assertEqual(self._get_sqlite_column_type(schema, 'time'),
"TIMESTAMP")
#------------------------------------------------------------------------------
#--- Database flavor specific tests
class _TestSQLAlchemy(PandasSQLTest):
"""
Base class for testing the sqlalchemy backend.
Subclasses for specific database types are created below. Tests that
    deviate for each flavor are overridden there.
"""
flavor = None
@classmethod
def setUpClass(cls):
cls.setup_import()
cls.setup_driver()
# test connection
try:
conn = cls.connect()
conn.connect()
except sqlalchemy.exc.OperationalError:
msg = "{0} - can't connect to {1} server".format(cls, cls.flavor)
raise nose.SkipTest(msg)
def setUp(self):
self.setup_connect()
self._load_iris_data()
self._load_raw_sql()
self._load_test1_data()
@classmethod
def setup_import(cls):
# Skip this test if SQLAlchemy not available
if not SQLALCHEMY_INSTALLED:
raise nose.SkipTest('SQLAlchemy not installed')
@classmethod
def setup_driver(cls):
raise NotImplementedError()
@classmethod
def connect(cls):
raise NotImplementedError()
def setup_connect(self):
try:
self.conn = self.connect()
self.pandasSQL = sql.SQLDatabase(self.conn)
# to test if connection can be made:
self.conn.connect()
except sqlalchemy.exc.OperationalError:
raise nose.SkipTest("Can't connect to {0} server".format(self.flavor))
def tearDown(self):
raise NotImplementedError()
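    # unittest runs test methods alphabetically within a class; the leading
    # 'a' presumably makes this read test run before the others.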
def test_aread_sql(self):
self._read_sql_iris()
def test_read_sql_parameter(self):
self._read_sql_iris_parameter()
def test_read_sql_named_parameter(self):
self._read_sql_iris_named_parameter()
def test_to_sql(self):
self._to_sql()
def test_to_sql_empty(self):
self._to_sql_empty()
def test_to_sql_fail(self):
self._to_sql_fail()
def test_to_sql_replace(self):
self._to_sql_replace()
def test_to_sql_append(self):
self._to_sql_append()
def test_create_table(self):
temp_conn = self.connect()
temp_frame = DataFrame(
{'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
pandasSQL = sql.SQLDatabase(temp_conn)
pandasSQL.to_sql(temp_frame, 'temp_frame')
self.assertTrue(
temp_conn.has_table('temp_frame'), 'Table not written to DB')
def test_drop_table(self):
temp_conn = self.connect()
temp_frame = DataFrame(
{'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
pandasSQL = sql.SQLDatabase(temp_conn)
pandasSQL.to_sql(temp_frame, 'temp_frame')
self.assertTrue(
temp_conn.has_table('temp_frame'), 'Table not written to DB')
pandasSQL.drop_table('temp_frame')
self.assertFalse(
temp_conn.has_table('temp_frame'), 'Table not deleted from DB')
def test_roundtrip(self):
self._roundtrip()
def test_execute_sql(self):
self._execute_sql()
def test_read_table(self):
iris_frame = sql.read_sql_table("iris", con=self.conn)
self._check_iris_loaded_frame(iris_frame)
def test_read_table_columns(self):
iris_frame = sql.read_sql_table(
"iris", con=self.conn, columns=['SepalLength', 'SepalLength'])
tm.equalContents(
iris_frame.columns.values, ['SepalLength', 'SepalLength'])
def test_read_table_absent(self):
self.assertRaises(
ValueError, sql.read_sql_table, "this_doesnt_exist", con=self.conn)
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
"FloatCol loaded with incorrect type")
self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
"IntCol loaded with incorrect type")
self.assertTrue(issubclass(df.BoolCol.dtype.type, np.bool_),
"BoolCol loaded with incorrect type")
# Int column with NA values stays as float
self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
"IntColWithNull loaded with incorrect type")
# Bool column with NA values becomes object
self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.object),
"BoolColWithNull loaded with incorrect type")
def test_bigint(self):
# int64 should be converted to BigInteger, GH7433
df = DataFrame(data={'i64':[2**62]})
df.to_sql('test_bigint', self.conn, index=False)
result = sql.read_sql_table('test_bigint', self.conn)
tm.assert_frame_equal(df, result)
def test_default_date_load(self):
df = sql.read_sql_table("types_test_data", self.conn)
# IMPORTANT - sqlite has no native date type, so shouldn't parse, but
# MySQL SHOULD be converted.
self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
def test_date_parsing(self):
# No Parsing
df = sql.read_sql_table("types_test_data", self.conn)
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates=['DateCol'])
self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates={'DateCol': '%Y-%m-%d %H:%M:%S'})
self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
df = sql.read_sql_table("types_test_data", self.conn, parse_dates={
'DateCol': {'format': '%Y-%m-%d %H:%M:%S'}})
self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
df = sql.read_sql_table(
"types_test_data", self.conn, parse_dates=['IntDateCol'])
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
df = sql.read_sql_table(
"types_test_data", self.conn, parse_dates={'IntDateCol': 's'})
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
df = sql.read_sql_table(
"types_test_data", self.conn, parse_dates={'IntDateCol': {'unit': 's'}})
self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
"IntDateCol loaded with incorrect type")
def test_datetime(self):
df = DataFrame({'A': date_range('2013-01-01 09:00:00', periods=3),
'B': np.arange(3.0)})
df.to_sql('test_datetime', self.conn)
# with read_table -> type information from schema used
result = sql.read_sql_table('test_datetime', self.conn)
result = result.drop('index', axis=1)
tm.assert_frame_equal(result, df)
        # with read_sql -> no type information -> sqlite has no native
        # datetime type, so values come back as strings
result = sql.read_sql_query('SELECT * FROM test_datetime', self.conn)
result = result.drop('index', axis=1)
if self.flavor == 'sqlite':
self.assertTrue(isinstance(result.loc[0, 'A'], string_types))
result['A'] = to_datetime(result['A'])
tm.assert_frame_equal(result, df)
else:
tm.assert_frame_equal(result, df)
def test_datetime_NaT(self):
df = DataFrame({'A': date_range('2013-01-01 09:00:00', periods=3),
'B': np.arange(3.0)})
df.loc[1, 'A'] = np.nan
df.to_sql('test_datetime', self.conn, index=False)
# with read_table -> type information from schema used
result = sql.read_sql_table('test_datetime', self.conn)
tm.assert_frame_equal(result, df)
# with read_sql -> no type information; sqlite has no native datetime
# type, so values come back as strings
result = sql.read_sql_query('SELECT * FROM test_datetime', self.conn)
if self.flavor == 'sqlite':
self.assertTrue(isinstance(result.loc[0, 'A'], string_types))
result['A'] = to_datetime(result['A'], coerce=True)
tm.assert_frame_equal(result, df)
else:
tm.assert_frame_equal(result, df)
def test_datetime_date(self):
# test support for datetime.date
df = DataFrame([date(2014, 1, 1), date(2014, 1, 2)], columns=["a"])
df.to_sql('test_date', self.conn, index=False)
res = read_sql_table('test_date', self.conn)
# comes back as datetime64
tm.assert_series_equal(res['a'], to_datetime(df['a']))
def test_datetime_time(self):
# test support for datetime.time
df = DataFrame([time(9, 0, 0), time(9, 1, 30)], columns=["a"])
df.to_sql('test_time', self.conn, index=False)
res = read_sql_table('test_time', self.conn)
tm.assert_frame_equal(res, df)
def test_mixed_dtype_insert(self):
# see GH6509
s1 = Series(2**25 + 1, dtype=np.int32)
s2 = Series(0.0, dtype=np.float32)
df = DataFrame({'s1': s1, 's2': s2})
# write and read again
df.to_sql("test_read_write", self.conn, index=False)
df2 = sql.read_sql_table("test_read_write", self.conn)
tm.assert_frame_equal(df, df2, check_dtype=False, check_exact=True)
def test_nan_numeric(self):
# NaNs in numeric float column
df = DataFrame({'A':[0, 1, 2], 'B':[0.2, np.nan, 5.6]})
df.to_sql('test_nan', self.conn, index=False)
# with read_table
result = sql.read_sql_table('test_nan', self.conn)
tm.assert_frame_equal(result, df)
# with read_sql
result = sql.read_sql_query('SELECT * FROM test_nan', self.conn)
tm.assert_frame_equal(result, df)
def test_nan_fullcolumn(self):
# full NaN column (numeric float column)
df = DataFrame({'A':[0, 1, 2], 'B':[np.nan, np.nan, np.nan]})
df.to_sql('test_nan', self.conn, index=False)
# with read_table
result = sql.read_sql_table('test_nan', self.conn)
tm.assert_frame_equal(result, df)
# with read_sql -> no type information from table -> stays None
df['B'] = df['B'].astype('object')
df['B'] = None
result = sql.read_sql_query('SELECT * FROM test_nan', self.conn)
tm.assert_frame_equal(result, df)
def test_nan_string(self):
# NaNs in string column
df = DataFrame({'A':[0, 1, 2], 'B':['a', 'b', np.nan]})
df.to_sql('test_nan', self.conn, index=False)
# NaNs are coming back as None
df.loc[2, 'B'] = None
# with read_table
result = sql.read_sql_table('test_nan', self.conn)
tm.assert_frame_equal(result, df)
# with read_sql
result = sql.read_sql_query('SELECT * FROM test_nan', self.conn)
tm.assert_frame_equal(result, df)
def _get_index_columns(self, tbl_name):
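# Reflect index metadata through SQLAlchemy's Inspector; returns a list
# of column-name lists, one entry per index on the table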
from sqlalchemy.engine import reflection
insp = reflection.Inspector.from_engine(self.conn)
ixs = insp.get_indexes(tbl_name)
ixs = [i['column_names'] for i in ixs]
return ixs
def test_to_sql_save_index(self):
self._to_sql_save_index()
def test_transactions(self):
self._transaction_test()
def test_get_schema_create_table(self):
# Use a dataframe without a bool column, since MySQL converts bool to
# TINYINT (which read_sql_table returns as an int and causes a dtype
# mismatch)
self._load_test3_data()
tbl = 'test_get_schema_create_table'
create_sql = sql.get_schema(self.test_frame3, tbl, con=self.conn)
blank_test_df = self.test_frame3.iloc[:0]
self.drop_table(tbl)
self.conn.execute(create_sql)
returned_df = sql.read_sql_table(tbl, self.conn)
tm.assert_frame_equal(returned_df, blank_test_df)
self.drop_table(tbl)
def test_dtype(self):
cols = ['A', 'B']
data = [(0.8, True),
(0.9, None)]
df = DataFrame(data, columns=cols)
df.to_sql('dtype_test', self.conn)
df.to_sql('dtype_test2', self.conn, dtype={'B': sqlalchemy.TEXT})
meta = sqlalchemy.schema.MetaData(bind=self.conn)
meta.reflect()
sqltype = meta.tables['dtype_test2'].columns['B'].type
self.assertTrue(isinstance(sqltype, sqlalchemy.TEXT))
self.assertRaises(ValueError, df.to_sql,
'error', self.conn, dtype={'B': str})
# GH9083
df.to_sql('dtype_test3', self.conn, dtype={'B': sqlalchemy.String(10)})
meta.reflect()
sqltype = meta.tables['dtype_test3'].columns['B'].type
self.assertTrue(isinstance(sqltype, sqlalchemy.String))
self.assertEqual(sqltype.length, 10)
def test_notnull_dtype(self):
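# Columns containing None should still get a proper SQL type inferred
# from their non-null values instead of falling back to TEXT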
cols = {'Bool': Series([True,None]),
'Date': Series([datetime(2012, 5, 1), None]),
'Int' : Series([1, None], dtype='object'),
'Float': Series([1.1, None])
}
df = DataFrame(cols)
tbl = 'notnull_dtype_test'
df.to_sql(tbl, self.conn)
returned_df = sql.read_sql_table(tbl, self.conn)
meta = sqlalchemy.schema.MetaData(bind=self.conn)
meta.reflect()
if self.flavor == 'mysql':
my_type = sqltypes.Integer
else:
my_type = sqltypes.Boolean
col_dict = meta.tables[tbl].columns
self.assertTrue(isinstance(col_dict['Bool'].type, my_type))
self.assertTrue(isinstance(col_dict['Date'].type, sqltypes.DateTime))
self.assertTrue(isinstance(col_dict['Int'].type, sqltypes.Integer))
self.assertTrue(isinstance(col_dict['Float'].type, sqltypes.Float))
def test_double_precision(self):
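# Float(precision=23) is expected to map 'f64_as_f32' to a
# single-precision SQL float, matching the native float32 column's type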
V = 1.23456789101112131415
df = DataFrame({'f32':Series([V,], dtype='float32'),
'f64':Series([V,], dtype='float64'),
'f64_as_f32':Series([V,], dtype='float64'),
'i32':Series([5,], dtype='int32'),
'i64':Series([5,], dtype='int64'),
})
df.to_sql('test_dtypes', self.conn, index=False, if_exists='replace',
dtype={'f64_as_f32':sqlalchemy.Float(precision=23)})
res = sql.read_sql_table('test_dtypes', self.conn)
# check precision of float64
self.assertEqual(np.round(df['f64'].iloc[0],14),
np.round(res['f64'].iloc[0],14))
# check sql types
meta = sqlalchemy.schema.MetaData(bind=self.conn)
meta.reflect()
col_dict = meta.tables['test_dtypes'].columns
self.assertEqual(str(col_dict['f32'].type),
str(col_dict['f64_as_f32'].type))
self.assertTrue(isinstance(col_dict['f32'].type, sqltypes.Float))
self.assertTrue(isinstance(col_dict['f64'].type, sqltypes.Float))
self.assertTrue(isinstance(col_dict['i32'].type, sqltypes.Integer))
self.assertTrue(isinstance(col_dict['i64'].type, sqltypes.BigInteger))
class TestSQLiteAlchemy(_TestSQLAlchemy):
"""
Test the sqlalchemy backend against an in-memory sqlite database.
"""
flavor = 'sqlite'
@classmethod
def connect(cls):
return sqlalchemy.create_engine('sqlite:///:memory:')
@classmethod
def setup_driver(cls):
# sqlite3 is built-in
cls.driver = None
def tearDown(self):
# in-memory database, so tables need not be removed explicitly
pass
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
"FloatCol loaded with incorrect type")
self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
"IntCol loaded with incorrect type")
# sqlite has no boolean type, so integer type is returned
self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
"BoolCol loaded with incorrect type")
# Int column with NA values stays as float
self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
"IntColWithNull loaded with incorrect type")
# Non-native Bool column with NA values stays as float
self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
"BoolColWithNull loaded with incorrect type")
def test_default_date_load(self):
df = sql.read_sql_table("types_test_data", self.conn)
# IMPORTANT - sqlite has no native date type, so dates should not be parsed
self.assertFalse(issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
def test_bigint_warning(self):
# test that no warning is raised for BIGINT (used to support int64) (GH7433)
df = DataFrame({'a':[1,2]}, dtype='int64')
df.to_sql('test_bigintwarning', self.conn, index=False)
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
sql.read_sql_table('test_bigintwarning', self.conn)
self.assertEqual(len(w), 0, "Warning triggered for other table")
class TestMySQLAlchemy(_TestSQLAlchemy):
"""
Test the sqlalchemy backend against a MySQL database.
"""
flavor = 'mysql'
@classmethod
def connect(cls):
url = 'mysql+{driver}://root@localhost/pandas_nosetest'
return sqlalchemy.create_engine(url.format(driver=cls.driver))
@classmethod
def setup_driver(cls):
try:
import pymysql
cls.driver = 'pymysql'
except ImportError:
raise nose.SkipTest('pymysql not installed')
def tearDown(self):
c = self.conn.execute('SHOW TABLES')
for table in c.fetchall():
self.conn.execute('DROP TABLE %s' % table[0])
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
"FloatCol loaded with incorrect type")
self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
"IntCol loaded with incorrect type")
# MySQL has no real BOOL type (it's an alias for TINYINT)
self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
"BoolCol loaded with incorrect type")
# Int column with NA values stays as float
self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
"IntColWithNull loaded with incorrect type")
# Bool column with NA behaves like an int column with NA values => becomes float
self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
"BoolColWithNull loaded with incorrect type")
def test_read_procedure(self):
# see GH7324. Although it is more of an API test, it is added to the
# mysql tests as sqlite does not have stored procedures
df = DataFrame({'a': [1, 2, 3], 'b':[0.1, 0.2, 0.3]})
df.to_sql('test_procedure', self.conn, index=False)
proc = """DROP PROCEDURE IF EXISTS get_testdb;
CREATE PROCEDURE get_testdb ()
BEGIN
SELECT * FROM test_procedure;
END"""
connection = self.conn.connect()
trans = connection.begin()
try:
r1 = connection.execute(proc)
trans.commit()
except:
trans.rollback()
raise
res1 = sql.read_sql_query("CALL get_testdb();", self.conn)
tm.assert_frame_equal(df, res1)
# test delegation to read_sql_query
res2 = sql.read_sql("CALL get_testdb();", self.conn)
tm.assert_frame_equal(df, res2)
class TestPostgreSQLAlchemy(_TestSQLAlchemy):
"""
Test the sqlalchemy backend against a PostgreSQL database.
"""
flavor = 'postgresql'
@classmethod
def connect(cls):
url = 'postgresql+{driver}://postgres@localhost/pandas_nosetest'
return sqlalchemy.create_engine(url.format(driver=cls.driver))
@classmethod
def setup_driver(cls):
try:
import psycopg2
cls.driver = 'psycopg2'
except ImportError:
raise nose.SkipTest('psycopg2 not installed')
def tearDown(self):
c = self.conn.execute(
"SELECT table_name FROM information_schema.tables"
" WHERE table_schema = 'public'")
for table in c.fetchall():
self.conn.execute("DROP TABLE %s" % table[0])
def test_schema_support(self):
# only test this for postgresql (schemas are not supported in mysql/sqlite)
df = DataFrame({'col1':[1, 2], 'col2':[0.1, 0.2], 'col3':['a', 'n']})
# create a schema
self.conn.execute("DROP SCHEMA IF EXISTS other CASCADE;")
self.conn.execute("CREATE SCHEMA other;")
# write dataframe to different schemas
df.to_sql('test_schema_public', self.conn, index=False)
df.to_sql('test_schema_public_explicit', self.conn, index=False,
schema='public')
df.to_sql('test_schema_other', self.conn, index=False, schema='other')
# read dataframes back in
res1 = sql.read_sql_table('test_schema_public', self.conn)
tm.assert_frame_equal(df, res1)
res2 = sql.read_sql_table('test_schema_public_explicit', self.conn)
tm.assert_frame_equal(df, res2)
res3 = sql.read_sql_table('test_schema_public_explicit', self.conn,
schema='public')
tm.assert_frame_equal(df, res3)
res4 = sql.read_sql_table('test_schema_other', self.conn,
schema='other')
tm.assert_frame_equal(df, res4)
self.assertRaises(ValueError, sql.read_sql_table, 'test_schema_other',
self.conn, schema='public')
## different if_exists options
# create a schema
self.conn.execute("DROP SCHEMA IF EXISTS other CASCADE;")
self.conn.execute("CREATE SCHEMA other;")
# write dataframe with different if_exists options
df.to_sql('test_schema_other', self.conn, schema='other', index=False)
df.to_sql('test_schema_other', self.conn, schema='other', index=False,
if_exists='replace')
df.to_sql('test_schema_other', self.conn, schema='other', index=False,
if_exists='append')
res = sql.read_sql_table('test_schema_other', self.conn, schema='other')
tm.assert_frame_equal(concat([df, df], ignore_index=True), res)
## specifying schema in user-provided meta
engine2 = self.connect()
meta = sqlalchemy.MetaData(engine2, schema='other')
pdsql = sql.SQLDatabase(engine2, meta=meta)
pdsql.to_sql(df, 'test_schema_other2', index=False)
pdsql.to_sql(df, 'test_schema_other2', index=False, if_exists='replace')
pdsql.to_sql(df, 'test_schema_other2', index=False, if_exists='append')
res1 = sql.read_sql_table('test_schema_other2', self.conn, schema='other')
res2 = pdsql.read_table('test_schema_other2')
tm.assert_frame_equal(res1, res2)
def test_datetime_with_time_zone(self):
# Test that when reading a date column carrying timezone information,
# the timezone is converted to UTC and the column comes back as a
# np.datetime64 (GH #7139)
df = sql.read_sql_table("types_test_data", self.conn)
self.assertTrue(issubclass(df.DateColWithTz.dtype.type, np.datetime64),
"DateColWithTz loaded with incorrect type")
# "2000-01-01 00:00:00-08:00" should convert to "2000-01-01 08:00:00"
self.assertEqual(df.DateColWithTz[0], Timestamp('2000-01-01 08:00:00'))
# "2000-06-01 00:00:00-07:00" should convert to "2000-06-01 07:00:00"
self.assertEqual(df.DateColWithTz[1], Timestamp('2000-06-01 07:00:00'))
#------------------------------------------------------------------------------
#--- Test Sqlite / MySQL fallback
class TestSQLiteFallback(PandasSQLTest):
"""
Test the fallback mode against an in-memory sqlite database.
"""
flavor = 'sqlite'
@classmethod
def connect(cls):
return sqlite3.connect(':memory:')
def drop_table(self, table_name):
cur = self.conn.cursor()
cur.execute("DROP TABLE IF EXISTS %s" % table_name)
self.conn.commit()
def setUp(self):
self.conn = self.connect()
self.pandasSQL = sql.SQLiteDatabase(self.conn, 'sqlite')
self._load_iris_data()
self._load_test1_data()
def test_invalid_flavor(self):
self.assertRaises(
NotImplementedError, sql.SQLiteDatabase, self.conn, 'oracle')
def test_read_sql(self):
self._read_sql_iris()
def test_read_sql_parameter(self):
self._read_sql_iris_parameter()
def test_read_sql_named_parameter(self):
self._read_sql_iris_named_parameter()
def test_to_sql(self):
self._to_sql()
def test_to_sql_empty(self):
self._to_sql_empty()
def test_to_sql_fail(self):
self._to_sql_fail()
def test_to_sql_replace(self):
self._to_sql_replace()
def test_to_sql_append(self):
self._to_sql_append()
def test_create_and_drop_table(self):
temp_frame = DataFrame(
{'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
self.pandasSQL.to_sql(temp_frame, 'drop_test_frame')
self.assertTrue(self.pandasSQL.has_table('drop_test_frame'),
'Table not written to DB')
self.pandasSQL.drop_table('drop_test_frame')
self.assertFalse(self.pandasSQL.has_table('drop_test_frame'),
'Table not deleted from DB')
def test_roundtrip(self):
self._roundtrip()
def test_execute_sql(self):
self._execute_sql()
def test_datetime_date(self):
# test support for datetime.date
df = DataFrame([date(2014, 1, 1), date(2014, 1, 2)], columns=["a"])
df.to_sql('test_date', self.conn, index=False, flavor=self.flavor)
res = read_sql_query('SELECT * FROM test_date', self.conn)
if self.flavor == 'sqlite':
# comes back as strings
tm.assert_frame_equal(res, df.astype(str))
elif self.flavor == 'mysql':
tm.assert_frame_equal(res, df)
def test_datetime_time(self):
# test support for datetime.time
df = DataFrame([time(9, 0, 0), time(9, 1, 30)], columns=["a"])
# test that it raises an error rather than failing silently (GH8341)
if self.flavor == 'sqlite':
self.assertRaises(sqlite3.InterfaceError, sql.to_sql, df,
'test_time', self.conn)
def _get_index_columns(self, tbl_name):
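# Look up the table's index names in sqlite_master, then expand each
# index into its column list via PRAGMA index_info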
ixs = sql.read_sql_query(
"SELECT * FROM sqlite_master WHERE type = 'index' " +
"AND tbl_name = '%s'" % tbl_name, self.conn)
ix_cols = []
for ix_name in ixs.name:
ix_info = sql.read_sql_query(
"PRAGMA index_info(%s)" % ix_name, self.conn)
ix_cols.append(ix_info.name.tolist())
return ix_cols
def test_to_sql_save_index(self):
self._to_sql_save_index()
def test_transactions(self):
self._transaction_test()
def _get_sqlite_column_type(self, table, column):
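# PRAGMA table_info yields one row per column:
# (cid, name, type, notnull, dflt_value, pk)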
recs = self.conn.execute('PRAGMA table_info(%s)' % table)
for cid, name, ctype, not_null, default, pk in recs:
if name == column:
return ctype
raise ValueError('Table %s, column %s not found' % (table, column))
def test_dtype(self):
if self.flavor == 'mysql':
raise nose.SkipTest('Not applicable to MySQL legacy')
cols = ['A', 'B']
data = [(0.8, True),
(0.9, None)]
df = DataFrame(data, columns=cols)
df.to_sql('dtype_test', self.conn)
df.to_sql('dtype_test2', self.conn, dtype={'B': 'STRING'})
# sqlite stores Boolean values as INTEGER
self.assertEqual(self._get_sqlite_column_type('dtype_test', 'B'), 'INTEGER')
self.assertEqual(self._get_sqlite_column_type('dtype_test2', 'B'), 'STRING')
self.assertRaises(ValueError, df.to_sql,
'error', self.conn, dtype={'B': bool})
def test_notnull_dtype(self):
if self.flavor == 'mysql':
raise nose.SkipTest('Not applicable to MySQL legacy')
cols = {'Bool': Series([True,None]),
'Date': Series([datetime(2012, 5, 1), None]),
'Int' : Series([1, None], dtype='object'),
'Float': Series([1.1, None])
}
df = DataFrame(cols)
tbl = 'notnull_dtype_test'
df.to_sql(tbl, self.conn)
self.assertEqual(self._get_sqlite_column_type(tbl, 'Bool'), 'INTEGER')
self.assertEqual(self._get_sqlite_column_type(tbl, 'Date'), 'TIMESTAMP')
self.assertEqual(self._get_sqlite_column_type(tbl, 'Int'), 'INTEGER')
self.assertEqual(self._get_sqlite_column_type(tbl, 'Float'), 'REAL')
def test_illegal_names(self):
# For sqlite, these should work fine
df = DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
# Raise error on blank
self.assertRaises(ValueError, df.to_sql, "", self.conn,
flavor=self.flavor)
for ndx, weird_name in enumerate(['test_weird_name]','test_weird_name[',
'test_weird_name`','test_weird_name"', 'test_weird_name\'',
'_b.test_weird_name_01-30', '"_b.test_weird_name_01-30"',
'12345','12345blah']):
df.to_sql(weird_name, self.conn, flavor=self.flavor)
sql.table_exists(weird_name, self.conn)
df2 = DataFrame([[1, 2], [3, 4]], columns=['a', weird_name])
c_tbl = 'test_weird_col_name%d'%ndx
df2.to_sql(c_tbl, self.conn, flavor=self.flavor)
sql.table_exists(c_tbl, self.conn)
class TestMySQLLegacy(TestSQLiteFallback):
"""
Test the legacy mode against a MySQL database.
"""
flavor = 'mysql'
@classmethod
def setUpClass(cls):
cls.setup_driver()
# test connection
try:
cls.connect()
except cls.driver.err.OperationalError:
raise nose.SkipTest("{0} - can't connect to MySQL server".format(cls))
@classmethod
def setup_driver(cls):
try:
import pymysql
cls.driver = pymysql
except ImportError:
raise nose.SkipTest('pymysql not installed')
@classmethod
def connect(cls):
return cls.driver.connect(host='127.0.0.1', user='root', passwd='', db='pandas_nosetest')
def drop_table(self, table_name):
cur = self.conn.cursor()
cur.execute("DROP TABLE IF EXISTS %s" % table_name)
self.conn.commit()
def _count_rows(self, table_name):
cur = self._get_exec()
cur.execute(
"SELECT count(*) AS count_1 FROM %s" % table_name)
rows = cur.fetchall()
return rows[0][0]
def setUp(self):
try:
self.conn = self.connect()
except self.driver.err.OperationalError:
raise nose.SkipTest("Can't connect to MySQL server")
self.pandasSQL = sql.SQLiteDatabase(self.conn, 'mysql')
self._load_iris_data()
self._load_test1_data()
def tearDown(self):
c = self.conn.cursor()
c.execute('SHOW TABLES')
for table in c.fetchall():
c.execute('DROP TABLE %s' % table[0])
self.conn.commit()
self.conn.close()
def test_a_deprecation(self):
with tm.assert_produces_warning(FutureWarning):
sql.to_sql(self.test_frame1, 'test_frame1', self.conn,
flavor='mysql')
self.assertTrue(
sql.has_table('test_frame1', self.conn, flavor='mysql'),
'Table not written to DB')
def _get_index_columns(self, tbl_name):
ixs = sql.read_sql_query(
"SHOW INDEX IN %s" % tbl_name, self.conn)
ix_cols = {}
for ix_name, ix_col in zip(ixs.Key_name, ixs.Column_name):
if ix_name not in ix_cols:
ix_cols[ix_name] = []
ix_cols[ix_name].append(ix_col)
return list(ix_cols.values())
def test_to_sql_save_index(self):
self._to_sql_save_index()
def test_illegal_names(self):
df = DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
# These tables and columns should be ok
for ndx, ok_name in enumerate(['99beginswithnumber','12345']):
df.to_sql(ok_name, self.conn, flavor=self.flavor, index=False,
if_exists='replace')
self.conn.cursor().execute("DROP TABLE `%s`" % ok_name)
self.conn.commit()
df2 = DataFrame([[1, 2], [3, 4]], columns=['a', ok_name])
c_tbl = 'test_ok_col_name%d'%ndx
df2.to_sql(c_tbl, self.conn, flavor=self.flavor, index=False,
if_exists='replace')
self.conn.cursor().execute("DROP TABLE `%s`" % c_tbl)
self.conn.commit()
# For MySQL, these should raise ValueError
for ndx, illegal_name in enumerate(['test_illegal_name]','test_illegal_name[',
'test_illegal_name`','test_illegal_name"', 'test_illegal_name\'', '']):
self.assertRaises(ValueError, df.to_sql, illegal_name, self.conn,
flavor=self.flavor, index=False)
df2 = DataFrame([[1, 2], [3, 4]], columns=['a', illegal_name])
c_tbl = 'test_illegal_col_name%d'%ndx
self.assertRaises(ValueError, df2.to_sql, c_tbl,
self.conn, flavor=self.flavor, index=False)
#------------------------------------------------------------------------------
#--- Old tests from 0.13.1 (before refactor using sqlalchemy)
_formatters = {
datetime: lambda dt: "'%s'" % date_format(dt),
str: lambda x: "'%s'" % x,
np.str_: lambda x: "'%s'" % x,
compat.text_type: lambda x: "'%s'" % x,
compat.binary_type: lambda x: "'%s'" % x,
float: lambda x: "%.8f" % x,
int: lambda x: "%s" % x,
type(None): lambda x: "NULL",
np.float64: lambda x: "%.10f" % x,
bool: lambda x: "'%s'" % x,
}
def format_query(sql, *args):
"""
"""
processed_args = []
for arg in args:
if isinstance(arg, float) and isnull(arg):
arg = None
formatter = _formatters[type(arg)]
processed_args.append(formatter(arg))
return sql % tuple(processed_args)
def _skip_if_no_pymysql():
try:
import pymysql
except ImportError:
raise nose.SkipTest('pymysql not installed, skipping')
class TestXSQLite(tm.TestCase):
def setUp(self):
self.db = sqlite3.connect(':memory:')
def test_basic(self):
frame = tm.makeTimeDataFrame()
self._check_roundtrip(frame)
def test_write_row_by_row(self):
frame = tm.makeTimeDataFrame()
frame.ix[0, 0] = np.nan
create_sql = sql.get_schema(frame, 'test', 'sqlite')
cur = self.db.cursor()
cur.execute(create_sql)
cur = self.db.cursor()
ins = "INSERT INTO test VALUES (%s, %s, %s, %s)"
for idx, row in frame.iterrows():
fmt_sql = format_query(ins, *row)
sql.tquery(fmt_sql, cur=cur)
self.db.commit()
result = sql.read_frame("select * from test", con=self.db)
result.index = frame.index
tm.assert_frame_equal(result, frame)
def test_execute(self):
frame = tm.makeTimeDataFrame()
create_sql = sql.get_schema(frame, 'test', 'sqlite')
cur = self.db.cursor()
cur.execute(create_sql)
ins = "INSERT INTO test VALUES (?, ?, ?, ?)"
row = frame.ix[0]
sql.execute(ins, self.db, params=tuple(row))
self.db.commit()
result = sql.read_frame("select * from test", self.db)
result.index = frame.index[:1]
tm.assert_frame_equal(result, frame[:1])
def test_schema(self):
frame = tm.makeTimeDataFrame()
create_sql = sql.get_schema(frame, 'test', 'sqlite')
lines = create_sql.splitlines()
for l in lines:
tokens = l.split(' ')
if len(tokens) == 2 and tokens[0] == 'A':
self.assertTrue(tokens[1] == 'DATETIME')
frame = tm.makeTimeDataFrame()
create_sql = sql.get_schema(frame, 'test', 'sqlite', keys=['A', 'B'],)
lines = create_sql.splitlines()
self.assertTrue('PRIMARY KEY ("A","B")' in create_sql)
cur = self.db.cursor()
cur.execute(create_sql)
def test_execute_fail(self):
create_sql = """
CREATE TABLE test
(
a TEXT,
b TEXT,
c REAL,
PRIMARY KEY (a, b)
);
"""
cur = self.db.cursor()
cur.execute(create_sql)
sql.execute('INSERT INTO test VALUES("foo", "bar", 1.234)', self.db)
sql.execute('INSERT INTO test VALUES("foo", "baz", 2.567)', self.db)
try:
sys.stdout = StringIO()
self.assertRaises(Exception, sql.execute,
'INSERT INTO test VALUES("foo", "bar", 7)',
self.db)
finally:
sys.stdout = sys.__stdout__
def test_execute_closed_connection(self):
create_sql = """
CREATE TABLE test
(
a TEXT,
b TEXT,
c REAL,
PRIMARY KEY (a, b)
);
"""
cur = self.db.cursor()
cur.execute(create_sql)
sql.execute('INSERT INTO test VALUES("foo", "bar", 1.234)', self.db)
self.db.close()
try:
sys.stdout = StringIO()
self.assertRaises(Exception, sql.tquery, "select * from test",
con=self.db)
finally:
sys.stdout = sys.__stdout__
def test_na_roundtrip(self):
pass
def _check_roundtrip(self, frame):
sql.write_frame(frame, name='test_table', con=self.db)
result = sql.read_frame("select * from test_table", self.db)
# HACK! Change this once indexes are handled properly.
result.index = frame.index
expected = frame
tm.assert_frame_equal(result, expected)
frame['txt'] = ['a'] * len(frame)
frame2 = frame.copy()
frame2['Idx'] = Index(lrange(len(frame2))) + 10
sql.write_frame(frame2, name='test_table2', con=self.db)
result = sql.read_frame("select * from test_table2", self.db,
index_col='Idx')
expected = frame.copy()
expected.index = Index(lrange(len(frame2))) + 10
expected.index.name = 'Idx'
tm.assert_frame_equal(expected, result)
def test_tquery(self):
frame = tm.makeTimeDataFrame()
sql.write_frame(frame, name='test_table', con=self.db)
result = sql.tquery("select A from test_table", self.db)
expected = Series(frame.A.values, frame.index)  # no name, to match tquery output
result = Series(result, frame.index)
tm.assert_series_equal(result, expected)
try:
sys.stdout = StringIO()
self.assertRaises(sql.DatabaseError, sql.tquery,
'select * from blah', con=self.db)
self.assertRaises(sql.DatabaseError, sql.tquery,
'select * from blah', con=self.db, retry=True)
finally:
sys.stdout = sys.__stdout__
def test_uquery(self):
frame = tm.makeTimeDataFrame()
sql.write_frame(frame, name='test_table', con=self.db)
stmt = 'INSERT INTO test_table VALUES(2.314, -123.1, 1.234, 2.3)'
self.assertEqual(sql.uquery(stmt, con=self.db), 1)
try:
sys.stdout = StringIO()
self.assertRaises(sql.DatabaseError, sql.tquery,
'insert into blah values (1)', con=self.db)
self.assertRaises(sql.DatabaseError, sql.tquery,
'insert into blah values (1)', con=self.db,
retry=True)
finally:
sys.stdout = sys.__stdout__
def test_keyword_as_column_names(self):
'''
Ensure an SQL keyword (here "From") can be used as a column name.
'''
df = DataFrame({'From': np.ones(5)})
sql.write_frame(df, con=self.db, name='testkeywords')
def test_onecolumn_of_integer(self):
# GH 3628
# a column_of_integers dataframe should transfer well to sql
mono_df = DataFrame([1, 2], columns=['c0'])
sql.write_frame(mono_df, con=self.db, name='mono_df')
# computing the sum via sql
con_x = self.db
the_sum = sum([my_c0[0] for my_c0 in con_x.execute("select * from mono_df")])
# it should not fail, and gives 3 ( Issue #3628 )
self.assertEqual(the_sum, 3)
result = sql.read_frame("select * from mono_df", con_x)
tm.assert_frame_equal(result, mono_df)
def test_if_exists(self):
df_if_exists_1 = DataFrame({'col1': [1, 2], 'col2': ['A', 'B']})
df_if_exists_2 = DataFrame({'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']})
table_name = 'table_if_exists'
sql_select = "SELECT * FROM %s" % table_name
def clean_up(test_table_to_drop):
"""
Drops tables created from individual tests
so no dependencies arise from sequential tests
"""
if sql.table_exists(test_table_to_drop, self.db, flavor='sqlite'):
cur = self.db.cursor()
cur.execute("DROP TABLE %s" % test_table_to_drop)
cur.close()
# test if invalid value for if_exists raises appropriate error
self.assertRaises(ValueError,
sql.write_frame,
frame=df_if_exists_1,
con=self.db,
name=table_name,
flavor='sqlite',
if_exists='notvalidvalue')
clean_up(table_name)
# test if_exists='fail'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='sqlite', if_exists='fail')
self.assertRaises(ValueError,
sql.write_frame,
frame=df_if_exists_1,
con=self.db,
name=table_name,
flavor='sqlite',
if_exists='fail')
# test if_exists='replace'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='sqlite', if_exists='replace')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B')])
sql.write_frame(frame=df_if_exists_2, con=self.db, name=table_name,
flavor='sqlite', if_exists='replace')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
# test if_exists='append'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='sqlite', if_exists='fail')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B')])
sql.write_frame(frame=df_if_exists_2, con=self.db, name=table_name,
flavor='sqlite', if_exists='append')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
class TestXMySQL(tm.TestCase):
@classmethod
def setUpClass(cls):
_skip_if_no_pymysql()
# test connection
import pymysql
try:
# Try Travis defaults.
# No real user should allow root access with a blank password.
pymysql.connect(host='localhost', user='root', passwd='',
db='pandas_nosetest')
except:
pass
else:
return
try:
pymysql.connect(read_default_group='pandas')
except pymysql.ProgrammingError as e:
raise nose.SkipTest(
"Create a group of connection parameters under the heading "
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
except pymysql.Error as e:
raise nose.SkipTest(
"Cannot connect to database. "
"Create a group of connection parameters under the heading "
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
def setUp(self):
_skip_if_no_pymysql()
import pymysql
try:
# Try Travis defaults.
# No real user should allow root access with a blank password.
self.db = pymysql.connect(host='localhost', user='root', passwd='',
db='pandas_nosetest')
except:
pass
else:
return
try:
self.db = pymysql.connect(read_default_group='pandas')
except pymysql.ProgrammingError as e:
raise nose.SkipTest(
"Create a group of connection parameters under the heading "
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
except pymysql.Error as e:
raise nose.SkipTest(
"Cannot connect to database. "
"Create a group of connection parameters under the heading "
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
def tearDown(self):
from pymysql.err import Error
try:
self.db.close()
except Error:
pass
def test_basic(self):
_skip_if_no_pymysql()
frame = tm.makeTimeDataFrame()
self._check_roundtrip(frame)
def test_write_row_by_row(self):
_skip_if_no_pymysql()
frame = tm.makeTimeDataFrame()
frame.ix[0, 0] = np.nan
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = sql.get_schema(frame, 'test', 'mysql')
cur = self.db.cursor()
cur.execute(drop_sql)
cur.execute(create_sql)
ins = "INSERT INTO test VALUES (%s, %s, %s, %s)"
for idx, row in frame.iterrows():
fmt_sql = format_query(ins, *row)
sql.tquery(fmt_sql, cur=cur)
self.db.commit()
result = sql.read_frame("select * from test", con=self.db)
result.index = frame.index
tm.assert_frame_equal(result, frame)
def test_execute(self):
_skip_if_no_pymysql()
frame = tm.makeTimeDataFrame()
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = sql.get_schema(frame, 'test', 'mysql')
cur = self.db.cursor()
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "Unknown table.*")
cur.execute(drop_sql)
cur.execute(create_sql)
ins = "INSERT INTO test VALUES (%s, %s, %s, %s)"
row = frame.ix[0].values.tolist()
sql.execute(ins, self.db, params=tuple(row))
self.db.commit()
result = sql.read_frame("select * from test", self.db)
result.index = frame.index[:1]
tm.assert_frame_equal(result, frame[:1])
def test_schema(self):
_skip_if_no_pymysql()
frame = tm.makeTimeDataFrame()
create_sql = sql.get_schema(frame, 'test', 'mysql')
lines = create_sql.splitlines()
for l in lines:
tokens = l.split(' ')
if len(tokens) == 2 and tokens[0] == 'A':
self.assertTrue(tokens[1] == 'DATETIME')
frame = tm.makeTimeDataFrame()
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = sql.get_schema(frame, 'test', 'mysql', keys=['A', 'B'],)
lines = create_sql.splitlines()
self.assertTrue('PRIMARY KEY (`A`,`B`)' in create_sql)
cur = self.db.cursor()
cur.execute(drop_sql)
cur.execute(create_sql)
def test_execute_fail(self):
_skip_if_no_pymysql()
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = """
CREATE TABLE test
(
a TEXT,
b TEXT,
c REAL,
PRIMARY KEY (a(5), b(5))
);
"""
cur = self.db.cursor()
cur.execute(drop_sql)
cur.execute(create_sql)
sql.execute('INSERT INTO test VALUES("foo", "bar", 1.234)', self.db)
sql.execute('INSERT INTO test VALUES("foo", "baz", 2.567)', self.db)
try:
sys.stdout = StringIO()
self.assertRaises(Exception, sql.execute,
'INSERT INTO test VALUES("foo", "bar", 7)',
self.db)
finally:
sys.stdout = sys.__stdout__
def test_execute_closed_connection(self):
_skip_if_no_pymysql()
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = """
CREATE TABLE test
(
a TEXT,
b TEXT,
c REAL,
PRIMARY KEY (a(5), b(5))
);
"""
cur = self.db.cursor()
cur.execute(drop_sql)
cur.execute(create_sql)
sql.execute('INSERT INTO test VALUES("foo", "bar", 1.234)', self.db)
self.db.close()
try:
sys.stdout = StringIO()
self.assertRaises(Exception, sql.tquery, "select * from test",
con=self.db)
finally:
sys.stdout = sys.__stdout__
def test_na_roundtrip(self):
_skip_if_no_pymysql()
pass
def _check_roundtrip(self, frame):
_skip_if_no_pymysql()
drop_sql = "DROP TABLE IF EXISTS test_table"
cur = self.db.cursor()
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "Unknown table.*")
cur.execute(drop_sql)
sql.write_frame(frame, name='test_table', con=self.db, flavor='mysql')
result = sql.read_frame("select * from test_table", self.db)
# HACK! Change this once indexes are handled properly.
result.index = frame.index
result.index.name = frame.index.name
expected = frame
tm.assert_frame_equal(result, expected)
frame['txt'] = ['a'] * len(frame)
frame2 = frame.copy()
index = Index(lrange(len(frame2))) + 10
frame2['Idx'] = index
drop_sql = "DROP TABLE IF EXISTS test_table2"
cur = self.db.cursor()
with warnings.catch_warnings():
warnings.filterwarnings("ignore", "Unknown table.*")
cur.execute(drop_sql)
sql.write_frame(frame2, name='test_table2', con=self.db, flavor='mysql')
result = sql.read_frame("select * from test_table2", self.db,
index_col='Idx')
expected = frame.copy()
# HACK! Change this once indexes are handled properly.
expected.index = index
expected.index.names = result.index.names
tm.assert_frame_equal(expected, result)
def test_tquery(self):
try:
import pymysql
except ImportError:
raise nose.SkipTest("no pymysql")
frame = tm.makeTimeDataFrame()
drop_sql = "DROP TABLE IF EXISTS test_table"
cur = self.db.cursor()
cur.execute(drop_sql)
sql.write_frame(frame, name='test_table', con=self.db, flavor='mysql')
result = sql.tquery("select A from test_table", self.db)
expected = Series(frame.A.values, frame.index)  # no name, to match tquery output
result = Series(result, frame.index)
tm.assert_series_equal(result, expected)
try:
sys.stdout = StringIO()
self.assertRaises(sql.DatabaseError, sql.tquery,
'select * from blah', con=self.db)
self.assertRaises(sql.DatabaseError, sql.tquery,
'select * from blah', con=self.db, retry=True)
finally:
sys.stdout = sys.__stdout__
def test_uquery(self):
try:
import pymysql
except ImportError:
raise nose.SkipTest("no pymysql")
frame = tm.makeTimeDataFrame()
drop_sql = "DROP TABLE IF EXISTS test_table"
cur = self.db.cursor()
cur.execute(drop_sql)
sql.write_frame(frame, name='test_table', con=self.db, flavor='mysql')
stmt = 'INSERT INTO test_table VALUES(2.314, -123.1, 1.234, 2.3)'
self.assertEqual(sql.uquery(stmt, con=self.db), 1)
try:
sys.stdout = StringIO()
self.assertRaises(sql.DatabaseError, sql.tquery,
'insert into blah values (1)', con=self.db)
self.assertRaises(sql.DatabaseError, sql.tquery,
'insert into blah values (1)', con=self.db,
retry=True)
finally:
sys.stdout = sys.__stdout__
def test_keyword_as_column_names(self):
'''
Ensure an SQL keyword (here "From") can be used as a column name (MySQL).
'''
_skip_if_no_pymysql()
df = DataFrame({'From': np.ones(5)})
sql.write_frame(df, con=self.db, name='testkeywords',
if_exists='replace', flavor='mysql')
def test_if_exists(self):
_skip_if_no_pymysql()
df_if_exists_1 = DataFrame({'col1': [1, 2], 'col2': ['A', 'B']})
df_if_exists_2 = DataFrame({'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']})
table_name = 'table_if_exists'
sql_select = "SELECT * FROM %s" % table_name
def clean_up(test_table_to_drop):
"""
Drops tables created from individual tests
so no dependencies arise from sequential tests
"""
if sql.table_exists(test_table_to_drop, self.db, flavor='mysql'):
cur = self.db.cursor()
cur.execute("DROP TABLE %s" % test_table_to_drop)
cur.close()
# test if invalid value for if_exists raises appropriate error
self.assertRaises(ValueError,
sql.write_frame,
frame=df_if_exists_1,
con=self.db,
name=table_name,
flavor='mysql',
if_exists='notvalidvalue')
clean_up(table_name)
# test if_exists='fail'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='mysql', if_exists='fail')
self.assertRaises(ValueError,
sql.write_frame,
frame=df_if_exists_1,
con=self.db,
name=table_name,
flavor='mysql',
if_exists='fail')
# test if_exists='replace'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='mysql', if_exists='replace')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B')])
sql.write_frame(frame=df_if_exists_2, con=self.db, name=table_name,
flavor='mysql', if_exists='replace')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
# test if_exists='append'
sql.write_frame(frame=df_if_exists_1, con=self.db, name=table_name,
flavor='mysql', if_exists='fail')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B')])
sql.write_frame(frame=df_if_exists_2, con=self.db, name=table_name,
flavor='mysql', if_exists='append')
self.assertEqual(sql.tquery(sql_select, con=self.db),
[(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| mit |
curtisstpierre/django | django/conf/locale/ru/formats.py | 1059 | 1267 | # -*- encoding: utf-8 -*-
# This file is distributed under the same license as the Django package.
#
from __future__ import unicode_literals
# The *_FORMAT strings use the Django date format syntax,
# see http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATE_FORMAT = 'j E Y г.'
TIME_FORMAT = 'G:i'
DATETIME_FORMAT = 'j E Y г. G:i'
YEAR_MONTH_FORMAT = 'F Y г.'
MONTH_DAY_FORMAT = 'j F'
SHORT_DATE_FORMAT = 'd.m.Y'
SHORT_DATETIME_FORMAT = 'd.m.Y H:i'
FIRST_DAY_OF_WEEK = 1 # Monday
# The *_INPUT_FORMATS strings use the Python strftime format syntax,
# see http://docs.python.org/library/datetime.html#strftime-strptime-behavior
DATE_INPUT_FORMATS = [
'%d.%m.%Y', # '25.10.2006'
'%d.%m.%y', # '25.10.06'
]
DATETIME_INPUT_FORMATS = [
'%d.%m.%Y %H:%M:%S', # '25.10.2006 14:30:59'
'%d.%m.%Y %H:%M:%S.%f', # '25.10.2006 14:30:59.000200'
'%d.%m.%Y %H:%M', # '25.10.2006 14:30'
'%d.%m.%Y', # '25.10.2006'
'%d.%m.%y %H:%M:%S', # '25.10.06 14:30:59'
'%d.%m.%y %H:%M:%S.%f', # '25.10.06 14:30:59.000200'
'%d.%m.%y %H:%M', # '25.10.06 14:30'
'%d.%m.%y', # '25.10.06'
]
DECIMAL_SEPARATOR = ','
THOUSAND_SEPARATOR = '\xa0' # non-breaking space
NUMBER_GROUPING = 3
| bsd-3-clause |
lesserwhirls/scipy-cwt | doc/postprocess.py | 9 | 1240 | #!/usr/bin/env python
"""
%prog MODE FILES...
Post-processes HTML and Latex files output by Sphinx.
MODE is either 'html' or 'tex'.
"""
import re, optparse
def main():
p = optparse.OptionParser(__doc__)
options, args = p.parse_args()
if len(args) < 1:
p.error('no mode given')
mode = args.pop(0)
if mode not in ('html', 'tex'):
p.error('unknown mode %s' % mode)
for fn in args:
f = open(fn, 'r')
try:
if mode == 'html':
lines = process_html(fn, f.readlines())
elif mode == 'tex':
lines = process_tex(f.readlines())
finally:
f.close()
f = open(fn, 'w')
f.write("".join(lines))
f.close()
def process_html(fn, lines):
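# HTML output currently needs no post-processing; pass the lines through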
return lines
def process_tex(lines):
"""
Remove unnecessary section titles (for numpy/scipy API entries) from
the LaTeX file.
"""
new_lines = []
for line in lines:
if re.match(r'^\\(section|subsection|subsubsection|paragraph|subparagraph){(numpy|scipy)\.', line):
pass # skip!
else:
new_lines.append(line)
return new_lines
if __name__ == "__main__":
main()
| bsd-3-clause |
neilLasrado/frappe | frappe/model/db_query.py | 2 | 23555 | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
from six import iteritems, string_types
"""build query for doclistview and return results"""
import frappe, json, copy, re
import frappe.defaults
import frappe.share
import frappe.permissions
from frappe.utils import flt, cint, getdate, get_datetime, get_time, make_filter_tuple, get_filter, add_to_date
from frappe import _
from frappe.model import optional_fields
from frappe.model.utils.user_settings import get_user_settings, update_user_settings
from datetime import datetime
class DatabaseQuery(object):
def __init__(self, doctype, user=None):
self.doctype = doctype
self.tables = []
self.conditions = []
self.or_conditions = []
self.fields = None
self.user = user or frappe.session.user
self.ignore_ifnull = False
self.flags = frappe._dict()
def execute(self, query=None, fields=None, filters=None, or_filters=None,
docstatus=None, group_by=None, order_by=None, limit_start=False,
limit_page_length=None, as_list=False, with_childnames=False, debug=False,
ignore_permissions=False, user=None, with_comment_count=False,
join='left join', distinct=False, start=None, page_length=None, limit=None,
ignore_ifnull=False, save_user_settings=False, save_user_settings_fields=False,
update=None, add_total_row=None, user_settings=None):
if not ignore_permissions and not frappe.has_permission(self.doctype, "read", user=user):
frappe.flags.error_message = _('Insufficient Permission for {0}').format(frappe.bold(self.doctype))
raise frappe.PermissionError(self.doctype)
# filters and fields swappable
# it's hard to remember which comes first
if (isinstance(fields, dict)
or (isinstance(fields, list) and fields and isinstance(fields[0], list))):
# if fields is given as dict/list of list, its probably filters
filters, fields = fields, filters
elif fields and isinstance(filters, list) \
and len(filters) > 1 and isinstance(filters[0], string_types):
# if `filters` is a list of strings, its probably fields
filters, fields = fields, filters
if fields:
self.fields = fields
else:
self.fields = ["`tab{0}`.`name`".format(self.doctype)]
if start: limit_start = start
if page_length: limit_page_length = page_length
if limit: limit_page_length = limit
self.filters = filters or []
self.or_filters = or_filters or []
self.docstatus = docstatus or []
self.group_by = group_by
self.order_by = order_by
self.limit_start = 0 if (limit_start is False) else cint(limit_start)
self.limit_page_length = cint(limit_page_length) if limit_page_length else None
self.with_childnames = with_childnames
self.debug = debug
self.join = join
self.distinct = distinct
self.as_list = as_list
self.ignore_ifnull = ignore_ifnull
self.flags.ignore_permissions = ignore_permissions
self.user = user or frappe.session.user
self.update = update
self.user_settings_fields = copy.deepcopy(self.fields)
if user_settings:
self.user_settings = json.loads(user_settings)
if query:
result = self.run_custom_query(query)
else:
result = self.build_and_run()
if with_comment_count and not as_list and self.doctype:
self.add_comment_count(result)
if save_user_settings:
self.save_user_settings_fields = save_user_settings_fields
self.update_user_settings()
return result
def build_and_run(self):
args = self.prepare_args()
args.limit = self.add_limit()
if args.conditions:
args.conditions = "where " + args.conditions
if self.distinct:
args.fields = 'distinct ' + args.fields
query = """select %(fields)s from %(tables)s %(conditions)s
%(group_by)s %(order_by)s %(limit)s""" % args
return frappe.db.sql(query, as_dict=not self.as_list, debug=self.debug, update=self.update)
def prepare_args(self):
self.parse_args()
self.sanitize_fields()
self.extract_tables()
self.set_optional_columns()
self.build_conditions()
args = frappe._dict()
if self.with_childnames:
for t in self.tables:
if t != "`tab" + self.doctype + "`":
self.fields.append(t + ".name as '%s:name'" % t[4:-1])
# query dict
args.tables = self.tables[0]
# left join parent, child tables
for child in self.tables[1:]:
args.tables += " {join} {child} on ({child}.parent = {main}.name)".format(join=self.join,
child=child, main=self.tables[0])
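# illustrative result: "`tabNote` left join `tabNote Seen By` on
# (`tabNote Seen By`.parent = `tabNote`.name)"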
if self.grouped_or_conditions:
self.conditions.append("({0})".format(" or ".join(self.grouped_or_conditions)))
args.conditions = ' and '.join(self.conditions)
if self.or_conditions:
args.conditions += (' or ' if args.conditions else "") + \
' or '.join(self.or_conditions)
self.set_field_tables()
args.fields = ', '.join(self.fields)
self.set_order_by(args)
self.validate_order_by_and_group_by(args.order_by)
args.order_by = args.order_by and (" order by " + args.order_by) or ""
self.validate_order_by_and_group_by(self.group_by)
args.group_by = self.group_by and (" group by " + self.group_by) or ""
return args
def parse_args(self):
"""Convert fields and filters from strings to list, dicts"""
if isinstance(self.fields, string_types):
if self.fields == "*":
self.fields = ["*"]
else:
try:
self.fields = json.loads(self.fields)
except ValueError:
self.fields = [f.strip() for f in self.fields.split(",")]
for filter_name in ["filters", "or_filters"]:
filters = getattr(self, filter_name)
if isinstance(filters, string_types):
filters = json.loads(filters)
if isinstance(filters, dict):
fdict = filters
filters = []
for key, value in iteritems(fdict):
filters.append(make_filter_tuple(self.doctype, key, value))
setattr(self, filter_name, filters)
def sanitize_fields(self):
'''
regex : ^.*[,();].*
purpose : The regex looks for malicious patterns like `,`, '(', ')', ';' in each
field, which may lead to SQL injection.
example :
field = "`DocType`.`issingle`, version()"
As the field contains `,` and the mysql function `version()`, the regex
helps the system filter out this field.
'''
sub_query_regex = re.compile("^.*[,();].*")
blacklisted_keywords = ['select', 'create', 'insert', 'delete', 'drop', 'update', 'case']
blacklisted_functions = ['concat', 'concat_ws', 'if', 'ifnull', 'nullif', 'coalesce',
'connection_id', 'current_user', 'database', 'last_insert_id', 'session_user',
'system_user', 'user', 'version']
def _raise_exception():
frappe.throw(_('Cannot use sub-query or function in fields'), frappe.DataError)
for field in self.fields:
if sub_query_regex.match(field):
if any(keyword in field.lower().split() for keyword in blacklisted_keywords):
_raise_exception()
if any("({0}".format(keyword) in field.lower() for keyword in blacklisted_keywords):
_raise_exception()
if any("{0}(".format(keyword) in field.lower() for keyword in blacklisted_functions):
_raise_exception()
if re.compile("[a-zA-Z]+\s*'").match(field):
_raise_exception()
if re.compile('[a-zA-Z]+\s*,').match(field):
_raise_exception()
def extract_tables(self):
"""extract tables from fields"""
self.tables = ['`tab' + self.doctype + '`']
# add tables from fields
if self.fields:
for f in self.fields:
if ( not ("tab" in f and "." in f) ) or ("locate(" in f) or ("count(" in f):
continue
table_name = f.split('.')[0]
if table_name.lower().startswith('group_concat('):
table_name = table_name[13:]
if table_name.lower().startswith('ifnull('):
table_name = table_name[7:]
if not table_name[0]=='`':
table_name = '`' + table_name + '`'
if not table_name in self.tables:
self.append_table(table_name)
def append_table(self, table_name):
self.tables.append(table_name)
doctype = table_name[4:-1]
if (not self.flags.ignore_permissions) and (not frappe.has_permission(doctype)):
frappe.flags.error_message = _('Insufficient Permission for {0}').format(frappe.bold(doctype))
raise frappe.PermissionError(doctype)
def set_field_tables(self):
'''If there is more than one table, the fieldname must not be ambiguous.
If the fieldname is not explicitly mentioned, set the default table'''
if len(self.tables) > 1:
for i, f in enumerate(self.fields):
if '.' not in f:
self.fields[i] = '{0}.{1}'.format(self.tables[0], f)
def set_optional_columns(self):
"""Removes optional columns like `_user_tags`, `_comments` etc. if not in table"""
columns = frappe.db.get_table_columns(self.doctype)
# remove from fields
to_remove = []
for fld in self.fields:
for f in optional_fields:
if f in fld and not f in columns:
to_remove.append(fld)
for fld in to_remove:
del self.fields[self.fields.index(fld)]
# remove from filters
to_remove = []
for each in self.filters:
if isinstance(each, string_types):
each = [each]
for element in each:
if element in optional_fields and element not in columns:
to_remove.append(each)
for each in to_remove:
if isinstance(self.filters, dict):
del self.filters[each]
else:
self.filters.remove(each)
def build_conditions(self):
self.conditions = []
self.grouped_or_conditions = []
self.build_filter_conditions(self.filters, self.conditions)
self.build_filter_conditions(self.or_filters, self.grouped_or_conditions)
# match conditions
if not self.flags.ignore_permissions:
match_conditions = self.build_match_conditions()
if match_conditions:
self.conditions.append("(" + match_conditions + ")")
def build_filter_conditions(self, filters, conditions, ignore_permissions=None):
"""build conditions from user filters"""
if ignore_permissions is not None:
self.flags.ignore_permissions = ignore_permissions
if isinstance(filters, dict):
filters = [filters]
for f in filters:
if isinstance(f, string_types):
conditions.append(f)
else:
conditions.append(self.prepare_filter_condition(f))
def prepare_filter_condition(self, f):
"""Returns a filter condition in the format:
ifnull(`tabDocType`.`fieldname`, fallback) operator "value"
"""
f = get_filter(self.doctype, f)
tname = ('`tab' + f.doctype + '`')
if not tname in self.tables:
self.append_table(tname)
if 'ifnull(' in f.fieldname:
column_name = f.fieldname
else:
column_name = '{tname}.{fname}'.format(tname=tname,
fname=f.fieldname)
can_be_null = True
# prepare in condition
if f.operator.lower() in ('ancestors of', 'descendants of', 'not ancestors of', 'not descendants of'):
values = f.value or ''
# TODO: handle list and tuple
# if not isinstance(values, (list, tuple)):
# values = values.split(",")
ref_doctype = f.doctype
if frappe.get_meta(f.doctype).get_field(f.fieldname) is not None :
ref_doctype = frappe.get_meta(f.doctype).get_field(f.fieldname).options
result=[]
lft, rgt = frappe.db.get_value(ref_doctype, f.value, ["lft", "rgt"])
# Get descendants elements of a DocType with a tree structure
if f.operator.lower() in ('descendants of', 'not descendants of') :
result = frappe.db.sql_list("""select name from `tab{0}`
where lft>%s and rgt<%s order by lft asc""".format(ref_doctype), (lft, rgt))
else :
# Get ancestor elements of a DocType with a tree structure
result = frappe.db.sql_list("""select name from `tab{0}`
where lft<%s and rgt>%s order by lft desc""".format(ref_doctype), (lft, rgt))
fallback = "''"
value = (frappe.db.escape((v or '').strip(), percent=False) for v in result)
value = '("{0}")'.format('", "'.join(value))
# change the operator to IN, as the above code fetches all the parent / child values
# and converts them into a tuple that can be used directly with the IN operator in the query
f.operator = 'not in' if f.operator.lower() in ('not ancestors of', 'not descendants of') else 'in'
elif f.operator.lower() in ('in', 'not in'):
values = f.value or ''
if not isinstance(values, (list, tuple)):
values = values.split(",")
fallback = "''"
value = (frappe.db.escape((v or '').strip(), percent=False) for v in values)
value = '("{0}")'.format('", "'.join(value))
else:
df = frappe.get_meta(f.doctype).get("fields", {"fieldname": f.fieldname})
df = df[0] if df else None
if df and df.fieldtype in ("Check", "Float", "Int", "Currency", "Percent"):
can_be_null = False
if f.operator.lower() == 'between' and \
(f.fieldname in ('creation', 'modified') or (df and (df.fieldtype=="Date" or df.fieldtype=="Datetime"))):
value = get_between_date_filter(f.value, df)
fallback = "'0000-00-00 00:00:00'"
elif df and df.fieldtype=="Date":
value = getdate(f.value).strftime("%Y-%m-%d")
fallback = "'0000-00-00'"
elif (df and df.fieldtype=="Datetime") or isinstance(f.value, datetime):
value = get_datetime(f.value).strftime("%Y-%m-%d %H:%M:%S.%f")
fallback = "'0000-00-00 00:00:00'"
elif df and df.fieldtype=="Time":
value = get_time(f.value).strftime("%H:%M:%S.%f")
fallback = "'00:00:00'"
elif f.operator.lower() in ("like", "not like") or (isinstance(f.value, string_types) and
(not df or df.fieldtype not in ["Float", "Int", "Currency", "Percent", "Check"])):
value = "" if f.value==None else f.value
fallback = '""'
if f.operator.lower() in ("like", "not like") and isinstance(value, string_types):
# because "like" uses backslash (\) for escaping
value = value.replace("\\", "\\\\").replace("%", "%%")
else:
value = flt(f.value)
fallback = 0
# put it inside double quotes
if isinstance(value, string_types) and not f.operator.lower() == 'between':
value = '"{0}"'.format(frappe.db.escape(value, percent=False))
if (self.ignore_ifnull
or not can_be_null
or (f.value and f.operator.lower() in ('=', 'like'))
or 'ifnull(' in column_name.lower()):
condition = '{column_name} {operator} {value}'.format(
column_name=column_name, operator=f.operator,
value=value)
else:
condition = 'ifnull({column_name}, {fallback}) {operator} {value}'.format(
column_name=column_name, fallback=fallback, operator=f.operator,
value=value)
return condition
def build_match_conditions(self, as_condition=True):
"""add match conditions if applicable"""
self.match_filters = []
self.match_conditions = []
only_if_shared = False
if not self.user:
self.user = frappe.session.user
if not self.tables: self.extract_tables()
meta = frappe.get_meta(self.doctype)
role_permissions = frappe.permissions.get_role_permissions(meta, user=self.user)
self.shared = frappe.share.get_shared(self.doctype, self.user)
if (not meta.istable and
not role_permissions.get("read") and
not self.flags.ignore_permissions and
not has_any_user_permission_for_doctype(self.doctype, self.user)):
only_if_shared = True
if not self.shared:
frappe.throw(_("No permission to read {0}").format(self.doctype), frappe.PermissionError)
else:
self.conditions.append(self.get_share_condition())
else:
if role_permissions.get("if_owner", {}).get("read"): #if has if_owner permission skip user perm check
self.match_conditions.append("`tab{0}`.owner = '{1}'".format(self.doctype,
frappe.db.escape(self.user, percent=False)))
elif role_permissions.get("read"): # add user permission only if role has read perm
# get user permissions
user_permissions = frappe.permissions.get_user_permissions(self.user)
self.add_user_permissions(user_permissions)
if as_condition:
conditions = ""
if self.match_conditions:
# will turn out like ((blog_post in (..) and blogger in (...)) or (blog_category in (...)))
conditions = "((" + ") or (".join(self.match_conditions) + "))"
doctype_conditions = self.get_permission_query_conditions()
if doctype_conditions:
conditions += (' and ' + doctype_conditions) if conditions else doctype_conditions
# share is an OR condition, if there is a role permission
if not only_if_shared and self.shared and conditions:
conditions = "({conditions}) or ({shared_condition})".format(
conditions=conditions, shared_condition=self.get_share_condition())
return conditions
else:
return self.match_filters
def get_share_condition(self):
return """`tab{0}`.name in ({1})""".format(self.doctype, ", ".join(["'%s'"] * len(self.shared))) % \
tuple([frappe.db.escape(s, percent=False) for s in self.shared])
def add_user_permissions(self, user_permissions):
meta = frappe.get_meta(self.doctype)
doctype_link_fields = []
doctype_link_fields = meta.get_link_fields()
doctype_link_fields.append(dict(
options=self.doctype,
fieldname='name',
))
# append the current doctype with fieldname 'name' so a condition is
# applied on the doc name if a user permission exists for this doctype
match_filters = {}
match_conditions = []
for df in doctype_link_fields:
user_permission_values = user_permissions.get(df.get('options'), {})
if df.get('ignore_user_permissions'): continue
empty_value_condition = 'ifnull(`tab{doctype}`.`{fieldname}`, "")=""'.format(
doctype=self.doctype, fieldname=df.get('fieldname')
)
if (user_permission_values.get("docs", [])
and not self.doctype in user_permission_values.get("skip_for_doctype", [])):
if frappe.get_system_settings("apply_strict_user_permissions"):
condition = ""
else:
condition = empty_value_condition + " or "
condition += """`tab{doctype}`.`{fieldname}` in ({values})""".format(
doctype=self.doctype, fieldname=df.get('fieldname'),
values=", ".join([('"'+frappe.db.escape(v, percent=False)+'"')
for v in user_permission_values.get("docs")]))
match_conditions.append("({condition})".format(condition=condition))
match_filters[df.get('options')] = user_permission_values.get("docs")
if match_conditions:
self.match_conditions.append(" and ".join(match_conditions))
if match_filters:
self.match_filters.append(match_filters)
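
    # Hedged sketch: if user permissions restrict Company to "Acme" and the
    # (hypothetical) doctype "Sales Order" has a link field `company`, the
    # loop above contributes a match condition of roughly
    #     (ifnull(`tabSales Order`.`company`, "")="" or `tabSales Order`.`company` in ("Acme"))
    # with the empty-value escape on the left dropped when the system setting
    # apply_strict_user_permissions is enabled.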

    def get_permission_query_conditions(self):
        condition_methods = frappe.get_hooks("permission_query_conditions", {}).get(self.doctype, [])
        if condition_methods:
            conditions = []
            for method in condition_methods:
                c = frappe.call(frappe.get_attr(method), self.user)
                if c:
                    conditions.append(c)

            return " and ".join(conditions) if conditions else None

    def run_custom_query(self, query):
        if '%(key)s' in query:
            query = query.replace('%(key)s', 'name')
        return frappe.db.sql(query, as_dict=(not self.as_list))

    def set_order_by(self, args):
        meta = frappe.get_meta(self.doctype)
        if self.order_by:
            args.order_by = self.order_by
        else:
            args.order_by = ""

            # don't add order by from meta if a mysql group function is used without a group by clause
            group_function_without_group_by = (len(self.fields) == 1 and
                (self.fields[0].lower().startswith("count(")
                    or self.fields[0].lower().startswith("min(")
                    or self.fields[0].lower().startswith("max(")
                ) and not self.group_by)

            if not group_function_without_group_by:
                sort_field = sort_order = None
                if meta.sort_field and ',' in meta.sort_field:
                    # multiple sort fields given in the doctype definition, e.g.
                    # `idx desc, modified desc` will convert to
                    # `tabItem`.`idx` desc, `tabItem`.`modified` desc
                    args.order_by = ', '.join(['`tab{0}`.`{1}` {2}'.format(self.doctype,
                        f.split()[0].strip(), f.split()[1].strip()) for f in meta.sort_field.split(',')])
                else:
                    sort_field = meta.sort_field or 'modified'
                    sort_order = (meta.sort_field and meta.sort_order) or 'desc'
                    args.order_by = "`tab{0}`.`{1}` {2}".format(self.doctype, sort_field or "modified", sort_order or "desc")

                # draft docs always on top
                if meta.is_submittable:
                    args.order_by = "`tab{0}`.docstatus asc, {1}".format(self.doctype, args.order_by)

    def validate_order_by_and_group_by(self, parameters):
        """Check order by and group by so that at least one column is selected and no subquery is used"""
        if not parameters:
            return

        _lower = parameters.lower()
        if 'select' in _lower and ' from ' in _lower:
            frappe.throw(_('Cannot use sub-query in order by'))

        for field in parameters.split(","):
            if "." in field and field.strip().startswith("`tab"):
                tbl = field.strip().split('.')[0]
                if tbl not in self.tables:
                    if tbl.startswith('`'):
                        tbl = tbl[4:-1]
                    frappe.throw(_("Please select at least 1 column from {0} to sort/group").format(tbl))

    def add_limit(self):
        if self.limit_page_length:
            return 'limit %s, %s' % (self.limit_start, self.limit_page_length)
        else:
            return ''
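
    # Example: limit_start=20 with limit_page_length=10 yields "limit 20, 10",
    # i.e. rows 21-30 of the result set.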

    def add_comment_count(self, result):
        for r in result:
            if not r.name:
                continue

            r._comment_count = 0
            if "_comments" in r:
                r._comment_count = len(json.loads(r._comments or "[]"))

    def update_user_settings(self):
        # update user settings if new search
        user_settings = json.loads(get_user_settings(self.doctype))

        if hasattr(self, 'user_settings'):
            user_settings.update(self.user_settings)

        if self.save_user_settings_fields:
            user_settings['fields'] = self.user_settings_fields

        update_user_settings(self.doctype, user_settings)

def get_order_by(doctype, meta):
    order_by = ""

    sort_field = sort_order = None
    if meta.sort_field and ',' in meta.sort_field:
        # multiple sort fields given in the doctype definition, e.g.
        # `idx desc, modified desc` will convert to
        # `tabItem`.`idx` desc, `tabItem`.`modified` desc
        order_by = ', '.join(['`tab{0}`.`{1}` {2}'.format(doctype,
            f.split()[0].strip(), f.split()[1].strip()) for f in meta.sort_field.split(',')])
    else:
        sort_field = meta.sort_field or 'modified'
        sort_order = (meta.sort_field and meta.sort_order) or 'desc'
        order_by = "`tab{0}`.`{1}` {2}".format(doctype, sort_field or "modified", sort_order or "desc")

    # draft docs always on top
    if meta.is_submittable:
        order_by = "`tab{0}`.docstatus asc, {1}".format(doctype, order_by)

    return order_by
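
# Illustration (hypothetical doctype): for a submittable doctype such as
# "Sales Invoice" with default sort settings, get_order_by returns roughly
#     `tabSales Invoice`.docstatus asc, `tabSales Invoice`.`modified` desc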

@frappe.whitelist()
def get_list(doctype, *args, **kwargs):
    '''wrapper for DatabaseQuery'''
    kwargs.pop('cmd', None)
    return DatabaseQuery(doctype).execute(None, *args, **kwargs)
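
# Minimal usage sketch (doctype, filters and fields are hypothetical; the
# keyword arguments are forwarded to DatabaseQuery.execute):
#
#     todos = get_list("ToDo",
#         filters={"status": "Open"},
#         fields=["name", "description"],
#         order_by="modified desc",
#         limit_page_length=20)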

def is_parent_only_filter(doctype, filters):
    # check whether the filters reference only the parent doctype
    only_parent_doctype = True

    if isinstance(filters, list):
        for flt in filters:
            if doctype not in flt:
                only_parent_doctype = False
            if 'Between' in flt:
                flt[3] = get_between_date_filter(flt[3])

    return only_parent_doctype

def has_any_user_permission_for_doctype(doctype, user):
    user_permissions = frappe.permissions.get_user_permissions(user=user)
    return user_permissions and user_permissions.get(doctype)

def get_between_date_filter(value, df=None):
    '''
    Return the formatted date range as per the given example:
    [u'2017-11-01', u'2017-11-03'] => '2017-11-01 00:00:00.000000' AND '2017-11-04 00:00:00.000000'
    '''
    from_date = None
    to_date = None
    date_format = "%Y-%m-%d %H:%M:%S.%f"

    if df:
        date_format = "%Y-%m-%d %H:%M:%S.%f" if df.fieldtype == 'Datetime' else "%Y-%m-%d"

    if value and isinstance(value, (list, tuple)):
        if len(value) >= 1: from_date = value[0]
        if len(value) >= 2: to_date = value[1]

    if not df or (df and df.fieldtype == 'Datetime'):
        # make the range inclusive of the end date by extending it to the next midnight
        to_date = add_to_date(to_date, days=1)

    data = "'%s' AND '%s'" % (
        get_datetime(from_date).strftime(date_format),
        get_datetime(to_date).strftime(date_format))

    return data
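
# Complementary illustration for a Date (non-Datetime) docfield, where no
# extra day is added and the plain date format is used (df is a hypothetical
# docfield with fieldtype 'Date'):
#
#     get_between_date_filter(['2017-11-01', '2017-11-03'], df)
#     # => "'2017-11-01' AND '2017-11-03'"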
| mit |