| repo_name (stringlengths 5-100) | path (stringlengths 4-375) | copies (stringclasses, 991 values) | size (stringlengths 4-7) | content (stringlengths 666-1M) | license (stringclasses, 15 values) |
|---|---|---|---|---|---|
| girving/tensorflow | tensorflow/contrib/distributions/python/ops/shape.py | 32 | 20029 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A helper class for inferring Distribution shape."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops.distributions import util as distribution_util
from tensorflow.python.util import deprecation
class _DistributionShape(object):
"""Manage and manipulate `Distribution` shape.
#### Terminology
Recall that a `Tensor` has:
- `shape`: size of `Tensor` dimensions,
- `ndims`: size of `shape`; number of `Tensor` dimensions,
- `dims`: indexes into `shape`; useful for transpose, reduce.
`Tensor`s sampled from a `Distribution` can be partitioned by `sample_dims`,
`batch_dims`, and `event_dims`. To understand the semantics of these
dimensions, consider when two of the three are fixed and the remaining
is varied:
- `sample_dims`: indexes independent draws from identical
parameterizations of the `Distribution`.
- `batch_dims`: indexes independent draws from non-identical
parameterizations of the `Distribution`.
- `event_dims`: indexes event coordinates from one sample.
The `sample`, `batch`, and `event` dimensions constitute the entirety of a
`Distribution` `Tensor`'s shape.
The dimensions are always in `sample`, `batch`, `event` order.
#### Purpose
This class partitions `Tensor` notions of `shape`, `ndims`, and `dims` into
`Distribution` notions of `sample,` `batch,` and `event` dimensions. That
is, it computes any of:
```
sample_shape batch_shape event_shape
sample_dims batch_dims event_dims
sample_ndims batch_ndims event_ndims
```
for a given `Tensor`, e.g., the result of
`Distribution.sample(sample_shape=...)`.
For a given `Tensor`, this class computes the above table using minimal
information: `batch_ndims` and `event_ndims`.
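For instance, given only `batch_ndims` and `event_ndims`, the bookkeeping
reduces to the following (a plain-Python sketch of the arithmetic, not the
graph implementation):
```python
ndims = 4  # rank of x, e.g., shape [4, 25, 2, 2]
batch_ndims, event_ndims = 1, 2
sample_ndims = ndims - batch_ndims - event_ndims  # 1
sample_dims = list(range(0, sample_ndims))  # [0]
batch_dims = list(range(sample_ndims, sample_ndims + batch_ndims))  # [1]
event_dims = list(range(sample_ndims + batch_ndims, ndims))  # [2, 3]
```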
#### Examples
We show examples of distribution shape semantics.
- Sample dimensions:
Computing summary statistics, e.g., the average, is a reduction over sample
dimensions.
```python
sample_dims = [0]
tf.reduce_mean(Normal(loc=1.3, scale=1.).sample_n(1000),
axis=sample_dims) # ~= 1.3
```
- Batch dimensions:
Monte Carlo estimation of a marginal probability:
Average over batch dimensions where batch dimensions are associated with
random draws from a prior.
E.g., suppose we want to find the Monte Carlo estimate of the marginal
distribution of a `Normal` with a random `Laplace` location:
```
P(X=x) = integral P(X=x|y) P(Y=y) dy
~= 1/n sum_{i=1}^n P(X=x|y_i), y_i ~iid Laplace(0,1)
= tf.reduce_mean(Normal(loc=Laplace(0., 1.).sample_n(n=1000),
scale=tf.ones(1000)).prob(x),
axis=batch_dims)
```
The `Laplace` distribution generates a `Tensor` of shape `[1000]`. When
fed to a `Normal`, this is interpreted as 1000 different locations, i.e.,
1000 non-identical Normals. Therefore a single call to `prob(x)` yields
1000 probabilities, one for every location. The average over this batch
yields the marginal.
- Event dimensions:
Computing the determinant of the Jacobian of a function of a random
variable involves a reduction over event dimensions.
E.g., Jacobian of the transform `Y = g(X) = exp(X)`:
```python
tf.div(1., tf.reduce_prod(x, event_dims))
```
We show examples using this class.
Write `S, B, E` for `sample_shape`, `batch_shape`, and `event_shape`.
```python
# 150 iid samples from one two-dimensional multivariate Normal.
mu = [0., 0]
sigma = [[1., 0],
[0, 1]]
mvn = MultivariateNormal(mu, sigma)
rand_mvn = mvn.sample(sample_shape=[3, 50])
shaper = DistributionShape(batch_ndims=0, event_ndims=1)
S, B, E = shaper.get_shape(rand_mvn)
# S = [3, 50]
# B = []
# E = [2]
# 12 iid samples from one Wishart with 2x2 events.
sigma = [[1., 0],
[2, 1]]
wishart = Wishart(df=5, scale=sigma)
rand_wishart = wishart.sample(sample_shape=[3, 4])
shaper = DistributionShape(batch_ndims=0, event_ndims=2)
S, B, E = shaper.get_shape(rand_wishart)
# S = [3, 4]
# B = []
# E = [2, 2]
# 100 iid samples from two, non-identical trivariate Normal distributions.
mu = ... # shape(2, 3)
sigma = ... # shape(2, 3, 3)
X = MultivariateNormal(mu, sigma).sample(sample_shape=[4, 25])
# S = [4, 25]
# B = [2]
# E = [3]
```
#### Argument Validation
When `validate_args=True`, checks that cannot be done during
graph construction are performed at graph execution. This may result in a
performance degradation because data must be switched from GPU to CPU.
For example, when `validate_args=True` and `event_ndims` is a
non-constant `Tensor`, it is checked to be a non-negative integer at graph
execution. (Same for `batch_ndims`). Constant `Tensor`s and non-`Tensor`
arguments are always checked for correctness since this can be done for
"free," i.e., during graph construction.
"""
@deprecation.deprecated(
"2018-10-01",
"The TensorFlow Distributions library has moved to "
"TensorFlow Probability "
"(https://github.com/tensorflow/probability). You "
"should update all references to use `tfp.distributions` "
"instead of `tf.contrib.distributions`.",
warn_once=True)
def __init__(self,
batch_ndims=None,
event_ndims=None,
validate_args=False,
name="DistributionShape"):
"""Construct `DistributionShape` with fixed `batch_ndims`, `event_ndims`.
`batch_ndims` and `event_ndims` are fixed throughout the lifetime of a
`Distribution`. They may only be known at graph execution.
If both `batch_ndims` and `event_ndims` are python scalars (rather than
either being a `Tensor`), functions in this class automatically perform
sanity checks during graph construction.
Args:
batch_ndims: `Tensor`. Number of `dims` (`rank`) of the batch portion of
indexes of a `Tensor`. A "batch" indexes non-identical parameterizations,
e.g., Normals with different parameters.
event_ndims: `Tensor`. Number of `dims` (`rank`) of the event portion of
indexes of a `Tensor`. An "event" is what is sampled from a
distribution, e.g., a trivariate Normal has an event shape of [3] and a
4x4 Wishart has an event shape of [4, 4].
validate_args: Python `bool`, default `False`. When `True`,
non-`tf.constant` `Tensor` arguments are checked for correctness.
(`tf.constant` arguments are always checked.)
name: Python `str`. The name prepended to Ops created by this class.
Raises:
ValueError: if either `batch_ndims` or `event_ndims` is `None` or negative.
TypeError: if either `batch_ndims` or `event_ndims` is not `int32`.
"""
if batch_ndims is None: raise ValueError("batch_ndims cannot be None")
if event_ndims is None: raise ValueError("event_ndims cannot be None")
self._batch_ndims = batch_ndims
self._event_ndims = event_ndims
self._validate_args = validate_args
with ops.name_scope(name):
self._name = name
with ops.name_scope("init"):
self._batch_ndims = self._assert_non_negative_int32_scalar(
ops.convert_to_tensor(
batch_ndims, name="batch_ndims"))
self._batch_ndims_static, self._batch_ndims_is_0 = (
self._introspect_ndims(self._batch_ndims))
self._event_ndims = self._assert_non_negative_int32_scalar(
ops.convert_to_tensor(
event_ndims, name="event_ndims"))
self._event_ndims_static, self._event_ndims_is_0 = (
self._introspect_ndims(self._event_ndims))
@property
def name(self):
"""Name given to ops created by this class."""
return self._name
@property
def batch_ndims(self):
"""Returns number of dimensions corresponding to non-identical draws."""
return self._batch_ndims
@property
def event_ndims(self):
"""Returns number of dimensions needed to index a sample's coordinates."""
return self._event_ndims
@property
def validate_args(self):
"""Returns True if graph-runtime `Tensor` checks are enabled."""
return self._validate_args
def get_ndims(self, x, name="get_ndims"):
"""Get `Tensor` number of dimensions (rank).
Args:
x: `Tensor`.
name: Python `str`. The name to give this op.
Returns:
ndims: Scalar number of dimensions associated with a `Tensor`.
"""
with self._name_scope(name, values=[x]):
x = ops.convert_to_tensor(x, name="x")
ndims = x.get_shape().ndims
if ndims is None:
return array_ops.rank(x, name="ndims")
return ops.convert_to_tensor(ndims, dtype=dtypes.int32, name="ndims")
def get_sample_ndims(self, x, name="get_sample_ndims"):
"""Returns number of dimensions corresponding to iid draws ("sample").
Args:
x: `Tensor`.
name: Python `str`. The name to give this op.
Returns:
sample_ndims: `Tensor` (0D, `int32`).
Raises:
ValueError: if `sample_ndims` is calculated to be negative.
"""
with self._name_scope(name, values=[x]):
ndims = self.get_ndims(x, name=name)
if self._is_all_constant_helper(ndims, self.batch_ndims,
self.event_ndims):
ndims = tensor_util.constant_value(ndims)
sample_ndims = (ndims - self._batch_ndims_static -
self._event_ndims_static)
if sample_ndims < 0:
raise ValueError(
"expected batch_ndims(%d) + event_ndims(%d) <= ndims(%d)" %
(self._batch_ndims_static, self._event_ndims_static, ndims))
return ops.convert_to_tensor(sample_ndims, name="sample_ndims")
else:
with ops.name_scope(name="sample_ndims"):
sample_ndims = ndims - self.batch_ndims - self.event_ndims
if self.validate_args:
sample_ndims = control_flow_ops.with_dependencies(
[check_ops.assert_non_negative(sample_ndims)], sample_ndims)
return sample_ndims
def get_dims(self, x, name="get_dims"):
"""Returns dimensions indexing `sample_shape`, `batch_shape`, `event_shape`.
Example:
```python
x = ... # Tensor with shape [4, 3, 2, 1]
sample_dims, batch_dims, event_dims = _DistributionShape(
batch_ndims=2, event_ndims=1).get_dims(x)
# sample_dims == [0]
# batch_dims == [1, 2]
# event_dims == [3]
# Note that these are not the shape parts, but rather indexes into shape.
```
Args:
x: `Tensor`.
name: Python `str`. The name to give this op.
Returns:
sample_dims: `Tensor` (1D, `int32`).
batch_dims: `Tensor` (1D, `int32`).
event_dims: `Tensor` (1D, `int32`).
"""
with self._name_scope(name, values=[x]):
def make_dims(start_sum, size, name):
"""Closure to make dims range."""
start_sum = start_sum if start_sum else [
array_ops.zeros([], dtype=dtypes.int32, name="zero")]
if self._is_all_constant_helper(size, *start_sum):
start = sum(tensor_util.constant_value(s) for s in start_sum)
stop = start + tensor_util.constant_value(size)
return ops.convert_to_tensor(
list(range(start, stop)), dtype=dtypes.int32, name=name)
else:
start = sum(start_sum)
return math_ops.range(start, start + size)
sample_ndims = self.get_sample_ndims(x, name=name)
return (make_dims([], sample_ndims, name="sample_dims"),
make_dims([sample_ndims], self.batch_ndims, name="batch_dims"),
make_dims([sample_ndims, self.batch_ndims],
self.event_ndims, name="event_dims"))
def get_shape(self, x, name="get_shape"):
"""Returns `Tensor`'s shape partitioned into `sample`, `batch`, `event`.
Args:
x: `Tensor`.
name: Python `str`. The name to give this op.
Returns:
sample_shape: `Tensor` (1D, `int32`).
batch_shape: `Tensor` (1D, `int32`).
event_shape: `Tensor` (1D, `int32`).
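Example (a sketch mirroring `get_dims`; shapes shown as comments):
```python
x = ...  # Tensor with shape [4, 3, 2, 1]
sample_shape, batch_shape, event_shape = _DistributionShape(
batch_ndims=2, event_ndims=1).get_shape(x)
# sample_shape == [4]
# batch_shape == [3, 2]
# event_shape == [1]
```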
"""
with self._name_scope(name, values=[x]):
x = ops.convert_to_tensor(x, name="x")
def slice_shape(start_sum, size, name):
"""Closure to slice out shape."""
start_sum = start_sum if start_sum else [
array_ops.zeros([], dtype=dtypes.int32, name="zero")]
if (x.get_shape().ndims is not None and
self._is_all_constant_helper(size, *start_sum)):
start = sum(tensor_util.constant_value(s) for s in start_sum)
stop = start + tensor_util.constant_value(size)
slice_ = x.get_shape()[start:stop].as_list()
if all(s is not None for s in slice_):
return ops.convert_to_tensor(slice_, dtype=dtypes.int32, name=name)
return array_ops.slice(array_ops.shape(x), [sum(start_sum)], [size])
sample_ndims = self.get_sample_ndims(x, name=name)
return (slice_shape([], sample_ndims,
name="sample_shape"),
slice_shape([sample_ndims], self.batch_ndims,
name="batch_shape"),
slice_shape([sample_ndims, self.batch_ndims], self.event_ndims,
name="event_shape"))
# TODO(jvdillon): Remove expand_batch_dim and make expand_batch_dim=False
# the default behavior.
def make_batch_of_event_sample_matrices(
self, x, expand_batch_dim=True,
name="make_batch_of_event_sample_matrices"):
"""Reshapes/transposes `Distribution` `Tensor` from S+B+E to B_+E_+S_.
Where:
- `B_ = B if B or not expand_batch_dim else [1]`,
- `E_ = E if E else [1]`,
- `S_ = [tf.reduce_prod(S)]`.
Args:
x: `Tensor`.
expand_batch_dim: Python `bool`. If `True` the batch dims will be expanded
such that `batch_ndims >= 1`.
name: Python `str`. The name to give this op.
Returns:
x: `Tensor`. Input transposed/reshaped to `B_+E_+S_`.
sample_shape: `Tensor` (1D, `int32`).
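Example (a sketch; shapes shown as comments, with `batch_ndims=1`,
`event_ndims=1`):
```python
x = ...  # Tensor with shape [3, 4, 2], i.e., S=[3], B=[4], E=[2]
shaper = _DistributionShape(batch_ndims=1, event_ndims=1)
y, sample_shape = shaper.make_batch_of_event_sample_matrices(x)
# y.shape == [4, 2, 3], i.e., B_+E_+[prod(S)]
# sample_shape == [3]
```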
"""
with self._name_scope(name, values=[x]):
x = ops.convert_to_tensor(x, name="x")
# x.shape: S+B+E
sample_shape, batch_shape, event_shape = self.get_shape(x)
event_shape = distribution_util.pick_vector(
self._event_ndims_is_0, [1], event_shape)
if expand_batch_dim:
batch_shape = distribution_util.pick_vector(
self._batch_ndims_is_0, [1], batch_shape)
new_shape = array_ops.concat([[-1], batch_shape, event_shape], 0)
x = array_ops.reshape(x, shape=new_shape)
# x.shape: [prod(S)]+B_+E_
x = distribution_util.rotate_transpose(x, shift=-1)
# x.shape: B_+E_+[prod(S)]
return x, sample_shape
# TODO(jvdillon): Remove expand_batch_dim and make expand_batch_dim=False
# the default behavior.
def undo_make_batch_of_event_sample_matrices(
self, x, sample_shape, expand_batch_dim=True,
name="undo_make_batch_of_event_sample_matrices"):
"""Reshapes/transposes `Distribution` `Tensor` from B_+E_+S_ to S+B+E.
Where:
- `B_ = B if B or not expand_batch_dim else [1]`,
- `E_ = E if E else [1]`,
- `S_ = [tf.reduce_prod(S)]`.
This function "reverses" `make_batch_of_event_sample_matrices`.
Args:
x: `Tensor` of shape `B_+E_+S_`.
sample_shape: `Tensor` (1D, `int32`).
expand_batch_dim: Python `bool`. If `True` the batch dims will be expanded
such that `batch_ndims>=1`.
name: Python `str`. The name to give this op.
Returns:
x: `Tensor`. Input transposed/reshaped to `S+B+E`.
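Example (a sketch reversing the one in `make_batch_of_event_sample_matrices`):
```python
x = ...  # Tensor with shape [4, 2, 3], i.e., B_+E_+[prod(S)]
shaper = _DistributionShape(batch_ndims=1, event_ndims=1)
y = shaper.undo_make_batch_of_event_sample_matrices(x, sample_shape=[3])
# y.shape == [3, 4, 2], i.e., S+B+E
```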
"""
with self._name_scope(name, values=[x, sample_shape]):
x = ops.convert_to_tensor(x, name="x")
# x.shape: B_+E_+[prod(S)]
sample_shape = ops.convert_to_tensor(sample_shape, name="sample_shape")
x = distribution_util.rotate_transpose(x, shift=1)
# x.shape: [prod(S)]+B_+E_
if self._is_all_constant_helper(self.batch_ndims, self.event_ndims):
if self._batch_ndims_is_0 or self._event_ndims_is_0:
squeeze_dims = []
if self._event_ndims_is_0:
squeeze_dims += [-1]
if self._batch_ndims_is_0 and expand_batch_dim:
squeeze_dims += [1]
if squeeze_dims:
x = array_ops.squeeze(x, axis=squeeze_dims)
# x.shape: [prod(S)]+B+E
_, batch_shape, event_shape = self.get_shape(x)
else:
s = (x.get_shape().as_list() if x.get_shape().is_fully_defined()
else array_ops.shape(x))
batch_shape = s[1:1+self.batch_ndims]
# Since sample_ndims=1 here and the sample dim is left-most, we add 1 to
# batch_ndims to get the event start dim.
event_start = array_ops.where(
math_ops.logical_and(expand_batch_dim, self._batch_ndims_is_0),
2, 1 + self.batch_ndims)
event_shape = s[event_start:event_start+self.event_ndims]
new_shape = array_ops.concat([sample_shape, batch_shape, event_shape], 0)
x = array_ops.reshape(x, shape=new_shape)
# x.shape: S+B+E
return x
@contextlib.contextmanager
def _name_scope(self, name=None, values=None):
"""Helper function to standardize op scope."""
with ops.name_scope(self.name):
with ops.name_scope(name, values=(
(values or []) + [self.batch_ndims, self.event_ndims])) as scope:
yield scope
def _is_all_constant_helper(self, *args):
"""Helper which returns True if all inputs are constant_value."""
return all(tensor_util.constant_value(x) is not None for x in args)
def _assert_non_negative_int32_scalar(self, x):
"""Helper which ensures that input is a non-negative, int32, scalar."""
x = ops.convert_to_tensor(x, name="x")
if x.dtype.base_dtype != dtypes.int32.base_dtype:
raise TypeError("%s.dtype=%s is not %s" % (x.name, x.dtype, dtypes.int32))
x_value_static = tensor_util.constant_value(x)
if x.get_shape().ndims is not None and x_value_static is not None:
if x.get_shape().ndims != 0:
raise ValueError("%s.ndims=%d is not 0 (scalar)" %
(x.name, x.get_shape().ndims))
if x_value_static < 0:
raise ValueError("%s.value=%d cannot be negative" %
(x.name, x_value_static))
return x
if self.validate_args:
x = control_flow_ops.with_dependencies([
check_ops.assert_rank(x, 0),
check_ops.assert_non_negative(x)], x)
return x
def _introspect_ndims(self, ndims):
"""Helper to establish some properties of input ndims args."""
if self._is_all_constant_helper(ndims):
return (tensor_util.constant_value(ndims),
tensor_util.constant_value(ndims) == 0)
return None, math_ops.equal(ndims, 0)
| apache-2.0 |

| miipl-naveen/optibizz | addons/hr_holidays/hr_holidays.py | 6 | 33614 |
# -*- coding: utf-8 -*-
##################################################################################
#
# Copyright (c) 2005-2006 Axelor SARL. (http://www.axelor.com)
# and 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# $Id: hr.py 4656 2006-11-24 09:58:42Z Cyp $
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import datetime
import math
import time
from operator import attrgetter
from openerp.exceptions import Warning
from openerp import tools
from openerp.osv import fields, osv
from openerp.tools.translate import _
class hr_holidays_status(osv.osv):
_name = "hr.holidays.status"
_description = "Leave Type"
def get_days(self, cr, uid, ids, employee_id, context=None):
result = dict((id, dict(max_leaves=0, leaves_taken=0, remaining_leaves=0,
virtual_remaining_leaves=0)) for id in ids)
holiday_ids = self.pool['hr.holidays'].search(cr, uid, [('employee_id', '=', employee_id),
('state', 'in', ['confirm', 'validate1', 'validate']),
('holiday_status_id', 'in', ids)
], context=context)
for holiday in self.pool['hr.holidays'].browse(cr, uid, holiday_ids, context=context):
status_dict = result[holiday.holiday_status_id.id]
if holiday.type == 'add':
status_dict['virtual_remaining_leaves'] += holiday.number_of_days_temp
if holiday.state == 'validate':
status_dict['max_leaves'] += holiday.number_of_days_temp
status_dict['remaining_leaves'] += holiday.number_of_days_temp
elif holiday.type == 'remove': # number of days is negative
status_dict['virtual_remaining_leaves'] -= holiday.number_of_days_temp
if holiday.state == 'validate':
status_dict['leaves_taken'] += holiday.number_of_days_temp
status_dict['remaining_leaves'] -= holiday.number_of_days_temp
return result
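# Illustrative shape of the result (hypothetical ids and day counts):
#   get_days(cr, uid, [1], employee_id=7) ->
#     {1: {'max_leaves': 20.0, 'leaves_taken': 5.0,
#          'remaining_leaves': 15.0, 'virtual_remaining_leaves': 12.0}}
# where 20 days were allocated, 5 validated days were taken, and 3 more
# days are still waiting approval.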
def _user_left_days(self, cr, uid, ids, name, args, context=None):
employee_id = False
if context and 'employee_id' in context:
employee_id = context['employee_id']
else:
employee_ids = self.pool.get('hr.employee').search(cr, uid, [('user_id', '=', uid)], context=context)
if employee_ids:
employee_id = employee_ids[0]
if employee_id:
res = self.get_days(cr, uid, ids, employee_id, context=context)
else:
res = dict((res_id, {'leaves_taken': 0, 'remaining_leaves': 0, 'max_leaves': 0}) for res_id in ids)
return res
_columns = {
'name': fields.char('Leave Type', size=64, required=True, translate=True),
'categ_id': fields.many2one('calendar.event.type', 'Meeting Type',
help='Once a leave is validated, Odoo will create a corresponding meeting of this type in the calendar.'),
'color_name': fields.selection([
    ('red', 'Red'), ('blue', 'Blue'),
    ('lightgreen', 'Light Green'), ('lightblue', 'Light Blue'),
    ('lightyellow', 'Light Yellow'), ('magenta', 'Magenta'),
    ('lightcyan', 'Light Cyan'), ('black', 'Black'),
    ('lightpink', 'Light Pink'), ('brown', 'Brown'),
    ('violet', 'Violet'), ('lightcoral', 'Light Coral'),
    ('lightsalmon', 'Light Salmon'), ('lavender', 'Lavender'),
    ('wheat', 'Wheat'), ('ivory', 'Ivory')],
    'Color in Report', required=True,
    help='This color will be used in the leaves summary located in Reporting\Leaves by Department.'),
'limit': fields.boolean('Allow to Override Limit', help='If you select this check box, the system allows the employees to take more leaves than the available ones for this type and will not take them into account for the "Remaining Legal Leaves" defined on the employee form.'),
'active': fields.boolean('Active', help="If the active field is set to false, it will allow you to hide the leave type without removing it."),
'max_leaves': fields.function(_user_left_days, string='Maximum Allowed', help='This value is given by the sum of all holidays requests with a positive value.', multi='user_left_days'),
'leaves_taken': fields.function(_user_left_days, string='Leaves Already Taken', help='This value is given by the sum of all holidays requests with a negative value.', multi='user_left_days'),
'remaining_leaves': fields.function(_user_left_days, string='Remaining Leaves', help='Maximum Leaves Allowed - Leaves Already Taken', multi='user_left_days'),
'virtual_remaining_leaves': fields.function(_user_left_days, string='Virtual Remaining Leaves', help='Maximum Leaves Allowed - Leaves Already Taken - Leaves Waiting Approval', multi='user_left_days'),
'double_validation': fields.boolean('Apply Double Validation', help="When selected, the Allocation/Leave Requests for this type require a second validation to be approved."),
}
_defaults = {
'color_name': 'red',
'active': True,
}
def name_get(self, cr, uid, ids, context=None):
if context is None:
context = {}
if not context.get('employee_id',False):
# leave counts are based on employee_id and would be inaccurate if not based on the correct employee
return super(hr_holidays_status, self).name_get(cr, uid, ids, context=context)
res = []
for record in self.browse(cr, uid, ids, context=context):
name = record.name
if not record.limit:
name = name + (' (%g/%g)' % (record.leaves_taken or 0.0, record.max_leaves or 0.0))
res.append((record.id, name))
return res
class hr_holidays(osv.osv):
_name = "hr.holidays"
_description = "Leave"
_order = "type desc, date_from asc"
_inherit = ['mail.thread', 'ir.needaction_mixin']
_track = {
'state': {
'hr_holidays.mt_holidays_approved': lambda self, cr, uid, obj, ctx=None: obj.state == 'validate',
'hr_holidays.mt_holidays_refused': lambda self, cr, uid, obj, ctx=None: obj.state == 'refuse',
'hr_holidays.mt_holidays_confirmed': lambda self, cr, uid, obj, ctx=None: obj.state == 'confirm',
},
}
def _employee_get(self, cr, uid, context=None):
emp_id = context.get('default_employee_id', False)
if emp_id:
return emp_id
ids = self.pool.get('hr.employee').search(cr, uid, [('user_id', '=', uid)], context=context)
if ids:
return ids[0]
return False
def _compute_number_of_days(self, cr, uid, ids, name, args, context=None):
result = {}
for hol in self.browse(cr, uid, ids, context=context):
if hol.type=='remove':
result[hol.id] = -hol.number_of_days_temp
else:
result[hol.id] = hol.number_of_days_temp
return result
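# Sign convention illustrated (hypothetical values): a validated 3-day leave
# (type 'remove') has number_of_days_temp == 3.0 and number_of_days == -3.0,
# while a 3-day allocation (type 'add') keeps number_of_days == 3.0.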
def _get_can_reset(self, cr, uid, ids, name, arg, context=None):
"""User can reset a leave request if it is its own leave request or if
he is an Hr Manager. """
user = self.pool['res.users'].browse(cr, uid, uid, context=context)
group_hr_manager_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'base', 'group_hr_manager')[1]
if group_hr_manager_id in [g.id for g in user.groups_id]:
return dict.fromkeys(ids, True)
result = dict.fromkeys(ids, False)
for holiday in self.browse(cr, uid, ids, context=context):
if holiday.employee_id and holiday.employee_id.user_id and holiday.employee_id.user_id.id == uid:
result[holiday.id] = True
return result
def _check_date(self, cr, uid, ids, context=None):
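# Interval-overlap test: leaves [a_from, a_to] and [b_from, b_to] overlap
# iff a_from <= b_to and a_to >= b_from; the domain below finds any other
# non-cancelled/non-refused leave of the same employee satisfying this.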
for holiday in self.browse(cr, uid, ids, context=context):
domain = [
('date_from', '<=', holiday.date_to),
('date_to', '>=', holiday.date_from),
('employee_id', '=', holiday.employee_id.id),
('id', '!=', holiday.id),
('state', 'not in', ['cancel', 'refuse']),
]
nholidays = self.search_count(cr, uid, domain, context=context)
if nholidays:
return False
return True
_check_holidays = lambda self, cr, uid, ids, context=None: self.check_holidays(cr, uid, ids, context=context)
_columns = {
'name': fields.char('Description', size=64),
'state': fields.selection([('draft', 'To Submit'), ('cancel', 'Cancelled'),('confirm', 'To Approve'), ('refuse', 'Refused'), ('validate1', 'Second Approval'), ('validate', 'Approved')],
'Status', readonly=True, track_visibility='onchange', copy=False,
help='The status is set to \'To Submit\', when a holiday request is created.\
\nThe status is \'To Approve\', when the holiday request is confirmed by the user.\
\nThe status is \'Refused\', when the holiday request is refused by the manager.\
\nThe status is \'Approved\', when the holiday request is approved by the manager.'),
'user_id':fields.related('employee_id', 'user_id', type='many2one', relation='res.users', string='User', store=True),
'date_from': fields.datetime('Start Date', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, select=True, copy=False),
'date_to': fields.datetime('End Date', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, copy=False),
'holiday_status_id': fields.many2one("hr.holidays.status", "Leave Type", required=True,readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}),
'employee_id': fields.many2one('hr.employee', "Employee", select=True, invisible=False, readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}),
'manager_id': fields.many2one('hr.employee', 'First Approval', invisible=False, readonly=True, copy=False,
help='This area is automatically filled by the user who validates the leave'),
'notes': fields.text('Reasons',readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}),
'number_of_days_temp': fields.float('Allocation', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, copy=False),
'number_of_days': fields.function(_compute_number_of_days, string='Number of Days', store=True),
'meeting_id': fields.many2one('calendar.event', 'Meeting'),
'type': fields.selection([('remove','Leave Request'),('add','Allocation Request')], 'Request Type', required=True, readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, help="Choose 'Leave Request' if someone wants to take an off-day. \nChoose 'Allocation Request' if you want to increase the number of leaves available for someone", select=True),
'parent_id': fields.many2one('hr.holidays', 'Parent'),
'linked_request_ids': fields.one2many('hr.holidays', 'parent_id', 'Linked Requests',),
'department_id':fields.related('employee_id', 'department_id', string='Department', type='many2one', relation='hr.department', readonly=True, store=True),
'category_id': fields.many2one('hr.employee.category', "Employee Tag", help='Category of Employee', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}),
'holiday_type': fields.selection([('employee','By Employee'),('category','By Employee Tag')], 'Allocation Mode', readonly=True, states={'draft':[('readonly',False)], 'confirm':[('readonly',False)]}, help='By Employee: Allocation/Request for individual Employee, By Employee Tag: Allocation/Request for group of employees in category', required=True),
'manager_id2': fields.many2one('hr.employee', 'Second Approval', readonly=True, copy=False,
help='This area is automatically filled by the user who validates the leave at the second level (if the leave type needs second validation)'),
'double_validation': fields.related('holiday_status_id', 'double_validation', type='boolean', relation='hr.holidays.status', string='Apply Double Validation'),
'can_reset': fields.function(
_get_can_reset,
type='boolean'),
}
_defaults = {
'employee_id': _employee_get,
'state': 'confirm',
'type': 'remove',
'user_id': lambda obj, cr, uid, context: uid,
'holiday_type': 'employee'
}
_constraints = [
(_check_date, 'You cannot have two leave requests that overlap on the same day!', ['date_from','date_to']),
(_check_holidays, 'The number of remaining leaves is not sufficient for this leave type', ['state','number_of_days_temp'])
]
_sql_constraints = [
('type_value', "CHECK( (holiday_type='employee' AND employee_id IS NOT NULL) or (holiday_type='category' AND category_id IS NOT NULL))",
"The employee or employee category of this request is missing. Please make sure that your user login is linked to an employee."),
('date_check2', "CHECK ( (type='add') OR (date_from <= date_to))", "The start date must be earlier than the end date."),
('date_check', "CHECK ( number_of_days_temp >= 0 )", "The number of days must be greater than or equal to 0."),
]
def _create_resource_leave(self, cr, uid, leaves, context=None):
'''This method will create entries in the resource calendar leaves object when holidays are validated.'''
obj_res_leave = self.pool.get('resource.calendar.leaves')
for leave in leaves:
vals = {
'name': leave.name,
'date_from': leave.date_from,
'holiday_id': leave.id,
'date_to': leave.date_to,
'resource_id': leave.employee_id.resource_id.id,
'calendar_id': leave.employee_id.resource_id.calendar_id.id
}
obj_res_leave.create(cr, uid, vals, context=context)
return True
def _remove_resource_leave(self, cr, uid, ids, context=None):
'''This method will remove entries from the resource calendar leaves object when holidays are cancelled/removed.'''
obj_res_leave = self.pool.get('resource.calendar.leaves')
leave_ids = obj_res_leave.search(cr, uid, [('holiday_id', 'in', ids)], context=context)
return obj_res_leave.unlink(cr, uid, leave_ids, context=context)
def onchange_type(self, cr, uid, ids, holiday_type, employee_id=False, context=None):
result = {}
if holiday_type == 'employee' and not employee_id:
ids_employee = self.pool.get('hr.employee').search(cr, uid, [('user_id','=', uid)])
if ids_employee:
result['value'] = {
'employee_id': ids_employee[0]
}
elif holiday_type != 'employee':
result['value'] = {
'employee_id': False
}
return result
def onchange_employee(self, cr, uid, ids, employee_id):
result = {'value': {'department_id': False}}
if employee_id:
employee = self.pool.get('hr.employee').browse(cr, uid, employee_id)
result['value'] = {'department_id': employee.department_id.id}
return result
# TODO: can be improved using resource calendar method
def _get_number_of_days(self, date_from, date_to):
"""Returns a float equals to the timedelta between two dates given as string."""
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
from_dt = datetime.datetime.strptime(date_from, DATETIME_FORMAT)
to_dt = datetime.datetime.strptime(date_to, DATETIME_FORMAT)
timedelta = to_dt - from_dt
diff_day = timedelta.days + float(timedelta.seconds) / 86400
return diff_day
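# Worked example (illustrative dates): from "2014-01-01 09:00:00" to
# "2014-01-02 17:00:00" the timedelta is 1 day + 28800 seconds, so
# diff_day == 1 + 28800/86400.0 == 1.333...; the onchange handlers below
# then report floor(1.333...) + 1 == 2 days.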
def unlink(self, cr, uid, ids, context=None):
for rec in self.browse(cr, uid, ids, context=context):
if rec.state not in ['draft', 'cancel', 'confirm']:
raise osv.except_osv(_('Warning!'),_('You cannot delete a leave which is in %s state.')%(rec.state))
return super(hr_holidays, self).unlink(cr, uid, ids, context)
def onchange_date_from(self, cr, uid, ids, date_to, date_from):
"""
If no date is set for date_to, automatically set one 8 hours later than
the date_from.
Also update the number_of_days.
"""
# date_to has to be greater than date_from
if (date_from and date_to) and (date_from > date_to):
raise osv.except_osv(_('Warning!'),_('The start date must be earlier than the end date.'))
result = {'value': {}}
# No date_to set so far: automatically compute one 8 hours later
if date_from and not date_to:
date_to_with_delta = datetime.datetime.strptime(date_from, tools.DEFAULT_SERVER_DATETIME_FORMAT) + datetime.timedelta(hours=8)
result['value']['date_to'] = str(date_to_with_delta)
# Compute and update the number of days
if (date_to and date_from) and (date_from <= date_to):
diff_day = self._get_number_of_days(date_from, date_to)
result['value']['number_of_days_temp'] = math.floor(diff_day) + 1
else:
result['value']['number_of_days_temp'] = 0
return result
def onchange_date_to(self, cr, uid, ids, date_to, date_from):
"""
Update the number_of_days.
"""
# date_to has to be greater than date_from
if (date_from and date_to) and (date_from > date_to):
raise osv.except_osv(_('Warning!'),_('The start date must be earlier than the end date.'))
result = {'value': {}}
# Compute and update the number of days
if (date_to and date_from) and (date_from <= date_to):
diff_day = self._get_number_of_days(date_from, date_to)
result['value']['number_of_days_temp'] = math.floor(diff_day) + 1
else:
result['value']['number_of_days_temp'] = 0
return result
def create(self, cr, uid, values, context=None):
""" Override to avoid automatic logging of creation """
if context is None:
context = {}
context = dict(context, mail_create_nolog=True, mail_create_nosubscribe=True)
if values.get('state') and values['state'] not in ['draft', 'confirm', 'cancel'] and not self.pool['res.users'].has_group(cr, uid, 'base.group_hr_user'):
raise osv.except_osv(_('Warning!'), _('You cannot set a leave request as \'%s\'. Contact a human resource manager.') % values.get('state'))
return super(hr_holidays, self).create(cr, uid, values, context=context)
def write(self, cr, uid, ids, vals, context=None):
if vals.get('state') and vals['state'] not in ['draft', 'confirm', 'cancel'] and not self.pool['res.users'].has_group(cr, uid, 'base.group_hr_user'):
raise osv.except_osv(_('Warning!'), _('You cannot set a leave request as \'%s\'. Contact a human resource manager.') % vals.get('state'))
return super(hr_holidays, self).write(cr, uid, ids, vals, context=context)
def holidays_reset(self, cr, uid, ids, context=None):
self.write(cr, uid, ids, {
'state': 'draft',
'manager_id': False,
'manager_id2': False,
})
to_unlink = []
for record in self.browse(cr, uid, ids, context=context):
for record2 in record.linked_request_ids:
self.holidays_reset(cr, uid, [record2.id], context=context)
to_unlink.append(record2.id)
if to_unlink:
self.unlink(cr, uid, to_unlink, context=context)
return True
def holidays_first_validate(self, cr, uid, ids, context=None):
obj_emp = self.pool.get('hr.employee')
ids2 = obj_emp.search(cr, uid, [('user_id', '=', uid)])
manager = ids2 and ids2[0] or False
self.holidays_first_validate_notificate(cr, uid, ids, context=context)
return self.write(cr, uid, ids, {'state':'validate1', 'manager_id': manager})
def holidays_validate(self, cr, uid, ids, context=None):
obj_emp = self.pool.get('hr.employee')
ids2 = obj_emp.search(cr, uid, [('user_id', '=', uid)])
manager = ids2 and ids2[0] or False
self.write(cr, uid, ids, {'state':'validate'})
data_holiday = self.browse(cr, uid, ids)
for record in data_holiday:
if record.double_validation:
self.write(cr, uid, [record.id], {'manager_id2': manager})
else:
self.write(cr, uid, [record.id], {'manager_id': manager})
if record.holiday_type == 'employee' and record.type == 'remove':
meeting_obj = self.pool.get('calendar.event')
meeting_vals = {
'name': record.name or _('Leave Request'),
'categ_ids': record.holiday_status_id.categ_id and [(6,0,[record.holiday_status_id.categ_id.id])] or [],
'duration': record.number_of_days_temp * 8,
'description': record.notes,
'user_id': record.user_id.id,
'start': record.date_from,
'stop': record.date_to,
'allday': False,
'state': 'open', # to block that meeting date in the calendar
'class': 'confidential'
}
#Add the partner_id (if exist) as an attendee
if record.user_id and record.user_id.partner_id:
meeting_vals['partner_ids'] = [(4,record.user_id.partner_id.id)]
ctx_no_email = dict(context or {}, no_email=True)
meeting_id = meeting_obj.create(cr, uid, meeting_vals, context=ctx_no_email)
self._create_resource_leave(cr, uid, [record], context=context)
self.write(cr, uid, ids, {'meeting_id': meeting_id})
elif record.holiday_type == 'category':
emp_ids = obj_emp.search(cr, uid, [('category_ids', 'child_of', [record.category_id.id])])
leave_ids = []
batch_context = dict(context, mail_notify_force_send=False)
for emp in obj_emp.browse(cr, uid, emp_ids, context=context):
vals = {
'name': record.name,
'type': record.type,
'holiday_type': 'employee',
'holiday_status_id': record.holiday_status_id.id,
'date_from': record.date_from,
'date_to': record.date_to,
'notes': record.notes,
'number_of_days_temp': record.number_of_days_temp,
'parent_id': record.id,
'employee_id': emp.id
}
leave_ids.append(self.create(cr, uid, vals, context=batch_context))
for leave_id in leave_ids:
# TODO is it necessary to interleave the calls?
for sig in ('confirm', 'validate', 'second_validate'):
self.signal_workflow(cr, uid, [leave_id], sig)
return True
def holidays_confirm(self, cr, uid, ids, context=None):
for record in self.browse(cr, uid, ids, context=context):
if record.employee_id and record.employee_id.parent_id and record.employee_id.parent_id.user_id:
self.message_subscribe_users(cr, uid, [record.id], user_ids=[record.employee_id.parent_id.user_id.id], context=context)
return self.write(cr, uid, ids, {'state': 'confirm'})
def holidays_refuse(self, cr, uid, ids, context=None):
obj_emp = self.pool.get('hr.employee')
ids2 = obj_emp.search(cr, uid, [('user_id', '=', uid)])
manager = ids2 and ids2[0] or False
for holiday in self.browse(cr, uid, ids, context=context):
if holiday.state == 'validate1':
self.write(cr, uid, [holiday.id], {'state': 'refuse', 'manager_id': manager})
else:
self.write(cr, uid, [holiday.id], {'state': 'refuse', 'manager_id2': manager})
self.holidays_cancel(cr, uid, ids, context=context)
return True
def holidays_cancel(self, cr, uid, ids, context=None):
for record in self.browse(cr, uid, ids):
# Delete the meeting
if record.meeting_id:
record.meeting_id.unlink()
# If a category request created several holidays, refuse all related requests
self.signal_workflow(cr, uid, map(attrgetter('id'), record.linked_request_ids or []), 'refuse')
self._remove_resource_leave(cr, uid, ids, context=context)
return True
def check_holidays(self, cr, uid, ids, context=None):
for record in self.browse(cr, uid, ids, context=context):
if record.holiday_type != 'employee' or record.type != 'remove' or not record.employee_id or record.holiday_status_id.limit:
continue
leave_days = self.pool.get('hr.holidays.status').get_days(cr, uid, [record.holiday_status_id.id], record.employee_id.id, context=context)[record.holiday_status_id.id]
if leave_days['remaining_leaves'] < 0 or leave_days['virtual_remaining_leaves'] < 0:
# Raising a warning gives a more user-friendly feedback than the default constraint error
raise Warning(_('The number of remaining leaves is not sufficient for this leave type.\n'
'Please verify also the leaves waiting for validation.'))
return True
# -----------------------------
# OpenChatter and notifications
# -----------------------------
def _needaction_domain_get(self, cr, uid, context=None):
emp_obj = self.pool.get('hr.employee')
empids = emp_obj.search(cr, uid, [('parent_id.user_id', '=', uid)], context=context)
dom = ['&', ('state', '=', 'confirm'), ('employee_id', 'in', empids)]
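# Odoo domains use prefix notation: dom == ['&', A, B] means A AND B;
# prepending '|' below yields (A AND B) OR (state = 'validate1').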
# if this user is an hr.manager, they should do second validations
if self.pool.get('res.users').has_group(cr, uid, 'base.group_hr_manager'):
dom = ['|'] + dom + [('state', '=', 'validate1')]
return dom
def holidays_first_validate_notificate(self, cr, uid, ids, context=None):
for obj in self.browse(cr, uid, ids, context=context):
self.message_post(cr, uid, [obj.id],
_("Request approved, waiting second validation."), context=context)
class resource_calendar_leaves(osv.osv):
_inherit = "resource.calendar.leaves"
_description = "Leave Detail"
_columns = {
'holiday_id': fields.many2one("hr.holidays", "Leave Request"),
}
class hr_employee(osv.osv):
_inherit="hr.employee"
def create(self, cr, uid, vals, context=None):
# don't pass the value of remaining leaves if it's 0 at creation time, otherwise it will trigger the inverse
# function _set_remaining_days, which the system may not be configured for. Note that we don't have this problem on
# write because the clients only send the fields that have been modified.
if 'remaining_leaves' in vals and not vals['remaining_leaves']:
del(vals['remaining_leaves'])
return super(hr_employee, self).create(cr, uid, vals, context=context)
def _set_remaining_days(self, cr, uid, empl_id, name, value, arg, context=None):
employee = self.browse(cr, uid, empl_id, context=context)
diff = value - employee.remaining_leaves
type_obj = self.pool.get('hr.holidays.status')
holiday_obj = self.pool.get('hr.holidays')
# Find the holiday status types that enforce a limit (limit=False)
status_ids = type_obj.search(cr, uid, [('limit', '=', False)], context=context)
if len(status_ids) != 1:
raise osv.except_osv(_('Warning!'),_("The feature behind the field 'Remaining Legal Leaves' can only be used when there is only one leave type with the option 'Allow to Override Limit' unchecked. (%s Found). Otherwise, the update is ambiguous as we cannot decide on which leave type the update has to be done. \nYou may prefer to use the classic menus 'Leave Requests' and 'Allocation Requests' located in 'Human Resources \ Leaves' to manage the leave days of the employees if the configuration does not allow to use this field.") % (len(status_ids)))
status_id = status_ids and status_ids[0] or False
if not status_id:
return False
if diff > 0:
leave_id = holiday_obj.create(cr, uid, {'name': _('Allocation for %s') % employee.name, 'employee_id': employee.id, 'holiday_status_id': status_id, 'type': 'add', 'holiday_type': 'employee', 'number_of_days_temp': diff}, context=context)
elif diff < 0:
raise osv.except_osv(_('Warning!'), _('You cannot reduce validated allocation requests'))
else:
return False
for sig in ('confirm', 'validate', 'second_validate'):
holiday_obj.signal_workflow(cr, uid, [leave_id], sig)
return True
def _get_remaining_days(self, cr, uid, ids, name, args, context=None):
cr.execute("""SELECT
sum(h.number_of_days) as days,
h.employee_id
from
hr_holidays h
join hr_holidays_status s on (s.id=h.holiday_status_id)
where
h.state='validate' and
s.limit=False and
h.employee_id in %s
group by h.employee_id""", (tuple(ids),))
res = cr.dictfetchall()
remaining = {}
for r in res:
remaining[r['employee_id']] = r['days']
for employee_id in ids:
if not remaining.get(employee_id):
remaining[employee_id] = 0.0
return remaining
def _get_leave_status(self, cr, uid, ids, name, args, context=None):
holidays_obj = self.pool.get('hr.holidays')
holidays_id = holidays_obj.search(cr, uid,
[('employee_id', 'in', ids), ('date_from','<=',time.strftime('%Y-%m-%d %H:%M:%S')),
('date_to','>=',time.strftime('%Y-%m-%d 23:59:59')),('type','=','remove'),('state','not in',('cancel','refuse'))],
context=context)
result = {}
for id in ids:
result[id] = {
'current_leave_state': False,
'current_leave_id': False,
'leave_date_from':False,
'leave_date_to':False,
}
for holiday in self.pool.get('hr.holidays').browse(cr, uid, holidays_id, context=context):
result[holiday.employee_id.id]['leave_date_from'] = holiday.date_from
result[holiday.employee_id.id]['leave_date_to'] = holiday.date_to
result[holiday.employee_id.id]['current_leave_state'] = holiday.state
result[holiday.employee_id.id]['current_leave_id'] = holiday.holiday_status_id.id
return result
def _leaves_count(self, cr, uid, ids, field_name, arg, context=None):
Holidays = self.pool['hr.holidays']
return {
employee_id: Holidays.search_count(cr,uid, [('employee_id', '=', employee_id), ('type', '=', 'remove')], context=context)
for employee_id in ids
}
_columns = {
'remaining_leaves': fields.function(_get_remaining_days, string='Remaining Legal Leaves', fnct_inv=_set_remaining_days, type="float", help='Total number of legal leaves allocated to this employee, change this value to create allocation/leave request. Total based on all the leave types without overriding limit.'),
'current_leave_state': fields.function(_get_leave_status, multi="leave_status", string="Current Leave Status", type="selection",
selection=[('draft', 'New'), ('confirm', 'Waiting Approval'), ('refuse', 'Refused'),
('validate1', 'Waiting Second Approval'), ('validate', 'Approved'), ('cancel', 'Cancelled')]),
'current_leave_id': fields.function(_get_leave_status, multi="leave_status", string="Current Leave Type",type='many2one', relation='hr.holidays.status'),
'leave_date_from': fields.function(_get_leave_status, multi='leave_status', type='date', string='From Date'),
'leave_date_to': fields.function(_get_leave_status, multi='leave_status', type='date', string='To Date'),
'leaves_count': fields.function(_leaves_count, type='integer', string='Leaves'),
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |

| kartoza/geonode | geonode/base/management/commands/restore.py | 2 | 21256 |
# -*- coding: utf-8 -*-
#########################################################################
#
# Copyright (C) 2018 OSGeo
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#########################################################################
try:
import json
except ImportError:
from django.utils import simplejson as json
import traceback
import os
import time
import shutil
import requests
import tempfile
import helpers
from helpers import Config
from distutils import dir_util
from requests.auth import HTTPBasicAuth
from optparse import make_option
from geonode.utils import designals, resignals
from django.conf import settings
from django.core.management import call_command
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
help = 'Restore the GeoNode application data'
option_list = BaseCommand.option_list + Config.geoserver_option_list + (
Config.option,
make_option(
'-i',
'--ignore-errors',
action='store_true',
dest='ignore_errors',
default=False,
help='Continue even if errors are encountered.'),
make_option(
'-f',
'--force',
action='store_true',
dest='force_exec',
default=False,
help='Forces the execution without asking for confirmation.'),
make_option(
'--skip-geoserver',
action='store_true',
default=False,
help='Skips the geoserver backup restore'),
make_option(
'--backup-file',
dest='backup_file',
type="string",
default=None,
help='Backup archive containing GeoNode data to restore.'),
make_option(
'--backup-dir',
dest='backup_dir',
type="string",
default=None,
help='Backup directory containing GeoNode data to restore.'))
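# Typical invocations (illustrative; paths are placeholders):
#   python manage.py restore --backup-file /path/to/backup.zip
#   python manage.py restore --backup-dir /path/to/backup_dir -f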
def restore_geoserver_backup(self, settings, target_folder):
"""Restore GeoServer Catalog"""
url = settings.OGC_SERVER['default']['PUBLIC_LOCATION']
user = settings.OGC_SERVER['default']['USER']
passwd = settings.OGC_SERVER['default']['PASSWORD']
geoserver_bk_file = os.path.join(target_folder, 'geoserver_catalog.zip')
if not os.path.exists(geoserver_bk_file):
print('Skipping geoserver restore: ' +
'file "{}" not found.'.format(geoserver_bk_file))
return
print "Restoring 'GeoServer Catalog ["+url+"]' into '"+geoserver_bk_file+"'."
# Best Effort Restore: 'options': {'option': ['BK_BEST_EFFORT=true']}
data = {'restore': {'archiveFile': geoserver_bk_file, 'options': {}}}
headers = {'Content-type': 'application/json'}
r = requests.post(url + 'rest/br/restore/', data=json.dumps(data),
headers=headers, auth=HTTPBasicAuth(user, passwd))
error_backup = 'Could not successfully restore GeoServer ' + \
'catalog [{}rest/br/backup/]: {} - {}'
if (r.status_code > 201):
try:
gs_backup = r.json()
except ValueError:
raise ValueError(error_backup.format(url, r.status_code, r.text))
gs_bk_exec_id = gs_backup['restore']['execution']['id']
r = requests.get(url + 'rest/br/restore/' + str(gs_bk_exec_id) + '.json',
auth=HTTPBasicAuth(user, passwd))
if (r.status_code == 200):
try:
gs_backup = r.json()
except ValueError:
raise ValueError(error_backup.format(url, r.status_code, r.text))
gs_bk_progress = gs_backup['restore']['execution']['progress']
print gs_bk_progress
raise ValueError(error_backup.format(url, r.status_code, r.text))
else:
try:
gs_backup = r.json()
except ValueError:
raise ValueError(error_backup.format(url, r.status_code, r.text))
gs_bk_exec_id = gs_backup['restore']['execution']['id']
r = requests.get(url + 'rest/br/restore/' + str(gs_bk_exec_id) + '.json',
auth=HTTPBasicAuth(user, passwd))
if (r.status_code == 200):
gs_bk_exec_status = gs_backup['restore']['execution']['status']
gs_bk_exec_progress = gs_backup['restore']['execution']['progress']
gs_bk_exec_progress_updated = '0/0'
while (gs_bk_exec_status != 'COMPLETED' and gs_bk_exec_status != 'FAILED'):
if (gs_bk_exec_progress != gs_bk_exec_progress_updated):
gs_bk_exec_progress_updated = gs_bk_exec_progress
r = requests.get(url + 'rest/br/restore/' + str(gs_bk_exec_id) + '.json',
auth=HTTPBasicAuth(user, passwd))
if (r.status_code == 200):
try:
gs_backup = r.json()
except ValueError:
raise ValueError(error_backup.format(url, r.status_code, r.text))
gs_bk_exec_status = gs_backup['restore']['execution']['status']
gs_bk_exec_progress = gs_backup['restore']['execution']['progress']
print str(gs_bk_exec_status) + ' - ' + gs_bk_exec_progress
time.sleep(3)
else:
raise ValueError(error_backup.format(url, r.status_code, r.text))
else:
raise ValueError(error_backup.format(url, r.status_code, r.text))
def restore_geoserver_raster_data(self, config, settings, target_folder):
if (config.gs_data_dir):
if (config.gs_dump_raster_data):
gs_data_folder = os.path.join(target_folder, 'gs_data_dir', 'data', 'geonode')
if not os.path.exists(gs_data_folder):
print('Skipping geoserver raster data restore: ' +
'directory "{}" not found.'.format(gs_data_folder))
return
# Restore '$config.gs_data_dir/data/geonode'
gs_data_root = os.path.join(config.gs_data_dir, 'data', 'geonode')
if not os.path.isabs(gs_data_root):
gs_data_root = os.path.join(settings.PROJECT_ROOT, '..', gs_data_root)
try:
helpers.chmod_tree(gs_data_root)
except:
print 'Original GeoServer Data Dir "{}" must be writable by the current user. \
Do not forget to copy it first. It will be wiped-out by the Restore procedure!'.format(gs_data_root)
raise
try:
shutil.rmtree(gs_data_root)
print 'Cleaned out old GeoServer Data Dir: ' + gs_data_root
except:
pass
if not os.path.exists(gs_data_root):
os.makedirs(gs_data_root)
helpers.copy_tree(gs_data_folder, gs_data_root)
helpers.chmod_tree(gs_data_root)
print "GeoServer Uploaded Data Restored to '"+gs_data_root+"'."
# Cleanup '$config.gs_data_dir/gwc-layers'
gwc_layers_root = os.path.join(config.gs_data_dir, 'gwc-layers')
if not os.path.isabs(gwc_layers_root):
gwc_layers_root = os.path.join(settings.PROJECT_ROOT, '..', gwc_layers_root)
try:
shutil.rmtree(gwc_layers_root)
print 'Cleaned out old GeoServer GWC Layers Config: ' + gwc_layers_root
except:
pass
if not os.path.exists(gwc_layers_root):
os.makedirs(gwc_layers_root)
def restore_geoserver_vector_data(self, config, settings, target_folder):
"""Restore Vectorial Data from DB"""
if (config.gs_dump_vector_data):
gs_data_folder = os.path.join(target_folder, 'gs_data_dir', 'data', 'geonode')
if not os.path.exists(gs_data_folder):
print('Skipping geoserver vector data restore: ' +
'directory "{}" not found.'.format(gs_data_folder))
return
datastore = settings.OGC_SERVER['default']['DATASTORE']
if (datastore):
ogc_db_name = settings.DATABASES[datastore]['NAME']
ogc_db_user = settings.DATABASES[datastore]['USER']
ogc_db_passwd = settings.DATABASES[datastore]['PASSWORD']
ogc_db_host = settings.DATABASES[datastore]['HOST']
ogc_db_port = settings.DATABASES[datastore]['PORT']
helpers.restore_db(config, ogc_db_name, ogc_db_user, ogc_db_port,
ogc_db_host, ogc_db_passwd, gs_data_folder)
def restore_geoserver_externals(self, config, settings, target_folder):
"""Restore external references from XML files"""
external_folder = os.path.join(target_folder, helpers.EXTERNAL_ROOT)
if os.path.exists(external_folder):
dir_util.copy_tree(external_folder, '/')
def handle(self, **options):
# ignore_errors = options.get('ignore_errors')
config = Config(options)
force_exec = options.get('force_exec')
backup_file = options.get('backup_file')
skip_geoserver = options.get('skip_geoserver')
backup_dir = options.get('backup_dir')
if not any([backup_file, backup_dir]):
raise CommandError("Mandatory option (--backup-file|--backup-dir)")
if all([backup_file, backup_dir]):
raise CommandError("Exclusive option (--backup-file|--backup-dir)")
if backup_file and not os.path.isfile(backup_file):
raise CommandError("Provided '--backup-file' is not a file")
if backup_dir and not os.path.isdir(backup_dir):
raise CommandError("Provided '--backup-dir' is not a directory")
print "Before proceeding with the Restore, please ensure that:"
print " 1. The backend (DB or whatever) is accessible and you have rights"
print " 2. The GeoServer is up and running and reachable from this machine"
message = 'WARNING: The restore will overwrite ALL GeoNode data. Do you want to proceed?'
if force_exec or helpers.confirm(prompt=message, resp=False):
target_folder = backup_dir
if backup_file:
# Create Target Folder
restore_folder = os.path.join(tempfile.gettempdir(), 'restore')
if not os.path.exists(restore_folder):
os.makedirs(restore_folder)
# Extract ZIP Archive to Target Folder
target_folder = helpers.unzip_file(backup_file, restore_folder)
# Write Checks
media_root = settings.MEDIA_ROOT
media_folder = os.path.join(target_folder, helpers.MEDIA_ROOT)
static_root = settings.STATIC_ROOT
static_folder = os.path.join(target_folder, helpers.STATIC_ROOT)
static_folders = settings.STATICFILES_DIRS
static_files_folders = os.path.join(target_folder, helpers.STATICFILES_DIRS)
template_folders = settings.TEMPLATE_DIRS
template_files_folders = os.path.join(target_folder, helpers.TEMPLATE_DIRS)
locale_folders = settings.LOCALE_PATHS
locale_files_folders = os.path.join(target_folder, helpers.LOCALE_PATHS)
try:
print("[Sanity Check] Full Write Access to '{}' ...".format(media_root))
helpers.chmod_tree(media_root)
print("[Sanity Check] Full Write Access to '{}' ...".format(static_root))
helpers.chmod_tree(static_root)
for static_files_folder in static_folders:
print("[Sanity Check] Full Write Access to '{}' ...".format(static_files_folder))
helpers.chmod_tree(static_files_folder)
for template_files_folder in template_folders:
print("[Sanity Check] Full Write Access to '{}' ...".format(template_files_folder))
helpers.chmod_tree(template_files_folder)
for locale_files_folder in locale_folders:
print("[Sanity Check] Full Write Access to '{}' ...".format(locale_files_folder))
helpers.chmod_tree(locale_files_folder)
except:
print("...Sanity Checks on Folder failed. Please make sure that the current user has full WRITE access to the above folders (and sub-folders or files).")
print("Reason:")
raise
if not skip_geoserver:
self.restore_geoserver_backup(settings, target_folder)
self.restore_geoserver_raster_data(config, settings, target_folder)
self.restore_geoserver_vector_data(config, settings, target_folder)
print("Restoring geoserver external resources")
self.restore_geoserver_externals(config, settings, target_folder)
else:
print("Skipping geoserver backup restore")
# Prepare Target DB
try:
call_command('migrate', interactive=False, load_initial_data=False)
db_name = settings.DATABASES['default']['NAME']
db_user = settings.DATABASES['default']['USER']
db_port = settings.DATABASES['default']['PORT']
db_host = settings.DATABASES['default']['HOST']
db_passwd = settings.DATABASES['default']['PASSWORD']
helpers.patch_db(db_name, db_user, db_port, db_host, db_passwd, settings.MONITORING_ENABLED)
except:
traceback.print_exc()
try:
# Deactivate GeoNode Signals
print "Deactivating GeoNode Signals..."
designals()
print "...done!"
# Flush DB
try:
db_name = settings.DATABASES['default']['NAME']
db_user = settings.DATABASES['default']['USER']
db_port = settings.DATABASES['default']['PORT']
db_host = settings.DATABASES['default']['HOST']
db_passwd = settings.DATABASES['default']['PASSWORD']
helpers.flush_db(db_name, db_user, db_port, db_host, db_passwd)
except:
try:
call_command('flush', interactive=False, load_initial_data=False)
except:
traceback.print_exc()
raise
# Restore Fixtures
for app_name, dump_name in zip(config.app_names, config.dump_names):
fixture_file = os.path.join(target_folder, dump_name+'.json')
print "Deserializing "+fixture_file
try:
call_command('loaddata', fixture_file, app_label=app_name)
except:
traceback.print_exc()
print "WARNING: No valid fixture data found for '"+dump_name+"'."
# helpers.load_fixture(app_name, fixture_file)
raise
# Restore Media Root
try:
shutil.rmtree(media_root)
except:
pass
if not os.path.exists(media_root):
os.makedirs(media_root)
helpers.copy_tree(media_folder, media_root)
helpers.chmod_tree(media_root)
print "Media Files Restored into '"+media_root+"'."
# Restore Static Root
try:
shutil.rmtree(static_root)
except:
pass
if not os.path.exists(static_root):
os.makedirs(static_root)
helpers.copy_tree(static_folder, static_root)
helpers.chmod_tree(static_root)
print "Static Root Restored into '"+static_root+"'."
# Restore Static Folders
for static_files_folder in static_folders:
try:
shutil.rmtree(static_files_folder)
except:
pass
if not os.path.exists(static_files_folder):
os.makedirs(static_files_folder)
helpers.copy_tree(os.path.join(static_files_folders,
os.path.basename(os.path.normpath(static_files_folder))),
static_files_folder)
helpers.chmod_tree(static_files_folder)
print "Static Files Restored into '"+static_files_folder+"'."
# Restore Template Folders
for template_files_folder in template_folders:
try:
shutil.rmtree(template_files_folder)
except:
pass
if not os.path.exists(template_files_folder):
os.makedirs(template_files_folder)
helpers.copy_tree(os.path.join(template_files_folders,
os.path.basename(os.path.normpath(template_files_folder))),
template_files_folder)
helpers.chmod_tree(template_files_folder)
print "Template Files Restored into '"+template_files_folder+"'."
# Restore Locale Folders
for locale_files_folder in locale_folders:
try:
shutil.rmtree(locale_files_folder)
except:
pass
if not os.path.exists(locale_files_folder):
os.makedirs(locale_files_folder)
helpers.copy_tree(os.path.join(locale_files_folders,
os.path.basename(os.path.normpath(locale_files_folder))),
locale_files_folder)
helpers.chmod_tree(locale_files_folder)
print "Locale Files Restored into '"+locale_files_folder+"'."
call_command('collectstatic', interactive=False)
# Cleanup DB
try:
db_name = settings.DATABASES['default']['NAME']
db_user = settings.DATABASES['default']['USER']
db_port = settings.DATABASES['default']['PORT']
db_host = settings.DATABASES['default']['HOST']
db_passwd = settings.DATABASES['default']['PASSWORD']
helpers.cleanup_db(db_name, db_user, db_port, db_host, db_passwd)
except:
traceback.print_exc()
return str(target_folder)
finally:
# Reactivate GeoNode Signals
print "Reactivating GeoNode Signals..."
resignals()
print "...done!"
call_command('migrate', interactive=False, load_initial_data=False, fake=True)
print "HINT: If you migrated from another site, do not forget to run the command 'migrate_baseurl' to fix Links"
print " e.g.: DJANGO_SETTINGS_MODULE=my_geonode.settings python manage.py migrate_baseurl --source-address=my-host-dev.geonode.org --target-address=my-host-prod.geonode.org"
print "Restore finished. Please find restored files and dumps into:"
|
gpl-3.0
|
CityOfNewYork/NYCOpenRecords
|
app/lib/onelogin/saml2/logout_request.py
|
1
|
13394
|
# -*- coding: utf-8 -*-
""" OneLogin_Saml2_Logout_Request class
Copyright (c) 2010-2018 OneLogin, Inc.
MIT License
Logout Request class of OneLogin's Python Toolkit.
"""
from app.lib.onelogin.saml2 import compat
from app.lib.onelogin.saml2.constants import OneLogin_Saml2_Constants
from app.lib.onelogin.saml2.utils import OneLogin_Saml2_Utils, OneLogin_Saml2_Error, OneLogin_Saml2_ValidationError
from app.lib.onelogin.saml2.xml_templates import OneLogin_Saml2_Templates
from app.lib.onelogin.saml2.xml_utils import OneLogin_Saml2_XML
class OneLogin_Saml2_Logout_Request(object):
"""
This class handles a Logout Request.
    Builds a Logout Request object and validates it.
"""
def __init__(self, settings, request=None, name_id=None, session_index=None, nq=None, name_id_format=None):
"""
Constructs the Logout Request object.
:param settings: Setting data
:type settings: OneLogin_Saml2_Settings
        :param request: Optional. A LogoutRequest to be loaded instead of building a new one.
:type request: string
:param name_id: The NameID that will be set in the LogoutRequest.
:type name_id: string
:param session_index: SessionIndex that identifies the session of the user.
:type session_index: string
:param nq: IDP Name Qualifier
:type: string
:param name_id_format: The NameID Format that will be set in the LogoutRequest.
:type: string
"""
self.__settings = settings
self.__error = None
self.id = None
if request is None:
sp_data = self.__settings.get_sp_data()
idp_data = self.__settings.get_idp_data()
security = self.__settings.get_security_data()
uid = OneLogin_Saml2_Utils.generate_unique_id()
self.id = uid
issue_instant = OneLogin_Saml2_Utils.parse_time_to_SAML(OneLogin_Saml2_Utils.now())
cert = None
if security['nameIdEncrypted']:
exists_multix509enc = 'x509certMulti' in idp_data and \
'encryption' in idp_data['x509certMulti'] and \
idp_data['x509certMulti']['encryption']
if exists_multix509enc:
cert = idp_data['x509certMulti']['encryption'][0]
else:
cert = idp_data['x509cert']
if name_id is not None:
if not name_id_format and sp_data['NameIDFormat'] != OneLogin_Saml2_Constants.NAMEID_UNSPECIFIED:
name_id_format = sp_data['NameIDFormat']
else:
name_id_format = OneLogin_Saml2_Constants.NAMEID_ENTITY
sp_name_qualifier = None
if name_id_format == OneLogin_Saml2_Constants.NAMEID_ENTITY:
name_id = idp_data['entityId']
nq = None
elif nq is not None:
                # Only include SPNameQualifier when a NameQualifier is provided
sp_name_qualifier = sp_data['entityId']
name_id_obj = OneLogin_Saml2_Utils.generate_name_id(
name_id,
sp_name_qualifier,
name_id_format,
cert,
False,
nq
)
if session_index:
session_index_str = '<samlp:SessionIndex>%s</samlp:SessionIndex>' % session_index
else:
session_index_str = ''
logout_request = OneLogin_Saml2_Templates.LOGOUT_REQUEST % \
{
'id': uid,
'issue_instant': issue_instant,
'single_logout_url': idp_data['singleLogoutService']['url'],
'entity_id': sp_data['entityId'],
'name_id': name_id_obj,
'session_index': session_index_str,
}
else:
logout_request = OneLogin_Saml2_Utils.decode_base64_and_inflate(request, ignore_zip=True)
self.id = self.get_id(logout_request)
self.__logout_request = compat.to_string(logout_request)
def get_request(self, deflate=True):
"""
        Returns the Logout Request deflated and base64-encoded
        :param deflate: Whether to deflate the request before base64-encoding it
        :type: bool
        :return: Logout Request, optionally deflated, base64-encoded
        :rtype: str object
"""
if deflate:
request = OneLogin_Saml2_Utils.deflate_and_base64_encode(self.__logout_request)
else:
request = OneLogin_Saml2_Utils.b64encode(self.__logout_request)
return request
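    # Hedged usage sketch (assumes a configured OneLogin_Saml2_Settings
    # instance named ``settings``; not part of this module):
    #
    #     req = OneLogin_Saml2_Logout_Request(settings, name_id='user@example.com')
    #     saml_request = req.get_request()            # deflated + base64
    #     raw_b64 = req.get_request(deflate=False)    # base64 only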
def get_xml(self):
"""
Returns the XML that will be sent as part of the request
or that was received at the SP
:return: XML request body
:rtype: string
"""
return self.__logout_request
@staticmethod
def get_id(request):
"""
Returns the ID of the Logout Request
:param request: Logout Request Message
:type request: string|DOMDocument
:return: string ID
:rtype: str object
"""
elem = OneLogin_Saml2_XML.to_etree(request)
return elem.get('ID', None)
@staticmethod
def get_nameid_data(request, key=None):
"""
        Gets the NameID Data of the Logout Request
:param request: Logout Request Message
:type request: string|DOMDocument
:param key: The SP key
:type key: string
:return: Name ID Data (Value, Format, NameQualifier, SPNameQualifier)
:rtype: dict
"""
elem = OneLogin_Saml2_XML.to_etree(request)
name_id = None
encrypted_entries = OneLogin_Saml2_XML.query(elem, '/samlp:LogoutRequest/saml:EncryptedID')
if len(encrypted_entries) == 1:
if key is None:
raise OneLogin_Saml2_Error(
'Private Key is required in order to decrypt the NameID, check settings',
OneLogin_Saml2_Error.PRIVATE_KEY_NOT_FOUND
)
encrypted_data_nodes = OneLogin_Saml2_XML.query(elem, '/samlp:LogoutRequest/saml:EncryptedID/xenc:EncryptedData')
if len(encrypted_data_nodes) == 1:
encrypted_data = encrypted_data_nodes[0]
name_id = OneLogin_Saml2_Utils.decrypt_element(encrypted_data, key)
else:
entries = OneLogin_Saml2_XML.query(elem, '/samlp:LogoutRequest/saml:NameID')
if len(entries) == 1:
name_id = entries[0]
if name_id is None:
raise OneLogin_Saml2_ValidationError(
'NameID not found in the Logout Request',
OneLogin_Saml2_ValidationError.NO_NAMEID
)
name_id_data = {
'Value': OneLogin_Saml2_XML.element_text(name_id)
}
for attr in ['Format', 'SPNameQualifier', 'NameQualifier']:
if attr in name_id.attrib:
name_id_data[attr] = name_id.attrib[attr]
return name_id_data
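    # Illustrative return value (example data only):
    #   {'Value': 'user@example.com',
    #    'Format': 'urn:oasis:names:tc:SAML:2.0:nameid-format:persistent',
    #    'SPNameQualifier': 'https://sp.example.com/metadata'}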
@staticmethod
def get_nameid(request, key=None):
"""
Gets the NameID of the Logout Request Message
:param request: Logout Request Message
:type request: string|DOMDocument
:param key: The SP key
:type key: string
:return: Name ID Value
:rtype: string
"""
name_id = OneLogin_Saml2_Logout_Request.get_nameid_data(request, key)
return name_id['Value']
@staticmethod
def get_nameid_format(request, key=None):
"""
Gets the NameID Format of the Logout Request Message
:param request: Logout Request Message
:type request: string|DOMDocument
:param key: The SP key
:type key: string
:return: Name ID Format
:rtype: string
"""
name_id_format = None
name_id_data = OneLogin_Saml2_Logout_Request.get_nameid_data(request, key)
if name_id_data and 'Format' in name_id_data.keys():
name_id_format = name_id_data['Format']
return name_id_format
@staticmethod
def get_issuer(request):
"""
Gets the Issuer of the Logout Request Message
:param request: Logout Request Message
:type request: string|DOMDocument
:return: The Issuer
:rtype: string
"""
elem = OneLogin_Saml2_XML.to_etree(request)
issuer = None
issuer_nodes = OneLogin_Saml2_XML.query(elem, '/samlp:LogoutRequest/saml:Issuer')
if len(issuer_nodes) == 1:
issuer = OneLogin_Saml2_XML.element_text(issuer_nodes[0])
return issuer
@staticmethod
def get_session_indexes(request):
"""
Gets the SessionIndexes from the Logout Request
:param request: Logout Request Message
:type request: string|DOMDocument
        :return: The SessionIndex values
:rtype: list
"""
elem = OneLogin_Saml2_XML.to_etree(request)
session_indexes = []
session_index_nodes = OneLogin_Saml2_XML.query(elem, '/samlp:LogoutRequest/samlp:SessionIndex')
for session_index_node in session_index_nodes:
session_indexes.append(OneLogin_Saml2_XML.element_text(session_index_node))
return session_indexes
def is_valid(self, request_data, raise_exceptions=False):
"""
Checks if the Logout Request received is valid
:param request_data: Request Data
:type request_data: dict
:param raise_exceptions: Whether to return false on failure or raise an exception
:type raise_exceptions: Boolean
        :return: Whether the Logout Request is valid
:rtype: boolean
"""
self.__error = None
try:
root = OneLogin_Saml2_XML.to_etree(self.__logout_request)
idp_data = self.__settings.get_idp_data()
idp_entity_id = idp_data['entityId']
get_data = ('get_data' in request_data and request_data['get_data']) or dict()
if self.__settings.is_strict():
res = OneLogin_Saml2_XML.validate_xml(root, 'saml-schema-protocol-2.0.xsd', self.__settings.is_debug_active())
if isinstance(res, str):
raise OneLogin_Saml2_ValidationError(
                        'Invalid SAML Logout Request. It does not match saml-schema-protocol-2.0.xsd',
OneLogin_Saml2_ValidationError.INVALID_XML_FORMAT
)
security = self.__settings.get_security_data()
current_url = OneLogin_Saml2_Utils.get_self_url_no_query(request_data)
# Check NotOnOrAfter
if root.get('NotOnOrAfter', None):
na = OneLogin_Saml2_Utils.parse_SAML_to_time(root.get('NotOnOrAfter'))
if na <= OneLogin_Saml2_Utils.now():
raise OneLogin_Saml2_ValidationError(
                            'Could not validate timestamp: expired. Check system clock.',
OneLogin_Saml2_ValidationError.RESPONSE_EXPIRED
)
# Check destination
if root.get('Destination', None):
destination = root.get('Destination')
if destination != '':
if current_url not in destination:
raise OneLogin_Saml2_ValidationError(
'The LogoutRequest was received at '
'%(currentURL)s instead of %(destination)s' %
{
'currentURL': current_url,
'destination': destination,
},
OneLogin_Saml2_ValidationError.WRONG_DESTINATION
)
# Check issuer
issuer = OneLogin_Saml2_Logout_Request.get_issuer(root)
if issuer is not None and issuer != idp_entity_id:
raise OneLogin_Saml2_ValidationError(
'Invalid issuer in the Logout Request (expected %(idpEntityId)s, got %(issuer)s)' %
{
'idpEntityId': idp_entity_id,
'issuer': issuer
},
OneLogin_Saml2_ValidationError.WRONG_ISSUER
)
if security['wantMessagesSigned']:
if 'Signature' not in get_data:
raise OneLogin_Saml2_ValidationError(
                            'The Message of the Logout Request is not signed and the SP requires it',
OneLogin_Saml2_ValidationError.NO_SIGNED_MESSAGE
)
return True
except Exception as err:
# pylint: disable=R0801
self.__error = str(err)
debug = self.__settings.is_debug_active()
if debug:
print(err)
if raise_exceptions:
raise
return False
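    # Hedged usage sketch: ``request_data`` mirrors the toolkit's request
    # dict, e.g. {'https': 'on', 'http_host': 'sp.example.com',
    # 'script_name': '/sls', 'get_data': {'SAMLRequest': '...'}};
    # is_valid() returns False and stores the cause (see get_error) unless
    # raise_exceptions is True.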
def get_error(self):
"""
After executing a validation process, if it fails this method returns the cause
"""
return self.__error
|
apache-2.0
|
tux-00/ansible
|
lib/ansible/modules/cloud/vmware/vmware_vmkernel_ip_config.py
|
70
|
3992
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Joseph Callen <jcallen () csc.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: vmware_vmkernel_ip_config
short_description: Configure the VMkernel IP Address
description:
- Configure the VMkernel IP Address
version_added: 2.0
author: "Joseph Callen (@jcpowermac), Russell Teague (@mtnbikenc)"
notes:
- Tested on vSphere 5.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
vmk_name:
description:
- VMkernel interface name
required: True
ip_address:
description:
- IP address to assign to VMkernel interface
required: True
subnet_mask:
description:
- Subnet Mask to assign to VMkernel interface
required: True
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
# Example command from Ansible Playbook
- name: Configure IP address on ESX host
local_action:
module: vmware_vmkernel_ip_config
hostname: esxi_hostname
username: esxi_username
password: esxi_password
vmk_name: vmk0
ip_address: 10.0.0.10
subnet_mask: 255.255.255.0
'''
try:
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
def configure_vmkernel_ip_address(host_system, vmk_name, ip_address, subnet_mask):
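    # Walk the host's current vnic configuration; when the named VMkernel
    # interface is found and its address differs from the requested one,
    # switch it to a static IP (DHCP off) and push the updated spec via
    # UpdateVirtualNic. Returns True if a change was applied, else False.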
host_config_manager = host_system.configManager
host_network_system = host_config_manager.networkSystem
for vnic in host_network_system.networkConfig.vnic:
if vnic.device == vmk_name:
spec = vnic.spec
if spec.ip.ipAddress != ip_address:
spec.ip.dhcp = False
spec.ip.ipAddress = ip_address
spec.ip.subnetMask = subnet_mask
host_network_system.UpdateVirtualNic(vmk_name, spec)
return True
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(vmk_name=dict(required=True, type='str'),
ip_address=dict(required=True, type='str'),
subnet_mask=dict(required=True, type='str')))
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
if not HAS_PYVMOMI:
module.fail_json(msg='pyvmomi is required for this module')
vmk_name = module.params['vmk_name']
ip_address = module.params['ip_address']
subnet_mask = module.params['subnet_mask']
try:
content = connect_to_api(module, False)
host = get_all_objs(content, [vim.HostSystem])
if not host:
module.fail_json(msg="Unable to locate Physical Host.")
host_system = host.keys()[0]
changed = configure_vmkernel_ip_address(host_system, vmk_name, ip_address, subnet_mask)
module.exit_json(changed=changed)
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
module.fail_json(msg=method_fault.msg)
except Exception as e:
module.fail_json(msg=str(e))
from ansible.module_utils.vmware import *
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
|
gpl-3.0
|
Einsteinish/PyTune3
|
vendor/feedvalidator/author.py
|
16
|
1386
|
"""$Id: author.py 699 2006-09-25 02:01:18Z rubys $"""
__author__ = "Sam Ruby <http://intertwingly.net/> and Mark Pilgrim <http://diveintomark.org/>"
__version__ = "$Revision: 699 $"
__date__ = "$Date: 2006-09-25 02:01:18 +0000 (Mon, 25 Sep 2006) $"
__copyright__ = "Copyright (c) 2002 Sam Ruby and Mark Pilgrim"
from base import validatorBase
from validators import *
#
# author element.
#
class author(validatorBase):
def getExpectedAttrNames(self):
return [(u'http://www.w3.org/1999/02/22-rdf-syntax-ns#', u'parseType')]
def validate(self):
if not "name" in self.children and not "atom_name" in self.children:
self.log(MissingElement({"parent":self.name, "element":"name"}))
def do_name(self):
return nonhtml(), nonemail(), nonblank(), noduplicates()
def do_email(self):
return addr_spec(), noduplicates()
def do_uri(self):
return nonblank(), rfc3987(), nows(), noduplicates()
def do_foaf_workplaceHomepage(self):
return rdfResourceURI()
def do_foaf_homepage(self):
return rdfResourceURI()
def do_foaf_weblog(self):
return rdfResourceURI()
def do_foaf_plan(self):
return text()
def do_foaf_firstName(self):
return text()
def do_xhtml_div(self):
from content import diveater
return diveater()
# RSS/Atom support
do_atom_name = do_name
do_atom_email = do_email
do_atom_uri = do_uri
|
mit
|
xmaruto/mcord
|
xos/synchronizers/ec2/deleters/slice_deployment_deleter.py
|
4
|
1652
|
from core.models import Slice, SliceDeployments, User
from synchronizers.base.deleter import Deleter
from openstack.driver import OpenStackDriver
class SliceDeploymentsDeleter(Deleter):
model='SliceDeployments'
def call(self, pk):
slice_deployment = SliceDeployments.objects.get(pk=pk)
user = User.objects.get(id=slice_deployment.slice.creator.id)
driver = OpenStackDriver().admin_driver(deployment=slice_deployment.deployment.name)
client_driver = driver.client_driver(caller=user,
tenant=slice_deployment.slice.name,
deployment=slice_deployment.deployment.name)
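        # Tear down in dependency order: detach the router interface first,
        # then delete the subnet, router, network, and finally the tenant.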
if slice_deployment.router_id and slice_deployment.subnet_id:
client_driver.delete_router_interface(slice_deployment.router_id, slice_deployment.subnet_id)
if slice_deployment.subnet_id:
client_driver.delete_subnet(slice_deployment.subnet_id)
if slice_deployment.router_id:
client_driver.delete_router(slice_deployment.router_id)
if slice_deployment.network_id:
client_driver.delete_network(slice_deployment.network_id)
if slice_deployment.tenant_id:
driver.delete_tenant(slice_deployment.tenant_id)
# delete external route
#subnet = None
#subnets = client_driver.shell.quantum.list_subnets()['subnets']
#for snet in subnets:
# if snet['id'] == slice_deployment.subnet_id:
# subnet = snet
#if subnet:
# driver.delete_external_route(subnet)
slice_deployment.delete()
|
apache-2.0
|
majek/ziutek
|
ziutek/lrucache.py
|
1
|
1553
|
class LRUCache:
'''
>>> l = LRUCache(maxsize=3)
>>> l.add('a')
>>> l.add('b')
>>> l.add('c')
>>> l.add('d')
>>> l.to_list()
['b', 'c', 'd']
>>> l.add('b')
>>> l.to_list()
['c', 'b', 'd']
>>> l.add('b')
>>> l.to_list()
['c', 'd', 'b']
>>> l.add('b')
>>> l.to_list()
['c', 'd', 'b']
'''
def __init__(self, maxsize=100):
assert maxsize > 0
self.i = {}
self.l = [None] * maxsize
self.p = 0
self.maxsize = maxsize
def add(self, key):
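        # self.l is a circular buffer ordered oldest -> newest starting at
        # self.p (the next slot to overwrite); self.i maps key -> slot index.
        # A new key overwrites the oldest slot; a re-added key is swapped one
        # step toward the newest end per call (see the class doctests).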
if key not in self.i:
self.i[key] = self.p
self.l[self.p] = key
self.p = (self.p + 1) % self.maxsize
else:
a = self.i[key]
b = (a+1) % self.maxsize
if b == self.p:
return
keyb = self.l[b]
self.l[a], self.l[b] = self.l[b], self.l[a]
self.i[key] = b
self.i[keyb] = a
def __contains__(self, key):
'''
>>> l = LRUCache(maxsize=3)
>>> l.add('a')
>>> 'a' in l
True
'''
return (key in self.i)
def __len__(self):
'''
>>> l = LRUCache(maxsize=3)
>>> l.add('a')
>>> len(l)
1
'''
return len(self.i)
def to_list(self):
'''
>>> l = LRUCache(maxsize=3)
>>> l.add('a')
>>> l.to_list()
[None, None, 'a']
'''
return [self.l[(self.p+p) % self.maxsize] for p in range(self.maxsize)]
|
bsd-3-clause
|
seanlong/crosswalk
|
tools/increment-version.py
|
9
|
2640
|
#!/usr/bin/env python
# Copyright (c) 2013 Intel Corporation. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
increment-version.py -- Bump Beta or Canary version number across all required
files.
Crosswalk's versioning schema is "MAJOR.MINOR.BUILD.PATCH". Incrementing a beta
version will monotonically increase the PATCH number, while incrementing a
canary version will monotonically increase the BUILD number.
"""
import optparse
import os
import re
import sys
def PathFromRoot(path):
"""
Returns the absolute path to |path|, which is supposed to be relative to the
repository's root directory.
"""
return os.path.join(os.path.abspath(os.path.dirname(__file__)), '..', path)
def IncrementVersions(replacements):
"""
|replacements| is a dictionary whose keys are files (relative to the root of
  the repository) and values are regular expressions that match a section in the
file with the version number we want to increase.
The regular expression is expected to have 2 groups, the first matching
whatever precedes the version number and needs to stay the same, and the
second matching the number itself.
Each of the files specified will be overwritten with new version numbers.
"""
for path, regexp in replacements.iteritems():
# The paths are always relative to the repository's root directory.
path = PathFromRoot(path)
def _ReplacementFunction(match_obj):
version_number = int(match_obj.group(2))
return '%s%s' % (match_obj.group(1), version_number + 1)
contents = re.sub(regexp, _ReplacementFunction, open(path).read())
open(path, 'w').write(contents)
def Main():
option_parser = optparse.OptionParser()
option_parser.add_option(
'', '--type', choices=('beta', 'canary'), dest='release_type',
help='What part of the version number must be increased. \"beta\" '
'increases the patch version, \"canary\" increases the build '
'version.')
options, _ = option_parser.parse_args()
if options.release_type == 'beta':
replacements = {
'VERSION': r'(PATCH=)(\d+)',
'packaging/crosswalk.spec': r'(Version:\s+\d+\.\d+\.\d+\.)(\d+)',
}
IncrementVersions(replacements)
elif options.release_type == 'canary':
replacements = {
'VERSION': r'(BUILD=)(\d+)',
'packaging/crosswalk.spec': r'(Version:\s+\d+\.\d+\.)(\d+)',
}
IncrementVersions(replacements)
else:
print '--type is a required argument and has not been specified. Exiting.'
return 1
return 0
if __name__ == '__main__':
sys.exit(Main())
|
bsd-3-clause
|
mdworks2016/work_development
|
Python/20_Third_Certification/venv/lib/python3.7/site-packages/celery/exceptions.py
|
1
|
7700
|
# -*- coding: utf-8 -*-
"""Celery error types.
Error Hierarchy
===============
- :exc:`Exception`
- :exc:`celery.exceptions.CeleryError`
- :exc:`~celery.exceptions.ImproperlyConfigured`
- :exc:`~celery.exceptions.SecurityError`
- :exc:`~celery.exceptions.TaskPredicate`
- :exc:`~celery.exceptions.Ignore`
- :exc:`~celery.exceptions.Reject`
- :exc:`~celery.exceptions.Retry`
- :exc:`~celery.exceptions.TaskError`
- :exc:`~celery.exceptions.QueueNotFound`
- :exc:`~celery.exceptions.IncompleteStream`
- :exc:`~celery.exceptions.NotRegistered`
- :exc:`~celery.exceptions.AlreadyRegistered`
- :exc:`~celery.exceptions.TimeoutError`
- :exc:`~celery.exceptions.MaxRetriesExceededError`
- :exc:`~celery.exceptions.TaskRevokedError`
- :exc:`~celery.exceptions.InvalidTaskError`
- :exc:`~celery.exceptions.ChordError`
- :class:`kombu.exceptions.KombuError`
- :exc:`~celery.exceptions.OperationalError`
Raised when a transport connection error occurs while
        sending a message (be it a task or a remote control command).
.. note::
This exception does not inherit from
:exc:`~celery.exceptions.CeleryError`.
- **billiard errors** (prefork pool)
- :exc:`~celery.exceptions.SoftTimeLimitExceeded`
- :exc:`~celery.exceptions.TimeLimitExceeded`
- :exc:`~celery.exceptions.WorkerLostError`
- :exc:`~celery.exceptions.Terminated`
- :class:`UserWarning`
- :class:`~celery.exceptions.CeleryWarning`
- :class:`~celery.exceptions.AlwaysEagerIgnored`
- :class:`~celery.exceptions.DuplicateNodenameWarning`
- :class:`~celery.exceptions.FixupWarning`
- :class:`~celery.exceptions.NotConfigured`
- :exc:`BaseException`
- :exc:`SystemExit`
- :exc:`~celery.exceptions.WorkerTerminate`
- :exc:`~celery.exceptions.WorkerShutdown`
"""
from __future__ import absolute_import, unicode_literals
import numbers
from billiard.exceptions import (SoftTimeLimitExceeded, Terminated,
TimeLimitExceeded, WorkerLostError)
from kombu.exceptions import OperationalError
from .five import python_2_unicode_compatible, string_t
__all__ = (
# Warnings
'CeleryWarning',
'AlwaysEagerIgnored', 'DuplicateNodenameWarning',
'FixupWarning', 'NotConfigured',
# Core errors
'CeleryError',
'ImproperlyConfigured', 'SecurityError',
# Kombu (messaging) errors.
'OperationalError',
# Task semi-predicates
'TaskPredicate', 'Ignore', 'Reject', 'Retry',
# Task related errors.
'TaskError', 'QueueNotFound', 'IncompleteStream',
'NotRegistered', 'AlreadyRegistered', 'TimeoutError',
'MaxRetriesExceededError', 'TaskRevokedError',
'InvalidTaskError', 'ChordError',
# Billiard task errors.
'SoftTimeLimitExceeded', 'TimeLimitExceeded',
'WorkerLostError', 'Terminated',
# Deprecation warnings (forcing Python to emit them).
'CPendingDeprecationWarning', 'CDeprecationWarning',
# Worker shutdown semi-predicates (inherits from SystemExit).
'WorkerShutdown', 'WorkerTerminate',
)
UNREGISTERED_FMT = """\
Task of kind {0} never registered, please make sure it's imported.\
"""
class CeleryWarning(UserWarning):
"""Base class for all Celery warnings."""
class AlwaysEagerIgnored(CeleryWarning):
"""send_task ignores :setting:`task_always_eager` option."""
class DuplicateNodenameWarning(CeleryWarning):
"""Multiple workers are using the same nodename."""
class FixupWarning(CeleryWarning):
"""Fixup related warning."""
class NotConfigured(CeleryWarning):
"""Celery hasn't been configured, as no config module has been found."""
class CeleryError(Exception):
"""Base class for all Celery errors."""
class TaskPredicate(CeleryError):
"""Base class for task-related semi-predicates."""
@python_2_unicode_compatible
class Retry(TaskPredicate):
"""The task is to be retried later."""
#: Optional message describing context of retry.
message = None
#: Exception (if any) that caused the retry to happen.
exc = None
#: Time of retry (ETA), either :class:`numbers.Real` or
#: :class:`~datetime.datetime`.
when = None
def __init__(self, message=None, exc=None, when=None, **kwargs):
from kombu.utils.encoding import safe_repr
self.message = message
if isinstance(exc, string_t):
self.exc, self.excs = None, exc
else:
self.exc, self.excs = exc, safe_repr(exc) if exc else None
self.when = when
super(Retry, self).__init__(self, exc, when, **kwargs)
def humanize(self):
if isinstance(self.when, numbers.Number):
return 'in {0.when}s'.format(self)
return 'at {0.when}'.format(self)
def __str__(self):
if self.message:
return self.message
if self.excs:
return 'Retry {0}: {1}'.format(self.humanize(), self.excs)
return 'Retry {0}'.format(self.humanize())
def __reduce__(self):
return self.__class__, (self.message, self.excs, self.when)
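# Hedged usage sketch (not part of this module): inside a bound task,
# ``raise self.retry(exc=exc, countdown=5)`` raises a Retry carrying the
# original exception; the worker catches it and re-schedules the task.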
RetryTaskError = Retry # noqa: E305 XXX compat
class Ignore(TaskPredicate):
"""A task can raise this to ignore doing state updates."""
@python_2_unicode_compatible
class Reject(TaskPredicate):
"""A task can raise this if it wants to reject/re-queue the message."""
def __init__(self, reason=None, requeue=False):
self.reason = reason
self.requeue = requeue
super(Reject, self).__init__(reason, requeue)
def __repr__(self):
return 'reject requeue=%s: %s' % (self.requeue, self.reason)
class ImproperlyConfigured(CeleryError):
"""Celery is somehow improperly configured."""
class SecurityError(CeleryError):
"""Security related exception."""
class TaskError(CeleryError):
"""Task related errors."""
class QueueNotFound(KeyError, TaskError):
"""Task routed to a queue not in ``conf.queues``."""
class IncompleteStream(TaskError):
"""Found the end of a stream of data, but the data isn't complete."""
@python_2_unicode_compatible
class NotRegistered(KeyError, TaskError):
"""The task ain't registered."""
def __repr__(self):
return UNREGISTERED_FMT.format(self)
class AlreadyRegistered(TaskError):
"""The task is already registered."""
# XXX Unused
class TimeoutError(TaskError):
"""The operation timed out."""
class MaxRetriesExceededError(TaskError):
"""The tasks max restart limit has been exceeded."""
def __init__(self, *args, **kwargs):
self.task_args = kwargs.pop("task_args", [])
self.task_kwargs = kwargs.pop("task_kwargs", dict())
super(MaxRetriesExceededError, self).__init__(*args, **kwargs)
class TaskRevokedError(TaskError):
"""The task has been revoked, so no result available."""
class InvalidTaskError(TaskError):
"""The task has invalid data or ain't properly constructed."""
class ChordError(TaskError):
"""A task part of the chord raised an exception."""
class CPendingDeprecationWarning(PendingDeprecationWarning):
"""Warning of pending deprecation."""
class CDeprecationWarning(DeprecationWarning):
"""Warning of deprecation."""
class WorkerTerminate(SystemExit):
"""Signals that the worker should terminate immediately."""
SystemTerminate = WorkerTerminate # noqa: E305 XXX compat
class WorkerShutdown(SystemExit):
"""Signals that the worker should perform a warm shutdown."""
|
apache-2.0
|
racker/omnibus
|
source/db-5.0.26.NC/dist/winmsi/genWix.py
|
2
|
10906
|
#
#
# genWix.py is used to generate a WiX .wxs format file that
# can be compiled by the candle.exe WiX compiler.
#
# Usage: python genWix.py <output_file>
#
# The current directory is expected to be the top of a tree
# of built programs, libraries, documentation and files.
#
# The list of directories traversed is at the bottom of this script,
# in "main." Extra directories that do not exist are fine and will
# be ignored. That makes the script a bit more general-purpose.
#
# "Excluded" directories/files are listed below in the GenWix class
# constructor in the excludes variable. These will *not* be included
# in packaging.
#
# The output file is expected to be post-processed using XQuery Update
# to add ComponentGroup elements for the various WiX Feature elements.
#
# The generated output for each directory traversed will look like:
# <Directory Id="dir_dirname_N" Name="dirname">
# <Component DiskId="1" Guid="..." Id="some_id" KeyPath="yes">...
# <File Id="..." Name="..." Source="pathtofile"/>
# <File.../>
# </Component>
# </Directory>
#
# Subdirectories are new elements under each top-level Directory element
#
# NOTE: at this time each top-level directory is its own Component. This
# mechanism does NOT generate multiple Components in a single Directory.
# That should be done as an enhancement to allow, for example, the "bin"
# directory to contain files that are part of multiple Components such
# as "runtime" "java" "sql" etc.
# WiX will do this but this script plus the generateGroups.xq XQuery script
# cannot (yet). Doing that will be a bit of work as well as creating
# additional lists of files that indicate their respective Components.
#
import sys
import os
class GenWix:
def __init__(self, sourcePfx, outfile, dbg):
self.debugOn = dbg
self.componentId = 0
self.indentLevel = 0
self.indentIncr = 2
self.shortId = 0
self.fragName="all"
self.refDirectory = "INSTALLDIR"
self.compPrefix = ""
self.dirPrefix = "dir"
self.sourcePrefix = os.path.normpath(sourcePfx)
# use excludes to exclude paths, e.g. add files to the array:
# ...os.path.normpath("dbxml/test"), os.path.normpath("a/b/c")...
self.excludes = []
self.groups = ["group_csharp", "group_cxx", "group_devo", "group_doc", "group_examples", "group_java", "group_runtime", "group_sql"]
self.groupfiles = ["group.csharp", "group.cxx", "group.devo", "group.doc", "group.examples", "group.java", "group.runtime", "group.sql"]
self.groupcontent = ["","","","","","","",""]
self.outputFile = outfile
self.out = open(self.outputFile, "ab")
self.out.truncate(0)
self.initGroupFiles()
def __del__(self):
self.out.close()
def initGroupFiles(self):
idx = 0
for file in self.groupfiles:
f = open(file, 'r')
self.groupcontent[idx] = os.path.normpath(f.read())
f.close()
idx = idx + 1
def checkExclude(self, fname):
for ex in self.excludes:
if fname.find(ex) != -1:
return True
return False
# NOTE: this will count leading/trailing '/'
def count(self, path):
return len(path.split("/"))
def nextId(self):
self.componentId = self.componentId + 1
def printComponentId(self, fragname):
return self.makeId("%s_%s_%d"%(self.compPrefix,fragname,self.componentId))
def printDirectoryId(self,dirname):
return self.makeId("%s_%s_%d"%(self.dirPrefix,dirname,self.componentId))
def indent(self, arg):
if arg == "-" and self.indentLevel != 0:
self.indentLevel = self.indentLevel - self.indentIncr
i = 0
while i != self.indentLevel:
self.out.write(" ")
i = i+1
if arg == "+":
self.indentLevel = self.indentLevel + self.indentIncr
def echo(self, arg, indentArg):
self.indent(indentArg)
#sys.stdout.write(arg+"\n")
self.out.write(arg+"\n")
def generateGuid(self):
if sys.version_info[1] < 5:
return "REPLACE_WITH_GUID"
else:
import uuid
return uuid.uuid1()
# used by makeShortName
def cleanName(self, name):
for c in ("-","%","@","!"):
name = name.replace(c,"")
return name
def makeId(self, id):
tid = id.replace("-","_")
if len(tid) > 70:
#print "chopping string %s"%tid
tid = tid[len(tid)-70:len(tid)]
# id can't start with a number...
i = 0
while 1:
try:
int(tid[i])
except:
break
i = i+1
return tid[i:len(tid)]
return tid
# turn names into Windows 8.3 names.
    # A semi-unique "ID" is inserted, using 3 hex digits,
    # which gives us a total of 4096 "unique" IDs. If
# that number is exceeded in one class instance, a bad
# name is returned, which will eventually cause a
# recognizable failure. Names look like: ABCD~NNN.EXT
# E.g. NAMEISLONG.EXTLONG => NAME~123.EXT
#
def makeShortName(self, longName):
name = longName.upper()
try:
index = name.find(".")
except ValueError:
index = -1
if index == -1:
if len(name) <= 8:
return longName
after = ""
else:
if index <= 8 and (len(name) - index) <= 4:
return longName
after = "." + name[index+1:index+4]
after = self.cleanName(after)
self.shortId = self.shortId + 1
if self.shortId >= 4096: # check for overflow of ID space
return "too_many_ids.bad" # will cause a failure...
hid = hex(self.shortId)
name = self.cleanName(name) # remove stray chars
        # first 4 chars + ~ + Id + . + extension
return name[0:4]+"~"+str(hid)[2:5]+after
def makeFullPath(self, fname, root):
return os.path.join(self.sourcePrefix,os.path.join(root,fname))
def makeNames(self, fname):
return "Name=\'%s\'"%fname
#shortName = self.makeShortName(fname)
#if shortName != fname:
# longName="LongName=\'%s\'"%fname
#else:
# longName=""
#return "Name=\'%s\' %s"%(shortName,longName)
def generateFile(self, fname, root, dirId):
# allow exclusion of individual files
if self.checkExclude(os.path.join(root,fname)):
self.debug("excluding %s\n"%os.path.join(root,fname))
return
idname = self.makeId("%s_%s"%(dirId,fname))
elem ="<File Id=\'%s\' %s Source=\'%s\' />"%(idname,self.makeNames(fname),self.makeFullPath(fname, root))
self.echo(elem,"")
def startDirectory(self, dir, parent):
# use parent dirname as part of name for more uniqueness
self.debug("Starting dir %s"%dir)
self.nextId()
dirId = self.printDirectoryId(dir)
elem ="<Directory Id=\'%s\' %s>"%(dirId,self.makeNames(dir))
self.echo(elem,"+")
return dirId
def endDirectory(self, dir):
self.debug("Ending dir %s"%dir)
self.echo("</Directory>","-")
def startComponent(self, dir, group):
self.debug("Starting Component for dir %s, group %s"%(dir,group))
# Use the group name in the component id so it can be used later
celem ="<Component Id=\'%s\' DiskId='1' KeyPath='yes' Guid=\'%s\'>"%(self.printComponentId(group),self.generateGuid())
self.echo(celem,"+")
def endComponent(self, dir, group):
self.debug("Ending Component for dir %s, group %s"%(dir,group))
self.echo("</Component>","-")
def generatePreamble(self):
# leave off the XML decl and Wix default namespace -- candle.exe
# doesn't seem to care and it makes updating simpler
self.echo("<Wix>","+")
self.echo("<Fragment>","+")
self.echo("<DirectoryRef Id='%s'>"%self.refDirectory,"+")
def generateClose(self):
self.echo("</DirectoryRef>","-")
self.echo("</Fragment>","-")
self.echo("</Wix>","-")
def debug(self, msg):
if self.debugOn:
sys.stdout.write(msg+"\n")
def generateDir(self, dir, path):
fullPath = os.path.join(path,dir)
if self.checkExclude(fullPath):
self.debug("excluding %s\n"%fullPath)
return
# ignore top-level directories that are missing, or other
# errors (e.g. regular file)
try:
files = os.listdir(fullPath)
except:
return
# check for empty dir (this won't detect directories that contain
# only empty directories -- just don't do that...)
if len(files) == 0:
self.debug("skipping empty dir %s"%dir)
return
dirId = self.startDirectory(dir, os.path.basename(path))
# generate a component for each possible group. Most of these
# will be empty but that is OK. Components will have Id's that
# indicate their group. This is used by the XQuery script
# that creates the ComponentGroup elements and references.
# Post-processing of this is necessary to remove empty
# Component elements or empty directories will be installed.
# pruneComponents.xq is used for this.
idx = 0
for group in self.groups:
self.startComponent(dir, group)
# process regular files before directories
fileList = [f for f in files if os.path.isfile(os.path.join(fullPath,f))]
for file in fileList:
fullFile = os.path.join(fullPath,file)
#self.debug("looking for file %s"%fullFile)
found = self.groupcontent[idx].find(fullFile)
if found >= 0:
self.debug("found %s"%file)
#last = self.groupcontext[idx+len(file)]
#if last != "\n":
# continue
self.generateFile(file,fullPath, dirId)
# Component element must end before subdirectories start
self.endComponent(dir, group)
idx = idx + 1
# now directories
dirList = [d for d in files if os.path.isdir(os.path.join(fullPath,d))]
for directory in dirList:
self.generateDir(directory, fullPath)
self.endDirectory(dir)
def generateRequiredFiles(self):
# LICENSE.txt, README.txt
celem ="<Component Id='license_readme' DiskId='1' KeyPath='yes' Guid=\'%s\'>"%self.generateGuid()
self.echo(celem,"+")
elem ="<File Id='LICENSE.txt' Name='LICENSE.txt' Source=\'%s\' />"%self.makeFullPath("LICENSE", "")
self.echo(elem,"")
elem ="<File Id='README.txt' Name='README.txt' Source=\'%s\' />"%self.makeFullPath("README", "")
self.echo(elem,"")
self.echo("</Component>","-")
def generate(self, directories):
self.generatePreamble()
self.generateRequiredFiles()
for dir in directories:
self.generateDir(dir, "")
self.generateClose()
#
# Main script
#
if __name__ == "__main__":
outfile = sys.argv[-1]
if outfile == sys.argv[0]:
print "Usage: genWix.py <output_file>"
sys.exit()
print "Generating into file: " + outfile
gw = GenWix(os.path.realpath("."),outfile,False)
# extra directory names here that don't exist are fine and make it easier to
# share this script across products
gw.generate(["bin","lib","include","jar","docs", "examples_c", "examples_cxx", "examples_java","examples_stl", "examples_csharp", "build_windows", "clib", "sql", "dbinc", "dbxml", "perl","python","php"])
|
apache-2.0
|
liorvh/golismero
|
thirdparty_libs/nltk/corpus/reader/util.py
|
12
|
31147
|
# Natural Language Toolkit: Corpus Reader Utilities
#
# Copyright (C) 2001-2012 NLTK Project
# Author: Steven Bird <sb@ldc.upenn.edu>
# Edward Loper <edloper@gradient.cis.upenn.edu>
# URL: <http://www.nltk.org/>
# For license information, see LICENSE.TXT
import os
import sys
import bisect
import re
import tempfile
try: import cPickle as pickle
except ImportError: import pickle
from itertools import islice
# Use the c version of ElementTree, which is faster, if possible:
try: from xml.etree import cElementTree as ElementTree
except ImportError: from xml.etree import ElementTree
from nltk.tokenize import wordpunct_tokenize
from nltk.internals import slice_bounds
from nltk.data import PathPointer, FileSystemPathPointer, ZipFilePathPointer
from nltk.data import SeekableUnicodeStreamReader
from nltk.sourcedstring import SourcedStringStream
from nltk.util import AbstractLazySequence, LazySubsequence, LazyConcatenation, py25
######################################################################
#{ Corpus View
######################################################################
class StreamBackedCorpusView(AbstractLazySequence):
"""
A 'view' of a corpus file, which acts like a sequence of tokens:
it can be accessed by index, iterated over, etc. However, the
tokens are only constructed as-needed -- the entire corpus is
never stored in memory at once.
The constructor to ``StreamBackedCorpusView`` takes two arguments:
a corpus fileid (specified as a string or as a ``PathPointer``);
and a block reader. A "block reader" is a function that reads
zero or more tokens from a stream, and returns them as a list. A
very simple example of a block reader is:
>>> def simple_block_reader(stream):
... return stream.readline().split()
This simple block reader reads a single line at a time, and
returns a single token (consisting of a string) for each
whitespace-separated substring on the line.
When deciding how to define the block reader for a given
corpus, careful consideration should be given to the size of
blocks handled by the block reader. Smaller block sizes will
increase the memory requirements of the corpus view's internal
data structures (by 2 integers per block). On the other hand,
larger block sizes may decrease performance for random access to
the corpus. (But note that larger block sizes will *not*
decrease performance for iteration.)
Internally, ``CorpusView`` maintains a partial mapping from token
index to file position, with one entry per block. When a token
with a given index *i* is requested, the ``CorpusView`` constructs
it as follows:
1. First, it searches the toknum/filepos mapping for the token
index closest to (but less than or equal to) *i*.
2. Then, starting at the file position corresponding to that
index, it reads one block at a time using the block reader
until it reaches the requested token.
The toknum/filepos mapping is created lazily: it is initially
empty, but every time a new block is read, the block's
initial token is added to the mapping. (Thus, the toknum/filepos
map has one entry per block.)
In order to increase efficiency for random access patterns that
have high degrees of locality, the corpus view may cache one or
more blocks.
:note: Each ``CorpusView`` object internally maintains an open file
object for its underlying corpus file. This file should be
automatically closed when the ``CorpusView`` is garbage collected,
but if you wish to close it manually, use the ``close()``
method. If you access a ``CorpusView``'s items after it has been
closed, the file object will be automatically re-opened.
:warning: If the contents of the file are modified during the
lifetime of the ``CorpusView``, then the ``CorpusView``'s behavior
is undefined.
:warning: If a unicode encoding is specified when constructing a
``CorpusView``, then the block reader may only call
``stream.seek()`` with offsets that have been returned by
``stream.tell()``; in particular, calling ``stream.seek()`` with
relative offsets, or with offsets based on string lengths, may
lead to incorrect behavior.
:ivar _block_reader: The function used to read
a single block from the underlying file stream.
:ivar _toknum: A list containing the token index of each block
that has been processed. In particular, ``_toknum[i]`` is the
token index of the first token in block ``i``. Together
with ``_filepos``, this forms a partial mapping between token
indices and file positions.
:ivar _filepos: A list containing the file position of each block
that has been processed. In particular, ``_toknum[i]`` is the
file position of the first character in block ``i``. Together
with ``_toknum``, this forms a partial mapping between token
indices and file positions.
:ivar _stream: The stream used to access the underlying corpus file.
:ivar _len: The total number of tokens in the corpus, if known;
or None, if the number of tokens is not yet known.
:ivar _eofpos: The character position of the last character in the
file. This is calculated when the corpus view is initialized,
and is used to decide when the end of file has been reached.
:ivar _cache: A cache of the most recently read block. It
is encoded as a tuple (start_toknum, end_toknum, tokens), where
start_toknum is the token index of the first token in the block;
end_toknum is the token index of the first token not in the
block; and tokens is a list of the tokens in the block.
"""
def __init__(self, fileid, block_reader=None, startpos=0,
encoding=None, source=None):
"""
Create a new corpus view, based on the file ``fileid``, and
read with ``block_reader``. See the class documentation
for more information.
:param fileid: The path to the file that is read by this
corpus view. ``fileid`` can either be a string or a
``PathPointer``.
:param startpos: The file position at which the view will
start reading. This can be used to skip over preface
sections.
:param encoding: The unicode encoding that should be used to
read the file's contents. If no encoding is specified,
then the file's contents will be read as a non-unicode
string (i.e., a str).
:param source: If specified, then use an ``SourcedStringStream``
to annotate all strings read from the file with
            information about their start offset, end offset,
and docid. The value of ``source`` will be used as the docid.
"""
if block_reader:
self.read_block = block_reader
# Initialize our toknum/filepos mapping.
self._toknum = [0]
self._filepos = [startpos]
self._encoding = encoding
self._source = source
# We don't know our length (number of tokens) yet.
self._len = None
self._fileid = fileid
self._stream = None
self._current_toknum = None
"""This variable is set to the index of the next token that
will be read, immediately before ``self.read_block()`` is
called. This is provided for the benefit of the block
reader, which under rare circumstances may need to know
the current token number."""
self._current_blocknum = None
"""This variable is set to the index of the next block that
will be read, immediately before ``self.read_block()`` is
called. This is provided for the benefit of the block
reader, which under rare circumstances may need to know
the current block number."""
# Find the length of the file.
try:
if isinstance(self._fileid, PathPointer):
self._eofpos = self._fileid.file_size()
else:
self._eofpos = os.stat(self._fileid).st_size
except Exception, exc:
raise ValueError('Unable to open or access %r -- %s' %
(fileid, exc))
# Maintain a cache of the most recently read block, to
# increase efficiency of random access.
self._cache = (-1, -1, None)
fileid = property(lambda self: self._fileid, doc="""
The fileid of the file that is accessed by this view.
:type: str or PathPointer""")
def read_block(self, stream):
"""
Read a block from the input stream.
:return: a block of tokens from the input stream
:rtype: list(any)
:param stream: an input stream
:type stream: stream
"""
raise NotImplementedError('Abstract Method')
def _open(self):
"""
Open the file stream associated with this corpus view. This
        will be performed if any value is read from the view
while its file stream is closed.
"""
if isinstance(self._fileid, PathPointer):
self._stream = self._fileid.open(self._encoding)
elif self._encoding:
self._stream = SeekableUnicodeStreamReader(
open(self._fileid, 'rb'), self._encoding)
else:
self._stream = open(self._fileid, 'rb')
if self._source is not None:
self._stream = SourcedStringStream(self._stream, self._source)
def close(self):
"""
Close the file stream associated with this corpus view. This
can be useful if you are worried about running out of file
handles (although the stream should automatically be closed
upon garbage collection of the corpus view). If the corpus
view is accessed after it is closed, it will be automatically
re-opened.
"""
if self._stream is not None:
self._stream.close()
self._stream = None
def __len__(self):
if self._len is None:
# iterate_from() sets self._len when it reaches the end
# of the file:
for tok in self.iterate_from(self._toknum[-1]): pass
return self._len
def __getitem__(self, i):
if isinstance(i, slice):
start, stop = slice_bounds(self, i)
# Check if it's in the cache.
offset = self._cache[0]
if offset <= start and stop <= self._cache[1]:
return self._cache[2][start-offset:stop-offset]
# Construct & return the result.
return LazySubsequence(self, start, stop)
else:
# Handle negative indices
if i < 0: i += len(self)
if i < 0: raise IndexError('index out of range')
# Check if it's in the cache.
offset = self._cache[0]
if offset <= i < self._cache[1]:
return self._cache[2][i-offset]
# Use iterate_from to extract it.
try:
return self.iterate_from(i).next()
except StopIteration:
raise IndexError('index out of range')
# If we wanted to be thread-safe, then this method would need to
# do some locking.
def iterate_from(self, start_tok):
# Start by feeding from the cache, if possible.
if self._cache[0] <= start_tok < self._cache[1]:
for tok in self._cache[2][start_tok-self._cache[0]:]:
yield tok
start_tok += 1
# Decide where in the file we should start. If `start` is in
# our mapping, then we can jump straight to the correct block;
# otherwise, start at the last block we've processed.
if start_tok < self._toknum[-1]:
block_index = bisect.bisect_right(self._toknum, start_tok)-1
toknum = self._toknum[block_index]
filepos = self._filepos[block_index]
else:
block_index = len(self._toknum)-1
toknum = self._toknum[-1]
filepos = self._filepos[-1]
# Open the stream, if it's not open already.
if self._stream is None:
self._open()
# Each iteration through this loop, we read a single block
# from the stream.
while filepos < self._eofpos:
# Read the next block.
self._stream.seek(filepos)
self._current_toknum = toknum
self._current_blocknum = block_index
tokens = self.read_block(self._stream)
assert isinstance(tokens, (tuple, list, AbstractLazySequence)), (
'block reader %s() should return list or tuple.' %
self.read_block.__name__)
num_toks = len(tokens)
new_filepos = self._stream.tell()
assert new_filepos > filepos, (
'block reader %s() should consume at least 1 byte (filepos=%d)' %
(self.read_block.__name__, filepos))
# Update our cache.
self._cache = (toknum, toknum+num_toks, list(tokens))
# Update our mapping.
assert toknum <= self._toknum[-1]
if num_toks > 0:
block_index += 1
if toknum == self._toknum[-1]:
assert new_filepos > self._filepos[-1] # monotonic!
self._filepos.append(new_filepos)
self._toknum.append(toknum+num_toks)
else:
# Check for consistency:
assert new_filepos == self._filepos[block_index], (
'inconsistent block reader (num chars read)')
assert toknum+num_toks == self._toknum[block_index], (
'inconsistent block reader (num tokens returned)')
# If we reached the end of the file, then update self._len
if new_filepos == self._eofpos:
self._len = toknum + num_toks
# Generate the tokens in this block (but skip any tokens
# before start_tok). Note that between yields, our state
# may be modified.
for tok in tokens[max(0, start_tok-toknum):]:
yield tok
# If we're at the end of the file, then we're done.
assert new_filepos <= self._eofpos
if new_filepos == self._eofpos:
break
# Update our indices
toknum += num_toks
filepos = new_filepos
# If we reach this point, then we should know our length.
assert self._len is not None
# Use concat for these, so we can use a ConcatenatedCorpusView
# when possible.
def __add__(self, other):
return concat([self, other])
def __radd__(self, other):
return concat([other, self])
def __mul__(self, count):
return concat([self] * count)
def __rmul__(self, count):
return concat([self] * count)
class ConcatenatedCorpusView(AbstractLazySequence):
"""
A 'view' of a corpus file that joins together one or more
``StreamBackedCorpusViews<StreamBackedCorpusView>``. At most
one file handle is left open at any time.
"""
def __init__(self, corpus_views):
self._pieces = corpus_views
"""A list of the corpus subviews that make up this
concatenation."""
self._offsets = [0]
"""A list of offsets, indicating the index at which each
subview begins. In particular::
offsets[i] = sum([len(p) for p in pieces[:i]])"""
self._open_piece = None
"""The most recently accessed corpus subview (or None).
Before a new subview is accessed, this subview will be closed."""
def __len__(self):
if len(self._offsets) <= len(self._pieces):
# Iterate to the end of the corpus.
for tok in self.iterate_from(self._offsets[-1]): pass
return self._offsets[-1]
def close(self):
for piece in self._pieces:
piece.close()
def iterate_from(self, start_tok):
piecenum = bisect.bisect_right(self._offsets, start_tok)-1
while piecenum < len(self._pieces):
offset = self._offsets[piecenum]
piece = self._pieces[piecenum]
# If we've got another piece open, close it first.
if self._open_piece is not piece:
if self._open_piece is not None:
self._open_piece.close()
self._open_piece = piece
# Get everything we can from this piece.
for tok in piece.iterate_from(max(0, start_tok-offset)):
yield tok
# Update the offset table.
if piecenum+1 == len(self._offsets):
self._offsets.append(self._offsets[-1] + len(piece))
# Move on to the next piece.
piecenum += 1
def concat(docs):
"""
Concatenate together the contents of multiple documents from a
single corpus, using an appropriate concatenation function. This
utility function is used by corpus readers when the user requests
more than one document at a time.
"""
if len(docs) == 1:
return docs[0]
if len(docs) == 0:
raise ValueError('concat() expects at least one object!')
types = set([d.__class__ for d in docs])
# If they're all strings, use string concatenation.
if types.issubset([str, unicode, basestring]):
return reduce((lambda a,b:a+b), docs, '')
# If they're all corpus views, then use ConcatenatedCorpusView.
for typ in types:
if not issubclass(typ, (StreamBackedCorpusView,
ConcatenatedCorpusView)):
break
else:
return ConcatenatedCorpusView(docs)
# If they're all lazy sequences, use a lazy concatenation
for typ in types:
if not issubclass(typ, AbstractLazySequence):
break
else:
return LazyConcatenation(docs)
# Otherwise, see what we can do:
if len(types) == 1:
typ = list(types)[0]
if issubclass(typ, list):
return reduce((lambda a,b:a+b), docs, [])
if issubclass(typ, tuple):
return reduce((lambda a,b:a+b), docs, ())
if ElementTree.iselement(typ):
xmltree = ElementTree.Element('documents')
for doc in docs: xmltree.append(doc)
return xmltree
# No method found!
raise ValueError("Don't know how to concatenate types: %r" % types)
######################################################################
#{ Corpus View for Pickled Sequences
######################################################################
class PickleCorpusView(StreamBackedCorpusView):
"""
A stream backed corpus view for corpus files that consist of
sequences of serialized Python objects (serialized using
``pickle.dump``). One use case for this class is to store the
result of running feature detection on a corpus to disk. This can
be useful when performing feature detection is expensive (so we
don't want to repeat it) but the corpus is too large to store in
memory. The following example illustrates this technique:
.. doctest::
:options: +SKIP
>>> from nltk.corpus.reader.util import PickleCorpusView
>>> from nltk.util import LazyMap
>>> feature_corpus = LazyMap(detect_features, corpus)
>>> PickleCorpusView.write(feature_corpus, some_fileid)
>>> pcv = PickleCorpusView(some_fileid)
"""
BLOCK_SIZE = 100
PROTOCOL = -1
def __init__(self, fileid, delete_on_gc=False):
"""
Create a new corpus view that reads the pickle corpus
``fileid``.
:param delete_on_gc: If true, then ``fileid`` will be deleted
whenever this object gets garbage-collected.
"""
self._delete_on_gc = delete_on_gc
StreamBackedCorpusView.__init__(self, fileid)
def read_block(self, stream):
result = []
for i in range(self.BLOCK_SIZE):
try: result.append(pickle.load(stream))
except EOFError: break
return result
def __del__(self):
"""
If ``delete_on_gc`` was set to true when this
``PickleCorpusView`` was created, then delete the corpus view's
fileid. (This method is called whenever a
``PickleCorpusView`` is garbage-collected.)
"""
if getattr(self, '_delete_on_gc', False):
if os.path.exists(self._fileid):
try: os.remove(self._fileid)
except (OSError, IOError): pass
self.__dict__.clear() # make the garbage collector's job easier
@classmethod
def write(cls, sequence, output_file):
if isinstance(output_file, basestring):
output_file = open(output_file, 'wb')
for item in sequence:
pickle.dump(item, output_file, cls.PROTOCOL)
@classmethod
def cache_to_tempfile(cls, sequence, delete_on_gc=True):
"""
Write the given sequence to a temporary file as a pickle
corpus; and then return a ``PickleCorpusView`` view for that
temporary corpus file.
:param delete_on_gc: If true, then the temporary file will be
deleted whenever this object gets garbage-collected.
"""
try:
fd, output_file_name = tempfile.mkstemp('.pcv', 'nltk-')
output_file = os.fdopen(fd, 'wb')
cls.write(sequence, output_file)
output_file.close()
return PickleCorpusView(output_file_name, delete_on_gc)
except (OSError, IOError), e:
raise ValueError('Error while creating temp file: %s' % e)
######################################################################
#{ Block Readers
######################################################################
def read_whitespace_block(stream):
toks = []
for i in range(20): # Read 20 lines at a time.
toks.extend(stream.readline().split())
return toks
def read_wordpunct_block(stream):
toks = []
for i in range(20): # Read 20 lines at a time.
toks.extend(wordpunct_tokenize(stream.readline()))
return toks
def read_line_block(stream):
toks = []
for i in range(20):
line = stream.readline()
if not line: return toks
toks.append(line.rstrip('\n'))
return toks
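# Block readers share a simple contract: read zero or more tokens starting at
# the stream's current position and return them as a list; an empty list
# signals end of file. For example (illustrative stream contents):
#
#     >>> stream = StringIO('one\ntwo\nthree\n')  # doctest: +SKIP
#     >>> read_line_block(stream)
#     ['one', 'two', 'three']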
def read_blankline_block(stream):
s = ''
while True:
line = stream.readline()
# End of file:
if not line:
if s: return [s]
else: return []
# Blank line:
elif line and not line.strip():
if s: return [s]
# Other line:
else:
s += line
def read_alignedsent_block(stream):
s = ''
while True:
line = stream.readline()
# Skip separator and blank lines (guard against EOF first, since
# indexing an empty string would raise IndexError):
if line and (line[0] == '=' or line[0] == '\n' or line[:2] == '\r\n'):
continue
# End of file:
if not line:
if s: return [s]
else: return []
# Other line:
else:
s += line
if re.match(r'^\d+-\d+', line) is not None:
return [s]
def read_regexp_block(stream, start_re, end_re=None):
"""
Read a sequence of tokens from a stream, where tokens begin with
lines that match ``start_re``. If ``end_re`` is specified, then
tokens end with lines that match ``end_re``; otherwise, tokens end
whenever the next line matching ``start_re`` or EOF is found.
"""
# Scan until we find a line matching the start regexp.
while True:
line = stream.readline()
if not line: return [] # end of file.
if re.match(start_re, line): break
# Scan until we find another line matching the regexp, or EOF.
lines = [line]
while True:
oldpos = stream.tell()
line = stream.readline()
# End of file:
if not line:
return [''.join(lines)]
# End of token:
if end_re is not None and re.match(end_re, line):
return [''.join(lines)]
# Start of new token: backup to just before it starts, and
# return the token we've already collected.
if end_re is None and re.match(start_re, line):
stream.seek(oldpos)
return [''.join(lines)]
# Anything else is part of the token.
lines.append(line)
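# Illustrative use of read_regexp_block (hypothetical stream contents): with
# start_re='BEGIN' and no end_re, a token runs from one 'BEGIN' line up to,
# but not including, the next:
#
#     >>> stream = StringIO('BEGIN a\nx\nBEGIN b\ny\n')  # doctest: +SKIP
#     >>> read_regexp_block(stream, start_re='BEGIN')
#     ['BEGIN a\nx\n']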
def read_sexpr_block(stream, block_size=16384, comment_char=None):
"""
Read a sequence of s-expressions from the stream, and leave the
stream's file position at the end of the last complete s-expression
read. This function will always return at least one s-expression,
unless there are no more s-expressions in the file.
If the file ends in the middle of an s-expression, then that
incomplete s-expression is returned when the end of the file is
reached.
:param block_size: The default block size for reading. If an
s-expression is longer than one block, then more than one
block will be read.
:param comment_char: A character that marks comments. Any lines
that begin with this character will be stripped out.
(If spaces or tabs precede the comment character, then the
line will not be stripped.)
"""
start = stream.tell()
block = stream.read(block_size)
encoding = getattr(stream, 'encoding', None)
assert encoding is not None or isinstance(block, str)
if encoding not in (None, 'utf-8'):
import warnings
warnings.warn('Parsing may fail, depending on the properties '
'of the %s encoding!' % encoding)
# (e.g., the utf-16 encoding does not work because it insists
# on adding BOMs to the beginning of encoded strings.)
if comment_char:
COMMENT = re.compile('(?m)^%s.*$' % re.escape(comment_char))
while True:
try:
# If we're stripping comments, then make sure our block ends
# on a line boundary; and then replace any comments with
# space characters. (We can't just strip them out -- that
# would make our offset wrong.)
if comment_char:
block += stream.readline()
block = re.sub(COMMENT, _sub_space, block)
# Read the block.
tokens, offset = _parse_sexpr_block(block)
# Skip whitespace
offset = re.compile(r'\s*').search(block, offset).end()
# Move to the end position.
if encoding is None:
stream.seek(start+offset)
else:
stream.seek(start+len(block[:offset].encode(encoding)))
# Return the list of tokens we processed
return tokens
except ValueError, e:
if e.args[0] == 'Block too small':
next_block = stream.read(block_size)
if next_block:
block += next_block
continue
else:
# The file ended mid-sexpr -- return what we got.
return [block.strip()]
else: raise
def _sub_space(m):
"""Helper function: given a regexp match, return a string of
spaces that's the same length as the matched string."""
return ' '*(m.end()-m.start())
def _parse_sexpr_block(block):
tokens = []
start = end = 0
while end < len(block):
m = re.compile(r'\S').search(block, end)
if not m:
return tokens, end
start = m.start()
# Case 1: sexpr is not parenthesized.
if m.group() != '(':
m2 = re.compile(r'[\s(]').search(block, start)
if m2:
end = m2.start()
else:
if tokens: return tokens, end
raise ValueError('Block too small')
# Case 2: parenthesized sexpr.
else:
nesting = 0
for m in re.compile(r'[()]').finditer(block, start):
if m.group()=='(': nesting += 1
else: nesting -= 1
if nesting == 0:
end = m.end()
break
else:
if tokens: return tokens, end
raise ValueError('Block too small')
tokens.append(block[start:end])
return tokens, end
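# Sketch of the helper's behaviour on an illustrative block: complete
# s-expressions are returned together with the offset just past the last one;
# the trailing incomplete '(c (d' is left for the next, larger read:
#
#     >>> _parse_sexpr_block('(a b) (c (d')
#     (['(a b)'], 5)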
######################################################################
#{ Finding Corpus Items
######################################################################
def find_corpus_fileids(root, regexp):
if not isinstance(root, PathPointer):
raise TypeError('find_corpus_fileids: expected a PathPointer')
regexp += '$'
# Find fileids in a zipfile: scan the zipfile's namelist. Filter
# out entries that end in '/' -- they're directories.
if isinstance(root, ZipFilePathPointer):
fileids = [name[len(root.entry):] for name in root.zipfile.namelist()
if not name.endswith('/')]
items = [name for name in fileids if re.match(regexp, name)]
return sorted(items)
# Find fileids in a directory: use os.walk to search all (proper
# or symlinked) subdirectories, and match paths against the regexp.
elif isinstance(root, FileSystemPathPointer):
items = []
# workaround for py25 which doesn't support followlinks
kwargs = {}
if not py25():
kwargs = {'followlinks': True}
for dirname, subdirs, fileids in os.walk(root.path, **kwargs):
prefix = ''.join('%s/' % p for p in _path_from(root.path, dirname))
items += [prefix+fileid for fileid in fileids
if re.match(regexp, prefix+fileid)]
# Don't visit svn directories:
if '.svn' in subdirs: subdirs.remove('.svn')
return sorted(items)
else:
raise AssertionError("Don't know how to handle %r" % root)
def _path_from(parent, child):
if os.path.split(parent)[1] == '':
parent = os.path.split(parent)[0]
path = []
while parent != child:
child, dirname = os.path.split(child)
path.insert(0, dirname)
assert os.path.split(child)[0] != child
return path
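# Illustrative behaviour of _path_from (hypothetical POSIX paths): it returns
# the path components of `child` relative to `parent`:
#
#     >>> _path_from('/corpora', '/corpora/abc/def.txt')  # doctest: +SKIP
#     ['abc', 'def.txt']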
######################################################################
#{ Paragraph structure in Treebank files
######################################################################
def tagged_treebank_para_block_reader(stream):
# Read the next paragraph.
para = ''
while True:
line = stream.readline()
# End of paragraph:
if re.match(r'======+\s*$', line):
if para.strip(): return [para]
# End of file:
elif line == '':
if para.strip(): return [para]
else: return []
# Content line:
else:
para += line
|
gpl-2.0
|
dronefly/dronefly.github.io
|
flask/lib/python2.7/site-packages/pip/_vendor/requests/packages/chardet/charsetprober.py
|
3127
|
1902
|
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
# Shy Shalom - original C code
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from . import constants
import re
class CharSetProber:
def __init__(self):
pass
def reset(self):
self._mState = constants.eDetecting
def get_charset_name(self):
return None
def feed(self, aBuf):
pass
def get_state(self):
return self._mState
def get_confidence(self):
return 0.0
def filter_high_bit_only(self, aBuf):
aBuf = re.sub(b'([\x00-\x7F])+', b' ', aBuf)
return aBuf
def filter_without_english_letters(self, aBuf):
aBuf = re.sub(b'([A-Za-z])+', b' ', aBuf)
return aBuf
def filter_with_english_letters(self, aBuf):
# TODO
return aBuf
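# Minimal sketch of a concrete prober (hypothetical, not part of chardet):
# subclasses override feed(), get_charset_name() and get_confidence(), and
# report eNotMe once the input rules out their charset.
#
# class PureAsciiProber(CharSetProber):
#     def __init__(self):
#         CharSetProber.__init__(self)
#         self.reset()
#     def get_charset_name(self):
#         return 'ascii'
#     def feed(self, aBuf):
#         if any(c > 0x7F for c in bytearray(aBuf)):
#             self._mState = constants.eNotMe
#         return self._mState
#     def get_confidence(self):
#         return 0.5 if self._mState == constants.eDetecting else 0.0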
|
apache-2.0
|
mohrtw/algorithms
|
algorithms/tests/test_deque.py
|
1
|
3879
|
import unittest
from unittest import TestCase
from algorithms.dataStructures.Deque import Deque
class deque_Test(TestCase):
def setUp(self):
self.d = Deque()
def test_create_deque(self):
self.assertIsInstance(self.d, Deque)
def test_push_to_empty_deque(self):
self.d.push(1)
self.assertEqual(1, self.d.items.head.data)
self.assertEqual(1, self.d.items.tail.data)
def test_push_multiple_items(self):
self.d.push(1)
self.d.push(2)
self.d.push(3)
self.d.push('a')
self.assertEqual(1, self.d.items.head.data)
self.assertEqual(2, self.d.items.head.next.data)
self.assertEqual(3, self.d.items.tail.previous.data)
self.assertEqual('a', self.d.items.tail.data)
def test_inject_to_empty_deque(self):
self.d.inject(1)
self.assertEqual(1, self.d.items.head.data)
self.assertEqual(1, self.d.items.tail.data)
def test_inject_multiple_items(self):
self.d.inject(1)
self.d.inject(2)
self.d.inject(3)
self.d.inject('a')
self.assertEqual('a', self.d.items.head.data)
self.assertEqual(3, self.d.items.head.next.data)
self.assertEqual(2, self.d.items.tail.previous.data)
self.assertEqual(1, self.d.items.tail.data)
def test_pop_single_element_deque(self):
self.d.push(1)
data = self.d.pop()
self.assertEqual(1, data)
self.assertIsNone(self.d.items.head)
self.assertIsNone(self.d.items.tail)
def test_pop_multiple_items(self):
self.d.push(1)
self.d.push(2)
self.d.push(3)
self.d.push('a')
data = self.d.pop()
self.assertEqual(1, data)
data = self.d.pop()
self.assertEqual(2, data)
data = self.d.pop()
self.assertEqual(3, data)
data = self.d.pop()
self.assertEqual('a', data)
def test_eject_single_element_deque(self):
self.d.inject(1)
data = self.d.eject()
self.assertEqual(1, data)
self.assertIsNone(self.d.items.head)
self.assertIsNone(self.d.items.tail)
def test_eject_multiple_items(self):
self.d.inject(1)
self.d.inject(2)
self.d.inject(3)
self.d.inject('a')
data = self.d.eject()
self.assertEqual(1, data)
data = self.d.eject()
self.assertEqual(2, data)
data = self.d.eject()
self.assertEqual(3, data)
data = self.d.eject()
self.assertEqual('a', data)
def test_peek_first_single_element_deque(self):
self.d.push(1)
data = self.d.peek_first()
self.assertEqual(1, data)
def test_peek_first_multiple_items(self):
self.d.push(1)
self.d.push(2)
self.d.push(3)
self.d.push('a')
data = self.d.peek_first()
self.assertEqual(1, data)
self.d.pop()
data = self.d.peek_first()
self.assertEqual(2, data)
self.d.pop()
data = self.d.peek_first()
self.assertEqual(3, data)
self.d.pop()
data = self.d.peek_first()
self.assertEqual('a', data)
def test_peek_last_single_element_deque(self):
self.d.push(1)
data = self.d.peek_last()
self.assertEqual(1, data)
def test_peek_last_multiple_items(self):
self.d.push(1)
self.d.push(2)
self.d.push(3)
self.d.push('a')
data = self.d.peek_last()
self.assertEqual('a', data)
self.d.eject()
data = self.d.peek_last()
self.assertEqual(3, data)
self.d.eject()
data = self.d.peek_last()
self.assertEqual(2, data)
self.d.eject()
data = self.d.peek_last()
self.assertEqual(1, data)
if __name__ == '__main__':
unittest.main()
|
mit
|
endlessm/chromium-browser
|
third_party/catapult/third_party/gsutil/third_party/pyasn1-modules/pyasn1_modules/rfc5208.py
|
6
|
1426
|
#
# This file is part of pyasn1-modules software.
#
# Copyright (c) 2005-2017, Ilya Etingof <etingof@gmail.com>
# License: http://pyasn1.sf.net/license.html
#
# PKCS#8 syntax
#
# ASN.1 source from:
# http://tools.ietf.org/html/rfc5208
#
# Sample captures could be obtained with "openssl pkcs8 -topk8" command
#
from pyasn1_modules import rfc2251
from pyasn1_modules.rfc2459 import *
class KeyEncryptionAlgorithms(AlgorithmIdentifier):
pass
class PrivateKeyAlgorithms(AlgorithmIdentifier):
pass
class EncryptedData(univ.OctetString):
pass
class EncryptedPrivateKeyInfo(univ.Sequence):
componentType = namedtype.NamedTypes(
namedtype.NamedType('encryptionAlgorithm', AlgorithmIdentifier()),
namedtype.NamedType('encryptedData', EncryptedData())
)
class PrivateKey(univ.OctetString):
pass
class Attributes(univ.SetOf):
componentType = rfc2251.Attribute()
class Version(univ.Integer):
namedValues = namedval.NamedValues(('v1', 0), ('v2', 1))
class PrivateKeyInfo(univ.Sequence):
componentType = namedtype.NamedTypes(
namedtype.NamedType('version', Version()),
namedtype.NamedType('privateKeyAlgorithm', AlgorithmIdentifier()),
namedtype.NamedType('privateKey', PrivateKey()),
namedtype.OptionalNamedType('attributes', Attributes().subtype(
implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatConstructed, 0)))
)
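# Hypothetical usage sketch (assumes `substrate` holds DER-encoded PKCS#8
# bytes, e.g. captured with "openssl pkcs8 -topk8" as noted above):
#
#     >>> from pyasn1.codec.der import decoder  # doctest: +SKIP
#     >>> key_info, rest = decoder.decode(substrate, asn1Spec=PrivateKeyInfo())
#     >>> int(key_info['version'])   # 0 == 'v1'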
|
bsd-3-clause
|
ychen820/microblog
|
y/google-cloud-sdk/lib/googlecloudsdk/compute/lib/ssh_utils.py
|
4
|
13028
|
# Copyright 2014 Google Inc. All Rights Reserved.
"""Utilities for subcommands that need to SSH into virtual machine guests."""
import logging
import os
import subprocess
from googlecloudsdk.calliope import exceptions
from googlecloudsdk.compute.lib import base_classes
from googlecloudsdk.compute.lib import constants
from googlecloudsdk.compute.lib import metadata_utils
from googlecloudsdk.compute.lib import path_simplifier
from googlecloudsdk.compute.lib import request_helper
from googlecloudsdk.compute.lib import time_utils
from googlecloudsdk.compute.lib import utils
from googlecloudsdk.core import log
from googlecloudsdk.core import properties
from googlecloudsdk.core.util import console_io
from googlecloudsdk.core.util import files
# The maximum amount of time to wait for a newly-added SSH key to
# propagate before giving up.
_SSH_KEY_PROPAGATION_TIMEOUT_SEC = 60
def UserHost(user, host):
"""Returns a string of the form user@host."""
if user:
return user + '@' + host
else:
return host
def GetExternalIPAddress(instance_resource, no_raise=False):
"""Returns the external IP address of the instance.
Args:
instance_resource: An instance resource object.
no_raise: A boolean flag indicating whether or not to return None instead of
raising.
Raises:
ToolException: If no external IP address is found for the instance_resource
and no_raise is False.
Returns:
The string IP address, or None if no_raise is True and no IP address exists.
"""
if instance_resource.networkInterfaces:
access_configs = instance_resource.networkInterfaces[0].accessConfigs
if access_configs:
ip_address = access_configs[0].natIP
if ip_address:
return ip_address
elif not no_raise:
raise exceptions.ToolException(
'Instance [{0}] in zone [{1}] has not been allocated an external '
'IP address yet. Try rerunning this command later.'.format(
instance_resource.name,
path_simplifier.Name(instance_resource.zone)))
if no_raise:
return None
raise exceptions.ToolException(
'Instance [{0}] in zone [{1}] does not have an external IP address, '
'so you cannot SSH into it. To add an external IP address to the '
'instance, use [gcloud compute instances add-access-config].'
.format(instance_resource.name,
path_simplifier.Name(instance_resource.zone)))
def _RunExecutable(cmd_args, interactive_ssh=False):
try:
subprocess.check_call(cmd_args)
except OSError as e:
raise exceptions.ToolException(
'[{0}] exited with [{1}].'.format(cmd_args[0], e.strerror))
except subprocess.CalledProcessError as e:
# Interactive ssh exits with the exit status of the last command executed.
# This is traditionally not interpreted as an error.
if not interactive_ssh or e.returncode == 255:
raise exceptions.ToolException(
'[{0}] exited with return code [{1}].'
.format(cmd_args[0], e.returncode))
def _GetSSHKeysFromMetadata(metadata):
"""Returns the value of the "sshKeys" metadata as a list."""
if not metadata:
return []
for item in metadata.items:
if item.key == constants.SSH_KEYS_METADATA_KEY:
return [key.strip() for key in item.value.split('\n') if key]
return []
def _PrepareSSHKeysValue(ssh_keys):
"""Returns a string appropriate for the metadata.
Values are taken from the tail until either all values are
taken or MAX_METADATA_VALUE_SIZE_IN_BYTES is reached, whichever
comes first. The selected values are then reversed. Only values at
the head of the list will be subject to removal.
Args:
ssh_keys: A list of keys. Each entry should be one key.
Returns:
A new-line-joined string of SSH keys.
"""
keys = []
bytes_consumed = 0
for key in reversed(ssh_keys):
num_bytes = len(key + '\n')
if bytes_consumed + num_bytes > constants.MAX_METADATA_VALUE_SIZE_IN_BYTES:
log.warn('The following SSH key will be removed from your project '
'because your sshKeys metadata value has reached its '
'maximum allowed size of {0} bytes: {1}'
.format(constants.MAX_METADATA_VALUE_SIZE_IN_BYTES, key))
else:
keys.append(key)
bytes_consumed += num_bytes
keys.reverse()
return '\n'.join(keys)
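# Worked example of the truncation rule above, assuming a hypothetical
# 16-byte limit: with ssh_keys == ['old', 'newer-key', 'new'], the tail
# entries 'new' (4 bytes with its newline) and 'newer-key' (10 bytes) fit,
# but 'old' would push the total to 18 bytes, so it is dropped and the
# result is 'newer-key\nnew' -- the oldest keys are removed first.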
def _AddSSHKeyToMetadataMessage(message_classes, user, public_key, metadata):
"""Adds the public key material to the metadata if it's not already there."""
entry = '{user}:{public_key}'.format(
user=user, public_key=public_key)
ssh_keys = _GetSSHKeysFromMetadata(metadata)
log.debug('Current SSH keys in project: {0}'.format(ssh_keys))
if entry in ssh_keys:
return metadata
else:
ssh_keys.append(entry)
return metadata_utils.ConstructMetadataMessage(
message_classes=message_classes,
metadata={
constants.SSH_KEYS_METADATA_KEY: _PrepareSSHKeysValue(ssh_keys)},
existing_metadata=metadata)
class BaseSSHCommand(base_classes.BaseCommand):
"""Base class for subcommands that need to connect to instances using SSH.
Subclasses can call EnsureSSHKeyIsInProject() to make sure that the
user's public SSH key is placed in the project metadata before
proceeding.
"""
@staticmethod
def Args(parser):
ssh_key_file = parser.add_argument(
'--ssh-key-file',
help='The path to the SSH key file.')
ssh_key_file.detailed_help = """\
The path to the SSH key file. By default, this is ``{0}''.
""".format(constants.DEFAULT_SSH_KEY_FILE)
def GetProject(self):
"""Returns the project object."""
errors = []
objects = list(request_helper.MakeRequests(
requests=[(self.compute.projects,
'Get',
self.messages.ComputeProjectsGetRequest(
project=properties.VALUES.core.project.Get(
required=True),
))],
http=self.http,
batch_url=self.batch_url,
errors=errors,
custom_get_requests=None))
if errors:
utils.RaiseToolException(
errors,
error_message='Could not fetch project resource:')
return objects[0]
def SetProjectMetadata(self, new_metadata):
"""Sets the project metadata to the new metadata."""
compute = self.compute
errors = []
list(request_helper.MakeRequests(
requests=[
(compute.projects,
'SetCommonInstanceMetadata',
self.messages.ComputeProjectsSetCommonInstanceMetadataRequest(
metadata=new_metadata,
project=properties.VALUES.core.project.Get(
required=True),
))],
http=self.http,
batch_url=self.batch_url,
errors=errors,
custom_get_requests=None))
if errors:
utils.RaiseToolException(
errors,
error_message='Could not add SSH key to project metadata:')
def EnsureSSHKeyIsInProject(self, user):
"""Ensures that the user's public SSH key is in the project metadata."""
# First, grab the public key from the user's computer. If the
# public key doesn't already exist, GetPublicKey() should create
# it.
public_key = self.GetPublicKey()
# Second, let's make sure the public key is in the project metadata.
project = self.GetProject()
existing_metadata = project.commonInstanceMetadata
new_metadata = _AddSSHKeyToMetadataMessage(
self.messages, user, public_key, existing_metadata)
if new_metadata != existing_metadata:
self.SetProjectMetadata(new_metadata)
return True
else:
return False
def GetPublicKey(self):
"""Generates an SSH key using ssh-key (if necessary) and returns it."""
public_ssh_key_file = self.ssh_key_file + '.pub'
if (not os.path.exists(self.ssh_key_file) or
not os.path.exists(public_ssh_key_file)):
log.warn('You do not have an SSH key for Google Compute Engine.')
log.warn('[%s] will be executed to generate a key.',
self.ssh_keygen_executable)
ssh_directory = os.path.dirname(public_ssh_key_file)
if not os.path.exists(ssh_directory):
if console_io.PromptContinue(
'This tool needs to create the directory [{0}] before being able '
'to generate SSH keys.'.format(ssh_directory)):
files.MakeDir(ssh_directory, 0700)
else:
raise exceptions.ToolException('SSH key generation aborted by user.')
keygen_args = [
self.ssh_keygen_executable,
'-t', 'rsa',
'-f', self.ssh_key_file,
]
_RunExecutable(keygen_args)
with open(public_ssh_key_file) as f:
return f.readline().strip()
@property
def resource_type(self):
return 'instances'
def Run(self, args):
"""Subclasses must call this in their Run() before continuing."""
self.scp_executable = files.FindExecutableOnPath('scp')
self.ssh_executable = files.FindExecutableOnPath('ssh')
self.ssh_keygen_executable = files.FindExecutableOnPath('ssh-keygen')
if (not self.scp_executable or
not self.ssh_executable or
not self.ssh_keygen_executable):
raise exceptions.ToolException('Your platform does not support OpenSSH.')
self.ssh_key_file = os.path.realpath(os.path.expanduser(
args.ssh_key_file or constants.DEFAULT_SSH_KEY_FILE))
class BaseSSHCLICommand(BaseSSHCommand):
"""Base class for subcommands that use ssh or scp."""
@staticmethod
def Args(parser):
BaseSSHCommand.Args(parser)
parser.add_argument(
'--dry-run',
action='store_true',
help=('If provided, prints the command that would be run to standard '
'out instead of executing it.'))
plain = parser.add_argument(
'--plain',
action='store_true',
help='Suppresses the automatic addition of ssh/scp flags.')
plain.detailed_help = """\
Suppresses the automatic addition of *ssh(1)*/*scp(1)* flags. This flag
is useful if you want to take care of authentication yourself or
re-enable strict host checking.
"""
def GetDefaultFlags(self):
"""Returns a list of default commandline flags."""
return [
'-i', self.ssh_key_file,
'-o', 'UserKnownHostsFile=/dev/null',
'-o', 'CheckHostIP=no',
'-o', 'StrictHostKeyChecking=no',
]
def GetInstanceExternalIpAddress(self, instance_ref):
"""Returns the external ip address for the given instance."""
request = (self.compute.instances,
'Get',
self.messages.ComputeInstancesGetRequest(
instance=instance_ref.Name(),
project=self.project,
zone=instance_ref.zone))
errors = []
objects = list(request_helper.MakeRequests(
requests=[request],
http=self.http,
batch_url=self.batch_url,
errors=errors,
custom_get_requests=None))
if errors:
utils.RaiseToolException(
errors,
error_message='Could not fetch instance:')
return GetExternalIPAddress(objects[0])
def WaitUntilSSHable(self, user, external_ip_address):
"""Blocks until SSHing to the given host succeeds."""
ssh_args_for_polling = [self.ssh_executable]
ssh_args_for_polling.extend(self.GetDefaultFlags())
ssh_args_for_polling.append(UserHost(user, external_ip_address))
ssh_args_for_polling.append('true')
start_sec = time_utils.CurrentTimeSec()
while True:
logging.debug('polling instance for SSHability')
retval = subprocess.call(ssh_args_for_polling)
if retval == 0:
break
if (time_utils.CurrentTimeSec() - start_sec >
_SSH_KEY_PROPAGATION_TIMEOUT_SEC):
raise exceptions.ToolException(
'Could not SSH to the instance. It is possible that '
'your SSH key has not propagated to the instance yet. '
'Try running this command again. If you still cannot connect, '
'verify that the firewall and instance are set to accept '
'ssh traffic.')
time_utils.Sleep(5)
def ActuallyRun(self, args, cmd_args, user, external_ip_address,
interactive_ssh=False):
"""Runs the scp/ssh command specified in cmd_args.
Args:
args: argparse.Namespace, The calling command invocation args.
cmd_args: [str], The argv for the command to execute.
user: str, The user name.
external_ip_address: str, The external IP address.
interactive_ssh: bool, True if cmd_args is an interactive ssh session.
"""
if args.dry_run:
log.out.Print(' '.join(cmd_args))
return
if self.EnsureSSHKeyIsInProject(user):
self.WaitUntilSSHable(user, external_ip_address)
logging.debug('%s command: %s', cmd_args[0], ' '.join(cmd_args))
_RunExecutable(cmd_args, interactive_ssh=interactive_ssh)
|
bsd-3-clause
|
zouyapeng/horizon-newtouch
|
openstack_dashboard/dashboards/project/volumes/urls.py
|
6
|
1689
|
# Copyright 2012 Nebula, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django.conf.urls import include # noqa
from django.conf.urls import patterns
from django.conf.urls import url
from openstack_dashboard.dashboards.project.volumes.backups \
import urls as backups_urls
from openstack_dashboard.dashboards.project.volumes.snapshots \
import urls as snapshot_urls
from openstack_dashboard.dashboards.project.volumes import views
from openstack_dashboard.dashboards.project.volumes.volumes \
import urls as volume_urls
urlpatterns = patterns('',
url(r'^$', views.IndexView.as_view(), name='index'),
url(r'^\?tab=volumes_and_snapshots__snapshots_tab$',
views.IndexView.as_view(), name='snapshots_tab'),
url(r'^\?tab=volumes_and_snapshots__volumes_tab$',
views.IndexView.as_view(), name='volumes_tab'),
url(r'^\?tab=volumes_and_snapshots__backups_tab$',
views.IndexView.as_view(), name='backups_tab'),
url(r'', include(volume_urls, namespace='volumes')),
url(r'backups/', include(backups_urls, namespace='backups')),
url(r'snapshots/', include(snapshot_urls, namespace='snapshots')),
)
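# Illustrative reverse() lookups for the named patterns above (hypothetical
# shell session; the 'horizon:project:volumes' prefix assumes the standard
# dashboard/panel registration):
#
#     >>> from django.core.urlresolvers import reverse  # doctest: +SKIP
#     >>> reverse('horizon:project:volumes:index')
#     >>> reverse('horizon:project:volumes:snapshots_tab')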
|
apache-2.0
|
wesm/arrow
|
dev/archery/archery/integration/tester.py
|
6
|
1993
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# Base class for language-specific integration test harnesses
import subprocess
from .util import log
class Tester(object):
PRODUCER = False
CONSUMER = False
FLIGHT_SERVER = False
FLIGHT_CLIENT = False
def __init__(self, debug=False, **args):
self.args = args
self.debug = debug
def run_shell_command(self, cmd):
cmd = ' '.join(cmd)
if self.debug:
log(cmd)
subprocess.check_call(cmd, shell=True)
def json_to_file(self, json_path, arrow_path):
raise NotImplementedError
def stream_to_file(self, stream_path, file_path):
raise NotImplementedError
def file_to_stream(self, file_path, stream_path):
raise NotImplementedError
def validate(self, json_path, arrow_path):
raise NotImplementedError
def flight_server(self, scenario_name=None):
"""Start the Flight server on a free port.
This should be a context manager that returns the port as the
managed object, and cleans up the server on exit.
"""
raise NotImplementedError
def flight_request(self, port, json_path=None, scenario_name=None):
raise NotImplementedError
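# Hypothetical sketch of a language-specific subclass (the binary name and
# flags are illustrative, not a real harness):
#
# class FakeTester(Tester):
#     PRODUCER = True
#     CONSUMER = True
#     def json_to_file(self, json_path, arrow_path):
#         self.run_shell_command(['fake-integration',
#                                 '--mode=JSON_TO_ARROW',
#                                 '--json=' + json_path,
#                                 '--arrow=' + arrow_path])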
|
apache-2.0
|
DoubleNegativeVisualEffects/cortex
|
test/IECore/PointsMotionOpTest.py
|
12
|
7993
|
##########################################################################
#
# Copyright (c) 2010, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * Neither the name of Image Engine Design nor the names of any
# other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import unittest
from IECore import *
class PointsMotionOpTest( unittest.TestCase ) :
def _buildPoints( self, time ):
p = PointsPrimitive( 5 )
p[ "P" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, V3fVectorData( [ V3f(time*1), V3f(time*2), V3f(time*3), V3f(time*4), V3f(time*5) ] ) )
p[ "id" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, IntVectorData( [ 1, 2, 3, 4, 5 ] ) )
p[ "float" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Varying, FloatVectorData( [ time*1, time*2, time*3, time*4, time*5 ] ) )
p[ "vec3" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, V3fVectorData( [ V3f(time*2), V3f(time*3), V3f(time*4), V3f(time*5), V3f(time*6) ] ) )
p[ "vec2" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, V2fVectorData( [ V2f(time*2), V2f(time*3), V2f(time*4), V2f(time*5), V2f(time*6) ] ) )
p[ "C" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, Color3fVectorData( [ Color3f(1), Color3f(2), Color3f(4), Color3f(5), Color3f(6) ] ) )
p[ "C2" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Vertex, Color4fVectorData( [ Color4f(1), Color4f(2), Color4f(4), Color4f(5), Color4f(6) ] ) )
p[ "G" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Uniform, V3fVectorData( [ V3f(time) ] ) )
p[ "c" ] = PrimitiveVariable( PrimitiveVariable.Interpolation.Constant, FloatData( time ) )
return p
def testConstruction( self ) :
op = PointsMotionOp()
def testPrimvarCopy( self ) :
p1 = self._buildPoints( 1.0 )
p2 = self._buildPoints( 2.0 )
points = ObjectVector()
points.append( p1 )
points.append( p2 )
op = PointsMotionOp()
result = op( snapshotTimes = FloatVectorData( [ 1, 2 ] ), pointsPrimitives = points )
# proves that the output corresponds exactly to the inputs
self.assertEqual( len(result), 2 )
self.assertEqual( result[1], p1 )
self.assertEqual( result[2], p2 )
self.assertNotEqual( result[1], p2 )
p1[ "P" ].data[3] = V3f(1,2,3)
self.assertEqual( p1["P"].data[3], V3f(1,2,3) )
# proves that mutating the original input value keeps the output untouched.
self.assertEqual( result[1]["P"].data[3], V3f(4) )
def testInvalidInputParams( self ):
p1 = self._buildPoints( 1.0 )
p2 = self._buildPoints( 2.0 )
points = ObjectVector()
points.append( p1 )
points.append( p2 )
op = PointsMotionOp()
op['pointsPrimitives'] = points
# snapshots count != primitive count
op['snapshotTimes'] = FloatVectorData( [ 1 ] )
self.assertRaises( RuntimeError, op.__call__, **{} )
# test no id
op['snapshotTimes'] = FloatVectorData( [ 1, 2 ] )
op['idPrimVarName'] = "dontExist"
self.assertRaises( RuntimeError, op.__call__, **{} )
op['idPrimVarName'] = "id"
op() # should work
# test unmatching primvars
del p1["vec2"]
self.assertRaises( RuntimeError, op.__call__, **{} )
del p2["vec2"]
op() # should work
# test no P
del p2["P"]
self.assertRaises( RuntimeError, op.__call__, **{} )
def testDifferentPointsOrder( self ):
def rearrangeVec( vec ):
lastIndex = len(vec) - 1
for i in xrange(0,len(vec)/2 ):
tmp = vec[i]
vec[i] = vec[ lastIndex ]
vec[ lastIndex ] = tmp
lastIndex -= 1
p1 = self._buildPoints( 1.0 )
p2 = self._buildPoints( 1.0 )
points = ObjectVector()
points.append( p1 )
points.append( p2 )
op = PointsMotionOp()
for primVar in p2.values() :
if primVar.interpolation in [ PrimitiveVariable.Interpolation.Vertex, PrimitiveVariable.Interpolation.Varying, PrimitiveVariable.Interpolation.FaceVarying ] :
rearrangeVec( primVar.data )
self.assertNotEqual( p1, p2 )
result = op( snapshotTimes = FloatVectorData( [ 1, 2 ] ), pointsPrimitives = points )
self.assertEqual( result[1], p1 )
self.assertEqual( result[2], p1 )
def testSingleSnapshot( self ):
p1 = self._buildPoints( 1.0 )
points = ObjectVector()
points.append( p1 )
op = PointsMotionOp()
result = op( snapshotTimes = FloatVectorData( [ 1 ] ), pointsPrimitives = points )
self.assertEqual( len(result), 1 )
self.assertEqual( result[1], p1 )
def testNoSnapshots( self ):
points = ObjectVector()
op = PointsMotionOp()
result = op( snapshotTimes = FloatVectorData( [] ), pointsPrimitives = points )
self.assertEqual( len(result), 0 )
def testMissingSnapshots( self ):
p1 = self._buildPoints( 1.0 )
p2 = self._buildPoints( 2.0 )
p3 = self._buildPoints( 3.0 )
p4 = self._buildPoints( 4.0 )
p5 = self._buildPoints( 5.0 )
points = ObjectVector()
points.append( p1 )
points.append( p2 )
points.append( p3 )
points.append( p4 )
points.append( p5 )
# change some ids to simulate dead particles, and birth of new particles
p1["id"].data[0] = 10
p1["id"].data[1] = 11
p2["id"].data[1] = 11
p4["id"].data[3] = 12
p5["id"].data[3] = 12
p5["id"].data[4] = 13
op = PointsMotionOp()
result = op( snapshotTimes = FloatVectorData( [ 1, 2, 3, 4, 5 ] ), pointsPrimitives = points, maskedPrimVars = StringVectorData( [ 'vec3' ] ) )
self.assertEqual( len(result), 5 )
# checking ids
self.assertEqual( result[1]["id"].data, IntVectorData([ 10,11,3,4,5,1,2,12,13 ]) )
self.assertEqual( result[1]["id"].data, result[2]["id"].data )
self.assertEqual( result[1]["id"].data, result[3]["id"].data )
self.assertEqual( result[1]["id"].data, result[4]["id"].data )
self.assertEqual( result[1]["id"].data, result[5]["id"].data )
# checking if unmasked prim var was filled with closest available value
self.assertEqual( result[1]["P"].data, V3fVectorData( [ V3f(1*1), V3f(1*2), V3f(1*3), V3f(1*4), V3f(1*5), V3f(2*1), V3f(3*2), V3f(4*4), V3f(5*5) ] ) )
self.assertEqual( result[3]["P"].data, V3fVectorData( [ V3f(1*1), V3f(2*2), V3f(3*3), V3f(3*4), V3f(3*5), V3f(3*1), V3f(3*2), V3f(4*4), V3f(5*5) ] ) )
self.assertEqual( result[5]["P"].data, V3fVectorData( [ V3f(1*1), V3f(2*2), V3f(5*3), V3f(3*4), V3f(4*5), V3f(5*1), V3f(5*2), V3f(5*4), V3f(5*5) ] ) )
# checking if masked prim var was filled with zero
self.assertEqual( result[3]["vec3"].data, V3fVectorData( [ V3f(0), V3f(0), V3f(3*4), V3f(3*5), V3f(3*6), V3f(3*2), V3f(3*3), V3f(0), V3f(0) ] ) )
if __name__ == "__main__":
unittest.main()
|
bsd-3-clause
|
ericlink/adms-server
|
playframework-dist/1.1-src/python/Lib/stringprep.py
|
30
|
13794
|
# This file is generated by mkstringprep.py. DO NOT EDIT.
"""Library that exposes various tables found in the StringPrep RFC 3454.
There are two kinds of tables: sets, for which a member test is provided,
and mappings, for which a mapping function is provided.
"""
from unicodedata import ucd_3_2_0 as unicodedata
assert unicodedata.unidata_version == '3.2.0'
def in_table_a1(code):
if unicodedata.category(code) != 'Cn': return False
c = ord(code)
if 0xFDD0 <= c < 0xFDF0: return False
return (c & 0xFFFF) not in (0xFFFE, 0xFFFF)
b1_set = set([173, 847, 6150, 6155, 6156, 6157, 8203, 8204, 8205, 8288, 65279] + range(65024,65040))
def in_table_b1(code):
return ord(code) in b1_set
b3_exceptions = {
0xb5:u'\u03bc', 0xdf:u'ss', 0x130:u'i\u0307', 0x149:u'\u02bcn',
0x17f:u's', 0x1f0:u'j\u030c', 0x345:u'\u03b9', 0x37a:u' \u03b9',
0x390:u'\u03b9\u0308\u0301', 0x3b0:u'\u03c5\u0308\u0301', 0x3c2:u'\u03c3', 0x3d0:u'\u03b2',
0x3d1:u'\u03b8', 0x3d2:u'\u03c5', 0x3d3:u'\u03cd', 0x3d4:u'\u03cb',
0x3d5:u'\u03c6', 0x3d6:u'\u03c0', 0x3f0:u'\u03ba', 0x3f1:u'\u03c1',
0x3f2:u'\u03c3', 0x3f5:u'\u03b5', 0x587:u'\u0565\u0582', 0x1e96:u'h\u0331',
0x1e97:u't\u0308', 0x1e98:u'w\u030a', 0x1e99:u'y\u030a', 0x1e9a:u'a\u02be',
0x1e9b:u'\u1e61', 0x1f50:u'\u03c5\u0313', 0x1f52:u'\u03c5\u0313\u0300', 0x1f54:u'\u03c5\u0313\u0301',
0x1f56:u'\u03c5\u0313\u0342', 0x1f80:u'\u1f00\u03b9', 0x1f81:u'\u1f01\u03b9', 0x1f82:u'\u1f02\u03b9',
0x1f83:u'\u1f03\u03b9', 0x1f84:u'\u1f04\u03b9', 0x1f85:u'\u1f05\u03b9', 0x1f86:u'\u1f06\u03b9',
0x1f87:u'\u1f07\u03b9', 0x1f88:u'\u1f00\u03b9', 0x1f89:u'\u1f01\u03b9', 0x1f8a:u'\u1f02\u03b9',
0x1f8b:u'\u1f03\u03b9', 0x1f8c:u'\u1f04\u03b9', 0x1f8d:u'\u1f05\u03b9', 0x1f8e:u'\u1f06\u03b9',
0x1f8f:u'\u1f07\u03b9', 0x1f90:u'\u1f20\u03b9', 0x1f91:u'\u1f21\u03b9', 0x1f92:u'\u1f22\u03b9',
0x1f93:u'\u1f23\u03b9', 0x1f94:u'\u1f24\u03b9', 0x1f95:u'\u1f25\u03b9', 0x1f96:u'\u1f26\u03b9',
0x1f97:u'\u1f27\u03b9', 0x1f98:u'\u1f20\u03b9', 0x1f99:u'\u1f21\u03b9', 0x1f9a:u'\u1f22\u03b9',
0x1f9b:u'\u1f23\u03b9', 0x1f9c:u'\u1f24\u03b9', 0x1f9d:u'\u1f25\u03b9', 0x1f9e:u'\u1f26\u03b9',
0x1f9f:u'\u1f27\u03b9', 0x1fa0:u'\u1f60\u03b9', 0x1fa1:u'\u1f61\u03b9', 0x1fa2:u'\u1f62\u03b9',
0x1fa3:u'\u1f63\u03b9', 0x1fa4:u'\u1f64\u03b9', 0x1fa5:u'\u1f65\u03b9', 0x1fa6:u'\u1f66\u03b9',
0x1fa7:u'\u1f67\u03b9', 0x1fa8:u'\u1f60\u03b9', 0x1fa9:u'\u1f61\u03b9', 0x1faa:u'\u1f62\u03b9',
0x1fab:u'\u1f63\u03b9', 0x1fac:u'\u1f64\u03b9', 0x1fad:u'\u1f65\u03b9', 0x1fae:u'\u1f66\u03b9',
0x1faf:u'\u1f67\u03b9', 0x1fb2:u'\u1f70\u03b9', 0x1fb3:u'\u03b1\u03b9', 0x1fb4:u'\u03ac\u03b9',
0x1fb6:u'\u03b1\u0342', 0x1fb7:u'\u03b1\u0342\u03b9', 0x1fbc:u'\u03b1\u03b9', 0x1fbe:u'\u03b9',
0x1fc2:u'\u1f74\u03b9', 0x1fc3:u'\u03b7\u03b9', 0x1fc4:u'\u03ae\u03b9', 0x1fc6:u'\u03b7\u0342',
0x1fc7:u'\u03b7\u0342\u03b9', 0x1fcc:u'\u03b7\u03b9', 0x1fd2:u'\u03b9\u0308\u0300', 0x1fd3:u'\u03b9\u0308\u0301',
0x1fd6:u'\u03b9\u0342', 0x1fd7:u'\u03b9\u0308\u0342', 0x1fe2:u'\u03c5\u0308\u0300', 0x1fe3:u'\u03c5\u0308\u0301',
0x1fe4:u'\u03c1\u0313', 0x1fe6:u'\u03c5\u0342', 0x1fe7:u'\u03c5\u0308\u0342', 0x1ff2:u'\u1f7c\u03b9',
0x1ff3:u'\u03c9\u03b9', 0x1ff4:u'\u03ce\u03b9', 0x1ff6:u'\u03c9\u0342', 0x1ff7:u'\u03c9\u0342\u03b9',
0x1ffc:u'\u03c9\u03b9', 0x20a8:u'rs', 0x2102:u'c', 0x2103:u'\xb0c',
0x2107:u'\u025b', 0x2109:u'\xb0f', 0x210b:u'h', 0x210c:u'h',
0x210d:u'h', 0x2110:u'i', 0x2111:u'i', 0x2112:u'l',
0x2115:u'n', 0x2116:u'no', 0x2119:u'p', 0x211a:u'q',
0x211b:u'r', 0x211c:u'r', 0x211d:u'r', 0x2120:u'sm',
0x2121:u'tel', 0x2122:u'tm', 0x2124:u'z', 0x2128:u'z',
0x212c:u'b', 0x212d:u'c', 0x2130:u'e', 0x2131:u'f',
0x2133:u'm', 0x213e:u'\u03b3', 0x213f:u'\u03c0', 0x2145:u'd',
0x3371:u'hpa', 0x3373:u'au', 0x3375:u'ov', 0x3380:u'pa',
0x3381:u'na', 0x3382:u'\u03bca', 0x3383:u'ma', 0x3384:u'ka',
0x3385:u'kb', 0x3386:u'mb', 0x3387:u'gb', 0x338a:u'pf',
0x338b:u'nf', 0x338c:u'\u03bcf', 0x3390:u'hz', 0x3391:u'khz',
0x3392:u'mhz', 0x3393:u'ghz', 0x3394:u'thz', 0x33a9:u'pa',
0x33aa:u'kpa', 0x33ab:u'mpa', 0x33ac:u'gpa', 0x33b4:u'pv',
0x33b5:u'nv', 0x33b6:u'\u03bcv', 0x33b7:u'mv', 0x33b8:u'kv',
0x33b9:u'mv', 0x33ba:u'pw', 0x33bb:u'nw', 0x33bc:u'\u03bcw',
0x33bd:u'mw', 0x33be:u'kw', 0x33bf:u'mw', 0x33c0:u'k\u03c9',
0x33c1:u'm\u03c9', 0x33c3:u'bq', 0x33c6:u'c\u2215kg', 0x33c7:u'co.',
0x33c8:u'db', 0x33c9:u'gy', 0x33cb:u'hp', 0x33cd:u'kk',
0x33ce:u'km', 0x33d7:u'ph', 0x33d9:u'ppm', 0x33da:u'pr',
0x33dc:u'sv', 0x33dd:u'wb', 0xfb00:u'ff', 0xfb01:u'fi',
0xfb02:u'fl', 0xfb03:u'ffi', 0xfb04:u'ffl', 0xfb05:u'st',
0xfb06:u'st', 0xfb13:u'\u0574\u0576', 0xfb14:u'\u0574\u0565', 0xfb15:u'\u0574\u056b',
0xfb16:u'\u057e\u0576', 0xfb17:u'\u0574\u056d', 0x1d400:u'a', 0x1d401:u'b',
0x1d402:u'c', 0x1d403:u'd', 0x1d404:u'e', 0x1d405:u'f',
0x1d406:u'g', 0x1d407:u'h', 0x1d408:u'i', 0x1d409:u'j',
0x1d40a:u'k', 0x1d40b:u'l', 0x1d40c:u'm', 0x1d40d:u'n',
0x1d40e:u'o', 0x1d40f:u'p', 0x1d410:u'q', 0x1d411:u'r',
0x1d412:u's', 0x1d413:u't', 0x1d414:u'u', 0x1d415:u'v',
0x1d416:u'w', 0x1d417:u'x', 0x1d418:u'y', 0x1d419:u'z',
0x1d434:u'a', 0x1d435:u'b', 0x1d436:u'c', 0x1d437:u'd',
0x1d438:u'e', 0x1d439:u'f', 0x1d43a:u'g', 0x1d43b:u'h',
0x1d43c:u'i', 0x1d43d:u'j', 0x1d43e:u'k', 0x1d43f:u'l',
0x1d440:u'm', 0x1d441:u'n', 0x1d442:u'o', 0x1d443:u'p',
0x1d444:u'q', 0x1d445:u'r', 0x1d446:u's', 0x1d447:u't',
0x1d448:u'u', 0x1d449:u'v', 0x1d44a:u'w', 0x1d44b:u'x',
0x1d44c:u'y', 0x1d44d:u'z', 0x1d468:u'a', 0x1d469:u'b',
0x1d46a:u'c', 0x1d46b:u'd', 0x1d46c:u'e', 0x1d46d:u'f',
0x1d46e:u'g', 0x1d46f:u'h', 0x1d470:u'i', 0x1d471:u'j',
0x1d472:u'k', 0x1d473:u'l', 0x1d474:u'm', 0x1d475:u'n',
0x1d476:u'o', 0x1d477:u'p', 0x1d478:u'q', 0x1d479:u'r',
0x1d47a:u's', 0x1d47b:u't', 0x1d47c:u'u', 0x1d47d:u'v',
0x1d47e:u'w', 0x1d47f:u'x', 0x1d480:u'y', 0x1d481:u'z',
0x1d49c:u'a', 0x1d49e:u'c', 0x1d49f:u'd', 0x1d4a2:u'g',
0x1d4a5:u'j', 0x1d4a6:u'k', 0x1d4a9:u'n', 0x1d4aa:u'o',
0x1d4ab:u'p', 0x1d4ac:u'q', 0x1d4ae:u's', 0x1d4af:u't',
0x1d4b0:u'u', 0x1d4b1:u'v', 0x1d4b2:u'w', 0x1d4b3:u'x',
0x1d4b4:u'y', 0x1d4b5:u'z', 0x1d4d0:u'a', 0x1d4d1:u'b',
0x1d4d2:u'c', 0x1d4d3:u'd', 0x1d4d4:u'e', 0x1d4d5:u'f',
0x1d4d6:u'g', 0x1d4d7:u'h', 0x1d4d8:u'i', 0x1d4d9:u'j',
0x1d4da:u'k', 0x1d4db:u'l', 0x1d4dc:u'm', 0x1d4dd:u'n',
0x1d4de:u'o', 0x1d4df:u'p', 0x1d4e0:u'q', 0x1d4e1:u'r',
0x1d4e2:u's', 0x1d4e3:u't', 0x1d4e4:u'u', 0x1d4e5:u'v',
0x1d4e6:u'w', 0x1d4e7:u'x', 0x1d4e8:u'y', 0x1d4e9:u'z',
0x1d504:u'a', 0x1d505:u'b', 0x1d507:u'd', 0x1d508:u'e',
0x1d509:u'f', 0x1d50a:u'g', 0x1d50d:u'j', 0x1d50e:u'k',
0x1d50f:u'l', 0x1d510:u'm', 0x1d511:u'n', 0x1d512:u'o',
0x1d513:u'p', 0x1d514:u'q', 0x1d516:u's', 0x1d517:u't',
0x1d518:u'u', 0x1d519:u'v', 0x1d51a:u'w', 0x1d51b:u'x',
0x1d51c:u'y', 0x1d538:u'a', 0x1d539:u'b', 0x1d53b:u'd',
0x1d53c:u'e', 0x1d53d:u'f', 0x1d53e:u'g', 0x1d540:u'i',
0x1d541:u'j', 0x1d542:u'k', 0x1d543:u'l', 0x1d544:u'm',
0x1d546:u'o', 0x1d54a:u's', 0x1d54b:u't', 0x1d54c:u'u',
0x1d54d:u'v', 0x1d54e:u'w', 0x1d54f:u'x', 0x1d550:u'y',
0x1d56c:u'a', 0x1d56d:u'b', 0x1d56e:u'c', 0x1d56f:u'd',
0x1d570:u'e', 0x1d571:u'f', 0x1d572:u'g', 0x1d573:u'h',
0x1d574:u'i', 0x1d575:u'j', 0x1d576:u'k', 0x1d577:u'l',
0x1d578:u'm', 0x1d579:u'n', 0x1d57a:u'o', 0x1d57b:u'p',
0x1d57c:u'q', 0x1d57d:u'r', 0x1d57e:u's', 0x1d57f:u't',
0x1d580:u'u', 0x1d581:u'v', 0x1d582:u'w', 0x1d583:u'x',
0x1d584:u'y', 0x1d585:u'z', 0x1d5a0:u'a', 0x1d5a1:u'b',
0x1d5a2:u'c', 0x1d5a3:u'd', 0x1d5a4:u'e', 0x1d5a5:u'f',
0x1d5a6:u'g', 0x1d5a7:u'h', 0x1d5a8:u'i', 0x1d5a9:u'j',
0x1d5aa:u'k', 0x1d5ab:u'l', 0x1d5ac:u'm', 0x1d5ad:u'n',
0x1d5ae:u'o', 0x1d5af:u'p', 0x1d5b0:u'q', 0x1d5b1:u'r',
0x1d5b2:u's', 0x1d5b3:u't', 0x1d5b4:u'u', 0x1d5b5:u'v',
0x1d5b6:u'w', 0x1d5b7:u'x', 0x1d5b8:u'y', 0x1d5b9:u'z',
0x1d5d4:u'a', 0x1d5d5:u'b', 0x1d5d6:u'c', 0x1d5d7:u'd',
0x1d5d8:u'e', 0x1d5d9:u'f', 0x1d5da:u'g', 0x1d5db:u'h',
0x1d5dc:u'i', 0x1d5dd:u'j', 0x1d5de:u'k', 0x1d5df:u'l',
0x1d5e0:u'm', 0x1d5e1:u'n', 0x1d5e2:u'o', 0x1d5e3:u'p',
0x1d5e4:u'q', 0x1d5e5:u'r', 0x1d5e6:u's', 0x1d5e7:u't',
0x1d5e8:u'u', 0x1d5e9:u'v', 0x1d5ea:u'w', 0x1d5eb:u'x',
0x1d5ec:u'y', 0x1d5ed:u'z', 0x1d608:u'a', 0x1d609:u'b',
0x1d60a:u'c', 0x1d60b:u'd', 0x1d60c:u'e', 0x1d60d:u'f',
0x1d60e:u'g', 0x1d60f:u'h', 0x1d610:u'i', 0x1d611:u'j',
0x1d612:u'k', 0x1d613:u'l', 0x1d614:u'm', 0x1d615:u'n',
0x1d616:u'o', 0x1d617:u'p', 0x1d618:u'q', 0x1d619:u'r',
0x1d61a:u's', 0x1d61b:u't', 0x1d61c:u'u', 0x1d61d:u'v',
0x1d61e:u'w', 0x1d61f:u'x', 0x1d620:u'y', 0x1d621:u'z',
0x1d63c:u'a', 0x1d63d:u'b', 0x1d63e:u'c', 0x1d63f:u'd',
0x1d640:u'e', 0x1d641:u'f', 0x1d642:u'g', 0x1d643:u'h',
0x1d644:u'i', 0x1d645:u'j', 0x1d646:u'k', 0x1d647:u'l',
0x1d648:u'm', 0x1d649:u'n', 0x1d64a:u'o', 0x1d64b:u'p',
0x1d64c:u'q', 0x1d64d:u'r', 0x1d64e:u's', 0x1d64f:u't',
0x1d650:u'u', 0x1d651:u'v', 0x1d652:u'w', 0x1d653:u'x',
0x1d654:u'y', 0x1d655:u'z', 0x1d670:u'a', 0x1d671:u'b',
0x1d672:u'c', 0x1d673:u'd', 0x1d674:u'e', 0x1d675:u'f',
0x1d676:u'g', 0x1d677:u'h', 0x1d678:u'i', 0x1d679:u'j',
0x1d67a:u'k', 0x1d67b:u'l', 0x1d67c:u'm', 0x1d67d:u'n',
0x1d67e:u'o', 0x1d67f:u'p', 0x1d680:u'q', 0x1d681:u'r',
0x1d682:u's', 0x1d683:u't', 0x1d684:u'u', 0x1d685:u'v',
0x1d686:u'w', 0x1d687:u'x', 0x1d688:u'y', 0x1d689:u'z',
0x1d6a8:u'\u03b1', 0x1d6a9:u'\u03b2', 0x1d6aa:u'\u03b3', 0x1d6ab:u'\u03b4',
0x1d6ac:u'\u03b5', 0x1d6ad:u'\u03b6', 0x1d6ae:u'\u03b7', 0x1d6af:u'\u03b8',
0x1d6b0:u'\u03b9', 0x1d6b1:u'\u03ba', 0x1d6b2:u'\u03bb', 0x1d6b3:u'\u03bc',
0x1d6b4:u'\u03bd', 0x1d6b5:u'\u03be', 0x1d6b6:u'\u03bf', 0x1d6b7:u'\u03c0',
0x1d6b8:u'\u03c1', 0x1d6b9:u'\u03b8', 0x1d6ba:u'\u03c3', 0x1d6bb:u'\u03c4',
0x1d6bc:u'\u03c5', 0x1d6bd:u'\u03c6', 0x1d6be:u'\u03c7', 0x1d6bf:u'\u03c8',
0x1d6c0:u'\u03c9', 0x1d6d3:u'\u03c3', 0x1d6e2:u'\u03b1', 0x1d6e3:u'\u03b2',
0x1d6e4:u'\u03b3', 0x1d6e5:u'\u03b4', 0x1d6e6:u'\u03b5', 0x1d6e7:u'\u03b6',
0x1d6e8:u'\u03b7', 0x1d6e9:u'\u03b8', 0x1d6ea:u'\u03b9', 0x1d6eb:u'\u03ba',
0x1d6ec:u'\u03bb', 0x1d6ed:u'\u03bc', 0x1d6ee:u'\u03bd', 0x1d6ef:u'\u03be',
0x1d6f0:u'\u03bf', 0x1d6f1:u'\u03c0', 0x1d6f2:u'\u03c1', 0x1d6f3:u'\u03b8',
0x1d6f4:u'\u03c3', 0x1d6f5:u'\u03c4', 0x1d6f6:u'\u03c5', 0x1d6f7:u'\u03c6',
0x1d6f8:u'\u03c7', 0x1d6f9:u'\u03c8', 0x1d6fa:u'\u03c9', 0x1d70d:u'\u03c3',
0x1d71c:u'\u03b1', 0x1d71d:u'\u03b2', 0x1d71e:u'\u03b3', 0x1d71f:u'\u03b4',
0x1d720:u'\u03b5', 0x1d721:u'\u03b6', 0x1d722:u'\u03b7', 0x1d723:u'\u03b8',
0x1d724:u'\u03b9', 0x1d725:u'\u03ba', 0x1d726:u'\u03bb', 0x1d727:u'\u03bc',
0x1d728:u'\u03bd', 0x1d729:u'\u03be', 0x1d72a:u'\u03bf', 0x1d72b:u'\u03c0',
0x1d72c:u'\u03c1', 0x1d72d:u'\u03b8', 0x1d72e:u'\u03c3', 0x1d72f:u'\u03c4',
0x1d730:u'\u03c5', 0x1d731:u'\u03c6', 0x1d732:u'\u03c7', 0x1d733:u'\u03c8',
0x1d734:u'\u03c9', 0x1d747:u'\u03c3', 0x1d756:u'\u03b1', 0x1d757:u'\u03b2',
0x1d758:u'\u03b3', 0x1d759:u'\u03b4', 0x1d75a:u'\u03b5', 0x1d75b:u'\u03b6',
0x1d75c:u'\u03b7', 0x1d75d:u'\u03b8', 0x1d75e:u'\u03b9', 0x1d75f:u'\u03ba',
0x1d760:u'\u03bb', 0x1d761:u'\u03bc', 0x1d762:u'\u03bd', 0x1d763:u'\u03be',
0x1d764:u'\u03bf', 0x1d765:u'\u03c0', 0x1d766:u'\u03c1', 0x1d767:u'\u03b8',
0x1d768:u'\u03c3', 0x1d769:u'\u03c4', 0x1d76a:u'\u03c5', 0x1d76b:u'\u03c6',
0x1d76c:u'\u03c7', 0x1d76d:u'\u03c8', 0x1d76e:u'\u03c9', 0x1d781:u'\u03c3',
0x1d790:u'\u03b1', 0x1d791:u'\u03b2', 0x1d792:u'\u03b3', 0x1d793:u'\u03b4',
0x1d794:u'\u03b5', 0x1d795:u'\u03b6', 0x1d796:u'\u03b7', 0x1d797:u'\u03b8',
0x1d798:u'\u03b9', 0x1d799:u'\u03ba', 0x1d79a:u'\u03bb', 0x1d79b:u'\u03bc',
0x1d79c:u'\u03bd', 0x1d79d:u'\u03be', 0x1d79e:u'\u03bf', 0x1d79f:u'\u03c0',
0x1d7a0:u'\u03c1', 0x1d7a1:u'\u03b8', 0x1d7a2:u'\u03c3', 0x1d7a3:u'\u03c4',
0x1d7a4:u'\u03c5', 0x1d7a5:u'\u03c6', 0x1d7a6:u'\u03c7', 0x1d7a7:u'\u03c8',
0x1d7a8:u'\u03c9', 0x1d7bb:u'\u03c3', }
def map_table_b3(code):
r = b3_exceptions.get(ord(code))
if r is not None: return r
return code.lower()
def map_table_b2(a):
al = map_table_b3(a)
b = unicodedata.normalize("NFKC", al)
bl = u"".join([map_table_b3(ch) for ch in b])
c = unicodedata.normalize("NFKC", bl)
if b != c:
return c
else:
return al
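# Worked example of the fold/normalize interplay above: U+00DF (sharp s) maps
# through table B.3 to u'ss', which is already NFKC-stable, so B.2 returns it
# unchanged:
#
#     >>> map_table_b2(u'\xdf')
#     u'ss'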
def in_table_c11(code):
return code == u" "
def in_table_c12(code):
return unicodedata.category(code) == "Zs" and code != u" "
def in_table_c11_c12(code):
return unicodedata.category(code) == "Zs"
def in_table_c21(code):
return ord(code) < 128 and unicodedata.category(code) == "Cc"
c22_specials = set([1757, 1807, 6158, 8204, 8205, 8232, 8233, 65279] + range(8288,8292) + range(8298,8304) + range(65529,65533) + range(119155,119163))
def in_table_c22(code):
c = ord(code)
if c < 128: return False
if unicodedata.category(code) == "Cc": return True
return c in c22_specials
def in_table_c21_c22(code):
return unicodedata.category(code) == "Cc" or \
ord(code) in c22_specials
def in_table_c3(code):
return unicodedata.category(code) == "Co"
def in_table_c4(code):
c = ord(code)
if c < 0xFDD0: return False
if c < 0xFDF0: return True
return (ord(code) & 0xFFFF) in (0xFFFE, 0xFFFF)
def in_table_c5(code):
return unicodedata.category(code) == "Cs"
c6_set = set(range(65529,65534))
def in_table_c6(code):
return ord(code) in c6_set
c7_set = set(range(12272,12284))
def in_table_c7(code):
return ord(code) in c7_set
c8_set = set([832, 833, 8206, 8207] + range(8234,8239) + range(8298,8304))
def in_table_c8(code):
return ord(code) in c8_set
c9_set = set([917505] + range(917536,917632))
def in_table_c9(code):
return ord(code) in c9_set
def in_table_d1(code):
return unicodedata.bidirectional(code) in ("R","AL")
def in_table_d2(code):
return unicodedata.bidirectional(code) == "L"
|
mit
|
xiaolonginfo/decode-Django
|
Django-1.5.1/tests/modeltests/custom_managers/tests.py
|
41
|
2797
|
from __future__ import absolute_import
from django.test import TestCase
from django.utils import six
from .models import (ObjectQuerySet, RelatedObject, Person, Book, Car, PersonManager,
PublishedBookManager)
class CustomManagerTests(TestCase):
def test_manager(self):
p1 = Person.objects.create(first_name="Bugs", last_name="Bunny", fun=True)
p2 = Person.objects.create(first_name="Droopy", last_name="Dog", fun=False)
self.assertQuerysetEqual(
Person.objects.get_fun_people(), [
"Bugs Bunny"
],
six.text_type
)
# The RelatedManager used on the 'books' descriptor extends the default
# manager
self.assertTrue(isinstance(p2.books, PublishedBookManager))
b1 = Book.published_objects.create(
title="How to program", author="Rodney Dangerfield", is_published=True
)
b2 = Book.published_objects.create(
title="How to be smart", author="Albert Einstein", is_published=False
)
# The default manager, "objects", doesn't exist, because a custom one
# was provided.
self.assertRaises(AttributeError, lambda: Book.objects)
# The RelatedManager used on the 'authors' descriptor extends the
# default manager
self.assertTrue(isinstance(b2.authors, PersonManager))
self.assertQuerysetEqual(
Book.published_objects.all(), [
"How to program",
],
lambda b: b.title
)
c1 = Car.cars.create(name="Corvette", mileage=21, top_speed=180)
c2 = Car.cars.create(name="Neon", mileage=31, top_speed=100)
self.assertQuerysetEqual(
Car.cars.order_by("name"), [
"Corvette",
"Neon",
],
lambda c: c.name
)
self.assertQuerysetEqual(
Car.fast_cars.all(), [
"Corvette",
],
lambda c: c.name
)
# Each model class gets a "_default_manager" attribute, which is a
# reference to the first manager defined in the class. In this case,
# it's "cars".
self.assertQuerysetEqual(
Car._default_manager.order_by("name"), [
"Corvette",
"Neon",
],
lambda c: c.name
)
def test_related_manager(self):
"""
Make sure an unsaved object's related managers always return an instance
of the same class that the manager's `get_query_set` returns. Refs #19652.
"""
rel_qs = RelatedObject().objs.all()
self.assertIsInstance(rel_qs, ObjectQuerySet)
with self.assertNumQueries(0):
self.assertFalse(rel_qs.exists())
|
gpl-2.0
|
patochectp/navitia
|
source/navitiacommon/navitiacommon/default_values.py
|
1
|
4496
|
# Copyright (c) 2001-2014, Canal TP and/or its affiliates. All rights reserved.
#
# This file is part of Navitia,
# the software to build cool stuff with public transport.
#
# Hope you'll enjoy and contribute to this project,
# powered by Canal TP (www.canaltp.fr).
# Help us simplify mobility and open public transport:
# a non ending quest to the responsive locomotion way of traveling!
#
# LICENCE: This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Stay tuned using
# twitter @navitia
# IRC #navitia on freenode
# https://groups.google.com/d/forum/navitia
# www.navitia.io
"""
This file contains the default values for parameters used in the journey planner.
These parameters are used when the instances are created in the tyr database; they will not be updated automatically.
These parameters can also be used directly by jormungandr if the instance is not known in tyr, typically in a development setup.
"""
import logging
import sys
max_walking_duration_to_pt = 30 * 60
max_bike_duration_to_pt = 30 * 60
max_bss_duration_to_pt = 30 * 60
max_car_duration_to_pt = 30 * 60
max_car_no_park_duration_to_pt = 30 * 60
walking_speed = 1.12
bike_speed = 4.1
bss_speed = 4.1
car_speed = 11.11
car_no_park_speed = 6.94 # 25km/h
max_nb_transfers = 10
journey_order = 'arrival_time'
min_bike = 4 * 60
min_bss = 4 * 60 + 3 * 60 # we want 4 minutes on the bike, so we add the time to pick up and put back the bss
min_car = 5 * 60
# specify the latest time point of request, relative to datetime, in second
max_duration = 60 * 60 * 24 # seconds
# the penalty for each walking transfer
walking_transfer_penalty = 120 # seconds
# night bus filtering parameter
night_bus_filter_max_factor = 1.5
# night bus filtering parameter
night_bus_filter_base_factor = 15 * 60 # seconds
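# Illustrative reading of the two parameters above: a journey is treated as a
# "night bus" and filtered out roughly when its duration exceeds
# night_bus_filter_base_factor + night_bus_filter_max_factor * best_duration.
# E.g. with a best journey of 40 min, the cutoff is 15*60 + 1.5*2400 = 4500 s
# (75 min). (A sketch of the rule; the exact formula lives in jormungandr.)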
# priority value used to choose a kraken instance among valid instances. The greater it is, the more
# likely the instance is to be chosen.
priority = 0
# activate / deactivate calls to bss providers
bss_provider = True
# activate / deactivate calls to car parking providers
car_park_provider = True
# Maximum number of connections allowed in journeys is calculated as
# max_additional_connections + minimum connections among the journeys
max_additional_connections = 2
# the id of physical_mode, as sent by kraken to jormungandr, used by _max_successive_physical_mode rule
successive_physical_mode_to_limit_id = 'physical_mode:Bus'
# Number of greenlets simultaneously used for Real-Time proxies calls
realtime_pool_size = 3
# Minimum number of different suggested journeys
min_nb_journeys = 0
# Maximum number of different suggested journeys
max_nb_journeys = 99999
# Maximum number of successive physical modes for an itinerary
max_successive_physical_mode = 99
# Minimum number of calls to kraken
min_journeys_calls = 1
# Filter on vj using same lines and same stops
final_line_filter = False
# Maximum number of second pass to get more itineraries
max_extra_second_pass = 0
max_nb_crowfly_by_mode = {'walking': 5000, 'car': 5000, 'bike': 5000, 'bss': 5000}
autocomplete_backend = 'kraken'
# Additional time in seconds after the taxi section when used as the first section mode
additional_time_after_first_section_taxi = 5 * 60
# Additional time in seconds before the taxi section when used as the last section mode
additional_time_before_last_section_taxi = 5 * 60
def get_value_or_default(attr, instance, instance_name):
    if not instance or getattr(instance, attr, None) is None:
value = getattr(sys.modules[__name__], attr)
if not instance:
logger = logging.getLogger(__name__)
            logger.warning(
'instance %s not found in db, we use the default value (%s) for the param %s',
instance_name,
value,
attr,
)
return value
else:
return getattr(instance, attr)
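# --- usage sketch (illustrative, not part of the upstream module) ---
# `DummyInstance` below is hypothetical; it only demonstrates the fallback:
# an attribute set on the instance wins, and a missing instance falls back to
# the module-level default defined above (logging a warning).
if __name__ == '__main__':
    class DummyInstance(object):
        walking_speed = 1.5  # overrides the module default of 1.12
    print(get_value_or_default('walking_speed', DummyInstance(), 'demo'))  # 1.5
    print(get_value_or_default('max_nb_transfers', None, 'demo'))  # 10 (default)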
|
agpl-3.0
|
kk9599/pupy
|
pupy/packages/windows/x86/psutil/_pswindows.py
|
66
|
19166
|
#!/usr/bin/env python
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Windows platform implementation."""
import errno
import functools
import os
import sys
from collections import namedtuple
from . import _common
from . import _psutil_windows as cext
from ._common import conn_tmap, usage_percent, isfile_strict
from ._common import sockfam_to_enum, socktype_to_enum
from ._compat import PY3, xrange, lru_cache, long
from ._psutil_windows import (ABOVE_NORMAL_PRIORITY_CLASS,
BELOW_NORMAL_PRIORITY_CLASS,
HIGH_PRIORITY_CLASS,
IDLE_PRIORITY_CLASS,
NORMAL_PRIORITY_CLASS,
REALTIME_PRIORITY_CLASS)
if sys.version_info >= (3, 4):
import enum
else:
enum = None
# process priority constants, import from __init__.py:
# http://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx
__extra__all__ = ["ABOVE_NORMAL_PRIORITY_CLASS", "BELOW_NORMAL_PRIORITY_CLASS",
"HIGH_PRIORITY_CLASS", "IDLE_PRIORITY_CLASS",
"NORMAL_PRIORITY_CLASS", "REALTIME_PRIORITY_CLASS",
"CONN_DELETE_TCB",
"AF_LINK",
]
# --- module level constants (gets pushed up to psutil module)
CONN_DELETE_TCB = "DELETE_TCB"
WAIT_TIMEOUT = 0x00000102 # 258 in decimal
ACCESS_DENIED_SET = frozenset([errno.EPERM, errno.EACCES,
cext.ERROR_ACCESS_DENIED])
if enum is None:
AF_LINK = -1
else:
AddressFamily = enum.IntEnum('AddressFamily', {'AF_LINK': -1})
AF_LINK = AddressFamily.AF_LINK
TCP_STATUSES = {
cext.MIB_TCP_STATE_ESTAB: _common.CONN_ESTABLISHED,
cext.MIB_TCP_STATE_SYN_SENT: _common.CONN_SYN_SENT,
cext.MIB_TCP_STATE_SYN_RCVD: _common.CONN_SYN_RECV,
cext.MIB_TCP_STATE_FIN_WAIT1: _common.CONN_FIN_WAIT1,
cext.MIB_TCP_STATE_FIN_WAIT2: _common.CONN_FIN_WAIT2,
cext.MIB_TCP_STATE_TIME_WAIT: _common.CONN_TIME_WAIT,
cext.MIB_TCP_STATE_CLOSED: _common.CONN_CLOSE,
cext.MIB_TCP_STATE_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
cext.MIB_TCP_STATE_LAST_ACK: _common.CONN_LAST_ACK,
cext.MIB_TCP_STATE_LISTEN: _common.CONN_LISTEN,
cext.MIB_TCP_STATE_CLOSING: _common.CONN_CLOSING,
cext.MIB_TCP_STATE_DELETE_TCB: CONN_DELETE_TCB,
cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
}
if enum is not None:
class Priority(enum.IntEnum):
ABOVE_NORMAL_PRIORITY_CLASS = ABOVE_NORMAL_PRIORITY_CLASS
BELOW_NORMAL_PRIORITY_CLASS = BELOW_NORMAL_PRIORITY_CLASS
HIGH_PRIORITY_CLASS = HIGH_PRIORITY_CLASS
IDLE_PRIORITY_CLASS = IDLE_PRIORITY_CLASS
NORMAL_PRIORITY_CLASS = NORMAL_PRIORITY_CLASS
REALTIME_PRIORITY_CLASS = REALTIME_PRIORITY_CLASS
globals().update(Priority.__members__)
scputimes = namedtuple('scputimes', ['user', 'system', 'idle'])
svmem = namedtuple('svmem', ['total', 'available', 'percent', 'used', 'free'])
pextmem = namedtuple(
'pextmem', ['num_page_faults', 'peak_wset', 'wset', 'peak_paged_pool',
'paged_pool', 'peak_nonpaged_pool', 'nonpaged_pool',
'pagefile', 'peak_pagefile', 'private'])
pmmap_grouped = namedtuple('pmmap_grouped', ['path', 'rss'])
pmmap_ext = namedtuple(
'pmmap_ext', 'addr perms ' + ' '.join(pmmap_grouped._fields))
ntpinfo = namedtuple(
'ntpinfo', ['num_handles', 'ctx_switches', 'user_time', 'kernel_time',
'create_time', 'num_threads', 'io_rcount', 'io_wcount',
'io_rbytes', 'io_wbytes'])
# set later from __init__.py
NoSuchProcess = None
AccessDenied = None
TimeoutExpired = None
@lru_cache(maxsize=512)
def _win32_QueryDosDevice(s):
return cext.win32_QueryDosDevice(s)
def _convert_raw_path(s):
# convert paths using native DOS format like:
# "\Device\HarddiskVolume1\Windows\systemew\file.txt"
# into: "C:\Windows\systemew\file.txt"
if PY3 and not isinstance(s, str):
s = s.decode('utf8')
rawdrive = '\\'.join(s.split('\\')[:3])
driveletter = _win32_QueryDosDevice(rawdrive)
return os.path.join(driveletter, s[len(rawdrive):])
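# Illustrative example (the drive mapping depends on the machine):
#   _convert_raw_path(r"\Device\HarddiskVolume1\Windows\notepad.exe")
#   -> r"C:\Windows\notepad.exe", assuming HarddiskVolume1 maps to drive C: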
def py2_strencode(s, encoding=sys.getfilesystemencoding()):
if PY3 or isinstance(s, str):
return s
else:
try:
return s.encode(encoding)
except UnicodeEncodeError:
# Filesystem codec failed, return the plain unicode
# string (this should never happen).
return s
# --- public functions
def virtual_memory():
"""System virtual memory as a namedtuple."""
mem = cext.virtual_mem()
totphys, availphys, totpagef, availpagef, totvirt, freevirt = mem
#
total = totphys
avail = availphys
free = availphys
used = total - avail
percent = usage_percent((total - avail), total, _round=1)
return svmem(total, avail, percent, used, free)
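# Illustrative result of virtual_memory() (numbers are made up):
#   svmem(total=8589934592, available=4294967296, percent=50.0,
#         used=4294967296, free=4294967296)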
def swap_memory():
"""Swap system memory as a (total, used, free, sin, sout) tuple."""
mem = cext.virtual_mem()
total = mem[2]
free = mem[3]
used = total - free
percent = usage_percent(used, total, _round=1)
return _common.sswap(total, used, free, percent, 0, 0)
def disk_usage(path):
"""Return disk usage associated with path."""
try:
total, free = cext.disk_usage(path)
except WindowsError:
if not os.path.exists(path):
msg = "No such file or directory: '%s'" % path
raise OSError(errno.ENOENT, msg)
raise
used = total - free
percent = usage_percent(used, total, _round=1)
return _common.sdiskusage(total, used, free, percent)
def disk_partitions(all):
"""Return disk partitions."""
rawlist = cext.disk_partitions(all)
return [_common.sdiskpart(*x) for x in rawlist]
def cpu_times():
"""Return system CPU times as a named tuple."""
user, system, idle = cext.cpu_times()
return scputimes(user, system, idle)
def per_cpu_times():
"""Return system per-CPU times as a list of named tuples."""
ret = []
for cpu_t in cext.per_cpu_times():
user, system, idle = cpu_t
item = scputimes(user, system, idle)
ret.append(item)
return ret
def cpu_count_logical():
"""Return the number of logical CPUs in the system."""
return cext.cpu_count_logical()
def cpu_count_physical():
"""Return the number of physical CPUs in the system."""
return cext.cpu_count_phys()
def boot_time():
"""The system boot time expressed in seconds since the epoch."""
return cext.boot_time()
def net_connections(kind, _pid=-1):
"""Return socket connections. If pid == -1 return system-wide
connections (as opposed to connections opened by one process only).
"""
if kind not in conn_tmap:
raise ValueError("invalid %r kind argument; choose between %s"
% (kind, ', '.join([repr(x) for x in conn_tmap])))
families, types = conn_tmap[kind]
rawlist = cext.net_connections(_pid, families, types)
ret = set()
for item in rawlist:
fd, fam, type, laddr, raddr, status, pid = item
status = TCP_STATUSES[status]
fam = sockfam_to_enum(fam)
type = socktype_to_enum(type)
if _pid == -1:
nt = _common.sconn(fd, fam, type, laddr, raddr, status, pid)
else:
nt = _common.pconn(fd, fam, type, laddr, raddr, status)
ret.add(nt)
return list(ret)
def net_if_stats():
ret = cext.net_if_stats()
for name, items in ret.items():
name = py2_strencode(name)
isup, duplex, speed, mtu = items
if hasattr(_common, 'NicDuplex'):
duplex = _common.NicDuplex(duplex)
ret[name] = _common.snicstats(isup, duplex, speed, mtu)
return ret
def net_io_counters():
ret = cext.net_io_counters()
return dict([(py2_strencode(k), v) for k, v in ret.items()])
def net_if_addrs():
ret = []
for items in cext.net_if_addrs():
items = list(items)
items[0] = py2_strencode(items[0])
ret.append(items)
return ret
def users():
"""Return currently connected users as a list of namedtuples."""
retlist = []
rawlist = cext.users()
for item in rawlist:
user, hostname, tstamp = item
user = py2_strencode(user)
nt = _common.suser(user, None, hostname, tstamp)
retlist.append(nt)
return retlist
pids = cext.pids
pid_exists = cext.pid_exists
disk_io_counters = cext.disk_io_counters
ppid_map = cext.ppid_map # not meant to be public
def wrap_exceptions(fun):
"""Decorator which translates bare OSError and WindowsError
exceptions into NoSuchProcess and AccessDenied.
"""
@functools.wraps(fun)
def wrapper(self, *args, **kwargs):
try:
return fun(self, *args, **kwargs)
except OSError as err:
# support for private module import
if NoSuchProcess is None or AccessDenied is None:
raise
if err.errno in ACCESS_DENIED_SET:
raise AccessDenied(self.pid, self._name)
if err.errno == errno.ESRCH:
raise NoSuchProcess(self.pid, self._name)
raise
return wrapper
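# Illustrative: a method decorated with @wrap_exceptions that raises
# OSError(errno.ESRCH) surfaces as NoSuchProcess(pid, name), and an errno
# from ACCESS_DENIED_SET surfaces as AccessDenied(pid, name).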
class Process(object):
"""Wrapper class around underlying C implementation."""
__slots__ = ["pid", "_name", "_ppid"]
def __init__(self, pid):
self.pid = pid
self._name = None
self._ppid = None
@wrap_exceptions
def name(self):
"""Return process name, which on Windows is always the final
part of the executable.
"""
# This is how PIDs 0 and 4 are always represented in taskmgr
# and process-hacker.
if self.pid == 0:
return "System Idle Process"
elif self.pid == 4:
return "System"
else:
try:
                # Note: this will fail with AccessDenied for most PIDs owned
                # by another user but it's faster.
return py2_strencode(os.path.basename(self.exe()))
except AccessDenied:
return py2_strencode(cext.proc_name(self.pid))
@wrap_exceptions
def exe(self):
# Note: os.path.exists(path) may return False even if the file
# is there, see:
# http://stackoverflow.com/questions/3112546/os-path-exists-lies
# see https://github.com/giampaolo/psutil/issues/414
# see https://github.com/giampaolo/psutil/issues/528
if self.pid in (0, 4):
raise AccessDenied(self.pid, self._name)
return py2_strencode(_convert_raw_path(cext.proc_exe(self.pid)))
@wrap_exceptions
def cmdline(self):
ret = cext.proc_cmdline(self.pid)
if PY3:
return ret
else:
return [py2_strencode(s) for s in ret]
def ppid(self):
try:
return ppid_map()[self.pid]
except KeyError:
raise NoSuchProcess(self.pid, self._name)
def _get_raw_meminfo(self):
try:
return cext.proc_memory_info(self.pid)
except OSError as err:
if err.errno in ACCESS_DENIED_SET:
# TODO: the C ext can probably be refactored in order
# to get this from cext.proc_info()
return cext.proc_memory_info_2(self.pid)
raise
@wrap_exceptions
def memory_info(self):
# on Windows RSS == WorkingSetSize and VSM == PagefileUsage
# fields of PROCESS_MEMORY_COUNTERS struct:
# http://msdn.microsoft.com/en-us/library/windows/desktop/
# ms684877(v=vs.85).aspx
t = self._get_raw_meminfo()
return _common.pmem(t[2], t[7])
@wrap_exceptions
def memory_info_ex(self):
return pextmem(*self._get_raw_meminfo())
def memory_maps(self):
try:
raw = cext.proc_memory_maps(self.pid)
except OSError as err:
# XXX - can't use wrap_exceptions decorator as we're
# returning a generator; probably needs refactoring.
if err.errno in ACCESS_DENIED_SET:
raise AccessDenied(self.pid, self._name)
if err.errno == errno.ESRCH:
raise NoSuchProcess(self.pid, self._name)
raise
else:
for addr, perm, path, rss in raw:
path = _convert_raw_path(path)
addr = hex(addr)
yield (addr, perm, path, rss)
@wrap_exceptions
def kill(self):
return cext.proc_kill(self.pid)
@wrap_exceptions
def send_signal(self, sig):
os.kill(self.pid, sig)
@wrap_exceptions
def wait(self, timeout=None):
if timeout is None:
timeout = cext.INFINITE
else:
# WaitForSingleObject() expects time in milliseconds
timeout = int(timeout * 1000)
ret = cext.proc_wait(self.pid, timeout)
if ret == WAIT_TIMEOUT:
# support for private module import
if TimeoutExpired is None:
raise RuntimeError("timeout expired")
raise TimeoutExpired(timeout, self.pid, self._name)
return ret
@wrap_exceptions
def username(self):
if self.pid in (0, 4):
return 'NT AUTHORITY\\SYSTEM'
return cext.proc_username(self.pid)
@wrap_exceptions
def create_time(self):
# special case for kernel process PIDs; return system boot time
if self.pid in (0, 4):
return boot_time()
try:
return cext.proc_create_time(self.pid)
except OSError as err:
if err.errno in ACCESS_DENIED_SET:
return ntpinfo(*cext.proc_info(self.pid)).create_time
raise
@wrap_exceptions
def num_threads(self):
return ntpinfo(*cext.proc_info(self.pid)).num_threads
@wrap_exceptions
def threads(self):
rawlist = cext.proc_threads(self.pid)
retlist = []
for thread_id, utime, stime in rawlist:
ntuple = _common.pthread(thread_id, utime, stime)
retlist.append(ntuple)
return retlist
@wrap_exceptions
def cpu_times(self):
try:
ret = cext.proc_cpu_times(self.pid)
except OSError as err:
if err.errno in ACCESS_DENIED_SET:
nt = ntpinfo(*cext.proc_info(self.pid))
ret = (nt.user_time, nt.kernel_time)
else:
raise
return _common.pcputimes(*ret)
@wrap_exceptions
def suspend(self):
return cext.proc_suspend(self.pid)
@wrap_exceptions
def resume(self):
return cext.proc_resume(self.pid)
@wrap_exceptions
def cwd(self):
if self.pid in (0, 4):
raise AccessDenied(self.pid, self._name)
# return a normalized pathname since the native C function appends
# "\\" at the and of the path
path = cext.proc_cwd(self.pid)
return py2_strencode(os.path.normpath(path))
@wrap_exceptions
def open_files(self):
if self.pid in (0, 4):
return []
ret = set()
        # Filenames come in native DOS format like:
# "\Device\HarddiskVolume1\Windows\systemew\file.txt"
# Convert the first part in the corresponding drive letter
# (e.g. "C:\") by using Windows's QueryDosDevice()
raw_file_names = cext.proc_open_files(self.pid)
for _file in raw_file_names:
_file = _convert_raw_path(_file)
if isfile_strict(_file):
if not PY3:
_file = py2_strencode(_file)
ntuple = _common.popenfile(_file, -1)
ret.add(ntuple)
return list(ret)
@wrap_exceptions
def connections(self, kind='inet'):
return net_connections(kind, _pid=self.pid)
@wrap_exceptions
def nice_get(self):
value = cext.proc_priority_get(self.pid)
if enum is not None:
value = Priority(value)
return value
@wrap_exceptions
def nice_set(self, value):
return cext.proc_priority_set(self.pid, value)
# available on Windows >= Vista
if hasattr(cext, "proc_io_priority_get"):
@wrap_exceptions
def ionice_get(self):
return cext.proc_io_priority_get(self.pid)
@wrap_exceptions
def ionice_set(self, value, _):
if _:
raise TypeError("set_proc_ionice() on Windows takes only "
"1 argument (2 given)")
if value not in (2, 1, 0):
raise ValueError("value must be 2 (normal), 1 (low) or 0 "
"(very low); got %r" % value)
return cext.proc_io_priority_set(self.pid, value)
@wrap_exceptions
def io_counters(self):
try:
ret = cext.proc_io_counters(self.pid)
except OSError as err:
if err.errno in ACCESS_DENIED_SET:
nt = ntpinfo(*cext.proc_info(self.pid))
ret = (nt.io_rcount, nt.io_wcount, nt.io_rbytes, nt.io_wbytes)
else:
raise
return _common.pio(*ret)
@wrap_exceptions
def status(self):
suspended = cext.proc_is_suspended(self.pid)
if suspended:
return _common.STATUS_STOPPED
else:
return _common.STATUS_RUNNING
@wrap_exceptions
def cpu_affinity_get(self):
def from_bitmask(x):
return [i for i in xrange(64) if (1 << i) & x]
bitmask = cext.proc_cpu_affinity_get(self.pid)
return from_bitmask(bitmask)
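    # Illustrative: a bitmask of 0b101 (i.e. 5) means the process may run on
    # CPUs 0 and 2, so from_bitmask(5) above returns [0, 2].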
@wrap_exceptions
def cpu_affinity_set(self, value):
def to_bitmask(l):
if not l:
raise ValueError("invalid argument %r" % l)
out = 0
for b in l:
out |= 2 ** b
return out
# SetProcessAffinityMask() states that ERROR_INVALID_PARAMETER
# is returned for an invalid CPU but this seems not to be true,
        # therefore we check CPU validity beforehand.
allcpus = list(range(len(per_cpu_times())))
for cpu in value:
if cpu not in allcpus:
if not isinstance(cpu, (int, long)):
raise TypeError(
"invalid CPU %r; an integer is required" % cpu)
else:
raise ValueError("invalid CPU %r" % cpu)
bitmask = to_bitmask(value)
cext.proc_cpu_affinity_set(self.pid, bitmask)
@wrap_exceptions
def num_handles(self):
try:
return cext.proc_num_handles(self.pid)
except OSError as err:
if err.errno in ACCESS_DENIED_SET:
return ntpinfo(*cext.proc_info(self.pid)).num_handles
raise
@wrap_exceptions
def num_ctx_switches(self):
ctx_switches = ntpinfo(*cext.proc_info(self.pid)).ctx_switches
# only voluntary ctx switches are supported
return _common.pctxsw(ctx_switches, 0)
|
bsd-3-clause
|
erjohnso/ansible
|
lib/ansible/modules/network/panos/panos_object.py
|
42
|
16678
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Ansible module to manage PaloAltoNetworks Firewall
# (c) 2016, techbizdev <techbizdev@paloaltonetworks.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: panos_object
short_description: create/read/update/delete object in PAN-OS or Panorama
description: >
    - Policy objects form the match criteria for policy rules and many other functions in PAN-OS. These may include
      address objects, address groups, service objects, service groups, and tags.
author: "Bob Hagen (@rnh556)"
version_added: "2.4"
requirements:
- pan-python can be obtained from PyPi U(https://pypi.python.org/pypi/pan-python)
- pandevice can be obtained from PyPi U(https://pypi.python.org/pypi/pandevice)
notes:
- Checkmode is not supported.
- Panorama is supported.
options:
ip_address:
description:
- IP address (or hostname) of PAN-OS device or Panorama management console being configured.
required: true
username:
description:
- Username credentials to use for authentication.
required: false
default: "admin"
password:
description:
- Password credentials to use for authentication.
required: true
api_key:
description:
- API key that can be used instead of I(username)/I(password) credentials.
operation:
description:
- The operation to be performed. Supported values are I(add)/I(delete)/I(find).
required: true
addressobject:
description:
- The name of the address object.
address:
description:
- The IP address of the host or network in CIDR notation.
address_type:
description:
- The type of address object definition. Valid types are I(ip-netmask) and I(ip-range).
addressgroup:
description:
- A static group of address objects or dynamic address group.
static_value:
description:
- A group of address objects to be used in an addressgroup definition.
dynamic_value:
description:
- The filter match criteria to be used in a dynamic addressgroup definition.
serviceobject:
description:
- The name of the service object.
source_port:
description:
- The source port to be used in a service object definition.
destination_port:
description:
- The destination port to be used in a service object definition.
protocol:
description:
- The IP protocol to be used in a service object definition. Valid values are I(tcp) or I(udp).
servicegroup:
description:
- A group of service objects.
services:
description:
- The group of service objects used in a servicegroup definition.
description:
description:
- The description of the object.
tag_name:
description:
- The name of an object or rule tag.
color:
description: >
- The color of the tag object. Valid values are I(red, green, blue, yellow, copper, orange, purple, gray,
light green, cyan, light gray, blue gray, lime, black, gold, and brown).
devicegroup:
description: >
- The name of the Panorama device group. The group must exist on Panorama. If device group is not defined it
is assumed that we are contacting a firewall.
required: false
default: None
'''
EXAMPLES = '''
- name: search for shared address object
panos_object:
ip_address: '{{ ip_address }}'
username: '{{ username }}'
password: '{{ password }}'
operation: 'find'
address: 'DevNet'
- name: create an address group in devicegroup using API key
panos_object:
ip_address: '{{ ip_address }}'
api_key: '{{ api_key }}'
operation: 'add'
addressgroup: 'Prod_DB_Svrs'
static_value: ['prod-db1', 'prod-db2', 'prod-db3']
description: 'Production DMZ database servers'
tag_name: 'DMZ'
devicegroup: 'DMZ Firewalls'
- name: create a global service for TCP 3306
panos_object:
ip_address: '{{ ip_address }}'
api_key: '{{ api_key }}'
operation: 'add'
serviceobject: 'mysql-3306'
destination_port: '3306'
protocol: 'tcp'
description: 'MySQL on tcp/3306'
- name: create a global tag
panos_object:
ip_address: '{{ ip_address }}'
username: '{{ username }}'
password: '{{ password }}'
operation: 'add'
tag_name: 'ProjectX'
color: 'yellow'
description: 'Associated with Project X'
- name: delete an address object from a devicegroup using API key
panos_object:
ip_address: '{{ ip_address }}'
api_key: '{{ api_key }}'
operation: 'delete'
addressobject: 'Win2K test'
'''
RETURN = '''
# Default return values
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import get_exception
try:
import pan.xapi
from pan.xapi import PanXapiError
import pandevice
from pandevice import base
from pandevice import firewall
from pandevice import panorama
from pandevice import objects
import xmltodict
import json
HAS_LIB = True
except ImportError:
HAS_LIB = False
def get_devicegroup(device, devicegroup):
dg_list = device.refresh_devices()
for group in dg_list:
if isinstance(group, pandevice.panorama.DeviceGroup):
if group.name == devicegroup:
return group
return False
def find_object(device, dev_group, obj_name, obj_type):
# Get the firewall objects
obj_type.refreshall(device)
if isinstance(device, pandevice.firewall.Firewall):
addr = device.find(obj_name, obj_type)
return addr
elif isinstance(device, pandevice.panorama.Panorama):
addr = device.find(obj_name, obj_type)
if addr is None:
if dev_group:
device.add(dev_group)
obj_type.refreshall(dev_group)
addr = dev_group.find(obj_name, obj_type)
return addr
else:
return False
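# Illustrative: find_object(fw, None, 'web-srv', objects.AddressObject) returns
# the matching AddressObject on `fw`, or None when nothing matches (`fw` is
# assumed here to be a connected pandevice Firewall).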
def create_object(**kwargs):
if kwargs['addressobject']:
newobject = objects.AddressObject(
name=kwargs['addressobject'],
value=kwargs['address'],
type=kwargs['address_type'],
description=kwargs['description'],
tag=kwargs['tag_name']
)
if newobject.type and newobject.value:
return newobject
else:
return False
elif kwargs['addressgroup']:
newobject = objects.AddressGroup(
name=kwargs['addressgroup'],
static_value=kwargs['static_value'],
dynamic_value=kwargs['dynamic_value'],
description=kwargs['description'],
tag=kwargs['tag_name']
)
if newobject.static_value or newobject.dynamic_value:
return newobject
else:
return False
elif kwargs['serviceobject']:
newobject = objects.ServiceObject(
name=kwargs['serviceobject'],
protocol=kwargs['protocol'],
source_port=kwargs['source_port'],
destination_port=kwargs['destination_port'],
tag=kwargs['tag_name']
)
if newobject.protocol and newobject.destination_port:
return newobject
else:
return False
elif kwargs['servicegroup']:
newobject = objects.ServiceGroup(
name=kwargs['servicegroup'],
value=kwargs['services'],
tag=kwargs['tag_name']
)
if newobject.value:
return newobject
else:
return False
elif kwargs['tag_name']:
newobject = objects.Tag(
name=kwargs['tag_name'],
color=kwargs['color'],
comments=kwargs['description']
)
if newobject.name:
return newobject
else:
return False
else:
return False
def add_object(device, dev_group, new_object):
if dev_group:
dev_group.add(new_object)
else:
device.add(new_object)
new_object.create()
return True
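# Illustrative call sequence (assumes `fw` is a connected pandevice Firewall):
#   obj = objects.AddressObject(name='web-srv', value='10.0.0.5/32',
#                               type='ip-netmask')
#   add_object(fw, None, obj)  # attaches the object to `fw` and creates it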
def main():
argument_spec = dict(
ip_address=dict(required=True),
password=dict(no_log=True),
username=dict(default='admin'),
api_key=dict(no_log=True),
operation=dict(required=True, choices=['add', 'update', 'delete', 'find']),
addressobject=dict(default=None),
addressgroup=dict(default=None),
serviceobject=dict(default=None),
servicegroup=dict(default=None),
address=dict(default=None),
address_type=dict(default='ip-netmask', choices=['ip-netmask', 'ip-range', 'fqdn']),
static_value=dict(type='list', default=None),
dynamic_value=dict(default=None),
protocol=dict(default=None, choices=['tcp', 'udp']),
source_port=dict(default=None),
destination_port=dict(default=None),
services=dict(type='list', default=None),
description=dict(default=None),
tag_name=dict(default=None),
color=dict(default=None, choices=['red', 'green', 'blue', 'yellow', 'copper', 'orange', 'purple',
'gray', 'light green', 'cyan', 'light gray', 'blue gray',
'lime', 'black', 'gold', 'brown']),
devicegroup=dict(default=None)
)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False,
required_one_of=[['api_key', 'password']],
mutually_exclusive=[['addressobject', 'addressgroup',
'serviceobject', 'servicegroup',
'tag_name']]
)
if not HAS_LIB:
module.fail_json(msg='Missing required libraries.')
ip_address = module.params["ip_address"]
password = module.params["password"]
username = module.params['username']
api_key = module.params['api_key']
operation = module.params['operation']
addressobject = module.params['addressobject']
addressgroup = module.params['addressgroup']
serviceobject = module.params['serviceobject']
servicegroup = module.params['servicegroup']
address = module.params['address']
address_type = module.params['address_type']
static_value = module.params['static_value']
dynamic_value = module.params['dynamic_value']
protocol = module.params['protocol']
source_port = module.params['source_port']
destination_port = module.params['destination_port']
services = module.params['services']
description = module.params['description']
tag_name = module.params['tag_name']
color = module.params['color']
devicegroup = module.params['devicegroup']
# Create the device with the appropriate pandevice type
device = base.PanDevice.create_from_device(ip_address, username, password, api_key=api_key)
# If Panorama, validate the devicegroup
dev_group = None
if devicegroup and isinstance(device, panorama.Panorama):
dev_group = get_devicegroup(device, devicegroup)
if dev_group:
device.add(dev_group)
else:
module.fail_json(msg='\'%s\' device group not found in Panorama. Is the name correct?' % devicegroup)
# What type of object are we talking about?
if addressobject:
obj_name = addressobject
obj_type = objects.AddressObject
elif addressgroup:
obj_name = addressgroup
obj_type = objects.AddressGroup
elif serviceobject:
obj_name = serviceobject
obj_type = objects.ServiceObject
elif servicegroup:
obj_name = servicegroup
obj_type = objects.ServiceGroup
elif tag_name:
obj_name = tag_name
obj_type = objects.Tag
else:
module.fail_json(msg='No object type defined!')
# Which operation shall we perform on the object?
if operation == "find":
# Search for the object
match = find_object(device, dev_group, obj_name, obj_type)
# If found, format and return the result
if match:
match_dict = xmltodict.parse(match.element_str())
module.exit_json(
stdout_lines=json.dumps(match_dict, indent=2),
msg='Object matched'
)
else:
module.fail_json(msg='Object \'%s\' not found. Is the name correct?' % obj_name)
elif operation == "delete":
# Search for the object
match = find_object(device, dev_group, obj_name, obj_type)
# If found, delete it
if match:
try:
match.delete()
except PanXapiError:
exc = get_exception()
module.fail_json(msg=exc.message)
module.exit_json(changed=True, msg='Object \'%s\' successfully deleted' % obj_name)
else:
module.fail_json(msg='Object \'%s\' not found. Is the name correct?' % obj_name)
elif operation == "add":
# Search for the object. Fail if found.
match = find_object(device, dev_group, obj_name, obj_type)
if match:
module.fail_json(msg='Object \'%s\' already exists. Use operation: \'update\' to change it.' % obj_name)
else:
try:
new_object = create_object(
addressobject=addressobject,
addressgroup=addressgroup,
serviceobject=serviceobject,
servicegroup=servicegroup,
address=address,
address_type=address_type,
static_value=static_value,
dynamic_value=dynamic_value,
protocol=protocol,
source_port=source_port,
destination_port=destination_port,
services=services,
description=description,
tag_name=tag_name,
color=color
)
changed = add_object(device, dev_group, new_object)
except PanXapiError:
exc = get_exception()
module.fail_json(msg=exc.message)
module.exit_json(changed=changed, msg='Object \'%s\' successfully added' % obj_name)
elif operation == "update":
# Search for the object. Update if found.
match = find_object(device, dev_group, obj_name, obj_type)
if match:
try:
new_object = create_object(
addressobject=addressobject,
addressgroup=addressgroup,
serviceobject=serviceobject,
servicegroup=servicegroup,
address=address,
address_type=address_type,
static_value=static_value,
dynamic_value=dynamic_value,
protocol=protocol,
source_port=source_port,
destination_port=destination_port,
services=services,
description=description,
tag_name=tag_name,
color=color
)
changed = add_object(device, dev_group, new_object)
except PanXapiError:
exc = get_exception()
module.fail_json(msg=exc.message)
module.exit_json(changed=changed, msg='Object \'%s\' successfully updated.' % obj_name)
else:
module.fail_json(msg='Object \'%s\' does not exist. Use operation: \'add\' to add it.' % obj_name)
if __name__ == '__main__':
main()
|
gpl-3.0
|
HrWangChengdu/CS231n
|
assignment3/cs231n/image_utils.py
|
16
|
2342
|
import urllib2, os, tempfile
import numpy as np
from scipy.misc import imread
from cs231n.fast_layers import conv_forward_fast
"""
Utility functions used for viewing and processing images.
"""
def blur_image(X):
"""
A very gentle image blurring operation, to be used as a regularizer for image
generation.
Inputs:
- X: Image data of shape (N, 3, H, W)
Returns:
- X_blur: Blurred version of X, of shape (N, 3, H, W)
"""
w_blur = np.zeros((3, 3, 3, 3))
b_blur = np.zeros(3)
blur_param = {'stride': 1, 'pad': 1}
for i in xrange(3):
w_blur[i, i] = np.asarray([[1, 2, 1], [2, 188, 2], [1, 2, 1]], dtype=np.float32)
w_blur /= 200.0
return conv_forward_fast(X, w_blur, b_blur, blur_param)[0]
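# Illustrative: each channel is convolved with the 3x3 kernel
#   [[1, 2, 1], [2, 188, 2], [1, 2, 1]] / 200
# so the output keeps 94% of the original pixel plus a light neighborhood blur.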
def preprocess_image(img, mean_img, mean='image'):
"""
  Convert to float, transpose, and subtract the mean pixel
Input:
- img: (H, W, 3)
Returns:
  - (1, 3, H, W)
"""
if mean == 'image':
mean = mean_img
elif mean == 'pixel':
mean = mean_img.mean(axis=(1, 2), keepdims=True)
elif mean == 'none':
mean = 0
else:
raise ValueError('mean must be image or pixel or none')
return img.astype(np.float32).transpose(2, 0, 1)[None] - mean
def deprocess_image(img, mean_img, mean='image', renorm=False):
"""
Add mean pixel, transpose, and convert to uint8
Input:
- (1, 3, H, W) or (3, H, W)
Returns:
- (H, W, 3)
"""
if mean == 'image':
mean = mean_img
elif mean == 'pixel':
mean = mean_img.mean(axis=(1, 2), keepdims=True)
elif mean == 'none':
mean = 0
else:
raise ValueError('mean must be image or pixel or none')
if img.ndim == 3:
img = img[None]
img = (img + mean)[0].transpose(1, 2, 0)
if renorm:
low, high = img.min(), img.max()
img = 255.0 * (img - low) / (high - low)
return img.astype(np.uint8)
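# Illustrative round trip (shapes only; mean_img is any (3, H, W) array):
#   x = preprocess_image(img, mean_img)    # (H, W, 3) -> (1, 3, H, W)
#   img2 = deprocess_image(x, mean_img)    # (1, 3, H, W) -> (H, W, 3)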
def image_from_url(url):
"""
Read an image from a URL. Returns a numpy array with the pixel data.
We write the image to a temporary file then read it back. Kinda gross.
"""
try:
f = urllib2.urlopen(url)
_, fname = tempfile.mkstemp()
with open(fname, 'wb') as ff:
ff.write(f.read())
img = imread(fname)
os.remove(fname)
return img
except urllib2.URLError as e:
print 'URL Error: ', e.reason, url
except urllib2.HTTPError as e:
print 'HTTP Error: ', e.code, url
|
mit
|
smarter/aom
|
test/android/get_files.py
|
9
|
3344
|
#
# Copyright (c) 2016, Alliance for Open Media. All rights reserved
#
# This source code is subject to the terms of the BSD 2 Clause License and
# the Alliance for Open Media Patent License 1.0. If the BSD 2 Clause License
# was not distributed with this source code in the LICENSE file, you can
# obtain it at www.aomedia.org/license/software. If the Alliance for Open
# Media Patent License 1.0 was not distributed with this source code in the
# PATENTS file, you can obtain it at www.aomedia.org/license/patent.
#
# This simple script pulls test files from the webm homepage.
# It only downloads a file when:
# 1) the file / test_data folder does not exist locally, or
# 2) the local file's SHA does not match the expected one
import pycurl
import csv
import hashlib
import re
import os.path
import time
import itertools
import sys
import getopt
#globals
url = ''
file_list_path = ''
local_resource_path = ''
# Helper functions:
# A simple function which returns the sha hash of a file in hex
def get_file_sha(filename):
try:
sha_hash = hashlib.sha1()
with open(filename, 'rb') as file:
buf = file.read(HASH_CHUNK)
while len(buf) > 0:
sha_hash.update(buf)
buf = file.read(HASH_CHUNK)
return sha_hash.hexdigest()
except IOError:
print "Error reading " + filename
# Downloads a file from a url, and then checks the sha against the passed
# in sha
def download_and_check_sha(url, filename, sha):
path = os.path.join(local_resource_path, filename)
fp = open(path, "wb")
curl = pycurl.Curl()
curl.setopt(pycurl.URL, url + "/" + filename)
curl.setopt(pycurl.WRITEDATA, fp)
curl.perform()
curl.close()
fp.close()
return get_file_sha(path) == sha
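# Illustrative (the URL is hypothetical):
#   download_and_check_sha('https://example.com/test_data', 'clip.webm', sha)
# returns True only when the downloaded file's SHA-1 matches `sha`.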
#constants
ftp_retries = 3
SHA_COL = 0
NAME_COL = 1
EXPECTED_COL = 2
HASH_CHUNK = 65536
# Main script
try:
opts, args = \
getopt.getopt(sys.argv[1:], \
"u:i:o:", ["url=", "input_csv=", "output_dir="])
except:
print 'get_files.py -u <url> -i <input_csv> -o <output_dir>'
sys.exit(2)
for opt, arg in opts:
if opt == '-u':
url = arg
elif opt in ("-i", "--input_csv"):
file_list_path = os.path.join(arg)
elif opt in ("-o", "--output_dir"):
local_resource_path = os.path.join(arg)
if len(sys.argv) != 7:
print "Expects two paths and a url!"
exit(1)
if not os.path.isdir(local_resource_path):
os.makedirs(local_resource_path)
file_list_csv = open(file_list_path, "rb")
# Our 'csv' file uses multiple spaces as a delimiter, python's
# csv class only uses single character delimiters, so we convert them below
file_list_reader = csv.reader((re.sub(' +', ' ', line) \
for line in file_list_csv), delimiter = ' ')
file_shas = []
file_names = []
for row in file_list_reader:
if len(row) != EXPECTED_COL:
continue
file_shas.append(row[SHA_COL])
file_names.append(row[NAME_COL])
file_list_csv.close()
# Download files, only if they don't already exist and have correct shas
for filename, sha in itertools.izip(file_names, file_shas):
path = os.path.join(local_resource_path, filename)
if os.path.isfile(path) \
and get_file_sha(path) == sha:
print path + ' exists, skipping'
continue
for retry in range(0, ftp_retries):
print "Downloading " + path
if not download_and_check_sha(url, filename, sha):
print "Sha does not match, retrying..."
else:
break
|
bsd-2-clause
|
mwmuni/LIGGGHTS_GUI
|
OpenGL/GL/NV/parameter_buffer_object.py
|
9
|
2582
|
'''OpenGL extension NV.parameter_buffer_object
This module customises the behaviour of the
OpenGL.raw.GL.NV.parameter_buffer_object to provide a more
Python-friendly API
Overview (from the spec)
This extension, in conjunction with NV_gpu_program4, provides a new type
of program parameter than can be used as a constant during vertex,
fragment, or geometry program execution. Each program target has a set of
parameter buffer binding points to which buffer objects can be attached.
A vertex, fragment, or geometry program can read data from the attached
buffer objects using a binding of the form "program.buffer[a][b]". This
binding reads data from the buffer object attached to binding point <a>.
The buffer object attached is treated either as an array of 32-bit words
or an array of four-component vectors, and the binding above reads the
array element numbered <b>.
The use of buffer objects allows applications to change large blocks of
program parameters at once, simply by binding a new buffer object. It
also provides a number of new ways to load parameter values, including
readback from the frame buffer (EXT_pixel_buffer_object), transform
feedback (NV_transform_feedback), buffer object loading functions such as
MapBuffer and BufferData, as well as dedicated parameter buffer update
functions provided by this extension.
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/NV/parameter_buffer_object.txt
'''
from OpenGL import platform, constant, arrays
from OpenGL import extensions, wrapper
import ctypes
from OpenGL.raw.GL import _types, _glgets
from OpenGL.raw.GL.NV.parameter_buffer_object import *
from OpenGL.raw.GL.NV.parameter_buffer_object import _EXTENSION_NAME
def glInitParameterBufferObjectNV():
'''Return boolean indicating whether this extension is available'''
from OpenGL import extensions
return extensions.hasGLExtension( _EXTENSION_NAME )
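# Illustrative guard before using the extension's entry points:
#   if glInitParameterBufferObjectNV():
#       ...  # safe to call glProgramBufferParametersfvNV and friends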
# INPUT glProgramBufferParametersfvNV.params size not checked against count
glProgramBufferParametersfvNV=wrapper.wrapper(glProgramBufferParametersfvNV).setInputArraySize(
'params', None
)
# INPUT glProgramBufferParametersIivNV.params size not checked against count
glProgramBufferParametersIivNV=wrapper.wrapper(glProgramBufferParametersIivNV).setInputArraySize(
'params', None
)
# INPUT glProgramBufferParametersIuivNV.params size not checked against count
glProgramBufferParametersIuivNV=wrapper.wrapper(glProgramBufferParametersIuivNV).setInputArraySize(
'params', None
)
### END AUTOGENERATED SECTION
|
gpl-3.0
|
GiovanniConserva/TestDeploy
|
venv/Lib/site-packages/django/templatetags/i18n.py
|
219
|
19311
|
from __future__ import unicode_literals
import sys
from django.conf import settings
from django.template import Library, Node, TemplateSyntaxError, Variable
from django.template.base import TOKEN_TEXT, TOKEN_VAR, render_value_in_context
from django.template.defaulttags import token_kwargs
from django.utils import six, translation
from django.utils.safestring import SafeData, mark_safe
register = Library()
class GetAvailableLanguagesNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = [(k, translation.ugettext(v)) for k, v in settings.LANGUAGES]
return ''
class GetLanguageInfoNode(Node):
def __init__(self, lang_code, variable):
self.lang_code = lang_code
self.variable = variable
def render(self, context):
lang_code = self.lang_code.resolve(context)
context[self.variable] = translation.get_language_info(lang_code)
return ''
class GetLanguageInfoListNode(Node):
def __init__(self, languages, variable):
self.languages = languages
self.variable = variable
def get_language_info(self, language):
# ``language`` is either a language code string or a sequence
# with the language code as its first item
if len(language[0]) > 1:
return translation.get_language_info(language[0])
else:
return translation.get_language_info(str(language))
def render(self, context):
langs = self.languages.resolve(context)
context[self.variable] = [self.get_language_info(lang) for lang in langs]
return ''
class GetCurrentLanguageNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = translation.get_language()
return ''
class GetCurrentLanguageBidiNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = translation.get_language_bidi()
return ''
class TranslateNode(Node):
def __init__(self, filter_expression, noop, asvar=None,
message_context=None):
self.noop = noop
self.asvar = asvar
self.message_context = message_context
self.filter_expression = filter_expression
if isinstance(self.filter_expression.var, six.string_types):
self.filter_expression.var = Variable("'%s'" %
self.filter_expression.var)
def render(self, context):
self.filter_expression.var.translate = not self.noop
if self.message_context:
self.filter_expression.var.message_context = (
self.message_context.resolve(context))
output = self.filter_expression.resolve(context)
value = render_value_in_context(output, context)
# Restore percent signs. Percent signs in template text are doubled
# so they are not interpreted as string format flags.
is_safe = isinstance(value, SafeData)
value = value.replace('%%', '%')
value = mark_safe(value) if is_safe else value
if self.asvar:
context[self.asvar] = value
return ''
else:
return value
class BlockTranslateNode(Node):
def __init__(self, extra_context, singular, plural=None, countervar=None,
counter=None, message_context=None, trimmed=False, asvar=None):
self.extra_context = extra_context
self.singular = singular
self.plural = plural
self.countervar = countervar
self.counter = counter
self.message_context = message_context
self.trimmed = trimmed
self.asvar = asvar
def render_token_list(self, tokens):
result = []
vars = []
for token in tokens:
if token.token_type == TOKEN_TEXT:
result.append(token.contents.replace('%', '%%'))
elif token.token_type == TOKEN_VAR:
result.append('%%(%s)s' % token.contents)
vars.append(token.contents)
msg = ''.join(result)
if self.trimmed:
msg = translation.trim_whitespace(msg)
return msg, vars
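    # Illustrative: tokens for "Hello {{ name }}" yield ('Hello %(name)s',
    # ['name']); a literal '%' in text is doubled so it survives the later
    # %-formatting pass.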
def render(self, context, nested=False):
if self.message_context:
message_context = self.message_context.resolve(context)
else:
message_context = None
tmp_context = {}
for var, val in self.extra_context.items():
tmp_context[var] = val.resolve(context)
# Update() works like a push(), so corresponding context.pop() is at
# the end of function
context.update(tmp_context)
singular, vars = self.render_token_list(self.singular)
if self.plural and self.countervar and self.counter:
count = self.counter.resolve(context)
context[self.countervar] = count
plural, plural_vars = self.render_token_list(self.plural)
if message_context:
result = translation.npgettext(message_context, singular,
plural, count)
else:
result = translation.ungettext(singular, plural, count)
vars.extend(plural_vars)
else:
if message_context:
result = translation.pgettext(message_context, singular)
else:
result = translation.ugettext(singular)
default_value = context.template.engine.string_if_invalid
def render_value(key):
if key in context:
val = context[key]
else:
val = default_value % key if '%s' in default_value else default_value
return render_value_in_context(val, context)
data = {v: render_value(v) for v in vars}
context.pop()
try:
result = result % data
except (KeyError, ValueError):
if nested:
# Either string is malformed, or it's a bug
raise TemplateSyntaxError("'blocktrans' is unable to format "
"string returned by gettext: %r using %r" % (result, data))
with translation.override(None):
result = self.render(context, nested=True)
if self.asvar:
context[self.asvar] = result
return ''
else:
return result
class LanguageNode(Node):
def __init__(self, nodelist, language):
self.nodelist = nodelist
self.language = language
def render(self, context):
with translation.override(self.language.resolve(context)):
output = self.nodelist.render(context)
return output
@register.tag("get_available_languages")
def do_get_available_languages(parser, token):
"""
This will store a list of available languages
in the context.
Usage::
{% get_available_languages as languages %}
{% for language in languages %}
...
{% endfor %}
This will just pull the LANGUAGES setting from
your setting file (or the default settings) and
put it into the named variable.
"""
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_available_languages' requires 'as variable' (got %r)" % args)
return GetAvailableLanguagesNode(args[2])
@register.tag("get_language_info")
def do_get_language_info(parser, token):
"""
This will store the language information dictionary for the given language
code in a context variable.
Usage::
{% get_language_info for LANGUAGE_CODE as l %}
{{ l.code }}
{{ l.name }}
{{ l.name_translated }}
{{ l.name_local }}
{{ l.bidi|yesno:"bi-directional,uni-directional" }}
"""
args = token.split_contents()
if len(args) != 5 or args[1] != 'for' or args[3] != 'as':
raise TemplateSyntaxError("'%s' requires 'for string as variable' (got %r)" % (args[0], args[1:]))
return GetLanguageInfoNode(parser.compile_filter(args[2]), args[4])
@register.tag("get_language_info_list")
def do_get_language_info_list(parser, token):
"""
This will store a list of language information dictionaries for the given
language codes in a context variable. The language codes can be specified
either as a list of strings or a settings.LANGUAGES style list (or any
sequence of sequences whose first items are language codes).
Usage::
{% get_language_info_list for LANGUAGES as langs %}
{% for l in langs %}
{{ l.code }}
{{ l.name }}
{{ l.name_translated }}
{{ l.name_local }}
{{ l.bidi|yesno:"bi-directional,uni-directional" }}
{% endfor %}
"""
args = token.split_contents()
if len(args) != 5 or args[1] != 'for' or args[3] != 'as':
raise TemplateSyntaxError("'%s' requires 'for sequence as variable' (got %r)" % (args[0], args[1:]))
return GetLanguageInfoListNode(parser.compile_filter(args[2]), args[4])
@register.filter
def language_name(lang_code):
return translation.get_language_info(lang_code)['name']
@register.filter
def language_name_translated(lang_code):
english_name = translation.get_language_info(lang_code)['name']
return translation.ugettext(english_name)
@register.filter
def language_name_local(lang_code):
return translation.get_language_info(lang_code)['name_local']
@register.filter
def language_bidi(lang_code):
return translation.get_language_info(lang_code)['bidi']
@register.tag("get_current_language")
def do_get_current_language(parser, token):
"""
This will store the current language in the context.
Usage::
{% get_current_language as language %}
This will fetch the currently active language and
    put its value into the ``language`` context
variable.
"""
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_current_language' requires 'as variable' (got %r)" % args)
return GetCurrentLanguageNode(args[2])
@register.tag("get_current_language_bidi")
def do_get_current_language_bidi(parser, token):
"""
This will store the current language layout in the context.
Usage::
{% get_current_language_bidi as bidi %}
This will fetch the currently active language's layout and
    put its value into the ``bidi`` context variable.
True indicates right-to-left layout, otherwise left-to-right
"""
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_current_language_bidi' requires 'as variable' (got %r)" % args)
return GetCurrentLanguageBidiNode(args[2])
@register.tag("trans")
def do_translate(parser, token):
"""
This will mark a string for translation and will
translate the string for the current language.
Usage::
{% trans "this is a test" %}
This will mark the string for translation so it will
    be pulled out by make-messages.py into the .po files
and will run the string through the translation engine.
There is a second form::
{% trans "this is a test" noop %}
This will only mark for translation, but will return
the string unchanged. Use it when you need to store
values into forms that should be translated later on.
You can use variables instead of constant strings
to translate stuff you marked somewhere else::
{% trans variable %}
This will just try to translate the contents of
the variable ``variable``. Make sure that the string
in there is something that is in the .po file.
It is possible to store the translated string into a variable::
{% trans "this is a test" as var %}
{{ var }}
Contextual translations are also supported::
{% trans "this is a test" context "greeting" %}
This is equivalent to calling pgettext instead of (u)gettext.
"""
bits = token.split_contents()
if len(bits) < 2:
raise TemplateSyntaxError("'%s' takes at least one argument" % bits[0])
message_string = parser.compile_filter(bits[1])
remaining = bits[2:]
noop = False
asvar = None
message_context = None
seen = set()
invalid_context = {'as', 'noop'}
while remaining:
option = remaining.pop(0)
if option in seen:
raise TemplateSyntaxError(
"The '%s' option was specified more than once." % option,
)
elif option == 'noop':
noop = True
elif option == 'context':
try:
value = remaining.pop(0)
except IndexError:
msg = "No argument provided to the '%s' tag for the context option." % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
if value in invalid_context:
raise TemplateSyntaxError(
"Invalid argument '%s' provided to the '%s' tag for the context option" % (value, bits[0]),
)
message_context = parser.compile_filter(value)
elif option == 'as':
try:
value = remaining.pop(0)
except IndexError:
msg = "No argument provided to the '%s' tag for the as option." % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
asvar = value
else:
raise TemplateSyntaxError(
"Unknown argument for '%s' tag: '%s'. The only options "
"available are 'noop', 'context' \"xxx\", and 'as VAR'." % (
bits[0], option,
)
)
seen.add(option)
return TranslateNode(message_string, noop, asvar, message_context)
@register.tag("blocktrans")
def do_block_translate(parser, token):
"""
This will translate a block of text with parameters.
Usage::
{% blocktrans with bar=foo|filter boo=baz|filter %}
This is {{ bar }} and {{ boo }}.
{% endblocktrans %}
Additionally, this supports pluralization::
{% blocktrans count count=var|length %}
There is {{ count }} object.
{% plural %}
There are {{ count }} objects.
{% endblocktrans %}
This is much like ngettext, only in template syntax.
The "var as value" legacy format is still supported::
{% blocktrans with foo|filter as bar and baz|filter as boo %}
{% blocktrans count var|length as count %}
The translated string can be stored in a variable using `asvar`::
{% blocktrans with bar=foo|filter boo=baz|filter asvar var %}
This is {{ bar }} and {{ boo }}.
{% endblocktrans %}
{{ var }}
Contextual translations are supported::
{% blocktrans with bar=foo|filter context "greeting" %}
This is {{ bar }}.
{% endblocktrans %}
This is equivalent to calling pgettext/npgettext instead of
(u)gettext/(u)ngettext.
"""
bits = token.split_contents()
options = {}
remaining_bits = bits[1:]
asvar = None
while remaining_bits:
option = remaining_bits.pop(0)
if option in options:
raise TemplateSyntaxError('The %r option was specified more '
'than once.' % option)
if option == 'with':
value = token_kwargs(remaining_bits, parser, support_legacy=True)
if not value:
raise TemplateSyntaxError('"with" in %r tag needs at least '
'one keyword argument.' % bits[0])
elif option == 'count':
value = token_kwargs(remaining_bits, parser, support_legacy=True)
if len(value) != 1:
raise TemplateSyntaxError('"count" in %r tag expected exactly '
'one keyword argument.' % bits[0])
elif option == "context":
try:
value = remaining_bits.pop(0)
value = parser.compile_filter(value)
except Exception:
msg = (
'"context" in %r tag expected '
'exactly one argument.') % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
elif option == "trimmed":
value = True
elif option == "asvar":
try:
value = remaining_bits.pop(0)
except IndexError:
msg = "No argument provided to the '%s' tag for the asvar option." % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
asvar = value
else:
raise TemplateSyntaxError('Unknown argument for %r tag: %r.' %
(bits[0], option))
options[option] = value
if 'count' in options:
countervar, counter = list(options['count'].items())[0]
else:
countervar, counter = None, None
if 'context' in options:
message_context = options['context']
else:
message_context = None
extra_context = options.get('with', {})
trimmed = options.get("trimmed", False)
singular = []
plural = []
while parser.tokens:
token = parser.next_token()
if token.token_type in (TOKEN_VAR, TOKEN_TEXT):
singular.append(token)
else:
break
if countervar and counter:
if token.contents.strip() != 'plural':
raise TemplateSyntaxError("'blocktrans' doesn't allow other block tags inside it")
while parser.tokens:
token = parser.next_token()
if token.token_type in (TOKEN_VAR, TOKEN_TEXT):
plural.append(token)
else:
break
if token.contents.strip() != 'endblocktrans':
raise TemplateSyntaxError("'blocktrans' doesn't allow other block tags (seen %r) inside it" % token.contents)
return BlockTranslateNode(extra_context, singular, plural, countervar,
counter, message_context, trimmed=trimmed,
asvar=asvar)
@register.tag
def language(parser, token):
"""
This will enable the given language just for this block.
Usage::
{% language "de" %}
This is {{ bar }} and {{ boo }}.
{% endlanguage %}
"""
bits = token.split_contents()
if len(bits) != 2:
raise TemplateSyntaxError("'%s' takes one argument (language)" % bits[0])
language = parser.compile_filter(bits[1])
nodelist = parser.parse(('endlanguage',))
parser.delete_first_token()
return LanguageNode(nodelist, language)
|
bsd-3-clause
|
withanage/meTypeset
|
bin/interactive.py
|
2
|
15252
|
#!/usr/bin/env python
from __future__ import print_function
__author__ = "Martin Paul Eve"
__email__ = "martin@martineve.com"
"""
A class to handle an interactive prompt.
Portions of this file are Copyright 2014, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""
from debug import Debuggable
import sys
from difflib import SequenceMatcher
import locale
class Interactive(Debuggable):
def __init__(self, global_variables):
self.gv = global_variables
self.debug = self.gv.debug
Debuggable.__init__(self, 'Interactive Prompt Handler')
# ANSI terminal colorization code heavily inspired by pygments:
# http://dev.pocoo.org/hg/pygments-main/file/b2deea5b5030/pygments/console.py
# (pygments is by Tim Hatch, Armin Ronacher, et al.)
self.COLOR_ESCAPE = "\x1b["
self.DARK_COLORS = ["black", "darkred", "darkgreen", "brown", "darkblue",
"purple", "teal", "lightgray"]
self.LIGHT_COLORS = ["darkgray", "red", "green", "yellow", "blue",
"fuchsia", "turquoise", "white"]
self.RESET_COLOR = self.COLOR_ESCAPE + "39;49;00m"
def input_options(self, options, require=False, prompt=None, fallback_prompt=None,
numrange=None, default=None, max_width=72):
"""Prompts a user for input. The sequence of `options` defines the
choices the user has. A single-letter shortcut is inferred for each
option; the user's choice is returned as that single, lower-case
letter. The options should be provided as lower-case strings unless
a particular shortcut is desired; in that case, only that letter
should be capitalized.
By default, the first option is the default. `default` can be provided to
override this. If `require` is provided, then there is no default. The
prompt and fallback prompt are also inferred but can be overridden.
        If numrange is provided, it is a pair of `(low, high)` (both ints)
indicating that, in addition to `options`, the user may enter an
integer in that inclusive range.
`max_width` specifies the maximum number of columns in the
automatically generated prompt string.
"""
# Assign single letters to each option. Also capitalize the options
# to indicate the letter.
letters = {}
display_letters = []
capitalized = []
first = True
for option in options:
# Is a letter already capitalized?
for letter in option:
if letter.isalpha() and letter.upper() == letter:
found_letter = letter
break
else:
# Infer a letter.
for letter in option:
if not letter.isalpha():
continue # Don't use punctuation.
if letter not in letters:
found_letter = letter
break
else:
raise ValueError('no unambiguous lettering found')
letters[found_letter.lower()] = option
index = option.index(found_letter)
# Mark the option's shortcut letter for display.
if not require and ((default is None and not numrange and first) or
(isinstance(default, str) and
found_letter.lower() == default.lower())):
# The first option is the default; mark it.
show_letter = '[%s]' % found_letter.upper()
is_default = True
else:
show_letter = found_letter.upper()
is_default = False
# Colorize the letter shortcut.
show_letter = self.colorize('green' if is_default else 'red',
show_letter)
# Insert the highlighted letter back into the word.
capitalized.append(
option[:index] + show_letter + option[index + 1:]
)
display_letters.append(found_letter.upper())
first = False
# The default is just the first option if unspecified.
if require:
default = None
elif default is None:
if numrange:
default = numrange[0]
else:
default = display_letters[0].lower()
# Make a prompt if one is not provided.
if not prompt:
prompt_parts = []
prompt_part_lengths = []
if numrange:
if isinstance(default, int):
default_name = str(default)
default_name = self.colorize('turquoise', default_name)
tmpl = '# selection (default %s)'
prompt_parts.append(tmpl % default_name)
prompt_part_lengths.append(len(tmpl % str(default)))
else:
prompt_parts.append('# selection')
prompt_part_lengths.append(len(prompt_parts[-1]))
prompt_parts += capitalized
prompt_part_lengths += [len(s) for s in options]
# Wrap the query text.
prompt = ''
line_length = 0
for i, (part, length) in enumerate(zip(prompt_parts,
prompt_part_lengths)):
# Add punctuation.
if i == len(prompt_parts) - 1:
part += '?'
else:
part += ','
length += 1
# Choose either the current line or the beginning of the next.
if line_length + length + 1 > max_width:
prompt += '\n'
line_length = 0
if line_length != 0:
# Not the beginning of the line; need a space.
part = ' ' + part
length += 1
prompt += part
line_length += length
# Make a fallback prompt too. This is displayed if the user enters
# something that is not recognized.
if not fallback_prompt:
fallback_prompt = 'Enter one of '
if numrange:
fallback_prompt += '%i-%i, ' % numrange
fallback_prompt += ', '.join(display_letters) + ':'
resp = self.input_(prompt)
while True:
resp = resp.strip().lower()
# Try default option.
if default is not None and not resp:
resp = default
# Try an integer input if available.
if numrange:
try:
resp = int(resp)
except ValueError:
pass
else:
low, high = numrange
if low <= resp <= high:
return resp
else:
resp = None
# Try a normal letter input.
if resp:
resp = resp[0]
if resp in letters:
return resp
# Prompt for new input.
resp = self.input_(fallback_prompt)
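# Illustrative sketch (not part of the original source): given an Interactive
# instance `ui`, a call such as
#   ui.input_options(['apply', 'skip', 'edit'], numrange=(1, 5))
# builds a prompt like "# selection (default 1), Apply, Skip, Edit?" and
# returns either an int in 1..5 or one of the shortcut letters 'a', 's', 'e';
# an empty response falls back to the default, 1.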
def input_(self, prompt=None):
"""Like `raw_input`, but decodes the result to a Unicode string.
If stdin is not available, a debug message is logged and an empty
string is returned. The prompt is sent to stdout rather than stderr.
A space is printed between the prompt and the input cursor.
"""
# raw_input incorrectly sends prompts to stderr, not stdout, so we
# use print() explicitly to display prompts.
# http://bugs.python.org/issue1927
if prompt:
print(prompt, end=' ')
try:
resp = input()
except EOFError:
self.debug.print_debug('stdin stream ended while input required')
# `resp` is unbound at this point, so fall back to an empty response
return u''
return resp.encode(sys.stdin.encoding or 'utf8', 'ignore').decode(sys.stdin.encoding or 'utf8')
def _encoding(self):
"""Tries to guess the encoding used by the terminal."""
# Determine from locale settings.
try:
return locale.getdefaultlocale()[1] or 'utf8'
except ValueError:
# Invalid locale environment variable setting. To avoid
# failing entirely for no good reason, assume UTF-8.
return 'utf8'
def _colorize(self, color, text):
"""Returns a string that prints the given text in the given color
in a terminal that is ANSI color-aware. The color must be something
in DARK_COLORS or LIGHT_COLORS.
"""
if color in self.DARK_COLORS:
escape = self.COLOR_ESCAPE + "%im" % (self.DARK_COLORS.index(color) + 30)
elif color in self.LIGHT_COLORS:
escape = self.COLOR_ESCAPE + "%i;01m" % (self.LIGHT_COLORS.index(color) + 30)
else:
raise ValueError('no such color %s' % color)
return escape + text + self.RESET_COLOR
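# For example (a sketch, assuming an ANSI-aware terminal): 'red' is index 1 in
# LIGHT_COLORS, so _colorize('red', 'x') yields "\x1b[31;01mx\x1b[39;49;00m".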
def colorize(self, color, text):
"""Colorize text if colored output is enabled. (Like _colorize but
conditional.)
"""
if self.gv.settings.get_setting('color', self) == 'True':
return self._colorize(color, text)
else:
return text
def _colordiff(self, a, b, highlight='red', minor_highlight='lightgray'):
"""Given two values, return the same pair of strings except with
their differences highlighted in the specified color. Strings are
highlighted intelligently to show differences; other values are
stringified and highlighted in their entirety.
"""
if isinstance(a, bytes) or isinstance(b, bytes):
# A path field; decode before diffing. (This check must run before
# the generic string test below, which would otherwise short-circuit
# it on Python 3, where bytes are not str.)
a = self.displayable_path(a)
b = self.displayable_path(b)
if not isinstance(a, str) or not isinstance(b, str):
# Non-strings: use ordinary equality.
if a == b:
return a, b
else:
return self.colorize(highlight, a), self.colorize(highlight, b)
a_out = []
b_out = []
matcher = SequenceMatcher(lambda x: False, a, b)
for op, a_start, a_end, b_start, b_end in matcher.get_opcodes():
if op == 'equal':
# In both strings.
a_out.append(a[a_start:a_end])
b_out.append(b[b_start:b_end])
elif op == 'insert':
# Right only.
b_out.append(self.colorize(highlight, b[b_start:b_end]))
elif op == 'delete':
# Left only.
a_out.append(self.colorize(highlight, a[a_start:a_end]))
elif op == 'replace':
# Right and left differ. Colorise with second highlight if
# it's just a case change.
if a[a_start:a_end].lower() != b[b_start:b_end].lower():
color = highlight
else:
color = minor_highlight
a_out.append(self.colorize(color, a[a_start:a_end]))
b_out.append(self.colorize(color, b[b_start:b_end]))
else:
assert False  # get_opcodes() only yields the four ops handled above
return u''.join(a_out), u''.join(b_out)
def displayable_path(self, path, separator=u'; '):
"""Attempts to decode a bytestring path to a unicode object for the
purpose of displaying it to the user. If the `path` argument is a
list or a tuple, the elements are joined with `separator`.
"""
if isinstance(path, (list, tuple)):
return separator.join(self.displayable_path(p) for p in path)
elif not isinstance(path, str):
# A non-string object: just get its unicode representation.
return path
try:
return path.decode(self._fsencoding(), 'ignore')
except (UnicodeError, LookupError):
return path.decode('utf8', 'ignore')
def _fsencoding(self):
"""Get the system's filesystem encoding. On Windows, this is always
UTF-8 (not MBCS).
"""
encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
if encoding == 'mbcs':
# On Windows, a broken encoding known to Python as "MBCS" is
# used for the filesystem. However, we only use the Unicode API
# for Windows paths, so the encoding is actually immaterial so
# we can avoid dealing with this nastiness. We arbitrarily
# choose UTF-8.
encoding = 'utf8'
return encoding
def colordiff(self, a, b, highlight='red'):
"""Colorize differences between two values if color is enabled.
(Like _colordiff but conditional.)
"""
if self.gv.settings.get_setting('color', self) == 'True':
return self._colordiff(a, b, highlight)
else:
return a,b
def print_(self, *strings):
"""Like print, but rather than raising an error when a character
is not in the terminal's encoding's character set, just silently
replaces it.
"""
if strings:
txt = u' '.join(strings)
else:
txt = u''
txt = txt.encode(self._encoding(), 'replace')
print(txt.decode(self._encoding()))
def color_diff_suffix(self, a, b, highlight='red'):
"""Colorize the differing suffix between two strings."""
if not self.gv.settings.get_setting('color', self) == 'True':
return a, b
# Fast path.
if a == b:
return a, b
# Find the longest common prefix.
first_diff = None
for i in range(min(len(a), len(b))):
if a[i] != b[i]:
first_diff = i
break
else:
first_diff = min(len(a), len(b))
# Colorize from the first difference on.
return a[:first_diff] + self.colorize(highlight, a[first_diff:]), \
b[:first_diff] + self.colorize(highlight, b[first_diff:])
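# Sketch: with color enabled, color_diff_suffix('abcd', 'abXY') finds the
# common prefix 'ab' and returns ('ab' + red('cd'), 'ab' + red('XY')).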
def choose_candidate(self, candidates, manipulate, opts, item=None, itemcount=None):
self.print_(u'Candidates:')
for i, match in enumerate(candidates):
# Index, metadata, and distance.
line = [
u'{0}.'.format(i + 1),
u'{0}'.format(manipulate.get_stripped_text(match.reference_to_link)),
]
self.print_(' '.join(line))
# Ask the user for a choice.
sel = self.input_options(opts, numrange=(1, len(candidates)))
return sel
|
gpl-2.0
|
neurodebian/htcondor
|
src/condor_contrib/condor_pigeon/src/condor_pigeon_client/skype_linux_tools/Skype4Py/Languages/bg.py
|
10
|
18721
|
apiAttachAvailable = u'\u0414\u043e\u0441\u0442\u044a\u043f\u0435\u043d \u0447\u0440\u0435\u0437 API'
apiAttachNotAvailable = u'\u041d\u0435\u0434\u043e\u0441\u0442\u044a\u043f\u0435\u043d'
apiAttachPendingAuthorization = u'\u0427\u0430\u043a\u0430 \u0441\u0435 \u043e\u0442\u043e\u0440\u0438\u0437\u0430\u0446\u0438\u044f'
apiAttachRefused = u'\u041e\u0442\u043a\u0430\u0437\u0430\u043d\u0430'
apiAttachSuccess = u'\u0423\u0441\u043f\u0435\u0445'
apiAttachUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
budDeletedFriend = u'\u0418\u0437\u0442\u0440\u0438\u0442 \u043e\u0442 \u0421\u043f\u0438\u0441\u044a\u043a\u0430 \u0441 \u043f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
budFriend = u'\u041f\u0440\u0438\u044f\u0442\u0435\u043b'
budNeverBeenFriend = u'\u041d\u0438\u043a\u043e\u0433\u0430 \u043d\u0435 \u0435 \u0431\u0438\u043b \u0432 \u0421\u043f\u0438\u0441\u044a\u043a\u0430 \u0441 \u043f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
budPendingAuthorization = u'\u0427\u0430\u043a\u0430 \u0441\u0435 \u043e\u0442\u043e\u0440\u0438\u0437\u0430\u0446\u0438\u044f'
budUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cfrBlockedByRecipient = u'\u041e\u0431\u0430\u0436\u0434\u0430\u043d\u0435\u0442\u043e \u0435 \u0431\u043b\u043e\u043a\u0438\u0440\u0430\u043d\u043e \u043e\u0442 \u043f\u0440\u0438\u0435\u043c\u0430\u0449\u0438\u044f'
cfrMiscError = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430 \u0433\u0440\u0435\u0448\u043a\u0430'
cfrNoCommonCodec = u'\u041d\u044f\u043c\u0430 \u043f\u043e\u0434\u0445\u043e\u0434\u044f\u0449 \u043a\u043e\u0434\u0435\u043a'
cfrNoProxyFound = u'\u041d\u044f\u043c\u0430 \u043d\u0430\u043c\u0435\u0440\u0435\u043d\u0438 \u043f\u0440\u043e\u043a\u0441\u0438 \u0441\u044a\u0440\u0432\u044a\u0440\u0438'
cfrNotAuthorizedByRecipient = u'\u0422\u043e\u0437\u0438 \u043f\u043e\u0442\u0440\u0435\u0431\u0438\u0442\u0435\u043b \u043d\u0435 \u0435 \u043e\u0442\u043e\u0440\u0438\u0437\u0438\u0440\u0430\u043d \u043e\u0442 \u043f\u0440\u0438\u0435\u043c\u0430\u0449\u0438\u044f'
cfrRecipientNotFriend = u'\u041f\u0440\u0438\u0435\u043c\u0430\u0449\u0438\u044f\u0442 \u043d\u0435 \u0435 \u043f\u0440\u0438\u044f\u0442\u0435\u043b'
cfrRemoteDeviceError = u'\u041f\u0440\u043e\u0431\u043b\u0435\u043c \u0441 \u043e\u0442\u0434\u0430\u043b\u0435\u0447\u0435\u043d\u043e \u0437\u0432\u0443\u043a\u043e\u0432\u043e \u0443\u0441\u0442\u0440\u043e\u0439\u0441\u0442\u0432\u043e'
cfrSessionTerminated = u'\u041f\u0440\u0435\u043a\u0440\u0430\u0442\u0435\u043d\u0430 \u0441\u0435\u0441\u0438\u044f'
cfrSoundIOError = u'\u0412\u0445\u043e\u0434\u043d\u043e/\u0438\u0437\u0445\u043e\u0434\u043d\u0430 \u0433\u0440\u0435\u0448\u043a\u0430 \u0441\u044a\u0441 \u0437\u0432\u0443\u043a\u0430'
cfrSoundRecordingError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u0437\u0430\u043f\u0438\u0441\u0430 \u043d\u0430 \u0437\u0432\u0443\u043a'
cfrUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cfrUserDoesNotExist = u'\u0410\u0431\u043e\u043d\u0430\u0442\u044a\u0442/\u0442\u0435\u043b\u0435\u0444\u043e\u043d\u043d\u0438\u044f\u0442 \u043d\u043e\u043c\u0435\u0440 \u043d\u0435 \u0441\u044a\u0449\u0435\u0441\u0442\u0432\u0443\u0432\u0430'
cfrUserIsOffline = u'\u0422\u043e\u0439/\u0442\u044f \u0435 \u0438\u0437\u0432\u044a\u043d \u043b\u0438\u043d\u0438\u044f.'
chsAllCalls = u'\u0421\u0442\u0430\u0440 \u0434\u0438\u0430\u043b\u043e\u0433'
chsDialog = u'\u0414\u0438\u0430\u043b\u043e\u0433'
chsIncomingCalls = u'\u041c\u043d\u043e\u0437\u0438\u043d\u0430 \u0442\u0440\u044f\u0431\u0432\u0430 \u0434\u0430 \u043f\u0440\u0438\u0435\u043c\u0430\u0442'
chsLegacyDialog = u'\u0421\u0442\u0430\u0440 \u0434\u0438\u0430\u043b\u043e\u0433'
chsMissedCalls = u'\u0414\u0438\u0430\u043b\u043e\u0433'
chsMultiNeedAccept = u'\u041c\u043d\u043e\u0437\u0438\u043d\u0430 \u0442\u0440\u044f\u0431\u0432\u0430 \u0434\u0430 \u043f\u0440\u0438\u0435\u043c\u0430\u0442'
chsMultiSubscribed = u'\u041c\u043d\u043e\u0437\u0438\u043d\u0430 \u0441\u0430 \u0430\u0431\u043e\u043d\u0430\u0442\u0438'
chsOutgoingCalls = u'\u041c\u043d\u043e\u0437\u0438\u043d\u0430 \u0441\u0430 \u0430\u0431\u043e\u043d\u0430\u0442\u0438'
chsUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
chsUnsubscribed = u'\u041d\u0435 \u0435 \u0430\u0431\u043e\u043d\u0430\u0442'
clsBusy = u'\u0417\u0430\u0435\u0442\u043e'
clsCancelled = u'\u041f\u0440\u0435\u043a\u0440\u0430\u0442\u0435\u043d'
clsEarlyMedia = u'\u0412\u044a\u0437\u043f\u0440\u043e\u0438\u0437\u0432\u0435\u0436\u0434\u0430\u043d\u0435 \u043d\u0430 \u043f\u0440\u0435\u0434\u0440\u0430\u0437\u0433\u043e\u0432\u043e\u0440\u0435\u043d \u0448\u0443\u043c (Early Media)'
clsFailed = u'\u0417\u0430 \u0441\u044a\u0436\u0430\u043b\u0435\u043d\u0438\u0435, \u0440\u0430\u0437\u0433\u043e\u0432\u043e\u0440\u044a\u0442 \u0441\u0435 \u0440\u0430\u0437\u043f\u0430\u0434\u043d\u0430!'
clsFinished = u'\u041f\u0440\u0438\u043a\u043b\u044e\u0447\u0435\u043d'
clsInProgress = u'\u0412 \u043f\u0440\u043e\u0446\u0435\u0441 \u043d\u0430 \u0440\u0430\u0437\u0433\u043e\u0432\u043e\u0440'
clsLocalHold = u'\u041d\u0430 \u0438\u0437\u0447\u0430\u043a\u0432\u0430\u043d\u0435 \u043e\u0442 \u0432\u0430\u0448\u0430 \u0441\u0442\u0440\u0430\u043d\u0430'
clsMissed = u'\u043f\u0440\u043e\u043f\u0443\u0441\u043d\u0430\u0442 \u0440\u0430\u0437\u0433\u043e\u0432\u043e\u0440'
clsOnHold = u'\u0417\u0430\u0434\u044a\u0440\u0436\u0430\u043d\u0435'
clsRefused = u'\u041e\u0442\u043a\u0430\u0437\u0430\u043d\u0430'
clsRemoteHold = u'\u041d\u0430 \u0438\u0437\u0447\u0430\u043a\u0432\u0430\u043d\u0435 \u043e\u0442 \u043e\u0442\u0441\u0440\u0435\u0449\u043d\u0430\u0442\u0430 \u0441\u0442\u0440\u0430\u043d\u0430'
clsRinging = u'\u0433\u043e\u0432\u043e\u0440\u0438\u0442\u0435'
clsRouting = u'\u041f\u0440\u0435\u043d\u0430\u0441\u043e\u0447\u0432\u0430\u043d\u0435'
clsTransferred = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
clsTransferring = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
clsUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
clsUnplaced = u'\u041d\u0438\u043a\u043e\u0433\u0430 \u043d\u0435 \u0435 \u043f\u0440\u043e\u0432\u0435\u0436\u0434\u0430\u043d'
clsVoicemailBufferingGreeting = u'\u0411\u0443\u0444\u0435\u0440\u0438\u0440\u0430\u043d\u0435 \u043d\u0430 \u043f\u043e\u0437\u0434\u0440\u0430\u0432\u0430'
clsVoicemailCancelled = u'\u0413\u043b\u0430\u0441\u043e\u0432\u0430\u0442\u0430 \u043f\u043e\u0449\u0430 \u043e\u0442\u043c\u0435\u043d\u0435\u043d\u0430'
clsVoicemailFailed = u'\u041f\u0440\u043e\u0432\u0430\u043b\u0435\u043d\u043e \u0433\u043b\u0430\u0441\u043e\u0432\u043e \u0441\u044a\u043e\u0431\u0449\u0435\u043d\u0438\u0435'
clsVoicemailPlayingGreeting = u'\u0412\u044a\u0437\u043f\u0440\u043e\u0438\u0437\u0432\u0435\u0436\u0434\u0430\u043d\u0435 \u043d\u0430 \u043f\u043e\u0437\u0434\u0440\u0430\u0432\u0430'
clsVoicemailRecording = u'\u0417\u0430\u043f\u0438\u0441 \u043d\u0430 \u0433\u043b\u0430\u0441\u043e\u0432\u043e \u0441\u044a\u043e\u0431\u0449\u0435\u043d\u0438\u0435'
clsVoicemailSent = u'\u0413\u043b\u0430\u0441\u043e\u0432\u0430\u0442\u0430 \u043f\u043e\u0449\u0430 \u0438\u0437\u043f\u0440\u0430\u0442\u0435\u043d\u0430'
clsVoicemailUploading = u'\u0413\u043b\u0430\u0441\u043e\u0432\u0430\u0442\u0430 \u043f\u043e\u0449\u0430 \u0441\u0435 \u043a\u0430\u0447\u0432\u0430'
cltIncomingP2P = u'\u0412\u0445\u043e\u0434\u044f\u0449\u043e Peer-to-Peer \u043e\u0431\u0430\u0436\u0434\u0430\u043d\u0435'
cltIncomingPSTN = u'\u0412\u0445\u043e\u0434\u044f\u0449\u043e \u0442\u0435\u043b\u0435\u0444\u043e\u043d\u043d\u043e \u043e\u0431\u0430\u0436\u0434\u0430\u043d\u0435'
cltOutgoingP2P = u'\u0418\u0437\u0445\u043e\u0434\u044f\u0449\u043e Peer-to-Peer \u043e\u0431\u0430\u0436\u0434\u0430\u043d\u0435'
cltOutgoingPSTN = u'\u0418\u0437\u0445\u043e\u0434\u044f\u0449\u043e \u0442\u0435\u043b\u0435\u0444\u043e\u043d\u043d\u043e \u043e\u0431\u0430\u0436\u0434\u0430\u043d\u0435'
cltUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cmeAddedMembers = u'\u0414\u043e\u0431\u0430\u0432\u0438 \u0447\u043b\u0435\u043d\u043e\u0432\u0435'
cmeCreatedChatWith = u'\u0421\u044a\u0437\u0434\u0430\u0434\u0435 \u0447\u0430\u0442 \u0441\u044a\u0441'
cmeEmoted = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cmeLeft = u'\u041d\u0430\u043f\u0443\u0441\u043d\u0430'
cmeSaid = u'\u041a\u0430\u0437\u0430'
cmeSawMembers = u'\u0412\u0438\u0434\u044f \u0447\u043b\u0435\u043d\u043e\u0432\u0435'
cmeSetTopic = u'\u0417\u0430\u0434\u0430\u0439 \u0442\u0435\u043c\u0430'
cmeUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cmsRead = u'\u041f\u0440\u043e\u0447\u0435\u0442\u0435\u043d\u043e'
cmsReceived = u'\u041f\u043e\u043b\u0443\u0447\u0435\u043d\u043e'
cmsSending = u'\u0418\u0437\u043f\u0440\u0430\u0449\u0430\u043d\u0435...'
cmsSent = u'\u041f\u0440\u0430\u0442\u0435\u043d\u0430'
cmsUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
conConnecting = u'\u0412 \u043f\u0440\u043e\u0446\u0435\u0441 \u043d\u0430 \u0441\u0432\u044a\u0440\u0437\u0432\u0430\u043d\u0435'
conOffline = u'\u0418\u0437\u0432\u044a\u043d \u043b\u0438\u043d\u0438\u044f'
conOnline = u'\u041d\u0430 \u043b\u0438\u043d\u0438\u044f'
conPausing = u'\u041f\u0430\u0443\u0437\u0430'
conUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cusAway = u'\u041e\u0442\u0441\u044a\u0441\u0442\u0432\u0430\u0449'
cusDoNotDisturb = u'\u041e\u0442\u043f\u043e\u0447\u0438\u0432\u0430\u0449'
cusInvisible = u'\u0418\u043d\u043a\u043e\u0433\u043d\u0438\u0442\u043e'
cusLoggedOut = u'\u0418\u0437\u0432\u044a\u043d \u043b\u0438\u043d\u0438\u044f'
cusNotAvailable = u'\u041d\u0435\u0434\u043e\u0441\u0442\u044a\u043f\u0435\u043d'
cusOffline = u'\u0418\u0437\u0432\u044a\u043d \u043b\u0438\u043d\u0438\u044f'
cusOnline = u'\u041d\u0430 \u043b\u0438\u043d\u0438\u044f'
cusSkypeMe = u'\u0421\u043a\u0430\u0439\u043f\u0432\u0430\u0449'
cusUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
cvsBothEnabled = u'\u0418\u0437\u043f\u0440\u0430\u0449\u0430\u043d\u0435 \u0438 \u043f\u043e\u043b\u0443\u0447\u0430\u0432\u0430\u043d\u0435 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e'
cvsNone = u'\u041d\u044f\u043c\u0430 \u0432\u0438\u0434\u0435\u043e'
cvsReceiveEnabled = u'\u041f\u043e\u043b\u0443\u0447\u0430\u0432\u0430\u043d\u0435 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e'
cvsSendEnabled = u'\u0418\u0437\u043f\u0440\u0430\u0449\u0430\u043d\u0435 \u043d\u0430 \u0432\u0438\u0434\u0435\u043e'
cvsUnknown = u''
grpAllFriends = u'\u0412\u0441\u0438\u0447\u043a\u0438 \u041f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
grpAllUsers = u'\u0412\u0441\u0438\u0447\u043a\u0438 \u0430\u0431\u043e\u043d\u0430\u0442\u0438'
grpCustomGroup = u'\u041f\u043e\u0442\u0440\u0435\u0431\u0438\u0442\u0435\u043b\u0441\u043a\u0430'
grpOnlineFriends = u'\u041f\u0440\u0438\u044f\u0442\u0435\u043b\u0438 \u043e\u043d\u043b\u0430\u0439\u043d'
grpPendingAuthorizationFriends = u'\u0427\u0430\u043a\u0430 \u0441\u0435 \u043e\u0442\u043e\u0440\u0438\u0437\u0430\u0446\u0438\u044f'
grpProposedSharedGroup = u'Proposed Shared Group'
grpRecentlyContactedUsers = u'\u0410\u0431\u043e\u043d\u0430\u0442\u0438, \u0441 \u043a\u043e\u0438\u0442\u043e \u0441\u043a\u043e\u0440\u043e \u0435 \u0443\u0441\u0442\u0430\u043d\u043e\u0432\u0435\u043d\u0430 \u0432\u0440\u044a\u0437\u043a\u0430'
grpSharedGroup = u'Shared Group'
grpSkypeFriends = u'Skype \u041f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
grpSkypeOutFriends = u'SkypeOut \u041f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
grpUngroupedFriends = u'\u041d\u0435\u0433\u0440\u0443\u043f\u0438\u0440\u0430\u043d\u0438 \u041f\u0440\u0438\u044f\u0442\u0435\u043b\u0438'
grpUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
grpUsersAuthorizedByMe = u'\u041e\u0442\u043e\u0440\u0438\u0437\u0438\u0440\u0430\u043d\u0438 \u043e\u0442 \u043c\u0435\u043d'
grpUsersBlockedByMe = u'\u0411\u043b\u043e\u043a\u0438\u0440\u0430\u043d\u0438 \u043e\u0442 \u043c\u0435\u043d'
grpUsersWaitingMyAuthorization = u'\u0427\u0430\u043a\u0430\u0449\u0438 \u043c\u043e\u044f\u0442\u0430 \u043e\u0442\u043e\u0440\u0438\u0437\u0430\u0446\u0438\u044f'
leaAddDeclined = u'\u0414\u043e\u0431\u0430\u0432\u044f\u043d\u0435\u0442\u043e \u043e\u0442\u043a\u0430\u0437\u0430\u043d\u043e'
leaAddedNotAuthorized = u'\u0414\u043e\u0431\u044f\u0432\u0430\u043d\u0438\u044f\u0442 \u0442\u0440\u044f\u0431\u0432\u0430 \u0434\u0430 \u0435 \u043e\u0442\u043e\u0440\u0438\u0437\u0438\u0440\u0430\u043d'
leaAdderNotFriend = u'\u0414\u043e\u0431\u0430\u0432\u044f\u0449\u0438\u044f\u0442 \u0442\u0440\u044f\u0431\u0432\u0430 \u0434\u0430 \u0435 \u041f\u0440\u0438\u044f\u0442\u0435\u043b'
leaUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
leaUnsubscribe = u'\u041d\u0435 \u0435 \u0430\u0431\u043e\u043d\u0430\u0442'
leaUserIncapable = u'\u0410\u0431\u043e\u043d\u0430\u0442\u044a\u0442 \u043d\u044f\u043c\u0430 \u0432\u044a\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442 \u0437\u0430 \u0432\u0440\u044a\u0437\u043a\u0430'
leaUserNotFound = u'\u0410\u0431\u043e\u043d\u0430\u0442\u044a\u0442 \u043d\u0435 \u0435 \u043d\u0430\u043c\u0435\u0440\u0435\u043d'
olsAway = u'\u041e\u0442\u0441\u044a\u0441\u0442\u0432\u0430\u0449'
olsDoNotDisturb = u'\u041e\u0442\u043f\u043e\u0447\u0438\u0432\u0430\u0449'
olsNotAvailable = u'\u041d\u0435\u0434\u043e\u0441\u0442\u044a\u043f\u0435\u043d'
olsOffline = u'\u0418\u0437\u0432\u044a\u043d \u043b\u0438\u043d\u0438\u044f'
olsOnline = u'\u041d\u0430 \u043b\u0438\u043d\u0438\u044f'
olsSkypeMe = u'\u0421\u043a\u0430\u0439\u043f\u0432\u0430\u0449'
olsSkypeOut = u'SkypeOut'
olsUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
smsMessageStatusComposing = u'Composing'
smsMessageStatusDelivered = u'Delivered'
smsMessageStatusFailed = u'Failed'
smsMessageStatusRead = u'Read'
smsMessageStatusReceived = u'Received'
smsMessageStatusSendingToServer = u'Sending to Server'
smsMessageStatusSentToServer = u'Sent to Server'
smsMessageStatusSomeTargetsFailed = u'Some Targets Failed'
smsMessageStatusUnknown = u'Unknown'
smsMessageTypeCCRequest = u'Confirmation Code Request'
smsMessageTypeCCSubmit = u'Confirmation Code Submit'
smsMessageTypeIncoming = u'Incoming'
smsMessageTypeOutgoing = u'Outgoing'
smsMessageTypeUnknown = u'Unknown'
smsTargetStatusAcceptable = u'Acceptable'
smsTargetStatusAnalyzing = u'Analyzing'
smsTargetStatusDeliveryFailed = u'Delivery Failed'
smsTargetStatusDeliveryPending = u'Delivery Pending'
smsTargetStatusDeliverySuccessful = u'Delivery Successful'
smsTargetStatusNotRoutable = u'Not Routable'
smsTargetStatusUndefined = u'Undefined'
smsTargetStatusUnknown = u'Unknown'
usexFemale = u'\u0436\u0435\u043d\u0441\u043a\u0438'
usexMale = u'\u043c\u044a\u0436\u043a\u0438'
usexUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
vmrConnectError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u0441\u0432\u044a\u0440\u0437\u0432\u0430\u043d\u0435\u0442\u043e'
vmrFileReadError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u0447\u0435\u0442\u0435\u043d\u0435 \u043d\u0430 \u0444\u0430\u0439\u043b\u0430'
vmrFileWriteError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u0437\u0430\u043f\u0438\u0441 \u043d\u0430 \u0444\u0430\u0439\u043b\u0430'
vmrMiscError = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430 \u0433\u0440\u0435\u0448\u043a\u0430'
vmrNoError = u'\u041d\u044f\u043c\u0430 \u0433\u0440\u0435\u0448\u043a\u0430'
vmrNoPrivilege = u'\u041d\u044f\u043c\u0430 \u043f\u0440\u0438\u0432\u0438\u043b\u0435\u0433\u0438\u044f \u0437\u0430 \u0433\u043b\u0430\u0441\u043e\u0432\u0430 \u043f\u043e\u0449\u0430'
vmrNoVoicemail = u'\u041d\u044f\u043c\u0430 \u0442\u0430\u043a\u0430\u0432\u0430 \u0433\u043b\u0430\u0441\u043e\u0432\u0430 \u043f\u043e\u0449\u0430'
vmrPlaybackError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u043f\u0440\u043e\u0441\u043b\u0443\u0448\u0432\u0430\u043d\u0435'
vmrRecordingError = u'\u0413\u0440\u0435\u0448\u043a\u0430 \u043f\u0440\u0438 \u0437\u0430\u043f\u0438\u0441'
vmrUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
vmsBlank = u'\u041f\u0440\u0430\u0437\u043d\u0430'
vmsBuffering = u'\u0411\u0443\u0444\u0435\u0440\u0438\u0440\u0430\u043d\u0435'
vmsDeleting = u'\u0418\u0437\u0442\u0440\u0438\u0432\u0430\u043d\u0435'
vmsDownloading = u'\u0421\u0432\u0430\u043b\u044f\u043d\u0435'
vmsFailed = u'\u041d\u0435\u0443\u0441\u043f\u0435\u0448\u043d\u0430'
vmsNotDownloaded = u'\u041d\u0435 \u0435 \u0441\u0432\u0430\u043b\u0435\u043d\u0430'
vmsPlayed = u'\u041f\u0440\u043e\u0441\u043b\u0443\u0448\u0430\u043d\u0430'
vmsPlaying = u'\u041f\u0440\u043e\u0441\u043b\u0443\u0448\u0432\u0430\u043d\u0435'
vmsRecorded = u'\u0417\u0430\u043f\u0438\u0441\u0430\u043d\u0430'
vmsRecording = u'\u0417\u0430\u043f\u0438\u0441 \u043d\u0430 \u0433\u043b\u0430\u0441\u043e\u0432\u043e \u0441\u044a\u043e\u0431\u0449\u0435\u043d\u0438\u0435'
vmsUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
vmsUnplayed = u'\u041d\u0435 \u0435 \u043f\u0440\u043e\u0441\u043b\u0443\u0448\u0430\u043d\u0430'
vmsUploaded = u'\u041a\u0430\u0447\u0435\u043d\u0430'
vmsUploading = u'\u041a\u0430\u0447\u0432\u0430\u043d\u0435'
vmtCustomGreeting = u'\u041f\u0435\u0440\u0441\u043e\u043d\u0430\u043b\u0438\u0437\u0438\u0440\u0430\u043d \u043f\u043e\u0437\u0434\u0440\u0430\u0432'
vmtDefaultGreeting = u'\u041f\u043e\u0437\u0434\u0440\u0430\u0432 \u043f\u043e \u043f\u043e\u0434\u0440\u0430\u0437\u0431\u0438\u0440\u0430\u043d\u0435'
vmtIncoming = u'\u0432\u0445\u043e\u0434\u044f\u0449\u043e \u0433\u043b\u0430\u0441\u043e\u0432\u043e \u0441\u044a\u043e\u0431\u0449\u0435\u043d\u0438\u0435'
vmtOutgoing = u'\u0418\u0437\u0445\u043e\u0434\u044f\u0449\u0430'
vmtUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
vssAvailable = u'\u0414\u043e\u0441\u0442\u044a\u043f\u0435\u043d'
vssNotAvailable = u'\u041d\u0435\u0434\u043e\u0441\u0442\u044a\u043f\u0435\u043d'
vssPaused = u'\u0412 \u043f\u0430\u0443\u0437\u0430'
vssRejected = u'\u041e\u0442\u0445\u0432\u044a\u0440\u043b\u0435\u043d\u043e'
vssRunning = u'\u041f\u0440\u043e\u0442\u0438\u0447\u0430'
vssStarting = u'\u0417\u0430\u043f\u043e\u0447\u0432\u0430'
vssStopping = u'\u041f\u0440\u0438\u043a\u043b\u044e\u0447\u0432\u0430'
vssUnknown = u'\u041d\u0435\u0438\u0437\u0432\u0435\u0441\u0442\u043d\u0430'
|
apache-2.0
|
Jumpscale/ays9
|
JumpScale9AYS/clients/sync_client/client_support.py
|
3
|
6067
|
"""
support methods for python clients
"""
import json
import collections
from datetime import datetime
from uuid import UUID
from enum import Enum
from dateutil import parser
# python2/3 compatible basestring, for use in to_dict
try:
basestring
except NameError:
basestring = str
# collections.Sequence moved to collections.abc in python 3.3 and was
# removed from the collections namespace in python 3.10
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
def timestamp_from_datetime(dt):
"""
Convert from datetime format to timestamp format
Input: Time in datetime format
Output: Time in timestamp format
"""
return dt.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
def timestamp_to_datetime(timestamp):
"""
Convert from timestamp format to datetime format
Input: Time in timestamp format
Output: Time in datetime format
"""
return parser.parse(timestamp).replace(tzinfo=None)
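# Illustrative round trip (assuming python-dateutil is installed):
#   timestamp_from_datetime(datetime(2017, 1, 2, 3, 4, 5))
#     -> '2017-01-02T03:04:05.000000Z'
#   timestamp_to_datetime('2017-01-02T03:04:05.000000Z')
#     -> datetime(2017, 1, 2, 3, 4, 5)   (naive; tzinfo is stripped)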
def has_properties(cls, prop, child_properties):
for child_prop in child_properties:
if getattr(prop, child_prop, None) is None:
return False
return True
def list_factory(val, member_type):
if not isinstance(val, list):
raise ValueError('list_factory: value must be a list')
return [val_factory(v, member_type) for v in val]
def dict_factory(val, objmap):
# objmap is a dict outlining the structure of this value
# its format is {'attrname': {'datatype': [type], 'required': bool}}
objdict = {}
for attrname, attrdict in objmap.items():
value = val.get(attrname)
if value is not None:
for dt in attrdict['datatype']:
try:
if isinstance(dt, dict):
objdict[attrname] = dict_factory(value, dt)  # dt is the nested objmap
else:
objdict[attrname] = val_factory(value, [dt])
except Exception:
pass
if objdict.get(attrname) is None:
raise ValueError('dict_factory: {attr}: unable to instantiate with any supplied type'.format(attr=attrname))
elif attrdict.get('required'):
raise ValueError('dict_factory: {attr} is required'.format(attr=attrname))
return objdict
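# Sketch of the objmap contract (the attribute name here is hypothetical):
# an objmap such as
#   {'created': {'datatype': [datetime], 'required': True}}
# converts {'created': '2017-01-02T03:04:05.000000Z'} into
#   {'created': datetime(2017, 1, 2, 3, 4, 5)}
# by routing the value through val_factory and DatetimeHandler.restore.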
def val_factory(val, datatypes):
"""
return an instance of `val` that is of type `datatype`.
keep track of exceptions so we can produce meaningful error messages.
"""
exceptions = []
for dt in datatypes:
try:
if isinstance(val, dt):
return val
return type_handler_object(val, dt)
except Exception as e:
exceptions.append(str(e))
# if we get here, we never found a valid value. raise an error
raise ValueError('val_factory: Unable to instantiate {val} from types {types}. Exceptions: {excs}'.
format(val=val, types=datatypes, excs=exceptions))
def to_json(cls, indent=0):
"""
serialize to JSON
:rtype: str
"""
# for consistency, use as_dict then go to json from there
return json.dumps(cls.as_dict(), indent=indent)
def to_dict(cls, convert_datetime=True):
"""
return a dict representation of the Event and its sub-objects
`convert_datetime` controls whether datetime objects are converted to strings or not
:rtype: dict
"""
def todict(obj):
"""
recurse the objects and represent as a dict
use the registered handlers if possible
"""
data = {}
if isinstance(obj, dict):
for (key, val) in obj.items():
data[key] = todict(val)
return data
if not convert_datetime and isinstance(obj, datetime):
return obj
elif type_handler_value(obj):
return type_handler_value(obj)
elif isinstance(obj, Sequence) and not isinstance(obj, basestring):
return [todict(v) for v in obj]
elif hasattr(obj, "__dict__"):
for key, value in obj.__dict__.items():
if not callable(value) and not key.startswith('_'):
data[key] = todict(value)
return data
else:
return obj
return todict(cls)
class DatetimeHandler(object):
"""
output datetime objects as iso-8601 compliant strings
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return timestamp_from_datetime(obj)
@classmethod
def restore(cls, data):
"""restore"""
return timestamp_to_datetime(data)
class UUIDHandler(object):
"""
output UUID objects as a string
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return str(obj)
@classmethod
def restore(cls, data):
"""restore"""
return UUID(data)
class EnumHandler(object):
"""
output Enum objects as their value
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return obj.value
@classmethod
def restore(cls, data):
"""
cannot restore here because we don't know what type of enum it is
"""
raise NotImplementedError
handlers = {
datetime: DatetimeHandler,
Enum: EnumHandler,
UUID: UUIDHandler,
}
def handler_for(obj):
"""return the handler for the object type"""
for handler_type in handlers:
if isinstance(obj, handler_type):
return handlers[handler_type]
try:
for handler_type in handlers:
if issubclass(obj, handler_type):
return handlers[handler_type]
except TypeError:
# if obj isn't a class, issubclass will raise a TypeError
pass
def type_handler_value(obj):
"""
return the serialized (flattened) value from the registered handler for the type
"""
handler = handler_for(obj)
if handler:
return handler().flatten(obj)
def type_handler_object(val, objtype):
"""
return the deserialized (restored) value from the registered handler for the type
"""
handler = handlers.get(objtype)
if handler:
return handler().restore(val)
else:
return objtype(val)
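# Minimal usage sketch (Event is a hypothetical client class, not part of this
# module): for an object whose attributes are a UUID and a datetime,
#   to_dict(event)
#     -> {'id': '12345678-...', 'created': '2017-01-02T03:04:05.000000Z'}
# because todict() defers to the registered UUIDHandler and DatetimeHandler.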
|
apache-2.0
|
tparks5/tor-stem
|
test/unit/util/str_tools.py
|
1
|
6327
|
"""
Unit tests for the stem.util.str_tools functions.
"""
import datetime
import unittest
from stem.util import str_tools
class TestStrTools(unittest.TestCase):
def test_to_int(self):
"""
Checks the _to_int() function.
"""
test_inputs = {
'': 0,
'h': 104,
'hi': 26729,
'hello': 448378203247,
str_tools._to_bytes('hello'): 448378203247,
str_tools._to_unicode('hello'): 448378203247,
}
for arg, expected in test_inputs.items():
self.assertEqual(expected, str_tools._to_int(arg))
def test_to_camel_case(self):
"""
Checks the _to_camel_case() function.
"""
# test the pydoc example
self.assertEqual('I Like Pepperjack!', str_tools._to_camel_case('I_LIKE_PEPPERJACK!'))
# check a few edge cases
self.assertEqual('', str_tools._to_camel_case(''))
self.assertEqual('Hello', str_tools._to_camel_case('hello'))
self.assertEqual('Hello', str_tools._to_camel_case('HELLO'))
self.assertEqual('Hello World', str_tools._to_camel_case('hello__world'))
self.assertEqual('Hello\tworld', str_tools._to_camel_case('hello\tWORLD'))
self.assertEqual('Hello\t\tWorld', str_tools._to_camel_case('hello__world', '_', '\t'))
def test_crop(self):
# test the pydoc examples
self.assertEqual('This is a looo...', str_tools.crop('This is a looooong message', 17))
self.assertEqual('This is a...', str_tools.crop('This is a looooong message', 12))
self.assertEqual('', str_tools.crop('This is a looooong message', 3))
self.assertEqual('', str_tools.crop('This is a looooong message', 0))
def test_size_label(self):
"""
Checks the size_label() function.
"""
# test the pydoc examples
self.assertEqual('1 MB', str_tools.size_label(2000000))
self.assertEqual('1.02 KB', str_tools.size_label(1050, 2))
self.assertEqual('1.025 Kilobytes', str_tools.size_label(1050, 3, True))
self.assertEqual('0 B', str_tools.size_label(0))
self.assertEqual('0 Bytes', str_tools.size_label(0, is_long = True))
self.assertEqual('0.00 B', str_tools.size_label(0, 2))
self.assertEqual('-10 B', str_tools.size_label(-10))
self.assertEqual('80 b', str_tools.size_label(10, is_bytes = False))
self.assertEqual('-1 MB', str_tools.size_label(-2000000))
# checking that we round down
self.assertEqual('23.43 Kb', str_tools.size_label(3000, 2, is_bytes = False))
self.assertRaises(TypeError, str_tools.size_label, None)
self.assertRaises(TypeError, str_tools.size_label, 'hello world')
def test_time_label(self):
"""
Checks the time_label() function.
"""
# test the pydoc examples
self.assertEqual('2h', str_tools.time_label(10000))
self.assertEqual('1.0 minute', str_tools.time_label(61, 1, True))
self.assertEqual('1.01 minutes', str_tools.time_label(61, 2, True))
self.assertEqual('0s', str_tools.time_label(0))
self.assertEqual('0 seconds', str_tools.time_label(0, is_long = True))
self.assertEqual('0.00s', str_tools.time_label(0, 2))
self.assertEqual('-10s', str_tools.time_label(-10))
self.assertRaises(TypeError, str_tools.time_label, None)
self.assertRaises(TypeError, str_tools.time_label, 'hello world')
def test_time_labels(self):
"""
Checks the time_labels() function.
"""
# test the pydoc examples
self.assertEqual(['6m', '40s'], str_tools.time_labels(400))
self.assertEqual(['1 hour', '40 seconds'], str_tools.time_labels(3640, True))
self.assertEqual([], str_tools.time_labels(0))
self.assertEqual(['-10s'], str_tools.time_labels(-10))
self.assertRaises(TypeError, str_tools.time_labels, None)
self.assertRaises(TypeError, str_tools.time_labels, 'hello world')
def test_short_time_label(self):
"""
Checks the short_time_label() function.
"""
# test the pydoc examples
self.assertEqual('01:51', str_tools.short_time_label(111))
self.assertEqual('6-07:08:20', str_tools.short_time_label(544100))
self.assertEqual('00:00', str_tools.short_time_label(0))
self.assertRaises(TypeError, str_tools.short_time_label, None)
self.assertRaises(TypeError, str_tools.short_time_label, 'hello world')
self.assertRaises(ValueError, str_tools.short_time_label, -5)
def test_parse_short_time_label(self):
"""
Checks the parse_short_time_label() function.
"""
# test the pydoc examples
self.assertEqual(111, str_tools.parse_short_time_label('01:51'))
self.assertEqual(544100, str_tools.parse_short_time_label('6-07:08:20'))
self.assertEqual(110, str_tools.parse_short_time_label('01:50.62'))
self.assertEqual(0, str_tools.parse_short_time_label('00:00'))
# these aren't technically valid, but might as well allow unnecessary
# digits to be dropped
self.assertEqual(300, str_tools.parse_short_time_label('05:0'))
self.assertEqual(300, str_tools.parse_short_time_label('5:00'))
self.assertRaises(TypeError, str_tools.parse_short_time_label, None)
self.assertRaises(TypeError, str_tools.parse_short_time_label, 100)
self.assertRaises(ValueError, str_tools.parse_short_time_label, 'blarg')
self.assertRaises(ValueError, str_tools.parse_short_time_label, '00')
self.assertRaises(ValueError, str_tools.parse_short_time_label, '05:')
self.assertRaises(ValueError, str_tools.parse_short_time_label, '05a:00')
self.assertRaises(ValueError, str_tools.parse_short_time_label, '-05:00')
def test_parse_iso_timestamp(self):
"""
Checks the _parse_iso_timestamp() function.
"""
test_inputs = {
'2012-11-08T16:48:41.420251':
datetime.datetime(2012, 11, 8, 16, 48, 41, 420251),
'2012-11-08T16:48:41.000000':
datetime.datetime(2012, 11, 8, 16, 48, 41, 0),
'2012-11-08T16:48:41':
datetime.datetime(2012, 11, 8, 16, 48, 41, 0),
}
for arg, expected in test_inputs.items():
self.assertEqual(expected, str_tools._parse_iso_timestamp(arg))
invalid_input = [
None,
32,
'hello world',
'2012-11-08T16:48:41.42025', # too few microsecond digits
'2012-11-08T16:48:41.4202511', # too many microsecond digits
'2012-11-08T16:48',
]
for arg in invalid_input:
self.assertRaises(ValueError, str_tools._parse_iso_timestamp, arg)
|
lgpl-3.0
|
achals/servo
|
tests/wpt/web-platform-tests/tools/wptserve/tests/functional/base.py
|
293
|
1831
|
import base64
import logging
import os
import unittest
import urllib
import urllib2
import urlparse
import wptserve
logging.basicConfig()
here = os.path.split(__file__)[0]
doc_root = os.path.join(here, "docroot")
class Request(urllib2.Request):
def __init__(self, *args, **kwargs):
urllib2.Request.__init__(self, *args, **kwargs)
self.method = "GET"
def get_method(self):
return self.method
def add_data(self, data):
if hasattr(data, "iteritems"):
data = urllib.urlencode(data)
self.add_header("Content-Length", str(len(data)))
urllib2.Request.add_data(self, data)
class TestUsingServer(unittest.TestCase):
def setUp(self):
self.server = wptserve.server.WebTestHttpd(host="localhost",
port=0,
use_ssl=False,
certificate=None,
doc_root=doc_root)
self.server.start(False)
def tearDown(self):
self.server.stop()
def abs_url(self, path, query=None):
return urlparse.urlunsplit(("http", "%s:%i" % (self.server.host, self.server.port), path, query, None))
def request(self, path, query=None, method="GET", headers=None, body=None, auth=None):
req = Request(self.abs_url(path, query))
req.method = method
if headers is None:
headers = {}
for name, value in headers.iteritems():
req.add_header(name, value)
if body is not None:
req.add_data(body)
if auth is not None:
req.add_header("Authorization", "Basic %s" % base64.encodestring('%s:%s' % auth))
return urllib2.urlopen(req)
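# Sketch of a concrete test built on this harness (the resource path is
# hypothetical and would need to exist under doc_root):
# class TestDocument(TestUsingServer):
#     def test_get(self):
#         resp = self.request("/document.txt")
#         self.assertEqual(200, resp.getcode())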
|
mpl-2.0
|
schaubl/libcloud
|
libcloud/storage/drivers/local.py
|
37
|
19092
|
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Provides storage driver for working with local filesystem
"""
from __future__ import with_statement
import errno
import os
import shutil
import sys
try:
import lockfile
from lockfile import LockTimeout, mkdirlockfile
except ImportError:
raise ImportError('Missing lockfile dependency, you can install it '
'using pip: pip install lockfile')
from libcloud.utils.files import read_in_chunks
from libcloud.utils.py3 import relpath
from libcloud.utils.py3 import u
from libcloud.common.base import Connection
from libcloud.storage.base import Object, Container, StorageDriver
from libcloud.common.types import LibcloudError
from libcloud.storage.types import ContainerAlreadyExistsError
from libcloud.storage.types import ContainerDoesNotExistError
from libcloud.storage.types import ContainerIsNotEmptyError
from libcloud.storage.types import ObjectError
from libcloud.storage.types import ObjectDoesNotExistError
from libcloud.storage.types import InvalidContainerNameError
IGNORE_FOLDERS = ['.lock', '.hash']
class LockLocalStorage(object):
"""
A class to help in locking a local path before being updated
"""
def __init__(self, path):
self.path = path
self.lock = mkdirlockfile.MkdirLockFile(self.path, threaded=True)
def __enter__(self):
try:
self.lock.acquire(timeout=0.1)
except LockTimeout:
raise LibcloudError('Lock timeout')
def __exit__(self, type, value, traceback):
if self.lock.is_locked():
self.lock.release()
if value is not None:
raise value
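# Usage sketch (the path is hypothetical): writers serialize on a path with
#   with LockLocalStorage('/tmp/storage/container/obj'):
#       ...  # mutate the file
# acquire() gives up after 0.1s and the timeout surfaces as a LibcloudError.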
class LocalStorageDriver(StorageDriver):
"""
Implementation of local file-system based storage. This is helpful
where the user would want to use the same code (using libcloud) and
switch between cloud storage and local storage
"""
connectionCls = Connection
name = 'Local Storage'
website = 'http://example.com'
hash_type = 'md5'
def __init__(self, key, secret=None, secure=True, host=None, port=None,
**kwargs):
# Use the key as the path to the storage
self.base_path = key
if not os.path.isdir(self.base_path):
raise LibcloudError('The base path is not a directory')
super(LocalStorageDriver, self).__init__(key=key, secret=secret,
secure=secure, host=host,
port=port, **kwargs)
def _make_path(self, path, ignore_existing=True):
"""
Create a path by checking if it already exists
"""
try:
os.makedirs(path)
except OSError:
exp = sys.exc_info()[1]
if exp.errno == errno.EEXIST and not ignore_existing:
raise exp
def _check_container_name(self, container_name):
"""
Check if the container name is valid
:param container_name: Container name
:type container_name: ``str``
"""
if '/' in container_name or '\\' in container_name:
raise InvalidContainerNameError(value=None, driver=self,
container_name=container_name)
def _make_container(self, container_name):
"""
Create a container instance
:param container_name: Container name.
:type container_name: ``str``
:return: Container instance.
:rtype: :class:`Container`
"""
self._check_container_name(container_name)
full_path = os.path.join(self.base_path, container_name)
try:
stat = os.stat(full_path)
if not os.path.isdir(full_path):
raise OSError('Target path is not a directory')
except OSError:
raise ContainerDoesNotExistError(value=None, driver=self,
container_name=container_name)
extra = {}
extra['creation_time'] = stat.st_ctime
extra['access_time'] = stat.st_atime
extra['modify_time'] = stat.st_mtime
return Container(name=container_name, extra=extra, driver=self)
def _make_object(self, container, object_name):
"""
Create an object instance
:param container: Container.
:type container: :class:`Container`
:param object_name: Object name.
:type object_name: ``str``
:return: Object instance.
:rtype: :class:`Object`
"""
full_path = os.path.join(self.base_path, container.name, object_name)
if os.path.isdir(full_path):
raise ObjectError(value=None, driver=self, object_name=object_name)
try:
stat = os.stat(full_path)
except Exception:
raise ObjectDoesNotExistError(value=None, driver=self,
object_name=object_name)
# Make a hash for the file based on the metadata. We can safely
# use only the mtime attribute here. If the file contents change,
# the underlying file-system will change mtime
data_hash = self._get_hash_function()
data_hash.update(u(stat.st_mtime).encode('ascii'))
data_hash = data_hash.hexdigest()
extra = {}
extra['creation_time'] = stat.st_ctime
extra['access_time'] = stat.st_atime
extra['modify_time'] = stat.st_mtime
return Object(name=object_name, size=stat.st_size, extra=extra,
driver=self, container=container, hash=data_hash,
meta_data=None)
def iterate_containers(self):
"""
Return a generator of containers.
:return: A generator of Container instances.
:rtype: ``generator`` of :class:`Container`
"""
for container_name in os.listdir(self.base_path):
full_path = os.path.join(self.base_path, container_name)
if not os.path.isdir(full_path):
continue
yield self._make_container(container_name)
def _get_objects(self, container):
"""
Recursively iterate through the file-system and return the object names
"""
cpath = self.get_container_cdn_url(container, check=True)
for folder, subfolders, files in os.walk(cpath, topdown=True):
# Remove unwanted subfolders
for subf in IGNORE_FOLDERS:
if subf in subfolders:
subfolders.remove(subf)
for name in files:
full_path = os.path.join(folder, name)
object_name = relpath(full_path, start=cpath)
yield self._make_object(container, object_name)
def iterate_container_objects(self, container):
"""
Returns a generator of objects for the given container.
:param container: Container instance
:type container: :class:`Container`
:return: A generator of Object instances.
:rtype: ``generator`` of :class:`Object`
"""
return self._get_objects(container)
def get_container(self, container_name):
"""
Return a container instance.
:param container_name: Container name.
:type container_name: ``str``
:return: :class:`Container` instance.
:rtype: :class:`Container`
"""
return self._make_container(container_name)
def get_container_cdn_url(self, container, check=False):
"""
Return a container CDN URL.
:param container: Container instance
:type container: :class:`Container`
:param check: Indicates if the path's existence must be checked
:type check: ``bool``
:return: A CDN URL for this container.
:rtype: ``str``
"""
path = os.path.join(self.base_path, container.name)
if check and not os.path.isdir(path):
raise ContainerDoesNotExistError(value=None, driver=self,
container_name=container.name)
return path
def get_object(self, container_name, object_name):
"""
Return an object instance.
:param container_name: Container name.
:type container_name: ``str``
:param object_name: Object name.
:type object_name: ``str``
:return: :class:`Object` instance.
:rtype: :class:`Object`
"""
container = self._make_container(container_name)
return self._make_object(container, object_name)
def get_object_cdn_url(self, obj):
"""
Return an object CDN URL.
:param obj: Object instance
:type obj: :class:`Object`
:return: A CDN URL for this object.
:rtype: ``str``
"""
return os.path.join(self.base_path, obj.container.name, obj.name)
def enable_container_cdn(self, container):
"""
Enable container CDN.
:param container: Container instance
:type container: :class:`Container`
:rtype: ``bool``
"""
path = self.get_container_cdn_url(container)
# locking is handled by LockLocalStorage below
with LockLocalStorage(path):
self._make_path(path)
return True
def enable_object_cdn(self, obj):
"""
Enable object CDN.
:param obj: Object instance
:type obj: :class:`Object`
:rtype: ``bool``
"""
path = self.get_object_cdn_url(obj)
with LockLocalStorage(path):
if os.path.exists(path):
return False
try:
obj_file = open(path, 'w')
obj_file.close()
except:
return False
return True
def download_object(self, obj, destination_path, overwrite_existing=False,
delete_on_failure=True):
"""
Download an object to the specified destination path.
:param obj: Object instance.
:type obj: :class:`Object`
:param destination_path: Full path to a file or a directory where the
incoming file will be saved.
:type destination_path: ``str``
:param overwrite_existing: True to overwrite an existing file,
defaults to False.
:type overwrite_existing: ``bool``
:param delete_on_failure: True to delete a partially downloaded file if
the download was not successful (hash mismatch / file size).
:type delete_on_failure: ``bool``
:return: True if an object has been successfully downloaded, False
otherwise.
:rtype: ``bool``
"""
obj_path = self.get_object_cdn_url(obj)
base_name = os.path.basename(destination_path)
if not base_name and not os.path.exists(destination_path):
raise LibcloudError(
value='Path %s does not exist' % (destination_path),
driver=self)
if not base_name:
file_path = os.path.join(destination_path, obj.name)
else:
file_path = destination_path
if os.path.exists(file_path) and not overwrite_existing:
raise LibcloudError(
value='File %s already exists, but ' % (file_path) +
'overwrite_existing=False',
driver=self)
try:
shutil.copy(obj_path, file_path)
except IOError:
if delete_on_failure:
try:
os.unlink(file_path)
except Exception:
pass
return False
return True
def download_object_as_stream(self, obj, chunk_size=None):
"""
Return a generator which yields object data.
:param obj: Object instance
:type obj: :class:`Object`
:param chunk_size: Optional chunk size (in bytes).
:type chunk_size: ``int``
:return: A stream of binary chunks of data.
:rtype: ``object``
"""
path = self.get_object_cdn_url(obj)
with open(path, 'rb') as obj_file:
for data in read_in_chunks(obj_file, chunk_size=chunk_size):
yield data
def upload_object(self, file_path, container, object_name, extra=None,
verify_hash=True):
"""
Upload an object currently located on a disk.
:param file_path: Path to the object on disk.
:type file_path: ``str``
:param container: Destination container.
:type container: :class:`Container`
:param object_name: Object name.
:type object_name: ``str``
:param verify_hash: Verify the hash of the uploaded object.
:type verify_hash: ``bool``
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: ``object``
"""
path = self.get_container_cdn_url(container, check=True)
obj_path = os.path.join(path, object_name)
base_path = os.path.dirname(obj_path)
self._make_path(base_path)
with LockLocalStorage(obj_path):
shutil.copy(file_path, obj_path)
os.chmod(obj_path, int('664', 8))
return self._make_object(container, object_name)
def upload_object_via_stream(self, iterator, container,
object_name,
extra=None):
"""
Upload an object using an iterator.
If a provider supports it, chunked transfer encoding is used and you
don't need to know the total size of the data in advance.
Otherwise, if a provider doesn't support it, the iterator is exhausted
first so that the total size of the data to be uploaded can be
determined.
Note: Exhausting the iterator means that the whole payload must be
buffered in memory, which may exhaust available memory when uploading
a very large object.
If the file is located on disk, prefer the upload_object function,
which uses a stat call to determine the file size and does not need to
buffer the whole object in memory.
:type iterator: ``object``
:param iterator: An object which implements the iterator
interface and yields binary chunks of data.
:type container: :class:`Container`
:param container: Destination container.
:type object_name: ``str``
:param object_name: Object name.
:type extra: ``dict``
:param extra: (optional) Extra attributes (driver specific). Note:
This dictionary must contain a 'content_type' key which represents
a content type of the stored object.
:rtype: ``object``
"""
path = self.get_container_cdn_url(container, check=True)
obj_path = os.path.join(path, object_name)
base_path = os.path.dirname(obj_path)
self._make_path(base_path)
with LockLocalStorage(obj_path):
with open(obj_path, 'wb') as obj_file:
for data in iterator:
obj_file.write(data)
os.chmod(obj_path, int('664', 8))
return self._make_object(container, object_name)
def delete_object(self, obj):
"""
Delete an object.
:type obj: :class:`Object`
:param obj: Object instance.
:return: ``bool`` True on success.
:rtype: ``bool``
"""
path = self.get_object_cdn_url(obj)
with LockLocalStorage(path):
try:
os.unlink(path)
except Exception:
return False
# Check and delete all the empty parent folders
path = os.path.dirname(path)
container_url = obj.container.get_cdn_url()
# Delete the empty parent folders till the container's level
while path != container_url:
try:
os.rmdir(path)
except OSError:
exp = sys.exc_info()[1]
if exp.errno == errno.ENOTEMPTY:
break
raise exp
path = os.path.dirname(path)
return True
def create_container(self, container_name):
"""
Create a new container.
:type container_name: ``str``
:param container_name: Container name.
:return: :class:`Container` instance on success.
:rtype: :class:`Container`
"""
self._check_container_name(container_name)
path = os.path.join(self.base_path, container_name)
try:
self._make_path(path, ignore_existing=False)
except OSError:
exp = sys.exc_info()[1]
if exp.errno == errno.EEXIST:
raise ContainerAlreadyExistsError(
value='Container with this name already exists. The name '
'must be unique among all the containers in the '
'system',
container_name=container_name, driver=self)
else:
raise LibcloudError(
'Error creating container %s' % container_name,
driver=self)
except Exception:
raise LibcloudError(
'Error creating container %s' % container_name, driver=self)
return self._make_container(container_name)
def delete_container(self, container):
"""
Delete a container.
:type container: :class:`Container`
:param container: Container instance
:return: True on success, False otherwise.
:rtype: ``bool``
"""
# Check if there are any objects inside this
for obj in self._get_objects(container):
raise ContainerIsNotEmptyError(value='Container is not empty',
container_name=container.name,
driver=self)
path = self.get_container_cdn_url(container, check=True)
with LockLocalStorage(path):
try:
shutil.rmtree(path)
except Exception:
return False
return True
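# Illustrative end-to-end usage (paths are hypothetical; the base path must
# already exist as a directory):
#   driver = LocalStorageDriver(key='/tmp/storage')
#   container = driver.create_container('backups')
#   obj = driver.upload_object('/tmp/report.pdf', container, 'report.pdf')
#   driver.download_object(obj, '/tmp/copy.pdf', overwrite_existing=True)
#   driver.delete_object(obj)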
|
apache-2.0
|
kuiwei/edx-platform
|
common/djangoapps/user_api/tests/test_account_api.py
|
8
|
13151
|
# -*- coding: utf-8 -*-
""" Tests for the account API. """
import re
from unittest import skipUnless
from nose.tools import raises
from mock import patch
import ddt
from dateutil.parser import parse as parse_datetime
from django.core import mail
from django.test import TestCase
from django.conf import settings
from user_api.api import account as account_api
from user_api.models import UserProfile
@ddt.ddt
class AccountApiTest(TestCase):
USERNAME = u'frank-underwood'
PASSWORD = u'ṕáśśẃőŕd'
EMAIL = u'frank+underwood@example.com'
ORIG_HOST = 'example.com'
IS_SECURE = False
INVALID_USERNAMES = [
None,
u'',
u'a',
u'a' * (account_api.USERNAME_MAX_LENGTH + 1),
u'invalid_symbol_@',
u'invalid-unicode_fŕáńḱ',
]
INVALID_EMAILS = [
None,
u'',
u'a',
'no_domain',
'no+domain',
'@',
'@domain.com',
'test@no_extension',
u'fŕáńḱ@example.com',
u'frank@éxáḿṕĺé.ćőḿ',
# Long email -- subtract the length of the @domain
# except for one character (so we exceed the max length limit)
u'{user}@example.com'.format(
user=(u'e' * (account_api.EMAIL_MAX_LENGTH - 11))
)
]
INVALID_PASSWORDS = [
None,
u'',
u'a',
u'a' * (account_api.PASSWORD_MAX_LENGTH + 1)
]
def test_activate_account(self):
# Create the account, which is initially inactive
activation_key = account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account = account_api.account_info(self.USERNAME)
self.assertEqual(account, {
'username': self.USERNAME,
'email': self.EMAIL,
'is_active': False
})
# Activate the account and verify that it is now active
account_api.activate_account(activation_key)
account = account_api.account_info(self.USERNAME)
self.assertTrue(account['is_active'])
def test_change_email(self):
# Request an email change
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
activation_key = account_api.request_email_change(
self.USERNAME, u'new+email@example.com', self.PASSWORD
)
# Verify that the email has not yet changed
account = account_api.account_info(self.USERNAME)
self.assertEqual(account['email'], self.EMAIL)
# Confirm the change, using the activation code
old_email, new_email = account_api.confirm_email_change(activation_key)
self.assertEqual(old_email, self.EMAIL)
self.assertEqual(new_email, u'new+email@example.com')
# Verify that the email is changed
account = account_api.account_info(self.USERNAME)
self.assertEqual(account['email'], u'new+email@example.com')
def test_confirm_email_change_repeat(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
activation_key = account_api.request_email_change(
self.USERNAME, u'new+email@example.com', self.PASSWORD
)
# Confirm the change once
account_api.confirm_email_change(activation_key)
# Confirm the change again. The activation code should be
# single-use, so this should raise an error.
with self.assertRaises(account_api.AccountNotAuthorized):
account_api.confirm_email_change(activation_key)
def test_create_account_duplicate_username(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
with self.assertRaises(account_api.AccountUserAlreadyExists):
account_api.create_account(self.USERNAME, self.PASSWORD, 'different+email@example.com')
# Email uniqueness constraints were introduced in a database migration,
# which we disable in the unit tests to improve the speed of the test suite.
@skipUnless(settings.SOUTH_TESTS_MIGRATE, "South migrations required")
def test_create_account_duplicate_email(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
with self.assertRaises(account_api.AccountUserAlreadyExists):
account_api.create_account('different_user', self.PASSWORD, self.EMAIL)
def test_username_too_long(self):
long_username = 'e' * (account_api.USERNAME_MAX_LENGTH + 1)
with self.assertRaises(account_api.AccountUsernameInvalid):
account_api.create_account(long_username, self.PASSWORD, self.EMAIL)
def test_account_info_no_user(self):
self.assertIs(account_api.account_info('does_not_exist'), None)
@raises(account_api.AccountEmailInvalid)
@ddt.data(*INVALID_EMAILS)
def test_create_account_invalid_email(self, invalid_email):
account_api.create_account(self.USERNAME, self.PASSWORD, invalid_email)
@raises(account_api.AccountPasswordInvalid)
@ddt.data(*INVALID_PASSWORDS)
def test_create_account_invalid_password(self, invalid_password):
account_api.create_account(self.USERNAME, invalid_password, self.EMAIL)
@raises(account_api.AccountPasswordInvalid)
def test_create_account_username_password_equal(self):
# Username and password cannot be the same
account_api.create_account(self.USERNAME, self.USERNAME, self.EMAIL)
@raises(account_api.AccountRequestError)
@ddt.data(*INVALID_USERNAMES)
def test_create_account_invalid_username(self, invalid_username):
account_api.create_account(invalid_username, self.PASSWORD, self.EMAIL)
@raises(account_api.AccountNotAuthorized)
def test_activate_account_invalid_key(self):
account_api.activate_account(u'invalid')
@raises(account_api.AccountUserNotFound)
def test_request_email_change_no_user(self):
account_api.request_email_change(u'no_such_user', self.EMAIL, self.PASSWORD)
@ddt.data(*INVALID_EMAILS)
def test_request_email_change_invalid_email(self, invalid_email):
# Create an account with a valid email address
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
# Attempt to change the account to an invalid email
with self.assertRaises(account_api.AccountEmailInvalid):
account_api.request_email_change(self.USERNAME, invalid_email, self.PASSWORD)
def test_request_email_change_already_exists(self):
# Create two accounts, both activated
activation_key = account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.activate_account(activation_key)
activation_key = account_api.create_account(u'another_user', u'password', u'another+user@example.com')
account_api.activate_account(activation_key)
# Try to change the first user's email to the same as the second user's
with self.assertRaises(account_api.AccountEmailAlreadyExists):
account_api.request_email_change(self.USERNAME, u'another+user@example.com', self.PASSWORD)
def test_request_email_change_duplicates_unactivated_account(self):
# Create two accounts, but the second account is inactive
activation_key = account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.activate_account(activation_key)
account_api.create_account(u'another_user', u'password', u'another+user@example.com')
# Try to change the first user's email to the same as the second user's
# Since the second user has not yet activated, this should succeed.
account_api.request_email_change(self.USERNAME, u'another+user@example.com', self.PASSWORD)
def test_request_email_change_same_address(self):
# Create and activate the account
activation_key = account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.activate_account(activation_key)
# Try to change the email address to the current address
with self.assertRaises(account_api.AccountEmailAlreadyExists):
account_api.request_email_change(self.USERNAME, self.EMAIL, self.PASSWORD)
def test_request_email_change_wrong_password(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
# Use the wrong password
with self.assertRaises(account_api.AccountNotAuthorized):
account_api.request_email_change(self.USERNAME, u'new+email@example.com', u'wrong password')
def test_confirm_email_change_invalid_activation_key(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.request_email_change(self.USERNAME, u'new+email@example.com', self.PASSWORD)
with self.assertRaises(account_api.AccountNotAuthorized):
account_api.confirm_email_change(u'invalid')
    def test_confirm_email_change_no_request_pending(self):
        # Create an account, but do not request an email change
        account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
        # With no change request pending, any confirmation attempt should fail
        with self.assertRaises(account_api.AccountNotAuthorized):
            account_api.confirm_email_change(u'invalid')
def test_confirm_email_already_exists(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
# Request a change
activation_key = account_api.request_email_change(
self.USERNAME, u'new+email@example.com', self.PASSWORD
)
        # Another user takes the email before we confirm the change
account_api.create_account(u'other_user', u'password', u'new+email@example.com')
# When we try to confirm our change, we get an error because the email is taken
with self.assertRaises(account_api.AccountEmailAlreadyExists):
account_api.confirm_email_change(activation_key)
# Verify that the email was NOT changed
self.assertEqual(account_api.account_info(self.USERNAME)['email'], self.EMAIL)
def test_confirm_email_no_user_profile(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
activation_key = account_api.request_email_change(
self.USERNAME, u'new+email@example.com', self.PASSWORD
)
# This should never happen, but just in case...
UserProfile.objects.get(user__username=self.USERNAME).delete()
with self.assertRaises(account_api.AccountInternalError):
account_api.confirm_email_change(activation_key)
def test_record_email_change_history(self):
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
# Change the email once
activation_key = account_api.request_email_change(
self.USERNAME, u'new+email@example.com', self.PASSWORD
)
account_api.confirm_email_change(activation_key)
# Verify that the old email appears in the history
meta = UserProfile.objects.get(user__username=self.USERNAME).get_meta()
self.assertEqual(len(meta['old_emails']), 1)
email, timestamp = meta['old_emails'][0]
self.assertEqual(email, self.EMAIL)
self._assert_is_datetime(timestamp)
# Change the email again
activation_key = account_api.request_email_change(
self.USERNAME, u'another_new+email@example.com', self.PASSWORD
)
account_api.confirm_email_change(activation_key)
# Verify that both emails appear in the history
meta = UserProfile.objects.get(user__username=self.USERNAME).get_meta()
self.assertEqual(len(meta['old_emails']), 2)
email, timestamp = meta['old_emails'][1]
self.assertEqual(email, 'new+email@example.com')
self._assert_is_datetime(timestamp)
@skipUnless(settings.ROOT_URLCONF == 'lms.urls', 'Test only valid in LMS')
def test_request_password_change(self):
# Create and activate an account
activation_key = account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.activate_account(activation_key)
# Request a password change
account_api.request_password_change(self.EMAIL, self.ORIG_HOST, self.IS_SECURE)
# Verify that one email message has been sent
self.assertEqual(len(mail.outbox), 1)
# Verify that the body of the message contains something that looks
# like an activation link
email_body = mail.outbox[0].body
        result = re.search(r'(?P<url>https?://[^\s]+)', email_body)
self.assertIsNot(result, None)
@raises(account_api.AccountUserNotFound)
@ddt.data(True, False)
def test_request_password_change_invalid_user(self, create_inactive_account):
if create_inactive_account:
# Create an account, but do not activate it
account_api.create_account(self.USERNAME, self.PASSWORD, self.EMAIL)
account_api.request_password_change(self.EMAIL, self.ORIG_HOST, self.IS_SECURE)
# Verify that no email messages have been sent
self.assertEqual(len(mail.outbox), 0)
    def _assert_is_datetime(self, timestamp):
        # The original returned True/False here, but callers never checked
        # the result, so parse failures passed silently; fail the test instead.
        self.assertTrue(timestamp)
        try:
            parse_datetime(timestamp)
        except ValueError:
            self.fail(u"'{0}' is not a parseable datetime".format(timestamp))
|
agpl-3.0
|
josecolella/PLD
|
bin/osx/treasurehunters.app/Contents/Resources/lib/python3.4/numpy/oldnumeric/ufuncs.py
|
13
|
1297
|
from __future__ import division, absolute_import, print_function
__all__ = ['less', 'cosh', 'arcsinh', 'add', 'ceil', 'arctan2', 'floor_divide',
'fmod', 'hypot', 'logical_and', 'power', 'sinh', 'remainder', 'cos',
'equal', 'arccos', 'less_equal', 'divide', 'bitwise_or',
'bitwise_and', 'logical_xor', 'log', 'subtract', 'invert',
'negative', 'log10', 'arcsin', 'arctanh', 'logical_not',
'not_equal', 'tanh', 'true_divide', 'maximum', 'arccosh',
'logical_or', 'minimum', 'conjugate', 'tan', 'greater',
'bitwise_xor', 'fabs', 'floor', 'sqrt', 'arctan', 'right_shift',
'absolute', 'sin', 'multiply', 'greater_equal', 'left_shift',
'exp', 'divide_safe']
from numpy import less, cosh, arcsinh, add, ceil, arctan2, floor_divide, \
fmod, hypot, logical_and, power, sinh, remainder, cos, \
equal, arccos, less_equal, divide, bitwise_or, bitwise_and, \
logical_xor, log, subtract, invert, negative, log10, arcsin, \
arctanh, logical_not, not_equal, tanh, true_divide, maximum, \
arccosh, logical_or, minimum, conjugate, tan, greater, bitwise_xor, \
fabs, floor, sqrt, arctan, right_shift, absolute, sin, \
multiply, greater_equal, left_shift, exp, divide as divide_safe
|
mit
|
springmeyer/gyp
|
test/errors/gyptest-errors.py
|
54
|
1921
|
#!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Test that two targets with the same name generates an error.
"""
import os
import sys
import TestGyp
import TestCmd
# TODO(sbc): Remove the use of match_re below, done because scons
# error messages were not consistent with other generators.
# Also remove input.py:generator_wants_absolute_build_file_paths.
test = TestGyp.TestGyp()
stderr = ('gyp: Duplicate target definitions for '
'.*duplicate_targets.gyp:foo#target\n')
test.run_gyp('duplicate_targets.gyp', status=1, stderr=stderr,
match=TestCmd.match_re)
stderr = ('.*: Unable to find targets in build file .*missing_targets.gyp.*')
test.run_gyp('missing_targets.gyp', status=1, stderr=stderr,
match=TestCmd.match_re_dotall)
stderr = ('gyp: rule bar exists in duplicate, target '
'.*duplicate_rule.gyp:foo#target\n')
test.run_gyp('duplicate_rule.gyp', status=1, stderr=stderr,
match=TestCmd.match_re)
stderr = ("gyp: Key 'targets' repeated at level 1 with key path '' while "
"reading .*duplicate_node.gyp.*")
test.run_gyp('duplicate_node.gyp', '--check', status=1, stderr=stderr,
match=TestCmd.match_re_dotall)
stderr = (".*target0.*target1.*target2.*target0.*")
test.run_gyp('dependency_cycle.gyp', status=1, stderr=stderr,
match=TestCmd.match_re_dotall)
stderr = (".*file_cycle0.*file_cycle1.*file_cycle0.*")
test.run_gyp('file_cycle0.gyp', status=1, stderr=stderr,
match=TestCmd.match_re_dotall)
stderr = ("gyp: Dependency '.*missing_dep.gyp:missing.gyp#target' not found "
"while trying to load target .*missing_dep.gyp:foo#target\n")
test.run_gyp('missing_dep.gyp', status=1, stderr=stderr,
match=TestCmd.match_re)
test.pass_test()
|
bsd-3-clause
|
Kazade/NeHe-Website
|
google_appengine/lib/django-1.2/django/templatetags/cache.py
|
309
|
2406
|
from django.template import Library, Node, TemplateSyntaxError, Variable, VariableDoesNotExist
from django.template import resolve_variable
from django.core.cache import cache
from django.utils.encoding import force_unicode
from django.utils.http import urlquote
from django.utils.hashcompat import md5_constructor
register = Library()
class CacheNode(Node):
def __init__(self, nodelist, expire_time_var, fragment_name, vary_on):
self.nodelist = nodelist
self.expire_time_var = Variable(expire_time_var)
self.fragment_name = fragment_name
self.vary_on = vary_on
def render(self, context):
try:
expire_time = self.expire_time_var.resolve(context)
except VariableDoesNotExist:
raise TemplateSyntaxError('"cache" tag got an unknown variable: %r' % self.expire_time_var.var)
try:
expire_time = int(expire_time)
except (ValueError, TypeError):
raise TemplateSyntaxError('"cache" tag got a non-integer timeout value: %r' % expire_time)
# Build a unicode key for this fragment and all vary-on's.
args = md5_constructor(u':'.join([urlquote(resolve_variable(var, context)) for var in self.vary_on]))
cache_key = 'template.cache.%s.%s' % (self.fragment_name, args.hexdigest())
value = cache.get(cache_key)
if value is None:
value = self.nodelist.render(context)
cache.set(cache_key, value, expire_time)
return value
def do_cache(parser, token):
"""
This will cache the contents of a template fragment for a given amount
of time.
Usage::
{% load cache %}
{% cache [expire_time] [fragment_name] %}
.. some expensive processing ..
{% endcache %}
This tag also supports varying by a list of arguments::
{% load cache %}
{% cache [expire_time] [fragment_name] [var1] [var2] .. %}
.. some expensive processing ..
{% endcache %}
Each unique set of arguments will result in a unique cache entry.
"""
nodelist = parser.parse(('endcache',))
parser.delete_first_token()
tokens = token.contents.split()
if len(tokens) < 3:
raise TemplateSyntaxError(u"'%r' tag requires at least 2 arguments." % tokens[0])
return CacheNode(nodelist, tokens[1], tokens[2], tokens[3:])
register.tag('cache', do_cache)
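# Illustrative sketch (comments only; 'sidebar' and u'alice' are hypothetical
# values): the fragment key built in CacheNode.render() can be reproduced by
# hand, e.g. to invalidate a fragment cached with
# {% cache 500 sidebar request.user.username %}:
#
#   args = md5_constructor(u':'.join([urlquote(u'alice')]))
#   cache.delete('template.cache.%s.%s' % ('sidebar', args.hexdigest()))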
|
bsd-3-clause
|
florentx/OpenUpgrade
|
addons/hr_payroll/wizard/__init__.py
|
442
|
1159
|
#-*- coding:utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). All Rights Reserved
# $Id$
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import hr_payroll_payslips_by_employees
import hr_payroll_contribution_register_report
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
agpl-3.0
|
byterom/android_external_chromium_org
|
chrome/browser/resources/chromeos/chromevox/PRESUBMIT.py
|
86
|
1391
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Presubmit script for ChromeVox."""
def CheckChangeOnUpload(input_api, output_api):
paths = input_api.AbsoluteLocalPaths()
def ShouldCheckFile(path):
return path.endswith('.js') or path.endswith('.py')
def ScriptFilter(path):
return (path.endswith('check_chromevox.py') or
path.endswith('jscompilerwrapper.py') or
path.endswith('jsbundler.py'))
# Only care about changes to JS files or the scripts that check them.
paths = [p for p in paths if ShouldCheckFile(p)]
if not paths:
return []
# If changing what the presubmit script uses, run the check on all
# scripts. Otherwise, let CheckChromeVox figure out what scripts to
# compile, if any, based on the changed paths.
if any((ScriptFilter(p) for p in paths)):
paths = None
import sys
if not sys.platform.startswith('linux'):
return []
sys.path.insert(0, input_api.os_path.join(
input_api.PresubmitLocalPath(), 'tools'))
try:
from check_chromevox import CheckChromeVox
finally:
sys.path.pop(0)
success, output = CheckChromeVox(paths)
if not success:
return [output_api.PresubmitError(
'ChromeVox closure compilation failed',
long_text=output)]
return []
|
bsd-3-clause
|
mantidproject/mantid
|
Framework/PythonInterface/test/python/plugins/algorithms/WorkflowAlgorithms/IndirectILLEnergyTransferTest.py
|
3
|
6789
|
# Mantid Repository : https://github.com/mantidproject/mantid
#
# Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI,
# NScD Oak Ridge National Laboratory, European Spallation Source,
# Institut Laue - Langevin & CSNS, Institute of High Energy Physics, CAS
# SPDX - License - Identifier: GPL - 3.0 +
import os
import unittest
from mantid.simpleapi import *
from mantid.api import MatrixWorkspace, WorkspaceGroup
from mantid import config
class IndirectILLEnergyTransferTest(unittest.TestCase):
_runs = dict([('one_wing_QENS', '090661'),
('one_wing_EFWS', '083072'),
('one_wing_IFWS', '083073'),
('two_wing_QENS', '136558-136559'),
('two_wing_EFWS', '143720'),
('two_wing_IFWS', '170300'),
('bats', '215962'),
('3_single_dets', '318724')])
    # cache the default facility, instrument and data search dirs
_def_fac = config['default.facility']
_def_inst = config['default.instrument']
_data_dirs = config['datasearch.directories']
def setUp(self):
# set instrument and append datasearch directory
config['default.facility'] = 'ILL'
config['default.instrument'] = 'IN16B'
config.appendDataSearchSubDir('ILL/IN16B/')
def tearDown(self):
        # restore the cached facility, instrument and data search directories
config['default.facility'] = self._def_fac
config['default.instrument'] = self._def_inst
config['datasearch.directories'] = self._data_dirs
def test_complete_options(self):
# Tests for map file, no verbose, multiple runs, and crop dead channels for two wing QENS data
# manually get name of grouping file from parameter file
idf = os.path.join(config['instrumentDefinition.directory'], "IN16B_Definition.xml")
ipf = os.path.join(config['instrumentDefinition.directory'], "IN16B_Parameters.xml")
ws = LoadEmptyInstrument(Filename=idf)
LoadParameterFile(ws, Filename=ipf)
instrument = ws.getInstrument()
grouping_filename = instrument.getStringParameter('Workflow.GroupingFile')[0]
DeleteWorkspace(ws)
args = {'Run': self._runs['two_wing_QENS'],
'MapFile': os.path.join(config['groupingFiles.directory'], grouping_filename),
'CropDeadMonitorChannels': True,
'OutputWorkspace': 'red'}
IndirectILLEnergyTransfer(**args)
self._check_workspace_group(mtd['red'], 2, 18, 1017)
deltaE = mtd['red'][0].readX(0)
bsize = mtd['red'][0].blocksize()
self.assertAlmostEquals(deltaE[bsize//2], 0, 4)
self.assertTrue(deltaE[-1] > -deltaE[0])
def test_one_wing_QENS(self):
# tests one wing QENS with PSD range
args = {'Run': self._runs['one_wing_QENS'],
'ManualPSDIntegrationRange': [20,100]}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 18, 1024)
deltaE = res[0].readX(0)
bsize = res[0].blocksize()
self.assertEquals(deltaE[bsize//2], 0)
self.assertTrue(deltaE[-1] > -deltaE[0])
def test_one_wing_EFWS(self):
args = {'Run': self._runs['one_wing_EFWS']}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 18, 256)
def test_one_wing_IFWS(self):
args = {'Run': self._runs['one_wing_IFWS']}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 18, 256)
def test_two_wing_EFWS(self):
args = {'Run': self._runs['two_wing_EFWS']}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 2, 18, 8)
def test_two_wing_IFWS(self):
args = {'Run': self._runs['two_wing_IFWS']}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 2, 18, 512)
def test_spectrum_axis(self):
args = {'Run': self._runs['one_wing_EFWS'], 'SpectrumAxis': '2Theta'}
res = IndirectILLEnergyTransfer(**args)
self.assertTrue(res.getItem(0).getAxis(1).getUnit().unitID(), "Theta")
def test_bats(self):
args = {'Run': self._runs['bats'], 'PulseChopper': '34', 'GroupDetectors': False}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 2050, 1121)
def test_bats_grouped(self):
args = {'Run': self._runs['bats'], 'PulseChopper': '34'}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 18, 1121)
def test_psd_tubes_only(self):
args = {'Run': self._runs['one_wing_QENS'],
'DiscardSingleDetectors': True}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 16, 1024)
def test_3_sd(self):
args = {'Run': self._runs['3_single_dets'],
'DiscardSingleDetectors': False,
"GroupDetectors": False}
res = IndirectILLEnergyTransfer(**args)
self._check_workspace_group(res, 1, 2051, 984)
def test_equatorial_fit(self):
args = {'Run': self._runs['3_single_dets'],
'OutputWorkspace': "res",
'DiscardSingleDetectors': False,
'GroupDetectors': False,
'ElasticPeakFitting': 'FitEquatorialOnly',
'OutputElasticChannelWorkspace': 'out_epp_ws'}
IndirectILLEnergyTransfer(**args)
self._check_workspace_group(mtd["res"], 1, 2051, 984)
epp_ws = mtd['out_epp_ws']
self.assertEquals(epp_ws.rowCount(), 4)
def test_fit_all(self):
args = {'Run': self._runs['3_single_dets'],
'OutputWorkspace': "res",
'DiscardSingleDetectors': False,
'GroupDetectors': False,
'ElasticPeakFitting': 'FitAllPixelGroups',
'OutputElasticChannelWorkspace': 'out_epp_ws'}
IndirectILLEnergyTransfer(**args)
self._check_workspace_group(mtd["res"], 1, 2051, 984)
epp_ws = mtd['out_epp_ws']
self.assertEquals(epp_ws.rowCount(), 516)
def _check_workspace_group(self, wsgroup, nentries, nspectra, nbins):
self.assertTrue(isinstance(wsgroup, WorkspaceGroup))
self.assertEquals(wsgroup.getNumberOfEntries(),nentries)
item = wsgroup.getItem(0)
self.assertTrue(isinstance(item, MatrixWorkspace))
self.assertEqual(item.getAxis(0).getUnit().unitID(), "DeltaE")
self.assertEquals(item.getNumberHistograms(),nspectra)
self.assertEquals(item.blocksize(), nbins)
self.assertTrue(item.getSampleDetails())
self.assertTrue(item.getHistory().lastAlgorithm())
if __name__ == '__main__':
unittest.main()
|
gpl-3.0
|
open-synergy/hr
|
hr_employee_benefit/models/hr_contract.py
|
15
|
1171
|
# -*- coding:utf-8 -*-
##############################################################################
#
# Copyright (C) 2015 Savoir-faire Linux. All Rights Reserved.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp import fields, models
class HrContract(models.Model):
_inherit = 'hr.contract'
benefit_line_ids = fields.One2many(
'hr.employee.benefit',
'contract_id',
'Employee Benefits',
)
|
agpl-3.0
|
cloudfoundry/php-buildpack-legacy
|
builds/runtimes/python-2.7.6/lib/python2.7/test/test_nis.py
|
91
|
1344
|
from test import test_support
import unittest
nis = test_support.import_module('nis')
class NisTests(unittest.TestCase):
def test_maps(self):
try:
maps = nis.maps()
except nis.error, msg:
# NIS is probably not active, so this test isn't useful
if test_support.verbose:
print "Test Skipped:", msg
            # Can't raise SkipTest as regrtest only recognizes the exception
            # at import time.
return
try:
# On some systems, this map is only accessible to the
# super user
maps.remove("passwd.adjunct.byname")
except ValueError:
pass
done = 0
for nismap in maps:
mapping = nis.cat(nismap)
for k, v in mapping.items():
if not k:
continue
if nis.match(k, nismap) != v:
self.fail("NIS match failed for key `%s' in map `%s'" % (k, nismap))
else:
# just test the one key, otherwise this test could take a
# very long time
done = 1
break
if done:
break
def test_main():
test_support.run_unittest(NisTests)
if __name__ == '__main__':
test_main()
|
mit
|
steventimberman/masterDebater
|
env/lib/python2.7/site-packages/django/core/mail/message.py
|
36
|
19471
|
from __future__ import unicode_literals
import mimetypes
import os
import random
import time
from email import (
charset as Charset, encoders as Encoders, generator, message_from_string,
)
from email.header import Header
from email.message import Message
from email.mime.base import MIMEBase
from email.mime.message import MIMEMessage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import formatdate, getaddresses, parseaddr
from io import BytesIO
from django.conf import settings
from django.core.mail.utils import DNS_NAME
from django.utils import six
from django.utils.encoding import force_text
# Don't BASE64-encode UTF-8 messages so that we avoid unwanted attention from
# some spam filters.
utf8_charset = Charset.Charset('utf-8')
utf8_charset.body_encoding = None # Python defaults to BASE64
utf8_charset_qp = Charset.Charset('utf-8')
utf8_charset_qp.body_encoding = Charset.QP
# Default MIME type to use on attachments (if it is not explicitly given
# and cannot be guessed).
DEFAULT_ATTACHMENT_MIME_TYPE = 'application/octet-stream'
RFC5322_EMAIL_LINE_LENGTH_LIMIT = 998
class BadHeaderError(ValueError):
pass
# Copied from Python 3.2+ standard library, with the following modifications:
# * Used cached hostname for performance.
# TODO: replace with email.utils.make_msgid(.., domain=DNS_NAME) when dropping
# Python 2 (Python 2's version doesn't have domain parameter) (#23905).
def make_msgid(idstring=None, domain=None):
"""Returns a string suitable for RFC 5322 compliant Message-ID, e.g:
<20020201195627.33539.96671@nightshade.la.mastaler.com>
Optional idstring if given is a string used to strengthen the
uniqueness of the message id. Optional domain if given provides the
portion of the message id after the '@'. It defaults to the locally
defined hostname.
"""
timeval = time.time()
utcdate = time.strftime('%Y%m%d%H%M%S', time.gmtime(timeval))
pid = os.getpid()
randint = random.randrange(100000)
if idstring is None:
idstring = ''
else:
idstring = '.' + idstring
if domain is None:
# stdlib uses socket.getfqdn() here instead
domain = DNS_NAME
msgid = '<%s.%s.%s%s@%s>' % (utcdate, pid, randint, idstring, domain)
return msgid
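# Example (illustrative; the date, pid and random parts vary per call):
#   make_msgid('welcome', domain='mail.example.com')
#   -> '<20160101120000.4242.12345.welcome@mail.example.com>'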
# Header names that contain structured address data (RFC #5322)
ADDRESS_HEADERS = {
'from',
'sender',
'reply-to',
'to',
'cc',
'bcc',
'resent-from',
'resent-sender',
'resent-to',
'resent-cc',
'resent-bcc',
}
def forbid_multi_line_headers(name, val, encoding):
"""Forbids multi-line headers, to prevent header injection."""
encoding = encoding or settings.DEFAULT_CHARSET
val = force_text(val)
if '\n' in val or '\r' in val:
raise BadHeaderError("Header values can't contain newlines (got %r for header %r)" % (val, name))
try:
val.encode('ascii')
except UnicodeEncodeError:
if name.lower() in ADDRESS_HEADERS:
val = ', '.join(sanitize_address(addr, encoding) for addr in getaddresses((val,)))
else:
val = Header(val, encoding).encode()
else:
if name.lower() == 'subject':
val = Header(val).encode()
return str(name), val
def split_addr(addr, encoding):
"""
Split the address into local part and domain, properly encoded.
When non-ascii characters are present in the local part, it must be
MIME-word encoded. The domain name must be idna-encoded if it contains
non-ascii characters.
"""
if '@' in addr:
localpart, domain = addr.split('@', 1)
# Try to get the simplest encoding - ascii if possible so that
# to@example.com doesn't become =?utf-8?q?to?=@example.com. This
# makes unit testing a bit easier and more readable.
try:
localpart.encode('ascii')
except UnicodeEncodeError:
localpart = Header(localpart, encoding).encode()
domain = domain.encode('idna').decode('ascii')
else:
localpart = Header(addr, encoding).encode()
domain = ''
return (localpart, domain)
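# Example (illustrative): split_addr(u'tó@example.com', 'utf-8') MIME-word
# encodes the non-ascii local part, returning roughly
# ('=?utf-8?b?dMOz?=', 'example.com'); an all-ascii local part is passed
# through untouched.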
def sanitize_address(addr, encoding):
"""
Format a pair of (name, address) or an email address string.
"""
if not isinstance(addr, tuple):
addr = parseaddr(force_text(addr))
nm, addr = addr
localpart, domain = None, None
nm = Header(nm, encoding).encode()
try:
addr.encode('ascii')
except UnicodeEncodeError: # IDN or non-ascii in the local part
localpart, domain = split_addr(addr, encoding)
if six.PY2:
# On Python 2, use the stdlib since `email.headerregistry` doesn't exist.
from email.utils import formataddr
if localpart and domain:
addr = '@'.join([localpart, domain])
return formataddr((nm, addr))
# On Python 3, an `email.headerregistry.Address` object is used since
# email.utils.formataddr() naively encodes the name as ascii (see #25986).
from email.headerregistry import Address
from email.errors import InvalidHeaderDefect, NonASCIILocalPartDefect
if localpart and domain:
address = Address(nm, username=localpart, domain=domain)
return str(address)
try:
address = Address(nm, addr_spec=addr)
except (InvalidHeaderDefect, NonASCIILocalPartDefect):
localpart, domain = split_addr(addr, encoding)
address = Address(nm, username=localpart, domain=domain)
return str(address)
class MIMEMixin():
def as_string(self, unixfrom=False, linesep='\n'):
"""Return the entire formatted message as a string.
Optional `unixfrom' when True, means include the Unix From_ envelope
header.
This overrides the default as_string() implementation to not mangle
lines that begin with 'From '. See bug #13433 for details.
"""
fp = six.StringIO()
g = generator.Generator(fp, mangle_from_=False)
if six.PY2:
g.flatten(self, unixfrom=unixfrom)
else:
g.flatten(self, unixfrom=unixfrom, linesep=linesep)
return fp.getvalue()
if six.PY2:
as_bytes = as_string
else:
def as_bytes(self, unixfrom=False, linesep='\n'):
"""Return the entire formatted message as bytes.
Optional `unixfrom' when True, means include the Unix From_ envelope
header.
This overrides the default as_bytes() implementation to not mangle
lines that begin with 'From '. See bug #13433 for details.
"""
fp = BytesIO()
g = generator.BytesGenerator(fp, mangle_from_=False)
g.flatten(self, unixfrom=unixfrom, linesep=linesep)
return fp.getvalue()
class SafeMIMEMessage(MIMEMixin, MIMEMessage):
def __setitem__(self, name, val):
# message/rfc822 attachments must be ASCII
name, val = forbid_multi_line_headers(name, val, 'ascii')
MIMEMessage.__setitem__(self, name, val)
class SafeMIMEText(MIMEMixin, MIMEText):
def __init__(self, _text, _subtype='plain', _charset=None):
self.encoding = _charset
MIMEText.__init__(self, _text, _subtype=_subtype, _charset=_charset)
def __setitem__(self, name, val):
name, val = forbid_multi_line_headers(name, val, self.encoding)
MIMEText.__setitem__(self, name, val)
def set_payload(self, payload, charset=None):
if charset == 'utf-8':
has_long_lines = any(
len(l.encode('utf-8')) > RFC5322_EMAIL_LINE_LENGTH_LIMIT
for l in payload.splitlines()
)
# Quoted-Printable encoding has the side effect of shortening long
# lines, if any (#22561).
charset = utf8_charset_qp if has_long_lines else utf8_charset
MIMEText.set_payload(self, payload, charset=charset)
class SafeMIMEMultipart(MIMEMixin, MIMEMultipart):
def __init__(self, _subtype='mixed', boundary=None, _subparts=None, encoding=None, **_params):
self.encoding = encoding
MIMEMultipart.__init__(self, _subtype, boundary, _subparts, **_params)
def __setitem__(self, name, val):
name, val = forbid_multi_line_headers(name, val, self.encoding)
MIMEMultipart.__setitem__(self, name, val)
class EmailMessage(object):
"""
A container for email information.
"""
content_subtype = 'plain'
mixed_subtype = 'mixed'
encoding = None # None => use settings default
def __init__(self, subject='', body='', from_email=None, to=None, bcc=None,
connection=None, attachments=None, headers=None, cc=None,
reply_to=None):
"""
Initialize a single email message (which can be sent to multiple
recipients).
All strings used to create the message can be unicode strings
(or UTF-8 bytestrings). The SafeMIMEText class will handle any
necessary encoding conversions.
"""
if to:
if isinstance(to, six.string_types):
raise TypeError('"to" argument must be a list or tuple')
self.to = list(to)
else:
self.to = []
if cc:
if isinstance(cc, six.string_types):
raise TypeError('"cc" argument must be a list or tuple')
self.cc = list(cc)
else:
self.cc = []
if bcc:
if isinstance(bcc, six.string_types):
raise TypeError('"bcc" argument must be a list or tuple')
self.bcc = list(bcc)
else:
self.bcc = []
if reply_to:
if isinstance(reply_to, six.string_types):
raise TypeError('"reply_to" argument must be a list or tuple')
self.reply_to = list(reply_to)
else:
self.reply_to = []
self.from_email = from_email or settings.DEFAULT_FROM_EMAIL
self.subject = subject
self.body = body
self.attachments = []
if attachments:
for attachment in attachments:
if isinstance(attachment, MIMEBase):
self.attach(attachment)
else:
self.attach(*attachment)
self.extra_headers = headers or {}
self.connection = connection
def get_connection(self, fail_silently=False):
from django.core.mail import get_connection
if not self.connection:
self.connection = get_connection(fail_silently=fail_silently)
return self.connection
def message(self):
encoding = self.encoding or settings.DEFAULT_CHARSET
msg = SafeMIMEText(self.body, self.content_subtype, encoding)
msg = self._create_message(msg)
msg['Subject'] = self.subject
msg['From'] = self.extra_headers.get('From', self.from_email)
msg['To'] = self.extra_headers.get('To', ', '.join(map(force_text, self.to)))
if self.cc:
msg['Cc'] = ', '.join(map(force_text, self.cc))
if self.reply_to:
msg['Reply-To'] = self.extra_headers.get('Reply-To', ', '.join(map(force_text, self.reply_to)))
# Email header names are case-insensitive (RFC 2045), so we have to
# accommodate that when doing comparisons.
header_names = [key.lower() for key in self.extra_headers]
if 'date' not in header_names:
# formatdate() uses stdlib methods to format the date, which use
# the stdlib/OS concept of a timezone, however, Django sets the
# TZ environment variable based on the TIME_ZONE setting which
# will get picked up by formatdate().
msg['Date'] = formatdate(localtime=settings.EMAIL_USE_LOCALTIME)
if 'message-id' not in header_names:
# Use cached DNS_NAME for performance
msg['Message-ID'] = make_msgid(domain=DNS_NAME)
for name, value in self.extra_headers.items():
if name.lower() in ('from', 'to'): # From and To are already handled
continue
msg[name] = value
return msg
def recipients(self):
"""
Returns a list of all recipients of the email (includes direct
addressees as well as Cc and Bcc entries).
"""
return [email for email in (self.to + self.cc + self.bcc) if email]
def send(self, fail_silently=False):
"""Sends the email message."""
if not self.recipients():
# Don't bother creating the network connection if there's nobody to
# send to.
return 0
return self.get_connection(fail_silently).send_messages([self])
def attach(self, filename=None, content=None, mimetype=None):
"""
Attaches a file with the given filename and content. The filename can
be omitted and the mimetype is guessed, if not provided.
If the first parameter is a MIMEBase subclass it is inserted directly
into the resulting message attachments.
For a text/* mimetype (guessed or specified), when a bytes object is
specified as content, it will be decoded as UTF-8. If that fails,
the mimetype will be set to DEFAULT_ATTACHMENT_MIME_TYPE and the
content is not decoded.
"""
if isinstance(filename, MIMEBase):
assert content is None
assert mimetype is None
self.attachments.append(filename)
else:
assert content is not None
if not mimetype:
mimetype, _ = mimetypes.guess_type(filename)
if not mimetype:
mimetype = DEFAULT_ATTACHMENT_MIME_TYPE
basetype, subtype = mimetype.split('/', 1)
if basetype == 'text':
if isinstance(content, six.binary_type):
try:
content = content.decode('utf-8')
except UnicodeDecodeError:
# If mimetype suggests the file is text but it's actually
# binary, read() will raise a UnicodeDecodeError on Python 3.
mimetype = DEFAULT_ATTACHMENT_MIME_TYPE
self.attachments.append((filename, content, mimetype))
def attach_file(self, path, mimetype=None):
"""
Attaches a file from the filesystem.
The mimetype will be set to the DEFAULT_ATTACHMENT_MIME_TYPE if it is
not specified and cannot be guessed.
For a text/* mimetype (guessed or specified), the file's content
will be decoded as UTF-8. If that fails, the mimetype will be set to
DEFAULT_ATTACHMENT_MIME_TYPE and the content is not decoded.
"""
filename = os.path.basename(path)
with open(path, 'rb') as file:
content = file.read()
self.attach(filename, content, mimetype)
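    # Illustrative usage of attach()/attach_file() (comments only; all
    # values here are hypothetical):
    #   msg = EmailMessage('Report', 'See attachment.', 'from@example.com',
    #                      ['to@example.com'])
    #   msg.attach('report.csv', b'a,b\n1,2\n', 'text/csv')
    #   msg.attach_file('/tmp/photo.jpg')  # mimetype guessed from the name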
def _create_message(self, msg):
return self._create_attachments(msg)
def _create_attachments(self, msg):
if self.attachments:
encoding = self.encoding or settings.DEFAULT_CHARSET
body_msg = msg
msg = SafeMIMEMultipart(_subtype=self.mixed_subtype, encoding=encoding)
if self.body:
msg.attach(body_msg)
for attachment in self.attachments:
if isinstance(attachment, MIMEBase):
msg.attach(attachment)
else:
msg.attach(self._create_attachment(*attachment))
return msg
def _create_mime_attachment(self, content, mimetype):
"""
Converts the content, mimetype pair into a MIME attachment object.
If the mimetype is message/rfc822, content may be an
email.Message or EmailMessage object, as well as a str.
"""
basetype, subtype = mimetype.split('/', 1)
if basetype == 'text':
encoding = self.encoding or settings.DEFAULT_CHARSET
attachment = SafeMIMEText(content, subtype, encoding)
elif basetype == 'message' and subtype == 'rfc822':
# Bug #18967: per RFC2046 s5.2.1, message/rfc822 attachments
# must not be base64 encoded.
if isinstance(content, EmailMessage):
# convert content into an email.Message first
content = content.message()
elif not isinstance(content, Message):
# For compatibility with existing code, parse the message
# into an email.Message object if it is not one already.
content = message_from_string(content)
attachment = SafeMIMEMessage(content, subtype)
else:
# Encode non-text attachments with base64.
attachment = MIMEBase(basetype, subtype)
attachment.set_payload(content)
Encoders.encode_base64(attachment)
return attachment
def _create_attachment(self, filename, content, mimetype=None):
"""
Converts the filename, content, mimetype triple into a MIME attachment
object.
"""
attachment = self._create_mime_attachment(content, mimetype)
if filename:
try:
filename.encode('ascii')
except UnicodeEncodeError:
if six.PY2:
filename = filename.encode('utf-8')
filename = ('utf-8', '', filename)
attachment.add_header('Content-Disposition', 'attachment',
filename=filename)
return attachment
class EmailMultiAlternatives(EmailMessage):
"""
A version of EmailMessage that makes it easy to send multipart/alternative
messages. For example, including text and HTML versions of the text is
made easier.
"""
alternative_subtype = 'alternative'
def __init__(self, subject='', body='', from_email=None, to=None, bcc=None,
connection=None, attachments=None, headers=None, alternatives=None,
cc=None, reply_to=None):
"""
Initialize a single email message (which can be sent to multiple
recipients).
All strings used to create the message can be unicode strings (or UTF-8
bytestrings). The SafeMIMEText class will handle any necessary encoding
conversions.
"""
super(EmailMultiAlternatives, self).__init__(
subject, body, from_email, to, bcc, connection, attachments,
headers, cc, reply_to,
)
self.alternatives = alternatives or []
def attach_alternative(self, content, mimetype):
"""Attach an alternative content representation."""
assert content is not None
assert mimetype is not None
self.alternatives.append((content, mimetype))
def _create_message(self, msg):
return self._create_attachments(self._create_alternatives(msg))
def _create_alternatives(self, msg):
encoding = self.encoding or settings.DEFAULT_CHARSET
if self.alternatives:
body_msg = msg
msg = SafeMIMEMultipart(_subtype=self.alternative_subtype, encoding=encoding)
if self.body:
msg.attach(body_msg)
for alternative in self.alternatives:
msg.attach(self._create_mime_attachment(*alternative))
return msg
|
mit
|
pytest-dev/pytest-services
|
tests/test_plugin.py
|
1
|
1365
|
"""Tests for pytest-services plugin."""
import os.path
import socket
import pylibmc
import MySQLdb
def test_memcached(request, memcached, memcached_socket):
"""Test memcached service."""
mc = pylibmc.Client([memcached_socket])
mc.set('some', 1)
assert mc.get('some') == 1
# check memcached cleaner
request.getfixturevalue('memcached_clean')
assert mc.get('some') is None
def test_mysql(mysql, mysql_connection, mysql_socket):
"""Test mysql service."""
conn = MySQLdb.connect(user='root', unix_socket=mysql_socket)
assert conn
def test_xvfb(xvfb, xvfb_display):
"""Test xvfb service."""
socket.create_connection(('127.0.0.1', 6000 + xvfb_display))
def test_port_getter(port_getter):
"""Test port getter utility."""
port1 = port_getter()
sock1 = socket.socket(socket.AF_INET)
sock1.bind(('127.0.0.1', port1))
assert port1
port2 = port_getter()
sock2 = socket.socket(socket.AF_INET)
sock2.bind(('127.0.0.1', port2))
assert port2
assert port1 != port2
def test_display_getter(display_getter):
"""Test display getter utility."""
display1 = display_getter()
assert display1
display2 = display_getter()
assert display2
assert display1 != display2
def test_temp_dir(temp_dir):
"""Test temp dir directory."""
assert os.path.isdir(temp_dir)
|
mit
|
anryko/ansible
|
test/units/modules/network/onyx/test_onyx_snmp_users.py
|
9
|
4077
|
#
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch
from ansible.modules.network.onyx import onyx_snmp_users
from units.modules.utils import set_module_args
from .onyx_module import TestOnyxModule, load_fixture
class TestOnyxSNMPUsersModule(TestOnyxModule):
module = onyx_snmp_users
def setUp(self):
self.enabled = False
super(TestOnyxSNMPUsersModule, self).setUp()
self.mock_get_config = patch.object(
onyx_snmp_users.OnyxSNMPUsersModule, "_show_users")
self.get_config = self.mock_get_config.start()
self.mock_load_config = patch(
'ansible.module_utils.network.onyx.onyx.load_config')
self.load_config = self.mock_load_config.start()
def tearDown(self):
super(TestOnyxSNMPUsersModule, self).tearDown()
self.mock_get_config.stop()
self.mock_load_config.stop()
def load_fixtures(self, commands=None, transport='cli'):
config_file = 'onyx_show_snmp_users.cfg'
self.get_config.return_value = load_fixture(config_file)
self.load_config.return_value = None
def test_snmp_user_state_no_change(self):
set_module_args(dict(users=[dict(name='sara',
enabled='true')]))
self.execute_module(changed=False)
def test_snmp_user_state_with_change(self):
set_module_args(dict(users=[dict(name='sara',
enabled='false')]))
commands = ['no snmp-server user sara v3 enable']
self.execute_module(changed=True, commands=commands)
def test_snmp_user_set_access_state_no_change(self):
set_module_args(dict(users=[dict(name='sara',
set_access_enabled='true')]))
self.execute_module(changed=False)
def test_snmp_user_set_access_state_with_change(self):
set_module_args(dict(users=[dict(name='sara',
set_access_enabled='false')]))
commands = ['no snmp-server user sara v3 enable sets']
self.execute_module(changed=True, commands=commands)
def test_snmp_user_require_privacy_state_no_change(self):
set_module_args(dict(users=[dict(name='sara',
require_privacy='false')]))
self.execute_module(changed=False)
def test_snmp_user_require_privacy_state_with_change(self):
set_module_args(dict(users=[dict(name='sara',
require_privacy='yes')]))
commands = ['snmp-server user sara v3 require-privacy']
self.execute_module(changed=True, commands=commands)
def test_snmp_user_auth_type_no_change(self):
set_module_args(dict(users=[dict(name='sara',
auth_type='sha',
auth_password='12sara123456')]))
self.execute_module(changed=False)
def test_snmp_user_auth_type_with_change(self):
set_module_args(dict(users=[dict(name='sara',
auth_type='md5',
auth_password='12sara123456')]))
commands = ['snmp-server user sara v3 auth md5 12sara123456']
self.execute_module(changed=True, commands=commands)
def test_snmp_user_capability_level_no_change(self):
set_module_args(dict(users=[dict(name='sara',
capability_level='admin')]))
self.execute_module(changed=False)
def test_snmp_user_capability_level_with_change(self):
set_module_args(dict(users=[dict(name='sara',
capability_level='monitor')]))
commands = ['snmp-server user sara v3 capability monitor']
self.execute_module(changed=True, commands=commands)
|
gpl-3.0
|
fbradyirl/home-assistant
|
homeassistant/components/joaoapps_join/notify.py
|
4
|
2731
|
"""Support for Join notifications."""
import logging
import voluptuous as vol
from homeassistant.components.notify import (
ATTR_DATA,
ATTR_TITLE,
ATTR_TITLE_DEFAULT,
PLATFORM_SCHEMA,
BaseNotificationService,
)
from homeassistant.const import CONF_API_KEY
import homeassistant.helpers.config_validation as cv
_LOGGER = logging.getLogger(__name__)
CONF_DEVICE_ID = "device_id"
CONF_DEVICE_IDS = "device_ids"
CONF_DEVICE_NAMES = "device_names"
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_API_KEY): cv.string,
vol.Optional(CONF_DEVICE_ID): cv.string,
vol.Optional(CONF_DEVICE_IDS): cv.string,
vol.Optional(CONF_DEVICE_NAMES): cv.string,
}
)
def get_service(hass, config, discovery_info=None):
"""Get the Join notification service."""
api_key = config.get(CONF_API_KEY)
device_id = config.get(CONF_DEVICE_ID)
device_ids = config.get(CONF_DEVICE_IDS)
device_names = config.get(CONF_DEVICE_NAMES)
if api_key:
from pyjoin import get_devices
if not get_devices(api_key):
_LOGGER.error("Error connecting to Join. Check the API key")
return False
if device_id is None and device_ids is None and device_names is None:
_LOGGER.error(
"No device was provided. Please specify device_id"
", device_ids, or device_names"
)
return False
return JoinNotificationService(api_key, device_id, device_ids, device_names)
class JoinNotificationService(BaseNotificationService):
"""Implement the notification service for Join."""
def __init__(self, api_key, device_id, device_ids, device_names):
"""Initialize the service."""
self._api_key = api_key
self._device_id = device_id
self._device_ids = device_ids
self._device_names = device_names
def send_message(self, message="", **kwargs):
"""Send a message to a user."""
from pyjoin import send_notification
title = kwargs.get(ATTR_TITLE, ATTR_TITLE_DEFAULT)
data = kwargs.get(ATTR_DATA) or {}
send_notification(
device_id=self._device_id,
device_ids=self._device_ids,
device_names=self._device_names,
text=message,
title=title,
icon=data.get("icon"),
smallicon=data.get("smallicon"),
image=data.get("image"),
sound=data.get("sound"),
notification_id=data.get("notification_id"),
url=data.get("url"),
tts=data.get("tts"),
tts_language=data.get("tts_language"),
vibration=data.get("vibration"),
api_key=self._api_key,
)
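# Illustrative configuration sketch (comments only; key names follow
# PLATFORM_SCHEMA above, but the YAML layout and values are assumptions):
#
#   notify:
#     - platform: joaoapps_join
#       api_key: YOUR_JOIN_API_KEY
#       device_names: 'Pixel 3'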
|
apache-2.0
|
sbellem/django-rest-framework
|
rest_framework/reverse.py
|
56
|
2304
|
"""
Provide urlresolver functions that return fully qualified URLs or view names
"""
from __future__ import unicode_literals
from django.core.urlresolvers import reverse as django_reverse
from django.core.urlresolvers import NoReverseMatch
from django.utils import six
from django.utils.functional import lazy
from rest_framework.settings import api_settings
from rest_framework.utils.urls import replace_query_param
def preserve_builtin_query_params(url, request=None):
"""
Given an incoming request, and an outgoing URL representation,
append the value of any built-in query parameters.
"""
if request is None:
return url
overrides = [
api_settings.URL_FORMAT_OVERRIDE,
api_settings.URL_ACCEPT_OVERRIDE
]
for param in overrides:
if param and (param in request.GET):
value = request.GET[param]
url = replace_query_param(url, param, value)
return url
def reverse(viewname, args=None, kwargs=None, request=None, format=None, **extra):
"""
If versioning is being used then we pass any `reverse` calls through
to the versioning scheme instance, so that the resulting URL
can be modified if needed.
"""
scheme = getattr(request, 'versioning_scheme', None)
if scheme is not None:
try:
url = scheme.reverse(viewname, args, kwargs, request, format, **extra)
except NoReverseMatch:
# In case the versioning scheme reversal fails, fallback to the
# default implementation
url = _reverse(viewname, args, kwargs, request, format, **extra)
else:
url = _reverse(viewname, args, kwargs, request, format, **extra)
return preserve_builtin_query_params(url, request)
def _reverse(viewname, args=None, kwargs=None, request=None, format=None, **extra):
"""
Same as `django.core.urlresolvers.reverse`, but optionally takes a request
and returns a fully qualified URL, using the request to get the base URL.
"""
if format is not None:
kwargs = kwargs or {}
kwargs['format'] = format
url = django_reverse(viewname, args=args, kwargs=kwargs, **extra)
if request:
return request.build_absolute_uri(url)
return url
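# Illustrative usage (comments only; the route name and URLs are
# hypothetical):
#   reverse('user-detail', kwargs={'pk': 4}, request=request)
#   -> 'http://testserver/users/4/'   (absolute, built from the request)
#   reverse('user-detail', kwargs={'pk': 4})
#   -> '/users/4/'                    (relative, no request given)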
reverse_lazy = lazy(reverse, six.text_type)
|
bsd-2-clause
|
MrMC/mrmc
|
tools/EventClients/lib/python/ps3/sixpair.py
|
208
|
2903
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import usb
vendor = 0x054c
product = 0x0268
timeout = 5000
passed_value = 0x03f5
def find_sixaxes():
res = []
for bus in usb.busses():
for dev in bus.devices:
if dev.idVendor == vendor and dev.idProduct == product:
res.append(dev)
return res
def find_interface(dev):
for cfg in dev.configurations:
for itf in cfg.interfaces:
for alt in itf:
if alt.interfaceClass == 3:
return alt
raise Exception("Unable to find interface")
def mac_to_string(mac):
return "%02x:%02x:%02x:%02x:%02x:%02x" % (mac[0], mac[1], mac[2], mac[3], mac[4], mac[5])
def set_pair_filename(dirname, filename, mac):
for bus in usb.busses():
if int(bus.dirname) == int(dirname):
for dev in bus.devices:
if int(dev.filename) == int(filename):
if dev.idVendor == vendor and dev.idProduct == product:
update_pair(dev, mac)
return
else:
raise Exception("Device is not a sixaxis")
raise Exception("Device not found")
def set_pair(dev, mac):
itf = find_interface(dev)
handle = dev.open()
msg = (0x01, 0x00) + mac;
try:
handle.detachKernelDriver(itf.interfaceNumber)
except usb.USBError:
pass
handle.claimInterface(itf.interfaceNumber)
try:
handle.controlMsg(usb.ENDPOINT_OUT | usb.TYPE_CLASS | usb.RECIP_INTERFACE
, usb.REQ_SET_CONFIGURATION, msg, passed_value, itf.interfaceNumber, timeout)
finally:
handle.releaseInterface()
def get_pair(dev):
itf = find_interface(dev)
handle = dev.open()
try:
handle.detachKernelDriver(itf.interfaceNumber)
except usb.USBError:
pass
handle.claimInterface(itf.interfaceNumber)
try:
msg = handle.controlMsg(usb.ENDPOINT_IN | usb.TYPE_CLASS | usb.RECIP_INTERFACE
, usb.REQ_CLEAR_FEATURE, 8, passed_value, itf.interfaceNumber, timeout)
finally:
handle.releaseInterface()
return msg[2:8]
def set_pair_all(mac):
devs = find_sixaxes()
for dev in devs:
update_pair(dev, mac)
def update_pair(dev, mac):
old = get_pair(dev)
if old != mac:
print "Reparing sixaxis from:" + mac_to_string(old) + " to:" + mac_to_string(mac)
set_pair(dev, mac)
if __name__=="__main__":
devs = find_sixaxes()
mac = None
if len(sys.argv) > 1:
try:
mac = sys.argv[1].split(':')
mac = tuple([int(x, 16) for x in mac])
if len(mac) != 6:
print "Invalid length of HCI address, should be 6 parts"
mac = None
except:
print "Failed to parse HCI address"
mac = None
for dev in devs:
if mac:
update_pair(dev, mac)
else:
print "Found sixaxis paired to: " + mac_to_string(get_pair(dev))
|
gpl-2.0
|
qusp/orange3
|
Orange/canvas/gui/dropshadow.py
|
16
|
13298
|
"""
=================
Drop Shadow Frame
=================
A widget providing a drop shadow (gaussian blur effect) around another
widget.
"""
from PyQt4.QtGui import (
QWidget, QPainter, QPixmap, QGraphicsScene, QGraphicsRectItem,
QGraphicsDropShadowEffect, QColor, QPen, QPalette, QStyleOption,
QAbstractScrollArea, QToolBar, QRegion
)
from PyQt4.QtCore import (
Qt, QPoint, QPointF, QRect, QRectF, QSize, QSizeF, QEvent
)
from PyQt4.QtCore import pyqtProperty as Property
CACHED_SHADOW_RECT_SIZE = (50, 50)
def render_drop_shadow_frame(pixmap, shadow_rect, shadow_color,
offset, radius, rect_fill_color):
pixmap.fill(QColor(0, 0, 0, 0))
scene = QGraphicsScene()
rect = QGraphicsRectItem(shadow_rect)
rect.setBrush(QColor(rect_fill_color))
rect.setPen(QPen(Qt.NoPen))
scene.addItem(rect)
effect = QGraphicsDropShadowEffect(color=shadow_color,
blurRadius=radius,
offset=offset)
rect.setGraphicsEffect(effect)
scene.setSceneRect(QRectF(QPointF(0, 0), QSizeF(pixmap.size())))
painter = QPainter(pixmap)
scene.render(painter)
painter.end()
scene.clear()
scene.deleteLater()
return pixmap
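# Illustrative usage (sizes are hypothetical): render a 10px blur shadow
# around a 50x50 rectangle into a pixmap padded by the radius on each side.
#
#   pixmap = render_drop_shadow_frame(
#       QPixmap(70, 70), QRectF(10, 10, 50, 50),
#       shadow_color=QColor(Qt.black), offset=QPointF(0, 0), radius=10,
#       rect_fill_color=QColor(Qt.white))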
class DropShadowFrame(QWidget):
"""
A widget drawing a drop shadow effect around the geometry of
another widget (works similar to :class:`QFocusFrame`).
Parameters
----------
parent : :class:`QObject`
Parent object.
color : :class:`QColor`
The color of the drop shadow.
radius : float
Shadow radius.
"""
def __init__(self, parent=None, color=None, radius=5,
**kwargs):
QWidget.__init__(self, parent, **kwargs)
self.setAttribute(Qt.WA_TransparentForMouseEvents, True)
self.setAttribute(Qt.WA_NoChildEventsForParent, True)
self.setFocusPolicy(Qt.NoFocus)
if color is None:
color = self.palette().color(QPalette.Dark)
self.__color = color
self.__radius = radius
self.__widget = None
self.__widgetParent = None
self.__updatePixmap()
def setColor(self, color):
"""
Set the color of the shadow.
"""
if not isinstance(color, QColor):
color = QColor(color)
if self.__color != color:
self.__color = QColor(color)
self.__updatePixmap()
def color(self):
"""
Return the color of the drop shadow.
"""
return QColor(self.__color)
color_ = Property(QColor, fget=color, fset=setColor, designable=True,
doc="Drop shadow color")
def setRadius(self, radius):
"""
Set the drop shadow's blur radius.
"""
if self.__radius != radius:
self.__radius = radius
self.__updateGeometry()
self.__updatePixmap()
def radius(self):
"""
Return the shadow blur radius.
"""
return self.__radius
radius_ = Property(int, fget=radius, fset=setRadius, designable=True,
doc="Drop shadow blur radius.")
def setWidget(self, widget):
"""
Set the widget around which to show the shadow.
"""
if self.__widget:
self.__widget.removeEventFilter(self)
self.__widget = widget
if self.__widget:
self.__widget.installEventFilter(self)
# Find the parent for the frame
# This is the top level window a toolbar or a viewport
# of a scroll area
parent = widget.parentWidget()
while not (isinstance(parent, (QAbstractScrollArea, QToolBar)) or \
parent.isWindow()):
parent = parent.parentWidget()
if isinstance(parent, QAbstractScrollArea):
parent = parent.viewport()
self.__widgetParent = parent
self.setParent(parent)
self.stackUnder(widget)
self.__updateGeometry()
self.setVisible(widget.isVisible())
def widget(self):
"""
Return the widget that was set by `setWidget`.
"""
return self.__widget
def paintEvent(self, event):
# TODO: Use QPainter.drawPixmapFragments on Qt 4.7
opt = QStyleOption()
opt.initFrom(self)
pixmap = self.__shadowPixmap
shadow_rect = QRectF(opt.rect)
widget_rect = QRectF(self.widget().geometry())
widget_rect.moveTo(self.radius_, self.radius_)
left = top = right = bottom = self.radius_
pixmap_rect = QRectF(QPointF(0, 0), QSizeF(pixmap.size()))
# Shadow casting rectangle in the source pixmap.
pixmap_shadow_rect = pixmap_rect.adjusted(left, top, -right, -bottom)
source_rects = self.__shadowPixmapFragments(pixmap_rect,
pixmap_shadow_rect)
target_rects = self.__shadowPixmapFragments(shadow_rect, widget_rect)
painter = QPainter(self)
for source, target in zip(source_rects, target_rects):
painter.drawPixmap(target, pixmap, source)
painter.end()
def eventFilter(self, obj, event):
etype = event.type()
if etype == QEvent.Move or etype == QEvent.Resize:
self.__updateGeometry()
elif etype == QEvent.Show:
self.__updateGeometry()
self.show()
elif etype == QEvent.Hide:
self.hide()
return QWidget.eventFilter(self, obj, event)
def __updateGeometry(self):
"""
Update the shadow geometry to fit the widget's changed
geometry.
"""
widget = self.__widget
parent = self.__widgetParent
radius = self.radius_
pos = widget.pos()
if parent != widget.parentWidget():
pos = widget.parentWidget().mapTo(parent, pos)
geom = QRect(pos, widget.size())
geom.adjust(-radius, -radius, radius, radius)
if geom != self.geometry():
self.setGeometry(geom)
        # Set the widget mask (punch a hole through to the `widget` instance).
rect = self.rect()
mask = QRegion(rect)
transparent = QRegion(rect.adjusted(radius, radius, -radius, -radius))
mask = mask.subtracted(transparent)
self.setMask(mask)
def __updatePixmap(self):
"""
Update the cached shadow pixmap.
"""
rect_size = QSize(50, 50)
left = top = right = bottom = self.radius_
# Size of the pixmap.
pixmap_size = QSize(rect_size.width() + left + right,
rect_size.height() + top + bottom)
shadow_rect = QRect(QPoint(left, top), rect_size)
pixmap = QPixmap(pixmap_size)
pixmap.fill(QColor(0, 0, 0, 0))
rect_fill_color = self.palette().color(QPalette.Window)
pixmap = render_drop_shadow_frame(
pixmap,
QRectF(shadow_rect),
shadow_color=self.color_,
offset=QPointF(0, 0),
radius=self.radius_,
rect_fill_color=rect_fill_color
)
self.__shadowPixmap = pixmap
self.update()
def __shadowPixmapFragments(self, pixmap_rect, shadow_rect):
"""
Return a list of 8 QRectF fragments for drawing a shadow.
"""
s_left, s_top, s_right, s_bottom = \
shadow_rect.left(), shadow_rect.top(), \
shadow_rect.right(), shadow_rect.bottom()
s_width, s_height = shadow_rect.width(), shadow_rect.height()
p_width, p_height = pixmap_rect.width(), pixmap_rect.height()
top_left = QRectF(0.0, 0.0, s_left, s_top)
top = QRectF(s_left, 0.0, s_width, s_top)
        top_right = QRectF(s_right, 0.0, p_width - s_right, s_top)
right = QRectF(s_right, s_top, p_width - s_right, s_height)
right_bottom = QRectF(shadow_rect.bottomRight(),
pixmap_rect.bottomRight())
bottom = QRectF(shadow_rect.bottomLeft(),
pixmap_rect.bottomRight() - \
QPointF(p_width - s_right, 0.0))
bottom_left = QRectF(shadow_rect.bottomLeft() - QPointF(s_left, 0.0),
pixmap_rect.bottomLeft() + QPointF(s_left, 0.0))
left = QRectF(pixmap_rect.topLeft() + QPointF(0.0, s_top),
shadow_rect.bottomLeft())
return [top_left, top, top_right, right, right_bottom,
bottom, bottom_left, left]
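# Editor's sketch (not part of the original module): minimal intended use of
# DropShadowFrame. The import below assumes a PyQt4-style binding; the
# module's own imports (outside this excerpt) may differ.
def _drop_shadow_demo():  # pragma: no cover
    from PyQt4.QtGui import QApplication, QPushButton
    app = QApplication([])
    window = QWidget()
    button = QPushButton("Target", window)
    button.move(40, 40)
    shadow = DropShadowFrame(color=QColor(Qt.darkGray), radius=10)
    shadow.setWidget(button)  # the frame now tracks the button's geometry
    window.resize(200, 140)
    window.show()
    app.exec_()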
# A different, now obsolete, implementation
class _DropShadowWidget(QWidget):
"""A frame widget drawing a drop shadow effect around its
contents.
"""
def __init__(self, parent=None, offset=None, radius=None,
color=None, **kwargs):
QWidget.__init__(self, parent, **kwargs)
# Bypass the overloaded method to set the default margins.
QWidget.setContentsMargins(self, 10, 10, 10, 10)
if offset is None:
offset = QPointF(0., 0.)
if radius is None:
radius = 20
if color is None:
color = QColor(Qt.black)
self.offset = offset
self.radius = radius
self.color = color
self._shadowPixmap = None
self._updateShadowPixmap()
def setOffset(self, offset):
"""Set the drop shadow offset (`QPoint`)
"""
self.offset = offset
self._updateShadowPixmap()
self.update()
def setRadius(self, radius):
"""Set the drop shadow blur radius (`float`).
"""
self.radius = radius
self._updateShadowPixmap()
self.update()
def setColor(self, color):
"""Set the drop shadow color (`QColor`).
"""
self.color = color
self._updateShadowPixmap()
self.update()
def setContentsMargins(self, *args, **kwargs):
QWidget.setContentsMargins(self, *args, **kwargs)
self._updateShadowPixmap()
def _updateShadowPixmap(self):
"""Update the cached drop shadow pixmap.
"""
# Rectangle casting the shadow
rect_size = QSize(*CACHED_SHADOW_RECT_SIZE)
left, top, right, bottom = self.getContentsMargins()
# Size of the pixmap.
pixmap_size = QSize(rect_size.width() + left + right,
rect_size.height() + top + bottom)
shadow_rect = QRect(QPoint(left, top), rect_size)
pixmap = QPixmap(pixmap_size)
pixmap.fill(QColor(0, 0, 0, 0))
rect_fill_color = self.palette().color(QPalette.Window)
pixmap = render_drop_shadow_frame(pixmap, QRectF(shadow_rect),
shadow_color=self.color,
offset=self.offset,
radius=self.radius,
rect_fill_color=rect_fill_color)
self._shadowPixmap = pixmap
def paintEvent(self, event):
pixmap = self._shadowPixmap
widget_rect = QRectF(QPointF(0.0, 0.0), QSizeF(self.size()))
frame_rect = QRectF(self.contentsRect())
left, top, right, bottom = self.getContentsMargins()
pixmap_rect = QRectF(QPointF(0, 0), QSizeF(pixmap.size()))
# Shadow casting rectangle.
pixmap_shadow_rect = pixmap_rect.adjusted(left, top, -right, -bottom)
source_rects = self._shadowPixmapFragments(pixmap_rect,
pixmap_shadow_rect)
target_rects = self._shadowPixmapFragments(widget_rect, frame_rect)
painter = QPainter(self)
for source, target in zip(source_rects, target_rects):
painter.drawPixmap(target, pixmap, source)
painter.end()
def _shadowPixmapFragments(self, pixmap_rect, shadow_rect):
"""Return a list of 8 QRectF fragments for drawing a shadow.
"""
s_left, s_top, s_right, s_bottom = \
shadow_rect.left(), shadow_rect.top(), \
shadow_rect.right(), shadow_rect.bottom()
s_width, s_height = shadow_rect.width(), shadow_rect.height()
p_width, p_height = pixmap_rect.width(), pixmap_rect.height()
top_left = QRectF(0.0, 0.0, s_left, s_top)
top = QRectF(s_left, 0.0, s_width, s_top)
        top_right = QRectF(s_right, 0.0, p_width - s_right, s_top)
right = QRectF(s_right, s_top, p_width - s_right, s_height)
right_bottom = QRectF(shadow_rect.bottomRight(),
pixmap_rect.bottomRight())
bottom = QRectF(shadow_rect.bottomLeft(),
pixmap_rect.bottomRight() - \
QPointF(p_width - s_right, 0.0))
bottom_left = QRectF(shadow_rect.bottomLeft() - QPointF(s_left, 0.0),
pixmap_rect.bottomLeft() + QPointF(s_left, 0.0))
left = QRectF(pixmap_rect.topLeft() + QPointF(0.0, s_top),
shadow_rect.bottomLeft())
return [top_left, top, top_right, right, right_bottom,
bottom, bottom_left, left]
|
bsd-2-clause
|
yk5/incubator-airflow
|
tests/operators/python_operator.py
|
6
|
9948
|
# -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import print_function, unicode_literals
import datetime
import unittest
from airflow import configuration, DAG
from airflow.models import TaskInstance as TI
from airflow.operators.python_operator import PythonOperator, BranchPythonOperator
from airflow.operators.python_operator import ShortCircuitOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.settings import Session
from airflow.utils import timezone
from airflow.utils.state import State
from airflow.exceptions import AirflowException
import logging
DEFAULT_DATE = timezone.datetime(2016, 1, 1)
END_DATE = timezone.datetime(2016, 1, 2)
INTERVAL = datetime.timedelta(hours=12)
FROZEN_NOW = timezone.datetime(2016, 1, 2, 12, 1, 1)
class PythonOperatorTest(unittest.TestCase):
def setUp(self):
super(PythonOperatorTest, self).setUp()
configuration.load_test_config()
self.dag = DAG(
'test_dag',
default_args={
'owner': 'airflow',
'start_date': DEFAULT_DATE},
schedule_interval=INTERVAL)
self.addCleanup(self.dag.clear)
self.clear_run()
self.addCleanup(self.clear_run)
def do_run(self):
self.run = True
def clear_run(self):
self.run = False
def is_run(self):
return self.run
def test_python_operator_run(self):
"""Tests that the python callable is invoked on task run."""
task = PythonOperator(
python_callable=self.do_run,
task_id='python_operator',
dag=self.dag)
self.assertFalse(self.is_run())
task.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
self.assertTrue(self.is_run())
def test_python_operator_python_callable_is_callable(self):
"""Tests that PythonOperator will only instantiate if
the python_callable argument is callable."""
not_callable = {}
with self.assertRaises(AirflowException):
PythonOperator(
python_callable=not_callable,
task_id='python_operator',
dag=self.dag)
not_callable = None
with self.assertRaises(AirflowException):
PythonOperator(
python_callable=not_callable,
task_id='python_operator',
dag=self.dag)
class BranchOperatorTest(unittest.TestCase):
def setUp(self):
self.dag = DAG('branch_operator_test',
default_args={
'owner': 'airflow',
'start_date': DEFAULT_DATE},
schedule_interval=INTERVAL)
self.branch_op = BranchPythonOperator(task_id='make_choice',
dag=self.dag,
python_callable=lambda: 'branch_1')
self.branch_1 = DummyOperator(task_id='branch_1', dag=self.dag)
self.branch_1.set_upstream(self.branch_op)
self.branch_2 = DummyOperator(task_id='branch_2', dag=self.dag)
self.branch_2.set_upstream(self.branch_op)
self.dag.clear()
def test_without_dag_run(self):
"""This checks the defensive against non existent tasks in a dag run"""
self.branch_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
session = Session()
tis = session.query(TI).filter(
TI.dag_id == self.dag.dag_id,
TI.execution_date == DEFAULT_DATE
)
session.close()
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'branch_1':
# should exist with state None
self.assertEquals(ti.state, State.NONE)
elif ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.SKIPPED)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
def test_with_dag_run(self):
dr = self.dag.create_dagrun(
run_id="manual__",
start_date=timezone.utcnow(),
execution_date=DEFAULT_DATE,
state=State.RUNNING
)
self.branch_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
tis = dr.get_task_instances()
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'branch_1':
self.assertEquals(ti.state, State.NONE)
elif ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.SKIPPED)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
class ShortCircuitOperatorTest(unittest.TestCase):
def test_without_dag_run(self):
"""This checks the defensive against non existent tasks in a dag run"""
value = False
dag = DAG('shortcircuit_operator_test_without_dag_run',
default_args={
'owner': 'airflow',
'start_date': DEFAULT_DATE
},
schedule_interval=INTERVAL)
short_op = ShortCircuitOperator(task_id='make_choice',
dag=dag,
python_callable=lambda: value)
branch_1 = DummyOperator(task_id='branch_1', dag=dag)
branch_1.set_upstream(short_op)
branch_2 = DummyOperator(task_id='branch_2', dag=dag)
branch_2.set_upstream(branch_1)
upstream = DummyOperator(task_id='upstream', dag=dag)
upstream.set_downstream(short_op)
dag.clear()
short_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
session = Session()
tis = session.query(TI).filter(
TI.dag_id == dag.dag_id,
TI.execution_date == DEFAULT_DATE
)
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'upstream':
# should not exist
                self.fail("task 'upstream' should not have a task instance")
elif ti.task_id == 'branch_1' or ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.SKIPPED)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
value = True
dag.clear()
short_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'upstream':
# should not exist
                self.fail("task 'upstream' should not have a task instance")
elif ti.task_id == 'branch_1' or ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.NONE)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
session.close()
def test_with_dag_run(self):
value = False
dag = DAG('shortcircuit_operator_test_with_dag_run',
default_args={
'owner': 'airflow',
'start_date': DEFAULT_DATE
},
schedule_interval=INTERVAL)
short_op = ShortCircuitOperator(task_id='make_choice',
dag=dag,
python_callable=lambda: value)
branch_1 = DummyOperator(task_id='branch_1', dag=dag)
branch_1.set_upstream(short_op)
branch_2 = DummyOperator(task_id='branch_2', dag=dag)
branch_2.set_upstream(branch_1)
upstream = DummyOperator(task_id='upstream', dag=dag)
upstream.set_downstream(short_op)
dag.clear()
logging.error("Tasks {}".format(dag.tasks))
dr = dag.create_dagrun(
run_id="manual__",
start_date=timezone.utcnow(),
execution_date=DEFAULT_DATE,
state=State.RUNNING
)
upstream.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
short_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
tis = dr.get_task_instances()
self.assertEqual(len(tis), 4)
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'upstream':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'branch_1' or ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.SKIPPED)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
value = True
dag.clear()
dr.verify_integrity()
upstream.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
short_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
tis = dr.get_task_instances()
self.assertEqual(len(tis), 4)
for ti in tis:
if ti.task_id == 'make_choice':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'upstream':
self.assertEquals(ti.state, State.SUCCESS)
elif ti.task_id == 'branch_1' or ti.task_id == 'branch_2':
self.assertEquals(ti.state, State.NONE)
            else:
                self.fail('unexpected task id: %s' % ti.task_id)
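# Editor's sketch (not part of the original test module): the behaviour the
# tests above exercise, written as a plain DAG definition. Task ids and the
# condition callable are illustrative only.
def _example_short_circuit_dag():
    dag = DAG('example_short_circuit',
              default_args={'owner': 'airflow', 'start_date': DEFAULT_DATE},
              schedule_interval=INTERVAL)
    check = ShortCircuitOperator(task_id='check_condition',
                                 python_callable=lambda: False,  # False skips downstream
                                 dag=dag)
    work = DummyOperator(task_id='work', dag=dag)
    check >> work  # 'work' is set to SKIPPED whenever the callable is falsy
    return dag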
|
apache-2.0
|
neumerance/deploy
|
openstack_dashboard/dashboards/admin/domains/tests.py
|
12
|
16094
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django.core.urlresolvers import reverse # noqa
from django import http
from mox import IgnoreArg # noqa
from mox import IsA # noqa
from horizon.workflows import views
from openstack_dashboard import api
from openstack_dashboard.test import helpers as test
from openstack_dashboard.dashboards.admin.domains import constants
from openstack_dashboard.dashboards.admin.domains import workflows
DOMAINS_INDEX_URL = reverse(constants.DOMAINS_INDEX_URL)
DOMAIN_CREATE_URL = reverse(constants.DOMAINS_CREATE_URL)
DOMAIN_UPDATE_URL = reverse(constants.DOMAINS_UPDATE_URL, args=[1])
GROUP_ROLE_PREFIX = constants.DOMAIN_GROUP_MEMBER_SLUG + "_role_"
class DomainsViewTests(test.BaseAdminViewTests):
@test.create_stubs({api.keystone: ('domain_list',)})
def test_index(self):
api.keystone.domain_list(IgnoreArg()).AndReturn(self.domains.list())
self.mox.ReplayAll()
res = self.client.get(DOMAINS_INDEX_URL)
self.assertTemplateUsed(res, constants.DOMAINS_INDEX_VIEW_TEMPLATE)
self.assertItemsEqual(res.context['table'].data, self.domains.list())
self.assertContains(res, 'Create Domain')
self.assertContains(res, 'Edit')
self.assertContains(res, 'Delete Domain')
@test.create_stubs({api.keystone: ('domain_list',
'keystone_can_edit_domain')})
def test_index_with_keystone_can_edit_domain_false(self):
api.keystone.domain_list(IgnoreArg()).AndReturn(self.domains.list())
api.keystone.keystone_can_edit_domain() \
.MultipleTimes().AndReturn(False)
self.mox.ReplayAll()
res = self.client.get(DOMAINS_INDEX_URL)
self.assertTemplateUsed(res, constants.DOMAINS_INDEX_VIEW_TEMPLATE)
self.assertItemsEqual(res.context['table'].data, self.domains.list())
self.assertNotContains(res, 'Create Domain')
self.assertNotContains(res, 'Edit')
self.assertNotContains(res, 'Delete Domain')
@test.create_stubs({api.keystone: ('domain_list',
'domain_delete')})
def test_delete_domain(self):
domain = self.domains.get(id="2")
api.keystone.domain_list(IgnoreArg()).AndReturn(self.domains.list())
api.keystone.domain_delete(IgnoreArg(), domain.id)
self.mox.ReplayAll()
formData = {'action': 'domains__delete__%s' % domain.id}
res = self.client.post(DOMAINS_INDEX_URL, formData)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
@test.create_stubs({api.keystone: ('domain_list', )})
def test_delete_with_enabled_domain(self):
domain = self.domains.get(id="1")
api.keystone.domain_list(IgnoreArg()).AndReturn(self.domains.list())
self.mox.ReplayAll()
formData = {'action': 'domains__delete__%s' % domain.id}
res = self.client.post(DOMAINS_INDEX_URL, formData)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
self.assertMessageCount(error=2)
@test.create_stubs({api.keystone: ('domain_get',
'domain_list', )})
def test_set_clear_domain_context(self):
domain = self.domains.get(id="1")
api.keystone.domain_get(IgnoreArg(), domain.id).AndReturn(domain)
api.keystone.domain_get(IgnoreArg(), domain.id).AndReturn(domain)
api.keystone.domain_list(IgnoreArg()).AndReturn(self.domains.list())
self.mox.ReplayAll()
formData = {'action': 'domains__set_domain_context__%s' % domain.id}
res = self.client.post(DOMAINS_INDEX_URL, formData)
self.assertTemplateUsed(res, constants.DOMAINS_INDEX_VIEW_TEMPLATE)
self.assertItemsEqual(res.context['table'].data, [domain, ])
self.assertContains(res, "<em>test_domain:</em>")
formData = {'action': 'domains__clear_domain_context__%s' % domain.id}
res = self.client.post(DOMAINS_INDEX_URL, formData)
self.assertTemplateUsed(res, constants.DOMAINS_INDEX_VIEW_TEMPLATE)
self.assertItemsEqual(res.context['table'].data, self.domains.list())
self.assertNotContains(res, "<em>test_domain:</em>")
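# Editor's note (comment only, not part of the original tests): the mox
# record/replay pattern used throughout this module, reduced to a skeleton.
# The decorator swaps the named API functions for mocks, calls made before
# ReplayAll() record expectations, and the test base class verifies them on
# tearDown:
#
#     @test.create_stubs({api.keystone: ('domain_get',)})
#     def test_sketch(self):
#         domain = self.domains.get(id="1")
#         api.keystone.domain_get(IsA(http.HttpRequest),
#                                 domain.id).AndReturn(domain)  # record
#         self.mox.ReplayAll()                                  # replay
#         self.client.get(DOMAIN_UPDATE_URL)                    # exercise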
class CreateDomainWorkflowTests(test.BaseAdminViewTests):
def _get_domain_info(self, domain):
domain_info = {"name": domain.name,
"description": domain.description,
"enabled": domain.enabled}
return domain_info
def _get_workflow_data(self, domain):
domain_info = self._get_domain_info(domain)
return domain_info
def test_add_domain_get(self):
url = reverse('horizon:admin:domains:create')
res = self.client.get(url)
self.assertTemplateUsed(res, views.WorkflowView.template_name)
workflow = res.context['workflow']
self.assertEqual(res.context['workflow'].name,
workflows.CreateDomain.name)
self.assertQuerysetEqual(workflow.steps,
['<CreateDomainInfo: create_domain>', ])
@test.create_stubs({api.keystone: ('domain_create', )})
def test_add_domain_post(self):
domain = self.domains.get(id="1")
api.keystone.domain_create(IsA(http.HttpRequest),
description=domain.description,
enabled=domain.enabled,
name=domain.name).AndReturn(domain)
self.mox.ReplayAll()
workflow_data = self._get_workflow_data(domain)
res = self.client.post(DOMAIN_CREATE_URL, workflow_data)
self.assertNoFormErrors(res)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
class UpdateDomainWorkflowTests(test.BaseAdminViewTests):
def _get_domain_info(self, domain):
domain_info = {"domain_id": domain.id,
"name": domain.name,
"description": domain.description,
"enabled": domain.enabled}
return domain_info
def _get_workflow_data(self, domain):
domain_info = self._get_domain_info(domain)
return domain_info
def _get_all_groups(self, domain_id):
if not domain_id:
groups = self.groups.list()
else:
groups = [group for group in self.groups.list()
if group.domain_id == domain_id]
return groups
def _get_domain_groups(self, domain_id):
# all domain groups have role assignments
return self._get_all_groups(domain_id)
@test.create_stubs({api.keystone: ('domain_get',
'get_default_role',
'role_list',
'group_list',
'roles_for_group')})
def test_update_domain_get(self):
default_role = self.roles.first()
domain = self.domains.get(id="1")
groups = self._get_all_groups(domain.id)
roles = self.roles.list()
api.keystone.domain_get(IsA(http.HttpRequest), '1').AndReturn(domain)
api.keystone.get_default_role(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(default_role)
api.keystone.role_list(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(roles)
api.keystone.group_list(IsA(http.HttpRequest), domain=domain.id) \
.AndReturn(groups)
for group in groups:
api.keystone.roles_for_group(IsA(http.HttpRequest),
group=group.id,
domain=domain.id) \
.AndReturn(roles)
self.mox.ReplayAll()
res = self.client.get(DOMAIN_UPDATE_URL)
self.assertTemplateUsed(res, views.WorkflowView.template_name)
workflow = res.context['workflow']
self.assertEqual(res.context['workflow'].name,
workflows.UpdateDomain.name)
step = workflow.get_step("update_domain")
self.assertEqual(step.action.initial['name'], domain.name)
self.assertEqual(step.action.initial['description'],
domain.description)
self.assertQuerysetEqual(workflow.steps,
['<UpdateDomainInfo: update_domain>',
'<UpdateDomainGroups: update_group_members>'])
@test.create_stubs({api.keystone: ('domain_get',
'domain_update',
'get_default_role',
'role_list',
'group_list',
'roles_for_group',
'remove_group_role',
'add_group_role',)})
def test_update_domain_post(self):
default_role = self.roles.first()
domain = self.domains.get(id="1")
test_description = 'updated description'
groups = self._get_all_groups(domain.id)
domain_groups = self._get_domain_groups(domain.id)
roles = self.roles.list()
api.keystone.domain_get(IsA(http.HttpRequest), '1').AndReturn(domain)
api.keystone.get_default_role(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(default_role)
api.keystone.role_list(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(roles)
api.keystone.group_list(IsA(http.HttpRequest), domain=domain.id) \
.AndReturn(groups)
for group in groups:
api.keystone.roles_for_group(IsA(http.HttpRequest),
group=group.id,
domain=domain.id) \
.AndReturn(roles)
workflow_data = self._get_workflow_data(domain)
# update some fields
workflow_data['description'] = test_description
# Group assignment form data
workflow_data[GROUP_ROLE_PREFIX + "1"] = ['3'] # admin role
workflow_data[GROUP_ROLE_PREFIX + "2"] = ['2'] # member role
# handle
api.keystone.domain_update(IsA(http.HttpRequest),
description=test_description,
domain_id=domain.id,
enabled=domain.enabled,
name=domain.name).AndReturn(None)
# Group assignments
api.keystone.group_list(IsA(http.HttpRequest),
domain=domain.id).AndReturn(domain_groups)
# admin group - try to remove all roles on current domain
api.keystone.roles_for_group(IsA(http.HttpRequest),
group='1',
domain=domain.id) \
.AndReturn(roles)
for role in roles:
api.keystone.remove_group_role(IsA(http.HttpRequest),
role=role.id,
group='1',
domain=domain.id)
# member group 1 - has role 1, will remove it
api.keystone.roles_for_group(IsA(http.HttpRequest),
group='2',
domain=domain.id) \
.AndReturn((roles[0],))
# remove role 1
api.keystone.remove_group_role(IsA(http.HttpRequest),
role='1',
group='2',
domain=domain.id)
# add role 2
api.keystone.add_group_role(IsA(http.HttpRequest),
role='2',
group='2',
domain=domain.id)
# member group 3 - has role 2
api.keystone.roles_for_group(IsA(http.HttpRequest),
group='3',
domain=domain.id) \
.AndReturn((roles[1],))
# remove role 2
api.keystone.remove_group_role(IsA(http.HttpRequest),
role='2',
group='3',
domain=domain.id)
# add role 1
api.keystone.add_group_role(IsA(http.HttpRequest),
role='1',
group='3',
domain=domain.id)
self.mox.ReplayAll()
res = self.client.post(DOMAIN_UPDATE_URL, workflow_data)
self.assertNoFormErrors(res)
self.assertMessageCount(success=1)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
@test.create_stubs({api.keystone: ('domain_get',)})
def test_update_domain_get_error(self):
domain = self.domains.get(id="1")
api.keystone.domain_get(IsA(http.HttpRequest), domain.id) \
.AndRaise(self.exceptions.keystone)
self.mox.ReplayAll()
res = self.client.get(DOMAIN_UPDATE_URL)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
@test.create_stubs({api.keystone: ('domain_get',
'domain_update',
'get_default_role',
'role_list',
'group_list',
'roles_for_group')})
def test_update_domain_post_error(self):
default_role = self.roles.first()
domain = self.domains.get(id="1")
test_description = 'updated description'
groups = self._get_all_groups(domain.id)
roles = self.roles.list()
api.keystone.domain_get(IsA(http.HttpRequest), '1').AndReturn(domain)
api.keystone.get_default_role(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(default_role)
api.keystone.role_list(IsA(http.HttpRequest)) \
.MultipleTimes().AndReturn(roles)
api.keystone.group_list(IsA(http.HttpRequest), domain=domain.id) \
.AndReturn(groups)
for group in groups:
api.keystone.roles_for_group(IsA(http.HttpRequest),
group=group.id,
domain=domain.id) \
.AndReturn(roles)
workflow_data = self._get_workflow_data(domain)
# update some fields
workflow_data['description'] = test_description
# Group assignment form data
workflow_data[GROUP_ROLE_PREFIX + "1"] = ['3'] # admin role
workflow_data[GROUP_ROLE_PREFIX + "2"] = ['2'] # member role
# handle
api.keystone.domain_update(IsA(http.HttpRequest),
description=test_description,
domain_id=domain.id,
enabled=domain.enabled,
name=domain.name) \
.AndRaise(self.exceptions.keystone)
self.mox.ReplayAll()
res = self.client.post(DOMAIN_UPDATE_URL, workflow_data)
self.assertNoFormErrors(res)
self.assertMessageCount(error=1)
self.assertRedirectsNoFollow(res, DOMAINS_INDEX_URL)
|
apache-2.0
|
avances123/huertos-api
|
farms/tests.py
|
1
|
5079
|
from rest_framework import status
from rest_framework.test import APITestCase
from django.urls import reverse
from faker import Factory
from django.contrib.auth.models import User
from farms.models import Farm,Zone
from farms.serializers import FarmSerializer
from collections import OrderedDict
###### Module configs and inits
fake = Factory.create('es_ES')
######
class FarmApiTest(APITestCase):
"""
"""
    def setUp(self):
        self.user = User.objects.create_user(username=fake.user_name(),
                                             password=fake.password(),
                                             email=fake.email())
        self.client.force_authenticate(user=self.user)
        self.farm = Farm.objects.create(owner=self.user, name=fake.md5())
        self.zones = [Zone.objects.create(farm=self.farm,
                                          cols=fake.random_int(1, 20),
                                          rows=fake.random_int(1, 20),
                                          x=fake.random_int(1, 50),
                                          y=fake.random_int(1, 50))
                      for i in range(10)]
#####################################################
# CRUD FARM
#####################################################
def test_get_farm(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
response = self.client.get(url, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_post_farm(self):
url = reverse('farm-list')
data = {'owner':self.user.id,'name':fake.md5(),'zone_set':[]}
response = self.client.post(url, data,format='json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_delete_farm(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
response = self.client.delete(url,format='json')
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_put_farm_same_zones(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
serializer = FarmSerializer(self.farm)
data = {'owner':self.user.id,'name':fake.md5(),'zone_set': serializer.data['zone_set']}
response = self.client.put(url, data,format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_put_farm_add_zone(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
serializer = FarmSerializer(self.farm)
serializer.data['zone_set'].append(OrderedDict([('especies', None), ('cols', 8), ('rows', 17), ('x', 42), ('y', 5), ('farm', self.farm.id)]))
data = {'owner':self.user.id,'name':fake.md5(),'zone_set': serializer.data['zone_set']}
response = self.client.put(url, data,format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
farm = Farm.objects.get(id=self.farm.id)
self.assertEqual(farm.zone_set.count() , 10 + 1)
def test_put_farm_modify_zone(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
serializer = FarmSerializer(self.farm)
zone = serializer.data['zone_set'][0]
serializer.data['zone_set'][0] = OrderedDict([('id',zone['id']),('especies', None), ('cols', 2), ('rows', 2), ('x', 2), ('y', 2), ('farm', self.farm.id)])
data = {'owner':self.user.id,'name':fake.md5(),'zone_set': serializer.data['zone_set']}
response = self.client.put(url, data,format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
farm = Farm.objects.get(id=self.farm.id)
self.assertEqual(farm.zone_set.count() , 10)
def test_put_farm_delete_zone(self):
url = reverse('farm-detail',kwargs={'pk': self.farm.id})
serializer = FarmSerializer(self.farm)
data= serializer.data['zone_set'][1:]
data = {'owner':self.user.id,'name':fake.md5(),'zone_set': data}
response = self.client.put(url, data,format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
farm = Farm.objects.get(id=self.farm.id)
self.assertEqual(farm.zone_set.count() , 10 - 1)
#####################################################
# FARM ACTIONS
#####################################################
def test_regar_farm(self):
url = reverse('farm-regar',kwargs={'pk': self.farm.id})
response = self.client.post(url, {},format='json')
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertQuerysetEqual(self.farm.action_object_actions.all(),['regar','created'],transform=lambda x:x.verb)
def test_filter_my_huertos(self):
other_user = User.objects.create_user(username=fake.user_name(),password=fake.password(),email=fake.email())
Farm.objects.create(owner=other_user,name=fake.md5())
from django.http import QueryDict
qdict = QueryDict('',mutable=True)
qdict.update({'owner__username':self.user.username})
url = reverse('farm-list') + '?' + qdict.urlencode()
response = self.client.get(url,format='json')
self.assertEqual(len(response.data),Farm.objects.filter(owner=self.user).count())
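# Editor's note (comment only, not part of the original tests): the nested
# payload shape the PUT tests above send. Assuming the serializer implements
# the nested-writable contract these tests imply, zones without an 'id' are
# created, zones with an 'id' are updated, and zones omitted from 'zone_set'
# are deleted:
#
#     {
#         'owner': 1,
#         'name': 'my-farm',
#         'zone_set': [
#             {'id': 3, 'cols': 2, 'rows': 2, 'x': 2, 'y': 2, 'farm': 1},
#             {'cols': 8, 'rows': 17, 'x': 42, 'y': 5, 'farm': 1},
#         ],
#     }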
|
gpl-2.0
|
dstroppa/openstack-smartos-nova-grizzly
|
nova/tests/api/openstack/compute/contrib/test_flavorextradata.py
|
7
|
3554
|
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import webob
from nova.compute import instance_types
from nova.openstack.common import jsonutils
from nova import test
from nova.tests.api.openstack import fakes
def fake_get_instance_type_by_flavor_id(flavorid):
return {
'id': flavorid,
'flavorid': str(flavorid),
'root_gb': 1,
'ephemeral_gb': 1,
'name': u'test',
'deleted': False,
'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1),
'updated_at': None,
'memory_mb': 512,
'vcpus': 1,
'extra_specs': {},
'deleted_at': None,
'vcpu_weight': None,
}
def fake_get_all_types(inactive=0, filters=None):
return {
'fake1': fake_get_instance_type_by_flavor_id(1),
'fake2': fake_get_instance_type_by_flavor_id(2)
}
class FlavorextradataTest(test.TestCase):
def setUp(self):
super(FlavorextradataTest, self).setUp()
ext = ('nova.api.openstack.compute.contrib'
'.flavorextradata.Flavorextradata')
self.flags(osapi_compute_extension=[ext])
self.stubs.Set(instance_types, 'get_instance_type_by_flavor_id',
fake_get_instance_type_by_flavor_id)
self.stubs.Set(instance_types, 'get_all_types', fake_get_all_types)
def _verify_flavor_response(self, flavor, expected):
for key in expected:
self.assertEquals(flavor[key], expected[key])
def test_show(self):
expected = {
'flavor': {
'id': '1',
'name': 'test',
'ram': 512,
'vcpus': 1,
'disk': 1,
'OS-FLV-EXT-DATA:ephemeral': 1,
}
}
url = '/v2/fake/flavors/1'
req = webob.Request.blank(url)
req.headers['Content-Type'] = 'application/json'
res = req.get_response(fakes.wsgi_app(init_only=('flavors',)))
body = jsonutils.loads(res.body)
self._verify_flavor_response(body['flavor'], expected['flavor'])
def test_detail(self):
expected = [
{
'id': '1',
'name': 'test',
'ram': 512,
'vcpus': 1,
'disk': 1,
'OS-FLV-EXT-DATA:ephemeral': 1,
},
{
'id': '2',
'name': 'test',
'ram': 512,
'vcpus': 1,
'disk': 1,
'OS-FLV-EXT-DATA:ephemeral': 1,
},
]
url = '/v2/fake/flavors/detail'
req = webob.Request.blank(url)
req.headers['Content-Type'] = 'application/json'
res = req.get_response(fakes.wsgi_app(init_only=('flavors',)))
body = jsonutils.loads(res.body)
for i, flavor in enumerate(body['flavors']):
self._verify_flavor_response(flavor, expected[i])
|
apache-2.0
|
vmthunder/nova
|
nova/virt/libvirt/vif.py
|
9
|
27229
|
# Copyright (C) 2011 Midokura KK
# Copyright (C) 2011 Nicira, Inc
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""VIF drivers for libvirt."""
import copy
from oslo.config import cfg
from nova import exception
from nova.i18n import _
from nova.i18n import _LE
from nova.network import linux_net
from nova.network import model as network_model
from nova.openstack.common import log as logging
from nova.openstack.common import processutils
from nova import utils
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import designer
LOG = logging.getLogger(__name__)
libvirt_vif_opts = [
cfg.BoolOpt('use_virtio_for_bridges',
default=True,
help='Use virtio for bridge interfaces with KVM/QEMU'),
]
CONF = cfg.CONF
CONF.register_opts(libvirt_vif_opts, 'libvirt')
CONF.import_opt('use_ipv6', 'nova.netconf')
DEV_PREFIX_ETH = 'eth'
def is_vif_model_valid_for_virt(virt_type, vif_model):
valid_models = {
'qemu': [network_model.VIF_MODEL_VIRTIO,
network_model.VIF_MODEL_NE2K_PCI,
network_model.VIF_MODEL_PCNET,
network_model.VIF_MODEL_RTL8139,
network_model.VIF_MODEL_E1000,
network_model.VIF_MODEL_SPAPR_VLAN],
'kvm': [network_model.VIF_MODEL_VIRTIO,
network_model.VIF_MODEL_NE2K_PCI,
network_model.VIF_MODEL_PCNET,
network_model.VIF_MODEL_RTL8139,
network_model.VIF_MODEL_E1000,
network_model.VIF_MODEL_SPAPR_VLAN],
'xen': [network_model.VIF_MODEL_NETFRONT,
network_model.VIF_MODEL_NE2K_PCI,
network_model.VIF_MODEL_PCNET,
network_model.VIF_MODEL_RTL8139,
network_model.VIF_MODEL_E1000],
'lxc': [],
'uml': [],
}
if vif_model is None:
return True
if virt_type not in valid_models:
raise exception.UnsupportedVirtType(virt=virt_type)
return vif_model in valid_models[virt_type]
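# Editor's sketch (comment only, not part of the original module): how the
# check above behaves, doctest-style.
#
#     >>> is_vif_model_valid_for_virt('kvm', network_model.VIF_MODEL_VIRTIO)
#     True
#     >>> is_vif_model_valid_for_virt('lxc', network_model.VIF_MODEL_E1000)
#     False
#     >>> is_vif_model_valid_for_virt('kvm', None)  # None: let libvirt choose
#     True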
class LibvirtGenericVIFDriver(object):
"""Generic VIF driver for libvirt networking."""
def __init__(self, get_connection):
self.get_connection = get_connection
def _normalize_vif_type(self, vif_type):
return vif_type.replace('2.1q', '2q')
def get_vif_devname(self, vif):
if 'devname' in vif:
return vif['devname']
return ("nic" + vif['id'])[:network_model.NIC_NAME_LEN]
def get_vif_devname_with_prefix(self, vif, prefix):
devname = self.get_vif_devname(vif)
return prefix + devname[3:]
def get_base_config(self, instance, vif, image_meta,
inst_type, virt_type):
conf = vconfig.LibvirtConfigGuestInterface()
# Default to letting libvirt / the hypervisor choose the model
model = None
driver = None
# If the user has specified a 'vif_model' against the
# image then honour that model
if image_meta:
vif_model = image_meta.get('properties',
{}).get('hw_vif_model')
if vif_model is not None:
model = vif_model
# Else if the virt type is KVM/QEMU, use virtio according
# to the global config parameter
if (model is None and
virt_type in ('kvm', 'qemu') and
CONF.libvirt.use_virtio_for_bridges):
model = network_model.VIF_MODEL_VIRTIO
# Workaround libvirt bug, where it mistakenly
# enables vhost mode, even for non-KVM guests
if (model == network_model.VIF_MODEL_VIRTIO and
virt_type == "qemu"):
driver = "qemu"
if not is_vif_model_valid_for_virt(virt_type,
model):
raise exception.UnsupportedHardware(model=model,
virt=virt_type)
designer.set_vif_guest_frontend_config(
conf, vif['address'], model, driver)
return conf
def get_bridge_name(self, vif):
return vif['network']['bridge']
def get_ovs_interfaceid(self, vif):
return vif.get('ovs_interfaceid') or vif['id']
def get_br_name(self, iface_id):
return ("qbr" + iface_id)[:network_model.NIC_NAME_LEN]
def get_veth_pair_names(self, iface_id):
return (("qvb%s" % iface_id)[:network_model.NIC_NAME_LEN],
("qvo%s" % iface_id)[:network_model.NIC_NAME_LEN])
def get_firewall_required(self, vif):
if vif.is_neutron_filtering_enabled():
return False
if CONF.firewall_driver != "nova.virt.firewall.NoopFirewallDriver":
return True
return False
def get_config_bridge(self, instance, vif, image_meta,
inst_type, virt_type):
"""Get VIF configurations for bridge type."""
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
designer.set_vif_host_backend_bridge_config(
conf, self.get_bridge_name(vif),
self.get_vif_devname(vif))
mac_id = vif['address'].replace(':', '')
name = "nova-instance-" + instance['name'] + "-" + mac_id
if self.get_firewall_required(vif):
conf.filtername = name
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_ovs_bridge(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
designer.set_vif_host_backend_ovs_config(
conf, self.get_bridge_name(vif),
self.get_ovs_interfaceid(vif),
self.get_vif_devname(vif))
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_ovs_hybrid(self, instance, vif, image_meta,
inst_type, virt_type):
newvif = copy.deepcopy(vif)
newvif['network']['bridge'] = self.get_br_name(vif['id'])
return self.get_config_bridge(instance, newvif, image_meta,
inst_type, virt_type)
def get_config_ovs(self, instance, vif, image_meta,
inst_type, virt_type):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
return self.get_config_ovs_hybrid(instance, vif,
image_meta,
inst_type,
virt_type)
else:
return self.get_config_ovs_bridge(instance, vif,
image_meta,
inst_type,
virt_type)
def get_config_ivs_hybrid(self, instance, vif, image_meta,
inst_type, virt_type):
newvif = copy.deepcopy(vif)
newvif['network']['bridge'] = self.get_br_name(vif['id'])
return self.get_config_bridge(instance,
newvif,
image_meta,
inst_type,
virt_type)
def get_config_ivs_ethernet(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance,
vif,
image_meta,
inst_type,
virt_type)
dev = self.get_vif_devname(vif)
designer.set_vif_host_backend_ethernet_config(conf, dev)
return conf
def get_config_ivs(self, instance, vif, image_meta,
inst_type, virt_type):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
return self.get_config_ivs_hybrid(instance, vif,
image_meta,
inst_type,
virt_type)
else:
return self.get_config_ivs_ethernet(instance, vif,
image_meta,
inst_type,
virt_type)
def get_config_802qbg(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
params = vif["qbg_params"]
designer.set_vif_host_backend_802qbg_config(
conf, vif['network'].get_meta('interface'),
params['managerid'],
params['typeid'],
params['typeidversion'],
params['instanceid'])
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_802qbh(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
profile = vif["profile"]
vif_details = vif["details"]
net_type = 'direct'
if vif['vnic_type'] == network_model.VNIC_TYPE_DIRECT:
net_type = 'hostdev'
designer.set_vif_host_backend_802qbh_config(
conf, net_type, profile['pci_slot'],
vif_details[network_model.VIF_DETAILS_PROFILEID])
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_hw_veb(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
profile = vif["profile"]
vif_details = vif["details"]
net_type = 'direct'
if vif['vnic_type'] == network_model.VNIC_TYPE_DIRECT:
net_type = 'hostdev'
designer.set_vif_host_backend_hw_veb(
conf, net_type, profile['pci_slot'],
vif_details[network_model.VIF_DETAILS_VLAN])
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_iovisor(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
dev = self.get_vif_devname(vif)
designer.set_vif_host_backend_ethernet_config(conf, dev)
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config_midonet(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
dev = self.get_vif_devname(vif)
designer.set_vif_host_backend_ethernet_config(conf, dev)
return conf
def get_config_mlnx_direct(self, instance, vif, image_meta,
inst_type, virt_type):
conf = self.get_base_config(instance, vif, image_meta,
inst_type, virt_type)
devname = self.get_vif_devname_with_prefix(vif, DEV_PREFIX_ETH)
designer.set_vif_host_backend_direct_config(conf, devname)
designer.set_vif_bandwidth_config(conf, inst_type)
return conf
def get_config(self, instance, vif, image_meta,
inst_type, virt_type):
vif_type = vif['type']
        LOG.debug('vif_type=%(vif_type)s instance=%(instance)s '
                  'vif=%(vif)s virt_type=%(virt_type)s',
                  {'vif_type': vif_type, 'instance': instance,
                   'vif': vif, 'virt_type': virt_type})
if vif_type is None:
raise exception.NovaException(
_("vif_type parameter must be present "
"for this vif_driver implementation"))
vif_slug = self._normalize_vif_type(vif_type)
func = getattr(self, 'get_config_%s' % vif_slug, None)
if not func:
raise exception.NovaException(
_("Unexpected vif_type=%s") % vif_type)
return func(instance, vif, image_meta,
inst_type, virt_type)
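    # Editor's note (comment only, not in the original module): get_config
    # dispatches on the vif type string, e.g. vif['type'] == '802.1qbg' is
    # normalized to '802qbg' and resolved to self.get_config_802qbg; plug()
    # and unplug() below use the same getattr pattern.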
def plug_bridge(self, instance, vif):
"""Ensure that the bridge exists, and add VIF to it."""
network = vif['network']
if (not network.get_meta('multi_host', False) and
network.get_meta('should_create_bridge', False)):
if network.get_meta('should_create_vlan', False):
iface = CONF.vlan_interface or \
network.get_meta('bridge_interface')
LOG.debug('Ensuring vlan %(vlan)s and bridge %(bridge)s',
{'vlan': network.get_meta('vlan'),
'bridge': self.get_bridge_name(vif)},
instance=instance)
linux_net.LinuxBridgeInterfaceDriver.ensure_vlan_bridge(
network.get_meta('vlan'),
self.get_bridge_name(vif),
iface)
else:
iface = CONF.flat_interface or \
network.get_meta('bridge_interface')
LOG.debug("Ensuring bridge %s",
self.get_bridge_name(vif), instance=instance)
linux_net.LinuxBridgeInterfaceDriver.ensure_bridge(
self.get_bridge_name(vif),
iface)
def plug_ovs_bridge(self, instance, vif):
"""No manual plugging required."""
pass
def plug_ovs_hybrid(self, instance, vif):
"""Plug using hybrid strategy
Create a per-VIF linux bridge, then link that bridge to the OVS
integration bridge via a veth device, setting up the other end
of the veth device just like a normal OVS port. Then boot the
VIF on the linux bridge using standard libvirt mechanisms.
"""
iface_id = self.get_ovs_interfaceid(vif)
br_name = self.get_br_name(vif['id'])
v1_name, v2_name = self.get_veth_pair_names(vif['id'])
if not linux_net.device_exists(br_name):
utils.execute('brctl', 'addbr', br_name, run_as_root=True)
utils.execute('brctl', 'setfd', br_name, 0, run_as_root=True)
utils.execute('brctl', 'stp', br_name, 'off', run_as_root=True)
utils.execute('tee',
('/sys/class/net/%s/bridge/multicast_snooping' %
br_name),
process_input='0',
run_as_root=True,
check_exit_code=[0, 1])
if not linux_net.device_exists(v2_name):
linux_net._create_veth_pair(v1_name, v2_name)
utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
linux_net.create_ovs_vif_port(self.get_bridge_name(vif),
v2_name, iface_id, vif['address'],
instance['uuid'])
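    # Editor's note (comment only, not in the original module): the hybrid
    # wiring that plug_ovs_hybrid builds, schematically (qbrXXX is the
    # per-VIF linux bridge, qvbXXX/qvoXXX the veth pair):
    #
    #   guest tap -- qbrXXX -- qvbXXX ==== qvoXXX -- OVS integration bridge
    #
    # The extra linux bridge exists so iptables-based security group rules
    # can attach to it, which a port plugged straight into OVS cannot offer.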
def plug_ovs(self, instance, vif):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
self.plug_ovs_hybrid(instance, vif)
else:
self.plug_ovs_bridge(instance, vif)
def plug_ivs_ethernet(self, instance, vif):
iface_id = self.get_ovs_interfaceid(vif)
dev = self.get_vif_devname(vif)
linux_net.create_tap_dev(dev)
linux_net.create_ivs_vif_port(dev, iface_id, vif['address'],
instance['uuid'])
def plug_ivs_hybrid(self, instance, vif):
"""Plug using hybrid strategy (same as OVS)
Create a per-VIF linux bridge, then link that bridge to the OVS
integration bridge via a veth device, setting up the other end
of the veth device just like a normal IVS port. Then boot the
VIF on the linux bridge using standard libvirt mechanisms.
"""
iface_id = self.get_ovs_interfaceid(vif)
br_name = self.get_br_name(vif['id'])
v1_name, v2_name = self.get_veth_pair_names(vif['id'])
if not linux_net.device_exists(br_name):
utils.execute('brctl', 'addbr', br_name, run_as_root=True)
utils.execute('brctl', 'setfd', br_name, 0, run_as_root=True)
utils.execute('brctl', 'stp', br_name, 'off', run_as_root=True)
utils.execute('tee',
('/sys/class/net/%s/bridge/multicast_snooping' %
br_name),
process_input='0',
run_as_root=True,
check_exit_code=[0, 1])
if not linux_net.device_exists(v2_name):
linux_net._create_veth_pair(v1_name, v2_name)
utils.execute('ip', 'link', 'set', br_name, 'up', run_as_root=True)
utils.execute('brctl', 'addif', br_name, v1_name, run_as_root=True)
linux_net.create_ivs_vif_port(v2_name, iface_id, vif['address'],
instance['uuid'])
def plug_ivs(self, instance, vif):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
self.plug_ivs_hybrid(instance, vif)
else:
self.plug_ivs_ethernet(instance, vif)
def plug_mlnx_direct(self, instance, vif):
vnic_mac = vif['address']
device_id = instance['uuid']
fabric = vif.get_physical_network()
if not fabric:
raise exception.NetworkMissingPhysicalNetwork(
network_uuid=vif['network']['id'])
dev_name = self.get_vif_devname_with_prefix(vif, DEV_PREFIX_ETH)
try:
utils.execute('ebrctl', 'add-port', vnic_mac, device_id, fabric,
network_model.VIF_TYPE_MLNX_DIRECT, dev_name,
run_as_root=True)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while plugging vif"), instance=instance)
def plug_802qbg(self, instance, vif):
pass
def plug_802qbh(self, instance, vif):
pass
def plug_hw_veb(self, instance, vif):
pass
def plug_midonet(self, instance, vif):
"""Plug into MidoNet's network port
Bind the vif to a MidoNet virtual port.
"""
dev = self.get_vif_devname(vif)
port_id = vif['id']
try:
linux_net.create_tap_dev(dev)
utils.execute('mm-ctl', '--bind-port', port_id, dev,
run_as_root=True)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while plugging vif"), instance=instance)
def plug_iovisor(self, instance, vif):
"""Plug using PLUMgrid IO Visor Driver
Connect a network device to their respective
Virtual Domain in PLUMgrid Platform.
"""
dev = self.get_vif_devname(vif)
iface_id = vif['id']
linux_net.create_tap_dev(dev)
net_id = vif['network']['id']
tenant_id = instance["project_id"]
try:
utils.execute('ifc_ctl', 'gateway', 'add_port', dev,
run_as_root=True)
utils.execute('ifc_ctl', 'gateway', 'ifup', dev,
'access_vm',
vif['network']['label'] + "_" + iface_id,
vif['address'], 'pgtag2=%s' % net_id,
'pgtag1=%s' % tenant_id, run_as_root=True)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while plugging vif"), instance=instance)
def plug(self, instance, vif):
vif_type = vif['type']
LOG.debug('vif_type=%(vif_type)s instance=%(instance)s '
'vif=%(vif)s',
{'vif_type': vif_type, 'instance': instance,
'vif': vif})
if vif_type is None:
raise exception.NovaException(
_("vif_type parameter must be present "
"for this vif_driver implementation"))
vif_slug = self._normalize_vif_type(vif_type)
func = getattr(self, 'plug_%s' % vif_slug, None)
if not func:
raise exception.NovaException(
_("Unexpected vif_type=%s") % vif_type)
func(instance, vif)
def unplug_bridge(self, instance, vif):
"""No manual unplugging required."""
pass
def unplug_ovs_bridge(self, instance, vif):
"""No manual unplugging required."""
pass
def unplug_ovs_hybrid(self, instance, vif):
"""UnPlug using hybrid strategy
Unhook port from OVS, unhook port from bridge, delete
bridge, and delete both veth devices.
"""
try:
br_name = self.get_br_name(vif['id'])
v1_name, v2_name = self.get_veth_pair_names(vif['id'])
if linux_net.device_exists(br_name):
utils.execute('brctl', 'delif', br_name, v1_name,
run_as_root=True)
utils.execute('ip', 'link', 'set', br_name, 'down',
run_as_root=True)
utils.execute('brctl', 'delbr', br_name,
run_as_root=True)
linux_net.delete_ovs_vif_port(self.get_bridge_name(vif),
v2_name)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug_ovs(self, instance, vif):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
self.unplug_ovs_hybrid(instance, vif)
else:
self.unplug_ovs_bridge(instance, vif)
def unplug_ivs_ethernet(self, instance, vif):
"""Unplug the VIF by deleting the port from the bridge."""
try:
linux_net.delete_ivs_vif_port(self.get_vif_devname(vif))
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug_ivs_hybrid(self, instance, vif):
"""UnPlug using hybrid strategy (same as OVS)
Unhook port from IVS, unhook port from bridge, delete
bridge, and delete both veth devices.
"""
try:
br_name = self.get_br_name(vif['id'])
v1_name, v2_name = self.get_veth_pair_names(vif['id'])
utils.execute('brctl', 'delif', br_name, v1_name, run_as_root=True)
utils.execute('ip', 'link', 'set', br_name, 'down',
run_as_root=True)
utils.execute('brctl', 'delbr', br_name, run_as_root=True)
linux_net.delete_ivs_vif_port(v2_name)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug_ivs(self, instance, vif):
if self.get_firewall_required(vif) or vif.is_hybrid_plug_enabled():
self.unplug_ivs_hybrid(instance, vif)
else:
self.unplug_ivs_ethernet(instance, vif)
def unplug_mlnx_direct(self, instance, vif):
vnic_mac = vif['address']
fabric = vif.get_physical_network()
if not fabric:
raise exception.NetworkMissingPhysicalNetwork(
network_uuid=vif['network']['id'])
try:
utils.execute('ebrctl', 'del-port', fabric,
vnic_mac, run_as_root=True)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug_802qbg(self, instance, vif):
pass
def unplug_802qbh(self, instance, vif):
pass
def unplug_hw_veb(self, instance, vif):
pass
def unplug_midonet(self, instance, vif):
"""Unplug from MidoNet network port
Unbind the vif from a MidoNet virtual port.
"""
dev = self.get_vif_devname(vif)
port_id = vif['id']
try:
utils.execute('mm-ctl', '--unbind-port', port_id,
run_as_root=True)
linux_net.delete_net_dev(dev)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug_iovisor(self, instance, vif):
"""Unplug using PLUMgrid IO Visor Driver
Delete network device and to their respective
connection to the Virtual Domain in PLUMgrid Platform.
"""
iface_id = vif['id']
dev = self.get_vif_devname(vif)
try:
utils.execute('ifc_ctl', 'gateway', 'ifdown',
dev, 'access_vm',
vif['network']['label'] + "_" + iface_id,
vif['address'], run_as_root=True)
utils.execute('ifc_ctl', 'gateway', 'del_port', dev,
run_as_root=True)
linux_net.delete_net_dev(dev)
except processutils.ProcessExecutionError:
LOG.exception(_LE("Failed while unplugging vif"),
instance=instance)
def unplug(self, instance, vif):
vif_type = vif['type']
LOG.debug('vif_type=%(vif_type)s instance=%(instance)s '
'vif=%(vif)s',
{'vif_type': vif_type, 'instance': instance,
'vif': vif})
if vif_type is None:
raise exception.NovaException(
_("vif_type parameter must be present "
"for this vif_driver implementation"))
vif_slug = self._normalize_vif_type(vif_type)
func = getattr(self, 'unplug_%s' % vif_slug, None)
if not func:
raise exception.NovaException(
_("Unexpected vif_type=%s") % vif_type)
func(instance, vif)
|
apache-2.0
|
victorbriz/rethinkdb
|
test/rql_test/connections/http_support/flask/globals.py
|
783
|
1137
|
# -*- coding: utf-8 -*-
"""
flask.globals
~~~~~~~~~~~~~
Defines all the global objects that are proxies to the current
active context.
:copyright: (c) 2011 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
from functools import partial
from werkzeug.local import LocalStack, LocalProxy
def _lookup_req_object(name):
top = _request_ctx_stack.top
if top is None:
raise RuntimeError('working outside of request context')
return getattr(top, name)
def _lookup_app_object(name):
top = _app_ctx_stack.top
if top is None:
raise RuntimeError('working outside of application context')
return getattr(top, name)
def _find_app():
top = _app_ctx_stack.top
if top is None:
raise RuntimeError('working outside of application context')
return top.app
# context locals
_request_ctx_stack = LocalStack()
_app_ctx_stack = LocalStack()
current_app = LocalProxy(_find_app)
request = LocalProxy(partial(_lookup_req_object, 'request'))
session = LocalProxy(partial(_lookup_req_object, 'session'))
g = LocalProxy(partial(_lookup_app_object, 'g'))
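# Editor's sketch (comment only, not part of the original module; importing
# flask here would be circular): how the proxies above resolve.
#
#     from flask import Flask
#     app = Flask(__name__)
#     with app.test_request_context('/hello?name=world'):
#         assert current_app.name == app.name      # -> _app_ctx_stack.top.app
#         assert request.args['name'] == 'world'   # -> top.request attribute
#
# Outside any context, touching `request`, `session` or `g` raises the
# RuntimeError constructed in the lookup helpers above.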
|
agpl-3.0
|
Kazade/NeHe-Website
|
google_appengine/lib/django-1.4/tests/modeltests/delete/models.py
|
44
|
3200
|
from django.db import models
class R(models.Model):
is_default = models.BooleanField(default=False)
def __str__(self):
return "%s" % self.pk
get_default_r = lambda: R.objects.get_or_create(is_default=True)[0]
class S(models.Model):
r = models.ForeignKey(R)
class T(models.Model):
s = models.ForeignKey(S)
class U(models.Model):
t = models.ForeignKey(T)
class RChild(R):
pass
class A(models.Model):
name = models.CharField(max_length=30)
auto = models.ForeignKey(R, related_name="auto_set")
auto_nullable = models.ForeignKey(R, null=True,
related_name='auto_nullable_set')
setvalue = models.ForeignKey(R, on_delete=models.SET(get_default_r),
related_name='setvalue')
setnull = models.ForeignKey(R, on_delete=models.SET_NULL, null=True,
related_name='setnull_set')
setdefault = models.ForeignKey(R, on_delete=models.SET_DEFAULT,
default=get_default_r, related_name='setdefault_set')
setdefault_none = models.ForeignKey(R, on_delete=models.SET_DEFAULT,
default=None, null=True, related_name='setnull_nullable_set')
cascade = models.ForeignKey(R, on_delete=models.CASCADE,
related_name='cascade_set')
cascade_nullable = models.ForeignKey(R, on_delete=models.CASCADE, null=True,
related_name='cascade_nullable_set')
protect = models.ForeignKey(R, on_delete=models.PROTECT, null=True)
donothing = models.ForeignKey(R, on_delete=models.DO_NOTHING, null=True,
related_name='donothing_set')
child = models.ForeignKey(RChild, related_name="child")
child_setnull = models.ForeignKey(RChild, on_delete=models.SET_NULL, null=True,
related_name="child_setnull")
    # A OneToOneField is just a ForeignKey with unique=True, so we don't
    # duplicate all the tests; just one smoke test to ensure on_delete works
    # for it as well.
o2o_setnull = models.ForeignKey(R, null=True,
on_delete=models.SET_NULL, related_name="o2o_nullable_set")
def create_a(name):
a = A(name=name)
for name in ('auto', 'auto_nullable', 'setvalue', 'setnull', 'setdefault',
'setdefault_none', 'cascade', 'cascade_nullable', 'protect',
'donothing', 'o2o_setnull'):
r = R.objects.create()
setattr(a, name, r)
a.child = RChild.objects.create()
a.child_setnull = RChild.objects.create()
a.save()
return a
class M(models.Model):
m2m = models.ManyToManyField(R, related_name="m_set")
m2m_through = models.ManyToManyField(R, through="MR",
related_name="m_through_set")
m2m_through_null = models.ManyToManyField(R, through="MRNull",
related_name="m_through_null_set")
class MR(models.Model):
m = models.ForeignKey(M)
r = models.ForeignKey(R)
class MRNull(models.Model):
m = models.ForeignKey(M)
r = models.ForeignKey(R, null=True, on_delete=models.SET_NULL)
class Avatar(models.Model):
pass
class User(models.Model):
avatar = models.ForeignKey(Avatar, null=True)
class HiddenUser(models.Model):
r = models.ForeignKey(R, related_name="+")
class HiddenUserProfile(models.Model):
user = models.ForeignKey(HiddenUser)
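# Editor's sketch (comment only, not part of the original module): what the
# on_delete policies declared on A imply when a referenced R row is deleted,
# each line considered independently (standard Django semantics):
#
#     a = create_a('example')
#     a.setnull.delete()     # a.setnull becomes NULL            (SET_NULL)
#     a.setdefault.delete()  # a.setdefault -> get_default_r()   (SET_DEFAULT)
#     a.protect.delete()     # raises models.ProtectedError      (PROTECT)
#     a.cascade.delete()     # deletes the row `a` itself        (CASCADE)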
|
bsd-3-clause
|
sudheesh001/oh-mainline
|
vendor/packages/Pygments/pygments/lexers/templates.py
|
72
|
55945
|
# -*- coding: utf-8 -*-
"""
pygments.lexers.templates
~~~~~~~~~~~~~~~~~~~~~~~~~
Lexers for various template engines' markup.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexers.web import \
PhpLexer, HtmlLexer, XmlLexer, JavascriptLexer, CssLexer, LassoLexer
from pygments.lexers.agile import PythonLexer, PerlLexer
from pygments.lexers.compiled import JavaLexer
from pygments.lexers.jvm import TeaLangLexer
from pygments.lexer import Lexer, DelegatingLexer, RegexLexer, bygroups, \
include, using, this
from pygments.token import Error, Punctuation, \
Text, Comment, Operator, Keyword, Name, String, Number, Other, Token
from pygments.util import html_doctype_matches, looks_like_xml
__all__ = ['HtmlPhpLexer', 'XmlPhpLexer', 'CssPhpLexer',
'JavascriptPhpLexer', 'ErbLexer', 'RhtmlLexer',
'XmlErbLexer', 'CssErbLexer', 'JavascriptErbLexer',
'SmartyLexer', 'HtmlSmartyLexer', 'XmlSmartyLexer',
'CssSmartyLexer', 'JavascriptSmartyLexer', 'DjangoLexer',
'HtmlDjangoLexer', 'CssDjangoLexer', 'XmlDjangoLexer',
'JavascriptDjangoLexer', 'GenshiLexer', 'HtmlGenshiLexer',
'GenshiTextLexer', 'CssGenshiLexer', 'JavascriptGenshiLexer',
'MyghtyLexer', 'MyghtyHtmlLexer', 'MyghtyXmlLexer',
'MyghtyCssLexer', 'MyghtyJavascriptLexer', 'MasonLexer', 'MakoLexer',
'MakoHtmlLexer', 'MakoXmlLexer', 'MakoJavascriptLexer',
'MakoCssLexer', 'JspLexer', 'CheetahLexer', 'CheetahHtmlLexer',
'CheetahXmlLexer', 'CheetahJavascriptLexer', 'EvoqueLexer',
'EvoqueHtmlLexer', 'EvoqueXmlLexer', 'ColdfusionLexer',
'ColdfusionHtmlLexer', 'VelocityLexer', 'VelocityHtmlLexer',
'VelocityXmlLexer', 'SspLexer', 'TeaTemplateLexer', 'LassoHtmlLexer',
'LassoXmlLexer', 'LassoCssLexer', 'LassoJavascriptLexer']
class ErbLexer(Lexer):
"""
Generic `ERB <http://ruby-doc.org/core/classes/ERB.html>`_ (Ruby Templating)
lexer.
Just highlights Ruby code between the preprocessor directives; other
data is left untouched by the lexer.
All options are also forwarded to the `RubyLexer`.
"""
name = 'ERB'
aliases = ['erb']
mimetypes = ['application/x-ruby-templating']
_block_re = re.compile(r'(<%%|%%>|<%=|<%#|<%-|<%|-%>|%>|^%[^%].*?$)', re.M)
def __init__(self, **options):
from pygments.lexers.agile import RubyLexer
self.ruby_lexer = RubyLexer(**options)
Lexer.__init__(self, **options)
def get_tokens_unprocessed(self, text):
"""
Since ERB doesn't allow "<%" and other tags inside of Ruby
blocks, we can use a split approach here; it only fails on
input that ERB itself would reject anyway.
"""
tokens = self._block_re.split(text)
tokens.reverse()
state = idx = 0
try:
while True:
# text
if state == 0:
val = tokens.pop()
yield idx, Other, val
idx += len(val)
state = 1
# block starts
elif state == 1:
tag = tokens.pop()
# literals
if tag in ('<%%', '%%>'):
yield idx, Other, tag
idx += 3
state = 0
# comment
elif tag == '<%#':
yield idx, Comment.Preproc, tag
val = tokens.pop()
yield idx + 3, Comment, val
idx += 3 + len(val)
state = 2
# blocks or output
elif tag in ('<%', '<%=', '<%-'):
yield idx, Comment.Preproc, tag
idx += len(tag)
data = tokens.pop()
r_idx = 0
for r_idx, r_token, r_value in \
self.ruby_lexer.get_tokens_unprocessed(data):
yield r_idx + idx, r_token, r_value
idx += len(data)
state = 2
elif tag in ('%>', '-%>'):
yield idx, Error, tag
idx += len(tag)
state = 0
# % raw ruby statements
else:
yield idx, Comment.Preproc, tag[0]
r_idx = 0
for r_idx, r_token, r_value in \
self.ruby_lexer.get_tokens_unprocessed(tag[1:]):
yield idx + 1 + r_idx, r_token, r_value
idx += len(tag)
state = 0
# block ends
elif state == 2:
tag = tokens.pop()
if tag not in ('%>', '-%>'):
yield idx, Other, tag
else:
yield idx, Comment.Preproc, tag
idx += len(tag)
state = 0
except IndexError:
return
def analyse_text(text):
if '<%' in text and '%>' in text:
return 0.4
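# A quick usage sketch, assuming Pygments is importable: the state
# machine above emits the ERB delimiters as Comment.Preproc, delegates
# the enclosed code to RubyLexer, and passes everything else through
# as Other.
#
#     for tok, val in ErbLexer().get_tokens(u'<%= user.name %>'):
#         print(tok, repr(val))   # Comment.Preproc for '<%=' and '%>'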
class SmartyLexer(RegexLexer):
"""
Generic `Smarty <http://smarty.php.net/>`_ template lexer.
Just highlights Smarty code between the preprocessor directives;
other data is left untouched by the lexer.
"""
name = 'Smarty'
aliases = ['smarty']
filenames = ['*.tpl']
mimetypes = ['application/x-smarty']
flags = re.MULTILINE | re.DOTALL
tokens = {
'root': [
(r'[^{]+', Other),
(r'(\{)(\*.*?\*)(\})',
bygroups(Comment.Preproc, Comment, Comment.Preproc)),
(r'(\{php\})(.*?)(\{/php\})',
bygroups(Comment.Preproc, using(PhpLexer, startinline=True),
Comment.Preproc)),
(r'(\{)(/?[a-zA-Z_][a-zA-Z0-9_]*)(\s*)',
bygroups(Comment.Preproc, Name.Function, Text), 'smarty'),
(r'\{', Comment.Preproc, 'smarty')
],
'smarty': [
(r'\s+', Text),
(r'\}', Comment.Preproc, '#pop'),
(r'#[a-zA-Z_][a-zA-Z0-9_]*#', Name.Variable),
(r'\$[a-zA-Z_][a-zA-Z0-9_]*(\.[a-zA-Z0-9_]+)*', Name.Variable),
(r'[~!%^&*()+=|\[\]:;,.<>/?{}@-]', Operator),
(r'(true|false|null)\b', Keyword.Constant),
(r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|"
r"0[xX][0-9a-fA-F]+[Ll]?", Number),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
(r'[a-zA-Z_][a-zA-Z0-9_]*', Name.Attribute)
]
}
def analyse_text(text):
rv = 0.0
if re.search(r'\{if\s+.*?\}.*?\{/if\}', text):
rv += 0.15
if re.search(r'\{include\s+file=.*?\}', text):
rv += 0.15
if re.search(r'\{foreach\s+.*?\}.*?\{/foreach\}', text):
rv += 0.15
if re.search(r'\{\$.*?\}', text):
rv += 0.01
return rv
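# A sketch of how this score is consumed, assuming the module is
# importable: pygments.lexers.guess_lexer() calls each registered
# lexer's analyse_text() and picks the best match, so e.g.
#
#     SmartyLexer.analyse_text(u'{include file="header.tpl"}')
#
# contributes roughly 0.15 here and competes with the other heuristics
# in this file.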
class VelocityLexer(RegexLexer):
"""
Generic `Velocity <http://velocity.apache.org/>`_ template lexer.
Just highlights Velocity directives and variable references; other
data is left untouched by the lexer.
"""
name = 'Velocity'
aliases = ['velocity']
filenames = ['*.vm','*.fhtml']
flags = re.MULTILINE | re.DOTALL
identifier = r'[a-zA-Z_][a-zA-Z0-9_]*'
tokens = {
'root': [
(r'[^{#$]+', Other),
(r'(#)(\*.*?\*)(#)',
bygroups(Comment.Preproc, Comment, Comment.Preproc)),
(r'(##)(.*?$)',
bygroups(Comment.Preproc, Comment)),
(r'(#\{?)(' + identifier + r')(\}?)(\s?\()',
bygroups(Comment.Preproc, Name.Function, Comment.Preproc, Punctuation),
'directiveparams'),
(r'(#\{?)(' + identifier + r')(\}|\b)',
bygroups(Comment.Preproc, Name.Function, Comment.Preproc)),
(r'\$\{?', Punctuation, 'variable')
],
'variable': [
(identifier, Name.Variable),
(r'\(', Punctuation, 'funcparams'),
(r'(\.)(' + identifier + r')',
bygroups(Punctuation, Name.Variable), '#push'),
(r'\}', Punctuation, '#pop'),
(r'', Other, '#pop')
],
'directiveparams': [
(r'(&&|\|\||==?|!=?|[-<>+*%&\|\^/])|\b(eq|ne|gt|lt|ge|le|not|in)\b',
Operator),
(r'\[', Operator, 'rangeoperator'),
(r'\b' + identifier + r'\b', Name.Function),
include('funcparams')
],
'rangeoperator': [
(r'\.\.', Operator),
include('funcparams'),
(r'\]', Operator, '#pop')
],
'funcparams': [
(r'\$\{?', Punctuation, 'variable'),
(r'\s+', Text),
(r',', Punctuation),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
(r"0[xX][0-9a-fA-F]+[Ll]?", Number),
(r"\b[0-9]+\b", Number),
(r'(true|false|null)\b', Keyword.Constant),
(r'\(', Punctuation, '#push'),
(r'\)', Punctuation, '#pop')
]
}
def analyse_text(text):
rv = 0.0
if re.search(r'#\{?macro\}?\(.*?\).*?#\{?end\}?', text):
rv += 0.25
if re.search(r'#\{?if\}?\(.+?\).*?#\{?end\}?', text):
rv += 0.15
if re.search(r'#\{?foreach\}?\(.+?\).*?#\{?end\}?', text):
rv += 0.15
if re.search(r'\$\{?[a-zA-Z_][a-zA-Z0-9_]*(\([^)]*\))?'
r'(\.[a-zA-Z0-9_]+(\([^)]*\))?)*\}?', text):
rv += 0.01
return rv
class VelocityHtmlLexer(DelegatingLexer):
"""
Subclass of the `VelocityLexer` that highlights unlexed data
with the `HtmlLexer`.
"""
name = 'HTML+Velocity'
aliases = ['html+velocity']
alias_filenames = ['*.html','*.fhtml']
mimetypes = ['text/html+velocity']
def __init__(self, **options):
super(VelocityHtmlLexer, self).__init__(HtmlLexer, VelocityLexer,
**options)
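# A minimal usage sketch of the delegating pattern used throughout this
# file, assuming Pygments is importable: the language lexer
# (VelocityLexer) runs first, and every span it yields as Other is
# re-lexed by the root lexer (HtmlLexer), so markup and directives both
# get highlighted.
#
#     from pygments import highlight
#     from pygments.formatters import NullFormatter
#     highlight(u'<b>#if($x) hi #end</b>',
#               VelocityHtmlLexer(), NullFormatter())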
class VelocityXmlLexer(DelegatingLexer):
"""
Subclass of the `VelocityLexer` that highlights unlexed data
with the `XmlLexer`.
"""
name = 'XML+Velocity'
aliases = ['xml+velocity']
alias_filenames = ['*.xml','*.vm']
mimetypes = ['application/xml+velocity']
def __init__(self, **options):
super(VelocityXmlLexer, self).__init__(XmlLexer, VelocityLexer,
**options)
def analyse_text(text):
rv = VelocityLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.5
return rv
class DjangoLexer(RegexLexer):
"""
Generic `django <http://www.djangoproject.com/documentation/templates/>`_
and `jinja <http://wsgiarea.pocoo.org/jinja/>`_ template lexer.
It just highlights Django/Jinja code between the preprocessor
directives; other data is left untouched by the lexer.
"""
name = 'Django/Jinja'
aliases = ['django', 'jinja']
mimetypes = ['application/x-django-templating', 'application/x-jinja']
flags = re.M | re.S
tokens = {
'root': [
(r'[^{]+', Other),
(r'\{\{', Comment.Preproc, 'var'),
# jinja/django comments
(r'\{[*#].*?[*#]\}', Comment),
# django comments
(r'(\{%)(-?\s*)(comment)(\s*-?)(%\})(.*?)'
r'(\{%)(-?\s*)(endcomment)(\s*-?)(%\})',
bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc,
Comment, Comment.Preproc, Text, Keyword, Text,
Comment.Preproc)),
# raw jinja blocks
(r'(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)'
r'(\{%)(-?\s*)(endraw)(\s*-?)(%\})',
bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc,
Text, Comment.Preproc, Text, Keyword, Text,
Comment.Preproc)),
# filter blocks
(r'(\{%)(-?\s*)(filter)(\s+)([a-zA-Z_][a-zA-Z0-9_]*)',
bygroups(Comment.Preproc, Text, Keyword, Text, Name.Function),
'block'),
(r'(\{%)(-?\s*)([a-zA-Z_][a-zA-Z0-9_]*)',
bygroups(Comment.Preproc, Text, Keyword), 'block'),
(r'\{', Other)
],
'varnames': [
(r'(\|)(\s*)([a-zA-Z_][a-zA-Z0-9_]*)',
bygroups(Operator, Text, Name.Function)),
(r'(is)(\s+)(not)?(\s+)?([a-zA-Z_][a-zA-Z0-9_]*)',
bygroups(Keyword, Text, Keyword, Text, Name.Function)),
(r'(_|true|false|none|True|False|None)\b', Keyword.Pseudo),
(r'(in|as|reversed|recursive|not|and|or|is|if|else|import|'
r'with(?:(?:out)?\s*context)?|scoped|ignore\s+missing)\b',
Keyword),
(r'(loop|block|super|forloop)\b', Name.Builtin),
(r'[a-zA-Z][a-zA-Z0-9_-]*', Name.Variable),
(r'\.[a-zA-Z0-9_]+', Name.Variable),
(r':?"(\\\\|\\"|[^"])*"', String.Double),
(r":?'(\\\\|\\'|[^'])*'", String.Single),
(r'([{}()\[\]+\-*/,:~]|[><=]=?)', Operator),
(r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|"
r"0[xX][0-9a-fA-F]+[Ll]?", Number),
],
'var': [
(r'\s+', Text),
(r'(-?)(\}\})', bygroups(Text, Comment.Preproc), '#pop'),
include('varnames')
],
'block': [
(r'\s+', Text),
(r'(-?)(%\})', bygroups(Text, Comment.Preproc), '#pop'),
include('varnames'),
(r'.', Punctuation)
]
}
def analyse_text(text):
rv = 0.0
if re.search(r'\{%\s*(block|extends)', text) is not None:
rv += 0.4
if re.search(r'\{%\s*if\s*.*?%\}', text) is not None:
rv += 0.1
if re.search(r'\{\{.*?\}\}', text) is not None:
rv += 0.1
return rv
class MyghtyLexer(RegexLexer):
"""
Generic `myghty templates`_ lexer. Code that isn't Myghty
markup is yielded as `Token.Other`.
*New in Pygments 0.6.*
.. _myghty templates: http://www.myghty.org/
"""
name = 'Myghty'
aliases = ['myghty']
filenames = ['*.myt', 'autodelegate']
mimetypes = ['application/x-myghty']
tokens = {
'root': [
(r'\s+', Text),
(r'(<%(?:def|method))(\s*)(.*?)(>)(.*?)(</%\2\s*>)(?s)',
bygroups(Name.Tag, Text, Name.Function, Name.Tag,
using(this), Name.Tag)),
(r'(<%\w+)(.*?)(>)(.*?)(</%\2\s*>)(?s)',
bygroups(Name.Tag, Name.Function, Name.Tag,
using(PythonLexer), Name.Tag)),
(r'(<&[^|])(.*?)(,.*?)?(&>)',
bygroups(Name.Tag, Name.Function, using(PythonLexer), Name.Tag)),
(r'(<&\|)(.*?)(,.*?)?(&>)(?s)',
bygroups(Name.Tag, Name.Function, using(PythonLexer), Name.Tag)),
(r'</&>', Name.Tag),
(r'(<%!?)(.*?)(%>)(?s)',
bygroups(Name.Tag, using(PythonLexer), Name.Tag)),
(r'(?<=^)#[^\n]*(\n|\Z)', Comment),
(r'(?<=^)(%)([^\n]*)(\n|\Z)',
bygroups(Name.Tag, using(PythonLexer), Other)),
(r"""(?sx)
(.+?) # anything, followed by:
(?:
(?<=\n)(?=[%#]) | # an eval or comment line
(?=</?[%&]) | # a substitution or block or
# call start or end
# - don't consume
(\\\n) | # an escaped newline
\Z # end of string
)""", bygroups(Other, Operator)),
]
}
class MyghtyHtmlLexer(DelegatingLexer):
"""
Subclass of the `MyghtyLexer` that highlights unlexed data
with the `HtmlLexer`.
*New in Pygments 0.6.*
"""
name = 'HTML+Myghty'
aliases = ['html+myghty']
mimetypes = ['text/html+myghty']
def __init__(self, **options):
super(MyghtyHtmlLexer, self).__init__(HtmlLexer, MyghtyLexer,
**options)
class MyghtyXmlLexer(DelegatingLexer):
"""
Subclass of the `MyghtyLexer` that highlights unlexed data
with the `XmlLexer`.
*New in Pygments 0.6.*
"""
name = 'XML+Myghty'
aliases = ['xml+myghty']
mimetypes = ['application/xml+myghty']
def __init__(self, **options):
super(MyghtyXmlLexer, self).__init__(XmlLexer, MyghtyLexer,
**options)
class MyghtyJavascriptLexer(DelegatingLexer):
"""
Subclass of the `MyghtyLexer` that highlights unlexed data
with the `JavascriptLexer`.
*New in Pygments 0.6.*
"""
name = 'JavaScript+Myghty'
aliases = ['js+myghty', 'javascript+myghty']
mimetypes = ['application/x-javascript+myghty',
'text/x-javascript+myghty',
'text/javascript+myghty']
def __init__(self, **options):
super(MyghtyJavascriptLexer, self).__init__(JavascriptLexer,
MyghtyLexer, **options)
class MyghtyCssLexer(DelegatingLexer):
"""
Subclass of the `MyghtyLexer` that highlights unlexed data
with the `CssLexer`.
*New in Pygments 0.6.*
"""
name = 'CSS+Myghty'
aliases = ['css+myghty']
mimetypes = ['text/css+myghty']
def __init__(self, **options):
super(MyghtyCssLexer, self).__init__(CssLexer, MyghtyLexer,
**options)
class MasonLexer(RegexLexer):
"""
Generic `mason templates`_ lexer. Adapted from the Myghty lexer. Code
that isn't Mason markup is highlighted as HTML.
.. _mason templates: http://www.masonhq.com/
*New in Pygments 1.4.*
"""
name = 'Mason'
aliases = ['mason']
filenames = ['*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler']
mimetypes = ['application/x-mason']
tokens = {
'root': [
(r'\s+', Text),
(r'(<%doc>)(.*?)(</%doc>)(?s)',
bygroups(Name.Tag, Comment.Multiline, Name.Tag)),
(r'(<%(?:def|method))(\s*)(.*?)(>)(.*?)(</%\2\s*>)(?s)',
bygroups(Name.Tag, Text, Name.Function, Name.Tag,
using(this), Name.Tag)),
(r'(<%\w+)(.*?)(>)(.*?)(</%\2\s*>)(?s)',
bygroups(Name.Tag, Name.Function, Name.Tag,
using(PerlLexer), Name.Tag)),
(r'(<&[^|])(.*?)(,.*?)?(&>)(?s)',
bygroups(Name.Tag, Name.Function, using(PerlLexer), Name.Tag)),
(r'(<&\|)(.*?)(,.*?)?(&>)(?s)',
bygroups(Name.Tag, Name.Function, using(PerlLexer), Name.Tag)),
(r'</&>', Name.Tag),
(r'(<%!?)(.*?)(%>)(?s)',
bygroups(Name.Tag, using(PerlLexer), Name.Tag)),
(r'(?<=^)#[^\n]*(\n|\Z)', Comment),
(r'(?<=^)(%)([^\n]*)(\n|\Z)',
bygroups(Name.Tag, using(PerlLexer), Other)),
(r"""(?sx)
(.+?) # anything, followed by:
(?:
(?<=\n)(?=[%#]) | # an eval or comment line
(?=</?[%&]) | # a substitution or block or
# call start or end
# - don't consume
(\\\n) | # an escaped newline
\Z # end of string
)""", bygroups(using(HtmlLexer), Operator)),
]
}
def analyse_text(text):
rv = 0.0
if re.search('<&', text) is not None:
rv = 1.0
return rv
class MakoLexer(RegexLexer):
"""
Generic `mako templates`_ lexer. Code that isn't Mako
markup is yielded as `Token.Other`.
*New in Pygments 0.7.*
.. _mako templates: http://www.makotemplates.org/
"""
name = 'Mako'
aliases = ['mako']
filenames = ['*.mao']
mimetypes = ['application/x-mako']
tokens = {
'root': [
(r'(\s*)(%)(\s*end(?:\w+))(\n|\Z)',
bygroups(Text, Comment.Preproc, Keyword, Other)),
(r'(\s*)(%)([^\n]*)(\n|\Z)',
bygroups(Text, Comment.Preproc, using(PythonLexer), Other)),
(r'(\s*)(##[^\n]*)(\n|\Z)',
bygroups(Text, Comment.Preproc, Other)),
(r'(?s)<%doc>.*?</%doc>', Comment.Preproc),
(r'(<%)([\w\.\:]+)',
bygroups(Comment.Preproc, Name.Builtin), 'tag'),
(r'(</%)([\w\.\:]+)(>)',
bygroups(Comment.Preproc, Name.Builtin, Comment.Preproc)),
(r'<%(?=([\w\.\:]+))', Comment.Preproc, 'ondeftags'),
(r'(<%(?:!?))(.*?)(%>)(?s)',
bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'(\$\{)(.*?)(\})',
bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'''(?sx)
(.+?) # anything, followed by:
(?:
(?<=\n)(?=%|\#\#) | # an eval or comment line
(?=\#\*) | # multiline comment
(?=</?%) | # a python block
# call start or end
(?=\$\{) | # a substitution
(?<=\n)(?=\s*%) |
# - don't consume
(\\\n) | # an escaped newline
\Z # end of string
)
''', bygroups(Other, Operator)),
(r'\s+', Text),
],
'ondeftags': [
(r'<%', Comment.Preproc),
(r'(?<=<%)(include|inherit|namespace|page)', Name.Builtin),
include('tag'),
],
'tag': [
(r'((?:\w+)\s*=)(\s*)(".*?")',
bygroups(Name.Attribute, Text, String)),
(r'/?\s*>', Comment.Preproc, '#pop'),
(r'\s+', Text),
],
'attr': [
('".*?"', String, '#pop'),
("'.*?'", String, '#pop'),
(r'[^\s>]+', String, '#pop'),
],
}
class MakoHtmlLexer(DelegatingLexer):
"""
Subclass of the `MakoLexer` that highlights unlexed data
with the `HtmlLexer`.
*New in Pygments 0.7.*
"""
name = 'HTML+Mako'
aliases = ['html+mako']
mimetypes = ['text/html+mako']
def __init__(self, **options):
super(MakoHtmlLexer, self).__init__(HtmlLexer, MakoLexer,
**options)
class MakoXmlLexer(DelegatingLexer):
"""
Subclass of the `MakoLexer` that highlights unlexed data
with the `XmlLexer`.
*New in Pygments 0.7.*
"""
name = 'XML+Mako'
aliases = ['xml+mako']
mimetypes = ['application/xml+mako']
def __init__(self, **options):
super(MakoXmlLexer, self).__init__(XmlLexer, MakoLexer,
**options)
class MakoJavascriptLexer(DelegatingLexer):
"""
Subclass of the `MakoLexer` that highlights unlexed data
with the `JavascriptLexer`.
*New in Pygments 0.7.*
"""
name = 'JavaScript+Mako'
aliases = ['js+mako', 'javascript+mako']
mimetypes = ['application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako']
def __init__(self, **options):
super(MakoJavascriptLexer, self).__init__(JavascriptLexer,
MakoLexer, **options)
class MakoCssLexer(DelegatingLexer):
"""
Subclass of the `MakoLexer` that highlights unlexed data
with the `CssLexer`.
*New in Pygments 0.7.*
"""
name = 'CSS+Mako'
aliases = ['css+mako']
mimetypes = ['text/css+mako']
def __init__(self, **options):
super(MakoCssLexer, self).__init__(CssLexer, MakoLexer,
**options)
# Genshi and Cheetah lexers courtesy of Matt Good.
class CheetahPythonLexer(Lexer):
"""
Lexer for handling Cheetah's special $ tokens in Python syntax.
"""
def get_tokens_unprocessed(self, text):
pylexer = PythonLexer(**self.options)
for pos, type_, value in pylexer.get_tokens_unprocessed(text):
if type_ == Token.Error and value == '$':
type_ = Comment.Preproc
yield pos, type_, value
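# A small sketch, assuming Pygments is importable: PythonLexer reports
# a bare '$' as Token.Error, and the wrapper above rewrites exactly
# those tokens so Cheetah's $placeholders render as preprocessor
# markup rather than errors.
#
#     for _, tok, val in CheetahPythonLexer().get_tokens_unprocessed(
#             u'$name.upper()'):
#         print(tok, val)     # '$' arrives as Comment.Preproc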
class CheetahLexer(RegexLexer):
"""
Generic `cheetah templates`_ lexer. Code that isn't Cheetah
markup is yielded as `Token.Other`. This also works for
`spitfire templates`_ which use the same syntax.
.. _cheetah templates: http://www.cheetahtemplate.org/
.. _spitfire templates: http://code.google.com/p/spitfire/
"""
name = 'Cheetah'
aliases = ['cheetah', 'spitfire']
filenames = ['*.tmpl', '*.spt']
mimetypes = ['application/x-cheetah', 'application/x-spitfire']
tokens = {
'root': [
(r'(##[^\n]*)$',
(bygroups(Comment))),
(r'#[*](.|\n)*?[*]#', Comment),
(r'#end[^#\n]*(?:#|$)', Comment.Preproc),
(r'#slurp$', Comment.Preproc),
(r'(#[a-zA-Z]+)([^#\n]*)(#|$)',
(bygroups(Comment.Preproc, using(CheetahPythonLexer),
Comment.Preproc))),
# TODO support other Python syntax like $foo['bar']
(r'(\$)([a-zA-Z_][a-zA-Z0-9_\.]*[a-zA-Z0-9_])',
bygroups(Comment.Preproc, using(CheetahPythonLexer))),
(r'(\$\{!?)(.*?)(\})(?s)',
bygroups(Comment.Preproc, using(CheetahPythonLexer),
Comment.Preproc)),
(r'''(?sx)
(.+?) # anything, followed by:
(?:
(?=[#][#a-zA-Z]*) | # an eval comment
(?=\$[a-zA-Z_{]) | # a substitution
\Z # end of string
)
''', Other),
(r'\s+', Text),
],
}
class CheetahHtmlLexer(DelegatingLexer):
"""
Subclass of the `CheetahLexer` that highlights unlexed data
with the `HtmlLexer`.
"""
name = 'HTML+Cheetah'
aliases = ['html+cheetah', 'html+spitfire']
mimetypes = ['text/html+cheetah', 'text/html+spitfire']
def __init__(self, **options):
super(CheetahHtmlLexer, self).__init__(HtmlLexer, CheetahLexer,
**options)
class CheetahXmlLexer(DelegatingLexer):
"""
Subclass of the `CheetahLexer` that highlights unlexed data
with the `XmlLexer`.
"""
name = 'XML+Cheetah'
aliases = ['xml+cheetah', 'xml+spitfire']
mimetypes = ['application/xml+cheetah', 'application/xml+spitfire']
def __init__(self, **options):
super(CheetahXmlLexer, self).__init__(XmlLexer, CheetahLexer,
**options)
class CheetahJavascriptLexer(DelegatingLexer):
"""
Subclass of the `CheetahLexer` that highlights unlexed data
with the `JavascriptLexer`.
"""
name = 'JavaScript+Cheetah'
aliases = ['js+cheetah', 'javascript+cheetah',
'js+spitfire', 'javascript+spitfire']
mimetypes = ['application/x-javascript+cheetah',
'text/x-javascript+cheetah',
'text/javascript+cheetah',
'application/x-javascript+spitfire',
'text/x-javascript+spitfire',
'text/javascript+spitfire']
def __init__(self, **options):
super(CheetahJavascriptLexer, self).__init__(JavascriptLexer,
CheetahLexer, **options)
class GenshiTextLexer(RegexLexer):
"""
A lexer that highlights `genshi <http://genshi.edgewall.org/>`_ text
templates.
"""
name = 'Genshi Text'
aliases = ['genshitext']
mimetypes = ['application/x-genshi-text', 'text/x-genshi']
tokens = {
'root': [
(r'[^#\$\s]+', Other),
(r'^(\s*)(##.*)$', bygroups(Text, Comment)),
(r'^(\s*)(#)', bygroups(Text, Comment.Preproc), 'directive'),
include('variable'),
(r'[#\$\s]', Other),
],
'directive': [
(r'\n', Text, '#pop'),
(r'(?:def|for|if)\s+.*', using(PythonLexer), '#pop'),
(r'(choose|when|with)([^\S\n]+)(.*)',
bygroups(Keyword, Text, using(PythonLexer)), '#pop'),
(r'(choose|otherwise)\b', Keyword, '#pop'),
(r'(end\w*)([^\S\n]*)(.*)', bygroups(Keyword, Text, Comment), '#pop'),
],
'variable': [
(r'(?<!\$)(\$\{)(.+?)(\})',
bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'(?<!\$)(\$)([a-zA-Z_][a-zA-Z0-9_\.]*)',
Name.Variable),
]
}
class GenshiMarkupLexer(RegexLexer):
"""
Base lexer for Genshi markup, used by `HtmlGenshiLexer` and
`GenshiLexer`.
"""
flags = re.DOTALL
tokens = {
'root': [
(r'[^<\$]+', Other),
(r'(<\?python)(.*?)(\?>)',
bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
# yield style and script blocks as Other
(r'<\s*(script|style)\s*.*?>.*?<\s*/\1\s*>', Other),
(r'<\s*py:[a-zA-Z0-9]+', Name.Tag, 'pytag'),
(r'<\s*[a-zA-Z0-9:]+', Name.Tag, 'tag'),
include('variable'),
(r'[<\$]', Other),
],
'pytag': [
(r'\s+', Text),
(r'[a-zA-Z0-9_:-]+\s*=', Name.Attribute, 'pyattr'),
(r'/?\s*>', Name.Tag, '#pop'),
],
'pyattr': [
('(")(.*?)(")', bygroups(String, using(PythonLexer), String), '#pop'),
("(')(.*?)(')", bygroups(String, using(PythonLexer), String), '#pop'),
(r'[^\s>]+', String, '#pop'),
],
'tag': [
(r'\s+', Text),
(r'py:[a-zA-Z0-9_-]+\s*=', Name.Attribute, 'pyattr'),
(r'[a-zA-Z0-9_:-]+\s*=', Name.Attribute, 'attr'),
(r'/?\s*>', Name.Tag, '#pop'),
],
'attr': [
('"', String, 'attr-dstring'),
("'", String, 'attr-sstring'),
(r'[^\s>]*', String, '#pop')
],
'attr-dstring': [
('"', String, '#pop'),
include('strings'),
("'", String)
],
'attr-sstring': [
("'", String, '#pop'),
include('strings'),
("'", String)
],
'strings': [
('[^"\'$]+', String),
include('variable')
],
'variable': [
(r'(?<!\$)(\$\{)(.+?)(\})',
bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'(?<!\$)(\$)([a-zA-Z_][a-zA-Z0-9_\.]*)',
Name.Variable),
]
}
class HtmlGenshiLexer(DelegatingLexer):
"""
A lexer that highlights `genshi <http://genshi.edgewall.org/>`_ and
`kid <http://kid-templating.org/>`_ HTML templates.
"""
name = 'HTML+Genshi'
aliases = ['html+genshi', 'html+kid']
alias_filenames = ['*.html', '*.htm', '*.xhtml']
mimetypes = ['text/html+genshi']
def __init__(self, **options):
super(HtmlGenshiLexer, self).__init__(HtmlLexer, GenshiMarkupLexer,
**options)
def analyse_text(text):
rv = 0.0
if re.search(r'\$\{.*?\}', text) is not None:
rv += 0.2
if re.search(r'py:(.*?)=["\']', text) is not None:
rv += 0.2
return rv + HtmlLexer.analyse_text(text) - 0.01
class GenshiLexer(DelegatingLexer):
"""
A lexer that highlights `genshi <http://genshi.edgewall.org/>`_ and
`kid <http://kid-templating.org/>`_ XML templates.
"""
name = 'Genshi'
aliases = ['genshi', 'kid', 'xml+genshi', 'xml+kid']
filenames = ['*.kid']
alias_filenames = ['*.xml']
mimetypes = ['application/x-genshi', 'application/x-kid']
def __init__(self, **options):
super(GenshiLexer, self).__init__(XmlLexer, GenshiMarkupLexer,
**options)
def analyse_text(text):
rv = 0.0
if re.search(r'\$\{.*?\}', text) is not None:
rv += 0.2
if re.search(r'py:(.*?)=["\']', text) is not None:
rv += 0.2
return rv + XmlLexer.analyse_text(text) - 0.01
class JavascriptGenshiLexer(DelegatingLexer):
"""
A lexer that highlights javascript code in genshi text templates.
"""
name = 'JavaScript+Genshi Text'
aliases = ['js+genshitext', 'js+genshi', 'javascript+genshitext',
'javascript+genshi']
alias_filenames = ['*.js']
mimetypes = ['application/x-javascript+genshi',
'text/x-javascript+genshi',
'text/javascript+genshi']
def __init__(self, **options):
super(JavascriptGenshiLexer, self).__init__(JavascriptLexer,
GenshiTextLexer,
**options)
def analyse_text(text):
return GenshiLexer.analyse_text(text) - 0.05
class CssGenshiLexer(DelegatingLexer):
"""
A lexer that highlights CSS definitions in genshi text templates.
"""
name = 'CSS+Genshi Text'
aliases = ['css+genshitext', 'css+genshi']
alias_filenames = ['*.css']
mimetypes = ['text/css+genshi']
def __init__(self, **options):
super(CssGenshiLexer, self).__init__(CssLexer, GenshiTextLexer,
**options)
def analyse_text(text):
return GenshiLexer.analyse_text(text) - 0.05
class RhtmlLexer(DelegatingLexer):
"""
Subclass of the ERB lexer that highlights the unlexed data with the
`HtmlLexer`.
Nested Javascript and CSS are highlighted too.
"""
name = 'RHTML'
aliases = ['rhtml', 'html+erb', 'html+ruby']
filenames = ['*.rhtml']
alias_filenames = ['*.html', '*.htm', '*.xhtml']
mimetypes = ['text/html+ruby']
def __init__(self, **options):
super(RhtmlLexer, self).__init__(HtmlLexer, ErbLexer, **options)
def analyse_text(text):
rv = ErbLexer.analyse_text(text) - 0.01
if html_doctype_matches(text):
# one more than the XmlErbLexer returns
rv += 0.5
return rv
class XmlErbLexer(DelegatingLexer):
"""
Subclass of `ErbLexer` which highlights data outside preprocessor
directives with the `XmlLexer`.
"""
name = 'XML+Ruby'
aliases = ['xml+erb', 'xml+ruby']
alias_filenames = ['*.xml']
mimetypes = ['application/xml+ruby']
def __init__(self, **options):
super(XmlErbLexer, self).__init__(XmlLexer, ErbLexer, **options)
def analyse_text(text):
rv = ErbLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
return rv
class CssErbLexer(DelegatingLexer):
"""
Subclass of `ErbLexer` which highlights unlexed data with the `CssLexer`.
"""
name = 'CSS+Ruby'
aliases = ['css+erb', 'css+ruby']
alias_filenames = ['*.css']
mimetypes = ['text/css+ruby']
def __init__(self, **options):
super(CssErbLexer, self).__init__(CssLexer, ErbLexer, **options)
def analyse_text(text):
return ErbLexer.analyse_text(text) - 0.05
class JavascriptErbLexer(DelegatingLexer):
"""
Subclass of `ErbLexer` which highlights unlexed data with the
`JavascriptLexer`.
"""
name = 'JavaScript+Ruby'
aliases = ['js+erb', 'javascript+erb', 'js+ruby', 'javascript+ruby']
alias_filenames = ['*.js']
mimetypes = ['application/x-javascript+ruby',
'text/x-javascript+ruby',
'text/javascript+ruby']
def __init__(self, **options):
super(JavascriptErbLexer, self).__init__(JavascriptLexer, ErbLexer,
**options)
def analyse_text(text):
return ErbLexer.analyse_text(text) - 0.05
class HtmlPhpLexer(DelegatingLexer):
"""
Subclass of `PhpLexer` that highlights unhandled data with the `HtmlLexer`.
Nested Javascript and CSS are highlighted too.
"""
name = 'HTML+PHP'
aliases = ['html+php']
filenames = ['*.phtml']
alias_filenames = ['*.php', '*.html', '*.htm', '*.xhtml',
'*.php[345]']
mimetypes = ['application/x-php',
'application/x-httpd-php', 'application/x-httpd-php3',
'application/x-httpd-php4', 'application/x-httpd-php5']
def __init__(self, **options):
super(HtmlPhpLexer, self).__init__(HtmlLexer, PhpLexer, **options)
def analyse_text(text):
rv = PhpLexer.analyse_text(text) - 0.01
if html_doctype_matches(text):
rv += 0.5
return rv
class XmlPhpLexer(DelegatingLexer):
"""
Subclass of `PhpLexer` that highlights unhandled data with the `XmlLexer`.
"""
name = 'XML+PHP'
aliases = ['xml+php']
alias_filenames = ['*.xml', '*.php', '*.php[345]']
mimetypes = ['application/xml+php']
def __init__(self, **options):
super(XmlPhpLexer, self).__init__(XmlLexer, PhpLexer, **options)
def analyse_text(text):
rv = PhpLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
return rv
class CssPhpLexer(DelegatingLexer):
"""
Subclass of `PhpLexer` which highlights unmatched data with the `CssLexer`.
"""
name = 'CSS+PHP'
aliases = ['css+php']
alias_filenames = ['*.css']
mimetypes = ['text/css+php']
def __init__(self, **options):
super(CssPhpLexer, self).__init__(CssLexer, PhpLexer, **options)
def analyse_text(text):
return PhpLexer.analyse_text(text) - 0.05
class JavascriptPhpLexer(DelegatingLexer):
"""
Subclass of `PhpLexer` which highlights unmatched data with the
`JavascriptLexer`.
"""
name = 'JavaScript+PHP'
aliases = ['js+php', 'javascript+php']
alias_filenames = ['*.js']
mimetypes = ['application/x-javascript+php',
'text/x-javascript+php',
'text/javascript+php']
def __init__(self, **options):
super(JavascriptPhpLexer, self).__init__(JavascriptLexer, PhpLexer,
**options)
def analyse_text(text):
return PhpLexer.analyse_text(text)
class HtmlSmartyLexer(DelegatingLexer):
"""
Subclass of the `SmartyLexer` that highlights unlexed data with the
`HtmlLexer`.
Nested Javascript and CSS are highlighted too.
"""
name = 'HTML+Smarty'
aliases = ['html+smarty']
alias_filenames = ['*.html', '*.htm', '*.xhtml', '*.tpl']
mimetypes = ['text/html+smarty']
def __init__(self, **options):
super(HtmlSmartyLexer, self).__init__(HtmlLexer, SmartyLexer, **options)
def analyse_text(text):
rv = SmartyLexer.analyse_text(text) - 0.01
if html_doctype_matches(text):
rv += 0.5
return rv
class XmlSmartyLexer(DelegatingLexer):
"""
Subclass of the `SmartyLexer` that highlights unlexed data with the
`XmlLexer`.
"""
name = 'XML+Smarty'
aliases = ['xml+smarty']
alias_filenames = ['*.xml', '*.tpl']
mimetypes = ['application/xml+smarty']
def __init__(self, **options):
super(XmlSmartyLexer, self).__init__(XmlLexer, SmartyLexer, **options)
def analyse_text(text):
rv = SmartyLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
return rv
class CssSmartyLexer(DelegatingLexer):
"""
Subclass of the `SmartyLexer` that highlights unlexed data with the
`CssLexer`.
"""
name = 'CSS+Smarty'
aliases = ['css+smarty']
alias_filenames = ['*.css', '*.tpl']
mimetypes = ['text/css+smarty']
def __init__(self, **options):
super(CssSmartyLexer, self).__init__(CssLexer, SmartyLexer, **options)
def analyse_text(text):
return SmartyLexer.analyse_text(text) - 0.05
class JavascriptSmartyLexer(DelegatingLexer):
"""
Subclass of the `SmartyLexer` that highlights unlexed data with the
`JavascriptLexer`.
"""
name = 'JavaScript+Smarty'
aliases = ['js+smarty', 'javascript+smarty']
alias_filenames = ['*.js', '*.tpl']
mimetypes = ['application/x-javascript+smarty',
'text/x-javascript+smarty',
'text/javascript+smarty']
def __init__(self, **options):
super(JavascriptSmartyLexer, self).__init__(JavascriptLexer, SmartyLexer,
**options)
def analyse_text(text):
return SmartyLexer.analyse_text(text) - 0.05
class HtmlDjangoLexer(DelegatingLexer):
"""
Subclass of the `DjangoLexer` that highlights unlexed data with the
`HtmlLexer`.
Nested Javascript and CSS are highlighted too.
"""
name = 'HTML+Django/Jinja'
aliases = ['html+django', 'html+jinja']
alias_filenames = ['*.html', '*.htm', '*.xhtml']
mimetypes = ['text/html+django', 'text/html+jinja']
def __init__(self, **options):
super(HtmlDjangoLexer, self).__init__(HtmlLexer, DjangoLexer, **options)
def analyse_text(text):
rv = DjangoLexer.analyse_text(text) - 0.01
if html_doctype_matches(text):
rv += 0.5
return rv
class XmlDjangoLexer(DelegatingLexer):
"""
Subclass of the `DjangoLexer` that highlights unlexed data with the
`XmlLexer`.
"""
name = 'XML+Django/Jinja'
aliases = ['xml+django', 'xml+jinja']
alias_filenames = ['*.xml']
mimetypes = ['application/xml+django', 'application/xml+jinja']
def __init__(self, **options):
super(XmlDjangoLexer, self).__init__(XmlLexer, DjangoLexer, **options)
def analyse_text(text):
rv = DjangoLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
return rv
class CssDjangoLexer(DelegatingLexer):
"""
Subclass of the `DjangoLexer` that highlights unlexed data with the
`CssLexer`.
"""
name = 'CSS+Django/Jinja'
aliases = ['css+django', 'css+jinja']
alias_filenames = ['*.css']
mimetypes = ['text/css+django', 'text/css+jinja']
def __init__(self, **options):
super(CssDjangoLexer, self).__init__(CssLexer, DjangoLexer, **options)
def analyse_text(text):
return DjangoLexer.analyse_text(text) - 0.05
class JavascriptDjangoLexer(DelegatingLexer):
"""
Subclass of the `DjangoLexer` that highlights unlexed data with the
`JavascriptLexer`.
"""
name = 'JavaScript+Django/Jinja'
aliases = ['js+django', 'javascript+django',
'js+jinja', 'javascript+jinja']
alias_filenames = ['*.js']
mimetypes = ['application/x-javascript+django',
'application/x-javascript+jinja',
'text/x-javascript+django',
'text/x-javascript+jinja',
'text/javascript+django',
'text/javascript+jinja']
def __init__(self, **options):
super(JavascriptDjangoLexer, self).__init__(JavascriptLexer, DjangoLexer,
**options)
def analyse_text(text):
return DjangoLexer.analyse_text(text) - 0.05
class JspRootLexer(RegexLexer):
"""
Base for the `JspLexer`. Yields `Token.Other` for areas outside of
JSP tags.
*New in Pygments 0.7.*
"""
tokens = {
'root': [
(r'<%\S?', Keyword, 'sec'),
# FIXME: I want to make these keywords but still parse attributes.
(r'</?jsp:(forward|getProperty|include|plugin|setProperty|useBean).*?>',
Keyword),
(r'[^<]+', Other),
(r'<', Other),
],
'sec': [
(r'%>', Keyword, '#pop'),
# note: '\w\W' != '.' without DOTALL.
(r'[\w\W]+?(?=%>|\Z)', using(JavaLexer)),
],
}
class JspLexer(DelegatingLexer):
"""
Lexer for Java Server Pages.
*New in Pygments 0.7.*
"""
name = 'Java Server Page'
aliases = ['jsp']
filenames = ['*.jsp']
mimetypes = ['application/x-jsp']
def __init__(self, **options):
super(JspLexer, self).__init__(XmlLexer, JspRootLexer, **options)
def analyse_text(text):
rv = JavaLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
if '<%' in text and '%>' in text:
rv += 0.1
return rv
class EvoqueLexer(RegexLexer):
"""
For files using the Evoque templating system.
*New in Pygments 1.1.*
"""
name = 'Evoque'
aliases = ['evoque']
filenames = ['*.evoque']
mimetypes = ['application/x-evoque']
flags = re.DOTALL
tokens = {
'root': [
(r'[^#$]+', Other),
(r'#\[', Comment.Multiline, 'comment'),
(r'\$\$', Other),
# svn keywords
(r'\$\w+:[^$\n]*\$', Comment.Multiline),
# directives: begin, end
(r'(\$)(begin|end)(\{(%)?)(.*?)((?(4)%)\})',
bygroups(Punctuation, Name.Builtin, Punctuation, None,
String, Punctuation)),
# directives: evoque, overlay
# see doc for handling first name arg: /directives/evoque/
#+ minor inconsistency: the "name" in e.g. $overlay{name=site_base}
# should be using(PythonLexer), not passed out as String
(r'(\$)(evoque|overlay)(\{(%)?)(\s*[#\w\-"\'.]+[^=,%}]+?)?'
r'(.*?)((?(4)%)\})',
bygroups(Punctuation, Name.Builtin, Punctuation, None,
String, using(PythonLexer), Punctuation)),
# directives: if, for, prefer, test
(r'(\$)(\w+)(\{(%)?)(.*?)((?(4)%)\})',
bygroups(Punctuation, Name.Builtin, Punctuation, None,
using(PythonLexer), Punctuation)),
# directive clauses (no {} expression)
(r'(\$)(else|rof|fi)', bygroups(Punctuation, Name.Builtin)),
# expressions
(r'(\$\{(%)?)(.*?)((!)(.*?))?((?(2)%)\})',
bygroups(Punctuation, None, using(PythonLexer),
Name.Builtin, None, None, Punctuation)),
(r'#', Other),
],
'comment': [
(r'[^\]#]', Comment.Multiline),
(r'#\[', Comment.Multiline, '#push'),
(r'\]#', Comment.Multiline, '#pop'),
(r'[\]#]', Comment.Multiline)
],
}
class EvoqueHtmlLexer(DelegatingLexer):
"""
Subclass of the `EvoqueLexer` that highlights unlexed data with the
`HtmlLexer`.
*New in Pygments 1.1.*
"""
name = 'HTML+Evoque'
aliases = ['html+evoque']
filenames = ['*.html']
mimetypes = ['text/html+evoque']
def __init__(self, **options):
super(EvoqueHtmlLexer, self).__init__(HtmlLexer, EvoqueLexer,
**options)
class EvoqueXmlLexer(DelegatingLexer):
"""
Subclass of the `EvoqueLexer` that highlights unlexed data with the
`XmlLexer`.
*New in Pygments 1.1.*
"""
name = 'XML+Evoque'
aliases = ['xml+evoque']
filenames = ['*.xml']
mimetypes = ['application/xml+evoque']
def __init__(self, **options):
super(EvoqueXmlLexer, self).__init__(XmlLexer, EvoqueLexer,
**options)
class ColdfusionLexer(RegexLexer):
"""
Coldfusion statements
"""
name = 'cfstatement'
aliases = ['cfs']
filenames = []
mimetypes = []
flags = re.IGNORECASE | re.MULTILINE
tokens = {
'root': [
(r'//.*', Comment),
(r'\+\+|--', Operator),
(r'[-+*/^&=!]', Operator),
(r'<=|>=|<|>', Operator),
(r'mod\b', Operator),
(r'(eq|lt|gt|lte|gte|not|is|and|or)\b', Operator),
(r'\|\||&&', Operator),
(r'"', String.Double, 'string'),
# There is a special rule for allowing html in single quoted
# strings, evidently.
(r"'.*?'", String.Single),
(r'\d+', Number),
(r'(if|else|len|var|case|default|break|switch)\b', Keyword),
(r'([A-Za-z_$][A-Za-z0-9_.]*)(\s*)(\()',
bygroups(Name.Function, Text, Punctuation)),
(r'[A-Za-z_$][A-Za-z0-9_.]*', Name.Variable),
(r'[()\[\]{};:,.\\]', Punctuation),
(r'\s+', Text),
],
'string': [
(r'""', String.Double),
(r'#.+?#', String.Interp),
(r'[^"#]+', String.Double),
(r'#', String.Double),
(r'"', String.Double, '#pop'),
],
}
class ColdfusionMarkupLexer(RegexLexer):
"""
Coldfusion markup only
"""
name = 'Coldfusion'
aliases = ['cf']
filenames = []
mimetypes = []
tokens = {
'root': [
(r'[^<]+', Other),
include('tags'),
(r'<[^<>]*', Other),
],
'tags': [
(r'(?s)<!---.*?--->', Comment.Multiline),
(r'(?s)<!--.*?-->', Comment),
(r'<cfoutput.*?>', Name.Builtin, 'cfoutput'),
(r'(?s)(<cfscript.*?>)(.+?)(</cfscript.*?>)',
bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)),
# negative lookbehind is for strings with embedded >
(r'(?s)(</?cf(?:component|include|if|else|elseif|loop|return|'
r'dbinfo|dump|abort|location|invoke|throw|file|savecontent|'
r'mailpart|mail|header|content|zip|image|lock|argument|try|'
r'catch|break|directory|http|set|function|param)\b)(.*?)((?<!\\)>)',
bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)),
],
'cfoutput': [
(r'[^#<]+', Other),
(r'(#)(.*?)(#)', bygroups(Punctuation, using(ColdfusionLexer),
Punctuation)),
#(r'<cfoutput.*?>', Name.Builtin, '#push'),
(r'</cfoutput.*?>', Name.Builtin, '#pop'),
include('tags'),
(r'(?s)<[^<>]*', Other),
(r'#', Other),
],
}
class ColdfusionHtmlLexer(DelegatingLexer):
"""
Coldfusion markup in html
"""
name = 'Coldfusion HTML'
aliases = ['cfm']
filenames = ['*.cfm', '*.cfml', '*.cfc']
mimetypes = ['application/x-coldfusion']
def __init__(self, **options):
super(ColdfusionHtmlLexer, self).__init__(HtmlLexer, ColdfusionMarkupLexer,
**options)
class SspLexer(DelegatingLexer):
"""
Lexer for Scalate Server Pages.
*New in Pygments 1.4.*
"""
name = 'Scalate Server Page'
aliases = ['ssp']
filenames = ['*.ssp']
mimetypes = ['application/x-ssp']
def __init__(self, **options):
super(SspLexer, self).__init__(XmlLexer, JspRootLexer, **options)
def analyse_text(text):
rv = 0.0
if re.search(r'val \w+\s*:', text):
rv += 0.6
if looks_like_xml(text):
rv += 0.2
if '<%' in text and '%>' in text:
rv += 0.1
return rv
class TeaTemplateRootLexer(RegexLexer):
"""
Base for the `TeaTemplateLexer`. Yields `Token.Other` for areas outside of
code blocks.
*New in Pygments 1.5.*
"""
tokens = {
'root': [
(r'<%\S?', Keyword, 'sec'),
(r'[^<]+', Other),
(r'<', Other),
],
'sec': [
(r'%>', Keyword, '#pop'),
# note: '\w\W' != '.' without DOTALL.
(r'[\w\W]+?(?=%>|\Z)', using(TeaLangLexer)),
],
}
class TeaTemplateLexer(DelegatingLexer):
"""
Lexer for `Tea Templates <http://teatrove.org/>`_.
*New in Pygments 1.5.*
"""
name = 'Tea'
aliases = ['tea']
filenames = ['*.tea']
mimetypes = ['text/x-tea']
def __init__(self, **options):
super(TeaTemplateLexer, self).__init__(XmlLexer,
TeaTemplateRootLexer, **options)
def analyse_text(text):
rv = TeaLangLexer.analyse_text(text) - 0.01
if looks_like_xml(text):
rv += 0.4
if '<%' in text and '%>' in text:
rv += 0.1
return rv
class LassoHtmlLexer(DelegatingLexer):
"""
Subclass of the `LassoLexer` which highlights unhandled data with the
`HtmlLexer`.
Nested JavaScript and CSS are also highlighted.
*New in Pygments 1.6.*
"""
name = 'HTML+Lasso'
aliases = ['html+lasso']
alias_filenames = ['*.html', '*.htm', '*.xhtml', '*.lasso', '*.lasso[89]',
'*.incl', '*.inc', '*.las']
mimetypes = ['text/html+lasso',
'application/x-httpd-lasso',
'application/x-httpd-lasso[89]']
def __init__(self, **options):
super(LassoHtmlLexer, self).__init__(HtmlLexer, LassoLexer, **options)
def analyse_text(text):
rv = LassoLexer.analyse_text(text)
if re.search(r'<\w+>', text, re.I):
rv += 0.2
if html_doctype_matches(text):
rv += 0.5
return rv
class LassoXmlLexer(DelegatingLexer):
"""
Subclass of the `LassoLexer` which highlights unhandled data with the
`XmlLexer`.
*New in Pygments 1.6.*
"""
name = 'XML+Lasso'
aliases = ['xml+lasso']
alias_filenames = ['*.xml', '*.lasso', '*.lasso[89]',
'*.incl', '*.inc', '*.las']
mimetypes = ['application/xml+lasso']
def __init__(self, **options):
super(LassoXmlLexer, self).__init__(XmlLexer, LassoLexer, **options)
def analyse_text(text):
rv = LassoLexer.analyse_text(text)
if looks_like_xml(text):
rv += 0.5
return rv
class LassoCssLexer(DelegatingLexer):
"""
Subclass of the `LassoLexer` which highlights unhandled data with the
`CssLexer`.
*New in Pygments 1.6.*
"""
name = 'CSS+Lasso'
aliases = ['css+lasso']
alias_filenames = ['*.css']
mimetypes = ['text/css+lasso']
def __init__(self, **options):
options['requiredelimiters'] = True
super(LassoCssLexer, self).__init__(CssLexer, LassoLexer, **options)
def analyse_text(text):
rv = LassoLexer.analyse_text(text)
if re.search(r'\w+:.+;', text):
rv += 0.1
if 'padding:' in text:
rv += 0.1
return rv
class LassoJavascriptLexer(DelegatingLexer):
"""
Subclass of the `LassoLexer` which highlights unhandled data with the
`JavascriptLexer`.
*New in Pygments 1.6.*
"""
name = 'JavaScript+Lasso'
aliases = ['js+lasso', 'javascript+lasso']
alias_filenames = ['*.js']
mimetypes = ['application/x-javascript+lasso',
'text/x-javascript+lasso',
'text/javascript+lasso']
def __init__(self, **options):
options['requiredelimiters'] = True
super(LassoJavascriptLexer, self).__init__(JavascriptLexer, LassoLexer,
**options)
def analyse_text(text):
rv = LassoLexer.analyse_text(text)
if 'function' in text:
rv += 0.2
return rv
|
agpl-3.0
|
mensler/ansible
|
lib/ansible/plugins/connection/ssh.py
|
25
|
40554
|
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright 2015 Abhijit Menon-Sen <ams@2ndQuadrant.com>
# Copyright 2017 Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
'''
DOCUMENTATION:
connection: ssh
short_description: connect via ssh client binary
description:
- This connection plugin allows Ansible to communicate with the target machines via the normal ssh command-line tools.
author: ansible (@core)
version_added: historical
options:
_host:
description: Hostname/ip to connect to.
default: inventory_hostname
host_vars:
- ansible_host
- ansible_ssh_host
_host_key_checking:
type: bool
description: Determines if ssh should check host keys
config:
- section: defaults
key: 'host_key_checking'
env_vars:
- ANSIBLE_HOST_KEY_CHECKING
_password:
description: Authentication password for the C(remote_user). Can be supplied as a CLI option.
host_vars:
- ansible_password
- ansible_ssh_pass
_ssh_args:
description: Arguments to pass to all ssh cli tools
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
config:
- section: 'ssh_connection'
key: 'ssh_args'
env_vars:
- ANSIBLE_SSH_ARGS
_ssh_common_args:
description: Common extra args for ssh CLI tools
host_vars:
- ansible_ssh_common_args
_scp_extra_args:
description: Extra arguments exclusive to the 'scp' CLI
host_vars:
- ansible_scp_extra_args
_sftp_extra_args:
description: Extra arguments exclusive to the 'sftp' CLI
host_vars:
- ansible_sftp_extra_args
_ssh_extra_args:
description: Extra arguments exclusive to the 'ssh' CLI
host_vars:
- ansible_ssh_extra_args
port:
description: Remote port to connect to.
type: int
config:
- section: defaults
key: remote_port
default: 22
env_vars:
- ANSIBLE_REMOTE_PORT
host_vars:
- ansible_port
- ansible_ssh_port
remote_user:
description:
- User name with which to log in to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the ssh client binary choose the user as it normally does.
config:
- section: defaults
key: remote_user
env_vars:
- ANSIBLE_REMOTE_USER
host_vars:
- ansible_user
- ansible_ssh_user
'''
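# For orientation, a sketch (illustrative values, not authoritative) of
# what the defaults above typically expand to; the exact flags and
# ordering are produced by Connection._build_command() further down:
#
#   ssh -C -o ControlMaster=auto -o ControlPersist=60s \
#       -o Port=22 -o User=deploy -o ConnectTimeout=10 <host> <cmd>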
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fcntl
import hashlib
import os
import pty
import socket
import subprocess
import time
from functools import wraps
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.basic import BOOLEANS
from ansible.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.utils.path import unfrackpath, makedirs_safe
boolean = C.mk_boolean
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
SSHPASS_AVAILABLE = None
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if:
* remaining_tries is < 2
* the retries limit has been reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(C.ANSIBLE_SSH_RETRIES) + 1
cmd_summary = "%s..." % args[0]
for attempt in range(remaining_tries):
try:
try:
return_tuple = func(self, *args, **kwargs)
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 = failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError) as e:
# Retry one more time because of the ControlPersist broken pipe (see #16731)
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
if return_tuple[0] != 255:
break
else:
raise AnsibleConnectionFailure("Failed to connect to the host via ssh: %s" % to_native(return_tuple[2]))
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = "ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt, cmd_summary, pause)
else:
msg = "ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), pausing for %d seconds" % (attempt, e, cmd_summary, pause)
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
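# A small helper sketching the backoff schedule implemented above
# (illustrative only, not used elsewhere): attempt n sleeps
# min(2**n - 1, 30) seconds before the next try.
def _backoff_schedule(tries):
    # e.g. _backoff_schedule(7) -> [0, 1, 3, 7, 15, 30, 30]
    return [min(2 ** attempt - 1, 30) for attempt in range(tries)]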
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
become_methods = frozenset(C.BECOME_METHODS).difference(['runas'])
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = C.ANSIBLE_SSH_CONTROL_PATH
self.control_path_dir = C.ANSIBLE_SSH_CONTROL_PATH_DIR
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
def transport_test(self, connect_timeout):
''' Test the transport mechanism, if available '''
port = int(self.port or 22)
display.vvv("attempting transport test to %s:%s" % (self.host, port))
sock = socket.create_connection((self.host, port), connect_timeout)
sock.close()
@staticmethod
def _create_control_path(host, port, user):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
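# Usage sketch: the returned template still carries the %(directory)s
# placeholder, which _build_command() fills in later (digest value is
# illustrative; it varies with host/port/user):
#
#     Connection._create_control_path('web1', 22, 'root')
#     # -> '%(directory)s/1b48d2a7c0'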
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
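# A minimal sketch of the scan above:
#
#     _persistence_controls([b'ssh', b'-o', b'ControlPersist=60s'])
#     # -> (True, False): ControlPersist found but no ControlPath, so
#     #    _build_command() will append a generated ControlPath.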
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (i.e. a list is okay but a
StringIO would not be)
:arg explanation: A text string explaining why the arguments
were added. It will be displayed when verbosity is high enough.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self._play_context.remote_addr)
b_command += b_args
def _build_command(self, binary, *other_args):
'''
Takes a binary (ssh, scp, sftp) and optional extra arguments and returns
a command line as an array that can be passed to subprocess.Popen.
'''
b_command = []
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
if self._play_context.password:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords, you must install the sshpass program")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
if binary == 'ssh':
b_command += [to_bytes(self._play_context.ssh_executable, errors='surrogate_or_strict')]
else:
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if binary == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
if self._play_context.password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if self._play_context.verbosity > 3:
b_command.append(b'-vvv')
#
# Next, we add [ssh_connection]ssh_args from ansible.cfg.
#
if self._play_context.ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(self._play_context.ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments controlled by configuration file settings
# (e.g. host_key_checking) or inventory variables (ansible_ssh_port) or
# a combination thereof.
if not C.HOST_KEY_CHECKING:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
if self._play_context.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self._play_context.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self._play_context.private_key_file
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not self._play_context.password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_pass not set"
)
user = self._play_context.remote_user
if user:
self._add_args(b_command,
(b"-o", b"User=" + to_bytes(self._play_context.remote_user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
self._add_args(b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(self._play_context.timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(binary)):
attr = getattr(self._play_context, opt, None)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"PlayContext set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b"ControlPath=" + to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
def _send_initial_data(self, fh, in_data):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug('Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError):
raise AnsibleConnectionFailure('SSH Error: data could not be sent to remote host "%s". Make sure this host can be reached over ssh' % self.host)
display.debug('Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
#display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if self._play_context.prompt and self.check_password_prompt(b_line):
display.debug("become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self._play_context.success_key and self.check_become_success(b_line):
display.debug("become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.check_incorrect_password(b_line):
display.debug("become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.check_missing_password(b_line):
display.debug("become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
display_cmd = list(map(shlex_quote, map(to_text, cmd)))
display.vvv(u'SSH: EXEC {0}'.format(u' '.join(display_cmd)), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and self._play_context.password:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
if PY3 and self._play_context.password:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = p.stdin
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if self._play_context.password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(self._play_context.password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
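# The machine only ever advances through this list in order:
# awaiting_prompt -> awaiting_escalation -> ready_to_send -> awaiting_exit.
# The current state is tracked as an index, so 'state += 1' moves it forward.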
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if b'ssh' in cmd:
if self._play_context.prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], self._play_context.prompt))
elif self._play_context.become and self._play_context.success_key:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], self._play_context.success_key))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self._play_context.timeout
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
### TODO: bcoca would like to use SelectSelector() when the number of open
# filehandles is low, then switch to more efficient selectors when it is higher.
# select is faster when the number of filehandles is low.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data)
state += 1
try:
while True:
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if p.poll() is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug("stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug('Sending become_pass in response to prompt')
stdin.write(to_bytes(self._play_context.become_pass) + b'\n')
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.debug('Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.debug('Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self._play_context.become_method)
elif self._flags['become_nopasswd_error']:
display.debug('Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self._play_context.become_method)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.debug('Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self._play_context.become_method)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if p.poll() is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, so set the select
# timeout to zero to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin after process is terminated and stdout/stderr are read
# completely (see also issue #848)
stdin.close()
if C.HOST_KEY_CHECKING:
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255 and controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('SSH Error: data could not be sent because of ControlPersist broken pipe.')
if p.returncode == 255 and in_data and checkrc:
raise AnsibleConnectionFailure('SSH Error: data could not be sent to remote host "%s". Make sure this host can be reached over ssh' % self.host)
return (p.returncode, b_stdout, b_stderr)
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self._play_context.ssh_transfer_method
if ssh_transfer_method is not None:
if ssh_transfer_method not in ('smart', 'sftp', 'scp', 'piped'):
raise AnsibleOptionsError('transfer_method needs to be one of [smart|sftp|scp|piped]')
if ssh_transfer_method == 'smart':
methods = ['sftp', 'scp', 'piped']
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
scp_if_ssh = C.DEFAULT_SCP_IF_SSH
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = ['sftp', 'scp', 'piped']
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
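# 'methods' is now an ordered preference list; each mechanism below is
# tried in turn, falling through to the next on a non-zero return code.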
success = False
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command('sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex_quote(in_path), shlex_quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._run(cmd, in_data, checkrc=False)
elif method == 'scp':
if sftp_action == 'get':
cmd = self._build_command('scp', u'{0}:{1}'.format(host, shlex_quote(in_path)), out_path)
else:
cmd = self._build_command('scp', in_path, u'{0}:{1}'.format(host, shlex_quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
in_data = in_file.read()
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s' % (out_path, BUFSIZE), in_data=in_data)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(msg='%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(msg='%s' % to_native(stdout))
display.debug(msg='%s' % to_native(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to {0} {1}:\n{2}\n{3}"\
.format(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self._play_context.ssh_executable
if not in_data and sudoable:
args = (ssh_executable, '-tt', self.host, cmd)
else:
args = (ssh_executable, self.host, cmd)
cmd = self._build_command(*args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
cmd = list(map(to_bytes, self._build_command(self._play_context.ssh_executable, '-O', 'stop', self.host)))
controlpersist, controlpath = self._persistence_controls(cmd)
if controlpersist:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
display.vvv(u'sending stop: %s' % cmd)
self.close()
def close(self):
self._connected = False
|
gpl-3.0
|
AladdinSonni/phantomjs
|
src/qt/qtwebkit/Tools/Scripts/webkitpy/common/system/systemhost.py
|
129
|
2196
|
# Copyright (c) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import platform
import sys
from webkitpy.common.system import environment, executive, file_lock, filesystem, platforminfo, user, workspace
class SystemHost(object):
def __init__(self):
self.executive = executive.Executive()
self.filesystem = filesystem.FileSystem()
self.user = user.User()
self.platform = platforminfo.PlatformInfo(sys, platform, self.executive)
self.workspace = workspace.Workspace(self.filesystem, self.executive)
def copy_current_environment(self):
return environment.Environment(os.environ.copy())
def make_file_lock(self, path):
return file_lock.FileLock(path)
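# A minimal usage sketch (the lock path here is only an example):
#
# host = SystemHost()
# env = host.copy_current_environment()
# lock = host.make_file_lock('/tmp/example.lock')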
|
bsd-3-clause
|
msegado/edx-platform
|
common/djangoapps/terrain/stubs/youtube.py
|
88
|
6138
|
"""
Stub implementation of YouTube for acceptance tests.
To start this stub server on its own from Vagrant:
1.) Locally, modify your Vagrantfile so that it contains:
config.vm.network :forwarded_port, guest: 8031, host: 8031
2.) From within Vagrant dev environment do:
cd common/djangoapps/terrain
python -m stubs.start youtube 8031
3.) Locally, try accessing http://localhost:8031/ and see that
you get "Unused url" message inside the browser.
"""
from .http import StubHttpRequestHandler, StubHttpService
import json
import time
import requests
from urlparse import urlparse
from collections import OrderedDict
class StubYouTubeHandler(StubHttpRequestHandler):
"""
A handler for Youtube GET requests.
"""
# Default number of seconds to delay the response to simulate network latency.
DEFAULT_DELAY_SEC = 0.5
def do_DELETE(self): # pylint: disable=invalid-name
"""
Allow callers to delete all the server configurations using the /del_config URL.
"""
if self.path == "/del_config" or self.path == "/del_config/":
self.server.config = dict()
self.log_message("Reset Server Configuration.")
self.send_response(200)
else:
self.send_response(404)
def do_GET(self):
"""
Handle a GET request from the client and send a response back.
"""
self.log_message(
"Youtube provider received GET request to path {}".format(self.path)
)
if 'get_config' in self.path:
self.send_json_response(self.server.config)
elif 'test_transcripts_youtube' in self.path:
if 't__eq_exist' in self.path:
status_message = "".join([
'<?xml version="1.0" encoding="utf-8" ?>',
'<transcript><text start="1.0" dur="1.0">',
'Equal transcripts</text></transcript>'
])
self.send_response(
200, content=status_message, headers={'Content-type': 'application/xml'}
)
elif 't_neq_exist' in self.path:
status_message = "".join([
'<?xml version="1.0" encoding="utf-8" ?>',
'<transcript><text start="1.1" dur="5.5">',
'Transcripts sample, different than on server',
'</text></transcript>'
])
self.send_response(
200, content=status_message, headers={'Content-type': 'application/xml'}
)
else:
self.send_response(404)
elif 'test_youtube' in self.path:
params = urlparse(self.path)
youtube_id = params.path.split('/').pop()
if self.server.config.get('youtube_api_private_video'):
self._send_private_video_response(youtube_id, "I'm youtube private video.")
else:
self._send_video_response(youtube_id, "I'm youtube.")
elif 'get_youtube_api' in self.path:
# Delay the response to simulate network latency
time.sleep(self.server.config.get('time_to_response', self.DEFAULT_DELAY_SEC))
if self.server.config.get('youtube_api_blocked'):
self.send_response(404, content='', headers={'Content-type': 'text/plain'})
else:
# Get the response to send from YouTube.
# We need to do this every time because Google sometimes sends different responses
# as part of their own experiments, which has caused our tests to become "flaky"
self.log_message("Getting iframe api from youtube.com")
iframe_api_response = requests.get('https://www.youtube.com/iframe_api').content.strip("\n")
self.send_response(200, content=iframe_api_response, headers={'Content-type': 'text/html'})
else:
self.send_response(
404, content="Unused url", headers={'Content-type': 'text/plain'}
)
def _send_video_response(self, youtube_id, message):
"""
Send message back to the client for video player requests.
Requires sending back callback id.
"""
# Delay the response to simulate network latency
time.sleep(self.server.config.get('time_to_response', self.DEFAULT_DELAY_SEC))
# Construct the response content
callback = self.get_params['callback']
data = OrderedDict({
# 'items' must be a JSON array of objects; list(OrderedDict(...)) would
# serialise only the keys, so build the one-element list explicitly.
'items': [
OrderedDict({
'contentDetails': OrderedDict({
'id': youtube_id,
'duration': 'PT2M20S',
})
})
]
})
response = "{cb}({data})".format(cb=callback, data=json.dumps(data))
self.send_response(200, content=response, headers={'Content-type': 'text/html'})
self.log_message("Youtube: sent response {}".format(message))
def _send_private_video_response(self, youtube_id, message):
"""
Send a private video error message back to the client for video player
requests. Accepts youtube_id for parity with the _send_video_response call.
"""
# Construct the response content
callback = self.get_params['callback']
data = OrderedDict({
"error": OrderedDict({
"code": 403,
"errors": [
{
"code": "ServiceForbiddenException",
"domain": "GData",
"internalReason": "Private video"
}
],
"message": message,
})
})
response = "{cb}({data})".format(cb=callback, data=json.dumps(data))
self.send_response(200, content=response, headers={'Content-type': 'text/html'})
self.log_message("Youtube: sent response {}".format(message))
class StubYouTubeService(StubHttpService):
"""
A stub Youtube provider server that responds to GET requests to localhost.
"""
HANDLER_CLASS = StubYouTubeHandler
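# A minimal usage sketch; the port_num keyword is an assumption based on the
# "python -m stubs.start youtube 8031" invocation in the module docstring:
#
# server = StubYouTubeService(port_num=8031)
# ... exercise http://localhost:8031/ ...
# server.shutdown()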
|
agpl-3.0
|
fpsw/Servo
|
servo/tasks.py
|
1
|
5336
|
# -*- coding: utf-8 -*-
from email.parser import BytesParser
from django.core.cache import cache
from servo.lib.utils import empty
from servo.exceptions import ConfigurationError
from servo.models import Configuration, User, Order, Note, Template
def get_rules():
"""
Get the rules from the JSON file and cache them.
Fail silently if not configured.
@TODO: Need GUI for managing local_rules.json!
"""
import json
try:
with open("local_rules.json", "r") as fh:
rules = json.load(fh)
except IOError:
return []
cache.set('rules', rules)
return rules
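# Expected shape of a local_rules.json entry, inferred from apply_rules below:
# [{"event": "<event action>", "match": "<event description>",
#   "action": "set_queue|set_priority|send_email|send_sms", "data": "..."}]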
def apply_rules(event):
"""
Applies configured rules to an event
event is the Event object that was triggered
"""
counter = 0
rules = cache.get('rules', get_rules())
order = event.content_object
user = event.triggered_by
for r in rules:
match = r.get('match', event.description)
if (r['event'] == event.action and match == event.description):
if isinstance(r['data'], dict):
tpl_id = r['data']['template']
r['data'] = Template.objects.get(pk=tpl_id).render(order)
else:
r['data'] = Template(content=r['data']).render(order)
if r['action'] == "set_queue":
order.set_queue(r['data'], user)
if r['action'] == "set_priority":
pass
if r['action'] == "send_email":
try:
email = order.customer.valid_email()
except Exception:
continue # skip customers w/o valid emails
note = Note(order=order, created_by=user)
note.body = r['data']
note.recipient = email
note.render_subject({'note': note})
note.save()
try:
note.send_mail(user)
except ValueError as e:
print('Sending email failed (%s)' % e)
if r['action'] == "send_sms":
number = 0
try:
number = order.customer.get_standard_phone()
except Exception:
continue # skip customers w/o valid phone numbers
note = Note(order=order, created_by=user)
note.body = r['data']
note.save()
try:
note.send_sms(number, user)
except ValueError as e:
print('Sending SMS to %s failed (%s)' % (number, e))
counter += 1
return '%d/%d rules processed' % (counter, len(rules))
def batch_process(user, data):
"""
/orders/batch
"""
processed = 0
orders = data['orders'].strip().split("\r\n")
for o in orders:
try:
order = Order.objects.get(code=o)
except Exception as e:
continue
if data['status'] and order.queue:
status = order.queue.queuestatus_set.get(status_id=data['status'])
order.set_status(status, user)
if data['queue']:
order.set_queue(data['queue'], user)
if len(data['sms']) > 0:
try:
number = order.customer.get_standard_phone()
note = Note(order=order, created_by=user, body=data['sms'])
note.render_body({'order': order})
note.save()
try:
note.send_sms(number, user)
except Exception as e:
note.delete()
print("Failed to send SMS to: %s" % number)
except AttributeError as e: # customer has no phone number
continue
if len(data['email']) > 0:
note = Note(order=order, created_by=user, body=data['email'])
note.sender = user.email
try:
note.recipient = order.customer.email
note.render_subject({'note': note})
note.render_body({'order': order})
note.save()
note.send_mail(user)
except Exception as e:
# customer has no email address or some other error...
pass
if len(data['note']) > 0:
note = Note(order=order, created_by=user, body=data['note'])
note.render_body({'order': order})
note.save()
processed += 1
return '%d/%d orders processed' % (processed, len(orders))
def check_mail():
"""
Checks IMAP box for incoming mail
"""
uid = Configuration.conf('imap_act')
if empty(uid):
err = 'User account for incoming messages not configured'
raise ConfigurationError(err)
counter = 0
user = User.objects.get(pk=uid)
server = Configuration.get_imap_server()
typ, data = server.search(None, "UnSeen")
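# "UnSeen" matches only messages not yet flagged \Seen; each message is
# flagged below after processing so it is not picked up again.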
for num in data[0].split():
typ, data = server.fetch(num, "(RFC822)")
# parsebytes() returns an email.message.Message built from the raw RFC822 bytes
msg = BytesParser().parsebytes(data[0][1])
Note.from_email(msg, user)
#server.copy(num, 'servo')
server.store(num, '+FLAGS', '\\Seen')
counter += 1
server.close()
server.logout()
return '%d messages processed' % counter
|
bsd-2-clause
|
sestrella/ansible
|
lib/ansible/modules/cloud/lxd/lxd_container.py
|
15
|
22900
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Hiroaki Nakamura <hnakamur@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: lxd_container
short_description: Manage LXD Containers
version_added: "2.2"
description:
- Management of LXD containers
author: "Hiroaki Nakamura (@hnakamur)"
options:
name:
description:
- Name of a container.
required: true
architecture:
description:
- The architecture for the container (e.g. "x86_64" or "i686").
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)
required: false
config:
description:
- 'The config for the container (e.g. {"limits.cpu": "2"}).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)'
- If the container already exists and its "config" values in the metadata
obtained from
GET /1.0/containers/<name>
U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#10containersname)
differ, then this module tries to apply the configurations.
- Keys starting with 'volatile.' are ignored for this comparison.
- Not all config values can be applied to an existing container;
you may need to delete and recreate the container.
required: false
devices:
description:
- 'The devices for the container
(e.g. { "rootfs": { "path": "/dev/kvm", "type": "unix-char" } }).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)'
required: false
ephemeral:
description:
- Whether or not the container is ephemeral (e.g. true or false).
See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1)
required: false
type: bool
source:
description:
- 'The source for the container
(e.g. { "type": "image",
"mode": "pull",
"server": "https://images.linuxcontainers.org",
"protocol": "lxd",
"alias": "ubuntu/xenial/amd64" }).'
- 'See U(https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-1) for complete API documentation.'
- 'Note that C(protocol) accepts two choices: C(lxd) or C(simplestreams)'
required: false
state:
choices:
- started
- stopped
- restarted
- absent
- frozen
description:
- Define the state of a container.
required: false
default: started
timeout:
description:
- A timeout for changing the state of the container.
- This is also used as a timeout for waiting until IPv4 addresses
are assigned to all network interfaces in the container after
starting or restarting.
required: false
default: 30
wait_for_ipv4_addresses:
description:
- If this is true, the C(lxd_container) waits until IPv4 addresses
are assigned to all network interfaces in the container after
starting or restarting.
required: false
default: false
type: bool
force_stop:
description:
- If this is true, the C(lxd_container) forces the container to stop
when stopping or restarting it.
required: false
default: false
type: bool
url:
description:
- The unix domain socket path or the https URL for the LXD server.
required: false
default: unix:/var/lib/lxd/unix.socket
snap_url:
description:
- The unix domain socket path when LXD is installed by snap package manager.
required: false
default: unix:/var/snap/lxd/common/lxd/unix.socket
version_added: '2.8'
client_key:
description:
- The client certificate key file path.
required: false
default: '"{}/.config/lxc/client.key" .format(os.environ["HOME"])'
aliases: [ key_file ]
client_cert:
description:
- The client certificate file path.
required: false
default: '"{}/.config/lxc/client.crt" .format(os.environ["HOME"])'
aliases: [ cert_file ]
trust_password:
description:
- The client trusted password.
- You need to set this password on the LXD server before
running this module, using the following command:
lxc config set core.trust_password <some random password>
See U(https://www.stgraber.org/2016/04/18/lxd-api-direct-interaction/)
- If trust_password is set, this module sends a request for
authentication before sending any requests.
required: false
notes:
- Containers must have a unique name. If you attempt to create a container
with a name that already exists in the user's namespace, the module will
simply return as "unchanged".
- There are two ways to run commands in containers: using the command
module or using the ansible lxd connection plugin bundled in Ansible >=
2.1. The latter requires Python to be installed in the container, which
can be done with the command module.
- You can copy a file from the host to the container
with the Ansible M(copy) and M(template) module and the `lxd` connection plugin.
See the example below.
- You can copy a file in the created container to the localhost
with `command=lxc file pull container_name/dir/filename filename`.
See the first example below.
'''
EXAMPLES = '''
# An example for creating an Ubuntu container and installing python
- hosts: localhost
connection: local
tasks:
- name: Create a started container
lxd_container:
name: mycontainer
state: started
source:
type: image
mode: pull
server: https://images.linuxcontainers.org
protocol: lxd # if you get a 404, try setting protocol: simplestreams
alias: ubuntu/xenial/amd64
profiles: ["default"]
wait_for_ipv4_addresses: true
timeout: 600
- name: check python is installed in container
delegate_to: mycontainer
raw: dpkg -s python
register: python_install_check
failed_when: python_install_check.rc not in [0, 1]
changed_when: false
- name: install python in container
delegate_to: mycontainer
raw: apt-get install -y python
when: python_install_check.rc == 1
# An example for creating an Ubuntu 14.04 container using an image fingerprint.
# This requires changing 'server' and 'protocol' key values, replacing the
# 'alias' key with 'fingerprint' and supplying an appropriate value that
# matches the container image you wish to use.
- hosts: localhost
connection: local
tasks:
- name: Create a started container
lxd_container:
name: mycontainer
state: started
source:
type: image
mode: pull
# Provides current (and older) Ubuntu images with listed fingerprints
server: https://cloud-images.ubuntu.com/releases
# Protocol used by 'ubuntu' remote (as shown by 'lxc remote list')
protocol: simplestreams
# This provides an Ubuntu 14.04 LTS amd64 image from 20150814.
fingerprint: e9a8bdfab6dc
profiles: ["default"]
wait_for_ipv4_addresses: true
timeout: 600
# An example for deleting a container
- hosts: localhost
connection: local
tasks:
- name: Delete a container
lxd_container:
name: mycontainer
state: absent
# An example for restarting a container
- hosts: localhost
connection: local
tasks:
- name: Restart a container
lxd_container:
name: mycontainer
state: restarted
# An example for restarting a container using https to connect to the LXD server
- hosts: localhost
connection: local
tasks:
- name: Restart a container
lxd_container:
url: https://127.0.0.1:8443
# These client_cert and client_key values are equal to the default values.
#client_cert: "{{ lookup('env', 'HOME') }}/.config/lxc/client.crt"
#client_key: "{{ lookup('env', 'HOME') }}/.config/lxc/client.key"
trust_password: mypassword
name: mycontainer
state: restarted
# Note your container must be in the inventory for the below example.
#
# [containers]
# mycontainer ansible_connection=lxd
#
- hosts:
- mycontainer
tasks:
- name: copy /etc/hosts in the created container to localhost with name "mycontainer-hosts"
fetch:
src: /etc/hosts
dest: /tmp/mycontainer-hosts
flat: true
'''
RETURN = '''
addresses:
description: Mapping from the network device name to a list of IPv4 addresses in the container
returned: when state is started or restarted
type: dict
sample: {"eth0": ["10.155.92.191"]}
old_state:
description: The old state of the container
returned: when state is started or restarted
type: str
sample: "stopped"
logs:
description: The logs of requests and responses.
returned: when ansible-playbook is invoked with -vvvv.
type: list
sample: "(too long to be placed here)"
actions:
description: List of actions performed for the container.
returned: success
type: list
sample: '["create", "start"]'
'''
import datetime
import os
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.lxd import LXDClient, LXDClientException
# LXD_ANSIBLE_STATES maps each module state to the name of the method
# invoked to reach that state.
LXD_ANSIBLE_STATES = {
'started': '_started',
'stopped': '_stopped',
'restarted': '_restarted',
'absent': '_destroyed',
'frozen': '_frozen'
}
# ANSIBLE_LXD_STATES is a map of LXD container states to the Ansible
# lxd_container module state parameter value.
ANSIBLE_LXD_STATES = {
'Running': 'started',
'Stopped': 'stopped',
'Frozen': 'frozen',
}
# CONFIG_PARAMS is a list of config attribute names.
CONFIG_PARAMS = [
'architecture', 'config', 'devices', 'ephemeral', 'profiles', 'source'
]
class LXDContainerManagement(object):
def __init__(self, module):
"""Management of LXC containers via Ansible.
:param module: Processed Ansible Module.
:type module: ``object``
"""
self.module = module
self.name = self.module.params['name']
self._build_config()
self.state = self.module.params['state']
self.timeout = self.module.params['timeout']
self.wait_for_ipv4_addresses = self.module.params['wait_for_ipv4_addresses']
self.force_stop = self.module.params['force_stop']
self.addresses = None
self.key_file = self.module.params.get('client_key', None)
self.cert_file = self.module.params.get('client_cert', None)
self.debug = self.module._verbosity >= 4
try:
if os.path.exists(self.module.params['snap_url'].replace('unix:', '')):
self.url = self.module.params['snap_url']
else:
self.url = self.module.params['url']
except Exception as e:
self.module.fail_json(msg=e.msg)
try:
self.client = LXDClient(
self.url, key_file=self.key_file, cert_file=self.cert_file,
debug=self.debug
)
except LXDClientException as e:
self.module.fail_json(msg=e.msg)
self.trust_password = self.module.params.get('trust_password', None)
self.actions = []
def _build_config(self):
self.config = {}
for attr in CONFIG_PARAMS:
param_val = self.module.params.get(attr, None)
if param_val is not None:
self.config[attr] = param_val
def _get_container_json(self):
return self.client.do(
'GET', '/1.0/containers/{0}'.format(self.name),
ok_error_codes=[404]
)
def _get_container_state_json(self):
return self.client.do(
'GET', '/1.0/containers/{0}/state'.format(self.name),
ok_error_codes=[404]
)
@staticmethod
def _container_json_to_module_state(resp_json):
if resp_json['type'] == 'error':
return 'absent'
return ANSIBLE_LXD_STATES[resp_json['metadata']['status']]
def _change_state(self, action, force_stop=False):
body_json = {'action': action, 'timeout': self.timeout}
if force_stop:
body_json['force'] = True
return self.client.do('PUT', '/1.0/containers/{0}/state'.format(self.name), body_json=body_json)
def _create_container(self):
config = self.config.copy()
config['name'] = self.name
self.client.do('POST', '/1.0/containers', config)
self.actions.append('create')
def _start_container(self):
self._change_state('start')
self.actions.append('start')
def _stop_container(self):
self._change_state('stop', self.force_stop)
self.actions.append('stop')
def _restart_container(self):
self._change_state('restart', self.force_stop)
self.actions.append('restart')
def _delete_container(self):
self.client.do('DELETE', '/1.0/containers/{0}'.format(self.name))
self.actions.append('delete')
def _freeze_container(self):
self._change_state('freeze')
self.actions.append('freeze')
def _unfreeze_container(self):
self._change_state('unfreeze')
self.actions.append('unfreeze')
def _container_ipv4_addresses(self, ignore_devices=None):
ignore_devices = ['lo'] if ignore_devices is None else ignore_devices
resp_json = self._get_container_state_json()
network = resp_json['metadata']['network'] or {}
network = dict((k, v) for k, v in network.items() if k not in ignore_devices) or {}
addresses = dict((k, [a['address'] for a in v['addresses'] if a['family'] == 'inet']) for k, v in network.items()) or {}
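# e.g. {'eth0': ['10.155.92.191']} (shape matches the RETURN sample above)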
return addresses
@staticmethod
def _has_all_ipv4_addresses(addresses):
return len(addresses) > 0 and all(len(v) > 0 for v in addresses.values())
def _get_addresses(self):
try:
due = datetime.datetime.now() + datetime.timedelta(seconds=self.timeout)
while datetime.datetime.now() < due:
time.sleep(1)
addresses = self._container_ipv4_addresses()
if self._has_all_ipv4_addresses(addresses):
self.addresses = addresses
return
except LXDClientException as e:
e.msg = 'timeout for getting IPv4 addresses'
raise
def _started(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
elif self.old_state == 'stopped':
self._start_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
if self.wait_for_ipv4_addresses:
self._get_addresses()
def _stopped(self):
if self.old_state == 'absent':
self._create_container()
else:
if self.old_state == 'stopped':
if self._needs_to_apply_container_configs():
self._start_container()
self._apply_container_configs()
self._stop_container()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._stop_container()
def _restarted(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
else:
if self.old_state == 'frozen':
self._unfreeze_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._restart_container()
if self.wait_for_ipv4_addresses:
self._get_addresses()
def _destroyed(self):
if self.old_state != 'absent':
if self.old_state == 'frozen':
self._unfreeze_container()
if self.old_state != 'stopped':
self._stop_container()
self._delete_container()
def _frozen(self):
if self.old_state == 'absent':
self._create_container()
self._start_container()
self._freeze_container()
else:
if self.old_state == 'stopped':
self._start_container()
if self._needs_to_apply_container_configs():
self._apply_container_configs()
self._freeze_container()
def _needs_to_change_container_config(self, key):
if key not in self.config:
return False
if key == 'config':
old_configs = dict((k, v) for k, v in self.old_container_json['metadata'][key].items() if not k.startswith('volatile.'))
for k, v in self.config['config'].items():
if k not in old_configs:
return True
if old_configs[k] != v:
return True
return False
else:
old_configs = self.old_container_json['metadata'][key]
return self.config[key] != old_configs
def _needs_to_apply_container_configs(self):
return (
self._needs_to_change_container_config('architecture') or
self._needs_to_change_container_config('config') or
self._needs_to_change_container_config('ephemeral') or
self._needs_to_change_container_config('devices') or
self._needs_to_change_container_config('profiles')
)
def _apply_container_configs(self):
old_metadata = self.old_container_json['metadata']
body_json = {
'architecture': old_metadata['architecture'],
'config': old_metadata['config'],
'devices': old_metadata['devices'],
'profiles': old_metadata['profiles']
}
if self._needs_to_change_container_config('architecture'):
body_json['architecture'] = self.config['architecture']
if self._needs_to_change_container_config('config'):
for k, v in self.config['config'].items():
body_json['config'][k] = v
if self._needs_to_change_container_config('ephemeral'):
body_json['ephemeral'] = self.config['ephemeral']
if self._needs_to_change_container_config('devices'):
body_json['devices'] = self.config['devices']
if self._needs_to_change_container_config('profiles'):
body_json['profiles'] = self.config['profiles']
self.client.do('PUT', '/1.0/containers/{0}'.format(self.name), body_json=body_json)
self.actions.append('apply_container_configs')
def run(self):
"""Run the main method."""
try:
if self.trust_password is not None:
self.client.authenticate(self.trust_password)
self.old_container_json = self._get_container_json()
self.old_state = self._container_json_to_module_state(self.old_container_json)
action = getattr(self, LXD_ANSIBLE_STATES[self.state])
action()
state_changed = len(self.actions) > 0
result_json = {
'log_verbosity': self.module._verbosity,
'changed': state_changed,
'old_state': self.old_state,
'actions': self.actions
}
if self.client.debug:
result_json['logs'] = self.client.logs
if self.addresses is not None:
result_json['addresses'] = self.addresses
self.module.exit_json(**result_json)
except LXDClientException as e:
state_changed = len(self.actions) > 0
fail_params = {
'msg': e.msg,
'changed': state_changed,
'actions': self.actions
}
if self.client.debug:
fail_params['logs'] = e.kwargs['logs']
self.module.fail_json(**fail_params)
def main():
"""Ansible Main module."""
module = AnsibleModule(
argument_spec=dict(
name=dict(
type='str',
required=True
),
architecture=dict(
type='str',
),
config=dict(
type='dict',
),
devices=dict(
type='dict',
),
ephemeral=dict(
type='bool',
),
profiles=dict(
type='list',
),
source=dict(
type='dict',
),
state=dict(
choices=LXD_ANSIBLE_STATES.keys(),
default='started'
),
timeout=dict(
type='int',
default=30
),
wait_for_ipv4_addresses=dict(
type='bool',
default=False
),
force_stop=dict(
type='bool',
default=False
),
url=dict(
type='str',
default='unix:/var/lib/lxd/unix.socket'
),
snap_url=dict(
type='str',
default='unix:/var/snap/lxd/common/lxd/unix.socket'
),
client_key=dict(
type='str',
default='{0}/.config/lxc/client.key'.format(os.environ['HOME']),
aliases=['key_file']
),
client_cert=dict(
type='str',
default='{0}/.config/lxc/client.crt'.format(os.environ['HOME']),
aliases=['cert_file']
),
trust_password=dict(type='str', no_log=True)
),
supports_check_mode=False,
)
lxd_manage = LXDContainerManagement(module=module)
lxd_manage.run()
if __name__ == '__main__':
main()
|
gpl-3.0
|
atlefren/beerdatabase
|
alembic/versions/2b35f2f2adcb_add_geoloc_tables.py
|
1
|
1700
|
"""add geoloc tables
Revision ID: 2b35f2f2adcb
Revises: 29474f196c96
Create Date: 2015-11-09 00:12:02.604229
"""
# revision identifiers, used by Alembic.
revision = '2b35f2f2adcb'
down_revision = '29474f196c96'
branch_labels = None
depends_on = None
import os
import json
from alembic import op
import sqlalchemy as sa
def upgrade():
path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
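# 'path' resolves to the repository root (three dirname calls up from this
# file), where the seed-data JSON files live under data/.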
countries_table = op.create_table(
'rb_countries',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.Unicode(255)),
)
countries_file = os.path.join(path, 'data', 'ratebeer_countries.json')
with open(countries_file, 'r') as countries_data:
op.bulk_insert(countries_table, json.loads(countries_data.read()))
subregions_table = op.create_table(
'rb_subregions',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.Unicode(255)),
)
subregions_file = os.path.join(path, 'data', 'ratebeer_subregions.json')
with open(subregions_file, 'r') as subregions_data:
op.bulk_insert(subregions_table, json.loads(subregions_data.read()))
countrycodes_table = op.create_table(
'country_codes',
sa.Column('id', sa.Unicode(2), primary_key=True),
sa.Column('name', sa.Unicode(255)),
)
countrycodes_file = os.path.join(path, 'data', '3166-1alpha-2.json')
with open(countrycodes_file, 'r') as countrycodes_data:
op.bulk_insert(countrycodes_table, json.loads(countrycodes_data.read()))
def downgrade():
op.drop_table('rb_countries')
op.drop_table('rb_subregions')
op.drop_table('country_codes')
|
mit
|
hectord/lettuce
|
tests/integration/lib/Django-1.2.5/django/utils/cache.py
|
44
|
9146
|
"""
This module contains helper functions for controlling caching. It does so by
managing the "Vary" header of responses. It includes functions to patch the
header of response objects directly and decorators that change functions to do
that header-patching themselves.
For information on the Vary header, see:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44
Essentially, the "Vary" HTTP header defines which headers a cache should take
into account when building its cache key. Requests with the same path but
different header content for headers named in "Vary" need to get different
cache keys to prevent delivery of wrong content.
An example: i18n middleware would need to distinguish caches by the
"Accept-language" header.
"""
import re
import time
from django.conf import settings
from django.core.cache import cache
from django.utils.encoding import smart_str, iri_to_uri
from django.utils.http import http_date
from django.utils.hashcompat import md5_constructor
from django.utils.translation import get_language
from django.http import HttpRequest
cc_delim_re = re.compile(r'\s*,\s*')
def patch_cache_control(response, **kwargs):
"""
This function patches the Cache-Control header by adding all
keyword arguments to it. The transformation is as follows:
* All keyword parameter names are turned to lowercase, and underscores
are converted to hyphens.
* If the value of a parameter is True (exactly True, not just a
true value), only the parameter name is added to the header.
* All other parameters are added with their value, after applying
str() to it.
"""
def dictitem(s):
t = s.split('=', 1)
if len(t) > 1:
return (t[0].lower(), t[1])
else:
return (t[0].lower(), True)
def dictvalue(t):
if t[1] is True:
return t[0]
else:
return t[0] + '=' + smart_str(t[1])
if response.has_header('Cache-Control'):
cc = cc_delim_re.split(response['Cache-Control'])
cc = dict([dictitem(el) for el in cc])
else:
cc = {}
# If there's already a max-age header but we're being asked to set a new
# max-age, use the minimum of the two ages. In practice this happens when
# a decorator and a piece of middleware both operate on a given view.
if 'max-age' in cc and 'max_age' in kwargs:
kwargs['max_age'] = min(cc['max-age'], kwargs['max_age'])
for (k, v) in kwargs.items():
cc[k.replace('_', '-')] = v
cc = ', '.join([dictvalue(el) for el in cc.items()])
response['Cache-Control'] = cc
def get_max_age(response):
"""
Returns the max-age from the response Cache-Control header as an integer
(or ``None`` if it wasn't found or wasn't an integer).
"""
if not response.has_header('Cache-Control'):
return
cc = dict([_to_tuple(el) for el in
cc_delim_re.split(response['Cache-Control'])])
if 'max-age' in cc:
try:
return int(cc['max-age'])
except (ValueError, TypeError):
pass
def patch_response_headers(response, cache_timeout=None):
"""
Adds some useful headers to the given HttpResponse object:
ETag, Last-Modified, Expires and Cache-Control
Each header is only added if it isn't already set.
cache_timeout is in seconds. The CACHE_MIDDLEWARE_SECONDS setting is used
by default.
"""
if cache_timeout is None:
cache_timeout = settings.CACHE_MIDDLEWARE_SECONDS
if cache_timeout < 0:
cache_timeout = 0 # Can't have max-age negative
if not response.has_header('ETag'):
response['ETag'] = '"%s"' % md5_constructor(response.content).hexdigest()
if not response.has_header('Last-Modified'):
response['Last-Modified'] = http_date()
if not response.has_header('Expires'):
response['Expires'] = http_date(time.time() + cache_timeout)
patch_cache_control(response, max_age=cache_timeout)
def add_never_cache_headers(response):
"""
Adds headers to a response to indicate that a page should never be cached.
"""
patch_response_headers(response, cache_timeout=-1)
def patch_vary_headers(response, newheaders):
"""
Adds (or updates) the "Vary" header in the given HttpResponse object.
newheaders is a list of header names that should be in "Vary". Existing
headers in "Vary" aren't removed.
"""
# Note that we need to keep the original order intact, because cache
# implementations may rely on the order of the Vary contents in, say,
# computing an MD5 hash.
if response.has_header('Vary'):
vary_headers = cc_delim_re.split(response['Vary'])
else:
vary_headers = []
# Use .lower() here so we treat headers as case-insensitive.
existing_headers = set([header.lower() for header in vary_headers])
additional_headers = [newheader for newheader in newheaders
if newheader.lower() not in existing_headers]
response['Vary'] = ', '.join(vary_headers + additional_headers)
def has_vary_header(response, header_query):
"""
Checks to see if the response has a given header name in its Vary header.
"""
if not response.has_header('Vary'):
return False
vary_headers = cc_delim_re.split(response['Vary'])
existing_headers = set([header.lower() for header in vary_headers])
return header_query.lower() in existing_headers
def _i18n_cache_key_suffix(request, cache_key):
"""If enabled, returns the cache key ending with a locale."""
if settings.USE_I18N:
# first check if LocaleMiddleware or another middleware added
# LANGUAGE_CODE to request, then fall back to the active language
# which in turn can also fall back to settings.LANGUAGE_CODE
cache_key += '.%s' % getattr(request, 'LANGUAGE_CODE', get_language())
return cache_key
def _generate_cache_key(request, headerlist, key_prefix):
"""Returns a cache key from the headers given in the header list."""
ctx = md5_constructor()
for header in headerlist:
value = request.META.get(header, None)
if value is not None:
ctx.update(value)
path = md5_constructor(iri_to_uri(request.path))
cache_key = 'views.decorators.cache.cache_page.%s.%s.%s' % (
key_prefix, path.hexdigest(), ctx.hexdigest())
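# e.g. 'views.decorators.cache.cache_page.<prefix>.<path md5>.<headers md5>'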
return _i18n_cache_key_suffix(request, cache_key)
def _generate_cache_header_key(key_prefix, request):
"""Returns a cache key for the header cache."""
path = md5_constructor(iri_to_uri(request.path))
cache_key = 'views.decorators.cache.cache_header.%s.%s' % (
key_prefix, path.hexdigest())
return _i18n_cache_key_suffix(request, cache_key)
def get_cache_key(request, key_prefix=None):
"""
Returns a cache key based on the request path. It can be used in the
request phase because it pulls the list of headers to take into account
from the global path registry and uses those to build a cache key to check
against.
If there is no headerlist stored, the page needs to be rebuilt, so this
function returns None.
"""
if key_prefix is None:
key_prefix = settings.CACHE_MIDDLEWARE_KEY_PREFIX
cache_key = _generate_cache_header_key(key_prefix, request)
headerlist = cache.get(cache_key, None)
if headerlist is not None:
return _generate_cache_key(request, headerlist, key_prefix)
else:
return None
def learn_cache_key(request, response, cache_timeout=None, key_prefix=None):
"""
Learns what headers to take into account for some request path from the
response object. It stores those headers in a global path registry so that
later access to that path will know what headers to take into account
without building the response object itself. The headers are named in the
Vary header of the response, but we want to prevent response generation.
The list of headers to use for cache key generation is stored in the same
cache as the pages themselves. If the cache ages some data out of the
cache, this just means that we have to build the response once to get at
the Vary header and so at the list of headers to use for the cache key.
"""
if key_prefix is None:
key_prefix = settings.CACHE_MIDDLEWARE_KEY_PREFIX
if cache_timeout is None:
cache_timeout = settings.CACHE_MIDDLEWARE_SECONDS
cache_key = _generate_cache_header_key(key_prefix, request)
if response.has_header('Vary'):
headerlist = ['HTTP_'+header.upper().replace('-', '_')
for header in cc_delim_re.split(response['Vary'])]
cache.set(cache_key, headerlist, cache_timeout)
return _generate_cache_key(request, headerlist, key_prefix)
else:
# if there is no Vary header, we still need a cache key
# for the request.path
cache.set(cache_key, [], cache_timeout)
return _generate_cache_key(request, [], key_prefix)
def _to_tuple(s):
t = s.split('=', 1)
if len(t) == 2:
return t[0].lower(), t[1]
return t[0].lower(), True
|
gpl-3.0
|
apark263/tensorflow
|
tensorflow/contrib/learn/python/learn/estimators/tensor_signature_test.py
|
137
|
8063
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for learn.estimators.tensor_signature."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.learn.python.learn.estimators import tensor_signature
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
class TensorSignatureTest(test.TestCase):
def testTensorPlaceholderNone(self):
self.assertEqual(None,
tensor_signature.create_placeholders_from_signatures(None))
def testTensorSignatureNone(self):
self.assertEqual(None, tensor_signature.create_signatures(None))
def testTensorSignatureCompatible(self):
placeholder_a = array_ops.placeholder(
name='test', shape=[None, 100], dtype=dtypes.int32)
placeholder_b = array_ops.placeholder(
name='another', shape=[256, 100], dtype=dtypes.int32)
placeholder_c = array_ops.placeholder(
name='mismatch', shape=[256, 100], dtype=dtypes.float32)
placeholder_d = array_ops.placeholder(
name='mismatch', shape=[128, 100], dtype=dtypes.int32)
signatures = tensor_signature.create_signatures(placeholder_a)
self.assertTrue(tensor_signature.tensors_compatible(None, None))
self.assertFalse(tensor_signature.tensors_compatible(None, signatures))
self.assertFalse(tensor_signature.tensors_compatible(placeholder_a, None))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_a, signatures))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_b, signatures))
self.assertFalse(
tensor_signature.tensors_compatible(placeholder_c, signatures))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_d, signatures))
inputs = {'a': placeholder_a}
signatures = tensor_signature.create_signatures(inputs)
self.assertTrue(tensor_signature.tensors_compatible(inputs, signatures))
self.assertFalse(
tensor_signature.tensors_compatible(placeholder_a, signatures))
self.assertFalse(
tensor_signature.tensors_compatible(placeholder_b, signatures))
self.assertFalse(
tensor_signature.tensors_compatible({
'b': placeholder_b
}, signatures))
self.assertTrue(
tensor_signature.tensors_compatible({
'a': placeholder_b,
'c': placeholder_c
}, signatures))
self.assertFalse(
tensor_signature.tensors_compatible({
'a': placeholder_c
}, signatures))
def testSparseTensorCompatible(self):
t = sparse_tensor.SparseTensor(
indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
signatures = tensor_signature.create_signatures(t)
self.assertTrue(tensor_signature.tensors_compatible(t, signatures))
def testTensorSignaturePlaceholders(self):
placeholder_a = array_ops.placeholder(
name='test', shape=[None, 100], dtype=dtypes.int32)
signatures = tensor_signature.create_signatures(placeholder_a)
placeholder_out = tensor_signature.create_placeholders_from_signatures(
signatures)
self.assertEqual(placeholder_out.dtype, placeholder_a.dtype)
self.assertTrue(placeholder_out.get_shape().is_compatible_with(
placeholder_a.get_shape()))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_out, signatures))
inputs = {'a': placeholder_a}
signatures = tensor_signature.create_signatures(inputs)
placeholders_out = tensor_signature.create_placeholders_from_signatures(
signatures)
self.assertEqual(placeholders_out['a'].dtype, placeholder_a.dtype)
self.assertTrue(placeholders_out['a'].get_shape().is_compatible_with(
placeholder_a.get_shape()))
self.assertTrue(
tensor_signature.tensors_compatible(placeholders_out, signatures))
def testSparseTensorSignaturePlaceholders(self):
tensor = sparse_tensor.SparseTensor(
values=[1.0, 2.0], indices=[[0, 2], [0, 3]], dense_shape=[5, 5])
signature = tensor_signature.create_signatures(tensor)
placeholder = tensor_signature.create_placeholders_from_signatures(
signature)
self.assertTrue(isinstance(placeholder, sparse_tensor.SparseTensor))
self.assertEqual(placeholder.values.dtype, tensor.values.dtype)
def testTensorSignatureExampleParserSingle(self):
examples = array_ops.placeholder(
name='example', shape=[None], dtype=dtypes.string)
placeholder_a = array_ops.placeholder(
name='test', shape=[None, 100], dtype=dtypes.int32)
signatures = tensor_signature.create_signatures(placeholder_a)
result = tensor_signature.create_example_parser_from_signatures(signatures,
examples)
self.assertTrue(tensor_signature.tensors_compatible(result, signatures))
new_signatures = tensor_signature.create_signatures(result)
self.assertTrue(new_signatures.is_compatible_with(signatures))
def testTensorSignatureExampleParserDict(self):
examples = array_ops.placeholder(
name='example', shape=[None], dtype=dtypes.string)
placeholder_a = array_ops.placeholder(
name='test', shape=[None, 100], dtype=dtypes.int32)
placeholder_b = array_ops.placeholder(
name='bb', shape=[None, 100], dtype=dtypes.float64)
inputs = {'a': placeholder_a, 'b': placeholder_b}
signatures = tensor_signature.create_signatures(inputs)
result = tensor_signature.create_example_parser_from_signatures(signatures,
examples)
self.assertTrue(tensor_signature.tensors_compatible(result, signatures))
new_signatures = tensor_signature.create_signatures(result)
self.assertTrue(new_signatures['a'].is_compatible_with(signatures['a']))
self.assertTrue(new_signatures['b'].is_compatible_with(signatures['b']))
def testUnknownShape(self):
placeholder_unk = array_ops.placeholder(
name='unk', shape=None, dtype=dtypes.string)
placeholder_a = array_ops.placeholder(
name='a', shape=[None], dtype=dtypes.string)
placeholder_b = array_ops.placeholder(
name='b', shape=[128, 2], dtype=dtypes.string)
placeholder_c = array_ops.placeholder(
name='c', shape=[128, 2], dtype=dtypes.int32)
unk_signature = tensor_signature.create_signatures(placeholder_unk)
    # Tensors of the same dtype match the unknown-shape signature.
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_unk, unk_signature))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_a, unk_signature))
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_b, unk_signature))
self.assertFalse(
tensor_signature.tensors_compatible(placeholder_c, unk_signature))
string_signature = tensor_signature.create_signatures(placeholder_a)
int_signature = tensor_signature.create_signatures(placeholder_c)
    # A Tensor of unknown shape matches signatures of the same dtype.
self.assertTrue(
tensor_signature.tensors_compatible(placeholder_unk, string_signature))
self.assertFalse(
tensor_signature.tensors_compatible(placeholder_unk, int_signature))
if __name__ == '__main__':
test.main()
|
apache-2.0
|
listamilton/supermilton.repository
|
plugin.video.efilmes/bs4/tests/test_builder_registry.py
|
485
|
5374
|
"""Tests of the builder registry."""
import unittest
from bs4 import BeautifulSoup
from bs4.builder import (
builder_registry as registry,
HTMLParserTreeBuilder,
TreeBuilderRegistry,
)
try:
from bs4.builder import HTML5TreeBuilder
HTML5LIB_PRESENT = True
except ImportError:
HTML5LIB_PRESENT = False
try:
from bs4.builder import (
LXMLTreeBuilderForXML,
LXMLTreeBuilder,
)
LXML_PRESENT = True
except ImportError:
LXML_PRESENT = False
class BuiltInRegistryTest(unittest.TestCase):
"""Test the built-in registry with the default builders registered."""
def test_combination(self):
if LXML_PRESENT:
self.assertEqual(registry.lookup('fast', 'html'),
LXMLTreeBuilder)
if LXML_PRESENT:
self.assertEqual(registry.lookup('permissive', 'xml'),
LXMLTreeBuilderForXML)
self.assertEqual(registry.lookup('strict', 'html'),
HTMLParserTreeBuilder)
if HTML5LIB_PRESENT:
self.assertEqual(registry.lookup('html5lib', 'html'),
HTML5TreeBuilder)
def test_lookup_by_markup_type(self):
if LXML_PRESENT:
self.assertEqual(registry.lookup('html'), LXMLTreeBuilder)
self.assertEqual(registry.lookup('xml'), LXMLTreeBuilderForXML)
else:
self.assertEqual(registry.lookup('xml'), None)
if HTML5LIB_PRESENT:
self.assertEqual(registry.lookup('html'), HTML5TreeBuilder)
else:
self.assertEqual(registry.lookup('html'), HTMLParserTreeBuilder)
def test_named_library(self):
if LXML_PRESENT:
self.assertEqual(registry.lookup('lxml', 'xml'),
LXMLTreeBuilderForXML)
self.assertEqual(registry.lookup('lxml', 'html'),
LXMLTreeBuilder)
if HTML5LIB_PRESENT:
self.assertEqual(registry.lookup('html5lib'),
HTML5TreeBuilder)
self.assertEqual(registry.lookup('html.parser'),
HTMLParserTreeBuilder)
def test_beautifulsoup_constructor_does_lookup(self):
# You can pass in a string.
BeautifulSoup("", features="html")
# Or a list of strings.
BeautifulSoup("", features=["html", "fast"])
# You'll get an exception if BS can't find an appropriate
# builder.
self.assertRaises(ValueError, BeautifulSoup,
"", features="no-such-feature")
class RegistryTest(unittest.TestCase):
"""Test the TreeBuilderRegistry class in general."""
def setUp(self):
self.registry = TreeBuilderRegistry()
def builder_for_features(self, *feature_list):
cls = type('Builder_' + '_'.join(feature_list),
(object,), {'features' : feature_list})
self.registry.register(cls)
return cls
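    # A note on the dynamic class above (editorial sketch): for
    # feature_list == ('foo',) the type() call is equivalent to writing
    #
    #     class Builder_foo(object):
    #         features = ('foo',)
    #
    # which is then registered with self.registry.register(...).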
def test_register_with_no_features(self):
builder = self.builder_for_features()
# Since the builder advertises no features, you can't find it
# by looking up features.
self.assertEqual(self.registry.lookup('foo'), None)
# But you can find it by doing a lookup with no features, if
# this happens to be the only registered builder.
self.assertEqual(self.registry.lookup(), builder)
def test_register_with_features_makes_lookup_succeed(self):
builder = self.builder_for_features('foo', 'bar')
self.assertEqual(self.registry.lookup('foo'), builder)
self.assertEqual(self.registry.lookup('bar'), builder)
def test_lookup_fails_when_no_builder_implements_feature(self):
builder = self.builder_for_features('foo', 'bar')
self.assertEqual(self.registry.lookup('baz'), None)
def test_lookup_gets_most_recent_registration_when_no_feature_specified(self):
builder1 = self.builder_for_features('foo')
builder2 = self.builder_for_features('bar')
self.assertEqual(self.registry.lookup(), builder2)
def test_lookup_fails_when_no_tree_builders_registered(self):
self.assertEqual(self.registry.lookup(), None)
def test_lookup_gets_most_recent_builder_supporting_all_features(self):
has_one = self.builder_for_features('foo')
has_the_other = self.builder_for_features('bar')
has_both_early = self.builder_for_features('foo', 'bar', 'baz')
has_both_late = self.builder_for_features('foo', 'bar', 'quux')
lacks_one = self.builder_for_features('bar')
has_the_other = self.builder_for_features('foo')
# There are two builders featuring 'foo' and 'bar', but
# the one that also features 'quux' was registered later.
self.assertEqual(self.registry.lookup('foo', 'bar'),
has_both_late)
# There is only one builder featuring 'foo', 'bar', and 'baz'.
self.assertEqual(self.registry.lookup('foo', 'bar', 'baz'),
has_both_early)
def test_lookup_fails_when_cannot_reconcile_requested_features(self):
builder1 = self.builder_for_features('foo', 'bar')
builder2 = self.builder_for_features('foo', 'baz')
self.assertEqual(self.registry.lookup('bar', 'baz'), None)
|
gpl-2.0
|
shubhamdhama/zulip
|
zerver/lib/bugdown/fenced_code.py
|
2
|
14935
|
"""
Fenced Code Extension for Python Markdown
=========================================
This extension adds Fenced Code Blocks to Python-Markdown.
>>> import markdown
>>> text = '''
... A paragraph before a fenced code block:
...
... ~~~
... Fenced code block
... ~~~
... '''
>>> html = markdown.markdown(text, extensions=['fenced_code'])
>>> print html
<p>A paragraph before a fenced code block:</p>
<pre><code>Fenced code block
</code></pre>
Works with safe_mode also (we check this because we are using the HtmlStash):
>>> print markdown.markdown(text, extensions=['fenced_code'], safe_mode='replace')
<p>A paragraph before a fenced code block:</p>
<pre><code>Fenced code block
</code></pre>
Include tildes in a code block and wrap with blank lines:
>>> text = '''
... ~~~~~~~~
...
... ~~~~
... ~~~~~~~~'''
>>> print markdown.markdown(text, extensions=['fenced_code'])
<pre><code>
~~~~
</code></pre>
Removes trailing whitespace from code blocks that causes horizontal scrolling
>>> import markdown
>>> text = '''
... A paragraph before a fenced code block:
...
... ~~~
... Fenced code block \t\t\t\t\t\t\t
... ~~~
... '''
>>> html = markdown.markdown(text, extensions=['fenced_code'])
>>> print html
<p>A paragraph before a fenced code block:</p>
<pre><code>Fenced code block
</code></pre>
Language tags:
>>> text = '''
... ~~~~{.python}
... # Some python code
... ~~~~'''
>>> print markdown.markdown(text, extensions=['fenced_code'])
<pre><code class="python"># Some python code
</code></pre>
Copyright 2007-2008 [Waylan Limberg](http://achinghead.com/).
Project website: <http://packages.python.org/Markdown/extensions/fenced_code_blocks.html>
Contact: markdown@freewisdom.org
License: BSD (see ../docs/LICENSE for details)
Dependencies:
* [Python 2.4+](http://python.org)
* [Markdown 2.0+](http://packages.python.org/Markdown/)
* [Pygments (optional)](http://pygments.org)
"""
import re
from typing import Any, Dict, Iterable, List, Mapping, MutableSequence, Optional
import markdown
from django.utils.html import escape
from markdown.extensions.codehilite import CodeHilite, CodeHiliteExtension
from zerver.lib.exceptions import BugdownRenderingException
from zerver.lib.tex import render_tex
# Global vars
FENCE_RE = re.compile("""
# ~~~ or ```
(?P<fence>
^(?:~{3,}|`{3,})
)
[ ]* # spaces
(
\\{?\\.?
(?P<lang>
[a-zA-Z0-9_+-./#]*
) # "py" or "javascript"
\\}?
) # language, like ".py" or "{javascript}"
[ ]* # spaces
(
\\{?\\.?
(?P<header>
[^~`]*
)
\\}?
) # header for features that use fenced block header syntax (like spoilers)
$
""", re.VERBOSE)
CODE_WRAP = '<pre><code%s>%s\n</code></pre>'
LANG_TAG = ' class="%s"'
def validate_curl_content(lines: List[str]) -> None:
error_msg = """
Missing required -X argument in curl command:
{command}
""".strip()
for line in lines:
regex = r'curl [-](sS)?X "?(GET|DELETE|PATCH|POST)"?'
if line.startswith('curl'):
if re.search(regex, line) is None:
raise BugdownRenderingException(error_msg.format(command=line.strip()))
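# For example (editorial sketch): a line like 'curl -X GET https://example.com'
# passes validate_curl_content, while 'curl https://example.com' raises
# BugdownRenderingException because the required -X argument is missing.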
CODE_VALIDATORS = {
'curl': validate_curl_content,
}
class FencedCodeExtension(markdown.Extension):
def __init__(self, config: Mapping[str, Any] = {}) -> None:
self.config = {
'run_content_validators': [
config.get('run_content_validators', False),
'Boolean specifying whether to run content validation code in CodeHandler',
],
}
for key, value in config.items():
self.setConfig(key, value)
def extendMarkdown(self, md: markdown.Markdown, md_globals: Dict[str, Any]) -> None:
""" Add FencedBlockPreprocessor to the Markdown instance. """
md.registerExtension(self)
processor = FencedBlockPreprocessor(
md, run_content_validators=self.config['run_content_validators'][0])
md.preprocessors.register(processor, 'fenced_code_block', 25)
class BaseHandler:
def handle_line(self, line: str) -> None:
raise NotImplementedError()
def done(self) -> None:
raise NotImplementedError()
def generic_handler(processor: Any, output: MutableSequence[str],
fence: str, lang: str, header: str,
run_content_validators: bool=False,
default_language: Optional[str]=None) -> BaseHandler:
lang = lang.lower()
if lang in ('quote', 'quoted'):
return QuoteHandler(processor, output, fence, default_language)
elif lang == 'math':
return TexHandler(processor, output, fence)
elif lang == 'spoiler':
return SpoilerHandler(processor, output, fence, header)
else:
return CodeHandler(processor, output, fence, lang, run_content_validators)
def check_for_new_fence(processor: Any, output: MutableSequence[str], line: str,
run_content_validators: bool=False,
default_language: Optional[str]=None) -> None:
m = FENCE_RE.match(line)
if m:
fence = m.group('fence')
lang = m.group('lang')
header = m.group('header')
if not lang and default_language:
lang = default_language
handler = generic_handler(processor, output, fence, lang, header,
run_content_validators, default_language)
processor.push(handler)
else:
output.append(line)
class OuterHandler(BaseHandler):
def __init__(self, processor: Any, output: MutableSequence[str],
run_content_validators: bool=False,
default_language: Optional[str]=None) -> None:
self.output = output
self.processor = processor
self.run_content_validators = run_content_validators
self.default_language = default_language
def handle_line(self, line: str) -> None:
check_for_new_fence(self.processor, self.output, line,
self.run_content_validators, self.default_language)
def done(self) -> None:
self.processor.pop()
class CodeHandler(BaseHandler):
def __init__(self, processor: Any, output: MutableSequence[str],
fence: str, lang: str, run_content_validators: bool=False) -> None:
self.processor = processor
self.output = output
self.fence = fence
self.lang = lang
self.lines: List[str] = []
self.run_content_validators = run_content_validators
def handle_line(self, line: str) -> None:
if line.rstrip() == self.fence:
self.done()
else:
self.lines.append(line.rstrip())
def done(self) -> None:
text = '\n'.join(self.lines)
# run content validators (if any)
if self.run_content_validators:
validator = CODE_VALIDATORS.get(self.lang, lambda text: None)
validator(self.lines)
text = self.processor.format_code(self.lang, text)
text = self.processor.placeholder(text)
processed_lines = text.split('\n')
self.output.append('')
self.output.extend(processed_lines)
self.output.append('')
self.processor.pop()
class QuoteHandler(BaseHandler):
def __init__(self, processor: Any, output: MutableSequence[str],
fence: str, default_language: Optional[str]=None) -> None:
self.processor = processor
self.output = output
self.fence = fence
self.lines: List[str] = []
self.default_language = default_language
def handle_line(self, line: str) -> None:
if line.rstrip() == self.fence:
self.done()
else:
check_for_new_fence(self.processor, self.lines, line, default_language=self.default_language)
def done(self) -> None:
text = '\n'.join(self.lines)
text = self.processor.format_quote(text)
processed_lines = text.split('\n')
self.output.append('')
self.output.extend(processed_lines)
self.output.append('')
self.processor.pop()
class SpoilerHandler(BaseHandler):
def __init__(self, processor: Any, output: MutableSequence[str],
fence: str, spoiler_header: str) -> None:
self.processor = processor
self.output = output
self.fence = fence
self.spoiler_header = spoiler_header
self.lines: List[str] = []
def handle_line(self, line: str) -> None:
if line.rstrip() == self.fence:
self.done()
else:
check_for_new_fence(self.processor, self.lines, line)
def done(self) -> None:
if len(self.lines) == 0:
# No content, do nothing
return
else:
header = self.spoiler_header
text = '\n'.join(self.lines)
text = self.processor.format_spoiler(header, text)
processed_lines = text.split('\n')
self.output.append('')
self.output.extend(processed_lines)
self.output.append('')
self.processor.pop()
class TexHandler(BaseHandler):
def __init__(self, processor: Any, output: MutableSequence[str], fence: str) -> None:
self.processor = processor
self.output = output
self.fence = fence
self.lines: List[str] = []
def handle_line(self, line: str) -> None:
if line.rstrip() == self.fence:
self.done()
else:
self.lines.append(line)
def done(self) -> None:
text = '\n'.join(self.lines)
text = self.processor.format_tex(text)
text = self.processor.placeholder(text)
processed_lines = text.split('\n')
self.output.append('')
self.output.extend(processed_lines)
self.output.append('')
self.processor.pop()
class FencedBlockPreprocessor(markdown.preprocessors.Preprocessor):
def __init__(self, md: markdown.Markdown, run_content_validators: bool=False) -> None:
markdown.preprocessors.Preprocessor.__init__(self, md)
self.checked_for_codehilite = False
self.run_content_validators = run_content_validators
self.codehilite_conf: Dict[str, List[Any]] = {}
def push(self, handler: BaseHandler) -> None:
self.handlers.append(handler)
def pop(self) -> None:
self.handlers.pop()
def run(self, lines: Iterable[str]) -> List[str]:
""" Match and store Fenced Code Blocks in the HtmlStash. """
output: List[str] = []
processor = self
self.handlers: List[BaseHandler] = []
default_language = None
try:
default_language = self.md.zulip_realm.default_code_block_language
except AttributeError:
pass
handler = OuterHandler(processor, output, self.run_content_validators, default_language)
self.push(handler)
for line in lines:
self.handlers[-1].handle_line(line)
while self.handlers:
self.handlers[-1].done()
# This fiddly handling of new lines at the end of our output was done to make
# existing tests pass. Bugdown is just kind of funny when it comes to new lines,
# but we could probably remove this hack.
if len(output) > 2 and output[-2] != '':
output.append('')
return output
def format_code(self, lang: str, text: str) -> str:
if lang:
langclass = LANG_TAG % (lang,)
else:
langclass = ''
# Check for code hilite extension
if not self.checked_for_codehilite:
for ext in self.md.registeredExtensions:
if isinstance(ext, CodeHiliteExtension):
self.codehilite_conf = ext.config
break
self.checked_for_codehilite = True
        # If config is not empty, then the codehilite extension
        # is enabled, so we call it to highlight the code
if self.codehilite_conf:
            highlighter = CodeHilite(text,
                                     linenums=self.codehilite_conf['linenums'][0],
                                     guess_lang=self.codehilite_conf['guess_lang'][0],
                                     css_class=self.codehilite_conf['css_class'][0],
                                     style=self.codehilite_conf['pygments_style'][0],
                                     use_pygments=self.codehilite_conf['use_pygments'][0],
                                     lang=(lang or None),
                                     noclasses=self.codehilite_conf['noclasses'][0])
            code = highlighter.hilite()
else:
code = CODE_WRAP % (langclass, self._escape(text))
return code
def format_quote(self, text: str) -> str:
paragraphs = text.split("\n\n")
quoted_paragraphs = []
for paragraph in paragraphs:
lines = paragraph.split("\n")
quoted_paragraphs.append("\n".join("> " + line for line in lines if line != ''))
return "\n\n".join(quoted_paragraphs)
def format_spoiler(self, header: str, text: str) -> str:
output = []
header_div_open_html = '<div class="spoiler-block"><div class="spoiler-header">'
end_header_start_content_html = '</div><div class="spoiler-content"' \
' aria-hidden="true">'
footer_html = '</div></div>'
output.append(self.placeholder(header_div_open_html))
output.append(header)
output.append(self.placeholder(end_header_start_content_html))
output.append(text)
output.append(self.placeholder(footer_html))
return "\n\n".join(output)
def format_tex(self, text: str) -> str:
paragraphs = text.split("\n\n")
tex_paragraphs = []
for paragraph in paragraphs:
html = render_tex(paragraph, is_inline=False)
if html is not None:
tex_paragraphs.append(html)
else:
tex_paragraphs.append('<span class="tex-error">' +
escape(paragraph) + '</span>')
return "\n\n".join(tex_paragraphs)
def placeholder(self, code: str) -> str:
return self.md.htmlStash.store(code)
def _escape(self, txt: str) -> str:
""" basic html escaping """
        txt = txt.replace('&', '&amp;')
        txt = txt.replace('<', '&lt;')
        txt = txt.replace('>', '&gt;')
        txt = txt.replace('"', '&quot;')
return txt
def makeExtension(*args: Any, **kwargs: Any) -> FencedCodeExtension:
return FencedCodeExtension(kwargs)
if __name__ == "__main__":
import doctest
doctest.testmod()
|
apache-2.0
|
liberation/sesql
|
sesql/daemon/sesql-update.py
|
1
|
4201
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) Pilot Systems and Libération, 2010-2011
# This file is part of SeSQL.
# SeSQL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
# SeSQL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with SeSQL. If not, see <http://www.gnu.org/licenses/>.
"""
Process the scheduled reindex updates
"""
import sys
import time
import traceback
import logging
from sesql import index
from sesql import config
from sesql import results
from sesql import __version__
from sesql.daemon.cmdline import CmdLine
from sesql.daemon.unixdaemon import UnixDaemon
from django.db import connection, transaction
def version():
print "sesql update daemon, v " + __version__
class UpdateDaemon(UnixDaemon):
"""
The daemon class
"""
def __init__(self, chunk, delay, pidfile):
UnixDaemon.__init__(self, pidfile)
self.chunk = int(chunk)
self.delay = float(delay)
self.log = logging.getLogger('sesql-update')
def run(self):
"""
Main loop
"""
while True:
try:
self.process_chunk()
except:
type, value, tb = sys.exc_info()
error = traceback.format_exception_only(type, value)[0]
print >> sys.stderr, error
self.log.error(error)
connection.close()
self.log.debug("Sleeping for %.2f second(s)" % self.delay)
time.sleep(self.delay)
@transaction.commit_manually
def process_chunk(self):
"""
Process a chunk
"""
cursor = connection.cursor()
cursor.execute("""SELECT classname, objid
FROM sesql_reindex_schedule
ORDER BY scheduled_at ASC LIMIT %d""" % self.chunk)
rows = cursor.fetchall()
if not rows:
transaction.rollback()
return
self.log.info("Found %d row(s) to reindex" % len(rows))
done = set()
for row in rows:
try:
row = tuple(row)
                if row not in done:
self.log.info("Reindexing %s:%d" % row)
done.add(row)
try:
obj = results.SeSQLResultSet.load(row)
index.index(obj)
except config.orm.not_found:
self.log.info("%s:%d doesn't exist anymore, undexing" % row)
index.unindex(row)
cursor.execute("""DELETE FROM sesql_reindex_schedule
WHERE classname=%s AND objid=%s""", row)
except Exception, e:
self.log.error('Error in row %s:%s : %s' % (row[0], row[1], e))
if cmd["debug"]:
import pdb
pdb.post_mortem()
transaction.commit()
if __name__ == "__main__":
cmd = CmdLine(sys.argv)
cmd.add_opt('debug', 'd', None, "Run in debug mode (don't daemonize)")
cmd.add_opt('chunk', 'c', str(config.DAEMON_DEFAULT_CHUNK), "Chunk size")
cmd.add_opt('wait', 'w', str(config.DAEMON_DEFAULT_DELAY),
"Wait between each chunk")
cmd.add_opt('pidfile', 'p', str(config.DAEMON_DEFAULT_PID), "Pidfile to use")
cmd.parse_opt()
if cmd["help"]:
cmd.show_help()
sys.exit(0)
if cmd["version"]:
version()
sys.exit(0)
daemon = UpdateDaemon(cmd["chunk"], cmd["wait"], cmd["pidfile"])
if cmd["debug"]:
ch = logging.StreamHandler()
logger = logging.getLogger('sesql-update')
ch.setLevel(logging.DEBUG)
logger.addHandler(ch)
daemon.run()
else:
daemon.start_deamon()
|
gpl-2.0
|
jfwood/barbican-1
|
barbican/openstack/common/rpc/matchmaker_redis.py
|
1
|
4862
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 Cloudscaling Group, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
The MatchMaker classes should accept a Topic or Fanout exchange key and
return keys for direct exchanges, per (approximate) AMQP parlance.
"""
from oslo.config import cfg
from barbican.openstack.common import importutils
from barbican.openstack.common import log as logging
from barbican.openstack.common.rpc import matchmaker as mm_common
redis = importutils.try_import('redis')
matchmaker_redis_opts = [
cfg.StrOpt('host',
default='127.0.0.1',
help='Host to locate redis'),
cfg.IntOpt('port',
default=6379,
help='Use this port to connect to redis host.'),
cfg.StrOpt('password',
default=None,
help='Password for Redis server. (optional)'),
]
CONF = cfg.CONF
opt_group = cfg.OptGroup(name='matchmaker_redis',
title='Options for Redis-based MatchMaker')
CONF.register_group(opt_group)
CONF.register_opts(matchmaker_redis_opts, opt_group)
LOG = logging.getLogger(__name__)
class RedisExchange(mm_common.Exchange):
def __init__(self, matchmaker):
self.matchmaker = matchmaker
self.redis = matchmaker.redis
super(RedisExchange, self).__init__()
class RedisTopicExchange(RedisExchange):
"""Exchange where all topic keys are split, sending to second half.
i.e. "compute.host" sends a message to "compute" running on "host"
"""
def run(self, topic):
while True:
member_name = self.redis.srandmember(topic)
if not member_name:
# If this happens, there are no
# longer any members.
break
if not self.matchmaker.is_alive(topic, member_name):
continue
host = member_name.split('.', 1)[1]
return [(member_name, host)]
return []
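# Illustrative sketch (editorial note): for topic 'compute' with a live
# member 'compute.host1', srandmember picks that member and run() returns
# [('compute.host1', 'host1')] -- i.e. deliver to 'compute' on 'host1'.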
class RedisFanoutExchange(RedisExchange):
"""Return a list of all hosts."""
def run(self, topic):
topic = topic.split('~', 1)[1]
hosts = self.redis.smembers(topic)
good_hosts = filter(
lambda host: self.matchmaker.is_alive(topic, host), hosts)
return [(x, x.split('.', 1)[1]) for x in good_hosts]
class MatchMakerRedis(mm_common.HeartbeatMatchMakerBase):
"""MatchMaker registering and looking-up hosts with a Redis server."""
def __init__(self):
super(MatchMakerRedis, self).__init__()
if not redis:
raise ImportError("Failed to import module redis.")
self.redis = redis.StrictRedis(
host=CONF.matchmaker_redis.host,
port=CONF.matchmaker_redis.port,
password=CONF.matchmaker_redis.password)
self.add_binding(mm_common.FanoutBinding(), RedisFanoutExchange(self))
self.add_binding(mm_common.DirectBinding(), mm_common.DirectExchange())
self.add_binding(mm_common.TopicBinding(), RedisTopicExchange(self))
def ack_alive(self, key, host):
topic = "%s.%s" % (key, host)
if not self.redis.expire(topic, CONF.matchmaker_heartbeat_ttl):
# If we could not update the expiration, the key
# might have been pruned. Re-register, creating a new
# key in Redis.
self.register(self.topic_host[host], host)
def is_alive(self, topic, host):
if self.redis.ttl(host) == -1:
self.expire(topic, host)
return False
return True
def expire(self, topic, host):
with self.redis.pipeline() as pipe:
pipe.multi()
pipe.delete(host)
pipe.srem(topic, host)
pipe.execute()
def backend_register(self, key, key_host):
with self.redis.pipeline() as pipe:
pipe.multi()
pipe.sadd(key, key_host)
# No value is needed, we just
# care if it exists. Sets aren't viable
# because only keys can expire.
pipe.set(key_host, '')
pipe.execute()
def backend_unregister(self, key, key_host):
with self.redis.pipeline() as pipe:
pipe.multi()
pipe.srem(key, key_host)
pipe.delete(key_host)
pipe.execute()
|
apache-2.0
|
kratorius/ads
|
python/sorting/merge_sort.py
|
1
|
1064
|
import unittest
def merge_sort(array):
if len(array) <= 1:
return array
    mid = len(array) // 2  # floor division so the slice indices stay ints
merge1, merge2 = merge_sort(array[:mid]), merge_sort(array[mid:])
def merge(left, right):
l, r = 0, 0
while l < len(left) and r < len(right):
if left[l] < right[r]:
yield left[l]
l += 1
else:
yield right[r]
r += 1
while l != len(left):
yield left[l]
l += 1
while r != len(right):
yield right[r]
r += 1
return list(merge(merge1, merge2))
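# Editorial sketch: merge() lazily interleaves two sorted halves, so every
# recursion level does O(n) work and the whole sort is O(n log n); e.g.
# merge_sort([3, 1, 2]) == [1, 2, 3].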
class TestMergeSort(unittest.TestCase):
def test_merge_sort(self):
self.assertEqual(range(1, 8), merge_sort([1, 2, 3, 4, 5, 6, 7]))
self.assertEqual(range(1, 8), merge_sort([5, 1, 6, 3, 4, 2, 7]))
self.assertEqual(range(1, 8), merge_sort([7, 6, 5, 4, 3, 2, 1]))
self.assertEqual([], merge_sort([]))
self.assertEqual([10], merge_sort([10]))
if __name__ == "__main__":
unittest.main()
|
mit
|
a40223217/2015cdb_g6team3
|
static/Brython3.1.1-20150328-091302/Lib/site-packages/pygame/version.py
|
607
|
1334
|
## pygame - Python Game Library
## Copyright (C) 2000-2003 Pete Shinners
##
## This library is free software; you can redistribute it and/or
## modify it under the terms of the GNU Library General Public
## License as published by the Free Software Foundation; either
## version 2 of the License, or (at your option) any later version.
##
## This library is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## Library General Public License for more details.
##
## You should have received a copy of the GNU Library General Public
## License along with this library; if not, write to the Free
## Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##
## Pete Shinners
## pete@shinners.org
"""Simply the current installed pygame version. The version information is
stored in the regular pygame module as 'pygame.ver'. Keeping the version
information also available in a separate module allows you to test the
pygame version without importing the main pygame module.
The python version information should always compare greater than any previous
releases. (hmm, until we get to versions > 10)
"""
ver = '1.8.0pre'
vernum = 1,8,0
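# Editorial sketch of the tuple comparison the docstring describes (the
# import path below is an assumption):
#
#     from pygame.version import vernum
#     if vernum >= (1, 8, 0):
#         pass  # APIs introduced in 1.8 are available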
|
gpl-3.0
|
Nowheresly/odoo
|
openerp/addons/base/tests/test_api.py
|
56
|
17851
|
from openerp import models
from openerp.tools import mute_logger
from openerp.osv.orm import except_orm
from openerp.tests import common
class TestAPI(common.TransactionCase):
""" test the new API of the ORM """
def assertIsRecordset(self, value, model):
self.assertIsInstance(value, models.BaseModel)
self.assertEqual(value._name, model)
def assertIsRecord(self, value, model):
self.assertIsRecordset(value, model)
self.assertTrue(len(value) <= 1)
def assertIsNull(self, value, model):
self.assertIsRecordset(value, model)
self.assertFalse(value)
@mute_logger('openerp.models')
def test_00_query(self):
""" Build a recordset, and check its contents. """
domain = [('name', 'ilike', 'j')]
ids = self.registry('res.partner').search(self.cr, self.uid, domain)
partners = self.env['res.partner'].search(domain)
# partners is a collection of browse records corresponding to ids
self.assertTrue(ids)
self.assertTrue(partners)
# partners and its contents are instance of the model
self.assertIsRecordset(partners, 'res.partner')
for p in partners:
self.assertIsRecord(p, 'res.partner')
self.assertEqual([p.id for p in partners], ids)
self.assertEqual(self.env['res.partner'].browse(ids), partners)
@mute_logger('openerp.models')
def test_01_query_offset(self):
""" Build a recordset with offset, and check equivalence. """
partners1 = self.env['res.partner'].search([], offset=10)
partners2 = self.env['res.partner'].search([])[10:]
self.assertIsRecordset(partners1, 'res.partner')
self.assertIsRecordset(partners2, 'res.partner')
self.assertEqual(list(partners1), list(partners2))
@mute_logger('openerp.models')
def test_02_query_limit(self):
""" Build a recordset with offset, and check equivalence. """
partners1 = self.env['res.partner'].search([], limit=10)
partners2 = self.env['res.partner'].search([])[:10]
self.assertIsRecordset(partners1, 'res.partner')
self.assertIsRecordset(partners2, 'res.partner')
self.assertEqual(list(partners1), list(partners2))
@mute_logger('openerp.models')
def test_03_query_offset_limit(self):
""" Build a recordset with offset and limit, and check equivalence. """
partners1 = self.env['res.partner'].search([], offset=3, limit=7)
partners2 = self.env['res.partner'].search([])[3:10]
self.assertIsRecordset(partners1, 'res.partner')
self.assertIsRecordset(partners2, 'res.partner')
self.assertEqual(list(partners1), list(partners2))
@mute_logger('openerp.models')
def test_04_query_count(self):
""" Test the search method with count=True. """
count1 = self.registry('res.partner').search(self.cr, self.uid, [], count=True)
count2 = self.env['res.partner'].search([], count=True)
self.assertIsInstance(count1, (int, long))
self.assertIsInstance(count2, (int, long))
self.assertEqual(count1, count2)
@mute_logger('openerp.models')
def test_05_immutable(self):
""" Check that a recordset remains the same, even after updates. """
domain = [('name', 'ilike', 'j')]
partners = self.env['res.partner'].search(domain)
self.assertTrue(partners)
ids = map(int, partners)
# modify those partners, and check that partners has not changed
self.registry('res.partner').write(self.cr, self.uid, ids, {'active': False})
self.assertEqual(ids, map(int, partners))
# redo the search, and check that the result is now empty
partners2 = self.env['res.partner'].search(domain)
self.assertFalse(partners2)
@mute_logger('openerp.models')
def test_06_fields(self):
""" Check that relation fields return records, recordsets or nulls. """
user = self.registry('res.users').browse(self.cr, self.uid, self.uid)
self.assertIsRecord(user, 'res.users')
self.assertIsRecord(user.partner_id, 'res.partner')
self.assertIsRecordset(user.groups_id, 'res.groups')
partners = self.env['res.partner'].search([])
for name, field in partners._fields.iteritems():
if field.type == 'many2one':
for p in partners:
self.assertIsRecord(p[name], field.comodel_name)
elif field.type == 'reference':
for p in partners:
if p[name]:
self.assertIsRecord(p[name], field.comodel_name)
elif field.type in ('one2many', 'many2many'):
for p in partners:
self.assertIsRecordset(p[name], field.comodel_name)
@mute_logger('openerp.models')
def test_07_null(self):
""" Check behavior of null instances. """
# select a partner without a parent
partner = self.env['res.partner'].search([('parent_id', '=', False)])[0]
# check partner and related null instances
self.assertTrue(partner)
self.assertIsRecord(partner, 'res.partner')
self.assertFalse(partner.parent_id)
self.assertIsNull(partner.parent_id, 'res.partner')
self.assertIs(partner.parent_id.id, False)
self.assertFalse(partner.parent_id.user_id)
self.assertIsNull(partner.parent_id.user_id, 'res.users')
self.assertIs(partner.parent_id.user_id.name, False)
self.assertFalse(partner.parent_id.user_id.groups_id)
self.assertIsRecordset(partner.parent_id.user_id.groups_id, 'res.groups')
@mute_logger('openerp.models')
def test_10_old_old(self):
""" Call old-style methods in the old-fashioned way. """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
ids = map(int, partners)
# call method name_get on partners' model, and check its effect
res = partners._model.name_get(self.cr, self.uid, ids)
self.assertEqual(len(res), len(ids))
self.assertEqual(set(val[0] for val in res), set(ids))
@mute_logger('openerp.models')
def test_20_old_new(self):
""" Call old-style methods in the new API style. """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
# call method name_get on partners itself, and check its effect
res = partners.name_get()
self.assertEqual(len(res), len(partners))
self.assertEqual(set(val[0] for val in res), set(map(int, partners)))
@mute_logger('openerp.models')
def test_25_old_new(self):
""" Call old-style methods on records (new API style). """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
# call method name_get on partner records, and check its effect
for p in partners:
res = p.name_get()
self.assertTrue(isinstance(res, list) and len(res) == 1)
self.assertTrue(isinstance(res[0], tuple) and len(res[0]) == 2)
self.assertEqual(res[0][0], p.id)
@mute_logger('openerp.models')
def test_30_new_old(self):
""" Call new-style methods in the old-fashioned way. """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
ids = map(int, partners)
# call method write on partners' model, and check its effect
partners._model.write(self.cr, self.uid, ids, {'active': False})
for p in partners:
self.assertFalse(p.active)
@mute_logger('openerp.models')
def test_40_new_new(self):
""" Call new-style methods in the new API style. """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
# call method write on partners itself, and check its effect
partners.write({'active': False})
for p in partners:
self.assertFalse(p.active)
@mute_logger('openerp.models')
def test_45_new_new(self):
""" Call new-style methods on records (new API style). """
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertTrue(partners)
# call method write on partner records, and check its effects
for p in partners:
p.write({'active': False})
for p in partners:
self.assertFalse(p.active)
@mute_logger('openerp.models')
@mute_logger('openerp.addons.base.ir.ir_model')
def test_50_environment(self):
""" Test environment on records. """
# partners and reachable records are attached to self.env
partners = self.env['res.partner'].search([('name', 'ilike', 'j')])
self.assertEqual(partners.env, self.env)
for x in (partners, partners[0], partners[0].company_id):
self.assertEqual(x.env, self.env)
for p in partners:
self.assertEqual(p.env, self.env)
# check that the current user can read and modify company data
partners[0].company_id.name
partners[0].company_id.write({'name': 'Fools'})
# create an environment with the demo user
demo = self.env['res.users'].search([('login', '=', 'demo')])[0]
demo_env = self.env(user=demo)
self.assertNotEqual(demo_env, self.env)
# partners and related records are still attached to self.env
self.assertEqual(partners.env, self.env)
for x in (partners, partners[0], partners[0].company_id):
self.assertEqual(x.env, self.env)
for p in partners:
self.assertEqual(p.env, self.env)
# create record instances attached to demo_env
demo_partners = partners.sudo(demo)
self.assertEqual(demo_partners.env, demo_env)
for x in (demo_partners, demo_partners[0], demo_partners[0].company_id):
self.assertEqual(x.env, demo_env)
for p in demo_partners:
self.assertEqual(p.env, demo_env)
# demo user can read but not modify company data
demo_partners[0].company_id.name
with self.assertRaises(except_orm):
demo_partners[0].company_id.write({'name': 'Pricks'})
# remove demo user from all groups
demo.write({'groups_id': [(5,)]})
# demo user can no longer access partner data
with self.assertRaises(except_orm):
demo_partners[0].company_id.name
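    # In short (an editorial sketch, not part of the original test): records
    # carry their Environment, and recordset.sudo(user) re-attaches the same
    # ids to a new env, so access rules are re-evaluated for that user while
    # the original recordset keeps self.env.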
@mute_logger('openerp.models')
def test_55_draft(self):
""" Test draft mode nesting. """
env = self.env
self.assertFalse(env.in_draft)
with env.do_in_draft():
self.assertTrue(env.in_draft)
with env.do_in_draft():
self.assertTrue(env.in_draft)
with env.do_in_draft():
self.assertTrue(env.in_draft)
self.assertTrue(env.in_draft)
self.assertTrue(env.in_draft)
self.assertFalse(env.in_draft)
@mute_logger('openerp.models')
def test_60_cache(self):
""" Check the record cache behavior """
partners = self.env['res.partner'].search([('child_ids', '!=', False)])
partner1, partner2 = partners[0], partners[1]
children1, children2 = partner1.child_ids, partner2.child_ids
self.assertTrue(children1)
self.assertTrue(children2)
# take a child contact
child = children1[0]
self.assertEqual(child.parent_id, partner1)
self.assertIn(child, partner1.child_ids)
self.assertNotIn(child, partner2.child_ids)
# fetch data in the cache
for p in partners:
p.name, p.company_id.name, p.user_id.name, p.contact_address
self.env.check_cache()
# change its parent
child.write({'parent_id': partner2.id})
self.env.check_cache()
# check recordsets
self.assertEqual(child.parent_id, partner2)
self.assertNotIn(child, partner1.child_ids)
self.assertIn(child, partner2.child_ids)
self.assertEqual(set(partner1.child_ids + child), set(children1))
self.assertEqual(set(partner2.child_ids), set(children2 + child))
self.env.check_cache()
# delete it
child.unlink()
self.env.check_cache()
# check recordsets
self.assertEqual(set(partner1.child_ids), set(children1) - set([child]))
self.assertEqual(set(partner2.child_ids), set(children2))
self.env.check_cache()
@mute_logger('openerp.models')
def test_60_cache_prefetching(self):
""" Check the record cache prefetching """
self.env.invalidate_all()
# all the records of an instance already have an entry in cache
partners = self.env['res.partner'].search([])
partner_ids = self.env.prefetch['res.partner']
self.assertEqual(set(partners.ids), set(partner_ids))
# countries have not been fetched yet; their cache must be empty
countries = self.env['res.country'].browse()
self.assertFalse(self.env.prefetch['res.country'])
# reading ONE partner should fetch them ALL
countries |= partners[0].country_id
country_cache = self.env.cache[partners._fields['country_id']]
self.assertLessEqual(set(partners._ids), set(country_cache))
# read all partners, and check that the cache already contained them
country_ids = list(self.env.prefetch['res.country'])
for p in partners:
countries |= p.country_id
self.assertLessEqual(set(countries.ids), set(country_ids))
@mute_logger('openerp.models')
def test_70_one(self):
""" Check method one(). """
# check with many records
ps = self.env['res.partner'].search([('name', 'ilike', 'a')])
self.assertTrue(len(ps) > 1)
with self.assertRaises(except_orm):
ps.ensure_one()
p1 = ps[0]
self.assertEqual(len(p1), 1)
self.assertEqual(p1.ensure_one(), p1)
p0 = self.env['res.partner'].browse()
self.assertEqual(len(p0), 0)
with self.assertRaises(except_orm):
p0.ensure_one()
@mute_logger('openerp.models')
def test_80_contains(self):
""" Test membership on recordset. """
p1 = self.env['res.partner'].search([('name', 'ilike', 'a')], limit=1).ensure_one()
ps = self.env['res.partner'].search([('name', 'ilike', 'a')])
self.assertTrue(p1 in ps)
@mute_logger('openerp.models')
def test_80_set_operations(self):
""" Check set operations on recordsets. """
pa = self.env['res.partner'].search([('name', 'ilike', 'a')])
pb = self.env['res.partner'].search([('name', 'ilike', 'b')])
self.assertTrue(pa)
self.assertTrue(pb)
self.assertTrue(set(pa) & set(pb))
concat = pa + pb
self.assertEqual(list(concat), list(pa) + list(pb))
self.assertEqual(len(concat), len(pa) + len(pb))
difference = pa - pb
self.assertEqual(len(difference), len(set(difference)))
self.assertEqual(set(difference), set(pa) - set(pb))
self.assertLessEqual(difference, pa)
intersection = pa & pb
self.assertEqual(len(intersection), len(set(intersection)))
self.assertEqual(set(intersection), set(pa) & set(pb))
self.assertLessEqual(intersection, pa)
self.assertLessEqual(intersection, pb)
union = pa | pb
self.assertEqual(len(union), len(set(union)))
self.assertEqual(set(union), set(pa) | set(pb))
self.assertGreaterEqual(union, pa)
self.assertGreaterEqual(union, pb)
# one cannot mix different models with set operations
ps = pa
ms = self.env['ir.ui.menu'].search([])
self.assertNotEqual(ps._name, ms._name)
self.assertNotEqual(ps, ms)
with self.assertRaises(except_orm):
res = ps + ms
with self.assertRaises(except_orm):
res = ps - ms
with self.assertRaises(except_orm):
res = ps & ms
with self.assertRaises(except_orm):
res = ps | ms
with self.assertRaises(except_orm):
res = ps < ms
with self.assertRaises(except_orm):
res = ps <= ms
with self.assertRaises(except_orm):
res = ps > ms
with self.assertRaises(except_orm):
res = ps >= ms
@mute_logger('openerp.models')
def test_80_filter(self):
""" Check filter on recordsets. """
ps = self.env['res.partner'].search([])
customers = ps.browse([p.id for p in ps if p.customer])
# filter on a single field
self.assertEqual(ps.filtered(lambda p: p.customer), customers)
self.assertEqual(ps.filtered('customer'), customers)
# filter on a sequence of fields
self.assertEqual(
ps.filtered(lambda p: p.parent_id.customer),
ps.filtered('parent_id.customer')
)
@mute_logger('openerp.models')
def test_80_map(self):
""" Check map on recordsets. """
ps = self.env['res.partner'].search([])
parents = ps.browse()
for p in ps: parents |= p.parent_id
# map a single field
self.assertEqual(ps.mapped(lambda p: p.parent_id), parents)
self.assertEqual(ps.mapped('parent_id'), parents)
# map a sequence of fields
self.assertEqual(
ps.mapped(lambda p: p.parent_id.name),
[p.parent_id.name for p in ps]
)
self.assertEqual(
ps.mapped('parent_id.name'),
[p.name for p in parents]
)
|
agpl-3.0
|
denovator/myfriki
|
lib/jinja2/jinja2/parser.py
|
637
|
35186
|
# -*- coding: utf-8 -*-
"""
jinja2.parser
~~~~~~~~~~~~~
Implements the template parser.
:copyright: (c) 2010 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
from jinja2 import nodes
from jinja2.exceptions import TemplateSyntaxError, TemplateAssertionError
from jinja2.lexer import describe_token, describe_token_expr
from jinja2._compat import next, imap
#: statements that call into a dedicated parse_<keyword> method below
_statement_keywords = frozenset(['for', 'if', 'block', 'extends', 'print',
'macro', 'include', 'from', 'import',
'set'])
_compare_operators = frozenset(['eq', 'ne', 'lt', 'lteq', 'gt', 'gteq'])
class Parser(object):
"""This is the central parsing class Jinja2 uses. It's passed to
extensions and can be used to parse expressions or statements.
"""
def __init__(self, environment, source, name=None, filename=None,
state=None):
self.environment = environment
self.stream = environment._tokenize(source, name, filename, state)
self.name = name
self.filename = filename
self.closed = False
self.extensions = {}
for extension in environment.iter_extensions():
for tag in extension.tags:
self.extensions[tag] = extension.parse
self._last_identifier = 0
self._tag_stack = []
self._end_token_stack = []
def fail(self, msg, lineno=None, exc=TemplateSyntaxError):
"""Convenience method that raises `exc` with the message, passed
line number or last line number as well as the current name and
filename.
"""
if lineno is None:
lineno = self.stream.current.lineno
raise exc(msg, lineno, self.name, self.filename)
def _fail_ut_eof(self, name, end_token_stack, lineno):
expected = []
for exprs in end_token_stack:
expected.extend(imap(describe_token_expr, exprs))
if end_token_stack:
currently_looking = ' or '.join(
"'%s'" % describe_token_expr(expr)
for expr in end_token_stack[-1])
else:
currently_looking = None
if name is None:
message = ['Unexpected end of template.']
else:
message = ['Encountered unknown tag \'%s\'.' % name]
if currently_looking:
if name is not None and name in expected:
message.append('You probably made a nesting mistake. Jinja '
'is expecting this tag, but currently looking '
'for %s.' % currently_looking)
else:
message.append('Jinja was looking for the following tags: '
'%s.' % currently_looking)
if self._tag_stack:
message.append('The innermost block that needs to be '
'closed is \'%s\'.' % self._tag_stack[-1])
self.fail(' '.join(message), lineno)
def fail_unknown_tag(self, name, lineno=None):
"""Called if the parser encounters an unknown tag. Tries to fail
with a human readable error message that could help to identify
the problem.
"""
return self._fail_ut_eof(name, self._end_token_stack, lineno)
def fail_eof(self, end_tokens=None, lineno=None):
"""Like fail_unknown_tag but for end of template situations."""
stack = list(self._end_token_stack)
if end_tokens is not None:
stack.append(end_tokens)
return self._fail_ut_eof(None, stack, lineno)
def is_tuple_end(self, extra_end_rules=None):
"""Are we at the end of a tuple?"""
if self.stream.current.type in ('variable_end', 'block_end', 'rparen'):
return True
elif extra_end_rules is not None:
return self.stream.current.test_any(extra_end_rules)
return False
def free_identifier(self, lineno=None):
"""Return a new free identifier as :class:`~jinja2.nodes.InternalName`."""
self._last_identifier += 1
rv = object.__new__(nodes.InternalName)
nodes.Node.__init__(rv, 'fi%d' % self._last_identifier, lineno=lineno)
return rv
def parse_statement(self):
"""Parse a single statement."""
token = self.stream.current
if token.type != 'name':
self.fail('tag name expected', token.lineno)
self._tag_stack.append(token.value)
pop_tag = True
try:
if token.value in _statement_keywords:
return getattr(self, 'parse_' + self.stream.current.value)()
if token.value == 'call':
return self.parse_call_block()
if token.value == 'filter':
return self.parse_filter_block()
ext = self.extensions.get(token.value)
if ext is not None:
return ext(self)
# did not work out, remove the token we pushed by accident
# from the stack so that the unknown tag fail function can
# produce a proper error message.
self._tag_stack.pop()
pop_tag = False
self.fail_unknown_tag(token.value, token.lineno)
finally:
if pop_tag:
self._tag_stack.pop()
def parse_statements(self, end_tokens, drop_needle=False):
"""Parse multiple statements into a list until one of the end tokens
is reached. This is used to parse the body of statements as it also
parses template data if appropriate. The parser checks first if the
current token is a colon and skips it if there is one. Then it checks
        for the block end and parses until one of the `end_tokens` is
reached. Per default the active token in the stream at the end of
the call is the matched end token. If this is not wanted `drop_needle`
can be set to `True` and the end token is removed.
"""
# the first token may be a colon for python compatibility
self.stream.skip_if('colon')
# in the future it would be possible to add whole code sections
# by adding some sort of end of statement token and parsing those here.
self.stream.expect('block_end')
result = self.subparse(end_tokens)
# we reached the end of the template too early, the subparser
# does not check for this, so we do that now
if self.stream.current.type == 'eof':
self.fail_eof(end_tokens)
if drop_needle:
next(self.stream)
return result
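    # Illustrative flow (editorial sketch): parse_for() below calls
    # parse_statements(('name:endfor', 'name:else')) for the loop body; when
    # it returns, the stream's current token is the matched end token unless
    # drop_needle=True already removed it.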
def parse_set(self):
"""Parse an assign statement."""
lineno = next(self.stream).lineno
target = self.parse_assign_target()
self.stream.expect('assign')
expr = self.parse_tuple()
return nodes.Assign(target, expr, lineno=lineno)
def parse_for(self):
"""Parse a for loop."""
lineno = self.stream.expect('name:for').lineno
target = self.parse_assign_target(extra_end_rules=('name:in',))
self.stream.expect('name:in')
iter = self.parse_tuple(with_condexpr=False,
extra_end_rules=('name:recursive',))
test = None
if self.stream.skip_if('name:if'):
test = self.parse_expression()
recursive = self.stream.skip_if('name:recursive')
body = self.parse_statements(('name:endfor', 'name:else'))
if next(self.stream).value == 'endfor':
else_ = []
else:
else_ = self.parse_statements(('name:endfor',), drop_needle=True)
return nodes.For(target, iter, body, else_, test,
recursive, lineno=lineno)
def parse_if(self):
"""Parse an if construct."""
node = result = nodes.If(lineno=self.stream.expect('name:if').lineno)
while 1:
node.test = self.parse_tuple(with_condexpr=False)
node.body = self.parse_statements(('name:elif', 'name:else',
'name:endif'))
token = next(self.stream)
if token.test('name:elif'):
new_node = nodes.If(lineno=self.stream.current.lineno)
node.else_ = [new_node]
node = new_node
continue
elif token.test('name:else'):
node.else_ = self.parse_statements(('name:endif',),
drop_needle=True)
else:
node.else_ = []
break
return result
def parse_block(self):
node = nodes.Block(lineno=next(self.stream).lineno)
node.name = self.stream.expect('name').value
node.scoped = self.stream.skip_if('name:scoped')
# common problem people encounter when switching from django
# to jinja. we do not support hyphens in block names, so let's
# raise a nicer error message in that case.
if self.stream.current.type == 'sub':
self.fail('Block names in Jinja have to be valid Python '
'identifiers and may not contain hyphens, use an '
'underscore instead.')
node.body = self.parse_statements(('name:endblock',), drop_needle=True)
self.stream.skip_if('name:' + node.name)
return node
def parse_extends(self):
node = nodes.Extends(lineno=next(self.stream).lineno)
node.template = self.parse_expression()
return node
def parse_import_context(self, node, default):
if self.stream.current.test_any('name:with', 'name:without') and \
self.stream.look().test('name:context'):
node.with_context = next(self.stream).value == 'with'
self.stream.skip()
else:
node.with_context = default
return node
def parse_include(self):
node = nodes.Include(lineno=next(self.stream).lineno)
node.template = self.parse_expression()
if self.stream.current.test('name:ignore') and \
self.stream.look().test('name:missing'):
node.ignore_missing = True
self.stream.skip(2)
else:
node.ignore_missing = False
return self.parse_import_context(node, True)
def parse_import(self):
node = nodes.Import(lineno=next(self.stream).lineno)
node.template = self.parse_expression()
self.stream.expect('name:as')
node.target = self.parse_assign_target(name_only=True).name
return self.parse_import_context(node, False)
def parse_from(self):
node = nodes.FromImport(lineno=next(self.stream).lineno)
node.template = self.parse_expression()
self.stream.expect('name:import')
node.names = []
def parse_context():
if self.stream.current.value in ('with', 'without') and \
self.stream.look().test('name:context'):
node.with_context = next(self.stream).value == 'with'
self.stream.skip()
return True
return False
while 1:
if node.names:
self.stream.expect('comma')
if self.stream.current.type == 'name':
if parse_context():
break
target = self.parse_assign_target(name_only=True)
if target.name.startswith('_'):
self.fail('names starting with an underline can not '
'be imported', target.lineno,
exc=TemplateAssertionError)
if self.stream.skip_if('name:as'):
alias = self.parse_assign_target(name_only=True)
node.names.append((target.name, alias.name))
else:
node.names.append(target.name)
if parse_context() or self.stream.current.type != 'comma':
break
else:
break
if not hasattr(node, 'with_context'):
node.with_context = False
self.stream.skip_if('comma')
return node
def parse_signature(self, node):
node.args = args = []
node.defaults = defaults = []
self.stream.expect('lparen')
while self.stream.current.type != 'rparen':
if args:
self.stream.expect('comma')
arg = self.parse_assign_target(name_only=True)
arg.set_ctx('param')
if self.stream.skip_if('assign'):
defaults.append(self.parse_expression())
args.append(arg)
self.stream.expect('rparen')
def parse_call_block(self):
node = nodes.CallBlock(lineno=next(self.stream).lineno)
if self.stream.current.type == 'lparen':
self.parse_signature(node)
else:
node.args = []
node.defaults = []
node.call = self.parse_expression()
if not isinstance(node.call, nodes.Call):
self.fail('expected call', node.lineno)
node.body = self.parse_statements(('name:endcall',), drop_needle=True)
return node
def parse_filter_block(self):
node = nodes.FilterBlock(lineno=next(self.stream).lineno)
node.filter = self.parse_filter(None, start_inline=True)
node.body = self.parse_statements(('name:endfilter',),
drop_needle=True)
return node
def parse_macro(self):
node = nodes.Macro(lineno=next(self.stream).lineno)
node.name = self.parse_assign_target(name_only=True).name
self.parse_signature(node)
node.body = self.parse_statements(('name:endmacro',),
drop_needle=True)
return node
def parse_print(self):
node = nodes.Output(lineno=next(self.stream).lineno)
node.nodes = []
while self.stream.current.type != 'block_end':
if node.nodes:
self.stream.expect('comma')
node.nodes.append(self.parse_expression())
return node
def parse_assign_target(self, with_tuple=True, name_only=False,
extra_end_rules=None):
"""Parse an assignment target. As Jinja2 allows assignments to
tuples, this function can parse all allowed assignment targets. Per
default assignments to tuples are parsed, that can be disable however
by setting `with_tuple` to `False`. If only assignments to names are
wanted `name_only` can be set to `True`. The `extra_end_rules`
parameter is forwarded to the tuple parsing function.
"""
if name_only:
token = self.stream.expect('name')
target = nodes.Name(token.value, 'store', lineno=token.lineno)
else:
if with_tuple:
target = self.parse_tuple(simplified=True,
extra_end_rules=extra_end_rules)
else:
target = self.parse_primary()
target.set_ctx('store')
if not target.can_assign():
self.fail('can\'t assign to %r' % target.__class__.
__name__.lower(), target.lineno)
return target
def parse_expression(self, with_condexpr=True):
"""Parse an expression. Per default all expressions are parsed, if
the optional `with_condexpr` parameter is set to `False` conditional
expressions are not parsed.
"""
if with_condexpr:
return self.parse_condexpr()
return self.parse_or()
def parse_condexpr(self):
lineno = self.stream.current.lineno
expr1 = self.parse_or()
while self.stream.skip_if('name:if'):
expr2 = self.parse_or()
if self.stream.skip_if('name:else'):
expr3 = self.parse_condexpr()
else:
expr3 = None
expr1 = nodes.CondExpr(expr2, expr1, expr3, lineno=lineno)
lineno = self.stream.current.lineno
return expr1
def parse_or(self):
lineno = self.stream.current.lineno
left = self.parse_and()
while self.stream.skip_if('name:or'):
right = self.parse_and()
left = nodes.Or(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_and(self):
lineno = self.stream.current.lineno
left = self.parse_not()
while self.stream.skip_if('name:and'):
right = self.parse_not()
left = nodes.And(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_not(self):
if self.stream.current.test('name:not'):
lineno = next(self.stream).lineno
return nodes.Not(self.parse_not(), lineno=lineno)
return self.parse_compare()
def parse_compare(self):
lineno = self.stream.current.lineno
expr = self.parse_add()
ops = []
while 1:
token_type = self.stream.current.type
if token_type in _compare_operators:
next(self.stream)
ops.append(nodes.Operand(token_type, self.parse_add()))
elif self.stream.skip_if('name:in'):
ops.append(nodes.Operand('in', self.parse_add()))
elif self.stream.current.test('name:not') and \
self.stream.look().test('name:in'):
self.stream.skip(2)
ops.append(nodes.Operand('notin', self.parse_add()))
else:
break
lineno = self.stream.current.lineno
if not ops:
return expr
return nodes.Compare(expr, ops, lineno=lineno)
def parse_add(self):
lineno = self.stream.current.lineno
left = self.parse_sub()
while self.stream.current.type == 'add':
next(self.stream)
right = self.parse_sub()
left = nodes.Add(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_sub(self):
lineno = self.stream.current.lineno
left = self.parse_concat()
while self.stream.current.type == 'sub':
next(self.stream)
right = self.parse_concat()
left = nodes.Sub(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_concat(self):
lineno = self.stream.current.lineno
args = [self.parse_mul()]
while self.stream.current.type == 'tilde':
next(self.stream)
args.append(self.parse_mul())
if len(args) == 1:
return args[0]
return nodes.Concat(args, lineno=lineno)
def parse_mul(self):
lineno = self.stream.current.lineno
left = self.parse_div()
while self.stream.current.type == 'mul':
next(self.stream)
right = self.parse_div()
left = nodes.Mul(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_div(self):
lineno = self.stream.current.lineno
left = self.parse_floordiv()
while self.stream.current.type == 'div':
next(self.stream)
right = self.parse_floordiv()
left = nodes.Div(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_floordiv(self):
lineno = self.stream.current.lineno
left = self.parse_mod()
while self.stream.current.type == 'floordiv':
next(self.stream)
right = self.parse_mod()
left = nodes.FloorDiv(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_mod(self):
lineno = self.stream.current.lineno
left = self.parse_pow()
while self.stream.current.type == 'mod':
next(self.stream)
right = self.parse_pow()
left = nodes.Mod(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_pow(self):
lineno = self.stream.current.lineno
left = self.parse_unary()
while self.stream.current.type == 'pow':
next(self.stream)
right = self.parse_unary()
left = nodes.Pow(left, right, lineno=lineno)
lineno = self.stream.current.lineno
return left
def parse_unary(self, with_filter=True):
token_type = self.stream.current.type
lineno = self.stream.current.lineno
if token_type == 'sub':
next(self.stream)
node = nodes.Neg(self.parse_unary(False), lineno=lineno)
elif token_type == 'add':
next(self.stream)
node = nodes.Pos(self.parse_unary(False), lineno=lineno)
else:
node = self.parse_primary()
node = self.parse_postfix(node)
if with_filter:
node = self.parse_filter_expr(node)
return node
def parse_primary(self):
token = self.stream.current
if token.type == 'name':
if token.value in ('true', 'false', 'True', 'False'):
node = nodes.Const(token.value in ('true', 'True'),
lineno=token.lineno)
elif token.value in ('none', 'None'):
node = nodes.Const(None, lineno=token.lineno)
else:
node = nodes.Name(token.value, 'load', lineno=token.lineno)
next(self.stream)
elif token.type == 'string':
next(self.stream)
buf = [token.value]
lineno = token.lineno
while self.stream.current.type == 'string':
buf.append(self.stream.current.value)
next(self.stream)
node = nodes.Const(''.join(buf), lineno=lineno)
elif token.type in ('integer', 'float'):
next(self.stream)
node = nodes.Const(token.value, lineno=token.lineno)
elif token.type == 'lparen':
next(self.stream)
node = self.parse_tuple(explicit_parentheses=True)
self.stream.expect('rparen')
elif token.type == 'lbracket':
node = self.parse_list()
elif token.type == 'lbrace':
node = self.parse_dict()
else:
self.fail("unexpected '%s'" % describe_token(token), token.lineno)
return node
def parse_tuple(self, simplified=False, with_condexpr=True,
extra_end_rules=None, explicit_parentheses=False):
"""Works like `parse_expression` but if multiple expressions are
delimited by a comma a :class:`~jinja2.nodes.Tuple` node is created.
This method could also return a regular expression instead of a tuple
if no commas where found.
The default parsing mode is a full tuple. If `simplified` is `True`
only names and literals are parsed. The `no_condexpr` parameter is
forwarded to :meth:`parse_expression`.
Because tuples do not require delimiters and may end in a bogus comma
an extra hint is needed that marks the end of a tuple. For example
for loops support tuples between `for` and `in`. In that case the
`extra_end_rules` is set to ``['name:in']``.
`explicit_parentheses` is true if the parsing was triggered by an
expression in parentheses. This is used to figure out if an empty
tuple is a valid expression or not.
"""
lineno = self.stream.current.lineno
if simplified:
parse = self.parse_primary
elif with_condexpr:
parse = self.parse_expression
else:
parse = lambda: self.parse_expression(with_condexpr=False)
args = []
is_tuple = False
while 1:
if args:
self.stream.expect('comma')
if self.is_tuple_end(extra_end_rules):
break
args.append(parse())
if self.stream.current.type == 'comma':
is_tuple = True
else:
break
lineno = self.stream.current.lineno
if not is_tuple:
if args:
return args[0]
# if we don't have explicit parentheses, an empty tuple is
# not a valid expression. This would mean nothing (literally
# nothing) in the spot of an expression would be an empty
# tuple.
if not explicit_parentheses:
self.fail('Expected an expression, got \'%s\'' %
describe_token(self.stream.current))
return nodes.Tuple(args, 'load', lineno=lineno)
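# Illustrative examples of the rules above: in ``{% for a, b in seq %}``
# the target is parsed with extra_end_rules=('name:in',) and yields
# Tuple([Name('a'), Name('b')]); ``{{ 1, }}`` also yields a Tuple, while
# ``{{ (1) }}`` returns the single wrapped expression unchanged.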
def parse_list(self):
token = self.stream.expect('lbracket')
items = []
while self.stream.current.type != 'rbracket':
if items:
self.stream.expect('comma')
if self.stream.current.type == 'rbracket':
break
items.append(self.parse_expression())
self.stream.expect('rbracket')
return nodes.List(items, lineno=token.lineno)
def parse_dict(self):
token = self.stream.expect('lbrace')
items = []
while self.stream.current.type != 'rbrace':
if items:
self.stream.expect('comma')
if self.stream.current.type == 'rbrace':
break
key = self.parse_expression()
self.stream.expect('colon')
value = self.parse_expression()
items.append(nodes.Pair(key, value, lineno=key.lineno))
self.stream.expect('rbrace')
return nodes.Dict(items, lineno=token.lineno)
def parse_postfix(self, node):
while 1:
token_type = self.stream.current.type
if token_type == 'dot' or token_type == 'lbracket':
node = self.parse_subscript(node)
# calls are valid both after postfix expressions (getattr
# and getitem) as well as filters and tests
elif token_type == 'lparen':
node = self.parse_call(node)
else:
break
return node
def parse_filter_expr(self, node):
while 1:
token_type = self.stream.current.type
if token_type == 'pipe':
node = self.parse_filter(node)
elif token_type == 'name' and self.stream.current.value == 'is':
node = self.parse_test(node)
# calls are valid both after postfix expressions (getattr
# and getitem) as well as filters and tests
elif token_type == 'lparen':
node = self.parse_call(node)
else:
break
return node
def parse_subscript(self, node):
token = next(self.stream)
if token.type == 'dot':
attr_token = self.stream.current
next(self.stream)
if attr_token.type == 'name':
return nodes.Getattr(node, attr_token.value, 'load',
lineno=token.lineno)
elif attr_token.type != 'integer':
self.fail('expected name or number', attr_token.lineno)
arg = nodes.Const(attr_token.value, lineno=attr_token.lineno)
return nodes.Getitem(node, arg, 'load', lineno=token.lineno)
if token.type == 'lbracket':
args = []
while self.stream.current.type != 'rbracket':
if args:
self.stream.expect('comma')
args.append(self.parse_subscribed())
self.stream.expect('rbracket')
if len(args) == 1:
arg = args[0]
else:
arg = nodes.Tuple(args, 'load', lineno=token.lineno)
return nodes.Getitem(node, arg, 'load', lineno=token.lineno)
self.fail('expected subscript expression', token.lineno)
def parse_subscribed(self):
lineno = self.stream.current.lineno
if self.stream.current.type == 'colon':
next(self.stream)
args = [None]
else:
node = self.parse_expression()
if self.stream.current.type != 'colon':
return node
next(self.stream)
args = [node]
if self.stream.current.type == 'colon':
args.append(None)
elif self.stream.current.type not in ('rbracket', 'comma'):
args.append(self.parse_expression())
else:
args.append(None)
if self.stream.current.type == 'colon':
next(self.stream)
if self.stream.current.type not in ('rbracket', 'comma'):
args.append(self.parse_expression())
else:
args.append(None)
else:
args.append(None)
return nodes.Slice(lineno=lineno, *args)
def parse_call(self, node):
token = self.stream.expect('lparen')
args = []
kwargs = []
dyn_args = dyn_kwargs = None
require_comma = False
def ensure(expr):
if not expr:
self.fail('invalid syntax for function call expression',
token.lineno)
while self.stream.current.type != 'rparen':
if require_comma:
self.stream.expect('comma')
# support for trailing comma
if self.stream.current.type == 'rparen':
break
if self.stream.current.type == 'mul':
ensure(dyn_args is None and dyn_kwargs is None)
next(self.stream)
dyn_args = self.parse_expression()
elif self.stream.current.type == 'pow':
ensure(dyn_kwargs is None)
next(self.stream)
dyn_kwargs = self.parse_expression()
else:
ensure(dyn_args is None and dyn_kwargs is None)
if self.stream.current.type == 'name' and \
self.stream.look().type == 'assign':
key = self.stream.current.value
self.stream.skip(2)
value = self.parse_expression()
kwargs.append(nodes.Keyword(key, value,
lineno=value.lineno))
else:
ensure(not kwargs)
args.append(self.parse_expression())
require_comma = True
self.stream.expect('rparen')
if node is None:
return args, kwargs, dyn_args, dyn_kwargs
return nodes.Call(node, args, kwargs, dyn_args, dyn_kwargs,
lineno=token.lineno)
def parse_filter(self, node, start_inline=False):
while self.stream.current.type == 'pipe' or start_inline:
if not start_inline:
next(self.stream)
token = self.stream.expect('name')
name = token.value
while self.stream.current.type == 'dot':
next(self.stream)
name += '.' + self.stream.expect('name').value
if self.stream.current.type == 'lparen':
args, kwargs, dyn_args, dyn_kwargs = self.parse_call(None)
else:
args = []
kwargs = []
dyn_args = dyn_kwargs = None
node = nodes.Filter(node, name, args, kwargs, dyn_args,
dyn_kwargs, lineno=token.lineno)
start_inline = False
return node
def parse_test(self, node):
token = next(self.stream)
if self.stream.current.test('name:not'):
next(self.stream)
negated = True
else:
negated = False
name = self.stream.expect('name').value
while self.stream.current.type == 'dot':
next(self.stream)
name += '.' + self.stream.expect('name').value
dyn_args = dyn_kwargs = None
kwargs = []
if self.stream.current.type == 'lparen':
args, kwargs, dyn_args, dyn_kwargs = self.parse_call(None)
elif self.stream.current.type in ('name', 'string', 'integer',
'float', 'lparen', 'lbracket',
'lbrace') and not \
self.stream.current.test_any('name:else', 'name:or',
'name:and'):
if self.stream.current.test('name:is'):
self.fail('You cannot chain multiple tests with is')
args = [self.parse_expression()]
else:
args = []
node = nodes.Test(node, name, args, kwargs, dyn_args,
dyn_kwargs, lineno=token.lineno)
if negated:
node = nodes.Not(node, lineno=token.lineno)
return node
def subparse(self, end_tokens=None):
body = []
data_buffer = []
add_data = data_buffer.append
if end_tokens is not None:
self._end_token_stack.append(end_tokens)
def flush_data():
if data_buffer:
lineno = data_buffer[0].lineno
body.append(nodes.Output(data_buffer[:], lineno=lineno))
del data_buffer[:]
try:
while self.stream:
token = self.stream.current
if token.type == 'data':
if token.value:
add_data(nodes.TemplateData(token.value,
lineno=token.lineno))
next(self.stream)
elif token.type == 'variable_begin':
next(self.stream)
add_data(self.parse_tuple(with_condexpr=True))
self.stream.expect('variable_end')
elif token.type == 'block_begin':
flush_data()
next(self.stream)
if end_tokens is not None and \
self.stream.current.test_any(*end_tokens):
return body
rv = self.parse_statement()
if isinstance(rv, list):
body.extend(rv)
else:
body.append(rv)
self.stream.expect('block_end')
else:
raise AssertionError('internal parsing error')
flush_data()
finally:
if end_tokens is not None:
self._end_token_stack.pop()
return body
def parse(self):
"""Parse the whole template into a `Template` node."""
result = nodes.Template(self.subparse(), lineno=1)
result.set_environment(self.environment)
return result
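# A minimal usage sketch (illustrative, assuming the surrounding jinja2
# package is importable): Environment.parse() drives the Parser above and
# returns the nodes.Template AST produced by Parser.parse().
if __name__ == '__main__':
    from jinja2 import Environment
    ast = Environment().parse('{{ 1 + 2 * 3 }}')
    # The precedence chain (parse_add -> parse_mul -> ...) yields
    # Add(Const(1), Mul(Const(2), Const(3))) inside an Output node.
    print(ast.body[0].nodes[0])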
|
apache-2.0
|
orbitfp7/nova
|
nova/tests/unit/objects/test_virt_cpu_topology.py
|
94
|
1397
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import objects
from nova.tests.unit.objects import test_objects
_top_dict = {
'sockets': 2,
'cores': 4,
'threads': 8
}
class _TestVirtCPUTopologyObject(object):
def test_object_from_dict(self):
top_obj = objects.VirtCPUTopology.from_dict(_top_dict)
self.compare_obj(top_obj, _top_dict)
def test_object_to_dict(self):
top_obj = objects.VirtCPUTopology()
top_obj.sockets = 2
top_obj.cores = 4
top_obj.threads = 8
spec = top_obj.to_dict()
self.assertEqual(_top_dict, spec)
class TestVirtCPUTopologyObject(test_objects._LocalTest,
_TestVirtCPUTopologyObject):
pass
class TestRemoteVirtCPUTopologyObject(test_objects._RemoteTest,
_TestVirtCPUTopologyObject):
pass
|
apache-2.0
|
nesdis/djongo
|
tests/django_tests/tests/v22/tests/validation/test_unique.py
|
86
|
6715
|
import datetime
import unittest
from django.apps.registry import Apps
from django.core.exceptions import ValidationError
from django.db import models
from django.test import TestCase
from .models import (
CustomPKModel, FlexibleDatePost, ModelToValidate, Post, UniqueErrorsModel,
UniqueFieldsModel, UniqueForDateModel, UniqueTogetherModel,
)
class GetUniqueCheckTests(unittest.TestCase):
def test_unique_fields_get_collected(self):
m = UniqueFieldsModel()
self.assertEqual(
([(UniqueFieldsModel, ('id',)),
(UniqueFieldsModel, ('unique_charfield',)),
(UniqueFieldsModel, ('unique_integerfield',))],
[]),
m._get_unique_checks()
)
def test_unique_together_gets_picked_up_and_converted_to_tuple(self):
m = UniqueTogetherModel()
self.assertEqual(
([(UniqueTogetherModel, ('ifield', 'cfield')),
(UniqueTogetherModel, ('ifield', 'efield')),
(UniqueTogetherModel, ('id',))],
[]),
m._get_unique_checks()
)
def test_unique_together_normalization(self):
"""
Test the Meta.unique_together normalization with different sorts of
objects.
"""
data = {
'2-tuple': (('foo', 'bar'), (('foo', 'bar'),)),
'list': (['foo', 'bar'], (('foo', 'bar'),)),
'already normalized': ((('foo', 'bar'), ('bar', 'baz')),
(('foo', 'bar'), ('bar', 'baz'))),
'set': ({('foo', 'bar'), ('bar', 'baz')}, # Ref #21469
(('foo', 'bar'), ('bar', 'baz'))),
}
for unique_together, normalized in data.values():
class M(models.Model):
foo = models.IntegerField()
bar = models.IntegerField()
baz = models.IntegerField()
Meta = type('Meta', (), {
'unique_together': unique_together,
'apps': Apps()
})
checks, _ = M()._get_unique_checks()
for t in normalized:
check = (M, t)
self.assertIn(check, checks)
def test_primary_key_is_considered_unique(self):
m = CustomPKModel()
self.assertEqual(([(CustomPKModel, ('my_pk_field',))], []), m._get_unique_checks())
def test_unique_for_date_gets_picked_up(self):
m = UniqueForDateModel()
self.assertEqual((
[(UniqueForDateModel, ('id',))],
[(UniqueForDateModel, 'date', 'count', 'start_date'),
(UniqueForDateModel, 'year', 'count', 'end_date'),
(UniqueForDateModel, 'month', 'order', 'end_date')]
), m._get_unique_checks()
)
def test_unique_for_date_exclusion(self):
m = UniqueForDateModel()
self.assertEqual((
[(UniqueForDateModel, ('id',))],
[(UniqueForDateModel, 'year', 'count', 'end_date'),
(UniqueForDateModel, 'month', 'order', 'end_date')]
), m._get_unique_checks(exclude='start_date')
)
class PerformUniqueChecksTest(TestCase):
def test_primary_key_unique_check_not_performed_when_adding_and_pk_not_specified(self):
# Regression test for #12560
with self.assertNumQueries(0):
mtv = ModelToValidate(number=10, name='Some Name')
setattr(mtv, '_adding', True)
mtv.full_clean()
def test_primary_key_unique_check_performed_when_adding_and_pk_specified(self):
# Regression test for #12560
with self.assertNumQueries(1):
mtv = ModelToValidate(number=10, name='Some Name', id=123)
setattr(mtv, '_adding', True)
mtv.full_clean()
def test_primary_key_unique_check_not_performed_when_not_adding(self):
# Regression test for #12132
with self.assertNumQueries(0):
mtv = ModelToValidate(number=10, name='Some Name')
mtv.full_clean()
def test_unique_for_date(self):
Post.objects.create(
title="Django 1.0 is released", slug="Django 1.0",
subtitle="Finally", posted=datetime.date(2008, 9, 3),
)
p = Post(title="Django 1.0 is released", posted=datetime.date(2008, 9, 3))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'title': ['Title must be unique for Posted date.']})
# Should work without errors
p = Post(title="Work on Django 1.1 begins", posted=datetime.date(2008, 9, 3))
p.full_clean()
# Should work without errors
p = Post(title="Django 1.0 is released", posted=datetime.datetime(2008, 9, 4))
p.full_clean()
p = Post(slug="Django 1.0", posted=datetime.datetime(2008, 1, 1))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'slug': ['Slug must be unique for Posted year.']})
p = Post(subtitle="Finally", posted=datetime.datetime(2008, 9, 30))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'subtitle': ['Subtitle must be unique for Posted month.']})
p = Post(title="Django 1.0 is released")
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'posted': ['This field cannot be null.']})
def test_unique_for_date_with_nullable_date(self):
"""
unique_for_date/year/month checks shouldn't trigger when the
associated DateField is None.
"""
FlexibleDatePost.objects.create(
title="Django 1.0 is released", slug="Django 1.0",
subtitle="Finally", posted=datetime.date(2008, 9, 3),
)
p = FlexibleDatePost(title="Django 1.0 is released")
p.full_clean()
p = FlexibleDatePost(slug="Django 1.0")
p.full_clean()
p = FlexibleDatePost(subtitle="Finally")
p.full_clean()
def test_unique_errors(self):
UniqueErrorsModel.objects.create(name='Some Name', no=10)
m = UniqueErrorsModel(name='Some Name', no=11)
with self.assertRaises(ValidationError) as cm:
m.full_clean()
self.assertEqual(cm.exception.message_dict, {'name': ['Custom unique name message.']})
m = UniqueErrorsModel(name='Some Other Name', no=10)
with self.assertRaises(ValidationError) as cm:
m.full_clean()
self.assertEqual(cm.exception.message_dict, {'no': ['Custom unique number message.']})
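# A minimal model sketch (illustrative; the real definitions live in the
# .models module imported above, and the field sizes here are assumptions).
# unique_for_date/year/month each name a DateField on the same model,
# which is what the Post assertions above exercise:
#
#     class Post(models.Model):
#         title = models.CharField(max_length=50, unique_for_date='posted')
#         slug = models.SlugField(max_length=50, unique_for_year='posted')
#         subtitle = models.CharField(max_length=50, unique_for_month='posted')
#         posted = models.DateField()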
|
agpl-3.0
|
mzdaniel/oh-mainline
|
vendor/packages/Django/django/core/files/utils.py
|
901
|
1230
|
class FileProxyMixin(object):
"""
A mixin class used to forward file methods to an underlying file
object. The internal file object has to be called "file"::
class FileProxy(FileProxyMixin):
def __init__(self, file):
self.file = file
"""
encoding = property(lambda self: self.file.encoding)
fileno = property(lambda self: self.file.fileno)
flush = property(lambda self: self.file.flush)
isatty = property(lambda self: self.file.isatty)
newlines = property(lambda self: self.file.newlines)
read = property(lambda self: self.file.read)
readinto = property(lambda self: self.file.readinto)
readline = property(lambda self: self.file.readline)
readlines = property(lambda self: self.file.readlines)
seek = property(lambda self: self.file.seek)
softspace = property(lambda self: self.file.softspace)
tell = property(lambda self: self.file.tell)
truncate = property(lambda self: self.file.truncate)
write = property(lambda self: self.file.write)
writelines = property(lambda self: self.file.writelines)
xreadlines = property(lambda self: self.file.xreadlines)
def __iter__(self):
return iter(self.file)
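# Usage sketch (illustrative): any object exposing a ``file`` attribute
# picks up the proxied methods, e.g.
#
#     class FileProxy(FileProxyMixin):
#         def __init__(self, file):
#             self.file = file
#
#     proxy = FileProxy(open('/tmp/example.txt', 'w'))  # hypothetical path
#     proxy.write('hello')
#     proxy.flush()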
|
agpl-3.0
|
f3zz3h/Embedded-Systems-Development
|
Client/Python-RTSP/keypad.py
|
1
|
6324
|
__author__ = "Luke Hart"
__copyright__ = "Copyright 2014, The Jeff Museum"
__credits__ = ["Luke Hart","Joe Ellis", "Chris Sewell", "Matt Ribbins",
"Sebastian Beaven", "Joshua Webb", "Andrew Bremmer"]
__license__ = "GPL"
__version__ = "0.9"
__maintainer__ = "Luke Hart"
__email__ = "luke2.hart@live.uwe.ac.uk"
__status__ = "Development"
import serial
import time
import io
import math
#Defined values
TTY = '/dev/ttyACM0'
BAUD = 115200
START = '@00'
END = '\r'
A = 0
B = 1
C = 2
OUTPUT = 0
INPUT = 1
#Input / Output - port A: both, port B: keypad, port C: 7-seg LEDs
PORTABC_INPUT = ['@00D0FF\r','@00D1FF\r', '@00D2FF\r']
PORTABC_OUTPUT = ['@00D000\r','@00D100\r', '@00D200\r']
#Select column
SELECT_COLUMN = ['@00P001\r','@00P002\r','@00P004\r','@00P008\r']
CHECK_BUTTON = '@00P1?\r'
#Button defined names
BACK = 10
DOWN = 11
ACCEPT = 12
REWIND = 13
STOP = 0
PLAY = 14
FFWD = 15
VOLUP = 6
VOLDOWN = 9
PAUSE = 8
MUTE = 1
DEAUTH = 2
KEYPAD = [[ 1 ,2 ,3 ,10 ],
[ 4 ,5 ,6 ,11 ],
[ 7 ,8 ,9 ,12 ],
[ 13,0 ,14,15 ]]
class PIO:
"""
The pio class handles the IO through the BMCM USB-PIO
to the keypad and 7segment display array.
"""
def __init__(self):
"""
Initialize the serial port and create a TextIO serial IO port
which allows readline based on the PIO EOL of '\r'
"""
self.ser = serial.Serial(TTY, baudrate=BAUD, timeout=1)
self.port_setup(A, OUTPUT)
time.sleep(0.2)
self.port_setup(B, INPUT)
time.sleep(0.2)
self.port_setup(C, OUTPUT)
time.sleep(0.2)
self.ser_io = io.TextIOWrapper(io.BufferedRWPair(self.ser, self.ser, 1),
newline = '\r',
line_buffering = True)
self.ser_io.readlines()
def port_setup(self, port, io):
"""
Set ports to input output based on param io
"""
if (io == OUTPUT):
self.ser.write(PORTABC_OUTPUT[port])
elif (io == INPUT):
self.ser.write(PORTABC_INPUT[port])
def write(self,cmd):
"""
Write the cmd param to the serial port and return the
raw response unedited
"""
self.ser_io.flush()
self.ser_io.write(unicode(cmd))
retval = self.ser_io.readline()
self.ser_io.flush()
return retval
def setup_display(self, display):
"""
Clear the seven-seg display and select
the appropriate column for displaying based on
the display param
"""
# Clear first
self.write('@00P200\r')
# now select output display
self.write(SELECT_COLUMN[display])
def display(self, num):
"""
params: num[4] array of size 4 which takes numbers from 0-9 for each LED digit
ToDo: could just be a number 0-9999 broken into components, but for now
this works
"""
#for x in range(0, 10):
for ssd in range (0,4):
self.setup_display(ssd)
self.write('@00P2' + self.ledSwitch(str(num[ssd])) + END )
#Clear display at finish
for disp in range (0,4):
self.setup_display(disp)
def close(self):
"""
Close the serial connections
"""
self.ser_io.close()
self.ser.close()
def keypad_read(self):
"""
Get an array of 4 numbers representing the columns
on the keypad and return them.
"""
keys = [0,0,0,0]
#loop for each column
for i in range(0,4):
self.write(SELECT_COLUMN[i])
key = self.write(CHECK_BUTTON)
#Strip whitespace and preceding !
key = key.lstrip('!')
key = key.rstrip()
#Check key isn't 0
if(int(key) != 0):
keys[i] = int(key)
return keys
def ledSwitch(self, choice):
"""
Decimal to 7-segment display value: switch on
the param choice and return the hex segment pattern
"""
return {
'0' : '3f', '1' : '06', '2' : '5B',
'3' : '4f', '4' : '66', '5' : '6D',
'6' : '7D', '7' : '07', '8' : '7F',
'9' : '6F', 'A' : '77', 'B' : '7C',
'C' : '39', 'D' : '5E', 'E' : '79',
'F' : '71', '.' : '80', ' ' : '00'
}[choice]
def keypadSwitch(self, column, value):
"""
Return a keypad value from the keypad table
"""
try:
keyPress = KEYPAD[int(math.log(value,2))][column]
return keyPress
except:
return None
def readWriteKeypad(self, numberOfValues=4, numOnly=True):
"""
Read keypad one number at a time and display each value
"""
output = [' ',' ',' ',' ']
#Loop once for each number required
for sseg in range(0,numberOfValues):
output[sseg] = '.'
#Clear any leftovers from the previous run!
self.ser_io.readlines()
gotNum = False
while(gotNum == False):
#get col values and check each for a key press
keys = self.keypad_read()
for col in range (0,4):
#Make sure a button and only one button is pressed
if ((keys[col] > 0) and (keys[col] < 9)):
if (( ((keys[col] % 2) == 0) or (keys[col] == 1) ) and (keys[col] != 6) ):
output[sseg] = self.keypadSwitch(col, keys[col])
if (numOnly == True):
if ((output[sseg] >= 0) and (output[sseg] < 10)):
gotNum = True
break
else:
output[sseg] = '.'
else:
gotNum = True
break
self.display(output)
return output
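# Worked decode example (illustrative, no hardware required): CHECK_BUTTON
# reports the pressed row as a power of two, and keypadSwitch() maps the
# (column, row bit) pair onto the KEYPAD table above via log2.
if __name__ == '__main__':
    for value in (1, 2, 4, 8):
        row = int(math.log(value, 2))
        print('column 2, bit %d -> key %d' % (value, KEYPAD[row][2]))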
|
gpl-3.0
|
beni55/olympia
|
apps/search/views.py
|
13
|
23178
|
from django import http
from django.conf import settings
from django.db.models import Q
from django.shortcuts import render
from django.utils import translation
from django.utils.encoding import smart_str
from django.views.decorators.vary import vary_on_headers
import commonware.log
from mobility.decorators import mobile_template
from tower import ugettext as _
import amo
import bandwagon.views
import browse.views
from addons.models import Addon, Category
from amo.decorators import json_view
from amo.helpers import locale_url, urlparams
from amo.utils import sorted_groupby
from bandwagon.models import Collection
from versions.compare import dict_from_int, version_dict, version_int
from .forms import ESSearchForm, SecondarySearchForm
DEFAULT_NUM_COLLECTIONS = 20
DEFAULT_NUM_PERSONAS = 21 # Results appear in a grid of 3 personas x 7 rows.
log = commonware.log.getLogger('z.search')
def _personas(request):
"""Handle the request for persona searches."""
initial = dict(request.GET.items())
# Ignore these filters since they return the same results for Firefox
# as for Thunderbird, etc.
initial.update(appver=None, platform=None)
form = ESSearchForm(initial, type=amo.ADDON_PERSONA)
form.is_valid()
qs = Addon.search_public()
filters = ['sort']
mapping = {'downloads': '-weekly_downloads',
'users': '-average_daily_users',
'rating': '-bayesian_rating',
'created': '-created',
'name': 'name_sort',
'updated': '-last_updated',
'hotness': '-hotness'}
results = _filter_search(request, qs, form.cleaned_data, filters,
sorting=mapping, types=[amo.ADDON_PERSONA])
form_data = form.cleaned_data.get('q', '')
search_opts = {}
search_opts['limit'] = form.cleaned_data.get('pp', DEFAULT_NUM_PERSONAS)
page = form.cleaned_data.get('page') or 1
search_opts['offset'] = (page - 1) * search_opts['limit']
pager = amo.utils.paginate(request, results, per_page=search_opts['limit'])
categories, filter, base, category = browse.views.personas_listing(request)
c = dict(pager=pager, form=form, categories=categories, query=form_data,
filter=filter, search_placeholder='themes')
return render(request, 'search/personas.html', c)
def _collections(request):
"""Handle the request for collections."""
# Sorting by relevance isn't an option. Instead the default is `weekly`.
initial = dict(sort='weekly')
# Update with GET variables.
initial.update(request.GET.items())
# Ignore appver/platform and set default number of collections per page.
initial.update(appver=None, platform=None, pp=DEFAULT_NUM_COLLECTIONS)
form = SecondarySearchForm(initial)
form.is_valid()
qs = Collection.search().filter(listed=True, app=request.APP.id)
filters = ['sort']
mapping = {'weekly': '-weekly_subscribers',
'monthly': '-monthly_subscribers',
'all': '-subscribers',
'rating': '-rating',
'created': '-created',
'name': 'name_sort',
'updated': '-modified'}
results = _filter_search(request, qs, form.cleaned_data, filters,
sorting=mapping,
sorting_default='-weekly_subscribers',
types=amo.COLLECTION_SEARCH_CHOICES)
form_data = form.cleaned_data.get('q', '')
search_opts = {}
search_opts['limit'] = form.cleaned_data.get('pp', DEFAULT_NUM_COLLECTIONS)
page = form.cleaned_data.get('page') or 1
search_opts['offset'] = (page - 1) * search_opts['limit']
search_opts['sort'] = form.cleaned_data.get('sort')
pager = amo.utils.paginate(request, results, per_page=search_opts['limit'])
c = dict(pager=pager, form=form, query=form_data, opts=search_opts,
filter=bandwagon.views.get_filter(request),
search_placeholder='collections')
return render(request, 'search/collections.html', c)
class BaseAjaxSearch(object):
"""Generates a list of dictionaries of add-on objects based on
ID or name matches. Safe to be served to a JSON-friendly view.
Sample output:
[
{
"id": 1865,
"name": "Adblock Plus",
"url": "http://path/to/details/page",
"icons": {
"32": "http://path/to/icon-32",
"64": "http://path/to/icon-64"
}
},
...
]
"""
def __init__(self, request, excluded_ids=(), ratings=False):
self.request = request
self.excluded_ids = excluded_ids
self.src = getattr(self, 'src', None)
self.types = getattr(self, 'types', amo.ADDON_TYPES.keys())
self.limit = 10
self.key = 'q' # Name of search field.
self.ratings = ratings
# Mapping of JSON key => add-on property.
default_fields = {
'id': 'id',
'name': 'name',
'url': 'get_url_path',
'icons': {
'32': ('get_icon_url', 32),
'64': ('get_icon_url', 64)
}
}
self.fields = getattr(self, 'fields', default_fields)
if self.ratings:
self.fields['rating'] = 'average_rating'
def queryset(self):
"""Get items based on ID or search by name."""
results = Addon.objects.none()
q = self.request.GET.get(self.key)
if q:
try:
pk = int(q)
except ValueError:
pk = None
qs = None
if pk:
qs = Addon.objects.reviewed().filter(id=int(q))
elif len(q) > 2:
qs = (Addon.search_public()
.query(or_=name_only_query(q.lower())))
if qs:
results = qs.filter(type__in=self.types)
return results
def _build_fields(self, item, fields):
data = {}
for key, prop in fields.iteritems():
if isinstance(prop, dict):
data[key] = self._build_fields(item, prop)
else:
# prop is a tuple like: ('method', 'arg1', 'argN').
if isinstance(prop, tuple):
val = getattr(item, prop[0])(*prop[1:])
else:
val = getattr(item, prop, '')
if callable(val):
val = val()
data[key] = unicode(val)
return data
def build_list(self):
"""Populate a list of dictionaries based on label => property."""
results = []
for item in self.queryset()[:self.limit]:
if item.id in self.excluded_ids:
continue
d = self._build_fields(item, self.fields)
if self.src and 'url' in d:
d['url'] = urlparams(d['url'], src=self.src)
results.append(d)
return results
@property
def items(self):
return self.build_list()
class SearchSuggestionsAjax(BaseAjaxSearch):
src = 'ss'
class AddonSuggestionsAjax(SearchSuggestionsAjax):
# No personas.
types = [amo.ADDON_ANY, amo.ADDON_EXTENSION, amo.ADDON_THEME,
amo.ADDON_DICT, amo.ADDON_SEARCH, amo.ADDON_LPAPP]
class PersonaSuggestionsAjax(SearchSuggestionsAjax):
types = [amo.ADDON_PERSONA]
@json_view
def ajax_search(request):
"""This is currently used only to return add-ons for populating a
new collection. Themes (formerly Personas) are included by default, so
this can be used elsewhere.
"""
search_obj = BaseAjaxSearch(request)
search_obj.types = amo.ADDON_SEARCH_TYPES
return search_obj.items
@json_view
def ajax_search_suggestions(request):
cat = request.GET.get('cat', 'all')
suggesterClass = {
'all': AddonSuggestionsAjax,
'themes': PersonaSuggestionsAjax,
}.get(cat, AddonSuggestionsAjax)
suggester = suggesterClass(request, ratings=False)
return _build_suggestions(
request,
cat,
suggester)
def _build_suggestions(request, cat, suggester):
results = []
q = request.GET.get('q')
if q and (q.isdigit() or len(q) > 2):
q_ = q.lower()
if cat != 'apps':
# Applications.
for a in amo.APP_USAGE:
name_ = unicode(a.pretty).lower()
word_matches = [w for w in q_.split() if name_ in w]
if q_ in name_ or word_matches:
results.append({
'id': a.id,
'name': _(u'{0} Add-ons').format(a.pretty),
'url': locale_url(a.short),
'cls': 'app ' + a.short
})
# Categories.
cats = Category.objects
cats = cats.filter(Q(application=request.APP.id) |
Q(type=amo.ADDON_SEARCH))
if cat == 'themes':
cats = cats.filter(type=amo.ADDON_PERSONA)
else:
cats = cats.exclude(type=amo.ADDON_PERSONA)
for c in cats:
if not c.name:
continue
name_ = unicode(c.name).lower()
word_matches = [w for w in q_.split() if name_ in w]
if q_ in name_ or word_matches:
results.append({
'id': c.id,
'name': unicode(c.name),
'url': c.get_url_path(),
'cls': 'cat'
})
results += suggester.items
return results
def _get_locale_analyzer():
analyzer = amo.SEARCH_LANGUAGE_TO_ANALYZER.get(translation.get_language())
if not settings.ES_USE_PLUGINS and analyzer in amo.SEARCH_ANALYZER_PLUGINS:
return None
return analyzer
def name_only_query(q):
d = {}
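# NOTE: 'match' appears twice in the dict literal below; Python keeps only
# the last duplicate key, so the phrase query (boost=4) silently replaces
# the standard-analyzer query (boost=3).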
rules = {'match': {'query': q, 'boost': 3, 'analyzer': 'standard'},
'match': {'query': q, 'boost': 4, 'type': 'phrase'},
'fuzzy': {'value': q, 'boost': 2, 'prefix_length': 4},
'startswith': {'value': q, 'boost': 1.5}}
for k, v in rules.iteritems():
for field in ('name', 'slug', 'authors'):
d['%s__%s' % (field, k)] = v
analyzer = _get_locale_analyzer()
if analyzer:
d['name_%s__match' % analyzer] = {'query': q, 'boost': 2.5,
'analyzer': analyzer}
return d
def name_query(q):
# * Prefer text matches first, using the standard text analyzer (boost=3).
# * Then text matches, using language-specific analyzer (boost=2.5).
# * Then try fuzzy matches ("fire bug" => firebug) (boost=2).
# * Then look for the query as a prefix of a name (boost=1.5).
# * Look for phrase matches inside the summary (boost=0.8).
# * Look for phrase matches inside the summary using language specific
# analyzer (boost=0.6).
# * Look for phrase matches inside the description (boost=0.3).
# * Look for phrase matches inside the description using language
# specific analyzer (boost=0.1).
# * Look for matches inside tags (boost=0.1).
more = dict(summary__match={'query': q, 'boost': 0.8, 'type': 'phrase'},
description__match={'query': q, 'boost': 0.3,
'type': 'phrase'},
tags__match={'query': q.split(), 'boost': 0.1})
analyzer = _get_locale_analyzer()
if analyzer:
more['summary_%s__match' % analyzer] = {'query': q,
'boost': 0.6,
'type': 'phrase',
'analyzer': analyzer}
more['description_%s__match' % analyzer] = {'query': q,
'boost': 0.1,
'type': 'phrase',
'analyzer': analyzer}
return dict(more, **name_only_query(q))
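# Illustrative sketch of the returned shape (field__rule keys for the ES
# query DSL); for q='firebug' with no locale analyzer it includes, among
# others:
#
#     {'name__match': {'query': 'firebug', 'boost': 4, 'type': 'phrase'},
#      'name__fuzzy': {'value': 'firebug', 'boost': 2, 'prefix_length': 4},
#      'name__startswith': {'value': 'firebug', 'boost': 1.5},
#      'summary__match': {'query': 'firebug', 'boost': 0.8, 'type': 'phrase'},
#      'tags__match': {'query': ['firebug'], 'boost': 0.1}}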
def _filter_search(request, qs, query, filters, sorting,
sorting_default='-weekly_downloads', types=[]):
"""Filter an ES queryset based on a list of filters."""
APP = request.APP
# Intersection of the form fields present and the filters we want to apply.
show = [f for f in filters if query.get(f)]
if query.get('q'):
qs = qs.query(or_=name_query(query['q']))
if 'platform' in show and query['platform'] in amo.PLATFORM_DICT:
ps = (amo.PLATFORM_DICT[query['platform']].id, amo.PLATFORM_ALL.id)
# If we've selected "All Systems" don't filter by platform.
if ps[0] != ps[1]:
qs = qs.filter(platform__in=ps)
if 'appver' in show:
# Get a min version less than X.0.
low = version_int(query['appver'])
# Get a max version greater than X.0a.
high = version_int(query['appver'] + 'a')
# If we're not using D2C then fall back to appversion checking.
extensions_shown = (not query.get('atype') or
query['atype'] == amo.ADDON_EXTENSION)
if not extensions_shown or low < version_int('10.0'):
qs = qs.filter(**{'appversion.%s.max__gte' % APP.id: high,
'appversion.%s.min__lte' % APP.id: low})
if 'atype' in show and query['atype'] in amo.ADDON_TYPES:
qs = qs.filter(type=query['atype'])
else:
qs = qs.filter(type__in=types)
if 'cat' in show:
cat = (Category.objects.filter(id=query['cat'])
.filter(Q(application=APP.id) | Q(type=amo.ADDON_SEARCH)))
if not cat.exists():
show.remove('cat')
if 'cat' in show:
qs = qs.filter(category=query['cat'])
if 'tag' in show:
qs = qs.filter(tag=query['tag'])
if 'sort' in show:
qs = qs.order_by(sorting[query['sort']])
elif not query.get('q'):
# Sort by a default if there was no query so results are predictable.
qs = qs.order_by(sorting_default)
return qs
@mobile_template('search/{mobile/}results.html')
@vary_on_headers('X-PJAX')
def search(request, tag_name=None, template=None):
APP = request.APP
types = (amo.ADDON_EXTENSION, amo.ADDON_THEME, amo.ADDON_DICT,
amo.ADDON_SEARCH, amo.ADDON_LPAPP)
category = request.GET.get('cat')
if category == 'collections':
extra_params = {'sort': {'newest': 'created'}}
else:
extra_params = None
fixed = fix_search_query(request.GET, extra_params=extra_params)
if fixed is not request.GET:
return http.HttpResponsePermanentRedirect(urlparams(request.path,
**fixed))
facets = request.GET.copy()
# In order to differentiate between "all versions" and an undefined value,
# we use "any" instead of "" in the frontend.
if 'appver' in facets and facets['appver'] == 'any':
facets['appver'] = ''
form = ESSearchForm(facets or {})
form.is_valid() # Let the form try to clean data.
form_data = form.cleaned_data
if tag_name:
form_data['tag'] = tag_name
if category == 'collections':
return _collections(request)
elif category == 'themes' or form_data.get('atype') == amo.ADDON_PERSONA:
return _personas(request)
sort, extra_sort = split_choices(form.sort_choices, 'created')
if form_data.get('atype') == amo.ADDON_SEARCH:
# Search add-ons should not be searched by ADU, so replace 'Users'
# sort with 'Weekly Downloads'.
sort, extra_sort = list(sort), list(extra_sort)
sort[1] = extra_sort[1]
del extra_sort[1]
qs = (Addon.search_public().filter(app=APP.id)
.facet(tags={'terms': {'field': 'tag'}},
platforms={'terms': {'field': 'platform'}},
appversions={'terms':
{'field': 'appversion.%s.max' % APP.id}},
categories={'terms': {'field': 'category', 'size': 200}}))
filters = ['atype', 'appver', 'cat', 'sort', 'tag', 'platform']
mapping = {'users': '-average_daily_users',
'rating': '-bayesian_rating',
'created': '-created',
'name': 'name_sort',
'downloads': '-weekly_downloads',
'updated': '-last_updated',
'hotness': '-hotness'}
qs = _filter_search(request, qs, form_data, filters, mapping, types=types)
pager = amo.utils.paginate(request, qs)
ctx = {
'is_pjax': request.META.get('HTTP_X_PJAX'),
'pager': pager,
'query': form_data,
'form': form,
'sort_opts': sort,
'extra_sort_opts': extra_sort,
'sorting': sort_sidebar(request, form_data, form),
'sort': form_data.get('sort'),
}
if not ctx['is_pjax']:
facets = pager.object_list.facets
ctx.update({
'tag': tag_name,
'categories': category_sidebar(request, form_data, facets),
'platforms': platform_sidebar(request, form_data, facets),
'versions': version_sidebar(request, form_data, facets),
'tags': tag_sidebar(request, form_data, facets),
})
return render(request, template, ctx)
class FacetLink(object):
def __init__(self, text, urlparams, selected=False, children=None):
self.text = text
self.urlparams = urlparams
self.selected = selected
self.children = children or []
def sort_sidebar(request, form_data, form):
sort = form_data.get('sort')
return [FacetLink(text, dict(sort=key), key == sort)
for key, text in form.sort_choices]
def category_sidebar(request, form_data, facets):
APP = request.APP
qatype, qcat = form_data.get('atype'), form_data.get('cat')
cats = [f['term'] for f in facets['categories']]
categories = Category.objects.filter(id__in=cats)
if qatype in amo.ADDON_TYPES:
categories = categories.filter(type=qatype)
# Search categories don't have an application.
categories = categories.filter(Q(application=APP.id) |
Q(type=amo.ADDON_SEARCH))
# If category is listed as a facet but type is not, then show All.
if qcat in cats and not qatype:
qatype = True
# If category is not listed as a facet NOR available for this application,
# then show All.
if qcat not in categories.values_list('id', flat=True):
qatype = qcat = None
categories = [(_atype, sorted(_cats, key=lambda x: x.name))
for _atype, _cats in sorted_groupby(categories, 'type')]
rv = []
cat_params = dict(cat=None)
all_label = _(u'All Add-ons')
rv = [FacetLink(all_label, dict(atype=None, cat=None), not qatype)]
for addon_type, cats in categories:
selected = addon_type == qatype and not qcat
# Build the linkparams.
cat_params = cat_params.copy()
cat_params.update(atype=addon_type)
link = FacetLink(amo.ADDON_TYPES[addon_type],
cat_params, selected)
link.children = [FacetLink(c.name, dict(cat_params, **dict(cat=c.id)),
c.id == qcat) for c in cats]
rv.append(link)
return rv
def version_sidebar(request, form_data, facets):
appver = ''
# If appver is in the request, we read it cleaned via form_data.
if 'appver' in request.GET or form_data.get('appver'):
appver = form_data.get('appver')
app = unicode(request.APP.pretty)
exclude_versions = getattr(request.APP, 'exclude_versions', [])
# L10n: {0} is an application, such as Firefox. This means "any version of
# Firefox."
rv = [FacetLink(_(u'Any {0}').format(app), dict(appver='any'), not appver)]
vs = [dict_from_int(f['term']) for f in facets['appversions']]
# Insert the filtered app version even if it's not a facet.
av_dict = version_dict(appver)
if av_dict and av_dict not in vs and av_dict['major']:
vs.append(av_dict)
# Valid versions must be in the form of `major.minor`.
vs = set((v['major'], v['minor1'] if v['minor1'] not in (None, 99) else 0)
for v in vs)
versions = ['%s.%s' % v for v in sorted(vs, reverse=True)]
for version, floated in zip(versions, map(float, versions)):
if (floated not in exclude_versions
and floated > request.APP.min_display_version):
rv.append(FacetLink('%s %s' % (app, version), dict(appver=version),
appver == version))
return rv
def platform_sidebar(request, form_data, facets):
qplatform = form_data.get('platform')
app_platforms = request.APP.platforms.values()
ALL = app_platforms.pop(0)
# The default is to show "All Systems."
selected = amo.PLATFORM_DICT.get(qplatform, ALL)
if selected != ALL and selected not in app_platforms:
# Insert the filtered platform even if it's not a facet.
app_platforms.append(selected)
# L10n: "All Systems" means show everything regardless of platform.
rv = [FacetLink(_(u'All Systems'), dict(platform=ALL.shortname),
selected == ALL)]
for platform in app_platforms:
rv.append(FacetLink(platform.name, dict(platform=platform.shortname),
platform == selected))
return rv
def tag_sidebar(request, form_data, facets):
qtag = form_data.get('tag')
tags = [facet['term'] for facet in facets['tags']]
rv = [FacetLink(_(u'All Tags'), dict(tag=None), not qtag)]
rv += [FacetLink(tag, dict(tag=tag), tag == qtag) for tag in tags]
if qtag and qtag not in tags:
rv += [FacetLink(qtag, dict(tag=qtag), True)]
return rv
def fix_search_query(query, extra_params=None):
rv = dict((smart_str(k), v) for k, v in query.items())
changed = False
# Change old keys to new names.
keys = {
'lver': 'appver',
'pid': 'platform',
}
for old, new in keys.items():
if old in query:
rv[new] = rv.pop(old)
changed = True
# Change old parameter values to new values.
params = {
'sort': {
'newest': 'updated',
'popularity': 'downloads',
'weeklydownloads': 'users',
'averagerating': 'rating',
'sortby': 'sort',
},
'platform': dict((str(p.id), p.shortname)
for p in amo.PLATFORMS.values())
}
if extra_params:
params.update(extra_params)
for key, fixes in params.items():
if key in rv and rv[key] in fixes:
rv[key] = fixes[rv[key]]
changed = True
return rv if changed else query
def split_choices(choices, split):
"""Split a list of [(key, title)] pairs after key == split."""
index = [idx for idx, (key, title) in enumerate(choices)
if key == split]
if index:
index = index[0] + 1
return choices[:index], choices[index:]
else:
return choices, []
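# Usage sketch (illustrative): with choices as (key, title) pairs, splitting
# after 'created' separates the primary sort links from the "more" dropdown:
#
#     choices = [('users', 'Users'), ('created', 'Newest'), ('hotness', 'Hot')]
#     split_choices(choices, 'created')
#     # -> ([('users', 'Users'), ('created', 'Newest')], [('hotness', 'Hot')])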
|
bsd-3-clause
|
sogelink/ansible
|
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py
|
26
|
7794
|
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: profitbricks_volume_attachments
short_description: Attach or detach a volume.
description:
- Allows you to attach a volume to or detach a volume from a ProfitBricks server. This module has a dependency on profitbricks >= 1.0.0
version_added: "2.0"
options:
datacenter:
description:
- The datacenter in which to operate.
required: true
server:
description:
- The name of the server you wish to attach the volume to or detach it from.
required: true
volume:
description:
- The volume name or ID.
required: true
subscription_user:
description:
- The ProfitBricks username. Overrides the PB_SUBSCRIPTION_ID environment variable.
required: false
subscription_password:
description:
- The ProfitBricks password. Overrides the PB_PASSWORD environment variable.
required: false
wait:
description:
- wait for the operation to complete before returning
required: false
default: "yes"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
default: 600
state:
description:
- Indicate desired state of the resource
required: false
default: 'present'
choices: ["present", "absent"]
requirements: [ "profitbricks" ]
author: Matt Baldwin (baldwin@stackpointcloud.com)
'''
EXAMPLES = '''
# Attach a Volume
- profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: present
# Detach a Volume
- profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: absent
'''
import re
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
uuid_match = re.compile(
'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request ' + msg + ' "' + str(
promise['requestId']) + '" failed to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def attach_volume(module, profitbricks):
"""
Attaches a volume.
This will attach a volume to the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was attached, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.attach_volume(datacenter, server, volume)
def detach_volume(module, profitbricks):
"""
Detaches a volume.
This will remove a volume from the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was detached, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.detach_volume(datacenter, server, volume)
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
server=dict(),
volume=dict(),
subscription_user=dict(),
subscription_password=dict(no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required')
if not module.params.get('server'):
module.fail_json(msg='server parameter is required')
if not module.params.get('volume'):
module.fail_json(msg='volume parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
try:
changed = detach_volume(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
elif state == 'present':
try:
attach_volume(module, profitbricks)
module.exit_json()
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
if __name__ == '__main__':
main()
|
gpl-3.0
|
cccfran/sympy
|
sympy/liealgebras/tests/test_weyl_group.py
|
22
|
1587
|
from sympy.liealgebras.weyl_group import WeylGroup
from sympy.liealgebras.type_a import TypeA
from sympy.liealgebras.type_b import TypeB
from sympy.matrices import Matrix
def test_weyl_group():
c = WeylGroup("A3")
assert c.matrix_form('r1*r2') == Matrix([[0, 0, 1, 0], [1, 0, 0, 0],
[0, 1, 0, 0], [0, 0, 0, 1]])
assert c.generators() == ['r1', 'r2', 'r3']
assert c.group_order() == 24.0
assert c.group_name() == "S4: the symmetric group acting on 4 elements."
assert c.coxeter_diagram() == "0---0---0\n1 2 3"
assert c.element_order('r1*r2*r3') == 4
assert c.element_order('r1*r3*r2*r3') == 3
d = WeylGroup("B5")
assert d.group_order() == 3840
assert d.element_order('r1*r2*r4*r5') == 12
assert d.matrix_form('r2*r3') == Matrix([[0, 0, 1, 0, 0], [1, 0, 0, 0, 0],
[0, 1, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]])
assert d.element_order('r1*r2*r1*r3*r5') == 6
e = WeylGroup("D5")
assert e.element_order('r2*r3*r5') == 4
assert e.matrix_form('r2*r3*r5') == Matrix([[1, 0, 0, 0, 0], [0, 0, 0, 0, -1],
[0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, -1, 0]])
f = WeylGroup("G2")
assert f.element_order('r1*r2*r1*r2') == 3
assert f.element_order('r2*r1*r1*r2') == 1
assert f.matrix_form('r1*r2*r1*r2') == Matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
g = WeylGroup("F4")
assert g.matrix_form('r2*r3') == Matrix([[1, 0, 0, 0], [0, 1, 0, 0],
[0, 0, 0, -1], [0, 0, 1, 0]])
assert g.element_order('r2*r3') == 4
h = WeylGroup("E6")
assert h.group_order() == 51840
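# Cross-check sketch (illustrative, not part of the upstream test file):
# the orders asserted above match the classical Weyl group formulas
# |W(A_n)| = (n+1)! and |W(B_n)| = 2**n * n!.
def test_weyl_group_order_formulas():
    from math import factorial
    assert WeylGroup("A3").group_order() == factorial(3 + 1)  # 24
    assert WeylGroup("B5").group_order() == 2**5 * factorial(5)  # 3840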
|
bsd-3-clause
|
mozilla/build-relengapi
|
relengapi/blueprints/badpenny/__init__.py
|
3
|
4007
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
from __future__ import absolute_import
import structlog
from flask import Blueprint
from flask import current_app
from flask import url_for
from werkzeug.exceptions import NotFound
from relengapi.blueprints.badpenny import cleanup
from relengapi.blueprints.badpenny import cron
from relengapi.blueprints.badpenny import execution
from relengapi.blueprints.badpenny import rest
from relengapi.blueprints.badpenny import tables
from relengapi.lib import angular
from relengapi.lib import api
from relengapi.lib import permissions
from relengapi.lib import time
from relengapi.lib.api import apimethod
from relengapi.lib.permissions import p
logger = structlog.get_logger()
bp = Blueprint('badpenny', __name__,
static_folder='static',
template_folder='templates')
p.base.badpenny.view.doc('See scheduled tasks and logs of previous jobs')
p.base.badpenny.run.doc('Force a run of a badpenny task')
def permitted():
return permissions.can(p.base.badpenny.view)
bp.root_widget_template(
'badpenny_root_widget.html', priority=100, condition=permitted)
@bp.route('/')
@p.base.badpenny.view.require()
def root():
return angular.template('badpenny.html',
url_for('.static', filename='badpenny.js'),
url_for('.static', filename='badpenny.css'),
tasks=api.get_data(list_tasks))
@bp.route('/tasks')
@apimethod([rest.BadpennyTask], bool)
@p.base.badpenny.view.require()
def list_tasks(all=False):
"""List all badpenny tasks. With "?all=1", include inactive tasks."""
rv = [t.to_jsontask() for t in tables.BadpennyTask.query.all()]
if not all:
rv = [t for t in rv if t.active]
return rv
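# (illustrative note, not part of the original blueprint: with the blueprint
#  mounted at its usual /badpenny prefix, GET /badpenny/tasks lists active
#  tasks and GET /badpenny/tasks?all=1 includes inactive ones, per the
#  docstring above)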
@bp.route('/tasks/<task_name>')
@apimethod(rest.BadpennyTask, unicode)
@p.base.badpenny.view.require()
def get_task(task_name):
"""Get information on a badpenny task by name."""
t = tables.BadpennyTask.query.filter(
tables.BadpennyTask.name == task_name).first()
if not t:
raise NotFound
return t.to_jsontask(with_jobs=True)
@bp.route('/tasks/<task_name>/run-now', methods=['POST'])
@apimethod(rest.BadpennyJob, unicode)
@p.base.badpenny.run.require()
def run_task_now(task_name):
"""Force the given badpenny task to run now. This method requires the
``base.badpenny.run`` permission."""
t = tables.BadpennyTask.query.filter(
tables.BadpennyTask.name == task_name).first()
if not t:
raise NotFound
session = current_app.db.session('relengapi')
job = tables.BadpennyJob(task=t, created_at=time.now())
session.add(job)
session.commit()
execution.submit_job(task_name=t.name, job_id=job.id)
return job.to_jsonjob()
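# (illustrative note, not part of the original blueprint: a caller POSTs to
#  /tasks/<task_name>/run-now, then polls the returned job's id against the
#  get_job endpoint below to watch for completion)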
@bp.route('/jobs')
@apimethod([rest.BadpennyJob])
@p.base.badpenny.view.require()
def list_jobs():
"""List all badpenny jobs."""
return [t.to_jsonjob() for t in tables.BadpennyJob.query.all()]
@bp.route('/jobs/<job_id>')
@apimethod(rest.BadpennyJob, int)
@p.base.badpenny.view.require()
def get_job(job_id):
"""Get information on a badpenny job by its ID. Use this to poll for job
completion."""
j = tables.BadpennyJob.query.filter(
tables.BadpennyJob.id == job_id).first()
if not j:
raise NotFound
return j.to_jsonjob()
@bp.route('/jobs/<job_id>/logs')
@apimethod(rest.BadpennyJobLog, int)
@p.base.badpenny.view.require()
def get_job_logs(job_id):
"""Get logs for a badpenny job by its ID."""
j = tables.BadpennyJobLog.query.filter(
tables.BadpennyJobLog.id == job_id).first()
if not j:
raise NotFound
return rest.BadpennyJobLog(content=j.content)
# Flask is fond of module-level code, which means imports have side-effects,
# which upsets pyflakes.
_hush_pyflakes = [cron, cleanup]
|
mpl-2.0
|
mitodl/open-discussions
|
channels/serializers/comments.py
|
1
|
10793
|
"""Serializers for commment REST APIs"""
from datetime import datetime, timezone
from django.contrib.auth import get_user_model
from praw.models import Comment, MoreComments
from praw.models.reddit.submission import Submission
from rest_framework import serializers
from rest_framework.exceptions import ValidationError
from channels import task_helpers
from channels.constants import DELETED_COMMENT_OR_POST_TEXT
from channels.utils import get_kind_mapping, get_kind_and_id, get_reddit_slug
from channels.models import Subscription
from channels.serializers.base import RedditObjectSerializer
from channels.serializers.utils import parse_bool
from open_discussions.serializers import WriteableSerializerMethodField
from profiles.utils import image_uri
User = get_user_model()
class BaseCommentSerializer(RedditObjectSerializer):
"""
Basic serializer class for reddit comments. Only includes serialization functionality
(no deserialization or validation), and does not fetch/serialize Subscription data
"""
id = serializers.CharField(read_only=True)
parent_id = serializers.SerializerMethodField()
post_id = serializers.SerializerMethodField()
comment_id = serializers.CharField(
write_only=True, allow_blank=True, required=False
)
text = serializers.CharField(source="body")
author_id = serializers.SerializerMethodField()
score = serializers.IntegerField(read_only=True)
upvoted = WriteableSerializerMethodField()
removed = WriteableSerializerMethodField()
ignore_reports = serializers.BooleanField(required=False, write_only=True)
downvoted = WriteableSerializerMethodField()
created = serializers.SerializerMethodField()
profile_image = serializers.SerializerMethodField()
author_name = serializers.SerializerMethodField()
author_headline = serializers.SerializerMethodField()
edited = serializers.SerializerMethodField()
comment_type = serializers.SerializerMethodField()
num_reports = serializers.IntegerField(read_only=True)
deleted = serializers.SerializerMethodField()
def get_post_id(self, instance):
"""The post id for this comment"""
return instance.submission.id
def get_parent_id(self, instance):
"""The parent id for this comment"""
parent = instance.parent()
if isinstance(parent, Submission):
return None
else:
return parent.id
def get_author_id(self, instance):
"""Get the author username or else [deleted]"""
user = self._get_user(instance)
if not user:
return "[deleted]"
return user.username
def get_upvoted(self, instance):
"""Is a comment upvoted?"""
return instance.likes is True
def get_profile_image(self, instance):
"""Find the Profile for the comment author"""
return image_uri(self._get_profile(instance))
def get_author_name(self, instance):
"""get the author name"""
        # same user-lookup pattern as get_profile_image above
user = self._get_user(instance)
if user and user.profile.name:
return user.profile.name
return "[deleted]"
def get_author_headline(self, instance):
"""get the author headline"""
user = self._get_user(instance)
if user and user.profile:
return user.profile.headline
return None
def get_downvoted(self, instance):
"""Is a comment downvoted?"""
return instance.likes is False
def get_created(self, instance):
"""The ISO-8601 formatted datetime for the creation time"""
return datetime.fromtimestamp(instance.created, tz=timezone.utc).isoformat()
    def get_edited(self, instance):
        """Return a boolean indicating whether the comment has been edited"""
        # praw exposes `edited` as either False or an edit timestamp;
        # normalize it to a plain boolean
        return instance.edited is not False
def get_comment_type(self, instance): # pylint: disable=unused-argument
"""Let the frontend know which type this is"""
return "comment"
def get_removed(self, instance):
"""Returns True if the comment was removed"""
return instance.banned_by is not None
def get_deleted(self, instance):
"""Returns True if the comment was deleted"""
return instance.body == DELETED_COMMENT_OR_POST_TEXT # only way to tell
def to_representation(self, instance):
data = super().to_representation(instance)
if self.context.get("include_permalink_data", False):
return {
**data,
"post_slug": get_reddit_slug(instance.submission.permalink),
"channel_name": instance.submission.subreddit.display_name,
}
return data
class CommentSerializer(BaseCommentSerializer):
"""
Full serializer class for reddit comments. Includes deserialization and validation functionality
and can fetch/serialize Subscription information.
"""
subscribed = WriteableSerializerMethodField()
@property
def _current_user(self):
"""Get the current user"""
return self.context["current_user"]
def get_subscribed(self, instance):
"""Returns True if user is subscribed to the comment"""
if self.context.get("omit_subscriptions", False):
return None
if "comment_subscriptions" not in self.context:
# this code is run if a comment was just created
return Subscription.objects.filter(
user=self._current_user,
post_id=instance.submission.id,
comment_id=instance.id,
).exists()
return instance.id in self.context.get("comment_subscriptions", [])
def validate_upvoted(self, value):
"""Validate that upvoted is a bool"""
return {"upvoted": parse_bool(value, "upvoted")}
def validate_downvoted(self, value):
"""Validate that downvoted is a bool"""
return {"downvoted": parse_bool(value, "downvoted")}
def validate_subscribed(self, value):
"""Validate that subscribed is a bool"""
return {"subscribed": parse_bool(value, "subscribed")}
def validate_removed(self, value):
"""Validate that removed is a bool"""
return {"removed": parse_bool(value, "removed")}
def validate(self, attrs):
"""Validate that the the combination of fields makes sense"""
if attrs.get("upvoted") and attrs.get("downvoted"):
raise ValidationError("upvoted and downvoted cannot both be true")
return attrs
def create(self, validated_data):
api = self.context["channel_api"]
post_id = self.context["view"].kwargs["post_id"]
kwargs = {}
if validated_data.get("comment_id"):
kwargs["comment_id"] = validated_data["comment_id"]
else:
kwargs["post_id"] = post_id
comment = api.create_comment(text=validated_data["body"], **kwargs)
api.add_comment_subscription(post_id, comment.id)
changed = api.apply_comment_vote(comment, validated_data)
from notifications.tasks import notify_subscribed_users
notify_subscribed_users.delay(
post_id, validated_data.get("comment_id", None), comment.id
)
if not api.is_moderator(comment.subreddit.display_name, comment.author.name):
task_helpers.check_comment_for_spam(self.context["request"], comment.id)
if changed:
return api.get_comment(comment.id)
else:
return comment
def update(self, instance, validated_data):
if validated_data.get("comment_id"):
raise ValidationError("comment_id must be provided via URL")
api = self.context["channel_api"]
if "body" in validated_data:
text = validated_data["body"]
api.update_comment(comment_id=instance.id, text=text)
if not api.is_moderator(
instance.subreddit.display_name, instance.author.name
):
task_helpers.check_comment_for_spam(
self.context["request"], instance.id
)
if "removed" in validated_data:
if validated_data["removed"] is True:
api.remove_comment(comment_id=instance.id)
else:
api.approve_comment(comment_id=instance.id)
if "ignore_reports" in validated_data:
ignore_reports = validated_data["ignore_reports"]
if ignore_reports is True:
api.approve_comment(comment_id=instance.id)
api.ignore_comment_reports(instance.id)
if "subscribed" in validated_data:
post_id = instance.submission.id
if validated_data["subscribed"] is True:
api.add_comment_subscription(post_id, instance.id)
elif validated_data["subscribed"] is False:
api.remove_comment_subscription(post_id, instance.id)
api.apply_comment_vote(instance, validated_data)
return api.get_comment(comment_id=instance.id)
class MoreCommentsSerializer(serializers.Serializer):
"""
Serializer for MoreComments objects
"""
parent_id = serializers.SerializerMethodField()
post_id = serializers.SerializerMethodField()
children = serializers.SerializerMethodField()
comment_type = serializers.SerializerMethodField()
def get_parent_id(self, instance):
"""Returns the comment id for the parent comment, or None if the parent is a post"""
kind, _id = get_kind_and_id(instance.parent_id)
if kind == get_kind_mapping()["comment"]:
return _id
return None
def get_post_id(self, instance):
"""Returns the post id the comment belongs to"""
return instance.submission.id
def get_children(self, instance):
"""A list of comment ids for child comments that can be loaded"""
return instance.children
def get_comment_type(self, instance): # pylint: disable=unused-argument
"""Let the frontend know which type this is"""
return "more_comments"
class GenericCommentSerializer(serializers.Serializer):
"""
Hack to serialize different types with only one entrypoint
"""
def to_representation(self, instance):
"""
Overrides the class method to add an extra field
"""
if isinstance(instance, MoreComments):
return MoreCommentsSerializer(instance, context=self.context).data
elif isinstance(instance, Comment):
return CommentSerializer(instance, context=self.context).data
raise ValueError("Unknown type {} in the comments list".format(type(instance)))
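# Example usage (a sketch, not part of the original module; `comments` is a
# hypothetical mixed list of praw Comment and MoreComments objects, e.g. from
# submission.comments.list(), and the context keys shown are assumptions
# drawn from the serializers above):
#
#     data = GenericCommentSerializer(
#         comments, many=True, context={"current_user": request.user}
#     ).data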
|
bsd-3-clause
|
m42e/jirash
|
deps/requests/packages/charade/jisfreq.py
|
3131
|
47315
|
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
# Sampled from about 20M characters of text material, including literature and computer technology
#
# Japanese frequency table, applied to both S-JIS and EUC-JP
# Characters are sorted in order of decreasing frequency; cumulative coverage by rank:
# 128 --> 0.77094
# 256 --> 0.85710
# 512 --> 0.92635
# 1024 --> 0.97130
# 2048 --> 0.99431
#
# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58
# Random Distribution Ratio = 512 / (2965+62+83+86-512) = 0.191
#
# Typical Distribution Ratio, 25% of IDR
JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0
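# Worked arithmetic for the constant above (a clarifying note, not part of
# the original source): the top 512 characters cover 92.635% of the sample,
# so the Ideal Distribution Ratio is 0.92635 / (1 - 0.92635) ~= 12.58; a
# quarter of that is ~3.15, rounded here to 3.0.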
# Char to FreqOrder table
JIS_TABLE_SIZE = 4368
JISCharToFreqOrder = (
40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16
3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32
1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48
2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64
2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80
5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96
1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112
5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128
5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144
5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160
5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176
5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192
5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208
1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224
1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240
1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256
2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272
3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288
3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304
4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320
12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336
1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352
109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368
5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384
271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400
32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416
43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432
280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448
54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464
5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480
5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496
5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512
4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528
5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544
5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560
5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576
5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592
5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608
5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624
5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640
5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656
5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672
3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688
5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704
5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720
5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736
5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752
5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768
5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784
5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800
5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816
5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832
5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848
5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864
5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880
5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896
5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912
5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928
5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944
5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960
5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976
5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992
5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008
5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024
5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040
5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056
5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072
5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088
5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104
5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120
5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136
5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152
5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168
5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184
5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200
5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216
5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232
5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248
5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264
5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280
5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296
6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312
6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328
6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344
6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360
6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376
6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392
6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408
6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424
4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440
854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456
665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472
1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488
1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504
896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520
3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536
3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552
804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568
3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584
3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600
586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616
2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632
277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648
3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664
1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680
380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696
1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712
850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728
2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744
2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760
2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776
2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792
1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808
1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824
1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840
1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856
2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 1872
1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888
2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904
1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920
1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936
1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952
1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968
1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984
1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000
606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016
684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032
1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048
2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064
2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080
2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096
3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112
3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128
884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144
3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160
1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176
861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192
2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208
1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224
576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240
3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256
4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272
2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288
1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304
2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320
1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336
385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352
178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368
1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384
2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400
2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416
2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432
3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448
1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464
2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480
359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496
837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, # 2512
855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528
1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544
2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560
633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576
1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592
1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608
353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624
1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640
1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656
1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672
764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688
2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704
278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720
2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736
3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752
2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768
1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784
6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800
1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816
2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832
1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848
470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864
72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880
3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896
3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912
1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928
1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944
1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960
1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976
123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992
913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008
2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024
900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040
3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056
2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072
423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088
1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104
2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120
220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136
1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, # 3152
745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168
4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184
2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200
1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216
666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232
1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248
2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264
376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280
6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296
1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312
1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328
2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344
3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360
914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376
3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392
1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408
674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424
1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440
199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456
3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472
370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488
2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504
414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520
4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536
2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552
1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568
1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584
1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600
166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616
1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632
3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648
1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664
3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680
264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696
543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712
983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728
2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744
1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760
867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776
1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 904,3618,3537, # 3792
894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808
1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824
530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840
839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856
480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872
1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888
1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904
2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920
4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936
227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952
1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968
328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984
1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000
3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016
1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032
2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048
2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064
1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080
1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096
2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112
455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128
2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144
1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160
1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176
1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192
1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208
3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224
2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240
2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256
575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272
3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288
3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304
1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320
2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336
1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352
2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512
# Everything below is of no interest for detection purposes
2138,2122,3730,2888,1995,1820,1044,6190,6191,6192,6193,6194,6195,6196,6197,6198, # 4384
6199,6200,6201,6202,6203,6204,6205,4670,6206,6207,6208,6209,6210,6211,6212,6213, # 4400
6214,6215,6216,6217,6218,6219,6220,6221,6222,6223,6224,6225,6226,6227,6228,6229, # 4416
6230,6231,6232,6233,6234,6235,6236,6237,3187,6238,6239,3969,6240,6241,6242,6243, # 4432
6244,4671,6245,6246,4672,6247,6248,4133,6249,6250,4364,6251,2923,2556,2613,4673, # 4448
4365,3970,6252,6253,6254,6255,4674,6256,6257,6258,2768,2353,4366,4675,4676,3188, # 4464
4367,3463,6259,4134,4677,4678,6260,2267,6261,3842,3332,4368,3543,6262,6263,6264, # 4480
3013,1954,1928,4135,4679,6265,6266,2478,3091,6267,4680,4369,6268,6269,1699,6270, # 4496
3544,4136,4681,6271,4137,6272,4370,2804,6273,6274,2593,3971,3972,4682,6275,2236, # 4512
4683,6276,6277,4684,6278,6279,4138,3973,4685,6280,6281,3258,6282,6283,6284,6285, # 4528
3974,4686,2841,3975,6286,6287,3545,6288,6289,4139,4687,4140,6290,4141,6291,4142, # 4544
6292,6293,3333,6294,6295,6296,4371,6297,3399,6298,6299,4372,3976,6300,6301,6302, # 4560
4373,6303,6304,3843,3731,6305,4688,4374,6306,6307,3259,2294,6308,3732,2530,4143, # 4576
6309,4689,6310,6311,6312,3048,6313,6314,4690,3733,2237,6315,6316,2282,3334,6317, # 4592
6318,3844,6319,6320,4691,6321,3400,4692,6322,4693,6323,3049,6324,4375,6325,3977, # 4608
6326,6327,6328,3546,6329,4694,3335,6330,4695,4696,6331,6332,6333,6334,4376,3978, # 4624
6335,4697,3979,4144,6336,3980,4698,6337,6338,6339,6340,6341,4699,4700,4701,6342, # 4640
6343,4702,6344,6345,4703,6346,6347,4704,6348,4705,4706,3135,6349,4707,6350,4708, # 4656
6351,4377,6352,4709,3734,4145,6353,2506,4710,3189,6354,3050,4711,3981,6355,3547, # 4672
3014,4146,4378,3735,2651,3845,3260,3136,2224,1986,6356,3401,6357,4712,2594,3627, # 4688
3137,2573,3736,3982,4713,3628,4714,4715,2682,3629,4716,6358,3630,4379,3631,6359, # 4704
6360,6361,3983,6362,6363,6364,6365,4147,3846,4717,6366,6367,3737,2842,6368,4718, # 4720
2628,6369,3261,6370,2386,6371,6372,3738,3984,4719,3464,4720,3402,6373,2924,3336, # 4736
4148,2866,6374,2805,3262,4380,2704,2069,2531,3138,2806,2984,6375,2769,6376,4721, # 4752
4722,3403,6377,6378,3548,6379,6380,2705,3092,1979,4149,2629,3337,2889,6381,3338, # 4768
4150,2557,3339,4381,6382,3190,3263,3739,6383,4151,4723,4152,2558,2574,3404,3191, # 4784
6384,6385,4153,6386,4724,4382,6387,6388,4383,6389,6390,4154,6391,4725,3985,6392, # 4800
3847,4155,6393,6394,6395,6396,6397,3465,6398,4384,6399,6400,6401,6402,6403,6404, # 4816
4156,6405,6406,6407,6408,2123,6409,6410,2326,3192,4726,6411,6412,6413,6414,4385, # 4832
4157,6415,6416,4158,6417,3093,3848,6418,3986,6419,6420,3849,6421,6422,6423,4159, # 4848
6424,6425,4160,6426,3740,6427,6428,6429,6430,3987,6431,4727,6432,2238,6433,6434, # 4864
4386,3988,6435,6436,3632,6437,6438,2843,6439,6440,6441,6442,3633,6443,2958,6444, # 4880
6445,3466,6446,2364,4387,3850,6447,4388,2959,3340,6448,3851,6449,4728,6450,6451, # 4896
3264,4729,6452,3193,6453,4389,4390,2706,3341,4730,6454,3139,6455,3194,6456,3051, # 4912
2124,3852,1602,4391,4161,3853,1158,3854,4162,3989,4392,3990,4731,4732,4393,2040, # 4928
4163,4394,3265,6457,2807,3467,3855,6458,6459,6460,3991,3468,4733,4734,6461,3140, # 4944
2960,6462,4735,6463,6464,6465,6466,4736,4737,4738,4739,6467,6468,4164,2403,3856, # 4960
6469,6470,2770,2844,6471,4740,6472,6473,6474,6475,6476,6477,6478,3195,6479,4741, # 4976
4395,6480,2867,6481,4742,2808,6482,2493,4165,6483,6484,6485,6486,2295,4743,6487, # 4992
6488,6489,3634,6490,6491,6492,6493,6494,6495,6496,2985,4744,6497,6498,4745,6499, # 5008
6500,2925,3141,4166,6501,6502,4746,6503,6504,4747,6505,6506,6507,2890,6508,6509, # 5024
6510,6511,6512,6513,6514,6515,6516,6517,6518,6519,3469,4167,6520,6521,6522,4748, # 5040
4396,3741,4397,4749,4398,3342,2125,4750,6523,4751,4752,4753,3052,6524,2961,4168, # 5056
6525,4754,6526,4755,4399,2926,4169,6527,3857,6528,4400,4170,6529,4171,6530,6531, # 5072
2595,6532,6533,6534,6535,3635,6536,6537,6538,6539,6540,6541,6542,4756,6543,6544, # 5088
6545,6546,6547,6548,4401,6549,6550,6551,6552,4402,3405,4757,4403,6553,6554,6555, # 5104
4172,3742,6556,6557,6558,3992,3636,6559,6560,3053,2726,6561,3549,4173,3054,4404, # 5120
6562,6563,3993,4405,3266,3550,2809,4406,6564,6565,6566,4758,4759,6567,3743,6568, # 5136
4760,3744,4761,3470,6569,6570,6571,4407,6572,3745,4174,6573,4175,2810,4176,3196, # 5152
4762,6574,4177,6575,6576,2494,2891,3551,6577,6578,3471,6579,4408,6580,3015,3197, # 5168
6581,3343,2532,3994,3858,6582,3094,3406,4409,6583,2892,4178,4763,4410,3016,4411, # 5184
6584,3995,3142,3017,2683,6585,4179,6586,6587,4764,4412,6588,6589,4413,6590,2986, # 5200
6591,2962,3552,6592,2963,3472,6593,6594,4180,4765,6595,6596,2225,3267,4414,6597, # 5216
3407,3637,4766,6598,6599,3198,6600,4415,6601,3859,3199,6602,3473,4767,2811,4416, # 5232
1856,3268,3200,2575,3996,3997,3201,4417,6603,3095,2927,6604,3143,6605,2268,6606, # 5248
3998,3860,3096,2771,6607,6608,3638,2495,4768,6609,3861,6610,3269,2745,4769,4181, # 5264
3553,6611,2845,3270,6612,6613,6614,3862,6615,6616,4770,4771,6617,3474,3999,4418, # 5280
4419,6618,3639,3344,6619,4772,4182,6620,2126,6621,6622,6623,4420,4773,6624,3018, # 5296
6625,4774,3554,6626,4183,2025,3746,6627,4184,2707,6628,4421,4422,3097,1775,4185, # 5312
3555,6629,6630,2868,6631,6632,4423,6633,6634,4424,2414,2533,2928,6635,4186,2387, # 5328
6636,4775,6637,4187,6638,1891,4425,3202,3203,6639,6640,4776,6641,3345,6642,6643, # 5344
3640,6644,3475,3346,3641,4000,6645,3144,6646,3098,2812,4188,3642,3204,6647,3863, # 5360
3476,6648,3864,6649,4426,4001,6650,6651,6652,2576,6653,4189,4777,6654,6655,6656, # 5376
2846,6657,3477,3205,4002,6658,4003,6659,3347,2252,6660,6661,6662,4778,6663,6664, # 5392
6665,6666,6667,6668,6669,4779,4780,2048,6670,3478,3099,6671,3556,3747,4004,6672, # 5408
6673,6674,3145,4005,3748,6675,6676,6677,6678,6679,3408,6680,6681,6682,6683,3206, # 5424
3207,6684,6685,4781,4427,6686,4782,4783,4784,6687,6688,6689,4190,6690,6691,3479, # 5440
6692,2746,6693,4428,6694,6695,6696,6697,6698,6699,4785,6700,6701,3208,2727,6702, # 5456
3146,6703,6704,3409,2196,6705,4429,6706,6707,6708,2534,1996,6709,6710,6711,2747, # 5472
6712,6713,6714,4786,3643,6715,4430,4431,6716,3557,6717,4432,4433,6718,6719,6720, # 5488
6721,3749,6722,4006,4787,6723,6724,3644,4788,4434,6725,6726,4789,2772,6727,6728, # 5504
6729,6730,6731,2708,3865,2813,4435,6732,6733,4790,4791,3480,6734,6735,6736,6737, # 5520
4436,3348,6738,3410,4007,6739,6740,4008,6741,6742,4792,3411,4191,6743,6744,6745, # 5536
6746,6747,3866,6748,3750,6749,6750,6751,6752,6753,6754,6755,3867,6756,4009,6757, # 5552
4793,4794,6758,2814,2987,6759,6760,6761,4437,6762,6763,6764,6765,3645,6766,6767, # 5568
3481,4192,6768,3751,6769,6770,2174,6771,3868,3752,6772,6773,6774,4193,4795,4438, # 5584
3558,4796,4439,6775,4797,6776,6777,4798,6778,4799,3559,4800,6779,6780,6781,3482, # 5600
6782,2893,6783,6784,4194,4801,4010,6785,6786,4440,6787,4011,6788,6789,6790,6791, # 5616
6792,6793,4802,6794,6795,6796,4012,6797,6798,6799,6800,3349,4803,3483,6801,4804, # 5632
4195,6802,4013,6803,6804,4196,6805,4014,4015,6806,2847,3271,2848,6807,3484,6808, # 5648
6809,6810,4441,6811,4442,4197,4443,3272,4805,6812,3412,4016,1579,6813,6814,4017, # 5664
6815,3869,6816,2964,6817,4806,6818,6819,4018,3646,6820,6821,4807,4019,4020,6822, # 5680
6823,3560,6824,6825,4021,4444,6826,4198,6827,6828,4445,6829,6830,4199,4808,6831, # 5696
6832,6833,3870,3019,2458,6834,3753,3413,3350,6835,4809,3871,4810,3561,4446,6836, # 5712
6837,4447,4811,4812,6838,2459,4448,6839,4449,6840,6841,4022,3872,6842,4813,4814, # 5728
6843,6844,4815,4200,4201,4202,6845,4023,6846,6847,4450,3562,3873,6848,6849,4816, # 5744
4817,6850,4451,4818,2139,6851,3563,6852,6853,3351,6854,6855,3352,4024,2709,3414, # 5760
4203,4452,6856,4204,6857,6858,3874,3875,6859,6860,4819,6861,6862,6863,6864,4453, # 5776
3647,6865,6866,4820,6867,6868,6869,6870,4454,6871,2869,6872,6873,4821,6874,3754, # 5792
6875,4822,4205,6876,6877,6878,3648,4206,4455,6879,4823,6880,4824,3876,6881,3055, # 5808
4207,6882,3415,6883,6884,6885,4208,4209,6886,4210,3353,6887,3354,3564,3209,3485, # 5824
2652,6888,2728,6889,3210,3755,6890,4025,4456,6891,4825,6892,6893,6894,6895,4211, # 5840
6896,6897,6898,4826,6899,6900,4212,6901,4827,6902,2773,3565,6903,4828,6904,6905, # 5856
6906,6907,3649,3650,6908,2849,3566,6909,3567,3100,6910,6911,6912,6913,6914,6915, # 5872
4026,6916,3355,4829,3056,4457,3756,6917,3651,6918,4213,3652,2870,6919,4458,6920, # 5888
2438,6921,6922,3757,2774,4830,6923,3356,4831,4832,6924,4833,4459,3653,2507,6925, # 5904
4834,2535,6926,6927,3273,4027,3147,6928,3568,6929,6930,6931,4460,6932,3877,4461, # 5920
2729,3654,6933,6934,6935,6936,2175,4835,2630,4214,4028,4462,4836,4215,6937,3148, # 5936
4216,4463,4837,4838,4217,6938,6939,2850,4839,6940,4464,6941,6942,6943,4840,6944, # 5952
4218,3274,4465,6945,6946,2710,6947,4841,4466,6948,6949,2894,6950,6951,4842,6952, # 5968
4219,3057,2871,6953,6954,6955,6956,4467,6957,2711,6958,6959,6960,3275,3101,4843, # 5984
6961,3357,3569,6962,4844,6963,6964,4468,4845,3570,6965,3102,4846,3758,6966,4847, # 6000
3878,4848,4849,4029,6967,2929,3879,4850,4851,6968,6969,1733,6970,4220,6971,6972, # 6016
6973,6974,6975,6976,4852,6977,6978,6979,6980,6981,6982,3759,6983,6984,6985,3486, # 6032
3487,6986,3488,3416,6987,6988,6989,6990,6991,6992,6993,6994,6995,6996,6997,4853, # 6048
6998,6999,4030,7000,7001,3211,7002,7003,4221,7004,7005,3571,4031,7006,3572,7007, # 6064
2614,4854,2577,7008,7009,2965,3655,3656,4855,2775,3489,3880,4222,4856,3881,4032, # 6080
3882,3657,2730,3490,4857,7010,3149,7011,4469,4858,2496,3491,4859,2283,7012,7013, # 6096
7014,2365,4860,4470,7015,7016,3760,7017,7018,4223,1917,7019,7020,7021,4471,7022, # 6112
2776,4472,7023,7024,7025,7026,4033,7027,3573,4224,4861,4034,4862,7028,7029,1929, # 6128
3883,4035,7030,4473,3058,7031,2536,3761,3884,7032,4036,7033,2966,2895,1968,4474, # 6144
3276,4225,3417,3492,4226,2105,7034,7035,1754,2596,3762,4227,4863,4475,3763,4864, # 6160
3764,2615,2777,3103,3765,3658,3418,4865,2296,3766,2815,7036,7037,7038,3574,2872, # 6176
3277,4476,7039,4037,4477,7040,7041,4038,7042,7043,7044,7045,7046,7047,2537,7048, # 6192
7049,7050,7051,7052,7053,7054,4478,7055,7056,3767,3659,4228,3575,7057,7058,4229, # 6208
7059,7060,7061,3660,7062,3212,7063,3885,4039,2460,7064,7065,7066,7067,7068,7069, # 6224
7070,7071,7072,7073,7074,4866,3768,4867,7075,7076,7077,7078,4868,3358,3278,2653, # 6240
7079,7080,4479,3886,7081,7082,4869,7083,7084,7085,7086,7087,7088,2538,7089,7090, # 6256
7091,4040,3150,3769,4870,4041,2896,3359,4230,2930,7092,3279,7093,2967,4480,3213, # 6272
4481,3661,7094,7095,7096,7097,7098,7099,7100,7101,7102,2461,3770,7103,7104,4231, # 6288
3151,7105,7106,7107,4042,3662,7108,7109,4871,3663,4872,4043,3059,7110,7111,7112, # 6304
3493,2988,7113,4873,7114,7115,7116,3771,4874,7117,7118,4232,4875,7119,3576,2336, # 6320
4876,7120,4233,3419,4044,4877,4878,4482,4483,4879,4484,4234,7121,3772,4880,1045, # 6336
3280,3664,4881,4882,7122,7123,7124,7125,4883,7126,2778,7127,4485,4486,7128,4884, # 6352
3214,3887,7129,7130,3215,7131,4885,4045,7132,7133,4046,7134,7135,7136,7137,7138, # 6368
7139,7140,7141,7142,7143,4235,7144,4886,7145,7146,7147,4887,7148,7149,7150,4487, # 6384
4047,4488,7151,7152,4888,4048,2989,3888,7153,3665,7154,4049,7155,7156,7157,7158, # 6400
7159,7160,2931,4889,4890,4489,7161,2631,3889,4236,2779,7162,7163,4891,7164,3060, # 6416
7165,1672,4892,7166,4893,4237,3281,4894,7167,7168,3666,7169,3494,7170,7171,4050, # 6432
7172,7173,3104,3360,3420,4490,4051,2684,4052,7174,4053,7175,7176,7177,2253,4054, # 6448
7178,7179,4895,7180,3152,3890,3153,4491,3216,7181,7182,7183,2968,4238,4492,4055, # 6464
7184,2990,7185,2479,7186,7187,4493,7188,7189,7190,7191,7192,4896,7193,4897,2969, # 6480
4494,4898,7194,3495,7195,7196,4899,4495,7197,3105,2731,7198,4900,7199,7200,7201, # 6496
4056,7202,3361,7203,7204,4496,4901,4902,7205,4497,7206,7207,2315,4903,7208,4904, # 6512
7209,4905,2851,7210,7211,3577,7212,3578,4906,7213,4057,3667,4907,7214,4058,2354, # 6528
3891,2376,3217,3773,7215,7216,7217,7218,7219,4498,7220,4908,3282,2685,7221,3496, # 6544
4909,2632,3154,4910,7222,2337,7223,4911,7224,7225,7226,4912,4913,3283,4239,4499, # 6560
7227,2816,7228,7229,7230,7231,7232,7233,7234,4914,4500,4501,7235,7236,7237,2686, # 6576
7238,4915,7239,2897,4502,7240,4503,7241,2516,7242,4504,3362,3218,7243,7244,7245, # 6592
4916,7246,7247,4505,3363,7248,7249,7250,7251,3774,4506,7252,7253,4917,7254,7255, # 6608
3284,2991,4918,4919,3219,3892,4920,3106,3497,4921,7256,7257,7258,4922,7259,4923, # 6624
3364,4507,4508,4059,7260,4240,3498,7261,7262,4924,7263,2992,3893,4060,3220,7264, # 6640
7265,7266,7267,7268,7269,4509,3775,7270,2817,7271,4061,4925,4510,3776,7272,4241, # 6656
4511,3285,7273,7274,3499,7275,7276,7277,4062,4512,4926,7278,3107,3894,7279,7280, # 6672
4927,7281,4513,7282,7283,3668,7284,7285,4242,4514,4243,7286,2058,4515,4928,4929, # 6688
4516,7287,3286,4244,7288,4517,7289,7290,7291,3669,7292,7293,4930,4931,4932,2355, # 6704
4933,7294,2633,4518,7295,4245,7296,7297,4519,7298,7299,4520,4521,4934,7300,4246, # 6720
4522,7301,7302,7303,3579,7304,4247,4935,7305,4936,7306,7307,7308,7309,3777,7310, # 6736
4523,7311,7312,7313,4248,3580,7314,4524,3778,4249,7315,3581,7316,3287,7317,3221, # 6752
7318,4937,7319,7320,7321,7322,7323,7324,4938,4939,7325,4525,7326,7327,7328,4063, # 6768
7329,7330,4940,7331,7332,4941,7333,4526,7334,3500,2780,1741,4942,2026,1742,7335, # 6784
7336,3582,4527,2388,7337,7338,7339,4528,7340,4250,4943,7341,7342,7343,4944,7344, # 6800
7345,7346,3020,7347,4945,7348,7349,7350,7351,3895,7352,3896,4064,3897,7353,7354, # 6816
7355,4251,7356,7357,3898,7358,3779,7359,3780,3288,7360,7361,4529,7362,4946,4530, # 6832
2027,7363,3899,4531,4947,3222,3583,7364,4948,7365,7366,7367,7368,4949,3501,4950, # 6848
3781,4951,4532,7369,2517,4952,4252,4953,3155,7370,4954,4955,4253,2518,4533,7371, # 6864
7372,2712,4254,7373,7374,7375,3670,4956,3671,7376,2389,3502,4065,7377,2338,7378, # 6880
7379,7380,7381,3061,7382,4957,7383,7384,7385,7386,4958,4534,7387,7388,2993,7389, # 6896
3062,7390,4959,7391,7392,7393,4960,3108,4961,7394,4535,7395,4962,3421,4536,7396, # 6912
4963,7397,4964,1857,7398,4965,7399,7400,2176,3584,4966,7401,7402,3422,4537,3900, # 6928
3585,7403,3782,7404,2852,7405,7406,7407,4538,3783,2654,3423,4967,4539,7408,3784, # 6944
3586,2853,4540,4541,7409,3901,7410,3902,7411,7412,3785,3109,2327,3903,7413,7414, # 6960
2970,4066,2932,7415,7416,7417,3904,3672,3424,7418,4542,4543,4544,7419,4968,7420, # 6976
7421,4255,7422,7423,7424,7425,7426,4067,7427,3673,3365,4545,7428,3110,2559,3674, # 6992
7429,7430,3156,7431,7432,3503,7433,3425,4546,7434,3063,2873,7435,3223,4969,4547, # 7008
4548,2898,4256,4068,7436,4069,3587,3786,2933,3787,4257,4970,4971,3788,7437,4972, # 7024
3064,7438,4549,7439,7440,7441,7442,7443,4973,3905,7444,2874,7445,7446,7447,7448, # 7040
3021,7449,4550,3906,3588,4974,7450,7451,3789,3675,7452,2578,7453,4070,7454,7455, # 7056
7456,4258,3676,7457,4975,7458,4976,4259,3790,3504,2634,4977,3677,4551,4260,7459, # 7072
7460,7461,7462,3907,4261,4978,7463,7464,7465,7466,4979,4980,7467,7468,2213,4262, # 7088
7469,7470,7471,3678,4981,7472,2439,7473,4263,3224,3289,7474,3908,2415,4982,7475, # 7104
4264,7476,4983,2655,7477,7478,2732,4552,2854,2875,7479,7480,4265,7481,4553,4984, # 7120
7482,7483,4266,7484,3679,3366,3680,2818,2781,2782,3367,3589,4554,3065,7485,4071, # 7136
2899,7486,7487,3157,2462,4072,4555,4073,4985,4986,3111,4267,2687,3368,4556,4074, # 7152
3791,4268,7488,3909,2783,7489,2656,1962,3158,4557,4987,1963,3159,3160,7490,3112, # 7168
4988,4989,3022,4990,4991,3792,2855,7491,7492,2971,4558,7493,7494,4992,7495,7496, # 7184
7497,7498,4993,7499,3426,4559,4994,7500,3681,4560,4269,4270,3910,7501,4075,4995, # 7200
4271,7502,7503,4076,7504,4996,7505,3225,4997,4272,4077,2819,3023,7506,7507,2733, # 7216
4561,7508,4562,7509,3369,3793,7510,3590,2508,7511,7512,4273,3113,2994,2616,7513, # 7232
7514,7515,7516,7517,7518,2820,3911,4078,2748,7519,7520,4563,4998,7521,7522,7523, # 7248
7524,4999,4274,7525,4564,3682,2239,4079,4565,7526,7527,7528,7529,5000,7530,7531, # 7264
5001,4275,3794,7532,7533,7534,3066,5002,4566,3161,7535,7536,4080,7537,3162,7538, # 7280
7539,4567,7540,7541,7542,7543,7544,7545,5003,7546,4568,7547,7548,7549,7550,7551, # 7296
7552,7553,7554,7555,7556,5004,7557,7558,7559,5005,7560,3795,7561,4569,7562,7563, # 7312
7564,2821,3796,4276,4277,4081,7565,2876,7566,5006,7567,7568,2900,7569,3797,3912, # 7328
7570,7571,7572,4278,7573,7574,7575,5007,7576,7577,5008,7578,7579,4279,2934,7580, # 7344
7581,5009,7582,4570,7583,4280,7584,7585,7586,4571,4572,3913,7587,4573,3505,7588, # 7360
5010,7589,7590,7591,7592,3798,4574,7593,7594,5011,7595,4281,7596,7597,7598,4282, # 7376
5012,7599,7600,5013,3163,7601,5014,7602,3914,7603,7604,2734,4575,4576,4577,7605, # 7392
7606,7607,7608,7609,3506,5015,4578,7610,4082,7611,2822,2901,2579,3683,3024,4579, # 7408
3507,7612,4580,7613,3226,3799,5016,7614,7615,7616,7617,7618,7619,7620,2995,3290, # 7424
7621,4083,7622,5017,7623,7624,7625,7626,7627,4581,3915,7628,3291,7629,5018,7630, # 7440
7631,7632,7633,4084,7634,7635,3427,3800,7636,7637,4582,7638,5019,4583,5020,7639, # 7456
3916,7640,3801,5021,4584,4283,7641,7642,3428,3591,2269,7643,2617,7644,4585,3592, # 7472
7645,4586,2902,7646,7647,3227,5022,7648,4587,7649,4284,7650,7651,7652,4588,2284, # 7488
7653,5023,7654,7655,7656,4589,5024,3802,7657,7658,5025,3508,4590,7659,7660,7661, # 7504
1969,5026,7662,7663,3684,1821,2688,7664,2028,2509,4285,7665,2823,1841,7666,2689, # 7520
3114,7667,3917,4085,2160,5027,5028,2972,7668,5029,7669,7670,7671,3593,4086,7672, # 7536
4591,4087,5030,3803,7673,7674,7675,7676,7677,7678,7679,4286,2366,4592,4593,3067, # 7552
2328,7680,7681,4594,3594,3918,2029,4287,7682,5031,3919,3370,4288,4595,2856,7683, # 7568
3509,7684,7685,5032,5033,7686,7687,3804,2784,7688,7689,7690,7691,3371,7692,7693, # 7584
2877,5034,7694,7695,3920,4289,4088,7696,7697,7698,5035,7699,5036,4290,5037,5038, # 7600
5039,7700,7701,7702,5040,5041,3228,7703,1760,7704,5042,3229,4596,2106,4089,7705, # 7616
4597,2824,5043,2107,3372,7706,4291,4090,5044,7707,4091,7708,5045,3025,3805,4598, # 7632
4292,4293,4294,3373,7709,4599,7710,5046,7711,7712,5047,5048,3806,7713,7714,7715, # 7648
5049,7716,7717,7718,7719,4600,5050,7720,7721,7722,5051,7723,4295,3429,7724,7725, # 7664
7726,7727,3921,7728,3292,5052,4092,7729,7730,7731,7732,7733,7734,7735,5053,5054, # 7680
7736,7737,7738,7739,3922,3685,7740,7741,7742,7743,2635,5055,7744,5056,4601,7745, # 7696
7746,2560,7747,7748,7749,7750,3923,7751,7752,7753,7754,7755,4296,2903,7756,7757, # 7712
7758,7759,7760,3924,7761,5057,4297,7762,7763,5058,4298,7764,4093,7765,7766,5059, # 7728
3925,7767,7768,7769,7770,7771,7772,7773,7774,7775,7776,3595,7777,4299,5060,4094, # 7744
7778,3293,5061,7779,7780,4300,7781,7782,4602,7783,3596,7784,7785,3430,2367,7786, # 7760
3164,5062,5063,4301,7787,7788,4095,5064,5065,7789,3374,3115,7790,7791,7792,7793, # 7776
7794,7795,7796,3597,4603,7797,7798,3686,3116,3807,5066,7799,7800,5067,7801,7802, # 7792
4604,4302,5068,4303,4096,7803,7804,3294,7805,7806,5069,4605,2690,7807,3026,7808, # 7808
7809,7810,7811,7812,7813,7814,7815,7816,7817,7818,7819,7820,7821,7822,7823,7824, # 7824
7825,7826,7827,7828,7829,7830,7831,7832,7833,7834,7835,7836,7837,7838,7839,7840, # 7840
7841,7842,7843,7844,7845,7846,7847,7848,7849,7850,7851,7852,7853,7854,7855,7856, # 7856
7857,7858,7859,7860,7861,7862,7863,7864,7865,7866,7867,7868,7869,7870,7871,7872, # 7872
7873,7874,7875,7876,7877,7878,7879,7880,7881,7882,7883,7884,7885,7886,7887,7888, # 7888
7889,7890,7891,7892,7893,7894,7895,7896,7897,7898,7899,7900,7901,7902,7903,7904, # 7904
7905,7906,7907,7908,7909,7910,7911,7912,7913,7914,7915,7916,7917,7918,7919,7920, # 7920
7921,7922,7923,7924,3926,7925,7926,7927,7928,7929,7930,7931,7932,7933,7934,7935, # 7936
7936,7937,7938,7939,7940,7941,7942,7943,7944,7945,7946,7947,7948,7949,7950,7951, # 7952
7952,7953,7954,7955,7956,7957,7958,7959,7960,7961,7962,7963,7964,7965,7966,7967, # 7968
7968,7969,7970,7971,7972,7973,7974,7975,7976,7977,7978,7979,7980,7981,7982,7983, # 7984
7984,7985,7986,7987,7988,7989,7990,7991,7992,7993,7994,7995,7996,7997,7998,7999, # 8000
8000,8001,8002,8003,8004,8005,8006,8007,8008,8009,8010,8011,8012,8013,8014,8015, # 8016
8016,8017,8018,8019,8020,8021,8022,8023,8024,8025,8026,8027,8028,8029,8030,8031, # 8032
8032,8033,8034,8035,8036,8037,8038,8039,8040,8041,8042,8043,8044,8045,8046,8047, # 8048
8048,8049,8050,8051,8052,8053,8054,8055,8056,8057,8058,8059,8060,8061,8062,8063, # 8064
8064,8065,8066,8067,8068,8069,8070,8071,8072,8073,8074,8075,8076,8077,8078,8079, # 8080
8080,8081,8082,8083,8084,8085,8086,8087,8088,8089,8090,8091,8092,8093,8094,8095, # 8096
8096,8097,8098,8099,8100,8101,8102,8103,8104,8105,8106,8107,8108,8109,8110,8111, # 8112
8112,8113,8114,8115,8116,8117,8118,8119,8120,8121,8122,8123,8124,8125,8126,8127, # 8128
8128,8129,8130,8131,8132,8133,8134,8135,8136,8137,8138,8139,8140,8141,8142,8143, # 8144
8144,8145,8146,8147,8148,8149,8150,8151,8152,8153,8154,8155,8156,8157,8158,8159, # 8160
8160,8161,8162,8163,8164,8165,8166,8167,8168,8169,8170,8171,8172,8173,8174,8175, # 8176
8176,8177,8178,8179,8180,8181,8182,8183,8184,8185,8186,8187,8188,8189,8190,8191, # 8192
8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8203,8204,8205,8206,8207, # 8208
8208,8209,8210,8211,8212,8213,8214,8215,8216,8217,8218,8219,8220,8221,8222,8223, # 8224
8224,8225,8226,8227,8228,8229,8230,8231,8232,8233,8234,8235,8236,8237,8238,8239, # 8240
8240,8241,8242,8243,8244,8245,8246,8247,8248,8249,8250,8251,8252,8253,8254,8255, # 8256
8256,8257,8258,8259,8260,8261,8262,8263,8264,8265,8266,8267,8268,8269,8270,8271) # 8272
# flake8: noqa
|
mit
|
abstract-open-solutions/OCB
|
openerp/report/custom.py
|
338
|
25091
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import os
import time
import openerp
import openerp.tools as tools
from openerp.tools.safe_eval import safe_eval as eval
import print_xml
import render
from interface import report_int
import common
from openerp.osv.osv import except_osv
from openerp.osv.orm import BaseModel
from pychart import *
import misc
import cStringIO
from lxml import etree
from openerp.tools.translate import _
class external_pdf(render.render):
def __init__(self, pdf):
render.render.__init__(self)
self.pdf = pdf
self.output_type='pdf'
def _render(self):
return self.pdf
theme.use_color = 1
#TODO: should inherit from report_rml instead of report_int
# -> could then override only create_xml instead of the whole create
# hmm, that does not work in every case, because the graphs are generated
# directly as pdf by pychart, so we never go through rml
class report_custom(report_int):
def __init__(self, name):
report_int.__init__(self, name)
#
# PRE:
# fields = [['address','city'],['name'], ['zip']]
# conditions = [[('zip','==','3'),(,)],(,),(,)]  # same structure as fields
# row_canvas = ['Street', None, None]
# POST:
# [ ['city','name','zip'] ]
#
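#
# Reading the PRE/POST example above (a sketch, not real data): each entry
# of `fields` is a dotted attribute path split into a list, so
# ['address','city'] means obj.address.city; each returned row then holds
# one value per field: the city, the record name and the zip code.
#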
def _row_get(self, cr, uid, objs, fields, conditions, row_canvas=None, group_by=None):
result = []
for obj in objs:
tobreak = False
for cond in conditions:
if cond and cond[0]:
c = cond[0]
temp = c[0](eval('obj.'+c[1],{'obj': obj}))
if not eval('\''+temp+'\''+' '+c[2]+' '+'\''+str(c[3])+'\''):
tobreak = True
if tobreak:
break
levels = {}
row = []
for i in range(len(fields)):
if not fields[i]:
row.append(row_canvas and row_canvas[i])
if row_canvas[i]:
row_canvas[i]=False
elif len(fields[i])==1:
if obj:
row.append(str(eval('obj.'+fields[i][0],{'obj': obj})))
else:
row.append(None)
else:
row.append(None)
levels[fields[i][0]]=True
if not levels:
result.append(row)
else:
# Process group_by data first
key = []
if group_by is not None and fields[group_by] is not None:
if fields[group_by][0] in levels.keys():
key.append(fields[group_by][0])
for l in levels.keys():
if l != fields[group_by][0]:
key.append(l)
else:
key = levels.keys()
for l in key:
objs = eval('obj.'+l,{'obj': obj})
if not isinstance(objs, (BaseModel, list)):
objs = [objs]
field_new = []
cond_new = []
for f in range(len(fields)):
if (fields[f] and fields[f][0])==l:
field_new.append(fields[f][1:])
cond_new.append(conditions[f][1:])
else:
field_new.append(None)
cond_new.append(None)
if len(objs):
result += self._row_get(cr, uid, objs, field_new, cond_new, row, group_by)
else:
result.append(row)
return result
def create(self, cr, uid, ids, datas, context=None):
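# datas['report_id'] selects the ir.report.custom record to render; when
# the report is bound to a menu entry, ids is recomputed below as all
# records of the report's model.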
if not context:
context={}
self.pool = openerp.registry(cr.dbname)
report = self.pool['ir.report.custom'].browse(cr, uid, [datas['report_id']])[0]
datas['model'] = report.model_id.model
if report.menu_id:
ids = self.pool[report.model_id.model].search(cr, uid, [])
datas['ids'] = ids
report_id = datas['report_id']
report = self.pool['ir.report.custom'].read(cr, uid, [report_id], context=context)[0]
fields = self.pool['ir.report.custom.fields'].read(cr, uid, report['fields_child0'], context=context)
fields.sort(lambda x,y : x['sequence'] - y['sequence'])
if report['field_parent']:
parent_field = self.pool['ir.model.fields'].read(cr, uid, [report['field_parent'][0]], ['model'])
model_name = self.pool['ir.model'].read(cr, uid, [report['model_id'][0]], ['model'], context=context)[0]['model']
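# operand transformations referenced from the 'fcN_op' columns: an op such
# as 'gety,>=' means apply gety() to the operand, then compare with '>='
# (see the split(',') handling further down).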
fct = {
'id': lambda x: x,
'gety': lambda x: x.split('-')[0],
'in': lambda x: x.split(',')
}
new_fields = []
new_cond = []
for f in fields:
row = []
cond = []
for i in range(4):
field_child = f['field_child'+str(i)]
if field_child:
row.append(
self.pool['ir.model.fields'].read(cr, uid, [field_child[0]], ['name'], context=context)[0]['name']
)
if f['fc'+str(i)+'_operande']:
fct_name = 'id'
cond_op = f['fc'+str(i)+'_op']
if len(f['fc'+str(i)+'_op'].split(',')) == 2:
cond_op = f['fc'+str(i)+'_op'].split(',')[1]
fct_name = f['fc'+str(i)+'_op'].split(',')[0]
cond.append((fct[fct_name], f['fc'+str(i)+'_operande'][1], cond_op, f['fc'+str(i)+'_condition']))
else:
cond.append(None)
new_fields.append(row)
new_cond.append(cond)
objs = self.pool[model_name].browse(cr, uid, ids)
# Group by
groupby = None
idx = 0
for f in fields:
if f['groupby']:
groupby = idx
idx += 1
results = []
if report['field_parent']:
level = []
def build_tree(obj, level, depth):
res = self._row_get(cr, uid,[obj], new_fields, new_cond)
level.append(depth)
new_obj = eval('obj.'+report['field_parent'][1],{'obj': obj})
if not isinstance(new_obj, list) :
new_obj = [new_obj]
for o in new_obj:
if o:
res += build_tree(o, level, depth+1)
return res
for obj in objs:
results += build_tree(obj, level, 0)
else:
results = self._row_get(cr, uid,objs, new_fields, new_cond, group_by=groupby)
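# column aggregators used when grouping, keyed by the str() of the field's
# 'operation' value: 'False' (no operation) joins the raw strings with
# newlines, and 'groupby' keeps the first non-empty value.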
fct = {
'calc_sum': lambda l: reduce(lambda x,y: float(x)+float(y), filter(None, l), 0),
'calc_avg': lambda l: reduce(lambda x,y: float(x)+float(y), filter(None, l), 0) / (len(filter(None, l)) or 1.0),
'calc_max': lambda l: reduce(lambda x,y: max(x,y), [(i or 0.0) for i in l], 0),
'calc_min': lambda l: reduce(lambda x,y: min(x,y), [(i or 0.0) for i in l], 0),
'calc_count': lambda l: len(filter(None, l)),
'False': lambda l: '\r\n'.join(filter(None, l)),
'groupby': lambda l: reduce(lambda x,y: x or y, l)
}
new_res = []
prev = None
if groupby is not None:
res_dic = {}
for line in results:
if not line[groupby] and prev in res_dic:
res_dic[prev].append(line)
else:
prev = line[groupby]
res_dic.setdefault(line[groupby], [])
res_dic[line[groupby]].append(line)
# we use the keys from results since they are ordered, whereas res_dic.keys() is not
for key in filter(None, [x[groupby] for x in results]):
row = []
for col in range(len(fields)):
if col == groupby:
row.append(fct['groupby'](map(lambda x: x[col], res_dic[key])))
else:
row.append(fct[str(fields[col]['operation'])](map(lambda x: x[col], res_dic[key])))
new_res.append(row)
results = new_res
if report['type']=='table':
if report['field_parent']:
res = self._create_tree(uid, ids, report, fields, level, results, context)
else:
sort_idx = 0
for idx in range(len(fields)):
if fields[idx]['name'] == report['sortby']:
sort_idx = idx
break
try :
results.sort(lambda x,y : cmp(float(x[sort_idx]),float(y[sort_idx])))
except :
results.sort(lambda x,y : cmp(x[sort_idx],y[sort_idx]))
if report['limitt']:
results = results[:int(report['limitt'])]
res = self._create_table(uid, ids, report, fields, None, results, context)
elif report['type'] in ('pie','bar', 'line'):
results2 = []
prev = False
for r in results:
row = []
for j in range(len(r)):
if j == 0 and not r[j]:
row.append(prev)
elif j == 0 and r[j]:
prev = r[j]
row.append(r[j])
else:
try:
row.append(float(r[j]))
except Exception:
row.append(r[j])
results2.append(row)
if report['type']=='pie':
res = self._create_pie(cr,uid, ids, report, fields, results2, context)
elif report['type']=='bar':
res = self._create_bars(cr,uid, ids, report, fields, results2, context)
elif report['type']=='line':
res = self._create_lines(cr,uid, ids, report, fields, results2, context)
return self.obj.get(), 'pdf'
def _create_tree(self, uid, ids, report, fields, level, results, context):
pageSize=common.pageSize.get(report['print_format'], [210.0,297.0])
if report['print_orientation']=='landscape':
pageSize=[pageSize[1],pageSize[0]]
new_doc = etree.Element('report')
config = etree.SubElement(new_doc, 'config')
def _append_node(name, text):
n = etree.SubElement(config, name)
n.text = text
_append_node('date', time.strftime('%d/%m/%Y'))
_append_node('PageFormat', '%s' % report['print_format'])
_append_node('PageSize', '%.2fmm,%.2fmm' % tuple(pageSize))
_append_node('PageWidth', '%.2f' % (pageSize[0] * 2.8346,))
_append_node('PageHeight', '%.2f' %(pageSize[1] * 2.8346,))
length = pageSize[0]-30-reduce(lambda x,y:x+(y['width'] or 0), fields, 0)
count = 0
for f in fields:
if not f['width']: count+=1
for f in fields:
if not f['width']:
f['width']=round((float(length)/count)-0.5)
_append_node('tableSize', '%s' % ','.join(map(lambda x: '%.2fmm' % (x['width'],), fields)))
_append_node('report-header', '%s' % (report['title'],))
_append_node('report-footer', '%s' % (report['footer'],))
header = etree.SubElement(new_doc, 'header')
for f in fields:
field = etree.SubElement(header, 'field')
field.text = f['name']
lines = etree.SubElement(new_doc, 'lines')
level.reverse()
for line in results:
shift = level.pop()
node_line = etree.SubElement(lines, 'row')
prefix = '+'
for f in range(len(fields)):
col = etree.SubElement(node_line, 'col')
if f == 0:
col.attrib.update(para='yes',
tree='yes',
space=str(3*shift)+'mm')
if line[f] is not None:
col.text = prefix+str(line[f]) or ''
else:
col.text = '/'
prefix = ''
transform = etree.XSLT(
etree.parse(os.path.join(tools.config['root_path'],
'addons/base/report/custom_new.xsl')))
rml = etree.tostring(transform(new_doc))
self.obj = render.rml(rml)
self.obj.render()
return True
def _create_lines(self, cr, uid, ids, report, fields, results, context):
pool = openerp.registry(cr.dbname)
pdf_string = cStringIO.StringIO()
can = canvas.init(fname=pdf_string, format='pdf')
can.show(80,380,'/16/H'+report['title'])
ar = area.T(size=(350,350),
#x_coord = category_coord.T(['2005-09-01','2005-10-22'],0),
x_axis = axis.X(label = fields[0]['name'], format="/a-30{}%s"),
y_axis = axis.Y(label = ', '.join(map(lambda x : x['name'], fields[1:]))))
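# truncate ISO dates (YYYY-MM-DD) to the granularity given by the report
# frequency: 'D' keeps month-day, 'M' keeps the month, 'Y' keeps the year.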
process_date = {
'D': lambda x: reduce(lambda xx, yy: xx + '-' + yy, x.split('-')[1:3]),
'M': lambda x: x.split('-')[1],
'Y': lambda x: x.split('-')[0]
}
order_date = {
'D': lambda x: time.mktime((2005, int(x.split('-')[0]), int(x.split('-')[1]), 0, 0, 0, 0, 0, 0)),
'M': lambda x: x,
'Y': lambda x: x
}
abscissa = []
idx = 0
date_idx = None
fct = {}
for f in fields:
field_id = (f['field_child3'] and f['field_child3'][0]) or (f['field_child2'] and f['field_child2'][0]) or (f['field_child1'] and f['field_child1'][0]) or (f['field_child0'] and f['field_child0'][0])
if field_id:
type = pool['ir.model.fields'].read(cr, uid, [field_id],['ttype'])
if type[0]['ttype'] == 'date':
date_idx = idx
fct[idx] = process_date[report['frequency']]
else:
fct[idx] = lambda x : x
else:
fct[idx] = lambda x : x
idx+=1
# plots are usually displayed year by year
# so we do so if the first field is a date
data_by_year = {}
if date_idx is not None:
for r in results:
key = process_date['Y'](r[date_idx])
if key not in data_by_year:
data_by_year[key] = []
for i in range(len(r)):
r[i] = fct[i](r[i])
data_by_year[key].append(r)
else:
data_by_year[''] = results
idx0 = 0
nb_bar = len(data_by_year)*(len(fields)-1)
colors = map(lambda x:line_style.T(color=x), misc.choice_colors(nb_bar))
abscissa = {}
for line in data_by_year.keys():
fields_bar = []
# sum the data and save it in a list, one item per field
for d in data_by_year[line]:
for idx in range(len(fields)-1):
fields_bar.append({})
if d[0] in fields_bar[idx]:
fields_bar[idx][d[0]] += d[idx+1]
else:
fields_bar[idx][d[0]] = d[idx+1]
for idx in range(len(fields)-1):
data = {}
for k in fields_bar[idx].keys():
if k in data:
data[k] += fields_bar[idx][k]
else:
data[k] = fields_bar[idx][k]
data_cum = []
prev = 0.0
keys = data.keys()
keys.sort()
# cumulate if necessary
for k in keys:
data_cum.append([k, float(data[k])+float(prev)])
if fields[idx+1]['cumulate']:
prev += data[k]
idx0 = 0
plot = line_plot.T(label=fields[idx+1]['name']+' '+str(line), data = data_cum, line_style=colors[idx0*(len(fields)-1)+idx])
ar.add_plot(plot)
abscissa.update(fields_bar[idx])
idx0 += 1
abscissa = map(lambda x : [x, None], abscissa)
ar.x_coord = category_coord.T(abscissa,0)
ar.draw(can)
can.close()
self.obj = external_pdf(pdf_string.getvalue())
self.obj.render()
pdf_string.close()
return True
def _create_bars(self, cr, uid, ids, report, fields, results, context):
pool = openerp.registry(cr.dbname)
pdf_string = cStringIO.StringIO()
can = canvas.init(fname=pdf_string, format='pdf')
can.show(80,380,'/16/H'+report['title'])
process_date = {
'D': lambda x: reduce(lambda xx, yy: xx + '-' + yy, x.split('-')[1:3]),
'M': lambda x: x.split('-')[1],
'Y': lambda x: x.split('-')[0]
}
order_date = {
'D': lambda x: time.mktime((2005, int(x.split('-')[0]), int(x.split('-')[1]), 0, 0, 0, 0, 0, 0)),
'M': lambda x: x,
'Y': lambda x: x
}
ar = area.T(size=(350,350),
x_axis = axis.X(label = fields[0]['name'], format="/a-30{}%s"),
y_axis = axis.Y(label = ', '.join(map(lambda x : x['name'], fields[1:]))))
idx = 0
date_idx = None
fct = {}
for f in fields:
field_id = (f['field_child3'] and f['field_child3'][0]) or (f['field_child2'] and f['field_child2'][0]) or (f['field_child1'] and f['field_child1'][0]) or (f['field_child0'] and f['field_child0'][0])
if field_id:
type = pool['ir.model.fields'].read(cr, uid, [field_id],['ttype'])
if type[0]['ttype'] == 'date':
date_idx = idx
fct[idx] = process_date[report['frequency']]
else:
fct[idx] = lambda x : x
else:
fct[idx] = lambda x : x
idx+=1
# plots are usually displayed year by year
# so we do so if the first field is a date
data_by_year = {}
if date_idx is not None:
for r in results:
key = process_date['Y'](r[date_idx])
if key not in data_by_year:
data_by_year[key] = []
for i in range(len(r)):
r[i] = fct[i](r[i])
data_by_year[key].append(r)
else:
data_by_year[''] = results
nb_bar = len(data_by_year)*(len(fields)-1)
colors = map(lambda x:fill_style.Plain(bgcolor=x), misc.choice_colors(nb_bar))
abscissa = {}
for line in data_by_year.keys():
fields_bar = []
# sum the data and save it in a list, one item per field
for d in data_by_year[line]:
for idx in range(len(fields)-1):
fields_bar.append({})
if d[0] in fields_bar[idx]:
fields_bar[idx][d[0]] += d[idx+1]
else:
fields_bar[idx][d[0]] = d[idx+1]
for idx in range(len(fields)-1):
data = {}
for k in fields_bar[idx].keys():
if k in data:
data[k] += fields_bar[idx][k]
else:
data[k] = fields_bar[idx][k]
data_cum = []
prev = 0.0
keys = data.keys()
keys.sort()
# cumulate if necessary
for k in keys:
data_cum.append([k, float(data[k])+float(prev)])
if fields[idx+1]['cumulate']:
prev += data[k]
idx0 = 0
plot = bar_plot.T(label=fields[idx+1]['name']+' '+str(line), data = data_cum, cluster=(idx0*(len(fields)-1)+idx,nb_bar), fill_style=colors[idx0*(len(fields)-1)+idx])
ar.add_plot(plot)
abscissa.update(fields_bar[idx])
idx0 += 1
abscissa = map(lambda x : [x, None], abscissa)
abscissa.sort()
ar.x_coord = category_coord.T(abscissa,0)
ar.draw(can)
can.close()
self.obj = external_pdf(pdf_string.getvalue())
self.obj.render()
pdf_string.close()
return True
def _create_pie(self, cr, uid, ids, report, fields, results, context):
pdf_string = cStringIO.StringIO()
can = canvas.init(fname=pdf_string, format='pdf')
ar = area.T(size=(350,350), legend=legend.T(),
x_grid_style = None, y_grid_style = None)
colors = map(lambda x:fill_style.Plain(bgcolor=x), misc.choice_colors(len(results)))
if reduce(lambda x,y : x+y, map(lambda x : x[1],results)) == 0.0:
raise except_osv(_('Error'), _("The sum of the data (2nd field) is null.\nWe can't draw a pie chart !"))
plot = pie_plot.T(data=results, arc_offsets=[0,10,0,10],
shadow = (2, -2, fill_style.gray50),
label_offset = 25,
arrow_style = arrow.a3,
fill_styles=colors)
ar.add_plot(plot)
ar.draw(can)
can.close()
self.obj = external_pdf(pdf_string.getvalue())
self.obj.render()
pdf_string.close()
return True
def _create_table(self, uid, ids, report, fields, tree, results, context):
pageSize=common.pageSize.get(report['print_format'], [210.0,297.0])
if report['print_orientation']=='landscape':
pageSize=[pageSize[1],pageSize[0]]
new_doc = etree.Element('report')
config = etree.SubElement(new_doc, 'config')
def _append_node(name, text):
n = etree.SubElement(config, name)
n.text = text
_append_node('date', time.strftime('%d/%m/%Y'))
_append_node('PageSize', '%.2fmm,%.2fmm' % tuple(pageSize))
_append_node('PageFormat', '%s' % report['print_format'])
_append_node('PageWidth', '%.2f' % (pageSize[0] * 2.8346,))
_append_node('PageHeight', '%.2f' %(pageSize[1] * 2.8346,))
length = pageSize[0]-30-reduce(lambda x,y:x+(y['width'] or 0), fields, 0)
count = 0
for f in fields:
if not f['width']: count+=1
for f in fields:
if not f['width']:
f['width']=round((float(length)/count)-0.5)
_append_node('tableSize', '%s' % ','.join(map(lambda x: '%.2fmm' % (x['width'],), fields)))
_append_node('report-header', '%s' % (report['title'],))
_append_node('report-footer', '%s' % (report['footer'],))
header = etree.SubElement(new_doc, 'header')
for f in fields:
field = etree.SubElement(header, 'field')
field.text = f['name']
lines = etree.SubElement(new_doc, 'lines')
for line in results:
node_line = etree.SubElement(lines, 'row')
for f in range(len(fields)):
col = etree.SubElement(node_line, 'col', tree='no')
if line[f] is not None:
col.text = line[f] or ''
else:
col.text = '/'
transform = etree.XSLT(
etree.parse(os.path.join(tools.config['root_path'],
'addons/base/report/custom_new.xsl')))
rml = etree.tostring(transform(new_doc))
self.obj = render.rml(rml)
self.obj.render()
return True
report_custom('report.custom')
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
agpl-3.0
|
dbrattli/aioreactive
|
test/test_flat_map.py
|
1
|
2446
|
import aioreactive as rx
import pytest
from aioreactive.notification import OnCompleted, OnNext
from aioreactive.testing import AsyncTestObserver, VirtualTimeEventLoop
from aioreactive.types import AsyncObservable
from expression.core import pipe
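# The fixture below replaces the default event loop with a virtual-time
# loop, so time-based operators complete immediately in these tests.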
@pytest.yield_fixture() # type: ignore
def event_loop():
loop = VirtualTimeEventLoop()
yield loop
loop.close()
@pytest.mark.asyncio
async def test_flat_map_done():
xs: rx.AsyncSubject[int] = rx.AsyncSubject()
def mapper(value: int) -> rx.AsyncObservable[int]:
return rx.from_iterable([value])
ys = pipe(xs, rx.flat_map(mapper))
obv = AsyncTestObserver()
await ys.subscribe_async(obv)
await xs.asend(10)
await xs.asend(20)
await xs.aclose()
await obv
assert obv.values == [(0, OnNext(10)), (0, OnNext(20)), (0, OnCompleted)]
@pytest.mark.asyncio
async def test_flat_map_monad():
m = rx.single(42)
def mapper(x: int) -> AsyncObservable[int]:
return rx.single(x * 10)
a = await rx.run(pipe(m, rx.flat_map(mapper)))
b = await rx.run(rx.single(420))
assert a == b
@pytest.mark.asyncio
async def test_flat_map_monad_law_left_identity():
# return x >>= f is the same thing as f x
x = 3
def f(x: int) -> AsyncObservable[int]:
return rx.single(x + 100000)
a = await rx.run(pipe(rx.single(x), rx.flat_map(f)))
b = await rx.run(f(x))
assert a == b
@pytest.mark.asyncio
async def test_flat_map_monad_law_right_identity():
# m >>= return is no different than just m.
m = rx.single("move on up")
def mapper(x: str) -> AsyncObservable[str]:
return rx.single(x)
a = await rx.run(pipe(m, rx.flat_map(mapper)))
b = await rx.run(m)
assert a == b
@pytest.mark.asyncio
async def test_flat_map_monad_law_associativity():
# (m >>= f) >>= g is just like doing m >>= (\x -> f x >>= g)
m = rx.single(42)
def f(x: int) -> AsyncObservable[int]:
return rx.single(x + 1000)
def g(y: int) -> AsyncObservable[int]:
return rx.single(y * 333)
def h(x: int) -> AsyncObservable[int]:
return pipe(f(x), rx.flat_map(g))
zs = pipe(m, rx.flat_map(f))
a = await rx.run(pipe(zs, rx.flat_map(g)))
b = await rx.run(pipe(m, rx.flat_map(h)))
assert a == b
# if __name__ == "__main__":
# loop = asyncio.get_event_loop()
# loop.run_until_complete(test_flat_map_monad())
# loop.close()
|
mit
|
pbrady/sympy
|
sympy/vector/scalar.py
|
42
|
1640
|
from sympy.core.symbol import Symbol
from sympy.core.compatibility import u, range
from sympy.printing.pretty.stringpict import prettyForm
class BaseScalar(Symbol):
"""
A coordinate symbol/base scalar.
Ideally, users should not instantiate this class.
"""
def __new__(cls, name, index, system, pretty_str, latex_str):
from sympy.vector.coordsysrect import CoordSysCartesian
obj = super(BaseScalar, cls).__new__(cls, name)
if not isinstance(system, CoordSysCartesian):
raise TypeError("system should be a CoordSysCartesian")
if index not in range(0, 3):
raise ValueError("Invalid index specified.")
#The _id is used for equating purposes, and for hashing
obj._id = (index, system)
obj._name = name
obj._pretty_form = u(pretty_str)
obj._latex_form = latex_str
obj._system = system
return obj
def _latex(self, printer=None):
return self._latex_form
def _pretty(self, printer=None):
return prettyForm(self._pretty_form)
@property
def system(self):
return self._system
def __eq__(self, other):
#Check if the other object is a BaseScalar of same index
#and coordinate system
if isinstance(other, BaseScalar):
if other._id == self._id:
return True
return False
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return self._id.__hash__()
def __str__(self, printer=None):
return self._name
__repr__ = __str__
_sympystr = __str__
|
bsd-3-clause
|
mgit-at/ansible
|
test/units/modules/network/f5/test_bigip_smtp.py
|
21
|
5016
|
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
import sys
if sys.version_info < (2, 7):
pytestmark = pytest.mark.skip("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
try:
from library.modules.bigip_smtp import ApiParameters
from library.modules.bigip_smtp import ModuleParameters
from library.modules.bigip_smtp import ModuleManager
from library.modules.bigip_smtp import ArgumentSpec
# In Ansible 2.8, Ansible changed import paths.
from test.units.compat import unittest
from test.units.compat.mock import Mock
from test.units.compat.mock import patch
from test.units.modules.utils import set_module_args
except ImportError:
from ansible.modules.network.f5.bigip_smtp import ApiParameters
from ansible.modules.network.f5.bigip_smtp import ModuleParameters
from ansible.modules.network.f5.bigip_smtp import ModuleManager
from ansible.modules.network.f5.bigip_smtp import ArgumentSpec
# Ansible 2.8 imports
from units.compat import unittest
from units.compat.mock import Mock
from units.compat.mock import patch
from units.modules.utils import set_module_args
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_module_parameters(self):
args = dict(
name='foo',
smtp_server='1.1.1.1',
smtp_server_port='25',
smtp_server_username='admin',
smtp_server_password='password',
local_host_name='smtp.mydomain.com',
encryption='tls',
update_password='always',
from_address='no-reply@mydomain.com',
authentication=True,
)
p = ModuleParameters(params=args)
assert p.name == 'foo'
assert p.smtp_server == '1.1.1.1'
assert p.smtp_server_port == 25
assert p.smtp_server_username == 'admin'
assert p.smtp_server_password == 'password'
assert p.local_host_name == 'smtp.mydomain.com'
assert p.encryption == 'tls'
assert p.update_password == 'always'
assert p.from_address == 'no-reply@mydomain.com'
assert p.authentication_disabled is None
assert p.authentication_enabled is True
def test_api_parameters(self):
p = ApiParameters(params=load_fixture('load_sys_smtp_server.json'))
assert p.name == 'foo'
assert p.smtp_server == 'mail.foo.bar'
assert p.smtp_server_port == 465
assert p.smtp_server_username == 'admin'
assert p.smtp_server_password == '$M$Ch$this-is-encrypted=='
assert p.local_host_name == 'mail-host.foo.bar'
assert p.encryption == 'ssl'
assert p.from_address == 'no-reply@foo.bar'
assert p.authentication_disabled is None
assert p.authentication_enabled is True
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
def test_create_monitor(self, *args):
set_module_args(dict(
name='foo',
smtp_server='1.1.1.1',
smtp_server_port='25',
smtp_server_username='admin',
smtp_server_password='password',
local_host_name='smtp.mydomain.com',
encryption='tls',
update_password='always',
from_address='no-reply@mydomain.com',
authentication=True,
partition='Common',
server='localhost',
password='password',
user='admin'
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode
)
# Override methods in the specific type of manager
mm = ModuleManager(module=module)
mm.exists = Mock(side_effect=[False, True])
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['encryption'] == 'tls'
assert results['smtp_server'] == '1.1.1.1'
assert results['smtp_server_port'] == 25
assert results['local_host_name'] == 'smtp.mydomain.com'
assert results['authentication'] is True
assert results['from_address'] == 'no-reply@mydomain.com'
assert 'smtp_server_username' not in results
assert 'smtp_server_password' not in results
|
gpl-3.0
|
mmilidoni/system-utility
|
vim-config/.vim/ftplugin/python/pyflakes/pyflakes/scripts/pyflakes.py
|
41
|
2656
|
"""
Implementation of the command-line I{pyflakes} tool.
"""
import sys
import os
import _ast
checker = __import__('pyflakes.checker').checker
def check(codeString, filename):
"""
Check the Python source given by C{codeString} for flakes.
@param codeString: The Python source to check.
@type codeString: C{str}
@param filename: The name of the file the source came from, used to report
errors.
@type filename: C{str}
@return: The number of warnings emitted.
@rtype: C{int}
"""
# First, compile into an AST and handle syntax errors.
try:
tree = compile(codeString, filename, "exec", _ast.PyCF_ONLY_AST)
except SyntaxError, value:
msg = value.args[0]
(lineno, offset, text) = value.lineno, value.offset, value.text
# If there's an encoding problem with the file, the text is None.
if text is None:
# Avoid using msg, since for the only known case, it contains a
# bogus message that claims the encoding the file declared was
# unknown.
print >> sys.stderr, "%s: problem decoding source" % (filename, )
else:
line = text.splitlines()[-1]
if offset is not None:
offset = offset - (len(text) - len(line))
print >> sys.stderr, '%s:%d: %s' % (filename, lineno, msg)
print >> sys.stderr, line
if offset is not None:
print >> sys.stderr, " " * offset, "^"
return 1
else:
# Okay, it's syntactically valid. Now check it.
w = checker.Checker(tree, filename)
w.messages.sort(lambda a, b: cmp(a.lineno, b.lineno))
for warning in w.messages:
print warning
return len(w.messages)
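# Hypothetical usage sketch (not part of the original module):
#
#   n = check("import os\nimport os\n", "<example>")
#
# would print a redefinition warning for `os` and return 1.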
def checkPath(filename):
"""
Check the given path, printing out any warnings detected.
@return: the number of warnings printed
"""
try:
return check(file(filename, 'U').read() + '\n', filename)
except IOError, msg:
print >> sys.stderr, "%s: %s" % (filename, msg.args[1])
return 1
def main():
warnings = 0
args = sys.argv[1:]
if args:
for arg in args:
if os.path.isdir(arg):
for dirpath, dirnames, filenames in os.walk(arg):
for filename in filenames:
if filename.endswith('.py'):
warnings += checkPath(os.path.join(dirpath, filename))
else:
warnings += checkPath(arg)
else:
warnings += check(sys.stdin.read(), '<stdin>')
raise SystemExit(warnings > 0)
|
gpl-2.0
|
lunyang/pylearn2
|
pylearn2/training_algorithms/tests/test_sgd.py
|
32
|
61311
|
from __future__ import print_function
import numpy as np
from theano.compat.six.moves import cStringIO, xrange
import theano.tensor as T
from theano.tests import disturb_mem
from theano.tests.record import Record, RecordMode
from pylearn2.compat import first_key
from pylearn2.costs.cost import Cost, SumOfCosts, DefaultDataSpecsMixin
from pylearn2.datasets.dense_design_matrix import DenseDesignMatrix
from pylearn2.models.model import Model
from pylearn2.monitor import Monitor, push_monitor
from pylearn2.space import CompositeSpace, Conv2DSpace, VectorSpace
from pylearn2.termination_criteria import EpochCounter
from pylearn2.testing.cost import CallbackCost, SumOfParams
from pylearn2.testing.datasets import ArangeDataset
from pylearn2.train import Train
from pylearn2.training_algorithms.sgd import (ExponentialDecay,
PolyakAveraging,
LinearDecay,
LinearDecayOverEpoch,
MonitorBasedLRAdjuster,
SGD,
AnnealedLearningRate,
EpochMonitor)
from pylearn2.training_algorithms.learning_rule import (Momentum,
MomentumAdjustor)
from pylearn2.utils.iteration import _iteration_schemes
from pylearn2.utils import safe_izip, safe_union, sharedX
from pylearn2.utils.exc import reraise_as
class SupervisedDummyCost(DefaultDataSpecsMixin, Cost):
supervised = True
def expr(self, model, data):
space, sources = self.get_data_specs(model)
space.validate(data)
(X, Y) = data
return T.square(model(X) - Y).mean()
class DummyCost(DefaultDataSpecsMixin, Cost):
def expr(self, model, data):
space, sources = self.get_data_specs(model)
space.validate(data)
X = data
return T.square(model(X) - X).mean()
class DummyModel(Model):
"""
A dummy model used for testing.
Parameters
----------
shapes : list
List of shapes for each parameter.
lr_scalers : list, optional
Scalers to use for each parameter.
init_type : string, optional
How to fill initial values in parameters: `random` - generate random
values; `zeros` - set all to zeros.
"""
def __init__(self, shapes, lr_scalers=None, init_type='random'):
super(DummyModel, self).__init__()
if init_type == 'random':
self._params = [sharedX(np.random.random(shp)) for shp in shapes]
elif init_type == 'zeros':
self._params = [sharedX(np.zeros(shp)) for shp in shapes]
else:
raise ValueError('Unknown value for init_type: %s',
init_type)
self.input_space = VectorSpace(1)
self.lr_scalers = lr_scalers
def __call__(self, X):
# Implemented only so that DummyCost would work
return X
def get_lr_scalers(self):
if self.lr_scalers:
return dict(zip(self._params, self.lr_scalers))
else:
return dict()
class SoftmaxModel(Model):
"""A dummy model used for testing.
Important properties:
has a parameter (P) for SGD to act on
has a get_output_space method, so it can tell the
algorithm what kind of space the targets for supervised
learning live in
has a get_input_space method, so it can tell the
algorithm what kind of space the features live in
"""
def __init__(self, dim):
super(SoftmaxModel, self).__init__()
self.dim = dim
rng = np.random.RandomState([2012, 9, 25])
self.P = sharedX(rng.uniform(-1., 1., (dim, )))
def get_params(self):
return [self.P]
def get_input_space(self):
return VectorSpace(self.dim)
def get_output_space(self):
return VectorSpace(self.dim)
def __call__(self, X):
# Make the test fail if algorithm does not
# respect get_input_space
assert X.ndim == 2
# Multiplying by P ensures the shape as well
# as ndim is correct
return T.nnet.softmax(X*self.P)
class TopoSoftmaxModel(Model):
"""A dummy model used for testing.
Like SoftmaxModel but its features have 2 topological
dimensions. This tests that the training algorithm
will provide topological data correctly.
"""
def __init__(self, rows, cols, channels):
super(TopoSoftmaxModel, self).__init__()
dim = rows * cols * channels
self.input_space = Conv2DSpace((rows, cols), channels)
self.dim = dim
rng = np.random.RandomState([2012, 9, 25])
self.P = sharedX(rng.uniform(-1., 1., (dim, )))
def get_params(self):
return [self.P]
def get_output_space(self):
return VectorSpace(self.dim)
def __call__(self, X):
# Make the test fail if algorithm does not
# respect get_input_space
assert X.ndim == 4
# Multiplying by P ensures the shape as well
# as ndim is correct
return T.nnet.softmax(X.reshape((X.shape[0], self.dim)) * self.P)
def test_sgd_unspec_num_mon_batch():
# tests that if you don't specify a number of
# monitoring batches, SGD configures the monitor
# to run on all the data
m = 25
visited = [False] * m
rng = np.random.RandomState([25, 9, 2012])
X = np.zeros((m, 1))
X[:, 0] = np.arange(m)
dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(1)
learning_rate = 1e-3
batch_size = 5
cost = DummyCost()
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=None,
monitoring_dataset=dataset,
termination_criterion=None,
update_callbacks=None,
set_batch_size=False)
algorithm.setup(dataset=dataset, model=model)
monitor = Monitor.get_monitor(model)
X = T.matrix()
def tracker(*data):
X, = data
assert X.shape[1] == 1
for i in xrange(X.shape[0]):
visited[int(X[i, 0])] = True
monitor.add_channel(name='tracker',
ipt=X,
val=0.,
prereqs=[tracker],
data_specs=(model.get_input_space(),
model.get_input_source()))
monitor()
if False in visited:
print(visited)
assert False
def test_sgd_sup():
# tests that we can run the sgd algorithm
# on a supervised cost.
# does not test for correctness at all, just
# that the algorithm runs without dying
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m, ))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
dataset = DenseDesignMatrix(X=X, y=Y)
m = 15
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m,))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
# Including a monitoring dataset lets us test that
# the monitor works with supervised data
monitoring_dataset = DenseDesignMatrix(X=X, y=Y)
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
cost = SupervisedDummyCost()
# We need to include this so the test actually stops running at some point
termination_criterion = EpochCounter(5)
algorithm = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def test_sgd_unsup():
# tests that we can run the sgd algorithm
# on an unsupervised cost.
# does not test for correctness at all, just
# that the algorithm runs without dying
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# Including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
cost = DummyCost()
# We need to include this so the test actually stops running at some point
termination_criterion = EpochCounter(5)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def get_topological_dataset(rng, rows, cols, channels, m):
X = rng.randn(m, rows, cols, channels)
dim = rows * cols * channels
idx = rng.randint(0, dim, (m,))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
return DenseDesignMatrix(topo_view=X, y=Y)
def test_linear_decay():
# tests that the class LinearDecay in sgd.py
# adjusts the learning rate properly over the training batches.
# it runs a small softmax model and at the end checks the learning rates.
# the learning rates are expected to start changing at batch 'start'
# by an amount of 'step' specified below.
# the decrease of the learning rate should continue linearly until
# we reach batch 'saturate' at which the learning rate equals
# 'learning_rate * decay_factor'
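# With the values used below (learning_rate=1e-1, start=5, saturate=10,
# decay_factor=0.1) the per-batch step is (0.1 - 0.01)/(10 - 5 + 1) = 0.015,
# so after e.g. 7 batches the expected rate is 0.01 + (10 - 7)*0.015 = 0.055
# (illustrative arithmetic only).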
class LearningRateTracker(object):
def __init__(self):
self.lr_rates = []
def __call__(self, algorithm):
self.lr_rates.append(algorithm.learning_rate.get_value())
dim = 3
dataset_size = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(dataset_size, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 15
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
start = 5
saturate = 10
decay_factor = 0.1
linear_decay = LinearDecay(start=start, saturate=saturate,
decay_factor=decay_factor)
# including this extension for saving learning rate value after each batch
lr_tracker = LearningRateTracker()
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=[linear_decay, lr_tracker],
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
step = (learning_rate - learning_rate*decay_factor)/(saturate - start + 1)
num_batches = np.ceil(dataset_size / float(batch_size)).astype(int)
for i in xrange(epoch_num * num_batches):
actual = lr_tracker.lr_rates[i]
batches_seen = i + 1
if batches_seen < start:
expected = learning_rate
elif batches_seen >= saturate:
expected = learning_rate*decay_factor
elif (start <= batches_seen) and (batches_seen < saturate):
expected = (decay_factor * learning_rate +
(saturate - batches_seen) * step)
if not np.allclose(actual, expected):
raise AssertionError("After %d batches, expected learning rate to "
"be %f, but it is %f." %
(batches_seen, expected, actual))
def test_annealed_learning_rate():
# tests that the class AnnealedLearningRate in sgd.py
# adjusts the learning rate properly over the training batches.
# it runs a small softmax model and at the end checks the learning rates.
# the learning rates are expected to start changing at batch 'anneal_start'
# After batch anneal_start, the learning rate should be
# learning_rate * anneal_start/number of batches seen
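# With learning_rate=1e-1 and anneal_start=5 (as below), the expected rate
# stays at 0.1 for the first 5 batches, then decays as 0.1 * 5/batches_seen:
# 0.05 after 10 batches, 0.025 after 20 (illustrative arithmetic only).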
class LearningRateTracker(object):
def __init__(self):
self.lr_rates = []
def __call__(self, algorithm):
self.lr_rates.append(algorithm.learning_rate.get_value())
dim = 3
dataset_size = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(dataset_size, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 15
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
anneal_start = 5
annealed_rate = AnnealedLearningRate(anneal_start=anneal_start)
# including this extension for saving learning rate value after each batch
lr_tracker = LearningRateTracker()
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=[annealed_rate, lr_tracker],
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
num_batches = np.ceil(dataset_size / float(batch_size)).astype(int)
for i in xrange(epoch_num * num_batches):
actual = lr_tracker.lr_rates[i]
batches_seen = i + 1
expected = learning_rate*min(1, float(anneal_start)/batches_seen)
if not np.allclose(actual, expected):
raise AssertionError("After %d batches, expected learning rate to "
"be %f, but it is %f." %
(batches_seen, expected, actual))
def test_linear_decay_over_epoch():
# tests that the class LinearDecayOverEpoch in sgd.py
# adjusts the learning rate properly over the training epochs.
# it runs a small softmax model and at the end checks the learning rates.
# the learning rates are expected to start changing at epoch 'start' by an
# amount of 'step' specified below.
# the decrease of the learning rate should continue linearly until we
# reach epoch 'saturate' at which the learning rate equals
# 'learning_rate * decay_factor'
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 15
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
algorithm = SGD(learning_rate, cost, batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
start = 5
saturate = 10
decay_factor = 0.1
linear_decay = LinearDecayOverEpoch(start=start,
saturate=saturate,
decay_factor=decay_factor)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[linear_decay])
train.main_loop()
lr = model.monitor.channels['learning_rate']
step = (learning_rate - learning_rate*decay_factor)/(saturate - start + 1)
for i in xrange(epoch_num + 1):
actual = lr.val_record[i]
if i < start:
expected = learning_rate
elif i >= saturate:
expected = learning_rate*decay_factor
elif (start <= i) and (i < saturate):
expected = decay_factor * learning_rate + (saturate - i) * step
if not np.allclose(actual, expected):
raise AssertionError("After %d epochs, expected learning rate to "
"be %f, but it is %f." %
(i, expected, actual))
def test_linear_decay_epoch_xfer():
# tests that the class LinearDecayOverEpoch in sgd.py
# carries the epoch count over properly when training is resumed
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 6
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
algorithm = SGD(learning_rate, cost, batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
start = 5
saturate = 10
decay_factor = 0.1
linear_decay = LinearDecayOverEpoch(start=start,
saturate=saturate,
decay_factor=decay_factor)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[linear_decay])
train.main_loop()
lr = model.monitor.channels['learning_rate']
final_learning_rate = lr.val_record[-1]
algorithm2 = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=EpochCounter(epoch_num+1,
new_epochs=False),
update_callbacks=None,
set_batch_size=False)
model_xfer = push_monitor(name="old_monitor",
transfer_experience=True,
model=model)
linear_decay2 = LinearDecayOverEpoch(start=start,
saturate=saturate,
decay_factor=decay_factor)
train2 = Train(dataset,
model_xfer,
algorithm2,
save_path=None,
save_freq=0,
extensions=[linear_decay2])
train2.main_loop()
lr_resume = model_xfer.monitor.channels['learning_rate']
resume_learning_rate = lr_resume.val_record[0]
assert np.allclose(resume_learning_rate,
final_learning_rate)
def test_momentum_epoch_xfer():
# tests that the class MomentumAdjustor in learning_rule.py
# carries the epoch count over properly when training is resumed
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 6
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
algorithm = SGD(learning_rate, cost, batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False,
learning_rule=Momentum(.4))
start = 1
saturate = 11
final_momentum = 0.9
momentum_adjustor = MomentumAdjustor(final_momentum=final_momentum,
start=start,
saturate=saturate)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[momentum_adjustor])
train.main_loop()
mm = model.monitor.channels['momentum']
final_momentum_init = mm.val_record[-1]
algorithm2 = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=EpochCounter(epoch_num+1,
new_epochs=False),
update_callbacks=None,
set_batch_size=False,
learning_rule=Momentum(.4))
model_xfer = push_monitor(name="old_monitor",
transfer_experience=True,
model=model)
momentum_adjustor2 = MomentumAdjustor(final_momentum=final_momentum,
start=start,
saturate=saturate)
train2 = Train(dataset,
model_xfer,
algorithm2,
save_path=None,
save_freq=0,
extensions=[momentum_adjustor2])
train2.main_loop()
assert np.allclose(model.monitor.channels['momentum'].val_record[0],
final_momentum_init)
def test_val_records_xfer():
# tests that push_monitor (from pylearn2.monitor)
# carries the monitor's val_records over properly when training is resumed
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 6
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
algorithm = SGD(learning_rate, cost, batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0)
train.main_loop()
assert len(model.monitor.channels['objective'].val_record) ==\
model.monitor._epochs_seen + 1
final_obj = model.monitor.channels['objective'].val_record[-1]
algorithm2 = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=EpochCounter(epoch_num+1,
new_epochs=False),
update_callbacks=None,
set_batch_size=False)
model_xfer = push_monitor(name="old_monitor",
transfer_experience=True,
model=model)
train2 = Train(dataset,
model_xfer,
algorithm2,
save_path=None,
save_freq=0)
train2.main_loop()
assert np.allclose(model.monitor.channels['objective'].val_record[0],
final_obj)
assert len(model.monitor.channels['objective'].val_record) == 2
def test_save_records():
# tests that the save_records flag of push_monitor
# (from pylearn2.monitor) transfers the
# val_records over properly
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(dim)
learning_rate = 1e-1
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 6
termination_criterion = EpochCounter(epoch_num)
cost = DummyCost()
algorithm = SGD(learning_rate, cost, batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0)
train.main_loop()
old_monitor_len =\
len(model.monitor.channels['objective'].val_record)
assert old_monitor_len == model.monitor._epochs_seen + 1
init_obj = model.monitor.channels['objective'].val_record[0]
final_obj = model.monitor.channels['objective'].val_record[-1]
index_final_obj =\
len(model.monitor.channels['objective'].val_record) - 1
algorithm2 = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=EpochCounter(epoch_num+1,
new_epochs=False),
update_callbacks=None,
set_batch_size=False)
model_xfer = push_monitor(name="old_monitor",
transfer_experience=True,
model=model,
save_records=True)
train2 = Train(dataset,
model_xfer,
algorithm2,
save_path=None,
save_freq=0)
train2.main_loop()
assert len(model.old_monitor.channels['objective'].val_record) ==\
old_monitor_len
assert np.allclose(model.monitor.channels['objective'].val_record[0],
init_obj)
assert len(model.monitor.channels['objective'].val_record) ==\
model.monitor._epochs_seen + 1
assert len(model.monitor.channels['objective'].val_record) ==\
epoch_num + 2
assert model.monitor.channels['objective'].val_record[index_final_obj] ==\
final_obj
def test_monitor_based_lr():
# tests that the class MonitorBasedLRAdjuster in sgd.py
# adjusts the learning rate properly over the training epochs.
# it runs a small softmax model and at the end checks the learning rates. It
# runs 2 loops. Each loop evaluates one of the if clauses when checking
# the observation channels. Otherwise, longer training epochs are needed
# to observe both if and elif cases.
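# The rule being verified, restated: if the new objective exceeds
# high_trigger * previous, the rate is multiplied by shrink_amt; else if it
# exceeds low_trigger * previous, it is multiplied by grow_amt; the result
# is clamped to [min_lr, max_lr] (see the reference loop at the end).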
high_trigger = 1.0
shrink_amt = 0.99
low_trigger = 0.99
grow_amt = 1.01
min_lr = 1e-7
max_lr = 1.
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
dataset = DenseDesignMatrix(X=X)
m = 15
X = rng.randn(m, dim)
learning_rate = 1e-2
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 5
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
cost = DummyCost()
for i in xrange(2):
if i == 1:
high_trigger = 0.99
model = SoftmaxModel(dim)
termination_criterion = EpochCounter(epoch_num)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
monitor_lr = MonitorBasedLRAdjuster(high_trigger=high_trigger,
shrink_amt=shrink_amt,
low_trigger=low_trigger,
grow_amt=grow_amt,
min_lr=min_lr,
max_lr=max_lr)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[monitor_lr])
train.main_loop()
v = model.monitor.channels['objective'].val_record
lr = model.monitor.channels['learning_rate'].val_record
lr_monitor = learning_rate
for i in xrange(2, epoch_num + 1):
if v[i-1] > high_trigger * v[i-2]:
lr_monitor *= shrink_amt
elif v[i-1] > low_trigger * v[i-2]:
lr_monitor *= grow_amt
lr_monitor = max(min_lr, lr_monitor)
lr_monitor = min(max_lr, lr_monitor)
assert np.allclose(lr_monitor, lr[i])
def test_bad_monitoring_input_in_monitor_based_lr():
# tests that the class MonitorBasedLRAdjuster in sgd.py rejects invalid
# channel_name or dataset_name values passed to the constructor.
dim = 3
m = 10
rng = np.random.RandomState([6, 2, 2014])
X = rng.randn(m, dim)
learning_rate = 1e-2
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 2
dataset = DenseDesignMatrix(X=X)
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_dataset = DenseDesignMatrix(X=X)
cost = DummyCost()
model = SoftmaxModel(dim)
termination_criterion = EpochCounter(epoch_num)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=2,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
# testing for bad dataset_name input
dummy = 'void'
monitor_lr = MonitorBasedLRAdjuster(dataset_name=dummy)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[monitor_lr])
try:
train.main_loop()
except ValueError as e:
pass
except Exception:
reraise_as(AssertionError("MonitorBasedLRAdjuster takes dataset_name "
"that is invalid "))
# testing for bad channel_name input
monitor_lr2 = MonitorBasedLRAdjuster(channel_name=dummy)
model2 = SoftmaxModel(dim)
train2 = Train(dataset,
model2,
algorithm,
save_path=None,
save_freq=0,
extensions=[monitor_lr2])
try:
train2.main_loop()
except ValueError as e:
pass
except Exception:
reraise_as(AssertionError("MonitorBasedLRAdjuster takes channel_name "
"that is invalid "))
return
def testing_multiple_datasets_in_monitor_based_lr():
# tests that the class MonitorBasedLRAdjuster in sgd.py rejects
# multiple datasets when more than one channel ending in '_objective'
# exists.
# This case happens when the user has not specified either channel_name or
# dataset_name in the constructor
dim = 3
m = 10
rng = np.random.RandomState([6, 2, 2014])
X = rng.randn(m, dim)
Y = rng.randn(m, dim)
learning_rate = 1e-2
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 1
# including a monitoring dataset lets us test that
# the monitor works with unsupervised data
monitoring_train = DenseDesignMatrix(X=X)
monitoring_test = DenseDesignMatrix(X=Y)
cost = DummyCost()
model = SoftmaxModel(dim)
dataset = DenseDesignMatrix(X=X)
termination_criterion = EpochCounter(epoch_num)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=2,
monitoring_dataset={'train': monitoring_train,
'test': monitoring_test},
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
monitor_lr = MonitorBasedLRAdjuster()
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[monitor_lr])
try:
train.main_loop()
except ValueError:
return
    raise AssertionError("MonitorBasedLRAdjuster accepted multiple monitoring "
                         "datasets in which more than one \"objective\" "
                         "channel exists while the user specified neither "
                         "channel_name nor dataset_name in the constructor "
                         "to disambiguate.")
def testing_multiple_datasets_with_specified_dataset_in_monitor_based_lr():
    # tests that MonitorBasedLRAdjuster (sgd.py) can properly use the
    # specified dataset_name from the constructor when multiple datasets
    # exist.
dim = 3
m = 10
rng = np.random.RandomState([6, 2, 2014])
X = rng.randn(m, dim)
Y = rng.randn(m, dim)
learning_rate = 1e-2
batch_size = 5
# We need to include this so the test actually stops running at some point
epoch_num = 1
    # including a monitoring dataset lets us test that
    # the monitor works with supervised data
monitoring_train = DenseDesignMatrix(X=X)
monitoring_test = DenseDesignMatrix(X=Y)
cost = DummyCost()
model = SoftmaxModel(dim)
dataset = DenseDesignMatrix(X=X)
termination_criterion = EpochCounter(epoch_num)
monitoring_dataset = {'train': monitoring_train, 'test': monitoring_test}
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=2,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
dataset_name = first_key(monitoring_dataset)
monitor_lr = MonitorBasedLRAdjuster(dataset_name=dataset_name)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=[monitor_lr])
train.main_loop()
def test_sgd_topo():
# tests that we can run the sgd algorithm
# on data with topology
# does not test for correctness at all, just
# that the algorithm runs without dying
rows = 3
cols = 4
channels = 2
dim = rows * cols * channels
m = 10
rng = np.random.RandomState([25, 9, 2012])
dataset = get_topological_dataset(rng, rows, cols, channels, m)
    # including a monitoring dataset lets us test that
    # the monitor works with supervised data
m = 15
monitoring_dataset = get_topological_dataset(rng, rows, cols, channels, m)
model = TopoSoftmaxModel(rows, cols, channels)
learning_rate = 1e-3
batch_size = 5
cost = SupervisedDummyCost()
# We need to include this so the test actually stops running at some point
termination_criterion = EpochCounter(5)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=monitoring_dataset,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def test_sgd_no_mon():
# tests that we can run the sgd algorithm
    # without a monitoring dataset
# does not test for correctness at all, just
# that the algorithm runs without dying
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m,))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
dataset = DenseDesignMatrix(X=X, y=Y)
m = 15
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m,))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
cost = SupervisedDummyCost()
# We need to include this so the test actually stops running at some point
termination_criterion = EpochCounter(5)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_dataset=None,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def test_reject_mon_batch_without_mon():
# tests that setting up the sgd algorithm
# without a monitoring dataset
# but with monitoring_batches specified is an error
dim = 3
m = 10
rng = np.random.RandomState([25, 9, 2012])
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m,))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
dataset = DenseDesignMatrix(X=X, y=Y)
m = 15
X = rng.randn(m, dim)
idx = rng.randint(0, dim, (m, ))
Y = np.zeros((m, dim))
for i in xrange(m):
Y[i, idx[i]] = 1
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
cost = SupervisedDummyCost()
try:
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
monitoring_batches=3,
monitoring_dataset=None,
update_callbacks=None,
set_batch_size=False)
except ValueError:
return
assert False
def test_sgd_sequential():
# tests that requesting train_iteration_mode = 'sequential'
# works
dim = 1
batch_size = 3
m = 5 * batch_size
dataset = ArangeDataset(m)
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
visited = [False] * m
def visit(X):
assert X.shape[1] == 1
assert np.all(X[1:] == X[0:-1]+1)
start = int(X[0, 0])
if start > 0:
assert visited[start - 1]
for i in xrange(batch_size):
assert not visited[start+i]
            visited[start + i] = True
data_specs = (model.get_input_space(), model.get_input_source())
cost = CallbackCost(visit, data_specs)
# We need to include this so the test actually stops running at some point
termination_criterion = EpochCounter(5)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
train_iteration_mode='sequential',
monitoring_dataset=None,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
algorithm.setup(dataset=dataset, model=model)
algorithm.train(dataset)
assert all(visited)
def test_determinism():
# Verifies that running SGD twice results in the same examples getting
# visited in the same order
for mode in _iteration_schemes:
dim = 1
batch_size = 3
num_batches = 5
m = num_batches * batch_size
dataset = ArangeDataset(m)
model = SoftmaxModel(dim)
learning_rate = 1e-3
batch_size = 5
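        # visited[0] records, for every example index, the step at which
        # that example was consumed; a second run later inserts a fresh
        # list at position 0 so the two orderings can be compared.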
visited = [[-1] * m]
def visit(X):
mx = max(visited[0])
counter = mx + 1
for i in X[:, 0]:
i = int(i)
assert visited[0][i] == -1
visited[0][i] = counter
counter += 1
data_specs = (model.get_input_space(), model.get_input_source())
cost = CallbackCost(visit, data_specs)
# We need to include this so the test actually stops running at some
# point
termination_criterion = EpochCounter(5)
def run_algorithm():
unsupported_modes = ['random_slice', 'random_uniform',
'even_sequences']
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
train_iteration_mode=mode,
monitoring_dataset=None,
termination_criterion=termination_criterion,
update_callbacks=None,
set_batch_size=False)
algorithm.setup(dataset=dataset, model=model)
raised = False
try:
algorithm.train(dataset)
except ValueError:
assert mode in unsupported_modes
raised = True
if mode in unsupported_modes:
assert raised
return True
return False
if run_algorithm():
continue
visited.insert(0, [-1] * m)
del model.monitor
run_algorithm()
for v in visited:
assert len(v) == m
for elem in range(m):
assert elem in v
assert len(visited) == 2
print(visited[0])
print(visited[1])
assert np.all(np.asarray(visited[0]) == np.asarray(visited[1]))
def test_determinism_2():
"""
A more aggressive determinism test. Tests that apply nodes are all passed
inputs with the same md5sums, apply nodes are run in same order, etc. Uses
disturb_mem to try to cause dictionaries to iterate in different orders,
etc.
"""
def run_sgd(mode):
# Must be seeded the same both times run_sgd is called
disturb_mem.disturb_mem()
rng = np.random.RandomState([2012, 11, 27])
batch_size = 5
train_batches = 3
valid_batches = 4
num_features = 2
# Synthesize dataset with a linear decision boundary
w = rng.randn(num_features)
def make_dataset(num_batches):
disturb_mem.disturb_mem()
m = num_batches*batch_size
X = rng.randn(m, num_features)
y = np.zeros((m, 1))
y[:, 0] = np.dot(X, w) > 0.
rval = DenseDesignMatrix(X=X, y=y)
rval.yaml_src = "" # suppress no yaml_src warning
X = rval.get_batch_design(batch_size)
assert X.shape == (batch_size, num_features)
return rval
train = make_dataset(train_batches)
valid = make_dataset(valid_batches)
num_chunks = 10
chunk_width = 2
class ManyParamsModel(Model):
"""
Make a model with lots of parameters, so that there are many
opportunities for their updates to get accidentally re-ordered
non-deterministically. This makes non-determinism bugs manifest
more frequently.
"""
def __init__(self):
super(ManyParamsModel, self).__init__()
self.W1 = [sharedX(rng.randn(num_features, chunk_width)) for i
in xrange(num_chunks)]
disturb_mem.disturb_mem()
self.W2 = [sharedX(rng.randn(chunk_width))
for i in xrange(num_chunks)]
self._params = safe_union(self.W1, self.W2)
self.input_space = VectorSpace(num_features)
self.output_space = VectorSpace(1)
disturb_mem.disturb_mem()
model = ManyParamsModel()
disturb_mem.disturb_mem()
class LotsOfSummingCost(Cost):
"""
Make a cost whose gradient on the parameters involves summing many
terms together, so that T.grad is more likely to sum things in a
random order.
"""
supervised = True
def expr(self, model, data, **kwargs):
self.get_data_specs(model)[0].validate(data)
X, Y = data
disturb_mem.disturb_mem()
def mlp_pred(non_linearity):
Z = [T.dot(X, W) for W in model.W1]
H = [non_linearity(z) for z in Z]
Z = [T.dot(h, W) for h, W in safe_izip(H, model.W2)]
pred = sum(Z)
return pred
nonlinearity_predictions = map(mlp_pred,
[T.nnet.sigmoid,
T.nnet.softplus,
T.sqr,
T.sin])
pred = sum(nonlinearity_predictions)
disturb_mem.disturb_mem()
return abs(pred-Y[:, 0]).sum()
def get_data_specs(self, model):
data = CompositeSpace((model.get_input_space(),
model.get_output_space()))
source = (model.get_input_source(), model.get_target_source())
return (data, source)
cost = LotsOfSummingCost()
disturb_mem.disturb_mem()
algorithm = SGD(cost=cost,
batch_size=batch_size,
learning_rule=Momentum(.5),
learning_rate=1e-3,
monitoring_dataset={'train': train, 'valid': valid},
update_callbacks=[ExponentialDecay(decay_factor=2.,
min_lr=.0001)],
termination_criterion=EpochCounter(max_epochs=5))
disturb_mem.disturb_mem()
train_object = Train(dataset=train,
model=model,
algorithm=algorithm,
extensions=[PolyakAveraging(start=0),
MomentumAdjustor(final_momentum=.9,
start=1,
saturate=5), ],
save_freq=0)
disturb_mem.disturb_mem()
train_object.main_loop()
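    # Run once while recording the execution trace, then run again replaying
    # it: any difference in apply-node order or inputs (per the md5sums the
    # docstring mentions) shows up as a playback mismatch.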
output = cStringIO()
record = Record(file_object=output, replay=False)
record_mode = RecordMode(record)
run_sgd(record_mode)
output = cStringIO(output.getvalue())
playback = Record(file_object=output, replay=True)
playback_mode = RecordMode(playback)
run_sgd(playback_mode)
def test_lr_scalers():
"""
Tests that SGD respects Model.get_lr_scalers
"""
# We include a cost other than SumOfParams so that data is actually
# queried from the training set, and the expected number of updates
# are applied.
cost = SumOfCosts([SumOfParams(), (0., DummyCost())])
scales = [.01, .02, .05, 1., 5.]
shapes = [(1,), (9,), (8, 7), (6, 5, 4), (3, 2, 2, 2)]
learning_rate = .001
class ModelWithScalers(Model):
def __init__(self):
super(ModelWithScalers, self).__init__()
self._params = [sharedX(np.zeros(shape)) for shape in shapes]
self.input_space = VectorSpace(1)
def __call__(self, X):
# Implemented only so that DummyCost would work
return X
def get_lr_scalers(self):
return dict(zip(self._params, scales))
model = ModelWithScalers()
dataset = ArangeDataset(1)
sgd = SGD(cost=cost,
learning_rate=learning_rate,
learning_rule=Momentum(.0),
batch_size=1)
sgd.setup(model=model, dataset=dataset)
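    # Replicate one SGD step by hand: the gradient of SumOfParams w.r.t.
    # every parameter element is 1, so with zero momentum each element
    # should move by exactly -learning_rate * scale.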
manual = [param.get_value() for param in model.get_params()]
manual = [param - learning_rate * scale for param, scale in
zip(manual, scales)]
sgd.train(dataset=dataset)
assert all(np.allclose(manual_param, sgd_param.get_value())
for manual_param, sgd_param
in zip(manual, model.get_params()))
manual = [param - learning_rate * scale
for param, scale
in zip(manual, scales)]
sgd.train(dataset=dataset)
assert all(np.allclose(manual_param, sgd_param.get_value())
for manual_param, sgd_param
in zip(manual, model.get_params()))
def test_lr_scalers_momentum():
"""
Tests that SGD respects Model.get_lr_scalers when using
momentum.
"""
# We include a cost other than SumOfParams so that data is actually
# queried from the training set, and the expected number of updates
# are applied.
cost = SumOfCosts([SumOfParams(), (0., DummyCost())])
scales = [.01, .02, .05, 1., 5.]
shapes = [(1,), (9,), (8, 7), (6, 5, 4), (3, 2, 2, 2)]
model = DummyModel(shapes, lr_scalers=scales)
dataset = ArangeDataset(1)
learning_rate = .001
momentum = 0.5
sgd = SGD(cost=cost,
learning_rate=learning_rate,
learning_rule=Momentum(momentum),
batch_size=1)
sgd.setup(model=model, dataset=dataset)
manual = [param.get_value() for param in model.get_params()]
inc = [-learning_rate * scale for param, scale in zip(manual, scales)]
manual = [param + i for param, i in zip(manual, inc)]
sgd.train(dataset=dataset)
assert all(np.allclose(manual_param, sgd_param.get_value())
for manual_param, sgd_param
in zip(manual, model.get_params()))
manual = [param - learning_rate * scale + i * momentum
for param, scale, i in
zip(manual, scales, inc)]
sgd.train(dataset=dataset)
assert all(np.allclose(manual_param, sgd_param.get_value())
for manual_param, sgd_param
in zip(manual, model.get_params()))
def test_batch_size_specialization():
# Tests that using a batch size of 1 for training and a batch size
# other than 1 for monitoring does not result in a crash.
# This catches a bug reported in the pylearn-dev@googlegroups.com
# e-mail "[pylearn-dev] monitor assertion error: channel_X.type != X.type"
# The training data was specialized to a row matrix (theano tensor with
# first dim broadcastable) and the monitor ended up with expressions
# mixing the specialized and non-specialized version of the expression.
m = 2
rng = np.random.RandomState([25, 9, 2012])
X = np.zeros((m, 1))
dataset = DenseDesignMatrix(X=X)
model = SoftmaxModel(1)
learning_rate = 1e-3
cost = DummyCost()
algorithm = SGD(learning_rate, cost,
batch_size=1,
monitoring_batches=1,
monitoring_dataset=dataset,
termination_criterion=EpochCounter(max_epochs=1),
update_callbacks=None,
set_batch_size=False)
train = Train(dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def test_empty_monitoring_datasets():
"""
    Test that the handling of the monitoring datasets dictionary
    does not fail when it is empty.
"""
learning_rate = 1e-3
batch_size = 5
dim = 3
rng = np.random.RandomState([25, 9, 2012])
train_dataset = DenseDesignMatrix(X=rng.randn(10, dim))
model = SoftmaxModel(dim)
cost = DummyCost()
algorithm = SGD(learning_rate, cost,
batch_size=batch_size,
monitoring_dataset={},
termination_criterion=EpochCounter(2))
train = Train(train_dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
def test_uneven_batch_size():
"""
    Extensively tests SGD parametrisations for datasets whose number of
    examples is not divisible by the batch size.
The tested settings are:
- Model with force_batch_size = True or False
- Training dataset with number of examples divisible or not by batch size
- Monitoring dataset with number of examples divisible or not by batch size
- Even or uneven iterators
    2 of the 10 tested settings should raise ValueError
"""
learning_rate = 1e-3
batch_size = 5
dim = 3
m1, m2, m3 = 10, 15, 22
rng = np.random.RandomState([25, 9, 2012])
dataset1 = DenseDesignMatrix(X=rng.randn(m1, dim))
dataset2 = DenseDesignMatrix(X=rng.randn(m2, dim))
dataset3 = DenseDesignMatrix(X=rng.randn(m3, dim))
def train_with_monitoring_datasets(train_dataset,
monitoring_datasets,
model_force_batch_size,
train_iteration_mode,
monitor_iteration_mode):
model = SoftmaxModel(dim)
if model_force_batch_size:
model.force_batch_size = model_force_batch_size
cost = DummyCost()
algorithm = SGD(learning_rate, cost,
batch_size=batch_size,
train_iteration_mode=train_iteration_mode,
monitor_iteration_mode=monitor_iteration_mode,
monitoring_dataset=monitoring_datasets,
termination_criterion=EpochCounter(2))
train = Train(train_dataset,
model,
algorithm,
save_path=None,
save_freq=0,
extensions=None)
train.main_loop()
no_monitoring_datasets = None
even_monitoring_datasets = {'valid': dataset2}
uneven_monitoring_datasets = {'valid': dataset2, 'test': dataset3}
# without monitoring datasets
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=no_monitoring_datasets,
model_force_batch_size=False,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=no_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
# with uneven training datasets
train_with_monitoring_datasets(
train_dataset=dataset3,
monitoring_datasets=no_monitoring_datasets,
model_force_batch_size=False,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
try:
train_with_monitoring_datasets(
train_dataset=dataset3,
monitoring_datasets=no_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
assert False
except ValueError:
pass
train_with_monitoring_datasets(
train_dataset=dataset3,
monitoring_datasets=no_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='even_sequential',
monitor_iteration_mode='sequential')
# with even monitoring datasets
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=even_monitoring_datasets,
model_force_batch_size=False,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=even_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
# with uneven monitoring datasets
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=uneven_monitoring_datasets,
model_force_batch_size=False,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
try:
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=uneven_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='sequential',
monitor_iteration_mode='sequential')
assert False
except ValueError:
pass
train_with_monitoring_datasets(
train_dataset=dataset1,
monitoring_datasets=uneven_monitoring_datasets,
model_force_batch_size=batch_size,
train_iteration_mode='sequential',
monitor_iteration_mode='even_sequential')
def test_epoch_monitor():
"""
    Checks that monitored channels contain the expected number of values
when using EpochMonitor for updates within epochs.
"""
dim = 1
batch_size = 3
n_batches = 10
m = n_batches * batch_size
dataset = ArangeDataset(m)
model = SoftmaxModel(dim)
monitor = Monitor.get_monitor(model)
learning_rate = 1e-3
data_specs = (model.get_input_space(), model.get_input_source())
cost = DummyCost()
termination_criterion = EpochCounter(1)
monitor_rate = 3
em = EpochMonitor(model=model,
tick_rate=None,
monitor_rate=monitor_rate)
algorithm = SGD(learning_rate,
cost,
batch_size=batch_size,
train_iteration_mode='sequential',
monitoring_dataset=dataset,
termination_criterion=termination_criterion,
update_callbacks=[em],
set_batch_size=False)
algorithm.setup(dataset=dataset, model=model)
algorithm.train(dataset)
for key, val in monitor.channels.items():
assert len(val.val_record) == n_batches//monitor_rate
if __name__ == '__main__':
test_monitor_based_lr()
|
bsd-3-clause
|
ehashman/oh-mainline
|
vendor/packages/PyYaml/lib/yaml/serializer.py
|
560
|
4171
|
__all__ = ['Serializer', 'SerializerError']
from error import YAMLError
from events import *
from nodes import *
class SerializerError(YAMLError):
pass
class Serializer(object):
ANCHOR_TEMPLATE = u'id%03d'
def __init__(self, encoding=None,
explicit_start=None, explicit_end=None, version=None, tags=None):
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
self.use_version = version
self.use_tags = tags
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
self.closed = None
def open(self):
if self.closed is None:
self.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError("serializer is closed")
else:
raise SerializerError("serializer is already opened")
def close(self):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif not self.closed:
self.emit(StreamEndEvent())
self.closed = True
#def __del__(self):
# self.close()
def serialize(self, node):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif self.closed:
raise SerializerError("serializer is closed")
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
version=self.use_version, tags=self.use_tags))
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
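        # First visit marks a node with a None placeholder; a second visit
        # (i.e. the node is shared) upgrades it to a generated anchor, so
        # only aliased nodes get anchors emitted.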
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
self.anchors[node] = None
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
def serialize_node(self, node, parent, index):
alias = self.anchors[node]
if node in self.serialized_nodes:
self.emit(AliasEvent(alias))
else:
self.serialized_nodes[node] = True
self.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolve(ScalarNode, node.value, (False, True))
implicit = (node.tag == detected_tag), (node.tag == default_tag)
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
style=node.style))
elif isinstance(node, SequenceNode):
implicit = (node.tag
== self.resolve(SequenceNode, node.value, True))
self.emit(SequenceStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emit(SequenceEndEvent())
elif isinstance(node, MappingNode):
implicit = (node.tag
== self.resolve(MappingNode, node.value, True))
self.emit(MappingStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emit(MappingEndEvent())
self.ascend_resolver()
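# Hedged usage sketch (editor addition, not part of upstream PyYAML):
# Serializer is normally mixed into a Dumper alongside Emitter, Resolver
# and Representer; the public helpers below exercise exactly this class.
#
#     import yaml
#     node = yaml.compose(u"a: [1, 2]")   # text -> representation node
#     print yaml.serialize(node)          # node -> text, via Serializer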
|
agpl-3.0
|
NonLogicalDev/ucsd_cat125
|
client/libs/custom/three.js/utils/exporters/maya/plug-ins/threeJsFileTranslator.py
|
210
|
16160
|
__author__ = 'Sean Griffin'
__version__ = '1.0.0'
__email__ = 'sean@thoughtbot.com'
import sys
import os.path
import json
import shutil
from pymel.core import *
from maya.OpenMaya import *
from maya.OpenMayaMPx import *
kPluginTranslatorTypeName = 'Three.js'
kOptionScript = 'ThreeJsExportScript'
kDefaultOptionsString = '0'
FLOAT_PRECISION = 8
class ThreeJsWriter(object):
def __init__(self):
self.componentKeys = ['vertices', 'normals', 'colors', 'uvs', 'faces',
'materials', 'diffuseMaps', 'specularMaps', 'bumpMaps', 'copyTextures',
'bones', 'skeletalAnim', 'bakeAnimations', 'prettyOutput']
def write(self, path, optionString, accessMode):
self.path = path
self._parseOptions(optionString)
self.verticeOffset = 0
self.uvOffset = 0
self.normalOffset = 0
self.vertices = []
self.materials = []
self.faces = []
self.normals = []
self.uvs = []
self.morphTargets = []
self.bones = []
self.animations = []
self.skinIndices = []
self.skinWeights = []
if self.options["bakeAnimations"]:
print("exporting animations")
self._exportAnimations()
self._goToFrame(self.options["startFrame"])
if self.options["materials"]:
print("exporting materials")
self._exportMaterials()
if self.options["bones"]:
print("exporting bones")
select(map(lambda m: m.getParent(), ls(type='mesh')))
runtime.GoToBindPose()
self._exportBones()
print("exporting skins")
self._exportSkins()
print("exporting meshes")
self._exportMeshes()
if self.options["skeletalAnim"]:
print("exporting keyframe animations")
self._exportKeyframeAnimations()
print("writing file")
output = {
'metadata': {
'formatVersion': 3.1,
'generatedBy': 'Maya Exporter'
},
'vertices': self.vertices,
'uvs': [self.uvs],
'faces': self.faces,
'normals': self.normals,
'materials': self.materials,
}
if self.options['bakeAnimations']:
output['morphTargets'] = self.morphTargets
if self.options['bones']:
output['bones'] = self.bones
output['skinIndices'] = self.skinIndices
output['skinWeights'] = self.skinWeights
output['influencesPerVertex'] = self.options["influencesPerVertex"]
if self.options['skeletalAnim']:
output['animations'] = self.animations
        with open(path, 'w') as f:
if self.options['prettyOutput']:
f.write(json.dumps(output, sort_keys=True, indent=4, separators=(',', ': ')))
else:
f.write(json.dumps(output, separators=(",",":")))
def _allMeshes(self):
        # Cache under a single-underscore name: assigning to '__allMeshes'
        # inside the class gets name-mangled, so hasattr(self, '__allMeshes')
        # never found the cache and the scene was re-scanned on every call.
        if not hasattr(self, '_allMeshes_cache'):
            self._allMeshes_cache = filter(lambda m: len(m.listConnections()) > 0,
                                           ls(type='mesh'))
        return self._allMeshes_cache
def _parseOptions(self, optionsString):
self.options = dict([(x, False) for x in self.componentKeys])
for key in self.componentKeys:
self.options[key] = key in optionsString
if self.options["bones"]:
boneOptionsString = optionsString[optionsString.find("bones"):]
boneOptions = boneOptionsString.split(' ')
self.options["influencesPerVertex"] = int(boneOptions[1])
if self.options["bakeAnimations"]:
bakeAnimOptionsString = optionsString[optionsString.find("bakeAnimations"):]
bakeAnimOptions = bakeAnimOptionsString.split(' ')
self.options["startFrame"] = int(bakeAnimOptions[1])
self.options["endFrame"] = int(bakeAnimOptions[2])
self.options["stepFrame"] = int(bakeAnimOptions[3])
def _exportMeshes(self):
if self.options['vertices']:
self._exportVertices()
for mesh in self._allMeshes():
self._exportMesh(mesh)
def _exportMesh(self, mesh):
print("Exporting " + mesh.name())
if self.options['faces']:
print("Exporting faces")
self._exportFaces(mesh)
self.verticeOffset += len(mesh.getPoints())
self.uvOffset += mesh.numUVs()
self.normalOffset += mesh.numNormals()
if self.options['normals']:
print("Exporting normals")
self._exportNormals(mesh)
if self.options['uvs']:
print("Exporting UVs")
self._exportUVs(mesh)
def _getMaterialIndex(self, face, mesh):
if not hasattr(self, '_materialIndices'):
self._materialIndices = dict([(mat['DbgName'], i) for i, mat in enumerate(self.materials)])
if self.options['materials']:
for engine in mesh.listConnections(type='shadingEngine'):
if sets(engine, isMember=face) or sets(engine, isMember=mesh):
for material in engine.listConnections(type='lambert'):
                        if material.name() in self._materialIndices:
return self._materialIndices[material.name()]
return -1
def _exportVertices(self):
self.vertices += self._getVertices()
def _exportAnimations(self):
for frame in self._framesToExport():
self._exportAnimationForFrame(frame)
def _framesToExport(self):
return range(self.options["startFrame"], self.options["endFrame"], self.options["stepFrame"])
def _exportAnimationForFrame(self, frame):
print("exporting frame " + str(frame))
self._goToFrame(frame)
self.morphTargets.append({
'name': "frame_" + str(frame),
'vertices': self._getVertices()
})
def _getVertices(self):
        return [coord
                for mesh in self._allMeshes()
                for point in mesh.getPoints(space='world')
                for coord in [round(point.x, FLOAT_PRECISION),
                              round(point.y, FLOAT_PRECISION),
                              round(point.z, FLOAT_PRECISION)]]
def _goToFrame(self, frame):
currentTime(frame)
def _exportFaces(self, mesh):
typeBitmask = self._getTypeBitmask()
for face in mesh.faces:
materialIndex = self._getMaterialIndex(face, mesh)
hasMaterial = materialIndex != -1
self._exportFaceBitmask(face, typeBitmask, hasMaterial=hasMaterial)
self.faces += map(lambda x: x + self.verticeOffset, face.getVertices())
if self.options['materials']:
if hasMaterial:
self.faces.append(materialIndex)
if self.options['uvs'] and face.hasUVs():
self.faces += map(lambda v: face.getUVIndex(v) + self.uvOffset, range(face.polygonVertexCount()))
if self.options['normals']:
self._exportFaceVertexNormals(face)
def _exportFaceBitmask(self, face, typeBitmask, hasMaterial=True):
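        # three.js JSON (format 3.1) face bitmask: bit 0 marks a quad,
        # bit 1 a per-face material index, bit 3 face-vertex UVs; the
        # type bitmask from _getTypeBitmask contributes bit 5 (normals).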
if face.polygonVertexCount() == 4:
faceBitmask = 1
else:
faceBitmask = 0
if hasMaterial:
faceBitmask |= (1 << 1)
if self.options['uvs'] and face.hasUVs():
faceBitmask |= (1 << 3)
self.faces.append(typeBitmask | faceBitmask)
def _exportFaceVertexNormals(self, face):
for i in range(face.polygonVertexCount()):
self.faces.append(face.normalIndex(i) + self.normalOffset)
def _exportNormals(self, mesh):
for normal in mesh.getNormals():
            self.normals += [round(normal.x, FLOAT_PRECISION),
                             round(normal.y, FLOAT_PRECISION),
                             round(normal.z, FLOAT_PRECISION)]
def _exportUVs(self, mesh):
us, vs = mesh.getUVs()
for i, u in enumerate(us):
self.uvs.append(u)
self.uvs.append(vs[i])
def _getTypeBitmask(self):
bitmask = 0
if self.options['normals']:
bitmask |= 32
return bitmask
def _exportMaterials(self):
for mat in ls(type='lambert'):
self.materials.append(self._exportMaterial(mat))
def _exportMaterial(self, mat):
result = {
"DbgName": mat.name(),
"blending": "NormalBlending",
"colorDiffuse": map(lambda i: i * mat.getDiffuseCoeff(), mat.getColor().rgb),
"colorAmbient": mat.getAmbientColor().rgb,
"depthTest": True,
"depthWrite": True,
"shading": mat.__class__.__name__,
"transparency": mat.getTransparency().a,
"transparent": mat.getTransparency().a != 1.0,
"vertexColors": False
}
if isinstance(mat, nodetypes.Phong):
result["colorSpecular"] = mat.getSpecularColor().rgb
result["specularCoef"] = mat.getCosPower()
if self.options["specularMaps"]:
self._exportSpecularMap(result, mat)
if self.options["bumpMaps"]:
self._exportBumpMap(result, mat)
if self.options["diffuseMaps"]:
self._exportDiffuseMap(result, mat)
return result
def _exportBumpMap(self, result, mat):
for bump in mat.listConnections(type='bump2d'):
for f in bump.listConnections(type='file'):
result["mapNormalFactor"] = 1
self._exportFile(result, f, "Normal")
def _exportDiffuseMap(self, result, mat):
for f in mat.attr('color').inputs():
result["colorDiffuse"] = f.attr('defaultColor').get()
self._exportFile(result, f, "Diffuse")
def _exportSpecularMap(self, result, mat):
for f in mat.attr('specularColor').inputs():
result["colorSpecular"] = f.attr('defaultColor').get()
self._exportFile(result, f, "Specular")
def _exportFile(self, result, mapFile, mapType):
src = mapFile.ftn.get()
targetDir = os.path.dirname(self.path)
fName = os.path.basename(src)
if self.options['copyTextures']:
shutil.copy2(src, os.path.join(targetDir, fName))
result["map" + mapType] = fName
result["map" + mapType + "Repeat"] = [1, 1]
result["map" + mapType + "Wrap"] = ["repeat", "repeat"]
result["map" + mapType + "Anistropy"] = 4
def _exportBones(self):
for joint in ls(type='joint'):
if joint.getParent():
parentIndex = self._indexOfJoint(joint.getParent().name())
else:
parentIndex = -1
rotq = joint.getRotation(quaternion=True) * joint.getOrientation()
pos = joint.getTranslation()
self.bones.append({
"parent": parentIndex,
"name": joint.name(),
"pos": self._roundPos(pos),
"rotq": self._roundQuat(rotq)
})
def _indexOfJoint(self, name):
if not hasattr(self, '_jointNames'):
self._jointNames = dict([(joint.name(), i) for i, joint in enumerate(ls(type='joint'))])
if name in self._jointNames:
return self._jointNames[name]
else:
return -1
def _exportKeyframeAnimations(self):
hierarchy = []
i = -1
frameRate = FramesPerSecond(currentUnit(query=True, time=True)).value()
for joint in ls(type='joint'):
hierarchy.append({
"parent": i,
"keys": self._getKeyframes(joint, frameRate)
})
i += 1
self.animations.append({
"name": "skeletalAction.001",
"length": (playbackOptions(maxTime=True, query=True) - playbackOptions(minTime=True, query=True)) / frameRate,
"fps": 1,
"hierarchy": hierarchy
})
def _getKeyframes(self, joint, frameRate):
firstFrame = playbackOptions(minTime=True, query=True)
lastFrame = playbackOptions(maxTime=True, query=True)
frames = sorted(list(set(keyframe(joint, query=True) + [firstFrame, lastFrame])))
keys = []
print("joint " + joint.name() + " has " + str(len(frames)) + " keyframes")
for frame in frames:
self._goToFrame(frame)
keys.append(self._getCurrentKeyframe(joint, frame, frameRate))
return keys
def _getCurrentKeyframe(self, joint, frame, frameRate):
pos = joint.getTranslation()
rot = joint.getRotation(quaternion=True) * joint.getOrientation()
return {
'time': (frame - playbackOptions(minTime=True, query=True)) / frameRate,
'pos': self._roundPos(pos),
'rot': self._roundQuat(rot),
'scl': [1,1,1]
}
def _roundPos(self, pos):
return map(lambda x: round(x, FLOAT_PRECISION), [pos.x, pos.y, pos.z])
def _roundQuat(self, rot):
return map(lambda x: round(x, FLOAT_PRECISION), [rot.x, rot.y, rot.z, rot.w])
def _exportSkins(self):
for mesh in self._allMeshes():
print("exporting skins for mesh: " + mesh.name())
skins = filter(lambda skin: mesh in skin.getOutputGeometry(), ls(type='skinCluster'))
if len(skins) > 0:
print("mesh has " + str(len(skins)) + " skins")
skin = skins[0]
joints = skin.influenceObjects()
for weights in skin.getWeights(mesh.vtx):
numWeights = 0
for i in range(0, len(weights)):
if weights[i] > 0:
self.skinWeights.append(weights[i])
self.skinIndices.append(self._indexOfJoint(joints[i].name()))
numWeights += 1
if numWeights > self.options["influencesPerVertex"]:
raise Exception("More than " + str(self.options["influencesPerVertex"]) + " influences on a vertex in " + mesh.name() + ".")
for i in range(0, self.options["influencesPerVertex"] - numWeights):
self.skinWeights.append(0)
self.skinIndices.append(0)
else:
print("mesh has no skins, appending 0")
for i in range(0, len(mesh.getPoints()) * self.options["influencesPerVertex"]):
self.skinWeights.append(0)
self.skinIndices.append(0)
class NullAnimCurve(object):
def getValue(self, index):
return 0.0
class ThreeJsTranslator(MPxFileTranslator):
def __init__(self):
MPxFileTranslator.__init__(self)
def haveWriteMethod(self):
return True
def filter(self):
return '*.js'
def defaultExtension(self):
return 'js'
def writer(self, fileObject, optionString, accessMode):
path = fileObject.fullName()
writer = ThreeJsWriter()
writer.write(path, optionString, accessMode)
def translatorCreator():
return asMPxPtr(ThreeJsTranslator())
def initializePlugin(mobject):
mplugin = MFnPlugin(mobject)
try:
mplugin.registerFileTranslator(kPluginTranslatorTypeName, None, translatorCreator, kOptionScript, kDefaultOptionsString)
except:
sys.stderr.write('Failed to register translator: %s' % kPluginTranslatorTypeName)
raise
def uninitializePlugin(mobject):
mplugin = MFnPlugin(mobject)
try:
mplugin.deregisterFileTranslator(kPluginTranslatorTypeName)
except:
sys.stderr.write('Failed to deregister translator: %s' % kPluginTranslatorTypeName)
raise
class FramesPerSecond(object):
MAYA_VALUES = {
'game': 15,
'film': 24,
'pal': 25,
'ntsc': 30,
'show': 48,
'palf': 50,
'ntscf': 60
}
def __init__(self, fpsString):
self.fpsString = fpsString
def value(self):
if self.fpsString in FramesPerSecond.MAYA_VALUES:
return FramesPerSecond.MAYA_VALUES[self.fpsString]
else:
return int(filter(lambda c: c.isdigit(), self.fpsString))
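# Hedged sketch (editor addition, not part of the upstream exporter):
# FramesPerSecond maps Maya's symbolic time units to a numeric rate, and
# falls back to extracting the digits from strings such as '48fps'.
#
#     FramesPerSecond('ntsc').value()   # -> 30
#     FramesPerSecond('48fps').value()  # -> 48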
|
mit
|
epyatopal/geocoder-1
|
geocoder/here_reverse.py
|
3
|
1251
|
#!/usr/bin/python
# coding: utf8
from __future__ import absolute_import
from geocoder.base import Base
from geocoder.keys import app_id, app_code
from geocoder.location import Location
from geocoder.here import Here
class HereReverse(Here, Base):
"""
HERE Geocoding REST API
=======================
    Send a request to the reverse geocode endpoint to retrieve the
    addresses closest to a given pair of coordinates.
API Reference
-------------
https://developer.here.com/rest-apis/documentation/geocoder
"""
provider = 'here'
method = 'reverse'
def __init__(self, location, **kwargs):
self.url = 'http://reverse.geocoder.cit.api.here.com/6.2/reversegeocode.json'
self.location = str(Location(location))
self.params = {
'prox': self.location,
'app_id': kwargs.get('app_id', app_id),
'app_code': kwargs.get('app_code', app_code),
'mode': 'retrieveAddresses',
'gen': 8,
}
self._initialize(**kwargs)
@property
def ok(self):
return bool(self.address)
if __name__ == '__main__':
g = HereReverse([45.4049053, -75.7077965])
g.debug()
|
mit
|
etherkit/OpenBeacon2
|
macos/venv/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py
|
21
|
37133
|
"""Fallback pure Python implementation of msgpack"""
from datetime import datetime as _DateTime
import sys
import struct
PY2 = sys.version_info[0] == 2
if PY2:
int_types = (int, long)
def dict_iteritems(d):
return d.iteritems()
else:
int_types = int
unicode = str
xrange = range
def dict_iteritems(d):
return d.items()
if sys.version_info < (3, 5):
# Ugly hack...
RecursionError = RuntimeError
def _is_recursionerror(e):
return (
len(e.args) == 1
and isinstance(e.args[0], str)
and e.args[0].startswith("maximum recursion depth exceeded")
)
else:
def _is_recursionerror(e):
return True
if hasattr(sys, "pypy_version_info"):
    # BytesIO is slow on PyPy; StringIO is faster. However, PyPy's own
    # StringBuilder is fastest.
from __pypy__ import newlist_hint
try:
from __pypy__.builders import BytesBuilder as StringBuilder
except ImportError:
from __pypy__.builders import StringBuilder
USING_STRINGBUILDER = True
class StringIO(object):
def __init__(self, s=b""):
if s:
self.builder = StringBuilder(len(s))
self.builder.append(s)
else:
self.builder = StringBuilder()
def write(self, s):
if isinstance(s, memoryview):
s = s.tobytes()
elif isinstance(s, bytearray):
s = bytes(s)
self.builder.append(s)
def getvalue(self):
return self.builder.build()
else:
USING_STRINGBUILDER = False
from io import BytesIO as StringIO
newlist_hint = lambda size: []
from .exceptions import BufferFull, OutOfData, ExtraData, FormatError, StackError
from .ext import ExtType, Timestamp
EX_SKIP = 0
EX_CONSTRUCT = 1
EX_READ_ARRAY_HEADER = 2
EX_READ_MAP_HEADER = 3
TYPE_IMMEDIATE = 0
TYPE_ARRAY = 1
TYPE_MAP = 2
TYPE_RAW = 3
TYPE_BIN = 4
TYPE_EXT = 5
DEFAULT_RECURSE_LIMIT = 511
def _check_type_strict(obj, t, type=type, tuple=tuple):
if type(t) is tuple:
return type(obj) in t
else:
return type(obj) is t
def _get_data_from_buffer(obj):
view = memoryview(obj)
if view.itemsize != 1:
raise ValueError("cannot unpack from multi-byte object")
return view
def unpackb(packed, **kwargs):
"""
Unpack an object from `packed`.
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``ValueError`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
See :class:`Unpacker` for options.
"""
unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
unpacker.feed(packed)
try:
ret = unpacker._unpack()
except OutOfData:
raise ValueError("Unpack failed: incomplete input")
except RecursionError as e:
if _is_recursionerror(e):
raise StackError
raise
if unpacker._got_extradata():
raise ExtraData(ret, unpacker._get_extradata())
return ret
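# Hedged round-trip sketch (editor addition): Packer is defined later in
# this module, so the lines below are illustrative rather than executable
# at this point in the file.
#
#     buf = Packer().pack({"compact": True, "schema": 0})
#     assert unpackb(buf) == {"compact": True, "schema": 0}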
if sys.version_info < (2, 7, 6):
def _unpack_from(f, b, o=0):
"""Explicit type cast for legacy struct.unpack_from"""
return struct.unpack_from(f, bytes(b), o)
else:
_unpack_from = struct.unpack_from
class Unpacker(object):
"""Streaming unpacker.
Arguments:
:param file_like:
File-like object having `.read(n)` method.
If specified, unpacker reads serialized data from it and :meth:`feed()` is not usable.
:param int read_size:
Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)
:param bool use_list:
If true, unpack msgpack array to Python list.
Otherwise, unpack to Python tuple. (default: True)
:param bool raw:
If true, unpack msgpack raw to Python bytes.
Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).
:param int timestamp:
Control how timestamp type is unpacked:
0 - Timestamp
1 - float (Seconds from the EPOCH)
2 - int (Nanoseconds from the EPOCH)
3 - datetime.datetime (UTC). Python 2 is not supported.
:param bool strict_map_key:
If true (default), only str or bytes are accepted for map (dict) keys.
:param callable object_hook:
When specified, it should be callable.
Unpacker calls it with a dict argument after unpacking msgpack map.
(See also simplejson)
:param callable object_pairs_hook:
When specified, it should be callable.
Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
(See also simplejson)
:param str unicode_errors:
The error handler for decoding unicode. (default: 'strict')
This option should be used only when you have msgpack data which
contains invalid UTF-8 string.
:param int max_buffer_size:
        Limits size of data waiting to be unpacked. 0 means 2**31-1.
The default value is 100*1024*1024 (100MiB).
Raises `BufferFull` exception when it is insufficient.
You should set this parameter when unpacking data from untrusted source.
:param int max_str_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of str. (default: max_buffer_size)
:param int max_bin_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of bin. (default: max_buffer_size)
:param int max_array_len:
Limits max length of array.
(default: max_buffer_size)
:param int max_map_len:
Limits max length of map.
(default: max_buffer_size//2)
:param int max_ext_len:
Deprecated, use *max_buffer_size* instead.
Limits max size of ext type. (default: max_buffer_size)
Example of streaming deserialize from file-like object::
unpacker = Unpacker(file_like)
for o in unpacker:
process(o)
Example of streaming deserialize from socket::
        unpacker = Unpacker(max_buffer_size=max_buffer_size)
while True:
buf = sock.recv(1024**2)
if not buf:
break
unpacker.feed(buf)
for o in unpacker:
process(o)
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``OutOfData`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
"""
def __init__(
self,
file_like=None,
read_size=0,
use_list=True,
raw=False,
timestamp=0,
strict_map_key=True,
object_hook=None,
object_pairs_hook=None,
list_hook=None,
unicode_errors=None,
max_buffer_size=100 * 1024 * 1024,
ext_hook=ExtType,
max_str_len=-1,
max_bin_len=-1,
max_array_len=-1,
max_map_len=-1,
max_ext_len=-1,
):
if unicode_errors is None:
unicode_errors = "strict"
if file_like is None:
self._feeding = True
else:
if not callable(file_like.read):
raise TypeError("`file_like.read` must be callable")
self.file_like = file_like
self._feeding = False
#: array of bytes fed.
self._buffer = bytearray()
        #: Current read position within the buffer.
self._buff_i = 0
# When Unpacker is used as an iterable, between the calls to next(),
# the buffer is not "consumed" completely, for efficiency sake.
# Instead, it is done sloppily. To make sure we raise BufferFull at
# the correct moments, we have to keep track of how sloppy we were.
# Furthermore, when the buffer is incomplete (that is: in the case
# we raise an OutOfData) we need to rollback the buffer to the correct
# state, which _buf_checkpoint records.
self._buf_checkpoint = 0
if not max_buffer_size:
max_buffer_size = 2 ** 31 - 1
if max_str_len == -1:
max_str_len = max_buffer_size
if max_bin_len == -1:
max_bin_len = max_buffer_size
if max_array_len == -1:
max_array_len = max_buffer_size
if max_map_len == -1:
max_map_len = max_buffer_size // 2
if max_ext_len == -1:
max_ext_len = max_buffer_size
self._max_buffer_size = max_buffer_size
if read_size > self._max_buffer_size:
raise ValueError("read_size must be smaller than max_buffer_size")
self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
self._raw = bool(raw)
self._strict_map_key = bool(strict_map_key)
self._unicode_errors = unicode_errors
self._use_list = use_list
if not (0 <= timestamp <= 3):
raise ValueError("timestamp must be 0..3")
self._timestamp = timestamp
self._list_hook = list_hook
self._object_hook = object_hook
self._object_pairs_hook = object_pairs_hook
self._ext_hook = ext_hook
self._max_str_len = max_str_len
self._max_bin_len = max_bin_len
self._max_array_len = max_array_len
self._max_map_len = max_map_len
self._max_ext_len = max_ext_len
self._stream_offset = 0
if list_hook is not None and not callable(list_hook):
raise TypeError("`list_hook` is not callable")
if object_hook is not None and not callable(object_hook):
raise TypeError("`object_hook` is not callable")
if object_pairs_hook is not None and not callable(object_pairs_hook):
raise TypeError("`object_pairs_hook` is not callable")
if object_hook is not None and object_pairs_hook is not None:
raise TypeError(
"object_pairs_hook and object_hook are mutually " "exclusive"
)
if not callable(ext_hook):
raise TypeError("`ext_hook` is not callable")
def feed(self, next_bytes):
assert self._feeding
view = _get_data_from_buffer(next_bytes)
if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
raise BufferFull
# Strip buffer before checkpoint before reading file.
if self._buf_checkpoint > 0:
del self._buffer[: self._buf_checkpoint]
self._buff_i -= self._buf_checkpoint
self._buf_checkpoint = 0
# Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
self._buffer.extend(view)
def _consume(self):
""" Gets rid of the used parts of the buffer. """
self._stream_offset += self._buff_i - self._buf_checkpoint
self._buf_checkpoint = self._buff_i
def _got_extradata(self):
return self._buff_i < len(self._buffer)
def _get_extradata(self):
return self._buffer[self._buff_i :]
def read_bytes(self, n):
ret = self._read(n)
self._consume()
return ret
def _read(self, n):
# (int) -> bytearray
self._reserve(n)
i = self._buff_i
self._buff_i = i + n
return self._buffer[i : i + n]
def _reserve(self, n):
remain_bytes = len(self._buffer) - self._buff_i - n
# Fast path: buffer has n bytes already
if remain_bytes >= 0:
return
if self._feeding:
self._buff_i = self._buf_checkpoint
raise OutOfData
# Strip buffer before checkpoint before reading file.
if self._buf_checkpoint > 0:
del self._buffer[: self._buf_checkpoint]
self._buff_i -= self._buf_checkpoint
self._buf_checkpoint = 0
# Read from file
remain_bytes = -remain_bytes
while remain_bytes > 0:
to_read_bytes = max(self._read_size, remain_bytes)
read_data = self.file_like.read(to_read_bytes)
if not read_data:
break
assert isinstance(read_data, bytes)
self._buffer += read_data
remain_bytes -= len(read_data)
if len(self._buffer) < n + self._buff_i:
self._buff_i = 0 # rollback
raise OutOfData
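    # _read_header dispatches on the first byte per the msgpack spec:
    # fix-int / fix-str / fix-array / fix-map values are packed into the
    # high bits of b, while 0xC0-0xDF select the nil/bool/bin/ext/float/
    # int/str/array/map families whose lengths follow in big-endian form.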
def _read_header(self, execute=EX_CONSTRUCT):
typ = TYPE_IMMEDIATE
n = 0
obj = None
self._reserve(1)
b = self._buffer[self._buff_i]
self._buff_i += 1
if b & 0b10000000 == 0:
obj = b
elif b & 0b11100000 == 0b11100000:
obj = -1 - (b ^ 0xFF)
elif b & 0b11100000 == 0b10100000:
n = b & 0b00011111
typ = TYPE_RAW
if n > self._max_str_len:
raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len)
obj = self._read(n)
elif b & 0b11110000 == 0b10010000:
n = b & 0b00001111
typ = TYPE_ARRAY
if n > self._max_array_len:
raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len)
elif b & 0b11110000 == 0b10000000:
n = b & 0b00001111
typ = TYPE_MAP
if n > self._max_map_len:
raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len)
elif b == 0xC0:
obj = None
elif b == 0xC2:
obj = False
elif b == 0xC3:
obj = True
elif b == 0xC4:
typ = TYPE_BIN
self._reserve(1)
n = self._buffer[self._buff_i]
self._buff_i += 1
if n > self._max_bin_len:
raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len))
obj = self._read(n)
elif b == 0xC5:
typ = TYPE_BIN
self._reserve(2)
n = _unpack_from(">H", self._buffer, self._buff_i)[0]
self._buff_i += 2
if n > self._max_bin_len:
raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len))
obj = self._read(n)
elif b == 0xC6:
typ = TYPE_BIN
self._reserve(4)
n = _unpack_from(">I", self._buffer, self._buff_i)[0]
self._buff_i += 4
if n > self._max_bin_len:
raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len))
obj = self._read(n)
elif b == 0xC7: # ext 8
typ = TYPE_EXT
self._reserve(2)
L, n = _unpack_from("Bb", self._buffer, self._buff_i)
self._buff_i += 2
if L > self._max_ext_len:
raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len))
obj = self._read(L)
elif b == 0xC8: # ext 16
typ = TYPE_EXT
self._reserve(3)
L, n = _unpack_from(">Hb", self._buffer, self._buff_i)
self._buff_i += 3
if L > self._max_ext_len:
raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len))
obj = self._read(L)
elif b == 0xC9: # ext 32
typ = TYPE_EXT
self._reserve(5)
L, n = _unpack_from(">Ib", self._buffer, self._buff_i)
self._buff_i += 5
if L > self._max_ext_len:
raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len))
obj = self._read(L)
elif b == 0xCA:
self._reserve(4)
obj = _unpack_from(">f", self._buffer, self._buff_i)[0]
self._buff_i += 4
elif b == 0xCB:
self._reserve(8)
obj = _unpack_from(">d", self._buffer, self._buff_i)[0]
self._buff_i += 8
elif b == 0xCC:
self._reserve(1)
obj = self._buffer[self._buff_i]
self._buff_i += 1
elif b == 0xCD:
self._reserve(2)
obj = _unpack_from(">H", self._buffer, self._buff_i)[0]
self._buff_i += 2
elif b == 0xCE:
self._reserve(4)
obj = _unpack_from(">I", self._buffer, self._buff_i)[0]
self._buff_i += 4
elif b == 0xCF:
self._reserve(8)
obj = _unpack_from(">Q", self._buffer, self._buff_i)[0]
self._buff_i += 8
elif b == 0xD0:
self._reserve(1)
obj = _unpack_from("b", self._buffer, self._buff_i)[0]
self._buff_i += 1
elif b == 0xD1:
self._reserve(2)
obj = _unpack_from(">h", self._buffer, self._buff_i)[0]
self._buff_i += 2
elif b == 0xD2:
self._reserve(4)
obj = _unpack_from(">i", self._buffer, self._buff_i)[0]
self._buff_i += 4
elif b == 0xD3:
self._reserve(8)
obj = _unpack_from(">q", self._buffer, self._buff_i)[0]
self._buff_i += 8
elif b == 0xD4: # fixext 1
typ = TYPE_EXT
if self._max_ext_len < 1:
raise ValueError("%s exceeds max_ext_len(%s)" % (1, self._max_ext_len))
self._reserve(2)
n, obj = _unpack_from("b1s", self._buffer, self._buff_i)
self._buff_i += 2
elif b == 0xD5: # fixext 2
typ = TYPE_EXT
if self._max_ext_len < 2:
raise ValueError("%s exceeds max_ext_len(%s)" % (2, self._max_ext_len))
self._reserve(3)
n, obj = _unpack_from("b2s", self._buffer, self._buff_i)
self._buff_i += 3
elif b == 0xD6: # fixext 4
typ = TYPE_EXT
if self._max_ext_len < 4:
raise ValueError("%s exceeds max_ext_len(%s)" % (4, self._max_ext_len))
self._reserve(5)
n, obj = _unpack_from("b4s", self._buffer, self._buff_i)
self._buff_i += 5
elif b == 0xD7: # fixext 8
typ = TYPE_EXT
if self._max_ext_len < 8:
raise ValueError("%s exceeds max_ext_len(%s)" % (8, self._max_ext_len))
self._reserve(9)
n, obj = _unpack_from("b8s", self._buffer, self._buff_i)
self._buff_i += 9
elif b == 0xD8: # fixext 16
typ = TYPE_EXT
if self._max_ext_len < 16:
raise ValueError("%s exceeds max_ext_len(%s)" % (16, self._max_ext_len))
self._reserve(17)
n, obj = _unpack_from("b16s", self._buffer, self._buff_i)
self._buff_i += 17
elif b == 0xD9:
typ = TYPE_RAW
self._reserve(1)
n = self._buffer[self._buff_i]
self._buff_i += 1
if n > self._max_str_len:
raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len)
obj = self._read(n)
elif b == 0xDA:
typ = TYPE_RAW
self._reserve(2)
(n,) = _unpack_from(">H", self._buffer, self._buff_i)
self._buff_i += 2
if n > self._max_str_len:
raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len)
obj = self._read(n)
elif b == 0xDB:
typ = TYPE_RAW
self._reserve(4)
(n,) = _unpack_from(">I", self._buffer, self._buff_i)
self._buff_i += 4
if n > self._max_str_len:
raise ValueError("%s exceeds max_str_len(%s)", n, self._max_str_len)
obj = self._read(n)
elif b == 0xDC:
typ = TYPE_ARRAY
self._reserve(2)
(n,) = _unpack_from(">H", self._buffer, self._buff_i)
self._buff_i += 2
if n > self._max_array_len:
raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len)
elif b == 0xDD:
typ = TYPE_ARRAY
self._reserve(4)
(n,) = _unpack_from(">I", self._buffer, self._buff_i)
self._buff_i += 4
if n > self._max_array_len:
raise ValueError("%s exceeds max_array_len(%s)", n, self._max_array_len)
elif b == 0xDE:
self._reserve(2)
(n,) = _unpack_from(">H", self._buffer, self._buff_i)
self._buff_i += 2
if n > self._max_map_len:
raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len)
typ = TYPE_MAP
elif b == 0xDF:
self._reserve(4)
(n,) = _unpack_from(">I", self._buffer, self._buff_i)
self._buff_i += 4
if n > self._max_map_len:
raise ValueError("%s exceeds max_map_len(%s)", n, self._max_map_len)
typ = TYPE_MAP
else:
raise FormatError("Unknown header: 0x%x" % b)
return typ, n, obj
def _unpack(self, execute=EX_CONSTRUCT):
typ, n, obj = self._read_header(execute)
if execute == EX_READ_ARRAY_HEADER:
if typ != TYPE_ARRAY:
raise ValueError("Expected array")
return n
if execute == EX_READ_MAP_HEADER:
if typ != TYPE_MAP:
raise ValueError("Expected map")
return n
# TODO should we eliminate the recursion?
if typ == TYPE_ARRAY:
if execute == EX_SKIP:
for i in xrange(n):
# TODO check whether we need to call `list_hook`
self._unpack(EX_SKIP)
return
ret = newlist_hint(n)
for i in xrange(n):
ret.append(self._unpack(EX_CONSTRUCT))
if self._list_hook is not None:
ret = self._list_hook(ret)
# TODO is the interaction between `list_hook` and `use_list` ok?
return ret if self._use_list else tuple(ret)
if typ == TYPE_MAP:
if execute == EX_SKIP:
for i in xrange(n):
# TODO check whether we need to call hooks
self._unpack(EX_SKIP)
self._unpack(EX_SKIP)
return
if self._object_pairs_hook is not None:
ret = self._object_pairs_hook(
(self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT))
for _ in xrange(n)
)
else:
ret = {}
for _ in xrange(n):
key = self._unpack(EX_CONSTRUCT)
if self._strict_map_key and type(key) not in (unicode, bytes):
raise ValueError(
"%s is not allowed for map key" % str(type(key))
)
if not PY2 and type(key) is str:
key = sys.intern(key)
ret[key] = self._unpack(EX_CONSTRUCT)
if self._object_hook is not None:
ret = self._object_hook(ret)
return ret
if execute == EX_SKIP:
return
if typ == TYPE_RAW:
if self._raw:
obj = bytes(obj)
else:
obj = obj.decode("utf_8", self._unicode_errors)
return obj
if typ == TYPE_BIN:
return bytes(obj)
if typ == TYPE_EXT:
if n == -1: # timestamp
ts = Timestamp.from_bytes(bytes(obj))
if self._timestamp == 1:
return ts.to_unix()
elif self._timestamp == 2:
return ts.to_unix_nano()
elif self._timestamp == 3:
return ts.to_datetime()
else:
return ts
else:
return self._ext_hook(n, bytes(obj))
assert typ == TYPE_IMMEDIATE
return obj
def __iter__(self):
return self
def __next__(self):
try:
ret = self._unpack(EX_CONSTRUCT)
self._consume()
return ret
except OutOfData:
self._consume()
raise StopIteration
except RecursionError:
raise StackError
next = __next__
def skip(self):
self._unpack(EX_SKIP)
self._consume()
def unpack(self):
try:
ret = self._unpack(EX_CONSTRUCT)
except RecursionError:
raise StackError
self._consume()
return ret
def read_array_header(self):
ret = self._unpack(EX_READ_ARRAY_HEADER)
self._consume()
return ret
def read_map_header(self):
ret = self._unpack(EX_READ_MAP_HEADER)
self._consume()
return ret
def tell(self):
return self._stream_offset
class Packer(object):
"""
MessagePack Packer
Usage:
packer = Packer()
astream.write(packer.pack(a))
astream.write(packer.pack(b))
Packer's constructor has some keyword arguments:
:param callable default:
Convert user type to builtin type that Packer supports.
See also simplejson's document.
:param bool use_single_float:
Use single precision float type for float. (default: False)
:param bool autoreset:
Reset buffer after each pack and return its content as `bytes`. (default: True).
        If set to false, use `bytes()` to get content and `.reset()` to clear buffer.
:param bool use_bin_type:
Use bin type introduced in msgpack spec 2.0 for bytes.
It also enables str8 type for unicode. (default: True)
:param bool strict_types:
If set to true, types will be checked to be exact. Derived classes
from serializable types will not be serialized and will be
treated as unsupported type and forwarded to default.
Additionally tuples will not be serialized as lists.
This is useful when trying to implement accurate serialization
for python types.
:param bool datetime:
If set to true, datetime with tzinfo is packed into Timestamp type.
Note that the tzinfo is stripped in the timestamp.
You can get UTC datetime with `timestamp=3` option of the Unpacker.
(Python 2 is not supported).
:param str unicode_errors:
The error handler for encoding unicode. (default: 'strict')
DO NOT USE THIS!! This option is kept for very specific usage.
"""
def __init__(
self,
default=None,
use_single_float=False,
autoreset=True,
use_bin_type=True,
strict_types=False,
datetime=False,
unicode_errors=None,
):
self._strict_types = strict_types
self._use_float = use_single_float
self._autoreset = autoreset
self._use_bin_type = use_bin_type
self._buffer = StringIO()
if PY2 and datetime:
raise ValueError("datetime is not supported in Python 2")
self._datetime = bool(datetime)
self._unicode_errors = unicode_errors or "strict"
if default is not None:
if not callable(default):
raise TypeError("default must be callable")
self._default = default
def _pack(
self,
obj,
nest_limit=DEFAULT_RECURSE_LIMIT,
check=isinstance,
check_type_strict=_check_type_strict,
):
default_used = False
if self._strict_types:
check = check_type_strict
list_types = list
else:
list_types = (list, tuple)
while True:
if nest_limit < 0:
raise ValueError("recursion limit exceeded")
if obj is None:
return self._buffer.write(b"\xc0")
if check(obj, bool):
if obj:
return self._buffer.write(b"\xc3")
return self._buffer.write(b"\xc2")
if check(obj, int_types):
if 0 <= obj < 0x80:
return self._buffer.write(struct.pack("B", obj))
if -0x20 <= obj < 0:
return self._buffer.write(struct.pack("b", obj))
if 0x80 <= obj <= 0xFF:
return self._buffer.write(struct.pack("BB", 0xCC, obj))
if -0x80 <= obj < 0:
return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
if 0xFF < obj <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xCD, obj))
if -0x8000 <= obj < -0x80:
return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
if 0xFFFF < obj <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xCE, obj))
if -0x80000000 <= obj < -0x8000:
return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
if -0x8000000000000000 <= obj < -0x80000000:
return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
if not default_used and self._default is not None:
obj = self._default(obj)
default_used = True
continue
raise OverflowError("Integer value out of range")
if check(obj, (bytes, bytearray)):
n = len(obj)
if n >= 2 ** 32:
raise ValueError("%s is too large" % type(obj).__name__)
self._pack_bin_header(n)
return self._buffer.write(obj)
if check(obj, unicode):
obj = obj.encode("utf-8", self._unicode_errors)
n = len(obj)
if n >= 2 ** 32:
raise ValueError("String is too large")
self._pack_raw_header(n)
return self._buffer.write(obj)
if check(obj, memoryview):
n = len(obj) * obj.itemsize
if n >= 2 ** 32:
raise ValueError("Memoryview is too large")
self._pack_bin_header(n)
return self._buffer.write(obj)
if check(obj, float):
if self._use_float:
return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
if check(obj, (ExtType, Timestamp)):
if check(obj, Timestamp):
code = -1
data = obj.to_bytes()
else:
code = obj.code
data = obj.data
assert isinstance(code, int)
assert isinstance(data, bytes)
L = len(data)
if L == 1:
self._buffer.write(b"\xd4")
elif L == 2:
self._buffer.write(b"\xd5")
elif L == 4:
self._buffer.write(b"\xd6")
elif L == 8:
self._buffer.write(b"\xd7")
elif L == 16:
self._buffer.write(b"\xd8")
elif L <= 0xFF:
self._buffer.write(struct.pack(">BB", 0xC7, L))
elif L <= 0xFFFF:
self._buffer.write(struct.pack(">BH", 0xC8, L))
else:
self._buffer.write(struct.pack(">BI", 0xC9, L))
self._buffer.write(struct.pack("b", code))
self._buffer.write(data)
return
if check(obj, list_types):
n = len(obj)
self._pack_array_header(n)
for i in xrange(n):
self._pack(obj[i], nest_limit - 1)
return
if check(obj, dict):
return self._pack_map_pairs(
len(obj), dict_iteritems(obj), nest_limit - 1
)
if self._datetime and check(obj, _DateTime):
obj = Timestamp.from_datetime(obj)
default_used = True
continue
if not default_used and self._default is not None:
obj = self._default(obj)
default_used = True
continue
raise TypeError("Cannot serialize %r" % (obj,))
def pack(self, obj):
try:
self._pack(obj)
except:
self._buffer = StringIO() # force reset
raise
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = StringIO()
return ret
def pack_map_pairs(self, pairs):
self._pack_map_pairs(len(pairs), pairs)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = StringIO()
return ret
def pack_array_header(self, n):
if n >= 2 ** 32:
raise ValueError
self._pack_array_header(n)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = StringIO()
return ret
def pack_map_header(self, n):
if n >= 2 ** 32:
raise ValueError
self._pack_map_header(n)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = StringIO()
return ret
def pack_ext_type(self, typecode, data):
if not isinstance(typecode, int):
raise TypeError("typecode must have int type.")
if not 0 <= typecode <= 127:
raise ValueError("typecode should be 0-127")
if not isinstance(data, bytes):
raise TypeError("data must have bytes type")
L = len(data)
if L > 0xFFFFFFFF:
raise ValueError("Too large data")
if L == 1:
self._buffer.write(b"\xd4")
elif L == 2:
self._buffer.write(b"\xd5")
elif L == 4:
self._buffer.write(b"\xd6")
elif L == 8:
self._buffer.write(b"\xd7")
elif L == 16:
self._buffer.write(b"\xd8")
elif L <= 0xFF:
self._buffer.write(b"\xc7" + struct.pack("B", L))
elif L <= 0xFFFF:
self._buffer.write(b"\xc8" + struct.pack(">H", L))
else:
self._buffer.write(b"\xc9" + struct.pack(">I", L))
self._buffer.write(struct.pack("B", typecode))
self._buffer.write(data)
def _pack_array_header(self, n):
if n <= 0x0F:
return self._buffer.write(struct.pack("B", 0x90 + n))
if n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xDC, n))
if n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xDD, n))
raise ValueError("Array is too large")
def _pack_map_header(self, n):
if n <= 0x0F:
return self._buffer.write(struct.pack("B", 0x80 + n))
if n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xDE, n))
if n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xDF, n))
raise ValueError("Dict is too large")
def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
self._pack_map_header(n)
for (k, v) in pairs:
self._pack(k, nest_limit - 1)
self._pack(v, nest_limit - 1)
def _pack_raw_header(self, n):
if n <= 0x1F:
self._buffer.write(struct.pack("B", 0xA0 + n))
elif self._use_bin_type and n <= 0xFF:
self._buffer.write(struct.pack(">BB", 0xD9, n))
elif n <= 0xFFFF:
self._buffer.write(struct.pack(">BH", 0xDA, n))
elif n <= 0xFFFFFFFF:
self._buffer.write(struct.pack(">BI", 0xDB, n))
else:
raise ValueError("Raw is too large")
def _pack_bin_header(self, n):
if not self._use_bin_type:
return self._pack_raw_header(n)
elif n <= 0xFF:
return self._buffer.write(struct.pack(">BB", 0xC4, n))
elif n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xC5, n))
elif n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xC6, n))
else:
raise ValueError("Bin is too large")
def bytes(self):
"""Return internal buffer contents as bytes object"""
return self._buffer.getvalue()
def reset(self):
"""Reset internal buffer.
This method is useful only when autoreset=False.
"""
self._buffer = StringIO()
def getbuffer(self):
"""Return view of internal buffer."""
if USING_STRINGBUILDER or PY2:
return memoryview(self.bytes())
else:
return self._buffer.getbuffer()
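# Minimal round-trip sketch (illustrative, not part of the original module):
# with autoreset=True (the default), each pack() call returns the packed
# bytes and resets the internal buffer.
def _packer_roundtrip_example():
    packer = Packer()
    payload = packer.pack({"answer": 42})
    unpacker = Unpacker()
    unpacker.feed(payload)
    return unpacker.unpack()  # -> {"answer": 42}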
|
gpl-3.0
|
edisongustavo/Syrupy
|
setup.py
|
1
|
2177
|
#! /usr/bin/env python
############################################################################
## setup.py
##
## Copyright 2008 Jeet Sukumaran.
##
## This program is free software; you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation; either version 3 of the License, or
## (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
##
## You should have received a copy of the GNU General Public License along
## with this program. If not, see <http://www.gnu.org/licenses/>.
##
############################################################################
"""
Package setup and installation.
"""
from setuptools import setup
from setuptools import find_packages
import sys
import os
import subprocess
version = "1.4.0"
setup(name='Syrupy',
version=version,
author='Jeet Sukumaran',
author_email='jeetsukumaran@gmail.com',
description="""\
System resource usage profiler""",
license='GPL 3+',
packages=['syrupy'],
package_dir={},
package_data={},
#scripts=['scripts/syrupy.py', 'scripts/syrupy-peak.py'],
include_package_data=True,
zip_safe=True,
install_requires=[
# -*- Extra requirements: -*-
],
entry_points={
'console_scripts': [
'syrupy = syrupy.__main__:main'
]
},
long_description="""\
System resource usage profiler: logs the CPU and
memory usage of a process at pre-specified intervals.""",
classifiers = [
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Library or General Public License (GPL)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
],
keywords='profiler memory',
)
|
gpl-3.0
|
evensonbryan/yocto-autobuilder
|
lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-linux-x86_64.egg/sqlalchemy/util/_collections.py
|
6
|
24911
|
# util/_collections.py
# Copyright (C) 2005-2012 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Collection classes and helpers."""
import sys
import itertools
import weakref
import operator
from .compat import threading
EMPTY_SET = frozenset()
class KeyedTuple(tuple):
"""``tuple`` subclass that adds labeled names.
E.g.::
>>> k = KeyedTuple([1, 2, 3], labels=["one", "two", "three"])
>>> k.one
1
>>> k.two
2
Result rows returned by :class:`.Query` that contain multiple
ORM entities and/or column expressions make use of this
class to return rows.
The :class:`.KeyedTuple` exhibits similar behavior to the
``collections.namedtuple()`` construct provided in the Python
standard library, however it is architected very differently.
Unlike ``collections.namedtuple()``, :class:`.KeyedTuple`
does not rely on creation of custom subtypes in order to represent
a new series of keys; instead each :class:`.KeyedTuple` instance
receives its list of keys in place. The subtype approach
of ``collections.namedtuple()`` introduces significant complexity
and performance overhead, which is not necessary for the
:class:`.Query` object's use case.
.. versionchanged:: 0.8
Compatibility methods with ``collections.namedtuple()`` have been
added including :attr:`.KeyedTuple._fields` and
:meth:`.KeyedTuple._asdict`.
.. seealso::
:ref:`ormtutorial_querying`
"""
def __new__(cls, vals, labels=None):
t = tuple.__new__(cls, vals)
t._labels = []
if labels:
t.__dict__.update(zip(labels, vals))
t._labels = labels
return t
def keys(self):
"""Return a list of string key names for this :class:`.KeyedTuple`.
.. seealso::
:attr:`.KeyedTuple._fields`
"""
return [l for l in self._labels if l is not None]
@property
def _fields(self):
"""Return a tuple of string key names for this :class:`.KeyedTuple`.
This method provides compatibility with ``collections.namedtuple()``.
.. versionadded:: 0.8
.. seealso::
:meth:`.KeyedTuple.keys`
"""
return tuple(self.keys())
def _asdict(self):
"""Return the contents of this :class:`.KeyedTuple` as a dictionary.
This method provides compatibility with ``collections.namedtuple()``,
with the exception that the dictionary returned is **not** ordered.
.. versionadded:: 0.8
"""
return dict((key, self.__dict__[key]) for key in self.keys())
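# Brief illustration (not in the original source) of the namedtuple-style
# accessors defined above:
def _keyed_tuple_example():
    k = KeyedTuple([1, 2], labels=["one", "two"])
    assert k.one == 1 and k._fields == ("one", "two")
    return k._asdict()  # -> {'one': 1, 'two': 2}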
class ImmutableContainer(object):
def _immutable(self, *arg, **kw):
raise TypeError("%s object is immutable" % self.__class__.__name__)
__delitem__ = __setitem__ = __setattr__ = _immutable
class immutabledict(ImmutableContainer, dict):
clear = pop = popitem = setdefault = \
update = ImmutableContainer._immutable
def __new__(cls, *args):
new = dict.__new__(cls)
dict.__init__(new, *args)
return new
def __init__(self, *args):
pass
def __reduce__(self):
return immutabledict, (dict(self), )
def union(self, d):
if not self:
return immutabledict(d)
else:
d2 = immutabledict(self)
dict.update(d2, d)
return d2
def __repr__(self):
return "immutabledict(%s)" % dict.__repr__(self)
class Properties(object):
"""Provide a __getattr__/__setattr__ interface over a dict."""
def __init__(self, data):
self.__dict__['_data'] = data
def __len__(self):
return len(self._data)
def __iter__(self):
return self._data.itervalues()
def __add__(self, other):
return list(self) + list(other)
def __setitem__(self, key, object):
self._data[key] = object
def __getitem__(self, key):
return self._data[key]
def __delitem__(self, key):
del self._data[key]
def __setattr__(self, key, object):
self._data[key] = object
def __getstate__(self):
return {'_data': self.__dict__['_data']}
def __setstate__(self, state):
self.__dict__['_data'] = state['_data']
def __getattr__(self, key):
try:
return self._data[key]
except KeyError:
raise AttributeError(key)
def __contains__(self, key):
return key in self._data
def as_immutable(self):
"""Return an immutable proxy for this :class:`.Properties`."""
return ImmutableProperties(self._data)
def update(self, value):
self._data.update(value)
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def keys(self):
return self._data.keys()
def values(self):
return self._data.values()
def items(self):
return self._data.items()
def has_key(self, key):
return key in self._data
def clear(self):
self._data.clear()
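# Illustrative sketch: Properties proxies attribute access onto the backing
# dict, so p.key and p["key"] are interchangeable.
def _properties_example():
    p = Properties({})
    p.engine = "sqlite"  # stored via __setattr__ into _data
    assert p["engine"] == "sqlite" and "engine" in p
    return p.get("missing", "default")  # -> 'default'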
class OrderedProperties(Properties):
"""Provide a __getattr__/__setattr__ interface with an OrderedDict
as backing store."""
def __init__(self):
Properties.__init__(self, OrderedDict())
class ImmutableProperties(ImmutableContainer, Properties):
"""Provide immutable dict/object attribute to an underlying dictionary."""
class OrderedDict(dict):
"""A dict that returns keys/values/items in the order they were added."""
def __init__(self, ____sequence=None, **kwargs):
self._list = []
if ____sequence is None:
if kwargs:
self.update(**kwargs)
else:
self.update(____sequence, **kwargs)
def clear(self):
self._list = []
dict.clear(self)
def copy(self):
return self.__copy__()
def __copy__(self):
return OrderedDict(self)
def sort(self, *arg, **kw):
self._list.sort(*arg, **kw)
def update(self, ____sequence=None, **kwargs):
if ____sequence is not None:
if hasattr(____sequence, 'keys'):
for key in ____sequence.keys():
self.__setitem__(key, ____sequence[key])
else:
for key, value in ____sequence:
self[key] = value
if kwargs:
self.update(kwargs)
def setdefault(self, key, value):
if key not in self:
self.__setitem__(key, value)
return value
else:
return self.__getitem__(key)
def __iter__(self):
return iter(self._list)
def values(self):
return [self[key] for key in self._list]
def itervalues(self):
return iter([self[key] for key in self._list])
def keys(self):
return list(self._list)
def iterkeys(self):
return iter(self.keys())
def items(self):
return [(key, self[key]) for key in self.keys()]
def iteritems(self):
return iter(self.items())
def __setitem__(self, key, object):
if key not in self:
try:
self._list.append(key)
except AttributeError:
# work around Python pickle loads() with
# dict subclass (seems to ignore __setstate__?)
self._list = [key]
dict.__setitem__(self, key, object)
def __delitem__(self, key):
dict.__delitem__(self, key)
self._list.remove(key)
def pop(self, key, *default):
present = key in self
value = dict.pop(self, key, *default)
if present:
self._list.remove(key)
return value
def popitem(self):
item = dict.popitem(self)
self._list.remove(item[0])
return item
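# Short demonstration (illustrative) that insertion order is preserved via
# the parallel _list maintained by __setitem__/__delitem__.
def _ordered_dict_example():
    od = OrderedDict()
    od["b"] = 1
    od["a"] = 2
    return od.keys()  # -> ['b', 'a'], in insertion order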
class OrderedSet(set):
def __init__(self, d=None):
set.__init__(self)
self._list = []
if d is not None:
self.update(d)
def add(self, element):
if element not in self:
self._list.append(element)
set.add(self, element)
def remove(self, element):
set.remove(self, element)
self._list.remove(element)
def insert(self, pos, element):
if element not in self:
self._list.insert(pos, element)
set.add(self, element)
def discard(self, element):
if element in self:
self._list.remove(element)
set.remove(self, element)
def clear(self):
set.clear(self)
self._list = []
def __getitem__(self, key):
return self._list[key]
def __iter__(self):
return iter(self._list)
def __add__(self, other):
return self.union(other)
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self._list)
__str__ = __repr__
def update(self, iterable):
for e in iterable:
if e not in self:
self._list.append(e)
set.add(self, e)
return self
__ior__ = update
def union(self, other):
result = self.__class__(self)
result.update(other)
return result
__or__ = union
def intersection(self, other):
other = set(other)
return self.__class__(a for a in self if a in other)
__and__ = intersection
def symmetric_difference(self, other):
other = set(other)
result = self.__class__(a for a in self if a not in other)
result.update(a for a in other if a not in self)
return result
__xor__ = symmetric_difference
def difference(self, other):
other = set(other)
return self.__class__(a for a in self if a not in other)
__sub__ = difference
def intersection_update(self, other):
other = set(other)
set.intersection_update(self, other)
self._list = [a for a in self._list if a in other]
return self
__iand__ = intersection_update
def symmetric_difference_update(self, other):
set.symmetric_difference_update(self, other)
self._list = [a for a in self._list if a in self]
self._list += [a for a in other._list if a in self]
return self
__ixor__ = symmetric_difference_update
def difference_update(self, other):
set.difference_update(self, other)
self._list = [a for a in self._list if a in self]
return self
__isub__ = difference_update
class IdentitySet(object):
"""A set that considers only object id() for uniqueness.
This strategy has edge cases for builtin types: it's possible to have
two 'foo' strings in one of these sets, for example. Use sparingly.
"""
_working_set = set
def __init__(self, iterable=None):
self._members = dict()
if iterable:
for o in iterable:
self.add(o)
def add(self, value):
self._members[id(value)] = value
def __contains__(self, value):
return id(value) in self._members
def remove(self, value):
del self._members[id(value)]
def discard(self, value):
try:
self.remove(value)
except KeyError:
pass
def pop(self):
try:
pair = self._members.popitem()
return pair[1]
except KeyError:
raise KeyError('pop from an empty set')
def clear(self):
self._members.clear()
def __cmp__(self, other):
raise TypeError('cannot compare sets using cmp()')
def __eq__(self, other):
if isinstance(other, IdentitySet):
return self._members == other._members
else:
return False
def __ne__(self, other):
if isinstance(other, IdentitySet):
return self._members != other._members
else:
return True
def issubset(self, iterable):
other = type(self)(iterable)
if len(self) > len(other):
return False
for m in itertools.ifilterfalse(other._members.__contains__,
self._members.iterkeys()):
return False
return True
def __le__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.issubset(other)
def __lt__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return len(self) < len(other) and self.issubset(other)
def issuperset(self, iterable):
other = type(self)(iterable)
if len(self) < len(other):
return False
for m in itertools.ifilterfalse(self._members.__contains__,
other._members.iterkeys()):
return False
return True
def __ge__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.issuperset(other)
def __gt__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return len(self) > len(other) and self.issuperset(other)
def union(self, iterable):
result = type(self)()
# testlib.pragma exempt:__hash__
members = self._member_id_tuples()
other = _iter_id(iterable)
result._members.update(self._working_set(members).union(other))
return result
def __or__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.union(other)
def update(self, iterable):
self._members = self.union(iterable)._members
def __ior__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
self.update(other)
return self
def difference(self, iterable):
result = type(self)()
# testlib.pragma exempt:__hash__
members = self._member_id_tuples()
other = _iter_id(iterable)
result._members.update(self._working_set(members).difference(other))
return result
def __sub__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.difference(other)
def difference_update(self, iterable):
self._members = self.difference(iterable)._members
def __isub__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
self.difference_update(other)
return self
def intersection(self, iterable):
result = type(self)()
# testlib.pragma exempt:__hash__
members = self._member_id_tuples()
other = _iter_id(iterable)
result._members.update(self._working_set(members).intersection(other))
return result
def __and__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.intersection(other)
def intersection_update(self, iterable):
self._members = self.intersection(iterable)._members
def __iand__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
self.intersection_update(other)
return self
def symmetric_difference(self, iterable):
result = type(self)()
# testlib.pragma exempt:__hash__
members = self._member_id_tuples()
other = _iter_id(iterable)
result._members.update(
self._working_set(members).symmetric_difference(other))
return result
def _member_id_tuples(self):
return ((id(v), v) for v in self._members.itervalues())
def __xor__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
return self.symmetric_difference(other)
def symmetric_difference_update(self, iterable):
self._members = self.symmetric_difference(iterable)._members
def __ixor__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
self.symmetric_difference_update(other)
return self
def copy(self):
return type(self)(self._members.itervalues())
__copy__ = copy
def __len__(self):
return len(self._members)
def __iter__(self):
return self._members.itervalues()
def __hash__(self):
raise TypeError('set objects are unhashable')
def __repr__(self):
return '%s(%r)' % (type(self).__name__, self._members.values())
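# Sketch (illustrative) of the id()-based semantics described in the class
# docstring: equal but distinct objects are both retained.
def _identity_set_example():
    a, b = [1], [1]  # equal, but different objects
    ids = IdentitySet([a, b])
    assert len(ids) == 2 and a in ids
    ids.discard([1])  # a fresh [1] has a different id(); this is a no-op
    return len(ids)  # -> 2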
class WeakSequence(object):
def __init__(self, elements):
self._storage = weakref.WeakValueDictionary(
(idx, element) for idx, element in enumerate(elements)
)
def __iter__(self):
return self._storage.itervalues()
def __getitem__(self, index):
try:
return self._storage[index]
except KeyError:
raise IndexError("Index %s out of range" % index)
class OrderedIdentitySet(IdentitySet):
class _working_set(OrderedSet):
# a testing pragma: exempt the OIDS working set from the test suite's
# "never call the user's __hash__" assertions. this is a big hammer,
# but it's safe here: IDS operates on (id, instance) tuples in the
# working set.
__sa_hash_exempt__ = True
def __init__(self, iterable=None):
IdentitySet.__init__(self)
self._members = OrderedDict()
if iterable:
for o in iterable:
self.add(o)
if sys.version_info >= (2, 5):
class PopulateDict(dict):
"""A dict which populates missing values via a creation function.
Note the creation function takes a key, unlike
collections.defaultdict.
"""
def __init__(self, creator):
self.creator = creator
def __missing__(self, key):
self[key] = val = self.creator(key)
return val
else:
class PopulateDict(dict):
"""A dict which populates missing values via a creation function."""
def __init__(self, creator):
self.creator = creator
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
self[key] = value = self.creator(key)
return value
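# Tiny illustration (not in the original source): missing keys are created
# from the key itself, unlike collections.defaultdict.
def _populate_dict_example():
    squares = PopulateDict(lambda k: k * k)
    return squares[4]  # -> 16, created on first access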
# define collections that are capable of storing
# ColumnElement objects as hashable keys/elements.
column_set = set
column_dict = dict
ordered_column_set = OrderedSet
populate_column_dict = PopulateDict
def unique_list(seq, hashfunc=None):
seen = {}
if not hashfunc:
return [x for x in seq
if x not in seen
and not seen.__setitem__(x, True)]
else:
return [x for x in seq
if hashfunc(x) not in seen
and not seen.__setitem__(hashfunc(x), True)]
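# Illustrative example: duplicates are dropped while first-seen order is
# kept, optionally keyed by a hash function.
def _unique_list_example():
    assert unique_list([3, 1, 3, 2, 1]) == [3, 1, 2]
    return unique_list(["a", "A"], hashfunc=str.lower)  # -> ['a']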
class UniqueAppender(object):
"""Appends items to a collection ensuring uniqueness.
Additional appends() of the same object are ignored. Membership is
determined by identity (``is a``) not equality (``==``).
"""
def __init__(self, data, via=None):
self.data = data
self._unique = {}
if via:
self._data_appender = getattr(data, via)
elif hasattr(data, 'append'):
self._data_appender = data.append
elif hasattr(data, 'add'):
self._data_appender = data.add
def append(self, item):
id_ = id(item)
if id_ not in self._unique:
self._data_appender(item)
self._unique[id_] = True
def __iter__(self):
return iter(self.data)
def to_list(x, default=None):
if x is None:
return default
if not isinstance(x, (list, tuple)):
return [x]
else:
return x
def to_set(x):
if x is None:
return set()
if not isinstance(x, set):
return set(to_list(x))
else:
return x
def to_column_set(x):
if x is None:
return column_set()
if not isinstance(x, column_set):
return column_set(to_list(x))
else:
return x
def update_copy(d, _new=None, **kw):
"""Copy the given dict and update with the given values."""
d = d.copy()
if _new:
d.update(_new)
d.update(**kw)
return d
def flatten_iterator(x):
"""Given an iterator of which further sub-elements may also be
iterators, flatten the sub-elements into a single iterator.
"""
for elem in x:
if not isinstance(elem, basestring) and hasattr(elem, '__iter__'):
for y in flatten_iterator(elem):
yield y
else:
yield elem
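# Illustrative example of the recursive flattening above (strings are
# treated as atoms, not iterated character by character):
def _flatten_iterator_example():
    nested = [1, [2, [3, "ab"]], 4]
    return list(flatten_iterator(nested))  # -> [1, 2, 3, 'ab', 4]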
class LRUCache(dict):
"""Dictionary with 'squishy' removal of least
recently used items.
"""
def __init__(self, capacity=100, threshold=.5):
self.capacity = capacity
self.threshold = threshold
self._counter = 0
def _inc_counter(self):
self._counter += 1
return self._counter
def __getitem__(self, key):
item = dict.__getitem__(self, key)
item[2] = self._inc_counter()
return item[1]
def values(self):
return [i[1] for i in dict.values(self)]
def setdefault(self, key, value):
if key in self:
return self[key]
else:
self[key] = value
return value
def __setitem__(self, key, value):
item = dict.get(self, key)
if item is None:
item = [key, value, self._inc_counter()]
dict.__setitem__(self, key, item)
else:
item[1] = value
self._manage_size()
def _manage_size(self):
while len(self) > self.capacity + self.capacity * self.threshold:
by_counter = sorted(dict.values(self),
key=operator.itemgetter(2),
reverse=True)
for item in by_counter[self.capacity:]:
try:
del self[item[0]]
except KeyError:
# if we couldn't find a key, most
# likely some other thread broke in
# on us. loop around and try again
break
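# Usage sketch (illustrative): the dict is trimmed back to `capacity`
# entries once it grows past capacity * (1 + threshold), evicting the
# least recently used items first.
def _lru_cache_example():
    cache = LRUCache(capacity=2)
    for key, val in [("a", 1), ("b", 2), ("c", 3), ("d", 4)]:
        cache[key] = val  # the 4th insert exceeds 2 * 1.5 = 3 entries
    return sorted(cache)  # -> ['c', 'd']: only the newest two survive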
class ScopedRegistry(object):
"""A Registry that can store one or multiple instances of a single
class on the basis of a "scope" function.
The object implements ``__call__`` as the "getter", so by
calling ``myregistry()`` the contained object is returned
for the current scope.
:param createfunc:
a callable that returns a new object to be placed in the registry
:param scopefunc:
a callable that will return a key to store/retrieve an object.
"""
def __init__(self, createfunc, scopefunc):
"""Construct a new :class:`.ScopedRegistry`.
:param createfunc: A creation function that will generate
a new value for the current scope, if none is present.
:param scopefunc: A function that returns a hashable
token representing the current scope (such as, current
thread identifier).
"""
self.createfunc = createfunc
self.scopefunc = scopefunc
self.registry = {}
def __call__(self):
key = self.scopefunc()
try:
return self.registry[key]
except KeyError:
return self.registry.setdefault(key, self.createfunc())
def has(self):
"""Return True if an object is present in the current scope."""
return self.scopefunc() in self.registry
def set(self, obj):
"""Set the value forthe current scope."""
self.registry[self.scopefunc()] = obj
def clear(self):
"""Clear the current scope, if any."""
try:
del self.registry[self.scopefunc()]
except KeyError:
pass
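# Illustrative sketch: one lazily created object per scope key; a function
# returning the current thread id is the typical scopefunc for
# thread-local behaviour.
def _scoped_registry_example():
    reg = ScopedRegistry(createfunc=list, scopefunc=lambda: "scope-key")
    obj = reg()  # created on first call for this scope
    assert reg() is obj and reg.has()
    reg.clear()
    return reg.has()  # -> False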
class ThreadLocalRegistry(ScopedRegistry):
"""A :class:`.ScopedRegistry` that uses a ``threading.local()``
variable for storage.
"""
def __init__(self, createfunc):
self.createfunc = createfunc
self.registry = threading.local()
def __call__(self):
try:
return self.registry.value
except AttributeError:
val = self.registry.value = self.createfunc()
return val
def has(self):
return hasattr(self.registry, "value")
def set(self, obj):
self.registry.value = obj
def clear(self):
try:
del self.registry.value
except AttributeError:
pass
def _iter_id(iterable):
"""Generator: ((id(o), o) for o in iterable)."""
for item in iterable:
yield id(item), item
|
gpl-2.0
|
newsteinking/docker
|
tests/unit/test_unit_outdated.py
|
8
|
4870
|
import sys
import datetime
import os
from contextlib import contextmanager
import freezegun
import pytest
import pretend
import pip
from pip._vendor import lockfile
from pip.utils import outdated
@pytest.mark.parametrize(
['stored_time', 'newver', 'check', 'warn'],
[
('1970-01-01T10:00:00Z', '2.0', True, True),
('1970-01-01T10:00:00Z', '1.0', True, False),
('1970-01-06T10:00:00Z', '1.0', False, False),
('1970-01-06T10:00:00Z', '2.0', False, True),
]
)
def test_pip_version_check(monkeypatch, stored_time, newver, check, warn):
monkeypatch.setattr(pip, '__version__', '1.0')
resp = pretend.stub(
raise_for_status=pretend.call_recorder(lambda: None),
json=pretend.call_recorder(lambda: {"info": {"version": newver}}),
)
session = pretend.stub(
get=pretend.call_recorder(lambda u, headers=None: resp),
)
fake_state = pretend.stub(
state={"last_check": stored_time, 'pypi_version': '1.0'},
save=pretend.call_recorder(lambda v, t: None),
)
monkeypatch.setattr(
outdated, 'load_selfcheck_statefile', lambda: fake_state
)
monkeypatch.setattr(outdated.logger, 'warning',
pretend.call_recorder(lambda s: None))
monkeypatch.setattr(outdated.logger, 'debug',
pretend.call_recorder(lambda s, exc_info=None: None))
with freezegun.freeze_time(
"1970-01-09 10:00:00",
ignore=[
"six.moves",
"pip._vendor.six.moves",
"pip._vendor.requests.packages.urllib3.packages.six.moves",
]):
outdated.pip_version_check(session)
assert not outdated.logger.debug.calls
if check:
assert session.get.calls == [pretend.call(
"https://pypi.python.org/pypi/pip/json",
headers={"Accept": "application/json"}
)]
assert fake_state.save.calls == [
pretend.call(newver, datetime.datetime(1970, 1, 9, 10, 00, 00)),
]
if warn:
assert len(outdated.logger.warning.calls) == 1
else:
assert len(outdated.logger.warning.calls) == 0
else:
assert session.get.calls == []
assert fake_state.save.calls == []
def test_virtualenv_state(monkeypatch):
CONTENT = '{"last_check": "1970-01-02T11:00:00Z", "pypi_version": "1.0"}'
fake_file = pretend.stub(
read=pretend.call_recorder(lambda: CONTENT),
write=pretend.call_recorder(lambda s: None),
)
@pretend.call_recorder
@contextmanager
def fake_open(filename, mode='r'):
yield fake_file
monkeypatch.setattr(outdated, 'open', fake_open, raising=False)
monkeypatch.setattr(outdated, 'running_under_virtualenv',
pretend.call_recorder(lambda: True))
monkeypatch.setattr(sys, 'prefix', 'virtually_env')
state = outdated.load_selfcheck_statefile()
state.save('2.0', datetime.datetime.utcnow())
assert len(outdated.running_under_virtualenv.calls) == 1
expected_path = os.path.join('virtually_env', 'pip-selfcheck.json')
assert fake_open.calls == [
pretend.call(expected_path),
pretend.call(expected_path, 'w'),
]
# json.dumps will call this a number of times
assert len(fake_file.write.calls)
def test_global_state(monkeypatch):
CONTENT = '''{"pip_prefix": {"last_check": "1970-01-02T11:00:00Z",
"pypi_version": "1.0"}}'''
fake_file = pretend.stub(
read=pretend.call_recorder(lambda: CONTENT),
write=pretend.call_recorder(lambda s: None),
)
@pretend.call_recorder
@contextmanager
def fake_open(filename, mode='r'):
yield fake_file
monkeypatch.setattr(outdated, 'open', fake_open, raising=False)
@pretend.call_recorder
@contextmanager
def fake_lock(filename):
yield
monkeypatch.setattr(outdated, "check_path_owner", lambda p: True)
monkeypatch.setattr(lockfile, 'LockFile', fake_lock)
monkeypatch.setattr(os.path, "exists", lambda p: True)
monkeypatch.setattr(outdated, 'running_under_virtualenv',
pretend.call_recorder(lambda: False))
monkeypatch.setattr(outdated, 'USER_CACHE_DIR', 'cache_dir')
monkeypatch.setattr(sys, 'prefix', 'pip_prefix')
state = outdated.load_selfcheck_statefile()
state.save('2.0', datetime.datetime.utcnow())
assert len(outdated.running_under_virtualenv.calls) == 1
expected_path = os.path.join('cache_dir', 'selfcheck.json')
assert fake_lock.calls == [pretend.call(expected_path)]
assert fake_open.calls == [
pretend.call(expected_path),
pretend.call(expected_path),
pretend.call(expected_path, 'w'),
]
# json.dumps will call this a number of times
assert len(fake_file.write.calls)
|
mit
|
sharkykh/SickRage
|
lib/subliminal/providers/subscenter.py
|
25
|
9252
|
# -*- coding: utf-8 -*-
import bisect
from collections import defaultdict
import io
import json
import logging
import zipfile
from babelfish import Language
from guessit import guessit
from requests import Session
from . import ParserBeautifulSoup, Provider
from .. import __short_version__
from ..cache import SHOW_EXPIRATION_TIME, region
from ..exceptions import AuthenticationError, ConfigurationError, ProviderError
from ..subtitle import Subtitle, fix_line_ending, guess_matches
from ..utils import sanitize
from ..video import Episode, Movie
logger = logging.getLogger(__name__)
class SubsCenterSubtitle(Subtitle):
"""SubsCenter Subtitle."""
provider_name = 'subscenter'
def __init__(self, language, hearing_impaired, page_link, series, season, episode, title, subtitle_id, subtitle_key,
downloaded, releases):
super(SubsCenterSubtitle, self).__init__(language, hearing_impaired, page_link)
self.series = series
self.season = season
self.episode = episode
self.title = title
self.subtitle_id = subtitle_id
self.subtitle_key = subtitle_key
self.downloaded = downloaded
self.releases = releases
@property
def id(self):
return str(self.subtitle_id)
def get_matches(self, video):
matches = set()
# episode
if isinstance(video, Episode):
# series
if video.series and sanitize(self.series) == sanitize(video.series):
matches.add('series')
# season
if video.season and self.season == video.season:
matches.add('season')
# episode
if video.episode and self.episode == video.episode:
matches.add('episode')
# guess
for release in self.releases:
matches |= guess_matches(video, guessit(release, {'type': 'episode'}))
# movie
elif isinstance(video, Movie):
# guess
for release in self.releases:
matches |= guess_matches(video, guessit(release, {'type': 'movie'}))
# title
if video.title and sanitize(self.title) == sanitize(video.title):
matches.add('title')
return matches
class SubsCenterProvider(Provider):
"""SubsCenter Provider."""
languages = {Language.fromalpha2(l) for l in ['he']}
server_url = 'http://www.subscenter.co/he/'
def __init__(self, username=None, password=None):
if (username is not None and password is None) or (username is None and password is not None):
raise ConfigurationError('Username and password must be specified')
self.session = None
self.username = username
self.password = password
self.logged_in = False
def initialize(self):
self.session = Session()
self.session.headers['User-Agent'] = 'Subliminal/{}'.format(__short_version__)
# login
if self.username is not None and self.password is not None:
logger.debug('Logging in')
url = self.server_url + 'subscenter/accounts/login/'
# retrieve CSRF token
self.session.get(url)
csrf_token = self.session.cookies['csrftoken']
# actual login
data = {'username': self.username, 'password': self.password, 'csrfmiddlewaretoken': csrf_token}
r = self.session.post(url, data, allow_redirects=False, timeout=10)
if r.status_code != 302:
raise AuthenticationError(self.username)
logger.info('Logged in')
self.logged_in = True
def terminate(self):
# logout
if self.logged_in:
logger.info('Logging out')
r = self.session.get(self.server_url + 'subscenter/accounts/logout/', timeout=10)
r.raise_for_status()
logger.info('Logged out')
self.logged_in = False
self.session.close()
@region.cache_on_arguments(expiration_time=SHOW_EXPIRATION_TIME)
def _search_url_titles(self, title):
"""Search the URL titles by kind for the given `title`.
:param str title: title to search for.
:return: the URL titles by kind.
:rtype: collections.defaultdict
"""
# make the search
logger.info('Searching title name for %r', title)
r = self.session.get(self.server_url + 'subtitle/search/', params={'q': title}, timeout=10)
r.raise_for_status()
# check for redirections
if r.history and all([h.status_code == 302 for h in r.history]):
logger.debug('Redirected to the subtitles page')
links = [r.url]
else:
# get the suggestions (if needed)
soup = ParserBeautifulSoup(r.content, ['lxml', 'html.parser'])
links = [link.attrs['href'] for link in soup.select('#processes div.generalWindowTop a')]
logger.debug('Found %d suggestions', len(links))
url_titles = defaultdict(list)
for link in links:
parts = link.split('/')
url_titles[parts[-3]].append(parts[-2])
return url_titles
def query(self, title, season=None, episode=None):
# search for the url title
url_titles = self._search_url_titles(title)
# episode
if season and episode:
if 'series' not in url_titles:
logger.error('No URL title found for series %r', title)
return []
url_title = url_titles['series'][0]
logger.debug('Using series title %r', url_title)
url = self.server_url + 'cst/data/series/sb/{}/{}/{}/'.format(url_title, season, episode)
page_link = self.server_url + 'subtitle/series/{}/{}/{}/'.format(url_title, season, episode)
else:
if 'movie' not in url_titles:
logger.error('No URL title found for movie %r', title)
return []
url_title = url_titles['movie'][0]
logger.debug('Using movie title %r', url_title)
url = self.server_url + 'cst/data/movie/sb/{}/'.format(url_title)
page_link = self.server_url + 'subtitle/movie/{}/'.format(url_title)
# get the list of subtitles
logger.debug('Getting the list of subtitles')
r = self.session.get(url)
r.raise_for_status()
results = json.loads(r.text)
# loop over results
subtitles = {}
for language_code, language_data in results.items():
for quality_data in language_data.values():
for quality, subtitles_data in quality_data.items():
for subtitle_item in subtitles_data.values():
# read the item
language = Language.fromalpha2(language_code)
hearing_impaired = bool(subtitle_item['hearing_impaired'])
subtitle_id = subtitle_item['id']
subtitle_key = subtitle_item['key']
downloaded = subtitle_item['downloaded']
release = subtitle_item['subtitle_version']
# add the release and increment downloaded count if we already have the subtitle
if subtitle_id in subtitles:
logger.debug('Found additional release %r for subtitle %d', release, subtitle_id)
bisect.insort_left(subtitles[subtitle_id].releases, release) # deterministic order
subtitles[subtitle_id].downloaded += downloaded
continue
# otherwise create it
subtitle = SubsCenterSubtitle(language, hearing_impaired, page_link, title, season, episode,
title, subtitle_id, subtitle_key, downloaded, [release])
logger.debug('Found subtitle %r', subtitle)
subtitles[subtitle_id] = subtitle
return subtitles.values()
def list_subtitles(self, video, languages):
season = episode = None
title = video.title
if isinstance(video, Episode):
title = video.series
season = video.season
episode = video.episode
return [s for s in self.query(title, season, episode) if s.language in languages]
def download_subtitle(self, subtitle):
# download
url = self.server_url + 'subtitle/download/{}/{}/'.format(subtitle.language.alpha2, subtitle.subtitle_id)
params = {'v': subtitle.releases[0], 'key': subtitle.subtitle_key}
r = self.session.get(url, params=params, headers={'Referer': subtitle.page_link}, timeout=10)
r.raise_for_status()
# open the zip
with zipfile.ZipFile(io.BytesIO(r.content)) as zf:
# remove some filenames from the namelist
namelist = [n for n in zf.namelist() if not n.endswith('.txt')]
if len(namelist) > 1:
raise ProviderError('More than one file to unzip')
subtitle.content = fix_line_ending(zf.read(namelist[0]))
|
gpl-3.0
|
kvalo/ath10k
|
tools/perf/tests/attr.py
|
3174
|
9441
|
#! /usr/bin/python
import os
import sys
import glob
import optparse
import tempfile
import logging
import shutil
import ConfigParser
class Fail(Exception):
def __init__(self, test, msg):
self.msg = msg
self.test = test
def getMsg(self):
return '\'%s\' - %s' % (self.test.path, self.msg)
class Unsup(Exception):
def __init__(self, test):
self.test = test
def getMsg(self):
return '\'%s\'' % self.test.path
class Event(dict):
terms = [
'cpu',
'flags',
'type',
'size',
'config',
'sample_period',
'sample_type',
'read_format',
'disabled',
'inherit',
'pinned',
'exclusive',
'exclude_user',
'exclude_kernel',
'exclude_hv',
'exclude_idle',
'mmap',
'comm',
'freq',
'inherit_stat',
'enable_on_exec',
'task',
'watermark',
'precise_ip',
'mmap_data',
'sample_id_all',
'exclude_host',
'exclude_guest',
'exclude_callchain_kernel',
'exclude_callchain_user',
'wakeup_events',
'bp_type',
'config1',
'config2',
'branch_sample_type',
'sample_regs_user',
'sample_stack_user',
]
def add(self, data):
for key, val in data:
log.debug(" %s = %s" % (key, val))
self[key] = val
def __init__(self, name, data, base):
log.debug(" Event %s" % name);
self.name = name;
self.group = ''
self.add(base)
self.add(data)
def compare_data(self, a, b):
# Allow multiple values in assignment separated by '|'
a_list = a.split('|')
b_list = b.split('|')
for a_item in a_list:
for b_item in b_list:
if (a_item == b_item):
return True
elif (a_item == '*') or (b_item == '*'):
return True
return False
def equal(self, other):
for t in Event.terms:
log.debug(" [%s] %s %s" % (t, self[t], other[t]));
if not self.has_key(t) or not other.has_key(t):
return False
if not self.compare_data(self[t], other[t]):
return False
return True
def diff(self, other):
for t in Event.terms:
if not self.has_key(t) or not other.has_key(t):
continue
if not self.compare_data(self[t], other[t]):
log.warning("expected %s=%s, got %s" % (t, self[t], other[t]))
# Test file description needs to have following sections:
# [config]
# - just single instance in file
# - needs to specify:
# 'command' - perf command name
# 'args' - special command arguments
# 'ret' - expected command return value (0 by default)
#
# [eventX:base]
# - one or multiple instances in file
# - expected values assignments
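#
# An illustrative test file matching the description above (the section
# names and values here are hypothetical, for example only):
#
#   [config]
#   command = record
#   args    = kill >/dev/null 2>&1
#   ret     = 1
#
#   [event:base-record]
#   fd = 1
#   group_fd = -1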
class Test(object):
def __init__(self, path, options):
parser = ConfigParser.SafeConfigParser()
parser.read(path)
log.warning("running '%s'" % path)
self.path = path
self.test_dir = options.test_dir
self.perf = options.perf
self.command = parser.get('config', 'command')
self.args = parser.get('config', 'args')
try:
self.ret = parser.get('config', 'ret')
except:
self.ret = 0
self.expect = {}
self.result = {}
log.debug(" loading expected events");
self.load_events(path, self.expect)
def is_event(self, name):
if name.find("event") == -1:
return False
else:
return True
def load_events(self, path, events):
parser_event = ConfigParser.SafeConfigParser()
parser_event.read(path)
# The event record section header contains the 'event' word,
# optionally followed by ':', which allows the 'parent
# event' to be loaded first as a base
for section in filter(self.is_event, parser_event.sections()):
parser_items = parser_event.items(section);
base_items = {}
# Read parent event if there's any
if (':' in section):
base = section[section.index(':') + 1:]
parser_base = ConfigParser.SafeConfigParser()
parser_base.read(self.test_dir + '/' + base)
base_items = parser_base.items('event')
e = Event(section, parser_items, base_items)
events[section] = e
def run_cmd(self, tempdir):
cmd = "PERF_TEST_ATTR=%s %s %s -o %s/perf.data %s" % (tempdir,
self.perf, self.command, tempdir, self.args)
ret = os.WEXITSTATUS(os.system(cmd))
log.info(" '%s' ret %d " % (cmd, ret))
if ret != int(self.ret):
raise Unsup(self)
def compare(self, expect, result):
match = {}
log.debug(" compare");
# For each expected event find all matching
# events in result. Fail if there's not any.
for exp_name, exp_event in expect.items():
exp_list = []
log.debug(" matching [%s]" % exp_name)
for res_name, res_event in result.items():
log.debug(" to [%s]" % res_name)
if (exp_event.equal(res_event)):
exp_list.append(res_name)
log.debug(" ->OK")
else:
log.debug(" ->FAIL");
log.debug(" match: [%s] matches %s" % (exp_name, str(exp_list)))
# we did not find any matching event - fail
if (not exp_list):
exp_event.diff(res_event)
raise Fail(self, 'match failure');
match[exp_name] = exp_list
# For each defined group in the expected events
# check we match the same group in the result.
for exp_name, exp_event in expect.items():
group = exp_event.group
if (group == ''):
continue
for res_name in match[exp_name]:
res_group = result[res_name].group
if res_group not in match[group]:
raise Fail(self, 'group failure')
log.debug(" group: [%s] matches group leader %s" %
(exp_name, str(match[group])))
log.debug(" matched")
def resolve_groups(self, events):
for name, event in events.items():
group_fd = event['group_fd'];
if group_fd == '-1':
continue;
for iname, ievent in events.items():
if (ievent['fd'] == group_fd):
event.group = iname
log.debug('[%s] has group leader [%s]' % (name, iname))
break;
def run(self):
tempdir = tempfile.mkdtemp();
try:
# run the test script
self.run_cmd(tempdir);
# load events expectation for the test
log.debug(" loading result events");
for f in glob.glob(tempdir + '/event*'):
self.load_events(f, self.result);
# resolve group_fd to event names
self.resolve_groups(self.expect);
self.resolve_groups(self.result);
# do the expectation - results matching - both ways
self.compare(self.expect, self.result)
self.compare(self.result, self.expect)
finally:
# cleanup
shutil.rmtree(tempdir)
def run_tests(options):
for f in glob.glob(options.test_dir + '/' + options.test):
try:
Test(f, options).run()
except Unsup, obj:
log.warning("unsupp %s" % obj.getMsg())
def setup_log(verbose):
global log
level = logging.CRITICAL
if verbose == 1:
level = logging.WARNING
if verbose == 2:
level = logging.INFO
if verbose >= 3:
level = logging.DEBUG
log = logging.getLogger('test')
log.setLevel(level)
ch = logging.StreamHandler()
ch.setLevel(level)
formatter = logging.Formatter('%(message)s')
ch.setFormatter(formatter)
log.addHandler(ch)
USAGE = '''%s [OPTIONS]
-d dir # tests dir
-p path # perf binary
-t test # single test
-v # verbose level
''' % sys.argv[0]
def main():
parser = optparse.OptionParser(usage=USAGE)
parser.add_option("-t", "--test",
action="store", type="string", dest="test")
parser.add_option("-d", "--test-dir",
action="store", type="string", dest="test_dir")
parser.add_option("-p", "--perf",
action="store", type="string", dest="perf")
parser.add_option("-v", "--verbose",
action="count", dest="verbose")
options, args = parser.parse_args()
if args:
parser.error('FAILED wrong arguments %s' % ' '.join(args))
return -1
setup_log(options.verbose)
if not options.test_dir:
print 'FAILED no -d option specified'
sys.exit(-1)
if not options.test:
options.test = 'test*'
try:
run_tests(options)
except Fail, obj:
print "FAILED %s" % obj.getMsg();
sys.exit(-1)
sys.exit(0)
if __name__ == '__main__':
main()
|
gpl-2.0
|
CydarLtd/ansible
|
docs/bin/dump_keywords.py
|
33
|
2403
|
#!/usr/bin/env python
import optparse
import yaml
from jinja2 import Environment, FileSystemLoader
from ansible.playbook import Play
from ansible.playbook.block import Block
from ansible.playbook.role import Role
from ansible.playbook.task import Task
template_file = 'playbooks_keywords.rst.j2'
oblist = {}
clist = []
class_list = [ Play, Role, Block, Task ]
p = optparse.OptionParser(
version='%prog 1.0',
usage='usage: %prog [options]',
description='Generate module documentation from metadata',
)
p.add_option("-T", "--template-dir", action="store", dest="template_dir", default="../templates", help="directory containing Jinja2 templates")
p.add_option("-o", "--output-dir", action="store", dest="output_dir", default='/tmp/', help="Output directory for rst files")
p.add_option("-d", "--docs-source", action="store", dest="docs", default=None, help="Source for attribute docs")
(options, args) = p.parse_args()
for aclass in class_list:
aobj = aclass()
name = type(aobj).__name__
if options.docs:
with open(options.docs) as f:
docs = yaml.safe_load(f)
else:
docs = {}
# build ordered list to loop over and dict with attributes
clist.append(name)
oblist[name] = dict((x, aobj.__dict__['_attributes'][x]) for x in aobj.__dict__['_attributes'] if 'private' not in x or not x.private)
# pick up docs if they exist
for a in oblist[name]:
if a in docs:
oblist[name][a] = docs[a]
else:
oblist[name][a] = ' UNDOCUMENTED!! '
# loop is really with_ for users
if name == 'Task':
oblist[name]['with_<lookup_plugin>'] = 'with_ is how loops are defined, it can use any available lookup plugin to generate the item list'
# local_action is implicit with action
if 'action' in oblist[name]:
oblist[name]['local_action'] = 'Same as action but also implies ``delegate_to: localhost``'
# remove unusable (used to be private?)
for nouse in ('loop', 'loop_args'):
if nouse in oblist[name]:
del oblist[name][nouse]
env = Environment(loader=FileSystemLoader(options.template_dir), trim_blocks=True,)
template = env.get_template(template_file)
outputname = options.output_dir + template_file.replace('.j2','')
tempvars = { 'oblist': oblist, 'clist': clist }
with open( outputname, 'w') as f:
f.write(template.render(tempvars))
|
gpl-3.0
|