repo_name stringlengths 5 100 | path stringlengths 4 375 | copies stringclasses 991 values | size stringlengths 4 7 | content stringlengths 666 1M | license stringclasses 15 values |
|---|---|---|---|---|---|
tgoldenberg/react-native | JSCLegacyProfiler/smap.py | 375 | 14793 |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
"""
adapted from https://github.com/martine/python-sourcemap into a reusable module
"""
"""
Apache License
Version 2.0, January 2010
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
"""A module for parsing source maps, as output by the Closure and
CoffeeScript compilers and consumed by browsers. See
http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
"""
import collections
import json
import sys
import bisect
class entry(object):
def __init__(self, dst_line, dst_col, src, src_line, src_col):
self.dst_line = dst_line
self.dst_col = dst_col
self.src = src
self.src_line = src_line
self.src_col = src_col
def __lt__(self, other):
# sorted() and bisect need __lt__ on Python 3, where __cmp__ is ignored.
return (self.dst_line, self.dst_col) < (other.dst_line, other.dst_col)
def __cmp__(self, other):
if self.dst_line < other.dst_line:
return -1
if self.dst_line > other.dst_line:
return 1
if self.dst_col < other.dst_col:
return -1
if self.dst_col > other.dst_col:
return 1
return 0
SmapState = collections.namedtuple(
'SmapState', ['dst_line', 'dst_col',
'src', 'src_line', 'src_col',
'name'])
# Mapping of base64 letter -> integer value.
B64 = dict((c, i) for i, c in
enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
'0123456789+/'))
def _parse_vlq(segment):
"""Parse a string of VLQ-encoded data.
Returns:
a list of integers.
"""
values = []
cur, shift = 0, 0
for c in segment:
val = B64[c]
# Each character is 6 bits:
# 5 of value and the high bit is the continuation.
val, cont = val & 0b11111, val >> 5
cur += val << shift
shift += 5
if not cont:
# The low bit of the unpacked value is the sign.
cur, sign = cur >> 1, cur & 1
if sign:
cur = -cur
values.append(cur)
cur, shift = 0, 0
if cur or shift:
raise Exception('leftover cur/shift in vlq decode')
return values
def _parse_smap(file):
"""Given a file-like object, yield SmapState()s as they are read from it."""
smap = json.load(file)
sources = smap['sources']
names = smap['names']
mappings = smap['mappings']
lines = mappings.split(';')
dst_col, src_id, src_line, src_col, name_id = 0, 0, 0, 0, 0
for dst_line, line in enumerate(lines):
segments = line.split(',')
dst_col = 0
for segment in segments:
if not segment:
continue
parsed = _parse_vlq(segment)
dst_col += parsed[0]
src = None
name = None
if len(parsed) > 1:
src_id += parsed[1]
src = sources[src_id]
src_line += parsed[2]
src_col += parsed[3]
if len(parsed) > 4:
name_id += parsed[4]
name = names[name_id]
assert dst_line >= 0
assert dst_col >= 0
assert src_line >= 0
assert src_col >= 0
yield SmapState(dst_line, dst_col, src, src_line, src_col, name)
def find(entries, line, col):
test = entry(line, col, '', 0, 0)
index = bisect.bisect_right(entries, test)
if index == 0:
return None
return entries[index - 1]
def parse(file):
# Simple demo that shows files that most contribute to total size.
lookup = []
for state in _parse_smap(file):
lookup.append(entry(state.dst_line, state.dst_col, state.src, state.src_line, state.src_col))
sorted_lookup = list(sorted(lookup))
return sorted_lookup
| bsd-3-clause |
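The `_parse_vlq` helper in the file above implements the base64 VLQ scheme that source maps use for their `mappings` string. A self-contained sketch of the same decoding follows; the test strings are standard textbook examples, not taken from this repo:

```python
# Base64 VLQ decoding as used in source maps: each character carries
# 5 value bits plus a high continuation bit; the low bit of the
# assembled value is the sign.
B64 = {c: i for i, c in enumerate(
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/')}

def parse_vlq(segment):
    values = []
    cur, shift = 0, 0
    for c in segment:
        val = B64[c]
        val, cont = val & 0b11111, val >> 5   # split value bits / continuation
        cur += val << shift
        shift += 5
        if not cont:                           # last character of this number
            cur, sign = cur >> 1, cur & 1      # low bit is the sign
            values.append(-cur if sign else cur)
            cur, shift = 0, 0
    return values

print(parse_vlq('AAgBC'))  # [0, 0, 16, 1]
print(parse_vlq('D'))      # [-1]
```

Each decoded segment maps onto the deltas that `_parse_smap` accumulates: generated column, source index, source line, source column, and optionally a name index.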
matthiasmengel/sealevel | sealevel/get_ipcc_data.py | 1 | 3623 | # This file is part of SEALEVEL - a tool to estimate future sea-level rise
# constrained by past observations and long-term sea-level commitment
# Copyright (C) 2016 Matthias Mengel working at PIK Potsdam
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# LICENSE.txt for more details.
import os
import numpy as np
import dimarray as da
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
inputdatadir = os.path.join(project_dir, "data/input/")
######## IPCC mean sl contributions ########
# see Chapter 13, Fifth IPCC Report of WG1, Table 13.5
ipccdata = np.loadtxt(
inputdatadir +
"ipcc_ar5/slr_contributions_ch13.csv",
skiprows=1,
usecols=(
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15))
ipcc_columnames = [
"RCP3PD_med",
"RCP3PD_low",
"RCP3PD_high",
"RCP45_med",
"RCP45_low",
"RCP45_high",
"RCP60_med",
"RCP60_low",
"RCP60_high",
"RCP85_med",
"RCP85_low",
"RCP85_high"]
ipcc_rownames = [
"thermexp", "gic", "gis_smb", "ant_smb", "gis_sid", "ant_sid",
"LandWaterStorage", "mean_slr_2081_2100", "Greenland_Sum", "Antarctica_Sum", "Ice-sheet_rapid_dyn",
"rate_slr_2081_2100", "mean_slr_2046_2065",
"mean_slr_2100"]
def get_ipcc_range(rcp, contribution):
lowind = ipcc_columnames.index(rcp + "_low")
medind = ipcc_columnames.index(rcp + "_med")
highind = ipcc_columnames.index(rcp + "_high")
rowind = ipcc_rownames.index(contribution)
contrib = ipccdata[rowind, :]
return np.array([contrib[lowind], contrib[medind],
contrib[highind]]) * 1e3 # in mm
ipcc_contrib_estimates = {}
for contrib in ["thermexp", "gic", "gis_smb", "ant_smb", "gis_sid", "ant_sid"]:
ipcc_contrib_estimates[contrib] = {}
for rcp in ["RCP3PD", "RCP45", "RCP85"]:
ipcc_contrib_estimates[contrib][rcp] = get_ipcc_range(rcp, contrib)
ipcc_contrib_estimates["gis"] = {}
for rcp in ["RCP3PD", "RCP45", "RCP85"]:
ipcc_contrib_estimates["gis"][rcp] = (
ipcc_contrib_estimates["gis_sid"][rcp] + ipcc_contrib_estimates["gis_smb"][rcp])
######## IPCC global mean temperature estimate ########
## get IPCC AR5 global mean temperature pathways for each RCP scenario
## they can be downloaded from
## http://www.ipcc.ch/report/ar5/wg1/docs/ar5_wg1_annexI_all.zip
## define 1951-1980 to preindustrial (1850-1860)
## global temperature increase based on hadCrut v4.0 data
## see sealevel/get_gmt_data.py for calculation
preind_to_1951_1980 = 0.2640
tas_data = {}
for scen in ['rcp26','rcp45','rcp60','rcp85']:
try:
tas = np.loadtxt(os.path.join(inputdatadir,'ipcc_ar5',
'WGIAR5_FD_AnnexI_series_tas_modelmean_'+scen+'_world_annual.txt'))
except IOError:
raise IOError("IPCC global mean temperature data missing, "
"please run sealevel/download_input_data.py")
tasd = da.DimArray(tas[:,1],dims="time",axes=tas[:,0])
## create anomaly to hadcrutv4 1850-1860 mean
## which was used throughout the study as "relative to preindustrial"
tas_data[scen] = tasd - tasd[1951:1980].mean() + preind_to_1951_1980
| gpl-3.0 |
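The `get_ipcc_range` lookup in the file above is plain name-to-index addressing on the AR5 Table 13.5 array. A runnable sketch of the same pattern with placeholder numbers — the real `slr_contributions_ch13.csv` values are not part of this listing:

```python
# Mock stand-in for the AR5 Table 13.5 array; the figures below are
# illustrative placeholders, not the published IPCC values.
colnames = ["RCP3PD_med", "RCP3PD_low", "RCP3PD_high"]
rownames = ["thermexp", "gic"]
data = [[0.19, 0.14, 0.23],   # thermal expansion, metres
        [0.10, 0.04, 0.16]]   # glaciers and ice caps, metres

def get_range(rcp, contribution):
    """Return [low, med, high] for one contribution, converted to mm."""
    row = data[rownames.index(contribution)]
    return [row[colnames.index(rcp + "_" + level)] * 1e3
            for level in ("low", "med", "high")]

print(get_range("RCP3PD", "gic"))  # [40.0, 100.0, 160.0]
```

The column/row name lists act as a poor man's labelled axis, which is exactly what the module later gets from `dimarray` for the temperature pathways.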
vineodd/PIMSim | GEM5Simulation/gem5/src/arch/x86/isa/insts/simd128/integer/arithmetic/__init__.py | 91 | 2477 | # Copyright (c) 2007 The Hewlett-Packard Development Company
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Gabe Black
categories = ["addition",
"subtraction",
"multiplication",
"multiply_add",
"average",
"sum_of_absolute_differences"]
microcode = '''
# 128 bit multimedia and scientific instructions
'''
for category in categories:
exec("import %s as cat" % category)  # parenthesised call form, valid on Python 2 and 3
microcode += cat.microcode
| gpl-3.0 |
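The `exec`-based import loop in the file above is a Python 2 idiom for assembling microcode from category submodules. Under Python 3 the same effect is usually obtained with `importlib`, which returns the module object directly and avoids string evaluation; the module names below are stand-ins for the category submodules:

```python
import importlib

# Stand-ins for the category submodules ("addition", "subtraction", ...);
# stdlib modules are used here so the sketch runs anywhere.
categories = ["json", "math"]

microcode = ""
for category in categories:
    cat = importlib.import_module(category)  # replaces exec "import %s as cat"
    microcode += cat.__name__ + "\n"         # the real code appends cat.microcode

print(microcode)  # "json\nmath\n"
```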
Serag8/Bachelor | google_appengine/lib/django-1.5/django/contrib/auth/tests/__init__.py | 101 | 1191 | from django.contrib.auth.tests.custom_user import *
from django.contrib.auth.tests.auth_backends import *
from django.contrib.auth.tests.basic import *
from django.contrib.auth.tests.context_processors import *
from django.contrib.auth.tests.decorators import *
from django.contrib.auth.tests.forms import *
from django.contrib.auth.tests.remote_user import *
from django.contrib.auth.tests.management import *
from django.contrib.auth.tests.models import *
from django.contrib.auth.tests.handlers import *
from django.contrib.auth.tests.hashers import *
from django.contrib.auth.tests.signals import *
from django.contrib.auth.tests.tokens import *
from django.contrib.auth.tests.views import *
# The password for the fixture data users is 'password'
from django.dispatch import receiver
from django.test.signals import setting_changed
@receiver(setting_changed)
def user_model_swapped(**kwargs):
if kwargs['setting'] == 'AUTH_USER_MODEL':
from django.db.models.manager import ensure_default_manager
from django.contrib.auth.models import User
# Reset User manager
setattr(User, 'objects', User._default_manager)
ensure_default_manager(User)
| mit |
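The `@receiver(setting_changed)` hook in the file above follows Django's signal pattern: register callbacks against a signal object, then dispatch keyword arguments to every receiver. A minimal framework-free sketch of the same idea — the `Signal` class and names here are illustrative, not Django's actual API:

```python
# A signal is little more than a list of callbacks; connect() registers,
# send() dispatches keyword arguments to every receiver in order.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, fn):
        self._receivers.append(fn)
        return fn  # so it can double as a decorator

    def send(self, **kwargs):
        return [fn(**kwargs) for fn in self._receivers]

setting_changed = Signal()

@setting_changed.connect
def on_change(setting=None, **kwargs):
    return setting

print(setting_changed.send(setting='AUTH_USER_MODEL'))  # ['AUTH_USER_MODEL']
```

Django's real `Signal` adds sender filtering and weak references on top of this core dispatch loop.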
d40223223/40223223w0511 | static/Brython3.1.0-20150301-090019/Lib/numbers.py | 883 | 10398 | # Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Abstract Base Classes (ABCs) for numbers, according to PEP 3141.
TODO: Fill out more detailed documentation on the operators."""
from abc import ABCMeta, abstractmethod
__all__ = ["Number", "Complex", "Real", "Rational", "Integral"]
class Number(metaclass=ABCMeta):
"""All numbers inherit from this class.
If you just want to check if an argument x is a number, without
caring what kind, use isinstance(x, Number).
"""
__slots__ = ()
# Concrete numeric types must provide their own hash implementation
__hash__ = None
## Notes on Decimal
## ----------------
## Decimal has all of the methods specified by the Real abc, but it should
## not be registered as a Real because decimals do not interoperate with
## binary floats (i.e. Decimal('3.14') + 2.71828 is undefined). But,
## abstract reals are expected to interoperate (i.e. R1 + R2 should be
## expected to work if R1 and R2 are both Reals).
class Complex(Number):
"""Complex defines the operations that work on the builtin complex type.
In short, those are: a conversion to complex, .real, .imag, +, -,
*, /, abs(), .conjugate, ==, and !=.
If it is given heterogeneous arguments, and doesn't have special
knowledge about them, it should fall back to the builtin complex
type as described below.
"""
__slots__ = ()
@abstractmethod
def __complex__(self):
"""Return a builtin complex instance. Called for complex(self)."""
def __bool__(self):
"""True if self != 0. Called for bool(self)."""
return self != 0
@property
@abstractmethod
def real(self):
"""Retrieve the real component of this number.
This should subclass Real.
"""
raise NotImplementedError
@property
@abstractmethod
def imag(self):
"""Retrieve the imaginary component of this number.
This should subclass Real.
"""
raise NotImplementedError
@abstractmethod
def __add__(self, other):
"""self + other"""
raise NotImplementedError
@abstractmethod
def __radd__(self, other):
"""other + self"""
raise NotImplementedError
@abstractmethod
def __neg__(self):
"""-self"""
raise NotImplementedError
@abstractmethod
def __pos__(self):
"""+self"""
raise NotImplementedError
def __sub__(self, other):
"""self - other"""
return self + -other
def __rsub__(self, other):
"""other - self"""
return -self + other
@abstractmethod
def __mul__(self, other):
"""self * other"""
raise NotImplementedError
@abstractmethod
def __rmul__(self, other):
"""other * self"""
raise NotImplementedError
@abstractmethod
def __truediv__(self, other):
"""self / other: Should promote to float when necessary."""
raise NotImplementedError
@abstractmethod
def __rtruediv__(self, other):
"""other / self"""
raise NotImplementedError
@abstractmethod
def __pow__(self, exponent):
"""self**exponent; should promote to float or complex when necessary."""
raise NotImplementedError
@abstractmethod
def __rpow__(self, base):
"""base ** self"""
raise NotImplementedError
@abstractmethod
def __abs__(self):
"""Returns the Real distance from 0. Called for abs(self)."""
raise NotImplementedError
@abstractmethod
def conjugate(self):
"""(x+y*i).conjugate() returns (x-y*i)."""
raise NotImplementedError
@abstractmethod
def __eq__(self, other):
"""self == other"""
raise NotImplementedError
def __ne__(self, other):
"""self != other"""
# The default __ne__ doesn't negate __eq__ until 3.0.
return not (self == other)
Complex.register(complex)
class Real(Complex):
"""To Complex, Real adds the operations that work on real numbers.
In short, those are: a conversion to float, trunc(), divmod,
%, <, <=, >, and >=.
Real also provides defaults for the derived operations.
"""
__slots__ = ()
@abstractmethod
def __float__(self):
"""Any Real can be converted to a native float object.
Called for float(self)."""
raise NotImplementedError
@abstractmethod
def __trunc__(self):
"""trunc(self): Truncates self to an Integral.
Returns an Integral i such that:
* i>0 iff self>0;
* abs(i) <= abs(self);
* for any Integral j satisfying the first two conditions,
abs(i) >= abs(j) [i.e. i has "maximal" abs among those].
i.e. "truncate towards 0".
"""
raise NotImplementedError
@abstractmethod
def __floor__(self):
"""Finds the greatest Integral <= self."""
raise NotImplementedError
@abstractmethod
def __ceil__(self):
"""Finds the least Integral >= self."""
raise NotImplementedError
@abstractmethod
def __round__(self, ndigits=None):
"""Rounds self to ndigits decimal places, defaulting to 0.
If ndigits is omitted or None, returns an Integral, otherwise
returns a Real. Rounds half toward even.
"""
raise NotImplementedError
def __divmod__(self, other):
"""divmod(self, other): The pair (self // other, self % other).
Sometimes this can be computed faster than the pair of
operations.
"""
return (self // other, self % other)
def __rdivmod__(self, other):
"""divmod(other, self): The pair (self // other, self % other).
Sometimes this can be computed faster than the pair of
operations.
"""
return (other // self, other % self)
@abstractmethod
def __floordiv__(self, other):
"""self // other: The floor() of self/other."""
raise NotImplementedError
@abstractmethod
def __rfloordiv__(self, other):
"""other // self: The floor() of other/self."""
raise NotImplementedError
@abstractmethod
def __mod__(self, other):
"""self % other"""
raise NotImplementedError
@abstractmethod
def __rmod__(self, other):
"""other % self"""
raise NotImplementedError
@abstractmethod
def __lt__(self, other):
"""self < other
< on Reals defines a total ordering, except perhaps for NaN."""
raise NotImplementedError
@abstractmethod
def __le__(self, other):
"""self <= other"""
raise NotImplementedError
# Concrete implementations of Complex abstract methods.
def __complex__(self):
"""complex(self) == complex(float(self), 0)"""
return complex(float(self))
@property
def real(self):
"""Real numbers are their real component."""
return +self
@property
def imag(self):
"""Real numbers have no imaginary component."""
return 0
def conjugate(self):
"""Conjugate is a no-op for Reals."""
return +self
Real.register(float)
class Rational(Real):
""".numerator and .denominator should be in lowest terms."""
__slots__ = ()
@property
@abstractmethod
def numerator(self):
raise NotImplementedError
@property
@abstractmethod
def denominator(self):
raise NotImplementedError
# Concrete implementation of Real's conversion to float.
def __float__(self):
"""float(self) = self.numerator / self.denominator
It's important that this conversion use the integer's "true"
division rather than casting one side to float before dividing
so that ratios of huge integers convert without overflowing.
"""
return self.numerator / self.denominator
class Integral(Rational):
"""Integral adds a conversion to int and the bit-string operations."""
__slots__ = ()
@abstractmethod
def __int__(self):
"""int(self)"""
raise NotImplementedError
def __index__(self):
"""Called whenever an index is needed, such as in slicing"""
return int(self)
@abstractmethod
def __pow__(self, exponent, modulus=None):
"""self ** exponent % modulus, but maybe faster.
Accept the modulus argument if you want to support the
3-argument version of pow(). Raise a TypeError if exponent < 0
or any argument isn't Integral. Otherwise, just implement the
2-argument version described in Complex.
"""
raise NotImplementedError
@abstractmethod
def __lshift__(self, other):
"""self << other"""
raise NotImplementedError
@abstractmethod
def __rlshift__(self, other):
"""other << self"""
raise NotImplementedError
@abstractmethod
def __rshift__(self, other):
"""self >> other"""
raise NotImplementedError
@abstractmethod
def __rrshift__(self, other):
"""other >> self"""
raise NotImplementedError
@abstractmethod
def __and__(self, other):
"""self & other"""
raise NotImplementedError
@abstractmethod
def __rand__(self, other):
"""other & self"""
raise NotImplementedError
@abstractmethod
def __xor__(self, other):
"""self ^ other"""
raise NotImplementedError
@abstractmethod
def __rxor__(self, other):
"""other ^ self"""
raise NotImplementedError
@abstractmethod
def __or__(self, other):
"""self | other"""
raise NotImplementedError
@abstractmethod
def __ror__(self, other):
"""other | self"""
raise NotImplementedError
@abstractmethod
def __invert__(self):
"""~self"""
raise NotImplementedError
# Concrete implementations of Rational and Real abstract methods.
def __float__(self):
"""float(self) == float(int(self))"""
return float(int(self))
@property
def numerator(self):
"""Integers are their own numerators."""
return +self
@property
def denominator(self):
"""Integers have a denominator of 1."""
return 1
Integral.register(int)
| gpl-3.0 |
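The abstract tower defined in the file above is what ships as the stdlib `numbers` module, and the `register()` calls are what make `isinstance` checks cross concrete-type boundaries. A short demonstration of how the built-in and stdlib types slot into it:

```python
import numbers
from fractions import Fraction

# int registers as Integral, which sits under Rational, Real, and Complex.
assert isinstance(3, numbers.Integral)
assert isinstance(3, numbers.Rational)

# Fraction is a Rational; float registers only as Real (see the Decimal
# notes above for why some types deliberately stay out of a level).
assert isinstance(Fraction(1, 3), numbers.Rational)
assert isinstance(1.5, numbers.Real)
assert not isinstance(1.5, numbers.Rational)

# complex is a Complex but not a Real.
assert isinstance(1 + 2j, numbers.Complex)
assert not isinstance(1 + 2j, numbers.Real)

print("numeric tower checks passed")
```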
rickhurst/Django-non-rel-blog | django/contrib/gis/db/backends/spatialite/introspection.py | 401 | 2112 | from django.contrib.gis.gdal import OGRGeomType
from django.db.backends.sqlite3.introspection import DatabaseIntrospection, FlexibleFieldLookupDict
class GeoFlexibleFieldLookupDict(FlexibleFieldLookupDict):
"""
Subclass that updates the `base_data_types_reverse` dict
for geometry field types.
"""
base_data_types_reverse = FlexibleFieldLookupDict.base_data_types_reverse.copy()
base_data_types_reverse.update(
{'point' : 'GeometryField',
'linestring' : 'GeometryField',
'polygon' : 'GeometryField',
'multipoint' : 'GeometryField',
'multilinestring' : 'GeometryField',
'multipolygon' : 'GeometryField',
'geometrycollection' : 'GeometryField',
})
class SpatiaLiteIntrospection(DatabaseIntrospection):
data_types_reverse = GeoFlexibleFieldLookupDict()
def get_geometry_type(self, table_name, geo_col):
cursor = self.connection.cursor()
try:
# Querying the `geometry_columns` table to get additional metadata.
cursor.execute('SELECT "coord_dimension", "srid", "type" '
'FROM "geometry_columns" '
'WHERE "f_table_name"=%s AND "f_geometry_column"=%s',
(table_name, geo_col))
row = cursor.fetchone()
if not row:
raise Exception('Could not find a geometry column for "%s"."%s"' %
(table_name, geo_col))
# OGRGeomType does not require GDAL and makes it easy to convert
# from OGC geom type name to Django field.
field_type = OGRGeomType(row[2]).django
# Getting any GeometryField keyword arguments that are not the default.
dim = row[0]
srid = row[1]
field_params = {}
if srid != 4326:
field_params['srid'] = srid
if isinstance(dim, basestring) and 'Z' in dim:
field_params['dim'] = 3
finally:
cursor.close()
return field_type, field_params
| bsd-3-clause |
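The introspection in the file above boils down to a metadata query against SpatiaLite's `geometry_columns` table. The query shape can be sketched against a throwaway in-memory SQLite table — SpatiaLite itself is not needed to exercise it, and the table contents below are made up for illustration:

```python
import sqlite3

# Fake geometry_columns metadata in plain SQLite (SpatiaLite normally
# maintains this table itself).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE geometry_columns "
            "(f_table_name, f_geometry_column, coord_dimension, srid, type)")
cur.execute("INSERT INTO geometry_columns VALUES (?, ?, ?, ?, ?)",
            ("parks", "geom", 2, 4326, "POLYGON"))

# The same parameterised lookup get_geometry_type() performs.
cur.execute('SELECT coord_dimension, srid, type FROM geometry_columns '
            'WHERE f_table_name=? AND f_geometry_column=?', ("parks", "geom"))
dim, srid, geom_type = cur.fetchone()
print(geom_type, srid)  # POLYGON 4326
conn.close()
```

The real method then maps the OGC type name through `OGRGeomType` to a Django field class and keeps only the non-default `srid`/`dim` keyword arguments.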
vongochung/buiquocviet | django/test/html.py | 88 | 7324 | """
Comparing two html documents.
"""
import re
from HTMLParser import HTMLParseError
from django.utils.encoding import force_unicode
from django.utils.html_parser import HTMLParser
WHITESPACE = re.compile(r'\s+')
def normalize_whitespace(string):
return WHITESPACE.sub(' ', string)
class Element(object):
def __init__(self, name, attributes):
self.name = name
self.attributes = sorted(attributes)
self.children = []
def append(self, element):
if isinstance(element, basestring):
element = force_unicode(element)
element = normalize_whitespace(element)
if self.children:
if isinstance(self.children[-1], basestring):
self.children[-1] += element
self.children[-1] = normalize_whitespace(self.children[-1])
return
elif self.children:
# removing last children if it is only whitespace
# this can result in incorrect dom representations since
# whitespace between inline tags like <span> is significant
if isinstance(self.children[-1], basestring):
if self.children[-1].isspace():
self.children.pop()
if element:
self.children.append(element)
def finalize(self):
def rstrip_last_element(children):
if children:
if isinstance(children[-1], basestring):
children[-1] = children[-1].rstrip()
if not children[-1]:
children.pop()
children = rstrip_last_element(children)
return children
rstrip_last_element(self.children)
for i, child in enumerate(self.children):
if isinstance(child, basestring):
self.children[i] = child.strip()
elif hasattr(child, 'finalize'):
child.finalize()
def __eq__(self, element):
if not hasattr(element, 'name'):
return False
if hasattr(element, 'name') and self.name != element.name:
return False
if len(self.attributes) != len(element.attributes):
return False
if self.attributes != element.attributes:
            # an attribute without a value is the same as an attribute whose
            # value equals the attribute's name:
            # <input checked> == <input checked="checked">
for i in range(len(self.attributes)):
attr, value = self.attributes[i]
other_attr, other_value = element.attributes[i]
if value is None:
value = attr
if other_value is None:
other_value = other_attr
if attr != other_attr or value != other_value:
return False
if self.children != element.children:
return False
return True
def __ne__(self, element):
return not self.__eq__(element)
def _count(self, element, count=True):
if not isinstance(element, basestring):
if self == element:
return 1
i = 0
for child in self.children:
            # if child and element are both text content, fall back to a
            # simple "text" in "text" substring check
if isinstance(child, basestring):
if isinstance(element, basestring):
if count:
i += child.count(element)
elif element in child:
return 1
else:
i += child._count(element, count=count)
if not count and i:
return i
return i
def __contains__(self, element):
return self._count(element, count=False) > 0
def count(self, element):
return self._count(element, count=True)
def __getitem__(self, key):
return self.children[key]
def __unicode__(self):
output = u'<%s' % self.name
for key, value in self.attributes:
if value:
output += u' %s="%s"' % (key, value)
else:
output += u' %s' % key
if self.children:
output += u'>\n'
output += u''.join(unicode(c) for c in self.children)
output += u'\n</%s>' % self.name
else:
output += u' />'
return output
def __repr__(self):
return unicode(self)
class RootElement(Element):
def __init__(self):
super(RootElement, self).__init__(None, ())
def __unicode__(self):
return u''.join(unicode(c) for c in self.children)
class Parser(HTMLParser):
    SELF_CLOSING_TAGS = ('br', 'hr', 'input', 'img', 'meta', 'spacer',
'link', 'frame', 'base', 'col')
def __init__(self):
HTMLParser.__init__(self)
self.root = RootElement()
self.open_tags = []
self.element_positions = {}
def error(self, msg):
raise HTMLParseError(msg, self.getpos())
def format_position(self, position=None, element=None):
if not position and element:
position = self.element_positions[element]
if position is None:
position = self.getpos()
if hasattr(position, 'lineno'):
position = position.lineno, position.offset
return 'Line %d, Column %d' % position
@property
def current(self):
if self.open_tags:
return self.open_tags[-1]
else:
return self.root
def handle_startendtag(self, tag, attrs):
self.handle_starttag(tag, attrs)
if tag not in self.SELF_CLOSING_TAGS:
self.handle_endtag(tag)
def handle_starttag(self, tag, attrs):
element = Element(tag, attrs)
self.current.append(element)
if tag not in self.SELF_CLOSING_TAGS:
self.open_tags.append(element)
self.element_positions[element] = self.getpos()
def handle_endtag(self, tag):
if not self.open_tags:
self.error("Unexpected end tag `%s` (%s)" % (
tag, self.format_position()))
element = self.open_tags.pop()
while element.name != tag:
if not self.open_tags:
self.error("Unexpected end tag `%s` (%s)" % (
tag, self.format_position()))
element = self.open_tags.pop()
def handle_data(self, data):
self.current.append(data)
def handle_charref(self, name):
self.current.append('&%s;' % name)
def handle_entityref(self, name):
self.current.append('&%s;' % name)
def parse_html(html):
"""
    Takes a string that contains *valid* HTML and turns it into a Python object
    structure that can easily be compared against other HTML on semantic
    equivalence. Syntactical differences like which quoting style is used on
    attributes will be ignored.
"""
parser = Parser()
parser.feed(html)
parser.close()
document = parser.root
document.finalize()
# Removing ROOT element if it's not necessary
if len(document.children) == 1:
if not isinstance(document.children[0], basestring):
document = document.children[0]
return document
| bsd-3-clause |
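The `Element` equality above makes the comparison insensitive to attribute order, quoting style, and whitespace. A self-contained Python 3 sketch of the same normalization idea — sorted attributes plus collapsed text — built on the stdlib `html.parser` (a simplification, not Django's full parser):

```python
import re
from html.parser import HTMLParser

class NormalizingParser(HTMLParser):
    """Collect (tag, sorted-attrs) events and whitespace-collapsed text."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        # Sorting the attribute list makes attribute order irrelevant.
        self.events.append(('start', tag, sorted(attrs)))
    def handle_endtag(self, tag):
        self.events.append(('end', tag))
    def handle_data(self, data):
        # Collapse runs of whitespace, drop whitespace-only text nodes.
        text = re.sub(r'\s+', ' ', data).strip()
        if text:
            self.events.append(('text', text))

def normalized(html):
    parser = NormalizingParser()
    parser.feed(html)
    parser.close()
    return parser.events

# Attribute order, quoting style and extra whitespace do not affect equality.
assert normalized('<p id="a" class="b">hi</p>') == \
       normalized("<p class='b' id='a'>  hi </p>")
```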
yenliangl/bitcoin | test/functional/wallet_balance.py | 9 | 6231 | #!/usr/bin/env python3
# Copyright (c) 2018 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test the wallet balance RPC methods."""
from decimal import Decimal
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_equal,
assert_raises_rpc_error,
)
RANDOM_COINBASE_ADDRESS = 'mneYUmWYsuk7kySiURxCi3AGxrAqZxLgPZ'
def create_transactions(node, address, amt, fees):
# Create and sign raw transactions from node to address for amt.
# Creates a transaction for each fee and returns an array
# of the raw transactions.
utxos = node.listunspent(0)
# Create transactions
inputs = []
ins_total = 0
for utxo in utxos:
inputs.append({"txid": utxo["txid"], "vout": utxo["vout"]})
ins_total += utxo['amount']
if ins_total > amt:
break
txs = []
for fee in fees:
outputs = {address: amt, node.getrawchangeaddress(): ins_total - amt - fee}
raw_tx = node.createrawtransaction(inputs, outputs, 0, True)
raw_tx = node.signrawtransactionwithwallet(raw_tx)
txs.append(raw_tx)
return txs
class WalletTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 2
self.setup_clean_chain = True
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def run_test(self):
# Check that nodes don't own any UTXOs
assert_equal(len(self.nodes[0].listunspent()), 0)
assert_equal(len(self.nodes[1].listunspent()), 0)
self.log.info("Mining one block for each node")
self.nodes[0].generate(1)
self.sync_all()
self.nodes[1].generate(1)
self.nodes[1].generatetoaddress(100, RANDOM_COINBASE_ADDRESS)
self.sync_all()
assert_equal(self.nodes[0].getbalance(), 50)
assert_equal(self.nodes[1].getbalance(), 50)
self.log.info("Test getbalance with different arguments")
assert_equal(self.nodes[0].getbalance("*"), 50)
assert_equal(self.nodes[0].getbalance("*", 1), 50)
assert_equal(self.nodes[0].getbalance("*", 1, True), 50)
assert_equal(self.nodes[0].getbalance(minconf=1), 50)
# Send 40 BTC from 0 to 1 and 60 BTC from 1 to 0.
txs = create_transactions(self.nodes[0], self.nodes[1].getnewaddress(), 40, [Decimal('0.01')])
self.nodes[0].sendrawtransaction(txs[0]['hex'])
self.nodes[1].sendrawtransaction(txs[0]['hex']) # sending on both nodes is faster than waiting for propagation
self.sync_all()
txs = create_transactions(self.nodes[1], self.nodes[0].getnewaddress(), 60, [Decimal('0.01'), Decimal('0.02')])
self.nodes[1].sendrawtransaction(txs[0]['hex'])
self.nodes[0].sendrawtransaction(txs[0]['hex']) # sending on both nodes is faster than waiting for propagation
self.sync_all()
# First argument of getbalance must be set to "*"
assert_raises_rpc_error(-32, "dummy first argument must be excluded or set to \"*\"", self.nodes[1].getbalance, "")
self.log.info("Test getbalance and getunconfirmedbalance with unconfirmed inputs")
# getbalance without any arguments includes unconfirmed transactions, but not untrusted transactions
assert_equal(self.nodes[0].getbalance(), Decimal('9.99')) # change from node 0's send
assert_equal(self.nodes[1].getbalance(), Decimal('29.99')) # change from node 1's send
# Same with minconf=0
assert_equal(self.nodes[0].getbalance(minconf=0), Decimal('9.99'))
assert_equal(self.nodes[1].getbalance(minconf=0), Decimal('29.99'))
# getbalance with a minconf incorrectly excludes coins that have been spent more recently than the minconf blocks ago
# TODO: fix getbalance tracking of coin spentness depth
assert_equal(self.nodes[0].getbalance(minconf=1), Decimal('0'))
assert_equal(self.nodes[1].getbalance(minconf=1), Decimal('0'))
# getunconfirmedbalance
assert_equal(self.nodes[0].getunconfirmedbalance(), Decimal('60')) # output of node 1's spend
assert_equal(self.nodes[1].getunconfirmedbalance(), Decimal('0')) # Doesn't include output of node 0's send since it was spent
# Node 1 bumps the transaction fee and resends
self.nodes[1].sendrawtransaction(txs[1]['hex'])
self.sync_all()
self.log.info("Test getbalance and getunconfirmedbalance with conflicted unconfirmed inputs")
assert_equal(self.nodes[0].getwalletinfo()["unconfirmed_balance"], Decimal('60')) # output of node 1's send
assert_equal(self.nodes[0].getunconfirmedbalance(), Decimal('60'))
assert_equal(self.nodes[1].getwalletinfo()["unconfirmed_balance"], Decimal('0')) # Doesn't include output of node 0's send since it was spent
assert_equal(self.nodes[1].getunconfirmedbalance(), Decimal('0'))
self.nodes[1].generatetoaddress(1, RANDOM_COINBASE_ADDRESS)
self.sync_all()
# balances are correct after the transactions are confirmed
assert_equal(self.nodes[0].getbalance(), Decimal('69.99')) # node 1's send plus change from node 0's send
assert_equal(self.nodes[1].getbalance(), Decimal('29.98')) # change from node 0's send
# Send total balance away from node 1
txs = create_transactions(self.nodes[1], self.nodes[0].getnewaddress(), Decimal('29.97'), [Decimal('0.01')])
self.nodes[1].sendrawtransaction(txs[0]['hex'])
self.nodes[1].generatetoaddress(2, RANDOM_COINBASE_ADDRESS)
self.sync_all()
# getbalance with a minconf incorrectly excludes coins that have been spent more recently than the minconf blocks ago
# TODO: fix getbalance tracking of coin spentness depth
# getbalance with minconf=3 should still show the old balance
assert_equal(self.nodes[1].getbalance(minconf=3), Decimal('0'))
# getbalance with minconf=2 will show the new balance.
assert_equal(self.nodes[1].getbalance(minconf=2), Decimal('0'))
if __name__ == '__main__':
WalletTest().main()
| mit |
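`create_transactions` above accumulates UTXOs greedily, in listing order, until their sum covers the send amount. That selection step in isolation (a simplified sketch; real coin selection in Bitcoin Core is considerably more involved):

```python
from decimal import Decimal

def select_inputs(utxos, target):
    """Greedily pick UTXOs (in listing order) until their sum exceeds target.

    Returns (inputs, total). Raises ValueError when funds are insufficient —
    note the strict '>' mirrors the test above, which needs headroom for
    the fee and change output.
    """
    inputs, total = [], Decimal('0')
    for utxo in utxos:
        inputs.append({"txid": utxo["txid"], "vout": utxo["vout"]})
        total += utxo["amount"]
        if total > target:
            return inputs, total
    raise ValueError("insufficient funds")

utxos = [
    {"txid": "aa", "vout": 0, "amount": Decimal('10')},
    {"txid": "bb", "vout": 1, "amount": Decimal('50')},
]
ins, total = select_inputs(utxos, Decimal('40'))
assert [i["txid"] for i in ins] == ["aa", "bb"] and total == Decimal('60')
```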
rpanah/centinel-server | config.py | 2 | 1334 | import os
import getpass
import logging
# misc
recommended_version = 1.1
production = True
# user details
current_user = getpass.getuser()
centinel_home = "/opt/centinel-server/"
# directory structure
results_dir = os.path.join(centinel_home, 'results')
experiments_dir = os.path.join(centinel_home, 'experiments')
inputs_dir = os.path.join(centinel_home, 'inputs')
static_files_allowed = ['economistDemocracyIndex.pdf', 'consent.js']
# details for how to access the database
def load_uri_from_file(filename):
with open(filename, 'r') as filep:
uri = filep.read()
return uri
# Setup the database to connect to
database_uri_file = os.path.join(centinel_home, "cent.pgpass")
if not production:
DATABASE_URI = "postgresql://postgres:postgres@localhost/centinel"
else:
DATABASE_URI = load_uri_from_file(database_uri_file)
maxmind_db = os.path.join(centinel_home, 'maxmind.mmdb')
# AS information lookup
net_to_asn_file = os.path.join(centinel_home, 'data-raw-table')
asn_to_owner_file = os.path.join(centinel_home, 'data-used-autnums')
# consent form
prefetch_freedomhouse = False
# web server
ssl_cert = "server.iclab.org.crt"
ssl_key = "server.iclab.org.key"
ssl_chain = "server.iclab.org_bundle.crt"
LOG_FILE = os.path.join(centinel_home, "centinel-server.log")
LOG_LEVEL = logging.DEBUG
| mit |
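The config module above reads the production database URI from a pgpass-style file, switching to a hard-coded development URI when `production` is false. The same pattern with explicit error handling — a sketch; the filename and default are illustrative, not part of the original config:

```python
def load_uri_from_file(filename, default=None):
    """Read a database URI from a file, falling back to a default.

    The fallback keeps development environments working when the
    credentials file (e.g. cent.pgpass) is absent.
    """
    try:
        with open(filename) as filep:
            return filep.read().strip()
    except IOError:
        if default is not None:
            return default
        raise

# With no credentials file present, the development default is used.
uri = load_uri_from_file(
    '/nonexistent/cent.pgpass',
    default='postgresql://postgres:postgres@localhost/centinel')
assert uri == 'postgresql://postgres:postgres@localhost/centinel'
```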
vermouthmjl/scikit-learn | sklearn/externals/funcsigs.py | 118 | 29982 | # Copyright 2001-2013 Python Software Foundation; All Rights Reserved
"""Function signature objects for callables
Back port of Python 3.3's function signature tools from the inspect module,
modified to be compatible with Python 2.6, 2.7 and 3.2+.
"""
from __future__ import absolute_import, division, print_function
import itertools
import functools
import re
import types
try:
from collections import OrderedDict
except ImportError:
from .odict import OrderedDict
__version__ = "0.4"
__all__ = ['BoundArguments', 'Parameter', 'Signature', 'signature']
_WrapperDescriptor = type(type.__call__)
_MethodWrapper = type(all.__call__)
_NonUserDefinedCallables = (_WrapperDescriptor,
_MethodWrapper,
types.BuiltinFunctionType)
def formatannotation(annotation, base_module=None):
if isinstance(annotation, type):
if annotation.__module__ in ('builtins', '__builtin__', base_module):
return annotation.__name__
return annotation.__module__+'.'+annotation.__name__
return repr(annotation)
def _get_user_defined_method(cls, method_name, *nested):
try:
if cls is type:
return
meth = getattr(cls, method_name)
for name in nested:
meth = getattr(meth, name, meth)
except AttributeError:
return
else:
if not isinstance(meth, _NonUserDefinedCallables):
            # Once '__signature__' is added to 'C'-level
            # callables, this check won't be necessary
return meth
def signature(obj):
'''Get a signature object for the passed callable.'''
if not callable(obj):
raise TypeError('{0!r} is not a callable object'.format(obj))
if isinstance(obj, types.MethodType):
sig = signature(obj.__func__)
if obj.__self__ is None:
# Unbound method: the first parameter becomes positional-only
if sig.parameters:
first = sig.parameters.values()[0].replace(
kind=_POSITIONAL_ONLY)
return sig.replace(
parameters=(first,) + tuple(sig.parameters.values())[1:])
else:
return sig
else:
# In this case we skip the first parameter of the underlying
# function (usually `self` or `cls`).
return sig.replace(parameters=tuple(sig.parameters.values())[1:])
try:
sig = obj.__signature__
except AttributeError:
pass
else:
if sig is not None:
return sig
try:
# Was this function wrapped by a decorator?
wrapped = obj.__wrapped__
except AttributeError:
pass
else:
return signature(wrapped)
if isinstance(obj, types.FunctionType):
return Signature.from_function(obj)
if isinstance(obj, functools.partial):
sig = signature(obj.func)
new_params = OrderedDict(sig.parameters.items())
partial_args = obj.args or ()
partial_keywords = obj.keywords or {}
try:
ba = sig.bind_partial(*partial_args, **partial_keywords)
        except TypeError:
msg = 'partial object {0!r} has incorrect arguments'.format(obj)
raise ValueError(msg)
for arg_name, arg_value in ba.arguments.items():
param = new_params[arg_name]
if arg_name in partial_keywords:
# We set a new default value, because the following code
# is correct:
#
# >>> def foo(a): print(a)
# >>> print(partial(partial(foo, a=10), a=20)())
# 20
# >>> print(partial(partial(foo, a=10), a=20)(a=30))
# 30
#
# So, with 'partial' objects, passing a keyword argument is
# like setting a new default value for the corresponding
# parameter
#
# We also mark this parameter with '_partial_kwarg'
# flag. Later, in '_bind', the 'default' value of this
# parameter will be added to 'kwargs', to simulate
# the 'functools.partial' real call.
new_params[arg_name] = param.replace(default=arg_value,
_partial_kwarg=True)
elif (param.kind not in (_VAR_KEYWORD, _VAR_POSITIONAL) and
not param._partial_kwarg):
new_params.pop(arg_name)
return sig.replace(parameters=new_params.values())
sig = None
if isinstance(obj, type):
# obj is a class or a metaclass
# First, let's see if it has an overloaded __call__ defined
# in its metaclass
call = _get_user_defined_method(type(obj), '__call__')
if call is not None:
sig = signature(call)
else:
# Now we check if the 'obj' class has a '__new__' method
new = _get_user_defined_method(obj, '__new__')
if new is not None:
sig = signature(new)
else:
# Finally, we should have at least __init__ implemented
init = _get_user_defined_method(obj, '__init__')
if init is not None:
sig = signature(init)
elif not isinstance(obj, _NonUserDefinedCallables):
# An object with __call__
# We also check that the 'obj' is not an instance of
# _WrapperDescriptor or _MethodWrapper to avoid
# infinite recursion (and even potential segfault)
call = _get_user_defined_method(type(obj), '__call__', 'im_func')
if call is not None:
sig = signature(call)
if sig is not None:
# For classes and objects we skip the first parameter of their
# __call__, __new__, or __init__ methods
return sig.replace(parameters=tuple(sig.parameters.values())[1:])
if isinstance(obj, types.BuiltinFunctionType):
# Raise a nicer error message for builtins
msg = 'no signature found for builtin function {0!r}'.format(obj)
raise ValueError(msg)
raise ValueError('callable {0!r} is not supported by signature'.format(obj))
class _void(object):
'''A private marker - used in Parameter & Signature'''
class _empty(object):
pass
class _ParameterKind(int):
def __new__(self, *args, **kwargs):
obj = int.__new__(self, *args)
obj._name = kwargs['name']
return obj
def __str__(self):
return self._name
def __repr__(self):
return '<_ParameterKind: {0!r}>'.format(self._name)
_POSITIONAL_ONLY = _ParameterKind(0, name='POSITIONAL_ONLY')
_POSITIONAL_OR_KEYWORD = _ParameterKind(1, name='POSITIONAL_OR_KEYWORD')
_VAR_POSITIONAL = _ParameterKind(2, name='VAR_POSITIONAL')
_KEYWORD_ONLY = _ParameterKind(3, name='KEYWORD_ONLY')
_VAR_KEYWORD = _ParameterKind(4, name='VAR_KEYWORD')
class Parameter(object):
'''Represents a parameter in a function signature.
Has the following public attributes:
* name : str
The name of the parameter as a string.
* default : object
The default value for the parameter if specified. If the
parameter has no default value, this attribute is not set.
* annotation
The annotation for the parameter if specified. If the
parameter has no annotation, this attribute is not set.
* kind : str
Describes how argument values are bound to the parameter.
Possible values: `Parameter.POSITIONAL_ONLY`,
`Parameter.POSITIONAL_OR_KEYWORD`, `Parameter.VAR_POSITIONAL`,
`Parameter.KEYWORD_ONLY`, `Parameter.VAR_KEYWORD`.
'''
__slots__ = ('_name', '_kind', '_default', '_annotation', '_partial_kwarg')
POSITIONAL_ONLY = _POSITIONAL_ONLY
POSITIONAL_OR_KEYWORD = _POSITIONAL_OR_KEYWORD
VAR_POSITIONAL = _VAR_POSITIONAL
KEYWORD_ONLY = _KEYWORD_ONLY
VAR_KEYWORD = _VAR_KEYWORD
empty = _empty
def __init__(self, name, kind, default=_empty, annotation=_empty,
_partial_kwarg=False):
if kind not in (_POSITIONAL_ONLY, _POSITIONAL_OR_KEYWORD,
_VAR_POSITIONAL, _KEYWORD_ONLY, _VAR_KEYWORD):
raise ValueError("invalid value for 'Parameter.kind' attribute")
self._kind = kind
if default is not _empty:
if kind in (_VAR_POSITIONAL, _VAR_KEYWORD):
msg = '{0} parameters cannot have default values'.format(kind)
raise ValueError(msg)
self._default = default
self._annotation = annotation
if name is None:
if kind != _POSITIONAL_ONLY:
raise ValueError("None is not a valid name for a "
"non-positional-only parameter")
self._name = name
else:
name = str(name)
if kind != _POSITIONAL_ONLY and not re.match(r'[a-z_]\w*$', name, re.I):
msg = '{0!r} is not a valid parameter name'.format(name)
raise ValueError(msg)
self._name = name
self._partial_kwarg = _partial_kwarg
@property
def name(self):
return self._name
@property
def default(self):
return self._default
@property
def annotation(self):
return self._annotation
@property
def kind(self):
return self._kind
def replace(self, name=_void, kind=_void, annotation=_void,
default=_void, _partial_kwarg=_void):
'''Creates a customized copy of the Parameter.'''
if name is _void:
name = self._name
if kind is _void:
kind = self._kind
if annotation is _void:
annotation = self._annotation
if default is _void:
default = self._default
if _partial_kwarg is _void:
_partial_kwarg = self._partial_kwarg
return type(self)(name, kind, default=default, annotation=annotation,
_partial_kwarg=_partial_kwarg)
def __str__(self):
kind = self.kind
formatted = self._name
if kind == _POSITIONAL_ONLY:
if formatted is None:
formatted = ''
formatted = '<{0}>'.format(formatted)
# Add annotation and default value
if self._annotation is not _empty:
formatted = '{0}:{1}'.format(formatted,
formatannotation(self._annotation))
if self._default is not _empty:
formatted = '{0}={1}'.format(formatted, repr(self._default))
if kind == _VAR_POSITIONAL:
formatted = '*' + formatted
elif kind == _VAR_KEYWORD:
formatted = '**' + formatted
return formatted
def __repr__(self):
return '<{0} at {1:#x} {2!r}>'.format(self.__class__.__name__,
id(self), self.name)
def __hash__(self):
msg = "unhashable type: '{0}'".format(self.__class__.__name__)
raise TypeError(msg)
def __eq__(self, other):
return (issubclass(other.__class__, Parameter) and
self._name == other._name and
self._kind == other._kind and
self._default == other._default and
self._annotation == other._annotation)
def __ne__(self, other):
return not self.__eq__(other)
class BoundArguments(object):
'''Result of `Signature.bind` call. Holds the mapping of arguments
to the function's parameters.
Has the following public attributes:
* arguments : OrderedDict
An ordered mutable mapping of parameters' names to arguments' values.
Does not contain arguments' default values.
* signature : Signature
The Signature object that created this instance.
* args : tuple
Tuple of positional arguments values.
* kwargs : dict
Dict of keyword arguments values.
'''
def __init__(self, signature, arguments):
self.arguments = arguments
self._signature = signature
@property
def signature(self):
return self._signature
@property
def args(self):
args = []
for param_name, param in self._signature.parameters.items():
if (param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY) or
param._partial_kwarg):
# Keyword arguments mapped by 'functools.partial'
# (Parameter._partial_kwarg is True) are mapped
# in 'BoundArguments.kwargs', along with VAR_KEYWORD &
# KEYWORD_ONLY
break
try:
arg = self.arguments[param_name]
except KeyError:
# We're done here. Other arguments
# will be mapped in 'BoundArguments.kwargs'
break
else:
if param.kind == _VAR_POSITIONAL:
# *args
args.extend(arg)
else:
# plain argument
args.append(arg)
return tuple(args)
@property
def kwargs(self):
kwargs = {}
kwargs_started = False
for param_name, param in self._signature.parameters.items():
if not kwargs_started:
if (param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY) or
param._partial_kwarg):
kwargs_started = True
else:
if param_name not in self.arguments:
kwargs_started = True
continue
if not kwargs_started:
continue
try:
arg = self.arguments[param_name]
except KeyError:
pass
else:
if param.kind == _VAR_KEYWORD:
# **kwargs
kwargs.update(arg)
else:
# plain keyword argument
kwargs[param_name] = arg
return kwargs
def __hash__(self):
msg = "unhashable type: '{0}'".format(self.__class__.__name__)
raise TypeError(msg)
def __eq__(self, other):
return (issubclass(other.__class__, BoundArguments) and
self.signature == other.signature and
self.arguments == other.arguments)
def __ne__(self, other):
return not self.__eq__(other)
class Signature(object):
'''A Signature object represents the overall signature of a function.
It stores a Parameter object for each parameter accepted by the
function, as well as information specific to the function itself.
A Signature object has the following public attributes and methods:
* parameters : OrderedDict
An ordered mapping of parameters' names to the corresponding
Parameter objects (keyword-only arguments are in the same order
as listed in `code.co_varnames`).
* return_annotation : object
The annotation for the return type of the function if specified.
If the function has no annotation for its return type, this
attribute is not set.
* bind(*args, **kwargs) -> BoundArguments
Creates a mapping from positional and keyword arguments to
parameters.
* bind_partial(*args, **kwargs) -> BoundArguments
Creates a partial mapping from positional and keyword arguments
to parameters (simulating 'functools.partial' behavior.)
'''
__slots__ = ('_return_annotation', '_parameters')
_parameter_cls = Parameter
_bound_arguments_cls = BoundArguments
empty = _empty
def __init__(self, parameters=None, return_annotation=_empty,
__validate_parameters__=True):
'''Constructs Signature from the given list of Parameter
objects and 'return_annotation'. All arguments are optional.
'''
if parameters is None:
params = OrderedDict()
else:
if __validate_parameters__:
params = OrderedDict()
top_kind = _POSITIONAL_ONLY
for idx, param in enumerate(parameters):
kind = param.kind
if kind < top_kind:
msg = 'wrong parameter order: {0} before {1}'
msg = msg.format(top_kind, param.kind)
raise ValueError(msg)
else:
top_kind = kind
name = param.name
if name is None:
name = str(idx)
param = param.replace(name=name)
if name in params:
msg = 'duplicate parameter name: {0!r}'.format(name)
raise ValueError(msg)
params[name] = param
else:
params = OrderedDict(((param.name, param)
for param in parameters))
self._parameters = params
self._return_annotation = return_annotation
@classmethod
def from_function(cls, func):
'''Constructs Signature for the given python function'''
if not isinstance(func, types.FunctionType):
raise TypeError('{0!r} is not a Python function'.format(func))
Parameter = cls._parameter_cls
# Parameter information.
func_code = func.__code__
pos_count = func_code.co_argcount
arg_names = func_code.co_varnames
positional = tuple(arg_names[:pos_count])
keyword_only_count = getattr(func_code, 'co_kwonlyargcount', 0)
keyword_only = arg_names[pos_count:(pos_count + keyword_only_count)]
annotations = getattr(func, '__annotations__', {})
defaults = func.__defaults__
kwdefaults = getattr(func, '__kwdefaults__', None)
if defaults:
pos_default_count = len(defaults)
else:
pos_default_count = 0
parameters = []
# Non-keyword-only parameters w/o defaults.
non_default_count = pos_count - pos_default_count
for name in positional[:non_default_count]:
annotation = annotations.get(name, _empty)
parameters.append(Parameter(name, annotation=annotation,
kind=_POSITIONAL_OR_KEYWORD))
# ... w/ defaults.
for offset, name in enumerate(positional[non_default_count:]):
annotation = annotations.get(name, _empty)
parameters.append(Parameter(name, annotation=annotation,
kind=_POSITIONAL_OR_KEYWORD,
default=defaults[offset]))
# *args
if func_code.co_flags & 0x04:
name = arg_names[pos_count + keyword_only_count]
annotation = annotations.get(name, _empty)
parameters.append(Parameter(name, annotation=annotation,
kind=_VAR_POSITIONAL))
# Keyword-only parameters.
for name in keyword_only:
default = _empty
if kwdefaults is not None:
default = kwdefaults.get(name, _empty)
annotation = annotations.get(name, _empty)
parameters.append(Parameter(name, annotation=annotation,
kind=_KEYWORD_ONLY,
default=default))
# **kwargs
if func_code.co_flags & 0x08:
index = pos_count + keyword_only_count
if func_code.co_flags & 0x04:
index += 1
name = arg_names[index]
annotation = annotations.get(name, _empty)
parameters.append(Parameter(name, annotation=annotation,
kind=_VAR_KEYWORD))
return cls(parameters,
return_annotation=annotations.get('return', _empty),
__validate_parameters__=False)
@property
def parameters(self):
try:
return types.MappingProxyType(self._parameters)
except AttributeError:
return OrderedDict(self._parameters.items())
@property
def return_annotation(self):
return self._return_annotation
def replace(self, parameters=_void, return_annotation=_void):
'''Creates a customized copy of the Signature.
Pass 'parameters' and/or 'return_annotation' arguments
to override them in the new copy.
'''
if parameters is _void:
parameters = self.parameters.values()
if return_annotation is _void:
return_annotation = self._return_annotation
return type(self)(parameters,
return_annotation=return_annotation)
def __hash__(self):
msg = "unhashable type: '{0}'".format(self.__class__.__name__)
raise TypeError(msg)
def __eq__(self, other):
if (not issubclass(type(other), Signature) or
self.return_annotation != other.return_annotation or
len(self.parameters) != len(other.parameters)):
return False
other_positions = dict((param, idx)
for idx, param in enumerate(other.parameters.keys()))
for idx, (param_name, param) in enumerate(self.parameters.items()):
if param.kind == _KEYWORD_ONLY:
try:
other_param = other.parameters[param_name]
except KeyError:
return False
else:
if param != other_param:
return False
else:
try:
other_idx = other_positions[param_name]
except KeyError:
return False
else:
if (idx != other_idx or
param != other.parameters[param_name]):
return False
return True
def __ne__(self, other):
return not self.__eq__(other)
def _bind(self, args, kwargs, partial=False):
'''Private method. Don't use directly.'''
arguments = OrderedDict()
parameters = iter(self.parameters.values())
parameters_ex = ()
arg_vals = iter(args)
if partial:
# Support for binding arguments to 'functools.partial' objects.
# See 'functools.partial' case in 'signature()' implementation
# for details.
for param_name, param in self.parameters.items():
if (param._partial_kwarg and param_name not in kwargs):
# Simulating 'functools.partial' behavior
kwargs[param_name] = param.default
while True:
# Let's iterate through the positional arguments and corresponding
# parameters
try:
arg_val = next(arg_vals)
except StopIteration:
# No more positional arguments
try:
param = next(parameters)
except StopIteration:
# No more parameters. That's it. Just need to check that
# we have no `kwargs` after this while loop
break
else:
if param.kind == _VAR_POSITIONAL:
# That's OK, just empty *args. Let's start parsing
# kwargs
break
elif param.name in kwargs:
if param.kind == _POSITIONAL_ONLY:
msg = '{arg!r} parameter is positional only, ' \
'but was passed as a keyword'
msg = msg.format(arg=param.name)
raise TypeError(msg)
parameters_ex = (param,)
break
elif (param.kind == _VAR_KEYWORD or
param.default is not _empty):
# That's fine too - we have a default value for this
# parameter. So, lets start parsing `kwargs`, starting
# with the current parameter
parameters_ex = (param,)
break
else:
if partial:
parameters_ex = (param,)
break
else:
msg = '{arg!r} parameter lacking default value'
msg = msg.format(arg=param.name)
raise TypeError(msg)
else:
# We have a positional argument to process
try:
param = next(parameters)
except StopIteration:
raise TypeError('too many positional arguments')
else:
if param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY):
# Looks like we have no parameter for this positional
# argument
raise TypeError('too many positional arguments')
if param.kind == _VAR_POSITIONAL:
# We have an '*args'-like argument, let's fill it with
# all positional arguments we have left and move on to
# the next phase
values = [arg_val]
values.extend(arg_vals)
arguments[param.name] = tuple(values)
break
if param.name in kwargs:
raise TypeError('multiple values for argument '
'{arg!r}'.format(arg=param.name))
arguments[param.name] = arg_val
# Now, we iterate through the remaining parameters to process
# keyword arguments
kwargs_param = None
for param in itertools.chain(parameters_ex, parameters):
if param.kind == _POSITIONAL_ONLY:
# This should never happen in case of a properly built
# Signature object (but let's have this check here
# to ensure correct behaviour just in case)
raise TypeError('{arg!r} parameter is positional only, '
'but was passed as a keyword'. \
format(arg=param.name))
if param.kind == _VAR_KEYWORD:
# Memorize that we have a '**kwargs'-like parameter
kwargs_param = param
continue
param_name = param.name
try:
arg_val = kwargs.pop(param_name)
except KeyError:
# We have no value for this parameter. It's fine though,
# if it has a default value, or it is an '*args'-like
# parameter, left alone by the processing of positional
# arguments.
if (not partial and param.kind != _VAR_POSITIONAL and
param.default is _empty):
raise TypeError('{arg!r} parameter lacking default value'. \
format(arg=param_name))
else:
arguments[param_name] = arg_val
if kwargs:
if kwargs_param is not None:
# Process our '**kwargs'-like parameter
arguments[kwargs_param.name] = kwargs
else:
raise TypeError('too many keyword arguments')
return self._bound_arguments_cls(self, arguments)
def bind(self, *args, **kwargs):
'''Get a BoundArguments object, that maps the passed `args`
and `kwargs` to the function's signature. Raises `TypeError`
if the passed arguments can not be bound.
'''
return self._bind(args, kwargs)
def bind_partial(self, *args, **kwargs):
'''Get a BoundArguments object, that partially maps the
passed `args` and `kwargs` to the function's signature.
Raises `TypeError` if the passed arguments can not be bound.
'''
return self._bind(args, kwargs, partial=True)
def __str__(self):
result = []
render_kw_only_separator = True
for idx, param in enumerate(self.parameters.values()):
formatted = str(param)
kind = param.kind
if kind == _VAR_POSITIONAL:
# OK, we have an '*args'-like parameter, so we won't need
# a '*' to separate keyword-only arguments
render_kw_only_separator = False
elif kind == _KEYWORD_ONLY and render_kw_only_separator:
# We have a keyword-only parameter to render and we haven't
# rendered an '*args'-like parameter before, so add a '*'
# separator to the parameters list ("foo(arg1, *, arg2)" case)
result.append('*')
# This condition should be only triggered once, so
# reset the flag
render_kw_only_separator = False
result.append(formatted)
rendered = '({0})'.format(', '.join(result))
if self.return_annotation is not _empty:
anno = formatannotation(self.return_annotation)
rendered += ' -> {0}'.format(anno)
return rendered
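The binding algorithm above is the funcsigs-style backport of Python 3's `inspect.Signature.bind`; a minimal sketch of the same behaviour using the stdlib version (assuming Python 3):

```python
import inspect

def greet(name, *args, mode="plain", **extra):
    return name, args, mode, extra

sig = inspect.signature(greet)

# Leftover positional arguments spill into the *args tuple, exactly as the
# _VAR_POSITIONAL branch above collects remaining values.
bound = sig.bind("world", 1, 2, mode="loud", color="red")
print(bound.arguments)

# Binding fails the same way the code above raises TypeError for a
# required parameter that lacks both a value and a default.
try:
    sig.bind(mode="loud")  # 'name' has no default
except TypeError as e:
    print("TypeError:", e)
```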
| bsd-3-clause |
m45t3r/i3pystatus | i3pystatus/cpu_freq.py | 8 | 1779 | from i3pystatus import IntervalModule
class CpuFreq(IntervalModule):
"""
Uses `/proc/cpuinfo` by default to determine the current CPU frequency.
.. rubric:: Available formatters
* `{avg}` - mean from all cores in MHz `4.3f`
* `{avgg}` - mean from all cores in GHz `1.2f`
* `{coreX}` - frequency of core number `X` in MHz (format `4.3f`), where 0 <= `X` <= number of cores - 1
* `{coreXg}` - frequency of core number `X` in GHz (format `1.2f`), where 0 <= `X` <= number of cores - 1
"""
format = "{avgg}"
settings = (
"format",
("color", "The text color"),
("file", "override default path"),
)
file = '/proc/cpuinfo'
color = '#FFFFFF'
def createvaluesdict(self):
"""
function processes the /proc/cpuinfo file
:return: dictionary used as the full-text output for the module
"""
with open(self.file) as f:
mhz_values = [float(line.split(':')[1]) for line in f if line.startswith('cpu MHz')]
ghz_values = [value / 1000.0 for value in mhz_values]
mhz = {"core{}".format(key): "{0:4.3f}".format(value) for key, value in enumerate(mhz_values)}
ghz = {"core{}g".format(key): "{0:1.2f}".format(value) for key, value in enumerate(ghz_values)}
cdict = mhz.copy()
cdict.update(ghz)
cdict['avg'] = "{0:4.3f}".format(sum(mhz_values) / len(mhz_values))
cdict['avgg'] = "{0:1.2f}".format(sum(ghz_values) / len(ghz_values))
return cdict
def run(self):
cdict = self.createvaluesdict()
self.data = cdict
self.output = {
"full_text": self.format.format(**cdict),
"color": self.color,
"format": self.format,
}
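The frequency extraction in `createvaluesdict` can be exercised against a canned `/proc/cpuinfo` snippet — a sketch with made-up sample data that reproduces the same parsing rules rather than calling the module:

```python
import io

SAMPLE = """\
processor\t: 0
cpu MHz\t\t: 2400.000
processor\t: 1
cpu MHz\t\t: 3100.500
"""

def freq_dict(lines):
    # Same extraction as CpuFreq.createvaluesdict: keep 'cpu MHz' lines,
    # split on ':' and parse the value; derive GHz values from MHz.
    mhz = [float(l.split(":")[1]) for l in lines if l.startswith("cpu MHz")]
    ghz = [v / 1000.0 for v in mhz]
    out = {"core{}".format(i): "{0:4.3f}".format(v) for i, v in enumerate(mhz)}
    out.update({"core{}g".format(i): "{0:1.2f}".format(v) for i, v in enumerate(ghz)})
    out["avg"] = "{0:4.3f}".format(sum(mhz) / len(mhz))
    out["avgg"] = "{0:1.2f}".format(sum(ghz) / len(ghz))
    return out

print(freq_dict(io.StringIO(SAMPLE)))
```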
| mit |
creativewild/ansible | lib/ansible/inventory/script.py | 80 | 6338 | # (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import subprocess
import sys
from collections import Mapping
from ansible import constants as C
from ansible.errors import *
from ansible.inventory.host import Host
from ansible.inventory.group import Group
from ansible.module_utils.basic import json_dict_bytes_to_unicode
class InventoryScript:
''' Host inventory parser for ansible using external inventory scripts. '''
def __init__(self, loader, filename=C.DEFAULT_HOST_LIST):
self._loader = loader
# Support inventory scripts that are not prefixed with some
# path information but happen to be in the current working
# directory when '.' is not in PATH.
self.filename = os.path.abspath(filename)
cmd = [ self.filename, "--list" ]
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("problem running %s (%s)" % (' '.join(cmd), e))
(stdout, stderr) = sp.communicate()
if sp.returncode != 0:
raise AnsibleError("Inventory script (%s) had an execution error: %s " % (filename,stderr))
self.data = stdout
# see comment about _meta below
self.host_vars_from_top = None
self.groups = self._parse(stderr)
def _parse(self, err):
all_hosts = {}
# not passing from_remote because data from CMDB is trusted
try:
self.raw = self._loader.load(self.data)
except Exception as e:
sys.stderr.write(err + "\n")
raise AnsibleError("failed to parse executable inventory script results from {0}: {1}".format(self.filename, str(e)))
if not isinstance(self.raw, Mapping):
sys.stderr.write(err + "\n")
raise AnsibleError("failed to parse executable inventory script results from {0}: data needs to be formatted as a json dict".format(self.filename))
self.raw = json_dict_bytes_to_unicode(self.raw)
all = Group('all')
groups = dict(all=all)
group = None
for (group_name, data) in self.raw.items():
# in Ansible 1.3 and later, a "_meta" subelement may contain
# a variable "hostvars" which contains a hash for each host
# if this "hostvars" exists at all then do not call --host for each
# host. This is for efficiency and scripts should still return data
# if called with --host for backwards compat with 1.2 and earlier.
if group_name == '_meta':
if 'hostvars' in data:
self.host_vars_from_top = data['hostvars']
continue
if group_name != all.name:
group = groups[group_name] = Group(group_name)
else:
group = all
host = None
if not isinstance(data, dict):
data = {'hosts': data}
# if neither of those subkeys is present, it's the simplified syntax: a single host with vars
elif not any(k in data for k in ('hosts','vars')):
data = {'hosts': [group_name], 'vars': data}
if 'hosts' in data:
if not isinstance(data['hosts'], list):
raise AnsibleError("You defined a group \"%s\" with bad "
"data for the host list:\n %s" % (group_name, data))
for hostname in data['hosts']:
if hostname not in all_hosts:
all_hosts[hostname] = Host(hostname)
host = all_hosts[hostname]
group.add_host(host)
if 'vars' in data:
if not isinstance(data['vars'], dict):
raise AnsibleError("You defined a group \"%s\" with bad "
"data for variables:\n %s" % (group_name, data))
for k, v in data['vars'].iteritems():
if group.name == all.name:
all.set_variable(k, v)
else:
group.set_variable(k, v)
# Separate loop to ensure all groups are defined
for (group_name, data) in self.raw.items():
if group_name == '_meta':
continue
if isinstance(data, dict) and 'children' in data:
for child_name in data['children']:
if child_name in groups:
groups[group_name].add_child_group(groups[child_name])
for group in groups.values():
if group.depth == 0 and group.name != 'all':
all.add_child_group(group)
return groups
def get_host_variables(self, host):
""" Runs <script> --host <hostname> to determine additional host variables """
if self.host_vars_from_top is not None:
got = self.host_vars_from_top.get(host.name, {})
return got
cmd = [self.filename, "--host", host.name]
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("problem running %s (%s)" % (' '.join(cmd), e))
(out, err) = sp.communicate()
if out.strip() == '':
return dict()
try:
return json_dict_bytes_to_unicode(self._loader.load(out))
except ValueError:
raise AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))
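For reference, the `--list` JSON shape that `_parse` expects looks roughly like this (a hand-written sample; the `_meta.hostvars` block is the optional optimization described in the comments above that lets the parser skip per-host `--host` calls):

```python
import json

# Sample output of an inventory script invoked with --list.
raw = json.loads("""
{
    "web": {"hosts": ["web1", "web2"], "vars": {"http_port": 80}},
    "all_servers": {"children": ["web"]},
    "_meta": {"hostvars": {"web1": {"ansible_host": "10.0.0.1"}}}
}
""")

# Mirror the parser's split: top-level hostvars vs. regular groups.
host_vars_from_top = raw.get("_meta", {}).get("hostvars")
groups = {name: data for name, data in raw.items() if name != "_meta"}
print(sorted(groups), host_vars_from_top["web1"])
```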
| gpl-3.0 |
cts-admin/cts | cts/fundraising/views.py | 1 | 7312 | import decimal
import json
import stripe
from django.conf import settings
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.forms.models import modelformset_factory
from django.http import HttpResponse, JsonResponse
from django.shortcuts import get_object_or_404, redirect, render
from django.template.loader import render_to_string
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST
from .exceptions import DonationError
from .forms import CTSDonorForm, DonationForm, PaymentForm
from .models import (
LEADERSHIP_LEVEL_AMOUNT, CTSDonor, Donation, Payment, Testimonial,
)
from home.tasks import mail_task
def index(request):
testimonial = Testimonial.objects.filter(is_active=True).order_by('?').first()
return render(request, 'fundraising/index.html', {
'testimonial': testimonial,
})
@require_POST
def donate(request):
form = PaymentForm(request.POST, user=request.user)
if form.is_valid():
# Try to create the charge on Stripe's servers - this will charge the user's card
try:
donation = form.make_donation()
except DonationError as donation_error:
data = {
'success': False,
'error': str(donation_error),
}
else:
data = {
'success': True,
'redirect': donation.get_absolute_url(),
}
else:
data = {
'success': False,
'error': form.errors.as_json(),
}
return JsonResponse(data)
def thank_you(request, donation):
donation = get_object_or_404(Donation, pk=donation)
if request.method == 'POST':
form = CTSDonorForm(
data=request.POST,
files=request.FILES,
instance=donation.donor,
)
if form.is_valid():
form.save()
messages.success(request, "Thank you for your contribution!")
return redirect('fundraising:index')
else:
form = CTSDonorForm(instance=donation.donor)
return render(request, 'fundraising/thank-you.html', {
'donation': donation,
'form': form,
'leadership_level_amount': LEADERSHIP_LEVEL_AMOUNT,
})
@login_required
@never_cache
def manage_donations(request, donor):
donor = get_object_or_404(CTSDonor, pk=donor)
recurring_donations = donor.donation_set.exclude(stripe_subscription_id='')
ModifyDonationsFormset = modelformset_factory(Donation, form=DonationForm, extra=0)
if request.method == 'POST':
donor_form = CTSDonorForm(
data=request.POST,
files=request.FILES,
instance=donor,
)
modify_donations_formset = ModifyDonationsFormset(
request.POST,
queryset=recurring_donations
)
if donor_form.is_valid() and modify_donations_formset.is_valid():
donor_form.save()
modify_donations_formset.save()
messages.success(request, "Your information has been updated.")
else:
donor_form = CTSDonorForm(instance=donor)
modify_donations_formset = ModifyDonationsFormset(
queryset=recurring_donations
)
return render(request, 'fundraising/manage-donations.html', {
'donor': donor,
'donor_form': donor_form,
'modify_donations_formset': modify_donations_formset,
'recurring_donations': recurring_donations,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY,
})
@require_POST
def update_card(request):
donation = get_object_or_404(Donation, id=request.POST['donation_id'])
try:
customer = stripe.Customer.retrieve(donation.stripe_customer_id)
subscription = customer.subscriptions.retrieve(donation.stripe_subscription_id)
subscription.source = request.POST['stripe_token']
subscription.save()
except stripe.StripeError as e:
data = {'success': False, 'error': str(e)}
else:
data = {'success': True}
return JsonResponse(data)
@require_POST
def cancel_donation(request, donor):
donation_id = request.POST.get('donation')
donor = get_object_or_404(CTSDonor, pk=donor)
donations = donor.donation_set.exclude(stripe_subscription_id='')
donation = get_object_or_404(donations, pk=donation_id)
customer = stripe.Customer.retrieve(donation.stripe_customer_id)
customer.subscriptions.retrieve(donation.stripe_subscription_id).delete()
donation.stripe_subscription_id = ''
donation.save()
messages.success(request, "Your donation has been canceled.")
return redirect('fundraising:manage-donations', donor=donor.pk)
@require_POST
@csrf_exempt
def receive_webhook(request):
try:
data = json.loads(request.body.decode())
except ValueError:
return HttpResponse(status=422)
# For security, re-request the event object from Stripe.
try:
event = stripe.Event.retrieve(data['id'])
except stripe.error.InvalidRequestError:
return HttpResponse(status=422)
return WebhookHandler(event).handle()
class WebhookHandler(object):
def __init__(self, event):
self.event = event
def handle(self):
handlers = {
'invoice.payment_succeeded': self.payment_succeeded,
'invoice.payment_failed': self.payment_failed,
'customer.subscription.deleted': self.subscription_cancelled,
}
handler = handlers.get(self.event.type, lambda: HttpResponse(status=422))
return handler()
def payment_succeeded(self):
invoice = self.event.data.object
# Ensure we haven't already processed this payment
if Payment.objects.filter(stripe_charge_id=invoice.charge).exists():
# We need a 2xx response otherwise Stripe will keep trying.
return HttpResponse()
donation = get_object_or_404(
Donation, stripe_subscription_id=invoice.subscription)
amount = decimal.Decimal(invoice.total) / 100
if invoice.charge:
donation.payment_set.create(amount=amount, stripe_charge_id=invoice.charge)
return HttpResponse(status=201)
def subscription_cancelled(self):
subscription = self.event.data.object
donation = get_object_or_404(
Donation, stripe_subscription_id=subscription.id)
donation.stripe_subscription_id = ''
donation.save()
mail_text = render_to_string(
'fundraising/email/subscription_cancelled.txt', {'donation': donation})
mail_task('Payment cancelled', mail_text,
settings.DEFAULT_FROM_EMAIL, [donation.donor.email])
return HttpResponse(status=204)
def payment_failed(self):
invoice = self.event.data.object
donation = get_object_or_404(
Donation, stripe_subscription_id=invoice.subscription)
mail_text = render_to_string(
'fundraising/email/payment_failed.txt', {'donation': donation})
mail_task('Payment failed', mail_text,
settings.DEFAULT_FROM_EMAIL, [donation.donor.email])
return HttpResponse(status=204)
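One detail worth noting in `payment_succeeded` above is the cents-to-dollars conversion: wrapping Stripe's integer total in `Decimal` *before* dividing keeps the amount exact, whereas dividing floats first could introduce binary rounding. A standalone sketch:

```python
import decimal

def invoice_amount(total_cents):
    # Convert an integer cent amount to dollars, Decimal-first —
    # the same order of operations payment_succeeded uses.
    return decimal.Decimal(total_cents) / 100

print(repr(invoice_amount(1999)))  # Decimal('19.99')
```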
| gpl-3.0 |
petemounce/ansible | lib/ansible/utils/module_docs_fragments/openstack.py | 133 | 3961 | # Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
class ModuleDocFragment(object):
# Standard openstack documentation fragment
DOCUMENTATION = '''
options:
cloud:
description:
- Named cloud to operate against. Provides default values for I(auth) and
I(auth_type). This parameter is not needed if I(auth) is provided or if
OpenStack OS_* environment variables are present.
required: false
auth:
description:
- Dictionary containing auth information as needed by the cloud's auth
plugin strategy. For the default I(password) plugin, this would contain
I(auth_url), I(username), I(password), I(project_name) and any
information about domains if the cloud supports them. For other plugins,
this param will need to contain whatever parameters that auth plugin
requires. This parameter is not needed if a named cloud is provided or
OpenStack OS_* environment variables are present.
required: false
auth_type:
description:
- Name of the auth plugin to use. If the cloud uses something other than
password authentication, the name of the plugin should be indicated here
and the contents of the I(auth) parameter should be updated accordingly.
required: false
default: password
region_name:
description:
- Name of the region.
required: false
wait:
description:
- Should ansible wait until the requested resource is complete.
required: false
default: "yes"
choices: ["yes", "no"]
timeout:
description:
- How long should ansible wait for the requested resource.
required: false
default: 180
api_timeout:
description:
- How long should the socket layer wait before timing out for API calls.
If this is omitted, nothing will be passed to the requests library.
required: false
default: None
validate_certs:
description:
- Whether or not SSL API requests should be verified. Before 2.3 this defaulted to True.
required: false
default: null
aliases: ['verify']
cacert:
description:
- A path to a CA Cert bundle that can be used as part of verifying
SSL API requests.
required: false
default: None
cert:
description:
- A path to a client certificate to use as part of the SSL transaction.
required: false
default: None
key:
description:
- A path to a client key to use as part of the SSL transaction.
required: false
default: None
endpoint_type:
description:
- Endpoint URL type to fetch from the service catalog.
choices: [public, internal, admin]
required: false
default: public
requirements:
- python >= 2.7
- shade
notes:
- The standard OpenStack environment variables, such as C(OS_USERNAME)
may be used instead of providing explicit values.
- Auth information is driven by os-client-config, which means that values
can come from a yaml config file in /etc/ansible/openstack.yaml,
/etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from
standard environment variables, then finally by explicit parameters in
plays. More information can be found at
U(http://docs.openstack.org/developer/os-client-config)
'''
| gpl-3.0 |
Sweetgrassbuffalo/ReactionSweeGrass-v2 | .meteor/local/dev_bundle/python/Lib/test/test_macos.py | 45 | 2783 | import unittest
from test import test_support
import os
import subprocess
MacOS = test_support.import_module('MacOS')
TESTFN2 = test_support.TESTFN + '2'
class TestMacOS(unittest.TestCase):
@unittest.skipUnless(os.path.exists('/Developer/Tools/SetFile'),
'/Developer/Tools/SetFile does not exist')
def testGetCreatorAndType(self):
try:
fp = open(test_support.TESTFN, 'w')
fp.write('\n')
fp.close()
subprocess.call(
['/Developer/Tools/SetFile', '-t', 'ABCD', '-c', 'EFGH',
test_support.TESTFN])
cr, tp = MacOS.GetCreatorAndType(test_support.TESTFN)
self.assertEqual(tp, 'ABCD')
self.assertEqual(cr, 'EFGH')
finally:
os.unlink(test_support.TESTFN)
@unittest.skipUnless(os.path.exists('/Developer/Tools/GetFileInfo'),
'/Developer/Tools/GetFileInfo does not exist')
def testSetCreatorAndType(self):
try:
fp = open(test_support.TESTFN, 'w')
fp.write('\n')
fp.close()
MacOS.SetCreatorAndType(test_support.TESTFN,
'ABCD', 'EFGH')
cr, tp = MacOS.GetCreatorAndType(test_support.TESTFN)
self.assertEqual(cr, 'ABCD')
self.assertEqual(tp, 'EFGH')
data = subprocess.Popen(["/Developer/Tools/GetFileInfo", test_support.TESTFN],
stdout=subprocess.PIPE).communicate()[0]
tp = None
cr = None
for ln in data.splitlines():
if ln.startswith('type:'):
tp = ln.split()[-1][1:-1]
if ln.startswith('creator:'):
cr = ln.split()[-1][1:-1]
self.assertEqual(cr, 'ABCD')
self.assertEqual(tp, 'EFGH')
finally:
os.unlink(test_support.TESTFN)
def testOpenRF(self):
try:
fp = open(test_support.TESTFN, 'w')
fp.write('hello world\n')
fp.close()
rfp = MacOS.openrf(test_support.TESTFN, '*wb')
rfp.write('goodbye world\n')
rfp.close()
fp = open(test_support.TESTFN, 'r')
data = fp.read()
fp.close()
self.assertEqual(data, 'hello world\n')
rfp = MacOS.openrf(test_support.TESTFN, '*rb')
data = rfp.read(100)
data2 = rfp.read(100)
rfp.close()
self.assertEqual(data, 'goodbye world\n')
self.assertEqual(data2, '')
finally:
os.unlink(test_support.TESTFN)
def test_main():
test_support.run_unittest(TestMacOS)
if __name__ == '__main__':
test_main()
| gpl-3.0 |
vvv1559/intellij-community | python/lib/Lib/ihooks.py | 100 | 17335 | """Import hook support.
Consistent use of this module will make it possible to change the
different mechanisms involved in loading modules independently.
While the built-in module imp exports interfaces to the built-in
module searching and loading algorithm, and it is possible to replace
the built-in function __import__ in order to change the semantics of
the import statement, until now it has been difficult to combine the
effect of different __import__ hacks, like loading modules from URLs
by rimport.py, or restricted execution by rexec.py.
This module defines three new concepts:
1) A "file system hooks" class provides an interface to a filesystem.
One hooks class is defined (Hooks), which uses the interface provided
by standard modules os and os.path. It should be used as the base
class for other hooks classes.
2) A "module loader" class provides an interface to search for a
module in a search path and to load it. It defines a method which
searches for a module in a single directory; by overriding this method
one can redefine the details of the search. If the directory is None,
built-in and frozen modules are searched instead.
Two module loader class are defined, both implementing the search
strategy used by the built-in __import__ function: ModuleLoader uses
the imp module's find_module interface, while HookableModuleLoader
uses a file system hooks class to interact with the file system. Both
use the imp module's load_* interfaces to actually load the module.
3) A "module importer" class provides an interface to import a
module, as well as interfaces to reload and unload a module. It also
provides interfaces to install and uninstall itself instead of the
default __import__ and reload (and unload) functions.
One module importer class is defined (ModuleImporter), which uses a
module loader instance passed in (by default HookableModuleLoader is
instantiated).
The classes defined here should be used as base classes for extended
functionality along those lines.
If a module importer class supports dotted names, its import_module()
must return a different value depending on whether it is called on
behalf of a "from ... import ..." statement or not. (This is caused
by the way the __import__ hook is used by the Python interpreter.) It
would also be wise to install a different version of reload().
"""
import __builtin__
import imp
import os
import sys
__all__ = ["BasicModuleLoader","Hooks","ModuleLoader","FancyModuleLoader",
"BasicModuleImporter","ModuleImporter","install","uninstall"]
VERBOSE = 0
from imp import C_EXTENSION, PY_SOURCE, PY_COMPILED
from imp import C_BUILTIN, PY_FROZEN, PKG_DIRECTORY
BUILTIN_MODULE = C_BUILTIN
FROZEN_MODULE = PY_FROZEN
class _Verbose:
def __init__(self, verbose = VERBOSE):
self.verbose = verbose
def get_verbose(self):
return self.verbose
def set_verbose(self, verbose):
self.verbose = verbose
# XXX The following is an experimental interface
def note(self, *args):
if self.verbose:
self.message(*args)
def message(self, format, *args):
if args:
print format%args
else:
print format
class BasicModuleLoader(_Verbose):
"""Basic module loader.
This provides the same functionality as built-in import. It
doesn't deal with checking sys.modules -- all it provides is
find_module() and a load_module(), as well as find_module_in_dir()
which searches just one directory, and can be overridden by a
derived class to change the module search algorithm when the basic
dependency on sys.path is unchanged.
The interface is a little more convenient than imp's:
find_module(name, [path]) returns None or 'stuff', and
load_module(name, stuff) loads the module.
"""
def find_module(self, name, path = None):
if path is None:
path = [None] + self.default_path()
for dir in path:
stuff = self.find_module_in_dir(name, dir)
if stuff: return stuff
return None
def default_path(self):
return sys.path
def find_module_in_dir(self, name, dir):
if dir is None:
return self.find_builtin_module(name)
else:
try:
return imp.find_module(name, [dir])
except ImportError:
return None
def find_builtin_module(self, name):
# XXX frozen packages?
if imp.is_builtin(name):
return None, '', ('', '', BUILTIN_MODULE)
if imp.is_frozen(name):
return None, '', ('', '', FROZEN_MODULE)
return None
def load_module(self, name, stuff):
file, filename, info = stuff
try:
return imp.load_module(name, file, filename, info)
finally:
if file: file.close()
class Hooks(_Verbose):
"""Hooks into the filesystem and interpreter.
By deriving a subclass you can redefine your filesystem interface,
e.g. to merge it with the URL space.
This base class behaves just like the native filesystem.
"""
# imp interface
def get_suffixes(self): return imp.get_suffixes()
def new_module(self, name): return imp.new_module(name)
def is_builtin(self, name): return imp.is_builtin(name)
def init_builtin(self, name): return imp.init_builtin(name)
def is_frozen(self, name): return imp.is_frozen(name)
def init_frozen(self, name): return imp.init_frozen(name)
def get_frozen_object(self, name): return imp.get_frozen_object(name)
def load_source(self, name, filename, file=None):
return imp.load_source(name, filename, file)
def load_compiled(self, name, filename, file=None):
return imp.load_compiled(name, filename, file)
def load_dynamic(self, name, filename, file=None):
return imp.load_dynamic(name, filename, file)
def load_package(self, name, filename, file=None):
return imp.load_module(name, file, filename, ("", "", PKG_DIRECTORY))
def add_module(self, name):
d = self.modules_dict()
if name in d: return d[name]
d[name] = m = self.new_module(name)
return m
# sys interface
def modules_dict(self): return sys.modules
def default_path(self): return sys.path
def path_split(self, x): return os.path.split(x)
def path_join(self, x, y): return os.path.join(x, y)
def path_isabs(self, x): return os.path.isabs(x)
# etc.
def path_exists(self, x): return os.path.exists(x)
def path_isdir(self, x): return os.path.isdir(x)
def path_isfile(self, x): return os.path.isfile(x)
def path_islink(self, x): return os.path.islink(x)
# etc.
def openfile(self, *x): return open(*x)
openfile_error = IOError
def listdir(self, x): return os.listdir(x)
listdir_error = os.error
# etc.
class ModuleLoader(BasicModuleLoader):
"""Default module loader; uses file system hooks.
By defining suitable hooks, you might be able to load modules from
other sources than the file system, e.g. from compressed or
encrypted files, tar files or (if you're brave!) URLs.
"""
def __init__(self, hooks = None, verbose = VERBOSE):
BasicModuleLoader.__init__(self, verbose)
self.hooks = hooks or Hooks(verbose)
def default_path(self):
return self.hooks.default_path()
def modules_dict(self):
return self.hooks.modules_dict()
def get_hooks(self):
return self.hooks
def set_hooks(self, hooks):
self.hooks = hooks
def find_builtin_module(self, name):
# XXX frozen packages?
if self.hooks.is_builtin(name):
return None, '', ('', '', BUILTIN_MODULE)
if self.hooks.is_frozen(name):
return None, '', ('', '', FROZEN_MODULE)
return None
def find_module_in_dir(self, name, dir, allow_packages=1):
if dir is None:
return self.find_builtin_module(name)
if allow_packages:
fullname = self.hooks.path_join(dir, name)
if self.hooks.path_isdir(fullname):
stuff = self.find_module_in_dir("__init__", fullname, 0)
if stuff:
file = stuff[0]
if file: file.close()
return None, fullname, ('', '', PKG_DIRECTORY)
for info in self.hooks.get_suffixes():
suff, mode, type = info
fullname = self.hooks.path_join(dir, name+suff)
try:
fp = self.hooks.openfile(fullname, mode)
return fp, fullname, info
except self.hooks.openfile_error:
pass
return None
def load_module(self, name, stuff):
file, filename, info = stuff
(suff, mode, type) = info
try:
if type == BUILTIN_MODULE:
return self.hooks.init_builtin(name)
if type == FROZEN_MODULE:
return self.hooks.init_frozen(name)
if type == C_EXTENSION:
m = self.hooks.load_dynamic(name, filename, file)
elif type == PY_SOURCE:
m = self.hooks.load_source(name, filename, file)
elif type == PY_COMPILED:
m = self.hooks.load_compiled(name, filename, file)
elif type == PKG_DIRECTORY:
m = self.hooks.load_package(name, filename, file)
else:
raise ImportError, "Unrecognized module type (%r) for %s" % \
(type, name)
finally:
if file: file.close()
m.__file__ = filename
return m
class FancyModuleLoader(ModuleLoader):
"""Fancy module loader -- parses and execs the code itself."""
def load_module(self, name, stuff):
file, filename, (suff, mode, type) = stuff
realfilename = filename
path = None
if type == PKG_DIRECTORY:
initstuff = self.find_module_in_dir("__init__", filename, 0)
if not initstuff:
raise ImportError, "No __init__ module in package %s" % name
initfile, initfilename, initinfo = initstuff
initsuff, initmode, inittype = initinfo
if inittype not in (PY_COMPILED, PY_SOURCE):
if initfile: initfile.close()
raise ImportError, \
"Bad type (%r) for __init__ module in package %s" % (
inittype, name)
path = [filename]
file = initfile
realfilename = initfilename
type = inittype
if type == FROZEN_MODULE:
code = self.hooks.get_frozen_object(name)
elif type == PY_COMPILED:
import marshal
file.seek(8)
code = marshal.load(file)
elif type == PY_SOURCE:
data = file.read()
code = compile(data, realfilename, 'exec')
else:
return ModuleLoader.load_module(self, name, stuff)
m = self.hooks.add_module(name)
if path:
m.__path__ = path
m.__file__ = filename
try:
exec code in m.__dict__
except:
d = self.hooks.modules_dict()
if name in d:
del d[name]
raise
return m
class BasicModuleImporter(_Verbose):
"""Basic module importer; uses module loader.
This provides basic import facilities but no package imports.
"""
def __init__(self, loader = None, verbose = VERBOSE):
_Verbose.__init__(self, verbose)
self.loader = loader or ModuleLoader(None, verbose)
self.modules = self.loader.modules_dict()
def get_loader(self):
return self.loader
def set_loader(self, loader):
self.loader = loader
def get_hooks(self):
return self.loader.get_hooks()
def set_hooks(self, hooks):
return self.loader.set_hooks(hooks)
def import_module(self, name, globals={}, locals={}, fromlist=[]):
name = str(name)
if name in self.modules:
return self.modules[name] # Fast path
stuff = self.loader.find_module(name)
if not stuff:
raise ImportError, "No module named %s" % name
return self.loader.load_module(name, stuff)
def reload(self, module, path = None):
name = str(module.__name__)
stuff = self.loader.find_module(name, path)
if not stuff:
raise ImportError, "Module %s not found for reload" % name
return self.loader.load_module(name, stuff)
def unload(self, module):
del self.modules[str(module.__name__)]
# XXX Should this try to clear the module's namespace?
def install(self):
self.save_import_module = __builtin__.__import__
self.save_reload = __builtin__.reload
if not hasattr(__builtin__, 'unload'):
__builtin__.unload = None
self.save_unload = __builtin__.unload
__builtin__.__import__ = self.import_module
__builtin__.reload = self.reload
__builtin__.unload = self.unload
def uninstall(self):
__builtin__.__import__ = self.save_import_module
__builtin__.reload = self.save_reload
__builtin__.unload = self.save_unload
if not __builtin__.unload:
del __builtin__.unload
class ModuleImporter(BasicModuleImporter):
"""A module importer that supports packages."""
def import_module(self, name, globals=None, locals=None, fromlist=None):
parent = self.determine_parent(globals)
q, tail = self.find_head_package(parent, str(name))
m = self.load_tail(q, tail)
if not fromlist:
return q
if hasattr(m, "__path__"):
self.ensure_fromlist(m, fromlist)
return m
def determine_parent(self, globals):
if not globals or not "__name__" in globals:
return None
pname = globals['__name__']
if "__path__" in globals:
parent = self.modules[pname]
assert globals is parent.__dict__
return parent
if '.' in pname:
i = pname.rfind('.')
pname = pname[:i]
parent = self.modules[pname]
assert parent.__name__ == pname
return parent
return None
def find_head_package(self, parent, name):
if '.' in name:
i = name.find('.')
head = name[:i]
tail = name[i+1:]
else:
head = name
tail = ""
if parent:
qname = "%s.%s" % (parent.__name__, head)
else:
qname = head
q = self.import_it(head, qname, parent)
if q: return q, tail
if parent:
qname = head
parent = None
q = self.import_it(head, qname, parent)
if q: return q, tail
raise ImportError, "No module named " + qname
def load_tail(self, q, tail):
m = q
while tail:
i = tail.find('.')
if i < 0: i = len(tail)
head, tail = tail[:i], tail[i+1:]
mname = "%s.%s" % (m.__name__, head)
m = self.import_it(head, mname, m)
if not m:
raise ImportError, "No module named " + mname
return m
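load_tail above imports one dotted component at a time, so a name like `a.b.c` triggers imports of `a`, then `a.b`, then `a.b.c`. The sequence of qualified names it visits can be sketched with a small stand-alone helper (illustrative, not part of the importer):

```python
def walk_tail(qname):
    # Yield each fully qualified prefix of a dotted module name,
    # in the order load_tail imports them: "a", then "a.b", then "a.b.c".
    parts = qname.split(".")
    fq = parts[0]
    yield fq
    for p in parts[1:]:
        fq = fq + "." + p
        yield fq
```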
def ensure_fromlist(self, m, fromlist, recursive=0):
for sub in fromlist:
if sub == "*":
if not recursive:
try:
all = m.__all__
except AttributeError:
pass
else:
self.ensure_fromlist(m, all, 1)
continue
if sub != "*" and not hasattr(m, sub):
subname = "%s.%s" % (m.__name__, sub)
submod = self.import_it(sub, subname, m)
if not submod:
raise ImportError, "No module named " + subname
def import_it(self, partname, fqname, parent, force_load=0):
if not partname:
raise ValueError, "Empty module name"
if not force_load:
try:
return self.modules[fqname]
except KeyError:
pass
try:
path = parent and parent.__path__
except AttributeError:
return None
partname = str(partname)
stuff = self.loader.find_module(partname, path)
if not stuff:
return None
fqname = str(fqname)
m = self.loader.load_module(fqname, stuff)
if parent:
setattr(parent, partname, m)
return m
def reload(self, module):
name = str(module.__name__)
if '.' not in name:
return self.import_it(name, name, None, force_load=1)
i = name.rfind('.')
pname = name[:i]
parent = self.modules[pname]
return self.import_it(name[i+1:], name, parent, force_load=1)
default_importer = None
current_importer = None
def install(importer = None):
global current_importer
current_importer = importer or default_importer or ModuleImporter()
current_importer.install()
def uninstall():
global current_importer
current_importer.uninstall()
| apache-2.0 |
beck/django | tests/serializers_regress/models.py | 169 | 8611 | """
A test spanning all the capabilities of all the serializers.
This class sets up a model for each model field type
(except for image types, because of the Pillow dependency).
"""
from django.contrib.contenttypes.fields import (
GenericForeignKey, GenericRelation,
)
from django.contrib.contenttypes.models import ContentType
from django.db import models
# The following classes are for testing basic data
# marshalling, including NULL values, where allowed.
class BinaryData(models.Model):
data = models.BinaryField(null=True)
class BooleanData(models.Model):
data = models.BooleanField(default=False)
class CharData(models.Model):
data = models.CharField(max_length=30, null=True)
class DateData(models.Model):
data = models.DateField(null=True)
class DateTimeData(models.Model):
data = models.DateTimeField(null=True)
class DecimalData(models.Model):
data = models.DecimalField(null=True, decimal_places=3, max_digits=5)
class EmailData(models.Model):
data = models.EmailField(null=True)
class FileData(models.Model):
data = models.FileField(null=True, upload_to='/foo/bar')
class FilePathData(models.Model):
data = models.FilePathField(null=True)
class FloatData(models.Model):
data = models.FloatField(null=True)
class IntegerData(models.Model):
data = models.IntegerField(null=True)
class BigIntegerData(models.Model):
data = models.BigIntegerField(null=True)
# class ImageData(models.Model):
# data = models.ImageField(null=True)
class GenericIPAddressData(models.Model):
data = models.GenericIPAddressField(null=True)
class NullBooleanData(models.Model):
data = models.NullBooleanField(null=True)
class PositiveIntegerData(models.Model):
data = models.PositiveIntegerField(null=True)
class PositiveSmallIntegerData(models.Model):
data = models.PositiveSmallIntegerField(null=True)
class SlugData(models.Model):
data = models.SlugField(null=True)
class SmallData(models.Model):
data = models.SmallIntegerField(null=True)
class TextData(models.Model):
data = models.TextField(null=True)
class TimeData(models.Model):
data = models.TimeField(null=True)
class Tag(models.Model):
"""A tag on an item."""
data = models.SlugField()
content_type = models.ForeignKey(ContentType, models.CASCADE)
object_id = models.PositiveIntegerField()
content_object = GenericForeignKey()
class Meta:
ordering = ["data"]
class GenericData(models.Model):
data = models.CharField(max_length=30)
tags = GenericRelation(Tag)
# The following test classes are all for validation
# of related objects; in particular, forward, backward,
# and self references.
class Anchor(models.Model):
"""This is a model that can be used as
something for other models to point at"""
data = models.CharField(max_length=30)
class Meta:
ordering = ('id',)
class NaturalKeyAnchorManager(models.Manager):
def get_by_natural_key(self, data):
return self.get(data=data)
class NaturalKeyAnchor(models.Model):
objects = NaturalKeyAnchorManager()
data = models.CharField(max_length=100, unique=True)
title = models.CharField(max_length=100, null=True)
def natural_key(self):
return (self.data,)
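The manager/model pair above implements Django's natural-key contract: the serializer stores the output of `natural_key()` instead of the primary key, and the manager's `get_by_natural_key` resolves it back during deserialization. A minimal in-memory stand-in for that round trip (hypothetical classes, no database):

```python
class FakeAnchor:
    # Hypothetical stand-in for NaturalKeyAnchor: the natural key is
    # the tuple of fields that uniquely identify the row.
    def __init__(self, data):
        self.data = data
    def natural_key(self):
        return (self.data,)

class FakeAnchorManager:
    # Hypothetical stand-in for NaturalKeyAnchorManager: resolves a
    # natural key back to the object, as deserialization would.
    def __init__(self, objs):
        self._by_key = {o.natural_key(): o for o in objs}
    def get_by_natural_key(self, data):
        return self._by_key[(data,)]
```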
class UniqueAnchor(models.Model):
"""This is a model that can be used as
something for other models to point at"""
data = models.CharField(unique=True, max_length=30)
class FKData(models.Model):
data = models.ForeignKey(Anchor, models.SET_NULL, null=True)
class FKDataNaturalKey(models.Model):
data = models.ForeignKey(NaturalKeyAnchor, models.SET_NULL, null=True)
class M2MData(models.Model):
data = models.ManyToManyField(Anchor)
class O2OData(models.Model):
# One to one field can't be null here, since it is a PK.
data = models.OneToOneField(Anchor, models.CASCADE, primary_key=True)
class FKSelfData(models.Model):
data = models.ForeignKey('self', models.CASCADE, null=True)
class M2MSelfData(models.Model):
data = models.ManyToManyField('self', symmetrical=False)
class FKDataToField(models.Model):
data = models.ForeignKey(UniqueAnchor, models.SET_NULL, null=True, to_field='data')
class FKDataToO2O(models.Model):
data = models.ForeignKey(O2OData, models.SET_NULL, null=True)
class M2MIntermediateData(models.Model):
data = models.ManyToManyField(Anchor, through='Intermediate')
class Intermediate(models.Model):
left = models.ForeignKey(M2MIntermediateData, models.CASCADE)
right = models.ForeignKey(Anchor, models.CASCADE)
extra = models.CharField(max_length=30, blank=True, default="doesn't matter")
# The following test classes are for validating the
# deserialization of objects that use a user-defined
# field as the primary key.
# Some of these data types have been commented out
# because they can't be used as a primary key on one
# or all database backends.
class BooleanPKData(models.Model):
data = models.BooleanField(primary_key=True, default=False)
class CharPKData(models.Model):
data = models.CharField(max_length=30, primary_key=True)
# class DatePKData(models.Model):
# data = models.DateField(primary_key=True)
# class DateTimePKData(models.Model):
# data = models.DateTimeField(primary_key=True)
class DecimalPKData(models.Model):
data = models.DecimalField(primary_key=True, decimal_places=3, max_digits=5)
class EmailPKData(models.Model):
data = models.EmailField(primary_key=True)
# class FilePKData(models.Model):
# data = models.FileField(primary_key=True, upload_to='/foo/bar')
class FilePathPKData(models.Model):
data = models.FilePathField(primary_key=True)
class FloatPKData(models.Model):
data = models.FloatField(primary_key=True)
class IntegerPKData(models.Model):
data = models.IntegerField(primary_key=True)
# class ImagePKData(models.Model):
# data = models.ImageField(primary_key=True)
class GenericIPAddressPKData(models.Model):
data = models.GenericIPAddressField(primary_key=True)
# This is just a Boolean field with null=True, and we can't test a PK value of NULL.
# class NullBooleanPKData(models.Model):
# data = models.NullBooleanField(primary_key=True)
class PositiveIntegerPKData(models.Model):
data = models.PositiveIntegerField(primary_key=True)
class PositiveSmallIntegerPKData(models.Model):
data = models.PositiveSmallIntegerField(primary_key=True)
class SlugPKData(models.Model):
data = models.SlugField(primary_key=True)
class SmallPKData(models.Model):
data = models.SmallIntegerField(primary_key=True)
# class TextPKData(models.Model):
# data = models.TextField(primary_key=True)
# class TimePKData(models.Model):
# data = models.TimeField(primary_key=True)
class UUIDData(models.Model):
data = models.UUIDField(primary_key=True)
class FKToUUID(models.Model):
data = models.ForeignKey(UUIDData, models.CASCADE)
class ComplexModel(models.Model):
field1 = models.CharField(max_length=10)
field2 = models.CharField(max_length=10)
field3 = models.CharField(max_length=10)
# Tests for handling fields with pre_save functions, or
# models with save functions that modify data
class AutoNowDateTimeData(models.Model):
data = models.DateTimeField(null=True, auto_now=True)
class ModifyingSaveData(models.Model):
data = models.IntegerField(null=True)
def save(self, *args, **kwargs):
"""
A save method that modifies the data in the object.
Verifies that a user-defined save() method isn't called when objects
are deserialized (#4459).
"""
self.data = 666
super(ModifyingSaveData, self).save(*args, **kwargs)
# Tests for serialization of models using inheritance.
# Regression for #7202, #7350
class AbstractBaseModel(models.Model):
parent_data = models.IntegerField()
class Meta:
abstract = True
class InheritAbstractModel(AbstractBaseModel):
child_data = models.IntegerField()
class BaseModel(models.Model):
parent_data = models.IntegerField()
class InheritBaseModel(BaseModel):
child_data = models.IntegerField()
class ExplicitInheritBaseModel(BaseModel):
parent = models.OneToOneField(BaseModel, models.CASCADE)
child_data = models.IntegerField()
class ProxyBaseModel(BaseModel):
class Meta:
proxy = True
class ProxyProxyBaseModel(ProxyBaseModel):
class Meta:
proxy = True
class LengthModel(models.Model):
data = models.IntegerField()
def __len__(self):
return self.data
| bsd-3-clause |
unusedPhD/amoco | amoco/arch/x64/asm.py | 5 | 31540 | # -*- coding: utf-8 -*-
# This code is part of Amoco
# Copyright (C) 2006-2011 Axel Tillequin (bdcht3@gmail.com)
# published under GPLv2 license
from .env import *
from amoco.cas.utils import *
from amoco.logger import Log
logger = Log(__name__)
#------------------------------------------------------------------------------
# utils :
def push(fmap,x):
fmap[rsp] = fmap(rsp-x.length)
fmap[mem(rsp,x.size)] = x
def pop(fmap,l):
v = fmap(mem(rsp,l.size))
fmap[rsp] = fmap(rsp+l.length)
fmap[l] = v
def parity(x):
x = x.zeroextend(64)
x = x ^ (x>>1)
x = (x ^ (x>>2)) & 0x1111111111111111L
x = x * 0x1111111111111111L
p = (x>>60).bit(0)
return p
def parity8(x):
y = x ^ (x>>4)
y = cst(0x6996,16)>>(y[0:4])
p = y.bit(0)
return p
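The constant 0x6996 in parity8 is a 16-bit lookup table whose bit n equals popcount(n) & 1, and the fold `x ^ (x>>4)` compresses the byte into a nibble while preserving bit-count parity. The trick checks out exhaustively in plain Python (illustrative integers, not amoco expressions):

```python
def parity_bit(x):
    # Fold 8 bits into 4 (XOR preserves parity), then index the
    # 16-bit table 0x6996, whose bit n is popcount(n) & 1.
    y = (x ^ (x >> 4)) & 0xF
    return (0x6996 >> y) & 1
```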
def halfcarry(x,y,c=None):
s,carry,o = AddWithCarry(x[0:4],y[0:4],c)
return carry
def halfborrow(x,y,c=None):
s,carry,o = SubWithBorrow(x[0:4],y[0:4],c)
return carry
# see Intel doc vol.1 §3.4.1.1 about 32-bits operands.
def _r32_zx64(op1,x):
if op1.size==32 and op1._is_reg:
return (op1.x,x.zeroextend(64))
else:
return (op1,x)
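_r32_zx64 models the rule cited above: in 64-bit mode, a write to a 32-bit register zero-extends into the full 64-bit register, while 8- and 16-bit writes merge into the existing value. A concrete-integer sketch of that rule (illustrative helper, not the amoco API):

```python
MASK32 = (1 << 32) - 1

def write_subreg(old64, new, width):
    # 32-bit writes zero-extend to 64 bits; 8/16-bit writes merge into
    # the existing 64-bit value (Intel SDM vol.1 section 3.4.1.1).
    if width == 32:
        return new & MASK32
    mask = (1 << width) - 1
    return (old64 & ~mask) | (new & mask)
```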
#------------------------------------------------------------------------------
def i_BSWAP(i,fmap):
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
_t = fmap(dst)
if i.misc['REX'] and i.misc['REX'][0]==1:
fmap[dst[0 : 8]] = _t[56:64]
fmap[dst[8 :16]] = _t[48:56]
fmap[dst[16:24]] = _t[40:48]
fmap[dst[24:32]] = _t[32:40]
fmap[dst[32:40]] = _t[24:32]
fmap[dst[40:48]] = _t[16:24]
fmap[dst[48:56]] = _t[8 :16]
fmap[dst[56:64]] = _t[0 : 8]
else:
dst,_t = _r32_zx64(dst,_t)
fmap[dst] = _t
fmap[dst[0 : 8]] = _t[24:32]
fmap[dst[8 :16]] = _t[16:24]
fmap[dst[16:24]] = _t[8 :16]
fmap[dst[24:32]] = _t[0 : 8]
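The per-byte slicing in i_BSWAP is a byte-order reversal; on concrete values it matches a plain endianness swap (illustrative Python 3, not amoco expressions):

```python
def bswap(v, nbytes):
    # Reverse the byte order of an nbytes-wide integer, as BSWAP does.
    return int.from_bytes(v.to_bytes(nbytes, "little"), "big")
```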
def i_NOP(i,fmap):
fmap[rip] = fmap[rip]+i.length
def i_WAIT(i,fmap):
fmap[rip] = fmap[rip]+i.length
# LEAVE instruction is a shortcut for 'mov rsp,ebp ; pop ebp ;'
def i_LEAVE(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[rsp] = fmap[rbp]
pop(fmap,rbp)
def i_RET(i,fmap):
pop(fmap,rip)
def i_HLT(i,fmap):
fmap[rip] = top(64)
#------------------------------------------------------------------------------
def _ins_(i,fmap,l):
counter = ecx if i.misc['adrsz'] else rcx
loc = mem(rdi,l*8)
src = ext('IN',size=l*8).call(fmap,port=fmap(dx))
if i.misc['rep']:
fmap[loc] = tst(fmap(counter)==0, fmap(loc), src)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[loc] = src
fmap[rip] = fmap[rip]+i.length
fmap[rdi] = tst(fmap(df),fmap(rdi)-l,fmap(rdi)+l)
def i_INSB(i,fmap):
_ins_(i,fmap,1)
def i_INSW(i,fmap):
_ins_(i,fmap,2)
def i_INSD(i,fmap):
_ins_(i,fmap,4)
#------------------------------------------------------------------------------
def _outs_(i,fmap,l):
counter = ecx if i.misc['adrsz'] else rcx
src = fmap(mem(rsi,l*8))
ext('OUT').call(fmap,src=src)
if i.misc['rep']:
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[rip] = fmap[rip]+i.length
fmap[rsi] = tst(fmap(df),fmap(rsi)-l,fmap(rsi)+l)
def i_OUTSB(i,fmap):
_outs_(i,fmap,1)
def i_OUTSW(i,fmap):
_outs_(i,fmap,2)
def i_OUTSD(i,fmap):
_outs_(i,fmap,4)
#------------------------------------------------------------------------------
def i_INT3(i,fmap):
fmap[rip] = ext('INT3',size=64)
def i_CLC(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[cf] = bit0
def i_STC(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[cf] = bit1
def i_CLD(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[df] = bit0
def i_STD(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[df] = bit1
def i_CMC(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[cf] = ~fmap(cf)
def i_CBW(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[ax] = fmap(al).signextend(16)
def i_CWDE(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[rax] = fmap(ax).signextend(32).zeroextend(64)
def i_CDQE(i,fmap):
fmap[rip] = fmap[rip]+i.length
fmap[rax] = fmap(eax).signextend(64)
def i_CWD(i,fmap):
fmap[rip] = fmap[rip]+i.length
x = fmap(ax).signextend(32)
fmap[dx] = x[16:32]
fmap[ax] = x[0:16]
def i_CDQ(i,fmap):
fmap[rip] = fmap[rip]+i.length
x = fmap(eax).signextend(64)
fmap[rdx] = x[32:64].zeroextend(64)
fmap[rax] = x[0:32].zeroextend(64)
def i_CQO(i,fmap):
fmap[rip] = fmap[rip]+i.length
x = fmap(rax).signextend(128)
fmap[rdx] = x[64:128]
fmap[rax] = x[0:64]
def i_PUSHFQ(i,fmap):
fmap[rip] = fmap[rip]+i.length
push(fmap,fmap(rflags)&0x0000000000fcffffL)
def i_POPFQ(i,fmap):
fmap[rip] = fmap[rip]+i.length
pop(fmap,rflags)
#------------------------------------------------------------------------------
def _cmps_(i,fmap,l):
counter,d,s = (ecx,edi,esi) if i.misc['adrsz'] else (rcx,rdi,rsi)
dst = fmap(mem(d,l*8))
src = fmap(mem(s,l*8))
x, carry, overflow = SubWithBorrow(dst,src)
if i.misc['rep']:
fmap[af] = tst(fmap(counter)==0, fmap(af), halfborrow(dst,src))
fmap[pf] = tst(fmap(counter)==0, fmap(pf), parity8(x[0:8]))
fmap[zf] = tst(fmap(counter)==0, fmap(zf), x==0)
fmap[sf] = tst(fmap(counter)==0, fmap(sf), x<0)
fmap[cf] = tst(fmap(counter)==0, fmap(cf), carry)
fmap[of] = tst(fmap(counter)==0, fmap(of), overflow)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[af] = halfborrow(dst,src)
fmap[pf] = parity8(x[0:8])
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
fmap[rip] = fmap[rip]+i.length
fmap[d] = fmap(tst(df,d-l,d+l))
fmap[s] = fmap(tst(df,s-l,s+l))
def i_CMPSB(i,fmap):
_cmps_(i,fmap,1)
def i_CMPSW(i,fmap):
_cmps_(i,fmap,2)
def i_CMPSD(i,fmap):
_cmps_(i,fmap,4)
def i_CMPSQ(i,fmap):
_cmps_(i,fmap,8)
#------------------------------------------------------------------------------
def _scas_(i,fmap,l):
counter,d = (ecx,edi) if i.misc['adrsz'] else (rcx,rdi)
a = fmap({1:al, 2:ax, 4:eax, 8:rax}[l])
src = fmap(mem(d,l*8))
x, carry, overflow = SubWithBorrow(a,src)
if i.misc['rep']:
fmap[af] = tst(fmap(counter)==0, fmap(af), halfborrow(a,src))
fmap[pf] = tst(fmap(counter)==0, fmap(pf), parity8(x[0:8]))
fmap[zf] = tst(fmap(counter)==0, fmap(zf), x==0)
fmap[sf] = tst(fmap(counter)==0, fmap(sf), x<0)
fmap[cf] = tst(fmap(counter)==0, fmap(cf), carry)
fmap[of] = tst(fmap(counter)==0, fmap(of), overflow)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[af] = halfborrow(a,src)
fmap[pf] = parity8(x[0:8])
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
fmap[rip] = fmap[rip]+i.length
fmap[d] = tst(fmap(df),fmap(d)-l,fmap(d)+l)
def i_SCASB(i,fmap):
_scas_(i,fmap,1)
def i_SCASW(i,fmap):
_scas_(i,fmap,2)
def i_SCASD(i,fmap):
_scas_(i,fmap,4)
def i_SCASQ(i,fmap):
_scas_(i,fmap,8)
#------------------------------------------------------------------------------
def _lods_(i,fmap,l):
counter,s = (ecx,esi) if i.misc['adrsz'] else (rcx,rsi)
loc = {1:al, 2:ax, 4:eax, 8:rax}[l]
src = fmap(mem(s,l*8))
if i.misc['rep']:
fmap[loc] = tst(fmap(counter)==0, fmap(loc), src)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[loc] = src
fmap[rip] = fmap[rip]+i.length
fmap[s] = fmap(tst(df,s-l,s+l))
def i_LODSB(i,fmap):
_lods_(i,fmap,1)
def i_LODSW(i,fmap):
_lods_(i,fmap,2)
def i_LODSD(i,fmap):
_lods_(i,fmap,4)
def i_LODSQ(i,fmap):
_lods_(i,fmap,8)
#------------------------------------------------------------------------------
def _stos_(i,fmap,l):
if i.misc['adrsz']==32:
counter,d = ecx,edi
else:
counter,d = rcx,rdi
loc = mem(d,l*8)
src = fmap({1:al, 2:ax, 4:eax, 8:rax}[l])
if i.misc['rep']:
fmap[loc] = tst(fmap(counter)==0, fmap(loc), src)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[loc] = src
fmap[rip] = fmap[rip]+i.length
fmap[d] = tst(fmap(df),fmap(d)-l,fmap(d)+l)
def i_STOSB(i,fmap):
_stos_(i,fmap,1)
def i_STOSW(i,fmap):
_stos_(i,fmap,2)
def i_STOSD(i,fmap):
_stos_(i,fmap,4)
def i_STOSQ(i,fmap):
_stos_(i,fmap,8)
#------------------------------------------------------------------------------
def _movs_(i,fmap,l):
if i.misc['adrsz']==32:
counter,d,s = ecx,edi,esi
else:
counter,d,s = rcx,rdi,rsi
loc = mem(d,l*8)
src = fmap(mem(s,l*8))
if i.misc['rep']:
fmap[loc] = tst(fmap(counter)==0, fmap(loc), src)
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(counter)==0, fmap[rip]+i.length, fmap[rip])
else:
fmap[loc] = src
fmap[rip] = fmap[rip]+i.length
fmap[s] = tst(fmap(df),fmap(s)-l,fmap(s)+l)
fmap[d] = tst(fmap(df),fmap(d)-l,fmap(d)+l)
def i_MOVSB(i,fmap):
_movs_(i,fmap,1)
def i_MOVSW(i,fmap):
_movs_(i,fmap,2)
def i_MOVSD(i,fmap):
_movs_(i,fmap,4)
def i_MOVSQ(i,fmap):
_movs_(i,fmap,8)
#------------------------------------------------------------------------------
def i_IN(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = i.operands[0]
op2 = fmap(i.operands[1])
x = ext('IN%s'%op2,op1.size).call(fmap)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_OUT(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
op2 = fmap(i.operands[1])
ext('OUT%s'%op1).call(fmap,arg=op2)
#op1_src retrieves fmap[op1] (op1 value):
def i_PUSH(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
if op1.size==8: op1 = op1.signextend(64)
push(fmap,op1)
#op1_dst retrieves op1 location:
def i_POP(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = i.operands[0]
pop(fmap,op1)
def i_CALL(i,fmap):
pc = fmap[rip]+i.length
push(fmap,pc)
op1 = fmap(i.operands[0])
op1 = op1.signextend(pc.size)
target = pc+op1 if not i.misc['absolute'] else op1
fmap[rip] = target
def i_CALLF(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
pc = fmap[rip]+i.length
def i_JMP(i,fmap):
pc = fmap[rip]+i.length
fmap[rip] = pc
op1 = fmap(i.operands[0])
op1 = op1.signextend(pc.size)
target = pc+op1 if not i.misc['absolute'] else op1
fmap[rip] = target
def i_JMPF(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
pc = fmap[rip]+i.length
#------------------------------------------------------------------------------
def _loop_(i,fmap,cond_f):
# cond_f builds the branch condition from the selected counter register,
# so the condition is constructed after the counter (ecx/rcx) is known.
opdsz = 16 if i.misc['opdsz'] else 64
src = i.operands[0].signextend(64)
loc = fmap[rip]+src
loc = loc[0:opdsz].zeroextend(64)
counter = ecx if i.misc['adrsz'] else rcx
fmap[counter] = fmap(counter)-1
fmap[rip] = tst(fmap(cond_f(counter)), loc, fmap[rip]+i.length)
def i_LOOP(i,fmap):
_loop_(i,fmap,lambda counter: counter!=0)
def i_LOOPE(i,fmap):
_loop_(i,fmap,lambda counter: zf&(counter!=0))
def i_LOOPNE(i,fmap):
_loop_(i,fmap,lambda counter: (~zf)&(counter!=0))
#------------------------------------------------------------------------------
def i_LSL(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_LTR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
#######################
def i_Jcc(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
op1 = op1.signextend(rip.size)
cond = i.cond[1]
fmap[rip] = tst(fmap(cond),fmap[rip]+op1,fmap[rip])
def i_JRCXZ(i,fmap):
pc = fmap[rip]+i.length
fmap[rip] = pc
op1 = fmap(i.operands[0])
op1 = op1.signextend(pc.size)
cond = (rcx==0)
target = tst(fmap(cond),fmap[rip]+op1,fmap[rip])
fmap[rip] = target
def i_JECXZ(i,fmap):
pc = fmap[rip]+i.length
fmap[rip] = pc
op1 = fmap(i.operands[0])
op1 = op1.signextend(pc.size)
cond = (ecx==0)
target = tst(fmap(cond),fmap[rip]+op1,fmap[rip])
fmap[rip] = target
def i_RETN(i,fmap):
src = i.operands[0].v
pop(fmap,rip)
fmap[rsp] = fmap(rsp)+src
def i_INT(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
push(fmap,fmap[rip])
fmap[rip] = ext('INT',port=op1,size=64)
def i_INC(i,fmap):
op1 = i.operands[0]
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
b = cst(1,a.size)
x,carry,overflow = AddWithCarry(a,b)
#cf not affected
fmap[af] = halfcarry(a,b)
fmap[pf] = parity8(x[0:8])
fmap[zf] = x==0
fmap[sf] = x<0
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_DEC(i,fmap):
op1 = i.operands[0]
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
b = cst(1,a.size)
x,carry,overflow = SubWithBorrow(a,b)
#cf not affected
fmap[af] = halfborrow(a,b)
fmap[pf] = parity8(x[0:8])
fmap[zf] = x==0
fmap[sf] = x<0
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_NEG(i,fmap):
op1 = i.operands[0]
fmap[rip] = fmap[rip]+i.length
a = cst(0,op1.size)
b = fmap(op1)
x,carry,overflow = SubWithBorrow(a,b)
fmap[af] = halfborrow(a,b)
fmap[pf] = parity8(x[0:8])
fmap[cf] = b!=0
fmap[zf] = x==0
fmap[sf] = x<0
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_NOT(i,fmap):
op1 = i.operands[0]
fmap[rip] = fmap[rip]+i.length
x = ~fmap(op1)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_SETcc(i,fmap):
op1 = i.operands[0]
fmap[rip] = fmap[rip]+i.length
x = tst(fmap(i.cond[1]),cst(1,op1.size),cst(0,op1.size))
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_MOV(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
op1,op2 = _r32_zx64(op1,op2)
fmap[op1] = op2
def i_MOVBE(i,fmap):
dst = i.operands[0]
_t = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
if i.misc['opdsz']==16:
fmap[dst[0 : 8]] = _t[8 :16]
fmap[dst[8 :16]] = _t[0 : 8]
else:
fmap[dst[0 : 8]] = _t[56:64]
fmap[dst[8 :16]] = _t[48:56]
fmap[dst[16:24]] = _t[40:48]
fmap[dst[24:32]] = _t[32:40]
fmap[dst[32:40]] = _t[24:32]
fmap[dst[40:48]] = _t[16:24]
fmap[dst[48:56]] = _t[8 :16]
fmap[dst[56:64]] = _t[0 : 8]
def i_MOVSX(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
x = op2.signextend(op1.size)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_MOVSXD(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
fmap[op1] = op2.signextend(op1.size)
def i_MOVZX(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
x = op2.zeroextend(op1.size)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_ADC(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
a=fmap(op1)
c = fmap(cf)
x,carry,overflow = AddWithCarry(a,op2,c)
fmap[pf] = parity8(x[0:8])
fmap[af] = halfcarry(a,op2,c)
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_ADD(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
a=fmap(op1)
x,carry,overflow = AddWithCarry(a,op2)
fmap[pf] = parity8(x[0:8])
fmap[af] = halfcarry(a,op2)
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_SBB(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
a=fmap(op1)
c=fmap(cf)
x,carry,overflow = SubWithBorrow(a,op2,c)
fmap[pf] = parity8(x[0:8])
fmap[af] = halfborrow(a,op2,c)
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_SUB(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
a=fmap(op1)
x,carry,overflow = SubWithBorrow(a,op2)
fmap[pf] = parity8(x[0:8])
fmap[af] = halfborrow(a,op2)
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_AND(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
if op2.size<op1.size:
op2 = op2.signextend(op1.size)
x=fmap(op1)&op2
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = bit0
fmap[of] = bit0
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_OR(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
x=fmap(op1)|op2
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = bit0
fmap[of] = bit0
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_XOR(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = i.operands[0]
op2 = fmap(i.operands[1])
x=fmap(op1)^op2
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = bit0
fmap[of] = bit0
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_CMP(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
op2 = fmap(i.operands[1])
x, carry, overflow = SubWithBorrow(op1,op2)
fmap[af] = halfborrow(op1,op2)
fmap[zf] = x==0
fmap[sf] = x<0
fmap[cf] = carry
fmap[of] = overflow
fmap[pf] = parity8(x[0:8])
def i_CMPXCHG(i,fmap):
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
acc = {8:al,16:ax,32:eax,64:rax}[dst.size]
t = fmap(acc==dst)
fmap[zf] = tst(t,bit1,bit0)
if dst.size==32 and dst._is_reg:
x = fmap(src).zeroextend(64)
v = fmap(dst).zeroextend(64)
dst = dst.x
acc = rax
else:
x = fmap(src)
v = fmap(dst)
fmap[dst] = tst(t,x,fmap(dst))
fmap[acc] = tst(t,fmap(acc),v)
def i_CMPXCHG8B(i,fmap):
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
src = composer([ebx,ecx])
acc = composer([eax,edx])
t = fmap(acc==dst)
fmap[zf] = tst(t,bit1,bit0)
v = fmap(dst)
fmap[dst] = tst(t,fmap(src),v)
fmap[rax] = tst(t,fmap(rax),v[0:32].zeroextend(64))
fmap[rdx] = tst(t,fmap(rdx),v[32:64].zeroextend(64))
def i_CMPXCHG16B(i,fmap):
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
src = composer([rbx,rcx])
acc = composer([rax,rdx])
t = fmap(acc==dst)
fmap[zf] = tst(t,bit1,bit0)
v = fmap(dst)
fmap[dst] = tst(t,fmap(src),v)
fmap[rax] = tst(t,fmap(rax),v[0:64])
fmap[rdx] = tst(t,fmap(rdx),v[64:128])
def i_TEST(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = fmap(i.operands[0])
op2 = fmap(i.operands[1])
x = op1&op2
fmap[zf] = x==0
fmap[sf] = x[x.size-1:x.size]
fmap[cf] = bit0
fmap[of] = bit0
fmap[pf] = parity8(x[0:8])
def i_LEA(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = i.operands[0]
op2 = i.operands[1]
adr = op2.addr(fmap)
if op1.size>adr.size: adr = adr.zeroextend(op1.size)
elif op1.size<adr.size: adr = adr[0:op1.size]
fmap[op1] = adr
def i_XCHG(i,fmap):
fmap[rip] = fmap[rip]+i.length
op1 = i.operands[0]
op2 = i.operands[1]
tmp1 = fmap(op1)
tmp2 = fmap(op2)
op1,tmp2 = _r32_zx64(op1,tmp2)
fmap[op1] = tmp2
op2,tmp1 = _r32_zx64(op2,tmp1)
fmap[op2] = tmp1
def i_SHR(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
count = fmap(i.operands[1]&mask)
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
if count._is_cst:
if count.value==0: return # flags unchanged
if count.value==1:
fmap[of] = a.bit(-1) # MSB of a
else:
fmap[of] = top(1)
if count.value<=a.size:
fmap[cf] = a.bit(count.value-1)
else:
fmap[cf] = bit0
else:
fmap[cf] = top(1)
fmap[of] = top(1)
res = a>>count
fmap[sf] = (res<0)
fmap[zf] = (res==0)
fmap[pf] = parity8(res[0:8])
op1,res = _r32_zx64(op1,res)
fmap[op1] = res
def i_SAR(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
count = fmap(i.operands[1]&mask)
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
if count._is_cst:
if count.value==0: return
if count.value==1:
fmap[of] = bit0
else:
fmap[of] = top(1)
if count.value<=a.size:
fmap[cf] = a.bit(count.value-1)
else:
fmap[cf] = a.bit(-1)
else:
fmap[cf] = top(1)
fmap[of] = top(1)
res = a//count # (// is used as arithmetic shift in cas.py)
fmap[sf] = (res<0)
fmap[zf] = (res==0)
fmap[pf] = parity8(res[0:8])
op1,res = _r32_zx64(op1,res)
fmap[op1] = res
def i_SHL(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
count = fmap(i.operands[1]&mask)
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x = a<<count
if count._is_cst:
if count.value==0: return
if count.value==1:
fmap[of] = x.bit(-1)^fmap(cf)
else:
fmap[of] = top(1)
if count.value<=a.size:
fmap[cf] = a.bit(a.size-count.value)
else:
fmap[cf] = bit0
else:
fmap[cf] = top(1)
fmap[of] = top(1)
fmap[sf] = (x<0)
fmap[zf] = (x==0)
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
i_SAL = i_SHL
def i_ROL(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
size = op1.size
count = fmap(i.operands[1]&mask)%size
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x = ROL(a,count)
if count._is_cst:
if count.value==0: return
fmap[cf] = x.bit(0)
if count.value==1:
fmap[of] = x.bit(-1)^fmap(cf)
else:
fmap[of] = top(1)
else:
fmap[cf] = top(1)
fmap[of] = top(1)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_ROR(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
size = op1.size
count = fmap(i.operands[1]&mask)%size
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x = ROR(a,count)
if count._is_cst:
if count.value==0: return
fmap[cf] = x.bit(-1)
if count.value==1:
fmap[of] = x.bit(-1)^x.bit(-2)
else:
fmap[of] = top(1)
else:
fmap[cf] = top(1)
fmap[of] = top(1)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
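On concrete values the ROL/ROR handlers reduce to the usual mask-and-shift rotation; a minimal sketch of what ROR computes for a size-bit operand (plain integers, not amoco expressions):

```python
def ror(v, n, size):
    # Rotate a size-bit value v right by n positions.
    mask = (1 << size) - 1
    v &= mask
    n %= size
    if n == 0:
        return v
    return ((v >> n) | (v << (size - n))) & mask
```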
def i_RCL(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
size = op1.size
if size<32: size=size+1 # count cf
count = fmap(i.operands[1]&mask)%size
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x,carry = ROLWithCarry(a,count,fmap(cf))
if count._is_cst:
if count.value==0: return
fmap[cf] = carry
if count.value==1:
fmap[of] = x.bit(-1)^fmap(cf)
else:
fmap[of] = top(1)
else:
fmap[cf] = top(1)
fmap[of] = top(1)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_RCR(i,fmap):
REX = i.misc['REX']
W=0
if REX: W=REX[0]
mask = 0x3f if W==1 else 0x1f
op1 = i.operands[0]
size = op1.size
if size<32: size=size+1 # count cf
count = fmap(i.operands[1]&mask)%size
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x,carry = RORWithCarry(a,count,fmap(cf))
if count._is_cst:
if count.value==0: return
if count.value==1:
fmap[of] = a.bit(-1)^fmap(cf)
else:
fmap[of] = top(1)
else:
fmap[of] = top(1)
fmap[cf] = carry
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_CMOVcc(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
fmap[rip] = fmap[rip]+i.length
a = fmap(op1)
x = tst(fmap(i.cond[1]),op2,a)
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_SHRD(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
op3 = fmap(i.operands[2])
fmap[rip] = fmap[rip]+i.length
# op3 is a cst:
n = op3.value
r = op1.size-n
x = (fmap(op1)>>n) | (op2<<r)
fmap[sf] = (x<0)
fmap[zf] = (x==0)
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
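With a constant count n, SHRD shifts op1 right while the vacated high bits are filled from op2, exactly as the expression above builds. Concretely (plain integers, not amoco expressions):

```python
def shrd(dst, src, n, size):
    # Double-precision right shift: low part from dst, high bits from src.
    mask = (1 << size) - 1
    return (((dst & mask) >> n) | ((src & mask) << (size - n))) & mask
```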
def i_SHLD(i,fmap):
op1 = i.operands[0]
op2 = fmap(i.operands[1])
op3 = fmap(i.operands[2])
fmap[rip] = fmap[rip]+i.length
n = op3.value
r = op1.size-n
x = (fmap(op1)<<n) | (op2>>r)
fmap[sf] = (x<0)
fmap[zf] = (x==0)
fmap[pf] = parity8(x[0:8])
op1,x = _r32_zx64(op1,x)
fmap[op1] = x
def i_IMUL(i,fmap):
fmap[rip] = fmap[rip]+i.length
if len(i.operands)==1:
src = i.operands[0]
m,d = {8:(al,ah), 16:(ax,dx), 32:(eax,edx), 64:(rax,rdx)}[src.size]
r = fmap(m**src)
elif len(i.operands)==2:
dst,src = i.operands
m = d = dst
r = fmap(dst**src)
else:
dst,src,imm = i.operands
m = d = dst
r = fmap(src**imm.signextend(src.size))
lo = r[0:src.size]
hi = r[src.size:r.size]
fmap[cf] = hi!=(lo>>(src.size-1)) # flags clear only if hi is the sign-extension of lo
fmap[of] = hi!=(lo>>(src.size-1))
d,hi = _r32_zx64(d,hi)
fmap[d] = hi
m,lo = _r32_zx64(m,lo)
fmap[m] = lo
def i_MUL(i,fmap):
fmap[rip] = fmap[rip]+i.length
src = i.operands[0]
m,d = {8:(al,ah), 16:(ax,dx), 32:(eax,edx), 64:(rax,rdx)}[src.size]
r = fmap(m**src)
lo = r[0:src.size]
hi = r[src.size:r.size]
fmap[cf] = hi!=0
fmap[of] = hi!=0
d,hi = _r32_zx64(d,hi)
fmap[d] = hi
m,lo = _r32_zx64(m,lo)
fmap[m] = lo
def i_RDRAND(i,fmap):
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
fmap[dst] = top(dst.size)
fmap[cf] = top(1)
for f in (of,sf,zf,af,pf): fmap[f] = bit0
def i_RDTSC(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
fmap[rdx] = top(64)
fmap[rax] = top(64)
def i_RDTSCP(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
fmap[rdx] = top(64)
fmap[rax] = top(64)
fmap[rcx] = top(64)
def i_BOUND(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# #UD #BR exceptions not implemented
def i_LFENCE(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_MFENCE(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_SFENCE(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_MWAIT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_LGDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_SGDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_LIDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_SIDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_LLDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_SLDT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_LMSW(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
fmap[cr(0)[0:16]] = top(16)
def i_SMSW(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
fmap[dst] = top(16)
def i_BSF(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
x = fmap(src)
fmap[zf] = x==0
fmap[dst] = top(dst.size)
def i_BSR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
x = fmap(src)
fmap[zf] = x==0
fmap[dst] = top(dst.size)
def i_POPCNT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
dst,src = i.operands
fmap[dst] = top(dst.size)
fmap[cf] = bit0
fmap[of] = bit0
fmap[sf] = bit0
fmap[af] = bit0
fmap[zf] = fmap(src)==0
fmap[rip] = fmap[rip]+i.length
def i_LZCNT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
dst,src = i.operands
fmap[dst] = top(dst.size)
fmap[cf] = fmap[zf] = top(1)
fmap[rip] = fmap[rip]+i.length
def i_TZCNT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
dst,src = i.operands
fmap[dst] = top(dst.size)
fmap[cf] = fmap[zf] = top(1)
fmap[rip] = fmap[rip]+i.length
def i_BT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
fmap[cf] = top(1)
def i_BTC(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
fmap[cf] = top(1)
fmap[dst] = top(dst.size)
def i_BTR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
fmap[cf] = top(1)
fmap[dst] = top(dst.size)
def i_BTS(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
fmap[cf] = top(1)
fmap[dst] = top(dst.size)
def i_CLFLUSH(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_INVD(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_INVLPG(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_CLI(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# interruptions not supported
def i_PREFETCHT0(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_PREFETCHT1(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_PREFETCHT2(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_PREFETCHNTA(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_PREFETCHW(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
# cache not supported
def i_LAR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst,src = i.operands
fmap[zf] = top(1)
fmap[dst] = top(dst.size)
def i_STR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
dst = i.operands[0]
fmap[dst] = top(dst.size)
def i_RDMSR(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_RDPMC(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_RSM(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = fmap[rip]+i.length
def i_SYSENTER(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = top(64)
fmap[rsp] = top(64)
fmap[cs] = top(16)
fmap[ss] = top(16)
def i_SYSEXIT(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = top(64)
fmap[rsp] = top(64)
fmap[cs] = top(16)
fmap[ss] = top(16)
def i_SYSCALL(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = top(64)
fmap[rcx] = top(64)
fmap[r11] = top(64)
def i_SYSRET(i,fmap):
logger.verbose('%s semantic is not defined'%i.mnemonic)
fmap[rip] = top(64)
fmap[rsp] = top(64)
| gpl-2.0 |
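The i_RCL and i_RCR handlers above model the x86 rotate-through-carry instructions over symbolic expressions. As a concrete cross-check, the same semantics can be sketched on plain Python integers (an 8-bit width is assumed for illustration; `rcl` and `rcr` are hypothetical helpers, not part of the module above):

```python
def rcl(value, cf, count, width=8):
    """Rotate left through carry: the CF bit joins the rotation ring."""
    mask = (1 << width) - 1
    count %= width + 1                        # the ring is width+1 bits (value + CF)
    for _ in range(count):
        new_cf = (value >> (width - 1)) & 1   # MSB shifts out into CF
        value = ((value << 1) | cf) & mask    # old CF shifts in at bit 0
        cf = new_cf
    return value, cf

def rcr(value, cf, count, width=8):
    """Rotate right through carry (the inverse of rcl)."""
    mask = (1 << width) - 1
    count %= width + 1
    for _ in range(count):
        new_cf = value & 1                    # LSB shifts out into CF
        value = (value >> 1) | (cf << (width - 1))
        cf = new_cf
    return value, cf
```

The modulus `width + 1` mirrors the `size = size + 1  # count cf` adjustment in the handlers: CF participates in the rotation ring, so the ring is one bit wider than the operand.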
googleapis/releasetool | releasetool/commands/tag/dotnet.py | 1 | 4710 | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import getpass
import re
from typing import Union
import click
import releasetool.git
import releasetool.github
import releasetool.secrets
import releasetool.commands.common
from releasetool.commands.common import TagContext
RELEASE_LINE_PATTERN = r"^(?:- )?Release ([^ ]*) version (\d+\.\d+\.\d+(-[^ ]*)?)$"
def determine_release_pr(ctx: TagContext) -> None:
click.secho(
"> Let's figure out which pull request corresponds to your release.", fg="cyan"
)
pulls = ctx.github.list_pull_requests(ctx.upstream_repo, state="closed")
pulls = [pull for pull in pulls if "release" in pull["title"].lower()][:30]
click.secho("> Please pick one of the following PRs:\n")
for n, pull in enumerate(pulls, 1):
print(f"\t{n}: {pull['title']} ({pull['number']})")
pull_idx = click.prompt("\nWhich one do you want to tag and release?", type=int)
ctx.release_pr = pulls[pull_idx - 1]
def create_releases(ctx: TagContext) -> None:
click.secho("> Creating the release.")
commitish = ctx.release_pr["merge_commit_sha"]
title = ctx.release_pr["title"]
body_lines = ctx.release_pr["body"].splitlines()
all_lines = [title] + body_lines
pr_comment = ""
for line in all_lines:
match = re.search(RELEASE_LINE_PATTERN, line)
if match is not None:
package = match.group(1)
version = match.group(2)
tag = package + "-" + version
ctx.github.create_release(
repository=ctx.upstream_repo,
tag_name=tag,
target_commitish=commitish,
name=f"{package} version {version}",
# TODO: either reformat the message as we do in TagReleases,
# or make sure we create the PR with an "already-formatted"
# body. (The latter is probably simpler, and will make the
# PR easier to read anyway.)
body=ctx.release_pr["body"],
# Versions like "1.0.0-beta01" or "0.9.0" are prerelease
prerelease="-" in version or version.startswith("0."),
)
click.secho(f"Created release for {tag}")
pr_comment = pr_comment + f"- Created release for {tag}\n"
if pr_comment == "":
raise ValueError("No releases found within pull request")
ctx.github.create_pull_request_comment(
ctx.upstream_repo, ctx.release_pr["number"], pr_comment
)
# This isn't a tag, but that's okay - it just needs to be a commitish for
# Kokoro to build against.
ctx.release_tag = commitish
repo_short_name = ctx.upstream_repo.split("/")[-1]
ctx.kokoro_job_name = f"cloud-sharp/{repo_short_name}/gcp_windows/autorelease"
ctx.github.update_pull_labels(
ctx.release_pr, add=["autorelease: tagged"], remove=["autorelease: pending"]
)
releasetool.commands.common.publish_via_kokoro(ctx)
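Each release line in the PR title or body is matched against RELEASE_LINE_PATTERN to recover a package name and version. A small sketch of that matching (the pattern is reproduced here with every dot escaped; the helper names are illustrative, and the Spanner package names echo the comment further down in this module):

```python
import re

# One release line per package; the leading "- " is optional.
pattern = r"^(?:- )?Release ([^ ]*) version (\d+\.\d+\.\d+(-[^ ]*)?)$"

def parse_release_line(line):
    """Return (package, version) if the line announces a release, else None."""
    match = re.search(pattern, line)
    if match is None:
        return None
    return match.group(1), match.group(2)

def is_prerelease(version):
    """Mirror of the handler's rule: '-' suffixes and 0.x versions are prereleases."""
    return "-" in version or version.startswith("0.")

stable = parse_release_line("Release Google.Cloud.Spanner.V1 version 3.5.0")
beta = parse_release_line("- Release Google.Cloud.Spanner.Data version 1.0.0-beta01")
```

Lines that do not announce a release (for example "Bump dependencies") simply fail the match and are skipped, which is why the handler raises when no release line is found at all.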
def kokoro_job_name(upstream_repo: str, package_name: str) -> Union[str, None]:
"""Return the Kokoro job name.
Args:
upstream_repo (str): The GitHub repo in the form of `<owner>/<repo>`
package_name (str): The name of package to release
Returns:
The name of the Kokoro job to trigger or None if there is no job to trigger
"""
return None
def package_name(pull: dict) -> Union[str, None]:
return None
# Note: unlike other languages, the .NET libraries may need multiple
# tags for a single release PR, usually for dependent APIs, e.g.
# Google.Cloud.Spanner.Data depending on Google.Cloud.Spanner.V1.
# We create multiple releases in the create_releases function, and set
# ctx.release_tag to the commit we've tagged (as all tags will use the same commit).
def tag(ctx: TagContext = None) -> TagContext:
if not ctx:
ctx = TagContext()
if ctx.interactive:
click.secho(f"o/ Hey, {getpass.getuser()}, let's tag a release!", fg="magenta")
if ctx.github is None:
releasetool.commands.common.setup_github_context(ctx)
if ctx.release_pr is None:
determine_release_pr(ctx)
create_releases(ctx)
return ctx
| apache-2.0 |
thanhacun/odoo | openerp/addons/base/module/wizard/base_import_language.py | 337 | 2644 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import base64
from tempfile import TemporaryFile
from openerp import tools
from openerp.osv import osv, fields
class base_language_import(osv.osv_memory):
""" Language Import """
_name = "base.language.import"
_description = "Language Import"
_columns = {
'name': fields.char('Language Name', required=True),
'code': fields.char('ISO Code', size=5, help="ISO Language and Country code, e.g. en_US", required=True),
'data': fields.binary('File', required=True),
'overwrite': fields.boolean('Overwrite Existing Terms',
help="If you enable this option, existing translations (including custom ones) "
"will be overwritten and replaced by those in this file"),
}
def import_lang(self, cr, uid, ids, context=None):
if context is None:
context = {}
this = self.browse(cr, uid, ids[0])
if this.overwrite:
context = dict(context, overwrite=True)
fileobj = TemporaryFile('w+')
try:
fileobj.write(base64.decodestring(this.data))
# now we determine the file format
fileobj.seek(0)
first_line = fileobj.readline().strip().replace('"', '').replace(' ', '')
fileformat = first_line.endswith("type,name,res_id,src,value") and 'csv' or 'po'
fileobj.seek(0)
tools.trans_load_data(cr, fileobj, fileformat, this.code, lang_name=this.name, context=context)
finally:
fileobj.close()
return True
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
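`import_lang` decides between CSV and PO input by sniffing only the first line of the uploaded file: it strips quotes and spaces and checks for the CSV header columns. That rule in isolation (a sketch, not a method the wizard exposes):

```python
def detect_format(first_line):
    """Return 'csv' when the header row matches the export columns, else 'po'."""
    normalized = first_line.strip().replace('"', '').replace(' ', '')
    return 'csv' if normalized.endswith("type,name,res_id,src,value") else 'po'
```

Because the check is `endswith`, an extra leading column (such as a module name) still counts as CSV, while any PO content, including comment lines, falls through to 'po'.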
effigies/mne-python | mne/viz/tests/test_evoked.py | 1 | 3310 | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis Engemann <denis.engemann@gmail.com>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Eric Larson <larson.eric.d@gmail.com>
# Cathy Nangini <cnangini@gmail.com>
# Mainak Jas <mainak@neuro.hut.fi>
#
# License: Simplified BSD
import os.path as op
import warnings
import numpy as np
from numpy.testing import assert_raises
# Set our plotters to test mode
import matplotlib
matplotlib.use('Agg') # for testing don't use X server
import matplotlib.pyplot as plt
from mne import io, read_events, Epochs
from mne import pick_types
from mne.channels import read_layout
warnings.simplefilter('always') # enable b/c these tests throw warnings
base_dir = op.join(op.dirname(__file__), '..', '..', 'io', 'tests', 'data')
evoked_fname = op.join(base_dir, 'test-ave.fif')
raw_fname = op.join(base_dir, 'test_raw.fif')
cov_fname = op.join(base_dir, 'test-cov.fif')
event_name = op.join(base_dir, 'test-eve.fif')
event_id, tmin, tmax = 1, -0.1, 0.1
n_chan = 6
layout = read_layout('Vectorview-all')
def _get_raw():
return io.Raw(raw_fname, preload=False)
def _get_events():
return read_events(event_name)
def _get_picks(raw):
return pick_types(raw.info, meg=True, eeg=False, stim=False,
ecg=False, eog=False, exclude='bads')
def _get_epochs():
raw = _get_raw()
events = _get_events()
picks = _get_picks(raw)
# Use a subset of channels for plotting speed
picks = np.round(np.linspace(0, len(picks) + 1, n_chan)).astype(int)
epochs = Epochs(raw, events[:5], event_id, tmin, tmax, picks=picks,
baseline=(None, 0))
return epochs
def _get_epochs_delayed_ssp():
raw = _get_raw()
events = _get_events()
picks = _get_picks(raw)
reject = dict(mag=4e-12)
epochs_delayed_ssp = Epochs(raw, events[:10], event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
proj='delayed', reject=reject)
return epochs_delayed_ssp
def test_plot_evoked():
"""Test plotting of evoked
"""
evoked = _get_epochs().average()
with warnings.catch_warnings(record=True):
evoked.plot(proj=True, hline=[1])
# plot with bad channels excluded
evoked.plot(exclude='bads')
evoked.plot(exclude=evoked.info['bads']) # does the same thing
# test selective updating of dict keys is working.
evoked.plot(hline=[1], units=dict(mag='femto foo'))
evoked_delayed_ssp = _get_epochs_delayed_ssp().average()
evoked_delayed_ssp.plot(proj='interactive')
evoked_delayed_ssp.apply_proj()
assert_raises(RuntimeError, evoked_delayed_ssp.plot,
proj='interactive')
evoked_delayed_ssp.info['projs'] = []
assert_raises(RuntimeError, evoked_delayed_ssp.plot,
proj='interactive')
assert_raises(RuntimeError, evoked_delayed_ssp.plot,
proj='interactive', axes='foo')
evoked.plot_image(proj=True)
# plot with bad channels excluded
evoked.plot_image(exclude='bads')
evoked.plot_image(exclude=evoked.info['bads']) # does the same thing
plt.close('all')
| bsd-3-clause |
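Several plotting calls in `test_plot_evoked` are wrapped in `warnings.catch_warnings(record=True)` so that expected warnings are collected instead of leaking into the test output. The pattern in isolation (`noisy_plot` is a stand-in for a call such as `evoked.plot(...)`):

```python
import warnings

def noisy_plot():
    """Stand-in for a plotting call that may emit warnings."""
    warnings.warn("axes not square", RuntimeWarning)
    return "plotted"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # mirror the module-level simplefilter above
    result = noisy_plot()

# The warning was recorded instead of being printed or raised.
messages = [str(w.message) for w in caught]
```

The `simplefilter('always')` call matters: without it, Python's default "once per location" filtering could silently swallow repeated warnings and make the recorded list unreliable.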
y12uc231/edx-platform | lms/djangoapps/certificates/migrations/0010_auto__del_field_generatedcertificate_enabled__add_field_generatedcerti.py | 188 | 5338 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Deleting field 'GeneratedCertificate.enabled'
db.delete_column('certificates_generatedcertificate', 'enabled')
# Adding field 'GeneratedCertificate.status'
db.add_column('certificates_generatedcertificate', 'status',
self.gf('django.db.models.fields.CharField')(default='unavailable', max_length=32),
keep_default=False)
def backwards(self, orm):
# Adding field 'GeneratedCertificate.enabled'
db.add_column('certificates_generatedcertificate', 'enabled',
self.gf('django.db.models.fields.BooleanField')(default=False),
keep_default=False)
# Deleting field 'GeneratedCertificate.status'
db.delete_column('certificates_generatedcertificate', 'status')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'certificates.generatedcertificate': {
'Meta': {'unique_together': "(('user', 'course_id'),)", 'object_name': 'GeneratedCertificate'},
'certificate_id': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '32', 'blank': 'True'}),
'course_id': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'distinction': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'download_url': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '128', 'blank': 'True'}),
'grade': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '5', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '32', 'blank': 'True'}),
'status': ('django.db.models.fields.CharField', [], {'default': "'unavailable'", 'max_length': '32'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
}
}
complete_apps = ['certificates']
| agpl-3.0 |
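A South migration's `backwards` method must invert `forwards` exactly. The column changes above can be sketched against a plain dict standing in for the table schema (the dict-based helpers are purely illustrative, not South's API):

```python
def forwards(columns):
    """Drop 'enabled', add 'status' with its default, as in the migration above."""
    columns = dict(columns)
    del columns['enabled']
    columns['status'] = 'unavailable'
    return columns

def backwards(columns):
    """Invert forwards: restore 'enabled' (default False), drop 'status'."""
    columns = dict(columns)
    columns['enabled'] = False
    del columns['status']
    return columns

before = {'user': None, 'course_id': '', 'enabled': False}
after = forwards(before)
```

The round trip only restores the schema, not data: any non-default values in the dropped column are lost, which is the usual caveat with destructive migrations.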
zhukaixy/kbengine | kbe/res/scripts/common/Lib/site-packages/setuptools/tests/doctest.py | 332 | 99828 | # Module doctest.
# Released to the public domain 16-Jan-2001, by Tim Peters (tim@python.org).
# Major enhancements and refactoring by:
# Jim Fulton
# Edward Loper
# Provided as-is; use at your own risk; no warranty; no promises; enjoy!
try:
basestring
except NameError:
basestring = str
try:
enumerate
except NameError:
def enumerate(seq):
return zip(range(len(seq)),seq)
r"""Module doctest -- a framework for running examples in docstrings.
In simplest use, end each module M to be tested with:
def _test():
import doctest
doctest.testmod()
if __name__ == "__main__":
_test()
Then running the module as a script will cause the examples in the
docstrings to get executed and verified:
python M.py
This won't display anything unless an example fails, in which case the
failing example(s) and the cause(s) of the failure(s) are printed to stdout
(why not stderr? because stderr is a lame hack <0.2 wink>), and the final
line of output is "Test failed.".
Run it with the -v switch instead:
python M.py -v
and a detailed report of all examples tried is printed to stdout, along
with assorted summaries at the end.
You can force verbose mode by passing "verbose=True" to testmod, or prohibit
it by passing "verbose=False". In either of those cases, sys.argv is not
examined by testmod.
There are a variety of other ways to run doctests, including integration
with the unittest framework, and support for running non-Python text
files containing doctests. There are also many ways to override parts
of doctest's default behaviors. See the Library Reference Manual for
details.
"""
__docformat__ = 'reStructuredText en'
__all__ = [
# 0, Option Flags
'register_optionflag',
'DONT_ACCEPT_TRUE_FOR_1',
'DONT_ACCEPT_BLANKLINE',
'NORMALIZE_WHITESPACE',
'ELLIPSIS',
'IGNORE_EXCEPTION_DETAIL',
'COMPARISON_FLAGS',
'REPORT_UDIFF',
'REPORT_CDIFF',
'REPORT_NDIFF',
'REPORT_ONLY_FIRST_FAILURE',
'REPORTING_FLAGS',
# 1. Utility Functions
'is_private',
# 2. Example & DocTest
'Example',
'DocTest',
# 3. Doctest Parser
'DocTestParser',
# 4. Doctest Finder
'DocTestFinder',
# 5. Doctest Runner
'DocTestRunner',
'OutputChecker',
'DocTestFailure',
'UnexpectedException',
'DebugRunner',
# 6. Test Functions
'testmod',
'testfile',
'run_docstring_examples',
# 7. Tester
'Tester',
# 8. Unittest Support
'DocTestSuite',
'DocFileSuite',
'set_unittest_reportflags',
# 9. Debugging Support
'script_from_examples',
'testsource',
'debug_src',
'debug',
]
import __future__
import sys, traceback, inspect, linecache, os, re, types
import unittest, difflib, pdb, tempfile
import warnings
from setuptools.compat import StringIO, execfile, func_code, im_func
# Don't whine about the deprecated is_private function in this
# module's tests.
warnings.filterwarnings("ignore", "is_private", DeprecationWarning,
__name__, 0)
# There are 4 basic classes:
# - Example: a <source, want> pair, plus an intra-docstring line number.
# - DocTest: a collection of examples, parsed from a docstring, plus
# info about where the docstring came from (name, filename, lineno).
# - DocTestFinder: extracts DocTests from a given object's docstring and
# its contained objects' docstrings.
# - DocTestRunner: runs DocTest cases, and accumulates statistics.
#
# So the basic picture is:
#
# list of:
# +------+ +---------+ +-------+
# |object| --DocTestFinder-> | DocTest | --DocTestRunner-> |results|
# +------+ +---------+ +-------+
# | Example |
# | ... |
# | Example |
# +---------+
# Option constants.
OPTIONFLAGS_BY_NAME = {}
def register_optionflag(name):
flag = 1 << len(OPTIONFLAGS_BY_NAME)
OPTIONFLAGS_BY_NAME[name] = flag
return flag
DONT_ACCEPT_TRUE_FOR_1 = register_optionflag('DONT_ACCEPT_TRUE_FOR_1')
DONT_ACCEPT_BLANKLINE = register_optionflag('DONT_ACCEPT_BLANKLINE')
NORMALIZE_WHITESPACE = register_optionflag('NORMALIZE_WHITESPACE')
ELLIPSIS = register_optionflag('ELLIPSIS')
IGNORE_EXCEPTION_DETAIL = register_optionflag('IGNORE_EXCEPTION_DETAIL')
COMPARISON_FLAGS = (DONT_ACCEPT_TRUE_FOR_1 |
DONT_ACCEPT_BLANKLINE |
NORMALIZE_WHITESPACE |
ELLIPSIS |
IGNORE_EXCEPTION_DETAIL)
REPORT_UDIFF = register_optionflag('REPORT_UDIFF')
REPORT_CDIFF = register_optionflag('REPORT_CDIFF')
REPORT_NDIFF = register_optionflag('REPORT_NDIFF')
REPORT_ONLY_FIRST_FAILURE = register_optionflag('REPORT_ONLY_FIRST_FAILURE')
REPORTING_FLAGS = (REPORT_UDIFF |
REPORT_CDIFF |
REPORT_NDIFF |
REPORT_ONLY_FIRST_FAILURE)
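`register_optionflag` allocates one fresh bit per name via `1 << len(OPTIONFLAGS_BY_NAME)`, which is why groups such as COMPARISON_FLAGS and REPORTING_FLAGS can be built with bitwise OR. The allocation scheme in miniature (demo names only):

```python
flags_by_name = {}

def register(name):
    # Each new flag gets the next free bit: 1, 2, 4, 8, ...
    flag = 1 << len(flags_by_name)
    flags_by_name[name] = flag
    return flag

ELLIPSIS_DEMO = register('ELLIPSIS_DEMO')
WHITESPACE_DEMO = register('WHITESPACE_DEMO')
combined = ELLIPSIS_DEMO | WHITESPACE_DEMO
```

Testing membership is then a bitwise AND (`optionflags & ELLIPSIS_DEMO`), and a combined mask distributes over its members without collisions.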
# Special string markers for use in `want` strings:
BLANKLINE_MARKER = '<BLANKLINE>'
ELLIPSIS_MARKER = '...'
######################################################################
## Table of Contents
######################################################################
# 1. Utility Functions
# 2. Example & DocTest -- store test cases
# 3. DocTest Parser -- extracts examples from strings
# 4. DocTest Finder -- extracts test cases from objects
# 5. DocTest Runner -- runs test cases
# 6. Test Functions -- convenient wrappers for testing
# 7. Tester Class -- for backwards compatibility
# 8. Unittest Support
# 9. Debugging Support
# 10. Example Usage
######################################################################
## 1. Utility Functions
######################################################################
def is_private(prefix, base):
"""prefix, base -> true iff name prefix + "." + base is "private".
Prefix may be an empty string, and base does not contain a period.
Prefix is ignored (although functions you write conforming to this
protocol may make use of it).
Return true iff base begins with an (at least one) underscore, but
does not both begin and end with (at least) two underscores.
>>> is_private("a.b", "my_func")
False
>>> is_private("____", "_my_func")
True
>>> is_private("someclass", "__init__")
False
>>> is_private("sometypo", "__init_")
True
>>> is_private("x.y.z", "_")
True
>>> is_private("_x.y.z", "__")
False
>>> is_private("", "") # senseless but consistent
False
"""
warnings.warn("is_private is deprecated; it wasn't useful; "
"examine DocTestFinder.find() lists instead",
DeprecationWarning, stacklevel=2)
return base[:1] == "_" and not base[:2] == "__" == base[-2:]
def _extract_future_flags(globs):
"""
Return the compiler-flags associated with the future features that
have been imported into the given namespace (globs).
"""
flags = 0
for fname in __future__.all_feature_names:
feature = globs.get(fname, None)
if feature is getattr(__future__, fname):
flags |= feature.compiler_flag
return flags
def _normalize_module(module, depth=2):
"""
Return the module specified by `module`. In particular:
- If `module` is a module, then return module.
- If `module` is a string, then import and return the
module with that name.
- If `module` is None, then return the calling module.
The calling module is assumed to be the module of
the stack frame at the given depth in the call stack.
"""
if inspect.ismodule(module):
return module
elif isinstance(module, basestring):
return __import__(module, globals(), locals(), ["*"])
elif module is None:
return sys.modules[sys._getframe(depth).f_globals['__name__']]
else:
raise TypeError("Expected a module, string, or None")
def _indent(s, indent=4):
"""
Add the given number of space characters to the beginning every
non-blank line in `s`, and return the result.
"""
# This regexp matches the start of non-blank lines:
return re.sub('(?m)^(?!$)', indent*' ', s)
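The one-line regexp in `_indent` is denser than it looks: `(?m)` makes `^` anchor at every line, and the negative lookahead `(?!$)` skips lines that are empty, so blank lines stay blank. A standalone copy of the behavior:

```python
import re

def indent(s, n=4):
    # '(?m)^(?!$)' = start of every line that is not immediately followed by end-of-line
    return re.sub('(?m)^(?!$)', n * ' ', s)

sample = "first\n\nsecond\n"
```

Keeping blank lines unindented matters for doctest output, where a truly empty line must stay empty to match the BLANKLINE handling elsewhere in the module.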
def _exception_traceback(exc_info):
"""
Return a string containing a traceback message for the given
exc_info tuple (as returned by sys.exc_info()).
"""
# Get a traceback message.
excout = StringIO()
exc_type, exc_val, exc_tb = exc_info
traceback.print_exception(exc_type, exc_val, exc_tb, file=excout)
return excout.getvalue()
# Override some StringIO methods.
class _SpoofOut(StringIO):
def getvalue(self):
result = StringIO.getvalue(self)
# If anything at all was written, make sure there's a trailing
# newline. There's no way for the expected output to indicate
# that a trailing newline is missing.
if result and not result.endswith("\n"):
result += "\n"
# Prevent softspace from screwing up the next test case, in
# case they used print with a trailing comma in an example.
if hasattr(self, "softspace"):
del self.softspace
return result
def truncate(self, size=None):
StringIO.truncate(self, size)
if hasattr(self, "softspace"):
del self.softspace
# Worst-case linear-time ellipsis matching.
def _ellipsis_match(want, got):
"""
Essentially the only subtle case:
>>> _ellipsis_match('aa...aa', 'aaa')
False
"""
if want.find(ELLIPSIS_MARKER)==-1:
return want == got
# Find "the real" strings.
ws = want.split(ELLIPSIS_MARKER)
assert len(ws) >= 2
# Deal with exact matches possibly needed at one or both ends.
startpos, endpos = 0, len(got)
w = ws[0]
if w: # starts with exact match
if got.startswith(w):
startpos = len(w)
del ws[0]
else:
return False
w = ws[-1]
if w: # ends with exact match
if got.endswith(w):
endpos -= len(w)
del ws[-1]
else:
return False
if startpos > endpos:
# Exact end matches required more characters than we have, as in
# _ellipsis_match('aa...aa', 'aaa')
return False
# For the rest, we only need to find the leftmost non-overlapping
# match for each piece. If there's no overall match that way alone,
# there's no overall match period.
for w in ws:
# w may be '' at times, if there are consecutive ellipses, or
# due to an ellipsis at the start or end of `want`. That's OK.
# Search for an empty string succeeds, and doesn't change startpos.
startpos = got.find(w, startpos, endpos)
if startpos < 0:
return False
startpos += len(w)
return True
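`_ellipsis_match` performs worst-case linear-time matching by scanning for each literal piece left to right; the docstring's `'aa...aa'` vs `'aaa'` case is exactly where a naive approach could go wrong. The same semantics can be expressed as a (backtracking, so potentially slower) regex translation, which is handy for sanity-checking:

```python
import re

ELLIPSIS = '...'

def ellipsis_match(want, got):
    """Regex translation of the matcher above: each '...' becomes '.*'.
    May backtrack, unlike the linear-time implementation it mirrors."""
    parts = [re.escape(p) for p in want.split(ELLIPSIS)]
    return re.fullmatch('(?s)' + '.*'.join(parts), got) is not None
```

With no ellipsis present the pattern degenerates to an exact match, matching the `want == got` fast path above; `(?s)` lets `.*` span newlines, as real doctest output can.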
def _comment_line(line):
"Return a commented form of the given line"
line = line.rstrip()
if line:
return '# '+line
else:
return '#'
class _OutputRedirectingPdb(pdb.Pdb):
"""
A specialized version of the python debugger that redirects stdout
to a given stream when interacting with the user. Stdout is *not*
redirected when traced code is executed.
"""
def __init__(self, out):
self.__out = out
pdb.Pdb.__init__(self)
def trace_dispatch(self, *args):
# Redirect stdout to the given stream.
save_stdout = sys.stdout
sys.stdout = self.__out
# Call Pdb's trace dispatch method.
try:
return pdb.Pdb.trace_dispatch(self, *args)
finally:
sys.stdout = save_stdout
# [XX] Normalize with respect to os.path.pardir?
def _module_relative_path(module, path):
if not inspect.ismodule(module):
raise TypeError('Expected a module: %r' % module)
if path.startswith('/'):
raise ValueError('Module-relative files may not have absolute paths')
# Find the base directory for the path.
if hasattr(module, '__file__'):
# A normal module/package
basedir = os.path.split(module.__file__)[0]
elif module.__name__ == '__main__':
# An interactive session.
if len(sys.argv)>0 and sys.argv[0] != '':
basedir = os.path.split(sys.argv[0])[0]
else:
basedir = os.curdir
else:
# A module w/o __file__ (this includes builtins)
        raise ValueError("Can't resolve paths relative to the module "
                         "%r (it has no __file__)" % (module.__name__,))
# Combine the base directory and the path.
return os.path.join(basedir, *(path.split('/')))
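# Illustrative use: for a module loaded from /pkg/mod.py,
# _module_relative_path(mod, 'data/output.txt') would return
# os.path.join('/pkg', 'data', 'output.txt'); the '/'-separated `path`
# is always split and rejoined with the host OS separator.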
######################################################################
## 2. Example & DocTest
######################################################################
## - An "example" is a <source, want> pair, where "source" is a
## fragment of source code, and "want" is the expected output for
## "source." The Example class also includes information about
## where the example was extracted from.
##
## - A "doctest" is a collection of examples, typically extracted from
## a string (such as an object's docstring). The DocTest class also
## includes information about where the string was extracted from.
class Example:
"""
A single doctest example, consisting of source code and expected
output. `Example` defines the following attributes:
- source: A single Python statement, always ending with a newline.
The constructor adds a newline if needed.
- want: The expected output from running the source code (either
from stdout, or a traceback in case of exception). `want` ends
with a newline unless it's empty, in which case it's an empty
string. The constructor adds a newline if needed.
- exc_msg: The exception message generated by the example, if
the example is expected to generate an exception; or `None` if
it is not expected to generate an exception. This exception
message is compared against the return value of
`traceback.format_exception_only()`. `exc_msg` ends with a
newline unless it's `None`. The constructor adds a newline
if needed.
- lineno: The line number within the DocTest string containing
this Example where the Example begins. This line number is
zero-based, with respect to the beginning of the DocTest.
- indent: The example's indentation in the DocTest string.
      I.e., the number of space characters that precede the
      example's first prompt.
- options: A dictionary mapping from option flags to True or
False, which is used to override default options for this
example. Any option flags not contained in this dictionary
are left at their default value (as specified by the
DocTestRunner's optionflags). By default, no options are set.
"""
def __init__(self, source, want, exc_msg=None, lineno=0, indent=0,
options=None):
# Normalize inputs.
if not source.endswith('\n'):
source += '\n'
if want and not want.endswith('\n'):
want += '\n'
if exc_msg is not None and not exc_msg.endswith('\n'):
exc_msg += '\n'
# Store properties.
self.source = source
self.want = want
self.lineno = lineno
self.indent = indent
if options is None: options = {}
self.options = options
self.exc_msg = exc_msg
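# A minimal illustrative construction (hypothetical values):
#     ex = Example('print(2 + 2)\n', '4\n')
# leaves ex.exc_msg as None and ex.options as {}; the trailing newlines
# on `source` and `want` would have been added automatically had they
# been omitted.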
class DocTest:
"""
A collection of doctest examples that should be run in a single
namespace. Each `DocTest` defines the following attributes:
- examples: the list of examples.
- globs: The namespace (aka globals) that the examples should
be run in.
- name: A name identifying the DocTest (typically, the name of
the object whose docstring this DocTest was extracted from).
- filename: The name of the file that this DocTest was extracted
from, or `None` if the filename is unknown.
- lineno: The line number within filename where this DocTest
begins, or `None` if the line number is unavailable. This
line number is zero-based, with respect to the beginning of
the file.
- docstring: The string that the examples were extracted from,
or `None` if the string is unavailable.
"""
def __init__(self, examples, globs, name, filename, lineno, docstring):
"""
Create a new DocTest containing the given examples. The
DocTest's globals are initialized with a copy of `globs`.
"""
assert not isinstance(examples, basestring), \
"DocTest no longer accepts str; use DocTestParser instead"
self.examples = examples
self.docstring = docstring
self.globs = globs.copy()
self.name = name
self.filename = filename
self.lineno = lineno
def __repr__(self):
if len(self.examples) == 0:
examples = 'no examples'
elif len(self.examples) == 1:
examples = '1 example'
else:
examples = '%d examples' % len(self.examples)
return ('<DocTest %s from %s:%s (%s)>' %
(self.name, self.filename, self.lineno, examples))
# This lets us sort tests by name:
def __cmp__(self, other):
if not isinstance(other, DocTest):
return -1
return cmp((self.name, self.filename, self.lineno, id(self)),
(other.name, other.filename, other.lineno, id(other)))
######################################################################
## 3. DocTestParser
######################################################################
class DocTestParser:
"""
A class used to parse strings containing doctest examples.
"""
# This regular expression is used to find doctest examples in a
# string. It defines three groups: `source` is the source code
# (including leading indentation and prompts); `indent` is the
# indentation of the first (PS1) line of the source code; and
# `want` is the expected output (including leading indentation).
_EXAMPLE_RE = re.compile(r'''
# Source consists of a PS1 line followed by zero or more PS2 lines.
(?P<source>
(?:^(?P<indent> [ ]*) >>> .*) # PS1 line
(?:\n [ ]* \.\.\. .*)*) # PS2 lines
\n?
# Want consists of any non-blank lines that do not start with PS1.
(?P<want> (?:(?![ ]*$) # Not a blank line
(?![ ]*>>>) # Not a line starting with PS1
.*$\n? # But any other line
)*)
''', re.MULTILINE | re.VERBOSE)
# A regular expression for handling `want` strings that contain
# expected exceptions. It divides `want` into three pieces:
# - the traceback header line (`hdr`)
# - the traceback stack (`stack`)
# - the exception message (`msg`), as generated by
# traceback.format_exception_only()
# `msg` may have multiple lines. We assume/require that the
# exception message is the first non-indented line starting with a word
# character following the traceback header line.
_EXCEPTION_RE = re.compile(r"""
# Grab the traceback header. Different versions of Python have
# said different things on the first traceback line.
^(?P<hdr> Traceback\ \(
(?: most\ recent\ call\ last
| innermost\ last
) \) :
)
\s* $ # toss trailing whitespace on the header.
(?P<stack> .*?) # don't blink: absorb stuff until...
^ (?P<msg> \w+ .*) # a line *starts* with alphanum.
""", re.VERBOSE | re.MULTILINE | re.DOTALL)
# A callable returning a true value iff its argument is a blank line
# or contains a single comment.
_IS_BLANK_OR_COMMENT = re.compile(r'^[ ]*(#.*)?$').match
def parse(self, string, name='<string>'):
"""
Divide the given string into examples and intervening text,
and return them as a list of alternating Examples and strings.
Line numbers for the Examples are 0-based. The optional
argument `name` is a name identifying this string, and is only
used for error messages.
"""
string = string.expandtabs()
# If all lines begin with the same indentation, then strip it.
min_indent = self._min_indent(string)
if min_indent > 0:
string = '\n'.join([l[min_indent:] for l in string.split('\n')])
output = []
charno, lineno = 0, 0
# Find all doctest examples in the string:
for m in self._EXAMPLE_RE.finditer(string):
# Add the pre-example text to `output`.
output.append(string[charno:m.start()])
# Update lineno (lines before this example)
lineno += string.count('\n', charno, m.start())
# Extract info from the regexp match.
(source, options, want, exc_msg) = \
self._parse_example(m, name, lineno)
# Create an Example, and add it to the list.
if not self._IS_BLANK_OR_COMMENT(source):
output.append( Example(source, want, exc_msg,
lineno=lineno,
indent=min_indent+len(m.group('indent')),
options=options) )
# Update lineno (lines inside this example)
lineno += string.count('\n', m.start(), m.end())
# Update charno.
charno = m.end()
# Add any remaining post-example text to `output`.
output.append(string[charno:])
return output
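    # Illustrative (schematic, not real reprs): for a string such as
    #     'text\n>>> 1+1\n2\n\nmore'
    # parse() would return an alternating list like
    #     ['text\n', <Example source='1+1\n' want='2\n'>, '\nmore']
    # where the blank line terminates the expected output.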
def get_doctest(self, string, globs, name, filename, lineno):
"""
Extract all doctest examples from the given string, and
collect them into a `DocTest` object.
`globs`, `name`, `filename`, and `lineno` are attributes for
the new `DocTest` object. See the documentation for `DocTest`
for more information.
"""
return DocTest(self.get_examples(string, name), globs,
name, filename, lineno, string)
def get_examples(self, string, name='<string>'):
"""
Extract all doctest examples from the given string, and return
them as a list of `Example` objects. Line numbers are
0-based, because it's most common in doctests that nothing
interesting appears on the same line as opening triple-quote,
and so the first interesting line is called \"line 1\" then.
The optional argument `name` is a name identifying this
string, and is only used for error messages.
"""
return [x for x in self.parse(string, name)
if isinstance(x, Example)]
def _parse_example(self, m, name, lineno):
"""
Given a regular expression match from `_EXAMPLE_RE` (`m`),
return a pair `(source, want)`, where `source` is the matched
example's source code (with prompts and indentation stripped);
and `want` is the example's expected output (with indentation
stripped).
`name` is the string's name, and `lineno` is the line number
where the example starts; both are used for error messages.
"""
# Get the example's indentation level.
indent = len(m.group('indent'))
# Divide source into lines; check that they're properly
# indented; and then strip their indentation & prompts.
source_lines = m.group('source').split('\n')
self._check_prompt_blank(source_lines, indent, name, lineno)
self._check_prefix(source_lines[1:], ' '*indent + '.', name, lineno)
source = '\n'.join([sl[indent+4:] for sl in source_lines])
# Divide want into lines; check that it's properly indented; and
# then strip the indentation. Spaces before the last newline should
# be preserved, so plain rstrip() isn't good enough.
want = m.group('want')
want_lines = want.split('\n')
if len(want_lines) > 1 and re.match(r' *$', want_lines[-1]):
del want_lines[-1] # forget final newline & spaces after it
self._check_prefix(want_lines, ' '*indent, name,
lineno + len(source_lines))
want = '\n'.join([wl[indent:] for wl in want_lines])
# If `want` contains a traceback message, then extract it.
m = self._EXCEPTION_RE.match(want)
if m:
exc_msg = m.group('msg')
else:
exc_msg = None
# Extract options from the source.
options = self._find_options(source, name, lineno)
return source, options, want, exc_msg
# This regular expression looks for option directives in the
# source code of an example. Option directives are comments
# starting with "doctest:". Warning: this may give false
# positives for string-literals that contain the string
# "#doctest:". Eliminating these false positives would require
# actually parsing the string; but we limit them by ignoring any
# line containing "#doctest:" that is *followed* by a quote mark.
_OPTION_DIRECTIVE_RE = re.compile(r'#\s*doctest:\s*([^\n\'"]*)$',
re.MULTILINE)
def _find_options(self, source, name, lineno):
"""
Return a dictionary containing option overrides extracted from
option directives in the given source string.
`name` is the string's name, and `lineno` is the line number
where the example starts; both are used for error messages.
"""
options = {}
# (note: with the current regexp, this will match at most once:)
for m in self._OPTION_DIRECTIVE_RE.finditer(source):
option_strings = m.group(1).replace(',', ' ').split()
for option in option_strings:
if (option[0] not in '+-' or
option[1:] not in OPTIONFLAGS_BY_NAME):
raise ValueError('line %r of the doctest for %s '
'has an invalid option: %r' %
(lineno+1, name, option))
flag = OPTIONFLAGS_BY_NAME[option[1:]]
options[flag] = (option[0] == '+')
if options and self._IS_BLANK_OR_COMMENT(source):
raise ValueError('line %r of the doctest for %s has an option '
'directive on a line with no example: %r' %
(lineno, name, source))
return options
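    # Illustrative directive (hypothetical source line):
    #     1/0    # doctest: +IGNORE_EXCEPTION_DETAIL
    # would yield {OPTIONFLAGS_BY_NAME['IGNORE_EXCEPTION_DETAIL']: True};
    # a '-' prefix would map the flag to False instead.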
# This regular expression finds the indentation of every non-blank
# line in a string.
    _INDENT_RE = re.compile(r'^([ ]*)(?=\S)', re.MULTILINE)
def _min_indent(self, s):
"Return the minimum indentation of any non-blank line in `s`"
indents = [len(indent) for indent in self._INDENT_RE.findall(s)]
if len(indents) > 0:
return min(indents)
else:
return 0
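    # Illustrative: _min_indent('  a\n    b\n') returns 2, the smallest
    # leading-space count among the non-blank lines.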
def _check_prompt_blank(self, lines, indent, name, lineno):
"""
Given the lines of a source string (including prompts and
leading indentation), check to make sure that every prompt is
followed by a space character. If any line is not followed by
a space character, then raise ValueError.
"""
for i, line in enumerate(lines):
if len(line) >= indent+4 and line[indent+3] != ' ':
raise ValueError('line %r of the docstring for %s '
'lacks blank after %s: %r' %
(lineno+i+1, name,
line[indent:indent+3], line))
def _check_prefix(self, lines, prefix, name, lineno):
"""
Check that every line in the given list starts with the given
prefix; if any line does not, then raise a ValueError.
"""
for i, line in enumerate(lines):
if line and not line.startswith(prefix):
raise ValueError('line %r of the docstring for %s has '
'inconsistent leading whitespace: %r' %
(lineno+i+1, name, line))
######################################################################
## 4. DocTest Finder
######################################################################
class DocTestFinder:
"""
A class used to extract the DocTests that are relevant to a given
object, from its docstring and the docstrings of its contained
objects. Doctests can currently be extracted from the following
object types: modules, functions, classes, methods, staticmethods,
classmethods, and properties.
"""
def __init__(self, verbose=False, parser=DocTestParser(),
recurse=True, _namefilter=None, exclude_empty=True):
"""
Create a new doctest finder.
The optional argument `parser` specifies a class or
function that should be used to create new DocTest objects (or
objects that implement the same interface as DocTest). The
signature for this factory function should match the signature
of the DocTest constructor.
If the optional argument `recurse` is false, then `find` will
only examine the given object, and not any contained objects.
If the optional argument `exclude_empty` is false, then `find`
will include tests for objects with empty docstrings.
"""
self._parser = parser
self._verbose = verbose
self._recurse = recurse
self._exclude_empty = exclude_empty
# _namefilter is undocumented, and exists only for temporary backward-
# compatibility support of testmod's deprecated isprivate mess.
self._namefilter = _namefilter
def find(self, obj, name=None, module=None, globs=None,
extraglobs=None):
"""
Return a list of the DocTests that are defined by the given
object's docstring, or by any of its contained objects'
docstrings.
The optional parameter `module` is the module that contains
the given object. If the module is not specified or is None, then
the test finder will attempt to automatically determine the
correct module. The object's module is used:
- As a default namespace, if `globs` is not specified.
- To prevent the DocTestFinder from extracting DocTests
from objects that are imported from other modules.
- To find the name of the file containing the object.
- To help find the line number of the object within its
file.
Contained objects whose module does not match `module` are ignored.
If `module` is False, no attempt to find the module will be made.
This is obscure, of use mostly in tests: if `module` is False, or
is None but cannot be found automatically, then all objects are
considered to belong to the (non-existent) module, so all contained
objects will (recursively) be searched for doctests.
The globals for each DocTest is formed by combining `globs`
and `extraglobs` (bindings in `extraglobs` override bindings
in `globs`). A new copy of the globals dictionary is created
for each DocTest. If `globs` is not specified, then it
defaults to the module's `__dict__`, if specified, or {}
otherwise. If `extraglobs` is not specified, then it defaults
to {}.
"""
# If name was not specified, then extract it from the object.
if name is None:
name = getattr(obj, '__name__', None)
if name is None:
raise ValueError("DocTestFinder.find: name must be given "
"when obj.__name__ doesn't exist: %r" %
(type(obj),))
# Find the module that contains the given object (if obj is
# a module, then module=obj.). Note: this may fail, in which
# case module will be None.
if module is False:
module = None
elif module is None:
module = inspect.getmodule(obj)
# Read the module's source code. This is used by
# DocTestFinder._find_lineno to find the line number for a
# given object's docstring.
try:
file = inspect.getsourcefile(obj) or inspect.getfile(obj)
source_lines = linecache.getlines(file)
if not source_lines:
source_lines = None
except TypeError:
source_lines = None
# Initialize globals, and merge in extraglobs.
if globs is None:
if module is None:
globs = {}
else:
globs = module.__dict__.copy()
else:
globs = globs.copy()
if extraglobs is not None:
globs.update(extraglobs)
        # Recursively explore `obj`, extracting DocTests.
tests = []
self._find(tests, obj, name, module, source_lines, globs, {})
return tests
def _filter(self, obj, prefix, base):
"""
Return true if the given object should not be examined.
"""
return (self._namefilter is not None and
self._namefilter(prefix, base))
def _from_module(self, module, object):
"""
Return true if the given object is defined in the given
module.
"""
if module is None:
return True
elif inspect.isfunction(object):
return module.__dict__ is func_globals(object)
elif inspect.isclass(object):
return module.__name__ == object.__module__
elif inspect.getmodule(object) is not None:
return module is inspect.getmodule(object)
elif hasattr(object, '__module__'):
return module.__name__ == object.__module__
elif isinstance(object, property):
            return True # [XX] no way to be sure.
else:
raise ValueError("object must be a class or function")
def _find(self, tests, obj, name, module, source_lines, globs, seen):
"""
Find tests for the given object and any contained objects, and
add them to `tests`.
"""
if self._verbose:
print('Finding tests in %s' % name)
# If we've already processed this object, then ignore it.
if id(obj) in seen:
return
seen[id(obj)] = 1
# Find a test for this object, and add it to the list of tests.
test = self._get_test(obj, name, module, globs, source_lines)
if test is not None:
tests.append(test)
# Look for tests in a module's contained objects.
if inspect.ismodule(obj) and self._recurse:
for valname, val in obj.__dict__.items():
# Check if this contained object should be ignored.
if self._filter(val, name, valname):
continue
valname = '%s.%s' % (name, valname)
# Recurse to functions & classes.
if ((inspect.isfunction(val) or inspect.isclass(val)) and
self._from_module(module, val)):
self._find(tests, val, valname, module, source_lines,
globs, seen)
# Look for tests in a module's __test__ dictionary.
if inspect.ismodule(obj) and self._recurse:
for valname, val in getattr(obj, '__test__', {}).items():
if not isinstance(valname, basestring):
raise ValueError("DocTestFinder.find: __test__ keys "
"must be strings: %r" %
(type(valname),))
if not (inspect.isfunction(val) or inspect.isclass(val) or
inspect.ismethod(val) or inspect.ismodule(val) or
isinstance(val, basestring)):
raise ValueError("DocTestFinder.find: __test__ values "
"must be strings, functions, methods, "
"classes, or modules: %r" %
(type(val),))
valname = '%s.__test__.%s' % (name, valname)
self._find(tests, val, valname, module, source_lines,
globs, seen)
# Look for tests in a class's contained objects.
if inspect.isclass(obj) and self._recurse:
for valname, val in obj.__dict__.items():
# Check if this contained object should be ignored.
if self._filter(val, name, valname):
continue
# Special handling for staticmethod/classmethod.
if isinstance(val, staticmethod):
val = getattr(obj, valname)
if isinstance(val, classmethod):
val = im_func(getattr(obj, valname))
# Recurse to methods, properties, and nested classes.
if ((inspect.isfunction(val) or inspect.isclass(val) or
isinstance(val, property)) and
self._from_module(module, val)):
valname = '%s.%s' % (name, valname)
self._find(tests, val, valname, module, source_lines,
globs, seen)
def _get_test(self, obj, name, module, globs, source_lines):
"""
Return a DocTest for the given object, if it defines a docstring;
otherwise, return None.
"""
# Extract the object's docstring. If it doesn't have one,
# then return None (no test for this object).
if isinstance(obj, basestring):
docstring = obj
else:
try:
if obj.__doc__ is None:
docstring = ''
else:
docstring = obj.__doc__
if not isinstance(docstring, basestring):
docstring = str(docstring)
except (TypeError, AttributeError):
docstring = ''
# Find the docstring's location in the file.
lineno = self._find_lineno(obj, source_lines)
# Don't bother if the docstring is empty.
if self._exclude_empty and not docstring:
return None
# Return a DocTest for this object.
if module is None:
filename = None
else:
filename = getattr(module, '__file__', module.__name__)
if filename[-4:] in (".pyc", ".pyo"):
filename = filename[:-1]
return self._parser.get_doctest(docstring, globs, name,
filename, lineno)
def _find_lineno(self, obj, source_lines):
"""
Return a line number of the given object's docstring. Note:
this method assumes that the object has a docstring.
"""
lineno = None
# Find the line number for modules.
if inspect.ismodule(obj):
lineno = 0
# Find the line number for classes.
# Note: this could be fooled if a class is defined multiple
# times in a single file.
if inspect.isclass(obj):
if source_lines is None:
return None
pat = re.compile(r'^\s*class\s*%s\b' %
getattr(obj, '__name__', '-'))
for i, line in enumerate(source_lines):
if pat.match(line):
lineno = i
break
# Find the line number for functions & methods.
if inspect.ismethod(obj): obj = im_func(obj)
if inspect.isfunction(obj): obj = func_code(obj)
if inspect.istraceback(obj): obj = obj.tb_frame
if inspect.isframe(obj): obj = obj.f_code
if inspect.iscode(obj):
            lineno = obj.co_firstlineno - 1
# Find the line number where the docstring starts. Assume
# that it's the first line that begins with a quote mark.
# Note: this could be fooled by a multiline function
# signature, where a continuation line begins with a quote
# mark.
if lineno is not None:
if source_lines is None:
return lineno+1
            pat = re.compile(r'(^|.*:)\s*\w*("|\')')
for lineno in range(lineno, len(source_lines)):
if pat.match(source_lines[lineno]):
return lineno
# We couldn't find the line number.
return None
######################################################################
## 5. DocTest Runner
######################################################################
class DocTestRunner:
"""
A class used to run DocTest test cases, and accumulate statistics.
The `run` method is used to process a single DocTest case. It
returns a tuple `(f, t)`, where `t` is the number of test cases
tried, and `f` is the number of test cases that failed.
>>> tests = DocTestFinder().find(_TestClass)
>>> runner = DocTestRunner(verbose=False)
>>> for test in tests:
    ...     print(runner.run(test))
(0, 2)
(0, 1)
(0, 2)
(0, 2)
The `summarize` method prints a summary of all the test cases that
have been run by the runner, and returns an aggregated `(f, t)`
tuple:
>>> runner.summarize(verbose=1)
4 items passed all tests:
2 tests in _TestClass
2 tests in _TestClass.__init__
2 tests in _TestClass.get
1 tests in _TestClass.square
7 tests in 4 items.
7 passed and 0 failed.
Test passed.
(0, 7)
The aggregated number of tried examples and failed examples is
also available via the `tries` and `failures` attributes:
>>> runner.tries
7
>>> runner.failures
0
The comparison between expected outputs and actual outputs is done
by an `OutputChecker`. This comparison may be customized with a
number of option flags; see the documentation for `testmod` for
more information. If the option flags are insufficient, then the
comparison may also be customized by passing a subclass of
`OutputChecker` to the constructor.
The test runner's display output can be controlled in two ways.
    First, an output function (`out`) can be passed to
    `DocTestRunner.run`; this function will be called with strings that
    should be displayed.  It defaults to `sys.stdout.write`.  If
    capturing the output is not sufficient, then the display output
    can also be customized by subclassing DocTestRunner, and
    overriding the methods `report_start`, `report_success`,
    `report_unexpected_exception`, and `report_failure`.
"""
# This divider string is used to separate failure messages, and to
# separate sections of the summary.
DIVIDER = "*" * 70
def __init__(self, checker=None, verbose=None, optionflags=0):
"""
Create a new test runner.
Optional keyword arg `checker` is the `OutputChecker` that
should be used to compare the expected outputs and actual
outputs of doctest examples.
Optional keyword arg 'verbose' prints lots of stuff if true,
only failures if false; by default, it's true iff '-v' is in
sys.argv.
Optional argument `optionflags` can be used to control how the
test runner compares expected output to actual output, and how
it displays failures. See the documentation for `testmod` for
more information.
"""
self._checker = checker or OutputChecker()
if verbose is None:
verbose = '-v' in sys.argv
self._verbose = verbose
self.optionflags = optionflags
self.original_optionflags = optionflags
# Keep track of the examples we've run.
self.tries = 0
self.failures = 0
self._name2ft = {}
# Create a fake output target for capturing doctest output.
self._fakeout = _SpoofOut()
#/////////////////////////////////////////////////////////////////
# Reporting methods
#/////////////////////////////////////////////////////////////////
def report_start(self, out, test, example):
"""
Report that the test runner is about to process the given
example. (Only displays a message if verbose=True)
"""
if self._verbose:
if example.want:
out('Trying:\n' + _indent(example.source) +
'Expecting:\n' + _indent(example.want))
else:
out('Trying:\n' + _indent(example.source) +
'Expecting nothing\n')
def report_success(self, out, test, example, got):
"""
Report that the given example ran successfully. (Only
displays a message if verbose=True)
"""
if self._verbose:
out("ok\n")
def report_failure(self, out, test, example, got):
"""
Report that the given example failed.
"""
out(self._failure_header(test, example) +
self._checker.output_difference(example, got, self.optionflags))
def report_unexpected_exception(self, out, test, example, exc_info):
"""
Report that the given example raised an unexpected exception.
"""
out(self._failure_header(test, example) +
'Exception raised:\n' + _indent(_exception_traceback(exc_info)))
def _failure_header(self, test, example):
out = [self.DIVIDER]
if test.filename:
if test.lineno is not None and example.lineno is not None:
lineno = test.lineno + example.lineno + 1
else:
lineno = '?'
out.append('File "%s", line %s, in %s' %
(test.filename, lineno, test.name))
else:
out.append('Line %s, in %s' % (example.lineno+1, test.name))
out.append('Failed example:')
source = example.source
out.append(_indent(source))
return '\n'.join(out)
#/////////////////////////////////////////////////////////////////
# DocTest Running
#/////////////////////////////////////////////////////////////////
def __run(self, test, compileflags, out):
"""
Run the examples in `test`. Write the outcome of each example
with one of the `DocTestRunner.report_*` methods, using the
writer function `out`. `compileflags` is the set of compiler
flags that should be used to execute examples. Return a tuple
`(f, t)`, where `t` is the number of examples tried, and `f`
is the number of examples that failed. The examples are run
in the namespace `test.globs`.
"""
# Keep track of the number of failures and tries.
failures = tries = 0
# Save the option flags (since option directives can be used
# to modify them).
original_optionflags = self.optionflags
SUCCESS, FAILURE, BOOM = range(3) # `outcome` state
check = self._checker.check_output
# Process each example.
for examplenum, example in enumerate(test.examples):
            # If REPORT_ONLY_FIRST_FAILURE is set, then suppress
# reporting after the first failure.
quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and
failures > 0)
# Merge in the example's options.
self.optionflags = original_optionflags
if example.options:
for (optionflag, val) in example.options.items():
if val:
self.optionflags |= optionflag
else:
self.optionflags &= ~optionflag
# Record that we started this example.
tries += 1
if not quiet:
self.report_start(out, test, example)
# Use a special filename for compile(), so we can retrieve
# the source code during interactive debugging (see
# __patched_linecache_getlines).
filename = '<doctest %s[%d]>' % (test.name, examplenum)
# Run the example in the given context (globs), and record
# any exception that gets raised. (But don't intercept
# keyboard interrupts.)
try:
# Don't blink! This is where the user's code gets run.
exec(compile(example.source, filename, "single",
compileflags, 1), test.globs)
self.debugger.set_continue() # ==== Example Finished ====
exception = None
except KeyboardInterrupt:
raise
except:
exception = sys.exc_info()
self.debugger.set_continue() # ==== Example Finished ====
got = self._fakeout.getvalue() # the actual output
self._fakeout.truncate(0)
outcome = FAILURE # guilty until proved innocent or insane
# If the example executed without raising any exceptions,
# verify its output.
if exception is None:
if check(example.want, got, self.optionflags):
outcome = SUCCESS
# The example raised an exception: check if it was expected.
else:
exc_info = sys.exc_info()
exc_msg = traceback.format_exception_only(*exc_info[:2])[-1]
if not quiet:
got += _exception_traceback(exc_info)
# If `example.exc_msg` is None, then we weren't expecting
# an exception.
if example.exc_msg is None:
outcome = BOOM
# We expected an exception: see whether it matches.
elif check(example.exc_msg, exc_msg, self.optionflags):
outcome = SUCCESS
# Another chance if they didn't care about the detail.
elif self.optionflags & IGNORE_EXCEPTION_DETAIL:
m1 = re.match(r'[^:]*:', example.exc_msg)
m2 = re.match(r'[^:]*:', exc_msg)
if m1 and m2 and check(m1.group(0), m2.group(0),
self.optionflags):
outcome = SUCCESS
# Report the outcome.
if outcome is SUCCESS:
if not quiet:
self.report_success(out, test, example, got)
elif outcome is FAILURE:
if not quiet:
self.report_failure(out, test, example, got)
failures += 1
elif outcome is BOOM:
if not quiet:
self.report_unexpected_exception(out, test, example,
exc_info)
failures += 1
else:
assert False, ("unknown outcome", outcome)
# Restore the option flags (in case they were modified)
self.optionflags = original_optionflags
# Record and return the number of failures and tries.
self.__record_outcome(test, failures, tries)
return failures, tries
def __record_outcome(self, test, f, t):
"""
Record the fact that the given DocTest (`test`) generated `f`
failures out of `t` tried examples.
"""
f2, t2 = self._name2ft.get(test.name, (0,0))
self._name2ft[test.name] = (f+f2, t+t2)
self.failures += f
self.tries += t
__LINECACHE_FILENAME_RE = re.compile(r'<doctest '
r'(?P<name>[\w\.]+)'
r'\[(?P<examplenum>\d+)\]>$')
def __patched_linecache_getlines(self, filename, module_globals=None):
m = self.__LINECACHE_FILENAME_RE.match(filename)
if m and m.group('name') == self.test.name:
example = self.test.examples[int(m.group('examplenum'))]
return example.source.splitlines(True)
elif func_code(self.save_linecache_getlines).co_argcount > 1:
return self.save_linecache_getlines(filename, module_globals)
else:
return self.save_linecache_getlines(filename)
def run(self, test, compileflags=None, out=None, clear_globs=True):
"""
Run the examples in `test`, and display the results using the
writer function `out`.
The examples are run in the namespace `test.globs`. If
`clear_globs` is true (the default), then this namespace will
be cleared after the test runs, to help with garbage
collection. If you would like to examine the namespace after
the test completes, then use `clear_globs=False`.
`compileflags` gives the set of flags that should be used by
the Python compiler when running the examples. If not
specified, then it will default to the set of future-import
flags that apply to `globs`.
The output of each example is checked using
`DocTestRunner.check_output`, and the results are formatted by
the `DocTestRunner.report_*` methods.
"""
self.test = test
if compileflags is None:
compileflags = _extract_future_flags(test.globs)
save_stdout = sys.stdout
if out is None:
out = save_stdout.write
sys.stdout = self._fakeout
# Patch pdb.set_trace to restore sys.stdout during interactive
# debugging (so it's not still redirected to self._fakeout).
# Note that the interactive output will go to *our*
# save_stdout, even if that's not the real sys.stdout; this
# allows us to write test cases for the set_trace behavior.
save_set_trace = pdb.set_trace
self.debugger = _OutputRedirectingPdb(save_stdout)
self.debugger.reset()
pdb.set_trace = self.debugger.set_trace
# Patch linecache.getlines, so we can see the example's source
# when we're inside the debugger.
self.save_linecache_getlines = linecache.getlines
linecache.getlines = self.__patched_linecache_getlines
try:
return self.__run(test, compileflags, out)
finally:
sys.stdout = save_stdout
pdb.set_trace = save_set_trace
linecache.getlines = self.save_linecache_getlines
if clear_globs:
test.globs.clear()
#/////////////////////////////////////////////////////////////////
# Summarization
#/////////////////////////////////////////////////////////////////
def summarize(self, verbose=None):
"""
Print a summary of all the test cases that have been run by
this DocTestRunner, and return a tuple `(f, t)`, where `f` is
the total number of failed examples, and `t` is the total
number of tried examples.
The optional `verbose` argument controls how detailed the
summary is. If the verbosity is not specified, then the
DocTestRunner's verbosity is used.
"""
if verbose is None:
verbose = self._verbose
notests = []
passed = []
failed = []
totalt = totalf = 0
for x in self._name2ft.items():
name, (f, t) = x
assert f <= t
totalt += t
totalf += f
if t == 0:
notests.append(name)
elif f == 0:
passed.append( (name, t) )
else:
failed.append(x)
if verbose:
if notests:
print(len(notests), "items had no tests:")
notests.sort()
for thing in notests:
print(" ", thing)
if passed:
print(len(passed), "items passed all tests:")
passed.sort()
for thing, count in passed:
print(" %3d tests in %s" % (count, thing))
if failed:
print(self.DIVIDER)
print(len(failed), "items had failures:")
failed.sort()
for thing, (f, t) in failed:
print(" %3d of %3d in %s" % (f, t, thing))
if verbose:
print(totalt, "tests in", len(self._name2ft), "items.")
print(totalt - totalf, "passed and", totalf, "failed.")
if totalf:
print("***Test Failed***", totalf, "failures.")
elif verbose:
print("Test passed.")
return totalf, totalt
#/////////////////////////////////////////////////////////////////
# Backward compatibility cruft to maintain doctest.master.
#/////////////////////////////////////////////////////////////////
def merge(self, other):
d = self._name2ft
for name, (f, t) in other._name2ft.items():
if name in d:
                print("*** DocTestRunner.merge: '" + name + "' in both"
                      " testers; summing outcomes.")
f2, t2 = d[name]
f = f + f2
t = t + t2
d[name] = f, t
class OutputChecker:
"""
    A class used to check whether the actual output from a doctest
example matches the expected output. `OutputChecker` defines two
methods: `check_output`, which compares a given pair of outputs,
and returns true if they match; and `output_difference`, which
returns a string describing the differences between two outputs.
"""
def check_output(self, want, got, optionflags):
"""
Return True iff the actual output from an example (`got`)
matches the expected output (`want`). These strings are
always considered to match if they are identical; but
depending on what option flags the test runner is using,
several non-exact match types are also possible. See the
documentation for `TestRunner` for more information about
option flags.
"""
# Handle the common case first, for efficiency:
# if they're string-identical, always return true.
if got == want:
return True
# The values True and False replaced 1 and 0 as the return
# value for boolean comparisons in Python 2.3.
if not (optionflags & DONT_ACCEPT_TRUE_FOR_1):
if (got,want) == ("True\n", "1\n"):
return True
if (got,want) == ("False\n", "0\n"):
return True
# <BLANKLINE> can be used as a special sequence to signify a
# blank line, unless the DONT_ACCEPT_BLANKLINE flag is used.
if not (optionflags & DONT_ACCEPT_BLANKLINE):
# Replace <BLANKLINE> in want with a blank line.
            want = re.sub(r'(?m)^%s\s*?$' % re.escape(BLANKLINE_MARKER),
                          '', want)
# If a line in got contains only spaces, then remove the
# spaces.
            got = re.sub(r'(?m)^\s*?$', '', got)
if got == want:
return True
# This flag causes doctest to ignore any differences in the
# contents of whitespace strings. Note that this can be used
# in conjunction with the ELLIPSIS flag.
if optionflags & NORMALIZE_WHITESPACE:
got = ' '.join(got.split())
want = ' '.join(want.split())
if got == want:
return True
# The ELLIPSIS flag says to let the sequence "..." in `want`
# match any substring in `got`.
if optionflags & ELLIPSIS:
if _ellipsis_match(want, got):
return True
# We didn't find any match; return false.
return False
# Should we do a fancy diff?
def _do_a_fancy_diff(self, want, got, optionflags):
# Not unless they asked for a fancy diff.
if not optionflags & (REPORT_UDIFF |
REPORT_CDIFF |
REPORT_NDIFF):
return False
# If expected output uses ellipsis, a meaningful fancy diff is
# too hard ... or maybe not. In two real-life failures Tim saw,
# a diff was a major help anyway, so this is commented out.
# [todo] _ellipsis_match() knows which pieces do and don't match,
# and could be the basis for a kick-ass diff in this case.
##if optionflags & ELLIPSIS and ELLIPSIS_MARKER in want:
## return False
# ndiff does intraline difference marking, so can be useful even
# for 1-line differences.
if optionflags & REPORT_NDIFF:
return True
# The other diff types need at least a few lines to be helpful.
return want.count('\n') > 2 and got.count('\n') > 2
def output_difference(self, example, got, optionflags):
"""
Return a string describing the differences between the
expected output for a given example (`example`) and the actual
output (`got`). `optionflags` is the set of option flags used
to compare `want` and `got`.
"""
want = example.want
# If <BLANKLINE>s are being used, then replace blank lines
# with <BLANKLINE> in the actual output string.
if not (optionflags & DONT_ACCEPT_BLANKLINE):
got = re.sub('(?m)^[ ]*(?=\n)', BLANKLINE_MARKER, got)
# Check if we should use diff.
if self._do_a_fancy_diff(want, got, optionflags):
# Split want & got into lines.
want_lines = want.splitlines(True) # True == keep line ends
got_lines = got.splitlines(True)
# Use difflib to find their differences.
if optionflags & REPORT_UDIFF:
diff = difflib.unified_diff(want_lines, got_lines, n=2)
diff = list(diff)[2:] # strip the diff header
kind = 'unified diff with -expected +actual'
elif optionflags & REPORT_CDIFF:
diff = difflib.context_diff(want_lines, got_lines, n=2)
diff = list(diff)[2:] # strip the diff header
kind = 'context diff with expected followed by actual'
elif optionflags & REPORT_NDIFF:
engine = difflib.Differ(charjunk=difflib.IS_CHARACTER_JUNK)
diff = list(engine.compare(want_lines, got_lines))
kind = 'ndiff with -expected +actual'
else:
assert 0, 'Bad diff option'
# Remove trailing whitespace on diff output.
diff = [line.rstrip() + '\n' for line in diff]
return 'Differences (%s):\n' % kind + _indent(''.join(diff))
# If we're not using diff, then simply list the expected
# output followed by the actual output.
if want and got:
return 'Expected:\n%sGot:\n%s' % (_indent(want), _indent(got))
elif want:
return 'Expected:\n%sGot nothing\n' % _indent(want)
elif got:
return 'Expected nothing\nGot:\n%s' % _indent(got)
else:
return 'Expected nothing\nGot nothing\n'
class DocTestFailure(Exception):
"""A DocTest example has failed in debugging mode.
The exception instance has variables:
- test: the DocTest object being run
    - example: the Example object that failed
- got: the actual output
"""
def __init__(self, test, example, got):
self.test = test
self.example = example
self.got = got
def __str__(self):
return str(self.test)
class UnexpectedException(Exception):
"""A DocTest example has encountered an unexpected exception
The exception instance has variables:
- test: the DocTest object being run
    - example: the Example object that failed
- exc_info: the exception info
"""
def __init__(self, test, example, exc_info):
self.test = test
self.example = example
self.exc_info = exc_info
def __str__(self):
return str(self.test)
class DebugRunner(DocTestRunner):
r"""Run doc tests but raise an exception as soon as there is a failure.
If an unexpected exception occurs, an UnexpectedException is raised.
It contains the test, the example, and the original exception:
>>> runner = DebugRunner(verbose=False)
>>> test = DocTestParser().get_doctest('>>> raise KeyError\n42',
... {}, 'foo', 'foo.py', 0)
>>> try:
... runner.run(test)
... except UnexpectedException, failure:
... pass
>>> failure.test is test
True
>>> failure.example.want
'42\n'
>>> exc_info = failure.exc_info
>>> raise exc_info[0], exc_info[1], exc_info[2]
Traceback (most recent call last):
...
KeyError
We wrap the original exception to give the calling application
access to the test and example information.
If the output doesn't match, then a DocTestFailure is raised:
>>> test = DocTestParser().get_doctest('''
... >>> x = 1
... >>> x
... 2
... ''', {}, 'foo', 'foo.py', 0)
>>> try:
... runner.run(test)
... except DocTestFailure, failure:
... pass
DocTestFailure objects provide access to the test:
>>> failure.test is test
True
As well as to the example:
>>> failure.example.want
'2\n'
and the actual output:
>>> failure.got
'1\n'
If a failure or error occurs, the globals are left intact:
>>> del test.globs['__builtins__']
>>> test.globs
{'x': 1}
>>> test = DocTestParser().get_doctest('''
... >>> x = 2
... >>> raise KeyError
... ''', {}, 'foo', 'foo.py', 0)
>>> runner.run(test)
Traceback (most recent call last):
...
UnexpectedException: <DocTest foo from foo.py:0 (2 examples)>
>>> del test.globs['__builtins__']
>>> test.globs
{'x': 2}
But the globals are cleared if there is no error:
>>> test = DocTestParser().get_doctest('''
... >>> x = 2
... ''', {}, 'foo', 'foo.py', 0)
>>> runner.run(test)
(0, 1)
>>> test.globs
{}
"""
def run(self, test, compileflags=None, out=None, clear_globs=True):
r = DocTestRunner.run(self, test, compileflags, out, False)
if clear_globs:
test.globs.clear()
return r
def report_unexpected_exception(self, out, test, example, exc_info):
raise UnexpectedException(test, example, exc_info)
def report_failure(self, out, test, example, got):
raise DocTestFailure(test, example, got)
######################################################################
## 6. Test Functions
######################################################################
# These should be backwards compatible.
# For backward compatibility, a global instance of a DocTestRunner
# class, updated by testmod.
master = None
def testmod(m=None, name=None, globs=None, verbose=None, isprivate=None,
report=True, optionflags=0, extraglobs=None,
raise_on_error=False, exclude_empty=False):
"""m=None, name=None, globs=None, verbose=None, isprivate=None,
report=True, optionflags=0, extraglobs=None, raise_on_error=False,
exclude_empty=False
Test examples in docstrings in functions and classes reachable
from module m (or the current module if m is not supplied), starting
with m.__doc__. Unless isprivate is specified, private names
are not skipped.
Also test examples reachable from dict m.__test__ if it exists and is
not None. m.__test__ maps names to functions, classes and strings;
function and class docstrings are tested even if the name is private;
strings are tested directly, as if they were docstrings.
Return (#failures, #tests).
See doctest.__doc__ for an overview.
Optional keyword arg "name" gives the name of the module; by default
use m.__name__.
Optional keyword arg "globs" gives a dict to be used as the globals
when executing examples; by default, use m.__dict__. A copy of this
dict is actually used for each docstring, so that each docstring's
examples start with a clean slate.
Optional keyword arg "extraglobs" gives a dictionary that should be
merged into the globals that are used to execute examples. By
default, no extra globals are used. This is new in 2.4.
Optional keyword arg "verbose" prints lots of stuff if true, prints
only failures if false; by default, it's true iff "-v" is in sys.argv.
Optional keyword arg "report" prints a summary at the end when true,
else prints nothing at the end. In verbose mode, the summary is
detailed, else very brief (in fact, empty if all tests passed).
Optional keyword arg "optionflags" or's together module constants,
and defaults to 0. This is new in 2.3. Possible values (see the
docs for details):
DONT_ACCEPT_TRUE_FOR_1
DONT_ACCEPT_BLANKLINE
NORMALIZE_WHITESPACE
ELLIPSIS
IGNORE_EXCEPTION_DETAIL
REPORT_UDIFF
REPORT_CDIFF
REPORT_NDIFF
REPORT_ONLY_FIRST_FAILURE
Optional keyword arg "raise_on_error" raises an exception on the
first unexpected exception or failure. This allows failures to be
post-mortem debugged.
Deprecated in Python 2.4:
Optional keyword arg "isprivate" specifies a function used to
    determine whether a name is private. The default function treats
    all functions as public. Optionally, "isprivate" can be
set to doctest.is_private to skip over functions marked as private
using the underscore naming convention; see its docs for details.
Advanced tomfoolery: testmod runs methods of a local instance of
class doctest.Tester, then merges the results into (or creates)
global Tester instance doctest.master. Methods of doctest.master
can be called directly too, if you want to do something unusual.
Passing report=0 to testmod is especially useful then, to delay
displaying a summary. Invoke doctest.master.summarize(verbose)
when you're done fiddling.
"""
global master
if isprivate is not None:
warnings.warn("the isprivate argument is deprecated; "
"examine DocTestFinder.find() lists instead",
DeprecationWarning)
# If no module was given, then use __main__.
if m is None:
# DWA - m will still be None if this wasn't invoked from the command
# line, in which case the following TypeError is about as good an error
# as we should expect
m = sys.modules.get('__main__')
# Check that we were actually given a module.
if not inspect.ismodule(m):
raise TypeError("testmod: module required; %r" % (m,))
# If no name was given, then use the module's name.
if name is None:
name = m.__name__
# Find, parse, and run all tests in the given module.
finder = DocTestFinder(_namefilter=isprivate, exclude_empty=exclude_empty)
if raise_on_error:
runner = DebugRunner(verbose=verbose, optionflags=optionflags)
else:
runner = DocTestRunner(verbose=verbose, optionflags=optionflags)
for test in finder.find(m, name, globs=globs, extraglobs=extraglobs):
runner.run(test)
if report:
runner.summarize()
if master is None:
master = runner
else:
master.merge(runner)
return runner.failures, runner.tries
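# A minimal usage sketch for testmod (illustrative only; the __main__
# guard and sys.exit call are the conventional calling idiom, not part
# of this module's API):
#
#     if __name__ == "__main__":
#         failures, tries = testmod(verbose=False, report=True)
#         sys.exit(1 if failures else 0)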
def testfile(filename, module_relative=True, name=None, package=None,
globs=None, verbose=None, report=True, optionflags=0,
extraglobs=None, raise_on_error=False, parser=DocTestParser()):
"""
Test examples in the given file. Return (#failures, #tests).
Optional keyword arg "module_relative" specifies how filenames
should be interpreted:
- If "module_relative" is True (the default), then "filename"
specifies a module-relative path. By default, this path is
relative to the calling module's directory; but if the
"package" argument is specified, then it is relative to that
package. To ensure os-independence, "filename" should use
"/" characters to separate path segments, and should not
be an absolute path (i.e., it may not begin with "/").
- If "module_relative" is False, then "filename" specifies an
os-specific path. The path may be absolute or relative (to
the current working directory).
Optional keyword arg "name" gives the name of the test; by default
use the file's basename.
Optional keyword argument "package" is a Python package or the
name of a Python package whose directory should be used as the
base directory for a module relative filename. If no package is
specified, then the calling module's directory is used as the base
directory for module relative filenames. It is an error to
specify "package" if "module_relative" is False.
Optional keyword arg "globs" gives a dict to be used as the globals
when executing examples; by default, use {}. A copy of this dict
is actually used for each docstring, so that each docstring's
examples start with a clean slate.
Optional keyword arg "extraglobs" gives a dictionary that should be
merged into the globals that are used to execute examples. By
default, no extra globals are used.
Optional keyword arg "verbose" prints lots of stuff if true, prints
only failures if false; by default, it's true iff "-v" is in sys.argv.
Optional keyword arg "report" prints a summary at the end when true,
else prints nothing at the end. In verbose mode, the summary is
detailed, else very brief (in fact, empty if all tests passed).
Optional keyword arg "optionflags" or's together module constants,
and defaults to 0. Possible values (see the docs for details):
DONT_ACCEPT_TRUE_FOR_1
DONT_ACCEPT_BLANKLINE
NORMALIZE_WHITESPACE
ELLIPSIS
IGNORE_EXCEPTION_DETAIL
REPORT_UDIFF
REPORT_CDIFF
REPORT_NDIFF
REPORT_ONLY_FIRST_FAILURE
Optional keyword arg "raise_on_error" raises an exception on the
first unexpected exception or failure. This allows failures to be
post-mortem debugged.
Optional keyword arg "parser" specifies a DocTestParser (or
subclass) that should be used to extract tests from the files.
    Advanced tomfoolery: testfile runs methods of a local instance of
class doctest.Tester, then merges the results into (or creates)
global Tester instance doctest.master. Methods of doctest.master
can be called directly too, if you want to do something unusual.
    Passing report=0 to testfile is especially useful then, to delay
displaying a summary. Invoke doctest.master.summarize(verbose)
when you're done fiddling.
"""
global master
if package and not module_relative:
raise ValueError("Package may only be specified for module-"
"relative paths.")
# Relativize the path
if module_relative:
package = _normalize_module(package)
filename = _module_relative_path(package, filename)
# If no name was given, then use the file's name.
if name is None:
name = os.path.basename(filename)
# Assemble the globals.
if globs is None:
globs = {}
else:
globs = globs.copy()
if extraglobs is not None:
globs.update(extraglobs)
if raise_on_error:
runner = DebugRunner(verbose=verbose, optionflags=optionflags)
else:
runner = DocTestRunner(verbose=verbose, optionflags=optionflags)
# Read the file, convert it to a test, and run it.
    with open(filename) as f:
        s = f.read()
test = parser.get_doctest(s, globs, name, filename, 0)
runner.run(test)
if report:
runner.summarize()
if master is None:
master = runner
else:
master.merge(runner)
return runner.failures, runner.tries
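# A minimal usage sketch for testfile (illustrative only; "example.txt"
# is a hypothetical file of prose with interleaved ">>>" examples,
# resolved relative to the calling module by default):
#
#     failures, tries = testfile("example.txt", optionflags=ELLIPSIS)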
def run_docstring_examples(f, globs, verbose=False, name="NoName",
compileflags=None, optionflags=0):
"""
Test examples in the given object's docstring (`f`), using `globs`
as globals. Optional argument `name` is used in failure messages.
If the optional argument `verbose` is true, then generate output
even if there are no failures.
`compileflags` gives the set of flags that should be used by the
Python compiler when running the examples. If not specified, then
it will default to the set of future-import flags that apply to
`globs`.
Optional keyword arg `optionflags` specifies options for the
testing and output. See the documentation for `testmod` for more
information.
"""
# Find, parse, and run all tests in the given module.
finder = DocTestFinder(verbose=verbose, recurse=False)
runner = DocTestRunner(verbose=verbose, optionflags=optionflags)
for test in finder.find(f, name, globs=globs):
runner.run(test, compileflags=compileflags)
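# A minimal usage sketch for run_docstring_examples (illustrative only;
# "my_function" stands in for any object whose docstring holds examples):
#
#     run_docstring_examples(my_function, globals(), verbose=True,
#                            name="my_function")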
######################################################################
## 7. Tester
######################################################################
# This is provided only for backwards compatibility. It's not
# actually used in any way.
class Tester:
def __init__(self, mod=None, globs=None, verbose=None,
isprivate=None, optionflags=0):
warnings.warn("class Tester is deprecated; "
"use class doctest.DocTestRunner instead",
DeprecationWarning, stacklevel=2)
if mod is None and globs is None:
raise TypeError("Tester.__init__: must specify mod or globs")
if mod is not None and not inspect.ismodule(mod):
raise TypeError("Tester.__init__: mod must be a module; %r" %
(mod,))
if globs is None:
globs = mod.__dict__
self.globs = globs
self.verbose = verbose
self.isprivate = isprivate
self.optionflags = optionflags
self.testfinder = DocTestFinder(_namefilter=isprivate)
self.testrunner = DocTestRunner(verbose=verbose,
optionflags=optionflags)
def runstring(self, s, name):
test = DocTestParser().get_doctest(s, self.globs, name, None, None)
if self.verbose:
print("Running string", name)
(f,t) = self.testrunner.run(test)
if self.verbose:
print(f, "of", t, "examples failed in string", name)
return (f,t)
def rundoc(self, object, name=None, module=None):
f = t = 0
tests = self.testfinder.find(object, name, module=module,
globs=self.globs)
for test in tests:
(f2, t2) = self.testrunner.run(test)
(f,t) = (f+f2, t+t2)
return (f,t)
def rundict(self, d, name, module=None):
import types
m = types.ModuleType(name)
m.__dict__.update(d)
if module is None:
module = False
return self.rundoc(m, name, module)
def run__test__(self, d, name):
import types
m = types.ModuleType(name)
m.__test__ = d
return self.rundoc(m, name)
def summarize(self, verbose=None):
return self.testrunner.summarize(verbose)
def merge(self, other):
self.testrunner.merge(other.testrunner)
######################################################################
## 8. Unittest Support
######################################################################
_unittest_reportflags = 0
def set_unittest_reportflags(flags):
"""Sets the unittest option flags.
The old flag is returned so that a runner could restore the old
value if it wished to:
>>> old = _unittest_reportflags
>>> set_unittest_reportflags(REPORT_NDIFF |
... REPORT_ONLY_FIRST_FAILURE) == old
True
>>> import doctest
>>> doctest._unittest_reportflags == (REPORT_NDIFF |
... REPORT_ONLY_FIRST_FAILURE)
True
Only reporting flags can be set:
>>> set_unittest_reportflags(ELLIPSIS)
Traceback (most recent call last):
...
ValueError: ('Only reporting flags allowed', 8)
>>> set_unittest_reportflags(old) == (REPORT_NDIFF |
... REPORT_ONLY_FIRST_FAILURE)
True
"""
global _unittest_reportflags
if (flags & REPORTING_FLAGS) != flags:
raise ValueError("Only reporting flags allowed", flags)
old = _unittest_reportflags
_unittest_reportflags = flags
return old
class DocTestCase(unittest.TestCase):
def __init__(self, test, optionflags=0, setUp=None, tearDown=None,
checker=None):
unittest.TestCase.__init__(self)
self._dt_optionflags = optionflags
self._dt_checker = checker
self._dt_test = test
self._dt_setUp = setUp
self._dt_tearDown = tearDown
def setUp(self):
test = self._dt_test
if self._dt_setUp is not None:
self._dt_setUp(test)
def tearDown(self):
test = self._dt_test
if self._dt_tearDown is not None:
self._dt_tearDown(test)
test.globs.clear()
def runTest(self):
test = self._dt_test
old = sys.stdout
new = StringIO()
optionflags = self._dt_optionflags
if not (optionflags & REPORTING_FLAGS):
# The option flags don't include any reporting flags,
# so add the default reporting flags
optionflags |= _unittest_reportflags
runner = DocTestRunner(optionflags=optionflags,
checker=self._dt_checker, verbose=False)
try:
runner.DIVIDER = "-"*70
failures, tries = runner.run(
test, out=new.write, clear_globs=False)
finally:
sys.stdout = old
if failures:
raise self.failureException(self.format_failure(new.getvalue()))
def format_failure(self, err):
test = self._dt_test
if test.lineno is None:
lineno = 'unknown line number'
else:
lineno = '%s' % test.lineno
lname = '.'.join(test.name.split('.')[-1:])
return ('Failed doctest test for %s\n'
' File "%s", line %s, in %s\n\n%s'
% (test.name, test.filename, lineno, lname, err)
)
def debug(self):
r"""Run the test case without results and without catching exceptions
The unit test framework includes a debug method on test cases
and test suites to support post-mortem debugging. The test code
is run in such a way that errors are not caught. This way a
caller can catch the errors and initiate post-mortem debugging.
The DocTestCase provides a debug method that raises
           UnexpectedException errors if there is an unexpected
exception:
>>> test = DocTestParser().get_doctest('>>> raise KeyError\n42',
... {}, 'foo', 'foo.py', 0)
>>> case = DocTestCase(test)
>>> try:
... case.debug()
... except UnexpectedException, failure:
... pass
The UnexpectedException contains the test, the example, and
the original exception:
>>> failure.test is test
True
>>> failure.example.want
'42\n'
>>> exc_info = failure.exc_info
>>> raise exc_info[0], exc_info[1], exc_info[2]
Traceback (most recent call last):
...
KeyError
If the output doesn't match, then a DocTestFailure is raised:
>>> test = DocTestParser().get_doctest('''
... >>> x = 1
... >>> x
... 2
... ''', {}, 'foo', 'foo.py', 0)
>>> case = DocTestCase(test)
>>> try:
... case.debug()
... except DocTestFailure, failure:
... pass
DocTestFailure objects provide access to the test:
>>> failure.test is test
True
As well as to the example:
>>> failure.example.want
'2\n'
and the actual output:
>>> failure.got
'1\n'
"""
self.setUp()
runner = DebugRunner(optionflags=self._dt_optionflags,
checker=self._dt_checker, verbose=False)
runner.run(self._dt_test)
self.tearDown()
def id(self):
return self._dt_test.name
def __repr__(self):
name = self._dt_test.name.split('.')
return "%s (%s)" % (name[-1], '.'.join(name[:-1]))
__str__ = __repr__
def shortDescription(self):
return "Doctest: " + self._dt_test.name
def DocTestSuite(module=None, globs=None, extraglobs=None, test_finder=None,
**options):
"""
Convert doctest tests for a module to a unittest test suite.
This converts each documentation string in a module that
contains doctest tests to a unittest test case. If any of the
tests in a doc string fail, then the test case fails. An exception
is raised showing the name of the file containing the test and a
(sometimes approximate) line number.
The `module` argument provides the module to be tested. The argument
can be either a module or a module name.
If no argument is given, the calling module is used.
A number of options may be provided as keyword arguments:
setUp
A set-up function. This is called before running the
tests in each file. The setUp function will be passed a DocTest
object. The setUp function can access the test globals as the
globs attribute of the test passed.
tearDown
A tear-down function. This is called after running the
tests in each file. The tearDown function will be passed a DocTest
object. The tearDown function can access the test globals as the
globs attribute of the test passed.
globs
A dictionary containing initial global variables for the tests.
optionflags
A set of doctest option flags expressed as an integer.
"""
if test_finder is None:
test_finder = DocTestFinder()
module = _normalize_module(module)
tests = test_finder.find(module, globs=globs, extraglobs=extraglobs)
if globs is None:
globs = module.__dict__
if not tests:
# Why do we want to do this? Because it reveals a bug that might
# otherwise be hidden.
raise ValueError(module, "has no tests")
tests.sort()
suite = unittest.TestSuite()
for test in tests:
if len(test.examples) == 0:
continue
if not test.filename:
filename = module.__file__
if filename[-4:] in (".pyc", ".pyo"):
filename = filename[:-1]
test.filename = filename
suite.addTest(DocTestCase(test, **options))
return suite
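# A minimal usage sketch for DocTestSuite (illustrative only; "mymodule"
# is a hypothetical module name):
#
#     suite = DocTestSuite("mymodule", optionflags=NORMALIZE_WHITESPACE)
#     unittest.TextTestRunner(verbosity=1).run(suite)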
class DocFileCase(DocTestCase):
def id(self):
return '_'.join(self._dt_test.name.split('.'))
def __repr__(self):
return self._dt_test.filename
__str__ = __repr__
def format_failure(self, err):
return ('Failed doctest test for %s\n File "%s", line 0\n\n%s'
% (self._dt_test.name, self._dt_test.filename, err)
)
def DocFileTest(path, module_relative=True, package=None,
globs=None, parser=DocTestParser(), **options):
if globs is None:
globs = {}
if package and not module_relative:
raise ValueError("Package may only be specified for module-"
"relative paths.")
# Relativize the path.
if module_relative:
package = _normalize_module(package)
path = _module_relative_path(package, path)
# Find the file and read it.
name = os.path.basename(path)
    with open(path) as f:
        doc = f.read()
# Convert it to a test, and wrap it in a DocFileCase.
test = parser.get_doctest(doc, globs, name, path, 0)
return DocFileCase(test, **options)
def DocFileSuite(*paths, **kw):
"""A unittest suite for one or more doctest files.
The path to each doctest file is given as a string; the
interpretation of that string depends on the keyword argument
"module_relative".
A number of options may be provided as keyword arguments:
module_relative
If "module_relative" is True, then the given file paths are
interpreted as os-independent module-relative paths. By
default, these paths are relative to the calling module's
directory; but if the "package" argument is specified, then
they are relative to that package. To ensure os-independence,
"filename" should use "/" characters to separate path
segments, and may not be an absolute path (i.e., it may not
begin with "/").
If "module_relative" is False, then the given file paths are
interpreted as os-specific paths. These paths may be absolute
or relative (to the current working directory).
package
A Python package or the name of a Python package whose directory
should be used as the base directory for module relative paths.
If "package" is not specified, then the calling module's
directory is used as the base directory for module relative
filenames. It is an error to specify "package" if
"module_relative" is False.
setUp
A set-up function. This is called before running the
tests in each file. The setUp function will be passed a DocTest
object. The setUp function can access the test globals as the
globs attribute of the test passed.
tearDown
A tear-down function. This is called after running the
tests in each file. The tearDown function will be passed a DocTest
object. The tearDown function can access the test globals as the
globs attribute of the test passed.
globs
A dictionary containing initial global variables for the tests.
optionflags
A set of doctest option flags expressed as an integer.
parser
A DocTestParser (or subclass) that should be used to extract
tests from the files.
"""
suite = unittest.TestSuite()
# We do this here so that _normalize_module is called at the right
# level. If it were called in DocFileTest, then this function
# would be the caller and we might guess the package incorrectly.
if kw.get('module_relative', True):
kw['package'] = _normalize_module(kw.get('package'))
for path in paths:
suite.addTest(DocFileTest(path, **kw))
return suite
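# A minimal usage sketch for DocFileSuite (illustrative only;
# "doc/examples.txt" is a hypothetical module-relative path):
#
#     suite = DocFileSuite("doc/examples.txt", optionflags=ELLIPSIS)
#     unittest.TextTestRunner(verbosity=1).run(suite)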
######################################################################
## 9. Debugging Support
######################################################################
def script_from_examples(s):
r"""Extract script from text with examples.
Converts text with examples to a Python script. Example input is
converted to regular code. Example output and all other words
are converted to comments:
>>> text = '''
... Here are examples of simple math.
...
... Python has super accurate integer addition
...
... >>> 2 + 2
... 5
...
... And very friendly error messages:
...
... >>> 1/0
... To Infinity
... And
... Beyond
...
... You can use logic if you want:
...
... >>> if 0:
... ... blah
... ... blah
... ...
...
... Ho hum
... '''
>>> print script_from_examples(text)
# Here are examples of simple math.
#
# Python has super accurate integer addition
#
2 + 2
# Expected:
## 5
#
# And very friendly error messages:
#
1/0
# Expected:
## To Infinity
## And
## Beyond
#
# You can use logic if you want:
#
if 0:
blah
blah
#
# Ho hum
"""
output = []
for piece in DocTestParser().parse(s):
if isinstance(piece, Example):
# Add the example's source code (strip trailing NL)
output.append(piece.source[:-1])
# Add the expected output:
want = piece.want
if want:
output.append('# Expected:')
output += ['## '+l for l in want.split('\n')[:-1]]
else:
# Add non-example text.
output += [_comment_line(l)
for l in piece.split('\n')[:-1]]
# Trim junk on both ends.
while output and output[-1] == '#':
output.pop()
while output and output[0] == '#':
output.pop(0)
# Combine the output, and return it.
return '\n'.join(output)
def testsource(module, name):
"""Extract the test sources from a doctest docstring as a script.
Provide the module (or dotted name of the module) containing the
test to be debugged and the name (within the module) of the object
with the doc string with tests to be debugged.
"""
module = _normalize_module(module)
tests = DocTestFinder().find(module)
test = [t for t in tests if t.name == name]
if not test:
raise ValueError(name, "not found in tests")
test = test[0]
testsrc = script_from_examples(test.docstring)
return testsrc
def debug_src(src, pm=False, globs=None):
"""Debug a single doctest docstring, in argument `src`'"""
testsrc = script_from_examples(src)
debug_script(testsrc, pm, globs)
def debug_script(src, pm=False, globs=None):
"Debug a test script. `src` is the script, as a string."
import pdb
# Note that tempfile.NamedTemporaryFile() cannot be used. As the
# docs say, a file so created cannot be opened by name a second time
# on modern Windows boxes, and execfile() needs to open it.
srcfilename = tempfile.mktemp(".py", "doctestdebug")
f = open(srcfilename, 'w')
f.write(src)
f.close()
try:
if globs:
globs = globs.copy()
else:
globs = {}
if pm:
try:
execfile(srcfilename, globs, globs)
except:
print(sys.exc_info()[1])
pdb.post_mortem(sys.exc_info()[2])
else:
# Note that %r is vital here. '%s' instead can, e.g., cause
# backslashes to get treated as metacharacters on Windows.
pdb.run("execfile(%r)" % srcfilename, globs, globs)
finally:
os.remove(srcfilename)
def debug(module, name, pm=False):
"""Debug a single doctest docstring.
Provide the module (or dotted name of the module) containing the
test to be debugged and the name (within the module) of the object
with the docstring with tests to be debugged.
"""
module = _normalize_module(module)
testsrc = testsource(module, name)
debug_script(testsrc, pm, module.__dict__)
######################################################################
## 10. Example Usage
######################################################################
class _TestClass:
"""
A pointless class, for sanity-checking of docstring testing.
Methods:
square()
get()
>>> _TestClass(13).get() + _TestClass(-12).get()
1
>>> hex(_TestClass(13).square().get())
'0xa9'
"""
def __init__(self, val):
"""val -> _TestClass object with associated value val.
>>> t = _TestClass(123)
>>> print t.get()
123
"""
self.val = val
def square(self):
"""square() -> square TestClass's associated value
>>> _TestClass(13).square().get()
169
"""
self.val = self.val ** 2
return self
def get(self):
"""get() -> return TestClass's associated value.
>>> x = _TestClass(-42)
>>> print x.get()
-42
"""
return self.val
__test__ = {"_TestClass": _TestClass,
"string": r"""
Example of a string object, searched as-is.
>>> x = 1; y = 2
>>> x + y, x * y
(3, 2)
""",
"bool-int equivalence": r"""
In 2.2, boolean expressions displayed
0 or 1. By default, we still accept
them. This can be disabled by passing
DONT_ACCEPT_TRUE_FOR_1 to the new
optionflags argument.
>>> 4 == 4
1
>>> 4 == 4
True
>>> 4 > 4
0
>>> 4 > 4
False
""",
"blank lines": r"""
Blank lines can be marked with <BLANKLINE>:
>>> print 'foo\n\nbar\n'
foo
<BLANKLINE>
bar
<BLANKLINE>
""",
"ellipsis": r"""
If the ellipsis flag is used, then '...' can be used to
elide substrings in the desired output:
>>> print range(1000) #doctest: +ELLIPSIS
[0, 1, 2, ..., 999]
""",
"whitespace normalization": r"""
If the whitespace normalization flag is used, then
differences in whitespace are ignored.
>>> print range(30) #doctest: +NORMALIZE_WHITESPACE
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26,
27, 28, 29]
""",
}
def _test():
r = unittest.TextTestRunner()
r.run(DocTestSuite())
if __name__ == "__main__":
_test()
| lgpl-3.0 |
b-adkins/ros_pomdp | src/pomdp_node/test/test_pomdp_run.py | 1 | 5019 | #!/usr/bin/env python
# The MIT License (MIT)
#
# Copyright (c) 2014 Boone "Bea" Adkins
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
##
# Generic node-level POMDP test executable. Client for a pomdp_run node.
#
# Arguments (comprising the test case) can be passed from a rostest file or the
# command line.
#
# @author Bea Adkins
# @date 2013-07-29
#
PKG = 'pomdp_node'
import roslib; roslib.load_manifest(PKG)
import exceptions
import sys
from os.path import basename
import unittest
import rospy
import rostest
from pomdp_node import default
from pomdp_node import msg
NODE_NAME = 'test_pomdp_run'
def usage():
return ("USAGE: %s Z0 A0 Z1 A1...\n"
"For all i, tests Zi -> POMDP -> Ai.\n"
" Zi - observations sent (integers)\n"
" Ai - action expected (integers), or -1 for failure"%basename(sys.argv[0]))
#@todo Add service name.
## Utility function that tests if an object can be converted to an integer.
#
# @param obj Object to be tested as an integer.
# @return True if obj can be converted.
def isInt(obj):
try:
int(obj)
return True
except ValueError:
return False
class TestPOMDPRun(unittest.TestCase):
def __init__(self, *args):
super(TestPOMDPRun, self).__init__(*args)
rospy.init_node(NODE_NAME)
self.args = args
# Initialize communication
self.observation_pub = rospy.Publisher(default.OBSERVATION_TOPIC, msg.observation)
self.action_sub = rospy.Subscriber(default.ACTION_TOPIC, msg.action, self.save_action)
rospy.sleep(.5) # Give pub and sub time to start.
## Saves action from Publisher
def save_action(self, msg):
self.test_action = msg.action
def test_pomdp_run(self):
# Get command line arguments
args = rospy.myargv()
# Read integers
numbers = [int(arg) for arg in args if isInt(arg)]
if(len(numbers)%2 != 0):
raise ValueError("Test arguments must contain equal numbers observations and actions.")
if(len(numbers) < 2):
raise ValueError("Test arguments need at least one observation -> action pair.")
# Pair alternating integers.
obs_to_actions = zip(numbers[0::2], numbers[1::2])
# Call the actual test function for each pair
for index,obs_to_action in enumerate(obs_to_actions, start=1):
self._test_pomdp_run(index, *obs_to_action)
##
# Runs one cycle of a POMDP using the observation and action publish/subscribers and checks its
# returned action.
#
# @param index Index of this observation -> action pair.
# @param observation Input observation. (Integer.)
# @param expected_action Expected action. (Integer.) Or -1 for an expected error.
#
def _test_pomdp_run(self, index, observation, expected_action):
# Publish recorded observation
self.observation_pub.publish(msg.observation(observation))
# Give the subscriber time to return
TIMEOUT = .5
try:
rospy.wait_for_message(default.ACTION_TOPIC, msg.action, timeout=TIMEOUT)
# Timeout
except rospy.ROSException:
# Timeout expected
if expected_action == -1:
return # Pass
# Timeouts are an error
else:
self.fail("Unexpected timeout on action topic %s" % (action_sub.getTopic()))
# Check returned action.
self.assertEqual(self.test_action, expected_action,
"For #%i: observation %i, returned action '%i' != expected action '%i'!"
% (index, observation, self.test_action, expected_action))
if __name__ == '__main__':
# Display help message.
if(sys.argv[1] in ["help", "--help", "-h"]):
#rospy.loginfo("\n%s", usage());
print usage()
sys.exit(0)
rostest.rosrun(PKG, NODE_NAME, TestPOMDPRun, sys.argv)
| mit |
APCVSRepo/sdl_implementation_reference | src/components/dbus/codegen/ford_xml_parser.py | 13 | 9542 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# @file ford_xml_parser.py
# @brief Parser for HMI_API.xml
#
# This file is a part of HMI D-Bus layer.
#
# Copyright (c) 2013, Ford Motor Company
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with the
# distribution.
#
# Neither the name of the Ford Motor Company nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 'A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from xml.etree import ElementTree
from collections import OrderedDict
node_name = '/com/ford/hmi'
class ParamDesc:
pass
class FordXmlParser:
def __init__(self, in_el_tree, interface_path=None):
self.interface_path = interface_path
self.el_tree = in_el_tree
self.find_enums()
self.find_structs()
def find_enums(self):
self.enums = dict()
for interface_el in self.el_tree.findall('interface'):
interface_name = interface_el.get('name')
for enum_el in interface_el.findall('enum'):
enum_name = enum_el.get('name')
self.enums[(interface_name,enum_name)] = enum_el
def make_param_desc(self, param_el, iface=None):
param_desc = ParamDesc()
param_desc.name = param_el.get('name')
param_desc.type = param_el.get('type')
param_desc.enum = False
param_desc.struct = False
param_desc.fulltype = param_desc.type
if param_el.get('mandatory') == 'false':
param_desc.mandatory = False
else:
param_desc.mandatory = True
if param_el.get('array') == 'true':
param_desc.array = True
else:
param_desc.array = False
param_desc.minValue = param_el.get('minvalue') if param_el.get('minvalue') else 0
param_desc.maxValue = param_el.get('maxvalue')
param_desc.minLength = param_el.get('minlength') if param_el.get('minlength') else 0
param_desc.maxLength = param_el.get('maxlength')
param_desc.minSize = param_el.get('minsize')
param_desc.maxSize = param_el.get('maxsize')
param_desc.restricted = param_desc.minValue != None or \
param_desc.maxValue != None or \
param_desc.minLength > 0 or \
param_desc.maxLength > 0
param_desc.restrictedArray = param_desc.array and (param_desc.minSize > 0 or param_desc.maxSize > 0)
if iface is None:
return param_desc
if param_desc.type not in ['Integer', 'String', 'Boolean', 'Float']:
param_type = param_desc.type.split('.')
if len(param_type) > 1:
param_type = (param_type[0], param_type[1])
else:
param_type = (iface, param_type[0])
param_desc.fulltype = param_type
if param_type in self.enums: param_desc.enum = True
elif param_type in self.structs: param_desc.struct = True
return param_desc
def find_structs(self):
self.structs = OrderedDict()
for interface_el in self.el_tree.findall('interface'):
interface_name = interface_el.get('name')
for struct_el in interface_el.findall('struct'):
struct_name = struct_el.get('name')
self.structs[(interface_name, struct_name)] = []
for interface_el in self.el_tree.findall('interface'):
interface_name = interface_el.get('name')
for struct_el in interface_el.findall('struct'):
struct_name = struct_el.get('name')
for param_el in struct_el.findall('param'):
param_desc = self.make_param_desc(param_el, interface_name)
self.structs[(interface_name, struct_name)].append(param_desc)
def convert_struct_to_dbus(self, param_type):
ret = '('
struct = self.structs[param_type]
for param in struct:
ret = ret + self.convert_to_dbus_type(param)
ret = ret + ')'
return ret
def convert_to_dbus_type(self, param):
if param.type == 'Integer': restype = 'i'
elif param.type == 'String': restype = 's'
elif param.type == 'Boolean': restype = 'b'
elif param.type == 'Float': restype = 'd' # D-Bus double
elif param.enum: restype = 'i' # D-Bus 32-bit signed int
elif param.struct: restype = self.convert_struct_to_dbus(param.fulltype)
else: raise RuntimeError('Unknown type: ' + param.type)
if param.array: restype = 'a' + restype
if not param.mandatory: restype = '(b' + restype + ')'
return restype
def find_notifications(self, interface_el):
notifications = list()
for function_el in interface_el.findall('function[@messagetype="notification"]'):
notifications.append(function_el)
return notifications
def find_notifications_by_provider(self, interface_el, provider):
notifications = list()
condition = 'function[@messagetype="notification"][@provider="%s"]' % provider
for function_el in interface_el.findall(condition):
notifications.append(function_el)
return notifications
def find_request_response_pairs(self, interface_el):
result = list()
request_els = interface_el.findall('function[@messagetype="request"]')
response_els = interface_el.findall('function[@messagetype="response"]')
for request_el in request_els:
name = request_el.get('name')
response_el = next(r for r in response_els if r.get('name') == name)
result.append((request_el, response_el))
return result
def find_request_response_pairs_by_provider(self, interface_el, provider):
result = list()
condition = 'function[@messagetype="request"][@provider="%s"]' % provider
request_els = interface_el.findall(condition)
response_els = interface_el.findall('function[@messagetype="response"]')
for request_el in request_els:
name = request_el.get('name')
response_el = next(r for r in response_els if r.get('name') == name)
result.append((request_el, response_el))
return result
def convert_to_signal(self, notification_el, interface):
result = ElementTree.Element('signal')
result.set('name', notification_el.get('name'))
for param_el in notification_el.findall('param'):
self.create_arg_element(result, param_el, interface)
return result
def convert_to_method(self, (request_el, response_el), interface):
result = ElementTree.Element('method')
result.set('name', request_el.get('name'))
for param_el in request_el.findall('param'):
arg_el = self.create_arg_element(result, param_el, interface)
arg_el.set('direction', 'in')
arg_el = ElementTree.SubElement(result, 'arg')
arg_el.set('name', 'retCode')
arg_el.set('type', 'i')
arg_el.set('direction', 'out')
for param_el in response_el.findall('param'):
arg_el = self.create_arg_element(result, param_el, interface)
arg_el.set('direction', 'out')
return result
def create_arg_element(self, parent, param_el, interface):
arg_el = ElementTree.SubElement(parent, 'arg')
arg_el.set('name', param_el.get('name'))
arg_el.set('type', self.convert_to_dbus_type(self.make_param_desc(param_el, interface)))
return arg_el
def create_introspection_iface_el(self, interface_el, provider):
interface = interface_el.get('name')
interface_name = self.interface_path + '.' + interface
notifications = self.find_notifications_by_provider(interface_el, provider)
signals = [self.convert_to_signal(n, interface) for n in notifications]
request_responses = self.find_request_response_pairs_by_provider(interface_el, provider)
methods = [self.convert_to_method(r, interface) for r in request_responses]
if signals or methods:
el = ElementTree.Element('interface', attrib={'name':interface_name})
for m in methods: el.append(m)
for s in signals: el.append(s)
return el
| bsd-3-clause |
penelopy/luigi | test/server_test.py | 7 | 5438 | # -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import functools
import os
import multiprocessing
import random
import shutil
import signal
import time
import tempfile
from helpers import unittest, with_config, skipOnTravis
import luigi.rpc
import luigi.server
from luigi.scheduler import CentralPlannerScheduler
from luigi.six.moves.urllib.parse import urlencode, ParseResult
from tornado.testing import AsyncHTTPTestCase
from nose.plugins.attrib import attr
try:
from unittest import mock
except ImportError:
import mock
class ServerTestBase(AsyncHTTPTestCase):
def get_app(self):
return luigi.server.app(CentralPlannerScheduler())
def setUp(self):
super(ServerTestBase, self).setUp()
self._old_fetch = luigi.rpc.RemoteScheduler._fetch
def _fetch(obj, url, body, *args, **kwargs):
body = urlencode(body).encode('utf-8')
response = self.fetch(url, body=body, method='POST')
if response.code >= 400:
raise luigi.rpc.RPCError(
'Error when connecting to remote scheduler'
)
return response.body.decode('utf-8')
luigi.rpc.RemoteScheduler._fetch = _fetch
def tearDown(self):
super(ServerTestBase, self).tearDown()
luigi.rpc.RemoteScheduler._fetch = self._old_fetch
class ServerTest(ServerTestBase):
def test_visualizer(self):
page = self.fetch('/').body
self.assertTrue(page.find(b'<title>') != -1)
def _test_404(self, path):
response = self.fetch(path)
self.assertEqual(response.code, 404)
def test_404(self):
self._test_404('/foo')
def test_api_404(self):
self._test_404('/api/foo')
class INETServerClient(object):
def __init__(self):
self.port = random.randint(1024, 9999)
def run_server(self):
luigi.server.run(api_port=self.port, address='127.0.0.1')
def scheduler(self):
return luigi.rpc.RemoteScheduler('http://localhost:' + str(self.port))
class UNIXServerClient(object):
def __init__(self):
self.tempdir = tempfile.mkdtemp()
self.unix_socket = os.path.join(self.tempdir, 'luigid.sock')
def run_server(self):
luigi.server.run(unix_socket=self.unix_socket)
def scheduler(self):
url = ParseResult(
scheme='http+unix',
netloc=self.unix_socket,
path='',
params='',
query='',
fragment='',
).geturl()
return luigi.rpc.RemoteScheduler(url)
class ServerTestRun(unittest.TestCase):
"""Test to start and stop the server in a more "standard" way
"""
server_client_class = INETServerClient
def start_server(self):
self._process = multiprocessing.Process(
target=self.server_client.run_server
)
self._process.start()
time.sleep(0.1) # wait for server to start
self.sch = self.server_client.scheduler()
self.sch._wait = lambda: None
def stop_server(self):
self._process.terminate()
self._process.join(1)
if self._process.is_alive():
os.kill(self._process.pid, signal.SIGKILL)
def setUp(self):
self.server_client = self.server_client_class()
state_path = tempfile.mktemp(suffix=self.id())
self.addCleanup(functools.partial(os.unlink, state_path))
luigi.configuration.get_config().set('scheduler', 'state_path', state_path)
self.start_server()
def tearDown(self):
self.stop_server()
def test_ping(self):
self.sch.ping(worker='xyz')
def test_raw_ping(self):
self.sch._request('/api/ping', {'worker': 'xyz'})
def test_raw_ping_extended(self):
self.sch._request('/api/ping', {'worker': 'xyz', 'foo': 'bar'})
def test_404(self):
with self.assertRaises(luigi.rpc.RPCError):
self.sch._request('/api/fdsfds', {'dummy': 1})
@skipOnTravis('https://travis-ci.org/spotify/luigi/jobs/72953884')
def test_save_state(self):
self.sch.add_task('X', 'B', deps=('A',))
self.sch.add_task('X', 'A')
self.assertEqual(self.sch.get_work('X')['task_id'], 'A')
self.stop_server()
self.start_server()
work = self.sch.get_work('X')['running_tasks'][0]
self.assertEqual(work['task_id'], 'A')
class URLLibServerTestRun(ServerTestRun):
@mock.patch.object(luigi.rpc, 'HAS_REQUESTS', False)
def start_server(self, *args, **kwargs):
super(URLLibServerTestRun, self).start_server(*args, **kwargs)
@attr('unix')
class UNIXServerTestRun(unittest.TestCase):
server_client_class = UNIXServerClient
def tearDown(self):
super(UNIXServerTestRun, self).tearDown()
shutil.rmtree(self.server_client.tempdir)
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
pierotofy/OpenDroneMap | opendm/osfm.py | 1 | 20731 | """
OpenSfM related utils
"""
import os, shutil, sys, json, argparse
import yaml
from opendm import io
from opendm import log
from opendm import system
from opendm import context
from opendm import camera
from opensfm.large import metadataset
from opensfm.large import tools
from opensfm.commands import undistort
class OSFMContext:
def __init__(self, opensfm_project_path):
self.opensfm_project_path = opensfm_project_path
def run(self, command):
system.run('/usr/bin/env python3 %s/bin/opensfm %s "%s"' %
(context.opensfm_path, command, self.opensfm_project_path))
def is_reconstruction_done(self):
tracks_file = os.path.join(self.opensfm_project_path, 'tracks.csv')
reconstruction_file = os.path.join(self.opensfm_project_path, 'reconstruction.json')
return io.file_exists(tracks_file) and io.file_exists(reconstruction_file)
def reconstruct(self, rerun=False):
tracks_file = os.path.join(self.opensfm_project_path, 'tracks.csv')
reconstruction_file = os.path.join(self.opensfm_project_path, 'reconstruction.json')
if not io.file_exists(tracks_file) or rerun:
self.run('create_tracks')
else:
log.ODM_WARNING('Found a valid OpenSfM tracks file in: %s' % tracks_file)
if not io.file_exists(reconstruction_file) or rerun:
self.run('reconstruct')
else:
log.ODM_WARNING('Found a valid OpenSfM reconstruction file in: %s' % reconstruction_file)
# Check that a reconstruction file has been created
if not self.reconstructed():
log.ODM_ERROR("The program could not process this dataset using the current settings. "
"Check that the images have enough overlap, "
"that there are enough recognizable features "
"and that the images are in focus. "
"You could also try to increase the --min-num-features parameter."
"The program will now exit.")
exit(1)
def setup(self, args, images_path, photos, reconstruction, append_config = [], rerun=False):
"""
Setup a OpenSfM project
"""
if rerun and io.dir_exists(self.opensfm_project_path):
shutil.rmtree(self.opensfm_project_path)
if not io.dir_exists(self.opensfm_project_path):
system.mkdir_p(self.opensfm_project_path)
list_path = io.join_paths(self.opensfm_project_path, 'image_list.txt')
if not io.file_exists(list_path) or rerun:
# create file list
has_alt = True
has_gps = False
with open(list_path, 'w') as fout:
for photo in photos:
if not photo.altitude:
has_alt = False
if photo.latitude is not None and photo.longitude is not None:
has_gps = True
fout.write('%s\n' % io.join_paths(images_path, photo.filename))
# check for image_groups.txt (split-merge)
image_groups_file = os.path.join(args.project_path, "image_groups.txt")
if io.file_exists(image_groups_file):
log.ODM_INFO("Copied image_groups.txt to OpenSfM directory")
io.copy(image_groups_file, os.path.join(self.opensfm_project_path, "image_groups.txt"))
# check for cameras
if args.cameras:
try:
camera_overrides = camera.get_opensfm_camera_models(args.cameras)
with open(os.path.join(self.opensfm_project_path, "camera_models_overrides.json"), 'w') as f:
f.write(json.dumps(camera_overrides))
log.ODM_INFO("Wrote camera_models_overrides.json to OpenSfM directory")
except Exception as e:
log.ODM_WARNING("Cannot set camera_models_overrides.json: %s" % str(e))
use_bow = False
feature_type = "SIFT"
matcher_neighbors = args.matcher_neighbors
if matcher_neighbors != 0 and reconstruction.multi_camera is not None:
matcher_neighbors *= len(reconstruction.multi_camera)
log.ODM_INFO("Increasing matcher neighbors to %s to accomodate multi-camera setup" % matcher_neighbors)
log.ODM_INFO("Multi-camera setup, using BOW matching")
use_bow = True
# GPSDOP override if we have GPS accuracy information (such as RTK)
if 'gps_accuracy_is_set' in args:
log.ODM_INFO("Forcing GPS DOP to %s for all images" % args.gps_accuracy)
log.ODM_INFO("Writing exif overrides")
exif_overrides = {}
for p in photos:
if 'gps_accuracy_is_set' in args:
dop = args.gps_accuracy
elif p.get_gps_dop() is not None:
dop = p.get_gps_dop()
else:
dop = args.gps_accuracy # default value
if p.latitude is not None and p.longitude is not None:
exif_overrides[p.filename] = {
'gps': {
'latitude': p.latitude,
'longitude': p.longitude,
'altitude': p.altitude if p.altitude is not None else 0,
'dop': dop,
}
}
with open(os.path.join(self.opensfm_project_path, "exif_overrides.json"), 'w') as f:
f.write(json.dumps(exif_overrides))
# Check image masks
masks = []
for p in photos:
if p.mask is not None:
masks.append((p.filename, os.path.join(images_path, p.mask)))
if masks:
log.ODM_INFO("Found %s image masks" % len(masks))
with open(os.path.join(self.opensfm_project_path, "mask_list.txt"), 'w') as f:
for fname, mask in masks:
f.write("{} {}\n".format(fname, mask))
# Compute feature_process_size
feature_process_size = 2048 # default
if 'resize_to_is_set' in args:
# Legacy
log.ODM_WARNING("Legacy option --resize-to (this might be removed in a future version). Use --feature-quality instead.")
feature_process_size = int(args.resize_to)
else:
feature_quality_scale = {
'ultra': 1,
'high': 0.5,
'medium': 0.25,
'low': 0.125,
'lowest': 0.0675,
}
# Find largest photo dimension
max_dim = 0
for p in photos:
if p.width is None:
continue
max_dim = max(max_dim, max(p.width, p.height))
if max_dim > 0:
log.ODM_INFO("Maximum photo dimensions: %spx" % str(max_dim))
feature_process_size = int(max_dim * feature_quality_scale[args.feature_quality])
else:
log.ODM_WARNING("Cannot compute max image dimensions, going with defaults")
# create config file for OpenSfM
config = [
"use_exif_size: no",
"flann_algorithm: KDTREE", # more stable, faster than KMEANS
"feature_process_size: %s" % feature_process_size,
"feature_min_frames: %s" % args.min_num_features,
"processes: %s" % args.max_concurrency,
"matching_gps_neighbors: %s" % matcher_neighbors,
"matching_gps_distance: %s" % args.matcher_distance,
"depthmap_method: %s" % args.opensfm_depthmap_method,
"depthmap_resolution: %s" % args.depthmap_resolution,
"depthmap_min_patch_sd: %s" % args.opensfm_depthmap_min_patch_sd,
"depthmap_min_consistent_views: %s" % args.opensfm_depthmap_min_consistent_views,
"optimize_camera_parameters: %s" % ('no' if args.use_fixed_camera_params or args.cameras else 'yes'),
"undistorted_image_format: tif",
"bundle_outlier_filtering_type: AUTO",
"align_orientation_prior: vertical",
"triangulation_type: ROBUST",
"bundle_common_position_constraints: %s" % ('no' if reconstruction.multi_camera is None else 'yes'),
]
if args.camera_lens != 'auto':
config.append("camera_projection_type: %s" % args.camera_lens.upper())
if not has_gps:
log.ODM_INFO("No GPS information, using BOW matching")
use_bow = True
feature_type = args.feature_type.upper()
if use_bow:
config.append("matcher_type: WORDS")
# Cannot use SIFT with BOW
if feature_type == "SIFT":
log.ODM_WARNING("Using BOW matching, will use HAHOG feature type, not SIFT")
feature_type = "HAHOG"
config.append("feature_type: %s" % feature_type)
if has_alt:
log.ODM_INFO("Altitude data detected, enabling it for GPS alignment")
config.append("use_altitude_tag: yes")
gcp_path = reconstruction.gcp.gcp_path
if has_alt or gcp_path:
config.append("align_method: auto")
else:
config.append("align_method: orientation_prior")
if args.use_hybrid_bundle_adjustment:
log.ODM_INFO("Enabling hybrid bundle adjustment")
config.append("bundle_interval: 100") # Bundle after adding 'bundle_interval' cameras
config.append("bundle_new_points_ratio: 1.2") # Bundle when (new points) / (bundled points) > bundle_new_points_ratio
config.append("local_bundle_radius: 1") # Max image graph distance for images to be included in local bundle adjustment
else:
config.append("local_bundle_radius: 0")
if gcp_path:
config.append("bundle_use_gcp: yes")
if not args.force_gps:
config.append("bundle_use_gps: no")
io.copy(gcp_path, self.path("gcp_list.txt"))
config = config + append_config
# write config file
log.ODM_INFO(config)
config_filename = self.get_config_file_path()
with open(config_filename, 'w') as fout:
fout.write("\n".join(config))
else:
log.ODM_WARNING("%s already exists, not rerunning OpenSfM setup" % list_path)
def get_config_file_path(self):
return io.join_paths(self.opensfm_project_path, 'config.yaml')
def reconstructed(self):
if not io.file_exists(self.path("reconstruction.json")):
return False
with open(self.path("reconstruction.json"), 'r') as f:
return f.readline().strip() != "[]"
def extract_metadata(self, rerun=False):
metadata_dir = self.path("exif")
if not io.dir_exists(metadata_dir) or rerun:
self.run('extract_metadata')
def is_feature_matching_done(self):
features_dir = self.path("features")
matches_dir = self.path("matches")
return io.dir_exists(features_dir) and io.dir_exists(matches_dir)
def feature_matching(self, rerun=False):
features_dir = self.path("features")
matches_dir = self.path("matches")
if not io.dir_exists(features_dir) or rerun:
self.run('detect_features')
else:
log.ODM_WARNING('Detect features already done: %s exists' % features_dir)
if not io.dir_exists(matches_dir) or rerun:
self.run('match_features')
else:
log.ODM_WARNING('Match features already done: %s exists' % matches_dir)
def align_reconstructions(self, rerun):
alignment_file = self.path('alignment_done.txt')
if not io.file_exists(alignment_file) or rerun:
log.ODM_INFO("Aligning submodels...")
meta_data = metadataset.MetaDataSet(self.opensfm_project_path)
reconstruction_shots = tools.load_reconstruction_shots(meta_data)
transformations = tools.align_reconstructions(reconstruction_shots,
tools.partial_reconstruction_name,
True)
tools.apply_transformations(transformations)
self.touch(alignment_file)
else:
log.ODM_WARNING('Found an alignment done progress file in: %s' % alignment_file)
def touch(self, file):
with open(file, 'w') as fout:
fout.write("Done!\n")
def path(self, *paths):
return os.path.join(self.opensfm_project_path, *paths)
def extract_cameras(self, output, rerun=False):
if not os.path.exists(output) or rerun:
try:
reconstruction_file = self.path("reconstruction.json")
with open(output, 'w') as fout:
fout.write(json.dumps(camera.get_cameras_from_opensfm(reconstruction_file), indent=4))
except Exception as e:
log.ODM_WARNING("Cannot export cameras to %s. %s." % (output, str(e)))
else:
log.ODM_INFO("Already extracted cameras")
def convert_and_undistort(self, rerun=False, imageFilter=None):
log.ODM_INFO("Undistorting %s ..." % self.opensfm_project_path)
undistorted_images_path = self.path("undistorted", "images")
if not io.dir_exists(undistorted_images_path) or rerun:
cmd = undistort.Command(imageFilter)
parser = argparse.ArgumentParser()
cmd.add_arguments(parser)
cmd.run(parser.parse_args([self.opensfm_project_path]))
else:
log.ODM_WARNING("Found an undistorted directory in %s" % undistorted_images_path)
def update_config(self, cfg_dict):
cfg_file = self.get_config_file_path()
log.ODM_INFO("Updating %s" % cfg_file)
if os.path.exists(cfg_file):
try:
with open(cfg_file) as fin:
cfg = yaml.safe_load(fin)
for k, v in cfg_dict.items():
cfg[k] = v
log.ODM_INFO("%s: %s" % (k, v))
with open(cfg_file, 'w') as fout:
fout.write(yaml.dump(cfg, default_flow_style=False))
except Exception as e:
log.ODM_WARNING("Cannot update configuration file %s: %s" % (cfg_file, str(e)))
else:
log.ODM_WARNING("Tried to update configuration, but %s does not exist." % cfg_file)
def name(self):
return os.path.basename(os.path.abspath(self.path("..")))
def get_submodel_argv(args, submodels_path = None, submodel_name = None):
"""
Gets argv for a submodel starting from the args passed to the application startup.
Additionally, if submodels_path and submodel_name are passed, the function
handles the <project name> value and --project-path detection / override.
When all arguments are set to None, --project-path and project name are always removed.
:return the same as argv, but removing references to --split,
setting/replacing --project-path and name
removing --rerun-from, --rerun, --rerun-all, --sm-cluster
removing --pc-las, --pc-csv, --pc-ept, --tiles flags (processing these is wasteful)
adding --orthophoto-cutline
adding --dem-euclidean-map
adding --skip-3dmodel (split-merge does not support 3D model merging)
tweaking --crop if necessary (DEM merging makes assumption about the area of DEMs and their euclidean maps that require cropping. If cropping is skipped, this leads to errors.)
removing --gcp (the GCP path if specified is always "gcp_list.txt")
reading the contents of --cameras
"""
assure_always = ['orthophoto_cutline', 'dem_euclidean_map', 'skip_3dmodel']
remove_always = ['split', 'split_overlap', 'rerun_from', 'rerun', 'gcp', 'end_with', 'sm_cluster', 'rerun_all', 'pc_csv', 'pc_las', 'pc_ept', 'tiles']
read_json_always = ['cameras']
argv = sys.argv
result = [argv[0]] # Startup script (/path/to/run.py)
args_dict = vars(args).copy()
set_keys = [k[:-len("_is_set")] for k in args_dict.keys() if k.endswith("_is_set")]
# Handle project name and project path (special case)
if "name" in set_keys:
del args_dict["name"]
set_keys.remove("name")
if "project_path" in set_keys:
del args_dict["project_path"]
set_keys.remove("project_path")
# Remove parameters
set_keys = [k for k in set_keys if k not in remove_always]
# Assure parameters
for k in assure_always:
if not k in set_keys:
set_keys.append(k)
args_dict[k] = True
# Read JSON always
for k in read_json_always:
if k in set_keys:
try:
if isinstance(args_dict[k], str):
args_dict[k] = io.path_or_json_string_to_dict(args_dict[k])
if isinstance(args_dict[k], dict):
args_dict[k] = json.dumps(args_dict[k])
except ValueError as e:
log.ODM_WARNING("Cannot parse/read JSON: {}".format(str(e)))
# Handle crop (cannot be zero for split/merge)
if "crop" in set_keys:
crop_value = float(args_dict["crop"])
if crop_value == 0:
crop_value = 0.015625
args_dict["crop"] = crop_value
# Populate result
for k in set_keys:
result.append("--%s" % k.replace("_", "-"))
# No second value for booleans
if isinstance(args_dict[k], bool) and args_dict[k] == True:
continue
result.append(str(args_dict[k]))
if submodels_path:
result.append("--project-path")
result.append(submodels_path)
if submodel_name:
result.append(submodel_name)
return result
def get_submodel_args_dict(args):
submodel_argv = get_submodel_argv(args)
result = {}
i = 0
while i < len(submodel_argv):
arg = submodel_argv[i]
next_arg = None if i == len(submodel_argv) - 1 else submodel_argv[i + 1]
if next_arg and arg.startswith("--"):
if next_arg.startswith("--"):
result[arg[2:]] = True
else:
result[arg[2:]] = next_arg
i += 1
elif arg.startswith("--"):
result[arg[2:]] = True
i += 1
return result
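The loop above re-pairs `--flag value` tokens back into a dictionary, treating a flag followed by another flag (or by nothing) as a boolean switch. A minimal standalone sketch of the same parsing, separate from the module (the sample flags are just illustrative):

```python
# Standalone sketch of the argv-to-dict parsing used by get_submodel_args_dict.
def argv_to_dict(argv):
    result = {}
    i = 0
    while i < len(argv):
        arg = argv[i]
        next_arg = None if i == len(argv) - 1 else argv[i + 1]
        if next_arg and arg.startswith("--"):
            if next_arg.startswith("--"):
                # flag followed by another flag: boolean switch
                result[arg[2:]] = True
            else:
                # flag followed by a value: consume the value token too
                result[arg[2:]] = next_arg
                i += 1
        elif arg.startswith("--"):
            # trailing flag with nothing after it
            result[arg[2:]] = True
        i += 1
    return result

parsed = argv_to_dict(["--orthophoto-cutline", "--crop", "0.015625", "--skip-3dmodel"])
# parsed == {"orthophoto-cutline": True, "crop": "0.015625", "skip-3dmodel": True}
```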
def get_submodel_paths(submodels_path, *paths):
"""
:return Existing paths for all submodels
"""
result = []
if not os.path.exists(submodels_path):
return result
for f in os.listdir(submodels_path):
if f.startswith('submodel'):
p = os.path.join(submodels_path, f, *paths)
if os.path.exists(p):
result.append(p)
else:
log.ODM_WARNING("Missing %s from submodel %s" % (p, f))
return result
def get_all_submodel_paths(submodels_path, *all_paths):
"""
:return Existing, multiple paths for all submodels as a nested list (all or nothing for each submodel)
if a single file is missing from the submodel, no files are returned for that submodel.
(i.e. get_all_submodel_paths("path/", "odm_orthophoto.tif", "dem.tif")) -->
[["path/submodel_0000/odm_orthophoto.tif", "path/submodel_0000/dem.tif"],
["path/submodel_0001/odm_orthophoto.tif", "path/submodel_0001/dem.tif"]]
"""
result = []
if not os.path.exists(submodels_path):
return result
for f in os.listdir(submodels_path):
if f.startswith('submodel'):
all_found = True
for ap in all_paths:
p = os.path.join(submodels_path, f, ap)
if not os.path.exists(p):
log.ODM_WARNING("Missing %s from submodel %s" % (p, f))
all_found = False
if all_found:
result.append([os.path.join(submodels_path, f, ap) for ap in all_paths])
return result | gpl-3.0 |
iradul/phantomjs-clone | src/qt/qtwebkit/Tools/TestResultServer/model/jsonresults.py | 121 | 14752 | # Copyright (C) 2010 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from datetime import datetime
from django.utils import simplejson
import logging
import sys
import traceback
from model.testfile import TestFile
JSON_RESULTS_FILE = "results.json"
JSON_RESULTS_FILE_SMALL = "results-small.json"
JSON_RESULTS_PREFIX = "ADD_RESULTS("
JSON_RESULTS_SUFFIX = ");"
JSON_RESULTS_VERSION_KEY = "version"
JSON_RESULTS_BUILD_NUMBERS = "buildNumbers"
JSON_RESULTS_TESTS = "tests"
JSON_RESULTS_RESULTS = "results"
JSON_RESULTS_TIMES = "times"
JSON_RESULTS_PASS = "P"
JSON_RESULTS_SKIP = "X"
JSON_RESULTS_NO_DATA = "N"
JSON_RESULTS_MIN_TIME = 3
JSON_RESULTS_HIERARCHICAL_VERSION = 4
JSON_RESULTS_MAX_BUILDS = 500
JSON_RESULTS_MAX_BUILDS_SMALL = 100
def _add_path_to_trie(path, value, trie):
if not "/" in path:
trie[path] = value
return
directory, slash, rest = path.partition("/")
if not directory in trie:
trie[directory] = {}
_add_path_to_trie(rest, value, trie[directory])
def _trie_json_tests(tests):
"""Breaks a test name into chunks by directory and puts the test time as a value in the lowest part, e.g.
foo/bar/baz.html: VALUE1
foo/bar/baz1.html: VALUE2
becomes
foo: {
bar: {
baz.html: VALUE1,
baz1.html: VALUE2
}
}
"""
trie = {}
for test, value in tests.iteritems():
_add_path_to_trie(test, value, trie)
return trie
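The docstring above describes turning slash-separated test names into a nested dict. A self-contained sketch of the same recursion, reproducing the docstring's example:

```python
# Standalone sketch of the path-trie construction described above.
def add_path_to_trie(path, value, trie):
    if "/" not in path:
        trie[path] = value
        return
    directory, _, rest = path.partition("/")
    trie.setdefault(directory, {})
    add_path_to_trie(rest, value, trie[directory])

trie = {}
add_path_to_trie("foo/bar/baz.html", "VALUE1", trie)
add_path_to_trie("foo/bar/baz1.html", "VALUE2", trie)
# trie == {"foo": {"bar": {"baz.html": "VALUE1", "baz1.html": "VALUE2"}}}
```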
def _is_directory(subtree):
# FIXME: Some data got corrupted and has results/times at the directory level.
# Once the data is fixed, this should assert that the directory level does not have
# results or times and just return "JSON_RESULTS_RESULTS not in subtree".
if JSON_RESULTS_RESULTS not in subtree:
return True
for key in subtree:
if key not in (JSON_RESULTS_RESULTS, JSON_RESULTS_TIMES):
del subtree[JSON_RESULTS_RESULTS]
del subtree[JSON_RESULTS_TIMES]
return True
return False
class JsonResults(object):
@classmethod
def _strip_prefix_suffix(cls, data):
# FIXME: Stop stripping jsonp callback once we upload pure json everywhere.
if data.startswith(JSON_RESULTS_PREFIX) and data.endswith(JSON_RESULTS_SUFFIX):
return data[len(JSON_RESULTS_PREFIX):len(data) - len(JSON_RESULTS_SUFFIX)]
return data
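The method above unwraps the legacy `ADD_RESULTS(...);` jsonp envelope only when both ends match. A standalone sketch with a made-up payload:

```python
# Standalone sketch of the jsonp stripping above; the sample payload is invented.
PREFIX, SUFFIX = "ADD_RESULTS(", ");"

def strip_prefix_suffix(data):
    # remove the jsonp wrapper only when both ends are present
    if data.startswith(PREFIX) and data.endswith(SUFFIX):
        return data[len(PREFIX):len(data) - len(SUFFIX)]
    return data

stripped = strip_prefix_suffix('ADD_RESULTS({"version": 4});')
# stripped == '{"version": 4}'; pure json passes through unchanged
```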
@classmethod
def _generate_file_data(cls, json, sort_keys=False):
return simplejson.dumps(json, separators=(',', ':'), sort_keys=sort_keys)
@classmethod
def _load_json(cls, file_data):
json_results_str = cls._strip_prefix_suffix(file_data)
if not json_results_str:
logging.warning("No json results data.")
return None
try:
return simplejson.loads(json_results_str)
except:
logging.debug(json_results_str)
logging.error("Failed to load json results: %s", traceback.print_exception(*sys.exc_info()))
return None
@classmethod
def _merge_json(cls, aggregated_json, incremental_json, num_runs):
cls._merge_non_test_data(aggregated_json, incremental_json, num_runs)
incremental_tests = incremental_json[JSON_RESULTS_TESTS]
if incremental_tests:
aggregated_tests = aggregated_json[JSON_RESULTS_TESTS]
cls._merge_tests(aggregated_tests, incremental_tests, num_runs)
cls._normalize_results(aggregated_tests, num_runs)
@classmethod
def _merge_non_test_data(cls, aggregated_json, incremental_json, num_runs):
incremental_builds = incremental_json[JSON_RESULTS_BUILD_NUMBERS]
aggregated_builds = aggregated_json[JSON_RESULTS_BUILD_NUMBERS]
aggregated_build_number = int(aggregated_builds[0])
for index in reversed(range(len(incremental_builds))):
build_number = int(incremental_builds[index])
logging.debug("Merging build %s, incremental json index: %d.", build_number, index)
# Merge this build into aggregated results.
cls._merge_one_build(aggregated_json, incremental_json, index, num_runs)
@classmethod
def _merge_one_build(cls, aggregated_json, incremental_json, incremental_index, num_runs):
for key in incremental_json.keys():
# Merge json results except "tests" properties (results, times etc).
# "tests" properties will be handled separately.
if key == JSON_RESULTS_TESTS:
continue
if key in aggregated_json:
aggregated_json[key].insert(0, incremental_json[key][incremental_index])
aggregated_json[key] = aggregated_json[key][:num_runs]
else:
aggregated_json[key] = incremental_json[key]
@classmethod
def _merge_tests(cls, aggregated_json, incremental_json, num_runs):
# FIXME: Some data got corrupted and has results/times at the directory level.
# Once the data is fixed, this should assert that the directory level does not have
# results or times and just return "JSON_RESULTS_RESULTS not in subtree".
if JSON_RESULTS_RESULTS in aggregated_json:
del aggregated_json[JSON_RESULTS_RESULTS]
if JSON_RESULTS_TIMES in aggregated_json:
del aggregated_json[JSON_RESULTS_TIMES]
all_tests = set(aggregated_json.iterkeys())
if incremental_json:
all_tests |= set(incremental_json.iterkeys())
for test_name in all_tests:
if test_name not in aggregated_json:
aggregated_json[test_name] = incremental_json[test_name]
continue
incremental_sub_result = incremental_json[test_name] if incremental_json and test_name in incremental_json else None
if _is_directory(aggregated_json[test_name]):
cls._merge_tests(aggregated_json[test_name], incremental_sub_result, num_runs)
continue
if incremental_sub_result:
results = incremental_sub_result[JSON_RESULTS_RESULTS]
times = incremental_sub_result[JSON_RESULTS_TIMES]
else:
results = [[1, JSON_RESULTS_NO_DATA]]
times = [[1, 0]]
aggregated_test = aggregated_json[test_name]
cls._insert_item_run_length_encoded(results, aggregated_test[JSON_RESULTS_RESULTS], num_runs)
cls._insert_item_run_length_encoded(times, aggregated_test[JSON_RESULTS_TIMES], num_runs)
@classmethod
def _insert_item_run_length_encoded(cls, incremental_item, aggregated_item, num_runs):
for item in incremental_item:
if len(aggregated_item) and item[1] == aggregated_item[0][1]:
aggregated_item[0][0] = min(aggregated_item[0][0] + item[0], num_runs)
else:
aggregated_item.insert(0, item)
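The method above merges run-length-encoded `[count, value]` pairs: a new run matching the newest aggregated entry is folded into it (capped at `num_runs`), anything else is prepended. A standalone sketch of that behaviour:

```python
# Standalone sketch of the run-length-encoded merge above.
def insert_rle(incremental, aggregated, num_runs):
    for item in incremental:
        if len(aggregated) and item[1] == aggregated[0][1]:
            # same value as the newest run: fold the counts together
            aggregated[0][0] = min(aggregated[0][0] + item[0], num_runs)
        else:
            # different value: start a new run at the front
            aggregated.insert(0, item)

agg = [[3, "P"], [2, "F"]]          # 3 passes, then 2 failures
insert_rle([[1, "P"]], agg, 500)    # one more pass folds into the head run
# agg == [[4, "P"], [2, "F"]]
insert_rle([[1, "F"]], agg, 500)    # a different result starts a new run
# agg == [[1, "F"], [4, "P"], [2, "F"]]
```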
@classmethod
def _normalize_results(cls, aggregated_json, num_runs):
names_to_delete = []
for test_name in aggregated_json:
if _is_directory(aggregated_json[test_name]):
cls._normalize_results(aggregated_json[test_name], num_runs)
else:
leaf = aggregated_json[test_name]
leaf[JSON_RESULTS_RESULTS] = cls._remove_items_over_max_number_of_builds(leaf[JSON_RESULTS_RESULTS], num_runs)
leaf[JSON_RESULTS_TIMES] = cls._remove_items_over_max_number_of_builds(leaf[JSON_RESULTS_TIMES], num_runs)
if cls._should_delete_leaf(leaf):
names_to_delete.append(test_name)
for test_name in names_to_delete:
del aggregated_json[test_name]
@classmethod
def _should_delete_leaf(cls, leaf):
deletable_types = set((JSON_RESULTS_PASS, JSON_RESULTS_NO_DATA, JSON_RESULTS_SKIP))
for result in leaf[JSON_RESULTS_RESULTS]:
if result[1] not in deletable_types:
return False
for time in leaf[JSON_RESULTS_TIMES]:
if time[1] >= JSON_RESULTS_MIN_TIME:
return False
return True
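The deletion rule above drops a leaf only when every run is a pass/no-data/skip result and every recorded time is under the minimum. A standalone sketch (using plain dicts in place of the aggregated json leaf):

```python
# Standalone sketch of the _should_delete_leaf rule above.
def should_delete_leaf(leaf, deletable=("P", "N", "X"), min_time=3):
    if any(result not in deletable for _, result in leaf["results"]):
        return False  # at least one interesting (e.g. failing) result
    if any(t >= min_time for _, t in leaf["times"]):
        return False  # at least one slow run worth keeping
    return True

fast_passes = {"results": [[10, "P"]], "times": [[10, 0]]}          # droppable
with_failure = {"results": [[1, "F"], [9, "P"]], "times": [[10, 0]]}  # kept
```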
@classmethod
def _remove_items_over_max_number_of_builds(cls, encoded_list, num_runs):
num_builds = 0
index = 0
for result in encoded_list:
num_builds = num_builds + result[0]
index = index + 1
if num_builds >= num_runs:
return encoded_list[:index]
return encoded_list
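The truncation above walks the run-length-encoded list, accumulating run counts, and cuts the tail once `num_runs` builds are covered. A standalone sketch:

```python
# Standalone sketch of the run-length truncation above.
def truncate_rle(encoded_list, num_runs):
    num_builds = 0
    for index, (count, _value) in enumerate(encoded_list, start=1):
        num_builds += count
        if num_builds >= num_runs:
            return encoded_list[:index]
    return encoded_list

# the first two runs already cover 3 + 2 = 5 >= 4 builds, so the tail is dropped
# truncate_rle([[3, "P"], [2, "F"], [4, "P"]], 4) == [[3, "P"], [2, "F"]]
```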
@classmethod
def _check_json(cls, builder, json):
version = json[JSON_RESULTS_VERSION_KEY]
if version > JSON_RESULTS_HIERARCHICAL_VERSION:
logging.error("Results JSON version '%s' is not supported.",
version)
return False
if not builder in json:
logging.error("Builder '%s' is not in json results.", builder)
return False
results_for_builder = json[builder]
if not JSON_RESULTS_BUILD_NUMBERS in results_for_builder:
logging.error("Missing build number in json results.")
return False
# FIXME: Once all the bots have cycled, we can remove this code since all the results will be hierarchical.
if version < JSON_RESULTS_HIERARCHICAL_VERSION:
json[builder][JSON_RESULTS_TESTS] = _trie_json_tests(results_for_builder[JSON_RESULTS_TESTS])
json[JSON_RESULTS_VERSION_KEY] = JSON_RESULTS_HIERARCHICAL_VERSION
return True
@classmethod
def merge(cls, builder, aggregated, incremental, num_runs, sort_keys=False):
if not incremental:
logging.warning("Nothing to merge.")
return None
logging.info("Loading incremental json...")
incremental_json = cls._load_json(incremental)
if not incremental_json:
return None
logging.info("Checking incremental json...")
if not cls._check_json(builder, incremental_json):
return None
logging.info("Loading existing aggregated json...")
aggregated_json = cls._load_json(aggregated)
if not aggregated_json:
return incremental
logging.info("Checking existing aggregated json...")
if not cls._check_json(builder, aggregated_json):
return incremental
if aggregated_json[builder][JSON_RESULTS_BUILD_NUMBERS][0] == incremental_json[builder][JSON_RESULTS_BUILD_NUMBERS][0]:
logging.error("Incremental JSON's build number is the latest build number in the aggregated JSON: %d." % aggregated_json[builder][JSON_RESULTS_BUILD_NUMBERS][0])
return aggregated
logging.info("Merging json results...")
try:
cls._merge_json(aggregated_json[builder], incremental_json[builder], num_runs)
except:
logging.error("Failed to merge json results: %s", traceback.print_exception(*sys.exc_info()))
return None
aggregated_json[JSON_RESULTS_VERSION_KEY] = JSON_RESULTS_HIERARCHICAL_VERSION
return cls._generate_file_data(aggregated_json, sort_keys)
@classmethod
def update(cls, master, builder, test_type, incremental):
small_file_updated = cls.update_file(master, builder, test_type, incremental, JSON_RESULTS_FILE_SMALL, JSON_RESULTS_MAX_BUILDS_SMALL)
large_file_updated = cls.update_file(master, builder, test_type, incremental, JSON_RESULTS_FILE, JSON_RESULTS_MAX_BUILDS)
return small_file_updated and large_file_updated
@classmethod
def update_file(cls, master, builder, test_type, incremental, filename, num_runs):
files = TestFile.get_files(master, builder, test_type, filename)
if files:
file = files[0]
new_results = cls.merge(builder, file.data, incremental, num_runs)
else:
# Use the incremental data if there is no aggregated file to merge.
file = TestFile()
file.master = master
file.builder = builder
file.test_type = test_type
file.name = filename
new_results = incremental
logging.info("No existing json results, incremental json is saved.")
if not new_results or not file.save(new_results):
logging.info("Update failed, master: %s, builder: %s, test_type: %s, name: %s." % (master, builder, test_type, filename))
return False
return True
@classmethod
def _delete_results_and_times(cls, tests):
for key in tests.keys():
if key in (JSON_RESULTS_RESULTS, JSON_RESULTS_TIMES):
del tests[key]
else:
cls._delete_results_and_times(tests[key])
@classmethod
def get_test_list(cls, builder, json_file_data):
logging.debug("Loading test results json...")
json = cls._load_json(json_file_data)
if not json:
return None
logging.debug("Checking test results json...")
if not cls._check_json(builder, json):
return None
test_list_json = {}
tests = json[builder][JSON_RESULTS_TESTS]
cls._delete_results_and_times(tests)
test_list_json[builder] = {"tests": tests}
return cls._generate_file_data(test_list_json)
| bsd-3-clause |
bingo20000/cupic | util/string.py | 1 | 2927 | import string
import random
import unittest
def randomstring(size=6,
chars=string.ascii_uppercase + string.ascii_lowercase
+ string.digits):
"""
:Time_Created: 2014.7.15
Constructs random string with given size & charset
:param size: (optional) desired length of string.
:param chars: (optional) charset from which string is chosen.
Usage::
>>> from string import randomstring
>>> print randomstring(20)
aXyGpX1bXx9IqTwF9JjT
"""
return ''.join(random.choice(chars) for _ in range(size))
class TestRandomString(unittest.TestCase):
"""
:Time_Created: 2014.7.16
Unit test for randomstring.
"""
def test_one(self):
size = 10
length = 20
for i in range(size):
astr = randomstring(length)
self.assertEqual(20, len(astr))
from urlparse import urljoin
from urlparse import urlparse
from urlparse import urlunparse
from posixpath import normpath
def urlnormjoin(base, path):
"""
:Time_Created: 2014.7.16
A util function for URL join: adds base url & path, and normalizes the url.
This function will
1> extract the domain (scheme+host) from the 1st param,
2> join it with the 2nd param
3> normalize the url before return.
:param base: from where the domain is extracted.
:param path: the postfix to join domain
Usage::
>>> from string import urlnormjoin
>>> urlnormjoin("http://www.baidu.com/123", "/../../abc.html")
"http://www.baidu.com/abc.html"
"""
arr0 = urlparse(base)
domain = '{uri.scheme}://{uri.netloc}/'.format(uri=arr0)
url1 = urljoin(domain, path)
arr = urlparse(url1)
path = normpath(arr[2])
return urlunparse((arr.scheme,
arr.netloc, path, arr.params, arr.query, arr.fragment))
class TestUrlnormJoin(unittest.TestCase):
"""
:Time_Created: 2014.7.16
Unit test for urlnormjoin.
"""
def test_join(self):
self.assertEqual("http://www.baidu.com/abc.html",
urlnormjoin("http://www.baidu.com", "abc.html"))
self.assertEqual("http://www.baidu.com/abc.html",
urlnormjoin("http://www.baidu.com",
"/../../abc.html"))
self.assertEqual("http://www.baidu.com/abc.html",
urlnormjoin("http://www.baidu.com/xxx",
"./../../abc.html"))
self.assertEqual("http://www.baidu.com/abc.html?key=value&m=x",
urlnormjoin("http://www.baidu.com",
"abc.html?key=value&m=x"))
self.assertEqual("http://www.baidu.com/abc.html",
urlnormjoin("http://www.baidu.com/123",
"/../../abc.html"))
if __name__ == "__main__":
unittest.main()
| gpl-3.0 |
Juniper/ansible | plugins/callbacks/osx_say.py | 26 | 3265 |
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import subprocess
import os
FAILED_VOICE="Zarvox"
REGULAR_VOICE="Trinoids"
HAPPY_VOICE="Cellos"
LASER_VOICE="Princess"
SAY_CMD="/usr/bin/say"
def say(msg, voice):
subprocess.call([SAY_CMD, msg, "--voice=%s" % (voice)])
class CallbackModule(object):
"""
makes Ansible much more exciting on OS X.
"""
def __init__(self):
# the plugin disables itself if the say command is not present
# ansible will not call any callback if disabled is set to True
if not os.path.exists(SAY_CMD):
self.disabled = True
print "%s does not exist, plugin %s disabled" % \
(SAY_CMD, os.path.basename(__file__))
def on_any(self, *args, **kwargs):
pass
def runner_on_failed(self, host, res, ignore_errors=False):
say("Failure on host %s" % host, FAILED_VOICE)
def runner_on_ok(self, host, res):
say("pew", LASER_VOICE)
def runner_on_error(self, host, msg):
pass
def runner_on_skipped(self, host, item=None):
say("pew", LASER_VOICE)
def runner_on_unreachable(self, host, res):
say("Failure on host %s" % host, FAILED_VOICE)
def runner_on_no_hosts(self):
pass
def runner_on_async_poll(self, host, res, jid, clock):
pass
def runner_on_async_ok(self, host, res, jid):
say("pew", LASER_VOICE)
def runner_on_async_failed(self, host, res, jid):
say("Failure on host %s" % host, FAILED_VOICE)
def playbook_on_start(self):
say("Running Playbook", REGULAR_VOICE)
def playbook_on_notify(self, host, handler):
say("pew", LASER_VOICE)
def playbook_on_no_hosts_matched(self):
pass
def playbook_on_no_hosts_remaining(self):
pass
def playbook_on_task_start(self, name, is_conditional):
if not is_conditional:
say("Starting task: %s" % name, REGULAR_VOICE)
else:
say("Notifying task: %s" % name, REGULAR_VOICE)
def playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
pass
def playbook_on_setup(self):
say("Gathering facts", REGULAR_VOICE)
def playbook_on_import_for_host(self, host, imported_file):
pass
def playbook_on_not_import_for_host(self, host, missing_file):
pass
def playbook_on_play_start(self, pattern):
say("Starting play: %s" % pattern, HAPPY_VOICE)
def playbook_on_stats(self, stats):
say("Play complete", HAPPY_VOICE)
| gpl-3.0 |
darkorb/eve-wspace | evewspace/Jabber/migrations/0001_initial.py | 22 | 7000 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'JabberGroup'
db.create_table('Jabber_jabbergroup', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=64)),
('desc', self.gf('django.db.models.fields.CharField')(max_length=200)),
('special', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('Jabber', ['JabberGroup'])
# Adding model 'JabberGroupMember'
db.create_table('Jabber_jabbergroupmember', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('group', self.gf('django.db.models.fields.related.ForeignKey')(related_name='members', to=orm['Jabber.JabberGroup'])),
('user', self.gf('django.db.models.fields.related.ForeignKey')(related_name='jabber_groups', to=orm['auth.User'])),
))
db.send_create_signal('Jabber', ['JabberGroupMember'])
# Adding model 'JabberGroupPermissions'
db.create_table('Jabber_jabbergrouppermissions', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('usergroup', self.gf('django.db.models.fields.related.ForeignKey')(related_name='jabber_groups', to=orm['auth.Group'])),
('jabbergroup', self.gf('django.db.models.fields.related.ForeignKey')(related_name='group_permissions', to=orm['Jabber.JabberGroup'])),
('canbroadcast', self.gf('django.db.models.fields.BooleanField')(default=False)),
('canjoin', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('Jabber', ['JabberGroupPermissions'])
def backwards(self, orm):
# Deleting model 'JabberGroup'
db.delete_table('Jabber_jabbergroup')
# Deleting model 'JabberGroupMember'
db.delete_table('Jabber_jabbergroupmember')
# Deleting model 'JabberGroupPermissions'
db.delete_table('Jabber_jabbergrouppermissions')
models = {
'Jabber.jabbergroup': {
'Meta': {'object_name': 'JabberGroup'},
'desc': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'special': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
'Jabber.jabbergroupmember': {
'Meta': {'object_name': 'JabberGroupMember'},
'group': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'members'", 'to': "orm['Jabber.JabberGroup']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'jabber_groups'", 'to': "orm['auth.User']"})
},
'Jabber.jabbergrouppermissions': {
'Meta': {'object_name': 'JabberGroupPermissions'},
'canbroadcast': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'canjoin': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'jabbergroup': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'group_permissions'", 'to': "orm['Jabber.JabberGroup']"}),
'usergroup': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'jabber_groups'", 'to': "orm['auth.Group']"})
},
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
}
}
complete_apps = ['Jabber'] | gpl-3.0 |
tylerjereddy/scipy | benchmarks/benchmarks/go_benchmark_functions/go_funcs_M.py | 13 | 20910 | # -*- coding: utf-8 -*-
from numpy import (abs, asarray, cos, exp, log, arange, pi, prod, sin, sqrt,
sum, tan)
from .go_benchmark import Benchmark, safe_import
with safe_import():
from scipy.special import factorial
class Matyas(Benchmark):
r"""
Matyas objective function.
This class defines the Matyas [1]_ global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
f_{\text{Matyas}}(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [0, 0]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return 0.26 * (x[0] ** 2 + x[1] ** 2) - 0.48 * x[0] * x[1]
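As a quick numeric sanity check of the expression above (separate from the class, not part of the benchmark suite): the documented optimum gives 0, and a simple off-optimum point is easy to verify by hand.

```python
# Standalone evaluation of the Matyas expression used in fun() above.
def matyas(x):
    return 0.26 * (x[0] ** 2 + x[1] ** 2) - 0.48 * x[0] * x[1]

# global optimum: f([0, 0]) = 0; also f([1, 1]) = 0.52 - 0.48 = 0.04
```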
class McCormick(Benchmark):
r"""
McCormick objective function.
This class defines the McCormick [1]_ global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
f_{\text{McCormick}}(x) = \sin\left(x_{1} + x_{2}\right) + \left(x_{1}
- x_{2}\right)^{2} - 1.5 x_{1} + 2.5 x_{2} + 1
with :math:`x_1 \in [-1.5, 4]`, :math:`x_2 \in [-3, 3]`.
*Global optimum*: :math:`f(x) = -1.913222954981037` for
:math:`x = [-0.5471975602214493, -1.547197559268372]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-1.5, 4.0), (-3.0, 3.0)]
self.global_optimum = [[-0.5471975602214493, -1.547197559268372]]
self.fglob = -1.913222954981037
def fun(self, x, *args):
self.nfev += 1
return (sin(x[0] + x[1]) + (x[0] - x[1]) ** 2 - 1.5 * x[0]
+ 2.5 * x[1] + 1)
class Meyer(Benchmark):
r"""
Meyer [1]_ objective function.
.. [1] https://www.itl.nist.gov/div898/strd/nls/data/mgh10.shtml
TODO NIST regression standard
"""
def __init__(self, dimensions=3):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([0., 100., 100.],
[1, 1000., 500.]))
self.global_optimum = [[5.6096364710e-3, 6.1813463463e3,
3.4522363462e2]]
self.fglob = 8.7945855171e1
self.a = asarray([3.478E+04, 2.861E+04, 2.365E+04, 1.963E+04, 1.637E+04,
1.372E+04, 1.154E+04, 9.744E+03, 8.261E+03, 7.030E+03,
6.005E+03, 5.147E+03, 4.427E+03, 3.820E+03, 3.307E+03,
2.872E+03])
self.b = asarray([5.000E+01, 5.500E+01, 6.000E+01, 6.500E+01, 7.000E+01,
7.500E+01, 8.000E+01, 8.500E+01, 9.000E+01, 9.500E+01,
1.000E+02, 1.050E+02, 1.100E+02, 1.150E+02, 1.200E+02,
1.250E+02])
def fun(self, x, *args):
self.nfev += 1
vec = x[0] * exp(x[1] / (self.b + x[2]))
return sum((self.a - vec) ** 2)
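The Meyer problem is the NIST MGH10 thermistor regression: `self.a` are observed resistances, `self.b` the temperatures, and the model is :math:`y = x_1 e^{x_2 / (b + x_3)}`. A quick sanity check (using the certified parameters listed in `global_optimum` above) shows the model reproduces the first data point:

```python
import math

# NIST-certified parameters for MGH10 (Meyer): y = b1 * exp(b2 / (x + b3))
b1, b2, b3 = 5.6096364710e-3, 6.1813463463e3, 3.4522363462e2

# first data point: temperature 50, observed value 3.478e4
pred = b1 * math.exp(b2 / (50.0 + b3))
print(pred)
```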
class Michalewicz(Benchmark):
r"""
Michalewicz objective function.
This class defines the Michalewicz [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Michalewicz}}(x) = - \sum_{i=1}^{2} \sin\left(x_i\right)
\sin^{2 m}\left(\frac{i x_i^{2}}{\pi}\right)
Where, in this exercise, :math:`m = 10`.
with :math:`x_i \in [0, \pi]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = -1.8013` for :math:`x = [2.20290555, 1.570796]`
.. [1] Adorio, E. MVF - "Multivariate Test Functions Library in C for
Unconstrained Global Optimization", 2005
TODO: could change dimensionality, but global minimum might change.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([0.0] * self.N, [pi] * self.N))
self.global_optimum = [[2.20290555, 1.570796]]
self.fglob = -1.8013
def fun(self, x, *args):
self.nfev += 1
m = 10.0
i = arange(1, self.N + 1)
return -sum(sin(x) * sin(i * x ** 2 / pi) ** (2 * m))
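For the two-dimensional case with :math:`m = 10`, the minimum value :math:`-1.8013` can be verified numerically. A minimal sketch using plain `math` (rather than the vectorized NumPy form above):

```python
from math import sin, pi

def michalewicz2(x, m=10):
    # 2D Michalewicz function; m controls the steepness of the valleys
    return -sum(sin(xi) * sin((i + 1) * xi ** 2 / pi) ** (2 * m)
                for i, xi in enumerate(x))

fmin = michalewicz2([2.20290555, 1.570796])
```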
class MieleCantrell(Benchmark):
r"""
Miele-Cantrell [1]_ objective function.
This class defines the Miele-Cantrell global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{MieleCantrell}}({x}) = (e^{-x_1} - x_2)^4 + 100(x_2 - x_3)^6
+ \tan^4(x_3 - x_4) + x_1^8
with :math:`x_i \in [-1, 1]` for :math:`i = 1, ..., 4`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [0, 1, 1, 1]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=4):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-1.0] * self.N, [1.0] * self.N))
self.global_optimum = [[0.0, 1.0, 1.0, 1.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return ((exp(-x[0]) - x[1]) ** 4 + 100 * (x[1] - x[2]) ** 6
+ tan(x[2] - x[3]) ** 4 + x[0] ** 8)
class Mishra01(Benchmark):
r"""
Mishra 1 objective function.
This class defines the Mishra 1 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra01}}(x) = (1 + x_n)^{x_n}
where
.. math::
x_n = n - \sum_{i=1}^{n-1} x_i
with :math:`x_i \in [0, 1]` for :math:`i =1, ..., n`.
*Global optimum*: :math:`f(x) = 2` for :math:`x_i = 1` for
:math:`i = 1, ..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([0.0] * self.N,
[1.0 + 1e-9] * self.N))
self.global_optimum = [[1.0 for _ in range(self.N)]]
self.fglob = 2.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
xn = self.N - sum(x[0:-1])
return (1 + xn) ** xn
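At the optimum :math:`x_i = 1`, the implicit coordinate :math:`x_n = n - (n - 1) = 1`, so :math:`f = 2^1 = 2` in any dimension. A standalone sketch of the same formula:

```python
def mishra01(x):
    # x_n is defined implicitly from the first n-1 coordinates
    n = len(x)
    xn = n - sum(x[:-1])
    return (1 + xn) ** xn

# equals 2.0 at the all-ones point regardless of dimension
print(mishra01([1.0, 1.0]), mishra01([1.0, 1.0, 1.0]))
```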
class Mishra02(Benchmark):
r"""
Mishra 2 objective function.
This class defines the Mishra 2 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra02}}({x}) = (1 + x_n)^{x_n}
with
.. math::
x_n = n - \sum_{i=1}^{n-1} \frac{(x_i + x_{i+1})}{2}
Here, :math:`n` represents the number of dimensions and
:math:`x_i \in [0, 1]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 2` for :math:`x_i = 1`
for :math:`i = 1, ..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([0.0] * self.N,
[1.0 + 1e-9] * self.N))
self.global_optimum = [[1.0 for _ in range(self.N)]]
self.fglob = 2.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
xn = self.N - sum((x[:-1] + x[1:]) / 2.0)
return (1 + xn) ** xn
class Mishra03(Benchmark):
r"""
Mishra 3 objective function.
This class defines the Mishra 3 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra03}}(x) = \sqrt{\lvert \cos{\sqrt{\lvert x_1^2
+ x_2^2 \rvert}} \rvert} + 0.01(x_1 + x_2)
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = -0.1999` for
:math:`x = [-9.99378322, -9.99918927]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: I think that Jamil#76 has the wrong global minimum, a smaller one
is possible
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[-9.99378322, -9.99918927]]
self.fglob = -0.19990562
def fun(self, x, *args):
self.nfev += 1
return ((0.01 * (x[0] + x[1])
+ sqrt(abs(cos(sqrt(abs(x[0] ** 2 + x[1] ** 2)))))))
class Mishra04(Benchmark):
r"""
Mishra 4 objective function.
This class defines the Mishra 4 [1]_ global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra04}}({x}) = \sqrt{\lvert \sin{\sqrt{\lvert
x_1^2 + x_2^2 \rvert}} \rvert} + 0.01(x_1 + x_2)
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = -0.177715` for
:math:`x = [-8.88055269734, -8.89097599857]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: I think that Jamil#77 has the wrong minimum, not possible
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[-8.88055269734, -8.89097599857]]
self.fglob = -0.177715264826
def fun(self, x, *args):
self.nfev += 1
return ((0.01 * (x[0] + x[1])
+ sqrt(abs(sin(sqrt(abs(x[0] ** 2 + x[1] ** 2)))))))
class Mishra05(Benchmark):
r"""
Mishra 5 objective function.
This class defines the Mishra 5 [1]_ global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra05}}(x) = \left [ \sin^2 ((\cos(x_1) + \cos(x_2))^2)
+ \cos^2 ((\sin(x_1) + \sin(x_2))^2) + x_1 \right ]^2 + 0.01 x_1 + 0.1 x_2
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = -1.019829` for :math:`x = [-1.98682, -10]`
.. [1] Mishra, S. Global Optimization by Differential Evolution and
Particle Swarm Methods: Evaluation on Some Benchmark Functions.
Munich Personal RePEc Archive, 2006, 1005
TODO Line 381 in paper
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[-1.98682, -10.0]]
self.fglob = -1.019829519930646
def fun(self, x, *args):
self.nfev += 1
return (0.01 * x[0] + 0.1 * x[1]
+ (sin((cos(x[0]) + cos(x[1])) ** 2) ** 2
+ cos((sin(x[0]) + sin(x[1])) ** 2) ** 2 + x[0]) ** 2)
class Mishra06(Benchmark):
r"""
Mishra 6 objective function.
This class defines the Mishra 6 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra06}}(x) = -\log{\left [ \sin^2 ((\cos(x_1)
+ \cos(x_2))^2) - \cos^2 ((\sin(x_1) + \sin(x_2))^2) + x_1 \right ]^2}
+ 0.1 \left[(x_1 - 1)^2 + (x_2 - 1)^2 \right]
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x_i) = -2.28395` for :math:`x = [2.88631, 1.82326]`
.. [1] Mishra, S. Global Optimization by Differential Evolution and
Particle Swarm Methods: Evaluation on Some Benchmark Functions.
Munich Personal RePEc Archive, 2006, 1005
TODO line 397
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[2.88631, 1.82326]]
self.fglob = -2.28395
def fun(self, x, *args):
self.nfev += 1
a = 0.1 * ((x[0] - 1) ** 2 + (x[1] - 1) ** 2)
u = (cos(x[0]) + cos(x[1])) ** 2
v = (sin(x[0]) + sin(x[1])) ** 2
return a - log((sin(u) ** 2 - cos(v) ** 2 + x[0]) ** 2)
class Mishra07(Benchmark):
r"""
Mishra 7 objective function.
This class defines the Mishra 7 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra07}}(x) = \left [\prod_{i=1}^{n} x_i - n! \right]^2
Here, :math:`n` represents the number of dimensions and
:math:`x_i \in [-10, 10]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = \sqrt{n}`
for :math:`i = 1, ..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.custom_bounds = [(-2, 2), (-2, 2)]
self.global_optimum = [[sqrt(self.N)
for i in range(self.N)]]
self.fglob = 0.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
return (prod(x) - factorial(self.N)) ** 2.0
class Mishra08(Benchmark):
r"""
Mishra 8 objective function.
This class defines the Mishra 8 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra08}}(x) = 0.001 \left[\lvert x_1^{10} - 20x_1^9
+ 180x_1^8 - 960 x_1^7 + 3360x_1^6 - 8064x_1^5 + 13340x_1^4 - 15360x_1^3
+ 11520x_1^2 - 5120x_1 + 2624 \rvert \lvert x_2^4 + 12x_2^3 + 54x_2^2
+ 108x_2 + 81 \rvert \right]^2
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [2, -3]`
.. [1] Mishra, S. Global Optimization by Differential Evolution and
Particle Swarm Methods: Evaluation on Some Benchmark Functions.
Munich Personal RePEc Archive, 2006, 1005
TODO Line 1065
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.custom_bounds = [(1.0, 2.0), (-4.0, 1.0)]
self.global_optimum = [[2.0, -3.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
val = abs(x[0] ** 10 - 20 * x[0] ** 9 + 180 * x[0] ** 8
- 960 * x[0] ** 7 + 3360 * x[0] ** 6 - 8064 * x[0] ** 5
+ 13340 * x[0] ** 4 - 15360 * x[0] ** 3 + 11520 * x[0] ** 2
- 5120 * x[0] + 2624)
val += abs(x[1] ** 4 + 12 * x[1] ** 3 +
54 * x[1] ** 2 + 108 * x[1] + 81)
return 0.001 * val ** 2
class Mishra09(Benchmark):
r"""
Mishra 9 objective function.
This class defines the Mishra 9 [1]_ global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra09}}({x}) = \left[ ab^2c + abc^2 + b^2
+ (x_1 + x_2 - x_3)^2 \right]^2
Where, in this exercise:
.. math::
\begin{cases} a = 2x_1^3 + 5x_1x_2 + 4x_3 - 2x_1^2x_3 - 18 \\
b = x_1 + x_2^3 + x_1x_2^2 + x_1x_3^2 - 22 \\
c = 8x_1^2 + 2x_2x_3 + 2x_2^2 + 3x_2^3 - 52 \end{cases}
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2, 3`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 2, 3]`
.. [1] Mishra, S. Global Optimization by Differential Evolution and
Particle Swarm Methods: Evaluation on Some Benchmark Functions.
Munich Personal RePEc Archive, 2006, 1005
TODO Line 1103
"""
def __init__(self, dimensions=3):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[1.0, 2.0, 3.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
a = (2 * x[0] ** 3 + 5 * x[0] * x[1]
+ 4 * x[2] - 2 * x[0] ** 2 * x[2] - 18)
b = x[0] + x[1] ** 3 + x[0] * x[1] ** 2 + x[0] * x[2] ** 2 - 22.0
c = (8 * x[0] ** 2 + 2 * x[1] * x[2]
+ 2 * x[1] ** 2 + 3 * x[1] ** 3 - 52)
return (a * c * b ** 2 + a * b * c ** 2 + b ** 2
+ (x[0] + x[1] - x[2]) ** 2) ** 2
class Mishra10(Benchmark):
r"""
Mishra 10 objective function.
This class defines the Mishra 10 global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
TODO - int(x) should be used instead of floor(x)!!!!!
f_{\text{Mishra10}}({x}) = \left[ \lfloor x_1 \perp x_2 \rfloor -
\lfloor x_1 \rfloor - \lfloor x_2 \rfloor \right]^2
with :math:`x_i \in [-10, 10]` for :math:`i =1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [2, 2]`
.. [1] Mishra, S. Global Optimization by Differential Evolution and
Particle Swarm Methods: Evaluation on Some Benchmark Functions.
Munich Personal RePEc Archive, 2006, 1005
TODO line 1115
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.global_optimum = [[2.0, 2.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
x1, x2 = int(x[0]), int(x[1])
f1 = x1 + x2
f2 = x1 * x2
return (f1 - f2) ** 2.0
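The TODO note in the docstring matters because `int()` truncates toward zero while `floor()` rounds toward negative infinity, so the two only agree for non-negative inputs. A standalone sketch of the `fun` above (hypothetical helper, mirroring the class method) illustrates both the optimum and the truncation behavior:

```python
import math

def mishra10(x):
    # the implementation truncates with int(), not floor()
    x1, x2 = int(x[0]), int(x[1])
    return float((x1 + x2) - (x1 * x2)) ** 2.0

# at [2, 2]: f1 = 4, f2 = 4, so f = 0
print(mishra10([2.0, 2.0]))
# for negative inputs int() and floor() diverge: int(-2.5) = -2, floor(-2.5) = -3
print(int(-2.5), math.floor(-2.5))
```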
class Mishra11(Benchmark):
r"""
Mishra 11 objective function.
This class defines the Mishra 11 [1]_ global optimization problem. This is a
multimodal minimization problem defined as follows:
.. math::
f_{\text{Mishra11}}(x) = \left [ \frac{1}{n} \sum_{i=1}^{n} \lvert x_i
\rvert - \left(\prod_{i=1}^{n} \lvert x_i \rvert \right )^{\frac{1}{n}}
\right]^2
Here, :math:`n` represents the number of dimensions and
:math:`x_i \in [-10, 10]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = 0` for
:math:`i = 1, ..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.custom_bounds = [(-3, 3), (-3, 3)]
self.global_optimum = [[0.0 for _ in range(self.N)]]
self.fglob = 0.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
N = self.N
return ((1.0 / N) * sum(abs(x)) - (prod(abs(x))) ** (1.0 / N)) ** 2.0
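Mishra 11 is the squared gap between the arithmetic and geometric means of :math:`|x_i|`; by the AM-GM inequality it is zero exactly when all :math:`|x_i|` are equal. A standalone sketch of the documented formula:

```python
def mishra11(x):
    # squared difference between arithmetic and geometric means of |x_i|
    n = len(x)
    am = sum(abs(v) for v in x) / n
    gm = 1.0
    for v in x:
        gm *= abs(v)
    gm **= 1.0 / n
    return (am - gm) ** 2

# zero whenever all |x_i| coincide; positive otherwise (AM-GM)
print(mishra11([3.0, 3.0]), mishra11([1.0, 4.0]))
```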
class MultiModal(Benchmark):
r"""
MultiModal objective function.
This class defines the MultiModal global optimization problem. This
is a multimodal minimization problem defined as follows:
.. math::
f_{\text{MultiModal}}(x) = \left( \sum_{i=1}^n \lvert x_i \rvert
\right) \left( \prod_{i=1}^n \lvert x_i \rvert \right)
Here, :math:`n` represents the number of dimensions and
:math:`x_i \in [-10, 10]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = 0` for
:math:`i = 1, ..., n`
.. [1] Gavana, A. Global Optimization Benchmarks and AMPGO retrieved 2015
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = list(zip([-10.0] * self.N, [10.0] * self.N))
self.custom_bounds = [(-5, 5), (-5, 5)]
self.global_optimum = [[0.0 for _ in range(self.N)]]
self.fglob = 0.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
return sum(abs(x)) * prod(abs(x))
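The MultiModal objective is the product of the L1 norm and the product of absolute coordinates, so it vanishes whenever any coordinate is zero. A minimal standalone sketch of the same formula:

```python
def multimodal(x):
    # sum of |x_i| times product of |x_i|; zero on every coordinate axis
    s = sum(abs(v) for v in x)
    p = 1.0
    for v in x:
        p *= abs(v)
    return s * p

# f(0, 0) = 0; f(1, 2) = (1 + 2) * (1 * 2) = 6
print(multimodal([0.0, 0.0]), multimodal([1.0, 2.0]))
```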
| bsd-3-clause |
J861449197/edx-platform | common/lib/xmodule/xmodule/video_module/transcripts_utils.py | 44 | 24405 | """
Utility functions for transcripts.
++++++++++++++++++++++++++++++++++
"""
import os
import copy
import json
import requests
import logging
from pysrt import SubRipTime, SubRipItem, SubRipFile
from lxml import etree
from HTMLParser import HTMLParser
from xmodule.exceptions import NotFoundError
from xmodule.contentstore.content import StaticContent
from xmodule.contentstore.django import contentstore
from .bumper_utils import get_bumper_settings
log = logging.getLogger(__name__)
class TranscriptException(Exception): # pylint: disable=missing-docstring
pass
class TranscriptsGenerationException(Exception): # pylint: disable=missing-docstring
pass
class GetTranscriptsFromYouTubeException(Exception): # pylint: disable=missing-docstring
pass
class TranscriptsRequestValidationException(Exception): # pylint: disable=missing-docstring
pass
def generate_subs(speed, source_speed, source_subs):
"""
Generate transcripts from one speed to another speed.
Args:
`speed`: float, for this speed subtitles will be generated,
`source_speed`: float, speed of source_subs
`source_subs`: dict, existing subtitles for speed `source_speed`.
Returns:
`subs`: dict, actual subtitles.
"""
if speed == source_speed:
return source_subs
coefficient = 1.0 * speed / source_speed
subs = {
'start': [
int(round(timestamp * coefficient)) for
timestamp in source_subs['start']
],
'end': [
int(round(timestamp * coefficient)) for
timestamp in source_subs['end']
],
'text': source_subs['text']}
return subs
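The speed conversion above is a pure rescaling: every timestamp is multiplied by `speed / source_speed` while the text is carried over unchanged. A minimal standalone sketch (mirroring `generate_subs` without the module dependencies):

```python
def scale_subs(speed, source_speed, source_subs):
    # timestamps scale by speed / source_speed; text is unchanged
    c = 1.0 * speed / source_speed
    return {
        'start': [int(round(t * c)) for t in source_subs['start']],
        'end': [int(round(t * c)) for t in source_subs['end']],
        'text': source_subs['text'],
    }

# halving the speed halves every millisecond timestamp
scaled = scale_subs(0.5, 1.0, {'start': [1000], 'end': [2400], 'text': ['hi']})
print(scaled)
```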
def save_to_store(content, name, mime_type, location):
"""
Save named content to store by location.
Returns location of saved content.
"""
content_location = Transcript.asset_location(location, name)
content = StaticContent(content_location, name, mime_type, content)
contentstore().save(content)
return content_location
def save_subs_to_store(subs, subs_id, item, language='en'):
"""
Save transcripts into `StaticContent`.
Args:
`subs_id`: str, subtitles id
`item`: video module instance
`language`: two chars str ('uk'), language of translation of transcripts
Returns: location of saved subtitles.
"""
filedata = json.dumps(subs, indent=2)
filename = subs_filename(subs_id, language)
return save_to_store(filedata, filename, 'application/json', item.location)
def youtube_video_transcript_name(youtube_text_api):
"""
Get the transcript name from available transcripts of video
with respect to language from youtube server
"""
utf8_parser = etree.XMLParser(encoding='utf-8')
transcripts_param = {'type': 'list', 'v': youtube_text_api['params']['v']}
lang = youtube_text_api['params']['lang']
# get list of transcripts of specific video
# url-form
# http://video.google.com/timedtext?type=list&v={VideoId}
youtube_response = requests.get('http://' + youtube_text_api['url'], params=transcripts_param)
if youtube_response.status_code == 200 and youtube_response.text:
youtube_data = etree.fromstring(youtube_response.content, parser=utf8_parser)
# iterate all transcripts information from youtube server
for element in youtube_data:
# search specific language code such as 'en' in transcripts info list
if element.tag == 'track' and element.get('lang_code', '') == lang:
return element.get('name')
return None
def get_transcripts_from_youtube(youtube_id, settings, i18n, youtube_transcript_name=''):
"""
Gets transcripts from youtube for youtube_id.
Parses only utf-8 encoded transcripts.
Other encodings are not supported at the moment.
Returns (status, transcripts): bool, dict.
"""
_ = i18n.ugettext
utf8_parser = etree.XMLParser(encoding='utf-8')
youtube_text_api = copy.deepcopy(settings.YOUTUBE['TEXT_API'])
youtube_text_api['params']['v'] = youtube_id
# if the transcript name is not empty on youtube server we have to pass
# name param in url in order to get transcript
# example http://video.google.com/timedtext?lang=en&v={VideoId}&name={transcript_name}
youtube_transcript_name = youtube_video_transcript_name(youtube_text_api)
if youtube_transcript_name:
youtube_text_api['params']['name'] = youtube_transcript_name
data = requests.get('http://' + youtube_text_api['url'], params=youtube_text_api['params'])
if data.status_code != 200 or not data.text:
msg = _("Can't receive transcripts from Youtube for {youtube_id}. Status code: {status_code}.").format(
youtube_id=youtube_id,
status_code=data.status_code
)
raise GetTranscriptsFromYouTubeException(msg)
sub_starts, sub_ends, sub_texts = [], [], []
xmltree = etree.fromstring(data.content, parser=utf8_parser)
for element in xmltree:
if element.tag == "text":
start = float(element.get("start"))
duration = float(element.get("dur", 0)) # dur is not mandatory
text = element.text
end = start + duration
if text:
# Start and end should be ints representing the millisecond timestamp.
sub_starts.append(int(start * 1000))
sub_ends.append(int((end + 0.0001) * 1000))
sub_texts.append(text.replace('\n', ' '))
return {'start': sub_starts, 'end': sub_ends, 'text': sub_texts}
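The YouTube timed-text payload is a flat XML list of `<text>` elements with `start` and `dur` attributes in seconds, which the loop above converts to millisecond sjson arrays. A standalone sketch of the same parsing using the stdlib `xml.etree` (the real code uses `lxml` with a utf-8 parser; the sample XML here is illustrative):

```python
import xml.etree.ElementTree as ET

SAMPLE = '<transcript><text start="1.2" dur="2.5">Hello</text></transcript>'

def parse_timedtext(xml_string):
    starts, ends, texts = [], [], []
    for el in ET.fromstring(xml_string):
        if el.tag == 'text' and el.text:
            start = float(el.get('start'))
            end = start + float(el.get('dur', 0))  # dur is optional
            starts.append(int(start * 1000))
            ends.append(int((end + 0.0001) * 1000))
            texts.append(el.text.replace('\n', ' '))
    return {'start': starts, 'end': ends, 'text': texts}

result = parse_timedtext(SAMPLE)
print(result)
```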
def download_youtube_subs(youtube_id, video_descriptor, settings):
"""
Download transcripts from Youtube and save them to assets.
Args:
youtube_id: str, actual youtube_id of the video.
video_descriptor: video descriptor instance.
We save transcripts for 1.0 speed, as for other speed conversion is done on front-end.
Returns:
None, if transcripts were successfully downloaded and saved.
Raises:
GetTranscriptsFromYouTubeException, if fails.
"""
i18n = video_descriptor.runtime.service(video_descriptor, "i18n")
_ = i18n.ugettext
subs = get_transcripts_from_youtube(youtube_id, settings, i18n)
save_subs_to_store(subs, youtube_id, video_descriptor)
log.info("Transcripts for youtube_id %s for 1.0 speed are downloaded and saved.", youtube_id)
def remove_subs_from_store(subs_id, item, lang='en'):
"""
Remove from store, if transcripts content exists.
"""
filename = subs_filename(subs_id, lang)
Transcript.delete_asset(item.location, filename)
def generate_subs_from_source(speed_subs, subs_type, subs_filedata, item, language='en'):
"""Generate transcripts from source files (like SubRip format, etc.)
and save them to assets for `item` module.
We expect, that speed of source subs equal to 1
:param speed_subs: dictionary {speed: sub_id, ...}
:param subs_type: type of source subs: "srt", ...
:param subs_filedata: unicode, content of source subs.
:param item: module object.
:param language: str, language of translation of transcripts
:returns: True, if all subs are generated and saved successfully.
"""
_ = item.runtime.service(item, "i18n").ugettext
if subs_type.lower() != 'srt':
raise TranscriptsGenerationException(_("We support only SubRip (*.srt) transcripts format."))
try:
srt_subs_obj = SubRipFile.from_string(subs_filedata)
except Exception as ex:
msg = _("Something wrong with SubRip transcripts file during parsing. Inner message is {error_message}").format(
error_message=ex.message
)
raise TranscriptsGenerationException(msg)
if not srt_subs_obj:
raise TranscriptsGenerationException(_("Something wrong with SubRip transcripts file during parsing."))
sub_starts = []
sub_ends = []
sub_texts = []
for sub in srt_subs_obj:
sub_starts.append(sub.start.ordinal)
sub_ends.append(sub.end.ordinal)
sub_texts.append(sub.text.replace('\n', ' '))
subs = {
'start': sub_starts,
'end': sub_ends,
'text': sub_texts}
for speed, subs_id in speed_subs.iteritems():
save_subs_to_store(
generate_subs(speed, 1, subs),
subs_id,
item,
language
)
return subs
def generate_srt_from_sjson(sjson_subs, speed):
"""Generate transcripts with speed = 1.0 from sjson to SubRip (*.srt).
:param sjson_subs: "sjson" subs.
:param speed: speed of `sjson_subs`.
:returns: "srt" subs.
"""
output = ''
equal_len = len(sjson_subs['start']) == len(sjson_subs['end']) == len(sjson_subs['text'])
if not equal_len:
return output
sjson_speed_1 = generate_subs(speed, 1, sjson_subs)
for i in range(len(sjson_speed_1['start'])):
item = SubRipItem(
index=i,
start=SubRipTime(milliseconds=sjson_speed_1['start'][i]),
end=SubRipTime(milliseconds=sjson_speed_1['end'][i]),
text=sjson_speed_1['text'][i]
)
output += (unicode(item))
output += '\n'
return output
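`generate_srt_from_sjson` leans on `pysrt` to render each cue; the SubRip timestamp format it produces is `HH:MM:SS,mmm`. A dependency-free sketch of that millisecond-to-timestamp conversion (an illustrative helper, not part of the module):

```python
def srt_timestamp(ms):
    # SubRip timestamps: hours:minutes:seconds,milliseconds
    hours, rem = divmod(ms, 3600000)
    minutes, rem = divmod(rem, 60000)
    seconds, millis = divmod(rem, 1000)
    return '%02d:%02d:%02d,%03d' % (hours, minutes, seconds, millis)

# 61500 ms is one minute, one second, 500 ms
print(srt_timestamp(61500))
```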
def copy_or_rename_transcript(new_name, old_name, item, delete_old=False, user=None):
"""
Renames `old_name` transcript file in storage to `new_name`.
If `old_name` is not found in storage, raises `NotFoundError`.
If `delete_old` is True, removes `old_name` files from storage.
"""
filename = 'subs_{0}.srt.sjson'.format(old_name)
content_location = StaticContent.compute_location(item.location.course_key, filename)
transcripts = contentstore().find(content_location).data
save_subs_to_store(json.loads(transcripts), new_name, item)
item.sub = new_name
item.save_with_metadata(user)
if delete_old:
remove_subs_from_store(old_name, item)
def get_html5_ids(html5_sources):
"""
Helper method to parse an HTML5 source into its id
NOTE: This assumes that '/' is not in the filename
"""
html5_ids = [x.split('/')[-1].rsplit('.', 1)[0] for x in html5_sources]
return html5_ids
def manage_video_subtitles_save(item, user, old_metadata=None, generate_translation=False):
"""
Does some specific things, that can be done only on save.
Video player item has some video fields: HTML5 ones and Youtube one.
If value of `sub` field of `new_item` is cleared, transcripts should be removed.
`item` is video module instance with updated values of fields,
but actually have not been saved to store yet.
`old_metadata` contains old values of XFields.
# 1.
If value of `sub` field of `new_item` is different from values of video fields of `new_item`,
and `new_item.sub` file is present, then code in this function creates copies of
`new_item.sub` file with new names. That names are equal to values of video fields of `new_item`
After that `sub` field of `new_item` is changed to one of values of video fields.
This whole action ensures that after user changes video fields, proper `sub` files, corresponding
to new values of video fields, will be presented in system.
# 2 convert /static/filename.srt to filename.srt in self.transcripts.
(it is done to allow user to enter both /static/filename.srt and filename.srt)
# 3. Generate transcripts translation only when user clicks `save` button, not while switching tabs.
a) delete sjson translation for those languages, which were removed from `item.transcripts`.
Note: we are not deleting old SRT files to give user more flexibility.
b) For all SRT files in`item.transcripts` regenerate new SJSON files.
(To avoid confusing situation if you attempt to correct a translation by uploading
a new version of the SRT file with same name).
"""
_ = item.runtime.service(item, "i18n").ugettext
# 1.
html5_ids = get_html5_ids(item.html5_sources)
possible_video_id_list = [item.youtube_id_1_0] + html5_ids
sub_name = item.sub
for video_id in possible_video_id_list:
if not video_id:
continue
if not sub_name:
remove_subs_from_store(video_id, item)
continue
# copy_or_rename_transcript changes item.sub of module
try:
# updates item.sub with `video_id`, if it is successful.
copy_or_rename_transcript(video_id, sub_name, item, user=user)
except NotFoundError:
# subtitles file `sub_name` is not presented in the system. Nothing to copy or rename.
log.debug(
"Copying %s file content to %s name is failed, "
"original file does not exist.",
sub_name, video_id
)
# 2.
if generate_translation:
for lang, filename in item.transcripts.items():
item.transcripts[lang] = os.path.split(filename)[-1]
# 3.
if generate_translation:
old_langs = set(old_metadata.get('transcripts', {})) if old_metadata else set()
new_langs = set(item.transcripts)
for lang in old_langs.difference(new_langs): # 3a
for video_id in possible_video_id_list:
if video_id:
remove_subs_from_store(video_id, item, lang)
reraised_message = ''
for lang in new_langs: # 3b
try:
generate_sjson_for_all_speeds(
item,
item.transcripts[lang],
{speed: subs_id for subs_id, speed in youtube_speed_dict(item).iteritems()},
lang,
)
except TranscriptException as ex:
item.transcripts.pop(lang) # remove key from transcripts because proper srt file does not exist in assets.
reraised_message += ' ' + ex.message
if reraised_message:
item.save_with_metadata(user)
raise TranscriptException(reraised_message)
def youtube_speed_dict(item):
"""
Returns {speed: youtube_ids, ...} dict for existing youtube_ids
"""
yt_ids = [item.youtube_id_0_75, item.youtube_id_1_0, item.youtube_id_1_25, item.youtube_id_1_5]
yt_speeds = [0.75, 1.00, 1.25, 1.50]
youtube_ids = {p[0]: p[1] for p in zip(yt_ids, yt_speeds) if p[0]}
return youtube_ids
def subs_filename(subs_id, lang='en'):
"""
Generate proper filename for storage.
"""
if lang == 'en':
return u'subs_{0}.srt.sjson'.format(subs_id)
else:
return u'{0}_subs_{1}.srt.sjson'.format(lang, subs_id)
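The naming convention above distinguishes the default English transcript from translations by prefixing the language code. A standalone sketch mirroring the helper shows both branches:

```python
def subs_filename(subs_id, lang='en'):
    # English transcripts get no prefix; other languages are prefixed
    if lang == 'en':
        return u'subs_{0}.srt.sjson'.format(subs_id)
    return u'{0}_subs_{1}.srt.sjson'.format(lang, subs_id)

print(subs_filename('abc'), subs_filename('abc', 'uk'))
```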
def generate_sjson_for_all_speeds(item, user_filename, result_subs_dict, lang):
"""
Generates sjson from srt for given lang.
`item` is module object.
"""
_ = item.runtime.service(item, "i18n").ugettext
try:
srt_transcripts = contentstore().find(Transcript.asset_location(item.location, user_filename))
except NotFoundError as ex:
raise TranscriptException(_("{exception_message}: Can't find uploaded transcripts: {user_filename}").format(
exception_message=ex.message,
user_filename=user_filename
))
if not lang:
lang = item.transcript_language
# Used utf-8-sig encoding type instead of utf-8 to remove BOM(Byte Order Mark), e.g. U+FEFF
generate_subs_from_source(
result_subs_dict,
os.path.splitext(user_filename)[1][1:],
srt_transcripts.data.decode('utf-8-sig'),
item,
lang
)
def get_or_create_sjson(item, transcripts):
"""
Get sjson if already exists, otherwise generate it.
Generate sjson with subs_id name, from user uploaded srt.
Subs_id is extracted from srt filename, which was set by user.
Args:
transcripts (dict): dictionary of (language: file) pairs.
Raises:
TranscriptException: when srt subtitles do not exist,
and exceptions from generate_subs_from_source.
`item` is module object.
"""
user_filename = transcripts[item.transcript_language]
user_subs_id = os.path.splitext(user_filename)[0]
source_subs_id, result_subs_dict = user_subs_id, {1.0: user_subs_id}
try:
sjson_transcript = Transcript.asset(item.location, source_subs_id, item.transcript_language).data
except NotFoundError: # generating sjson from srt
generate_sjson_for_all_speeds(item, user_filename, result_subs_dict, item.transcript_language)
sjson_transcript = Transcript.asset(item.location, source_subs_id, item.transcript_language).data
return sjson_transcript
class Transcript(object):
"""
Container for transcript methods.
"""
mime_types = {
'srt': 'application/x-subrip; charset=utf-8',
'txt': 'text/plain; charset=utf-8',
'sjson': 'application/json',
}
@staticmethod
def convert(content, input_format, output_format):
"""
Convert transcript `content` from `input_format` to `output_format`.
Accepted input formats: sjson, srt.
Accepted output format: srt, txt.
"""
assert input_format in ('srt', 'sjson')
assert output_format in ('txt', 'srt', 'sjson')
if input_format == output_format:
return content
if input_format == 'srt':
if output_format == 'txt':
text = SubRipFile.from_string(content.decode('utf8')).text
return HTMLParser().unescape(text)
elif output_format == 'sjson':
raise NotImplementedError
if input_format == 'sjson':
if output_format == 'txt':
text = json.loads(content)['text']
return HTMLParser().unescape("\n".join(text))
elif output_format == 'srt':
return generate_srt_from_sjson(json.loads(content), speed=1.0)
@staticmethod
def asset(location, subs_id, lang='en', filename=None):
"""
Get asset from contentstore, asset location is built from subs_id and lang.
`location` is module location.
"""
asset_filename = subs_filename(subs_id, lang) if not filename else filename
return Transcript.get_asset(location, asset_filename)
@staticmethod
def get_asset(location, filename):
"""
Return asset by location and filename.
"""
return contentstore().find(Transcript.asset_location(location, filename))
@staticmethod
def asset_location(location, filename):
"""
Return asset location. `location` is module location.
"""
return StaticContent.compute_location(location.course_key, filename)
@staticmethod
def delete_asset(location, filename):
"""
Delete asset by location and filename.
"""
try:
contentstore().delete(Transcript.asset_location(location, filename))
log.info("Transcript asset %s was removed from store.", filename)
except NotFoundError:
pass
return StaticContent.compute_location(location.course_key, filename)
class VideoTranscriptsMixin(object):
"""Mixin class for transcript functionality.
This is necessary for both VideoModule and VideoDescriptor.
"""
def available_translations(self, transcripts, verify_assets=True):
"""Return a list of language codes for which we have transcripts.
Args:
verify_assets (boolean): If True, checks to ensure that the transcripts
really exist in the contentstore. If False, we just look at the
VideoDescriptor fields and do not query the contentstore. One reason
we might do this is to avoid slamming contentstore() with queries
when trying to make a listing of videos and their languages.
Defaults to True.
transcripts (dict): A dict with all transcripts and a sub.
"""
translations = []
sub, other_lang = transcripts["sub"], transcripts["transcripts"]
# If we're not verifying the assets, we just trust our field values
if not verify_assets:
translations = list(other_lang)
if not translations or sub:
translations += ['en']
return set(translations)
# If we've gotten this far, we're going to verify that the transcripts
# being referenced are actually in the contentstore.
if sub: # check if sjson exists for 'en'.
try:
Transcript.asset(self.location, sub, 'en')
except NotFoundError:
try:
Transcript.asset(self.location, None, None, sub)
except NotFoundError:
pass
else:
translations = ['en']
else:
translations = ['en']
for lang in other_lang:
try:
Transcript.asset(self.location, None, None, other_lang[lang])
except NotFoundError:
continue
translations.append(lang)
return translations
def get_transcript(self, transcripts, transcript_format='srt', lang=None):
"""
Returns transcript, filename and MIME type.
transcripts (dict): A dict with all transcripts and a sub.
Raises:
- NotFoundError if cannot find transcript file in storage.
- ValueError if transcript file is empty or incorrect JSON.
- KeyError if transcript file has incorrect format.
        If language is 'en', self.sub should be the correct subtitles name.
        If language is 'en' but self.sub is not defined, we search for the
        video name in order to get the proper transcript (old-style courses).
        If language is not 'en', return the transcript in the requested language and format.
"""
if not lang:
lang = self.get_default_transcript_language(transcripts)
sub, other_lang = transcripts["sub"], transcripts["transcripts"]
if lang == 'en':
if sub: # HTML5 case and (Youtube case for new style videos)
transcript_name = sub
elif self.youtube_id_1_0: # old courses
transcript_name = self.youtube_id_1_0
else:
log.debug("No subtitles for 'en' language")
raise ValueError
data = Transcript.asset(self.location, transcript_name, lang).data
filename = u'{}.{}'.format(transcript_name, transcript_format)
content = Transcript.convert(data, 'sjson', transcript_format)
else:
data = Transcript.asset(self.location, None, None, other_lang[lang]).data
filename = u'{}.{}'.format(os.path.splitext(other_lang[lang])[0], transcript_format)
content = Transcript.convert(data, 'srt', transcript_format)
if not content:
log.debug('no subtitles produced in get_transcript')
raise ValueError
return content, filename, Transcript.mime_types[transcript_format]
def get_default_transcript_language(self, transcripts):
"""
Returns the default transcript language for this video module.
Args:
transcripts (dict): A dict with all transcripts and a sub.
"""
sub, other_lang = transcripts["sub"], transcripts["transcripts"]
if self.transcript_language in other_lang:
transcript_language = self.transcript_language
elif sub:
transcript_language = u'en'
elif len(other_lang) > 0:
transcript_language = sorted(other_lang)[0]
else:
transcript_language = u'en'
return transcript_language
def get_transcripts_info(self, is_bumper=False):
"""
Returns a transcript dictionary for the video.
"""
if is_bumper:
transcripts = copy.deepcopy(get_bumper_settings(self).get('transcripts', {}))
return {
"sub": transcripts.pop("en", ""),
"transcripts": transcripts,
}
else:
return {
"sub": self.sub,
"transcripts": self.transcripts,
}
| agpl-3.0 |
danielfrg/jupyterhub-kubernetes_spawner | kubernetes_spawner/swagger_client/models/v1_config_map_key_selector.py | 1 | 3783 | # coding: utf-8
"""
Copyright 2015 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Ref: https://github.com/swagger-api/swagger-codegen
"""
from pprint import pformat
from six import iteritems
class V1ConfigMapKeySelector(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self):
"""
V1ConfigMapKeySelector - a model defined in Swagger
:param dict swaggerTypes: The key is attribute name
and the value is attribute type.
:param dict attributeMap: The key is attribute name
and the value is json key in definition.
"""
self.swagger_types = {
'name': 'str',
'key': 'str'
}
self.attribute_map = {
'name': 'name',
'key': 'key'
}
self._name = None
self._key = None
@property
def name(self):
"""
Gets the name of this V1ConfigMapKeySelector.
Name of the referent. More info: http://releases.k8s.io/HEAD/docs/user-guide/identifiers.md#names
:return: The name of this V1ConfigMapKeySelector.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this V1ConfigMapKeySelector.
Name of the referent. More info: http://releases.k8s.io/HEAD/docs/user-guide/identifiers.md#names
:param name: The name of this V1ConfigMapKeySelector.
:type: str
"""
self._name = name
@property
def key(self):
"""
Gets the key of this V1ConfigMapKeySelector.
The key to select.
:return: The key of this V1ConfigMapKeySelector.
:rtype: str
"""
return self._key
@key.setter
def key(self, key):
"""
Sets the key of this V1ConfigMapKeySelector.
The key to select.
:param key: The key of this V1ConfigMapKeySelector.
:type: str
"""
self._key = key
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| apache-2.0 |
fartashf/python-mode | pymode/libs/pylama/main.py | 7 | 2528 | """Pylama's shell support."""
from __future__ import absolute_import, with_statement
import sys
from os import walk, path as op
from .config import parse_options, CURDIR, setup_logger
from .core import LOGGER, run
from .async import check_async
def check_path(options, rootdir=None, candidates=None, code=None):
"""Check path.
:param rootdir: Root directory (for making relative file paths)
:param options: Parsed pylama options (from pylama.config.parse_options)
:returns: (list) Errors list
"""
if not candidates:
candidates = []
for path_ in options.paths:
path = op.abspath(path_)
if op.isdir(path):
for root, _, files in walk(path):
candidates += [op.relpath(op.join(root, f), CURDIR) for f in files]
else:
candidates.append(path)
if rootdir is None:
rootdir = path if op.isdir(path) else op.dirname(path)
paths = []
for path in candidates:
if not options.force and not any(l.allow(path) for _, l in options.linters):
continue
if not op.exists(path):
continue
paths.append(path)
if options.async:
return check_async(paths, options, rootdir)
errors = []
for path in paths:
errors += run(path=path, code=code, rootdir=rootdir, options=options)
return errors
def shell(args=None, error=True):
"""Endpoint for console.
    Parse command arguments and configuration files, then run the checkers.
:return list: list of errors
:raise SystemExit:
"""
if args is None:
args = sys.argv[1:]
options = parse_options(args)
setup_logger(options)
LOGGER.info(options)
    # Install VCS hook
if options.hook:
from .hook import install_hook
return install_hook(options.path)
return process_paths(options, error=error)
def process_paths(options, candidates=None, error=True):
"""Process files and log errors."""
errors = check_path(options, rootdir=CURDIR, candidates=candidates)
pattern = "%(filename)s:%(lnum)s:%(col)s: %(text)s"
if options.format == 'pylint':
pattern = "%(filename)s:%(lnum)s: [%(type)s] %(text)s"
for er in errors:
if options.abspath:
er._info['filename'] = op.abspath(er.filename)
LOGGER.warning(pattern, er._info)
if error:
sys.exit(int(bool(errors)))
return errors
if __name__ == '__main__':
shell()
# pylama:ignore=F0001
| lgpl-3.0 |
zhuzhezhe/weibobash | env/lib/python3.4/site-packages/pip/_vendor/requests/packages/chardet/big5prober.py | 2931 | 1684 | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Communicator client code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .mbcharsetprober import MultiByteCharSetProber
from .codingstatemachine import CodingStateMachine
from .chardistribution import Big5DistributionAnalysis
from .mbcssm import Big5SMModel
class Big5Prober(MultiByteCharSetProber):
def __init__(self):
MultiByteCharSetProber.__init__(self)
self._mCodingSM = CodingStateMachine(Big5SMModel)
self._mDistributionAnalyzer = Big5DistributionAnalysis()
self.reset()
def get_charset_name(self):
return "Big5"
| mit |
mariosky/evo-drawings | venv/lib/python2.7/site-packages/numpy/polynomial/tests/test_hermite_e.py | 123 | 17069 | """Tests for hermite_e module.
"""
from __future__ import division, absolute_import, print_function
import numpy as np
import numpy.polynomial.hermite_e as herme
from numpy.polynomial.polynomial import polyval
from numpy.testing import (
TestCase, assert_almost_equal, assert_raises,
assert_equal, assert_, run_module_suite)
He0 = np.array([1])
He1 = np.array([0, 1])
He2 = np.array([-1, 0, 1])
He3 = np.array([0, -3, 0, 1])
He4 = np.array([3, 0, -6, 0, 1])
He5 = np.array([0, 15, 0, -10, 0, 1])
He6 = np.array([-15, 0, 45, 0, -15, 0, 1])
He7 = np.array([0, -105, 0, 105, 0, -21, 0, 1])
He8 = np.array([105, 0, -420, 0, 210, 0, -28, 0, 1])
He9 = np.array([0, 945, 0, -1260, 0, 378, 0, -36, 0, 1])
Helist = [He0, He1, He2, He3, He4, He5, He6, He7, He8, He9]
def trim(x):
return herme.hermetrim(x, tol=1e-6)
class TestConstants(TestCase):
def test_hermedomain(self):
assert_equal(herme.hermedomain, [-1, 1])
def test_hermezero(self):
assert_equal(herme.hermezero, [0])
def test_hermeone(self):
assert_equal(herme.hermeone, [1])
def test_hermex(self):
assert_equal(herme.hermex, [0, 1])
class TestArithmetic(TestCase):
x = np.linspace(-3, 3, 100)
def test_hermeadd(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
tgt = np.zeros(max(i, j) + 1)
tgt[i] += 1
tgt[j] += 1
res = herme.hermeadd([0]*i + [1], [0]*j + [1])
assert_equal(trim(res), trim(tgt), err_msg=msg)
def test_hermesub(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
tgt = np.zeros(max(i, j) + 1)
tgt[i] += 1
tgt[j] -= 1
res = herme.hermesub([0]*i + [1], [0]*j + [1])
assert_equal(trim(res), trim(tgt), err_msg=msg)
def test_hermemulx(self):
assert_equal(herme.hermemulx([0]), [0])
assert_equal(herme.hermemulx([1]), [0, 1])
for i in range(1, 5):
ser = [0]*i + [1]
tgt = [0]*(i - 1) + [i, 0, 1]
assert_equal(herme.hermemulx(ser), tgt)
def test_hermemul(self):
# check values of result
for i in range(5):
pol1 = [0]*i + [1]
val1 = herme.hermeval(self.x, pol1)
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
pol2 = [0]*j + [1]
val2 = herme.hermeval(self.x, pol2)
pol3 = herme.hermemul(pol1, pol2)
val3 = herme.hermeval(self.x, pol3)
assert_(len(pol3) == i + j + 1, msg)
assert_almost_equal(val3, val1*val2, err_msg=msg)
def test_hermediv(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
ci = [0]*i + [1]
cj = [0]*j + [1]
tgt = herme.hermeadd(ci, cj)
quo, rem = herme.hermediv(tgt, ci)
res = herme.hermeadd(herme.hermemul(quo, ci), rem)
assert_equal(trim(res), trim(tgt), err_msg=msg)
class TestEvaluation(TestCase):
# coefficients of 1 + 2*x + 3*x**2
c1d = np.array([4., 2., 3.])
c2d = np.einsum('i,j->ij', c1d, c1d)
c3d = np.einsum('i,j,k->ijk', c1d, c1d, c1d)
# some random values in [-1, 1)
x = np.random.random((3, 5))*2 - 1
y = polyval(x, [1., 2., 3.])
def test_hermeval(self):
#check empty input
assert_equal(herme.hermeval([], [1]).size, 0)
        #check normal input
x = np.linspace(-1, 1)
y = [polyval(x, c) for c in Helist]
for i in range(10):
msg = "At i=%d" % i
tgt = y[i]
res = herme.hermeval(x, [0]*i + [1])
assert_almost_equal(res, tgt, err_msg=msg)
#check that shape is preserved
for i in range(3):
dims = [2]*i
x = np.zeros(dims)
assert_equal(herme.hermeval(x, [1]).shape, dims)
assert_equal(herme.hermeval(x, [1, 0]).shape, dims)
assert_equal(herme.hermeval(x, [1, 0, 0]).shape, dims)
def test_hermeval2d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test exceptions
assert_raises(ValueError, herme.hermeval2d, x1, x2[:2], self.c2d)
#test values
tgt = y1*y2
res = herme.hermeval2d(x1, x2, self.c2d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = herme.hermeval2d(z, z, self.c2d)
assert_(res.shape == (2, 3))
def test_hermeval3d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test exceptions
assert_raises(ValueError, herme.hermeval3d, x1, x2, x3[:2], self.c3d)
#test values
tgt = y1*y2*y3
res = herme.hermeval3d(x1, x2, x3, self.c3d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = herme.hermeval3d(z, z, z, self.c3d)
assert_(res.shape == (2, 3))
def test_hermegrid2d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test values
tgt = np.einsum('i,j->ij', y1, y2)
res = herme.hermegrid2d(x1, x2, self.c2d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = herme.hermegrid2d(z, z, self.c2d)
assert_(res.shape == (2, 3)*2)
def test_hermegrid3d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test values
tgt = np.einsum('i,j,k->ijk', y1, y2, y3)
res = herme.hermegrid3d(x1, x2, x3, self.c3d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = herme.hermegrid3d(z, z, z, self.c3d)
assert_(res.shape == (2, 3)*3)
class TestIntegral(TestCase):
def test_hermeint(self):
# check exceptions
assert_raises(ValueError, herme.hermeint, [0], .5)
assert_raises(ValueError, herme.hermeint, [0], -1)
assert_raises(ValueError, herme.hermeint, [0], 1, [0, 0])
# test integration of zero polynomial
for i in range(2, 5):
k = [0]*(i - 2) + [1]
res = herme.hermeint([0], m=i, k=k)
assert_almost_equal(res, [0, 1])
# check single integration with integration constant
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
tgt = [i] + [0]*i + [1/scl]
hermepol = herme.poly2herme(pol)
hermeint = herme.hermeint(hermepol, m=1, k=[i])
res = herme.herme2poly(hermeint)
assert_almost_equal(trim(res), trim(tgt))
# check single integration with integration constant and lbnd
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
hermepol = herme.poly2herme(pol)
hermeint = herme.hermeint(hermepol, m=1, k=[i], lbnd=-1)
assert_almost_equal(herme.hermeval(-1, hermeint), i)
# check single integration with integration constant and scaling
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
tgt = [i] + [0]*i + [2/scl]
hermepol = herme.poly2herme(pol)
hermeint = herme.hermeint(hermepol, m=1, k=[i], scl=2)
res = herme.herme2poly(hermeint)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with default k
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = herme.hermeint(tgt, m=1)
res = herme.hermeint(pol, m=j)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with defined k
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = herme.hermeint(tgt, m=1, k=[k])
res = herme.hermeint(pol, m=j, k=list(range(j)))
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with lbnd
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = herme.hermeint(tgt, m=1, k=[k], lbnd=-1)
res = herme.hermeint(pol, m=j, k=list(range(j)), lbnd=-1)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with scaling
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = herme.hermeint(tgt, m=1, k=[k], scl=2)
res = herme.hermeint(pol, m=j, k=list(range(j)), scl=2)
assert_almost_equal(trim(res), trim(tgt))
def test_hermeint_axis(self):
# check that axis keyword works
c2d = np.random.random((3, 4))
tgt = np.vstack([herme.hermeint(c) for c in c2d.T]).T
res = herme.hermeint(c2d, axis=0)
assert_almost_equal(res, tgt)
tgt = np.vstack([herme.hermeint(c) for c in c2d])
res = herme.hermeint(c2d, axis=1)
assert_almost_equal(res, tgt)
tgt = np.vstack([herme.hermeint(c, k=3) for c in c2d])
res = herme.hermeint(c2d, k=3, axis=1)
assert_almost_equal(res, tgt)
class TestDerivative(TestCase):
def test_hermeder(self):
# check exceptions
assert_raises(ValueError, herme.hermeder, [0], .5)
assert_raises(ValueError, herme.hermeder, [0], -1)
        # check that zeroth derivative does nothing
for i in range(5):
tgt = [0]*i + [1]
res = herme.hermeder(tgt, m=0)
assert_equal(trim(res), trim(tgt))
# check that derivation is the inverse of integration
for i in range(5):
for j in range(2, 5):
tgt = [0]*i + [1]
res = herme.hermeder(herme.hermeint(tgt, m=j), m=j)
assert_almost_equal(trim(res), trim(tgt))
# check derivation with scaling
for i in range(5):
for j in range(2, 5):
tgt = [0]*i + [1]
res = herme.hermeder(
herme.hermeint(tgt, m=j, scl=2), m=j, scl=.5)
assert_almost_equal(trim(res), trim(tgt))
def test_hermeder_axis(self):
# check that axis keyword works
c2d = np.random.random((3, 4))
tgt = np.vstack([herme.hermeder(c) for c in c2d.T]).T
res = herme.hermeder(c2d, axis=0)
assert_almost_equal(res, tgt)
tgt = np.vstack([herme.hermeder(c) for c in c2d])
res = herme.hermeder(c2d, axis=1)
assert_almost_equal(res, tgt)
class TestVander(TestCase):
# some random values in [-1, 1)
x = np.random.random((3, 5))*2 - 1
def test_hermevander(self):
# check for 1d x
x = np.arange(3)
v = herme.hermevander(x, 3)
assert_(v.shape == (3, 4))
for i in range(4):
coef = [0]*i + [1]
assert_almost_equal(v[..., i], herme.hermeval(x, coef))
# check for 2d x
x = np.array([[1, 2], [3, 4], [5, 6]])
v = herme.hermevander(x, 3)
assert_(v.shape == (3, 2, 4))
for i in range(4):
coef = [0]*i + [1]
assert_almost_equal(v[..., i], herme.hermeval(x, coef))
def test_hermevander2d(self):
# also tests hermeval2d for non-square coefficient array
x1, x2, x3 = self.x
c = np.random.random((2, 3))
van = herme.hermevander2d(x1, x2, [1, 2])
tgt = herme.hermeval2d(x1, x2, c)
res = np.dot(van, c.flat)
assert_almost_equal(res, tgt)
# check shape
van = herme.hermevander2d([x1], [x2], [1, 2])
assert_(van.shape == (1, 5, 6))
def test_hermevander3d(self):
# also tests hermeval3d for non-square coefficient array
x1, x2, x3 = self.x
c = np.random.random((2, 3, 4))
van = herme.hermevander3d(x1, x2, x3, [1, 2, 3])
tgt = herme.hermeval3d(x1, x2, x3, c)
res = np.dot(van, c.flat)
assert_almost_equal(res, tgt)
# check shape
van = herme.hermevander3d([x1], [x2], [x3], [1, 2, 3])
assert_(van.shape == (1, 5, 24))
class TestFitting(TestCase):
def test_hermefit(self):
def f(x):
return x*(x - 1)*(x - 2)
# Test exceptions
assert_raises(ValueError, herme.hermefit, [1], [1], -1)
assert_raises(TypeError, herme.hermefit, [[1]], [1], 0)
assert_raises(TypeError, herme.hermefit, [], [1], 0)
assert_raises(TypeError, herme.hermefit, [1], [[[1]]], 0)
assert_raises(TypeError, herme.hermefit, [1, 2], [1], 0)
assert_raises(TypeError, herme.hermefit, [1], [1, 2], 0)
assert_raises(TypeError, herme.hermefit, [1], [1], 0, w=[[1]])
assert_raises(TypeError, herme.hermefit, [1], [1], 0, w=[1, 1])
# Test fit
x = np.linspace(0, 2)
y = f(x)
#
coef3 = herme.hermefit(x, y, 3)
assert_equal(len(coef3), 4)
assert_almost_equal(herme.hermeval(x, coef3), y)
#
coef4 = herme.hermefit(x, y, 4)
assert_equal(len(coef4), 5)
assert_almost_equal(herme.hermeval(x, coef4), y)
#
coef2d = herme.hermefit(x, np.array([y, y]).T, 3)
assert_almost_equal(coef2d, np.array([coef3, coef3]).T)
# test weighting
w = np.zeros_like(x)
yw = y.copy()
w[1::2] = 1
y[0::2] = 0
wcoef3 = herme.hermefit(x, yw, 3, w=w)
assert_almost_equal(wcoef3, coef3)
#
wcoef2d = herme.hermefit(x, np.array([yw, yw]).T, 3, w=w)
assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T)
# test scaling with complex values x points whose square
# is zero when summed.
x = [1, 1j, -1, -1j]
assert_almost_equal(herme.hermefit(x, x, 1), [0, 1])
class TestCompanion(TestCase):
def test_raises(self):
assert_raises(ValueError, herme.hermecompanion, [])
assert_raises(ValueError, herme.hermecompanion, [1])
def test_dimensions(self):
for i in range(1, 5):
coef = [0]*i + [1]
assert_(herme.hermecompanion(coef).shape == (i, i))
def test_linear_root(self):
assert_(herme.hermecompanion([1, 2])[0, 0] == -.5)
class TestGauss(TestCase):
def test_100(self):
x, w = herme.hermegauss(100)
# test orthogonality. Note that the results need to be normalized,
# otherwise the huge values that can arise from fast growing
# functions like Laguerre can be very confusing.
v = herme.hermevander(x, 99)
vv = np.dot(v.T * w, v)
vd = 1/np.sqrt(vv.diagonal())
vv = vd[:, None] * vv * vd
assert_almost_equal(vv, np.eye(100))
# check that the integral of 1 is correct
tgt = np.sqrt(2*np.pi)
assert_almost_equal(w.sum(), tgt)
class TestMisc(TestCase):
def test_hermefromroots(self):
res = herme.hermefromroots([])
assert_almost_equal(trim(res), [1])
for i in range(1, 5):
roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2])
pol = herme.hermefromroots(roots)
res = herme.hermeval(roots, pol)
tgt = 0
assert_(len(pol) == i + 1)
assert_almost_equal(herme.herme2poly(pol)[-1], 1)
assert_almost_equal(res, tgt)
def test_hermeroots(self):
assert_almost_equal(herme.hermeroots([1]), [])
assert_almost_equal(herme.hermeroots([1, 1]), [-1])
for i in range(2, 5):
tgt = np.linspace(-1, 1, i)
res = herme.hermeroots(herme.hermefromroots(tgt))
assert_almost_equal(trim(res), trim(tgt))
def test_hermetrim(self):
coef = [2, -1, 1, 0]
# Test exceptions
assert_raises(ValueError, herme.hermetrim, coef, -1)
# Test results
assert_equal(herme.hermetrim(coef), coef[:-1])
assert_equal(herme.hermetrim(coef, 1), coef[:-3])
assert_equal(herme.hermetrim(coef, 2), [0])
def test_hermeline(self):
assert_equal(herme.hermeline(3, 4), [3, 4])
def test_herme2poly(self):
for i in range(10):
assert_almost_equal(herme.herme2poly([0]*i + [1]), Helist[i])
def test_poly2herme(self):
for i in range(10):
assert_almost_equal(herme.poly2herme(Helist[i]), [0]*i + [1])
def test_weight(self):
x = np.linspace(-5, 5, 11)
tgt = np.exp(-.5*x**2)
res = herme.hermeweight(x)
assert_almost_equal(res, tgt)
if __name__ == "__main__":
run_module_suite()
| agpl-3.0 |
rwl/traitsbackendpyjamas | enthought/traits/ui/pyjd/ui_live.py | 1 | 22740 | #------------------------------------------------------------------------------
# Copyright (c) 2007, Riverbank Computing Limited
# Copyright (c) 2009, Richard Lincoln
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#------------------------------------------------------------------------------
""" Creates a non-modal user interface for a specified UI object, where the
UI is "live", meaning that it immediately updates its underlying object(s).
"""
#------------------------------------------------------------------------------
# Imports:
#------------------------------------------------------------------------------
import pyjd
from pyjamas import Window
from pyjamas.ui.RootPanel import RootPanel
from pyjamas.ui.PopupPanel import PopupPanel
from pyjamas.ui.VerticalPanel import VerticalPanel
from pyjamas.ui.HorizontalPanel import HorizontalPanel
from pyjamas.ui import HasAlignment
from ui_base import BaseDialog
from ui_panel import panel
from enthought.traits.ui.undo import UndoHistory
from enthought.traits.ui.menu import UndoButton, RevertButton, OKButton, \
CancelButton, HelpButton
#-------------------------------------------------------------------------------
# Create the different 'live update' Pyjamas user interfaces.
#-------------------------------------------------------------------------------
def ui_live(ui, parent):
"""Creates a live, non-modal PyQt user interface for a specified UI object.
"""
_ui_dialog(ui, parent, BaseDialog.NONMODAL)
def ui_livemodal(ui, parent):
"""Creates a live, modal PyQt user interface for a specified UI object.
"""
_ui_dialog(ui, parent, BaseDialog.MODAL)
def ui_popup(ui, parent):
"""Creates a live, modal popup PyQt user interface for a specified UI
object.
"""
_ui_dialog(ui, parent, BaseDialog.POPUP)
def _ui_dialog(ui, parent, style):
"""Creates a live PyQt user interface for a specified UI object.
"""
if ui.owner is None:
ui.owner = _LiveWindow()
BaseDialog.display_ui(ui, parent, style)
class _LiveWindow(BaseDialog):
"""User interface window that immediately updates its underlying object(s).
"""
def init(self, ui, parent, style):
"""Initialise the object.
FIXME: Note that we treat MODAL and POPUP as equivalent until we
have an example that demonstrates how POPUP is supposed to work.
"""
self.ui = ui
self.control = ui.control
view = ui.view
history = ui.history
if self.control is not None:
if history is not None:
history.on_trait_change(self._on_undoable, 'undoable',
remove=True)
history.on_trait_change(self._on_redoable, 'redoable',
remove=True)
history.on_trait_change(self._on_revertable, 'undoable',
remove=True)
ui.reset()
else:
self.create_dialog(parent, style)
self.set_icon(view.icon)
# Convert the buttons to actions.
buttons = [self.coerce_button(button) for button in view.buttons]
nr_buttons = len(buttons)
no_buttons = ((nr_buttons == 1) and self.is_button(buttons[0], ''))
has_buttons = ((not no_buttons) and ((nr_buttons > 0) or view.undo or
view.revert or view.ok or view.cancel))
if has_buttons or (view.menubar is not None):
if history is None:
history = UndoHistory()
else:
history = None
ui.history = history
if (not no_buttons) and (has_buttons or view.help):
bbox = HorizontalPanel(width="100%")
bbox.setHorizontalAlignment(HasAlignment.ALIGN_RIGHT)
bbox.setVerticalAlignment(HasAlignment.ALIGN_RIGHT)
# Create the necessary special function buttons.
if nr_buttons == 0:
if view.undo:
self.check_button(buttons, UndoButton)
if view.revert:
self.check_button(buttons, RevertButton)
if view.ok:
self.check_button(buttons, OKButton)
if view.cancel:
self.check_button(buttons, CancelButton)
if view.help:
self.check_button(buttons, HelpButton)
for button in buttons:
if self.is_button(button, 'Undo'):
self.undo = self.add_button(button, bbox, self._on_undo,
False)
history.on_trait_change(self._on_undoable, 'undoable',
dispatch='ui')
if history.can_undo:
self._on_undoable(True)
self.redo = self.add_button(button, bbox, self._on_redo,
False, 'Redo')
history.on_trait_change(self._on_redoable, 'redoable',
dispatch='ui')
if history.can_redo:
self._on_redoable(True)
elif self.is_button(button, 'Revert'):
self.revert = self.add_button(button, bbox,
self._on_revert, False)
history.on_trait_change(self._on_revertable, 'undoable',
dispatch='ui')
if history.can_undo:
self._on_revertable(True)
elif self.is_button(button, 'OK'):
self.ok = self.add_button(button, bbox)
ui.on_trait_change(self._on_error, 'errors', dispatch='ui')
elif self.is_button(button, 'Cancel'):
self.add_button(button, bbox)
elif self.is_button(button, 'Help'):
self.add_button(button, bbox, self._on_help)
elif not self.is_button(button, ''):
self.add_button(button, bbox)
else:
bbox = None
self.add_contents(panel(ui), bbox)
def close(self, rc=True):
"""Close the dialog and set the given return code.
"""
super(_LiveWindow, self).close(rc)
self.undo = self.redo = self.revert = None
def _on_finished(self, result):
"""Handles the user finishing with the dialog.
"""
accept = bool(result)
if not accept and self.ui.history is not None:
self._on_revert()
self.close(accept)
#------------------------------------------------------------------------------
# Constants:
#------------------------------------------------------------------------------
## Window title to use if not specified in the view:
#DefaultTitle = "Edit properties"
#
## Types of supported windows:
#NONMODAL = 0
#MODAL = 1
#POPUP = 2
#POPOVER = 3
#INFO = 4
#
## Types of 'popup' dialogs:
#Popups = set( ( POPUP, POPOVER, INFO ) )
#
##------------------------------------------------------------------------------
## Creates a 'live update' user interface for a specified UI object:
##------------------------------------------------------------------------------
#
#def ui_live(ui, parent):
# """ Creates a live, non-modal Pyjamas web interface for a specified UI
# object.
# """
# _ui_dialog( ui, parent, NONMODAL )
#
#
#def _ui_dialog ( ui, parent, style ):
# """ Creates a live Pyjamas user interface for a specified UI object.
# """
# if ui.owner is None:
# # Toolkit-specific object that "owns" **control**
# ui.owner = LiveWindow()
#
# ui.owner.init( ui, parent, style )
# ui.control = ui.owner.control
## ui.control._parent = parent
## ui.control.transient(parent) # Associate this window with a parent window.
#
# try:
# ui.prepare_ui()
# except:
# ui.control.destroy()
# ui.control.ui = None
# ui.control = None
# ui.owner = None
# ui.result = False
# raise
#
# ui.handler.position( ui.info )
# # TODO: Restore the user preference items for a specified UI.
## restore_window( ui, is_popup = (style in Popups) )
#
# if style == MODAL:
# ui.control.grab_set()
# ui.control.focus_set()
# parent.wait_window(ui.control)
# else:
## ui.control.mainloop()
# RootPanel().add(ui.control)
# pyjd.run()
#
#
#class LiveWindow(BaseDialog):
# """ User interface window that immediately updates its underlying
# object(s).
# """
#
# #--------------------------------------------------------------------------
# # Initializes the object:
# #--------------------------------------------------------------------------
#
# def init(self, ui, parent, style):
# """ Initialises the dialog. """
#
# self.is_modal = (style == MODAL)
## window_style = 0
# view = ui.view
## if view.resizable:
## window_style |= (True, True)
#
# title = view.title
# if title == '':
# title = DefaultTitle
#
# history = ui.history
# window = ui.control
# if window is not None:
# if history is not None:
# history.on_trait_change( self._on_undoable, 'undoable',
# remove = True )
# history.on_trait_change( self._on_redoable, 'redoable',
# remove = True )
# history.on_trait_change( self._on_revertable, 'undoable',
# remove = True )
## window.SetSizer( None )
# ui.reset()
# else:
# self.ui = ui
# if style == MODAL:
# window = VerticalPanel()
## window.setTitle( title )
# if view.resizable:
# pass
# elif style == NONMODAL:
# window = VerticalPanel()
## window.setTitle( title )
# if parent is not None:
# pass
# else:
# window = PopupPanel
## window.bind("<Leave>", self._on_close_popup )
# window.addWindowCloseListener( self._on_close_popup )
# window._kind = ui.view.kind
## self._monitor = MouseMonitor( ui )
#
# # TODO: Set the correct default window background color.
#
# self.control = window
## window.protocol( "WM_DELETE_WINDOW", self._on_close_page )
## window.bind( "<Key>", self._on_key )
#
## self.set_icon( view.icon )
#
# # Buttons -------------------------------------------------------------
#
# buttons = [self.coerce_button( button ) for button in view.buttons]
# nbuttons = len( buttons )
#
# no_buttons = ((nbuttons == 1) and self.is_button( buttons[0], '' ))
# has_buttons = ((not no_buttons) and ((nbuttons > 0) or view.undo or
# view.revert or view.ok or view.cancel))
#
# if has_buttons or (view.menubar is not None):
# if history is None:
# history = UndoHistory()
# else:
# history = None
# ui.history = history
#
# #----------------------------------------------------------------------
# # Create the actual trait frame filled with widgets:
# #----------------------------------------------------------------------
#
# if ui.scrollable:
# sw = panel( ui, window )
# else:
# sw = panel( ui, window )
#
# RootPanel().add(sw)
#
# #----------------------------------------------------------------------
# # Add special function buttons (OK, Cancel, etc) as necessary:
# #----------------------------------------------------------------------
#
# if (not no_buttons) and (has_buttons or view.help):
#
# b_frame = VerticalPanel()
#
# # Convert all button flags to actual button actions if no buttons
# # were specified in the 'buttons' trait:
# if nbuttons == 0:
# if view.undo:
# self.check_button( buttons, UndoButton )
# if view.revert:
# self.check_button( buttons, RevertButton )
# if view.ok:
# self.check_button( buttons, OKButton )
# if view.cancel:
# self.check_button( buttons, CancelButton )
# if view.help:
# self.check_button( buttons, HelpButton )
#
# # Create a button for each button action.
# for button in buttons:
# button = self.coerce_button( button )
# # Undo button:
# if self.is_button( button, 'Undo' ):
# self.undo = self.add_button( button, b_frame,
# self._on_undo, False )
#
# self.redo = self.add_button( button, b_frame,
# self._on_redo, False, 'Redo' )
#
# history.on_trait_change( self._on_undoable, 'undoable',
# dispatch = 'ui' )
#
# history.on_trait_change( self._on_redoable, 'redoable',
# dispatch = 'ui' )
#
# if history.can_undo:
# self._on_undoable( True )
#
# if history.can_redo:
# self._on_redoable( True )
#
# # Revert button.
# elif self.is_button( button, 'Revert' ):
# self.revert = self.add_button( button, b_frame,
# self._on_revert, False )
# history.on_trait_change( self._on_revertable, 'undoable',
# dispatch = 'ui' )
# if history.can_undo:
# self._on_revertable( True )
#
# # OK button.
# elif self.is_button( button, 'OK' ):
# self.ok = self.add_button( button, b_frame, self._on_ok )
# ui.on_trait_change( self._on_error, 'errors',
# dispatch = 'ui' )
#
# # Cancel button.
# elif self.is_button( button, 'Cancel' ):
# self.add_button( button, b_frame, self._on_cancel )
#
# # Help button.
# elif self.is_button( button, 'Help' ):
# self.add_button( button, b_frame, self._on_help )
#
# elif not self.is_button( button, '' ):
# self.add_button( button, b_frame )
#
# RootPanel().add( b_frame )
#
# # Add the menu bar, tool bar and status bar (if any):
# self.add_menubar()
## self.add_toolbar()
## self.add_statusbar()
#
# #--------------------------------------------------------------------------
# # Closes the dialog window:
# #--------------------------------------------------------------------------
#
# def close ( self, rc = None ):
# """ Closes the dialog window.
# """
# ui = self.ui
# ui.result = (rc == OK)
# # TODO: Save the user preference items for a specified UI.
## save_window( ui )
#
# self.control.destroy()
#
# ui.finish()
# self.ui = self.undo = self.redo = self.revert = self.control = None
#
# #--------------------------------------------------------------------------
# # Handles the user clicking the window/dialog 'close' button/icon:
# #--------------------------------------------------------------------------
#
# def _on_close_page ( self, event ):
# """ Handles the user clicking the window/dialog "close" button/icon.
# """
# if self.ui.view.close_result == False:
# self._on_cancel( event )
# else:
# self._on_ok( event )
#
# #--------------------------------------------------------------------------
# # Handles the user giving focus to another window for a 'popup' view:
# #--------------------------------------------------------------------------
#
# def _on_close_popup ( self, event ):
# """ Handles the user giving focus to another window for a 'popup' view.
# """
## if not event.GetActive():
# self.close_popup()
#
#
# def close_popup ( self ):
# # Close the window if it has not already been closed:
# if self.ui.info.ui is not None:
# if self._on_ok():
## self._monitor.Stop()
# self.ui.control.destroy()
#
# #--------------------------------------------------------------------------
# # Handles the user clicking the 'OK' button:
# #--------------------------------------------------------------------------
#
# def _on_ok ( self, event = None ):
# """ Handles the user clicking the **OK** button.
# """
# if self.ui.handler.close( self.ui.info, True ):
# self.control.bind( "<Button-1>", None )
# self.close( OK )
# return True
#
# return False
#
# #--------------------------------------------------------------------------
# # Handles the user hitting the 'Esc'ape key:
# #--------------------------------------------------------------------------
#
# def _on_key ( self, event ):
# """ Handles the user pressing the Escape key.
# """
# if event.keycode() == 0x1B:
# self._on_close_page( event )
#
# #---------------------------------------------------------------------------
# # Handles an 'Undo' change request:
# #---------------------------------------------------------------------------
#
# def _on_undo ( self, event ):
# """ Handles an "Undo" change request.
# """
# self.ui.history.undo()
#
# #---------------------------------------------------------------------------
# # Handles a 'Redo' change request:
# #---------------------------------------------------------------------------
#
# def _on_redo ( self, event ):
# """ Handles a "Redo" change request.
# """
# self.ui.history.redo()
#
# #---------------------------------------------------------------------------
# # Handles a 'Revert' all changes request:
# #---------------------------------------------------------------------------
#
# def _on_revert ( self, event ):
# """ Handles a request to revert all changes.
# """
# ui = self.ui
# if ui.history is not None:
# ui.history.revert()
# ui.handler.revert( ui.info )
#
# #---------------------------------------------------------------------------
# # Handles a 'Cancel' all changes request:
# #---------------------------------------------------------------------------
#
# def _on_cancel ( self, event ):
# """ Handles a request to cancel all changes.
# """
# if self.ui.handler.close( self.ui.info, False ):
# self._on_revert( event )
# self.close( CANCEL )
#
# #---------------------------------------------------------------------------
# # Handles editing errors:
# #---------------------------------------------------------------------------
#
# def _on_error ( self, errors ):
# """ Handles editing errors.
# """
## if errors == 0:
## self.ok.config( state = Tkinter.NORMAL )
## else:
## self.ok.config( state = Tkinter.DISABLED )
#
# #---------------------------------------------------------------------------
# # Handles the 'Help' button being clicked:
# #---------------------------------------------------------------------------
#
# def _on_help ( self, event ):
# """ Handles the user clicking the Help button.
# """
# self.ui.handler.show_help( self.ui.info, event.widget )
#
# #---------------------------------------------------------------------------
# # Handles the undo history 'undoable' state changing:
# #---------------------------------------------------------------------------
#
# def _on_undoable ( self, state ):
# """ Handles a change to the "undoable" state of the undo history.
# """
## if state:
## self.undo.config( state = Tkinter.NORMAL )
## else:
## self.undo.config( state = Tkinter.DISABLED )
#
# #---------------------------------------------------------------------------
# # Handles the undo history 'redoable' state changing:
# #---------------------------------------------------------------------------
#
# def _on_redoable ( self, state ):
# """ Handles a change to the "redoable" state of the undo history.
# """
## if state:
## self.redo.config( state = Tkinter.NORMAL )
## else:
## self.redo.config( state = Tkinter.DISABLED )
#
# #---------------------------------------------------------------------------
# # Handles the 'revert' state changing:
# #---------------------------------------------------------------------------
#
# def _on_revertable ( self, state ):
# """ Handles a change to the "revert" state.
# """
## if state:
## self.revert.config( state = Tkinter.NORMAL )
## else:
## self.revert.config( state = Tkinter.DISABLED )
# EOF -------------------------------------------------------------------------
| mit |
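The dialog code above repeatedly wires button enable/disable handlers to an `UndoHistory`'s `can_undo`/`can_redo` flags (`history.on_trait_change(self._on_undoable, 'undoable', ...)`). As a toy illustration of what such a history object provides — this is a hypothetical sketch, not the real traitsui `UndoHistory`:

```python
class UndoHistory(object):
    """Toy undo/redo stack illustrating the can_undo/can_redo flags used above."""

    def __init__(self):
        self._undo, self._redo = [], []

    def add(self, action):
        self._undo.append(action)
        self._redo = []  # a new action invalidates the redo stack

    @property
    def can_undo(self):
        return bool(self._undo)

    @property
    def can_redo(self):
        return bool(self._redo)

    def undo(self):
        action = self._undo.pop()
        self._redo.append(action)
        return action

    def redo(self):
        action = self._redo.pop()
        self._undo.append(action)
        return action
```

A UI layer would observe `can_undo`/`can_redo` after each operation and grey out the corresponding buttons, which is exactly the role of the `_on_undoable`/`_on_redoable` callbacks in the file.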
Spstolar/BMachine | PythonCode/genericBM_hw_part_1.py | 1 | 4375 | import numpy as np
import time
def sigmoid(input):
return 1.0 / (1 + np.exp(-input))
class BoltzmannMachine(object):
def __init__(self, input_size, hidden_size, output_size):
self.total_nodes = input_size + hidden_size + output_size
self.state = np.random.randint(0, 2, self.total_nodes, dtype=int) # Begin with a random 0-1 draw.
self.state = (self.state - .5) * 2 # Convert to -1, +1 state.
self.weights = self.create_random_weights()
self.threshold_weights = np.random.uniform(-1, 1, size=(1, self.total_nodes)) # Random weights ~ U([-1,1])
self.history = self.state
self.sweeps = 1000
self.stabilization = np.zeros((self.sweeps, self.total_nodes))
self.threshold = .01
self.energy_history = np.zeros(200)
self.initial_weights = self.weights
self.initial_thresholds = self.threshold_weights
def print_current_state(self):
print self.state
def state_energy(self):
agreement_matrix = np.outer(self.state, self.state) # The (i,j) entry is 1 if i,j agree, else -1
energy_contributions = agreement_matrix * self.weights # Element-wise product.
energy = np.sum(energy_contributions) / 2 # Leaving off bias for now.
energy += np.dot(self.threshold_weights, self.state)
return energy
def state_prob(self):
"""
The (non-normalized) probability of this state. Does the whole calculation rather than just over the
affected subsets.
:return: non-normalized probability of this state.
"""
return np.exp(-self.state_energy())
def conditional_prob(self, node):
lin_sum_neighbors = np.dot(self.weights[node,:], self.state)
return sigmoid(lin_sum_neighbors)
def update(self, node):
plus_prob = self.conditional_prob(node) # P( x_j = 1 | all other node states)
coin_flip = np.random.binomial(1, plus_prob, 1)
result = 2*(coin_flip - .5) # Convert biased coin flip to -1 or 1.
# print result
self.state[node] = result
def run_machine(self, sweep_num, stabilized=0):
visit_list = np.arange(self.total_nodes) # The array [0 1 ... n-1].
for sweep in range(sweep_num):
np.random.shuffle(visit_list) # Shuffle the array [0 1 ... n-1].
for node_num in range(self.total_nodes):
node_to_update = visit_list[node_num]
self.update(node_to_update)
if stabilized == 0:
if self.stabilization_check(sweep) == 1:
break
if stabilized == 1:
self.history = np.vstack((self.history, self.state))
self.energy_history[sweep] = self.state_energy()
def stabilization_check(self, sweep):
prev_mean = self.empirical_mean()
self.history = np.vstack((self.history, self.state))
current_mean = self.empirical_mean()
difference = np.abs(current_mean - prev_mean)
self.stabilization[sweep, :] = np.less(difference, self.threshold)
if (np.sum(self.stabilization[sweep, :]) > 27) & (sweep > 100):
print sweep
# print self.stabilization[sweep, :]
return 1
else:
return 0
def create_random_weights(self):
weights = np.random.uniform(-1, 1, size=(self.total_nodes, self.total_nodes)) # Random weights ~ U([-1,1])
weights = np.triu(weights, k=1) # discard lower diagonal terms (and the diagonal to avoid self-connections)
weights = weights + weights.T # make the weights symmetric
return weights
def empirical_mean(self):
return np.mean(self.history, axis=0)
num_nodes = 30
start = time.time()
BM = BoltzmannMachine(0, num_nodes, 0)
BM.run_machine(BM.sweeps)
BM.run_machine(200,1)
end = time.time()
duration = end - start
print "It took " + str(duration) + " seconds."
np.save('energy_large_p1.npy', BM.energy_history)
np.save('stabilization_large.npy', BM.stabilization)
BM_small = BoltzmannMachine(0, num_nodes, 0)
BM_small.weights = BM.initial_weights / 10
BM_small.threshold_weights = BM.initial_thresholds / 10
BM_small.run_machine(BM_small.sweeps)
BM_small.run_machine(200,1)
np.save('energy_small_p1.npy', BM_small.energy_history)
np.save('stabilization_small.npy', BM_small.stabilization)
| mit |
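The `update` method in the Boltzmann machine above performs one Gibbs-sampling step: compute the linear input to a node from its neighbours, pass it through the sigmoid to get P(node = +1 | rest), then flip a biased coin. A minimal pure-Python sketch of that same conditional update (the names `gibbs_update` and the injectable `rng` are illustrative, not from the file):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gibbs_update(state, weights, node, rng=random.random):
    """Resample one +/-1 node conditioned on all the others.

    `weights` is a symmetric matrix (list of rows) with a zero diagonal,
    so the node's own state does not influence the draw.
    """
    lin_sum = sum(w * s for w, s in zip(weights[node], state))
    p_plus = sigmoid(lin_sum)  # P(state[node] = +1 | all other states)
    state[node] = 1 if rng() < p_plus else -1
    return state
```

Passing a deterministic `rng` makes the stochastic step testable, which is useful when checking convergence logic like the `stabilization_check` above.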
digris/openbroadcast.org | website/apps/importer/util/tools.py | 2 | 2844 | import logging
import time
import discogs_client
import requests
from django.conf import settings
DISCOGS_HOST = getattr(settings, "DISCOGS_HOST", None)
log = logging.getLogger(__name__)
def discogs_image_by_url(url, type="uri"):
image = None
log.debug("search image for %s" % url)
try:
id = url.split("/")
id = id[-1]
except Exception as e:
log.debug("unable to extract id: url: {} - {}".format(url, e))
return
if id:
log.debug("Lookup image for discog id: %s" % (id))
type = None
if "/master/" in url:
type = "masters"
if "/release/" in url:
type = "releases"
if "/artist/" in url:
type = "artists"
if "/label/" in url:
type = "labels"
log.debug('Type is "%s"' % type)
if type:
url = "http://%s/%s/%s" % (DISCOGS_HOST, type, id)
log.debug('constructed API url "%s"' % url)
r = requests.get(url, timeout=5)
if not r.status_code == 200:
log.warning("server error: %s %s" % (r.status_code, r.text))
return
try:
response = r.json()
if "images" in response:
image = None
images = response["images"]
for img in images:
if img["type"] == "primary":
image = img["resource_url"]
if not image:
for img in images:
if img["type"] == "secondary":
image = img["resource_url"]
if image:
return image
except:
pass
def discogs_id_by_url(url, type="uri"):
# TODO: refactor id extraction to get rid of `discogs_client`
# TODO: `discogs_client` v1.1.1 does not work anyway, as API changed.
discogs_id = None
discogs_client.user_agent = "NRG Processor 0.0.1 http://anorg.net/"
try:
id = url.split("/")
id = id[-1]
try:
return "%s" % int(id)
except:
item = None
if "/master/" in url:
log.debug('Type is "master-release"')
item = discogs_client.MasterRelease(int(id))
if "/release/" in url:
log.debug('Type is "release"')
item = discogs_client.Release(int(id))
if "/artist/" in url:
log.debug('Type is "artist"')
item = discogs_client.Artist(id)
if not item:
return
time.sleep(1.1)
return item.data["id"]
except Exception as e:
log.info("Unable to get id: %s", e)
return None
| gpl-3.0 |
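Both `discogs_image_by_url` and `discogs_id_by_url` above recover the resource id by taking the last path segment of the URL, and classify the resource by substring markers like `"/master/"`. A standalone sketch of those two extractions (helper names are illustrative, not from the module):

```python
def discogs_resource_type(url):
    """Map a Discogs URL to the API collection it belongs to, or None."""
    for marker, rtype in (("/master/", "masters"), ("/release/", "releases"),
                          ("/artist/", "artists"), ("/label/", "labels")):
        if marker in url:
            return rtype
    return None

def extract_discogs_id(url):
    """Return the trailing numeric id of a Discogs URL, or None."""
    last = url.rstrip("/").split("/")[-1]
    try:
        return int(last)
    except ValueError:
        # Artist/label URLs may end in a slug rather than a numeric id,
        # which is why the original falls back to a discogs_client lookup.
        return None
```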
rajegannathan/grasp-lift-eeg-cat-dog-solution-updated | python-packages/mne-python-0.10/mne/tests/test_bem.py | 4 | 10831 | # Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD 3 clause
import os.path as op
import numpy as np
from nose.tools import assert_raises, assert_true
from numpy.testing import assert_equal, assert_allclose
from mne import (make_bem_model, read_bem_surfaces, write_bem_surfaces,
make_bem_solution, read_bem_solution, write_bem_solution,
make_sphere_model, Transform)
from mne.preprocessing.maxfilter import fit_sphere_to_headshape
from mne.io.constants import FIFF
from mne.transforms import translation
from mne.datasets import testing
from mne.utils import run_tests_if_main, _TempDir, slow_test
from mne.bem import (_ico_downsample, _get_ico_map, _order_surfaces,
_assert_complete_surface, _assert_inside,
_check_surface_size, _bem_find_surface)
from mne.io import read_info
fname_raw = op.join(op.dirname(__file__), '..', 'io', 'tests', 'data',
'test_raw.fif')
subjects_dir = op.join(testing.data_path(download=False), 'subjects')
fname_bem_3 = op.join(subjects_dir, 'sample', 'bem',
'sample-320-320-320-bem.fif')
fname_bem_1 = op.join(subjects_dir, 'sample', 'bem',
'sample-320-bem.fif')
fname_bem_sol_3 = op.join(subjects_dir, 'sample', 'bem',
'sample-320-320-320-bem-sol.fif')
fname_bem_sol_1 = op.join(subjects_dir, 'sample', 'bem',
'sample-320-bem-sol.fif')
def _compare_bem_surfaces(surfs_1, surfs_2):
"""Helper to compare BEM surfaces"""
names = ['id', 'nn', 'rr', 'coord_frame', 'tris', 'sigma', 'ntri', 'np']
ignores = ['tri_cent', 'tri_nn', 'tri_area', 'neighbor_tri']
for s0, s1 in zip(surfs_1, surfs_2):
assert_equal(set(names), set(s0.keys()) - set(ignores))
assert_equal(set(names), set(s1.keys()) - set(ignores))
for name in names:
assert_allclose(s0[name], s1[name], rtol=1e-3, atol=1e-6,
err_msg='Mismatch: "%s"' % name)
def _compare_bem_solutions(sol_a, sol_b):
"""Helper to compare BEM solutions"""
# compare the surfaces we used
_compare_bem_surfaces(sol_a['surfs'], sol_b['surfs'])
# compare the actual solutions
names = ['bem_method', 'field_mult', 'gamma', 'is_sphere',
'nsol', 'sigma', 'source_mult', 'solution']
assert_equal(set(sol_a.keys()), set(sol_b.keys()))
assert_equal(set(names + ['surfs']), set(sol_b.keys()))
for key in names:
assert_allclose(sol_a[key], sol_b[key], rtol=1e-3, atol=1e-5,
err_msg='Mismatch: %s' % key)
@testing.requires_testing_data
def test_io_bem():
"""Test reading and writing of bem surfaces and solutions
"""
tempdir = _TempDir()
temp_bem = op.join(tempdir, 'temp-bem.fif')
assert_raises(ValueError, read_bem_surfaces, fname_raw)
assert_raises(ValueError, read_bem_surfaces, fname_bem_3, s_id=10)
surf = read_bem_surfaces(fname_bem_3, patch_stats=True)
surf = read_bem_surfaces(fname_bem_3, patch_stats=False)
write_bem_surfaces(temp_bem, surf[0])
surf_read = read_bem_surfaces(temp_bem, patch_stats=False)
_compare_bem_surfaces(surf, surf_read)
assert_raises(RuntimeError, read_bem_solution, fname_bem_3)
temp_sol = op.join(tempdir, 'temp-sol.fif')
sol = read_bem_solution(fname_bem_sol_3)
assert_true('BEM' in repr(sol))
write_bem_solution(temp_sol, sol)
sol_read = read_bem_solution(temp_sol)
_compare_bem_solutions(sol, sol_read)
sol = read_bem_solution(fname_bem_sol_1)
assert_raises(RuntimeError, _bem_find_surface, sol, 3)
def test_make_sphere_model():
"""Test making a sphere model"""
info = read_info(fname_raw)
assert_raises(ValueError, make_sphere_model, 'foo', 'auto', info)
assert_raises(ValueError, make_sphere_model, 'auto', 'auto', None)
# here we just make sure it works -- the functionality is actually
# tested more extensively e.g. in the forward and dipole code
bem = make_sphere_model('auto', 'auto', info)
assert_true('3 layers' in repr(bem))
assert_true('Sphere ' in repr(bem))
assert_true(' mm' in repr(bem))
bem = make_sphere_model('auto', None, info)
assert_true('no layers' in repr(bem))
assert_true('Sphere ' in repr(bem))
@testing.requires_testing_data
def test_bem_model():
"""Test BEM model creation from Python with I/O"""
tempdir = _TempDir()
fname_temp = op.join(tempdir, 'temp-bem.fif')
for kwargs, fname in zip((dict(), dict(conductivity=[0.3])),
[fname_bem_3, fname_bem_1]):
model = make_bem_model('sample', ico=2, subjects_dir=subjects_dir,
**kwargs)
model_c = read_bem_surfaces(fname)
_compare_bem_surfaces(model, model_c)
write_bem_surfaces(fname_temp, model)
model_read = read_bem_surfaces(fname_temp)
_compare_bem_surfaces(model, model_c)
_compare_bem_surfaces(model_read, model_c)
assert_raises(ValueError, make_bem_model, 'sample', # bad conductivity
conductivity=[0.3, 0.006], subjects_dir=subjects_dir)
@slow_test
@testing.requires_testing_data
def test_bem_solution():
"""Test making a BEM solution from Python with I/O"""
# test degenerate conditions
surf = read_bem_surfaces(fname_bem_1)[0]
assert_raises(RuntimeError, _ico_downsample, surf, 10) # bad dec grade
s_bad = dict(tris=surf['tris'][1:], ntri=surf['ntri'] - 1, rr=surf['rr'])
assert_raises(RuntimeError, _ico_downsample, s_bad, 1) # not isomorphic
s_bad = dict(tris=surf['tris'].copy(), ntri=surf['ntri'],
rr=surf['rr']) # bad triangulation
s_bad['tris'][0] = [0, 0, 0]
assert_raises(RuntimeError, _ico_downsample, s_bad, 1)
s_bad['id'] = 1
assert_raises(RuntimeError, _assert_complete_surface, s_bad)
s_bad = dict(tris=surf['tris'], ntri=surf['ntri'], rr=surf['rr'].copy())
s_bad['rr'][0] = 0.
assert_raises(RuntimeError, _get_ico_map, surf, s_bad)
surfs = read_bem_surfaces(fname_bem_3)
assert_raises(RuntimeError, _assert_inside, surfs[0], surfs[1]) # outside
surfs[0]['id'] = 100 # bad surfs
assert_raises(RuntimeError, _order_surfaces, surfs)
surfs[1]['rr'] /= 1000.
assert_raises(RuntimeError, _check_surface_size, surfs[1])
# actually test functionality
tempdir = _TempDir()
fname_temp = op.join(tempdir, 'temp-bem-sol.fif')
# use a model and solution made in Python
conductivities = [(0.3,), (0.3, 0.006, 0.3)]
fnames = [fname_bem_sol_1, fname_bem_sol_3]
for cond, fname in zip(conductivities, fnames):
for model_type in ('python', 'c'):
if model_type == 'python':
model = make_bem_model('sample', conductivity=cond, ico=2,
subjects_dir=subjects_dir)
else:
model = fname_bem_1 if len(cond) == 1 else fname_bem_3
solution = make_bem_solution(model)
solution_c = read_bem_solution(fname)
_compare_bem_solutions(solution, solution_c)
write_bem_solution(fname_temp, solution)
solution_read = read_bem_solution(fname_temp)
_compare_bem_solutions(solution, solution_c)
_compare_bem_solutions(solution_read, solution_c)
def test_fit_sphere_to_headshape():
"""Test fitting a sphere to digitization points"""
# Create points of various kinds
rad = 90. # mm
center = np.array([0.5, -10., 40.]) # mm
dev_trans = np.array([0., -0.005, -10.])
dev_center = center - dev_trans
dig = [
# Left auricular
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'ident': FIFF.FIFFV_POINT_LPA,
'kind': FIFF.FIFFV_POINT_CARDINAL,
'r': np.array([-1.0, 0.0, 0.0])},
# Nasion
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'ident': FIFF.FIFFV_POINT_NASION,
'kind': FIFF.FIFFV_POINT_CARDINAL,
'r': np.array([0.0, 1.0, 0.0])},
# Right auricular
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'ident': FIFF.FIFFV_POINT_RPA,
'kind': FIFF.FIFFV_POINT_CARDINAL,
'r': np.array([1.0, 0.0, 0.0])},
# Top of the head (extra point)
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EXTRA,
'r': np.array([0.0, 0.0, 1.0])},
# EEG points
# Fz
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EEG,
'r': np.array([0, .72, .69])},
# F3
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EEG,
'r': np.array([-.55, .67, .50])},
# F4
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EEG,
'r': np.array([.55, .67, .50])},
# Cz
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EEG,
'r': np.array([0.0, 0.0, 1.0])},
# Pz
{'coord_frame': FIFF.FIFFV_COORD_HEAD,
'kind': FIFF.FIFFV_POINT_EEG,
'r': np.array([0, -.72, .69])},
]
for d in dig:
d['r'] *= rad / 1000.
d['r'] += center / 1000.
# Device to head transformation (rotate .2 rad over X-axis)
dev_head_t = Transform('meg', 'head', translation(*(dev_trans / 1000.)))
info = {'dig': dig, 'dev_head_t': dev_head_t}
# Degenerate conditions
assert_raises(ValueError, fit_sphere_to_headshape, info,
dig_kinds=(FIFF.FIFFV_POINT_HPI,))
info['dig'][0]['coord_frame'] = FIFF.FIFFV_COORD_DEVICE
assert_raises(RuntimeError, fit_sphere_to_headshape, info)
info['dig'][0]['coord_frame'] = FIFF.FIFFV_COORD_HEAD
# # Test with 4 points that match a perfect sphere
dig_kinds = (FIFF.FIFFV_POINT_CARDINAL, FIFF.FIFFV_POINT_EXTRA)
r, oh, od = fit_sphere_to_headshape(info, dig_kinds=dig_kinds)
kwargs = dict(rtol=1e-3, atol=1e-2) # in mm
assert_allclose(r, rad, **kwargs)
assert_allclose(oh, center, **kwargs)
assert_allclose(od, dev_center, **kwargs)
# Test with all points
dig_kinds = (FIFF.FIFFV_POINT_CARDINAL, FIFF.FIFFV_POINT_EXTRA,
FIFF.FIFFV_POINT_EXTRA)
r, oh, od = fit_sphere_to_headshape(info, dig_kinds=dig_kinds)
assert_allclose(r, rad, **kwargs)
assert_allclose(oh, center, **kwargs)
assert_allclose(od, dev_center, **kwargs)
# Test with some noisy EEG points only.
dig_kinds = (FIFF.FIFFV_POINT_EEG,)
r, oh, od = fit_sphere_to_headshape(info, dig_kinds=dig_kinds)
kwargs = dict(rtol=1e-3, atol=10.) # in mm
assert_allclose(r, rad, **kwargs)
assert_allclose(oh, center, **kwargs)
assert_allclose(od, center, **kwargs)
dig = [dict(coord_frame=FIFF.FIFFV_COORD_DEVICE, )]
run_tests_if_main()
| bsd-3-clause |
bdastur/opsmonit | dockers/services/monit.py | 1 | 3156 | #!/usr/bin/env python
'''
Most basic Flask application:
Initialization:
* All Flask applications must create an application instance.
* The web server passes all requests from clients to this object for handling
using WSGI (Web Server Gateway Interface).
Routes and View Functions:
* Clients such as web browsers send requests to the web server, which in turn
sends them to the Flask app.
* Flask keeps a mapping of URLs to Python functions. This association is called a
route.
'''
import json
import rabbitmqadmin
from flask import Flask, request
app = Flask(__name__)
options = None
def rabbitmqadmin_init(request_data):
global options
if not options:
(options, args) = rabbitmqadmin.make_configuration()
options.username = request_data['username']
options.hostname = request_data['hostname']
options.port = "15672"
print "username: %s, hostname: %s " % (options.username, options.hostname)
@app.route("/")
def index():
return "<h1>Openstack Services Monitoring Console</h1>"
@app.route("/rabbitmq/", methods=['GET', 'POST'])
def rabbit_operations():
if request.headers['Content-Type'] == "application/json":
print "BRD: application/json"
request_data = request.json
else:
print "Request content should be application/json"
return "Request content should be application/json", 400
rabbitmqadmin_init(request_data)
operation = request_data['operation']
print "operation: ", operation
args = ["list", "nodes", "name"]
mgmt = rabbitmqadmin.Management(options, args[1:])
cols = args[1:]
(uri, obj_info) = mgmt.list_show_uri(rabbitmqadmin.LISTABLE,
'list', cols)
print "uri: ", uri
nodes_list = mgmt.get(uri).split(",")
if operation == "cluster_size":
print "nodes: ", nodes_list
len_nodes = len(nodes_list)
print "Cluster size: ", len_nodes
return str(len_nodes)
elif operation == "validate_all":
#Check cluster size.
expected_size = int(request_data.get('cluster_size', 3))
len_nodes = len(nodes_list)
print "Expected size: %d, Nodes cnt: %d" % (expected_size, len_nodes)
if expected_size == len_nodes:
return json.dumps({'STATUS': 'STATUS_OK'})
else:
return json.dumps({'STATUS': 'STATUS_FAIL'})
else:
return "other oper"
@app.route("/rabbit/cluster_status")
def rabbit_clusterstatus():
global options
if not options:
(options, args) = rabbitmqadmin.make_configuration()
options.username = 'guest'
options.hostname = "172.22.191.199"
options.port = "15672"
args = ["list", "nodes", "name"]
mgmt = rabbitmqadmin.Management(options, args[1:])
cols = args[1:]
(uri, obj_info) = mgmt.list_show_uri(rabbitmqadmin.LISTABLE, 'list', cols)
print "uri: ", uri
return mgmt.get(uri)
@app.route("/user/<name>")
def user(name):
return "<h1> Hello %s! </h1>" % name
'''
The application instance has a run() method that launches Flask's integrated
development web server.
'''
if __name__ == "__main__":
app.run(host = '0.0.0.0', port = 5025, debug = True)
| apache-2.0 |
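The docstring at the top of this file explains that Flask maps URLs to Python view functions and calls that association a "route". A toy, Flask-free sketch of that registration-and-dispatch idea (all names here are illustrative):

```python
class TinyRouter(object):
    """Minimal illustration of route registration and dispatch."""

    def __init__(self):
        self.routes = {}

    def route(self, path):
        # Returns a decorator that records path -> view function.
        def decorator(func):
            self.routes[path] = func
            return func
        return decorator

    def dispatch(self, path):
        handler = self.routes.get(path)
        if handler is None:
            return "404 Not Found"
        return handler()

app = TinyRouter()

@app.route("/")
def index():
    return "<h1>Openstack Services Monitoring Console</h1>"
```

Real Flask does the same mapping via `@app.route(...)`, but with URL pattern matching, HTTP methods, and request/response objects layered on top.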
gangadhar-kadam/mic-wnframework | webnotes/modules/__init__.py | 6 | 3284 | # Copyright (c) 2012 Web Notes Technologies Pvt Ltd (http://erpnext.com)
#
# MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
from __future__ import unicode_literals
"""
Utilities for using modules
"""
import webnotes, os, conf
transfer_types = ['Role', 'Print Format','DocType','Page','DocType Mapper',
'GL Mapper','Search Criteria', 'Patch', 'Report']
lower_case_files_for = ['DocType', 'Page', 'Search Criteria', 'Report',
"Workflow", 'Module Def', 'Desktop Item', 'Workflow State', 'Workflow Action']
code_fields_dict = {
'Page':[('script', 'js'), ('content', 'html'), ('style', 'css'), ('static_content', 'html'), ('server_code', 'py')],
'DocType':[('server_code_core', 'py'), ('client_script_core', 'js')],
'Search Criteria':[('report_script', 'js'), ('server_script', 'py'), ('custom_query', 'sql')],
'Patch':[('patch_code', 'py')],
'Stylesheet':['stylesheet', 'css'],
'Page Template':['template', 'html'],
'Control Panel':[('startup_code', 'js'), ('startup_css', 'css')]
}
def scrub(txt):
return txt.replace(' ','_').replace('-', '_').replace('/', '_').lower()
def scrub_dt_dn(dt, dn):
"""Returns in lowercase and code friendly names of doctype and name for certain types"""
ndt, ndn = dt, dn
if dt in lower_case_files_for:
ndt, ndn = scrub(dt), scrub(dn)
return ndt, ndn
def get_module_path(module):
"""Returns path of the given module"""
m = scrub(module)
app_path = os.path.dirname(conf.__file__)
if m in ('core',):
return os.path.join(app_path, 'lib', 'core')
else:
return os.path.join(app_path, 'app', m)
def get_doc_path(module, doctype, name):
dt, dn = scrub_dt_dn(doctype, name)
return os.path.join(get_module_path(module), dt, dn)
def reload_doc(module, dt=None, dn=None, force=True):
from webnotes.modules.import_file import import_files
return import_files(module, dt, dn, force)
def export_doc(doctype, name, module=None):
"""write out a doc"""
from webnotes.modules.export_file import write_document_file
import webnotes.model.doc
if not module: module = webnotes.conn.get_value(doctype, name, 'module')
write_document_file(webnotes.model.doc.get(doctype, name), module)
def get_doctype_module(doctype):
return webnotes.conn.get_value('DocType', doctype, 'module') | mit |
aekazakov/transform | t/demo/bin/upload_script_test.py | 4 | 18835 | #!/usr/bin/env python
import sys
import time
import datetime
import os
import os.path
import io
import bz2
import gzip
import zipfile
import tarfile
import pprint
import subprocess
import base64
# patch for handling unverified certificates
import ssl
if hasattr(ssl, '_create_unverified_context'):
ssl._create_default_https_context = ssl._create_unverified_context
# make sure the 3rd party and kbase modules are in the path for importing
#sys.path.insert(0,os.path.abspath("venv/lib/python2.7/site-packages/"))
from requests_toolbelt import MultipartEncoder, MultipartEncoderMonitor
import requests
import magic
import blessings
import dateutil.parser
import dateutil.tz
import simplejson
import biokbase.Transform.Client
import biokbase.Transform.script_utils as script_utils
import biokbase.Transform.handler_utils
import biokbase.Transform.drivers
import biokbase.userandjobstate.client
import biokbase.workspace.client
logger = biokbase.Transform.script_utils.stdoutlogger(__file__)
configs = dict()
def read_configs(configs_directory):
for x in os.listdir(configs_directory):
with open(os.path.join(configs_directory,x), 'r') as f:
c = simplejson.loads(f.read())
configs[c["script_type"]] = c
def validate_files(input_directory, external_type):
if external_type in configs["validate"]:
print "validate"
def show_workspace_object_list(workspace_url, workspace_name, object_name, token):
print term.blue("\tYour KBase data objects:")
c = biokbase.workspace.client.Workspace(workspace_url, token=token)
object_list = c.list_objects({"workspaces": [workspace_name]})
object_list = [x for x in object_list if object_name == x[1]]
for x in sorted(object_list):
elapsed_time = datetime.datetime.utcnow().replace(tzinfo=dateutil.tz.tzutc()) - dateutil.parser.parse(x[3])
print "\t\thow_recent: {0}\n\t\tname: {1}\n\t\ttype: {2}\n\t\tsize: {3:d}\n".format(elapsed_time, x[1], x[2], x[-2])
def show_workspace_object_contents(workspace_url, workspace_name, object_name, token):
c = biokbase.workspace.client.Workspace(workspace_url, token=token)
object_contents = c.get_objects([{"workspace": workspace_name, "objid": 2}])
print object_contents
def show_job_progress(ujs_url, awe_url, awe_id, ujs_id, token):
c = biokbase.userandjobstate.client.UserAndJobState(url=ujs_url, token=token)
completed = ["complete", "success"]
error = ["error", "fail", "ERROR"]
term = blessings.Terminal()
header = dict()
header["Authorization"] = "Oauth %s" % token
print term.blue("\tUJS Job Status:")
# wait for UJS to complete
last_status = ""
time_limit = 40
start = datetime.datetime.utcnow()
while 1:
try:
status = c.get_job_status(ujs_id)
except Exception, e:
print term.red("\t\tIssue connecting to UJS!")
# status may be unbound if the very first poll fails, so rebuild it
status = [ujs_id, "ERROR", "Caught Exception"]
if (datetime.datetime.utcnow() - start).seconds > time_limit:
print "\t\tJob is taking longer than it should, check debugging messages for more information."
status[1] = "ERROR"
status[2] = "Timeout"
if last_status != status[2]:
print "\t\t{0} status update: {1}".format(status[0], status[2])
last_status = status[2]
if status[1] in completed:
print term.green("\t\tKBase upload completed!\n")
break
elif status[1] in error:
print term.red("\t\tOur job failed!\n")
print term.red("{0}".format(c.get_detailed_error(ujs_id)))
print term.red("{0}".format(c.get_results(ujs_id)))
print term.bold("Additional AWE job details for debugging")
# check awe job output
awe_details = requests.get("{0}/job/{1}".format(awe_url,awe_id), headers=header, verify=True)
job_info = awe_details.json()["data"]
print term.red(simplejson.dumps(job_info, sort_keys=True, indent=4))
awe_stdout = requests.get("{0}/work/{1}?report=stdout".format(awe_url,job_info["tasks"][0]["taskid"]+"_0"), headers=header, verify=True)
print term.red("STDOUT : " + simplejson.dumps(awe_stdout.json()["data"], sort_keys=True, indent=4))
awe_stderr = requests.get("{0}/work/{1}?report=stderr".format(awe_url,job_info["tasks"][0]["taskid"]+"_0"), headers=header, verify=True)
print term.red("STDERR : " + simplejson.dumps(awe_stderr.json()["data"], sort_keys=True, indent=4))
break
def upload(transform_url, options, token):
c = biokbase.Transform.Client.Transform(url=transform_url, token=token)
response = c.upload(options)
return response
def post_to_shock(shockURL, filePath, token):
size = os.path.getsize(filePath)
term = blessings.Terminal()
print term.blue("\tShock upload status:\n")
def progress_indicator(monitor):
if monitor.bytes_read > size:
pass
else:
progress = int(monitor.bytes_read)/float(size) * 100.0
print term.move_up + term.move_left + "\t\tPercentage of bytes uploaded to shock {0:.2f}%".format(progress)
#build the header
header = dict()
header["Authorization"] = "Oauth %s" % token
dataFile = open(os.path.abspath(filePath), 'rb')  # binary mode keeps the upload byte-exact
encoder = MultipartEncoder(fields={'upload': (os.path.split(filePath)[-1], dataFile)})
header['Content-Type'] = encoder.content_type
m = MultipartEncoderMonitor(encoder, progress_indicator)
response = requests.post(shockURL + "/node", headers=header, data=m, allow_redirects=True, verify=True)
if not response.ok:
print response.raise_for_status()
result = response.json()
if result['error']:
raise Exception(result['error'][0])
else:
return result["data"]
def download_from_shock(shockURL, shock_id, filePath, token):
header = dict()
header["Authorization"] = "Oauth %s" % token
data = requests.get(shockURL + '/node/' + shock_id + "?download_raw", headers=header, stream=True)
size = int(data.headers['content-length'])
chunkSize = 10 * 2**20
download_iter = data.iter_content(chunkSize)
term = blessings.Terminal()
f = open(filePath, 'wb')
downloaded = 0
try:
for chunk in download_iter:
f.write(chunk)
if downloaded + chunkSize > size:
downloaded = size
else:
downloaded += chunkSize
print term.move_up + term.move_left + "\tDownloaded from shock {0:.2f}%".format(downloaded/float(size) * 100.0)
except:
raise
finally:
f.close()
data.close()
print "\tFile size : {0:f} MB".format(int(os.path.getsize(filePath))/float(1024*1024))
biokbase.Transform.script_utils.extract_data(logger, filePath)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='KBase Upload handler test driver')
parser.add_argument('--demo', action="store_true")
parser.add_argument('--shock_service_url', nargs='?', help='SHOCK service to upload local files', const="", default="https://kbase.us/services/shock-api/")
parser.add_argument('--ujs_service_url', nargs='?', help='UserandJobState service for monitoring progress', const="", default="https://kbase.us/services/userandjobstate/")
parser.add_argument('--workspace_service_url', nargs='?', help='Workspace service for KBase objects', const="", default="https://kbase.us/services/ws/")
parser.add_argument('--awe_service_url', nargs='?', help='AWE service for additional job monitoring', const="", default="http://140.221.67.242:7080")
parser.add_argument('--transform_service_url', nargs='?', help='Transform service that handles the data conversion to KBase', const="", default="http://140.221.67.242:7778/")
parser.add_argument('--handle_service_url', nargs='?', help='Handle service for KBase handle', const="", default="https://kbase.us/services/handle_service")
parser.add_argument('--fba_service_url', nargs='?', help='FBA service for Model data', const="", default="https://kbase.us/services/handle_service")
parser.add_argument('--external_type', nargs='?', help='the external type of the data', const="", default="")
parser.add_argument('--kbase_type', nargs='?', help='the kbase object type to create', const="", default="")
parser.add_argument('--workspace', nargs='?', help='name of the workspace where your objects should be created', const="", default="upload_testing")
parser.add_argument('--object_name', nargs='?', help='name of the workspace object to create', const="", default="")
parser.add_argument('--url_mapping', nargs='?', help='input url mapping', const="", default="{}")
parser.add_argument('--optional_arguments', nargs='?', help='optional arguments', const="", default='{"validate" : {}, "transform" : {}}')
parser.add_argument('--plugin_directory', nargs='?', help='path to the plugins directory', const="", default="/kb/dev_container/modules/transform/plugins/configs")
parser.add_argument('--file_path', nargs='?', help='path to file for upload', const="", default="")
parser.add_argument('--download_path', nargs='?', help='path to place downloaded files for validation', const=".", default=".")
parser.add_argument('--config_file', nargs='?', help='path to config file with parameters', const="", default="")
parser.add_argument('--verify', help='verify uploaded files', action="store_true")
#parser.add_argument('--create_log', help='create pass/fail log file', action='store_true')
args = parser.parse_args()
token = script_utils.get_token()
plugin = None
plugin = biokbase.Transform.handler_utils.PlugIns(args.plugin_directory, logger)
inputs = list()
if not args.demo:
if args.config_file:
f = open(args.config_file, 'r')
config = simplejson.loads(f.read())
f.close()
services = config["services"]
inputs = config["upload"]
else:
inputs = {"user":
{"external_type": args.external_type,
"kbase_type": args.kbase_type,
"object_name": args.object_name,
"workspace_name" : args.workspace,
"filePath": args.file_path,
"downloadPath": args.download_path,
"optional_arguments": simplejson.loads(args.optional_arguments),
"url_mapping" : simplejson.loads(args.url_mapping)
}
}
services = {"shock_service_url": args.shock_service_url,
"ujs_service_url": args.ujs_service_url,
"workspace_service_url": args.workspace_service_url,
"awe_service_url": args.awe_service_url,
"fba_service_url": args.awe_service_url,
"transform_service_url": args.transform_service_url,
"handle_service_url": args.handle_service_url}
workspace = args.workspace
else:
if "kbasetest" not in token and len(args.workspace.strip()) == 0:
print "If you are running the demo as a different user than kbasetest, you need to provide the name of your workspace with --workspace."
sys.exit(0)
else:
if args.workspace is not None:
workspace = args.workspace
else :
workspace = "upload_testing"
f = open("conf/upload_demo.cfg")
config = simplejson.loads(f.read())
f.close()
services = config["services"]
inputs = config["upload"]
uc = biokbase.userandjobstate.client.UserAndJobState(url=args.ujs_service_url, token=token)
stamp = datetime.datetime.now().isoformat()
os.mkdir(stamp)
#task_driver = biokbase.Transform.drivers.TransformTaskRunnerDriver(services, args.plugin_directory)
task_driver = biokbase.Transform.drivers.TransformClientTerminalDriver(services)
plugins = biokbase.Transform.handler_utils.PlugIns(args.plugin_directory)
term = blessings.Terminal()
for x in sorted(inputs):
external_type = inputs[x]["external_type"]
kbase_type = inputs[x]["kbase_type"]
object_name = inputs[x]["object_name"]
optional_arguments = None
if inputs[x].has_key("optional_arguments"):
optional_arguments = inputs[x]["optional_arguments"]
print "\n\n"
print term.bold("#"*80)
print term.white_on_black("Converting {0} => {1}".format(external_type,kbase_type))
print term.bold("#"*80)
files_to_upload = [k for k in inputs[x]["url_mapping"] if inputs[x]["url_mapping"][k].startswith("file://")]
SIZE_MB = float(1024*1024)
upload_step = 1
# check to see if we need to put any files in shock
if len(files_to_upload) > 0:
print term.bright_blue("Uploading local files")
print term.bold("Step {0:d}: Place local files in SHOCK".format(upload_step))
upload_step += 1
try:
for n in files_to_upload:
filePath = inputs[x]["url_mapping"][n].split("file://")[1]
fileName = os.path.split(filePath)[-1]
print term.blue("\tPreparing to upload {0}".format(filePath))
print "\tFile size : {0:f} MB".format(int(os.path.getsize(filePath))/SIZE_MB)
shock_response = task_driver.post_to_shock(services["shock_service_url"], filePath)
print term.green("\tShock upload of {0} successful.".format(filePath))
print "\tShock id : {0}\n\n".format(shock_response['id'])
inputs[x]["url_mapping"][n] = "{0}/node/{1}".format(services["shock_service_url"],shock_response["id"])
if args.verify:
downloadPath = os.path.join(stamp, external_type + "_to_" + kbase_type)
try:
os.mkdir(downloadPath)
except:
pass
downloadFilePath = os.path.join(downloadPath, fileName)
print term.bold("Optional Step: Verify files uploaded to SHOCK\n")
task_driver.download_from_shock(services["shock_service_url"], shock_response["id"], downloadFilePath)
print term.green("\tShock download of {0} successful.\n\n".format(downloadFilePath))
except Exception, e:
passed = False
print e.message
raise
try:
print term.bright_blue("Uploading from remote http or ftp urls")
print term.bold("Step {0}: Make KBase upload request with urls of data".format(upload_step))
print term.bold("Using data from : {0}".format(inputs[x]["url_mapping"].values()))
upload_step += 1
status = 'Initializing'
description = 'Mock handler testing' #method_hash["ujs_description"]
#progress = { 'ptype' : method_hash["ujs_ptype"], 'max' : method_hash["ujs_mstep"] };
progress = { 'ptype' : 'task', 'max' : 100 };
est = datetime.datetime.utcnow() + datetime.timedelta(minutes=int(3000))
ujs_job_id = uc.create_and_start_job(token, status, description, progress, est.strftime('%Y-%m-%dT%H:%M:%S+0000'));
input_object = dict()
input_object["external_type"] = external_type
input_object["kbase_type"] = kbase_type
input_object["job_details"] = plugins.get_job_details('upload', input_object)
input_object["workspace_name"] = workspace
input_object["object_name"] = object_name
input_object["url_mapping"] = inputs[x]["url_mapping"]
input_object["working_directory"] = stamp
input_object.update(services)
if input_object.has_key("awe_service_url"): del input_object["awe_service_url"]
if input_object.has_key("transform_service_url"): del input_object["transform_service_url"]
#if input_object.has_key("handle_service_url"): del input_object["handle_service_url"]
print term.blue("\tTransform handler upload started:")
if optional_arguments is not None:
input_object["optional_arguments"] = optional_arguments
else:
input_object["optional_arguments"] = {'validate': {}, 'transform': {}}
for key in input_object:
if type(input_object[key]) == type(dict()):
input_object[key] = base64.urlsafe_b64encode(simplejson.dumps(input_object[key]))
command_list = ["trns_upload_taskrunner", "--ujs_job_id", ujs_job_id]
for k in input_object:
command_list.append("--{0}".format(k))
command_list.append("{0}".format(input_object[k]))
print "\n\nHandler invocation {0}".format(" ".join(command_list))
task = subprocess.Popen(command_list, stderr=subprocess.PIPE)
sub_stdout, sub_stderr = task.communicate()
if sub_stdout is not None:
print sub_stdout
if sub_stderr is not None:
print >> sys.stderr, sub_stderr
if task.returncode != 0:
raise Exception(sub_stderr)
print term.bold("Step 3: View or use workspace objects : {0}/{1}".format(workspace, object_name))
#show_workspace_object_list(services["workspace"], workspace, object_name, token)
task_driver.show_workspace_object_list(workspace, object_name)
print term.bold("Step 4: DONE")
#job_exit_status = task_driver.run_job("upload", input_object, "{0} => {1}".format(external_type,kbase_type))
#if not job_exit_status[0]:
# print job_exit_status[1]
# raise# Exception("KBase Upload exited with an error")
#print term.bold("Step {0}: View or use workspace objects".format(upload_step))
#task_driver.show_workspace_object_list(workspace, object_name)
except Exception, e:
print e.message
print e
| mit |
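A minimal, dependency-free sketch of the chunked-download accounting used in `download_from_shock` above; the clamping keeps the reported percentage from overshooting when the final chunk is short. The stream and sizes here are made up for illustration:

```python
import io

def download_with_progress(stream, size, chunk_size):
    # Mirrors the loop in download_from_shock: count whole chunks, clamping at
    # the known total so the last (short) chunk does not push the count past 100%
    out = io.BytesIO()
    downloaded = 0
    for chunk in iter(lambda: stream.read(chunk_size), b''):
        out.write(chunk)
        downloaded = min(downloaded + chunk_size, size)
        print("Downloaded {0:.2f}%".format(downloaded / float(size) * 100.0))
    return out.getvalue()

data = b'x' * 25
result = download_with_progress(io.BytesIO(data), len(data), 10)
assert result == data  # prints 40.00%, 80.00%, 100.00%
```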
lounick/task_scheduling | task_scheduling/op_problem.py | 2 | 9881 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, lounick and decabyte
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of task_scheduling nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Orienteering problem solver
Implementation of an integer linear formulation for maximizing the targets visited by a vehicle under cost constraint.
The vehicle has to start at the first point and finish at the last one, and it is allowed to skip targets.
Described in:
Vansteenwegen, Pieter, Wouter Souffriau, and Dirk Van Oudheusden. "The orienteering problem: A survey."
European Journal of Operational Research 209.1 (2011): 1-10.
"""
from __future__ import division
import numpy as np
from gurobipy import *
def _callback(model, where):
"""Callback function for the solver
Callback function that adds lazy constraints for the optimisation process. Here it dynamically imposes cardinality
constraints for the vertices in the solution, ensuring that if a path enters a vertex there must be a path exiting.
Parameters
----------
model : object
The gurobi model instance
where : int
Gurobi specific callback variable
Returns
-------
"""
if where == GRB.callback.MIPSOL:
V = set(range(model._n))
idx_start = model._idxStart
# idx_finish = model._idxFinish
# solmat = np.zeros((model._n, model._n))
selected = []
for i in V:
sol = model.cbGetSolution([model._eVars[i, j] for j in V])
selected += [(i, j) for j in V if sol[j] > 0.5]
# solmat[i, :] = sol
if len(selected) <= 1:
return
for k in range(len(selected)):
el = selected[k]
entry = el[0]
if idx_start != entry:
expr1 = quicksum(model._eVars[i, entry] for i in V)
expr2 = quicksum(model._eVars[entry, j] for j in V)
model.cbLazy(expr1, GRB.EQUAL, expr2)
def op_solver(cost, profit=None, cost_max=None, idx_start=None, idx_finish=None, **kwargs):
"""Orienteering problem solver instance
Cost constrained traveling salesman problem solver for a single vehicle using the Gurobi MILP optimiser.
Parameters
----------
cost : ndarray (n, dims)
Cost matrix for traveling from point to point. Here is time (seconds) needed to go from points a to b.
profit : Optional[vector]
Profit vector for profit of visiting each point.
cost_max : Optional[double]
Maximum running time of the mission in seconds.
idx_start : Optional[int]
Optional starting point for the tour. If none is provided the first point of the array is chosen.
idx_finish : Optional[int]
Optional ending point of the tour. If none is provided the last point of the array is chosen.
kwargs : Optional[dict]
Optional extra arguments.
Returns
-------
route : list
The calculated route.
profit : double
The profit of the route.
m : object
A gurobi model object.
"""
# Number of points
n = cost.shape[0]
# other params
node_energy = float(kwargs.get('node_energy', 1.0))
# Check for default values
if idx_start is None:
idx_start = 0
if idx_finish is None:
idx_finish = n - 1
if profit is None:
profit = np.ones(n)
if cost_max is None:
cost_max = cost[idx_start, idx_finish]
# Create the vertices set
V = set(range(n))
m = Model()
# Create model variables
e_vars = {}
for i in V:
for j in V:
e_vars[i, j] = m.addVar(vtype=GRB.BINARY, name='e_' + str(i) + '_' + str(j))
m.update()
for i in V:
e_vars[i, i].ub = 0
m.update()
u_vars = {}
for i in V:
u_vars[i] = m.addVar(vtype=GRB.INTEGER, name='u_' + str(i))
m.update()
# Set objective function (0)
expr = 0
for i in V:
for j in V:
if i != idx_start and i != idx_finish:
expr += profit[i] * e_vars[i, j]
m.setObjective(expr, GRB.MAXIMIZE)
m.update()
# Constraints
# Add constraints for the initial and final node (1)
# None enters the starting point
m.addConstr(quicksum(e_vars[j, idx_start] for j in V.difference([idx_start])) == 0, "s_entry")
m.update()
# None exits the finish point
m.addConstr(quicksum(e_vars[idx_finish, j] for j in V.difference([idx_finish])) == 0, "f_exit")
m.update()
# Always exit the starting point
m.addConstr(quicksum(e_vars[idx_start, i] for i in V.difference([idx_start])) == 1, "s_exit")
m.update()
# Always enter the finish point
m.addConstr(quicksum(e_vars[i, idx_finish] for i in V.difference([idx_finish])) == 1, "f_entry")
m.update()
# From all other points someone may exit
for i in V.difference([idx_start, idx_finish]):
m.addConstr(quicksum(e_vars[i, j] for j in V if i != j) <= 1, "v_" + str(i) + "_exit")
m.update()
# To all other points someone may enter
for i in V.difference([idx_start, idx_finish]):
m.addConstr(quicksum(e_vars[j, i] for j in V if i != j) <= 1, "v_" + str(i) + "_entry")
m.update()
# for i in V.difference([idx_start, idx_finish]):
# m.addConstr(quicksum(e_vars[j, i] for j in V if i != j) == quicksum(e_vars[i, j] for j in V if i != j), "v_" + str(i) + "_cardinality")
# m.update()
# Add cost constraints (3)
expr = 0
for i in V:
for j in V:
# add a fixed cost for intermediate nodes (sensing energy)
if i != idx_start and i != idx_finish:
expr += node_energy * e_vars[i, j]
expr += cost[i, j] * e_vars[i, j]
m.addConstr(expr <= cost_max, "max_energy")
m.update()
# Constraint (4)
for i in V:
u_vars[i].lb = 0
u_vars[i].ub = n
m.update()
# Add subtour constraint (5)
for i in V:
for j in V:
m.addConstr(u_vars[i] - u_vars[j] + 1, GRB.LESS_EQUAL, (n - 1)*(1 - e_vars[i, j]),
"sec_" + str(i) + "_" + str(j))
m.update()
m._n = n
m._eVars = e_vars
m._uVars = u_vars
m._idxStart = idx_start
m._idxFinish = idx_finish
m.update()
m.params.OutputFlag = int(kwargs.get('output_flag', 0))
m.params.TimeLimit = float(kwargs.get('time_limit', 60.0))
m.params.MIPGap = float(kwargs.get('mip_gap', 0.0))
m.params.LazyConstraints = 1
m.optimize(_callback)
solution = m.getAttr('X', e_vars)
selected = [(i, j) for i in V for j in V if solution[i, j] > 0.5]
# solmat = np.zeros((n, n))
# for k, v in solution.iteritems():
# solmat[k[0], k[1]] = v
# print("\n")
# print(solmat)
# print(u)
# print(selected)
# print(sum(cost[s[0], s[1]] for s in selected))
route = []
next_city = idx_start
while len(selected) > 0:
for i in range(len(selected)):
if selected[i][0] == next_city:
route.append(next_city)
next_city = selected[i][1]
selected.pop(i)
break
route.append(next_city)
return route, m.objVal, m
def main():
import matplotlib.pyplot as plt
import task_scheduling.utils as tsu
import random
nodes = tsu.generate_nodes(n=100, lb=-100, up=100, dims=2)
cost = tsu.calculate_distances(nodes)
nodes = []
random.seed(42)
nodes.append([0,0])
for i in range(1,6):
for j in range(-2,3):
ni = i
nj = j
# ni = random.uniform(-0.5,0.5) + i
# nj = random.uniform(-0.5,0.5) + j
nodes.append([ni,nj])
nodes.append([6,0])
nodes = np.array(nodes)
cost = tsu.calculate_distances(nodes)
max_cost = [25.5]
for mc in max_cost:
solution, objective, _ = tsu.solve_problem(op_solver, cost, cost_max=mc, output_flag=1, mip_gap=0.0, time_limit=3600)
util = 0
for i in solution:
extras = 0
if i != 0 and i != solution[len(solution)-1]:
for j in range(cost.shape[0]):
if j != i and j not in solution and j != 0 and j != solution[len(solution)-1]:
extras += np.e**(-2*cost[i,j])
util += 1 + extras
print("Utility: {0}".format(util))
fig, ax = tsu.plot_problem(nodes, solution, objective)
plt.show()
if __name__ == '__main__':
main() | bsd-3-clause |
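The edge-popping loop at the end of `op_solver` rebuilds the tour from the selected `(i, j)` pairs in O(n²); the same reconstruction can be sketched with a successor dictionary. The edge list below is illustrative, and the walk assumes the subtour-elimination constraints held, so it terminates:

```python
def edges_to_route(selected, idx_start):
    # Build a successor map from the chosen (i, j) edges, then walk it
    succ = dict(selected)
    route = [idx_start]
    node = idx_start
    while node in succ:
        node = succ[node]
        route.append(node)
    return route

edges = [(0, 3), (3, 1), (1, 4)]
print(edges_to_route(edges, 0))  # [0, 3, 1, 4]
```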
yury-s/v8-inspector | Source/chrome/tools/telemetry/telemetry/decorators_unittest.py | 20 | 2837 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
from telemetry import decorators
class FakePlatform(object):
def GetOSName(self):
return 'os_name'
def GetOSVersionName(self):
return 'os_version_name'
class FakePossibleBrowser(object):
def __init__(self):
self.browser_type = 'browser_type'
self.platform = FakePlatform()
self.supports_tab_control = False
class FakeTest(object):
def SetEnabledStrings(self, enabled_strings):
# pylint: disable=W0201
self._enabled_strings = enabled_strings
def SetDisabledStrings(self, disabled_strings):
# pylint: disable=W0201
self._disabled_strings = disabled_strings
class TestShouldSkip(unittest.TestCase):
def testEnabledStrings(self):
test = FakeTest()
possible_browser = FakePossibleBrowser()
# When no enabled_strings is given, everything should be enabled.
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_version_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['os_name', 'another_os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name', 'os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetEnabledStrings(['another_os_name', 'another_os_version_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
def testDisabledStrings(self):
test = FakeTest()
possible_browser = FakePossibleBrowser()
# When no disabled_strings is given, nothing should be disabled.
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_version_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['os_name', 'another_os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name', 'os_name'])
self.assertTrue(decorators.ShouldSkip(test, possible_browser)[0])
test.SetDisabledStrings(['another_os_name', 'another_os_version_name'])
self.assertFalse(decorators.ShouldSkip(test, possible_browser)[0])
| bsd-3-clause |
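The tests above exercise attribute-string matching against enabled/disabled lists. A simplified standalone sketch of the skip rule they encode (not telemetry's actual implementation) is:

```python
def should_skip(enabled, disabled, attributes):
    # Skip when an enabled list exists but nothing in it matches,
    # or when anything in the disabled list matches.
    if enabled is not None and not any(a in attributes for a in enabled):
        return True
    if disabled is not None and any(a in attributes for a in disabled):
        return True
    return False

attrs = {'os_name', 'os_version_name', 'browser_type'}
print(should_skip(['another_os_name'], None, attrs))  # True
print(should_skip(None, ['os_name'], attrs))          # True
print(should_skip(['os_name'], None, attrs))          # False
```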
Xarrow/shadowsocks | tests/test_udp_src.py | 1009 | 2482 | #!/usr/bin/python
import socket
import socks
SERVER_IP = '127.0.0.1'
SERVER_PORT = 1081
if __name__ == '__main__':
# Test 1: same source port IPv4
sock_out = socks.socksocket(socket.AF_INET, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_out.set_proxy(socks.SOCKS5, SERVER_IP, SERVER_PORT)
sock_out.bind(('127.0.0.1', 9000))
sock_in1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_in2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_in1.bind(('127.0.0.1', 9001))
sock_in2.bind(('127.0.0.1', 9002))
sock_out.sendto(b'data', ('127.0.0.1', 9001))
result1 = sock_in1.recvfrom(8)
sock_out.sendto(b'data', ('127.0.0.1', 9002))
result2 = sock_in2.recvfrom(8)
sock_out.close()
sock_in1.close()
sock_in2.close()
# make sure they're from the same source port
assert result1 == result2
# Test 2: same source port IPv6
# try again from the same port but IPv6
sock_out = socks.socksocket(socket.AF_INET, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_out.set_proxy(socks.SOCKS5, SERVER_IP, SERVER_PORT)
sock_out.bind(('127.0.0.1', 9000))
sock_in1 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_in2 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_in1.bind(('::1', 9001))
sock_in2.bind(('::1', 9002))
sock_out.sendto(b'data', ('::1', 9001))
result1 = sock_in1.recvfrom(8)
sock_out.sendto(b'data', ('::1', 9002))
result2 = sock_in2.recvfrom(8)
sock_out.close()
sock_in1.close()
sock_in2.close()
# make sure they're from the same source port
assert result1 == result2
# Test 3: different source ports IPv6
sock_out = socks.socksocket(socket.AF_INET, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_out.set_proxy(socks.SOCKS5, SERVER_IP, SERVER_PORT)
sock_out.bind(('127.0.0.1', 9003))
sock_in1 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM,
socket.SOL_UDP)
sock_in1.bind(('::1', 9001))
sock_out.sendto(b'data', ('::1', 9001))
result3 = sock_in1.recvfrom(8)
# make sure they're from different source ports
assert result1 != result3
sock_out.close()
sock_in1.close()
| apache-2.0 |
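The same-source-port property the proxied tests above assert can be checked without SOCKS at all: one bound UDP socket sending to two receivers shows up with a single source address. This sketch uses OS-assigned ephemeral ports on loopback:

```python
import socket

out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
out.bind(('127.0.0.1', 0))  # let the OS pick the source port
in1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
in1.bind(('127.0.0.1', 0))
in2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
in2.bind(('127.0.0.1', 0))

out.sendto(b'data', in1.getsockname())
_, addr1 = in1.recvfrom(8)
out.sendto(b'data', in2.getsockname())
_, addr2 = in2.recvfrom(8)

# Both receivers saw the same (ip, port) source
assert addr1 == addr2 == out.getsockname()

for s in (out, in1, in2):
    s.close()
```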
okuta/chainer | examples/glance/glance.py | 8 | 2876 | # Note for contributors:
# This example code is referred to from "Chainer at a Glance" tutorial.
# If this file is to be modified, please also update the line numbers in
# `docs/source/glance.rst` accordingly.
import chainer as ch
from chainer import datasets
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import numpy as np
import matplotlib
matplotlib.use('Agg')
mushroomsfile = 'mushrooms.csv'
data_array = np.genfromtxt(
mushroomsfile, delimiter=',', dtype=str, skip_header=1)
for col in range(data_array.shape[1]):
data_array[:, col] = np.unique(data_array[:, col], return_inverse=True)[1]
X = data_array[:, 1:].astype(np.float32)
Y = data_array[:, 0].astype(np.int32)[:, None]
train, test = datasets.split_dataset_random(
datasets.TupleDataset(X, Y), int(data_array.shape[0] * .7))
train_iter = ch.iterators.SerialIterator(train, 100)
test_iter = ch.iterators.SerialIterator(
test, 100, repeat=False, shuffle=False)
# Network definition
def MLP(n_units, n_out):
layer = ch.Sequential(L.Linear(n_units), F.relu)
model = layer.repeat(2)
model.append(L.Linear(n_out))
return model
model = L.Classifier(
MLP(44, 1), lossfun=F.sigmoid_cross_entropy, accfun=F.binary_accuracy)
# Setup an optimizer
optimizer = ch.optimizers.SGD().setup(model)
# Create the updater, using the optimizer
updater = training.StandardUpdater(train_iter, optimizer, device=-1)
# Set up a trainer
trainer = training.Trainer(updater, (50, 'epoch'), out='result')
# Evaluate the model with the test dataset for each epoch
trainer.extend(extensions.Evaluator(test_iter, model, device=-1))
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
trainer.extend(extensions.DumpGraph('main/loss'))
trainer.extend(extensions.snapshot(), trigger=(20, 'epoch'))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
trainer.extend(
extensions.PlotReport(['main/loss', 'validation/main/loss'],
'epoch', file_name='loss.png'))
trainer.extend(
extensions.PlotReport(
['main/accuracy', 'validation/main/accuracy'],
'epoch', file_name='accuracy.png'))
# Print selected entries of the log to stdout
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
# Run the training
trainer.run()
x, t = test[np.random.randint(len(test))]
predict = model.predictor(x[None]).array
predict = predict[0][0]
if predict >= 0:
print('Predicted Poisonous, Actual ' + ['Edible', 'Poisonous'][t[0]])
else:
print('Predicted Edible, Actual ' + ['Edible', 'Poisonous'][t[0]])
| mit |
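The mushroom example above integer-encodes every categorical CSV column with `np.unique(..., return_inverse=True)[1]`. A minimal standalone sketch of that encoding trick (column values here are made up for illustration):

```python
import numpy as np

# Each categorical column is replaced by the index of its value in the
# sorted array of unique values -- a simple label encoding.
col = np.array(['p', 'e', 'p', 'e', 'e'])
uniques, inverse = np.unique(col, return_inverse=True)
print(uniques)   # ['e' 'p']
print(inverse)   # [1 0 1 0 0]
```

This is why the script can cast the encoded array straight to `float32`/`int32` afterwards: every cell is already a small non-negative integer index.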
virtualopensystems/neutron | neutron/plugins/vmware/api_client/client.py | 11 | 5785 | # Copyright 2012 VMware, Inc.
#
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import httplib
from neutron.openstack.common import log as logging
from neutron.plugins.vmware.api_client import base
from neutron.plugins.vmware.api_client import eventlet_client
from neutron.plugins.vmware.api_client import eventlet_request
from neutron.plugins.vmware.api_client import exception
from neutron.plugins.vmware.api_client import version
LOG = logging.getLogger(__name__)
class NsxApiClient(eventlet_client.EventletApiClient):
"""The Nsx API Client."""
def __init__(self, api_providers, user, password,
concurrent_connections=base.DEFAULT_CONCURRENT_CONNECTIONS,
gen_timeout=base.GENERATION_ID_TIMEOUT,
use_https=True,
connect_timeout=base.DEFAULT_CONNECT_TIMEOUT,
http_timeout=75, retries=2, redirects=2):
'''Constructor. Adds the following:
:param http_timeout: how long to wait before aborting an
unresponsive controller (and allow for retries to another
controller in the cluster)
:param retries: the number of times a request is retried.
:param redirects: the maximum number of server redirects to follow.
'''
super(NsxApiClient, self).__init__(
api_providers, user, password,
concurrent_connections=concurrent_connections,
gen_timeout=gen_timeout, use_https=use_https,
connect_timeout=connect_timeout)
self._request_timeout = http_timeout * retries
self._http_timeout = http_timeout
self._retries = retries
self._redirects = redirects
self._version = None
# NOTE(salvatore-orlando): This method is not used anymore. Login is now
# performed automatically inside the request eventlet if necessary.
def login(self, user=None, password=None):
'''Login to NSX controller.
Assumes same password is used for all controllers.
:param user: controller user (usually admin). Provided for
backwards compatibility. In the normal mode of operation
this should be None.
:param password: controller password. Provided for backwards
compatibility. In the normal mode of operation this should
be None.
'''
if user:
self._user = user
if password:
self._password = password
return self._login()
def request(self, method, url, body="", content_type="application/json"):
'''Issues request to controller.'''
g = eventlet_request.GenericRequestEventlet(
self, method, url, body, content_type, auto_login=True,
http_timeout=self._http_timeout,
retries=self._retries, redirects=self._redirects)
g.start()
response = g.join()
LOG.debug(_('Request returns "%s"'), response)
# response is a modified HTTPResponse object or None.
# response.read() will not work on response as the underlying library
# request_eventlet.ApiRequestEventlet has already called this
# method in order to extract the body and headers for processing.
# ApiRequestEventlet derived classes call .read() and
# .getheaders() on the HTTPResponse objects and store the results in
# the response object's .body and .headers data members for future
# access.
if response is None:
# Timeout.
LOG.error(_('Request timed out: %(method)s to %(url)s'),
{'method': method, 'url': url})
raise exception.RequestTimeout()
status = response.status
if status == httplib.UNAUTHORIZED:
raise exception.UnAuthorizedRequest()
# Fail-fast: Check for exception conditions and raise the
# appropriate exceptions for known error codes.
if status in exception.ERROR_MAPPINGS:
LOG.error(_("Received error code: %s"), status)
LOG.error(_("Server Error Message: %s"), response.body)
exception.ERROR_MAPPINGS[status](response)
# Continue processing for non-error condition.
if (status != httplib.OK and status != httplib.CREATED
and status != httplib.NO_CONTENT):
LOG.error(_("%(method)s to %(url)s, unexpected response code: "
"%(status)d (content = '%(body)s')"),
{'method': method, 'url': url,
'status': response.status, 'body': response.body})
return None
if not self._version:
self._version = version.find_version(response.headers)
return response.body
def get_version(self):
if not self._version:
# Determine the controller version by querying the
# cluster nodes. Currently, the version will be the
# one of the server that responds.
self.request('GET', '/ws.v1/control-cluster/node')
if not self._version:
LOG.error(_('Unable to determine NSX version. '
'Plugin might not work as expected.'))
return self._version
| apache-2.0 |
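`NsxApiClient.request()` fails fast by dispatching known HTTP status codes through the `exception.ERROR_MAPPINGS` table. A minimal sketch of that table-driven dispatch pattern (the exception classes and codes here are illustrative, not the real neutron ones):

```python
class ApiError(Exception):
    """Base class for illustrative API errors."""

class NotFound(ApiError):
    pass

class Conflict(ApiError):
    pass

def _raise_not_found(response):
    raise NotFound(response)

def _raise_conflict(response):
    raise Conflict(response)

# status code -> callable that raises the matching exception
ERROR_MAPPINGS = {404: _raise_not_found, 409: _raise_conflict}

def check_status(status, response):
    # Fail fast: known error codes dispatch straight to an exception,
    # everything else passes through for normal processing.
    if status in ERROR_MAPPINGS:
        ERROR_MAPPINGS[status](response)
    return response
```

Adding a new error condition then only requires registering one entry in the mapping, rather than growing an if/elif chain in the request path.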
emkailu/PAT3DEM | bin/p3pdbsuperimpose.py | 1 | 1225 | #!/usr/bin/env python
import os
import sys
import argparse
import pat3dem.pdb as p3p
from Bio.PDB import *
def main():
progname = os.path.basename(sys.argv[0])
usage = progname + """ [options] <pdbs>
Superimpose pdb
"""
args_def = {'fix':'','move':''}
parser = argparse.ArgumentParser()
parser.add_argument("pdbs", nargs='*', help="specify pdbs to be processed")
parser.add_argument("-f", "--fix", type=str, help="specify fixed pdb for calculating matrix, by default {}".format(args_def['fix']))
parser.add_argument("-m", "--move", type=str, help="specify moving pdb for calculating matrix, by default {}".format(args_def['move']))
args = parser.parse_args()
if len(sys.argv) == 1:
print "usage: " + usage
print "Please run '" + progname + " -h' for detailed options."
sys.exit(1)
# get default values
for i in args_def:
if args.__dict__[i] == None:
args.__dict__[i] = args_def[i]
#
sup = p3p.sup(args.fix,args.move)
print 'rms:', sup.rms
for i in args.pdbs:
p = PDBParser()
s_move = p.get_structure('move', i)
a_move = Selection.unfold_entities(s_move, 'A')
sup.apply(a_move)
io = PDBIO()
io.set_structure(s_move)
io.save(i+'_sup.pdb')
if __name__ == '__main__':
main()
| mit |
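The superimpose script above delegates the fitting to Bio.PDB's `Superimposer` (wrapped by `p3p.sup`), which reports an `rms` value for the aligned atom sets. As a rough illustration of what that RMS number measures, here is a numpy-only sketch of RMSD between two already-paired coordinate sets (no rotation/translation fitting is performed, unlike Bio.PDB):

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two N x 3 coordinate arrays
    that are already paired atom-for-atom (no superposition performed)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

print(rmsd([[0, 0, 0], [1, 0, 0]], [[0, 0, 0], [1, 0, 0]]))  # 0.0
```

Bio.PDB's `Superimposer.rms` is this quantity evaluated after the optimal rigid-body fit has been applied to the moving set.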
allanino/nupic | nupic/math/roc_utils.py | 49 | 8308 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""
Utility functions to compute ROC (Receiver Operator Characteristic) curves
and AUC (Area Under the Curve).
The ROCCurve() and AreaUnderCurve() functions are based on the roc_curve()
and auc() functions found in metrics.py module of scikit-learn
(http://scikit-learn.org/stable/). Scikit-learn has a BSD license (3 clause).
Following is the original license/credits statement from the top of the
metrics.py file:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Mathieu Blondel <mathieu@mblondel.org>
# Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD Style.
"""
import numpy as np
def ROCCurve(y_true, y_score):
"""compute Receiver operating characteristic (ROC)
Note: this implementation is restricted to the binary classification task.
Parameters
----------
y_true : array, shape = [n_samples]
true binary labels
y_score : array, shape = [n_samples]
target scores, can either be probability estimates of
the positive class, confidence values, or binary decisions.
Returns
-------
fpr : array, shape = [>2]
False Positive Rates
tpr : array, shape = [>2]
True Positive Rates
thresholds : array, shape = [>2]
Thresholds on y_score used to compute fpr and tpr
Examples
--------
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores)
>>> fpr
array([ 0. , 0.5, 0.5, 1. ])
References
----------
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
"""
y_true = np.ravel(y_true)
classes = np.unique(y_true)
# ROC only for binary classification
if classes.shape[0] != 2:
raise ValueError("ROC is defined for binary classification only")
y_score = np.ravel(y_score)
n_pos = float(np.sum(y_true == classes[1])) # nb of true positive
n_neg = float(np.sum(y_true == classes[0])) # nb of true negative
thresholds = np.unique(y_score)
neg_value, pos_value = classes[0], classes[1]
tpr = np.empty(thresholds.size, dtype=np.float) # True positive rate
fpr = np.empty(thresholds.size, dtype=np.float) # False positive rate
# Build tpr/fpr vector
current_pos_count = current_neg_count = sum_pos = sum_neg = idx = 0
signal = np.c_[y_score, y_true]
sorted_signal = signal[signal[:, 0].argsort(), :][::-1]
last_score = sorted_signal[0][0]
for score, value in sorted_signal:
if score == last_score:
if value == pos_value:
current_pos_count += 1
else:
current_neg_count += 1
else:
tpr[idx] = (sum_pos + current_pos_count) / n_pos
fpr[idx] = (sum_neg + current_neg_count) / n_neg
sum_pos += current_pos_count
sum_neg += current_neg_count
current_pos_count = 1 if value == pos_value else 0
current_neg_count = 1 if value == neg_value else 0
idx += 1
last_score = score
else:
tpr[-1] = (sum_pos + current_pos_count) / n_pos
fpr[-1] = (sum_neg + current_neg_count) / n_neg
# hard decisions, add (0,0)
if fpr.shape[0] == 2:
fpr = np.array([0.0, fpr[0], fpr[1]])
tpr = np.array([0.0, tpr[0], tpr[1]])
# trivial decisions, add (0,0) and (1,1)
elif fpr.shape[0] == 1:
fpr = np.array([0.0, fpr[0], 1.0])
tpr = np.array([0.0, tpr[0], 1.0])
return fpr, tpr, thresholds
def AreaUnderCurve(x, y):
"""Compute Area Under the Curve (AUC) using the trapezoidal rule
Parameters
----------
x : array, shape = [n]
x coordinates
y : array, shape = [n]
y coordinates
Returns
-------
auc : float
Examples
--------
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> pred = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, pred)
>>> metrics.auc(fpr, tpr)
0.75
"""
#x, y = check_arrays(x, y)
if x.shape[0] != y.shape[0]:
raise ValueError('x and y should have the same shape'
' to compute area under curve,'
' but x.shape = %s and y.shape = %s.'
% (x.shape, y.shape))
if x.shape[0] < 2:
raise ValueError('At least 2 points are needed to compute'
' area under curve, but x.shape = %s' % x.shape)
# reorder the data points according to the x axis
order = np.argsort(x)
x = x[order]
y = y[order]
h = np.diff(x)
area = np.sum(h * (y[1:] + y[:-1])) / 2.0
return area
def _printNPArray(x, precision=2):
format = "%%.%df" % (precision)
for elem in x:
print format % (elem),
print
def _test():
"""
This is a toy example, to show the basic functionality:
The dataset is:
actual prediction
-------------------------
0 0.1
0 0.4
1 0.5
1 0.3
1 0.45
Some ROC terminology:
A True Positive (TP) is when we predict TRUE and the actual value is 1.
A False Positive (FP) is when we predict TRUE, but the actual value is 0.
The True Positive Rate (TPR) is TP/P, where P is the total number of actual
positives (3 in this example, the last 3 samples).
The False Positive Rate (FPR) is FP/N, where N is the total number of actual
negatives (2 in this example, the first 2 samples)
Here are the classifications at various choices for the threshold. The
prediction is TRUE if the predicted value is >= threshold and FALSE otherwise.
actual pred 0.50 0.45 0.40 0.30 0.10
---------------------------------------------------------
0 0.1 0 0 0 0 1
0 0.4 0 0 1 1 1
1 0.5 1 1 1 1 1
1 0.3 0 0 0 1 1
1 0.45 0 1 1 1 1
TruePos(TP) 1 2 2 3 3
FalsePos(FP) 0 0 1 1 2
TruePosRate(TPR) 1/3 2/3 2/3 3/3 3/3
FalsePosRate(FPR) 0/2 0/2 1/2 1/2 2/2
The ROC curve is a plot of FPR on the x-axis and TPR on the y-axis. Basically,
one can pick any operating point along this curve to run, the operating point
determined by which threshold you want to use. By changing the threshold, you
tradeoff TP's for FPs.
The more area under this curve, the better the classification algorithm is.
The AreaUnderCurve() function can be used to compute the area under this
curve.
"""
yTrue = np.array([0, 0, 1, 1, 1])
yScore = np.array([0.1, 0.4, 0.5, 0.3, 0.45])
(fpr, tpr, thresholds) = ROCCurve(yTrue, yScore)
print "Actual: ",
_printNPArray(yTrue)
print "Predicted: ",
_printNPArray(yScore)
print
print "Thresholds:",
_printNPArray(thresholds[::-1])
print "FPR(x): ",
_printNPArray(fpr)
print "TPR(y): ",
_printNPArray(tpr)
print
area = AreaUnderCurve(fpr, tpr)
print "AUC: ", area
if __name__=='__main__':
_test()
| agpl-3.0 |
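`AreaUnderCurve()` above is a plain trapezoidal integration: sort by x, then sum `h * (y_left + y_right) / 2` over consecutive points. Replicating it standalone on the sklearn doctest data quoted in the module's docstrings:

```python
import numpy as np

def trapezoid_auc(x, y):
    # Sort by x, then accumulate trapezoid areas: h * (y_left + y_right) / 2.
    order = np.argsort(x)
    x, y = np.asarray(x, dtype=float)[order], np.asarray(y, dtype=float)[order]
    h = np.diff(x)
    return float(np.sum(h * (y[1:] + y[:-1])) / 2.0)

fpr = np.array([0.0, 0.5, 0.5, 1.0])
tpr = np.array([0.5, 0.5, 1.0, 1.0])
print(trapezoid_auc(fpr, tpr))  # 0.75
```

The vertical segment (two points sharing x = 0.5) contributes zero area because its `h` is zero, which is exactly how step-shaped ROC curves integrate correctly under this rule.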
b0ttl3z/SickRage | lib/unidecode/x0a4.py | 252 | 4437 | data = (
'qiet', # 0x00
'qiex', # 0x01
'qie', # 0x02
'qiep', # 0x03
'quot', # 0x04
'quox', # 0x05
'quo', # 0x06
'quop', # 0x07
'qot', # 0x08
'qox', # 0x09
'qo', # 0x0a
'qop', # 0x0b
'qut', # 0x0c
'qux', # 0x0d
'qu', # 0x0e
'qup', # 0x0f
'qurx', # 0x10
'qur', # 0x11
'qyt', # 0x12
'qyx', # 0x13
'qy', # 0x14
'qyp', # 0x15
'qyrx', # 0x16
'qyr', # 0x17
'jjit', # 0x18
'jjix', # 0x19
'jji', # 0x1a
'jjip', # 0x1b
'jjiet', # 0x1c
'jjiex', # 0x1d
'jjie', # 0x1e
'jjiep', # 0x1f
'jjuox', # 0x20
'jjuo', # 0x21
'jjuop', # 0x22
'jjot', # 0x23
'jjox', # 0x24
'jjo', # 0x25
'jjop', # 0x26
'jjut', # 0x27
'jjux', # 0x28
'jju', # 0x29
'jjup', # 0x2a
'jjurx', # 0x2b
'jjur', # 0x2c
'jjyt', # 0x2d
'jjyx', # 0x2e
'jjy', # 0x2f
'jjyp', # 0x30
'njit', # 0x31
'njix', # 0x32
'nji', # 0x33
'njip', # 0x34
'njiet', # 0x35
'njiex', # 0x36
'njie', # 0x37
'njiep', # 0x38
'njuox', # 0x39
'njuo', # 0x3a
'njot', # 0x3b
'njox', # 0x3c
'njo', # 0x3d
'njop', # 0x3e
'njux', # 0x3f
'nju', # 0x40
'njup', # 0x41
'njurx', # 0x42
'njur', # 0x43
'njyt', # 0x44
'njyx', # 0x45
'njy', # 0x46
'njyp', # 0x47
'njyrx', # 0x48
'njyr', # 0x49
'nyit', # 0x4a
'nyix', # 0x4b
'nyi', # 0x4c
'nyip', # 0x4d
'nyiet', # 0x4e
'nyiex', # 0x4f
'nyie', # 0x50
'nyiep', # 0x51
'nyuox', # 0x52
'nyuo', # 0x53
'nyuop', # 0x54
'nyot', # 0x55
'nyox', # 0x56
'nyo', # 0x57
'nyop', # 0x58
'nyut', # 0x59
'nyux', # 0x5a
'nyu', # 0x5b
'nyup', # 0x5c
'xit', # 0x5d
'xix', # 0x5e
'xi', # 0x5f
'xip', # 0x60
'xiet', # 0x61
'xiex', # 0x62
'xie', # 0x63
'xiep', # 0x64
'xuox', # 0x65
'xuo', # 0x66
'xot', # 0x67
'xox', # 0x68
'xo', # 0x69
'xop', # 0x6a
'xyt', # 0x6b
'xyx', # 0x6c
'xy', # 0x6d
'xyp', # 0x6e
'xyrx', # 0x6f
'xyr', # 0x70
'yit', # 0x71
'yix', # 0x72
'yi', # 0x73
'yip', # 0x74
'yiet', # 0x75
'yiex', # 0x76
'yie', # 0x77
'yiep', # 0x78
'yuot', # 0x79
'yuox', # 0x7a
'yuo', # 0x7b
'yuop', # 0x7c
'yot', # 0x7d
'yox', # 0x7e
'yo', # 0x7f
'yop', # 0x80
'yut', # 0x81
'yux', # 0x82
'yu', # 0x83
'yup', # 0x84
'yurx', # 0x85
'yur', # 0x86
'yyt', # 0x87
'yyx', # 0x88
'yy', # 0x89
'yyp', # 0x8a
'yyrx', # 0x8b
'yyr', # 0x8c
'[?]', # 0x8d
'[?]', # 0x8e
'[?]', # 0x8f
'Qot', # 0x90
'Li', # 0x91
'Kit', # 0x92
'Nyip', # 0x93
'Cyp', # 0x94
'Ssi', # 0x95
'Ggop', # 0x96
'Gep', # 0x97
'Mi', # 0x98
'Hxit', # 0x99
'Lyr', # 0x9a
'Bbut', # 0x9b
'Mop', # 0x9c
'Yo', # 0x9d
'Put', # 0x9e
'Hxuo', # 0x9f
'Tat', # 0xa0
'Ga', # 0xa1
'[?]', # 0xa2
'[?]', # 0xa3
'Ddur', # 0xa4
'Bur', # 0xa5
'Gguo', # 0xa6
'Nyop', # 0xa7
'Tu', # 0xa8
'Op', # 0xa9
'Jjut', # 0xaa
'Zot', # 0xab
'Pyt', # 0xac
'Hmo', # 0xad
'Yit', # 0xae
'Vur', # 0xaf
'Shy', # 0xb0
'Vep', # 0xb1
'Za', # 0xb2
'Jo', # 0xb3
'[?]', # 0xb4
'Jjy', # 0xb5
'Got', # 0xb6
'Jjie', # 0xb7
'Wo', # 0xb8
'Du', # 0xb9
'Shur', # 0xba
'Lie', # 0xbb
'Cy', # 0xbc
'Cuop', # 0xbd
'Cip', # 0xbe
'Hxop', # 0xbf
'Shat', # 0xc0
'[?]', # 0xc1
'Shop', # 0xc2
'Che', # 0xc3
'Zziet', # 0xc4
'[?]', # 0xc5
'Ke', # 0xc6
'[?]', # 0xc7
'[?]', # 0xc8
'[?]', # 0xc9
'[?]', # 0xca
'[?]', # 0xcb
'[?]', # 0xcc
'[?]', # 0xcd
'[?]', # 0xce
'[?]', # 0xcf
'[?]', # 0xd0
'[?]', # 0xd1
'[?]', # 0xd2
'[?]', # 0xd3
'[?]', # 0xd4
'[?]', # 0xd5
'[?]', # 0xd6
'[?]', # 0xd7
'[?]', # 0xd8
'[?]', # 0xd9
'[?]', # 0xda
'[?]', # 0xdb
'[?]', # 0xdc
'[?]', # 0xdd
'[?]', # 0xde
'[?]', # 0xdf
'[?]', # 0xe0
'[?]', # 0xe1
'[?]', # 0xe2
'[?]', # 0xe3
'[?]', # 0xe4
'[?]', # 0xe5
'[?]', # 0xe6
'[?]', # 0xe7
'[?]', # 0xe8
'[?]', # 0xe9
'[?]', # 0xea
'[?]', # 0xeb
'[?]', # 0xec
'[?]', # 0xed
'[?]', # 0xee
'[?]', # 0xef
'[?]', # 0xf0
'[?]', # 0xf1
'[?]', # 0xf2
'[?]', # 0xf3
'[?]', # 0xf4
'[?]', # 0xf5
'[?]', # 0xf6
'[?]', # 0xf7
'[?]', # 0xf8
'[?]', # 0xf9
'[?]', # 0xfa
'[?]', # 0xfb
'[?]', # 0xfc
'[?]', # 0xfd
'[?]', # 0xfe
)
| gpl-3.0 |
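Each unidecode table module (here `x0a4.py`, covering Unicode block U+A400 to U+A4FF, Yi syllables) is a 256-entry tuple indexed by the low byte of the code point. A minimal sketch of the lookup, using a tiny stand-in table rather than the full tuple above:

```python
# A tiny stand-in for the 256-entry block table; indices taken from the
# x0a4 table above (0x91 -> 'Li', 0x93 -> 'Nyip', 0x98 -> 'Mi').
data = {0x91: 'Li', 0x93: 'Nyip', 0x98: 'Mi'}

def transliterate(ch):
    codepoint = ord(ch)
    assert codepoint >> 8 == 0xA4, "character outside this table's block"
    # High byte selects the table module, low byte indexes into it;
    # unmapped slots fall back to the '[?]' placeholder.
    return data.get(codepoint & 0xFF, '[?]')

print(transliterate('\uA491'))  # 'Li'
```

The real unidecode package picks the module by the code point's high byte (`x0a4` for 0xA4xx) and indexes the tuple with the low byte in exactly this way.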
lshain-android-source/external-chromium_org | tools/telemetry/telemetry/core/platform/profiler/sample_profiler.py | 23 | 2870 | # Copyright (c) 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import signal
import subprocess
import sys
import tempfile
from telemetry.core import exceptions
from telemetry.core import util
from telemetry.core.platform import profiler
class _SingleProcessSampleProfiler(object):
"""An internal class for using iprofiler for a given process."""
def __init__(self, pid, output_path):
self._output_path = output_path
self._tmp_output_file = tempfile.NamedTemporaryFile('w', 0)
self._proc = subprocess.Popen(
['sample', str(pid), '-mayDie', '-file', self._output_path],
stdout=self._tmp_output_file, stderr=subprocess.STDOUT)
def IsStarted():
stdout = self._GetStdOut()
if 'sample cannot examine process' in stdout:
raise exceptions.ProfilingException(
'Failed to start sample for process %s\n' %
self._output_path.split('.')[1])
return 'Sampling process' in stdout
util.WaitFor(IsStarted, 120)
def CollectProfile(self):
self._proc.send_signal(signal.SIGINT)
exit_code = self._proc.wait()
try:
if exit_code:
raise Exception(
'sample failed with exit code %d. Output:\n%s' % (
exit_code, self._GetStdOut()))
finally:
self._proc = None
self._tmp_output_file.close()
print 'To view the profile, run:'
print ' open -a TextEdit %s' % self._output_path
return self._output_path
def _GetStdOut(self):
self._tmp_output_file.flush()
try:
with open(self._tmp_output_file.name) as f:
return f.read()
except IOError:
return ''
class SampleProfiler(profiler.Profiler):
def __init__(self, browser_backend, platform_backend, output_path):
super(SampleProfiler, self).__init__(
browser_backend, platform_backend, output_path)
process_output_file_map = self._GetProcessOutputFileMap()
self._process_profilers = []
for pid, output_file in process_output_file_map.iteritems():
if '.utility' in output_file:
        # The utility process may not have been started by Telemetry,
        # so we won't have permission to profile it.
continue
self._process_profilers.append(
_SingleProcessSampleProfiler(pid, output_file))
@classmethod
def name(cls):
return 'sample'
@classmethod
def is_supported(cls, options):
if sys.platform != 'darwin':
return False
if not options:
return True
return (not options.browser_type.startswith('android') and
not options.browser_type.startswith('cros'))
def CollectProfile(self):
output_paths = []
for single_process in self._process_profilers:
output_paths.append(single_process.CollectProfile())
return output_paths
| bsd-3-clause |
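The sample profiler above blocks on `util.WaitFor(IsStarted, 120)` until the subprocess reports that sampling has begun. A minimal polling helper in that spirit (a sketch, not Telemetry's actual `util.WaitFor` implementation):

```python
import time

def wait_for(condition, timeout_s, poll_interval_s=0.01):
    """Poll `condition` until it returns a truthy value or `timeout_s`
    elapses, raising on timeout. Returns the truthy result."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval_s)
    raise RuntimeError('condition not met within %s seconds' % timeout_s)

state = {'calls': 0}
def ready():
    # Simulates a condition that only becomes true on the third poll.
    state['calls'] += 1
    return state['calls'] >= 3

print(wait_for(ready, 5))  # True
```

Note that `IsStarted` in the profiler also raises `ProfilingException` from inside the condition when the subprocess output signals failure; the polling loop simply propagates such exceptions, so errors surface immediately instead of waiting out the timeout.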
rosmo/ansible-modules-core | inventory/add_host.py | 154 | 2000 | # -*- mode: python -*-
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: add_host
short_description: add a host (and alternatively a group) to the ansible-playbook in-memory inventory
description:
- Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Takes variables so you can define the new hosts more fully.
version_added: "0.9"
options:
name:
aliases: [ 'hostname', 'host' ]
description:
- The hostname/ip of the host to add to the inventory, can include a colon and a port number.
required: true
groups:
aliases: [ 'groupname', 'group' ]
description:
- The groups to add the hostname to, comma separated.
required: false
notes:
- This module bypasses the play host loop and only runs once for all the hosts in the play, if you need it
to iterate use a with\_ directive.
author:
- "Ansible Core Team"
- "Seth Vidal"
'''
EXAMPLES = '''
# add host to group 'just_created' with variable foo=42
- add_host: name={{ ip_from_ec2 }} groups=just_created foo=42
# add a host with a non-standard port local to your machines
- add_host: name={{ new_ip }}:{{ new_port }}
# add a host alias that we reach through a tunnel
- add_host: hostname={{ new_ip }}
ansible_ssh_host={{ inventory_hostname }}
ansible_ssh_port={{ new_port }}
'''
| gpl-3.0 |
PRIMEDesigner15/PRIMEDesigner15 | Test_files/dependencies/Lib/collections/__init__.py | 625 | 25849 | #__all__ = ['deque', 'defaultdict', 'Counter']
from _collections import deque, defaultdict
#from itertools import repeat as _repeat, chain as _chain, starmap as _starmap
__all__ = ['deque', 'defaultdict', 'namedtuple', 'UserDict', 'UserList',
'UserString', 'Counter', 'OrderedDict']
# For bootstrapping reasons, the collection ABCs are defined in _abcoll.py.
# They should however be considered an integral part of collections.py.
# fixme brython.. there is an issue with _abcoll
#from _abcoll import *
#from _abcoll import Set
from _abcoll import MutableMapping
#import _abcoll
#__all__ += _abcoll.__all__
from collections.abc import *
import collections.abc
__all__ += collections.abc.__all__
from _collections import deque, defaultdict, namedtuple
from operator import itemgetter as _itemgetter
from keyword import iskeyword as _iskeyword
import sys as _sys
import heapq as _heapq
#fixme brython
#from weakref import proxy as _proxy
from itertools import repeat as _repeat, chain as _chain, starmap as _starmap
from reprlib import recursive_repr as _recursive_repr
class Set(set):
pass
class Sequence(list):
pass
def _proxy(obj):
return obj
################################################################################
### OrderedDict
################################################################################
class _Link(object):
__slots__ = 'prev', 'next', 'key', '__weakref__'
class OrderedDict(dict):
'Dictionary that remembers insertion order'
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# The sentinel is in self.__hardroot with a weakref proxy in self.__root.
# The prev links are weakref proxies (to prevent circular references).
# Individual links are kept alive by the hard reference in self.__map.
# Those hard references disappear when a key is deleted from an OrderedDict.
def __init__(self, *args, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries, but keyword arguments are not recommended because
their insertion order is arbitrary.
'''
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
self.__hardroot = _Link()
self.__root = root = _proxy(self.__hardroot)
root.prev = root.next = root
self.__map = {}
self.__update(*args, **kwds)
def __setitem__(self, key, value,
dict_setitem=dict.__setitem__, proxy=_proxy, Link=_Link):
'od.__setitem__(i, y) <==> od[i]=y'
# Setting a new item creates a new link at the end of the linked list,
# and the inherited dictionary is updated with the new key/value pair.
if key not in self:
self.__map[key] = link = Link()
root = self.__root
last = root.prev
link.prev, link.next, link.key = last, root, key
last.next = link
root.prev = proxy(link)
dict_setitem(self, key, value)
def __delitem__(self, key, dict_delitem=dict.__delitem__):
'od.__delitem__(y) <==> del od[y]'
# Deleting an existing item uses self.__map to find the link which gets
# removed by updating the links in the predecessor and successor nodes.
dict_delitem(self, key)
link = self.__map.pop(key)
link_prev = link.prev
link_next = link.next
link_prev.next = link_next
link_next.prev = link_prev
def __iter__(self):
'od.__iter__() <==> iter(od)'
# Traverse the linked list in order.
root = self.__root
curr = root.next
while curr is not root:
yield curr.key
curr = curr.next
def __reversed__(self):
'od.__reversed__() <==> reversed(od)'
# Traverse the linked list in reverse order.
root = self.__root
curr = root.prev
while curr is not root:
yield curr.key
curr = curr.prev
def clear(self):
'od.clear() -> None. Remove all items from od.'
root = self.__root
root.prev = root.next = root
self.__map.clear()
dict.clear(self)
def popitem(self, last=True):
'''od.popitem() -> (k, v), return and remove a (key, value) pair.
Pairs are returned in LIFO order if last is true or FIFO order if false.
'''
if not self:
raise KeyError('dictionary is empty')
root = self.__root
if last:
link = root.prev
link_prev = link.prev
link_prev.next = root
root.prev = link_prev
else:
link = root.next
link_next = link.next
root.next = link_next
link_next.prev = root
key = link.key
del self.__map[key]
value = dict.pop(self, key)
return key, value
def move_to_end(self, key, last=True):
'''Move an existing element to the end (or beginning if last==False).
Raises KeyError if the element does not exist.
When last=True, acts like a fast version of self[key]=self.pop(key).
'''
link = self.__map[key]
link_prev = link.prev
link_next = link.next
link_prev.next = link_next
link_next.prev = link_prev
root = self.__root
if last:
last = root.prev
link.prev = last
link.next = root
last.next = root.prev = link
else:
first = root.next
link.prev = root
link.next = first
root.next = first.prev = link
def __sizeof__(self):
sizeof = _sys.getsizeof
n = len(self) + 1 # number of links including root
size = sizeof(self.__dict__) # instance dictionary
size += sizeof(self.__map) * 2 # internal dict and inherited dict
size += sizeof(self.__hardroot) * n # link objects
size += sizeof(self.__root) * n # proxy objects
return size
#fixme brython.. Issue with _abcoll, which contains MutableMapping
update = __update = MutableMapping.update
keys = MutableMapping.keys
values = MutableMapping.values
items = MutableMapping.items
__ne__ = MutableMapping.__ne__
__marker = object()
def pop(self, key, default=__marker):
'''od.pop(k[,d]) -> v, remove specified key and return the corresponding
value. If key is not found, d is returned if given, otherwise KeyError
is raised.
'''
if key in self:
result = self[key]
del self[key]
return result
if default is self.__marker:
raise KeyError(key)
return default
def setdefault(self, key, default=None):
'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
if key in self:
return self[key]
self[key] = default
return default
#fixme, brython issue
#@_recursive_repr()
def __repr__(self):
'od.__repr__() <==> repr(od)'
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self.items()))
def __reduce__(self):
'Return state information for pickling'
items = [[k, self[k]] for k in self]
inst_dict = vars(self).copy()
for k in vars(OrderedDict()):
inst_dict.pop(k, None)
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def copy(self):
'od.copy() -> a shallow copy of od'
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
'''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
If not specified, the value defaults to None.
'''
self = cls()
for key in iterable:
self[key] = value
return self
def __eq__(self, other):
'''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
while comparison to a regular mapping is order-insensitive.
'''
if isinstance(other, OrderedDict):
return len(self)==len(other) and \
all(p==q for p, q in zip(self.items(), other.items()))
return dict.__eq__(self, other)
########################################################################
### Counter
########################################################################
def _count_elements(mapping, iterable):
'Tally elements from the iterable.'
mapping_get = mapping.get
for elem in iterable:
mapping[elem] = mapping_get(elem, 0) + 1
#try: # Load C helper function if available
# from _collections import _count_elements
#except ImportError:
# pass
class Counter(dict):
'''Dict subclass for counting hashable items. Sometimes called a bag
or multiset. Elements are stored as dictionary keys and their counts
are stored as dictionary values.
>>> c = Counter('abcdeabcdabcaba') # count elements from a string
>>> c.most_common(3) # three most common elements
[('a', 5), ('b', 4), ('c', 3)]
>>> sorted(c) # list all unique elements
['a', 'b', 'c', 'd', 'e']
>>> ''.join(sorted(c.elements())) # list elements with repetitions
'aaaaabbbbcccdde'
>>> sum(c.values()) # total of all counts
15
>>> c['a'] # count of letter 'a'
5
>>> for elem in 'shazam': # update counts from an iterable
... c[elem] += 1 # by adding 1 to each element's count
>>> c['a'] # now there are seven 'a'
7
>>> del c['b'] # remove all 'b'
>>> c['b'] # now there are zero 'b'
0
>>> d = Counter('simsalabim') # make another counter
>>> c.update(d) # add in the second counter
>>> c['a'] # now there are nine 'a'
9
>>> c.clear() # empty the counter
>>> c
Counter()
Note: If a count is set to zero or reduced to zero, it will remain
in the counter until the entry is deleted or the counter is cleared:
>>> c = Counter('aaabbc')
>>> c['b'] -= 2 # reduce the count of 'b' by two
>>> c.most_common() # 'b' is still in, but its count is zero
[('a', 3), ('c', 1), ('b', 0)]
'''
# References:
# http://en.wikipedia.org/wiki/Multiset
# http://www.gnu.org/software/smalltalk/manual-base/html_node/Bag.html
    # http://www.java2s.com/Tutorial/Cpp/0380__set-multiset/Catalog0380__set-multiset.htm
# http://code.activestate.com/recipes/259174/
# Knuth, TAOCP Vol. II section 4.6.3
def __init__(self, iterable=None, **kwds):
'''Create a new, empty Counter object. And if given, count elements
from an input iterable. Or, initialize the count from another mapping
of elements to their counts.
>>> c = Counter() # a new, empty counter
>>> c = Counter('gallahad') # a new counter from an iterable
>>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
>>> c = Counter(a=4, b=2) # a new counter from keyword args
'''
#super().__init__() #BE modified since super not supported
dict.__init__(self)
self.update(iterable, **kwds)
def __missing__(self, key):
'The count of elements not in the Counter is zero.'
# Needed so that self[missing_item] does not raise KeyError
return 0
def most_common(self, n=None):
'''List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
>>> Counter('abcdeabcdabcaba').most_common(3)
[('a', 5), ('b', 4), ('c', 3)]
'''
# Emulate Bag.sortedByCount from Smalltalk
if n is None:
return sorted(self.items(), key=_itemgetter(1), reverse=True)
return _heapq.nlargest(n, self.items(), key=_itemgetter(1))
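# most_common() above dispatches between a full sort (n is None) and a
# heap-based partial selection; a quick sketch of the same dispatch using
# the stdlib names directly (CPython's Counter, which this class mirrors):

```python
import heapq
from operator import itemgetter
from collections import Counter

c = Counter('abcdeabcdabcaba')
# Full sort would be sorted(c.items(), key=itemgetter(1), reverse=True);
# for a bounded n, heapq.nlargest selects the top n without sorting everything.
top3 = heapq.nlargest(3, c.items(), key=itemgetter(1))
print(top3)  # [('a', 5), ('b', 4), ('c', 3)]
```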
def elements(self):
'''Iterator over elements repeating each as many times as its count.
>>> c = Counter('ABCABC')
>>> sorted(c.elements())
['A', 'A', 'B', 'B', 'C', 'C']
# Knuth's example for prime factors of 1836: 2**2 * 3**3 * 17**1
>>> prime_factors = Counter({2: 2, 3: 3, 17: 1})
>>> product = 1
>>> for factor in prime_factors.elements(): # loop over factors
... product *= factor # and multiply them
>>> product
1836
Note, if an element's count has been set to zero or is a negative
number, elements() will ignore it.
'''
# Emulate Bag.do from Smalltalk and Multiset.begin from C++.
return _chain.from_iterable(_starmap(_repeat, self.items()))
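# The one-liner above chains lazy repeaters rather than building a list;
# a small sketch spelling out the same pipeline with the stdlib itertools
# names (chain, starmap, repeat) that _chain/_starmap/_repeat alias:

```python
from itertools import chain, repeat, starmap
from collections import Counter

c = Counter('ABCABC')
# starmap(repeat, c.items()) turns the pair ('A', 2) into repeat('A', 2),
# and chain.from_iterable flattens those repeaters into one element stream.
elems = sorted(chain.from_iterable(starmap(repeat, c.items())))
print(elems)  # ['A', 'A', 'B', 'B', 'C', 'C']
```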
# Override dict methods where necessary
@classmethod
def fromkeys(cls, iterable, v=None):
# There is no equivalent method for counters because setting v=1
# means that no element can have a count greater than one.
raise NotImplementedError(
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
def update(self, iterable=None, **kwds):
'''Like dict.update() but add counts instead of replacing them.
Source can be an iterable, a dictionary, or another Counter instance.
>>> c = Counter('which')
>>> c.update('witch') # add elements from another iterable
>>> d = Counter('watch')
>>> c.update(d) # add elements from another counter
>>> c['h'] # four 'h' in which, witch, and watch
4
'''
# The regular dict.update() operation makes no sense here because the
# replace behavior results in some of the original untouched counts
# being mixed in with all of the other counts for a mishmash that
# doesn't have a straightforward interpretation in most counting
# contexts. Instead, we implement straight addition. Both the inputs
# and outputs are allowed to contain zero and negative counts.
if iterable is not None:
if isinstance(iterable, Mapping):
if self:
self_get = self.get
for elem, count in iterable.items():
self[elem] = count + self_get(elem, 0)
else:
super().update(iterable) # fast path when counter is empty
else:
_count_elements(self, iterable)
if kwds:
self.update(kwds)
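# The straight-addition semantics described in the comment above can be
# contrasted with plain dict.update(); a minimal sketch using the stdlib
# collections.Counter, which behaves the same way as this implementation:

```python
from collections import Counter

c = Counter(a=1, b=2)
c.update(Counter(a=3))           # counts add: a -> 1 + 3 = 4
plain = dict(a=1, b=2)
plain.update(dict(a=3))          # dict semantics replace: a -> 3
print(c['a'], plain['a'])        # 4 3
```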
def subtract(self, iterable=None, **kwds):
'''Like dict.update() but subtracts counts instead of replacing them.
Counts can be reduced below zero. Both the inputs and outputs are
allowed to contain zero and negative counts.
Source can be an iterable, a dictionary, or another Counter instance.
>>> c = Counter('which')
>>> c.subtract('witch') # subtract elements from another iterable
>>> c.subtract(Counter('watch')) # subtract elements from another counter
>>> c['h'] # 2 in which, minus 1 in witch, minus 1 in watch
0
>>> c['w'] # 1 in which, minus 1 in witch, minus 1 in watch
-1
'''
if iterable is not None:
self_get = self.get
if isinstance(iterable, Mapping):
for elem, count in iterable.items():
self[elem] = self_get(elem, 0) - count
else:
for elem in iterable:
self[elem] = self_get(elem, 0) - 1
if kwds:
self.subtract(kwds)
def copy(self):
'Return a shallow copy.'
return self.__class__(self)
def __reduce__(self):
return self.__class__, (dict(self),)
def __delitem__(self, elem):
'Like dict.__delitem__() but does not raise KeyError for missing values.'
if elem in self:
super().__delitem__(elem)
def __repr__(self):
if not self:
return '%s()' % self.__class__.__name__
try:
items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
return '%s({%s})' % (self.__class__.__name__, items)
except TypeError:
# handle case where values are not orderable
return '{0}({1!r})'.format(self.__class__.__name__, dict(self))
# Multiset-style mathematical operations discussed in:
# Knuth TAOCP Volume II section 4.6.3 exercise 19
# and at http://en.wikipedia.org/wiki/Multiset
#
# Outputs guaranteed to only include positive counts.
#
# To strip negative and zero counts, add-in an empty counter:
# c += Counter()
def __add__(self, other):
'''Add counts from two counters.
>>> Counter('abbb') + Counter('bcc')
Counter({'b': 4, 'c': 2, 'a': 1})
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem, count in self.items():
newcount = count + other[elem]
if newcount > 0:
result[elem] = newcount
for elem, count in other.items():
if elem not in self and count > 0:
result[elem] = count
return result
def __sub__(self, other):
''' Subtract count, but keep only results with positive counts.
>>> Counter('abbbc') - Counter('bccd')
Counter({'b': 2, 'a': 1})
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem, count in self.items():
newcount = count - other[elem]
if newcount > 0:
result[elem] = newcount
for elem, count in other.items():
if elem not in self and count < 0:
result[elem] = 0 - count
return result
def __or__(self, other):
'''Union is the maximum of value in either of the input counters.
>>> Counter('abbb') | Counter('bcc')
Counter({'b': 3, 'c': 2, 'a': 1})
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem, count in self.items():
other_count = other[elem]
newcount = other_count if count < other_count else count
if newcount > 0:
result[elem] = newcount
for elem, count in other.items():
if elem not in self and count > 0:
result[elem] = count
return result
def __and__(self, other):
''' Intersection is the minimum of corresponding counts.
>>> Counter('abbb') & Counter('bcc')
Counter({'b': 1})
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem, count in self.items():
other_count = other[elem]
newcount = count if count < other_count else other_count
if newcount > 0:
result[elem] = newcount
return result
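# The four multiset operators defined above (and the "add an empty counter"
# trick from the section comment) can be exercised with the stdlib
# collections.Counter, which implements the same semantics:

```python
from collections import Counter

a, b = Counter('abbb'), Counter('bcc')
print(a + b)   # Counter({'b': 4, 'c': 2, 'a': 1})  -- counts add
print(a - b)   # Counter({'b': 2, 'a': 1})          -- only positive results kept
print(a | b)   # Counter({'b': 3, 'c': 2, 'a': 1})  -- elementwise max
print(a & b)   # Counter({'b': 1})                  -- elementwise min
# Adding an empty counter strips zero and negative counts:
c = Counter(x=-2, y=1)
c += Counter()
print(c)       # Counter({'y': 1})
```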
########################################################################
### ChainMap (helper for configparser)
########################################################################
class ChainMap(MutableMapping):
''' A ChainMap groups multiple dicts (or other mappings) together
to create a single, updateable view.
The underlying mappings are stored in a list. That list is public and can be
accessed or updated using the *maps* attribute. There is no other state.
Lookups search the underlying mappings successively until a key is found.
In contrast, writes, updates, and deletions only operate on the first
mapping.
'''
def __init__(self, *maps):
'''Initialize a ChainMap by setting *maps* to the given mappings.
If no mappings are provided, a single empty dictionary is used.
'''
self.maps = list(maps) or [{}] # always at least one map
def __missing__(self, key):
raise KeyError(key)
def __getitem__(self, key):
for mapping in self.maps:
try:
return mapping[key] # can't use 'key in mapping' with defaultdict
except KeyError:
pass
return self.__missing__(key) # support subclasses that define __missing__
def get(self, key, default=None):
return self[key] if key in self else default
def __len__(self):
return len(set().union(*self.maps)) # reuses stored hash values if possible
def __iter__(self):
return iter(set().union(*self.maps))
def __contains__(self, key):
return any(key in m for m in self.maps)
def __bool__(self):
return any(self.maps)
#fixme, brython
#@_recursive_repr()
def __repr__(self):
return '{0.__class__.__name__}({1})'.format(
self, ', '.join(map(repr, self.maps)))
# brython workaround: this second __repr__ intentionally shadows the
# decorated version above until _recursive_repr is supported.
def __repr__(self):
return ','.join(str(_map) for _map in self.maps)
@classmethod
def fromkeys(cls, iterable, *args):
'Create a ChainMap with a single dict created from the iterable.'
return cls(dict.fromkeys(iterable, *args))
def copy(self):
'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]'
return self.__class__(self.maps[0].copy(), *self.maps[1:])
__copy__ = copy
def new_child(self): # like Django's Context.push()
'New ChainMap with a new dict followed by all previous maps.'
return self.__class__({}, *self.maps)
@property
def parents(self): # like Django's Context.pop()
'New ChainMap from maps[1:].'
return self.__class__(*self.maps[1:])
def __setitem__(self, key, value):
self.maps[0][key] = value
def __delitem__(self, key):
try:
del self.maps[0][key]
except KeyError:
raise KeyError('Key not found in the first mapping: {!r}'.format(key))
def popitem(self):
'Remove and return an item pair from maps[0]. Raise KeyError if maps[0] is empty.'
try:
return self.maps[0].popitem()
except KeyError:
raise KeyError('No keys found in the first mapping.')
def pop(self, key, *args):
'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].'
try:
return self.maps[0].pop(key, *args)
except KeyError:
#raise KeyError('Key not found in the first mapping: {!r}'.format(key))
raise KeyError('Key not found in the first mapping: %s' % key)
def clear(self):
'Clear maps[0], leaving maps[1:] intact.'
self.maps[0].clear()
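# The lookup-falls-through / writes-hit-the-first-map behavior documented in
# the class docstring, plus new_child(), in one short sketch using the stdlib
# collections.ChainMap (same semantics as the class above):

```python
from collections import ChainMap

defaults = {'color': 'red', 'user': 'guest'}
overrides = {'color': 'blue'}
cm = ChainMap(overrides, defaults)
print(cm['color'])       # 'blue'  -- first mapping wins
print(cm['user'])        # 'guest' -- falls through to defaults
cm['user'] = 'admin'     # writes and deletes touch only maps[0]
print(defaults['user'])  # still 'guest'; the write landed in overrides
child = cm.new_child()   # push a fresh dict in front, like a new scope
child['color'] = 'green'
print(cm['color'])       # parent chain unchanged: 'blue'
```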
################################################################################
### UserDict
################################################################################
class UserDict(MutableMapping):
# Start by filling-out the abstract methods
def __init__(self, dict=None, **kwargs):
self.data = {}
if dict is not None:
self.update(dict)
if len(kwargs):
self.update(kwargs)
def __len__(self): return len(self.data)
def __getitem__(self, key):
if key in self.data:
return self.data[key]
if hasattr(self.__class__, "__missing__"):
return self.__class__.__missing__(self, key)
raise KeyError(key)
def __setitem__(self, key, item): self.data[key] = item
def __delitem__(self, key): del self.data[key]
def __iter__(self):
return iter(self.data)
# Modify __contains__ to work correctly when __missing__ is present
def __contains__(self, key):
return key in self.data
# Now, add the methods in dicts but not in MutableMapping
def __repr__(self): return repr(self.data)
def copy(self):
if self.__class__ is UserDict:
return UserDict(self.data.copy())
import copy
data = self.data
try:
self.data = {}
c = copy.copy(self)
finally:
self.data = data
c.update(self)
return c
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
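# Because UserDict routes every access through __getitem__/__setitem__
# (unlike a direct dict subclass, where e.g. update() bypasses them),
# subclass hooks apply consistently. A small illustrative subclass -- the
# LowerDict name is invented here for the example -- using the stdlib
# collections.UserDict:

```python
from collections import UserDict

class LowerDict(UserDict):
    # Normalise keys to lowercase on both store and lookup.
    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)
    def __getitem__(self, key):
        return super().__getitem__(key.lower())

d = LowerDict(HELLO=1)   # __init__ funnels kwargs through __setitem__
print(d['hello'], d['HELLO'], len(d))  # 1 1 1
```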
################################################################################
### UserList
################################################################################
################################################################################
### UserString
################################################################################
| bsd-3-clause |
plasma-umass/DoubleTake | tests/multithreading/phoenixparsec/run.py | 2 | 1210 | #!/usr/bin/python
import os
import sys
import subprocess
import re
#all_benchmarks = os.listdir('tests')
#all_benchmarks.remove('Makefile')
#all_benchmarks.remove('defines.mk')
#all_benchmarks.sort()
#all_benchmarks = os.listdir('tests')
#all_benchmarks.remove('Makefile')
#all_benchmarks.remove('defines.mk')
all_benchmarks = ['blackscholes', 'bodytrack', 'dedup', 'ferret', 'fluidanimate', 'histogram', 'kmeans','linear_regression', 'reverse_index', 'matrix_multiply', 'pca', 'streamcluster', 'string_match', 'swaptions', 'word_count', 'x264']
#all_configs = ['pthread', 'dmp_o', 'dmp_b', 'dthread']
all_configs = ['pthread', 'doubletake']
#all_configs = ['defaults']
runs = 4
cores = '8'
benchmarks = all_benchmarks
configs = all_configs
data = {}
try:
for benchmark in benchmarks:
data[benchmark] = {}
for config in configs:
data[benchmark][config] = []
for n in range(0, runs):
print 'Running '+benchmark+'.'+config
os.chdir('tests/'+benchmark)
p = subprocess.Popen(['make', 'eval-'+config, 'NCORES='+str(cores)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
os.chdir('../..')
except:
print 'Aborted!'
for benchmark in benchmarks:
print benchmark;
| mit |
cysnake4713/wechatpy | wechatpy/enterprise/events.py | 12 | 4012 | # -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
from wechatpy.fields import IntegerField, BaseField
from wechatpy import events
EVENT_TYPES = {}
def register_event(event_type):
def register(cls):
EVENT_TYPES[event_type] = cls
return cls
return register
@register_event('subscribe')
class SubscribeEvent(events.SubscribeEvent):
"""
成员关注事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('unsubscribe')
class UnsubscribeEvent(events.UnsubscribeEvent):
"""
成员取消关注事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('click')
class ClickEvent(events.ClickEvent):
"""
点击菜单拉取消息事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('view')
class ViewEvent(events.ViewEvent):
"""
点击菜单跳转链接事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('location')
class LocationEvent(events.LocationEvent):
"""
上报地理位置事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('scancode_push')
class ScanCodePushEvent(events.ScanCodePushEvent):
"""
扫码推事件的事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('scancode_waitmsg')
class ScanCodeWaitMsgEvent(events.ScanCodeWaitMsgEvent):
"""
扫码推事件且弹出“消息接收中”提示框的事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('pic_sysphoto')
class PicSysPhotoEvent(events.PicSysPhotoEvent):
"""
弹出系统拍照发图事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('pic_photo_or_album')
class PicPhotoOrAlbumEvent(events.PicPhotoOrAlbumEvent):
"""
弹出拍照或相册发图事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('pic_weixin')
class PicWeChatEvent(events.PicWeChatEvent):
"""
弹出微信相册发图器事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('location_select')
class LocationSelectEvent(events.LocationSelectEvent):
"""
弹出地理位置选择器事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
@register_event('enter_agent')
class EnterAgentEvent(events.BaseEvent):
"""
用户进入应用的事件推送
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
agent = IntegerField('AgentID', 0)
event = 'enter_agent'
@register_event('batch_job_result')
class BatchJobResultEvent(events.BaseEvent):
"""
异步任务完成事件
详情请参阅
http://qydev.weixin.qq.com/wiki/index.php?title=接受事件
"""
event = 'batch_job_result'
batch_job = BaseField('BatchJob')
@property
def job_id(self):
return self.batch_job['JobId']
@property
def job_type(self):
return self.batch_job['JobType']
@property
def err_code(self):
return self.batch_job['ErrCode']
@property
def err_msg(self):
return self.batch_job['ErrMsg']
| mit |
shupelneker/gae_new_structure | lib/tests.py | 23 | 1624 | '''
Run the tests using testrunner.py script in the project root directory.
Usage: testrunner.py SDK_PATH TEST_PATH
Run unit tests for App Engine apps.
SDK_PATH Path to the SDK installation
TEST_PATH Path to package containing test modules
Options:
-h, --help show this help message and exit
'''
import unittest
from google.appengine.ext import testbed
import webapp2
from boilerplate import config as boilerplate_config
from boilerplate.lib import i18n
class I18nTest(unittest.TestCase):
def setUp(self):
webapp2_config = boilerplate_config.config
# create a WSGI application.
self.app = webapp2.WSGIApplication(config=webapp2_config)
# activate GAE stubs
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_datastore_v3_stub()
self.testbed.init_memcache_stub()
self.testbed.init_urlfetch_stub()
self.testbed.init_taskqueue_stub()
self.testbed.init_mail_stub()
self.mail_stub = self.testbed.get_stub(testbed.MAIL_SERVICE_NAME)
self.taskqueue_stub = self.testbed.get_stub(testbed.TASKQUEUE_SERVICE_NAME)
self.testbed.init_user_stub()
def tearDown(self):
self.testbed.deactivate()
def test_disable_i18n(self):
self.app.config['locales'] = []
locale = i18n.set_locale(self)
self.assertEqual(locale, None)
self.app.config['locales'] = None
locale = i18n.set_locale(self)
self.assertEqual(locale, None)
if __name__ == "__main__":
unittest.main() | lgpl-3.0 |
agileblaze/OpenStackTwoFactorAuthentication | horizon/openstack_dashboard/test/integration_tests/tests/test_flavors.py | 52 | 1522 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack_dashboard.test.integration_tests import helpers
class TestFlavors(helpers.AdminTestCase):
FLAVOR_NAME = helpers.gen_random_resource_name("flavor")
def test_flavor_create(self):
"""tests the flavor creation and deletion functionalities:
* creates a new flavor
* verifies the flavor appears in the flavors table
* deletes the newly created flavor
* verifies the flavor does not appear in the table after deletion
"""
flavors_page = self.home_pg.go_to_system_flavorspage()
flavors_page.create_flavor(name=self.FLAVOR_NAME, vcpus=1, ram=1024,
root_disk=20, ephemeral_disk=0,
swap_disk=0)
self.assertTrue(flavors_page.is_flavor_present(self.FLAVOR_NAME))
flavors_page.delete_flavor(self.FLAVOR_NAME)
self.assertFalse(flavors_page.is_flavor_present(self.FLAVOR_NAME))
| apache-2.0 |
ikaee/bfr-attendant | facerecognitionlibrary/jni-build/jni/include/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py | 52 | 7416 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Train and Eval the MNIST network.
This version is like fully_connected_feed.py but uses data converted
to a TFRecords file containing tf.train.Example protocol buffers.
See tensorflow/g3doc/how_tos/reading_data.md#reading-from-files
for context.
YOU MUST run convert_to_records before running this (but you only need to
run it once).
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os.path
import sys
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import mnist
# Basic model parameters as external flags.
FLAGS = None
# Constants used for dealing with the files, matches convert_to_records.
TRAIN_FILE = 'train.tfrecords'
VALIDATION_FILE = 'validation.tfrecords'
def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
})
# Convert from a scalar string tensor (whose single string has
# length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
# [mnist.IMAGE_PIXELS].
image = tf.decode_raw(features['image_raw'], tf.uint8)
image.set_shape([mnist.IMAGE_PIXELS])
# OPTIONAL: Could reshape into a 28x28 image and apply distortions
# here. Since we are not applying any distortions in this
# example, and the next step expects the image to be flattened
# into a vector, we don't bother.
# Convert from [0, 255] -> [-0.5, 0.5] floats.
image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
# Convert label from a scalar uint8 tensor to an int32 scalar.
label = tf.cast(features['label'], tf.int32)
return image, label
def inputs(train, batch_size, num_epochs):
"""Reads input data num_epochs times.
Args:
train: Selects between the training (True) and validation (False) data.
batch_size: Number of examples per returned batch.
num_epochs: Number of times to read the input data, or 0/None to
train forever.
Returns:
A tuple (images, labels), where:
* images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
in the range [-0.5, 0.5].
* labels is an int32 tensor with shape [batch_size] with the true label,
a number in the range [0, mnist.NUM_CLASSES).
Note that an tf.train.QueueRunner is added to the graph, which
must be run using e.g. tf.train.start_queue_runners().
"""
if not num_epochs: num_epochs = None
filename = os.path.join(FLAGS.train_dir,
TRAIN_FILE if train else VALIDATION_FILE)
with tf.name_scope('input'):
filename_queue = tf.train.string_input_producer(
[filename], num_epochs=num_epochs)
# Even when reading in multiple threads, share the filename
# queue.
image, label = read_and_decode(filename_queue)
# Shuffle the examples and collect them into batch_size batches.
# (Internally uses a RandomShuffleQueue.)
# We run this in two threads to avoid being a bottleneck.
images, sparse_labels = tf.train.shuffle_batch(
[image, label], batch_size=batch_size, num_threads=2,
capacity=1000 + 3 * batch_size,
# Ensures a minimum amount of shuffling of examples.
min_after_dequeue=1000)
return images, sparse_labels
def run_training():
"""Train MNIST for a number of steps."""
# Tell TensorFlow that the model will be built into the default Graph.
with tf.Graph().as_default():
# Input images and labels.
images, labels = inputs(train=True, batch_size=FLAGS.batch_size,
num_epochs=FLAGS.num_epochs)
# Build a Graph that computes predictions from the inference model.
logits = mnist.inference(images,
FLAGS.hidden1,
FLAGS.hidden2)
# Add to the Graph the loss calculation.
loss = mnist.loss(logits, labels)
# Add to the Graph operations that train the model.
train_op = mnist.training(loss, FLAGS.learning_rate)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (the trained variables and the
# epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
step = 0
while not coord.should_stop():
start_time = time.time()
# Run one step of the model. The return values are
# the activations from the `train_op` (which is
# discarded) and the `loss` op. To inspect the values
# of your ops or variables, you may include them in
# the list passed to sess.run() and the value tensors
# will be returned in the tuple from the call.
_, loss_value = sess.run([train_op, loss])
duration = time.time() - start_time
# Print an overview fairly often.
if step % 100 == 0:
print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value,
duration))
step += 1
except tf.errors.OutOfRangeError:
print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
def main(_):
run_training()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--learning_rate',
type=float,
default=0.01,
help='Initial learning rate.'
)
parser.add_argument(
'--num_epochs',
type=int,
default=2,
help='Number of epochs to run trainer.'
)
parser.add_argument(
'--hidden1',
type=int,
default=128,
help='Number of units in hidden layer 1.'
)
parser.add_argument(
'--hidden2',
type=int,
default=32,
help='Number of units in hidden layer 2.'
)
parser.add_argument(
'--batch_size',
type=int,
default=100,
help='Batch size.'
)
parser.add_argument(
'--train_dir',
type=str,
default='/tmp/data',
help='Directory with the training data.'
)
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
| apache-2.0 |
BackupGGCode/python-for-android | python-build/python-libs/gdata/src/gdata/tlslite/utils/Python_RSAKey.py | 239 | 7707 | """Pure-Python RSA implementation."""
from cryptomath import *
import xmltools
from ASN1Parser import ASN1Parser
from RSAKey import *
class Python_RSAKey(RSAKey):
def __init__(self, n=0, e=0, d=0, p=0, q=0, dP=0, dQ=0, qInv=0):
if (n and not e) or (e and not n):
raise AssertionError()
self.n = n
self.e = e
self.d = d
self.p = p
self.q = q
self.dP = dP
self.dQ = dQ
self.qInv = qInv
self.blinder = 0
self.unblinder = 0
def hasPrivateKey(self):
return self.d != 0
def hash(self):
s = self.writeXMLPublicKey('\t\t')
return hashAndBase64(s.strip())
def _rawPrivateKeyOp(self, m):
#Create blinding values, on the first pass:
if not self.blinder:
self.unblinder = getRandomNumber(2, self.n)
self.blinder = powMod(invMod(self.unblinder, self.n), self.e,
self.n)
#Blind the input
m = (m * self.blinder) % self.n
#Perform the RSA operation
c = self._rawPrivateKeyOpHelper(m)
#Unblind the output
c = (c * self.unblinder) % self.n
#Update blinding values
self.blinder = (self.blinder * self.blinder) % self.n
self.unblinder = (self.unblinder * self.unblinder) % self.n
#Return the output
return c
def _rawPrivateKeyOpHelper(self, m):
#Non-CRT version
#c = powMod(m, self.d, self.n)
#CRT version (~3x faster)
s1 = powMod(m, self.dP, self.p)
s2 = powMod(m, self.dQ, self.q)
h = ((s1 - s2) * self.qInv) % self.p
c = s2 + self.q * h
return c
def _rawPublicKeyOp(self, c):
m = powMod(c, self.e, self.n)
return m
def acceptsPassword(self): return False
def write(self, indent=''):
if self.d:
s = indent+'<privateKey xmlns="http://trevp.net/rsa">\n'
else:
s = indent+'<publicKey xmlns="http://trevp.net/rsa">\n'
s += indent+'\t<n>%s</n>\n' % numberToBase64(self.n)
s += indent+'\t<e>%s</e>\n' % numberToBase64(self.e)
if self.d:
s += indent+'\t<d>%s</d>\n' % numberToBase64(self.d)
s += indent+'\t<p>%s</p>\n' % numberToBase64(self.p)
s += indent+'\t<q>%s</q>\n' % numberToBase64(self.q)
s += indent+'\t<dP>%s</dP>\n' % numberToBase64(self.dP)
s += indent+'\t<dQ>%s</dQ>\n' % numberToBase64(self.dQ)
s += indent+'\t<qInv>%s</qInv>\n' % numberToBase64(self.qInv)
s += indent+'</privateKey>'
else:
s += indent+'</publicKey>'
#Only add \n if part of a larger structure
if indent != '':
s += '\n'
return s
def writeXMLPublicKey(self, indent=''):
return Python_RSAKey(self.n, self.e).write(indent)
def generate(bits):
key = Python_RSAKey()
p = getRandomPrime(bits/2, False)
q = getRandomPrime(bits/2, False)
t = lcm(p-1, q-1)
key.n = p * q
key.e = 3L #Needed to be long, for Java
key.d = invMod(key.e, t)
key.p = p
key.q = q
key.dP = key.d % (p-1)
key.dQ = key.d % (q-1)
key.qInv = invMod(q, p)
return key
generate = staticmethod(generate)
def parsePEM(s, passwordCallback=None):
"""Parse a string containing a <privateKey> or <publicKey>, or
PEM-encoded key."""
start = s.find("-----BEGIN PRIVATE KEY-----")
if start != -1:
end = s.find("-----END PRIVATE KEY-----")
if end == -1:
raise SyntaxError("Missing PEM Postfix")
s = s[start+len("-----BEGIN PRIVATE KEY -----") : end]
bytes = base64ToBytes(s)
return Python_RSAKey._parsePKCS8(bytes)
else:
start = s.find("-----BEGIN RSA PRIVATE KEY-----")
if start != -1:
end = s.find("-----END RSA PRIVATE KEY-----")
if end == -1:
raise SyntaxError("Missing PEM Postfix")
s = s[start+len("-----BEGIN RSA PRIVATE KEY -----") : end]
bytes = base64ToBytes(s)
return Python_RSAKey._parseSSLeay(bytes)
raise SyntaxError("Missing PEM Prefix")
parsePEM = staticmethod(parsePEM)
def parseXML(s):
element = xmltools.parseAndStripWhitespace(s)
return Python_RSAKey._parseXML(element)
parseXML = staticmethod(parseXML)
def _parsePKCS8(bytes):
p = ASN1Parser(bytes)
version = p.getChild(0).value[0]
if version != 0:
raise SyntaxError("Unrecognized PKCS8 version")
rsaOID = p.getChild(1).value
if list(rsaOID) != [6, 9, 42, 134, 72, 134, 247, 13, 1, 1, 1, 5, 0]:
raise SyntaxError("Unrecognized AlgorithmIdentifier")
#Get the privateKey
privateKeyP = p.getChild(2)
#Adjust for OCTET STRING encapsulation
privateKeyP = ASN1Parser(privateKeyP.value)
return Python_RSAKey._parseASN1PrivateKey(privateKeyP)
_parsePKCS8 = staticmethod(_parsePKCS8)
def _parseSSLeay(bytes):
privateKeyP = ASN1Parser(bytes)
return Python_RSAKey._parseASN1PrivateKey(privateKeyP)
_parseSSLeay = staticmethod(_parseSSLeay)
def _parseASN1PrivateKey(privateKeyP):
version = privateKeyP.getChild(0).value[0]
if version != 0:
raise SyntaxError("Unrecognized RSAPrivateKey version")
n = bytesToNumber(privateKeyP.getChild(1).value)
e = bytesToNumber(privateKeyP.getChild(2).value)
d = bytesToNumber(privateKeyP.getChild(3).value)
p = bytesToNumber(privateKeyP.getChild(4).value)
q = bytesToNumber(privateKeyP.getChild(5).value)
dP = bytesToNumber(privateKeyP.getChild(6).value)
dQ = bytesToNumber(privateKeyP.getChild(7).value)
qInv = bytesToNumber(privateKeyP.getChild(8).value)
return Python_RSAKey(n, e, d, p, q, dP, dQ, qInv)
_parseASN1PrivateKey = staticmethod(_parseASN1PrivateKey)
def _parseXML(element):
try:
xmltools.checkName(element, "privateKey")
except SyntaxError:
xmltools.checkName(element, "publicKey")
#Parse attributes
xmltools.getReqAttribute(element, "xmlns", "http://trevp.net/rsa\Z")
xmltools.checkNoMoreAttributes(element)
#Parse public values (<n> and <e>)
n = base64ToNumber(xmltools.getText(xmltools.getChild(element, 0, "n"), xmltools.base64RegEx))
e = base64ToNumber(xmltools.getText(xmltools.getChild(element, 1, "e"), xmltools.base64RegEx))
d = 0
p = 0
q = 0
dP = 0
dQ = 0
qInv = 0
#Parse private values, if present
if element.childNodes.length>=3:
d = base64ToNumber(xmltools.getText(xmltools.getChild(element, 2, "d"), xmltools.base64RegEx))
p = base64ToNumber(xmltools.getText(xmltools.getChild(element, 3, "p"), xmltools.base64RegEx))
q = base64ToNumber(xmltools.getText(xmltools.getChild(element, 4, "q"), xmltools.base64RegEx))
dP = base64ToNumber(xmltools.getText(xmltools.getChild(element, 5, "dP"), xmltools.base64RegEx))
dQ = base64ToNumber(xmltools.getText(xmltools.getChild(element, 6, "dQ"), xmltools.base64RegEx))
qInv = base64ToNumber(xmltools.getText(xmltools.getLastChild(element, 7, "qInv"), xmltools.base64RegEx))
return Python_RSAKey(n, e, d, p, q, dP, dQ, qInv)
_parseXML = staticmethod(_parseXML)
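# The CRT shortcut in _rawPrivateKeyOpHelper above can be checked against a
# plain modular exponentiation with a toy (completely insecure) key. The
# values p=61, q=53, e=17 are arbitrary small primes chosen for the example;
# pow(x, -1, m) for the modular inverses requires Python 3.8+:

```python
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)
dP, dQ = d % (p - 1), d % (q - 1)   # the precomputed CRT exponents
qInv = pow(q, -1, p)

m = 65
s1 = pow(m, dP, p)                  # m^d mod p
s2 = pow(m, dQ, q)                  # m^d mod q
h = ((s1 - s2) * qInv) % p          # Garner recombination
c = s2 + q * h
assert c == pow(m, d, n)            # CRT result matches the direct computation
```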
| apache-2.0 |
h3biomed/ansible | test/units/utils/test_helpers.py | 197 | 1140 | # (c) 2015, Marius Gedminas <marius@gedmin.as>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import unittest
from ansible.utils.helpers import pct_to_int
class TestHelpers(unittest.TestCase):
def test_pct_to_int(self):
self.assertEqual(pct_to_int(1, 100), 1)
self.assertEqual(pct_to_int(-1, 100), -1)
self.assertEqual(pct_to_int("1%", 10), 1)
self.assertEqual(pct_to_int("1%", 10, 0), 0)
self.assertEqual(pct_to_int("1", 100), 1)
self.assertEqual(pct_to_int("10%", 100), 10)
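For illustration, the behaviour these assertions pin down could be implemented roughly as follows (a hypothetical sketch for reference, not Ansible's actual helper):

```python
def pct_to_int(value, num_items, min_value=1):
    """Convert a percentage string like '10%' into an absolute count.

    Plain numbers (or numeric strings) pass through as ints; percentage
    strings are scaled against num_items, falling back to min_value when
    the scaled result truncates to zero.
    """
    if isinstance(value, str) and value.endswith("%"):
        value_pct = int(value.replace("%", ""))
        return int((value_pct / 100.0) * num_items) or min_value
    return int(value)
```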
| gpl-3.0 |
RacerXx/GoAtThrottleUp | ServerRelay/cherrypy/test/modfcgid.py | 12 | 4268 | """Wrapper for mod_fcgid, for use as a CherryPy HTTP server when testing.
To autostart fcgid, the "apache" executable or script must be
on your system path, or you must override the global APACHE_PATH.
On some platforms, "apache" may be called "apachectl", "apache2ctl",
or "httpd"--create a symlink to them if needed.
You'll also need the WSGIServer from flup.servers.
See http://projects.amor.org/misc/wiki/ModPythonGateway
KNOWN BUGS
==========
1. Apache processes Range headers automatically; CherryPy's truncated
output is then truncated again by Apache. See test_core.testRanges.
This was worked around in http://www.cherrypy.org/changeset/1319.
2. Apache does not allow custom HTTP methods like CONNECT as per the spec.
See test_core.testHTTPMethods.
3. Max request header and body settings do not work with Apache.
4. Apache replaces status "reason phrases" automatically. For example,
CherryPy may set "304 Not modified" but Apache will write out
"304 Not Modified" (capital "M").
5. Apache does not allow custom error codes as per the spec.
6. Apache (or perhaps modpython, or modpython_gateway) unquotes %xx in the
Request-URI too early.
7. mod_python will not read request bodies which use the "chunked"
transfer-coding (it passes REQUEST_CHUNKED_ERROR to ap_setup_client_block
instead of REQUEST_CHUNKED_DECHUNK, see Apache2's http_protocol.c and
mod_python's requestobject.c).
8. Apache will output a "Content-Length: 0" response header even if there's
no response entity body. This isn't really a bug; it just differs from
the CherryPy default.
"""
import os
curdir = os.path.join(os.getcwd(), os.path.dirname(__file__))
import re
import sys
import time
import cherrypy
from cherrypy._cpcompat import ntob
from cherrypy.process import plugins, servers
from cherrypy.test import helper
def read_process(cmd, args=""):
pipein, pipeout = os.popen4("%s %s" % (cmd, args))
try:
firstline = pipeout.readline()
if (re.search(r"(not recognized|No such file|not found)", firstline,
re.IGNORECASE)):
raise IOError('%s must be on your system path.' % cmd)
output = firstline + pipeout.read()
finally:
pipeout.close()
return output
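Note that os.popen4 exists only on Python 2; a rough subprocess-based equivalent of read_process (a sketch assuming combined stdout/stderr capture is all that is needed) could look like:

```python
import re
import subprocess

def read_process(cmd, args=""):
    # combine stdout and stderr, mirroring os.popen4's single output pipe
    proc = subprocess.run("%s %s" % (cmd, args), shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.stdout.decode("utf-8", "replace")
    firstline = output.split("\n", 1)[0]
    if re.search(r"(not recognized|No such file|not found)",
                 firstline, re.IGNORECASE):
        raise IOError('%s must be on your system path.' % cmd)
    return output
```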
APACHE_PATH = "httpd"
CONF_PATH = "fcgi.conf"
conf_fcgid = """
# Apache2 server conf file for testing CherryPy with mod_fcgid.
DocumentRoot "%(root)s"
ServerName 127.0.0.1
Listen %(port)s
LoadModule fastcgi_module modules/mod_fastcgi.dll
LoadModule rewrite_module modules/mod_rewrite.so
Options ExecCGI
SetHandler fastcgi-script
RewriteEngine On
RewriteRule ^(.*)$ /fastcgi.pyc [L]
FastCgiExternalServer "%(server)s" -host 127.0.0.1:4000
"""
class ModFCGISupervisor(helper.LocalSupervisor):
using_apache = True
using_wsgi = True
template = conf_fcgid
def __str__(self):
return "FCGI Server on %s:%s" % (self.host, self.port)
def start(self, modulename):
cherrypy.server.httpserver = servers.FlupFCGIServer(
application=cherrypy.tree, bindAddress=('127.0.0.1', 4000))
cherrypy.server.httpserver.bind_addr = ('127.0.0.1', 4000)
# For FCGI, we both start apache...
self.start_apache()
# ...and our local server
helper.LocalServer.start(self, modulename)
def start_apache(self):
fcgiconf = CONF_PATH
if not os.path.isabs(fcgiconf):
fcgiconf = os.path.join(curdir, fcgiconf)
# Write the Apache conf file.
f = open(fcgiconf, 'wb')
try:
server = repr(os.path.join(curdir, 'fastcgi.pyc'))[1:-1]
output = self.template % {'port': self.port, 'root': curdir,
'server': server}
output = ntob(output.replace('\r\n', '\n'))
f.write(output)
finally:
f.close()
result = read_process(APACHE_PATH, "-k start -f %s" % fcgiconf)
if result:
print(result)
def stop(self):
"""Gracefully shut down a server that is serving forever."""
read_process(APACHE_PATH, "-k stop")
helper.LocalServer.stop(self)
def sync_apps(self):
cherrypy.server.httpserver.fcgiserver.application = self.get_app()
| mit |
aferr/TemporalPartitioningMemCtl | src/dev/arm/RealView.py | 9 | 19110 | # Copyright (c) 2009-2012 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Copyright (c) 2006-2007 The Regents of The University of Michigan
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Ali Saidi
# Gabe Black
# William Wang
from m5.params import *
from m5.proxy import *
from Device import BasicPioDevice, PioDevice, IsaFake, BadAddr, DmaDevice
from Pci import PciConfigAll
from Ethernet import NSGigE, IGbE_e1000, IGbE_igb
from Ide import *
from Platform import Platform
from Terminal import Terminal
from Uart import Uart
from SimpleMemory import SimpleMemory
class AmbaDevice(BasicPioDevice):
type = 'AmbaDevice'
abstract = True
amba_id = Param.UInt32("ID of AMBA device for kernel detection")
class AmbaIntDevice(AmbaDevice):
type = 'AmbaIntDevice'
abstract = True
gic = Param.Gic(Parent.any, "Gic to use for interrupting")
int_num = Param.UInt32("Interrupt number that connects to GIC")
int_delay = Param.Latency("100ns",
"Time between action and interrupt generation by device")
class AmbaDmaDevice(DmaDevice):
type = 'AmbaDmaDevice'
abstract = True
pio_addr = Param.Addr("Address for AMBA slave interface")
pio_latency = Param.Latency("10ns", "Time between action and write/read result by AMBA DMA Device")
gic = Param.Gic(Parent.any, "Gic to use for interrupting")
int_num = Param.UInt32("Interrupt number that connects to GIC")
amba_id = Param.UInt32("ID of AMBA device for kernel detection")
class A9SCU(BasicPioDevice):
type = 'A9SCU'
class RealViewCtrl(BasicPioDevice):
type = 'RealViewCtrl'
proc_id0 = Param.UInt32(0x0C000000, "Processor ID, SYS_PROCID")
proc_id1 = Param.UInt32(0x0C000222, "Processor ID, SYS_PROCID1")
idreg = Param.UInt32(0x00000000, "ID Register, SYS_ID")
class Gic(PioDevice):
type = 'Gic'
platform = Param.Platform(Parent.any, "Platform this device is part of.")
dist_addr = Param.Addr(0x1f001000, "Address for distributor")
cpu_addr = Param.Addr(0x1f000100, "Address for cpu")
dist_pio_delay = Param.Latency('10ns', "Delay for PIO r/w to distributor")
cpu_pio_delay = Param.Latency('10ns', "Delay for PIO r/w to cpu interface")
int_latency = Param.Latency('10ns', "Delay for interrupt to get to CPU")
it_lines = Param.UInt32(128, "Number of interrupt lines supported (max = 1020)")
class AmbaFake(AmbaDevice):
type = 'AmbaFake'
ignore_access = Param.Bool(False, "Ignore reads/writes to this device, (e.g. IsaFake + AMBA)")
amba_id = 0
class Pl011(Uart):
type = 'Pl011'
gic = Param.Gic(Parent.any, "Gic to use for interrupting")
int_num = Param.UInt32("Interrupt number that connects to GIC")
end_on_eot = Param.Bool(False, "End the simulation when a EOT is received on the UART")
int_delay = Param.Latency("100ns", "Time between action and interrupt generation by UART")
class Sp804(AmbaDevice):
type = 'Sp804'
gic = Param.Gic(Parent.any, "Gic to use for interrupting")
int_num0 = Param.UInt32("Interrupt number that connects to GIC")
clock0 = Param.Clock('1MHz', "Clock speed of the input")
int_num1 = Param.UInt32("Interrupt number that connects to GIC")
clock1 = Param.Clock('1MHz', "Clock speed of the input")
amba_id = 0x00141804
class CpuLocalTimer(BasicPioDevice):
type = 'CpuLocalTimer'
gic = Param.Gic(Parent.any, "Gic to use for interrupting")
int_num_timer = Param.UInt32("Interrupt number used per-cpu to GIC")
int_num_watchdog = Param.UInt32("Interrupt number for per-cpu watchdog to GIC")
# Override the default clock
clock = '1GHz'
class PL031(AmbaIntDevice):
type = 'PL031'
time = Param.Time('01/01/2009', "System time to use ('Now' for actual time)")
amba_id = 0x00341031
class Pl050(AmbaIntDevice):
type = 'Pl050'
vnc = Param.VncServer(Parent.any, "Vnc server for remote frame buffer display")
is_mouse = Param.Bool(False, "Is this interface a mouse, if not a keyboard")
int_delay = '1us'
amba_id = 0x00141050
class Pl111(AmbaDmaDevice):
type = 'Pl111'
# Override the default clock
clock = '24MHz'
vnc = Param.VncServer(Parent.any, "Vnc server for remote frame buffer display")
amba_id = 0x00141111
class RealView(Platform):
type = 'RealView'
system = Param.System(Parent.any, "system")
pci_cfg_base = Param.Addr(0, "Base address of PCI Configuration Space")
mem_start_addr = Param.Addr(0, "Start address of main memory")
max_mem_size = Param.Addr('256MB', "Maximum amount of RAM supported by platform")
def setupBootLoader(self, mem_bus, cur_sys, loc):
self.nvmem = SimpleMemory(range = AddrRange(Addr('2GB'),
size = '64MB'),
zero = True)
self.nvmem.port = mem_bus.master
cur_sys.boot_loader = loc('boot.arm')
# Reference for memory map and interrupt number
# RealView Platform Baseboard Explore for Cortex-A9 User Guide (ARM DUI 0440A)
# Chapter 4: Programmer's Reference
class RealViewPBX(RealView):
uart = Pl011(pio_addr=0x10009000, int_num=44)
realview_io = RealViewCtrl(pio_addr=0x10000000)
gic = Gic()
timer0 = Sp804(int_num0=36, int_num1=36, pio_addr=0x10011000)
timer1 = Sp804(int_num0=37, int_num1=37, pio_addr=0x10012000)
local_cpu_timer = CpuLocalTimer(int_num_timer=29, int_num_watchdog=30, pio_addr=0x1f000600)
clcd = Pl111(pio_addr=0x10020000, int_num=55)
kmi0 = Pl050(pio_addr=0x10006000, int_num=52)
kmi1 = Pl050(pio_addr=0x10007000, int_num=53, is_mouse=True)
a9scu = A9SCU(pio_addr=0x1f000000)
cf_ctrl = IdeController(disks=[], pci_func=0, pci_dev=7, pci_bus=2,
io_shift = 1, ctrl_offset = 2, Command = 0x1,
BAR0 = 0x18000000, BAR0Size = '16B',
BAR1 = 0x18000100, BAR1Size = '1B',
BAR0LegacyIO = True, BAR1LegacyIO = True)
l2x0_fake = IsaFake(pio_addr=0x1f002000, pio_size=0xfff)
flash_fake = IsaFake(pio_addr=0x40000000, pio_size=0x20000000,
fake_mem=True)
dmac_fake = AmbaFake(pio_addr=0x10030000)
uart1_fake = AmbaFake(pio_addr=0x1000a000)
uart2_fake = AmbaFake(pio_addr=0x1000b000)
uart3_fake = AmbaFake(pio_addr=0x1000c000)
smc_fake = AmbaFake(pio_addr=0x100e1000)
sp810_fake = AmbaFake(pio_addr=0x10001000, ignore_access=True)
watchdog_fake = AmbaFake(pio_addr=0x10010000)
gpio0_fake = AmbaFake(pio_addr=0x10013000)
gpio1_fake = AmbaFake(pio_addr=0x10014000)
gpio2_fake = AmbaFake(pio_addr=0x10015000)
ssp_fake = AmbaFake(pio_addr=0x1000d000)
sci_fake = AmbaFake(pio_addr=0x1000e000)
aaci_fake = AmbaFake(pio_addr=0x10004000)
mmc_fake = AmbaFake(pio_addr=0x10005000)
rtc = PL031(pio_addr=0x10017000, int_num=42)
# Attach I/O devices that are on chip and also set the appropriate
# ranges for the bridge
def attachOnChipIO(self, bus, bridge):
self.gic.pio = bus.master
self.l2x0_fake.pio = bus.master
self.a9scu.pio = bus.master
self.local_cpu_timer.pio = bus.master
# Bridge ranges based on excluding what is part of on-chip I/O
# (gic, l2x0, a9scu, local_cpu_timer)
bridge.ranges = [AddrRange(self.realview_io.pio_addr,
self.a9scu.pio_addr - 1),
AddrRange(self.flash_fake.pio_addr,
self.flash_fake.pio_addr + \
self.flash_fake.pio_size - 1)]
# Attach I/O devices to specified bus object. Can't do this
# earlier, since the bus object itself is typically defined at the
# System level.
def attachIO(self, bus):
self.uart.pio = bus.master
self.realview_io.pio = bus.master
self.timer0.pio = bus.master
self.timer1.pio = bus.master
self.clcd.pio = bus.master
self.clcd.dma = bus.slave
self.kmi0.pio = bus.master
self.kmi1.pio = bus.master
self.cf_ctrl.pio = bus.master
self.cf_ctrl.config = bus.master
self.cf_ctrl.dma = bus.slave
self.dmac_fake.pio = bus.master
self.uart1_fake.pio = bus.master
self.uart2_fake.pio = bus.master
self.uart3_fake.pio = bus.master
self.smc_fake.pio = bus.master
self.sp810_fake.pio = bus.master
self.watchdog_fake.pio = bus.master
self.gpio0_fake.pio = bus.master
self.gpio1_fake.pio = bus.master
self.gpio2_fake.pio = bus.master
self.ssp_fake.pio = bus.master
self.sci_fake.pio = bus.master
self.aaci_fake.pio = bus.master
self.mmc_fake.pio = bus.master
self.rtc.pio = bus.master
self.flash_fake.pio = bus.master
# Reference for memory map and interrupt number
# RealView Emulation Baseboard User Guide (ARM DUI 0143B)
# Chapter 4: Programmer's Reference
class RealViewEB(RealView):
uart = Pl011(pio_addr=0x10009000, int_num=44)
realview_io = RealViewCtrl(pio_addr=0x10000000)
gic = Gic(dist_addr=0x10041000, cpu_addr=0x10040000)
timer0 = Sp804(int_num0=36, int_num1=36, pio_addr=0x10011000)
timer1 = Sp804(int_num0=37, int_num1=37, pio_addr=0x10012000)
clcd = Pl111(pio_addr=0x10020000, int_num=23)
kmi0 = Pl050(pio_addr=0x10006000, int_num=20)
kmi1 = Pl050(pio_addr=0x10007000, int_num=21, is_mouse=True)
l2x0_fake = IsaFake(pio_addr=0x1f002000, pio_size=0xfff, warn_access="1")
flash_fake = IsaFake(pio_addr=0x40000000, pio_size=0x20000000-1,
fake_mem=True)
dmac_fake = AmbaFake(pio_addr=0x10030000)
uart1_fake = AmbaFake(pio_addr=0x1000a000)
uart2_fake = AmbaFake(pio_addr=0x1000b000)
uart3_fake = AmbaFake(pio_addr=0x1000c000)
smcreg_fake = IsaFake(pio_addr=0x10080000, pio_size=0x10000-1)
smc_fake = AmbaFake(pio_addr=0x100e1000)
sp810_fake = AmbaFake(pio_addr=0x10001000, ignore_access=True)
watchdog_fake = AmbaFake(pio_addr=0x10010000)
gpio0_fake = AmbaFake(pio_addr=0x10013000)
gpio1_fake = AmbaFake(pio_addr=0x10014000)
gpio2_fake = AmbaFake(pio_addr=0x10015000)
ssp_fake = AmbaFake(pio_addr=0x1000d000)
sci_fake = AmbaFake(pio_addr=0x1000e000)
aaci_fake = AmbaFake(pio_addr=0x10004000)
mmc_fake = AmbaFake(pio_addr=0x10005000)
rtc_fake = AmbaFake(pio_addr=0x10017000, amba_id=0x41031)
# Attach I/O devices that are on chip and also set the appropriate
# ranges for the bridge
def attachOnChipIO(self, bus, bridge):
self.gic.pio = bus.master
self.l2x0_fake.pio = bus.master
# Bridge ranges based on excluding what is part of on-chip I/O
# (gic, l2x0)
bridge.ranges = [AddrRange(self.realview_io.pio_addr,
self.gic.cpu_addr - 1),
AddrRange(self.flash_fake.pio_addr, Addr.max)]
# Attach I/O devices to specified bus object. Can't do this
# earlier, since the bus object itself is typically defined at the
# System level.
def attachIO(self, bus):
self.uart.pio = bus.master
self.realview_io.pio = bus.master
self.timer0.pio = bus.master
self.timer1.pio = bus.master
self.clcd.pio = bus.master
self.clcd.dma = bus.slave
self.kmi0.pio = bus.master
self.kmi1.pio = bus.master
self.dmac_fake.pio = bus.master
self.uart1_fake.pio = bus.master
self.uart2_fake.pio = bus.master
self.uart3_fake.pio = bus.master
self.smc_fake.pio = bus.master
self.sp810_fake.pio = bus.master
self.watchdog_fake.pio = bus.master
self.gpio0_fake.pio = bus.master
self.gpio1_fake.pio = bus.master
self.gpio2_fake.pio = bus.master
self.ssp_fake.pio = bus.master
self.sci_fake.pio = bus.master
self.aaci_fake.pio = bus.master
self.mmc_fake.pio = bus.master
self.rtc_fake.pio = bus.master
self.flash_fake.pio = bus.master
self.smcreg_fake.pio = bus.master
class VExpress_EMM(RealView):
mem_start_addr = '2GB'
max_mem_size = '2GB'
pci_cfg_base = 0x30000000
uart = Pl011(pio_addr=0x1c090000, int_num=37)
realview_io = RealViewCtrl(proc_id0=0x14000000, proc_id1=0x14000000, pio_addr=0x1C010000)
gic = Gic(dist_addr=0x2C001000, cpu_addr=0x2C002000)
local_cpu_timer = CpuLocalTimer(int_num_timer=29, int_num_watchdog=30, pio_addr=0x2C080000)
timer0 = Sp804(int_num0=34, int_num1=34, pio_addr=0x1C110000, clock0='1MHz', clock1='1MHz')
timer1 = Sp804(int_num0=35, int_num1=35, pio_addr=0x1C120000, clock0='1MHz', clock1='1MHz')
clcd = Pl111(pio_addr=0x1c1f0000, int_num=46)
kmi0 = Pl050(pio_addr=0x1c060000, int_num=44)
kmi1 = Pl050(pio_addr=0x1c070000, int_num=45)
cf_ctrl = IdeController(disks=[], pci_func=0, pci_dev=0, pci_bus=2,
io_shift = 2, ctrl_offset = 2, Command = 0x1,
BAR0 = 0x1C1A0000, BAR0Size = '256B',
BAR1 = 0x1C1A0100, BAR1Size = '4096B',
BAR0LegacyIO = True, BAR1LegacyIO = True)
pciconfig = PciConfigAll(size='256MB')
ethernet = IGbE_e1000(pci_bus=0, pci_dev=0, pci_func=0,
InterruptLine=1, InterruptPin=1)
ide = IdeController(disks = [], pci_bus=0, pci_dev=1, pci_func=0,
InterruptLine=2, InterruptPin=2)
vram = SimpleMemory(range = AddrRange(0x18000000, size='32MB'),
zero = True)
rtc = PL031(pio_addr=0x1C170000, int_num=36)
l2x0_fake = IsaFake(pio_addr=0x2C100000, pio_size=0xfff)
uart1_fake = AmbaFake(pio_addr=0x1C0A0000)
uart2_fake = AmbaFake(pio_addr=0x1C0B0000)
uart3_fake = AmbaFake(pio_addr=0x1C0C0000)
sp810_fake = AmbaFake(pio_addr=0x1C020000, ignore_access=True)
watchdog_fake = AmbaFake(pio_addr=0x1C0F0000)
aaci_fake = AmbaFake(pio_addr=0x1C040000)
lan_fake = IsaFake(pio_addr=0x1A000000, pio_size=0xffff)
usb_fake = IsaFake(pio_addr=0x1B000000, pio_size=0x1ffff)
mmc_fake = AmbaFake(pio_addr=0x1c050000)
def setupBootLoader(self, mem_bus, cur_sys, loc):
self.nvmem = SimpleMemory(range = AddrRange(0, size = '64MB'),
zero = True)
self.nvmem.port = mem_bus.master
cur_sys.boot_loader = loc('boot_emm.arm')
cur_sys.atags_addr = 0x80000100
# Attach I/O devices that are on chip and also set the appropriate
# ranges for the bridge
def attachOnChipIO(self, bus, bridge):
self.gic.pio = bus.master
self.local_cpu_timer.pio = bus.master
# Bridge ranges based on excluding what is part of on-chip I/O
# (gic, a9scu)
bridge.ranges = [AddrRange(0x2F000000, size='16MB'),
AddrRange(0x30000000, size='256MB'),
AddrRange(0x40000000, size='512MB'),
AddrRange(0x18000000, size='64MB'),
AddrRange(0x1C000000, size='64MB')]
# Attach I/O devices to specified bus object. Can't do this
# earlier, since the bus object itself is typically defined at the
# System level.
def attachIO(self, bus):
self.uart.pio = bus.master
self.realview_io.pio = bus.master
self.timer0.pio = bus.master
self.timer1.pio = bus.master
self.clcd.pio = bus.master
self.clcd.dma = bus.slave
self.kmi0.pio = bus.master
self.kmi1.pio = bus.master
self.cf_ctrl.pio = bus.master
self.cf_ctrl.dma = bus.slave
self.cf_ctrl.config = bus.master
self.rtc.pio = bus.master
bus.use_default_range = True
self.vram.port = bus.master
self.ide.pio = bus.master
self.ide.config = bus.master
self.ide.dma = bus.slave
self.ethernet.pio = bus.master
self.ethernet.config = bus.master
self.ethernet.dma = bus.slave
self.pciconfig.pio = bus.default
self.l2x0_fake.pio = bus.master
self.uart1_fake.pio = bus.master
self.uart2_fake.pio = bus.master
self.uart3_fake.pio = bus.master
self.sp810_fake.pio = bus.master
self.watchdog_fake.pio = bus.master
self.aaci_fake.pio = bus.master
self.lan_fake.pio = bus.master
self.usb_fake.pio = bus.master
self.mmc_fake.pio = bus.master
| bsd-3-clause |
opps/opps | opps/boxes/migrations/0003_auto__add_field_queryset_order_field.py | 5 | 8523 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
from django.contrib.auth import get_user_model
User = get_user_model()
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'QuerySet.order_field'
db.add_column(u'boxes_queryset', 'order_field',
self.gf('django.db.models.fields.CharField')(default='id', max_length=100),
keep_default=False)
def backwards(self, orm):
# Deleting field 'QuerySet.order_field'
db.delete_column(u'boxes_queryset', 'order_field')
models = {
u'%s.%s' % (User._meta.app_label, User._meta.module_name): {
'Meta': {'object_name': User.__name__},
},
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'boxes.queryset': {
'Meta': {'object_name': 'QuerySet'},
'channel': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['channels.Channel']", 'null': 'True', 'blank': 'True'}),
'date_available': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'date_insert': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'filters': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'limit': ('django.db.models.fields.PositiveIntegerField', [], {'default': '7'}),
'mirror_site': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "u'boxes_queryset_mirror_site'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['sites.Site']"}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '150'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '140'}),
'offset': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'order': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'order_field': ('django.db.models.fields.CharField', [], {'default': "'id'", 'max_length': '100'}),
'published': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'default': '1', 'to': u"orm['sites.Site']"}),
'site_domain': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '100', 'null': 'True', 'blank': 'True'}),
'site_iid': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True', 'max_length': '4', 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '150'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['%s.%s']" % (User._meta.app_label, User._meta.object_name)})
},
u'channels.channel': {
'Meta': {'ordering': "['name', 'parent__id', 'published']", 'unique_together': "(('site', 'long_slug', 'slug', 'parent'),)", 'object_name': 'Channel'},
'date_available': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'date_insert': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_update': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'group': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'hat': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'homepage': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'include_in_main_rss': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'layout': ('django.db.models.fields.CharField', [], {'default': "'default'", 'max_length': '250', 'db_index': 'True'}),
u'level': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
u'lft': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'long_slug': ('django.db.models.fields.SlugField', [], {'max_length': '250'}),
'mirror_site': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "u'channels_channel_mirror_site'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['sites.Site']"}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '60'}),
'order': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'paginate_by': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'parent': ('mptt.fields.TreeForeignKey', [], {'blank': 'True', 'related_name': "'subchannel'", 'null': 'True', 'to': u"orm['channels.Channel']"}),
'published': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
u'rght': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'show_in_menu': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'default': '1', 'to': u"orm['sites.Site']"}),
'site_domain': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '100', 'null': 'True', 'blank': 'True'}),
'site_iid': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True', 'max_length': '4', 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '150'}),
u'tree_id': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['%s.%s']" % (User._meta.app_label, User._meta.object_name)})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['boxes'] | mit |
qkitgroup/qkit | qkit/measure/timedomain/awg/generate_waveform.py | 1 | 22615 | '''
generate_waveform.py
M. Jerger, S. Probst, A. Schneider (04/2015), J. Braumueller (04/2016)
'''
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import numpy as np
#import os.path
import time
import logging
import numpy
import sys
import qkit
if qkit.module_available("scipy"):
import scipy.special
# creates dummy objects to have everything well-defined.
#global sample
#sample = type('Sample', (object,),{ 'exc_T' : 1e-6 , 'tpi' : 2e-9 , 'tpi2' : 1e-9, 'clock' : 1e9 })
dtype = np.float16 #you can change this via gwf.dtype to anything you want
def compensate(wfm, gamma, sample):
'''
Function that translates a given (analog) waveform wfm into a waveform wfc that needs to be programmed to the AWG
to effectively obtain the waveform wfm after the bias T.
This compensation is required due to the finite time constant of the bias T's capacitor. The time constant in seconds
is passed to the function as gamma.
Credits to A. Schneider
Inputs:
- wfm: original waveform to be compensated
- gamma: bias T time constant in seconds
- sample: sample object form which the function reads the AWG clock
Outputs:
- wfc: corrected waveform to be loaded to the AWG
'''
wfc = np.zeros_like(wfm)
dif = np.diff(wfm)
for i in range(1,len(wfm)):
wfc[i]=wfc[i-1]+wfm[i-1]/(sample.clock*gamma)+dif[i-1]
return wfc
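A minimal usage sketch of compensate (DummySample is a hypothetical stand-in for the qkit sample object; gamma is an arbitrary example value):

```python
import numpy as np

class DummySample(object):
    clock = 1e9   # assumed AWG sample rate in Hz
    exc_T = 1e-6

def compensate(wfm, gamma, sample):
    # same recurrence as above: re-add the charge the bias T capacitor loses
    wfc = np.zeros_like(wfm)
    dif = np.diff(wfm)
    for i in range(1, len(wfm)):
        wfc[i] = wfc[i - 1] + wfm[i - 1] / (sample.clock * gamma) + dif[i - 1]
    return wfc

sample = DummySample()
square_pulse = np.concatenate([np.zeros(10), np.ones(20), np.zeros(10)])
wfc = compensate(square_pulse, gamma=100e-6, sample=sample)
# the plateau acquires an upward slope that the bias T then integrates away
```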
def erf(pulse, attack, decay, sample, length=None, position = None, low=0, high=1, clock = None):
'''
create an erf-shaped envelope function
erf(\pm 2) is almost 0/1, erf(\pm 1) is ~15/85%
Input:
tstart, tstop - erf(-2) times
attack, decay - attack/decay times
'''
if not qkit.module_available("scipy"):
raise ImportError('scipy not available. scipy is needed for erf.')
if(clock == None): clock = sample.clock
if(length == None): length = sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pulse>position):
logging.error(__name__ + ' : pulse does not fit into waveform')
sample_start = int(clock*(position-pulse))
sample_end = int(clock*position)
sample_length = int(np.round(length*clock))
wfm = low * np.ones(sample_length,dtype=dtype)
if attack != 0:
if attack < 2./clock:
logging.warning(__name__ + ' : attack too small compared to AWG sample frequency, setting to %.4g s'%(2./clock))
attack = 2./clock
nAttack = int(clock*attack)
sAttack = 0.5*(1+scipy.special.erf(np.linspace(-2, 2, nAttack)))
wfm[sample_start:sample_start+nAttack] += sAttack * (high-low)
else:
nAttack = 0
if decay != 0:
if decay < 2./clock:
logging.warning(__name__ + ' : decay too small compared to AWG sample frequency, setting to %.4g s'%(2./clock))
decay = 2./clock
nDecay = int(clock*decay)
sDecay = 0.5*(1+scipy.special.erf(np.linspace(2, -2, nDecay)))
wfm[sample_end-nDecay:sample_end] += sDecay * (high-low)
else:
nDecay = 0
wfm[sample_start+nAttack:sample_end-nDecay] = high
return wfm
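The attack/decay edges above use scipy.special.erf; the same 0.5*(1+erf(x)) shape can be sketched with only the standard library's math.erf (erf_edge is a hypothetical helper name):

```python
import math
import numpy as np

def erf_edge(n_samples):
    # rising edge running from ~2% at erf(-2) up to ~98% at erf(+2)
    xs = np.linspace(-2, 2, n_samples)
    return np.array([0.5 * (1 + math.erf(x)) for x in xs])

edge = erf_edge(50)
```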
def exp(pulse, decay, sample, position = None, low=0, high=1, clock = None):
'''
create an exponentially decaying waveform
'''
if(clock == None): clock = sample.clock
if position == None: #automatically correct overlap only when position argument not explicitly given
position = sample.exc_T
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
sample_length = int(np.ceil(sample.exc_T*clock))
wfm = low * np.ones(sample_length,dtype=dtype)
sample_start = int(clock*(position-pulse))
sample_end = int(clock*position)
wfm[sample_start:sample_end] += np.exp(-np.arange(sample_end-sample_start)/(decay*clock)) * (high-low)
return wfm
def triangle(attack, decay, sample, length = None, position = None, low=0, high=1, clock = None):
'''
create a pulse with triangular shape
Input:
attack, decay - attack and decay times in sec
        length - length of the complete resulting waveform in sec
position - position of the end of the pulse
'''
if(clock == None): clock = sample.clock
if(length == None): length = sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
sample_start = int(clock*(position-attack-decay))
sample_end = int(clock*position)
sample_length = int(np.ceil(length*clock))
sample_attack = int(np.ceil(attack*clock))
sample_decay = int(np.ceil(decay*clock))
wfm = low * np.ones(sample_length,dtype=dtype)
wfm[sample_start:sample_start+sample_attack] = np.linspace(low, high, sample_attack)
wfm[sample_start+sample_attack:sample_end-sample_decay] = high
wfm[sample_end-sample_decay:sample_end] = np.linspace(high, low, sample_decay)
return wfm
def square(pulse, sample, length = None,position = None, low = 0, high = 1, clock = None, adddelay=0.,freq=None):
'''
generate waveform corresponding to a dc pulse
Input:
pulse - pulse duration in seconds
length - length of the generated waveform
position - time instant of the end of the pulse
low - pulse 'off' sample value
high - pulse 'on' sample value
clock - sample rate of the DAC
Output:
float array of samples
'''
if(clock == None): clock= sample.clock
if(length == None): length= sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pulse>position): logging.error(__name__ + ' : pulse does not fit into waveform')
sample_start = int(clock*(position-pulse-adddelay))
sample_end = int(clock*(position-adddelay))
sample_length = int(np.round(length*clock)/4)*4 #Ensures that the number of samples is divisible by 4 @andre20150615
#sample_length = int(np.ceil(length*clock)) #old definition
wfm = low*np.ones(sample_length,dtype=dtype)
if(sample_start < sample_end): wfm[int(sample_start)] = high + (low-high)*(sample_start-int(sample_start))
if freq==None: wfm[int(np.ceil(sample_start)):int(sample_end)] = high
else:
for i in range(int(sample_end)-int(np.ceil(sample_start))):
wfm[i+int(np.ceil(sample_start))] = high*np.sin(2*np.pi*freq/clock*(i))
if(np.ceil(sample_end) != np.floor(sample_end)): wfm[int(sample_end)] = low + (high-low)*(sample_end-int(sample_end))
return wfm
def gauss(pulse, sample, length = None,position = None, low = 0, high = 1, clock = None):
'''
generate waveform corresponding to a dc gauss pulse
Input:
pulse - pulse duration in seconds
length - length of the generated waveform
position - time instant of the end of the pulse
low - pulse 'off' sample value
high - pulse 'on' sample value
clock - sample rate of the DAC
Output:
float array of samples
'''
if(clock == None): clock= sample.clock
if(length == None):
length= sample.exc_T
sample_length = int(np.round(length*clock)/4)*4
else:
sample_length = int(length*clock)
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pulse>position): logging.error(__name__ + ' : pulse does not fit into waveform')
sample_start = int(clock*(position-pulse))
sample_end = int(clock*position)
wfm = low*np.ones(sample_length,dtype=dtype)
if(sample_start < sample_end): wfm[int(sample_start)] = 0.#high + (low-high)*(sample_start-int(sample_start))
#wfm[int(np.ceil(sample_start)):int(sample_end)] = high
pulsesamples = int(int(sample_end)-int(sample_start))
for i in range(pulsesamples):
wfm[int(np.ceil(sample_start)+i)] = high*np.exp(-(i-pulsesamples/2.)**2/(2.*(pulsesamples/5.)**2))
if(np.ceil(sample_end) != np.floor(sample_end)): wfm[int(sample_end)] = low + (high-low)*(sample_end-int(sample_end))
return wfm
def arb_function(function, pulse, sample, length = None, position = None, clock = None):
    '''
    generate arbitrary waveform pulse
    Input:
        function - function called to generate the samples
        pulse - duration of the signal
        sample - sample object providing default clock and length
        position - time instant of the end of the signal
        length - duration of the waveform
        clock - sample rate of the DAC
    Output:
        float array of samples
    '''
if(clock == None): clock= sample.clock
if(length == None): length= sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pulse>position): logging.error(__name__ + ' : pulse does not fit into waveform')
    sample_start = int(clock*(position-pulse))
    sample_end = int(clock*position)
    sample_length = int(np.ceil(length*clock))
    wfm = np.zeros(sample_length,dtype=dtype)
    times = np.arange(sample_end-sample_start)/float(clock)
wfm[sample_start:sample_end] = function(times)
return wfm
def t1(delay, sample, length = None, low = 0, high = 1, clock = None, DRAG_amplitude=None):
'''
generate waveform with one pi pulse and delay after
Input:
delay - time delay after pi pulse
sample object
Output:
float array of samples
'''
if(clock == None): clock = sample.clock
if(length == None): length = sample.exc_T
if hasattr(sample, 'overlap'): #if overlap exists in sample object
delay += sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(delay+sample.tpi > length): logging.error(__name__ + ' : pulse does not fit into waveform')
if DRAG_amplitude == None:
wfm = square(sample.tpi, sample, length, length-delay, clock = clock)
else:
wfm = drag(sample.tpi, sample, DRAG_amplitude, length, length-delay, clock = clock)
wfm = wfm * (high-low) + low
return wfm
def ramsey(delay, sample, pi2_pulse = None, length = None,position = None, low = 0, high = 1, clock = None, DRAG_amplitude=None):
'''
generate waveform with two pi/2 pulses and delay in-between
Input:
delay - time delay between the pi/2 pulses
pi2_pulse - length of a pi/2 pulse
(see awg_pulse for rest)
Output:
float array of samples
'''
if(clock == None): clock = sample.clock
if(length == None): length = sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if DRAG_amplitude == None:
if(pi2_pulse == None): pi2_pulse = sample.tpi2
if(delay+2*pi2_pulse>position): logging.error(__name__ + ' : ramsey pulses do not fit into waveform')
wfm = square(pi2_pulse, sample, length, position, clock = clock)
wfm += square(pi2_pulse, sample, length, position-delay-pi2_pulse, clock = clock)
else:
if(pi2_pulse == None): pi2_pulse = sample.tpi2
if(delay+2*pi2_pulse>position): logging.error(__name__ + ' : ramsey pulses do not fit into waveform')
wfm = drag(pi2_pulse, sample, DRAG_amplitude, length, position, clock = clock)
wfm += drag(pi2_pulse, sample, DRAG_amplitude, length, position-delay-pi2_pulse, clock = clock)
wfm = wfm * (high-low) + low
return wfm
def spinecho(delay, sample, pi2_pulse = None, pi_pulse = None, length = None,position = None, low = 0, high = 1, clock = None, readoutpulse=True, adddelay=0., freq=None, n = 1,DRAG_amplitude=None, phase = 0.):
'''
generate waveform with two pi/2 pulses at the ends and a number n of echo (pi) pulses in between
pi2 - delay/(n/2) - pi - delay/n - pi - ... - pi - delay/(n/2) - [pi2, if readoutpulse]
Phase shift included to perform CPMG-Measurements,
DRAG included
Sequence for n>1 fixed: between pi2 and pi is delay/(n/2) @TW20160907
pulse - pulse duration in seconds
length - length of the generated waveform
position - time instant of the end of the pulse
low - pulse 'off' sample value
high - pulse 'on' sample value
clock - sample rate of the DAC
phase - phase shift between pi2 and pi pulses in rad
DRAG_amplitude - if not None, DRAG-Pulses are used
    waveforms are constructed from right to left
'''
if(clock == None): clock= sample.clock
if(length == None): length= sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pi2_pulse == None): pi2_pulse = sample.tpi2
if(pi_pulse == None): pi_pulse = sample.tpi
    if round(adddelay+delay+2*pi2_pulse+n*pi_pulse, 10) > round(position, 10): # round because of floating-point arithmetic
logging.error(__name__ + ' : sequence does not fit into waveform. delay is the sum of the waiting times in between the pi pulses')
if DRAG_amplitude == None:
if readoutpulse: #last pi/2 pulse
wfm = square(pi2_pulse, sample, length, position, clock = clock, freq=freq)*np.exp(0j) #add pi/2 pulse
else:
wfm = square(pi2_pulse, sample, length, position, low, low, clock,freq=freq)*np.exp(0j) #create space (low) of the length of a pi/2 pulse
for ni in range(n): #add pi pulses
wfm += square(pi_pulse, sample, length, position - pi2_pulse - ni*pi_pulse - float(delay)/(2*n)-delay/n*ni - adddelay, clock = clock, freq=freq)*np.exp(phase*1j)
wfm += square(pi2_pulse, sample, length, position - pi2_pulse - n*pi_pulse - delay - adddelay, clock = clock, freq=freq)*np.exp(0j) #pi/2 pulse
wfm = wfm * (high-low) + complex(low,low) #adjust offset
if phase == 0: wfm = wfm.real # to avoid conversion error messages
else:
if readoutpulse: #last pi/2 pulse
wfm = drag(pi2_pulse, sample, DRAG_amplitude, length, position, clock = clock) *np.exp(0j) #add pi/2 pulse
else:
wfm = square(pi2_pulse, sample, length, position, low, low, clock,freq=freq)*np.exp(0j) #create space (low) of the length of a pi/2 pulse
for ni in range(n): #add pi pulses
wfm += drag(pi_pulse, sample, DRAG_amplitude, length, position - pi2_pulse - ni*pi_pulse - float(delay)/(2*n)-delay/n*ni - adddelay, clock = clock)*np.exp(phase*1j)
wfm += drag(pi2_pulse, sample, DRAG_amplitude, length, position - pi2_pulse - n*pi_pulse - delay - adddelay, clock = clock)*np.exp(0j) #pi/2 pulse
wfm = wfm * (high-low) + complex(low,low) #adjust offset
return wfm
def udd(delay, sample, pi2_pulse = None, pi_pulse = None, length = None,position = None, low = 0, high = 1, clock = None, readoutpulse=True,adddelay=0., freq=None, n = 1, DRAG_amplitude=None, phase = np.pi/2):
'''
generate waveform with two pi/2 pulses at the ends and a number n of (pi) pulses in between
where the position of the j-th pulse is defined by sin^2[(pi*j)/(2N+2)] @TW20160908
pulse - pulse duration in seconds
length - length of the generated waveform
position - time instant of the end of the pulse
low - pulse 'off' sample value
high - pulse 'on' sample value
clock - sample rate of the DAC
phase - phase shift between pi2 and pi pulses
DRAG_amplitude - if not None, DRAG-Pulses are used
'''
if(clock == None): clock= sample.clock
if(length == None): length= sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
if(pi2_pulse == None): pi2_pulse = sample.tpi2
if(pi_pulse == None): pi_pulse = sample.tpi
if round(adddelay+delay+2*pi2_pulse+n*pi_pulse, 10) > round(position, 10):
logging.error(__name__ + ' : sequence does not fit into waveform. delay is the sum of the waiting times in between the pi pulses')
if DRAG_amplitude == None:
if readoutpulse: #last pi/2 pulse
wfm = square(pi2_pulse, sample, length, position, clock = clock, freq=freq)*np.exp(0j) #add pi/2 pulse
else:
wfm = square(pi2_pulse, sample, length, position, low, low, clock,freq=freq)*np.exp(0j) #create space (low) of the length of a pi/2 pulse
for ni in range(n): #add pi pulses
wfm += square(pi_pulse, sample, length, position - (delay+n*pi_pulse)*(np.sin((np.pi*(ni+1))/(2*n+2)))**2 - adddelay, clock = clock, freq=freq)*np.exp(phase*1j) # no pi2_pulse subtracted because equation yields position of center
wfm += square(pi2_pulse, sample, length, position - pi2_pulse - n*pi_pulse - delay - adddelay, clock = clock, freq=freq)*np.exp(0j) #pi/2 pulse
wfm = wfm * (high-low) + complex(low,low) #adjust offset
if phase == 0: wfm = wfm.real # to avoid conversion error messages
else:
if readoutpulse: #last pi/2 pulse
wfm = drag(pi2_pulse, sample, DRAG_amplitude, length, position, clock = clock)*np.exp(0j) #add pi/2 pulse
else:
wfm = square(pi2_pulse, sample, length, position, low, low, clock,freq=freq)*np.exp(0j) #create space (low) of the length of a pi/2 pulse
for ni in range(n): #add pi pulses
wfm += drag(pi_pulse, sample, DRAG_amplitude, length, position - (delay+n*pi_pulse)*(np.sin((np.pi*(ni+1))/(2*n+2)))**2 - adddelay, clock = clock)*np.exp(phase*1j)
wfm += drag(pi2_pulse, sample, DRAG_amplitude, length, position - pi2_pulse - n*pi_pulse - delay - adddelay, clock = clock)*np.exp(0j) #pi/2 pulse
wfm = wfm * (high-low) + complex(low,low) #adjust offset
return wfm
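# A numeric illustration of the UDD pulse placement above (a standalone sketch,
# not part of this module): the center of the j-th of n pi pulses sits a
# fraction sin^2(pi*j/(2n+2)) of the delay window before the final position,
# so the fractions are symmetric around 1/2.

```python
import math

def udd_fractions(n):
    """Fractional positions sin^2(pi*j/(2n+2)) of the j-th of n UDD pi pulses."""
    return [math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

fracs = udd_fractions(3)
print([round(f, 4) for f in fracs])
# for n=3 the middle pulse sits at exactly 0.5 and the outer pair sums to 1
```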
def drag(pulse, sample, amplitude, length = None, position=None, clock = None):
'''
    for short pulses, DRAG helps to reduce the gate error.
    The pulse shape on I is Gaussian; on Q it is the derivative of I times an experimentally determined amplitude
pulse - pulse duration in seconds
amplitude - experimentally determined amplitude
length - length of the generated waveform
position - time instant of the end of the pulse
clock - sample rate of the DAC
'''
if(clock == None): clock = sample.clock
if(length == None): length = sample.exc_T
if position == None: #automatically correct overlap only when position argument not explicitly given
position = length
if hasattr(sample, 'overlap'): #if overlap exists in sample object
position -= sample.overlap
else:
logging.warning('overlap attribute not found in sample object')
wfm = gauss(pulse, sample, length=np.ceil(length*1e9)/1e9, position=position) + 1j * np.concatenate([np.diff(gauss(pulse, sample,length=np.ceil(length*1e9)/1e9, position=position)*amplitude),[0]]) # actual pulse
wfm[int((position-pulse)*clock-1):int((position-pulse)*clock+1)]=wfm.real[int((position-pulse)*clock-1):int((position-pulse)*clock+1)] # for smooth derivative
wfm[int(position*clock-1):int(position*clock+1)]= wfm.real[int(position*clock-1):int(position*clock+1)]
return wfm
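# A minimal standalone sketch of the attack edge computed in erf() above
# (assumptions: no qkit/sample object, stdlib math.erf in place of
# scipy.special.erf): the ramp is 0.5*(1+erf(x)) sampled on x in [-2, 2],
# rising from roughly 0.2% to 99.8% with the midpoint at exactly 0.5.

```python
import math

def erf_edge(n_samples):
    """Rising erf edge: 0.5*(1+erf(x)) for x evenly spaced in [-2, 2]."""
    step = 4.0 / (n_samples - 1)
    return [0.5 * (1.0 + math.erf(-2.0 + i * step)) for i in range(n_samples)]

edge = erf_edge(101)
# the midpoint sample is exactly 0.5; the ends are close to 0 and 1
print(round(edge[0], 4), edge[50], round(edge[-1], 4))
```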
| gpl-2.0 |
seyko2/openvz_rhel6_kernel_mirror | tools/perf/tests/attr.py | 4 | 9571 | #! /usr/bin/python
import os
import sys
import glob
import optparse
import tempfile
import logging
import shutil
import ConfigParser
class Fail(Exception):
def __init__(self, test, msg):
self.msg = msg
self.test = test
def getMsg(self):
return '\'%s\' - %s' % (self.test.path, self.msg)
class Unsup(Exception):
def __init__(self, test):
self.test = test
def getMsg(self):
return '\'%s\'' % self.test.path
# RHEL6 - no support for
# exclude_callchain_kernel
# exclude_callchain_user
# sample_regs_user
# sample_stack_user
class Event(dict):
terms = [
'cpu',
'flags',
'type',
'size',
'config',
'sample_period',
'sample_type',
'read_format',
'disabled',
'inherit',
'pinned',
'exclusive',
'exclude_user',
'exclude_kernel',
'exclude_hv',
'exclude_idle',
'mmap',
'comm',
'freq',
'inherit_stat',
'enable_on_exec',
'task',
'watermark',
'precise_ip',
'mmap_data',
'sample_id_all',
'exclude_host',
'exclude_guest',
# 'exclude_callchain_kernel',
# 'exclude_callchain_user',
'wakeup_events',
'bp_type',
'config1',
'config2',
'branch_sample_type',
# 'sample_regs_user',
# 'sample_stack_user',
]
def add(self, data):
for key, val in data:
log.debug(" %s = %s" % (key, val))
self[key] = val
def __init__(self, name, data, base):
log.debug(" Event %s" % name);
self.name = name;
self.group = ''
self.add(base)
self.add(data)
def compare_data(self, a, b):
# Allow multiple values in assignment separated by '|'
a_list = a.split('|')
b_list = b.split('|')
for a_item in a_list:
for b_item in b_list:
if (a_item == b_item):
return True
elif (a_item == '*') or (b_item == '*'):
return True
return False
def equal(self, other):
for t in Event.terms:
log.debug(" [%s] %s %s" % (t, self[t], other[t]));
if not self.has_key(t) or not other.has_key(t):
return False
if not self.compare_data(self[t], other[t]):
return False
return True
def diff(self, other):
for t in Event.terms:
if not self.has_key(t) or not other.has_key(t):
continue
if not self.compare_data(self[t], other[t]):
log.warning("expected %s=%s, got %s" % (t, self[t], other[t]))
# Test file description needs to have following sections:
# [config]
# - just single instance in file
# - needs to specify:
# 'command' - perf command name
# 'args' - special command arguments
# 'ret' - expected command return value (0 by default)
#
# [eventX:base]
# - one or multiple instances in file
# - expected values assignments
class Test(object):
def __init__(self, path, options):
parser = ConfigParser.SafeConfigParser()
parser.read(path)
log.warning("running '%s'" % path)
self.path = path
self.test_dir = options.test_dir
self.perf = options.perf
self.command = parser.get('config', 'command')
self.args = parser.get('config', 'args')
try:
self.ret = parser.get('config', 'ret')
except:
self.ret = 0
self.expect = {}
self.result = {}
log.debug(" loading expected events");
self.load_events(path, self.expect)
def is_event(self, name):
if name.find("event") == -1:
return False
else:
return True
def load_events(self, path, events):
parser_event = ConfigParser.SafeConfigParser()
parser_event.read(path)
        # The event record section header contains the word 'event',
        # optionally followed by ':', allowing a 'parent event'
        # to be loaded first as a base
for section in filter(self.is_event, parser_event.sections()):
parser_items = parser_event.items(section);
base_items = {}
# Read parent event if there's any
if (':' in section):
base = section[section.index(':') + 1:]
parser_base = ConfigParser.SafeConfigParser()
parser_base.read(self.test_dir + '/' + base)
base_items = parser_base.items('event')
e = Event(section, parser_items, base_items)
events[section] = e
def run_cmd(self, tempdir):
cmd = "PERF_TEST_ATTR=%s %s %s -o %s/perf.data %s" % (tempdir,
self.perf, self.command, tempdir, self.args)
ret = os.WEXITSTATUS(os.system(cmd))
log.info(" '%s' ret %d " % (cmd, ret))
if ret != int(self.ret):
raise Unsup(self)
def compare(self, expect, result):
match = {}
log.debug(" compare");
        # For each expected event find all matching
        # events in result. Fail if there are none.
for exp_name, exp_event in expect.items():
exp_list = []
log.debug(" matching [%s]" % exp_name)
for res_name, res_event in result.items():
log.debug(" to [%s]" % res_name)
if (exp_event.equal(res_event)):
exp_list.append(res_name)
log.debug(" ->OK")
else:
log.debug(" ->FAIL");
log.debug(" match: [%s] matches %s" % (exp_name, str(exp_list)))
        # we did not find any matching event - fail
if (not exp_list):
exp_event.diff(res_event)
raise Fail(self, 'match failure');
match[exp_name] = exp_list
# For each defined group in the expected events
# check we match the same group in the result.
for exp_name, exp_event in expect.items():
group = exp_event.group
if (group == ''):
continue
for res_name in match[exp_name]:
res_group = result[res_name].group
if res_group not in match[group]:
raise Fail(self, 'group failure')
log.debug(" group: [%s] matches group leader %s" %
(exp_name, str(match[group])))
log.debug(" matched")
def resolve_groups(self, events):
for name, event in events.items():
group_fd = event['group_fd'];
if group_fd == '-1':
continue;
for iname, ievent in events.items():
if (ievent['fd'] == group_fd):
event.group = iname
log.debug('[%s] has group leader [%s]' % (name, iname))
break;
def run(self):
tempdir = tempfile.mkdtemp();
try:
# run the test script
self.run_cmd(tempdir);
# load events expectation for the test
log.debug(" loading result events");
for f in glob.glob(tempdir + '/event*'):
self.load_events(f, self.result);
# resolve group_fd to event names
self.resolve_groups(self.expect);
self.resolve_groups(self.result);
# do the expectation - results matching - both ways
self.compare(self.expect, self.result)
self.compare(self.result, self.expect)
finally:
# cleanup
shutil.rmtree(tempdir)
def run_tests(options):
for f in glob.glob(options.test_dir + '/' + options.test):
try:
Test(f, options).run()
except Unsup, obj:
log.warning("unsupp %s" % obj.getMsg())
def setup_log(verbose):
global log
level = logging.CRITICAL
if verbose == 1:
level = logging.WARNING
if verbose == 2:
level = logging.INFO
if verbose >= 3:
level = logging.DEBUG
log = logging.getLogger('test')
log.setLevel(level)
ch = logging.StreamHandler()
ch.setLevel(level)
formatter = logging.Formatter('%(message)s')
ch.setFormatter(formatter)
log.addHandler(ch)
USAGE = '''%s [OPTIONS]
-d dir # tests dir
-p path # perf binary
-t test # single test
-v # verbose level
''' % sys.argv[0]
def main():
parser = optparse.OptionParser(usage=USAGE)
parser.add_option("-t", "--test",
action="store", type="string", dest="test")
parser.add_option("-d", "--test-dir",
action="store", type="string", dest="test_dir")
parser.add_option("-p", "--perf",
action="store", type="string", dest="perf")
parser.add_option("-v", "--verbose",
action="count", dest="verbose")
options, args = parser.parse_args()
if args:
parser.error('FAILED wrong arguments %s' % ' '.join(args))
return -1
setup_log(options.verbose)
if not options.test_dir:
print 'FAILED no -d option specified'
sys.exit(-1)
if not options.test:
options.test = 'test*'
try:
run_tests(options)
except Fail, obj:
print "FAILED %s" % obj.getMsg();
sys.exit(-1)
sys.exit(0)
if __name__ == '__main__':
main()
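# A standalone sketch of the matching rule implemented by Event.compare_data
# above (values_match is a hypothetical helper name; the real logic is a method
# on the Event class): two attribute values match if any '|'-separated
# alternative on either side is equal, or if either side holds the wildcard '*'.

```python
def values_match(a, b):
    """True if any '|'-alternative of a equals one of b, or either is '*'."""
    for a_item in a.split('|'):
        for b_item in b.split('|'):
            if a_item == b_item or a_item == '*' or b_item == '*':
                return True
    return False

print(values_match('0|1', '1'))  # True: '1' is among the alternatives
print(values_match('0', '1'))    # False: no alternative matches
print(values_match('*', '5'))    # True: wildcard matches anything
```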
| gpl-2.0 |
gurneyalex/OpenUpgrade | addons/base_report_designer/plugin/openerp_report_designer/bin/script/SendToServer.py | 90 | 10565 | #########################################################################
#
# Copyright (c) 2003-2004 Danny Brewer d29583@groovegarden.com
# Copyright (C) 2004-2010 OpenERP SA (<http://openerp.com>).
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# See: http://www.gnu.org/licenses/lgpl.html
#
#############################################################################
import uno
import string
import unohelper
import random
import xmlrpclib
import base64, tempfile
from com.sun.star.task import XJobExecutor
import os
import sys
if __name__<>'package':
from lib.gui import *
from lib.error import *
from lib.functions import *
from lib.logreport import *
from lib.tools import *
from LoginTest import *
from lib.rpc import *
database="report"
uid = 3
class SendtoServer(unohelper.Base, XJobExecutor):
Kind = {
'PDF' : 'pdf',
'OpenOffice': 'sxw',
'HTML' : 'html'
}
def __init__(self, ctx):
self.ctx = ctx
self.module = "openerp_report"
self.version = "0.1"
LoginTest()
self.logobj=Logger()
if not loginstatus and __name__=="package":
exit(1)
global passwd
self.password = passwd
global url
self.sock=RPCSession(url)
desktop=getDesktop()
oDoc2 = desktop.getCurrentComponent()
docinfo=oDoc2.getDocumentInfo()
self.ids = self.sock.execute(database, uid, self.password, 'ir.module.module', 'search', [('name','=','base_report_designer'),('state', '=', 'installed')])
if not len(self.ids):
ErrorDialog("Please install base_report_designer module.", "", "Module Uninstalled Error!")
exit(1)
report_name = ""
name=""
if docinfo.getUserFieldValue(2)<>"" :
try:
fields=['name','report_name']
self.res_other = self.sock.execute(database, uid, self.password, 'ir.actions.report.xml', 'read', [int(docinfo.getUserFieldValue(2))],fields)
name = self.res_other[0]['name']
report_name = self.res_other[0]['report_name']
except:
import traceback,sys
info = reduce(lambda x, y: x+y, traceback.format_exception(sys.exc_type, sys.exc_value, sys.exc_traceback))
                self.logobj.log_write('ServerParameter', LOG_ERROR, info)
elif docinfo.getUserFieldValue(3) <> "":
name = ""
result = "rnd"
for i in range(5):
result =result + random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890')
report_name = docinfo.getUserFieldValue(3) + "." + result
else:
ErrorDialog("Please select appropriate module...","Note: use OpenERP Report -> Open a new Report", "Module selection ERROR");
exit(1)
self.win = DBModalDialog(60, 50, 180, 100, "Send To Server")
self.win.addFixedText("lblName",10 , 9, 40, 15, "Report Name :")
self.win.addEdit("txtName", -5, 5, 123, 15,name)
self.win.addFixedText("lblReportName", 2, 30, 50, 15, "Technical Name :")
self.win.addEdit("txtReportName", -5, 25, 123, 15,report_name)
self.win.addCheckBox("chkHeader", 51, 45, 70 ,15, "Corporate Header")
self.win.setCheckBoxState("chkHeader", True)
self.win.addFixedText("lblResourceType", 2 , 60, 50, 15, "Select Rpt. Type :")
self.win.addComboListBox("lstResourceType", -5, 58, 123, 15,True,itemListenerProc=self.lstbox_selected)
self.lstResourceType = self.win.getControl( "lstResourceType" )
self.txtReportName=self.win.getControl( "txtReportName" )
self.txtReportName.Enable=False
for kind in self.Kind.keys():
self.lstResourceType.addItem( kind, self.lstResourceType.getItemCount() )
self.win.addButton( "btnSend", -5, -5, 80, 15, "Send Report to Server", actionListenerProc = self.btnOk_clicked)
self.win.addButton( "btnCancel", -5 - 80 -5, -5, 40, 15, "Cancel", actionListenerProc = self.btnCancel_clicked)
self.win.doModalDialog("lstResourceType", self.Kind.keys()[0])
def lstbox_selected(self, oItemEvent):
pass
def btnCancel_clicked(self, oActionEvent):
self.win.endExecute()
def btnOk_clicked(self, oActionEvent):
if self.win.getEditText("txtName") <> "" and self.win.getEditText("txtReportName") <> "":
desktop=getDesktop()
oDoc2 = desktop.getCurrentComponent()
docinfo=oDoc2.getDocumentInfo()
self.getInverseFieldsRecord(1)
fp_name = tempfile.mktemp('.'+"sxw")
if not oDoc2.hasLocation():
oDoc2.storeAsURL("file://"+fp_name,Array(makePropertyValue("MediaType","application/vnd.sun.xml.writer"),))
if docinfo.getUserFieldValue(2)=="":
                name=self.win.getEditText("txtName")
name_id={}
try:
name_id = self.sock.execute(database, uid, self.password, 'ir.actions.report.xml' , 'search',[('name','=',name)])
if not name_id:
id=self.getID()
docinfo.setUserFieldValue(2,id)
rec = {
'name': self.win.getEditText("txtReportName"),
'key': 'action',
'model': docinfo.getUserFieldValue(3),
'value': 'ir.actions.report.xml,'+str(id),
'key2': 'client_print_multi',
'object': True,
'user_id': uid
}
res = self.sock.execute(database, uid, self.password, 'ir.values' , 'create',rec )
else :
ErrorDialog("This name is already used for another report.\nPlease try with another name.", "", "Error!")
self.logobj.log_write('SendToServer',LOG_WARNING, ': report name already used DB %s' % (database))
self.win.endExecute()
except Exception,e:
import traceback,sys
info = reduce(lambda x, y: x+y, traceback.format_exception(sys.exc_type, sys.exc_value, sys.exc_traceback))
self.logobj.log_write('ServerParameter', LOG_ERROR, info)
else:
id = docinfo.getUserFieldValue(2)
vId = self.sock.execute(database, uid, self.password, 'ir.values' , 'search', [('value','=','ir.actions.report.xml,'+str(id))])
rec = { 'name': self.win.getEditText("txtReportName") }
res = self.sock.execute(database, uid, self.password, 'ir.values' , 'write',vId,rec)
oDoc2.store()
data = read_data_from_file( get_absolute_file_path( oDoc2.getURL()[7:] ) )
self.getInverseFieldsRecord(0)
#sock = xmlrpclib.ServerProxy(docinfo.getUserFieldValue(0) +'/xmlrpc/object')
file_type = oDoc2.getURL()[7:].split(".")[-1]
params = {
'name': self.win.getEditText("txtName"),
'model': docinfo.getUserFieldValue(3),
'report_name': self.win.getEditText("txtReportName"),
'header': (self.win.getCheckBoxState("chkHeader") <> 0),
'report_type': self.Kind[self.win.getListBoxSelectedItem("lstResourceType")],
}
if self.win.getListBoxSelectedItem("lstResourceType")=='OpenOffice':
params['report_type']=file_type
self.sock.execute(database, uid, self.password, 'ir.actions.report.xml', 'write', int(docinfo.getUserFieldValue(2)), params)
# Call upload_report as the *last* step, as it will call register_all() and cause the report service
# to be loaded - which requires all the data to be correct in the database
self.sock.execute(database, uid, self.password, 'ir.actions.report.xml', 'upload_report', int(docinfo.getUserFieldValue(2)),base64.encodestring(data),file_type,{})
self.logobj.log_write('SendToServer',LOG_INFO, ':Report %s successfully send using %s'%(params['name'],database))
self.win.endExecute()
else:
ErrorDialog("Either report name or technical name is empty.\nPlease specify an appropriate name.", "", "Error!")
self.logobj.log_write('SendToServer',LOG_WARNING, ': either report name or technical name is empty.')
self.win.endExecute()
def getID(self):
desktop=getDesktop()
doc = desktop.getCurrentComponent()
docinfo=doc.getDocumentInfo()
params = {
'name': self.win.getEditText("txtName"),
'model': docinfo.getUserFieldValue(3),
'report_name': self.win.getEditText('txtReportName')
}
id=self.sock.execute(database, uid, self.password, 'ir.actions.report.xml' ,'create', params)
return id
def getInverseFieldsRecord(self, nVal):
desktop=getDesktop()
doc = desktop.getCurrentComponent()
count=0
oParEnum = doc.getTextFields().createEnumeration()
while oParEnum.hasMoreElements():
oPar = oParEnum.nextElement()
if oPar.supportsService("com.sun.star.text.TextField.DropDown"):
oPar.SelectedItem = oPar.Items[nVal]
if nVal==0:
oPar.update()
if __name__<>"package" and __name__=="__main__":
SendtoServer(None)
elif __name__=="package":
g_ImplementationHelper.addImplementation( SendtoServer, "org.openoffice.openerp.report.sendtoserver", ("com.sun.star.task.Job",),)
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
dials/dials | command_line/show_extensions.py | 1 | 2912 | # LIBTBX_SET_DISPATCHER_NAME dev.dials.show_extensions
import dials.util
class Script:
"""The class to encapsulate the script."""
def __init__(self):
"""Initialise the script."""
from libtbx.phil import parse
from dials.util.options import OptionParser
# Create the phil parameters
phil_scope = parse(
"""
interfaces=False
.type = bool
.help = "Only show information about the interfaces"
"""
)
# Create the option parser
usage = "dev.dials.show_extensions [options] /path/to/image/files"
self.parser = OptionParser(usage=usage, phil=phil_scope)
def run(self, args=None):
"""Run the script."""
import dials.extensions
# Parse the command line arguments
params, options = self.parser.parse_args(args)
# Create the list of interfaces
interfaces = [
dials.extensions.ProfileModel,
dials.extensions.Background,
dials.extensions.Centroid,
dials.extensions.SpotFinderThreshold,
]
# Loop through all the interfaces
for iface in interfaces:
print("-" * 80)
print(f"Extension interface: {iface.__name__}")
# Either just show information about interfaces or show some about
# extensions depending on user input
if params.interfaces:
# Print info about interface
if options.verbose > 0:
print(f" name = {iface.name}")
if options.verbose > 1:
level = options.verbose - 2
scope = iface.phil_scope()
phil = scope.as_str(print_width=80 - 3, attributes_level=level)
phil = "\n".join((" " * 2) + l for l in phil.split("\n"))
if phil.strip() != "":
print(f" phil:\n{phil}")
else:
# Loop through all the extensions
for ext in iface.extensions():
print(f" Extension: {ext.__name__}")
if options.verbose > 0:
print(f" name = {ext.name}")
if options.verbose > 1:
level = options.verbose - 2
scope = ext.phil_scope()
phil = scope.as_str(
print_width=80 - 3, attributes_level=level
)
phil = "\n".join((" " * 3) + l for l in phil.split("\n"))
if phil.strip() != "":
print(f" phil:\n{phil}")
@dials.util.show_mail_handle_errors()
def run(args=None):
script = Script()
script.run(args)
if __name__ == "__main__":
run()
| bsd-3-clause |
clumsy/intellij-community | python/helpers/py3only/docutils/languages/pt_br.py | 52 | 1948 | # $Id: pt_br.py 5567 2008-06-03 01:11:03Z goodger $
# Author: David Goodger <goodger@python.org>
# Copyright: This module has been placed in the public domain.
# New language mappings are welcome. Before doing a new translation, please
# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
# translated for each language: one in docutils/languages, the other in
# docutils/parsers/rst/languages.
"""
Brazilian Portuguese-language mappings for language-dependent features of Docutils.
"""
__docformat__ = 'reStructuredText'
labels = {
# fixed: language-dependent
'author': 'Autor',
'authors': 'Autores',
'organization': 'Organiza\u00E7\u00E3o',
'address': 'Endere\u00E7o',
'contact': 'Contato',
'version': 'Vers\u00E3o',
'revision': 'Revis\u00E3o',
'status': 'Estado',
'date': 'Data',
'copyright': 'Copyright',
'dedication': 'Dedicat\u00F3ria',
'abstract': 'Resumo',
'attention': 'Aten\u00E7\u00E3o!',
'caution': 'Cuidado!',
'danger': 'PERIGO!',
'error': 'Erro',
'hint': 'Sugest\u00E3o',
'important': 'Importante',
'note': 'Nota',
'tip': 'Dica',
'warning': 'Aviso',
'contents': 'Sum\u00E1rio'}
"""Mapping of node class name to label text."""
bibliographic_fields = {
# language-dependent: fixed
'autor': 'author',
'autores': 'authors',
'organiza\u00E7\u00E3o': 'organization',
'endere\u00E7o': 'address',
'contato': 'contact',
'vers\u00E3o': 'version',
'revis\u00E3o': 'revision',
'estado': 'status',
'data': 'date',
'copyright': 'copyright',
'dedicat\u00F3ria': 'dedication',
'resumo': 'abstract'}
"""Brazilian Portuguese (lowcased) to canonical name mapping for bibliographic fields."""
author_separators = [';', ',']
"""List of separator strings for the 'Authors' bibliographic field. Tried in
order."""
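The docstring above notes that the separator strings are tried in order. A minimal sketch of how a consumer of such a list might split an 'Authors' field (the `split_authors` helper is hypothetical, not part of Docutils):

```python
author_separators = [';', ',']

def split_authors(text):
    """Split an 'Authors' field on the first separator found, tried in order."""
    for sep in author_separators:
        if sep in text:
            return [name.strip() for name in text.split(sep)]
    return [text.strip()]

# ';' is tried before ',', so commas inside names survive:
print(split_authors('Goodger, David; Silva, Ana'))  # ['Goodger, David', 'Silva, Ana']
```

Trying ';' first matters: it lets commas act as part of a single name when a semicolon is present anywhere in the field.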
| apache-2.0 |
cchurch/ansible | lib/ansible/modules/cloud/cloudstack/cs_resourcelimit.py | 25 | 5486 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016, René Moser <mail@renemoser.net>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: cs_resourcelimit
short_description: Manages resource limits on Apache CloudStack based clouds.
description:
- Manage limits of resources for domains, accounts and projects.
version_added: '2.1'
author: René Moser (@resmo)
options:
resource_type:
description:
- Type of the resource.
type: str
required: true
choices:
- instance
- ip_address
- volume
- snapshot
- template
- network
- vpc
- cpu
- memory
- primary_storage
- secondary_storage
aliases: [ type ]
limit:
description:
- Maximum number of the resource.
- Default is unlimited C(-1).
type: int
default: -1
aliases: [ max ]
domain:
description:
- Domain the resource is related to.
type: str
account:
description:
- Account the resource is related to.
type: str
project:
description:
- Name of the project the resource is related to.
type: str
extends_documentation_fragment: cloudstack
'''
EXAMPLES = '''
- name: Update a resource limit for instances of a domain
cs_resourcelimit:
type: instance
limit: 10
domain: customers
delegate_to: localhost
- name: Update a resource limit for instances of an account
cs_resourcelimit:
type: instance
limit: 12
account: moserre
domain: customers
delegate_to: localhost
'''
RETURN = '''
---
resource_type:
description: Type of the resource
returned: success
type: str
sample: instance
limit:
description: Maximum number of the resource.
returned: success
type: int
sample: -1
domain:
description: Domain the resource is related to.
returned: success
type: str
sample: example domain
account:
description: Account the resource is related to.
returned: success
type: str
sample: example account
project:
description: Project the resource is related to.
returned: success
type: str
sample: example project
'''
# import cloudstack common
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.cloudstack import (
AnsibleCloudStack,
cs_required_together,
cs_argument_spec
)
RESOURCE_TYPES = {
'instance': 0,
'ip_address': 1,
'volume': 2,
'snapshot': 3,
'template': 4,
'network': 6,
'vpc': 7,
'cpu': 8,
'memory': 9,
'primary_storage': 10,
'secondary_storage': 11,
}
class AnsibleCloudStackResourceLimit(AnsibleCloudStack):
def __init__(self, module):
super(AnsibleCloudStackResourceLimit, self).__init__(module)
self.returns = {
'max': 'limit',
}
def get_resource_type(self):
resource_type = self.module.params.get('resource_type')
return RESOURCE_TYPES.get(resource_type)
def get_resource_limit(self):
args = {
'account': self.get_account(key='name'),
'domainid': self.get_domain(key='id'),
'projectid': self.get_project(key='id'),
'resourcetype': self.get_resource_type()
}
resource_limit = self.query_api('listResourceLimits', **args)
if resource_limit:
if 'limit' in resource_limit['resourcelimit'][0]:
resource_limit['resourcelimit'][0]['limit'] = int(resource_limit['resourcelimit'][0]['limit'])
return resource_limit['resourcelimit'][0]
self.module.fail_json(msg="Resource limit type '%s' not found." % self.module.params.get('resource_type'))
def update_resource_limit(self):
resource_limit = self.get_resource_limit()
args = {
'account': self.get_account(key='name'),
'domainid': self.get_domain(key='id'),
'projectid': self.get_project(key='id'),
'resourcetype': self.get_resource_type(),
'max': self.module.params.get('limit', -1)
}
if self.has_changed(args, resource_limit):
self.result['changed'] = True
if not self.module.check_mode:
res = self.query_api('updateResourceLimit', **args)
resource_limit = res['resourcelimit']
return resource_limit
def get_result(self, resource_limit):
self.result = super(AnsibleCloudStackResourceLimit, self).get_result(resource_limit)
self.result['resource_type'] = self.module.params.get('resource_type')
return self.result
def main():
argument_spec = cs_argument_spec()
argument_spec.update(dict(
resource_type=dict(required=True, choices=RESOURCE_TYPES.keys(), aliases=['type']),
limit=dict(default=-1, aliases=['max'], type='int'),
domain=dict(),
account=dict(),
project=dict(),
))
module = AnsibleModule(
argument_spec=argument_spec,
required_together=cs_required_together(),
supports_check_mode=True
)
acs_resource_limit = AnsibleCloudStackResourceLimit(module)
resource_limit = acs_resource_limit.update_resource_limit()
result = acs_resource_limit.get_result(resource_limit)
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
whitepages/nova | nova/virt/libvirt/volume/fs.py | 42 | 2957 | # Copyright 2015 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import os
import six
from nova import utils
from nova.virt.libvirt.volume import volume as libvirt_volume
@six.add_metaclass(abc.ABCMeta)
class LibvirtBaseFileSystemVolumeDriver(
libvirt_volume.LibvirtBaseVolumeDriver):
"""The base class for file system type volume drivers"""
def __init__(self, connection):
super(LibvirtBaseFileSystemVolumeDriver,
self).__init__(connection, is_block_dev=False)
@abc.abstractmethod
def _get_mount_point_base(self):
"""Return the mount point path prefix.
This is used to build the device path.
:returns: The mount point path prefix.
"""
raise NotImplementedError('_get_mount_point_base')
def _normalize_export(self, export):
"""Normalize the export (share) if necessary.
Subclasses should override this method if they have a non-standard
export value, e.g. if the export is a URL. By default this method just
returns the export value passed in unchanged.
:param export: The export (share) value to normalize.
:returns: The normalized export value.
"""
return export
def _get_mount_path(self, connection_info):
"""Returns the mount path prefix using the mount point base and share.
:param connection_info: dict of the form
::
connection_info = {
'data': {
'export': the file system share,
...
}
...
}
:returns: The mount path prefix.
"""
share = self._normalize_export(connection_info['data']['export'])
return os.path.join(self._get_mount_point_base(),
utils.get_hash_str(share))
def _get_device_path(self, connection_info):
"""Returns the hashed path to the device.
:param connection_info: dict of the form
::
connection_info = {
'data': {
'export': the file system share,
'name': the name of the device,
...
}
...
}
:returns: The full path to the device.
"""
mount_path = self._get_mount_path(connection_info)
return os.path.join(mount_path, connection_info['data']['name'])
| apache-2.0 |
markeTIC/l10n-spain | l10n_es_aeat_mod340/wizard/calculate_mod340_records.py | 1 | 7519 | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2011 Ting. All Rights Reserved
# Copyright (c) 2011-2013 Acysos S.L.(http://acysos.com)
# Ignacio Ibeas Izquierdo <ignacio@acysos.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import time
import re
from openerp.tools.translate import _
from openerp.osv import orm
from openerp.tools import DEFAULT_SERVER_DATETIME_FORMAT
class L10nEsAeatMod340CalculateRecords(orm.TransientModel):
_name = "l10n.es.aeat.mod340.calculate_records"
_description = u"AEAT Model 340 Wizard - Calculate Records"
def _calculate_records(self, cr, uid, ids, context=None, recalculate=True):
report_obj = self.pool['l10n.es.aeat.mod340.report']
mod340 = report_obj.browse(cr, uid, ids)[0]
invoices340 = self.pool['l10n.es.aeat.mod340.issued']
invoices340_rec = self.pool['l10n.es.aeat.mod340.received']
issued_obj = self.pool['l10n.es.aeat.mod340.tax_line_issued']
received_obj = self.pool['l10n.es.aeat.mod340.tax_line_received']
mod340.write({
'state': 'calculated',
'calculation_date': time.strftime(DEFAULT_SERVER_DATETIME_FORMAT)
})
if not mod340.company_id.partner_id.vat:
raise orm.except_orm(mod340.company_id.partner_id.name,
_('This company does not have a NIF'))
account_period_ids = [x.id for x in mod340.periods]
# Limpieza de las facturas calculadas anteriormente
del_ids = invoices340.search(cr, uid, [('mod340_id', '=', mod340.id)])
if del_ids:
invoices340.unlink(cr, uid, del_ids, context=context)
del_ids = invoices340_rec.search(cr, uid,
[('mod340_id', '=', mod340.id)])
if del_ids:
invoices340_rec.unlink(cr, uid, del_ids, context=context)
domain = [
('period_id', 'in', account_period_ids),
('state', 'in', ('open', 'paid'))
]
invoice_obj = self.pool['account.invoice']
invoice_ids = invoice_obj.search(cr, uid, domain, context=context)
for invoice in invoice_obj.browse(cr, uid, invoice_ids, context):
include = False
for tax_line in invoice.tax_line:
if tax_line.base_code_id and tax_line.base:
if tax_line.base_code_id.mod340:
include = True
break
if include:
if invoice.partner_id.vat_type == '1':
if not invoice.partner_id.vat:
raise orm.except_orm(
_('The following partner has no VAT number assigned:'),
invoice.partner_id.name)
if invoice.partner_id.vat:
country_code, nif = (
re.match(r"([A-Z]{0,2})(.*)",
invoice.partner_id.vat).groups())
else:
country_code = False
nif = False
values = {
'mod340_id': mod340.id,
'partner_id': invoice.partner_id.id,
'partner_vat': nif,
'representative_vat': '',
'partner_country_code': country_code,
'invoice_id': invoice.id,
'base_tax': invoice.cc_amount_untaxed,
'amount_tax': invoice.cc_amount_tax,
'total': invoice.cc_amount_total,
'date_invoice': invoice.date_invoice,
}
if invoice.type in ['out_refund', 'in_refund']:
values['base_tax'] *= -1
values['amount_tax'] *= -1
values['total'] *= -1
if invoice.type in ['out_invoice', 'out_refund']:
invoice_created = invoices340.create(cr, uid, values)
if invoice.type in ['in_invoice', 'in_refund']:
invoice_created = invoices340_rec.create(cr, uid, values)
tot_tax_invoice = 0
check_tax = 0
check_base = 0
# Add the invoices detail to the partner record
for tax_line in invoice.tax_line:
if tax_line.base_code_id and tax_line.base:
if tax_line.base_code_id.mod340:
tax_percentage = tax_line.amount/tax_line.base
values = {
'name': tax_line.name,
'tax_percentage': tax_percentage,
'tax_amount': tax_line.tax_amount,
'base_amount': tax_line.base_amount,
'invoice_record_id': invoice_created,
}
if invoice.type in ("out_invoice",
"out_refund"):
issued_obj.create(cr, uid, values)
if invoice.type in ("in_invoice",
"in_refund"):
received_obj.create(cr, uid, values)
tot_tax_invoice += tax_line.tax_amount
check_tax += tax_line.tax_amount
if tax_percentage >= 0:
check_base += tax_line.base_amount
if invoice.type in ['out_invoice', 'out_refund']:
invoices340.write(cr, uid, invoice_created,
{'amount_tax': tot_tax_invoice})
if invoice.type in ['in_invoice', 'in_refund']:
invoices340_rec.write(cr, uid, invoice_created,
{'amount_tax': tot_tax_invoice})
sign = 1
if invoice.type in ('out_refund', 'in_refund'):
sign = -1
if str(invoice.cc_amount_untaxed * sign) != str(check_base):
raise orm.except_orm(
"REVIEW INVOICE",
_('Invoice %s, Amount untaxed Lines %.2f do not '
'correspond to AmountUntaxed on Invoice %.2f') %
(invoice.number, check_base,
invoice.cc_amount_untaxed * sign))
if recalculate:
mod340.write({
'state': 'calculated',
'calculation_date':
time.strftime(DEFAULT_SERVER_DATETIME_FORMAT)
})
return True
| agpl-3.0 |
bmbove/omxremote | cherrypy/test/test_wsgi_ns.py | 12 | 2897 | import cherrypy
from cherrypy._cpcompat import ntob
from cherrypy.test import helper
class WSGI_Namespace_Test(helper.CPWebCase):
def setup_server():
class WSGIResponse(object):
def __init__(self, appresults):
self.appresults = appresults
self.iter = iter(appresults)
def __iter__(self):
return self
def next(self):
return self.iter.next()
def __next__(self):
return next(self.iter)
def close(self):
if hasattr(self.appresults, "close"):
self.appresults.close()
class ChangeCase(object):
def __init__(self, app, to=None):
self.app = app
self.to = to
def __call__(self, environ, start_response):
res = self.app(environ, start_response)
class CaseResults(WSGIResponse):
def next(this):
return getattr(this.iter.next(), self.to)()
def __next__(this):
return getattr(next(this.iter), self.to)()
return CaseResults(res)
class Replacer(object):
def __init__(self, app, map={}):
self.app = app
self.map = map
def __call__(self, environ, start_response):
res = self.app(environ, start_response)
class ReplaceResults(WSGIResponse):
def next(this):
line = this.iter.next()
for k, v in self.map.iteritems():
line = line.replace(k, v)
return line
def __next__(this):
line = next(this.iter)
for k, v in self.map.items():
line = line.replace(k, v)
return line
return ReplaceResults(res)
class Root(object):
def index(self):
return "HellO WoRlD!"
index.exposed = True
root_conf = {'wsgi.pipeline': [('replace', Replacer)],
'wsgi.replace.map': {ntob('L'): ntob('X'),
ntob('l'): ntob('r')},
}
app = cherrypy.Application(Root())
app.wsgiapp.pipeline.append(('changecase', ChangeCase))
app.wsgiapp.config['changecase'] = {'to': 'upper'}
cherrypy.tree.mount(app, config={'/': root_conf})
setup_server = staticmethod(setup_server)
def test_pipeline(self):
if not cherrypy.server.httpserver:
return self.skip()
self.getPage("/")
# If body is "HEXXO WORXD!", the middleware was applied out of order.
self.assertBody("HERRO WORRD!")
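The test above exercises CherryPy's `wsgi.pipeline` mechanism: each middleware wraps the application and post-processes its response iterable. The same wrapping pattern can be shown standalone, without CherryPy (the `upper_middleware` and `hello_app` names are illustrative, not part of any library):

```python
def upper_middleware(app):
    """Minimal WSGI middleware that upper-cases each chunk of the response body."""
    def wrapped(environ, start_response):
        # Lazily transform the wrapped app's output, preserving WSGI's iterable contract
        return (chunk.upper() for chunk in app(environ, start_response))
    return wrapped

def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'HellO WoRlD!']

body = b''.join(upper_middleware(hello_app)({}, lambda status, headers: None))
print(body)  # b'HELLO WORLD!'
```

Stacking several such wrappers in sequence is exactly what the `wsgi.pipeline` config entry does; order determines which transformation runs last.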
| bsd-3-clause |
abhi11/dak | dak/dakdb/update97.py | 7 | 2019 | #!/usr/bin/env python
# coding=utf8
"""
Create path entries for changelog exporting
@contact: Debian FTP Master <ftpmaster@debian.org>
@copyright: 2013 Luca Falavigna <dktrkranz@debian.org>
@license: GNU General Public License version 2 or later
"""
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
################################################################################
import psycopg2
from daklib.dak_exceptions import DBUpdateError
################################################################################
def do_update(self):
"""
Move changelogs related config values into projectb
"""
print __doc__
try:
c = self.db.cursor()
c.execute("ALTER TABLE archive ADD COLUMN changelog text NULL")
c.execute("UPDATE archive SET changelog = '/srv/ftp-master.debian.org/export/changelogs' WHERE name = 'ftp-master'")
c.execute("UPDATE archive SET changelog = '/srv/backports-master.debian.org/export/changelogs' WHERE name = 'backports'")
c.execute("DELETE FROM config WHERE name = 'exportpath'")
c.execute("UPDATE config SET value = '97' WHERE name = 'db_revision'")
self.db.commit()
except psycopg2.ProgrammingError as msg:
self.db.rollback()
raise DBUpdateError('Unable to apply table-column update 97, rollback issued. Error message : %s' % (str(msg)))
| gpl-2.0 |
gongyicoin/gongyicoin | qa/rpc-tests/util.py | 112 | 12330 | # Copyright (c) 2014 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
# Helpful routines for regression testing
#
# Add python-bitcoinrpc to module search path:
import os
import sys
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "python-bitcoinrpc"))
from decimal import Decimal, ROUND_DOWN
import json
import random
import shutil
import subprocess
import time
import re
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException
from util import *
def p2p_port(n):
return 11000 + n + os.getpid()%999
def rpc_port(n):
return 12000 + n + os.getpid()%999
def check_json_precision():
"""Make sure json library being used does not lose precision converting BTC values"""
n = Decimal("20000000.00000003")
satoshis = int(json.loads(json.dumps(float(n)))*1.0e8)
if satoshis != 2000000000000003:
raise RuntimeError("JSON encode/decode loses precision")
def sync_blocks(rpc_connections):
"""
Wait until everybody has the same block count
"""
while True:
counts = [ x.getblockcount() for x in rpc_connections ]
if counts == [ counts[0] ]*len(counts):
break
time.sleep(1)
def sync_mempools(rpc_connections):
"""
Wait until everybody has the same transactions in their memory
pools
"""
while True:
pool = set(rpc_connections[0].getrawmempool())
num_match = 1
for i in range(1, len(rpc_connections)):
if set(rpc_connections[i].getrawmempool()) == pool:
num_match = num_match+1
if num_match == len(rpc_connections):
break
time.sleep(1)
bitcoind_processes = {}
def initialize_datadir(dirname, n):
datadir = os.path.join(dirname, "node"+str(n))
if not os.path.isdir(datadir):
os.makedirs(datadir)
with open(os.path.join(datadir, "bitcoin.conf"), 'w') as f:
f.write("regtest=1\n")
f.write("rpcuser=rt\n")
f.write("rpcpassword=rt\n")
f.write("port="+str(p2p_port(n))+"\n")
f.write("rpcport="+str(rpc_port(n))+"\n")
return datadir
def initialize_chain(test_dir):
"""
Create (or copy from cache) a 200-block-long chain and
4 wallets.
bitcoind and bitcoin-cli must be in search path.
"""
if not os.path.isdir(os.path.join("cache", "node0")):
devnull = open("/dev/null", "w+")
# Create cache directories, run bitcoinds:
for i in range(4):
datadir=initialize_datadir("cache", i)
args = [ os.getenv("BITCOIND", "bitcoind"), "-keypool=1", "-datadir="+datadir, "-discover=0" ]
if i > 0:
args.append("-connect=127.0.0.1:"+str(p2p_port(0)))
bitcoind_processes[i] = subprocess.Popen(args)
subprocess.check_call([ os.getenv("BITCOINCLI", "bitcoin-cli"), "-datadir="+datadir,
"-rpcwait", "getblockcount"], stdout=devnull)
devnull.close()
rpcs = []
for i in range(4):
try:
url = "http://rt:rt@127.0.0.1:%d"%(rpc_port(i),)
rpcs.append(AuthServiceProxy(url))
except:
sys.stderr.write("Error connecting to "+url+"\n")
sys.exit(1)
# Create a 200-block-long chain; each of the 4 nodes
# gets 25 mature blocks and 25 immature.
# blocks are created with timestamps 10 minutes apart, starting
# at 1 Jan 2014
block_time = 1388534400
for i in range(2):
for peer in range(4):
for j in range(25):
set_node_times(rpcs, block_time)
rpcs[peer].setgenerate(True, 1)
block_time += 10*60
# Must sync before next peer starts generating blocks
sync_blocks(rpcs)
# Shut them down, and clean up cache directories:
stop_nodes(rpcs)
wait_bitcoinds()
for i in range(4):
os.remove(log_filename("cache", i, "debug.log"))
os.remove(log_filename("cache", i, "db.log"))
os.remove(log_filename("cache", i, "peers.dat"))
os.remove(log_filename("cache", i, "fee_estimates.dat"))
for i in range(4):
from_dir = os.path.join("cache", "node"+str(i))
to_dir = os.path.join(test_dir, "node"+str(i))
shutil.copytree(from_dir, to_dir)
initialize_datadir(test_dir, i) # Overwrite port/rpcport in bitcoin.conf
def initialize_chain_clean(test_dir, num_nodes):
"""
Create an empty blockchain and num_nodes wallets.
Useful if a test case wants complete control over initialization.
"""
for i in range(num_nodes):
datadir=initialize_datadir(test_dir, i)
def _rpchost_to_args(rpchost):
'''Convert optional IP:port spec to rpcconnect/rpcport args'''
if rpchost is None:
return []
match = re.match(r'(\[[0-9a-fA-F:]+\]|[^:]+)(?::([0-9]+))?$', rpchost)
if not match:
raise ValueError('Invalid RPC host spec ' + rpchost)
rpcconnect = match.group(1)
rpcport = match.group(2)
if rpcconnect.startswith('['): # remove IPv6 [...] wrapping
rpcconnect = rpcconnect[1:-1]
rv = ['-rpcconnect=' + rpcconnect]
if rpcport:
rv += ['-rpcport=' + rpcport]
return rv
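The parsing logic above accepts either a bracketed IPv6 literal or a plain host, with an optional port. A self-contained sketch of the same behaviour (the standalone `rpchost_to_args` name mirrors the module-private helper above for illustration):

```python
import re

def rpchost_to_args(rpchost):
    """Convert an optional IP:port spec into -rpcconnect/-rpcport CLI args."""
    if rpchost is None:
        return []
    # Accept '[::1]:8332', '127.0.0.1', 'host:8332', etc.
    match = re.match(r'(\[[0-9a-fA-F:]+\]|[^:]+)(?::([0-9]+))?$', rpchost)
    if not match:
        raise ValueError('Invalid RPC host spec ' + rpchost)
    rpcconnect, rpcport = match.group(1), match.group(2)
    if rpcconnect.startswith('['):  # strip the IPv6 [...] wrapping
        rpcconnect = rpcconnect[1:-1]
    rv = ['-rpcconnect=' + rpcconnect]
    if rpcport:
        rv += ['-rpcport=' + rpcport]
    return rv

print(rpchost_to_args('[::1]:8332'))  # ['-rpcconnect=::1', '-rpcport=8332']
```

The brackets are required for IPv6 because a bare `::1:8332` would make the port indistinguishable from an address group.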
def start_node(i, dirname, extra_args=None, rpchost=None):
"""
Start a bitcoind and return RPC connection to it
"""
datadir = os.path.join(dirname, "node"+str(i))
args = [ os.getenv("BITCOIND", "bitcoind"), "-datadir="+datadir, "-keypool=1", "-discover=0", "-rest" ]
if extra_args is not None: args.extend(extra_args)
bitcoind_processes[i] = subprocess.Popen(args)
devnull = open("/dev/null", "w+")
subprocess.check_call([ os.getenv("BITCOINCLI", "bitcoin-cli"), "-datadir="+datadir] +
_rpchost_to_args(rpchost) +
["-rpcwait", "getblockcount"], stdout=devnull)
devnull.close()
url = "http://rt:rt@%s:%d" % (rpchost or '127.0.0.1', rpc_port(i))
proxy = AuthServiceProxy(url)
proxy.url = url # store URL on proxy for info
return proxy
def start_nodes(num_nodes, dirname, extra_args=None, rpchost=None):
"""
Start multiple bitcoinds, return RPC connections to them
"""
if extra_args is None: extra_args = [ None for i in range(num_nodes) ]
return [ start_node(i, dirname, extra_args[i], rpchost) for i in range(num_nodes) ]
def log_filename(dirname, n_node, logname):
return os.path.join(dirname, "node"+str(n_node), "regtest", logname)
def stop_node(node, i):
node.stop()
bitcoind_processes[i].wait()
del bitcoind_processes[i]
def stop_nodes(nodes):
for node in nodes:
node.stop()
del nodes[:] # Emptying array closes connections as a side effect
def set_node_times(nodes, t):
for node in nodes:
node.setmocktime(t)
def wait_bitcoinds():
# Wait for all bitcoinds to cleanly exit
for bitcoind in bitcoind_processes.values():
bitcoind.wait()
bitcoind_processes.clear()
def connect_nodes(from_connection, node_num):
ip_port = "127.0.0.1:"+str(p2p_port(node_num))
from_connection.addnode(ip_port, "onetry")
# poll until version handshake complete to avoid race conditions
# with transaction relaying
while any(peer['version'] == 0 for peer in from_connection.getpeerinfo()):
time.sleep(0.1)
def connect_nodes_bi(nodes, a, b):
connect_nodes(nodes[a], b)
connect_nodes(nodes[b], a)
def find_output(node, txid, amount):
"""
Return index to output of txid with value amount
Raises exception if there is none.
"""
txdata = node.getrawtransaction(txid, 1)
for i in range(len(txdata["vout"])):
if txdata["vout"][i]["value"] == amount:
return i
raise RuntimeError("find_output txid %s : %s not found"%(txid,str(amount)))
def gather_inputs(from_node, amount_needed, confirmations_required=1):
"""
Return a random set of unspent txouts that are enough to pay amount_needed
"""
assert(confirmations_required >=0)
utxo = from_node.listunspent(confirmations_required)
random.shuffle(utxo)
inputs = []
total_in = Decimal("0.00000000")
while total_in < amount_needed and len(utxo) > 0:
t = utxo.pop()
total_in += t["amount"]
inputs.append({ "txid" : t["txid"], "vout" : t["vout"], "address" : t["address"] } )
if total_in < amount_needed:
raise RuntimeError("Insufficient funds: need %d, have %d"%(amount_needed, total_in))
return (total_in, inputs)
def make_change(from_node, amount_in, amount_out, fee):
"""
Create change output(s), return them
"""
outputs = {}
amount = amount_out+fee
change = amount_in - amount
if change > amount*2:
# Create an extra change output to break up big inputs
change_address = from_node.getnewaddress()
# Split change in two, being careful of rounding:
outputs[change_address] = Decimal(change/2).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN)
change = amount_in - amount - outputs[change_address]
if change > 0:
outputs[from_node.getnewaddress()] = change
return outputs
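The change-splitting arithmetic above can be demonstrated without a running node. This sketch substitutes placeholder address strings for `getnewaddress()` calls (the `split_change` name and the placeholder keys are assumptions for illustration only):

```python
from decimal import Decimal, ROUND_DOWN

def split_change(amount_in, amount_out, fee):
    """Return change outputs, splitting unusually large change into two pieces."""
    outputs = {}
    amount = amount_out + fee
    change = amount_in - amount
    if change > amount * 2:
        # Round half of the change down to 8 decimal places (satoshi precision)
        half = Decimal(change / 2).quantize(Decimal('0.00000001'),
                                            rounding=ROUND_DOWN)
        outputs['change_addr_1'] = half
        # Remainder absorbs any rounding loss, so the totals still balance
        change = amount_in - amount - half
    if change > 0:
        outputs['change_addr_2'] = change
    return outputs

print(split_change(Decimal('10'), Decimal('1'), Decimal('0.001')))
```

Rounding only one half down and deriving the other from the remainder guarantees that outputs plus payment plus fee equals the input total exactly.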
def send_zeropri_transaction(from_node, to_node, amount, fee):
"""
Create & broadcast a zero-priority transaction.
Returns (txid, hex-encoded-txdata).
Ensures the transaction is zero-priority by first creating a send-to-self,
then spending its output.
"""
# Create a send-to-self with confirmed inputs:
self_address = from_node.getnewaddress()
(total_in, inputs) = gather_inputs(from_node, amount+fee*2)
outputs = make_change(from_node, total_in, amount+fee, fee)
outputs[self_address] = float(amount+fee)
self_rawtx = from_node.createrawtransaction(inputs, outputs)
self_signresult = from_node.signrawtransaction(self_rawtx)
self_txid = from_node.sendrawtransaction(self_signresult["hex"], True)
vout = find_output(from_node, self_txid, amount+fee)
# Now immediately spend the output to create a 1-input, 1-output
# zero-priority transaction:
inputs = [ { "txid" : self_txid, "vout" : vout } ]
outputs = { to_node.getnewaddress() : float(amount) }
rawtx = from_node.createrawtransaction(inputs, outputs)
signresult = from_node.signrawtransaction(rawtx)
txid = from_node.sendrawtransaction(signresult["hex"], True)
return (txid, signresult["hex"])
def random_zeropri_transaction(nodes, amount, min_fee, fee_increment, fee_variants):
"""
Create a random zero-priority transaction.
Returns (txid, hex-encoded-transaction-data, fee)
"""
from_node = random.choice(nodes)
to_node = random.choice(nodes)
fee = min_fee + fee_increment*random.randint(0,fee_variants)
(txid, txhex) = send_zeropri_transaction(from_node, to_node, amount, fee)
return (txid, txhex, fee)
def random_transaction(nodes, amount, min_fee, fee_increment, fee_variants):
"""
Create a random transaction.
Returns (txid, hex-encoded-transaction-data, fee)
"""
from_node = random.choice(nodes)
to_node = random.choice(nodes)
fee = min_fee + fee_increment*random.randint(0,fee_variants)
(total_in, inputs) = gather_inputs(from_node, amount+fee)
outputs = make_change(from_node, total_in, amount, fee)
outputs[to_node.getnewaddress()] = float(amount)
rawtx = from_node.createrawtransaction(inputs, outputs)
signresult = from_node.signrawtransaction(rawtx)
txid = from_node.sendrawtransaction(signresult["hex"], True)
return (txid, signresult["hex"], fee)
def assert_equal(thing1, thing2):
if thing1 != thing2:
raise AssertionError("%s != %s"%(str(thing1),str(thing2)))
def assert_greater_than(thing1, thing2):
if thing1 <= thing2:
raise AssertionError("%s <= %s"%(str(thing1),str(thing2)))
def assert_raises(exc, fun, *args, **kwds):
try:
fun(*args, **kwds)
except exc:
pass
except Exception as e:
raise AssertionError("Unexpected exception raised: "+type(e).__name__)
else:
raise AssertionError("No exception raised")
| mit |
imzcy/JavaScriptExecutable | thirdparty/v8/src/tools/testrunner/server/main.py | 111 | 8960 | # Copyright 2012 the V8 project authors. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import multiprocessing
import os
import shutil
import subprocess
import threading
import time
from . import daemon
from . import local_handler
from . import presence_handler
from . import signatures
from . import status_handler
from . import work_handler
from ..network import perfdata
class Server(daemon.Daemon):
def __init__(self, pidfile, root, stdin="/dev/null",
stdout="/dev/null", stderr="/dev/null"):
super(Server, self).__init__(pidfile, stdin, stdout, stderr)
self.root = root
self.local_handler = None
self.local_handler_thread = None
self.work_handler = None
self.work_handler_thread = None
self.status_handler = None
self.status_handler_thread = None
self.presence_daemon = None
self.presence_daemon_thread = None
self.peers = []
self.jobs = multiprocessing.cpu_count()
self.peer_list_lock = threading.Lock()
self.perf_data_lock = None
self.presence_daemon_lock = None
self.datadir = os.path.join(self.root, "data")
pubkey_fingerprint_filename = os.path.join(self.datadir, "mypubkey")
with open(pubkey_fingerprint_filename) as f:
self.pubkey_fingerprint = f.read().strip()
self.relative_perf_filename = os.path.join(self.datadir, "myperf")
if os.path.exists(self.relative_perf_filename):
with open(self.relative_perf_filename) as f:
try:
self.relative_perf = float(f.read())
except ValueError:
self.relative_perf = 1.0
else:
self.relative_perf = 1.0
def run(self):
os.nice(20)
self.ip = presence_handler.GetOwnIP()
self.perf_data_manager = perfdata.PerfDataManager(self.datadir)
self.perf_data_lock = threading.Lock()
self.local_handler = local_handler.LocalSocketServer(self)
self.local_handler_thread = threading.Thread(
target=self.local_handler.serve_forever)
self.local_handler_thread.start()
self.work_handler = work_handler.WorkSocketServer(self)
self.work_handler_thread = threading.Thread(
target=self.work_handler.serve_forever)
self.work_handler_thread.start()
self.status_handler = status_handler.StatusSocketServer(self)
self.status_handler_thread = threading.Thread(
target=self.status_handler.serve_forever)
self.status_handler_thread.start()
self.presence_daemon = presence_handler.PresenceDaemon(self)
self.presence_daemon_thread = threading.Thread(
target=self.presence_daemon.serve_forever)
self.presence_daemon_thread.start()
self.presence_daemon.FindPeers()
time.sleep(0.5) # Give those peers some time to reply.
with self.peer_list_lock:
for p in self.peers:
if p.address == self.ip: continue
status_handler.RequestTrustedPubkeys(p, self)
while True:
try:
self.PeriodicTasks()
time.sleep(60)
except Exception as e:
print("MAIN LOOP EXCEPTION: %s" % e)
self.Shutdown()
break
except KeyboardInterrupt:
self.Shutdown()
break
def Shutdown(self):
with open(self.relative_perf_filename, "w") as f:
f.write("%s" % self.relative_perf)
self.presence_daemon.shutdown()
self.presence_daemon.server_close()
self.local_handler.shutdown()
self.local_handler.server_close()
self.work_handler.shutdown()
self.work_handler.server_close()
self.status_handler.shutdown()
self.status_handler.server_close()
def PeriodicTasks(self):
# If we know peers we don't trust, see if someone else trusts them.
with self.peer_list_lock:
for p in self.peers:
if p.trusted: continue
if self.IsTrusted(p.pubkey):
p.trusted = True
status_handler.ITrustYouNow(p)
continue
for p2 in self.peers:
if not p2.trusted: continue
status_handler.TryTransitiveTrust(p2, p.pubkey, self)
# TODO: Ping for more peers waiting to be discovered.
# TODO: Update the checkout (if currently idle).
def AddPeer(self, peer):
with self.peer_list_lock:
for p in self.peers:
if p.address == peer.address:
return
self.peers.append(peer)
if peer.trusted:
status_handler.ITrustYouNow(peer)
def DeletePeer(self, peer_address):
with self.peer_list_lock:
for i in xrange(len(self.peers)):
if self.peers[i].address == peer_address:
del self.peers[i]
return
def MarkPeerAsTrusting(self, peer_address):
with self.peer_list_lock:
for p in self.peers:
if p.address == peer_address:
p.trusting_me = True
break
def UpdatePeerPerformance(self, peer_address, performance):
with self.peer_list_lock:
for p in self.peers:
if p.address == peer_address:
p.relative_performance = performance
def CopyToTrusted(self, pubkey_filename):
with open(pubkey_filename, "r") as f:
lines = f.readlines()
fingerprint = lines[-1].strip()
target_filename = self._PubkeyFilename(fingerprint)
shutil.copy(pubkey_filename, target_filename)
with self.peer_list_lock:
for peer in self.peers:
if peer.address == self.ip: continue
if peer.pubkey == fingerprint:
status_handler.ITrustYouNow(peer)
else:
result = self.SignTrusted(fingerprint)
status_handler.NotifyNewTrusted(peer, result)
return fingerprint
def _PubkeyFilename(self, pubkey_fingerprint):
return os.path.join(self.root, "trusted", "%s.pem" % pubkey_fingerprint)
def IsTrusted(self, pubkey_fingerprint):
return os.path.exists(self._PubkeyFilename(pubkey_fingerprint))
def ListTrusted(self):
path = os.path.join(self.root, "trusted")
if not os.path.exists(path): return []
return [ f[:-4] for f in os.listdir(path) if f.endswith(".pem") ]
def SignTrusted(self, pubkey_fingerprint):
if not self.IsTrusted(pubkey_fingerprint):
return []
filename = self._PubkeyFilename(pubkey_fingerprint)
result = signatures.ReadFileAndSignature(filename) # Format: [key, sig].
return [pubkey_fingerprint, result[0], result[1], self.pubkey_fingerprint]
def AcceptNewTrusted(self, data):
# The format of |data| matches the return value of |SignTrusted()|.
if not data: return
fingerprint = data[0]
pubkey = data[1]
signature = data[2]
signer = data[3]
if not self.IsTrusted(signer):
return
if self.IsTrusted(fingerprint):
return # Already trust this guy.
filename = self._PubkeyFilename(fingerprint)
signer_pubkeyfile = self._PubkeyFilename(signer)
if not signatures.VerifySignature(filename, pubkey, signature,
signer_pubkeyfile):
return
return # Nothing more to do.
def AddPerfData(self, test_key, duration, arch, mode):
data_store = self.perf_data_manager.GetStore(arch, mode)
data_store.RawUpdatePerfData(str(test_key), duration)
def CompareOwnPerf(self, test, arch, mode):
data_store = self.perf_data_manager.GetStore(arch, mode)
observed = data_store.FetchPerfData(test)
if not observed: return
own_perf_estimate = observed / test.duration
with self.perf_data_lock:
kLearnRateLimiter = 9999
self.relative_perf *= kLearnRateLimiter
self.relative_perf += own_perf_estimate
self.relative_perf /= (kLearnRateLimiter + 1)
| mit |
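The `CompareOwnPerf` method above keeps `relative_perf` as a heavily damped running average: the old estimate carries weight `kLearnRateLimiter`, each new observation weight 1. A minimal standalone sketch of that update rule (the function name is mine, not from the V8 sources):

```python
def update_relative_perf(current, observed, limiter=9999):
    # Weighted running average: the old estimate carries weight `limiter`,
    # the new observation weight 1, so a single outlier barely moves the value.
    return (current * limiter + observed) / (limiter + 1)

print(update_relative_perf(1.0, 2.0))  # 1.0001 — one fast run nudges the estimate slightly
```

With `limiter=9999` it takes thousands of consistent observations to move the estimate appreciably, which is the point: per-test timing noise should not swing a peer's advertised performance.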
rs2/pandas | ci/print_skipped.py | 8 | 1064 | #!/usr/bin/env python3
import os
import xml.etree.ElementTree as et
def main(filename):
if not os.path.isfile(filename):
raise RuntimeError(f"Could not find junit file {repr(filename)}")
tree = et.parse(filename)
root = tree.getroot()
current_class = ""
for el in root.iter("testcase"):
cn = el.attrib["classname"]
for sk in el.findall("skipped"):
old_class = current_class
current_class = cn
if old_class != current_class:
yield None
yield {
"class_name": current_class,
"test_name": el.attrib["name"],
"message": sk.attrib["message"],
}
if __name__ == "__main__":
print("SKIPPED TESTS:")
i = 1
for test_data in main("test-data.xml"):
if test_data is None:
print("-" * 80)
else:
print(
f"#{i} {test_data['class_name']}."
f"{test_data['test_name']}: {test_data['message']}"
)
i += 1
| bsd-3-clause |
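The pandas script above walks a junit XML report with `xml.etree.ElementTree`, collecting `<skipped>` children of each `testcase`. A self-contained sketch of the same traversal on an inline document (the sample XML is illustrative):

```python
import xml.etree.ElementTree as et

junit = (
    '<testsuite>'
    '<testcase classname="pkg.TestA" name="t1"><skipped message="no network"/></testcase>'
    '<testcase classname="pkg.TestA" name="t2"/>'
    '</testsuite>'
)
root = et.fromstring(junit)
# iter("testcase") finds every testcase element; findall("skipped") keeps
# only the skipped ones, so passing tests contribute nothing.
skips = [
    (tc.attrib["classname"], tc.attrib["name"], sk.attrib["message"])
    for tc in root.iter("testcase")
    for sk in tc.findall("skipped")
]
print(skips)  # [('pkg.TestA', 't1', 'no network')]
```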
pkirchhofer/nsa325-kernel | tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Core.py | 11088 | 3246 | # Core.py - Python extension for perf script, core functions
#
# Copyright (C) 2010 by Tom Zanussi <tzanussi@gmail.com>
#
# This software may be distributed under the terms of the GNU General
# Public License ("GPL") version 2 as published by the Free Software
# Foundation.
from collections import defaultdict
def autodict():
return defaultdict(autodict)
flag_fields = autodict()
symbolic_fields = autodict()
def define_flag_field(event_name, field_name, delim):
flag_fields[event_name][field_name]['delim'] = delim
def define_flag_value(event_name, field_name, value, field_str):
flag_fields[event_name][field_name]['values'][value] = field_str
def define_symbolic_field(event_name, field_name):
# nothing to do, really
pass
def define_symbolic_value(event_name, field_name, value, field_str):
symbolic_fields[event_name][field_name]['values'][value] = field_str
def flag_str(event_name, field_name, value):
string = ""
if flag_fields[event_name][field_name]:
print_delim = 0
keys = sorted(flag_fields[event_name][field_name]['values'].keys())
for idx in keys:
if not value and not idx:
string += flag_fields[event_name][field_name]['values'][idx]
break
if idx and (value & idx) == idx:
if print_delim and flag_fields[event_name][field_name]['delim']:
string += " " + flag_fields[event_name][field_name]['delim'] + " "
string += flag_fields[event_name][field_name]['values'][idx]
print_delim = 1
value &= ~idx
return string
def symbol_str(event_name, field_name, value):
string = ""
if symbolic_fields[event_name][field_name]:
keys = sorted(symbolic_fields[event_name][field_name]['values'].keys())
for idx in keys:
if not value and not idx:
string = symbolic_fields[event_name][field_name]['values'][idx]
break
if (value == idx):
string = symbolic_fields[event_name][field_name]['values'][idx]
break
return string
trace_flags = { 0x00: "NONE", \
0x01: "IRQS_OFF", \
0x02: "IRQS_NOSUPPORT", \
0x04: "NEED_RESCHED", \
0x08: "HARDIRQ", \
0x10: "SOFTIRQ" }
def trace_flag_str(value):
string = ""
print_delim = 0
keys = trace_flags.keys()
for idx in keys:
if not value and not idx:
string += "NONE"
break
if idx and (value & idx) == idx:
if print_delim:
string += " | "
string += trace_flags[idx]
print_delim = 1
value &= ~idx
return string
def taskState(state):
states = {
0 : "R",
1 : "S",
2 : "D",
64: "DEAD"
}
if state not in states:
return "Unknown"
return states[state]
class EventHeaders:
def __init__(self, common_cpu, common_secs, common_nsecs,
common_pid, common_comm):
self.cpu = common_cpu
self.secs = common_secs
self.nsecs = common_nsecs
self.pid = common_pid
self.comm = common_comm
def ts(self):
return (self.secs * (10 ** 9)) + self.nsecs
def ts_format(self):
return "%d.%d" % (self.secs, int(self.nsecs / 1000))
| gpl-2.0 |
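`trace_flag_str` above decodes a bitmask by walking the flag table in order, masking off each matched bit. The same logic as a standalone Python 3 function (reimplemented here for illustration; `decode_flags` is not part of the perf scripts):

```python
trace_flags = {0x00: "NONE", 0x01: "IRQS_OFF", 0x02: "IRQS_NOSUPPORT",
               0x04: "NEED_RESCHED", 0x08: "HARDIRQ", 0x10: "SOFTIRQ"}

def decode_flags(value):
    # Walk the bit table in order, collecting names for bits that are set.
    parts = []
    for bit, name in trace_flags.items():
        if not value and not bit:
            return name            # value 0 maps to the dedicated "NONE" entry
        if bit and (value & bit) == bit:
            parts.append(name)
            value &= ~bit          # clear the bit so leftovers could be detected
    return " | ".join(parts)

print(decode_flags(0x05))  # IRQS_OFF | NEED_RESCHED
```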
butson/xmltodict | setup.py | 8 | 1392 | #!/usr/bin/env python
try:
from setuptools import setup
except ImportError:
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup
import xmltodict
setup(name='xmltodict',
version=xmltodict.__version__,
description=xmltodict.__doc__,
author=xmltodict.__author__,
author_email='martinblech@gmail.com',
url='https://github.com/martinblech/xmltodict',
license=xmltodict.__license__,
platforms=['all'],
classifiers=[
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.5',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: Implementation :: Jython',
'Programming Language :: Python :: Implementation :: PyPy',
'Topic :: Text Processing :: Markup :: XML',
],
py_modules=['xmltodict'],
tests_require=['nose>=1.0', 'coverage'],
)
| mit |
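`xmltodict`, packaged by the setup script above, converts between XML and nested dictionaries. A toy stdlib-only sketch of the core idea — unlike the real library, this ignores attributes and repeated tags:

```python
import xml.etree.ElementTree as et

def xml_to_dict(elem):
    # Leaf elements collapse to their text; branches become {tag: child} maps.
    children = list(elem)
    if not children:
        return elem.text
    return {child.tag: xml_to_dict(child) for child in children}

root = et.fromstring("<config><host>localhost</host><port>8080</port></config>")
print(xml_to_dict(root))  # {'host': 'localhost', 'port': '8080'}
```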
tntC4stl3/scrapy | tests/test_utils_iterators.py | 79 | 14864 | import os
from twisted.trial import unittest
from scrapy.utils.iterators import csviter, xmliter, _body_or_str, xmliter_lxml
from scrapy.http import XmlResponse, TextResponse, Response
from tests import get_testdata
FOOBAR_NL = u"foo" + os.linesep + u"bar"
class XmliterTestCase(unittest.TestCase):
xmliter = staticmethod(xmliter)
def test_xmliter(self):
body = b"""<?xml version="1.0" encoding="UTF-8"?>\
<products xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="someschmea.xsd">\
<product id="001">\
<type>Type 1</type>\
<name>Name 1</name>\
</product>\
<product id="002">\
<type>Type 2</type>\
<name>Name 2</name>\
</product>\
</products>"""
response = XmlResponse(url="http://example.com", body=body)
attrs = []
for x in self.xmliter(response, 'product'):
attrs.append((x.xpath("@id").extract(), x.xpath("name/text()").extract(), x.xpath("./type/text()").extract()))
self.assertEqual(attrs,
[(['001'], ['Name 1'], ['Type 1']), (['002'], ['Name 2'], ['Type 2'])])
def test_xmliter_text(self):
body = u"""<?xml version="1.0" encoding="UTF-8"?><products><product>one</product><product>two</product></products>"""
self.assertEqual([x.xpath("text()").extract() for x in self.xmliter(body, 'product')],
[[u'one'], [u'two']])
def test_xmliter_namespaces(self):
body = b"""\
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
<channel>
<title>My Dummy Company</title>
<link>http://www.mydummycompany.com</link>
<description>This is a dummy company. We do nothing.</description>
<item>
<title>Item 1</title>
<description>This is item 1</description>
<link>http://www.mydummycompany.com/items/1</link>
<g:image_link>http://www.mydummycompany.com/images/item1.jpg</g:image_link>
<g:id>ITEM_1</g:id>
<g:price>400</g:price>
</item>
</channel>
</rss>
"""
response = XmlResponse(url='http://mydummycompany.com', body=body)
my_iter = self.xmliter(response, 'item')
node = next(my_iter)
node.register_namespace('g', 'http://base.google.com/ns/1.0')
self.assertEqual(node.xpath('title/text()').extract(), ['Item 1'])
self.assertEqual(node.xpath('description/text()').extract(), ['This is item 1'])
self.assertEqual(node.xpath('link/text()').extract(), ['http://www.mydummycompany.com/items/1'])
self.assertEqual(node.xpath('g:image_link/text()').extract(), ['http://www.mydummycompany.com/images/item1.jpg'])
self.assertEqual(node.xpath('g:id/text()').extract(), ['ITEM_1'])
self.assertEqual(node.xpath('g:price/text()').extract(), ['400'])
self.assertEqual(node.xpath('image_link/text()').extract(), [])
self.assertEqual(node.xpath('id/text()').extract(), [])
self.assertEqual(node.xpath('price/text()').extract(), [])
def test_xmliter_exception(self):
body = u"""<?xml version="1.0" encoding="UTF-8"?><products><product>one</product><product>two</product></products>"""
iter = self.xmliter(body, 'product')
next(iter)
next(iter)
self.assertRaises(StopIteration, next, iter)
def test_xmliter_encoding(self):
body = b'<?xml version="1.0" encoding="ISO-8859-9"?>\n<xml>\n <item>Some Turkish Characters \xd6\xc7\xde\xdd\xd0\xdc \xfc\xf0\xfd\xfe\xe7\xf6</item>\n</xml>\n\n'
response = XmlResponse('http://www.example.com', body=body)
self.assertEqual(
next(self.xmliter(response, 'item')).extract(),
u'<item>Some Turkish Characters \xd6\xc7\u015e\u0130\u011e\xdc \xfc\u011f\u0131\u015f\xe7\xf6</item>'
)
class LxmlXmliterTestCase(XmliterTestCase):
xmliter = staticmethod(xmliter_lxml)
def test_xmliter_iterate_namespace(self):
body = b"""\
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns="http://base.google.com/ns/1.0">
<channel>
<title>My Dummy Company</title>
<link>http://www.mydummycompany.com</link>
<description>This is a dummy company. We do nothing.</description>
<item>
<title>Item 1</title>
<description>This is item 1</description>
<link>http://www.mydummycompany.com/items/1</link>
<image_link>http://www.mydummycompany.com/images/item1.jpg</image_link>
<image_link>http://www.mydummycompany.com/images/item2.jpg</image_link>
</item>
</channel>
</rss>
"""
response = XmlResponse(url='http://mydummycompany.com', body=body)
no_namespace_iter = self.xmliter(response, 'image_link')
self.assertEqual(len(list(no_namespace_iter)), 0)
namespace_iter = self.xmliter(response, 'image_link', 'http://base.google.com/ns/1.0')
node = next(namespace_iter)
self.assertEqual(node.xpath('text()').extract(), ['http://www.mydummycompany.com/images/item1.jpg'])
node = next(namespace_iter)
self.assertEqual(node.xpath('text()').extract(), ['http://www.mydummycompany.com/images/item2.jpg'])
def test_xmliter_namespaces_prefix(self):
body = b"""\
<?xml version="1.0" encoding="UTF-8"?>
<root>
<h:table xmlns:h="http://www.w3.org/TR/html4/">
<h:tr>
<h:td>Apples</h:td>
<h:td>Bananas</h:td>
</h:tr>
</h:table>
<f:table xmlns:f="http://www.w3schools.com/furniture">
<f:name>African Coffee Table</f:name>
<f:width>80</f:width>
<f:length>120</f:length>
</f:table>
</root>
"""
response = XmlResponse(url='http://mydummycompany.com', body=body)
my_iter = self.xmliter(response, 'table', 'http://www.w3.org/TR/html4/', 'h')
node = next(my_iter)
self.assertEqual(len(node.xpath('h:tr/h:td').extract()), 2)
self.assertEqual(node.xpath('h:tr/h:td[1]/text()').extract(), ['Apples'])
self.assertEqual(node.xpath('h:tr/h:td[2]/text()').extract(), ['Bananas'])
my_iter = self.xmliter(response, 'table', 'http://www.w3schools.com/furniture', 'f')
node = next(my_iter)
self.assertEqual(node.xpath('f:name/text()').extract(), ['African Coffee Table'])
class UtilsCsvTestCase(unittest.TestCase):
sample_feeds_dir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'sample_data', 'feeds')
sample_feed_path = os.path.join(sample_feeds_dir, 'feed-sample3.csv')
sample_feed2_path = os.path.join(sample_feeds_dir, 'feed-sample4.csv')
sample_feed3_path = os.path.join(sample_feeds_dir, 'feed-sample5.csv')
def test_csviter_defaults(self):
body = get_testdata('feeds', 'feed-sample3.csv')
response = TextResponse(url="http://example.com/", body=body)
csv = csviter(response)
result = [row for row in csv]
self.assertEqual(result,
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
# explicit type check cuz' we no like stinkin' autocasting! yarrr
for result_row in result:
self.assert_(all((isinstance(k, unicode) for k in result_row.keys())))
self.assert_(all((isinstance(v, unicode) for v in result_row.values())))
def test_csviter_delimiter(self):
body = get_testdata('feeds', 'feed-sample3.csv').replace(',', '\t')
response = TextResponse(url="http://example.com/", body=body)
csv = csviter(response, delimiter='\t')
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
def test_csviter_quotechar(self):
body1 = get_testdata('feeds', 'feed-sample6.csv')
body2 = get_testdata('feeds', 'feed-sample6.csv').replace(",", '|')
response1 = TextResponse(url="http://example.com/", body=body1)
csv1 = csviter(response1, quotechar="'")
self.assertEqual([row for row in csv1],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
response2 = TextResponse(url="http://example.com/", body=body2)
csv2 = csviter(response2, delimiter="|", quotechar="'")
self.assertEqual([row for row in csv2],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
def test_csviter_wrong_quotechar(self):
body = get_testdata('feeds', 'feed-sample6.csv')
response = TextResponse(url="http://example.com/", body=body)
csv = csviter(response)
self.assertEqual([row for row in csv],
[{u"'id'": u"1", u"'name'": u"'alpha'", u"'value'": u"'foobar'"},
{u"'id'": u"2", u"'name'": u"'unicode'", u"'value'": u"'\xfan\xedc\xf3d\xe9\u203d'"},
{u"'id'": u"'3'", u"'name'": u"'multi'", u"'value'": u"'foo"},
{u"'id'": u"4", u"'name'": u"'empty'", u"'value'": u""}])
def test_csviter_delimiter_binary_response_assume_utf8_encoding(self):
body = get_testdata('feeds', 'feed-sample3.csv').replace(',', '\t')
response = Response(url="http://example.com/", body=body)
csv = csviter(response, delimiter='\t')
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
def test_csviter_headers(self):
sample = get_testdata('feeds', 'feed-sample3.csv').splitlines()
headers, body = sample[0].split(','), '\n'.join(sample[1:])
response = TextResponse(url="http://example.com/", body=body)
csv = csviter(response, headers=headers)
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': u'foo\nbar'},
{u'id': u'4', u'name': u'empty', u'value': u''}])
def test_csviter_falserow(self):
body = get_testdata('feeds', 'feed-sample3.csv')
body = '\n'.join((body, 'a,b', 'a,b,c,d'))
response = TextResponse(url="http://example.com/", body=body)
csv = csviter(response)
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'alpha', u'value': u'foobar'},
{u'id': u'2', u'name': u'unicode', u'value': u'\xfan\xedc\xf3d\xe9\u203d'},
{u'id': u'3', u'name': u'multi', u'value': FOOBAR_NL},
{u'id': u'4', u'name': u'empty', u'value': u''}])
def test_csviter_exception(self):
body = get_testdata('feeds', 'feed-sample3.csv')
response = TextResponse(url="http://example.com/", body=body)
iter = csviter(response)
next(iter)
next(iter)
next(iter)
next(iter)
self.assertRaises(StopIteration, next, iter)
def test_csviter_encoding(self):
body1 = get_testdata('feeds', 'feed-sample4.csv')
body2 = get_testdata('feeds', 'feed-sample5.csv')
response = TextResponse(url="http://example.com/", body=body1, encoding='latin1')
csv = csviter(response)
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'latin1', u'value': u'test'},
{u'id': u'2', u'name': u'something', u'value': u'\xf1\xe1\xe9\xf3'}])
response = TextResponse(url="http://example.com/", body=body2, encoding='cp852')
csv = csviter(response)
self.assertEqual([row for row in csv],
[{u'id': u'1', u'name': u'cp852', u'value': u'test'},
{u'id': u'2', u'name': u'something', u'value': u'\u255a\u2569\u2569\u2569\u2550\u2550\u2557'}])
class TestHelper(unittest.TestCase):
bbody = b'utf8-body'
ubody = bbody.decode('utf8')
txtresponse = TextResponse(url='http://example.org/', body=bbody, encoding='utf-8')
response = Response(url='http://example.org/', body=bbody)
def test_body_or_str(self):
for obj in (self.bbody, self.ubody, self.txtresponse, self.response):
r1 = _body_or_str(obj)
self._assert_type_and_value(r1, self.ubody, obj)
r2 = _body_or_str(obj, unicode=True)
self._assert_type_and_value(r2, self.ubody, obj)
r3 = _body_or_str(obj, unicode=False)
self._assert_type_and_value(r3, self.bbody, obj)
self.assertTrue(type(r1) is type(r2))
self.assertTrue(type(r1) is not type(r3))
def _assert_type_and_value(self, a, b, obj):
self.assertTrue(type(a) is type(b),
'Got {}, expected {} for {!r}'.format(type(a), type(b), obj))
self.assertEqual(a, b)
if __name__ == "__main__":
unittest.main()
| bsd-3-clause |
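The `csviter` tests above expect each row as a header-keyed mapping of text values, with a trailing empty field preserved as an empty string. The stdlib `csv.DictReader` produces the same shape on decoded text (the sample data is illustrative):

```python
import csv
import io

body = "id,name,value\n1,alpha,foobar\n2,empty,\n"
rows = list(csv.DictReader(io.StringIO(body)))
print(rows)
# [{'id': '1', 'name': 'alpha', 'value': 'foobar'},
#  {'id': '2', 'name': 'empty', 'value': ''}]
```

Everything arrives as strings — no autocasting — which matches the explicit type checks the scrapy tests perform on keys and values.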
dejay313/dojostreams | plugin.video.youtube/resources/lib/kodion/abstract_provider.py | 17 | 10720 | import re
from .exceptions import KodionException
from . import items
from . import constants
from . import utils
class AbstractProvider(object):
RESULT_CACHE_TO_DISC = 'cache_to_disc' # (bool)
def __init__(self):
self._local_map = {
'kodion.wizard.view.default': 30027,
'kodion.wizard.view.episodes': 30028,
'kodion.wizard.view.movies': 30029,
'kodion.wizard.view.tvshows': 30032,
'kodion.wizard.view.songs': 30033,
'kodion.wizard.view.artists': 30034,
'kodion.wizard.view.albums': 30035
}
# map for regular expression (path) to method (names)
self._dict_path = {}
# register some default paths
self.register_path('^/$', '_internal_root')
self.register_path('^/' + constants.paths.WATCH_LATER + '/(?P<command>add|remove|list)/?$',
'_internal_watch_later')
self.register_path('^/' + constants.paths.FAVORITES + '/(?P<command>add|remove|list)/?$', '_internal_favorite')
self.register_path('^/' + constants.paths.SEARCH + '/(?P<command>input|query|list|remove|clear|rename)/?$',
'_internal_search')
self.register_path('(?P<path>.*\/)extrafanart\/([\?#].+)?$', '_internal_on_extra_fanart')
"""
Test each method of this class for the 'kodion_re_path' attribute appended
by the RegisterProviderPath decorator.
That attribute describes the path which must match for the decorated method
to be called.
"""
for method_name in dir(self):
method = getattr(self, method_name)
if hasattr(method, 'kodion_re_path'):
self.register_path(method.kodion_re_path, method_name)
pass
pass
pass
def get_alternative_fanart(self, context):
return context.get_fanart()
def register_path(self, re_path, method_name):
"""
Registers a new method by name (string) for the given regular expression
:param re_path: regular expression of the path
:param method_name: name of the method
:return:
"""
self._dict_path[re_path] = method_name
pass
def _process_wizard(self, context):
def _setup_views(_context, _view):
view_manager = utils.ViewManager(_context)
if not view_manager.update_view_mode(_context.localize(self._local_map['kodion.wizard.view.%s' % _view]),
_view):
return
_context.get_settings().set_bool(constants.setting.VIEW_OVERRIDE, True)
pass
# start the setup wizard
wizard_steps = []
if context.get_settings().is_setup_wizard_enabled():
context.get_settings().set_bool(constants.setting.SETUP_WIZARD, False)
if utils.ViewManager(context).has_supported_views():
views = self.get_wizard_supported_views()
for view in views:
if view in utils.ViewManager.SUPPORTED_VIEWS:
wizard_steps.append((_setup_views, [context, view]))
pass
else:
context.log_warning('[Setup-Wizard] Unsupported view "%s"' % view)
pass
pass
pass
else:
skin_id = context.get_ui().get_skin_id()
context.log("ViewManager: Unknown skin id '%s'" % skin_id)
pass
wizard_steps.extend(self.get_wizard_steps(context))
pass
if wizard_steps and context.get_ui().on_yes_no_input(context.get_name(),
context.localize(constants.localize.SETUP_WIZARD_EXECUTE)):
for wizard_step in wizard_steps:
wizard_step[0](*wizard_step[1])
pass
pass
pass
def get_wizard_supported_views(self):
return ['default']
def get_wizard_steps(self, context):
# can be overridden by the derived class
return []
def navigate(self, context):
self._process_wizard(context)
path = context.get_path()
for key in self._dict_path:
re_match = re.search(key, path, re.UNICODE)
if re_match is not None:
method_name = self._dict_path.get(key, '')
method = getattr(self, method_name)
if method is not None:
result = method(context, re_match)
if not isinstance(result, tuple):
result = result, {}
pass
return result
pass
pass
raise KodionException("Mapping for path '%s' not found" % path)
def on_extra_fanart(self, context, re_match):
"""
The implementation of the provider can override this behavior.
:param context:
:param re_match:
:return:
"""
return None
def _internal_on_extra_fanart(self, context, re_match):
path = re_match.group('path')
new_context = context.clone(new_path=path)
return self.on_extra_fanart(new_context, re_match)
def on_search(self, search_text, context, re_match):
raise NotImplementedError()
def on_root(self, context, re_match):
raise NotImplementedError()
def on_watch_later(self, context, re_match):
pass
def _internal_root(self, context, re_match):
return self.on_root(context, re_match)
def _internal_favorite(self, context, re_match):
context.add_sort_method(constants.sort_method.LABEL_IGNORE_THE)
params = context.get_params()
command = re_match.group('command')
if command == 'add':
fav_item = items.from_json(params['item'])
context.get_favorite_list().add(fav_item)
pass
elif command == 'remove':
fav_item = items.from_json(params['item'])
context.get_favorite_list().remove(fav_item)
context.get_ui().refresh_container()
pass
elif command == 'list':
directory_items = context.get_favorite_list().list()
for directory_item in directory_items:
context_menu = [(context.localize(constants.localize.WATCH_LATER_REMOVE),
'RunPlugin(%s)' % context.create_uri([constants.paths.FAVORITES, 'remove'],
params={'item': items.to_jsons(directory_item)}))]
directory_item.set_context_menu(context_menu)
pass
return directory_items
else:
pass
pass
def _internal_watch_later(self, context, re_match):
self.on_watch_later(context, re_match)
params = context.get_params()
command = re_match.group('command')
if command == 'add':
item = items.from_json(params['item'])
context.get_watch_later_list().add(item)
pass
elif command == 'remove':
item = items.from_json(params['item'])
context.get_watch_later_list().remove(item)
context.get_ui().refresh_container()
pass
elif command == 'list':
video_items = context.get_watch_later_list().list()
for video_item in video_items:
context_menu = [(context.localize(constants.localize.WATCH_LATER_REMOVE),
'RunPlugin(%s)' % context.create_uri([constants.paths.WATCH_LATER, 'remove'],
params={'item': items.to_jsons(video_item)}))]
video_item.set_context_menu(context_menu)
pass
return video_items
else:
# do something
pass
pass
def _internal_search(self, context, re_match):
params = context.get_params()
command = re_match.group('command')
search_history = context.get_search_history()
if command == 'remove':
query = params['q']
search_history.remove(query)
context.get_ui().refresh_container()
return True
elif command == 'rename':
query = params['q']
result, new_query = context.get_ui().on_keyboard_input(context.localize(constants.localize.SEARCH_RENAME),
query)
if result:
search_history.rename(query, new_query)
context.get_ui().refresh_container()
pass
return True
elif command == 'clear':
search_history.clear()
context.get_ui().refresh_container()
return True
elif command == 'input':
result, query = context.get_ui().on_keyboard_input(context.localize(constants.localize.SEARCH_TITLE))
if result:
context.execute(
'Container.Update(%s)' % context.create_uri([constants.paths.SEARCH, 'query'], {'q': query}))
pass
return True
elif command == 'query':
query = params['q']
search_history.update(query)
return self.on_search(query, context, re_match)
else:
result = []
# 'New Search...'
new_search_item = items.NewSearchItem(context, fanart=self.get_alternative_fanart(context))
result.append(new_search_item)
for search in search_history.list():
# little fallback for old history entries
if isinstance(search, items.DirectoryItem):
search = search.get_name()
pass
# we create a new instance of the SearchItem
search_history_item = items.SearchHistoryItem(context, search,
fanart=self.get_alternative_fanart(context))
result.append(search_history_item)
pass
if search_history.is_empty():
context.execute('RunPlugin(%s)' % context.create_uri([constants.paths.SEARCH, 'input']))
pass
return result, {self.RESULT_CACHE_TO_DISC: False}
return False
def handle_exception(self, context, exception_to_handle):
return True
def tear_down(self, context):
pass
pass | gpl-2.0 |
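`AbstractProvider.navigate` above dispatches by matching the request path against registered regular expressions and handing the match object to the bound method. A minimal standalone sketch of that dispatch pattern (the `Router` class is mine, not kodion's):

```python
import re

class Router:
    def __init__(self):
        self._routes = {}  # regex pattern -> handler callable

    def register(self, pattern, handler):
        self._routes[pattern] = handler

    def navigate(self, path):
        # First pattern that matches wins; the handler gets the match object
        # so it can pull out named groups like (?P<command>...).
        for pattern, handler in self._routes.items():
            match = re.search(pattern, path, re.UNICODE)
            if match is not None:
                return handler(match)
        raise KeyError("no mapping for path %r" % path)

router = Router()
router.register(r"^/search/(?P<command>input|query|list)/?$",
                lambda m: "search:" + m.group("command"))
print(router.navigate("/search/query"))  # search:query
```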
gbrammer/sgas-lens | sgas/other_bands.py | 1 | 2384 | def recalibrate():
"""
Rerun the WFC3 calibration pipeline to flatten the (potentially)
variable ramps
"""
import matplotlib as mpl
mpl.rcParams['backend'] = 'agg'
import glob
import os
import stsci.tools
from sgas import reprocess_wfc3
# In RAW
files=glob.glob('*raw.fits')
reprocess_wfc3.show_ramps_parallel(files, cpu_count=4)
files=glob.glob('*raw.fits')
reprocess_wfc3.reprocess_parallel(files)
def preprocess():
"""
Drizzle and align the other bands
"""
# In Prep
import grizli
import grizli.prep
import os
import glob
import numpy as np
import sgas
# other bands from the Gladders program
files=glob.glob('../RAW/ic2*_flt.fits')
visits, xx = grizli.utils.parse_flt_files(files=files, uniquename=True)
# Alignment list, generated by GBr
radec = os.path.join(sgas.get_data_path(), 'sdssj0851+3331-f160w.radec')
# Copy aligment guess files
os.system('cp {0}/*align_guess .'.format(sgas.get_data_path()))
all_failed = []
Skip=True
# This main loop does all of the alignment and background subtraction
for visit in visits:
if os.path.exists('%s.failed' %(visit['product'])):
all_failed.append(visit)
if (os.path.exists('%s_drz_sci.fits' %(visit['product']))) & (Skip):
continue
print(visit['product'])
try:
status = grizli.prep.process_direct_grism_visit(direct=visit, grism={}, radec=radec, skip_direct=False, align_mag_limits=[14,23], tweak_max_dist=8, tweak_threshold=8, align_tolerance=8, tweak_fit_order=2)
        except Exception:
fp = open('%s.failed' %(visit['product']), 'w')
fp.write('\n')
fp.close()
continue
if os.path.exists('%s.failed' %(visit['product'])):
os.remove('%s.failed' %(visit['product']))
# Make both images have the same pixel grid
visits[1]['reference'] = 'sdssj0851+3331-c2i-06-293.0-f125w_drz_sci.fits'
# Drizzle them, North-up and with 0.06" pixels
grizli.prep.drizzle_overlaps(visits, parse_visits=False, pixfrac=0.8, scale=0.06, skysub=False, final_wht_type='IVM', check_overlaps=False)
| mit |
MediaSapiens/wavesf | django/contrib/auth/tests/decorators.py | 94 | 1559 | from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.contrib.auth.tests.views import AuthViewsTestCase
class LoginRequiredTestCase(AuthViewsTestCase):
"""
Tests the login_required decorators
"""
urls = 'django.contrib.auth.tests.urls'
def testCallable(self):
"""
Check that login_required is assignable to callable objects.
"""
class CallableView(object):
def __call__(self, *args, **kwargs):
pass
login_required(CallableView())
def testView(self):
"""
Check that login_required is assignable to normal views.
"""
def normal_view(request):
pass
login_required(normal_view)
def testLoginRequired(self, view_url='/login_required/', login_url=settings.LOGIN_URL):
"""
Check that login_required works on a simple view wrapped in a
login_required decorator.
"""
response = self.client.get(view_url)
self.assertEqual(response.status_code, 302)
        self.assertTrue(login_url in response['Location'])
self.login()
response = self.client.get(view_url)
self.assertEqual(response.status_code, 200)
def testLoginRequiredNextUrl(self):
"""
Check that login_required works on a simple view wrapped in a
login_required decorator with a login_url set.
"""
self.testLoginRequired(view_url='/login_required_login_url/',
login_url='/somewhere/')
| bsd-3-clause |
kivatu/kivy_old | kivy/tools/packaging/osx/build.py | 10 | 13432 | from __future__ import print_function
import os
import sys
import shutil
import shlex
import re
import time
from urllib.request import urlretrieve
from urllib.request import urlopen
from subprocess import Popen, PIPE
from distutils.cmd import Command
class OSXPortableBuild(Command):
description = "custom build command that builds portable osx package"
user_options = [
('dist-dir=', None,
"path of dist directory to use for building portable kivy, "
"the resulting disk image will be output to this driectory. "
"defaults to cwd."),
('deps-url=', None,
"url of binary dependancies for portable kivy package default: "
"http://kivy.googlecode.com/files/portable-deps-osx.zip"),
('no-cext', None,
"flag to disable building of c extensions")]
def initialize_options(self):
self.dist_dir = None
self.deps_url = None
self.no_cext = None
def finalize_options(self):
if not self.deps_url:
url = 'http://kivy.googlecode.com/files/portable-deps-osx.zip'
self.deps_url = url
if not self.dist_dir:
self.dist_dir = os.getcwd()
self.src_dir = os.path.dirname(sys.modules['__main__'].__file__)
# e.g. Kivy-0.5 (name and version passed to setup())
self.dist_name = self.distribution.get_fullname()
self.build_dir = os.path.join(self.dist_dir,
self.dist_name + '-osx-build')
def run(self):
intro = "Building Kivy Portable for OSX (%s)" % (self.dist_name)
print("-" * len(intro))
print(intro)
print("-" * len(intro))
print("\nPreparing Build...")
print("---------------------------------------")
if os.path.exists(self.build_dir):
print("*Cleaning old build dir")
shutil.rmtree(self.build_dir, ignore_errors=True)
print("*Creating build directory:", self.build_dir)
os.makedirs(self.build_dir)
def download_deps():
print("\nGetting binary dependencies...")
print("*Downloading:", self.deps_url)
            # report_hook is called every time a piece of the file is
# downloaded to print progress
def report_hook(block_count, block_size, total_size):
p = block_count * block_size * 100.0 / total_size
print("\b\b\b\b\b\b\b\b\b", "%06.2f" % p + "%", end=' ')
print(" Progress: 000.00%", end=' ')
            # location of binary dependencies needed for portable kivy
urlretrieve(self.deps_url,
# tmp file to store the archive
os.path.join(self.dist_dir, 'deps.zip'),
reporthook=report_hook)
print(" [Done]")
fn = '.last_known_portable_deps_hash'
def get_latest_hash():
u = urlopen("http://code.google.com/p/kivy/downloads/detail?name=portable-deps-osx.zip")
            c = u.read().decode('utf-8')
start = """Checksum: </th><td style="white-space:nowrap"> """
start_index = c.find(start) + len(start)
# SHA1 hash is 40 chars long
latest_hash = c[start_index:start_index+40]
print("Latest SHA1 Hash for deps is:", repr(latest_hash))
return latest_hash
print("\nChecking binary dependencies...")
print("---------------------------------------")
download = False
try:
with open(fn, 'r') as fd:
last_hash = fd.read()
print("Stored SHA1 Hash for deps is:", repr(last_hash))
        except Exception:
print('No cached copy of binary dependencies found.')
download = True
latest_hash = get_latest_hash()
deps = os.path.join(self.dist_dir, 'deps.zip')
if download or not (last_hash == latest_hash and os.path.isfile(deps)):
download_deps()
with open(fn, 'w') as fd:
fd.write(latest_hash)
else:
print("Using CACHED COPY for binary dependencies!")
print("*Extracting binary dependencies...")
        # using the OS X system command, because Python's zipfile can't
        # handle the hidden files in the archive
Popen(['unzip', os.path.join(self.dist_dir, 'deps.zip')],
cwd=self.build_dir, stdout=PIPE).communicate()
print("\nPutting kivy into portable environment")
print("---------------------------------------")
print("*Building kivy source distribution")
sdist_cmd = [sys.executable, #path to python.exe
os.path.join(self.src_dir, 'setup.py'), #path to setup.py
'sdist', #make setup.py create a src distribution
'--dist-dir=%s' % self.build_dir] #put it into build folder
Popen(sdist_cmd, stdout=PIPE).communicate()
print("*Placing kivy source distribution in portable context")
src_dist = os.path.join(self.build_dir, self.dist_name)
        # using the OS X system command, because Python's zipfile
        # can't handle the hidden files in the archive
Popen(['tar', 'xfv', src_dist + '.tar.gz'], cwd=self.build_dir,
stdout=PIPE, stderr=PIPE).communicate()
if self.no_cext:
print("*Skipping C Extension build", end=' ')
print("(either --no_cext or --no_mingw option set)")
else:
print("*Compiling C Extensions inplace for portable distribution")
cext_cmd = [sys.executable, #path to python.exe
'setup.py',
'build_ext', #make setup.py create a src distribution
'--inplace'] #do it inplace
            #this time it runs the setup.py inside the source distribution
            #that has been generated inside the build dir (to generate ext
            #for the target, instead of the source we're building from)
Popen(cext_cmd, cwd=src_dist).communicate()
print("\nFinalizing Application Bundle")
print("---------------------------------------")
print("*Copying launcher script into the app bundle")
script_target = os.path.join(self.build_dir, 'portable-deps-osx',
'Kivy.app', 'Contents', 'Resources', 'script')
script = os.path.join(src_dist, 'kivy', 'tools', 'packaging',
'osx', 'kivy.sh')
shutil.copy(script, script_target)
# Write plist files with updated version & year info (for copyright)
year = time.strftime("%Y")
first = '2011'
if year != first:
year = first + '-' + year
version = self.dist_name.replace("Kivy-", "")
        def write_plist(fn, plist_target):
print("*Writing", fn)
plist_template = os.path.join(self.dist_dir, 'kivy', 'tools',
'packaging', 'osx', fn)
with open(plist_template, 'r') as fd:
plist_content = fd.read()
plist_content = plist_content.replace("{{__VERSION__}}", version)
plist_content = plist_content.replace("{{__YEAR__}}", year)
with open(plist_target, 'w') as fd:
fd.write(plist_content)
fn = 'InfoPlist.strings'
plist_target = os.path.join(self.build_dir, 'portable-deps-osx', 'Kivy.app',
'Contents', 'Resources', 'English.lproj', fn)
write_plist(fn, plist_target)
fn = 'Info.plist'
plist_target = os.path.join(self.build_dir, 'portable-deps-osx', 'Kivy.app',
'Contents', fn)
write_plist(fn, plist_target)
print("*Moving examples out of app bundle to be included in disk image")
examples_target = os.path.join(self.build_dir, 'portable-deps-osx',
'examples')
examples = os.path.join(src_dist, 'examples')
shutil.move(examples, examples_target)
print("*Moving newly build kivy distribution into app bundle")
kivy_target = os.path.join(self.build_dir, 'portable-deps-osx',
'Kivy.app', 'Contents', 'Resources', 'kivy')
shutil.move(src_dist, kivy_target)
print("*Removing intermediate file")
os.remove(os.path.join(self.build_dir, src_dist + '.tar.gz'))
shutil.rmtree(os.path.join(self.build_dir, '__MACOSX'),
ignore_errors=True)
        #contents of portable-deps-osx are now ready to go into the disk image
dmg_dir = os.path.join(self.build_dir, 'portable-deps-osx')
vol_name = "Kivy"
print("\nCreating disk image for distribution")
print("---------------------------------------")
print("\nCreating intermediate DMG disk image: temp.dmg")
print("*checking how much space is needed for disk image...")
du_cmd = 'du -sh %s' % dmg_dir
        du_out = Popen(shlex.split(du_cmd), stdout=PIPE).communicate()[0].decode('utf-8')
        size, unit = re.search(r'(\d+)(.*)\s+/.*', du_out).group(1, 2)
print(" build needs at least %s%s." % (size, unit))
size = int(size) + 10
print("*allocating %d%s for temp.dmg" % (size, unit, ))
print("(volume name:%s)" % (vol_name, ))
        create_dmg_cmd = 'hdiutil create -srcfolder %s -volname %s -fs HFS+ \
            -fsargs "-c c=64,a=16,e=16" -format UDRW -size %d%s \
            temp.dmg' % (dmg_dir, vol_name, size, unit)
Popen(shlex.split(create_dmg_cmd), cwd=self.build_dir).communicate()
print("*mounting intermediate disk image:")
mount_cmd = 'hdiutil attach -readwrite -noverify -noautoopen "temp.dmg"'
Popen(shlex.split(mount_cmd), cwd=self.build_dir,
stdout=PIPE).communicate()
print("*running Apple Script to configure DMG layout properties:")
dmg_config_script = """
tell application "Finder"
tell disk "%s"
open
set current view of container window to icon view
set toolbar visible of container window to false
set statusbar visible of container window to false
set the bounds of container window to {270,100,912,582}
set theViewOptions to the icon view options of container window
set arrangement of theViewOptions to not arranged
set icon size of theViewOptions to 72
set background picture of theViewOptions to file ".background:kivydmg.png"
make new alias file at container window to POSIX file "/Applications" with properties {name:"Applications"}
set position of item "Kivy" of container window to {155, 85}
set position of item "Applications" of container window to {495, 85}
set position of item "examples" of container window to {575, 400}
set position of item "Readme.txt" of container window to {475, 400}
set position of item "make-symlinks" of container window to {375, 400}
set position of item ".background" of container window to {900, 900}
set position of item ".DS_Store" of container window to {900, 900}
set position of item ".fseventsd" of container window to {900, 900}
set position of item ".Trashes" of container window to {900, 900}
set the label index of item "examples" to 7
set the label index of item "Readme.txt" to 7
set the label index of item "make-symlinks" to 7
close
open
update without registering applications
delay 2
eject
end tell
end tell
""" % vol_name
        print(Popen(['osascript'], cwd=self.build_dir, stdin=PIPE,
                    stdout=PIPE).communicate(dmg_config_script.encode('utf-8'))[0])
print("\nCreating final disk image")
print("*unmounting intermediate disk image")
umount_cmd = 'hdiutil detach /Volumes/%s' % vol_name
Popen(shlex.split(umount_cmd), cwd=self.build_dir,
stdout=PIPE).communicate()
print("*compressing and finalizing disk image")
fn = os.path.join(self.dist_dir, self.dist_name + "-osx.dmg")
try:
os.remove(fn)
except OSError:
pass
convert_cmd = 'hdiutil convert "temp.dmg" -format UDZO -imagekey ' + \
'zlib-level=9 -o %s' % (fn,)
Popen(shlex.split(convert_cmd), cwd=self.build_dir,
stdout=PIPE).communicate()
print("*Writing disk image, and cleaning build directory")
shutil.rmtree(self.build_dir, ignore_errors=True)
print("*Upload to google code")
cmd = ('{} kivy/tools/packaging/googlecode_upload.py -s {} '
'-p kivy -l Featured,OsSys-OSX {}'.format(
sys.executable,
'"Kivy {}, MacOSX portable version (Python 2.7, '
'64 bits, bundled dependencies)"'.format(version),
fn))
Popen(shlex.split(cmd), cwd=self.src_dir).communicate()
| mit |
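The build script above sizes the DMG by parsing `du -sh` output with a regex and padding the result; note the pattern is best written as a raw string. A self-contained sketch of that sizing step (the function name and the padding default are illustrative assumptions):

```python
import re


def parse_du_size(du_output, padding=10):
    """Parse the size prefix of `du -sh <dir>` output, e.g. '412M\t/path'.

    Returns (size + padding, unit) so a caller can allocate a disk image
    slightly larger than the measured contents. Raises ValueError if the
    output doesn't start with du's size/unit prefix.
    """
    m = re.search(r'(\d+)(\S*)\s+\S+', du_output)
    if m is None:
        raise ValueError('unrecognized du output: %r' % du_output)
    size, unit = m.group(1, 2)
    return int(size) + padding, unit
```

Padding once here avoids the easy mistake of also padding again at the `hdiutil create -size` call site, which would silently allocate more than the printed message claims.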
ethanbao/api-client-staging-1 | generated/python/gapic-google-iam-admin-v1/google/cloud/gapic/iam_admin/v1/iam_api.py | 2 | 34526 | # Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# EDITING INSTRUCTIONS
# This file was generated from the file
# https://github.com/google/googleapis/blob/master/google/iam/admin/v1/iam.proto,
# and updates to that file get reflected here through a refresh process.
# For the short term, the refresh process will only be runnable by Google engineers.
#
# The only allowed edits are to method and file documentation. A 3-way
# merge preserves those additions if the generated source changes.
"""Accesses the google.iam.admin.v1 IAM API."""
import json
import os
import pkg_resources
import platform
from google.gax import api_callable
from google.gax import config
from google.gax import path_template
import google.gax
from google.cloud.gapic.iam_admin.v1 import enums
from google.iam.admin.v1 import iam_pb2
from google.iam.v1 import iam_policy_pb2
from google.iam.v1 import policy_pb2
_PageDesc = google.gax.PageDescriptor
class IAMApi(object):
"""
Creates and manages service account objects.
Service account is an account that belongs to your project instead
of to an individual end user. It is used to authenticate calls
to a Google API.
To create a service account, specify the ``project_id`` and ``account_id``
for the account. The ``account_id`` is unique within the project, and used
to generate the service account email address and a stable
``unique_id``.
All other methods can identify accounts using the format
``projects/{project}/serviceAccounts/{account}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
"""
SERVICE_ADDRESS = 'iam.googleapis.com'
"""The default address of the service."""
DEFAULT_SERVICE_PORT = 443
"""The default port of the service."""
_CODE_GEN_NAME_VERSION = 'gapic/0.1.0'
_GAX_VERSION = pkg_resources.get_distribution('google-gax').version
_PAGE_DESCRIPTORS = {
'list_service_accounts': _PageDesc('page_token', 'next_page_token',
'accounts')
}
# The scopes needed to make gRPC calls to all of the methods defined in
# this service
_ALL_SCOPES = ('https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/iam', )
_PROJECT_PATH_TEMPLATE = path_template.PathTemplate('projects/{project}')
_SERVICE_ACCOUNT_PATH_TEMPLATE = path_template.PathTemplate(
'projects/{project}/serviceAccounts/{service_account}')
_KEY_PATH_TEMPLATE = path_template.PathTemplate(
'projects/{project}/serviceAccounts/{service_account}/keys/{key}')
@classmethod
def project_path(cls, project):
"""Returns a fully-qualified project resource name string."""
return cls._PROJECT_PATH_TEMPLATE.render({'project': project, })
@classmethod
def service_account_path(cls, project, service_account):
"""Returns a fully-qualified service_account resource name string."""
return cls._SERVICE_ACCOUNT_PATH_TEMPLATE.render({
'project': project,
'service_account': service_account,
})
@classmethod
def key_path(cls, project, service_account, key):
"""Returns a fully-qualified key resource name string."""
return cls._KEY_PATH_TEMPLATE.render({
'project': project,
'service_account': service_account,
'key': key,
})
@classmethod
def match_project_from_project_name(cls, project_name):
"""Parses the project from a project resource.
Args:
project_name (string): A fully-qualified path representing a project
resource.
Returns:
A string representing the project.
"""
return cls._PROJECT_PATH_TEMPLATE.match(project_name).get('project')
@classmethod
def match_project_from_service_account_name(cls, service_account_name):
"""Parses the project from a service_account resource.
Args:
service_account_name (string): A fully-qualified path representing a service_account
resource.
Returns:
A string representing the project.
"""
return cls._SERVICE_ACCOUNT_PATH_TEMPLATE.match(
service_account_name).get('project')
@classmethod
def match_service_account_from_service_account_name(cls,
service_account_name):
"""Parses the service_account from a service_account resource.
Args:
service_account_name (string): A fully-qualified path representing a service_account
resource.
Returns:
A string representing the service_account.
"""
return cls._SERVICE_ACCOUNT_PATH_TEMPLATE.match(
service_account_name).get('service_account')
@classmethod
def match_project_from_key_name(cls, key_name):
"""Parses the project from a key resource.
Args:
key_name (string): A fully-qualified path representing a key
resource.
Returns:
A string representing the project.
"""
return cls._KEY_PATH_TEMPLATE.match(key_name).get('project')
@classmethod
def match_service_account_from_key_name(cls, key_name):
"""Parses the service_account from a key resource.
Args:
key_name (string): A fully-qualified path representing a key
resource.
Returns:
A string representing the service_account.
"""
return cls._KEY_PATH_TEMPLATE.match(key_name).get('service_account')
@classmethod
def match_key_from_key_name(cls, key_name):
"""Parses the key from a key resource.
Args:
key_name (string): A fully-qualified path representing a key
resource.
Returns:
A string representing the key.
"""
return cls._KEY_PATH_TEMPLATE.match(key_name).get('key')
def __init__(self,
service_path=SERVICE_ADDRESS,
port=DEFAULT_SERVICE_PORT,
channel=None,
metadata_transformer=None,
ssl_creds=None,
scopes=None,
client_config=None,
app_name='gax',
app_version=_GAX_VERSION):
"""Constructor.
Args:
service_path (string): The domain name of the API remote host.
port (int): The port on which to connect to the remote host.
channel (:class:`grpc.Channel`): A ``Channel`` instance through
which to make calls.
ssl_creds (:class:`grpc.ChannelCredentials`): A
``ChannelCredentials`` instance for use with an SSL-enabled
channel.
client_config (dict):
A dictionary for call options for each method. See
:func:`google.gax.construct_settings` for the structure of
this data. Falls back to the default config if not specified
or the specified config is missing data points.
metadata_transformer (Callable[[], list]): A function that creates
the metadata for requests.
app_name (string): The codename of the calling service.
app_version (string): The version of the calling service.
Returns:
A IAMApi object.
"""
if scopes is None:
scopes = self._ALL_SCOPES
if client_config is None:
client_config = {}
goog_api_client = '{}/{} {} gax/{} python/{}'.format(
app_name, app_version, self._CODE_GEN_NAME_VERSION,
self._GAX_VERSION, platform.python_version())
metadata = [('x-goog-api-client', goog_api_client)]
default_client_config = json.loads(
pkg_resources.resource_string(__name__, 'iam_client_config.json')
.decode())
defaults = api_callable.construct_settings(
'google.iam.admin.v1.IAM',
default_client_config,
client_config,
config.STATUS_CODE_NAMES,
kwargs={'metadata': metadata},
page_descriptors=self._PAGE_DESCRIPTORS)
self.iam_stub = config.create_stub(
iam_pb2.IAMStub,
service_path,
port,
ssl_creds=ssl_creds,
channel=channel,
metadata_transformer=metadata_transformer,
scopes=scopes)
self._list_service_accounts = api_callable.create_api_call(
self.iam_stub.ListServiceAccounts,
settings=defaults['list_service_accounts'])
self._get_service_account = api_callable.create_api_call(
self.iam_stub.GetServiceAccount,
settings=defaults['get_service_account'])
self._create_service_account = api_callable.create_api_call(
self.iam_stub.CreateServiceAccount,
settings=defaults['create_service_account'])
self._update_service_account = api_callable.create_api_call(
self.iam_stub.UpdateServiceAccount,
settings=defaults['update_service_account'])
self._delete_service_account = api_callable.create_api_call(
self.iam_stub.DeleteServiceAccount,
settings=defaults['delete_service_account'])
self._list_service_account_keys = api_callable.create_api_call(
self.iam_stub.ListServiceAccountKeys,
settings=defaults['list_service_account_keys'])
self._get_service_account_key = api_callable.create_api_call(
self.iam_stub.GetServiceAccountKey,
settings=defaults['get_service_account_key'])
self._create_service_account_key = api_callable.create_api_call(
self.iam_stub.CreateServiceAccountKey,
settings=defaults['create_service_account_key'])
self._delete_service_account_key = api_callable.create_api_call(
self.iam_stub.DeleteServiceAccountKey,
settings=defaults['delete_service_account_key'])
self._sign_blob = api_callable.create_api_call(
self.iam_stub.SignBlob, settings=defaults['sign_blob'])
self._get_iam_policy = api_callable.create_api_call(
self.iam_stub.GetIamPolicy, settings=defaults['get_iam_policy'])
self._set_iam_policy = api_callable.create_api_call(
self.iam_stub.SetIamPolicy, settings=defaults['set_iam_policy'])
self._test_iam_permissions = api_callable.create_api_call(
self.iam_stub.TestIamPermissions,
settings=defaults['test_iam_permissions'])
self._query_grantable_roles = api_callable.create_api_call(
self.iam_stub.QueryGrantableRoles,
settings=defaults['query_grantable_roles'])
# Service calls
def list_service_accounts(self, name, page_size=0, options=None):
"""
Lists ``ServiceAccounts`` for a project.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> from google.gax import CallOptions, INITIAL_PAGE
>>> api = iam_api.IAMApi()
>>> name = api.project_path('[PROJECT]')
>>>
>>> # Iterate over all results
>>> for element in api.list_service_accounts(name):
>>> # process element
>>> pass
>>>
>>> # Or iterate over results one page at a time
>>> for page in api.list_service_accounts(name, options=CallOptions(page_token=INITIAL_PAGE)):
>>> for element in page:
>>> # process element
>>> pass
Args:
name (string): Required. The resource name of the project associated with the service
accounts, such as ``projects/my-project-123``.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.gax.PageIterator` instance. By default, this
is an iterable of :class:`google.iam.admin.v1.iam_pb2.ServiceAccount` instances.
This object can also be configured to iterate over the pages
of the response through the `CallOptions` parameter.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.ListServiceAccountsRequest(
name=name, page_size=page_size)
return self._list_service_accounts(request, options)
def get_service_account(self, name, options=None):
"""
Gets a ``ServiceAccount``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> response = api.get_service_account(name)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ServiceAccount` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.GetServiceAccountRequest(name=name)
return self._get_service_account(request, options)
def create_service_account(self,
name,
account_id,
service_account=None,
options=None):
"""
Creates a ``ServiceAccount``
and returns it.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.project_path('[PROJECT]')
>>> account_id = ''
>>> response = api.create_service_account(name, account_id)
Args:
name (string): Required. The resource name of the project associated with the service
accounts, such as ``projects/my-project-123``.
account_id (string): Required. The account id that is used to generate the service account
email address and a stable unique id. It is unique within a project,
must be 6-30 characters long, and match the regular expression
                ``[a-z]([-a-z0-9]*[a-z0-9])?`` to comply with RFC1035.
service_account (:class:`google.iam.admin.v1.iam_pb2.ServiceAccount`): The ``ServiceAccount`` resource to create.
Currently, only the following values are user assignable:
``display_name`` .
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ServiceAccount` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
if service_account is None:
service_account = iam_pb2.ServiceAccount()
request = iam_pb2.CreateServiceAccountRequest(
name=name, account_id=account_id, service_account=service_account)
return self._create_service_account(request, options)
def update_service_account(self,
etag,
name='',
project_id='',
unique_id='',
email='',
display_name='',
oauth2_client_id='',
options=None):
"""
Updates a ``ServiceAccount``.
Currently, only the following fields are updatable:
``display_name`` .
The ``etag`` is mandatory.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> etag = ''
>>> response = api.update_service_account(etag)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
Requests using ``-`` as a wildcard for the project will infer the project
from the ``account`` and the ``account`` value can be the ``email`` address or
the ``unique_id`` of the service account.
In responses the resource name will always be in the format
``projects/{project}/serviceAccounts/{email}``.
project_id (string): @OutputOnly The id of the project that owns the service account.
unique_id (string): @OutputOnly The unique and stable id of the service account.
email (string): @OutputOnly The email address of the service account.
display_name (string): Optional. A user-specified description of the service account. Must be
fewer than 100 UTF-8 bytes.
etag (bytes): Used to perform a consistent read-modify-write.
oauth2_client_id (string): @OutputOnly. The OAuth2 client id for the service account.
This is used in conjunction with the OAuth2 clientconfig API to make
three legged OAuth2 (3LO) flows to access the data of Google users.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ServiceAccount` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.ServiceAccount(
etag=etag,
name=name,
project_id=project_id,
unique_id=unique_id,
email=email,
display_name=display_name,
oauth2_client_id=oauth2_client_id)
return self._update_service_account(request, options)
def delete_service_account(self, name, options=None):
"""
Deletes a ``ServiceAccount``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> api.delete_service_account(name)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.DeleteServiceAccountRequest(name=name)
self._delete_service_account(request, options)
def list_service_account_keys(self, name, key_types=None, options=None):
"""
Lists ``ServiceAccountKeys``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> response = api.list_service_account_keys(name)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
            Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
key_types (list[enum :class:`google.cloud.gapic.iam_admin.v1.enums.KeyType`]): Filters the types of keys the user wants to include in the list
response. Duplicate key types are not allowed. If no key type
is provided, all keys are returned.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ListServiceAccountKeysResponse` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
if key_types is None:
key_types = []
request = iam_pb2.ListServiceAccountKeysRequest(
name=name, key_types=key_types)
return self._list_service_account_keys(request, options)
def get_service_account_key(self, name, public_key_type=None,
options=None):
"""
Gets the ``ServiceAccountKey``
by key id.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.key_path('[PROJECT]', '[SERVICE_ACCOUNT]', '[KEY]')
>>> response = api.get_service_account_key(name)
Args:
name (string): The resource name of the service account key in the following format:
``projects/{project}/serviceAccounts/{account}/keys/{key}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
public_key_type (enum :class:`google.cloud.gapic.iam_admin.v1.enums.ServiceAccountPublicKeyType`): The output format of the public key requested.
X509_PEM is the default output format.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ServiceAccountKey` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
if public_key_type is None:
public_key_type = enums.ServiceAccountPublicKeyType.TYPE_NONE
request = iam_pb2.GetServiceAccountKeyRequest(
name=name, public_key_type=public_key_type)
return self._get_service_account_key(request, options)
def create_service_account_key(self,
name,
private_key_type=None,
key_algorithm=None,
options=None):
"""
Creates a ``ServiceAccountKey``
and returns it.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> response = api.create_service_account_key(name)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
private_key_type (enum :class:`google.cloud.gapic.iam_admin.v1.enums.ServiceAccountPrivateKeyType`): The output format of the private key. ``GOOGLE_CREDENTIALS_FILE`` is the
default output format.
key_algorithm (enum :class:`google.cloud.gapic.iam_admin.v1.enums.ServiceAccountKeyAlgorithm`): Which type of key and algorithm to use for the key.
The default is currently a 4K RSA key. However, this may change in the
future.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.ServiceAccountKey` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
if private_key_type is None:
private_key_type = enums.ServiceAccountPrivateKeyType.TYPE_UNSPECIFIED
if key_algorithm is None:
key_algorithm = enums.ServiceAccountKeyAlgorithm.KEY_ALG_UNSPECIFIED
request = iam_pb2.CreateServiceAccountKeyRequest(
name=name,
private_key_type=private_key_type,
key_algorithm=key_algorithm)
return self._create_service_account_key(request, options)
def delete_service_account_key(self, name, options=None):
"""
Deletes a ``ServiceAccountKey``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.key_path('[PROJECT]', '[SERVICE_ACCOUNT]', '[KEY]')
>>> api.delete_service_account_key(name)
Args:
name (string): The resource name of the service account key in the following format:
``projects/{project}/serviceAccounts/{account}/keys/{key}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g, timeout, retries etc.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.DeleteServiceAccountKeyRequest(name=name)
self._delete_service_account_key(request, options)
def sign_blob(self, name, bytes_to_sign, options=None):
"""
Signs a blob using a service account's system-managed private key.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> name = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> bytes_to_sign = ''
>>> response = api.sign_blob(name, bytes_to_sign)
Args:
name (string): The resource name of the service account in the following format:
``projects/{project}/serviceAccounts/{account}``.
Using ``-`` as a wildcard for the project will infer the project from
the account. The ``account`` value can be the ``email`` address or the
``unique_id`` of the service account.
bytes_to_sign (bytes): The bytes to sign.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.SignBlobResponse` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.SignBlobRequest(
name=name, bytes_to_sign=bytes_to_sign)
return self._sign_blob(request, options)
def get_iam_policy(self, resource, options=None):
"""
Returns the IAM access control policy for a
``ServiceAccount``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> resource = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> response = api.get_iam_policy(resource)
Args:
resource (string): REQUIRED: The resource for which policy is being requested. Resource
is usually specified as a path, such as projects/{project}.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.v1.policy_pb2.Policy` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_policy_pb2.GetIamPolicyRequest(resource=resource)
return self._get_iam_policy(request, options)
def set_iam_policy(self, resource, policy, options=None):
"""
Sets the IAM access control policy for a
``ServiceAccount``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> from google.iam.v1 import policy_pb2
>>> api = iam_api.IAMApi()
>>> resource = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> policy = policy_pb2.Policy()
>>> response = api.set_iam_policy(resource, policy)
Args:
resource (string): REQUIRED: The resource for which policy is being specified.
Resource is usually specified as a path, such as
projects/{project}/zones/{zone}/disks/{disk}.
policy (:class:`google.iam.v1.policy_pb2.Policy`): REQUIRED: The complete policy to be applied to the 'resource'. The size of
the policy is limited to a few tens of kilobytes. An empty policy is in general a
valid policy but certain services (like Projects) might reject them.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.v1.policy_pb2.Policy` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_policy_pb2.SetIamPolicyRequest(
resource=resource, policy=policy)
return self._set_iam_policy(request, options)
def test_iam_permissions(self, resource, permissions, options=None):
"""
Tests the specified permissions against the IAM access control policy
for a ``ServiceAccount``.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> resource = api.service_account_path('[PROJECT]', '[SERVICE_ACCOUNT]')
>>> permissions = []
>>> response = api.test_iam_permissions(resource, permissions)
Args:
resource (string): REQUIRED: The resource for which policy detail is being requested.
Resource is usually specified as a path, such as projects/{project}.
permissions (list[string]): The set of permissions to check for the 'resource'. Permissions with
wildcards (such as '*' or 'storage.*') are not allowed.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_policy_pb2.TestIamPermissionsRequest(
resource=resource, permissions=permissions)
return self._test_iam_permissions(request, options)
def query_grantable_roles(self, full_resource_name, options=None):
"""
Queries roles that can be granted on a particular resource.
A role is grantable if it can be used as the role in a binding for a policy
for that resource.
Example:
>>> from google.cloud.gapic.iam_admin.v1 import iam_api
>>> api = iam_api.IAMApi()
>>> full_resource_name = ''
>>> response = api.query_grantable_roles(full_resource_name)
Args:
full_resource_name (string): Required. The full resource name to query from the list of grantable roles.
The name follows the Google Cloud Platform resource format.
For example, a Cloud Platform project with id ``my-project`` will be named
``//cloudresourcemanager.googleapis.com/projects/my-project``.
options (:class:`google.gax.CallOptions`): Overrides the default
settings for this call, e.g. timeout, retries, etc.
Returns:
A :class:`google.iam.admin.v1.iam_pb2.QueryGrantableRolesResponse` instance.
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
"""
request = iam_pb2.QueryGrantableRolesRequest(
full_resource_name=full_resource_name)
return self._query_grantable_roles(request, options)
| bsd-3-clause |
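The IAM docstrings above repeatedly build resource names such as ``projects/{project}/serviceAccounts/{account}``, with ``-`` usable as a project wildcard. A minimal sketch of that path construction (the helper names mirror the ``service_account_path``/``key_path`` calls shown in the examples, but this standalone version is illustrative, not the generated client's code):

```python
def service_account_path(project, account):
    """Build an IAM service-account resource name.

    Pass "-" as the project to let the backend infer it from the
    account (email address or unique_id), as the docstrings describe.
    """
    return "projects/%s/serviceAccounts/%s" % (project, account)


def key_path(project, account, key):
    """Build a service-account key resource name."""
    return "%s/keys/%s" % (service_account_path(project, account), key)


print(service_account_path("-", "sa@example.iam.gserviceaccount.com"))
# projects/-/serviceAccounts/sa@example.iam.gserviceaccount.com
```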
cneill/designate-testing | designate/storage/impl_sqlalchemy/migrate_repo/versions/044_add_pool_id_to_domains.py | 8 | 1961 | # Copyright (c) 2014 Rackspace Hosting
#
# Author: Betsy Luzader <betsy.luzader@rackspace.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from sqlalchemy.schema import Table, Column, MetaData
from migrate.changeset.constraint import UniqueConstraint
from designate.sqlalchemy.types import UUID
meta = MetaData()
def upgrade(migrate_engine):
meta.bind = migrate_engine
domains_table = Table('domains', meta, autoload=True)
# Get the default pool_id from the config file
default_pool_id = cfg.CONF['service:central'].default_pool_id
# Create the pool_id column
pool_id_column = Column('pool_id',
UUID(),
default=default_pool_id,
nullable=True)
pool_id_column.create(domains_table, populate_default=True)
# Alter the table to drop default value after populating it
domains_table.c.pool_id.alter(default=None)
dialect = migrate_engine.url.get_dialect().name
if dialect.startswith('sqlite'):
# Add missing unique index
constraint = UniqueConstraint('name', 'deleted',
name='unique_domain_name',
table=domains_table)
constraint.create()
def downgrade(migrate_engine):
meta.bind = migrate_engine
domains_table = Table('domains', meta, autoload=True)
domains_table.c.pool_id.drop()
| apache-2.0 |
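The migration above uses a common two-step pattern: create the new column with a default so existing rows are populated, then drop the default. A rough, self-contained equivalent in plain SQL via the stdlib ``sqlite3`` module (the table layout and UUID below are made up for illustration; sqlalchemy-migrate handles the column alteration for real backends):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE domains (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO domains (name) VALUES ('example.org.')")

# Step 1: add the column with a constant default; SQLite serves that
# default for all pre-existing rows, i.e. populate_default=True.
default_pool_id = "794ccc2c-d751-44fe-b57f-8894c9f5c842"  # made-up UUID
conn.execute(
    "ALTER TABLE domains ADD COLUMN pool_id TEXT DEFAULT '%s'" % default_pool_id
)

row = conn.execute(
    "SELECT pool_id FROM domains WHERE name = 'example.org.'"
).fetchone()
print(row[0] == default_pool_id)  # True: existing row was backfilled
```

Step 2 (dropping the default afterwards) has no direct ``ALTER`` form in SQLite, which is why migrate rebuilds the column metadata instead.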
NKSG/ns-3-tcpbolt | src/visualizer/visualizer/base.py | 160 | 3799 | import ns.point_to_point
import ns.csma
import ns.wifi
import ns.bridge
import ns.internet
import ns.mesh
import ns.wimax
import ns.wimax
import ns.lte
import gobject
import os.path
import sys
PIXELS_PER_METER = 3.0 # pixels-per-meter, at 100% zoom level
class PyVizObject(gobject.GObject):
__gtype_name__ = "PyVizObject"
def tooltip_query(self, tooltip):
tooltip.set_text("TODO: tooltip for %r" % self)
class Link(PyVizObject):
pass
class InformationWindow(object):
def update(self):
raise NotImplementedError
class NetDeviceTraits(object):
def __init__(self, is_wireless=None, is_virtual=False):
assert is_virtual or is_wireless is not None
self.is_wireless = is_wireless
self.is_virtual = is_virtual
netdevice_traits = {
ns.point_to_point.PointToPointNetDevice: NetDeviceTraits(is_wireless=False),
ns.csma.CsmaNetDevice: NetDeviceTraits(is_wireless=False),
ns.wifi.WifiNetDevice: NetDeviceTraits(is_wireless=True),
ns.bridge.BridgeNetDevice: NetDeviceTraits(is_virtual=True),
ns.internet.LoopbackNetDevice: NetDeviceTraits(is_virtual=True, is_wireless=False),
ns.mesh.MeshPointDevice: NetDeviceTraits(is_virtual=True),
ns.wimax.SubscriberStationNetDevice: NetDeviceTraits(is_wireless=True),
ns.wimax.BaseStationNetDevice: NetDeviceTraits(is_wireless=True),
ns.lte.LteUeNetDevice: NetDeviceTraits(is_wireless=True),
ns.lte.LteEnbNetDevice: NetDeviceTraits(is_wireless=True),
}
def lookup_netdevice_traits(class_type):
try:
return netdevice_traits[class_type]
except KeyError:
sys.stderr.write("WARNING: no NetDeviceTraits registered for device type %r; "
"I will assume this is a non-virtual wireless device, "
"but you should edit %r, variable 'netdevice_traits',"
" to make sure.\n" % (class_type.__name__, __file__))
t = NetDeviceTraits(is_virtual=False, is_wireless=True)
netdevice_traits[class_type] = t
return t
def transform_distance_simulation_to_canvas(d):
return d*PIXELS_PER_METER
def transform_point_simulation_to_canvas(x, y):
return x*PIXELS_PER_METER, y*PIXELS_PER_METER
def transform_distance_canvas_to_simulation(d):
return d/PIXELS_PER_METER
def transform_point_canvas_to_simulation(x, y):
return x/PIXELS_PER_METER, y/PIXELS_PER_METER
plugins = []
plugin_modules = {}
def register_plugin(plugin_init_func, plugin_name=None, plugin_module=None):
"""
Register a plugin.
    @param plugin_init_func: a callable object that will be invoked whenever a
    Visualizer object is created, like this: plugin_init_func(visualizer)
"""
assert callable(plugin_init_func)
plugins.append(plugin_init_func)
if plugin_module is not None:
plugin_modules[plugin_name] = plugin_module
plugins_loaded = False
def load_plugins():
global plugins_loaded
if plugins_loaded:
return
plugins_loaded = True
plugins_dir = os.path.join(os.path.dirname(__file__), 'plugins')
old_path = list(sys.path)
sys.path.insert(0, plugins_dir)
for filename in os.listdir(plugins_dir):
name, ext = os.path.splitext(filename)
if ext != '.py':
continue
try:
plugin_module = __import__(name)
except ImportError, ex:
print >> sys.stderr, "Could not load plugin %r: %s" % (filename, str(ex))
continue
try:
plugin_func = plugin_module.register
except AttributeError:
print >> sys.stderr, "Plugin %r has no 'register' function" % name
else:
#print >> sys.stderr, "Plugin %r registered" % name
register_plugin(plugin_func, name, plugin_module)
sys.path = old_path
| gpl-2.0 |
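The visualizer's plugin mechanism above reduces to a registry of callables that each get invoked with the visualizer instance. A stripped-down, dependency-free version of the same idea (the plugin names and the string stand-in for a visualizer are invented):

```python
plugins = []


def register_plugin(init_func, name=None):
    """Record a callable to be invoked for each visualizer created."""
    assert callable(init_func)
    plugins.append(init_func)


def apply_plugins(visualizer):
    """Call every registered plugin, mirroring base.py's contract:
    plugin(visualizer)."""
    for init_func in plugins:
        init_func(visualizer)


calls = []
register_plugin(lambda viz: calls.append(("ipv4", viz)))
register_plugin(lambda viz: calls.append(("olsr", viz)))
apply_plugins("visualizer-instance")
# calls == [("ipv4", "visualizer-instance"), ("olsr", "visualizer-instance")]
```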
bananacakes/bravo_2.6.35_gb-mr | tools/perf/scripts/python/sctop.py | 895 | 1936 | # system call top
# (c) 2010, Tom Zanussi <tzanussi@gmail.com>
# Licensed under the terms of the GNU GPL License version 2
#
# Periodically displays system-wide system call totals, broken down by
# syscall. If a [comm] arg is specified, only syscalls called by
# [comm] are displayed. If an [interval] arg is specified, the display
# will be refreshed every [interval] seconds. The default interval is
# 3 seconds.
import thread
import time
import os
import sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
usage = "perf trace -s sctop.py [comm] [interval]\n";
for_comm = None
default_interval = 3
interval = default_interval
if len(sys.argv) > 3:
sys.exit(usage)
if len(sys.argv) > 2:
for_comm = sys.argv[1]
interval = int(sys.argv[2])
elif len(sys.argv) > 1:
try:
interval = int(sys.argv[1])
except ValueError:
for_comm = sys.argv[1]
interval = default_interval
syscalls = autodict()
def trace_begin():
thread.start_new_thread(print_syscall_totals, (interval,))
pass
def raw_syscalls__sys_enter(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
id, args):
if for_comm is not None:
if common_comm != for_comm:
return
try:
syscalls[id] += 1
except TypeError:
syscalls[id] = 1
def print_syscall_totals(interval):
while 1:
clear_term()
if for_comm is not None:
print "\nsyscall events for %s:\n\n" % (for_comm),
else:
print "\nsyscall events:\n\n",
print "%-40s %10s\n" % ("event", "count"),
print "%-40s %10s\n" % ("----------------------------------------", \
"----------"),
for id, val in sorted(syscalls.iteritems(), key = lambda(k, v): (v, k), \
reverse = True):
try:
print "%-40d %10d\n" % (id, val),
except TypeError:
pass
syscalls.clear()
time.sleep(interval)
| gpl-2.0 |
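sctop's core is just an event tally printed in descending count order. The same aggregation in Python 3, using ``collections.Counter`` in place of the script's ``autodict`` and try/except increment (the event ids below are arbitrary):

```python
from collections import Counter

syscalls = Counter()
for event_id in [1, 63, 1, 1, 63, 4]:  # pretend raw_syscalls:sys_enter ids
    syscalls[event_id] += 1

# Equivalent of sctop's sorted-by-count, descending printout:
for event_id, count in syscalls.most_common():
    print("%-40d %10d" % (event_id, count))
```

``Counter.most_common()`` does the ``sorted(..., reverse=True)`` step for free, and ``Counter`` never raises on a missing key the way the original's ``try/except TypeError`` works around.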
florian-dacosta/server-tools | fetchmail_attach_from_folder/wizard/attach_mail_manually.py | 51 | 5406 | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# This module copyright (C) 2013 Therp BV (<http://therp.nl>)
# All Rights Reserved
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp import fields, models
import logging
_logger = logging.getLogger(__name__)
class attach_mail_manually(models.TransientModel):
_name = 'fetchmail.attach.mail.manually'
folder_id = fields.Many2one(
'fetchmail.server.folder', 'Folder', readonly=True)
mail_ids = fields.One2many(
'fetchmail.attach.mail.manually.mail', 'wizard_id', 'Emails')
def default_get(self, cr, uid, fields_list, context=None):
if context is None:
context = {}
defaults = super(attach_mail_manually, self).default_get(
cr, uid, fields_list, context
)
for folder in self.pool.get('fetchmail.server.folder').browse(
cr, uid,
[context.get('default_folder_id')], context):
defaults['mail_ids'] = []
connection = folder.server_id.connect()
connection.select(folder.path)
result, msgids = connection.search(
None,
'FLAGGED' if folder.flag_nonmatching else 'UNDELETED')
if result != 'OK':
_logger.error('Could not search mailbox %s on %s',
folder.path, folder.server_id.name)
continue
for msgid in msgids[0].split():
result, msgdata = connection.fetch(msgid, '(RFC822)')
if result != 'OK':
_logger.error('Could not fetch %s in %s on %s',
msgid, folder.path, folder.server_id.name)
continue
mail_message = self.pool.get('mail.thread').message_parse(
cr, uid, msgdata[0][1],
save_original=folder.server_id.original,
context=context
)
defaults['mail_ids'].append((0, 0, {
'msgid': msgid,
'subject': mail_message.get('subject', ''),
'date': mail_message.get('date', ''),
'object_id': '%s,-1' % folder.model_id.model,
}))
connection.close()
return defaults
def attach_mails(self, cr, uid, ids, context=None):
for this in self.browse(cr, uid, ids, context):
for mail in this.mail_ids:
connection = this.folder_id.server_id.connect()
connection.select(this.folder_id.path)
result, msgdata = connection.fetch(mail.msgid, '(RFC822)')
if result != 'OK':
_logger.error('Could not fetch %s in %s on %s',
mail.msgid, this.folder_id.path, this.server)
continue
mail_message = self.pool.get('mail.thread').message_parse(
cr, uid, msgdata[0][1],
save_original=this.folder_id.server_id.original,
context=context)
this.folder_id.server_id.attach_mail(
connection,
mail.object_id.id, this.folder_id, mail_message,
mail.msgid
)
connection.close()
return {'type': 'ir.actions.act_window_close'}
def fields_view_get(self, cr, user, view_id=None, view_type='form',
context=None, toolbar=False, submenu=False):
result = super(attach_mail_manually, self).fields_view_get(
cr, user, view_id, view_type, context, toolbar, submenu)
tree = result['fields']['mail_ids']['views']['tree']
for folder in self.pool['fetchmail.server.folder'].browse(
cr, user, [context.get('default_folder_id')], context):
tree['fields']['object_id']['selection'] = [
(folder.model_id.model, folder.model_id.name)
]
return result
class attach_mail_manually_mail(models.TransientModel):
_name = 'fetchmail.attach.mail.manually.mail'
wizard_id = fields.Many2one(
'fetchmail.attach.mail.manually', readonly=True)
msgid = fields.Char('Message id', readonly=True)
subject = fields.Char('Subject', readonly=True)
date = fields.Datetime('Date', readonly=True)
object_id = fields.Reference(
lambda self: [
(m.model, m.name)
for m in self.env['ir.model'].search([])
],
string='Object')
| agpl-3.0 |
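The wizard above fetches raw RFC822 messages over IMAP and hands the bytes to OpenERP's ``message_parse`` to extract subject and date. The parsing half of that flow can be reproduced with the stdlib ``email`` package (the sample message is fabricated):

```python
import email

raw = (b"From: alice@example.com\r\n"
       b"Subject: Invoice 42\r\n"
       b"Date: Mon, 01 Jan 2018 10:00:00 +0000\r\n"
       b"\r\n"
       b"Please find the invoice attached.\r\n")

# msgdata[0][1] from connection.fetch(msgid, '(RFC822)') would be
# bytes shaped like `raw` above.
msg = email.message_from_bytes(raw)
parsed = {
    "subject": msg.get("Subject", ""),
    "date": msg.get("Date", ""),
    "from": msg.get("From", ""),
}
print(parsed["subject"])  # Invoice 42
```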
URXtech/scales | src/greplin/scales/samplestats_test.py | 3 | 2240 | # Copyright 2011 The scales Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Sample statistics tests."""
from greplin.scales.samplestats import UniformSample, ExponentiallyDecayingReservoir
import random
import unittest
class UniformSampleTest(unittest.TestCase):
"""Test cases for uniform sample stats."""
def testGaussian(self):
"""Test with gaussian random numbers."""
random.seed(42)
us = UniformSample()
for _ in range(300):
us.update(random.gauss(42.0, 13.0))
self.assertAlmostEqual(us.mean, 43.143067271195235, places=5)
self.assertAlmostEqual(us.stddev, 13.008553229943168, places=5)
us.clear()
for _ in range(30000):
us.update(random.gauss(0.0012, 0.00005))
self.assertAlmostEqual(us.mean, 0.0012015284549517493, places=5)
self.assertAlmostEqual(us.stddev, 4.9776450250869146e-05, places=5)
class ExponentiallyDecayingReservoirTest(unittest.TestCase):
"""Test cases for exponentially decaying reservoir sample stats."""
def testGaussian(self):
"""Test with gaussian random numbers."""
random.seed(42)
sample = ExponentiallyDecayingReservoir()
for _ in range(300):
sample.update(random.gauss(42.0, 13.0))
self.assertAlmostEqual(sample.mean, 41.974069434931714, places=5)
self.assertAlmostEqual(sample.stddev, 12.982363860393766, places=5)
def testWithRescale(self):
    """Exercise rescaling."""
# Not a good test, but at least we cover a little more of the code.
random.seed(42)
sample = ExponentiallyDecayingReservoir(rescale_threshold=-1)
sample.update(random.gauss(42.0, 13.0))
self.assertAlmostEqual(sample.mean, 40.12682571548693, places=5)
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
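The mean/stddev that ``UniformSample`` reports can also be maintained incrementally with Welford's algorithm, which is numerically stable and avoids storing every sample; a compact sketch (this is not the library's actual implementation):

```python
import math


class RunningStats(object):
    """Numerically stable running mean/stddev (Welford's method)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def stddev(self):
        # Sample standard deviation (n - 1 denominator).
        return math.sqrt(self._m2 / (self.n - 1)) if self.n > 1 else 0.0


rs = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    rs.update(x)
print(rs.mean)  # 5.0
```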
mjirik/lisa | lisa/classification.py | 1 | 2243 | # ! /usr/bin/python
# -*- coding: utf-8 -*-
from loguru import logger
# logger = logging.getLogger()
import numpy as np
class GMMClassifier():
def __init__(self, each_class_params=None, **same_params):
        """
        same_params: classifier parameters shared by every class
        each_class_params: a list of parameter dictionaries, one per
        class classifier. For example:
        [{'covariance_type': 'full'}, {'n_components': 2}]
        """
self.same_params = same_params
self.each_class_params = each_class_params
self.models = []
def fit(self, X_train, y_train):
X_train = np.asarray(X_train)
y_train = np.asarray(y_train)
# from sklearn.mixture import GMM as GaussianMixture
from sklearn.mixture import GaussianMixture
unlabels = range(0, np.max(y_train) + 1)
for lab in unlabels:
if self.each_class_params is not None:
# print 'eacl'
# print self.each_class_params[lab]
model = GaussianMixture(**self.each_class_params[lab])
# print 'po gmm ', model
elif len(self.same_params) > 0:
model = GaussianMixture(**self.same_params)
# print 'ewe ', model
else:
model = GaussianMixture()
X_train_lab = X_train[y_train == lab]
# logger.debug('xtr lab shape ' + str(X_train_lab))
model.fit(X_train_lab)
self.models.insert(lab, model)
def __str__(self):
if self.each_class_params is not None:
return "GMMClassificator(" + str(self.each_class_params) + ')'
else:
return "GMMClassificator(" + str(self.same_params) + ')'
def predict(self, X_test):
X_test = np.asarray(X_test)
logger.debug(str(X_test.shape))
logger.debug(str(X_test))
scores = np.zeros([X_test.shape[0], len(self.models)])
for lab in range(0, len(self.models)):
logger.debug('means shape' + str(self.models[lab].means_.shape))
sc = self.models[lab].score_samples(X_test)
scores[:, lab] = sc
pred = np.argmax(scores, 1)
return pred
| bsd-3-clause |
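``GMMClassifier`` above fits one mixture model per label and predicts by taking the argmax of per-class log-likelihood scores. The same decision rule with a single 1-D Gaussian per class, using only the stdlib (a toy stand-in for the scikit-learn ``GaussianMixture`` models; the data is invented):

```python
import math


def fit_gaussians(X, y):
    """Fit mean/variance of one 1-D Gaussian per class label."""
    models = {}
    for lab in set(y):
        vals = [x for x, label in zip(X, y) if label == lab]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals) or 1e-9
        models[lab] = (mean, var)
    return models


def log_pdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)


def predict(models, X):
    # argmax over per-class log-likelihood, like np.argmax(scores, 1)
    return [max(models, key=lambda lab: log_pdf(x, *models[lab])) for x in X]


models = fit_gaussians([0.1, 0.2, 0.15, 5.0, 5.2, 4.9], [0, 0, 0, 1, 1, 1])
print(predict(models, [0.12, 5.1]))  # [0, 1]
```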
semonte/intellij-community | python/helpers/pycharm/teamcity/unittestpy.py | 5 | 11139 | # coding=utf-8
import sys
from unittest import TestResult, TextTestRunner
import datetime
import re
from teamcity.messages import TeamcityServiceMessages
from teamcity.common import is_string, get_class_fullname, convert_error_to_string, \
dump_test_stdout, dump_test_stderr, get_exception_message, to_unicode, FlushingStringIO
_real_stdout = sys.stdout
_real_stderr = sys.stderr
_ERROR_HOLDERS_FQN = ("unittest.suite._ErrorHolder", "unittest2.suite._ErrorHolder")
class TeamcityTestResult(TestResult):
separator2 = "\n"
# noinspection PyUnusedLocal
def __init__(self, stream=_real_stdout, descriptions=None, verbosity=None):
super(TeamcityTestResult, self).__init__()
# Some code may ask for self.failfast, see unittest2.case.TestCase.subTest
self.failfast = getattr(self, "failfast", False)
self.test_started_datetime_map = {}
self.failed_tests = set()
self.subtest_failures = {}
self.messages = TeamcityServiceMessages(_real_stdout)
self.current_test_id = None
@staticmethod
def get_test_id(test):
if is_string(test):
return test
test_class_fullname = get_class_fullname(test)
test_id = test.id()
if test_class_fullname in _ERROR_HOLDERS_FQN:
# patch setUpModule (__main__) -> __main__.setUpModule
return re.sub(r'^(.*) \((.*)\)$', r'\2.\1', test_id)
# Force test_id for doctests
if test_class_fullname != "doctest.DocTestCase":
desc = test.shortDescription()
test_method_name = getattr(test, "_testMethodName", "")
if desc and desc != test_id and desc != test_method_name:
return "%s (%s)" % (test_id, desc.replace('.', '_'))
return test_id
def addSuccess(self, test):
super(TeamcityTestResult, self).addSuccess(test)
def addExpectedFailure(self, test, err):
_super = super(TeamcityTestResult, self)
if hasattr(_super, "addExpectedFailure"):
_super.addExpectedFailure(test, err)
err = convert_error_to_string(err)
test_id = self.get_test_id(test)
self.messages.testIgnored(test_id, message="Expected failure: " + err, flowId=test_id)
def get_subtest_block_id(self, test, subtest):
test_id = self.get_test_id(test)
subtest_id = self.get_test_id(subtest)
if subtest_id.startswith(test_id):
block_id = subtest_id[len(test_id):].strip()
else:
block_id = subtest_id
if len(block_id) == 0:
block_id = test_id
return block_id
def addSkip(self, test, reason=""):
if sys.version_info >= (2, 7):
super(TeamcityTestResult, self).addSkip(test, reason)
if reason:
if isinstance(reason, Exception):
reason_str = ": " + get_exception_message(reason)
else:
reason_str = ": " + to_unicode(reason)
else:
reason_str = ""
test_class_name = get_class_fullname(test)
if test_class_name == "unittest.case._SubTest" or test_class_name == "unittest2.case._SubTest":
parent_test = test.test_case
parent_test_id = self.get_test_id(parent_test)
subtest = test
block_id = self.get_subtest_block_id(parent_test, subtest)
self.messages.subTestBlockOpened(block_id, subTestResult="Skip", flowId=parent_test_id)
self.messages.testStdOut(parent_test_id, out="SubTest skipped" + reason_str + "\n", flowId=parent_test_id)
self.messages.blockClosed(block_id, flowId=parent_test_id)
else:
test_id = self.get_test_id(test)
if test_id not in self.test_started_datetime_map:
# Test ignored without startTest. Handle start and finish events ourselves
self.messages.testStarted(test_id, flowId=test_id)
self.messages.testIgnored(test_id, message="Skipped" + reason_str, flowId=test_id)
self.messages.testFinished(test_id, flowId=test_id)
else:
self.messages.testIgnored(test_id, message="Skipped" + reason_str, flowId=test_id)
def addUnexpectedSuccess(self, test):
_super = super(TeamcityTestResult, self)
if hasattr(_super, "addUnexpectedSuccess"):
_super.addUnexpectedSuccess(test)
test_id = self.get_test_id(test)
self.messages.testFailed(test_id, message='Failure',
details="Test should not succeed since it's marked with @unittest.expectedFailure",
flowId=test_id)
def addError(self, test, err, *k):
super(TeamcityTestResult, self).addError(test, err)
test_class = get_class_fullname(test)
if test_class in _ERROR_HOLDERS_FQN:
# This is a standalone error
test_id = self.get_test_id(test)
self.messages.testStarted(test_id, flowId=test_id)
self.report_fail(test, 'Failure', err)
self.messages.testFinished(test_id, flowId=test_id)
elif get_class_fullname(err[0]) == "unittest2.case.SkipTest":
message = ""
if hasattr(err[1], "message"):
message = getattr(err[1], "message", "")
elif hasattr(err[1], "args"):
message = getattr(err[1], "args", [""])[0]
self.addSkip(test, message)
else:
self.report_fail(test, 'Error', err)
def addFailure(self, test, err, *k):
super(TeamcityTestResult, self).addFailure(test, err)
self.report_fail(test, 'Failure', err)
def addSubTest(self, test, subtest, err):
_super = super(TeamcityTestResult, self)
if hasattr(_super, "addSubTest"):
_super.addSubTest(test, subtest, err)
test_id = self.get_test_id(test)
subtest_id = self.get_test_id(subtest)
if subtest_id.startswith(test_id):
# Replace "." -> "_" since '.' is a test hierarchy separator
# See i.e. https://github.com/JetBrains/teamcity-messages/issues/134 (https://youtrack.jetbrains.com/issue/PY-23846)
block_id = subtest_id[len(test_id):].strip().replace(".", "_")
else:
block_id = subtest_id
if len(block_id) == 0:
block_id = subtest_id
if err is not None:
self.add_subtest_failure(test_id, block_id)
if issubclass(err[0], test.failureException):
self.messages.subTestBlockOpened(block_id, subTestResult="Failure", flowId=test_id)
self.messages.testStdErr(test_id, out="SubTest failure: %s\n" % convert_error_to_string(err), flowId=test_id)
self.messages.blockClosed(block_id, flowId=test_id)
else:
self.messages.subTestBlockOpened(block_id, subTestResult="Error", flowId=test_id)
self.messages.testStdErr(test_id, out="SubTest error: %s\n" % convert_error_to_string(err), flowId=test_id)
self.messages.blockClosed(block_id, flowId=test_id)
else:
self.messages.subTestBlockOpened(block_id, subTestResult="Success", flowId=test_id)
self.messages.blockClosed(block_id, flowId=test_id)
def add_subtest_failure(self, test_id, subtest_block_id):
fail_array = self.subtest_failures.get(test_id, [])
fail_array.append(subtest_block_id)
self.subtest_failures[test_id] = fail_array
def get_subtest_failure(self, test_id):
fail_array = self.subtest_failures.get(test_id, [])
return ", ".join(fail_array)
def report_fail(self, test, fail_type, err):
test_id = self.get_test_id(test)
if is_string(err):
details = err
elif get_class_fullname(err) == "twisted.python.failure.Failure":
details = err.getTraceback()
else:
details = convert_error_to_string(err)
subtest_failures = self.get_subtest_failure(test_id)
if subtest_failures:
details = "Failed subtests list: " + subtest_failures + "\n\n" + details.strip()
details = details.strip()
self.messages.testFailed(test_id, message=fail_type, details=details, flowId=test_id)
self.failed_tests.add(test_id)
def startTest(self, test):
test_id = self.get_test_id(test)
self.current_test_id = test_id
super(TeamcityTestResult, self).startTest(test)
self.test_started_datetime_map[test_id] = datetime.datetime.now()
self.messages.testStarted(test_id, captureStandardOutput='true', flowId=test_id)
def _dump_test_stderr(self, data):
if self.current_test_id is not None:
dump_test_stderr(self.messages, self.current_test_id, self.current_test_id, data)
else:
_real_stderr.write(data)
def _dump_test_stdout(self, data):
if self.current_test_id is not None:
dump_test_stdout(self.messages, self.current_test_id, self.current_test_id, data)
else:
_real_stdout.write(data)
def _setupStdout(self):
if getattr(self, 'buffer', None):
self._stderr_buffer = FlushingStringIO(self._dump_test_stderr)
self._stdout_buffer = FlushingStringIO(self._dump_test_stdout)
sys.stdout = self._stdout_buffer
sys.stderr = self._stderr_buffer
def stopTest(self, test):
test_id = self.get_test_id(test)
if getattr(self, 'buffer', None):
# Do not allow super() method to print output by itself
self._mirrorOutput = False
output = sys.stdout.getvalue()
if output:
dump_test_stdout(self.messages, test_id, test_id, output)
error = sys.stderr.getvalue()
if error:
dump_test_stderr(self.messages, test_id, test_id, error)
super(TeamcityTestResult, self).stopTest(test)
self.current_test_id = None
if test_id not in self.failed_tests:
subtest_failures = self.get_subtest_failure(test_id)
if subtest_failures:
self.report_fail(test, "One or more subtests failed", "")
time_diff = datetime.datetime.now() - self.test_started_datetime_map[test_id]
self.messages.testFinished(test_id, testDuration=time_diff, flowId=test_id)
def printErrors(self):
pass
class TeamcityTestRunner(TextTestRunner):
resultclass = TeamcityTestResult
if sys.version_info < (2, 7):
def _makeResult(self):
return TeamcityTestResult(self.stream, self.descriptions, self.verbosity)
def run(self, test):
# noinspection PyBroadException
try:
total_tests = test.countTestCases()
TeamcityServiceMessages(_real_stdout).testCount(total_tests)
except:
pass
return super(TeamcityTestRunner, self).run(test)
if __name__ == '__main__':
from unittest import main
main(module=None, testRunner=TeamcityTestRunner())
| apache-2.0 |
anthgur/servo | tests/wpt/web-platform-tests/tools/third_party/pytest/testing/test_collection.py | 13 | 31099 |
from __future__ import absolute_import, division, print_function
import pytest
import py
import _pytest._code
from _pytest.main import Session, EXIT_NOTESTSCOLLECTED, _in_venv
class TestCollector(object):
def test_collect_versus_item(self):
from pytest import Collector, Item
assert not issubclass(Collector, Item)
assert not issubclass(Item, Collector)
def test_compat_attributes(self, testdir, recwarn):
modcol = testdir.getmodulecol("""
def test_pass(): pass
def test_fail(): assert 0
""")
recwarn.clear()
assert modcol.Module == pytest.Module
assert modcol.Class == pytest.Class
assert modcol.Item == pytest.Item
assert modcol.File == pytest.File
assert modcol.Function == pytest.Function
def test_check_equality(self, testdir):
modcol = testdir.getmodulecol("""
def test_pass(): pass
def test_fail(): assert 0
""")
fn1 = testdir.collect_by_name(modcol, "test_pass")
assert isinstance(fn1, pytest.Function)
fn2 = testdir.collect_by_name(modcol, "test_pass")
assert isinstance(fn2, pytest.Function)
assert fn1 == fn2
assert fn1 != modcol
if py.std.sys.version_info < (3, 0):
assert cmp(fn1, fn2) == 0
assert hash(fn1) == hash(fn2)
fn3 = testdir.collect_by_name(modcol, "test_fail")
assert isinstance(fn3, pytest.Function)
assert not (fn1 == fn3)
assert fn1 != fn3
for fn in fn1, fn2, fn3:
assert fn != 3
assert fn != modcol
assert fn != [1, 2, 3]
assert [1, 2, 3] != fn
assert modcol != fn
def test_getparent(self, testdir):
modcol = testdir.getmodulecol("""
class TestClass(object):
def test_foo():
pass
""")
cls = testdir.collect_by_name(modcol, "TestClass")
fn = testdir.collect_by_name(
testdir.collect_by_name(cls, "()"), "test_foo")
parent = fn.getparent(pytest.Module)
assert parent is modcol
parent = fn.getparent(pytest.Function)
assert parent is fn
parent = fn.getparent(pytest.Class)
assert parent is cls
def test_getcustomfile_roundtrip(self, testdir):
hello = testdir.makefile(".xxx", hello="world")
testdir.makepyfile(conftest="""
import pytest
class CustomFile(pytest.File):
pass
def pytest_collect_file(path, parent):
if path.ext == ".xxx":
return CustomFile(path, parent=parent)
""")
node = testdir.getpathnode(hello)
assert isinstance(node, pytest.File)
assert node.name == "hello.xxx"
nodes = node.session.perform_collect([node.nodeid], genitems=False)
assert len(nodes) == 1
assert isinstance(nodes[0], pytest.File)
def test_can_skip_class_with_test_attr(self, testdir):
"""Assure test class is skipped when using `__test__=False` (See #2007)."""
testdir.makepyfile("""
class TestFoo(object):
__test__ = False
def __init__(self):
pass
def test_foo():
assert True
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'collected 0 items',
'*no tests ran in*',
])
class TestCollectFS(object):
def test_ignored_certain_directories(self, testdir):
tmpdir = testdir.tmpdir
tmpdir.ensure("build", 'test_notfound.py')
tmpdir.ensure("dist", 'test_notfound.py')
tmpdir.ensure("_darcs", 'test_notfound.py')
tmpdir.ensure("CVS", 'test_notfound.py')
tmpdir.ensure("{arch}", 'test_notfound.py')
tmpdir.ensure(".whatever", 'test_notfound.py')
tmpdir.ensure(".bzr", 'test_notfound.py')
tmpdir.ensure("normal", 'test_found.py')
for x in tmpdir.visit("test_*.py"):
x.write("def test_hello(): pass")
result = testdir.runpytest("--collect-only")
s = result.stdout.str()
assert "test_notfound" not in s
assert "test_found" in s
@pytest.mark.parametrize('fname',
("activate", "activate.csh", "activate.fish",
"Activate", "Activate.bat", "Activate.ps1"))
def test_ignored_virtualenvs(self, testdir, fname):
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
testdir.tmpdir.ensure("virtual", bindir, fname)
testfile = testdir.tmpdir.ensure("virtual", "test_invenv.py")
testfile.write("def test_hello(): pass")
# by default, ignore tests inside a virtualenv
result = testdir.runpytest()
assert "test_invenv" not in result.stdout.str()
# allow test collection if user insists
result = testdir.runpytest("--collect-in-virtualenv")
assert "test_invenv" in result.stdout.str()
# allow test collection if user directly passes in the directory
result = testdir.runpytest("virtual")
assert "test_invenv" in result.stdout.str()
@pytest.mark.parametrize('fname',
("activate", "activate.csh", "activate.fish",
"Activate", "Activate.bat", "Activate.ps1"))
def test_ignored_virtualenvs_norecursedirs_precedence(self, testdir, fname):
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
# norecursedirs takes priority
testdir.tmpdir.ensure(".virtual", bindir, fname)
testfile = testdir.tmpdir.ensure(".virtual", "test_invenv.py")
testfile.write("def test_hello(): pass")
result = testdir.runpytest("--collect-in-virtualenv")
assert "test_invenv" not in result.stdout.str()
# ...unless the virtualenv is explicitly given on the CLI
result = testdir.runpytest("--collect-in-virtualenv", ".virtual")
assert "test_invenv" in result.stdout.str()
@pytest.mark.parametrize('fname',
("activate", "activate.csh", "activate.fish",
"Activate", "Activate.bat", "Activate.ps1"))
def test__in_venv(self, testdir, fname):
"""Directly test the virtual env detection function"""
bindir = "Scripts" if py.std.sys.platform.startswith("win") else "bin"
# no bin/activate, not a virtualenv
base_path = testdir.tmpdir.mkdir('venv')
assert _in_venv(base_path) is False
# with bin/activate, totally a virtualenv
base_path.ensure(bindir, fname)
assert _in_venv(base_path) is True
def test_custom_norecursedirs(self, testdir):
testdir.makeini("""
[pytest]
norecursedirs = mydir xyz*
""")
tmpdir = testdir.tmpdir
tmpdir.ensure("mydir", "test_hello.py").write("def test_1(): pass")
tmpdir.ensure("xyz123", "test_2.py").write("def test_2(): 0/0")
tmpdir.ensure("xy", "test_ok.py").write("def test_3(): pass")
rec = testdir.inline_run()
rec.assertoutcome(passed=1)
rec = testdir.inline_run("xyz123/test_2.py")
rec.assertoutcome(failed=1)
def test_testpaths_ini(self, testdir, monkeypatch):
testdir.makeini("""
[pytest]
testpaths = gui uts
""")
tmpdir = testdir.tmpdir
tmpdir.ensure("env", "test_1.py").write("def test_env(): pass")
tmpdir.ensure("gui", "test_2.py").write("def test_gui(): pass")
tmpdir.ensure("uts", "test_3.py").write("def test_uts(): pass")
# executing from rootdir only tests from `testpaths` directories
# are collected
items, reprec = testdir.inline_genitems('-v')
assert [x.name for x in items] == ['test_gui', 'test_uts']
# check that explicitly passing directories in the command-line
# collects the tests
for dirname in ('env', 'gui', 'uts'):
items, reprec = testdir.inline_genitems(tmpdir.join(dirname))
assert [x.name for x in items] == ['test_%s' % dirname]
# changing cwd to each subdirectory and running pytest without
# arguments collects the tests in that directory normally
for dirname in ('env', 'gui', 'uts'):
monkeypatch.chdir(testdir.tmpdir.join(dirname))
items, reprec = testdir.inline_genitems()
assert [x.name for x in items] == ['test_%s' % dirname]
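The three parametrized virtualenv tests above all probe the same rule: a directory counts as a virtualenv when it contains an activate script under `bin/` (or `Scripts/` on Windows). A self-contained sketch of that check, with assumed names (the real implementation is `_in_venv`, imported from `_pytest.main` at the top of this file):

```python
import os
import sys

# The activate-script spellings the tests above parametrize over.
ACTIVATE_SCRIPTS = ("activate", "activate.csh", "activate.fish",
                    "Activate", "Activate.bat", "Activate.ps1")

def looks_like_venv(path):
    """Heuristic: a directory is a virtualenv if it holds an activate script."""
    bindir = "Scripts" if sys.platform.startswith("win") else "bin"
    return any(os.path.isfile(os.path.join(path, bindir, name))
               for name in ACTIVATE_SCRIPTS)
```

By default pytest skips such directories during collection, unless `--collect-in-virtualenv` is given or the directory is passed explicitly on the command line, as the tests above verify.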
class TestCollectPluginHookRelay(object):
def test_pytest_collect_file(self, testdir):
wascalled = []
class Plugin(object):
def pytest_collect_file(self, path, parent):
if not path.basename.startswith("."):
# Ignore hidden files, e.g. .testmondata.
wascalled.append(path)
testdir.makefile(".abc", "xyz")
pytest.main([testdir.tmpdir], plugins=[Plugin()])
assert len(wascalled) == 1
assert wascalled[0].ext == '.abc'
def test_pytest_collect_directory(self, testdir):
wascalled = []
class Plugin(object):
def pytest_collect_directory(self, path, parent):
wascalled.append(path.basename)
testdir.mkdir("hello")
testdir.mkdir("world")
pytest.main(testdir.tmpdir, plugins=[Plugin()])
assert "hello" in wascalled
assert "world" in wascalled
class TestPrunetraceback(object):
def test_custom_repr_failure(self, testdir):
p = testdir.makepyfile("""
import not_exists
""")
testdir.makeconftest("""
import pytest
def pytest_collect_file(path, parent):
return MyFile(path, parent)
class MyError(Exception):
pass
class MyFile(pytest.File):
def collect(self):
raise MyError()
def repr_failure(self, excinfo):
if excinfo.errisinstance(MyError):
return "hello world"
return pytest.File.repr_failure(self, excinfo)
""")
result = testdir.runpytest(p)
result.stdout.fnmatch_lines([
"*ERROR collecting*",
"*hello world*",
])
@pytest.mark.xfail(reason="other mechanism for adding to reporting needed")
def test_collect_report_postprocessing(self, testdir):
p = testdir.makepyfile("""
import not_exists
""")
testdir.makeconftest("""
import pytest
@pytest.hookimpl(hookwrapper=True)
def pytest_make_collect_report():
outcome = yield
rep = outcome.get_result()
rep.headerlines += ["header1"]
outcome.force_result(rep)
""")
result = testdir.runpytest(p)
result.stdout.fnmatch_lines([
"*ERROR collecting*",
"*header1*",
])
class TestCustomConftests(object):
def test_ignore_collect_path(self, testdir):
testdir.makeconftest("""
def pytest_ignore_collect(path, config):
return path.basename.startswith("x") or \
path.basename == "test_one.py"
""")
sub = testdir.mkdir("xy123")
sub.ensure("test_hello.py").write("syntax error")
sub.join("conftest.py").write("syntax error")
testdir.makepyfile("def test_hello(): pass")
testdir.makepyfile(test_one="syntax error")
result = testdir.runpytest("--fulltrace")
assert result.ret == 0
result.stdout.fnmatch_lines(["*1 passed*"])
def test_ignore_collect_not_called_on_argument(self, testdir):
testdir.makeconftest("""
def pytest_ignore_collect(path, config):
return True
""")
p = testdir.makepyfile("def test_hello(): pass")
result = testdir.runpytest(p)
assert result.ret == 0
result.stdout.fnmatch_lines("*1 passed*")
result = testdir.runpytest()
assert result.ret == EXIT_NOTESTSCOLLECTED
result.stdout.fnmatch_lines("*collected 0 items*")
def test_collectignore_exclude_on_option(self, testdir):
testdir.makeconftest("""
collect_ignore = ['hello', 'test_world.py']
def pytest_addoption(parser):
parser.addoption("--XX", action="store_true", default=False)
def pytest_configure(config):
if config.getvalue("XX"):
collect_ignore[:] = []
""")
testdir.mkdir("hello")
testdir.makepyfile(test_world="def test_hello(): pass")
result = testdir.runpytest()
assert result.ret == EXIT_NOTESTSCOLLECTED
assert "passed" not in result.stdout.str()
result = testdir.runpytest("--XX")
assert result.ret == 0
assert "passed" in result.stdout.str()
def test_pytest_fs_collect_hooks_are_seen(self, testdir):
testdir.makeconftest("""
import pytest
class MyModule(pytest.Module):
pass
def pytest_collect_file(path, parent):
if path.ext == ".py":
return MyModule(path, parent)
""")
testdir.mkdir("sub")
testdir.makepyfile("def test_x(): pass")
result = testdir.runpytest("--collect-only")
result.stdout.fnmatch_lines([
"*MyModule*",
"*test_x*"
])
def test_pytest_collect_file_from_sister_dir(self, testdir):
sub1 = testdir.mkpydir("sub1")
sub2 = testdir.mkpydir("sub2")
conf1 = testdir.makeconftest("""
import pytest
class MyModule1(pytest.Module):
pass
def pytest_collect_file(path, parent):
if path.ext == ".py":
return MyModule1(path, parent)
""")
conf1.move(sub1.join(conf1.basename))
conf2 = testdir.makeconftest("""
import pytest
class MyModule2(pytest.Module):
pass
def pytest_collect_file(path, parent):
if path.ext == ".py":
return MyModule2(path, parent)
""")
conf2.move(sub2.join(conf2.basename))
p = testdir.makepyfile("def test_x(): pass")
p.copy(sub1.join(p.basename))
p.copy(sub2.join(p.basename))
result = testdir.runpytest("--collect-only")
result.stdout.fnmatch_lines([
"*MyModule1*",
"*MyModule2*",
"*test_x*"
])
class TestSession(object):
def test_parsearg(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
subdir = testdir.mkdir("sub")
subdir.ensure("__init__.py")
target = subdir.join(p.basename)
p.move(target)
subdir.chdir()
config = testdir.parseconfig(p.basename)
rcol = Session(config=config)
assert rcol.fspath == subdir
parts = rcol._parsearg(p.basename)
assert parts[0] == target
assert len(parts) == 1
parts = rcol._parsearg(p.basename + "::test_func")
assert parts[0] == target
assert parts[1] == "test_func"
assert len(parts) == 2
def test_collect_topdir(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
id = "::".join([p.basename, "test_func"])
# XXX migrate to collectonly? (see below)
config = testdir.parseconfig(id)
topdir = testdir.tmpdir
rcol = Session(config)
assert topdir == rcol.fspath
# rootid = rcol.nodeid
# root2 = rcol.perform_collect([rcol.nodeid], genitems=False)[0]
# assert root2 == rcol, rootid
colitems = rcol.perform_collect([rcol.nodeid], genitems=False)
assert len(colitems) == 1
assert colitems[0].fspath == p
def get_reported_items(self, hookrec):
"""Return pytest.Item instances reported by the pytest_collectreport hook"""
calls = hookrec.getcalls('pytest_collectreport')
return [x for call in calls for x in call.report.result
if isinstance(x, pytest.Item)]
def test_collect_protocol_single_function(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
id = "::".join([p.basename, "test_func"])
items, hookrec = testdir.inline_genitems(id)
item, = items
assert item.name == "test_func"
newid = item.nodeid
assert newid == id
py.std.pprint.pprint(hookrec.calls)
topdir = testdir.tmpdir # noqa
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == topdir"),
("pytest_make_collect_report", "collector.fspath == topdir"),
("pytest_collectstart", "collector.fspath == p"),
("pytest_make_collect_report", "collector.fspath == p"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.result[0].name == 'test_func'"),
])
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ['test_func']
def test_collect_protocol_method(self, testdir):
p = testdir.makepyfile("""
class TestClass(object):
def test_method(self):
pass
""")
normid = p.basename + "::TestClass::()::test_method"
for id in [p.basename,
p.basename + "::TestClass",
p.basename + "::TestClass::()",
normid,
]:
items, hookrec = testdir.inline_genitems(id)
assert len(items) == 1
assert items[0].name == "test_method"
newid = items[0].nodeid
assert newid == normid
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ['test_method']
def test_collect_custom_nodes_multi_id(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
testdir.makeconftest("""
import pytest
class SpecialItem(pytest.Item):
def runtest(self):
return # ok
class SpecialFile(pytest.File):
def collect(self):
return [SpecialItem(name="check", parent=self)]
def pytest_collect_file(path, parent):
if path.basename == %r:
return SpecialFile(fspath=path, parent=parent)
""" % p.basename)
id = p.basename
items, hookrec = testdir.inline_genitems(id)
py.std.pprint.pprint(hookrec.calls)
assert len(items) == 2
hookrec.assert_contains([
("pytest_collectstart",
"collector.fspath == collector.session.fspath"),
("pytest_collectstart",
"collector.__class__.__name__ == 'SpecialFile'"),
("pytest_collectstart",
"collector.__class__.__name__ == 'Module'"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid.startswith(p.basename)"),
])
assert len(self.get_reported_items(hookrec)) == 2
def test_collect_subdir_event_ordering(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
aaa = testdir.mkpydir("aaa")
test_aaa = aaa.join("test_aaa.py")
p.move(test_aaa)
items, hookrec = testdir.inline_genitems()
assert len(items) == 1
py.std.pprint.pprint(hookrec.calls)
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport",
"report.nodeid.startswith('aaa/test_aaa.py')"),
])
def test_collect_two_commandline_args(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
aaa = testdir.mkpydir("aaa")
bbb = testdir.mkpydir("bbb")
test_aaa = aaa.join("test_aaa.py")
p.copy(test_aaa)
test_bbb = bbb.join("test_bbb.py")
p.move(test_bbb)
id = "."
items, hookrec = testdir.inline_genitems(id)
assert len(items) == 2
py.std.pprint.pprint(hookrec.calls)
hookrec.assert_contains([
("pytest_collectstart", "collector.fspath == test_aaa"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid == 'aaa/test_aaa.py'"),
("pytest_collectstart", "collector.fspath == test_bbb"),
("pytest_pycollect_makeitem", "name == 'test_func'"),
("pytest_collectreport", "report.nodeid == 'bbb/test_bbb.py'"),
])
def test_serialization_byid(self, testdir):
testdir.makepyfile("def test_func(): pass")
items, hookrec = testdir.inline_genitems()
assert len(items) == 1
item, = items
items2, hookrec = testdir.inline_genitems(item.nodeid)
item2, = items2
assert item2.name == item.name
assert item2.fspath == item.fspath
def test_find_byid_without_instance_parents(self, testdir):
p = testdir.makepyfile("""
class TestClass(object):
def test_method(self):
pass
""")
arg = p.basename + "::TestClass::test_method"
items, hookrec = testdir.inline_genitems(arg)
assert len(items) == 1
item, = items
assert item.nodeid.endswith("TestClass::()::test_method")
# ensure we are reporting the collection of the single test item (#2464)
assert [x.name for x in self.get_reported_items(hookrec)] == ['test_method']
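The session tests above repeatedly build and compare `"::"`-separated node ids such as `test_x.py::TestClass::()::test_method`. A hypothetical mini-parser (not pytest API) makes the shape explicit: the first part is a filesystem path, the remaining parts are node names:

```python
def parse_nodeid(arg):
    """Split a pytest-style id into (path, list of node names)."""
    parts = arg.split("::")
    return parts[0], parts[1:]

path, names = parse_nodeid("test_x.py::TestClass::()::test_method")
# path == "test_x.py"; names == ["TestClass", "()", "test_method"]
```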
class Test_getinitialnodes(object):
def test_global_file(self, testdir, tmpdir):
x = tmpdir.ensure("x.py")
with tmpdir.as_cwd():
config = testdir.parseconfigure(x)
col = testdir.getnode(config, x)
assert isinstance(col, pytest.Module)
assert col.name == 'x.py'
assert col.parent.parent is None
for col in col.listchain():
assert col.config is config
def test_pkgfile(self, testdir):
tmpdir = testdir.tmpdir
subdir = tmpdir.join("subdir")
x = subdir.ensure("x.py")
subdir.ensure("__init__.py")
with subdir.as_cwd():
config = testdir.parseconfigure(x)
col = testdir.getnode(config, x)
assert isinstance(col, pytest.Module)
assert col.name == 'x.py'
assert col.parent.parent is None
for col in col.listchain():
assert col.config is config
class Test_genitems(object):
def test_check_collect_hashes(self, testdir):
p = testdir.makepyfile("""
def test_1():
pass
def test_2():
pass
""")
p.copy(p.dirpath(p.purebasename + "2" + ".py"))
items, reprec = testdir.inline_genitems(p.dirpath())
assert len(items) == 4
for numi, i in enumerate(items):
for numj, j in enumerate(items):
if numj != numi:
assert hash(i) != hash(j)
assert i != j
def test_example_items1(self, testdir):
p = testdir.makepyfile('''
def testone():
pass
class TestX(object):
def testmethod_one(self):
pass
class TestY(TestX):
pass
''')
items, reprec = testdir.inline_genitems(p)
assert len(items) == 3
assert items[0].name == 'testone'
assert items[1].name == 'testmethod_one'
assert items[2].name == 'testmethod_one'
# let's also test getmodpath here
assert items[0].getmodpath() == "testone"
assert items[1].getmodpath() == "TestX.testmethod_one"
assert items[2].getmodpath() == "TestY.testmethod_one"
s = items[0].getmodpath(stopatmodule=False)
assert s.endswith("test_example_items1.testone")
print(s)
def test_class_and_functions_discovery_using_glob(self, testdir):
"""
tests that python_classes and python_functions config options work
as prefixes and glob-like patterns (issue #600).
"""
testdir.makeini("""
[pytest]
python_classes = *Suite Test
python_functions = *_test test
""")
p = testdir.makepyfile('''
class MyTestSuite(object):
def x_test(self):
pass
class TestCase(object):
def test_y(self):
pass
''')
items, reprec = testdir.inline_genitems(p)
ids = [x.getmodpath() for x in items]
assert ids == ['MyTestSuite.x_test', 'TestCase.test_y']
def test_matchnodes_two_collections_same_file(testdir):
testdir.makeconftest("""
import pytest
def pytest_configure(config):
config.pluginmanager.register(Plugin2())
class Plugin2(object):
def pytest_collect_file(self, path, parent):
if path.ext == ".abc":
return MyFile2(path, parent)
def pytest_collect_file(path, parent):
if path.ext == ".abc":
return MyFile1(path, parent)
class MyFile1(pytest.Item, pytest.File):
def runtest(self):
pass
class MyFile2(pytest.File):
def collect(self):
return [Item2("hello", parent=self)]
class Item2(pytest.Item):
def runtest(self):
pass
""")
p = testdir.makefile(".abc", "")
result = testdir.runpytest()
assert result.ret == 0
result.stdout.fnmatch_lines([
"*2 passed*",
])
res = testdir.runpytest("%s::hello" % p.basename)
res.stdout.fnmatch_lines([
"*1 passed*",
])
class TestNodekeywords(object):
def test_no_under(self, testdir):
modcol = testdir.getmodulecol("""
def test_pass(): pass
def test_fail(): assert 0
""")
values = list(modcol.keywords)
assert modcol.name in values
for x in values:
assert not x.startswith("_")
assert modcol.name in repr(modcol.keywords)
def test_issue345(self, testdir):
testdir.makepyfile("""
def test_should_not_be_selected():
assert False, 'I should not have been selected to run'
def test___repr__():
pass
""")
reprec = testdir.inline_run("-k repr")
reprec.assertoutcome(passed=1, failed=0)
COLLECTION_ERROR_PY_FILES = dict(
test_01_failure="""
def test_1():
assert False
""",
test_02_import_error="""
import asdfasdfasdf
def test_2():
assert True
""",
test_03_import_error="""
import asdfasdfasdf
def test_3():
assert True
""",
test_04_success="""
def test_4():
assert True
""",
)
def test_exit_on_collection_error(testdir):
"""Verify that all collection errors are collected and no tests executed"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest()
assert res.ret == 2
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*ERROR collecting test_03_import_error.py*",
"*No module named *asdfa*",
])
def test_exit_on_collection_with_maxfail_smaller_than_n_errors(testdir):
"""
Verify collection is aborted once maxfail errors are encountered ignoring
further modules which would cause more collection errors.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--maxfail=1")
assert res.ret == 1
res.stdout.fnmatch_lines([
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
])
assert 'test_03' not in res.stdout.str()
def test_exit_on_collection_with_maxfail_bigger_than_n_errors(testdir):
"""
Verify the test run aborts due to collection errors even if maxfail count of
errors was not reached.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--maxfail=4")
assert res.ret == 2
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*ERROR collecting test_02_import_error.py*",
"*No module named *asdfa*",
"*ERROR collecting test_03_import_error.py*",
"*No module named *asdfa*",
])
def test_continue_on_collection_errors(testdir):
"""
Verify tests are executed even when collection errors occur when the
--continue-on-collection-errors flag is set
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--continue-on-collection-errors")
assert res.ret == 1
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*1 failed, 1 passed, 2 error*",
])
def test_continue_on_collection_errors_maxfail(testdir):
"""
Verify tests are executed even when collection errors occur and that maxfail
is honoured (including the collection error count).
4 tests: 2 collection errors + 1 failure + 1 success
    test_4 is never executed because the run uses --maxfail=3, which means it
    is interrupted after the 2 collection errors + 1 failure.
"""
testdir.makepyfile(**COLLECTION_ERROR_PY_FILES)
res = testdir.runpytest("--continue-on-collection-errors", "--maxfail=3")
assert res.ret == 1
res.stdout.fnmatch_lines([
"collected 2 items / 2 errors",
"*1 failed, 2 error*",
])
def test_fixture_scope_sibling_conftests(testdir):
"""Regression test case for https://github.com/pytest-dev/pytest/issues/2836"""
foo_path = testdir.mkpydir("foo")
foo_path.join("conftest.py").write(_pytest._code.Source("""
import pytest
@pytest.fixture
def fix():
return 1
"""))
foo_path.join("test_foo.py").write("def test_foo(fix): assert fix == 1")
# Tests in `food/` should not see the conftest fixture from `foo/`
food_path = testdir.mkpydir("food")
food_path.join("test_food.py").write("def test_food(fix): assert fix == 1")
res = testdir.runpytest()
assert res.ret == 1
res.stdout.fnmatch_lines([
"*ERROR at setup of test_food*",
"E*fixture 'fix' not found",
"*1 passed, 1 error*",
])
| mpl-2.0 |
js850/pele | pele/obsolete/rigid_bodies/rigid_body_system.py | 1 | 3627 |
import numpy as np
js850/pele | pele/obsolete/rigid_bodies/rigid_body_system.py | 1 | 3627 | import numpy as np
from pele.potentials.potential import potential as basepotential
#from potentials.rigid_body_potential import RigidBodyPotential
import copy
class RigidBodySystem(basepotential):
"""
Defines a system of rigid body molecules
"""
def __init__(self, molecule_list, potential = None):
"""
molecule_list: a list of Molecule objects that define the system
potential: the class which calculates site energies and gradients
It can be attached later via the function setPotential
"""
self.molecule_list = [copy.deepcopy(mol) for mol in molecule_list]
self.nmol = len(self.molecule_list)
        if potential is not None:
self.potential = potential
self.nsites = 0
self.typelist = []
self.nsites_cum = np.zeros(self.nmol, np.int)
sitenum = 0
for i, mol in enumerate(self.molecule_list):
self.nsites_cum[i] = self.nsites
self.nsites += mol.nsites
for site in mol.sitelist:
self.typelist.append( site.type )
site.index = sitenum
sitenum += 1
self.oldcoords = np.zeros(3*2*self.nmol)
def setPotential(self, potential):
"""
attach or replace the potential object.
"""
self.potential = potential
def transformToXYZ(self, coords):
"""
convert center of mass + angle-axis coords into xyz coordinates of all the sites
"""
#self.update_coords(coords)
self.update_rot_mat(coords)
xyz = np.zeros(self.nsites*3)
isite = 0
for i, mol in enumerate(self.molecule_list):
#iaa = self.nmol * 3 + i*3
xyz[isite*3 : isite*3 + 3*mol.nsites] = mol.getxyz_rmat(rmat=mol.rotation_mat, com = coords[i*3:i*3+3] )
isite += mol.nsites
return xyz
def updateGradients(self, sitegrad, coords):
grad = np.zeros([2*self.nmol*3])
isite = 0
for i, mol in enumerate(self.molecule_list):
iaa = self.nmol*3 + i*3
comgrad, aagrad = mol.getGradients( coords[iaa:iaa+3], sitegrad[isite:isite + mol.nsites*3], recalculate_rot_mat = False )
grad[i*3 : i*3 + 3] = comgrad
grad[iaa : iaa + 3] = aagrad
isite += mol.nsites*3
return grad
def getEnergy(self, coords):
xyz = self.transformToXYZ(coords)
return self.potential.getEnergy(xyz)
def getEnergyGradient(self, coords):
xyz = self.transformToXYZ(coords)
E, xyzgrad = self.potential.getEnergyGradient(xyz)
grad = self.updateGradients(xyzgrad, coords)
return E, grad
def getxyz(self, coords):
return self.transformToXYZ(coords)
def coords_compare(self, coords):
""" return true if coords is the same as oldcoords"""
return all(coords == self.oldcoords)
def update_rot_mat(self, coords):
"""
update the com and angle-axis coords and dependents on all molecules
only do it if coords has changed.
"""
#using coords_compare makes quench almost always fail for some reason.
if self.coords_compare( coords):
return
self.oldcoords[:] = coords[:]
nmol= self.nmol
for imol, mol in enumerate(self.molecule_list):
#com = coords[ imol*3 : imol*3 + 3]
aa = coords[3*nmol + imol*3 : 3*nmol + imol*3 + 3]
mol.update_rot_mat( aa )
if __name__ == "__main__":
pass
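The angle-axis vectors stored in the second half of the coords array encode rotations; `mol.update_rot_mat(aa)` is assumed to convert each one to a 3x3 matrix. A dependency-free sketch of that conversion via Rodrigues' formula (hypothetical helper, not part of the module above):

```python
import math

def aa_to_rot_mat(aa):
    """Rodrigues' formula: angle-axis vector -> 3x3 rotation matrix (row lists)."""
    theta = math.sqrt(sum(c * c for c in aa))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in aa)          # unit rotation axis
    s, c1 = math.sin(theta), 1.0 - math.cos(theta)
    # R = I + sin(theta) K + (1 - cos(theta)) K^2, with K the cross-product matrix of k
    return [
        [1 + c1 * (kx * kx - 1), -s * kz + c1 * kx * ky,  s * ky + c1 * kx * kz],
        [ s * kz + c1 * kx * ky, 1 + c1 * (ky * ky - 1), -s * kx + c1 * ky * kz],
        [-s * ky + c1 * kx * kz,  s * kx + c1 * ky * kz, 1 + c1 * (kz * kz - 1)],
    ]

R = aa_to_rot_mat([0.0, 0.0, math.pi / 2])  # 90 degrees about z
# R maps (1, 0, 0) to (0, 1, 0)
```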
| gpl-3.0 |
couchbaselabs/priority15 | lib/couchdb/mapping.py | 9 | 22342 |
# -*- coding: utf-8 -*-
#
# Copyright (C) 2007-2009 Christopher Lenz
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.
"""Mapping from raw JSON data structures to Python objects and vice versa.
>>> from couchdb import Server
>>> server = Server()
>>> db = server.create('python-tests')
To define a document mapping, you declare a Python class that inherits from
`Document` and add any number of `Field` attributes:
>>> from couchdb.mapping import TextField, IntegerField, DateField
>>> class Person(Document):
... name = TextField()
... age = IntegerField()
... added = DateTimeField(default=datetime.now)
>>> person = Person(name='John Doe', age=42)
>>> person.store(db) #doctest: +ELLIPSIS
<Person ...>
>>> person.age
42
You can then load the data from the CouchDB server through your `Document`
subclass, and conveniently access all attributes:
>>> person = Person.load(db, person.id)
>>> old_rev = person.rev
>>> person.name
u'John Doe'
>>> person.age
42
>>> person.added #doctest: +ELLIPSIS
datetime.datetime(...)
To update a document, simply set the attributes, and then call the ``store()``
method:
>>> person.name = 'John R. Doe'
>>> person.store(db) #doctest: +ELLIPSIS
<Person ...>
If you retrieve the document from the server again, you should be getting the
updated data:
>>> person = Person.load(db, person.id)
>>> person.name
u'John R. Doe'
>>> person.rev != old_rev
True
>>> del server['python-tests']
"""
import copy
from calendar import timegm
from datetime import date, datetime, time
from decimal import Decimal
from time import strptime, struct_time
from couchdb.design import ViewDefinition
__all__ = ['Mapping', 'Document', 'Field', 'TextField', 'FloatField',
'IntegerField', 'LongField', 'BooleanField', 'DecimalField',
'DateField', 'DateTimeField', 'TimeField', 'DictField', 'ListField',
'ViewField']
__docformat__ = 'restructuredtext en'
DEFAULT = object()
class Field(object):
"""Basic unit for mapping a piece of data between Python and JSON.
Instances of this class can be added to subclasses of `Document` to describe
the mapping of a document.
"""
def __init__(self, name=None, default=None):
self.name = name
self.default = default
def __get__(self, instance, owner):
if instance is None:
return self
value = instance._data.get(self.name)
if value is not None:
value = self._to_python(value)
elif self.default is not None:
default = self.default
if callable(default):
default = default()
value = default
return value
def __set__(self, instance, value):
if value is not None:
value = self._to_json(value)
instance._data[self.name] = value
def _to_python(self, value):
return unicode(value)
def _to_json(self, value):
return self._to_python(value)
class MappingMeta(type):
def __new__(cls, name, bases, d):
fields = {}
for base in bases:
if hasattr(base, '_fields'):
fields.update(base._fields)
for attrname, attrval in d.items():
if isinstance(attrval, Field):
if not attrval.name:
attrval.name = attrname
fields[attrname] = attrval
d['_fields'] = fields
return type.__new__(cls, name, bases, d)
class Mapping(object):
__metaclass__ = MappingMeta
def __init__(self, **values):
self._data = {}
for attrname, field in self._fields.items():
if attrname in values:
setattr(self, attrname, values.pop(attrname))
else:
setattr(self, attrname, getattr(self, attrname))
def __iter__(self):
return iter(self._data)
def __len__(self):
return len(self._data or ())
def __delitem__(self, name):
del self._data[name]
def __getitem__(self, name):
return self._data[name]
def __setitem__(self, name, value):
self._data[name] = value
def get(self, name, default=None):
return self._data.get(name, default)
def setdefault(self, name, default):
return self._data.setdefault(name, default)
def unwrap(self):
return self._data
@classmethod
def build(cls, **d):
fields = {}
for attrname, attrval in d.items():
if not attrval.name:
attrval.name = attrname
fields[attrname] = attrval
d['_fields'] = fields
return type('AnonymousStruct', (cls,), d)
@classmethod
def wrap(cls, data):
instance = cls()
instance._data = data
return instance
def _to_python(self, value):
return self.wrap(value)
def _to_json(self, value):
return self.unwrap()
class ViewField(object):
r"""Descriptor that can be used to bind a view definition to a property of
a `Document` class.
>>> class Person(Document):
... name = TextField()
... age = IntegerField()
... by_name = ViewField('people', '''\
... function(doc) {
... emit(doc.name, doc);
... }''')
>>> Person.by_name
<ViewDefinition '_design/people/_view/by_name'>
>>> print Person.by_name.map_fun
function(doc) {
emit(doc.name, doc);
}
That property can be used as a function, which will execute the view.
>>> from couchdb import Database
>>> db = Database('python-tests')
>>> Person.by_name(db, count=3)
<ViewResults <PermanentView '_design/people/_view/by_name'> {'count': 3}>
The results produced by the view are automatically wrapped in the
`Document` subclass the descriptor is bound to. In this example, it would
return instances of the `Person` class. But please note that this requires
the values of the view results to be dictionaries that can be mapped to the
mapping defined by the containing `Document` class. Alternatively, the
``include_docs`` query option can be used to inline the actual documents in
the view results, which will then be used instead of the values.
If you use Python view functions, this class can also be used as a
decorator:
>>> class Person(Document):
... name = TextField()
... age = IntegerField()
...
... @ViewField.define('people')
... def by_name(doc):
... yield doc['name'], doc
>>> Person.by_name
<ViewDefinition '_design/people/_view/by_name'>
>>> print Person.by_name.map_fun
def by_name(doc):
yield doc['name'], doc
"""
def __init__(self, design, map_fun, reduce_fun=None, name=None,
language='javascript', wrapper=DEFAULT, **defaults):
"""Initialize the view descriptor.
:param design: the name of the design document
:param map_fun: the map function code
:param reduce_fun: the reduce function code (optional)
:param name: the actual name of the view in the design document, if
it differs from the name the descriptor is assigned to
:param language: the name of the language used
:param wrapper: an optional callable that should be used to wrap the
result rows
:param defaults: default query string parameters to apply
"""
self.design = design
self.name = name
self.map_fun = map_fun
self.reduce_fun = reduce_fun
self.language = language
self.wrapper = wrapper
self.defaults = defaults
@classmethod
def define(cls, design, name=None, language='python', wrapper=DEFAULT,
**defaults):
"""Factory method for use as a decorator (only suitable for Python
view code).
"""
def view_wrapped(fun):
return cls(design, fun, language=language, wrapper=wrapper,
**defaults)
return view_wrapped
def __get__(self, instance, cls=None):
if self.wrapper is DEFAULT:
wrapper = cls._wrap_row
else:
wrapper = self.wrapper
return ViewDefinition(self.design, self.name, self.map_fun,
self.reduce_fun, language=self.language,
wrapper=wrapper, **self.defaults)
class DocumentMeta(MappingMeta):
def __new__(cls, name, bases, d):
for attrname, attrval in d.items():
if isinstance(attrval, ViewField):
if not attrval.name:
attrval.name = attrname
return MappingMeta.__new__(cls, name, bases, d)
class Document(Mapping):
__metaclass__ = DocumentMeta
def __init__(self, id=None, **values):
Mapping.__init__(self, **values)
if id is not None:
self.id = id
def __repr__(self):
return '<%s %r@%r %r>' % (type(self).__name__, self.id, self.rev,
dict([(k, v) for k, v in self._data.items()
if k not in ('_id', '_rev')]))
def _get_id(self):
if hasattr(self._data, 'id'): # When data is client.Document
return self._data.id
return self._data.get('_id')
def _set_id(self, value):
if self.id is not None:
raise AttributeError('id can only be set on new documents')
self._data['_id'] = value
id = property(_get_id, _set_id, doc='The document ID')
@property
def rev(self):
"""The document revision.
:rtype: basestring
"""
if hasattr(self._data, 'rev'): # When data is client.Document
return self._data.rev
return self._data.get('_rev')
def items(self):
"""Return the fields as a list of ``(name, value)`` tuples.
This method is provided to enable easy conversion to native dictionary
objects, for example to allow use of `mapping.Document` instances with
`client.Database.update`.
>>> class Post(Document):
... title = TextField()
... author = TextField()
>>> post = Post(id='foo-bar', title='Foo bar', author='Joe')
>>> sorted(post.items())
[('_id', 'foo-bar'), ('author', u'Joe'), ('title', u'Foo bar')]
:return: a list of ``(name, value)`` tuples
"""
retval = []
if self.id is not None:
retval.append(('_id', self.id))
if self.rev is not None:
retval.append(('_rev', self.rev))
for name, value in self._data.items():
if name not in ('_id', '_rev'):
retval.append((name, value))
return retval
@classmethod
def load(cls, db, id):
"""Load a specific document from the given database.
:param db: the `Database` object to retrieve the document from
:param id: the document ID
:return: the `Document` instance, or `None` if no document with the
given ID was found
"""
doc = db.get(id)
if doc is None:
return None
return cls.wrap(doc)
def store(self, db):
"""Store the document in the given database."""
db.save(self._data)
return self
@classmethod
def query(cls, db, map_fun, reduce_fun, language='javascript', **options):
"""Execute a CouchDB temporary view and map the result values back to
objects of this mapping.
Note that by default, any properties of the document that are not
included in the values of the view will be treated as if they were
missing from the document. If you want to load the full document for
every row, set the ``include_docs`` option to ``True``.
"""
return db.query(map_fun, reduce_fun=reduce_fun, language=language,
wrapper=cls._wrap_row, **options)
@classmethod
def view(cls, db, viewname, **options):
"""Execute a CouchDB named view and map the result values back to
objects of this mapping.
Note that by default, any properties of the document that are not
included in the values of the view will be treated as if they were
missing from the document. If you want to load the full document for
every row, set the ``include_docs`` option to ``True``.
"""
return db.view(viewname, wrapper=cls._wrap_row, **options)
@classmethod
def _wrap_row(cls, row):
doc = row.get('doc')
if doc is not None:
return cls.wrap(doc)
data = row['value']
data['_id'] = row['id']
return cls.wrap(data)
class TextField(Field):
"""Mapping field for string values."""
_to_python = unicode
class FloatField(Field):
"""Mapping field for float values."""
_to_python = float
class IntegerField(Field):
"""Mapping field for integer values."""
_to_python = int
class LongField(Field):
"""Mapping field for long integer values."""
_to_python = long
class BooleanField(Field):
"""Mapping field for boolean values."""
_to_python = bool
class DecimalField(Field):
"""Mapping field for decimal values."""
def _to_python(self, value):
return Decimal(value)
def _to_json(self, value):
return unicode(value)
class DateField(Field):
"""Mapping field for storing dates.
>>> field = DateField()
>>> field._to_python('2007-04-01')
datetime.date(2007, 4, 1)
>>> field._to_json(date(2007, 4, 1))
'2007-04-01'
>>> field._to_json(datetime(2007, 4, 1, 15, 30))
'2007-04-01'
"""
def _to_python(self, value):
if isinstance(value, basestring):
try:
value = date(*strptime(value, '%Y-%m-%d')[:3])
except ValueError:
raise ValueError('Invalid ISO date %r' % value)
return value
def _to_json(self, value):
if isinstance(value, datetime):
value = value.date()
return value.isoformat()
class DateTimeField(Field):
"""Mapping field for storing date/time values.
>>> field = DateTimeField()
>>> field._to_python('2007-04-01T15:30:00Z')
datetime.datetime(2007, 4, 1, 15, 30)
>>> field._to_json(datetime(2007, 4, 1, 15, 30, 0, 9876))
'2007-04-01T15:30:00Z'
>>> field._to_json(date(2007, 4, 1))
'2007-04-01T00:00:00Z'
"""
def _to_python(self, value):
if isinstance(value, basestring):
try:
value = value.split('.', 1)[0] # strip out microseconds
value = value.rstrip('Z') # remove timezone separator
value = datetime(*strptime(value, '%Y-%m-%dT%H:%M:%S')[:6])
except ValueError:
raise ValueError('Invalid ISO date/time %r' % value)
return value
def _to_json(self, value):
if isinstance(value, struct_time):
value = datetime.utcfromtimestamp(timegm(value))
elif not isinstance(value, datetime):
value = datetime.combine(value, time(0))
return value.replace(microsecond=0).isoformat() + 'Z'
class TimeField(Field):
"""Mapping field for storing times.
>>> field = TimeField()
>>> field._to_python('15:30:00')
datetime.time(15, 30)
>>> field._to_json(time(15, 30))
'15:30:00'
>>> field._to_json(datetime(2007, 4, 1, 15, 30))
'15:30:00'
"""
def _to_python(self, value):
if isinstance(value, basestring):
try:
value = value.split('.', 1)[0] # strip out microseconds
value = time(*strptime(value, '%H:%M:%S')[3:6])
except ValueError:
raise ValueError('Invalid ISO time %r' % value)
return value
def _to_json(self, value):
if isinstance(value, datetime):
value = value.time()
return value.replace(microsecond=0).isoformat()
class DictField(Field):
"""Field type for nested dictionaries.
>>> from couchdb import Server
>>> server = Server()
>>> db = server.create('python-tests')
>>> class Post(Document):
... title = TextField()
... content = TextField()
... author = DictField(Mapping.build(
... name = TextField(),
... email = TextField()
... ))
... extra = DictField()
>>> post = Post(
... title='Foo bar',
... author=dict(name='John Doe',
... email='john@doe.com'),
... extra=dict(foo='bar'),
... )
>>> post.store(db) #doctest: +ELLIPSIS
<Post ...>
>>> post = Post.load(db, post.id)
>>> post.author.name
u'John Doe'
>>> post.author.email
u'john@doe.com'
>>> post.extra
{'foo': 'bar'}
>>> del server['python-tests']
"""
def __init__(self, mapping=None, name=None, default=None):
default = default or {}
Field.__init__(self, name=name, default=lambda: default.copy())
self.mapping = mapping
def _to_python(self, value):
if self.mapping is None:
return value
else:
return self.mapping.wrap(value)
def _to_json(self, value):
if self.mapping is None:
return value
if not isinstance(value, Mapping):
value = self.mapping(**value)
return value.unwrap()
class ListField(Field):
"""Field type for sequences of other fields.
>>> from couchdb import Server
>>> server = Server()
>>> db = server.create('python-tests')
>>> class Post(Document):
... title = TextField()
... content = TextField()
... pubdate = DateTimeField(default=datetime.now)
... comments = ListField(DictField(Mapping.build(
... author = TextField(),
... content = TextField(),
... time = DateTimeField()
... )))
>>> post = Post(title='Foo bar')
>>> post.comments.append(author='myself', content='Bla bla',
... time=datetime.now())
>>> len(post.comments)
1
>>> post.store(db) #doctest: +ELLIPSIS
<Post ...>
>>> post = Post.load(db, post.id)
>>> comment = post.comments[0]
>>> comment['author']
'myself'
>>> comment['content']
'Bla bla'
>>> comment['time'] #doctest: +ELLIPSIS
'...T...Z'
>>> del server['python-tests']
"""
def __init__(self, field, name=None, default=None):
default = default or []
Field.__init__(self, name=name, default=lambda: copy.copy(default))
if type(field) is type:
if issubclass(field, Field):
field = field()
elif issubclass(field, Mapping):
field = DictField(field)
self.field = field
def _to_python(self, value):
return self.Proxy(value, self.field)
def _to_json(self, value):
return [self.field._to_json(item) for item in value]
class Proxy(list):
def __init__(self, list, field):
self.list = list
self.field = field
def __lt__(self, other):
return self.list < other
def __le__(self, other):
return self.list <= other
def __eq__(self, other):
return self.list == other
def __ne__(self, other):
return self.list != other
def __gt__(self, other):
return self.list > other
def __ge__(self, other):
return self.list >= other
def __repr__(self):
return repr(self.list)
def __str__(self):
return str(self.list)
def __unicode__(self):
return unicode(self.list)
def __delitem__(self, index):
del self.list[index]
def __getitem__(self, index):
return self.field._to_python(self.list[index])
def __setitem__(self, index, value):
self.list[index] = self.field._to_json(value)
def __delslice__(self, i, j):
del self.list[i:j]
def __getslice__(self, i, j):
return ListField.Proxy(self.list[i:j], self.field)
def __setslice__(self, i, j, seq):
self.list[i:j] = (self.field._to_json(v) for v in seq)
def __contains__(self, value):
for item in self.list:
if self.field._to_python(item) == value:
return True
return False
def __iter__(self):
for index in range(len(self)):
yield self[index]
def __len__(self):
return len(self.list)
def __nonzero__(self):
return bool(self.list)
def append(self, *args, **kwargs):
if args or not isinstance(self.field, DictField):
if len(args) != 1:
raise TypeError('append() takes exactly one argument '
'(%s given)' % len(args))
value = args[0]
else:
value = kwargs
self.list.append(self.field._to_json(value))
def count(self, value):
return [i for i in self].count(value)
def extend(self, list):
for item in list:
self.append(item)
def index(self, value):
return self.list.index(self.field._to_json(value))
def insert(self, idx, *args, **kwargs):
if args or not isinstance(self.field, DictField):
if len(args) != 1:
raise TypeError('insert() takes exactly 2 arguments '
'(%s given)' % len(args))
value = args[0]
else:
value = kwargs
self.list.insert(idx, self.field._to_json(value))
def remove(self, value):
return self.list.remove(self.field._to_json(value))
def pop(self, *args):
return self.field._to_python(self.list.pop(*args))
| apache-2.0 |
openplans/shareabouts-api | src/sa_api_v2/south_migrations/0002_auto__add_place.py | 2 | 1716 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Place'
db.create_table('sa_api_place', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=256, null=True, blank=True)),
('description', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
('location', self.gf('django.contrib.gis.db.models.fields.PointField')()),
('visible', self.gf('django.db.models.fields.BooleanField')(default=True)),
('location_type', self.gf('django.db.models.fields.CharField')(max_length=100)),
))
db.send_create_signal('sa_api_v2', ['Place'])
def backwards(self, orm):
# Deleting model 'Place'
db.delete_table('sa_api_place')
models = {
'sa_api_v2.place': {
'Meta': {'object_name': 'Place'},
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.contrib.gis.db.models.fields.PointField', [], {}),
'location_type': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '256', 'null': 'True', 'blank': 'True'}),
'visible': ('django.db.models.fields.BooleanField', [], {'default': 'True'})
}
}
complete_apps = ['sa_api_v2'] | gpl-3.0 |
peterstace/project-euler | OLD_PY_CODE/project_euler_old_old/146/number_theory.py | 6 | 7695 | """This module implements functions that have to do with number theory."""
import random
from math import ceil, sqrt
from operator import mul
_stock_primes = [2, 3, 5, 7, 11, 13, 17, 19]
def int_pow(x, n):
"""Raise x to the power n (if n is negative a ValueError is raised).
intPow(0, 0) is defined to be 0.
"""
if n < 0:
raise ValueError("n must be non-negative")
elif n == 0:
return 1
else:
if n % 2 == 0:
tmp = int_pow(x, n // 2)
return tmp * tmp
else:
return x * int_pow(x, n - 1)
def mod_exp(b, e, m):
"""Calculate b to the e modulo m."""
if e < 0:
raise ValueError("e must be non-negative")
elif e == 0:
return 1
else:
if e % 2 == 0:
tmp = mod_exp(b, e // 2, m)
return tmp * tmp % m
else:
return b * mod_exp(b, e - 1, m) % m
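The recursion above is square-and-multiply, running in O(log e) multiplications. A standalone sketch (illustrative, independent of this module) cross-checks it against Python's built-in three-argument pow():

```python
def mod_exp(b, e, m):
    """Compute b**e % m by recursive square-and-multiply."""
    if e < 0:
        raise ValueError("e must be non-negative")
    if e == 0:
        return 1
    if e % 2 == 0:
        half = mod_exp(b, e // 2, m)
        return half * half % m
    return b * mod_exp(b, e - 1, m) % m

# Cross-check against the built-in three-argument pow().
for b, e, m in [(2, 10, 1000), (7, 128, 13), (5, 0, 9)]:
    assert mod_exp(b, e, m) == pow(b, e, m)
```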
def discrete_log(alpha, beta, n, totient_n):
"""
Calculate x such that alpha^x = beta in group G of order n. If no such x
exists, then None is returned.
"""
lookup = dict()
m = int(ceil(sqrt(n)))
for j in range(0, m):
lookup[mod_exp(alpha, j, n)] = j
alpha_to_minus_m = mod_exp(alpha, m, n)
alpha_to_minus_m = mod_exp(alpha_to_minus_m, totient_n - 1, n)
gamma = beta
for i in range(m):
if gamma in lookup:
return i * m + lookup[gamma]
gamma = (gamma * alpha_to_minus_m) % n
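discrete_log above is the baby-step giant-step method: tabulate alpha**j for j < m ~ sqrt(n), then repeatedly multiply beta by alpha**(-m) until a table hit. A runnable sketch using the built-in pow (bsgs is an illustrative name, not part of this module):

```python
from math import ceil, sqrt

def bsgs(alpha, beta, n, totient_n):
    """Baby-step giant-step: find x with alpha**x == beta (mod n), else None."""
    m = int(ceil(sqrt(n)))
    # baby steps: tabulate alpha**j for j in [0, m)
    lookup = {pow(alpha, j, n): j for j in range(m)}
    # giant steps: multiply beta by alpha**(-m), inverted via Euler's theorem
    factor = pow(pow(alpha, m, n), totient_n - 1, n)
    gamma = beta % n
    for i in range(m):
        if gamma in lookup:
            return i * m + lookup[gamma]
        gamma = gamma * factor % n
    return None

assert bsgs(3, 13, 17, 16) == 4      # 3**4 % 17 == 13
assert bsgs(2, 9, 11, 10) == 6       # 2**6 % 11 == 9
```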
def miller_rabin(n, k):
"""Declare n probably prime with probability at most 1/4^k (if returns true)
otherwise declare n composite (if returns false).
"""
if n <= 4:
if n in (2, 3):
return True
else:
return False
d = n - 1
s = 0
while d % 2 == 0:
d = d // 2
s += 1
for _ in range(k):
a = random.randint(2, n - 2)
x = mod_exp(a, d, n)
if x in (1, n - 1):
continue
next_loop = False
for r in range(1, s):
x = x * x % n
if x == 1:
return False #composite
if x == n - 1:
next_loop = True
break
if not next_loop:
return False #composite
return True #probably prime
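miller_rabin above writes n - 1 as d * 2**s with d odd, then tests k random bases. A compact variant of the same test (is_probable_prime is an illustrative name; a for/else clause replaces the next_loop flag):

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin: False means composite, True means prime with error < 4**-k."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randint(2, n - 2)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False        # no square root of -1 found: composite
    return True

assert is_probable_prime(2 ** 31 - 1)   # Mersenne prime
assert not is_probable_prime(561)       # Carmichael number fools Fermat, not this
```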
def prime_sieve(n):
"""Calculate all primes up to and including n, and return the list of those
primes. If n is negative, a ValueError is raised.
"""
if n < 0:
raise ValueError("n must be non-negative")
candidates = list(range(n+1))
finish = int(n**0.5)
for i in range(2, finish+1):
if candidates[i]:
candidates[i*i::i] = [None] * len(candidates[i*i::i])
return [i for i in candidates[2:] if i]
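The sieve above marks composites with None inside a list of the numbers themselves; a boolean-flag variant of the same Sieve of Eratosthenes (an illustrative rewrite, same output) makes the marking step clearer:

```python
def prime_sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    if n < 2:
        return []
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            # cross off multiples starting at i*i; smaller ones were already hit
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [i for i, is_prime in enumerate(flags) if is_prime]

assert prime_sieve(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```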
def prime(n, primes=_stock_primes):
"""Checks if an integer n is a prime number. If primes is provided,
these can be used to speed up the test."""
if n < 2:
return False
for p in primes:
if p * p > n:
return True
if n % p == 0:
return False
p = primes[-1] + 2
while p * p <= n:
if n % p == 0:
return False
p = p + 2
return True
def isqrt(n):
"""Calculate the integer part of the square root of a natural number n.
Uses a binary search to find the integer square root, and so runs
logarithmically. If n negative, a ValueError is raised.
"""
if n < 0:
raise ValueError("n must be non-negative")
a, b = 0, n+1
while b - a != 1:
mid = (a + b) // 2
if mid*mid <= n:
a = mid
else:
b = mid
return a
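The binary search above maintains the invariant a*a <= n < b*b until the interval has width one. A standalone sketch; on Python 3.8+ it can be cross-checked against math.isqrt:

```python
import math

def isqrt(n):
    """Integer square root by binary search on [0, n]."""
    if n < 0:
        raise ValueError("n must be non-negative")
    lo, hi = 0, n + 1          # invariant: lo*lo <= n < hi*hi
    while hi - lo != 1:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    return lo

assert isqrt(99) == 9 and isqrt(100) == 10
assert all(isqrt(k) == math.isqrt(k) for k in range(1000))  # Python 3.8+
```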
def perfect_square(n):
"""Calculate if an integer is a perfect square. Constant time complexity
for most numbers due to modulo tests. Worst case time complexity is logn
when the square of the isqrt is checked against n.
"""
#negative values cannot be perfect squares
if n < 0:
return False
#checks modulo 256
if (n & 7 != 1) and (n & 31 != 4) and (n & 127 != 16) and (n & 191 != 0):
return False
#checks the modulus of n is a quadratic residue mod 9, 5, 7, 13, and 17.
if n % 9 not in (0, 1, 4, 7): return False
if n % 5 not in (0, 1, 4): return False
if n % 7 not in (0, 1, 2, 4): return False
if n % 13 not in (0, 1, 3, 4, 9, 10, 12): return False
if n % 17 not in (0, 1, 2, 4, 8, 9, 13, 15, 16): return False
#check using isqrt
i = isqrt(n)
return i*i == n
def decomp_sieve(n):
"""Calculate the prime decomposition for each number up and including n,
and return the prime decompositions in a list indexed by that number.
"""
result = [dict() for i in range(n+1)]
p = 2
while p <= n:
for pk in range(p, n+1, p):
result[pk][p] = 1
palpha = p*p
while palpha <= n:
for palphak in range(palpha, n+1, palpha):
result[palphak][p] += 1
palpha *= p
while p <= n and result[p]:
p += 1
return result
def decomp(n, primes=_stock_primes):
"""Find the prime decomposition of a natural number. The result is returned
as a dictionary whose keys are powers and values are primes.
E.g. decomp(12) -> {2:2, 3:1}
A list of primes should be provided, with primes at least up to the square
root of n. If the prime list doesn't go that high, a ValueError will be
raised if any primes geater than the square root of the highest prime
provided enters n.
"""
if n < 1:
raise ValueError("n must be positive")
record = {}
if n == 1:
return record
for p in primes:
power = 0
while n % p == 0:
power += 1
n = n // p
if power != 0:
record[p] = power
if p * p > n:
if n != 1:
record[n] = 1 #this is the last prime in the record
return record
#we have run out of primes to check...
last_p = primes[-1]
if last_p * last_p > n:
record[n] = 1
return record
else:
raise ValueError("not enough prime numbers in primes")
def factors(pd):
"""Yields all factors of a number given its prime decomposition."""
if pd:
prime, power = pd.popitem()
vals = [int_pow(prime, i) for i in range(power + 1)]
for partial_factor in factors(pd):
for val in vals:
yield val * partial_factor
pd[prime] = power
else:
yield 1
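factors above enumerates all divisors by recursing over the prime decomposition; an equivalent iterative sketch (divisors is an illustrative name) builds the same set by multiplying in one prime's powers at a time:

```python
def divisors(pd):
    """All divisors of a number given its factorization {prime: exponent}."""
    result = [1]
    for prime, power in pd.items():
        result = [d * prime ** k for d in result for k in range(power + 1)]
    return sorted(result)

assert divisors({2: 2, 3: 1}) == [1, 2, 3, 4, 6, 12]
assert divisors({}) == [1]     # the empty decomposition corresponds to 1
```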
def sum_divisors(pd):
"""Calculate the lowercase sigma function (sum of divisors) of a natural
number given its prime decomposition pd.
"""
if pd == {}: #decomp corresponds to 1
return 1
else:
return reduce(mul, [(int_pow(p, pd[p]+1)-1) // (p-1) for p in pd])
def num_divisors(pd):
"""Calculates the tau function (number of divisors) of a natural number
given its prime decomposition pd.
"""
if pd == {}:
return 1
else:
return reduce(mul, [pd[p] + 1 for p in pd])
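The two functions above apply the standard multiplicative formulas sigma(n) = product of (p**(a+1) - 1)//(p - 1) and tau(n) = product of (a + 1) over the factorization. A Python 3 restatement (reduce is imported from functools, where Python 3 moved it; in this Python 2 file it is a builtin):

```python
from functools import reduce
from operator import mul

def sum_divisors(pd):
    """sigma(n): product of (p**(a+1) - 1) // (p - 1) over {prime: exponent}."""
    return reduce(mul, ((p ** (a + 1) - 1) // (p - 1) for p, a in pd.items()), 1)

def num_divisors(pd):
    """tau(n): product of (a + 1) over the exponents."""
    return reduce(mul, (a + 1 for a in pd.values()), 1)

assert sum_divisors({2: 2, 3: 1}) == 28    # 1+2+3+4+6+12
assert num_divisors({2: 2, 3: 1}) == 6
assert sum_divisors({}) == 1               # n == 1
```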
def nominal_record(n, base):
"""Calculate the digital record of a natural number n with a certain
base.
"""
if n < 1:
raise ValueError("n must be >= 1")
if base < 2:
raise ValueError("base must be >= 2")
record = []
while n > 0:
record.insert(0, n % base)
n = n // base
return record
def eval_nominal_record(record, base):
"""Calculates the integer representation given a digital record and its
base.
"""
place_value = 1
value = 0
for digit in reversed(record):
value += digit * place_value
place_value *= base
return value
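nominal_record/eval_nominal_record implement positional base conversion in both directions; a compact restatement with a round-trip check (digits/undigits are illustrative names):

```python
def digits(n, base):
    """Digits of n in the given base, most significant first."""
    if n < 1 or base < 2:
        raise ValueError("need n >= 1 and base >= 2")
    out = []
    while n:
        out.insert(0, n % base)
        n //= base
    return out

def undigits(record, base):
    """Inverse of digits(): evaluate a digit list back to an integer."""
    value = 0
    for d in record:
        value = value * base + d
    return value

assert digits(2019, 16) == [7, 14, 3]       # 0x7e3
assert undigits([7, 14, 3], 16) == 2019
assert all(undigits(digits(n, 7), 7) == n for n in range(1, 500))
```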
| unlicense |
ryokochang/Slab-GCS | packages/IronPython.StdLib.2.7.5-beta1/content/Lib/os.py | 109 | 26300 | r"""OS routines for Mac, NT, or Posix depending on what system we're on.
This exports:
- all functions from posix, nt, os2, or ce, e.g. unlink, stat, etc.
- os.path is one of the modules posixpath, or ntpath
- os.name is 'posix', 'nt', 'os2', 'ce' or 'riscos'
- os.curdir is a string representing the current directory ('.' or ':')
- os.pardir is a string representing the parent directory ('..' or '::')
- os.sep is the (or a most common) pathname separator ('/' or ':' or '\\')
- os.extsep is the extension separator ('.' or '/')
- os.altsep is the alternate pathname separator (None or '/')
- os.pathsep is the component separator used in $PATH etc
- os.linesep is the line separator in text files ('\r' or '\n' or '\r\n')
- os.defpath is the default search path for executables
- os.devnull is the file path of the null device ('/dev/null', etc.)
Programs that import and use 'os' stand a better chance of being
portable between different platforms. Of course, they must then
only use functions that are defined by all platforms (e.g., unlink
and opendir), and leave all pathname manipulation to os.path
(e.g., split and join).
"""
#'
import sys, errno
_names = sys.builtin_module_names
# Note: more names are added to __all__ later.
__all__ = ["altsep", "curdir", "pardir", "sep", "extsep", "pathsep", "linesep",
"defpath", "name", "path", "devnull",
"SEEK_SET", "SEEK_CUR", "SEEK_END"]
def _get_exports_list(module):
try:
return list(module.__all__)
except AttributeError:
return [n for n in dir(module) if n[0] != '_']
if 'posix' in _names:
name = 'posix'
linesep = '\n'
from posix import *
try:
from posix import _exit
except ImportError:
pass
import posixpath as path
import posix
__all__.extend(_get_exports_list(posix))
del posix
elif 'nt' in _names:
name = 'nt'
linesep = '\r\n'
from nt import *
try:
from nt import _exit
except ImportError:
pass
import ntpath as path
import nt
__all__.extend(_get_exports_list(nt))
del nt
elif 'os2' in _names:
name = 'os2'
linesep = '\r\n'
from os2 import *
try:
from os2 import _exit
except ImportError:
pass
if sys.version.find('EMX GCC') == -1:
import ntpath as path
else:
import os2emxpath as path
from _emx_link import link
import os2
__all__.extend(_get_exports_list(os2))
del os2
elif 'ce' in _names:
name = 'ce'
linesep = '\r\n'
from ce import *
try:
from ce import _exit
except ImportError:
pass
# We can use the standard Windows path.
import ntpath as path
import ce
__all__.extend(_get_exports_list(ce))
del ce
elif 'riscos' in _names:
name = 'riscos'
linesep = '\n'
from riscos import *
try:
from riscos import _exit
except ImportError:
pass
import riscospath as path
import riscos
__all__.extend(_get_exports_list(riscos))
del riscos
else:
raise ImportError, 'no os specific module found'
sys.modules['os.path'] = path
from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
devnull)
del _names
# Python uses fixed values for the SEEK_ constants; they are mapped
# to native constants if necessary in posixmodule.c
SEEK_SET = 0
SEEK_CUR = 1
SEEK_END = 2
#'
# Super directory utilities.
# (Inspired by Eric Raymond; the doc strings are mostly his)
def makedirs(name, mode=0777):
"""makedirs(path [, mode=0777])
Super-mkdir; create a leaf directory and all intermediate ones.
Works like mkdir, except that any intermediate path segment (not
just the rightmost) will be created if it does not exist. This is
recursive.
"""
head, tail = path.split(name)
if not tail:
head, tail = path.split(head)
if head and tail and not path.exists(head):
try:
makedirs(head, mode)
except OSError, e:
# be happy if someone already created the path
if e.errno != errno.EEXIST:
raise
if tail == curdir: # xxx/newdir/. exists if xxx/newdir exists
return
mkdir(name, mode)
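makedirs above recurses on the head of the path and tolerates races on the intermediate directories. A usage sketch against the real stdlib os module (note: the exist_ok keyword shown last is Python 3.2+, newer than this Python 2-era file):

```python
import os
import tempfile

# Create a nested path in one call; plain mkdir would fail on the
# missing intermediate directories.
root = tempfile.mkdtemp()
target = os.path.join(root, "a", "b", "c")
os.makedirs(target)
assert os.path.isdir(target)

# Re-creating the leaf raises OSError (FileExistsError on Python 3)...
try:
    os.makedirs(target)
except OSError:
    pass

# ...unless exist_ok=True is passed (Python 3.2+).
os.makedirs(target, exist_ok=True)
```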
def removedirs(name):
"""removedirs(path)
Super-rmdir; remove a leaf directory and all empty intermediate
ones. Works like rmdir except that, if the leaf directory is
successfully removed, directories corresponding to rightmost path
segments will be pruned away until either the whole path is
consumed or an error occurs. Errors during this latter phase are
ignored -- they generally mean that a directory was not empty.
"""
rmdir(name)
head, tail = path.split(name)
if not tail:
head, tail = path.split(head)
while head and tail:
try:
rmdir(head)
except error:
break
head, tail = path.split(head)
def renames(old, new):
"""renames(old, new)
Super-rename; create directories as necessary and delete any left
empty. Works like rename, except creation of any intermediate
directories needed to make the new pathname good is attempted
first. After the rename, directories corresponding to rightmost
path segments of the old name will be pruned way until either the
whole path is consumed or a nonempty directory is found.
Note: this function can fail with the new directory structure made
if you lack permissions needed to unlink the leaf directory or
file.
"""
head, tail = path.split(new)
if head and tail and not path.exists(head):
makedirs(head)
rename(old, new)
head, tail = path.split(old)
if head and tail:
try:
removedirs(head)
except error:
pass
__all__.extend(["makedirs", "removedirs", "renames"])
def walk(top, topdown=True, onerror=None, followlinks=False):
"""Directory tree generator.
For each directory in the directory tree rooted at top (including top
itself, but excluding '.' and '..'), yields a 3-tuple
dirpath, dirnames, filenames
dirpath is a string, the path to the directory. dirnames is a list of
the names of the subdirectories in dirpath (excluding '.' and '..').
filenames is a list of the names of the non-directory files in dirpath.
Note that the names in the lists are just names, with no path components.
To get a full path (which begins with top) to a file or directory in
dirpath, do os.path.join(dirpath, name).
If optional arg 'topdown' is true or not specified, the triple for a
directory is generated before the triples for any of its subdirectories
(directories are generated top down). If topdown is false, the triple
for a directory is generated after the triples for all of its
subdirectories (directories are generated bottom up).
When topdown is true, the caller can modify the dirnames list in-place
(e.g., via del or slice assignment), and walk will only recurse into the
subdirectories whose names remain in dirnames; this can be used to prune
the search, or to impose a specific order of visiting. Modifying
dirnames when topdown is false is ineffective, since the directories in
dirnames have already been generated by the time dirnames itself is
generated.
By default errors from the os.listdir() call are ignored. If
optional arg 'onerror' is specified, it should be a function; it
will be called with one argument, an os.error instance. It can
report the error to continue with the walk, or raise the exception
to abort the walk. Note that the filename is available as the
filename attribute of the exception object.
By default, os.walk does not follow symbolic links to subdirectories on
systems that support them. In order to get this functionality, set the
optional argument 'followlinks' to true.
Caution: if you pass a relative pathname for top, don't change the
current working directory between resumptions of walk. walk never
changes the current directory, and assumes that the client doesn't
either.
Example:
import os
from os.path import join, getsize
for root, dirs, files in os.walk('python/Lib/email'):
print root, "consumes",
print sum([getsize(join(root, name)) for name in files]),
print "bytes in", len(files), "non-directory files"
if 'CVS' in dirs:
dirs.remove('CVS') # don't visit CVS directories
"""
islink, join, isdir = path.islink, path.join, path.isdir
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains. os.path.walk
# always suppressed the exception then, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit. That logic is copied here.
try:
# Note that listdir and error are globals in this module due
# to earlier import-*.
names = listdir(top)
except error, err:
if onerror is not None:
onerror(err)
return
dirs, nondirs = [], []
for name in names:
if isdir(join(top, name)):
dirs.append(name)
else:
nondirs.append(name)
if topdown:
yield top, dirs, nondirs
for name in dirs:
new_path = join(top, name)
if followlinks or not islink(new_path):
for x in walk(new_path, topdown, onerror, followlinks):
yield x
if not topdown:
yield top, dirs, nondirs
__all__.append("walk")
# Make sure os.environ exists, at least
try:
environ
except NameError:
environ = {}
def execl(file, *args):
"""execl(file, *args)
Execute the executable file with argument list args, replacing the
current process. """
execv(file, args)
def execle(file, *args):
"""execle(file, *args, env)
Execute the executable file with argument list args and
environment env, replacing the current process. """
env = args[-1]
execve(file, args[:-1], env)
def execlp(file, *args):
"""execlp(file, *args)
Execute the executable file (which is searched for along $PATH)
with argument list args, replacing the current process. """
execvp(file, args)
def execlpe(file, *args):
"""execlpe(file, *args, env)
Execute the executable file (which is searched for along $PATH)
with argument list args and environment env, replacing the current
process. """
env = args[-1]
execvpe(file, args[:-1], env)
def execvp(file, args):
"""execvp(file, args)
Execute the executable file (which is searched for along $PATH)
with argument list args, replacing the current process.
args may be a list or tuple of strings. """
_execvpe(file, args)
def execvpe(file, args, env):
"""execvpe(file, args, env)
Execute the executable file (which is searched for along $PATH)
with argument list args and environment env, replacing the
current process.
args may be a list or tuple of strings. """
_execvpe(file, args, env)
__all__.extend(["execl","execle","execlp","execlpe","execvp","execvpe"])
def _execvpe(file, args, env=None):
if env is not None:
func = execve
argrest = (args, env)
else:
func = execv
argrest = (args,)
env = environ
head, tail = path.split(file)
if head:
func(file, *argrest)
return
if 'PATH' in env:
envpath = env['PATH']
else:
envpath = defpath
PATH = envpath.split(pathsep)
saved_exc = None
saved_tb = None
for dir in PATH:
fullname = path.join(dir, file)
try:
func(fullname, *argrest)
except error, e:
tb = sys.exc_info()[2]
if (e.errno != errno.ENOENT and e.errno != errno.ENOTDIR
and saved_exc is None):
saved_exc = e
saved_tb = tb
if saved_exc:
raise error, saved_exc, saved_tb
raise error, e, tb
# Change environ to automatically call putenv() if it exists
try:
# This will fail if there's no putenv
putenv
except NameError:
pass
else:
import UserDict
# Fake unsetenv() for Windows
# not sure about os2 here but
# I'm guessing they are the same.
if name in ('os2', 'nt'):
def unsetenv(key):
putenv(key, "")
if name == "riscos":
# On RISC OS, all env access goes through getenv and putenv
from riscosenviron import _Environ
elif name in ('os2', 'nt'): # Where Env Var Names Must Be UPPERCASE
# But we store them as upper case
class _Environ(UserDict.IterableUserDict):
def __init__(self, environ):
UserDict.UserDict.__init__(self)
data = self.data
for k, v in environ.items():
data[k.upper()] = v
def __setitem__(self, key, item):
putenv(key, item)
self.data[key.upper()] = item
def __getitem__(self, key):
return self.data[key.upper()]
try:
unsetenv
except NameError:
def __delitem__(self, key):
del self.data[key.upper()]
else:
def __delitem__(self, key):
unsetenv(key)
del self.data[key.upper()]
def clear(self):
for key in self.data.keys():
unsetenv(key)
del self.data[key]
def pop(self, key, *args):
unsetenv(key)
return self.data.pop(key.upper(), *args)
def has_key(self, key):
return key.upper() in self.data
def __contains__(self, key):
return key.upper() in self.data
def get(self, key, failobj=None):
return self.data.get(key.upper(), failobj)
def update(self, dict=None, **kwargs):
if dict:
try:
keys = dict.keys()
except AttributeError:
# List of (key, value)
for k, v in dict:
self[k] = v
else:
# got keys
# cannot use items(), since mappings
# may not have them.
for k in keys:
self[k] = dict[k]
if kwargs:
self.update(kwargs)
def copy(self):
return dict(self)
else: # Where Env Var Names Can Be Mixed Case
class _Environ(UserDict.IterableUserDict):
def __init__(self, environ):
UserDict.UserDict.__init__(self)
self.data = environ
def __setitem__(self, key, item):
putenv(key, item)
self.data[key] = item
def update(self, dict=None, **kwargs):
if dict:
try:
keys = dict.keys()
except AttributeError:
# List of (key, value)
for k, v in dict:
self[k] = v
else:
# got keys
# cannot use items(), since mappings
# may not have them.
for k in keys:
self[k] = dict[k]
if kwargs:
self.update(kwargs)
try:
unsetenv
except NameError:
pass
else:
def __delitem__(self, key):
unsetenv(key)
del self.data[key]
def clear(self):
for key in self.data.keys():
unsetenv(key)
del self.data[key]
def pop(self, key, *args):
unsetenv(key)
return self.data.pop(key, *args)
def copy(self):
return dict(self)
environ = _Environ(environ)
def getenv(key, default=None):
"""Get an environment variable, return None if it doesn't exist.
The optional second argument can specify an alternate default."""
return environ.get(key, default)
__all__.append("getenv")
def _exists(name):
return name in globals()
# Supply spawn*() (probably only for Unix)
if _exists("fork") and not _exists("spawnv") and _exists("execv"):
P_WAIT = 0
P_NOWAIT = P_NOWAITO = 1
# XXX Should we support P_DETACH? I suppose it could fork()**2
# and close the std I/O streams. Also, P_OVERLAY is the same
# as execv*()?
def _spawnvef(mode, file, args, env, func):
# Internal helper; func is the exec*() function to use
pid = fork()
if not pid:
# Child
try:
if env is None:
func(file, args)
else:
func(file, args, env)
except:
_exit(127)
else:
# Parent
if mode == P_NOWAIT:
return pid # Caller is responsible for waiting!
while 1:
wpid, sts = waitpid(pid, 0)
if WIFSTOPPED(sts):
continue
elif WIFSIGNALED(sts):
return -WTERMSIG(sts)
elif WIFEXITED(sts):
return WEXITSTATUS(sts)
else:
raise error, "Not stopped, signaled or exited???"
def spawnv(mode, file, args):
"""spawnv(mode, file, args) -> integer
Execute file with arguments from args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return _spawnvef(mode, file, args, None, execv)
def spawnve(mode, file, args, env):
"""spawnve(mode, file, args, env) -> integer
Execute file with arguments from args in a subprocess with the
specified environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return _spawnvef(mode, file, args, env, execve)
# Note: spawnvp[e] isn't currently supported on Windows
def spawnvp(mode, file, args):
"""spawnvp(mode, file, args) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return _spawnvef(mode, file, args, None, execvp)
def spawnvpe(mode, file, args, env):
"""spawnvpe(mode, file, args, env) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess with the supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return _spawnvef(mode, file, args, env, execvpe)
if _exists("spawnv"):
# These aren't supplied by the basic Windows code
# but can be easily implemented in Python
def spawnl(mode, file, *args):
"""spawnl(mode, file, *args) -> integer
Execute file with arguments from args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return spawnv(mode, file, args)
def spawnle(mode, file, *args):
"""spawnle(mode, file, *args, env) -> integer
Execute file with arguments from args in a subprocess with the
supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
env = args[-1]
return spawnve(mode, file, args[:-1], env)
__all__.extend(["spawnv", "spawnve", "spawnl", "spawnle",])
if _exists("spawnvp"):
# At the moment, Windows doesn't implement spawnvp[e],
# so it won't have spawnlp[e] either.
def spawnlp(mode, file, *args):
"""spawnlp(mode, file, *args) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
return spawnvp(mode, file, args)
def spawnlpe(mode, file, *args):
"""spawnlpe(mode, file, *args, env) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess with the supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it. """
env = args[-1]
return spawnvpe(mode, file, args[:-1], env)
__all__.extend(["spawnvp", "spawnvpe", "spawnlp", "spawnlpe",])
# Supply popen2 etc. (for Unix)
if _exists("fork"):
if not _exists("popen2"):
def popen2(cmd, mode="t", bufsize=-1):
"""Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
may be a sequence, in which case arguments will be passed directly to
the program without shell intervention (as with os.spawnv()). If 'cmd'
is a string it will be passed to the shell (as with os.system()). If
'bufsize' is specified, it sets the buffer size for the I/O pipes. The
file objects (child_stdin, child_stdout) are returned."""
import warnings
msg = "os.popen2 is deprecated. Use the subprocess module."
warnings.warn(msg, DeprecationWarning, stacklevel=2)
import subprocess
PIPE = subprocess.PIPE
p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
bufsize=bufsize, stdin=PIPE, stdout=PIPE,
close_fds=True)
return p.stdin, p.stdout
__all__.append("popen2")
if not _exists("popen3"):
def popen3(cmd, mode="t", bufsize=-1):
"""Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
may be a sequence, in which case arguments will be passed directly to
the program without shell intervention (as with os.spawnv()). If 'cmd'
is a string it will be passed to the shell (as with os.system()). If
'bufsize' is specified, it sets the buffer size for the I/O pipes. The
file objects (child_stdin, child_stdout, child_stderr) are returned."""
import warnings
msg = "os.popen3 is deprecated. Use the subprocess module."
warnings.warn(msg, DeprecationWarning, stacklevel=2)
import subprocess
PIPE = subprocess.PIPE
p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
bufsize=bufsize, stdin=PIPE, stdout=PIPE,
stderr=PIPE, close_fds=True)
return p.stdin, p.stdout, p.stderr
__all__.append("popen3")
if not _exists("popen4"):
def popen4(cmd, mode="t", bufsize=-1):
"""Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
may be a sequence, in which case arguments will be passed directly to
the program without shell intervention (as with os.spawnv()). If 'cmd'
is a string it will be passed to the shell (as with os.system()). If
'bufsize' is specified, it sets the buffer size for the I/O pipes. The
file objects (child_stdin, child_stdout_stderr) are returned."""
import warnings
msg = "os.popen4 is deprecated. Use the subprocess module."
warnings.warn(msg, DeprecationWarning, stacklevel=2)
import subprocess
PIPE = subprocess.PIPE
p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
bufsize=bufsize, stdin=PIPE, stdout=PIPE,
stderr=subprocess.STDOUT, close_fds=True)
return p.stdin, p.stdout
__all__.append("popen4")
import copy_reg as _copy_reg
def _make_stat_result(tup, dict):
return stat_result(tup, dict)
def _pickle_stat_result(sr):
(type, args) = sr.__reduce__()
return (_make_stat_result, args)
try:
_copy_reg.pickle(stat_result, _pickle_stat_result, _make_stat_result)
except NameError: # stat_result may not exist
pass
def _make_statvfs_result(tup, dict):
return statvfs_result(tup, dict)
def _pickle_statvfs_result(sr):
(type, args) = sr.__reduce__()
return (_make_statvfs_result, args)
try:
_copy_reg.pickle(statvfs_result, _pickle_statvfs_result,
_make_statvfs_result)
except NameError: # statvfs_result may not exist
pass
if not _exists("urandom"):
def urandom(n):
"""urandom(n) -> str
Return a string of n random bytes suitable for cryptographic use.
"""
try:
_urandomfd = open("/dev/urandom", O_RDONLY)
except (OSError, IOError):
raise NotImplementedError("/dev/urandom (or equivalent) not found")
try:
bs = b""
while n > len(bs):
bs += read(_urandomfd, n - len(bs))
finally:
close(_urandomfd)
return bs
| gpl-3.0 |
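The deprecated `popen2()` helper in the record above already delegates to `subprocess`; as a hedged illustration, the same behavior can be written directly against `subprocess` in modern Python 3 (the function name here is an assumption for the example, not part of `os.py`):

```python
# A minimal modern counterpart to the deprecated os.popen2() helper above,
# using the subprocess module that the deprecation warnings recommend.
# The function name is illustrative, not part of the os module's API.
import subprocess

def popen2_equivalent(cmd, bufsize=-1):
    """Return (child_stdin, child_stdout), mirroring the old os.popen2()."""
    p = subprocess.Popen(
        cmd,
        shell=isinstance(cmd, str),  # a string goes through the shell, a sequence does not
        bufsize=bufsize,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        close_fds=True,
    )
    return p.stdin, p.stdout
```

For example, feeding a line to a child Python interpreter and reading its reply exercises both pipe ends, just as the original `(child_stdin, child_stdout)` pair did.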
nyalldawson/QGIS | python/plugins/db_manager/db_plugins/oracle/plugin.py | 4 | 22869 | # -*- coding: utf-8 -*-
"""
/***************************************************************************
Name : DB Manager
Description : Database manager plugin for QGIS (Oracle)
Date : Aug 27, 2014
copyright : (C) 2014 by Médéric RIBREUX
email : mederic.ribreux@gmail.com
The content of this file is based on
- PG_Manager by Martin Dobias <wonder.sk@gmail.com> (GPLv2 license)
- DB Manager by Giuseppe Sucameli <brush.tyler@gmail.com> (GPLv2 license)
***************************************************************************/
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
"""
from builtins import str
from builtins import range
# this will disable the dbplugin if the connector raises an ImportError
from .connector import OracleDBConnector
from qgis.PyQt.QtCore import Qt, QCoreApplication
from qgis.PyQt.QtGui import QIcon, QKeySequence
from qgis.PyQt.QtWidgets import QAction, QApplication, QMessageBox
from qgis.core import QgsApplication, QgsVectorLayer, NULL, QgsSettings
from ..plugin import ConnectionError, InvalidDataException, DBPlugin, \
Database, Schema, Table, VectorTable, TableField, TableConstraint, \
TableIndex, TableTrigger
from qgis.core import QgsCredentials
def classFactory():
return OracleDBPlugin
class OracleDBPlugin(DBPlugin):
@classmethod
def icon(self):
return QgsApplication.getThemeIcon("/mIconOracle.svg")
@classmethod
def typeName(self):
return 'oracle'
@classmethod
def typeNameString(self):
return QCoreApplication.translate('db_manager', 'Oracle Spatial')
@classmethod
def providerName(self):
return 'oracle'
@classmethod
def connectionSettingsKey(self):
return '/Oracle/connections'
def connectToUri(self, uri):
self.db = self.databasesFactory(self, uri)
if self.db:
return True
return False
def databasesFactory(self, connection, uri):
return ORDatabase(connection, uri)
def connect(self, parent=None):
conn_name = self.connectionName()
settings = QgsSettings()
settings.beginGroup(u"/{0}/{1}".format(
self.connectionSettingsKey(), conn_name))
if not settings.contains("database"): # non-existent entry?
raise InvalidDataException(
self.tr('There is no defined database connection "{0}".'.format(
conn_name)))
from qgis.core import QgsDataSourceUri
uri = QgsDataSourceUri()
settingsList = ["host", "port", "database", "username", "password"]
host, port, database, username, password = [
settings.value(x, "", type=str) for x in settingsList]
# get all of the connection options
useEstimatedMetadata = settings.value(
"estimatedMetadata", False, type=bool)
uri.setParam('userTablesOnly', str(
settings.value("userTablesOnly", False, type=bool)))
uri.setParam('geometryColumnsOnly', str(
settings.value("geometryColumnsOnly", False, type=bool)))
uri.setParam('allowGeometrylessTables', str(
settings.value("allowGeometrylessTables", False, type=bool)))
uri.setParam('onlyExistingTypes', str(
settings.value("onlyExistingTypes", False, type=bool)))
uri.setParam('includeGeoAttributes', str(
settings.value("includeGeoAttributes", False, type=bool)))
settings.endGroup()
uri.setConnection(host, port, database, username, password)
uri.setUseEstimatedMetadata(useEstimatedMetadata)
err = u""
try:
return self.connectToUri(uri)
except ConnectionError as e:
err = str(e)
# ask for valid credentials
max_attempts = 3
for i in range(max_attempts):
(ok, username, password) = QgsCredentials.instance().get(
uri.connectionInfo(False), username, password, err)
if not ok:
return False
uri.setConnection(host, port, database, username, password)
try:
self.connectToUri(uri)
except ConnectionError as e:
if i == max_attempts - 1: # failed the last attempt
raise e
err = str(e)
continue
QgsCredentials.instance().put(
uri.connectionInfo(False), username, password)
return True
return False
class ORDatabase(Database):
def __init__(self, connection, uri):
self.connName = connection.connectionName()
Database.__init__(self, connection, uri)
def connectorsFactory(self, uri):
return OracleDBConnector(uri, self.connName)
def dataTablesFactory(self, row, db, schema=None):
return ORTable(row, db, schema)
def vectorTablesFactory(self, row, db, schema=None):
return ORVectorTable(row, db, schema)
def info(self):
from .info_model import ORDatabaseInfo
return ORDatabaseInfo(self)
def schemasFactory(self, row, db):
return ORSchema(row, db)
def columnUniqueValuesModel(self, col, table, limit=10):
l = u""
if limit:
l = u"WHERE ROWNUM < {:d}".format(limit)
con = self.database().connector
# Prevent geometry column show
tableName = table.replace(u'"', u"").split(u".")
if len(tableName) == 1:  # no schema prefix; split() never returns an empty list
tableName = [None, tableName[0]]
colName = col.replace(u'"', u"").split(u".")[-1]
if con.isGeometryColumn(tableName, colName):
return None
query = u"SELECT DISTINCT {} FROM {} {}".format(col, table, l)
return self.sqlResultModel(query, self)
def sqlResultModel(self, sql, parent):
from .data_model import ORSqlResultModel
return ORSqlResultModel(self, sql, parent)
def sqlResultModelAsync(self, sql, parent):
from .data_model import ORSqlResultModelAsync
return ORSqlResultModelAsync(self, sql, parent)
def toSqlLayer(self, sql, geomCol, uniqueCol,
layerName=u"QueryLayer", layerType=None,
avoidSelectById=False, filter=""):
uri = self.uri()
con = self.database().connector
if uniqueCol is not None:
uniqueCol = uniqueCol.strip('"').replace('""', '"')
uri.setDataSource(u"", u"({}\n)".format(
sql), geomCol, filter, uniqueCol)
if avoidSelectById:
uri.disableSelectAtId(True)
provider = self.dbplugin().providerName()
vlayer = QgsVectorLayer(uri.uri(False), layerName, provider)
# handling undetermined geometry type
if not vlayer.isValid():
wkbType, srid = con.getTableMainGeomType(
u"({}\n)".format(sql), geomCol)
uri.setWkbType(wkbType)
if srid:
uri.setSrid(str(srid))
vlayer = QgsVectorLayer(uri.uri(False), layerName, provider)
return vlayer
def registerDatabaseActions(self, mainWindow):
action = QAction(QApplication.translate(
"DBManagerPlugin", "&Re-connect"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Database"), self.reconnectActionSlot)
if self.schemas():
action = QAction(QApplication.translate(
"DBManagerPlugin", "&Create Schema…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Schema"), self.createSchemaActionSlot)
action = QAction(QApplication.translate(
"DBManagerPlugin", "&Delete (Empty) Schema…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Schema"), self.deleteSchemaActionSlot)
action = QAction(QApplication.translate(
"DBManagerPlugin", "Delete Selected Item"), self)
mainWindow.registerAction(action, None, self.deleteActionSlot)
action.setShortcuts(QKeySequence.Delete)
action = QAction(QgsApplication.getThemeIcon("/mActionCreateTable.svg"),
QApplication.translate(
"DBManagerPlugin", "&Create Table…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Table"), self.createTableActionSlot)
action = QAction(QgsApplication.getThemeIcon("/mActionEditTable.svg"),
QApplication.translate(
"DBManagerPlugin", "&Edit Table…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Table"), self.editTableActionSlot)
action = QAction(QgsApplication.getThemeIcon("/mActionDeleteTable.svg"),
QApplication.translate(
"DBManagerPlugin", "&Delete Table/View…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Table"), self.deleteTableActionSlot)
action = QAction(QApplication.translate(
"DBManagerPlugin", "&Empty Table…"), self)
mainWindow.registerAction(action, QApplication.translate(
"DBManagerPlugin", "&Table"), self.emptyTableActionSlot)
def supportsComment(self):
return False
class ORSchema(Schema):
def __init__(self, row, db):
Schema.__init__(self, db)
# self.oid, self.name, self.owner, self.perms, self.comment = row
self.name = row[0]
class ORTable(Table):
def __init__(self, row, db, schema=None):
Table.__init__(self, db, schema)
self.name, self.owner, isView = row
self.estimatedRowCount = None
self.objectType = None
self.isView = False
self.isMaterializedView = False
if isView == 1:
self.isView = True
self.creationDate = None
self.modificationDate = None
def getDates(self):
"""Grab the creation/modification dates of the table"""
self.creationDate, self.modificationDate = (
self.database().connector.getTableDates((self.schemaName(),
self.name)))
def refreshRowEstimation(self):
"""Use ALL_ALL_TABLE to get an estimation of rows"""
if self.isView:
self.estimatedRowCount = 0
self.estimatedRowCount = (
self.database().connector.getTableRowEstimation(
(self.schemaName(), self.name)))
def getType(self):
"""Grab the type of object for the table"""
self.objectType = self.database().connector.getTableType(
(self.schemaName(), self.name))
def getComment(self):
"""Grab the general comment of the table/view"""
self.comment = self.database().connector.getTableComment(
(self.schemaName(), self.name), self.objectType)
def getDefinition(self):
return self.database().connector.getDefinition(
(self.schemaName(), self.name), self.objectType)
def getMViewInfo(self):
if self.objectType == u"MATERIALIZED VIEW":
return self.database().connector.getMViewInfo(
(self.schemaName(), self.name))
else:
return None
def runAction(self, action):
action = str(action)
if action.startswith("rows/"):
if action == "rows/recount":
self.refreshRowCount()
return True
elif action.startswith("index/"):
parts = action.split('/')
index_name = parts[1]
index_action = parts[2]
msg = QApplication.translate(
"DBManagerPlugin",
"Do you want to {} index {}?".format(
index_action, index_name))
QApplication.restoreOverrideCursor()
try:
if QMessageBox.question(
None,
QApplication.translate(
"DBManagerPlugin", "Table Index"),
msg,
QMessageBox.Yes | QMessageBox.No) == QMessageBox.No:
return False
finally:
QApplication.setOverrideCursor(Qt.WaitCursor)
if index_action == "rebuild":
self.aboutToChange.emit()
self.database().connector.rebuildTableIndex(
(self.schemaName(), self.name), index_name)
self.refreshIndexes()
return True
elif action.startswith(u"mview/"):
if action == "mview/refresh":
self.aboutToChange.emit()
self.database().connector.refreshMView(
(self.schemaName(), self.name))
return True
return Table.runAction(self, action)
def tableFieldsFactory(self, row, table):
return ORTableField(row, table)
def tableConstraintsFactory(self, row, table):
return ORTableConstraint(row, table)
def tableIndexesFactory(self, row, table):
return ORTableIndex(row, table)
def tableTriggersFactory(self, row, table):
return ORTableTrigger(row, table)
def info(self):
from .info_model import ORTableInfo
return ORTableInfo(self)
def tableDataModel(self, parent):
from .data_model import ORTableDataModel
return ORTableDataModel(self, parent)
def getValidQgisUniqueFields(self, onlyOne=False):
""" list of fields valid to load the table as layer in Qgis canvas.
Qgis automatically search for a valid unique field, so it's
needed only for queries and views.
"""
ret = []
# add the pk
pkcols = [x for x in self.fields() if x.primaryKey]
if len(pkcols) == 1:
ret.append(pkcols[0])
# then add integer fields with a unique index
indexes = self.indexes()
if indexes is not None:
for idx in indexes:
if idx.isUnique and len(idx.columns) == 1:
fld = idx.fields()[idx.columns[0]]
if (fld.dataType == u"NUMBER" and not fld.modifier and fld.notNull and fld not in ret):
ret.append(fld)
# and finally append the other suitable fields
for fld in self.fields():
if (fld.dataType == u"NUMBER" and not fld.modifier and fld.notNull and fld not in ret):
ret.append(fld)
if onlyOne:
return ret[0] if len(ret) > 0 else None
return ret
def uri(self):
uri = self.database().uri()
schema = self.schemaName() if self.schemaName() else ''
geomCol = self.geomColumn if self.type in [
Table.VectorType, Table.RasterType] else ""
uniqueCol = self.getValidQgisUniqueFields(
True) if self.isView else None
uri.setDataSource(schema, self.name, geomCol if geomCol else None,
None, uniqueCol.name if uniqueCol else "")
# Handle geographic table
if geomCol:
uri.setWkbType(self.wkbType)
uri.setSrid(str(self.srid))
return uri
class ORVectorTable(ORTable, VectorTable):
def __init__(self, row, db, schema=None):
ORTable.__init__(self, row[0:3], db, schema)
VectorTable.__init__(self, db, schema)
self.geomColumn, self.geomType, self.wkbType, self.geomDim, \
self.srid = row[-7:-2]
def info(self):
from .info_model import ORVectorTableInfo
return ORVectorTableInfo(self)
def runAction(self, action):
if action.startswith("extent/"):
if action == "extent/update":
self.aboutToChange.emit()
self.updateExtent()
return True
if ORTable.runAction(self, action):
return True
return VectorTable.runAction(self, action)
def canUpdateMetadata(self):
return self.database().connector.canUpdateMetadata((self.schemaName(),
self.name))
def updateExtent(self):
self.database().connector.updateMetadata(
(self.schemaName(), self.name),
self.geomColumn, extent=self.extent)
self.refreshTableEstimatedExtent()
self.refresh()
def hasSpatialIndex(self, geom_column=None):
geom_column = geom_column if geom_column else self.geomColumn
for idx in self.indexes():
if geom_column == idx.column:
return True
return False
class ORTableField(TableField):
def __init__(self, row, table):
""" build fields information from query and find primary key """
TableField.__init__(self, table)
self.num, self.name, self.dataType, self.charMaxLen, \
self.modifier, self.notNull, self.hasDefault, \
self.default, typeStr, self.comment = row
self.primaryKey = False
self.num = int(self.num)
if self.charMaxLen == NULL:
self.charMaxLen = None
else:
self.charMaxLen = int(self.charMaxLen)
if self.modifier == NULL:
self.modifier = None
else:
self.modifier = int(self.modifier)
if self.notNull.upper() == u"Y":
self.notNull = False
else:
self.notNull = True
if self.comment == NULL:
self.comment = u""
# find out whether fields are part of primary key
for con in self.table().constraints():
if con.type == ORTableConstraint.TypePrimaryKey and self.name == con.column:
self.primaryKey = True
break
def type2String(self):
if (u"TIMESTAMP" in self.dataType or self.dataType in [u"DATE", u"SDO_GEOMETRY", u"BINARY_FLOAT", u"BINARY_DOUBLE"]):
return u"{}".format(self.dataType)
if self.charMaxLen in [None, -1]:
return u"{}".format(self.dataType)
elif self.modifier in [None, -1, 0]:
return u"{}({})".format(self.dataType, self.charMaxLen)
return u"{}({},{})".format(self.dataType, self.charMaxLen,
self.modifier)
def update(self, new_name, new_type_str=None, new_not_null=None,
new_default_str=None):
self.table().aboutToChange.emit()
if self.name == new_name:
new_name = None
if self.type2String() == new_type_str:
new_type_str = None
if self.notNull == new_not_null:
new_not_null = None
if self.default2String() == new_default_str:
new_default_str = None
ret = self.table().database().connector.updateTableColumn(
(self.table().schemaName(), self.table().name),
self.name, new_name, new_type_str,
new_not_null, new_default_str)
# When changing a field, refresh also constraints and
# indexes.
if ret is not False:
self.table().refreshFields()
self.table().refreshConstraints()
self.table().refreshIndexes()
return ret
class ORTableConstraint(TableConstraint):
TypeCheck, TypeForeignKey, TypePrimaryKey, \
TypeUnique, TypeUnknown = list(range(5))
types = {"c": TypeCheck, "r": TypeForeignKey,
"p": TypePrimaryKey, "u": TypeUnique}
def __init__(self, row, table):
""" build constraints info from query """
TableConstraint.__init__(self, table)
self.name, constr_type_str, self.column, self.validated, \
self.generated, self.status = row[0:6]
constr_type_str = constr_type_str.lower()
if constr_type_str in ORTableConstraint.types:
self.type = ORTableConstraint.types[constr_type_str]
else:
self.type = ORTableConstraint.TypeUnknown
if row[6] == NULL:
self.checkSource = u""
else:
self.checkSource = row[6]
if row[8] == NULL:
self.foreignTable = u""
else:
self.foreignTable = row[8]
if row[7] == NULL:
self.foreignOnDelete = u""
else:
self.foreignOnDelete = row[7]
if row[9] == NULL:
self.foreignKey = u""
else:
self.foreignKey = row[9]
def type2String(self):
if self.type == ORTableConstraint.TypeCheck:
return QApplication.translate("DBManagerPlugin", "Check")
if self.type == ORTableConstraint.TypePrimaryKey:
return QApplication.translate("DBManagerPlugin", "Primary key")
if self.type == ORTableConstraint.TypeForeignKey:
return QApplication.translate("DBManagerPlugin", "Foreign key")
if self.type == ORTableConstraint.TypeUnique:
return QApplication.translate("DBManagerPlugin", "Unique")
return QApplication.translate("DBManagerPlugin", 'Unknown')
def fields(self):
""" Hack to make edit dialog box work """
fields = self.table().fields()
field = None
for fld in fields:
if fld.name == self.column:
field = fld
cols = {}
cols[0] = field
return cols
class ORTableIndex(TableIndex):
def __init__(self, row, table):
TableIndex.__init__(self, table)
self.name, self.column, self.indexType, self.status, \
self.analyzed, self.compression, self.isUnique = row
def fields(self):
""" Hack to make edit dialog box work """
self.table().refreshFields()
fields = self.table().fields()
field = None
for fld in fields:
if fld.name == self.column:
field = fld
cols = {}
cols[0] = field
return cols
class ORTableTrigger(TableTrigger):
def __init__(self, row, table):
TableTrigger.__init__(self, table)
self.name, self.event, self.type, self.enabled = row
| gpl-2.0 |
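The `ORTableField.type2String()` method in the record above renders an Oracle column declaration from the `(dataType, charMaxLen, modifier)` triple; the same rules can be sketched as a standalone function (the function name is an assumption for the example):

```python
# Sketch of ORTableField.type2String(): render an Oracle column type string
# from (dataType, charMaxLen, modifier) as read from the data dictionary.
def format_oracle_type(data_type, char_max_len=None, modifier=None):
    # Types whose declarations never carry a length/precision suffix.
    if "TIMESTAMP" in data_type or data_type in (
            "DATE", "SDO_GEOMETRY", "BINARY_FLOAT", "BINARY_DOUBLE"):
        return data_type
    if char_max_len in (None, -1):       # no length recorded
        return data_type
    if modifier in (None, -1, 0):        # length only, no scale
        return "{}({})".format(data_type, char_max_len)
    return "{}({},{})".format(data_type, char_max_len, modifier)
```

So `format_oracle_type("NUMBER", 10, 2)` yields `NUMBER(10,2)`, while a `DATE` column ignores its internal length entirely.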