| repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
vmg/hg-stable | hgext/convert/__init__.py | 92 | 15637 | # convert.py Foreign SCM converter
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
'''import revisions from foreign VCS repositories into Mercurial'''
import convcmd
import cvsps
import subversion
from mercurial import commands, templatekw
from mercurial.i18n import _
testedwith = 'internal'
# Commands definition was moved elsewhere to ease demandload job.
def convert(ui, src, dest=None, revmapfile=None, **opts):
"""convert a foreign SCM repository to a Mercurial one.
Accepted source formats [identifiers]:
- Mercurial [hg]
- CVS [cvs]
- Darcs [darcs]
- git [git]
- Subversion [svn]
- Monotone [mtn]
- GNU Arch [gnuarch]
- Bazaar [bzr]
- Perforce [p4]
Accepted destination formats [identifiers]:
- Mercurial [hg]
- Subversion [svn] (history on branches is not preserved)
If no revision is given, all revisions will be converted.
Otherwise, convert will only import up to the named revision
(given in a format understood by the source).
If no destination directory name is specified, it defaults to the
basename of the source with ``-hg`` appended. If the destination
repository doesn't exist, it will be created.
By default, all sources except Mercurial will use --branchsort.
Mercurial uses --sourcesort to preserve the original revision number
ordering. Sort modes have the following effects:
--branchsort convert from parent to child revision when possible,
which means branches are usually converted one after
the other. It generates more compact repositories.
--datesort sort revisions by date. Converted repositories have
good-looking changelogs but are often an order of
magnitude larger than the same ones generated by
--branchsort.
--sourcesort try to preserve source revisions order, only
supported by Mercurial sources.
--closesort try to move closed revisions as close as possible
to parent branches, only supported by Mercurial
sources.
If ``REVMAP`` isn't given, it will be put in a default location
(``<dest>/.hg/shamap``). The ``REVMAP`` is a simple
text file that maps each source commit ID to the destination ID
for that revision, like so::
<source ID> <destination ID>
If the file doesn't exist, it's automatically created. It's
updated on each commit copied, so :hg:`convert` can be interrupted
and can be run repeatedly to copy new commits.
The authormap is a simple text file that maps each source commit
author to a destination commit author. It is handy for source SCMs
that use unix logins to identify authors (e.g.: CVS). One line per
author mapping and the line format is::
source author = destination author
Empty lines and lines starting with a ``#`` are ignored.
The filemap is a file that allows filtering and remapping of files
and directories. Each line can contain one of the following
directives::
include path/to/file-or-dir
exclude path/to/file-or-dir
rename path/to/source path/to/destination
Comment lines start with ``#``. A specified path matches if it
equals the full relative name of a file or one of its parent
directories. The ``include`` or ``exclude`` directive with the
longest matching path applies, so line order does not matter.
The ``include`` directive causes a file, or all files under a
directory, to be included in the destination repository, and the
exclusion of all other files and directories not explicitly
included. The ``exclude`` directive causes files or directories to
be omitted. The ``rename`` directive renames a file or directory if
it is converted. To rename from a subdirectory into the root of
the repository, use ``.`` as the path to rename to.
The splicemap is a file that allows insertion of synthetic
history, letting you specify the parents of a revision. This is
useful if you want to e.g. give a Subversion merge two parents, or
graft two disconnected series of history together. Each entry
contains a key, followed by a space, followed by one or two
comma-separated values::
key parent1, parent2
The key is the revision ID in the source
revision control system whose parents should be modified (same
format as a key in .hg/shamap). The values are the revision IDs
(in either the source or destination revision control system) that
should be used as the new parents for that node. For example, if
you have merged "release-1.0" into "trunk", then you should
specify the revision on "trunk" as the first parent and the one on
the "release-1.0" branch as the second.
The branchmap is a file that allows you to rename a branch when it is
being brought in from the external repository. When used in
conjunction with a splicemap, it allows for a powerful combination
to help fix even the most badly mismanaged repositories and turn them
into nicely structured Mercurial repositories. The branchmap contains
lines of the form::
original_branch_name new_branch_name
where "original_branch_name" is the name of the branch in the
source repository, and "new_branch_name" is the name of the branch
is the destination repository. No whitespace is allowed in the
branch names. This can be used to (for instance) move code in one
repository from "default" to a named branch.
Mercurial Source
################
The Mercurial source recognizes the following configuration
options, which you can set on the command line with ``--config``:
:convert.hg.ignoreerrors: ignore integrity errors when reading.
Use it to fix Mercurial repositories with missing revlogs, by
converting from and to Mercurial. Default is False.
:convert.hg.saverev: store original revision ID in changeset
(forces target IDs to change). It takes a boolean argument and
defaults to False.
:convert.hg.startrev: convert start revision and its descendants.
It takes a hg revision identifier and defaults to 0.
CVS Source
##########
CVS source will use a sandbox (i.e. a checked-out copy) from CVS
to indicate the starting point of what will be converted. Direct
access to the repository files is not needed, unless of course the
repository is ``:local:``. The conversion uses the top level
directory in the sandbox to find the CVS repository, and then uses
CVS rlog commands to find files to convert. This means that unless
a filemap is given, all files under the starting directory will be
converted, and that any directory reorganization in the CVS
sandbox is ignored.
The following options can be used with ``--config``:
:convert.cvsps.cache: Set to False to disable remote log caching,
for testing and debugging purposes. Default is True.
:convert.cvsps.fuzz: Specify the maximum time (in seconds) that is
allowed between commits with identical user and log message in
a single changeset. If very large files were checked in as
part of a changeset, the default may not be long enough.
The default is 60.
:convert.cvsps.mergeto: Specify a regular expression to which
commit log messages are matched. If a match occurs, then the
conversion process will insert a dummy revision merging the
branch on which this log message occurs to the branch
indicated in the regex. Default is ``{{mergetobranch
([-\\w]+)}}``
:convert.cvsps.mergefrom: Specify a regular expression to which
commit log messages are matched. If a match occurs, then the
conversion process will add the most recent revision on the
branch indicated in the regex as the second parent of the
changeset. Default is ``{{mergefrombranch ([-\\w]+)}}``
:convert.localtimezone: use local time (as determined by the TZ
environment variable) for changeset date/times. The default
is False (use UTC).
:hooks.cvslog: Specify a Python function to be called at the end of
gathering the CVS log. The function is passed a list with the
log entries, and can modify the entries in-place, or add or
delete them.
:hooks.cvschangesets: Specify a Python function to be called after
the changesets are calculated from the CVS log. The
function is passed a list with the changeset entries, and can
modify the changesets in-place, or add or delete them.
An additional "debugcvsps" Mercurial command allows the builtin
changeset merging code to be run without doing a conversion. Its
parameters and output are similar to that of cvsps 2.1. Please see
the command help for more details.
Subversion Source
#################
Subversion source detects classical trunk/branches/tags layouts.
By default, the supplied ``svn://repo/path/`` source URL is
converted as a single branch. If ``svn://repo/path/trunk`` exists
it replaces the default branch. If ``svn://repo/path/branches``
exists, its subdirectories are listed as possible branches. If
``svn://repo/path/tags`` exists, it is searched for tags referencing
converted branches. The default ``trunk``, ``branches`` and ``tags``
values can be overridden with the following options. Set them to paths
relative to the source URL, or leave them blank to disable auto
detection.
The following options can be set with ``--config``:
:convert.svn.branches: specify the directory containing branches.
The default is ``branches``.
:convert.svn.tags: specify the directory containing tags. The
default is ``tags``.
:convert.svn.trunk: specify the name of the trunk branch. The
default is ``trunk``.
:convert.localtimezone: use local time (as determined by the TZ
environment variable) for changeset date/times. The default
is False (use UTC).
Source history can be retrieved starting at a specific revision,
instead of being integrally converted. Only single branch
conversions are supported.
:convert.svn.startrev: specify start Subversion revision number.
The default is 0.
Perforce Source
###############
The Perforce (P4) importer can be given a p4 depot path or a
client specification as source. It will convert all files in the
source to a flat Mercurial repository, ignoring labels, branches
and integrations. Note that when a depot path is given you usually
should also specify a target directory, because otherwise the
target may be named ``...-hg``.
It is possible to limit the amount of source history to be
converted by specifying an initial Perforce revision:
:convert.p4.startrev: specify initial Perforce revision (a
Perforce changelist number).
Mercurial Destination
#####################
The following options are supported:
:convert.hg.clonebranches: dispatch source branches in separate
clones. The default is False.
:convert.hg.tagsbranch: branch name for tag revisions, defaults to
``default``.
:convert.hg.usebranchnames: preserve branch names. The default is
True.
"""
return convcmd.convert(ui, src, dest, revmapfile, **opts)
def debugsvnlog(ui, **opts):
return subversion.debugsvnlog(ui, **opts)
def debugcvsps(ui, *args, **opts):
'''create changeset information from CVS
This command is intended as a debugging tool for the CVS to
Mercurial converter, and can be used as a direct replacement for
cvsps.
Hg debugcvsps reads the CVS rlog for the current directory (or any
named directory) in the CVS repository, and converts the log to a
series of changesets based on matching commit log entries and
dates.'''
return cvsps.debugcvsps(ui, *args, **opts)
commands.norepo += " convert debugsvnlog debugcvsps"
cmdtable = {
"convert":
(convert,
[('', 'authors', '',
_('username mapping filename (DEPRECATED, use --authormap instead)'),
_('FILE')),
('s', 'source-type', '',
_('source repository type'), _('TYPE')),
('d', 'dest-type', '',
_('destination repository type'), _('TYPE')),
('r', 'rev', '',
_('import up to target revision REV'), _('REV')),
('A', 'authormap', '',
_('remap usernames using this file'), _('FILE')),
('', 'filemap', '',
_('remap file names using contents of file'), _('FILE')),
('', 'splicemap', '',
_('splice synthesized history into place'), _('FILE')),
('', 'branchmap', '',
_('change branch names while converting'), _('FILE')),
('', 'branchsort', None, _('try to sort changesets by branches')),
('', 'datesort', None, _('try to sort changesets by date')),
('', 'sourcesort', None, _('preserve source changesets order')),
('', 'closesort', None, _('try to reorder closed revisions'))],
_('hg convert [OPTION]... SOURCE [DEST [REVMAP]]')),
"debugsvnlog":
(debugsvnlog,
[],
'hg debugsvnlog'),
"debugcvsps":
(debugcvsps,
[
# Main options shared with cvsps-2.1
('b', 'branches', [], _('only return changes on specified branches')),
('p', 'prefix', '', _('prefix to remove from file names')),
('r', 'revisions', [],
_('only return changes after or between specified tags')),
('u', 'update-cache', None, _("update cvs log cache")),
('x', 'new-cache', None, _("create new cvs log cache")),
('z', 'fuzz', 60, _('set commit time fuzz in seconds')),
('', 'root', '', _('specify cvsroot')),
# Options specific to builtin cvsps
('', 'parents', '', _('show parent changesets')),
('', 'ancestors', '',
_('show current changeset in ancestor branches')),
# Options that are ignored for compatibility with cvsps-2.1
('A', 'cvs-direct', None, _('ignored for compatibility')),
],
_('hg debugcvsps [OPTION]... [PATH]...')),
}
def kwconverted(ctx, name):
rev = ctx.extra().get('convert_revision', '')
if rev.startswith('svn:'):
if name == 'svnrev':
return str(subversion.revsplit(rev)[2])
elif name == 'svnpath':
return subversion.revsplit(rev)[1]
elif name == 'svnuuid':
return subversion.revsplit(rev)[0]
return rev
def kwsvnrev(repo, ctx, **args):
""":svnrev: String. Converted subversion revision number."""
return kwconverted(ctx, 'svnrev')
def kwsvnpath(repo, ctx, **args):
""":svnpath: String. Converted subversion revision project path."""
return kwconverted(ctx, 'svnpath')
def kwsvnuuid(repo, ctx, **args):
""":svnuuid: String. Converted subversion revision repository identifier."""
return kwconverted(ctx, 'svnuuid')
def extsetup(ui):
templatekw.keywords['svnrev'] = kwsvnrev
templatekw.keywords['svnpath'] = kwsvnpath
templatekw.keywords['svnuuid'] = kwsvnuuid
# tell hggettext to extract docstrings from these functions:
i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
| gpl-2.0 |
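The docstring above fixes the line formats for the ``REVMAP``/shamap (``<source ID> <destination ID>``) and the splicemap (``key parent1, parent2``). As a quick, hypothetical illustration (not part of hgext.convert), a minimal parser for those two formats might look like this, assuming ``#`` comment lines are skipped as described for the other map files:

```python
# Hypothetical helpers, not part of hgext.convert: parse the two map
# file formats documented in the convert docstring above.

def parse_shamap(lines):
    """Parse '<source ID> <destination ID>' lines into a dict."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):  # assumed comment handling
            continue
        src, dst = line.split(None, 1)
        mapping[src] = dst
    return mapping

def parse_splicemap(lines):
    """Parse 'key parent1[, parent2]' lines into {key: [parents]}."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, parents = line.split(None, 1)
        mapping[key] = [p.strip() for p in parents.split(',')]
    return mapping

print(parse_splicemap(['rev3 rev1, rev2']))  # {'rev3': ['rev1', 'rev2']}
```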
batxes/4c2vhic | Six_mouse_models/Six_mouse_models_final_output_0.2_-0.1_11000/Six_mouse_models20897.py | 2 | 18210 | import _surface
import chimera
try:
import chimera.runCommand
except:
pass
from VolumePath import markerset as ms
try:
from VolumePath import Marker_Set, Link
new_marker_set=Marker_Set
except:
from VolumePath import volume_path_dialog
d= volume_path_dialog(True)
new_marker_set= d.new_marker_set
marker_sets={}
surf_sets={}
if "particle_0 geometry" not in marker_sets:
s=new_marker_set('particle_0 geometry')
marker_sets["particle_0 geometry"]=s
s= marker_sets["particle_0 geometry"]
mark=s.place_marker((9895.85, 4779.14, 5060.59), (0, 1, 0), 846)
if "particle_1 geometry" not in marker_sets:
s=new_marker_set('particle_1 geometry')
marker_sets["particle_1 geometry"]=s
s= marker_sets["particle_1 geometry"]
mark=s.place_marker((11623.7, 6717.01, 4814.53), (0.7, 0.7, 0.7), 846)
if "particle_2 geometry" not in marker_sets:
s=new_marker_set('particle_2 geometry')
marker_sets["particle_2 geometry"]=s
s= marker_sets["particle_2 geometry"]
mark=s.place_marker((11497.3, 4795.33, 4496.22), (0.7, 0.7, 0.7), 846)
if "particle_3 geometry" not in marker_sets:
s=new_marker_set('particle_3 geometry')
marker_sets["particle_3 geometry"]=s
s= marker_sets["particle_3 geometry"]
mark=s.place_marker((10839.6, 4899.13, 4876.44), (0.7, 0.7, 0.7), 846)
if "particle_4 geometry" not in marker_sets:
s=new_marker_set('particle_4 geometry')
marker_sets["particle_4 geometry"]=s
s= marker_sets["particle_4 geometry"]
mark=s.place_marker((9854.51, 5272.55, 4890.34), (0.7, 0.7, 0.7), 846)
if "particle_5 geometry" not in marker_sets:
s=new_marker_set('particle_5 geometry')
marker_sets["particle_5 geometry"]=s
s= marker_sets["particle_5 geometry"]
mark=s.place_marker((10280.5, 4519.84, 5862.14), (0.7, 0.7, 0.7), 846)
if "particle_6 geometry" not in marker_sets:
s=new_marker_set('particle_6 geometry')
marker_sets["particle_6 geometry"]=s
s= marker_sets["particle_6 geometry"]
mark=s.place_marker((12023.6, 5317.59, 6979.15), (0.7, 0.7, 0.7), 846)
if "particle_7 geometry" not in marker_sets:
s=new_marker_set('particle_7 geometry')
marker_sets["particle_7 geometry"]=s
s= marker_sets["particle_7 geometry"]
mark=s.place_marker((10465, 3656.04, 6339.26), (0.7, 0.7, 0.7), 846)
if "particle_8 geometry" not in marker_sets:
s=new_marker_set('particle_8 geometry')
marker_sets["particle_8 geometry"]=s
s= marker_sets["particle_8 geometry"]
mark=s.place_marker((10053.8, 3730.14, 6431.28), (0.7, 0.7, 0.7), 846)
if "particle_9 geometry" not in marker_sets:
s=new_marker_set('particle_9 geometry')
marker_sets["particle_9 geometry"]=s
s= marker_sets["particle_9 geometry"]
mark=s.place_marker((9418.61, 3488.38, 7860.45), (0.7, 0.7, 0.7), 846)
if "particle_10 geometry" not in marker_sets:
s=new_marker_set('particle_10 geometry')
marker_sets["particle_10 geometry"]=s
s= marker_sets["particle_10 geometry"]
mark=s.place_marker((10136.6, 2031.21, 6614.07), (0, 1, 0), 846)
if "particle_11 geometry" not in marker_sets:
s=new_marker_set('particle_11 geometry')
marker_sets["particle_11 geometry"]=s
s= marker_sets["particle_11 geometry"]
mark=s.place_marker((9234.06, 2750.29, 4817.56), (0.7, 0.7, 0.7), 846)
if "particle_12 geometry" not in marker_sets:
s=new_marker_set('particle_12 geometry')
marker_sets["particle_12 geometry"]=s
s= marker_sets["particle_12 geometry"]
mark=s.place_marker((10259.7, 2000.18, 6286.46), (0.7, 0.7, 0.7), 846)
if "particle_13 geometry" not in marker_sets:
s=new_marker_set('particle_13 geometry')
marker_sets["particle_13 geometry"]=s
s= marker_sets["particle_13 geometry"]
mark=s.place_marker((9202.4, 3401.73, 6794.46), (0.7, 0.7, 0.7), 846)
if "particle_14 geometry" not in marker_sets:
s=new_marker_set('particle_14 geometry')
marker_sets["particle_14 geometry"]=s
s= marker_sets["particle_14 geometry"]
mark=s.place_marker((9495.73, 2495.88, 5919.12), (0.7, 0.7, 0.7), 846)
if "particle_15 geometry" not in marker_sets:
s=new_marker_set('particle_15 geometry')
marker_sets["particle_15 geometry"]=s
s= marker_sets["particle_15 geometry"]
mark=s.place_marker((8947.39, 1575.81, 4184.32), (0.7, 0.7, 0.7), 846)
if "particle_16 geometry" not in marker_sets:
s=new_marker_set('particle_16 geometry')
marker_sets["particle_16 geometry"]=s
s= marker_sets["particle_16 geometry"]
mark=s.place_marker((8618.3, 1137.5, 6097.84), (0.7, 0.7, 0.7), 846)
if "particle_17 geometry" not in marker_sets:
s=new_marker_set('particle_17 geometry')
marker_sets["particle_17 geometry"]=s
s= marker_sets["particle_17 geometry"]
mark=s.place_marker((8497.11, 1751.93, 6458.88), (0.7, 0.7, 0.7), 846)
if "particle_18 geometry" not in marker_sets:
s=new_marker_set('particle_18 geometry')
marker_sets["particle_18 geometry"]=s
s= marker_sets["particle_18 geometry"]
mark=s.place_marker((8305.11, 2909.33, 5110.94), (0.7, 0.7, 0.7), 846)
if "particle_19 geometry" not in marker_sets:
s=new_marker_set('particle_19 geometry')
marker_sets["particle_19 geometry"]=s
s= marker_sets["particle_19 geometry"]
mark=s.place_marker((8957.24, 2981.93, 4276.03), (0.7, 0.7, 0.7), 846)
if "particle_20 geometry" not in marker_sets:
s=new_marker_set('particle_20 geometry')
marker_sets["particle_20 geometry"]=s
s= marker_sets["particle_20 geometry"]
mark=s.place_marker((8154.87, 2489.16, 5528.53), (0, 1, 0), 846)
if "particle_21 geometry" not in marker_sets:
s=new_marker_set('particle_21 geometry')
marker_sets["particle_21 geometry"]=s
s= marker_sets["particle_21 geometry"]
mark=s.place_marker((8499.12, 3891.02, 6567.07), (0.7, 0.7, 0.7), 846)
if "particle_22 geometry" not in marker_sets:
s=new_marker_set('particle_22 geometry')
marker_sets["particle_22 geometry"]=s
s= marker_sets["particle_22 geometry"]
mark=s.place_marker((8130.18, 3715.06, 4312.58), (0.7, 0.7, 0.7), 846)
if "particle_23 geometry" not in marker_sets:
s=new_marker_set('particle_23 geometry')
marker_sets["particle_23 geometry"]=s
s= marker_sets["particle_23 geometry"]
mark=s.place_marker((7874.84, 3507.14, 4738.3), (0.7, 0.7, 0.7), 846)
if "particle_24 geometry" not in marker_sets:
s=new_marker_set('particle_24 geometry')
marker_sets["particle_24 geometry"]=s
s= marker_sets["particle_24 geometry"]
mark=s.place_marker((7444.87, 4280.92, 6976.12), (0.7, 0.7, 0.7), 846)
if "particle_25 geometry" not in marker_sets:
s=new_marker_set('particle_25 geometry')
marker_sets["particle_25 geometry"]=s
s= marker_sets["particle_25 geometry"]
mark=s.place_marker((7325.99, 4804.69, 5057.31), (0.7, 0.7, 0.7), 846)
if "particle_26 geometry" not in marker_sets:
s=new_marker_set('particle_26 geometry')
marker_sets["particle_26 geometry"]=s
s= marker_sets["particle_26 geometry"]
mark=s.place_marker((7746, 3864.73, 3430.43), (0.7, 0.7, 0.7), 846)
if "particle_27 geometry" not in marker_sets:
s=new_marker_set('particle_27 geometry')
marker_sets["particle_27 geometry"]=s
s= marker_sets["particle_27 geometry"]
mark=s.place_marker((7534.98, 3736.34, 4651.15), (0.7, 0.7, 0.7), 846)
if "particle_28 geometry" not in marker_sets:
s=new_marker_set('particle_28 geometry')
marker_sets["particle_28 geometry"]=s
s= marker_sets["particle_28 geometry"]
mark=s.place_marker((7652.65, 2860.11, 6249.68), (0.7, 0.7, 0.7), 846)
if "particle_29 geometry" not in marker_sets:
s=new_marker_set('particle_29 geometry')
marker_sets["particle_29 geometry"]=s
s= marker_sets["particle_29 geometry"]
mark=s.place_marker((7892.43, 3711.37, 6262.11), (0.7, 0.7, 0.7), 846)
if "particle_30 geometry" not in marker_sets:
s=new_marker_set('particle_30 geometry')
marker_sets["particle_30 geometry"]=s
s= marker_sets["particle_30 geometry"]
mark=s.place_marker((6613.76, 3508.26, 5469.47), (0, 1, 0), 846)
if "particle_31 geometry" not in marker_sets:
s=new_marker_set('particle_31 geometry')
marker_sets["particle_31 geometry"]=s
s= marker_sets["particle_31 geometry"]
mark=s.place_marker((5918.66, 4592.76, 4430.71), (0.7, 0.7, 0.7), 846)
if "particle_32 geometry" not in marker_sets:
s=new_marker_set('particle_32 geometry')
marker_sets["particle_32 geometry"]=s
s= marker_sets["particle_32 geometry"]
mark=s.place_marker((6251.28, 4361.57, 4679.59), (0.7, 0.7, 0.7), 846)
if "particle_33 geometry" not in marker_sets:
s=new_marker_set('particle_33 geometry')
marker_sets["particle_33 geometry"]=s
s= marker_sets["particle_33 geometry"]
mark=s.place_marker((6065.42, 5890.81, 5411.56), (0.7, 0.7, 0.7), 846)
if "particle_34 geometry" not in marker_sets:
s=new_marker_set('particle_34 geometry')
marker_sets["particle_34 geometry"]=s
s= marker_sets["particle_34 geometry"]
mark=s.place_marker((7559.8, 5167.09, 5812.79), (0.7, 0.7, 0.7), 846)
if "particle_35 geometry" not in marker_sets:
s=new_marker_set('particle_35 geometry')
marker_sets["particle_35 geometry"]=s
s= marker_sets["particle_35 geometry"]
mark=s.place_marker((6267.87, 5694.12, 4970.93), (0.7, 0.7, 0.7), 846)
if "particle_36 geometry" not in marker_sets:
s=new_marker_set('particle_36 geometry')
marker_sets["particle_36 geometry"]=s
s= marker_sets["particle_36 geometry"]
mark=s.place_marker((7112.38, 5371.11, 7739.48), (1, 0.7, 0), 846)
if "particle_37 geometry" not in marker_sets:
s=new_marker_set('particle_37 geometry')
marker_sets["particle_37 geometry"]=s
s= marker_sets["particle_37 geometry"]
mark=s.place_marker((6944.81, 6190.75, 5104.93), (0.7, 0.7, 0.7), 846)
if "particle_38 geometry" not in marker_sets:
s=new_marker_set('particle_38 geometry')
marker_sets["particle_38 geometry"]=s
s= marker_sets["particle_38 geometry"]
mark=s.place_marker((5932.69, 5734.96, 6029.25), (0.7, 0.7, 0.7), 846)
if "particle_39 geometry" not in marker_sets:
s=new_marker_set('particle_39 geometry')
marker_sets["particle_39 geometry"]=s
s= marker_sets["particle_39 geometry"]
mark=s.place_marker((5081.73, 6778.3, 3992.6), (1, 0.7, 0), 846)
if "particle_40 geometry" not in marker_sets:
s=new_marker_set('particle_40 geometry')
marker_sets["particle_40 geometry"]=s
s= marker_sets["particle_40 geometry"]
mark=s.place_marker((4559.6, 6124.14, 6011.27), (0.7, 0.7, 0.7), 846)
if "particle_41 geometry" not in marker_sets:
s=new_marker_set('particle_41 geometry')
marker_sets["particle_41 geometry"]=s
s= marker_sets["particle_41 geometry"]
mark=s.place_marker((3322.13, 6571.74, 5576.72), (0.7, 0.7, 0.7), 846)
if "particle_42 geometry" not in marker_sets:
s=new_marker_set('particle_42 geometry')
marker_sets["particle_42 geometry"]=s
s= marker_sets["particle_42 geometry"]
mark=s.place_marker((3551.28, 6409.95, 4825.22), (0.7, 0.7, 0.7), 846)
if "particle_43 geometry" not in marker_sets:
s=new_marker_set('particle_43 geometry')
marker_sets["particle_43 geometry"]=s
s= marker_sets["particle_43 geometry"]
mark=s.place_marker((3752.22, 6669, 5370.92), (0.7, 0.7, 0.7), 846)
if "particle_44 geometry" not in marker_sets:
s=new_marker_set('particle_44 geometry')
marker_sets["particle_44 geometry"]=s
s= marker_sets["particle_44 geometry"]
mark=s.place_marker((3418.88, 6493.22, 4213.52), (0.7, 0.7, 0.7), 846)
if "particle_45 geometry" not in marker_sets:
s=new_marker_set('particle_45 geometry')
marker_sets["particle_45 geometry"]=s
s= marker_sets["particle_45 geometry"]
mark=s.place_marker((3930.09, 6646.63, 4848.98), (0.7, 0.7, 0.7), 846)
if "particle_46 geometry" not in marker_sets:
s=new_marker_set('particle_46 geometry')
marker_sets["particle_46 geometry"]=s
s= marker_sets["particle_46 geometry"]
mark=s.place_marker((2762.49, 6665.76, 5109.8), (0.7, 0.7, 0.7), 846)
if "particle_47 geometry" not in marker_sets:
s=new_marker_set('particle_47 geometry')
marker_sets["particle_47 geometry"]=s
s= marker_sets["particle_47 geometry"]
mark=s.place_marker((2761.25, 7619.31, 4299.74), (0.7, 0.7, 0.7), 846)
if "particle_48 geometry" not in marker_sets:
s=new_marker_set('particle_48 geometry')
marker_sets["particle_48 geometry"]=s
s= marker_sets["particle_48 geometry"]
mark=s.place_marker((2602.98, 7564.42, 4864.86), (0.7, 0.7, 0.7), 846)
if "particle_49 geometry" not in marker_sets:
s=new_marker_set('particle_49 geometry')
marker_sets["particle_49 geometry"]=s
s= marker_sets["particle_49 geometry"]
mark=s.place_marker((4070.52, 8094.26, 6003.27), (0.7, 0.7, 0.7), 846)
if "particle_50 geometry" not in marker_sets:
s=new_marker_set('particle_50 geometry')
marker_sets["particle_50 geometry"]=s
s= marker_sets["particle_50 geometry"]
mark=s.place_marker((3572.01, 6661.62, 5051.67), (0.7, 0.7, 0.7), 846)
if "particle_51 geometry" not in marker_sets:
s=new_marker_set('particle_51 geometry')
marker_sets["particle_51 geometry"]=s
s= marker_sets["particle_51 geometry"]
mark=s.place_marker((4231.94, 8313.97, 5586.03), (0, 1, 0), 846)
if "particle_52 geometry" not in marker_sets:
s=new_marker_set('particle_52 geometry')
marker_sets["particle_52 geometry"]=s
s= marker_sets["particle_52 geometry"]
mark=s.place_marker((1900.28, 6048.71, 6645.73), (0.7, 0.7, 0.7), 846)
if "particle_53 geometry" not in marker_sets:
s=new_marker_set('particle_53 geometry')
marker_sets["particle_53 geometry"]=s
s= marker_sets["particle_53 geometry"]
mark=s.place_marker((3414.59, 7114.43, 5049.39), (0.7, 0.7, 0.7), 846)
if "particle_54 geometry" not in marker_sets:
s=new_marker_set('particle_54 geometry')
marker_sets["particle_54 geometry"]=s
s= marker_sets["particle_54 geometry"]
mark=s.place_marker((3565.22, 8352.9, 4375.73), (0.7, 0.7, 0.7), 846)
if "particle_55 geometry" not in marker_sets:
s=new_marker_set('particle_55 geometry')
marker_sets["particle_55 geometry"]=s
s= marker_sets["particle_55 geometry"]
mark=s.place_marker((3256.41, 6847.16, 5212.69), (0.7, 0.7, 0.7), 846)
if "particle_56 geometry" not in marker_sets:
s=new_marker_set('particle_56 geometry')
marker_sets["particle_56 geometry"]=s
s= marker_sets["particle_56 geometry"]
mark=s.place_marker((4365.14, 6940.32, 5970.71), (0.7, 0.7, 0.7), 846)
if "particle_57 geometry" not in marker_sets:
s=new_marker_set('particle_57 geometry')
marker_sets["particle_57 geometry"]=s
s= marker_sets["particle_57 geometry"]
mark=s.place_marker((3019.81, 7703.9, 4531.38), (0.7, 0.7, 0.7), 846)
if "particle_58 geometry" not in marker_sets:
s=new_marker_set('particle_58 geometry')
marker_sets["particle_58 geometry"]=s
s= marker_sets["particle_58 geometry"]
mark=s.place_marker((3290.55, 7000.61, 6106.3), (0.7, 0.7, 0.7), 846)
if "particle_59 geometry" not in marker_sets:
s=new_marker_set('particle_59 geometry')
marker_sets["particle_59 geometry"]=s
s= marker_sets["particle_59 geometry"]
mark=s.place_marker((4229.54, 8248.89, 4966.89), (0.7, 0.7, 0.7), 846)
if "particle_60 geometry" not in marker_sets:
s=new_marker_set('particle_60 geometry')
marker_sets["particle_60 geometry"]=s
s= marker_sets["particle_60 geometry"]
mark=s.place_marker((2692.49, 9182.64, 5209.29), (0.7, 0.7, 0.7), 846)
if "particle_61 geometry" not in marker_sets:
s=new_marker_set('particle_61 geometry')
marker_sets["particle_61 geometry"]=s
s= marker_sets["particle_61 geometry"]
mark=s.place_marker((4154.8, 9125.39, 6576.51), (0, 1, 0), 846)
if "particle_62 geometry" not in marker_sets:
s=new_marker_set('particle_62 geometry')
marker_sets["particle_62 geometry"]=s
s= marker_sets["particle_62 geometry"]
mark=s.place_marker((5178.95, 8918.41, 8445.83), (0.7, 0.7, 0.7), 846)
if "particle_63 geometry" not in marker_sets:
s=new_marker_set('particle_63 geometry')
marker_sets["particle_63 geometry"]=s
s= marker_sets["particle_63 geometry"]
mark=s.place_marker((3606.03, 8528.34, 7287.69), (0.7, 0.7, 0.7), 846)
if "particle_64 geometry" not in marker_sets:
s=new_marker_set('particle_64 geometry')
marker_sets["particle_64 geometry"]=s
s= marker_sets["particle_64 geometry"]
mark=s.place_marker((1596.76, 7898.6, 6282.75), (0.7, 0.7, 0.7), 846)
if "particle_65 geometry" not in marker_sets:
s=new_marker_set('particle_65 geometry')
marker_sets["particle_65 geometry"]=s
s= marker_sets["particle_65 geometry"]
mark=s.place_marker((2372.12, 9460.28, 7077.1), (0.7, 0.7, 0.7), 846)
if "particle_66 geometry" not in marker_sets:
s=new_marker_set('particle_66 geometry')
marker_sets["particle_66 geometry"]=s
s= marker_sets["particle_66 geometry"]
mark=s.place_marker((1515.82, 8738.76, 6597.78), (0.7, 0.7, 0.7), 846)
if "particle_67 geometry" not in marker_sets:
s=new_marker_set('particle_67 geometry')
marker_sets["particle_67 geometry"]=s
s= marker_sets["particle_67 geometry"]
mark=s.place_marker((2021.68, 7358.56, 7198.53), (0.7, 0.7, 0.7), 846)
if "particle_68 geometry" not in marker_sets:
s=new_marker_set('particle_68 geometry')
marker_sets["particle_68 geometry"]=s
s= marker_sets["particle_68 geometry"]
mark=s.place_marker((1283.48, 8221.92, 8808.6), (0.7, 0.7, 0.7), 846)
if "particle_69 geometry" not in marker_sets:
s=new_marker_set('particle_69 geometry')
marker_sets["particle_69 geometry"]=s
s= marker_sets["particle_69 geometry"]
mark=s.place_marker((1823.66, 7649.06, 7980.78), (0.7, 0.7, 0.7), 846)
if "particle_70 geometry" not in marker_sets:
s=new_marker_set('particle_70 geometry')
marker_sets["particle_70 geometry"]=s
s= marker_sets["particle_70 geometry"]
mark=s.place_marker((1491.87, 9529.63, 7887.3), (0.7, 0.7, 0.7), 846)
if "particle_71 geometry" not in marker_sets:
s=new_marker_set('particle_71 geometry')
marker_sets["particle_71 geometry"]=s
s= marker_sets["particle_71 geometry"]
mark=s.place_marker((2460.24, 7650.56, 7943.35), (0, 1, 0), 846)
if "particle_72 geometry" not in marker_sets:
s=new_marker_set('particle_72 geometry')
marker_sets["particle_72 geometry"]=s
s= marker_sets["particle_72 geometry"]
mark=s.place_marker((2619.71, 8921.31, 8490.59), (0.7, 0.7, 0.7), 846)
if "particle_73 geometry" not in marker_sets:
s=new_marker_set('particle_73 geometry')
marker_sets["particle_73 geometry"]=s
s= marker_sets["particle_73 geometry"]
mark=s.place_marker((1338.34, 8608.84, 7527.57), (0.7, 0.7, 0.7), 846)
if "particle_74 geometry" not in marker_sets:
s=new_marker_set('particle_74 geometry')
marker_sets["particle_74 geometry"]=s
s= marker_sets["particle_74 geometry"]
mark=s.place_marker((2886.19, 8600.63, 8719.08), (0, 1, 0), 846)
for k in surf_sets.keys():
chimera.openModels.add([surf_sets[k]])
| gpl-3.0 |
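The generated Chimera script above repeats one five-line pattern per particle. A hand-written equivalent could factor that into a data table and a helper; this is a sketch against the same VolumePath calls used above, with ``new_marker_set`` and ``marker_sets`` set up exactly as in the script:

```python
# Sketch only: fold the repeated "particle_N geometry" blocks above
# into a loop. The coordinates are just the first two rows copied
# from the script.
PARTICLES = [
    # (name, (x, y, z), RGB color, radius)
    ("particle_0 geometry", (9895.85, 4779.14, 5060.59), (0, 1, 0), 846),
    ("particle_1 geometry", (11623.7, 6717.01, 4814.53), (0.7, 0.7, 0.7), 846),
]

def place_particles(particles, marker_sets, new_marker_set):
    for name, xyz, color, radius in particles:
        if name not in marker_sets:
            marker_sets[name] = new_marker_set(name)
        marker_sets[name].place_marker(xyz, color, radius)
```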
marknca/cling | dependencies/botocore/vendored/requests/packages/urllib3/util/connection.py | 679 | 3293 | import socket
try:
from select import poll, POLLIN
except ImportError: # `poll` doesn't exist on OSX and other platforms
poll = False
try:
from select import select
except ImportError: # `select` doesn't exist on AppEngine.
select = False
def is_connection_dropped(conn): # Platform-specific
"""
Returns True if the connection is dropped and should be closed.
:param conn:
:class:`httplib.HTTPConnection` object.
Note: For platforms like AppEngine, this will always return ``False`` to
let the platform handle connection recycling transparently for us.
"""
sock = getattr(conn, 'sock', False)
if sock is False: # Platform-specific: AppEngine
return False
if sock is None: # Connection already closed (such as by httplib).
return True
if not poll:
if not select: # Platform-specific: AppEngine
return False
try:
return select([sock], [], [], 0.0)[0]
except socket.error:
return True
# This version is better on platforms that support it.
p = poll()
p.register(sock, POLLIN)
for (fno, ev) in p.poll(0.0):
if fno == sock.fileno():
# Either data is buffered (bad), or the connection is dropped.
return True
# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
err = None
for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
# This is the only addition urllib3 makes to this function.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as _:
err = _
if sock is not None:
sock.close()
sock = None
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
def _set_socket_options(sock, options):
if options is None:
return
for opt in options:
sock.setsockopt(*opt)
| apache-2.0 |
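For context (not part of the vendored module), ``create_connection`` above would typically be driven like this; the host, timeout, and TCP_NODELAY option are illustrative choices only:

```python
import socket

# Connect with a 5-second timeout and disable Nagle's algorithm via
# socket_options, which create_connection applies before connecting.
sock = create_connection(
    ("example.com", 80),
    timeout=5.0,
    socket_options=[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)],
)
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(64))
sock.close()
```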
albertomurillo/ansible | lib/ansible/plugins/action/net_system.py | 648 | 1057 | # (c) 2017, Ansible Inc,
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action.net_base import ActionModule as _ActionModule
class ActionModule(_ActionModule):
def run(self, tmp=None, task_vars=None):
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
return result
| gpl-3.0 |
darrellsilver/norc | norc_utils/web.py | 1 | 1341 | import datetime
from django.utils import simplejson
from django.core.paginator import Paginator, InvalidPage
class JSONObjectEncoder(simplejson.JSONEncoder):
"""Handle encoding of complex objects.
The simplejson module doesn't handle the encoding of complex
objects such as datetime, so we handle it here.
"""
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.strftime("%m/%d/%Y %H:%M:%S")
try:
return simplejson.JSONEncoder.default(self, obj)
except TypeError:
return str(obj)
def paginate(request, data_set):
try:
per_page = int(request.GET.get('per_page', 20))
except ValueError:
per_page = 20
paginator = Paginator(data_set, per_page)
try:
page_num = int(request.GET.get('page', 1))
except ValueError:
page_num = 1
if page_num < 1 or page_num > paginator.num_pages:
page_num = 1
page = paginator.page(page_num)
page_data = {
'nextPage': page.next_page_number() if page.has_next() else 0,
'prevPage': page.previous_page_number() if page.has_previous() else 0,
'start': page.start_index(),
'end': page.end_index(),
'current': page_num,
'total': paginator.num_pages,
}
return page, page_data
| bsd-3-clause |
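As a quick illustration (not from the norc source), passing the encoder above as ``cls`` makes ``simplejson.dumps`` serialize datetimes with the format defined in ``default()``:

```python
import datetime
from django.utils import simplejson  # same import as the module above

payload = {'when': datetime.datetime(2010, 1, 2, 3, 4, 5), 'n': 7}
print(simplejson.dumps(payload, cls=JSONObjectEncoder))
# -> {"when": "01/02/2010 03:04:05", "n": 7}
```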
yencarnacion/jaikuengine | vendor/simplejson/decoder.py | 142 | 15148 | """Implementation of JSONDecoder
"""
import re
import sys
import struct
from simplejson.scanner import make_scanner
def _import_c_scanstring():
try:
from simplejson._speedups import scanstring
return scanstring
except ImportError:
return None
c_scanstring = _import_c_scanstring()
__all__ = ['JSONDecoder']
FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
def _floatconstants():
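# Big-endian IEEE-754 bit patterns for a quiet NaN and +Infinity.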
_BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
# The struct module in Python 2.4 would get frexp() out of range here
# when an endian is specified in the format string. Fixed in Python 2.5+
if sys.byteorder != 'big':
_BYTES = _BYTES[:8][::-1] + _BYTES[8:][::-1]
nan, inf = struct.unpack('dd', _BYTES)
return nan, inf, -inf
NaN, PosInf, NegInf = _floatconstants()
class JSONDecodeError(ValueError):
"""Subclass of ValueError with the following additional properties:
msg: The unformatted error message
doc: The JSON document being parsed
pos: The start index of doc where parsing failed
end: The end index of doc where parsing failed (may be None)
lineno: The line corresponding to pos
colno: The column corresponding to pos
endlineno: The line corresponding to end (may be None)
endcolno: The column corresponding to end (may be None)
"""
def __init__(self, msg, doc, pos, end=None):
ValueError.__init__(self, errmsg(msg, doc, pos, end=end))
self.msg = msg
self.doc = doc
self.pos = pos
self.end = end
self.lineno, self.colno = linecol(doc, pos)
if end is not None:
self.endlineno, self.endcolno = linecol(doc, end)
else:
self.endlineno, self.endcolno = None, None
def linecol(doc, pos):
lineno = doc.count('\n', 0, pos) + 1
if lineno == 1:
colno = pos
else:
colno = pos - doc.rindex('\n', 0, pos)
return lineno, colno
def errmsg(msg, doc, pos, end=None):
# Note that this function is called from _speedups
lineno, colno = linecol(doc, pos)
if end is None:
#fmt = '{0}: line {1} column {2} (char {3})'
#return fmt.format(msg, lineno, colno, pos)
fmt = '%s: line %d column %d (char %d)'
return fmt % (msg, lineno, colno, pos)
endlineno, endcolno = linecol(doc, end)
#fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
#return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
fmt = '%s: line %d column %d - line %d column %d (char %d - %d)'
return fmt % (msg, lineno, colno, endlineno, endcolno, pos, end)
_CONSTANTS = {
'-Infinity': NegInf,
'Infinity': PosInf,
'NaN': NaN,
}
STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
'"': u'"', '\\': u'\\', '/': u'/',
'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}
DEFAULT_ENCODING = "utf-8"
def py_scanstring(s, end, encoding=None, strict=True,
_b=BACKSLASH, _m=STRINGCHUNK.match):
"""Scan the string s for a JSON string. End is the index of the
character in s after the quote that started the JSON string.
Unescapes all valid JSON string escape sequences and raises ValueError
on attempt to decode an invalid string. If strict is False then literal
control characters are allowed in the string.
Returns a tuple of the decoded string and the index of the character in s
after the end quote."""
if encoding is None:
encoding = DEFAULT_ENCODING
chunks = []
_append = chunks.append
begin = end - 1
while 1:
chunk = _m(s, end)
if chunk is None:
raise JSONDecodeError(
"Unterminated string starting at", s, begin)
end = chunk.end()
content, terminator = chunk.groups()
# Content contains zero or more unescaped string characters
if content:
if not isinstance(content, unicode):
content = unicode(content, encoding)
_append(content)
# Terminator is the end of string, a literal control character,
# or a backslash denoting that an escape sequence follows
if terminator == '"':
break
elif terminator != '\\':
if strict:
msg = "Invalid control character %r at" % (terminator,)
#msg = "Invalid control character {0!r} at".format(terminator)
raise JSONDecodeError(msg, s, end)
else:
_append(terminator)
continue
try:
esc = s[end]
except IndexError:
raise JSONDecodeError(
"Unterminated string starting at", s, begin)
# If not a unicode escape sequence, must be in the lookup table
if esc != 'u':
try:
char = _b[esc]
except KeyError:
msg = "Invalid \\escape: " + repr(esc)
raise JSONDecodeError(msg, s, end)
end += 1
else:
# Unicode escape sequence
esc = s[end + 1:end + 5]
next_end = end + 5
if len(esc) != 4:
msg = "Invalid \\uXXXX escape"
raise JSONDecodeError(msg, s, end)
uni = int(esc, 16)
# Check for surrogate pair on UCS-4 systems
if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
if not s[end + 5:end + 7] == '\\u':
raise JSONDecodeError(msg, s, end)
esc2 = s[end + 7:end + 11]
if len(esc2) != 4:
raise JSONDecodeError(msg, s, end)
uni2 = int(esc2, 16)
uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
next_end += 6
char = unichr(uni)
end = next_end
# Append the unescaped character
_append(char)
return u''.join(chunks), end
# Use speedup if available
scanstring = c_scanstring or py_scanstring
WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'
def JSONObject((s, end), encoding, strict, scan_once, object_hook,
object_pairs_hook, memo=None,
_w=WHITESPACE.match, _ws=WHITESPACE_STR):
# Backwards compatibility
if memo is None:
memo = {}
memo_get = memo.setdefault
pairs = []
# Use a slice to prevent IndexError from being raised, the following
# check will raise a more specific ValueError if the string is empty
nextchar = s[end:end + 1]
# Normally we expect nextchar == '"'
if nextchar != '"':
if nextchar in _ws:
end = _w(s, end).end()
nextchar = s[end:end + 1]
# Trivial empty object
if nextchar == '}':
if object_pairs_hook is not None:
result = object_pairs_hook(pairs)
return result, end + 1
pairs = {}
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end + 1
elif nextchar != '"':
raise JSONDecodeError("Expecting property name", s, end)
end += 1
while True:
key, end = scanstring(s, end, encoding, strict)
key = memo_get(key, key)
# To skip some function call overhead we optimize the fast paths where
# the JSON key separator is ": " or just ":".
if s[end:end + 1] != ':':
end = _w(s, end).end()
if s[end:end + 1] != ':':
raise JSONDecodeError("Expecting : delimiter", s, end)
end += 1
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
try:
value, end = scan_once(s, end)
except StopIteration:
raise JSONDecodeError("Expecting object", s, end)
pairs.append((key, value))
try:
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar == '}':
break
elif nextchar != ',':
raise JSONDecodeError("Expecting , delimiter", s, end - 1)
try:
nextchar = s[end]
if nextchar in _ws:
end += 1
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar != '"':
raise JSONDecodeError("Expecting property name", s, end - 1)
if object_pairs_hook is not None:
result = object_pairs_hook(pairs)
return result, end
pairs = dict(pairs)
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end
def JSONArray((s, end), scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
values = []
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
# Look-ahead for trivial empty array
if nextchar == ']':
return values, end + 1
_append = values.append
while True:
try:
value, end = scan_once(s, end)
except StopIteration:
raise JSONDecodeError("Expecting object", s, end)
_append(value)
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
end += 1
if nextchar == ']':
break
elif nextchar != ',':
raise JSONDecodeError("Expecting , delimiter", s, end)
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
return values, end
class JSONDecoder(object):
"""Simple JSON <http://json.org> decoder
Performs the following translations in decoding by default:
+---------------+-------------------+
| JSON | Python |
+===============+===================+
| object | dict |
+---------------+-------------------+
| array | list |
+---------------+-------------------+
| string | unicode |
+---------------+-------------------+
| number (int) | int, long |
+---------------+-------------------+
| number (real) | float |
+---------------+-------------------+
| true | True |
+---------------+-------------------+
| false | False |
+---------------+-------------------+
| null | None |
+---------------+-------------------+
It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
their corresponding ``float`` values, which is outside the JSON spec.
"""
def __init__(self, encoding=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, strict=True,
object_pairs_hook=None):
"""
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
*strict* controls the parser's behavior when it encounters an
invalid control character in a string. The default setting of
``True`` means that unescaped control characters are parse errors, if
``False`` then control characters will be allowed in strings.
"""
self.encoding = encoding
self.object_hook = object_hook
self.object_pairs_hook = object_pairs_hook
self.parse_float = parse_float or float
self.parse_int = parse_int or int
self.parse_constant = parse_constant or _CONSTANTS.__getitem__
self.strict = strict
self.parse_object = JSONObject
self.parse_array = JSONArray
self.parse_string = scanstring
self.memo = {}
self.scan_once = make_scanner(self)
def decode(self, s, _w=WHITESPACE.match):
"""Return the Python representation of ``s`` (a ``str`` or ``unicode``
instance containing a JSON document)
"""
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
end = _w(s, end).end()
if end != len(s):
raise JSONDecodeError("Extra data", s, end, len(s))
return obj
def raw_decode(self, s, idx=0):
"""Decode a JSON document from ``s`` (a ``str`` or ``unicode``
beginning with a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
"""
try:
obj, end = self.scan_once(s, idx)
except StopIteration:
raise JSONDecodeError("No JSON object could be decoded", s, idx)
return obj, end
| apache-2.0 |
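The hooks documented in ``JSONDecoder.__init__`` above can be combined; for instance (an illustrative snippet, not part of decoder.py), preserving key order while decoding floats as ``Decimal``:

```python
from collections import OrderedDict
from decimal import Decimal

# object_pairs_hook keeps key order; parse_float swaps in Decimal.
dec = JSONDecoder(object_pairs_hook=OrderedDict, parse_float=Decimal)
obj = dec.decode('{"b": 1.5, "a": 2}')
print(obj)  # OrderedDict([(u'b', Decimal('1.5')), (u'a', 2)])
```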
ryfeus/lambda-packs | Shapely_numpy/source/numpy/polynomial/tests/test_laguerre.py | 58 | 17242 | """Tests for laguerre module.
"""
from __future__ import division, absolute_import, print_function
import numpy as np
import numpy.polynomial.laguerre as lag
from numpy.polynomial.polynomial import polyval
from numpy.testing import (
TestCase, assert_almost_equal, assert_raises,
assert_equal, assert_, run_module_suite)
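# Power-basis coefficients (lowest degree first) of Laguerre polynomials L_0..L_6.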
L0 = np.array([1])/1
L1 = np.array([1, -1])/1
L2 = np.array([2, -4, 1])/2
L3 = np.array([6, -18, 9, -1])/6
L4 = np.array([24, -96, 72, -16, 1])/24
L5 = np.array([120, -600, 600, -200, 25, -1])/120
L6 = np.array([720, -4320, 5400, -2400, 450, -36, 1])/720
Llist = [L0, L1, L2, L3, L4, L5, L6]
def trim(x):
return lag.lagtrim(x, tol=1e-6)
class TestConstants(TestCase):
def test_lagdomain(self):
assert_equal(lag.lagdomain, [0, 1])
def test_lagzero(self):
assert_equal(lag.lagzero, [0])
def test_lagone(self):
assert_equal(lag.lagone, [1])
def test_lagx(self):
assert_equal(lag.lagx, [1, -1])
class TestArithmetic(TestCase):
x = np.linspace(-3, 3, 100)
def test_lagadd(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
tgt = np.zeros(max(i, j) + 1)
tgt[i] += 1
tgt[j] += 1
res = lag.lagadd([0]*i + [1], [0]*j + [1])
assert_equal(trim(res), trim(tgt), err_msg=msg)
def test_lagsub(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
tgt = np.zeros(max(i, j) + 1)
tgt[i] += 1
tgt[j] -= 1
res = lag.lagsub([0]*i + [1], [0]*j + [1])
assert_equal(trim(res), trim(tgt), err_msg=msg)
def test_lagmulx(self):
assert_equal(lag.lagmulx([0]), [0])
assert_equal(lag.lagmulx([1]), [1, -1])
for i in range(1, 5):
ser = [0]*i + [1]
tgt = [0]*(i - 1) + [-i, 2*i + 1, -(i + 1)]
assert_almost_equal(lag.lagmulx(ser), tgt)
def test_lagmul(self):
# check values of result
for i in range(5):
pol1 = [0]*i + [1]
val1 = lag.lagval(self.x, pol1)
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
pol2 = [0]*j + [1]
val2 = lag.lagval(self.x, pol2)
pol3 = lag.lagmul(pol1, pol2)
val3 = lag.lagval(self.x, pol3)
assert_(len(pol3) == i + j + 1, msg)
assert_almost_equal(val3, val1*val2, err_msg=msg)
def test_lagdiv(self):
for i in range(5):
for j in range(5):
msg = "At i=%d, j=%d" % (i, j)
ci = [0]*i + [1]
cj = [0]*j + [1]
tgt = lag.lagadd(ci, cj)
quo, rem = lag.lagdiv(tgt, ci)
res = lag.lagadd(lag.lagmul(quo, ci), rem)
assert_almost_equal(trim(res), trim(tgt), err_msg=msg)
class TestEvaluation(TestCase):
# coefficients of 1 + 2*x + 3*x**2
c1d = np.array([9., -14., 6.])
c2d = np.einsum('i,j->ij', c1d, c1d)
c3d = np.einsum('i,j,k->ijk', c1d, c1d, c1d)
# some random values in [-1, 1)
x = np.random.random((3, 5))*2 - 1
y = polyval(x, [1., 2., 3.])
def test_lagval(self):
#check empty input
assert_equal(lag.lagval([], [1]).size, 0)
#check normal input
x = np.linspace(-1, 1)
y = [polyval(x, c) for c in Llist]
for i in range(7):
msg = "At i=%d" % i
tgt = y[i]
res = lag.lagval(x, [0]*i + [1])
assert_almost_equal(res, tgt, err_msg=msg)
#check that shape is preserved
for i in range(3):
dims = [2]*i
x = np.zeros(dims)
assert_equal(lag.lagval(x, [1]).shape, dims)
assert_equal(lag.lagval(x, [1, 0]).shape, dims)
assert_equal(lag.lagval(x, [1, 0, 0]).shape, dims)
def test_lagval2d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test exceptions
assert_raises(ValueError, lag.lagval2d, x1, x2[:2], self.c2d)
#test values
tgt = y1*y2
res = lag.lagval2d(x1, x2, self.c2d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = lag.lagval2d(z, z, self.c2d)
assert_(res.shape == (2, 3))
def test_lagval3d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test exceptions
assert_raises(ValueError, lag.lagval3d, x1, x2, x3[:2], self.c3d)
#test values
tgt = y1*y2*y3
res = lag.lagval3d(x1, x2, x3, self.c3d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = lag.lagval3d(z, z, z, self.c3d)
assert_(res.shape == (2, 3))
def test_laggrid2d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test values
tgt = np.einsum('i,j->ij', y1, y2)
res = lag.laggrid2d(x1, x2, self.c2d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = lag.laggrid2d(z, z, self.c2d)
assert_(res.shape == (2, 3)*2)
def test_laggrid3d(self):
x1, x2, x3 = self.x
y1, y2, y3 = self.y
#test values
tgt = np.einsum('i,j,k->ijk', y1, y2, y3)
res = lag.laggrid3d(x1, x2, x3, self.c3d)
assert_almost_equal(res, tgt)
#test shape
z = np.ones((2, 3))
res = lag.laggrid3d(z, z, z, self.c3d)
assert_(res.shape == (2, 3)*3)
class TestIntegral(TestCase):
def test_lagint(self):
# check exceptions
assert_raises(ValueError, lag.lagint, [0], .5)
assert_raises(ValueError, lag.lagint, [0], -1)
assert_raises(ValueError, lag.lagint, [0], 1, [0, 0])
# test integration of zero polynomial
for i in range(2, 5):
k = [0]*(i - 2) + [1]
res = lag.lagint([0], m=i, k=k)
assert_almost_equal(res, [1, -1])
# check single integration with integration constant
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
tgt = [i] + [0]*i + [1/scl]
lagpol = lag.poly2lag(pol)
lagint = lag.lagint(lagpol, m=1, k=[i])
res = lag.lag2poly(lagint)
assert_almost_equal(trim(res), trim(tgt))
# check single integration with integration constant and lbnd
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
lagpol = lag.poly2lag(pol)
lagint = lag.lagint(lagpol, m=1, k=[i], lbnd=-1)
assert_almost_equal(lag.lagval(-1, lagint), i)
# check single integration with integration constant and scaling
for i in range(5):
scl = i + 1
pol = [0]*i + [1]
tgt = [i] + [0]*i + [2/scl]
lagpol = lag.poly2lag(pol)
lagint = lag.lagint(lagpol, m=1, k=[i], scl=2)
res = lag.lag2poly(lagint)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with default k
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = lag.lagint(tgt, m=1)
res = lag.lagint(pol, m=j)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with defined k
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = lag.lagint(tgt, m=1, k=[k])
res = lag.lagint(pol, m=j, k=list(range(j)))
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with lbnd
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = lag.lagint(tgt, m=1, k=[k], lbnd=-1)
res = lag.lagint(pol, m=j, k=list(range(j)), lbnd=-1)
assert_almost_equal(trim(res), trim(tgt))
# check multiple integrations with scaling
for i in range(5):
for j in range(2, 5):
pol = [0]*i + [1]
tgt = pol[:]
for k in range(j):
tgt = lag.lagint(tgt, m=1, k=[k], scl=2)
res = lag.lagint(pol, m=j, k=list(range(j)), scl=2)
assert_almost_equal(trim(res), trim(tgt))
def test_lagint_axis(self):
# check that axis keyword works
c2d = np.random.random((3, 4))
tgt = np.vstack([lag.lagint(c) for c in c2d.T]).T
res = lag.lagint(c2d, axis=0)
assert_almost_equal(res, tgt)
tgt = np.vstack([lag.lagint(c) for c in c2d])
res = lag.lagint(c2d, axis=1)
assert_almost_equal(res, tgt)
tgt = np.vstack([lag.lagint(c, k=3) for c in c2d])
res = lag.lagint(c2d, k=3, axis=1)
assert_almost_equal(res, tgt)
class TestDerivative(TestCase):
def test_lagder(self):
# check exceptions
assert_raises(ValueError, lag.lagder, [0], .5)
assert_raises(ValueError, lag.lagder, [0], -1)
# check that zeroth derivative does nothing
for i in range(5):
tgt = [0]*i + [1]
res = lag.lagder(tgt, m=0)
assert_equal(trim(res), trim(tgt))
# check that derivation is the inverse of integration
for i in range(5):
for j in range(2, 5):
tgt = [0]*i + [1]
res = lag.lagder(lag.lagint(tgt, m=j), m=j)
assert_almost_equal(trim(res), trim(tgt))
# check derivation with scaling
for i in range(5):
for j in range(2, 5):
tgt = [0]*i + [1]
res = lag.lagder(lag.lagint(tgt, m=j, scl=2), m=j, scl=.5)
assert_almost_equal(trim(res), trim(tgt))
def test_lagder_axis(self):
# check that axis keyword works
c2d = np.random.random((3, 4))
tgt = np.vstack([lag.lagder(c) for c in c2d.T]).T
res = lag.lagder(c2d, axis=0)
assert_almost_equal(res, tgt)
tgt = np.vstack([lag.lagder(c) for c in c2d])
res = lag.lagder(c2d, axis=1)
assert_almost_equal(res, tgt)
class TestVander(TestCase):
# some random values in [-1, 1)
x = np.random.random((3, 5))*2 - 1
def test_lagvander(self):
# check for 1d x
x = np.arange(3)
v = lag.lagvander(x, 3)
assert_(v.shape == (3, 4))
for i in range(4):
coef = [0]*i + [1]
assert_almost_equal(v[..., i], lag.lagval(x, coef))
# check for 2d x
x = np.array([[1, 2], [3, 4], [5, 6]])
v = lag.lagvander(x, 3)
assert_(v.shape == (3, 2, 4))
for i in range(4):
coef = [0]*i + [1]
assert_almost_equal(v[..., i], lag.lagval(x, coef))
def test_lagvander2d(self):
# also tests lagval2d for non-square coefficient array
x1, x2, x3 = self.x
c = np.random.random((2, 3))
van = lag.lagvander2d(x1, x2, [1, 2])
tgt = lag.lagval2d(x1, x2, c)
res = np.dot(van, c.flat)
assert_almost_equal(res, tgt)
# check shape
van = lag.lagvander2d([x1], [x2], [1, 2])
assert_(van.shape == (1, 5, 6))
def test_lagvander3d(self):
# also tests lagval3d for non-square coefficient array
x1, x2, x3 = self.x
c = np.random.random((2, 3, 4))
van = lag.lagvander3d(x1, x2, x3, [1, 2, 3])
tgt = lag.lagval3d(x1, x2, x3, c)
res = np.dot(van, c.flat)
assert_almost_equal(res, tgt)
# check shape
van = lag.lagvander3d([x1], [x2], [x3], [1, 2, 3])
assert_(van.shape == (1, 5, 24))
class TestFitting(TestCase):
def test_lagfit(self):
def f(x):
return x*(x - 1)*(x - 2)
# Test exceptions
assert_raises(ValueError, lag.lagfit, [1], [1], -1)
assert_raises(TypeError, lag.lagfit, [[1]], [1], 0)
assert_raises(TypeError, lag.lagfit, [], [1], 0)
assert_raises(TypeError, lag.lagfit, [1], [[[1]]], 0)
assert_raises(TypeError, lag.lagfit, [1, 2], [1], 0)
assert_raises(TypeError, lag.lagfit, [1], [1, 2], 0)
assert_raises(TypeError, lag.lagfit, [1], [1], 0, w=[[1]])
assert_raises(TypeError, lag.lagfit, [1], [1], 0, w=[1, 1])
assert_raises(ValueError, lag.lagfit, [1], [1], [-1,])
assert_raises(ValueError, lag.lagfit, [1], [1], [2, -1, 6])
assert_raises(TypeError, lag.lagfit, [1], [1], [])
# Test fit
x = np.linspace(0, 2)
y = f(x)
#
coef3 = lag.lagfit(x, y, 3)
assert_equal(len(coef3), 4)
assert_almost_equal(lag.lagval(x, coef3), y)
coef3 = lag.lagfit(x, y, [0, 1, 2, 3])
assert_equal(len(coef3), 4)
assert_almost_equal(lag.lagval(x, coef3), y)
#
coef4 = lag.lagfit(x, y, 4)
assert_equal(len(coef4), 5)
assert_almost_equal(lag.lagval(x, coef4), y)
coef4 = lag.lagfit(x, y, [0, 1, 2, 3, 4])
assert_equal(len(coef4), 5)
assert_almost_equal(lag.lagval(x, coef4), y)
#
coef2d = lag.lagfit(x, np.array([y, y]).T, 3)
assert_almost_equal(coef2d, np.array([coef3, coef3]).T)
coef2d = lag.lagfit(x, np.array([y, y]).T, [0, 1, 2, 3])
assert_almost_equal(coef2d, np.array([coef3, coef3]).T)
# test weighting
w = np.zeros_like(x)
yw = y.copy()
w[1::2] = 1
y[0::2] = 0
wcoef3 = lag.lagfit(x, yw, 3, w=w)
assert_almost_equal(wcoef3, coef3)
wcoef3 = lag.lagfit(x, yw, [0, 1, 2, 3], w=w)
assert_almost_equal(wcoef3, coef3)
#
wcoef2d = lag.lagfit(x, np.array([yw, yw]).T, 3, w=w)
assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T)
wcoef2d = lag.lagfit(x, np.array([yw, yw]).T, [0, 1, 2, 3], w=w)
assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T)
# test scaling with complex x points whose squares sum to zero
x = [1, 1j, -1, -1j]
assert_almost_equal(lag.lagfit(x, x, 1), [1, -1])
assert_almost_equal(lag.lagfit(x, x, [0, 1]), [1, -1])
class TestCompanion(TestCase):
def test_raises(self):
assert_raises(ValueError, lag.lagcompanion, [])
assert_raises(ValueError, lag.lagcompanion, [1])
def test_dimensions(self):
for i in range(1, 5):
coef = [0]*i + [1]
assert_(lag.lagcompanion(coef).shape == (i, i))
def test_linear_root(self):
assert_(lag.lagcompanion([1, 2])[0, 0] == 1.5)
class TestGauss(TestCase):
def test_100(self):
x, w = lag.laggauss(100)
# test orthogonality. Note that the results need to be normalized,
# otherwise the huge values that can arise from fast growing
# functions like Laguerre can be very confusing.
v = lag.lagvander(x, 99)
vv = np.dot(v.T * w, v)
vd = 1/np.sqrt(vv.diagonal())
vv = vd[:, None] * vv * vd
assert_almost_equal(vv, np.eye(100))
# check that the integral of 1 is correct
tgt = 1.0
assert_almost_equal(w.sum(), tgt)
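def test_quadrature_example(self):
    # Hedged extra example, not part of the upstream suite: an n-point
    # Gauss-Laguerre rule is exact for polynomials of degree <= 2*n - 1,
    # so five points integrate x**2 against the exp(-x) weight exactly.
    # The true value of the integral over [0, inf) is Gamma(3) == 2.
    x, w = lag.laggauss(5)
    assert_almost_equal(np.dot(w, x**2), 2.0)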
class TestMisc(TestCase):
def test_lagfromroots(self):
res = lag.lagfromroots([])
assert_almost_equal(trim(res), [1])
for i in range(1, 5):
roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2])
pol = lag.lagfromroots(roots)
res = lag.lagval(roots, pol)
tgt = 0
assert_(len(pol) == i + 1)
assert_almost_equal(lag.lag2poly(pol)[-1], 1)
assert_almost_equal(res, tgt)
def test_lagroots(self):
assert_almost_equal(lag.lagroots([1]), [])
assert_almost_equal(lag.lagroots([0, 1]), [1])
for i in range(2, 5):
tgt = np.linspace(0, 3, i)
res = lag.lagroots(lag.lagfromroots(tgt))
assert_almost_equal(trim(res), trim(tgt))
def test_lagtrim(self):
coef = [2, -1, 1, 0]
# Test exceptions
assert_raises(ValueError, lag.lagtrim, coef, -1)
# Test results
assert_equal(lag.lagtrim(coef), coef[:-1])
assert_equal(lag.lagtrim(coef, 1), coef[:-3])
assert_equal(lag.lagtrim(coef, 2), [0])
def test_lagline(self):
assert_equal(lag.lagline(3, 4), [7, -4])
def test_lag2poly(self):
for i in range(7):
assert_almost_equal(lag.lag2poly([0]*i + [1]), Llist[i])
def test_poly2lag(self):
for i in range(7):
assert_almost_equal(lag.poly2lag(Llist[i]), [0]*i + [1])
def test_weight(self):
x = np.linspace(0, 10, 11)
tgt = np.exp(-x)
res = lag.lagweight(x)
assert_almost_equal(res, tgt)
if __name__ == "__main__":
run_module_suite()
| mit |
amisrs/angular-flask | angular_flask/lib/python2.7/site-packages/requests/structures.py | 67 | 3576 | # -*- coding: utf-8 -*-
"""
requests.structures
~~~~~~~~~~~~~~~~~~~
Data structures that power Requests.
"""
import os
import collections
from itertools import islice
class IteratorProxy(object):
"""docstring for IteratorProxy"""
def __init__(self, i):
self.i = i
# self.i = chain.from_iterable(i)
def __iter__(self):
return self.i
def __len__(self):
if hasattr(self.i, '__len__'):
return len(self.i)
if hasattr(self.i, 'len'):
return self.i.len
if hasattr(self.i, 'fileno'):
return os.fstat(self.i.fileno()).st_size
def read(self, n):
return "".join(islice(self.i, None, n))
class CaseInsensitiveDict(collections.MutableMapping):
"""
A case-insensitive ``dict``-like object.
Implements all methods and operations of
``collections.MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
testing is case insensitive:
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
cid['aCCEPT'] == 'application/json' # True
list(cid) == ['Accept'] # True
For example, ``headers['content-encoding']`` will return the
value of a ``'Content-Encoding'`` response header, regardless
of how the header name was originally stored.
If the constructor, ``.update``, or equality comparison
operations are given keys that have equal ``.lower()``s, the
behavior is undefined.
"""
def __init__(self, data=None, **kwargs):
self._store = dict()
if data is None:
data = {}
self.update(data, **kwargs)
def __setitem__(self, key, value):
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
self._store[key.lower()] = (key, value)
def __getitem__(self, key):
return self._store[key.lower()][1]
def __delitem__(self, key):
del self._store[key.lower()]
def __iter__(self):
return (casedkey for casedkey, mappedvalue in self._store.values())
def __len__(self):
return len(self._store)
def lower_items(self):
"""Like iteritems(), but with all lowercase keys."""
return (
(lowerkey, keyval[1])
for (lowerkey, keyval)
in self._store.items()
)
def __eq__(self, other):
if isinstance(other, collections.Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
# Compare insensitively
return dict(self.lower_items()) == dict(other.lower_items())
# Copy is required
def copy(self):
return CaseInsensitiveDict(self._store.values())
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, dict(self.items()))
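def _demo_case_insensitive_dict():
    # Hedged usage sketch, not part of the upstream requests source: lookups
    # ignore case, iteration preserves the casing of the last key that was set.
    cid = CaseInsensitiveDict()
    cid['Accept'] = 'application/json'
    assert cid['aCCEPT'] == 'application/json'
    assert list(cid) == ['Accept']
    cid['ACCEPT'] = 'text/html'  # re-setting rewrites the remembered casing
    assert list(cid) == ['ACCEPT']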
class LookupDict(dict):
"""Dictionary lookup object."""
def __init__(self, name=None):
self.name = name
super(LookupDict, self).__init__()
def __repr__(self):
return '<lookup \'%s\'>' % (self.name)
def __getitem__(self, key):
# We allow fall-through here, so values default to None
return self.__dict__.get(key, None)
def get(self, key, default=None):
return self.__dict__.get(key, default)
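def _demo_lookup_dict():
    # Hedged usage sketch, not part of the upstream requests source: values
    # are stored as instance attributes, and missing keys fall through to
    # None instead of raising KeyError.
    codes = LookupDict(name='status_codes')
    codes.ok = 200
    assert codes['ok'] == 200
    assert codes['no_such_key'] is None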
| mit |
soldag/home-assistant | homeassistant/components/spotify/config_flow.py | 7 | 3169 | """Config flow for Spotify."""
import logging
from typing import Any, Dict, Optional
from spotipy import Spotify
import voluptuous as vol
from homeassistant import config_entries
from homeassistant.components import persistent_notification
from homeassistant.helpers import config_entry_oauth2_flow
from .const import DOMAIN, SPOTIFY_SCOPES
class SpotifyFlowHandler(
config_entry_oauth2_flow.AbstractOAuth2FlowHandler, domain=DOMAIN
):
"""Config flow to handle Spotify OAuth2 authentication."""
DOMAIN = DOMAIN
VERSION = 1
CONNECTION_CLASS = config_entries.CONN_CLASS_CLOUD_POLL
def __init__(self) -> None:
"""Instantiate config flow."""
super().__init__()
self.entry: Optional[Dict[str, Any]] = None
@property
def logger(self) -> logging.Logger:
"""Return logger."""
return logging.getLogger(__name__)
@property
def extra_authorize_data(self) -> Dict[str, Any]:
"""Extra data that needs to be appended to the authorize url."""
return {"scope": ",".join(SPOTIFY_SCOPES)}
async def async_oauth_create_entry(self, data: Dict[str, Any]) -> Dict[str, Any]:
"""Create an entry for Spotify."""
spotify = Spotify(auth=data["token"]["access_token"])
try:
current_user = await self.hass.async_add_executor_job(spotify.current_user)
except Exception: # pylint: disable=broad-except
return self.async_abort(reason="connection_error")
name = data["id"] = current_user["id"]
if self.entry and self.entry["id"] != current_user["id"]:
return self.async_abort(reason="reauth_account_mismatch")
if current_user.get("display_name"):
name = current_user["display_name"]
data["name"] = name
await self.async_set_unique_id(current_user["id"])
return self.async_create_entry(title=name, data=data)
async def async_step_reauth(self, entry: Dict[str, Any]) -> Dict[str, Any]:
"""Perform reauth upon migration of old entries."""
if entry:
self.entry = entry
assert self.hass
persistent_notification.async_create(
self.hass,
f"Spotify integration for account {entry['id']} needs to be re-authenticated. Please go to the integrations page to re-configure it.",
"Spotify re-authentication",
"spotify_reauth",
)
return await self.async_step_reauth_confirm()
async def async_step_reauth_confirm(
self, user_input: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""Confirm reauth dialog."""
if user_input is None:
return self.async_show_form(
step_id="reauth_confirm",
description_placeholders={"account": self.entry["id"]},
data_schema=vol.Schema({}),
errors={},
)
assert self.hass
persistent_notification.async_dismiss(self.hass, "spotify_reauth")
return await self.async_step_pick_implementation(
user_input={"implementation": self.entry["auth_implementation"]}
)
| apache-2.0 |
minrk/findspark | findspark.py | 1 | 6927 | """Find spark home, and initialize by adding pyspark to sys.path.
If SPARK_HOME is defined, it will be used to put pyspark on sys.path.
Otherwise, common locations for spark will be searched.
"""
from glob import glob
import os
import sys
__version__ = "2.0.0"
def find():
"""Find a local spark installation.
Will first check the SPARK_HOME env variable, and otherwise
search common installation locations, e.g. from homebrew
"""
spark_home = os.environ.get("SPARK_HOME", None)
if not spark_home:
if "pyspark" in sys.modules:
return os.path.dirname(sys.modules["pyspark"].__file__)
for path in [
"/usr/local/opt/apache-spark/libexec", # macOS Homebrew
"/usr/lib/spark/", # AWS Amazon EMR
"/usr/local/spark/", # common linux path for spark
"/opt/spark/", # other common linux path for spark
# Any other common places to look?
]:
if os.path.exists(path):
spark_home = path
break
# last resort: try importing pyspark (pip-installed, already on sys.path)
try:
import pyspark
except ImportError:
pass
else:
spark_home = os.path.dirname(pyspark.__file__)
if not spark_home:
raise ValueError(
"Couldn't find Spark, make sure SPARK_HOME env is set"
" or Spark is in an expected location (e.g. from homebrew installation)."
)
return spark_home
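def _demo_find():
    # Hedged usage sketch, not part of findspark itself: assumes SPARK_HOME is
    # set or Spark lives in one of the common locations probed above;
    # otherwise find() raises ValueError.
    spark_home = find()
    print("Using Spark at", spark_home)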
def _edit_rc(spark_home, sys_path=None):
"""Persists changes to environment by changing shell config.
Adds lines to .bashrc to set environment variables
including the adding of dependencies to the system path. Will only
edit this file if they already exist. Currently only works for bash.
Parameters
----------
spark_home : str
Path to Spark installation.
sys_path: list(str)
Paths (if any) to be added to $PYTHONPATH.
Should include python subdirectory of Spark installation, py4j
"""
bashrc_location = os.path.expanduser("~/.bashrc")
if os.path.isfile(bashrc_location):
with open(bashrc_location, "a") as bashrc:
bashrc.write("\n# Added by findspark\n")
bashrc.write("export SPARK_HOME={}\n".format(spark_home))
if sys_path:
bashrc.write(
"export PYTHONPATH={}\n".format(
os.pathsep.join(sys_path + ["$PYTHONPATH"])
)
)
bashrc.write("\n")
def _edit_ipython_profile(spark_home, sys_path=None):
"""Adds a startup file to the current IPython profile to import pyspark.
The startup file sets the required environment variables and imports pyspark.
Parameters
----------
spark_home : str
Path to Spark installation.
sys_path : list(str)
Paths to be added to sys.path.
Should include python subdirectory of Spark installation, py4j
"""
from IPython import get_ipython
ip = get_ipython()
if ip:
profile_dir = ip.profile_dir.location
else:
from IPython.utils.path import locate_profile
profile_dir = locate_profile()
startup_file_loc = os.path.join(profile_dir, "startup", "findspark.py")
with open(startup_file_loc, "w") as startup_file:
# Lines of code to be run when IPython starts
startup_file.write("import sys, os\n")
startup_file.write("os.environ['SPARK_HOME'] = {}\n".format(repr(spark_home)))
if sys_path:
startup_file.write("sys.path[:0] = {}\n".format(repr(sys_path)))
startup_file.write("import pyspark\n")
def init(spark_home=None, python_path=None, edit_rc=False, edit_profile=False):
"""Make pyspark importable.
Sets environment variables and adds dependencies to sys.path.
If no Spark location is provided, will try to find an installation.
Parameters
----------
spark_home : str, optional, default = None
Path to Spark installation, will try to find automatically
if not provided.
python_path : str, optional, default = None
Path to Python for Spark workers (PYSPARK_PYTHON),
will use the currently running Python if not provided.
edit_rc : bool, optional, default = False
Whether to attempt to persist changes by appending to shell
config.
edit_profile : bool, optional, default = False
Whether to create an IPython startup file to automatically
configure and import pyspark.
"""
if not spark_home:
spark_home = find()
if not python_path:
python_path = os.environ.get("PYSPARK_PYTHON", sys.executable)
# ensure SPARK_HOME is defined
os.environ["SPARK_HOME"] = spark_home
# ensure PYSPARK_PYTHON is defined
os.environ["PYSPARK_PYTHON"] = python_path
# add pyspark to sys.path
if "pyspark" not in sys.modules:
spark_python = os.path.join(spark_home, "python")
try:
py4j = glob(os.path.join(spark_python, "lib", "py4j-*.zip"))[0]
except IndexError:
raise Exception(
"Unable to find py4j in {}, your SPARK_HOME may not be configured correctly".format(
spark_python
)
)
sys.path[:0] = sys_path = [spark_python, py4j]
else:
# already imported, no need to patch sys.path
sys_path = None
if edit_rc:
_edit_rc(spark_home, sys_path)
if edit_profile:
_edit_ipython_profile(spark_home, sys_path)
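def _demo_init_then_import():
    # Hedged usage sketch, not part of findspark: the typical notebook flow is
    # to call init() first, after which the pyspark import resolves.
    init()
    import pyspark  # noqa: F401  # importable now that sys.path is patched
    print("pyspark", pyspark.__version__, "from", os.environ["SPARK_HOME"])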
def _add_to_submit_args(to_add):
"""Add string s to the PYSPARK_SUBMIT_ARGS env var"""
existing_args = os.environ.get("PYSPARK_SUBMIT_ARGS", "")
if not existing_args:
# if empty, start with default pyspark-shell
# ref: pyspark.java_gateway.launch_gateway
existing_args = "pyspark-shell"
# add new args to front to avoid insert after executable
submit_args = "{} {}".format(to_add, existing_args)
os.environ["PYSPARK_SUBMIT_ARGS"] = submit_args
return submit_args
def add_packages(packages):
"""Add external packages to the pyspark interpreter.
Set the PYSPARK_SUBMIT_ARGS properly.
Parameters
----------
packages: list of package names in string format
"""
# if the parameter is a string, convert to a single element list
if isinstance(packages, str):
packages = [packages]
_add_to_submit_args("--packages " + ",".join(packages))
def add_jars(jars):
"""Add external jars to the pyspark interpreter.
Set the PYSPARK_SUBMIT_ARGS properly.
Parameters
----------
jars: list of path to jars in string format
"""
# if the parameter is a string, convert to a single element list
if isinstance(jars, str):
jars = [jars]
_add_to_submit_args("--jars " + ",".join(jars))
| bsd-3-clause |
pedrobaeza/OpenUpgrade | addons/stock/res_config.py | 115 | 8115 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Business Applications
# Copyright (C) 2004-2012 OpenERP S.A. (<http://openerp.com>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
from openerp.tools.translate import _
class res_company(osv.osv):
_inherit = "res.company"
_columns = {
'propagation_minimum_delta': fields.integer('Minimum Delta for Propagation of a Date Change on moves linked together'),
'internal_transit_location_id': fields.many2one('stock.location', 'Internal Transit Location', help="Technical field used for resupply routes between warehouses that belong to this company", on_delete="restrict"),
}
def create_transit_location(self, cr, uid, company_id, company_name, context=None):
'''Create a transit location owned by the given company. This is needed
for resupply routes between warehouses belonging to the same company, because
we don't want to create accounting entries at that time.
'''
data_obj = self.pool.get('ir.model.data')
try:
parent_loc = data_obj.get_object_reference(cr, uid, 'stock', 'stock_location_locations')[1]
except:
parent_loc = False
location_vals = {
'name': _('%s: Transit Location') % company_name,
'usage': 'transit',
'company_id': company_id,
'location_id': parent_loc,
}
location_id = self.pool.get('stock.location').create(cr, uid, location_vals, context=context)
self.write(cr, uid, [company_id], {'internal_transit_location_id': location_id}, context=context)
def create(self, cr, uid, vals, context=None):
company_id = super(res_company, self).create(cr, uid, vals, context=context)
self.create_transit_location(cr, uid, company_id, vals['name'], context=context)
return company_id
_defaults = {
'propagation_minimum_delta': 1,
}
class stock_config_settings(osv.osv_memory):
_name = 'stock.config.settings'
_inherit = 'res.config.settings'
_columns = {
'company_id': fields.many2one('res.company', 'Company', required=True),
'module_procurement_jit': fields.boolean("Generate procurement in real time",
help="""This allows Just In Time computation of procurement orders.
All procurement orders will be processed immediately, which could in some
cases entail a small performance impact.
This installs the module procurement_jit."""),
'module_claim_from_delivery': fields.boolean("Allow claim on deliveries",
help='Adds a Claim link to the delivery order.\n'
'-This installs the module claim_from_delivery.'),
'module_product_expiry': fields.boolean("Expiry date on serial numbers",
help="""Track different dates on products and serial numbers.
The following dates can be tracked:
- end of life
- best before date
- removal date
- alert date.
This installs the module product_expiry."""),
'group_uom': fields.boolean("Manage different units of measure for products",
implied_group='product.group_uom',
help="""Allows you to select and maintain different units of measure for products."""),
'group_uos': fields.boolean("Invoice products in a different unit of measure than the sales order",
implied_group='product.group_uos',
help='Allows you to sell units of a product, but invoice based on a different unit of measure.\n'
'For instance, you can sell pieces of meat that you invoice based on their weight.'),
'group_stock_packaging': fields.boolean("Allow to define several packaging methods on products",
implied_group='product.group_stock_packaging',
help="""Allows you to create and manage your packaging dimensions and types you want to be maintained in your system."""),
'group_stock_production_lot': fields.boolean("Track lots or serial numbers",
implied_group='stock.group_production_lot',
help="""This allows you to assign a lot (or serial number) to the pickings and moves. This can make it possible to know which production lot was sent to a certain client, ..."""),
'group_stock_tracking_lot': fields.boolean("Use packages: pallets, boxes, ...",
implied_group='stock.group_tracking_lot',
help="""This allows to manipulate packages. You can put something in, take something from a package, but also move entire packages and put them even in another package. """),
'group_stock_tracking_owner': fields.boolean("Manage owner on stock",
implied_group='stock.group_tracking_owner',
help="""This way you can receive products attributed to a certain owner. """),
'group_stock_multiple_locations': fields.boolean("Manage multiple locations and warehouses",
implied_group='stock.group_locations',
help="""This will show you the locations and allows you to define multiple picking types and warehouses."""),
'group_stock_adv_location': fields.boolean("Manage advanced routes for your warehouse",
implied_group='stock.group_adv_location',
help="""This option supplements the warehouse application by effectively implementing Push and Pull inventory flows through Routes."""),
'decimal_precision': fields.integer('Decimal precision on weight', help="As an example, a decimal precision of 2 will allow weights like: 9.99 kg, whereas a decimal precision of 4 will allow weights like: 0.0231 kg."),
'propagation_minimum_delta': fields.related('company_id', 'propagation_minimum_delta', type='integer', string="Minimum days to trigger a propagation of date change in pushed/pull flows."),
'module_stock_dropshipping': fields.boolean("Manage dropshipping",
help='\nCreates the dropship route and adds more complex tests'
'-This installs the module stock_dropshipping.'),
'module_stock_picking_wave': fields.boolean('Manage picking wave', help='Install the picking wave module, which helps you group your pickings and process them in batches'),
}
def onchange_adv_location(self, cr, uid, ids, group_stock_adv_location, context=None):
if group_stock_adv_location:
return {'value': {'group_stock_multiple_locations': True}}
return {}
def _default_company(self, cr, uid, context=None):
user = self.pool.get('res.users').browse(cr, uid, uid, context=context)
return user.company_id.id
def get_default_dp(self, cr, uid, fields, context=None):
dp = self.pool.get('ir.model.data').get_object(cr, uid, 'product', 'decimal_stock_weight')
return {'decimal_precision': dp.digits}
def set_default_dp(self, cr, uid, ids, context=None):
config = self.browse(cr, uid, ids[0], context)
dp = self.pool.get('ir.model.data').get_object(cr, uid, 'product', 'decimal_stock_weight')
dp.write({'digits': config.decimal_precision})
_defaults = {
'company_id': _default_company,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
mancoast/CPythonPyc_test | cpython/278_test_sysconfig.py | 48 | 12637 | """Tests for sysconfig."""
import unittest
import sys
import os
import shutil
import subprocess
from copy import copy, deepcopy
from test.test_support import run_unittest, TESTFN, unlink, get_attribute
import sysconfig
from sysconfig import (get_paths, get_platform, get_config_vars,
get_path, get_path_names, _INSTALL_SCHEMES,
_get_default_scheme, _expand_vars,
get_scheme_names, get_config_var)
import _osx_support
class TestSysConfig(unittest.TestCase):
def setUp(self):
"""Make a copy of sys.path"""
super(TestSysConfig, self).setUp()
self.sys_path = sys.path[:]
self.makefile = None
# patching os.uname
if hasattr(os, 'uname'):
self.uname = os.uname
self._uname = os.uname()
else:
self.uname = None
self._uname = None
os.uname = self._get_uname
# saving the environment
self.name = os.name
self.platform = sys.platform
self.version = sys.version
self.sep = os.sep
self.join = os.path.join
self.isabs = os.path.isabs
self.splitdrive = os.path.splitdrive
self._config_vars = copy(sysconfig._CONFIG_VARS)
self.old_environ = deepcopy(os.environ)
def tearDown(self):
"""Restore sys.path"""
sys.path[:] = self.sys_path
if self.makefile is not None:
os.unlink(self.makefile)
self._cleanup_testfn()
if self.uname is not None:
os.uname = self.uname
else:
del os.uname
os.name = self.name
sys.platform = self.platform
sys.version = self.version
os.sep = self.sep
os.path.join = self.join
os.path.isabs = self.isabs
os.path.splitdrive = self.splitdrive
sysconfig._CONFIG_VARS = copy(self._config_vars)
for key, value in self.old_environ.items():
if os.environ.get(key) != value:
os.environ[key] = value
for key in os.environ.keys():
if key not in self.old_environ:
del os.environ[key]
super(TestSysConfig, self).tearDown()
def _set_uname(self, uname):
self._uname = uname
def _get_uname(self):
return self._uname
def _cleanup_testfn(self):
path = TESTFN
if os.path.isfile(path):
os.remove(path)
elif os.path.isdir(path):
shutil.rmtree(path)
def test_get_path_names(self):
self.assertEqual(get_path_names(), sysconfig._SCHEME_KEYS)
def test_get_paths(self):
scheme = get_paths()
default_scheme = _get_default_scheme()
wanted = _expand_vars(default_scheme, None)
wanted = wanted.items()
wanted.sort()
scheme = scheme.items()
scheme.sort()
self.assertEqual(scheme, wanted)
def test_get_path(self):
# xxx make real tests here
for scheme in _INSTALL_SCHEMES:
for name in _INSTALL_SCHEMES[scheme]:
res = get_path(name, scheme)
def test_get_config_vars(self):
cvars = get_config_vars()
self.assertIsInstance(cvars, dict)
self.assertTrue(cvars)
def test_get_platform(self):
# windows XP, 32bits
os.name = 'nt'
sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) '
'[MSC v.1310 32 bit (Intel)]')
sys.platform = 'win32'
self.assertEqual(get_platform(), 'win32')
# windows XP, amd64
os.name = 'nt'
sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) '
'[MSC v.1310 32 bit (Amd64)]')
sys.platform = 'win32'
self.assertEqual(get_platform(), 'win-amd64')
# windows XP, itanium
os.name = 'nt'
sys.version = ('2.4.4 (#71, Oct 18 2006, 08:34:43) '
'[MSC v.1310 32 bit (Itanium)]')
sys.platform = 'win32'
self.assertEqual(get_platform(), 'win-ia64')
# macbook
os.name = 'posix'
sys.version = ('2.5 (r25:51918, Sep 19 2006, 08:49:13) '
'\n[GCC 4.0.1 (Apple Computer, Inc. build 5341)]')
sys.platform = 'darwin'
self._set_uname(('Darwin', 'macziade', '8.11.1',
('Darwin Kernel Version 8.11.1: '
'Wed Oct 10 18:23:28 PDT 2007; '
'root:xnu-792.25.20~1/RELEASE_I386'), 'PowerPC'))
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3'
get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g '
'-fwrapv -O3 -Wall -Wstrict-prototypes')
maxint = sys.maxint
try:
sys.maxint = 2147483647
self.assertEqual(get_platform(), 'macosx-10.3-ppc')
sys.maxint = 9223372036854775807
self.assertEqual(get_platform(), 'macosx-10.3-ppc64')
finally:
sys.maxint = maxint
self._set_uname(('Darwin', 'macziade', '8.11.1',
('Darwin Kernel Version 8.11.1: '
'Wed Oct 10 18:23:28 PDT 2007; '
'root:xnu-792.25.20~1/RELEASE_I386'), 'i386'))
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3'
get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g '
'-fwrapv -O3 -Wall -Wstrict-prototypes')
maxint = sys.maxint
try:
sys.maxint = 2147483647
self.assertEqual(get_platform(), 'macosx-10.3-i386')
sys.maxint = 9223372036854775807
self.assertEqual(get_platform(), 'macosx-10.3-x86_64')
finally:
sys.maxint = maxint
# macbook with fat binaries (fat, universal or fat64)
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.4'
get_config_vars()['CFLAGS'] = ('-arch ppc -arch i386 -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3')
self.assertEqual(get_platform(), 'macosx-10.4-fat')
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch i386 -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3')
self.assertEqual(get_platform(), 'macosx-10.4-intel')
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch ppc -arch i386 -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3')
self.assertEqual(get_platform(), 'macosx-10.4-fat3')
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['CFLAGS'] = ('-arch ppc64 -arch x86_64 -arch ppc -arch i386 -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3')
self.assertEqual(get_platform(), 'macosx-10.4-universal')
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['CFLAGS'] = ('-arch x86_64 -arch ppc64 -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3')
self.assertEqual(get_platform(), 'macosx-10.4-fat64')
for arch in ('ppc', 'i386', 'x86_64', 'ppc64'):
_osx_support._remove_original_values(get_config_vars())
get_config_vars()['CFLAGS'] = ('-arch %s -isysroot '
'/Developer/SDKs/MacOSX10.4u.sdk '
'-fno-strict-aliasing -fno-common '
'-dynamic -DNDEBUG -g -O3'%(arch,))
self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,))
# linux debian sarge
os.name = 'posix'
sys.version = ('2.3.5 (#1, Jul 4 2007, 17:28:59) '
'\n[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)]')
sys.platform = 'linux2'
self._set_uname(('Linux', 'aglae', '2.6.21.1dedibox-r7',
'#1 Mon Apr 30 17:25:38 CEST 2007', 'i686'))
self.assertEqual(get_platform(), 'linux-i686')
# XXX more platforms to tests here
def test_get_config_h_filename(self):
config_h = sysconfig.get_config_h_filename()
self.assertTrue(os.path.isfile(config_h), config_h)
def test_get_scheme_names(self):
wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user',
'posix_home', 'posix_prefix', 'posix_user')
self.assertEqual(get_scheme_names(), wanted)
def test_symlink(self):
# Issue 7880
symlink = get_attribute(os, "symlink")
def get(python):
cmd = [python, '-c',
'import sysconfig; print sysconfig.get_platform()']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
return p.communicate()
real = os.path.realpath(sys.executable)
link = os.path.abspath(TESTFN)
symlink(real, link)
try:
self.assertEqual(get(real), get(link))
finally:
unlink(link)
def test_user_similar(self):
# Issue #8759: make sure the posix scheme for the users
# is similar to the global posix_prefix one
base = get_config_var('base')
user = get_config_var('userbase')
# the global scheme mirrors the distinction between prefix and
# exec-prefix but not the user scheme, so we have to adapt the paths
# before comparing (issue #9100)
adapt = sys.prefix != sys.exec_prefix
for name in ('stdlib', 'platstdlib', 'purelib', 'platlib'):
global_path = get_path(name, 'posix_prefix')
if adapt:
global_path = global_path.replace(sys.exec_prefix, sys.prefix)
base = base.replace(sys.exec_prefix, sys.prefix)
user_path = get_path(name, 'posix_user')
self.assertEqual(user_path, global_path.replace(base, user, 1))
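def test_path_expansion_example(self):
    # Hedged extra check, not in the upstream suite: get_path() expands a
    # scheme template with the config vars, so the posix_prefix 'stdlib'
    # path starts with the installation base directory.
    base = get_config_var('base')
    self.assertTrue(get_path('stdlib', 'posix_prefix').startswith(base))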
@unittest.skipUnless(sys.platform == "darwin", "test only relevant on MacOSX")
def test_platform_in_subprocess(self):
my_platform = sysconfig.get_platform()
# Test without MACOSX_DEPLOYMENT_TARGET in the environment
env = os.environ.copy()
if 'MACOSX_DEPLOYMENT_TARGET' in env:
del env['MACOSX_DEPLOYMENT_TARGET']
with open('/dev/null', 'w') as devnull_fp:
p = subprocess.Popen([
sys.executable, '-c',
'import sysconfig; print(sysconfig.get_platform())',
],
stdout=subprocess.PIPE,
stderr=devnull_fp,
env=env)
test_platform = p.communicate()[0].strip()
test_platform = test_platform.decode('utf-8')
status = p.wait()
self.assertEqual(status, 0)
self.assertEqual(my_platform, test_platform)
# Test with MACOSX_DEPLOYMENT_TARGET in the environment, and
# using a value that is unlikely to be the default one.
env = os.environ.copy()
env['MACOSX_DEPLOYMENT_TARGET'] = '10.1'
p = subprocess.Popen([
sys.executable, '-c',
'import sysconfig; print(sysconfig.get_platform())',
],
stdout=subprocess.PIPE,
stderr=open('/dev/null'),
env=env)
test_platform = p.communicate()[0].strip()
test_platform = test_platform.decode('utf-8')
status = p.wait()
self.assertEqual(status, 0)
self.assertEqual(my_platform, test_platform)
def test_main():
run_unittest(TestSysConfig)
if __name__ == "__main__":
test_main()
| gpl-3.0 |
Belxjander/Kirito | LibBitcoin/ProtoBuffers/python/google/protobuf/internal/decoder.py | 29 | 30799 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#PY25 compatible for GAE.
#
# Copyright 2009 Google Inc. All Rights Reserved.
"""Code for decoding protocol buffer primitives.
This code is very similar to encoder.py -- read the docs for that module first.
A "decoder" is a function with the signature:
Decode(buffer, pos, end, message, field_dict)
The arguments are:
buffer: The string containing the encoded message.
pos: The current position in the string.
end: The position in the string where the current message ends. May be
less than len(buffer) if we're reading a sub-message.
message: The message object into which we're parsing.
field_dict: message._fields (avoids a hashtable lookup).
The decoder reads the field and stores it into field_dict, returning the new
buffer position. A decoder for a repeated field may proactively decode all of
the elements of that field, if they appear consecutively.
Note that decoders may throw any of the following:
IndexError: Indicates a truncated message.
struct.error: Unpacking of a fixed-width field failed.
message.DecodeError: Other errors.
Decoders are expected to raise an exception if they are called with pos > end.
This allows callers to be lax about bounds checking: it's fine to read past
"end" as long as you are sure that someone else will notice and throw an
exception later on.
Something up the call stack is expected to catch IndexError and struct.error
and convert them to message.DecodeError.
Decoders are constructed using decoder constructors with the signature:
MakeDecoder(field_number, is_repeated, is_packed, key, new_default)
The arguments are:
field_number: The field number of the field we want to decode.
is_repeated: Is the field a repeated field? (bool)
is_packed: Is the field a packed field? (bool)
key: The key to use when looking up the field within field_dict.
(This is actually the FieldDescriptor but nothing in this
file should depend on that.)
new_default: A function which takes a message object as a parameter and
returns a new instance of the default value for this field.
(This is called for repeated fields and sub-messages, when an
instance does not already exist.)
As with encoders, we define a decoder constructor for every type of field.
Then, for every field of every message class we construct an actual decoder.
That decoder goes into a dict indexed by tag, so when we decode a message
we repeatedly read a tag, look up the corresponding decoder, and invoke it.
"""
__author__ = 'kenton@google.com (Kenton Varda)'
import struct
import sys ##PY25
_PY2 = sys.version_info[0] < 3 ##PY25
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format
from google.protobuf import message
# This will overflow and thus become IEEE-754 "infinity". We would use
# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
_POS_INF = 1e10000
_NEG_INF = -_POS_INF
_NAN = _POS_INF * 0
# This is not for optimization, but rather to avoid conflicts with local
# variables named "message".
_DecodeError = message.DecodeError
def _VarintDecoder(mask, result_type):
"""Return an encoder for a basic varint value (does not include tag).
Decoded values will be bitwise-anded with the given mask before being
returned, e.g. to limit them to 32 bits. The returned decoder does not
take the usual "end" parameter -- the caller is expected to do bounds checking
after the fact (often the caller can defer such checking until later). The
decoder returns a (value, new_pos) pair.
"""
local_ord = ord
py2 = _PY2 ##PY25
##!PY25 py2 = str is bytes
def DecodeVarint(buffer, pos):
result = 0
shift = 0
while 1:
b = local_ord(buffer[pos]) if py2 else buffer[pos]
result |= ((b & 0x7f) << shift)
pos += 1
if not (b & 0x80):
result &= mask
result = result_type(result)
return (result, pos)
shift += 7
if shift >= 64:
raise _DecodeError('Too many bytes when decoding varint.')
return DecodeVarint
def _SignedVarintDecoder(mask, result_type):
"""Like _VarintDecoder() but decodes signed values."""
local_ord = ord
py2 = _PY2 ##PY25
##!PY25 py2 = str is bytes
def DecodeVarint(buffer, pos):
result = 0
shift = 0
while 1:
b = local_ord(buffer[pos]) if py2 else buffer[pos]
result |= ((b & 0x7f) << shift)
pos += 1
if not (b & 0x80):
if result > 0x7fffffffffffffff:
result -= (1 << 64)
result |= ~mask
else:
result &= mask
result = result_type(result)
return (result, pos)
shift += 7
if shift >= 64:
raise _DecodeError('Too many bytes when decoding varint.')
return DecodeVarint
# We force 32-bit values to int and 64-bit values to long to make
# alternate implementations where the distinction is more significant
# (e.g. the C++ implementation) simpler.
_DecodeVarint = _VarintDecoder((1 << 64) - 1, long)
_DecodeSignedVarint = _SignedVarintDecoder((1 << 64) - 1, long)
# Use these versions for values which must be limited to 32 bits.
_DecodeVarint32 = _VarintDecoder((1 << 32) - 1, int)
_DecodeSignedVarint32 = _SignedVarintDecoder((1 << 32) - 1, int)
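def _DemoDecodeVarint():
  # Hedged sketch, not part of the upstream protobuf source: 300 is encoded
  # as the two varint bytes 0xAC 0x02 (7 payload bits per byte, high bit set
  # on every byte but the last); the decoder returns a (value, new_pos) pair.
  (value, pos) = _DecodeVarint('\xac\x02', 0)
  assert value == 300 and pos == 2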
def ReadTag(buffer, pos):
"""Read a tag from the buffer, and return a (tag_bytes, new_pos) tuple.
We return the raw bytes of the tag rather than decoding them. The raw
bytes can then be used to look up the proper decoder. This effectively allows
us to trade some work that would be done in pure-python (decoding a varint)
for work that is done in C (searching for a byte string in a hash table).
In a low-level language it would be much cheaper to decode the varint and
use that, but not in Python.
"""
py2 = _PY2 ##PY25
##!PY25 py2 = str is bytes
start = pos
while (ord(buffer[pos]) if py2 else buffer[pos]) & 0x80:
pos += 1
pos += 1
return (buffer[start:pos], pos)
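def _DemoReadTag():
  # Hedged sketch, not part of the upstream source: the classic wire-format
  # example "field 1, varint, value 150" is the bytes 0x08 0x96 0x01. The tag
  # is the single byte 0x08 == (1 << 3) | WIRETYPE_VARINT, returned raw.
  (tag_bytes, pos) = ReadTag('\x08\x96\x01', 0)
  assert tag_bytes == '\x08' and pos == 1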
# --------------------------------------------------------------------
def _SimpleDecoder(wire_type, decode_value):
"""Return a constructor for a decoder for fields of a particular type.
Args:
wire_type: The field's wire type.
decode_value: A function which decodes an individual value, e.g.
_DecodeVarint()
"""
def SpecificDecoder(field_number, is_repeated, is_packed, key, new_default):
if is_packed:
local_DecodeVarint = _DecodeVarint
def DecodePackedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
(endpoint, pos) = local_DecodeVarint(buffer, pos)
endpoint += pos
if endpoint > end:
raise _DecodeError('Truncated message.')
while pos < endpoint:
(element, pos) = decode_value(buffer, pos)
value.append(element)
if pos > endpoint:
del value[-1] # Discard corrupt value.
raise _DecodeError('Packed element was truncated.')
return pos
return DecodePackedField
elif is_repeated:
tag_bytes = encoder.TagBytes(field_number, wire_type)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
(element, new_pos) = decode_value(buffer, pos)
value.append(element)
# Predict that the next tag is another copy of the same repeated
# field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos >= end:
# Prediction failed. Return.
if new_pos > end:
raise _DecodeError('Truncated message.')
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
(field_dict[key], pos) = decode_value(buffer, pos)
if pos > end:
del field_dict[key] # Discard corrupt value.
raise _DecodeError('Truncated message.')
return pos
return DecodeField
return SpecificDecoder
def _ModifiedDecoder(wire_type, decode_value, modify_value):
"""Like SimpleDecoder but additionally invokes modify_value on every value
before storing it. Usually modify_value is ZigZagDecode.
"""
# Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but
# not enough to make a significant difference.
def InnerDecode(buffer, pos):
(result, new_pos) = decode_value(buffer, pos)
return (modify_value(result), new_pos)
return _SimpleDecoder(wire_type, InnerDecode)
def _StructPackDecoder(wire_type, format):
"""Return a constructor for a decoder for a fixed-width field.
Args:
wire_type: The field's wire type.
format: The format string to pass to struct.unpack().
"""
value_size = struct.calcsize(format)
local_unpack = struct.unpack
# Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but
# not enough to make a significant difference.
# Note that we expect someone up-stack to catch struct.error and convert
# it to _DecodeError -- this way we don't have to set up exception-
# handling blocks every time we parse one value.
def InnerDecode(buffer, pos):
new_pos = pos + value_size
result = local_unpack(format, buffer[pos:new_pos])[0]
return (result, new_pos)
return _SimpleDecoder(wire_type, InnerDecode)
def _FloatDecoder():
"""Returns a decoder for a float field.
This code works around a bug in struct.unpack for non-finite 32-bit
floating-point values.
"""
local_unpack = struct.unpack
b = (lambda x:x) if _PY2 else lambda x:x.encode('latin1') ##PY25
def InnerDecode(buffer, pos):
# We expect a 32-bit value in little-endian byte order. Bit 1 is the sign
# bit, bits 2-9 represent the exponent, and bits 10-32 are the significand.
new_pos = pos + 4
float_bytes = buffer[pos:new_pos]
# If this value has all its exponent bits set, then it's non-finite.
# In Python 2.4, struct.unpack will convert it to a finite 64-bit value.
# To avoid that, we parse it specially.
if ((float_bytes[3:4] in b('\x7F\xFF')) ##PY25
##!PY25 if ((float_bytes[3:4] in b'\x7F\xFF')
and (float_bytes[2:3] >= b('\x80'))): ##PY25
##!PY25 and (float_bytes[2:3] >= b'\x80')):
# If at least one significand bit is set...
if float_bytes[0:3] != b('\x00\x00\x80'): ##PY25
##!PY25 if float_bytes[0:3] != b'\x00\x00\x80':
return (_NAN, new_pos)
# If sign bit is set...
if float_bytes[3:4] == b('\xFF'): ##PY25
##!PY25 if float_bytes[3:4] == b'\xFF':
return (_NEG_INF, new_pos)
return (_POS_INF, new_pos)
# Note that we expect someone up-stack to catch struct.error and convert
# it to _DecodeError -- this way we don't have to set up exception-
# handling blocks every time we parse one value.
result = local_unpack('<f', float_bytes)[0]
return (result, new_pos)
return _SimpleDecoder(wire_format.WIRETYPE_FIXED32, InnerDecode)
def _DoubleDecoder():
"""Returns a decoder for a double field.
This code works around a bug in struct.unpack for not-a-number.
"""
local_unpack = struct.unpack
b = (lambda x:x) if _PY2 else lambda x:x.encode('latin1') ##PY25
def InnerDecode(buffer, pos):
# We expect a 64-bit value in little-endian byte order. Bit 1 is the sign
# bit, bits 2-12 represent the exponent, and bits 13-64 are the significand.
new_pos = pos + 8
double_bytes = buffer[pos:new_pos]
# If this value has all its exponent bits set and at least one significand
# bit set, it's not a number. In Python 2.4, struct.unpack will treat it
# as inf or -inf. To avoid that, we treat it specially.
##!PY25 if ((double_bytes[7:8] in b'\x7F\xFF')
##!PY25 and (double_bytes[6:7] >= b'\xF0')
##!PY25 and (double_bytes[0:7] != b'\x00\x00\x00\x00\x00\x00\xF0')):
if ((double_bytes[7:8] in b('\x7F\xFF')) ##PY25
and (double_bytes[6:7] >= b('\xF0')) ##PY25
and (double_bytes[0:7] != b('\x00\x00\x00\x00\x00\x00\xF0'))): ##PY25
return (_NAN, new_pos)
# Note that we expect someone up-stack to catch struct.error and convert
# it to _DecodeError -- this way we don't have to set up exception-
# handling blocks every time we parse one value.
result = local_unpack('<d', double_bytes)[0]
return (result, new_pos)
return _SimpleDecoder(wire_format.WIRETYPE_FIXED64, InnerDecode)
def EnumDecoder(field_number, is_repeated, is_packed, key, new_default):
enum_type = key.enum_type
if is_packed:
local_DecodeVarint = _DecodeVarint
def DecodePackedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
(endpoint, pos) = local_DecodeVarint(buffer, pos)
endpoint += pos
if endpoint > end:
raise _DecodeError('Truncated message.')
while pos < endpoint:
value_start_pos = pos
(element, pos) = _DecodeSignedVarint32(buffer, pos)
if element in enum_type.values_by_number:
value.append(element)
else:
if not message._unknown_fields:
message._unknown_fields = []
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_VARINT)
message._unknown_fields.append(
(tag_bytes, buffer[value_start_pos:pos]))
if pos > endpoint:
if element in enum_type.values_by_number:
del value[-1] # Discard corrupt value.
else:
del message._unknown_fields[-1]
raise _DecodeError('Packed element was truncated.')
return pos
return DecodePackedField
elif is_repeated:
tag_bytes = encoder.TagBytes(field_number, wire_format.WIRETYPE_VARINT)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
(element, new_pos) = _DecodeSignedVarint32(buffer, pos)
if element in enum_type.values_by_number:
value.append(element)
else:
if not message._unknown_fields:
message._unknown_fields = []
message._unknown_fields.append(
(tag_bytes, buffer[pos:new_pos]))
# Predict that the next tag is another copy of the same repeated
# field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos >= end:
# Prediction failed. Return.
if new_pos > end:
raise _DecodeError('Truncated message.')
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
value_start_pos = pos
(enum_value, pos) = _DecodeSignedVarint32(buffer, pos)
if pos > end:
raise _DecodeError('Truncated message.')
if enum_value in enum_type.values_by_number:
field_dict[key] = enum_value
else:
if not message._unknown_fields:
message._unknown_fields = []
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_VARINT)
message._unknown_fields.append(
(tag_bytes, buffer[value_start_pos:pos]))
return pos
return DecodeField
# --------------------------------------------------------------------
Int32Decoder = _SimpleDecoder(
wire_format.WIRETYPE_VARINT, _DecodeSignedVarint32)
Int64Decoder = _SimpleDecoder(
wire_format.WIRETYPE_VARINT, _DecodeSignedVarint)
UInt32Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint32)
UInt64Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint)
SInt32Decoder = _ModifiedDecoder(
wire_format.WIRETYPE_VARINT, _DecodeVarint32, wire_format.ZigZagDecode)
SInt64Decoder = _ModifiedDecoder(
wire_format.WIRETYPE_VARINT, _DecodeVarint, wire_format.ZigZagDecode)
# Note that Python conveniently guarantees that when using the '<' prefix on
# formats, they will also have the same size across all platforms (as opposed
# to without the prefix, where their sizes depend on the C compiler's basic
# type sizes).
Fixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, '<I')
Fixed64Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED64, '<Q')
SFixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, '<i')
SFixed64Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED64, '<q')
FloatDecoder = _FloatDecoder()
DoubleDecoder = _DoubleDecoder()
BoolDecoder = _ModifiedDecoder(
wire_format.WIRETYPE_VARINT, _DecodeVarint, bool)
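def _DemoZigZagDecode():
  # Hedged sketch, not part of the upstream source: sint32/sint64 fields use
  # zig-zag coding so small negative numbers stay small on the wire. Decoding
  # maps 0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, and so on.
  assert wire_format.ZigZagDecode(1) == -1
  assert wire_format.ZigZagDecode(2) == 1
  assert wire_format.ZigZagDecode(3) == -2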
def StringDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a string field."""
local_DecodeVarint = _DecodeVarint
local_unicode = unicode
def _ConvertToUnicode(byte_str):
try:
return local_unicode(byte_str, 'utf-8')
except UnicodeDecodeError, e:
# add more information to the error message and re-raise it.
e.reason = '%s in field: %s' % (e, key.full_name)
raise
assert not is_packed
if is_repeated:
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_LENGTH_DELIMITED)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated string.')
value.append(_ConvertToUnicode(buffer[pos:new_pos]))
# Predict that the next tag is another copy of the same repeated field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos == end:
# Prediction failed. Return.
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated string.')
field_dict[key] = _ConvertToUnicode(buffer[pos:new_pos])
return new_pos
return DecodeField
def BytesDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a bytes field."""
local_DecodeVarint = _DecodeVarint
assert not is_packed
if is_repeated:
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_LENGTH_DELIMITED)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated string.')
value.append(buffer[pos:new_pos])
# Predict that the next tag is another copy of the same repeated field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos == end:
# Prediction failed. Return.
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated string.')
field_dict[key] = buffer[pos:new_pos]
return new_pos
return DecodeField
def GroupDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a group field."""
end_tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_END_GROUP)
end_tag_len = len(end_tag_bytes)
assert not is_packed
if is_repeated:
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_START_GROUP)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
# Read sub-message.
pos = value.add()._InternalParse(buffer, pos, end)
# Read end tag.
new_pos = pos+end_tag_len
if buffer[pos:new_pos] != end_tag_bytes or new_pos > end:
raise _DecodeError('Missing group end tag.')
# Predict that the next tag is another copy of the same repeated field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos == end:
# Prediction failed. Return.
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
# Read sub-message.
pos = value._InternalParse(buffer, pos, end)
# Read end tag.
new_pos = pos+end_tag_len
if buffer[pos:new_pos] != end_tag_bytes or new_pos > end:
raise _DecodeError('Missing group end tag.')
return new_pos
return DecodeField
def MessageDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a message field."""
local_DecodeVarint = _DecodeVarint
assert not is_packed
if is_repeated:
tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_LENGTH_DELIMITED)
tag_len = len(tag_bytes)
def DecodeRepeatedField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
while 1:
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
# Read length.
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated message.')
# Read sub-message.
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
# The only reason _InternalParse would return early is if it
# encountered an end-group tag.
raise _DecodeError('Unexpected end-group tag.')
# Predict that the next tag is another copy of the same repeated field.
pos = new_pos + tag_len
if buffer[new_pos:pos] != tag_bytes or new_pos == end:
# Prediction failed. Return.
return new_pos
return DecodeRepeatedField
else:
def DecodeField(buffer, pos, end, message, field_dict):
value = field_dict.get(key)
if value is None:
value = field_dict.setdefault(key, new_default(message))
# Read length.
(size, pos) = local_DecodeVarint(buffer, pos)
new_pos = pos + size
if new_pos > end:
raise _DecodeError('Truncated message.')
# Read sub-message.
if value._InternalParse(buffer, pos, new_pos) != new_pos:
# The only reason _InternalParse would return early is if it encountered
# an end-group tag.
raise _DecodeError('Unexpected end-group tag.')
return new_pos
return DecodeField
# --------------------------------------------------------------------
MESSAGE_SET_ITEM_TAG = encoder.TagBytes(1, wire_format.WIRETYPE_START_GROUP)
def MessageSetItemDecoder(extensions_by_number):
"""Returns a decoder for a MessageSet item.
The parameter is the _extensions_by_number map for the message class.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT)
message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)
item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP)
local_ReadTag = ReadTag
local_DecodeVarint = _DecodeVarint
local_SkipField = SkipField
def DecodeItem(buffer, pos, end, message, field_dict):
message_set_item_start = pos
type_id = -1
message_start = -1
message_end = -1
# Technically, type_id and message can appear in any order, so we need
# a little loop here.
while 1:
(tag_bytes, pos) = local_ReadTag(buffer, pos)
if tag_bytes == type_id_tag_bytes:
(type_id, pos) = local_DecodeVarint(buffer, pos)
elif tag_bytes == message_tag_bytes:
(size, message_start) = local_DecodeVarint(buffer, pos)
pos = message_end = message_start + size
elif tag_bytes == item_end_tag_bytes:
break
else:
pos = local_SkipField(buffer, pos, end, tag_bytes)
if pos == -1:
raise _DecodeError('Missing group end tag.')
if pos > end:
raise _DecodeError('Truncated message.')
if type_id == -1:
raise _DecodeError('MessageSet item missing type_id.')
if message_start == -1:
raise _DecodeError('MessageSet item missing message.')
extension = extensions_by_number.get(type_id)
if extension is not None:
value = field_dict.get(extension)
if value is None:
value = field_dict.setdefault(
extension, extension.message_type._concrete_class())
if value._InternalParse(buffer, message_start, message_end) != message_end:
# The only reason _InternalParse would return early is if it encountered
# an end-group tag.
raise _DecodeError('Unexpected end-group tag.')
else:
if not message._unknown_fields:
message._unknown_fields = []
message._unknown_fields.append((MESSAGE_SET_ITEM_TAG,
buffer[message_set_item_start:pos]))
return pos
return DecodeItem
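# Illustrative sketch: the raw wire bytes for one MessageSet item, laid out as
# DecodeItem above parses it. The type_id 54321 and the two-byte payload are
# made-up example values; tag bytes follow (field_number << 3) | wire_type.
_EXAMPLE_MESSAGE_SET_ITEM = (
    b'\x0b'              # field 1 start-group tag: (1 << 3) | 3
    b'\x10\xb1\xa8\x03'  # field 2 varint tag, then varint(54321) as type_id
    b'\x1a\x02\x08\x01'  # field 3 length-delimited tag, length 2, payload
    b'\x0c'              # field 1 end-group tag: (1 << 3) | 4
)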
# --------------------------------------------------------------------
# Optimization is not as heavy here because calls to SkipField() are rare,
# except for handling end-group tags.
def _SkipVarint(buffer, pos, end):
"""Skip a varint value. Returns the new position."""
# Previously ord(buffer[pos]) raised IndexError when pos is out of range.
# With this code, ord(b'') raises TypeError. Both are handled in
# python_message.py to generate a 'Truncated message' error.
while ord(buffer[pos:pos+1]) & 0x80:
pos += 1
pos += 1
if pos > end:
raise _DecodeError('Truncated message.')
return pos
def _SkipFixed64(buffer, pos, end):
"""Skip a fixed64 value. Returns the new position."""
pos += 8
if pos > end:
raise _DecodeError('Truncated message.')
return pos
def _SkipLengthDelimited(buffer, pos, end):
"""Skip a length-delimited value. Returns the new position."""
(size, pos) = _DecodeVarint(buffer, pos)
pos += size
if pos > end:
raise _DecodeError('Truncated message.')
return pos
def _SkipGroup(buffer, pos, end):
"""Skip sub-group. Returns the new position."""
while 1:
(tag_bytes, pos) = ReadTag(buffer, pos)
new_pos = SkipField(buffer, pos, end, tag_bytes)
if new_pos == -1:
return pos
pos = new_pos
def _EndGroup(buffer, pos, end):
"""Skipping an END_GROUP tag returns -1 to tell the parent loop to break."""
return -1
def _SkipFixed32(buffer, pos, end):
"""Skip a fixed32 value. Returns the new position."""
pos += 4
if pos > end:
raise _DecodeError('Truncated message.')
return pos
def _RaiseInvalidWireType(buffer, pos, end):
"""Skip function for unknown wire types. Raises an exception."""
raise _DecodeError('Tag had invalid wire type.')
def _FieldSkipper():
"""Constructs the SkipField function."""
WIRETYPE_TO_SKIPPER = [
_SkipVarint,
_SkipFixed64,
_SkipLengthDelimited,
_SkipGroup,
_EndGroup,
_SkipFixed32,
_RaiseInvalidWireType,
_RaiseInvalidWireType,
]
wiretype_mask = wire_format.TAG_TYPE_MASK
def SkipField(buffer, pos, end, tag_bytes):
"""Skips a field with the specified tag.
|pos| should point to the byte immediately after the tag.
Returns:
The new position (after the tag value), or -1 if the tag is an end-group
tag (in which case the calling loop should break).
"""
# The wire type is always in the first byte since varints are little-endian.
wire_type = ord(tag_bytes[0:1]) & wiretype_mask
return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
return SkipField
SkipField = _FieldSkipper()
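# Illustrative note: for a tag byte such as 0x1a (field 3, wire type 2), the
# dispatch above computes 0x1a & wire_format.TAG_TYPE_MASK (i.e. & 7) == 2,
# so SkipField routes the call to _SkipLengthDelimited.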
| gpl-3.0 |
PaddlePaddle/Paddle | python/paddle/fluid/tests/unittests/test_complex_elementwise_layers.py | 2 | 3757 | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import numpy as np
from numpy.random import random as rand
import paddle
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
paddle_apis = {
"add": paddle.add,
"sub": paddle.subtract,
"mul": paddle.multiply,
"div": paddle.divide,
}
class TestComplexElementwiseLayers(unittest.TestCase):
def setUp(self):
self._dtypes = ["float32", "float64"]
self._places = [paddle.CPUPlace()]
if fluid.core.is_compiled_with_cuda():
self._places.append(paddle.CUDAPlace(0))
def paddle_calc(self, x, y, op, place):
with dg.guard(place):
x_t = dg.to_variable(x)
y_t = dg.to_variable(y)
return paddle_apis[op](x_t, y_t).numpy()
def assert_check(self, pd_result, np_result, place):
self.assertTrue(
np.allclose(pd_result, np_result),
"\nplace: {}\npaddle diff result:\n {}\nnumpy diff result:\n {}\n".
format(place, pd_result[~np.isclose(pd_result, np_result)],
np_result[~np.isclose(pd_result, np_result)]))
def compare_by_basic_api(self, x, y):
for place in self._places:
self.assert_check(
self.paddle_calc(x, y, "add", place), x + y, place)
self.assert_check(
self.paddle_calc(x, y, "sub", place), x - y, place)
self.assert_check(
self.paddle_calc(x, y, "mul", place), x * y, place)
self.assert_check(
self.paddle_calc(x, y, "div", place), x / y, place)
def compare_op_by_basic_api(self, x, y):
for place in self._places:
with dg.guard(place):
var_x = dg.to_variable(x)
var_y = dg.to_variable(y)
self.assert_check((var_x + var_y).numpy(), x + y, place)
self.assert_check((var_x - var_y).numpy(), x - y, place)
self.assert_check((var_x * var_y).numpy(), x * y, place)
self.assert_check((var_x / var_y).numpy(), x / y, place)
def test_complex_xy(self):
for dtype in self._dtypes:
x = rand([2, 3, 4, 5]).astype(dtype) + 1j * rand(
[2, 3, 4, 5]).astype(dtype)
y = rand([2, 3, 4, 5]).astype(dtype) + 1j * rand(
[2, 3, 4, 5]).astype(dtype)
self.compare_by_basic_api(x, y)
self.compare_op_by_basic_api(x, y)
def test_complex_x_real_y(self):
for dtype in self._dtypes:
x = rand([2, 3, 4, 5]).astype(dtype) + 1j * rand(
[2, 3, 4, 5]).astype(dtype)
y = rand([4, 5]).astype(dtype)
# type promotion cases
self.compare_by_basic_api(x, y)
self.compare_op_by_basic_api(x, y)
def test_real_x_complex_y(self):
for dtype in self._dtypes:
x = rand([2, 3, 4, 5]).astype(dtype)
y = rand([5]).astype(dtype) + 1j * rand([5]).astype(dtype)
# type promotion cases
self.compare_by_basic_api(x, y)
self.compare_op_by_basic_api(x, y)
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
Kamik423/uni_plan | plan/plan/lib64/python3.4/site-packages/wheel/signatures/djbec.py | 566 | 6755 | # Ed25519 digital signatures
# Based on http://ed25519.cr.yp.to/python/ed25519.py
# See also http://ed25519.cr.yp.to/software.html
# Adapted by Ron Garret
# Sped up considerably using coordinate transforms found on:
# http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html
# Specifically add-2008-hwcd-4 and dbl-2008-hwcd
try: # pragma nocover
unicode
PY3 = False
def asbytes(b):
"""Convert array of integers to byte string"""
return ''.join(chr(x) for x in b)
def joinbytes(b):
"""Convert array of bytes to byte string"""
return ''.join(b)
def bit(h, i):
"""Return i'th bit of bytestring h"""
return (ord(h[i//8]) >> (i%8)) & 1
except NameError: # pragma nocover
PY3 = True
asbytes = bytes
joinbytes = bytes
def bit(h, i):
return (h[i//8] >> (i%8)) & 1
import hashlib
b = 256
q = 2**255 - 19
l = 2**252 + 27742317777372353535851937790883648493
def H(m):
return hashlib.sha512(m).digest()
def expmod(b, e, m):
if e == 0: return 1
t = expmod(b, e // 2, m) ** 2 % m
if e & 1: t = (t * b) % m
return t
# Can probably get some extra speedup here by replacing this with
# an extended-euclidean, but performance seems OK without that
def inv(x):
return expmod(x, q-2, q)
d = -121665 * inv(121666)
I = expmod(2,(q-1)//4,q)
def xrecover(y):
xx = (y*y-1) * inv(d*y*y+1)
x = expmod(xx,(q+3)//8,q)
if (x*x - xx) % q != 0: x = (x*I) % q
if x % 2 != 0: x = q-x
return x
By = 4 * inv(5)
Bx = xrecover(By)
B = [Bx % q,By % q]
#def edwards(P,Q):
# x1 = P[0]
# y1 = P[1]
# x2 = Q[0]
# y2 = Q[1]
# x3 = (x1*y2+x2*y1) * inv(1+d*x1*x2*y1*y2)
# y3 = (y1*y2+x1*x2) * inv(1-d*x1*x2*y1*y2)
# return (x3 % q,y3 % q)
#def scalarmult(P,e):
# if e == 0: return [0,1]
# Q = scalarmult(P,e/2)
# Q = edwards(Q,Q)
# if e & 1: Q = edwards(Q,P)
# return Q
# Faster (!) version based on:
# http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html
def xpt_add(pt1, pt2):
(X1, Y1, Z1, T1) = pt1
(X2, Y2, Z2, T2) = pt2
A = ((Y1-X1)*(Y2+X2)) % q
B = ((Y1+X1)*(Y2-X2)) % q
C = (Z1*2*T2) % q
D = (T1*2*Z2) % q
E = (D+C) % q
F = (B-A) % q
G = (B+A) % q
H = (D-C) % q
X3 = (E*F) % q
Y3 = (G*H) % q
Z3 = (F*G) % q
T3 = (E*H) % q
return (X3, Y3, Z3, T3)
def xpt_double (pt):
(X1, Y1, Z1, _) = pt
A = (X1*X1)
B = (Y1*Y1)
C = (2*Z1*Z1)
D = (-A) % q
J = (X1+Y1) % q
E = (J*J-A-B) % q
G = (D+B) % q
F = (G-C) % q
H = (D-B) % q
X3 = (E*F) % q
Y3 = (G*H) % q
Z3 = (F*G) % q
T3 = (E*H) % q
return (X3, Y3, Z3, T3)
def pt_xform (pt):
(x, y) = pt
return (x, y, 1, (x*y)%q)
def pt_unxform (pt):
(x, y, z, _) = pt
return ((x*inv(z))%q, (y*inv(z))%q)
def xpt_mult (pt, n):
if n==0: return pt_xform((0,1))
_ = xpt_double(xpt_mult(pt, n>>1))
return xpt_add(_, pt) if n&1 else _
def scalarmult(pt, e):
return pt_unxform(xpt_mult(pt_xform(pt), e))
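def _extended_coords_selfcheck():
    """Illustrative self-check (a sketch, not called anywhere): the extended
    coordinate transforms must round-trip, and doubling through xpt_double
    must agree with scalar multiplication by 2."""
    assert pt_unxform(pt_xform(B)) == (B[0], B[1])
    assert scalarmult(B, 2) == pt_unxform(xpt_double(pt_xform(B)))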
def encodeint(y):
bits = [(y >> i) & 1 for i in range(b)]
e = [(sum([bits[i * 8 + j] << j for j in range(8)]))
for i in range(b//8)]
return asbytes(e)
def encodepoint(P):
x = P[0]
y = P[1]
bits = [(y >> i) & 1 for i in range(b - 1)] + [x & 1]
e = [(sum([bits[i * 8 + j] << j for j in range(8)]))
for i in range(b//8)]
return asbytes(e)
def publickey(sk):
h = H(sk)
a = 2**(b-2) + sum(2**i * bit(h,i) for i in range(3,b-2))
A = scalarmult(B,a)
return encodepoint(A)
def Hint(m):
h = H(m)
return sum(2**i * bit(h,i) for i in range(2*b))
def signature(m,sk,pk):
h = H(sk)
a = 2**(b-2) + sum(2**i * bit(h,i) for i in range(3,b-2))
inter = joinbytes([h[i] for i in range(b//8,b//4)])
r = Hint(inter + m)
R = scalarmult(B,r)
S = (r + Hint(encodepoint(R) + pk + m) * a) % l
return encodepoint(R) + encodeint(S)
def isoncurve(P):
x = P[0]
y = P[1]
return (-x*x + y*y - 1 - d*x*x*y*y) % q == 0
def decodeint(s):
return sum(2**i * bit(s,i) for i in range(0,b))
def decodepoint(s):
y = sum(2**i * bit(s,i) for i in range(0,b-1))
x = xrecover(y)
if x & 1 != bit(s,b-1): x = q-x
P = [x,y]
if not isoncurve(P): raise Exception("decoding point that is not on curve")
return P
def checkvalid(s, m, pk):
if len(s) != b//4: raise Exception("signature length is wrong")
if len(pk) != b//8: raise Exception("public-key length is wrong")
R = decodepoint(s[0:b//8])
A = decodepoint(pk)
S = decodeint(s[b//8:b//4])
h = Hint(encodepoint(R) + pk + m)
v1 = scalarmult(B,S)
# v2 = edwards(R,scalarmult(A,h))
v2 = pt_unxform(xpt_add(pt_xform(R), pt_xform(scalarmult(A, h))))
return v1==v2
##########################################################
#
# Curve25519 reference implementation by Matthew Dempsky, from:
# http://cr.yp.to/highspeed/naclcrypto-20090310.pdf
# P = 2 ** 255 - 19
P = q
A = 486662
#def expmod(b, e, m):
# if e == 0: return 1
# t = expmod(b, e / 2, m) ** 2 % m
# if e & 1: t = (t * b) % m
# return t
# def inv(x): return expmod(x, P - 2, P)
def add(n, m, d):
(xn, zn) = n
(xm, zm) = m
(xd, zd) = d
x = 4 * (xm * xn - zm * zn) ** 2 * zd
z = 4 * (xm * zn - zm * xn) ** 2 * xd
return (x % P, z % P)
def double(n):
(xn, zn) = n
x = (xn ** 2 - zn ** 2) ** 2
z = 4 * xn * zn * (xn ** 2 + A * xn * zn + zn ** 2)
return (x % P, z % P)
def curve25519(n, base=9):
one = (base,1)
two = double(one)
# f(m) evaluates to a tuple
# containing the mth multiple and the
# (m+1)th multiple of base.
def f(m):
if m == 1: return (one, two)
(pm, pm1) = f(m // 2)
if (m & 1):
return (add(pm, pm1, one), double(pm1))
return (double(pm), add(pm, pm1, one))
((x,z), _) = f(n)
return (x * inv(z)) % P
import random
def genkey(n=0):
n = n or random.randint(0,P)
n &= ~7
n &= ~(128 << 8 * 31)
n |= 64 << 8 * 31
return n
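# Note on the three bit operations above: they implement the standard
# Curve25519 scalar clamping -- clear the low three bits, clear bit 255 and
# set bit 254 -- so every generated scalar is a multiple of the cofactor 8
# and lies in the prescribed range.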
#def str2int(s):
# return int(hexlify(s), 16)
# # return sum(ord(s[i]) << (8 * i) for i in range(32))
#
#def int2str(n):
# return unhexlify("%x" % n)
# # return ''.join([chr((n >> (8 * i)) & 255) for i in range(32)])
#################################################
def dsa_test():
import os
msg = str(random.randint(q,q+q)).encode('utf-8')
sk = os.urandom(32)
pk = publickey(sk)
sig = signature(msg, sk, pk)
return checkvalid(sig, msg, pk)
def dh_test():
sk1 = genkey()
sk2 = genkey()
return curve25519(sk1, curve25519(sk2)) == curve25519(sk2, curve25519(sk1))
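def _signature_usage_sketch():
    """Illustrative usage (a sketch mirroring dsa_test above): generate a key
    pair, sign a message, and verify the signature."""
    import os
    sk = os.urandom(32)                # 32-byte secret key
    pk = publickey(sk)
    sig = signature(b'attack at dawn', sk, pk)
    return checkvalid(sig, b'attack at dawn', pk)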
| apache-2.0 |
fieldOfView/pyZShaderToy | pyopengles/pyopengles.py | 1 | 11701 | #
# Copyright (c) 2012 Peter de Rivaz
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted.
#
# Raspberry Pi 3d demo using OpenGLES 2.0 via Python
#
# Version 0.1 (Draws a rectangle using vertex and fragment shaders)
# Version 0.2 (Draws a Julia set on top of a Mandelbrot controlled by the mouse. Mandelbrot rendered to texture in advance.
import ctypes
import time
import math
import sys
# Pick up our constants extracted from the header files with prepare_constants.py
from .egl import *
from .gl2 import *
from .gl2ext import *
# Python3 compatibility
try:
basestring
except NameError:
basestring = str
if sys.version.startswith("3"):
def eglb(s):
return s.encode("latin-1")
else:
def eglb(s):
return s
# Define verbose=True to get debug messages
verbose = False
# Define some extra constants that the automatic extraction misses
GL_NO_ERROR = 0
EGL_DEFAULT_DISPLAY = 0
EGL_NO_CONTEXT = 0
EGL_NO_DISPLAY = 0
EGL_NO_SURFACE = 0
DISPMANX_PROTECTION_NONE = 0
# Open the libraries
try:
bcm = ctypes.CDLL('libbcm_host.so')
opengles = ctypes.CDLL('libGLESv2.so')
openegl = ctypes.CDLL('libEGL.so')
except OSError:
print("Could not open Raspberry Pi libraries.")
raise
eglint = ctypes.c_int
eglshort = ctypes.c_short
def eglints(L):
"""Converts a tuple to an array of eglints (would a pointer return be better?)"""
return (eglint*len(L))(*L)
eglfloat = ctypes.c_float
def eglfloats(L):
return (eglfloat*len(L))(*L)
class Alpha_struct(ctypes.Structure):
"""typedef enum {
/* Bottom 2 bits sets the alpha mode */
DISPMANX_FLAGS_ALPHA_FROM_SOURCE = 0,
DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS = 1,
DISPMANX_FLAGS_ALPHA_FIXED_NON_ZERO = 2,
DISPMANX_FLAGS_ALPHA_FIXED_EXCEED_0X07 = 3,
DISPMANX_FLAGS_ALPHA_PREMULT = 1 << 16,
DISPMANX_FLAGS_ALPHA_MIX = 1 << 17
} DISPMANX_FLAGS_ALPHA_T;
typedef struct {
DISPMANX_FLAGS_ALPHA_T flags;
uint32_t opacity;
VC_IMAGE_T *mask;
} DISPMANX_ALPHA_T;
typedef struct {
DISPMANX_FLAGS_ALPHA_T flags;
uint32_t opacity;
DISPMANX_RESOURCE_HANDLE_T mask;
} VC_DISPMANX_ALPHA_T; /* for use with vmcs_host */
"""
_fields_ = [ ("flags", ctypes.c_long),
("opacity", ctypes.c_ulong),
("mask", ctypes.c_ulong)]
def check(e):
"""Checks that error is zero"""
if e==0: return
if verbose:
print ('Error code %s' % hex(e&0xffffffff))
raise ValueError
class ShaderCompilationFailed(Exception):
def __init__(self, reason=None):
self.reason = reason
class GLError(Exception):
def __init__(self, code=None):
self.code = code
def __str__(self):
if self.code:
if self.code == GL_NO_ERROR:
return "No Error found (%s)" % self.code
elif self.code == GL_INVALID_ENUM:
return "A GLenum argument is out of range. The command that generated that error is ignored (%s)" % self.code
elif self.code == GL_INVALID_VALUE:
return "A numeric argument is out of range. The command that generated that error is ignored (%s)" % self.code
elif self.code == GL_INVALID_OPERATION:
return "The specific command cannot be performed in the current state. The command that generated this is ignored (%s)" % self.code
elif self.code == GL_OUT_OF_MEMORY:
return "There is insufficient memory to execute this command. The state of the OpenGL ES pipeline is considered to be undefined. All bets are off, basically. (%s)" % self.code
return "Unknown error code (%s)" % self.code
else:
return "No error code given"
class EGL(object):
def __init__(self, pref_width=None, pref_height=None,
red_size=8, green_size=8,blue_size=8,
alpha_size=8, depth_size=None,
layer=0, alpha_flags=0, alpha_opacity=0, other_attribs = []):
"""Opens up the OpenGL library and prepares a window for display"""
b = bcm.bcm_host_init()
assert b==0
self.display = openegl.eglGetDisplay(EGL_DEFAULT_DISPLAY)
assert self.display
r = openegl.eglInitialize(self.display,0,0)
assert r
attribute_list = [EGL_RED_SIZE, red_size,
EGL_GREEN_SIZE, green_size,
EGL_BLUE_SIZE, blue_size]
if alpha_size:
attribute_list.extend([EGL_ALPHA_SIZE, alpha_size])
attribute_list.extend([EGL_SURFACE_TYPE, EGL_WINDOW_BIT])
if depth_size:
attribute_list.extend([EGL_DEPTH_SIZE, depth_size])
if other_attribs and len(other_attribs) % 2 == 0:
attribute_list.extend(other_attribs)
attribute_list.append(EGL_NONE)
attribute_list = eglints( attribute_list )
numconfig = eglint()
config = ctypes.c_void_p()
r = openegl.eglChooseConfig(self.display,
ctypes.byref(attribute_list),
ctypes.byref(config), 1,
ctypes.byref(numconfig))
assert r
r = openegl.eglBindAPI(EGL_OPENGL_ES_API)
assert r
if verbose:
print ('numconfig=',numconfig)
context_attribs = eglints( (EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE) )
self.context = openegl.eglCreateContext(self.display, config,
EGL_NO_CONTEXT,
ctypes.byref(context_attribs))
assert self.context != EGL_NO_CONTEXT
width = eglint()
height = eglint()
s = bcm.graphics_get_display_size(0,ctypes.byref(width),ctypes.byref(height))
if pref_width and pref_height:
self.width = eglint(pref_width)
self.height = eglint(pref_height)
width = self.width
height = self.height
else:
self.width = width
self.height = height
assert s>=0
dispman_display = bcm.vc_dispmanx_display_open(0)
dispman_update = bcm.vc_dispmanx_update_start( 0 )
dst_rect = eglints( (0,0,width.value,height.value) )
src_rect = eglints( (0,0,width.value<<16, height.value<<16) )
assert dispman_update
assert dispman_display
alpha_s = Alpha_struct(alpha_flags, alpha_opacity, 0)
dispman_element = bcm.vc_dispmanx_element_add ( dispman_update, dispman_display,
layer, ctypes.byref(dst_rect), 0,
ctypes.byref(src_rect),
DISPMANX_PROTECTION_NONE,
alpha_s , 0, 0)
bcm.vc_dispmanx_update_submit_sync( dispman_update )
nativewindow = eglints((dispman_element, width, height))
nw_p = ctypes.pointer(nativewindow)
self.nw_p = nw_p
self.surface = openegl.eglCreateWindowSurface( self.display, config, nw_p, 0)
assert self.surface != EGL_NO_SURFACE
r = openegl.eglMakeCurrent(self.display, self.surface, self.surface, self.context)
assert r
def _check_Linked_status(self, programObject):
# Check the link status
linked = eglint()
opengles.glGetProgramiv ( programObject, GL_LINK_STATUS, ctypes.byref(linked))
try:
self._check_glerror()
except GLError as e:
print (e)
return False
if (linked.value == 0):
print ("Linking failed!")
loglength = eglint()
charswritten = eglint()
opengles.glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, ctypes.byref(loglength))
logmsg = ctypes.c_char_p(eglb(" "*loglength.value))
opengles.glGetProgramInfoLog(programObject, loglength, ctypes.byref(charswritten), logmsg)
print (logmsg.value)
return False
return True
def _check_glerror(self):
e=opengles.glGetError()
if e:
raise GLError(e)
return
def _show_shader_log(self, shader):
"""Prints the compile log for a shader"""
N=1024
log=(ctypes.c_char*N)()
loglen=ctypes.c_int()
opengles.glGetShaderInfoLog(shader,N,ctypes.byref(loglen),ctypes.byref(log))
print (log.value)
def _show_program_log(self, program):
"""Prints the compile log for a program"""
N=1024
log=(ctypes.c_char*N)()
loglen=ctypes.c_int()
opengles.glGetProgramInfoLog(program,N,ctypes.byref(loglen),ctypes.byref(log))
print (log.value)
def load_shader ( self, shader_src, shader_type = GL_VERTEX_SHADER, quiet = True ):
# Convert the src to the correct ctype, if not already done
c_shader_src = shader_src
if isinstance(shader_src, basestring):
c_shader_src = ctypes.c_char_p(eglb(shader_src))
# Create a shader of the given type
if not quiet:
print ("Creating shader object")
shader = opengles.glCreateShader(shader_type)
opengles.glShaderSource(shader, 1, ctypes.byref(c_shader_src), 0)
opengles.glCompileShader(shader)
compiled = eglint()
# Check compiled status
opengles.glGetShaderiv ( shader, GL_COMPILE_STATUS, ctypes.byref(compiled) )
if (compiled.value == 0):
print ("Failed to compile shader '%s'" % shader_src)
loglength = eglint()
charswritten = eglint()
opengles.glGetShaderiv(shader, GL_INFO_LOG_LENGTH, ctypes.byref(loglength))
logmsg = ctypes.c_char_p(eglb(" "*loglength.value))
opengles.glGetShaderInfoLog(shader, loglength, ctypes.byref(charswritten), logmsg)
print (logmsg.value)
raise ShaderCompilationFailed(logmsg.value)
elif not quiet:
shdtyp = "{unknown}"
if shader_type == GL_VERTEX_SHADER:
shdtyp = "GL_VERTEX_SHADER"
elif shader_type == GL_FRAGMENT_SHADER:
shdtyp = "GL_FRAGMENT_SHADER"
print ("Compiled %s shader" % shdtyp)
if not quiet:
self._show_shader_log(shader)
return shader
def get_program(self, vertex_shader_src, fragment_shader_src, bindings=[], quiet=True):
# Load the vertex/fragment shaders (can throw a ShaderCompilationFailed exception)
vertexShader = self.load_shader ( vertex_shader_src, GL_VERTEX_SHADER, quiet )
fragmentShader = self.load_shader ( fragment_shader_src, GL_FRAGMENT_SHADER, quiet )
self._check_glerror()
# Create the program object
programObject = opengles.glCreateProgram ( )
self._check_glerror()
opengles.glAttachShader ( programObject, vertexShader )
opengles.glAttachShader ( programObject, fragmentShader )
self._check_glerror()
# Bind positions to attributes:
for pos, attrib in bindings:
opengles.glBindAttribLocation ( programObject, pos, attrib )
self._check_glerror()
# Link the program
opengles.glLinkProgram ( programObject )
self._check_glerror()
# Check the link status
if not (self._check_Linked_status(programObject)):
print ("Couldn't link the shaders to the program object. Check the bindings and shader sourcefiles.")
raise Exception
if not quiet:
self._show_program_log(programObject)
return programObject
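def _egl_usage_sketch():
    """Illustrative usage (a sketch; the shader sources are assumptions and
    this only runs on a Raspberry Pi with the Broadcom libraries installed):
    create a context and build a minimal shader program."""
    egl = EGL(pref_width=640, pref_height=480)
    program = egl.get_program(
        "attribute vec4 vertex; void main() { gl_Position = vertex; }",
        "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }",
        bindings=[(0, b'vertex')])
    opengles.glUseProgram(program)
    return egl, program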
| lgpl-3.0 |
nishad-jobsglobal/odoo-marriot | addons/base_report_designer/plugin/openerp_report_designer/test/test_fields.py | 391 | 1308 | #
# Use this module to retrive the fields you need according to the type
# of the OpenOffice operation:
# * Insert a Field
# * Insert a RepeatIn
#
import xmlrpclib
import time
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
def get(object, level=3, ending=None, ending_excl=None, recur=None, root=''):
if ending is None:
ending = []
if ending_excl is None:
ending_excl = []
if recur is None:
recur = []
res = sock.execute('terp', 3, 'admin', object, 'fields_get')
key = res.keys()
key.sort()
for k in key:
if (not ending or res[k]['type'] in ending) and ((not ending_excl) or not (res[k]['type'] in ending_excl)):
print root+'/'+k
if res[k]['type'] in recur:
print root+'/'+k
if (res[k]['type'] in recur) and (level>0):
get(res[k]['relation'], level-1, ending, ending_excl, recur, root+'/'+k)
print 'Field selection for a Field', '='*40
get('account.invoice', level=0, ending_excl=['one2many','many2one','many2many','reference'], recur=['many2one'])
print
print 'Field selection for a repeatIn', '='*40
get('account.invoice', level=0, ending=['one2many','many2many'], recur=['many2one'])
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
sbesson/PyGithub | tests/Time.py | 5 | 1925 | ############################ Copyrights and license ############################
# #
# Copyright 2018 itsbruce <it.is.bruce@gmail.com> #
# #
# This file is part of PyGithub. #
# http://pygithub.readthedocs.io/ #
# #
# PyGithub is free software: you can redistribute it and/or modify it under #
# the terms of the GNU Lesser General Public License as published by the Free #
# Software Foundation, either version 3 of the License, or (at your option) #
# any later version. #
# #
# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
# details. #
# #
# You should have received a copy of the GNU Lesser General Public License #
# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
# #
################################################################################
from datetime import timedelta, tzinfo
class UTCtzinfo(tzinfo):
def utcoffset(self, dt):
return timedelta(0)
def tzname(self, dt):
return "UTC"
def dst(self, dt):
return timedelta(0)
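def _utc_usage_sketch():
    """Illustrative usage (a sketch, not part of the test suite): attach the
    UTC tzinfo to a datetime and inspect offset and name."""
    from datetime import datetime
    aware = datetime(2018, 1, 1, tzinfo=UTCtzinfo())
    return aware.utcoffset() == timedelta(0) and aware.tzname() == "UTC"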
| lgpl-3.0 |
yoelk/kivy | kivy/uix/videoplayer.py | 32 | 21415 | '''
Video player
============
.. versionadded:: 1.2.0
The video player widget can be used to play video and let the user control the
play/pause, volume and position. The widget cannot be customized much because
of the complex assembly of numerous base widgets.
.. image:: images/videoplayer.jpg
:align: center
Annotations
-----------
If you want to display text at a specific time and for a certain duration,
consider annotations. An annotation file has a ".jsa" extension. The player
will automatically load the associated annotation file if it exists.
An annotation file is JSON-based, providing a list of label dictionary items.
The key and value must match one of the :class:`VideoPlayerAnnotation` items.
For example, here is a short version of a jsa file that you can find in
`examples/widgets/softboy.jsa`::
[
{"start": 0, "duration": 2,
"text": "This is an example of annotation"},
{"start": 2, "duration": 2,
"bgcolor": [0.5, 0.2, 0.4, 0.5],
"text": "You can change the background color"}
]
For our softboy.avi example, the result will be:
.. image:: images/videoplayer-annotation.jpg
:align: center
If you want to experiment with annotation files, test with::
python -m kivy.uix.videoplayer examples/widgets/softboy.avi
Fullscreen
----------
The video player can play the video in fullscreen, if
:attr:`VideoPlayer.allow_fullscreen` is activated by a double-tap on
the video. By default, if the video is smaller than the Window, it will be not
stretched.
You can allow stretching by passing custom options to a
:class:`VideoPlayer` instance::
player = VideoPlayer(source='myvideo.avi', state='play',
options={'allow_stretch': True})
End-of-stream behavior
----------------------
You can specify what happens when the video has finished playing by passing an
`eos` (end of stream) directive to the underlying
:class:`~kivy.core.video.VideoBase` class. `eos` can be one of 'stop', 'pause'
or 'loop' and defaults to 'stop'. For example, in order to loop the video::
player = VideoPlayer(source='myvideo.avi', state='play',
options={'eos': 'loop'})
.. note::
The `eos` property of the VideoBase class is a string specifying the
end-of-stream behavior. This property differs from the `eos`
properties of the :class:`VideoPlayer` and
:class:`~kivy.uix.video.Video` classes, whose `eos`
property is simply a boolean indicating that the end of the file has
been reached.
'''
__all__ = ('VideoPlayer', 'VideoPlayerAnnotation')
from json import load
from os.path import exists
from kivy.properties import ObjectProperty, StringProperty, BooleanProperty, \
NumericProperty, DictProperty, OptionProperty
from kivy.animation import Animation
from kivy.uix.gridlayout import GridLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.progressbar import ProgressBar
from kivy.uix.label import Label
from kivy.uix.video import Video
from kivy.uix.video import Image
from kivy.factory import Factory
from kivy.logger import Logger
from kivy.clock import Clock
class VideoPlayerVolume(Image):
video = ObjectProperty(None)
def on_touch_down(self, touch):
if not self.collide_point(*touch.pos):
return False
touch.grab(self)
# save the current volume and delta to it
touch.ud[self.uid] = [self.video.volume, 0]
return True
def on_touch_move(self, touch):
if touch.grab_current is not self:
return
# calculate delta
dy = abs(touch.y - touch.oy)
if dy > 10:
dy = min(dy - 10, 100)
touch.ud[self.uid][1] = dy
self.video.volume = dy / 100.
return True
def on_touch_up(self, touch):
if touch.grab_current is not self:
return
touch.ungrab(self)
dy = abs(touch.y - touch.oy)
if dy < 10:
if self.video.volume > 0:
self.video.volume = 0
else:
self.video.volume = 1.
class VideoPlayerPlayPause(Image):
video = ObjectProperty(None)
def on_touch_down(self, touch):
'''.. versionchanged:: 1.4.0'''
if self.collide_point(*touch.pos):
if self.video.state == 'play':
self.video.state = 'pause'
else:
self.video.state = 'play'
return True
class VideoPlayerStop(Image):
video = ObjectProperty(None)
def on_touch_down(self, touch):
if self.collide_point(*touch.pos):
self.video.state = 'stop'
self.video.position = 0
return True
class VideoPlayerProgressBar(ProgressBar):
video = ObjectProperty(None)
seek = NumericProperty(None, allownone=True)
alpha = NumericProperty(1.)
def __init__(self, **kwargs):
super(VideoPlayerProgressBar, self).__init__(**kwargs)
self.bubble = Factory.Bubble(size=(50, 44))
self.bubble_label = Factory.Label(text='0:00')
self.bubble.add_widget(self.bubble_label)
self.add_widget(self.bubble)
update = self._update_bubble
fbind = self.fbind
fbind('pos', update)
fbind('size', update)
fbind('seek', update)
def on_video(self, instance, value):
self.video.bind(position=self._update_bubble,
state=self._showhide_bubble)
def on_touch_down(self, touch):
if not self.collide_point(*touch.pos):
return
self._show_bubble()
touch.grab(self)
self._update_seek(touch.x)
return True
def on_touch_move(self, touch):
if touch.grab_current is not self:
return
self._update_seek(touch.x)
return True
def on_touch_up(self, touch):
if touch.grab_current is not self:
return
touch.ungrab(self)
if self.seek:
self.video.seek(self.seek)
self.seek = None
self._hide_bubble()
return True
def _update_seek(self, x):
if self.width == 0:
return
x = max(self.x, min(self.right, x)) - self.x
self.seek = x / float(self.width)
def _show_bubble(self):
self.alpha = 1
Animation.stop_all(self, 'alpha')
def _hide_bubble(self):
self.alpha = 1.
Animation(alpha=0, d=4, t='in_out_expo').start(self)
def on_alpha(self, instance, value):
self.bubble.background_color = (1, 1, 1, value)
self.bubble_label.color = (1, 1, 1, value)
def _update_bubble(self, *l):
seek = self.seek
if self.seek is None:
if self.video.duration == 0:
seek = 0
else:
seek = self.video.position / self.video.duration
# convert to minutes:seconds
d = self.video.duration * seek
minutes = int(d / 60)
seconds = int(d - (minutes * 60))
# fix bubble label & position
self.bubble_label.text = '%d:%02d' % (minutes, seconds)
self.bubble.center_x = self.x + seek * self.width
self.bubble.y = self.top
def _showhide_bubble(self, instance, value):
if value == 'play':
self._hide_bubble()
else:
self._show_bubble()
class VideoPlayerPreview(FloatLayout):
source = ObjectProperty(None)
video = ObjectProperty(None)
click_done = BooleanProperty(False)
def on_touch_down(self, touch):
if self.collide_point(*touch.pos) and not self.click_done:
self.click_done = True
self.video.state = 'play'
return True
class VideoPlayerAnnotation(Label):
'''Annotation class used for creating annotation labels.
Additional keys are available:
* bgcolor: [r, g, b, a] - background color of the text box
* bgsource: 'filename' - background image used for the background text box
* border: (n, e, s, w) - border used for the background image
'''
start = NumericProperty(0)
'''Start time of the annotation.
:attr:`start` is a :class:`~kivy.properties.NumericProperty` and defaults
to 0.
'''
duration = NumericProperty(1)
'''Duration of the annotation.
:attr:`duration` is a :class:`~kivy.properties.NumericProperty` and
defaults to 1.
'''
annotation = DictProperty({})
def on_annotation(self, instance, ann):
for key, value in ann.items():
setattr(self, key, value)
class VideoPlayer(GridLayout):
'''VideoPlayer class. See module documentation for more information.
'''
source = StringProperty('')
'''Source of the video to read.
:attr:`source` is a :class:`~kivy.properties.StringProperty` and
defaults to ''.
.. versionchanged:: 1.4.0
'''
thumbnail = StringProperty('')
'''Thumbnail of the video to show. If None, VideoPlayer will try to find
the thumbnail from the :attr:`source` + '.png'.
:attr:`thumbnail` a :class:`~kivy.properties.StringProperty` and defaults
to ''.
.. versionchanged:: 1.4.0
'''
duration = NumericProperty(-1)
'''Duration of the video. The duration defaults to -1 and is set to the
real duration when the video is loaded.
:attr:`duration` is a :class:`~kivy.properties.NumericProperty` and
defaults to -1.
'''
position = NumericProperty(0)
'''Position of the video between 0 and :attr:`duration`. The position
defaults to -1 and is set to the real position when the video is loaded.
:attr:`position` is a :class:`~kivy.properties.NumericProperty` and
defaults to -1.
'''
volume = NumericProperty(1.0)
'''Volume of the video in the range 0-1. 1 means full volume and 0 means
mute.
:attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults
to 1.
'''
state = OptionProperty('stop', options=('play', 'pause', 'stop'))
'''String, indicates whether to play, pause, or stop the video::
# start playing the video at creation
video = VideoPlayer(source='movie.mkv', state='play')
# create the video, and start later
video = VideoPlayer(source='movie.mkv')
# and later
video.state = 'play'
:attr:`state` is an :class:`~kivy.properties.OptionProperty` and defaults
to 'stop'.
'''
play = BooleanProperty(False)
'''
.. deprecated:: 1.4.0
Use :attr:`state` instead.
Boolean, indicates whether the video is playing or not. You can start/stop
the video by setting this property::
# start playing the video at creation
video = VideoPlayer(source='movie.mkv', play=True)
# create the video, and start later
video = VideoPlayer(source='movie.mkv')
# and later
video.play = True
:attr:`play` is a :class:`~kivy.properties.BooleanProperty` and defaults
to False.
'''
image_overlay_play = StringProperty(
'atlas://data/images/defaulttheme/player-play-overlay')
'''Image filename used to show a "play" overlay when the video has not yet
started.
:attr:`image_overlay_play` is a
:class:`~kivy.properties.StringProperty` and
defaults to 'atlas://data/images/defaulttheme/player-play-overlay'.
'''
image_loading = StringProperty('data/images/image-loading.gif')
'''Image filename used when the video is loading.
:attr:`image_loading` is a :class:`~kivy.properties.StringProperty` and
defaults to 'data/images/image-loading.gif'.
'''
image_play = StringProperty(
'atlas://data/images/defaulttheme/media-playback-start')
'''Image filename used for the "Play" button.
:attr:`image_play` is a :class:`~kivy.properties.StringProperty` and
defaults to 'atlas://data/images/defaulttheme/media-playback-start'.
'''
image_stop = StringProperty(
'atlas://data/images/defaulttheme/media-playback-stop')
'''Image filename used for the "Stop" button.
:attr:`image_stop` is a :class:`~kivy.properties.StringProperty` and
defaults to 'atlas://data/images/defaulttheme/media-playback-stop'.
'''
image_pause = StringProperty(
'atlas://data/images/defaulttheme/media-playback-pause')
'''Image filename used for the "Pause" button.
:attr:`image_pause` is a :class:`~kivy.properties.StringProperty` and
defaults to 'atlas://data/images/defaulttheme/media-playback-pause'.
'''
image_volumehigh = StringProperty(
'atlas://data/images/defaulttheme/audio-volume-high')
'''Image filename used for the volume icon when the volume is high.
:attr:`image_volumehigh` is a :class:`~kivy.properties.StringProperty` and
defaults to 'atlas://data/images/defaulttheme/audio-volume-high'.
'''
image_volumemedium = StringProperty(
'atlas://data/images/defaulttheme/audio-volume-medium')
'''Image filename used for the volume icon when the volume is medium.
:attr:`image_volumemedium` is a :class:`~kivy.properties.StringProperty`
and defaults to 'atlas://data/images/defaulttheme/audio-volume-medium'.
'''
image_volumelow = StringProperty(
'atlas://data/images/defaulttheme/audio-volume-low')
'''Image filename used for the volume icon when the volume is low.
:attr:`image_volumelow` is a :class:`~kivy.properties.StringProperty`
and defaults to 'atlas://data/images/defaulttheme/audio-volume-low'.
'''
image_volumemuted = StringProperty(
'atlas://data/images/defaulttheme/audio-volume-muted')
'''Image filename used for the volume icon when the volume is muted.
:attr:`image_volumemuted` is a :class:`~kivy.properties.StringProperty`
and defaults to 'atlas://data/images/defaulttheme/audio-volume-muted'.
'''
annotations = StringProperty('')
'''If set, it will be used for reading annotations box.
:attr:`annotations` is a :class:`~kivy.properties.StringProperty`
and defaults to ''.
'''
fullscreen = BooleanProperty(False)
'''Switch to fullscreen view. This should be used with care. When
activated, the widget will remove itself from its parent, remove all
children from the window and will add itself to it. When fullscreen is
unset, all the previous children are restored and the widget is restored to
its previous parent.
.. warning::
The re-add operation doesn't care about the index position of its
children within the parent.
:attr:`fullscreen` is a :class:`~kivy.properties.BooleanProperty`
and defaults to False.
'''
allow_fullscreen = BooleanProperty(True)
'''By default, you can double-tap on the video to make it fullscreen. Set
this property to False to prevent this behavior.
:attr:`allow_fullscreen` is a :class:`~kivy.properties.BooleanProperty`
defaults to True.
'''
options = DictProperty({})
'''Optional parameters can be passed to a :class:`~kivy.uix.video.Video`
instance with this property.
:attr:`options` a :class:`~kivy.properties.DictProperty` and
defaults to {}.
'''
# internals
container = ObjectProperty(None)
def __init__(self, **kwargs):
self._video = None
self._image = None
self._annotations = ''
self._annotations_labels = []
super(VideoPlayer, self).__init__(**kwargs)
self._load_thumbnail()
self._load_annotations()
if self.source:
self._trigger_video_load()
def _trigger_video_load(self, *largs):
Clock.unschedule(self._do_video_load)
Clock.schedule_once(self._do_video_load, -1)
def on_source(self, instance, value):
# we got a value, try to see if we have an image for it
self._load_thumbnail()
self._load_annotations()
if self._video is not None:
self._video.unload()
self._video = None
if value:
self._trigger_video_load()
def on_image_overlay_play(self, instance, value):
self._image.image_overlay_play = value
def on_image_loading(self, instance, value):
self._image.image_loading = value
def _load_thumbnail(self):
if not self.container:
return
self.container.clear_widgets()
# get the source, remove extension, and use png
thumbnail = self.thumbnail
if not thumbnail:
filename = self.source.rsplit('.', 1)
thumbnail = filename[0] + '.png'
self._image = VideoPlayerPreview(source=thumbnail, video=self)
self.container.add_widget(self._image)
def _load_annotations(self):
if not self.container:
return
self._annotations_labels = []
annotations = self.annotations
if not annotations:
filename = self.source.rsplit('.', 1)
annotations = filename[0] + '.jsa'
if exists(annotations):
with open(annotations, 'r') as fd:
self._annotations = load(fd)
if self._annotations:
for ann in self._annotations:
self._annotations_labels.append(
VideoPlayerAnnotation(annotation=ann))
def on_state(self, instance, value):
if self._video is not None:
self._video.state = value
def _set_state(self, instance, value):
self.state = value
def _do_video_load(self, *largs):
self._video = Video(source=self.source, state=self.state,
volume=self.volume, pos_hint={'x': 0, 'y': 0},
**self.options)
self._video.bind(texture=self._play_started,
duration=self.setter('duration'),
position=self.setter('position'),
volume=self.setter('volume'),
state=self._set_state)
def on_play(self, instance, value):
value = 'play' if value else 'stop'
return self.on_state(instance, value)
def on_volume(self, instance, value):
if not self._video:
return
self._video.volume = value
def on_position(self, instance, value):
labels = self._annotations_labels
if not labels:
return
for label in labels:
start = label.start
duration = label.duration
if start > value or (start + duration) < value:
if label.parent:
label.parent.remove_widget(label)
elif label.parent is None:
self.container.add_widget(label)
def seek(self, percent):
'''Change the position to a percentage of the duration. Percentage must
be a value between 0-1.
.. warning::
Calling seek() before video is loaded has no effect.
'''
if not self._video:
return
self._video.seek(percent)
def _play_started(self, instance, value):
self.container.clear_widgets()
self.container.add_widget(self._video)
def on_touch_down(self, touch):
if not self.collide_point(*touch.pos):
return False
if touch.is_double_tap and self.allow_fullscreen:
self.fullscreen = not self.fullscreen
return True
return super(VideoPlayer, self).on_touch_down(touch)
def on_fullscreen(self, instance, value):
window = self.get_parent_window()
if not window:
Logger.warning('VideoPlayer: Cannot switch to fullscreen, '
'window not found.')
if value:
self.fullscreen = False
return
if not self.parent:
Logger.warning('VideoPlayer: Cannot switch to fullscreen, '
'no parent.')
if value:
self.fullscreen = False
return
if value:
self._fullscreen_state = state = {
'parent': self.parent,
'pos': self.pos,
'size': self.size,
'pos_hint': self.pos_hint,
'size_hint': self.size_hint,
'window_children': window.children[:]}
# remove all window children
for child in window.children[:]:
window.remove_widget(child)
# put the video in fullscreen
if state['parent'] is not window:
state['parent'].remove_widget(self)
window.add_widget(self)
# ensure the video widget is in 0, 0, and the size will be
# readjusted
self.pos = (0, 0)
self.size = (100, 100)
self.pos_hint = {}
self.size_hint = (1, 1)
else:
state = self._fullscreen_state
window.remove_widget(self)
for child in state['window_children']:
window.add_widget(child)
self.pos_hint = state['pos_hint']
self.size_hint = state['size_hint']
self.pos = state['pos']
self.size = state['size']
if state['parent'] is not window:
state['parent'].add_widget(self)
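def _options_sketch():
    """Illustrative sketch combining the options documented above: loop at end
    of stream and allow stretching. The source filename is a made-up example;
    a sibling 'myvideo.jsa' annotation file would be loaded automatically."""
    return VideoPlayer(source='myvideo.avi', state='play',
                       options={'eos': 'loop', 'allow_stretch': True})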
if __name__ == '__main__':
import sys
from kivy.base import runTouchApp
player = VideoPlayer(source=sys.argv[1])
runTouchApp(player)
if player:
player.state = 'stop'
| mit |
bertl4398/MiGBox | MiGBox/sftp/server_interface.py | 1 | 14550 | # SFTP server interface based on paramiko
#
# Copyright (C) 2013 Benjamin Ertl
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
SFTP server interface implementation based on paramiko.
Provides a SFTP server interface for the MiG SFTP server.
"""
import os
import base64
import json
import paramiko
from Queue import Empty
from Crypto import Random
from Crypto.Hash import MD5
from watchdog.events import DirMovedEvent, FileMovedEvent
from MiGBox.sync.delta import blockchecksums, delta, patch
class SFTPHandle(paramiko.SFTPHandle):
"""
This class inherits from L{paramiko.SFTPHandle}.
It provides an abstract object representing a handle to an open file/directory.
"""
def stat(self):
"""
Return an L{paramiko.SFTPAttributes} object for this open file.
@return: an attribute object for the given file.
@rtype: L{paramiko.SFTPAttributes}
"""
try:
return paramiko.SFTPAttributes.from_stat(os.fstat(self.readfile.fileno()))
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def chattr(self, attr):
"""
Change the attributes of this file.
@param attr: attributes to change.
@type attr: L{paramiko.SFTPAttributes}
@return: return code
@rtype: int
"""
try:
paramiko.SFTPServer.set_file_attr(self.filename, attr)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
class SFTPServerInterface(paramiko.SFTPServerInterface):
"""
This class inherits from L{paramiko.SFTPServerInterface}.
It defines an interface for the SFTP server.
"""
def __init__(self, server, *largs, **kwargs):
"""
Create a new SFTP server interface that handles SFTP operations.
@param server: SFTP server associated with this interface.
@type server: ServerInterface
"""
super(paramiko.SFTPServerInterface, self).__init__(*largs, **kwargs)
self.root = os.path.normpath(server.root)
self.salt = server.salt
self.eventQueue = server.eventQueue
def session_started(self):
"""
The SFTP server session has just started. This method is meant to be
overridden to perform any necessary setup before handling callbacks
from SFTP operations.
"""
# TODO implement necessary setup?
pass
def session_ended(self):
"""
The SFTP server session has just ended, either cleanly or via an
exception. This method is meant to be overridden to perform any
necessary cleanup before this C{paramiko.SFTPServerInterface} object is
destroyed.
"""
# TODO implement clean up?
pass
def _get_path(self, path):
return self.root + self.canonicalize(path)
def open(self, path, flags, attr):
"""
Open a file on the server and create a handle for future operations
on that file.
@param path: path of the file to be opened.
@type path: str
@param flags: flags or'd together from the C{os} module.
@type flags: int
@param attr: requested attributes of the file if it is newly created.
@type attr: L{paramiko.SFTPAttributes}
@return: a new L{paramiko.SFTPHandle} I{or error code}.
@rtype L{paramiko.SFTPHandle}
"""
path = self._get_path(path)
try:
binary_flag = getattr(os, 'O_BINARY', 0)
flags |= binary_flag
mode = getattr(attr, 'st_mode', None)
if mode is not None:
fd = os.open(path, flags, mode)
else:
fd = os.open(path, flags, 0666)
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
if (flags & os.O_CREAT) and (attr is not None):
attr._flags &= ~attr.FLAG_PERMISSIONS
paramiko.SFTPServer.set_file_attr(path, attr)
if flags & os.O_WRONLY:
if flags & os.O_APPEND:
fstr = 'ab'
else:
fstr = 'wb'
elif flags & os.O_RDWR:
if flags & os.O_APPEND:
fstr = 'a+b'
else:
fstr = 'r+b'
else:
fstr = 'rb'
try:
f = os.fdopen(fd, fstr)
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
fobj = SFTPHandle(flags)
fobj.filename = path
fobj.readfile = f
fobj.writefile = f
return fobj
def list_folder(self, path):
"""
Return a list of files within a given folder.
@param path: path to be listed.
@type path: str
@return: a list of the files in the given folder, using
L{paramiko.SFTPAttributes} objects.
@rtype: list of L{paramiko.SFTPAttributes} I{or error code}
"""
path = self._get_path(path)
attr_list = []
try:
files = os.listdir(path)
for file_ in files:
stat = os.stat(os.path.join(path, file_))
attr = paramiko.SFTPAttributes.from_stat(stat)
attr.filename = file_
attr_list.append(attr)
return attr_list
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def stat(self, path):
"""
Return an L{paramiko.SFTPAttributes} object for a path on the server,
or an error code.
@param path: path for stat info.
@type path: str
@return: an attributes object for the given path, or error code.
@rtype: L{paramiko.SFTPAttributes} I{or error code}
"""
path = self._get_path(path)
try:
return paramiko.SFTPAttributes.from_stat(os.stat(path))
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def lstat(self, path):
"""
Return an L{paramiko.SFTPAttributes} object for a path on the server,
or an error code.
@param path: path for stat info.
@type path: str
@return: an attributes object for the given file, or an error code.
@rtype: L{SFTPAttributes} I{or error code}
"""
path = self._get_path(path)
try:
return paramiko.SFTPAttributes.from_stat(os.lstat(path))
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def remove(self, path):
"""
Delete a file, if possible.
@param path: path of the file to delete.
@type path: str
@return: return code.
@rtype: int
"""
path = self._get_path(path)
try:
os.remove(path)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def rename(self, oldpath, newpath):
"""
Rename (or move) a file.
@param oldpath: path of the existing file.
@type oldpath: str
@param newpath: new path of the file.
@type newpath: str
@return: return code.
@rtype: int
"""
oldpath = self._get_path(oldpath)
newpath = self._get_path(newpath)
try:
os.rename(oldpath, newpath)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def mkdir(self, path, attr):
"""
Create a new directory with the given attributes.
@param path: path to the new directory.
@type path: str
@param attr: requested attributes of the new folder.
@type attr: L{paramiko.SFTPAttributes}
@return: return code
@rtype: int
"""
path = self._get_path(path)
try:
os.mkdir(path)
if attr:
paramiko.SFTPServer.set_file_attr(path, attr)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def rmdir(self, path):
"""
Remove an empty directory if it exists.
@param path: path to the directory to remove.
@type path: str
@return: return code.
@rtype: int
"""
path = self._get_path(path)
try:
os.rmdir(path)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def chattr(self, path, attr):
"""
Change the attributes of a file.
@param path: path of the file to change.
@type path: str
@param attr: attributes to change on the file.
@type attr: L{paramiko.SFTPAttributes}
@return: return code.
@rtype: int
"""
path = self._get_path(path)
try:
paramiko.SFTPServer.set_file_attr(path, attr)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def readlink(self, path):
"""
Return the target of a symbolic link (or shortcut) on the server.
@param path: path of the symbolic link.
@type path: str
@return: the target path of the symbolic link, or an error code.
@rtype: str I{or error code}
"""
path = self._get_path(path)
try:
symlink = os.readlink(path)
if os.path.isabs(symlink):
head, tail = os.path.split(symlink)
head = head.replace(self.root + os.path.sep, '')
symlink = os.path.join(head, tail)
return symlink
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def symlink(self, target_path, path):
"""
Create a symbolic link on the server, as new pathname C{path},
with C{target_path} as the target of the link.
@param target_path: path of the target for this new symbolic link.
@type target_path: str
@param path: path of the symbolic link to create.
@type path: str
@return: return code.
@rtype: int
"""
path = self._get_path(path)
target_path = self._get_path(target_path)
try:
os.symlink(target_path, path)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def onetimepass(self):
"""
Create a one time password stored in a new file on the
root path the user uses for synchronization.
These credentials can be used to log in one time only and
are supposed to enable sharing and collaboration with
other users.
"""
rnd = Random.OSRNG.new().read(1024)
pas = base64.b64encode(rnd)
usr = MD5.new(self.salt)
usr.update(pas)
path = self._get_path(usr.hexdigest())
try:
fd = os.open(path, os.O_CREAT|os.O_WRONLY, 0644)
os.write(fd, pas)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
def blockchecksums(self, path):
"""
Get blockchecksums for the given file.
@param path: path.
@type path: str
@return: blockchecksums.
@rtype: str (json dict)
"""
path = self._get_path(path)
try:
bs = blockchecksums(path)
except OSError as e:
bs = {}
return json.dumps(bs)
def delta(self, path, checksums):
"""
Get a delta for the given file to the given checksums.
@param path: path.
@type path: str
@param checksums: blockchecksums.
@type checksums: str (json dict)
@return: delta.
@rtype: str (json list)
"""
path = self._get_path(path)
bs = json.loads(checksums)
try:
d = delta(path, bs)
except OSError as e:
d = []
return json.dumps(d)
def patch(self, path, patch_data):
"""
Patch the given path with the given patch data.
@param path: path.
@type path: str
@param patch_data: patch data.
@type patch_data: str (json list)
@return: return code.
@rtype: int
"""
path = self._get_path(path)
d = json.loads(patch_data)
patched = patch(path, d)
try:
os.rename(patched, path)
return paramiko.SFTP_OK
except OSError as e:
return paramiko.SFTPServer.convert_errno(e.errno)
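    # Illustrative round trip for the rsync-style delta sync implemented by
    # blockchecksums/delta/patch above (a sketch; the client-side calls assume
    # the MiGBox.sync.delta API imported at the top of this file, and the
    # file names are made up):
    #   bs = json.loads(server.blockchecksums('notes.txt'))  # server hashes
    #   d = delta('local/notes.txt', bs)                     # client delta
    #   server.patch('notes.txt', json.dumps(d))             # server patches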
def poll(self):
"""
Poll for events observed by the watchdog file system observer.
@return: list of events.
@rtype: str (json list)
"""
r = []
try:
while True:
event = self.eventQueue.get_nowait()
r.append(event)
except Empty:
pass
r = filter(lambda x: isinstance(x, dict), map(self._serialize_event, r))
return json.dumps(r)
def _serialize_event(self, event):
dst_path = ""
try:
src_path = event.src_path.split(self.root + os.path.sep, 1)[1]
if isinstance(event, DirMovedEvent) or isinstance(event, FileMovedEvent):
dst_path = event.dest_path.split(self.root + os.path.sep, 1)[1]
except IndexError:
return
else:
return {"event_type": event.event_type, "src_path": src_path,
"dst_path": dst_path, "is_dir": event.is_directory}
| gpl-2.0 |
asnorkin/sentiment_analysis | site/lib/python2.7/site-packages/scipy/optimize/_lsq/least_squares.py | 27 | 37725 | """Generic interface for least-square minimization."""
from __future__ import division, print_function, absolute_import
from warnings import warn
import numpy as np
from numpy.linalg import norm
from scipy.sparse import issparse, csr_matrix
from scipy.sparse.linalg import LinearOperator
from scipy.optimize import _minpack, OptimizeResult
from scipy.optimize._numdiff import approx_derivative, group_columns
from scipy._lib.six import string_types
from .trf import trf
from .dogbox import dogbox
from .common import EPS, in_bounds, make_strictly_feasible
TERMINATION_MESSAGES = {
-1: "Improper input parameters status returned from `leastsq`",
0: "The maximum number of function evaluations is exceeded.",
1: "`gtol` termination condition is satisfied.",
2: "`ftol` termination condition is satisfied.",
3: "`xtol` termination condition is satisfied.",
4: "Both `ftol` and `xtol` termination conditions are satisfied."
}
FROM_MINPACK_TO_COMMON = {
0: -1, # Improper input parameters from MINPACK.
1: 2,
2: 3,
3: 4,
4: 1,
5: 0
# There are 6, 7, 8 for too small tolerance parameters,
# but we guard against it by checking ftol, xtol, gtol beforehand.
}
def call_minpack(fun, x0, jac, ftol, xtol, gtol, max_nfev, x_scale, diff_step):
n = x0.size
if diff_step is None:
epsfcn = EPS
else:
epsfcn = diff_step**2
# Compute MINPACK's `diag`, which is inverse of our `x_scale` and
# ``x_scale='jac'`` corresponds to ``diag=None``.
if isinstance(x_scale, string_types) and x_scale == 'jac':
diag = None
else:
diag = 1 / x_scale
full_output = True
col_deriv = False
factor = 100.0
if jac is None:
if max_nfev is None:
# n squared to account for Jacobian evaluations.
max_nfev = 100 * n * (n + 1)
x, info, status = _minpack._lmdif(
fun, x0, (), full_output, ftol, xtol, gtol,
max_nfev, epsfcn, factor, diag)
else:
if max_nfev is None:
max_nfev = 100 * n
x, info, status = _minpack._lmder(
fun, jac, x0, (), full_output, col_deriv,
ftol, xtol, gtol, max_nfev, factor, diag)
f = info['fvec']
if callable(jac):
J = jac(x)
else:
J = np.atleast_2d(approx_derivative(fun, x))
cost = 0.5 * np.dot(f, f)
g = J.T.dot(f)
g_norm = norm(g, ord=np.inf)
nfev = info['nfev']
njev = info.get('njev', None)
status = FROM_MINPACK_TO_COMMON[status]
active_mask = np.zeros_like(x0, dtype=int)
return OptimizeResult(
x=x, cost=cost, fun=f, jac=J, grad=g, optimality=g_norm,
active_mask=active_mask, nfev=nfev, njev=njev, status=status)
def prepare_bounds(bounds, n):
lb, ub = [np.asarray(b, dtype=float) for b in bounds]
if lb.ndim == 0:
lb = np.resize(lb, n)
if ub.ndim == 0:
ub = np.resize(ub, n)
return lb, ub
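# For illustration (a standalone sketch, not part of the public module API):
# scalar bounds are broadcast to the number of variables.
def _prepare_bounds_demo():
    lb, ub = prepare_bounds((0, np.inf), 3)
    assert lb.shape == (3,) and ub.shape == (3,)
    assert np.all(lb == 0) and np.all(np.isinf(ub))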
def check_tolerance(ftol, xtol, gtol):
message = "{} is too low, setting to machine epsilon {}."
if ftol < EPS:
warn(message.format("`ftol`", EPS))
ftol = EPS
if xtol < EPS:
warn(message.format("`xtol`", EPS))
xtol = EPS
if gtol < EPS:
warn(message.format("`gtol`", EPS))
gtol = EPS
return ftol, xtol, gtol
def check_x_scale(x_scale, x0):
if isinstance(x_scale, string_types) and x_scale == 'jac':
return x_scale
try:
x_scale = np.asarray(x_scale, dtype=float)
valid = np.all(np.isfinite(x_scale)) and np.all(x_scale > 0)
except (ValueError, TypeError):
valid = False
if not valid:
raise ValueError("`x_scale` must be 'jac' or array_like with "
"positive numbers.")
if x_scale.ndim == 0:
x_scale = np.resize(x_scale, x0.shape)
if x_scale.shape != x0.shape:
raise ValueError("Inconsistent shapes between `x_scale` and `x0`.")
return x_scale
def check_jac_sparsity(jac_sparsity, m, n):
if jac_sparsity is None:
return None
if not issparse(jac_sparsity):
jac_sparsity = np.atleast_2d(jac_sparsity)
if jac_sparsity.shape != (m, n):
raise ValueError("`jac_sparsity` has wrong shape.")
return jac_sparsity, group_columns(jac_sparsity)
# Loss functions.
def huber(z, rho, cost_only):
mask = z <= 1
rho[0, mask] = z[mask]
rho[0, ~mask] = 2 * z[~mask]**0.5 - 1
if cost_only:
return
rho[1, mask] = 1
rho[1, ~mask] = z[~mask]**-0.5
rho[2, mask] = 0
rho[2, ~mask] = -0.5 * z[~mask]**-1.5
def soft_l1(z, rho, cost_only):
t = 1 + z
rho[0] = 2 * (t**0.5 - 1)
if cost_only:
return
rho[1] = t**-0.5
rho[2] = -0.5 * t**-1.5
def cauchy(z, rho, cost_only):
rho[0] = np.log1p(z)
if cost_only:
return
t = 1 + z
rho[1] = 1 / t
rho[2] = -1 / t**2
def arctan(z, rho, cost_only):
rho[0] = np.arctan(z)
if cost_only:
return
t = 1 + z**2
rho[1] = 1 / t
rho[2] = -2 * z / t**2
IMPLEMENTED_LOSSES = dict(linear=None, huber=huber, soft_l1=soft_l1,
cauchy=cauchy, arctan=arctan)
def construct_loss_function(m, loss, f_scale):
if loss == 'linear':
return None
if not callable(loss):
loss = IMPLEMENTED_LOSSES[loss]
rho = np.empty((3, m))
def loss_function(f, cost_only=False):
z = (f / f_scale) ** 2
loss(z, rho, cost_only=cost_only)
if cost_only:
return 0.5 * f_scale ** 2 * np.sum(rho[0])
rho[0] *= f_scale ** 2
rho[2] /= f_scale ** 2
return rho
else:
def loss_function(f, cost_only=False):
z = (f / f_scale) ** 2
rho = loss(z)
if cost_only:
return 0.5 * f_scale ** 2 * np.sum(rho[0])
rho[0] *= f_scale ** 2
rho[2] /= f_scale ** 2
return rho
return loss_function
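# A hedged sketch of a user-supplied loss: `least_squares` below accepts a
# callable that takes z = f**2 and returns an array of shape (3, m) holding
# rho(z), rho'(z) and rho''(z). This example reproduces the built-in 'soft_l1'
# and would be passed as least_squares(fun, x0, loss=my_soft_l1).
def my_soft_l1(z):
    t = 1 + z
    rho = np.empty((3,) + z.shape)
    rho[0] = 2 * (t**0.5 - 1)  # loss values
    rho[1] = t**-0.5           # first derivatives
    rho[2] = -0.5 * t**-1.5    # second derivatives
    return rho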
def least_squares(
fun, x0, jac='2-point', bounds=(-np.inf, np.inf), method='trf',
ftol=1e-8, xtol=1e-8, gtol=1e-8, x_scale=1.0, loss='linear',
f_scale=1.0, diff_step=None, tr_solver=None, tr_options={},
jac_sparsity=None, max_nfev=None, verbose=0, args=(), kwargs={}):
"""Solve a nonlinear least-squares problem with bounds on the variables.
Given the residuals f(x) (an m-dimensional real function of n real
variables) and the loss function rho(s) (a scalar function), `least_squares`
finds a local minimum of the cost function F(x)::
minimize F(x) = 0.5 * sum(rho(f_i(x)**2), i = 0, ..., m - 1)
subject to lb <= x <= ub
The purpose of the loss function rho(s) is to reduce the influence of
outliers on the solution.
Parameters
----------
fun : callable
Function which computes the vector of residuals, with the signature
``fun(x, *args, **kwargs)``, i.e., the minimization proceeds with
respect to its first argument. The argument ``x`` passed to this
function is an ndarray of shape (n,) (never a scalar, even for n=1).
It must return a 1-d array_like of shape (m,) or a scalar. If the
argument ``x`` is complex or the function ``fun`` returns complex
residuals, it must be wrapped in a real function of real arguments,
as shown at the end of the Examples section.
x0 : array_like with shape (n,) or float
Initial guess on independent variables. If float, it will be treated
as a 1-d array with one element.
jac : {'2-point', '3-point', 'cs', callable}, optional
Method of computing the Jacobian matrix (an m-by-n matrix, where
element (i, j) is the partial derivative of f[i] with respect to
x[j]). The keywords select a finite difference scheme for numerical
estimation. The scheme '3-point' is more accurate, but requires
twice as many operations as '2-point' (default). The
scheme 'cs' uses complex steps, and while potentially the most
accurate, it is applicable only when `fun` correctly handles
complex inputs and can be analytically continued to the complex
plane. Method 'lm' always uses the '2-point' scheme. If callable,
it is used as ``jac(x, *args, **kwargs)`` and should return a
good approximation (or the exact value) for the Jacobian as an
array_like (np.atleast_2d is applied), a sparse matrix or a
`scipy.sparse.linalg.LinearOperator`.
bounds : 2-tuple of array_like, optional
Lower and upper bounds on independent variables. Defaults to no bounds.
Each array must match the size of `x0` or be a scalar, in the latter
case a bound will be the same for all variables. Use ``np.inf`` with
an appropriate sign to disable bounds on all or some variables.
method : {'trf', 'dogbox', 'lm'}, optional
Algorithm to perform minimization.
* 'trf' : Trust Region Reflective algorithm, particularly suitable
for large sparse problems with bounds. Generally robust method.
* 'dogbox' : dogleg algorithm with rectangular trust regions,
typical use case is small problems with bounds. Not recommended
for problems with rank-deficient Jacobian.
* 'lm' : Levenberg-Marquardt algorithm as implemented in MINPACK.
Doesn't handle bounds and sparse Jacobians. Usually the most
efficient method for small unconstrained problems.
Default is 'trf'. See Notes for more information.
ftol : float, optional
Tolerance for termination by the change of the cost function. Default
is 1e-8. The optimization process is stopped when ``dF < ftol * F``,
and there was adequate agreement between a local quadratic model and
the true model in the last step.
xtol : float, optional
Tolerance for termination by the change of the independent variables.
Default is 1e-8. The exact condition depends on the `method` used:
* For 'trf' and 'dogbox' : ``norm(dx) < xtol * (xtol + norm(x))``
* For 'lm' : ``Delta < xtol * norm(xs)``, where ``Delta`` is
a trust-region radius and ``xs`` is the value of ``x``
scaled according to `x_scale` parameter (see below).
gtol : float, optional
Tolerance for termination by the norm of the gradient. Default is 1e-8.
The exact condition depends on the `method` used:
* For 'trf' : ``norm(g_scaled, ord=np.inf) < gtol``, where
``g_scaled`` is the value of the gradient scaled to account for
the presence of the bounds [STIR]_.
* For 'dogbox' : ``norm(g_free, ord=np.inf) < gtol``, where
``g_free`` is the gradient with respect to the variables which
are not in the optimal state on the boundary.
* For 'lm' : the maximum absolute value of the cosine of angles
between columns of the Jacobian and the residual vector is less
than `gtol`, or the residual vector is zero.
x_scale : array_like or 'jac', optional
Characteristic scale of each variable. Setting `x_scale` is equivalent
to reformulating the problem in scaled variables ``xs = x / x_scale``.
An alternative view is that the size of a trust region along j-th
dimension is proportional to ``x_scale[j]``. Improved convergence may
be achieved by setting `x_scale` such that a step of a given size
along any of the scaled variables has a similar effect on the cost
function. If set to 'jac', the scale is iteratively updated using the
inverse norms of the columns of the Jacobian matrix (as described in
[JJMore]_).
loss : str or callable, optional
Determines the loss function. The following keyword values are allowed:
* 'linear' (default) : ``rho(z) = z``. Gives a standard
least-squares problem.
* 'soft_l1' : ``rho(z) = 2 * ((1 + z)**0.5 - 1)``. The smooth
approximation of l1 (absolute value) loss. Usually a good
choice for robust least squares.
* 'huber' : ``rho(z) = z if z <= 1 else 2*z**0.5 - 1``. Works
similarly to 'soft_l1'.
* 'cauchy' : ``rho(z) = ln(1 + z)``. Severely weakens outliers
influence, but may cause difficulties in the optimization process.
* 'arctan' : ``rho(z) = arctan(z)``. Limits a maximum loss on
a single residual, has properties similar to 'cauchy'.
If callable, it must take a 1-d ndarray ``z=f**2`` and return an
array_like with shape (3, m) where row 0 contains function values,
row 1 contains first derivatives and row 2 contains second
derivatives. Method 'lm' supports only 'linear' loss.
f_scale : float, optional
Value of soft margin between inlier and outlier residuals, default
is 1.0. The loss function is evaluated as follows
``rho_(f**2) = C**2 * rho(f**2 / C**2)``, where ``C`` is `f_scale`,
and ``rho`` is determined by `loss` parameter. This parameter has
no effect with ``loss='linear'``, but for other `loss` values it is
of crucial importance.
max_nfev : None or int, optional
Maximum number of function evaluations before the termination.
If None (default), the value is chosen automatically:
* For 'trf' and 'dogbox' : 100 * n.
* For 'lm' : 100 * n if `jac` is callable and 100 * n * (n + 1)
otherwise (because 'lm' counts function calls in Jacobian
estimation).
diff_step : None or array_like, optional
Determines the relative step size for the finite difference
approximation of the Jacobian. The actual step is computed as
``x * diff_step``. If None (default), then `diff_step` is taken to be
a conventional "optimal" power of machine epsilon for the finite
difference scheme used [NR]_.
tr_solver : {None, 'exact', 'lsmr'}, optional
Method for solving trust-region subproblems, relevant only for 'trf'
and 'dogbox' methods.
* 'exact' is suitable for not very large problems with dense
Jacobian matrices. The computational complexity per iteration is
comparable to a singular value decomposition of the Jacobian
matrix.
* 'lsmr' is suitable for problems with sparse and large Jacobian
matrices. It uses the iterative procedure
`scipy.sparse.linalg.lsmr` for finding a solution of a linear
least-squares problem and only requires matrix-vector product
evaluations.
If None (default) the solver is chosen based on the type of Jacobian
returned on the first iteration.
tr_options : dict, optional
Keyword options passed to trust-region solver.
* ``tr_solver='exact'``: `tr_options` are ignored.
* ``tr_solver='lsmr'``: options for `scipy.sparse.linalg.lsmr`.
Additionally ``method='trf'`` supports 'regularize' option
(bool, default is True) which adds a regularization term to the
normal equation, which improves convergence if the Jacobian is
rank-deficient [Byrd]_ (eq. 3.4).
jac_sparsity : {None, array_like, sparse matrix}, optional
Defines the sparsity structure of the Jacobian matrix for finite
difference estimation, its shape must be (m, n). If the Jacobian has
only few non-zero elements in *each* row, providing the sparsity
structure will greatly speed up the computations [Curtis]_. A zero
entry means that a corresponding element in the Jacobian is identically
zero. If provided, forces the use of 'lsmr' trust-region solver.
If None (default) then dense differencing will be used. Has no effect
for 'lm' method.
verbose : {0, 1, 2}, optional
Level of algorithm's verbosity:
* 0 (default) : work silently.
* 1 : display a termination report.
* 2 : display progress during iterations (not supported by 'lm'
method).
args, kwargs : tuple and dict, optional
Additional arguments passed to `fun` and `jac`. Both empty by default.
The calling signature is ``fun(x, *args, **kwargs)`` and the same for
`jac`.
Returns
-------
`OptimizeResult` with the following fields defined:
x : ndarray, shape (n,)
Solution found.
cost : float
Value of the cost function at the solution.
fun : ndarray, shape (m,)
Vector of residuals at the solution.
jac : ndarray, sparse matrix or LinearOperator, shape (m, n)
Modified Jacobian matrix at the solution, in the sense that J^T J
is a Gauss-Newton approximation of the Hessian of the cost function.
The type is the same as the one used by the algorithm.
grad : ndarray, shape (m,)
Gradient of the cost function at the solution.
optimality : float
First-order optimality measure. In unconstrained problems, it is always
the uniform norm of the gradient. In constrained problems, it is the
quantity which was compared with `gtol` during iterations.
active_mask : ndarray of int, shape (n,)
Each component shows whether a corresponding constraint is active
(that is, whether a variable is at the bound):
* 0 : a constraint is not active.
* -1 : a lower bound is active.
* 1 : an upper bound is active.
Might be somewhat arbitrary for 'trf' method as it generates a sequence
of strictly feasible iterates and `active_mask` is determined within a
tolerance threshold.
nfev : int
Number of function evaluations done. Methods 'trf' and 'dogbox' do not
count function calls for numerical Jacobian approximation, as opposed
to 'lm' method.
njev : int or None
Number of Jacobian evaluations done. If numerical Jacobian
approximation is used in 'lm' method, it is set to None.
status : int
The reason for algorithm termination:
* -1 : improper input parameters status returned from MINPACK.
* 0 : the maximum number of function evaluations is exceeded.
* 1 : `gtol` termination condition is satisfied.
* 2 : `ftol` termination condition is satisfied.
* 3 : `xtol` termination condition is satisfied.
* 4 : Both `ftol` and `xtol` termination conditions are satisfied.
message : str
Verbal description of the termination reason.
success : bool
True if one of the convergence criteria is satisfied (`status` > 0).
See Also
--------
leastsq : A legacy wrapper for the MINPACK implementation of the
Levenberg-Marquadt algorithm.
curve_fit : Least-squares minimization applied to a curve fitting problem.
Notes
-----
Method 'lm' (Levenberg-Marquardt) calls a wrapper over least-squares
algorithms implemented in MINPACK (lmder, lmdif). It runs the
Levenberg-Marquardt algorithm formulated as a trust-region type algorithm.
The implementation is based on the paper [JJMore]_; it is very robust and
efficient, with many clever tricks. It should be your first choice
for unconstrained problems. Note that it doesn't support bounds. Also
it doesn't work when m < n.
Method 'trf' (Trust Region Reflective) is motivated by the process of
solving a system of equations, which constitute the first-order optimality
condition for a bound-constrained minimization problem as formulated in
[STIR]_. The algorithm iteratively solves trust-region subproblems
augmented by a special diagonal quadratic term and with trust-region shape
determined by the distance from the bounds and the direction of the
gradient. These enhancements help avoid stepping directly into the bounds
and efficiently explore the whole space of variables. To further improve
convergence, the algorithm considers search directions reflected from the
bounds. To obey theoretical requirements, the algorithm keeps iterates
strictly feasible. With dense Jacobians trust-region subproblems are
solved by an exact method very similar to the one described in [JJMore]_
(and implemented in MINPACK). The difference from the MINPACK
implementation is that a singular value decomposition of a Jacobian
matrix is done once per iteration, instead of a QR decomposition and series
of Givens rotation eliminations. For large sparse Jacobians a 2-d subspace
approach of solving trust-region subproblems is used [STIR]_, [Byrd]_.
The subspace is spanned by a scaled gradient and an approximate
Gauss-Newton solution delivered by `scipy.sparse.linalg.lsmr`. When no
constraints are imposed the algorithm is very similar to MINPACK and has
generally comparable performance. The algorithm is quite robust in
both unbounded and bounded problems, and is therefore chosen as the default.
Method 'dogbox' operates in a trust-region framework, but considers
rectangular trust regions as opposed to conventional ellipsoids [Voglis]_.
The intersection of a current trust region and initial bounds is again
rectangular, so on each iteration a quadratic minimization problem subject
to bound constraints is solved approximately by Powell's dogleg method
[NumOpt]_. The required Gauss-Newton step can be computed exactly for
dense Jacobians or approximately by `scipy.sparse.linalg.lsmr` for large
sparse Jacobians. The algorithm is likely to exhibit slow convergence when
the rank of the Jacobian is less than the number of variables. The algorithm
often outperforms 'trf' in bounded problems with a small number of
variables.
Robust loss functions are implemented as described in [BA]_. The idea
is to modify a residual vector and a Jacobian matrix on each iteration
such that computed gradient and Gauss-Newton Hessian approximation match
the true gradient and Hessian approximation of the cost function. Then
the algorithm proceeds in a normal way, i.e. robust loss functions are
implemented as a simple wrapper over standard least-squares algorithms.
.. versionadded:: 0.17.0
References
----------
.. [STIR] M. A. Branch, T. F. Coleman, and Y. Li, "A Subspace, Interior,
and Conjugate Gradient Method for Large-Scale Bound-Constrained
Minimization Problems," SIAM Journal on Scientific Computing,
Vol. 21, Number 1, pp 1-23, 1999.
.. [NR] William H. Press et. al., "Numerical Recipes. The Art of Scientific
Computing. 3rd edition", Sec. 5.7.
.. [Byrd] R. H. Byrd, R. B. Schnabel and G. A. Shultz, "Approximate
solution of the trust region problem by minimization over
two-dimensional subspaces", Math. Programming, 40, pp. 247-263,
1988.
.. [Curtis] A. Curtis, M. J. D. Powell, and J. Reid, "On the estimation of
sparse Jacobian matrices", Journal of the Institute of
Mathematics and its Applications, 13, pp. 117-120, 1974.
.. [JJMore] J. J. More, "The Levenberg-Marquardt Algorithm: Implementation
and Theory," Numerical Analysis, ed. G. A. Watson, Lecture
Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.
.. [Voglis] C. Voglis and I. E. Lagaris, "A Rectangular Trust Region
Dogleg Approach for Unconstrained and Bound Constrained
Nonlinear Optimization", WSEAS International Conference on
Applied Mathematics, Corfu, Greece, 2004.
.. [NumOpt] J. Nocedal and S. J. Wright, "Numerical optimization,
2nd edition", Chapter 4.
.. [BA] B. Triggs et. al., "Bundle Adjustment - A Modern Synthesis",
Proceedings of the International Workshop on Vision Algorithms:
Theory and Practice, pp. 298-372, 1999.
Examples
--------
In this example we find a minimum of the Rosenbrock function without bounds
on independent variables.
>>> def fun_rosenbrock(x):
... return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])
Notice that we only provide the vector of the residuals. The algorithm
constructs the cost function as a sum of squares of the residuals, which
gives the Rosenbrock function. The exact minimum is at ``x = [1.0, 1.0]``.
>>> from scipy.optimize import least_squares
>>> x0_rosenbrock = np.array([2, 2])
>>> res_1 = least_squares(fun_rosenbrock, x0_rosenbrock)
>>> res_1.x
array([ 1., 1.])
>>> res_1.cost
9.8669242910846867e-30
>>> res_1.optimality
8.8928864934219529e-14
We now constrain the variables, in such a way that the previous solution
becomes infeasible. Specifically, we require that ``x[1] >= 1.5``, and
``x[0]`` left unconstrained. To this end, we specify the `bounds` parameter
to `least_squares` in the form ``bounds=([-np.inf, 1.5], np.inf)``.
We also provide the analytic Jacobian:
>>> def jac_rosenbrock(x):
... return np.array([
... [-20 * x[0], 10],
... [-1, 0]])
Putting this all together, we see that the new solution lies on the bound:
>>> res_2 = least_squares(fun_rosenbrock, x0_rosenbrock, jac_rosenbrock,
... bounds=([-np.inf, 1.5], np.inf))
>>> res_2.x
array([ 1.22437075, 1.5 ])
>>> res_2.cost
0.025213093946805685
>>> res_2.optimality
1.5885401433157753e-07
Now we solve a system of equations (i.e., the cost function should be zero
at a minimum) for a Broyden tridiagonal vector-valued function of 100000
variables:
>>> def fun_broyden(x):
... f = (3 - x) * x + 1
... f[1:] -= x[:-1]
... f[:-1] -= 2 * x[1:]
... return f
The corresponding Jacobian matrix is sparse. We tell the algorithm to
estimate it by finite differences and provide the sparsity structure of
the Jacobian to significantly speed up this process.
>>> from scipy.sparse import lil_matrix
>>> def sparsity_broyden(n):
... sparsity = lil_matrix((n, n), dtype=int)
... i = np.arange(n)
... sparsity[i, i] = 1
... i = np.arange(1, n)
... sparsity[i, i - 1] = 1
... i = np.arange(n - 1)
... sparsity[i, i + 1] = 1
... return sparsity
...
>>> n = 100000
>>> x0_broyden = -np.ones(n)
...
>>> res_3 = least_squares(fun_broyden, x0_broyden,
... jac_sparsity=sparsity_broyden(n))
>>> res_3.cost
4.5687069299604613e-23
>>> res_3.optimality
1.1650454296851518e-11
Let's also solve a curve fitting problem using a robust loss function to
take care of outliers in the data. Define the model function as
``y = a + b * exp(c * t)``, where t is a predictor variable, y is an
observation and a, b, c are parameters to estimate.
First, define the function which generates the data with noise and
outliers, define the model parameters, and generate data:
>>> def gen_data(t, a, b, c, noise=0, n_outliers=0, random_state=0):
... y = a + b * np.exp(t * c)
...
... rnd = np.random.RandomState(random_state)
... error = noise * rnd.randn(t.size)
... outliers = rnd.randint(0, t.size, n_outliers)
... error[outliers] *= 10
...
... return y + error
...
>>> a = 0.5
>>> b = 2.0
>>> c = -1
>>> t_min = 0
>>> t_max = 10
>>> n_points = 15
...
>>> t_train = np.linspace(t_min, t_max, n_points)
>>> y_train = gen_data(t_train, a, b, c, noise=0.1, n_outliers=3)
Define the function for computing residuals and the initial estimate of
the parameters.
>>> def fun(x, t, y):
... return x[0] + x[1] * np.exp(x[2] * t) - y
...
>>> x0 = np.array([1.0, 1.0, 0.0])
Compute a standard least-squares solution:
>>> res_lsq = least_squares(fun, x0, args=(t_train, y_train))
Now compute two solutions with two different robust loss functions. The
parameter `f_scale` is set to 0.1, meaning that inlier residuals should
not significantly exceed 0.1 (the noise level used).
>>> res_soft_l1 = least_squares(fun, x0, loss='soft_l1', f_scale=0.1,
... args=(t_train, y_train))
>>> res_log = least_squares(fun, x0, loss='cauchy', f_scale=0.1,
... args=(t_train, y_train))
And finally plot all the curves. We see that by selecting an appropriate
`loss` we can get estimates close to optimal even in the presence of
strong outliers. But keep in mind that generally it is recommended to try
'soft_l1' or 'huber' losses first (if at all necessary) as the other two
options may cause difficulties in the optimization process.
>>> t_test = np.linspace(t_min, t_max, n_points * 10)
>>> y_true = gen_data(t_test, a, b, c)
>>> y_lsq = gen_data(t_test, *res_lsq.x)
>>> y_soft_l1 = gen_data(t_test, *res_soft_l1.x)
>>> y_log = gen_data(t_test, *res_log.x)
...
>>> import matplotlib.pyplot as plt
>>> plt.plot(t_train, y_train, 'o')
>>> plt.plot(t_test, y_true, 'k', linewidth=2, label='true')
>>> plt.plot(t_test, y_lsq, label='linear loss')
>>> plt.plot(t_test, y_soft_l1, label='soft_l1 loss')
>>> plt.plot(t_test, y_log, label='cauchy loss')
>>> plt.xlabel("t")
>>> plt.ylabel("y")
>>> plt.legend()
>>> plt.show()
In the next example, we show how complex-valued residual functions of
complex variables can be optimized with ``least_squares()``. Consider the
following function:
>>> def f(z):
... return z - (0.5 + 0.5j)
We wrap it into a function of real variables that returns real residuals
by simply handling the real and imaginary parts as independent variables:
>>> def f_wrap(x):
... fx = f(x[0] + 1j*x[1])
... return np.array([fx.real, fx.imag])
Thus, instead of the original m-dimensional complex function of n complex
variables we optimize a 2m-dimensional real function of 2n real variables:
>>> from scipy.optimize import least_squares
>>> res_wrapped = least_squares(f_wrap, (0.1, 0.1), bounds=([0, 0], [1, 1]))
>>> z = res_wrapped.x[0] + res_wrapped.x[1]*1j
>>> z
(0.49999999999925893+0.49999999999925893j)
"""
if method not in ['trf', 'dogbox', 'lm']:
raise ValueError("`method` must be 'trf', 'dogbox' or 'lm'.")
if jac not in ['2-point', '3-point', 'cs'] and not callable(jac):
raise ValueError("`jac` must be '2-point', '3-point', 'cs' or "
"callable.")
if tr_solver not in [None, 'exact', 'lsmr']:
raise ValueError("`tr_solver` must be None, 'exact' or 'lsmr'.")
if loss not in IMPLEMENTED_LOSSES and not callable(loss):
raise ValueError("`loss` must be one of {0} or a callable."
.format(IMPLEMENTED_LOSSES.keys()))
if method == 'lm' and loss != 'linear':
raise ValueError("method='lm' supports only 'linear' loss function.")
if verbose not in [0, 1, 2]:
raise ValueError("`verbose` must be in [0, 1, 2].")
if len(bounds) != 2:
raise ValueError("`bounds` must contain 2 elements.")
if max_nfev is not None and max_nfev <= 0:
raise ValueError("`max_nfev` must be None or positive integer.")
if np.iscomplexobj(x0):
raise ValueError("`x0` must be real.")
x0 = np.atleast_1d(x0).astype(float)
if x0.ndim > 1:
raise ValueError("`x0` must have at most 1 dimension.")
lb, ub = prepare_bounds(bounds, x0.shape[0])
if method == 'lm' and not np.all((lb == -np.inf) & (ub == np.inf)):
raise ValueError("Method 'lm' doesn't support bounds.")
if lb.shape != x0.shape or ub.shape != x0.shape:
raise ValueError("Inconsistent shapes between bounds and `x0`.")
if np.any(lb >= ub):
raise ValueError("Each lower bound must be strictly less than each "
"upper bound.")
if not in_bounds(x0, lb, ub):
raise ValueError("`x0` is infeasible.")
x_scale = check_x_scale(x_scale, x0)
ftol, xtol, gtol = check_tolerance(ftol, xtol, gtol)
def fun_wrapped(x):
return np.atleast_1d(fun(x, *args, **kwargs))
if method == 'trf':
x0 = make_strictly_feasible(x0, lb, ub)
f0 = fun_wrapped(x0)
if f0.ndim != 1:
raise ValueError("`fun` must return at most 1-d array_like.")
if not np.all(np.isfinite(f0)):
raise ValueError("Residuals are not finite in the initial point.")
n = x0.size
m = f0.size
if method == 'lm' and m < n:
raise ValueError("Method 'lm' doesn't work when the number of "
"residuals is less than the number of variables.")
loss_function = construct_loss_function(m, loss, f_scale)
if callable(loss):
rho = loss_function(f0)
if rho.shape != (3, m):
raise ValueError("The return value of `loss` callable has wrong "
"shape.")
initial_cost = 0.5 * np.sum(rho[0])
elif loss_function is not None:
initial_cost = loss_function(f0, cost_only=True)
else:
initial_cost = 0.5 * np.dot(f0, f0)
if callable(jac):
J0 = jac(x0, *args, **kwargs)
if issparse(J0):
J0 = csr_matrix(J0)
def jac_wrapped(x, _=None):
return csr_matrix(jac(x, *args, **kwargs))
elif isinstance(J0, LinearOperator):
def jac_wrapped(x, _=None):
return jac(x, *args, **kwargs)
else:
J0 = np.atleast_2d(J0)
def jac_wrapped(x, _=None):
return np.atleast_2d(jac(x, *args, **kwargs))
else: # Estimate Jacobian by finite differences.
if method == 'lm':
if jac_sparsity is not None:
raise ValueError("method='lm' does not support "
"`jac_sparsity`.")
if jac != '2-point':
warn("jac='{0}' works equivalently to '2-point' "
"for method='lm'.".format(jac))
J0 = jac_wrapped = None
else:
if jac_sparsity is not None and tr_solver == 'exact':
raise ValueError("tr_solver='exact' is incompatible "
"with `jac_sparsity`.")
jac_sparsity = check_jac_sparsity(jac_sparsity, m, n)
def jac_wrapped(x, f):
J = approx_derivative(fun, x, rel_step=diff_step, method=jac,
f0=f, bounds=bounds, args=args,
kwargs=kwargs, sparsity=jac_sparsity)
if J.ndim != 2: # J is guaranteed not sparse.
J = np.atleast_2d(J)
return J
J0 = jac_wrapped(x0, f0)
if J0 is not None:
if J0.shape != (m, n):
raise ValueError(
"The return value of `jac` has wrong shape: expected {0}, "
"actual {1}.".format((m, n), J0.shape))
if not isinstance(J0, np.ndarray):
if method == 'lm':
raise ValueError("method='lm' works only with dense "
"Jacobian matrices.")
if tr_solver == 'exact':
raise ValueError(
"tr_solver='exact' works only with dense "
"Jacobian matrices.")
jac_scale = isinstance(x_scale, string_types) and x_scale == 'jac'
if isinstance(J0, LinearOperator) and jac_scale:
raise ValueError("x_scale='jac' can't be used when `jac` "
"returns LinearOperator.")
if tr_solver is None:
if isinstance(J0, np.ndarray):
tr_solver = 'exact'
else:
tr_solver = 'lsmr'
if method == 'lm':
result = call_minpack(fun_wrapped, x0, jac_wrapped, ftol, xtol, gtol,
max_nfev, x_scale, diff_step)
elif method == 'trf':
result = trf(fun_wrapped, jac_wrapped, x0, f0, J0, lb, ub, ftol, xtol,
gtol, max_nfev, x_scale, loss_function, tr_solver,
tr_options.copy(), verbose)
elif method == 'dogbox':
if tr_solver == 'lsmr' and 'regularize' in tr_options:
warn("The keyword 'regularize' in `tr_options` is not relevant "
"for 'dogbox' method.")
tr_options = tr_options.copy()
del tr_options['regularize']
result = dogbox(fun_wrapped, jac_wrapped, x0, f0, J0, lb, ub, ftol,
xtol, gtol, max_nfev, x_scale, loss_function,
tr_solver, tr_options, verbose)
result.message = TERMINATION_MESSAGES[result.status]
result.success = result.status > 0
if verbose >= 1:
print(result.message)
print("Function evaluations {0}, initial cost {1:.4e}, final cost "
"{2:.4e}, first-order optimality {3:.2e}."
.format(result.nfev, initial_cost, result.cost,
result.optimality))
return result
| mit |
rven/odoo | addons/test_mail_full/tests/test_sms_sms.py | 2 | 7139 | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
import werkzeug
from unittest.mock import patch
from unittest.mock import DEFAULT
from odoo import exceptions
from odoo.addons.sms.models.sms_sms import SmsSms
from odoo.addons.test_mail_full.tests.common import TestMailFullCommon
from odoo.tests import common
class LinkTrackerMock(common.BaseCase):
def setUp(self):
super(LinkTrackerMock, self).setUp()
def _get_title_from_url(u):
return "Test_TITLE"
self.env['ir.config_parameter'].sudo().set_param('web.base.url', 'https://test.odoo.com')
link_tracker_title_patch = patch('odoo.addons.link_tracker.models.link_tracker.LinkTracker._get_title_from_url', wraps=_get_title_from_url)
link_tracker_title_patch.start()
self.addCleanup(link_tracker_title_patch.stop)
self.utm_c = self.env.ref('utm.utm_campaign_fall_drive')
self.utm_m = self.env.ref('mass_mailing_sms.utm_medium_sms')
self.tracker_values = {
'campaign_id': self.utm_c.id,
'medium_id': self.utm_m.id,
}
def assertLinkTracker(self, url, url_params):
links = self.env['link.tracker'].sudo().search([('url', '=', url)])
self.assertEqual(len(links), 1)
# check UTMS are correctly set on redirect URL
original_url = werkzeug.urls.url_parse(url)
redirect_url = werkzeug.urls.url_parse(links.redirected_url)
redirect_params = redirect_url.decode_query().to_dict(flat=True)
self.assertEqual(redirect_url.scheme, original_url.scheme)
self.assertEqual(redirect_url.decode_netloc(), original_url.decode_netloc())
self.assertEqual(redirect_url.path, original_url.path)
self.assertEqual(redirect_params, url_params)
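# A hedged, standalone sketch of the URL decomposition assertLinkTracker
# relies on (pre-2.x werkzeug API, matching the import at the top of the file).
def _demo_url_parse():
    u = werkzeug.urls.url_parse('https://test.odoo.com/page?a=1&b=2')
    assert u.scheme == 'https' and u.path == '/page'
    assert u.decode_query().to_dict(flat=True) == {'a': '1', 'b': '2'}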
class TestSMSPost(TestMailFullCommon, LinkTrackerMock):
@classmethod
def setUpClass(cls):
super(TestSMSPost, cls).setUpClass()
cls._test_body = 'VOID CONTENT'
cls.sms_all = cls.env['sms.sms']
for x in range(10):
cls.sms_all |= cls.env['sms.sms'].create({
'number': '+324560000%s%s' % (x, x),
'body': cls._test_body,
})
def test_body_link_shorten(self):
link = 'http://www.example.com'
self.env['link.tracker'].search([('url', '=', link)]).unlink()
new_body = self.env['mail.render.mixin']._shorten_links_text('Welcome to %s !' % link, self.tracker_values)
self.assertNotIn(link, new_body)
self.assertLinkTracker(link, {'utm_campaign': self.utm_c.name, 'utm_medium': self.utm_m.name})
link = self.env['link.tracker'].search([('url', '=', link)])
self.assertIn(link.short_url, new_body)
link = 'https://test.odoo.com/my/super_page?test[0]=42&toto=áâà#title3'
self.env['link.tracker'].search([('url', '=', link)]).unlink()
new_body = self.env['mail.render.mixin']._shorten_links_text('Welcome to %s !' % link, self.tracker_values)
self.assertNotIn(link, new_body)
self.assertLinkTracker(link, {
'utm_campaign': self.utm_c.name,
'utm_medium': self.utm_m.name,
'test[0]': '42',
'toto': 'áâà',
})
link = self.env['link.tracker'].search([('url', '=', link)])
self.assertIn(link.short_url, new_body)
def test_body_link_shorten_wshort(self):
link = 'https://test.odoo.com/r/RAOUL'
self.env['link.tracker'].search([('url', '=', link)]).unlink()
new_body = self.env['mail.render.mixin']._shorten_links_text('Welcome to %s !' % link, self.tracker_values)
self.assertIn(link, new_body)
self.assertFalse(self.env['link.tracker'].search([('url', '=', link)]))
def test_body_link_shorten_wunsubscribe(self):
link = 'https://test.odoo.com/sms/3/'
self.env['link.tracker'].search([('url', '=', link)]).unlink()
new_body = self.env['mail.render.mixin']._shorten_links_text('Welcome to %s !' % link, self.tracker_values)
self.assertIn(link, new_body)
self.assertFalse(self.env['link.tracker'].search([('url', '=', link)]))
def test_sms_body_link_shorten_suffix(self):
mailing = self.env['mailing.mailing'].create({
'subject': 'Minimal mailing',
'mailing_model_id': self.env['ir.model']._get('mail.test.sms').id,
'mailing_type': 'sms',
})
sms_0 = self.env['sms.sms'].create({
'body': 'Welcome to https://test.odoo.com',
'number': '12',
'mailing_id': mailing.id,
})
sms_1 = self.env['sms.sms'].create({
'body': 'Welcome to https://test.odoo.com/r/RAOUL',
'number': '12',
})
sms_2 = self.env['sms.sms'].create({
'body': 'Welcome to https://test.odoo.com/r/RAOUL',
'number': '12', 'mailing_id': mailing.id,
})
sms_3 = self.env['sms.sms'].create({
'body': 'Welcome to https://test.odoo.com/leodagan/r/RAOUL',
'number': '12', 'mailing_id': mailing.id,
})
res = (sms_0 | sms_1 | sms_2 | sms_3)._update_body_short_links()
self.assertEqual(res[sms_0.id], 'Welcome to https://test.odoo.com')
self.assertEqual(res[sms_1.id], 'Welcome to https://test.odoo.com/r/RAOUL')
self.assertEqual(res[sms_2.id], 'Welcome to https://test.odoo.com/r/RAOUL/s/%s' % sms_2.id)
self.assertEqual(res[sms_3.id], 'Welcome to https://test.odoo.com/leodagan/r/RAOUL')
def test_sms_send_batch_size(self):
self.count = 0
def _send(sms_self, delete_all=False, raise_exception=False):
self.count += 1
return DEFAULT
self.env['ir.config_parameter'].set_param('sms.session.batch.size', '3')
with patch.object(SmsSms, '_send', autospec=True, side_effect=_send) as send_mock:
self.env['sms.sms'].browse(self.sms_all.ids).send()
self.assertEqual(self.count, 4)
def test_sms_send_crash_employee(self):
with self.assertRaises(exceptions.AccessError):
self.env['sms.sms'].with_user(self.user_employee).browse(self.sms_all.ids).send()
def test_sms_send_delete_all(self):
with self.mockSMSGateway(sms_allow_unlink=True, sim_error='jsonrpc_exception'):
self.env['sms.sms'].browse(self.sms_all.ids).send(delete_all=True, raise_exception=False)
self.assertFalse(len(self.sms_all.exists()))
def test_sms_send_raise(self):
with self.assertRaises(exceptions.AccessError):
with self.mockSMSGateway(sim_error='jsonrpc_exception'):
self.env['sms.sms'].browse(self.sms_all.ids).send(raise_exception=True)
self.assertEqual(set(self.sms_all.mapped('state')), set(['outgoing']))
def test_sms_send_raise_catch(self):
with self.mockSMSGateway(sim_error='jsonrpc_exception'):
self.env['sms.sms'].browse(self.sms_all.ids).send(raise_exception=False)
self.assertEqual(set(self.sms_all.mapped('state')), set(['error']))
| agpl-3.0 |
Lh4cKg/sl4a | python/src/Lib/xml/dom/minidom.py | 60 | 66134 | """\
minidom.py -- a lightweight DOM implementation.
parse("foo.xml")
parseString("<foo><bar/></foo>")
Todo:
=====
* convenience methods for getting elements and text.
* more testing
* bring some of the writer and linearizer code into conformance with this
interface
* SAX 2 namespaces
"""
import xml.dom
from xml.dom import EMPTY_NAMESPACE, EMPTY_PREFIX, XMLNS_NAMESPACE, domreg
from xml.dom.minicompat import *
from xml.dom.xmlbuilder import DOMImplementationLS, DocumentLS
# This is used by the ID-cache invalidation checks; the list isn't
# actually complete, since the nodes being checked will never be the
# DOCUMENT_NODE or DOCUMENT_FRAGMENT_NODE. (The node being checked is
# the node being added or removed, not the node being modified.)
#
_nodeTypes_with_children = (xml.dom.Node.ELEMENT_NODE,
xml.dom.Node.ENTITY_REFERENCE_NODE)
class Node(xml.dom.Node):
namespaceURI = None # this is non-null only for elements and attributes
parentNode = None
ownerDocument = None
nextSibling = None
previousSibling = None
prefix = EMPTY_PREFIX # non-null only for NS elements and attributes
def __nonzero__(self):
return True
def toxml(self, encoding = None):
return self.toprettyxml("", "", encoding)
def toprettyxml(self, indent="\t", newl="\n", encoding = None):
# indent = the indentation string to prepend, per level
# newl = the newline string to append
writer = _get_StringIO()
if encoding is not None:
import codecs
# Can't use codecs.getwriter to preserve 2.0 compatibility
writer = codecs.lookup(encoding)[3](writer)
if self.nodeType == Node.DOCUMENT_NODE:
# Can pass encoding only to document, to put it into XML header
self.writexml(writer, "", indent, newl, encoding)
else:
self.writexml(writer, "", indent, newl)
return writer.getvalue()
def hasChildNodes(self):
if self.childNodes:
return True
else:
return False
def _get_childNodes(self):
return self.childNodes
def _get_firstChild(self):
if self.childNodes:
return self.childNodes[0]
def _get_lastChild(self):
if self.childNodes:
return self.childNodes[-1]
def insertBefore(self, newChild, refChild):
if newChild.nodeType == self.DOCUMENT_FRAGMENT_NODE:
for c in tuple(newChild.childNodes):
self.insertBefore(c, refChild)
### The DOM does not clearly specify what to return in this case
return newChild
if newChild.nodeType not in self._child_node_types:
raise xml.dom.HierarchyRequestErr(
"%s cannot be child of %s" % (repr(newChild), repr(self)))
if newChild.parentNode is not None:
newChild.parentNode.removeChild(newChild)
if refChild is None:
self.appendChild(newChild)
else:
try:
index = self.childNodes.index(refChild)
except ValueError:
raise xml.dom.NotFoundErr()
if newChild.nodeType in _nodeTypes_with_children:
_clear_id_cache(self)
self.childNodes.insert(index, newChild)
newChild.nextSibling = refChild
refChild.previousSibling = newChild
if index:
node = self.childNodes[index-1]
node.nextSibling = newChild
newChild.previousSibling = node
else:
newChild.previousSibling = None
newChild.parentNode = self
return newChild
def appendChild(self, node):
if node.nodeType == self.DOCUMENT_FRAGMENT_NODE:
for c in tuple(node.childNodes):
self.appendChild(c)
### The DOM does not clearly specify what to return in this case
return node
if node.nodeType not in self._child_node_types:
raise xml.dom.HierarchyRequestErr(
"%s cannot be child of %s" % (repr(node), repr(self)))
elif node.nodeType in _nodeTypes_with_children:
_clear_id_cache(self)
if node.parentNode is not None:
node.parentNode.removeChild(node)
_append_child(self, node)
node.nextSibling = None
return node
def replaceChild(self, newChild, oldChild):
if newChild.nodeType == self.DOCUMENT_FRAGMENT_NODE:
refChild = oldChild.nextSibling
self.removeChild(oldChild)
return self.insertBefore(newChild, refChild)
if newChild.nodeType not in self._child_node_types:
raise xml.dom.HierarchyRequestErr(
"%s cannot be child of %s" % (repr(newChild), repr(self)))
if newChild is oldChild:
return
if newChild.parentNode is not None:
newChild.parentNode.removeChild(newChild)
try:
index = self.childNodes.index(oldChild)
except ValueError:
raise xml.dom.NotFoundErr()
self.childNodes[index] = newChild
newChild.parentNode = self
oldChild.parentNode = None
if (newChild.nodeType in _nodeTypes_with_children
or oldChild.nodeType in _nodeTypes_with_children):
_clear_id_cache(self)
newChild.nextSibling = oldChild.nextSibling
newChild.previousSibling = oldChild.previousSibling
oldChild.nextSibling = None
oldChild.previousSibling = None
if newChild.previousSibling:
newChild.previousSibling.nextSibling = newChild
if newChild.nextSibling:
newChild.nextSibling.previousSibling = newChild
return oldChild
def removeChild(self, oldChild):
try:
self.childNodes.remove(oldChild)
except ValueError:
raise xml.dom.NotFoundErr()
if oldChild.nextSibling is not None:
oldChild.nextSibling.previousSibling = oldChild.previousSibling
if oldChild.previousSibling is not None:
oldChild.previousSibling.nextSibling = oldChild.nextSibling
oldChild.nextSibling = oldChild.previousSibling = None
if oldChild.nodeType in _nodeTypes_with_children:
_clear_id_cache(self)
oldChild.parentNode = None
return oldChild
def normalize(self):
L = []
for child in self.childNodes:
if child.nodeType == Node.TEXT_NODE:
data = child.data
if data and L and L[-1].nodeType == child.nodeType:
# collapse text node
node = L[-1]
node.data = node.data + child.data
node.nextSibling = child.nextSibling
child.unlink()
elif data:
if L:
L[-1].nextSibling = child
child.previousSibling = L[-1]
else:
child.previousSibling = None
L.append(child)
else:
# empty text node; discard
child.unlink()
else:
if L:
L[-1].nextSibling = child
child.previousSibling = L[-1]
else:
child.previousSibling = None
L.append(child)
if child.nodeType == Node.ELEMENT_NODE:
child.normalize()
if L:
L[-1].nextSibling = None
self.childNodes[:] = L
def cloneNode(self, deep):
return _clone_node(self, deep, self.ownerDocument or self)
def isSupported(self, feature, version):
return self.ownerDocument.implementation.hasFeature(feature, version)
def _get_localName(self):
# Overridden in Element and Attr where localName can be Non-Null
return None
# Node interfaces from Level 3 (WD 9 April 2002)
def isSameNode(self, other):
return self is other
def getInterface(self, feature):
if self.isSupported(feature, None):
return self
else:
return None
# The "user data" functions use a dictionary that is only present
# if some user data has been set, so be careful not to assume it
# exists.
def getUserData(self, key):
try:
return self._user_data[key][0]
except (AttributeError, KeyError):
return None
def setUserData(self, key, data, handler):
old = None
try:
d = self._user_data
except AttributeError:
d = {}
self._user_data = d
if key in d:
old = d[key][0]
if data is None:
# ignore handlers passed for None
handler = None
if old is not None:
del d[key]
else:
d[key] = (data, handler)
return old
def _call_user_data_handler(self, operation, src, dst):
if hasattr(self, "_user_data"):
for key, (data, handler) in self._user_data.items():
if handler is not None:
handler.handle(operation, key, data, src, dst)
# minidom-specific API:
def unlink(self):
self.parentNode = self.ownerDocument = None
if self.childNodes:
for child in self.childNodes:
child.unlink()
self.childNodes = NodeList()
self.previousSibling = None
self.nextSibling = None
defproperty(Node, "firstChild", doc="First child node, or None.")
defproperty(Node, "lastChild", doc="Last child node, or None.")
defproperty(Node, "localName", doc="Namespace-local name of this node.")
def _append_child(self, node):
# fast path with fewer checks; usable by DOM builders if careful
childNodes = self.childNodes
if childNodes:
last = childNodes[-1]
node.__dict__["previousSibling"] = last
last.__dict__["nextSibling"] = node
childNodes.append(node)
node.__dict__["parentNode"] = self
def _in_document(node):
# return True iff node is part of a document tree
while node is not None:
if node.nodeType == Node.DOCUMENT_NODE:
return True
node = node.parentNode
return False
def _write_data(writer, data):
"Writes datachars to writer."
data = data.replace("&", "&").replace("<", "<")
data = data.replace("\"", """).replace(">", ">")
writer.write(data)
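# A quick sketch of the escaping _write_data performs; the writer only needs
# a write() method, so a tiny buffer class suffices.
def _write_data_demo():
    class Buf(object):
        def __init__(self):
            self.parts = []
        def write(self, s):
            self.parts.append(s)
    buf = Buf()
    _write_data(buf, 'a < b & "c"')
    assert "".join(buf.parts) == 'a &lt; b &amp; &quot;c&quot;'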
def _get_elements_by_tagName_helper(parent, name, rc):
for node in parent.childNodes:
if node.nodeType == Node.ELEMENT_NODE and \
(name == "*" or node.tagName == name):
rc.append(node)
_get_elements_by_tagName_helper(node, name, rc)
return rc
def _get_elements_by_tagName_ns_helper(parent, nsURI, localName, rc):
for node in parent.childNodes:
if node.nodeType == Node.ELEMENT_NODE:
if ((localName == "*" or node.localName == localName) and
(nsURI == "*" or node.namespaceURI == nsURI)):
rc.append(node)
_get_elements_by_tagName_ns_helper(node, nsURI, localName, rc)
return rc
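# A short sketch of the recursive tag search the helpers above implement
# (standalone; uses only the public minidom API). Matches are collected in
# document (pre-)order, and "*" matches every element.
def _tag_search_demo():
    from xml.dom.minidom import parseString
    doc = parseString("<r><a/><b><a/></b></r>")
    assert len(doc.getElementsByTagName("a")) == 2
    assert [n.tagName for n in doc.getElementsByTagName("*")] == ["r", "a", "b", "a"]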
class DocumentFragment(Node):
nodeType = Node.DOCUMENT_FRAGMENT_NODE
nodeName = "#document-fragment"
nodeValue = None
attributes = None
parentNode = None
_child_node_types = (Node.ELEMENT_NODE,
Node.TEXT_NODE,
Node.CDATA_SECTION_NODE,
Node.ENTITY_REFERENCE_NODE,
Node.PROCESSING_INSTRUCTION_NODE,
Node.COMMENT_NODE,
Node.NOTATION_NODE)
def __init__(self):
self.childNodes = NodeList()
class Attr(Node):
nodeType = Node.ATTRIBUTE_NODE
attributes = None
ownerElement = None
specified = False
_is_id = False
_child_node_types = (Node.TEXT_NODE, Node.ENTITY_REFERENCE_NODE)
def __init__(self, qName, namespaceURI=EMPTY_NAMESPACE, localName=None,
prefix=None):
# skip setattr for performance
d = self.__dict__
d["nodeName"] = d["name"] = qName
d["namespaceURI"] = namespaceURI
d["prefix"] = prefix
d['childNodes'] = NodeList()
# Add the single child node that represents the value of the attr
self.childNodes.append(Text())
# nodeValue and value are set elsewhere
def _get_localName(self):
return self.nodeName.split(":", 1)[-1]
def _get_name(self):
return self.name
def _get_specified(self):
return self.specified
def __setattr__(self, name, value):
d = self.__dict__
if name in ("value", "nodeValue"):
d["value"] = d["nodeValue"] = value
d2 = self.childNodes[0].__dict__
d2["data"] = d2["nodeValue"] = value
if self.ownerElement is not None:
_clear_id_cache(self.ownerElement)
elif name in ("name", "nodeName"):
d["name"] = d["nodeName"] = value
if self.ownerElement is not None:
_clear_id_cache(self.ownerElement)
else:
d[name] = value
def _set_prefix(self, prefix):
nsuri = self.namespaceURI
if prefix == "xmlns":
if nsuri and nsuri != XMLNS_NAMESPACE:
raise xml.dom.NamespaceErr(
"illegal use of 'xmlns' prefix for the wrong namespace")
d = self.__dict__
d['prefix'] = prefix
if prefix is None:
newName = self.localName
else:
newName = "%s:%s" % (prefix, self.localName)
if self.ownerElement:
_clear_id_cache(self.ownerElement)
d['nodeName'] = d['name'] = newName
def _set_value(self, value):
d = self.__dict__
d['value'] = d['nodeValue'] = value
if self.ownerElement:
_clear_id_cache(self.ownerElement)
self.childNodes[0].data = value
def unlink(self):
# This implementation does not call the base implementation
# since most of that is not needed, and the expense of the
# method call is not warranted. We duplicate the removal of
# children, but that's all we needed from the base class.
elem = self.ownerElement
if elem is not None:
del elem._attrs[self.nodeName]
del elem._attrsNS[(self.namespaceURI, self.localName)]
if self._is_id:
self._is_id = False
elem._magic_id_nodes -= 1
self.ownerDocument._magic_id_count -= 1
for child in self.childNodes:
child.unlink()
del self.childNodes[:]
def _get_isId(self):
if self._is_id:
return True
doc = self.ownerDocument
elem = self.ownerElement
if doc is None or elem is None:
return False
info = doc._get_elem_info(elem)
if info is None:
return False
if self.namespaceURI:
return info.isIdNS(self.namespaceURI, self.localName)
else:
return info.isId(self.nodeName)
def _get_schemaType(self):
doc = self.ownerDocument
elem = self.ownerElement
if doc is None or elem is None:
return _no_type
info = doc._get_elem_info(elem)
if info is None:
return _no_type
if self.namespaceURI:
return info.getAttributeTypeNS(self.namespaceURI, self.localName)
else:
return info.getAttributeType(self.nodeName)
defproperty(Attr, "isId", doc="True if this attribute is an ID.")
defproperty(Attr, "localName", doc="Namespace-local name of this attribute.")
defproperty(Attr, "schemaType", doc="Schema type for this attribute.")
class NamedNodeMap(object):
"""The attribute list is a transient interface to the underlying
dictionaries. Mutations here will change the underlying element's
dictionary.
Ordering is imposed artificially and does not reflect the order of
attributes as found in an input document.
"""
__slots__ = ('_attrs', '_attrsNS', '_ownerElement')
def __init__(self, attrs, attrsNS, ownerElement):
self._attrs = attrs
self._attrsNS = attrsNS
self._ownerElement = ownerElement
def _get_length(self):
return len(self._attrs)
def item(self, index):
try:
return self[self._attrs.keys()[index]]
except IndexError:
return None
def items(self):
L = []
for node in self._attrs.values():
L.append((node.nodeName, node.value))
return L
def itemsNS(self):
L = []
for node in self._attrs.values():
L.append(((node.namespaceURI, node.localName), node.value))
return L
def has_key(self, key):
if isinstance(key, StringTypes):
return self._attrs.has_key(key)
else:
return self._attrsNS.has_key(key)
def keys(self):
return self._attrs.keys()
def keysNS(self):
return self._attrsNS.keys()
def values(self):
return self._attrs.values()
def get(self, name, value=None):
return self._attrs.get(name, value)
__len__ = _get_length
__hash__ = None # Mutable type can't be correctly hashed
def __cmp__(self, other):
if self._attrs is getattr(other, "_attrs", None):
return 0
else:
return cmp(id(self), id(other))
def __getitem__(self, attname_or_tuple):
if isinstance(attname_or_tuple, tuple):
return self._attrsNS[attname_or_tuple]
else:
return self._attrs[attname_or_tuple]
# same as set
def __setitem__(self, attname, value):
if isinstance(value, StringTypes):
try:
node = self._attrs[attname]
except KeyError:
node = Attr(attname)
node.ownerDocument = self._ownerElement.ownerDocument
self.setNamedItem(node)
node.value = value
else:
if not isinstance(value, Attr):
raise TypeError, "value must be a string or Attr object"
node = value
self.setNamedItem(node)
def getNamedItem(self, name):
try:
return self._attrs[name]
except KeyError:
return None
def getNamedItemNS(self, namespaceURI, localName):
try:
return self._attrsNS[(namespaceURI, localName)]
except KeyError:
return None
def removeNamedItem(self, name):
n = self.getNamedItem(name)
if n is not None:
_clear_id_cache(self._ownerElement)
del self._attrs[n.nodeName]
del self._attrsNS[(n.namespaceURI, n.localName)]
if 'ownerElement' in n.__dict__:
n.__dict__['ownerElement'] = None
return n
else:
raise xml.dom.NotFoundErr()
def removeNamedItemNS(self, namespaceURI, localName):
n = self.getNamedItemNS(namespaceURI, localName)
if n is not None:
_clear_id_cache(self._ownerElement)
del self._attrsNS[(n.namespaceURI, n.localName)]
del self._attrs[n.nodeName]
if 'ownerElement' in n.__dict__:
n.__dict__['ownerElement'] = None
return n
else:
raise xml.dom.NotFoundErr()
def setNamedItem(self, node):
if not isinstance(node, Attr):
raise xml.dom.HierarchyRequestErr(
"%s cannot be child of %s" % (repr(node), repr(self)))
old = self._attrs.get(node.name)
if old:
old.unlink()
self._attrs[node.name] = node
self._attrsNS[(node.namespaceURI, node.localName)] = node
node.ownerElement = self._ownerElement
_clear_id_cache(node.ownerElement)
return old
def setNamedItemNS(self, node):
return self.setNamedItem(node)
def __delitem__(self, attname_or_tuple):
node = self[attname_or_tuple]
_clear_id_cache(node.ownerElement)
node.unlink()
def __getstate__(self):
return self._attrs, self._attrsNS, self._ownerElement
def __setstate__(self, state):
self._attrs, self._attrsNS, self._ownerElement = state
defproperty(NamedNodeMap, "length",
doc="Number of nodes in the NamedNodeMap.")
AttributeList = NamedNodeMap
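# A brief usage sketch of the NamedNodeMap defined above (standalone):
def _attr_map_demo():
    from xml.dom.minidom import parseString
    elem = parseString('<a href="x" id="top"/>').documentElement
    attrs = elem.attributes            # a NamedNodeMap
    assert attrs.length == 2
    assert attrs["href"].value == "x"
    assert sorted(attrs.keys()) == ["href", "id"]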
class TypeInfo(object):
__slots__ = 'namespace', 'name'
def __init__(self, namespace, name):
self.namespace = namespace
self.name = name
def __repr__(self):
if self.namespace:
return "<TypeInfo %r (from %r)>" % (self.name, self.namespace)
else:
return "<TypeInfo %r>" % self.name
def _get_name(self):
return self.name
def _get_namespace(self):
return self.namespace
_no_type = TypeInfo(None, None)
class Element(Node):
nodeType = Node.ELEMENT_NODE
nodeValue = None
schemaType = _no_type
_magic_id_nodes = 0
_child_node_types = (Node.ELEMENT_NODE,
Node.PROCESSING_INSTRUCTION_NODE,
Node.COMMENT_NODE,
Node.TEXT_NODE,
Node.CDATA_SECTION_NODE,
Node.ENTITY_REFERENCE_NODE)
def __init__(self, tagName, namespaceURI=EMPTY_NAMESPACE, prefix=None,
localName=None):
self.tagName = self.nodeName = tagName
self.prefix = prefix
self.namespaceURI = namespaceURI
self.childNodes = NodeList()
self._attrs = {} # attributes are double-indexed:
self._attrsNS = {} # tagName -> Attribute
# URI,localName -> Attribute
# in the future: consider lazy generation
# of attribute objects; this is too tricky
# for now because of headaches with
# namespaces.
def _get_localName(self):
return self.tagName.split(":", 1)[-1]
def _get_tagName(self):
return self.tagName
def unlink(self):
for attr in self._attrs.values():
attr.unlink()
self._attrs = None
self._attrsNS = None
Node.unlink(self)
def getAttribute(self, attname):
try:
return self._attrs[attname].value
except KeyError:
return ""
def getAttributeNS(self, namespaceURI, localName):
try:
return self._attrsNS[(namespaceURI, localName)].value
except KeyError:
return ""
def setAttribute(self, attname, value):
attr = self.getAttributeNode(attname)
if attr is None:
attr = Attr(attname)
# for performance
d = attr.__dict__
d["value"] = d["nodeValue"] = value
d["ownerDocument"] = self.ownerDocument
self.setAttributeNode(attr)
elif value != attr.value:
d = attr.__dict__
d["value"] = d["nodeValue"] = value
if attr.isId:
_clear_id_cache(self)
def setAttributeNS(self, namespaceURI, qualifiedName, value):
prefix, localname = _nssplit(qualifiedName)
attr = self.getAttributeNodeNS(namespaceURI, localname)
if attr is None:
# for performance
attr = Attr(qualifiedName, namespaceURI, localname, prefix)
d = attr.__dict__
d["prefix"] = prefix
d["nodeName"] = qualifiedName
d["value"] = d["nodeValue"] = value
d["ownerDocument"] = self.ownerDocument
self.setAttributeNode(attr)
else:
d = attr.__dict__
if value != attr.value:
d["value"] = d["nodeValue"] = value
if attr.isId:
_clear_id_cache(self)
if attr.prefix != prefix:
d["prefix"] = prefix
d["nodeName"] = qualifiedName
def getAttributeNode(self, attrname):
return self._attrs.get(attrname)
def getAttributeNodeNS(self, namespaceURI, localName):
return self._attrsNS.get((namespaceURI, localName))
def setAttributeNode(self, attr):
if attr.ownerElement not in (None, self):
raise xml.dom.InuseAttributeErr("attribute node already owned")
old1 = self._attrs.get(attr.name, None)
if old1 is not None:
self.removeAttributeNode(old1)
old2 = self._attrsNS.get((attr.namespaceURI, attr.localName), None)
if old2 is not None and old2 is not old1:
self.removeAttributeNode(old2)
_set_attribute_node(self, attr)
if old1 is not attr:
# It might have already been part of this node, in which case
# it doesn't represent a change, and should not be returned.
return old1
if old2 is not attr:
return old2
setAttributeNodeNS = setAttributeNode
def removeAttribute(self, name):
try:
attr = self._attrs[name]
except KeyError:
raise xml.dom.NotFoundErr()
self.removeAttributeNode(attr)
def removeAttributeNS(self, namespaceURI, localName):
try:
attr = self._attrsNS[(namespaceURI, localName)]
except KeyError:
raise xml.dom.NotFoundErr()
self.removeAttributeNode(attr)
def removeAttributeNode(self, node):
if node is None:
raise xml.dom.NotFoundErr()
try:
self._attrs[node.name]
except KeyError:
raise xml.dom.NotFoundErr()
_clear_id_cache(self)
node.unlink()
# Restore this since the node is still useful and otherwise
# unlinked
node.ownerDocument = self.ownerDocument
removeAttributeNodeNS = removeAttributeNode
def hasAttribute(self, name):
return self._attrs.has_key(name)
def hasAttributeNS(self, namespaceURI, localName):
return self._attrsNS.has_key((namespaceURI, localName))
def getElementsByTagName(self, name):
return _get_elements_by_tagName_helper(self, name, NodeList())
def getElementsByTagNameNS(self, namespaceURI, localName):
return _get_elements_by_tagName_ns_helper(
self, namespaceURI, localName, NodeList())
def __repr__(self):
return "<DOM Element: %s at %#x>" % (self.tagName, id(self))
def writexml(self, writer, indent="", addindent="", newl=""):
# indent = current indentation
# addindent = indentation to add to higher levels
# newl = newline string
writer.write(indent+"<" + self.tagName)
attrs = self._get_attributes()
a_names = attrs.keys()
a_names.sort()
for a_name in a_names:
writer.write(" %s=\"" % a_name)
_write_data(writer, attrs[a_name].value)
writer.write("\"")
if self.childNodes:
writer.write(">%s"%(newl))
for node in self.childNodes:
node.writexml(writer,indent+addindent,addindent,newl)
writer.write("%s</%s>%s" % (indent,self.tagName,newl))
else:
writer.write("/>%s"%(newl))
def _get_attributes(self):
return NamedNodeMap(self._attrs, self._attrsNS, self)
def hasAttributes(self):
if self._attrs:
return True
else:
return False
# DOM Level 3 attributes, based on the 22 Oct 2002 draft
def setIdAttribute(self, name):
idAttr = self.getAttributeNode(name)
self.setIdAttributeNode(idAttr)
def setIdAttributeNS(self, namespaceURI, localName):
idAttr = self.getAttributeNodeNS(namespaceURI, localName)
self.setIdAttributeNode(idAttr)
def setIdAttributeNode(self, idAttr):
if idAttr is None or not self.isSameNode(idAttr.ownerElement):
raise xml.dom.NotFoundErr()
if _get_containing_entref(self) is not None:
raise xml.dom.NoModificationAllowedErr()
if not idAttr._is_id:
idAttr.__dict__['_is_id'] = True
self._magic_id_nodes += 1
self.ownerDocument._magic_id_count += 1
_clear_id_cache(self)
defproperty(Element, "attributes",
doc="NamedNodeMap of attributes on the element.")
defproperty(Element, "localName",
doc="Namespace-local name of this element.")
def _set_attribute_node(element, attr):
_clear_id_cache(element)
element._attrs[attr.name] = attr
element._attrsNS[(attr.namespaceURI, attr.localName)] = attr
# This creates a circular reference, but Element.unlink()
# breaks the cycle since the references to the attribute
# dictionaries are tossed.
attr.__dict__['ownerElement'] = element
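# Illustrative sketch (hypothetical helper, never called anywhere): how the
# Element attribute accessors above behave. Document is defined later in
# this module and is resolved when the function runs.
def _demo_element_attributes():
    doc = Document()
    elem = doc.createElement("book")
    elem.setAttribute("lang", "en")         # creates an Attr node on demand
    assert elem.getAttribute("lang") == "en"
    assert elem.hasAttribute("lang")
    elem.removeAttribute("lang")
    assert elem.getAttribute("lang") == ""  # missing attributes read as ""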
class Childless:
"""Mixin that makes childless-ness easy to implement and avoids
the complexity of the Node methods that deal with children.
"""
attributes = None
childNodes = EmptyNodeList()
firstChild = None
lastChild = None
def _get_firstChild(self):
return None
def _get_lastChild(self):
return None
def appendChild(self, node):
raise xml.dom.HierarchyRequestErr(
self.nodeName + " nodes cannot have children")
def hasChildNodes(self):
return False
def insertBefore(self, newChild, refChild):
raise xml.dom.HierarchyRequestErr(
self.nodeName + " nodes do not have children")
def removeChild(self, oldChild):
raise xml.dom.NotFoundErr(
self.nodeName + " nodes do not have children")
def replaceChild(self, newChild, oldChild):
raise xml.dom.HierarchyRequestErr(
self.nodeName + " nodes do not have children")
class ProcessingInstruction(Childless, Node):
nodeType = Node.PROCESSING_INSTRUCTION_NODE
def __init__(self, target, data):
self.target = self.nodeName = target
self.data = self.nodeValue = data
def _get_data(self):
return self.data
def _set_data(self, value):
d = self.__dict__
d['data'] = d['nodeValue'] = value
def _get_target(self):
return self.target
def _set_target(self, value):
d = self.__dict__
d['target'] = d['nodeName'] = value
def __setattr__(self, name, value):
if name == "data" or name == "nodeValue":
self.__dict__['data'] = self.__dict__['nodeValue'] = value
elif name == "target" or name == "nodeName":
self.__dict__['target'] = self.__dict__['nodeName'] = value
else:
self.__dict__[name] = value
def writexml(self, writer, indent="", addindent="", newl=""):
writer.write("%s<?%s %s?>%s" % (indent,self.target, self.data, newl))
class CharacterData(Childless, Node):
def _get_length(self):
return len(self.data)
__len__ = _get_length
def _get_data(self):
return self.__dict__['data']
def _set_data(self, data):
d = self.__dict__
d['data'] = d['nodeValue'] = data
_get_nodeValue = _get_data
_set_nodeValue = _set_data
def __setattr__(self, name, value):
if name == "data" or name == "nodeValue":
self.__dict__['data'] = self.__dict__['nodeValue'] = value
else:
self.__dict__[name] = value
def __repr__(self):
data = self.data
if len(data) > 10:
dotdotdot = "..."
else:
dotdotdot = ""
return '<DOM %s node "%r%s">' % (
self.__class__.__name__, data[0:10], dotdotdot)
def substringData(self, offset, count):
if offset < 0:
raise xml.dom.IndexSizeErr("offset cannot be negative")
if offset >= len(self.data):
raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
if count < 0:
raise xml.dom.IndexSizeErr("count cannot be negative")
return self.data[offset:offset+count]
def appendData(self, arg):
self.data = self.data + arg
def insertData(self, offset, arg):
if offset < 0:
raise xml.dom.IndexSizeErr("offset cannot be negative")
if offset >= len(self.data):
raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
if arg:
self.data = "%s%s%s" % (
self.data[:offset], arg, self.data[offset:])
def deleteData(self, offset, count):
if offset < 0:
raise xml.dom.IndexSizeErr("offset cannot be negative")
if offset >= len(self.data):
raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
if count < 0:
raise xml.dom.IndexSizeErr("count cannot be negative")
if count:
self.data = self.data[:offset] + self.data[offset+count:]
def replaceData(self, offset, count, arg):
if offset < 0:
raise xml.dom.IndexSizeErr("offset cannot be negative")
if offset >= len(self.data):
raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
if count < 0:
raise xml.dom.IndexSizeErr("count cannot be negative")
if count:
self.data = "%s%s%s" % (
self.data[:offset], arg, self.data[offset+count:])
defproperty(CharacterData, "length", doc="Length of the string data.")
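# Illustrative sketch (hypothetical, never called): the offset/count
# arithmetic of the CharacterData editing methods above, shown on a Text
# node (defined just below). Note that these methods require offset to be
# strictly less than len(data).
def _demo_character_data_edits():
    t = Text()
    t.data = "hello world"
    assert t.substringData(0, 5) == "hello"
    t.deleteData(5, 6)             # remove " world"
    assert t.data == "hello"
    t.insertData(0, "oh, ")        # splice new text in at offset 0
    assert t.data == "oh, hello"
    t.replaceData(0, 2, "ah")      # deleteData + insertData in one step
    assert t.data == "ah, hello"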
class Text(CharacterData):
# Make sure we don't add an instance __dict__ if we don't already
# have one, at least when that's possible:
# XXX this does not work, CharacterData is an old-style class
# __slots__ = ()
nodeType = Node.TEXT_NODE
nodeName = "#text"
attributes = None
def splitText(self, offset):
if offset < 0 or offset > len(self.data):
raise xml.dom.IndexSizeErr("illegal offset value")
newText = self.__class__()
newText.data = self.data[offset:]
newText.ownerDocument = self.ownerDocument
next = self.nextSibling
if self.parentNode and self in self.parentNode.childNodes:
if next is None:
self.parentNode.appendChild(newText)
else:
self.parentNode.insertBefore(newText, next)
self.data = self.data[:offset]
return newText
def writexml(self, writer, indent="", addindent="", newl=""):
_write_data(writer, "%s%s%s"%(indent, self.data, newl))
# DOM Level 3 (WD 9 April 2002)
def _get_wholeText(self):
L = [self.data]
n = self.previousSibling
while n is not None:
if n.nodeType in (Node.TEXT_NODE, Node.CDATA_SECTION_NODE):
L.insert(0, n.data)
n = n.previousSibling
else:
break
n = self.nextSibling
while n is not None:
if n.nodeType in (Node.TEXT_NODE, Node.CDATA_SECTION_NODE):
L.append(n.data)
n = n.nextSibling
else:
break
return ''.join(L)
def replaceWholeText(self, content):
# XXX This needs to be seriously changed if minidom ever
# supports EntityReference nodes.
parent = self.parentNode
n = self.previousSibling
while n is not None:
if n.nodeType in (Node.TEXT_NODE, Node.CDATA_SECTION_NODE):
next = n.previousSibling
parent.removeChild(n)
n = next
else:
break
n = self.nextSibling
if not content:
parent.removeChild(self)
while n is not None:
if n.nodeType in (Node.TEXT_NODE, Node.CDATA_SECTION_NODE):
next = n.nextSibling
parent.removeChild(n)
n = next
else:
break
if content:
d = self.__dict__
d['data'] = content
d['nodeValue'] = content
return self
else:
return None
def _get_isWhitespaceInElementContent(self):
if self.data.strip():
return False
elem = _get_containing_element(self)
if elem is None:
return False
info = self.ownerDocument._get_elem_info(elem)
if info is None:
return False
else:
return info.isElementContent()
defproperty(Text, "isWhitespaceInElementContent",
doc="True iff this text node contains only whitespace"
" and is in element content.")
defproperty(Text, "wholeText",
doc="The text of all logically-adjacent text nodes.")
def _get_containing_element(node):
c = node.parentNode
while c is not None:
if c.nodeType == Node.ELEMENT_NODE:
return c
c = c.parentNode
return None
def _get_containing_entref(node):
c = node.parentNode
while c is not None:
if c.nodeType == Node.ENTITY_REFERENCE_NODE:
return c
c = c.parentNode
return None
class Comment(Childless, CharacterData):
nodeType = Node.COMMENT_NODE
nodeName = "#comment"
def __init__(self, data):
self.data = self.nodeValue = data
def writexml(self, writer, indent="", addindent="", newl=""):
if "--" in self.data:
raise ValueError("'--' is not allowed in a comment node")
writer.write("%s<!--%s-->%s" % (indent, self.data, newl))
class CDATASection(Text):
# Make sure we don't add an instance __dict__ if we don't already
# have one, at least when that's possible:
# XXX this does not work, Text is an old-style class
# __slots__ = ()
nodeType = Node.CDATA_SECTION_NODE
nodeName = "#cdata-section"
def writexml(self, writer, indent="", addindent="", newl=""):
if self.data.find("]]>") >= 0:
raise ValueError("']]>' not allowed in a CDATA section")
writer.write("<![CDATA[%s]]>" % self.data)
class ReadOnlySequentialNamedNodeMap(object):
__slots__ = '_seq',
def __init__(self, seq=()):
# seq should be a list or tuple
self._seq = seq
def __len__(self):
return len(self._seq)
def _get_length(self):
return len(self._seq)
def getNamedItem(self, name):
for n in self._seq:
if n.nodeName == name:
return n
def getNamedItemNS(self, namespaceURI, localName):
for n in self._seq:
if n.namespaceURI == namespaceURI and n.localName == localName:
return n
def __getitem__(self, name_or_tuple):
if isinstance(name_or_tuple, tuple):
node = self.getNamedItemNS(*name_or_tuple)
else:
node = self.getNamedItem(name_or_tuple)
if node is None:
raise KeyError(name_or_tuple)
return node
def item(self, index):
if index < 0:
return None
try:
return self._seq[index]
except IndexError:
return None
def removeNamedItem(self, name):
raise xml.dom.NoModificationAllowedErr(
"NamedNodeMap instance is read-only")
def removeNamedItemNS(self, namespaceURI, localName):
raise xml.dom.NoModificationAllowedErr(
"NamedNodeMap instance is read-only")
def setNamedItem(self, node):
raise xml.dom.NoModificationAllowedErr(
"NamedNodeMap instance is read-only")
def setNamedItemNS(self, node):
raise xml.dom.NoModificationAllowedErr(
"NamedNodeMap instance is read-only")
def __getstate__(self):
return [self._seq]
def __setstate__(self, state):
self._seq = state[0]
defproperty(ReadOnlySequentialNamedNodeMap, "length",
doc="Number of entries in the NamedNodeMap.")
class Identified:
"""Mix-in class that supports the publicId and systemId attributes."""
# XXX this does not work, this is an old-style class
# __slots__ = 'publicId', 'systemId'
def _identified_mixin_init(self, publicId, systemId):
self.publicId = publicId
self.systemId = systemId
def _get_publicId(self):
return self.publicId
def _get_systemId(self):
return self.systemId
class DocumentType(Identified, Childless, Node):
nodeType = Node.DOCUMENT_TYPE_NODE
nodeValue = None
name = None
publicId = None
systemId = None
internalSubset = None
def __init__(self, qualifiedName):
self.entities = ReadOnlySequentialNamedNodeMap()
self.notations = ReadOnlySequentialNamedNodeMap()
if qualifiedName:
prefix, localname = _nssplit(qualifiedName)
self.name = localname
self.nodeName = self.name
def _get_internalSubset(self):
return self.internalSubset
def cloneNode(self, deep):
if self.ownerDocument is None:
# unowned doctypes may be cloned; owned ones may not
clone = DocumentType(None)
clone.name = self.name
clone.nodeName = self.name
operation = xml.dom.UserDataHandler.NODE_CLONED
if deep:
clone.entities._seq = []
clone.notations._seq = []
for n in self.notations._seq:
notation = Notation(n.nodeName, n.publicId, n.systemId)
clone.notations._seq.append(notation)
n._call_user_data_handler(operation, n, notation)
for e in self.entities._seq:
entity = Entity(e.nodeName, e.publicId, e.systemId,
e.notationName)
entity.actualEncoding = e.actualEncoding
entity.encoding = e.encoding
entity.version = e.version
clone.entities._seq.append(entity)
# pass the source entity here, not the notation loop variable
e._call_user_data_handler(operation, e, entity)
self._call_user_data_handler(operation, self, clone)
return clone
else:
return None
def writexml(self, writer, indent="", addindent="", newl=""):
writer.write("<!DOCTYPE ")
writer.write(self.name)
if self.publicId:
writer.write("%s PUBLIC '%s'%s '%s'"
% (newl, self.publicId, newl, self.systemId))
elif self.systemId:
writer.write("%s SYSTEM '%s'" % (newl, self.systemId))
if self.internalSubset is not None:
writer.write(" [")
writer.write(self.internalSubset)
writer.write("]")
writer.write(">"+newl)
class Entity(Identified, Node):
attributes = None
nodeType = Node.ENTITY_NODE
nodeValue = None
actualEncoding = None
encoding = None
version = None
def __init__(self, name, publicId, systemId, notation):
self.nodeName = name
self.notationName = notation
self.childNodes = NodeList()
self._identified_mixin_init(publicId, systemId)
def _get_actualEncoding(self):
return self.actualEncoding
def _get_encoding(self):
return self.encoding
def _get_version(self):
return self.version
def appendChild(self, newChild):
raise xml.dom.HierarchyRequestErr(
"cannot append children to an entity node")
def insertBefore(self, newChild, refChild):
raise xml.dom.HierarchyRequestErr(
"cannot insert children below an entity node")
def removeChild(self, oldChild):
raise xml.dom.HierarchyRequestErr(
"cannot remove children from an entity node")
def replaceChild(self, newChild, oldChild):
raise xml.dom.HierarchyRequestErr(
"cannot replace children of an entity node")
class Notation(Identified, Childless, Node):
nodeType = Node.NOTATION_NODE
nodeValue = None
def __init__(self, name, publicId, systemId):
self.nodeName = name
self._identified_mixin_init(publicId, systemId)
class DOMImplementation(DOMImplementationLS):
_features = [("core", "1.0"),
("core", "2.0"),
("core", "3.0"),
("core", None),
("xml", "1.0"),
("xml", "2.0"),
("xml", "3.0"),
("xml", None),
("ls-load", "3.0"),
("ls-load", None),
]
def hasFeature(self, feature, version):
if version == "":
version = None
return (feature.lower(), version) in self._features
def createDocument(self, namespaceURI, qualifiedName, doctype):
if doctype and doctype.parentNode is not None:
raise xml.dom.WrongDocumentErr(
"doctype object owned by another DOM tree")
doc = self._create_document()
add_root_element = not (namespaceURI is None
and qualifiedName is None
and doctype is None)
if not qualifiedName and add_root_element:
# The spec is unclear what to raise here; SyntaxErr
# would be the other obvious candidate. Since Xerces raises
# InvalidCharacterErr, and since SyntaxErr is not listed
# for createDocument, that seems to be the better choice.
# XXX: need to check for illegal characters here and in
# createElement.
# DOM Level 3 clears this up when discussing the return value of
# this function: if namespaceURI, qualifiedName and doctype are all
# None, the document is returned without a document element;
# otherwise, if doctype or namespaceURI is not None, we are back to
# the problem above.
raise xml.dom.InvalidCharacterErr("Element with no name")
if add_root_element:
prefix, localname = _nssplit(qualifiedName)
if prefix == "xml" \
and namespaceURI != "http://www.w3.org/XML/1998/namespace":
raise xml.dom.NamespaceErr("illegal use of 'xml' prefix")
if prefix and not namespaceURI:
raise xml.dom.NamespaceErr(
"illegal use of prefix without namespaces")
element = doc.createElementNS(namespaceURI, qualifiedName)
if doctype:
doc.appendChild(doctype)
doc.appendChild(element)
if doctype:
doctype.parentNode = doctype.ownerDocument = doc
doc.doctype = doctype
doc.implementation = self
return doc
def createDocumentType(self, qualifiedName, publicId, systemId):
doctype = DocumentType(qualifiedName)
doctype.publicId = publicId
doctype.systemId = systemId
return doctype
# DOM Level 3 (WD 9 April 2002)
def getInterface(self, feature):
if self.hasFeature(feature, None):
return self
else:
return None
# internal
def _create_document(self):
return Document()
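# Illustrative sketch (hypothetical, never called): building a complete
# document through the DOMImplementation factory methods above.
def _demo_dom_implementation():
    impl = DOMImplementation()
    assert impl.hasFeature("core", "2.0")
    doctype = impl.createDocumentType("html", None, None)
    doc = impl.createDocument(None, "html", doctype)
    assert doc.documentElement.tagName == "html"
    assert doc.doctype is doctype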
class ElementInfo(object):
"""Object that represents content-model information for an element.
This implementation is not expected to be used in practice; DOM
builders should provide implementations which do the right thing
using information available to it.
"""
__slots__ = 'tagName',
def __init__(self, name):
self.tagName = name
def getAttributeType(self, aname):
return _no_type
def getAttributeTypeNS(self, namespaceURI, localName):
return _no_type
def isElementContent(self):
return False
def isEmpty(self):
"""Returns true iff this element is declared to have an EMPTY
content model."""
return False
def isId(self, aname):
"""Returns true iff the named attribte is a DTD-style ID."""
return False
def isIdNS(self, namespaceURI, localName):
"""Returns true iff the identified attribute is a DTD-style ID."""
return False
def __getstate__(self):
return self.tagName
def __setstate__(self, state):
self.tagName = state
def _clear_id_cache(node):
if node.nodeType == Node.DOCUMENT_NODE:
node._id_cache.clear()
node._id_search_stack = None
elif _in_document(node):
node.ownerDocument._id_cache.clear()
node.ownerDocument._id_search_stack= None
class Document(Node, DocumentLS):
_child_node_types = (Node.ELEMENT_NODE, Node.PROCESSING_INSTRUCTION_NODE,
Node.COMMENT_NODE, Node.DOCUMENT_TYPE_NODE)
nodeType = Node.DOCUMENT_NODE
nodeName = "#document"
nodeValue = None
attributes = None
doctype = None
parentNode = None
previousSibling = nextSibling = None
implementation = DOMImplementation()
# Document attributes from Level 3 (WD 9 April 2002)
actualEncoding = None
encoding = None
standalone = None
version = None
strictErrorChecking = False
errorHandler = None
documentURI = None
_magic_id_count = 0
def __init__(self):
self.childNodes = NodeList()
# mapping of (namespaceURI, localName) -> ElementInfo
# and tagName -> ElementInfo
self._elem_info = {}
self._id_cache = {}
self._id_search_stack = None
def _get_elem_info(self, element):
if element.namespaceURI:
key = element.namespaceURI, element.localName
else:
key = element.tagName
return self._elem_info.get(key)
def _get_actualEncoding(self):
return self.actualEncoding
def _get_doctype(self):
return self.doctype
def _get_documentURI(self):
return self.documentURI
def _get_encoding(self):
return self.encoding
def _get_errorHandler(self):
return self.errorHandler
def _get_standalone(self):
return self.standalone
def _get_strictErrorChecking(self):
return self.strictErrorChecking
def _get_version(self):
return self.version
def appendChild(self, node):
if node.nodeType not in self._child_node_types:
raise xml.dom.HierarchyRequestErr(
"%s cannot be child of %s" % (repr(node), repr(self)))
if node.parentNode is not None:
# This needs to be done before the next test since this
# may *be* the document element, in which case it should
# end up re-ordered to the end.
node.parentNode.removeChild(node)
if node.nodeType == Node.ELEMENT_NODE \
and self._get_documentElement():
raise xml.dom.HierarchyRequestErr(
"two document elements disallowed")
return Node.appendChild(self, node)
def removeChild(self, oldChild):
try:
self.childNodes.remove(oldChild)
except ValueError:
raise xml.dom.NotFoundErr()
oldChild.nextSibling = oldChild.previousSibling = None
oldChild.parentNode = None
if self.documentElement is oldChild:
self.documentElement = None
return oldChild
def _get_documentElement(self):
for node in self.childNodes:
if node.nodeType == Node.ELEMENT_NODE:
return node
def unlink(self):
if self.doctype is not None:
self.doctype.unlink()
self.doctype = None
Node.unlink(self)
def cloneNode(self, deep):
if not deep:
return None
clone = self.implementation.createDocument(None, None, None)
clone.encoding = self.encoding
clone.standalone = self.standalone
clone.version = self.version
for n in self.childNodes:
childclone = _clone_node(n, deep, clone)
assert childclone.ownerDocument.isSameNode(clone)
clone.childNodes.append(childclone)
if childclone.nodeType == Node.DOCUMENT_NODE:
assert clone.documentElement is None
elif childclone.nodeType == Node.DOCUMENT_TYPE_NODE:
assert clone.doctype is None
clone.doctype = childclone
childclone.parentNode = clone
self._call_user_data_handler(xml.dom.UserDataHandler.NODE_CLONED,
self, clone)
return clone
def createDocumentFragment(self):
d = DocumentFragment()
d.ownerDocument = self
return d
def createElement(self, tagName):
e = Element(tagName)
e.ownerDocument = self
return e
def createTextNode(self, data):
if not isinstance(data, StringTypes):
raise TypeError, "node contents must be a string"
t = Text()
t.data = data
t.ownerDocument = self
return t
def createCDATASection(self, data):
if not isinstance(data, StringTypes):
raise TypeError, "node contents must be a string"
c = CDATASection()
c.data = data
c.ownerDocument = self
return c
def createComment(self, data):
c = Comment(data)
c.ownerDocument = self
return c
def createProcessingInstruction(self, target, data):
p = ProcessingInstruction(target, data)
p.ownerDocument = self
return p
def createAttribute(self, qName):
a = Attr(qName)
a.ownerDocument = self
a.value = ""
return a
def createElementNS(self, namespaceURI, qualifiedName):
prefix, localName = _nssplit(qualifiedName)
e = Element(qualifiedName, namespaceURI, prefix)
e.ownerDocument = self
return e
def createAttributeNS(self, namespaceURI, qualifiedName):
prefix, localName = _nssplit(qualifiedName)
a = Attr(qualifiedName, namespaceURI, localName, prefix)
a.ownerDocument = self
a.value = ""
return a
# A couple of implementation-specific helpers to create node types
# not supported by the W3C DOM specs:
def _create_entity(self, name, publicId, systemId, notationName):
e = Entity(name, publicId, systemId, notationName)
e.ownerDocument = self
return e
def _create_notation(self, name, publicId, systemId):
n = Notation(name, publicId, systemId)
n.ownerDocument = self
return n
def getElementById(self, id):
if id in self._id_cache:
return self._id_cache[id]
if not (self._elem_info or self._magic_id_count):
return None
stack = self._id_search_stack
if stack is None:
# we never searched before, or the cache has been cleared
stack = [self.documentElement]
self._id_search_stack = stack
elif not stack:
# Previous search was completed and cache is still valid;
# no matching node.
return None
result = None
while stack:
node = stack.pop()
# add child elements to stack for continued searching
stack.extend([child for child in node.childNodes
if child.nodeType in _nodeTypes_with_children])
# check this node
info = self._get_elem_info(node)
if info:
# We have to process all ID attributes before
# returning in order to get all the attributes set to
# be IDs using Element.setIdAttribute*().
for attr in node.attributes.values():
if attr.namespaceURI:
if info.isIdNS(attr.namespaceURI, attr.localName):
self._id_cache[attr.value] = node
if attr.value == id:
result = node
elif not node._magic_id_nodes:
break
elif info.isId(attr.name):
self._id_cache[attr.value] = node
if attr.value == id:
result = node
elif not node._magic_id_nodes:
break
elif attr._is_id:
self._id_cache[attr.value] = node
if attr.value == id:
result = node
elif node._magic_id_nodes == 1:
break
elif node._magic_id_nodes:
for attr in node.attributes.values():
if attr._is_id:
self._id_cache[attr.value] = node
if attr.value == id:
result = node
if result is not None:
break
return result
def getElementsByTagName(self, name):
return _get_elements_by_tagName_helper(self, name, NodeList())
def getElementsByTagNameNS(self, namespaceURI, localName):
return _get_elements_by_tagName_ns_helper(
self, namespaceURI, localName, NodeList())
def isSupported(self, feature, version):
return self.implementation.hasFeature(feature, version)
def importNode(self, node, deep):
if node.nodeType == Node.DOCUMENT_NODE:
raise xml.dom.NotSupportedErr("cannot import document nodes")
elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
raise xml.dom.NotSupportedErr("cannot import document type nodes")
return _clone_node(node, deep, self)
def writexml(self, writer, indent="", addindent="", newl="",
encoding = None):
if encoding is None:
writer.write('<?xml version="1.0" ?>'+newl)
else:
writer.write('<?xml version="1.0" encoding="%s"?>%s' % (encoding, newl))
for node in self.childNodes:
node.writexml(writer, indent, addindent, newl)
# DOM Level 3 (WD 9 April 2002)
def renameNode(self, n, namespaceURI, name):
if n.ownerDocument is not self:
raise xml.dom.WrongDocumentErr(
"cannot rename nodes from other documents;\n"
"expected %s,\nfound %s" % (self, n.ownerDocument))
if n.nodeType not in (Node.ELEMENT_NODE, Node.ATTRIBUTE_NODE):
raise xml.dom.NotSupportedErr(
"renameNode() only applies to element and attribute nodes")
if namespaceURI != EMPTY_NAMESPACE:
if ':' in name:
prefix, localName = name.split(':', 1)
if ( prefix == "xmlns"
and namespaceURI != xml.dom.XMLNS_NAMESPACE):
raise xml.dom.NamespaceErr(
"illegal use of 'xmlns' prefix")
else:
if ( name == "xmlns"
and namespaceURI != xml.dom.XMLNS_NAMESPACE
and n.nodeType == Node.ATTRIBUTE_NODE):
raise xml.dom.NamespaceErr(
"illegal use of the 'xmlns' attribute")
prefix = None
localName = name
else:
prefix = None
localName = None
if n.nodeType == Node.ATTRIBUTE_NODE:
element = n.ownerElement
if element is not None:
is_id = n._is_id
element.removeAttributeNode(n)
else:
element = None
# avoid __setattr__
d = n.__dict__
d['prefix'] = prefix
d['localName'] = localName
d['namespaceURI'] = namespaceURI
d['nodeName'] = name
if n.nodeType == Node.ELEMENT_NODE:
d['tagName'] = name
else:
# attribute node
d['name'] = name
if element is not None:
element.setAttributeNode(n)
if is_id:
element.setIdAttributeNode(n)
# It's not clear from a semantic perspective whether we should
# call the user data handlers for the NODE_RENAMED event since
# we're re-using the existing node. The draft spec has been
# interpreted as meaning "no, don't call the handler unless a
# new node is created."
return n
defproperty(Document, "documentElement",
doc="Top-level element of this document.")
def _clone_node(node, deep, newOwnerDocument):
"""
Clone a node and give it the new owner document.
Called by Node.cloneNode and Document.importNode
"""
if node.ownerDocument.isSameNode(newOwnerDocument):
operation = xml.dom.UserDataHandler.NODE_CLONED
else:
operation = xml.dom.UserDataHandler.NODE_IMPORTED
if node.nodeType == Node.ELEMENT_NODE:
clone = newOwnerDocument.createElementNS(node.namespaceURI,
node.nodeName)
for attr in node.attributes.values():
clone.setAttributeNS(attr.namespaceURI, attr.nodeName, attr.value)
a = clone.getAttributeNodeNS(attr.namespaceURI, attr.localName)
a.specified = attr.specified
if deep:
for child in node.childNodes:
c = _clone_node(child, deep, newOwnerDocument)
clone.appendChild(c)
elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE:
clone = newOwnerDocument.createDocumentFragment()
if deep:
for child in node.childNodes:
c = _clone_node(child, deep, newOwnerDocument)
clone.appendChild(c)
elif node.nodeType == Node.TEXT_NODE:
clone = newOwnerDocument.createTextNode(node.data)
elif node.nodeType == Node.CDATA_SECTION_NODE:
clone = newOwnerDocument.createCDATASection(node.data)
elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE:
clone = newOwnerDocument.createProcessingInstruction(node.target,
node.data)
elif node.nodeType == Node.COMMENT_NODE:
clone = newOwnerDocument.createComment(node.data)
elif node.nodeType == Node.ATTRIBUTE_NODE:
clone = newOwnerDocument.createAttributeNS(node.namespaceURI,
node.nodeName)
clone.specified = True
clone.value = node.value
elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
assert node.ownerDocument is not newOwnerDocument
operation = xml.dom.UserDataHandler.NODE_IMPORTED
clone = newOwnerDocument.implementation.createDocumentType(
node.name, node.publicId, node.systemId)
clone.ownerDocument = newOwnerDocument
if deep:
clone.entities._seq = []
clone.notations._seq = []
for n in node.notations._seq:
notation = Notation(n.nodeName, n.publicId, n.systemId)
notation.ownerDocument = newOwnerDocument
clone.notations._seq.append(notation)
if hasattr(n, '_call_user_data_handler'):
n._call_user_data_handler(operation, n, notation)
for e in node.entities._seq:
entity = Entity(e.nodeName, e.publicId, e.systemId,
e.notationName)
entity.actualEncoding = e.actualEncoding
entity.encoding = e.encoding
entity.version = e.version
entity.ownerDocument = newOwnerDocument
clone.entities._seq.append(entity)
if hasattr(e, '_call_user_data_handler'):
# pass the source entity here, not the notation loop variable
e._call_user_data_handler(operation, e, entity)
else:
# Note: the cloning of Document and DocumentType nodes is
# implementation specific. minidom handles those cases
# directly in the cloneNode() methods.
raise xml.dom.NotSupportedErr("Cannot clone node %s" % repr(node))
# Check for _call_user_data_handler() since this could conceivably
# be used with other DOM implementations (one of the FourThought
# DOMs, perhaps?).
if hasattr(node, '_call_user_data_handler'):
node._call_user_data_handler(operation, node, clone)
return clone
def _nssplit(qualifiedName):
fields = qualifiedName.split(':', 1)
if len(fields) == 2:
return fields
else:
return (None, fields[0])
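# Illustrative (hypothetical, never called): _nssplit() separates an optional
# prefix from the local name. Note the asymmetry in the return type: a list
# when a colon is present, a tuple otherwise.
def _demo_nssplit():
    assert _nssplit("svg:rect") == ["svg", "rect"]
    assert _nssplit("rect") == (None, "rect")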
def _get_StringIO():
# we can't use cStringIO since it doesn't support Unicode strings
from StringIO import StringIO
return StringIO()
def _do_pulldom_parse(func, args, kwargs):
events = func(*args, **kwargs)
toktype, rootNode = events.getEvent()
events.expandNode(rootNode)
events.clear()
return rootNode
def parse(file, parser=None, bufsize=None):
"""Parse a file into a DOM by filename or file object."""
if parser is None and not bufsize:
from xml.dom import expatbuilder
return expatbuilder.parse(file)
else:
from xml.dom import pulldom
return _do_pulldom_parse(pulldom.parse, (file,),
{'parser': parser, 'bufsize': bufsize})
def parseString(string, parser=None):
"""Parse a file into a DOM from a string."""
if parser is None:
from xml.dom import expatbuilder
return expatbuilder.parseString(string)
else:
from xml.dom import pulldom
return _do_pulldom_parse(pulldom.parseString, (string,),
{'parser': parser})
def getDOMImplementation(features=None):
if features:
if isinstance(features, StringTypes):
features = domreg._parse_feature_string(features)
for f, v in features:
if not Document.implementation.hasFeature(f, v):
return None
return Document.implementation
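# Illustrative sketch (hypothetical, never called): the typical entry points
# defined above. parseString() builds a whole Document from an in-memory
# string via the expat builder.
def _demo_parse_string():
    doc = parseString('<root a="1"><child/></root>')
    root = doc.documentElement
    assert root.tagName == "root"
    assert root.getAttribute("a") == "1"
    assert len(root.getElementsByTagName("child")) == 1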
| apache-2.0 |
supergentle/migueltutorial | flask/lib/python2.7/site-packages/sqlalchemy/orm/descriptor_props.py | 60 | 25141 | # orm/descriptor_props.py
# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Descriptor properties are more "auxiliary" properties
that exist as configurational elements, but don't participate
as actively in the load/persist ORM loop.
"""
from .interfaces import MapperProperty, PropComparator
from .util import _none_set
from . import attributes
from .. import util, sql, exc as sa_exc, event, schema
from ..sql import expression
from . import properties
from . import query
class DescriptorProperty(MapperProperty):
""":class:`.MapperProperty` which proxies access to a
user-defined descriptor."""
doc = None
def instrument_class(self, mapper):
prop = self
class _ProxyImpl(object):
accepts_scalar_loader = False
expire_missing = True
collection = False
def __init__(self, key):
self.key = key
if hasattr(prop, 'get_history'):
def get_history(self, state, dict_,
passive=attributes.PASSIVE_OFF):
return prop.get_history(state, dict_, passive)
if self.descriptor is None:
desc = getattr(mapper.class_, self.key, None)
if mapper._is_userland_descriptor(desc):
self.descriptor = desc
if self.descriptor is None:
def fset(obj, value):
setattr(obj, self.name, value)
def fdel(obj):
delattr(obj, self.name)
def fget(obj):
return getattr(obj, self.name)
self.descriptor = property(
fget=fget,
fset=fset,
fdel=fdel,
)
proxy_attr = attributes.create_proxied_attribute(
self.descriptor)(
self.parent.class_,
self.key,
self.descriptor,
lambda: self._comparator_factory(mapper),
doc=self.doc,
original_property=self
)
proxy_attr.impl = _ProxyImpl(self.key)
mapper.class_manager.instrument_attribute(self.key, proxy_attr)
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class CompositeProperty(DescriptorProperty):
"""Defines a "composite" mapped attribute, representing a collection
of columns as one attribute.
:class:`.CompositeProperty` is constructed using the :func:`.composite`
function.
.. seealso::
:ref:`mapper_composite`
"""
def __init__(self, class_, *attrs, **kwargs):
"""Return a composite column-based property for use with a Mapper.
See the mapping documentation section :ref:`mapper_composite` for a
full usage example.
The :class:`.MapperProperty` returned by :func:`.composite`
is the :class:`.CompositeProperty`.
:param class\_:
The "composite type" class.
:param \*cols:
List of Column objects to be mapped.
:param active_history=False:
When ``True``, indicates that the "previous" value for a
scalar attribute should be loaded when replaced, if not
already loaded. See the same flag on :func:`.column_property`.
.. versionchanged:: 0.7
This flag specifically becomes meaningful
- previously it was a placeholder.
:param group:
A group name for this property when marked as deferred.
:param deferred:
When True, the column property is "deferred", meaning that it does
not load immediately, and is instead loaded when the attribute is
first accessed on an instance. See also
:func:`~sqlalchemy.orm.deferred`.
:param comparator_factory: a class which extends
:class:`.CompositeProperty.Comparator` which provides custom SQL
clause generation for comparison operations.
:param doc:
optional string that will be applied as the doc on the
class-bound descriptor.
:param info: Optional data dictionary which will be populated into the
:attr:`.MapperProperty.info` attribute of this object.
.. versionadded:: 0.8
:param extension:
an :class:`.AttributeExtension` instance,
or list of extensions, which will be prepended to the list of
attribute listeners for the resulting descriptor placed on the
class. **Deprecated.** Please see :class:`.AttributeEvents`.
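As an illustration only (the ``Point`` and ``Vertex`` classes below
are hypothetical, not part of this module), a minimal composite
mapping might look like::

    class Point(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __composite_values__(self):
            return self.x, self.y

        def __eq__(self, other):
            return isinstance(other, Point) and \
                other.x == self.x and other.y == self.y

    class Vertex(Base):
        __tablename__ = 'vertex'

        id = Column(Integer, primary_key=True)
        x1 = Column(Integer)
        y1 = Column(Integer)

        start = composite(Point, x1, y1)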
"""
super(CompositeProperty, self).__init__()
self.attrs = attrs
self.composite_class = class_
self.active_history = kwargs.get('active_history', False)
self.deferred = kwargs.get('deferred', False)
self.group = kwargs.get('group', None)
self.comparator_factory = kwargs.pop('comparator_factory',
self.__class__.Comparator)
if 'info' in kwargs:
self.info = kwargs.pop('info')
util.set_creation_order(self)
self._create_descriptor()
def instrument_class(self, mapper):
super(CompositeProperty, self).instrument_class(mapper)
self._setup_event_handlers()
def do_init(self):
"""Initialization which occurs after the :class:`.CompositeProperty`
has been associated with its parent mapper.
"""
self._setup_arguments_on_columns()
def _create_descriptor(self):
"""Create the Python descriptor that will serve as
the access point on instances of the mapped class.
"""
def fget(instance):
dict_ = attributes.instance_dict(instance)
state = attributes.instance_state(instance)
if self.key not in dict_:
# key not present. Iterate through related
# attributes, retrieve their values. This
# ensures they all load.
values = [
getattr(instance, key)
for key in self._attribute_keys
]
# current expected behavior here is that the composite is
# created on access if the object is persistent or if
# col attributes have non-None. This would be better
# if the composite were created unconditionally,
# but that would be a behavioral change.
if self.key not in dict_ and (
state.key is not None or
not _none_set.issuperset(values)
):
dict_[self.key] = self.composite_class(*values)
state.manager.dispatch.refresh(state, None, [self.key])
return dict_.get(self.key, None)
def fset(instance, value):
dict_ = attributes.instance_dict(instance)
state = attributes.instance_state(instance)
attr = state.manager[self.key]
previous = dict_.get(self.key, attributes.NO_VALUE)
for fn in attr.dispatch.set:
value = fn(state, value, previous, attr.impl)
dict_[self.key] = value
if value is None:
for key in self._attribute_keys:
setattr(instance, key, None)
else:
for key, value in zip(
self._attribute_keys,
value.__composite_values__()):
setattr(instance, key, value)
def fdel(instance):
state = attributes.instance_state(instance)
dict_ = attributes.instance_dict(instance)
previous = dict_.pop(self.key, attributes.NO_VALUE)
attr = state.manager[self.key]
attr.dispatch.remove(state, previous, attr.impl)
for key in self._attribute_keys:
setattr(instance, key, None)
self.descriptor = property(fget, fset, fdel)
@util.memoized_property
def _comparable_elements(self):
return [
getattr(self.parent.class_, prop.key)
for prop in self.props
]
@util.memoized_property
def props(self):
props = []
for attr in self.attrs:
if isinstance(attr, str):
prop = self.parent.get_property(
attr, _configure_mappers=False)
elif isinstance(attr, schema.Column):
prop = self.parent._columntoproperty[attr]
elif isinstance(attr, attributes.InstrumentedAttribute):
prop = attr.property
else:
raise sa_exc.ArgumentError(
"Composite expects Column objects or mapped "
"attributes/attribute names as arguments, got: %r"
% (attr,))
props.append(prop)
return props
@property
def columns(self):
return [a for a in self.attrs if isinstance(a, schema.Column)]
def _setup_arguments_on_columns(self):
"""Propagate configuration arguments made on this composite
to the target columns, for those that apply.
"""
for prop in self.props:
prop.active_history = self.active_history
if self.deferred:
prop.deferred = self.deferred
prop.strategy_class = prop._strategy_lookup(
("deferred", True),
("instrument", True))
prop.group = self.group
def _setup_event_handlers(self):
"""Establish events that populate/expire the composite attribute."""
def load_handler(state, *args):
dict_ = state.dict
if self.key in dict_:
return
# if column elements aren't loaded, skip.
# __get__() will initiate a load for those
# columns
for k in self._attribute_keys:
if k not in dict_:
return
# assert self.key not in dict_
dict_[self.key] = self.composite_class(
*[state.dict[key] for key in
self._attribute_keys]
)
def expire_handler(state, keys):
if keys is None or set(self._attribute_keys).intersection(keys):
state.dict.pop(self.key, None)
def insert_update_handler(mapper, connection, state):
"""After an insert or update, some columns may be expired due
to server side defaults, or re-populated due to client side
defaults. Pop out the composite value here so that it
recreates.
"""
state.dict.pop(self.key, None)
event.listen(self.parent, 'after_insert',
insert_update_handler, raw=True)
event.listen(self.parent, 'after_update',
insert_update_handler, raw=True)
event.listen(self.parent, 'load',
load_handler, raw=True, propagate=True)
event.listen(self.parent, 'refresh',
load_handler, raw=True, propagate=True)
event.listen(self.parent, 'expire',
expire_handler, raw=True, propagate=True)
# TODO: need a deserialize hook here
@util.memoized_property
def _attribute_keys(self):
return [
prop.key for prop in self.props
]
def get_history(self, state, dict_, passive=attributes.PASSIVE_OFF):
"""Provided for userland code that uses attributes.get_history()."""
added = []
deleted = []
has_history = False
for prop in self.props:
key = prop.key
hist = state.manager[key].impl.get_history(state, dict_)
if hist.has_changes():
has_history = True
non_deleted = hist.non_deleted()
if non_deleted:
added.extend(non_deleted)
else:
added.append(None)
if hist.deleted:
deleted.extend(hist.deleted)
else:
deleted.append(None)
if has_history:
return attributes.History(
[self.composite_class(*added)],
(),
[self.composite_class(*deleted)]
)
else:
return attributes.History(
(), [self.composite_class(*added)], ()
)
def _comparator_factory(self, mapper):
return self.comparator_factory(self, mapper)
class CompositeBundle(query.Bundle):
def __init__(self, property, expr):
self.property = property
super(CompositeProperty.CompositeBundle, self).__init__(
property.key, *expr)
def create_row_processor(self, query, procs, labels):
def proc(row):
return self.property.composite_class(
*[proc(row) for proc in procs])
return proc
class Comparator(PropComparator):
"""Produce boolean, comparison, and other operators for
:class:`.CompositeProperty` attributes.
See the example in :ref:`composite_operations` for an overview
of usage, as well as the documentation for :class:`.PropComparator`.
See also:
:class:`.PropComparator`
:class:`.ColumnOperators`
:ref:`types_operators`
:attr:`.TypeEngine.comparator_factory`
"""
__hash__ = None
@property
def clauses(self):
return self.__clause_element__()
def __clause_element__(self):
return expression.ClauseList(
group=False, *self._comparable_elements)
def _query_clause_element(self):
return CompositeProperty.CompositeBundle(
self.prop, self.__clause_element__())
@util.memoized_property
def _comparable_elements(self):
if self._adapt_to_entity:
return [
getattr(
self._adapt_to_entity.entity,
prop.key
) for prop in self.prop._comparable_elements
]
else:
return self.prop._comparable_elements
def __eq__(self, other):
if other is None:
values = [None] * len(self.prop._comparable_elements)
else:
values = other.__composite_values__()
comparisons = [
a == b
for a, b in zip(self.prop._comparable_elements, values)
]
if self._adapt_to_entity:
comparisons = [self.adapter(x) for x in comparisons]
return sql.and_(*comparisons)
def __ne__(self, other):
return sql.not_(self.__eq__(other))
def __str__(self):
return str(self.parent.class_.__name__) + "." + self.key
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class ConcreteInheritedProperty(DescriptorProperty):
"""A 'do nothing' :class:`.MapperProperty` that disables
an attribute on a concrete subclass that is only present
on the inherited mapper, not the concrete classes' mapper.
Cases where this occurs include:
* When the superclass mapper is mapped against a
"polymorphic union", which includes all attributes from
all subclasses.
* When a relationship() is configured on an inherited mapper,
but not on the subclass mapper. Concrete mappers require
that relationship() is configured explicitly on each
subclass.
"""
def _comparator_factory(self, mapper):
comparator_callable = None
for m in self.parent.iterate_to_root():
p = m._props[self.key]
if not isinstance(p, ConcreteInheritedProperty):
comparator_callable = p.comparator_factory
break
return comparator_callable
def __init__(self):
super(ConcreteInheritedProperty, self).__init__()
def warn():
raise AttributeError("Concrete %s does not implement "
"attribute %r at the instance level. Add "
"this property explicitly to %s." %
(self.parent, self.key, self.parent))
class NoninheritedConcreteProp(object):
def __set__(s, obj, value):
warn()
def __delete__(s, obj):
warn()
def __get__(s, obj, owner):
if obj is None:
return self.descriptor
warn()
self.descriptor = NoninheritedConcreteProp()
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class SynonymProperty(DescriptorProperty):
def __init__(self, name, map_column=None,
descriptor=None, comparator_factory=None,
doc=None, info=None):
"""Denote an attribute name as a synonym to a mapped property,
in that the attribute will mirror the value and expression behavior
of another attribute.
:param name: the name of the existing mapped property. This
can refer to the string name of any :class:`.MapperProperty`
configured on the class, including column-bound attributes
and relationships.
:param descriptor: a Python :term:`descriptor` that will be used
as a getter (and potentially a setter) when this attribute is
accessed at the instance level.
:param map_column: if ``True``, the :func:`.synonym` construct will
locate the existing named :class:`.MapperProperty` based on the
attribute name of this :func:`.synonym`, and assign it to a new
attribute linked to the name of this :func:`.synonym`.
That is, given a mapping like::
class MyClass(Base):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
job_status = Column(String(50))
job_status = synonym("_job_status", map_column=True)
The above class ``MyClass`` will now have the ``job_status``
:class:`.Column` object mapped to the attribute named
``_job_status``, and the attribute named ``job_status`` will refer
to the synonym itself. This feature is typically used in
conjunction with the ``descriptor`` argument in order to link a
user-defined descriptor as a "wrapper" for an existing column.
:param info: Optional data dictionary which will be populated into the
:attr:`.InspectionAttr.info` attribute of this object.
.. versionadded:: 1.0.0
:param comparator_factory: A subclass of :class:`.PropComparator`
that will provide custom comparison behavior at the SQL expression
level.
.. note::
For the use case of providing an attribute which redefines both
Python-level and SQL-expression level behavior of an attribute,
please refer to the Hybrid attribute introduced at
:ref:`mapper_hybrids` for a more effective technique.
.. seealso::
:ref:`synonyms` - examples of functionality.
:ref:`mapper_hybrids` - Hybrids provide a better approach for
more complicated attribute-wrapping schemes than synonyms.
"""
super(SynonymProperty, self).__init__()
self.name = name
self.map_column = map_column
self.descriptor = descriptor
self.comparator_factory = comparator_factory
self.doc = doc or (descriptor and descriptor.__doc__) or None
if info:
self.info = info
util.set_creation_order(self)
# TODO: when initialized, check _proxied_property,
# emit a warning if its not a column-based property
@util.memoized_property
def _proxied_property(self):
return getattr(self.parent.class_, self.name).property
def _comparator_factory(self, mapper):
prop = self._proxied_property
if self.comparator_factory:
comp = self.comparator_factory(prop, mapper)
else:
comp = prop.comparator_factory(prop, mapper)
return comp
def set_parent(self, parent, init):
if self.map_column:
# implement the 'map_column' option.
if self.key not in parent.mapped_table.c:
raise sa_exc.ArgumentError(
"Can't compile synonym '%s': no column on table "
"'%s' named '%s'"
% (self.name, parent.mapped_table.description, self.key))
elif parent.mapped_table.c[self.key] in \
parent._columntoproperty and \
parent._columntoproperty[
parent.mapped_table.c[self.key]
].key == self.name:
raise sa_exc.ArgumentError(
"Can't call map_column=True for synonym %r=%r, "
"a ColumnProperty already exists keyed to the name "
"%r for column %r" %
(self.key, self.name, self.name, self.key)
)
p = properties.ColumnProperty(parent.mapped_table.c[self.key])
parent._configure_property(
self.name, p,
init=init,
setparent=True)
p._mapped_by_synonym = self.key
self.parent = parent
@util.langhelpers.dependency_for("sqlalchemy.orm.properties")
class ComparableProperty(DescriptorProperty):
"""Instruments a Python property for use in query expressions."""
def __init__(
self, comparator_factory, descriptor=None, doc=None, info=None):
"""Provides a method of applying a :class:`.PropComparator`
to any Python descriptor attribute.
.. versionchanged:: 0.7
:func:`.comparable_property` is superseded by
the :mod:`~sqlalchemy.ext.hybrid` extension. See the example
at :ref:`hybrid_custom_comparators`.
Allows any Python descriptor to behave like a SQL-enabled
attribute when used at the class level in queries, allowing
redefinition of expression operator behavior.
In the example below we redefine :meth:`.PropComparator.operate`
to wrap both sides of an expression in ``func.lower()`` to produce
case-insensitive comparison::
from sqlalchemy.orm import comparable_property
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.sql import func
from sqlalchemy import Integer, String, Column
from sqlalchemy.ext.declarative import declarative_base
class CaseInsensitiveComparator(PropComparator):
def __clause_element__(self):
return self.prop
def operate(self, op, other):
return op(
func.lower(self.__clause_element__()),
func.lower(other)
)
Base = declarative_base()
class SearchWord(Base):
__tablename__ = 'search_word'
id = Column(Integer, primary_key=True)
word = Column(String)
word_insensitive = comparable_property(lambda prop, mapper:
CaseInsensitiveComparator(
mapper.c.word, mapper)
)
A mapping like the above allows the ``word_insensitive`` attribute
to render an expression like::
>>> print SearchWord.word_insensitive == "Trucks"
lower(search_word.word) = lower(:lower_1)
:param comparator_factory:
A PropComparator subclass or factory that defines operator behavior
for this property.
:param descriptor:
Optional when used in a ``properties={}`` declaration. The Python
descriptor or property to layer comparison behavior on top of.
The like-named descriptor will be automatically retrieved from the
mapped class if left blank in a ``properties`` declaration.
:param info: Optional data dictionary which will be populated into the
:attr:`.InspectionAttr.info` attribute of this object.
.. versionadded:: 1.0.0
"""
super(ComparableProperty, self).__init__()
self.descriptor = descriptor
self.comparator_factory = comparator_factory
self.doc = doc or (descriptor and descriptor.__doc__) or None
if info:
self.info = info
util.set_creation_order(self)
def _comparator_factory(self, mapper):
return self.comparator_factory(self, mapper)
| bsd-3-clause |
brianmay/karaage | karaage/emails/views.py | 2 | 2890 | # Copyright 2010-2017, The University of Melbourne
# Copyright 2010-2017, Brian May
#
# This file is part of Karaage.
#
# Karaage is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Karaage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Karaage If not, see <http://www.gnu.org/licenses/>.
from django.conf import settings
from django.contrib import messages
from django.core.mail import send_mass_mail
from django.http import HttpResponseRedirect
from django.shortcuts import render
from django.template import Context, Template
from django.urls import reverse
from karaage.common.decorators import admin_required
from karaage.emails.forms import BulkEmailForm
def _get_emails(person_query, subject, body):
emails = []
if person_query:
email_list = []
subject_t = Template(subject)
body_t = Template(body)
person_query = person_query.filter(
is_systemuser=False, login_enabled=True)
for person in person_query:
if person.email not in email_list:
projects = ", ".join(
[str(project) for project in person.leads.all()]
)
ctx = Context({
'projects': projects,
'receiver': person,
})
subject = subject_t.render(ctx)
body = body_t.render(ctx)
emails.append(
(subject, body, settings.ACCOUNTS_EMAIL,
[person.email])
)
email_list.append(person.email)
return emails
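# Illustrative sketch (hypothetical values, never called): the subject and
# body passed to _get_emails() are Django templates rendered once per
# recipient, with ``receiver`` and ``projects`` available in the context.
def _demo_email_templates():
    subject_t = Template("Maintenance notice for {{ receiver.username }}")
    ctx = Context({'receiver': {'username': 'alice'},
                   'projects': 'Project A, Project B'})
    assert subject_t.render(ctx) == "Maintenance notice for alice"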
@admin_required
def send_email(request):
form = BulkEmailForm(request.POST or None)
if request.method == 'POST':
if form.is_valid():
subject = form.cleaned_data['subject']
body = form.cleaned_data['body']
person_query = form.get_person_query()
emails = _get_emails(person_query, subject, body)
if 'preview' in request.POST:
try:
preview = emails[0]
except IndexError:
preview = None
else:
send_mass_mail(emails)
messages.success(request, "Emails sent successfully")
return HttpResponseRedirect(reverse('index'))
return render(
template_name='karaage/emails/send_email_form.html', context=locals(),
request=request)
| gpl-3.0 |
AkA84/edx-platform | cms/djangoapps/contentstore/views/tests/test_tabs.py | 129 | 8489 | """ Tests for tab functions (just primitive). """
import json
from contentstore.views import tabs
from contentstore.tests.utils import CourseTestCase
from contentstore.utils import reverse_course_url
from xmodule.x_module import STUDENT_VIEW
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.tabs import CourseTabList
from xmodule.modulestore.django import modulestore
class TabsPageTests(CourseTestCase):
"""Test cases for Tabs (a.k.a Pages) page"""
def setUp(self):
"""Common setup for tests"""
# call super class to setup course, etc.
super(TabsPageTests, self).setUp()
# Set the URL for tests
self.url = reverse_course_url('tabs_handler', self.course.id)
# add a static tab to the course, for code coverage
self.test_tab = ItemFactory.create(
parent_location=self.course.location,
category="static_tab",
display_name="Static_1"
)
self.reload_course()
def check_invalid_tab_id_response(self, resp):
"""Verify response is an error listing the invalid_tab_id"""
self.assertEqual(resp.status_code, 400)
resp_content = json.loads(resp.content)
self.assertIn("error", resp_content)
self.assertIn("invalid_tab_id", resp_content['error'])
def test_not_implemented(self):
"""Verify not implemented errors"""
# JSON GET request not supported
with self.assertRaises(NotImplementedError):
self.client.get(self.url)
# JSON POST request not supported
with self.assertRaises(NotImplementedError):
self.client.ajax_post(
self.url,
data=json.dumps({
'tab_id_locator': {'tab_id': 'courseware'},
'unsupported_request': None,
}),
)
# invalid JSON POST request
with self.assertRaises(NotImplementedError):
self.client.ajax_post(
self.url,
data={'invalid_request': None},
)
def test_view_index(self):
"""Basic check that the Pages page responds correctly"""
resp = self.client.get_html(self.url)
self.assertEqual(resp.status_code, 200)
self.assertIn('course-nav-list', resp.content)
def test_reorder_tabs(self):
"""Test re-ordering of tabs"""
# get the original tab ids
orig_tab_ids = [tab.tab_id for tab in self.course.tabs]
tab_ids = list(orig_tab_ids)
num_orig_tabs = len(orig_tab_ids)
# make sure we have enough tabs to play around with
self.assertTrue(num_orig_tabs >= 5)
# reorder the last two tabs
tab_ids[num_orig_tabs - 1], tab_ids[num_orig_tabs - 2] = tab_ids[num_orig_tabs - 2], tab_ids[num_orig_tabs - 1]
# remove the middle tab
# (the code needs to handle the case where the tabs requested for re-ordering are a subset of the tabs in the course)
removed_tab = tab_ids.pop(num_orig_tabs / 2)
self.assertTrue(len(tab_ids) == num_orig_tabs - 1)
# post the request
resp = self.client.ajax_post(
self.url,
data={'tabs': [{'tab_id': tab_id} for tab_id in tab_ids]},
)
self.assertEqual(resp.status_code, 204)
# reload the course and verify the new tab order
self.reload_course()
new_tab_ids = [tab.tab_id for tab in self.course.tabs]
self.assertEqual(new_tab_ids, tab_ids + [removed_tab])
self.assertNotEqual(new_tab_ids, orig_tab_ids)
def test_reorder_tabs_invalid_list(self):
"""Test re-ordering of tabs with invalid tab list"""
orig_tab_ids = [tab.tab_id for tab in self.course.tabs]
tab_ids = list(orig_tab_ids)
# reorder the first two tabs
tab_ids[0], tab_ids[1] = tab_ids[1], tab_ids[0]
# post the request
resp = self.client.ajax_post(
self.url,
data={'tabs': [{'tab_id': tab_id} for tab_id in tab_ids]},
)
self.assertEqual(resp.status_code, 400)
resp_content = json.loads(resp.content)
self.assertIn("error", resp_content)
def test_reorder_tabs_invalid_tab(self):
"""Test re-ordering of tabs with invalid tab"""
invalid_tab_ids = ['courseware', 'info', 'invalid_tab_id']
# post the request
resp = self.client.ajax_post(
self.url,
data={'tabs': [{'tab_id': tab_id} for tab_id in invalid_tab_ids]},
)
self.check_invalid_tab_id_response(resp)
def check_toggle_tab_visiblity(self, tab_type, new_is_hidden_setting):
"""Helper method to check changes in tab visibility"""
# find the tab
old_tab = CourseTabList.get_tab_by_type(self.course.tabs, tab_type)
# visibility should be different from new setting
self.assertNotEqual(old_tab.is_hidden, new_is_hidden_setting)
# post the request
resp = self.client.ajax_post(
self.url,
data=json.dumps({
'tab_id_locator': {'tab_id': old_tab.tab_id},
'is_hidden': new_is_hidden_setting,
}),
)
self.assertEqual(resp.status_code, 204)
# reload the course and verify the new visibility setting
self.reload_course()
new_tab = CourseTabList.get_tab_by_type(self.course.tabs, tab_type)
self.assertEqual(new_tab.is_hidden, new_is_hidden_setting)
def test_toggle_tab_visibility(self):
"""Test toggling of tab visibility"""
self.check_toggle_tab_visiblity('wiki', True)
self.check_toggle_tab_visiblity('wiki', False)
def test_toggle_invalid_tab_visibility(self):
"""Test toggling visibility of an invalid tab"""
# post the request
resp = self.client.ajax_post(
self.url,
data=json.dumps({
'tab_id_locator': {'tab_id': 'invalid_tab_id'}
}),
)
self.check_invalid_tab_id_response(resp)
def test_tab_preview_html(self):
"""
Verify that the static tab renders itself with the correct HTML
"""
preview_url = '/xblock/{}/{}'.format(self.test_tab.location, STUDENT_VIEW)
resp = self.client.get(preview_url, HTTP_ACCEPT='application/json')
self.assertEqual(resp.status_code, 200)
resp_content = json.loads(resp.content)
html = resp_content['html']
# Verify that the HTML contains the expected elements
self.assertIn('<span class="action-button-text">Edit</span>', html)
self.assertIn('<span class="sr">Duplicate this component</span>', html)
self.assertIn('<span class="sr">Delete this component</span>', html)
self.assertIn('<span data-tooltip="Drag to reorder" class="drag-handle action"></span>', html)
class PrimitiveTabEdit(ModuleStoreTestCase):
"""Tests for the primitive tab edit data manipulations"""
def test_delete(self):
"""Test primitive tab deletion."""
course = CourseFactory.create()
with self.assertRaises(ValueError):
tabs.primitive_delete(course, 0)
with self.assertRaises(ValueError):
tabs.primitive_delete(course, 1)
with self.assertRaises(IndexError):
tabs.primitive_delete(course, 6)
tabs.primitive_delete(course, 2)
self.assertFalse({u'type': u'textbooks'} in course.tabs)
# Check that discussion has shifted up
self.assertEquals(course.tabs[2], {'type': 'discussion', 'name': 'Discussion'})
def test_insert(self):
"""Test primitive tab insertion."""
course = CourseFactory.create()
tabs.primitive_insert(course, 2, 'notes', 'aname')
self.assertEquals(course.tabs[2], {'type': 'notes', 'name': 'aname'})
with self.assertRaises(ValueError):
tabs.primitive_insert(course, 0, 'notes', 'aname')
with self.assertRaises(ValueError):
tabs.primitive_insert(course, 3, 'static_tab', 'aname')
def test_save(self):
"""Test course saving."""
course = CourseFactory.create()
tabs.primitive_insert(course, 3, 'notes', 'aname')
course2 = modulestore().get_course(course.id)
self.assertEquals(course2.tabs[3], {'type': 'notes', 'name': 'aname'})
| agpl-3.0 |
myfleetingtime/spark2.11_bingo | python/pyspark/ml/recommendation.py | 15 | 14683 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from pyspark import since, keyword_only
from pyspark.ml.util import *
from pyspark.ml.wrapper import JavaEstimator, JavaModel
from pyspark.ml.param.shared import *
from pyspark.ml.common import inherit_doc
__all__ = ['ALS', 'ALSModel']
@inherit_doc
class ALS(JavaEstimator, HasCheckpointInterval, HasMaxIter, HasPredictionCol, HasRegParam, HasSeed,
JavaMLWritable, JavaMLReadable):
"""
Alternating Least Squares (ALS) matrix factorization.
ALS attempts to estimate the ratings matrix `R` as the product of
two lower-rank matrices, `X` and `Y`, i.e. `X * Yt = R`. Typically
these approximations are called 'factor' matrices. The general
approach is iterative. During each iteration, one of the factor
matrices is held constant, while the other is solved for using least
squares. The newly-solved factor matrix is then held constant while
solving for the other factor matrix.
This is a blocked implementation of the ALS factorization algorithm
that groups the two sets of factors (referred to as "users" and
"products") into blocks and reduces communication by only sending
one copy of each user vector to each product block on each
iteration, and only for the product blocks that need that user's
feature vector. This is achieved by pre-computing some information
about the ratings matrix to determine the "out-links" of each user
(which blocks of products it will contribute to) and "in-link"
information for each product (which of the feature vectors it
receives from each user block it will depend on). This allows us to
send only an array of feature vectors between each user block and
product block, and have the product block find the users' ratings
and update the products based on these messages.
For implicit preference data, the algorithm used is based on
`"Collaborative Filtering for Implicit Feedback Datasets",
<http://dx.doi.org/10.1109/ICDM.2008.22>`_, adapted for the blocked
approach used here.
Essentially instead of finding the low-rank approximations to the
rating matrix `R`, this finds the approximations for a preference
matrix `P` where the elements of `P` are 1 if r > 0 and 0 if r <= 0.
The ratings then act as 'confidence' values related to strength of
indicated user preferences rather than explicit ratings given to
items.
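For example (an illustrative sketch, not one of the doctests below), an
implicit-feedback model could be requested with something like
``ALS(implicitPrefs=True, alpha=40.0)``; the ``alpha=40.0`` confidence
weight is an assumed value for illustration, not a recommended default.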
>>> df = spark.createDataFrame(
... [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
... ["user", "item", "rating"])
>>> als = ALS(rank=10, maxIter=5, seed=0)
>>> model = als.fit(df)
>>> model.rank
10
>>> model.userFactors.orderBy("id").collect()
[Row(id=0, features=[...]), Row(id=1, ...), Row(id=2, ...)]
>>> test = spark.createDataFrame([(0, 2), (1, 0), (2, 0)], ["user", "item"])
>>> predictions = sorted(model.transform(test).collect(), key=lambda r: r[0])
>>> predictions[0]
Row(user=0, item=2, prediction=-0.13807615637779236)
>>> predictions[1]
Row(user=1, item=0, prediction=2.6258413791656494)
>>> predictions[2]
Row(user=2, item=0, prediction=-1.5018409490585327)
>>> als_path = temp_path + "/als"
>>> als.save(als_path)
>>> als2 = ALS.load(als_path)
>>> als.getMaxIter()
5
>>> model_path = temp_path + "/als_model"
>>> model.save(model_path)
>>> model2 = ALSModel.load(model_path)
>>> model.rank == model2.rank
True
>>> sorted(model.userFactors.collect()) == sorted(model2.userFactors.collect())
True
>>> sorted(model.itemFactors.collect()) == sorted(model2.itemFactors.collect())
True
.. versionadded:: 1.4.0
"""
rank = Param(Params._dummy(), "rank", "rank of the factorization",
typeConverter=TypeConverters.toInt)
numUserBlocks = Param(Params._dummy(), "numUserBlocks", "number of user blocks",
typeConverter=TypeConverters.toInt)
numItemBlocks = Param(Params._dummy(), "numItemBlocks", "number of item blocks",
typeConverter=TypeConverters.toInt)
implicitPrefs = Param(Params._dummy(), "implicitPrefs", "whether to use implicit preference",
typeConverter=TypeConverters.toBoolean)
alpha = Param(Params._dummy(), "alpha", "alpha for implicit preference",
typeConverter=TypeConverters.toFloat)
userCol = Param(Params._dummy(), "userCol", "column name for user ids. Ids must be within " +
"the integer value range.", typeConverter=TypeConverters.toString)
itemCol = Param(Params._dummy(), "itemCol", "column name for item ids. Ids must be within " +
"the integer value range.", typeConverter=TypeConverters.toString)
ratingCol = Param(Params._dummy(), "ratingCol", "column name for ratings",
typeConverter=TypeConverters.toString)
nonnegative = Param(Params._dummy(), "nonnegative",
"whether to use nonnegative constraint for least squares",
typeConverter=TypeConverters.toBoolean)
intermediateStorageLevel = Param(Params._dummy(), "intermediateStorageLevel",
"StorageLevel for intermediate datasets. Cannot be 'NONE'.",
typeConverter=TypeConverters.toString)
finalStorageLevel = Param(Params._dummy(), "finalStorageLevel",
"StorageLevel for ALS model factors.",
typeConverter=TypeConverters.toString)
@keyword_only
def __init__(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None,
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK"):
"""
__init__(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10, \
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None, \
ratingCol="rating", nonnegative=False, checkpointInterval=10, \
intermediateStorageLevel="MEMORY_AND_DISK", \
finalStorageLevel="MEMORY_AND_DISK")
"""
super(ALS, self).__init__()
self._java_obj = self._new_java_obj("org.apache.spark.ml.recommendation.ALS", self.uid)
self._setDefault(rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item",
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK")
kwargs = self.__init__._input_kwargs
self.setParams(**kwargs)
@keyword_only
@since("1.4.0")
def setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None,
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK"):
"""
setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10, \
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None, \
ratingCol="rating", nonnegative=False, checkpointInterval=10, \
intermediateStorageLevel="MEMORY_AND_DISK", \
finalStorageLevel="MEMORY_AND_DISK")
Sets params for ALS.
"""
kwargs = self.setParams._input_kwargs
return self._set(**kwargs)
def _create_model(self, java_model):
return ALSModel(java_model)
@since("1.4.0")
def setRank(self, value):
"""
Sets the value of :py:attr:`rank`.
"""
return self._set(rank=value)
@since("1.4.0")
def getRank(self):
"""
Gets the value of rank or its default value.
"""
return self.getOrDefault(self.rank)
@since("1.4.0")
def setNumUserBlocks(self, value):
"""
Sets the value of :py:attr:`numUserBlocks`.
"""
return self._set(numUserBlocks=value)
@since("1.4.0")
def getNumUserBlocks(self):
"""
Gets the value of numUserBlocks or its default value.
"""
return self.getOrDefault(self.numUserBlocks)
@since("1.4.0")
def setNumItemBlocks(self, value):
"""
Sets the value of :py:attr:`numItemBlocks`.
"""
return self._set(numItemBlocks=value)
@since("1.4.0")
def getNumItemBlocks(self):
"""
Gets the value of numItemBlocks or its default value.
"""
return self.getOrDefault(self.numItemBlocks)
@since("1.4.0")
def setNumBlocks(self, value):
"""
Sets both :py:attr:`numUserBlocks` and :py:attr:`numItemBlocks` to the specific value.
"""
self._set(numUserBlocks=value)
return self._set(numItemBlocks=value)
@since("1.4.0")
def setImplicitPrefs(self, value):
"""
Sets the value of :py:attr:`implicitPrefs`.
"""
return self._set(implicitPrefs=value)
@since("1.4.0")
def getImplicitPrefs(self):
"""
Gets the value of implicitPrefs or its default value.
"""
return self.getOrDefault(self.implicitPrefs)
@since("1.4.0")
def setAlpha(self, value):
"""
Sets the value of :py:attr:`alpha`.
"""
return self._set(alpha=value)
@since("1.4.0")
def getAlpha(self):
"""
Gets the value of alpha or its default value.
"""
return self.getOrDefault(self.alpha)
@since("1.4.0")
def setUserCol(self, value):
"""
Sets the value of :py:attr:`userCol`.
"""
return self._set(userCol=value)
@since("1.4.0")
def getUserCol(self):
"""
Gets the value of userCol or its default value.
"""
return self.getOrDefault(self.userCol)
@since("1.4.0")
def setItemCol(self, value):
"""
Sets the value of :py:attr:`itemCol`.
"""
return self._set(itemCol=value)
@since("1.4.0")
def getItemCol(self):
"""
Gets the value of itemCol or its default value.
"""
return self.getOrDefault(self.itemCol)
@since("1.4.0")
def setRatingCol(self, value):
"""
Sets the value of :py:attr:`ratingCol`.
"""
return self._set(ratingCol=value)
@since("1.4.0")
def getRatingCol(self):
"""
Gets the value of ratingCol or its default value.
"""
return self.getOrDefault(self.ratingCol)
@since("1.4.0")
def setNonnegative(self, value):
"""
Sets the value of :py:attr:`nonnegative`.
"""
return self._set(nonnegative=value)
@since("1.4.0")
def getNonnegative(self):
"""
Gets the value of nonnegative or its default value.
"""
return self.getOrDefault(self.nonnegative)
@since("2.0.0")
def setIntermediateStorageLevel(self, value):
"""
Sets the value of :py:attr:`intermediateStorageLevel`.
"""
return self._set(intermediateStorageLevel=value)
@since("2.0.0")
def getIntermediateStorageLevel(self):
"""
Gets the value of intermediateStorageLevel or its default value.
"""
return self.getOrDefault(self.intermediateStorageLevel)
@since("2.0.0")
def setFinalStorageLevel(self, value):
"""
Sets the value of :py:attr:`finalStorageLevel`.
"""
return self._set(finalStorageLevel=value)
@since("2.0.0")
def getFinalStorageLevel(self):
"""
Gets the value of finalStorageLevel or its default value.
"""
return self.getOrDefault(self.finalStorageLevel)
class ALSModel(JavaModel, JavaMLWritable, JavaMLReadable):
"""
Model fitted by ALS.
.. versionadded:: 1.4.0
"""
@property
@since("1.4.0")
def rank(self):
"""rank of the matrix factorization model"""
return self._call_java("rank")
@property
@since("1.4.0")
def userFactors(self):
"""
a DataFrame that stores user factors in two columns: `id` and
`features`
"""
return self._call_java("userFactors")
@property
@since("1.4.0")
def itemFactors(self):
"""
a DataFrame that stores item factors in two columns: `id` and
`features`
"""
return self._call_java("itemFactors")
if __name__ == "__main__":
import doctest
import pyspark.ml.recommendation
from pyspark.sql import SparkSession
globs = pyspark.ml.recommendation.__dict__.copy()
# The small batch size here ensures that we see multiple batches,
# even in these small test examples:
spark = SparkSession.builder\
.master("local[2]")\
.appName("ml.recommendation tests")\
.getOrCreate()
sc = spark.sparkContext
globs['sc'] = sc
globs['spark'] = spark
import tempfile
temp_path = tempfile.mkdtemp()
globs['temp_path'] = temp_path
try:
(failure_count, test_count) = doctest.testmod(globs=globs, optionflags=doctest.ELLIPSIS)
spark.stop()
finally:
from shutil import rmtree
try:
rmtree(temp_path)
except OSError:
pass
if failure_count:
exit(-1)
| apache-2.0 |
wfxiang08/django185 | django/middleware/csrf.py | 86 | 8801 | """
Cross Site Request Forgery Middleware.
This module provides a middleware that implements protection
against request forgeries from other sites.
"""
from __future__ import unicode_literals
import logging
import re
from django.conf import settings
from django.core.urlresolvers import get_callable
from django.utils.cache import patch_vary_headers
from django.utils.crypto import constant_time_compare, get_random_string
from django.utils.encoding import force_text
from django.utils.http import same_origin
logger = logging.getLogger('django.request')
REASON_NO_REFERER = "Referer checking failed - no Referer."
REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
REASON_BAD_TOKEN = "CSRF token missing or incorrect."
CSRF_KEY_LENGTH = 32
def _get_failure_view():
"""
Returns the view to be used for CSRF rejections
"""
return get_callable(settings.CSRF_FAILURE_VIEW)
def _get_new_csrf_key():
return get_random_string(CSRF_KEY_LENGTH)
def get_token(request):
"""
Returns the CSRF token required for a POST form. The token is an
alphanumeric value.
A side effect of calling this function is to make the csrf_protect
decorator and the CsrfViewMiddleware add a CSRF cookie and a 'Vary: Cookie'
header to the outgoing response. For this reason, you may need to use this
function lazily, as is done by the csrf context processor.
"""
request.META["CSRF_COOKIE_USED"] = True
return request.META.get("CSRF_COOKIE", None)
def rotate_token(request):
"""
Changes the CSRF token in use for a request - should be done on login
for security purposes.
"""
request.META.update({
"CSRF_COOKIE_USED": True,
"CSRF_COOKIE": _get_new_csrf_key(),
})
def _sanitize_token(token):
# Allow only alphanum
if len(token) > CSRF_KEY_LENGTH:
return _get_new_csrf_key()
token = re.sub('[^a-zA-Z0-9]+', '', force_text(token))
if token == "":
# In case the cookie has been truncated to nothing at some point.
return _get_new_csrf_key()
return token
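# Illustrative behaviour of _sanitize_token (values assumed): a cookie value
# of 'abc!!123' is stripped to 'abc123', while an empty or over-long value is
# replaced with a fresh random key.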
class CsrfViewMiddleware(object):
"""
Middleware that requires a present and correct csrfmiddlewaretoken
for POST requests that have a CSRF cookie, and sets an outgoing
CSRF cookie.
This middleware should be used in conjunction with the csrf_token template
tag.
"""
# The _accept and _reject methods currently only exist for the sake of the
# requires_csrf_token decorator.
def _accept(self, request):
# Avoid checking the request twice by adding a custom attribute to
# request. This will be relevant when both decorator and middleware
# are used.
request.csrf_processing_done = True
return None
def _reject(self, request, reason):
logger.warning('Forbidden (%s): %s', reason, request.path,
extra={
'status_code': 403,
'request': request,
}
)
return _get_failure_view()(request, reason=reason)
def process_view(self, request, callback, callback_args, callback_kwargs):
if getattr(request, 'csrf_processing_done', False):
return None
try:
csrf_token = _sanitize_token(
request.COOKIES[settings.CSRF_COOKIE_NAME])
# Use same token next time
request.META['CSRF_COOKIE'] = csrf_token
except KeyError:
csrf_token = None
# Generate token and store it in the request, so it's
# available to the view.
request.META["CSRF_COOKIE"] = _get_new_csrf_key()
# Wait until request.META["CSRF_COOKIE"] has been manipulated before
# bailing out, so that get_token still works
if getattr(callback, 'csrf_exempt', False):
return None
# Assume that anything not defined as 'safe' by RFC2616 needs protection
if request.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
if getattr(request, '_dont_enforce_csrf_checks', False):
# Mechanism to turn off CSRF checks for test suite.
# It comes after the creation of CSRF cookies, so that
# everything else continues to work exactly the same
# (e.g. cookies are sent, etc.), but before any
# branches that call reject().
return self._accept(request)
if request.is_secure():
# Suppose user visits http://example.com/
# An active network attacker (man-in-the-middle, MITM) sends a
# POST form that targets https://example.com/detonate-bomb/ and
# submits it via JavaScript.
#
# The attacker will need to provide a CSRF cookie and token, but
# that's no problem for a MITM and the session-independent
# nonce we're using. So the MITM can circumvent the CSRF
# protection. This is true for any HTTP connection, but anyone
# using HTTPS expects better! For this reason, for
# https://example.com/ we need additional protection that treats
# http://example.com/ as completely untrusted. Under HTTPS,
# Barth et al. found that the Referer header is missing for
# same-domain requests in only about 0.2% of cases or less, so
# we can use strict Referer checking.
referer = force_text(
request.META.get('HTTP_REFERER'),
strings_only=True,
errors='replace'
)
if referer is None:
return self._reject(request, REASON_NO_REFERER)
# Note that request.get_host() includes the port.
good_referer = 'https://%s/' % request.get_host()
if not same_origin(referer, good_referer):
reason = REASON_BAD_REFERER % (referer, good_referer)
return self._reject(request, reason)
if csrf_token is None:
# No CSRF cookie. For POST requests, we insist on a CSRF cookie,
# and in this way we can avoid all CSRF attacks, including login
# CSRF.
return self._reject(request, REASON_NO_CSRF_COOKIE)
# Check non-cookie token for match.
request_csrf_token = ""
if request.method == "POST":
try:
request_csrf_token = request.POST.get('csrfmiddlewaretoken', '')
except IOError:
# Handle a broken connection before we've completed reading
# the POST data. process_view shouldn't raise any
# exceptions, so we'll ignore and serve the user a 403
# (assuming they're still listening, which they probably
# aren't because of the error).
pass
if request_csrf_token == "":
# Fall back to X-CSRFToken, to make things easier for AJAX,
# and possible for PUT/DELETE.
request_csrf_token = request.META.get('HTTP_X_CSRFTOKEN', '')
if not constant_time_compare(request_csrf_token, csrf_token):
return self._reject(request, REASON_BAD_TOKEN)
return self._accept(request)
def process_response(self, request, response):
if getattr(response, 'csrf_processing_done', False):
return response
# If CSRF_COOKIE is unset, then CsrfViewMiddleware.process_view was
# never called, probably because a request middleware returned a response
# (for example, contrib.auth redirecting to a login page).
if request.META.get("CSRF_COOKIE") is None:
return response
if not request.META.get("CSRF_COOKIE_USED", False):
return response
# Set the CSRF cookie even if it's already set, so we renew
# the expiry timer.
response.set_cookie(settings.CSRF_COOKIE_NAME,
request.META["CSRF_COOKIE"],
max_age=settings.CSRF_COOKIE_AGE,
domain=settings.CSRF_COOKIE_DOMAIN,
path=settings.CSRF_COOKIE_PATH,
secure=settings.CSRF_COOKIE_SECURE,
httponly=settings.CSRF_COOKIE_HTTPONLY
)
# Content varies with the CSRF cookie, so set the Vary header.
patch_vary_headers(response, ('Cookie',))
response.csrf_processing_done = True
return response
| bsd-3-clause |
foufou55/Sick-Beard | lib/imdb/helpers.py | 126 | 25097 | """
helpers module (imdb package).
This module provides functions not used directly by the imdb package,
but useful for IMDbPY-based programs.
Copyright 2006-2012 Davide Alberani <da@erlug.linux.it>
2012 Alberto Malagoli <albemala AT gmail.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
# XXX: find better names for the functions in this modules.
import re
import difflib
from cgi import escape
import gettext
from gettext import gettext as _
gettext.textdomain('imdbpy')
# The modClearRefs can be used to strip names and titles references from
# the strings in Movie and Person objects.
from imdb.utils import modClearRefs, re_titleRef, re_nameRef, \
re_characterRef, _tagAttr, _Container, TAGS_TO_MODIFY
from imdb import IMDb, imdbURL_movie_base, imdbURL_person_base, \
imdbURL_character_base
import imdb.locale
from imdb.linguistics import COUNTRY_LANG
from imdb.Movie import Movie
from imdb.Person import Person
from imdb.Character import Character
from imdb.Company import Company
from imdb.parser.http.utils import re_entcharrefssub, entcharrefs, \
subXMLRefs, subSGMLRefs
from imdb.parser.http.bsouplxml.etree import BeautifulSoup
# An URL, more or less.
_re_href = re.compile(r'(http://.+?)(?=\s|$)', re.I)
_re_hrefsub = _re_href.sub
def makeCgiPrintEncoding(encoding):
"""Make a function to pretty-print strings for the web."""
def cgiPrint(s):
"""Encode the given string using the %s encoding, and replace
chars outside the given charset with XML char references.""" % encoding
s = escape(s, quote=1)
if isinstance(s, unicode):
s = s.encode(encoding, 'xmlcharrefreplace')
return s
return cgiPrint
# cgiPrint uses the latin_1 encoding.
cgiPrint = makeCgiPrintEncoding('latin_1')
# Regular expression for %(varname)s substitutions.
re_subst = re.compile(r'%\((.+?)\)s')
# Regular expression for <if condition>....</if condition> clauses.
re_conditional = re.compile(r'<if\s+(.+?)\s*>(.+?)</if\s+\1\s*>')
def makeTextNotes(replaceTxtNotes):
"""Create a function useful to handle text[::optional_note] values.
replaceTxtNotes is a format string, which can include the following
values: %(text)s and %(notes)s.
Portions of the text can be conditionally excluded, if one of the
values is absent. E.g.: <if notes>[%(notes)s]</if notes> will be replaced
with '[notes]' if notes exists, or by an empty string otherwise.
The returned function is suitable to be passed as the applyToValues argument
of the makeObject2Txt function."""
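# Illustrative usage (assumed values):
#   fmt = makeTextNotes(u'%(text)s<if notes> [%(notes)s]</if notes>')
#   fmt(u'The Title::uncredited')  =>  u'The Title [uncredited]'
#   fmt(u'The Title')              =>  u'The Title'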
def _replacer(s):
outS = replaceTxtNotes
if not isinstance(s, (unicode, str)):
return s
ssplit = s.split('::', 1)
text = ssplit[0]
# Used to keep track of text and note existence.
keysDict = {}
if text:
keysDict['text'] = True
outS = outS.replace('%(text)s', text)
if len(ssplit) == 2:
keysDict['notes'] = True
outS = outS.replace('%(notes)s', ssplit[1])
else:
outS = outS.replace('%(notes)s', u'')
def _excludeFalseConditionals(matchobj):
# Return an empty string if the conditional is false/empty.
if matchobj.group(1) in keysDict:
return matchobj.group(2)
return u''
while re_conditional.search(outS):
outS = re_conditional.sub(_excludeFalseConditionals, outS)
return outS
return _replacer
def makeObject2Txt(movieTxt=None, personTxt=None, characterTxt=None,
companyTxt=None, joiner=' / ',
applyToValues=lambda x: x, _recurse=True):
""""Return a function useful to pretty-print Movie, Person,
Character and Company instances.
*movieTxt* -- how to format a Movie object.
*personTxt* -- how to format a Person object.
*characterTxt* -- how to format a Character object.
*companyTxt* -- how to format a Company object.
*joiner* -- string used to join a list of objects.
*applyToValues* -- function to apply to values.
*_recurse* -- if True (default) manage only the given object.
"""
# Some useful defaults.
if movieTxt is None:
movieTxt = '%(long imdb title)s'
if personTxt is None:
personTxt = '%(long imdb name)s'
if characterTxt is None:
characterTxt = '%(long imdb name)s'
if companyTxt is None:
companyTxt = '%(long imdb name)s'
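# Illustrative usage (result value assumed):
#   movie2txt = makeObject2Txt(movieTxt=u'%(title)s (%(year)s)')
#   movie2txt(movie)  =>  e.g. u'The Untouchables (1987)'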
def object2txt(obj, _limitRecursion=None):
"""Pretty-print objects."""
# Prevent unlimited recursion.
if _limitRecursion is None:
_limitRecursion = 0
elif _limitRecursion > 5:
return u''
_limitRecursion += 1
if isinstance(obj, (list, tuple)):
return joiner.join([object2txt(o, _limitRecursion=_limitRecursion)
for o in obj])
elif isinstance(obj, dict):
# XXX: not exactly nice, neither useful, I fear.
return joiner.join([u'%s::%s' %
(object2txt(k, _limitRecursion=_limitRecursion),
object2txt(v, _limitRecursion=_limitRecursion))
for k, v in obj.items()])
objData = {}
if isinstance(obj, Movie):
objData['movieID'] = obj.movieID
outs = movieTxt
elif isinstance(obj, Person):
objData['personID'] = obj.personID
outs = personTxt
elif isinstance(obj, Character):
objData['characterID'] = obj.characterID
outs = characterTxt
elif isinstance(obj, Company):
objData['companyID'] = obj.companyID
outs = companyTxt
else:
return obj
def _excludeFalseConditionals(matchobj):
# Return an empty string if the conditional is false/empty.
condition = matchobj.group(1)
proceed = obj.get(condition) or getattr(obj, condition, None)
if proceed:
return matchobj.group(2)
else:
return u''
while re_conditional.search(outs):
outs = re_conditional.sub(_excludeFalseConditionals, outs)
for key in re_subst.findall(outs):
value = obj.get(key) or getattr(obj, key, None)
if not isinstance(value, (unicode, str)):
if not _recurse:
if value:
value = unicode(value)
if value:
value = object2txt(value, _limitRecursion=_limitRecursion)
elif value:
value = applyToValues(unicode(value))
if not value:
value = u''
elif not isinstance(value, (unicode, str)):
value = unicode(value)
outs = outs.replace(u'%(' + key + u')s', value)
return outs
return object2txt
def makeModCGILinks(movieTxt, personTxt, characterTxt=None,
encoding='latin_1'):
"""Make a function used to pretty-print movies and persons refereces;
movieTxt and personTxt are the strings used for the substitutions.
movieTxt must contains %(movieID)s and %(title)s, while personTxt
must contains %(personID)s and %(name)s and characterTxt %(characterID)s
and %(name)s; characterTxt is optional, for backward compatibility."""
_cgiPrint = makeCgiPrintEncoding(encoding)
def modCGILinks(s, titlesRefs, namesRefs, characterRefs=None):
"""Substitute movies and persons references."""
if characterRefs is None: characterRefs = {}
# XXX: look ma'... more nested scopes! <g>
def _replaceMovie(match):
to_replace = match.group(1)
item = titlesRefs.get(to_replace)
if item:
movieID = item.movieID
to_replace = movieTxt % {'movieID': movieID,
'title': unicode(_cgiPrint(to_replace),
encoding,
'xmlcharrefreplace')}
return to_replace
def _replacePerson(match):
to_replace = match.group(1)
item = namesRefs.get(to_replace)
if item:
personID = item.personID
to_replace = personTxt % {'personID': personID,
'name': unicode(_cgiPrint(to_replace),
encoding,
'xmlcharrefreplace')}
return to_replace
def _replaceCharacter(match):
to_replace = match.group(1)
if characterTxt is None:
return to_replace
item = characterRefs.get(to_replace)
if item:
characterID = item.characterID
if characterID is None:
return to_replace
to_replace = characterTxt % {'characterID': characterID,
'name': unicode(_cgiPrint(to_replace),
encoding,
'xmlcharrefreplace')}
return to_replace
s = s.replace('<', '&lt;').replace('>', '&gt;')
s = _re_hrefsub(r'<a href="\1">\1</a>', s)
s = re_titleRef.sub(_replaceMovie, s)
s = re_nameRef.sub(_replacePerson, s)
s = re_characterRef.sub(_replaceCharacter, s)
return s
modCGILinks.movieTxt = movieTxt
modCGILinks.personTxt = personTxt
modCGILinks.characterTxt = characterTxt
return modCGILinks
# links to the imdb.com web site.
_movieTxt = '<a href="' + imdbURL_movie_base + 'tt%(movieID)s">%(title)s</a>'
_personTxt = '<a href="' + imdbURL_person_base + 'nm%(personID)s">%(name)s</a>'
_characterTxt = '<a href="' + imdbURL_character_base + \
'ch%(characterID)s">%(name)s</a>'
modHtmlLinks = makeModCGILinks(movieTxt=_movieTxt, personTxt=_personTxt,
characterTxt=_characterTxt)
modHtmlLinksASCII = makeModCGILinks(movieTxt=_movieTxt, personTxt=_personTxt,
characterTxt=_characterTxt,
encoding='ascii')
everyentcharrefs = entcharrefs.copy()
for k, v in {'lt':u'<','gt':u'>','amp':u'&','quot':u'"','apos':u'\''}.items():
everyentcharrefs[k] = v
everyentcharrefs['#%s' % ord(v)] = v
everyentcharrefsget = everyentcharrefs.get
re_everyentcharrefs = re.compile('&(%s|\#160|\#\d{1,5});' %
'|'.join(map(re.escape, everyentcharrefs)))
re_everyentcharrefssub = re_everyentcharrefs.sub
def _replAllXMLRef(match):
"""Replace the matched XML reference."""
ref = match.group(1)
value = everyentcharrefsget(ref)
if value is None:
if ref[0] == '#':
return unichr(int(ref[1:]))
else:
return ref
return value
def subXMLHTMLSGMLRefs(s):
"""Return the given string with XML/HTML/SGML entity and char references
replaced."""
return re_everyentcharrefssub(_replAllXMLRef, s)
def sortedSeasons(m):
"""Return a sorted list of seasons of the given series."""
seasons = m.get('episodes', {}).keys()
seasons.sort()
return seasons
def sortedEpisodes(m, season=None):
"""Return a sorted list of episodes of the given series,
considering only the specified season(s) (every season, if None)."""
episodes = []
seasons = season
if season is None:
seasons = sortedSeasons(m)
else:
if not isinstance(season, (tuple, list)):
seasons = [season]
for s in seasons:
eps_indx = m.get('episodes', {}).get(s, {}).keys()
eps_indx.sort()
for e in eps_indx:
episodes.append(m['episodes'][s][e])
return episodes
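# Illustrative usage: sortedEpisodes(series, season=1) returns season 1's
# episodes in episode-number order; with season=None every season is included.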
# Idea and portions of the code courtesy of none none (dclist at gmail.com)
_re_imdbIDurl = re.compile(r'\b(nm|tt|ch|co)([0-9]{7})\b')
def get_byURL(url, info=None, args=None, kwds=None):
"""Return a Movie, Person, Character or Company object for the given URL;
info is the info set to retrieve, args and kwds are respectively a list
and a dictionary of arguments to initialize the data access system.
Returns None if unable to correctly parse the url; can raise
exceptions if unable to retrieve the data."""
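# Illustrative usage (URL assumed):
#   movie = get_byURL('http://www.imdb.com/title/tt0094226/')
# would fetch the Movie object whose imdbID is 0094226.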
if args is None: args = []
if kwds is None: kwds = {}
ia = IMDb(*args, **kwds)
match = _re_imdbIDurl.search(url)
if not match:
return None
imdbtype = match.group(1)
imdbID = match.group(2)
if imdbtype == 'tt':
return ia.get_movie(imdbID, info=info)
elif imdbtype == 'nm':
return ia.get_person(imdbID, info=info)
elif imdbtype == 'ch':
return ia.get_character(imdbID, info=info)
elif imdbtype == 'co':
return ia.get_company(imdbID, info=info)
return None
# Idea and portions of code courtesy of Basil Shubin.
# Beware that these information are now available directly by
# the Movie/Person/Character instances.
def fullSizeCoverURL(obj):
"""Given an URL string or a Movie, Person or Character instance,
returns an URL to the full-size version of the cover/headshot,
or None otherwise. This function is obsolete: the same information
is available as the keys 'full-size cover url' and 'full-size headshot',
respectively for movies and persons/characters."""
if isinstance(obj, Movie):
coverUrl = obj.get('cover url')
elif isinstance(obj, (Person, Character)):
coverUrl = obj.get('headshot')
else:
coverUrl = obj
if not coverUrl:
return None
return _Container._re_fullsizeURL.sub('', coverUrl)
def keyToXML(key):
"""Return a key (the ones used to access information in Movie and
other classes instances) converted to the style of the XML output."""
return _tagAttr(key, '')[0]
def translateKey(key):
"""Translate a given key."""
return _(keyToXML(key))
# Maps tags to classes.
_MAP_TOP_OBJ = {
'person': Person,
'movie': Movie,
'character': Character,
'company': Company
}
# Tags to be converted to lists.
_TAGS_TO_LIST = dict([(x[0], None) for x in TAGS_TO_MODIFY.values()])
_TAGS_TO_LIST.update(_MAP_TOP_OBJ)
def tagToKey(tag):
"""Return the name of the tag, taking it from the 'key' attribute,
if present."""
keyAttr = tag.get('key')
if keyAttr:
if tag.get('keytype') == 'int':
keyAttr = int(keyAttr)
return keyAttr
return tag.name
def _valueWithType(tag, tagValue):
"""Return tagValue, handling some type conversions."""
tagType = tag.get('type')
if tagType == 'int':
tagValue = int(tagValue)
elif tagType == 'float':
tagValue = float(tagValue)
return tagValue
# Extra tags to get (if values were not already read from title/name).
_titleTags = ('imdbindex', 'kind', 'year')
_nameTags = ('imdbindex',)  # trailing comma: a one-element tuple, not a string
_companyTags = ('imdbindex', 'country')
def parseTags(tag, _topLevel=True, _as=None, _infoset2keys=None,
_key2infoset=None):
"""Recursively parse a tree of tags."""
# The returned object (usually a _Container subclass, but it can
# be a string, an int, a float, a list or a dictionary).
item = None
if _infoset2keys is None:
_infoset2keys = {}
if _key2infoset is None:
_key2infoset = {}
name = tagToKey(tag)
firstChild = tag.find(recursive=False)
tagStr = (tag.string or u'').strip()
if not tagStr and name == 'item':
# Handles 'item' tags containing text and a 'notes' sub-tag.
tagContent = tag.contents[0]
if isinstance(tagContent, BeautifulSoup.NavigableString):
tagStr = (unicode(tagContent) or u'').strip()
tagType = tag.get('type')
infoset = tag.get('infoset')
if infoset:
_key2infoset[name] = infoset
_infoset2keys.setdefault(infoset, []).append(name)
# Here we use tag.name to avoid tags like <item title="company">
if tag.name in _MAP_TOP_OBJ:
# One of the subclasses of _Container.
item = _MAP_TOP_OBJ[name]()
itemAs = tag.get('access-system')
if itemAs:
if not _as:
_as = itemAs
else:
itemAs = _as
item.accessSystem = itemAs
tagsToGet = []
theID = tag.get('id')
if name == 'movie':
item.movieID = theID
tagsToGet = _titleTags
theTitle = tag.find('title', recursive=False)
if theTitle:
item.set_title(theTitle.string)
theTitle.extract()
else:
if name == 'person':
item.personID = theID
tagsToGet = _nameTags
theName = tag.find('long imdb canonical name', recursive=False)
if not theName:
theName = tag.find('name', recursive=False)
elif name == 'character':
item.characterID = theID
tagsToGet = _nameTags
theName = tag.find('name', recursive=False)
elif name == 'company':
item.companyID = theID
tagsToGet = _companyTags
theName = tag.find('name', recursive=False)
if theName:
item.set_name(theName.string)
theName.extract()
for t in tagsToGet:
if t in item.data:
continue
dataTag = tag.find(t, recursive=False)
if dataTag:
item.data[tagToKey(dataTag)] = _valueWithType(dataTag,
dataTag.string)
if tag.notes:
item.notes = tag.notes.string
tag.notes.extract()
episodeOf = tag.find('episode-of', recursive=False)
if episodeOf:
item.data['episode of'] = parseTags(episodeOf, _topLevel=False,
_as=_as, _infoset2keys=_infoset2keys,
_key2infoset=_key2infoset)
episodeOf.extract()
cRole = tag.find('current-role', recursive=False)
if cRole:
cr = parseTags(cRole, _topLevel=False, _as=_as,
_infoset2keys=_infoset2keys, _key2infoset=_key2infoset)
item.currentRole = cr
cRole.extract()
# XXX: big assumption, here. What about Movie instances used
# as keys in dictionaries? What about other keys (season and
# episode number, for example?)
if not _topLevel:
#tag.extract()
return item
_adder = lambda key, value: item.data.update({key: value})
elif tagStr:
if tag.notes:
notes = (tag.notes.string or u'').strip()
if notes:
tagStr += u'::%s' % notes
else:
tagStr = _valueWithType(tag, tagStr)
return tagStr
elif firstChild:
firstChildName = tagToKey(firstChild)
if firstChildName in _TAGS_TO_LIST:
item = []
_adder = lambda key, value: item.append(value)
else:
item = {}
_adder = lambda key, value: item.update({key: value})
else:
item = {}
_adder = lambda key, value: item.update({name: value})
for subTag in tag(recursive=False):
subTagKey = tagToKey(subTag)
# Exclude dynamically generated keys.
if tag.name in _MAP_TOP_OBJ and subTagKey in item._additional_keys():
continue
subItem = parseTags(subTag, _topLevel=False, _as=_as,
_infoset2keys=_infoset2keys, _key2infoset=_key2infoset)
if subItem:
_adder(subTagKey, subItem)
if _topLevel and name in _MAP_TOP_OBJ:
# Add information about 'info sets', but only to the top-level object.
item.infoset2keys = _infoset2keys
item.key2infoset = _key2infoset
item.current_info = _infoset2keys.keys()
return item
def parseXML(xml):
"""Parse a XML string, returning an appropriate object (usually an
instance of a subclass of _Container."""
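# Illustrative usage (XML shape assumed):
#   parseXML(u'<movie id="0094226"><title>The Untouchables</title></movie>')
# would rebuild a Movie instance from the serialized tags.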
xmlObj = BeautifulSoup.BeautifulStoneSoup(xml,
convertEntities=BeautifulSoup.BeautifulStoneSoup.XHTML_ENTITIES)
if xmlObj:
mainTag = xmlObj.find()
if mainTag:
return parseTags(mainTag)
return None
_re_akas_lang = re.compile('(?:[(])([a-zA-Z]+?)(?: title[)])')
_re_akas_country = re.compile('\(.*?\)')
# akasLanguages, sortAKAsBySimilarity and getAKAsInLanguage code
# copyright of Alberto Malagoli (refactoring by Davide Alberani).
def akasLanguages(movie):
"""Given a movie, return a list of tuples in (lang, AKA) format;
lang can be None, if unable to detect."""
lang_and_aka = []
akas = set((movie.get('akas') or []) +
(movie.get('akas from release info') or []))
for aka in akas:
# split aka
aka = aka.encode('utf8').split('::')
# sometimes there is no countries information
if len(aka) == 2:
# search for something like "(... title)" where ... is a language
language = _re_akas_lang.search(aka[1])
if language:
language = language.groups()[0]
else:
# split countries using , and keep only the first one (it's sufficient)
country = aka[1].split(',')[0]
# remove parenthesis
country = _re_akas_country.sub('', country).strip()
# given the country, get corresponding language from dictionary
language = COUNTRY_LANG.get(country)
else:
language = None
lang_and_aka.append((language, aka[0].decode('utf8')))
return lang_and_aka
def sortAKAsBySimilarity(movie, title, _titlesOnly=True, _preferredLang=None):
"""Return a list of movie AKAs, sorted by their similarity to
the given title.
If _titlesOnly is not True, similarity information is returned.
If _preferredLang is specified, AKAs in the given language will get
a higher score.
The return is a list of titles, or a list of tuples if _titlesOnly is False."""
language = movie.guessLanguage()
# estimate string distance between current title and given title
m_title = movie['title'].lower()
l_title = title.lower()
if isinstance(l_title, unicode):
l_title = l_title.encode('utf8')
scores = []
score = difflib.SequenceMatcher(None, m_title.encode('utf8'), l_title).ratio()
# set original title and corresponding score as the best match for given title
scores.append((score, movie['title'], None))
for language, aka in akasLanguages(movie):
# estimate string distance between current title and given title
m_title = aka.lower()
if isinstance(m_title, unicode):
m_title = m_title.encode('utf8')
score = difflib.SequenceMatcher(None, m_title, l_title).ratio()
# if current language is the same as the given one, increase score
if _preferredLang and _preferredLang == language:
score += 1
scores.append((score, aka, language))
scores.sort(reverse=True)
if _titlesOnly:
return [x[1] for x in scores]
return scores
def getAKAsInLanguage(movie, lang, _searchedTitle=None):
"""Return a list of AKAs of a movie, in the specified language.
If _searchedTitle is given, the AKAs are sorted by their similarity
to it."""
akas = []
for language, aka in akasLanguages(movie):
if lang == language:
akas.append(aka)
if _searchedTitle:
scores = []
if isinstance(_searchedTitle, unicode):
_searchedTitle = _searchedTitle.encode('utf8')
for aka in akas:
m_aka = aka
if isinstance(m_aka, unicode):
m_aka = m_aka.encode('utf8')
scores.append((difflib.SequenceMatcher(None, m_aka.lower(),
_searchedTitle.lower()).ratio(), aka))
scores.sort(reverse=True)
akas = [x[1] for x in scores]
return akas
| gpl-3.0 |
RhysC/ClickstreamApi | infrastructure/create_cloudformation_template.py | 1 | 1317 | import jinja2
import os
import string
script_dir = os.path.dirname(os.path.realpath(__file__))
src_dir = os.path.join(script_dir, '../src')
params = {}
with open(script_dir + '/swagger.yaml.j2') as swagger_j2File:
swagger_template = jinja2.Template(swagger_j2File.read())
swagger_def = swagger_template.render(params=params)
# Convert the pretty swagger file to a nested yaml node in the CF template.
# To do this we need to appropriately indent
swagger_def = string.replace(swagger_def, "\n", "\n ")
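# NOTE (assumption): the whitespace appended after each newline here must match
# the indentation depth of the node in Clickstream.cf.yaml.j2 where the
# rendered swagger document is spliced in, or the output will not nest as YAML.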
params["swagger_def"] = swagger_def
with open(src_dir + '/index.js') as lambda_file:
# change to read each line and warp in double quotes as per:
# https://forrestbrazeal.com/2016/06/06/aws-lambda-cookbook/#highlighter_861602
lines = []
for line in lambda_file.readlines():
if line.strip() != "":
line = string.replace(line, "\n", "")
lines.append(" - \"{}\"".format(line))
lambda_code = string.join(lines, "\n")
params["lambda_code"] = lambda_code
with open(script_dir + '/Clickstream.cf.yaml.j2') as cf_j2File:
cf_template = jinja2.Template(cf_j2File.read())
result = cf_template.render(params=params)
with open(script_dir + '/Clickstream.cf.yaml', 'w') as yamlFile:
yamlFile.write(result)
| apache-2.0 |
gannetson/django | tests/force_insert_update/tests.py | 381 | 2213 | from __future__ import unicode_literals
from django.db import DatabaseError, IntegrityError, transaction
from django.test import TestCase
from .models import (
Counter, InheritedCounter, ProxyCounter, SubCounter, WithCustomPK,
)
class ForceTests(TestCase):
def test_force_update(self):
c = Counter.objects.create(name="one", value=1)
# The normal case
c.value = 2
c.save()
# Same thing, via an update
c.value = 3
c.save(force_update=True)
# Won't work because force_update and force_insert are mutually
# exclusive
c.value = 4
with self.assertRaises(ValueError):
c.save(force_insert=True, force_update=True)
# Try to update something that doesn't have a primary key in the first
# place.
c1 = Counter(name="two", value=2)
with self.assertRaises(ValueError):
with transaction.atomic():
c1.save(force_update=True)
c1.save(force_insert=True)
# Won't work because we can't insert a pk of the same value.
c.value = 5
with self.assertRaises(IntegrityError):
with transaction.atomic():
c.save(force_insert=True)
# Trying to update should still fail, even with manual primary keys, if
# the data isn't in the database already.
obj = WithCustomPK(name=1, value=1)
with self.assertRaises(DatabaseError):
with transaction.atomic():
obj.save(force_update=True)
class InheritanceTests(TestCase):
def test_force_update_on_inherited_model(self):
a = InheritedCounter(name="count", value=1, tag="spam")
a.save()
a.save(force_update=True)
def test_force_update_on_proxy_model(self):
a = ProxyCounter(name="count", value=1)
a.save()
a.save(force_update=True)
def test_force_update_on_inherited_model_without_fields(self):
'''
Issue 13864: force_update fails on subclassed models, if they don't
specify custom fields.
'''
a = SubCounter(name="count", value=1)
a.save()
a.value = 2
a.save(force_update=True)
| bsd-3-clause |
mwiebe/blaze | blaze/compute/ckernel/tests/test_wrapped_ckernel.py | 2 | 3500 | from __future__ import absolute_import, division, print_function
import unittest
import ctypes
import sys
from blaze.compute import ckernel
from blaze.py2help import skipIf
from dynd import nd, ndt, _lowlevel
# On 64-bit windows python 2.6 appears to have
# ctypes bugs in the C calling convention, so
# disable these tests.
win64_py26 = (sys.platform == 'win32' and
ctypes.sizeof(ctypes.c_void_p) == 8 and
sys.version_info[:2] <= (2, 6))
class TestWrappedCKernel(unittest.TestCase):
@skipIf(win64_py26, 'py26 win64 ctypes is buggy')
def test_ctypes_callback(self):
# Create a ckernel directly with ctypes
def my_kernel_func(dst_ptr, src_ptr, kdp):
dst = ctypes.c_int32.from_address(dst_ptr)
src = ctypes.c_double.from_address(src_ptr)
dst.value = int(src.value * 3.5)
my_callback = _lowlevel.UnarySingleOperation(my_kernel_func)
with _lowlevel.ckernel.CKernelBuilder() as ckb:
# The ctypes callback object is both the function and the owner
ckernel.wrap_ckernel_func(ckb, 0, my_callback, my_callback)
# Delete the callback to make sure the ckernel is holding a reference
del my_callback
# Make some memory and call the kernel
src_val = ctypes.c_double(4.0)
dst_val = ctypes.c_int32(-1)
ck = ckb.ckernel(_lowlevel.UnarySingleOperation)
ck(ctypes.addressof(dst_val), ctypes.addressof(src_val))
self.assertEqual(dst_val.value, 14)
@skipIf(win64_py26, 'py26 win64 ctypes is buggy')
def test_ctypes_callback_deferred(self):
# Create a deferred ckernel via a closure
def instantiate_ckernel(out_ckb, ckb_offset, types, meta, kerntype):
out_ckb = _lowlevel.CKernelBuilder(out_ckb)
def my_kernel_func_single(dst_ptr, src_ptr, kdp):
dst = ctypes.c_int32.from_address(dst_ptr)
src = ctypes.c_double.from_address(src_ptr[0])
dst.value = int(src.value * 3.5)
def my_kernel_func_strided(dst_ptr, dst_stride, src_ptr, src_stride, count, kdp):
src_ptr0 = src_ptr[0]
src_stride0 = src_stride[0]
for i in range(count):
my_kernel_func_single(dst_ptr, [src_ptr0], kdp)
dst_ptr += dst_stride
src_ptr0 += src_stride0
if kerntype == 'single':
kfunc = _lowlevel.ExprSingleOperation(my_kernel_func_single)
else:
kfunc = _lowlevel.ExprStridedOperation(my_kernel_func_strided)
return ckernel.wrap_ckernel_func(out_ckb, ckb_offset,
kfunc, kfunc)
ckd = _lowlevel.ckernel_deferred_from_pyfunc(instantiate_ckernel,
[ndt.int32, ndt.float64])
# Test calling the ckd
out = nd.empty(ndt.int32)
in0 = nd.array(4.0, type=ndt.float64)
ckd.__call__(out, in0)
self.assertEqual(nd.as_py(out), 14)
# Also call it lifted
ckd_lifted = _lowlevel.lift_ckernel_deferred(ckd,
['2 * var * int32', '2 * var * float64'])
out = nd.empty('2 * var * int32')
in0 = nd.array([[1.0, 3.0, 2.5], [1.25, -1.5]], type='2 * var * float64')
ckd_lifted.__call__(out, in0)
self.assertEqual(nd.as_py(out), [[3, 10, 8], [4, -5]])
if __name__ == '__main__':
unittest.main()
| bsd-3-clause |
jfinkels/ophot | ophot/filters.py | 1 | 1586 | # Copyright 2011 Jeffrey Finkelstein
#
# This file is part of Ophot.
#
# Ophot is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ophot is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Ophot. If not, see <http://www.gnu.org/licenses/>.
"""Provides custom filters for Jinja templates."""
# imports for compatibility with future python versions
from __future__ import absolute_import
from __future__ import division
# imports from built-in modules
import re
# imports from third-party modules
from jinja2 import evalcontextfilter
from jinja2 import Markup
# imports from this application
from .app import app
# a regular expression which matches email addresses, from
# http://www.regular-expressions.info/email.html
EMAILS = re.compile(r'\b([A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4})\b', re.I)
@app.template_filter()
@evalcontextfilter
def link_emails(eval_context, data):
"""Replaces email addresses in the specified *data* string with HTML mailto
links.
"""
result = EMAILS.sub(r'<a href="mailto:\1">\1</a>', data)
if eval_context.autoescape:
result = Markup(result)
return result
| agpl-3.0 |
yeewang/omniORB | src/lib/omniORB/python/omniidl_be/cxx/header/poa.py | 4 | 5480 | # -*- python -*-
# Package : omniidl
# poa.py Created on: 1999/11/4
# Author : David Scott (djs)
#
# Copyright (C) 2003-2011 Apasphere Ltd
# Copyright (C) 1999 AT&T Laboratories Cambridge
#
# This file is part of omniidl.
#
# omniidl is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
#
# Description:
"""Produce the main header POA definitions for the C++ backend"""
from omniidl_be.cxx import id, config, ast
from omniidl_be.cxx.header import tie, template
import sys
self = sys.modules[__name__]
stream = None
def init(_stream):
global stream, __nested
__nested = 0
stream = _stream
return self
def POA_prefix():
if not self.__nested:
return "POA_"
return ""
# Control arrives here
#
def visitAST(node):
self.__completedModules = {}
for n in node.declarations():
if ast.shouldGenerateCodeForDecl(n):
n.accept(self)
def visitModule(node):
if self.__completedModules.has_key(node):
return
self.__completedModules[node] = 1
name = id.mapID(node.identifier())
if not config.state['Fragment']:
stream.out(template.POA_module_begin,
name = name,
POA_prefix = POA_prefix())
stream.inc_indent()
nested = self.__nested
self.__nested = 1
for n in node.definitions():
n.accept(self)
# Splice the continuations together if splice-modules flag is set
# (This might be unnecessary as there (seems to be) no relationship
# between things in the POA module- they all point back into the main
# module?)
if config.state['Splice Modules']:
for c in node.continuations():
for n in c.definitions():
n.accept(self)
self.__completedModules[c] = 1
self.__nested = nested
if not config.state['Fragment']:
stream.dec_indent()
stream.out(template.POA_module_end)
return
def visitInterface(node):
if node.local():
# No POA class for local interfaces
return
iname = id.mapID(node.identifier())
environment = id.lookup(node)
scopedName = id.Name(node.scopedName())
impl_scopedName = scopedName.prefix("_impl_")
scopedID = scopedName.fullyQualify()
impl_scopedID = impl_scopedName.fullyQualify()
POA_name = POA_prefix() + iname
# deal with inheritance
inherits = []
for i in map(ast.remove_ast_typedefs, node.inherits()):
name = id.Name(i.scopedName())
i_POA_name = name.unambiguous(environment)
if name.relName(environment) == None:
# we need to fully qualify from the root
i_POA_name = "::POA_" + name.fullyQualify(environment)
elif name.relName(environment) == i.scopedName():
# fully qualified (but not from root) POA name has a POA_ on the
# front
i_POA_name = "POA_" + i_POA_name
inherits.append("public virtual " + i_POA_name)
# Note that RefCountServantBase is a mixin class specified by the
# implementor, not generated by the idl compiler.
if node.inherits() == []:
inherits.append("public virtual ::PortableServer::ServantBase")
inherits_str = ",\n ".join(inherits)
# build the normal POA class first
stream.out(template.POA_interface,
POA_name = POA_name,
scopedID = scopedID,
impl_scopedID = impl_scopedID,
inherits = inherits_str)
if config.state['Normal Tie']:
# Normal tie templates, inline (so already in relevant POA_
# module)
poa_name = ""
if len(scopedName.fullName()) == 1:
poa_name = "POA_"
poa_name = poa_name + scopedName.simple()
tie_name = poa_name + "_tie"
tie.write_template(tie_name, poa_name, node, stream)
return
def visitTypedef(node):
pass
def visitEnum(node):
pass
def visitStruct(node):
pass
def visitStructForward(node):
pass
def visitUnion(node):
pass
def visitUnionForward(node):
pass
def visitForward(node):
pass
def visitConst(node):
pass
def visitDeclarator(node):
pass
def visitMember(node):
pass
def visitException(node):
pass
def visitValue(node):
from omniidl_be.cxx import value
v = value.getValueType(node)
v.poa_module_decls(stream, self)
def visitValueForward(node):
from omniidl_be.cxx import value
v = value.getValueType(node)
v.poa_module_decls(stream, self)
def visitValueAbs(node):
from omniidl_be.cxx import value
v = value.getValueType(node)
v.poa_module_decls(stream, self)
def visitValueBox(node):
from omniidl_be.cxx import value
v = value.getValueType(node)
v.poa_module_decls(stream, self)
| gpl-2.0 |
frt-arch/taiga-back | taiga/projects/notifications/serializers.py | 3 | 1221 | # Copyright (C) 2014 Andrey Antukh <niwi@niwi.be>
# Copyright (C) 2014 Jesús Espino <jespinog@gmail.com>
# Copyright (C) 2014 David Barragán <bameda@dbarragan.com>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import json
from rest_framework import serializers
from . import models
class NotifyPolicySerializer(serializers.ModelSerializer):
project_name = serializers.SerializerMethodField("get_project_name")
class Meta:
model = models.NotifyPolicy
fields = ('id', 'project', 'project_name', 'notify_level')
def get_project_name(self, obj):
return obj.project.name
| agpl-3.0 |
spacelis/anatool | anatool/dm/result.py | 1 | 4660 | #!python
# -*- coding: utf-8 -*-
"""File: result.py
Description:
Analysis tools for WEKA prediction results
History:
0.1.0 The first version.
"""
__version__ = '0.1.0'
__author__ = 'SpaceLis'
import json
from operator import itemgetter
from anatool.analyze import weka
from anatool.analyze.text_util import fourq_filter, csv_filter
import dataset
def confusion_matrix(log, plcf, col):
"""generate type confusion matrix"""
ins_lst = weka.log_parse(log)
classes = set()
places = dataset.DataItem()
with open(plcf) as fplc:
for line in fplc:
place = json.loads(line)
places[place[col]] = place
classes.add(place[col])
print classes
cmat = dict()
for ref in classes:
cmat[ref] = dict()
for prd in classes:
cmat[ref][prd] = list()
for ins in ins_lst:
ref = places[ins['refN']][col]
hyp = places[ins['prdN']][col]
cmat[ref][hyp].append(int(ins['id']))
return cmat
def count_matrix(mat):
"""return the count of each list in matrix"""
rmat = dict()
for i in mat:
rmat[i] = dict()
print '%3d' % int(i),
for j in mat:
rmat[i][j] = len(mat[i][j])
print '%4d' % rmat[i][j],
print
return rmat
def err_analyze(dst, mat, twtf, plcf, col):
"""output csv for mat"""
twt_lst = dataset.Dataset()
with open(twtf) as ftwt:
for line in ftwt:
twt_lst.append(json.loads(line))
places = dataset.DataItem()
with open(plcf) as fplc:
for line in fplc:
place = json.loads(line)
places[place[col]] = place
with open(dst, 'w') as fdst:
print >>fdst, '"Ref POI", "Hyp POI", "Text", "Ref Genre", "Hyp Genre", "Ref SGenre", "Hyp SGenre"'
for i in mat:
for j in mat:
#if i != j:
for item in mat[i][j]:
# ref hyp text rcat hcat rsc hsc
try:
print >>fdst, '"{0}","{1}","{2}","{3}","{4}","{5}","{6}"' \
.format(csv_filter(places[i]['name']),csv_filter(places[j]['name']), \
fourq_filter(csv_filter(twt_lst[item]['text'])), \
places[i]['category'],places[j]['category'], \
places[i]['super_category'], places[j]['super_category'])
                    except Exception:
                        # skip rows with missing or malformed place/tweet data
                        pass
def catelist(src, col):
"""generate list of col for picturing confusion matrix"""
places = dict()
with open(src) as fsrc:
for line in fsrc:
place = json.loads(line)
places[place[col]] = place['label']
print "'" + "','".join(sorted(places, key=places.__getitem__)) + "'"
def thirgest(log):
"""The accuracy in first three"""
threshold = 3
ins_lst = weka.log_parse(log)
pos, cnt = 0, 0
for ins in ins_lst:
#print ins['score']
rnk = sorted(zip(ins['score'], range(1, len(ins['score']) + 1)), \
key=lambda x: x[0], reverse=True)
#print rnk
for i in range(threshold):
if rnk[i][1] == ins['ref']:
pos += 1
break
#print pos
#if cnt>10: return
cnt += 1
return pos/float(cnt)
if __name__ == '__main__':
#cmat = confusion_matrix('../weka2/chicago_no_fold.log', '../weka2/chicago.arff.place', 'label')
#err_analyze('../weka2/chicago.errxs.csv', cmat, '../weka2/chicago.arff.tweet', '../weka2/chicago.arff.place', 'label')
#cmat = confusion_matrix('../weka2/la_no_fold.log', '../weka2/la.arff.place', 'label')
#err_analyze('../weka2/la.errxs.csv', cmat, '../weka2/la.arff.tweet', '../weka2/la.arff.place', 'label')
#cmat = confusion_matrix('../weka2/new_york_no_fold.log', '../weka2/new_york.arff.place', 'label')
#err_analyze('../weka2/new_york.errxs.csv', cmat, '../weka2/new_york.arff.tweet', '../weka2/new_york.arff.place', 'label')
#cmat = confusion_matrix('../weka2/san_fran_no_fold.log', '../weka2/san_fran.arff.place', 'label')
#err_analyze('../weka2/san_fran.errxs.csv', cmat, '../weka2/san_fran.arff.tweet', '../weka2/san_fran.arff.place', 'label')
#catelist('../weka2/chicago_type.arff.place', 'category')
#catelist('../weka2/new_york_type.arff.place', 'category')
#catelist('../weka2/la_type.arff.place', 'category')
#catelist('../weka2/san_fran_type.arff.place', 'category')
| mit |
maohongyuan/kbengine | kbe/res/scripts/common/Lib/test/test_fileinput.py | 81 | 34338 | '''
Tests for fileinput module.
Nick Mathewson
'''
import os
import sys
import re
import fileinput
import collections
import builtins
import unittest
try:
import bz2
except ImportError:
bz2 = None
try:
import gzip
except ImportError:
gzip = None
from io import BytesIO, StringIO
from fileinput import FileInput, hook_encoded
from test.support import verbose, TESTFN, run_unittest, check_warnings
from test.support import unlink as safe_unlink
from unittest import mock
# The fileinput module has 2 interfaces: the FileInput class which does
# all the work, and a few functions (input, etc.) that use a global _state
# variable.
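# A minimal sketch of the two interfaces (typical usage, not part of the test
# suite; the file names are placeholders):
#
#     fi = FileInput(files=('a.txt', 'b.txt'))   # class interface
#     for line in fi:
#         process(line)
#
#     for line in fileinput.input(files=('a.txt', 'b.txt')):  # global interface,
#         process(line)                                       # backed by _state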
# Write lines (a list of lines) to temp file number i, and return the
# temp file's name.
def writeTmp(i, lines, mode='w'): # opening in text mode is the default
name = TESTFN + str(i)
f = open(name, mode)
for line in lines:
f.write(line)
f.close()
return name
def remove_tempfiles(*names):
for name in names:
if name:
safe_unlink(name)
class BufferSizesTests(unittest.TestCase):
def test_buffer_sizes(self):
# First, run the tests with default and teeny buffer size.
for round, bs in (0, 0), (1, 30):
t1 = t2 = t3 = t4 = None
try:
t1 = writeTmp(1, ["Line %s of file 1\n" % (i+1) for i in range(15)])
t2 = writeTmp(2, ["Line %s of file 2\n" % (i+1) for i in range(10)])
t3 = writeTmp(3, ["Line %s of file 3\n" % (i+1) for i in range(5)])
t4 = writeTmp(4, ["Line %s of file 4\n" % (i+1) for i in range(1)])
self.buffer_size_test(t1, t2, t3, t4, bs, round)
finally:
remove_tempfiles(t1, t2, t3, t4)
def buffer_size_test(self, t1, t2, t3, t4, bs=0, round=0):
pat = re.compile(r'LINE (\d+) OF FILE (\d+)')
start = 1 + round*6
if verbose:
print('%s. Simple iteration (bs=%s)' % (start+0, bs))
fi = FileInput(files=(t1, t2, t3, t4), bufsize=bs)
lines = list(fi)
fi.close()
self.assertEqual(len(lines), 31)
self.assertEqual(lines[4], 'Line 5 of file 1\n')
self.assertEqual(lines[30], 'Line 1 of file 4\n')
self.assertEqual(fi.lineno(), 31)
self.assertEqual(fi.filename(), t4)
if verbose:
print('%s. Status variables (bs=%s)' % (start+1, bs))
fi = FileInput(files=(t1, t2, t3, t4), bufsize=bs)
s = "x"
while s and s != 'Line 6 of file 2\n':
s = fi.readline()
self.assertEqual(fi.filename(), t2)
self.assertEqual(fi.lineno(), 21)
self.assertEqual(fi.filelineno(), 6)
self.assertFalse(fi.isfirstline())
self.assertFalse(fi.isstdin())
if verbose:
print('%s. Nextfile (bs=%s)' % (start+2, bs))
fi.nextfile()
self.assertEqual(fi.readline(), 'Line 1 of file 3\n')
self.assertEqual(fi.lineno(), 22)
fi.close()
if verbose:
print('%s. Stdin (bs=%s)' % (start+3, bs))
fi = FileInput(files=(t1, t2, t3, t4, '-'), bufsize=bs)
savestdin = sys.stdin
try:
sys.stdin = StringIO("Line 1 of stdin\nLine 2 of stdin\n")
lines = list(fi)
self.assertEqual(len(lines), 33)
self.assertEqual(lines[32], 'Line 2 of stdin\n')
self.assertEqual(fi.filename(), '<stdin>')
fi.nextfile()
finally:
sys.stdin = savestdin
if verbose:
print('%s. Boundary conditions (bs=%s)' % (start+4, bs))
fi = FileInput(files=(t1, t2, t3, t4), bufsize=bs)
self.assertEqual(fi.lineno(), 0)
self.assertEqual(fi.filename(), None)
fi.nextfile()
self.assertEqual(fi.lineno(), 0)
self.assertEqual(fi.filename(), None)
if verbose:
print('%s. Inplace (bs=%s)' % (start+5, bs))
savestdout = sys.stdout
try:
fi = FileInput(files=(t1, t2, t3, t4), inplace=1, bufsize=bs)
for line in fi:
line = line[:-1].upper()
print(line)
fi.close()
finally:
sys.stdout = savestdout
fi = FileInput(files=(t1, t2, t3, t4), bufsize=bs)
for line in fi:
self.assertEqual(line[-1], '\n')
m = pat.match(line[:-1])
self.assertNotEqual(m, None)
self.assertEqual(int(m.group(1)), fi.filelineno())
fi.close()
class UnconditionallyRaise:
def __init__(self, exception_type):
self.exception_type = exception_type
self.invoked = False
def __call__(self, *args, **kwargs):
self.invoked = True
raise self.exception_type()
class FileInputTests(unittest.TestCase):
def test_zero_byte_files(self):
t1 = t2 = t3 = t4 = None
try:
t1 = writeTmp(1, [""])
t2 = writeTmp(2, [""])
t3 = writeTmp(3, ["The only line there is.\n"])
t4 = writeTmp(4, [""])
fi = FileInput(files=(t1, t2, t3, t4))
line = fi.readline()
self.assertEqual(line, 'The only line there is.\n')
self.assertEqual(fi.lineno(), 1)
self.assertEqual(fi.filelineno(), 1)
self.assertEqual(fi.filename(), t3)
line = fi.readline()
self.assertFalse(line)
self.assertEqual(fi.lineno(), 1)
self.assertEqual(fi.filelineno(), 0)
self.assertEqual(fi.filename(), t4)
fi.close()
finally:
remove_tempfiles(t1, t2, t3, t4)
def test_files_that_dont_end_with_newline(self):
t1 = t2 = None
try:
t1 = writeTmp(1, ["A\nB\nC"])
t2 = writeTmp(2, ["D\nE\nF"])
fi = FileInput(files=(t1, t2))
lines = list(fi)
self.assertEqual(lines, ["A\n", "B\n", "C", "D\n", "E\n", "F"])
self.assertEqual(fi.filelineno(), 3)
self.assertEqual(fi.lineno(), 6)
finally:
remove_tempfiles(t1, t2)
## def test_unicode_filenames(self):
## # XXX A unicode string is always returned by writeTmp.
## # So is this needed?
## try:
## t1 = writeTmp(1, ["A\nB"])
## encoding = sys.getfilesystemencoding()
## if encoding is None:
## encoding = 'ascii'
## fi = FileInput(files=str(t1, encoding))
## lines = list(fi)
## self.assertEqual(lines, ["A\n", "B"])
## finally:
## remove_tempfiles(t1)
def test_fileno(self):
t1 = t2 = None
try:
t1 = writeTmp(1, ["A\nB"])
t2 = writeTmp(2, ["C\nD"])
fi = FileInput(files=(t1, t2))
self.assertEqual(fi.fileno(), -1)
            line = next(fi)
self.assertNotEqual(fi.fileno(), -1)
fi.nextfile()
self.assertEqual(fi.fileno(), -1)
line = list(fi)
self.assertEqual(fi.fileno(), -1)
finally:
remove_tempfiles(t1, t2)
def test_opening_mode(self):
try:
# invalid mode, should raise ValueError
fi = FileInput(mode="w")
self.fail("FileInput should reject invalid mode argument")
except ValueError:
pass
t1 = None
try:
# try opening in universal newline mode
t1 = writeTmp(1, [b"A\nB\r\nC\rD"], mode="wb")
with check_warnings(('', DeprecationWarning)):
fi = FileInput(files=t1, mode="U")
with check_warnings(('', DeprecationWarning)):
lines = list(fi)
self.assertEqual(lines, ["A\n", "B\n", "C\n", "D"])
finally:
remove_tempfiles(t1)
def test_stdin_binary_mode(self):
with mock.patch('sys.stdin') as m_stdin:
m_stdin.buffer = BytesIO(b'spam, bacon, sausage, and spam')
fi = FileInput(files=['-'], mode='rb')
lines = list(fi)
self.assertEqual(lines, [b'spam, bacon, sausage, and spam'])
def test_file_opening_hook(self):
try:
# cannot use openhook and inplace mode
fi = FileInput(inplace=1, openhook=lambda f, m: None)
self.fail("FileInput should raise if both inplace "
"and openhook arguments are given")
except ValueError:
pass
try:
fi = FileInput(openhook=1)
self.fail("FileInput should check openhook for being callable")
except ValueError:
pass
class CustomOpenHook:
def __init__(self):
self.invoked = False
def __call__(self, *args):
self.invoked = True
return open(*args)
t = writeTmp(1, ["\n"])
self.addCleanup(remove_tempfiles, t)
custom_open_hook = CustomOpenHook()
with FileInput([t], openhook=custom_open_hook) as fi:
fi.readline()
self.assertTrue(custom_open_hook.invoked, "openhook not invoked")
def test_readline(self):
with open(TESTFN, 'wb') as f:
f.write(b'A\nB\r\nC\r')
# Fill TextIOWrapper buffer.
f.write(b'123456789\n' * 1000)
# Issue #20501: readline() shouldn't read whole file.
f.write(b'\x80')
self.addCleanup(safe_unlink, TESTFN)
with FileInput(files=TESTFN,
openhook=hook_encoded('ascii'), bufsize=8) as fi:
try:
self.assertEqual(fi.readline(), 'A\n')
self.assertEqual(fi.readline(), 'B\n')
self.assertEqual(fi.readline(), 'C\n')
except UnicodeDecodeError:
self.fail('Read to end of file')
with self.assertRaises(UnicodeDecodeError):
# Read to the end of file.
list(fi)
def test_context_manager(self):
try:
t1 = writeTmp(1, ["A\nB\nC"])
t2 = writeTmp(2, ["D\nE\nF"])
with FileInput(files=(t1, t2)) as fi:
lines = list(fi)
self.assertEqual(lines, ["A\n", "B\n", "C", "D\n", "E\n", "F"])
self.assertEqual(fi.filelineno(), 3)
self.assertEqual(fi.lineno(), 6)
self.assertEqual(fi._files, ())
finally:
remove_tempfiles(t1, t2)
def test_close_on_exception(self):
try:
t1 = writeTmp(1, [""])
with FileInput(files=t1) as fi:
raise OSError
except OSError:
self.assertEqual(fi._files, ())
finally:
remove_tempfiles(t1)
def test_empty_files_list_specified_to_constructor(self):
with FileInput(files=[]) as fi:
self.assertEqual(fi._files, ('-',))
def test__getitem__(self):
"""Tests invoking FileInput.__getitem__() with the current
line number"""
t = writeTmp(1, ["line1\n", "line2\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t]) as fi:
retval1 = fi[0]
self.assertEqual(retval1, "line1\n")
retval2 = fi[1]
self.assertEqual(retval2, "line2\n")
def test__getitem__invalid_key(self):
"""Tests invoking FileInput.__getitem__() with an index unequal to
the line number"""
t = writeTmp(1, ["line1\n", "line2\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t]) as fi:
with self.assertRaises(RuntimeError) as cm:
fi[1]
self.assertEqual(cm.exception.args, ("accessing lines out of order",))
def test__getitem__eof(self):
"""Tests invoking FileInput.__getitem__() with the line number but at
end-of-input"""
t = writeTmp(1, [])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t]) as fi:
with self.assertRaises(IndexError) as cm:
fi[0]
self.assertEqual(cm.exception.args, ("end of input reached",))
def test_nextfile_oserror_deleting_backup(self):
"""Tests invoking FileInput.nextfile() when the attempt to delete
the backup file would raise OSError. This error is expected to be
silently ignored"""
os_unlink_orig = os.unlink
os_unlink_replacement = UnconditionallyRaise(OSError)
try:
t = writeTmp(1, ["\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t], inplace=True) as fi:
next(fi) # make sure the file is opened
os.unlink = os_unlink_replacement
fi.nextfile()
finally:
os.unlink = os_unlink_orig
# sanity check to make sure that our test scenario was actually hit
self.assertTrue(os_unlink_replacement.invoked,
"os.unlink() was not invoked")
def test_readline_os_fstat_raises_OSError(self):
"""Tests invoking FileInput.readline() when os.fstat() raises OSError.
This exception should be silently discarded."""
os_fstat_orig = os.fstat
os_fstat_replacement = UnconditionallyRaise(OSError)
try:
t = writeTmp(1, ["\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t], inplace=True) as fi:
os.fstat = os_fstat_replacement
fi.readline()
finally:
os.fstat = os_fstat_orig
# sanity check to make sure that our test scenario was actually hit
self.assertTrue(os_fstat_replacement.invoked,
"os.fstat() was not invoked")
@unittest.skipIf(not hasattr(os, "chmod"), "os.chmod does not exist")
def test_readline_os_chmod_raises_OSError(self):
"""Tests invoking FileInput.readline() when os.chmod() raises OSError.
This exception should be silently discarded."""
os_chmod_orig = os.chmod
os_chmod_replacement = UnconditionallyRaise(OSError)
try:
t = writeTmp(1, ["\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t], inplace=True) as fi:
os.chmod = os_chmod_replacement
fi.readline()
finally:
os.chmod = os_chmod_orig
# sanity check to make sure that our test scenario was actually hit
self.assertTrue(os_chmod_replacement.invoked,
"os.fstat() was not invoked")
def test_fileno_when_ValueError_raised(self):
class FilenoRaisesValueError(UnconditionallyRaise):
def __init__(self):
UnconditionallyRaise.__init__(self, ValueError)
def fileno(self):
self.__call__()
unconditionally_raise_ValueError = FilenoRaisesValueError()
t = writeTmp(1, ["\n"])
self.addCleanup(remove_tempfiles, t)
with FileInput(files=[t]) as fi:
file_backup = fi._file
try:
fi._file = unconditionally_raise_ValueError
result = fi.fileno()
finally:
fi._file = file_backup # make sure the file gets cleaned up
# sanity check to make sure that our test scenario was actually hit
self.assertTrue(unconditionally_raise_ValueError.invoked,
"_file.fileno() was not invoked")
self.assertEqual(result, -1, "fileno() should return -1")
class MockFileInput:
"""A class that mocks out fileinput.FileInput for use during unit tests"""
def __init__(self, files=None, inplace=False, backup="", bufsize=0,
mode="r", openhook=None):
self.files = files
self.inplace = inplace
self.backup = backup
self.bufsize = bufsize
self.mode = mode
self.openhook = openhook
self._file = None
self.invocation_counts = collections.defaultdict(lambda: 0)
self.return_values = {}
def close(self):
self.invocation_counts["close"] += 1
def nextfile(self):
self.invocation_counts["nextfile"] += 1
return self.return_values["nextfile"]
def filename(self):
self.invocation_counts["filename"] += 1
return self.return_values["filename"]
def lineno(self):
self.invocation_counts["lineno"] += 1
return self.return_values["lineno"]
def filelineno(self):
self.invocation_counts["filelineno"] += 1
return self.return_values["filelineno"]
def fileno(self):
self.invocation_counts["fileno"] += 1
return self.return_values["fileno"]
def isfirstline(self):
self.invocation_counts["isfirstline"] += 1
return self.return_values["isfirstline"]
def isstdin(self):
self.invocation_counts["isstdin"] += 1
return self.return_values["isstdin"]
class BaseFileInputGlobalMethodsTest(unittest.TestCase):
"""Base class for unit tests for the global function of
the fileinput module."""
def setUp(self):
self._orig_state = fileinput._state
self._orig_FileInput = fileinput.FileInput
fileinput.FileInput = MockFileInput
def tearDown(self):
fileinput.FileInput = self._orig_FileInput
fileinput._state = self._orig_state
def assertExactlyOneInvocation(self, mock_file_input, method_name):
# assert that the method with the given name was invoked once
actual_count = mock_file_input.invocation_counts[method_name]
self.assertEqual(actual_count, 1, method_name)
# assert that no other unexpected methods were invoked
actual_total_count = len(mock_file_input.invocation_counts)
self.assertEqual(actual_total_count, 1)
class Test_fileinput_input(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.input()"""
def test_state_is_not_None_and_state_file_is_not_None(self):
"""Tests invoking fileinput.input() when fileinput._state is not None
and its _file attribute is also not None. Expect RuntimeError to
be raised with a meaningful error message and for fileinput._state
to *not* be modified."""
instance = MockFileInput()
instance._file = object()
fileinput._state = instance
with self.assertRaises(RuntimeError) as cm:
fileinput.input()
self.assertEqual(("input() already active",), cm.exception.args)
self.assertIs(instance, fileinput._state, "fileinput._state")
def test_state_is_not_None_and_state_file_is_None(self):
"""Tests invoking fileinput.input() when fileinput._state is not None
but its _file attribute *is* None. Expect it to create and return
a new fileinput.FileInput object with all method parameters passed
explicitly to the __init__() method; also ensure that
fileinput._state is set to the returned instance."""
instance = MockFileInput()
instance._file = None
fileinput._state = instance
self.do_test_call_input()
def test_state_is_None(self):
"""Tests invoking fileinput.input() when fileinput._state is None
Expect it to create and return a new fileinput.FileInput object
with all method parameters passed explicitly to the __init__()
method; also ensure that fileinput._state is set to the returned
instance."""
fileinput._state = None
self.do_test_call_input()
def do_test_call_input(self):
"""Tests that fileinput.input() creates a new fileinput.FileInput
object, passing the given parameters unmodified to
fileinput.FileInput.__init__(). Note that this test depends on the
monkey patching of fileinput.FileInput done by setUp()."""
files = object()
inplace = object()
backup = object()
bufsize = object()
mode = object()
openhook = object()
# call fileinput.input() with different values for each argument
result = fileinput.input(files=files, inplace=inplace, backup=backup,
bufsize=bufsize,
mode=mode, openhook=openhook)
# ensure fileinput._state was set to the returned object
self.assertIs(result, fileinput._state, "fileinput._state")
# ensure the parameters to fileinput.input() were passed directly
# to FileInput.__init__()
self.assertIs(files, result.files, "files")
self.assertIs(inplace, result.inplace, "inplace")
self.assertIs(backup, result.backup, "backup")
self.assertIs(bufsize, result.bufsize, "bufsize")
self.assertIs(mode, result.mode, "mode")
self.assertIs(openhook, result.openhook, "openhook")
class Test_fileinput_close(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.close()"""
def test_state_is_None(self):
"""Tests that fileinput.close() does nothing if fileinput._state
is None"""
fileinput._state = None
fileinput.close()
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests that fileinput.close() invokes close() on fileinput._state
and sets _state=None"""
instance = MockFileInput()
fileinput._state = instance
fileinput.close()
self.assertExactlyOneInvocation(instance, "close")
self.assertIsNone(fileinput._state)
class Test_fileinput_nextfile(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.nextfile()"""
def test_state_is_None(self):
"""Tests fileinput.nextfile() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.nextfile()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.nextfile() when fileinput._state is not None.
Ensure that it invokes fileinput._state.nextfile() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
nextfile_retval = object()
instance = MockFileInput()
instance.return_values["nextfile"] = nextfile_retval
fileinput._state = instance
retval = fileinput.nextfile()
self.assertExactlyOneInvocation(instance, "nextfile")
self.assertIs(retval, nextfile_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_filename(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.filename()"""
def test_state_is_None(self):
"""Tests fileinput.filename() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.filename()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.filename() when fileinput._state is not None.
Ensure that it invokes fileinput._state.filename() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
filename_retval = object()
instance = MockFileInput()
instance.return_values["filename"] = filename_retval
fileinput._state = instance
retval = fileinput.filename()
self.assertExactlyOneInvocation(instance, "filename")
self.assertIs(retval, filename_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_lineno(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.lineno()"""
def test_state_is_None(self):
"""Tests fileinput.lineno() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.lineno()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.lineno() when fileinput._state is not None.
Ensure that it invokes fileinput._state.lineno() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
lineno_retval = object()
instance = MockFileInput()
instance.return_values["lineno"] = lineno_retval
fileinput._state = instance
retval = fileinput.lineno()
self.assertExactlyOneInvocation(instance, "lineno")
self.assertIs(retval, lineno_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_filelineno(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.filelineno()"""
def test_state_is_None(self):
"""Tests fileinput.filelineno() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.filelineno()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.filelineno() when fileinput._state is not None.
Ensure that it invokes fileinput._state.filelineno() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
filelineno_retval = object()
instance = MockFileInput()
instance.return_values["filelineno"] = filelineno_retval
fileinput._state = instance
retval = fileinput.filelineno()
self.assertExactlyOneInvocation(instance, "filelineno")
self.assertIs(retval, filelineno_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_fileno(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.fileno()"""
def test_state_is_None(self):
"""Tests fileinput.fileno() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.fileno()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.fileno() when fileinput._state is not None.
Ensure that it invokes fileinput._state.fileno() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
fileno_retval = object()
instance = MockFileInput()
instance.return_values["fileno"] = fileno_retval
instance.fileno_retval = fileno_retval
fileinput._state = instance
retval = fileinput.fileno()
self.assertExactlyOneInvocation(instance, "fileno")
self.assertIs(retval, fileno_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_isfirstline(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.isfirstline()"""
def test_state_is_None(self):
"""Tests fileinput.isfirstline() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.isfirstline()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.isfirstline() when fileinput._state is not None.
Ensure that it invokes fileinput._state.isfirstline() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
isfirstline_retval = object()
instance = MockFileInput()
instance.return_values["isfirstline"] = isfirstline_retval
fileinput._state = instance
retval = fileinput.isfirstline()
self.assertExactlyOneInvocation(instance, "isfirstline")
self.assertIs(retval, isfirstline_retval)
self.assertIs(fileinput._state, instance)
class Test_fileinput_isstdin(BaseFileInputGlobalMethodsTest):
"""Unit tests for fileinput.isstdin()"""
def test_state_is_None(self):
"""Tests fileinput.isstdin() when fileinput._state is None.
Ensure that it raises RuntimeError with a meaningful error message
and does not modify fileinput._state"""
fileinput._state = None
with self.assertRaises(RuntimeError) as cm:
fileinput.isstdin()
self.assertEqual(("no active input()",), cm.exception.args)
self.assertIsNone(fileinput._state)
def test_state_is_not_None(self):
"""Tests fileinput.isstdin() when fileinput._state is not None.
Ensure that it invokes fileinput._state.isstdin() exactly once,
returns whatever it returns, and does not modify fileinput._state
to point to a different object."""
isstdin_retval = object()
instance = MockFileInput()
instance.return_values["isstdin"] = isstdin_retval
fileinput._state = instance
retval = fileinput.isstdin()
self.assertExactlyOneInvocation(instance, "isstdin")
self.assertIs(retval, isstdin_retval)
self.assertIs(fileinput._state, instance)
class InvocationRecorder:
def __init__(self):
self.invocation_count = 0
def __call__(self, *args, **kwargs):
self.invocation_count += 1
self.last_invocation = (args, kwargs)
class Test_hook_compressed(unittest.TestCase):
"""Unit tests for fileinput.hook_compressed()"""
def setUp(self):
self.fake_open = InvocationRecorder()
def test_empty_string(self):
self.do_test_use_builtin_open("", 1)
def test_no_ext(self):
self.do_test_use_builtin_open("abcd", 2)
@unittest.skipUnless(gzip, "Requires gzip and zlib")
def test_gz_ext_fake(self):
original_open = gzip.open
gzip.open = self.fake_open
try:
result = fileinput.hook_compressed("test.gz", 3)
finally:
gzip.open = original_open
self.assertEqual(self.fake_open.invocation_count, 1)
self.assertEqual(self.fake_open.last_invocation, (("test.gz", 3), {}))
@unittest.skipUnless(bz2, "Requires bz2")
def test_bz2_ext_fake(self):
original_open = bz2.BZ2File
bz2.BZ2File = self.fake_open
try:
result = fileinput.hook_compressed("test.bz2", 4)
finally:
bz2.BZ2File = original_open
self.assertEqual(self.fake_open.invocation_count, 1)
self.assertEqual(self.fake_open.last_invocation, (("test.bz2", 4), {}))
def test_blah_ext(self):
self.do_test_use_builtin_open("abcd.blah", 5)
def test_gz_ext_builtin(self):
self.do_test_use_builtin_open("abcd.Gz", 6)
def test_bz2_ext_builtin(self):
self.do_test_use_builtin_open("abcd.Bz2", 7)
def do_test_use_builtin_open(self, filename, mode):
original_open = self.replace_builtin_open(self.fake_open)
try:
result = fileinput.hook_compressed(filename, mode)
finally:
self.replace_builtin_open(original_open)
self.assertEqual(self.fake_open.invocation_count, 1)
self.assertEqual(self.fake_open.last_invocation,
((filename, mode), {}))
@staticmethod
def replace_builtin_open(new_open_func):
original_open = builtins.open
builtins.open = new_open_func
return original_open
class Test_hook_encoded(unittest.TestCase):
"""Unit tests for fileinput.hook_encoded()"""
def test(self):
encoding = object()
result = fileinput.hook_encoded(encoding)
fake_open = InvocationRecorder()
original_open = builtins.open
builtins.open = fake_open
try:
filename = object()
mode = object()
open_result = result(filename, mode)
finally:
builtins.open = original_open
self.assertEqual(fake_open.invocation_count, 1)
args, kwargs = fake_open.last_invocation
self.assertIs(args[0], filename)
self.assertIs(args[1], mode)
self.assertIs(kwargs.pop('encoding'), encoding)
self.assertFalse(kwargs)
def test_modes(self):
with open(TESTFN, 'wb') as f:
# UTF-7 is a convenient, seldom used encoding
f.write(b'A\nB\r\nC\rD+IKw-')
self.addCleanup(safe_unlink, TESTFN)
def check(mode, expected_lines):
with FileInput(files=TESTFN, mode=mode,
openhook=hook_encoded('utf-7')) as fi:
lines = list(fi)
self.assertEqual(lines, expected_lines)
check('r', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
with self.assertWarns(DeprecationWarning):
check('rU', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
with self.assertWarns(DeprecationWarning):
check('U', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
with self.assertRaises(ValueError):
check('rb', ['A\n', 'B\r\n', 'C\r', 'D\u20ac'])
if __name__ == "__main__":
unittest.main()
| lgpl-3.0 |
pleaseproject/python-for-android | python3-alpha/python3-src/Lib/test/test_cgi.py | 46 | 13764 | from test.support import run_unittest, check_warnings
import cgi
import os
import sys
import tempfile
import unittest
from io import StringIO, BytesIO
class HackedSysModule:
# The regression test will have real values in sys.argv, which
# will completely confuse the test of the cgi module
argv = []
stdin = sys.stdin
cgi.sys = HackedSysModule()
class ComparableException:
def __init__(self, err):
self.err = err
def __str__(self):
return str(self.err)
def __eq__(self, anExc):
if not isinstance(anExc, Exception):
return NotImplemented
return (self.err.__class__ == anExc.__class__ and
self.err.args == anExc.args)
def __getattr__(self, attr):
return getattr(self.err, attr)
def do_test(buf, method):
env = {}
if method == "GET":
fp = None
env['REQUEST_METHOD'] = 'GET'
env['QUERY_STRING'] = buf
elif method == "POST":
fp = BytesIO(buf.encode('latin-1')) # FieldStorage expects bytes
env['REQUEST_METHOD'] = 'POST'
env['CONTENT_TYPE'] = 'application/x-www-form-urlencoded'
env['CONTENT_LENGTH'] = str(len(buf))
else:
raise ValueError("unknown method: %s" % method)
try:
return cgi.parse(fp, env, strict_parsing=1)
except Exception as err:
return ComparableException(err)
parse_strict_test_cases = [
("", ValueError("bad query field: ''")),
("&", ValueError("bad query field: ''")),
("&&", ValueError("bad query field: ''")),
(";", ValueError("bad query field: ''")),
(";&;", ValueError("bad query field: ''")),
# Should the next few really be valid?
("=", {}),
("=&=", {}),
("=;=", {}),
    # The rest seem to make sense
("=a", {'': ['a']}),
("&=a", ValueError("bad query field: ''")),
("=a&", ValueError("bad query field: ''")),
("=&a", ValueError("bad query field: 'a'")),
("b=a", {'b': ['a']}),
("b+=a", {'b ': ['a']}),
("a=b=a", {'a': ['b=a']}),
("a=+b=a", {'a': [' b=a']}),
("&b=a", ValueError("bad query field: ''")),
("b&=a", ValueError("bad query field: 'b'")),
("a=a+b&b=b+c", {'a': ['a b'], 'b': ['b c']}),
("a=a+b&a=b+a", {'a': ['a b', 'b a']}),
("x=1&y=2.0&z=2-3.%2b0", {'x': ['1'], 'y': ['2.0'], 'z': ['2-3.+0']}),
("x=1;y=2.0&z=2-3.%2b0", {'x': ['1'], 'y': ['2.0'], 'z': ['2-3.+0']}),
("x=1;y=2.0;z=2-3.%2b0", {'x': ['1'], 'y': ['2.0'], 'z': ['2-3.+0']}),
("Hbc5161168c542333633315dee1182227:key_store_seqid=400006&cuyer=r&view=bustomer&order_id=0bb2e248638833d48cb7fed300000f1b&expire=964546263&lobale=en-US&kid=130003.300038&ss=env",
{'Hbc5161168c542333633315dee1182227:key_store_seqid': ['400006'],
'cuyer': ['r'],
'expire': ['964546263'],
'kid': ['130003.300038'],
'lobale': ['en-US'],
'order_id': ['0bb2e248638833d48cb7fed300000f1b'],
'ss': ['env'],
'view': ['bustomer'],
}),
("group_id=5470&set=custom&_assigned_to=31392&_status=1&_category=100&SUBMIT=Browse",
{'SUBMIT': ['Browse'],
'_assigned_to': ['31392'],
'_category': ['100'],
'_status': ['1'],
'group_id': ['5470'],
'set': ['custom'],
})
]
def norm(seq):
return sorted(seq, key=repr)
def first_elts(list):
return [p[0] for p in list]
def first_second_elts(list):
return [(p[0], p[1][0]) for p in list]
def gen_result(data, environ):
encoding = 'latin-1'
fake_stdin = BytesIO(data.encode(encoding))
fake_stdin.seek(0)
form = cgi.FieldStorage(fp=fake_stdin, environ=environ, encoding=encoding)
result = {}
for k, v in dict(form).items():
result[k] = isinstance(v, list) and form.getlist(k) or v.value
return result
class CgiTests(unittest.TestCase):
def test_strict(self):
for orig, expect in parse_strict_test_cases:
# Test basic parsing
d = do_test(orig, "GET")
self.assertEqual(d, expect, "Error parsing %s method GET" % repr(orig))
d = do_test(orig, "POST")
self.assertEqual(d, expect, "Error parsing %s method POST" % repr(orig))
env = {'QUERY_STRING': orig}
fs = cgi.FieldStorage(environ=env)
if isinstance(expect, dict):
# test dict interface
self.assertEqual(len(expect), len(fs))
self.assertCountEqual(expect.keys(), fs.keys())
##self.assertEqual(norm(expect.values()), norm(fs.values()))
##self.assertEqual(norm(expect.items()), norm(fs.items()))
self.assertEqual(fs.getvalue("nonexistent field", "default"), "default")
# test individual fields
for key in expect.keys():
expect_val = expect[key]
self.assertIn(key, fs)
if len(expect_val) > 1:
self.assertEqual(fs.getvalue(key), expect_val)
else:
self.assertEqual(fs.getvalue(key), expect_val[0])
def test_log(self):
cgi.log("Testing")
cgi.logfp = StringIO()
cgi.initlog("%s", "Testing initlog 1")
cgi.log("%s", "Testing log 2")
self.assertEqual(cgi.logfp.getvalue(), "Testing initlog 1\nTesting log 2\n")
if os.path.exists("/dev/null"):
cgi.logfp = None
cgi.logfile = "/dev/null"
cgi.initlog("%s", "Testing log 3")
def log_cleanup():
"""Restore the global state of the log vars."""
cgi.logfile = ''
cgi.logfp.close()
cgi.logfp = None
cgi.log = cgi.initlog
self.addCleanup(log_cleanup)
cgi.log("Testing log 4")
def test_fieldstorage_readline(self):
# FieldStorage uses readline, which has the capacity to read all
# contents of the input file into memory; we use readline's size argument
# to prevent that for files that do not contain any newlines in
# non-GET/HEAD requests
class TestReadlineFile:
def __init__(self, file):
self.file = file
self.numcalls = 0
def readline(self, size=None):
self.numcalls += 1
if size:
return self.file.readline(size)
else:
return self.file.readline()
def __getattr__(self, name):
file = self.__dict__['file']
a = getattr(file, name)
if not isinstance(a, int):
setattr(self, name, a)
return a
f = TestReadlineFile(tempfile.TemporaryFile("wb+"))
self.addCleanup(f.close)
f.write(b'x' * 256 * 1024)
f.seek(0)
env = {'REQUEST_METHOD':'PUT'}
fs = cgi.FieldStorage(fp=f, environ=env)
self.addCleanup(fs.file.close)
# if we're not chunking properly, readline is only called twice
# (by read_binary); if we are chunking properly, it will be called 5 times
# as long as the chunksize is 1 << 16.
self.assertTrue(f.numcalls > 2)
f.close()
def test_fieldstorage_multipart(self):
#Test basic FieldStorage multipart parsing
env = {
'REQUEST_METHOD': 'POST',
'CONTENT_TYPE': 'multipart/form-data; boundary={}'.format(BOUNDARY),
'CONTENT_LENGTH': '558'}
fp = BytesIO(POSTDATA.encode('latin-1'))
fs = cgi.FieldStorage(fp, environ=env, encoding="latin-1")
self.assertEqual(len(fs.list), 4)
expect = [{'name':'id', 'filename':None, 'value':'1234'},
{'name':'title', 'filename':None, 'value':''},
{'name':'file', 'filename':'test.txt', 'value':b'Testing 123.\n'},
{'name':'submit', 'filename':None, 'value':' Add '}]
for x in range(len(fs.list)):
for k, exp in expect[x].items():
got = getattr(fs.list[x], k)
self.assertEqual(got, exp)
def test_fieldstorage_multipart_non_ascii(self):
#Test basic FieldStorage multipart parsing
env = {'REQUEST_METHOD':'POST',
'CONTENT_TYPE': 'multipart/form-data; boundary={}'.format(BOUNDARY),
'CONTENT_LENGTH':'558'}
for encoding in ['iso-8859-1','utf-8']:
fp = BytesIO(POSTDATA_NON_ASCII.encode(encoding))
fs = cgi.FieldStorage(fp, environ=env,encoding=encoding)
self.assertEqual(len(fs.list), 1)
expect = [{'name':'id', 'filename':None, 'value':'\xe7\xf1\x80'}]
for x in range(len(fs.list)):
for k, exp in expect[x].items():
got = getattr(fs.list[x], k)
self.assertEqual(got, exp)
_qs_result = {
'key1': 'value1',
'key2': ['value2x', 'value2y'],
'key3': 'value3',
'key4': 'value4'
}
def testQSAndUrlEncode(self):
data = "key2=value2x&key3=value3&key4=value4"
environ = {
'CONTENT_LENGTH': str(len(data)),
'CONTENT_TYPE': 'application/x-www-form-urlencoded',
'QUERY_STRING': 'key1=value1&key2=value2y',
'REQUEST_METHOD': 'POST',
}
v = gen_result(data, environ)
self.assertEqual(self._qs_result, v)
def testQSAndFormData(self):
data = """---123
Content-Disposition: form-data; name="key2"
value2y
---123
Content-Disposition: form-data; name="key3"
value3
---123
Content-Disposition: form-data; name="key4"
value4
---123--
"""
environ = {
'CONTENT_LENGTH': str(len(data)),
'CONTENT_TYPE': 'multipart/form-data; boundary=-123',
'QUERY_STRING': 'key1=value1&key2=value2x',
'REQUEST_METHOD': 'POST',
}
v = gen_result(data, environ)
self.assertEqual(self._qs_result, v)
def testQSAndFormDataFile(self):
data = """---123
Content-Disposition: form-data; name="key2"
value2y
---123
Content-Disposition: form-data; name="key3"
value3
---123
Content-Disposition: form-data; name="key4"
value4
---123
Content-Disposition: form-data; name="upload"; filename="fake.txt"
Content-Type: text/plain
this is the content of the fake file
---123--
"""
environ = {
'CONTENT_LENGTH': str(len(data)),
'CONTENT_TYPE': 'multipart/form-data; boundary=-123',
'QUERY_STRING': 'key1=value1&key2=value2x',
'REQUEST_METHOD': 'POST',
}
result = self._qs_result.copy()
result.update({
'upload': b'this is the content of the fake file\n'
})
v = gen_result(data, environ)
self.assertEqual(result, v)
def test_deprecated_parse_qs(self):
# this func is moved to urllib.parse, this is just a sanity check
with check_warnings(('cgi.parse_qs is deprecated, use urllib.parse.'
'parse_qs instead', DeprecationWarning)):
self.assertEqual({'a': ['A1'], 'B': ['B3'], 'b': ['B2']},
cgi.parse_qs('a=A1&b=B2&B=B3'))
def test_deprecated_parse_qsl(self):
# this func is moved to urllib.parse, this is just a sanity check
with check_warnings(('cgi.parse_qsl is deprecated, use urllib.parse.'
'parse_qsl instead', DeprecationWarning)):
self.assertEqual([('a', 'A1'), ('b', 'B2'), ('B', 'B3')],
cgi.parse_qsl('a=A1&b=B2&B=B3'))
def test_parse_header(self):
self.assertEqual(
cgi.parse_header("text/plain"),
("text/plain", {}))
self.assertEqual(
cgi.parse_header("text/vnd.just.made.this.up ; "),
("text/vnd.just.made.this.up", {}))
self.assertEqual(
cgi.parse_header("text/plain;charset=us-ascii"),
("text/plain", {"charset": "us-ascii"}))
self.assertEqual(
cgi.parse_header('text/plain ; charset="us-ascii"'),
("text/plain", {"charset": "us-ascii"}))
self.assertEqual(
cgi.parse_header('text/plain ; charset="us-ascii"; another=opt'),
("text/plain", {"charset": "us-ascii", "another": "opt"}))
self.assertEqual(
cgi.parse_header('attachment; filename="silly.txt"'),
("attachment", {"filename": "silly.txt"}))
self.assertEqual(
cgi.parse_header('attachment; filename="strange;name"'),
("attachment", {"filename": "strange;name"}))
self.assertEqual(
cgi.parse_header('attachment; filename="strange;name";size=123;'),
("attachment", {"filename": "strange;name", "size": "123"}))
BOUNDARY = "---------------------------721837373350705526688164684"
POSTDATA = """-----------------------------721837373350705526688164684
Content-Disposition: form-data; name="id"
1234
-----------------------------721837373350705526688164684
Content-Disposition: form-data; name="title"
-----------------------------721837373350705526688164684
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: text/plain
Testing 123.
-----------------------------721837373350705526688164684
Content-Disposition: form-data; name="submit"
Add\x20
-----------------------------721837373350705526688164684--
"""
POSTDATA_NON_ASCII = """-----------------------------721837373350705526688164684
Content-Disposition: form-data; name="id"
\xe7\xf1\x80
-----------------------------721837373350705526688164684
"""
def test_main():
run_unittest(CgiTests)
if __name__ == '__main__':
test_main()
| apache-2.0 |
towerjoo/mindsbook | django/core/serializers/__init__.py | 36 | 3536 | """
Interfaces for serializing Django objects.
Usage::
from django.core import serializers
json = serializers.serialize("json", some_query_set)
objects = list(serializers.deserialize("json", json))
To add your own serializers, use the SERIALIZATION_MODULES setting::
SERIALIZATION_MODULES = {
"csv" : "path.to.csv.serializer",
"txt" : "path.to.txt.serializer",
}
"""
from django.conf import settings
from django.utils import importlib
# Built-in serializers
BUILTIN_SERIALIZERS = {
"xml" : "django.core.serializers.xml_serializer",
"python" : "django.core.serializers.python",
"json" : "django.core.serializers.json",
}
# Check for PyYaml and register the serializer if it's available.
try:
import yaml
BUILTIN_SERIALIZERS["yaml"] = "django.core.serializers.pyyaml"
except ImportError:
pass
_serializers = {}
def register_serializer(format, serializer_module, serializers=None):
""""Register a new serializer.
``serializer_module`` should be the fully qualified module name
for the serializer.
If ``serializers`` is provided, the registration will be added
to the provided dictionary.
If ``serializers`` is not provided, the registration will be made
directly into the global register of serializers. Adding serializers
directly is not a thread-safe operation.
"""
module = importlib.import_module(serializer_module)
if serializers is None:
_serializers[format] = module
else:
serializers[format] = module
def unregister_serializer(format):
"Unregister a given serializer. This is not a thread-safe operation."
del _serializers[format]
def get_serializer(format):
if not _serializers:
_load_serializers()
return _serializers[format].Serializer
def get_serializer_formats():
if not _serializers:
_load_serializers()
return _serializers.keys()
def get_public_serializer_formats():
if not _serializers:
_load_serializers()
return [k for k, v in _serializers.iteritems() if not v.Serializer.internal_use_only]
def get_deserializer(format):
if not _serializers:
_load_serializers()
return _serializers[format].Deserializer
def serialize(format, queryset, **options):
"""
Serialize a queryset (or any iterator that returns database objects) using
a certain serializer.
"""
s = get_serializer(format)()
s.serialize(queryset, **options)
return s.getvalue()
def deserialize(format, stream_or_string, **options):
"""
Deserialize a stream or a string. Returns an iterator that yields ``(obj,
m2m_relation_dict)``, where ``obj`` is a instantiated -- but *unsaved* --
object, and ``m2m_relation_dict`` is a dictionary of ``{m2m_field_name :
list_of_related_objects}``.
"""
d = get_deserializer(format)
return d(stream_or_string, **options)
def _load_serializers():
"""
Register built-in and settings-defined serializers. This is done lazily so
that user code has a chance to (e.g.) set up custom settings without
needing to be careful of import order.
"""
global _serializers
serializers = {}
for format in BUILTIN_SERIALIZERS:
register_serializer(format, BUILTIN_SERIALIZERS[format], serializers)
if hasattr(settings, "SERIALIZATION_MODULES"):
for format in settings.SERIALIZATION_MODULES:
register_serializer(format, settings.SERIALIZATION_MODULES[format], serializers)
_serializers = serializers
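# Illustrative direct registration (a sketch; the serializer module path is
# hypothetical, echoing the SERIALIZATION_MODULES example above):
#     from django.core import serializers
#     serializers.register_serializer('csv', 'path.to.csv.serializer')
# As register_serializer's docstring notes, writing into the global registry
# this way is not thread-safe.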
| bsd-3-clause |
races1986/SafeLanguage | CEM/i18n/editarticle.py | 2 | 10637 | # -*- coding: utf-8 -*-
msg = {
'en': {
'editarticle-edit': u'Manual edit with robot: %(description)s',
},
# Author: Csisc
'qqq': {
		'editarticle-edit': u'Edit summary used when the bot owner edits a text while the bot has invoked an editor. <code>%(description)s</code> gives further information.',
},
# Author: Csisc
'aeb': {
'editarticle-edit': u'تعديل يدوي: %(description)s',
},
# Author: Als-Chlämens
'als': {
'editarticle-edit': u'Manuelli Bearbeitig: %(description)s',
},
'ar': {
'editarticle-edit': u'تعديل يدوي: %(description)s',
},
# Author: Esbardu
# Author: Xuacu
'ast': {
'editarticle-edit': u'Edición manual con robó: %(description)s',
},
# Author: Ebrahimi-amir
'az': {
'editarticle-edit': u'Bot hesabından əllə tənzimləmə: %(description)s',
},
# Author: E THP
'azb': {
'editarticle-edit': u'بوت حسابیندان الله تنزیملهمه: %(description)s',
},
# Author: Sagan
'ba': {
'editarticle-edit': u'Робот ярҙамында ҡулдан мөхәррирләү: %(description)s',
},
# Author: Mucalexx
'bar': {
'editarticle-edit': u'Manuelle Beorweitung: %(description)s',
},
# Author: EugeneZelenko
'be-x-old': {
'editarticle-edit': u'Ручное рэдагаваньне з робатам: %(description)s',
},
# Author: Riemogerz
'bjn': {
'editarticle-edit': u'Babakan manual lawan bot: %(description)s',
},
# Author: Aftab1995
'bn': {
'editarticle-edit': u'রোবটের সাথে ম্যানুয়াল সম্পাদনা: %(description)s',
},
# Author: Fulup
'br': {
'editarticle-edit': u'Kemmm dornel a-drugarez d\'ur bot : %(description)s',
},
# Author: Edinwiki
'bs': {
'editarticle-edit': u'Ručna izmjena sa botom: %(description)s.',
},
# Author: SMP
'ca': {
'editarticle-edit': u'Edició manual amb robot: %(description)s',
},
# Author: Asoxor
'ckb': {
'editarticle-edit': u'چاکسازیی دەستی بە ڕۆبۆت: %(description)s',
},
# Author: Spiffyk
'cs': {
'editarticle-edit': u'Ruční úprava s pomocí robota: %(description)s',
},
# Author: Robin Owain
'cy': {
'editarticle-edit': u'Golygu dynol gyda robot: %(description)s',
},
# Author: Peter Alberti
'da': {
'editarticle-edit': u'Manuel redigering udført med robot: %(description)s',
},
'de': {
'editarticle-edit': u'Manuelle Bearbeitung: %(description)s',
},
# Author: Eruedin
'de-ch': {
'editarticle-edit': u'Manuelle Bearbeitung mit Bot: %(description)s',
},
# Author: Erdemaslancan
# Author: Mirzali
'diq': {
'editarticle-edit': u'Hesabê boti ra xo desti vurnayış: %(description)s',
},
# Author: Geraki
'el': {
'editarticle-edit': u'Μη αυτόματη επεξεργασία με ρομπότ: %(description)s',
},
# Author: Airon90
'eo': {
'editarticle-edit': u'Permana redakto per roboto: %(description)s',
},
# Author: Fitoschido
'es': {
'editarticle-edit': u'Edición manual con bot: %(description)s',
},
# Author: ZxxZxxZ
'fa': {
'editarticle-edit': u'ویرایش دستی با ربات: %(description)s',
},
# Author: Crt
'fi': {
'editarticle-edit': u'Manuaalinen muutos botillla: %(description)s',
},
# Author: EileenSanda
'fo': {
'editarticle-edit': u'Manuel rætting við botti: %(description)s',
},
# Author: Romainhk
'fr': {
'editarticle-edit': u'Modification manuelle grâce à un bot : %(description)s',
},
# Author: ChrisPtDe
'frp': {
'editarticle-edit': u'Changement manuèl avouéc un robot : %(description)s',
},
# Author: Murma174
'frr': {
'editarticle-edit': u'Manuel bewerke: %(description)s',
},
# Author: Toliño
'gl': {
'editarticle-edit': u'Edición manual con bot: %(description)s',
},
# Author: Jetlag
'hak': {
'editarticle-edit': u'手動控制機械人進行更改:%(description)s',
},
'he': {
'editarticle-edit': u'עריכה ידנית: %(description)s',
},
# Author: Michawiki
'hsb': {
'editarticle-edit': u'Manuelna změna přez bot: %(description)s',
},
# Author: Dj
'hu': {
'editarticle-edit': u'Kézi szerkesztés bottal: %(description)s',
},
# Author: McDutchie
'ia': {
'editarticle-edit': u'Modification manual con robot: %(description)s',
},
# Author: Farras
'id': {
'editarticle-edit': u'Suntingan manual dengan bot: %(description)s',
},
# Author: Renan
'ie': {
'editarticle-edit': u'Redaction manual che machine: %(description)s',
},
# Author: Lam-ang
'ilo': {
'editarticle-edit': u'Manual a panag-urnos ti robot: %(description)s',
},
'is': {
'editarticle-edit': u'Handvirk breyting: %(description)s',
},
# Author: Beta16
'it': {
'editarticle-edit': u'Modifica manuale tramite bot: %(description)s',
},
'ja': {
'editarticle-edit': u'手動編集: %(description)s',
},
# Author: NoiX180
'jv': {
'editarticle-edit': u'Sunting manual mawa bot: %(description)s',
},
# Author: 아라
'ko': {
'editarticle-edit': u'로봇으로 수동 편집: %(description)s',
},
# Author: Purodha
'ksh': {
'editarticle-edit': u'Vun Hand mem Botprojramm jeändert: %(description)s',
},
# Author: Викиней
'ky': {
'editarticle-edit': u'Боттун жардамы аркылуу колго оңдоо: %(description)s',
},
# Author: Robby
'lb': {
'editarticle-edit': u'Manuell Ännerungen mam Bot: %(description)s',
},
# Author: Ooswesthoesbes
'li': {
'editarticle-edit': u'Handjmaotige botbewirking: %(description)s',
},
# Author: Mantak111
'lt': {
'editarticle-edit': u'Rankiniu būdu redagavimas su robotas: %(description)s',
},
# Author: StefanusRA
'map-bms': {
'editarticle-edit': u'Suntingan manual nganggo bot: %(description)s',
},
# Author: Jagwar
'mg': {
'editarticle-edit': u'Asa tanana nataon\'ny rôbô : %(description)s',
},
# Author: Luthfi94
'min': {
'editarticle-edit': u'Penyuntingan manual dengan memakai Bot: %(description)s',
},
# Author: Bjankuloski06
# Author: Rancher
'mk': {
'editarticle-edit': u'Рачно уредување со робот: %(description)s',
},
# Author: Praveenp
'ml': {
'editarticle-edit': u'യന്ത്രം ഉപയോഗിച്ച് ഉപയോക്താവ് ചെയ്ത തിരുത്തൽ: %(description)s',
},
# Author: Anakmalaysia
'ms': {
'editarticle-edit': u'Suntingan manual dengan bot: %(description)s',
},
# Author: Servien
'nds-nl': {
'editarticle-edit': u'Haandmaotige bewarking mit bot: %(description)s',
},
# Author: Eukesh
'new': {
'editarticle-edit': u'रोबटं याःगु म्यानुअल ज्या: %(description)s',
},
# Author: SPQRobin
# Author: Siebrand
'nl': {
'editarticle-edit': u'Handmatige bewerking met robot: %(description)s',
},
# Author: Nghtwlkr
'no': {
'editarticle-edit': u'Manuell redigert med robot: %(description)s',
},
# Author: Shisir 1945
'or': {
'editarticle-edit': u'ରୋବଟ୍ ସହିତ ହସ୍ତକୃତ ସମ୍ପାଦନା: %(description)s',
},
# Author: Val2397
'pam': {
		'editarticle-edit': u'Gamatan ya ing pamanalili kapamilatan ning robot: %(description)s',
},
# Author: Sp5uhe
'pl': {
'editarticle-edit': u'Ręczna zmiana przy pomocy robota – %(description)s',
},
# Author: Borichèt
'pms': {
'editarticle-edit': u'Modìfica manual con un trigomiro: %(description)s',
},
# Author: Giro720
'pt': {
'editarticle-edit': u'A editar manualmente com bot: %(description)s',
},
# Author: Giro720
# Author: 555
'pt-br': {
'editarticle-edit': u'Edição manual via Bot: %(description)s',
},
# Author: KlaudiuMihaila
# Author: Minisarm
'ro': {
'editarticle-edit': u'Modificare manuală cu robot: %(description)s',
},
# Author: Александр Сигачёв
'ru': {
'editarticle-edit': u'Ручное редактирование с помощью бота: %(description)s',
},
# Author: Avicennasis
'sco': {
'editarticle-edit': u'Manual edit with bot: %(description)s',
},
# Author: Wizzard
'sk': {
'editarticle-edit': u'Ručná úprava s pomocou robota: %(description)s',
},
# Author: Dbc334
'sl': {
'editarticle-edit': u'Ročno urejanje z botom: %(description)s',
},
# Author: Abshirdheere
'so': {
'editarticle-edit': u'Gacan ku badal Bot ah: %(description)s',
},
# Author: Euriditi
'sq': {
'editarticle-edit': u'Redaktim manual me robot: %(description)s',
},
# Author: Rancher
'sr': {
'editarticle-edit': u'Ручно уређивање с роботом: %(description)s',
},
# Author: Rancher
'sr-el': {
'editarticle-edit': u'Ručno uređivanje s robotom: %(description)s',
},
'sv': {
'editarticle-edit': u'Manuell redigering: %(description)s',
},
# Author: Kwisha
'sw': {
'editarticle-edit': u'Hariri kwa mkono na boti: %(description)s',
},
# Author: Przemub
'szl': {
'editarticle-edit': u'Manualne sprowianiy z robotym: %(description)s',
},
# Author: Nullzero
'th': {
'editarticle-edit': u'แก้ไขผ่านโรบอตด้วยตนเอง: %(description)s',
},
# Author: AnakngAraw
'tl': {
'editarticle-edit': u'Kinakamay na pamamatnugot na may robot: %(description)s',
},
# Author: Гусейн
'tly': {
'editarticle-edit': u'Дастә редәктә кардеј де боти воситә: %(description)s',
},
# Author: Khutuck
'tr': {
'editarticle-edit': u'Bot hesabından elle düzenleme: %(description)s',
},
# Author: Ильнар
'tt': {
'editarticle-edit': u'Бот белән идарә итү өчен кулланма: %(description)s',
},
# Author: Dim Grits
'uk': {
'editarticle-edit': u'Ручне редагування за допомогою бота: %(description)s',
},
# Author: CoderSI
'uz': {
'editarticle-edit': u'Bot yordamida qoʻlda tahrirlash: %(description)s',
},
# Author: Alunardon90
# Author: Candalua
'vec': {
'editarticle-edit': u'Modifega a man tramite robot: %(description)s',
},
# Author: Minh Nguyen
'vi': {
'editarticle-edit': u'Sửa đổi thủ công dùng bot: %(description)s',
},
# Author: Harvzsf
'war': {
'editarticle-edit': u'Manwal nga pagliwat upod hin robot: %(description)s',
},
# Author: פוילישער
'yi': {
'editarticle-edit': u'האנט־רעדאקטירונג: %(description)s',
},
# Author: Hydra
'zh': {
'editarticle-edit': u'利用机器方式手动编辑:%(description)s',
},
# Author: Justincheng12345
'zh-hant': {
'editarticle-edit': u'手動控制機械人進行更改:%(description)s',
},
# Author: Justincheng12345
'zh-hk': {
'editarticle-edit': u'手動控制機械人進行更改:%(description)s',
},
};
| epl-1.0 |
gjcarneiro/pybindgen | tests/c-hello/hellomodulegen.py | 1 | 1877 | #! /usr/bin/env python
import sys
import re
import pybindgen
from pybindgen import FileCodeSink
from pybindgen.castxmlparser import ModuleParser
constructor_rx = re.compile("hello_foo_new(_.*)?")
method_rx = re.compile("hello_foo(_.*)?")
def pre_scan_hook(dummy_module_parser,
pygccxml_definition,
global_annotations,
parameter_annotations):
if pygccxml_definition.name == "_HelloFoo":
global_annotations['free_function'] = 'hello_foo_unref'
global_annotations['incref_function'] = 'hello_foo_ref'
global_annotations['decref_function'] = 'hello_foo_unref'
global_annotations['custom_name'] = 'Foo'
## constructor?
m = constructor_rx.match(pygccxml_definition.name)
if m:
global_annotations['is_constructor_of'] = 'HelloFoo'
return
## method?
m = method_rx.match(pygccxml_definition.name)
if m:
method_name = m.group(1)[1:]
if method_name in ['ref', 'unref']:
global_annotations['ignore'] = 'true'
return
global_annotations['as_method'] = m.group(1)[1:]
global_annotations['of_class'] = 'HelloFoo'
parameter_annotations['foo'] = {'transfer_ownership': 'false'}
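# For reference, the hook above assumes C symbols in hello.h following this
# naming scheme (inferred from the regexes and annotations; the exact
# signatures are assumptions):
#     HelloFoo *hello_foo_new(...);         /* becomes the Foo constructor */
#     void hello_foo_ref(HelloFoo *foo);    /* hidden; used for refcounting */
#     void hello_foo_unref(HelloFoo *foo);
#     ...  hello_foo_<name>(HelloFoo *foo, ...);  /* exposed as Foo.<name>() */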
def my_module_gen(out_file, pygccxml_mode):
out = FileCodeSink(out_file)
#pybindgen.write_preamble(out)
out.writeln("#include \"hello.h\"")
module_parser = ModuleParser('hello')
module_parser.add_pre_scan_hook(pre_scan_hook)
module = module_parser.parse(sys.argv[2:])
module.generate(out)
if __name__ == '__main__':
try:
import cProfile as profile
except ImportError:
my_module_gen(sys.stdout, sys.argv[1])
else:
sys.stderr.write("** running under profiler\n")
profile.run('my_module_gen(sys.stdout, sys.argv[1])', 'hellomodulegen.pstat')
| lgpl-2.1 |
mohae/rancher-templates | packer_sources/saltbase/salt/roots/salt/_modules/informer.py | 4 | 1611 | """
This enables us to call the minions and search for a specific role
Roles are set using grains (described in http://www.saltstat.es/posts/role-infrastructure.html)
and propagated using salt-mine
"""
import logging
# Import salt libs
import salt.utils
import salt.payload
log = logging.getLogger(__name__)
def get_roles(role, *args, **kwargs):
"""
    Return the names of all minions whose 'roles' grain (published via the salt mine) contains the given role
"""
ret = []
nodes = __salt__['mine.get']('*', 'grains.item')
print "-------------------------------> NODES {0}".format(nodes)
for name, node_details in nodes.iteritems():
name = _realname(name)
roles = node_details.get('roles', [])
if role in roles:
ret.append(name)
return ret
def get_node_grain_item(name, item):
"""Get the details of a node by the name nodename"""
name = _realname(name)
node = __salt__['mine.get'](name, 'grains.item')
print "NODE DETAILS ------> {0}: {1}".format(name, node[name])
return node[name][item]
def all():
"""Get all the hosts and their ip addresses"""
ret = {}
nodes = __salt__['mine.get']('*', 'grains.item')
for name, node_details in nodes.iteritems():
if 'ec2_local-ipv4' in node_details:
ret[_realname(name)] = node_details['ec2_local-ipv4']
else:
ip = __salt__['mine.get'](name, 'network.ip_addrs')[name][0]
print "-----------------------------> {0}".format(ip)
ret[_realname(name)] = ip
return ret
def _realname(name):
"""Basically a filter to get the 'real' name of a node"""
if name == 'master':
return 'saltmaster'
else:
        return name
| mit |
saurabh6790/medsynaptic-app | stock/stock_ledger.py | 27 | 11820 | # Copyright (c) 2013, Web Notes Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import webnotes
from webnotes import msgprint
from webnotes.utils import cint, flt, cstr, now
from stock.utils import get_valuation_method
import json
# future reposting
class NegativeStockError(webnotes.ValidationError): pass
_exceptions = webnotes.local('stockledger_exceptions')
# _exceptions = []
def make_sl_entries(sl_entries, is_amended=None):
    if sl_entries:
        from stock.utils import update_bin

        cancel = sl_entries[0].get("is_cancelled") == "Yes"
        if cancel:
            set_as_cancel(sl_entries[0].get('voucher_type'), sl_entries[0].get('voucher_no'))

        for sle in sl_entries:
            sle_id = None
            if sle.get('is_cancelled') == 'Yes':
                sle['actual_qty'] = -flt(sle['actual_qty'])

            if sle.get("actual_qty"):
                sle_id = make_entry(sle)

            args = sle.copy()
            args.update({
                "sle_id": sle_id,
                "is_amended": is_amended
            })
            update_bin(args)

        if cancel:
            delete_cancelled_entry(sl_entries[0].get('voucher_type'),
                sl_entries[0].get('voucher_no'))
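# Shape sketch (inferred from the usage above, not a documented schema): each
# member of sl_entries is a dict along the lines of
#   {"item_code": "ITEM-001", "warehouse": "Stores",
#    "voucher_type": "Stock Entry", "voucher_no": "STE-0001",
#    "actual_qty": 5, "incoming_rate": 100.0, "is_cancelled": "No",
#    "posting_date": "2012-12-12", "posting_time": "12:00"}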
def set_as_cancel(voucher_type, voucher_no):
    webnotes.conn.sql("""update `tabStock Ledger Entry` set is_cancelled='Yes',
        modified=%s, modified_by=%s
        where voucher_type=%s and voucher_no=%s""",
        (now(), webnotes.session.user, voucher_type, voucher_no))
def make_entry(args):
    args.update({"doctype": "Stock Ledger Entry"})
    sle = webnotes.bean([args])
    sle.ignore_permissions = 1
    sle.insert()
    sle.submit()
    return sle.doc.name

def delete_cancelled_entry(voucher_type, voucher_no):
    webnotes.conn.sql("""delete from `tabStock Ledger Entry`
        where voucher_type=%s and voucher_no=%s""", (voucher_type, voucher_no))
def update_entries_after(args, verbose=1):
    """
    update valuation rate and qty after transaction
    from the current time-bucket onwards

    args = {
        "item_code": "ABC",
        "warehouse": "XYZ",
        "posting_date": "2012-12-12",
        "posting_time": "12:00"
    }
    """
    if not _exceptions:
        webnotes.local.stockledger_exceptions = []

    previous_sle = get_sle_before_datetime(args)

    qty_after_transaction = flt(previous_sle.get("qty_after_transaction"))
    valuation_rate = flt(previous_sle.get("valuation_rate"))
    stock_queue = json.loads(previous_sle.get("stock_queue") or "[]")
    stock_value = flt(previous_sle.get("stock_value"))
    prev_stock_value = flt(previous_sle.get("stock_value"))

    entries_to_fix = get_sle_after_datetime(previous_sle or \
        {"item_code": args["item_code"], "warehouse": args["warehouse"]}, for_update=True)

    valuation_method = get_valuation_method(args["item_code"])
    stock_value_difference = 0.0

    for sle in entries_to_fix:
        if sle.serial_no or not cint(webnotes.conn.get_default("allow_negative_stock")):
            # validate negative stock for serialized items, fifo valuation
            # or when negative stock is not allowed for moving average
            if not validate_negative_stock(qty_after_transaction, sle):
                qty_after_transaction += flt(sle.actual_qty)
                continue

        if sle.serial_no:
            valuation_rate = get_serialized_values(qty_after_transaction, sle, valuation_rate)
        elif valuation_method == "Moving Average":
            valuation_rate = get_moving_average_values(qty_after_transaction, sle, valuation_rate)
        else:
            valuation_rate = get_fifo_values(qty_after_transaction, sle, stock_queue)

        qty_after_transaction += flt(sle.actual_qty)

        # get stock value
        if sle.serial_no:
            stock_value = qty_after_transaction * valuation_rate
        elif valuation_method == "Moving Average":
            stock_value = (qty_after_transaction > 0) and \
                (qty_after_transaction * valuation_rate) or 0
        else:
            stock_value = sum((flt(batch[0]) * flt(batch[1]) for batch in stock_queue))

        # rounding as per precision
        from webnotes.model.meta import get_field_precision
        meta = webnotes.get_doctype("Stock Ledger Entry")
        stock_value = flt(stock_value, get_field_precision(meta.get_field("stock_value"),
            webnotes._dict({"fields": sle})))

        stock_value_difference = stock_value - prev_stock_value
        prev_stock_value = stock_value

        # update current sle
        webnotes.conn.sql("""update `tabStock Ledger Entry`
            set qty_after_transaction=%s, valuation_rate=%s, stock_queue=%s,
            stock_value=%s, stock_value_difference=%s where name=%s""",
            (qty_after_transaction, valuation_rate,
            json.dumps(stock_queue), stock_value, stock_value_difference, sle.name))

    if _exceptions:
        _raise_exceptions(args, verbose)

    # update bin
    if not webnotes.conn.exists({"doctype": "Bin", "item_code": args["item_code"],
            "warehouse": args["warehouse"]}):
        bin_wrapper = webnotes.bean([{
            "doctype": "Bin",
            "item_code": args["item_code"],
            "warehouse": args["warehouse"],
        }])
        bin_wrapper.ignore_permissions = 1
        bin_wrapper.insert()

    webnotes.conn.sql("""update `tabBin` set valuation_rate=%s, actual_qty=%s,
        stock_value=%s,
        projected_qty = (actual_qty + indented_qty + ordered_qty + planned_qty - reserved_qty)
        where item_code=%s and warehouse=%s""", (valuation_rate, qty_after_transaction,
        stock_value, args["item_code"], args["warehouse"]))
def get_sle_before_datetime(args, for_update=False):
    """
    get previous stock ledger entry before current time-bucket

    Details:
        get the last sle before the current time-bucket, so that all values
        are reposted from the current time-bucket onwards.
        this is necessary because at the time of cancellation, there may be
        entries between the cancelled entries in the same time-bucket
    """
    sle = get_stock_ledger_entries(args,
        ["timestamp(posting_date, posting_time) < timestamp(%(posting_date)s, %(posting_time)s)"],
        "desc", "limit 1", for_update=for_update)
    return sle and sle[0] or webnotes._dict()

def get_sle_after_datetime(args, for_update=False):
    """get Stock Ledger Entries after a particular datetime, for reposting"""
    # NOTE: passing for_update=True makes get_stock_ledger_entries append
    # "for update" to the query, locking these rows while they are reposted
    return get_stock_ledger_entries(args,
        ["timestamp(posting_date, posting_time) > timestamp(%(posting_date)s, %(posting_time)s)"],
        "asc", for_update=for_update)
def get_stock_ledger_entries(args, conditions=None, order="desc", limit=None, for_update=False):
    """get stock ledger entries filtered by specific posting datetime conditions"""
    if not args.get("posting_date"):
        args["posting_date"] = "1900-01-01"
    if not args.get("posting_time"):
        args["posting_time"] = "00:00"

    return webnotes.conn.sql("""select * from `tabStock Ledger Entry`
        where item_code = %%(item_code)s
        and warehouse = %%(warehouse)s
        and ifnull(is_cancelled, 'No')='No'
        %(conditions)s
        order by timestamp(posting_date, posting_time) %(order)s, name %(order)s
        %(limit)s %(for_update)s""" % {
            "conditions": conditions and ("and " + " and ".join(conditions)) or "",
            "limit": limit or "",
            "for_update": for_update and "for update" or "",
            "order": order
        }, args, as_dict=1)
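# Illustrative sketch (not part of the original module): for a call such as
#   get_stock_ledger_entries(args, ["posting_date > %(posting_date)s"], "asc", "limit 10")
# the template above renders roughly to
#   select * from `tabStock Ledger Entry`
#   where item_code = %(item_code)s
#   and warehouse = %(warehouse)s
#   and ifnull(is_cancelled, 'No')='No'
#   and posting_date > %(posting_date)s
#   order by timestamp(posting_date, posting_time) asc, name asc
#   limit 10
# The doubled %% placeholders survive the string-formatting step and are then
# bound from ``args`` by the SQL driver.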
def validate_negative_stock(qty_after_transaction, sle):
    """
    validate negative stock for entries current datetime onwards
    will not consider cancelled entries
    """
    diff = qty_after_transaction + flt(sle.actual_qty)

    if not _exceptions:
        webnotes.local.stockledger_exceptions = []

    if diff < 0 and abs(diff) > 0.0001:
        # negative stock!
        exc = sle.copy().update({"diff": diff})
        _exceptions.append(exc)
        return False
    else:
        return True
def get_serialized_values(qty_after_transaction, sle, valuation_rate):
    incoming_rate = flt(sle.incoming_rate)
    actual_qty = flt(sle.actual_qty)
    serial_no = cstr(sle.serial_no).split("\n")

    if incoming_rate < 0:
        # wrong incoming rate
        incoming_rate = valuation_rate
    elif incoming_rate == 0 or flt(sle.actual_qty) < 0:
        # In case of delivery/stock issue, get average purchase rate
        # of serial nos of current entry
        incoming_rate = flt(webnotes.conn.sql("""select avg(ifnull(purchase_rate, 0))
            from `tabSerial No` where name in (%s)""" % (", ".join(["%s"]*len(serial_no))),
            tuple(serial_no))[0][0])

    if incoming_rate and not valuation_rate:
        valuation_rate = incoming_rate
    else:
        new_stock_qty = qty_after_transaction + actual_qty
        if new_stock_qty > 0:
            new_stock_value = qty_after_transaction * valuation_rate + actual_qty * incoming_rate
            if new_stock_value > 0:
                # calculate new valuation rate only if stock value is positive
                # else it remains the same as that of previous entry
                valuation_rate = new_stock_value / new_stock_qty

    return valuation_rate
def get_moving_average_values(qty_after_transaction, sle, valuation_rate):
    incoming_rate = flt(sle.incoming_rate)
    actual_qty = flt(sle.actual_qty)

    if not incoming_rate:
        # In case of delivery/stock issue in_rate = 0 or wrong incoming rate
        incoming_rate = valuation_rate
    elif qty_after_transaction < 0:
        # if negative stock, take current valuation rate as incoming rate
        valuation_rate = incoming_rate

    new_stock_qty = qty_after_transaction + actual_qty
    new_stock_value = qty_after_transaction * valuation_rate + actual_qty * incoming_rate

    if new_stock_qty > 0 and new_stock_value > 0:
        valuation_rate = new_stock_value / flt(new_stock_qty)
    elif new_stock_qty <= 0:
        valuation_rate = 0.0

    # NOTE: val_rate is same as previous entry if new stock value is negative
    return valuation_rate
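# Worked example (a sketch, not part of the original module): with 10 units on
# hand at a valuation rate of 100.0, receiving 5 units at 130.0 gives
#   new_stock_qty   = 10 + 5              = 15
#   new_stock_value = 10*100.0 + 5*130.0  = 1650.0
#   valuation_rate  = 1650.0 / 15         = 110.0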
def get_fifo_values(qty_after_transaction, sle, stock_queue):
    incoming_rate = flt(sle.incoming_rate)
    actual_qty = flt(sle.actual_qty)

    if not stock_queue:
        stock_queue.append([0, 0])

    if actual_qty > 0:
        if stock_queue[-1][0] > 0:
            stock_queue.append([actual_qty, incoming_rate])
        else:
            qty = stock_queue[-1][0] + actual_qty
            stock_queue[-1] = [qty, qty > 0 and incoming_rate or 0]
    else:
        incoming_cost = 0
        qty_to_pop = abs(actual_qty)
        while qty_to_pop:
            if not stock_queue:
                stock_queue.append([0, 0])

            batch = stock_queue[0]

            if 0 < batch[0] <= qty_to_pop:
                # current batch has no more than we need:
                # consume it entirely and remove it from the queue
                incoming_cost += flt(batch[0]) * flt(batch[1])
                qty_to_pop -= batch[0]
                stock_queue.pop(0)
            else:
                # current batch covers the remaining quantity:
                # take what is needed and leave the rest in the batch
                incoming_cost += flt(qty_to_pop) * flt(batch[1])
                batch[0] -= qty_to_pop
                qty_to_pop = 0

    stock_value = sum((flt(batch[0]) * flt(batch[1]) for batch in stock_queue))
    stock_qty = sum((flt(batch[0]) for batch in stock_queue))

    valuation_rate = stock_qty and (stock_value / flt(stock_qty)) or 0

    return valuation_rate
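# Worked example (a sketch, not part of the original module): each queue entry
# is an oldest-first [qty, rate] batch.
#   start:            [[5, 10.0]]
#   receive 5 @ 12.0: [[5, 10.0], [5, 12.0]]
#   issue 7:          the 5 @ 10.0 batch is consumed, 2 more come out of the
#                     5 @ 12.0 batch -> [[3, 12.0]]
#   valuation_rate:   (3 * 12.0) / 3 = 12.0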
def _raise_exceptions(args, verbose=1):
    deficiency = min(e["diff"] for e in _exceptions)
    msg = """Negative stock error:
        Cannot complete this transaction because stock will start
        becoming negative (%s) for Item <b>%s</b> in Warehouse
        <b>%s</b> on <b>%s %s</b> in Transaction %s %s.
        Total Quantity Deficiency: <b>%s</b>""" % \
        (_exceptions[0]["diff"], args.get("item_code"), args.get("warehouse"),
        _exceptions[0]["posting_date"], _exceptions[0]["posting_time"],
        _exceptions[0]["voucher_type"], _exceptions[0]["voucher_no"],
        abs(deficiency))

    if verbose:
        msgprint(msg, raise_exception=NegativeStockError)
    else:
        raise NegativeStockError, msg
def get_previous_sle(args, for_update=False):
    """
    get the last sle on or before the current time-bucket,
    to get actual qty before transaction. this function
    is called from various transactions like stock entry, reco etc.

    args = {
        "item_code": "ABC",
        "warehouse": "XYZ",
        "posting_date": "2012-12-12",
        "posting_time": "12:00",
        "sle": "name of reference Stock Ledger Entry"
    }
    """
    if not args.get("sle"):
        args["sle"] = ""

    sle = get_stock_ledger_entries(args, ["name != %(sle)s",
        "timestamp(posting_date, posting_time) <= timestamp(%(posting_date)s, %(posting_time)s)"],
        "desc", "limit 1", for_update=for_update)
    return sle and sle[0] or {}
| agpl-3.0 |
guizos/androguard-acid | androguard/core/api_specific_resources/aosp_permissions/aosp_permissions_api9.py | 15 | 56007 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#################################################
### Extracted from platform version: 2.3.2
#################################################
AOSP_PERMISSIONS = {
'android.permission.BIND_WALLPAPER': {
'permissionGroup': '',
'description':
'Allows the holder to bind to the top-level interface of a wallpaper. Should never be needed for normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'bind to a wallpaper'
},
'android.permission.FORCE_BACK': {
'permissionGroup': '',
'description':
'Allows an application to force any activity that is in the foreground to close and go back. Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'force application to close'
},
'android.permission.READ_CALENDAR': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to read all of the calendar events stored on your phone. Malicious applications can use this to send your calendar events to other people.',
'protectionLevel': 'dangerous',
'label': 'read calendar events'
},
'android.permission.READ_FRAME_BUFFER': {
'permissionGroup': '',
'description':
'Allows application to read the content of the frame buffer.',
'protectionLevel': 'signature',
'label': 'read frame buffer'
},
'android.permission.READ_SYNC_STATS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to read the sync stats; e.g., the history of syncs that have occurred.',
'protectionLevel': 'normal',
'label': 'read sync statistics'
},
'android.permission.SHUTDOWN': {
'permissionGroup': '',
'description':
'Puts the activity manager into a shutdown state. Does not perform a complete shutdown.',
'protectionLevel': 'signature',
'label': 'partial shutdown'
},
'android.permission.ACCESS_NETWORK_STATE': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows an application to view the state of all networks.',
'protectionLevel': 'normal',
'label': 'view network state'
},
'android.permission.INTERNET': {
'permissionGroup': 'android.permission-group.NETWORK',
'description': 'Allows an application to create network sockets.',
'protectionLevel': 'dangerous',
'label': 'full Internet access'
},
'android.permission.CHANGE_CONFIGURATION': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the current configuration, such as the locale or overall font size.',
'protectionLevel': 'dangerous',
'label': 'change your UI settings'
},
'android.permission.READ_CONTACTS': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to read all of the contact (address) data stored on your phone. Malicious applications can use this to send your data to other people.',
'protectionLevel': 'dangerous',
'label': 'read contact data'
},
'android.permission.HARDWARE_TEST': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description':
'Allows the application to control various peripherals for the purpose of hardware testing.',
'protectionLevel': 'signature',
'label': 'test hardware'
},
'android.permission.ACCESS_DOWNLOAD_MANAGER_ADVANCED': {
'permissionGroup': '',
'description':
'Allows the application to access the download manager\'s advanced functions. Malicious applications can use this to disrupt downloads and access private information.',
'protectionLevel': 'signatureOrSystem',
'label': 'Advanced download manager functions.'
},
'com.android.launcher.permission.INSTALL_SHORTCUT': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to add shortcuts without user intervention.',
'protectionLevel': 'normal',
'label': 'install shortcuts'
},
'android.permission.CHANGE_WIFI_MULTICAST_STATE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to receive packets not directly addressed to your device. This can be useful when discovering services offered near by. It uses more power than the non-multicast mode.',
'protectionLevel': 'dangerous',
'label': 'allow Wi-Fi Multicast reception'
},
'android.permission.VIBRATE': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description': 'Allows the application to control the vibrator.',
'protectionLevel': 'normal',
'label': 'control vibrator'
},
'android.permission.BIND_INPUT_METHOD': {
'permissionGroup': '',
'description':
'Allows the holder to bind to the top-level interface of an input method. Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'bind to an input method'
},
'android.permission.SET_TIME_ZONE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the phone\'s time zone.',
'protectionLevel': 'dangerous',
'label': 'set time zone'
},
'android.permission.ACCESS_CACHE_FILESYSTEM': {
'permissionGroup': '',
'description':
'Allows an application to read and write the cache filesystem.',
'protectionLevel': 'signatureOrSystem',
'label': 'access the cache filesystem'
},
'android.permission.DOWNLOAD_CACHE_NON_PURGEABLE': {
'permissionGroup': '',
'description':
'Allows the application to download files to the download cache which cannot be automatically deleted when the download manager needs more space.',
'protectionLevel': 'signatureOrSystem',
'label': 'Reserve space in the download cache'
},
'android.permission.WRITE_SYNC_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to modify the sync settings, such as whether sync is enabled for Contacts.',
'protectionLevel': 'dangerous',
'label': 'write sync settings'
},
'android.permission.READ_LOGS': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to read from the system\'s various log files. This allows it to discover general information about what you are doing with the phone, potentially including personal or private information.',
'protectionLevel': 'dangerous',
'label': 'read sensitive log data'
},
'android.permission.DUMP': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows application to retrieve internal state of the system. Malicious applications may retrieve a wide variety of private and secure information that they should never normally need.',
'protectionLevel': 'signatureOrSystem',
'label': 'retrieve system internal state'
},
'android.permission.WRITE_GSERVICES': {
'permissionGroup': '',
'description':
'Allows an application to modify the Google services map. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'modify the Google services map'
},
'android.permission.INJECT_EVENTS': {
'permissionGroup': '',
'description':
'Allows an application to deliver its own input events (key presses, etc.) to other applications. Malicious applications can use this to take over the phone.',
'protectionLevel': 'signature',
'label': 'press keys and control buttons'
},
'android.permission.BIND_DEVICE_ADMIN': {
'permissionGroup': '',
'description':
'Allows the holder to send intents to a device administrator. Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'interact with a device admin'
},
'android.permission.FORCE_STOP_PACKAGES': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to forcibly stop other applications.',
'protectionLevel': 'signature',
'label': 'force stop other applications'
},
'com.android.frameworks.coretests.permission.TEST_DENIED': {
'permissionGroup': '',
'description':
'Used for running unit tests, for testing operations where we do not have the permission.',
'protectionLevel': 'normal',
'label': 'Test Denied'
},
'android.permission.WRITE_SECURE_SETTINGS': {
'permissionGroup': '',
'description':
'Allows an application to modify the system\'s secure settings data. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'modify secure system settings'
},
'android.permission.UPDATE_DEVICE_STATS': {
'permissionGroup': '',
'description':
'Allows the modification of collected battery statistics. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'modify battery statistics'
},
'android.permission.BROADCAST_PACKAGE_REMOVED': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to broadcast a notification that an application package has been removed. Malicious applications may use this to kill any other running application.',
'protectionLevel': 'signature',
'label': 'send package removed broadcast'
},
'android.permission.SYSTEM_ALERT_WINDOW': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to show system alert windows. Malicious applications can take over the entire screen of the phone.',
'protectionLevel': 'dangerous',
'label': 'display system-level alerts'
},
'com.android.cts.permissionNotUsedWithSignature': {
'permissionGroup': '',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'android.permission.ACCESS_LOCATION_EXTRA_COMMANDS': {
'permissionGroup': 'android.permission-group.LOCATION',
'description':
'Access extra location provider commands. Malicious applications could use this to interfere with the operation of the GPS or other location sources.',
'protectionLevel': 'normal',
'label': 'access extra location provider commands'
},
'android.permission.BRICK': {
'permissionGroup': '',
'description':
'Allows the application to disable the entire phone permanently. This is very dangerous.',
'protectionLevel': 'signature',
'label': 'permanently disable phone'
},
'com.android.browser.permission.WRITE_HISTORY_BOOKMARKS': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to modify the Browser\'s history or bookmarks stored on your phone. Malicious applications can use this to erase or modify your Browser\'s data.',
'protectionLevel': 'dangerous',
'label': 'write Browser\'s history and bookmarks'
},
'android.permission.CHANGE_WIFI_STATE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to connect to and disconnect from Wi-Fi access points, and to make changes to configured Wi-Fi networks.',
'protectionLevel': 'dangerous',
'label': 'change Wi-Fi state'
},
'android.permission.RECORD_AUDIO': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description': 'Allows application to access the audio record path.',
'protectionLevel': 'dangerous',
'label': 'record audio'
},
'android.permission.MODIFY_PHONE_STATE': {
'permissionGroup': 'android.permission-group.PHONE_CALLS',
'description':
'Allows the application to control the phone features of the device. An application with this permission can switch networks, turn the phone radio on and off and the like without ever notifying you.',
'protectionLevel': 'signatureOrSystem',
'label': 'modify phone state'
},
'android.permission.ACCOUNT_MANAGER': {
'permissionGroup': 'android.permission-group.ACCOUNTS',
'description':
'Allows an application to make calls to AccountAuthenticators',
'protectionLevel': 'signature',
'label': 'act as the AccountManagerService'
},
'android.permission.SET_ANIMATION_SCALE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the global animation speed (faster or slower animations) at any time.',
'protectionLevel': 'dangerous',
'label': 'modify global animation speed'
},
'android.permission.SET_PROCESS_LIMIT': {
'permissionGroup': 'android.permission-group.DEVELOPMENT_TOOLS',
'description':
'Allows an application to control the maximum number of processes that will run. Never needed for normal applications.',
'protectionLevel': 'dangerous',
'label': 'limit number of running processes'
},
'android.permission.SET_PREFERRED_APPLICATIONS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to modify your preferred applications. This can allow malicious applications to silently change the applications that are run, spoofing your existing applications to collect private data from you.',
'protectionLevel': 'signature',
'label': 'set preferred applications'
},
'com.android.cts.permissionAllowedWithSignature': {
'permissionGroup': '',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'android.permission.SET_DEBUG_APP': {
'permissionGroup': 'android.permission-group.DEVELOPMENT_TOOLS',
'description':
'Allows an application to turn on debugging for another application. Malicious applications can use this to kill other applications.',
'protectionLevel': 'dangerous',
'label': 'enable application debugging'
},
'android.permission.INSTALL_DRM': {
'permissionGroup': '',
'description': 'Allows application to install DRM-protected content.',
'protectionLevel': 'normal',
'label': 'Install DRM content.'
},
'android.permission.BLUETOOTH': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows an application to view configuration of the local Bluetooth phone, and to make and accept connections with paired devices.',
'protectionLevel': 'dangerous',
'label': 'create Bluetooth connections'
},
'android.permission.CAMERA': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description':
'Allows application to take pictures and videos with the camera. This allows the application at any time to collect images the camera is seeing.',
'protectionLevel': 'dangerous',
'label': 'take pictures and videos'
},
'android.permission.SET_WALLPAPER_HINTS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows the application to set the system wallpaper size hints.',
'protectionLevel': 'normal',
'label': 'set wallpaper size hints'
},
'android.permission.WAKE_LOCK': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to prevent the phone from going to sleep.',
'protectionLevel': 'dangerous',
'label': 'prevent phone from sleeping'
},
'android.permission.REBOOT': {
'permissionGroup': '',
'description': 'Allows the application to force the phone to reboot.',
'protectionLevel': 'signatureOrSystem',
'label': 'force phone reboot'
},
'android.permission.BROADCAST_WAP_PUSH': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows an application to broadcast a notification that a WAP PUSH message has been received. Malicious applications may use this to forge MMS message receipt or to silently replace the content of any web page with malicious variants.',
'protectionLevel': 'signature',
'label': 'send WAP-PUSH-received broadcast'
},
'android.permission.SET_WALLPAPER_COMPONENT': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': '',
'protectionLevel': 'signatureOrSystem',
'label': ''
},
'android.permission.ACCESS_BLUETOOTH_SHARE': {
'permissionGroup': '',
'description':
'Allows the application to access the BluetoothShare manager and to use it to transfer files.',
'protectionLevel': 'signature',
'label': 'Access download manager.'
},
'android.intent.category.MASTER_CLEAR.permission.C2D_MESSAGE': {
'permissionGroup': '',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'android.permission.STATUS_BAR': {
'permissionGroup': '',
'description':
'Allows application to disable the status bar or add and remove system icons.',
'protectionLevel': 'signatureOrSystem',
'label': 'disable or modify status bar'
},
'android.permission.WRITE_USER_DICTIONARY': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to write new words into the user dictionary.',
'protectionLevel': 'normal',
'label': 'write to user defined dictionary'
},
'com.android.browser.permission.READ_HISTORY_BOOKMARKS': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows the application to read all the URLs that the Browser has visited, and all of the Browser\'s bookmarks.',
'protectionLevel': 'dangerous',
'label': 'read Browser\'s history and bookmarks'
},
'android.permission.ACCESS_DRM': {
'permissionGroup': '',
'description': 'Allows application to access DRM-protected content.',
'protectionLevel': 'signature',
'label': 'Access DRM content.'
},
'android.permission.RECEIVE_SMS': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows application to receive and process SMS messages. Malicious applications may monitor your messages or delete them without showing them to you.',
'protectionLevel': 'dangerous',
'label': 'receive SMS'
},
'android.permission.WRITE_CONTACTS': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to modify the contact (address) data stored on your phone. Malicious applications can use this to erase or modify your contact data.',
'protectionLevel': 'dangerous',
'label': 'write contact data'
},
'android.permission.CONTROL_LOCATION_UPDATES': {
'permissionGroup': '',
'description':
'Allows enabling/disabling location update notifications from the radio. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'control location update notifications'
},
'android.permission.BIND_APPWIDGET': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows the application to tell the system which widgets can be used by which application. With this permission, applications can give access to personal data to other applications. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'choose widgets'
},
'com.android.frameworks.coretests.permission.TEST_GRANTED': {
'permissionGroup': '',
'description':
'Used for running unit tests, for testing operations where we have the permission.',
'protectionLevel': 'normal',
'label': 'Test Granted'
},
'android.permission.SIGNAL_PERSISTENT_PROCESSES': {
'permissionGroup': 'android.permission-group.DEVELOPMENT_TOOLS',
'description':
'Allows application to request that the supplied signal be sent to all persistent processes.',
'protectionLevel': 'dangerous',
'label': 'send Linux signals to applications'
},
'android.permission.ASEC_CREATE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': 'Allows the application to create internal storage.',
'protectionLevel': 'signature',
'label': 'create internal storage'
},
'android.permission.INSTALL_LOCATION_PROVIDER': {
'permissionGroup': '',
'description':
'Create mock location sources for testing. Malicious applications can use this to override the location and/or status returned by real location sources such as GPS or Network providers or monitor and report your location to an external source.',
'protectionLevel': 'signatureOrSystem',
'label': 'permission to install a location provider'
},
'android.permission.SEND_DOWNLOAD_COMPLETED_INTENTS': {
'permissionGroup': '',
'description':
'Allows the application to send notifications about completed downloads. Malicious applications can use this to confuse other applications that download files.',
'protectionLevel': 'signature',
'label': 'Send download notifications.'
},
'android.permission.WRITE_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to modify the system\'s settings data. Malicious applications can corrupt your system\'s configuration.',
'protectionLevel': 'dangerous',
'label': 'modify global system settings'
},
'android.permission.MASTER_CLEAR': {
'permissionGroup': '',
'description':
'Allows an application to completely reset the system to its factory settings, erasing all data, configuration, and installed applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'reset system to factory defaults'
},
'android.permission.READ_INPUT_STATE': {
'permissionGroup': '',
'description':
'Allows applications to watch the keys you press even when interacting with another application (such as entering a password). Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'record what you type and actions you take'
},
'android.permission.MANAGE_APP_TOKENS': {
'permissionGroup': '',
'description':
'Allows applications to create and manage their own tokens, bypassing their normal Z-ordering. Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'manage application tokens'
},
'com.android.email.permission.ACCESS_PROVIDER': {
'permissionGroup': '',
'description':
'Allows this application to access your Email database, including received messages, sent messages, usernames and passwords.',
'protectionLevel': 'signatureOrSystem',
'label': 'Access Email provider data'
},
'com.android.frameworks.coretests.DANGEROUS': {
'permissionGroup': 'android.permission-group.COST_MONEY',
'description': '',
'protectionLevel': 'dangerous',
'label': ''
},
'com.android.launcher.permission.WRITE_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the settings and shortcuts in Home.',
'protectionLevel': 'normal',
'label': 'write Home settings and shortcuts'
},
'android.permission.MODIFY_AUDIO_SETTINGS': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description':
'Allows application to modify global audio settings such as volume and routing.',
'protectionLevel': 'dangerous',
'label': 'change your audio settings'
},
'android.permission.ASEC_ACCESS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows the application to get information on internal storage.',
'protectionLevel': 'signature',
'label': 'get information on internal storage'
},
'android.permission.USE_SIP': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows an application to use the SIP service to make/receive Internet calls.',
'protectionLevel': 'dangerous',
'label': 'make/receive Internet calls'
},
'android.permission.CHANGE_NETWORK_STATE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the state of network connectivity.',
'protectionLevel': 'dangerous',
'label': 'change network connectivity'
},
'android.permission.WRITE_APN_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to modify the APN settings, such as Proxy and Port of any APN.',
'protectionLevel': 'dangerous',
'label': 'write Access Point Name settings'
},
'android.permission.ACCESS_SURFACE_FLINGER': {
'permissionGroup': '',
'description':
'Allows application to use SurfaceFlinger low-level features.',
'protectionLevel': 'signature',
'label': 'access SurfaceFlinger'
},
'android.permission.MOVE_PACKAGE': {
'permissionGroup': '',
'description':
'Allows an application to move application resources from internal to external media and vice versa.',
'protectionLevel': 'signatureOrSystem',
'label': 'Move application resources'
},
'android.permission.FACTORY_TEST': {
'permissionGroup': '',
'description':
'Run as a low-level manufacturer test, allowing complete access to the phone hardware. Only available when a phone is running in manufacturer test mode.',
'protectionLevel': 'signature',
'label': 'run in factory test mode'
},
'android.permission.CHANGE_BACKGROUND_DATA_SETTING': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to change the background data usage setting.',
'protectionLevel': 'signature',
'label': 'change background data usage setting'
},
'android.permission.PROCESS_OUTGOING_CALLS': {
'permissionGroup': 'android.permission-group.PHONE_CALLS',
'description':
'Allows application to process outgoing calls and change the number to be dialed. Malicious applications may monitor, redirect, or prevent outgoing calls.',
'protectionLevel': 'dangerous',
'label': 'intercept outgoing calls'
},
'android.permission.CALL_PRIVILEGED': {
'permissionGroup': '',
'description':
'Allows the application to call any phone number, including emergency numbers, without your intervention. Malicious applications may place unnecessary and illegal calls to emergency services.',
'protectionLevel': 'signatureOrSystem',
'label': 'directly call any phone numbers'
},
'android.permission.WRITE_CALENDAR': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to add or change the events on your calendar, which may send email to guests. Malicious applications can use this to erase or modify your calendar events or to send email to guests.',
'protectionLevel': 'dangerous',
'label': 'add or modify calendar events and send email to guests'
},
'android.permission.NFC': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows an application to communicate with Near Field Communication (NFC) tags, cards, and readers.',
'protectionLevel': 'dangerous',
'label': 'control Near Field Communication'
},
'android.permission.MANAGE_ACCOUNTS': {
'permissionGroup': 'android.permission-group.ACCOUNTS',
'description':
'Allows an application to perform operations like adding, and removing accounts and deleting their password.',
'protectionLevel': 'dangerous',
'label': 'manage the accounts list'
},
'android.permission.SEND_SMS': {
'permissionGroup': 'android.permission-group.COST_MONEY',
'description':
'Allows application to send SMS messages. Malicious applications may cost you money by sending messages without your confirmation.',
'protectionLevel': 'dangerous',
'label': 'send SMS messages'
},
'android.permission.CLEAR_APP_USER_DATA': {
'permissionGroup': '',
'description': 'Allows an application to clear user data.',
'protectionLevel': 'signature',
'label': 'delete other applications\' data'
},
'android.permission.ACCESS_MOCK_LOCATION': {
'permissionGroup': 'android.permission-group.LOCATION',
'description':
'Create mock location sources for testing. Malicious applications can use this to override the location and/or status returned by real location sources such as GPS or Network providers.',
'protectionLevel': 'dangerous',
'label': 'mock location sources for testing'
},
'android.permission.PERFORM_CDMA_PROVISIONING': {
'permissionGroup': '',
'description':
'Allows the application to start CDMA provisioning. Malicious applications may unnecessarily start CDMA provisioning',
'protectionLevel': 'signatureOrSystem',
'label': 'directly start CDMA phone setup'
},
'android.permission.WRITE_SMS': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows application to write to SMS messages stored on your phone or SIM card. Malicious applications may delete your messages.',
'protectionLevel': 'dangerous',
'label': 'edit SMS or MMS'
},
'android.permission.ACCESS_ALL_DOWNLOADS': {
'permissionGroup': '',
'description':
        'Allows the application to view and modify all downloads initiated by any application on the system.',
'protectionLevel': 'signature',
'label': 'Access all system downloads'
},
'android.permission.DELETE_PACKAGES': {
'permissionGroup': '',
'description':
'Allows an application to delete Android packages. Malicious applications can use this to delete important applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'delete applications'
},
'android.permission.COPY_PROTECTED_DATA': {
'permissionGroup': '',
'description':
'Allows to invoke default container service to copy content. Not for use by normal applications.',
'protectionLevel': 'signature',
'label':
'Allows to invoke default container service to copy content. Not for use by normal applications.'
},
'android.permission.ACCESS_CHECKIN_PROPERTIES': {
'permissionGroup': '',
'description':
'Allows read/write access to properties uploaded by the checkin service. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'access checkin properties'
},
'android.permission.DOWNLOAD_WITHOUT_NOTIFICATION': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows the application to download files through the download manager without any notification being shown to the user.',
'protectionLevel': 'normal',
'label': 'download files without notification'
},
'com.android.email.permission.READ_ATTACHMENT': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows this application to read your Email attachments.',
'protectionLevel': 'dangerous',
'label': 'Read Email attachments'
},
'android.permission.SET_TIME': {
'permissionGroup': '',
'description':
'Allows an application to change the phone\'s clock time.',
'protectionLevel': 'signatureOrSystem',
'label': 'set time'
},
'android.permission.BATTERY_STATS': {
'permissionGroup': '',
'description':
'Allows the modification of collected battery statistics. Not for use by normal applications.',
'protectionLevel': 'normal',
'label': 'modify battery statistics'
},
'android.app.cts.permission.TEST_GRANTED': {
'permissionGroup': '',
'description':
'Used for running CTS tests, for testing operations where we have the permission.',
'protectionLevel': 'normal',
'label': 'Test Granted'
},
'android.permission.DIAGNOSTIC': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to read and write to any resource owned by the diag group; for example, files in /dev. This could potentially affect system stability and security. This should be ONLY be used for hardware-specific diagnostics by the manufacturer or operator.',
'protectionLevel': 'signature',
'label': 'read/write to resources owned by diag'
},
'android.permission.CALL_PHONE': {
'permissionGroup': 'android.permission-group.COST_MONEY',
'description':
'Allows the application to call phone numbers without your intervention. Malicious applications may cause unexpected calls on your phone bill. Note that this does not allow the application to call emergency numbers.',
'protectionLevel': 'dangerous',
'label': 'directly call phone numbers'
},
'android.permission.MOUNT_FORMAT_FILESYSTEMS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': 'Allows the application to format removable storage.',
'protectionLevel': 'dangerous',
'label': 'format external storage'
},
'android.permission.READ_PHONE_STATE': {
'permissionGroup': 'android.permission-group.PHONE_CALLS',
'description':
'Allows the application to access the phone features of the device. An application with this permission can determine the phone number and serial number of this phone, whether a call is active, the number that call is connected to and the like.',
'protectionLevel': 'dangerous',
'label': 'read phone state and identity'
},
'android.permission.ACCESS_COARSE_LOCATION': {
'permissionGroup': 'android.permission-group.LOCATION',
'description':
'Access coarse location sources such as the cellular network database to determine an approximate phone location, where available. Malicious applications can use this to determine approximately where you are.',
'protectionLevel': 'dangerous',
'label': 'coarse (network-based) location'
},
'android.permission.BROADCAST_SMS': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows an application to broadcast a notification that an SMS message has been received. Malicious applications may use this to forge incoming SMS messages.',
'protectionLevel': 'signature',
'label': 'send SMS-received broadcast'
},
'android.permission.KILL_BACKGROUND_PROCESSES': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to kill background processes of other applications, even if memory isn\'t low.',
'protectionLevel': 'normal',
'label': 'kill background processes'
},
'android.permission.STOP_APP_SWITCHES': {
'permissionGroup': '',
'description':
'Prevents the user from switching to another application.',
'protectionLevel': 'signature',
'label': 'prevent app switches'
},
'android.permission.ACCESS_WIFI_STATE': {
'permissionGroup': 'android.permission-group.NETWORK',
'description':
'Allows an application to view the information about the state of Wi-Fi.',
'protectionLevel': 'normal',
'label': 'view Wi-Fi state'
},
'android.permission.RECEIVE_MMS': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows application to receive and process MMS messages. Malicious applications may monitor your messages or delete them without showing them to you.',
'protectionLevel': 'dangerous',
'label': 'receive MMS'
},
'android.permission.GLOBAL_SEARCH_CONTROL': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'android.permission.ACCESS_DOWNLOAD_MANAGER': {
'permissionGroup': '',
'description':
'Allows the application to access the download manager and to use it to download files. Malicious applications can use this to disrupt downloads and access private information.',
'protectionLevel': 'signatureOrSystem',
'label': 'Access download manager.'
},
'android.permission.STATUS_BAR_SERVICE': {
'permissionGroup': '',
'description': 'Allows the application to be the status bar.',
'protectionLevel': 'signature',
'label': 'status bar'
},
'android.permission.DELETE_CACHE_FILES': {
'permissionGroup': '',
'description': 'Allows an application to delete cache files.',
'protectionLevel': 'signatureOrSystem',
'label': 'delete other applications\' caches'
},
'android.permission.RESTART_PACKAGES': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to kill background processes of other applications, even if memory isn\'t low.',
'protectionLevel': 'normal',
'label': 'kill background processes'
},
'android.permission.GET_ACCOUNTS': {
'permissionGroup': 'android.permission-group.ACCOUNTS',
'description':
'Allows an application to get the list of accounts known by the phone.',
'protectionLevel': 'normal',
'label': 'discover known accounts'
},
'android.permission.SUBSCRIBED_FEEDS_READ': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to get details about the currently synced feeds.',
'protectionLevel': 'normal',
'label': 'read subscribed feeds'
},
'android.permission.READ_SYNC_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to read the sync settings, such as whether sync is enabled for Contacts.',
'protectionLevel': 'normal',
'label': 'read sync settings'
},
'android.permission.DISABLE_KEYGUARD': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to disable the keylock and any associated password security. A legitimate example of this is the phone disabling the keylock when receiving an incoming phone call, then re-enabling the keylock when the call is finished.',
'protectionLevel': 'dangerous',
'label': 'disable keylock'
},
'com.android.launcher.permission.UNINSTALL_SHORTCUT': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to remove shortcuts without user intervention.',
'protectionLevel': 'normal',
'label': 'uninstall shortcuts'
},
'android.permission.USE_CREDENTIALS': {
'permissionGroup': 'android.permission-group.ACCOUNTS',
'description':
'Allows an application to request authentication tokens.',
'protectionLevel': 'dangerous',
'label': 'use the authentication credentials of an account'
},
'android.permission.SUBSCRIBED_FEEDS_WRITE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to modify your currently synced feeds. This could allow a malicious application to change your synced feeds.',
'protectionLevel': 'dangerous',
'label': 'write subscribed feeds'
},
'android.permission.READ_USER_DICTIONARY': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows an application to read any private words, names and phrases that the user may have stored in the user dictionary.',
'protectionLevel': 'dangerous',
'label': 'read user defined dictionary'
},
'android.permission.CHANGE_COMPONENT_ENABLED_STATE': {
'permissionGroup': '',
'description':
'Allows an application to change whether a component of another application is enabled or not. Malicious applications can use this to disable important phone capabilities. Care must be used with permission, as it is possible to get application components into an unusable, inconsistent, or unstable state.',
'protectionLevel': 'signature',
'label': 'enable or disable application components'
},
'android.permission.RECEIVE_BOOT_COMPLETED': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the application to slow down the overall phone by always running.',
'protectionLevel': 'normal',
'label': 'automatically start at boot'
},
'android.permission.BACKUP': {
'permissionGroup': '',
'description':
'Allows the application to control the system\'s backup and restore mechanism. Not for use by normal applications.',
'protectionLevel': 'signatureOrSystem',
'label': 'control system backup and restore'
},
'android.permission.EXPAND_STATUS_BAR': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows application to expand or collapse the status bar.',
'protectionLevel': 'normal',
'label': 'expand/collapse status bar'
},
'android.permission.ACCESS_USB': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description': 'Allows the application to access USB devices.',
'protectionLevel': 'signatureOrSystem',
'label': 'access USB devices'
},
'android.permission.ACCESS_FINE_LOCATION': {
'permissionGroup': 'android.permission-group.LOCATION',
'description':
'Access fine location sources such as the Global Positioning System on the phone, where available. Malicious applications can use this to determine where you are, and may consume additional battery power.',
'protectionLevel': 'dangerous',
'label': 'fine (GPS) location'
},
'android.permission.ASEC_RENAME': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': 'Allows the application to rename internal storage.',
'protectionLevel': 'signature',
'label': 'rename internal storage'
},
'android.permission.PERSISTENT_ACTIVITY': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to make parts of itself persistent, so the system can\'t use it for other applications.',
'protectionLevel': 'dangerous',
'label': 'make application always run'
},
'android.permission.REORDER_TASKS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to move tasks to the foreground and background. Malicious applications can force themselves to the front without your control.',
'protectionLevel': 'dangerous',
'label': 'reorder running applications'
},
'android.permission.RECEIVE_WAP_PUSH': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows application to receive and process WAP messages. Malicious applications may monitor your messages or delete them without showing them to you.',
'protectionLevel': 'dangerous',
'label': 'receive WAP'
},
'android.permission.DEVICE_POWER': {
'permissionGroup': '',
'description': 'Allows the application to turn the phone on or off.',
'protectionLevel': 'signature',
'label': 'power phone on or off'
},
'android.permission.SET_WALLPAPER': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': 'Allows the application to set the system wallpaper.',
'protectionLevel': 'normal',
'label': 'set wallpaper'
},
'android.permission.ASEC_DESTROY': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': 'Allows the application to destroy internal storage.',
'protectionLevel': 'signature',
'label': 'destroy internal storage'
},
'android.permission.BLUETOOTH_ADMIN': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to configure the local Bluetooth phone, and to discover and pair with remote devices.',
'protectionLevel': 'dangerous',
'label': 'bluetooth administration'
},
'android.permission.WRITE_EXTERNAL_STORAGE': {
'permissionGroup': 'android.permission-group.STORAGE',
'description': 'Allows an application to write to the SD card.',
'protectionLevel': 'dangerous',
'label': 'modify/delete SD card contents'
},
'android.permission.GET_PACKAGE_SIZE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to retrieve its code, data, and cache sizes',
'protectionLevel': 'normal',
'label': 'measure application storage space'
},
'android.permission.ASEC_MOUNT_UNMOUNT': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows the application to mount / unmount internal storage.',
'protectionLevel': 'signature',
'label': 'mount / unmount internal storage'
},
'android.permission.INSTALL_PACKAGES': {
'permissionGroup': '',
'description':
'Allows an application to install new or updated Android packages. Malicious applications can use this to add new applications with arbitrarily powerful permissions.',
'protectionLevel': 'signatureOrSystem',
'label': 'directly install applications'
},
'android.permission.AUTHENTICATE_ACCOUNTS': {
'permissionGroup': 'android.permission-group.ACCOUNTS',
'description':
'Allows an application to use the account authenticator capabilities of the AccountManager, including creating accounts and getting and setting their passwords.',
'protectionLevel': 'dangerous',
'label': 'act as an account authenticator'
},
'com.android.frameworks.coretests.SIGNATURE': {
'permissionGroup': 'android.permission-group.COST_MONEY',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'com.android.launcher.permission.READ_SETTINGS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to read the settings and shortcuts in Home.',
'protectionLevel': 'normal',
'label': 'read Home settings and shortcuts'
},
'com.android.alarm.permission.SET_ALARM': {
'permissionGroup': 'android.permission-group.PERSONAL_INFO',
'description':
'Allows the application to set an alarm in an installed alarm clock application. Some alarm clock applications may not implement this feature.',
'protectionLevel': 'normal',
'label': 'set alarm in alarm clock'
},
'android.permission.INTERNAL_SYSTEM_WINDOW': {
'permissionGroup': '',
'description':
'Allows the creation of windows that are intended to be used by the internal system user interface. Not for use by normal applications.',
'protectionLevel': 'signature',
'label': 'display unauthorized windows'
},
'android.permission.GET_TASKS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows application to retrieve information about currently and recently running tasks. May allow malicious applications to discover private information about other applications.',
'protectionLevel': 'dangerous',
'label': 'retrieve running applications'
},
'android.permission.SET_ORIENTATION': {
'permissionGroup': '',
'description':
'Allows an application to change the rotation of the screen at any time. Should never be needed for normal applications.',
'protectionLevel': 'signature',
'label': 'change screen orientation'
},
'android.permission.SET_ACTIVITY_WATCHER': {
'permissionGroup': '',
'description':
'Allows an application to monitor and control how the system launches activities. Malicious applications may completely compromise the system. This permission is only needed for development, never for normal phone usage.',
'protectionLevel': 'signature',
'label': 'monitor and control all application launching'
},
'com.android.frameworks.coretests.NORMAL': {
'permissionGroup': 'android.permission-group.COST_MONEY',
'description': '',
'protectionLevel': 'normal',
'label': ''
},
'android.permission.READ_SMS': {
'permissionGroup': 'android.permission-group.MESSAGES',
'description':
'Allows application to read SMS messages stored on your phone or SIM card. Malicious applications may read your confidential messages.',
'protectionLevel': 'dangerous',
'label': 'read SMS or MMS'
},
'android.permission.BROADCAST_STICKY': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to send sticky broadcasts, which remain after the broadcast ends. Malicious applications can make the phone slow or unstable by causing it to use too much memory.',
'protectionLevel': 'normal',
'label': 'send sticky broadcast'
},
'android.permission.GLOBAL_SEARCH': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description': '',
'protectionLevel': 'signatureOrSystem',
'label': ''
},
'com.android.cts.permissionWithSignature': {
'permissionGroup': '',
'description': '',
'protectionLevel': 'signature',
'label': ''
},
'android.permission.PACKAGE_USAGE_STATS': {
'permissionGroup': '',
'description':
'Allows the modification of collected component usage statistics. Not for use by normal applications.',
'protectionLevel': 'signature',
'label': 'update component usage statistics'
},
'android.permission.SET_ALWAYS_FINISH': {
'permissionGroup': 'android.permission-group.DEVELOPMENT_TOOLS',
'description':
'Allows an application to control whether activities are always finished as soon as they go to the background. Never needed for normal applications.',
'protectionLevel': 'dangerous',
'label': 'make all background applications close'
},
'android.permission.CLEAR_APP_CACHE': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows an application to free phone storage by deleting files in application cache directory. Access is very restricted usually to system process.',
'protectionLevel': 'dangerous',
'label': 'delete all application cache data'
},
'android.permission.MOUNT_UNMOUNT_FILESYSTEMS': {
'permissionGroup': 'android.permission-group.SYSTEM_TOOLS',
'description':
'Allows the application to mount and unmount filesystems for removable storage.',
'protectionLevel': 'dangerous',
'label': 'mount and unmount filesystems'
},
'android.permission.FLASHLIGHT': {
'permissionGroup': 'android.permission-group.HARDWARE_CONTROLS',
'description': 'Allows the application to control the flashlight.',
'protectionLevel': 'normal',
'label': 'control flashlight'
},
}
AOSP_PERMISSION_GROUPS = {
'android.permission-group.NETWORK': {
'description': 'Allow applications to access various network features.',
'label': 'Network communication'
},
'android.permission-group.STORAGE':
{'description': 'Access the SD card.',
'label': 'Storage'},
'android.permission-group.MESSAGES': {
'description': 'Read and write your SMS, e-mail, and other messages.',
'label': 'Your messages'
},
'android.permission-group.PERSONAL_INFO': {
'description':
'Direct access to your contacts and calendar stored on the phone.',
'label': 'Your personal information'
},
'android.permission-group.DEVELOPMENT_TOOLS': {
'description': 'Features only needed for application developers.',
'label': 'Development tools'
},
'android.permission-group.COST_MONEY': {'description': '',
'label': ''},
'android.permission-group.ACCOUNTS': {
'description': 'Access the available accounts.',
'label': 'Your accounts'
},
'android.permission-group.LOCATION': {
'description': 'Monitor your physical location',
'label': 'Your location'
},
'android.permission-group.HARDWARE_CONTROLS': {
'description': 'Direct access to hardware on the handset.',
'label': 'Hardware controls'
},
'android.permission-group.SYSTEM_TOOLS': {
'description': 'Lower-level access and control of the system.',
'label': 'System tools'
},
'android.permission-group.PHONE_CALLS': {
'description': 'Monitor, record, and process phone calls.',
'label': 'Phone calls'
},
}
#################################################
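# Illustrative lookup sketch (not part of the original module): resolving the
# display label behind a permission's 'permissionGroup' key via the table above.
#
#     def group_label(group_name):
#         return AOSP_PERMISSION_GROUPS.get(group_name, {}).get('label', '')
#
#     group_label('android.permission-group.SYSTEM_TOOLS')  # -> 'System tools'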
| apache-2.0 |
josthkko/ggrc-core | src/ggrc/migrations/versions/20160314155056_39aec99639d5_add_defintion_id_to_ca_definitions.py | 7 | 1963 | # Copyright (C) 2016 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""
Add definition_id to custom attribute definitions
Create Date: 2016-03-14 15:50:56.407589
"""
# disable Invalid constant name pylint warning for mandatory Alembic variables.
# pylint: disable=invalid-name
import sqlalchemy as sa
from sqlalchemy.dialects import mysql
from alembic import op
# revision identifiers, used by Alembic.
revision = '39aec99639d5'
down_revision = '9db8d85e82b'
def upgrade():
"""Add the new id field and fix some indexes/nullable issues."""
op.add_column('custom_attribute_definitions',
sa.Column('definition_id', sa.Integer(), nullable=True))
op.add_column('custom_attribute_definitions',
sa.Column('multi_choice_mandatory', sa.Text(), nullable=True))
op.alter_column('custom_attribute_definitions', 'helptext',
existing_type=mysql.VARCHAR(length=250), nullable=True)
op.drop_constraint(u'uq_custom_attribute',
'custom_attribute_definitions', type_='unique')
op.create_unique_constraint('uq_custom_attribute',
'custom_attribute_definitions',
['definition_type', 'definition_id', 'title'])
def downgrade():
"""Remove the new id field and reintroduce some indexes/nullable issues."""
op.drop_constraint('uq_custom_attribute',
'custom_attribute_definitions', type_='unique')
op.create_unique_constraint(u'uq_custom_attribute',
'custom_attribute_definitions',
['title', 'definition_type'])
op.alter_column('custom_attribute_definitions', 'helptext',
existing_type=mysql.VARCHAR(length=250),
nullable=False)
op.drop_column('custom_attribute_definitions', 'multi_choice_mandatory')
op.drop_column('custom_attribute_definitions', 'definition_id')
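# Illustrative sketch (not part of this migration): under a stock Alembic setup
# these revisions would be applied or reverted from the command line, e.g.
#
#     alembic upgrade 39aec99639d5
#     alembic downgrade 9db8d85e82b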
| apache-2.0 |
dfalk/mezzanine-wiki | mezzanine_wiki/templatetags/mezawiki_tags.py | 1 | 2178 | from datetime import datetime
from django.contrib.auth.models import User
from django.db.models import Count
from mezzanine_wiki.models import WikiPage, WikiCategory
from mezzanine import template
from mezzanine.conf import settings
from mezzanine.utils.importing import import_dotted_path
from django.utils.safestring import mark_safe
from diff_match_patch import diff_match_patch
register = template.Library()
@register.filter
def html_diff(diff):
html = []
for (op, data) in diff:
text = (data.replace("&", "&amp;").replace("<", "&lt;")\
.replace(">", "&gt;").replace("\n", "<br />"),)
if op == diff_match_patch.DIFF_INSERT:
html.append("<span class=\"added\">%s</span>" % text)
elif op == diff_match_patch.DIFF_DELETE:
html.append("<span class=\"removed\">%s</del>" % text)
elif op == diff_match_patch.DIFF_EQUAL:
html.append("<span>%s</span>" % text)
return mark_safe("".join(html))
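# Illustrative usage sketch (not part of the original module): feeding a raw
# diff_match_patch diff through the filter above; the revision strings are
# made up.
#
#     dmp = diff_match_patch()
#     diffs = dmp.diff_main("old revision text", "new revision text")
#     dmp.diff_cleanupSemantic(diffs)
#     rendered = html_diff(diffs)   # spans tagged with added/removed classes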
@register.as_tag
def wiki_categories(*args):
"""
Put a list of categories for wiki pages into the template context.
"""
pages = WikiPage.objects.published()
categories = WikiCategory.objects.filter(wikipages__in=pages)
return list(categories.annotate(page_count=Count("wikipages")))
@register.as_tag
def wiki_authors(*args):
"""
Put a list of authors (users) for wiki pages into the template context.
"""
wiki_pages = WikiPage.objects.published()
authors = User.objects.filter(wikipages__in=wiki_pages)
return list(authors.annotate(post_count=Count("wikipages")))
@register.as_tag
def wiki_recent_pages(limit=5):
"""
Put a list of recently published wiki pages into the template context.
"""
return list(WikiPage.objects.published().order_by('-publish_date')[:limit])
@register.filter
def wikitext_filter(content):
"""
This template filter takes a string value and passes it through the
function specified by the WIKI_TEXT_FILTER setting.
"""
if settings.WIKI_TEXT_FILTER:
func = import_dotted_path(settings.WIKI_TEXT_FILTER)
else:
func = lambda s: s
return func(content)
| bsd-2-clause |
jazztpt/edx-platform | lms/djangoapps/django_comment_client/tests.py | 102 | 2213 | """
Tests of various permissions levels for the comment client
"""
import string # pylint: disable=deprecated-module
import random
from django.contrib.auth.models import User
from django.test import TestCase
from student.models import CourseEnrollment
from django_comment_client.permissions import has_permission
from django_comment_common.models import Role
class PermissionsTestCase(TestCase):
def random_str(self, length=15, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for x in range(length))
def setUp(self):
super(PermissionsTestCase, self).setUp()
self.course_id = "edX/toy/2012_Fall"
self.moderator_role = Role.objects.get_or_create(name="Moderator", course_id=self.course_id)[0]
self.student_role = Role.objects.get_or_create(name="Student", course_id=self.course_id)[0]
self.student = User.objects.create(username=self.random_str(),
password="123456", email="john@yahoo.com")
self.moderator = User.objects.create(username=self.random_str(),
password="123456", email="staff@edx.org")
self.moderator.is_staff = True
self.moderator.save()
self.student_enrollment = CourseEnrollment.enroll(self.student, self.course_id)
self.addCleanup(self.student_enrollment.delete)
self.moderator_enrollment = CourseEnrollment.enroll(self.moderator, self.course_id)
self.addCleanup(self.moderator_enrollment.delete)
# Do we need to have this in a cleanup? We shouldn't be deleting students, ever.
# self.student.delete()
# self.moderator.delete()
def testDefaultRoles(self):
self.assertTrue(self.student_role in self.student.roles.all())
self.assertTrue(self.moderator_role in self.moderator.roles.all())
def testPermission(self):
name = self.random_str()
self.moderator_role.add_permission(name)
self.assertTrue(has_permission(self.moderator, name, self.course_id))
self.student_role.add_permission(name)
self.assertTrue(has_permission(self.student, name, self.course_id))
| agpl-3.0 |
loopspace/microbit | mbhandler/mbhandler.py | 1 | 4316 | import serial
import serial.tools.list_ports
import threading
import warnings
import queue as q
class NoMicrobitError(Exception):
def __init__(self, message):
# Call the base class constructor with the parameters it needs
super(Exception, self).__init__(message)
BAUD = 115200
RUNNING = True
DEBUG = False
# Taken (and adapted) from https://github.com/ntoll/microrepl/blob/master/microrepl.py
def get_port():
ports = list(serial.tools.list_ports.comports())
for port, desc, opts in ports:
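# hwid strings typically look like 'USB VID:PID=0D28:0204 SER=...';
# VID 0x0D28 / PID 0x0204 is the mbed CDC interface a micro:bit exposes.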
if 'VID:PID=' in opts:
s = opts.find('VID:PID=') + 8
e = opts.find(' ',s)
vid, pid = opts[s:e].split(':')
vid, pid = int(vid, 16), int(pid, 16)
if vid == 0x0D28 and pid == 0x0204:
return port
raise NoMicrobitError("No microbits found")
def __default_worker():
port = get_port()
s = serial.Serial(port)
s.baudrate = BAUD
s.parity = serial.PARITY_NONE
s.databits = serial.EIGHTBITS
s.stopbits = serial.STOPBITS_ONE
state = {'a': False, 'b': False}
while True:
if not RUNNING:
break
try:
data = s.readline().decode("ascii").rstrip()
except:
continue
if data == 'None':
continue
try:
v = int(data.split(':')[0],16)
except:
continue
b = (v & 1 == 1)
v >>= 1
a = (v & 1 == 1)
v >>= 1
z = v & 255
v >>= 8
y = v & 255
v >>= 8
x = v & 255
if x > 127:
x -= 256
if y > 127:
y -= 256
if z > 127:
z -= 256
x *= -1
y *= -1
name = data.split(':')[1]
e = {
'name': name,
'accelerometer': {
'x': x,
'y': y,
'z': z,
},
'button_a': {
'pressed': a,
'down': a and not state['a'],
'up': not a and state['a']
},
'button_b': {
'pressed': b,
'down': b and not state['b'],
'up': not b and state['b']
}
}
state['a'] = a
state['b'] = b
if DEBUG:
print(e)
post(e)
s.close()
def __raw_worker():
port = get_port()
s = serial.Serial(port)
s.baudrate = BAUD
s.parity = serial.PARITY_NONE
s.databits = serial.EIGHTBITS
s.stopbits = serial.STOPBITS_ONE
while True:
if not RUNNING:
break
try:
data = s.readline().decode("ascii").rstrip()
except:
continue
if data == 'None':
continue
if DEBUG:
print(data)
post(data)
s.close()
def __pygame_init():
global pygame
import pygame
global post
global MICROBITEVENT
MICROBITEVENT = pygame.USEREVENT
pygame.USEREVENT += 1
post = __pygame_post
def __pygame_post(e):
if isinstance(e,str):
e = {'message': e}
ev = pygame.event.Event(MICROBITEVENT,**e)
try:
pygame.event.post(ev)
except: # what's the error here if the queue is full/non-existent?
pass
def __queue_init():
global post
global queue
queue = q.Queue()
post = __queue_post
def __queue_post(e):
try:
queue.put(e)
except q.Full:
pass
def init(**kwargs):
global worker
global DEBUG
method = "queue"
output = "default"
worker = __default_worker
if 'method' in kwargs:
method = kwargs['method']
if 'output' in kwargs:
output = kwargs['output']
if 'debug' in kwargs:
DEBUG = bool(kwargs['debug'])
if output == "raw":
worker = __raw_worker
if method == "pygame":
__pygame_init()
else:
__queue_init()
t = threading.Thread(target=worker)
t.daemon = True
t.start()
def quit():
global RUNNING
RUNNING = False
# Should we clean up after ourselves?
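# Illustrative usage sketch (not part of the original module), using the
# default queue-based configuration:
#
#     import mbhandler
#     mbhandler.init()                 # queue method, decoded event output
#     event = mbhandler.queue.get()    # blocks until the micro:bit posts data
#     print(event['accelerometer'], event['button_a']['pressed'])
#     mbhandler.quit()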
| mit |
rgom/Pydev | plugins/org.python.pydev.jython/Lib/email/mime/application.py | 414 | 1256 | # Copyright (C) 2001-2006 Python Software Foundation
# Author: Keith Dart
# Contact: email-sig@python.org
"""Class representing application/* type MIME documents."""
__all__ = ["MIMEApplication"]
from email import encoders
from email.mime.nonmultipart import MIMENonMultipart
class MIMEApplication(MIMENonMultipart):
"""Class for generating application/* MIME documents."""
def __init__(self, _data, _subtype='octet-stream',
_encoder=encoders.encode_base64, **_params):
"""Create an application/* type MIME document.
_data is a string containing the raw application data.
_subtype is the MIME content type subtype, defaulting to
'octet-stream'.
_encoder is a function which will perform the actual encoding for
transport of the application data, defaulting to base64 encoding.
Any additional keyword arguments are passed to the base class
constructor, which turns them into parameters on the Content-Type
header.
"""
if _subtype is None:
raise TypeError('Invalid application MIME subtype')
MIMENonMultipart.__init__(self, 'application', _subtype, **_params)
self.set_payload(_data)
_encoder(self)
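# Illustrative usage sketch (file name and payload are made up):
#
#     from email.mime.application import MIMEApplication
#     with open('report.pdf', 'rb') as fp:
#         part = MIMEApplication(fp.read(), _subtype='pdf')
#     part.add_header('Content-Disposition', 'attachment', filename='report.pdf')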
| epl-1.0 |
SonicHedgehog/node-gyp | gyp/pylib/gyp/xcode_ninja.py | 1789 | 10585 | # Copyright (c) 2014 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Xcode-ninja wrapper project file generator.
This updates the data structures passed to the Xcode gyp generator to build
with ninja instead. The Xcode project itself is transformed into a list of
executable targets, each with a build step to build with ninja, and a target
with every source and resource file. This appears to sidestep some of the
major performance headaches experienced using complex projects and large numbers
of targets within Xcode.
"""
import errno
import gyp.generator.ninja
import os
import re
import xml.sax.saxutils
def _WriteWorkspace(main_gyp, sources_gyp, params):
""" Create a workspace to wrap main and sources gyp paths. """
(build_file_root, build_file_ext) = os.path.splitext(main_gyp)
workspace_path = build_file_root + '.xcworkspace'
options = params['options']
if options.generator_output:
workspace_path = os.path.join(options.generator_output, workspace_path)
try:
os.makedirs(workspace_path)
except OSError, e:
if e.errno != errno.EEXIST:
raise
output_string = '<?xml version="1.0" encoding="UTF-8"?>\n' + \
'<Workspace version = "1.0">\n'
for gyp_name in [main_gyp, sources_gyp]:
name = os.path.splitext(os.path.basename(gyp_name))[0] + '.xcodeproj'
name = xml.sax.saxutils.quoteattr("group:" + name)
output_string += ' <FileRef location = %s></FileRef>\n' % name
output_string += '</Workspace>\n'
workspace_file = os.path.join(workspace_path, "contents.xcworkspacedata")
try:
with open(workspace_file, 'r') as input_file:
input_string = input_file.read()
if input_string == output_string:
return
except IOError:
# Ignore errors if the file doesn't exist.
pass
with open(workspace_file, 'w') as output_file:
output_file.write(output_string)
def _TargetFromSpec(old_spec, params):
""" Create fake target for xcode-ninja wrapper. """
# Determine ninja top level build dir (e.g. /path/to/out).
ninja_toplevel = None
jobs = 0
if params:
options = params['options']
ninja_toplevel = \
os.path.join(options.toplevel_dir,
gyp.generator.ninja.ComputeOutputDir(params))
jobs = params.get('generator_flags', {}).get('xcode_ninja_jobs', 0)
target_name = old_spec.get('target_name')
product_name = old_spec.get('product_name', target_name)
product_extension = old_spec.get('product_extension')
ninja_target = {}
ninja_target['target_name'] = target_name
ninja_target['product_name'] = product_name
if product_extension:
ninja_target['product_extension'] = product_extension
ninja_target['toolset'] = old_spec.get('toolset')
ninja_target['default_configuration'] = old_spec.get('default_configuration')
ninja_target['configurations'] = {}
# Tell Xcode to look in |ninja_toplevel| for build products.
new_xcode_settings = {}
if ninja_toplevel:
new_xcode_settings['CONFIGURATION_BUILD_DIR'] = \
"%s/$(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME)" % ninja_toplevel
if 'configurations' in old_spec:
for config in old_spec['configurations'].iterkeys():
old_xcode_settings = \
old_spec['configurations'][config].get('xcode_settings', {})
if 'IPHONEOS_DEPLOYMENT_TARGET' in old_xcode_settings:
new_xcode_settings['CODE_SIGNING_REQUIRED'] = "NO"
new_xcode_settings['IPHONEOS_DEPLOYMENT_TARGET'] = \
old_xcode_settings['IPHONEOS_DEPLOYMENT_TARGET']
ninja_target['configurations'][config] = {}
ninja_target['configurations'][config]['xcode_settings'] = \
new_xcode_settings
ninja_target['mac_bundle'] = old_spec.get('mac_bundle', 0)
ninja_target['ios_app_extension'] = old_spec.get('ios_app_extension', 0)
ninja_target['ios_watchkit_extension'] = \
old_spec.get('ios_watchkit_extension', 0)
ninja_target['ios_watchkit_app'] = old_spec.get('ios_watchkit_app', 0)
ninja_target['type'] = old_spec['type']
if ninja_toplevel:
ninja_target['actions'] = [
{
'action_name': 'Compile and copy %s via ninja' % target_name,
'inputs': [],
'outputs': [],
'action': [
'env',
'PATH=%s' % os.environ['PATH'],
'ninja',
'-C',
new_xcode_settings['CONFIGURATION_BUILD_DIR'],
target_name,
],
'message': 'Compile and copy %s via ninja' % target_name,
},
]
if jobs > 0:
ninja_target['actions'][0]['action'].extend(('-j', jobs))
return ninja_target
def IsValidTargetForWrapper(target_extras, executable_target_pattern, spec):
"""Limit targets for Xcode wrapper.
Xcode sometimes performs poorly with too many targets, so only include
proper executable targets, with filters to customize.
Arguments:
target_extras: Regular expression to always add, matching any target.
executable_target_pattern: Regular expression limiting executable targets.
spec: Specifications for target.
"""
target_name = spec.get('target_name')
# Always include targets matching target_extras.
if target_extras is not None and re.search(target_extras, target_name):
return True
# Otherwise just show executable targets.
if spec.get('type', '') == 'executable' and \
spec.get('product_extension', '') != 'bundle':
# If there is a filter and the target does not match, exclude the target.
if executable_target_pattern is not None:
if not re.search(executable_target_pattern, target_name):
return False
return True
return False
def CreateWrapper(target_list, target_dicts, data, params):
"""Initialize targets for the ninja wrapper.
This sets up the necessary variables in the targets to generate Xcode projects
that use ninja as an external builder.
Arguments:
target_list: List of target pairs: 'base/base.gyp:base'.
target_dicts: Dict of target properties keyed on target pair.
data: Dict of flattened build files keyed on gyp path.
params: Dict of global options for gyp.
"""
orig_gyp = params['build_files'][0]
for gyp_name, gyp_dict in data.iteritems():
if gyp_name == orig_gyp:
depth = gyp_dict['_DEPTH']
# Check for custom main gyp name, otherwise use the default CHROMIUM_GYP_FILE
# and prepend .ninja before the .gyp extension.
generator_flags = params.get('generator_flags', {})
main_gyp = generator_flags.get('xcode_ninja_main_gyp', None)
if main_gyp is None:
(build_file_root, build_file_ext) = os.path.splitext(orig_gyp)
main_gyp = build_file_root + ".ninja" + build_file_ext
# Create new |target_list|, |target_dicts| and |data| data structures.
new_target_list = []
new_target_dicts = {}
new_data = {}
# Set base keys needed for |data|.
new_data[main_gyp] = {}
new_data[main_gyp]['included_files'] = []
new_data[main_gyp]['targets'] = []
new_data[main_gyp]['xcode_settings'] = \
data[orig_gyp].get('xcode_settings', {})
# Normally the xcode-ninja generator includes only valid executable targets.
# If |xcode_ninja_executable_target_pattern| is set, that list is reduced to
# executable targets that match the pattern. (Default all)
executable_target_pattern = \
generator_flags.get('xcode_ninja_executable_target_pattern', None)
# For including other non-executable targets, add the matching target name
# to the |xcode_ninja_target_pattern| regular expression. (Default none)
target_extras = generator_flags.get('xcode_ninja_target_pattern', None)
for old_qualified_target in target_list:
spec = target_dicts[old_qualified_target]
if IsValidTargetForWrapper(target_extras, executable_target_pattern, spec):
# Add to new_target_list.
target_name = spec.get('target_name')
new_target_name = '%s:%s#target' % (main_gyp, target_name)
new_target_list.append(new_target_name)
# Add to new_target_dicts.
new_target_dicts[new_target_name] = _TargetFromSpec(spec, params)
# Add to new_data.
for old_target in data[old_qualified_target.split(':')[0]]['targets']:
if old_target['target_name'] == target_name:
new_data_target = {}
new_data_target['target_name'] = old_target['target_name']
new_data_target['toolset'] = old_target['toolset']
new_data[main_gyp]['targets'].append(new_data_target)
# Create sources target.
sources_target_name = 'sources_for_indexing'
sources_target = _TargetFromSpec(
{ 'target_name' : sources_target_name,
'toolset': 'target',
'default_configuration': 'Default',
'mac_bundle': '0',
'type': 'executable'
}, None)
# Tell Xcode to look everywhere for headers.
sources_target['configurations'] = {'Default': { 'include_dirs': [ depth ] } }
sources = []
for target, target_dict in target_dicts.iteritems():
base = os.path.dirname(target)
files = target_dict.get('sources', []) + \
target_dict.get('mac_bundle_resources', [])
for action in target_dict.get('actions', []):
files.extend(action.get('inputs', []))
# Remove files starting with $. These are mostly intermediate files for the
# build system.
files = [ file for file in files if not file.startswith('$')]
# Make sources relative to root build file.
relative_path = os.path.dirname(main_gyp)
sources += [ os.path.relpath(os.path.join(base, file), relative_path)
for file in files ]
sources_target['sources'] = sorted(set(sources))
# Put sources_to_index in its own gyp.
sources_gyp = \
os.path.join(os.path.dirname(main_gyp), sources_target_name + ".gyp")
fully_qualified_target_name = \
'%s:%s#target' % (sources_gyp, sources_target_name)
# Add to new_target_list, new_target_dicts and new_data.
new_target_list.append(fully_qualified_target_name)
new_target_dicts[fully_qualified_target_name] = sources_target
new_data_target = {}
new_data_target['target_name'] = sources_target['target_name']
new_data_target['_DEPTH'] = depth
new_data_target['toolset'] = "target"
new_data[sources_gyp] = {}
new_data[sources_gyp]['targets'] = []
new_data[sources_gyp]['included_files'] = []
new_data[sources_gyp]['xcode_settings'] = \
data[orig_gyp].get('xcode_settings', {})
new_data[sources_gyp]['targets'].append(new_data_target)
# Write workspace to file.
_WriteWorkspace(main_gyp, sources_gyp, params)
return (new_target_list, new_target_dicts, new_data)
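# Illustrative invocation sketch (not part of this module): the wrapper is
# normally selected through gyp environment variables and generator flags, e.g.
#
#     GYP_GENERATORS=xcode-ninja \
#     GYP_GENERATOR_FLAGS='xcode_ninja_target_pattern=^my_tests$' \
#     gyp --depth=. my_project.gyp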
| mit |
hainn8x/gnuradio | gr-trellis/python/trellis/fsm_utils.py | 45 | 7046 | #!/usr/bin/env python
#
# Copyright 2004 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
import re
import math
import sys
import operator
import numpy
#from gnuradio import trellis
try:
import scipy.linalg
except ImportError:
print "Error: Program requires scipy (see: www.scipy.org)."
sys.exit(1)
######################################################################
# Decimal to any base conversion.
# Convert 'num' to a list of 'l' numbers representing 'num'
# to base 'base' (most significant symbol first).
######################################################################
def dec2base(num,base,l):
s=range(l)
n=num
for i in range(l):
s[l-i-1]=n%base
n=int(n/base)
if n!=0:
print 'Number ', num, ' requires more than ', l, 'digits.'
return s
######################################################################
# Conversion from any base to decimal.
# Convert a list 's' of symbols to a decimal number
# (most significant symbol first)
######################################################################
def base2dec(s,base):
num=0
for i in range(len(s)):
num=num*base+s[i]
return num
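# Quick round-trip illustration (not in the original file):
#
#     >>> dec2base(11, 2, 4)
#     [1, 0, 1, 1]
#     >>> base2dec([1, 0, 1, 1], 2)
#     11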
######################################################################
# Automatically generate the lookup table that maps the FSM outputs
# to channel inputs corresponding to a channel 'channel' and a modulation
# 'mod'. Optional normalization of channel to unit energy.
# This table is used by the 'metrics' block to translate
# channel outputs to metrics for use with the Viterbi algorithm.
# Limitations: currently supports only one-dimensional modulations.
######################################################################
def make_isi_lookup(mod,channel,normalize):
dim=mod[0]
constellation = mod[1]
if normalize:
p = 0
for i in range(len(channel)):
p = p + channel[i]**2
for i in range(len(channel)):
channel[i] = channel[i]/math.sqrt(p)
lookup=range(len(constellation)**len(channel))
for o in range(len(constellation)**len(channel)):
ss=dec2base(o,len(constellation),len(channel))
ll=0
for i in range(len(channel)):
ll=ll+constellation[ss[i]]*channel[i]
lookup[o]=ll
return (1,lookup)
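# Worked example (not in the original file): 2-PAM through a two-tap channel
# [1.0, 0.5] without normalization yields one noiseless output per symbol pair:
#
#     >>> make_isi_lookup((1, [-1, 1]), [1.0, 0.5], False)
#     (1, [-1.5, -0.5, 0.5, 1.5])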
######################################################################
# Automatically generate the signals appropriate for CPM
# decomposition.
# This decomposition is based on the paper by B. Rimoldi
# "A decomposition approach to CPM", IEEE Trans. Info Theory, March 1988
# See also my own notes at http://www.eecs.umich.edu/~anastas/docs/cpm.pdf
######################################################################
def make_cpm_signals(K,P,M,L,q,frac):
Q=numpy.size(q)/L
h=(1.0*K)/P
f0=-h*(M-1)/2
dt=0.0; # maybe start at t=0.5
t=(dt+numpy.arange(0,Q))/Q
qq=numpy.zeros(Q)
for m in range(L):
qq=qq + q[m*Q:m*Q+Q]
w=math.pi*h*(M-1)*t-2*math.pi*h*(M-1)*qq+math.pi*h*(L-1)*(M-1)
X=(M**L)*P
PSI=numpy.empty((X,Q))
for x in range(X):
xv=dec2base(x/P,M,L)
xv=numpy.append(xv, x%P)
qq1=numpy.zeros(Q)
for m in range(L):
qq1=qq1+xv[m]*q[m*Q:m*Q+Q]
psi=2*math.pi*h*xv[-1]+4*math.pi*h*qq1+w
#print psi
PSI[x]=psi
PSI = numpy.transpose(PSI)
SS=numpy.exp(1j*PSI) # contains all signals as columns
#print SS
# Now we need to orthogonalize the signals
F = scipy.linalg.orth(SS) # find an orthonormal basis for SS
#print numpy.dot(numpy.transpose(F.conjugate()),F) # check for orthonormality
S = numpy.dot(numpy.transpose(F.conjugate()),SS)
#print F
#print S
# We only want to keep those dimensions that contain most
# of the energy of the overall constellation (eg, frac=0.9 ==> 90%)
# evaluate mean energy in each dimension
E=numpy.sum(numpy.absolute(S)**2,axis=1)/Q
E=E/numpy.sum(E)
#print E
Es = -numpy.sort(-E)
Esi = numpy.argsort(-E)
#print Es
#print Esi
Ecum=numpy.cumsum(Es)
#print Ecum
v0=numpy.searchsorted(Ecum,frac)
N = v0+1
#print v0
#print Esi[0:v0+1]
Ff=numpy.transpose(numpy.transpose(F)[Esi[0:v0+1]])
#print Ff
Sf = S[Esi[0:v0+1]]
#print Sf
return (f0,SS,S,F,Sf,Ff,N)
#return f0
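# Illustrative call sketch (not in the original file): binary CPM with
# modulation index h = K/P = 1/2, a one-symbol (L=1) linear phase pulse
# sampled at Q=8 points, keeping dimensions that carry 90% of the energy:
#
#     import numpy
#     Q = 8
#     q = numpy.arange(1, Q + 1) / (2.0 * Q)   # q(t) ramps from 0 to 1/2
#     f0, SS, S, F, Sf, Ff, N = make_cpm_signals(1, 2, 2, 1, q, 0.9)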
######################################################################
# A list of common modulations.
# Format: (dimensionality,constellation)
######################################################################
pam2 = (1,[-1, 1])
pam4 = (1,[-3, -1, 3, 1]) # includes Gray mapping
pam8 = (1,[-7, -5, -3, -1, 1, 3, 5, 7])
psk4=(2,[1, 0, \
0, 1, \
0, -1,\
-1, 0]) # includes Gray mapping
psk8=(2,[math.cos(2*math.pi*0/8), math.sin(2*math.pi*0/8), \
math.cos(2*math.pi*1/8), math.sin(2*math.pi*1/8), \
math.cos(2*math.pi*2/8), math.sin(2*math.pi*2/8), \
math.cos(2*math.pi*3/8), math.sin(2*math.pi*3/8), \
math.cos(2*math.pi*4/8), math.sin(2*math.pi*4/8), \
math.cos(2*math.pi*5/8), math.sin(2*math.pi*5/8), \
math.cos(2*math.pi*6/8), math.sin(2*math.pi*6/8), \
math.cos(2*math.pi*7/8), math.sin(2*math.pi*7/8)])
psk2x3 = (3,[-1,-1,-1, \
-1,-1,1, \
-1,1,-1, \
-1,1,1, \
1,-1,-1, \
1,-1,1, \
1,1,-1, \
1,1,1])
psk2x4 = (4,[-1,-1,-1,-1, \
-1,-1,-1,1, \
-1,-1,1,-1, \
-1,-1,1,1, \
-1,1,-1,-1, \
-1,1,-1,1, \
-1,1,1,-1, \
-1,1,1,1, \
1,-1,-1,-1, \
1,-1,-1,1, \
1,-1,1,-1, \
1,-1,1,1, \
1,1,-1,-1, \
1,1,-1,1, \
1,1,1,-1, \
1,1,1,1])
orth2 = (2,[1, 0, \
0, 1])
orth4=(4,[1, 0, 0, 0, \
0, 1, 0, 0, \
0, 0, 1, 0, \
0, 0, 0, 1])
######################################################################
# A list of channels to be tested
######################################################################
# C test channel (J. Proakis, Digital Communications, McGraw-Hill Inc., 2001)
c_channel = [0.227, 0.460, 0.688, 0.460, 0.227]
| gpl-3.0 |
Cinntax/home-assistant | tests/components/sun/test_init.py | 4 | 5881 | """The tests for the Sun component."""
from datetime import datetime, timedelta
from unittest.mock import patch
from pytest import mark
import homeassistant.components.sun as sun
import homeassistant.core as ha
import homeassistant.util.dt as dt_util
from homeassistant.const import EVENT_STATE_CHANGED
from homeassistant.setup import async_setup_component
async def test_setting_rising(hass):
"""Test retrieving sun setting and rising."""
utc_now = datetime(2016, 11, 1, 8, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.helpers.condition.dt_util.utcnow", return_value=utc_now):
await async_setup_component(
hass, sun.DOMAIN, {sun.DOMAIN: {sun.CONF_ELEVATION: 0}}
)
await hass.async_block_till_done()
state = hass.states.get(sun.ENTITY_ID)
from astral import Astral
astral = Astral()
utc_today = utc_now.date()
latitude = hass.config.latitude
longitude = hass.config.longitude
mod = -1
while True:
next_dawn = astral.dawn_utc(
utc_today + timedelta(days=mod), latitude, longitude
)
if next_dawn > utc_now:
break
mod += 1
mod = -1
while True:
next_dusk = astral.dusk_utc(
utc_today + timedelta(days=mod), latitude, longitude
)
if next_dusk > utc_now:
break
mod += 1
mod = -1
while True:
next_midnight = astral.solar_midnight_utc(
utc_today + timedelta(days=mod), longitude
)
if next_midnight > utc_now:
break
mod += 1
mod = -1
while True:
next_noon = astral.solar_noon_utc(utc_today + timedelta(days=mod), longitude)
if next_noon > utc_now:
break
mod += 1
mod = -1
while True:
next_rising = astral.sunrise_utc(
utc_today + timedelta(days=mod), latitude, longitude
)
if next_rising > utc_now:
break
mod += 1
mod = -1
while True:
next_setting = astral.sunset_utc(
utc_today + timedelta(days=mod), latitude, longitude
)
if next_setting > utc_now:
break
mod += 1
assert next_dawn == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_DAWN]
)
assert next_dusk == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_DUSK]
)
assert next_midnight == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_MIDNIGHT]
)
assert next_noon == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_NOON]
)
assert next_rising == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_RISING]
)
assert next_setting == dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_SETTING]
)
async def test_state_change(hass):
"""Test if the state changes at next setting/rising."""
now = datetime(2016, 6, 1, 8, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.helpers.condition.dt_util.utcnow", return_value=now):
await async_setup_component(
hass, sun.DOMAIN, {sun.DOMAIN: {sun.CONF_ELEVATION: 0}}
)
await hass.async_block_till_done()
test_time = dt_util.parse_datetime(
hass.states.get(sun.ENTITY_ID).attributes[sun.STATE_ATTR_NEXT_RISING]
)
assert test_time is not None
assert sun.STATE_BELOW_HORIZON == hass.states.get(sun.ENTITY_ID).state
hass.bus.async_fire(
ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: test_time + timedelta(seconds=5)}
)
await hass.async_block_till_done()
assert sun.STATE_ABOVE_HORIZON == hass.states.get(sun.ENTITY_ID).state
with patch("homeassistant.helpers.condition.dt_util.utcnow", return_value=now):
await hass.config.async_update(longitude=hass.config.longitude + 90)
await hass.async_block_till_done()
assert sun.STATE_ABOVE_HORIZON == hass.states.get(sun.ENTITY_ID).state
async def test_norway_in_june(hass):
"""Test location in Norway where the sun doesn't set in summer."""
hass.config.latitude = 69.6
hass.config.longitude = 18.8
june = datetime(2016, 6, 1, tzinfo=dt_util.UTC)
with patch("homeassistant.helpers.condition.dt_util.utcnow", return_value=june):
assert await async_setup_component(
hass, sun.DOMAIN, {sun.DOMAIN: {sun.CONF_ELEVATION: 0}}
)
state = hass.states.get(sun.ENTITY_ID)
assert state is not None
assert dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_RISING]
) == datetime(2016, 7, 25, 23, 23, 39, tzinfo=dt_util.UTC)
assert dt_util.parse_datetime(
state.attributes[sun.STATE_ATTR_NEXT_SETTING]
) == datetime(2016, 7, 26, 22, 19, 1, tzinfo=dt_util.UTC)
assert state.state == sun.STATE_ABOVE_HORIZON
@mark.skip
async def test_state_change_count(hass):
"""Count the number of state change events in a location."""
# Skipped because it's a bit slow. Has been validated with
# multiple latitudes and dates
hass.config.latitude = 10
hass.config.longitude = 0
now = datetime(2016, 6, 1, tzinfo=dt_util.UTC)
with patch("homeassistant.helpers.condition.dt_util.utcnow", return_value=now):
assert await async_setup_component(
hass, sun.DOMAIN, {sun.DOMAIN: {sun.CONF_ELEVATION: 0}}
)
events = []
@ha.callback
def state_change_listener(event):
if event.data.get("entity_id") == "sun.sun":
events.append(event)
hass.bus.async_listen(EVENT_STATE_CHANGED, state_change_listener)
await hass.async_block_till_done()
for _ in range(24 * 60 * 60):
now += timedelta(seconds=1)
hass.bus.async_fire(ha.EVENT_TIME_CHANGED, {ha.ATTR_NOW: now})
await hass.async_block_till_done()
assert len(events) < 721
| apache-2.0 |
up2wing/fox-kernel-comment | linux-3.10.89/tools/perf/scripts/python/sctop.py | 11180 | 1924 | # system call top
# (c) 2010, Tom Zanussi <tzanussi@gmail.com>
# Licensed under the terms of the GNU GPL License version 2
#
# Periodically displays system-wide system call totals, broken down by
# syscall. If a [comm] arg is specified, only syscalls called by
# [comm] are displayed. If an [interval] arg is specified, the display
# will be refreshed every [interval] seconds. The default interval is
# 3 seconds.
import os, sys, thread, time
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
usage = "perf script -s sctop.py [comm] [interval]\n";
for_comm = None
default_interval = 3
interval = default_interval
if len(sys.argv) > 3:
sys.exit(usage)
if len(sys.argv) > 2:
for_comm = sys.argv[1]
interval = int(sys.argv[2])
elif len(sys.argv) > 1:
try:
interval = int(sys.argv[1])
except ValueError:
for_comm = sys.argv[1]
interval = default_interval
syscalls = autodict()
def trace_begin():
thread.start_new_thread(print_syscall_totals, (interval,))
pass
def raw_syscalls__sys_enter(event_name, context, common_cpu,
common_secs, common_nsecs, common_pid, common_comm,
id, args):
if for_comm is not None:
if common_comm != for_comm:
return
try:
syscalls[id] += 1
except TypeError:
syscalls[id] = 1
def print_syscall_totals(interval):
while 1:
clear_term()
if for_comm is not None:
print "\nsyscall events for %s:\n\n" % (for_comm),
else:
print "\nsyscall events:\n\n",
print "%-40s %10s\n" % ("event", "count"),
print "%-40s %10s\n" % ("----------------------------------------", \
"----------"),
for id, val in sorted(syscalls.iteritems(), key = lambda(k, v): (v, k), \
reverse = True):
try:
print "%-40s %10d\n" % (syscall_name(id), val),
except TypeError:
pass
syscalls.clear()
time.sleep(interval)
| gpl-2.0 |
robhudson/kuma | vendor/packages/translate/lang/test_af.py | 26 | 1732 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from translate.lang import af, factory
def test_sentences():
"""Tests basic functionality of sentence segmentation."""
language = factory.getlanguage('af')
sentences = language.sentences(u"Normal case. Nothing interesting.")
assert sentences == [u"Normal case.", "Nothing interesting."]
sentences = language.sentences(u"Wat? 'n Fout?")
assert sentences == [u"Wat?", "'n Fout?"]
sentences = language.sentences(u"Dit sal a.g.v. 'n fout gebeur.")
assert sentences == [u"Dit sal a.g.v. 'n fout gebeur."]
sentences = language.sentences(u"Weet nie hoe om lêer '%s' te open nie.\nMiskien is dit 'n tipe beeld wat nog nie ondersteun word nie.\n\nKies liewer 'n ander prent.")
assert len(sentences) == 3
def test_capsstart():
"""Tests that the indefinite article ('n) doesn't confuse startcaps()."""
language = factory.getlanguage('af')
assert not language.capsstart("")
assert language.capsstart("Koeie kraam koeie")
assert language.capsstart("'Koeie' kraam koeie")
assert not language.capsstart("koeie kraam koeie")
assert language.capsstart("\n\nKoeie kraam koeie")
assert language.capsstart("'n Koei kraam koeie")
assert language.capsstart("'n 'Koei' kraam koeie")
assert not language.capsstart("'n koei kraam koeie")
assert language.capsstart("\n\n'n Koei kraam koeie")
def test_transliterate_cyrillic():
def trans(text):
print(("Orig: %s" % text).encode("utf-8"))
result = af.tranliterate_cyrillic(text)
print(("Trans: %s" % result).encode("utf-8"))
return result
assert trans(u"Борис Николаевич Ельцин") == u"Boris Nikolajewitj Jeltsin"
| mpl-2.0 |
dpac-vlsi/SynchroTrace | tests/configs/tsunami-o3.py | 14 | 3193 | # Copyright (c) 2006-2007 The Regents of The University of Michigan
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Steve Reinhardt
import m5
from m5.objects import *
m5.util.addToPath('../configs/common')
import FSConfig
# --------------------
# Base L1 Cache
# --------------------
class L1(BaseCache):
latency = '1ns'
block_size = 64
mshrs = 4
tgts_per_mshr = 20
is_top_level = True
# ----------------------
# Base L2 Cache
# ----------------------
class L2(BaseCache):
block_size = 64
latency = '10ns'
mshrs = 92
tgts_per_mshr = 16
write_buffers = 8
# ---------------------
# I/O Cache
# ---------------------
class IOCache(BaseCache):
assoc = 8
block_size = 64
latency = '50ns'
mshrs = 20
size = '1kB'
tgts_per_mshr = 12
addr_ranges = [AddrRange(0, size='8GB')]
forward_snoops = False
is_top_level = True
#cpu
cpu = DerivO3CPU(cpu_id=0)
#the system
system = FSConfig.makeLinuxAlphaSystem('timing')
system.cpu = cpu
#create the l1/l2 bus
system.toL2Bus = CoherentBus()
system.iocache = IOCache()
system.iocache.cpu_side = system.iobus.master
system.iocache.mem_side = system.membus.slave
#connect up the l2 cache
system.l2c = L2(size='4MB', assoc=8)
system.l2c.cpu_side = system.toL2Bus.master
system.l2c.mem_side = system.membus.slave
#connect up the cpu and l1s
cpu.addPrivateSplitL1Caches(L1(size = '32kB', assoc = 1),
L1(size = '32kB', assoc = 4))
# create the interrupt controller
cpu.createInterruptController()
# connect cpu level-1 caches to shared level-2 cache
cpu.connectAllPorts(system.toL2Bus, system.membus)
cpu.clock = '2GHz'
root = Root(full_system=True, system=system)
m5.ticks.setGlobalFrequency('1THz')
| bsd-3-clause |
CarlFK/wafer | wafer/management/commands/wafer_speaker_tickets.py | 1 | 1606 | import sys
import csv
from optparse import make_option
from django.core.management.base import BaseCommand
from django.contrib.auth import get_user_model
from wafer.talks.models import ACCEPTED
class Command(BaseCommand):
help = ("List speakers and associated tickets. By default, only lists"
" speakers for accepted talk, but this can be overriden by"
" the --all option")
option_list = BaseCommand.option_list + tuple([
make_option('--all', action="store_true", default=False,
help='List speakers and tickets (for all talks)'),
])
def _speaker_tickets(self, options):
people = get_user_model().objects.filter(
talks__isnull=False).distinct()
csv_file = csv.writer(sys.stdout)
for person in people:
# We query talks to filter out the speakers from ordinary
# accounts
if options['all']:
titles = [x.title for x in person.talks.all()]
else:
titles = [x.title for x in
person.talks.filter(status=ACCEPTED)]
if not titles:
continue
tickets = person.ticket.all()
if tickets:
ticket = u'%d' % tickets[0].barcode
else:
ticket = u'NO TICKET PURCHASED'
row = [x.encode("utf-8") for x in (person.get_full_name(),
person.email,
ticket)]
csv_file.writerow(row)
def handle(self, *args, **options):
self._speaker_tickets(options)
| isc |
matthew-tucker/mne-python | mne/tests/test_source_space.py | 9 | 23354 | from __future__ import print_function
import os
import os.path as op
from nose.tools import assert_true, assert_raises
from nose.plugins.skip import SkipTest
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose, assert_equal
import warnings
from mne.datasets import testing
from mne import (read_source_spaces, vertex_to_mni, write_source_spaces,
setup_source_space, setup_volume_source_space,
add_source_space_distances, read_bem_surfaces)
from mne.utils import (_TempDir, requires_fs_or_nibabel, requires_nibabel,
requires_freesurfer, run_subprocess,
requires_mne, requires_scipy_version,
run_tests_if_main, slow_test)
from mne.surface import _accumulate_normals, _triangle_neighbors
from mne.source_space import _get_mgz_header
from mne.externals.six.moves import zip
from mne.source_space import (get_volume_labels_from_aseg, SourceSpaces,
_compare_source_spaces)
from mne.io.constants import FIFF
warnings.simplefilter('always')
data_path = testing.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
fname_mri = op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz')
fname = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')
fname_vol = op.join(subjects_dir, 'sample', 'bem',
'sample-volume-7mm-src.fif')
fname_bem = op.join(data_path, 'subjects', 'sample', 'bem',
'sample-1280-bem.fif')
base_dir = op.join(op.dirname(__file__), '..', 'io', 'tests', 'data')
fname_small = op.join(base_dir, 'small-src.fif.gz')
@testing.requires_testing_data
@requires_nibabel(vox2ras_tkr=True)
def test_mgz_header():
"""Test MGZ header reading"""
import nibabel as nib
header = _get_mgz_header(fname_mri)
mri_hdr = nib.load(fname_mri).get_header()
assert_allclose(mri_hdr.get_data_shape(), header['dims'])
assert_allclose(mri_hdr.get_vox2ras_tkr(), header['vox2ras_tkr'])
assert_allclose(mri_hdr.get_ras2vox(), header['ras2vox'])
@requires_scipy_version('0.11')
def test_add_patch_info():
"""Test adding patch info to source space"""
# let's setup a small source space
src = read_source_spaces(fname_small)
src_new = read_source_spaces(fname_small)
for s in src_new:
s['nearest'] = None
s['nearest_dist'] = None
s['pinfo'] = None
# test that no patch info is added for small dist_limit
try:
add_source_space_distances(src_new, dist_limit=0.00001)
except RuntimeError: # what we throw when scipy version is wrong
pass
else:
assert_true(all(s['nearest'] is None for s in src_new))
assert_true(all(s['nearest_dist'] is None for s in src_new))
assert_true(all(s['pinfo'] is None for s in src_new))
# now let's use one that works
add_source_space_distances(src_new)
for s1, s2 in zip(src, src_new):
assert_array_equal(s1['nearest'], s2['nearest'])
assert_allclose(s1['nearest_dist'], s2['nearest_dist'], atol=1e-7)
assert_equal(len(s1['pinfo']), len(s2['pinfo']))
for p1, p2 in zip(s1['pinfo'], s2['pinfo']):
assert_array_equal(p1, p2)
@testing.requires_testing_data
@requires_scipy_version('0.11')
def test_add_source_space_distances_limited():
"""Test adding distances to source space with a dist_limit"""
tempdir = _TempDir()
src = read_source_spaces(fname)
src_new = read_source_spaces(fname)
del src_new[0]['dist']
del src_new[1]['dist']
n_do = 200 # limit this for speed
src_new[0]['vertno'] = src_new[0]['vertno'][:n_do].copy()
src_new[1]['vertno'] = src_new[1]['vertno'][:n_do].copy()
out_name = op.join(tempdir, 'temp-src.fif')
try:
add_source_space_distances(src_new, dist_limit=0.007)
except RuntimeError: # what we throw when scipy version is wrong
raise SkipTest('dist_limit requires scipy > 0.13')
write_source_spaces(out_name, src_new)
src_new = read_source_spaces(out_name)
for so, sn in zip(src, src_new):
assert_array_equal(so['dist_limit'], np.array([-0.007], np.float32))
assert_array_equal(sn['dist_limit'], np.array([0.007], np.float32))
do = so['dist']
dn = sn['dist']
# clean out distances > 0.007 in C code
do.data[do.data > 0.007] = 0
do.eliminate_zeros()
# make sure we have some comparable distances
assert_true(np.sum(do.data < 0.007) > 400)
# do comparison over the region computed
d = (do - dn)[:sn['vertno'][n_do - 1]][:, :sn['vertno'][n_do - 1]]
assert_allclose(np.zeros_like(d.data), d.data, rtol=0, atol=1e-6)
@slow_test
@testing.requires_testing_data
@requires_scipy_version('0.11')
def test_add_source_space_distances():
"""Test adding distances to source space"""
tempdir = _TempDir()
src = read_source_spaces(fname)
src_new = read_source_spaces(fname)
del src_new[0]['dist']
del src_new[1]['dist']
n_do = 20 # limit this for speed
src_new[0]['vertno'] = src_new[0]['vertno'][:n_do].copy()
src_new[1]['vertno'] = src_new[1]['vertno'][:n_do].copy()
out_name = op.join(tempdir, 'temp-src.fif')
add_source_space_distances(src_new)
write_source_spaces(out_name, src_new)
src_new = read_source_spaces(out_name)
# iterate over both hemispheres
for so, sn in zip(src, src_new):
v = so['vertno'][:n_do]
assert_array_equal(so['dist_limit'], np.array([-0.007], np.float32))
assert_array_equal(sn['dist_limit'], np.array([np.inf], np.float32))
do = so['dist']
dn = sn['dist']
# clean out distances > 0.007 in C code (some residual), and Python
ds = list()
for d in [do, dn]:
d.data[d.data > 0.007] = 0
d = d[v][:, v]
d.eliminate_zeros()
ds.append(d)
# make sure we actually calculated some comparable distances
assert_true(np.sum(ds[0].data < 0.007) > 10)
# do comparison
d = ds[0] - ds[1]
assert_allclose(np.zeros_like(d.data), d.data, rtol=0, atol=1e-9)
@testing.requires_testing_data
@requires_mne
def test_discrete_source_space():
"""Test setting up (and reading/writing) discrete source spaces
"""
tempdir = _TempDir()
src = read_source_spaces(fname)
v = src[0]['vertno']
# let's make a discrete version with the C code, and with ours
temp_name = op.join(tempdir, 'temp-src.fif')
try:
# save
temp_pos = op.join(tempdir, 'temp-pos.txt')
np.savetxt(temp_pos, np.c_[src[0]['rr'][v], src[0]['nn'][v]])
# let's try the spherical one (no bem or surf supplied)
run_subprocess(['mne_volume_source_space', '--meters',
'--pos', temp_pos, '--src', temp_name])
src_c = read_source_spaces(temp_name)
pos_dict = dict(rr=src[0]['rr'][v], nn=src[0]['nn'][v])
src_new = setup_volume_source_space('sample', None,
pos=pos_dict,
subjects_dir=subjects_dir)
_compare_source_spaces(src_c, src_new, mode='approx')
assert_allclose(src[0]['rr'][v], src_new[0]['rr'],
rtol=1e-3, atol=1e-6)
assert_allclose(src[0]['nn'][v], src_new[0]['nn'],
rtol=1e-3, atol=1e-6)
# now do writing
write_source_spaces(temp_name, src_c)
src_c2 = read_source_spaces(temp_name)
_compare_source_spaces(src_c, src_c2)
# now do MRI
assert_raises(ValueError, setup_volume_source_space, 'sample',
pos=pos_dict, mri=fname_mri)
finally:
if op.isfile(temp_name):
os.remove(temp_name)
@slow_test
@testing.requires_testing_data
def test_volume_source_space():
"""Test setting up volume source spaces
"""
tempdir = _TempDir()
src = read_source_spaces(fname_vol)
temp_name = op.join(tempdir, 'temp-src.fif')
surf = read_bem_surfaces(fname_bem, s_id=FIFF.FIFFV_BEM_SURF_ID_BRAIN)
surf['rr'] *= 1e3 # convert to mm
# The one in the testing dataset (uses bem as bounds)
for bem, surf in zip((fname_bem, None), (None, surf)):
src_new = setup_volume_source_space('sample', temp_name, pos=7.0,
bem=bem, surface=surf,
mri=fname_mri,
subjects_dir=subjects_dir)
_compare_source_spaces(src, src_new, mode='approx')
del src_new
src_new = read_source_spaces(temp_name)
_compare_source_spaces(src, src_new, mode='approx')
assert_raises(IOError, setup_volume_source_space, 'sample', temp_name,
pos=7.0, bem=None, surface='foo', # bad surf
mri=fname_mri, subjects_dir=subjects_dir)
@testing.requires_testing_data
@requires_mne
def test_other_volume_source_spaces():
"""Test setting up other volume source spaces"""
# these are split off because they require the MNE tools, and
# Travis doesn't seem to like them
# let's try the spherical one (no bem or surf supplied)
tempdir = _TempDir()
temp_name = op.join(tempdir, 'temp-src.fif')
run_subprocess(['mne_volume_source_space',
'--grid', '7.0',
'--src', temp_name,
'--mri', fname_mri])
src = read_source_spaces(temp_name)
src_new = setup_volume_source_space('sample', temp_name, pos=7.0,
mri=fname_mri,
subjects_dir=subjects_dir)
_compare_source_spaces(src, src_new, mode='approx')
del src
del src_new
assert_raises(ValueError, setup_volume_source_space, 'sample', temp_name,
pos=7.0, sphere=[1., 1.], mri=fname_mri, # bad sphere
subjects_dir=subjects_dir)
# now without MRI argument, it should give an error when we try
# to read it
run_subprocess(['mne_volume_source_space',
'--grid', '7.0',
'--src', temp_name])
assert_raises(ValueError, read_source_spaces, temp_name)
@testing.requires_testing_data
def test_triangle_neighbors():
"""Test efficient vertex neighboring triangles for surfaces"""
this = read_source_spaces(fname)[0]
this['neighbor_tri'] = [list() for _ in range(this['np'])]
for p in range(this['ntri']):
verts = this['tris'][p]
this['neighbor_tri'][verts[0]].append(p)
this['neighbor_tri'][verts[1]].append(p)
this['neighbor_tri'][verts[2]].append(p)
this['neighbor_tri'] = [np.array(nb, int) for nb in this['neighbor_tri']]
neighbor_tri = _triangle_neighbors(this['tris'], this['np'])
assert_true(all(np.array_equal(nt1, nt2)
for nt1, nt2 in zip(neighbor_tri, this['neighbor_tri'])))
def test_accumulate_normals():
"""Test efficient normal accumulation for surfaces"""
# set up comparison
rng = np.random.RandomState(0)
n_pts = int(1.6e5) # approx number in sample source space
n_tris = int(3.2e5)
# use all positive to make a worst-case for cumulative summation
# (real "nn" vectors will have both positive and negative values)
tris = (rng.rand(n_tris, 1) * (n_pts - 2)).astype(int)
tris = np.c_[tris, tris + 1, tris + 2]
tri_nn = rng.rand(n_tris, 3)
this = dict(tris=tris, np=n_pts, ntri=n_tris, tri_nn=tri_nn)
# cut-and-paste from original code in surface.py:
# Find neighboring triangles and accumulate vertex normals
this['nn'] = np.zeros((this['np'], 3))
for p in range(this['ntri']):
# vertex normals
verts = this['tris'][p]
this['nn'][verts, :] += this['tri_nn'][p, :]
nn = _accumulate_normals(this['tris'], this['tri_nn'], this['np'])
# the moment of truth (or reckoning)
assert_allclose(nn, this['nn'], rtol=1e-7, atol=1e-7)
@slow_test
@testing.requires_testing_data
def test_setup_source_space():
"""Test setting up ico, oct, and all source spaces
"""
tempdir = _TempDir()
fname_ico = op.join(data_path, 'subjects', 'fsaverage', 'bem',
'fsaverage-ico-5-src.fif')
# first lets test some input params
assert_raises(ValueError, setup_source_space, 'sample', spacing='oct',
add_dist=False)
assert_raises(ValueError, setup_source_space, 'sample', spacing='octo',
add_dist=False)
assert_raises(ValueError, setup_source_space, 'sample', spacing='oct6e',
add_dist=False)
assert_raises(ValueError, setup_source_space, 'sample', spacing='7emm',
add_dist=False)
assert_raises(ValueError, setup_source_space, 'sample', spacing='alls',
add_dist=False)
assert_raises(IOError, setup_source_space, 'sample', spacing='oct6',
subjects_dir=subjects_dir, add_dist=False)
# ico 5 (fsaverage) - write to temp file
src = read_source_spaces(fname_ico)
temp_name = op.join(tempdir, 'temp-src.fif')
with warnings.catch_warnings(record=True): # sklearn equiv neighbors
warnings.simplefilter('always')
src_new = setup_source_space('fsaverage', temp_name, spacing='ico5',
subjects_dir=subjects_dir, add_dist=False,
overwrite=True)
_compare_source_spaces(src, src_new, mode='approx')
assert_array_equal(src[0]['vertno'], np.arange(10242))
assert_array_equal(src[1]['vertno'], np.arange(10242))
# oct-6 (sample) - auto filename + IO
src = read_source_spaces(fname)
temp_name = op.join(tempdir, 'temp-src.fif')
with warnings.catch_warnings(record=True): # sklearn equiv neighbors
warnings.simplefilter('always')
src_new = setup_source_space('sample', temp_name, spacing='oct6',
subjects_dir=subjects_dir,
overwrite=True, add_dist=False)
_compare_source_spaces(src, src_new, mode='approx')
src_new = read_source_spaces(temp_name)
_compare_source_spaces(src, src_new, mode='approx')
# all source points - no file writing
src_new = setup_source_space('sample', None, spacing='all',
subjects_dir=subjects_dir, add_dist=False)
assert_true(src_new[0]['nuse'] == len(src_new[0]['rr']))
assert_true(src_new[1]['nuse'] == len(src_new[1]['rr']))
# dense source space to hit surf['inuse'] lines of _create_surf_spacing
assert_raises(RuntimeError, setup_source_space, 'sample', None,
spacing='ico6', subjects_dir=subjects_dir, add_dist=False)
@testing.requires_testing_data
def test_read_source_spaces():
"""Test reading of source space meshes
"""
src = read_source_spaces(fname, patch_stats=True)
# 3D source space
lh_points = src[0]['rr']
lh_faces = src[0]['tris']
lh_use_faces = src[0]['use_tris']
rh_points = src[1]['rr']
rh_faces = src[1]['tris']
rh_use_faces = src[1]['use_tris']
assert_true(lh_faces.min() == 0)
assert_true(lh_faces.max() == lh_points.shape[0] - 1)
assert_true(lh_use_faces.min() >= 0)
assert_true(lh_use_faces.max() <= lh_points.shape[0] - 1)
assert_true(rh_faces.min() == 0)
assert_true(rh_faces.max() == rh_points.shape[0] - 1)
assert_true(rh_use_faces.min() >= 0)
assert_true(rh_use_faces.max() <= rh_points.shape[0] - 1)
@slow_test
@testing.requires_testing_data
def test_write_source_space():
"""Test reading and writing of source spaces
"""
tempdir = _TempDir()
src0 = read_source_spaces(fname, patch_stats=False)
write_source_spaces(op.join(tempdir, 'tmp-src.fif'), src0)
src1 = read_source_spaces(op.join(tempdir, 'tmp-src.fif'),
patch_stats=False)
_compare_source_spaces(src0, src1)
# test warnings on bad filenames
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
src_badname = op.join(tempdir, 'test-bad-name.fif.gz')
write_source_spaces(src_badname, src0)
read_source_spaces(src_badname)
assert_equal(len(w), 2)
@testing.requires_testing_data
@requires_fs_or_nibabel
def test_vertex_to_mni():
"""Test conversion of vertices to MNI coordinates
"""
# obtained using "tksurfer (sample) (l/r)h white"
vertices = [100960, 7620, 150549, 96761]
coords = np.array([[-60.86, -11.18, -3.19], [-36.46, -93.18, -2.36],
[-38.00, 50.08, -10.61], [47.14, 8.01, 46.93]])
hemis = [0, 0, 0, 1]
coords_2 = vertex_to_mni(vertices, hemis, 'sample', subjects_dir)
# less than 1mm error
assert_allclose(coords, coords_2, atol=1.0)
@testing.requires_testing_data
@requires_freesurfer
@requires_nibabel()
def test_vertex_to_mni_fs_nibabel():
"""Test equivalence of vert_to_mni for nibabel and freesurfer
"""
n_check = 1000
subject = 'sample'
vertices = np.random.randint(0, 100000, n_check)
hemis = np.random.randint(0, 1, n_check)
coords = vertex_to_mni(vertices, hemis, subject, subjects_dir,
'nibabel')
coords_2 = vertex_to_mni(vertices, hemis, subject, subjects_dir,
'freesurfer')
# less than 0.1 mm error
assert_allclose(coords, coords_2, atol=0.1)
@testing.requires_testing_data
@requires_freesurfer
@requires_nibabel()
def test_get_volume_label_names():
"""Test reading volume label names
"""
aseg_fname = op.join(subjects_dir, 'sample', 'mri', 'aseg.mgz')
label_names = get_volume_labels_from_aseg(aseg_fname)
assert_equal(label_names.count('Brain-Stem'), 1)
@testing.requires_testing_data
@requires_freesurfer
@requires_nibabel()
def test_source_space_from_label():
"""Test generating a source space from volume label
"""
tempdir = _TempDir()
aseg_fname = op.join(subjects_dir, 'sample', 'mri', 'aseg.mgz')
label_names = get_volume_labels_from_aseg(aseg_fname)
volume_label = label_names[int(np.random.rand() * len(label_names))]
# Test pos as dict
pos = dict()
assert_raises(ValueError, setup_volume_source_space, 'sample', pos=pos,
volume_label=volume_label, mri=aseg_fname)
# Test no mri provided
assert_raises(RuntimeError, setup_volume_source_space, 'sample', mri=None,
volume_label=volume_label)
# Test invalid volume label
assert_raises(ValueError, setup_volume_source_space, 'sample',
volume_label='Hello World!', mri=aseg_fname)
src = setup_volume_source_space('sample', subjects_dir=subjects_dir,
volume_label=volume_label, mri=aseg_fname,
add_interpolator=False)
assert_equal(volume_label, src[0]['seg_name'])
# test reading and writing
out_name = op.join(tempdir, 'temp-src.fif')
write_source_spaces(out_name, src)
src_from_file = read_source_spaces(out_name)
_compare_source_spaces(src, src_from_file, mode='approx')
@testing.requires_testing_data
@requires_freesurfer
@requires_nibabel()
def test_combine_source_spaces():
"""Test combining source spaces
"""
tempdir = _TempDir()
aseg_fname = op.join(subjects_dir, 'sample', 'mri', 'aseg.mgz')
label_names = get_volume_labels_from_aseg(aseg_fname)
volume_labels = [label_names[int(np.random.rand() * len(label_names))]
for ii in range(2)]
# get a surface source space (no need to test creation here)
srf = read_source_spaces(fname, patch_stats=False)
# setup 2 volume source spaces
vol = setup_volume_source_space('sample', subjects_dir=subjects_dir,
volume_label=volume_labels[0],
mri=aseg_fname, add_interpolator=False)
# setup a discrete source space
rr = np.random.randint(0, 20, (100, 3)) * 1e-3
nn = np.zeros(rr.shape)
nn[:, -1] = 1
pos = {'rr': rr, 'nn': nn}
disc = setup_volume_source_space('sample', subjects_dir=subjects_dir,
pos=pos, verbose='error')
# combine source spaces
src = srf + vol + disc
# test addition of source spaces
assert_equal(type(src), SourceSpaces)
assert_equal(len(src), 4)
# test reading and writing
src_out_name = op.join(tempdir, 'temp-src.fif')
src.save(src_out_name)
src_from_file = read_source_spaces(src_out_name)
_compare_source_spaces(src, src_from_file, mode='approx')
# test that all source spaces are in MRI coordinates
coord_frames = np.array([s['coord_frame'] for s in src])
assert_true((coord_frames == FIFF.FIFFV_COORD_MRI).all())
# test errors for export_volume
image_fname = op.join(tempdir, 'temp-image.mgz')
# source spaces with no volume
assert_raises(ValueError, srf.export_volume, image_fname, verbose='error')
# unrecognized source type
disc2 = disc.copy()
disc2[0]['type'] = 'kitty'
src_unrecognized = src + disc2
assert_raises(ValueError, src_unrecognized.export_volume, image_fname,
verbose='error')
# unrecognized file type
bad_image_fname = op.join(tempdir, 'temp-image.png')
assert_raises(ValueError, src.export_volume, bad_image_fname,
verbose='error')
# mixed coordinate frames
disc3 = disc.copy()
disc3[0]['coord_frame'] = 10
src_mixed_coord = src + disc3
assert_raises(ValueError, src_mixed_coord.export_volume, image_fname,
verbose='error')
run_tests_if_main()
# The following code was used to generate small-src.fif.gz.
# Unfortunately the C code bombs when trying to add source space distances,
# possibly due to incomplete "faking" of a smaller surface on our part here.
"""
# -*- coding: utf-8 -*-
import os
import numpy as np
import mne
data_path = mne.datasets.sample.data_path()
src = mne.setup_source_space('sample', fname=None, spacing='oct5')
hemis = ['lh', 'rh']
fnames = [data_path + '/subjects/sample/surf/%s.decimated' % h for h in hemis]
vs = list()
for s, fname in zip(src, fnames):
coords = s['rr'][s['vertno']]
vs.append(s['vertno'])
idx = -1 * np.ones(len(s['rr']))
idx[s['vertno']] = np.arange(s['nuse'])
faces = s['use_tris']
faces = idx[faces]
mne.write_surface(fname, coords, faces)
# we need to move sphere surfaces
spheres = [data_path + '/subjects/sample/surf/%s.sphere' % h for h in hemis]
for s in spheres:
os.rename(s, s + '.bak')
try:
for s, v in zip(spheres, vs):
coords, faces = mne.read_surface(s + '.bak')
coords = coords[v]
mne.write_surface(s, coords, faces)
src = mne.setup_source_space('sample', fname=None, spacing='oct4',
surface='decimated')
finally:
for s in spheres:
os.rename(s + '.bak', s)
fname = 'small-src.fif'
fname_gz = fname + '.gz'
mne.write_source_spaces(fname, src)
mne.utils.run_subprocess(['mne_add_patch_info', '--src', fname,
'--srcp', fname])
mne.write_source_spaces(fname_gz, mne.read_source_spaces(fname))
"""
| bsd-3-clause |
i19/shadowsocks | tests/test.py | 1016 | 5029 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright 2015 clowwindy
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import, division, print_function, \
with_statement
import sys
import os
import signal
import select
import time
import argparse
from subprocess import Popen, PIPE
python = ['python']
default_url = 'http://localhost/'
parser = argparse.ArgumentParser(description='test Shadowsocks')
parser.add_argument('-c', '--client-conf', type=str, default=None)
parser.add_argument('-s', '--server-conf', type=str, default=None)
parser.add_argument('-a', '--client-args', type=str, default=None)
parser.add_argument('-b', '--server-args', type=str, default=None)
parser.add_argument('--with-coverage', action='store_true', default=None)
parser.add_argument('--should-fail', action='store_true', default=None)
parser.add_argument('--tcp-only', action='store_true', default=None)
parser.add_argument('--url', type=str, default=default_url)
parser.add_argument('--dns', type=str, default='8.8.8.8')
config = parser.parse_args()
if config.with_coverage:
python = ['coverage', 'run', '-p', '-a']
client_args = python + ['shadowsocks/local.py', '-v']
server_args = python + ['shadowsocks/server.py', '-v']
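# For example, with --with-coverage the spawned command lines become
# (illustrative, options depend on the flags passed below):
#   coverage run -p -a shadowsocks/local.py -v [-c CLIENT_CONF] [CLIENT_ARGS...]
#   coverage run -p -a shadowsocks/server.py -v [-c SERVER_CONF] [SERVER_ARGS...]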
if config.client_conf:
client_args.extend(['-c', config.client_conf])
if config.server_conf:
server_args.extend(['-c', config.server_conf])
else:
server_args.extend(['-c', config.client_conf])
if config.client_args:
client_args.extend(config.client_args.split())
if config.server_args:
server_args.extend(config.server_args.split())
else:
server_args.extend(config.client_args.split())
if config.url == default_url:
server_args.extend(['--forbidden-ip', ''])
p1 = Popen(server_args, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
p2 = Popen(client_args, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
p3 = None
p4 = None
p3_fin = False
p4_fin = False
# 1 shadowsocks started
# 2 curl started
# 3 curl finished
# 4 dig started
# 5 dig finished
stage = 1
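# Illustrative transition sketch for the stage numbers above (the tcp-only
# path breaks out after stage 3 and never reaches stages 4/5):
#
#   1 --shadowsocks ready, curl spawned--> 2 --curl stdout EOF--> 3
#   3 --dig spawned----------------------> 4 --dig stdout EOF---> 5 --> done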
try:
local_ready = False
server_ready = False
fdset = [p1.stdout, p2.stdout, p1.stderr, p2.stderr]
while True:
r, w, e = select.select(fdset, [], fdset)
if e:
break
for fd in r:
line = fd.readline()
if not line:
if stage == 2 and fd == p3.stdout:
stage = 3
if stage == 4 and fd == p4.stdout:
stage = 5
if bytes != str:
line = str(line, 'utf8')
sys.stderr.write(line)
if line.find('starting local') >= 0:
local_ready = True
if line.find('starting server') >= 0:
server_ready = True
if stage == 1:
time.sleep(2)
p3 = Popen(['curl', config.url, '-v', '-L',
'--socks5-hostname', '127.0.0.1:1081',
'-m', '15', '--connect-timeout', '10'],
stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
if p3 is not None:
fdset.append(p3.stdout)
fdset.append(p3.stderr)
stage = 2
else:
sys.exit(1)
if stage == 3 and p3 is not None:
fdset.remove(p3.stdout)
fdset.remove(p3.stderr)
r = p3.wait()
if config.should_fail:
if r == 0:
sys.exit(1)
else:
if r != 0:
sys.exit(1)
if config.tcp_only:
break
p4 = Popen(['socksify', 'dig', '@%s' % config.dns,
'www.google.com'],
stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
if p4 is not None:
fdset.append(p4.stdout)
fdset.append(p4.stderr)
stage = 4
else:
sys.exit(1)
if stage == 5:
r = p4.wait()
if config.should_fail:
if r == 0:
sys.exit(1)
print('test passed (expecting failure)')
else:
if r != 0:
sys.exit(1)
print('test passed')
break
finally:
for p in [p1, p2]:
try:
os.kill(p.pid, signal.SIGINT)
os.waitpid(p.pid, 0)
except OSError:
pass
| apache-2.0 |
benrudolph/commcare-hq | corehq/apps/commtrack/management/commands/find_location_fields.py | 2 | 2314 | from django.core.management.base import BaseCommand
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
import csv
class Command(BaseCommand):
    # report how often hardcoded properties appear, broken down by
    # domain and property
def has_any_hardcoded_properties(self, loc, csv_writer):
hardcoded = {
'outlet': [
'outlet_type',
'outlet_type_other',
'address',
'landmark',
'contact_name',
'contact_phone',
],
'village': [
'village_size',
'village_class',
],
}
found = False
if loc.location_type in hardcoded.keys():
for prop in hardcoded[loc.location_type]:
prop_val = getattr(loc, prop, '')
if prop_val:
csv_writer.writerow([
loc._id,
loc.location_type,
loc.domain,
prop,
prop_val
])
found = True
return found
def handle(self, *args, **options):
with open('location_results.csv', 'wb+') as csvfile:
csv_writer = csv.writer(
csvfile,
delimiter=',',
quotechar='|',
quoting=csv.QUOTE_MINIMAL
)
csv_writer.writerow(['id', 'type', 'domain', 'property', 'value'])
locations = list(set(Location.get_db().view(
'locations/by_type',
reduce=False,
wrapper=lambda row: row['id'],
).all()))
problematic_domains = {}
for loc in iter_docs(Location.get_db(), locations):
loc = Location.get(loc['_id'])
if self.has_any_hardcoded_properties(loc, csv_writer):
if loc.domain in problematic_domains:
problematic_domains[loc.domain] += 1
else:
problematic_domains[loc.domain] = 1
self.stdout.write("\nDomain stats:\n")
for domain, count in problematic_domains.iteritems():
self.stdout.write("%s: %d" % (domain, count))
| bsd-3-clause |
Varentsov/servo | tests/wpt/web-platform-tests/webdriver/tests/actions/key.py | 17 | 6896 | import pytest
from tests.actions.support.keys import Keys
from tests.actions.support.refine import filter_dict, get_keys, get_events
def test_lone_keyup_sends_no_events(session, key_reporter, key_chain):
key_chain.key_up("a").perform()
assert len(get_keys(key_reporter)) == 0
assert len(get_events(session)) == 0
session.actions.release()
assert len(get_keys(key_reporter)) == 0
assert len(get_events(session)) == 0
@pytest.mark.parametrize("value,code", [
(u"a", "KeyA",),
("a", "KeyA",),
(u"\"", "Quote"),
(u",", "Comma"),
(u"\u00E0", ""),
(u"\u0416", ""),
(u"@", "Digit2"),
(u"\u2603", ""),
(u"\uF6C2", ""), # PUA
])
def test_single_printable_key_sends_correct_events(session,
key_reporter,
key_chain,
value,
code):
key_chain \
.key_down(value) \
.key_up(value) \
.perform()
expected = [
{"code": code, "key": value, "type": "keydown"},
{"code": code, "key": value, "type": "keypress"},
{"code": code, "key": value, "type": "keyup"},
]
all_events = get_events(session)
events = [filter_dict(e, expected[0]) for e in all_events]
    if len(events) > 0 and events[0]["code"] is None:
# Remove 'code' entry if browser doesn't support it
expected = [filter_dict(e, {"key": "", "type": ""}) for e in expected]
events = [filter_dict(e, expected[0]) for e in events]
assert events == expected
assert get_keys(key_reporter) == value
@pytest.mark.parametrize("value", [
(u"\U0001F604"),
(u"\U0001F60D"),
])
def test_single_emoji_records_correct_key(session, key_reporter, key_chain, value):
# Not using key_chain.send_keys() because we always want to treat value as
# one character here. `len(value)` varies by platform for non-BMP characters,
# so we don't want to iterate over value.
key_chain \
.key_down(value) \
.key_up(value) \
.perform()
# events sent by major browsers are inconsistent so only check key value
assert get_keys(key_reporter) == value
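# A minimal sketch of the narrow/wide-build caveat mentioned above (helper
# name is illustrative and deliberately not test_-prefixed so pytest skips
# it): on narrow Python 2 builds a non-BMP character is stored as a
# surrogate pair, so iterating over it would emit two key actions instead
# of one.
def _non_bmp_length_demo():
    emoji = u"\U0001F604"
    # 1 on wide builds and Python 3.3+, 2 on narrow Python 2 builds
    return len(emoji)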
@pytest.mark.parametrize("value,code,key", [
(u"\uE050", "ShiftRight", "Shift"),
(u"\uE053", "OSRight", "Meta"),
(Keys.CONTROL, "ControlLeft", "Control"),
])
def test_single_modifier_key_sends_correct_events(session,
key_reporter,
key_chain,
value,
code,
key):
key_chain \
.key_down(value) \
.key_up(value) \
.perform()
all_events = get_events(session)
expected = [
{"code": code, "key": key, "type": "keydown"},
{"code": code, "key": key, "type": "keyup"},
]
events = [filter_dict(e, expected[0]) for e in all_events]
    if len(events) > 0 and events[0]["code"] is None:
# Remove 'code' entry if browser doesn't support it
expected = [filter_dict(e, {"key": "", "type": ""}) for e in expected]
events = [filter_dict(e, expected[0]) for e in events]
assert events == expected
assert len(get_keys(key_reporter)) == 0
@pytest.mark.parametrize("value,code,key", [
(Keys.ESCAPE, "Escape", "Escape"),
(Keys.RIGHT, "ArrowRight", "ArrowRight"),
])
def test_single_nonprintable_key_sends_events(session,
key_reporter,
key_chain,
value,
code,
key):
key_chain \
.key_down(value) \
.key_up(value) \
.perform()
expected = [
{"code": code, "key": key, "type": "keydown"},
{"code": code, "key": key, "type": "keypress"},
{"code": code, "key": key, "type": "keyup"},
]
all_events = get_events(session)
events = [filter_dict(e, expected[0]) for e in all_events]
    if len(events) > 0 and events[0]["code"] is None:
# Remove 'code' entry if browser doesn't support it
expected = [filter_dict(e, {"key": "", "type": ""}) for e in expected]
events = [filter_dict(e, expected[0]) for e in events]
if len(events) == 2:
# most browsers don't send a keypress for non-printable keys
assert events == [expected[0], expected[2]]
else:
assert events == expected
assert len(get_keys(key_reporter)) == 0
def test_sequence_of_keydown_printable_keys_sends_events(session,
key_reporter,
key_chain):
key_chain \
.key_down("a") \
.key_down("b") \
.perform()
expected = [
{"code": "KeyA", "key": "a", "type": "keydown"},
{"code": "KeyA", "key": "a", "type": "keypress"},
{"code": "KeyB", "key": "b", "type": "keydown"},
{"code": "KeyB", "key": "b", "type": "keypress"},
]
all_events = get_events(session)
events = [filter_dict(e, expected[0]) for e in all_events]
    if len(events) > 0 and events[0]["code"] is None:
# Remove 'code' entry if browser doesn't support it
expected = [filter_dict(e, {"key": "", "type": ""}) for e in expected]
events = [filter_dict(e, expected[0]) for e in events]
assert events == expected
assert get_keys(key_reporter) == "ab"
def test_sequence_of_keydown_character_keys(session, key_reporter, key_chain):
key_chain.send_keys("ef").perform()
expected = [
{"code": "KeyE", "key": "e", "type": "keydown"},
{"code": "KeyE", "key": "e", "type": "keypress"},
{"code": "KeyE", "key": "e", "type": "keyup"},
{"code": "KeyF", "key": "f", "type": "keydown"},
{"code": "KeyF", "key": "f", "type": "keypress"},
{"code": "KeyF", "key": "f", "type": "keyup"},
]
all_events = get_events(session)
events = [filter_dict(e, expected[0]) for e in all_events]
    if len(events) > 0 and events[0]["code"] is None:
# Remove 'code' entry if browser doesn't support it
expected = [filter_dict(e, {"key": "", "type": ""}) for e in expected]
events = [filter_dict(e, expected[0]) for e in events]
assert events == expected
assert get_keys(key_reporter) == "ef"
def test_backspace_erases_keys(session, key_reporter, key_chain):
key_chain \
.send_keys("efcd") \
.send_keys([Keys.BACKSPACE, Keys.BACKSPACE]) \
.perform()
assert get_keys(key_reporter) == "ef"
| mpl-2.0 |
gauribhoite/personfinder | env/site-packages/pygments/styles/manni.py | 135 | 2374 | # -*- coding: utf-8 -*-
"""
pygments.styles.manni
~~~~~~~~~~~~~~~~~~~~~
A colorful style, inspired by the terminal highlighting style.
This is a port of the style used in the `php port`_ of pygments
by Manni. The style is called 'default' there.
:copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace
class ManniStyle(Style):
"""
A colorful style, inspired by the terminal highlighting style.
"""
background_color = '#f0f3f3'
styles = {
Whitespace: '#bbbbbb',
Comment: 'italic #0099FF',
Comment.Preproc: 'noitalic #009999',
Comment.Special: 'bold',
Keyword: 'bold #006699',
Keyword.Pseudo: 'nobold',
Keyword.Type: '#007788',
Operator: '#555555',
Operator.Word: 'bold #000000',
Name.Builtin: '#336666',
Name.Function: '#CC00FF',
Name.Class: 'bold #00AA88',
Name.Namespace: 'bold #00CCFF',
Name.Exception: 'bold #CC0000',
Name.Variable: '#003333',
Name.Constant: '#336600',
Name.Label: '#9999FF',
Name.Entity: 'bold #999999',
Name.Attribute: '#330099',
Name.Tag: 'bold #330099',
Name.Decorator: '#9999FF',
String: '#CC3300',
String.Doc: 'italic',
String.Interpol: '#AA0000',
String.Escape: 'bold #CC3300',
String.Regex: '#33AAAA',
String.Symbol: '#FFCC33',
String.Other: '#CC3300',
Number: '#FF6600',
Generic.Heading: 'bold #003300',
Generic.Subheading: 'bold #003300',
Generic.Deleted: 'border:#CC0000 bg:#FFCCCC',
Generic.Inserted: 'border:#00CC00 bg:#CCFFCC',
Generic.Error: '#FF0000',
Generic.Emph: 'italic',
Generic.Strong: 'bold',
Generic.Prompt: 'bold #000099',
Generic.Output: '#AAAAAA',
Generic.Traceback: '#99CC66',
Error: 'bg:#FFAAAA #AA0000'
}
| apache-2.0 |
robertozoia/cartelera | unify_names.py | 1 | 2655 | # encoding: utf-8
# The MIT License (MIT)
#
# Copyright (c) 2012 Roberto Zoia
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
import copy
import Levenshtein
from tools import purify
DEBUG = True
def get_unique_movienames(theater_chain):
"""
    Given a theater_chain ('cadena') data structure, returns a set of (unique) movie names.
"""
data = set()
# We just need unique movie names, in no particular order
for theater in theater_chain.theaters:
for movie in theater.movies:
# print '-- %s'% movie.name
data.add(movie.name)
return data
def get_reference_movienames(theater_chains, reference_chain_tag="UVK"):
"""
    Uses the 'cadena de cines' indicated by reference_chain_tag to build a set of unique
    movie names against which the other movies' names can be compared and unified.
"""
result = None
for chain in theater_chains:
if chain.tag == reference_chain_tag:
result = get_unique_movienames(chain)
break
return result
def unify_names(theater_chains, ref_movienames):
"""
    Checks each movie name in the theater_chains structure against the names in
    ref_movienames. Uses Levenshtein ratio for name comparison.
"""
# Two movienames with Levenshtein.ratio above this number are considered equal
# 0.8 is an empirical value based on sampled data for movie names in Perú
LLimit = 0.8
# Don't modify original data
#theater_chains_copy = copy.deepcopy(theater_chains)
for ref_name in ref_movienames:
for chain in theater_chains:
for theater in chain.theaters:
for movie in theater.movies:
if Levenshtein.ratio(ref_name.lower(), movie.name.lower()) >= LLimit:
movie.name = ref_name
return theater_chains
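# Minimal sketch of the 0.8 cutoff (illustrative helper, not called by the
# pipeline; the example titles are made up):
def _ratio_demo():
    # ~0.87, above LLimit, so these two would be unified
    close = Levenshtein.ratio('iron man 3', 'iron man 3 3d')
    # well below 0.8, so these stay separate
    far = Levenshtein.ratio('iron man 3', 'man of steel')
    return close, far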
| mit |
yanchen036/tensorflow | tensorflow/contrib/distributions/python/kernel_tests/conditional_transformed_distribution_test.py | 14 | 3030 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for ConditionalTransformedDistribution."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.contrib import distributions
from tensorflow.contrib.distributions.python.kernel_tests import transformed_distribution_test
from tensorflow.contrib.distributions.python.ops.bijectors.conditional_bijector import ConditionalBijector
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
ds = distributions
class _ChooseLocation(ConditionalBijector):
"""A Bijector which chooses between one of two location parameters."""
def __init__(self, loc, name="ChooseLocation"):
self._graph_parents = []
self._name = name
with self._name_scope("init", values=[loc]):
self._loc = ops.convert_to_tensor(loc, name="loc")
super(_ChooseLocation, self).__init__(
graph_parents=[self._loc],
is_constant_jacobian=True,
validate_args=False,
forward_min_event_ndims=0,
name=name)
def _forward(self, x, z):
return x + self._gather_loc(z)
def _inverse(self, x, z):
return x - self._gather_loc(z)
def _inverse_log_det_jacobian(self, x, event_ndims, z=None):
return 0.
def _gather_loc(self, z):
z = ops.convert_to_tensor(z)
z = math_ops.cast((1 + z) / 2, dtypes.int32)
return array_ops.gather(self._loc, z)
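  # Worked example of the mapping above (illustrative): z in {-1, +1} is
  # rescaled by (1 + z) / 2 to an index in {0, 1}, so with
  # loc = [-100., 100.], z = -1 selects loc[0] = -100. and z = +1 selects
  # loc[1] = 100., which is what testConditioning below relies on.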
class ConditionalTransformedDistributionTest(
transformed_distribution_test.TransformedDistributionTest):
def _cls(self):
return ds.ConditionalTransformedDistribution
def testConditioning(self):
with self.test_session():
conditional_normal = ds.ConditionalTransformedDistribution(
distribution=ds.Normal(loc=0., scale=1.),
bijector=_ChooseLocation(loc=[-100., 100.]))
z = [-1, +1, -1, -1, +1]
self.assertAllClose(
np.sign(conditional_normal.sample(
5, bijector_kwargs={"z": z}).eval()), z)
class ConditionalScalarToMultiTest(
transformed_distribution_test.ScalarToMultiTest):
def _cls(self):
return ds.ConditionalTransformedDistribution
if __name__ == "__main__":
test.main()
| apache-2.0 |
chendaniely/scipy_proceedings | publisher/build_paper.py | 2 | 5776 | #!/usr/bin/env python
from __future__ import print_function
import docutils.core as dc
import os.path
import sys
import re
import tempfile
import glob
import shutil
from writer import writer
from conf import papers_dir, output_dir
import options
header = r'''
.. role:: ref
.. role:: label
.. role:: cite
.. raw:: latex
\InputIfFileExists{page_numbers.tex}{}{}
\newcommand*{\docutilsroleref}{\ref}
\newcommand*{\docutilsrolelabel}{\label}
\providecommand*\DUrolecite[1]{\cite{#1}}
.. |---| unicode:: U+2014 .. em dash, trimming surrounding whitespace
:trim:
.. |--| unicode:: U+2013 .. en dash
:trim:
'''
def rst2tex(in_path, out_path):
options.mkdir_p(out_path)
for file in glob.glob(os.path.join(in_path, '*')):
shutil.copy(file, out_path)
base_dir = os.path.dirname(__file__)
scipy_status = os.path.join(base_dir, '_static/status.sty')
shutil.copy(scipy_status, out_path)
scipy_style = os.path.join(base_dir, '_static/scipy.sty')
shutil.copy(scipy_style, out_path)
preamble = r'''\usepackage{scipy}'''
# Add the LaTeX commands required by Pygments to do syntax highlighting
pygments = None
try:
import pygments
except ImportError:
import warnings
warnings.warn(RuntimeWarning('Could not import Pygments. '
'Syntax highlighting will fail.'))
if pygments:
from pygments.formatters import LatexFormatter
from writer.sphinx_highlight import SphinxStyle
preamble += LatexFormatter(style=SphinxStyle).get_style_defs()
settings = {'documentclass': 'IEEEtran',
'use_verbatim_when_possible': True,
'use_latex_citations': True,
'latex_preamble': preamble,
'documentoptions': 'letterpaper,compsoc,twoside',
'halt_level': 3, # 2: warn; 3: error; 4: severe
}
try:
rst, = glob.glob(os.path.join(in_path, '*.rst'))
except ValueError:
raise RuntimeError("Found more than one input .rst--not sure which "
"one to use.")
content = header + open(rst, 'r').read()
tex = dc.publish_string(source=content, writer=writer,
settings_overrides=settings)
stats_file = os.path.join(out_path, 'paper_stats.json')
d = options.cfg2dict(stats_file)
try:
d.update(writer.document.stats)
options.dict2cfg(d, stats_file)
except AttributeError:
print("Error: no paper configuration found")
tex_file = os.path.join(out_path, 'paper.tex')
with open(tex_file, 'w') as f:
f.write(tex)
def tex2pdf(out_path):
import subprocess
command_line = 'pdflatex -halt-on-error paper.tex'
# -- dummy tempfile is a hacky way to prevent pdflatex
# from asking for any missing files via stdin prompts,
# which mess up our build process.
dummy = tempfile.TemporaryFile()
run = subprocess.Popen(command_line, shell=True,
stdin=dummy,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=out_path,
)
out, err = run.communicate()
if "Fatal" in out or run.returncode:
print("PDFLaTeX error output:")
print("=" * 80)
print(out)
print("=" * 80)
if err:
print(err)
print("=" * 80)
# Errors, exit early
return out
# Compile BiBTeX if available
stats_file = os.path.join(out_path, 'paper_stats.json')
d = options.cfg2dict(stats_file)
bib_file = os.path.join(out_path, d["bibliography"] + '.bib')
if os.path.exists(bib_file):
bibtex_cmd = 'bibtex paper && ' + command_line
run = subprocess.Popen(bibtex_cmd, shell=True,
stdin=dummy,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=out_path,
)
out, err = run.communicate()
if err:
print("Error compiling BiBTeX")
return out
# -- returncode always 0, have to check output for error
if not run.returncode:
# -- pdflatex has to run twice to actually work
run = subprocess.Popen(command_line, shell=True,
stdin=dummy,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=out_path,
)
out, err = run.communicate()
return out
def page_count(pdflatex_stdout, paper_dir):
"""
Parse pdflatex output for paper count, and store in a .ini file.
"""
if pdflatex_stdout is None:
print("*** WARNING: PDFLaTeX failed to generate output.")
return
    # raw string with an escaped dot so 'paper.pdf' is matched literally
    regexp = re.compile(r'Output written on paper\.pdf \((\d+) pages')
cfgname = os.path.join(paper_dir, 'paper_stats.json')
d = options.cfg2dict(cfgname)
for line in pdflatex_stdout.splitlines():
m = regexp.match(line)
if m:
pages = m.groups()[0]
d.update({'pages': int(pages)})
break
options.dict2cfg(d, cfgname)
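# Minimal check of the pattern used above (illustrative helper, not part of
# the build; the sample line mimics typical pdflatex output):
def _page_count_demo():
    sample = 'Output written on paper.pdf (6 pages, 123456 bytes).'
    m = re.compile(r'Output written on paper\.pdf \((\d+) pages').match(sample)
    return int(m.groups()[0])  # -> 6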
def build_paper(paper_id):
out_path = os.path.join(output_dir, paper_id)
in_path = os.path.join(papers_dir, paper_id)
print("Building:", paper_id)
rst2tex(in_path, out_path)
pdflatex_stdout = tex2pdf(out_path)
page_count(pdflatex_stdout, out_path)
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: build_paper.py paper_directory")
sys.exit(-1)
in_path = os.path.normpath(sys.argv[1])
if not os.path.isdir(in_path):
print("Cannot open directory: %s" % in_path)
sys.exit(-1)
paper_id = os.path.basename(in_path)
build_paper(paper_id)
| bsd-2-clause |
egabancho/invenio | invenio/modules/comments/utils.py | 1 | 1536 | # -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2014 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Comment utility functions."""
from flask import request
def comments_nb_counts():
"""Get number of comments for the record `recid`."""
recid = request.view_args.get('recid')
if recid is None:
return
elif recid == 0:
return 0
else:
from invenio.legacy.webcomment.adminlib import get_nb_comments
return get_nb_comments(recid, count_deleted=False)
def reviews_nb_counts():
"""Get number of reviews for the record `recid`."""
recid = request.view_args.get('recid')
if recid is None:
return
elif recid == 0:
return 0
else:
from invenio.legacy.webcomment.adminlib import get_nb_reviews
return get_nb_reviews(recid, count_deleted=False)
| gpl-2.0 |
vmg/hg-stable | mercurial/templatefilters.py | 1 | 12187 | # templatefilters.py - common template expansion filters
#
# Copyright 2005-2008 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
import cgi, re, os, time, urllib
import encoding, node, util
import hbisect
def addbreaks(text):
""":addbreaks: Any text. Add an XHTML "<br />" tag before the end of
every line except the last.
"""
return text.replace('\n', '<br/>\n')
agescales = [("year", 3600 * 24 * 365),
("month", 3600 * 24 * 30),
("week", 3600 * 24 * 7),
("day", 3600 * 24),
("hour", 3600),
("minute", 60),
("second", 1)]
def age(date):
""":age: Date. Returns a human-readable date/time difference between the
given date/time and the current date/time.
"""
def plural(t, c):
if c == 1:
return t
return t + "s"
def fmt(t, c):
return "%d %s" % (c, plural(t, c))
now = time.time()
then = date[0]
future = False
if then > now:
future = True
delta = max(1, int(then - now))
if delta > agescales[0][1] * 30:
return 'in the distant future'
else:
delta = max(1, int(now - then))
if delta > agescales[0][1] * 2:
return util.shortdate(date)
for t, s in agescales:
n = delta // s
if n >= 2 or s == 1:
if future:
return '%s from now' % fmt(t, n)
return '%s ago' % fmt(t, n)
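# Illustrative outputs of age() (approximate): a changeset from three days
# ago renders as "3 days ago", one from a single day ago as "24 hours ago"
# (a unit is only used once its count reaches 2), and anything older than
# about two years falls back to shortdate(), e.g. "2006-09-04".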
def basename(path):
""":basename: Any text. Treats the text as a path, and returns the last
component of the path after splitting by the path separator
(ignoring trailing separators). For example, "foo/bar/baz" becomes
"baz" and "foo/bar//" becomes "bar".
"""
return os.path.basename(path)
def datefilter(text):
""":date: Date. Returns a date in a Unix date format, including the
timezone: "Mon Sep 04 15:13:13 2006 0700".
"""
return util.datestr(text)
def domain(author):
""":domain: Any text. Finds the first string that looks like an email
address, and extracts just the domain component. Example: ``User
<user@example.com>`` becomes ``example.com``.
"""
f = author.find('@')
if f == -1:
return ''
author = author[f + 1:]
f = author.find('>')
if f >= 0:
author = author[:f]
return author
def email(text):
""":email: Any text. Extracts the first string that looks like an email
address. Example: ``User <user@example.com>`` becomes
``user@example.com``.
"""
return util.email(text)
def escape(text):
""":escape: Any text. Replaces the special XML/XHTML characters "&", "<"
and ">" with XML entities, and filters out NUL characters.
"""
return cgi.escape(text.replace('\0', ''), True)
para_re = None
space_re = None
def fill(text, width, initindent = '', hangindent = ''):
'''fill many paragraphs with optional indentation.'''
global para_re, space_re
if para_re is None:
para_re = re.compile('(\n\n|\n\\s*[-*]\\s*)', re.M)
space_re = re.compile(r' +')
def findparas():
start = 0
while True:
m = para_re.search(text, start)
if not m:
uctext = unicode(text[start:], encoding.encoding)
w = len(uctext)
while 0 < w and uctext[w - 1].isspace():
w -= 1
yield (uctext[:w].encode(encoding.encoding),
uctext[w:].encode(encoding.encoding))
break
yield text[start:m.start(0)], m.group(1)
start = m.end(1)
return "".join([util.wrap(space_re.sub(' ', util.wrap(para, width)),
width, initindent, hangindent) + rest
for para, rest in findparas()])
def fill68(text):
""":fill68: Any text. Wraps the text to fit in 68 columns."""
return fill(text, 68)
def fill76(text):
""":fill76: Any text. Wraps the text to fit in 76 columns."""
return fill(text, 76)
def firstline(text):
""":firstline: Any text. Returns the first line of text."""
try:
return text.splitlines(True)[0].rstrip('\r\n')
except IndexError:
return ''
def hexfilter(text):
""":hex: Any text. Convert a binary Mercurial node identifier into
its long hexadecimal representation.
"""
return node.hex(text)
def hgdate(text):
""":hgdate: Date. Returns the date as a pair of numbers: "1157407993
25200" (Unix timestamp, timezone offset).
"""
return "%d %d" % text
def isodate(text):
""":isodate: Date. Returns the date in ISO 8601 format: "2009-08-18 13:00
+0200".
"""
return util.datestr(text, '%Y-%m-%d %H:%M %1%2')
def isodatesec(text):
""":isodatesec: Date. Returns the date in ISO 8601 format, including
seconds: "2009-08-18 13:00:13 +0200". See also the rfc3339date
filter.
"""
return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2')
def indent(text, prefix):
'''indent each non-empty line of text after first with prefix.'''
lines = text.splitlines()
num_lines = len(lines)
endswithnewline = text[-1:] == '\n'
def indenter():
for i in xrange(num_lines):
l = lines[i]
if i and l.strip():
yield prefix
yield l
if i < num_lines - 1 or endswithnewline:
yield '\n'
return "".join(indenter())
def json(obj):
if obj is None or obj is False or obj is True:
return {None: 'null', False: 'false', True: 'true'}[obj]
elif isinstance(obj, int) or isinstance(obj, float):
return str(obj)
elif isinstance(obj, str):
u = unicode(obj, encoding.encoding, 'replace')
return '"%s"' % jsonescape(u)
elif isinstance(obj, unicode):
return '"%s"' % jsonescape(obj)
elif util.safehasattr(obj, 'keys'):
out = []
for k, v in obj.iteritems():
s = '%s: %s' % (json(k), json(v))
out.append(s)
return '{' + ', '.join(out) + '}'
elif util.safehasattr(obj, '__iter__'):
out = []
for i in obj:
out.append(json(i))
return '[' + ', '.join(out) + ']'
else:
raise TypeError('cannot encode type %s' % obj.__class__.__name__)
def _uescape(c):
if ord(c) < 0x80:
return c
else:
return '\\u%04x' % ord(c)
_escapes = [
('\\', '\\\\'), ('"', '\\"'), ('\t', '\\t'), ('\n', '\\n'),
('\r', '\\r'), ('\f', '\\f'), ('\b', '\\b'),
]
def jsonescape(s):
for k, v in _escapes:
s = s.replace(k, v)
return ''.join(_uescape(c) for c in s)
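# Illustrative output of the encoder above (doctest-style; a single-key
# dict is shown because dict iteration order is arbitrary here):
#
#   >>> json({'tags': ['tip', u'caf\xe9']})
#   '{"tags": ["tip", "caf\\u00e9"]}'
#
# _uescape() emits non-ASCII characters as \uXXXX escapes, so the result is
# always plain ASCII regardless of encoding.encoding.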
def localdate(text):
""":localdate: Date. Converts a date to local date."""
return (util.parsedate(text)[0], util.makedate()[1])
def nonempty(str):
""":nonempty: Any text. Returns '(none)' if the string is empty."""
return str or "(none)"
def obfuscate(text):
""":obfuscate: Any text. Returns the input text rendered as a sequence of
XML entities.
"""
text = unicode(text, encoding.encoding, 'replace')
return ''.join(['&#%d;' % ord(c) for c in text])
def permissions(flags):
if "l" in flags:
return "lrwxrwxrwx"
if "x" in flags:
return "-rwxr-xr-x"
return "-rw-r--r--"
def person(author):
""":person: Any text. Returns the name before an email address,
interpreting it as per RFC 5322.
>>> person('foo@bar')
'foo'
>>> person('Foo Bar <foo@bar>')
'Foo Bar'
>>> person('"Foo Bar" <foo@bar>')
'Foo Bar'
>>> person('"Foo \"buz\" Bar" <foo@bar>')
'Foo "buz" Bar'
>>> # The following are invalid, but do exist in real-life
...
>>> person('Foo "buz" Bar <foo@bar>')
'Foo "buz" Bar'
>>> person('"Foo Bar <foo@bar>')
'Foo Bar'
"""
if '@' not in author:
return author
f = author.find('<')
if f != -1:
return author[:f].strip(' "').replace('\\"', '"')
f = author.find('@')
return author[:f].replace('.', ' ')
def rfc3339date(text):
""":rfc3339date: Date. Returns a date using the Internet date format
specified in RFC 3339: "2009-08-18T13:00:13+02:00".
"""
return util.datestr(text, "%Y-%m-%dT%H:%M:%S%1:%2")
def rfc822date(text):
""":rfc822date: Date. Returns a date using the same format used in email
headers: "Tue, 18 Aug 2009 13:00:13 +0200".
"""
return util.datestr(text, "%a, %d %b %Y %H:%M:%S %1%2")
def short(text):
""":short: Changeset hash. Returns the short form of a changeset hash,
i.e. a 12 hexadecimal digit string.
"""
return text[:12]
def shortbisect(text):
""":shortbisect: Any text. Treats `text` as a bisection status, and
returns a single-character representing the status (G: good, B: bad,
S: skipped, U: untested, I: ignored). Returns single space if `text`
is not a valid bisection status.
"""
return hbisect.shortlabel(text) or ' '
def shortdate(text):
""":shortdate: Date. Returns a date like "2006-09-18"."""
return util.shortdate(text)
def stringescape(text):
return text.encode('string_escape')
def stringify(thing):
""":stringify: Any type. Turns the value into text by converting values into
text and concatenating them.
"""
if util.safehasattr(thing, '__iter__') and not isinstance(thing, str):
return "".join([stringify(t) for t in thing if t is not None])
return str(thing)
def strip(text):
""":strip: Any text. Strips all leading and trailing whitespace."""
return text.strip()
def stripdir(text):
""":stripdir: Treat the text as path and strip a directory level, if
possible. For example, "foo" and "foo/bar" becomes "foo".
"""
dir = os.path.dirname(text)
if dir == "":
return os.path.basename(text)
else:
return dir
def tabindent(text):
""":tabindent: Any text. Returns the text, with every line except the
first starting with a tab character.
"""
return indent(text, '\t')
def urlescape(text):
""":urlescape: Any text. Escapes all "special" characters. For example,
"foo bar" becomes "foo%20bar".
"""
return urllib.quote(text)
def userfilter(text):
""":user: Any text. Returns a short representation of a user name or email
address."""
return util.shortuser(text)
def emailuser(text):
""":emailuser: Any text. Returns the user portion of an email address."""
return util.emailuser(text)
def xmlescape(text):
text = (text
.replace('&', '&')
.replace('<', '<')
.replace('>', '>')
.replace('"', '"')
.replace("'", ''')) # ' invalid in HTML
return re.sub('[\x00-\x08\x0B\x0C\x0E-\x1F]', ' ', text)
filters = {
"addbreaks": addbreaks,
"age": age,
"basename": basename,
"date": datefilter,
"domain": domain,
"email": email,
"escape": escape,
"fill68": fill68,
"fill76": fill76,
"firstline": firstline,
"hex": hexfilter,
"hgdate": hgdate,
"isodate": isodate,
"isodatesec": isodatesec,
"json": json,
"jsonescape": jsonescape,
"localdate": localdate,
"nonempty": nonempty,
"obfuscate": obfuscate,
"permissions": permissions,
"person": person,
"rfc3339date": rfc3339date,
"rfc822date": rfc822date,
"short": short,
"shortbisect": shortbisect,
"shortdate": shortdate,
"stringescape": stringescape,
"stringify": stringify,
"strip": strip,
"stripdir": stripdir,
"tabindent": tabindent,
"urlescape": urlescape,
"user": userfilter,
"emailuser": emailuser,
"xmlescape": xmlescape,
}
def websub(text, websubtable):
""":websub: Any text. Only applies to hgweb. Applies the regular
expression replacements defined in the websub section.
"""
if websubtable:
for regexp, format in websubtable:
text = regexp.sub(format, text)
return text
# tell hggettext to extract docstrings from these functions:
i18nfunctions = filters.values()
| gpl-2.0 |
mxOBS/deb-pkg_trusty_chromium-browser | native_client/src/trusted/validator_ragel/check_trie.py | 6 | 1331 | #!/usr/bin/python
# Copyright (c) 2014 The Native Client Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Verifies that a trie is equal to the latest snapshotted one."""
import argparse
import os
import sys
# Add location of latest_trie module to sys.path:
RAGEL_DIR = os.path.dirname(os.path.abspath(__file__))
NACL_DIR = os.path.dirname(os.path.dirname(os.path.dirname(RAGEL_DIR)))
SNAPSHOTS_DIR = os.path.join(os.path.dirname(NACL_DIR), 'validator_snapshots')
if os.path.isdir(SNAPSHOTS_DIR):
sys.path.append(SNAPSHOTS_DIR)
else:
print "couldn't find: ", SNAPSHOTS_DIR
sys.exit(1)
import latest_trie
import trie
def ParseArgs():
parser = argparse.ArgumentParser(description='Compare to latest trie.')
parser.add_argument('trie', metavar='t', nargs=1,
help='path of the trie to check.')
parser.add_argument('--bitness', type=int, required=True,
choices=[32, 64])
return parser.parse_args()
def main():
args = ParseArgs()
golden = latest_trie.LatestRagelTriePath(SNAPSHOTS_DIR, args.bitness)
diff_list = [diff for diff in trie.DiffTrieFiles(args.trie[0], golden)]
if diff_list:
print 'tries differ: ', diff_list
sys.exit(1)
if __name__ == '__main__':
main()
| bsd-3-clause |
jank3/django | django/views/generic/base.py | 281 | 7690 | from __future__ import unicode_literals
import logging
from functools import update_wrapper
from django import http
from django.core.exceptions import ImproperlyConfigured
from django.core.urlresolvers import NoReverseMatch, reverse
from django.template.response import TemplateResponse
from django.utils import six
from django.utils.decorators import classonlymethod
logger = logging.getLogger('django.request')
class ContextMixin(object):
"""
A default context mixin that passes the keyword arguments received by
get_context_data as the template context.
"""
def get_context_data(self, **kwargs):
if 'view' not in kwargs:
kwargs['view'] = self
return kwargs
class View(object):
"""
Intentionally simple parent class for all views. Only implements
dispatch-by-method and simple sanity checking.
"""
http_method_names = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']
def __init__(self, **kwargs):
"""
Constructor. Called in the URLconf; can contain helpful extra
keyword arguments, and other things.
"""
# Go through keyword arguments, and either save their values to our
# instance, or raise an error.
for key, value in six.iteritems(kwargs):
setattr(self, key, value)
@classonlymethod
def as_view(cls, **initkwargs):
"""
Main entry point for a request-response process.
"""
for key in initkwargs:
if key in cls.http_method_names:
raise TypeError("You tried to pass in the %s method name as a "
"keyword argument to %s(). Don't do that."
% (key, cls.__name__))
if not hasattr(cls, key):
raise TypeError("%s() received an invalid keyword %r. as_view "
"only accepts arguments that are already "
"attributes of the class." % (cls.__name__, key))
def view(request, *args, **kwargs):
self = cls(**initkwargs)
if hasattr(self, 'get') and not hasattr(self, 'head'):
self.head = self.get
self.request = request
self.args = args
self.kwargs = kwargs
return self.dispatch(request, *args, **kwargs)
view.view_class = cls
view.view_initkwargs = initkwargs
# take name and docstring from class
update_wrapper(view, cls, updated=())
# and possible attributes set by decorators
# like csrf_exempt from dispatch
update_wrapper(view, cls.dispatch, assigned=())
return view
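    # Illustrative URLconf usage of as_view() (a sketch, not part of this
    # module):
    #
    #   from django.conf.urls import url
    #   from django.views.generic import TemplateView
    #
    #   urlpatterns = [
    #       url(r'^about/$', TemplateView.as_view(template_name='about.html')),
    #   ]
    #
    # Each request builds a fresh instance from these initkwargs, so no
    # per-request state leaks between requests.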
def dispatch(self, request, *args, **kwargs):
# Try to dispatch to the right method; if a method doesn't exist,
# defer to the error handler. Also defer to the error handler if the
# request method isn't on the approved list.
if request.method.lower() in self.http_method_names:
handler = getattr(self, request.method.lower(), self.http_method_not_allowed)
else:
handler = self.http_method_not_allowed
return handler(request, *args, **kwargs)
def http_method_not_allowed(self, request, *args, **kwargs):
logger.warning('Method Not Allowed (%s): %s', request.method, request.path,
extra={
'status_code': 405,
'request': request
}
)
return http.HttpResponseNotAllowed(self._allowed_methods())
def options(self, request, *args, **kwargs):
"""
Handles responding to requests for the OPTIONS HTTP verb.
"""
response = http.HttpResponse()
response['Allow'] = ', '.join(self._allowed_methods())
response['Content-Length'] = '0'
return response
def _allowed_methods(self):
return [m.upper() for m in self.http_method_names if hasattr(self, m)]
class TemplateResponseMixin(object):
"""
A mixin that can be used to render a template.
"""
template_name = None
template_engine = None
response_class = TemplateResponse
content_type = None
def render_to_response(self, context, **response_kwargs):
"""
Returns a response, using the `response_class` for this
view, with a template rendered with the given context.
If any keyword arguments are provided, they will be
passed to the constructor of the response class.
"""
response_kwargs.setdefault('content_type', self.content_type)
return self.response_class(
request=self.request,
template=self.get_template_names(),
context=context,
using=self.template_engine,
**response_kwargs
)
def get_template_names(self):
"""
Returns a list of template names to be used for the request. Must return
a list. May not be called if render_to_response is overridden.
"""
if self.template_name is None:
raise ImproperlyConfigured(
"TemplateResponseMixin requires either a definition of "
"'template_name' or an implementation of 'get_template_names()'")
else:
return [self.template_name]
class TemplateView(TemplateResponseMixin, ContextMixin, View):
"""
A view that renders a template. This view will also pass into the context
any keyword arguments passed by the url conf.
"""
def get(self, request, *args, **kwargs):
context = self.get_context_data(**kwargs)
return self.render_to_response(context)
class RedirectView(View):
"""
A view that provides a redirect on any GET request.
"""
permanent = False
url = None
pattern_name = None
query_string = False
def get_redirect_url(self, *args, **kwargs):
"""
Return the URL redirect to. Keyword arguments from the
URL pattern match generating the redirect request
are provided as kwargs to this method.
"""
if self.url:
url = self.url % kwargs
elif self.pattern_name:
try:
url = reverse(self.pattern_name, args=args, kwargs=kwargs)
except NoReverseMatch:
return None
else:
return None
args = self.request.META.get('QUERY_STRING', '')
if args and self.query_string:
url = "%s?%s" % (url, args)
return url
def get(self, request, *args, **kwargs):
url = self.get_redirect_url(*args, **kwargs)
if url:
if self.permanent:
return http.HttpResponsePermanentRedirect(url)
else:
return http.HttpResponseRedirect(url)
else:
logger.warning('Gone: %s', request.path,
extra={
'status_code': 410,
'request': request
})
return http.HttpResponseGone()
def head(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
def options(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
def delete(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
def put(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
def patch(self, request, *args, **kwargs):
return self.get(request, *args, **kwargs)
| bsd-3-clause |
GinnyN/towerofdimensions-django | django-social-auth/social_auth/db/base.py | 7 | 5724 | """Models mixins for Social Auth"""
import base64
import time
from datetime import datetime, timedelta
from openid.association import Association as OIDAssociation
from social_auth.utils import setting, utc
class UserSocialAuthMixin(object):
User = None
user = ''
provider = ''
def __unicode__(self):
"""Return associated user unicode representation"""
return u'%s - %s' % (unicode(self.user), self.provider.title())
@property
def tokens(self):
"""Return access_token stored in extra_data or None"""
# Make import here to avoid recursive imports :-/
from social_auth.backends import get_backends
backend = get_backends().get(self.provider)
if backend:
return backend.AUTH_BACKEND.tokens(self)
else:
return {}
def expiration_datetime(self):
"""Return provider session live seconds. Returns a timedelta ready to
use with session.set_expiry().
If provider returns a timestamp instead of session seconds to live, the
timedelta is inferred from current time (using UTC timezone). None is
returned if there's no value stored or it's invalid.
"""
name = setting('SOCIAL_AUTH_EXPIRATION', 'expires')
if self.extra_data and name in self.extra_data:
try:
expires = int(self.extra_data.get(name))
except (ValueError, TypeError):
return None
now = datetime.now()
now_timestamp = time.mktime(now.timetuple())
# Detect if expires is a timestamp
if expires > now_timestamp: # expires is a datetime
return datetime.utcfromtimestamp(expires) \
.replace(tzinfo=utc) - \
now.replace(tzinfo=utc)
else: # expires is a timedelta
return timedelta(seconds=expires)
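    # Worked example of the branch above (illustrative values): with
    # extra_data = {'expires': 3600}, 3600 is far below the current Unix
    # timestamp, so it is treated as seconds to live and
    # timedelta(seconds=3600) is returned; with a future Unix timestamp
    # such as 1893456000 it exceeds now_timestamp, so the timedelta is
    # computed against the current UTC time instead.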
@classmethod
def username_max_length(cls):
raise NotImplementedError('Implement in subclass')
@classmethod
def simple_user_exists(cls, *args, **kwargs):
"""
Return True/False if a User instance exists with the given arguments.
Arguments are directly passed to filter() manager method.
"""
return cls.User.objects.filter(*args, **kwargs).count() > 0
@classmethod
def create_user(cls, *args, **kwargs):
return cls.User.objects.create(*args, **kwargs)
@classmethod
def get_user(cls, pk):
try:
return cls.User.objects.get(pk=pk)
except cls.User.DoesNotExist:
return None
@classmethod
def get_user_by_email(cls, email):
return cls.User.objects.get(email=email)
@classmethod
def resolve_user_or_id(cls, user_or_id):
if isinstance(user_or_id, cls.User):
return user_or_id
return cls.User.objects.get(pk=user_or_id)
@classmethod
def get_social_auth(cls, provider, uid):
if not isinstance(uid, basestring):
uid = str(uid)
try:
return cls.objects.get(provider=provider, uid=uid)
except cls.DoesNotExist:
return None
@classmethod
def get_social_auth_for_user(cls, user):
return user.social_auth.all()
@classmethod
def create_social_auth(cls, user, uid, provider):
if not isinstance(uid, basestring):
uid = str(uid)
return cls.objects.create(user=user, uid=uid, provider=provider)
@classmethod
def store_association(cls, server_url, association):
from social_auth.models import Association
args = {'server_url': server_url, 'handle': association.handle}
try:
assoc = Association.objects.get(**args)
except Association.DoesNotExist:
assoc = Association(**args)
assoc.secret = base64.encodestring(association.secret)
assoc.issued = association.issued
assoc.lifetime = association.lifetime
assoc.assoc_type = association.assoc_type
assoc.save()
@classmethod
def get_oid_associations(cls, server_url, handle=None):
from social_auth.models import Association
args = {'server_url': server_url}
if handle is not None:
args['handle'] = handle
return sorted([
(assoc.id,
OIDAssociation(assoc.handle,
base64.decodestring(assoc.secret),
assoc.issued,
assoc.lifetime,
assoc.assoc_type))
for assoc in Association.objects.filter(**args)
], key=lambda x: x[1].issued, reverse=True)
@classmethod
def delete_associations(cls, ids_to_delete):
from social_auth.models import Association
Association.objects.filter(pk__in=ids_to_delete).delete()
@classmethod
def use_nonce(cls, server_url, timestamp, salt):
from social_auth.models import Nonce
return Nonce.objects.get_or_create(server_url=server_url,
timestamp=timestamp,
salt=salt)[1]
class NonceMixin(object):
"""One use numbers"""
server_url = ''
timestamp = 0
salt = ''
def __unicode__(self):
"""Unicode representation"""
return self.server_url
class AssociationMixin(object):
"""OpenId account association"""
server_url = ''
handle = ''
secret = ''
issued = 0
lifetime = 0
assoc_type = ''
def __unicode__(self):
"""Unicode representation"""
return '%s %s' % (self.handle, self.issued)
| bsd-3-clause |
asavah/xbmc | addons/metadata.generic.artists/lib/scraper.py | 24 | 21792 | # -*- coding: utf-8 -*-
import json
import socket
import sys
import time
import urllib.parse
import urllib.request
import _strptime # https://bugs.python.org/issue7980
from socket import timeout
from threading import Thread
from urllib.error import HTTPError, URLError
import xbmc
import xbmcaddon
import xbmcgui
import xbmcplugin
from .allmusic import allmusic_artistfind
from .allmusic import allmusic_artistdetails
from .allmusic import allmusic_artistalbums
from .discogs import discogs_artistfind
from .discogs import discogs_artistdetails
from .discogs import discogs_artistalbums
from .fanarttv import fanarttv_artistart
from .musicbrainz import musicbrainz_artistfind
from .musicbrainz import musicbrainz_artistdetails
from .nfo import nfo_geturl
from .theaudiodb import theaudiodb_artistdetails
from .theaudiodb import theaudiodb_artistalbums
from .wikipedia import wikipedia_artistdetails
from .utils import *
ADDONID = xbmcaddon.Addon().getAddonInfo('id')
ADDONNAME = xbmcaddon.Addon().getAddonInfo('name')
ADDONVERSION = xbmcaddon.Addon().getAddonInfo('version')
def log(txt):
message = '%s: %s' % (ADDONID, txt)
xbmc.log(msg=message, level=xbmc.LOGDEBUG)
def get_data(url, jsonformat, retry=True):
try:
if url.startswith('https://musicbrainz.org/'):
api_timeout('musicbrainztime')
elif url.startswith('https://api.discogs.com/'):
api_timeout('discogstime')
headers = {}
headers['User-Agent'] = '%s/%s ( http://kodi.tv )' % (ADDONNAME, ADDONVERSION)
req = urllib.request.Request(url, headers=headers)
resp = urllib.request.urlopen(req, timeout=5)
respdata = resp.read()
except URLError as e:
log('URLError: %s - %s' % (e.reason, url))
return
    except HTTPError as e:
        # urlopen raises HTTPError for 503/429 responses, so rate-limit
        # retries must happen here; the getcode() checks below only ever
        # see successful responses
        if e.code in (503, 429) and retry:
            log('api rate limit hit (%s), retrying: %s' % (e.code, url))
            xbmc.sleep(1000)
            return get_data(url, jsonformat, retry=False)
        log('HTTPError: %s - %s' % (e.reason, url))
        return
except socket.timeout as e:
log('socket: %s - %s' % (e, url))
return
if resp.getcode() == 503:
log('exceeding musicbrainz api limit')
if retry:
xbmc.sleep(1000)
            # return the retried result instead of discarding it
            return get_data(url, jsonformat, retry=False)
else:
return
elif resp.getcode() == 429:
log('exceeding discogs api limit')
if retry:
xbmc.sleep(1000)
            # return the retried result instead of discarding it
            return get_data(url, jsonformat, retry=False)
else:
return
if jsonformat:
respdata = json.loads(respdata)
return respdata
def api_timeout(scraper):
currenttime = round(time.time() * 1000)
previoustime = xbmcgui.Window(10000).getProperty(scraper)
if previoustime:
timeout = currenttime - int(previoustime)
if timeout < 1000:
xbmc.sleep(1000 - timeout)
xbmcgui.Window(10000).setProperty(scraper, str(round(time.time() * 1000)))
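# Illustrative throttling sketch (assumes an xbmc environment; this helper
# is not called by the addon). get_data() already invokes
# api_timeout('musicbrainztime') for musicbrainz URLs, so a plain loop is
# automatically limited to roughly one request per second:
def _throttled_lookups(urls):
    # each call sleeps just enough to keep >= 1s between requests
    return [get_data(url, True) for url in urls]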
class Scraper():
def __init__(self, action, key, artist, url, nfo, settings):
# parse path settings
self.parse_settings(settings)
        # this is just for backward compatibility with xml based scrapers https://github.com/xbmc/xbmc/pull/11632
if action == 'resolveid':
# return the result
result = self.resolve_mbid(key)
self.return_resolved(result)
# search for artist name matches
elif action == 'find':
# try musicbrainz first
result = self.find_artist(artist, 'musicbrainz')
if result:
self.return_search(result)
# fallback to discogs
else:
result = self.find_artist(artist, 'discogs')
if result:
self.return_search(result)
        # return info using IDs
elif action == 'getdetails':
details = {}
discography = {}
url = json.loads(url)
artist = url.get('artist')
mbartistid = url.get('mbartistid')
dcid = url.get('dcid')
threads = []
extrascrapers = []
discographyscrapers = []
# we have a musicbrainz id
if mbartistid:
scrapers = [[mbartistid, 'musicbrainz'], [mbartistid, 'theaudiodb'], [mbartistid, 'fanarttv']]
for item in scrapers:
thread = Thread(target = self.get_details, args = (item[0], item[1], details))
threads.append(thread)
thread.start()
                # theaudiodb discography
thread = Thread(target = self.get_discography, args = (mbartistid, 'theaudiodb', discography))
threads.append(thread)
thread.start()
# wait for musicbrainz to finish
threads[0].join()
# check if we have a result:
if 'musicbrainz' in details:
if not artist:
artist = details['musicbrainz']['artist']
# scrape allmusic if we have an url provided by musicbrainz
if 'allmusic' in details['musicbrainz']:
extrascrapers.append([{'url': details['musicbrainz']['allmusic']}, 'allmusic'])
                    # allmusic discography
discographyscrapers.append([{'url': details['musicbrainz']['allmusic']}, 'allmusic'])
# only scrape allmusic by artistname if explicitly enabled
elif self.inaccurate and artist:
extrascrapers.append([{'artist': artist}, 'allmusic'])
# scrape wikipedia if we have an url provided by musicbrainz
if 'wikipedia' in details['musicbrainz']:
extrascrapers.append([details['musicbrainz']['wikipedia'], 'wikipedia'])
elif 'wikidata' in details['musicbrainz']:
extrascrapers.append([details['musicbrainz']['wikidata'], 'wikidata'])
# scrape discogs if we have an url provided by musicbrainz
if 'discogs' in details['musicbrainz']:
extrascrapers.append([{'url': details['musicbrainz']['discogs']}, 'discogs'])
                    # discogs discography
discographyscrapers.append([{'url': details['musicbrainz']['discogs']}, 'discogs'])
# only scrape discogs by artistname if explicitly enabled
elif self.inaccurate and artist:
extrascrapers.append([{'artist': artist}, 'discogs'])
for item in extrascrapers:
thread = Thread(target = self.get_details, args = (item[0], item[1], details))
threads.append(thread)
thread.start()
# get allmusic / discogs discography if we have a url
for item in discographyscrapers:
thread = Thread(target = self.get_discography, args = (item[0], item[1], discography))
threads.append(thread)
thread.start()
# we have a discogs id
else:
thread = Thread(target = self.get_details, args = ({'url': dcid}, 'discogs', details))
threads.append(thread)
thread.start()
thread = Thread(target = self.get_discography, args = ({'url': dcid}, 'discogs', discography))
threads.append(thread)
thread.start()
if threads:
for thread in threads:
thread.join()
# merge discography items
for site, albumlist in discography.items():
if site in details:
details[site]['albums'] = albumlist
else:
details[site] = {}
details[site]['albums'] = albumlist
result = self.compile_results(details)
if result:
self.return_details(result)
elif action == 'NfoUrl':
# check if there is a musicbrainz url in the nfo file
mbartistid = nfo_geturl(nfo)
if mbartistid:
# return the result
result = self.resolve_mbid(mbartistid)
self.return_nfourl(result)
xbmcplugin.endOfDirectory(int(sys.argv[1]))
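# Hedged sketch of how Kodi drives this class, based on the convention for
# python music scrapers; the exact query-string keys are an assumption:
#
#     params = dict(urllib.parse.parse_qsl(sys.argv[2].lstrip('?')))
#     Scraper(params.get('action'), params.get('key'), params.get('artist'),
#             params.get('url'), params.get('nfo'), params.get('pathSettings'))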
def parse_settings(self, data):
settings = json.loads(data)
# note: path settings are taken from the db, they may not reflect the current settings.xml file
self.bio = settings['bio']
self.discog = settings['discog']
self.genre = settings['genre']
self.lang = settings['lang']
self.mood = settings['mood']
self.style = settings['style']
self.inaccurate = settings['inaccurate']
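# Illustrative settings payload for parse_settings() above (the field names
# match the attributes set here; the values are assumptions, since the real
# JSON comes from Kodi's stored scraper path settings):
#
#     '{"bio": "theaudiodb", "discog": "musicbrainz", "genre": "theaudiodb",
#       "lang": "EN", "mood": "theaudiodb", "style": "theaudiodb",
#       "inaccurate": false}'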
def resolve_mbid(self, mbartistid):
item = {}
item['artist'] = ''
item['mbartistid'] = mbartistid
return item
def find_artist(self, artist, site):
jsonformat = True
# musicbrainz
if site == 'musicbrainz':
url = MUSICBRAINZURL % (MUSICBRAINZSEARCH % urllib.parse.quote_plus(artist))
scraper = musicbrainz_artistfind
# discogs
elif site == 'discogs':
url = DISCOGSURL % (DISCOGSSEARCH % (urllib.parse.quote_plus(artist), DISCOGSKEY , DISCOGSSECRET))
scraper = discogs_artistfind
result = get_data(url, jsonformat)
if not result:
return
artistresults = scraper(result, artist)
return artistresults
def get_details(self, param, site, details, discography=None):
jsonformat = True
# theaudiodb
if site == 'theaudiodb':
url = AUDIODBURL % (AUDIODBKEY, AUDIODBDETAILS % param)
artistscraper = theaudiodb_artistdetails
# musicbrainz
elif site == 'musicbrainz':
url = MUSICBRAINZURL % (MUSICBRAINZDETAILS % param)
artistscraper = musicbrainz_artistdetails
# fanarttv
elif site == 'fanarttv':
url = FANARTVURL % (param, FANARTVKEY)
artistscraper = fanarttv_artistart
# discogs
elif site == 'discogs':
# search by artistname if we do not have a url
if 'artist' in param:
url = DISCOGSURL % (DISCOGSSEARCH % (urllib.parse.quote_plus(param['artist']), DISCOGSKEY , DISCOGSSECRET))
artistresult = get_data(url, jsonformat)
if artistresult:
artists = discogs_artistfind(artistresult, param['artist'])
if artists:
artistresult = sorted(artists, key=lambda k: k['relevance'], reverse=True)
param['url'] = artistresult[0]['dcid']
else:
return
else:
return
url = DISCOGSURL % (DISCOGSDETAILS % (param['url'], DISCOGSKEY, DISCOGSSECRET))
artistscraper = discogs_artistdetails
# wikipedia
elif site == 'wikipedia':
url = WIKIPEDIAURL % param
artistscraper = wikipedia_artistdetails
elif site == 'wikidata':
# resolve wikidata to wikipedia url
result = get_data(WIKIDATAURL % param, jsonformat)
try:
artist = result['entities'][param]['sitelinks']['enwiki']['url'].rsplit('/', 1)[1]
except Exception:
return
site = 'wikipedia'
url = WIKIPEDIAURL % artist
artistscraper = wikipedia_artistdetails
# allmusic
elif site == 'allmusic':
jsonformat = False
# search by artistname if we do not have a url
if 'artist' in param:
url = ALLMUSICURL % urllib.parse.quote_plus(param['artist'])
artistresult = get_data(url, jsonformat)
if artistresult:
artists = allmusic_artistfind(artistresult, param['artist'])
if artists:
param['url'] = artists[0]['url']
else:
return
else:
return
url = param['url']
artistscraper = allmusic_artistdetails
result = get_data(url, jsonformat)
if not result:
return
artistresults = artistscraper(result)
if not artistresults:
return
details[site] = artistresults
# get allmusic / discogs discography if we searched by artistname
if (site == 'discogs' or site == 'allmusic') and 'artist' in param:
albums = self.get_discography(param, site, {})
if albums:
details[site]['albums'] = albums[site]
return details
def get_discography(self, param, site, discography):
jsonformat = True
if site == 'theaudiodb':
# theaudiodb - discography
albumsurl = AUDIODBURL % (AUDIODBKEY, AUDIODBDISCOGRAPHY % param)
scraper = theaudiodb_artistalbums
elif site == 'discogs':
# discogs - discography
albumsurl = DISCOGSURL % (DISCOGSDISCOGRAPHY % (param['url'], DISCOGSKEY, DISCOGSSECRET))
scraper = discogs_artistalbums
elif site == 'allmusic':
# allmusic - discography
jsonformat = False
albumsurl = param['url'] + '/discography'
scraper = allmusic_artistalbums
albumdata = get_data(albumsurl, jsonformat)
if not albumdata:
return
albumresults = scraper(albumdata)
if not albumresults:
return
discography[site] = albumresults
return discography
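# Hedged illustration of the structure get_discography() fills in, assuming
# a successful theaudiodb lookup (album titles/years are made up):
#
#     discography == {'theaudiodb': [{'title': 'Some Album', 'year': '1999'},
#                                    {'title': 'Another Album', 'year': '2001'}]}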
def compile_results(self, details):
result = {}
thumbs = []
fanart = []
extras = []
# merge metadata results, start with the least accurate sources
if 'discogs' in details:
for k, v in details['discogs'].items():
if v:
result[k] = v
if k == 'thumb' and v:
thumbs.append(v)
if 'wikipedia' in details:
for k, v in details['wikipedia'].items():
if v:
result[k] = v
if 'allmusic' in details:
for k, v in details['allmusic'].items():
if v:
result[k] = v
if k == 'thumb' and v:
thumbs.append(v)
if 'theaudiodb' in details:
for k, v in details['theaudiodb'].items():
if v:
result[k] = v
if k == 'thumb' and v:
thumbs.append(v)
elif k == 'fanart' and v:
fanart.append(v)
if k == 'extras' and v:
extras.append(v)
if 'musicbrainz' in details:
for k, v in details['musicbrainz'].items():
if v:
result[k] = v
if 'fanarttv' in details:
for k, v in details['fanarttv'].items():
if v:
result[k] = v
if k == 'thumb' and v:
thumbs.append(v)
elif k == 'fanart' and v:
fanart.append(v)
if k == 'extras' and v:
extras.append(v)
# merge artwork from all scrapers
if result:
# artworks from most accurate sources first
thumbs.reverse()
thumbnails = []
fanart.reverse()
fanarts = []
# the order for extra art does not matter
extraart = []
for thumblist in thumbs:
for item in thumblist:
thumbnails.append(item)
for extralist in extras:
for item in extralist:
extraart.append(item)
# add the extra art to the end of the thumb list
if extraart:
thumbnails.extend(extraart)
for fanartlist in fanart:
for item in fanartlist:
fanarts.append(item)
# add the fanart to the end of the thumb list
if fanarts:
thumbnails.extend(fanarts)
if thumbnails:
result['thumb'] = thumbnails
data = self.user_prefs(details, result)
return data
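# Worked example of the merge precedence above: later, more accurate sources
# overwrite earlier ones (user_prefs below may still override the winner).
# Values are illustrative:
#
#     details = {'discogs': {'genre': 'Rock'}, 'theaudiodb': {'genre': 'Pop'}}
#     # after the merge step, result['genre'] == 'Pop'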
def user_prefs(self, details, result):
# user preferences
lang = 'biography' + self.lang
if self.bio == 'theaudiodb' and 'theaudiodb' in details:
if lang in details['theaudiodb']:
result['biography'] = details['theaudiodb'][lang]
elif 'biographyEN' in details['theaudiodb']:
result['biography'] = details['theaudiodb']['biographyEN']
elif (self.bio in details) and ('biography' in details[self.bio]):
result['biography'] = details[self.bio]['biography']
if (self.discog in details) and ('albums' in details[self.discog]):
result['albums'] = details[self.discog]['albums']
if (self.genre in details) and ('genre' in details[self.genre]):
result['genre'] = details[self.genre]['genre']
if (self.style in details) and ('styles' in details[self.style]):
result['styles'] = details[self.style]['styles']
if (self.mood in details) and ('moods' in details[self.mood]):
result['moods'] = details[self.mood]['moods']
return result
def return_search(self, data):
items = []
for item in data:
listitem = xbmcgui.ListItem(item['artist'], offscreen=True)
listitem.setArt({'thumb': item['thumb']})
listitem.setProperty('artist.genre', item['genre'])
listitem.setProperty('artist.born', item['born'])
listitem.setProperty('relevance', item['relevance'])
if 'type' in item:
listitem.setProperty('artist.type', item['type'])
if 'gender' in item:
listitem.setProperty('artist.gender', item['gender'])
if 'disambiguation' in item:
listitem.setProperty('artist.disambiguation', item['disambiguation'])
url = {'artist':item['artist']}
if 'mbartistid' in item:
url['mbartistid'] = item['mbartistid']
if 'dcid' in item:
url['dcid'] = item['dcid']
items.append((json.dumps(url), listitem, True))
if items:
xbmcplugin.addDirectoryItems(handle=int(sys.argv[1]), items=items)
def return_nfourl(self, item):
listitem = xbmcgui.ListItem(offscreen=True)
xbmcplugin.addDirectoryItem(handle=int(sys.argv[1]), url=json.dumps(item), listitem=listitem, isFolder=True)
def return_resolved(self, item):
listitem = xbmcgui.ListItem(path=json.dumps(item), offscreen=True)
xbmcplugin.setResolvedUrl(handle=int(sys.argv[1]), succeeded=True, listitem=listitem)
def return_details(self, item):
if 'artist' not in item:
return
listitem = xbmcgui.ListItem(item['artist'], offscreen=True)
if 'mbartistid' in item:
listitem.setProperty('artist.musicbrainzid', item['mbartistid'])
if 'genre' in item:
listitem.setProperty('artist.genre', item['genre'])
if 'biography' in item:
listitem.setProperty('artist.biography', item['biography'])
if 'gender' in item:
listitem.setProperty('artist.gender', item['gender'])
if 'styles' in item:
listitem.setProperty('artist.styles', item['styles'])
if 'moods' in item:
listitem.setProperty('artist.moods', item['moods'])
if 'instruments' in item:
listitem.setProperty('artist.instruments', item['instruments'])
if 'disambiguation' in item:
listitem.setProperty('artist.disambiguation', item['disambiguation'])
if 'type' in item:
listitem.setProperty('artist.type', item['type'])
if 'sortname' in item:
listitem.setProperty('artist.sortname', item['sortname'])
if 'active' in item:
listitem.setProperty('artist.years_active', item['active'])
if 'born' in item:
listitem.setProperty('artist.born', item['born'])
if 'formed' in item:
listitem.setProperty('artist.formed', item['formed'])
if 'died' in item:
listitem.setProperty('artist.died', item['died'])
if 'disbanded' in item:
listitem.setProperty('artist.disbanded', item['disbanded'])
if 'thumb' in item:
listitem.setProperty('artist.thumbs', str(len(item['thumb'])))
for count, thumb in enumerate(item['thumb']):
listitem.setProperty('artist.thumb%i.url' % (count + 1), thumb['image'])
listitem.setProperty('artist.thumb%i.preview' % (count + 1), thumb['preview'])
listitem.setProperty('artist.thumb%i.aspect' % (count + 1), thumb['aspect'])
if 'albums' in item:
listitem.setProperty('artist.albums', str(len(item['albums'])))
for count, album in enumerate(item['albums']):
listitem.setProperty('artist.album%i.title' % (count + 1), album['title'])
listitem.setProperty('artist.album%i.year' % (count + 1), album['year'])
if 'musicbrainzreleasegroupid' in album:
listitem.setProperty('artist.album%i.musicbrainzreleasegroupid' % (count + 1), album['musicbrainzreleasegroupid'])
xbmcplugin.setResolvedUrl(handle=int(sys.argv[1]), succeeded=True, listitem=listitem)
| gpl-2.0 |
Myasuka/scikit-learn | examples/svm/plot_svm_regression.py | 249 | 1451 | """
===================================================================
Support Vector Regression (SVR) using linear and non-linear kernels
===================================================================
Toy example of 1D regression using linear, polynomial and RBF kernels.
"""
print(__doc__)
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
###############################################################################
# Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
###############################################################################
# Add noise to targets
y[::5] += 3 * (0.5 - np.random.rand(8))
###############################################################################
# Fit regression model
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
###############################################################################
# look at the results
plt.scatter(X, y, c='k', label='data')
# note: plt.hold() was deprecated and removed from matplotlib; axes retain artists by default
plt.plot(X, y_rbf, c='g', label='RBF model')
plt.plot(X, y_lin, c='r', label='Linear model')
plt.plot(X, y_poly, c='b', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
| bsd-3-clause |
cancan101/tensorflow | tensorflow/examples/image_retraining/retrain_test.py | 53 | 3941 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=g-bad-import-order,unused-import
"""Tests the graph freezing tool."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.image_retraining import retrain
from tensorflow.python.framework import test_util
class ImageRetrainingTest(test_util.TensorFlowTestCase):
def dummyImageLists(self):
return {'label_one': {'dir': 'somedir', 'training': ['image_one.jpg',
'image_two.jpg'],
'testing': ['image_three.jpg', 'image_four.jpg'],
'validation': ['image_five.jpg', 'image_six.jpg']},
'label_two': {'dir': 'otherdir', 'training': ['image_one.jpg',
'image_two.jpg'],
'testing': ['image_three.jpg', 'image_four.jpg'],
'validation': ['image_five.jpg', 'image_six.jpg']}}
def testGetImagePath(self):
image_lists = self.dummyImageLists()
self.assertEqual('image_dir/somedir/image_one.jpg', retrain.get_image_path(
image_lists, 'label_one', 0, 'image_dir', 'training'))
self.assertEqual('image_dir/otherdir/image_four.jpg',
retrain.get_image_path(image_lists, 'label_two', 1,
'image_dir', 'testing'))
def testGetBottleneckPath(self):
image_lists = self.dummyImageLists()
self.assertEqual('bottleneck_dir/somedir/image_five.jpg.txt',
retrain.get_bottleneck_path(
image_lists, 'label_one', 0, 'bottleneck_dir',
'validation'))
def testShouldDistortImage(self):
self.assertEqual(False, retrain.should_distort_images(False, 0, 0, 0))
self.assertEqual(True, retrain.should_distort_images(True, 0, 0, 0))
self.assertEqual(True, retrain.should_distort_images(False, 10, 0, 0))
self.assertEqual(True, retrain.should_distort_images(False, 0, 1, 0))
self.assertEqual(True, retrain.should_distort_images(False, 0, 0, 50))
def testAddInputDistortions(self):
with tf.Graph().as_default():
with tf.Session() as sess:
retrain.add_input_distortions(True, 10, 10, 10)
self.assertIsNotNone(sess.graph.get_tensor_by_name('DistortJPGInput:0'))
self.assertIsNotNone(sess.graph.get_tensor_by_name('DistortResult:0'))
@tf.test.mock.patch.object(retrain, 'FLAGS', learning_rate=0.01)
def testAddFinalTrainingOps(self, flags_mock):
with tf.Graph().as_default():
with tf.Session() as sess:
bottleneck = tf.placeholder(
tf.float32, [1, retrain.BOTTLENECK_TENSOR_SIZE],
name=retrain.BOTTLENECK_TENSOR_NAME.split(':')[0])
retrain.add_final_training_ops(5, 'final', bottleneck)
self.assertIsNotNone(sess.graph.get_tensor_by_name('final:0'))
def testAddEvaluationStep(self):
with tf.Graph().as_default():
final = tf.placeholder(tf.float32, [1], name='final')
gt = tf.placeholder(tf.float32, [1], name='gt')
self.assertIsNotNone(retrain.add_evaluation_step(final, gt))
if __name__ == '__main__':
tf.test.main()
| apache-2.0 |
warp1337/xdemo | xdemo/launcher/syslaunch.py | 1 | 11778 | """
This file is part of XDEMO
Copyright(c) <Florian Lier>
https://github.com/warp1337/xdemo
This file may be licensed under the terms of the
GNU Lesser General Public License Version 3 (the ``LGPL''),
or (at your option) any later version.
Software distributed under the License is distributed
on an ``AS IS'' basis, WITHOUT WARRANTY OF ANY KIND, either
express or implied. See the LGPL for the specific language
governing rights and limitations.
You should have received a copy of the LGPL along with this
program. If not, go to http://www.gnu.org/licenses/lgpl.html
or write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
The development of this software was supported by the
Excellence Cluster EXC 277 Cognitive Interaction Technology.
The Excellence Cluster EXC 277 is a grant of the Deutsche
Forschungsgemeinschaft (DFG) in the context of the German
Excellence Initiative.
Authors: Florian Lier
<flier>@techfak.uni-bielefeld.de
"""
# STD
import sys
import time
from pprint import pprint
from threading import Lock
class SystemLauncherClient:
def __init__(self, _system_instance, _screen_pool, _log, _debug_mode):
self.log = _log
self.lock = Lock()
self.debug_mode = _debug_mode
self.screen_pool = _screen_pool
self.strict_policy_missed = False
self.exit_allowed_missed = False
self.all_screen_session_pids = None
self.hierarchical_session_list = []
self.hierarchical_component_list = []
self.system_instance = _system_instance
self.base_path = self.system_instance.base_path
self.local_hostname = _system_instance.local_hostname
self.local_platform = _system_instance.local_platform
self.runtimeenvironment = _system_instance.runtimeenvironment
def dict_print(self, _dict):
pprint(_dict)
def stop_all_initcriteria(self):
for item in self.hierarchical_component_list:
values = item.values()
for component in values:
for obs in component.initcriteria:
obs.keep_running = False
def inner_mk_session(self, _component):
component_host = _component.executionhost
if component_host == self.local_hostname:
self.lock.acquire()
# DO NOT CHANGE THE NAMING PATTERN OR ALL HELL BREAKS LOOSE
# SEE: screen.id and how it is constructed in the system class
component_name = _component.name
screen_name = _component.screen_id
exec_script = _component.execscript
info_dict = { "component_name": component_name,
"exec_script": exec_script,
"screen_session_name": screen_name,
"osinfo": {"children": [], "screenpid": None},
"component_status": "unknown",
"screen_status": "init"
}
new_screen_session = self.screen_pool.new_screen_session(screen_name, self.runtimeenvironment, info_dict)
# Add some time to spawn the session, 50ms
time.sleep(0.05)
result = self.screen_pool.check_exists_in_pool(screen_name)
if result is not None:
source_exec_script_cmd = ". " + exec_script
new_screen_session.send_commands(source_exec_script_cmd)
else:
self.log.error("[launcher] '%s' could not be initialized THIS IS FATAL!" % screen_name)
self.screen_pool.kill_all_screen_sessions()
sys.exit(1)
self.lock.release()
def mk_screen_sessions(self):
for item in self.system_instance.flat_execution_list:
if 'component' in item.keys():
component = item['component']
self.inner_mk_session(component)
if 'group' in item.keys():
for component in item['group'].flat_execution_list:
self.inner_mk_session(component)
# Start components
self.deploy_commands()
# Activate continuous monitoring
if self.strict_policy_missed is False and self.exit_allowed_missed is False:
self.screen_pool.start()
else:
pass
def inner_deploy(self, _component, _executed_list_components, _type):
# Name is actually derived from the path: component_something.yaml
component_name = _component.name
cmd = "start"
platform = _component.platform
host = _component.executionhost
final_cmd = self.construct_command(host, platform, cmd, component_name, True, True)
if final_cmd is None:
return
else:
screen_name = _component.screen_id
# Check if it has already been started on this host
if screen_name not in _executed_list_components.keys():
# Get the global lock
self.lock.acquire()
# Now clean the log and deploy the command in the screen session
self.screen_pool.clean_log(screen_name)
result = self.screen_pool.send_cmd(screen_name, final_cmd, _type, component_name)
if result is None:
self.log.error("[launcher] '%s' command could not be sent" % component_name)
self.lock.release()
return
# Add this component to the hierarchical list of started components
informed_item = {screen_name: _component}
self.hierarchical_component_list.append(informed_item)
_executed_list_components[screen_name] = "started"
# Give the process some time to spawn: 50ms
time.sleep(0.05)
self.log.debug("├╼[os] '%s' gathering pids" % component_name)
# Get the status of the send_cmd()
# status > 0 : everything is okay, session has 1 or more children [#1 is always an init bash/sh]
# status == 0 : command exited, only the init bash/sh is present
# status == -1 : screen session not found in session list, this is bad
# status == -2 : no process found for screen session, this is the worst case
status = self.screen_pool.get_session_os_info(screen_name)
self.log.debug(" ├╼[os] '%s' children %d" % (component_name, status))
if status > 0:
self.log.debug("├╼[os] '%s' is running" % component_name)
if status == 0:
self.log.obswar("├╼[os] '%s' exited" % component_name)
if _component.exitallowed is False:
self.log.obserr("└╼[os] '%s' exit not allowed in config" % component_name)
self.exit_allowed_missed = True
self.lock.release()
return
if status == -1:
self.log.debug("├╼[os] '%s' screen not in session list THIS IS BAD!" % component_name)
if self.debug_mode:
self.log.debug("[debugger] press RETURN to go on...")
raw_input('')
if status == -2:
self.log.debug("├╼[os] '%s' no process for screen session found THIS IS REALLY BAD!" % component_name)
if self.debug_mode:
self.log.debug("[debugger] press RETURN to go on...")
raw_input('')
# Logfile has been created, we can safely start init criteria threads
for init_criteria in _component.initcriteria:
init_criteria.start()
blocking_initcriteria = len(_component.initcriteria)
if blocking_initcriteria > 0:
self.log.info("├╼[initcriteria] %d pending" % blocking_initcriteria)
else:
self.log.info("├╼[initcriteria] no criteria defined")
while blocking_initcriteria > 0:
for initcriteria in _component.initcriteria:
if initcriteria.is_alive():
time.sleep(0.005)
continue
else:
if initcriteria.ok is True:
self.log.obsok("├╼[initcriteria] found '%s'" % initcriteria.criteria)
else:
self.log.obswar("├╼[initcriteria] missing '%s'" % initcriteria.criteria)
if _component.initpolicy == 'strict':
self.log.obserr("└╼[initcriteria] strict init policy enabled in config for '%s'" % component_name)
self.strict_policy_missed = True
self.lock.release()
return
blocking_initcriteria -= 1
_component.initcriteria.remove(initcriteria)
self.log.debug("├╼[initcriteria] waiting for %d criteria" % blocking_initcriteria)
# Release the global lock
self.lock.release()
self.log.info("└╼[initcriteria] done")
else:
self.log.debug("[launcher] skipping '%s' on %s --> duplicate in components/groups?" % (component_name, self.local_hostname))
def deploy_commands(self):
executed_list_components = {}
executed_list_groups = {}
for item in self.system_instance.flat_execution_list:
if self.strict_policy_missed is False and self.exit_allowed_missed is False:
if 'component' in item.keys():
_type = 'component'
component = item['component']
self.inner_deploy(component, executed_list_components, _type)
if 'group' in item.keys():
_type = 'group'
group_name = item['group'].name
if group_name not in executed_list_groups.keys():
executed_list_groups[group_name] = "started"
self.log.info("┋[group] descending into '%s'" % item['group'].name)
for component in item['group'].flat_execution_list:
self.inner_deploy(component, executed_list_components, _type)
else:
self.log.debug(
"[launcher] skipping '%s' on %s --> duplicate in components/groups ?" % (item['group'].name, self.local_hostname))
else:
pass
def construct_command(self, _host, _platform, _cmd, _component, _requires_x=None, _requires_remote_x=None):
if _platform != self.local_platform:
self.log.debug("[launcher] skipping '%s' what, running %s component on %s?!?" % (_component,
_platform,
self.local_platform))
return None
if _host == self.local_hostname:
return _cmd.strip()
else:
self.log.debug("[launcher] skipping %s | host %s | platform %s" % (_component,
_host,
_platform))
return None
| gpl-3.0 |
philanthropy-u/edx-platform | lms/djangoapps/branding/models.py | 23 | 1688 | """
Model used by Video module for Branding configuration.
Includes:
BrandingInfoConfig: A ConfigurationModel for managing how Video Module will
use Branding.
"""
import json
from config_models.models import ConfigurationModel
from django.core.exceptions import ValidationError
from django.db.models import TextField
class BrandingInfoConfig(ConfigurationModel):
"""
Configuration for Branding.
Example of configuration that must be stored:
{
"CN": {
"url": "http://www.xuetangx.com",
"logo_src": "http://www.xuetangx.com/static/images/logo.png",
"logo_tag": "Video hosted by XuetangX.com"
}
}
"""
class Meta(ConfigurationModel.Meta):
app_label = "branding"
configuration = TextField(
help_text="JSON data of Configuration for Video Branding."
)
def clean(self):
"""
Validates configuration text field.
"""
try:
json.loads(self.configuration)
except ValueError:
raise ValidationError('Must be valid JSON string.')
@classmethod
def get_config(cls):
"""
Get the Video Branding Configuration.
"""
info = cls.current()
return json.loads(info.configuration) if info.enabled else {}
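# Hedged usage sketch: callers read the per-country branding mapping like
# this (the "CN" key mirrors the example in the class docstring):
#
#     config = BrandingInfoConfig.get_config()
#     logo = config.get("CN", {}).get("logo_src")  # None when disabled or unset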
class BrandingApiConfig(ConfigurationModel):
"""Configure Branding api's
Enable or disable api's functionality.
When this flag is disabled, the api will return 404.
When the flag is enabled, the api will returns the valid reponse.
"""
class Meta(ConfigurationModel.Meta):
app_label = "branding"
| agpl-3.0 |
gspeedtech/Audacity-2 | lib-src/lv2/lv2/plugins/eg-fifths.lv2/waflib/Tools/c_preproc.py | 181 | 16705 | #! /usr/bin/env python
# encoding: utf-8
# WARNING! Do not edit! http://waf.googlecode.com/git/docs/wafbook/single.html#_obtaining_the_waf_file
import re,string,traceback
from waflib import Logs,Utils,Errors
from waflib.Logs import debug,error
class PreprocError(Errors.WafError):
pass
POPFILE='-'
recursion_limit=150
go_absolute=False
standard_includes=['/usr/include']
if Utils.is_win32:
standard_includes=[]
use_trigraphs=0
strict_quotes=0
g_optrans={'not':'!','and':'&&','bitand':'&','and_eq':'&=','or':'||','bitor':'|','or_eq':'|=','xor':'^','xor_eq':'^=','compl':'~',}
re_lines=re.compile('^[ \t]*(#|%:)[ \t]*(ifdef|ifndef|if|else|elif|endif|include|import|define|undef|pragma)[ \t]*(.*)\r*$',re.IGNORECASE|re.MULTILINE)
re_mac=re.compile("^[a-zA-Z_]\w*")
re_fun=re.compile('^[a-zA-Z_][a-zA-Z0-9_]*[(]')
re_pragma_once=re.compile('^\s*once\s*',re.IGNORECASE)
re_nl=re.compile('\\\\\r*\n',re.MULTILINE)
re_cpp=re.compile(r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',re.DOTALL|re.MULTILINE)
trig_def=[('??'+a,b)for a,b in zip("=-/!'()<>",r'#~\|^[]{}')]
chr_esc={'0':0,'a':7,'b':8,'t':9,'n':10,'f':11,'v':12,'r':13,'\\':92,"'":39}
NUM='i'
OP='O'
IDENT='T'
STR='s'
CHAR='c'
tok_types=[NUM,STR,IDENT,OP]
exp_types=[r"""0[xX](?P<hex>[a-fA-F0-9]+)(?P<qual1>[uUlL]*)|L*?'(?P<char>(\\.|[^\\'])+)'|(?P<n1>\d+)[Ee](?P<exp0>[+-]*?\d+)(?P<float0>[fFlL]*)|(?P<n2>\d*\.\d+)([Ee](?P<exp1>[+-]*?\d+))?(?P<float1>[fFlL]*)|(?P<n4>\d+\.\d*)([Ee](?P<exp2>[+-]*?\d+))?(?P<float2>[fFlL]*)|(?P<oct>0*)(?P<n0>\d+)(?P<qual2>[uUlL]*)""",r'L?"([^"\\]|\\.)*"',r'[a-zA-Z_]\w*',r'%:%:|<<=|>>=|\.\.\.|<<|<%|<:|<=|>>|>=|\+\+|\+=|--|->|-=|\*=|/=|%:|%=|%>|==|&&|&=|\|\||\|=|\^=|:>|!=|##|[\(\)\{\}\[\]<>\?\|\^\*\+&=:!#;,%/\-\?\~\.]',]
re_clexer=re.compile('|'.join(["(?P<%s>%s)"%(name,part)for name,part in zip(tok_types,exp_types)]),re.M)
accepted='a'
ignored='i'
undefined='u'
skipped='s'
def repl(m):
s=m.group(0)
if s.startswith('/'):
return' '
return s
def filter_comments(filename):
code=Utils.readf(filename)
if use_trigraphs:
for(a,b)in trig_def:code=code.split(a).join(b)
code=re_nl.sub('',code)
code=re_cpp.sub(repl,code)
return[(m.group(2),m.group(3))for m in re.finditer(re_lines,code)]
prec={}
ops=['* / %','+ -','<< >>','< <= >= >','== !=','& | ^','&& ||',',']
for x in range(len(ops)):
syms=ops[x]
for u in syms.split():
prec[u]=x
def trimquotes(s):
if not s:return''
s=s.rstrip()
if s[0]=="'"and s[-1]=="'":return s[1:-1]
return s
def reduce_nums(val_1,val_2,val_op):
try:a=0+val_1
except TypeError:a=int(val_1)
try:b=0+val_2
except TypeError:b=int(val_2)
d=val_op
if d=='%':c=a%b
elif d=='+':c=a+b
elif d=='-':c=a-b
elif d=='*':c=a*b
elif d=='/':c=a/b
elif d=='^':c=a^b
elif d=='|':c=a|b
elif d=='||':c=int(a or b)
elif d=='&':c=a&b
elif d=='&&':c=int(a and b)
elif d=='==':c=int(a==b)
elif d=='!=':c=int(a!=b)
elif d=='<=':c=int(a<=b)
elif d=='<':c=int(a<b)
elif d=='>':c=int(a>b)
elif d=='>=':c=int(a>=b)
elif d=='^':c=int(a^b)
elif d=='<<':c=a<<b
elif d=='>>':c=a>>b
else:c=0
return c
def get_num(lst):
if not lst:raise PreprocError("empty list for get_num")
(p,v)=lst[0]
if p==OP:
if v=='(':
count_par=1
i=1
while i<len(lst):
(p,v)=lst[i]
if p==OP:
if v==')':
count_par-=1
if count_par==0:
break
elif v=='(':
count_par+=1
i+=1
else:
raise PreprocError("rparen expected %r"%lst)
(num,_)=get_term(lst[1:i])
return(num,lst[i+1:])
elif v=='+':
return get_num(lst[1:])
elif v=='-':
num,lst=get_num(lst[1:])
return(reduce_nums('-1',num,'*'),lst)
elif v=='!':
num,lst=get_num(lst[1:])
return(int(not int(num)),lst)
elif v=='~':
num,lst=get_num(lst[1:])
return(~int(num),lst)
else:
raise PreprocError("Invalid op token %r for get_num"%lst)
elif p==NUM:
return v,lst[1:]
elif p==IDENT:
return 0,lst[1:]
else:
raise PreprocError("Invalid token %r for get_num"%lst)
def get_term(lst):
if not lst:raise PreprocError("empty list for get_term")
num,lst=get_num(lst)
if not lst:
return(num,[])
(p,v)=lst[0]
if p==OP:
if v==',':
return get_term(lst[1:])
elif v=='?':
count_par=0
i=1
while i<len(lst):
(p,v)=lst[i]
if p==OP:
if v==')':
count_par-=1
elif v=='(':
count_par+=1
elif v==':':
if count_par==0:
break
i+=1
else:
raise PreprocError("rparen expected %r"%lst)
if int(num):
return get_term(lst[1:i])
else:
return get_term(lst[i+1:])
else:
num2,lst=get_num(lst[1:])
if not lst:
num2=reduce_nums(num,num2,v)
return get_term([(NUM,num2)]+lst)
p2,v2=lst[0]
if p2!=OP:
raise PreprocError("op expected %r"%lst)
if prec[v2]>=prec[v]:
num2=reduce_nums(num,num2,v)
return get_term([(NUM,num2)]+lst)
else:
num3,lst=get_num(lst[1:])
num3=reduce_nums(num2,num3,v2)
return get_term([(NUM,num),(p,v),(NUM,num3)]+lst)
raise PreprocError("cannot reduce %r"%lst)
def reduce_eval(lst):
num,lst=get_term(lst)
return(NUM,num)
def stringize(lst):
lst=[str(v2)for(p2,v2)in lst]
return"".join(lst)
def paste_tokens(t1,t2):
p1=None
if t1[0]==OP and t2[0]==OP:
p1=OP
elif t1[0]==IDENT and(t2[0]==IDENT or t2[0]==NUM):
p1=IDENT
elif t1[0]==NUM and t2[0]==NUM:
p1=NUM
if not p1:
raise PreprocError('tokens do not make a valid paste %r and %r'%(t1,t2))
return(p1,t1[1]+t2[1])
def reduce_tokens(lst,defs,ban=[]):
i=0
while i<len(lst):
(p,v)=lst[i]
if p==IDENT and v=="defined":
del lst[i]
if i<len(lst):
(p2,v2)=lst[i]
if p2==IDENT:
if v2 in defs:
lst[i]=(NUM,1)
else:
lst[i]=(NUM,0)
elif p2==OP and v2=='(':
del lst[i]
(p2,v2)=lst[i]
del lst[i]
if v2 in defs:
lst[i]=(NUM,1)
else:
lst[i]=(NUM,0)
else:
raise PreprocError("Invalid define expression %r"%lst)
elif p==IDENT and v in defs:
if isinstance(defs[v],str):
a,b=extract_macro(defs[v])
defs[v]=b
macro_def=defs[v]
to_add=macro_def[1]
if isinstance(macro_def[0],list):
del lst[i]
accu=to_add[:]
reduce_tokens(accu,defs,ban+[v])
for x in range(len(accu)):
lst.insert(i,accu[x])
i+=1
else:
args=[]
del lst[i]
if i>=len(lst):
raise PreprocError("expected '(' after %r (got nothing)"%v)
(p2,v2)=lst[i]
if p2!=OP or v2!='(':
raise PreprocError("expected '(' after %r"%v)
del lst[i]
one_param=[]
count_paren=0
while i<len(lst):
p2,v2=lst[i]
del lst[i]
if p2==OP and count_paren==0:
if v2=='(':
one_param.append((p2,v2))
count_paren+=1
elif v2==')':
if one_param:args.append(one_param)
break
elif v2==',':
if not one_param:raise PreprocError("empty param in funcall %s"%v)
args.append(one_param)
one_param=[]
else:
one_param.append((p2,v2))
else:
one_param.append((p2,v2))
if v2=='(':count_paren+=1
elif v2==')':count_paren-=1
else:
raise PreprocError('malformed macro')
accu=[]
arg_table=macro_def[0]
j=0
while j<len(to_add):
(p2,v2)=to_add[j]
if p2==OP and v2=='#':
if j+1<len(to_add)and to_add[j+1][0]==IDENT and to_add[j+1][1]in arg_table:
toks=args[arg_table[to_add[j+1][1]]]
accu.append((STR,stringize(toks)))
j+=1
else:
accu.append((p2,v2))
elif p2==OP and v2=='##':
if accu and j+1<len(to_add):
t1=accu[-1]
if to_add[j+1][0]==IDENT and to_add[j+1][1]in arg_table:
toks=args[arg_table[to_add[j+1][1]]]
if toks:
accu[-1]=paste_tokens(t1,toks[0])
accu.extend(toks[1:])
else:
accu.append((p2,v2))
accu.extend(toks)
elif to_add[j+1][0]==IDENT and to_add[j+1][1]=='__VA_ARGS__':
va_toks=[]
st=len(macro_def[0])
pt=len(args)
for x in args[pt-st+1:]:
va_toks.extend(x)
va_toks.append((OP,','))
if va_toks:va_toks.pop()
if len(accu)>1:
(p3,v3)=accu[-1]
(p4,v4)=accu[-2]
if v3=='##':
accu.pop()
if v4==','and pt<st:
accu.pop()
accu+=va_toks
else:
accu[-1]=paste_tokens(t1,to_add[j+1])
j+=1
else:
accu.append((p2,v2))
elif p2==IDENT and v2 in arg_table:
toks=args[arg_table[v2]]
reduce_tokens(toks,defs,ban+[v])
accu.extend(toks)
else:
accu.append((p2,v2))
j+=1
reduce_tokens(accu,defs,ban+[v])
for x in range(len(accu)-1,-1,-1):
lst.insert(i,accu[x])
i+=1
def eval_macro(lst,defs):
reduce_tokens(lst,defs,[])
if not lst:raise PreprocError("missing tokens to evaluate")
(p,v)=reduce_eval(lst)
return int(v)!=0
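# Hedged example of driving the evaluator directly; the define-line format
# ('NAME value', as stored by the parser's define handling) is an assumption:
#
#     defs = {'FOO': 'FOO 2'}
#     eval_macro(tokenize('defined FOO && FOO > 1'), defs)  # -> True
#     eval_macro(tokenize('BAR == 1'), defs)                # -> False (BAR -> 0)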
def extract_macro(txt):
t=tokenize(txt)
if re_fun.search(txt):
p,name=t[0]
p,v=t[1]
if p!=OP:raise PreprocError("expected open parenthesis")
i=1
pindex=0
params={}
prev='('
while 1:
i+=1
p,v=t[i]
if prev=='(':
if p==IDENT:
params[v]=pindex
pindex+=1
prev=p
elif p==OP and v==')':
break
else:
raise PreprocError("unexpected token (3)")
elif prev==IDENT:
if p==OP and v==',':
prev=v
elif p==OP and v==')':
break
else:
raise PreprocError("comma or ... expected")
elif prev==',':
if p==IDENT:
params[v]=pindex
pindex+=1
prev=p
elif p==OP and v=='...':
raise PreprocError("not implemented (1)")
else:
raise PreprocError("comma or ... expected (2)")
elif prev=='...':
raise PreprocError("not implemented (2)")
else:
raise PreprocError("unexpected else")
return(name,[params,t[i+1:]])
else:
(p,v)=t[0]
if len(t)>1:
return(v,[[],t[1:]])
else:
return(v,[[],[('T','')]])
re_include=re.compile('^\s*(<(?P<a>.*)>|"(?P<b>.*)")')
def extract_include(txt,defs):
m=re_include.search(txt)
if m:
if m.group('a'):return'<',m.group('a')
if m.group('b'):return'"',m.group('b')
toks=tokenize(txt)
reduce_tokens(toks,defs,['waf_include'])
if not toks:
raise PreprocError("could not parse include %s"%txt)
if len(toks)==1:
if toks[0][0]==STR:
return'"',toks[0][1]
else:
if toks[0][1]=='<'and toks[-1][1]=='>':
return '<', stringize(toks).lstrip('<').rstrip('>')
raise PreprocError("could not parse include %s."%txt)
def parse_char(txt):
if not txt:raise PreprocError("attempted to parse a null char")
if txt[0]!='\\':
return ord(txt)
c=txt[1]
if c=='x':
if len(txt)==4 and txt[3]in string.hexdigits:return int(txt[2:],16)
return int(txt[2:],16)
elif c.isdigit():
if c=='0'and len(txt)==2:return 0
for i in 3,2,1:
if len(txt)>i and txt[1:1+i].isdigit():
return(1+i,int(txt[1:1+i],8))
else:
try:return chr_esc[c]
except KeyError:raise PreprocError("could not parse char literal '%s'"%txt)
def tokenize(s):
return tokenize_private(s)[:]
@Utils.run_once
def tokenize_private(s):
ret=[]
for match in re_clexer.finditer(s):
m=match.group
for name in tok_types:
v=m(name)
if v:
if name==IDENT:
try:v=g_optrans[v];name=OP
except KeyError:
if v.lower()=="true":
v=1
name=NUM
elif v.lower()=="false":
v=0
name=NUM
elif name==NUM:
if m('oct'):v=int(v,8)
elif m('hex'):v=int(m('hex'),16)
elif m('n0'):v=m('n0')
else:
v=m('char')
if v:v=parse_char(v)
else:v=m('n2')or m('n4')
elif name==OP:
if v=='%:':v='#'
elif v=='%:%:':v='##'
elif name==STR:
v=v[1:-1]
ret.append((name,v))
break
return ret
@Utils.run_once
def define_name(line):
return re_mac.match(line).group(0)
class c_parser(object):
def __init__(self,nodepaths=None,defines=None):
self.lines=[]
if defines is None:
self.defs={}
else:
self.defs=dict(defines)
self.state=[]
self.count_files=0
self.currentnode_stack=[]
self.nodepaths=nodepaths or[]
self.nodes=[]
self.names=[]
self.curfile=''
self.ban_includes=set([])
def cached_find_resource(self,node,filename):
try:
nd=node.ctx.cache_nd
except AttributeError:
nd=node.ctx.cache_nd={}
tup=(node,filename)
try:
return nd[tup]
except KeyError:
ret=node.find_resource(filename)
if ret:
if getattr(ret,'children',None):
ret=None
elif ret.is_child_of(node.ctx.bldnode):
tmp=node.ctx.srcnode.search_node(ret.path_from(node.ctx.bldnode))
if tmp and getattr(tmp,'children',None):
ret=None
nd[tup]=ret
return ret
def tryfind(self,filename):
self.curfile=filename
found=self.cached_find_resource(self.currentnode_stack[-1],filename)
for n in self.nodepaths:
if found:
break
found=self.cached_find_resource(n,filename)
if found and not found in self.ban_includes:
self.nodes.append(found)
if filename[-4:]!='.moc':
self.addlines(found)
else:
if not filename in self.names:
self.names.append(filename)
return found
def addlines(self,node):
self.currentnode_stack.append(node.parent)
filepath=node.abspath()
self.count_files+=1
if self.count_files>recursion_limit:
raise PreprocError("recursion limit exceeded")
pc=self.parse_cache
debug('preproc: reading file %r',filepath)
try:
lns=pc[filepath]
except KeyError:
pass
else:
self.lines.extend(lns)
return
try:
lines=filter_comments(filepath)
lines.append((POPFILE,''))
lines.reverse()
pc[filepath]=lines
self.lines.extend(lines)
except IOError:
raise PreprocError("could not read the file %s"%filepath)
except Exception:
if Logs.verbose>0:
error("parsing %s failed"%filepath)
traceback.print_exc()
def start(self,node,env):
debug('preproc: scanning %s (in %s)',node.name,node.parent.name)
bld=node.ctx
try:
self.parse_cache=bld.parse_cache
except AttributeError:
bld.parse_cache={}
self.parse_cache=bld.parse_cache
self.current_file=node
self.addlines(node)
if env['DEFINES']:
try:
lst=['%s %s'%(x[0],trimquotes('='.join(x[1:])))for x in[y.split('=')for y in env['DEFINES']]]
lst.reverse()
self.lines.extend([('define',x)for x in lst])
except AttributeError:
pass
while self.lines:
(token,line)=self.lines.pop()
if token==POPFILE:
self.count_files-=1
self.currentnode_stack.pop()
continue
try:
ve=Logs.verbose
if ve:debug('preproc: line is %s - %s state is %s',token,line,self.state)
state=self.state
if token[:2]=='if':
state.append(undefined)
elif token=='endif':
state.pop()
if token[0]!='e':
if skipped in self.state or ignored in self.state:
continue
if token=='if':
ret=eval_macro(tokenize(line),self.defs)
if ret:state[-1]=accepted
else:state[-1]=ignored
elif token=='ifdef':
m=re_mac.match(line)
if m and m.group(0)in self.defs:state[-1]=accepted
else:state[-1]=ignored
elif token=='ifndef':
m=re_mac.match(line)
if m and m.group(0)in self.defs:state[-1]=ignored
else:state[-1]=accepted
elif token=='include'or token=='import':
(kind,inc)=extract_include(line,self.defs)
if ve:debug('preproc: include found %s (%s) ',inc,kind)
if kind=='"'or not strict_quotes:
self.current_file=self.tryfind(inc)
if token=='import':
self.ban_includes.add(self.current_file)
elif token=='elif':
if state[-1]==accepted:
state[-1]=skipped
elif state[-1]==ignored:
if eval_macro(tokenize(line),self.defs):
state[-1]=accepted
elif token=='else':
if state[-1]==accepted:state[-1]=skipped
elif state[-1]==ignored:state[-1]=accepted
elif token=='define':
try:
self.defs[define_name(line)]=line
except Exception:
raise PreprocError("Invalid define line %s"%line)
elif token=='undef':
m=re_mac.match(line)
if m and m.group(0)in self.defs:
self.defs.__delitem__(m.group(0))
elif token=='pragma':
if re_pragma_once.match(line.lower()):
self.ban_includes.add(self.current_file)
except Exception as e:
if Logs.verbose:
debug('preproc: line parsing failed (%s): %s %s',e,line,Utils.ex_stack())
def scan(task):
global go_absolute
try:
incn=task.generator.includes_nodes
except AttributeError:
raise Errors.WafError('%r is missing a feature such as "c", "cxx" or "includes": '%task.generator)
if go_absolute:
nodepaths=incn+[task.generator.bld.root.find_dir(x)for x in standard_includes]
else:
nodepaths=[x for x in incn if x.is_child_of(x.ctx.srcnode)or x.is_child_of(x.ctx.bldnode)]
tmp=c_parser(nodepaths)
tmp.start(task.inputs[0],task.env)
if Logs.verbose:
debug('deps: deps for %r: %r; unresolved %r'%(task.inputs,tmp.nodes,tmp.names))
return(tmp.nodes,tmp.names)
| gpl-2.0 |
medecau/PyUserInput | pykeyboard/base.py | 4 | 7566 | #Copyright 2013 Paul Barton
#
#This program is free software: you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation, either version 3 of the License, or
#(at your option) any later version.
#
#This program is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#GNU General Public License for more details.
#
#You should have received a copy of the GNU General Public License
#along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
As the base file, this provides a rough operational model along with the
framework to be extended by each platform.
"""
import time
from threading import Thread
class PyKeyboardMeta(object):
"""
The base class for PyKeyboard. Represents basic operational model.
"""
#: We add this named character for convenience
space = ' '
def press_key(self, character=''):
"""Press a given character key."""
raise NotImplementedError
def release_key(self, character=''):
"""Release a given character key."""
raise NotImplementedError
def tap_key(self, character='', n=1, interval=0):
"""Press and release a given character key n times."""
for i in range(n):
self.press_key(character)
self.release_key(character)
time.sleep(interval)
def press_keys(self,characters=[]):
"""Press a given character key."""
for character in characters:
self.press_key(character)
for character in characters:
self.release_key(character)
def type_string(self, char_string, interval=0):
"""
A convenience method for typing longer strings of characters. Generates
as few Shift events as possible."""
shift = False
for char in char_string:
if self.is_char_shifted(char):
if not shift: # Only press Shift as needed
time.sleep(interval)
self.press_key(self.shift_key)
shift = True
#In order to avoid tap_key pressing Shift, we need to pass the
#unshifted form of the character
if char in '<>?:"{}|~!@#$%^&*()_+':
ch_index = '<>?:"{}|~!@#$%^&*()_+'.index(char)
unshifted_char = ",./;'[]\\`1234567890-="[ch_index]
else:
unshifted_char = char.lower()
time.sleep(interval)
self.tap_key(unshifted_char)
else: # Unshifted already
if shift and char != ' ': # Only release Shift as needed
self.release_key(self.shift_key)
shift = False
time.sleep(interval)
self.tap_key(char)
if shift: # Turn off Shift if it's still ON
self.release_key(self.shift_key)
def special_key_assignment(self):
"""Makes special keys more accessible."""
raise NotImplementedError
def lookup_character_value(self, character):
"""
If necessary, lookup a valid API value for the key press from the
character.
"""
raise NotImplementedError
def is_char_shifted(self, character):
"""Returns True if the key character is uppercase or shifted."""
if character.isupper():
return True
if character in '<>?:"{}|~!@#$%^&*()_+':
return True
return False
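# Hedged usage sketch (assumes a platform subclass named PyKeyboard, which
# is what the package exposes on each supported OS):
#
#     k = PyKeyboard()
#     k.type_string('Hello, World!')       # handles the Shift key itself
#     k.tap_key(k.space, n=2, interval=1)  # two spaces, one second apart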
class PyKeyboardEventMeta(Thread):
"""
The base class for PyKeyboard. Represents basic operational model.
"""
#One of the most variable components of keyboards throughout history and
#across manufacturers is the Modifier Key...
#I am attempting to cover a lot of bases to make using PyKeyboardEvent
#simpler, without digging a bunch of traps for incompatibilities between
#platforms.
#Keeping track of the keyboard's state is not only necessary at times to
#correctly interpret character identities in keyboard events, but should
#also enable a user to easily query modifier states without worrying about
#chaining event triggers for mod-combinations
#The keyboard's state will be represented by an integer, the individual
#mod keys by a bit mask of that integer
state = 0
#Each platform should assign, where applicable/possible, the bit masks for
#modifier keys initially set to 0 here. Not all modifiers are recommended
#for cross-platform use
modifier_bits = {'Shift': 1,
'Lock': 2,
'Control': 4,
'Mod1': 8, # X11 dynamic assignment
'Mod2': 16, # X11 dynamic assignment
'Mod3': 32, # X11 dynamic assignment
'Mod4': 64, # X11 dynamic assignment
'Mod5': 128, # X11 dynamic assignment
'Alt': 0,
'AltGr': 0, # Uncommon
'Caps_Lock': 0,
'Command': 0, # Mac key without generic equivalent
'Function': 0, # Not advised; typically undetectable
'Hyper': 0, # Uncommon?
'Meta': 0, # Uncommon?
'Num_Lock': 0,
'Mode_switch': 0, # Uncommon
'Shift_Lock': 0, # Uncommon
'Super': 0, # X11 key, sometimes equivalent to Windows
'Windows': 0} # Windows key, sometimes equivalent to Super
#Make the modifiers dictionary for individual states, setting all to off
modifiers = {}
for key in modifier_bits.keys():
modifiers[key] = False
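#Hedged illustration of the bitmask scheme above: with the X11 defaults,
#a state of 5 means Shift (bit 1) and Control (bit 4) are held, so
#   bool(state & modifier_bits['Shift'])   -> True
#   bool(state & modifier_bits['Control']) -> True
#   bool(state & modifier_bits['Lock'])    -> False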
def __init__(self, capture=False):
Thread.__init__(self)
self.daemon = True
self.capture = capture
self.state = True
self.configure_keys()
def run(self):
self.state = True
def stop(self):
self.state = False
def handler(self):
raise NotImplementedError
def tap(self, keycode, character, press):
"""
Subclass this method with your key event handler. It will receive
the keycode associated with the key event, as well as string name for
the key if one can be assigned (keyboard mask states will apply). The
argument 'press' will be True if the key was depressed and False if the
key was released.
"""
pass
def escape(self, event):
"""
A function that defines when to stop listening; subclass this with your
escape behavior. If the program is meant to stop, this method should
return True. Every key event will go through this method before going to
tap(), allowing this method to check for exit conditions.
The default behavior is to stop when the 'Esc' key is pressed.
If one wishes to use key combinations, or key series, one might be
interested in reading about Finite State Machines.
http://en.wikipedia.org/wiki/Deterministic_finite_automaton
"""
condition = None
return event == condition
def configure_keys(self):
"""
Does per-platform work of configuring the modifier keys as well as data
structures for simplified key access. Does nothing in this base
implementation.
"""
pass
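#Hedged sketch of a minimal listener subclass; the platform backends call
#tap() for every key event, and escape() decides when to stop:
#
#    class KeyLogger(PyKeyboardEventMeta):
#        def tap(self, keycode, character, press):
#            if press:
#                print('pressed %r (keycode %s)' % (character, keycode))
#
#    # KeyLogger().run() would then block until escape() returns True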
| lgpl-3.0 |
vikatory/kbengine | kbe/res/scripts/common/Lib/idlelib/configHelpSourceEdit.py | 82 | 6670 | "Dialog to specify or edit the parameters for a user configured help source."
import os
import sys
from tkinter import *
import tkinter.messagebox as tkMessageBox
import tkinter.filedialog as tkFileDialog
class GetHelpSourceDialog(Toplevel):
def __init__(self, parent, title, menuItem='', filePath='', _htest=False):
"""Get menu entry and url/ local file location for Additional Help
User selects a name for the Help resource and provides a web url
or a local file as its source. The user can enter a url or browse
for the file.
_htest - bool, change box location when running htest
"""
Toplevel.__init__(self, parent)
self.configure(borderwidth=5)
self.resizable(height=FALSE, width=FALSE)
self.title(title)
self.transient(parent)
self.grab_set()
self.protocol("WM_DELETE_WINDOW", self.Cancel)
self.parent = parent
self.result = None
self.CreateWidgets()
self.menu.set(menuItem)
self.path.set(filePath)
self.withdraw() #hide while setting geometry
#needs to be done here so that the winfo_reqwidth is valid
self.update_idletasks()
#centre dialog over parent. below parent if running htest.
self.geometry(
"+%d+%d" % (
parent.winfo_rootx() +
(parent.winfo_width()/2 - self.winfo_reqwidth()/2),
parent.winfo_rooty() +
((parent.winfo_height()/2 - self.winfo_reqheight()/2)
if not _htest else 150)))
self.deiconify() #geometry set, unhide
self.bind('<Return>', self.Ok)
self.wait_window()
def CreateWidgets(self):
self.menu = StringVar(self)
self.path = StringVar(self)
self.fontSize = StringVar(self)
self.frameMain = Frame(self, borderwidth=2, relief=GROOVE)
self.frameMain.pack(side=TOP, expand=TRUE, fill=BOTH)
labelMenu = Label(self.frameMain, anchor=W, justify=LEFT,
text='Menu Item:')
self.entryMenu = Entry(self.frameMain, textvariable=self.menu,
width=30)
self.entryMenu.focus_set()
labelPath = Label(self.frameMain, anchor=W, justify=LEFT,
text='Help File Path: Enter URL or browse for file')
self.entryPath = Entry(self.frameMain, textvariable=self.path,
width=40)
self.entryMenu.focus_set()
labelMenu.pack(anchor=W, padx=5, pady=3)
self.entryMenu.pack(anchor=W, padx=5, pady=3)
labelPath.pack(anchor=W, padx=5, pady=3)
self.entryPath.pack(anchor=W, padx=5, pady=3)
browseButton = Button(self.frameMain, text='Browse', width=8,
command=self.browseFile)
browseButton.pack(pady=3)
frameButtons = Frame(self)
frameButtons.pack(side=BOTTOM, fill=X)
self.buttonOk = Button(frameButtons, text='OK',
width=8, default=ACTIVE, command=self.Ok)
self.buttonOk.grid(row=0, column=0, padx=5,pady=5)
self.buttonCancel = Button(frameButtons, text='Cancel',
width=8, command=self.Cancel)
self.buttonCancel.grid(row=0, column=1, padx=5, pady=5)
def browseFile(self):
filetypes = [
("HTML Files", "*.htm *.html", "TEXT"),
("PDF Files", "*.pdf", "TEXT"),
("Windows Help Files", "*.chm"),
("Text Files", "*.txt", "TEXT"),
("All Files", "*")]
path = self.path.get()
if path:
dir, base = os.path.split(path)
else:
base = None
if sys.platform[:3] == 'win':
dir = os.path.join(os.path.dirname(sys.executable), 'Doc')
if not os.path.isdir(dir):
dir = os.getcwd()
else:
dir = os.getcwd()
opendialog = tkFileDialog.Open(parent=self, filetypes=filetypes)
file = opendialog.show(initialdir=dir, initialfile=base)
if file:
self.path.set(file)
def MenuOk(self):
"Simple validity check for a sensible menu item name"
menuOk = True
menu = self.menu.get()
menu = menu.strip()
if not menu:
tkMessageBox.showerror(title='Menu Item Error',
message='No menu item specified',
parent=self)
self.entryMenu.focus_set()
menuOk = False
elif len(menu) > 30:
tkMessageBox.showerror(title='Menu Item Error',
message='Menu item too long:'
'\nLimit 30 characters.',
parent=self)
self.entryMenu.focus_set()
menuOk = False
return menuOk
def PathOk(self):
"Simple validity check for menu file path"
pathOk = True
path = self.path.get()
path = path.strip()
if not path: #no path specified
tkMessageBox.showerror(title='File Path Error',
message='No help file path specified.',
parent=self)
self.entryPath.focus_set()
pathOk = False
elif path.startswith(('www.', 'http')):
pass
else:
if path[:5] == 'file:':
path = path[5:]
if not os.path.exists(path):
tkMessageBox.showerror(title='File Path Error',
message='Help file path does not exist.',
parent=self)
self.entryPath.focus_set()
pathOk = False
return pathOk
def Ok(self, event=None):
if self.MenuOk() and self.PathOk():
self.result = (self.menu.get().strip(),
self.path.get().strip())
if sys.platform == 'darwin':
path = self.result[1]
if path.startswith(('www', 'file:', 'http:')):
pass
else:
# Mac Safari insists on using the URI form for local files
self.result = list(self.result)
self.result[1] = "file://" + path
self.destroy()
def Cancel(self, event=None):
self.result = None
self.destroy()
if __name__ == '__main__':
from idlelib.idle_test.htest import run
run(GetHelpSourceDialog)
| lgpl-3.0 |
Jurevic/BB-8_droid | docs/conf.py | 1 | 7749 | # -*- coding: utf-8 -*-
#
# ui documentation build configuration file, created by
# sphinx-quickstart.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
from __future__ import unicode_literals
import os
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'ui'
copyright = """2017, none"""
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'bb8doc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index',
'ui.tex',
'ui Documentation',
"""none""", 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'ui', 'ui Documentation',
["""none"""], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'ui', 'ui Documentation',
"""none""", 'ui',
"""BB8 unit from star wars""", 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
| mit |
IllusionRom-deprecated/android_platform_tools_idea | plugins/hg4idea/testData/bin/hgext/transplant.py | 91 | 26119 | # Patch transplanting extension for Mercurial
#
# Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
'''command to transplant changesets from another branch
This extension allows you to transplant changes to another parent revision,
possibly in another repository. The transplant is done using 'diff' patches.
Transplanted patches are recorded in .hg/transplant/transplants, as a
map from a changeset hash to its hash in the source repository.
'''
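# For reference, each line of that transplants file is a single mapping of
# the form (hashes below are illustrative placeholders):
#     <new hash in this repository>:<hash in the source repository>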
from mercurial.i18n import _
import os, tempfile
from mercurial.node import short
from mercurial import bundlerepo, hg, merge, match
from mercurial import patch, revlog, scmutil, util, error, cmdutil
from mercurial import revset, templatekw
class TransplantError(error.Abort):
pass
cmdtable = {}
command = cmdutil.command(cmdtable)
testedwith = 'internal'
class transplantentry(object):
def __init__(self, lnode, rnode):
self.lnode = lnode
self.rnode = rnode
class transplants(object):
def __init__(self, path=None, transplantfile=None, opener=None):
self.path = path
self.transplantfile = transplantfile
self.opener = opener
if not opener:
self.opener = scmutil.opener(self.path)
self.transplants = {}
self.dirty = False
self.read()
def read(self):
abspath = os.path.join(self.path, self.transplantfile)
if self.transplantfile and os.path.exists(abspath):
for line in self.opener.read(self.transplantfile).splitlines():
lnode, rnode = map(revlog.bin, line.split(':'))
list = self.transplants.setdefault(rnode, [])
list.append(transplantentry(lnode, rnode))
def write(self):
if self.dirty and self.transplantfile:
if not os.path.isdir(self.path):
os.mkdir(self.path)
fp = self.opener(self.transplantfile, 'w')
for list in self.transplants.itervalues():
for t in list:
l, r = map(revlog.hex, (t.lnode, t.rnode))
fp.write(l + ':' + r + '\n')
fp.close()
self.dirty = False
def get(self, rnode):
return self.transplants.get(rnode) or []
def set(self, lnode, rnode):
list = self.transplants.setdefault(rnode, [])
list.append(transplantentry(lnode, rnode))
self.dirty = True
def remove(self, transplant):
list = self.transplants.get(transplant.rnode)
if list:
del list[list.index(transplant)]
self.dirty = True
class transplanter(object):
def __init__(self, ui, repo):
self.ui = ui
self.path = repo.join('transplant')
self.opener = scmutil.opener(self.path)
self.transplants = transplants(self.path, 'transplants',
opener=self.opener)
self.editor = None
def applied(self, repo, node, parent):
'''returns True if a node is already an ancestor of parent,
is the parent itself, or has already been transplanted'''
if hasnode(repo, parent):
parentrev = repo.changelog.rev(parent)
if hasnode(repo, node):
rev = repo.changelog.rev(node)
reachable = repo.changelog.ancestors([parentrev], rev,
inclusive=True)
if rev in reachable:
return True
for t in self.transplants.get(node):
# it might have been stripped
if not hasnode(repo, t.lnode):
self.transplants.remove(t)
return False
lnoderev = repo.changelog.rev(t.lnode)
if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
inclusive=True):
return True
return False
def apply(self, repo, source, revmap, merges, opts={}):
'''apply the revisions in revmap one by one in revision order'''
revs = sorted(revmap)
p1, p2 = repo.dirstate.parents()
pulls = []
diffopts = patch.diffopts(self.ui, opts)
diffopts.git = True
lock = wlock = tr = None
try:
wlock = repo.wlock()
lock = repo.lock()
tr = repo.transaction('transplant')
for rev in revs:
node = revmap[rev]
revstr = '%s:%s' % (rev, short(node))
if self.applied(repo, node, p1):
self.ui.warn(_('skipping already applied revision %s\n') %
revstr)
continue
parents = source.changelog.parents(node)
if not (opts.get('filter') or opts.get('log')):
# If the changeset parent is the same as the
# wdir's parent, just pull it.
if parents[0] == p1:
pulls.append(node)
p1 = node
continue
if pulls:
if source != repo:
repo.pull(source.peer(), heads=pulls)
merge.update(repo, pulls[-1], False, False, None)
p1, p2 = repo.dirstate.parents()
pulls = []
domerge = False
if node in merges:
# pulling all the merge revs at once would mean we
# couldn't transplant after the latest even if
# transplants before them fail.
domerge = True
if not hasnode(repo, node):
repo.pull(source, heads=[node])
skipmerge = False
if parents[1] != revlog.nullid:
if not opts.get('parent'):
self.ui.note(_('skipping merge changeset %s:%s\n')
% (rev, short(node)))
skipmerge = True
else:
parent = source.lookup(opts['parent'])
if parent not in parents:
raise util.Abort(_('%s is not a parent of %s') %
(short(parent), short(node)))
else:
parent = parents[0]
if skipmerge:
patchfile = None
else:
fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
fp = os.fdopen(fd, 'w')
gen = patch.diff(source, parent, node, opts=diffopts)
for chunk in gen:
fp.write(chunk)
fp.close()
del revmap[rev]
if patchfile or domerge:
try:
try:
n = self.applyone(repo, node,
source.changelog.read(node),
patchfile, merge=domerge,
log=opts.get('log'),
filter=opts.get('filter'))
except TransplantError:
# Do not rollback, it is up to the user to
# fix the merge or cancel everything
tr.close()
raise
if n and domerge:
self.ui.status(_('%s merged at %s\n') % (revstr,
short(n)))
elif n:
self.ui.status(_('%s transplanted to %s\n')
% (short(node),
short(n)))
finally:
if patchfile:
os.unlink(patchfile)
tr.close()
if pulls:
repo.pull(source.peer(), heads=pulls)
merge.update(repo, pulls[-1], False, False, None)
finally:
self.saveseries(revmap, merges)
self.transplants.write()
if tr:
tr.release()
lock.release()
wlock.release()
def filter(self, filter, node, changelog, patchfile):
'''arbitrarily rewrite changeset before applying it'''
self.ui.status(_('filtering %s\n') % patchfile)
user, date, msg = (changelog[1], changelog[2], changelog[4])
fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
fp = os.fdopen(fd, 'w')
fp.write("# HG changeset patch\n")
fp.write("# User %s\n" % user)
fp.write("# Date %d %d\n" % date)
fp.write(msg + '\n')
fp.close()
try:
util.system('%s %s %s' % (filter, util.shellquote(headerfile),
util.shellquote(patchfile)),
environ={'HGUSER': changelog[1],
'HGREVISION': revlog.hex(node),
},
onerr=util.Abort, errprefix=_('filter failed'),
out=self.ui.fout)
user, date, msg = self.parselog(file(headerfile))[1:4]
finally:
os.unlink(headerfile)
return (user, date, msg)
def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
filter=None):
'''apply the patch in patchfile to the repository as a transplant'''
(manifest, user, (time, timezone), files, message) = cl[:5]
date = "%d %d" % (time, timezone)
extra = {'transplant_source': node}
if filter:
(user, date, message) = self.filter(filter, node, cl, patchfile)
if log:
# we don't translate messages inserted into commits
message += '\n(transplanted from %s)' % revlog.hex(node)
self.ui.status(_('applying %s\n') % short(node))
self.ui.note('%s %s\n%s\n' % (user, date, message))
if not patchfile and not merge:
raise util.Abort(_('can only omit patchfile if merging'))
if patchfile:
try:
files = set()
patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
files = list(files)
except Exception, inst:
seriespath = os.path.join(self.path, 'series')
if os.path.exists(seriespath):
os.unlink(seriespath)
p1 = repo.dirstate.p1()
p2 = node
self.log(user, date, message, p1, p2, merge=merge)
self.ui.write(str(inst) + '\n')
raise TransplantError(_('fix up the merge and run '
'hg transplant --continue'))
else:
files = None
if merge:
p1, p2 = repo.dirstate.parents()
repo.setparents(p1, node)
m = match.always(repo.root, '')
else:
m = match.exact(repo.root, '', files)
n = repo.commit(message, user, date, extra=extra, match=m,
editor=self.editor)
if not n:
self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
return None
if not merge:
self.transplants.set(n, node)
return n
def resume(self, repo, source, opts):
'''recover last transaction and apply remaining changesets'''
if os.path.exists(os.path.join(self.path, 'journal')):
n, node = self.recover(repo, source, opts)
self.ui.status(_('%s transplanted as %s\n') % (short(node),
short(n)))
seriespath = os.path.join(self.path, 'series')
if not os.path.exists(seriespath):
self.transplants.write()
return
nodes, merges = self.readseries()
revmap = {}
for n in nodes:
revmap[source.changelog.rev(n)] = n
os.unlink(seriespath)
self.apply(repo, source, revmap, merges, opts)
def recover(self, repo, source, opts):
'''commit working directory using journal metadata'''
node, user, date, message, parents = self.readlog()
merge = False
if not user or not date or not message or not parents[0]:
raise util.Abort(_('transplant log file is corrupt'))
parent = parents[0]
if len(parents) > 1:
if opts.get('parent'):
parent = source.lookup(opts['parent'])
if parent not in parents:
raise util.Abort(_('%s is not a parent of %s') %
(short(parent), short(node)))
else:
merge = True
extra = {'transplant_source': node}
wlock = repo.wlock()
try:
p1, p2 = repo.dirstate.parents()
if p1 != parent:
raise util.Abort(
_('working dir not at transplant parent %s') %
revlog.hex(parent))
if merge:
repo.setparents(p1, parents[1])
n = repo.commit(message, user, date, extra=extra,
editor=self.editor)
if not n:
raise util.Abort(_('commit failed'))
if not merge:
self.transplants.set(n, node)
self.unlog()
return n, node
finally:
wlock.release()
def readseries(self):
nodes = []
merges = []
cur = nodes
for line in self.opener.read('series').splitlines():
if line.startswith('# Merges'):
cur = merges
continue
cur.append(revlog.bin(line))
return (nodes, merges)
def saveseries(self, revmap, merges):
if not revmap:
return
if not os.path.isdir(self.path):
os.mkdir(self.path)
series = self.opener('series', 'w')
for rev in sorted(revmap):
series.write(revlog.hex(revmap[rev]) + '\n')
if merges:
series.write('# Merges\n')
for m in merges:
series.write(revlog.hex(m) + '\n')
series.close()
def parselog(self, fp):
parents = []
message = []
node = revlog.nullid
inmsg = False
user = None
date = None
for line in fp.read().splitlines():
if inmsg:
message.append(line)
elif line.startswith('# User '):
user = line[7:]
elif line.startswith('# Date '):
date = line[7:]
elif line.startswith('# Node ID '):
node = revlog.bin(line[10:])
elif line.startswith('# Parent '):
parents.append(revlog.bin(line[9:]))
elif not line.startswith('# '):
inmsg = True
message.append(line)
if None in (user, date):
raise util.Abort(_("filter corrupted changeset (no user or date)"))
return (node, user, date, '\n'.join(message), parents)
def log(self, user, date, message, p1, p2, merge=False):
'''journal changelog metadata for later recover'''
if not os.path.isdir(self.path):
os.mkdir(self.path)
fp = self.opener('journal', 'w')
fp.write('# User %s\n' % user)
fp.write('# Date %s\n' % date)
fp.write('# Node ID %s\n' % revlog.hex(p2))
fp.write('# Parent ' + revlog.hex(p1) + '\n')
if merge:
fp.write('# Parent ' + revlog.hex(p2) + '\n')
fp.write(message.rstrip() + '\n')
fp.close()
def readlog(self):
return self.parselog(self.opener('journal'))
def unlog(self):
'''remove changelog journal'''
absdst = os.path.join(self.path, 'journal')
if os.path.exists(absdst):
os.unlink(absdst)
def transplantfilter(self, repo, source, root):
def matchfn(node):
if self.applied(repo, node, root):
return False
if source.changelog.parents(node)[1] != revlog.nullid:
return False
extra = source.changelog.read(node)[5]
cnode = extra.get('transplant_source')
if cnode and self.applied(repo, cnode, root):
return False
return True
return matchfn
def hasnode(repo, node):
try:
return repo.changelog.rev(node) is not None
except error.RevlogError:
return False
def browserevs(ui, repo, nodes, opts):
'''interactively transplant changesets'''
def browsehelp(ui):
ui.write(_('y: transplant this changeset\n'
'n: skip this changeset\n'
'm: merge at this changeset\n'
'p: show patch\n'
'c: commit selected changesets\n'
'q: cancel transplant\n'
'?: show this help\n'))
displayer = cmdutil.show_changeset(ui, repo, opts)
transplants = []
merges = []
for node in nodes:
displayer.show(repo[node])
action = None
while not action:
action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
if action == '?':
browsehelp(ui)
action = None
elif action == 'p':
parent = repo.changelog.parents(node)[0]
for chunk in patch.diff(repo, parent, node):
ui.write(chunk)
action = None
elif action not in ('y', 'n', 'm', 'c', 'q'):
ui.write(_('no such option\n'))
action = None
if action == 'y':
transplants.append(node)
elif action == 'm':
merges.append(node)
elif action == 'c':
break
elif action == 'q':
transplants = ()
merges = ()
break
displayer.close()
return (transplants, merges)
@command('transplant',
[('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
('b', 'branch', [], _('use this source changeset as head'), _('REV')),
('a', 'all', None, _('pull all changesets up to the --branch revisions')),
('p', 'prune', [], _('skip over REV'), _('REV')),
('m', 'merge', [], _('merge at REV'), _('REV')),
('', 'parent', '',
_('parent to choose when transplanting merge'), _('REV')),
('e', 'edit', False, _('invoke editor on commit messages')),
('', 'log', None, _('append transplant info to log message')),
('c', 'continue', None, _('continue last transplant session '
'after fixing conflicts')),
('', 'filter', '',
_('filter changesets through command'), _('CMD'))],
_('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
'[-m REV] [REV]...'))
def transplant(ui, repo, *revs, **opts):
'''transplant changesets from another branch
Selected changesets will be applied on top of the current working
directory with the log of the original changeset. The changesets
are copied and will thus appear twice in the history with different
identities.
Consider using the graft command if everything is inside the same
repository - it will use merges and will usually give a better result.
Use the rebase extension if the changesets are unpublished and you want
to move them instead of copying them.
If --log is specified, log messages will have a comment appended
of the form::
(transplanted from CHANGESETHASH)
You can rewrite the changelog message with the --filter option.
Its argument will be invoked with the current changelog message as
$1 and the patch as $2.
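For example (illustrative; ``fixup-message`` stands for a hypothetical
script of yours that edits the message file $1 in place)::
hg transplant --filter ./fixup-message REV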
--source/-s specifies another repository to use for selecting changesets,
just as if it temporarily had been pulled.
If --branch/-b is specified, these revisions will be used as
heads when deciding which changesets to transplant, just as if only
these revisions had been pulled.
If --all/-a is specified, all the revisions up to the heads specified
with --branch will be transplanted.
Example:
- transplant all changes up to REV on top of your current revision::
hg transplant --branch REV --all
You can optionally mark selected transplanted changesets as merge
changesets. You will not be prompted to transplant any ancestors
of a merged transplant, and you can merge descendants of them
normally instead of transplanting them.
Merge changesets may be transplanted directly by specifying the
proper parent changeset by calling :hg:`transplant --parent`.
If no merges or revisions are provided, :hg:`transplant` will
start an interactive changeset browser.
If a changeset application fails, you can fix the merge by hand
and then resume where you left off by calling :hg:`transplant
--continue/-c`.
'''
def incwalk(repo, csets, match=util.always):
for node in csets:
if match(node):
yield node
def transplantwalk(repo, dest, heads, match=util.always):
'''Yield all nodes that are ancestors of a head but not ancestors
of dest.
If no heads are specified, the heads of repo will be used.'''
if not heads:
heads = repo.heads()
ancestors = []
for head in heads:
ancestors.append(repo.changelog.ancestor(dest, head))
for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
if match(node):
yield node
def checkopts(opts, revs):
if opts.get('continue'):
if opts.get('branch') or opts.get('all') or opts.get('merge'):
raise util.Abort(_('--continue is incompatible with '
'--branch, --all and --merge'))
return
if not (opts.get('source') or revs or
opts.get('merge') or opts.get('branch')):
raise util.Abort(_('no source URL, branch revision or revision '
'list provided'))
if opts.get('all'):
if not opts.get('branch'):
raise util.Abort(_('--all requires a branch revision'))
if revs:
raise util.Abort(_('--all is incompatible with a '
'revision list'))
checkopts(opts, revs)
if not opts.get('log'):
opts['log'] = ui.config('transplant', 'log')
if not opts.get('filter'):
opts['filter'] = ui.config('transplant', 'filter')
tp = transplanter(ui, repo)
if opts.get('edit'):
tp.editor = cmdutil.commitforceeditor
p1, p2 = repo.dirstate.parents()
if len(repo) > 0 and p1 == revlog.nullid:
raise util.Abort(_('no revision checked out'))
if not opts.get('continue'):
if p2 != revlog.nullid:
raise util.Abort(_('outstanding uncommitted merges'))
m, a, r, d = repo.status()[:4]
if m or a or r or d:
raise util.Abort(_('outstanding local changes'))
sourcerepo = opts.get('source')
if sourcerepo:
peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
heads = map(peer.lookup, opts.get('branch', ()))
source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
onlyheads=heads, force=True)
else:
source = repo
heads = map(source.lookup, opts.get('branch', ()))
cleanupfn = None
try:
if opts.get('continue'):
tp.resume(repo, source, opts)
return
tf = tp.transplantfilter(repo, source, p1)
if opts.get('prune'):
prune = set(source.lookup(r)
for r in scmutil.revrange(source, opts.get('prune')))
matchfn = lambda x: tf(x) and x not in prune
else:
matchfn = tf
merges = map(source.lookup, opts.get('merge', ()))
revmap = {}
if revs:
for r in scmutil.revrange(source, revs):
revmap[int(r)] = source.lookup(r)
elif opts.get('all') or not merges:
if source != repo:
alltransplants = incwalk(source, csets, match=matchfn)
else:
alltransplants = transplantwalk(source, p1, heads,
match=matchfn)
if opts.get('all'):
revs = alltransplants
else:
revs, newmerges = browserevs(ui, source, alltransplants, opts)
merges.extend(newmerges)
for r in revs:
revmap[source.changelog.rev(r)] = r
for r in merges:
revmap[source.changelog.rev(r)] = r
tp.apply(repo, source, revmap, merges, opts)
finally:
if cleanupfn:
cleanupfn()
def revsettransplanted(repo, subset, x):
"""``transplanted([set])``
Transplanted changesets in set, or all transplanted changesets.
"""
if x:
s = revset.getset(repo, subset, x)
else:
s = subset
return [r for r in s if repo[r].extra().get('transplant_source')]
def kwtransplanted(repo, ctx, **args):
""":transplanted: String. The node identifier of the transplanted
changeset if any."""
n = ctx.extra().get('transplant_source')
return n and revlog.hex(n) or ''
def extsetup(ui):
revset.symbols['transplanted'] = revsettransplanted
templatekw.keywords['transplanted'] = kwtransplanted
# tell hggettext to extract docstrings from these functions:
i18nfunctions = [revsettransplanted, kwtransplanted]
| apache-2.0 |
aron-bordin/Tyrant-Sql | SQL_Map/plugins/dbms/oracle/enumeration.py | 8 | 5990 | #!/usr/bin/env python
"""
Copyright (c) 2006-2013 sqlmap developers (http://sqlmap.org/)
See the file 'doc/COPYING' for copying permission
"""
from lib.core.common import Backend
from lib.core.common import getLimitRange
from lib.core.common import isAdminFromPrivileges
from lib.core.common import isInferenceAvailable
from lib.core.common import isNoneValue
from lib.core.common import isNumPosStrValue
from lib.core.common import isTechniqueAvailable
from lib.core.data import conf
from lib.core.data import kb
from lib.core.data import logger
from lib.core.data import queries
from lib.core.enums import CHARSET_TYPE
from lib.core.enums import EXPECTED
from lib.core.enums import PAYLOAD
from lib.core.exception import SqlmapNoneDataException
from lib.request import inject
from plugins.generic.enumeration import Enumeration as GenericEnumeration
class Enumeration(GenericEnumeration):
def __init__(self):
GenericEnumeration.__init__(self)
def getRoles(self, query2=False):
infoMsg = "fetching database users roles"
rootQuery = queries[Backend.getIdentifiedDbms()].roles
if conf.user == "CU":
infoMsg += " for current user"
conf.user = self.getCurrentUser()
logger.info(infoMsg)
# Set containing the list of DBMS administrators
areAdmins = set()
if any(isTechniqueAvailable(_) for _ in (PAYLOAD.TECHNIQUE.UNION, PAYLOAD.TECHNIQUE.ERROR, PAYLOAD.TECHNIQUE.QUERY)) or conf.direct:
if query2:
query = rootQuery.inband.query2
condition = rootQuery.inband.condition2
else:
query = rootQuery.inband.query
condition = rootQuery.inband.condition
if conf.user:
users = conf.user.split(",")
query += " WHERE "
query += " OR ".join("%s = '%s'" % (condition, user) for user in sorted(users))
values = inject.getValue(query, blind=False, time=False)
if not values and not query2:
infoMsg = "trying with table USER_ROLE_PRIVS"
logger.info(infoMsg)
return self.getRoles(query2=True)
if not isNoneValue(values):
for value in values:
user = None
roles = set()
for count in xrange(0, len(value)):
# The first column is always the username
if count == 0:
user = value[count]
# The other columns are the roles
else:
role = value[count]
# In Oracle we get the list of roles as string
roles.add(role)
if user in kb.data.cachedUsersRoles:
kb.data.cachedUsersRoles[user] = list(roles.union(kb.data.cachedUsersRoles[user]))
else:
kb.data.cachedUsersRoles[user] = list(roles)
if not kb.data.cachedUsersRoles and isInferenceAvailable() and not conf.direct:
if conf.user:
users = conf.user.split(",")
else:
if not len(kb.data.cachedUsers):
users = self.getUsers()
else:
users = kb.data.cachedUsers
retrievedUsers = set()
for user in users:
unescapedUser = None
if user in retrievedUsers:
continue
infoMsg = "fetching number of roles "
infoMsg += "for user '%s'" % user
logger.info(infoMsg)
if unescapedUser:
queryUser = unescapedUser
else:
queryUser = user
if query2:
query = rootQuery.blind.count2 % queryUser
else:
query = rootQuery.blind.count % queryUser
count = inject.getValue(query, union=False, error=False, expected=EXPECTED.INT, charsetType=CHARSET_TYPE.DIGITS)
if not isNumPosStrValue(count):
if count != 0 and not query2:
infoMsg = "trying with table USER_SYS_PRIVS"
logger.info(infoMsg)
return self.getPrivileges(query2=True)
warnMsg = "unable to retrieve the number of "
warnMsg += "roles for user '%s'" % user
logger.warn(warnMsg)
continue
infoMsg = "fetching roles for user '%s'" % user
logger.info(infoMsg)
roles = set()
indexRange = getLimitRange(count, plusOne=True)
for index in indexRange:
if query2:
query = rootQuery.blind.query2 % (queryUser, index)
else:
query = rootQuery.blind.query % (queryUser, index)
role = inject.getValue(query, union=False, error=False)
# In Oracle we get the list of roles as string
roles.add(role)
if roles:
kb.data.cachedUsersRoles[user] = list(roles)
else:
warnMsg = "unable to retrieve the roles "
warnMsg += "for user '%s'" % user
logger.warn(warnMsg)
retrievedUsers.add(user)
if not kb.data.cachedUsersRoles:
errMsg = "unable to retrieve the roles "
errMsg += "for the database users"
raise SqlmapNoneDataException(errMsg)
for user, privileges in kb.data.cachedUsersRoles.items():
if isAdminFromPrivileges(privileges):
areAdmins.add(user)
return kb.data.cachedUsersRoles, areAdmins
| gpl-3.0 |
zenodo/invenio | invenio/ext/email/errors.py | 17 | 1187 | # -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
#
# Invenio is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
invenio.ext.email.errors
------------------------
Contains standard error messages for email extension.
"""
class EmailError(Exception):
"""A generic email error."""
def __init__(self, message):
"""Initialisation."""
self.message = message
def __str__(self):
"""String representation."""
return repr(self.message)
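# Illustrative usage (the message text is hypothetical):
#     raise EmailError('cannot connect to the SMTP server')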
| gpl-2.0 |
nttks/edx-platform | lms/djangoapps/mobile_api/test_milestones.py | 80 | 5550 | """
Milestone related tests for the mobile_api
"""
from mock import patch
from courseware.access_response import MilestoneError
from courseware.tests.helpers import get_request_for_user
from courseware.tests.test_entrance_exam import answer_entrance_exam_problem, add_entrance_exam_milestone
from util.milestones_helpers import (
add_prerequisite_course,
fulfill_course_milestone,
seed_milestone_relationship_types,
)
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
class MobileAPIMilestonesMixin(object):
"""
Tests the Mobile API decorators for milestones.
The two milestones currently supported in these tests are entrance exams and
pre-requisite courses. If either of these milestones are unfulfilled,
the mobile api will appropriately block content until the milestone is
fulfilled.
"""
ALLOW_ACCESS_TO_MILESTONE_COURSE = False # pylint: disable=invalid-name
@patch.dict('django.conf.settings.FEATURES', {'ENABLE_PREREQUISITE_COURSES': True, 'MILESTONES_APP': True})
def test_unfulfilled_prerequisite_course(self):
""" Tests the case for an unfulfilled pre-requisite course """
self._add_prerequisite_course()
self.init_course_access()
self._verify_unfulfilled_milestone_response()
@patch.dict('django.conf.settings.FEATURES', {'ENABLE_PREREQUISITE_COURSES': True, 'MILESTONES_APP': True})
def test_unfulfilled_prerequisite_course_for_staff(self):
self._add_prerequisite_course()
self.user.is_staff = True
self.user.save()
self.init_course_access()
self.api_response()
@patch.dict('django.conf.settings.FEATURES', {'ENABLE_PREREQUISITE_COURSES': True, 'MILESTONES_APP': True})
def test_fulfilled_prerequisite_course(self):
"""
Tests the case when a user fulfills existing pre-requisite course
"""
self._add_prerequisite_course()
add_prerequisite_course(self.course.id, self.prereq_course.id)
fulfill_course_milestone(self.prereq_course.id, self.user)
self.init_course_access()
self.api_response()
@patch.dict('django.conf.settings.FEATURES', {'ENTRANCE_EXAMS': True, 'MILESTONES_APP': True})
def test_unpassed_entrance_exam(self):
"""
Tests the case where the user has not passed the entrance exam
"""
self._add_entrance_exam()
self.init_course_access()
self._verify_unfulfilled_milestone_response()
@patch.dict('django.conf.settings.FEATURES', {'ENTRANCE_EXAMS': True, 'MILESTONES_APP': True})
def test_unpassed_entrance_exam_for_staff(self):
self._add_entrance_exam()
self.user.is_staff = True
self.user.save()
self.init_course_access()
self.api_response()
@patch.dict('django.conf.settings.FEATURES', {'ENTRANCE_EXAMS': True, 'MILESTONES_APP': True})
def test_passed_entrance_exam(self):
"""
Tests access when user has passed the entrance exam
"""
self._add_entrance_exam()
self._pass_entrance_exam()
self.init_course_access()
self.api_response()
def _add_entrance_exam(self):
""" Sets up entrance exam """
seed_milestone_relationship_types()
self.course.entrance_exam_enabled = True
self.entrance_exam = ItemFactory.create( # pylint: disable=attribute-defined-outside-init
parent=self.course,
category="chapter",
display_name="Entrance Exam Chapter",
is_entrance_exam=True,
in_entrance_exam=True
)
self.problem_1 = ItemFactory.create( # pylint: disable=attribute-defined-outside-init
parent=self.entrance_exam,
category='problem',
display_name="The Only Exam Problem",
graded=True,
in_entrance_exam=True
)
add_entrance_exam_milestone(self.course, self.entrance_exam)
self.course.entrance_exam_minimum_score_pct = 0.50
self.course.entrance_exam_id = unicode(self.entrance_exam.location)
modulestore().update_item(self.course, self.user.id)
def _add_prerequisite_course(self):
""" Helper method to set up the prerequisite course """
seed_milestone_relationship_types()
self.prereq_course = CourseFactory.create() # pylint: disable=attribute-defined-outside-init
add_prerequisite_course(self.course.id, self.prereq_course.id)
def _pass_entrance_exam(self):
""" Helper function to pass the entrance exam """
request = get_request_for_user(self.user)
answer_entrance_exam_problem(self.course, request, self.problem_1)
def _verify_unfulfilled_milestone_response(self):
"""
Verifies the response depending on ALLOW_ACCESS_TO_MILESTONE_COURSE
Since different endpoints will have different behaviours towards milestones,
setting ALLOW_ACCESS_TO_MILESTONE_COURSE (default is False) to True, will
not return a 404. For example, when getting a list of courses a user is
enrolled in, although a user may have unfulfilled milestones, the course
should still show up in the course enrollments list.
"""
if self.ALLOW_ACCESS_TO_MILESTONE_COURSE:
self.api_response()
else:
response = self.api_response(expected_response_code=404)
self.assertEqual(response.data, MilestoneError().to_json())
| agpl-3.0 |
nortikin/sverchok | nodes/curve/blend_curves.py | 2 | 10315 |
import numpy as np
import bpy
from bpy.props import FloatProperty, EnumProperty, BoolProperty, IntProperty
from sverchok.node_tree import SverchCustomTreeNode
from sverchok.data_structure import (updateNode, zip_long_repeat, ensure_nesting_level,
repeat_last_for_length, throttle_and_update_node)
from sverchok.utils.curve import SvCurve, SvCubicBezierCurve, SvBezierCurve, SvLine
from sverchok.utils.curve.algorithms import concatenate_curves
from sverchok.utils.curve.biarc import SvBiArc
class SvBlendCurvesMk2Node(bpy.types.Node, SverchCustomTreeNode):
"""
Triggers: Blend curves
Tooltip: Blend two or more curves by use of Bezier curve segment
"""
bl_idname = 'SvExBlendCurvesMk2Node'
bl_label = 'Blend Curves'
bl_icon = 'CURVE_NCURVE'
sv_icon = 'SV_BLEND_CURVE'
factor1 : FloatProperty(
name = "Factor 1",
description = "Bulge factor for the first curve",
default = 1.0,
update = updateNode)
factor2 : FloatProperty(
name = "Factor 2",
description = "Bulge factor for the second curve",
default = 1.0,
update = updateNode)
parameter : FloatProperty(
name = "Parameter",
description = "P parameter for the family of biarcs",
default = 1.0,
update = updateNode)
@throttle_and_update_node
def update_sockets(self, context):
self.inputs['Curve1'].hide_safe = self.mode != 'TWO'
self.inputs['Curve2'].hide_safe = self.mode != 'TWO'
self.inputs['Curves'].hide_safe = self.mode != 'N'
self.inputs['Factor1'].hide_safe = self.smooth_mode != '1'
self.inputs['Factor2'].hide_safe = self.smooth_mode != '1'
self.inputs['Parameter'].hide_safe = self.smooth_mode != '1b'
modes = [
('TWO', "Two curves", "Blend two curves", 0),
('N', "List of curves", "Blend several curves", 1)
]
mode : EnumProperty(
name = "Blend",
items = modes,
default = 'TWO',
update = update_sockets)
smooth_modes = [
('0', "0 - Position", "Connect ends of curves with straight line segment", 0),
('1', "1 - Tangency", "Connect curves such that their tangents are smoothly joined", 1),
('1b', "1 - Bi Arc", "Connect curves with Bi Arc, such that tangents are smoothly joined", 2),
('2', "2 - Normals", "Connect curves such that their normals (second derivatives) are smoothly joined", 3),
('3', "3 - Curvature", "Connect curves such that their curvatures (third derivatives) are smoothly joined", 4)
]
smooth_mode : EnumProperty(
name = "Continuity",
description = "How smooth should be the blending; bigger value give more smooth curves",
items = smooth_modes,
default = '1',
update = update_sockets)
cyclic : BoolProperty(
name = "Cyclic",
default = False,
update = updateNode)
output_src : BoolProperty(
name = "Output source curves",
default = True,
update = updateNode)
concat : BoolProperty(
name = "Concatenate",
description = "Concatenate source curves and blending curves into one curve",
default = True,
update = updateNode)
planar_tolerance : FloatProperty(
name = "Planar Tolerance",
description = "Tolerance value for checking if the curve is planar",
default = 1e-6,
precision = 8,
update = updateNode)
def draw_buttons(self, context, layout):
layout.label(text="Continuity:")
layout.prop(self, 'smooth_mode', text='')
layout.prop(self, "mode", text='')
layout.prop(self, 'concat', toggle=True)
if self.mode == 'N':
layout.prop(self, 'cyclic', toggle=True)
if not self.concat:
layout.prop(self, 'output_src', toggle=True)
def draw_buttons_ext(self, context, layout):
self.draw_buttons(context, layout)
layout.prop(self, 'planar_tolerance')
def sv_init(self, context):
self.inputs.new('SvCurveSocket', 'Curve1')
self.inputs.new('SvCurveSocket', 'Curve2')
self.inputs.new('SvCurveSocket', 'Curves')
self.inputs.new('SvStringsSocket', "Factor1").prop_name = 'factor1'
self.inputs.new('SvStringsSocket', "Factor2").prop_name = 'factor2'
self.inputs.new('SvStringsSocket', "Parameter").prop_name = 'parameter'
self.outputs.new('SvCurveSocket', 'Curve')
self.outputs.new('SvVerticesSocket', "ControlPoints")
self.update_sockets(context)
def get_inputs(self):
factor1_s = self.inputs['Factor1'].sv_get()
factor2_s = self.inputs['Factor2'].sv_get()
params_s = self.inputs['Parameter'].sv_get()
factor1_s = ensure_nesting_level(factor1_s, 2)
factor2_s = ensure_nesting_level(factor2_s, 2)
params_s = ensure_nesting_level(params_s, 2)
if self.mode == 'TWO':
curve1_s = self.inputs['Curve1'].sv_get()
curve2_s = self.inputs['Curve2'].sv_get()
if isinstance(curve1_s[0], SvCurve):
curve1_s = [curve1_s]
if isinstance(curve2_s[0], SvCurve):
curve2_s = [curve2_s]
for inputs in zip_long_repeat(curve1_s, curve2_s, factor1_s, factor2_s, params_s):
for curve1, curve2, factor1, factor2, parameter in zip_long_repeat(*inputs):
yield curve1, curve2, factor1, factor2, parameter
else:
curves_s = self.inputs['Curves'].sv_get()
if isinstance(curves_s[0], SvCurve):
curves_s = [curves_s]
for curves, factor1s, factor2s, params in zip_long_repeat(curves_s, factor1_s, factor2_s, params_s):
factor1s = repeat_last_for_length(factor1s, len(curves))
factor2s = repeat_last_for_length(factor2s, len(curves))
params = repeat_last_for_length(params, len(curves))
for curve1, curve2, factor1, factor2, parameter in zip(curves, curves[1:], factor1s, factor2s, params):
yield curve1, curve2, factor1, factor2, parameter
if self.cyclic:
yield curves[-1], curves[0], factor1s[-1], factor2s[-1], params[-1]
def process(self):
if not any(socket.is_linked for socket in self.outputs):
return
output_src = self.output_src or self.concat
curves_out = []
controls_out = []
is_first = True
for curve1, curve2, factor1, factor2, parameter in self.get_inputs():
_, t_max_1 = curve1.get_u_bounds()
t_min_2, _ = curve2.get_u_bounds()
curve1_end = curve1.evaluate(t_max_1)
curve2_begin = curve2.evaluate(t_min_2)
smooth = self.smooth_mode
if smooth == '0':
new_curve = SvLine.from_two_points(curve1_end, curve2_begin)
new_controls = [curve1_end, curve2_begin]
elif smooth == '1':
tangent_1_end = curve1.tangent(t_max_1)
tangent_2_begin = curve2.tangent(t_min_2)
tangent1 = factor1 * tangent_1_end
tangent2 = factor2 * tangent_2_begin
new_curve = SvCubicBezierCurve(
curve1_end,
curve1_end + tangent1 / 3.0,
curve2_begin - tangent2 / 3.0,
curve2_begin
)
new_controls = [new_curve.p0.tolist(), new_curve.p1.tolist(),
new_curve.p2.tolist(), new_curve.p3.tolist()]
elif smooth == '1b':
tangent_1_end = curve1.tangent(t_max_1)
tangent_2_begin = curve2.tangent(t_min_2)
new_curve = SvBiArc.calc(
curve1_end, curve2_begin,
tangent_1_end, tangent_2_begin,
parameter,
planar_tolerance = self.planar_tolerance)
new_controls = [new_curve.junction.tolist()]
elif smooth == '2':
tangent_1_end = curve1.tangent(t_max_1)
tangent_2_begin = curve2.tangent(t_min_2)
second_1_end = curve1.second_derivative(t_max_1)
second_2_begin = curve2.second_derivative(t_min_2)
new_curve = SvBezierCurve.blend_second_derivatives(
curve1_end, tangent_1_end, second_1_end,
curve2_begin, tangent_2_begin, second_2_begin)
new_controls = [p.tolist() for p in new_curve.points]
elif smooth == '3':
tangent_1_end = curve1.tangent(t_max_1)
tangent_2_begin = curve2.tangent(t_min_2)
second_1_end = curve1.second_derivative(t_max_1)
second_2_begin = curve2.second_derivative(t_min_2)
third_1_end = curve1.third_derivative_array(np.array([t_max_1]))[0]
third_2_begin = curve2.third_derivative_array(np.array([t_min_2]))[0]
new_curve = SvBezierCurve.blend_third_derivatives(
curve1_end, tangent_1_end, second_1_end, third_1_end,
curve2_begin, tangent_2_begin, second_2_begin, third_2_begin)
new_controls = [p.tolist() for p in new_curve.points]
else:
raise Exception("Unsupported smooth level")
if self.mode == 'N' and not self.cyclic and output_src and is_first:
curves_out.append(curve1)
curves_out.append(new_curve)
if self.mode == 'N' and output_src:
curves_out.append(curve2)
controls_out.append(new_controls)
is_first = False
if self.concat:
curves_out = [concatenate_curves(curves_out)]
self.outputs['Curve'].sv_set(curves_out)
self.outputs['ControlPoints'].sv_set(controls_out)
def register():
bpy.utils.register_class(SvBlendCurvesMk2Node)
def unregister():
bpy.utils.unregister_class(SvBlendCurvesMk2Node)
| gpl-3.0 |
ftomassetti/intellij-community | python/lib/Lib/site-packages/django/contrib/gis/db/backends/spatialite/base.py | 244 | 4347 | from ctypes.util import find_library
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.db.backends.sqlite3.base import *
from django.db.backends.sqlite3.base import DatabaseWrapper as SqliteDatabaseWrapper, \
_sqlite_extract, _sqlite_date_trunc, _sqlite_regexp
from django.contrib.gis.db.backends.spatialite.client import SpatiaLiteClient
from django.contrib.gis.db.backends.spatialite.creation import SpatiaLiteCreation
from django.contrib.gis.db.backends.spatialite.introspection import SpatiaLiteIntrospection
from django.contrib.gis.db.backends.spatialite.operations import SpatiaLiteOperations
class DatabaseWrapper(SqliteDatabaseWrapper):
def __init__(self, *args, **kwargs):
# Before we get too far, make sure pysqlite 2.5+ is installed.
if Database.version_info < (2, 5, 0):
raise ImproperlyConfigured('Only versions of pysqlite 2.5+ are '
'compatible with SpatiaLite and GeoDjango.')
# Trying to find the location of the SpatiaLite library.
# Here we are figuring out the path to the SpatiaLite library
# (`libspatialite`). If it's not in the system library path (e.g., it
# cannot be found by `ctypes.util.find_library`), then it may be set
# manually in the settings via the `SPATIALITE_LIBRARY_PATH` setting.
self.spatialite_lib = getattr(settings, 'SPATIALITE_LIBRARY_PATH',
find_library('spatialite'))
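# For example, a hypothetical settings.py entry:
#     SPATIALITE_LIBRARY_PATH = '/usr/local/lib/libspatialite.so'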
if not self.spatialite_lib:
raise ImproperlyConfigured('Unable to locate the SpatiaLite library. '
'Make sure it is in your library path, or set '
'SPATIALITE_LIBRARY_PATH in your settings.'
)
super(DatabaseWrapper, self).__init__(*args, **kwargs)
self.ops = SpatiaLiteOperations(self)
self.client = SpatiaLiteClient(self)
self.creation = SpatiaLiteCreation(self)
self.introspection = SpatiaLiteIntrospection(self)
def _cursor(self):
if self.connection is None:
## The following is the same as in django.db.backends.sqlite3.base ##
settings_dict = self.settings_dict
if not settings_dict['NAME']:
raise ImproperlyConfigured("Please fill out the database NAME in the settings module before using the database.")
kwargs = {
'database': settings_dict['NAME'],
'detect_types': Database.PARSE_DECLTYPES | Database.PARSE_COLNAMES,
}
kwargs.update(settings_dict['OPTIONS'])
self.connection = Database.connect(**kwargs)
# Register extract, date_trunc, and regexp functions.
self.connection.create_function("django_extract", 2, _sqlite_extract)
self.connection.create_function("django_date_trunc", 2, _sqlite_date_trunc)
self.connection.create_function("regexp", 2, _sqlite_regexp)
connection_created.send(sender=self.__class__, connection=self)
## From here on, customized for GeoDjango ##
# Enabling extension loading on the SQLite connection.
try:
self.connection.enable_load_extension(True)
except AttributeError:
raise ImproperlyConfigured('The pysqlite library does not support C extension loading. '
'Both SQLite and pysqlite must be configured to allow '
'the loading of extensions to use SpatiaLite.'
)
# Loading the SpatiaLite library extension on the connection, and returning
# the created cursor.
cur = self.connection.cursor(factory=SQLiteCursorWrapper)
try:
cur.execute("SELECT load_extension(%s)", (self.spatialite_lib,))
except Exception, msg:
raise ImproperlyConfigured('Unable to load the SpatiaLite library extension '
'"%s" because: %s' % (self.spatialite_lib, msg))
return cur
else:
return self.connection.cursor(factory=SQLiteCursorWrapper)
| apache-2.0 |
dongjoon-hyun/tensorflow | tensorflow/contrib/slim/python/slim/data/data_decoder.py | 18 | 2315 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains helper functions and classes necessary for decoding data.
While data providers read data from disk, sstables or other formats, data
decoders decode the data (if necessary). A data decoder is provided with a
serialized or encoded piece of data as well as a list of items and
returns a set of tensors, each of which correspond to the requested list of
items extracted from the data:
def Decode(self, data, items):
...
For example, if data is a compressed map, the implementation might be:
def Decode(self, data, items):
decompressed_map = _Decompress(data)
outputs = []
for item in items:
outputs.append(decompressed_map[item])
return outputs.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class DataDecoder(object):
"""An abstract class which is used to decode data for a provider."""
@abc.abstractmethod
def decode(self, data, items):
"""Decodes the data to returns the tensors specified by the list of items.
Args:
data: A possibly encoded data format.
items: A list of strings, each of which indicate a particular data type.
Returns:
A list of `Tensors`, whose length matches the length of `items`, where
each `Tensor` corresponds to each item.
Raises:
ValueError: If any of the items cannot be satisfied.
"""
pass
@abc.abstractmethod
def list_items(self):
"""Lists the names of the items that the decoder can decode.
Returns:
A list of string names.
"""
pass
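# A minimal sketch of a concrete decoder, assuming `data` is already a dict
# mapping item names to tensors (the class and names are illustrative, not
# part of the TensorFlow API):
#
# class DictDecoder(DataDecoder):
#   def __init__(self, item_names):
#     self._item_names = item_names
#   def decode(self, data, items):
#     return [data[item] for item in items]
#   def list_items(self):
#     return list(self._item_names)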
| apache-2.0 |
visualputty/Landing-Lights | django/views/i18n.py | 188 | 9673 | import os
import gettext as gettext_module
from django import http
from django.conf import settings
from django.utils import importlib
from django.utils.translation import check_for_language, activate, to_locale, get_language
from django.utils.text import javascript_quote
from django.utils.encoding import smart_unicode
from django.utils.formats import get_format_modules, get_format
def set_language(request):
"""
Redirect to a given url while setting the chosen language in the
session or cookie. The url and the language code need to be
specified in the request parameters.
Since this view changes how the user will see the rest of the site, it must
only be accessed as a POST request. If called as a GET request, it will
redirect to the page in the request (the 'next' parameter) without changing
any state.
"""
next = request.REQUEST.get('next', None)
if not next:
next = request.META.get('HTTP_REFERER', None)
if not next:
next = '/'
response = http.HttpResponseRedirect(next)
if request.method == 'POST':
lang_code = request.POST.get('language', None)
if lang_code and check_for_language(lang_code):
if hasattr(request, 'session'):
request.session['django_language'] = lang_code
else:
response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang_code)
return response
def get_formats():
"""
Returns all format strings required for i18n to work
"""
FORMAT_SETTINGS = (
'DATE_FORMAT', 'DATETIME_FORMAT', 'TIME_FORMAT',
'YEAR_MONTH_FORMAT', 'MONTH_DAY_FORMAT', 'SHORT_DATE_FORMAT',
'SHORT_DATETIME_FORMAT', 'FIRST_DAY_OF_WEEK', 'DECIMAL_SEPARATOR',
'THOUSAND_SEPARATOR', 'NUMBER_GROUPING',
'DATE_INPUT_FORMATS', 'TIME_INPUT_FORMATS', 'DATETIME_INPUT_FORMATS'
)
result = {}
for module in [settings] + get_format_modules(reverse=True):
for attr in FORMAT_SETTINGS:
result[attr] = get_format(attr)
src = []
for k, v in result.items():
if isinstance(v, (basestring, int)):
src.append("formats['%s'] = '%s';\n" % (javascript_quote(k), javascript_quote(smart_unicode(v))))
elif isinstance(v, (tuple, list)):
v = [javascript_quote(smart_unicode(value)) for value in v]
src.append("formats['%s'] = ['%s'];\n" % (javascript_quote(k), "', '".join(v)))
return ''.join(src)
NullSource = """
/* gettext identity library */
function gettext(msgid) { return msgid; }
function ngettext(singular, plural, count) { return (count == 1) ? singular : plural; }
function gettext_noop(msgid) { return msgid; }
function pgettext(context, msgid) { return msgid; }
function npgettext(context, singular, plural, count) { return (count == 1) ? singular : plural; }
"""
LibHead = """
/* gettext library */
var catalog = new Array();
"""
LibFoot = """
function gettext(msgid) {
var value = catalog[msgid];
if (typeof(value) == 'undefined') {
return msgid;
} else {
return (typeof(value) == 'string') ? value : value[0];
}
}
function ngettext(singular, plural, count) {
value = catalog[singular];
if (typeof(value) == 'undefined') {
return (count == 1) ? singular : plural;
} else {
return value[pluralidx(count)];
}
}
function gettext_noop(msgid) { return msgid; }
function pgettext(context, msgid) {
var value = gettext(context + '\x04' + msgid);
if (value.indexOf('\x04') != -1) {
value = msgid;
}
return value;
}
function npgettext(context, singular, plural, count) {
var value = ngettext(context + '\x04' + singular, context + '\x04' + plural, count);
if (value.indexOf('\x04') != -1) {
value = ngettext(singular, plural, count);
}
return value;
}
"""
LibFormatHead = """
/* formatting library */
var formats = new Array();
"""
LibFormatFoot = """
function get_format(format_type) {
var value = formats[format_type];
if (typeof(value) == 'undefined') {
return format_type;
} else {
return value;
}
}
"""
SimplePlural = """
function pluralidx(count) { return (count == 1) ? 0 : 1; }
"""
InterPolate = r"""
function interpolate(fmt, obj, named) {
if (named) {
return fmt.replace(/%\(\w+\)s/g, function(match){return String(obj[match.slice(2,-2)])});
} else {
return fmt.replace(/%s/g, function(match){return String(obj.shift())});
}
}
"""
PluralIdx = r"""
function pluralidx(n) {
var v=%s;
if (typeof(v) == 'boolean') {
return v ? 1 : 0;
} else {
return v;
}
}
"""
def null_javascript_catalog(request, domain=None, packages=None):
"""
Returns "identity" versions of the JavaScript i18n functions -- i.e.,
versions that don't actually do anything.
"""
src = [NullSource, InterPolate, LibFormatHead, get_formats(), LibFormatFoot]
return http.HttpResponse(''.join(src), 'text/javascript')
def javascript_catalog(request, domain='djangojs', packages=None):
"""
Returns the selected language catalog as a javascript library.
Receives the list of packages to check for translations in the
packages parameter either from an infodict or as a +-delimited
string from the request. Default is 'django.conf'.
Additionally you can override the gettext domain for this view,
but usually you don't want to do that, as JavaScript messages
go to the djangojs domain. But this might be needed if you
deliver your JavaScript source from Django templates.
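A typical hookup, with a hypothetical package name, looks like::
js_info_dict = {'packages': ('your.app.package',)}
urlpatterns = patterns('',
(r'^jsi18n/$', 'django.views.i18n.javascript_catalog', js_info_dict),
)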
"""
if request.GET:
if 'language' in request.GET:
if check_for_language(request.GET['language']):
activate(request.GET['language'])
if packages is None:
packages = ['django.conf']
if isinstance(packages, basestring):
packages = packages.split('+')
packages = [p for p in packages if p == 'django.conf' or p in settings.INSTALLED_APPS]
default_locale = to_locale(settings.LANGUAGE_CODE)
locale = to_locale(get_language())
t = {}
paths = []
en_selected = locale.startswith('en')
en_catalog_missing = True
# paths of requested packages
for package in packages:
p = importlib.import_module(package)
path = os.path.join(os.path.dirname(p.__file__), 'locale')
paths.append(path)
# add the filesystem paths listed in the LOCALE_PATHS setting
paths.extend(list(reversed(settings.LOCALE_PATHS)))
# first load all english languages files for defaults
for path in paths:
try:
catalog = gettext_module.translation(domain, path, ['en'])
t.update(catalog._catalog)
except IOError:
pass
else:
# 'en' is the selected language and at least one of the packages
# listed in `packages` has an 'en' catalog
if en_selected:
en_catalog_missing = False
# next load the settings.LANGUAGE_CODE translations if it isn't english
if default_locale != 'en':
for path in paths:
try:
catalog = gettext_module.translation(domain, path, [default_locale])
except IOError:
catalog = None
if catalog is not None:
t.update(catalog._catalog)
# last load the currently selected language, if it isn't identical to the default.
if locale != default_locale:
# If the currently selected language is English but it doesn't have a
# translation catalog (presumably due to being the language translated
# from) then a wrong language catalog might have been loaded in the
# previous step. It needs to be discarded.
if en_selected and en_catalog_missing:
t = {}
else:
locale_t = {}
for path in paths:
try:
catalog = gettext_module.translation(domain, path, [locale])
except IOError:
catalog = None
if catalog is not None:
locale_t.update(catalog._catalog)
if locale_t:
t = locale_t
src = [LibHead]
plural = None
if '' in t:
for l in t[''].split('\n'):
if l.startswith('Plural-Forms:'):
plural = l.split(':',1)[1].strip()
if plural is not None:
# this should actually be a compiled function of a typical plural-form:
# Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;
plural = [el.strip() for el in plural.split(';') if el.strip().startswith('plural=')][0].split('=',1)[1]
src.append(PluralIdx % plural)
else:
src.append(SimplePlural)
csrc = []
pdict = {}
for k, v in t.items():
if k == '':
continue
if isinstance(k, basestring):
csrc.append("catalog['%s'] = '%s';\n" % (javascript_quote(k), javascript_quote(v)))
elif isinstance(k, tuple):
if k[0] not in pdict:
pdict[k[0]] = k[1]
else:
pdict[k[0]] = max(k[1], pdict[k[0]])
csrc.append("catalog['%s'][%d] = '%s';\n" % (javascript_quote(k[0]), k[1], javascript_quote(v)))
else:
raise TypeError(k)
csrc.sort()
for k, v in pdict.items():
src.append("catalog['%s'] = [%s];\n" % (javascript_quote(k), ','.join(["''"]*(v+1))))
src.extend(csrc)
src.append(LibFoot)
src.append(InterPolate)
src.append(LibFormatHead)
src.append(get_formats())
src.append(LibFormatFoot)
src = ''.join(src)
return http.HttpResponse(src, 'text/javascript')
| bsd-3-clause |
gauribhoite/personfinder | env/site-packages/django/db/backends/postgresql_psycopg2/version.py | 632 | 1517 | """
Extracts the version of the PostgreSQL server.
"""
import re
# This reg-exp is intentionally fairly flexible here.
# Needs to be able to handle stuff like:
# PostgreSQL #.#.#
# EnterpriseDB #.#
# PostgreSQL #.# beta#
# PostgreSQL #.#beta#
VERSION_RE = re.compile(r'\S+ (\d+)\.(\d+)\.?(\d+)?')
def _parse_version(text):
"Internal parsing method. Factored out for testing purposes."
major, major2, minor = VERSION_RE.search(text).groups()
try:
return int(major) * 10000 + int(major2) * 100 + int(minor)
except (ValueError, TypeError):
return int(major) * 10000 + int(major2) * 100
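# Illustrative results, matching the scheme described in get_version():
#     _parse_version('PostgreSQL 9.3.4')    -> 90304
#     _parse_version('PostgreSQL 9.4beta1') -> 90400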
def get_version(connection):
"""
Returns an integer representing the major, minor and revision number of the
server. Format is the one used for the return value of libpq
PQServerVersion()/``server_version`` connection attribute (available in
newer psycopg2 versions.)
For example, 90304 for 9.3.4. The last two digits will be 00 in the case of
releases (e.g., 90400 for 'PostgreSQL 9.4') or in the case of beta and
prereleases (e.g. 90100 for 'PostgreSQL 9.1beta2').
PQServerVersion()/``server_version`` doesn't execute a query so try that
first, then fallback to a ``SELECT version()`` query.
"""
if hasattr(connection, 'server_version'):
return connection.server_version
else:
with connection.cursor() as cursor:
cursor.execute("SELECT version()")
return _parse_version(cursor.fetchone()[0])
| apache-2.0 |
gauribhoite/personfinder | tools/iterate.py | 17 | 1735 | #!/usr/bin/env python
# Copyright 2011 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import re
import sys
from google.appengine.ext import db
def iterate(query, callback=lambda x: x, batch_size=1000, verbose=True):
"""Utility for iterating over a query, applying the callback to each row."""
start = time.time()
count = 0
results = query.fetch(batch_size)
while results:
rstart = time.time()
for row in results:
output = callback(row)
if output:
print output
count += 1
if verbose:
print '%s rows processed in %.1fs' % (count, time.time() - rstart)
print 'total time: %.1fs' % (time.time() - start)
results = query.with_cursor(query.cursor()).fetch(batch_size)
callback(None)  # sentinel call with no row so stateful callbacks can detect completion
print 'total rows: %s, total time: %.1fs' % (count, time.time() - start)
def dangling_pic(pic):
"""Filter for photos with no referencing person."""
if not pic:  # ignore the sentinel passed at the end of iteration
return None
ppl = pic.person_set.fetch(100)
if not ppl:
return pic.key().id()
ids = []
def dangling_pic_list(pic):
"""Track photos with no referencing person."""
if pic and not pic.person_set.count():
ids.append(pic.key().id())
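# Example usage (editor's sketch; assumes a Photo model exists in this app):
# iterate(db.GqlQuery('SELECT * FROM Photo'), dangling_pic, batch_size=500)
# prints the id of every photo that no person record references.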
| apache-2.0 |
ua-snap/downscale | snap_scripts/epscor_sc/fix_singledigit_months_epscor_sc.py | 1 | 1184 | # walk directories and repair single digit months to 2 digit
import os

def get_month( fn ):
return fn.split( '.' )[ 0 ].split( '_' )[ -2 ]
def get_year( fn ):
return fn.split( '.' )[ 0 ].split( '_' )[ -1 ]
def fix_month( fn ):
lookup = {'1':'01','2':'02','3':'03','4':'04',\
'5':'05','6':'06','7':'07','8':'08','9':'09',\
'10':'10','11':'11','12':'12'}
month = get_month( fn )
year = get_year( fn )
if month in lookup:
new_month = lookup[ month ]
new_fn = fn.replace( '_'+month+'_', '_'+new_month+'_' )
os.rename( fn, new_fn )
return 1
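# Example (editor's illustration; the filename pattern is assumed): a file such
# as 'tas_cru_1_1901.tif' has month '1' and year '1901', so it is renamed to
# 'tas_cru_01_1901.tif'; two-digit months map to themselves, so the rename is a no-op.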
if __name__ == '__main__':
from pathos import multiprocessing as mp
base_dirs = [ '/workspace/Shared/Tech_Projects/EPSCoR_Southcentral/project_data/downscaled_cru', \
'/workspace/Shared/Tech_Projects/EPSCoR_Southcentral/project_data/downscaled_cmip5' ]
for base_dir in base_dirs:
out_files = []
for root, subs, files in os.walk( base_dir ):
if len([ i for i in files if i.endswith( '.tif' ) ]) > 0:
out_files = out_files + [ os.path.join( root, j ) for j in files ]
pool = mp.Pool( 32 )
pool.map( fix_month, out_files )
pool.close()
pool.join()
pool.terminate()
pool = None
| mit |
puzan/ansible | lib/ansible/modules/cloud/google/gce_mig.py | 24 | 30500 | #!/usr/bin/python
# Copyright 2016 Google Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: gce_mig
version_added: "2.2"
short_description: Create, Update or Destroy a Managed Instance Group (MIG).
description:
- Create, Update or Destroy a Managed Instance Group (MIG). See
U(https://cloud.google.com/compute/docs/instance-groups) for an overview.
Full install/configuration instructions for the gce* modules can
be found in the comments of ansible/test/gce_tests.py.
requirements:
- "python >= 2.6"
- "apache-libcloud >= 1.2.0"
notes:
- Resizing and recreating VMs are also supported.
- An existing instance template is required in order to create a
Managed Instance Group.
author:
- "Tom Melendez (@supertom) <tom@supertom.com>"
options:
name:
description:
- Name of the Managed Instance Group.
required: true
template:
description:
- Instance Template to be used in creating the VMs. See
U(https://cloud.google.com/compute/docs/instance-templates) to learn more
about Instance Templates. Required for creating MIGs.
required: false
size:
description:
- Size of Managed Instance Group. If MIG already exists, it will be
resized to the number provided here. Required for creating MIGs.
required: false
service_account_email:
description:
- service account email
required: false
default: null
credentials_file:
description:
- Path to the JSON file associated with the service account email
default: null
required: false
project_id:
description:
- GCE project ID
required: false
default: null
state:
description:
- desired state of the resource
required: false
default: "present"
choices: ["absent", "present"]
zone:
description:
- The GCE zone to use for this Managed Instance Group.
required: true
autoscaling:
description:
- A dictionary of configuration for the autoscaler. 'enabled (bool)', 'name (str)'
and policy.max_instances (int) are required fields if autoscaling is used. See
U(https://cloud.google.com/compute/docs/reference/beta/autoscalers) for more information
on Autoscaling.
required: false
default: null
named_ports:
version_added: "2.3"
description:
- Define named ports that backend services can forward data to. Format is a list of
name:port dictionaries.
required: false
default: null
'''
EXAMPLES = '''
# Following playbook creates, rebuilds instances, resizes and then deletes a MIG.
# Notes:
# - Two valid Instance Templates must exist in your GCE project in order to run
# this playbook. Change the fields to match the templates used in your
# project.
# - The use of the 'pause' module is not required, it is just for convenience.
- name: Managed Instance Group Example
hosts: localhost
gather_facts: False
tasks:
- name: Create MIG
gce_mig:
name: ansible-mig-example
zone: us-central1-c
state: present
size: 1
template: my-instance-template-1
named_ports:
- name: http
port: 80
- name: foobar
port: 82
- name: Pause for 30 seconds
pause:
seconds: 30
- name: Recreate MIG Instances with Instance Template change.
gce_mig:
name: ansible-mig-example
zone: us-central1-c
state: present
template: my-instance-template-2-small
recreate_instances: yes
- name: Pause for 30 seconds
pause:
seconds: 30
- name: Resize MIG
gce_mig:
name: ansible-mig-example
zone: us-central1-c
state: present
size: 3
- name: Update MIG with Autoscaler
gce_mig:
name: ansible-mig-example
zone: us-central1-c
state: present
size: 3
template: my-instance-template-2-small
recreate_instances: yes
autoscaling:
enabled: yes
name: my-autoscaler
policy:
min_instances: 2
max_instances: 5
cool_down_period: 37
cpu_utilization:
target: .39
load_balancing_utilization:
target: 0.4
- name: Pause for 30 seconds
pause:
seconds: 30
- name: Delete MIG
gce_mig:
name: ansible-mig-example
zone: us-central1-c
state: absent
autoscaling:
enabled: no
name: my-autoscaler
'''
RETURN = '''
zone:
description: Zone in which to launch MIG.
returned: always
type: string
sample: "us-central1-b"
template:
description: Instance Template to use for VMs. Must exist prior to using with MIG.
returned: changed
type: string
sample: "my-instance-template"
name:
description: Name of the Managed Instance Group.
returned: changed
type: string
sample: "my-managed-instance-group"
named_ports:
description: list of named ports acted upon
returned: when named_ports are initially set or updated
type: list
sample: [{ "name": "http", "port": 80 }, { "name": "foo", "port": 82 }]
size:
description: Number of VMs in Managed Instance Group.
returned: changed
type: integer
sample: 4
created_instances:
description: Names of instances created.
returned: When instances are created.
type: list
sample: ["ansible-mig-new-0k4y", "ansible-mig-new-0zk5", "ansible-mig-new-kp68"]
deleted_instances:
description: Names of instances deleted.
returned: When instances are deleted.
type: list
sample: ["ansible-mig-new-0k4y", "ansible-mig-new-0zk5", "ansible-mig-new-kp68"]
resize_created_instances:
description: Names of instances created during resizing.
returned: When a resize results in the creation of instances.
type: list
sample: ["ansible-mig-new-0k4y", "ansible-mig-new-0zk5", "ansible-mig-new-kp68"]
resize_deleted_instances:
description: Names of instances deleted during resizing.
returned: When a resize results in the deletion of instances.
type: list
sample: ["ansible-mig-new-0k4y", "ansible-mig-new-0zk5", "ansible-mig-new-kp68"]
recreated_instances:
description: Names of instances recreated.
returned: When instances are recreated.
type: list
sample: ["ansible-mig-new-0k4y", "ansible-mig-new-0zk5", "ansible-mig-new-kp68"]
created_autoscaler:
description: True if creation of an Autoscaler was attempted and succeeded. False otherwise.
returned: When the creation of an Autoscaler was attempted.
type: bool
sample: true
updated_autoscaler:
description: True if an Autoscaler update was attempted and succeeded.
False returned if update failed.
returned: When the update of an Autoscaler was attempted.
type: bool
sample: true
deleted_autoscaler:
description: True if an Autoscaler delete was attempted and succeeded.
False returned if delete failed.
returned: When the delete of an Autoscaler was attempted.
type: bool
sample: true
set_named_ports:
description: True if the named_ports have been set
returned: named_ports have been set
type: bool
sample: true
updated_named_ports:
description: True if the named_ports have been updated
returned: named_ports have been updated
type: bool
sample: true
'''
import socket
try:
import libcloud
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.common.google import GoogleBaseError, QuotaExceededError, \
ResourceExistsError, ResourceInUseError, ResourceNotFoundError
from libcloud.compute.drivers.gce import GCEAddress
_ = Provider.GCE
HAS_LIBCLOUD = True
except ImportError:
HAS_LIBCLOUD = False
try:
from ast import literal_eval
HAS_PYTHON26 = True
except ImportError:
HAS_PYTHON26 = False
def _check_params(params, field_list):
"""
Helper to validate params.
Use this in function definitions if they require specific fields
to be present.
:param params: structure that contains the fields
:type params: ``dict``
:param field_list: list of dict representing the fields
[{'name': str, 'required': True/False', 'type': cls}]
:type field_list: ``list`` of ``dict``
:return True, exits otherwise
:rtype: ``bool``
"""
for d in field_list:
if not d['name'] in params:
if d['required'] is True:
return (False, "%s is required and must be of type: %s" %
(d['name'], str(d['type'])))
else:
if not isinstance(params[d['name']], d['type']):
return (False,
"%s must be of type: %s" % (d['name'], str(d['type'])))
return (True, '')
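# Minimal illustration (editor's sketch, not from the module itself):
# _check_params({'name': 'web', 'port': 80},
#               [{'name': 'name', 'required': True, 'type': str},
#                {'name': 'port', 'required': True, 'type': int}])
# returns (True, ''); a missing required field or a mistyped value instead
# returns (False, "<explanatory message>").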
def _validate_autoscaling_params(params):
"""
Validate that the minimum configuration is present for autoscaling.
:param params: Ansible dictionary containing autoscaling configuration
It is expected that autoscaling config will be found at the
key 'autoscaling'.
:type params: ``dict``
:return: Tuple containing a boolean and a string. True if autoscaler
is valid, False otherwise, plus str for message.
:rtype: ``(``bool``, ``str``)``
"""
if not params['autoscaling']:
# It's optional, so if not set at all, it's valid.
return (True, '')
if not isinstance(params['autoscaling'], dict):
return (False,
'autoscaling: configuration expected to be a dictionary.')
# check first-level required fields
as_req_fields = [
{'name': 'name', 'required': True, 'type': str},
{'name': 'enabled', 'required': True, 'type': bool},
{'name': 'policy', 'required': True, 'type': dict}
] # yapf: disable
(as_req_valid, as_req_msg) = _check_params(params['autoscaling'],
as_req_fields)
if not as_req_valid:
return (False, as_req_msg)
# check policy configuration
as_policy_fields = [
{'name': 'max_instances', 'required': True, 'type': int},
{'name': 'min_instances', 'required': False, 'type': int},
{'name': 'cool_down_period', 'required': False, 'type': int}
] # yapf: disable
(as_policy_valid, as_policy_msg) = _check_params(
params['autoscaling']['policy'], as_policy_fields)
if not as_policy_valid:
return (False, as_policy_msg)
# TODO(supertom): check utilization fields
return (True, '')
def _validate_named_port_params(params):
"""
Validate the named ports parameters
:param params: Ansible dictionary containing named_ports configuration
It is expected that the named_ports config will be found at the
key 'named_ports'. That key should contain a list of
{name : port} dictionaries.
:type params: ``dict``
:return: Tuple containing a boolean and a string. True if params
are valid, False otherwise, plus str for message.
:rtype: ``(``bool``, ``str``)``
"""
if not params['named_ports']:
# It's optional, so if not set at all, it's valid.
return (True, '')
if not isinstance(params['named_ports'], list):
return (False, 'named_ports: expected list of name:port dictionaries.')
req_fields = [
{'name': 'name', 'required': True, 'type': str},
{'name': 'port', 'required': True, 'type': int}
] # yapf: disable
for np in params['named_ports']:
(valid_named_ports, np_msg) = _check_params(np, req_fields)
if not valid_named_ports:
return (False, np_msg)
return (True, '')
def _get_instance_list(mig, field='name', filter_list=['NONE']):
"""
Helper to grab field from instances response.
:param mig: Managed Instance Group Object from libcloud.
:type mig: :class: `GCEInstanceGroupManager`
:param field: Field name in list_managed_instances response. Defaults
to 'name'.
:type field: ``str``
:param filter_list: list of 'currentAction' strings to filter on. Only
items that match a currentAction in this list will
be returned. Default is "['NONE']".
:type filter_list: ``list`` of ``str``
:return: List of strings from list_managed_instances response.
:rtype: ``list``
"""
return [x[field] for x in mig.list_managed_instances()
if x['currentAction'] in filter_list]
def _gen_gce_as_policy(as_params):
"""
Take Autoscaler params and generate GCE-compatible policy.
:param as_params: Dictionary in Ansible-playbook format
containing policy arguments.
:type as_params: ``dict``
:return: GCE-compatible policy dictionary
:rtype: ``dict``
"""
asp_data = {}
asp_data['maxNumReplicas'] = as_params['max_instances']
if 'min_instances' in as_params:
asp_data['minNumReplicas'] = as_params['min_instances']
if 'cool_down_period' in as_params:
asp_data['coolDownPeriodSec'] = as_params['cool_down_period']
if 'cpu_utilization' in as_params and 'target' in as_params[
'cpu_utilization']:
asp_data['cpuUtilization'] = {'utilizationTarget':
as_params['cpu_utilization']['target']}
if 'load_balancing_utilization' in as_params and 'target' in as_params[
'load_balancing_utilization']:
asp_data['loadBalancingUtilization'] = {
'utilizationTarget':
as_params['load_balancing_utilization']['target']
}
return asp_data
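# For example (editor's illustration), the playbook-style policy
#   {'max_instances': 5, 'min_instances': 2, 'cpu_utilization': {'target': 0.39}}
# is translated into the GCE-style dictionary
#   {'maxNumReplicas': 5, 'minNumReplicas': 2,
#    'cpuUtilization': {'utilizationTarget': 0.39}}.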
def create_autoscaler(gce, mig, params):
"""
Create a new Autoscaler for a MIG.
:param gce: An initialized GCE driver object.
:type gce: :class: `GCENodeDriver`
:param mig: An initialized GCEInstanceGroupManager.
:type mig: :class: `GCEInstanceGroupManager`
:param params: Dictionary of autoscaling parameters.
:type params: ``dict``
:return: True if the Autoscaler was created, False otherwise.
:rtype: ``bool``
"""
changed = False
as_policy = _gen_gce_as_policy(params['policy'])
autoscaler = gce.ex_create_autoscaler(name=params['name'], zone=mig.zone,
instance_group=mig, policy=as_policy)
if autoscaler:
changed = True
return changed
def update_autoscaler(gce, autoscaler, params):
"""
Update an Autoscaler.
Takes an existing Autoscaler object, and updates it with
the supplied params before calling libcloud's update method.
:param gce: An initialized GCE driver object.
:type gce: :class: `GCENodeDriver`
:param autoscaler: An initialized GCEAutoscaler.
:type autoscaler: :class: `GCEAutoscaler`
:param params: Dictionary of autoscaling parameters.
:type params: ``dict``
:return: True if changes, False otherwise.
:rtype: ``bool``
"""
as_policy = _gen_gce_as_policy(params['policy'])
if autoscaler.policy != as_policy:
autoscaler.policy = as_policy
autoscaler = gce.ex_update_autoscaler(autoscaler)
if autoscaler:
return True
return False
def delete_autoscaler(autoscaler):
"""
Delete an Autoscaler. Does not affect MIG.
:param autoscaler: Autoscaler object to delete.
:type autoscaler: :class: `GCEAutoscaler`
:return: True if the Autoscaler was deleted, False otherwise.
:rtype: ``bool``
"""
changed = False
if autoscaler.destroy():
changed = True
return changed
def get_autoscaler(gce, name, zone):
"""
Get an Autoscaler from GCE.
If the Autoscaler is not found, None is returned.
:param gce: An initialized GCE driver object.
:type gce: :class: `GCENodeDriver`
:param name: Name of the Autoscaler.
:type name: ``str``
:param zone: Zone that the Autoscaler is located in.
:type zone: ``str``
:return: A GCEAutoscaler object or None.
:rtype: :class: `GCEAutoscaler` or None
"""
try:
# Does the Autoscaler already exist?
return gce.ex_get_autoscaler(name, zone)
except ResourceNotFoundError:
return None
def create_mig(gce, params):
"""
Create a new Managed Instance Group.
:param gce: An initialized GCE driver object.
:type gce: :class: `GCENodeDriver`
:param params: Dictionary of parameters needed by the module.
:type params: ``dict``
:return: Tuple with changed stats and a list of affected instances.
:rtype: tuple in the format of (bool, list)
"""
changed = False
return_data = []
actions_filter = ['CREATING']
mig = gce.ex_create_instancegroupmanager(
name=params['name'], size=params['size'], template=params['template'],
zone=params['zone'])
if mig:
changed = True
return_data = _get_instance_list(mig, filter_list=actions_filter)
return (changed, return_data)
def delete_mig(mig):
"""
Delete a Managed Instance Group. All VMs in that MIG are also deleted.
:param mig: Managed Instance Group Object from Libcloud.
:type mig: :class: `GCEInstanceGroupManager`
:return: Tuple with changed stats and a list of affected instances.
:rtype: tuple in the format of (bool, list)
"""
changed = False
return_data = []
actions_filter = ['NONE', 'CREATING', 'RECREATING', 'DELETING',
'ABANDONING', 'RESTARTING', 'REFRESHING']
instance_names = _get_instance_list(mig, filter_list=actions_filter)
if mig.destroy():
changed = True
return_data = instance_names
return (changed, return_data)
def recreate_instances_in_mig(mig):
"""
Recreate the instances for a Managed Instance Group.
:param mig: Managed Instance Group Object from libcloud.
:type mig: :class: `GCEInstanceGroupManager`
:return: Tuple with changed stats and a list of affected instances.
:rtype: tuple in the format of (bool, list)
"""
changed = False
return_data = []
actions_filter = ['RECREATING']
if mig.recreate_instances():
changed = True
return_data = _get_instance_list(mig, filter_list=actions_filter)
return (changed, return_data)
def resize_mig(mig, size):
"""
Resize a Managed Instance Group.
Based on the size provided, GCE will automatically create and delete
VMs as needed.
:param mig: Managed Instance Group Object from libcloud.
:type mig: :class: `GCEInstanceGroupManager`
:param size: Desired number of instances in the Managed Instance Group.
:type size: ``int``
:return: Tuple with changed stats and a list of affected instances.
:rtype: tuple in the format of (bool, list)
"""
changed = False
return_data = []
actions_filter = ['CREATING', 'DELETING']
if mig.resize(size):
changed = True
return_data = _get_instance_list(mig, filter_list=actions_filter)
return (changed, return_data)
def get_mig(gce, name, zone):
"""
Get a Managed Instance Group from GCE.
If the MIG is not found, None is returned.
:param gce: An initialized GCE driver object.
:type gce: :class: `GCENodeDriver`
:param name: Name of the Managed Instance Group.
:type name: ``str``
:param zone: Zone that the Managed Instance Group is located in.
:type zone: ``str``
:return: A GCEInstanceGroupManager object or None.
:rtype: :class: `GCEInstanceGroupManager` or None
"""
try:
# Does the MIG already exist?
return gce.ex_get_instancegroupmanager(name=name, zone=zone)
except ResourceNotFoundError:
return None
def update_named_ports(mig, named_ports):
"""
Set the named ports on a Managed Instance Group.
Sort the existing named ports and new. If different, update.
This also implicitly allows for the removal of named_ports.
:param mig: Managed Instance Group Object from libcloud.
:type mig: :class: `GCEInstanceGroupManager`
:param named_ports: list of dictionaries in the format of {'name': port}
:type named_ports: ``list`` of ``dict``
:return: True if successful
:rtype: ``bool``
"""
changed = False
existing_ports = []
new_ports = []
if hasattr(mig.instance_group, 'named_ports'):
existing_ports = sorted(mig.instance_group.named_ports,
key=lambda x: x['name'])
if named_ports is not None:
new_ports = sorted(named_ports, key=lambda x: x['name'])
if existing_ports != new_ports:
if mig.instance_group.set_named_ports(named_ports):
changed = True
return changed
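# Illustration (editor's sketch): if the group already carries
# [{'name': 'http', 'port': 80}] and named_ports is the same list, the sorted
# lists compare equal and no API call is made; adding {'name': 'https',
# 'port': 443} makes them differ, triggering set_named_ports on the group.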
def main():
module = AnsibleModule(argument_spec=dict(
name=dict(required=True),
template=dict(),
recreate_instances=dict(type='bool', default=False),
# Do not set a default size here. For Create and some update
# operations, it is required and should be explicitly set.
# Below, we set it to the existing value if it has not been set.
size=dict(type='int'),
state=dict(choices=['absent', 'present'], default='present'),
zone=dict(required=True),
autoscaling=dict(type='dict', default=None),
named_ports=dict(type='list', default=None),
service_account_email=dict(),
service_account_permissions=dict(type='list'),
pem_file=dict(),
credentials_file=dict(),
project_id=dict(), ), )
if not HAS_PYTHON26:
module.fail_json(
msg="GCE module requires python's 'ast' module, python v2.6+")
if not HAS_LIBCLOUD:
module.fail_json(
msg='libcloud with GCE Managed Instance Group support (1.2+) required for this module.')
gce = gce_connect(module)
if not hasattr(gce, 'ex_create_instancegroupmanager'):
module.fail_json(
msg='libcloud with GCE Managed Instance Group support (1.2+) required for this module.',
changed=False)
params = {}
params['state'] = module.params.get('state')
params['zone'] = module.params.get('zone')
params['name'] = module.params.get('name')
params['size'] = module.params.get('size')
params['template'] = module.params.get('template')
params['recreate_instances'] = module.params.get('recreate_instances')
params['autoscaling'] = module.params.get('autoscaling', None)
params['named_ports'] = module.params.get('named_ports', None)
(valid_autoscaling, as_msg) = _validate_autoscaling_params(params)
if not valid_autoscaling:
module.fail_json(msg=as_msg, changed=False)
if params['named_ports'] is not None and not hasattr(
gce, 'ex_instancegroup_set_named_ports'):
module.fail_json(
msg="Apache Libcloud 1.3.0+ is required to use 'named_ports' option",
changed=False)
(valid_named_ports, np_msg) = _validate_named_port_params(params)
if not valid_named_ports:
module.fail_json(msg=np_msg, changed=False)
changed = False
json_output = {'state': params['state'], 'zone': params['zone']}
mig = get_mig(gce, params['name'], params['zone'])
if not mig:
if params['state'] == 'absent':
# Doesn't exist in GCE, and state==absent.
changed = False
module.fail_json(
msg="Cannot delete unknown managed instance group: %s" %
(params['name']))
else:
# Create MIG
req_create_fields = [
{'name': 'template', 'required': True, 'type': str},
{'name': 'size', 'required': True, 'type': int}
] # yapf: disable
(valid_create_fields, valid_create_msg) = _check_params(
params, req_create_fields)
if not valid_create_fields:
module.fail_json(msg=valid_create_msg, changed=False)
(changed, json_output['created_instances']) = create_mig(gce,
params)
if params['autoscaling'] and params['autoscaling'][
'enabled'] is True:
# Fetch newly-created MIG and create Autoscaler for it.
mig = get_mig(gce, params['name'], params['zone'])
if not mig:
module.fail_json(
msg='Unable to fetch created MIG %s to create \
autoscaler in zone: %s' % (
params['name'], params['zone']), changed=False)
if not create_autoscaler(gce, mig, params['autoscaling']):
module.fail_json(
msg='Unable to fetch MIG %s to create autoscaler \
in zone: %s' % (params['name'], params['zone']),
changed=False)
json_output['created_autoscaler'] = True
# Add named ports if available
if params['named_ports']:
mig = get_mig(gce, params['name'], params['zone'])
if not mig:
module.fail_json(
msg='Unable to fetch created MIG %s to create \
autoscaler in zone: %s' % (
params['name'], params['zone']), changed=False)
json_output['set_named_ports'] = update_named_ports(
mig, params['named_ports'])
if json_output['set_named_ports']:
json_output['named_ports'] = params['named_ports']
elif params['state'] == 'absent':
# Delete MIG
# First, check and remove the autoscaler, if present.
# Note: multiple autoscalers can be associated with a single MIG. We
# only handle the one that is named, but we might want to think about this.
if params['autoscaling']:
autoscaler = get_autoscaler(gce, params['autoscaling']['name'],
params['zone'])
if not autoscaler:
module.fail_json(msg='Unable to fetch autoscaler %s to delete \
in zone: %s' % (params['autoscaling']['name'], params['zone']),
changed=False)
changed = delete_autoscaler(autoscaler)
json_output['deleted_autoscaler'] = changed
# Now, delete the MIG.
(changed, json_output['deleted_instances']) = delete_mig(mig)
else:
# Update MIG
# If we're going to update a MIG, we need a size and template values.
# If not specified, we use the values from the existing MIG.
if not params['size']:
params['size'] = mig.size
if not params['template']:
params['template'] = mig.template.name
if params['template'] != mig.template.name:
# Update Instance Template.
new_template = gce.ex_get_instancetemplate(params['template'])
mig.set_instancetemplate(new_template)
json_output['updated_instancetemplate'] = True
changed = True
if params['recreate_instances'] is True:
# Recreate Instances.
(changed, json_output['recreated_instances']
) = recreate_instances_in_mig(mig)
if params['size'] != mig.size:
# Resize MIG.
keystr = 'created' if params['size'] > mig.size else 'deleted'
(changed, json_output['resize_%s_instances' %
(keystr)]) = resize_mig(mig, params['size'])
# Update Autoscaler
if params['autoscaling']:
autoscaler = get_autoscaler(gce, params['autoscaling']['name'],
params['zone'])
if not autoscaler:
# Try to create autoscaler.
# Note: this isn't perfect, if the autoscaler name has changed
# we wouldn't know that here.
if not create_autoscaler(gce, mig, params['autoscaling']):
module.fail_json(
msg='Unable to create autoscaler %s for existing MIG %s\
in zone: %s' % (params['autoscaling']['name'],
params['name'], params['zone']),
changed=False)
json_output['created_autoscaler'] = True
changed = True
else:
if params['autoscaling']['enabled'] is False:
# Delete autoscaler
changed = delete_autoscaler(autoscaler)
json_output['deleted_autoscaler'] = changed
else:
# Update policy, etc.
changed = update_autoscaler(gce, autoscaler,
params['autoscaling'])
json_output['updated_autoscaler'] = changed
named_ports = params['named_ports'] or []
json_output['updated_named_ports'] = update_named_ports(mig,
named_ports)
if json_output['updated_named_ports']:
json_output['named_ports'] = named_ports
json_output['changed'] = changed
json_output.update(params)
module.exit_json(**json_output)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.gce import *
if __name__ == '__main__':
main()
| gpl-3.0 |
thnee/ansible | lib/ansible/modules/storage/netapp/netapp_e_storagepool.py | 19 | 45593 | #!/usr/bin/python
# (c) 2016, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {"metadata_version": "1.1",
"status": ["preview"],
"supported_by": "community"}
DOCUMENTATION = """
---
module: netapp_e_storagepool
short_description: NetApp E-Series manage volume groups and disk pools
description: Create or remove volume groups and disk pools for NetApp E-series storage arrays.
version_added: '2.2'
author:
- Kevin Hulquest (@hulquest)
- Nathan Swartz (@ndswartz)
extends_documentation_fragment:
- netapp.eseries
options:
state:
description:
- Whether the specified storage pool should exist or not.
- Note that removing a storage pool currently requires the removal of all defined volumes first.
required: true
choices: ["present", "absent"]
name:
description:
- The name of the storage pool to manage
required: true
criteria_drive_count:
description:
- The number of disks to use for building the storage pool.
- When I(state=="present") then I(criteria_drive_count) or I(criteria_min_usable_capacity) must be specified.
- The pool will be expanded if this number exceeds the number of disks already in place (See expansion note below)
required: false
type: int
criteria_min_usable_capacity:
description:
- The minimum size of the storage pool (in size_unit).
- When I(state=="present") then I(criteria_drive_count) or I(criteria_min_usable_capacity) must be specified.
- The pool will be expanded if this value exceeds its current size. (See expansion note below)
required: false
type: float
criteria_drive_type:
description:
- The type of disk (hdd or ssd) to use when searching for candidates to use.
- When not specified each drive type will be evaluated until successful drive candidates are found starting with
the most prevalent drive type.
required: false
choices: ["hdd","ssd"]
criteria_size_unit:
description:
- The unit used to interpret size parameters
choices: ["bytes", "b", "kb", "mb", "gb", "tb", "pb", "eb", "zb", "yb"]
default: "gb"
criteria_drive_min_size:
description:
- The minimum individual drive size (in size_unit) to consider when choosing drives for the storage pool.
criteria_drive_interface_type:
description:
- The interface type to use when selecting drives for the storage pool
- If not provided then all interface types will be considered.
choices: ["sas", "sas4k", "fibre", "fibre520b", "scsi", "sata", "pata"]
required: false
criteria_drive_require_da:
description:
- Ensures the storage pool will be created with only data assurance (DA) capable drives.
- Only available for new storage pools; existing storage pools cannot be converted.
default: false
type: bool
version_added: '2.9'
criteria_drive_require_fde:
description:
- Whether full disk encryption ability is required for drives to be added to the storage pool
default: false
type: bool
raid_level:
description:
- The RAID level of the storage pool to be created.
- Required only when I(state=="present").
- When I(raid_level=="raidDiskPool") then I(criteria_drive_count >= 10 or criteria_drive_count >= 11) is required
depending on the storage array specifications.
- When I(raid_level=="raid0") then I(1<=criteria_drive_count) is required.
- When I(raid_level=="raid1") then I(2<=criteria_drive_count) is required.
- When I(raid_level=="raid3") then I(3<=criteria_drive_count<=30) is required.
- When I(raid_level=="raid5") then I(3<=criteria_drive_count<=30) is required.
- When I(raid_level=="raid6") then I(5<=criteria_drive_count<=30) is required.
- Note that raidAll will be treated as raidDiskPool and raid3 as raid5.
required: false
choices: ["raidAll", "raid0", "raid1", "raid3", "raid5", "raid6", "raidDiskPool"]
default: "raidDiskPool"
secure_pool:
description:
- Enables security at rest feature on the storage pool.
- Will only work if all drives in the pool are security capable (FDE, FIPS, or mix)
- Warning, once security is enabled it is impossible to disable without erasing the drives.
required: false
type: bool
reserve_drive_count:
description:
- Set the number of drives reserved by the storage pool for reconstruction operations.
- Only valid on raid disk pools.
required: false
remove_volumes:
description:
- Prior to removing a storage pool, delete all volumes in the pool.
default: true
erase_secured_drives:
description:
- If I(state=="absent") then all storage pool drives will be erase
- If I(state=="present") then delete all available storage array drives that have security enabled.
default: true
type: bool
notes:
- The expansion operations are non-blocking due to the time-consuming nature of expanding volume groups
- Expansion of traditional volume groups (raid0, raid1, raid5, raid6) is performed in steps dictated by the storage
array. Each required step will be attempted until the request fails, which is likely because of the required
expansion time.
- raidUnsupported will be treated as raid0, raidAll as raidDiskPool and raid3 as raid5.
- Tray loss protection and drawer loss protection will be chosen if at all possible.
"""
EXAMPLES = """
- name: No disk groups
netapp_e_storagepool:
ssid: "{{ ssid }}"
name: "{{ item }}"
state: absent
api_url: "{{ netapp_api_url }}"
api_username: "{{ netapp_api_username }}"
api_password: "{{ netapp_api_password }}"
validate_certs: "{{ netapp_api_validate_certs }}"
"""
RETURN = """
msg:
description: Success message
returned: success
type: str
sample: Json facts for the pool that was created.
"""
import functools
from itertools import groupby
from time import sleep
from pprint import pformat
from ansible.module_utils.netapp import NetAppESeriesModule
from ansible.module_utils._text import to_native
def get_most_common_elements(iterator):
"""Returns a generator containing a descending list of most common elements."""
if not isinstance(iterator, list):
raise TypeError("iterator must be a list.")
grouped = [(key, len(list(group))) for key, group in groupby(sorted(iterator))]
return sorted(grouped, key=lambda x: x[1], reverse=True)
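# e.g. (editor's illustration):
# get_most_common_elements(['hdd', 'ssd', 'hdd']) -> [('hdd', 2), ('ssd', 1)]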
def memoize(func):
"""Generic memoizer for any function with any number of arguments including zero."""
cache = {}
@functools.wraps(func)
def wrapper(*args, **kwargs):
# Key the cache on the call arguments; argument-less calls share a sentinel key.
key = str((args, kwargs)) if args or kwargs else "no_argument_response"
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
return wrapper
class NetAppESeriesStoragePool(NetAppESeriesModule):
EXPANSION_TIMEOUT_SEC = 10
DEFAULT_DISK_POOL_MINIMUM_DISK_COUNT = 11
def __init__(self):
version = "02.00.0000.0000"
ansible_options = dict(
state=dict(required=True, choices=["present", "absent"], type="str"),
name=dict(required=True, type="str"),
criteria_size_unit=dict(choices=["bytes", "b", "kb", "mb", "gb", "tb", "pb", "eb", "zb", "yb"],
default="gb", type="str"),
criteria_drive_count=dict(type="int"),
criteria_drive_interface_type=dict(choices=["sas", "sas4k", "fibre", "fibre520b", "scsi", "sata", "pata"],
type="str"),
criteria_drive_type=dict(choices=["ssd", "hdd"], type="str", required=False),
criteria_drive_min_size=dict(type="float"),
criteria_drive_require_da=dict(type="bool", required=False),
criteria_drive_require_fde=dict(type="bool", required=False),
criteria_min_usable_capacity=dict(type="float"),
raid_level=dict(choices=["raidAll", "raid0", "raid1", "raid3", "raid5", "raid6", "raidDiskPool"],
default="raidDiskPool"),
erase_secured_drives=dict(type="bool", default=True),
secure_pool=dict(type="bool", default=False),
reserve_drive_count=dict(type="int"),
remove_volumes=dict(type="bool", default=True))
required_if = [["state", "present", ["raid_level"]]]
super(NetAppESeriesStoragePool, self).__init__(ansible_options=ansible_options,
web_services_version=version,
supports_check_mode=True,
required_if=required_if)
args = self.module.params
self.state = args["state"]
self.ssid = args["ssid"]
self.name = args["name"]
self.criteria_drive_count = args["criteria_drive_count"]
self.criteria_min_usable_capacity = args["criteria_min_usable_capacity"]
self.criteria_size_unit = args["criteria_size_unit"]
self.criteria_drive_min_size = args["criteria_drive_min_size"]
self.criteria_drive_type = args["criteria_drive_type"]
self.criteria_drive_interface_type = args["criteria_drive_interface_type"]
self.criteria_drive_require_fde = args["criteria_drive_require_fde"]
self.criteria_drive_require_da = args["criteria_drive_require_da"]
self.raid_level = args["raid_level"]
self.erase_secured_drives = args["erase_secured_drives"]
self.secure_pool = args["secure_pool"]
self.reserve_drive_count = args["reserve_drive_count"]
self.remove_volumes = args["remove_volumes"]
self.pool_detail = None
# Change all sizes to be measured in bytes
if self.criteria_min_usable_capacity:
self.criteria_min_usable_capacity = int(self.criteria_min_usable_capacity *
self.SIZE_UNIT_MAP[self.criteria_size_unit])
if self.criteria_drive_min_size:
self.criteria_drive_min_size = int(self.criteria_drive_min_size *
self.SIZE_UNIT_MAP[self.criteria_size_unit])
self.criteria_size_unit = "bytes"
# Map aliased raid level values to their effective levels, per the documentation notes
if self.raid_level == "raidAll":
self.raid_level = "raidDiskPool"
if self.raid_level == "raid3":
self.raid_level = "raid5"
@property
@memoize
def available_drives(self):
"""Determine the list of available drives"""
return [drive["id"] for drive in self.drives if drive["available"] and drive["status"] == "optimal"]
@property
@memoize
def available_drive_types(self):
"""Determine the types of available drives sorted by the most common first."""
types = [drive["driveMediaType"] for drive in self.drives]
return [entry[0] for entry in get_most_common_elements(types)]
@property
@memoize
def available_drive_interface_types(self):
"""Determine the types of available drives."""
interfaces = [drive["phyDriveType"] for drive in self.drives]
return [entry[0] for entry in get_most_common_elements(interfaces)]
@property
def storage_pool_drives(self):
"""Retrieve list of drives found in storage pool, excluding hot spares."""
# Note: accessed as a property, so the hot-spare exclusion always applies.
return [drive for drive in self.drives
if drive["currentVolumeGroupRef"] == self.pool_detail["id"] and not drive["hotSpare"]]
@property
def expandable_drive_count(self):
"""Maximum number of drives that a storage pool can be expanded at a given time."""
capabilities = None
if self.raid_level == "raidDiskPool":
return len(self.available_drives)
try:
rc, capabilities = self.request("storage-systems/%s/capabilities" % self.ssid)
except Exception as error:
self.module.fail_json(msg="Failed to fetch maximum expandable drive count. Array id [%s]. Error[%s]."
% (self.ssid, to_native(error)))
return capabilities["featureParameters"]["maxDCEDrives"]
@property
def disk_pool_drive_minimum(self):
"""Provide the storage array's minimum disk pool drive count."""
rc, attr = self.request("storage-systems/%s/symbol/getSystemAttributeDefaults" % self.ssid, ignore_errors=True)
# Standard minimum is 11 drives but some arrays allow 10; 11 is the default when the array reports nothing.
if (rc != 200 or "minimumDriveCount" not in attr["defaults"]["diskPoolDefaultAttributes"].keys() or
attr["defaults"]["diskPoolDefaultAttributes"]["minimumDriveCount"] == 0):
return self.DEFAULT_DISK_POOL_MINIMUM_DISK_COUNT
return attr["defaults"]["diskPoolDefaultAttributes"]["minimumDriveCount"]
def get_available_drive_capacities(self, drive_id_list=None):
"""Determine the list of available drive capacities."""
if drive_id_list:
available_drive_capacities = set([int(drive["usableCapacity"]) for drive in self.drives
if drive["id"] in drive_id_list and drive["available"] and
drive["status"] == "optimal"])
else:
available_drive_capacities = set([int(drive["usableCapacity"]) for drive in self.drives
if drive["available"] and drive["status"] == "optimal"])
self.module.log("available drive capacities: %s" % available_drive_capacities)
return list(available_drive_capacities)
@property
def drives(self):
"""Retrieve list of drives found in storage pool."""
drives = None
try:
rc, drives = self.request("storage-systems/%s/drives" % self.ssid)
except Exception as error:
self.module.fail_json(msg="Failed to fetch disk drives. Array id [%s]. Error[%s]."
% (self.ssid, to_native(error)))
return drives
def is_drive_count_valid(self, drive_count):
"""Validate drive count criteria is met."""
if self.criteria_drive_count and drive_count < self.criteria_drive_count:
return False
if self.raid_level == "raidDiskPool":
return drive_count >= self.disk_pool_drive_minimum
if self.raid_level == "raid0":
return drive_count > 0
if self.raid_level == "raid1":
return drive_count >= 2 and (drive_count % 2) == 0
if self.raid_level in ["raid3", "raid5"]:
return 3 <= drive_count <= 30
if self.raid_level == "raid6":
return 5 <= drive_count <= 30
return False
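# For instance (editor's illustration), with raid_level == "raid1" a count of 4
# is valid (>= 2 and even) while 3 is not; with raid_level == "raid6" only
# counts from 5 through 30 are accepted.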
@property
def storage_pool(self):
"""Retrieve storage pool information."""
storage_pools_resp = None
try:
rc, storage_pools_resp = self.request("storage-systems/%s/storage-pools" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to get storage pools. Array id [%s]. Error[%s]. State[%s]."
% (self.ssid, to_native(err), self.state))
pool_detail = [pool for pool in storage_pools_resp if pool["name"] == self.name]
return pool_detail[0] if pool_detail else dict()
@property
def storage_pool_volumes(self):
"""Retrieve list of volumes associated with storage pool."""
volumes_resp = None
try:
rc, volumes_resp = self.request("storage-systems/%s/volumes" % self.ssid)
except Exception as err:
self.module.fail_json(msg="Failed to get volumes. Array id [%s]. Error[%s]. State[%s]."
% (self.ssid, to_native(err), self.state))
group_ref = self.storage_pool["volumeGroupRef"]
storage_pool_volume_list = [volume["id"] for volume in volumes_resp if volume["volumeGroupRef"] == group_ref]
return storage_pool_volume_list
def get_ddp_capacity(self, expansion_drive_list):
"""Return the total usable capacity based on the additional drives."""
def get_ddp_error_percent(_drive_count, _extent_count):
"""Determine the space reserved for reconstruction"""
if _drive_count <= 36:
if _extent_count <= 600:
return 0.40
elif _extent_count <= 1400:
return 0.35
elif _extent_count <= 6200:
return 0.20
elif _extent_count <= 50000:
return 0.15
elif _drive_count <= 64:
if _extent_count <= 600:
return 0.20
elif _extent_count <= 1400:
return 0.15
elif _extent_count <= 6200:
return 0.10
elif _extent_count <= 50000:
return 0.05
elif _drive_count <= 480:
if _extent_count <= 600:
return 0.20
elif _extent_count <= 1400:
return 0.15
elif _extent_count <= 6200:
return 0.10
elif _extent_count <= 50000:
return 0.05
self.module.fail_json(msg="Drive count exceeded the error percent table. Array[%s]" % self.ssid)
def get_ddp_reserved_drive_count(_disk_count):
"""Determine the number of reserved drive."""
reserve_count = 0
if self.reserve_drive_count:
reserve_count = self.reserve_drive_count
elif _disk_count >= 256:
reserve_count = 8
elif _disk_count >= 192:
reserve_count = 7
elif _disk_count >= 128:
reserve_count = 6
elif _disk_count >= 64:
reserve_count = 4
elif _disk_count >= 32:
reserve_count = 3
elif _disk_count >= 12:
reserve_count = 2
elif _disk_count == 11:
reserve_count = 1
return reserve_count
if self.pool_detail:
drive_count = len(self.storage_pool_drives) + len(expansion_drive_list)
else:
drive_count = len(expansion_drive_list)
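# Editor's note on the magic numbers below (inferred from the arithmetic, not
# documented in the module): 8053063680 bytes is 7.5 GiB of per-drive overhead,
# 536870912 bytes is the 512 MiB extent size, and 4294967296 bytes is the
# 4 GiB of usable capacity credited per stripe.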
drive_usable_capacity = min(min(self.get_available_drive_capacities()),
min(self.get_available_drive_capacities(expansion_drive_list)))
drive_data_extents = ((drive_usable_capacity - 8053063680) / 536870912)
maximum_stripe_count = (drive_count * drive_data_extents) / 10
error_percent = get_ddp_error_percent(drive_count, drive_data_extents)
error_overhead = (drive_count * drive_data_extents / 10 * error_percent + 10) / 10
total_stripe_count = maximum_stripe_count - error_overhead
stripe_count_per_drive = total_stripe_count / drive_count
reserved_stripe_count = get_ddp_reserved_drive_count(drive_count) * stripe_count_per_drive
available_stripe_count = total_stripe_count - reserved_stripe_count
return available_stripe_count * 4294967296
@memoize
def get_candidate_drives(self):
"""Retrieve set of drives candidates for creating a new storage pool."""
def get_candidate_drive_request():
"""Perform request for new volume creation."""
candidates_list = list()
drive_types = [self.criteria_drive_type] if self.criteria_drive_type else self.available_drive_types
interface_types = [self.criteria_drive_interface_type] \
if self.criteria_drive_interface_type else self.available_drive_interface_types
for interface_type in interface_types:
for drive_type in drive_types:
candidates = None
volume_candidate_request_data = dict(
type="diskPool" if self.raid_level == "raidDiskPool" else "traditional",
diskPoolVolumeCandidateRequestData=dict(
reconstructionReservedDriveCount=65535))
candidate_selection_type = dict(
candidateSelectionType="count",
driveRefList=dict(driveRef=self.available_drives))
criteria = dict(raidLevel=self.raid_level,
phyDriveType=interface_type,
dssPreallocEnabled=False,
securityType="capable" if self.criteria_drive_require_fde else "none",
driveMediaType=drive_type,
onlyProtectionInformationCapable=True if self.criteria_drive_require_da else False,
volumeCandidateRequestData=volume_candidate_request_data,
allocateReserveSpace=False,
securityLevel="fde" if self.criteria_drive_require_fde else "none",
candidateSelectionType=candidate_selection_type)
try:
rc, candidates = self.request("storage-systems/%s/symbol/getVolumeCandidates?verboseError"
"Response=true" % self.ssid, data=criteria, method="POST")
except Exception as error:
self.module.fail_json(msg="Failed to retrieve volume candidates. Array [%s]. Error [%s]."
% (self.ssid, to_native(error)))
if candidates:
candidates_list.extend(candidates["volumeCandidate"])
# Sort output based on tray and then drawer protection first
tray_drawer_protection = list()
tray_protection = list()
drawer_protection = list()
no_protection = list()
sorted_candidates = list()
for item in candidates_list:
if item["trayLossProtection"]:
if item["drawerLossProtection"]:
tray_drawer_protection.append(item)
else:
tray_protection.append(item)
elif item["drawerLossProtection"]:
drawer_protection.append(item)
else:
no_protection.append(item)
if tray_drawer_protection:
sorted_candidates.extend(tray_drawer_protection)
if tray_protection:
sorted_candidates.extend(tray_protection)
if drawer_protection:
sorted_candidates.extend(drawer_protection)
if no_protection:
sorted_candidates.extend(no_protection)
return sorted_candidates
# Determine the appropriate candidate list
for candidate in get_candidate_drive_request():
# Evaluate candidates for required drive count, collective drive usable capacity and minimum drive size
if self.criteria_drive_count:
if self.criteria_drive_count != int(candidate["driveCount"]):
continue
if self.criteria_min_usable_capacity:
if ((self.raid_level == "raidDiskPool" and self.criteria_min_usable_capacity >
self.get_ddp_capacity(candidate["driveRefList"]["driveRef"])) or
self.criteria_min_usable_capacity > int(candidate["usableSize"])):
continue
if self.criteria_drive_min_size:
if self.criteria_drive_min_size > min(self.get_available_drive_capacities(candidate["driveRefList"]["driveRef"])):
continue
return candidate
self.module.fail_json(msg="Not enough drives to meet the specified criteria. Array [%s]." % self.ssid)
@memoize
def get_expansion_candidate_drives(self):
"""Retrieve required expansion drive list.
Note: To satisfy the expansion criteria each item in the candidate list must be added to the specified group
since there is a potential limitation on how many drives can be incorporated at a time.
* Traditional raid volume groups must be added two drives maximum at a time. No limits on raid disk pools.
:return list(candidate): list of candidate structures from the getVolumeGroupExpansionCandidates symbol endpoint
"""
def get_expansion_candidate_drive_request():
"""Perform the request for expanding existing volume groups or disk pools.
Note: the list of candidate structures do not necessarily produce candidates that meet all criteria.
"""
candidates_list = None
url = "storage-systems/%s/symbol/getVolumeGroupExpansionCandidates?verboseErrorResponse=true" % self.ssid
if self.raid_level == "raidDiskPool":
url = "storage-systems/%s/symbol/getDiskPoolExpansionCandidates?verboseErrorResponse=true" % self.ssid
try:
rc, candidates_list = self.request(url, method="POST", data=self.pool_detail["id"])
except Exception as error:
self.module.fail_json(msg="Failed to retrieve expansion candidates. Array [%s]. Error [%s]."
% (self.ssid, to_native(error)))
return candidates_list["candidates"]
required_candidate_list = list()
required_additional_drives = 0
required_additional_capacity = 0
total_required_capacity = 0
# determine whether and how much expansion is need to satisfy the specified criteria
if self.criteria_min_usable_capacity:
total_required_capacity = self.criteria_min_usable_capacity
required_additional_capacity = self.criteria_min_usable_capacity - int(self.pool_detail["totalRaidedSpace"])
if self.criteria_drive_count:
required_additional_drives = self.criteria_drive_count - len(self.storage_pool_drives)
# Determine the appropriate expansion candidate list
if required_additional_drives > 0 or required_additional_capacity > 0:
for candidate in get_expansion_candidate_drive_request():
if self.criteria_drive_min_size:
if self.criteria_drive_min_size > min(self.get_available_drive_capacities(candidate["drives"])):
continue
if self.raid_level == "raidDiskPool":
if (len(candidate["drives"]) >= required_additional_drives and
self.get_ddp_capacity(candidate["drives"]) >= total_required_capacity):
required_candidate_list.append(candidate)
break
else:
required_additional_drives -= len(candidate["drives"])
required_additional_capacity -= int(candidate["usableCapacity"])
required_candidate_list.append(candidate)
# Determine if required drives and capacities are satisfied
if required_additional_drives <= 0 and required_additional_capacity <= 0:
break
else:
self.module.fail_json(msg="Not enough drives to meet the specified criteria. Array [%s]." % self.ssid)
return required_candidate_list
def get_reserve_drive_count(self):
"""Retrieve the current number of reserve drives for raidDiskPool (Only for raidDiskPool)."""
if not self.pool_detail:
self.module.fail_json(msg="The storage pool must exist. Array [%s]." % self.ssid)
if self.raid_level != "raidDiskPool":
self.module.fail_json(msg="The storage pool must be a raidDiskPool. Pool [%s]. Array [%s]."
% (self.pool_detail["id"], self.ssid))
return self.pool_detail["volumeGroupData"]["diskPoolData"]["reconstructionReservedDriveCount"]
def get_maximum_reserve_drive_count(self):
"""Retrieve the maximum number of reserve drives for storage pool (Only for raidDiskPool)."""
if self.raid_level != "raidDiskPool":
self.module.fail_json(msg="The storage pool must be a raidDiskPool. Pool [%s]. Array [%s]."
% (self.pool_detail["id"], self.ssid))
drives_ids = list()
if self.pool_detail:
drives_ids.extend(self.storage_pool_drives)
for candidate in self.get_expansion_candidate_drives():
drives_ids.extend((candidate["drives"]))
else:
candidate = self.get_candidate_drives()
drives_ids.extend(candidate["driveRefList"]["driveRef"])
drive_count = len(drives_ids)
maximum_reserve_drive_count = min(int(drive_count * 0.2 + 1), drive_count - 10)
if maximum_reserve_drive_count > 10:
maximum_reserve_drive_count = 10
return maximum_reserve_drive_count
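# Worked example (editor's illustration): a 15-drive pool yields
# min(int(15 * 0.2 + 1), 15 - 10) == min(4, 5) == 4 reserve drives; a 64-drive
# pool computes 13, which the cap above reduces to the maximum of 10.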
def set_reserve_drive_count(self, check_mode=False):
"""Set the reserve drive count for raidDiskPool."""
changed = False
if self.raid_level == "raidDiskPool" and self.reserve_drive_count:
maximum_count = self.get_maximum_reserve_drive_count()
if self.reserve_drive_count < 0 or self.reserve_drive_count > maximum_count:
self.module.fail_json(msg="Supplied reserve drive count is invalid or exceeds the maximum allowed. "
"Note that it may be necessary to wait for expansion operations to complete "
"before the adjusting the reserve drive count. Maximum [%s]. Array [%s]."
% (maximum_count, self.ssid))
if self.reserve_drive_count != self.get_reserve_drive_count():
changed = True
if not check_mode:
try:
rc, resp = self.request("storage-systems/%s/symbol/setDiskPoolReservedDriveCount" % self.ssid,
method="POST", data=dict(volumeGroupRef=self.pool_detail["id"],
newDriveCount=self.reserve_drive_count))
except Exception as error:
self.module.fail_json(msg="Failed to set reserve drive count for disk pool. Disk Pool [%s]."
" Array [%s]. Error [%s]." % (self.pool_detail["id"], self.ssid, to_native(error)))
return changed
def erase_all_available_secured_drives(self, check_mode=False):
"""Erase all available drives that have encryption at rest feature enabled."""
changed = False
drives_list = list()
for drive in self.drives:
if drive["available"] and drive["fdeEnabled"]:
changed = True
drives_list.append(drive["id"])
if drives_list and not check_mode:
try:
rc, resp = self.request("storage-systems/%s/symbol/reprovisionDrive?verboseErrorResponse=true"
% self.ssid, method="POST", data=dict(driveRef=drives_list))
except Exception as error:
self.module.fail_json(msg="Failed to erase all secured drives. Array [%s]. Error [%s]."
% (self.ssid, to_native(error)))
return changed
def create_storage_pool(self):
"""Create new storage pool."""
url = "storage-systems/%s/symbol/createVolumeGroup?verboseErrorResponse=true" % self.ssid
request_body = dict(label=self.name,
candidate=self.get_candidate_drives())
if self.raid_level == "raidDiskPool":
url = "storage-systems/%s/symbol/createDiskPool?verboseErrorResponse=true" % self.ssid
request_body.update(
dict(backgroundOperationPriority="useDefault",
criticalReconstructPriority="useDefault",
degradedReconstructPriority="useDefault",
poolUtilizationCriticalThreshold=65535,
poolUtilizationWarningThreshold=0))
if self.reserve_drive_count:
request_body.update(dict(volumeCandidateData=dict(
diskPoolVolumeCandidateData=dict(reconstructionReservedDriveCount=self.reserve_drive_count))))
try:
rc, resp = self.request(url, method="POST", data=request_body)
except Exception as error:
self.module.fail_json(msg="Failed to create storage pool. Array id [%s]. Error[%s]."
% (self.ssid, to_native(error)))
# Update drive and storage pool information
self.pool_detail = self.storage_pool
def delete_storage_pool(self):
"""Delete storage pool."""
storage_pool_drives = [drive["id"] for drive in self.storage_pool_drives if drive["fdeEnabled"]]
try:
delete_volumes_parameter = "?delete-volumes=true" if self.remove_volumes else ""
rc, resp = self.request("storage-systems/%s/storage-pools/%s%s"
% (self.ssid, self.pool_detail["id"], delete_volumes_parameter), method="DELETE")
except Exception as error:
self.module.fail_json(msg="Failed to delete storage pool. Pool id [%s]. Array id [%s]. Error[%s]."
% (self.pool_detail["id"], self.ssid, to_native(error)))
if storage_pool_drives and self.erase_secured_drives:
try:
rc, resp = self.request("storage-systems/%s/symbol/reprovisionDrive?verboseErrorResponse=true"
% self.ssid, method="POST", data=dict(driveRef=storage_pool_drives))
except Exception as error:
self.module.fail_json(msg="Failed to erase drives prior to creating new storage pool. Array [%s]."
" Error [%s]." % (self.ssid, to_native(error)))
def secure_storage_pool(self, check_mode=False):
"""Enable security on an existing storage pool"""
self.pool_detail = self.storage_pool
needs_secure_pool = False
if not self.secure_pool and self.pool_detail["securityType"] == "enabled":
self.module.fail_json(msg="It is not possible to disable storage pool security! See array documentation.")
if self.secure_pool and self.pool_detail["securityType"] != "enabled":
needs_secure_pool = True
if needs_secure_pool and not check_mode:
try:
rc, resp = self.request("storage-systems/%s/storage-pools/%s" % (self.ssid, self.pool_detail["id"]),
data=dict(securePool=True), method="POST")
except Exception as error:
self.module.fail_json(msg="Failed to secure storage pool. Pool id [%s]. Array [%s]. Error"
" [%s]." % (self.pool_detail["id"], self.ssid, to_native(error)))
self.pool_detail = self.storage_pool
return needs_secure_pool
def migrate_raid_level(self, check_mode=False):
"""Request storage pool raid level migration."""
needs_migration = self.raid_level != self.pool_detail["raidLevel"]
if needs_migration and self.pool_detail["raidLevel"] == "raidDiskPool":
self.module.fail_json(msg="Raid level cannot be changed for disk pools")
if needs_migration and not check_mode:
sp_raid_migrate_req = dict(raidLevel=self.raid_level)
try:
rc, resp = self.request("storage-systems/%s/storage-pools/%s/raid-type-migration"
% (self.ssid, self.name), data=sp_raid_migrate_req, method="POST")
except Exception as error:
self.module.fail_json(msg="Failed to change the raid level of storage pool. Array id [%s]."
" Error[%s]." % (self.ssid, to_native(error)))
self.pool_detail = self.storage_pool
return needs_migration
def expand_storage_pool(self, check_mode=False):
"""Add drives to existing storage pool.
:return bool: whether drives were required to be added to satisfy the specified criteria."""
expansion_candidate_list = self.get_expansion_candidate_drives()
changed_required = bool(expansion_candidate_list)
estimated_completion_time = 0.0
# build expandable groupings of traditional raid candidate
required_expansion_candidate_list = list()
while expansion_candidate_list:
subset = list()
while expansion_candidate_list and len(subset) < self.expandable_drive_count:
subset.extend(expansion_candidate_list.pop()["drives"])
required_expansion_candidate_list.append(subset)
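# Illustration (editor's note): with expandable_drive_count == 2 and three
# two-drive candidates, the grouping loop above emits three subsets of two
# drives each, matching the two-drive-at-a-time limit on traditional volume
# group expansion mentioned in get_expansion_candidate_drives.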
if required_expansion_candidate_list and not check_mode:
url = "storage-systems/%s/symbol/startVolumeGroupExpansion?verboseErrorResponse=true" % self.ssid
if self.raid_level == "raidDiskPool":
url = "storage-systems/%s/symbol/startDiskPoolExpansion?verboseErrorResponse=true" % self.ssid
while required_expansion_candidate_list:
candidate_drives_list = required_expansion_candidate_list.pop()
request_body = dict(volumeGroupRef=self.pool_detail["volumeGroupRef"],
driveRef=candidate_drives_list)
try:
rc, resp = self.request(url, method="POST", data=request_body)
except Exception as error:
rc, actions_resp = self.request("storage-systems/%s/storage-pools/%s/action-progress"
% (self.ssid, self.pool_detail["id"]), ignore_errors=True)
if rc == 200 and actions_resp:
actions = [action["currentAction"] for action in actions_resp
if action["volumeRef"] in self.storage_pool_volumes]
self.module.fail_json(msg="Failed to add drives to the storage pool possibly because of actions"
" in progress. Actions [%s]. Pool id [%s]. Array id [%s]. Error[%s]."
% (", ".join(actions), self.pool_detail["id"], self.ssid,
to_native(error)))
self.module.fail_json(msg="Failed to add drives to storage pool. Pool id [%s]. Array id [%s]."
" Error[%s]." % (self.pool_detail["id"], self.ssid, to_native(error)))
# Wait for expansion completion unless it is the last request in the candidate list
if required_expansion_candidate_list:
for dummy in range(self.EXPANSION_TIMEOUT_SEC):
rc, actions_resp = self.request("storage-systems/%s/storage-pools/%s/action-progress"
% (self.ssid, self.pool_detail["id"]), ignore_errors=True)
if rc == 200:
for action in actions_resp:
if (action["volumeRef"] in self.storage_pool_volumes and
action["currentAction"] == "remappingDce"):
sleep(1)
estimated_completion_time = action["estimatedTimeToCompletion"]
break
else:
estimated_completion_time = 0.0
break
return changed_required, estimated_completion_time
def apply(self):
"""Apply requested state to storage array."""
changed = False
if self.state == "present":
if self.criteria_drive_count is None and self.criteria_min_usable_capacity is None:
self.module.fail_json(msg="One of criteria_min_usable_capacity or criteria_drive_count must be"
" specified.")
if self.criteria_drive_count and not self.is_drive_count_valid(self.criteria_drive_count):
self.module.fail_json(msg="criteria_drive_count must be valid for the specified raid level.")
self.pool_detail = self.storage_pool
self.module.log(pformat(self.pool_detail))
if self.state == "present" and self.erase_secured_drives:
self.erase_all_available_secured_drives(check_mode=True)
# Determine whether changes need to be applied to the storage array
if self.pool_detail:
if self.state == "absent":
changed = True
elif self.state == "present":
if self.criteria_drive_count and self.criteria_drive_count < len(self.storage_pool_drives):
self.module.fail_json(msg="Failed to reduce the size of the storage pool. Array [%s]. Pool [%s]."
% (self.ssid, self.pool_detail["id"]))
if self.criteria_drive_type and self.criteria_drive_type != self.pool_detail["driveMediaType"]:
self.module.fail_json(msg="Failed! It is not possible to modify storage pool media type."
" Array [%s]. Pool [%s]." % (self.ssid, self.pool_detail["id"]))
if (self.criteria_drive_require_da is not None and self.criteria_drive_require_da !=
self.pool_detail["protectionInformationCapabilities"]["protectionInformationCapable"]):
self.module.fail_json(msg="Failed! It is not possible to modify DA-capability. Array [%s]."
" Pool [%s]." % (self.ssid, self.pool_detail["id"]))
# Evaluate current storage pool for required change.
needs_expansion, estimated_completion_time = self.expand_storage_pool(check_mode=True)
if needs_expansion:
changed = True
if self.migrate_raid_level(check_mode=True):
changed = True
if self.secure_storage_pool(check_mode=True):
changed = True
if self.set_reserve_drive_count(check_mode=True):
changed = True
elif self.state == "present":
changed = True
# Apply changes to storage array
msg = "No changes were required for the storage pool [%s]."
if changed and not self.module.check_mode:
if self.state == "present":
if self.erase_secured_drives:
self.erase_all_available_secured_drives()
if self.pool_detail:
change_list = list()
# Expansion needs to occur before raid level migration to account for any sizing needs.
expanded, estimated_completion_time = self.expand_storage_pool()
if expanded:
change_list.append("expanded")
if self.migrate_raid_level():
change_list.append("raid migration")
if self.secure_storage_pool():
change_list.append("secured")
if self.set_reserve_drive_count():
change_list.append("adjusted reserve drive count")
if change_list:
msg = "Following changes have been applied to the storage pool [%s]: " + ", ".join(change_list)
if expanded:
msg += "\nThe expansion operation will complete in an estimated %s minutes."\
% estimated_completion_time
else:
self.create_storage_pool()
msg = "Storage pool [%s] was created."
if self.secure_storage_pool():
msg = "Storage pool [%s] was created and secured."
if self.set_reserve_drive_count():
msg += " Adjusted reserve drive count."
elif self.pool_detail:
self.delete_storage_pool()
msg = "Storage pool [%s] removed."
self.pool_detail = self.storage_pool
self.module.log(pformat(self.pool_detail))
self.module.log(msg % self.name)
self.module.exit_json(msg=msg % self.name, changed=changed, **self.pool_detail)
def main():
storage_pool = NetAppESeriesStoragePool()
storage_pool.apply()
if __name__ == "__main__":
main()
| gpl-3.0 |
timbooo/traktforalfred | requests/packages/urllib3/__init__.py | 52 | 2174 | """
urllib3 - Thread-safe connection pooling and re-using.
"""
__author__ = 'Andrey Petrov (andrey.petrov@shazow.net)'
__license__ = 'MIT'
__version__ = '1.12'
from .connectionpool import (
HTTPConnectionPool,
HTTPSConnectionPool,
connection_from_url
)
from . import exceptions
from .filepost import encode_multipart_formdata
from .poolmanager import PoolManager, ProxyManager, proxy_from_url
from .response import HTTPResponse
from .util.request import make_headers
from .util.url import get_host
from .util.timeout import Timeout
from .util.retry import Retry
# Set default logging handler to avoid "No handler found" warnings.
import logging
try: # Python 2.7+
from logging import NullHandler
except ImportError:
class NullHandler(logging.Handler):
def emit(self, record):
pass
logging.getLogger(__name__).addHandler(NullHandler())
def add_stderr_logger(level=logging.DEBUG):
"""
Helper for quickly adding a StreamHandler to the logger. Useful for
debugging.
Returns the handler after adding it.
"""
# This method needs to be in this __init__.py to get the __name__ correct
# even if urllib3 is vendored within another package.
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)
logger.setLevel(level)
logger.debug('Added a stderr logging handler to logger: %s' % __name__)
return handler
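# Illustrative usage sketch (not part of the vendored source): attach the
# stderr handler while debugging, assuming the requests-vendored import path.
#
#     from requests.packages import urllib3
#     handler = urllib3.add_stderr_logger(logging.INFO)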
# ... Clean up.
del NullHandler
import warnings
# SecurityWarnings always go off by default.
warnings.simplefilter('always', exceptions.SecurityWarning, append=True)
# SubjectAltNameWarnings should go off once per host
warnings.simplefilter('default', exceptions.SubjectAltNameWarning)
# InsecurePlatformWarnings don't vary between requests, so we keep the default.
warnings.simplefilter('default', exceptions.InsecurePlatformWarning,
append=True)
def disable_warnings(category=exceptions.HTTPWarning):
"""
Helper for quickly disabling all urllib3 warnings.
"""
warnings.simplefilter('ignore', category)
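# Illustrative usage sketch: narrow the suppression to a single category that
# this version is known to define (it is referenced via `exceptions` above).
#
#     disable_warnings(exceptions.InsecurePlatformWarning)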
| mit |
TwolDE2/enigma2 | lib/python/Screens/TaskView.py | 7 | 5656 | from Screens.Screen import Screen
from Components.ConfigList import ConfigListScreen
from Components.config import ConfigSubsection, ConfigSelection, getConfigListEntry
from Components.SystemInfo import SystemInfo
from Components.Task import job_manager
from InfoBarGenerics import InfoBarNotifications
import Screens.Standby
from Tools import Notifications
from boxbranding import getMachineBrand, getMachineName
class JobView(InfoBarNotifications, Screen, ConfigListScreen):
def __init__(self, session, job, parent=None, cancelable=True, backgroundable=True, afterEventChangeable=True, afterEvent="nothing"):
from Components.Sources.StaticText import StaticText
from Components.Sources.Progress import Progress
from Components.Sources.Boolean import Boolean
from Components.ActionMap import ActionMap
Screen.__init__(self, session, parent)
Screen.setTitle(self, _("Job View"))
InfoBarNotifications.__init__(self)
ConfigListScreen.__init__(self, [])
self.parent = parent
self.job = job
if afterEvent:
self.job.afterEvent = afterEvent
self["job_name"] = StaticText(job.name)
self["job_progress"] = Progress()
self["job_task"] = StaticText()
self["summary_job_name"] = StaticText(job.name)
self["summary_job_progress"] = Progress()
self["summary_job_task"] = StaticText()
self["job_status"] = StaticText()
self["finished"] = Boolean()
self["cancelable"] = Boolean(cancelable)
self["backgroundable"] = Boolean(backgroundable)
self["key_blue"] = StaticText(_("Background"))
self.onShow.append(self.windowShow)
self.onHide.append(self.windowHide)
self["setupActions"] = ActionMap(["ColorActions", "SetupActions"],
{
"green": self.ok,
"red": self.abort,
"blue": self.background,
"cancel": self.abort,
"ok": self.ok,
}, -2)
self.settings = ConfigSubsection()
if SystemInfo["DeepstandbySupport"]:
shutdownString = _("go to deep standby")
else:
shutdownString = _("shut down")
self.settings.afterEvent = ConfigSelection(choices = [("nothing", _("do nothing")), ("close", _("Close")), ("standby", _("go to standby")), ("deepstandby", shutdownString)], default = self.job.afterEvent or "nothing")
self.job.afterEvent = self.settings.afterEvent.value
self.afterEventChangeable = afterEventChangeable
self.setupList()
self.state_changed()
def setupList(self):
if self.afterEventChangeable:
self["config"].setList( [ getConfigListEntry(_("After event"), self.settings.afterEvent) ])
else:
self["config"].hide()
self.job.afterEvent = self.settings.afterEvent.value
def keyLeft(self):
ConfigListScreen.keyLeft(self)
self.setupList()
def keyRight(self):
ConfigListScreen.keyRight(self)
self.setupList()
def windowShow(self):
job_manager.visible = True
self.job.state_changed.append(self.state_changed)
def windowHide(self):
job_manager.visible = False
if len(self.job.state_changed) > 0:
self.job.state_changed.remove(self.state_changed)
def state_changed(self):
j = self.job
self["job_progress"].range = j.end
self["summary_job_progress"].range = j.end
self["job_progress"].value = j.progress
self["summary_job_progress"].value = j.progress
#print "JobView::state_changed:", j.end, j.progress
self["job_status"].text = j.getStatustext()
if j.status == j.IN_PROGRESS:
self["job_task"].text = j.tasks[j.current_task].name
self["summary_job_task"].text = j.tasks[j.current_task].name
else:
self["job_task"].text = ""
self["summary_job_task"].text = j.getStatustext()
if j.status in (j.FINISHED, j.FAILED):
self.performAfterEvent()
self["backgroundable"].boolean = False
if j.status == j.FINISHED:
self["finished"].boolean = True
self["cancelable"].boolean = False
elif j.status == j.FAILED:
self["cancelable"].boolean = True
def background(self):
if self["backgroundable"].boolean:
self.close(True)
def ok(self):
if self.job.status in (self.job.FINISHED, self.job.FAILED):
self.close(False)
else:
self.background()
def abort(self):
if self.job.status == self.job.NOT_STARTED:
job_manager.active_jobs.remove(self.job)
self.close(False)
elif self.job.status == self.job.IN_PROGRESS and self["cancelable"].boolean == True:
self.job.cancel()
else:
self.close(False)
def performAfterEvent(self):
self["config"].hide()
if self.settings.afterEvent.value == "nothing":
return
elif self.settings.afterEvent.value == "close" and self.job.status == self.job.FINISHED:
self.close(False)
from Screens.MessageBox import MessageBox
if self.settings.afterEvent.value == "deepstandby":
if not Screens.Standby.inTryQuitMainloop:
Notifications.AddNotificationWithCallback(self.sendTryQuitMainloopNotification, MessageBox, _("A sleep timer wants to shut down\nyour %s %s. Shutdown now?") % (getMachineBrand(), getMachineName()), timeout = 20)
elif self.settings.afterEvent.value == "standby":
if not Screens.Standby.inStandby:
Notifications.AddNotificationWithCallback(self.sendStandbyNotification, MessageBox, _("A sleep timer wants to set your\n%s %s to standby. Do that now?") % (getMachineBrand(), getMachineName()), timeout = 20)
def checkNotifications(self):
InfoBarNotifications.checkNotifications(self)
if not Notifications.notifications:
if self.settings.afterEvent.value == "close" and self.job.status == self.job.FAILED:
self.close(False)
def sendStandbyNotification(self, answer):
if answer:
Notifications.AddNotification(Screens.Standby.Standby)
def sendTryQuitMainloopNotification(self, answer):
if answer:
Notifications.AddNotification(Screens.Standby.TryQuitMainloop, 1)
| gpl-2.0 |
emilio/servo | tests/wpt/web-platform-tests/webdriver/tests/get_active_element/get.py | 21 | 3988 | from tests.support.asserts import assert_error, assert_is_active_element, assert_success
from tests.support.inline import inline
def read_global(session, name):
return session.execute_script("return %s;" % name)
def get_active_element(session):
return session.transport.send(
"GET", "session/{session_id}/element/active".format(**vars(session)))
def test_no_browsing_context(session, closed_window):
response = get_active_element(session)
assert_error(response, "no such window")
def test_success_document(session):
session.url = inline("""
<body>
<h1>Heading</h1>
<input />
<input />
<input style="opacity: 0" />
<p>Another element</p>
</body>""")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
def test_success_input(session):
session.url = inline("""
<body>
<h1>Heading</h1>
<input autofocus />
<input style="opacity: 0" />
<p>Another element</p>
</body>""")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
def test_success_input_non_interactable(session):
session.url = inline("""
<body>
<h1>Heading</h1>
<input />
<input style="opacity: 0" autofocus />
<p>Another element</p>
</body>""")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
def test_success_explicit_focus(session):
session.url = inline("""
<body>
<h1>Heading</h1>
<input />
<iframe></iframe>
</body>""")
session.execute_script("document.body.getElementsByTagName('h1')[0].focus()")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
session.execute_script("document.body.getElementsByTagName('input')[0].focus()")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
session.execute_script("document.body.getElementsByTagName('iframe')[0].focus()")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
session.execute_script("document.body.getElementsByTagName('iframe')[0].focus();")
session.execute_script("""
var iframe = document.body.getElementsByTagName('iframe')[0];
if (iframe.remove) {
iframe.remove();
} else {
iframe.removeNode(true);
}""")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
session.execute_script("document.body.appendChild(document.createElement('textarea'))")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
def test_success_iframe_content(session):
session.url = inline("<body></body>")
session.execute_script("""
let iframe = document.createElement('iframe');
document.body.appendChild(iframe);
let input = iframe.contentDocument.createElement('input');
iframe.contentDocument.body.appendChild(input);
input.focus();
""")
response = get_active_element(session)
element = assert_success(response)
assert_is_active_element(session, element)
def test_missing_document_element(session):
session.url = inline("<body></body>")
session.execute_script("""
if (document.body.remove) {
document.body.remove();
} else {
document.body.removeNode(true);
}""")
response = get_active_element(session)
assert_error(response, "no such element")
| mpl-2.0 |
htwenhe/DJOA | env/Lib/site-packages/openpyxl/worksheet/pagebreak.py | 2 | 1664 | from __future__ import absolute_import
# Copyright (c) 2010-2017 openpyxl
from openpyxl.descriptors.serialisable import Serialisable
from openpyxl.descriptors import (
Integer,
Bool,
Sequence,
)
class Break(Serialisable):
tagname = "brk"
id = Integer(allow_none=True)
min = Integer(allow_none=True)
max = Integer(allow_none=True)
man = Bool(allow_none=True)
pt = Bool(allow_none=True)
def __init__(self,
id=0,
min=0,
max=16383,
man=True,
pt=None,
):
self.id = id
self.min = min
self.max = max
self.man = man
self.pt = pt
class PageBreak(Serialisable):
tagname = "rowBreaks"
count = Integer(allow_none=True)
manualBreakCount = Integer(allow_none=True)
brk = Sequence(expected_type=Break, allow_none=True)
__elements__ = ('brk',)
__attrs__ = ("count", "manualBreakCount",)
def __init__(self,
count=None,
manualBreakCount=None,
brk=(),
):
self.brk = brk
def __bool__(self):
return len(self.brk) > 0
__nonzero__ = __bool__
def __len__(self):
return len(self.brk)
@property
def count(self):
return len(self)
@property
def manualBreakCount(self):
return len(self)
def append(self, brk=None):
"""
Add a page break
"""
vals = list(self.brk)
if not isinstance(brk, Break):
brk = Break(id=self.count+1)
vals.append(brk)
self.brk = vals
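# Illustrative usage sketch, using only the classes defined in this module:
#
#     pb = PageBreak()
#     pb.append(Break(id=10))   # manual break after row 10
#     assert len(pb) == pb.count == pb.manualBreakCount == 1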
| mit |
detrout/debian-statsmodels | statsmodels/graphics/tests/test_boxplots.py | 28 | 1257 | import numpy as np
from numpy.testing import dec
from statsmodels.graphics.boxplots import violinplot, beanplot
from statsmodels.datasets import anes96
try:
import matplotlib.pyplot as plt
have_matplotlib = True
except:
have_matplotlib = False
@dec.skipif(not have_matplotlib)
def test_violinplot_beanplot():
# Test violinplot and beanplot with the same dataset.
data = anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
violinplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
plt.close(fig)
fig = plt.figure()
ax = fig.add_subplot(111)
beanplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
plt.close(fig)
| bsd-3-clause |
Dandandan/wikiprogramming | jsrepl/build/extern/python/reloop-closured/lib/python2.7/pipes.py | 82 | 9647 | """Conversion pipeline templates.
The problem:
------------
Suppose you have some data that you want to convert to another format,
such as from GIF image format to PPM image format. Maybe the
conversion involves several steps (e.g. piping it through compress or
uuencode). Some of the conversion steps may require that their input
is a disk file, others may be able to read standard input; similar for
their output. The input to the entire conversion may also be read
from a disk file or from an open file, and similar for its output.
The module lets you construct a pipeline template by sticking one or
more conversion steps together. It will take care of creating and
removing temporary files if they are necessary to hold intermediate
data. You can then use the template to do conversions from many
different sources to many different destinations. The temporary
file names used are different each time the template is used.
The templates are objects so you can create templates for many
different conversion steps and store them in a dictionary, for
instance.
Directions:
-----------
To create a template:
t = Template()
To add a conversion step to a template:
t.append(command, kind)
where kind is a string of two characters: the first is '-' if the
command reads its standard input or 'f' if it requires a file; the
second likewise for the output. The command must be valid /bin/sh
syntax. If input or output files are required, they are passed as
$IN and $OUT; otherwise, it must be possible to use the command in
a pipeline.
To add a conversion step at the beginning:
t.prepend(command, kind)
To convert a file to another file using a template:
sts = t.copy(infile, outfile)
If infile or outfile are the empty string, standard input is read or
standard output is written, respectively. The return value is the
exit status of the conversion pipeline.
To open a file for reading or writing through a conversion pipeline:
fp = t.open(file, mode)
where mode is 'r' to read the file, or 'w' to write it -- just like
for the built-in function open() or for os.popen().
To create a new template object initialized to a given one:
t2 = t.clone()
For an example, see the function test() at the end of the file.
""" # '
import re
import os
import tempfile
import string
__all__ = ["Template"]
# Conversion step kinds
FILEIN_FILEOUT = 'ff' # Must read & write real files
STDIN_FILEOUT = '-f' # Must write a real file
FILEIN_STDOUT = 'f-' # Must read a real file
STDIN_STDOUT = '--' # Normal pipeline element
SOURCE = '.-' # Must be first, writes stdout
SINK = '-.' # Must be last, reads stdin
stepkinds = [FILEIN_FILEOUT, STDIN_FILEOUT, FILEIN_STDOUT, STDIN_STDOUT, \
SOURCE, SINK]
class Template:
"""Class representing a pipeline template."""
def __init__(self):
"""Template() returns a fresh pipeline template."""
self.debugging = 0
self.reset()
def __repr__(self):
"""t.__repr__() implements repr(t)."""
return '<Template instance, steps=%r>' % (self.steps,)
def reset(self):
"""t.reset() restores a pipeline template to its initial state."""
self.steps = []
def clone(self):
"""t.clone() returns a new pipeline template with identical
initial state as the current one."""
t = Template()
t.steps = self.steps[:]
t.debugging = self.debugging
return t
def debug(self, flag):
"""t.debug(flag) turns debugging on or off."""
self.debugging = flag
def append(self, cmd, kind):
"""t.append(cmd, kind) adds a new step at the end."""
if type(cmd) is not type(''):
raise TypeError, \
'Template.append: cmd must be a string'
if kind not in stepkinds:
raise ValueError, \
'Template.append: bad kind %r' % (kind,)
if kind == SOURCE:
raise ValueError, \
'Template.append: SOURCE can only be prepended'
if self.steps and self.steps[-1][1] == SINK:
raise ValueError, \
'Template.append: already ends with SINK'
if kind[0] == 'f' and not re.search(r'\$IN\b', cmd):
raise ValueError, \
'Template.append: missing $IN in cmd'
if kind[1] == 'f' and not re.search(r'\$OUT\b', cmd):
raise ValueError, \
'Template.append: missing $OUT in cmd'
self.steps.append((cmd, kind))
def prepend(self, cmd, kind):
"""t.prepend(cmd, kind) adds a new step at the front."""
if type(cmd) is not type(''):
raise TypeError, \
'Template.prepend: cmd must be a string'
if kind not in stepkinds:
raise ValueError, \
'Template.prepend: bad kind %r' % (kind,)
if kind == SINK:
raise ValueError, \
'Template.prepend: SINK can only be appended'
if self.steps and self.steps[0][1] == SOURCE:
raise ValueError, \
'Template.prepend: already begins with SOURCE'
if kind[0] == 'f' and not re.search(r'\$IN\b', cmd):
raise ValueError, \
'Template.prepend: missing $IN in cmd'
if kind[1] == 'f' and not re.search(r'\$OUT\b', cmd):
raise ValueError, \
'Template.prepend: missing $OUT in cmd'
self.steps.insert(0, (cmd, kind))
def open(self, file, rw):
"""t.open(file, rw) returns a pipe or file object open for
reading or writing; the file is the other end of the pipeline."""
if rw == 'r':
return self.open_r(file)
if rw == 'w':
return self.open_w(file)
raise ValueError, \
'Template.open: rw must be \'r\' or \'w\', not %r' % (rw,)
def open_r(self, file):
"""t.open_r(file) and t.open_w(file) implement
t.open(file, 'r') and t.open(file, 'w') respectively."""
if not self.steps:
return open(file, 'r')
if self.steps[-1][1] == SINK:
raise ValueError, \
'Template.open_r: pipeline ends with SINK'
cmd = self.makepipeline(file, '')
return os.popen(cmd, 'r')
def open_w(self, file):
if not self.steps:
return open(file, 'w')
if self.steps[0][1] == SOURCE:
raise ValueError, \
'Template.open_w: pipeline begins with SOURCE'
cmd = self.makepipeline('', file)
return os.popen(cmd, 'w')
def copy(self, infile, outfile):
return os.system(self.makepipeline(infile, outfile))
def makepipeline(self, infile, outfile):
cmd = makepipeline(infile, self.steps, outfile)
if self.debugging:
print cmd
cmd = 'set -x; ' + cmd
return cmd
def makepipeline(infile, steps, outfile):
# Build a list with for each command:
# [input filename or '', command string, kind, output filename or '']
list = []
for cmd, kind in steps:
list.append(['', cmd, kind, ''])
#
# Make sure there is at least one step
#
if not list:
list.append(['', 'cat', '--', ''])
#
# Take care of the input and output ends
#
[cmd, kind] = list[0][1:3]
if kind[0] == 'f' and not infile:
list.insert(0, ['', 'cat', '--', ''])
list[0][0] = infile
#
[cmd, kind] = list[-1][1:3]
if kind[1] == 'f' and not outfile:
list.append(['', 'cat', '--', ''])
list[-1][-1] = outfile
#
# Invent temporary files to connect stages that need files
#
garbage = []
for i in range(1, len(list)):
lkind = list[i-1][2]
rkind = list[i][2]
if lkind[1] == 'f' or rkind[0] == 'f':
(fd, temp) = tempfile.mkstemp()
os.close(fd)
garbage.append(temp)
list[i-1][-1] = list[i][0] = temp
#
for item in list:
[inf, cmd, kind, outf] = item
if kind[1] == 'f':
cmd = 'OUT=' + quote(outf) + '; ' + cmd
if kind[0] == 'f':
cmd = 'IN=' + quote(inf) + '; ' + cmd
if kind[0] == '-' and inf:
cmd = cmd + ' <' + quote(inf)
if kind[1] == '-' and outf:
cmd = cmd + ' >' + quote(outf)
item[1] = cmd
#
cmdlist = list[0][1]
for item in list[1:]:
[cmd, kind] = item[1:3]
if item[0] == '':
if 'f' in kind:
cmd = '{ ' + cmd + '; }'
cmdlist = cmdlist + ' |\n' + cmd
else:
cmdlist = cmdlist + '\n' + cmd
#
if garbage:
rmcmd = 'rm -f'
for file in garbage:
rmcmd = rmcmd + ' ' + quote(file)
trapcmd = 'trap ' + quote(rmcmd + '; exit') + ' 1 2 3 13 14 15'
cmdlist = trapcmd + '\n' + cmdlist + '\n' + rmcmd
#
return cmdlist
# Reliably quote a string as a single argument for /bin/sh
# Safe unquoted
_safechars = frozenset(string.ascii_letters + string.digits + '@%_-+=:,./')
def quote(file):
"""Return a shell-escaped version of the file string."""
for c in file:
if c not in _safechars:
break
else:
if not file:
return "''"
return file
# use single quotes, and put single quotes into double quotes
# the string $'b is then quoted as '$'"'"'b'
return "'" + file.replace("'", "'\"'\"'") + "'"
| mit |
GrumpyNounours/PySeidon | pyseidon/utilities/windrose.py | 2 | 20431 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
__version__ = '1.4'
__author__ = 'Lionel Roubeyrie'
__mail__ = 'lionel.roubeyrie@gmail.com'
__license__ = 'CeCILL-B'
import matplotlib
import matplotlib.cm as cm
import numpy as np
from matplotlib.patches import Rectangle, Polygon
from matplotlib.ticker import ScalarFormatter, AutoLocator
from matplotlib.text import Text, FontProperties
from matplotlib.projections.polar import PolarAxes
from numpy.lib.twodim_base import histogram2d
import matplotlib.pyplot as plt
from pylab import poly_between
RESOLUTION = 100
ZBASE = -1000 #The starting zorder for all drawing, negative to have the grid on
class WindroseAxes(PolarAxes):
"""
Create a windrose axes
"""
def __init__(self, *args, **kwargs):
"""
See Axes base class for args and kwargs documentation
"""
#Uncomment to have the possibility to change the resolution directly
#when the instance is created
#self.RESOLUTION = kwargs.pop('resolution', 100)
PolarAxes.__init__(self, *args, **kwargs)
self.set_aspect('equal', adjustable='box', anchor='C')
self.radii_angle = 67.5
self.cla()
def cla(self):
"""
Clear the current axes
"""
PolarAxes.cla(self)
self.theta_angles = np.arange(0, 360, 45)
self.theta_labels = ['E', 'N-E', 'N', 'N-W', 'W', 'S-W', 'S', 'S-E']
self.set_thetagrids(angles=self.theta_angles, labels=self.theta_labels)
self._info = {'dir' : list(),
'bins' : list(),
'table' : list()}
self.patches_list = list()
def _colors(self, cmap, n):
'''
Returns a list of n colors based on the colormap cmap
'''
return [cmap(i) for i in np.linspace(0.0, 1.0, n)]
def set_radii_angle(self, **kwargs):
"""
Set the radii labels angle
"""
null = kwargs.pop('labels', None)
angle = kwargs.pop('angle', None)
if angle is None:
angle = self.radii_angle
self.radii_angle = angle
radii = np.linspace(0.1, self.get_rmax(), 6)
radii_labels = [ "%.1f" %r for r in radii ]
radii_labels[0] = "" #Removing label 0
null = self.set_rgrids(radii=radii, labels=radii_labels,
angle=self.radii_angle, **kwargs)
def _update(self):
self.set_rmax(rmax=np.max(np.sum(self._info['table'], axis=0)))
self.set_radii_angle(angle=self.radii_angle)
def legend(self, loc='lower left', **kwargs):
"""
Sets the legend location and its properties.
The location codes are
'best' : 0,
'upper right' : 1,
'upper left' : 2,
'lower left' : 3,
'lower right' : 4,
'right' : 5,
'center left' : 6,
'center right' : 7,
'lower center' : 8,
'upper center' : 9,
'center' : 10,
If none of these are suitable, loc can be a 2-tuple giving x,y
in axes coords, i.e.,
loc = (0, 1) is the top left
loc = (0.5, 0.5) is center, center
and so on. The following kwargs are supported:
isaxes=True # whether this is an axes legend
prop = FontProperties(size='smaller') # the font property
pad = 0.2 # the fractional whitespace inside the legend border
shadow # if True, draw a shadow behind legend
labelsep = 0.005 # the vertical space between the legend entries
handlelen = 0.05 # the length of the legend lines
handletextsep = 0.02 # the space between the legend line and legend text
axespad = 0.02 # the border between the axes and legend edge
"""
def get_handles():
handles = list()
for p in self.patches_list:
if isinstance(p, matplotlib.patches.Polygon) or \
isinstance(p, matplotlib.patches.Rectangle):
color = p.get_facecolor()
elif isinstance(p, matplotlib.lines.Line2D):
color = p.get_color()
else:
raise AttributeError("Can't handle patches")
handles.append(Rectangle((0, 0), 0.2, 0.2,
facecolor=color, edgecolor='black'))
return handles
def get_labels():
labels = np.copy(self._info['bins'])
labels = ["[%.1f : %0.1f[" %(labels[i], labels[i+1]) \
for i in range(len(labels)-1)]
return labels
null = kwargs.pop('labels', None)
null = kwargs.pop('handles', None)
handles = get_handles()
labels = get_labels()
self.legend_ = matplotlib.legend.Legend(self, handles, labels,
loc, **kwargs)
return self.legend_
def _init_plot(self, dir, var, **kwargs):
"""
Internal method used by all plotting commands
"""
#self.cla()
null = kwargs.pop('zorder', None)
#Init of the bins array if not set
bins = kwargs.pop('bins', None)
if bins is None:
bins = np.linspace(np.min(var), np.max(var), 6)
if isinstance(bins, int):
bins = np.linspace(np.min(var), np.max(var), bins)
bins = np.asarray(bins)
nbins = len(bins)
#Number of sectors
nsector = kwargs.pop('nsector', None)
if nsector is None:
nsector = 16
#Sets the colors table based on the colormap or the "colors" argument
colors = kwargs.pop('colors', None)
cmap = kwargs.pop('cmap', None)
if colors is not None:
if isinstance(colors, str):
colors = [colors]*nbins
if isinstance(colors, (tuple, list)):
if len(colors) != nbins:
raise ValueError("colors and bins must have same length")
else:
if cmap is None:
cmap = cm.jet
colors = self._colors(cmap, nbins)
#Building the angles list
angles = np.arange(0, -2*np.pi, -2*np.pi/nsector) + np.pi/2
normed = kwargs.pop('normed', False)
blowto = kwargs.pop('blowto', False)
#Set the global information dictionary
self._info['dir'], self._info['bins'], self._info['table'] = histogram(dir, var, bins, nsector, normed, blowto)
return bins, nbins, nsector, colors, angles, kwargs
def contour(self, dir, var, **kwargs):
"""
Plot a windrose in linear mode. For each var bin, a line will be
drawn on the axes, a segment between each sector (center to center).
Each line can be formatted (color, width, ...) like with the standard
pylab plot command.
Mandatory:
* dir : 1D array - directions the wind blows from, North centred
* var : 1D array - values of the variable to compute. Typically the wind
speeds
Optional:
* nsector: integer - number of sectors used to compute the windrose
table. If not set, nsectors=16, then each sector will be 360/16=22.5°,
and the resulting computed table will be aligned with the cardinals
points.
* bins : 1D array or integer- number of bins, or a sequence of
bins variable. If not set, bins=6, then
bins=linspace(min(var), max(var), 6)
* blowto : bool. If True, the windrose will be pi rotated,
to show where the wind blows to (useful for a pollutant rose).
* colors : string or tuple - one string color ('k' or 'black'), in this
case all bins will be plotted in this color; a tuple of matplotlib
color args (string, float, rgb, etc), different levels will be plotted
in different colors in the order specified.
* cmap : a cm Colormap instance from matplotlib.cm.
- if cmap == None and colors == None, a default Colormap is used.
others kwargs : see help(pylab.plot)
"""
bins, nbins, nsector, colors, angles, kwargs = self._init_plot(dir, var,
**kwargs)
#closing lines
angles = np.hstack((angles, angles[-1]-2*np.pi/nsector))
vals = np.hstack((self._info['table'],
np.reshape(self._info['table'][:,0],
(self._info['table'].shape[0], 1))))
offset = 0
for i in range(nbins):
val = vals[i,:] + offset
offset += vals[i, :]
zorder = ZBASE + nbins - i
patch = self.plot(angles, val, color=colors[i], zorder=zorder,
**kwargs)
self.patches_list.extend(patch)
self._update()
def contourf(self, dir, var, **kwargs):
"""
Plot a windrose in filled mode. For each var bin, a line will be
drawn on the axes, a segment between each sector (center to center).
Each line can be formatted (color, width, ...) like with the standard
pylab plot command.
Mandatory:
* dir : 1D array - directions the wind blows from, North centred
* var : 1D array - values of the variable to compute. Typically the wind
speeds
Optional:
* nsector: integer - number of sectors used to compute the windrose
table. If not set, nsectors=16, then each sector will be 360/16=22.5°,
and the resulting computed table will be aligned with the cardinals
points.
* bins : 1D array or integer- number of bins, or a sequence of
bins variable. If not set, bins=6, then
bins=linspace(min(var), max(var), 6)
* blowto : bool. If True, the windrose will be pi rotated,
to show where the wind blows to (useful for a pollutant rose).
* colors : string or tuple - one string color ('k' or 'black'), in this
case all bins will be plotted in this color; a tuple of matplotlib
color args (string, float, rgb, etc), different levels will be plotted
in different colors in the order specified.
* cmap : a cm Colormap instance from matplotlib.cm.
- if cmap == None and colors == None, a default Colormap is used.
others kwargs : see help(pylab.plot)
"""
bins, nbins, nsector, colors, angles, kwargs = self._init_plot(dir, var,
**kwargs)
null = kwargs.pop('facecolor', None)
null = kwargs.pop('edgecolor', None)
#closing lines
angles = np.hstack((angles, angles[-1]-2*np.pi/nsector))
vals = np.hstack((self._info['table'],
np.reshape(self._info['table'][:,0],
(self._info['table'].shape[0], 1))))
offset = 0
for i in range(nbins):
val = vals[i,:] + offset
offset += vals[i, :]
zorder = ZBASE + nbins - i
xs, ys = poly_between(angles, 0, val)
patch = self.fill(xs, ys, facecolor=colors[i],
edgecolor=colors[i], zorder=zorder, **kwargs)
self.patches_list.extend(patch)
def bar(self, dir, var, **kwargs):
"""
Plot a windrose in bar mode. For each var bin and for each sector,
a colored bar will be drawn on the axes.
Mandatory:
* dir : 1D array - directions the wind blows from, North centred
* var : 1D array - values of the variable to compute. Typically the wind
speeds
Optional:
* nsector: integer - number of sectors used to compute the windrose
table. If not set, nsectors=16, then each sector will be 360/16=22.5°,
and the resulting computed table will be aligned with the cardinals
points.
* bins : 1D array or integer- number of bins, or a sequence of
bins variable. If not set, bins=6 between min(var) and max(var).
* blowto : bool. If True, the windrose will be pi rotated,
to show where the wind blows to (useful for a pollutant rose).
* colors : string or tuple - one string color ('k' or 'black'), in this
case all bins will be plotted in this color; a tuple of matplotlib
color args (string, float, rgb, etc), different levels will be plotted
in different colors in the order specified.
* cmap : a cm Colormap instance from matplotlib.cm.
- if cmap == None and colors == None, a default Colormap is used.
edgecolor : string - The string color each edge bar will be plotted.
Default : no edgecolor
* opening : float - between 0.0 and 1.0, to control the space between
each sector (1.0 for no space)
"""
bins, nbins, nsector, colors, angles, kwargs = self._init_plot(dir, var,
**kwargs)
null = kwargs.pop('facecolor', None)
edgecolor = kwargs.pop('edgecolor', None)
if edgecolor is not None:
if not isinstance(edgecolor, str):
raise ValueError('edgecolor must be a string color')
opening = kwargs.pop('opening', None)
if opening is None:
opening = 0.8
dtheta = 2*np.pi/nsector
opening = dtheta*opening
for j in range(nsector):
offset = 0
for i in range(nbins):
if i > 0:
offset += self._info['table'][i-1, j]
val = self._info['table'][i, j]
zorder = ZBASE + nbins - i
patch = Rectangle((angles[j]-opening/2, offset), opening, val,
facecolor=colors[i], edgecolor=edgecolor, zorder=zorder,
**kwargs)
self.add_patch(patch)
if j == 0:
self.patches_list.append(patch)
self._update()
def box(self, dir, var, **kwargs):
"""
Plot a windrose in proportional bar mode. For each var bin and for each
sector, a colored bar will be drawn on the axes.
Mandatory:
* dir : 1D array - directions the wind blows from, North centred
* var : 1D array - values of the variable to compute. Typically the wind
speeds
Optional:
* nsector: integer - number of sectors used to compute the windrose
table. If not set, nsectors=16, then each sector will be 360/16=22.5°,
and the resulting computed table will be aligned with the cardinals
points.
* bins : 1D array or integer- number of bins, or a sequence of
bins variable. If not set, bins=6 between min(var) and max(var).
* blowto : bool. If True, the windrose will be pi rotated,
to show where the wind blows to (useful for a pollutant rose).
* colors : string or tuple - one string color ('k' or 'black'), in this
case all bins will be plotted in this color; a tuple of matplotlib
color args (string, float, rgb, etc), different levels will be plotted
in different colors in the order specified.
* cmap : a cm Colormap instance from matplotlib.cm.
- if cmap == None and colors == None, a default Colormap is used.
edgecolor : string - The string color each edge bar will be plotted.
Default : no edgecolor
"""
bins, nbins, nsector, colors, angles, kwargs = self._init_plot(dir, var,
**kwargs)
null = kwargs.pop('facecolor', None)
edgecolor = kwargs.pop('edgecolor', None)
if edgecolor is not None:
if not isinstance(edgecolor, str):
raise ValueError('edgecolor must be a string color')
opening = np.linspace(0.0, np.pi/16, nbins)
for j in range(nsector):
offset = 0
for i in range(nbins):
if i > 0:
offset += self._info['table'][i-1, j]
val = self._info['table'][i, j]
zorder = ZBASE + nbins - i
patch = Rectangle((angles[j]-opening[i]/2, offset), opening[i],
val, facecolor=colors[i], edgecolor=edgecolor,
zorder=zorder, **kwargs)
self.add_patch(patch)
if j == 0:
self.patches_list.append(patch)
self._update()
def histogram(dir, var, bins, nsector, normed=False, blowto=False):
"""
Returns an array where, for each sector of wind
(centred on the north), we have the number of times the wind comes with a
particular var (speed, pollutant concentration, ...).
* dir : 1D array - directions the wind blows from, North centred
* var : 1D array - values of the variable to compute. Typically the wind
speeds
* bins : list - list of var category against we're going to compute the table
* nsector : integer - number of sectors
* normed : boolean - The resulting table is normed in percent or not.
* blowto : boolean - Normally a windrose is computed with the directions
the wind blows from. If True, the table will be reversed (useful for a
pollutant rose).
"""
if len(var) != len(dir):
raise ValueError, "var and dir must have same length"
angle = 360./nsector
dir_bins = np.arange(-angle/2 ,360.+angle, angle, dtype=np.float)
dir_edges = dir_bins.tolist()
dir_edges.pop(-1)
dir_edges[0] = dir_edges.pop(-1)
dir_bins[0] = 0.
var_bins = bins.tolist()
var_bins.append(np.inf)
if blowto:
dir = dir + 180.
dir[dir>=360.] = dir[dir>=360.] - 360
table = histogram2d(x=var, y=dir, bins=[var_bins, dir_bins],
normed=False)[0]
# add the last value to the first to have the table of North winds
table[:,0] = table[:,0] + table[:,-1]
# and remove the last col
table = table[:, :-1]
if normed:
table = table*100/table.sum()
return dir_edges, var_bins, table
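# Illustrative usage sketch with synthetic data (variable names are made up):
#
#     wd = np.random.random(500) * 360            # directions in degrees
#     ws = np.random.random(500) * 6              # wind speeds
#     edges, speed_bins, table = histogram(wd, ws, np.linspace(0, 6, 6),
#                                          nsector=16, normed=True)
#     table.sum()                                 # ~100.0 when normed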
def wrcontour(dir, var, **kwargs):
fig = plt.figure()
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect)
fig.add_axes(ax)
ax.contour(dir, var, **kwargs)
l = ax.legend(axespad=-0.10)
plt.setp(l.get_texts(), fontsize=8)
plt.draw()
plt.show()
return ax
def wrcontourf(dir, var, **kwargs):
fig = plt.figure()
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect)
fig.add_axes(ax)
ax.contourf(dir, var, **kwargs)
l = ax.legend(axespad=-0.10)
plt.setp(l.get_texts(), fontsize=8)
plt.draw()
plt.show()
return ax
def wrbox(dir, var, **kwargs):
fig = plt.figure()
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect)
fig.add_axes(ax)
ax.box(dir, var, **kwargs)
l = ax.legend(axespad=-0.10)
plt.setp(l.get_texts(), fontsize=8)
plt.draw()
plt.show()
return ax
def wrbar(dir, var, **kwargs):
fig = plt.figure()
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect)
fig.add_axes(ax)
ax.bar(dir, var, **kwargs)
l = ax.legend(axespad=-0.10)
plt.setp(l.get_texts(), fontsize=8)
plt.draw()
plt.show()
return ax
def clean(dir, var):
"""
Remove masked values in the two arrays, where if a direction data is masked,
the var data will also be removed in the cleaning process (and vice-versa)
"""
dirmask = dir.mask==False
varmask = var.mask==False
ind = dirmask*varmask
return dir[ind], var[ind]
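# Illustrative usage sketch with masked arrays (values are made up):
#
#     import numpy.ma as ma
#     d = ma.masked_values([45., 90., -999.], -999.)
#     v = ma.masked_values([2., -999., 5.], -999.)
#     d2, v2 = clean(d, v)   # keeps index 0 only, where both are unmasked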
if __name__=='__main__':
from pylab import figure, show, setp, random, grid, draw
vv=random(500)*6
dv=random(500)*360
fig = figure(figsize=(8, 8), dpi=80, facecolor='w', edgecolor='w')
rect = [0.1, 0.1, 0.8, 0.8]
ax = WindroseAxes(fig, rect, axisbg='w')
fig.add_axes(ax)
# ax.contourf(dv, vv, bins=np.arange(0,8,1), cmap=cm.hot)
# ax.contour(dv, vv, bins=np.arange(0,8,1), colors='k')
# ax.bar(dv, vv, normed=True, opening=0.8, edgecolor='white')
ax.box(dv, vv, normed=True)
l = ax.legend(axespad=-0.10)
setp(l.get_texts(), fontsize=8)
draw()
#print ax._info
show()
| agpl-3.0 |
vprime/puuuu | env/lib/python2.7/site-packages/django/db/models/sql/subqueries.py | 98 | 11301 | """
Query subclasses which provide extra functionality beyond simple data retrieval.
"""
from django.conf import settings
from django.core.exceptions import FieldError
from django.db import connections
from django.db.models.constants import LOOKUP_SEP
from django.db.models.fields import DateField, DateTimeField, FieldDoesNotExist
from django.db.models.sql.constants import *
from django.db.models.sql.datastructures import Date, DateTime
from django.db.models.sql.query import Query
from django.db.models.sql.where import AND, Constraint
from django.utils.functional import Promise
from django.utils.encoding import force_text
from django.utils import six
from django.utils import timezone
__all__ = ['DeleteQuery', 'UpdateQuery', 'InsertQuery', 'DateQuery',
'DateTimeQuery', 'AggregateQuery']
class DeleteQuery(Query):
"""
Delete queries are done through this class, since they are more constrained
than general queries.
"""
compiler = 'SQLDeleteCompiler'
def do_query(self, table, where, using):
self.tables = [table]
self.where = where
self.get_compiler(using).execute_sql(None)
def delete_batch(self, pk_list, using, field=None):
"""
Set up and execute delete queries for all the objects in pk_list.
More than one physical query may be executed if there are a
lot of values in pk_list.
"""
if not field:
field = self.get_meta().pk
for offset in range(0, len(pk_list), GET_ITERATOR_CHUNK_SIZE):
where = self.where_class()
where.add((Constraint(None, field.column, field), 'in',
pk_list[offset:offset + GET_ITERATOR_CHUNK_SIZE]), AND)
self.do_query(self.get_meta().db_table, where, using=using)
def delete_qs(self, query, using):
"""
Delete the queryset in one SQL query (if possible). For simple queries
this is done by copying the query.query.where to self.query, for
complex queries by using subquery.
"""
innerq = query.query
# Make sure the inner query has at least one table in use.
innerq.get_initial_alias()
# The same for our new query.
self.get_initial_alias()
innerq_used_tables = [t for t in innerq.tables
if innerq.alias_refcount[t]]
if ((not innerq_used_tables or innerq_used_tables == self.tables)
and not len(innerq.having)):
# There is only the base table in use in the query, and there are
# no aggregate filtering going on.
self.where = innerq.where
else:
pk = query.model._meta.pk
if not connections[using].features.update_can_self_select:
# We can't do the delete using subquery.
values = list(query.values_list('pk', flat=True))
if not values:
return
self.delete_batch(values, using)
return
else:
innerq.clear_select_clause()
innerq.select = [SelectInfo((self.get_initial_alias(), pk.column), None)]
values = innerq
where = self.where_class()
where.add((Constraint(None, pk.column, pk), 'in', values), AND)
self.where = where
self.get_compiler(using).execute_sql(None)
class UpdateQuery(Query):
"""
Represents an "update" SQL query.
"""
compiler = 'SQLUpdateCompiler'
def __init__(self, *args, **kwargs):
super(UpdateQuery, self).__init__(*args, **kwargs)
self._setup_query()
def _setup_query(self):
"""
Runs on initialization and after cloning. Any attributes that would
normally be set in __init__ should go in here, instead, so that they
are also set up after a clone() call.
"""
self.values = []
self.related_ids = None
if not hasattr(self, 'related_updates'):
self.related_updates = {}
def clone(self, klass=None, **kwargs):
return super(UpdateQuery, self).clone(klass,
related_updates=self.related_updates.copy(), **kwargs)
def update_batch(self, pk_list, values, using):
pk_field = self.get_meta().pk
self.add_update_values(values)
for offset in range(0, len(pk_list), GET_ITERATOR_CHUNK_SIZE):
self.where = self.where_class()
self.where.add((Constraint(None, pk_field.column, pk_field), 'in',
pk_list[offset:offset + GET_ITERATOR_CHUNK_SIZE]),
AND)
self.get_compiler(using).execute_sql(None)
def add_update_values(self, values):
"""
Convert a dictionary of field name to value mappings into an update
query. This is the entry point for the public update() method on
querysets.
"""
values_seq = []
for name, val in six.iteritems(values):
field, model, direct, m2m = self.get_meta().get_field_by_name(name)
if not direct or m2m:
raise FieldError('Cannot update model field %r (only non-relations and foreign keys permitted).' % field)
if model:
self.add_related_update(model, field, val)
continue
values_seq.append((field, model, val))
return self.add_update_fields(values_seq)
def add_update_fields(self, values_seq):
"""
Turn a sequence of (field, model, value) triples into an update query.
Used by add_update_values() as well as the "fast" update path when
saving models.
"""
# Check that no Promise object passes to the query. Refs #10498.
values_seq = [(value[0], value[1], force_text(value[2]))
if isinstance(value[2], Promise) else value
for value in values_seq]
self.values.extend(values_seq)
def add_related_update(self, model, field, value):
"""
Adds (name, value) to an update query for an ancestor model.
Updates are coalesced so that we only run one update query per ancestor.
"""
try:
self.related_updates[model].append((field, None, value))
except KeyError:
self.related_updates[model] = [(field, None, value)]
def get_related_updates(self):
"""
Returns a list of query objects: one for each update required to an
ancestor model. Each query will have the same filtering conditions as
the current query but will only update a single table.
"""
if not self.related_updates:
return []
result = []
for model, values in six.iteritems(self.related_updates):
query = UpdateQuery(model)
query.values = values
if self.related_ids is not None:
query.add_filter(('pk__in', self.related_ids))
result.append(query)
return result
class InsertQuery(Query):
compiler = 'SQLInsertCompiler'
def __init__(self, *args, **kwargs):
super(InsertQuery, self).__init__(*args, **kwargs)
self.fields = []
self.objs = []
def clone(self, klass=None, **kwargs):
extras = {
'fields': self.fields[:],
'objs': self.objs[:],
'raw': self.raw,
}
extras.update(kwargs)
return super(InsertQuery, self).clone(klass, **extras)
def insert_values(self, fields, objs, raw=False):
"""
Set up the insert query from the 'insert_values' dictionary. The
dictionary gives the model field names and their target values.
If 'raw_values' is True, the values in the 'insert_values' dictionary
are inserted directly into the query, rather than passed as SQL
parameters. This provides a way to insert NULL and DEFAULT keywords
into the query, for example.
"""
self.fields = fields
# Check that no Promise object reaches the DB. Refs #10498.
for field in fields:
for obj in objs:
value = getattr(obj, field.attname)
if isinstance(value, Promise):
setattr(obj, field.attname, force_text(value))
self.objs = objs
self.raw = raw
class DateQuery(Query):
"""
A DateQuery is a normal query, except that it specifically selects a single
date field. This requires some special handling when converting the results
back to Python objects, so we put it in a separate class.
"""
compiler = 'SQLDateCompiler'
def add_select(self, field_name, lookup_type, order='ASC'):
"""
Converts the query into an extraction query.
"""
try:
result = self.setup_joins(
field_name.split(LOOKUP_SEP),
self.get_meta(),
self.get_initial_alias(),
)
except FieldError:
raise FieldDoesNotExist("%s has no field named '%s'" % (
self.get_meta().object_name, field_name
))
field = result[0]
self._check_field(field) # overridden in DateTimeQuery
alias = result[3][-1]
select = self._get_select((alias, field.column), lookup_type)
self.clear_select_clause()
self.select = [SelectInfo(select, None)]
self.distinct = True
self.order_by = [1] if order == 'ASC' else [-1]
if field.null:
self.add_filter(("%s__isnull" % field_name, False))
def _check_field(self, field):
assert isinstance(field, DateField), \
"%r isn't a DateField." % field.name
if settings.USE_TZ:
assert not isinstance(field, DateTimeField), \
"%r is a DateTimeField, not a DateField." % field.name
def _get_select(self, col, lookup_type):
return Date(col, lookup_type)
class DateTimeQuery(DateQuery):
"""
A DateTimeQuery is like a DateQuery but for a datetime field. If time zone
support is active, the tzinfo attribute contains the time zone to use for
converting the values before truncating them. Otherwise it's set to None.
"""
compiler = 'SQLDateTimeCompiler'
def clone(self, klass=None, memo=None, **kwargs):
if 'tzinfo' not in kwargs and hasattr(self, 'tzinfo'):
kwargs['tzinfo'] = self.tzinfo
return super(DateTimeQuery, self).clone(klass, memo, **kwargs)
def _check_field(self, field):
assert isinstance(field, DateTimeField), \
"%r isn't a DateTimeField." % field.name
def _get_select(self, col, lookup_type):
if self.tzinfo is None:
tzname = None
else:
tzname = timezone._get_timezone_name(self.tzinfo)
return DateTime(col, lookup_type, tzname)
class AggregateQuery(Query):
"""
An AggregateQuery takes another query as a parameter to the FROM
clause and only selects the elements in the provided list.
"""
compiler = 'SQLAggregateCompiler'
def add_subquery(self, query, using):
self.subquery, self.sub_params = query.get_compiler(using).as_sql(with_col_aliases=True)
| mit |
AndrewPeelMV/Blender2.78c | 2.78/scripts/startup/bl_ui/properties_data_metaball.py | 5 | 4123 | # ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import bpy
from bpy.types import Panel
from rna_prop_ui import PropertyPanel
class DataButtonsPanel:
bl_space_type = 'PROPERTIES'
bl_region_type = 'WINDOW'
bl_context = "data"
@classmethod
def poll(cls, context):
return context.meta_ball
class DATA_PT_context_metaball(DataButtonsPanel, Panel):
bl_label = ""
bl_options = {'HIDE_HEADER'}
def draw(self, context):
layout = self.layout
ob = context.object
mball = context.meta_ball
space = context.space_data
if ob:
layout.template_ID(ob, "data")
elif mball:
layout.template_ID(space, "pin_id")
class DATA_PT_metaball(DataButtonsPanel, Panel):
bl_label = "Metaball"
def draw(self, context):
layout = self.layout
mball = context.meta_ball
split = layout.split()
col = split.column()
col.label(text="Resolution:")
sub = col.column(align=True)
sub.prop(mball, "resolution", text="View")
sub.prop(mball, "render_resolution", text="Render")
col = split.column()
col.label(text="Settings:")
col.prop(mball, "threshold", text="Threshold")
layout.label(text="Update:")
layout.prop(mball, "update_method", expand=True)
class DATA_PT_mball_texture_space(DataButtonsPanel, Panel):
bl_label = "Texture Space"
bl_options = {'DEFAULT_CLOSED'}
COMPAT_ENGINES = {'BLENDER_RENDER', 'BLENDER_GAME'}
def draw(self, context):
layout = self.layout
mball = context.meta_ball
layout.prop(mball, "use_auto_texspace")
row = layout.row()
row.column().prop(mball, "texspace_location", text="Location")
row.column().prop(mball, "texspace_size", text="Size")
class DATA_PT_metaball_element(DataButtonsPanel, Panel):
bl_label = "Active Element"
@classmethod
def poll(cls, context):
return (context.meta_ball and context.meta_ball.elements.active)
def draw(self, context):
layout = self.layout
metaelem = context.meta_ball.elements.active
layout.prop(metaelem, "type")
split = layout.split()
col = split.column(align=True)
col.label(text="Settings:")
col.prop(metaelem, "stiffness", text="Stiffness")
col.prop(metaelem, "use_negative", text="Negative")
col.prop(metaelem, "hide", text="Hide")
col = split.column(align=True)
if metaelem.type in {'CUBE', 'ELLIPSOID'}:
col.label(text="Size:")
col.prop(metaelem, "size_x", text="X")
col.prop(metaelem, "size_y", text="Y")
col.prop(metaelem, "size_z", text="Z")
elif metaelem.type == 'TUBE':
col.label(text="Size:")
col.prop(metaelem, "size_x", text="X")
elif metaelem.type == 'PLANE':
col.label(text="Size:")
col.prop(metaelem, "size_x", text="X")
col.prop(metaelem, "size_y", text="Y")
class DATA_PT_custom_props_metaball(DataButtonsPanel, PropertyPanel, Panel):
COMPAT_ENGINES = {'BLENDER_RENDER', 'BLENDER_GAME'}
_context_path = "object.data"
_property_type = bpy.types.MetaBall
if __name__ == "__main__": # only for live edit.
bpy.utils.register_module(__name__)
| gpl-2.0 |